This is a family of notations related to the big O notation. A good mnemonic summary of all notations would be:
Modulus bounded above, possibly multiplied by a constant. f(x) ∈ O(g(x)) is defined as: there exist constants c > 0 and N such that |f(x)| ≤ c |g(x)| for all x > N.
E.g.:
  • x ∈ O(x^2). For c = 1, N = 1 is enough. Otherwise, any c > 0 will do: the bottom line (c x^2) will always catch up to the top one (x) eventually.
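To make the definition a bit more concrete, here is a minimal C sketch that scans the defining inequality |f(x)| ≤ c |g(x)| over a finite range of x > N. The concrete choice of f(x) = x, g(x) = x^2 and of the constants is just an illustrative assumption, and a finite scan of course only illustrates the definition, it does not prove the bound:

```c
#include <math.h>
#include <stdio.h>

/* Illustrative example functions; the concrete choice f(x) = x, g(x) = x^2
   is just an assumption to have something to plug in. */
static double f(double x) { return x; }
static double g(double x) { return x * x; }

int main(void) {
    double c = 1.0; /* the multiplicative constant from the definition */
    double N = 1.0; /* the threshold after which the bound must hold */
    int holds = 1;
    /* Scan a finite range of x > N and check |f(x)| <= c * |g(x)|. */
    for (double x = N; x <= 1e6; x *= 2.0) {
        if (fabs(f(x)) > c * fabs(g(x))) {
            printf("bound violated at x = %g\n", x);
            holds = 0;
        }
    }
    printf("|f(x)| <= %g * |g(x)| for the sampled x >= %g: %s\n",
           c, N, holds ? "holds" : "fails");
    return 0;
}
```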
A stronger version of the big O notation: it basically means that the ratio f(x)/g(x) goes to zero. In big O notation, the ratio only needs to stay bounded, it does not need to go to zero.
So in informal terms, big O notation means ≤, and little-o notation means <.
E.g.:
  • x ∈ O(x), but x ∉ o(x): the ratio x/x is the constant K = 1, which does not tend to zero.
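A tiny C sketch of that ratio intuition (again, the concrete functions are only an illustrative assumption): x/x stays at a constant, while x/x^2 tends to zero as x grows.

```c
#include <stdio.h>

int main(void) {
    /* Print the ratios for growing x:
       x / x   stays at the constant 1, so x is O(x) but not o(x);
       x / x^2 tends to 0, so x is o(x^2) (and hence also O(x^2)). */
    for (double x = 10.0; x <= 1e6; x *= 10.0) {
        printf("x = %10.0f   x/x = %g   x/x^2 = %g\n", x, x / x, x / (x * x));
    }
    return 0;
}
```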
In intuitive terms, the class of primitive recursive functions consists of all integer functions, possibly with multiple input arguments, that can be written only with a sequence of:
  • variable assignments
  • addition and subtraction
  • integer comparisons and if/else
  • for loops of the form:
for (i = 0; i < n; i++)
and such that n does not change inside the loop body, i.e. no while loops with arbitrary conditions.
n does not have to be a constant, it may come from previous calculations. But it must not change inside the loop body.
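For concreteness, here is a minimal C sketch of functions built only from those ingredients (the names mul and pow_ are just illustrative): every loop bound is fixed before the loop starts and never modified inside the body.

```c
#include <stdio.h>

/* Multiplication written only with assignments, addition and a for loop
   whose bound is fixed before the loop starts and never changes inside it. */
unsigned mul(unsigned a, unsigned b) {
    unsigned result = 0;
    for (unsigned i = 0; i < b; i++) {
        result = result + a;
    }
    return result;
}

/* Exponentiation as another bounded loop on top of mul.
   The bound e may come from previous calculations, but it is not modified
   inside the loop body: this is the primitive recursive style. */
unsigned pow_(unsigned base, unsigned e) {
    unsigned result = 1;
    for (unsigned i = 0; i < e; i++) {
        result = mul(result, base);
    }
    return result;
}

int main(void) {
    printf("3 * 4  = %u\n", mul(3, 4));
    printf("2 ^ 10 = %u\n", pow_(2, 10));
    return 0;
}
```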
Primitive recursive functions basically include every integer function that comes up in practice. Primitive recursive functions can have huge complexity, and the class strictly contains EXPTIME. As such, functions that are not primitive recursive mostly only come up in foundation of mathematics contexts.
The cool thing about primitive recursive functions is that the number of loop iterations is always bounded, so we are certain that they terminate and are therefore computable.
This also means that there are necessarily functions which are not primitive recursive, as we know that there must exist uncomputable functions, e.g. the busy beaver function.
Adding unbounded while loops of course enables us to simulate arbitrary Turing machines, and therefore enlarges the class of functions we can express.
More finely, there are total recursive functions that are not primitive recursive, most famously the Ackermann function.
To get an intuition for it, see the sample computation at: en.wikipedia.org/w/index.php?title=Ackermann_function&oldid=1170238965#TRS,_based_on_2-ary_function. From this, we immediately get the intuition that these functions are recursive somehow.
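As a minimal sketch, the usual 2-ary recursion transcribed directly into C, kept to tiny inputs since the values and the number of recursive calls explode very quickly:

```c
#include <stdio.h>

/* Direct transcription of the 2-ary Ackermann recursion. */
unsigned ack(unsigned m, unsigned n) {
    if (m == 0) return n + 1;
    if (n == 0) return ack(m - 1, 1);
    return ack(m - 1, ack(m, n - 1));
}

int main(void) {
    /* Keep the inputs tiny: already ack(4, 1) requires a gigantic
       number of recursive calls. */
    for (unsigned m = 0; m <= 3; m++) {
        for (unsigned n = 0; n <= 3; n++) {
            printf("ack(%u, %u) = %u\n", m, n, ack(m, n));
        }
    }
    return 0;
}
```

The hand-wavy intuition is that each fixed first argument behaves roughly like one more level of nesting of bounded loops (successor, addition-like, multiplication-like, exponentiation, ...), so letting that argument itself be an input exceeds any fixed nesting of bounded for loops.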
A problem that is both in NP and NP-hard.
Interesting because of the Cook-Levin theorem: if even a single NP-complete problem were in P, then all NP-complete problems would also be in P!
We all know the answer for this: either false or independent.
A problem such that all problems in NP can be reduced to it in polynomial time.
This is the most interesting class of problems for BQP, as these problems haven't been proven to be either:
  • in P: that would be boring on a quantum computer
  • NP-complete: that would likely be impossible to solve efficiently on a quantum computer
Heck, we know almost nothing yet about how this class relates to non-quantum classes!
  • conjectured not to intersect with NP-complete, because if it did, all NP-complete problems could be solved efficiently on quantum computers, and no such algorithm has been found so far as of 2020.
  • conjectured to be larger than P, but we don't have a single problem proven to be in it and not in P:
    • it is believed that the NP-complete ones can't be solved efficiently on a quantum computer
    • if one of the candidate problems were proven to be neither NP-complete nor in P, it would imply P != NP
  • we just don't know if it is even contained inside NP!