Computational problem where the solution is either yes or no.
When there are more than two possible answers, it is called a function problem.
Decision problems come up often in computer science because many important problems can be stated in the form "decide if a given string belongs to a given formal language".
The canonical undecidable problem.
A Turing machine decider is a program that decides if one or more Turing machines halt or not.
Of course, because of what we know about the halting problem, there cannot exist a single decider that decides all Turing machines.
But there are deciders that can decide large classes of Turing machines. E.g. The Busy Beaver Challenge has a clearly published set of deciders which decide a large part of BB(5). Their proposed deciders are listed at: discuss.bbchallenge.org/c/deciders/5 and the ones actually applied at: bbchallenge.org.
Many (all/most?) deciders are based on simulating machines with arbitrary cutoff hyperparameters, e.g. the space/time cutoffs of a Turing machine cycler decider.
The simplest and most obvious example is the Turing machine cycler decider.
Turing machine regex tape notation is Ciro Santilli's made-up name for the notation used e.g. in Busy Beaver Challenge-related write-ups. Most of it is just standard regular expression notation, with a few differences:
  • a special edge marker denotes the right or left edge of the (zero-initialized) tape. It is often omitted, as we just assume it is present on both sides of every regex
  • A, B, C, D and E denote the current machine state. This is especially common notation in the context of the BB(5) problem
  • < and > next to the state indicate whether the head is on top of the element to its left or to its right. E.g.:
    11 (01)^n <A 00 (0011)^{n+2}
    indicates that the machine is in state A with the head on top of the element to its left, i.e. the last 1 of the (01)^n sequence.
This notation is very useful, as it helps compress long repeated sequences of Turing machine tape and extract higher level patterns from them, which is how you go about understanding a Turing machine in order to apply Turing machine acceleration.
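As a toy illustration (this helper is made up for this section, it is not part of any actual decider), here is a tiny Python snippet that expands the example pattern above into a concrete tape for a given n:

```python
# Expand the pattern "11 (01)^n <A 00 (0011)^{n+2}" into a concrete tape
# string for a given n, marking the head position and state with [A:x].
def expand(n):
    left = "11" + "01" * n            # 11 (01)^n
    right = "00" + "0011" * (n + 2)   # 00 (0011)^{n+2}
    # "<A" means: machine in state A, head on top of the symbol to its left,
    # i.e. the last character of `left`.
    return left[:-1] + "[A:" + left[-1] + "]" + right

print(expand(2))  # 11010[A:1]000011001100110011
```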
These are very simple: they just check for an exact repetition of the machine configuration (state, tape and head position), which obviously implies that the machine will run forever.
Unfortunately, cyclers may need to run through an initial setup phase before reaching the start of the cycle, which is not very elegant.
Also, we have no way of knowing in advance the length of the initial setup nor of the actual cycle, so we just need an arbitrary cutoff value.
And unfortunately, this can lead to misses, e.g. Skelet machine #1, a 5-state machine, has a (translated) cycle that starts at around 50-200M steps, and takes about 8 billion steps to repeat.
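Here is a minimal sketch of such a cycler decider in Python (my own illustration with a made-up machine encoding, not the Busy Beaver Challenge's actual implementation):

```python
def cycler_decider(table, max_steps=10_000):
    """table maps (state, symbol) -> (write, move, new_state); a missing
    entry means the machine halts."""
    tape = {}            # sparse tape, every unwritten cell holds 0
    state, pos = "A", 0
    seen = set()
    for _ in range(max_steps):   # the arbitrary cutoff hyperparameter
        # Canonical configuration: state, head position, non-zero cells.
        config = (state, pos, tuple(sorted((k, v) for k, v in tape.items() if v)))
        if config in seen:
            return "runs forever (cycler)"
        seen.add(config)
        rule = table.get((state, tape.get(pos, 0)))
        if rule is None:
            return "halts"
        write, move, state = rule
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "undecided (cutoff reached)"

# A trivial 2-state machine that just bounces between two cells forever:
bouncer = {("A", 0): (0, "R", "B"), ("B", 0): (0, "L", "A")}
print(cycler_decider(bouncer))  # runs forever (cycler)
```

Note how both the repetition check and the final verdict depend on the arbitrary max_steps cutoff discussed above.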
Like a cycler, but each repetition of the cycle happens translated (shifted along the tape) by some offset.
To prove that it runs forever, we check the repetition over the region of tape that the head actually visited: if the machine only moved N squares (say, to the left) between the two repeating configurations, then only those N squares need to match.
The busy beaver game consists in finding, for a given n, the Turing machine with n states that writes the largest possible number of 1's on a tape initially filled with 0's. In other words, it consists in computing the busy beaver function BB(n) for a given n.
There are only finitely many Turing machines with n states, so we are certain that such a maximum exists. Computing the busy beaver function for a given n then comes down to solving the halting problem for every single machine with n states.
Some variant definitions define it as the number of time steps taken by the machine instead. Wikipedia talks about their relationship, but no patience right now.
The Busy Beaver problem is cool because it puts the halting problem in a more precise numerical light, e.g. via the Busy beaver scale.
The step busy beaver is a variant of the busy beaver game that counts the number of steps taken before halting, instead of the number of 1's written to the tape.
As of 2023, it appears that for BB(5) the same machine will win on both counts. But this is not always necessarily the case.
BB(n) is the largest number of 1's written by a halting n-state Turing machine on a tape initially filled with 0's.
Video 1. The Boundary of Computation by Mutual Information (2023) Source.
The following things come to mind when you look into research in this area, especially the search for BB(5) which was hard but doable:
Turing machine acceleration refers to using a high level understanding of the specific properties of specific Turing machines to simulate them much faster than naively running the simulation step by step.
Acceleration allows one to use simulation to find infinite loops that might be very long, and would not be otherwise spotted without acceleration.
The last value we will likely ever know for the busy beaver function! BB(6) is likely completely out of reach forever.
By 2023, it had basically been decided by The Busy Beaver Challenge as mentioned at: discuss.bbchallenge.org/t/the-30-to-34-ctl-holdouts-from-bb-5/141, pending only further verification. It is going to be one of those highly computational proofs that will need to be formally verified for people to finally settle it.
As that project beautifully puts it, as of 2023 prior to full resolution, this can be considered the:
simplest open problem in mathematics
on the Busy beaver scale.
Best busy beaver machine known since 1989, and still the champion as of 2023, before a full proof covering all 5-state machines had been carried out.
Paper extracted to HTML by Heiner Marxen: turbotm.de/~heiner/BB/mabu90.html
Non-formal proof with a program, March 2023: www.sligocki.com/2023/03/13/skelet-1-infinite.html. Awesome article that describes the proof procedure.
The proof uses Turing machine acceleration to show that Skelet machine #1 is a Translated cycler Turing machine with humongous cycle parameters:
  • start of the cycle: between 50-200M steps, not calculated precisely in the original post
  • period: ~8 billion steps
Project trying to compute BB(5) once and for all. Notably it has better presentation and organization than any other previous effort, and appears to have grouped everyone who cares about the topic as of the early 2020s.
Very cool initiative!
By 2023, they had basically decided every machine: discuss.bbchallenge.org/t/the-30-to-34-ctl-holdouts-from-bb-5/141
The Busy beaver scale allows us to gauge the difficulty of proving certain (yet unproven!) mathematical conjectures!
To do this, people have reduced certain mathematical problems to deciding the halting problem of a specific Turing machine.
A good example is perhaps Goldbach's conjecture. We just make a Turing machine that successively checks, for each even number, whether it is a sum of two primes, by naively looping down and trying every possible pair. Let the machine halt if the check fails. So this machine halts iff Goldbach's conjecture is false! See also Conjecture reduction to a halting problem.
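As a sketch, in Python rather than as a literal Turing machine, and with a deliberately naive primality test, such a program could look like:

```python
# A program that halts if and only if Goldbach's conjecture is false: it
# checks every even number >= 4 in turn and only stops when it finds one
# that is not the sum of two primes.
def is_prime(k):
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def halts_iff_goldbach_is_false():
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
            return n  # counterexample found: halt
        n += 2        # otherwise move on to the next even number, forever

# Calling halts_iff_goldbach_is_false() would, as far as anyone knows,
# simply run forever.
```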
Therefore, if we were able to compute BB(n) (or rather the step-count variant of the busy beaver function), we would be able to prove those conjectures automatically: just let the machine run for that many steps, and if it hasn't halted by then, we know that it never will.
Of course, in practice, BB(n) is generally uncomputable, so we will never know it. And furthermore, even if it were computable, it would take a lot longer than the age of the universe to compute any of it, so it would be useless.
However, philosophically speaking at least, the number of states of the equivalent Turing machine gives us an idea of the complexity of the problem.
The busy beaver scale is likely mostly useless, since we are able to prove that many non-trivial Turing machines do halt, often by reducing problems to simpler known cases. But still, it is cute.
But maybe, just maybe, reduction to Turing machine form could be useful. E.g. The Busy Beaver Challenge and other attempts to solve BB(5) have come up with a large number of automated (usually parametrized up to a certain threshold) Turing machine decider programs that automatically determine whether certain (often large numbers of) Turing machines run forever.
So it is not impossible that after some reduction to a standard Turing machine form, some conjecture just gets automatically brute-forced by one of the deciders. This is a path to automated theorem proving.
If you can reduce a mathematical problem to the halting problem of a specific Turing machine, as in the case of a few machines of the Busy beaver scale, then using Turing machine deciders could serve as a method of automated theorem proving.
That feels like it could be an elegant proof method, as you reduce your problem to one of the most well studied representations that exist: a Turing machine.
However, it also appears that certain problems cannot be reduced to a halting problem... OMG, life sucks (or is awesome?): Section "Turing machine that halts if and only if Collatz conjecture is false".
bbchallenge.org/story#what-is-known-about-bb lists some (all?) cool examples.
Intuitively we see that the situation is fundamentally different from the Turing machine that halts if and only if the Goldbach conjecture is false, because for Collatz a counterexample orbit can go off into infinity, which can never be finitely verified, while for Goldbach any counterexample can be checked in a finite number of steps.
Amazing.
A problem that has more than two possible outputs, i.e. the answer is not just yes/no.
It is therefore a generalization of a decision problem.
Complexity: NP-intermediate as of 2020.
The basis of RSA. But not proved NP-complete, which leads to the question of why we don't use a provably harder problem instead.
This is a natural question because both integer factorization and discrete logarithm are the basis for the most popular public-key cryptography systems as of 2020 (RSA and Diffie-Hellman key exchange respectively), and both are NP-intermediate. Why not use something more provably hard?
NP-intermediate as of 2020 for similar reasons as integer factorization.
An important case is the discrete logarithm in a cyclic group.
This is the discrete logarithm problem where the group is a cyclic group.
In this case, the problem becomes equivalent to reversing modular exponentiation.
This computational problem forms the basis for Diffie-Hellman key exchange, because modular exponentiation can be efficiently computed, but no known way exists to efficiently compute the reverse function.
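A toy Python illustration of this asymmetry (the numbers are tiny and made up; real Diffie-Hellman uses groups whose sizes have hundreds of digits):

```python
p = 23  # a small prime modulus
g = 5   # a generator of the multiplicative group mod p
x = 6   # the secret exponent

# Easy direction: modular exponentiation, computed efficiently by Python's
# built-in three-argument pow via fast exponentiation.
y = pow(g, x, p)

# Hard direction: the discrete logarithm, i.e. recovering x from (g, y, p).
# Brute force only works here because the numbers are tiny; no efficient
# classical algorithm is known in general.
recovered = next(e for e in range(p) if pow(g, e, p) == y)
assert recovered == x
print(y, recovered)  # 8 6
```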
A solution to a computational problem!
Draft by Ciro Santilli with cross language input/output test cases: github.com/cirosantilli/algorithm-cheat
More commonly known as a map or dictionary.
Like Binary search tree, but each node can hold multiple keys and have more than two children.
This is a family of notations related to the big O notation. A good mnemonic summary of all the notations: big O is like ≤, little-o is like <, big Omega is like ≥, little-omega is like >, and big Theta is like =.
Modulus bounded above, possibly multiplied by a constant:
f(x) = O(g(x)) is defined as: there exist constants C > 0 and x_0 such that |f(x)| <= C |g(x)| for all x > x_0.
E.g.:
  • x = O(x^2). For C = 1, x_0 = 1 is enough. Otherwise, any C > 0 will do: the bottom line will always catch up to the top one eventually.
Stronger version of the big O notation: it basically means that the ratio f(x)/g(x) goes to zero. In big O notation, the ratio does not need to go to zero, it only needs to stay bounded.
So in informal terms, big O notation means ≤, and little-o notation means <.
E.g.:
  • Kx = O(x) for any constant K > 0, but Kx is not o(x): the ratio Kx/x = K does not tend to zero
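For reference, the two notations above can be written out formally as follows (my phrasing of the standard definitions, not taken verbatim from any source):

```latex
% Big O: eventually bounded above by a constant multiple of g.
f(x) = O(g(x)) \iff \exists C > 0,\ \exists x_0,\ \forall x > x_0:\ |f(x)| \le C\,|g(x)|

% Little o: the ratio tends to zero.
f(x) = o(g(x)) \iff \lim_{x \to \infty} \frac{f(x)}{g(x)} = 0
```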
In intuitive terms it consists of all integer functions, possibly with multiple input arguments, that can be written only with a sequence of:
  • variable assignments
  • addition and subtraction
  • integer comparisons and if/else
  • for loops
for (i = 0; i < n; i++)
and such that n does not change inside the loop body, i.e. no while loops with arbitrary conditions.
n does not have to be a constant, it may come from previous calculations. But it must not change inside the loop body.
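For example, here is a made-up Python sketch of this restricted style, where every loop bound is computed before the loop starts and never touched inside the loop body:

```python
# Primitive-recursive-style arithmetic: only bounded for loops, no while
# loops with arbitrary conditions.
def add(a, b):
    result = a
    for _ in range(b):    # bound b is fixed before the loop starts
        result = result + 1
    return result

def mul(a, b):
    result = 0
    for _ in range(b):    # bound fixed up front
        result = add(result, a)
    return result

def power(a, b):
    result = 1
    for _ in range(b):    # bound fixed up front
        result = mul(result, a)
    return result

print(power(2, 10))  # 1024: exponential growth with only bounded loops
```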
Primitive recursive functions basically include every integer function that comes up in practice. Primitive recursive functions can have huge complexity, and the class strictly contains EXPTIME. As such, they mostly only come up in foundation of mathematics contexts.
The cool thing about primitive recursive functions is that the number of iterations is always bounded, so we are certain that they terminate, and that they are therefore computable.
This also means that there are necessarily functions which are not primitive recursive, as we know that there must exist uncomputable functions, e.g. the busy beaver function.
Adding unbounded while loops of course enables us to simulate arbitrary Turing machines, and therefore increases the complexity class.
More finely, there are non-primitive total recursive functions, e.g. most famously the Ackermann function.
To get an intuition for it, see the sample computation at: en.wikipedia.org/w/index.php?title=Ackermann_function&oldid=1170238965#TRS,_based_on_2-ary_function. From this, we immediately get the intuition that these functions are recursive somehow.
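Here is a minimal Python sketch of the Ackermann function, just to make the "recursive somehow" remark concrete: it always terminates, but the recursion on the first argument cannot be rewritten as loops whose bounds are fixed up front:

```python
def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
# ackermann(4, 2) already has 19729 decimal digits: this naive recursion
# cannot get anywhere near it.
```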
A problem that is both in NP and NP-hard.
Interesting because of the Cook-Levin theorem: if even a single NP-complete problem were in P, then all problems in NP would also be in P!
We all know the answer for this: either false or independent.
A problem such that all NP problems can be reduced in polynomial time to it.
This is the most interesting class of problems for BQP, as we haven't proven that they are either:
  • P: would be boring on a quantum computer, as a classical computer could already solve them efficiently
  • NP-complete: would likely be impossible on a quantum computer
Heck, we still know nothing about how this class relates to the non-quantum classes!
  • conjectured not to intersect with NP-complete, because if it did, all NP-complete problems could be solved efficiently on quantum computers, and no such algorithm has been found so far as of 2020
  • conjectured to be larger than P, but we don't have a single algorithm provably there:
    • it is believed that the NP-complete ones can't be solved efficiently even on a quantum computer
    • if they were neither NP-complete nor P, it would imply P != NP
  • we just don't know if it is even contained inside NP!
The exact same problem appears over and over, e.g.:
  • transportation: the last mile of the trip, when everyone leaves the train and goes to their different respective offices, is the most expensive
  • telecommunications: the last mile of wire linking local hubs to actual homes is the most expensive
  • electrical grid: same as telecommunications
Ciro Santilli also identified a knowledge version of this problem: the missing link between basic and advanced.
The function being maximized (or minimized) in an optimization problem.
It is cool how even for such a "simple looking" problem, we were still unable to prove optimality as of 2020.