Quantum is getting hot in 2019, and even Ciro Santilli got a bit excited: quantum computing could be the next big thing.
No useful algorithm has been economically accelerated by quantum yet as of 2019, only useless ones, but the bets are on, big time.
To get a feeling of this, just have a look at the insane number of startups that are already developing quantum algorithms for hardware that doesn't/barely exists! quantumcomputingreport.com/players/privatestartup (archive). Some feared we might be in a bubble: Are we in a quantum computing bubble?
To get a basic idea of what programming a quantum computer looks like start by reading: Section "Quantum computing is just matrix multiplication".
Some people have their doubts, and that is not unreasonable, it might truly not work out. We could be on the verge of an AI winter of quantum computing. But Ciro Santilli feels that it is genuinely impossible to tell as of 2020 if something will work out or not. We really just have to try it out and see. There must have been skeptics before every single next big thing.
This is a quick tutorial on how a quantum computer programmer thinks about how a quantum computer works. Given a few basic mathematical prerequisites, a concrete and precise hello world operation can be understood in 30 minutes.
Although there are several types of quantum computer under development, there exists a single high level model that represents what most of those computers can do, and we are going to explain that model here. This model is the digital quantum computer model, which uses a quantum circuit that is made up of many quantum gates.
Beyond that basic model, programmers may only have to consider the imperfections of their hardware, but the starting point will almost always be this basic model, and tooling that automates mapping the high level model to real hardware considering those imperfections (i.e. quantum compilers) is already getting better and better.
This model is very simple to understand, being only marginally more complex than that of a classical computer, see also: quantumcomputing.stackexchange.com/questions/6639/is-my-background-sufficient-to-start-quantum-computing/14317#14317
The situation of quantum computers today in the 2020's is somewhat analogous to that of the early days of classical circuits and computers in the 1950's and 1960's, before CPUs came along and software ate the world. Even though the exact physics of a classical computer might be hard to understand and vary across different types of integrated circuits, those early hardware pioneers (and to this day modern CPU designers) can usefully view circuits from a higher level point of view, thinking only about concepts such as logic gates and registers, as modelled at the register transfer level, and only in a separate compilation step translated into actual chips. This high level understanding of how a classical computer works is what we can call "the programmer's model of a classical computer". So we are now going to describe the quantum analogue of it.
The way quantum programmers think about a quantum computer in order to program can be described as follows:
  • the input of a N qubit quantum computer is a vector of dimension N containing classic bits 0 and 1
  • the quantum program, also known as circuit, is a unitary matrix of complex numbers that operates on the input to generate the output
  • the output of a N qubit computer is also a vector of dimension N containing classic bits 0 and 1
To operate a quantum computer, you follow the steps of operation of a quantum computer. Each time you do this, you are literally conducting a physical experiment on the specific physical implementation of the computer:
  • set up your physical system to represent the classical 0/1 inputs
  • let the state evolve for long enough
  • measure the classical output back out
and each such run as the above is simply called "an experiment" or "a measurement".
The output comes out "instantly" in the sense that it is physically impossible to observe any intermediate state of the system, i.e. there are no clocks like in classical computers, further discussion at: quantum circuits vs classical circuits. Setting up, running the experiment and taking the measurement does take some time however, and this is important because you have to run the same experiment multiple times, as results are probabilistic as mentioned below.
Unlike in a classical computer, however, the output of a quantum computer is not deterministic.
But each output is not equally likely either, otherwise the computer would be useless except as a random number generator!
This is because the probabilities of each output for a given input depend on the program (unitary matrix) it went through.
Therefore, what we have to do is to design the quantum circuit in such a way that the right answers will come out more likely than the wrong answers.
We then calculate the error bound for our circuit based on its design, and then determine how many times we have to run the experiment to reach the desired accuracy.
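As a concrete sketch of that workflow, the "run many times and tally" step could look as follows in Python, with np.random.choice standing in for the physical experiment and a made-up 75%/25% output distribution:

import numpy as np

def run_experiments(output_probabilities, n_runs):
    # Each sample stands in for one full physical run of the quantum computer.
    outcomes = np.random.choice(len(output_probabilities),
                                size=n_runs, p=output_probabilities)
    # Tally how often each classical output was measured.
    return np.bincount(outcomes, minlength=len(output_probabilities)) / n_runs

# E.g. a 3 qubit circuit whose true output distribution is 75% 000, 25% 010:
print(run_experiments([0.75, 0, 0.25, 0, 0, 0, 0, 0], n_runs=1000))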
The probability of each output of a quantum computer is derived from the input and the circuit as follows.
First we take the classic input vector of dimension N of 0's and 1's and convert it to a "quantum state vector" of dimension 2^N.
We are after all going to multiply it by the program matrix, as you would expect, and that matrix has dimension 2^N x 2^N!
Note that this initial transformation also transforms the discrete zeroes and ones into complex numbers.
For example, in a 3 qubit computer, the quantum state vector has dimension 2^3 = 8, and the following shows all 8 possible conversions from the classic input to the quantum state vector:
000 -> 1000 0000 == (1.0, 0.0, 0.0, 0.0,  0.0, 0.0, 0.0, 0.0)
001 -> 0100 0000 == (0.0, 1.0, 0.0, 0.0,  0.0, 0.0, 0.0, 0.0)
010 -> 0010 0000 == (0.0, 0.0, 1.0, 0.0,  0.0, 0.0, 0.0, 0.0)
011 -> 0001 0000 == (0.0, 0.0, 0.0, 1.0,  0.0, 0.0, 0.0, 0.0)
100 -> 0000 1000 == (0.0, 0.0, 0.0, 0.0,  1.0, 0.0, 0.0, 0.0)
101 -> 0000 0100 == (0.0, 0.0, 0.0, 0.0,  0.0, 1.0, 0.0, 0.0)
110 -> 0000 0010 == (0.0, 0.0, 0.0, 0.0,  0.0, 0.0, 1.0, 0.0)
111 -> 0000 0001 == (0.0, 0.0, 0.0, 0.0,  0.0, 0.0, 0.0, 1.0)
This can be intuitively interpreted as:
  • if the classic input is 000, then we are certain that all three bits are 0.
    Therefore, the probability of all three 0's is 1.0, and all other possible combinations have 0 probability.
  • if the classic input is 001, then we are certain that bit one and two are 0, and bit three is 1. The probability of that is 1.0, and all others are zero.
  • and so on
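This conversion is trivial to express in code. A minimal NumPy sketch:

import numpy as np

def classical_to_state_vector(bits):
    # Convert a classic bit string like "010" into the 2^N one-hot state vector.
    state = np.zeros(2 ** len(bits), dtype=complex)
    state[int(bits, 2)] = 1.0  # e.g. "010" -> index 2
    return state

print(classical_to_state_vector("010").real)
# [0. 0. 1. 0. 0. 0. 0. 0.]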
Now that we finally have our quantum state vector, we just multiply it by the unitary matrix of the quantum circuit, and obtain the 2^N-dimensional output quantum state vector:
output_state_vector = circuit_unitary_matrix x input_state_vector
And at long last, the probability of each classical outcome of the measurement is proportional to the square of the length of each entry in the quantum vector, analogously to what is done in the Schrödinger equation.
For example, suppose that the 3 qubit output were:
(sqrt(3)/2, 0.0, 1/2, 0.0, 0.0, 0.0, 0.0, 0.0)
Then, the probability of each possible outcome would be the length of each component squared:
(3/4, 0.0, 1/4, 0.0, 0.0, 0.0, 0.0, 0.0)
i.e. 75% for the first, and 25% for the third outcome, where just like for the input:
  • first outcome means 000: all output bits are zero
  • third outcome means 010: the first and third bits are zero, but the second one is 1
All other outcomes have probability 0 and cannot occur, e.g.: 001 is impossible.
Keep in mind that the quantum state vector can also contain complex numbers because we are doing quantum mechanics, but we just take their magnitude in that case, e.g. the following quantum state would lead to the same probabilities as the previous one:
(i x sqrt(3)/2, 0.0, -1/2, 0.0, 0.0, 0.0, 0.0, 0.0)
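This amplitude-to-probability rule is easy to check numerically, e.g. with a minimal NumPy sketch of the two states above:

import numpy as np

# The two output states above: same magnitudes, different complex phases.
psi1 = np.array([np.sqrt(3)/2, 0, 1/2, 0, 0, 0, 0, 0], dtype=complex)
psi2 = np.array([1j*np.sqrt(3)/2, 0, -1/2, 0, 0, 0, 0, 0], dtype=complex)

# Probabilities are the squared magnitudes of each amplitude.
print(np.abs(psi1)**2)  # [0.75 0.   0.25 0.   0.   0.   0.   0.  ]
print(np.abs(psi2)**2)  # identical probabilities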
This interpretation of the quantum state vector clarifies a few things:
  • the input quantum state is just a simple state where we are certain of the value of each classic input bit
  • the matrix has to be unitary because the total probability of all possible outcomes must be 1.0
    This is true for the input vector, and unitary matrices have the property of maintaining that normalization after multiplication.
    Unitary matrices are a bit analogous to self-adjoint operators in general quantum mechanics, though the two concepts are distinct: self-adjoint means the matrix equals its conjugate transpose, while unitary means its conjugate transpose is its inverse.
    This also allows us to understand intuitively why quantum computers may be capable of accelerating certain algorithms exponentially: that is because the quantum computer is able to quickly do a unitary matrix multiplication by a humongous 2^N sized matrix.
    If we are able to encode our algorithm in that matrix multiplication, considering the probabilistic interpretation of the output, then we stand a chance of getting that speedup. A tiny numerical check of the unitarity property is sketched right after this list.
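A minimal numerical check of unitarity preserving total probability, using the Hadamard gate (introduced further below) as the unitary:

import numpy as np

# The Hadamard gate, a simple 1 qubit unitary (see the gate sections below).
hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
assert np.allclose(hadamard.conj().T @ hadamard, np.eye(2))  # it is unitary

psi_in = np.array([1, 0], dtype=complex)  # classic input 0: probability 1.0 of 0
psi_out = hadamard @ psi_in
print(np.abs(psi_out)**2)           # [0.5 0.5]
print(np.sum(np.abs(psi_out)**2))   # 1.0: total probability is preserved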
This is the true key question: what are the most important algorithms that would be accelerated by quantum computing?
Some candidates:
Do you have proper optimization or quantum chemistry algorithms that will make trillions?
Maybe there is some room for doubt because some applications might be way better in some implementations, but we should at least have a good general idea.
However, clear information on this is really hard to come by, not sure why.
The most comprehensive list is the amazing curated and commented list of quantum algorithms as of 2020.
There is no fundamental difference between them, a quantum algorithm is a quantum circuit, which can be seen as a super complicated quantum gate.
Perhaps the greatest practical difference is that algorithms tend to be defined for an arbitrary number N of qubits, i.e. as a function that for each N produces a specific quantum circuit with N qubits solving the problem. Most named gates on the other hand have fixed small sizes.
Toy/test/thought experiment algorithm.
Sample implementations:
Video 1. Shor's algorithm Explained by minutephysics (2019) Source.
A group of Chinese researchers have just published a paper claiming that they can—although they have not yet done so—break 2048-bit RSA. This is something to take seriously. It might not be correct, but it’s not obviously wrong.
We have long known from Shor’s algorithm that factoring with a quantum computer is easy. But it takes a big quantum computer, on the orders of millions of qbits, to factor anything resembling the key sizes we use today. What the researchers have done is combine classical lattice reduction factoring techniques with a quantum approximate optimization algorithm. This means that they only need a quantum computer with 372 qbits, which is well within what’s possible today. (The IBM Osprey is a 433-qbit quantum computer, for example. Others are on their way as well.)
Software that maps higher level languages like Qiskit into actual quantum circuits.
These appear to be benchmarks that don't involve running anything concretely, just compiling and likely then counting gates:
Used e.g. by Oxford Quantum Circuits, www.linkedin.com/in/john-dumbell-627454121/ mentions:
Using LLVM to consume QIR and run optimization, scheduling and then outputting hardware-specific instructions.
Presumably the point of it is to allow simulation in classical computers?
Technique that uses multiple non-ideal qubits (physical qubits) to simulate/produce one perfect qubit (logical).
One is philosophically reminded of classical error correction codes, where we also have multiple input bits per actual information bit.
TODO understand in detail. This appears to be a fundamental technique since all physical systems we can manufacture are imperfect.
Part of the fundamental interest of this technique is due to the quantum threshold theorem.
For example, when PsiQuantum raised 215M in 2020, they announced that they intended to reach 1 million physical qubits, which would achieve between 100 and 300 logical qubits.
This theorem roughly states that for every quantum algorithm, once we reach a physical error rate that is small enough (where "small enough" is algorithm dependent), then we can error correct perfectly.
This theorem provides the conceptual division between the noisy intermediate-scale quantum era and post-NISQ.
Era of quantum computing before we reach physical errors small enough to do perfect quantum error correction as demonstrated by the quantum threshold theorem.
A quantum algorithm that is thought to be more likely to be useful in the NISQ era of quantum computing.
TODO clear example of the computational problem that it solves.
TODO clear example of the computational problem that it solves.
This is a term "invented" by Ciro Santilli to refer to quantum compilers that are able to convert non-specifically-quantum programs (functional ones, since there is no state in quantum software) into quantum circuits.
The term is made by adding "quantum" to the more "classical" concept of "high-level synthesis", which refers to software that converts an imperative program into register transfer level hardware, typically for FPGA applications.
It is hard to beat the lists present at: quantumcomputingreport.com (closed source unfortunately, no GitHub) in particular:
One of their learning sites: www.qutube.nl/
The educational/outreach branch of QuTech.
Not a quantum computing pure-play, they also do sensing.
Publicly traded since 2007, but only pivoted to quantum computing much later.
The co-founder's name, Toby Cubitt, is the most awesome thing ever (Cubitt -> qubit). From UCL.
One possibly interesting and possibly obvious point of view is that a quantum computer is an experimental device that executes a quantum probabilistic experiment for which the probabilities cannot be calculated theoretically efficiently by a classical computer.
This is how quantum computing was originally theorized by the likes of Richard Feynman: they noticed that "Hey, here's a well formulated quantum mechanics problem, which I know the algorithm to solve (calculate the probability of outcomes), but it would take exponential time on the problem size".
The converse is then of course that if you were able to encode useful problems in such an experiment, then you have a computer that allows for exponential speedups.
This can be seen very directly by studying one specific quantum computer implementation. E.g. if you take the simplest to understand one, photonic quantum computer, you can make systems for which you need exponential time to calculate the probabilities that photons will exit through certain holes and not others.
This idea also follows from the fact that quantum logic gates are needed because you can't compute the matrix explicitly, as it grows exponentially: knowing the full explicit matrix is impossible in practice, and knowing the matrix is equivalent to knowing the probabilities of every outcome.
These are two conflicting constraints:
TODO confirm: apparently in this paradigm you can choose to measure only certain output qubits.
This makes things irreversible (TODO what does reversibility mean in this random context?), as opposed to circuit-based quantum computers where you measure all output qubits at once.
TODO what is the advantage?
As of 2022, this tends to be the more "default" when you talk about a quantum computer.
But there are some serious analog quantum computer contestants in the field as well.
Quantum circuits are the prevailing model of quantum computing as of the 2010's - 2020's.
Sample implementations:
We don't need to understand a super generalized version of tensor products to know what they mean in basic quantum computing!
Intuitively, taking a tensor product of two qubits simply means putting them together on the same quantum system/computer.
When we write the bra-ket notation |01⟩, that is the same as |0⟩ ⊗ |1⟩.
The tensor product is called a "product" because it distributes over addition.
E.g. consider:
((|0⟩ + |1⟩)/sqrt(2)) ⊗ |0⟩ = (|0⟩ ⊗ |0⟩ + |1⟩ ⊗ |0⟩)/sqrt(2) = (|00⟩ + |10⟩)/sqrt(2)
Intuitively, in this operation we just put a Hadamard gate qubit together with a second pure qubit.
And the outcome still has the second qubit as always 0, because we haven't made them interact.
The quantum state is called a separable state, because it can be written as a single product of two different qubits. We have simply brought two qubits together, without making them interact.
If we then add a CNOT gate to make a Bell state:
CNOT ((|00⟩ + |10⟩)/sqrt(2)) = (|00⟩ + |11⟩)/sqrt(2)
we can now see that the Bell state is non-separable: we've made the two qubits interact, and there is no way to write this state with a single tensor product. The qubits are fundamentally entangled.
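These manipulations are easy to reproduce numerically: NumPy's np.kron is the tensor (Kronecker) product, so a minimal sketch of the separable state vs the Bell state:

import numpy as np

ket0 = np.array([1, 0], dtype=complex)
h_ket0 = np.array([1, 1], dtype=complex) / np.sqrt(2)  # Hadamard applied to |0>

# Separable: just two qubits side by side, (|00> + |10>)/sqrt(2).
separable = np.kron(h_ket0, ket0)
print(separable.real)  # [0.707 0.    0.707 0.   ]

cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Entangled Bell state (|00> + |11>)/sqrt(2): no single tensor product gives this.
bell = cnot @ separable
print(bell.real)  # [0.707 0.    0.    0.707]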
Just like a classic programmer does not need to understand the intricacies of how transistors and CMOS semiconductors are implemented, the quantum programmer does not need to understand the physical intricacies of the underlying physical implementation.
The main difference to keep in mind is that quantum computers cannot save and observe intermediate quantum state, so programming a quantum computer is basically like programming a combinatorial-like circuit with gates that operate on (qu)bits:
For this reason, programming a quantum computer is much like programming a classical combinatorial circuit as you would do with SPICE or Verilog/VHDL, in which you are basically describing a graph of gates that goes from the input to the output.
For this reason, we can use the words "program" and "circuit" interchangeably to refer to a quantum program.
Also remember that there are no clocks in combinatorial circuits because there are no registers to drive; and so there is no analogue of a clock in the quantum system either.
Another consequence of this is that programming quantum computers does not look like programming the more "common" procedural programming languages such as C or Python, since those fundamentally rely on processor register / memory state all the time.
Quantum programmers can however use classic languages to help describe their quantum programs more easily, for example this is what happens in Qiskit, where you write a Python program that makes Qiskit library calls that describe the quantum program.
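For example, a minimal sketch of such a Qiskit program, describing the Bell state circuit from the tensor product discussion (assuming a reasonably recent Qiskit version; execution APIs vary across releases):

from qiskit import QuantumCircuit

# Describe the quantum program: Hadamard on qubit 0, then CNOT -> Bell state.
circuit = QuantumCircuit(2, 2)
circuit.h(0)
circuit.cx(0, 1)
circuit.measure([0, 1], [0, 1])

# The Python program only *describes* the circuit; printing it shows the
# gate graph that would then be compiled and run on hardware or a simulator.
print(circuit.draw())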
At Section "Quantum computing is just matrix multiplication" we saw that making a quantum circuit actually comes down to designing one big unitary matrix.
We have to say though that that was a bit of a lie.
Quantum programmers normally don't just produce those big matrices manually from scratch.
Instead, they use quantum logic gates.
One key insight is that the matrix of a non-trivial quantum circuit is going to be huge, and won't fit into any amount of classical memory that can be present in this universe.
This is because the matrix has dimension 2^N x 2^N, which is exponential in the number of qubits, and already for a few hundred qubits 2^N is more than the number of atoms in the universe!
Therefore, off the bat we know that we cannot possibly describe those matrices in an explicit form, but rather must use some kind of shorthand.
But it gets worse.
Even if we had enough memory, the act of explicitly computing the matrix is not generally possible.
This is because knowing the matrix, basically means knowing the probability result for all possible outputs for each of the possible inputs.
But if we had those probabilities, our algorithmic problem would already be solved in the first place! We would "just" go over each of those output probabilities (OK, there are 2^N of those, which is also an insurmountable problem in itself), and the largest probability would be the answer.
So if we could calculate those probabilities on a classical machine, we would also be able to simulate the quantum computer on the classical machine, and quantum computing would not be able to give exponential speedups, which we know it does.
To see this, consider that for a given input, say 000 on a 3 qubit machine, the corresponding 8-sized quantum state looks like:
000 -> 1000 0000 == (1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
and therefore when you multiply it by the unitary matrix of the quantum circuit, what you get is the first column of the unitary matrix of the quantum circuit. And 001 gives the second column, and so on.
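A quick numerical illustration of this column extraction, with a random unitary standing in for the (in general unknowable) circuit matrix:

import numpy as np

n = 3
rng = np.random.default_rng(0)
# QR decomposition of a random complex matrix gives us some random unitary.
unitary, _ = np.linalg.qr(rng.normal(size=(2**n, 2**n))
                          + 1j * rng.normal(size=(2**n, 2**n)))

e_000 = np.zeros(2**n, dtype=complex)
e_000[0] = 1.0  # the quantum state vector for classic input 000

# Multiplying by the one-hot input just reads out the first column.
assert np.allclose(unitary @ e_000, unitary[:, 0])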
As a result, to prove that a quantum algorithm is correct, we need to be a bit smarter than "just calculate the full matrix".
Which is why you should now go and read: Section "Quantum algorithm".
This type of thinking links back to how physical experiments relate to quantum computing: a quantum computer realizes a physical experiment to which we cannot calculate the probabilities of outcomes without exponential time.
So for example in the case of a photonic quantum computer, you are not able to calculate from theory the probability that photons will show up on certain wires or not.
One direct practical reason is that we need to map the matrix to real quantum hardware somehow, and all quantum hardware designs so far and likely in the future are gate-based: you manipulate a small number of qubits at a time (2) and add more and more of such operations.
While there are "quantum compilers" to increase the portability of quantum programs, it is to be expected that programs manually crafted for a specific hardware will be more efficient just like in classic computers.
TODO: is there any clear reason why computers can't beat humans in approximating any unitary matrix with a gate set?
This is analogous to what classic circuit programmers will do, by using smaller logic gates to create complex circuits, rather than directly creating one huge truth table.
The most commonly considered quantum gates take 1, 2, or 3 qubits as input.
The gates themselves are just unitary matrices that operate on the input qubits and produce the same number of output qubits.
For example, the matrix for the CNOT gate, which takes 2 qubits as input is:
1 0 0 0
0 1 0 0
0 0 0 1
0 0 1 0
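The truth-table reading of this matrix can be checked by applying it to each one-hot basis state:

import numpy as np

cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

for i, bits in enumerate(["00", "01", "10", "11"]):
    e = np.zeros(4)
    e[i] = 1.0
    out_index = int(np.argmax(cnot @ e))
    print(bits, "->", format(out_index, "02b"))
# 00 -> 00, 01 -> 01, 10 -> 11, 11 -> 10:
# the second bit is flipped if and only if the first bit is 1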
The final question is then: if I have a 2 qubit gate but an input with more qubits, say 3 qubits, then what does the 2 qubit gate (a 4x4 matrix) do to the final big 3 qubit matrix (8x8)? In other words, how do we scale quantum gates up to match the total number of qubits?
The intuitive answer is simple: we "just" extend the small matrix with a larger identity matrix so that the probabilities of the third bit are unaffected.
More precisely, we likely have to extend the matrix in a way such that the partial measurement of the original small gate qubits leaves all other qubits unaffected.
For example, if the circuit were made up of a CNOT gate operating on the first and second qubits as in:
0 ----+----- 0
      |
1 ---CNOT--- 1

2 ---------- 2
then we would just extend the 4x4 CNOT gate to an 8x8 matrix.
TODO lazy to properly learn right now. Apparently you have to use the Kronecker product with the identity matrix, as sketched below. Also, ZX-calculus appears to provide a powerful alternative method in some/all cases.
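A minimal sketch of that Kronecker product extension in NumPy, for the CNOT on qubits 0 and 1 of the 3 qubit circuit drawn above:

import numpy as np

cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Tensor the 4x4 CNOT with the 2x2 identity acting on the untouched qubit 2.
cnot_on_3_qubits = np.kron(cnot, np.eye(2))
print(cnot_on_3_qubits.shape)  # (8, 8)

# E.g. input 100 (index 4) -> output 110 (index 6):
# qubit 1 is flipped because qubit 0 is set, qubit 2 is untouched.
print(int(np.argmax(cnot_on_3_qubits @ np.eye(8)[4])))  # 6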
Just as for classic gates, we would like to be able to select quantum computer physical implementations that can represent one or a few gates which can be used to create any quantum circuit.
Unfortunately, in the case of quantum circuits this is obviously impossible, since the space of N x N unitary matrices is infinite and continuous.
Therefore, when we say that certain gates form a "set of universal quantum gates", we actually mean that "any unitary matrix can be approximated to arbitrary precision with enough of these gates".
Or if you like fancy Mathy words, you can say that the subgroup of the unitary group generated by our basic gate set is a dense subset of the unitary group.
The first two that you should study are:
The Hadamard gate takes |0⟩ or |1⟩ (quantum states with probability 1.0 of measuring 0 or 1 respectively), and produces states that have equal probability of measuring 0 or 1.
Figure 1. Hadamard gate symbol. Source.
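For concreteness, the Hadamard gate is the following 2x2 unitary matrix, with an overall factor of 1/sqrt(2):

1  1
1 -1

So applied to |0⟩ == (1, 0) it gives (1/sqrt(2), 1/sqrt(2)), and applied to |1⟩ == (0, 1) it gives (1/sqrt(2), -1/sqrt(2)); both outputs have probability 1/2 for each measurement outcome.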
The quantum NOT gate swaps the states |0⟩ and |1⟩, i.e. it maps:
|0⟩ -> |1⟩, |1⟩ -> |0⟩
As a result, this gate also inverts the probability of measuring 0 or 1, e.g.
  • if the old probability of 0 was 0, then it becomes 1
  • if the old probability of 0 was 0.2, then it becomes 0.8
Figure 1. Quantum NOT gate symbol. Source.
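For concreteness, the quantum NOT gate is the 2x2 matrix:

0 1
1 0

which maps (a, b) to (b, a), i.e. it swaps the amplitudes, and therefore the measurement probabilities, of |0⟩ and |1⟩.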
The most common way to construct multi-qubit gates is to use single-qubit gates as part of a controlled quantum gate.
Controlled quantum gates are gates that have two types of input qubits: control qubits and operand qubits.
These gates can be understood as doing a certain unitary operation only if the control qubits are enabled or disabled.
The first example to look at is the CNOT gate.
Figure 1. Generic controlled quantum gate symbol. Source.
The black dot means "control qubit", and "U" means an arbitrary unitary operation.
When the operand has a conventional symbol, e.g. the Figure "Quantum NOT gate symbol" for the quantum NOT gate to form the CNOT gate, that symbol is used in the operand instead.
Some authors use the convention of:
  • filled black circle: conventional controlled quantum gate, i.e. operate if control qubit is active
  • empty (white) circle: operate if control qubit is inactive
The CNOT gate is a controlled quantum gate that operates on two qubits, flipping the second (operand) qubit if the first (control) qubit is set.
This gate is the first example of a controlled quantum gate that you should study.
Figure 1. CNOT gate symbol. Source. The symbol follows the generic symbol convention for controlled quantum gates shown at Figure "Generic controlled quantum gate symbol", but replacing the generic "U" with the Figure "Quantum NOT gate symbol".
To understand why the gate is called a CNOT gate, you should think as follows.
First let's produce a generic quantum state vector where the control qubit is certain to be 0.
On the standard basis:
|00⟩, |01⟩, |10⟩, |11⟩
we see that this means that only |00⟩ and |01⟩ should be possible. Therefore, the state must be of the form:
a|00⟩ + b|01⟩
where a and b are two complex numbers such that |a|^2 + |b|^2 = 1.
If we operate the CNOT gate on that state, we obtain:
a|00⟩ + b|01⟩
and so the input is unchanged as desired, because the control qubit is 0.
If however we take only states where the control qubit is for sure 1:
a|10⟩ + b|11⟩
then operating the CNOT gate on it gives:
a|11⟩ + b|10⟩
Therefore, in that case, what happened is that the probabilities of measuring |10⟩ and |11⟩ were swapped from |a|^2 and |b|^2 to |b|^2 and |a|^2 respectively, which is exactly what the quantum NOT gate does.
So from this we understand more concretely what "the gate only operates if the first qubit is set to one" means.
Now go and study the Bell state and understand intuitively how this gate is used to produce it.
This gate set alone is not a set of universal quantum gates.
Notably, circuits containing those gates alone can be fully simulated by classical computers according to the Gottesman-Knill theorem, so there's no way they could be universal.
This means that if we add any number of Clifford gates to a quantum circuit, we haven't really increased the complexity of the algorithm, which can be useful as a transformational device.
A popular set of universal quantum gates derived from Clifford gates is Clifford plus T.
Set of quantum logic gates composed of the Clifford gates plus the T gate. It forms a set of universal quantum gates.
Video 1. TensorFlow quantum by Masoud Mohseni (2020) Source. At the timestamp, Masoud gives a thought experiment example of perhaps the simplest to understand analog quantum computer: chained double-slit experiments with carefully calculated distances between slits. Calculating the final probability distribution of that grows exponentially.
TODO synonym to analog quantum computer?
It is also possible to carry out quantum computing without qubits using processes with a continuous spectrum of measurement.
As of 2020, these approaches seem less developed/promising, but who knows.
These computers can be seen as analogous to classical non-quantum analog computers.
Video 1. How To Build A Quantum Computer by Lukas's Lab (2023) Source.
Super quick overview of the main types of quantum computer physical implementations, so it doesn't add much to a quick Google search.
He says he's going to make a series about it, so then something useful might actually come out. The first one was: Video "How to Turn Superconductors Into A Quantum Computer by Lukas's Lab (2023)", but it is still too basic.
The author's full name is Lukas Baker, www.linkedin.com/in/lukasbaker1331/, found with Google reverse image search, even though the LinkedIn image is very slightly different from the YouTube one.
As of 2023 he was a PhD student at NYU.
5 Companies Working With Diamond NV Quantum Computing Technology.
Video 1. Architecture All Access: Quantum Computing by James Clarke (2021) Source.
Based on the Josephson effect. Yet another application of that phenomenal phenomenon!
It is fun to see that the representation of information in the QC basically uses an LC circuit, which is a very classical resonator circuit.
As mentioned at en.wikipedia.org/wiki/Superconducting_quantum_computing#Qubit_archetypes there are actually a few different types of superconducting qubits:
  • flux
  • charge
  • phase
and hybridizations of those such as:
Input:
  • microwave radiation to excite circuit, or do nothing and wait for it to fall to 0 spontaneously
  • interaction: TODO
  • readout: TODO
Video 1. Quantum Computing with Superconducting Qubits by Alexandre Blais (2012) Source.
Video 2. Quantum Transport, Lecture 16: Superconducting qubits by Sergey Frolov (2013) Source. youtu.be/Kz6mhh1A_mU?t=1171 describes several possible realizations: charge, flux, charge/flux and phase.
Video 3. Building a quantum computer with superconducting qubits by Daniel Sank (2019) Source. Daniel wears a "Google SB" t-shirt, which either means shabi in Chinese, or Santa Barbara. Google Quantum AI is based in Santa Barbara, with links to UCSB.
Video 4. A Brief History of Superconducting Quantum Computing by Steven Girvin (2021) Source.
Video 5. Superconducting Qubits I Part 1 by Zlatko Minev (2020) Source.
The Q&A in the middle of talking is a bit annoying.
Video 6. Superconducting Qubits I Part 2 by Zlatko Minev (2020) Source.
Video 7. How to Turn Superconductors Into A Quantum Computer by Lukas's Lab (2023) Source. This video is just the introduction, too basic. But if he goes through with the followups he promises, then something might actually come out of it.
Non-linearity is needed otherwise the input energy would just make the state go to higher and higher energy levels, e.g. from 1 to 2. But we only want to use levels 0 and 1.
The way this is modelled is by starting from a pure LC circuit, which is a harmonic oscillator, see also quantum LC circuit, and then replacing the linear inductor with a SQUID device, e.g. mentioned at: youtu.be/eZJjQGu85Ps?t=1655 Video "Superconducting Qubits I Part 1 by Zlatko Minev (2020)".