Setting: you are sending bits through a communication channel, each bit has a random probability of getting flipped, and so you use some error correction code to achieve some minimal error, at the expense of longer messages.
This theorem sets an upper bound on how efficient you can be in your encoding, for any encoding.
The next big question, which the theorem does not cover, is how to construct codes that reach or approach the limit. Important such codes include:
But besides this, there is also the practical consideration of whether you can encode/decode fast enough to keep up with the coded bandwidth given your hardware capabilities.
news.mit.edu/2010/gallagercodes0121 explains how turbo codes were first devised without a very good mathematical proof behind them, but were still revolutionary in experimental performance, e.g. turbo codes were used in 3G/4G.
But this motivated researchers to find other such algorithms that they would be able to prove things about, and so they rediscovered the much earlier low-density parity-check codes, which had been published in the 60's but were forgotten, partially because they were computationally expensive.
TODO how close does it get to Shannon's limit?
Quantum is getting hot in 2019, and even Ciro Santilli got a bit excited: quantum computing could be the next big thing.
No useful algorithm has been economically accelerated by quantum yet as of 2019, only useless ones, but the bets are on, big time.
To get a feeling of this, just have a look at the insane number of startups that are already developing quantum algorithms for hardware that doesn't/barely exists! quantumcomputingreport.com/players/privatestartup (archive). Some feared we might be in a bubble: Are we in a quantum computing bubble?
To get a basic idea of what programming a quantum computer looks like start by reading: Section "Quantum computing is just matrix multiplication".
Some people have their doubts, and that is not unreasonable, it might truly not work out. We could be on the verge of an AI winter of quantum computing. But Ciro Santilli feels that it is genuinely impossible to tell as of 2020 if something will work out or not. We really just have to try it out and see. There must have been skeptics before every single next big thing.
Course plan:
 Section "Programmer's model of quantum computers"
 look at a Qiskit hello world
 e.g. ours: qiskit/hello.py
 learn about quantum circuits.
 tensor product in quantum computing
 First we learn some quantum logic gates. This shows an alternative, and extremely important view of a quantum computer besides a matrix multiplication: as a circuit. Fundamental subsections:
 quantum algorithms
This is a quick tutorial on how a quantum computer programmer thinks about how a quantum computer works. If you know:
 what a complex number is
 how to do matrix multiplication
 what a probability is
then a concrete and precise hello world operation can be understood in 30 minutes.
Although there are several types of quantum computer under development, there exists a single high level model that represents what most of those computers can do, and we are going to explain that model here. This model is the digital quantum computer model, which uses a quantum circuit that is made up of many quantum gates.
Beyond that basic model, programmers may only have to consider the imperfections of their hardware, but the starting point will almost always be this basic model, and tooling that automates mapping the high level model to real hardware considering those imperfections (i.e. quantum compilers) is already getting better and better.
This model is very simple to understand, being only marginally more complex than that of a classical computer, see also: quantumcomputing.stackexchange.com/questions/6639/ismybackgroundsufficienttostartquantumcomputing/14317#14317
The situation of quantum computers today in the 2020's is somewhat analogous to that of the early days of classical circuits and computers in the 1950's and 1960's, before the CPU came along and software ate the world. Even though the exact physics of a classical computer might be hard to understand and vary across different types of integrated circuits, those early hardware pioneers (and to this day modern CPU designers) could usefully view circuits from a higher level point of view, thinking only about concepts such as:
 logic gates like AND, NOR and NOT
 a clock + registers
as modelled at the register transfer level, and only in a separate compilation step translated into actual chips. This high level understanding of how a classical computer works is what we can call "the programmer's model of a classical computer". So we are now going to describe the quantum analogue of it.
The way quantum programmers think about a quantum computer in order to program can be described as follows:
 the input of an N qubit quantum computer is a vector of dimension N containing classic bits 0 and 1
 the quantum program, also known as circuit, is a $2^N \times 2^N$ unitary matrix of complex numbers $Q \in \mathbb{C}^{2^N \times 2^N}$ that operates on the input to generate the output
 the output of an N qubit computer is also a vector of dimension N containing classic bits 0 and 1
To operate a quantum computer, you follow these steps of operation:
 set the input qubits to classic input bits (state initialization)
 press a big red "RUN" button
 read the classic output bits (readout)
Each time you do this, you are literally conducting a physical experiment on the specific physical implementation of the computer:
 setup your physical system to represent the classical 0/1 inputs
 let the state evolve for long enough
 measure the classical output back out
and each such run is simply called "an experiment" or "a measurement".
The output comes out "instantly" in the sense that it is physically impossible to observe any intermediate state of the system, i.e. there are no clocks like in classical computers, further discussion at: quantum circuits vs classical circuits. Setting up, running the experiment and taking the measurement does take some time however, and this is important because you have to run the same experiment multiple times, as results are probabilistic as mentioned below.
Unlike in a classical computer, however, the output of a quantum computer is not deterministic.
But each output is not equally likely either, otherwise the computer would be useless except as a random number generator!
This is because the probabilities of each output for a given input depend on the program (unitary matrix) it went through.
Therefore, what we have to do is to design the quantum circuit in a way that the right or better answers will come out more likely than the bad answers.
We then calculate the error bound for our circuit based on its design, and then determine how many times we have to run the experiment to reach the desired accuracy.
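As a rough illustration of this repeat-and-tally procedure, here is an editor's sketch; the output distribution below is a made-up example, not derived from any real circuit:

```python
import numpy as np

# Made-up output distribution of a hypothetical 3 qubit circuit:
# 75% chance of reading 000, 25% chance of reading 010.
probabilities = [0.75, 0.0, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0]

rng = np.random.default_rng(seed=42)
n_shots = 10_000  # each "shot" is one full run: initialize, evolve, measure

outcomes = rng.choice(8, size=n_shots, p=probabilities)

# Tally the outcomes; the most frequent one is our answer candidate.
counts = np.bincount(outcomes, minlength=8)
best = int(np.argmax(counts))
print(format(best, "03b"))  # expected to print 000
```

The more shots we run, the tighter our estimate of the underlying probabilities, which is how the desired accuracy determines the number of repetitions.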
The probability of each output of a quantum computer is derived from the input and the circuit as follows.
First we take the classic input vector of dimension N of 0's and 1's and convert it to a "quantum state vector" $q_{in}$ of dimension $2^N$:

$$q_{in} \in \mathbb{C}^{2^N}$$
We are after all going to multiply it by the program matrix, as you would expect, and that has dimension $2^N \times 2^N$!
Note that this initial transformation also transforms the discrete zeroes and ones into complex numbers.
For example, in a 3 qubit computer, the quantum state vector has dimension $2^3 = 8$ and the following shows all 8 possible conversions from the classic input to the quantum state vector:
000 > 1000 0000 == (1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
001 > 0100 0000 == (0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
010 > 0010 0000 == (0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0)
011 > 0001 0000 == (0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0)
100 > 0000 1000 == (0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0)
101 > 0000 0100 == (0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0)
110 > 0000 0010 == (0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0)
111 > 0000 0001 == (0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0)
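The table above can be sketched in code (assuming NumPy and the bit-to-index convention shown in the table):

```python
import numpy as np

def classic_to_quantum(bits: str) -> np.ndarray:
    """Convert a classic bit string like '010' into the corresponding
    2**N dimensional quantum state vector (a standard basis vector)."""
    state = np.zeros(2**len(bits), dtype=complex)
    state[int(bits, 2)] = 1.0  # probability 1.0 of measuring exactly these bits
    return state

print(classic_to_quantum("010").real)  # third entry is 1.0, all others 0.0
```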
This can be intuitively interpreted as:
 if the classic input is 000, then we are certain that all three bits are 0. Therefore, the probability of all three 0's is 1.0, and all other possible combinations have 0 probability.
 if the classic input is 001, then we are certain that bits one and two are 0, and bit three is 1. The probability of that is 1.0, and all others are zero.
 and so on
Now that we finally have our quantum state vector, we just multiply it by the unitary matrix $Q$ of the quantum circuit, and obtain the $2^N$ dimensional output quantum state vector $q_{out}$:
$$q_{out} = Q q_{in}$$
And at long last, the probability of each classical outcome of the measurement is given by the square of the magnitude of each entry in the quantum vector, analogously to what is done in the Schrödinger equation.
For example, suppose that the 3 qubit output were:
$$q_{out} = \begin{bmatrix} \frac{\sqrt{3}}{2} \\ 0.0 \\ \frac{1}{2} \\ 0.0 \\ 0.0 \\ 0.0 \\ 0.0 \\ 0.0 \end{bmatrix}$$
Then, the probability of each possible outcome would be the magnitude of each component squared:
i.e. 75% for the first, and 25% for the third outcomes, where just like for the input:
$$\begin{aligned}
P(000) &= \left|\tfrac{\sqrt{3}}{2}\right|^2 &&= \tfrac{3}{4} \\
P(001) &= |0|^2 &&= 0 \\
P(010) &= \left|\tfrac{1}{2}\right|^2 &&= \tfrac{1}{4} \\
P(011) &= |0|^2 &&= 0 \\
P(100) &= |0|^2 &&= 0 \\
P(101) &= |0|^2 &&= 0 \\
P(110) &= |0|^2 &&= 0 \\
P(111) &= |0|^2 &&= 0
\end{aligned}$$
 first outcome means 000: all output bits are zero
 third outcome means 010: the first and third bits are zero, but the second one is 1
All other outcomes have probability 0 and cannot occur, e.g.: 001 is impossible.
Keep in mind that the quantum state vector can also contain complex numbers because we are doing quantum mechanics, but we just take their magnitude in that case, e.g. the following quantum state would lead to the same probabilities as the previous one:
$$\left|\frac{1 + \sqrt{2}i}{2}\right|^2 = \frac{1^2 + \sqrt{2}^2}{2^2} = \frac{3}{4}
\qquad
\left|\frac{i}{2}\right|^2 = \frac{1^2}{2^2} = \frac{1}{4}$$
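Both example states can be checked numerically; a NumPy sketch of the squared-magnitude rule:

```python
import numpy as np

# The two example output states from the text: they yield the same probabilities.
q_real = np.array([np.sqrt(3)/2, 0, 1/2, 0, 0, 0, 0, 0], dtype=complex)
q_complex = np.array([(1 + np.sqrt(2)*1j)/2, 0, 1j/2, 0, 0, 0, 0, 0])

# Probability of each classical outcome = squared magnitude of each amplitude.
print(np.round(np.abs(q_real)**2, 2))     # 0.75 for 000, 0.25 for 010
print(np.round(np.abs(q_complex)**2, 2))  # identical
```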
This interpretation of the quantum state vector clarifies a few things:
 the input quantum state is just a simple state where we are certain of the value of each classic input bit
 the matrix has to be unitary because the total probability of all possible outcomes must be 1.0. This is true for the input state, and unitary matrices have the property of maintaining it after multiplication. Unitary matrices are a bit analogous to self-adjoint operators in general quantum mechanics (self-adjointness in finite dimensions is a stronger condition). This also allows us to understand intuitively why quantum computers may be capable of accelerating certain algorithms exponentially: it is because the quantum computer is able to quickly do a unitary matrix multiplication of a humongous $2^N$ sized matrix. If we are able to encode our algorithm in that matrix multiplication, considering the probabilistic interpretation of the output, then we stand a chance of getting that speedup.
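A quick numerical check of the unitarity point; a sketch using the Hadamard gate (which appears later in the text) as the example unitary:

```python
import numpy as np

# Hadamard gate, used here as an example unitary matrix.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Unitarity: H^dagger @ H is the identity.
assert np.allclose(H.conj().T @ H, np.eye(2))

# A normalized state stays normalized: total probability remains 1.0.
state = np.array([0.6, 0.8j])  # |0.6|^2 + |0.8i|^2 == 1.0
out = H @ state
print(np.sum(np.abs(out)**2))  # 1.0 up to floating point error
```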
Bibliography:
 arxiv.org/pdf/1804.03719.pdf Quantum Algorithm Implementations for Beginners by Abhijith et al. 2020
This is the true key question: what are the most important algorithms that would be accelerated by quantum computing?
Some candidates:
 Shor's algorithm: this one will actually make humanity worse off, as we will be forced into post-quantum cryptography that will likely be less efficient to implement than existing classical cryptography
 quantum algorithm for linear systems of equations, and related application of systems of linear equations
 Grover's algorithm: speedup not exponential. Still useful for anything?
 Quantum Fourier transform: TODO is the speedup exponential or not?
 Deutsch: solves a useless problem
 NISQ algorithms
Do you have proper optimization or quantum chemistry algorithms that will make trillions?
Maybe there is some room for doubt because some applications might be way better in some implementations, but we should at least have a good general idea.
However, clear information on this is really hard to come by, not sure why.
Whenever asked e.g. at: physics.stackexchange.com/questions/3390/cananybodyprovideasimpleexampleofaquantumcomputeralgorithm/3407 on Physics Stack Exchange people say the infinite mantra:
Lists:
 Quantum Algorithm Zoo: the leading list as of 2020
 quantum computing computational chemistry algorithms is the area that Ciro and many people are the most excited about
 cstheory.stackexchange.com/questions/3888/npintermediateproblemswithefficientquantumsolutions
 mathoverflow.net/questions/33597/arethereanyknownquantumalgorithmsthatclearlyfalloutsideafewnarrowcla
Only NP-intermediate, which notably includes integer factorization:
 quantumcomputing.stackexchange.com/questions/16506/canquantumcomputersolvenpcompleteproblems
 www.cs.virginia.edu/~robins/The_Limits_of_Quantum_Computers.pdf by Scott Aaronson
 cs.stackexchange.com/questions/130470/canquantumcomputinghelpsolvenpcompleteproblems
 www.quora.com/HowcanquantumcomputinghelptosolveNPhardproblems
The most comprehensive list is the amazing curated and commented list of quantum algorithms as of 2020.
There is no fundamental difference between them, a quantum algorithm is a quantum circuit, which can be seen as a super complicated quantum gate.
Perhaps the greatest practical difference is that algorithms tend to be defined for an arbitrary number of N qubits, i.e. as a function that for each N produces a specific quantum circuit with N qubits solving the problem. Most named gates on the other hand have fixed small sizes.
Toy/test/thought experiment algorithm.
Sample implementations:
 2023 www.schneier.com/blog/archives/2023/01/breakingrsawithaquantumcomputer.html comments on "Factoring integers with sublinear resources on a superconducting quantum processor” arxiv.org/pdf/2212.12372.pdf
A group of Chinese researchers have just published a paper claiming that they can—although they have not yet done so—break 2048-bit RSA. This is something to take seriously. It might not be correct, but it's not obviously wrong. We have long known from Shor's algorithm that factoring with a quantum computer is easy. But it takes a big quantum computer, on the order of millions of qbits, to factor anything resembling the key sizes we use today. What the researchers have done is combine classical lattice reduction factoring techniques with a quantum approximate optimization algorithm. This means that they only need a quantum computer with 372 qbits, which is well within what's possible today. (The IBM Osprey is a 433-qbit quantum computer, for example. Others are on their way as well.)
Software that maps higher level languages like Qiskit into actual quantum circuits.
These appear to be benchmarks that don't involve running anything concretely, just compiling and likely then counting gates:
Used e.g. by Oxford Quantum Circuits, www.linkedin.com/in/johndumbell627454121/ mentions:
Using LLVM to consume QIR and run optimization, scheduling and then outputting hardwarespecific instructions.
Presumably the point of it is to allow simulation in classical computers?
Technique that uses multiple nonideal qubits (physical qubits) to simulate/produce one perfect qubit (logical).
One is philosophically reminded of classical error correction codes, where we also have multiple input bits per actual information bit.
TODO understand in detail. This appears to be a fundamental technique since all physical systems we can manufacture are imperfect.
Part of the fundamental interest of this technique is due to the quantum threshold theorem.
For example, when PsiQuantum raised 215M in 2020, they announced that they intended to reach 1 million physical qubits, which would achieve between 100 and 300 logical qubits.
Video "Jeremy O'Brien: "Quantum Technologies" by GoogleTechTalks (2014)" youtu.be/7wCBkAQYBZA?t=2778 describes an error correction approach for a photonic quantum computer.
Bibliography:
This theorem roughly states that for every quantum algorithm, once we reach a physical error rate small enough (where "small enough" is algorithm dependent), then we can perfectly error correct.
This theorem provides the conceptual division between the noisy intermediate-scale quantum era and post-NISQ.
Era of quantum computing before we reach physical errors small enough to do perfect quantum error correction as demonstrated by the quantum threshold theorem.
A quantum algorithm that is thought to be more likely to be useful in the NISQ era of quantum computing.
TODO clear example of the computational problem that it solves.
TODO clear example of the computational problem that it solves.
This is a term "invented" by Ciro Santilli to refer to quantum compilers that are able to convert non-specifically-quantum (functional, since there is no state in quantum software) programs into quantum circuits.
The term is made by adding "quantum" to the more "classical" concept of "high-level synthesis", which refers to software that converts an imperative program into register transfer level hardware, typically for FPGA applications.
It is hard to beat the lists present at: quantumcomputingreport.com (closed source unfortunately, no GitHub) in particular:
 quantumcomputingreport.com/scorecards/qubitcount/ lists what are the latest qubit counts and technologies that each player is using
 quantumcomputingreport.com/players/publiccompanies/ summarizes what each player has been doing in a few paragraphs
Also of interest: quantumzeitgeist.com/interactivemapofquantumcomputingcompaniesfromaroundtheglobe/
 Paulo Nussenzveig physics researcher at University of São Paulo. Laboratory page: portal.if.usp.br/lmcal/ptbr/node/323: LMCAL, laboratory of coherent manipulation of atoms and light. Google Scholar: scholar.google.com/citations?user=FbGL0BEAAAAJ
 Brazil Quantum: interest group created by students. Might be a software consultancy: www.terra.com.br/noticias/tecnologia/inovacao/pesquisadorespaulistastentamcolocarbrasilnomapadacomputacaoquantica,2efe660fbae16bc8901b1d00d139c8d2sz31cgc9.html
 DOBSLIT dobslit.com/en/thecompany/ company in São Carlos, as of 2022 a quantum software consultancy with 3 people: www.linkedin.com/search/results/people/?currentCompany=%5B%2272433615%22%5D&origin=COMPANY_PAGE_CANNED_SEARCH&sid=TAj two of them from the Federal University of São Carlos
 computacaoquanticabrasil.com/ Website half broken as of 2022. Mentions a certain Lagrange Foundation, but their website is down.
 QuInTec academic interest group
 www.terra.com.br/noticias/tecnologia/inovacao/pesquisadorespaulistastentamcolocarbrasilnomapadacomputacaoquantica,2efe660fbae16bc8901b1d00d139c8d2sz31cgc9.html mentions 6 professors, 3 from USP 3 from UNICAMP interest group:
 drive.google.com/file/d/1geGaRuCpRHeuLH2MLnLoxEJ1iOz4gNa9/view white paper gives all names
 Celso VillasBôas
 Frederico Brito
 Gustavo Wiederhecker
 Marcelo Terra Cunha
 Paulo Nussenzveig
 Philippe Courteille
 sites.google.com/unicamp.br/quintec/home their website.
 a 2021 symposium they organized: www.saocarlos.usp.br/dia09quintecquantumengineeringworkshop/ some people of interest:
 Samuraí Brito www.linkedin.com/in/samuraíbrito4a57a847/ at Itaú Unibanco, a Brazilian bank
 www.linkedin.com/in/dariosassithober5ba2923/ from wvblabs.com.br/
 www.linkedin.com/in/robertopanepucciphd from en.wikipedia.org/wiki/Centro_de_Pesquisas_Renato_Archer in Campinas
 Quanby quantum software in Florianópolis, founder Eduardo Duzzioni
 thequantumhubs.com/category/quantumbrazilnews/ good links
 qubit.lncc.br/?lang=en Quantum Computing Group of the National Laboratory for Scientific Computing: pt.wikipedia.org/wiki/Laboratório_Nacional_de_Computação_Científica in Rio. The principal researcher seems to be www.lncc.br/~portugal/. He knows what GitHub is: github.com/programaquantica/tutoriais, PDF without .tex though.
 quantumlatino.com/ conference. E.g. 2022: www.canva.com/design/DAFErjU3Wvk/2xo25nEuqv9O7RbCPLNEkw/view
One of their learning sites: www.qutube.nl/
The educational/outreach branch of QuTech.
Not a quantum computing pure-play, they also do sensing.
Really weird and obscure company, good coverage: thequantuminsider.com/2020/02/06/quantumcomputingincorporatedthefirstpubliclytradedquantumcomputingstock/
Publicly traded in 2007, but only pivoted to quantum computing much later.
Social media:
Funding:
One possibly interesting and possibly obvious point of view is that a quantum computer is an experimental device that executes a quantum probabilistic experiment for which the probabilities cannot be calculated theoretically efficiently by a classical computer.
This is how quantum computing was originally theorized by the likes of Richard Feynman: they noticed that "Hey, here's a well formulated quantum mechanics problem, which I know the algorithm to solve (calculate the probability of outcomes), but it would take exponential time on the problem size".
The converse is then of course that if you were able to encode useful problems in such an experiment, then you have a computer that allows for exponential speedups.
This can be seen very directly by studying one specific quantum computer implementation. E.g. if you take the simplest to understand one, photonic quantum computer, you can make systems for which you need exponential time to calculate the probabilities that photons will exit through certain holes and not others.
The obvious aspect of this idea comes from why quantum logic gates are needed: you can't compute the matrix explicitly, as it grows exponentially. Knowing the full explicit matrix is impossible in practice, and knowing the matrix is equivalent to knowing the probabilities of every outcome.
Mentioned e.g. at:
These are two conflicting constraints:
 long coherence times: require isolation from external world, otherwise observation destroys quantum state
 fast control and readout: require coupling with external world
Synonym to gate-based quantum computer/digital quantum computer?
TODO confirm: apparently in the paradigm you can choose to measure only certain output qubits.
This makes things irreversible (TODO what does reversibility mean in this random context?), as opposed to circuit-based quantum computers where you measure all output qubits at once.
TODO what is the advantage?
As of 2022, this tends to be the more "default" when you talk about a quantum computer.
But there are some serious analog quantum computer contestants in the field as well.
Quantum circuits are the prevailing model of quantum computing as of the 2010's and 2020's.
Sample implementations:
We don't need to understand a super generalized version of tensor products to know what they mean in basic quantum computing!
Intuitively, taking a tensor product of two qubits simply means putting them together on the same quantum system/computer.
The tensor product is called a "product" because it distributes over addition.
E.g. consider:
$$\left(\frac{|0\rangle + |1\rangle}{\sqrt{2}}\right) \otimes |0\rangle = \frac{|0\rangle \otimes |0\rangle + |1\rangle \otimes |0\rangle}{\sqrt{2}} = \frac{|00\rangle + |10\rangle}{\sqrt{2}}$$
Intuitively, in this operation we just put a qubit that went through a Hadamard gate together with a second pure $|0\rangle$ qubit.
And the outcome still has the second qubit as always 0, because we haven't made them interact.
The quantum state $\frac{|00\rangle + |10\rangle}{\sqrt{2}}$ is called a separable state, because it can be written as a single tensor product of two different qubits. We have simply brought two qubits together, without making them interact.
If we then add a CNOT gate to make a Bell state:

$$\frac{|00\rangle + |11\rangle}{\sqrt{2}} = \frac{|0\rangle \otimes |0\rangle + |1\rangle \otimes |1\rangle}{\sqrt{2}}$$

we can now see that the Bell state is non-separable: we've made the two qubits interact, and there is no way to write this state as a single tensor product. The qubits are fundamentally entangled.
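The tensor product of state vectors is just the Kronecker product, so both states above can be checked numerically; a NumPy sketch:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Qubit that went through a Hadamard gate: (|0> + |1>)/sqrt(2)
plus = (ket0 + ket1) / np.sqrt(2)

# Putting two qubits side by side = Kronecker product of their vectors.
separable = np.kron(plus, ket0)  # (|00> + |10>)/sqrt(2), a single product

# The Bell state is a sum of two products, not expressible as one np.kron.
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

print(np.round(separable.real, 3))  # weight on |00> and |10>
print(np.round(bell.real, 3))       # weight on |00> and |11>
```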
Just like a classic programmer does not need to understand the intricacies of how transistors and CMOS semiconductors are implemented, the quantum programmer does not need to understand the physical intricacies of the underlying physical implementation.
The main difference to keep in mind is that quantum computers cannot save and observe intermediate quantum state, so programming a quantum computer is basically like programming a combinatorial-like circuit with gates that operate on (qu)bits:
For this reason, programming a quantum computer is much like programming a classical combinatorial circuit as you would do with SPICE, Verilog or VHDL, in which you are basically describing a graph of gates that goes from the input to the output.
For this reason, we can use the words "program" and "circuit" interchangeably to refer to a quantum program.
Also remember that there are no clocks in combinatorial circuits because there are no registers to drive; and so there is no analogue of a clock in the quantum system either.
Another consequence of this is that programming quantum computers does not look like programming the more "common" procedural programming languages such as C or Python, since those fundamentally rely on processor register / memory state all the time.
Quantum programmers can however use classic languages to help describe their quantum programs more easily, for example this is what happens in Qiskit, where you write a Python program that makes Qiskit library calls that describe the quantum program.
At Section "Quantum computing is just matrix multiplication" we saw that making a quantum circuit actually comes down to designing one big unitary matrix.
We have to say though that that was a bit of a lie.
Quantum programmers normally don't just produce those big matrices manually from scratch.
Instead, they use quantum logic gates.
The following are the main reasons for that:
One key insight is that the matrix of a nontrivial quantum circuit is going to be huge, and won't fit into any amount of classical memory that could be present in this universe.
This is because the matrix is exponential in the number of qubits: for a few hundred qubits, $2^N$ already exceeds the number of atoms in the observable universe!
Therefore, off the bat we know that we cannot possibly describe those matrices in an explicit form, but rather must use some kind of shorthand.
But it gets worse.
Even if we had enough memory, the act of explicitly computing the matrix is not generally possible.
This is because knowing the matrix basically means knowing the probability result for all possible $2^N$ outputs for each of the $2^N$ possible inputs.
But if we had those probabilities, our algorithmic problem would already be solved in the first place! We would "just" go over each of those output probabilities (OK, there are $2^N$ of those, which is also an insurmountable problem in itself), and the largest probability would be the answer.
So if we could calculate those probabilities on a classical machine, we would also be able to simulate the quantum computer on the classical machine, and quantum computing would not be able to give exponential speedups, which we know it does.
To see this, consider that for a given input, say 000 on a 3 qubit machine, the corresponding 8-sized quantum state looks like:
000 > 1000 0000 == (1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
and therefore when you multiply it by the unitary matrix of the quantum circuit, what you get is the first column of the unitary matrix of the quantum circuit. Similarly, 001 gives the second column, and so on.
As a result, to prove that a quantum algorithm is correct, we need to be a bit smarter than "just calculate the full matrix".
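The column-picking argument can be sketched with a random unitary (built via QR decomposition) standing in for some quantum circuit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 8x8 unitary standing in for the circuit of some 3 qubit program.
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8)))

# Input 000 is the basis vector e_0: multiplying by Q selects column 0.
e0 = np.zeros(8, dtype=complex)
e0[0] = 1.0
assert np.allclose(Q @ e0, Q[:, 0])

# Input 001 is e_1: multiplying selects column 1, and so on.
e1 = np.zeros(8, dtype=complex)
e1[1] = 1.0
assert np.allclose(Q @ e1, Q[:, 1])
```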
Which is why you should now go and read: Section "Quantum algorithm".
This type of thinking links back to how physical experiments relate to quantum computing: a quantum computer realizes a physical experiment to which we cannot calculate the probabilities of outcomes without exponential time.
So for example in the case of a photonic quantum computer, you are not able to calculate from theory the probability that photons will show up on certain wires or not.
One direct practical reason is that we need to map the matrix to real quantum hardware somehow, and all quantum hardware designs so far and likely in the future are gate-based: you manipulate a small number of qubits at a time (typically 1 or 2) and add more and more of such operations.
While there are "quantum compilers" to increase the portability of quantum programs, it is to be expected that programs manually crafted for a specific hardware will be more efficient just like in classic computers.
TODO: is there any clear reason why computers can't beat humans in approximating any unitary matrix with a gate set?
This is analogous to what classic circuit programmers will do, by using smaller logic gates to create complex circuits, rather than directly creating one huge truth table.
The most commonly considered quantum gates take 1, 2, or 3 qubits as input.
The gates themselves are just unitary matrices that operate on the input qubits and produce the same number of output qubits.
For example, the matrix for the CNOT gate, which takes 2 qubits as input is:
1 0 0 0
0 1 0 0
0 0 0 1
0 0 1 0
The final question is then: if I have a 2 qubit gate but an input with more qubits, say 3 qubits, then what does the 2 qubit gate (4x4 matrix) do for the final big 3 qubit matrix (8x8)? In other words, how do we scale quantum gates up to match the total number of qubits?
The intuitive answer is simple: we "just" extend the small matrix with a larger identity matrix so that the third bit is unaffected.
More precisely, we likely have to extend the matrix in a way such that the partial measurement of the original small gate qubits leaves all other qubits unaffected.
For example, if the circuit were made up of a CNOT gate operating on the first and second qubits as in:
0 + 0

1 CNOT 1
2  2
then we would just extend the 4x4 CNOT gate to an 8x8 matrix:
TODO lazy to properly learn right now. Apparently you have to take the Kronecker product with the identity matrix. Also, ZX-calculus appears to provide a powerful alternative method in some/all cases.
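A sketch of the Kronecker product extension for the circuit above (assuming the convention that the first qubit is the most significant bit of the state vector index):

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# CNOT acts on qubits 0 and 1; the 2x2 identity leaves qubit 2 alone.
CNOT3 = np.kron(CNOT, np.eye(2))  # 8x8 matrix for the 3 qubit circuit

# Input |110>: control is 1, so the second qubit flips, giving |100>.
state = np.zeros(8, dtype=complex)
state[0b110] = 1.0
out = CNOT3 @ state
print(format(int(np.argmax(np.abs(out))), "03b"))  # 100
```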
Bibliography:
Just like for classic gates, we would like to be able to select quantum computer physical implementations that can represent one or a few gates that can be used to create any quantum circuit.
Unfortunately, in the case of quantum circuits this is obviously impossible, since the space of N x N unitary matrices is infinite and continuous.
Therefore, when we say that certain gates form a "set of universal quantum gates", we actually mean that "any unitary matrix can be approximated to arbitrary precision with enough of these gates".
Or if you like fancy Mathy words, you can say that the subgroup of the unitary group generated by our basic gate set is a dense subset of the unitary group.
The first two that you should study are:
The Hadamard gate takes $∣0⟩$ or $∣1⟩$ (quantum states with probability 1.0 of measuring either 0 or 1), and produces states that have equal probability of 0 or 1.
$$H = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$$
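A minimal numerical check (NumPy sketch):

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

ket0 = np.array([1, 0], dtype=complex)
out = H @ ket0            # (|0> + |1>)/sqrt(2)
print(np.abs(out)**2)     # equal probability of measuring 0 or 1
```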
The quantum NOT gate swaps the state of $|0\rangle$ and $|1\rangle$, i.e. it maps:

$$x|0\rangle + y|1\rangle \to y|0\rangle + x|1\rangle$$

As a result, this gate also inverts the probability of measuring 0 or 1, e.g.
 if the old probability of 0 was 0, then it becomes 1
 if the old probability of 0 was 0.2, then it becomes 0.8
Its matrix is:

$$\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$
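A numerical check of the probability inversion (NumPy sketch; the 0.2/0.8 split is the example from the text):

```python
import numpy as np

X = np.array([[0, 1],
              [1, 0]], dtype=complex)  # quantum NOT gate

# Amplitudes x and y swap, so the measurement probabilities swap too.
state = np.array([np.sqrt(0.2), np.sqrt(0.8)], dtype=complex)  # P(0)=0.2, P(1)=0.8
out = X @ state
print(np.round(np.abs(out)**2, 2))  # P(0)=0.8, P(1)=0.2
```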
The most common way to construct multiqubit gates is to use singlequbit gates as part of a controlled quantum gate.
Controlled quantum gates are gates that have two types of input qubits:
 control qubits
 operand qubits (terminology made up by Ciro Santilli just now)
These gates can be understood as doing a certain unitary operation only if the control qubits are enabled or disabled.
The first example to look at is the CNOT gate.
Some authors use the convention of:
 filled black circle: conventional controlled quantum gate, i.e. operate if control qubit is active
 empty (white) circle: operate if control qubit is inactive
The CNOT gate is a controlled quantum gate that operates on two qubits, flipping the second (operand) qubit if the first (control) qubit is set.
This gate is the first example of a controlled quantum gate that you should study.
$$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$
To understand why the gate is called a CNOT gate, you should think as follows.
First let's produce a generic quantum state vector where the control qubit is certain to be 0.
On the standard basis:

$$|00\rangle, |01\rangle, |10\rangle, |11\rangle$$

we see that this means that only $|00\rangle$ and $|01\rangle$ should be possible. Therefore, the state must be of the form:

$$\begin{bmatrix} x \\ y \\ 0 \\ 0 \end{bmatrix}$$

where $x$ and $y$ are two complex numbers such that $|x|^2 + |y|^2 = 1.0$.
If we operate the CNOT gate on that state, we obtain:

$$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix} \times \begin{bmatrix} x \\ y \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} x \\ y \\ 0 \\ 0 \end{bmatrix}$$

and so the input is unchanged as desired, because the control qubit is 0.
If however we take only states where the control qubit is for sure 1:
$$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix} \times \begin{bmatrix} 0 \\ 0 \\ x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ y \\ x \end{bmatrix}$$
Therefore, in that case, what happened is that the amplitudes of $|10\rangle$ and $|11\rangle$ were swapped from $x$ and $y$ to $y$ and $x$ respectively, which is exactly what the quantum NOT gate does.
So from this we understand more concretely what "the gate only operates if the first qubit is set to one" means.
Now go and study the Bell state and understand intuitively how this gate is used to produce it.
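The construction hinted at here, a Hadamard gate on the control qubit followed by CNOT, starting from $|00\rangle$, can be sketched as:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Start from |00>, apply H to the control qubit, then CNOT.
state = np.zeros(4, dtype=complex)
state[0] = 1.0
bell = CNOT @ np.kron(H, I2) @ state

print(np.round(bell.real, 3))  # (|00> + |11>)/sqrt(2)
```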
This gate set alone is not a set of universal quantum gates.
Notably, circuits containing those gates alone can be fully simulated by classical computers according to the GottesmanKnill theorem, so there's no way they could be universal.
This means that if we add any number of Clifford gates to a quantum circuit, we haven't really increased the complexity of the algorithm, which can be useful as a transformational device.
Set of quantum logic gate composed of the Clifford gates plus the Toffoli gate. It forms a set of universal quantum gates.
 quantumtech.blog/2023/01/17/quantumcomputingwithneutralatoms/ OK this one hits it:
As Alex Keesling, CEO of QuEra told me, "... whereas in gatebased [digital] quantum computing the focus is on the sequence of the gates, in analog quantum processing it's more about the position of the atoms and where you place them so they can mirror real life problems. We arrange the atoms and define the forces that drive them and then measure the result... so it’s a geometric encoding of the problem itself."
So we understand that it is truly like the classical computer analog vs digital case.
 thequantuminsider.com/2022/06/28/whyanalogneutralatomsquantumcomputingisapromisingdirectionforearlyquantumadvantage on The Quantum Insider: useless article, mostly by Pasqal