CNOT gate
The CNOT gate is a controlled quantum gate that operates on two qubits, flipping the second (operand) qubit if the first (control) qubit is set.
This gate is the first example of a controlled quantum gate that you should study.
Figure 1. CNOT gate symbol. Source. The symbol follows the generic symbol convention for controlled quantum gates shown at Figure "Generic controlled quantum gate symbol", but replacing the generic "U" with the Figure "Quantum NOT gate symbol".
To understand why the gate is called a CNOT gate, you should think as follows.
First let's produce a generic quantum state vector where the control qubit is certain to be 0.
On the standard basis, the CNOT gate acts as:
$$
\begin{aligned}
\mathrm{CNOT}|00\rangle &= |00\rangle \\
\mathrm{CNOT}|01\rangle &= |01\rangle \\
\mathrm{CNOT}|10\rangle &= |11\rangle \\
\mathrm{CNOT}|11\rangle &= |10\rangle
\end{aligned}
$$
If the control qubit is certain to be 0, we see that this means that only $|00\rangle$ and $|01\rangle$ should be possible. Therefore, the state must be of the form:
$$\psi = \alpha|00\rangle + \beta|01\rangle$$
where $\alpha$ and $\beta$ are two complex numbers such that:
$$|\alpha|^2 + |\beta|^2 = 1$$
If we operate the CNOT gate on that state, we obtain:
$$\mathrm{CNOT}(\alpha|00\rangle + \beta|01\rangle) = \alpha|00\rangle + \beta|01\rangle$$
and so the input is unchanged as desired, because the control qubit is 0.
If however we take only states where the control qubit is for sure 1:
$$\mathrm{CNOT}(\alpha|10\rangle + \beta|11\rangle) = \alpha|11\rangle + \beta|10\rangle$$
Therefore, in that case, the probabilities of $|10\rangle$ and $|11\rangle$ were swapped from $|\alpha|^2$ and $|\beta|^2$ to $|\beta|^2$ and $|\alpha|^2$ respectively, which is exactly what the quantum NOT gate does.
So from this we understand more concretely what "the gate only operates if the first qubit is set to one" means.
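As a sanity check, here is a minimal NumPy sketch of both cases (the explicit matrix and the amplitude values are our illustrative choices, not tied to any quantum computing library):

```python
import numpy as np

# CNOT in the computational basis (|00>, |01>, |10>, |11>):
# identity on the control-0 block, NOT on the control-1 block.
CNOT = np.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
], dtype=complex)

alpha, beta = 0.6, 0.8  # satisfies |alpha|^2 + |beta|^2 == 1

# Control qubit certainly 0: the state is unchanged.
psi0 = np.array([alpha, beta, 0, 0], dtype=complex)
assert np.allclose(CNOT @ psi0, psi0)

# Control qubit certainly 1: amplitudes of |10> and |11> are swapped.
psi1 = np.array([0, 0, alpha, beta], dtype=complex)
assert np.allclose(CNOT @ psi1, [0, 0, beta, alpha])
```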
Now go and study the Bell state and understand intuitively how this gate is used to produce it.
Continuous spectrum (functional analysis)
Unlike the simple case of a matrix, in infinite dimensional vector spaces, the spectrum may be continuous.
The quintessential example of that is the spectrum of the position operator in quantum mechanics, in which any real number is a possible eigenvalue, since the particle may be found in any position. The associated eigenvectors are the corresponding Dirac delta functions.
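Concretely (a standard fact, written out here for completeness), for any real number $x_0$ the Dirac delta centered at $x_0$ satisfies, in the distributional sense:
$$\hat{x}\,\delta(x - x_0) = x\,\delta(x - x_0) = x_0\,\delta(x - x_0)$$
so every $x_0 \in \mathbb{R}$ is an eigenvalue of the position operator.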
Definition of the indefinite orthogonal group
Given a matrix $A$ with metric signature containing $m$ positive and $n$ negative entries, the indefinite orthogonal group $O(m, n)$ is the set of all matrices that preserve the associated bilinear form, i.e.:
$$O(m, n) = \{ O \in M(m + n) \mid \forall x, y \in \mathbb{R}^{m+n},\ x^T A y = (Ox)^T A (Oy) \}$$
Note that if $A = I$ (i.e. $n = 0$), we just have the standard dot product, and that subcase corresponds to the following definition of the orthogonal group: Section "The orthogonal group is the group of all matrices that preserve the dot product".
As shown at all indefinite orthogonal groups of matrices of equal metric signature are isomorphic, due to Sylvester's law of inertia, only the metric signature of $A$ matters. E.g., if we take two different matrices with the same metric signature such as:
$$\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$
and:
$$\begin{pmatrix} 2 & 0 \\ 0 & -3 \end{pmatrix}$$
both produce isomorphic groups. So it is customary to just always pick the matrix with only +1 and -1 as entries.
Exponential map (Lie theory)
Like everything else in Lie group theory, you should first look at the matrix version of this operation: the matrix exponential.
The exponential map links small transformations around the origin (infinitely small) back to larger finite transformations, and small transformations around the origin are something we can deal with via a Lie algebra, so this map links the two worlds.
The idea is that we can decompose a finite transformation into infinitely many arbitrarily small ones around the origin, and proceed just like the product definition of the exponential function.
The definition of the exponential map is simply the same as that of the regular exponential function as given at Taylor expansion definition of the exponential function, except that the argument can now be an operator instead of just a number.
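As a concrete sketch of both views (using SciPy's `expm`, with the standard rotation generator of $\mathfrak{so}(2)$ as our choice of example), exponentiating the infinitesimal rotation generator produces a finite rotation, and so does composing many tiny transformations:

```python
import numpy as np
from scipy.linalg import expm

# Generator of so(2): an infinitesimal rotation around the origin.
X = np.array([[0.0, -1.0],
              [1.0,  0.0]])

theta = 0.3
# exp(theta * X) is the finite rotation by angle theta.
R = expm(theta * X)
expected = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
assert np.allclose(R, expected)

# Product definition: compose many tiny transformations near identity.
n = 100_000
tiny = np.eye(2) + (theta / n) * X
approx = np.linalg.matrix_power(tiny, n)
assert np.allclose(approx, expected, atol=1e-4)
```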
Hadamard gate
The Hadamard gate takes $|0\rangle$ or $|1\rangle$ (quantum states with probability 1.0 of measuring either 0 or 1), and produces states that have equal probability of measuring 0 or 1.
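Explicitly, on the computational basis (a standard fact, included here for concreteness):
$$H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad H|0\rangle = \frac{|0\rangle + |1\rangle}{\sqrt{2}}, \qquad H|1\rangle = \frac{|0\rangle - |1\rangle}{\sqrt{2}}$$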
Hermitian operator
This is the possibly infinite dimensional version of a Hermitian matrix, since linear operators are the possibly infinite dimensional version of matrices.
There's a catch though: we no longer have explicit matrix indices in general. The generalized definition is instead given in terms of the inner product, as shown at: en.wikipedia.org/w/index.php?title=Hermitian_adjoint&oldid=1032475701#Definition_for_bounded_operators_between_Hilbert_spaces
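In a nutshell, for a bounded operator $A$ on a Hilbert space, the Hermitian (self-adjoint) condition becomes a statement about the inner product:
$$\langle Ax, y \rangle = \langle x, Ay \rangle \quad \forall x, y$$
which reduces to the conjugate-transpose condition of a Hermitian matrix in finite dimension.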
Lie algebra
Like everything else in Lie groups, first start with the matrix version, as discussed at Section "Lie algebra of a matrix Lie group".
Intuitively, a Lie algebra is a simpler object than a Lie group. Without any extra structure, groups can be very complicated non-linear objects. But a Lie algebra is just an algebra over a field, one whose bilinear map, called the Lie bracket, additionally has to be alternating and satisfy the Jacobi identity.
Another important way to think about Lie algebras is as infinitesimal generators.
Because of the Lie group-Lie algebra correspondence, we know that there is almost a bijection between each Lie group and the corresponding Lie algebra. So it makes sense to try and study the algebra instead of the group itself whenever possible, to try and get insight and proofs in that simpler framework. This is the key reason why people study Lie algebras. One is philosophically reminded of how normal subgroups are a simpler representation of group homomorphisms.
To make things even simpler, because all vector spaces of the same dimension on a given field are isomorphic, the only things we need to specify a Lie group through a Lie algebra are:
  • the dimension of the vector space
  • the Lie bracket
Note however that the Lie bracket can look different under different bases of the Lie algebra. This is shown for example at Physics from Symmetry by Jakob Schwichtenberg (2015) page 71 for the Lorentz group.
As mentioned at Lie Groups, Physics, and Geometry by Robert Gilmore (2008) Chapter 4 "Lie Algebras", taking the Lie algebra around the identity is mostly a convention, we could treat any other point, and things are more or less equivalent.
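For matrix Lie algebras the Lie bracket is just the commutator, which we can sanity check numerically. A minimal NumPy sketch with the standard $\mathfrak{so}(3)$ generators (our choice of basis for illustration):

```python
import numpy as np

# Generators of so(3), the Lie algebra of 3D rotations.
Lx = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
Ly = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float)
Lz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)

def bracket(a, b):
    # For matrix Lie algebras, the Lie bracket is the commutator.
    return a @ b - b @ a

# The structure of so(3): [Lx, Ly] = Lz, and cyclic permutations.
assert np.allclose(bracket(Lx, Ly), Lz)
assert np.allclose(bracket(Ly, Lz), Lx)
assert np.allclose(bracket(Lz, Lx), Ly)
```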
Linear map
A linear map is a function $f : V \to W$ where $V$ and $W$ are two vector spaces over the same underlying field $F$ such that:
$$f(v_1 + v_2) = f(v_1) + f(v_2) \quad \text{and} \quad f(cv) = c f(v)$$
A common case is $V = \mathbb{R}^m$, $W = \mathbb{R}^n$ and $F = \mathbb{R}$.
One thing that makes such functions particularly simple is that they are fully determined by how they act on the basis vectors of the domain: in finite dimension, they are therefore specified by only a finite number of elements of $W$.
Every linear map in finite dimension can be represented by a matrix, the points of the domain being represented as vectors.
As such, when we say "linear map", we can think of a generalization of matrix multiplication that makes sense in infinite dimensional spaces like Hilbert spaces: calling such infinite dimensional maps "matrices" would be stretching it a bit, since we would need to specify infinitely many rows and columns.
The prototypical building block of infinite dimensional linear maps is the derivative. In that case, the vectors being operated upon are functions, which cannot therefore be specified by a finite number of parameters, e.g. linearity here reads:
$$\frac{d}{dx}\big(\alpha f + \beta g\big) = \alpha \frac{df}{dx} + \beta \frac{dg}{dx}$$
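A minimal NumPy sketch of both points (the specific matrix, grid size, and central-difference scheme are our illustrative choices):

```python
import numpy as np

# A linear map R^2 -> R^3 is fully determined by where it sends the
# basis vectors: those images are exactly the columns of the matrix.
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 0.0]])
x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
v = 2.0 * x + 5.0 * y
assert np.allclose(A @ v, 2.0 * (A @ x) + 5.0 * (A @ y))  # linearity

# The derivative is also linear, but acts on functions. On a sampled
# grid it can be approximated by a finite matrix of central differences.
n = 1000
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = t[1] - t[0]
D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2.0 * h)
f = np.sin(t)
# Interior points approximate the true derivative cos(t):
assert np.allclose((D @ f)[1:-1], np.cos(t)[1:-1], atol=1e-4)
```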
For example, the left side of the time-independent Schrödinger equation is a linear map, and the time-independent Schrödinger equation can be seen as an eigenvalue problem.
Matrix multiplication
Since a matrix $M$ can be seen as a linear map $f(x) = Mx$, the product of two matrices can be seen as the composition of two linear maps:
$$(M_1 M_2) x = M_1 (M_2 x)$$
One cool thing about linear functions is that we can easily pre-calculate this product only once to obtain a new matrix, and so we don't have to do both multiplications separately each time.
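A quick NumPy illustration (random matrices as arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.random((3, 3)), rng.random((3, 3))
x = rng.random(3)

# Applying B then A equals applying the precomputed product A @ B,
# so the composition only has to be computed once.
assert np.allclose(A @ (B @ x), (A @ B) @ x)
```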
Matrix representation of a bilinear form
As usual, it is useful to think about what a bilinear form looks like in terms of vectors and matrices.
Unlike a linear form, which can be represented by a single vector, the bilinear form has two inputs, and so it is represented by a matrix $M$ which encodes the value of the form for each possible pair of basis vectors.
In terms of that matrix, the form is then given by:
$$B(x, y) = x^T M y$$
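A minimal NumPy sketch (the matrix entries are arbitrary illustrative values) checking bilinearity and the basis-pair interpretation of the entries:

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])  # matrix of the bilinear form B

def B(x, y):
    return x @ M @ y  # B(x, y) = x^T M y

x, y, z = np.array([1.0, 2.0]), np.array([0.5, -1.0]), np.array([2.0, 3.0])
# Bilinearity in each argument:
assert np.isclose(B(3 * x + z, y), 3 * B(x, y) + B(z, y))
assert np.isclose(B(x, 3 * y + z), 3 * B(x, y) + B(x, z))
# M[i][j] is the value of B on the pair of basis vectors (e_i, e_j):
e0, e1 = np.eye(2)
assert np.isclose(B(e0, e1), M[0][1])
```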
Matrix representation of a symmetric bilinear form
Like the matrix representation of a bilinear form, it is a matrix, but now the matrix has to be a symmetric matrix.
We can then immediately see that if the matrix is symmetric, then so is the form. We have:
$$B(x, y) = x^T M y$$
But because $x^T M y$ is a scalar, it equals its own transpose, and so we have:
$$x^T M y = (x^T M y)^T = y^T M^T x$$
and, since $M = M^T$:
$$B(x, y) = y^T M^T x = y^T M x = B(y, x)$$
Metric (mathematics)
A metric is a function that gives the distance, i.e. a real number, between any two elements of a space.
A metric may be induced from a norm as shown at: Section "Metric induced by a norm".
Because a norm can be induced by an inner product, and an inner product can be given by the matrix representation of a positive definite symmetric bilinear form, in simple cases metrics can also be represented by a matrix.
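Chaining those constructions (standard definitions, written out here for concreteness):
$$d(x, y) = \|x - y\| = \sqrt{\langle x - y,\ x - y \rangle} = \sqrt{(x - y)^T M (x - y)}$$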
Minkowski inner product matrix
Since that is a symmetric bilinear form, the associated matrix is a symmetric matrix.
By default, we will use the time negative representation unless stated otherwise:
$$\eta = \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
but another equivalent one is to use a time positive representation:
$$\eta = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}$$
The matrix is typically denoted by the Greek letter eta.
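A quick numerical sanity check (a sketch: the standard Lorentz boost along $x$, embedded in 4 dimensions, is our choice of example) that a Lorentz transformation preserves this form:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])  # time negative convention

# A Lorentz boost along x with velocity beta (units with c = 1).
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.eye(4)
L[0, 0] = L[1, 1] = gamma
L[0, 1] = L[1, 0] = -gamma * beta

# The boost preserves the Minkowski inner product matrix.
assert np.allclose(L.T @ eta @ L, eta)
```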
Pauli-X gate
The quantum NOT gate swaps the state of $|0\rangle$ and $|1\rangle$, i.e. it maps:
$$X(\alpha|0\rangle + \beta|1\rangle) = \beta|0\rangle + \alpha|1\rangle$$
As a result, this gate also inverts the probability of measuring 0 or 1, e.g.
  • if the old probability of 0 was 0, then it becomes 1
  • if the old probability of 0 was 0.2, then it becomes 0.8
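In matrix form, on the basis $\{|0\rangle, |1\rangle\}$ (a standard fact, included for concreteness):
$$X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$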
Representation theory
Basically, a "representation" means associating each group element as an invertible matrices, i.e. a matrix in (possibly some subset of) , that has the same properties as the group.
Or in other words, associating to the more abstract notion of a group more concrete objects with which we are familiar (e.g. a matrix).
Each such matrix then represents one specific element of the group.
This is basically what everyone does (or should do!) when starting to study Lie groups: we start looking at matrix Lie groups, which are very concrete.
Or more precisely, mapping each group element to a linear map over some vector space (which can be represented by a matrix in finite dimension), in a way that respects the group operations:
$$\rho(g_1 g_2) = \rho(g_1)\,\rho(g_2)$$
As shown at Physics from Symmetry by Jakob Schwichtenberg (2015)
  • page 51, a representation is not unique, we can even use matrices of different dimensions to represent the same group
  • section 3.6 classifies the representations of $SU(2)$. There is only one possibility per dimension!
  • 3.7 "The Lorentz Group O(1,3)" mentions that even for a "simple" group such as the Lorentz group, not all representations can be described in terms of matrices, and that we can construct such representations with the help of Lie group theory, and that they have fundamental physical application
Ring (mathematics)
A ring can be seen as a generalization of a field where:
  • multiplication is not necessarily commutative
  • multiplication does not necessarily have an inverse for every element
Addition however has to be commutative and have inverses, i.e. it is an Abelian group.
The simplest example of a ring which is not a full fledged field and which has commutative multiplication is the integers. Notably, no multiplicative inverses exist except for 1 and -1. E.g. the inverse of 2 would be 1/2, which is not in the set. More specifically, the integers are a commutative ring.
A polynomial ring is another example with the same properties as the integers.
The simplest non-commutative, non-division ring is the set of all 2x2 matrices of real numbers:
  • we know that 2x2 matrix multiplication is non-commutative in general
  • some 2x2 matrices have a multiplicative inverse, but others don't
Note that the set of invertible 2x2 matrices alone is not a ring, because it is not closed under addition: e.g. by adding $M + (-M)$ you can reach the zero matrix, which is not invertible.
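A minimal NumPy sketch of those two points (the specific matrices are arbitrary illustrative choices):

```python
import numpy as np

A = np.array([[1, 1], [0, 1]])
B = np.array([[1, 0], [1, 1]])
# Multiplication in the ring of 2x2 matrices is not commutative:
assert not np.array_equal(A @ B, B @ A)

# Some elements have no multiplicative inverse:
C = np.array([[1, 0], [0, 0]])
assert np.linalg.det(C) == 0  # singular, hence not invertible
```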
Solving the Schrödinger equation with the time-independent Schrödinger equation
Before reading any further, you must understand heat equation solution with Fourier series, which uses separation of variables.
Once that example is clear, we see that the exact same separation of variables can be done to the Schrödinger equation. If we name the constant of the separation of variables $E$ for energy, we get:
  • a time-only part that does not depend on space and does not depend on the Hamiltonian at all. The solution for this part is therefore always the same exponential for any problem, and this part is therefore "boring":
    $$\psi_T(t) = e^{-iEt/\hbar}$$
  • a space-only part that does not depend on time, but does depend on the Hamiltonian:
    $$\hat{H}\psi(x) = E\psi(x)$$
    Since this is the only non-trivial part, unlike the time part which is trivial, this spatial part is just called "the time-independent Schrödinger equation".
    Note that the $\psi$ here is not the same as the $\Psi$ in the time-dependent Schrödinger equation of course, as that $\Psi$ is the result of the multiplication of the time and space parts. This is a bit of imprecise terminology, but hey, physics.
Because the time part of the equation is always the same and always trivial to solve, all we have to do to actually solve the Schrodinger equation is to solve the time independent one, and then we can construct the full solution trivially.
Once we've solved the time-independent part for each possible $E$, we can construct a solution exactly as we did in heat equation solution with Fourier series: we make a weighted sum over all possible $E$ to match the initial condition, which is analogous to the Fourier series in the case of the heat equation, to reach a final full solution:
  • if there are only discretely many possible values of $E$, each possible energy $E_i$, we proceed as follows:
    Equation 3.
    Solution of the Schrödinger equation in terms of the time-independent and time-dependent parts
    $$\Psi(x, t) = \sum_i c_i\, \psi_{E_i}(x)\, e^{-iE_i t/\hbar}$$
    and this is a solution by selecting the $c_i$ such that at time $t = 0$ we match the initial condition:
    $$\Psi(x, 0) = \sum_i c_i\, \psi_{E_i}(x)$$
    Such a discrete spectrum shows up in many incredibly important cases, e.g. the particle in a box, the quantum harmonic oscillator and the hydrogen atom.
  • if there are continuously many possible values of $E$, we do something analogous but with an integral instead of a sum. This is called the continuous spectrum; one notable example is the free particle.
The fact that this expansion of the initial condition in terms of the energy eigenfunctions is always possible is mathematically proven by some version of the spectral theorem, based on the fact that the Hamiltonian of the Schrödinger equation has to be a Hermitian operator, and therefore behaves nicely.
It is interesting to note that solving the time-independent Schrödinger equation can also be seen exactly as an eigenvalue equation where:
  • the matrix is the Hamiltonian $\hat{H}$
  • the eigenvalues are the possible energies $E$
  • the eigenvectors are the spatial wave functions $\psi_E$
The only difference from usual matrix eigenvectors is that we are now dealing with an infinite dimensional vector space.
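As a minimal numerical sketch of this eigenvalue viewpoint (assuming $\hbar = m = 1$ and an infinite square well on $[0, 1]$, our choice of toy problem), discretizing the Hamiltonian turns the problem into an ordinary matrix eigenvalue problem:

```python
import numpy as np

# Discretize H = -1/2 d^2/dx^2 on [0, 1] with zero boundary conditions,
# using central finite differences on n interior grid points.
n = 1000
h = 1.0 / (n + 1)
main = np.full(n, 1.0 / h**2)
off = np.full(n - 1, -0.5 / h**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Solving the matrix eigenvalue problem gives energies and wave functions.
energies, wavefuncs = np.linalg.eigh(H)

# Analytic energies of the infinite square well: E_k = (k pi)^2 / 2.
k = np.arange(1, 6)
assert np.allclose(energies[:5], (k * np.pi) ** 2 / 2, rtol=1e-4)
```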
Symmetric matrix