Bra-ket notation
Notation used in quantum mechanics.
Ket is just a vector, written as $|\psi\rangle$. Though generally, in the context of quantum mechanics, this is an infinite dimensional vector in a Hilbert space such as $L^2$.
Bra is just the dual vector corresponding to a ket, written as $\langle\psi|$, or in other words a projection linear operator, i.e. a linear function which acts on a given vector and returns a single complex number. Also known as... a dot product.
For example:
$$\langle \phi | \psi \rangle$$
is basically a fancy way of saying:
$$\vec{\phi} \cdot \vec{\psi}$$
that is: we are taking the projection of $\vec{\psi}$ along the $\vec{\phi}$ direction. Note that in the ordinary dot product notation however, we don't differentiate as clearly what is a vector and what is an operator, while the bra-ket notation makes it clear.
The projection operator is completely specified by the vector that we are projecting onto. This is why the bra-ket notation makes sense.
It also has the merit of clearly differentiating vectors from operators. E.g. in $\vec{\phi} \cdot \vec{\psi}$ it is not very clear that $\vec{\phi}$ is acting as an operator and $\vec{\psi}$ as a vector, except due to the relative position to the dot. This is especially bad when we start manipulating operators by themselves without vectors.
This notation is widely used in quantum mechanics because the probability of getting a certain outcome for an experiment is calculated by taking the projection of a state onto the basis vector associated with an eigenvalue, as explained at: Section "Mathematical formulation of quantum mechanics".
Making the projection operator "look like a thing" (the bra) is nice because we can add and multiply them much like we can for vectors (they also form a vector space), e.g.:
$$(a \langle \phi_1 | + b \langle \phi_2 |) | \psi \rangle$$
just means taking the projection of $\vec{\psi}$ along the $a \vec{\phi}_1 + b \vec{\phi}_2$ direction (taking the coefficients $a$ and $b$ real for simplicity).
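Here is a minimal numerical sketch of these ideas (my own, using NumPy and finite dimensional vectors as a stand-in for the Hilbert space; the specific vectors are arbitrary):
import numpy as np

# finite dimensional stand-ins for the kets |phi> and |psi>
phi = np.array([1.0 + 0.0j, 1.0j])
psi = np.array([2.0 + 0.0j, 3.0 + 0.0j])

# <phi|psi>: the bra is the conjugate transpose, so this is np.vdot
print(np.vdot(phi, psi))  # np.vdot conjugates its first argument

# bras combine linearly (real coefficients, to sidestep conjugation):
# (a<phi1| + b<phi2|)|psi> == a<phi1|psi> + b<phi2|psi>
phi1 = np.array([1.0, 0.0], dtype=complex)
phi2 = np.array([0.0, 1.0], dtype=complex)
a, b = 2.0, 3.0
lhs = np.vdot(a * phi1 + b * phi2, psi)
rhs = a * np.vdot(phi1, psi) + b * np.vdot(phi2, psi)
print(np.isclose(lhs, rhs))  # True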
Ciro Santilli thinks that this notation is a bit over-engineered. Notably the bras are just vectors, which we should just write as usual with arrows, e.g. $\vec{\phi}$... the bra thing makes it look scarier than it needs to be. And then we should just find a different notation for the projection part.
Maybe Dirac chose it because of the appeal of the women's piece of clothing: bra, in an irresistible call from British humour.
But in any case, alas, we are now stuck with it.
Continuous spectrum (functional analysis)
Unlike the simple case of a matrix, in infinite dimensional vector spaces, the spectrum may be continuous.
The quintessential example of that is the spectrum of the position operator in quantum mechanics, in which any real number is a possible eigenvalue, since the particle may be found in any position. The associated eigenvectors are the corresponding Dirac delta functions.
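To make this concrete, here is the standard eigenvalue equation for the position operator $\hat{x}$ (which acts by multiplying by $x$), for any real number $x_0$:
$$\hat{x}\, \delta(x - x_0) = x\, \delta(x - x_0) = x_0\, \delta(x - x_0)$$
The last step holds because $\delta(x - x_0)$ vanishes everywhere except at $x = x_0$, so multiplying it by $x$ is the same as multiplying it by the constant $x_0$.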
Eigendecomposition of a matrix
Every diagonalizable matrix $M$ can be written as:
$$M = P D P^{-1}$$
where:
  • $D$ is a diagonal matrix containing the eigenvalues of $M$
  • $P$ is a matrix whose columns are the corresponding eigenvectors of $M$
Note therefore that this decomposition is unique up to swapping the order of the eigenvectors (and rescaling each eigenvector). We could fix a canonical form by sorting the eigenvalues from smallest to largest when they are real.
Intuitively, note that this is just the change of basis formula, and so:
  • $P^{-1}$ changes basis to align to the eigenvectors
  • $D$ simply multiplies each eigenvector component by its eigenvalue
  • $P$ changes back to the original basis
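As a minimal sketch of this decomposition in SymPy (my own example matrix, not from the original text):
from sympy import Matrix

M = Matrix([[4, 1], [2, 3]])
P, D = M.diagonalize()       # columns of P are eigenvectors, D is diagonal
print(D)                     # diagonal entries are the eigenvalues 2 and 5
print(P * D * P.inv() == M)  # True: M = P D P^(-1)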
Helmholtz equation
Linear map
A linear map is a function $f : V \to W$ where $V$ and $W$ are two vector spaces over the same underlying field, such that: $f(a v_1 + b v_2) = a f(v_1) + b f(v_2)$ for all vectors $v_1, v_2 \in V$ and all scalars $a, b$.
A common case is $V = \mathbb{R}^n$, $W = \mathbb{R}^m$, and the underlying field $\mathbb{R}$.
One thing that makes such functions particularly simple is that they can be fully specified by specifying how they act on the basis vectors of the input space: linearity then determines how they act on every other vector, so in finite dimension they are specified by only a finite number of elements of $W$.
Every linear map in finite dimension can be represented by a matrix, the points of the domain being represented as vectors.
As such, when we say "linear map", we can think of it as a generalization of matrix multiplication that also makes sense in infinite dimensional spaces like Hilbert spaces: calling such infinite dimensional maps "matrices" would be stretching it a bit, since we would need to specify infinitely many rows and columns.
The prototypical building block of infinite dimensional linear maps is the derivative. In that case, the vectors being operated upon are functions, which cannot be specified by a finite number of parameters.
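Here is a minimal numerical sketch of that idea (my own, not from the original text): sample a function on a finite grid and the derivative becomes an ordinary matrix acting on the vector of samples, i.e. a finite dimensional truncation of the infinite dimensional linear map:
import numpy as np

n = 100
x = np.linspace(0.0, 2.0 * np.pi, n)
dx = x[1] - x[0]

# central finite difference matrix: an n x n approximation of d/dx
D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2.0 * dx)

f = np.sin(x)
df = D @ f  # the "matrix" acting on the "vector" that represents the function

# away from the boundary rows, this matches the exact derivative cos(x)
print(np.max(np.abs(df[1:-1] - np.cos(x)[1:-1])))  # small number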
For example, the left side of the time-independent Schrödinger equation is a linear map. And the time-independent Schrödinger equation can be seen as an eigenvalue problem.
Schrödinger picture
To better understand the discussion below, the best thing to do is to read it in parallel with the simplest possible example: Schrödinger picture example: quantum harmonic oscillator.
The state of a quantum system is a unit vector in a Hilbert space.
"Making a measurement" for an observable means applying a self-adjoint operator to the state, and after a measurement is done:
  • the state collapses to an eigenvector of the self adjoint operator
  • the result of the measurement is the corresponding eigenvalue of the self adjoint operator
  • the probability of a given result happening when the spectrum is discrete is proportional to the modulus squared of the projection onto that eigenvector.
    For continuous spectra such as that of the position operator in most systems, e.g. the Schrödinger equation for a free one dimensional particle, the projection on each individual eigenvector is zero, i.e. the probability of one absolutely exact position is zero. To get a non-zero result, the measurement has to be done over a continuous range of eigenvectors (e.g. for position: "is the particle present between x=0 and x=1?"), and you have to integrate the probability over the projections on that continuous range of eigenvalues.
    In such continuous cases, the probability collapses to a uniform distribution on the range after measurement.
    The continuous position operator case is well illustrated at: Video "Visualization of Quantum Physics (Quantum Mechanics) by udiprod (2017)"
Those last two rules are also known as the Born rule.
Self adjoint operators are chosen because they have the following key properties:
  • their eigenvectors form an orthonormal basis
  • they are diagonalizable, and their eigenvalues (the possible measurement results) are real
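A minimal numerical sketch of these rules for a discrete spectrum (my own illustration with NumPy; the observable and state are arbitrary):
import numpy as np

# an arbitrary 2x2 self adjoint (Hermitian) observable
H = np.array([[1.0, 2.0 - 1.0j],
              [2.0 + 1.0j, 3.0]])
print(np.allclose(H, H.conj().T))  # True: self adjoint

# state: a unit vector in the Hilbert space (here just C^2)
psi = np.array([1.0, 1.0j]) / np.sqrt(2.0)

eigenvalues, eigenvectors = np.linalg.eigh(H)  # real eigenvalues, orthonormal eigenvectors
print(eigenvalues)  # real numbers: the possible measurement results

# Born rule: probability of each result is the squared modulus of the projection
probabilities = np.abs(eigenvectors.conj().T @ psi) ** 2
print(probabilities, probabilities.sum())  # probabilities sum to 1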
Perhaps the easiest case to understand this for is that of spin, which has only a finite number of eigenvalues. Although it is a shame that fully understanding that requires a relativistic quantum theory such as the Dirac equation.
The next steps are to look at simple 1D bound states such as particle in a box and quantum harmonic oscillator.
The solution to the Schrödinger equation for a free one dimensional particle is a bit harder since the possible energies do not make up a countable set.
This formulation was apparently called more precisely the Dirac-von Neumann axioms, but it became so dominant that we just call it "the" formulation.
Quantum Field Theory lecture notes by David Tong (2007) mention that:
if you were to write the wavefunction in quantum field theory, it would be a functional, that is, a function of every possible configuration of the field.
Solving the Schrodinger equation with the time-independent Schrödinger equation
Before reading any further, you must understand heat equation solution with Fourier series, which uses separation of variables.
Once that example is clear, we see that the exact same separation of variables can be done to the Schrödinger equation. If we name the constant of the separation of variables $E$ for energy, we get (the separation itself is worked out right after the list below):
  • a time-only part that does not depend on space and does not depend on the Hamiltonian at all. The solution for this part is therefore always the same exponential for any problem, and this part is therefore "boring": $T(t) = e^{-iEt/\hbar}$
  • a space-only part that does not depend on time, but does depend on the Hamiltonian: $\hat{H} \psi(x) = E \psi(x)$
    Since this is the only non-trivial part, unlike the time part which is trivial, this spatial part is just called "the time-independent Schrodinger equation".
    Note that the $\psi$ here is not the same as the $\Psi$ in the time-dependent Schrodinger equation of course, as that $\Psi$ is the result of the multiplication of the time and space parts. This is a bit of imprecise terminology, but hey, physics.
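To make the separation explicit (a standard derivation sketch, with $\hat{H}$ assumed time-independent): write $\Psi(x, t) = \psi(x) T(t)$ and substitute into the time-dependent equation:
$$i\hbar \frac{\partial \Psi}{\partial t} = \hat{H} \Psi \;\;\Rightarrow\;\; i\hbar\, \psi(x)\, T'(t) = T(t)\, \hat{H} \psi(x)$$
Dividing both sides by $\psi(x) T(t)$ makes the left side depend only on $t$ and the right side only on $x$, so both must equal a constant, which we call $E$:
$$i\hbar \frac{T'(t)}{T(t)} = \frac{\hat{H}\psi(x)}{\psi(x)} = E$$
which gives the two equations above: $T(t) = e^{-iEt/\hbar}$ and $\hat{H}\psi = E\psi$.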
Because the time part of the equation is always the same and always trivial to solve, all we have to do to actually solve the Schrodinger equation is to solve the time independent one, and then we can construct the full solution trivially.
Once we've solved the time-independent part for each possible $E$, we can construct a solution exactly as we did in heat equation solution with Fourier series: we make a weighted sum over all possible $E$ to match the initial condition, which is analogous to the Fourier series in the case of the heat equation, to reach a final full solution:
  • if there are only discretely many possible values of $E$, one energy $E_i$ for each spatial solution $\psi_i$, we proceed as follows:
    Equation 3. Solution of the Schrodinger equation in terms of the time-independent and time-dependent parts:
    $$\Psi(x, t) = \sum_i c_i\, \psi_i(x)\, e^{-i E_i t / \hbar}$$
    Each term solves the time-dependent equation, and the equation is linear, so the sum is a solution as well; we then select the constants $c_i$ such that at time $t = 0$ we match the initial condition:
    $$\Psi(x, 0) = \sum_i c_i\, \psi_i(x)$$
    A discrete spectrum shows up in many incredibly important cases, e.g. the particle in a box and the quantum harmonic oscillator.
  • if the possible values of $E$ form a continuum, we do something analogous, but with an integral instead of a sum. This is called the continuous spectrum. One notable example is the Schrödinger equation for a free one dimensional particle.
The fact that this expansion of the initial condition in terms of the $\psi_i$ is always possible is mathematically proven by some version of the spectral theorem, based on the fact that the Hamiltonian of the Schrodinger equation has to be Hermitian, and therefore behaves nicely.
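A minimal numerical sketch of the whole procedure (my own illustration with NumPy: a particle in a box discretized on a grid, with units chosen so that $\hbar = m = 1$):
import numpy as np

# particle in a box on [0, 1], discretized on n grid points
n = 200
x = np.linspace(0.0, 1.0, n)
dx = 1.0 / (n - 1)

# Hamiltonian: H = -1/2 d^2/dx^2 as a finite difference matrix
H = (-0.5 / dx**2) * (np.diag(np.ones(n - 1), 1) - 2.0 * np.eye(n) + np.diag(np.ones(n - 1), -1))

# time-independent problem: H psi_i = E_i psi_i
E, psi = np.linalg.eigh(H)

# initial condition: some normalized wave packet
Psi0 = np.exp(-100.0 * (x - 0.3) ** 2)
Psi0 /= np.linalg.norm(Psi0)

# expand the initial condition on the eigenvectors: c_i = <psi_i|Psi0>
c = psi.T @ Psi0

# full solution at a later time: weighted sum with the trivial time factors
t = 0.01
Psi_t = psi @ (c * np.exp(-1j * E * t))
print(np.linalg.norm(Psi_t))  # still 1: time evolution is unitary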
It is interesting to note that solving the time-independent Schrodinger equation can also be seen exactly as an eigenvalue equation where:
  • the Hamiltonian is the operator (the infinite dimensional analogue of the matrix)
  • the allowed energies $E$ are the eigenvalues
  • the wave functions $\psi$ are the eigenvectors (eigenfunctions)
The only difference from usual matrix eigenvectors is that we are now dealing with an infinite dimensional vector space.
Spectrum (functional analysis)
Sylvester's law of inertia
The theorem states that the number of 0, 1 and -1 in the metric signature is the same for two symmetric matrices that are congruent matrices.
For example, consider:
$$A = \begin{pmatrix} 2 & \sqrt{2} \\ \sqrt{2} & 3 \end{pmatrix}$$
The eigenvalues of $A$ are $\lambda_1 = 1$ and $\lambda_2 = 4$, and the associated normalized eigenvectors are:
$$v_1 = \frac{1}{\sqrt{3}} \begin{pmatrix} -\sqrt{2} \\ 1 \end{pmatrix}, \quad v_2 = \frac{1}{\sqrt{3}} \begin{pmatrix} 1 \\ \sqrt{2} \end{pmatrix}$$
SymPy code:
from sympy import Matrix, sqrt
A = Matrix([[2, sqrt(2)], [sqrt(2), 3]])
A.eigenvects()  # eigenvalues 1 and 4; eigenvectors proportional to (-sqrt(2), 1) and (1, sqrt(2))
and from the eigendecomposition of a real symmetric matrix we know that:
$$A = O D O^T$$
where $O$ is the orthogonal matrix with the normalized eigenvectors $v_1$ and $v_2$ as columns, and $D = \operatorname{diag}(\lambda_1, \lambda_2) = \operatorname{diag}(1, 4)$.
Now, instead of $O$, we could use $O E$, where $E$ is an arbitrary invertible diagonal matrix of type:
$$E = \begin{pmatrix} e_1 & 0 \\ 0 & e_2 \end{pmatrix}$$
With this, we would reach a new matrix $B$:
$$B = (O E)^T A (O E) = E^T (O^T A O) E = E D E = \begin{pmatrix} e_1^2 \lambda_1 & 0 \\ 0 & e_2^2 \lambda_2 \end{pmatrix}$$
Therefore, with this congruence, we are able to multiply the eigenvalues of $A$ by the arbitrary positive numbers $e_1^2$ and $e_2^2$. Since we are only ever multiplying by positive numbers, we cannot change the signs of the original eigenvalues, and so the metric signature is maintained; apart from that restriction, though, any values can be reached.
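A minimal SymPy check of this argument (my own sketch; the scaling factors 5 and 7 are arbitrary):
from sympy import Matrix, sqrt, diag, simplify

A = Matrix([[2, sqrt(2)], [sqrt(2), 3]])

# O: orthogonal matrix with the normalized eigenvectors of A as columns
O = Matrix([[-sqrt(2), 1], [1, sqrt(2)]]) / sqrt(3)

# O^T A O gives the diagonal matrix of eigenvalues diag(1, 4)
print(simplify(O.T * A * O))

# congruence with S = O E for an arbitrary diagonal E: the eigenvalues get scaled
# by the positive numbers e1^2 and e2^2, so their signs cannot change
E = diag(5, 7)
S = O * E
print(simplify(S.T * A * S))  # diag(1 * 25, 4 * 49) = diag(25, 196)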
Note that the matrix congruence relation looks a bit like the eigendecomposition of a matrix:
$$B = S^T A S \quad \text{vs} \quad M = P D P^{-1}$$
but note that the diagonal matrix reached by a congruence does not have to contain the eigenvalues of $A$, unlike the eigendecomposition of a matrix. This is because here $S$ is not fixed to having eigenvectors in its columns.
But because the matrix $A$ is symmetric, we could always choose $S$ to actually diagonalize it with its eigenvalues, as mentioned at eigendecomposition of a real symmetric matrix. Therefore, the metric signature can be read directly from the eigenvalues.
Also, because $A$ is a symmetric matrix, the result of the congruence is itself symmetric, since:
$$(S^T A S)^T = S^T A^T S = S^T A S$$
What the congruence relation represents, then, is a general change of basis that maintains the matrix a symmetric matrix.