Atomic orbital
In the case of the Schrödinger equation solution for the hydrogen atom, each orbital is one eigenfunction of the Hamiltonian.
Remember from the time-independent Schrödinger equation that the final solution is just the weighted sum of the eigenvector decomposition of the initial state, analogously to solving partial differential equations with the Fourier series.
This is the table that you should have in mind to visualize them: en.wikipedia.org/w/index.php?title=Atomic_orbital&oldid=1022865014#Orbitals_table
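As a concrete sketch, SymPy's sympy.physics.hydrogen module can produce these eigenfunctions symbolically; here we take the (n, l, m) = (2, 1, 1) orbital (an arbitrary choice for illustration) and check that it is normalized:
from sympy import conjugate, integrate, oo, pi, sin, Symbol
from sympy.physics.hydrogen import Psi_nlm
r = Symbol("r", positive=True)
phi = Symbol("phi", real=True)
theta = Symbol("theta", real=True)
# One eigenfunction of the hydrogen Hamiltonian: the (2, 1, 1) orbital.
psi = Psi_nlm(2, 1, 1, r, phi, theta)
# Normalization over all space, with volume element r**2 * sin(theta).
print(integrate(conjugate(psi) * psi * r**2 * sin(theta),
                (r, 0, oo), (phi, 0, 2 * pi), (theta, 0, pi)))  # 1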
Continuous spectrum (functional analysis)
Unlike the simple case of a matrix, in infinite-dimensional vector spaces the spectrum may be continuous.
The quintessential example of that is the spectrum of the position operator in quantum mechanics, in which any real number is a possible eigenvalue, since the particle may be found in any position. The associated eigenvectors are the corresponding Dirac delta functions.
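A minimal SymPy sketch of this eigenvalue property, in the distributional sense (the projection integral below is the assumption here, since the delta only makes sense under an integral):
from sympy import DiracDelta, integrate, oo, symbols
x, a = symbols('x a', real=True)
# Applying the position operator x to delta(x - a) under an integral
# returns the eigenvalue a: integral of x * delta(x - a) dx == a.
print(integrate(x * DiracDelta(x - a), (x, -oo, oo)))  # a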
Distribution (mathematics)
Distributions generalize functions, adding some useful objects which people wanted to treat as classical functions but which are not.
This generalization, however, requires you to redefine and reprove all of calculus.
For this reason, most people are tempted to assume that all the hand-wavy intuitive arguments undergrad teachers give are true and just move on with life. And they generally are.
One notable example where distributions pop up is the eigenvectors of the position operator in quantum mechanics, which are given by Dirac delta functions, which are most commonly rigorously defined in terms of distributions.
Distributions are also defined in a way that allows you to do calculus on them. Notably, you can define a derivative, and the derivative of the Heaviside step function is the Dirac delta function.
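For example, SymPy implements exactly this distributional derivative:
from sympy import DiracDelta, Heaviside, diff, symbols
x = symbols('x')
# The distributional derivative of the step function is the delta.
print(diff(Heaviside(x), x))  # DiracDelta(x)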
Eigendecomposition of a matrix
Every diagonalizable matrix $M$ can be written as:
$$M = Q D Q^{-1}$$
where:
- $D$ is the diagonal matrix of the eigenvalues of $M$
- $Q$ is the square matrix whose columns are the corresponding eigenvectors of $M$
Note therefore that this decomposition is not fully unique: we can swap the order of the eigenvectors (together with the corresponding eigenvalues in $D$), and rescale each eigenvector. We could fix a canonical order by sorting the eigenvalues from smallest to largest in the case of real eigenvalues.
Intuitively, note that this is just the change of basis formula: $Q^{-1}$ moves a vector into the eigenvector basis, $D$ then just scales each coordinate by the corresponding eigenvalue, and $Q$ moves the result back to the original basis.
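A minimal SymPy sketch of the decomposition (the example matrix is an arbitrary pick):
from sympy import Matrix
M = Matrix([[4, 1], [2, 3]])
# Q has the eigenvectors in its columns, D the eigenvalues on the diagonal.
Q, D = M.diagonalize()
assert M == Q * D * Q**-1
print(D)  # diag(2, 5), up to ordering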
Schrödinger picture
To better understand the discussion below, the best thing to do is to read it in parallel with the simplest possible example: Schrödinger picture example: quantum harmonic oscillator.
The state of a quantum system is a unit vector in a Hilbert space.
"Making a measurement" for an observable means applying a self-adjoint operator to the state, and after a measurement is done:
Those last two rules are also known as the Born rule.
Self-adjoint operators are chosen because they have the following key properties:
- their eigenvalues are real, as required for the result of a physical measurement
- their eigenvectors form an orthonormal basis of the Hilbert space, so every state can be decomposed into them
Perhaps the easiest case to understand this for is that of spin, which has only a finite number of eigenvalues. Although it is a shame that fully understanding that requires a relativistic quantum theory such as the Dirac equation.
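For a taste of that finite-dimensional case, here is a sketch of the Born rule for a two-level observable; the Pauli-Z-style matrix and the state are both arbitrary choices of this example, not anything from the text above:
from sympy import Abs, Matrix, Rational, sqrt
# A two-level observable (Pauli Z) and an arbitrary normalized state.
sigma_z = Matrix([[1, 0], [0, -1]])
psi = Matrix([sqrt(Rational(1, 3)), sqrt(Rational(2, 3))])
for eigenvalue, multiplicity, vectors in sigma_z.eigenvects():
    v = vectors[0].normalized()
    # Born rule: probability is the squared modulus of the projection.
    # (For complex amplitudes we would conjugate one side of the product.)
    print(eigenvalue, Abs(v.dot(psi))**2)  # -1 -> 2/3, 1 -> 1/3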
The next steps are to look at simple 1D bound states such as particle in a box and quantum harmonic oscillator.
The solution to the Schrödinger equation for a free one-dimensional particle is a bit harder since the possible energies do not make up a countable set.
This formulation was apparently more precisely called the Dirac-von Neumann axioms, but it became so dominant that we just call it "the" formulation.
Quantum Field Theory lecture notes by David Tong (2007) mention that:
if you were to write the wavefunction in quantum field theory, it would be a functional, that is, a function of every possible configuration of the field.
Solving the Schrödinger equation with the time-independent Schrödinger equation
Once that example is clear, we see that the exact same separation of variables can be done to the Schrödinger equation. If we name the constant of the separation of variables $E$ for energy, we get:
- the time-independent Schrödinger equation for the spatial part:
$$\left[-\frac{\hbar^2}{2m}\nabla^2 + V(x)\right]\psi(x) = E\psi(x)$$
- and a trivial equation for the time part:
$$\phi(t) = e^{-iEt/\hbar}$$
Because the time part of the equation is always the same and always trivial to solve, all we have to do to actually solve the Schrödinger equation is to solve the time-independent one, and then we can construct the full solution trivially.
Once we've solved the time-independent part for each possible $E$, we can construct a solution exactly as we did in the heat equation solution with Fourier series: we make a weighted sum over all possible $E$ to match the initial condition, which is analogous to the Fourier series in the case of the heat equation, to reach the final full solution:
$$\Psi(x, t) = \sum_E c_E \, \psi_E(x) \, e^{-iEt/\hbar}$$
The fact that this decomposition of the initial condition is always possible is mathematically proven by some version of the spectral theorem, based on the fact that the Hamiltonian of the Schrödinger equation has to be Hermitian and therefore behaves nicely.
It is interesting to note that solving the time-independent Schrödinger equation can also be seen exactly as an eigenvalue equation where:
- the Hamiltonian operator plays the role of the matrix
- the spatial wave function $\psi(x)$ plays the role of an eigenvector
- the energy $E$ plays the role of an eigenvalue
The only difference from usual matrix eigenvectors is that we are now dealing with an infinite-dimensional vector space.
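To make this infinite-dimensional eigenvalue picture concrete, here is a numerical sketch that truncates it back to a finite matrix: a finite-difference discretization of the particle-in-a-box Hamiltonian (with hbar = m = L = 1, all choices of this example), whose lowest eigenvalues approach the analytic energies $n^2\pi^2/2$:
import numpy as np
# Finite-difference Hamiltonian H = -(1/2) d^2/dx^2 on a box of length 1,
# with hbar = m = 1, sampled on N interior grid points.
N = 1000
dx = 1.0 / (N + 1)
H = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / (2 * dx**2)
energies = np.linalg.eigvalsh(H)
n = np.arange(1, 4)
print(energies[:3])         # numerical, close to:
print(n**2 * np.pi**2 / 2)  # analytic: 4.93, 19.74, 44.41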
Sylvester's law of inertia
The theorem states that the number of $0$s, $1$s and $-1$s in the metric signature is the same for any two symmetric matrices that are congruent matrices.
For example, consider:
$$A = \begin{bmatrix} 2 & \sqrt{2} \\ \sqrt{2} & 3 \end{bmatrix}$$
The eigenvalues of $A$ are $1$ and $4$, and the associated eigenvectors are:
$$v_1 = \begin{bmatrix} -\sqrt{2} \\ 1 \end{bmatrix}, \quad v_4 = \begin{bmatrix} 1 \\ \sqrt{2} \end{bmatrix}$$
SymPy code:
from sympy import Matrix, sqrt
A = Matrix([[2, sqrt(2)], [sqrt(2), 3]])
A.eigenvects()
and from the eigendecomposition of a real symmetric matrix we know that:
$$A = O D O^T$$
where $O$ is the orthogonal matrix whose columns are the normalized eigenvectors of $A$, and $D = \mathrm{diag}(1, 4)$.
Now, instead of $O$, we could use $OE$, where $E$ is an arbitrary diagonal matrix of type:
$$E = \begin{bmatrix} e_1 & 0 \\ 0 & e_2 \end{bmatrix}$$
With this, the congruence $(OE)^T A (OE)$ would reach a new matrix $D'$:
$$D' = E^T D E = \begin{bmatrix} e_1^2 & 0 \\ 0 & 4 e_2^2 \end{bmatrix}$$
Therefore, with this congruence, we are able to multiply the eigenvalues of $A$ by any positive numbers $e_1^2$ and $e_2^2$. Since we are multiplying by two arbitrary positive numbers, we cannot change the signs of the original eigenvalues, and so the metric signature is maintained; any positive rescaling of the eigenvalues, however, can be reached.
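A SymPy check of this rescaling, reusing the example matrix from above; here diagonalize(normalize=True) returns an orthogonal $O$ because $A$ is symmetric with distinct eigenvalues:
from sympy import Matrix, simplify, sqrt, symbols
e1, e2 = symbols('e1 e2', positive=True)
A = Matrix([[2, sqrt(2)], [sqrt(2), 3]])
O, D = A.diagonalize(normalize=True)  # O orthogonal, D = diag(1, 4)
E = Matrix([[e1, 0], [0, e2]])
S = O * E
# The congruence rescales each eigenvalue by a positive factor.
print(simplify(S.T * A * S))  # diag(e1**2, 4*e2**2), up to ordering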
Note that the matrix congruence relation looks a bit like the eigendecomposition of a matrix:
$$D' = S^T A S$$
but note that $D'$ does not have to contain eigenvalues, unlike the eigendecomposition of a matrix. This is because here $S$ is not fixed to having eigenvectors in its columns.
But because the matrix $A$ is symmetric, we can always choose $S$ to actually diagonalize it, as mentioned at eigendecomposition of a real symmetric matrix. Therefore, the metric signature can be seen directly from the eigenvalues.
Also, because $D'$ is a diagonal matrix, and thus symmetric, it must be that:
$$D'^T = S^T A^T S = S^T A S = D'$$
which is consistent precisely because $A$ itself is symmetric.
What the congruence $S^T A S$ does represent, then, is a general change of basis that maintains the matrix a symmetric matrix.
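Finally, a sketch that checks signature preservation for an arbitrary invertible $S$; the specific $S$ below is just a hand-picked example:
from sympy import Matrix, sqrt
A = Matrix([[2, sqrt(2)], [sqrt(2), 3]])
S = Matrix([[1, 2], [0, 3]])  # arbitrary invertible, not orthogonal
B = S.T * A * S
# The congruence keeps the matrix symmetric and keeps the signature:
# both A and B have two positive eigenvalues (positive definite).
assert B.is_symmetric()
assert A.is_positive_definite and B.is_positive_definite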