Atomic orbital
In the case of the Schrödinger equation solution for the hydrogen atom, each orbital is one eigenfunction (eigenvector) of the Hamiltonian.
Remember from the time-independent Schrödinger equation that the final solution is just the weighted sum of the eigenvector decomposition of the initial state, analogously to solving partial differential equations with the Fourier series.
This is the table that you should have in mind to visualize them: en.wikipedia.org/w/index.php?title=Atomic_orbital&oldid=1022865014#Orbitals_table
Continuous spectrum (functional analysis)
Unlike the simple case of a matrix, in infinite dimensional vector spaces, the spectrum may be continuous.
The quintessential example of that is the spectrum of the position operator in quantum mechanics, in which any real number is a possible eigenvalue, since the particle may be found in any position. The associated eigenvectors are the corresponding Dirac delta functions.
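To see why, note how the position operator $\hat{x}$ acts on a delta function centered at an arbitrary point $x_0$ (a standard non-rigorous sketch):
$$\hat{x}\,\delta(x - x_0) = x\,\delta(x - x_0) = x_0\,\delta(x - x_0)$$
so every real number $x_0$ behaves as an eigenvalue with "eigenvector" $\delta(x - x_0)$, even though $\delta(x - x_0)$ is not a square integrable function, which is why making this rigorous requires distributions.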
Distribution (mathematics)
Distributions generalize functions in order to add some useful objects which people wanted to treat as classical functions, but which are not, the Dirac delta function being the classic example. This generalization therefore requires you to redefine and reprove all of calculus.
For this reason, most people are tempted to assume that all the hand wavy intuitive arguments undergrad teachers give are true and just move on with life. And they generally are.
One notable example where distributions pop up is the eigenvectors of the position operator in quantum mechanics, which are given by Dirac delta functions, and the Dirac delta is most commonly rigorously defined as a distribution.
Distributions are also defined in a way that allows you to do calculus on them. Notably, you can define a derivative, and the derivative of the Heaviside step function is the Dirac delta function.
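As a sanity check, SymPy implements this distributional derivative directly (a minimal sketch; Heaviside and DiracDelta are real SymPy objects):
from sympy import DiracDelta, Heaviside, diff, integrate, oo, symbols
x = symbols('x')
# The distributional derivative of the Heaviside step function is the Dirac delta.
print(diff(Heaviside(x), x))  # DiracDelta(x)
# And the delta integrates to 1 over the whole real line.
print(integrate(DiracDelta(x), (x, -oo, oo)))  # 1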
Eigendecomposition of a matrix
Every diagonalizable matrix $M$ can be written as:
$$M = Q D Q^{-1}$$
where:
  • $Q$ is the square matrix whose columns are the eigenvectors of $M$
  • $D$ is the diagonal matrix whose diagonal entries are the corresponding eigenvalues
Note that being invertible is not enough: e.g. the shear matrix $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ is invertible but not diagonalizable.
Note therefore that this decomposition is unique up to swapping the order of the eigenvectors (and rescaling them). We could fix a canonical form by sorting the eigenvalues from smallest to largest in the case where they are all real.
Intuitively, note that this is just the change of basis formula, and so, reading the product $Q D Q^{-1}$ from right to left (a numeric sanity check follows the list):
  • $Q^{-1}$ changes basis to align to the eigenvectors
  • $D$ multiplies each eigenvector component simply by its eigenvalue
  • $Q$ changes back to the original basis
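A minimal NumPy sketch of the decomposition (the concrete matrix below is just an illustrative choice):
import numpy as np
# A diagonalizable (here, symmetric) matrix.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
# Columns of Q are the eigenvectors, the eigenvalues go on the diagonal of D.
eigenvalues, Q = np.linalg.eig(M)
D = np.diag(eigenvalues)
# Verify that M == Q D Q^{-1}.
print(np.allclose(M, Q @ D @ np.linalg.inv(Q)))  # True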
Schrödinger picture
To better understand the discussion below, the best thing to do is to read it in parallel with the simplest possible example: Schrödinger picture example: quantum harmonic oscillator.
The state of a quantum system is a unit vector in a Hilbert space.
"Making a measurement" for an observable means applying a self-adjoint operator to the state, and after a measurement is done:
  • the state collapses to an eigenvector of the self-adjoint operator
  • the result of the measurement is the eigenvalue associated with that eigenvector
  • the probability of a given result happening when the spectrum is discrete is proportional to the squared modulus of the projection of the state on that eigenvector.
    For continuous spectra such as that of the position operator in most systems, e.g. the Schrödinger equation for a free one dimensional particle, the projection on each individual eigenvector is zero, i.e. the probability of one absolutely exact position is zero. To get a non-zero result, the measurement has to be done over a continuous range of eigenvectors (e.g. for position: "is the particle present between x=0 and x=1?"), and you have to integrate the squared modulus of the projection over that continuous range of eigenvalues.
    In such continuous cases, the wave function collapses after the measurement to its restriction to the measured range, renormalized to unit norm.
    The continuous position operator case is well illustrated at: Video "Visualization of Quantum Physics (Quantum Mechanics) by udiprod (2017)"
Those last two rules are also known as the Born rule.
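For a discrete example, here is a minimal NumPy sketch of the Born rule for a two-state spin-like system (the particular state vector and observable are just illustrative choices):
import numpy as np
# Observable: the Pauli Z matrix, a self-adjoint operator with eigenvalues +1 and -1.
Z = np.array([[1, 0], [0, -1]], dtype=complex)
# State: a unit vector in the Hilbert space C^2.
psi = np.array([3, 4j], dtype=complex)
psi = psi / np.linalg.norm(psi)
# Diagonalize the observable: real eigenvalues, orthonormal eigenvectors.
eigenvalues, eigenvectors = np.linalg.eigh(Z)
for value, vector in zip(eigenvalues, eigenvectors.T):
    # Born rule: the probability is the squared modulus of the projection.
    probability = abs(np.vdot(vector, psi)) ** 2
    print(value, probability)  # -1 -> 0.64, then +1 -> 0.36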
Self-adjoint operators are chosen because they have the following key properties:
  • their eigenvalues are real, matching the fact that measurement outcomes are real numbers
  • their eigenvectors form an orthonormal basis, i.e. they are diagonalizable
Perhaps the easiest case to understand this for is that of spin, which has only a finite number of eigenvalues, although it is a shame that fully understanding it requires a relativistic quantum theory such as the Dirac equation.
The next steps are to look at simple 1D bound states such as particle in a box and quantum harmonic oscillator.
The solution to the Schrödinger equation for a free one dimensional particle is a bit harder since the possible energies do not make up a countable set.
This formulation was apparently called more precisely the Dirac-von Neumann axioms, but it became so dominant that we just call it "the" formulation.
Quantum Field Theory lecture notes by David Tong (2007) mentions that:
if you were to write the wavefunction in quantum field theory, it would be a functional, that is, a function of every possible configuration of the field.
Solving the Schrödinger equation with the time-independent Schrödinger equation
Before reading any further, you must understand heat equation solution with Fourier series, which uses separation of variables.
Once that example is clear, we see that the exact same separation of variables can be done to the Schrödinger equation. If we name the constant of the separation of variables $E$ for energy, we get (the separation itself is sketched right after this list):
  • a time-only part that does not depend on space and does not depend on the Hamiltonian at all. The solution for this part is therefore always the same complex exponential for any problem, and this part is therefore "boring":
    $$\phi(t) = e^{-iEt/\hbar}$$
  • a space-only part that does not depend on time, but does depend on the Hamiltonian:
    $$\hat{H}\psi(x) = E\psi(x)$$
    Since this is the only non-trivial part, unlike the time part which is trivial, this spatial part is just called "the time-independent Schrödinger equation".
    Note that the $\psi$ here is not the same as the $\Psi$ in the time-dependent Schrödinger equation of course, as that $\Psi$ is the result of the multiplication of the time and space parts. This is a bit of imprecise terminology, but hey, physics.
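For reference, here is the standard separation of variables calculation that produces those two parts (keeping $\hbar$ explicit):
$$i\hbar\frac{\partial}{\partial t}\Psi(x,t) = \hat{H}\Psi(x,t), \qquad \Psi(x,t) = \psi(x)\phi(t)$$
$$\Rightarrow\; i\hbar\,\psi(x)\,\phi'(t) = \phi(t)\,\hat{H}\psi(x) \;\Rightarrow\; \frac{i\hbar\,\phi'(t)}{\phi(t)} = \frac{\hat{H}\psi(x)}{\psi(x)} = E$$
The left-hand side depends only on $t$ and the middle only on $x$, so both must equal the same constant $E$, which gives the two equations above.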
Because the time part of the equation is always the same and always trivial to solve, all we have to do to actually solve the Schrödinger equation is to solve the time-independent one, and then we can construct the full solution trivially.
Once we've solved the time-independent part for each possible $E$, we can construct a solution exactly as we did in heat equation solution with Fourier series: we make a weighted sum over all possible $E$ to match the initial condition, which is analogous to the Fourier series in the case of the heat equation, to reach a final full solution:
  • if there are only discretely many possible values of $E$, one for each possible energy $E_i$, we proceed as follows:
    Equation 3. Solution of the Schrödinger equation in terms of the time-independent and time-dependent parts:
    $$\Psi(x,t) = \sum_i c_i\, \psi_{E_i}(x)\, e^{-iE_i t/\hbar}$$
    and this is a solution by selecting the coefficients $c_i$ such that at time $t = 0$ we match the initial condition:
    $$\Psi(x,0) = \sum_i c_i\, \psi_{E_i}(x)$$
    A discrete spectrum shows up in many incredibly important cases, e.g. the particle in a box and the quantum harmonic oscillator.
  • if the possible values of $E$ form a continuum instead, we do something analogous, but with an integral instead of a sum. This is called the continuous spectrum. One notable example is the Schrödinger equation for a free one dimensional particle.
The fact that this decomposition of the initial condition into the $\psi_{E_i}$ is always possible is mathematically proven by some version of the spectral theorem, based on the fact that the Hamiltonian of the Schrödinger equation has to be Hermitian and therefore behaves nicely.
It is interesting to note that solving the time-independent Schrödinger equation can also be seen exactly as an eigenvalue equation, where:
  • the Hamiltonian $\hat{H}$ plays the role of the matrix
  • the energy $E$ is the eigenvalue
  • the wave function $\psi$ is the eigenvector
The only difference from the usual matrix eigenvector problem is that we are now dealing with an infinite dimensional vector space.
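In fact, if you discretize space on a grid, the time-independent Schrödinger equation literally becomes a finite matrix eigenvalue problem. Here is a minimal NumPy sketch for the quantum harmonic oscillator (units chosen so that $\hbar = m = \omega = 1$; the grid size and box length are arbitrary illustrative choices):
import numpy as np
# Discretize space on N points of the interval [-L, L].
N, L = 1000, 10.0
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
# Kinetic term -1/2 d^2/dx^2 via the standard second order finite difference stencil.
H = (np.diag(np.full(N, 1.0 / dx**2))
     + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(N - 1, -0.5 / dx**2), -1))
# Potential term V(x) = x^2/2 of the quantum harmonic oscillator on the diagonal.
H = H + np.diag(x**2 / 2)
# The eigenvalues are the energies E, the eigenvector columns are the wave functions psi.
E, psi = np.linalg.eigh(H)
print(E[:4])  # approximately [0.5, 1.5, 2.5, 3.5]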
Sylvester's law of inertia
The theorem states that the number of 0, 1 and -1 entries in the metric signature is the same for any two symmetric matrices that are congruent.
For example, consider:
$$A = \begin{pmatrix} 2 & \sqrt{2} \\ \sqrt{2} & 3 \end{pmatrix}$$
The eigenvalues of $A$ are $\lambda_1 = 1$ and $\lambda_2 = 4$, and the associated eigenvectors are:
$$v_1 = (-\sqrt{2}, 1)^T, \qquad v_2 = (1, \sqrt{2})^T$$
SymPy code:
from sympy import Matrix, sqrt
A = Matrix([[2, sqrt(2)], [sqrt(2), 3]])
print(A.eigenvects())
and from the eigendecomposition of a real symmetric matrix we know that we can take the eigenvectors orthonormal, so that:
$$A = O D O^T, \qquad D = \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix}$$
where the columns of the orthogonal matrix $O$ are the normalized eigenvectors.
Now, instead of $O$, we could use $O E$, where $E$ is an arbitrary invertible diagonal matrix of type:
$$E = \begin{pmatrix} e_1 & 0 \\ 0 & e_2 \end{pmatrix}, \qquad e_1, e_2 \neq 0$$
With this, the congruence $(O E)^T A (O E)$ would reach a new diagonal matrix $D'$:
$$D' = (O E)^T A (O E) = E^T (O^T A O) E = E D E = \begin{pmatrix} e_1^2 \lambda_1 & 0 \\ 0 & e_2^2 \lambda_2 \end{pmatrix}$$
Therefore, with this congruence, we are able to multiply the eigenvalues of $A$ by the arbitrary positive numbers $e_1^2$ and $e_2^2$. Since we are multiplying by two arbitrary positive numbers, we cannot change the signs of the original eigenvalues, and so the metric signature is maintained; but within that restriction, any positive rescaling can be reached.
Note that the matrix congruence relation looks a bit like the eigendecomposition of a matrix:
$$D = S^T M S$$
but note that $D$ does not have to contain eigenvalues, unlike the eigendecomposition of a matrix. This is because here $S$ is not fixed to having eigenvectors in its columns.
Because the matrix $M$ is symmetric, however, we can always choose $S$ to actually diagonalize it, as mentioned at eigendecomposition of a real symmetric matrix. Therefore, the metric signature can be seen directly from the eigenvalues of $M$.
Also, because $D$ is a diagonal matrix, and thus symmetric, it must be that $M$ is symmetric too, since:
$$D = D^T = (S^T M S)^T = S^T M^T S \;\Rightarrow\; M^T = M$$
using that $S$ is invertible.
What this represents, then, is a general change of basis that keeps the matrix symmetric.
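A minimal SymPy sketch of the argument above, continuing the example (the names O and E just mirror the text):
from sympy import Matrix, diag, simplify, sqrt
A = Matrix([[2, sqrt(2)], [sqrt(2), 3]])
# Orthonormal eigenvectors of A as the columns of O, so that O^T A O = diag(1, 4).
O = Matrix([[-sqrt(2), 1], [1, sqrt(2)]]) / sqrt(3)
print(simplify(O.T * A * O))  # Matrix([[1, 0], [0, 4]])
# Congruence with O*E for an invertible diagonal E rescales each eigenvalue
# by e_i^2 > 0, so the signs, i.e. the signature, cannot change.
E = diag(3, 5)
print(simplify((O * E).T * A * (O * E)))  # Matrix([[9, 0], [0, 100]])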