Bessel function Updated +Created
Shows up when trying to solve the 2D wave equation on a circular domain in polar coordinates with separation of variables, where we have to decompose the initial condition in terms of a Fourier-Bessel series, exactly like the Fourier series appears when solving the wave equation in linear (Cartesian) coordinates.
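As a sketch of how the Bessel equation appears (the notation here is a standard choice, not from the original text): write the wave equation $u_{tt} = c^2 \nabla^2 u$ in polar coordinates, separate $u(r, \theta, t) = R(r)\Theta(\theta)T(t)$ with separation constants $-k^2$ for time and $-n^2$ for the angle, and the radial part becomes
$$
r^2 R''(r) + r R'(r) + \left(k^2 r^2 - n^2\right) R(r) = 0
$$
which after the substitution $x = kr$ is exactly Bessel's equation of order $n$, so the radial profiles are Bessel functions $J_n(kr)$.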
For the same fundamental reasons, also appears when calculating the Schrödinger equation solution for the hydrogen atom.
Carleson's theorem Updated +Created
The Fourier series of an $L^2$ function (i.e. the function generated from the infinite sum of weighted sines) converges to the function pointwise almost everywhere.
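Stated a bit more precisely (a sketch, using the standard $2\pi$-periodic convention): for $f \in L^2([0, 2\pi])$ with Fourier coefficients $\hat{f}(n) = \frac{1}{2\pi}\int_0^{2\pi} f(x)\, e^{-inx} \, dx$,
$$
\lim_{N \to \infty} \sum_{n=-N}^{N} \hat{f}(n)\, e^{inx} = f(x) \quad \text{for almost every } x
$$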
The theorem also seems to hold (maybe trivially given the series result) for the Fourier transform (TODO if trivially, why trivially).
Only proved in 1966, and known to be a hard result without any known simple proof.
This theorem of course implies that the Fourier basis is complete for $L^2$, as it explicitly constructs a decomposition into the Fourier basis for every single function.
TODO vs Riesz-Fischer theorem. Is this just a stronger pointwise result, while Riesz-Fischer is about norms only?
Fourier series Updated +Created
Approximates an original function by a sum of sines. If the function is "well behaved enough", the approximation can be made to arbitrary precision.
Fourier's original motivation, and a key application, is solving partial differential equations with the Fourier series.
The Fourier series behaves really nicely in $L^2$, where it always exists and converges pointwise almost everywhere to the function: Carleson's theorem.
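For reference, one common convention for the series (normalizations vary): for a function $f$ of period $2L$,
$$
f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos\frac{n\pi x}{L} + b_n \sin\frac{n\pi x}{L} \right),
\qquad
a_n = \frac{1}{L} \int_{-L}^{L} f(x) \cos\frac{n\pi x}{L} \, dx,
\quad
b_n = \frac{1}{L} \int_{-L}^{L} f(x) \sin\frac{n\pi x}{L} \, dx
$$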
Video 1.
But what is a Fourier series? by 3Blue1Brown (2019)
Source. Amazing 2D visualization of the decomposition of complex functions.
Fourier transform Updated +Created
Continuous version of the Fourier series.
Can be used to represent functions that are not periodic: math.stackexchange.com/questions/221137/what-is-the-difference-between-fourier-series-and-fourier-transformation while the Fourier series is only for periodic functions.
Of course, every function defined on a finite line segment (i.e. a compact space) can be made periodic by just repeating it over and over along the real line, and can therefore also be represented by a Fourier series.
Therefore, the Fourier transform can be seen as a generalization of the Fourier series that can also decompose functions defined on the entire real line.
As a more concrete example, just like the Fourier series is how you solve the heat equation on a line segment with Dirichlet boundary conditions as shown at: Section "Solving partial differential equations with the Fourier series", the Fourier transform is what you need to solve the problem when the domain is the entire real line.
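For concreteness, one common convention for the transform pair (signs and normalizations vary between references):
$$
\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi} \, dx,
\qquad
f(x) = \int_{-\infty}^{\infty} \hat{f}(\xi)\, e^{2\pi i x \xi} \, d\xi
$$
so instead of a discrete set of coefficients indexed by an integer $n$, we now get a coefficient function $\hat{f}(\xi)$ over a continuous frequency $\xi$.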
Lebesgue integral vs Riemann integral Updated +Created
Advantages over Riemann:
Video 1.
Riemann integral vs. Lebesgue integral by The Bright Side Of Mathematics (2018)
Source.
youtube.com/watch?v=PGPZ0P1PJfw&t=808 shows how Lebesgue can be visualized as a partition of the function range instead of domain, and then you just have to be able to measure the size of pre-images.
One advantage of that is that the range is always one dimensional.
But the main advantage is that having infinitely many discontinuities does not matter.
Infinitely many discontinuities can make the Riemann partitioning diverge.
But in Lebesgue, you are instead measuring the size of the preimage, and to fit infinitely many discontinuities into a finite domain, the size of this preimage is going to be zero.
So then the question becomes more of "how to define the measure of a subset of the domain".
Which is why we then fall into measure theory!
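A classic example that makes this concrete is the indicator function of the rationals on $[0, 1]$ (the Dirichlet function): it is discontinuous at every point, so the Riemann sums never converge, but under Lebesgue's view the preimage of $1$ is $\mathbb{Q} \cap [0, 1]$, which has measure zero, so:
$$
\int_{[0,1]} \mathbf{1}_{\mathbb{Q}}(x) \, d\mu
= 1 \cdot \mu\left(\mathbb{Q} \cap [0,1]\right) + 0 \cdot \mu\left([0,1] \setminus \mathbb{Q}\right)
= 0
$$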
Riesz-Fischer theorem Updated +Created
A measurable function $f$ defined on a closed interval is square integrable (and therefore in $L^2$) if and only if its Fourier series converges in $L^2$ norm to the function:
$$
\lim_{N \to \infty} \left\lVert f - S_N f \right\rVert_2 = 0
$$
where $S_N f$ denotes the $N$-th partial sum of the Fourier series of $f$.
Solving partial differential equations with the Fourier series Updated +Created
Certain equations, like the heat equation and the wave equation, when attacked with separation of variables, are solved immediately by calculating the Fourier series of the initial condition!
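As a minimal sketch of the heat equation case (assuming the standard setup $u_t = k u_{xx}$ on $[0, L]$ with Dirichlet boundary conditions $u(0, t) = u(L, t) = 0$): separation of variables produces sine modes, each decaying with its own exponential, and the Fourier sine series of the initial condition fixes the weights:
$$
u(x, t) = \sum_{n=1}^{\infty} b_n \sin\frac{n\pi x}{L}\, e^{-k \left(\frac{n\pi}{L}\right)^2 t},
\qquad
b_n = \frac{2}{L} \int_0^L u(x, 0) \sin\frac{n\pi x}{L} \, dx
$$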
Other bases besides the Fourier series show up for other equations, e.g. the Fourier-Bessel series for the 2D wave equation on a circular domain in polar coordinates (see Bessel function).
Solving the Schrodinger equation with the time-independent Schrödinger equation Updated +Created
Before reading any further, you must understand heat equation solution with Fourier series, which uses separation of variables.
Once that example is clear, we see that the exact same separation of variables can be done to the Schrödinger equation. If we name the constant of the separation of variables $E$ for energy, we get (a sketch of the derivation is given right after this list):
  • a time-only part that does not depend on space and does not depend on the Hamiltonian at all. The solution for this part is therefore always the same exponentials for any problem, and this part is therefore "boring": $T(t) = e^{-iEt/\hbar}$
  • a space-only part that does not depend on time, but does depend on the Hamiltonian: $\hat{H} \psi(x) = E \psi(x)$
    Since this is the only non-trivial part, unlike the time part which is trivial, this spatial part is just called "the time-independent Schrödinger equation".
    Note that the $\psi$ here is not the same as the $\psi$ in the time-dependent Schrödinger equation of course, as that $\psi$ is the result of the multiplication of the time and space parts. This is a bit of imprecise terminology, but hey, physics.
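Sketching that separation explicitly (assuming a time-independent Hamiltonian, and writing $\Psi$ for the full wave function to avoid the notation clash just mentioned): plug the ansatz $\Psi(x, t) = \psi(x) T(t)$ into the time-dependent equation $i\hbar \partial_t \Psi = \hat{H} \Psi$ and divide by $\psi T$:
$$
i\hbar \frac{T'(t)}{T(t)} = \frac{\hat{H} \psi(x)}{\psi(x)} = E
\quad\Rightarrow\quad
T(t) = e^{-iEt/\hbar}, \qquad \hat{H} \psi = E \psi
$$
The left side depends only on $t$ and the middle only on $x$, so both must equal the same constant $E$, which gives the two parts above.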
Because the time part of the equation is always the same and always trivial to solve, all we have to do to actually solve the Schrödinger equation is to solve the time-independent one, and then we can construct the full solution trivially.
Once we've solved the time-independent part for each possible $E$, we can construct a solution exactly as we did in heat equation solution with Fourier series: we make a weighted sum over all possible $E$ to match the initial condition, which is analogous to the Fourier series in the case of the heat equation, to reach the final full solution:
  • if there are only discretely many possible values of $E$, each possible energy $E_i$, we proceed by taking a weighted sum of the corresponding solutions:
    $$
    \Psi(x, t) = \sum_i c_i \, \psi_{E_i}(x) \, e^{-i E_i t / \hbar}
    $$
    Equation 3.
    Solution of the Schrödinger equation in terms of the time-independent and time-dependent parts.
    and this is a solution by selecting the coefficients $c_i$ such that at time $t = 0$ we match the initial condition:
    $$
    \Psi(x, 0) = \sum_i c_i \, \psi_{E_i}(x)
    $$
    A discrete spectrum shows up in many incredibly important cases, e.g. the quantum harmonic oscillator and the bound states of the hydrogen atom.
  • if the possible values of $E$ instead form a continuum, we do something analogous but with an integral instead of a sum. This is called the continuous spectrum. One notable example is the free particle.
The fact that this decomposition of the initial condition into the $\psi_{E_i}$ is always possible is mathematically proven by some version of the spectral theorem, based on the fact that the Hamiltonian of the Schrödinger equation has to be Hermitian and therefore behaves nicely.
It is interesting to note that solving the time-independent Schrödinger equation can also be seen exactly as an eigenvalue equation where:
  • the matrix (or rather, the linear operator) is the Hamiltonian $\hat{H}$
  • the eigenvalues are the possible values of the energy $E$
  • the eigenvectors are the spatial wave functions $\psi$
The only difference from usual matrix eigenvectors is that we are now dealing with an infinite dimensional vector space.
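A minimal numerical sketch of that eigenvalue viewpoint (in Python with NumPy; all parameter choices here are illustrative, not from the text): truncate the infinite dimensional space to a finite grid, build the Hamiltonian as a Hermitian matrix using finite differences, and hand it to a standard dense eigenvalue solver. With natural units $\hbar = m = \omega = 1$ and a harmonic oscillator potential, the lowest computed energies should approach $0.5, 1.5, 2.5, \ldots$:

import numpy as np

# Discretize the 1D time-independent Schrodinger equation H psi = E psi on a
# grid, turning the infinite dimensional problem into a matrix eigenvalue one.
# Natural units: hbar = m = omega = 1.
N = 1000                      # number of grid points (the truncation)
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]

# Kinetic energy -(1/2) d^2/dx^2 via the central finite difference stencil.
kinetic = (-0.5 / dx**2) * (
    np.diag(np.ones(N - 1), -1)
    - 2.0 * np.diag(np.ones(N))
    + np.diag(np.ones(N - 1), 1)
)

# Potential energy V(x) = x^2 / 2 (quantum harmonic oscillator, illustrative).
potential = np.diag(0.5 * x**2)

H = kinetic + potential       # the Hamiltonian as a large Hermitian matrix

# Eigenvalues = possible energies E, eigenvectors = the wave functions psi.
energies, wavefunctions = np.linalg.eigh(H)
print(energies[:4])           # approximately [0.5, 1.5, 2.5, 3.5]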
Furthermore: