Approximates an original function by a sum of sines and cosines. If the function is "well behaved enough", the approximation holds to arbitrary precision.

Fourier's original motivation, and a key application, is solving partial differential equations with the Fourier series.

Can only be used to approximate periodic functions (obviously, from its definition!). The Fourier transform however overcomes that restriction:

The Fourier series behaves really nicely in $L_{2}$, where it always exists and converges to the function almost everywhere: Carleson's theorem.

See: math.stackexchange.com/questions/579453/real-world-application-of-fourier-series/3729366#3729366 for a heat equation solution with the Fourier series.

Separation of variables of certain equations like the heat equation and wave equation are solved immediately by calculating the Fourier series of initial conditions!
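The recipe above can be sketched for the heat equation $u_t = \alpha u_{xx}$ on $[0, L]$ with zero Dirichlet boundary conditions (a standard setup, assumed here for concreteness):

```latex
% Expand the initial condition in the sine basis (its Fourier series):
u(x, 0) = \sum_{n=1}^{\infty} b_n \sin\left(\frac{n \pi x}{L}\right)
% Separation of variables then makes each mode decay independently in time:
u(x, t) = \sum_{n=1}^{\infty} b_n \, e^{-\alpha \left(\frac{n \pi}{L}\right)^2 t} \sin\left(\frac{n \pi x}{L}\right)
```

So once the coefficients $b_n$ of the initial condition are known, the full solution is immediate.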

Other bases besides the Fourier one show up for other equations, e.g.:

Output: another sequence of $N$ complex numbers $X_{k}$ such that:

$x_{n} = \frac{1}{N} \sum_{k=0}^{N-1} X_{k} e^{i 2 \pi \frac{k n}{N}}$

Intuitively, this means that we are breaking up the complex signal into $N$ sinusoidal frequencies, where $X_{k}$ is the amplitude of each sine.

- $X_{0}$: is kind of magic and ends up being a constant added to the signal because for $k = 0$, $e^{i 2 \pi \frac{k n}{N}} = e^{0} = 1$
- $X_{1}$: sinusoidal that completes one cycle over the signal. The larger the $N$, the larger the resolution of that sinusoidal. But it completes one cycle regardless.
- $X_{2}$: sinusoidal that completes two cycles over the signal
- ...
- $X_{N−1}$: sinusoidal that completes $N−1$ cycles over the signal
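As a sanity check of the definition, here is a minimal $O(N^2)$ implementation compared against numpy.fft.fft (assuming NumPy's default forward convention $X_{k} = \sum_{n} x_{n} e^{-i 2 \pi \frac{k n}{N}}$, with the $1/N$ only in the inverse):

```python
import numpy as np

def dft(x):
    """Naive O(N^2) DFT: X_k = sum_n x_n e^{-i 2 pi k n / N}."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape((N, 1))
    # (N, N) matrix of complex exponentials times the signal vector.
    return np.exp(-2j * np.pi * k * n / N) @ x

x = np.array([1.0, 2.0, 3.0, 4.0])
X = dft(x)
assert np.allclose(X, np.fft.fft(x))
# X_0 is the sum of the signal: the "constant added to the signal".
assert np.isclose(X[0], x.sum())
```

The inverse then divides by $N$ and flips the sign of the exponent.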

We use Zero-based numbering in our definitions because it just makes every formula simpler.

Motivation: similar to the Fourier transform. In particular, the discrete Fourier transform is used in signal processing after an analog-to-digital converter. Digital signal processing likely grew to dominate analog processing as digital processors got faster and faster, since it gives more flexibility in algorithm design.

- compression: a sine would use $N$ points in the time domain, but just one in the frequency domain, so we can throw the rest away. A sum of two sines needs only two. So if your signal has periodicity, you can in general compress it with the transform
- noise removal: many systems add noise only at certain frequencies, which are hopefully different from the main frequencies of the actual signal. By doing the transform, we can remove those frequencies to attain a better signal-to-noise ratio
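The noise removal idea can be sketched with NumPy's real-input FFT (the signal, sampling rate and 100 Hz cutoff below are all made-up illustration values):

```python
import numpy as np

# Hypothetical example: a 50 Hz sine contaminated by broadband noise.
fs = 1000                       # sampling rate in Hz (assumed)
t = np.arange(fs) / fs          # 1 second of samples
clean = np.sin(2 * np.pi * 50 * t)
rng = np.random.default_rng(0)
noisy = clean + 0.5 * rng.standard_normal(fs)

X = np.fft.rfft(noisy)
freqs = np.fft.rfftfreq(fs, d=1 / fs)
X[freqs > 100] = 0              # crude low-pass: zero everything above 100 Hz
denoised = np.fft.irfft(X)

# The filtered signal is much closer to the clean one than the noisy input.
assert np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

Real filters shape the spectrum more gently than this hard cutoff, but the round trip time domain → frequency domain → time domain is the same.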

Sample software implementations:

- numpy.fft, notably see the example: numpy/fft.py

See sections: "Example 1 - N even", "Example 2 - N odd" and "Representation in terms of sines and cosines" of www.statlect.com/matrix-algebra/discrete-Fourier-transform-of-a-real-signal

Even when the input signal is real, the transform still has complex numbers.

Summary: we only need about half of the $X_{k}$ to represent the signal, as the other half can be derived by conjugation:

- $X_{0}$ is real
- $X_{1} = \overline{X_{N-1}}$
- $X_{2} = \overline{X_{N-2}}$
- $X_{k} = \overline{X_{N-k}}$

"Representation in terms of sines and cosines" from www.statlect.com/matrix-algebra/discrete-Fourier-transform-of-a-real-signal then gives explicit formulas in terms of $X_{k}$.

NumPy for example has "Real FFTs" for this: numpy.org/doc/1.24/reference/routines.fft.html#real-ffts
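Both the conjugate symmetry and the halved storage of the real FFT are easy to check numerically (a small sketch using numpy.fft.fft and numpy.fft.rfft):

```python
import numpy as np

x = np.random.default_rng(1).standard_normal(8)  # a real signal, N = 8
X = np.fft.fft(x)
N = len(x)

# Conjugate symmetry of the DFT of a real signal: X_k = conj(X_{N-k}).
for k in range(1, N):
    assert np.isclose(X[k], np.conj(X[N - k]))
assert np.isclose(X[0].imag, 0)  # X_0 is real

# rfft stores only the non-redundant half: N//2 + 1 coefficients.
Xr = np.fft.rfft(x)
assert len(Xr) == N // 2 + 1
assert np.allclose(Xr, X[:N // 2 + 1])
```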

There are actually two possible definitions for the DFT:

- $1/N$, given as "the default" in many sources:
  $x_{n} = \frac{1}{N} \sum_{k=0}^{N-1} X_{k} e^{i 2 \pi \frac{k n}{N}}$
- $1/\sqrt{N}$, known as the "normalized DFT" by some sources: www.dsprelated.com/freebooks/mdft/Normalized_DFT.html, definition which we adopt:
  $x_{n} = \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} X_{k} e^{i 2 \pi \frac{k n}{N}}$

The $1/\sqrt{N}$ definition is nicer mathematically, as the inverse becomes more symmetric, and power is conserved between time and frequency domains.
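NumPy exposes both conventions through the `norm` parameter of numpy.fft.fft, which makes the power-conservation claim easy to verify:

```python
import numpy as np

x = np.random.default_rng(2).standard_normal(16)

# Default ("backward") convention: the 1/N appears only in the inverse.
X_default = np.fft.fft(x)
# Normalized ("ortho") convention: 1/sqrt(N) in both directions.
X_ortho = np.fft.fft(x, norm="ortho")
assert np.allclose(X_ortho, X_default / np.sqrt(len(x)))

# With the normalized DFT, power is conserved (Parseval's theorem):
assert np.isclose(np.sum(np.abs(x) ** 2), np.sum(np.abs(X_ortho) ** 2))
```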

An efficient algorithm to calculate the discrete Fourier transform.

Continuous version of the Fourier series.

Can be used to represent functions that are not periodic: math.stackexchange.com/questions/221137/what-is-the-difference-between-fourier-series-and-fourier-transformation while the Fourier series is only for periodic functions.

Of course, every function defined on a finite line segment (i.e. a compact space) can be extended to a periodic function on the entire real line, and therefore be represented by a Fourier series.

Therefore, the Fourier transform can be seen as a generalization of the Fourier series that can also decompose functions defined on the entire real line.

As a more concrete example, just like the Fourier series is how you solve the heat equation on a line segment with Dirichlet boundary conditions as shown at: Section "Solving partial differential equations with the Fourier series", the Fourier transform is what you need to solve the problem when the domain is the entire real line.
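A numerical sketch of that idea, assuming a Gaussian initial condition and using the DFT on a large periodic box as a stand-in for the continuous transform on the real line: each Fourier mode of the heat equation $u_t = \alpha u_{xx}$ decays as $e^{-\alpha k^2 t}$.

```python
import numpy as np

# Heat equation u_t = alpha * u_xx, approximated on a large periodic
# interval so the DFT stands in for the continuous Fourier transform.
alpha, t = 1.0, 0.1
L, N = 40.0, 1024
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
u0 = np.exp(-x**2)                           # initial condition (assumed)

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers
# Multiply each mode by its decay factor, then transform back.
u_t = np.fft.ifft(np.fft.fft(u0) * np.exp(-alpha * k**2 * t)).real

# Closed form: a Gaussian convolved with the heat kernel stays Gaussian.
exact = np.exp(-x**2 / (1 + 4 * alpha * t)) / np.sqrt(1 + 4 * alpha * t)
assert np.allclose(u_t, exact, atol=1e-6)
```

The periodic-box approximation works here because the Gaussian decays to essentially zero well before the boundaries.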

Lecture notes:

- www.robots.ox.ac.uk/~az/lectures/ia/lect2.pdf Lecture 2: 2D Fourier transforms and applications by A. Zisserman (2014)

A set of theorems that prove under different conditions that the Fourier transform has an inverse for a given space, for example the Schwartz space or $L^{2}$.

First published by Fourier in 1807 to solve the heat equation.
