The first quantum mechanics theories to be developed.

Their most popular formulation has been the Schrödinger equation.

Experiments explained:

- Schrödinger equation solution for the hydrogen atom: predicts the basic spectral lines, plus the Zeeman effect

- Schrödinger equation solution for the helium atom: perturbative solutions give good approximations to the energy levels
- double-slit experiment: I think we have a closed solution for the max and min probabilities on the measurement wall, and they match experiments

Experiments not explained: those that the Dirac equation explains like:

- fine structure
- spontaneous emission coefficients

To get some intuition on the consequences of the equation, have a look at:

The easiest case of the equation to understand, and the one you must have in mind initially, is that of the Schrödinger equation for a free one dimensional particle.

Then, with that in mind, the general form of the Schrödinger equation is:

$iℏ\frac{\partial \psi(x,t)}{\partial t} = \hat{H}[\psi(x,t)]$

where:

- $ℏ$ is the reduced Planck constant
- $ψ$ is the wave function
- $t$ is the time
- $\hat{H}$ is a linear operator called the Hamiltonian. It takes as input a function $ψ$, and returns another function. This plays a role analogous to the Hamiltonian in classical mechanics: determining it determines what the physical system looks like, and how the system evolves in time, because we can just plug it into the equation and solve it. It basically encodes the total energy and forces of the system.

The $x$ argument of $ψ$ could be anything, e.g.:

- we could have preferred polar coordinates instead of linear ones if the potential were symmetric around a point
- we could have more than one particle, e.g. solutions of the Schrodinger equation for two electrons, which would have e.g. $x_{1}$ and $x_{2}$ for different particles. No matter how many particles there are, we have just a single $ψ$, we just add more arguments to it.
- we could have even more generalized coordinates. This is much in the spirit of Hamiltonian mechanics or generalized coordinates

Note however that there is always a single magical $t$ time variable. This is needed in particular because there is a time partial derivative in the equation, so there must be a corresponding time variable in the function. This makes the equation explicitly non-relativistic.

The general Schrödinger equation can be broken up into a trivial time-dependent and a time-independent Schrödinger equation by separation of variables. So in practice, all we need to solve is the slightly simpler time-independent Schrödinger equation, and the full equation comes out as a result.

Existence and uniqueness: mathoverflow.net/questions/212913/existence-and-uniqueness-for-two-dimensional-time-dependent-schr%C3%B6dinger-equation

The time-independent Schrödinger equation is a variant of the Schrödinger equation defined as:

$\hat{H}[\psi_x(E,x)] = E\psi_x(E,x)$

So we see that for any Schrödinger equation, which is fully defined by the Hamiltonian $\hat{H}$, there is a corresponding time-independent Schrödinger equation, which is also uniquely defined by the same Hamiltonian.

The cool thing about the time-independent Schrödinger equation is that we can always reduce solving the full Schrödinger equation to solving this slightly simpler time-independent version, as described at: Section "Solving the Schrodinger equation with the time-independent Schrödinger equation".

Because this method is fully general, and it simplifies the initial time-dependent problem to a time independent one, it is the approach that we will always take when solving the Schrodinger equation, see e.g. quantum harmonic oscillator.

Before reading any further, you *must* understand the heat equation solution with Fourier series, which uses separation of variables.

Once that example is clear, we see that the exact same separation of variables can be done to the Schrödinger equation. If we name the constant of the separation of variables $E$ for energy, we get:

- a time-only part that does not depend on space and does not depend on the Hamiltonian at all. The solution for this part is therefore always the same exponentials for any problem, and this part is therefore "boring":

  $\psi_t(E,t) = e^{-iEt/ℏ}$
- a space-only part that does not depend on time, but does depend on the Hamiltonian:

  $\hat{H}[\psi_x(E,x)] = E\psi_x(E,x)$

  Since this is the only non-trivial part, unlike the time part which is trivial, this spatial part is just called "the time-independent Schrodinger equation".

  Note that the $ψ$ here is not the same as the $ψ$ in the time-dependent Schrodinger equation of course, as that psi is the result of the multiplication of the time and space parts. This is a bit of imprecise terminology, but hey, physics.

Because the time part of the equation is always the same and always trivial to solve, all we have to do to actually solve the Schrodinger equation is to solve the time independent one, and then we can construct the full solution trivially.

Once we've solved the time-independent part for each possible $E$, we can construct a solution exactly as we did in heat equation solution with Fourier series: we make a weighted sum over all possible $E$ to match the initial condition, which is analogous to the Fourier series in the case of the heat equation to reach a final full solution:

- if there are only discretely many possible values of $E$, one for each possible energy $E_{i}$, we sum over them:

  $\psi(x,t) = \sum_{i=0}^{\infty} c_i \psi_t(E_i,t)\psi_x(E_i,x) = \sum_{i=0}^{\infty} c_i e^{-iE_i t/ℏ}\psi_x(E_i,x)$

  and this is a solution by selecting the $c_i$ such that at time $t=0$ we match the initial condition:

  $\sum_{i=0}^{\infty} c_i e^{-iE_i 0/ℏ}\psi_x(E_i,x) = \sum_{i=0}^{\infty} c_i \psi_x(E_i,x) = \text{initial condition}$

  A finite spectrum shows up in many incredibly important cases.
- if the possible values of $E$ form a continuum, we do something analogous but with an integral instead of a sum. This is called the continuous spectrum. One notable

The fact that this expansion of the initial condition is always possible is mathematically proven by some version of the spectral theorem, based on the fact that the Schrodinger equation Hamiltonian has to be Hermitian and therefore behaves nicely.
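The whole procedure can be sketched numerically. The following is a minimal illustration (not from the original text), assuming units $ℏ=m=1$, an arbitrary example harmonic potential, and a finite-difference grid: we diagonalize a discretized Hamiltonian, expand an initial condition in its eigenvectors, and evolve each coefficient by the trivial time part $e^{-iE_it/ℏ}$:

```python
import numpy as np

# Sketch (units hbar = m = 1, illustrative grid and potential): discretize the
# 1D Hamiltonian H = -1/2 d^2/dx^2 + V(x), diagonalize it, expand an initial
# condition in the energy eigenbasis, and evolve each coefficient by
# exp(-i E_i t) as described above.
N, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Finite-difference Laplacian plus a harmonic potential (arbitrary example).
lap = (np.diag(np.full(N - 1, 1.0), -1) - 2 * np.eye(N)
       + np.diag(np.full(N - 1, 1.0), 1)) / dx**2
H = -0.5 * lap + np.diag(0.5 * x**2)

E, psi_x = np.linalg.eigh(H)        # eigenvalues E_i, eigenvectors psi_x(E_i, x)

# Initial condition: a displaced Gaussian wave packet, normalized.
psi0 = np.exp(-((x - 1.0) ** 2))
psi0 /= np.linalg.norm(psi0)

c = psi_x.T @ psi0                  # weights of the eigenvector decomposition

def psi(t):
    """Full solution: weighted sum of e^{-i E_i t} psi_x(E_i, x)."""
    return psi_x @ (c * np.exp(-1j * E * t))

# At t = 0 the sum reproduces the initial condition.
assert np.allclose(psi(0.0), psi0)
# The norm (total probability) is conserved under time evolution.
assert abs(np.linalg.norm(psi(3.0)) - 1.0) < 1e-9
```

Note how the Hermiticity of `H` is what guarantees that `eigh` gives real energies and an orthonormal eigenbasis to expand `psi0` in.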

It is interesting to note that solving the time-independent Schrodinger equation can also be seen exactly as an eigenvalue equation where:

- the Hamiltonian is a linear operator
- the value of the energy `E` is an eigenvalue

The only difference from usual matrix eigenvectors is that we are now dealing with an infinite dimensional vector space.

Furthermore:

- we immediately see from the equation that the time-independent solutions are states of deterministic energy because the energy is an eigenvalue of the Hamiltonian operator
- by looking at Equation 3. "Solution of the Schrodinger equation in terms of the time-independent and time dependent parts", it is obvious that if we take an energy measurement, the probability of each result never changes with time, because it is only multiplied by a constant

Bibliography:

Where derivation == "intuitive routes", since a "law of physics" cannot be derived, only observed to be right or wrong.

TODO also comment on why complex numbers are used in the Schrodinger equation.

Some approaches:

- en.wikipedia.org/w/index.php?title=Schr%C3%B6dinger_equation&oldid=964460597#Derivation: holy crap, this just goes all in into a Lie group approach, nice
- Richard Feynman's derivation of the Schrodinger equation:
  - physics.stackexchange.com/questions/263990/feynmans-derivation-of-the-schrödinger-equation
  - www.youtube.com/watch?v=xQ1d0M19LsM "Class Y. Feynman's Derivation of the Schrödinger Equation" by doctorphys (2020)
- www.youtube.com/watch?v=zC_gYfAqjZY&list=PL54DF0652B30D99A4&index=53 "I5. Derivation of the Schrödinger Equation" by doctorphys

Why is there a complex number in the equation? Intuitively and mathematically:

The Schrödinger equation Hamiltonian has to be Hermitian, so we will have only real energies I think: quantumcomputing.stackexchange.com/questions/12113/why-does-a-hamiltonian-have-to-be-hermitian

As always, the best way to get some intuition about an equation is to solve it for some simple cases, so let's give that a try with different fixed potentials.

- www.youtube.com/watch?v=1Z9wo2CzJO8 "Schrodinger equation solved numerically in 3D" by Tetsuya Matsuno. 3D hydrogen atom, code may be hidden in some paper, maybe
- www.youtube.com/playlist?list=PLdCdV2GBGyXM0j66zrpDy2aMXr6cgrBJA "Computational Quantum Mechanics" by Let's Code Physics. Uses a 1D trinket.io.
- www.youtube.com/watch?v=BBt8EugN03Q Simulating Quantum Systems [Split Operator Method] by LeiosOS (2018)
- www.youtube.com/watch?v=86x0_-JGlGQ Simulating the Quantum World on a Classical Computer by Garnet Chan (2016) discusses how modeling only local entanglement can make certain simulations feasible

This is basically how quantum computing was first theorized by Richard Feynman: quantum computers as experiments whose outcomes are hard to predict.

TODO answer that: quantumcomputing.stackexchange.com/questions/5005/why-it-is-hard-to-simulate-a-quantum-device-by-a-classical-devices. A good answer would be with a more physical example of quantum entanglement, e.g. on a photonic quantum computer.

We select for the general Equation "Schrodinger equation":

- $x=x$, the linear cartesian coordinate in the x direction
- $\hat{H} = -\frac{ℏ^2}{2m}\frac{\partial^2}{\partial x^2} + V(x,t)$, which is analogous to the sum of kinetic and potential energy in classical mechanics

giving the full explicit partial differential equation:

$iℏ\frac{\partial \psi(x,t)}{\partial t} = \left[-\frac{ℏ^2}{2m}\frac{\partial^2}{\partial x^2} + V(x,t)\right]\psi(x,t)$

The corresponding time-independent Schrödinger equation for this equation is:

$\left[-\frac{ℏ^2}{2m}\frac{\partial^2}{\partial x^2} + V(x)\right]\psi(x) = E\psi(x)$

This is the Schrödinger equation for a one dimensional particle with $V=0$. The first step is to calculate the time-independent Schrödinger equation for a free one dimensional particle.

Then, for each energy $E$, from the discussion at Section "Solving the Schrodinger equation with the time-independent Schrödinger equation", the solution is:

$\psi(x,t) = \int_{E=-\infty}^{\infty} c(E)\, e^{i\sqrt{2mE}\,x/ℏ}\, e^{-iEt/ℏ}\, dE = \int_{E=-\infty}^{\infty} c(E)\, e^{i(\sqrt{2mE}\,x - Et)/ℏ}\, dE$

Therefore, we see that the solution is made up of infinitely many plane wave functions.

In this solution of the Schrödinger equation, by the uncertainty principle, position is completely unknown (the particle could be anywhere in space), and momentum (and therefore, energy) is perfectly known.

The plane wave function appears for example in the solution of the Schrödinger equation for a free one dimensional particle. This makes sense, because when solving with the time-independent Schrödinger equation, we do separation of variable on fixed energy levels explicitly, and the plane wave solutions are exactly fixed energy level ones.

$\frac{\partial^2}{\partial x^2}\psi(x) = -\frac{2mE}{ℏ^2}\psi(x)$

$\psi(x) = e^{i\sqrt{2mE}\,x/ℏ}$
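As a sanity check, we can verify symbolically that the plane wave satisfies the time-independent free-particle equation. A minimal sketch with sympy (symbol names are just illustrative):

```python
import sympy as sp

# Symbolic check: the plane wave psi(x) = exp(i sqrt(2 m E) x / hbar)
# satisfies psi'' = -(2 m E / hbar^2) psi, the time-independent
# Schrodinger equation for a free particle.
x = sp.symbols('x', real=True)
m, E, hbar = sp.symbols('m E hbar', positive=True)

psi = sp.exp(sp.I * sp.sqrt(2 * m * E) * x / hbar)
residual = sp.diff(psi, x, 2) + (2 * m * E / hbar**2) * psi
assert sp.simplify(residual) == 0
```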

This equation is a subcase of Equation "Schrödinger equation for a one dimensional particle" with $V(x)=x^2$.

We get the time-independent Schrödinger equation by substituting this $V$ into Equation "time-independent Schrödinger equation for a one dimensional particle":

$\left[-\frac{ℏ^2}{2m}\frac{\partial^2}{\partial x^2} + x^2\right]\psi(x) = E\psi(x)$

Now, there are two ways to go about this.

The first is the stupid "here's a guess" + "hey, this family of solutions forms a complete basis!" approach. This is exactly how we solved the problem at Section "Solving partial differential equations with the Fourier series", except that now the complete basis is the Hermite functions.

The second is the much celebrated ladder operator method.
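Either way, the hallmark result is evenly spaced energy levels, and we can see it without any analytic work. A minimal numerical sketch (not from the original text), assuming units $ℏ=m=1$ and the potential $V(x)=x^2$ used above, so $\frac{1}{2}m\omega^2 = 1$ gives $\omega = \sqrt{2}$:

```python
import numpy as np

# Sketch (units hbar = m = 1): diagonalize the finite-difference Hamiltonian
# H = -1/2 d^2/dx^2 + x^2 and check that the low-lying energy levels are
# evenly spaced, the signature of the quantum harmonic oscillator.
N, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

lap = (np.diag(np.full(N - 1, 1.0), -1) - 2 * np.eye(N)
       + np.diag(np.full(N - 1, 1.0), 1)) / dx**2
H = -0.5 * lap + np.diag(x**2)

E = np.linalg.eigvalsh(H)

# With V = x^2 we have omega = sqrt(2), so E_n = sqrt(2) (n + 1/2):
# the spacing between consecutive levels is the constant sqrt(2).
assert np.allclose(np.diff(E[:6]), np.sqrt(2.0), atol=1e-2)
```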

A quantum version of the LC circuit!

TODO are there experiments, or just theoretical?

Show up in the solution of the quantum harmonic oscillator after separation of variables leading into the time-independent Schrödinger equation, much like solving partial differential equations with the Fourier series.

I.e.: they are both:

- solutions to the time-independent Schrödinger equation for the quantum harmonic oscillator
- a complete basis of that space

Not the same as Hermite polynomials.

www.physics.udel.edu/~jim/PHYS424_17F/Class%20Notes/Class_5.pdf by James MacDonald shows it well.

The operators are a natural guess on the lines of "if p and x were integers".

And then we can prove the ladder properties easily.

The commutator appears in the middle of this analysis.

Examples:

- flash memory uses quantum tunneling as the basis for setting and resetting bits
- alpha decay is understood as a quantum tunneling effect in the nucleus

It is the only atom that has a closed form solution, which allows for very good predictions, and gives awesome intuition about the orbitals in general.

It is arguably the most important solution of the Schrodinger equation.

In particular, it predicts:

- the major spectral lines of the hydrogen atom, by taking the difference between energy levels
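The energy-difference computation can be sketched in a few lines. This is an illustrative back-of-the-envelope check (the helper name and rounded constants are mine, not from the text), using $E_n = -13.6\,\text{eV}/n^2$ and $\lambda = hc/\Delta E$:

```python
# Sketch of the spectral-line computation with approximate constants:
# E_n = -13.6 eV / n^2, photon wavelength lambda = h c / Delta E.
RYDBERG_EV = 13.6057          # hydrogen ground state binding energy, in eV
HC_EV_NM = 1239.84            # h * c, in eV * nm

def line_nm(n_low, n_high):
    """Wavelength of the photon emitted in the n_high -> n_low jump."""
    delta_e = RYDBERG_EV * (1 / n_low**2 - 1 / n_high**2)
    return HC_EV_NM / delta_e

# Lyman-alpha (2 -> 1) and Balmer H-alpha (3 -> 2) match the observed lines.
assert round(line_nm(1, 2)) == 122   # ~121.5 nm, ultraviolet
assert round(line_nm(2, 3)) == 656   # ~656 nm, the famous red line
```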

The explicit solution can be written in terms of spherical harmonics.

In the case of the Schrödinger equation solution for the hydrogen atom, each orbital is one eigenvector of the solution.

Remember from time-independent Schrödinger equation that the final solution is just the weighted sum of the eigenvector decomposition of the initial state, analogously to solving partial differential equations with the Fourier series.

This is the table that you should have in mind to visualize them: en.wikipedia.org/w/index.php?title=Atomic_orbital&oldid=1022865014#Orbitals_table

Quantum numbers appear directly in the Schrödinger equation solution for the hydrogen atom.

However, it is very cool that they were actually discovered before the Schrödinger equation, and are present in the Bohr model (principal quantum number) and the Bohr-Sommerfeld model (azimuthal quantum number and magnetic quantum number) of the atom. This must be because they observed direct effects of those numbers in some experiments. TODO which experiments.

E.g. The Quantum Story by Jim Baggott (2011) page 34 mentions:

As the various lines in the spectrum were identified with different quantum jumps between different orbits, it was soon discovered that not all the possible jumps were appearing. Some lines were missing. For some reason certain jumps were forbidden. An elaborate scheme of 'selection rules' was established by Bohr and Sommerfeld to account for those jumps that were allowed and those that were forbidden.

This refers to the forbidden mechanism. TODO concrete example, ideally the first one to be noticed. How can you notice this if the energy depends only on the principal quantum number?

Determines energy. This comes out directly from the resolution of the Schrödinger equation for the hydrogen atom, where we have to set some arbitrary values of energy by separation of variables, just like we have to set some arbitrary numbers when solving partial differential equations with the Fourier series. We then just happen to see that only certain discrete values, indexed by an integer, satisfy the equations.

Fixed total angular momentum.

The direction however is not specified by this number.

To determine the quantum angular momentum, we need the magnetic quantum number, which then selects which orbital exactly we are talking about.

Fixed quantum angular momentum in a given direction.

Can range between $±l$.

E.g. consider gallium, which is 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p1:

- the electrons in s-orbitals such as 1s, 2s, and 3s have $l=0$, and so the only value for $m_{l}$ is 0
- the electrons in p-orbitals such as 2p, 3p and 4p have $l=1$, and so the possible values for $m_{l}$ are -1, 0 and 1
- the electrons in d-orbitals such as 3d have $l=2$, and so the possible values for $m_{l}$ are -2, -1, 0, 1 and 2
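The pattern above is purely mechanical: for a given azimuthal quantum number $l$, $m_{l}$ takes the $2l+1$ integer values from $-l$ to $+l$. A trivial sketch (the helper name is mine):

```python
# For azimuthal quantum number l, the magnetic quantum number m_l takes
# the 2l + 1 integer values from -l to +l.
def allowed_m_l(l):
    return list(range(-l, l + 1))

assert allowed_m_l(0) == [0]                    # s orbitals
assert allowed_m_l(1) == [-1, 0, 1]             # p orbitals
assert allowed_m_l(2) == [-2, -1, 0, 1, 2]      # d orbitals
assert all(len(allowed_m_l(l)) == 2 * l + 1 for l in range(5))
```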

The z component of the quantum angular momentum is simply:

$L_{z}=m_{l}ℏ$

so e.g. again for gallium:

- s-orbitals: necessarily have 0 z angular momentum
- p-orbitals: have either 0, $−ℏ$ or $+ℏ$ z angular momentum

Note that this direction is arbitrary, since for a fixed azimuthal quantum number (and therefore fixed total angular momentum), we can only know one direction for sure. $z$ is normally used by convention.

This notation is cool as it gives the spin quantum number, which is important e.g. when talking about hyperfine structure.

But it is a bit crap that the spin is not given simply as $±1/2$ but rather mixes up both the azimuthal quantum number and spin. What is the reason?

Bibliography:

- Quantum Mechanics for Engineers by Leon van Dommelen (2011) "5. Multiple-Particle Systems"

TODO. Can't find it easily. Anyone?

This is closely linked to the Pauli exclusion principle.

What does a particle even mean, right? Especially in quantum field theory, where two electrons are just vibrations of a single electron field.

Another issue is that if we consider magnetism, things only make sense if we add special relativity, since Maxwell's equations require special relativity, so a non approximate solution for this will necessarily require full quantum electrodynamics.

As mentioned at lecture 1 youtube.com/watch?video=H3AFzbrqH68&t=555, relativistic quantum mechanical theories like the Dirac equation and Klein-Gordon equation make no sense for a "single particle": they must imply that particles can pop in and out of existence.

Bibliography:

- www.youtube.com/watch?v=Og13-bSF9kA&list=PLDfPUNusx1Eo60qx3Od2KLUL4b7VDPo9F "Advanced quantum theory" by Tobias J. Osborne says that the course will essentially cover multi-particle quantum mechanics!
- physics.stackexchange.com/questions/54854/equivalence-between-qft-and-many-particle-qm "Equivalence between QFT and many-particle QM"
- Course: Quantum Many-Body Physics in Condensed Matter by Luis Gregorio Dias (2020) gives a good introduction to non-interacting particles

Just ignore the electron electron interactions.

No closed form solution, but good approximations can be calculated by hand with the Hartree-Fock method, see Hartree-Fock method for the helium atom.

Bibliography:

That is, two electrons per atomic orbital, each with a different spin.

As shown at Schrödinger equation solution for the helium atom, they do repel each other, and that affects their measurable energy.

However, this energy is still lower than going up to the next orbital. TODO numbers.

This changes however at higher orbitals, notably as approximately described by the aufbau principle.

Boring rule that says that less energetic atomic orbitals are filled first.

Much more interesting is actually determining that order, which the Madelung energy ordering rule is a reasonable approximation to.

We will sometimes just write them without the superscript, as it saves typing and the superscript adds no information.

The principal quantum number thing fully determining energy is only true for the hydrogen emission spectrum for which we can solve the Schrödinger equation explicitly.

For other atoms with more than one electron, the orbital names are just a very good approximation/perturbation, as we don't have an explicit solution. And the internal electrons do change energy levels.

Note however that due to the more complex effect of the Lamb shift from QED, there is actually a very small 2p/2s shift even in hydrogen.

Looking at the energy levels of the Schrödinger equation solution for the hydrogen atom, you would guess that for multi-electron atoms only the principal quantum number would matter, with the azimuthal quantum number getting filled randomly.

However, orbitals energies for large atoms don't increase in energy like those of hydrogen due to electron-electron interactions.

In particular, the following would not be naively expected:

- 2s fills up before 2p. From the hydrogen solution, you might guess that they would randomly go into either one as they'd have the same energy
- 4s1 in potassium fills up before 3d, even though it has a higher principal quantum number!

This rule is only an approximation, there exist exceptions to the Madelung energy ordering rule.

Bibliography:

The most notable exception is the borrowing of a 4s-orbital electron into 3d as in chromium, leading to a 3d5 4s1 configuration instead of the 3d4 4s2 we would have with the rule. TODO how is that observed experimentally?

This notation is so confusing! People often don't manage to explain the intuition behind it, i.e. why this is a useful notation. When you see Indian university entry exam level memorization classes about this, it makes you want to cry.

The key reason why term symbols matter are Hund's rules, which allow us to predict with some accuracy which electron configurations of those states has more energy than the other.

web.chem.ucsb.edu/~devries/chem218/Term%20symbols.pdf puts it well: electron configuration notation is not specific enough, as each such notation e.g. 1s2 2s2 2p2 contains several options of spins and z angular momentum. And those affect energy.

This is why those symbols are often used when talking about energy differences: they specify more precisely which levels you are talking about.

Basically, each term symbol appears to represent a group of possible electron configurations with a given quantum angular momentum.

We first fix the energy level by saying at which orbital each electron can be (hyperfine structure is ignored). It doesn't even have to be the ground state: we can make some electrons excited at will.

The best thing to learn this is likely to draw out all the possible configurations explicitly, and then understand what is the term symbol for each possible configuration, see e.g. term symbols for carbon ground state.

It is also confusing how uppercase letters S, P and D are used, when they do not refer to orbitals s, p and d, but rather to states which have the same angular momentum as individual electrons in those orbitals.

It is also very confusing how extremely close it looks to spectroscopic notation!

The form of the term symbol is:

$^{2S+1}L_{J}$

The $2S+1$ can be understood directly as the degeneracy, how many configurations we have in that state.

Bibliography:

- chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Spectroscopy/Electronic_Spectroscopy/Spin-orbit_Coupling/Atomic_Term_Symbols
- chem.libretexts.org/Courses/Pacific_Union_College/Quantum_Chemistry/08%3A_Multielectron_Atoms/8.08%3A_Term_Symbols_Gives_a_Detailed_Description_of_an_Electron_Configuration The PDF origin: web.chem.ucsb.edu/~devries/chem218/Term%20symbols.pdf
- chem.libretexts.org/Bookshelves/Inorganic_Chemistry/Inorganic_Coordination_Chemistry_(Landskron)/08%3A_Coordination_Chemistry_III_-_Electronic_Spectra/8.01%3A_Quantum_Numbers_of_Multielectron_Atoms
- physics.stackexchange.com/questions/8567/how-do-electron-configuration-microstates-map-to-term-symbols How do electron configuration microstates map to term symbols?

Allow us to determine with good approximation, in a multi-electron atom, which electron configurations have more energy. It is a bit like the aufbau principle, but at a finer resolution.

Note that this is not trivial since there is no explicit solution to the Schrödinger equation for multi-electron atoms like there is for hydrogen.

For example, consider carbon which has electron configuration 1s2 2s2 2p2.

If we were to populate the 3 p-orbitals with two electrons with spins either up or down, which has more energy? E.g. of the following two:

```
m_l -1  0 +1
u_  u_  __
u_  __  u_
__  ud  __
```

Higher spin multiplicity means lower energy. I.e.: you want to keep all spins pointing in the same direction.

This example covered for example at Video 1. "Term Symbols Example 1 by TMP Chem (2015)".

Carbon has electronic structure 1s2 2s2 2p2.

For term symbols we only care about unfilled layers, because in every filled layer the total z angular momentum is 0, as the electrons necessarily cancel each other out:

- the magnetic quantum number varies from -l to +l, each with z angular momentum $-lℏ$ to $+lℏ$, and so each cancels the other out
- the spin quantum number is either plus or minus half, and so each pair of electrons cancels out

So in this case, we only care about the 2 electrons in 2p2. Let's list out all possible ways in which the 2p2 electrons can be.

There are 3 p orbitals, with three different magnetic quantum numbers, each representing a different possible z quantum angular momentum.

We are going to distribute 2 electrons with 2 different spins across them. All the possible distributions that don't violate the Pauli exclusion principle are:

```
m_l +1 0 -1 m_L m_S
u_ u_ __ 1 1
u_ __ u_ 0 1
__ u_ u_ -1 1
d_ d_ __ 1 -1
d_ __ d_ 0 -1
__ d_ d_ -1 -1
u_ d_ __ 1 0
d_ u_ __ 1 0
u_ __ d_ 0 0
d_ __ u_ 0 0
__ u_ d_ -1 0
__ d_ u_ -1 0
ud __ __ 2 0
__ ud __ 0 0
__ __ ud -2 0
```

where:

- `m_l` is $m_{l}$, the magnetic quantum number of each electron. Remember that this gives an orbital (non-spin) quantum angular momentum of $m_{l}ℏ$ to each such electron
- `m_L`, with a capital L, is the sum of the $m_{l}$ of each electron
- `m_S`, with a capital S, is the sum of the spin angular momentum of each electron
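The 15 rows of the table can be generated mechanically. A small sketch (not from the original text, names are illustrative): a 2p spin orbital is a pair $(m_l, m_s)$, and a 2p2 microstate is any set of two distinct such pairs, by the Pauli exclusion principle:

```python
from itertools import combinations

# A p spin orbital is a pair (m_l, m_s) with m_l in {+1, 0, -1} and
# m_s = +1/2 or -1/2; a 2p^2 microstate is a set of two distinct pairs
# (Pauli exclusion principle).
spin_orbitals = [(m_l, m_s) for m_l in (1, 0, -1) for m_s in (0.5, -0.5)]
microstates = list(combinations(spin_orbitals, 2))

assert len(microstates) == 15  # matches the 15 rows of the table

# m_L and m_S of a microstate are just the sums over the two electrons.
m_L_values = sorted(m_l1 + m_l2 for ((m_l1, _), (m_l2, _)) in microstates)
m_S_values = sorted(m_s1 + m_s2 for ((_, m_s1), (_, m_s2)) in microstates)
assert (m_L_values[0], m_L_values[-1]) == (-2, 2)
assert (m_S_values[0], m_S_values[-1]) == (-1.0, 1.0)
```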

For example, on the first line:

```
m_l +1 0 -1 m_L m_S
u_ u_ __ 1 1
```

we have:

- one electron with $m_{l}=+1$, and so angular momentum $ℏ$
- one electron with $m_{l}=0$, and so angular momentum 0

and so the sum of them has angular momentum $0+1ℏ=1ℏ$. So the value of $m_{L}$ is 1; we just omit the $ℏ$.

TODO now I don't understand the logic behind the next steps... I understand how to mechanically do them, but what do they mean? Can you determine the term symbol for individual microstates at all? Or do you have to group them to get the answer? Since there are multiple choices in some steps, it appears that you can't assign a specific term symbol to an individual microstate. And it has something to do with the Slater determinant. The previous lecture mentions it: www.youtube.com/watch?v=7_8n1TS-8Y0 more precisely youtu.be/7_8n1TS-8Y0?t=2268 about carbon.

youtu.be/DAgEmLWpYjs?t=2675 mentions that $^{3}D$ is not allowed because it would imply $L=2,S=1$, which would require a state

`uu __ __`

which violates the Pauli exclusion principle, and so was not listed on our list of 15 states.

He then goes for $^{1}D$ and mentions:

- S = 0, so $m_{S}$ can only be 0
- L = 2 (D), so $m_{L}$ ranges in -2, -1, 0, 1, 2

and so that corresponds to these states on our list:

```
ud __ __ 2 0
u_ d_ __ 1 0
u_ __ d_ 0 0
__ u_ d_ -1 0
__ __ ud -2 0
```

Note that for some of those we had two choices, so we just pick any one of them and tick them off from the table, which now looks like:

```
+1 0 -1 m_L m_S
u_ u_ __ 1 1
u_ __ u_ 0 1
__ u_ u_ -1 1
d_ d_ __ 1 -1
d_ __ d_ 0 -1
__ d_ d_ -1 -1
d_ u_ __ 1 0
d_ __ u_ 0 0
__ d_ u_ -1 0
__ ud __ 0 0
```

Then for $^{3}P$ the choices are:

- S = 1, so $m_{S}$ is either -1, 0 or 1
- L = 1 (P), so $m_{L}$ ranges in -1, 0, 1

so we have 9 possibilities for both together. We again verify that 9 such states are left matching those criteria, tick them off, and so on.

For the $m_{S}$ of that first line, we have two electrons with spin up. The spin angular momentum of each electron is $1/2ℏ$, and so given that we have two, the total is $1ℏ$, so again we omit $ℏ$ and $m_{S}$ is 1.

Bibliography:

Can we make any ab initio predictions about it all?

A 2016 paper: aip.scitation.org/doi/abs/10.1063/1.4948309

Isomers were quite confusing for early chemists, before atomic theory was widely accepted, and people were thinking mostly in terms of proportions of equations, related: Section "Isomers suggest that atoms exist (1874)".

Exist because double bonds don't rotate freely. They have different properties of course, unlike enantiomers.

Bibliography:

Mirror images.

Key example: D and L amino acids. Enantiomers have identical physico-chemical properties. But their biological roles can be very different, because an enzyme might only be able to act on one of them.

TODO definition. Appears to be isomers

Example:

- the three most stable polymorphs of calcium carbonate are:

Molecules that are the same if you just look at "what atom is linked to what atom", and are only different if you consider the relative spatial positions of atoms.

Discrete quantum system model that can model both spin in the Stern-Gerlach experiment or photon polarization in polarizer.

Also known in quantum computing as a qubit :-)

A more concrete and easier to understand version of it is the more photon-specific Poincaré sphere, have a look at that one first.

The wave function contains the entire state of a particle.

From the mathematical formulation of quantum mechanics, remember that the wave function is a vector in Hilbert space.

And a single vector can be represented in many different ways in different basis, and two of those ways happen to be the position and the momentum representations.

More importantly, position and momentum are first and foremost operators associated with observables: the position operator and the momentum operator. And both of their eigenvalue sets form a basis of the Hilbert space according to the spectral theorem.

When you represent a wave function as a function, you have to say what the variable of the function means. And depending on whether you say "it means position" or "it means momentum", the position and momentum operators will be written differently.

This is well shown at: Video "Visualization of Quantum Physics (Quantum Mechanics) by udiprod (2017)".

Furthermore, the position and momentum representations are equivalent: one is the Fourier transform of the other: position and momentum space. Remember that notably we can always take the Fourier transform of a function in $L_{2}$ due to Carleson's theorem.

Then the uncertainty principle follows immediately from a general property of the Fourier transform: en.wikipedia.org/w/index.php?title=Fourier_transform&oldid=961707157#Uncertainty_principle

In precise terms, the uncertainty principle talks about the standard deviation of two measures.

We can visualize the uncertainty principle more intuitively by thinking of a wave function that is a real bump function with a flat top in 1D. We can then change the width of the support, but when we do that, the top goes higher to keep probability equal to 1. The momentum is 0 everywhere, except in the edges of the support. Then:

- to localize the wave in space at position 0 to reduce the space uncertainty, we have to reduce the support. However, doing so makes the momentum variation on the edges more and more important, as the slope will go up and down faster (higher top, and less x space for descent), leading to a larger variance (note that average momentum is still 0, due to the symmetry of the bump function)
- to localize the momentum as much as possible at 0, we can make the support wider and wider. This makes the bumps at the edges smaller and smaller. However, this also obviously delocalises the wave function more and more, increasing the variance of x
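The trade-off can be checked numerically. A sketch (not from the original text) using a Gaussian wave packet instead of a bump function, with units $ℏ=1$ and an illustrative grid: the Gaussian is the case that saturates the bound, $\sigma_x \sigma_p = ℏ/2$, and the momentum distribution is just the Fourier transform of $\psi$:

```python
import numpy as np

# Sketch (hbar = 1): for a Gaussian wave packet, the product of the position
# and momentum standard deviations saturates the uncertainty bound hbar/2.
# The momentum distribution comes from the Fourier transform of psi.
N, L, sigma = 2048, 40.0, 1.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

prob_x = np.abs(psi) ** 2
prob_x /= prob_x.sum()
sigma_x = np.sqrt(np.sum(prob_x * x**2))   # mean position is 0 by symmetry

p = 2 * np.pi * np.fft.fftfreq(N, d=dx)    # with hbar = 1, p = k
prob_p = np.abs(np.fft.fft(psi)) ** 2
prob_p /= prob_p.sum()
sigma_p = np.sqrt(np.sum(prob_p * p**2))   # mean momentum is 0 as well

assert abs(sigma_x * sigma_p - 0.5) < 1e-3  # hbar / 2, with hbar = 1
```

Narrowing `sigma` shrinks `sigma_x` and grows `sigma_p` in exact proportion, which is the bump-function argument above in quantitative form.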

Bibliography:

- www.youtube.com/watch?v=bIIjIZBKgtI&list=PL54DF0652B30D99A4&index=59 "K2. Heisenberg Uncertainty Relation" by doctorphys (2011)
- physics.stackexchange.com/questions/132111/uncertainty-principle-intuition Uncertainty Principle Intuition on Physics Stack Exchange

One of the main reasons why physicists are obsessed by this topic is that position and momentum are mapped to the phase space coordinates of Hamiltonian mechanics, which appear in the matrix mechanics formulation of quantum mechanics, which offers insight into the theory, particularly when generalizing to relativistic quantum mechanics.

One way to think about it is: what is the definition of position space? It is a way to write the wave function $ψ_{x}(x)$ such that:

- the position operator is the multiplication by $x$
- the momentum operator is $-iℏ$ times the derivative by $x$

And then, what is the definition of momentum space? It is of course a way to write the wave function $ψ_{p}(p)$ such that:

- the momentum operator is the multiplication by $p$

physics.stackexchange.com/questions/39442/intuitive-explanation-of-why-momentum-is-the-fourier-transform-variable-of-posit/39508#39508 gives the best intuitive idea: the Fourier transform writes a function as a (continuous) sum of plane waves, and each plane wave has a fixed momentum.
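This equivalence can be demonstrated numerically: applying the momentum operator as $-iℏ\,d/dx$ in position space gives the same function as multiplying by $p$ in momentum space and transforming back. A sketch (not from the original text), with $ℏ=1$ and an illustrative Gaussian packet:

```python
import numpy as np

# Sketch (hbar = 1): the momentum operator acts as -i d/dx in position
# space, and as multiplication by p in momentum space; the two agree
# because the representations are related by the Fourier transform.
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

# Gaussian wave packet carrying average momentum ~2 (in units hbar = 1).
psi = np.exp(-x**2 / 2) * np.exp(2j * x)

# Position representation: p_hat psi = -i dpsi/dx, derivative taken
# analytically here for accuracy: dpsi/dx = (-x + 2i) psi.
p_psi_position = -1j * (-x + 2j) * psi

# Momentum representation: p_hat is just multiplication by p.
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)
p_psi_momentum = np.fft.ifft(p * np.fft.fft(psi))

assert np.allclose(p_psi_position, p_psi_momentum, atol=1e-8)
```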

A way to write the wavefunction $\psi(x)$ such that the position operator is multiplication by $x$, i.e. an operator that takes the wavefunction as input, and outputs another function:

$\hat{x}\psi(x) = x\psi(x)$

If you believe that mathematicians took care of the continuous spectrum for us and that everything just works, the most concrete and direct thing that this representation tells us is that:

the probability of finding a particle between $x_0$ and $x_1$ at time $t$ equals:

$\int_{x_0}^{x_1} |\psi(x, t)|^2 \, dx$
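A numerical sketch of that statement, using a hypothetical normalized Gaussian state (not tied to any particular physical system):

```python
# The probability of finding the particle in [x0, x1] is the integral of
# |psi|^2 over that interval; the integral over all space is 1.
import numpy as np

x = np.linspace(-10, 10, 200001)
dx = x[1] - x[0]
sigma = 1.0
# Normalized Gaussian wave function: |psi|^2 is a normal distribution.
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

# Total probability over all space.
total = np.sum(np.abs(psi)**2) * dx
# Probability of finding the particle within one sigma of the origin.
mask = (x >= -sigma) & (x <= sigma)
p_interval = np.sum(np.abs(psi[mask])**2) * dx
print(total, p_interval)  # ~1.0 and ~0.683
```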

Describing this operator formally is surprisingly non-trivial mathematically, because you often end up getting into Dirac delta functions/continuous spectrum issues, as mentioned at: mathematical formulation of quantum mechanics

One dimension in position representation:

$\hat{p} = -i\hbar \frac{\partial}{\partial x}$

In three dimensions in position representation, we define it by using the gradient:

$\hat{p} = -i\hbar \nabla$
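A quick symbolic check that the plane wave $e^{ipx/\hbar}$ is an eigenfunction of this operator with eigenvalue $p$, which is why the plane wave is the state of definite momentum:

```python
# Apply the momentum operator -i hbar d/dx to a plane wave and verify
# that it just multiplies the wave by p (eigenvalue equation).
import sympy as sp

x, p, hbar = sp.symbols('x p hbar', real=True, positive=True)
psi = sp.exp(sp.I * p * x / hbar)
p_hat_psi = -sp.I * hbar * sp.diff(psi, x)
print(sp.simplify(p_hat_psi / psi))  # p
```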

It appears directly in the Schrödinger equation! And in particular in the time-independent Schrödinger equation.

There is also a time-energy uncertainty principle, because those two operators are also complementary.

Basically the operators are just analogous to the classical ones, e.g. the classical:

$L_z = x p_y - y p_x$

becomes:

$\hat{L}_z = -i\hbar \left( x \frac{\partial}{\partial y} - y \frac{\partial}{\partial x} \right)$

Besides the angular momentum in each direction, we also have the total angular momentum:

$\hat{L}^2 = \hat{L}_x^2 + \hat{L}_y^2 + \hat{L}_z^2$

Then you have to understand what each one of those does to each atomic orbital:

- total angular momentum: determined by the azimuthal quantum number
- angular momentum in one direction ($z$ by convention): determined by the magnetic quantum number

There is an uncertainty principle between the $x$, $y$ and $z$ angular momentum components: we can only measure one of them with certainty at a time. Video 1. "Quantum Mechanics 7a - Angular Momentum I by ViaScience (2013)" justifies this intuitively by mentioning that it is analogous to precession: if you try to measure electrons, e.g. with the Zeeman effect, they precess in the other directions, which you therefore end up modifying.
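That uncertainty between components comes from the commutation relation $[\hat{L}_x, \hat{L}_y] = i\hbar \hat{L}_z$, which can be checked symbolically. A minimal sympy sketch, with the operators written out in position representation:

```python
# Verify the angular momentum commutation relation
# [L_x, L_y] = i hbar L_z on an arbitrary function f(x, y, z):
# non-commuting operators imply the component uncertainty principle.
import sympy as sp

x, y, z, hbar = sp.symbols('x y z hbar')
f = sp.Function('f')(x, y, z)

def Lx(g): return -sp.I * hbar * (y * sp.diff(g, z) - z * sp.diff(g, y))
def Ly(g): return -sp.I * hbar * (z * sp.diff(g, x) - x * sp.diff(g, z))
def Lz(g): return -sp.I * hbar * (x * sp.diff(g, y) - y * sp.diff(g, x))

commutator_minus_rhs = sp.simplify(Lx(Ly(f)) - Ly(Lx(f)) - sp.I * hbar * Lz(f))
print(commutator_minus_rhs)  # 0
```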

TODO experiment. Likely Zeeman effect.

TODO is there any good intuitive argument or proof of conservation of energy, momentum, angular momentum?

Proof that the total probability of 1 is conserved by the time evolution:

It can be derived directly from the Schrödinger equation.
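A sketch of the standard derivation, assuming the usual Hamiltonian $\hat{H} = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x)$ and a wave function that vanishes at infinity: substitute $\frac{\partial \psi}{\partial t} = \frac{1}{i\hbar}\hat{H}\psi$ into the time derivative of the total probability:

```latex
\frac{d}{dt} \int_{-\infty}^{\infty} |\psi|^2 \, dx
  = \int_{-\infty}^{\infty} \left(
      \psi^* \frac{\partial \psi}{\partial t}
      + \psi \frac{\partial \psi^*}{\partial t}
    \right) dx
  = \frac{1}{i\hbar} \int_{-\infty}^{\infty} \left(
      \psi^* \hat{H} \psi - \psi \left( \hat{H} \psi \right)^*
    \right) dx
```

The potential terms cancel exactly when $V$ is real, and the kinetic terms cancel after integrating by parts twice (the boundary terms vanish), so the total derivative is zero.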

Bibliography:

- That proof also mentions that if the potential `V` is not real, then there is no conservation of probability! Therefore the potential *must* be real valued!

Contains the full state of the quantum system.

This is in contrast to classical mechanics, where e.g. the state of a mechanical system is given by two real functions: position and speed.

The wave function in position representation on the other hand encodes speed in "how fast does the complex phase spin around", and direction in "does it spin clockwise or counterclockwise", as described well at: Video "Visualization of Quantum Physics (Quantum Mechanics) by udiprod (2017)". Then once you understand that, it is more compact to just view those graphs with the phase color coded as in Video "Simulation of the time-dependent Schrodinger equation (JavaScript Animation) by Coding Physics (2019)".

Relates particle momentum and its wavelength, or equivalently, energy and frequency.

The wavelength relation is:

$\lambda = \frac{h}{p}$

but since:

$v = \lambda f$

$E = pv$

the wavelength relation implies:

$\frac{v}{f} = \frac{h}{p} \implies f = \frac{pv}{h} = \frac{E}{h}$

Particle wavelength can be for example measured very directly on a double-slit experiment.

So if we take for example electrons of different speeds, we should be able to see the diffraction pattern change accordingly.
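A rough sketch of the numbers involved, using the non-relativistic $p = m v$: faster electrons give shorter wavelengths, so the diffraction fringes get closer together.

```python
# De Broglie wavelength lambda = h / (m v) for electrons at a few
# non-relativistic speeds.
h = 6.62607015e-34      # Planck constant, J s
m_e = 9.1093837015e-31  # electron rest mass, kg

for v in (1e5, 1e6, 1e7):  # speeds in m/s
    lam = h / (m_e * v)
    print(f"v = {v:.0e} m/s -> lambda = {lam:.3e} m")
```

At $v = 10^6 \text{ m/s}$ the wavelength is on the order of $10^{-10}$ m, comparable to atomic spacings, which is why crystals can serve as diffraction gratings for electrons.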

Published by Werner Heisenberg on 1925-07-25 as quantum mechanical re-interpretation of kinematic and mechanical relations by Heisenberg (1925), it offered the first general formulation of quantum mechanics.

It is apparently more closely related to the ladder operator method, which is more algebraic than the more analytical Schrödinger equation.

It appears that this formulation makes the importance of the Poisson bracket clear, and explains why physicists are so obsessed with talking about position and momentum space. This point of view also apparently makes it clearer that quantum mechanics can be seen as a generalization of classical mechanics through the Hamiltonian.

QED and the men who made it: Dyson, Feynman, Schwinger, and Tomonaga by Silvan Schweber (1994) mentions however that relativistic quantum mechanics broke that analogy, because some 2x2 matrix had a different form, TODO find that again.

Inward Bound by Abraham Pais (1988) chapter 12 "Quantum mechanics, an essay" part (c) "A chronology" has some ultra brief, but worthwhile mentions of matrix mechanics and the commutator.

This is Heisenberg's breakthrough paper on matrix mechanics, which later led to the Schrödinger equation; see also: history of quantum mechanics.

Published in the Zeitschrift für Physik, volume 33, pages 879-893: link.springer.com/article/10.1007%2FBF01328377

Modern overview: www.mat.unimi.it/users/galgani/arch/heisenberg25amer_j_phys.pdf

Basically the same as matrix mechanics it seems, just a bit more generalized.

Deterministic, but non-local.
