This section is more precisely about classical mechanics.
Basically the same as classical mechanics.
The idea that taking the limit of the non-classical theories (relativity and quantum mechanics) for certain parameters should lead back to the classical theory.
It appears that the classical limit is only very strict for relativity. For quantum mechanics it is a much more hand-wavy thing. See also: Subtle is the Lord by Abraham Pais (1982) page 55.
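For example, in special relativity the energy of a free particle of rest mass $m$ and speed $v$ expands for $v \ll c$ as:
$$E = \frac{m c^2}{\sqrt{1 - v^2/c^2}} = m c^2 + \frac{1}{2} m v^2 + \frac{3}{8} \frac{m v^4}{c^2} + \cdots$$
so dropping the constant rest energy and the higher order correction terms recovers the classical kinetic energy $\frac{1}{2} m v^2$.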
Basically the same as the classical limit, but more specifically for quantum mechanics.
Boooooring.
Originally it was likely created to study constrained mechanical systems where you want to use some "custom convenient" variables to parametrize things instead of global x, y, z. Classical examples that you must have in mind, and which show up further down, include the double pendulum, described by its two angles, and pulley systems such as the compound Atwood machine.
When doing Lagrangian mechanics, we just lump together all generalized coordinates into a single vector $\mathbf{q}(t)$ that maps time to the full state:
$$\mathbf{q}(t) = (q_1(t), q_2(t), \ldots, q_N(t))$$
where each component $q_i(t)$ can be anything, either the x/y/z coordinates of different particles relative to the ground, or angles, or any other crazy thing we want.
The Lagrangian is a function that maps the state, its time derivative and time:
$$\mathcal{L}(\mathbf{q}(t), \dot{\mathbf{q}}(t), t)$$
to a real number.
Then, the stationary action principle says that the actual path taken obeys the Euler-Lagrange equation:
$$\frac{\partial \mathcal{L}}{\partial q_i} - \frac{d}{dt}\left(\frac{\partial \mathcal{L}}{\partial \dot{q}_i}\right) = 0 \quad \text{for } i = 1, \ldots, N$$
This produces a system of ordinary differential equations with:
  • $N$ equations
  • $N$ unknown functions $q_i(t)$
  • at most second order time derivatives of the $q_i$. Those appear because of the chain rule applied to the second term $\frac{d}{dt}\frac{\partial \mathcal{L}}{\partial \dot{q}_i}$.
The mixture of so many derivatives is a bit mind bending, so we can clarify them a bit further. At:
$$\frac{\partial \mathcal{L}}{\partial q_i}$$
the $q_i$ is just identifying which argument of the Lagrangian we are differentiating by: the i-th according to the order of our definition of the Lagrangian. It is not the actual function $q_i(t)$, just a mnemonic.
Then at:
$$\frac{d}{dt}\left(\frac{\partial \mathcal{L}}{\partial \dot{q}_i}\right)$$
  • the $\frac{\partial \mathcal{L}}{\partial \dot{q}_i}$ part is just like the previous term: the $\dot{q}_i$ just identifies the argument with index $N + i$ (the $+N$ because the first $N$ arguments are the non-derivative ones)
  • after the partial derivative is taken and returns a new function $\frac{\partial \mathcal{L}}{\partial \dot{q}_i}(\mathbf{q}(t), \dot{\mathbf{q}}(t), t)$, then the multivariable chain rule comes in and expands the total time derivative into $2N + 1$ terms
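To make this concrete, here is a minimal worked example with the standard single particle in a potential (a textbook case, used here just for illustration): take one generalized coordinate $q_1 = x$, the position of a particle of mass $m$ on a line subject to a potential $V(x)$, so that:
$$\mathcal{L}(x, \dot{x}, t) = \frac{1}{2} m \dot{x}^2 - V(x)$$
The Euler-Lagrange equation then gives:
$$\frac{\partial \mathcal{L}}{\partial x} - \frac{d}{dt}\frac{\partial \mathcal{L}}{\partial \dot{x}} = -V'(x) - \frac{d}{dt}\left(m \dot{x}\right) = -V'(x) - m \ddot{x} = 0$$
which is just Newton's second law $m\ddot{x} = -V'(x)$: the second order derivative $\ddot{x}$ appeared exactly through the chain rule on the second term, as described above.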
However, people later noticed that the Lagrangian had some nice properties related to Lie group continuous symmetries.
Basically it seems that the easiest way to come up with new quantum field theory models is to first find the Lagrangian, and then derive the equations of motion from it.
For every continuous symmetry in the system (modelled by a Lie group), there is a corresponding conservation law: local symmetries of the Lagrangian imply conserved currents.
Genius: Richard Feynman and Modern Physics by James Gleick (1994) chapter "The Best Path" mentions that Richard Feynman didn't like the Lagrangian mechanics approach when he started university at MIT, because he felt it was too magical. The reason is that the Lagrangian approach basically starts from the principle that "nature minimizes the action across time globally". This implies that things that will happen in the future are also taken into consideration when deciding what has to happen before them! Much like the lifeguard in the lifeguard problem making global decisions about the future. However, chapter "Least Action in Quantum Mechanics" comments that Feynman later noticed that this was indeed necessary while developing Wheeler-Feynman absorber theory into quantum electrodynamics, because they felt that it would make more sense to consider things that way while playing with ideas such as positrons being electrons travelling back in time. This is in contrast with Hamiltonian mechanics, where the idea of time moving forward is more directly present, e.g. as in the Schrödinger equation.
Furthermore, given the symmetry, we can calculate the derived conservation law, and vice versa.
And partly due to the above observations, it was noticed that the easiest way to describe the fundamental laws of particle physics and make calculations with them is to first formulate their Lagrangian somehow. See also: why do symmetries such as SU(3), SU(2) and U(1) matter in particle physics?
Bibliography:
Video 1.
Euler-Lagrange equation explained intuitively - Lagrangian Mechanics by Physics Videos by Eugene Khutoryansky (2018)
Source. Well, unsurprisingly, it is exactly what you can expect from a Eugene Khutoryansky video.
Original playlist name: "PHYSICS 68 ADVANCED MECHANICS: LAGRANGIAN MECHANICS"
Author: Michel van Biezen.
High school classical mechanics material, no mention of the key continuous symmetry part.
But does have a few classic pendulum/pulley/spring worked out examples that would be really wise to get under your belt first.
As mentioned on the Wikipedia page en.wikipedia.org/w/index.php?title=Stationary_Action_Principle&oldid=1020413171, "principle of least action" is not an accurate name, since the action is not necessarily a minimum: we could just be at a saddle point.
Calculus of variations is the field that searches for maxima and minima of Functionals, rather than the more elementary case of functions from $\mathbb{R}^n$ to $\mathbb{R}$.
A function that takes an input function and outputs a real number.
Let's start with the one-dimensional case. Let the unknown function be $x(t) : \mathbb{R} \to \mathbb{R}$, and consider a Functional $F[x]$ defined by a function of three variables $f$:
$$F[x] = \int_{t_0}^{t_1} f(t, x(t), x'(t))\, dt$$
Then, the Euler-Lagrange equation gives the maxima and minima (more precisely, the stationary points) of that type of functional. Note that this type of functional is just one very specific type of functional amongst all possible functionals that one might come up with. However, it turns out to be enough to do most of physics, so we are happy with it.
Given $f$, the Euler-Lagrange equations are a system of ordinary differential equations constructed from that $f$ such that the solutions to that system are the maxima/minima.
In the one-dimensional case, the system has a single ordinary differential equation:
$$\frac{\partial f}{\partial x}(t, x(t), x'(t)) - \frac{d}{dt}\left[\frac{\partial f}{\partial x'}(t, x(t), x'(t))\right] = 0$$
By $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial x'}$ we simply mean "the partial derivative of $f$ with respect to its second and third arguments". The notation is a bit confusing at first, but that's all it means.
Therefore, that expression ends up being at most a second order ordinary differential equation where $x(t)$ is the unknown, since:
  • the $\frac{\partial f}{\partial x}(t, x(t), x'(t))$ term is a function of $t$, $x(t)$ and $x'(t)$ only
  • the $\frac{\partial f}{\partial x'}(t, x(t), x'(t))$ term is also a function of $t$, $x(t)$ and $x'(t)$. And so its total derivative with respect to time will contain derivatives of $x$ only up to $x''(t)$
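A classic example that you can check by hand with the equation above: the shortest path between two points in the plane. Take $f(t, x, x') = \sqrt{1 + x'^2}$, so that the functional is the arc length of the graph of $x(t)$:
$$F[x] = \int_{t_0}^{t_1} \sqrt{1 + x'(t)^2}\, dt$$
Since $f$ does not depend on its second argument, $\frac{\partial f}{\partial x} = 0$, and the Euler-Lagrange equation reduces to:
$$\frac{d}{dt}\left[\frac{x'(t)}{\sqrt{1 + x'(t)^2}}\right] = 0$$
which forces $x'(t)$ to be constant: the minimizer is a straight line, as expected.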
Now let's think about the multi-dimensional case. Instead of having a single unknown function $x(t)$, we now have several of them: $x_1(t), x_2(t), \ldots, x_N(t)$. Think about the Lagrangian mechanics motivation of a double pendulum, where for a given time we have two angles.
Let's do the 2-dimensional case then. In that case, $f$ is going to be a function of 5 variables rather than 3 as in the one-dimensional case, and the functional looks like:
$$F[x_1, x_2] = \int_{t_0}^{t_1} f(t, x_1(t), x_2(t), x_1'(t), x_2'(t))\, dt$$
This time, the Euler-Lagrange equations are going to be a system of two ordinary differential equations on the two unknown functions $x_1(t)$ and $x_2(t)$, of order up to 2 in both:
$$\frac{\partial f}{\partial x_1} - \frac{d}{dt}\frac{\partial f}{\partial x_1'} = 0$$
$$\frac{\partial f}{\partial x_2} - \frac{d}{dt}\frac{\partial f}{\partial x_2'} = 0$$
At this point, notation is getting a bit clunky, so people will often condense the unknown functions into a single vector $\mathbf{x}(t) = (x_1(t), x_2(t))$ and write:
$$\frac{\partial f}{\partial \mathbf{x}} - \frac{d}{dt}\frac{\partial f}{\partial \mathbf{x}'} = 0$$
or just omit the arguments of $f$ entirely.
Video 1.
Calculus of Variations ft. Flammable Maths by vcubingx (2020)
Source.
These are the final equations that you derive from the Lagrangian via the Euler-Lagrange equation, and which specify how the system evolves with time.
The function that fully describes a physical system in Lagrangian mechanics.
When we are dealing with particles rather than fields, the action is obtained by integrating the Lagrangian over time:
$$S = \int_{t_0}^{t_1} \mathcal{L}(\mathbf{q}(t), \dot{\mathbf{q}}(t), t)\, dt$$
In the case of fields however, we expand things out further: the Lagrangian density takes the field and its space and time derivatives as arguments, and gets integrated over the space coordinates as well.
Since we are now working with something that gets integrated over space to obtain the total action, much like density would be integrated over space to obtain a total mass, the name "Lagrangian density" is fitting.
E.g. for a field $\phi(x, y, t)$ defined over two spatial dimensions:
$$S = \int \mathcal{L}\left(\phi, \frac{\partial \phi}{\partial x}, \frac{\partial \phi}{\partial y}, \frac{\partial \phi}{\partial t}, x, y, t\right) dx\, dy\, dt$$
Of course, if we were to write it like that all the time we would go mad, so we can just write a much more condensed vectorized version using the gradient with $\nabla \phi = \left(\frac{\partial \phi}{\partial x}, \frac{\partial \phi}{\partial y}\right)$:
$$S = \int \mathcal{L}\left(\phi, \nabla \phi, \frac{\partial \phi}{\partial t}, \mathbf{x}, t\right) d^2\mathbf{x}\, dt$$
And in the context of special relativity, people condense that even further by adding $t$ to the spacetime Four-vector as well, so you don't even need to write that separate pesky $\frac{\partial \phi}{\partial t}$: everything gets bundled into $\partial_\mu \phi$.
The main point of talking about the Lagrangian density instead of a Lagrangian for fields is likely that it treats space and time in a more uniform way, which is a basic requirement of special relativity: we have to be able to mix them up somehow to do Lorentz transformations. Notably, this is a key ingredient in a/the formulation of quantum field theory.
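A standard example of such a Lagrangian density, just to make the condensed relativistic notation concrete (this particular model is not discussed elsewhere in this section): the free real scalar field of mass $m$:
$$\mathcal{L} = \frac{1}{2} \partial_\mu \phi\, \partial^\mu \phi - \frac{1}{2} m^2 \phi^2$$
whose Euler-Lagrange equation is the Klein-Gordon equation $\partial_\mu \partial^\mu \phi + m^2 \phi = 0$.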
The variables of the Lagrangian, e.g. the angles of a double pendulum. From that example it is clear that these variables don't need to be simple things like Cartesian coordinates or polar coordinates (although these tend to be the overwhelming majority of simple cases encountered): any way to describe the system is perfectly valid.
In quantum field theory, those variables are actually fields.
For every continuous symmetry in the system (Lie group), there is a corresponding conservation law.
Furthermore, given the symmetry, we can calculate the derived conservation law, and vice versa.
As mentioned at buzzard.ups.edu/courses/2017spring/projects/schumann-lie-group-ups-434-2017.pdf, what the symmetry (Lie group) acts on (obviously?!) are the Lagrangian generalized coordinates. And from that, we immediately guess that manifolds are going to be important, because the generalized variables of the Lagrangian can trivially live on a non-Euclidean geometry, e.g. the phase space of a pendulum is an infinite cylinder.
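The simplest instance can be read directly off the Euler-Lagrange equation: if the Lagrangian does not depend on some generalized coordinate $q_i$ (i.e. the system is symmetric under shifts of $q_i$), then:
$$\frac{d}{dt}\frac{\partial \mathcal{L}}{\partial \dot{q}_i} = \frac{\partial \mathcal{L}}{\partial q_i} = 0$$
so the conjugate momentum $\frac{\partial \mathcal{L}}{\partial \dot{q}_i}$ is conserved. E.g. for a free particle, symmetry under translations of $x$ gives conservation of $m\dot{x}$, the linear momentum.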
Video 1.
The most beautiful idea in physics - Noether's Theorem by Looking Glass Universe (2015)
Source. One sentence stands out: the conserved quantities are called the generators of the transforms.
Video 2.
The Biggest Ideas in the Universe | 15. Gauge Theory by Sean Carroll (2020)
Source. This attempts a one hour hand wave explanation of it. It is a noble attempt and gives some key ideas, but it falls a bit short of Ciro's desires (as would anything that fit into one hour?)
Video 3.
The Symmetries of the universe by ScienceClic English (2021)
Source. youtu.be/hF_uHfSoOGA?t=144 explains intuitively why symmetry implies conservation!
Equivalent to Lagrangian mechanics but formulated in a different way.
TODO understand original historical motivation, www.youtube.com/watch?v=SZXHoWwBcDc says it is from optics.
Intuitively, the Hamiltonian is the total energy of the system in terms of arbitrary parameters, a bit like Lagrangian mechanics.
The key difference from Lagrangian mechanics is that the Hamiltonian approach groups variables into pairs of coordinates called the phase space coordinates:
  • generalized coordinates, generally positions or angles
  • their corresponding conjugate momenta, generally linear momenta or angular momenta
This leads to having two times more unknown functions than in the Lagrangian approach. However, it also leads to a system of ordinary differential equations with only first order derivatives, which is nicer. Notably, the evolution can be visualized more clearly in phase space.
Analogous to what the Euler-Lagrange equation is to Lagrangian mechanics, Hamilton's equations give the equations of motion from a given input Hamiltonian $H(\mathbf{q}, \mathbf{p}, t)$:
$$\frac{dq_i}{dt} = \frac{\partial H}{\partial p_i} \qquad \frac{dp_i}{dt} = -\frac{\partial H}{\partial q_i}$$
So once you have the Hamiltonian, you can write down this system of ordinary differential equations, which can then be numerically solved.
This is how you transform the Lagrangian into the Hamiltonian: define the conjugate momenta $p_i = \frac{\partial \mathcal{L}}{\partial \dot{q}_i}$ and take:
$$H(\mathbf{q}, \mathbf{p}, t) = \sum_i p_i \dot{q}_i - \mathcal{L}(\mathbf{q}, \dot{\mathbf{q}}, t)$$
with the $\dot{q}_i$ on the right hand side rewritten in terms of the $p_i$.
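A minimal worked example, using the standard one-dimensional harmonic oscillator with mass $m$ and spring constant $k$ (a textbook case, used here just for illustration):
$$\mathcal{L} = \frac{1}{2} m \dot{x}^2 - \frac{1}{2} k x^2
\quad\Rightarrow\quad
p = \frac{\partial \mathcal{L}}{\partial \dot{x}} = m \dot{x}
\quad\Rightarrow\quad
H = p \dot{x} - \mathcal{L} = \frac{p^2}{2m} + \frac{1}{2} k x^2$$
Hamilton's equations then give $\dot{x} = \partial H / \partial p = p/m$ and $\dot{p} = -\partial H / \partial x = -k x$, which combine into the expected $m\ddot{x} = -k x$.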
Video 1.
Lagrangian Mechanics Example: The Compound Atwood Machine by Michel van Biezen (2017)
Source. Part of lagrangian mechanics lectures by Michel van Biezen (2017).
One of the simplest mechanical systems, which behaves as a harmonic oscillator for small oscillations.
Video 1.
Pendulum Waves by Harvard Natural Sciences Lecture Demonstrations (2010)
Source. Holy crap.
youtu.be/Ca7c5B7Js18?t=803 compares Lagrangian mechanics equation vs the direct x/y coordinate equation.
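For a quick numerical feel for the system, here is a minimal sketch (an illustration, not taken from any of the sources above) that integrates the pendulum equation of motion $\ddot{\theta} = -(g/L)\sin\theta$, which is what the Euler-Lagrange equation gives for the angle coordinate; the length and initial angle are arbitrary choices:
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81  # gravitational acceleration in m/s^2
L = 1.0   # pendulum length in m, arbitrary choice

def rhs(t, y):
    # y[0] is the angle theta, y[1] its angular velocity
    theta, omega = y
    return [omega, -(g / L) * np.sin(theta)]

# Start at 45 degrees, at rest, and integrate for 10 seconds.
sol = solve_ivp(rhs, [0.0, 10.0], [np.pi / 4, 0.0], dense_output=True)
print(sol.sol(np.linspace(0.0, 10.0, 6))[0])  # angle at a few sample times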
Resonance in a mechanical system.
This idealization does not seem to be possible at all in the context of Maxwell's equations with pointlike particles.
Video 1.
Simulation on the GPU by Ten Minute Physics (2022)
Source. The author is a big PhysX guy.
Figure 1.
gnuplot plot of the y position of a sphere bouncing on a plane simulated in Bullet Physics. Source. From: What is the simplest collision example possible in a Bullet Physics simulation?
Does not seem to support it unfortunately:
Python library for Bullet Physics.
Became very popular as a result of people using Bullet Physics for reinforcement learning AI training robot simulations.
Website: pybullet.org/
Source code: somewhere inside the main Bullet Physics source tree. Yay.
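To give an idea of what the API looks like, here is a minimal sketch that drops a sphere onto the built-in ground plane and prints its height over time, in the spirit of the bouncing-sphere plot above (radius, restitution and durations are arbitrary illustrative values):
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)  # headless physics server, no GUI
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
plane = p.loadURDF("plane.urdf")  # ground plane shipped with pybullet_data

# A unit mass sphere starting 2 m above the plane.
sphere = p.createMultiBody(
    baseMass=1.0,
    baseCollisionShapeIndex=p.createCollisionShape(p.GEOM_SPHERE, radius=0.1),
    basePosition=[0, 0, 2.0],
)
# Give both bodies some restitution so the sphere actually bounces.
p.changeDynamics(plane, -1, restitution=0.9)
p.changeDynamics(sphere, -1, restitution=0.9)

for step in range(3 * 240):  # 3 simulated seconds at the default 240 Hz timestep
    p.stepSimulation()
    if step % 60 == 0:
        pos, _ = p.getBasePositionAndOrientation(sphere)
        print(step / 240.0, pos[2])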
Was a closed source project by "Roboti LLC", which was then acquired by DeepMind in October 2021 and open sourced March 2022: www.deepmind.com/blog/open-sourcing-mujoco
This library is quite cool. Feels very brutally lean and mean.
Tested on Ubuntu 23.10:
git clone https://github.com/google-deepmind/mujoco
cd mujoco
git checkout 5d46c39529819d1b31249e249ca399f306a108ac
mkdir -p build
cd build
cmake ..
make -j
Now let's play. Minimal interactive UI simulation of a simple MJCF scene with one falling cube:
bin/basic ../doc/_static/hello.xml
Test source code: github.com/google-deepmind/mujoco/blob/5d46c39529819d1b31249e249ca399f306a108ac/sample/basic.cc. The only thing you can do is rotate the scene with the computer mouse it seems. Mentioned at: mujoco.readthedocs.io/en/2.2.2/programming.html#sabasic
Some more interesting models can be found under the model/ directory: github.com/google-deepmind/mujoco/tree/5d46c39529819d1b31249e249ca399f306a108ac/model E.g. the imaginary humanoid robot DeepMind used in many demos can be seen with:
bin/basic ../model/humanoid/humanoid.xml
A more advanced UI with a few controls:
bin/simulate ../doc/_static/hello.xml
Test source code: github.com/google-deepmind/mujoco/tree/5d46c39529819d1b31249e249ca399f306a108ac/simulate. Mentioned at: mujoco.readthedocs.io/en/2.2.2/programming.html#sasimulate
A very cool thing about that UI is that you can manually control joints. There are no joints in the hello.xml, but e.g. with the humanoid model:
bin/simulate ../model/humanoid/humanoid.xml
under "Control" you move each joint of the robot separately which is quite cool.
Video 1.
Demo of MuJoCo's built-in simulate viewer by Yuval Tassa (2019)
Source.
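If you just want to step a model headlessly from a script rather than use the viewers, the official Python bindings can also be used (pip install mujoco); a minimal sketch, assuming it is run from the repository root so that the hello.xml path resolves:
import mujoco

# Load the same minimal falling-cube scene used by the C samples above.
model = mujoco.MjModel.from_xml_path("doc/_static/hello.xml")
data = mujoco.MjData(model)

# Step for one simulated second and print the generalized coordinates.
while data.time < 1.0:
    mujoco.mj_step(model, data)
print(data.time, data.qpos)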
There's also a bin/record test executable that presumably renders the simulation directly to a file:
bin/record ../doc/_static/hello.xml 5 60 rgb.out
ffmpeg -f rawvideo -pixel_format rgb24 -video_size 800x800 -framerate 60 -i rgb.out -vf "vflip" video.mp4
Mentioned at: mujoco.readthedocs.io/en/2.2.2/programming.html#sarecord but TODO that produced a broken video, related issues:
Had hardware acceleration in mind from the very start, and for a long time that has meant GPU acceleration.
