Of course, if academic journals require greater reproducibility for publication, then the cost per paper increases.

However, the total cost of publication has to be smaller than the combined cost that everyone who reads the paper would otherwise spend trying to reproduce it, no?

The truth is, part of the replication crisis is also due to research groups not wanting to share their precious secrets with others, so they can keep ahead of the publication curve, or maybe spin off a startup.

And when it comes to papers, things are even crazier: big companies manage to publish white papers in peer reviewed journals.

Ciro Santilli wants to help in this area with his videos of all key physics experiments project idea.

Cool initiative. Papers that do not share source code should be banned from peer reviewed academic journals.

It is understandable that you might not be able to reproduce a paper that does a natural science experiment, given that physics is brutal.

But for papers that have either source code or data sets, academic journals must require that those be made available, or refuse to publish.

Any document without such obvious reproducibility elements is a white paper, not a proper peer reviewed paper.

Big companies like Google are able to publish white papers as peer reviewed papers just due to their reputation, e.g. without giving any of the source code that is central to the article.

It is insane.

E.g.: AlphaGo is closed source but was published as www.nature.com/articles/nature16961 in Nature in 2016.

Not the usual bullshit you were expecting from the philosophy of Science, right?

Some notable quoters:

- Jacques Monod has the exact quote as presented here: pubmed.ncbi.nlm.nih.gov/22042272/, though presumably it was originally in French; TODO find the French version
- youtu.be/AYC5lE0b8os?t=41 A Computational Whole-Cell Model Predicts Genotype From Phenotype- Markus Covert by "Calit2ube" (2013), see also: Section "Whole cell simulation"
- the book Genius: Richard Feynman and Modern Physics by James Gleick (1994) mentions a few incidents of this involving Feynman, see e.g. chapter "New Particles, New Language" where he and fellow theorist Hans Bethe immediately spot problems with experimentalists' data in suspicious results

The natural sciences are not just a tool to predict the future.

Everything is magic out of our control.

The natural sciences allow us to peek, with huge concentrated effort, into tiny little bits of those unknowns, and blow our minds as we notice that we don't know anything.

For all practical purposes in life, there is a huge macro micro gap. We are only able to directly perceive and influence the macro events. And through those we try to affect micro events. Because for good or bad, micro events reflect in the macro world.

It is as if we live in a different plane of existence, above molecules and below galaxies. The hierarchy of Figure "xkcd 435: Fields arranged by purity" puts that nicely into perspective; shame it only starts at the economic level, not going up to astronomy.

The great beauty of science is that it allows us to puncture through some of the layers of reality, either up or down, away from our daily experience.

And the great beauty of artificial intelligence research is that it allows us to peer deeper into exactly our layer of existence.

Every one or two weeks Ciro Santilli remembers that he and everything he touches are just a bunch of atoms, and that is an amazing feeling. This is Ciro's preferred source of Great doubt. Another concept that comes to mind is when you see it, you'll shit bricks.

Perhaps the feeling of physics and the illusion of life reach their peak in molecular biology.

Just look at your fucking hand right now.

Do you have any idea of how each of the cells in it works? Isn't it at least 100 times more complex than the material of the table your hand is currently resting on?

This is the non-science fiction version of the Lotus-Eater Machine.

Alan Watts's "Philosopher" talk mentions related ideas:

The origin of a person who is defined as a philosopher, is one who finds that existence itself is exceedingly odd.

The toddler of a friend of Ciro Santilli's wife asked her mum:

Why doesn't my tiger doll close its eyes when we sleep?

Our perception of the macroscopic world is so magic that children have to learn the difference between living and non-living things.

James Somers put it very well as well in his article I should have loved biology; this quote was brought to Ciro's attention by Bert Hubert's website^{[ref]}. The same applies to other natural sciences.

I should have loved biology but I found it to be a lifeless recitation of names: the Golgi apparatus and the Krebs cycle; mitoses, meioses; DNA, RNA, mRNA, tRNA.

In the textbooks, astonishing facts were presented without astonishment. Someone probably told me that every cell in my body has the same DNA. But no one shook me by the shoulders, saying how crazy that was. I needed Lewis Thomas, who wrote in The Medusa and the Snail:

For the real amazement, if you wish to be amazed, is this process. You start out as a single cell derived from the coupling of a sperm and an egg; this divides in two, then four, then eight, and so on, and at a certain stage there emerges a single cell which has as all its progeny the human brain. The mere existence of such a cell should be one of the great astonishments of the earth. People ought to be walking around all day, all through their waking hours calling to each other in endless wonderment, talking of nothing except that cell.

Nothing makes the fact that your life is an illusion clearer than animations of molecular biology processes. You just have no idea what is going on inside your own body right now!

And yet, we live, oblivious to all of it.

Amazing creators:

Drew Berry recommends having a look at clarafi.

Uses CC BY-SA, what a hero.

It goes along these lines: if you could control your life multiple times to make it perfect, you would eventually get tired of paradise, and you would go further and further into creating uncertain worlds with some suffering, until you reached the current real world.

Very similar to The Matrix (1999) when Agent Smith talks about the failed Paradise Matrix shown at www.youtube.com/watch?v=9Qs3GlNZMhY:

Did you know that the first Matrix was designed to be a perfect human world where none suffered, where everyone would be happy? It was a disaster. No one would accept the program. Entire crops were lost. Some believed that we lacked the programming language to describe your "perfect world". But I believe that, as a species, human beings define their reality through misery and suffering. So the perfect world was a dream that your primitive cerebrum kept trying to wake up from.

From episode "Mortynight Run"

Look at this. You beat cancer, and then you went back to work at the carpet store? Booooh.

Figure "xkcd 435: Fields arranged by purity" must again be cited.

The opposite of from first principles.

Basically the opposite of reductionism.

Ciro Santilli often wonders to himself: how much of the natural sciences can one learn in a lifetime? Certainly, a very strong basis, with concrete experiments, in physics, chemistry and biology should be attainable to all? How much Ciro manages to learn and teach in those areas is a kind of success metric for Ciro's life.

Physics (like all well done science) is the art of predicting the future by modelling the world with mathematics.

And predicting the future is the first step towards controlling it, i.e.: engineering.

Ciro Santilli doesn't know physics. He writes about it partly to start playing with some scientific content for: OurBigBook.com, partly because this stuff is just amazingly beautiful.

Ciro's main intellectual physics fetishes are to learn quantum electrodynamics (understanding the point of Lie groups being a subpart of that) and condensed matter physics.

Every science is Physics in disguise, but the number of objects in the real world is so large that we can't solve the real equations in practice.

Luckily, due to emergence, we can use uglier higher level approximations of the world to solve many problems, within the complex limits of applicability of those approximations.

As of 2019, all known physics can be described by two theories:

- general relativity
- the Standard Model

Unifying those two into the theory of everything is one of the major goals of modern physics.

The approach many courses take to physics, especially "modern Physics", is really bad. This is how it should be taught:

- start by describing experiments that the previous best theory did not explain, see also: Section "Physics education needs more focus on understanding experiments and their history"
- then, give the final formula for the next best theory
- then, give all the important final implications of that formula, and how it amazingly describes the experiments. In particular this means: doing physics means calculating a number
- then, give some mathematical intuition on the formulas, and how the main equation could have been derived
- finally, then and only then, start deriving the outcomes of the main formula in detail

This is likely because at some point, experiments get more and more complicated, and so people are tempted to say "this is the truth" instead of "this is why we think this is the truth", which is much harder.

But we can't be lazy, there is no replacement to the why.

Related:

- settheory.net/learnphysics and www.youtube.com/watch?v=5MKjPYuD60I&list=PLJcTRymdlUQPwx8qU4ln83huPx-6Y3XxH from settheory.net
- math.ucr.edu/home/baez/books.html by John Baez. Mentions:

  This webpage doesn't have lots of links to websites. Websites just don't have the sort of in-depth material you need to learn technical subjects like advanced math and physics — at least, not yet. To learn this stuff, you need to read lots of books

  Ciro Santilli is trying to change that: OurBigBook.com.
- web.archive.org/web/20210324182549/http://jakobschwichtenberg.com/one-thing/ by Jakob Schwichtenberg

This is the only way to truly understand and appreciate the subject.

Understanding the experiments gets intimately entangled with basically learning the history of physics, which is extremely beneficial as also highlighted by Ron Maimon, related: there is value in tutorials written by early pioneers of the field.

"How we know" is basically a more fundamental point than "what we know" in the natural sciences.

In the Surely You're Joking, Mr. Feynman chapter O Americano, Outra Vez!, Richard Feynman describes his experience teaching in Brazil in the early 1950s, and how everything was memorized, without any explanation of the experiments or of how the theory relates to the real world!

Although things have improved considerably since then in Brazil, Ciro still feels that some areas of physics are still taught without enough experiments described upfront. Notably, ironically, quantum field theory, which is where Feynman himself worked.

Feynman gave huge importance to understanding and explaining experiments, as can also be seen on Richard Feynman Quantum Electrodynamics Lecture at University of Auckland (1979).

Everyone is a beginner when the field is new, and there is value in tutorials written by beginners.

For example, Ciro Santilli found it shocking how direct and satisfying Richard Feynman's scientific vulgarization of quantum electrodynamics was, e.g. at: Richard Feynman Quantum Electrodynamics Lecture at University of Auckland (1979): assuming only minimal knowledge of mathematics, he was able to give a full satisfactory picture in just a few hours.

The same also applies to early original papers of the field, as notably put forward by Ron Maimon.

In Physics, in order to test a theory, you must be able to extract a number from it.

It does not matter how, if it is exact, or numerical, or a message from God: a number has to come out of the formulas in the end, and you have to compare it with the experimental data.

Many theoretical physicists seem to forget this in their lectures, see also: Section "How to teach and learn physics".

Nature is a black box, right?

You don't need to understand the from first principles derivation of every single phenomenon.

And most important of all: you should not start learning phenomena by reading the from first principles derivation.

Instead, you should see what happens in experiments, and how it matches some known formula (which hopefully has been derived from first principles).

Only open the boxes (understand from first principles derivation) if the need is felt!

E.g.:

- you don't need to understand everything about why SQUID devices have their specific I-V curve. You have to first of all learn what the I-V curve would be in an experiment!
- you don't need to understand the fine details of how cavity magnetrons work. What you need to understand first is what kind of microwave you get from what kind of input (DC current), and how that compares to other sources of microwaves
- lasers: same

Physics is all about predicting the future. If you can predict the future with an end-result formula, that's already predicting the future, and valid.

Videos should be found/made for all of those: videos of all key physics experiments

- speed of light experiment
- basically all experiments listed under Section "Quantum mechanics experiment" such as:
- Davisson-Germer experiment

This shows that viewing electromagnetism as gauge theory does have experimentally observable consequences. TODO understand what that means.

In more understandable terms, it shows that the magnetic vector potential matters even where the magnetic field is 0.

Classical theory predicts that the output frequency must be the same as the input one, since the electromagnetic wave makes the electron vibrate with the same frequency as itself, which then irradiates further waves.

But the output wavelengths are longer, because photons are discrete and energy is proportional to frequency.

The formula is exactly that of two relativistic billiard balls colliding.

Therefore this is evidence that photons exist and have momentum.
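To make "a number has to come out of the formulas" concrete, here is a minimal Python sketch of that billiard-ball result, the standard Compton shift formula $\lambda' - \lambda = \frac{h}{m_e c}(1 - \cos\theta)$; the constant values are assumed CODATA figures:

```python
import math

h = 6.62607015e-34      # Planck constant, J s
m_e = 9.1093837015e-31  # electron mass, kg
c = 299792458.0         # speed of light, m/s

# The Compton wavelength of the electron sets the scale of the shift.
compton_wavelength = h / (m_e * c)  # ~2.43 picometers

def wavelength_shift(theta):
    """Increase in photon wavelength after scattering off an electron at angle theta."""
    return compton_wavelength * (1 - math.cos(theta))

# Backscattered photons (theta = pi) gain the maximum, two Compton wavelengths:
print(wavelength_shift(math.pi))  # ~4.85e-12 m
```

The shift is angle-dependent but intensity-independent, which is exactly what the photon picture demands and the classical wave picture forbids.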

No matter how high the wave intensity, if the frequency is small, no electrons are removed from the material.

This is different from classical waves, where energy is proportional to intensity, and is consistent with the existence of photons and the Planck-Einstein relation.
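A tiny numerical sketch of this threshold behaviour; the caesium work function below is an approximate literature value, used here purely as an illustrative assumption:

```python
h = 6.62607015e-34   # Planck constant, J s
e = 1.602176634e-19  # elementary charge, C (to convert J <-> eV)

work_function_eV = 2.1  # caesium photocathode, approximate value

# Minimum photon frequency that can eject an electron, from E = h * f:
threshold_freq = work_function_eV * e / h
print(threshold_freq)  # ~5.1e14 Hz

# Intensity is irrelevant: only the per-photon energy h * f matters.
for f in (450e12, 650e12):  # reddish vs bluish light
    ejects = h * f / e > work_function_eV
    print(f, ejects)
```

Cranking up the brightness of the 450 THz beam just sends more sub-threshold photons: none of them can individually pay the work function.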

The key thing in a good system of units is to define units in a way that depends only on physical properties of nature.

Ideally (or basically necessarily?) the starting point generally has to be discrete phenomena, e.g.

- number of times some light oscillates per second
- number of steps in a quantum Hall effect or Josephson junction

What we don't want is to have macroscopic measurement artifacts (or even worse, the size of body parts! Insert dick joke), as you can always make a bar slightly more or less wide. And even metals evaporate over time! Though the mad people of the Avogadro project still attempted otherwise well into the 2010s!

Standards of measure that don't depend on artifacts are known as intrinsic standards.

The key is to define only the minimum number of measures: if you define more, the definitions become overconstrained and could be inconsistent.

Learning the modern SI is also a great way to learn some interesting Physics.

Great overview of the earlier history of unit standardization.

Gives particular emphasis to the invention of gauge blocks.

web.archive.org/web/20181119214326/https://www.bipm.org/utils/common/pdf/CGPM-2018/26th-CGPM-Resolutions.pdf gives it in raw:

The breakdown is:

- the unperturbed ground state hyperfine transition frequency of the caesium-133 atom $\Delta\nu_{Cs}$ is 9 192 631 770 Hz
- the speed of light in vacuum c is 299 792 458 m/s
- the Planck constant h is 6.626 070 15 × $10^{-34}$ J s
- the elementary charge e is 1.602 176 634 × $10^{-19}$ C
- the Boltzmann constant k is 1.380 649 × $10^{-23}$ J/K
- the Avogadro constant $N_A$ is 6.022 140 76 × $10^{23}$ mol⁻¹
- the luminous efficacy of monochromatic radiation of frequency 540 × $10^{12}$ Hz, Kcd, is 683 lm/W

- actually use some physical constant:
  - the unperturbed ground state hyperfine transition frequency of the caesium-133 atom $\Delta\nu_{Cs}$ is 9 192 631 770 Hz. Defines the second in terms of caesium-133 experiments. The beauty of this definition is that we only have to count an integer number of discrete events, which is what allows us to make things precise.
  - the speed of light in vacuum c is 299 792 458 m/s. Defines the meter in terms of speed of light experiments. We already had the second from the previous definition.
  - the Planck constant h is 6.626 070 15 × $10^{-34}$ J s. Defines the kilogram in terms of the Planck constant.
  - the elementary charge e is 1.602 176 634 × $10^{-19}$ C. Defines the coulomb in terms of the electron charge.
- arbitrary definitions based on the above just to match historical values as well as possible:
  - the Boltzmann constant k is 1.380 649 × $10^{-23}$ J/K. Arbitrarily defines temperature from previously defined energy (J) to match historical values.
  - the Avogadro constant $N_A$ is 6.022 140 76 × $10^{23}$ mol⁻¹.
  - the luminous efficacy of monochromatic radiation of frequency 540 × $10^{12}$ Hz, Kcd, is 683 lm/W. Arbitrarily defines the candela in terms of previous values to match historical records. The most useless unit comes last as you'd expect.

TODO how does basing it on the elementary charge help at all? Can we count individual electrons going through a wire? www.nist.gov/si-redefinition/ampere/ampere-quantum-metrology-triangle by the NIST explains that it is basically due to the following two quantized solid-state physics phenomena/experiments that allow for extremely precise measurements of the elementary charge:

- quantum Hall effect, which has discrete resistances of type:
  $R_{xy} = \frac{V_{Hall}}{I_{channel}} = \frac{h}{e^2 \nu}$
  for integer values of $\nu$.
- Josephson effect, which provides the Josephson constant which equals:
  $K_J = \frac{2e}{h}$
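A quick Python check of the two constants involved, using the exact post-2019 values of $h$ and $e$:

```python
h = 6.62607015e-34   # Planck constant, J s (exact since 2019)
e = 1.602176634e-19  # elementary charge, C (exact since 2019)

# Josephson constant: frequency-to-voltage ratio of a Josephson junction.
K_J = 2 * e / h  # Hz / V
print(K_J)       # ~4.836e14 Hz/V

# von Klitzing constant: the quantum Hall resistance plateau at nu = 1.
R_K = h / e**2   # ohm
print(R_K)       # ~25812.8 ohm
```

Since both constants are combinations of $h$ and $e$ only, measuring a Josephson voltage and a quantum Hall resistance pins down $h$ and $e$ with the full precision of frequency counting.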

Unit of electric current.

Affected by the 2019 redefinition of the SI base units.

Unit of mass.

Defined in the 2019 redefinition of the SI base units via the Planck constant. This was possible due to the development of the kibble balance.

Measures weight from a voltage.

TODO appears to rely on both quantum Hall effect and Josephson effect

Named after radio pioneer Heinrich Hertz.

Uses the frequency of the hyperfine structure of the caesium-133 ground state, i.e. spin up vs spin down of its valence electron $6s^1$, to define the second.

International System of Units definition of the second since 1967, because this is what atomic clocks use.

TODO why does this have more energy than the hyperfine split of the hydrogen line given that it is further from the nucleus?

Highlighted at the Origins of Precision by Machine Thinking (2017).

A series of systems usually derived from the International System of Units that are more convenient for certain applications.

Currently an informal name for the Standard Model

Chronological outline of the key theories:

- Maxwell's equations
- Schrödinger equation
- Date: 1926
- Numerical predictions:
- hydrogen spectral line, excluding finer structure such as 2p up and down split: en.wikipedia.org/wiki/Fine-structure_constant

- Dirac equation
- Date: 1928
- Numerical predictions:
- hydrogen spectral line including 2p split, but excluding even finer structure such as Lamb shift

- Qualitative predictions:
- Antimatter
- Spin as part of the equation

- quantum electrodynamics
- Date: 1947 onwards
- Numerical predictions:
- Lamb shift
- anomalous magnetic dipole moment of the electron
- Qualitative predictions:
- Antimatter
- spin as part of the equation
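As a small illustration of "calculating a number" from these theories, the Schrödinger-level hydrogen energy levels $E_n = -13.6\ \text{eV}/n^2$ already give the gross spectral lines (the finer splits need the Dirac equation and quantum electrodynamics, as outlined above); the constants used are assumed standard values:

```python
rydberg_eV = 13.605693   # hydrogen ground state binding energy, eV
h_eV = 4.135667696e-15   # Planck constant, eV s
c = 299792458.0          # speed of light, m/s

def photon_wavelength_nm(n_upper, n_lower):
    """Wavelength of the photon emitted in the n_upper -> n_lower transition."""
    delta_eV = rydberg_eV * (1 / n_lower**2 - 1 / n_upper**2)
    return h_eV * c / delta_eV * 1e9

# Balmer series: transitions down to n = 2 land in the visible range.
print(photon_wavelength_nm(3, 2))  # ~656 nm, the red H-alpha line
```

Comparing these numbers against measured spectra, and then against the small deviations the formula can't explain, is exactly the experiment-first progression advocated here.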

As of the 20th century, this can be described well as "the phenomena described by Maxwell's equations".

Back through its history however, that was not at all clear. This highlights how big of an achievement Maxwell's equations are.

Unified all previous electromagnetism theories into one set of equations.

Explains the propagation of light as a wave, and matches the previously known relationship between the speed of light and electromagnetic constants.

The equations are a limit case of the more complete quantum electrodynamics, and unlike that more general theory, they do not account for the quantization of light into photons.

The equations are a system of partial differential equations.

The system consists of 6 unknown functions that map 4 variables, time t and the x, y and z positions in space, to a real number:

- $E_x(t,x,y,z)$, $E_y(t,x,y,z)$, $E_z(t,x,y,z)$: components of the electric field $E : \mathbb{R}^4 \to \mathbb{R}^3$
- $B_x(t,x,y,z)$, $B_y(t,x,y,z)$, $B_z(t,x,y,z)$: components of the magnetic field $B : \mathbb{R}^4 \to \mathbb{R}^3$

and two known input functions:

- $\rho : \mathbb{R}^4 \to \mathbb{R}$: density of charges in space
- $J : \mathbb{R}^4 \to \mathbb{R}^3$: current vector in space. This represents the strength of moving charges in space.

Due to the conservation of charge however, those input functions have the following restriction:

$\frac{\partial \rho}{\partial t} + \nabla \cdot J = 0$
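This conservation of charge restriction can be checked numerically for a simple assumed setup: a rigid 1D charge blob moving at constant speed, where $\rho(t,x) = g(x - vt)$ and $J = v\rho$:

```python
import numpy as np

v = 2.0                          # blob speed, arbitrary units
g = lambda x: np.exp(-x**2)      # blob shape

x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]
t, dt = 0.3, 1e-6

rho = lambda t: g(x - v * t)     # charge density at time t
J = v * rho(t)                   # current density of the moving charges

# Central differences for d(rho)/dt and dJ/dx:
drho_dt = (rho(t + dt) - rho(t - dt)) / (2 * dt)
dJ_dx = np.gradient(J, dx)

# d(rho)/dt + dJ/dx should vanish, up to discretization error:
residual = float(np.max(np.abs(drho_dt + dJ_dx)))
print(residual)
```

The residual is at the level of the finite-difference error, confirming that a translating blob with $J = v\rho$ satisfies the continuity equation.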

Also consider the following cases:

- if a spherical charge is moving, then this of course means that $\rho$ is changing with time, and at the same time that a current exists
- in an ideal infinite cylindrical wire however, we can have constant $\rho$ in the wire, but there can still be a current because those charges are moving. Such an infinite cylindrical wire is of course an ideal case, but one which is a good approximation to the huge number of electrons that travel in an actual wire.

The goal of finding $E$ and $B$ is that those fields allow us to determine the force that gets applied to a charge via the Equation "Lorentz force", and then to find the force we just need to integrate over the entire body.

Finally, now that we have defined all terms involved in the Maxwell equations, let's see the equations:

$\nabla \cdot E = \frac{\rho}{\varepsilon_0}$

$\nabla \cdot B = 0$

$\nabla \times E = -\frac{\partial B}{\partial t}$

$\nabla \times B = \mu_0 \left( J + \varepsilon_0 \frac{\partial E}{\partial t} \right)$

You should also review the intuitive interpretation of divergence and curl.

$\text{force density} = \rho E + J \times B$

A little suspicious that it bears the name of Lorentz, who is famous for special relativity, isn't it? See: Maxwell's equations require special relativity.
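Evaluating the force density at a single point is straightforward; the field values below are arbitrary illustrative numbers:

```python
import numpy as np

rho = 1e-6                       # charge density, C/m^3
E = np.array([0.0, 0.0, 1e3])    # electric field, V/m
J = np.array([2.0, 0.0, 0.0])    # current density, A/m^2
B = np.array([0.0, 0.5, 0.0])    # magnetic field, T

# Lorentz force density: rho * E + J x B
f = rho * E + np.cross(J, B)
print(f)  # N/m^3, here (0, 0, 1.001)
```

Integrating this density over a body's volume gives the total electromagnetic force on it, as described above.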

For numerical algorithms and to get a more low level understanding of the equations, we can expand all terms to the simpler and more explicit form:

$\frac{\partial E_x}{\partial x} + \frac{\partial E_y}{\partial y} + \frac{\partial E_z}{\partial z} = \frac{\rho}{\varepsilon_0}$

$\frac{\partial B_x}{\partial x} + \frac{\partial B_y}{\partial y} + \frac{\partial B_z}{\partial z} = 0$

$\frac{\partial E_z}{\partial y} - \frac{\partial E_y}{\partial z} = -\frac{\partial B_x}{\partial t}$

$\frac{\partial E_x}{\partial z} - \frac{\partial E_z}{\partial x} = -\frac{\partial B_y}{\partial t}$

$\frac{\partial E_y}{\partial x} - \frac{\partial E_x}{\partial y} = -\frac{\partial B_z}{\partial t}$

$\frac{\partial B_z}{\partial y} - \frac{\partial B_y}{\partial z} = \mu_0 \left( J_x + \varepsilon_0 \frac{\partial E_x}{\partial t} \right)$

$\frac{\partial B_x}{\partial z} - \frac{\partial B_z}{\partial x} = \mu_0 \left( J_y + \varepsilon_0 \frac{\partial E_y}{\partial t} \right)$

$\frac{\partial B_y}{\partial x} - \frac{\partial B_x}{\partial y} = \mu_0 \left( J_z + \varepsilon_0 \frac{\partial E_z}{\partial t} \right)$
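This explicit scalar form maps directly onto numerical algorithms. Here is a minimal 1D finite-difference time-domain sketch in Python that keeps only the $E_z$/$B_y$ pair in vacuum (so $\rho = 0$, $J = 0$) and injects a Gaussian pulse at the left edge; the grid and source parameters are arbitrary illustrative choices:

```python
import numpy as np

c = 299792458.0
eps0 = 8.8541878128e-12
mu0 = 1.25663706212e-6

nx = 200          # number of grid cells
dx = 1e-3         # cell size, m
dt = dx / c       # time step at Courant number 1: exact propagation in 1D

Ez = np.zeros(nx)
By = np.zeros(nx)

for n in range(150):
    # Faraday's law in 1D: dBy/dt = dEz/dx
    By[:-1] += dt / dx * (Ez[1:] - Ez[:-1])
    # Ampere's law in vacuum in 1D: dEz/dt = (1 / (mu0 * eps0)) * dBy/dx
    Ez[1:] += dt / (mu0 * eps0 * dx) * (By[1:] - By[:-1])
    # Hard source at the left edge: a Gaussian pulse in time
    Ez[0] = np.exp(-((n - 30) / 10.0) ** 2)

# The pulse peak, injected at step 30, has travelled right at speed c.
print(int(np.argmax(np.abs(Ez))))  # ~119
```

With the time step chosen at the 1D magic ratio $dt = dx/c$, the discrete scheme propagates the pulse exactly one cell per step without numerical dispersion.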

As seen from explicit scalar form of the Maxwell's equations, this expands to 8 equations, so the question arises if the system is over-determined because it only has 6 functions to be determined.

As explained on the Wikipedia page however, this is not the case, because if the first two equations hold for the initial condition, then the other six equations imply that they also hold for all time, so they can be essentially omitted.

It is also worth noting that the first two equations don't involve time derivatives. Therefore, they can be seen as spatial constraints.

TODO: the electric field and magnetic field can be expressed in terms of the electric potential and magnetic vector potential. So then we only need 4 variables?

Static case of Maxwell's law for electricity only.

The "static" part is important: if this law were true for moving charges, we would be able to transmit information instantly at infinite distances. This is basically where the idea of field comes in.

In the standard formulation of Maxwell's equations, the electric current is a convenient but magic input.

Would it be possible to use Maxwell's equations to solve a system of pointlike particles such as electrons instead?

The following suggest no, or only for certain subcases less general than Maxwell's equations:

This is the type of thing where the probability aspect of quantum mechanics seems it could "help".

TODO it would be awesome if we could de-generalize the equations in 2D and do a JavaScript demo of it!

Not sure it is possible though because the curl appears in the equations:

TODO: I'm surprised that the Wiki page barely talks about it, and there are few Google hits too! A sample one: www.researchgate.net/publication/228928756_On_the_existence_and_uniqueness_of_Maxwell's_equations_in_bounded_domains_with_application_to_magnetotellurics

In the context of Maxwell's equations, it is a vector field that is one of the inputs of the equations.

Section "Maxwell's equations with pointlike particles" asks if the theory would work for pointlike particles in order to predict the evolution of this field as part of the equations themselves rather than as an external element.

Measured in amperes in the International System of Units.

After the 2019 redefinition of the SI base units it is by definition exactly 1.602 176 634 × $10^{-19}$ joules.

A voltage appears perpendicular to the current when a magnetic field is applied.

An intuitive video is:

The key formula for it is:

$V_H = \frac{I_x B_z}{n t e}$

where:

- $I_{x}$: current on x direction, which we can control by changing the voltage $V_{x}$
- $B_{z}$: strength of transversal magnetic field applied
- $n$: charge carrier density, a property of the material used
- $t$: height of the plate
- $e$: electron charge
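Plugging in rough numbers for a copper plate shows why Hall voltages are tiny in good conductors; the carrier density is a literature value, while the plate thickness, current and field are assumed for illustration:

```python
e = 1.602176634e-19  # elementary charge, C
n = 8.5e28           # carrier density of copper, m^-3 (literature value)
t = 1e-4             # plate thickness, m (0.1 mm, assumed)
I_x = 1.0            # current, A
B_z = 1.0            # transverse magnetic field, T

V_H = I_x * B_z / (n * t * e)
print(V_H)  # ~7e-7 V: under a microvolt even at a full ampere and tesla
```

Semiconductors, with far lower carrier density $n$, give much larger Hall voltages, which is one reason they are the usual material for Hall-effect magnetic field sensors.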

Applications:

- the direction of the effect proves that electric currents in common electrical conductors are made up of negatively charged particles
- measure magnetic fields, TODO vs other methods

Other more precise non-classical versions:

Bibliography:

In some contexts, we want to observe what happens for a given fixed magnetic field strength on a specific plate (thus $t$ and $n$ are also fixed).

In those cases, it can be useful to talk about the "Hall resistance", defined as:

$R_{xy} = \frac{V_y}{I_x}$

So note that it is not a "regular resistance", it just has the same dimensions, and is more usefully understood as a proportionality constant for the voltage given an input current $I_x$:

$V_y = R_{xy} I_x$

This notion can be useful because everything else being equal, if we increase the current $I_{x}$, then $V_{y}$ also increases proportionally, making this a way to talk about the voltage in a current independent manner.
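A short sketch of that current-independence, with assumed illustrative values ($n$ for copper from the literature; thickness and field arbitrary):

```python
e = 1.602176634e-19  # elementary charge, C
n = 8.5e28           # carrier density, m^-3 (copper, literature value)
t = 1e-4             # plate thickness, m (assumed)
B_z = 1.0            # transverse magnetic field, T (assumed)

# Hall resistance: fixed by the field, material and geometry alone.
R_xy = B_z / (n * t * e)  # ohm

# V_y then scales linearly with the driving current I_x:
for I_x in (0.5, 1.0, 2.0):
    print(I_x, R_xy * I_x)
```

Doubling $I_x$ doubles $V_y$, while $R_{xy}$ itself stays put, which is what makes it the natural current-independent quantity to plot against the magnetic field.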

And this is particularly the case for the quantum Hall effect, where $R_{xy}$ is constant for wide ranges of applied magnetic field, and TODO presumably the height $t$ can be made into a single molecular layer with chemical vapor deposition or the like, and is therefore fixed.