Starting with the 2019 redefinition of the SI base units, the elementary charge is assigned a fixed numerical value, and the ampere is defined in terms of it and the second, which is beautiful.
This choice was not made because we attempt to count individual electrons going through a wire: there would be far too many to count!
Rather, it is because there are two crazy quantum mechanical effects that give us macroscopic measurements directly related to the electron charge. www.nist.gov/si-redefinition/ampere/ampere-quantum-metrology-triangle by NIST explains that the two effects are:
- quantum Hall effect, which has discrete resistances of the form $R = \frac{h}{\nu e^2}$ for integer values of $\nu$
- Josephson effect, used in the Josephson voltage standard. With the inverse AC Josephson effect we are able to produce a voltage of $V = \frac{hf}{2e}$ per Josephson junction. This is about 2 microvolts per GHz, and GHz is a practical input frequency. Video "The evolution of voltage metrology to the latest generation of JVSs by Alain Rüfenacht" mentions that a typical operating frequency is 20 GHz. Therefore, to attain a good round 10 V, we need on the order of a few hundred thousand Josephson junctions. But this is possible to implement in a single chip with existing microfabrication techniques, and it is exactly what the Josephson voltage standard does!
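As a quick sanity check on those orders of magnitude, here is a minimal Python sketch, using the exact post-2019 values of $h$ and $e$ (the 20 GHz drive frequency is just the typical value mentioned above):

```python
h = 6.62607015e-34   # Planck constant in J*s, exact since the 2019 redefinition
e = 1.602176634e-19  # elementary charge in C, exact since the 2019 redefinition

# Quantum Hall effect: the von Klitzing constant h / e^2
print(h / e**2)  # ~25812.807 ohm

# Josephson effect: voltage per junction at a typical 20 GHz drive
f = 20e9
v = h * f / (2 * e)
print(v)  # ~41.4e-6 V, i.e. ~2.07 uV per GHz

# Junctions needed for a 10 V Josephson voltage standard
print(10 / v)  # ~242000 junctions
```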
Those effects work because they also involve dividing by the Planck constant, the fundamental constant of quantum mechanics, which is also tiny, and thus brings values into a much more measurable order of magnitude.
Prototype: github.com/cirosantilli/Urho3D-cheat
Prior art research: github.com/cirosantilli/awesome-reinforcement-learning-games
Less good discrete prototype: github.com/cirosantilli/rl-game-2d-grid YouTube demo: Video 1. "Top Down 2D Continuous Game with Urho3D C++ SDL and Box2D for Reinforcement learning by Ciro Santilli (2018)".
The goal of this project is to reach artificial general intelligence.
A few initiatives have created reasonable sets of robotics-like games for the purposes of AI development, most notably: OpenAI and DeepMind.
However, all projects so far have only created sets of unrelated games, or worse: focused on closed games designed for humans!
What is really needed is to create a single cohesive game world, designed specifically for this purpose, and with a very large number of game mechanics.
Notably, by "game mechanic" is meant "a magic aspect of the game world, which cannot be explained by objects' locations and inertia alone", in order to test the missing link between continuous and discrete AI.
Much in the spirit of gvgai, we have to do the following loop:
- create an initial game that a human can solve
- find an AI that beats it well
- study the AI, and add a new mechanic that breaks the AI, but does not break a human!
The question then becomes: do we have enough computational power to simulate a game world that is analogous enough to the real world, so that our AI algorithms will also apply to the real world?
To reduce computational requirements, it is better to focus on a 2D world at first. Such a world, with the right mechanics, can break any AI, while still being much faster to simulate than a 3D world.
The initial prototype uses the Urho3D open source game engine, and that is a reasonable project, but a raw Simple DirectMedia Layer + Box2D + OpenGL solution written from scratch would be faster to develop for this use case, since Urho3D has a lot of human-gaming features that are not needed, and because as of 2019 the Urho3D lead developers disagree with the China censored keyword attack.
Simulations such as these can be viewed as a form of synthetic data generation procedure, where the goal is to use computer worlds to reduce the costs of experiments and to improve reproducibility.
Ciro has always had a feeling that AI research in the 2020s is too unambitious. How many teams are actually aiming for AGI? When he later read Superintelligence by Nick Bostrom (2014), it said the same: AGI research has become a taboo in the early 21st century.
Related projects:
- github.com/deepmind/lab2d: 2D gridworld games, C++ with Lua bindings
Related ideas:
- www.youtube.com/watch?v=MHFrhIAj0ME?t=4183 Can't get you out of my head by Adam Curtis (2021) Part 1: Bloodshed on Wolf Mountain :)
- www.youtube.com/watch?v=EUjc1WuyPT8 AI alignment: Why It's Hard, and Where to Start by Eliezer Yudkowsky (2016)
Bibliography:
- agents.inf.ed.ac.uk/blog/multiagent-learning-environments/ Multi-Agent Learning Environments (2021) by Lukas Schäfer from the Autonomous Agents research group of the University of Edinburgh. One of their games actually uses apples as the visual representation of rewards, exactly like Ciro's game. So funny. They also have a 2D continuous game: agents.inf.ed.ac.uk/blog/multiagent-learning-environments/#mpe
- humanoid robot simulation
- Section "AI training game"
- Section "Software-based artificial life"
Ciro Santilli has a bad memory for events that happened a medium time ago, on the order of months or years, especially if they are one-off things that have no relation to anything else.
For example, Ciro never remembers which places he travelled to just once, and who was on each trip! He has images in his head of several places he travelled to, and would recognize them, but he just doesn't know where they were!
Another example: Ciro was looking at the carpet in their house, and asked where it came from. His wife replied immediately: from the Bercy shopping quarter in Paris, about 10 years earlier, and you carried it on your back for a long walk until we could find the bus back home, because we were concerned it wouldn't fit in the train!
The same goes for scenes from movies and passages from music, which explains why Ciro's art consumption focuses on innovative discrete "what happened" and "general gist" ideas, rather than analog details such as colors and shapes.
Going back even further in time, Ciro starts to forget the less close friends he had, because the events start to fade away.
Paradoxically however, Ciro believes that this bad memory is one of his greatest strengths and key defining characteristics, because it leads him to want to write down every interesting thing he learns, which motivated OurBigBook.com, his Stack Overflow contributions, and his related documentation superpowers.
It also somewhat leads Ciro to like physics and mathematics, because in these fields you "can deduce everything" from very few base principles, so if you forget them, it does not matter that much as you can re-deduce stuff over and over. Which is somewhat where the high flying bird attitude comes from. It is hard to go deep when you have to re-prove everything every time. But the upside is that anything that sticks, does so because it has a broad net to stick to, and therefore allows Ciro to make unusual and unexpected connections that others might not.
Ciro believes that there are two types of people, most notably among software engineers (who are basically data wranglers): those with bad memory and those with good memory.
Those with bad memory tend to focus on automating and improving their processes a lot. However, they take much longer to do one-off tasks that require specific deep knowledge.
The downside of the good memory ones is that sooner or later they will find tasks that, no matter how much memory they have, they cannot solve without automation, and they will fail at those.
Also, good memory people don't do as much to enable others to join the project efficiently.
This dichotomy also explains why Ciro sucks at code reviews, but is rather the person who runs the interesting patches by himself and finds some critical problems that the more theoretical code reviewers missed.
If Ciro had become a scientist, he would without doubt be an experimentalist, just like in this reality he is a GDB/runtime person rather than a "static source analysis" person. Those who have bad memory prefer to just run experiments over and over and observe system state at runtime.
Other effects of having a bad memory include:
- code duplication, or a constant fear of it at least, because Ciro forgets that some functionality exists already
- meeting aversion, because everything that is not recorded will fade away
- passion for backward design, because by the time a piece of knowledge learnt in school might be useful (and 99.99% won't), it will have been long forgotten
Related: jakobschwichtenberg.com/about/ from Jakob Schwichtenberg:
I'm a physicist and I try to write down things during my own learning process. In some sense, one of the biggest benefits I have over other people in physics is that I'm certainly not the smartest guy! I usually can't grasp complex issues very easily. So I have to break down complex ideas into smaller chunks to understand it myself. This means, whenever I describe something to others, everyone understands, because it's broken down into such simple terms.
On the C2 wiki (therefore it cannot be wrong), wiki.c2.com/?QuasiGreatTeacher:
Some people have learning disabilities, [... bullshit ...]. A lot of classic spiritual texts have been produced this way. Basically, the stupidest but most dogged disciple, if he has a neurotic habit of writing things down, will make the best teacher for the third and subsequent generations.
This is a general philosophy that Ciro Santilli, and likely others, observes over and over.
Basically, continuity, or higher-order conditions like differentiability, seems to impose greater constraints on problems, which makes them more solvable.
Some good examples of that:
- complex discrete problems:
- simple continuous problems:
  - characterization of Lie groups
Believed to be NP-intermediate as of 2020, for similar reasons as integer factorization.
An important case is the discrete logarithm in which the group is a cyclic group.
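To make the problem concrete, here is a minimal brute-force sketch in Python (toy parameters chosen for illustration; the whole point of the problem's hardness is that this loop does not terminate in practice for cryptographically sized groups):

```python
def discrete_log(g, h, p):
    # Find x such that g**x % p == h by trying every exponent.
    # Runtime is O(p), i.e. exponential in the bit length of p,
    # which is exactly why the problem is believed to be hard.
    x, cur = 0, 1
    while cur != h:
        cur = (cur * g) % p
        x += 1
        if x >= p:
            raise ValueError("no solution")
    return x

# Toy example: 3 generates the cyclic group (Z/17Z)*.
p, g = 17, 3
h = pow(g, 13, p)             # h = 12
print(discrete_log(g, h, p))  # recovers 13
```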
Discrete quantum effect observed in superconductors with a small insulating layer, a device known as a Josephson junction.
To understand the effect, it is important to look at the Josephson equations and to consider the following Josephson effect regimes separately:
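For reference, the two Josephson equations, which relate the supercurrent $I$ and the voltage $V$ across the junction to the phase difference $\varphi$ between the macroscopic wave functions of the two superconductors ($I_c$ is the critical current of the junction):

$$I(t) = I_c \sin(\varphi(t)) \qquad \frac{\mathrm{d}\varphi}{\mathrm{d}t} = \frac{2eV(t)}{\hbar}$$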
A good summary from Wikipedia by physicist Andrew Whitaker:
at a junction of two superconductors, a current will flow even if there is no drop in voltage; that when there is a voltage drop, the current should oscillate at a frequency related to the drop in voltage; and that there is a dependence on any magnetic field
Bibliography:
- www.youtube.com/watch?v=cnZ6exn2CkE "Superconductivity: Professor Brian Josephson". Several random excerpts from Cambridge people talking about the Josephson effect
Initially light was thought of as a wave, because it exhibits interference, as shown by experiments such as:
But then some key experiments started suggesting that light is made up of discrete packets:
- Compton scattering, which also suggests that photons carry momentum
- photoelectric effect
- single photon production and detection experiments
In the understanding of the 2020 Standard Model, the photon is one of the elementary particles.
This duality is fully described mathematically by quantum electrodynamics, where the photon is modelled as a quantized excitation of the photon field.
Quantum superposition is really weird, because it is fundamentally different from "either definite state, but I don't know which": the superposition state leads to different measurement results than the non-superposed states, as the numerical sketch after the examples below shows.
Examples:
- www.youtube.com/watch?v=tt8gVXDsh7Q "Interference in quantum mechanics" by Looking Glass Universe (2015) shows how a left-right spin measurement has a defined value for a superposed half up, half down state, but not for a pure up state. TODO: can this experiment actually be conducted? As mentioned in the video, this is closely linked to the fact that you can describe the wave function in multiple different bases (up/down or left/right), which is also at the root of the uncertainty principle.
- Video "Quantum Mechanics 9b - Photon Spin and Schrodinger's Cat II by ViaScience (2013)" gives a similar photon version
- it seems that the single particle double slit experiment can also be thought of in terms of a superposition of "the particle goes through the left" and "the particle goes through the right", although it is a bit harder to think about, as it is not a discrete process
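Here is a minimal numerical sketch of that difference, in Python with NumPy, using the spin example from the first video (the basis vector conventions are the usual ones, chosen here just for illustration):

```python
import numpy as np

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
# "right" eigenstate of the left-right measurement
right = (up + down) / np.sqrt(2)

def prob(outcome, state):
    # Born rule: probability is the squared modulus of the projection.
    return abs(np.vdot(outcome, state)) ** 2

# Superposition of up and down: measuring left-right always gives "right".
psi = (up + down) / np.sqrt(2)
print(prob(right, psi))  # 1.0

# Classical 50/50 mixture of up and down: average the probabilities
# of the definite states instead. Measuring left-right gives 50/50.
print(0.5 * prob(right, up) + 0.5 * prob(right, down))  # 0.5
```

Both states give 50/50 for an up-down measurement, but the left-right measurement tells them apart, which is exactly why superposition is not just "either definite state, but I don't know which".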
To better understand the discussion below, the best thing to do is to read it in parallel with the simplest possible example: Schrödinger picture example: quantum harmonic oscillator.
The state of a quantum system is a unit vector in a Hilbert space.
"Making a measurement" for an observable means applying a self-adjoint operator to the state, and after a measurement is done:Those last two rules are also known as the Born rule.
- the state collapses to an eigenvector of the self-adjoint operator
- the result of the measurement is the eigenvalue associated with that eigenvector
- the probability of a given result happening when the spectrum is discrete is proportional to the squared modulus of the projection of the state on that eigenvector. For continuous spectra, such as that of the position operator in most systems, e.g. the Schrödinger equation for a free one dimensional particle, the projection on each individual eigenvalue is zero, i.e. the probability of one absolutely exact position is zero. To get a non-zero result, the measurement has to be done over a continuous range of eigenvectors (e.g. for position: "is the particle present between x=0 and x=1?"), and you integrate the squared modulus of the projection over that range of eigenvalues. In such continuous cases, the state collapses to its renormalized projection onto the measured range. The continuous position operator case is well illustrated at: Video "Visualization of Quantum Physics (Quantum Mechanics) by udiprod (2017)"
Those last two rules are also known as the Born rule.
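In the discrete case, the Born rule can be written compactly: since the state $|\psi\rangle$ is a unit vector, the probability of measuring the eigenvalue $\lambda_i$ with normalized eigenvector $|\lambda_i\rangle$ is:

$$P(\lambda_i) = |\langle \lambda_i | \psi \rangle|^2$$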
Self-adjoint operators are chosen because they have the following key properties:
- their eigenvalues are real, as measurement results must be
- their eigenvectors form an orthonormal basis, i.e. they are diagonalizable
Perhaps the easiest case to understand this for is that of spin, which has only a finite number of eigenvalues. Although it is a shame that fully understanding that requires a relativistic quantum theory such as the Dirac equation.
The next steps are to look at simple 1D bound states such as particle in a box and quantum harmonic oscillator.
This naturally generalizes to Schrödinger equation solution for the hydrogen atom.
The solution to the Schrödinger equation for a free one dimensional particle is a bit harder since the possible energies do not make up a countable set.
This formulation was apparently called more precisely the Dirac-von Neumann axioms, but it became so dominant that we just call it "the" formulation.
Quantum Field Theory lecture notes by David Tong (2007) mentions that:
if you were to write the wavefunction in quantum field theory, it would be a functional, that is a function of every possible configuration of the field $\phi$.
A single line in the emission spectrum.
So precise, so discrete, which makes no sense in classical mechanics!
Has been the leading motivation of the development of quantum mechanics, all the way from the:
- Schrödinger equation: major lines predicted, including Zeeman effect, but not finer line splits like fine structure
- Dirac equation: explains the fine structure, e.g. the 2p line split due to the electron's spin-orbit interaction, but not the Lamb shift
- quantum electrodynamics: explains Lamb shift
- hyperfine structure: due to electron/nucleus spin interactions, offers a window into nuclear spin
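To get a feel for how quantitative those spectral line predictions are, here is a small Python sketch of the visible hydrogen lines (the Balmer series) via the empirical Rydberg formula, which the Schrödinger equation then derives from first principles:

```python
R_H = 1.0967758e7  # Rydberg constant for hydrogen, in m^-1

def wavelength_nm(n1, n2):
    # Rydberg formula: 1/lambda = R_H * (1/n1^2 - 1/n2^2)
    return 1e9 / (R_H * (1 / n1**2 - 1 / n2**2))

# Transitions down to n=2 give the visible Balmer lines:
for n2 in (3, 4, 5, 6):
    print(f"{n2} -> 2: {wavelength_nm(2, n2):.1f} nm")
# 3 -> 2: 656.5 nm (H-alpha, red)
# 4 -> 2: 486.3 nm (H-beta, blue-green)
# 5 -> 2: 434.2 nm
# 6 -> 2: 410.3 nm
```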
Discrete quantum system model that can model both spin in the Stern-Gerlach experiment and photon polarization in a polarizer.
Also known in quantum computing as a qubit :-)
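The general state of such a two-state system is, in bra-ket notation:

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle \qquad |\alpha|^2 + |\beta|^2 = 1, \quad \alpha, \beta \in \mathbb{C}$$

and by the Born rule, measuring in the $\{|0\rangle, |1\rangle\}$ basis gives $0$ with probability $|\alpha|^2$ and $1$ with probability $|\beta|^2$.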