Dirac Lagrangian by Ciro Santilli 40 Updated 2025-07-16
The Lagrangian is:
$$\mathcal{L} = \bar{\psi} (i \hbar c \gamma^\mu \partial_\mu - m c^2) \psi$$
where:
• $\psi$ is the wavefunction, a complex 4-component Dirac spinor
• $\bar{\psi} = \psi^\dagger \gamma^0$ is the Dirac adjoint
• $\gamma^\mu$ are the gamma matrices
Remember that $\psi$ is a 4-vector and the gamma matrices are 4x4 matrices, so the whole thing comes down to a dot product of two 4-vectors, with one of them modified by matrix multiplication/derivatives, and the result is a scalar, as expected for a Lagrangian.
Like any other Lagrangian, you can then recover the Dirac equation, which is the corresponding equation of motion, by applying the Euler-Lagrange equation to the Lagrangian.
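As a quick sketch of that calculation (a standard result, not spelled out in the original): treating $\psi$ and $\bar{\psi}$ as independent fields, the Euler-Lagrange equation for $\bar{\psi}$ is particularly simple because $\mathcal{L}$ contains no derivatives of $\bar{\psi}$:
% Euler-Lagrange equation for the field \bar{\psi}:
%   \partial_\mu \frac{\partial \mathcal{L}}{\partial (\partial_\mu \bar{\psi})}
%     - \frac{\partial \mathcal{L}}{\partial \bar{\psi}} = 0
% The first term vanishes since \mathcal{L} has no \partial_\mu \bar{\psi}, leaving:
\frac{\partial \mathcal{L}}{\partial \bar{\psi}}
  = (i \hbar c \gamma^\mu \partial_\mu - m c^2) \psi = 0
which is exactly the Dirac equation.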
Programming language by Ciro Santilli 40 Updated 2025-07-16
A language that allows you to talk to and command a computer.
There is only space for two languages at most in the world: the compiled one, and the interpreted one.
As of 2020, when you have a choice, you must go for:
Those two languages are by no means perfect from a language design point of view, and there are likely already better alternatives; they are only chosen due to a pragmatic tradeoff between ecosystem and familiarity.
Ciro predicts that Python will become like Fortran in the future: a legacy hated by most who have moved to JavaScript long ago (which is slightly inferior, but too similar, and with too much web dominance to be replaced), but with too much dominance in certain applications like machine learning to be worth replacing, like Fortran dominates certain HPC applications. We'll see. Maybe non performance critical scripting languages are easier to replace.
C++ however is decent, evolved in very good directions in the 2010s, and will remain relevant in the foreseeable future.
Bash can also be used when you're lazy. But if the project goes on, you will sooner or later regret that choice.
The language syntax in itself does not matter. All that matters is how many useful libraries and tooling it has.
This is how other languages compare:
Position representation by Ciro Santilli 40 Updated 2025-07-16
A way to write the wavefunction $\psi(x)$ such that the position operator is:
$$\hat{x} \psi(x) = x \psi(x)$$
i.e., a function that takes the wavefunction as input, and outputs another function:
$$x \psi(x)$$
If you believe that mathematicians took care of continuous spectrum for us and that everything just works, the most concrete and direct thing that this representation tells us is that:
the probability of finding a particle between $x$ and $x + \mathrm{d}x$ at time $t$
equals:
$$|\psi(x, t)|^2 \mathrm{d}x$$
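To make this concrete, here is a minimal numerical sketch (not from the original text; the Gaussian wave packet is a hypothetical example) that approximates the probability of finding the particle in an interval $[a, b]$ by integrating $|\psi|^2$ on a grid:
import numpy as np

# Hypothetical example: a normalized Gaussian wave packet at a fixed time,
# sampled on a uniform grid. |psi|^2 is then a Gaussian probability density
# with standard deviation sigma.
x = np.linspace(-10.0, 10.0, 10001)
sigma = 1.0
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

# P(a <= x <= b) = integral from a to b of |psi(x, t)|^2 dx,
# approximated with the trapezoidal rule.
a, b = -1.0, 1.0
mask = (x >= a) & (x <= b)
print(np.trapz(np.abs(psi[mask]) ** 2, x[mask]))  # ~0.6827 = erf(1/sqrt(2))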
LibreOffice by Ciro Santilli 40 Updated 2025-07-16
These people are heroes. There's nothing else to say.
Inner source by Ciro Santilli 40 Updated 2025-07-16
If you are going to do closed source, at least do it like this.
Basically the opposite of need to know for software.
Closed standard by Ciro Santilli 40 Updated 2025-07-16
ISO is the main culprit of this bullshit. Some notable examples related to open source software:
The only low level thing that escaped this was OpenGL via Khronos, what heroes those people are.
How the hell are you supposed to develop an open source implementation of something that has a closed standard?
Not to mention open source test suites: that would be way too much to ask for. Those always end up being made by some shady small companies that go bankrupt from time to time, see e.g. .
Closed source on offline products used by millions of people is evil, when you could just have those for free with open source software! Thus Ciro's hatred for Microsoft Windows and macOS (at least userland, maybe).
Ciro Santilli can accept closed source on server products more easily than offline, because the servers have to be paid for somehow (by stealing your private data).
Code drop by Ciro Santilli 40 Updated 2025-07-16
Open source development model in which developers develop in private, and only release code to the public during releases.
Notable example project: Android Open Source Project.
This development model basically makes reporting bugs and sending patches a waste of time, because many of them will already have been solved, which is why this development model is evil.
Let's run on this 10-class ImageNet subset called Imagenette.
First ensure that you get the dummy test data run working as per MLperf v2.1 ResNet.
Next, in the imagenette2 directory, first let's create a 224x224 scaled version of the inputs as required by the benchmark at mlcommons.org/en/inference-datacenter-21/:
#!/usr/bin/env bash
# Resize every image of every synset directory under val/ to 224x224.
rm -rf val224x224
mkdir -p val224x224
for syndir in val/*; do
  syn="$(basename "$syndir")"
  mkdir -p "val224x224/$syn"
  for img in "$syndir"/*; do
    convert "$img" -resize 224x224 "val224x224/$syn/$(basename "$img")"
  done
done
and then let's create the val_map.txt file to match the format expected by MLPerf:
#!/usr/bin/env bash
# Download the synset ID to class name mapping, one synset per line.
wget https://gist.githubusercontent.com/aaronpolhamus/964a4411c0906315deb9f4a3723aac57/raw/aa66dd9dbf6b56649fa3fab83659b2acbf3cbfd1/map_clsloc.txt
i=0
rm -f val_map.txt
# Walk the synsets sorted by synset ID, presumably the order that defines the
# class indices used by the model, and emit one "<image path> <class index>"
# line per image for every synset present in our subset.
while IFS="" read -r p || [ -n "$p" ]; do
  synset="$(printf '%s\n' "$p" | cut -d ' ' -f1)"
  if [ -d "val224x224/$synset" ]; then
    for f in "val224x224/$synset/"*; do
      echo "$f $i" >> val_map.txt
    done
  fi
  i=$((i + 1))
done < <( sort map_clsloc.txt )
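The resulting val_map.txt then contains one "<image path> <class index>" line per image, e.g. (hypothetical filename, the exact names come from the dataset):
val224x224/n01440764/ILSVRC2012_val_00000293.JPEG 0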
then, back in the mlperf directory, we download our model:
wget https://zenodo.org/record/4735647/files/resnet50_v1.onnx
and finally run!
DATA_DIR=/mnt/sda3/data/imagenet/imagenette2 time ./run_local.sh onnxruntime resnet50 cpu --accuracy
which gives on P51:
TestScenario.SingleStream qps=164.06, mean=0.0267, time=23.924, acc=87.134%, queries=3925, tiles=50.0:0.0264,80.0:0.0275,90.0:0.0287,95.0:0.0306,99.0:0.0401,99.9:0.0464
where qps presumably means "queries per second", and tiles presumably lists query latency percentiles in seconds, e.g. 99.0:0.0401 means the 99th percentile query took about 40 ms. And the time results:
446.78user 33.97system 2:47.51elapsed 286%CPU (0avgtext+0avgdata 964728maxresident)k
The time=23.924 is much smaller than the wall time reported by the time executable because of some lengthy pre-loading (TODO not sure what that means) that gets done every time:
INFO:imagenet:loaded 3925 images, cache=0, took=52.6sec
INFO:main:starting TestScenario.SingleStream
Let's try on the GPU now:
DATA_DIR=/mnt/sda3/data/imagenet/imagenette2 time ./run_local.sh onnxruntime resnet50 gpu --accuracy
which gives:
TestScenario.SingleStream qps=130.91, mean=0.0287, time=29.983, acc=90.395%, queries=3925, tiles=50.0:0.0265,80.0:0.0285,90.0:0.0405,95.0:0.0425,99.0:0.0490,99.9:0.0512
455.00user 4.96system 1:59.43elapsed 385%CPU (0avgtext+0avgdata 975080maxresident)k
TODO lower qps on GPU!

Pinned article: Introduction to the OurBigBook Project

Welcome to the OurBigBook Project! Our goal is to create the perfect publishing platform for STEM subjects, and get university-level students to write the best free STEM tutorials ever.
Everyone is welcome to create an account and play with the site: ourbigbook.com/go/register. We believe that students themselves can write amazing tutorials, but teachers are welcome too. You can write about anything you want, it doesn't have to be STEM or even educational. Silly test content is very welcome and you won't be penalized in any way. Just keep it legal!
We have a few killer features:
  1. topics: topics group articles by different users with the same title, e.g. here is the topic for the "Fundamental Theorem of Calculus" ourbigbook.com/go/topic/fundamental-theorem-of-calculus
    Articles of different users are sorted by upvote within each article page. This feature is a bit like:
    • a Wikipedia where each user can have their own version of each article
    • a Q&A website like Stack Overflow, where multiple people can give their views on a given topic, and the best ones are sorted by upvote. Except you don't need to wait for someone to ask first, and any topic goes, no matter how narrow or broad
    This feature makes it possible for readers to find better explanations of any topic created by other writers. And it allows writers to create an explanation in a place that readers might actually find it.
    Figure 1. Screenshot of the "Derivative" topic page. View it live at: ourbigbook.com/go/topic/derivative
  2. local editing: you can store all your personal knowledge base content locally in a plaintext markup format that can be edited locally and published either to OurBigBook.com or as a static website.
    This way you can be sure that even if OurBigBook.com were to go down one day (which we have no plans to do as it is quite cheap to host!), your content will still be perfectly readable as a static site.
    Figure 2. You can publish local OurBigBook lightweight markup files to either https://OurBigBook.com or as a static website.
    Figure 3. Visual Studio Code extension installation.
    Figure 4. Visual Studio Code extension tree navigation.
    Figure 5. Web editor. You can also edit articles on the Web editor without installing anything locally.
    Video 3. Edit locally and publish demo. Source. This shows editing OurBigBook Markup and publishing it using the Visual Studio Code extension.
    Video 4. OurBigBook Visual Studio Code extension editing and navigation demo. Source.
  3. Infinitely deep tables of contents:
    Figure 6. Dynamic article tree with infinitely deep table of contents.
    Descendant pages can also show up as the toplevel page, e.g.: ourbigbook.com/cirosantilli/chordate-subclade
All our software is open source and hosted at: github.com/ourbigbook/ourbigbook
Further documentation can be found at: docs.ourbigbook.com
Feel free to reach out to us for any help or suggestions: docs.ourbigbook.com/#contact