clang by Ciro Santilli 37 Updated 2025-07-16
LLVM front-end for C and related languages such as C++.
MLperf v2.1 ResNet by Ciro Santilli 37 Updated 2025-07-16
Ubuntu 22.10 setup with a tiny, manually generated dummy ImageNet, run on ONNX Runtime:
sudo apt install pybind11-dev

git clone https://github.com/mlcommons/inference
cd inference
git checkout v2.1

virtualenv -p python3 .venv
. .venv/bin/activate
pip install numpy==1.24.2 pycocotools==2.0.6 onnxruntime==1.14.1 opencv-python==4.7.0.72 torch==1.13.1

cd loadgen
CFLAGS="-std=c++14" python setup.py develop
cd -

cd vision/classification_and_detection
python setup.py develop
wget -q https://zenodo.org/record/3157894/files/mobilenet_v1_1.0_224.onnx
export MODEL_DIR="$(pwd)"
export EXTRA_OPS='--time 10 --max-latency 0.2'

tools/make_fake_imagenet.sh
DATA_DIR="$(pwd)/fake_imagenet" ./run_local.sh onnxruntime mobilenet cpu --accuracy
Last line of output on the P51, which appears to contain the benchmark results:
TestScenario.SingleStream qps=58.85, mean=0.0138, time=0.136, acc=62.500%, queries=8, tiles=50.0:0.0129,80.0:0.0137,90.0:0.0155,95.0:0.0171,99.0:0.0184,99.9:0.0187
where presumably qps means queries per second, which is the main result we are interested in: the more the better. The tiles values presumably give the latency percentiles (50th, 80th, 90th, 95th, 99th and 99.9th) in seconds.
Running:
tools/make_fake_imagenet.sh
produces a tiny ImageNet subset with 8 images under fake_imagenet/.
fake_imagenet/val_map.txt contains:
val/800px-Porsche_991_silver_IAA.jpg 817
val/512px-Cacatua_moluccensis_-Cincinnati_Zoo-8a.jpg 89
val/800px-Sardinian_Warbler.jpg 13
val/800px-7weeks_old.JPG 207
val/800px-20180630_Tesla_Model_S_70D_2015_midnight_blue_left_front.jpg 817
val/800px-Welsh_Springer_Spaniel.jpg 156
val/800px-Jammlich_crop.jpg 233
val/782px-Pumiforme.JPG 285
where the numbers are the category indices from ImageNet1k. At gist.github.com/yrevar/942d3a0ac09ec9e5eb3a see e.g.:
  • 817: 'sports car, sport car',
  • 89: 'sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita',
and so on, so they are consistent with the image names. A quick look at the script shows that it just downloads the images from Wikimedia and manually creates the file.
TODO: prepare and test on the actual ImageNet validation set. The README says:
Prepare the imagenet dataset to come.
Since that one is undocumented, let's try the COCO dataset instead, which uses COCO 2017 and is also a bit smaller. Note that this is not part of MLperf anymore since v2.1, which only uses ImageNet and Open Images, but still:
wget https://zenodo.org/record/4735652/files/ssd_mobilenet_v1_coco_2018_01_28.onnx
DATA_DIR_BASE=/mnt/data/coco
export DATA_DIR="${DATA_DIR_BASE}/val2017-300"
mkdir -p "$DATA_DIR_BASE"
cd "$DATA_DIR_BASE"
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip val2017.zip
unzip annotations_trainval2017.zip
mv annotations val2017
cd -
cd "$(git-toplevel)"
python tools/upscale_coco/upscale_coco.py --inputs "$DATA_DIR_BASE" --outputs "$DATA_DIR" --size 300 300 --format png
cd -
Now:
./run_local.sh onnxruntime mobilenet cpu --accuracy
fails immediately with:
No such file or directory: '/path/to/coco/val2017-300/val_map.txt
The more plausible looking:
./run_local.sh onnxruntime mobilenet cpu --accuracy --dataset coco-300
first takes a while, most likely to preprocess something (which it does only once), and then fails:
Traceback (most recent call last):
  File "/home/ciro/git/inference/vision/classification_and_detection/python/main.py", line 596, in <module>
    main()
  File "/home/ciro/git/inference/vision/classification_and_detection/python/main.py", line 468, in main
    ds = wanted_dataset(data_path=args.dataset_path,
  File "/home/ciro/git/inference/vision/classification_and_detection/python/coco.py", line 115, in __init__
    self.label_list = np.array(self.label_list)
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (5000, 2) + inhomogeneous part.
FFmpeg video synthesis by Ciro Santilli 37 Updated 2025-07-16
Video with a solid color:
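The original command is not reproduced here; a minimal sketch using FFmpeg's lavfi color source (the color, size, duration and output name are arbitrary choices) would be something like:
ffmpeg -f lavfi -i color=c=blue:s=640x480:d=5 blue.mp4
which produces 5 seconds of 640x480 solid blue video.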
Display count in seconds on the video:
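Again, one hedged way to do this is with the drawtext filter, whose text expansion %{eif\:t\:d} prints the current timestamp rounded down to whole seconds (this assumes an FFmpeg build with fontconfig so a default font is found; sizes and colors are arbitrary):
ffmpeg -f lavfi -i color=c=black:s=640x480:d=10 -vf "drawtext=text='%{eif\:t\:d}':fontcolor=white:fontsize=96:x=(w-text_w)/2:y=(h-text_h)/2" count.mp4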
Stockfish CLI by Ciro Santilli 37 Updated 2025-10-14
Most of what follows is part of the Universal Chess Interface. Tested on Ubuntu 22.10, Stockfish 14.1.
After starting stockfish on the command line, typing d (presumably for "display") outputs:
 +---+---+---+---+---+---+---+---+
 | r | n | b | q | k | b | n | r | 8
 +---+---+---+---+---+---+---+---+
 | p | p | p | p | p | p | p | p | 7
 +---+---+---+---+---+---+---+---+
 |   |   |   |   |   |   |   |   | 6
 +---+---+---+---+---+---+---+---+
 |   |   |   |   |   |   |   |   | 5
 +---+---+---+---+---+---+---+---+
 |   |   |   |   |   |   |   |   | 4
 +---+---+---+---+---+---+---+---+
 |   |   |   |   |   |   |   |   | 3
 +---+---+---+---+---+---+---+---+
 | P | P | P | P | P | P | P | P | 2
 +---+---+---+---+---+---+---+---+
 | R | N | B | Q | K | B | N | R | 1
 +---+---+---+---+---+---+---+---+
   a   b   c   d   e   f   g   h

Fen: rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1
Key: 8F8F01D4562F59FB
Sweet ASCII art! Fen is the position in Forsyth-Edwards Notation, and Key is presumably a hash of the position.
Move white king's pawn from e2 to e4:
position startpos moves e2e4
Then display again:
d
gives:
 +---+---+---+---+---+---+---+---+
 | r | n | b | q | k | b | n | r | 8
 +---+---+---+---+---+---+---+---+
 | p | p | p | p | p | p | p | p | 7
 +---+---+---+---+---+---+---+---+
 |   |   |   |   |   |   |   |   | 6
 +---+---+---+---+---+---+---+---+
 |   |   |   |   |   |   |   |   | 5
 +---+---+---+---+---+---+---+---+
 |   |   |   |   | P |   |   |   | 4
 +---+---+---+---+---+---+---+---+
 |   |   |   |   |   |   |   |   | 3
 +---+---+---+---+---+---+---+---+
 | P | P | P | P |   | P | P | P | 2
 +---+---+---+---+---+---+---+---+
 | R | N | B | Q | K | B | N | R | 1
 +---+---+---+---+---+---+---+---+
   a   b   c   d   e   f   g   h

Fen: rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq - 0 1
Key: B46022469E3DD31B
so we see that the pawn moved.
Now let's make Stockfish think for one second about the next best move for black:
go movetime 1000
gives as the last line:
bestmove c7c5 ponder g1f3
TODO:
  • what is ponder? Something to do with thinking on the opponent's turn: permanent brain.
  • understand the previous lines
To make the suggested move for black, we have to either repeat the entire sequence of moves:
position startpos moves e2e4 c7c5
d:
 +---+---+---+---+---+---+---+---+
 | r | n | b | q | k | b | n | r | 8
 +---+---+---+---+---+---+---+---+
 | p | p |   | p | p | p | p | p | 7
 +---+---+---+---+---+---+---+---+
 |   |   |   |   |   |   |   |   | 6
 +---+---+---+---+---+---+---+---+
 |   |   | p |   |   |   |   |   | 5
 +---+---+---+---+---+---+---+---+
 |   |   |   |   | P |   |   |   | 4
 +---+---+---+---+---+---+---+---+
 |   |   |   |   |   |   |   |   | 3
 +---+---+---+---+---+---+---+---+
 | P | P | P | P |   | P | P | P | 2
 +---+---+---+---+---+---+---+---+
 | R | N | B | Q | K | B | N | R | 1
 +---+---+---+---+---+---+---+---+
   a   b   c   d   e   f   g   h

Fen: rnbqkbnr/pp1ppppp/8/2p5/4P3/8/PPPP1PPP/RNBQKBNR w KQkq - 0 2
Key: 4CA78BCE9C2980B0
or alternatively we could use the previous FEN as a starting point:
position fen rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq - 0 1 moves c7c5
Note how the Universal Chess Interface is very simple: we just load a state and then decide what to do next for that one state. The engine holds exactly one state at a time, and you can't modify it incrementally: you always load a new one from scratch.
Let's move white again using our own brain, with either:
position startpos moves e2e4 c7c5 d2d3
position fen rnbqkbnr/pp1ppppp/8/2p5/4P3/8/PPPP1PPP/RNBQKBNR w KQkq - 0 2 moves d2d3
Set a specific position from fen:
position fen rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq - 0 1
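Since the whole protocol is just lines of text over stdin and stdout, Stockfish is also easy to script non-interactively from the shell. A minimal sketch, assuming stockfish is on the PATH (the trailing sleep just keeps stdin open long enough for the 1 second search to finish and print its result, since EOF makes the engine quit):
(
  echo 'position startpos moves e2e4'
  echo 'go movetime 1000'
  sleep 2
) | stockfish | grep bestmove
and the output should again end with a bestmove line.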
FeathersJS by Ciro Santilli 37 Updated 2025-07-16
Looks interesting.
It seems to abstract away the client-to-backend messaging, and focuses on making it easy to plug in any of a number of Front-end web frameworks to manage client state.
Has the "main web API is the same as the REST API" focus, which is fundamental 2020-nowadays.
Uses Socket.IO, which allows the client JavaScript to register callbacks that run when data is updated, e.g. their default chat app does:
client.service('messages').on('created', addMessage);
so that messages appear immediately as they are sent.
Their standard template from feathers generate app on @feathersjs/cli@4.5.0 includes:
which looks promising! Unfortunately however they don't have a default template for a Front-end web framework: docs.feathersjs.com/guides/frameworks.html#the-feathers-chat lists a few versions of the chat app, which is their hello world:
But it is in itself a completely boring app with a single splash page, and no database interaction, so not a good showcase. The actual showcase app is feathersjs/feathers-chat.
And there is no official example of the chat app that is immediately deployable to Heroku (FeathersJS Heroku deployment): all setups require some thinking.
Global source entry point: determined in package.json as usual, defaulting to src/index.js.
HyperPhysics by Ciro Santilli 37 Updated 2025-07-16
Created by Dr. Rod Nave from Georgia State University, where he worked from 1968 after his post-doc in North Wales on molecular spectroscopy.
While there is value to that website, it always feels like it falls a bit short: too "encyclopedic" and too little "tutorial-like". Most notably, it has very little on the history of physics/experiments.
Ciro Santilli likes this Rod, he really practices some good braindumping, just look at how he documented his life in the pre-social media Internet dark ages: hyperphysics.phy-astr.gsu.edu/Nave-html/nave.html
The website evolved from a HyperCard stack, as suggested by the website name, mentioned at: hyperphysics.phy-astr.gsu.edu/hbase/index.html.
exhibits.library.gsu.edu/kell/exhibits/show/nave-kell-hall/capturing-a-career has some good photo selection focused on showing the department, and has an interview.
Krusader by Ciro Santilli 37 Updated 2025-07-16
The most powerful GUI file manager ever?? Infinite configurability??
Ciro Santilli wasted some time on it before he gave up on file managers altogether and started using only the CLI with a few aliases.

Pinned article: Introduction to the OurBigBook Project

Welcome to the OurBigBook Project! Our goal is to create the perfect publishing platform for STEM subjects, and get university-level students to write the best free STEM tutorials ever.
Everyone is welcome to create an account and play with the site: ourbigbook.com/go/register. We believe that students themselves can write amazing tutorials, but teachers are welcome too. You can write about anything you want, it doesn't have to be STEM or even educational. Silly test content is very welcome and you won't be penalized in any way. Just keep it legal!
We have a few killer features:
  1. topics: topics group articles by different users with the same title, e.g. here is the topic for the "Fundamental Theorem of Calculus" ourbigbook.com/go/topic/fundamental-theorem-of-calculus
    Articles of different users are sorted by upvote within each topic page. This feature is a bit like:
    • a Wikipedia where each user can have their own version of each article
    • a Q&A website like Stack Overflow, where multiple people can give their views on a given topic, and the best ones are sorted by upvote. Except you don't need to wait for someone to ask first, and any topic goes, no matter how narrow or broad
    This feature makes it possible for readers to find better explanations of any topic created by other writers. And it allows writers to create an explanation in a place that readers might actually find it.
    Figure 1. Screenshot of the "Derivative" topic page. View it live at: ourbigbook.com/go/topic/derivative
  2. local editing: you can store all your personal knowledge base content locally in a plaintext markup format that can be edited locally and published either to OurBigBook.com or as a static website.
    This way you can be sure that even if OurBigBook.com were to go down one day (which we have no plans to do as it is quite cheap to host!), your content will still be perfectly readable as a static site.
    Figure 2. You can publish local OurBigBook lightweight markup files to either https://OurBigBook.com or as a static website.
    Figure 3. Visual Studio Code extension installation.
    Figure 4. Visual Studio Code extension tree navigation.
    Figure 5. Web editor. You can also edit articles on the Web editor without installing anything locally.
    Video 3. Edit locally and publish demo. This shows editing OurBigBook Markup and publishing it using the Visual Studio Code extension.
    Video 4. OurBigBook Visual Studio Code extension editing and navigation demo.
  3. Infinitely deep tables of contents:
    Figure 6. Dynamic article tree with infinitely deep table of contents.
    Descendant pages can also show up as toplevel, e.g.: ourbigbook.com/cirosantilli/chordate-subclade
All our software is open source and hosted at: github.com/ourbigbook/ourbigbook
Further documentation can be found at: docs.ourbigbook.com
Feel free to reach out to us for any help or suggestions: docs.ourbigbook.com/#contact