This is a good approach. The downside is that while you are developing the implementation and testing interactively, you might notice that the requirements are wrong, and then the tests have to change.
One intermediate approach Ciro Santilli likes is to do the implementation and be happy with interactive usage, then create the test, make it pass, then remove the code that would make it pass, and see the test fail. This does have the risk that you will forget to test something, but Ciro finds it generally worth it, unless it really is one of those features that you are unable to develop without an automated test, generally more "logical/mathematical" stuff. This is a sort of Laziness-Driven Development.
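As a purely illustrative sketch of that workflow in Python with pytest (slugify and the file names are made up for this example, they are not from any specific project):
# slug.py: implementation written first and validated interactively in a REPL.
def slugify(title):
    """Turn a title into a URL slug, e.g. 'Hello World' -> 'hello-world'."""
    return "-".join(title.lower().split())

# test_slug.py: test written only after the implementation already works.
from slug import slugify

def test_slugify():
    assert slugify("Hello World") == "hello-world"

# To check that the test actually exercises the code, temporarily break
# slugify (e.g. make it return title unchanged), run pytest and watch the
# test fail, then restore the implementation.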
Ciro's word of caution for 2019 aspiring system programmers: Should you waste your life with systems programming?
This is basically a direct consequence of backward design.
The higher the level you can operate at, the better.
C is better than assembly, userland is better than kernel land.
The ideal level to operate at, and one of humankind's greatest ambitions, is "AGI, make me money": the highest possible level.
Only go down a level when it seems necessary.
Operating system
Magic software that allows you to write a single program that runs on a wide range of hardware.
MicroPython
It is interpreted: it actually implements a Python (or Python-like?) interpreter that can run on a microcontroller. See e.g.: Compile MicroPython code for Micro Bit locally.
As a result, it is very convenient, as it does not require a C toolchain to build for, but it is also very slow and produces larger images.
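To give an idea of what that looks like, here is a minimal MicroPython blink sketch using the machine module; the pin number is just an example and depends on the board, and some boards (like the Micro Bit) expose their own higher level modules instead:
import time
from machine import Pin  # MicroPython-specific module, not present in CPython

led = Pin(25, Pin.OUT)  # pin 25 is just an example, e.g. the onboard LED on some boards

while True:
    led.value(1)     # LED on
    time.sleep(0.5)  # seconds
    led.value(0)     # LED off
    time.sleep(0.5)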
Zephyr is cool. Its installation setup is annoying, but the project itself is cool.
Run Zephyr on QEMU
Real hardware is for newbs.
Tested on Ubuntu 23.10. We approximately follow the instructions from: docs.zephyrproject.org/3.4.0/develop/getting_started/index.html, stopping before the "Flash the sample" section, as we don't flash QEMU: we just run it.
sudo apt install --no-install-recommends git cmake ninja-build gperf \
  ccache dfu-util device-tree-compiler wget \
  python3-dev python3-pip python3-setuptools python3-tk python3-wheel xz-utils file \
  make gcc gcc-multilib g++-multilib libsdl2-dev libmagic1 python3-pyelftools
python3 -m venv ~/zephyrproject/.venv
source ~/zephyrproject/.venv/bin/activate
pip install west
west init ~/zephyrproject
cd ~/zephyrproject
west update
west zephyr-export
cd ~
wget https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v0.16.1/zephyr-sdk-0.16.1_linux-x86_64.tar.xz
tar xvf zephyr-sdk-0.16.1_linux-x86_64.tar.xz
cd zephyr-sdk-0.16.1
./setup.sh
The installation procedure installs all compiler toolchains for us, so we can then basically compile for any target. It also fetches the latest Git source code of Zephyr under:
~/zephyrproject/zephyr
The "most default" blinky hello world example which blinks an LED is a bit useless for us because QEMU doesn't have LEDs, so instead we are going to use one of the UART examples which will print characters we can see on QEMU stdout.
Let's start with the hello world example on an x86 target:
cd ~/zephyrproject/zephyr
west build -b qemu_x86 samples/hello_world -t run
and it outputs:
Hello World! qemu_x86
The qemu_x86 in the output comes from the CONFIG_BOARD macro: github.com/zephyrproject-rtos/zephyr/blob/c15ff103001899ba0321b2c38013d1008584edc0/samples/hello_world/src/main.c#L11
#include <zephyr/kernel.h>

int main(void)
{
	printk("Hello World! %s\n", CONFIG_BOARD);
	return 0;
}
You can also first cd into the directory that you want to build in to avoid typing samples/hello_world all the time:
cd ~/zephyrproject/zephyr/samples/hello_world
west build -b qemu_x86 -t run
You can also build and run separately with:
west build -b qemu_x86
west build -t run
Another important option is:
west build -t menuconfig
But note that it does not modify your prj.conf automatically for you.
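So if you want a Kconfig option to survive a clean rebuild, you have to add it to the sample's prj.conf by hand, e.g. something like this (CONFIG_ASSERT is just one example of a real Zephyr Kconfig option, pick whatever menuconfig showed you):
# prj.conf in the sample directory, picked up automatically by west build
CONFIG_ASSERT=y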
Let's try on another target:
rm -rf build
west build -b qemu_cortex_a53 -t run
and we get the same output, but on a completely different board! The qemu_cortex_a53 board is documented at: docs.zephyrproject.org/3.4.0/boards/arm64/qemu_cortex_a53/doc/index.html
The list of all examples can be seen under:
ls ~/zephyrproject/zephyr/samples
which for example contains:
zephyrproject/zephyr/samples/hello_world
So to run another sample, simply select it, e.g. to run zephyrproject/zephyr/samples/synchronization:
west build -b qemu_cortex_a53 samples/synchronization -t run
Linux
It ain't perfect, but it's decent enough.
From a technical point of view, it can do anything that Microsoft Windows can, except being forcefully installed on every non-MacOS computer you could buy in 2019.
Ciro Santilli's conversion to Linux happened around 2012, and was a central part of Ciro Santilli's Open Source Enlightenment, since it fundamentally enables the discovery and contribution to open source software. Because what awesome open source person would waste time porting their amazing projects to closed source OSes?
Ciro's modest nature can be seen as he likes to compare this event to Buddha's Great Renunciation.
Particularly interesting in the history of Linux is how it won out over the open competitors that were coming up at the time: MINIX (see the chat) and the BSD operating systems, which got legally bogged down at the critical growth moment.
Figure 1.
xkcd 619: Supported Features
. Source. This perfectly illustrates Linux development: first the features that matter, then the useless features.
Video 1.
Bill Gates vs Steve Jobs by Epic Rap Battles of History (2012)
Source. Just stop whatever you are doing, and watch this right now. "I'm on Linux, bitch, I thought you GNU". Fandom explanations. It is just a shame that the Bill Gates actor looks absolutely nothing like the real Gates. Actually, the entire Gates/Jobs parts are good, but not genius. But the Linux one is.
Linux Foundation
The fact that this foundation has a bunch of paid, closed, certification courses makes Ciro Santilli not respect them at all. They should be making open access content instead!
MLperf v2.1 ResNet
Ubuntu 22.10 setup with a tiny dummy manually generated ImageNet, run on ONNX:
sudo apt install pybind11-dev

git clone https://github.com/mlcommons/inference
cd inference
git checkout v2.1

virtualenv -p python3 .venv
. .venv/bin/activate
pip install numpy==1.24.2 pycocotools==2.0.6 onnxruntime==1.14.1 opencv-python==4.7.0.72 torch==1.13.1

cd loadgen
CFLAGS="-std=c++14" python setup.py develop
cd -

cd vision/classification_and_detection
python setup.py develop
wget -q https://zenodo.org/record/3157894/files/mobilenet_v1_1.0_224.onnx
export MODEL_DIR="$(pwd)"
export EXTRA_OPS='--time 10 --max-latency 0.2'

tools/make_fake_imagenet.sh
DATA_DIR="$(pwd)/fake_imagenet" ./run_local.sh onnxruntime mobilenet cpu --accuracy
Last line of output on the P51, which appears to contain the benchmark results:
TestScenario.SingleStream qps=58.85, mean=0.0138, time=0.136, acc=62.500%, queries=8, tiles=50.0:0.0129,80.0:0.0137,90.0:0.0155,95.0:0.0171,99.0:0.0184,99.9:0.0187
where presumably qps means queries per second, and is the main result we are interested in: the more, the better.
Running:
tools/make_fake_imagenet.sh
produces a tiny ImageNet subset with 8 images under fake_imagenet/.
fake_imagenet/val_map.txt contains:
val/800px-Porsche_991_silver_IAA.jpg 817
val/512px-Cacatua_moluccensis_-Cincinnati_Zoo-8a.jpg 89
val/800px-Sardinian_Warbler.jpg 13
val/800px-7weeks_old.JPG 207
val/800px-20180630_Tesla_Model_S_70D_2015_midnight_blue_left_front.jpg 817
val/800px-Welsh_Springer_Spaniel.jpg 156
val/800px-Jammlich_crop.jpg 233
val/782px-Pumiforme.JPG 285
where the numbers are the category indices from ImageNet1k. At gist.github.com/yrevar/942d3a0ac09ec9e5eb3a we see e.g.:
  • 817: 'sports car, sport car',
  • 89: 'sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita',
and so on, so they are consistent with the image names. By quickly looking at the script, we see that it just downloads the images from Wikimedia and manually creates the file.
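If you want to double check that mapping yourself, a few lines of Python are enough (just a quick sketch, not part of the benchmark scripts):
# Print the ImageNet1k class index and image path of each fake_imagenet entry.
with open('fake_imagenet/val_map.txt') as f:
    for line in f:
        path, class_index = line.rsplit(maxsplit=1)
        print(class_index, path)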
TODO prepare and test on the actual ImageNet validation set, README says:
Prepare the imagenet dataset to come.
Since that one is undocumented, let's try the COCO dataset instead, which uses COCO 2017 and is also a bit smaller. Note that this is not part of MLperf anymore since v2.1: only ImageNet and Open Images are used. But still:
wget https://zenodo.org/record/4735652/files/ssd_mobilenet_v1_coco_2018_01_28.onnx
DATA_DIR_BASE=/mnt/data/coco
export DATA_DIR="${DATADIR_BASE}/val2017-300"
mkdir -p "$DATA_DIR_BASE"
cd "$DATA_DIR_BASE"
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip val2017.zip
unzip annotations_trainval2017.zip
mv annotations val2017
cd -
cd "$(git-toplevel)"
python tools/upscale_coco/upscale_coco.py --inputs "$DATA_DIR_BASE" --outputs "$DATA_DIR" --size 300 300 --format png
cd -
Now:
./run_local.sh onnxruntime mobilenet cpu --accuracy
fails immediately with:
No such file or directory: '/path/to/coco/val2017-300/val_map.txt
The more plausible looking:
./run_local.sh onnxruntime mobilenet cpu --accuracy --dataset coco-300
first takes a while, most likely preprocessing something, which it does only once, and then fails:
Traceback (most recent call last):
  File "/home/ciro/git/inference/vision/classification_and_detection/python/main.py", line 596, in <module>
    main()
  File "/home/ciro/git/inference/vision/classification_and_detection/python/main.py", line 468, in main
    ds = wanted_dataset(data_path=args.dataset_path,
  File "/home/ciro/git/inference/vision/classification_and_detection/python/coco.py", line 115, in __init__
    self.label_list = np.array(self.label_list)
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (5000, 2) + inhomogeneous part.

Pinned article: Introduction to the OurBigBook Project

Welcome to the OurBigBook Project! Our goal is to create the perfect publishing platform for STEM subjects, and get university-level students to write the best free STEM tutorials ever.
Everyone is welcome to create an account and play with the site: ourbigbook.com/go/register. We believe that students themselves can write amazing tutorials, but teachers are welcome too. You can write about anything you want, it doesn't have to be STEM or even educational. Silly test content is very welcome and you won't be penalized in any way. Just keep it legal!
We have two killer features:
  1. topics: topics group articles by different users with the same title, e.g. here is the topic for the "Fundamental Theorem of Calculus" ourbigbook.com/go/topic/fundamental-theorem-of-calculus
    Articles of different users are sorted by upvote within each article page. This feature is a bit like:
    • a Wikipedia where each user can have their own version of each article
    • a Q&A website like Stack Overflow, where multiple people can give their views on a given topic, and the best ones are sorted by upvote. Except you don't need to wait for someone to ask first, and any topic goes, no matter how narrow or broad
    This feature makes it possible for readers to find better explanations of any topic created by other writers. And it allows writers to create an explanation in a place that readers might actually find it.
    Figure 1.
    Screenshot of the "Derivative" topic page
    . View it live at: ourbigbook.com/go/topic/derivative
  2. local editing: you can store all your personal knowledge base content locally in a plaintext markup format that can be edited locally and published either to OurBigBook.com or as a static website.
    This way you can be sure that even if OurBigBook.com were to go down one day (which we have no plans to do as it is quite cheap to host!), your content will still be perfectly readable as a static site.
    Figure 2.
    You can publish local OurBigBook lightweight markup files to either https://OurBigBook.com or as a static website
    .
    Figure 3.
    Visual Studio Code extension installation
    .
    Figure 4.
    Visual Studio Code extension tree navigation
    .
    Figure 5.
    Web editor
    . You can also edit articles on the Web editor without installing anything locally.
    Video 3.
    Edit locally and publish demo
    . Source. This shows editing OurBigBook Markup and publishing it using the Visual Studio Code extension.
    Video 4.
    OurBigBook Visual Studio Code extension editing and navigation demo
    . Source.
  3. Infinitely deep tables of contents:
    Figure 6.
    Dynamic article tree with infinitely deep table of contents
    .
    Descendant pages can also show up as toplevel e.g.: ourbigbook.com/cirosantilli/chordate-subclade
All our software is open source and hosted at: github.com/ourbigbook/ourbigbook
Further documentation can be found at: docs.ourbigbook.com
Feel free to reach out to us for any help or suggestions: docs.ourbigbook.com/#contact