Notion (productivity software) by Ciro Santilli 37 Created 2024-06-26 Updated 2025-07-16
Video 1.
9-Year Hustle to Achieve a Single Goal by EO
. Source. Interview with Akshay Kothari and Ivan Zhao.
Video 2.
How Notion Handles 200 BILLION Notes by Coding with Lewis
. Source.
Type of laser by Ciro Santilli 37 Created 2024-06-26 Updated 2025-07-16
Lasers vs other light sources by Ciro Santilli 37 Created 2024-06-26 Updated 2025-07-16
The key advantages of lasers over other light sources are:
One cool thing about lasers is that they rely on one specific atomic energy level transition to produce light. This is why they are able to be so monochromatic. Compare this to:
As such, lasers manage to largely overcome the "temperature distribution-like" effects that give other light sources a wider spectrum.
Video 1.
Crazy difference between 5W laser and 5W LED by Brainiac75
. Source. Basic but good. Uses a laser photometer.
Laser spectrum by Ciro Santilli 37 Created 2024-06-26 Updated 2025-07-16
Video 1.
Spectrum of laser light by Shaoul Ezekiel
. Source. 2008, MIT.
Education of André-Marie Ampère by Ciro Santilli 37 Created 2024-06-26 Updated 2025-07-16
en.wikipedia.org/w/index.php?title=Andr%C3%A9-Marie_Amp%C3%A8re&oldid=1211946256:
Jean-Jacques Ampère, a successful merchant, was an admirer of the philosophy of Jean-Jacques Rousseau, whose theories of education (as outlined in his treatise Émile) were the basis of Ampère's education. Rousseau believed that young boys should avoid formal schooling and pursue instead a "direct education from nature." Ampère's father actualized this ideal by allowing his son to educate himself within the walls of his well-stocked library.
TODO find the source for this.
Electrical cable by Ciro Santilli 37 Created 2024-06-26 Updated 2025-07-16
One or more electrical wires surrounded by an insulator.
Oliver Heaviside by Ciro Santilli 37 Created 2024-06-26 Updated 2025-07-16
He participated in the development of the electrical telegraph, and he did some good modeling work that improved the foundations of the field, notably creating the telegrapher's equations.
He was one of those idealists who just want to do some cool work even if they have to starve for it; others had to obtain a state pension for him in recognition of his contributions. Nice guy. en.wikipedia.org/w/index.php?title=Oliver_Heaviside&oldid=1230097796#Later_years_and_views:
In 1896, FitzGerald and John Perry obtained a civil list pension of £120 per year for Heaviside, who was now living in Devon, and persuaded him to accept it, after he had rejected other charitable offers from the Royal Society.
He also never married: www.nndb.com/people/627/000204015/
Figure 1.
Oliver Heaviside c. 1900
. Source.
We intersect the 2013 DNS Census virtual host cleanup with the 2013 DNS Census MX records and that leaves 460k hits. We did lose a third of the known hits on the MX records (down to 260 hits), since secureserver.net is only used in about 1/3 of sites, but we also concentrate the search 9x, so it may be worth it.
Then we do Wayback Machine CDX scanning. It takes about 5 days, but it is manageable.
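Each such scan is essentially one Wayback Machine CDX API query per domain. A minimal sketch of a single query, with example.com as a placeholder domain:
# One Wayback Machine CDX API query: list archived URLs under a domain
# whose path ends in .jar or .swf. example.com is a placeholder; the real
# scan loops over every candidate domain.
import json
import urllib.parse
import urllib.request

params = {
    "url": "example.com/*",
    "output": "json",
    "fl": "original,timestamp",
    "collapse": "urlkey",
    "filter": r"original:.*\.(jar|swf)$",
}
url = "https://web.archive.org/cdx/search/cdx?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as f:
    body = f.read()
rows = json.loads(body) if body.strip() else []
for row in rows[1:]:  # rows[0] is the field name header
    print(row)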
We did a full Wayback Machine CDX scan for JAR, SWF and cgi-bin in those, but only found a single new hit:
activatedgeek/LeNet-5 by Ciro Santilli 37 Updated 2025-07-16
This repository contains a very clean minimal PyTorch implementation of LeNet-5 for MNIST.
It trains the LeNet-5 neural network on the MNIST dataset from scratch, and afterwards you can give it newly hand-written digits 0 to 9 and it will hopefully recognize the digit for you.
Ciro Santilli created a small fork of this repo at lenet adding better automation for:
Install on Ubuntu 24.10 with:
sudo apt install protobuf-compiler
git clone https://github.com/activatedgeek/LeNet-5
cd LeNet-5
git checkout 95b55a838f9d90536fd3b303cede12cf8b5da47f
virtualenv -p python3 .venv
. .venv/bin/activate
pip install \
  Pillow==6.2.0 \
  numpy==1.24.2 \
  onnx==1.13.1 \
  torch==2.0.0 \
  torchvision==0.15.1 \
  visdom==0.2.4 \
;
We use our own pip install list because their requirements.txt uses >= instead of ==, making it unpredictable whether things will work or not.
On Ubuntu 22.10 it was instead:
pip install \
  Pillow==6.2.0 \
  numpy==1.26.4 \
  onnx==1.17.0 torch==2.6.0 \
  torchvision==0.21.0 \
  visdom==0.2.4 \
;
Then run with:
python run.py
This script:
  • does a fixed 15 epochs on the training data
  • it then uses the trained net from memory to check accuracy with the test data
  • then it also produces a lenet.onnx ONNX file which contains the trained network, nice!
It throws a billion exceptions because we didn't start the Visdom server, but everything works nevertheless; we just don't get a visualization of the training.
The terminal outputs lines such as:
Train - Epoch 1, Batch: 0, Loss: 2.311587
Train - Epoch 1, Batch: 10, Loss: 2.067062
Train - Epoch 1, Batch: 20, Loss: 0.959845
...
Train - Epoch 1, Batch: 230, Loss: 0.071796
Test Avg. Loss: 0.000112, Accuracy: 0.967500
...
Train - Epoch 15, Batch: 230, Loss: 0.010040
Test Avg. Loss: 0.000038, Accuracy: 0.989300
And the runtime on Ubuntu 22.10, P51 was:
real    2m10.262s
user    11m9.771s
sys     0m26.368s
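The lenet.onnx output can then be used for inference outside of PyTorch, e.g. with onnxruntime, which is not part of the pip install above. A minimal sketch, assuming the exported model takes a single 1x1x32x32 input scaled to [0, 1]; double check the input shape on Netron if in doubt:
# Classify one MNIST test image with the exported lenet.onnx.
# Assumes "pip install onnxruntime" and an input shape of (1, 1, 32, 32),
# with pixel values scaled to [0, 1] as a ToTensor-style transform would do.
import numpy as np
import onnxruntime
import torchvision

session = onnxruntime.InferenceSession("lenet.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Take one image from the MNIST test set and resize 28x28 -> 32x32,
# which is the input size LeNet-5 expects.
mnist = torchvision.datasets.MNIST(root="data/mnist", train=False, download=True)
img, label = mnist[0]
x = (np.asarray(img.resize((32, 32)), dtype=np.float32) / 255.0).reshape(1, 1, 32, 32)

logits = session.run(None, {input_name: x})[0]
print("predicted:", int(logits.argmax()), "actual:", label)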
One of the benefits of the ONNX output is that we can nicely visualize the neural network on Netron:
Figure 1.
Netron visualization of the activatedgeek/LeNet-5 ONNX output
. From this we can see the bifurcation in the computational graph, as done in the code at:
output = self.c1(img)       # first convolutional block
x = self.c2_1(output)       # one branch of the fork
output = self.c2_2(output)  # the other branch
output += x                 # the branches merge back via addition
output = self.c3(output)
This doesn't seem to conform to the original LeNet-5 however?
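Either way, we can cross check what Netron shows by dumping the graph nodes programmatically with the onnx package from the pip install above; the fork appears as one tensor feeding two Conv nodes that are later merged by an Add:
# Print every node of the exported graph: op type, inputs -> outputs.
import onnx

model = onnx.load("lenet.onnx")
for node in model.graph.node:
    print(node.op_type, list(node.input), "->", list(node.output))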
MLperf v2.1 ResNet by Ciro Santilli 37 Updated 2025-07-16
Ubuntu 22.10 setup with a tiny, manually generated dummy ImageNet, run on ONNX:
sudo apt install pybind11-dev

git clone https://github.com/mlcommons/inference
cd inference
git checkout v2.1

virtualenv -p python3 .venv
. .venv/bin/activate
pip install numpy==1.24.2 pycocotools==2.0.6 onnxruntime==1.14.1 opencv-python==4.7.0.72 torch==1.13.1

cd loadgen
CFLAGS="-std=c++14" python setup.py develop
cd -

cd vision/classification_and_detection
python setup.py develop
wget -q https://zenodo.org/record/3157894/files/mobilenet_v1_1.0_224.onnx
export MODEL_DIR="$(pwd)"
export EXTRA_OPS='--time 10 --max-latency 0.2'

tools/make_fake_imagenet.sh
DATA_DIR="$(pwd)/fake_imagenet" ./run_local.sh onnxruntime mobilenet cpu --accuracy
Last line of output on P51, which appears to contain the benchmark results:
TestScenario.SingleStream qps=58.85, mean=0.0138, time=0.136, acc=62.500%, queries=8, tiles=50.0:0.0129,80.0:0.0137,90.0:0.0155,95.0:0.0171,99.0:0.0184,99.9:0.0187
where presumably qps means queries per second, which is the main result we are interested in: the more the better.
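For scripted runs it can be handy to parse that summary line. A throwaway sketch, based only on the sample line above:
# Extract qps and the latency percentiles ("tiles") from the summary line.
import re

line = ("TestScenario.SingleStream qps=58.85, mean=0.0138, time=0.136, "
        "acc=62.500%, queries=8, tiles=50.0:0.0129,80.0:0.0137,90.0:0.0155,"
        "95.0:0.0171,99.0:0.0184,99.9:0.0187")
qps = float(re.search(r"qps=([\d.]+)", line).group(1))
tiles = dict(pair.split(":") for pair in
             re.search(r"tiles=(\S+)", line).group(1).split(","))
print(qps, tiles["99.0"])  # 58.85 0.0184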
Running:
tools/make_fake_imagenet.sh
produces a tiny ImageNet subset with 8 images under fake_imagenet/.
fake_imagenet/val_map.txt contains:
val/800px-Porsche_991_silver_IAA.jpg 817
val/512px-Cacatua_moluccensis_-Cincinnati_Zoo-8a.jpg 89
val/800px-Sardinian_Warbler.jpg 13
val/800px-7weeks_old.JPG 207
val/800px-20180630_Tesla_Model_S_70D_2015_midnight_blue_left_front.jpg 817
val/800px-Welsh_Springer_Spaniel.jpg 156
val/800px-Jammlich_crop.jpg 233
val/782px-Pumiforme.JPG 285
where the numbers are the category indices from ImageNet1k. In gist.github.com/yrevar/942d3a0ac09ec9e5eb3a we see e.g.:
  • 817: 'sports car, sport car',
  • 89: 'sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita',
and so on, so the labels are consistent with the image names. By quickly looking at the script we see that it just downloads the images from Wikimedia and manually creates the map file.
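To check all 8 images at once we can join val_map.txt against that gist, whose content is a Python dict literal. A sketch, assuming the gist was saved locally as imagenet1000_clsidx_to_labels.txt:
# Map each val_map.txt entry to its human readable ImageNet1k label.
import ast

with open("imagenet1000_clsidx_to_labels.txt") as f:
    idx_to_label = ast.literal_eval(f.read())

with open("fake_imagenet/val_map.txt") as f:
    for line in f:
        path, idx = line.rsplit(maxsplit=1)
        print(path, "->", idx_to_label[int(idx)])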
TODO prepare and test on the actual ImageNet validation set, README says:
Prepare the imagenet dataset to come.
Since that one is undocumented, let's try the COCO dataset instead, which uses COCO 2017 and is also a bit smaller. Note that this is not part of MLperf anymore since v2.1: only ImageNet and Open Images are used. But still:
wget https://zenodo.org/record/4735652/files/ssd_mobilenet_v1_coco_2018_01_28.onnx
DATA_DIR_BASE=/mnt/data/coco
export DATA_DIR="${DATA_DIR_BASE}/val2017-300"
mkdir -p "$DATA_DIR_BASE"
cd "$DATA_DIR_BASE"
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip val2017.zip
unzip annotations_trainval2017.zip
mv annotations val2017
cd -
cd "$(git-toplevel)"
python tools/upscale_coco/upscale_coco.py --inputs "$DATA_DIR_BASE" --outputs "$DATA_DIR" --size 300 300 --format png
cd -
Now:
./run_local.sh onnxruntime mobilenet cpu --accuracy
fails immediately with:
No such file or directory: '/path/to/coco/val2017-300/val_map.txt
The more plausible looking:
./run_local.sh onnxruntime mobilenet cpu --accuracy --dataset coco-300
first takes a while, most likely to preprocess something, which it does only once, and then fails:
Traceback (most recent call last):
  File "/home/ciro/git/inference/vision/classification_and_detection/python/main.py", line 596, in <module>
    main()
  File "/home/ciro/git/inference/vision/classification_and_detection/python/main.py", line 468, in main
    ds = wanted_dataset(data_path=args.dataset_path,
  File "/home/ciro/git/inference/vision/classification_and_detection/python/coco.py", line 115, in __init__
    self.label_list = np.array(self.label_list)
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (5000, 2) + inhomogeneous part.
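The ValueError itself is just NumPy refusing to build a rectangular array out of rows of unequal length, presumably in the dataset's label list. A minimal repro of the same error class:
# Rows of unequal length cannot form a rectangular ndarray, and
# NumPy >= 1.24 raises instead of silently creating an object array.
import numpy as np

np.array([[1, 2], [3, 4, 5]])
# ValueError: setting an array element with a sequence. The requested
# array has an inhomogeneous shape after 1 dimensions. ...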
Netron by Ciro Santilli 37 Updated 2025-07-16
ONNX visualizer.
Figure 1.
Netron visualization of the activatedgeek/LeNet-5 ONNX output
.
This is a good concept. For the amount most people save, having a simple and easy-to-apply investment thesis is the best way to go.
Video 1.
All the financial advice you’ll ever need fits on a single index card
. Source.
Finance guru by Ciro Santilli 37 Updated 2025-07-16
A person who gives financial advice, notably personal finance advice. Some of them are questionable guru-like beings, and many are on YouTube.
