Where to store images by Ciro Santilli 35 Updated +Created
Since images are large, they bring the following challenges:
  • keeping images in the main Git repository with text content makes the repository huge and slow to clone, and should not be done
  • storing and serving images could cost us, which we want to avoid
To solve those problems, the following alternatives appear to be stable enough and should be used in decreasing order of preference:
  • for all images, use the separate GitHub repository: github.com/cirosantilli/media
    This way, the entire website relies on a single third party: GitHub, so we have a simple single point of failure.
    We are at the mercy of GitHub's 1 GB size policy: help.github.com/en/articles/what-is-my-disk-quota, but it will take a while to hit that.
    GitLab however has a 10 GB maximum size: about.gitlab.com/2015/04/08/gitlab-dot-com-storage-limit-raised-to-10gb-per-repo/ so we could move there if we ever blow past 1 GB on GitHub.
    Both GitLab and GitHub allow uploading files through the web UI, so downloading a large repo is never needed to contribute.
    As of 2019 however, GitHub does not serve videos the way it serves images.
  • Wikimedia Commons for videos if the following conditions are met:
    • in scope: "educational material in a broad sense", but not e.g. "Private image collections, e.g. private party photos, photos of yourself and your friends, your collection of holiday snaps and so on.". I don't think they will be too picky even with low quality photos.
    • allowed format, e.g. images or videos, but not ZIPs
    • allowed license: CC BY SA, but no fair use
    Since Wikimedia Commons has a higher level of curation and is an educational not-for-profit, it is the method most likely to remain available for the longest time.
    For this reason, we highly recommend also uploading any acceptable files there, as an additional backup.
    The downside is that its tooling is not as good, e.g. there are a bunch of messy unofficial tools for batch operations, and upload takes more effort.
    Another downside of Wikimedia Commons is that while we can choose the basename of files, it also adds some extra SHA crap to the beginning of URLs, making them harder to predict.
    Another serious downside is that they randomly rename images without redirects... e.g. they renamed upload.wikimedia.org/wikipedia/en/0/03/STJ_SVG_file.svg to upload.wikimedia.org/wikipedia/commons/8/81/Superconducting_tunnel_junction.svg
    Another "downside" is that they are extremely strict about copyright compliance. This is good because you can be pretty sure that they are correct in general, but it also means that they are very conservative, and delete things where fair use would be OK. And if those fair uses have no Wikipedia page, they won't show up anywhere.
  • archive.org for anything else, e.g. videos that Wikimedia Commons does not accept.
    All content will be tracked under the cirosantilli collection: archive.org/details/cirosantilli
    archive.org has a very convenient upload interface and lax requirements. The generated URLs are predictable (a single SHA prefix for the entire collection).
    Never trust a website that is not on GitHub Pages: for-profit companies will take down everything immediately as soon as it stops making them money.
    Every external link to non-GitHub pages must be archived. And GitHub links must be forked.
    We should also back up images that Wikimedia Commons does not accept here, in addition to the github.com/cirosantilli/media repository.
ResNet v1 vs v1.5 by Ciro Santilli 35 Updated +Created
catalog.ngc.nvidia.com/orgs/nvidia/resources/resnet_50_v1_5_for_pytorch explains:
The difference between v1 and v1.5 is that, in the bottleneck blocks which requires downsampling, v1 has stride = 2 in the first 1x1 convolution, whereas v1.5 has stride = 2 in the 3x3 convolution.
This difference makes ResNet50 v1.5 slightly more accurate (~0.5% top1) than v1, but comes with a small performance drawback (~5% imgs/sec).
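In PyTorch-like code, the difference is just which convolution carries stride = 2 inside the downsampling bottleneck block. Here's a minimal sketch, omitting batch normalization, ReLU and the residual connection for brevity, so this is not the actual torchvision implementation:
import torch.nn as nn

def bottleneck_v1(cin, cmid, cout):
    # v1: downsampling stride=2 in the first 1x1 convolution.
    return nn.Sequential(
        nn.Conv2d(cin, cmid, kernel_size=1, stride=2),
        nn.Conv2d(cmid, cmid, kernel_size=3, stride=1, padding=1),
        nn.Conv2d(cmid, cout, kernel_size=1),
    )

def bottleneck_v1_5(cin, cmid, cout):
    # v1.5: stride=2 moves to the 3x3 convolution, so the 1x1 sees the
    # full-resolution input and fewer spatial positions are simply skipped.
    return nn.Sequential(
        nn.Conv2d(cin, cmid, kernel_size=1, stride=1),
        nn.Conv2d(cmid, cmid, kernel_size=3, stride=2, padding=1),
        nn.Conv2d(cmid, cout, kernel_size=1),
    )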
CNN convolution kernels are also learnt by Ciro Santilli 35 Updated +Created
CNN convolution kernels are not hardcoded. They are learnt and optimized via backpropagation. You just specify their size! For example, in PyTorch you'd just do:
import torch.nn as nn
nn.Conv2d(1, 6, kernel_size=(5, 5))
as used for example at: activatedgeek/LeNet-5.
This can also be inferred from: stackoverflow.com/questions/55594969/how-to-visualise-filters-in-a-cnn-with-pytorch where we see that the kernels are not perfectly regular as you'd expect from something hand-coded.
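One quick way to see this in PyTorch: the kernels live in conv.weight, a randomly initialized parameter with requires_grad=True that the optimizer updates. A minimal sketch with a dummy loss:
import torch
import torch.nn as nn

conv = nn.Conv2d(1, 6, kernel_size=(5, 5))
# The kernels: shape (out_channels, in_channels, 5, 5), randomly initialized.
print(conv.weight.shape)          # torch.Size([6, 1, 5, 5])
print(conv.weight.requires_grad)  # True

# One backpropagation step changes the kernels.
opt = torch.optim.SGD(conv.parameters(), lr=0.1)
x = torch.randn(1, 1, 32, 32)
loss = conv(x).abs().mean()  # dummy loss, just to get gradients flowing
loss.backward()
before = conv.weight.detach().clone()
opt.step()
print((conv.weight - before).abs().sum() > 0)  # tensor(True): kernels changed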
Brain bibliography by Ciro Santilli 35 Updated +Created
LeNet by Ciro Santilli 35 Updated +Created
LeNet implementation by Ciro Santilli 35 Updated +Created
activatedgeek/LeNet-5 use ONNX for inference by Ciro Santilli 35 Updated +Created
Now let's try and use the trained ONNX file for inference on some images drawn manually with a mouse in GIMP:
Figure 1.
Number 9 drawn with mouse on GIMP by Ciro Santilli (2023)
Note that:
  • the images must be drawn with white on black. If you use black on white, the accuracy becomes terrible. This is a very good example of brittleness in AI systems!
  • images must be converted to 32x32 for lenet.onnx, as that is what training was done on. The training step converted the 28x28 images to 32x32 as the very first thing it did, before training even started
We can try the code adapted from thenewstack.io/tutorial-using-a-pre-trained-onnx-model-for-inferencing/ at lenet/infer.py:
cd lenet
cp ~/git/LeNet-5/lenet.onnx .
wget -O 9.png https://raw.githubusercontent.com/cirosantilli/media/master/Digit_9_hand_drawn_by_Ciro_Santilli_on_GIMP_with_mouse_white_on_black.png
./infer.py 9.png
and it works pretty well! The program outputs:
9
as desired.
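For reference, here is a minimal sketch of what such an inference script could look like. This is a reconstruction under assumptions (onnxruntime, NumPy, Pillow, and simple [0, 1] normalization), not necessarily the exact lenet/infer.py:
#!/usr/bin/env python3
# Sketch: classify one white-on-black digit image with lenet.onnx.
import sys

import numpy as np
import onnxruntime
from PIL import Image

# Grayscale, resized to the 32x32 input the model was trained on.
# White on black matters: black on white gives terrible accuracy.
img = Image.open(sys.argv[1]).convert('L').resize((32, 32))
# Scale to [0, 1] and reshape to (batch, channel, height, width).
x = np.asarray(img, dtype=np.float32)[np.newaxis, np.newaxis, :, :] / 255.0

session = onnxruntime.InferenceSession('lenet.onnx')
input_name = session.get_inputs()[0].name
logits = session.run(None, {input_name: x})[0]
# Predicted digit: index of the largest output.
print(np.argmax(logits))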
We can also try with images directly from Extract MNIST images.
infer_mnist.py lenet.onnx mnist_png/out/testing/1/*.png
and the accuracy is great as expected.
Penis by Ciro Santilli 35 Updated +Created
Vagina by Ciro Santilli 35 Updated +Created
Trained artificial neural network by Ciro Santilli 35 Updated +Created
Deep learning by Ciro Santilli 35 Updated +Created
Deep learning is the name artificial neural networks basically converged to in the 2010s/2020s.
It is a bit of an unfortunate name, as it suggests something like "deep understanding" and even reminds one of AGI, which deep learning almost certainly will not attain on its own. But at least it sounds good.
Backpropagation by Ciro Santilli 35 Updated +Created
Video 1.
What is backpropagation really doing? by 3Blue1Brown (2017)
Source. Good hand-wave intuition, but does not describe the exact algorithm.
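Since the video stops short of the exact algorithm, here is a minimal sketch of backpropagation for a tiny two-layer network in plain NumPy; all shapes and numbers are illustrative:
import numpy as np

# Tiny 2-layer network with a ReLU hidden layer and mean squared error loss.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))   # batch of 4 inputs with 3 features
y = rng.normal(size=(4, 2))   # targets
W1 = rng.normal(size=(3, 5))
W2 = rng.normal(size=(5, 2))

for step in range(100):
    # Forward pass.
    h = np.maximum(0, x @ W1)  # ReLU
    y_hat = h @ W2
    loss = ((y_hat - y) ** 2).mean()

    # Backward pass: apply the chain rule from the loss back to each weight.
    d_y_hat = 2 * (y_hat - y) / y.size
    d_W2 = h.T @ d_y_hat
    d_h = d_y_hat @ W2.T
    d_W1 = x.T @ (d_h * (h > 0))  # ReLU only passes gradient where h > 0

    # Gradient descent step: the loss decreases over iterations.
    W1 -= 0.01 * d_W1
    W2 -= 0.01 * d_W2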
Epoch and batch size by Ciro Santilli 35 Updated +Created
Learning rate by Ciro Santilli 35 Updated +Created
Deep learning benchmark by Ciro Santilli 35 Updated +Created
MLperf by Ciro Santilli 35 Updated +Created
mlcommons.org/en/. Their homepage is not amazingly organized, but it does the job.
Benchmark focused on deep learning. It has two parts:
  • MLPerf Training
  • MLPerf Inference
Furthermore, a specific network model is specified for each benchmark in the closed category: so it goes beyond just specifying the dataset.
And there are also separate repositories for each:
  • github.com/mlcommons/training
  • github.com/mlcommons/inference
E.g. on mlcommons.org/en/training-normal-21/ we can see what the benchmarks are:
Dataset and model for each benchmark:
  • ImageNet: ResNet
  • KiTS19: 3D U-Net
  • OpenImages: RetinaNet
  • COCO dataset: Mask R-CNN
  • LibriSpeech: RNN-T
  • Wikipedia: BERT
  • 1TB Clickthrough: DLRM
  • Go: MiniGo
MLperf v2.1 ResNet by Ciro Santilli 35 Updated +Created
Ubuntu 22.10 setup with a tiny dummy manually generated ImageNet, run on ONNX:
sudo apt install pybind11-dev

git clone https://github.com/mlcommons/inference
cd inference
git checkout v2.1

virtualenv -p python3 .venv
. .venv/bin/activate
pip install numpy==1.24.2 pycocotools==2.0.6 onnxruntime==1.14.1 opencv-python==4.7.0.72 torch==1.13.1

cd loadgen
CFLAGS="-std=c++14" python setup.py develop
cd -

cd vision/classification_and_detection
python setup.py develop
wget -q https://zenodo.org/record/3157894/files/mobilenet_v1_1.0_224.onnx
export MODEL_DIR="$(pwd)"
export EXTRA_OPS='--time 10 --max-latency 0.2'

tools/make_fake_imagenet.sh
DATA_DIR="$(pwd)/fake_imagenet" ./run_local.sh onnxruntime mobilenet cpu --accuracy
Last line of output on P51, which appears to contain the benchmark results:
TestScenario.SingleStream qps=58.85, mean=0.0138, time=0.136, acc=62.500%, queries=8, tiles=50.0:0.0129,80.0:0.0137,90.0:0.0155,95.0:0.0171,99.0:0.0184,99.9:0.0187
where presumably qps means queries per second, which is the main result we are interested in: the more the better.
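As a sanity check on that interpretation: queries / time = 8 / 0.136 ≈ 58.8, which matches the reported qps.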
Running:
tools/make_fake_imagenet.sh
produces a tiny ImageNet subset with 8 images under fake_imagenet/.
fake_imagenet/val_map.txt contains:
val/800px-Porsche_991_silver_IAA.jpg 817
val/512px-Cacatua_moluccensis_-Cincinnati_Zoo-8a.jpg 89
val/800px-Sardinian_Warbler.jpg 13
val/800px-7weeks_old.JPG 207
val/800px-20180630_Tesla_Model_S_70D_2015_midnight_blue_left_front.jpg 817
val/800px-Welsh_Springer_Spaniel.jpg 156
val/800px-Jammlich_crop.jpg 233
val/782px-Pumiforme.JPG 285
where the numbers are the category indices from ImageNet1k. At gist.github.com/yrevar/942d3a0ac09ec9e5eb3a see e.g.:
  • 817: 'sports car, sport car',
  • 89: 'sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita',
and so on, so they are coherent with the image names. By quickly looking at the script we see that it just downloads from Wikimedia and manually creates the file.
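For example, a minimal Python sketch to parse that map into (image path, class index) pairs:
# Each line of val_map.txt is: <image path> <ImageNet1k class index>.
with open('fake_imagenet/val_map.txt') as f:
    for line in f:
        path, idx = line.rsplit(maxsplit=1)
        print(path, int(idx))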
TODO prepare and test on the actual ImageNet validation set, README says:
Prepare the imagenet dataset to come.
Since that one is undocumented, let's try the COCO dataset instead, which uses COCO 2017 and is also a bit smaller. Note that this is not part of MLperf anymore since v2.1; only ImageNet and OpenImages are used. But still:
wget https://zenodo.org/record/4735652/files/ssd_mobilenet_v1_coco_2018_01_28.onnx
DATA_DIR_BASE=/mnt/data/coco
export DATA_DIR="${DATA_DIR_BASE}/val2017-300"
mkdir -p "$DATA_DIR_BASE"
cd "$DATA_DIR_BASE"
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip val2017.zip
unzip annotations_trainval2017.zip
mv annotations val2017
cd -
cd "$(git-toplevel)"
python tools/upscale_coco/upscale_coco.py --inputs "$DATA_DIR_BASE" --outputs "$DATA_DIR" --size 300 300 --format png
cd -
Now:
./run_local.sh onnxruntime mobilenet cpu --accuracy
fails immediately with:
No such file or directory: '/path/to/coco/val2017-300/val_map.txt
The more plausible looking:
./run_local.sh onnxruntime mobilenet cpu --accuracy --dataset coco-300
first takes a while, most likely preprocessing something, which it does only once, and then fails:
Traceback (most recent call last):
  File "/home/ciro/git/inference/vision/classification_and_detection/python/main.py", line 596, in <module>
    main()
  File "/home/ciro/git/inference/vision/classification_and_detection/python/main.py", line 468, in main
    ds = wanted_dataset(data_path=args.dataset_path,
  File "/home/ciro/git/inference/vision/classification_and_detection/python/coco.py", line 115, in __init__
    self.label_list = np.array(self.label_list)
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (5000, 2) + inhomogeneous part.
TODO!
Deep learning framework by Ciro Santilli 35 Updated +Created

Pinned article: ourbigbook/introduction-to-the-ourbigbook-project

Welcome to the OurBigBook Project! Our goal is to create the perfect publishing platform for STEM subjects, and get university-level students to write the best free STEM tutorials ever.
Everyone is welcome to create an account and play with the site: ourbigbook.com/go/register. We believe that students themselves can write amazing tutorials, but teachers are welcome too. You can write about anything you want; it doesn't have to be STEM or even educational. Silly test content is very welcome and you won't be penalized in any way. Just keep it legal!
Video 1.
Intro to OurBigBook
. Source.
We have a few killer features:
  1. topics: topics group articles by different users with the same title, e.g. here is the topic for the "Fundamental Theorem of Calculus" ourbigbook.com/go/topic/fundamental-theorem-of-calculus
    Articles of different users are sorted by upvote within each article page. This feature is a bit like:
    • a Wikipedia where each user can have their own version of each article
    • a Q&A website like Stack Overflow, where multiple people can give their views on a given topic, and the best ones are sorted by upvote. Except you don't need to wait for someone to ask first, and any topic goes, no matter how narrow or broad
    This feature makes it possible for readers to find better explanations of any topic created by other writers. And it allows writers to create an explanation in a place that readers might actually find it.
    Figure 1.
    Screenshot of the "Derivative" topic page
    . View it live at: ourbigbook.com/go/topic/derivative
    Video 2.
    OurBigBook Web topics demo
    . Source.
  2. local editing: you can store all your personal knowledge base content locally in a plaintext markup format that can be edited locally and published either:
    • to OurBigBook.com to get awesome multi-user features like topics and likes
    • as HTML files to a static website, which you can host yourself for free on many external providers like GitHub Pages, and remain in full control
    This way you can be sure that even if OurBigBook.com were to go down one day (which we have no plans to do as it is quite cheap to host!), your content will still be perfectly readable as a static site.
    Figure 5. You can also edit articles in the Web editor without installing anything locally.
    Video 3.
    Edit locally and publish demo
    . Source. This shows editing OurBigBook Markup and publishing it using the Visual Studio Code extension.
    Video 4.
    OurBigBook Visual Studio Code extension editing and navigation demo
    . Source.
  3. Infinitely deep tables of contents:
    Figure 6.
    Dynamic article tree with infinitely deep table of contents
    .
    Descendant pages can also show up as toplevel, e.g.: ourbigbook.com/cirosantilli/chordate-subclade
All our software is open source and hosted at: github.com/ourbigbook/ourbigbook
Further documentation can be found at: docs.ourbigbook.com
Feel free to reach out to us for any help or suggestions: docs.ourbigbook.com/#contact