Instructions at:
Ubuntu 22.10 setup with a tiny, manually generated dummy ImageNet, running on ONNX:
sudo apt install pybind11-dev
git clone https://github.com/mlcommons/inference
cd inference
git checkout v2.1
virtualenv -p python3 .venv
. .venv/bin/activate
pip install numpy==1.24.2 pycocotools==2.0.6 onnxruntime==1.14.1 opencv-python==4.7.0.72 torch==1.13.1
cd loadgen
CFLAGS="-std=c++14" python setup.py develop
cd -
cd vision/classification_and_detection
python setup.py develop
wget -q https://zenodo.org/record/3157894/files/mobilenet_v1_1.0_224.onnx
export MODEL_DIR="$(pwd)"
export EXTRA_OPS='--time 10 --max-latency 0.2'
tools/make_fake_imagenet.sh
DATA_DIR="$(pwd)/fake_imagenet" ./run_local.sh onnxruntime mobilenet cpu --accuracyLast line of output on P51, which appears to contain the benchmark resultswhere presumably
TestScenario.SingleStream qps=58.85, mean=0.0138, time=0.136, acc=62.500%, queries=8, tiles=50.0:0.0129,80.0:0.0137,90.0:0.0155,95.0:0.0171,99.0:0.0184,99.9:0.0187qps means queries per second, and is the main results we are interested in, the more the better.Running:produces a tiny ImageNet subset with 8 images under
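As a sanity check on those fields: qps in the SingleStream scenario appears to be simply the number of queries divided by the total time, and tiles looks like a list of latency percentiles. A minimal sketch, with made-up per-query latencies just for illustration:
import numpy as np

# Hypothetical per-query latencies in seconds, just to illustrate the fields;
# the real values are whatever LoadGen measured for the 8 queries.
latencies = [0.0129, 0.0137, 0.0155, 0.0171, 0.0184, 0.0120, 0.0130, 0.0138]

# qps = queries / total time: 8 / 0.136 ~= 58.8, matching the result line above.
print('qps ~=', 8 / 0.136)

# The "tiles" field looks like latency percentiles over the individual queries.
for q in (50, 80, 90, 95, 99, 99.9):
    print(q, round(float(np.percentile(latencies, q)), 4))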
Running:
tools/make_fake_imagenet.sh
produces a tiny ImageNet subset with 8 images under fake_imagenet/. fake_imagenet/val_map.txt contains:
val/800px-Porsche_991_silver_IAA.jpg 817
val/512px-Cacatua_moluccensis_-Cincinnati_Zoo-8a.jpg 89
val/800px-Sardinian_Warbler.jpg 13
val/800px-7weeks_old.JPG 207
val/800px-20180630_Tesla_Model_S_70D_2015_midnight_blue_left_front.jpg 817
val/800px-Welsh_Springer_Spaniel.jpg 156
val/800px-Jammlich_crop.jpg 233
val/782px-Pumiforme.JPG 285
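So each line of val_map.txt appears to be just an image path relative to the data directory followed by a numeric ImageNet class index. A minimal sketch of reading it, assuming that format:
# Read fake_imagenet/val_map.txt, assuming "<relative image path> <class index>" per line.
with open('fake_imagenet/val_map.txt') as f:
    entries = [(path, int(label)) for path, label in (line.split() for line in f)]

for path, label in entries:
    print(path, label)  # e.g. val/800px-Sardinian_Warbler.jpg 13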
TODO: prepare and test on the actual ImageNet validation set. The README just says:
Prepare the imagenet dataset to come.
Since that one is undocumented, let's try the COCO dataset instead, which uses COCO 2017 and is also a bit smaller. Note that this is not part of MLPerf anymore since v2.1: only ImageNet and Open Images are used. But still:
wget https://zenodo.org/record/4735652/files/ssd_mobilenet_v1_coco_2018_01_28.onnx
DATA_DIR_BASE=/mnt/data/coco
export DATA_DIR="${DATADIR_BASE}/val2017-300"
mkdir -p "$DATA_DIR_BASE"
cd "$DATA_DIR_BASE"
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip val2017.zip
unzip annotations_trainval2017.zip
mv annotations val2017
cd -
cd "$(git-toplevel)"
python tools/upscale_coco/upscale_coco.py --inputs "$DATA_DIR_BASE" --outputs "$DATA_DIR" --size 300 300 --format png
cd -
Now:
./run_local.sh onnxruntime mobilenet cpu --accuracy
fails immediately with:
No such file or directory: '/path/to/coco/val2017-300/val_map.txt'
The more plausible looking:
./run_local.sh onnxruntime mobilenet cpu --accuracy --dataset coco-300
first takes a while, most likely to preprocess something, which it does only once, and then fails:
Traceback (most recent call last):
File "/home/ciro/git/inference/vision/classification_and_detection/python/main.py", line 596, in <module>
main()
File "/home/ciro/git/inference/vision/classification_and_detection/python/main.py", line 468, in main
ds = wanted_dataset(data_path=args.dataset_path,
File "/home/ciro/git/inference/vision/classification_and_detection/python/coco.py", line 115, in __init__
self.label_list = np.array(self.label_list)
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (5000, 2) + inhomogeneous part.
TODO!
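The error itself is just NumPy (since 1.24, ragged nested lists no longer silently become object arrays) refusing to build a rectangular array from nested lists whose elements have different lengths. A minimal standalone reproduction; the data below is made up, the real label_list in coco.py presumably holds per-image annotations of varying length:
import numpy as np

# Made-up stand-in for coco.py's label_list: 3 images, each a pair whose first
# element is a list of varying length.
label_list = [
    [[1, 2, 3], 0],
    [[4], 1],
    [[5, 6], 2],
]

# On NumPy >= 1.24 this raises:
# ValueError: setting an array element with a sequence. The requested array
# has an inhomogeneous shape after 2 dimensions. ...
np.array(label_list)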
Video with a solid color:
- 2 second white video:
  ffplay -autoexit -f lavfi -i 'color=white:640x480:d=3,format=rgb24,trim=end=2'
  Also add some audio:
  ffmpeg -lavfi "color=white:640x480:d=3,format=rgb24,trim=end=2[v];sine=f=1000:d=2[a]" -map '[a]' -map '[v]' out.mkv
  TODO how to ffplay the video + audio directly? -map does not seem to work unfortunately.
- 2 second white followed by 2 second black video:
  ffplay -autoexit -f lavfi -i 'color=white:640x480:d=3,format=rgb24,trim=end=2[a];color=black:640x480:d=3,format=rgb24,trim=end=2[b];[a][b]concat=n=2:v=1:a=0'
- bibliography:
Display count in seconds on the video:
- black text on white background. Start from 0 and count up to 2:
ffplay -autoexit -f lavfi -i " color=white:480x480:d=3, format=rgb24, drawtext= fontcolor=black: fontsize=600: text='%{eif\:t\:d}': x=(w-text_w)/2: y=(h-text_h)/2 " - count 0 to 2 with one different sine wave per count:
ffmpeg -lavfi " color=white:480x480:d=3, format=rgb24, drawtext= fontcolor=black: fontsize=600: text='%{eif\:t\:d}': x=(w-text_w)/2: y=(h-text_h)/2[v]; sine=f=500:d=1[a1]; sine=f=1000:d=1[a2]; sine=f=2000:d=1[a3]; [a1][a2][a3]concat=n=3:v=0:a=1[a]; " -map '[v]' -map '[a]' count.mkv - bibliography:
Bibliography:
- ffmpeg.org/ffmpeg-filters.html#Video-Sources: the main section of the documentation listing the various video generators
- stackoverflow.com/questions/11640458/how-can-i-generate-a-video-file-directly-from-an-ffmpeg-filter-with-no-actual-in generically asking how to generate the video without an input video
Most of what follows is part of the Universal Chess Interface. Tested on Ubuntu 22.10, Stockfish 14.1.
After starting stockfish on the command line, the output of d (presumably "display") contains:
+---+---+---+---+---+---+---+---+
| r | n | b | q | k | b | n | r | 8
+---+---+---+---+---+---+---+---+
| p | p | p | p | p | p | p | p | 7
+---+---+---+---+---+---+---+---+
| | | | | | | | | 6
+---+---+---+---+---+---+---+---+
| | | | | | | | | 5
+---+---+---+---+---+---+---+---+
| | | | | | | | | 4
+---+---+---+---+---+---+---+---+
| | | | | | | | | 3
+---+---+---+---+---+---+---+---+
| P | P | P | P | P | P | P | P | 2
+---+---+---+---+---+---+---+---+
| R | N | B | Q | K | B | N | R | 1
+---+---+---+---+---+---+---+---+
a b c d e f g h
Fen: rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1
Key: 8F8F01D4562F59FB
Sweet ASCII art, where:
- Fen: the position in FEN notation
- Key: TODO
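As a reminder of what that FEN line encodes, it is six space-separated fields: piece placement, side to move, castling rights, en passant target square, halfmove clock and fullmove number. A quick sketch (not from the original notes) of pulling them apart:
# Split the FEN that `d` prints into its six standard fields.
fen = 'rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1'
placement, side, castling, en_passant, halfmove, fullmove = fen.split()
print(side)        # 'w': white to move
print(castling)    # 'KQkq': both sides may still castle on either wing
print(en_passant)  # '-': no en passant capture available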
Move the white king's pawn from e2 to e4:
position startpos moves e2e4
Then display again:
d
which gives:
+---+---+---+---+---+---+---+---+
| r | n | b | q | k | b | n | r | 8
+---+---+---+---+---+---+---+---+
| p | p | p | p | p | p | p | p | 7
+---+---+---+---+---+---+---+---+
| | | | | | | | | 6
+---+---+---+---+---+---+---+---+
| | | | | | | | | 5
+---+---+---+---+---+---+---+---+
| | | | | P | | | | 4
+---+---+---+---+---+---+---+---+
| | | | | | | | | 3
+---+---+---+---+---+---+---+---+
| P | P | P | P | | P | P | P | 2
+---+---+---+---+---+---+---+---+
| R | N | B | Q | K | B | N | R | 1
+---+---+---+---+---+---+---+---+
a b c d e f g h
Fen: rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq - 0 1
Key: B46022469E3DD31B
so we see that the pawn moved.
Now let's make Stockfish think for one second about the next best move for black:
go movetime 1000
which gives as the last line:
bestmove c7c5 ponder g1f3
TODO:
- what is ponder? Something to do with thinking on the opponent's turn: permanent brain.
- understand the previous lines
To make the move as suggested for black, we have to either repeat the entire sequence of moves:
position startpos moves e2e4 c7c5
d
+---+---+---+---+---+---+---+---+
| r | n | b | q | k | b | n | r | 8
+---+---+---+---+---+---+---+---+
| p | p | | p | p | p | p | p | 7
+---+---+---+---+---+---+---+---+
| | | | | | | | | 6
+---+---+---+---+---+---+---+---+
| | | p | | | | | | 5
+---+---+---+---+---+---+---+---+
| | | | | P | | | | 4
+---+---+---+---+---+---+---+---+
| | | | | | | | | 3
+---+---+---+---+---+---+---+---+
| P | P | P | P | | P | P | P | 2
+---+---+---+---+---+---+---+---+
| R | N | B | Q | K | B | N | R | 1
+---+---+---+---+---+---+---+---+
a b c d e f g h
Fen: rnbqkbnr/pp1ppppp/8/2p5/4P3/8/PPPP1PPP/RNBQKBNR w KQkq - 0 2
Key: 4CA78BCE9C2980B0
or alternatively we could also use the previous FEN notation as a starting point:
position fen rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq - 0 1 moves c7c5
Note how the Universal Chess Interface is very simple: we just load a state and then decide what to do next for that one state. The engine holds exactly one state at a time, and you can't even modify it differentially without loading a new one from scratch.
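Because the protocol is just line-based text over stdin/stdout, it is also easy to script. A minimal sketch in Python, not part of the original notes, assuming a stockfish binary on the PATH; it just replays the same position/go/bestmove exchange:
import subprocess

# Start Stockfish and talk UCI to it over pipes. Output details vary by version.
p = subprocess.Popen(
    ['stockfish'],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

def send(cmd):
    p.stdin.write(cmd + '\n')
    p.stdin.flush()

send('uci')
send('isready')
# Load a single state, as above: 1. e4 c5, white to move.
send('position startpos moves e2e4 c7c5')
send('go movetime 1000')

# Skip the id/option/info lines until the engine announces its best move.
for line in p.stdout:
    if line.startswith('bestmove'):
        print(line.strip())  # e.g. "bestmove g1f3 ponder d7d6" (engine dependent)
        break

send('quit')
p.wait()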
Looks interesting. It seems to abstract the part about the client messaging the backend, focusing on being able to easily plug in a number of front-end web frameworks to manage client state.
Uses Socket.IO, which allows the client JavaScript to register callbacks when data is updated, e.g. their default chat app does:
client.service('messages').on('created', addMessage);
so that messages appear immediately as they are sent.
Their standard template from feathers generate app on @feathersjs/cli@4.5.0 includes:
which looks promising! But it is in itself a completely boring app with a single splash page and no database interaction, so not a good showcase. The actual showcase app is feathersjs/feathers-chat, which is their hello world: docs.feathersjs.com/guides/frameworks.html#the-feathers-chat lists a few versions of the chat app.
- Front-end web framework: not built-in on the generator unfortunately, so they don't have a default template for one, but there are some sample repos pointed to from the documentation, and they did work out-of-box.
And there is no official example of the chat app that is immediately deployable to Heroku: FeathersJS Heroku deployment, all setups require thinking.
Created by Dr. Rod Nave from Georgia State University, where he worked from 1968 after his post-doc in North Wales on molecular spectroscopy.
While there is value to that website, it always feels like it falls a bit short: too "encyclopedic" and too little "tutorial-like". Most notably, it has very little on the history of physics/experiments.
Ciro Santilli likes this Rod: he really practices some good braindumping. Just look at how he documented his life in the pre-social-media Internet dark ages: hyperphysics.phy-astr.gsu.edu/Nave-html/nave.html
The website evolved from a HyperCard stack, as suggested by the website name, mentioned at: hyperphysics.phy-astr.gsu.edu/hbase/index.html.
Shame he was too old for CC BY-SA, see "Please respect the Copyright" at hyperphysics.phy-astr.gsu.edu/hbase/index.html.
exhibits.library.gsu.edu/kell/exhibits/show/nave-kell-hall/capturing-a-career has some good photo selection focused on showing the department, and has an interview.
Kell Hall is a GSU building that was demolished in 2019: atlanta.curbed.com/2020/1/31/21115980/gsu-georgia-state-atlanta-kell-hall-demolition-park-library-north
The most powerful GUI file manager ever?? Infinite configurability??
Ciro Santilli wasted some time on it before he gave up on file managers altogether and started using only the CLI with a few aliases.
Bibliography of the bibliography:
- physics.stackexchange.com/questions/8441/what-is-a-complete-book-for-introductory-quantum-field-theory "What is a complete book for introductory quantum field theory?"
- www.quora.com/What-is-the-best-book-to-learn-quantum-field-theory-on-your-own on Quora
- www.amazon.co.uk/Lectures-Quantum-Field-Theory-Ashok-ebook/dp/B07CL8Y3KY
Recommendations by friend P. C.:
- The Global Approach to Quantum Field Theory
- Lecture Notes | Geometry and Quantum Field Theory | Mathematics ocw.mit.edu/courses/mathematics/18-238-geometry-and-quantum-field-theory-fall-2002/lecture-notes/
- Towards the mathematics of quantum field theory (Frederic Paugam)
- Path Integrals in Quantum Mechanics (J. Zinn–Justin)
- Quantum Theory for Mathematicians (B. Hall)
- Quantum Field Theory and the Standard Model (Schwartz)
- The Algebra of Grand Unified Theories (John C. Baez)
- Quantum Field Theory for the Gifted Amateur by Tom Lancaster (2015)
Pinned article: Introduction to the OurBigBook Project
Welcome to the OurBigBook Project! Our goal is to create the perfect publishing platform for STEM subjects, and get university-level students to write the best free STEM tutorials ever.
Everyone is welcome to create an account and play with the site: ourbigbook.com/go/register. We believe that students themselves can write amazing tutorials, but teachers are welcome too. You can write about anything you want, it doesn't have to be STEM or even educational. Silly test content is very welcome and you won't be penalized in any way. Just keep it legal!
Intro to OurBigBook. Source.
We have two killer features:
- topics: topics group articles by different users with the same title, e.g. here is the topic for the "Fundamental Theorem of Calculus": ourbigbook.com/go/topic/fundamental-theorem-of-calculus
Articles of different users are sorted by upvote within each article page. This feature is a bit like:
- a Wikipedia where each user can have their own version of each article
- a Q&A website like Stack Overflow, where multiple people can give their views on a given topic, and the best ones are sorted by upvote. Except you don't need to wait for someone to ask first, and any topic goes, no matter how narrow or broad
This feature makes it possible for readers to find better explanations of any topic created by other writers. And it allows writers to create an explanation in a place that readers might actually find it.
Figure 1. Screenshot of the "Derivative" topic page. View it live at: ourbigbook.com/go/topic/derivative
Video 2. OurBigBook Web topics demo. Source.
- local editing: you can store all your personal knowledge base content locally in a plaintext markup format that can be edited locally and published either:
- to OurBigBook.com to get awesome multi-user features like topics and likes
- as HTML files to a static website, which you can host yourself for free on many external providers like GitHub Pages, and remain in full control
This way you can be sure that even if OurBigBook.com were to go down one day (which we have no plans to do as it is quite cheap to host!), your content will still be perfectly readable as a static site.
Figure 3. Visual Studio Code extension installation.
Figure 4. Visual Studio Code extension tree navigation.
Figure 5. Web editor. You can also edit articles on the Web editor without installing anything locally.
Video 3. Edit locally and publish demo. Source. This shows editing OurBigBook Markup and publishing it using the Visual Studio Code extension.
Video 4. OurBigBook Visual Studio Code extension editing and navigation demo. Source.
- Infinitely deep tables of contents:
All our software is open source and hosted at: github.com/ourbigbook/ourbigbook
Further documentation can be found at: docs.ourbigbook.com
Feel free to reach out to us for any help or suggestions: docs.ourbigbook.com/#contact





