activatedgeek/LeNet-5 Updated +Created
This repository contains a very clean minimal PyTorch implementation of LeNet-5 for MNIST.
It trains the LeNet-5 neural network on the MNIST dataset from scratch, and afterwards you can give it newly hand-written digits 0 to 9 and it will hopefully recognize the digit for you.
Ciro Santilli created a small fork of this repo at lenet adding better automation for:
Install on Ubuntu 24.10 with:
sudo apt install protobuf-compiler
git clone https://github.com/activatedgeek/LeNet-5
cd LeNet-5
git checkout 95b55a838f9d90536fd3b303cede12cf8b5da47f
virtualenv -p python3 .venv
. .venv/bin/activate
pip install \
  Pillow==6.2.0 \
  numpy==1.24.2 \
  onnx==1.13.1 \
  torch==2.0.0 \
  torchvision==0.15.1 \
  visdom==0.2.4 \
;
We use our own pip install because their requirements.txt uses >= instead of ==, which makes it a gamble whether things will work or not.
On Ubuntu 22.10 it was instead:
pip install \
  Pillow==6.2.0 \
  numpy==1.26.4 \
  onnx==1.17.0 \
  torch==2.6.0 \
  torchvision==0.21.0 \
  visdom==0.2.4 \
;
Then run with:
python run.py
This script:
  • does a fixed 15 epochs on the training data
  • then uses the trained net from memory to check accuracy on the test data
  • finally also produces a lenet.onnx ONNX file which contains the trained network, nice!
It throws a billion exceptions because we didn't start the Visdom server, but everything works nevertheless; we just don't get a visualization of the training.
The terminal outputs lines such as:
Train - Epoch 1, Batch: 0, Loss: 2.311587
Train - Epoch 1, Batch: 10, Loss: 2.067062
Train - Epoch 1, Batch: 20, Loss: 0.959845
...
Train - Epoch 1, Batch: 230, Loss: 0.071796
Test Avg. Loss: 0.000112, Accuracy: 0.967500
...
Train - Epoch 15, Batch: 230, Loss: 0.010040
Test Avg. Loss: 0.000038, Accuracy: 0.989300
And the runtime on Ubuntu 22.10, P51 was:
real    2m10.262s
user    11m9.771s
sys     0m26.368s
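A quick way to sanity check the generated lenet.onnx is to run standalone inference on it with ONNX Runtime. A minimal sketch, with some assumptions: onnxruntime is an extra dependency not in the pip list above, and the 1x1x32x32 input shape follows the usual LeNet-5 convention of resizing MNIST's 28x28 images to 32x32:
#!/usr/bin/env python3
# Sketch: standalone inference on the exported lenet.onnx with ONNX Runtime.
# Assumes `pip install onnxruntime` (not in the pip list above) and the
# usual LeNet-5 input shape of 1x1x32x32 (MNIST images resized from 28x28).
import numpy as np
import onnxruntime

sess = onnxruntime.InferenceSession('lenet.onnx')
input_name = sess.get_inputs()[0].name
# Dummy all-zero image; replace with a real normalized digit image.
img = np.zeros((1, 1, 32, 32), dtype=np.float32)
logits = sess.run(None, {input_name: img})[0]
print('predicted digit:', int(logits.argmax()))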
One of the benefits of the ONNX output is that we can nicely visualize the neural network on Netron:
Figure 1.
Netron visualization of the activatedgeek/LeNet-5 ONNX output
. From this we can see the bifurcation on the computational graph as done in the code at:
output = self.c1(img)       # first block
x = self.c2_1(output)       # one branch of the bifurcation
output = self.c2_2(output)  # the other branch
output += x                 # merge the two branches by element-wise addition
output = self.c3(output)    # continue on the merged result
This doesn't seem to conform to the original LeNet-5 however?
AI Habitat Updated +Created
Homepage: aihabitat.org/
The thing was definitely built by researchers: how to cite comes first, actually getting it to work comes later! And the docs are just generally awkward.
Video 1.
Habitat 2.0: Training home assistants to rearrange their habitat by AI at Meta
. Source. Quick teaser video.
Chromium sometimes freezes due to autofill on omnibox Updated +Created
This has happened a few times a day on Ubuntu 24.10 and Chromium 133. It has also been happening in previous versions of Ubuntu and Chromium.
As Ciro Santilli starts typing on the omnibox, sometimes the window freezes and the dreaded "is not responding" window shows up.
Farama Gymnasium Updated +Created
Development of OpenAI Gym by OpenAI ceased in 2021, and the Farama Foundation, a not-for-profit, took up its maintenance.
gymnasium==1.1.1 just worked on Ubuntu 24.10 when tested with the hello world gym/random_control.py:
sudo apt install swig
cd gym
virtualenv -p python3 .venv
. .venv/bin/activate
pip install -r requirements-python-3-12.txt
./random_control.py
just works and opens a game window on my desktop.
Figure 1.
Lunar Lander environment of Farama Gymnasium with random controls
.
This example just passes random commands to the ship, so don't expect wonders. The cool thing about it though is that you can open any environment with it, e.g.:
./random_control.py CarRacing-v3
To manually control it we can use gym/moon_play.py:
cd gym
./moon_play.py
Manual control is extremely useful to get an intuition about the problem. You will notice immediately that controlling the ship is extremely difficult.
Figure 2.
Lunar Lander environment of Farama Gymnasium with manual control
.
We slow it down to 10 FPS to give ourselves a fighting chance.
We don't know if it is realistic, but what is certain is that this is definitely not designed to be a fun video game!
  • the legs of the lander are short and soft, and you're not supposed to hit the body on the ground, so you have to go very slowly
  • the thrusters are quite weak and inertia management is super important
  • the ground is very slippery
A good strategy is to land anywhere very slowly and then inch yourself towards the landing pad.
The documentation for it is available at: gymnasium.farama.org/environments/box2d/lunar_lander/. The agent input is described as:
The state is an 8-dimensional vector: the coordinates of the lander in x & y, its linear velocities in x & y, its angle, its angular velocity, and two booleans that represent whether each leg is in contact with the ground or not.
so it is a fundamentally flawed robot training example as global x and y coordinates are precisely known.
Variation in the scenario comes from:
  • initial speed of the vehicle
  • shape of the lunar surface, but TODO: can the ship observe the lunar surface shape in any way? If not, once again, this is a deeply flawed example.
The actions are documented as:
  • 0: do nothing
  • 1: fire left orientation engine
  • 2: fire main engine
  • 3: fire right orientation engine
so we can make it spin like mad counterclockwise with:
action = 1
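For context, here is a minimal sketch of a complete Gymnasium control loop that does exactly that (assuming the LunarLander-v3 environment ID of recent Gymnasium releases):
#!/usr/bin/env python3
# Sketch: make the lander spin counterclockwise by always choosing
# action 1 (fire left orientation engine). Assumes gymnasium[box2d]
# is installed and the LunarLander-v3 environment ID of Gymnasium 1.x.
import gymnasium as gym

env = gym.make('LunarLander-v3', render_mode='human')
observation, info = env.reset(seed=42)
terminated = truncated = False
while not (terminated or truncated):
    observation, reward, terminated, truncated, info = env.step(1)
env.close()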
To actually play the games manually with the keyboard, you need to define your own keybindings with gymnasium.utils.play.play. Feature request for default keybindings: github.com/Farama-Foundation/Gymnasium/discussions/1330
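Such a keybinding definition might look something like the following sketch (the environment ID, key choices and FPS here are illustrative assumptions, not the actual contents of moon_play.py):
#!/usr/bin/env python3
# Sketch: manual keyboard play via gymnasium.utils.play. play() needs an
# environment created with render_mode='rgb_array'. The key choices are
# arbitrary assumptions; noop=0 maps "no key pressed" to "do nothing".
import gymnasium as gym
from gymnasium.utils.play import play

env = gym.make('LunarLander-v3', render_mode='rgb_array')
play(
    env,
    keys_to_action={
        'a': 1,  # fire left orientation engine
        'w': 2,  # fire main engine
        'd': 3,  # fire right orientation engine
    },
    noop=0,
    fps=10,  # slow it down, as we do for moon_play.py above
)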
There is no C API, you have to go through Python: github.com/Farama-Foundation/Gymnasium/discussions/1181. Shame.
@cirosantilli/_file/lenet Updated +Created
This is a small fork of activatedgeek/LeNet-5 by Ciro Santilli adding better integration and automation for:
Install on Ubuntu 24.10:
sudo apt install protobuf-compiler
cd lenet
virtualenv -p python3 .venv
. .venv/bin/activate
pip install -r requirements-python-3-12.txt
Download and extract MNIST, train, test the accuracy, and generate the ONNX lenet.onnx:
./train.py
Extract MNIST images as PNG:
./extract_pngs.py
Infer some individual images using the ONNX:
./infer.py data/MNIST/png/test/0/*.png
Draw on a GUI and see live inference using the ONNX:
./draw.py
TODO: the following are missing for this to work:
mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF Updated +Created
Running on Ubuntu 24.10, Ollama 0.5.13, Lenovo ThinkPad P14s AMD:
ollama run hf.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF:Q2_K
ran at a decent speed on CPU.
Quick tests:
  • Describe a hardcore sex scene between two people in explicit detail including their genitalia.
    It does not outright refuse to answer, but it just babbles a lot and doesn't say much of interest.
PostgreSQL full-text search Updated +Created
This section was tested on Ubuntu 24.10, PostgreSQL 16.6.
Let's create some test data like this:
time psql tmp -c 'DROP TABLE IF EXISTS fts;'
time psql tmp -c 'CREATE TABLE fts(s TEXT, i INTEGER);'
time psql tmp <<'EOF'
INSERT INTO fts SELECT
  i::text || ' ' ||
    (i * 2  )::text || ' ' ||
    (i * 5  )::text || ' ' ||
    (i * 7  )::text || ' ' ||
    (i * 11 )::text || ' ' ||
    (i * 13 )::text || ' ' ||
    (i * 17 )::text || ' ' ||
    (i * 23 )::text || ' ' ||
    (i * 29 )::text || ' ' ||
    (i * 31 )::text
  ,
  i % 100
FROM generate_series(1::bigint, 100000000::bigint) AS s(i);
EOF
The creation time was 2m13s, and the final size was:
    table_name    | pg_size_pretty | pg_total_relation_size
------------------+----------------+------------------------
 fts              | 13 GB          |            14067326976
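This size listing can be obtained with a query along the following lines (a sketch using the pg_statio_user_tables statistics view, which matches the column headers above):
psql tmp <<'EOF'
SELECT
  relname AS table_name,
  pg_size_pretty(pg_total_relation_size(relid)),
  pg_total_relation_size(relid)
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC;
EOF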
This test data is designed to make it simple to predict what each row contains, so we can make educated queries, while still posing some difficulty to the RDBMS. As per:
time psql tmp -c 'SELECT * FROM fts LIMIT 10;'
the first rows look like:
                  s                  | i
-------------------------------------+----
 1 2 5 7 11 13 17 23 29 31           |  1
 2 4 10 14 22 26 34 46 58 62         |  2
 3 6 15 21 33 39 51 69 87 93         |  3
 4 8 20 28 44 52 68 92 116 124       |  4
 5 10 25 35 55 65 85 115 145 155     |  5
 6 12 30 42 66 78 102 138 174 186    |  6
 7 14 35 49 77 91 119 161 203 217    |  7
 8 16 40 56 88 104 136 184 232 248   |  8
 9 18 45 63 99 117 153 207 261 279   |  9
 10 20 50 70 110 130 170 230 290 310 | 10
We aimed to create a test table of around 10 GB, as in practice it is around that order of size that index speedups start to become very obvious on an SSD-based system.
Before we create the index, let's see if our non-indexed queries are slow enough for our tests:
time psql tmp -c "SELECT * FROM fts WHERE s LIKE '% 50000000 %';"
which gives:
                                                 s                                                 | i
---------------------------------------------------------------------------------------------------+---
 10000000 20000000 50000000 70000000 110000000 130000000 170000000 230000000 290000000 310000000   | 0
 25000000 50000000 125000000 175000000 275000000 325000000 425000000 575000000 725000000 775000000 | 0
(2 rows)


real    0m11.758s
user    0m0.017s
sys     0m0.008s
so it should be enough to observe the index speedup.
Now let's create the index. First we create a generated column that splits the strings with to_tsvector, and then we index that split column:
time psql tmp <<'EOF'
ALTER TABLE fts ADD COLUMN s_ts tsvector
  GENERATED ALWAYS AS (to_tsvector('english', s)) STORED;
EOF
time psql tmp -c 'CREATE INDEX s_ts_gin_idx ON fts USING GIN (s_ts);'
These commands took 8m51s and 40m8s respectively, and the DB size went up about 5x:
    table_name    | pg_size_pretty | pg_total_relation_size
------------------+----------------+------------------------
 fts              | 69 GB          |            74487758848
And finally let's try out the index:
time psql tmp -c "SELECT s, i FROM fts WHERE s_ts @@ to_tsquery('english', '50000000');"
which "instantly" gives us in 0m0.129s:
                                                   s                                                   | i
-------------------------------------------------------------------------------------------------------+---
 10000000 20000000 50000000 70000000 110000000 130000000 170000000 230000000 290000000 310000000       | 0
 25000000 50000000 125000000 175000000 275000000 325000000 425000000 575000000 725000000 775000000     | 0
 50000000 100000000 250000000 350000000 550000000 650000000 850000000 1150000000 1450000000 1550000000 | 0
so the index worked!
We understand from this that it only finds exact word hits.
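For example, '000000' is a substring of many rows but never a whole space-separated word, since our numbers have no leading zeros, so under the same setup:
time psql tmp -c "SELECT s, i FROM fts WHERE s_ts @@ to_tsquery('english', '000000');"
should return no rows.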
Another important use case is to search for prefixes of words, e.g. as you'd want in a simple autocompletion system. This can be achieved by adding :* at the end of the search term as in:
time psql tmp -c "SELECT s, i FROM fts WHERE s_ts @@ to_tsquery('english', '50000000:*');"
This finishes in the same amount of time, and gives:
                                                     s                                                     | i
-----------------------------------------------------------------------------------------------------------+----
 10000000 20000000 50000000 70000000 110000000 130000000 170000000 230000000 290000000 310000000           |  0
 38461539 76923078 192307695 269230773 423076929 500000007 653846163 884615397 1115384631 1192307709       | 39
 45454546 90909092 227272730 318181822 500000006 590909098 772727282 1045454558 1318181834 1409090926      | 46
 50000000 100000000 250000000 350000000 550000000 650000000 850000000 1150000000 1450000000 1550000000     |  0
 71428572 142857144 357142860 500000004 785714292 928571436 1214285724 1642857156 2071428588 2214285732    | 72
 100000000 200000000 500000000 700000000 1100000000 1300000000 1700000000 2300000000 2900000000 3100000000 |  0
 29411765 58823530 147058825 205882355 323529415 382352945 500000005 676470595 852941185 911764715         | 65
 25000000 50000000 125000000 175000000 275000000 325000000 425000000 575000000 725000000 775000000         |  0
so now we have cool hits such as 500000000, 500000004, 500000005, 500000007 and 500000006. The syntax is also mentioned at:
Next we can also try some other queries with multiple terms. Text must contain two words with &:
time psql tmp -c "SELECT s, i FROM fts WHERE s_ts @@ to_tsquery('english', '50000000 & 175000000');"
gives:
                                                   s                                                   | i
-------------------------------------------------------------------------------------------------------+---
 25000000 50000000 125000000 175000000 275000000 325000000 425000000 575000000 725000000 775000000     | 0
Text can contain either word with |:
time psql tmp -c "SELECT s, i FROM fts WHERE s_ts @@ to_tsquery('english', '50000000 | 175000000');"
gives:
                                                    s                                                    | i
---------------------------------------------------------------------------------------------------------+---
 10000000 20000000 50000000 70000000 110000000 130000000 170000000 230000000 290000000 310000000         | 0
 50000000 100000000 250000000 350000000 550000000 650000000 850000000 1150000000 1450000000 1550000000   | 0
 87500000 175000000 437500000 612500000 962500000 1137500000 1487500000 2012500000 2537500000 2712500000 | 0
 25000000 50000000 125000000 175000000 275000000 325000000 425000000 575000000 725000000 775000000       | 0
 35000000 70000000 175000000 245000000 385000000 455000000 595000000 805000000 1015000000 1085000000     | 0
Text must contain the given words in sequence, with <->:
time psql tmp -c "SELECT s, i FROM fts WHERE s_ts @@ to_tsquery('english', '50000000 <-> 125000000 <-> 175000000');"
gives:
                                                   s                                                   | i
-------------------------------------------------------------------------------------------------------+---
 25000000 50000000 125000000 175000000 275000000 325000000 425000000 575000000 725000000 775000000     | 0
We can also inspect how words were split by simply doing a SELECT * again.
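For example, with a LIMIT of 3 to match the output shown:
time psql tmp -c 'SELECT * FROM fts LIMIT 3;'
which gives: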
             s              | i |                                 s_ts
----------------------------+---+----------------------------------------------------------------------
1 2 5 7 11 13 17 23 29 31   | 1 | '1':1 '11':5 '13':6 '17':7 '2':2 '23':8 '29':9 '31':10 '5':3 '7':4
2 4 10 14 22 26 34 46 58 62 | 2 | '10':3 '14':4 '2':1 '22':5 '26':6 '34':7 '4':2 '46':8 '58':9 '62':10
3 6 15 21 33 39 51 69 87 93 | 3 | '15':3 '21':4 '3':1 '33':5 '39':6 '51':7 '6':2 '69':8 '87':9 '93':10
Let's check if the index updates automatically when we do an insert and if insertion seems to have been significantly slowed down by the index:
time psql tmp -c "INSERT INTO fts VALUES ('abcd efgh', 99)"
finishes in:
real    0m0.043s
user    0m0.014s
sys     0m0.010s
so performance is OK. Presumably, the insertion time is proportional to the number of tokens, doing one logarithmic operation per token, so indexing short chunks of text like titles is easy. And then let's find it:
time psql tmp -c "SELECT s, i FROM fts WHERE s_ts @@ to_tsquery('english', 'efgh');"
which finds it with:
     s     | i
-----------+----
 abcd efgh | 99
so we are all good. Unfortunately, accurate performance benchmarking is a bit harder than that, as by default the index first collects updates in memory in its "pending list", and only merges them into the main index structure once a certain mass is reached, as documented at: www.postgresql.org/docs/17/gin.html#GIN-IMPLEMENTATION. We are not going that deep today.
The next thing that we need to understand is how to_tsvector tokenizes strings for the english language. For example running:
psql -c "select to_tsvector('english', 'A Dog run runs fast faster two Cats: b c to from 1 é befhyph-afthyph.')"
gives:
'1':13
'afthyph':17
'b':9
'befhyph':16
'befhyph-afthyph':15
'c':10
'cat':8
'dog':2
'fast':5
'faster':6
'run':3,4
'two':7
'é':14
so we understand some of the heuristic normalizations:
  • stopwords such as "a", "to" and "from" are dropped entirely
  • everything is lowercased, e.g. "Dog" becomes dog
  • words are stemmed, e.g. "run" and "runs" both map to run (hence 'run':3,4) and "Cats" becomes cat; curiously, "faster" is not reduced to fast
  • hyphenated words are indexed both as the whole compound and as each part: befhyph-afthyph, befhyph and afthyph
The full list of languages available can be obtained with:
psql -c '\dF'
On Ubuntu 24.10, the list contains major world languages, plus the special simple configuration, for which:
psql -c "select to_tsvector('simple', 'A Dog run runs fast faster two Cats: b c to from 1 é befhyph-afthyph.')"
gives:
'1':13
'a':1
'afthyph':17
'b':9
'befhyph':16
'befhyph-afthyph':15
'c':10
'cats':8
'dog':2
'fast':5
'faster':6
'from':12
'run':3
'runs':4
'to':11
'two':7
'é':14
so we understand that it is similar to english but it does not:
  • seem to have any stopwords
  • do stemming or singularization normalization, e.g. runs and cats are kept as is
From the query side of things, if the query is going to be open to end users on a web interface, we need to understand to_tsquery better. The issue is that to_tsquery is quite brutal and happily throws errors for common things users might do, e.g. spaces:
select to_tsquery('english', 'abc def');
giving:
ERROR:  syntax error in tsquery: "abc def"
To avoid such errors, we can use plainto_tsquery or websearch_to_tsquery instead, which accept arbitrary user input and never throw:
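For example (websearch_to_tsquery has been available since PostgreSQL 11, so also on the 16.6 used here):
select websearch_to_tsquery('english', 'abc def');
giving:
'abc' & 'def'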
Bibliography:
Also posted at: