activatedgeek/LeNet-5 Updated +Created
Good packaging! Tested on Ubuntu 22.10:
git clone https://github.com/activatedgeek/LeNet-5
cd LeNet-5
git checkout 95b55a838f9d90536fd3b303cede12cf8b5da47f
virtualenv -p python3 .venv
. .venv/bin/activate

# Their requirements.txt uses >= constraints, and some of the versions
# it pulls in are incompatible with our Ubuntu, so we pin working versions explicitly.
pip install \
  Pillow==6.2.0 \
  numpy==1.24.2 \
  onnx==1.13.1 \
  torch==2.0.0 \
  torchvision==0.15.1 \
  visdom==0.2.4 \
;

time python run.py
This throws a billion exceptions because we didn't start the visdom server, but never mind that.
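Presumably they could be avoided by first starting the visdom server in another terminal (untested here):
python -m visdom.server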
The script runs a fixed 15 epochs.
Output on P51:
real    2m10.262s
user    11m9.771s
sys     0m26.368s
The run also produces a lenet.onnx ONNX file, which is pretty neat, and allows us, for example, to visualize it on Netron:
Figure 1. Netron visualization of the activatedgeek/LeNet-5 ONNX output.
From this we can see the bifurcation in the computational graph, as done in the code at:
output = self.c1(img)
# Bifurcation: the output of c1 feeds two parallel branches.
x = self.c2_1(output)
output = self.c2_2(output)
# The branches are merged back by element-wise addition.
output += x
output = self.c3(output)
This doesn't seem to conform to the original LeNet-5, however?
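In isolation, the pattern looks something like this (a runnable sketch with made-up layer shapes for illustration, not the repository's actual parameters):
import torch
import torch.nn as nn

class BranchMerge(nn.Module):
    # Illustrative shapes only: 6 and 16 channels, 5x5 kernels.
    def __init__(self):
        super().__init__()
        self.c1 = nn.Conv2d(1, 6, kernel_size=(5, 5))
        self.c2_1 = nn.Conv2d(6, 16, kernel_size=(5, 5))
        self.c2_2 = nn.Conv2d(6, 16, kernel_size=(5, 5))
        self.c3 = nn.Conv2d(16, 120, kernel_size=(5, 5))

    def forward(self, img):
        output = self.c1(img)
        x = self.c2_1(output)       # branch 1
        output = self.c2_2(output)  # branch 2
        output += x                 # merge
        return self.c3(output)

print(BranchMerge()(torch.randn(1, 1, 32, 32)).shape)
# torch.Size([1, 120, 20, 20])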
CNN convolution kernels are also learnt Updated +Created
CNN convolution kernels are not hardcoded. They are learnt and optimized via backpropagation: you just specify their size! In PyTorch, for example, you'd just do:
nn.Conv2d(1, 6, kernel_size=(5, 5))
as used for example at: activatedgeek/LeNet-5.
This can also be inferred from: stackoverflow.com/questions/55594969/how-to-visualise-filters-in-a-cnn-with-pytorch where we see that the kernels are not the perfectly regular shapes you'd expect from something hand-coded.
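A minimal sketch (assuming PyTorch; the toy input and optimizer are just for demonstration) showing that the kernel weights are ordinary learnable parameters:
import torch
import torch.nn as nn

conv = nn.Conv2d(1, 6, kernel_size=(5, 5))

# The kernels live in conv.weight: 6 output channels x 1 input channel x 5x5.
print(conv.weight.shape)          # torch.Size([6, 1, 5, 5])
print(conv.weight.requires_grad)  # True: gradients flow into the kernels

# One backpropagation + optimizer step updates the kernels like any other parameter.
optimizer = torch.optim.SGD(conv.parameters(), lr=0.1)
conv(torch.randn(1, 1, 32, 32)).sum().backward()
optimizer.step()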
Netron Updated +Created
ONNX Updated +Created
The most important thing this project provides appears to be the .onnx file format, which represents ANN models, pre-trained or not.
Deep learning frameworks can then output such .onnx files for interchangeability and serialization.
Some examples:
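For instance, with PyTorch something along the following lines produces one (a minimal sketch; the model and output file name are placeholders, not from any particular project):
import torch
import torch.nn as nn

# Placeholder model just to have something to export.
model = nn.Sequential(nn.Conv2d(1, 6, kernel_size=(5, 5)), nn.ReLU())
model.eval()

# The dummy input fixes the input shape recorded in the ONNX graph.
dummy_input = torch.randn(1, 1, 32, 32)
torch.onnx.export(model, dummy_input, "model.onnx")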