Academic publishing is broken Updated 2025-07-16
One of the most beautiful things is how they paywall even public domain works. E.g. here: www.nature.com/articles/119558a0 was published in 1927, and is therefore in the public domain as of 2023. But it is of course just paywalled as usual throughout 2023. There is zero incentive for them to open anything up.
Video 1.
What they don't tell you about academic publishing by Andy Stapleton (2021)
Source.
Video 2.
The publishing scandal happening right now by Andy Stapleton (2023)
Source. TODO: get the name of the academic who quit.
A Chinese Ghost Story Updated 2025-07-16
OK, the Good film tag might imply that you are a Sinophile.
The adaptation is very loose.
Figure 1.
Poster of A Chinese Ghost Story
.
A Chinese Ghost Story sutra Updated 2025-07-16
Appears to be a small section from the Diamond Sutra. TODO find or create a video of it, it is just too awesome.
AC Josephson effect Updated 2025-07-16
This is what happens when you apply a DC voltage across a Josephson junction.
It is called "AC effect" because when we apply a DC voltage, it produces an alternating current on the device.
By looking at the Josephson equations, we see that if $V$ is a positive constant, then the phase $\varphi$ just increases linearly without bound.
Therefore, from the first equation:
$$I(t) = I_c \sin(\varphi(t))$$
we see that the current will just vary sinusoidally between $-I_c$ and $I_c$.
This means that we can use a Josephson junction as a perfect voltage-to-frequency converter.
Wikipedia mentions that this frequency is $2eV/h$, about 483.6 GHz per mV, so it is very very high, and we are not able to view individual points of the sine curve separately with our instruments.
Also it is likely not going to be very useful for many practical applications in this mode.
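As a quick sanity check on the numbers, here is a minimal numerical sketch (not from the original text; the device values for $I_c$ and $V$ are made up) of the current predicted by the two Josephson equations under a constant applied voltage:
import numpy as np
from scipy.constants import e, h, hbar

I_c = 1e-6  # critical current, 1 uA (made-up device value)
V = 1e-6    # applied DC voltage, 1 uV (made-up value)

# Second Josephson equation: dphi/dt = 2*e*V/hbar, so with constant V
# the phase grows linearly in time and the current oscillates at f = 2*e*V/h.
f = 2 * e * V / h
print(f"Josephson frequency: {f / 1e6:.1f} MHz")  # ~483.6 MHz for 1 uV

t = np.linspace(0, 5 / f, 1000)  # five oscillation periods
phi = 2 * e * V * t / hbar
# First Josephson equation: the current swings between -I_c and +I_c.
I = I_c * np.sin(phi)
print(I.min(), I.max())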
Figure 1. . Source.
Voltage is horizontal, current vertical. The vertical bar in the middle is the effect of interest: the current is going up and down very quickly between $-I_c$ and $I_c$, the Josephson current of the device. Because it is too quick for the oscilloscope, we just see a solid vertical bar.
The non-vertical curves on the right and left are just other effects we are not interested in.
TODO what does it mean that there is no line at all near the central vertical line? What happens at those voltages?
Video 1.
Superconducting Transition of Josephson junction by Christina Wicker (2016)
Source. Amazing video that presumably shows the screen of a digital oscilloscope doing a voltage sweep as temperature is reduced and superconductivity is reached.
Figure 2. . So it appears that there is a zero current between and . Why doesn't it show up on the oscilloscope sweeps, e.g. Video 1. "Superconducting Transition of Josephson junction by Christina Wicker (2016)"?
Ackermann function Updated 2025-07-16
To get an intuition for it, see the sample computation at: en.wikipedia.org/w/index.php?title=Ackermann_function&oldid=1170238965#TRS,_based_on_2-ary_function where in this context. From this, we immediately get the intuition that these functions are recursive somehow.
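For reference, a minimal Python implementation of the usual 2-argument form $A(m, n)$ (not part of the original text) looks like this:
def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m - 1, n - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
# Don't try much larger inputs: both the values and the recursion depth explode,
# e.g. ackermann(4, 2) already has about 20,000 decimal digits.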
Acousto-optic modulator Updated 2025-07-16
An optical multiplexer!
Video 1.
Control Light with Sound! by Les' Lab (2021)
Source.
activatedgeek/LeNet-5 Updated 2025-07-16
This repository contains a very clean minimal PyTorch implementation of LeNet-5 for MNIST.
It trains the LeNet-5 neural network on the MNIST dataset from scratch, and afterwards you can give it newly hand-written digits 0 to 9 and it will hopefully recognize the digit for you.
Ciro Santilli created a small fork of this repo at lenet, adding better automation.
Install on Ubuntu 24.10 with:
sudo apt install protobuf-compiler
git clone https://github.com/activatedgeek/LeNet-5
cd LeNet-5
git checkout 95b55a838f9d90536fd3b303cede12cf8b5da47f
virtualenv -p python3 .venv
. .venv/bin/activate
pip install \
  Pillow==6.2.0 \
  numpy==1.24.2 \
  onnx==1.13.1 \
  torch==2.0.0 \
  torchvision==0.15.1 \
  visdom==0.2.4 \
;
We use our own pip install because their requirements.txt uses >= instead of ==, making it unpredictable whether things will work or not.
On Ubuntu 22.10 it was instead:
pip install \
  Pillow==6.2.0 \
  numpy==1.26.4 \
  onnx==1.17.0 \
  torch==2.6.0 \
  torchvision==0.21.0 \
  visdom==0.2.4 \
;
Then run with:
python run.py
This script:
  • trains for a fixed 15 epochs on the training data
  • then uses the trained net from memory to check accuracy on the test data
  • then also produces a lenet.onnx ONNX file which contains the trained network, nice! (see the inference sketch just below)
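As a hypothetical usage sketch (not part of the original repo; it needs pip install onnxruntime, which is not in the pinned requirements above), the exported lenet.onnx can be run on a single 32x32 grayscale image like this:
import numpy as np
import onnxruntime

sess = onnxruntime.InferenceSession("lenet.onnx")
input_name = sess.get_inputs()[0].name

# Dummy input with the shape used by the export: (batch, channel, height, width).
img = np.random.rand(1, 1, 32, 32).astype(np.float32)

(logits,) = sess.run(None, {input_name: img})
print("predicted digit:", int(logits.argmax()))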
It throws a billion exceptions because we didn't start the Visdom server, but everything works nevertheless; we just don't get a visualization of the training.
The terminal outputs lines such as:
Train - Epoch 1, Batch: 0, Loss: 2.311587
Train - Epoch 1, Batch: 10, Loss: 2.067062
Train - Epoch 1, Batch: 20, Loss: 0.959845
...
Train - Epoch 1, Batch: 230, Loss: 0.071796
Test Avg. Loss: 0.000112, Accuracy: 0.967500
...
Train - Epoch 15, Batch: 230, Loss: 0.010040
Test Avg. Loss: 0.000038, Accuracy: 0.989300
And the runtime on Ubuntu 22.10, P51 was:
real    2m10.262s
user    11m9.771s
sys     0m26.368s
One of the benefits of the ONNX output is that we can nicely visualize the neural network on Netron:
Figure 1.
Netron visualization of the activatedgeek/LeNet-5 ONNX output
. From this we can see the bifurcation on the computational graph as done in the code at:
output = self.c1(img)
x = self.c2_1(output)
output = self.c2_2(output)
output += x
output = self.c3(output)
This doesn't seem to conform to the original LeNet-5 however?
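To reproduce such a visualization locally, one option (not covered in the original text; this assumes the netron pip package and its Python API) is:
import netron

# Serves a browser-based viewer of the exported network on localhost.
netron.start("lenet.onnx")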
activatedgeek/LeNet-5 run on GPU Updated 2025-07-16
By default, the setup runs on CPU only, not GPU, as could be seen by running htop. But by the magic of PyTorch, modifying the program to run on the GPU is trivial:
cat << EOF | patch
diff --git a/run.py b/run.py
index 104d363..20072d1 100644
--- a/run.py
+++ b/run.py
@@ -24,7 +24,8 @@ data_test = MNIST('./data/mnist',
 data_train_loader = DataLoader(data_train, batch_size=256, shuffle=True, num_workers=8)
 data_test_loader = DataLoader(data_test, batch_size=1024, num_workers=8)

-net = LeNet5()
+device = 'cuda'
+net = LeNet5().to(device)
 criterion = nn.CrossEntropyLoss()
 optimizer = optim.Adam(net.parameters(), lr=2e-3)

@@ -43,6 +44,8 @@ def train(epoch):
     net.train()
     loss_list, batch_list = [], []
     for i, (images, labels) in enumerate(data_train_loader):
+        labels = labels.to(device)
+        images = images.to(device)
         optimizer.zero_grad()

         output = net(images)
@@ -71,6 +74,8 @@ def test():
     total_correct = 0
     avg_loss = 0.0
     for i, (images, labels) in enumerate(data_test_loader):
+        labels = labels.to(device)
+        images = images.to(device)
         output = net(images)
         avg_loss += criterion(output, labels).sum()
         pred = output.detach().max(1)[1]
@@ -84,7 +89,7 @@ def train_and_test(epoch):
     train(epoch)
     test()

-    dummy_input = torch.randn(1, 1, 32, 32, requires_grad=True)
+    dummy_input = torch.randn(1, 1, 32, 32, requires_grad=True).to(device)
     torch.onnx.export(net, dummy_input, "lenet.onnx")

     onnx_model = onnx.load("lenet.onnx")
EOF
and leads to a faster runtime, with less user time, as now we are spending more time on the GPU than on the CPU:
real    1m27.829s
user    4m37.266s
sys     0m27.562s
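As a side note (not part of the original patch), a more portable variant of the device line would fall back to the CPU when no CUDA device is present:
import torch

# Pick the GPU when available, otherwise stay on the CPU.
device = 'cuda' if torch.cuda.is_available() else 'cpu'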
