By default, the setup runs on the CPU only, not the GPU, as can be seen by running htop. But by the magic of PyTorch, modifying the program to run on the GPU is trivial:
cat << EOF | patch
diff --git a/run.py b/run.py
index 104d363..20072d1 100644
--- a/run.py
+++ b/run.py
@@ -24,7 +24,8 @@ data_test = MNIST('./data/mnist',
 data_train_loader = DataLoader(data_train, batch_size=256, shuffle=True, num_workers=8)
 data_test_loader = DataLoader(data_test, batch_size=1024, num_workers=8)

-net = LeNet5()
+device = 'cuda'
+net = LeNet5().to(device)
 criterion = nn.CrossEntropyLoss()
 optimizer = optim.Adam(net.parameters(), lr=2e-3)

@@ -43,6 +44,8 @@ def train(epoch):
     net.train()
     loss_list, batch_list = [], []
     for i, (images, labels) in enumerate(data_train_loader):
+        labels = labels.to(device)
+        images = images.to(device)
         optimizer.zero_grad()

         output = net(images)
@@ -71,6 +74,8 @@ def test():
     total_correct = 0
     avg_loss = 0.0
     for i, (images, labels) in enumerate(data_test_loader):
+        labels = labels.to(device)
+        images = images.to(device)
         output = net(images)
         avg_loss += criterion(output, labels).sum()
         pred = output.detach().max(1)[1]
@@ -84,7 +89,7 @@ def train_and_test(epoch):
     train(epoch)
     test()

-    dummy_input = torch.randn(1, 1, 32, 32, requires_grad=True)
+    dummy_input = torch.randn(1, 1, 32, 32, requires_grad=True).to(device)
     torch.onnx.export(net, dummy_input, "lenet.onnx")

     onnx_model = onnx.load("lenet.onnx")
EOF
and leads to a faster runtime, with less user time, as we now spend more time on the GPU than on the CPU:
real    1m27.829s
user    4m37.266s
sys     0m27.562s
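Note that the hard-coded device = 'cuda' will crash on a machine without a GPU. The usual device-agnostic idiom checks torch.cuda.is_available() first; a minimal self-contained sketch (with a stand-in module in place of the LeNet5 from run.py):

import torch
import torch.nn as nn

# Pick the GPU when available, otherwise fall back to the CPU, so the
# same script runs unmodified on machines with and without CUDA.
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Stand-in for the LeNet5 of run.py: any nn.Module moves the same way.
net = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 10)).to(device)

# Input tensors must be moved to the same device as the model's parameters.
images = torch.randn(4, 1, 32, 32, device=device)
output = net(images)
print(output.device)  # cuda:0 on a GPU machine, cpu otherwise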
"Magic star" can refer to several different concepts, depending on the context. Here are a few possible interpretations: 1. **Mathematics**: In number theory, a magic star is similar to a magic square. It consists of points arranged in a star shape where the sums of the numbers in a specific formation (such as lines or diagonals) all yield the same total.
Bdellium is a gum resin that is obtained from certain trees in the genus Commiphora, which are part of the Burseraceae family. The term "bdellium" is sometimes used to refer to a resin similar to myrrh. Historically, bdellium has been mentioned in ancient texts, including the Bible, where it is described as a valuable substance. The resin has been used for various purposes, including as an ingredient in perfumes, incense, and traditional medicine.
Deep learning by Ciro Santilli 37 Updated 2025-07-16
Deep learning is the name that artificial neural networks basically converged to in the 2010s/2020s.
It is a bit of an unfortunate name, as it suggests something like "deep understanding" and even reminds one of AGI, which it almost certainly will not attain on its own. But at least it sounds good.
Ciro Santilli once visited the chemistry department of a world leading university, and the chemists there were obsessed with NMR. They had small benchtop NMR machines. They had larger machines. They had a room full of huge machines. They had them in corridors and on desks. Chemists really love that stuff. More precisely, these machines are used for NMR spectroscopy, which helps identify what a sample is made of.
Basically measures the concentration of certain isotopes in a region of space.
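More precisely, each isotope precesses at its Larmor frequency f = γB/(2π), proportional to the applied field B, and the machine looks for resonance at that frequency. A quick sanity check in Python, using the well-known ¹H gyromagnetic ratio of roughly 42.58 MHz/T:

# Larmor frequency: the precession (and resonance) frequency of a nuclear
# spin in a magnetic field of strength B tesla.
GAMMA_OVER_2PI_1H = 42.577  # MHz per tesla, for the hydrogen-1 nucleus

def larmor_mhz(field_tesla):
    return GAMMA_OVER_2PI_1H * field_tesla

print(larmor_mhz(7.05))  # ~300 MHz: the "300" of spectrometers like the Bruker 300 below
print(larmor_mhz(1.5))   # ~64 MHz: a typical clinical MRI field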
Video 1. Introduction to NMR by Allery Chemistry. Source.
Video 2. How to Prepare and Run a NMR Sample by University of Bath (2017). Source. This is a more direct howto, cool to see. Uses a Bruker Corporation 300. They have a robotic arm add-on. Shows the spectrum on a computer screen at the end. Shame there's no molecule identification after that!
Video 3. Source. This video has the merit of showing real equipment usage, including sample preparation. It says clearly that NMR is the most important way to identify organic compounds.
Video 4. Introductory NMR & MRI: Video 01 by Magritek (2009). Source. Precession and Resonance. Precession has a natural frequency that is independent of the angle of the wheel.
Video 5. Introductory NMR & MRI: Video 02 by Magritek (2009). Source. The influence of temperature on spin statistics: at 300 K, the numbers of up and down spins are very similar; as you reduce the temperature, more and more spins settle into the lower energy state.
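The populations follow the Boltzmann distribution: the ratio of upper-state to lower-state spins is exp(-ΔE/kT) with ΔE = hf. A quick calculation, assuming a 300 MHz (roughly 7 T) proton machine, shows just how small the excess is at room temperature:

import math

h = 6.62607015e-34  # Planck constant, J*s
k = 1.380649e-23    # Boltzmann constant, J/K

def excess_fraction(freq_hz, temp_k):
    """Fraction of spins in the lower energy state in excess of a 50/50 split."""
    ratio = math.exp(-h * freq_hz / (k * temp_k))  # N_upper / N_lower
    return (1 - ratio) / (1 + ratio)

print(excess_fraction(300e6, 300))  # ~2.4e-5 at room temperature
print(excess_fraction(300e6, 4))    # ~1.8e-3 at liquid helium temperature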
Video 6. Introductory NMR & MRI: Video 03 by Magritek (2009). Source.
Video 7. NMR spectroscopy visualized by ScienceSketch (2020). Source. Decent explanation with animation. Could go into the numbers a bit more, but OK.
X-ray diffraction by Ciro Santilli 37 Updated 2025-07-16
Often used as a synonym for X-ray crystallography, or to refer more specifically to the diffraction part of the experiment (thus excluding sample preparation and data processing).
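The diffraction part is governed by Bragg's law, nλ = 2d sin(θ): X-rays reflecting off successive crystal planes interfere constructively when the path difference is a whole number of wavelengths. A quick check of the numbers, assuming the common Cu Kα laboratory source (λ ≈ 1.5406 Å):

import math

wavelength = 1.5406  # angstrom, Cu K-alpha radiation

def bragg_angle_deg(d_spacing_angstrom, n=1):
    """Angle theta (degrees) of the n-th order reflection off planes spaced d apart."""
    return math.degrees(math.asin(n * wavelength / (2 * d_spacing_angstrom)))

print(bragg_angle_deg(3.14))  # ~14.2 degrees for the silicon (111) plane spacing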
Kaggle by Ciro Santilli 37 Updated 2025-07-16
To be fair, this is one of the least bad ones.

Pinned article: Introduction to the OurBigBook Project

Welcome to the OurBigBook Project! Our goal is to create the perfect publishing platform for STEM subjects, and get university-level students to write the best free STEM tutorials ever.
Everyone is welcome to create an account and play with the site: ourbigbook.com/go/register. We believe that students themselves can write amazing tutorials, but teachers are welcome too. You can write about anything you want; it doesn't have to be STEM or even educational. Silly test content is very welcome, and you won't be penalized in any way. Just keep it legal!
We have three killer features:
  1. topics: topics group articles by different users with the same title, e.g. here is the topic for the "Fundamental Theorem of Calculus": ourbigbook.com/go/topic/fundamental-theorem-of-calculus
    Articles by different users are sorted by upvote within each topic page. This feature is a bit like:
    • a Wikipedia where each user can have their own version of each article
    • a Q&A website like Stack Overflow, where multiple people can give their views on a given topic, and the best ones are sorted by upvote. Except you don't need to wait for someone to ask first, and any topic goes, no matter how narrow or broad
    This feature makes it possible for readers to find better explanations of any topic created by other writers. And it allows writers to create explanations in a place where readers might actually find them.
    Figure 1. Screenshot of the "Derivative" topic page. View it live at: ourbigbook.com/go/topic/derivative
  2. local editing: you can store all your personal knowledge base content locally in a plaintext markup format that can be edited locally and published either:
    • to OurBigBook.com, to get multi-user features such as topics and likes
    • as HTML files for a static website that you can host yourself
    This way you can be sure that even if OurBigBook.com were to go down one day (which we have no plans to do, as it is quite cheap to host!), your content will still be perfectly readable as a static site.
    Figure 5. You can also edit articles in the Web editor without installing anything locally.
    Video 3. Edit locally and publish demo. Source. This shows editing OurBigBook Markup and publishing it using the Visual Studio Code extension.
  3. infinitely deep tables of contents:
    Figure 6. Dynamic article tree with infinitely deep table of contents.
    Descendant pages can also show up as toplevel pages, e.g.: ourbigbook.com/cirosantilli/chordate-subclade
All our software is open source and hosted at: github.com/ourbigbook/ourbigbook
Further documentation can be found at: docs.ourbigbook.com
Feel free to reach out to us for any help or suggestions: docs.ourbigbook.com/#contact