Amazon EC2 GPU Updated +Created
As of December 2023, the cheapest instance with an Nvidia GPU is g4dn.xlarge, so let's try that out. On that instance, lspci contains:
00:1e.0 3D controller: NVIDIA Corporation TU104GL [Tesla T4] (rev a1)
TODO meaning of "nd"? "n" presumably means Nvidia, but what is the "d"?
Be careful not to confuse it with g4ad.xlarge, which has an AMD GPU instead. TODO meaning of "ad"? "a" presumably means AMD, but what is the "d"?
Some documentation on which GPU is in each instance can be seen at: docs.aws.amazon.com/dlami/latest/devguide/gpu.html (archive), with a list of which GPUs they have at that random point in time. Can the GPU ever change for a given instance name? Likely not. Also as of December 2023 the list is already outdated, e.g. P5 is not shown, though it is mentioned at: aws.amazon.com/ec2/instance-types/p5/
When selecting the instance to launch, the GPU model apparently does not show anywhere on the instance information page, it is so bad!
Also note that this instance has 4 vCPUs, so on a new account you must first make a customer support request to Amazon to increase your limit from the default of 0 to 4, see also: stackoverflow.com/questions/68347900/you-have-requested-more-vcpu-capacity-than-your-current-vcpu-limit-of-0, otherwise instance launch will fail with:
You have requested more vCPU capacity than your current vCPU limit of 0 allows for the instance bucket that the specified instance type belongs to. Please visit aws.amazon.com/contact-us/ec2-request to request an adjustment to this limit.
When starting up the instance, also select:
  • image: Ubuntu 22.04
  • storage size: 30 GB (maximum free tier allowance)
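The same launch could presumably also be done from the AWS CLI, along the lines of the following untested sketch, where ami-XXXXXXXX and mykey are placeholders, not values from these notes:
# Launch a g4dn.xlarge with a 30 GB root volume.
# ami-XXXXXXXX stands for an Ubuntu 22.04 AMI ID in your region.
aws ec2 run-instances \
  --image-id ami-XXXXXXXX \
  --instance-type g4dn.xlarge \
  --key-name mykey \
  --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":30}}]'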
Once you finally manage to SSH into the instance, we first have to install the NVIDIA drivers and reboot:
sudo apt update
sudo apt install nvidia-driver-510 nvidia-utils-510 nvidia-cuda-toolkit
sudo reboot
and now running:
nvidia-smi
shows something like the following. Note that it reports driver version 525 even though we installed the 510 packages, presumably because the Ubuntu packages had since been updated to pull in a newer driver series:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.147.05   Driver Version: 525.147.05   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:00:1E.0 Off |                    0 |
| N/A   25C    P8    12W /  70W |      2MiB / 15360MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
From there basically everything should just work as normal. E.g. we were able to run a CUDA hello world just fine with:
nvcc inc.cu
./a.out
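The notes don't include inc.cu itself; hypothetical contents for a minimal increment-style CUDA hello world that would build and run like that:
cat > inc.cu <<'EOF'
#include <cassert>
#include <cstdio>

// Kernel that increments a single integer on the GPU.
__global__ void inc(int *a) { ++*a; }

int main(void) {
    int a = 1, *da;
    cudaMalloc(&da, sizeof(a));
    cudaMemcpy(da, &a, sizeof(a), cudaMemcpyHostToDevice);
    inc<<<1, 1>>>(da);
    cudaMemcpy(&a, da, sizeof(a), cudaMemcpyDeviceToHost);
    cudaFree(da);
    assert(a == 2);
    printf("hello from the GPU: a = %d\n", a);
}
EOF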
One issue with this setup, besides the time it takes to set up, is that you might also have to pay some network charges as it downloads a bunch of stuff into the instance. We should try out some of the pre-built images. But it is also good to know this pristine setup just in case.
Some stuff we then managed to run:
curl https://ollama.ai/install.sh | sh
/bin/time ollama run llama2 'What is quantum field theory?'
which gave:
0.07user 0.05system 0:16.91elapsed 0%CPU (0avgtext+0avgdata 16896maxresident)k
0inputs+0outputs (0major+1960minor)pagefaults 0swaps
so way faster than on my local desktop CPU, hurray.
After the setup from: askubuntu.com/a/1309774/52975 we were able to run:
head -n1000 pap.txt | ARGOS_DEVICE_TYPE=cuda time argos-translate --from-lang en --to-lang fr > pap-fr.txt
which gave:
77.95user 2.87system 0:39.93elapsed 202%CPU (0avgtext+0avgdata 4345988maxresident)k
0inputs+88outputs (0major+910748minor)pagefaults 0swaps
so only marginally better than on P14s. It would be fun to see how much faster we could make things on a more powerful GPU.
Amazon EC2 hello world Updated +Created
Let's get SSH access, install a package, and run a server.
As of December 2023, on a t2.micro instance, the only type that was part of the free tier at the time, with an advertised 1 vCPU, 1 GiB RAM and 8 GiB of disk for the first 12 months, on Ubuntu 22.04:
$ free -h
               total        used        free      shared  buff/cache   available
Mem:           949Mi       149Mi       210Mi       0.0Ki       590Mi       641Mi
Swap:             0B          0B          0B
$ nproc
1
$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       7.6G  1.8G  5.8G  24% /
To install software:
sudo apt update
sudo apt install cowsay
cowsay asdf
Once HTTP inbound traffic is enabled on security rules for port 80, you can:
while true; do printf "HTTP/1.1 200 OK\r\n\r\n`date`: hello from AWS" | sudo nc -Nl 80; done
and then you are able to curl from your local computer and get the response.
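E.g., with <instance-public-ip> standing for the public IP shown on the instance page:
curl http://<instance-public-ip>/
which should print something like: <date>: hello from AWS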
ampy Updated +Created
Dell Inspiron 15 3520 Updated +Created
Bought May 2024 to be my clean crypto-only computer. Searched Amazon for the cheapest not-too-old laptop with a 1 TB disk, 16 GB RAM and Ubuntu certification, and that was it at £479.00.
Some reviews:
  • the keyboard is kind of crap. Notably the key "a" is very hard to press!!
  • the lack of a sleep state indicator LED and of an "I'm powering on" LED compared to Lenovo is really sad
  • it gets way too hot doing heavy work (Monero bootstrap) with the lid closed, which likely brought the system down
OPSEC: will run only cryptocurrency wallets and nothing else. Will connect to the Internet, but never ever to a non-clean USB flash drive.
The OPSEC for this machine assumes:
  • no supply chain attack on the USB hardware, laptop hardware, pre-installed Windows or the Ubuntu ISO
  • connecting with a browser to a few well-known websites to download stuff (the Ubuntu ISO, Monero software) is safe
Bootstrap OPSEC:
It must have taken about one week running full time to sync the Monero blockchain, which at the time was at about 3.1M blocks! I checked the system monitor, and CPU and Internet usage were never maxed out, suggesting the network was simply slow. But the computer still overheated quite a bit and froze a few times.
Compile MicroPython code for Micro Bit locally Updated +Created
To use a prebuilt firmware, you can just use uflash, tested on Ubuntu 22.04:
git clone https://github.com/bbcmicrobit/micropython
cd micropython
git checkout 7fc33d13b31a915cbe90dc5d515c6337b5fa1660
uflash examples/led_dance.py
What that does is:
  • convert the MicroPython code to bytecode
  • join it up with a prebuilt firmware that ships with uflash which contains the MicroPython interpreter
  • flashes the result
To build your own firmware see:
TODO didn't manage to build from source on Ubuntu 22.04, their setup bitrotted way too fast... it's shameful even. Eventually I gave up and went for the magic Docker image for github.com/bbcmicrobit/micropython, and it bloody worked:
git clone https://github.com/bbcmicrobit/micropython
cd micropython
git checkout 7fc33d13b31a915cbe90dc5d515c6337b5fa1660
docker pull ghcr.io/carlosperate/microbit-toolchain:latest
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest yt target bbc-microbit-classic-gcc-nosd@https://github.com/lancaster-university/yotta-target-bbc-microbit-classic-gcc-nosd
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest make all

# Build one.
tools/makecombinedhex.py build/firmware.hex examples/counter.py -o build/counter.hex
cp build/counter.hex "/media/$USER/MICROBIT/"

# Build all.
for f in examples/*; do b="$(basename "$f")"; echo $b; tools/makecombinedhex.py build/firmware.hex "$f" -o "build/${b%.py}.hex"; done
The pre-Docker attempts:
sudo add-apt-repository -y ppa:team-gcc-arm-embedded
sudo apt update
sudo apt install gcc-arm-embedded
sudo apt install cmake ninja-build srecord libssl-dev

# Rust required for some Yotta component, OMG.
sudo snap install rustup
rustup default 1.64.0

python3 -m pip install yotta
The line:
sudo add-apt-repository -y ppa:team-gcc-arm-embedded
warns:
E: The repository 'https://ppa.launchpadcontent.net/team-gcc-arm-embedded/ppa/ubuntu jammy Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
and then the update/sudo apt-get install gcc-arm-embedded fails, bibliography:
Attempting to install Yotta:
sudo -H pip3 install yotta
or:
python3 -m pip install --user yotta
was failing with:
Exception: Version mismatch: this is the 'cffi' package version 1.15.1, located in '/tmp/pip-build-env-dinhie_9/overlay/local/lib/python3.10/dist-packages/cffi/api.py'.  When we import the top-level '_cffi_backend' extension module, we get version 1.15.0, located in '/usr/lib/python3/dist-packages/_cffi_backend.cpython-310-x86_64-linux-gnu.so'.  The two versions should be equal; check your installation.
Running:
python3 -m pip install --user cffi==1.15.1
did not help. Bibliography:
From a clean virtualenv, it appears to move further, and then fails at:
Building wheel for cmsis-pack-manager (pyproject.toml) ... error
error: [Errno 2] No such file or directory: 'cargo'
So we install Rust and try again, OMG:
sudo snap install rustup
rustup default stable
which at the time of writing was rustc 1.64.0, and then OMG, it worked!! We have the yt command.
However, it is still broken, e.g.:
git clone https://github.com/lancaster-university/microbit-samples
cd microbit-samples
git checkout 285f9acfb54fce2381339164b6fe5c1a7ebd39d5
cp source/examples/invaders/* source
yt clean
yt build
blows up:
cannot import name 'soft_unicode' from 'markupsafe'
bibliography:
GHDL Updated +Created
Examples under vhdl.
First install GHDL. On Ubuntu:
sudo apt install ghdl
Tested on GHDL 1.0.0, Ubuntu 22.04.
Run all examples, which have assertions in them:
cd vhdl
./run
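For reference, a minimal standalone GHDL session looks something like this; hello.vhdl is a hypothetical file, not one of the examples under vhdl/:
cat > hello.vhdl <<'EOF'
entity hello is
end hello;

architecture behav of hello is
begin
  process
  begin
    report "hello world";
    wait;
  end process;
end behav;
EOF
# Analyze, elaborate and run.
ghdl -a hello.vhdl
ghdl -e hello
ghdl -r hello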
Files:
gitk Updated +Created
Figure 1.
gitk 2.34.1 running on Ubuntu 22.04 with a simple repository.
Kdenlive Updated +Created
This seems like a decent option, although it has bugs coming in and out all the time! Also it is quite hard to learn to use.
To get started:
  • import a clip
  • drag it onto the track area
Shortcuts:
To set the video length, search for "set outpoint" under "monitor".
Add subtitles:
  • Effects
  • Dynamic text
then drag on top of the video track. To add only to part of the video, cut it up first.
Preview has no sound on Ubuntu 20.10. Fixed as of Ubuntu 22.04.
Sound worked on Ubuntu 21.04, but it then soon crashed with:
 = = SET EFFECT PARAM:  "rect"  =  0=1188 0 732 242
MUTEX LOCK!!!!!!!!!!!! slotactivateeffect:  1
// // // RESULTING REQUIRED SCENE:  1
Object 0x557293592da0 destroyed while one of its QML signal handlers is in progress.
Most likely the object was deleted synchronously (use QObject::deleteLater() instead), or the application is running a nested event loop.
This behavior is NOT supported!
qrc:/qml/EffectToolBar.qml:80: function() { [native code] }
Killed
amazing.
On Ubuntu 22.04 it hasn't crashed yet.
Micro Bit getting started Updated +Created
When plugged into an Ubuntu 22.04 host via its USB Micro-B port, the Micro Bit mounts as:
/media/$USER/MICROBIT/
e.g.:
/media/ciro/MICROBIT/
for username ciro.
Loading a program is done by simply copying a .hex binary onto the mounted drive, e.g. with:
cp ~/Downloads/microbit_program.hex /media/$USER/MICROBIT/
The file name does not matter, only the .hex extension.
The power light on the back flashes while the upload is happening.
Flashing takes about 10-15 seconds for the 1.8 MB scroll display hello world from microbit-micropython.readthedocs.io/en/v1.0.1/tutorials/hello.html:
from microbit import *
display.scroll("Hello, World!")
and the program starts executing immediately after flash ends.
You can restart the program by clicking the reset button near the USB port. When you push it down, the program dies, and it restarts as soon as you release the button.
Program Raspberry Pi Pico W with C Updated +Created
The Ubuntu 22.04 build just worked, nice! Feels much cleaner than the Micro Bit C setup:
sudo apt install cmake gcc-arm-none-eabi libnewlib-arm-none-eabi libstdc++-arm-none-eabi-newlib

git clone https://github.com/raspberrypi/pico-sdk
cd pico-sdk
git checkout 2e6142b15b8a75c1227dd3edbe839193b2bf9041
cd ..

git clone https://github.com/raspberrypi/pico-examples
cd pico-examples
git checkout a7ad17156bf60842ee55c8f86cd39e9cd7427c1d
cd ..

export PICO_SDK_PATH="$(pwd)/pico-sdk"
cd pico-examples
mkdir build
cd build
# Board selection.
# https://www.raspberrypi.com/documentation/microcontrollers/c_sdk.html also says you can give wifi ID and password here for W.
cmake -DPICO_BOARD=pico_w ..
make -j
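For the W Wi-Fi examples, the credentials mentioned in the comment above would presumably be passed as CMake variables along these lines, following the pico-examples README convention (untested sketch):
cmake -DPICO_BOARD=pico_w -DWIFI_SSID="myssid" -DWIFI_PASSWORD="mypassword" ..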
Then we install the programs just like any other UF2: plug the board in with BOOTSEL pressed and copy the UF2 over, e.g.:
cp pico_w/blink/picow_blink.uf2 /media/$USER/RPI-RP2/
Note that there are separate examples for the W and non-W LEDs; for the non-W it is:
cp blink/blink.uf2 /media/$USER/RPI-RP2/
Also tested the UART over USB example:
cp hello_world/usb/hello_usb.uf2 /media/$USER/RPI-RP2/
You can then see the UART messages with:
screen /dev/ttyACM0 115200
TODO understand the proper debug setup, and a flash setup that doesn't require us to unplug and replug the thing every two seconds. www.electronicshub.org/programming-raspberry-pi-pico-with-swd/ appears to describe it, using SWD to do both debug and flash. To do it, you seem to need another board with GPIO, e.g. a Raspberry Pi; the laptop alone is not enough.
Raspberry Pi Pico W UART Updated +Created
You can connect from an Ubuntu 22.04 host as:
screen /dev/ttyACM0 115200
When in screen, you can Ctrl + C to kill main.py, and then execution stops and you are left in a Python shell. From there:
  • Ctrl + D: reboots
  • Ctrl + A K: kills the GNU screen window. Execution continues normally
but be aware of: Raspberry Pi Pico W freezes a few seconds after screen disconnects from UART.
The Three Treasures of the Programmer Updated +Created
Ciro Santilli's joke version of the Chinese Four Treasures of the Study!
  • web browser
  • Text editor
  • terminal. Though to be honest, circa 2022, Ciro learned of the Ctrl + click to open file (including with file.c:123 line syntax) ability of Visual Studio Code (likely present in other IDEs), and he was starting to consider dumping the terminal altogether if some implementation gets it really really right. The main thing is that it can't be a tiny little bar at the bottom, it has to be full window and super easily toggleable!
In the past, Ciro used to use file managers, which would be the fourth treasure. But he stopped doing so for years due to his cd alias... so it became three. He actually had exactly three windows open when he checked if there was anything else he could not give up.
Figure 1.
The three Treasures of the Programmer
. Featuring: Gvim, tmux running in GNOME terminal, and the Chromium browser on Ubuntu 22.04. The minimized windows are for demonstration purposes; Cirism mandates that all windows shall be maximized at all times. Splits within a single program are permitted however.
Universal asynchronous receiver-transmitter Updated +Created
A good project to see UARTs at work in all their beauty is to connect two Raspberry Pis via UART, and then:
Part of the beauty of this is that you can just connect both boards directly manually with a few wire-to-wire connections with simple jump wire. Its simplicity is just quite refreshing. Sure, you could do something like that for any physical layer link presumably...
Remember that you can only have one GNU screen connected at a time or else they will mess each other up: unix.stackexchange.com/questions/93892/why-is-screen-is-terminating-without-root/367549#367549
On Ubuntu 22.04 you can use screen without sudo by adding yourself to the dialout group with:
sudo usermod -a -G dialout $USER
and then logging out and back in for the new group membership to take effect.
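As a concrete sketch of the two-Pi session, assuming classic Raspberry Pis with the UART enabled via enable_uart=1 in /boot/config.txt and exposed as /dev/serial0:
# Wire TX (GPIO14) of each Pi to RX (GPIO15) of the other, and GND to GND.
# Then on both Pis:
screen /dev/serial0 115200
# Characters typed into one terminal now show up on the other.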
Verilator Updated +Created
Verilog simulator that transpiles to C++.
One very good thing about this is that it makes it easy to create test cases directly in C++. You just supply inputs and clock the simulation directly in a C++ loop, then read outputs and assert them with assert(). And you can inspect variables by printing them or with GDB. This is infinitely more convenient than doing these IO-type tasks in Verilog itself.
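As a minimal sketch of that workflow, with a hypothetical counter.v and main.cpp rather than the actual files under verilator/:
cat > counter.v <<'EOF'
module counter(input clk, output reg [3:0] out);
  initial out = 0;
  always @(posedge clk) out <= out + 1;
endmodule
EOF
cat > main.cpp <<'EOF'
#include <cassert>

#include "Vcounter.h"

int main(void) {
    Vcounter *dut = new Vcounter;
    // Supply inputs and clock the simulation directly from a C++ loop.
    for (int i = 0; i < 4; i++) {
        dut->clk = 0; dut->eval();
        dut->clk = 1; dut->eval();
    }
    // Read outputs and assert them.
    assert(dut->out == 4);
    delete dut;
    return 0;
}
EOF
verilator --cc --exe counter.v main.cpp
make -C obj_dir -f Vcounter.mk Vcounter
./obj_dir/Vcounter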
Some simulation examples under verilog.
First install Verilator. On Ubuntu:
sudo apt install verilator
Tested on Verilator 4.038, Ubuntu 22.04.
Run all examples, which have assertions in them:
cd verilator
make run
File structure is for example:
Example list: