Billy Mitchell (gamer) Updated 2025-07-16
Video 1. When Cartoon Network Destroyed Billy Mitchell by Karl Jobst. Source.
U-Math Updated 2025-07-16
Amazon EC2 GPU Updated 2025-07-16
As of December 2023, the cheapest instance with an Nvidia GPU is g4dn.xlarge, so let's try that out. On that instance, lspci contains:
00:1e.0 3D controller: NVIDIA Corporation TU104GL [Tesla T4] (rev a1)
so we see that it runs an Nvidia T4 GPU.
Be careful not to confuse it with g4ad.xlarge, which has an AMD GPU instead. TODO meaning of "ad"? "a" presumably means AMD, but what is the "d"?
Some documentation on which GPU is in each instance can be seen at: docs.aws.amazon.com/dlami/latest/devguide/gpu.html (archive), with a list of which GPUs they have at that particular point in time. Can the GPU ever change for a given instance name? Likely not. Also, as of December 2023 the list is already outdated, e.g. P5 is not shown, though it is mentioned at: aws.amazon.com/ec2/instance-types/p5/
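If you have the AWS CLI configured, another option is to query the instance type directly; a minimal sketch, assuming valid credentials and using the GpuInfo field of describe-instance-types:
aws ec2 describe-instance-types \
  --instance-types g4dn.xlarge g4ad.xlarge \
  --query 'InstanceTypes[].[InstanceType, GpuInfo.Gpus[0].Manufacturer, GpuInfo.Gpus[0].Name]' \
  --output table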
When selecting the instance to launch, the GPU apparently does not show anywhere on the instance information page, it is so bad!
Also note that this instance has 4 vCPUs, so on a new account you must first make a customer support request to Amazon to increase your vCPU limit from the default of 0 to at least 4 (a possible CLI route is sketched after the error message below), see also: stackoverflow.com/questions/68347900/you-have-requested-more-vcpu-capacity-than-your-current-vcpu-limit-of-0, otherwise the instance launch will fail with:
You have requested more vCPU capacity than your current vCPU limit of 0 allows for the instance bucket that the specified instance type belongs to. Please visit aws.amazon.com/contact-us/ec2-request to request an adjustment to this limit.
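A possible alternative to the support form is the Service Quotas API; the following sketch assumes that L-DB2E81BA is still the quota code for "Running On-Demand G and VT instances", so verify the current value with the first command before requesting the increase:
aws service-quotas get-service-quota --service-code ec2 --quota-code L-DB2E81BA
aws service-quotas request-service-quota-increase --service-code ec2 --quota-code L-DB2E81BA --desired-value 4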
When starting up the instance, also select:
Once you finally manage to SSH into the instance, first we have to install the Nvidia drivers and reboot:
sudo apt update
sudo apt install nvidia-driver-510 nvidia-utils-510 nvidia-cuda-toolkit
sudo reboot
and now running:
nvidia-smi
shows something like:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.147.05   Driver Version: 525.147.05   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:00:1E.0 Off |                    0 |
| N/A   25C    P8    12W /  70W |      2MiB / 15360MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
From there basically everything should just work as normal. E.g. we were able to run a CUDA hello world just fine with the following commands (a sketch of a possible inc.cu follows them):
nvcc inc.cu
./a.out
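The exact contents of inc.cu are not reproduced above; a minimal increment kernel along the following lines is enough to exercise the toolchain (the 41 to 42 check is just illustrative):
cat > inc.cu <<'EOF'
// Minimal CUDA check: increment one integer on the GPU.
#include <cstdio>

__global__ void inc(int *a) { a[0]++; }

int main(void) {
    int h = 41;
    int *d;
    cudaMalloc(&d, sizeof(int));
    cudaMemcpy(d, &h, sizeof(int), cudaMemcpyHostToDevice);
    inc<<<1, 1>>>(d);
    cudaMemcpy(&h, d, sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(d);
    printf("%d\n", h); // prints 42 if the kernel ran on the GPU
    return 0;
}
EOF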
One issue with this setup, besides the time it takes to set up, is that you might also have to pay some network charges as it downloads a bunch of stuff into the instance. We should try out some of the pre-built images. But it is also good to know this pristine setup just in case.
We then managed to run Ollama just fine with:
curl https://ollama.ai/install.sh | sh
/bin/time ollama run llama2 'What is quantum field theory?'
which gave:
0.07user 0.05system 0:16.91elapsed 0%CPU (0avgtext+0avgdata 16896maxresident)k
0inputs+0outputs (0major+1960minor)pagefaults 0swaps
so way faster than on my local desktop CPU, hurray.
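To double check that the model actually ran on the GPU and not on the CPU, one option is to watch utilization and memory from a second SSH session while the prompt is being processed:
nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv -l 1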
After the setup from: askubuntu.com/a/1309774/52975, we were able to run:
head -n1000 pap.txt | ARGOS_DEVICE_TYPE=cuda time argos-translate --from-lang en --to-lang fr > pap-fr.txt
which gave:
77.95user 2.87system 0:39.93elapsed 202%CPU (0avgtext+0avgdata 4345988maxresident)k
0inputs+88outputs (0major+910748minor)pagefaults 0swaps
so only marginally better than on P14s. It would be fun to see how much faster we could make things on a more powerful GPU.
Photosynthesis Updated 2025-07-16
It is quite cool that photosynthesis works just like cellular respiration by producing a proton potential through chemiosmosis.
These were found by running object detection software for porn/nudity detection. We need to run some more, as all sex was likely missed: github.com/GantMan/nsfw_model/issues/160
Office space design and remote work Updated 2025-07-16
Working remotely is hard if you don't already have a strong mastery of the software and enterprise systems used.
Also you don't feel people's love as strongly, and usefulness is built on love, see also Steve Jobs's Pixar office space design philosophy.
But please, give workers a small silent office so that they can concentrate instead of a silly open space, and create an internal social network so people can see what others are doing.
Remote working is much better if the majority of the team also does it, otherwise you will get excluded. Maybe after VR...
Peter Thiel Updated 2025-07-16
Nazism Updated 2025-07-16
Sirius Updated 2025-07-16
This is quite close! But as mentioned at: stars nearest to the Sun, there are several others nearby. Notably Sirius at 9 ly, the brightest star in the sky as of 2020.
CP Violation Updated 2025-09-17
Self-help Updated 2025-07-16
