COCO 2017 by Ciro Santilli 37 Updated 2025-07-16
This is the one used on MLperf v2.1 ResNet, likely one of the most popular choices out there.
2017 challenge subset:
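To get a feel for this subset, here's a minimal sketch (mine, not from the original notes) of loading the 2017 validation annotations with the standard pycocotools package; the file path assumes the usual layout of the official annotations_trainval2017.zip download:

    # pip install pycocotools
    from pycocotools.coco import COCO

    # Assumed path: where annotations_trainval2017.zip extracts to.
    coco = COCO("annotations/instances_val2017.json")

    # The 80 object categories: person, bicycle, car, ...
    cats = coco.loadCats(coco.getCatIds())
    print(len(cats))  # 80

    # All validation images containing at least one "person" annotation.
    person_id = coco.getCatIds(catNms=["person"])[0]
    img_ids = coco.getImgIds(catIds=[person_id])
    print(len(img_ids))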
Test buy 2023-04-10 in the UK:
  • fee: 0.99 pounds, minimum buy: 1.99 pounds
  • bought £10, minus the £0.99 fee, totalling 0.00039162 BTC (£8.92), presumably after further fees/spread
  • bitcoin price on Google on that day: 22,777.54 GBP / BTC
  • average bitcoin transaction fees were about 2.7 USD on that day
Sending 5 pounds to wallet 12dg2FaiZLp3VzDtLvwPinaKz41TQcEGbs
  • network fee: 0.00001989 BTC
  • total bitcoin cost: 0.00023928 BTC deducted from the balance
  • new balance: 15,234 satoshi (39,162 - 23,928).
  • total spent: £5.45
  • time est.: about 30 minutes
This worked and I received 21939 satoshis (23928 - 1989) on Electrum on one of the outputs of transaction 1177268091cbeaacbcaac5dc4f6d1774c4ec11b4bcffafa555cd2775eafb954c.
Sending 1 satoshi back! The lowest fee in Electrum is 1120 satoshis targeting 25 blocks (4 hours). Let's do it. It failed: the server forbids dust, minimum is 1000 satoshis. OK, sending 1000 satoshis, at a 1139 satoshi fee.
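A quick throwaway sanity check of the satoshi arithmetic above, using only the figures quoted in these notes:

    # All amounts in satoshis (1 BTC = 100_000_000 satoshis).
    GBP_PER_BTC = 22_777.54   # Google's quoted price on 2023-04-10

    bought = 39_162       # 0.00039162 BTC from the £10 buy
    deducted = 23_928     # total deducted by the £5 send
    network_fee = 1_989   # miner fee of that send

    print(bought - deducted)              # 15234: the new balance
    print(deducted - network_fee)         # 21939: what arrived on Electrum
    print(deducted / 1e8 * GBP_PER_BTC)   # ~5.45: the "total spent" in pounds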
CNN convolution kernels are not hardcoded. They are learned and optimized via backpropagation: you just specify their size! For example, in PyTorch you'd just do:
nn.Conv2d(1, 6, kernel_size=(5, 5))
as used for example at: activatedgeek/LeNet-5.
This can also be inferred from: stackoverflow.com/questions/55594969/how-to-visualise-filters-in-a-cnn-with-pytorch where we see that the kernels are not perfectly regular, as they would be if they had been hand-coded.
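To make that concrete, here's a small self-contained sketch (mine, not from the linked answer) showing that the 5x5 kernels of such a layer are ordinary learnable parameters, randomly initialized and changed by a gradient step like any other weight:

    import torch
    import torch.nn as nn

    # 1 input channel, 6 output channels, 5x5 kernels, as in LeNet-5.
    conv = nn.Conv2d(1, 6, kernel_size=(5, 5))
    print(conv.weight.shape)  # torch.Size([6, 1, 5, 5]), randomly initialized

    # A dummy loss just to get gradients flowing into the kernels.
    x = torch.randn(1, 1, 28, 28)
    conv(x).sum().backward()

    optimizer = torch.optim.SGD(conv.parameters(), lr=0.1)
    before = conv.weight.clone()
    optimizer.step()
    print(torch.equal(before, conv.weight))  # False: the kernels were updated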
This is about transactions that are interesting not because of their inscriptions, but for some other reason, such as their transaction size.
MLperf by Ciro Santilli 37 Updated 2025-07-16
mlcommons.org/en/ Their homepage is not amazingly organized, but it does the job.
Benchmark focused on deep learning. It has two parts:
Furthermore, a specific network model is specified for each benchmark in the closed category, so it goes beyond just specifying the dataset.
And there are also separate repositories for each:
E.g. on mlcommons.org/en/training-normal-21/ we can see what the benchmarks are:
Dataset          | Model
ImageNet         | ResNet
KiTS19           | 3D U-Net
OpenImages       | RetinaNet
COCO dataset     | Mask R-CNN
LibriSpeech      | RNN-T
Wikipedia        | BERT
1TB Clickthrough | DLRM
Go               | MiniGo
ONNX by Ciro Santilli 37 Updated 2025-07-16
The most important thing this project provides appears to be the .onnx file format, which represents ANN models, pre-trained or not.
Deep learning frameworks can then output such .onnx files for interchangeability and serialization.
Some examples:
The cool thing is that ONNX can then run inference in a uniform manner on a variety of devices, without installing the deep learning framework that the model was created with. It's a bit like having a kind of portable executable. Neat.
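Here's a minimal sketch of that round trip (my own toy example, not from any of the projects above): export a tiny PyTorch model to .onnx, then run it with onnxruntime alone, which knows nothing about PyTorch:

    import numpy as np
    import torch
    import torch.nn as nn
    import onnxruntime

    # An arbitrary tiny model, just for the export.
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
    model.eval()

    # Serialize to the .onnx interchange format.
    torch.onnx.export(model, torch.randn(1, 4), "tiny.onnx",
                      input_names=["input"], output_names=["output"])

    # Inference via ONNX Runtime: PyTorch is no longer needed at this point.
    session = onnxruntime.InferenceSession(
        "tiny.onnx", providers=["CPUExecutionProvider"])
    x = np.random.randn(1, 4).astype(np.float32)
    (y,) = session.run(["output"], {"input": x})
    print(y.shape)  # (1, 2)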
Neurokernel by Ciro Santilli 37 Updated 2025-07-16
The Neurokernel Project aims to build an open software platform for the emulation of the entire brain of the fruit fly Drosophila melanogaster on multiple Graphics Processing Units (GPUs).
The student-organized bar of the École. There's a corresponding Binet (student club) that takes care of it.
