CLI program implementing the Universal Chess Interface: www.reddit.com/r/ComputerChess/comments/b6rdez/commandline_options_for_stockfish/
How to actually play against it: chess.stackexchange.com/questions/4353/how-to-install-stockfish-on-ubuntu So hard!
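For a taste of what the interface looks like, here is a minimal sketch of driving the engine from Python over stdin/stdout, assuming a stockfish binary is on your PATH:

import subprocess

engine = subprocess.Popen(
    ['stockfish'],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

def send(cmd):
    engine.stdin.write(cmd + '\n')
    engine.stdin.flush()

send('uci')                           # protocol handshake
send('position startpos moves e2e4')  # set up the position after 1. e4
send('go depth 15')                   # search 15 plies deep
for line in engine.stdout:
    if line.startswith('bestmove'):
        print(line.strip())           # e.g. "bestmove e7e5 ponder ..."
        break
send('quit')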
The user-friendly chess UI! Exactly what you would expect from a GNOME Project package. But it also packs some punch via the Universal Chess Interface, e.g. Stockfish just works.
Both a chess engine and a CLI chess UI. As an engine it is likely irrelevant compared to Stockfish as of 2020. TODO: does the UI support the Universal Chess Interface?
Cool project history though. Started before the GNU Project itself, and became one of the first packages.
Advanced. Not beginner-friendly, very clunky.
To be fair, this is one of the least bad ones.
The cool thing about this notation is that it showed Ciro Santilli that there is more state to a chess game than just the board itself! Notably:
- whose move it is next
- castling availability
- en passant availability
plus some other boring draw rule counters.
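For concreteness, here's a minimal sketch in Python that splits the six space-separated fields of the starting position's FEN string:

fen = 'rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1'
(board, side_to_move, castling, en_passant,
 halfmove_clock, fullmove_number) = fen.split(' ')
print(side_to_move)    # 'w': white moves next
print(castling)        # 'KQkq': both sides can still castle both ways
print(en_passant)      # '-': no en passant capture available right now
print(halfmove_clock)  # '0': the fifty-move rule draw counter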
70,000 28x28 grayscale (1 byte per pixel) images of hand-written digits 0-9, i.e. 10 categories. 60k are the training data and 10k are the test data.
This is THE "OG" computer vision dataset.
Playing with it is the de-facto computer vision hello world.
It was on this dataset that Yann LeCun made great progress with the LeNet model. Running LeNet on MNIST has to be the most classic computer vision thing ever. See e.g. activatedgeek/LeNet-5 for a minimal and modern PyTorch educational implementation.
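For reference, here is a minimal LeNet-style network in PyTorch. This is a sketch rather than a faithful reproduction of the historical LeNet-5, e.g. padding is added so the 28x28 MNIST inputs fit:

import torch
from torch import nn

class LeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 28x28 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                            # -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),            # -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                            # -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, 10),  # one output per digit class
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LeNet()
print(model(torch.zeros(1, 1, 28, 28)).shape)  # torch.Size([1, 10])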
But it is important to note that as of the 2010s, the benchmark had become too easy for many applications. It is perhaps fair to say that the next big dataset revolution of the same importance came with ImageNet.
The dataset could be downloaded from yann.lecun.com/exdb/mnist/ but as of March 2025 it was down, and it seems to have broken from time to time randomly, so Wayback Machine to the rescue:
wget \
https://web.archive.org/web/20120828222752/http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz \
https://web.archive.org/web/20120828182504/http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz \
https://web.archive.org/web/20240323235739/http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz \
https://web.archive.org/web/20240328174015/http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz
But downloading the raw files by hand is kind of pointless, as they use a crazy custom binary format (IDX) to store all images and labels in single files. OMG!
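The format is simple enough to parse by hand though. A minimal sketch with NumPy, following the format description from the original page (a big-endian header with the element type and number of dimensions, then the dimensions, then the raw bytes):

import gzip
import struct

import numpy as np

def read_idx(path):
    # Parse one of the gzipped IDX files downloaded above.
    with gzip.open(path, 'rb') as f:
        # Header: two zero bytes, a type byte (0x08 = unsigned byte),
        # then the number of dimensions.
        _zero, _dtype, ndim = struct.unpack('>HBB', f.read(4))
        shape = struct.unpack('>' + 'I' * ndim, f.read(4 * ndim))
        return np.frombuffer(f.read(), dtype=np.uint8).reshape(shape)

images = read_idx('train-images-idx3-ubyte.gz')  # shape (60000, 28, 28)
labels = read_idx('train-labels-idx1-ubyte.gz')  # shape (60000,)
print(images.shape, labels[0])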
OK-ish data explorer: knowyourdata-tfds.withgoogle.com/#tab=STATS&dataset=mnist
TODO where to find it: www.kaggle.com/general/50987
14 million images with more than 20k categories, typically denoting prominent objects in the image, either common daily objects or a wide range of animals. About 1 million of them also have bounding boxes for the objects. The images have different sizes; they are not all standardized to a single size like MNIST[ref].
Each image appears to have a single label associated with it. Care must have been taken somehow with the categories, since some images contain several possible objects, e.g. a person and some object.
Official project page: www.image-net.org/
The data license is restrictive and forbids commercial usage: www.image-net.org/download.php. Also as a result you have to log in to download the dataset. Super annoying.
How to visualize: datascience.stackexchange.com/questions/111756/where-can-i-view-the-imagenet-classes-as-a-hierarchy-on-wordnet
From cocodataset.org/:
So they have relatively few object labels, but their focus seems to be putting a bunch of objects in the same image. E.g. they have 13 cat plus pizza photos. Searching for such weird combinations is kind of fun, and can also be done programmatically, as sketched below.
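A minimal sketch of that search with the pycocotools package, assuming the 2017 train annotations have been downloaded and extracted locally (the annotations/ path is just an assumption for illustration):

from pycocotools.coco import COCO

coco = COCO('annotations/instances_train2017.json')
# Find images annotated with both a cat and a pizza.
cat_ids = coco.getCatIds(catNms=['cat', 'pizza'])
img_ids = coco.getImgIds(catIds=cat_ids)
print(len(img_ids))
print(coco.loadImgs(img_ids[0])[0]['coco_url'])  # direct link to one of them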
Their official dataset explorer is actually good: cocodataset.org/#explore
Also, images have captions describing the relations between the objects. Epic.
This dataset is kind of cool.
As of v7:
- ~9M images
- 600 object classes
- bounding boxes
- visual relationships, which are really hard: storage.googleapis.com/openimages/web/factsfigures_v7.html#visual-relationships e.g. "person kicking ball": storage.googleapis.com/openimages/web/visualizer/index.html?type=relationships&set=train&c=kick
- localized narratives: google.github.io/localized-narratives/ is ludicrous, you can actually hear the (mostly Indian women) annotators describing the image while hovering their mouse to point at what they are talking about. They are clearly bored out of their minds, the poor people!
atlas.brain-map.org/ omg some amazing things there.
A Drosophila melanogaster has about 135k neurons, and we only managed to reconstruct its connectome in 2023.
The human brain has 86 billion neurons, about 600,000 times more. Therefore, it is obvious that we are very, very far away from a full connectome.
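Back-of-the-envelope check of that ratio in Python:

print(f'{86e9 / 135e3:,.0f}')  # 637,037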
Instead, however, we could look at the connectome at larger scales, try to extract modules from it, and then reverse engineer things module by module.
This is likely how we are going to "understand how the human brain works".
Some notable connectomes:
- 2019: 1mm cube of mouse brain: www.nature.com/articles/d41586-019-02208-0
- 2023: Drosophila connectome
This is the most plausible way of obtaining a full connectome looking from 2020 forward. You'd then observe the slices with an electron microscope plus appropriate staining. Superintelligence by Nick Bostrom (2014) really opened Ciro Santilli's eyes to this possibility.
Once this is done for a human, it will be one of the greatest milestones of humanity, comparable perhaps to the Human Genome Project. But of course, privacy issues are incredibly pressing in this case, even more so than in the Human Genome Project, as we would essentially be able to read the brain of the person after their death.
This is also a possible path towards post-mortem brain reading.
- UK
- Higher Steaks, later renamed to the boring "Uncommon": uncommonbio.co/