TODO where to find it: www.kaggle.com/general/50987
14 million images, more than 20k categories, typically denoting prominent objects in the image, either common daily objects or a wide range of animals. About 1 million of them also have bounding boxes for the objects.
Each image appears to have a single label associated with it. Some care must have been taken with the categories, since some images contain several possible objects, e.g. a person and some object.
In practice however, the ILSVRC subset is more commonly used.
Official project page: www.image-net.org/
The data license is restrictive and forbids commercial usage: www.image-net.org/download.php.
The categories are all part of WordNet, which means that parent/child categories such as "dog" vs specific types of dog are available. ImageNet1k only appears to have leaf nodes however (i.e. no "dog" label, just specific types of dog).
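A minimal sketch of navigating that hierarchy with NLTK's WordNet interface (not ImageNet tooling; it assumes the WordNet corpus has been downloaded, and uses the ImageNet class WordNet ID n02099712, "Labrador retriever", as the example):

    # pip install nltk; then: python -c "import nltk; nltk.download('wordnet')"
    from nltk.corpus import wordnet as wn

    # ImageNet classes are identified by WordNet IDs such as n02099712:
    # the digits are the noun synset offset.
    synset = wn.synset_from_pos_and_offset('n', 2099712)
    while True:
        print(synset.name())
        hypernyms = synset.hypernyms()  # parent categories: retriever, dog, ...
        if not hypernyms:
            break  # reached the root: entity.n.01
        synset = hypernyms[0]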
From cocodataset.org/:
- 330K images (>200K labeled)
- 1.5 million object instances
- 80 object categories
- 91 stuff categories
- 5 captions per image. A caption is a short textual description of the image.
So they have relatively few object labels, but their focus seems to be putting a bunch of objects on the same image. E.g. they have 13 cat plus pizza photos. Searching for such weird combinations is kind of fun.
Their official dataset explorer is actually good: cocodataset.org/#explore
And the objects don't just have bounding boxes, but detailed polygons.
Also, images have captions describing the relation between objects, e.g.:
a black and white cat standing on a table next to a pizza.
Epic.
This dataset is kind of cool.
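As a minimal sketch, that kind of combined search can also be reproduced locally with the pycocotools Python API, assuming the 2017 annotation files have been downloaded from cocodataset.org (the path below is illustrative):

    # pip install pycocotools
    from pycocotools.coco import COCO

    coco = COCO('annotations/instances_val2017.json')

    # Images containing at least one cat AND at least one pizza.
    cat_ids = coco.getCatIds(catNms=['cat', 'pizza'])
    img_ids = coco.getImgIds(catIds=cat_ids)
    print(len(img_ids), 'images with both a cat and a pizza')

    # Annotations carry the detailed polygons in 'segmentation',
    # in addition to the 'bbox' rectangle.
    ann_ids = coco.getAnnIds(imgIds=img_ids[:1], catIds=cat_ids)
    for ann in coco.loadAnns(ann_ids):
        print(ann['category_id'], ann['bbox'])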
TODO vs COCO dataset.
As of v7:
- ~9M images
- 600 object classes
- bounding boxes (see the loading sketch after this list)
- visual relationships are really hard: storage.googleapis.com/openimages/web/factsfigures_v7.html#visual-relationships e.g. "person kicking ball": storage.googleapis.com/openimages/web/visualizer/index.html?type=relationships&set=train&c=kick
- google.github.io/localized-narratives/ localized narratives is ludicrous: you can actually hear the annotators (mostly Indian women) describing the image while hovering their mouse to point at what they are talking about. They are clearly bored out of their minds, the poor people!
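The annotations themselves are distributed as CSV files. A minimal loading sketch with pandas, assuming the filenames and column layout of the official downloads (these vary slightly across versions):

    import pandas as pd

    # class-descriptions-boxable.csv has no header: machine ID, display name.
    classes = pd.read_csv('class-descriptions-boxable.csv',
                          names=['LabelName', 'DisplayName'])
    # Box coordinates (XMin, XMax, YMin, YMax) are normalized to [0, 1].
    boxes = pd.read_csv('train-annotations-bbox.csv')

    # Count boxes per human-readable class name.
    counts = (boxes.merge(classes, on='LabelName')
                   .groupby('DisplayName').size()
                   .sort_values(ascending=False))
    print(counts.head(10))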
atlas.brain-map.org/ omg some amazing things there.
A Drosophila melanogaster has about 135k neurons, and we only managed to reconstruct its connectome in 2023.
The human brain has 86 billion neurons, roughly 600,000 times as many. Therefore, it is obvious that we are very, very far away from a full human connectome.
Instead, however, we could look at the connectome at larger scales, try to extract modules from it, and then reverse engineer things module by module.
This is likely how we are going to "understand how the human brain works".
Some notable connectomes:
- 2019: 1mm cube of mouse brain: www.nature.com/articles/d41586-019-02208-0
- 2023: Drosophila connectome
This is the most plausible way of obtaining a full connectome as seen from 2020 onward. You'd then observe the slices with an electron microscope plus appropriate staining. Superintelligence by Nick Bostrom (2014) really opened Ciro Santilli's eyes to this possibility.
Once this is done for a human, it will be one of the greatest milestones of humanity, comparable perhaps to the Human Genome Project. But of course, privacy issues are incredibly pressing in this case, even more so than in the Human Genome Project, as we would essentially be able to read the brain of the person after their death.
As of 2022, the Drosophila connectome had been almost fully extracted.
This is also a possible path towards post-mortem brain reading.
- UK
- Higher Steaks, later renamed to the boring "Uncommon": uncommonbio.co/
This is a simple hierarchical plaintext notation Ciro Santilli created to explain programs to himself.
It is usually created by doing searches in an IDE, and then manually selecting the information of interest.
It attempts to capture not only the call graph itself (including callbacks), but also when things get called or not, through the addition of some context code.
For example, consider the following pseudocode:

    f1() {
    }

    f2(i) {
      if (i > 5) {
        f1()
      }
    }

    f3() {
      f1()
      f2_2()
    }

    f2_2() {
      for (i = 0; i < 10; i++) {
        f2(i)
      }
    }

    main() {
      f2_2()
      f3()
    }

Suppose that we are interested in determining what calls f1. Then a reasonable call hierarchy for f1 would be:

    f2(i)
      if (i > 5) {
        f1()

      f2_2()
        for (i = 0; i < 10; i++) {
          f2(i)

        main
        f3

    f3()
      main()
Some general principles:
- start with a regular call tree
- to include context:
- remove any blank lines from the snippet of interest
- add it indented below the function
- and then follow it up with a blank line
- and then finally add any callers at the same indentation level
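The notation is assembled by hand, but to make the construction concrete, here is a toy Python sketch (with a hand-built caller graph and context snippets, purely illustrative) that prints essentially the hierarchy shown above:

    # Toy sketch: print the call hierarchy notation from a hand-built
    # caller graph and context snippets (blank lines already removed).
    callers = {
        'f1': ['f2(i)', 'f3()'],
        'f2(i)': ['f2_2()'],
        'f2_2()': ['main', 'f3'],
        'f3()': ['main()'],
    }
    context = {
        'f2(i)': ['if (i > 5) {', '  f1()'],
        'f2_2()': ['for (i = 0; i < 10; i++) {', '  f2(i)'],
    }

    def print_callers(func, indent=0):
        pad = '  ' * indent
        for caller in callers.get(func, []):
            # The caller itself, then its context snippet indented below it,
            # then a blank line, then its own callers at the snippet's level.
            print(pad + caller)
            for line in context.get(caller, []):
                print(pad + '  ' + line)
            if caller in context:
                print()
            print_callers(caller, indent + 1)

    print_callers('f1')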
Most of these are going to be whole-genome sequencing of some model organism:
- 2003: Human Genome Project (3 Gbp)

en.wikipedia.org/wiki/Whole_genome_sequencing#History lists them all. Basically the big "firsts" all happened in the 1990s and early 2000s.
How to view only posts by followed on Facebook feed?
Circa 2023, the feed is an unbearable list of stupid suggestions, never-ending idiotic memes, and you just end up missing posts you actually care about from people you actually follow.
- www.komando.com/social-media/facebook-customized-feeds/847500/
- www.quora.com/How-do-I-limit-my-news-feed-to-friends-only-on-Facebook
- www.youtube.com/watch?v=SIA8VydqiNQ OK, they split their feed into multiple feeds. However, on the Pages feed www.facebook.com/?filter=pages&sk=h_chr you very quickly reach:

  You're all caught up on Most Recent posts. Check back later for more updates.

  i.e. the history doesn't go back even a few days as of November 2023! And the favorites feed www.facebook.com/?filter=favorites&sk=h_chr is more explicit about its ridiculous timing:

  You're up to date on posts from the last 3 days

  OMG!
Adversarial Policies Beat Superhuman Go AIs