Underdetermination is a situation in philosophy of science and epistemology where the available evidence is insufficient to uniquely determine which of several competing theories or explanations is correct: multiple hypotheses can explain the same set of observations, so the evidence does not definitively support one theory over another.
This is the first thing you have to know about supervised learning:
- training is when you learn the model parameters from input data. This literally means finding the best values we can for a bunch of numbers that make up the model. These can easily number in the hundreds of thousands.
- inference is when we take a trained model (i.e. one with its parameters already determined) and apply it to new inputs
Both of those already have hardware acceleration available as of the 2010s.
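To make the distinction concrete, here is a minimal sketch in PyTorch; the toy model and data are made up for illustration, not from any specific benchmark:

```python
import torch

# A tiny model: a single linear layer y = w*x + b, so just 2 parameters.
# Real models have parameters numbering in the hundreds of thousands or more.
model = torch.nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()

# Training: learn the parameter values from (input, output) example pairs.
# Here we fit the made-up relationship y = 2x.
x = torch.tensor([[1.0], [2.0], [3.0], [4.0]])
y = 2.0 * x
for _ in range(1000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Inference: the parameters are now fixed; apply the model to a new input.
with torch.no_grad():
    print(model(torch.tensor([[5.0]])))  # should print something close to 10.0
```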
Ciro Santilli invented this term, derived from "hardware in the loop", to refer to simulations in which both the brain and the body/physical environment of organism models are modelled.
E.g. just imagine running:
Size theory is a concept used in various fields, including mathematics, physics, and philosophy, and its meaning varies significantly with context. In mathematics, it can refer to concepts related to the measure and dimension of sets, particularly in geometry and topology, i.e. how the dimensions and sizes of different objects can be understood and compared.
Ciro Santilli invented this term. It refers to setups in which you put an animal in a virtual world that the animal can control, and where you can measure the animal's outputs.
- MouseGoggles www.researchsquare.com/article/rs-3301474/v1 | twitter.com/hongyu_chang/status/1704910865583993236
- Fruit fly setup from Penn State: scitechdaily.com/secrets-of-fly-vision-for-rapid-flight-control-and-staggeringly-fast-reaction-speed/
An IBM-made/pushed term, but one that matches Ciro Santilli's general view of how we should move forward with AGI.
An interesting architecture based on skip connections between layers. A minimal residual-block sketch follows the implementation list below.
Apparently it destroyed the ImageNet 2015 competition and became very famous as a result.
- torchvision ResNet
- MLperf v2.1 ResNet contains a pre-trained ResNet ONNX at zenodo.org/record/4735647/files/resnet50_v1.onnx for its inference benchmark. We've tested it at: Run MLperf v2.1 ResNet on Imagenette.
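To illustrate what "layer skip" means here, the following is a minimal residual-block sketch in PyTorch. It is a simplification for illustration only: the real ResNet blocks also use batch normalization, strided downsampling, and 1x1 projection shortcuts.

```python
import torch

# The core ResNet idea: the block outputs x + F(x), so the stacked
# layers only have to learn a residual correction on top of the input.
class ResidualBlock(torch.nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = torch.nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = torch.nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        y = self.conv2(self.relu(self.conv1(x)))
        return self.relu(x + y)  # the skip connection: x bypasses the conv layers

block = ResidualBlock(64)
print(block(torch.randn(1, 64, 8, 8)).shape)  # torch.Size([1, 64, 8, 8])
```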
A Drosophila melanogaster has about 135k neurons, and we only managed to reconstruct its connectome in 2023.
The human brain has 86 billion neurons, roughly 600,000 times more. It is therefore obvious that we are very, very far away from a full human connectome.
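A quick back-of-the-envelope check of that ratio:

```python
human_neurons = 86e9        # ~86 billion neurons in the human brain
drosophila_neurons = 135e3  # ~135k neurons in Drosophila melanogaster
print(f"{human_neurons / drosophila_neurons:,.0f}")  # 637,037
```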
Instead, however, we could look at the connectome at larger scales, try to extract modules from it, and then reverse engineer things module by module.
This is likely how we are going to "understand how the human brain works".
Some notable connectomes:
Pinned article: Introduction to the OurBigBook Project
Welcome to the OurBigBook Project! Our goal is to create the perfect publishing platform for STEM subjects, and get university-level students to write the best free STEM tutorials ever.
Everyone is welcome to create an account and play with the site: ourbigbook.com/go/register. We believe that students themselves can write amazing tutorials, but teachers are welcome too. You can write about anything you want, it doesn't have to be STEM or even educational. Silly test content is very welcome and you won't be penalized in any way. Just keep it legal!
Video 1. Intro to OurBigBook. Source.
We have two killer features:
- topics: topics group articles by different users with the same title, e.g. here is the topic for the "Fundamental Theorem of Calculus": ourbigbook.com/go/topic/fundamental-theorem-of-calculus
  Articles of different users are sorted by upvote within each topic page. This feature is a bit like:
- a Wikipedia where each user can have their own version of each article
- a Q&A website like Stack Overflow, where multiple people can give their views on a given topic, and the best ones are sorted by upvote. Except you don't need to wait for someone to ask first, and any topic goes, no matter how narrow or broad
This feature makes it possible for readers to find better explanations of any topic created by other writers. And it allows writers to create an explanation in a place that readers might actually find it.
Figure 1. Screenshot of the "Derivative" topic page. View it live at: ourbigbook.com/go/topic/derivative
Video 2. OurBigBook Web topics demo. Source.
- local editing: you can store all your personal knowledge base content locally in a plaintext markup format that can be edited locally and published either:
  - to OurBigBook.com to get awesome multi-user features like topics and likes
  - as HTML files to a static website, which you can host yourself for free on many external providers like GitHub Pages, and remain in full control
  This way you can be sure that even if OurBigBook.com were to go down one day (which we have no plans to do as it is quite cheap to host!), your content will still be perfectly readable as a static site.
Figure 3. Visual Studio Code extension installation.
Figure 4. Visual Studio Code extension tree navigation.
Figure 5. Web editor. You can also edit articles on the Web editor without installing anything locally.
Video 3. Edit locally and publish demo. Source. This shows editing OurBigBook Markup and publishing it using the Visual Studio Code extension.
Video 4. OurBigBook Visual Studio Code extension editing and navigation demo. Source.
- Infinitely deep tables of contents:
All our software is open source and hosted at: github.com/ourbigbook/ourbigbook
Further documentation can be found at: docs.ourbigbook.com
Feel free to reach out to us for any help or suggestions: docs.ourbigbook.com/#contact






