This is the first thing you have to know about supervised learning:
- training is when you learn the model parameters from input data. This literally means finding the best values we can for a bunch of numbers (the parameters) of the model. There can easily be hundreds of thousands of them.
- inference is when we take a trained model (i.e. one with its parameters already determined) and apply it to new inputs

Both of those already have hardware acceleration available as of the 2010s.
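To make the split concrete, here is a minimal sketch in Python. The toy linear model, data, and learning rate are made up for illustration, but the training/inference split is the general pattern:

```python
import numpy as np

# Toy data: y = 2*x + 1 plus noise. Real models have far more parameters.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=100)

# Training: learn the parameters (w, b) by gradient descent on the
# mean squared error between predictions and known outputs.
w, b = 0.0, 0.0
learning_rate = 0.1
for _ in range(500):
    pred = w * x + b
    w -= learning_rate * 2.0 * np.mean((pred - y) * x)
    b -= learning_rate * 2.0 * np.mean(pred - y)

# Inference: the parameters are now fixed; we just apply the model
# to inputs it has never seen.
x_new = np.array([0.5, -0.25])
print(w * x_new + b)  # approximately [2.0, 0.5]
```

In a real neural network, `w` and `b` would be replaced by those hundreds of thousands (or billions) of parameters and the gradient step would be computed by backpropagation, but the two phases are exactly the same.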
One of the most nerve-wracking movies ever made. Until they decide to rescue their colleague from jail, at which point it just becomes too surreal.
Bibliography:
- good interview excerpts with some of the pioneers in Glory of the Geeks
Two ways to see it:
- a ring where every nonzero element has a multiplicative inverse
- a field where multiplication is not necessarily commutative
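The quaternions are the standard concrete example of a division ring that is not a field: every nonzero element has an inverse, but multiplication does not commute:

```latex
% The imaginary units anticommute:
ij = k, \qquad ji = -k
% yet every nonzero quaternion q = a + bi + cj + dk is invertible:
q^{-1} = \frac{\bar{q}}{|q|^{2}}
       = \frac{a - bi - cj - dk}{a^{2} + b^{2} + c^{2} + d^{2}}
```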
A term made/pushed by IBM, but one that matches Ciro Santilli's general view of how we should move forward toward AGI.
Surely You're Joking, Mr. Feynman chapter Alfred Nobel's Other Mistake
Key quote that names the chapter:
My friend Matt Sands was once going to write a book to be called Alfred Nobel's Other Mistake.
Surely You're Joking, Mr. Feynman chapter O Americano, Outra Vez!
Ciro's Edict #8 Article metadata shown next to every header
This is a major feature: we have now started to inject metadata buttons next to every single pre-rendered header.
This makes it clear to every new user that every single header has its own separate metadata, which is a central idea of the website.
Screenshot showing metadata next to each header. The page is ourbigbook.com/donald-trump/chemistry. Note how even the subheaders "Chemical element" and "Hydrogen" show the metadata.