Some of the early computers of the 20th century were analog computers, not digital.
At some point however analog died out, and "computer" basically started to mean just "digital computer" by default.
As of the 2010s and forward, with the limits of Moore's law and the rise of machine learning, people have started looking again into analog computing as a possible way forward. A key insight is that high floating point precision is not that crucial in many deep learning applications, e.g. many new digital designs have tried 16-bit floating point as opposed to the more traditional 32-bit minimum. Some papers are even looking into 8-bit: dl.acm.org/doi/10.5555/3327757.3327866
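To get a feel for how small the damage from reduced precision can be, here is a tiny NumPy sketch (a made-up toy example, not taken from any of the papers above) comparing a matrix product computed from 16-bit inputs against a 32-bit reference:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256)).astype(np.float32)
b = rng.standard_normal((256, 256)).astype(np.float32)

# Reference product in 32-bit floating point.
c32 = a @ b

# Same product with the inputs rounded to 16-bit floating point first.
c16 = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)

# Overall relative error of the low precision result (Frobenius norm).
# It comes out small relative to the values themselves, which is why many
# deep learning workloads tolerate 16-bit arithmetic with little accuracy loss.
print(np.linalg.norm(c32 - c16) / np.linalg.norm(c32))
```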
As an example, the Lightmatter company was trying to implement silicon photonics-based matrix multiplication.
A general intuition behind this type of development is that the human brain, the holy grail of machine learning, is itself an analog computer.
Ordinal ruleset inscription
Ordinals are inscriptions created with the protocol described at: docs.ordinals.com/inscriptions.html The protocol was designed by developer Casey Rodarmor, and shares a few similarities with the AtomSea & EMBII protocol.
The protocol also includes a way to have ownership over inscriptions, effectively creating an NFT system on top of the bitcoin blockchain. AtomSea & EMBII also already had such a system however. In either case, Ciro Santilli couldn't give less of a fuck about who owns some random publicly viewable digital asset.
For whatever reason, ordinals became extremely popular compared to the AtomSea & EMBII format, leading to millions of inscriptions, and 10k+ images as of block 830k. They also started to take up a substantial portion of the available block space.
This in turn led to a lot of renewed child porn discussion, and to people linking back to this page to view earlier inscriptions: incoming links.
Unfortunately, unlike AtomSea & EMBII and even cryptograffiti.info uploads, most ordinals are designed to be just soulless bulk collectibles, with as much artistic merit as any random collectible card set or postage stamps you might find at a newspaper stall. To make things worse, many of them are likely algorithmically generated. Eternal September had truly arrived on the Bitcoin blockchain. As a result, machine learning would be almost essential in order to find interesting uploads amidst such bulk.
Ordinal #0. This is the first ordinal ruleset inscription: ordinals.com/inscription/6fb976ab49dcec017f1e201e84395983204ae1a7c2abf7ced0a85d692e442799i0. It was made on block 767430 (2022-12-14). The i0 at the end of the URL above means "inscription 0". This is because a single transaction can have multiple inscriptions.
The ordinals also started taking up large portions of the Bitcoin blockchain.
Apparently the "Taproot" Bitcoin update made it easier to upload image-sized data once again, which had become prohibitively expensive 2023 and much earlier:
- protos.com/did-taproot-ruin-bitcoin-with-nft-inscriptions-of-monkey-jpegs/
- ordinals.com/ appears to index some types of ordinals
Bibliography:
- blocktelegraph.io/parent-child-bitcoin-inscriptions/ parent-child relationships are possible between two ordinals
- ordinals.com/
- bitcoin.stackexchange.com/questions/117018/understanding-how-ordinals-work-with-the-bitcoin-blockchain-what-is-exactly-sto
- bitcoin.stackexchange.com/questions/118405/read-ordinal-transaction-data
- bitcoin.stackexchange.com/questions/118247/can-someone-explain-the-byte-composition-of-an-inscription-reveal-transaction
- nftnow.com/guides/bitcoin-nfts-most-notable-ordinals-inscriptions/
In the case of machine learning in particular, a hyperparameter is not part of the training data set.
Hyperparameters can also be considered in domains outside of machine learning however, e.g. the step size in a partial differential equation solver is entirely independent from the problem itself and could be considered a hyperparameter. One difference from machine learning however is that step size hyperparameters in numerical analysis are clearly better if smaller, just at a higher computational cost. In machine learning however, there is often an optimum somewhere, beyond which overfitting becomes excessive.
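As a toy illustration of that last point, here is a small NumPy sketch (the data and the polynomial fitting setup are made up for illustration) where the hyperparameter is the degree of a fitted polynomial: the training error keeps shrinking as the degree grows, but the validation error typically reaches a minimum at some intermediate degree and then gets worse again as the model overfits:

```python
import numpy as np

rng = np.random.default_rng(42)

# Noisy samples of an underlying quadratic function.
x = np.linspace(-1, 1, 30)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(scale=0.3, size=x.size)

# Hold out half of the points for validation.
x_train, y_train = x[::2], y[::2]
x_val, y_val = x[1::2], y[1::2]

# The polynomial degree is the hyperparameter: it is chosen from the
# outside, it is not learned from the training data itself.
for degree in (1, 2, 4, 8):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    val_mse = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
    print(f"degree={degree}  train MSE={train_mse:.3f}  validation MSE={val_mse:.3f}")
```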
This section is about companies that primarily specialize in machine learning.
The term "machine learning company" is perhaps not great as it could be argued that any of the Big tech are leaders and sometimes, especially in the case of Google, has a main product that is arguably a form of machine learning.
The number of target dimensions D is a hyperparameter, and D = 2 and D = 3 are common choices when doing dataset exploration, as they can be easily visualized on a planar plot.
The mapping is done by projecting all points to a D-dimensional hyperplane. PCA is an algorithm for choosing this hyperplane and the coordinate system within it.
The hyperplane choice is done as follows:
- the hyperplane will have origin at the mean point
- the first axis is picked along the direction of greatest variance, i.e. where points are the most spread out. Intuitively, if we pick an axis of small variation, that would be bad, because all the points are very close to one another on that axis, so it doesn't contain as much information that helps us differentiate the points.
- then we pick a second axis, orthogonal to the first one, and on the direction of second largest variance
- and so on until D orthogonal axes are taken
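A minimal NumPy sketch of that procedure, written here with the usual covariance matrix eigendecomposition formulation (the function name and the toy data are just illustrative):

```python
import numpy as np

def pca(points, n_components):
    # The hyperplane has its origin at the mean point.
    mean = points.mean(axis=0)
    centered = points - mean
    # Eigenvectors of the covariance matrix are orthogonal, and those with
    # the largest eigenvalues point along the directions of greatest
    # variance, which is exactly the axis selection described above.
    eigenvalues, eigenvectors = np.linalg.eigh(np.cov(centered, rowvar=False))
    order = np.argsort(eigenvalues)[::-1]
    axes = eigenvectors[:, order[:n_components]]
    # Coordinates of each point in the new axes.
    return centered @ axes, axes

# Toy usage: project 100 random 4D points down to 2D.
rng = np.random.default_rng(0)
projected, axes = pca(rng.standard_normal((100, 4)), 2)
print(projected.shape)  # (100, 2)
```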
www.sartorius.com/en/knowledge/science-snippets/what-is-principal-component-analysis-pca-and-how-it-is-used-507186 provides an OK-ish example with a concrete context. In there, each point is a country, and the input data is the consumption of different kinds of foods per year, e.g.:
- flour
- dry codfish
- olive oil
- sausage
so in this example, we would have input points in 4D.
Suppose that every country consumes the same amount of flour every year. Then, that number doesn't tell us much about which country each point represents (it has the least variance), and the first PCA axes would basically never point anywhere near that direction.
Another cool thing is that PCA seems to automatically account for linear dependencies in the data, so it skips selecting highly correlated axes multiple times. For example, suppose that dry codfish and olive oil consumption are very high in Portugal and Spain, but very low in Germany and Poland. Therefore, the variation is very high in those two parameters, and contains a lot of information.
However, suppose that dry codfish consumption is also directly proportional to olive oil consumption. Because of this, it would be kind of wasteful if we selected both of them as separate axes, since the information about codfish already tells us the olive oil. PCA apparently recognizes this, and instead picks the first axis at a 45 degree angle to both dry codfish and olive oil, and then moves on to something else for the second axis.
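Here is a small NumPy sketch of that intuition, with made-up numbers in which olive oil consumption is roughly equal to dry codfish consumption: the first principal axis comes out close to the 45 degree diagonal rather than aligned with either variable alone.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up "country" data: codfish consumption varies a lot, and olive oil
# consumption is roughly equal to it, i.e. the two columns are strongly
# linearly correlated.
codfish = rng.uniform(0.0, 10.0, size=50)
olive_oil = codfish + rng.normal(scale=0.1, size=50)
data = np.column_stack([codfish, olive_oil])

# First principal axis via the covariance matrix eigendecomposition.
centered = data - data.mean(axis=0)
eigenvalues, eigenvectors = np.linalg.eigh(np.cov(centered, rowvar=False))
first_axis = eigenvectors[:, np.argmax(eigenvalues)]
if first_axis[0] < 0:
    first_axis = -first_axis  # eigenvector sign is arbitrary, fix it for readability

# The axis is close to (1, 1) / sqrt(2), i.e. about 45 degrees between
# the codfish and olive oil directions.
print(first_axis)
print(np.degrees(np.arctan2(first_axis[1], first_axis[0])))
```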
This channel's approach of presenting recent research papers is a "honking good idea" that should be taken to other areas beyond just machine learning. It takes a very direct stab at the missing link between basic and advanced material!
OurBigBook job search round 2025
I shouldn't be doing this on funded OurBigBook time, which runs until the end of May, but I was getting too nervous and decided to start a casual job search to test the waters.
In particular I want to see if I can get past the HR lady step without toning down my online profiles. If nothing works out for the next round I'll be hiding anything too spicy like:
- prominently seeking funding for OurBigBook on my LinkedIn profile
- CIA 2010 covert communication websites references. This will be my first job hunt since I have published that article. Wish me luck.
- gay Putin profile picture on Stack Overflow
Another interesting point is to see if French companies are more likely to reply, given that Ciro Santilli studied at École Polytechnique, which the French worship.
Gay Putin, currently used in Ciro Santilli's Stack Overflow profile. Ciro's profiles may be a bit too much for the HR ladies who reject his job applications on the spot. To be fair, perhaps not enough years of experience for certain applications and job hopping may have something to do with it too. But since they never tell you anything, so as not to get sued, we'll never know.
I'm looking in particular for:
- machine learning-adjacent jobs in companies that seem to be doing something that could further AGI, e.g. automatic code generation or robotics would be ideal
- quantum computing
- systems programming, which is what I actually have work experience with
I spent the last two weeks doing that:
- one week browsing everything of interest in London and Paris and sending applications to anything that seemed both relevant and interesting. Maintaining an application list at: Section "Job application by Ciro Santilli".
- one week on a very laborious but somewhat interesting take-home exercise for a Linux kernel engineer position at Canonical, makers of Ubuntu. I had a week to finish 5 practical coding and packaging questions, and I tried to do everything as perfectly as possible, but I somewhat underestimated the amount of work and waiting needed to do everything and didn't manage to finish question 4 and missed 5. Oops, let's see how that goes. At least this had a few good outcomes for the Internet, as I tried to document things as nicely as I could where they were missing from Google as usual:
- I re-tested Linux Kernel Module Cheat and made some small improvements. Things still worked from an Ubuntu 24.10 host (using Docker to Ubuntu 22.04), and I also checked that kernel 6.8 builds and GDB step debugs after adding the newly required config CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT, also mentioned at: Why are there no debug symbols in my vmlinux when using gdb with /proc/kcore?
- I contributed some simple updates to github.com/martinezjavier/ldd3, getting it closer to working on Linux kernel v6.8. That repository aims to keep the venerable examples from the Linux kernel module book LDD3 alive on newer kernels, and is a very good source for kernel module developers.
- How to compile a Linux kernel module?: wrote a quick Ciro-approved tutorial
- Dynamic array in Linux kernel module: I gave an educational example of a dynamic byte array (like std::string) using the kvmalloc family of allocators
- quickemu: this is a good emulator manager and I think I'll be using it for Ubuntu images when needed from now on. I wrote:
- How to run Ubuntu desktop on QEMU?: an introductory tutorial to the software, as their README is not that good, as is often the case. It's hard for project authors to predict what new users do or don't want. This is my second answer to this question, the previous one focusing on a more manual approach without third party helpers.
- How to share folder between guest/host? (Quickemu): I explained how to setup a 9p mount to share a directory between guest and host
- Error :: You must put some 'source' URIs in your sources.list: updated this answer for Ubuntu 24.04. This issue comes up when you want to run either sudo apt build-dep or sudo apt source, which don't work by default, and my answer explains how to do it from the GUI and CLI. The CLI method is especially important for Docker images. Since Ubuntu doesn't offer a stable CLI method for this, the method breaks from time to time and we have to find the new config file to edit.
- What is hardware enablement (HWE)?: I learned a bit better how Ubuntu structures its kernel releases for each Ubuntu release
Some of the main issues I had were:
- compiling the Linux kernel for Ubuntu is extremely slow. I was used to compiling for embedded systems with Buildroot, which finishes in minutes, but for Ubuntu it takes hours, presumably because they enable as many drivers as possible to make a single ISO work on as many different computers as possible, which makes sense, but also makes development harder
- my QEMU setup for Ubuntu was not quite as streamlined, so I relearned a few things and set up quickemu. By chance I had recently come across quickemu for testing OurBigBook on macOS, but I still had to learn a bit about how to set it up reasonably