Long story short, the project is so far a complete failure on the most important metric: number of regular users, which currently sits at exactly one: myself.
There were notable users who found the project online, actually tried to use the website for some content, and provided extremely valuable feedback. Unfortunately, after a period of a few weeks they stopped using it to follow their other priorities instead. Which is of course totally fine, however sad.
I still believe that the OurBigBook Web feature is a significant tech innovation that could make the website go big.
I also believe that the project gets many fundamentals of braindumping right, notably the infinitely deep table of contents without forced scoping, e.g. a table of contents like:
- Mathematics
  - Calculus
does not make Calculus have an ID or URL of mathematics/calculus, rather it's just calculus.
But there is a fundamental difficulty in reaching critical mass to that self-sustaining point, as people don't seem to be convinced by this logical "my system is better" argument alone, as opposed to Googling into stuff they need now and then realizing that the project is awesome.
A closely related critical mass issue is that existing big multiuser knowledge base websites such as Stack Overflow and Wikipedia have a tremendous advantage on PageRank. No matter how useless a Wikipedia article about something is, it will always be on top of Google within a week of creation for title hits. And since the main goal of publishing your stuff is to get it seen, it makes much more sense for writers to publish on such existing websites whenever possible, because anywhere else it is way way less likely to be seen by anybody.
Even I end up writing way more on Stack Overflow than on OurBigBook as a programmer. But I still believe that there is value to OurBigBook, for the usual reasons of:
Perhaps what saddens me the most is that even in terms of GitHub stars/Twitter/Hacker News there is almost no interest in the project, despite the fact that I consider that it has innovations, while many other note taking apps are well into the thousands of stars. Maybe I'm just delusional and all the tech that I'm doing is completely useless?
Part of the issue is probably linked to the fact that most other note taking apps focus on "help me organize my ideas so I can make more money" and often completely ignore "I want to publish my knowledge", and stuff that helps you make money is always easier to sell and promote.
OurBigBook on the other hand has a huge focus on "I want to publish my knowledge". It aims almost single-mindedly at being the best tool ever for that. However this doesn't make money for people, and therefore there are going to be way fewer potential users.
I do strongly believe that all it takes is a few users for the project to snowball. For some people, once you start braindumping, it is very addictive, and you basically never want to stop. So with only a few of those we can open large parts of undergrad knowledge to the world. But these people are few, and so far I haven't been able to find even a single one like me, let alone convince them that I have created the ultimate system for their knowledge publishing desires.
Another general lesson is that I should perhaps have aimed for greater compatibility with existing systems such as Obsidian. Taking something that many people already know and use can have a huge impact on acceptance. E.g. anything that touches Obsidian can reach thousands of stars: github.com/KosmosisDire/obsidian-webpage-export. Note taking apps that aim for "Markdown" compatibility also tend to fare better, even if in the end you inevitably have to extend the Markdown for some of your features. And WYSIWYG, which I want but don't have, is perhaps the ultimate familiarity.
Another issue compared to other platforms is that OurBigBook just came out late. Obsidian launched in 2020. Roam Research and Trilium Notes also came earlier. And it is hard to fight the advantage already gained by those in the "I'm going to take some personal notes" area. I do believe however that there is a strong separation between "these are my personal notes" and "I want to publish these". Once you decide to publish your knowledge, you immediately start to write in a different way, and it is very hard to convert pre-existing "private" notes into ones suitable for public consumption.
Update: OurBigBook job search round 2025 (created 2025-05-07, updated 2025-05-29)
I shouldn't be doing this on funded OurBigBook time, which runs until the end of May, but I was getting too nervous and decided to start a casual job search to test the waters.
In particular I want to see if I can get past the HR lady step without toning down my online profiles. Another interesting point is to see if French companies are more likely to reply given that Ciro Santilli studied at École Polytechnique, which the French worship. If nothing works out, for the next round I'll be hiding anything too spicy like:
- prominently seeking funding for OurBigBook on my LinkedIn profile
- CIA 2010 covert communication websites references. This will be my first job hunt since publishing that article. Wish me luck.
- gay Putin profile picture on Stack Overflow
Gay Putin, currently used in Ciro Santilli's Stack Overflow profile
Ciro's profiles may be a bit too much for the HR ladies who reject his job applications on the spot. To be fair, perhaps not enough years of experience for certain applications and job hopping may have something to do with it too. But since they don't ever tell you anything, so as not to get sued, we'll never know. I'm looking in particular for either:
- machine learning-adjacent jobs in companies that seem to be doing something that could further AGI, e.g. automatic code generation or robotics would be ideal
- quantum computing
- systems programming, which is what I actually have work experience with
I spent the last two weeks doing that:
- one week browsing everything of interest in London and Paris and sending applications to anything that seemed both relevant and interesting. Maintaining an application list at: Section "Job application by Ciro Santilli".
- one week on a very laborious but somewhat interesting take home exercise for a Linux kernel engineer position at Canonical, makers of Ubuntu. I had a week to finish 5 practical coding and packaging questions, and I tried to do everything as perfectly as possible, but I somewhat underestimated the amount of work and waiting needed to do everything and didn't manage to finish question 4 and missed 5. Oops, let's see how that goes. At least this had a few good outcomes for the Internet as I tried to document things as nicely as I could where they were missing from Google, as usual:
- I re-tested Linux Kernel Module Cheat and made some small improvements. Things still worked from an Ubuntu 24.10 host (using Docker to Ubuntu 22.04), and I also checked that kernel 6.8 builds and GDB step debugs after adding the newly required config CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT, also mentioned at: Why are there no debug symbols in my vmlinux when using gdb with /proc/kcore?
- I contributed some simple updates to github.com/martinezjavier/ldd3 getting it closer to working on Linux kernel v6.8. That repository aims to keep the venerable examples from the Linux kernel module book LDD3 alive on newer kernels, and is a very good source for kernel module developers.
- How to compile a Linux kernel module?: wrote a quick Ciro-approved tutorial (a minimal module of that kind is sketched just after this list)
- Dynamic array in Linux kernel module: I gave an educational example of a dynamic byte array (like std::string) using the kvmalloc family of allocators (a rough sketch of the idea also follows after this list)
- quickemu: this is a good emulator manager and I think I'll be using it for Ubuntu images when needed from now on. I wrote:
- How to run Ubuntu desktop on QEMU?: an introductory tutorial to the software, as their README is not that good, as is often the case. It's hard for project authors to predict what new users want. This is my second answer to this question, the previous one focusing on a more manual approach without third party helpers.
- How to share folder between guest/host? (Quickemu): I explained how to set up a 9p mount to share a directory between guest and host
- Error :: You must put some 'source' URIs in your sources.list: updated this answer for Ubuntu 24.04. This issue comes up when you want to run either sudo apt build-dep or sudo apt source, which don't work by default, and my answer explains how to do it from both the GUI and the CLI. The CLI method is especially important for Docker images. Since Ubuntu doesn't offer a stable CLI method for this, the method breaks from time to time and we have to find the new config file to edit.
- What is hardware enablement (HWE)?: I learned a bit better how Ubuntu structures its kernel releases for each Ubuntu release
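For reference, here is a minimal out-of-tree module of the kind that tutorial walks through. This is only an illustrative sketch, not the exact code from the answer; the file name hello.c and the build command in the comment are just the usual conventions for building against the running kernel's headers:

```c
/* hello.c -- minimal loadable kernel module.
 * Build (assuming kernel headers are installed) with a Makefile containing
 * "obj-m += hello.o" and running:
 *   make -C /lib/modules/$(uname -r)/build M=$PWD modules
 * Then: sudo insmod hello.ko; dmesg | tail; sudo rmmod hello */
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal hello world module");

static int __init hello_init(void)
{
	pr_info("hello: init\n");
	return 0;
}

static void __exit hello_exit(void)
{
	pr_info("hello: exit\n");
}

module_init(hello_init);
module_exit(hello_exit);
```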
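And a rough sketch of the dynamic byte array idea, sticking to well-known kernel APIs (kvmalloc, kvfree, memcpy). The struct and function names (dynarray, dynarray_push) are made up for illustration and are not the exact code from the answer:

```c
#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/types.h>

/* Growable byte buffer, vaguely like a C++ std::string / std::vector<u8>. */
struct dynarray {
	u8 *data;
	size_t len;
	size_t cap;
};

/* Append one byte, doubling the capacity when full. kvmalloc falls back to
 * vmalloc for large sizes, so growth keeps working past kmalloc size limits. */
static int dynarray_push(struct dynarray *a, u8 byte)
{
	if (a->len == a->cap) {
		size_t new_cap = a->cap ? a->cap * 2 : 16;
		u8 *new_data = kvmalloc(new_cap, GFP_KERNEL);

		if (!new_data)
			return -ENOMEM;
		if (a->data) {
			memcpy(new_data, a->data, a->len);
			kvfree(a->data);
		}
		a->data = new_data;
		a->cap = new_cap;
	}
	a->data[a->len++] = byte;
	return 0;
}

static void dynarray_free(struct dynarray *a)
{
	kvfree(a->data);
	a->data = NULL;
	a->len = 0;
	a->cap = 0;
}
```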
Some of the main issues I had were:
- compiling the Linux kernel for Ubuntu is extremely slow. I was used to compiling for embedded systems with Buildroot, which finishes in minutes, but for Ubuntu it takes hours, presumably because they enable as many drivers as possible to make a single ISO work on as many different computers as possible, which makes sense, but also makes development harder
- my QEMU setup for Ubuntu was not quite as streamlined, so I relearned a few things and set up quickemu. By chance I had recently come across quickemu for testing OurBigBook on macOS, but I had to learn a bit how to set it up reasonably too
github.com/cirosantilli/cirosantilli.github.io/issues/198. Previously at: stackoverflow.com/questions/31321009/best-more-standard-graph-representation-file-format-graphson-gexf-graphml/79467334#79467334 but Stack Overflow fucking deleted the question.
My general motivation for this is that a PageRank-like algorithm could be useful for more accurate user and article ranking on OurBigBook, see: Section "PageRank-like ranking".
But it could also be just generally cool to apply it to other graph datasets, e.g. for computing a Wikipedia-internal PageRank.
Then I had a look at the Common Crawl web graph data to see if I could easily calculate it myself, and... they already have it! See: Section "Common Crawl web graph official PageRank"
Their graph dumps are in the BVGraph file format, the native format of the WebGraph framework, which implements both the format and algorithms such as PageRank.
The only thing I miss is a command line interface to calculate the PageRank. That would be so awesome.
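To make the wish concrete, this is roughly the power iteration such a CLI would have to run. It is only a sketch under simplifying assumptions: it reads a plain "src dst" edge list from stdin instead of the actual BVGraph format, keeps everything in memory, and skips error handling, so it would not scale to the full Common Crawl graph:

```c
/* Minimal PageRank power iteration over a whitespace-separated
 * "src dst" edge list of 0-based node IDs read from stdin.
 * Prints "node_id rank" per line. Error handling omitted for brevity. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t cap = 1024, m = 0;
    long *src = malloc(cap * sizeof *src);
    long *dst = malloc(cap * sizeof *dst);
    long a, b, n = 0;
    while (scanf("%ld %ld", &a, &b) == 2) {
        if (m == cap) {
            cap *= 2;
            src = realloc(src, cap * sizeof *src);
            dst = realloc(dst, cap * sizeof *dst);
        }
        src[m] = a;
        dst[m] = b;
        m++;
        if (a + 1 > n) n = a + 1;
        if (b + 1 > n) n = b + 1;
    }
    double *rank = malloc(n * sizeof *rank);
    double *next = malloc(n * sizeof *next);
    long *outdeg = calloc(n, sizeof *outdeg);
    for (size_t i = 0; i < m; i++) outdeg[src[i]]++;
    for (long i = 0; i < n; i++) rank[i] = 1.0 / n;
    const double d = 0.85; /* damping factor */
    for (int iter = 0; iter < 50; iter++) {
        /* Mass from dangling nodes (no out-links) is spread uniformly. */
        double dangling = 0.0;
        for (long i = 0; i < n; i++)
            if (outdeg[i] == 0) dangling += rank[i];
        for (long i = 0; i < n; i++)
            next[i] = (1.0 - d) / n + d * dangling / n;
        for (size_t i = 0; i < m; i++)
            next[dst[i]] += d * rank[src[i]] / outdeg[src[i]];
        double *tmp = rank; rank = next; next = tmp;
    }
    for (long i = 0; i < n; i++)
        printf("%ld %.6g\n", i, rank[i]);
    free(src); free(dst); free(rank); free(next); free(outdeg);
    return 0;
}
```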
Announcements:
In cc-main-2024-25-dec-jan-feb-domain-ranks.txt:
- cirosantilli.com was ranked ~453k
- ourbigbook.com was at ~606k
Ciro Santilli does the same via Google, Twitter and Reddit searches for himself; you can't invent anything new nowadays:
Kibo was known for his high-volume but thoughtful posts, but achieved Usenet celebrity circa 1991 by writing a small script to grep his entire Usenet feed for instances of his name, and then answering personally whenever and wherever he was mentioned, giving the illusion that he was personally reading the entire feed.
The original forum thread bitcointalk.org/index.php?topic=137.msg1195 suggests multiple purchases were made, until he had to withdraw the offer. Perhaps an easier question is how many pizzas he got in the first place.
www.reddit.com/r/Bitcoin/comments/13on6px/comment/jl55025/?utm_source=reddit&utm_medium=web2x&context=3 mentions without source:
I know. Laszlo Hanyecz estimates that he spent 100,000 BTC on pizza in 2010. Laszlo is the man that invented GPU mining and he mined well over 100,000 BTC.
One source is: bitcoinmagazine.com/culture/the-man-behind-bitcoin-pizza-day-is-more-than-a-meme-hes-a-mining-pioneer
Related thread from May 2023: bitcointalk.org/index.php?topic=5453728.msg62286606#msg62286606 "Did Laszlo Hanyecz exchange 40000 BTC for 8 pizzas, not 10000 BTC for 2 pizzas?" but their Googling is so bad no one had found the 100,000 quote before Ciro.
As per bitcoin.stackexchange.com/questions/113831/searching-the-blockchain-based-on-transaction-amount-and-or-date, at blockchair.com/bitcoin/outputs?s=time(asc)&q=value(1000000000000),time(2010-05-18..2010-08-05) we can list all the transactions made between the offer and withdrawal dates for a value of exactly 10k BTC. There are only about 20 of them, including some on the 22nd of May, so it is extremely likely that this list contains the hits. No repeated recipients however, so it is hard to progress with more advanced analytics tools.
Some of the transactions are:
- d1a429c05868f9be6cf312498b77f4e81c2d4db3268b007b6b80716fb56a35ad (29 May) is a common looking transaction with a single input from 1Bc7T7ygkKKvcburmEg14hJKBrLD7BXCkX and two outputs, one likely being the change to 1GH4dRUAagj67XVjr4TV6J9RFNmGYsLe7c and the other the actual value to 138eoqfNcEdeU9EG9CKfAxnYYz62uHRNrA.
- 49d2adb6e476fa46d8357babf78b1b501fd39e177ac7833124b3f67b17c40c2a (22 May 2010 06:17:59 GMT+1). This one has some Google mentions. It is a highly unusual transaction from a single address 17WFx2GQZUmh6Up2NDNCEDk3deYomdNCfk to a single address 1CZDM6oTttND6WPdt3D6bydo7DYKzd9Qik for the exact value with no change. By digging a bit, we see that the input comes from exactly 20 outputs, e.g. 1E43t1VCc3Q3STKauEiUoVqLbT81XT67xj, each of which is a block reward of 50 BTC, the reward value at those early times, thus satisfactorily explaining how the exact 10k value was obtained without change. Because we know that Laszlo was a big GPU miner, it is extremely likely that this transaction was made by him.
- a1075db55d416d3ca199f55b6084e2115b9345e16c5cf302fc80e9d5fbf5d48d (22 May 2010 07:16:31 GMT+1) also has several Google mentions, e.g. www.blockchain.com/explorer/transactions/btc/a1075db55d416d3ca199f55b6084e2115b9345e16c5cf302fc80e9d5fbf5d48d even specially marks it "Bitcoin Pizza" and "Notable". Furthermore, the receiving address 17SkEw2md5avVNyYgj6RiXuQKNwkXaxFyQ is even marked as verified and as belonging to Jeremy Sturdivant. This also shows us how Jeremy then transferred about half of the bitcoins 10 minutes later, but we can't know if it was to his own accounts or to cash out. The nature of this transaction is very different from the previous one. It uses a bunch of inputs to a single address 1XPTgDRhN8RFnzniWCddobD9iKZatrvH4, which contains a mixture of regular small inputs, but also a bunch of block rewards, e.g. www.blockchain.com/explorer/addresses/btc/1MUoh2nJudSDdKu9NkcevaCG1Qe3nZHWFZ, thus also clearly indicating Laszlo's ownership.
The input chain is complex, but it does contain one block reward on the third level, 17PBFeDzks3LzBTyt6bAMATNhowrvx5kBw, plus 79 rewards on the fourth level at 045795627ca29ec72a94c23a65ee775ea1949d60b6fba0938b75e1cfe1e6643e.
- d3498960e5f73031f726cb878382cc696938810fa43f918696cbf242afc9765e (04 June): complex chain, unclear
- 2ea2914c131b2798041a80c00c44081a3559233d69d8b367e4244e6b12096610 (10 June): single input/single output. Complex input, but has some 2nd order mines e.g. e6393f613ef12f5708fa511875b8ff5080f6c8864709f8d92bd99435826a9d0d
- ea595789878b673776d0577cbc6063db611bb4e2954e226459d556995f547922 (24 June): single input/single output. Complex input, but has some 2nd order mines e.g. b9a0c2d24a744b79fe001a67468c456746b74e94a6ce68a2e5f80bf645d678b9
- 461f91a98bbe2f269d8af938039e185287761677f0418fcc8238c5f3dca72935 (02 Jul 2010 08:39:17 GMT+1): single 20k input to two 10k outputs. Did he get 2x two pizzas at once? Complex input.
- a47f927ca1adeeb4394200e8a37a9297b07e784a251569074a9fc2c04855560f (02 Jul 2010 09:07:35 GMT+1): too close in time to the previous one, unless he was having a massive pizza party with invitees!
- 77036fa2ac75212be1ce93e8e1008d5cb2bcbb51aa560a5fe29c9c1423bbd00e (02 Jul 2010 09:14:33 GMT+1): the party grows even larger
Intel is known to have created customized chips for very large clients.
This is mentioned e.g. at: www.theregister.com/2021/03/23/google_to_build_server_socs/. Those chips are then used only in large scale server deployments of those very large clients. Google is most likely one of them, given their penchant for custom hardware.
Intel is known to do custom-ish cuts of Xeons for big customers.
TODO better sources.
Ciro Santilli occasionally publishes videos of these not-so-common visual programming experiments on his YouTube channel: www.youtube.com/c/CiroSantilli. Ciro should however not be lazy and also upload each video produced to Wikimedia Commons, since YouTube does not offer a download option even for videos marked with a Creative Commons license: www.quora.com/Can-I-download-Creative-Commons-licensed-YouTube-videos-to-edit-them-and-use-them/answer/Tarmo-Toikkanen
This is also where Ciro's downtime converged to in his early 30s, since he had long lost patience for stupid video games and television series.
Ciro developed one interesting technique: while scrolling through YouTube's useless recommendations, once he understands what a channel is about, he either immediately subscribes to it or blocks it, and no matter how much you say you don't want to hear about a channel, YouTube just keeps on sending more.
This helps to keep his feed clean of boring stuff he already knows about. Unfortunately however, there is an infinite amount of useless videos out there on topics such as:
- sports
- music, mostly idiotic top of the charts
- news and political commentary
- food
- programming tutorials. Meh, got Stack Overflow.
- stuff that is not in English, and notably languages that Ciro does not even speak!
- motorcycles
- ASMR
- cute animals
- gaming and movie commentary. Ciro is interested only in a few very specific video games
- nature life, e.g. hiking, cycling, or living in isolation; this one Ciro does enjoy
- science for kids (popular science)
Things Ciro hates about YouTube:
FFmpeg is likely the backend of YouTube.
Bought by Google in 2006.