The Final Encyclopedia (Paul Allen) by Ciro Santilli 35 Updated +Created
The Google Story Chapter 21. A Virtual Library mentions that Paul Allen was interested in trying to create something like the "Final Encyclopedia" from this book. This is broadly the same motivation behind Google Books and Google's activities more generally, as shown in their "organise the world's information" mission statement.
Very good jazz album by Ciro Santilli 35 Updated +Created
Tileset by Ciro Santilli 35 Updated +Created
Limited series by Ciro Santilli 35 Updated +Created
If you are going to make a television series, do make it a limited one. Plan one story, and execute it amazingly. Don't let things drag on and on.
Telegraph by Ciro Santilli 35 Updated +Created
Voice over IP by Ciro Santilli 35 Updated +Created
Class (biology) by Ciro Santilli 35 Updated +Created
Immune system by Ciro Santilli 35 Updated +Created
A cool thought: bacteria like E. coli replicate every 20 minutes. A human replicates every 15 years. So how can multicellular beings possibly cope with the speed of evolution of parasites?
The answer is that within us, the adaptive immune system is a population of cells that evolves very quickly. So in a sense, within our bodies there is fast cell-level non-inheritable evolution happening daily!
RimWorld by Ciro Santilli 35 Updated +Created
Superconducting temperature by Ciro Santilli 35 Updated +Created
Josephson effect by Ciro Santilli 35 Updated +Created
Discrete quantum effect observed in superconductors with a small insulating layer, a device known as a Josephson junction.
To understand the effect, it is important to look at the Josephson equations and to consider the following Josephson effect regimes separately:
A good summary from Wikipedia by physicist Andrew Whitaker:
at a junction of two superconductors, a current will flow even if there is no drop in voltage; that when there is a voltage drop, the current should oscillate at a frequency related to the drop in voltage; and that there is a dependence on any magnetic field
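For quick reference, the two Josephson equations can be written as follows, using the common convention where \varphi is the phase difference between the superconducting order parameters across the junction, I_c is the critical current of the junction, V is the voltage across it, e is the elementary charge and \hbar is the reduced Planck constant:
$$ I(t) = I_c \sin\big(\varphi(t)\big) $$
$$ \frac{d\varphi}{dt} = \frac{2 e V(t)}{\hbar} $$
The first equation gives the zero-voltage supercurrent mentioned in the quote above; the second gives the oscillation frequency proportional to the voltage drop.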
Bibliography:
Tower defense by Ciro Santilli 35 Updated +Created
Ciro Santilli really likes this genre.
Star system by Ciro Santilli 35 Updated +Created
Standard Model Lagrangian by Ciro Santilli 35 Updated +Created
Combination of other sub-Lagrangians for each of the forces, e.g.:
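Schematically, and glossing over the exact grouping of terms, which varies between presentations, this can be sketched as:
$$ \mathcal{L}_{\text{SM}} = \mathcal{L}_{\text{gauge}} + \mathcal{L}_{\text{fermion}} + \mathcal{L}_{\text{Higgs}} + \mathcal{L}_{\text{Yukawa}} $$
where the gauge term contains the Yang-Mills field strength terms of SU(3) x SU(2) x U(1), the fermion term the covariant kinetic terms of quarks and leptons, the Higgs term the Higgs kinetic term and potential, and the Yukawa term the couplings that give fermions their masses after electroweak symmetry breaking.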
Bad Stack Overflow policies by Ciro Santilli 35 Updated +Created
Article size and count limits by Ciro Santilli 35 Updated +Created
Limited the number of articles and the size of article bodies. This, together with the reCAPTCHA setup from Email verification and reCAPTCHA signup protection, should prevent the most basic types of denial-of-service attacks that fill up our database.
The limits can be increased by admin users from the web UI, and this will be done generously whenever it is evident that it is not a DoS attack. Admin users are also a recently added feature.
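As a very rough sketch of what such checks look like (illustrative only, not the actual ourbigbook.com implementation; the field and function names below are hypothetical):
// Illustrative sketch only: reject article creation when a hypothetical
// per-user article count or body size limit would be exceeded.
async function checkArticleLimits(user, articleBody, db) {
  const count = await db.countArticlesByUser(user.id) // hypothetical query helper
  if (count >= user.maxArticles) {
    throw new Error(`article count limit reached: ${user.maxArticles}, ask an admin to increase it`)
  }
  if (Buffer.byteLength(articleBody, 'utf8') > user.maxArticleSize) {
    throw new Error(`article body too large: limit is ${user.maxArticleSize} bytes`)
  }
}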
cirosantilli.com content uploaded to ourbigbook.com/cirosantilli by Ciro Santilli 35 Updated +Created
Managed to upload the content from the static website cirosantilli.com (OurBigBook Markup source at github.com/cirosantilli/cirosantilli.github.io) to ourbigbook.com/cirosantilli.
Although most of the key requirements were already in place since the last update, as usual, working with the complex reference content stresses the system further and exposed several new bugs.
The upload of OurBigBook Markup files to ourbigbook.com was done with the newly added OurBigBook CLI ourbigbook --web option. Although it is fully exposed to end users, the setup is not super efficient: a truly decent implementation would only upload changed files, which would basically mean reimplementing or using Git, since version diffing is what Git shines at. But I've decided not to put much emphasis on CLI upload for now, since it is expected that initially the majority of users will use the Web UI only. The functionality was added primarily to upload the reference content.
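As a very rough sketch of what "only upload changed files" could look like without reusing Git (purely illustrative: this is not what the CLI currently does, and the stored path-to-hash map is hypothetical):
const crypto = require('crypto')
const fs = require('fs')

// Illustrative sketch: select only the files whose content hash changed since
// the last successful upload, given a previously stored path -> hash map.
function filesToUpload(paths, previousHashes) {
  return paths.filter(path => {
    const hash = crypto.createHash('sha256').update(fs.readFileSync(path)).digest('hex')
    return previousHashes[path] !== hash
  })
}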
This is a major milestone, as the new content can start attracting new users, and makes the purpose of the website much clearer. Just having this more realistic content also immediately highlighted what the next development steps need to be.
Once v1.0 is reached, I will actually make all internal links of cirosantilli.com point to ourbigbook.com/cirosantilli to try and drive some more traffic.
The new content blows well past the limit of the free Heroku PostgreSQL database of 10k rows. This meant that I needed to upgrade the Heroku Postgres plugin from the free Hobby Dev to the 9 USD/month Hobby Basic: elements.heroku.com/addons/heroku-postgresql, so hosting costs will increase from 7 USD/month for the dyno to 7 + 9 = 16 USD/month. After this upgrade and uploading all of cirosantilli.com to ourbigbook.com, the Heroku dashboard reads:
  • 30,918 rows out of 10,000,000
  • 61.0 MB (out of 10 GB)
so clearly if we are ever forced to upgrade plans again, it will mean that a bunch of people are using the website and that things are going very, very well! I'm happy with how this storage cost turned out so far.
One key limitation found was that Heroku RAM is quite limited at 512 MB, and JavaScript is not exactly the most memory-economical language out there. Investigation started at github.com/ourbigbook/ourbigbook/issues/230. I initially worked around it by simply splitting the largest files. Luckily we were just on the verge of what could be run, so a few dozen splits were enough: the system managed to handle 70 kB OurBigBook Markup inputs. So hopefully, if we manage to optimize a bit more, we will be able to set a maximum size of 100 kB and still have a good safety margin.
Skip ID extraction and rendering based on database timestamps by Ciro Santilli 35 Updated +Created
Now that we can reliably split files at will with \Include, I finally added this feature.
This means that while developing a website locally with the OurBigBook CLI, if you have a bunch of files and one of them contains an error, your first run will be slow, converting every file up to the one with the error:
extract_ids README.ciro
extract_ids README.ciro finished in 73.82836899906397 ms
extract_ids art.ciro
extract_ids art.ciro finished in 671.1738419979811 ms
extract_ids ciro-santilli.ciro
extract_ids ciro-santilli.ciro finished in 1009.6256089992821 ms
extract_ids science.ciro
error: science.ciro:13686:1: named argument "parent" given multiple times
extract_ids science.ciro finished in 1649.6193730011582 ms
but further runs will blast through the files that worked, skipping all files that were successfully converted:
extract_ids README.ciro
extract_ids README.ciro skipped by timestamp
extract_ids art.ciro
extract_ids art.ciro skipped by timestamp
extract_ids ciro-santilli.ciro
extract_ids ciro-santilli.ciro skipped by timestamp
extract_ids science.ciro
so you can fix file by file and move on quickly.
This was not fully trivial to implement because we had to rework how duplicate IDs are checked. Previously, we just nuked the DB on every directory conversion and then repopulated everything, so if a duplicated ID showed up while processing a file, it was a genuine duplicate.
But now that we are not necessarily extracting IDs from every file, we can't just nuke the database anymore, otherwise we'd lose that information. Therefore, what we have to do is convert every file, and only check for duplicates at the end.
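As a rough sketch of the timestamp check itself (illustrative only; the actual logic lives in the OurBigBook CLI, and the database helper below is hypothetical):
const fs = require('fs')

// Illustrative sketch: a file needs ID extraction again only if it was
// modified after the last successful extraction recorded in the database.
async function needsExtraction(path, db) {
  const mtimeMs = fs.statSync(path).mtimeMs
  const last = await db.getLastSuccessfulExtraction(path) // hypothetical query helper
  return !last || mtimeMs > last.timestampMs
}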
