Each transaction isolation level specifies what can or cannot happen when two queries are run in parallel, i.e. the memory semantics of the system.
Remember that queries can affect thousands of rows, and database systems like PostgreSQL can run multiple such queries at the same time.
This example is similar to nodejs/sequelize/raw/parallel_update_async.js, but now we are doing a separate SELECT, later followed by an update:
  • SELECT FROM to get i
  • compute newI = i + 1 in JavaScript code
  • UPDATE SET the newI
Although this specific example is useless in itself, as we could just use UPDATE "MyInt" SET i = i + 1 as in nodejs/sequelize/raw/parallel_update_async.js, which automatically solves any concurrency issue, this kind of code could be required for example if the update were a complex function that cannot be suitably implemented in SQL, or if the update depended on some external data source.
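In SQL terms, each thread of this example therefore runs something like the following inside a transaction (a minimal sketch, assuming the single-row "MyInt" table with integer column i from the UPDATE above; $1 stands for a query parameter passed from Node.js):
BEGIN;
SELECT i FROM "MyInt";
-- newI = i + 1 is computed in JavaScript between the two queries
UPDATE "MyInt" SET i = $1; -- $1 is newI
COMMIT;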
Sample execution:
node --unhandled-rejections=strict ./parallel_select_and_update.js p 2 10 'READ COMMITTED'
Another one:
node --unhandled-rejections=strict ./parallel_select_and_update.js p 2 10 'READ COMMITTED' 'FOR UPDATE'
This one will run SELECT ... FOR UPDATE rather than just a plain SELECT.
Observed behaviour under different SQL transaction isolation levels:
  • READ COMMITTED: fails. Nothing in this case prevents:
    • thread 1: SELECT, obtains i = 0
    • thread 2: SELECT, obtains i = 0
    • thread 2: newI = 1
    • thread 2: UPDATE i = 1
    • thread 1: newI = 1
    • thread 1: UPDATE i = 1
  • REPEATABLE READ: works. The manual mentions that if multiple concurrent updates would happen, only the first commit succeeds, and the following ones fail, roll back and retry, therefore preventing the loss of an update.
  • READ COMMITTED + SELECT FOR UPDATE: works. And does not do rollbacks, which probably makes it faster. With p 10 100, REPEATABLE READ was about 4.2s and READ COMMITTED + SELECT FOR UPDATE 3.2s on Lenovo ThinkPad P51 (2017).
    SELECT FOR UPDATE should be enough as mentioned at: www.postgresql.org/docs/13/explicit-locking.html#LOCKING-ROWS
    FOR UPDATE causes the rows retrieved by the SELECT statement to be locked as though for update. This prevents them from being locked, modified or deleted by other transactions until the current transaction ends. That is, other transactions that attempt UPDATE, DELETE, SELECT FOR UPDATE, SELECT FOR NO KEY UPDATE, SELECT FOR SHARE or SELECT FOR KEY SHARE of these rows will be blocked until the current transaction ends; conversely, SELECT FOR UPDATE will wait for a concurrent transaction that has run any of those commands on the same row, and will then lock and return the updated row (or no row, if the row was deleted). Within a REPEATABLE READ or SERIALIZABLE transaction, however, an error will be thrown if a row to be locked has changed since the transaction started. For further discussion see Section 13.4.
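For comparison, a sketch of the SELECT ... FOR UPDATE variant of the same transaction (same assumed "MyInt" table as above); the only difference is the locking clause on the first query:
BEGIN;
SELECT i FROM "MyInt" FOR UPDATE; -- blocks here if another transaction holds the row lock
-- newI = i + 1 is computed in JavaScript
UPDATE "MyInt" SET i = $1; -- $1 is newI
COMMIT;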
A non-raw version of this example can be seen at: nodejs/sequelize/parallel_select_and_update.js.
This program did not have certain dynamic linking related sections because we linked it minimally with ld.
However, if you compile a C hello world with GCC 8.2:
gcc -o main.out main.c
some other interesting sections would appear.
In this example, posts have tags. When a post is deleted, we want to delete any tags that became empty as a result of the post deletion.
If we are creating and deleting posts concurrently, a naive implementation might wrongly delete the tags of a newly created post.
This could be due to concurrency issues of the following types.
Failure case 1:
  • thread 2: delete old post
  • thread 2: find all tags with 0 posts. Finds tag0 from the deleted old post which is now empty.
  • thread 1: create new post, which we want to have tag tag0
  • thread 1: try to create a new tag tag0, but don't, because it already exists. This is done with SQLite's INSERT OR IGNORE INTO or PostgreSQL's INSERT ... ON CONFLICT DO NOTHING
  • thread 1: assign tag0 to the new post by adding an entry to the join table
  • thread 2: delete all tags with 0 posts. It still sees from its previous search that tag0 is empty, and deletes it, which then cascades into the join table
which would result in the new post incorrectly not having tag0.
Failure case 2:
  • thread 2: delete old post
  • thread 2: find all tags with 0 posts
  • thread 1: create new post
  • thread 1: try to create a new tag tag0, but don't because it already exists
  • thread 2: delete all tags with 0 posts. It still sees from its previous search that tag0 is empty, and deletes it
  • thread 1: assign tag0 to the new post
which leads to a foreign key failure, because the tag does not exist anymore when the assignment happens.
Failure case 3:
  • thread 2: delete old post
  • thread 1: create new post, which we want to have tag tag0
  • thread 1: try to create a new tag tag0, and succeed because it wasn't present
  • thread 2: find all tags with 0 posts, finds the tag that was just created
  • thread 2: delete all tags with 0 posts, deleting the new tag
  • thread 1: assign tag0 to the new post
which leads to a foreign key failure, because the tag does not exist anymore when the assignment happens.
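For concreteness, the cleanup step that all the failure cases above revolve around ("delete all tags with 0 posts") can be sketched as a single DELETE with a subquery. Only "Post", "PostTag" and its "tagId" column are named explicitly in this section; the "Tag" table and its "id" column are assumptions:
DELETE FROM "Tag"
WHERE "id" NOT IN (
  SELECT "tagId" FROM "PostTag"
);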
Sample executions:
  • node --unhandled-rejections=strict ./parallel_create_delete_empty_tag.js p 9 1000 'READ COMMITTED': PostgreSQL, 9 tags, DELETE/CREATE the tag0 test tag 1000 times, use READ COMMITTED
    Execution often fails, although not always. The failure is always:
    error: insert or update on table "PostTag" violates foreign key constraint "PostTag_tagId_fkey"
    because the:
    INSERT INTO "PostTag"
    tries to reference a tag that was deleted by the other thread because that tag no longer had any corresponding posts, hence the foreign key failure.
    TODO: we've never managed to observe the failure case in which tag0 is deleted. Is it truly possible? And if not, by which guarantee?
  • node --unhandled-rejections=strict ./parallel_create_delete_empty_tag.js p 9 1000 'READ COMMITTED' 'FOR UPDATE': do a SELECT ... FOR UPDATE before trying to INSERT.
    This is likely correct and the fastest correct method according to our quick benchmarking, about 20% faster than REPEATABLE READ.
    We are just not 100% sure it is correct, because we can't determine whether the SELECT in the DELETE subquery could first select some rows, which are then locked by the tag creator, and only then be locked by DELETE after the selection. Or does it re-evaluate the SELECT even though it is in a subquery? See the sketch below for what this approach looks like in SQL.
  • node --unhandled-rejections=strict ./parallel_create_delete_empty_tag.js p 9 1000 'REPEATABLE READ': repeatable read
    We've never observed any failures with this level. This should likely fix the foreign key issue according to the PostgreSQL docs, since:
    • the DELETE "Post" commit cannot start to be seen only in the middle of the thread 1 transaction
    • and then if the DELETE happened, the thread 1 transaction will detect it, ROLLBACK, and re-run. TODO: how does it detect the need to roll back? Is it because of the foreign key? It is very hard to be sure about this kind of thing, we just can't find the information. Related: PostgreSQL serialization failure.
  • node --unhandled-rejections=strict ./parallel_create_delete_empty_tag.js p 9 1000 'SERIALIZABLE': serializable
  • node --unhandled-rejections=strict ./parallel_create_delete_empty_tag.js p 9 1000 'NONE': magic value, don't use any transaction. Can blow up of course, since it has even fewer restrictions than READ COMMITTED
All executions use 2 threads.
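A rough sketch of what the tag creation side looks like under the FOR UPDATE option, i.e. a SELECT ... FOR UPDATE before the INSERT. The "Tag" table, its "name" column and the "postId" column of "PostTag" are assumptions; $1 stands for the id of the newly created post:
BEGIN;
-- lock the tag row if it already exists, so that a concurrent empty-tag cleanup
-- cannot delete it from under us before the PostTag row is inserted
SELECT "id" FROM "Tag" WHERE "name" = 'tag0' FOR UPDATE;
INSERT INTO "Tag" ("name") VALUES ('tag0') ON CONFLICT DO NOTHING;
INSERT INTO "PostTag" ("postId", "tagId")
  SELECT $1, "id" FROM "Tag" WHERE "name" = 'tag0';
COMMIT;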
Some theoretical notes:
  • Failure case 3 is averted by a READ COMMITTED transaction, because thread 2 won't see the uncommitted tag that thread 1 created, and therefore won't be able to delete it
stackoverflow.com/questions/10935850/when-to-use-select-for-update also talks about a similar example, and has relevant answers.
SQL window RANGE
rm -f tmp.sqlite
sqlite3 tmp.sqlite "create table t (id integer, val integer)"
sqlite3 tmp.sqlite <<EOF
insert into t values
  (0, 0),
  (1, 5),
  (2, 10),
  (3, 14),
  (4, 15),
  (5, 16),
  (6, 20),
  (7, 25),
  (8, 29),
  (9, 30),
  (10, 30),
  (11, 31),
  (12, 35),
  (13, 40)
EOF
Show how many neighbours each row has, with val between val - 2 and val + 2 inclusive:
sqlite3 tmp.sqlite <<EOF
SELECT id, val, COUNT(*) OVER (
  ORDER BY val RANGE BETWEEN 2 PRECEDING AND 2 FOLLOWING
) FROM t;
EOF
Output:
0|0|1
1|5|1
2|10|1
3|14|3
4|15|3
5|16|3
6|20|1
7|25|1
8|29|4
9|30|4
10|30|4
11|31|4
12|35|1
13|40|1
The same but with val - 1 and val + 1 inclusive instead:
sqlite3 tmp.sqlite <<EOF
SELECT id, val, COUNT(*) OVER (
  ORDER BY val RANGE BETWEEN 1 PRECEDING AND 1 FOLLOWING
) FROM t;
EOF
Output:
0|0|1
1|5|1
2|10|1
3|14|2
4|15|3
5|16|2
6|20|1
7|25|1
8|29|3
9|30|4
10|30|4
11|31|3
12|35|1
13|40|1
There seems to be no analogue of HAVING for window functions, so we just settle for a subquery, e.g.:
sqlite3 tmp.sqlite <<EOF
SELECT * FROM (
  SELECT id, val, COUNT(*) OVER (
    ORDER BY val RANGE BETWEEN 1 PRECEDING AND 1 FOLLOWING
  ) as c FROM t
) WHERE c > 2
EOF
which outputs:
4|15|3
8|29|3
9|30|4
10|30|4
11|31|3
SQL contiguous ranges
stackoverflow.com/questions/17046204/how-to-find-the-boundaries-of-groups-of-contiguous-sequential-numbers/17046749#17046749 just works, even in SQLite which supports all quoting types known to man including [] for compatibility with insane RDBMSs!
Here's a slightly saner version. The trick is that within a run of contiguous numbers, ROW_NUMBER() OVER (ORDER BY number) - number is constant, so it can be used as a GROUP BY key to identify each run:
rm -f tmp.sqlite
sqlite3 tmp.sqlite "create table mytable (id integer primary key autoincrement, number integer, status integer)"
sqlite3 tmp.sqlite <<EOF
insert into mytable(number, status) values
  (100,0),
  (101,0),
  (102,0),
  (103,0),
  (104,1),
  (105,1),
  (106,0),
  (107,0),
  (1014,0),
  (1015,0),
  (1016,1),
  (1017,0)
EOF
sqlite3 tmp.sqlite <<EOF
SELECT
  MIN(id) AS "id",
  MIN(number) AS "from",
  MAX(number) AS "to"
FROM (
  SELECT ROW_NUMBER() OVER (ORDER BY number) - number AS grp, id, number
  FROM mytable
  WHERE status = 0
)
GROUP BY grp
ORDER BY MIN(number)
EOF
Output:
1|100|103
7|106|107
9|1014|1015
12|1017|1017
To get only groups of length greater than 1:
sqlite3 tmp.sqlite <<EOF
SELECT "id", "from", "to", "to" - "from" + 1 as "len" FROM (
  SELECT
    MIN("id") AS "id",
    MIN(number) AS "from",
    MAX(number) AS "to"
  FROM (
    SELECT ROW_NUMBER() OVER (ORDER BY "number") - "number" AS "grp", "id", "number"
    FROM "mytable"
    WHERE "status" = 0
  )
  GROUP BY "grp"
  ORDER BY MIN("number")
) WHERE "len" > 1
EOF
Output:
1|100|103|4
7|106|107|2
9|1014|1015|2
Atomicity means that if you do e.g. an UPDATE query on multiple rows and power goes out halfway through, then either all rows are updated or none are.
This is different from isolation, which instead considers what can or cannot happen when multiple queries are run in parallel.
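A minimal illustration, reusing the table t from the SQL window RANGE example above:
-- implicit transaction: either every row of t gets val + 1, or none do
UPDATE t SET val = val + 1;
-- explicit transaction: the two updates become visible and durable together at COMMIT
BEGIN;
UPDATE t SET val = val + 1 WHERE id = 0;
UPDATE t SET val = val + 1 WHERE id = 1;
COMMIT;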
Ranger (file manager)
Ciro Santilli considered it before he stopped using file managers altogether; it is not bad.
Cocos2d
Ciro Santilli considered this as the basis for Ciro's 2D reinforcement learning games, but ultimately decided it was a bit too messy. Nice overall though.
RPG Maker
The one true game engine!
Video 1. Reviewing a Bunch of RPG Maker Games by Majuular (2022). Source.
Originally by Keyhole Inc., which then became Google Maps, but the format seems standardized and has non-Google support, so it should be OK.
Chomp is a two-player mathematical strategy game played on a rectangular grid of squares representing a chocolate bar. The game mechanics are straightforward: players take turns selecting a square, and when a player picks a square, that square and all squares to the right and below it are "chomped", i.e. removed from the game.

Pinned article: Introduction to the OurBigBook Project

Welcome to the OurBigBook Project! Our goal is to create the perfect publishing platform for STEM subjects, and get university-level students to write the best free STEM tutorials ever.
Everyone is welcome to create an account and play with the site: ourbigbook.com/go/register. We believe that students themselves can write amazing tutorials, but teachers are welcome too. You can write about anything you want, it doesn't have to be STEM or even educational. Silly test content is very welcome and you won't be penalized in any way. Just keep it legal!
We have two killer features:
  1. topics: topics group articles by different users with the same title, e.g. here is the topic for the "Fundamental Theorem of Calculus" ourbigbook.com/go/topic/fundamental-theorem-of-calculus
    Articles of different users are sorted by upvote within each article page. This feature is a bit like:
    • a Wikipedia where each user can have their own version of each article
    • a Q&A website like Stack Overflow, where multiple people can give their views on a given topic, and the best ones are sorted by upvote. Except you don't need to wait for someone to ask first, and any topic goes, no matter how narrow or broad
    This feature makes it possible for readers to find better explanations of any topic created by other writers. And it allows writers to create an explanation in a place that readers might actually find it.
    Figure 1. Screenshot of the "Derivative" topic page. View it live at: ourbigbook.com/go/topic/derivative
  2. local editing: you can store all your personal knowledge base content locally in a plaintext markup format that can be edited locally and published either to OurBigBook.com or as a static website.
    This way you can be sure that even if OurBigBook.com were to go down one day (which we have no plans to do as it is quite cheap to host!), your content will still be perfectly readable as a static site.
    Figure 5. You can also edit articles in the Web editor without installing anything locally.
    Video 3. Edit locally and publish demo. Source. This shows editing OurBigBook Markup and publishing it using the Visual Studio Code extension.
  3. Infinitely deep tables of contents:
    Figure 6. Dynamic article tree with infinitely deep table of contents.
    Descendant pages can also show up as toplevel e.g.: ourbigbook.com/cirosantilli/chordate-subclade
All our software is open source and hosted at: github.com/ourbigbook/ourbigbook
Further documentation can be found at: docs.ourbigbook.com
Feel free to reach out to us for any help or suggestions: docs.ourbigbook.com/#contact