When using the SQL REPEATABLE READ and SERIALIZABLE isolation levels, concurrent transactions may fail with a serialization failure, and then you might need to retry them. Your server code or your ORM must always account for that.
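As a rough sketch of what such retry handling could look like with Sequelize on PostgreSQL (the helper name, retry count and error-code check are assumptions for illustration, not part of the example files mentioned below):

```js
// Hypothetical retry helper, not part of the example files: re-runs a
// transaction when PostgreSQL reports a serialization failure (SQLSTATE 40001).
const { Transaction } = require('sequelize');

async function withSerializationRetry(sequelize, work, maxRetries = 5) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await sequelize.transaction(
        { isolationLevel: Transaction.ISOLATION_LEVELS.REPEATABLE_READ },
        work
      );
    } catch (err) {
      // Sequelize wraps driver errors; with the pg driver the underlying error
      // exposes the SQLSTATE as `code`. '40001' = serialization_failure.
      const code = err.original && err.original.code;
      if (code === '40001' && attempt < maxRetries) {
        continue; // run the whole transaction again from scratch
      }
      throw err;
    }
  }
}
```

Usage would then look something like `withSerializationRetry(sequelize, async (t) => { /* queries passing { transaction: t } */ })`, so the entire body is re-executed on a serialization failure.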
A good way to explore when this happens is to use the examples mentioned below.
Related questions:
- stackoverflow.com/questions/7705273/what-are-the-conditions-for-encountering-a-serialization-failure
- stackoverflow.com/questions/59351109/error-could-not-serialize-access-due-to-concurrent-update
- stackoverflow.com/questions/50797097/postgres-could-not-serialize-access-due-to-concurrent-update/51932824
Vs the SQL SERIALIZABLE isolation level on PostgreSQL: dba.stackexchange.com/questions/284744/postgres-repeatable-read-vs-serializable
nodejs/sequelize/raw/parallel_create_delete_empty_tag.js is an example which experimentally appears to be solved by REPEATABLE READ, although we are not sure that this is truly the case and why. What is clear is that the example is not solved by the SQL READ COMMITTED isolation level. In PostgreSQL, REPEATABLE READ is the first isolation level that can lead to serialization failures; they do not happen under READ COMMITTED in that DBMS. You then have to retry the transaction.
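A minimal sketch of the kind of interleaving that produces the failure under REPEATABLE READ but not under READ COMMITTED (the table name, row values and connection string are assumptions; the actual repository example is more involved):

```js
// Two transactions read and then update the same row. Under REPEATABLE READ,
// PostgreSQL aborts the second updater with
// "could not serialize access due to concurrent update" (SQLSTATE 40001);
// under READ COMMITTED it simply applies its update on top of the first one.
const { Sequelize, Transaction } = require('sequelize');

const sequelize = new Sequelize('postgres://user:pass@localhost/tmp'); // assumed DSN

async function main() {
  await sequelize.query('DROP TABLE IF EXISTS t');
  await sequelize.query('CREATE TABLE t (id INT, val INT)');
  await sequelize.query('INSERT INTO t VALUES (1, 0)');

  const isolationLevel = Transaction.ISOLATION_LEVELS.REPEATABLE_READ;
  const t1 = await sequelize.transaction({ isolationLevel });
  const t2 = await sequelize.transaction({ isolationLevel });

  // Both transactions take their snapshot of the row.
  await sequelize.query('SELECT val FROM t WHERE id = 1', { transaction: t1 });
  await sequelize.query('SELECT val FROM t WHERE id = 1', { transaction: t2 });

  // t1 updates the row and commits first.
  await sequelize.query('UPDATE t SET val = 1 WHERE id = 1', { transaction: t1 });
  await t1.commit();

  // t2 now updates the same row: under REPEATABLE READ this raises 40001,
  // because the row changed after t2's snapshot was taken.
  try {
    await sequelize.query('UPDATE t SET val = 2 WHERE id = 1', { transaction: t2 });
    await t2.commit();
  } catch (err) {
    console.log(err.original && err.original.code); // expected: '40001'
    await t2.rollback();
  }

  await sequelize.close();
}

main();
```

Switching `isolationLevel` to `Transaction.ISOLATION_LEVELS.READ_COMMITTED` should make the second update succeed instead of failing, which is why the retry logic only becomes necessary from REPEATABLE READ upwards.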