Now that we can reliably split files at will with
\Include, I finally added this feature.
This means that while developing a website locally with the OurBigBook CLI, if you have a bunch of files and one of them contains an error, your first run will proceed slowly until it hits the error:

extract_ids README.ciro
extract_ids README.ciro finished in 73.82836899906397 ms
extract_ids art.ciro
extract_ids art.ciro finished in 671.1738419979811 ms
extract_ids ciro-santilli.ciro
extract_ids ciro-santilli.ciro finished in 1009.6256089992821 ms
extract_ids science.ciro
error: science.ciro:13686:1: named argument "parent" given multiple times
extract_ids science.ciro finished in 1649.6193730011582 ms

But further runs will blast through the files that have already converted successfully, skipping them by timestamp, so you can fix errors file by file and move on quickly:

extract_ids README.ciro
extract_ids README.ciro skipped by timestamp
extract_ids art.ciro
extract_ids art.ciro skipped by timestamp
extract_ids ciro-santilli.ciro
extract_ids ciro-santilli.ciro skipped by timestamp
extract_ids science.ciro
More details at: cirosantilli.com/ourbigbook#no-render-timestamp
This was not entirely trivial to implement because we had to rework how duplicate IDs are checked. Previously, we just nuked the DB on every directory conversion and then repopulated everything, so if a duplicate ID showed up within a run, it was a genuine duplicate.
But now that we no longer necessarily extract IDs from every file, we can't just nuke the database anymore, otherwise we'd lose the information from the skipped files. Therefore, what we have to do is convert every file that needs it, and only check for duplicates at the end.
I managed to do that with a single query, as documented at: stackoverflow.com/questions/71235548/how-to-find-all-rows-that-have-certain-columns-duplicated-in-sequelize/71235550#71235550
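The shape of that end-of-conversion duplicate check is essentially a GROUP BY / HAVING aggregation, which in plain SQL would look something like `SELECT id, COUNT(*) FROM ids GROUP BY id HAVING COUNT(*) > 1`. The same logic, sketched in plain JavaScript over a list of extracted ID rows (a hypothetical in-memory stand-in for the real Sequelize query):

```javascript
// Return the IDs that occur more than once among the extracted rows,
// mirroring SQL's GROUP BY id HAVING COUNT(*) > 1.
function findDuplicateIds(rows) {
  const counts = new Map();
  for (const { id } of rows) {
    counts.set(id, (counts.get(id) || 0) + 1);
  }
  return [...counts.entries()]
    .filter(([, count]) => count > 1)
    .map(([id]) => id);
}
```

Running the check once over the whole ID table at the end of the conversion keeps it a single pass, instead of checking for collisions on every individual file insert.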