Install and first run Updated +Created
At 7e4cc9e57de76752df0f4e32eca95fb653ea64e4, on Ubuntu 21.04, you basically need to use the Docker image due to pip breaking changes... (not their fault). Perhaps pyenv would solve things, but who has the patience for that?!
The Docker setup from the README does just work. The image download is a bit tedious, as it requires you to create a GitHub API key as described in the README, but there must be reasons for that.
Once the image is downloaded, you really want to run it from the root of the source tree:
sudo docker run --name=wcm -it -v "$(pwd):/wcEcoli" docker.pkg.github.com/covertlab/wholecellecolirelease/wcm-full
This mounts the host source under /wcEcoli, so you can easily edit and view output images from your host. Once inside Docker we can compile, run the simulation, and analyze results with:
make clean compile &&
python runscripts/manual/runFitter.py &&
python runscripts/manual/runSim.py &&
python runscripts/manual/analysisVariant.py &&
python runscripts/manual/analysisCohort.py &&
python runscripts/manual/analysisMultigen.py &&
python runscripts/manual/analysisSingle.py
The meaning of each of the analysis commands is described at Section "Output overview".
As a Docker refresher, after you stop the container, e.g. by restarting your computer or running sudo docker stop wcm, you can get back into it with:
sudo docker start wcm
sudo docker exec -it wcm bash
runscripts/manual/runFitter.py takes about 15 minutes, and it generates files such as reconstruction/ecoli/dataclasses/process/two_component_system.py, which are required to run the simulation: it is basically part of the build.
runSim.py does the main simulation; the progress output contains lines like:
Time (s)  Dry mass     Dry mass      Protein          RNA    Small mol     Expected
              (fg)  fold change  fold change  fold change  fold change  fold change
========  ========  ===========  ===========  ===========  ===========  ===========
    0.00    403.09        1.000        1.000        1.000        1.000        1.000
    0.20    403.18        1.000        1.000        1.000        1.000        1.000
and then it ended on the Lenovo ThinkPad P51 (2017) at:
 2569.18    783.09        1.943        1.910        2.005        1.950        1.963

Simulation finished:
 - Length: 0:42:49
 - Runtime: 0:09:13
when the cell had almost doubled, presumably dividing after 42 minutes of simulated time, which could make sense compared to the roughly 20 minutes E. coli takes under optimal conditions.
nodejs/sequelize/raw/parallel_select_and_update.js Updated +Created
This example is similar to nodejs/sequelize/raw/parallel_update_async.js, but now we do a separate SELECT, later followed by an UPDATE:
  • SELECT to get the current i
  • compute newI = i + 1 in JavaScript code
  • UPDATE to SET i to the newI
Although this specific example is useless in itself, as we could just use UPDATE "MyInt" SET i = i + 1 as in nodejs/sequelize/raw/parallel_update_async.js, which automatically solves any concurrency issue, this kind of code could be required for example if the update were a complex function not easily implemented in SQL, or if the update depended on some external data source.
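The core of the pattern could look something like the following minimal sketch (hypothetical, not the actual parallel_select_and_update.js; the "MyInt" table with integer column i is from the example above, while the connection parameters and the selectAndUpdate helper name are made up here):

// Hypothetical minimal sketch of the SELECT-then-update-in-JS pattern.
// Assumes a pre-created table "MyInt" with a single integer column "i";
// adjust connection parameters as needed.
const { Sequelize, Transaction } = require('sequelize');
const sequelize = new Sequelize('tmp', 'postgres', 'a', {
  host: 'localhost',
  dialect: 'postgres',
});

async function selectAndUpdate(isolationLevel, forUpdate) {
  await sequelize.transaction({ isolationLevel }, async (transaction) => {
    // SELECT to get the current i.
    const [rows] = await sequelize.query(
      `SELECT i FROM "MyInt"${forUpdate ? ' FOR UPDATE' : ''}`,
      { transaction }
    );
    // Compute the new value in JavaScript. This stands in for a complex
    // function that could not easily be expressed in SQL.
    const newI = rows[0].i + 1;
    // UPDATE to SET the newI.
    await sequelize.query('UPDATE "MyInt" SET i = :newI', {
      replacements: { newI },
      transaction,
    });
  });
}

It would then be called with e.g. Transaction.ISOLATION_LEVELS.READ_COMMITTED as the isolation level, and forUpdate = true to append FOR UPDATE to the SELECT.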
Sample execution:
node --unhandled-rejections=strict ./parallel_select_and_update.js p 2 10 'READ COMMITTED'
which runs the test against PostgreSQL (p) with the given number of parallel clients (2) and updates per client (10) under the READ COMMITTED isolation level.
Another one:
node --unhandled-rejections=strict ./parallel_select_and_update.js p 2 10 'READ COMMITTED' 'FOR UPDATE'
which runs SELECT FOR UPDATE rather than a plain SELECT.
Observed behaviour under different SQL transaction isolation levels:
  • READ COMMITTED: fails. Nothing in this case prevents:
    • thread 1: SELECT, obtains i = 0
    • thread 2: SELECT, obtains i = 0
    • thread 2: newI = 1
    • thread 2: UPDATE i = 1
    • thread 1: newI = 1
    • thread 1: UPDATE i = 1
  • REPEATABLE READ: works. The manual mentions that when multiple transactions concurrently update the same row, only the first commit succeeds; the later ones fail with a serialization error, roll back, and must retry, therefore preventing the lost update. See the retry sketch after this list.
  • READ COMMITTED + SELECT FOR UPDATE: works. And does not do rollbacks, which probably makes it faster. With p 10 100, REPEATABLE READ was about 4.2s and READ COMMITTED + SELECT FOR UPDATE 3.2s on Lenovo ThinkPad P51 (2017).
    SELECT FOR UPDATE should be enough as mentioned at: www.postgresql.org/docs/13/explicit-locking.html#LOCKING-ROWS
    FOR UPDATE causes the rows retrieved by the SELECT statement to be locked as though for update. This prevents them from being locked, modified or deleted by other transactions until the current transaction ends. That is, other transactions that attempt UPDATE, DELETE, SELECT FOR UPDATE, SELECT FOR NO KEY UPDATE, SELECT FOR SHARE or SELECT FOR KEY SHARE of these rows will be blocked until the current transaction ends; conversely, SELECT FOR UPDATE will wait for a concurrent transaction that has run any of those commands on the same row, and will then lock and return the updated row (or no row, if the row was deleted). Within a REPEATABLE READ or SERIALIZABLE transaction, however, an error will be thrown if a row to be locked has changed since the transaction started. For further discussion see Section 13.4.
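As a hedged sketch of the retry logic needed under REPEATABLE READ (reusing the hypothetical selectAndUpdate helper from the sketch above): PostgreSQL reports the concurrent-update failure as SQLSTATE 40001 ("could not serialize access due to concurrent update"), which Sequelize exposes on the original property of the thrown error, so the caller can simply loop until the transaction commits:

// Retry a transaction until it stops failing with a serialization
// error (SQLSTATE 40001), as required under REPEATABLE READ.
async function withSerializationRetry(fn) {
  while (true) {
    try {
      return await fn();
    } catch (e) {
      // Sequelize wraps the underlying pg error as e.original.
      if (e.original && e.original.code === '40001') {
        continue; // lost the race: just try again
      }
      throw e;
    }
  }
}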
A non-raw version of this example can be seen at: nodejs/sequelize/parallel_select_and_update.js.
How to mine Monero Updated +Created
Build on Ubuntu 20.10 as per xmrig.com/docs/miner/build/ubuntu:
sudo apt install git build-essential cmake libuv1-dev libssl-dev libhwloc-dev
git clone https://github.com/xmrig/xmrig.git
mkdir xmrig/build && cd xmrig/build
cmake ..
make -j$(nproc)
At minexmr.com/#getting_started we see that all you then need is a single CLI command:
xmrig -o pool.minexmr.com:4444 -u <your-monero-address>
Seems simple, well done devs!
Benchmark on Lenovo ThinkPad P51 (2017) as per xmrig.com/docs/miner/benchmark:
./xmrig --bench=1M
gives:
948.1 h/s
which according to the minexmr.com mining pool would generate 0.0005 XMR/day, which at the February 2021 rate of 140 USD/XMR is 0.07 USD/day. The minimum payout in that pool is 0.004 XMR so it would take 8 days to reach that.
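The arithmetic, as a quick sanity check:

// Sanity check of the mining payout arithmetic above.
const xmrPerDay = 0.0005;   // pool estimate for ~948 h/s
const usdPerXmr = 140;      // February 2021 exchange rate
const minPayoutXmr = 0.004; // minexmr.com minimum payout
console.log(xmrPerDay * usdPerXmr);    // 0.07 USD/day
console.log(minPayoutXmr / xmrPerDay); // 8 days to minimum payout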
So clearly, application-specific integrated circuit mining is the only viable way of doing this.
www.makeuseof.com/cryptos-you-can-mine-at-home/ is an article completely full of bullshit that says otherwise. How can someone publish that!
JavaScript memory usage benchmark Updated +Created
In this section we will use the file nodejs/bench_mem.js. Tests are run on Node.js v16.14.2 from NVM, on Ubuntu 21.10, on a Lenovo ThinkPad P51 (2017) with 32 GB of RAM.
A C hello world with an infinite loop at the end has approximately:
  • VSZ: 2.7 MB
  • RSS: 770 KB
For a Node.js infinite loop, nodejs/infinite_loop.js:
topp infinite_loop.js
This gives approximately:
  • RSS: 20 MB
  • VSZ: 230 MB
Adding a single hello world to it as in nodejs/infinite_hello.js and running:
topp infinite_hello.js
leads to:
  • RSS: 26 MB
  • VSZ: 580 MB
We understand that Node.js preallocates VSZ wildly. No big deal, but it does mean that VSZ is a useless measure for Node.js.
Forcing garbage collection as in nodejs/infinite_hello_gc.js brings RSS down to 20 MB however:
topp node --expose-gc infinite_hello_gc.js
Finally, let's get a baseline for process.memoryUsage with nodejs/infinite_memoryusage.js:
node --expose-gc infinite_memoryusage.js
which gives initially:
{
  rss: 23851008,
  heapTotal: 6987776,
  heapUsed: 3674696,
  external: 285296,
  arrayBuffers: 10422
}
but after a few seconds it randomly jumps to:
{
  rss: 26005504,
  heapTotal: 9084928,
  heapUsed: 3761240,
  external: 285296,
  arrayBuffers: 10422
}
so we understand that:
  • heapUsed seems constant at 3.7 MB
  • heapTotal is very noisy: it starts at 7 MB, but randomly jumps to 9 MB at one point for no apparent reason
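For reference, a minimal sketch of what such a printer could look like (hypothetical, not necessarily the actual infinite_memoryusage.js): force a garbage collection and dump process.memoryUsage() once a second:

// Print memory usage once a second, forcing a GC first.
// Requires: node --expose-gc
setInterval(() => {
  global.gc();
  console.log(process.memoryUsage());
}, 1000);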
Now let's run our main test program.
First a baseline case with an array of length 1:
node --expose-gc bench_mem.js n 1
This gives the same results as node --expose-gc infinite_memoryusage.js. The same result is obtained by doing:
a = undefined
with:
node --expose-gc bench_mem.js dealloc
Now let's vary n a bit with:
node --expose-gc bench_mem.js n N
which gives:
|| N
|| `heapUsed`
|| `heapTotal`
|| `rss`
|| `heapUsed` per elem (bytes)
|| `rss` per elem (bytes)

| 1 M
| 14 MB
| 48 MB
| 56 MB
| 10
| 30

| 10 M
| 122 MB
| 157 MB
| 176 MB
| 18
| 15

| 100 M
| 906 MB
| 940 MB
| 960 MB
| 9
| 9.3
"rss per elem" is calculated as: rss - 26 MB, where 26 MB is the baseline RSS seen on n 1.
Similarly "heapUsed per elem" deduces the 4 MB (approximation of the above 3.7 MB) seen on n 1.
Note that to reach MAX_SAFE_INTEGER we would need 8 bytes per element in the worst case.
Everything below 100 million elements, where RSS per element is still well above those 8 bytes, is therefore very memory wasteful in terms of RSS.
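A hypothetical minimal sketch of what the n mode could be doing (the real code lives in nodejs/bench_mem.js): allocate a plain Array of N small integers, force a GC and print the memory usage:

// Hypothetical sketch of the "n" mode: N small integers in an Array.
// Run with: node --expose-gc sketch.js N
const n = parseInt(process.argv[2], 10);
const a = new Array(n);
for (let i = 0; i < n; i++) {
  a[i] = i; // small integers, so V8 can store them as SMIs
}
global.gc();
console.log(process.memoryUsage());
console.log(a.length); // keep the array alive past the GC and the print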
If we use Int32Array typed array buffers instead of a simple Array:
node --expose-gc bench_mem.js array-buffer n N
we see that the memory is now, unsurprisingly, accounted for under arrayBuffers, e.g. for N 1 million:
{
  rss: 31776768,
  heapTotal: 6463488,
  heapUsed: 3674520,
  external: 4285296,
  arrayBuffers: 4010422
}
Results for different N:
|| N
|| `arrayBuffers`
|| `rss`
|| `rss` per elem

| 1 M
| 4 MB
| 31 MB
| 5

| 10 M
| 40 MB
| 67 MB
| 4.6

| 100 M
| 400 MB
| 427 MB
| 4
We see therefore that typed arrays are much closer to what they advertise (4 bytes per element), even for smaller element counts, as expected.
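The array-buffer mode then presumably just swaps the plain Array for a typed array (again a hypothetical sketch, not the actual bench_mem.js), which is why the 4 bytes per element show up under arrayBuffers:

// Hypothetical sketch of the "array-buffer" mode: an Int32Array instead
// of a plain Array, 4 bytes per element, backed by an ArrayBuffer.
const n = parseInt(process.argv[2], 10);
const a = new Int32Array(n);
for (let i = 0; i < n; i++) {
  a[i] = i;
}
global.gc();
console.log(process.memoryUsage());
console.log(a.length); // keep the typed array alive past the GC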
Now let's try one million objects of type { a: 1, b: -1 }:
node --expose-gc bench_mem.js obj
gives:
{
  rss: 138969088,
  heapTotal: 105246720,
  heapUsed: 70103896,
  external: 285296,
  arrayBuffers: 10422
}
Disaster! heapUsed is up to 70 MB! Why?? We were expecting only about 24 MB: the 4 MB baseline plus 2 × 10 MB, i.e. roughly 10 bytes for each of the two integers of our million objects?!
And now an equivalent version using class:
node --expose-gc bench_mem.js class
gives the same result.
Let's try Array:
node --expose-gc bench_mem.js arr
is even worse, at 78 MB of heapUsed!! OMG why.
{
  rss: 164597760,
  heapTotal: 129363968,
  heapUsed: 78117008,
  external: 285296,
  arrayBuffers: 10422
}
What happens if we change the number of fields on the object? First, as a sanity check:
node --expose-gc bench_mem.js obj 2
produces, as expected, the same result as:
node --expose-gc bench_mem.js obj
so adding properties one by one doesn't change anything compared to creating the literal all at once. Good.
Now:
node --expose-gc bench_mem.js obj N
gives heapUsed:
  • 1: 70M
  • 2: 70M
  • 3: 70M
  • 4: 70M
  • 5: 110M
  • 6: 110M
  • 7: 110M
  • 8: 134M
  • 9: 134M
  • 10: 134M
  • 11: 158M
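The jumps every few fields suggest that V8 grows per-object property storage in chunks rather than one slot at a time. A hypothetical sketch of what this obj N mode could look like, adding the fields one by one as in the sanity check above (the property names k0, k1, ... are made up here):

// Hypothetical sketch of the "obj N" mode: a million objects with a
// varying number of small-integer fields, added one by one.
// Run with: node --expose-gc sketch.js N_FIELDS
const nFields = parseInt(process.argv[2], 10);
const n = 1000000;
const a = new Array(n);
for (let i = 0; i < n; i++) {
  const o = {};
  for (let j = 0; j < nFields; j++) {
    o['k' + j] = j;
  }
  a[i] = o;
}
global.gc();
console.log(process.memoryUsage().heapUsed);
console.log(a.length); // keep everything alive past the GC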
Ubuntu 21.10 does not wake up from suspend Updated +Created
Does not happen every time, only sometimes. Can't figure out why. It usually happens after the machine has been suspended for a longer time.
bugs.launchpad.net/ubuntu/+source/nvidia-graphics-drivers-470/+bug/1946303 sounds like a likely report, as I am on Nvidia driver version 470, but I can't find those error messages anywhere. The last line of:
journalctl -o short-precise -k -b -1
once was:
PM: suspend entry (deep)
which is when sleep starts.
This suggests that it is not a video bug then; it seems it is not waking up at all? Gotta try to SSH into it. OK, I did SSH into it, and that was fine, so it is just the video that won't start. Consistent with that, the journal does reach the wake-up line:
PM: suspend exit
bugs.launchpad.net/ubuntu/+source/linux/+bug/1949977 is another possible bug, based on the kernel version: I'm running 5.13, which is one of the failing versions in the report. Can't find any interesting dmesg lines though.
In another crash:
journalctl -o short-precise -k -b -1
had the following interesting lines:
nvidia-modeset: WARNING: GPU:0: Lost display notification (0:0x00000000); continuing.
[24307.640014] NVRM: GPU at PCI:0000:01:00: GPU-18af74bb-7c72-ff70-e447-87d48378ea20
[24307.640018] NVRM: Xid (PCI:0000:01:00): 79, pid=8828, GPU has fallen off the bus.
[24307.640021] NVRM: GPU 0000:01:00.0: GPU has fallen off the bus.
[24328.054022] nvidia-modeset: ERROR: GPU:0: The requested configuration of display devices (LGD (DP-4)) is not supported on this GPU.
[repeats several more times]
[24328.056767] nvidia-modeset: ERROR: GPU:0: The requested configuration of display devices (LGD (DP-4)) is not supported on this GPU.
[24328.056951] nvidia-modeset: ERROR: GPU:0: Failed to query display engine channel state: 0x0000927c:0:0:0x0000000f
[24328.056955] nvidia-modeset: ERROR: GPU:0: Failed to query display engine channel state: 0x0000927c:1:0:0x0000000f
[24328.056959] nvidia-modeset: ERROR: GPU:0: Failed to query display engine channel state: 0x0000927c:2:0:0x0000000f
[24328.056962] nvidia-modeset: ERROR: GPU:0: Failed to query display engine channel state: 0x0000927c:3:0:0x0000000f
[24328.056983] nvidia-modeset: ERROR: GPU:0: DP-4: Failed to disable DisplayPort audio stream-0
[24328.056992] nvidia-modeset: ERROR: GPU:0: Failed to query display engine channel state: 0x0000947d:0:0:0x0000000f
and there was a corresponding /var/crash/_usr_sbin_gdm3.0.crash.
Ubuntu 23.04 boot broken on kernel 6.2 Updated +Created
Switching to the other installed kernel, 5.9, made boot work.
The solution on kernel 6.2 was:
sudo apt install nvidia-driver-515
as per comments under: bugs.launchpad.net/ubuntu/+source/linux/+bug/2012559. This also made the nvidia driver work: Find GPU information in Ubuntu.
Previously I had:
nvidia-driver-510
and it blew up before reaching disk decryption.
I also tried:
nvidia-driver-525
but that broke in a different way:
Finished apport-autoreport.service - Process error reports when automatic reporting is enabled.
nvidia-modeset: ERROR: GPU:0: Idling display engine timed out: 0x0000947d:0:0:407
(1 of 2) Job systemd-backlight@backlight: nvidia_e.service/start running (32s no limit)
Unable to lock screen on Ubuntu Updated +Created
Happened on the P14s on Ubuntu 23.10, which started from a fresh Ubuntu 23.10 install.
However, it did not happen on the Lenovo ThinkPad P51 (2017), also on Ubuntu 23.10, which had been upgraded several times from God knows what starting point... At first one machine had X11 (forced by the Nvidia drivers) and the other Wayland, but moving the P14s to X11 changed nothing.
Both were running GNOME Display Manager.
The same happens with Super + L, and also with CLI commands: askubuntu.com/questions/7776/how-do-i-lock-the-desktop-screen-via-command-line
vmpk Updated +Created
VMPK is a virtual device that replicates what you would get by connecting a physical MIDI keyboard to your computer. It is not a software synthesizer on its own, but it does connect to a working synthesizer by default (Sonivox EAS), which makes it produce sounds out of the box.
TODO: then I messed with my sound settings, and it stopped working with the default "MIDI Connection" > "MIDI Out Driver" > "Network" setting. But it still works with "SonivoxEAS".
A hello world of actually connecting it to a specific software synthesizer manually on Advanced Linux Sound Architecture with aconnect can be found at: askubuntu.com/questions/34391/virtual-midi-piano-keyboard-setup/1298026#1298026
It has reasonable default key mappings to the computer keyboard, covering 2 octaves.
Pressing 3 keys simultaneously did not work (tested "ZQI"). This might just be a limitation of my keyboard however.