EC2 instance type by Ciro Santilli 35 Updated +Created
Amazon's information about their own instances is so bad and non-public that this was created: instances.vantage.sh/
OpenNMT by Ciro Santilli 35 Updated +Created
qiskit.transpile() by Ciro Santilli 35 Updated +Created
This function does quantum compilation. Shown e.g. at qiskit/qft.py.
GitHub book repo by Ciro Santilli 35 Updated +Created
Some amazing people have put book source codes on GitHub. This is a list of such repos.
Ollama by Ciro Santilli 35 Updated +Created
Ollama is a highly automated open source wrapper that makes it very easy to run multiple Open weight LLM models on either CPU or GPU.
Its README alone is of great value, serving as a fantastic list of the most popular Open weight LLM models in existence.
Install with:
curl https://ollama.ai/install.sh | sh
The below was tested on Ollama 0.1.14 from December 2023.
Download llama2 7B and open a prompt:
ollama run llama2
On P14s it runs on CPU and generates a few tokens per second, which is quite usable for a quick interactive play.
As mentioned at github.com/jmorganca/ollama/blob/0174665d0e7dcdd8c60390ab2dd07155ef84eb3f/docs/faq.md it downloads models to under /usr/share/ollama/.ollama/models/ and ncdu tells me:
--- /usr/share/ollama ----------------------------------
    3.6 GiB [###########################] /.ollama
    4.0 KiB [                           ]  .bashrc
    4.0 KiB [                           ]  .profile
    4.0 KiB [                           ]  .bash_logout
We can also do it non-interactively with:
/bin/time ollama run llama2 'What is quantum field theory?'
which gave me:
0.13user 0.17system 2:06.32elapsed 0%CPU (0avgtext+0avgdata 17280maxresident)k
0inputs+0outputs (0major+2203minor)pagefaults 0swaps
but note that by default a random seed affects each run. Seed support was apparently added later: github.com/ollama/ollama/issues/2773, but Ciro Santilli doesn't know how to set the seed.
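If seed behaves like Ollama's other generation options, it should be settable through the local REST API rather than the CLI. A request body to POST to localhost:11434/api/generate might look like this (option names taken from the Ollama API docs; treat this as an untested assumption):

```json
{
  "model": "llama2",
  "prompt": "What is quantum field theory?",
  "stream": false,
  "options": {
    "seed": 42,
    "temperature": 0
  }
}
```

With `temperature` at 0 and a fixed `seed`, repeated runs should in principle produce identical output.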
Some other quick benchmarks from Amazon EC2 GPU on a g4dn.xlarge instance which had an Nvidia Tesla T4:
0.07user 0.05system 0:16.91elapsed 0%CPU (0avgtext+0avgdata 16896maxresident)k
0inputs+0outputs (0major+1960minor)pagefaults 0swaps
and on an Nvidia A10G in a g5.xlarge instance:
0.03user 0.05system 0:09.59elapsed 0%CPU (0avgtext+0avgdata 17312maxresident)k
8inputs+0outputs (1major+1934minor)pagefaults 0swaps
So it's not too bad, a small article in 10s.
It tends to babble quite a lot by default, but eventually decides to stop.
Unable to lock screen on Ubuntu by Ciro Santilli 35 Updated +Created
Happened on a P14s on Ubuntu 23.10, which started from a fresh Ubuntu 23.10 install.
However it did not happen on a Lenovo ThinkPad P51 (2017), also on Ubuntu 23.10, which had been upgraded several times from God knows what starting point... At first one was on X11 (forced by the Nvidia drivers) and the other on Wayland, but moving the P14s to X11 changed nothing.
Both were running GNOME Display Manager.
Culture of Brazil by Ciro Santilli 35 Updated +Created
HTTP by Ciro Santilli 35 Updated +Created
History of Bitcoin by Ciro Santilli 35 Updated +Created
Change (Bitcoin) by Ciro Santilli 35 Updated +Created
Bitcoin mining reward by Ciro Santilli 35 Updated +Created
LSF get version by Ciro Santilli 35 Updated +Created
Most/all commands have the -V option which prints the version, e.g.:
bsub -V
LSF command by Ciro Santilli 35 Updated +Created
Khronos standard by Ciro Santilli 35 Updated +Created
Universal Scene Description by Ciro Santilli 35 Updated +Created
Project Gutenberg remove line breaks by Ciro Santilli 35 Updated +Created
Their txt formats are so crap!
E.g. for:
wget -O pap.txt https://www.gutenberg.org/ebooks/1342.txt.utf-8
a good one is:
perl -0777 -pe 's/(?<!\r\n)\r\n(?!\r\n)( +)?/ /g' pap.txt
The ( +)? is for the endlessly many quoted letters they have, which use four leading spaces per line as a quote marker.
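The same transformation can be sketched in Python's re module, which supports the same lookarounds (the sample string here is made up to illustrate the behavior):

```python
import re

# Made-up sample mimicking Project Gutenberg's hard-wrapped CRLF text,
# with a blank line as paragraph break and a 4-space quote indent.
text = ("First line\r\nwrapped continuation.\r\n\r\n"
        "New paragraph\r\n    quoted line\r\nmore.")

# Join single CRLFs (plus any quote-indent spaces) into a space,
# but keep double CRLFs, i.e. paragraph breaks, intact.
unwrapped = re.sub(r'(?<!\r\n)\r\n(?!\r\n)( +)?', ' ', text)
print(unwrapped)
```

Single line breaks inside a paragraph become spaces; the blank line separating paragraphs survives.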
Conda by Ciro Santilli 35 Updated +Created
Conda is like pip, except that it also manages shared library dependencies, including providing prebuilt binaries.
This made Conda very popular in the deep learning community around 2020, when using Python frontends like PyTorch to configure faster precompiled backends was extremely common.
It also means that it is a full package manager, extremely bloated, and blows up all the time. People should just use Docker instead for that kind of stuff: www.reddit.com/r/learnmachinelearning/comments/kd88p8/comment/keco07k/
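For reference, the kind of spec that mixes Python packages with prebuilt native dependencies looks roughly like this environment.yml (package names and versions here are illustrative assumptions, not a recommended setup):

```yaml
# environment.yml: conda can install the native CUDA runtime alongside
# the Python packages, which plain pip cannot do.
name: deep-learning
channels:
  - pytorch
  - nvidia
  - defaults
dependencies:
  - python=3.10
  - pytorch
  - pytorch-cuda=11.8
  - pip
  - pip:
      - some-pip-only-package  # hypothetical pip-only dependency
```

Created with `conda env create -f environment.yml`, activated with `conda activate deep-learning`.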
