Term invented by Ciro Santilli, similar to "nuclear blues", and used to describe the feeling that every little shitty job you are doing (that does not considerably help achieve AGI) is completely pointless, given that we are likely close to AGI as of 2023.
en.bitcoin.it/wiki/Jercos mentions:
According to jercos, the transaction was finalized over IRC chats. Jercos was 18 at the time of the transaction. www.bitcoinwhoswho.com/jercosinterview is the source. Presumably the contact was initiated via the private messaging feature of the Bitcoin Forum.
Bibliography:
en.bitcoin.it/wiki/Jercos
By default, LSF only sends you an email with the stdout and stderr included in it, and does not show or store anything locally.
One option to store things locally is to use:
bsub -oo stdout.log -eo stderr.log 'echo myout; echo myerr 1>&2'
Or to use files with the job id in them:
bsub -oo %J.out -eo %J.err 'echo myout; echo myerr 1>&2'
By default, bsub -oo:
- also contains the LSF metadata in addition to the actual submitted process stdout
- prevents the completion email from being sent
To get just the stdout in the file, use instead:
bsub -N -oo
which:
- stores only stdout in the file
- re-enables the completion email
Another option is to run with the bsub -I option, which immediately prints stdout and stderr to the terminal:
bsub -I 'echo a;sleep 1;echo b;sleep 1;echo c'
@cirosantilli/_file/qiskit/qiskit/qft.py
This is an example of the qiskit.circuit.library.QFT implementation of the Quantum Fourier transform, which is documented at: docs.quantum.ibm.com/api/qiskit/0.44/qiskit.circuit.library.QFT
Output:
init: [1, 0, 0, 0, 0, 0, 0, 0]
qc
┌──────────────────────────────┐┌──────┐
q_0: ┤0 ├┤0 ├
│ ││ │
q_1: ┤1 Initialize(1,0,0,0,0,0,0,0) ├┤1 QFT ├
│ ││ │
q_2: ┤2 ├┤2 ├
└──────────────────────────────┘└──────┘
transpiled qc
┌──────────────────────────────┐ ┌───┐
q_0: ┤0 ├────────────────────■────────■───────┤ H ├─X─
│ │ ┌───┐ │ │P(π/2) └───┘ │
q_1: ┤1 Initialize(1,0,0,0,0,0,0,0) ├──────■───────┤ H ├─┼────────■─────────────┼─
│ │┌───┐ │P(π/2) └───┘ │P(π/4) │
q_2: ┤2 ├┤ H ├─■─────────────■──────────────────────X─
└──────────────────────────────┘└───┘
Statevector([0.35355339+0.j, 0.35355339+0.j, 0.35355339+0.j,
0.35355339+0.j, 0.35355339+0.j, 0.35355339+0.j,
0.35355339+0.j, 0.35355339+0.j],
dims=(2, 2, 2))
init: [0.0, 0.35355339059327373, 0.5, 0.3535533905932738, 6.123233995736766e-17, -0.35355339059327373, -0.5, -0.35355339059327384]
Statevector([ 7.71600526e-17+5.22650714e-17j,
1.86749130e-16+7.07106781e-01j,
-6.10667421e-18+6.10667421e-18j,
1.13711443e-16-1.11022302e-16j,
2.16489014e-17-8.96726857e-18j,
-5.68557215e-17-1.11022302e-16j,
-6.10667421e-18-4.94044770e-17j,
-3.30200457e-16-7.07106781e-01j],
dims=(2, 2, 2))
So this also serves as a more interesting example of quantum compilation, mapping the QFT gate to Qiskit Aer primitives. If we don't transpile in this example, then running blows up with:
qiskit_aer.aererror.AerError: 'unknown instruction: QFT'
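The source file is not reproduced here, but a minimal sketch of what such a qft.py could look like is given below; the sine wave second input and the exact printing layout are guesses based on the output above:
import math
from qiskit import QuantumCircuit, transpile
from qiskit.circuit.library import QFT
from qiskit_aer import AerSimulator

sim = AerSimulator(method='statevector')

def run_qft(init):
    print('init:', init)
    qc = QuantumCircuit(3)
    qc.initialize(init, [0, 1, 2])
    qc.append(QFT(3), [0, 1, 2])
    print('qc')
    print(qc.draw())
    # Without this transpile step, Aer raises the
    # "unknown instruction: QFT" error mentioned above, because the
    # composite QFT gate has to be lowered to basis gates first.
    tqc = transpile(qc, sim)
    print('transpiled qc')
    print(tqc.draw())
    tqc.save_statevector()
    print(sim.run(tqc).result().get_statevector())

# First input: |000>, which the QFT maps to the uniform superposition.
run_qft([1, 0, 0, 0, 0, 0, 0, 0])
# Second input: a normalized sampled sine wave.
run_qft([math.sin(k * math.pi / 4) / 2 for k in range(8)])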
The second input is the second init vector shown in the output above, and the output of that is approximately:
[0, 1j/sqrt(2), 0, 0, 0, 0, 0, -1j/sqrt(2)]
From this we see that the Quantum Fourier transform is equivalent to a direct discrete Fourier transform on the quantum state vector, related: physics.stackexchange.com/questions/110073/how-to-derive-quantum-fourier-transform-from-discrete-fourier-transform-dft
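As a quick sanity check of this equivalence, here is a sketch comparing against NumPy; the sign convention is an assumption inferred from the output above, namely that Qiskit's QFT matches NumPy's inverse FFT with 'ortho' normalization:
import numpy as np

# The second input above: a normalized sampled sine wave.
init = [np.sin(k * np.pi / 4) / 2 for k in range(8)]
# Normalized DFT with the exp(+2*pi*1j*j*k/N) sign convention.
out = np.fft.ifft(init, norm='ortho')
print(np.round(out, 6))
# Indices 1 and 7 come out as roughly +0.707107j and -0.707107j,
# everything else is approximately 0, matching the Statevector above.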
Let's get SSH access, install a package, and run a server.
As of December 2023, on a t2.micro instance, the only one that was part of the free tier at the time, with advertised 1 vCPU, 1 GiB RAM and 8 GiB disk for the first 12 months, on Ubuntu 22.04:
$ free -h
total used free shared buff/cache available
Mem: 949Mi 149Mi 210Mi 0.0Ki 590Mi 641Mi
Swap: 0B 0B 0B
$ nproc
1
$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/root 7.6G 1.8G 5.8G 24% /
To install software:
sudo apt update
sudo apt install cowsay
cowsay asdf
Once HTTP inbound traffic is enabled on the security rules for port 80, you can run:
while true; do printf "HTTP/1.1 200 OK\r\n\r\n`date`: hello from AWS" | sudo nc -Nl 80; done
and then you are able to curl from your local computer and get the response.
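An equivalent quick test server as a Python sketch, in case you want something a bit less hacky than the nc loop (hypothetical file name server.py; binding port 80 still requires root):
# server.py: replies with the current date plus a greeting, like the nc loop above.
# Run with e.g.: sudo python3 server.py
from datetime import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = f'{datetime.now()}: hello from AWS'.encode()
        self.send_response(200)
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(('', 80), Handler).serve_forever()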
As of December 2023, the cheapest instance with an Nvidia GPU is g4dn.xlarge, so let's try that out. TODO meaning of "dn"? "n" presumably means Nvidia, but what is the "d"? In that instance, lspci contains:
00:1e.0 3D controller: NVIDIA Corporation TU104GL [Tesla T4] (rev a1)
Be careful not to confuse it with g4ad.xlarge, which has an AMD GPU instead. TODO meaning of "ad"? "a" presumably means AMD, but what is the "d"?
Some documentation on which GPU is in each instance can be seen at: docs.aws.amazon.com/dlami/latest/devguide/gpu.html (archive) with a list of which GPUs they have at that random point in time. Can the GPU ever change for a given instance name? Likely not. Also, as of December 2023 the list is already outdated, e.g. P5 is not shown, though it is mentioned at: aws.amazon.com/ec2/instance-types/p5/
When selecting the instance to launch, the GPU apparently does not show anywhere on the instance information page, it is so bad!
Also note that this instance has 4 vCPUs, so on a new account you must first make a customer support request to Amazon to increase your limit from the default of 0 to 4, see also: stackoverflow.com/questions/68347900/you-have-requested-more-vcpu-capacity-than-your-current-vcpu-limit-of-0, otherwise instance launch will fail with:
You have requested more vCPU capacity than your current vCPU limit of 0 allows for the instance bucket that the specified instance type belongs to. Please visit aws.amazon.com/contact-us/ec2-request to request an adjustment to this limit.
When starting up the instance, also select:
- image: Ubuntu 22.04
- storage size: 30 GB (maximum free tier allowance)
Once you finally manage to SSH into the instance, first we have to install the drivers and reboot:
sudo apt update
sudo apt install nvidia-driver-510 nvidia-utils-510 nvidia-cuda-toolkit
sudo reboot
and now running:
nvidia-smi
shows something like:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.147.05 Driver Version: 525.147.05 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla T4 Off | 00000000:00:1E.0 Off | 0 |
| N/A 25C P8 12W / 70W | 2MiB / 15360MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
If we start from the raw Ubuntu 22.04, first we have to install drivers:
- docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-nvidia-driver.html official docs
- stackoverflow.com/questions/63689325/how-to-activate-the-use-of-a-gpu-on-aws-ec2-instance
- askubuntu.com/questions/1109662/how-do-i-install-cuda-on-an-ec2-ubuntu-18-04-instance
- askubuntu.com/questions/1397934/how-to-install-nvidia-cuda-driver-on-aws-ec2-instance
From there, basically everything should just work as normal. E.g. we were able to run a CUDA hello world just fine with:
nvcc inc.cu
./a.out
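Another quick userland check that the GPU is actually usable, a sketch assuming you additionally pip install torch (which is not part of the steps above):
import torch

# Should print True if the driver and CUDA runtime are working.
print(torch.cuda.is_available())
# Should print something like: Tesla T4
print(torch.cuda.get_device_name(0))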
One issue with this setup, besides the time it takes to set up, is that you might also have to pay some network charges as it downloads a bunch of stuff into the instance. We should try out some of the pre-built images. But it is also good to know this pristine setup just in case.
Some stuff we then managed to run:
curl https://ollama.ai/install.sh | sh
/bin/time ollama run llama2 'What is quantum field theory?'
which gave:
0.07user 0.05system 0:16.91elapsed 0%CPU (0avgtext+0avgdata 16896maxresident)k
0inputs+0outputs (0major+1960minor)pagefaults 0swaps
so way faster than on my local desktop CPU, hurray.
After setup from: askubuntu.com/a/1309774/52975 we were able to run:
head -n1000 pap.txt | ARGOS_DEVICE_TYPE=cuda time argos-translate --from-lang en --to-lang fr > pap-fr.txt
which gave:
77.95user 2.87system 0:39.93elapsed 202%CPU (0avgtext+0avgdata 4345988maxresident)k
0inputs+88outputs (0major+910748minor)pagefaults 0swaps
so only marginally better than on the P14s. It would be fun to see how much faster we could make things on a more powerful GPU.
Launch Amazon EC2 with existing EBS volume
Not possible directly without first creating an AMI image from snapshot? So annoying!
The hot and more expensive storage for Amazon EC2, where e.g. your Ubuntu filesystem will lie.
The cheaper and slower alternative is to use Amazon S3.
The CLI tools don't appear to be packaged for Ubuntu 23.10? Annoying... There is a package libapache-jena-java but it doesn't contain any binaries, only Java library files.
To run the CLI tools easily we can download the prebuilt:
sudo apt install openjdk-22-jre
wget https://dlcdn.apache.org/jena/binaries/apache-jena-4.10.0.zip
unzip apache-jena-4.10.0.zip
cd apache-jena-4.10.0
export JENA_HOME="$(pwd)"
export PATH="$PATH:$(pwd)/bin"
and we can confirm it works with:
sparql -version
which outputs:
Apache Jena version 4.10.0
If your Java is too old, then running sparql with the prebuilts fails with:
Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.UnsupportedClassVersionError: arq/sparql has been compiled by a more recent version of the Java Runtime (class file version 55.0), this version of the Java Runtime only recognizes class file versions up to 52.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:756)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:473)
at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:621)
Build from source is likely something like:
sudo apt install maven openjdk-22-jdk
git clone https://github.com/apache/jena --branch jena-4.10.0 --depth 1
cd jena
mvn clean install
TODO test it.
If you make the mistake of trying to run from the source tree without building:
git clone https://github.com/apache/jena --branch jena-4.10.0 --depth 1
cd jena
export JENA_HOME="$(pwd)"
export PATH="$PATH:$(pwd)/apache-jena/bin"
it fails with:
Error: Could not find or load main class arq.sparql
as per: users.jena.apache.narkive.com/T5TaEszT/sparql-tutorial-querying-datasets-error-unrecognized-option-graph
They have a tutorial at: jena.apache.org/tutorials/sparql.html
Once you've done the Apache Jena CLI tools setup, we can query all users with Full Name (FN) "John Smith" directly from the rdf/vcard.ttl Turtle RDF file with the rdf/vcard.rq SPARQL query:
sparql --data=rdf/vcard.ttl --query=rdf/vcard.rq
and that outputs:
---------------------------------
| x |
=================================
| <http://somewhere/JohnSmith/> |
---------------------------------
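The same query can also be run from Python, e.g. with the rdflib library. A minimal sketch, assuming the data uses the vCard vocabulary from the Jena tutorial (http://www.w3.org/2001/vcard-rdf/3.0#) and that rdf/vcard.rq selects on vcard:FN, which is what the output above suggests:
# pip install rdflib
from rdflib import Graph

g = Graph()
g.parse('rdf/vcard.ttl', format='turtle')
query = '''
PREFIX vcard: <http://www.w3.org/2001/vcard-rdf/3.0#>
SELECT ?x WHERE { ?x vcard:FN "John Smith" }
'''
for row in g.query(query):
    # Expected to print: http://somewhere/JohnSmith/
    print(row.x)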
Nvidia T4 GPUs as mentioned at: aws.amazon.com/ec2/instance-types/g4/
Nvidia A10G GPU, 4 vCPUs.