Our definition of fog computing: a system that uses the computational resources of individuals who volunteer their own devices, where you give each volunteer a part of a computational problem that you want to solve.
Folding@home and SETI@home are perfect examples of that definition.
Advantages of fog: there is only one, reusing hardware that would otherwise be idle.
Disadvantages:
  • in the cloud, you can put your datacenter in the location with the cheapest possible power. On fog you can't.
  • on fog there is some waste due to network communication.
  • you will likely optimize the code less well because you might be targeting a wide array of different types of hardware, so more power (and time) wastage. Furthermore, some of the hardware used will not be optimal for the task, e.g. a CPU instead of a GPU.
All of this makes Ciro Santilli doubt whether it wouldn't be more efficient for volunteers to simply donate money instead of wasting power on inefficient computation.
Figure 1. Cloud Computing market share in Q2 2022 by statista.com. Source.
Basically means "company with huge server farms, which usually rents them out, like Amazon AWS or Google Cloud Platform".
Figure 1. Global electricity use by data center type: 2010 vs 2018. Source. The growth of hyperscaler cloud vs smaller cloud and private deployments was incredible in that period!
Google BigQuery alternative.
Let's get SSH access, install a package, and run a server.
As of December 2023 on a t2.micro instance, the only instance type in the free tier at the time, with an advertised 1 vCPU, 1 GiB RAM and 8 GiB of disk for the first 12 months, on Ubuntu 22.04:
$ free -h
               total        used        free      shared  buff/cache   available
Mem:           949Mi       149Mi       210Mi       0.0Ki       590Mi       641Mi
Swap:             0B          0B          0B
$ nproc
1
$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       7.6G  1.8G  5.8G  24% /
To install software:
sudo apt update
sudo apt install cowsay
cowsay asdf
Once HTTP inbound traffic is enabled on security rules for port 80, you can:
while true; do printf "HTTP/1.1 200 OK\r\n\r\n`date`: hello from AWS" | sudo nc -Nl 80; done
and then you are able to curl from your local computer and get the response.
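For example, something along these lines, where the public DNS name is a hypothetical placeholder that you would replace with the one shown on your instance page:
curl http://ec2-12-34-56-78.compute-1.amazonaws.com
which should print something like: Mon Dec  4 12:00:00 UTC 2023: hello from AWS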
As of December 2023, the cheapest instance type with an Nvidia GPU is g4dn.xlarge, so let's try that out. In that instance, lspci contains:
00:1e.0 3D controller: NVIDIA Corporation TU104GL [Tesla T4] (rev a1)
TODO meaning of "dn"? The "n" presumably means Nvidia, but what is the "d"?
Be careful not to confuse it with g4ad.xlarge, which has an AMD GPU instead. TODO meaning of "ad"? "a" presumably means AMD, but what is the "d"?
Some documentation on which GPU is in each instance type can be seen at: docs.aws.amazon.com/dlami/latest/devguide/gpu.html (archive), with a list of which GPUs they have at that random point in time. Can the GPU ever change for a given instance name? Likely not. Also, as of December 2023 the list is already outdated, e.g. P5 is not shown, though it is mentioned at: aws.amazon.com/ec2/instance-types/p5/
When selecting the instance to launch, the GPU apparently does not show anywhere on the instance information page, which is really bad!
Also note that this instance has 4 vCPUs, so on a new account you must first make a customer support request to Amazon to increase your vCPU limit from the default of 0 to at least 4, see also: stackoverflow.com/questions/68347900/you-have-requested-more-vcpu-capacity-than-your-current-vcpu-limit-of-0, otherwise the instance launch will fail with:
You have requested more vCPU capacity than your current vCPU limit of 0 allows for the instance bucket that the specified instance type belongs to. Please visit aws.amazon.com/contact-us/ec2-request to request an adjustment to this limit.
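If you are unsure what your current limit is, one way to check it, assuming you have the AWS CLI configured, is to query the Service Quotas API. The quota name filter below is an assumption and may need adjusting:
aws service-quotas list-service-quotas \
  --service-code ec2 \
  --query "Quotas[?contains(QuotaName, 'G and VT')].[QuotaName,Value]" \
  --output table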
When starting up the instance, also select the following (a roughly equivalent AWS CLI launch is sketched after the list):
  • image: Ubuntu 22.04
  • storage size: 30 GB (maximum free tier allowance)
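For reference, a roughly equivalent launch through the AWS CLI would presumably look something like the following; the AMI ID, key pair and security group are placeholders that you must replace with your own:
aws ec2 run-instances \
  --instance-type g4dn.xlarge \
  --image-id ami-0123456789abcdef0 \
  --key-name my-key-pair \
  --security-group-ids sg-0123456789abcdef0 \
  --block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=30}' \
  --count 1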
Once you finally manage to SSH into the instance, first install the drivers and reboot:
sudo apt update
sudo apt install nvidia-driver-510 nvidia-utils-510 nvidia-cuda-toolkit
sudo reboot
and now running:
nvidia-smi
shows something like:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.147.05   Driver Version: 525.147.05   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:00:1E.0 Off |                    0 |
| N/A   25C    P8    12W /  70W |      2MiB / 15360MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
From there, basically everything should just work as normal. E.g. we were able to run a CUDA hello world just fine with:
nvcc inc.cu
./a.out
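If you don't have a .cu file at hand, a minimal hypothetical sanity test (not necessarily the same as the inc.cu above) could be written and run like this:
cat > min.cu <<'EOF'
// Minimal CUDA sanity test: increment one integer on the GPU.
#include <cassert>
#include <cstdio>

__global__ void inc(int *a) { ++*a; }

int main() {
    int h = 1, *d;
    cudaMalloc(&d, sizeof(int));
    cudaMemcpy(d, &h, sizeof(int), cudaMemcpyHostToDevice);
    inc<<<1, 1>>>(d);
    cudaMemcpy(&h, d, sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(d);
    assert(h == 2);
    puts("GPU increment worked");
}
EOF
nvcc min.cu
./a.out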
One issue with this setup, besides the time it takes to set up, is that you might also have to pay some network charges, as it downloads a bunch of stuff into the instance. We should try out some of the pre-built images. But it is also good to know this pristine setup, just in case.
Some stuff we then managed to run:
curl https://ollama.ai/install.sh | sh
/bin/time ollama run llama2 'What is quantum field theory?'
which gave:
0.07user 0.05system 0:16.91elapsed 0%CPU (0avgtext+0avgdata 16896maxresident)k
0inputs+0outputs (0major+1960minor)pagefaults 0swaps
so way faster than on my local desktop CPU, hurray.
After setup from: askubuntu.com/a/1309774/52975 we were able to run:
head -n1000 pap.txt | ARGOS_DEVICE_TYPE=cuda time argos-translate --from-lang en --to-lang fr > pap-fr.txt
which gave:
77.95user 2.87system 0:39.93elapsed 202%CPU (0avgtext+0avgdata 4345988maxresident)k
0inputs+88outputs (0major+910748minor)pagefaults 0swaps
so only marginally better than on P14s. It would be fun to see how much faster we could make things on a more powerful GPU.
These come with pre-installed drivers, so e.g. nvidia-smi just works on them out of the box, tested on g5.xlarge which has an Nvidia A10G GPU. Good choice as a starting point for deep learning experiments.
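As a quick sanity check on such an image, assuming it ships a Python environment with PyTorch (Deep Learning AMIs typically do, though you may first need to activate a specific conda environment), something like this should confirm that the GPU is visible:
python3 -c 'import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0))'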
The hot and more expensive storage for Amazon EC2, where e.g. your Ubuntu filesystem will live.
The cheaper and slower alternative is to use Amazon S3.
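A hypothetical minimal usage example with the AWS CLI, assuming you already own a bucket named my-bucket:
aws s3 cp myfile.txt s3://my-bucket/myfile.txt
aws s3 cp s3://my-bucket/myfile.txt myfile-copy.txt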
Large but ephemeral storage for EC2 instances. Its size is predetermined by the EC2 instance type. It lives on the local server's disk and is not automatically mounted.
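Since it is not automatically mounted, you would typically do something like the following; the device name is an assumption and varies across instance types, so check lsblk first:
lsblk
sudo mkfs.ext4 /dev/nvme1n1
sudo mkdir -p /mnt/instance-store
sudo mount /dev/nvme1n1 /mnt/instance-store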
You SSH into an OS like Ubuntu and do whatever you want from there, e.g. Amazon EC2.
The OS is usually virtualized, and you get only a certain share of the CPU by default.
Highly managed, you don't even see the Docker images, only some higher level JSON configuration file.
These setups are really convenient and cheap, and form a decent way to try out a new website with simple requirements.
This feels good.
One problem though is that Heroku is very opinionated, likely like other PaaSes. So if you are trying something that is slightly off the most common use case, you might be fucked.
Another problem with Heroku is that it is extremely difficult to debug a build that is broken on Heroku but not locally. We needed a way to drop into a shell in the middle of the build in case of failure. Otherwise it is impossible.
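One partial workaround is heroku run bash, which drops you into a one-off dyno running the already built slug; since it requires a successful build, though, it does not solve the broken-build case:
heroku run bash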
Deployment:
git push heroku HEAD:master
View stdout logs:
heroku logs --tail
PostgreSQL database, it seems to be delegated to AWS. How to browse database: stackoverflow.com/questions/20410873/how-can-i-browse-my-heroku-database
heroku pg:psql
Drop and recreate database:
heroku pg:reset --confirm <app-name>
All tables are destroyed.
Restart app:
heroku restart
Arghh, why so hard... tested 2021:
  • SendGrid: this one is the first one I got working on the free tier! A curl smoke test is sketched after this list.
  • Mailgun: the Heroku add-on creates a free plan. This is smaller than the flex plan, does not allow custom domains, and is not available when signing up on mailgun.com directly: help.mailgun.com/hc/en-us/articles/203068914-What-Are-the-Differences-Between-the-Free-and-Flex-Plans- And without custom domains you cannot send emails to anyone except up to 5 manually whitelisted addresses, thus making this worthless. Also, Gmail is not able to verify the DNS of the sandbox emails, and they go to spam.
    Mailgun does feel good otherwise if you are willing to pay. Their Heroku integration feels great, exposes everything you need on environment variables straight away.
  • CloudMailin: does not feel as well developed as Mailgun. More focus on receiving. Tried adding TXT xxx._domainkey.ourbigbook.com and CNAME mta.ourbigbook.com entries with a custom domain to see if it works; took forever to find that page... www.cloudmailin.com/outbound/domains/xxx Domain verification requires a bit of human contact via email.
    They also don't document their Heroku usage well. The environment variables generated on Heroku are useless, serving only to log in to their web UI. The send username and password must be obtained from their confusing web UI.
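A hypothetical smoke test of the SendGrid v3 API with curl. The environment variable name is an assumption (check what the Heroku add-on actually exports), and the sender must be an address verified with SendGrid:
curl -X POST https://api.sendgrid.com/v3/mail/send \
  -H "Authorization: Bearer $SENDGRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "personalizations": [{"to": [{"email": "you@example.com"}]}],
    "from": {"email": "verified-sender@example.com"},
    "subject": "hello from SendGrid",
    "content": [{"type": "text/plain", "value": "hello"}]
  }'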
Most/all commands have the -V option which prints the version, e.g.:
bsub -V
Submit a new job. The most important command!
By default, LSF only sends you an email with the stdout and stderr included in it, and does not show or store anything locally.
One option to store things locally is to use:
bsub -oo stdout.log -eo stderr.log 'echo myout; echo myerr 1>&2'
Or to use files with the job ID in them:
bsub -oo %J.out -eo %J.err 'echo myout; echo myerr 1>&2'
By default bsub -oo:
  • also contains the LSF metadata in addition to the actual submitted process stdout
  • prevents the completion email from being sent
To get just the stdout in the file, use bsub -N -oo (see the example after this list), which:
  • stores only stdout on the file
  • re-enables the completion email
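Putting those together, a full invocation would presumably look like:
bsub -N -oo stdout.log -eo stderr.log 'echo myout; echo myerr 1>&2'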
Another option is to run with the bsub -I option:
bsub -I 'echo a;sleep 1;echo b;sleep 1;echo c'
This immediately prints stdout and stderr to the terminal.
Run bsub in the foreground, showing stdout on the host live, with the bsub -I option:
bsub -I 'echo a;sleep 1;echo b;sleep 1;echo c'; echo done
Ctrl + C kills the job on remote as well as locally.
View stdout/stderr of a running job.
Kill jobs.
All jobs of the current user:
bkill 0
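Or, to kill a single job by its numeric job ID, which you can find with bjobs:
bjobs
bkill 12345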
Some good insights on the earlier history of the industry at: The Supermen: The Story of Seymour Cray by Charles J. Murray (1997).
The scale where human brain simulation becomes possible according to some estimates.
First publicly reached by Frontier.
Figure 1. Intel supercomputer market share from 1993 to 2020. Source. This graph is shocking, they just took over the entire market! Some good pre-Intel context at The Supermen: The Story of Seymour Cray by Charles J. Murray (1997), e.g. in those earlier days, custom architectures like Cray's and many others dominated.
Early models were heavy and not practical for people to carry, so the main niche they initially filled was being mounted in motor vehicles, notably trucks, where drivers are driving commercially all day long.
It also helps that in the case of trucks you only need to cover the one-dimensional region of the main roads.
For example, this niche was the original entry point of companies such as: