The artistic instrument that enables the ultimate art: coding. See also: Section "The art of programming".
Much more useful than instruments used in inferior arts, such as pianos or paintbrushes.
Unlike other humans, computers are mindless slaves that do exactly what they are told to, except for occasional cosmic ray bit flips. Until they take over the world that is.
A computer is a highly layered system, and so you have to decide which layers you are the most interested in studying.
Although the layers are somewhat independent, they also sometimes interact, and when that happens it usually hurts your brain. E.g., if compilers were perfect, no one optimizing software would have to know anything about microarchitecture. But if you want to go hardcore enough, you might have to learn some lower layer.
It must also be said that, like in any industry, certain layers are hidden behind commercial secrecy, making it harder to actually learn them. In computing, the lower level you go, the more closed source things tend to become.
But as you climb down into the abyss of low level hardcoreness, don't forget that being useful is more important than being hardcore: Figure 1. "xkcd 378: Real Programmers".
First, the most important thing you should know about this subject: cirosantilli.com/linux-kernel-module-cheat/should-you-waste-your-life-with-systems-programming
Here's a summary from low-level to high-level:
- semiconductor physical implementation: this level is of course the most closed, but it is fun to try and peek into it from any openings given by industry and academia:
- photolithography, and notably photomask design
- register transfer level
- interactive Verilator fun: Is it possible to do interactive user input and output simulation in VHDL or Verilog?
- more importantly, and much harder/maybe impossible with open source, would be to try and set up an open source standard cell library and supporting software to obtain power, performance and area estimates
- Are there good open source standard cell libraries to learn IC synthesis with EDA tools? on Quora
- the most open source ones are some initiatives targeting FPGAs, e.g. symbiflow.github.io/, www.clifford.at/icestorm/
- qflow is an initiative targeting actual integrated circuits
- microarchitecture: a good way to play with this is to try and run some minimal userland examples on gem5 userland simulation with logging, e.g. see on the Linux Kernel Module Cheat. This should be done at the same time as books/websites/courses that explain the microarchitecture basics. This is the level of abstraction that Ciro Santilli finds the most interesting of the hardware stack. Learning it for actual CPUs (which as of 2020 is only partially documented by vendors) could actually be useful in hardcore software optimization use cases.
- instruction set architecture: a good approach to learn this is to manually write some userland assembly with assertions as done in the Linux Kernel Module Cheat e.g. at:
- github.com/cirosantilli/linux-kernel-module-cheat/blob/9b6552ab6c66cb14d531eff903c4e78f3561e9ca/userland/arch/x86_64/add.S
- cirosantilli.com/linux-kernel-module-cheat/x86-userland-assembly
- learn a bit about calling conventions, e.g. by calling C standard library functions from assembly:
- you can also try and understand what some simple C programs compile to. Things can get a bit hard though when -O3 is used. Some cute examples:
- executable file format, notably the Executable and Linkable Format (ELF). Particularly important is to understand the basics of:
- address relocation: How do linkers and address relocation work?
- position independent code: What is the -fPIE option for position-independent executables in GCC and ld?
- how to observe which symbols are present in object files (a small nm/objdump sketch is given after this list), e.g.:
- how C++ uses name mangling: What is the effect of extern "C" in C++?
- how C++ template instantiation can help reduce link time and size: Explicit template instantiation - when is it used?
- operating system. There are two ways to approach this:
- learn about the Linux kernel. A good starting point is to learn about its main interfaces. This is well shown at Linux Kernel Module Cheat:
- system calls
- write some system calls in:
- pure assembly:
- C GCC inline assembly:
- learn about kernel modules and their interfaces. Notably, learn how to demystify special files such as /dev/random and so on:
- learn how to do a minimal Linux kernel disk image/boot to userland hello world: What is the smallest possible Linux implementation?
- learn how to GDB step debug the Linux kernel itself. Once you know this, you will feel that "given enough patience, I could understand anything that I wanted about the kernel", and you can then proceed to not learn almost anything about it and carry on with your life.
- write your own (mini-) OS, or study a minimal educational OS, e.g. as in:
- programming language
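As an illustration of the "observe which symbols are present in object files" item above, here is a minimal sketch with binutils; lib.c and its contents are just a hypothetical throwaway example:
# Hypothetical throwaway file with one defined and one undefined symbol.
echo 'int external_helper(int); int global_var; int add(int x) { return global_var + external_helper(x); }' > lib.c
gcc -c lib.c -o lib.o
# Symbols defined here get letters like T (text) or B (bss); U means undefined, i.e. needed at link time.
nm lib.o
# More detail, including sections and relocations:
objdump -t lib.o
objdump -r lib.o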
This is a general principle of software/hardware design that Ciro feels holds wide applicability.
The most extreme case of this is of course the integrated circuit itself, in which it is essentially impossible (?) to observe the specific value of some individual wire at some point.
Somewhat on the other extreme, we have high level programming languages running on top of an operating system: at this point, you can just GDB step debug your program, print the value of any variable/memory location, and fully understand anything that you want. Provided that you manage to easily reach that point of interest.
And for anything in between we have various intermediate levels of complication. The most notable perhaps being developing the operating system itself. At this level, you can't so easily step debug (although techniques do exist). For early boot or bootloaders, for example, you might want to use JTAG on real hardware.
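To make the easy end of that spectrum concrete, a minimal GDB session on a userland program might look something like the sketch below; prog.c and some_var are hypothetical names:
gcc -ggdb3 -O0 -o prog prog.c   # build with debug info and no optimization
gdb ./prog
# then, inside GDB:
#   (gdb) break main
#   (gdb) run
#   (gdb) next
#   (gdb) print some_var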
In parallel to this, there is also another very important pair of closely linked tradeoffs:
- the lower level at which something is implemented, the faster it runs
- emulation gives you observability back, at the cost of slower runtime
Emulation also has another potential downside: unless you are very careful at implementing things correctly, your model might not be representative of the real thing. Also, there may be important tradeoffs between how much the model looks like the real thing, and how fast it runs. For example, QEMU's use of binary translation allows it to run orders of magnitude faster than gem5. However, you are unable to make any predictions about system performance with QEMU, since you are not modelling key elements like the cache or CPU pipeline.
Instrumentation is another technique that can be used to achieve greater observability.
Instrumentation basically means adding loggers/print statements to certain points of interest of your hardware/software.
Instrumentation tends to slow execution down a bit, but way less than emulation.
The downside is that if the instrumentation does not provide the data you need to debug, there's not much you can do: you will need to modify it, i.e. you don't get full visibility from instrumentation.
This is unlike emulation that provides full observability.
The term loosely refers to certain layers of the computer abstraction layers hierarchy, usually high level hardware internals like CPU pipeline, caching and the memory system. Basically exactly what gem5 models.
Some of the earlier computers of the 20th century were analog computers, not digital.
At some point analog died however, and "computer" basically by default started meaning just "digital computer".
As of the 2010's and forward, with the limit of Moore's law and the rise of machine learning, people have started looking again into analog computing as a possible way forward. A key insight is that huge floating point precision is not that crucial in many deep learning applications, e.g. many new digital designs have tried 16-bit floating point as opposed to the more traditional 32-bit minimum. Some papers are even looking into 8-bit: dl.acm.org/doi/10.5555/3327757.3327866
As an example, the Lightmatter company was trying to implement silicon photonics-based matrix multiplication.
A general intuition behind this type of development is that the human brain, the holy grail of machine learning, is itself an analog computer.
Unsurprisingly the term "computer" became a synonym for this from the 1960s onwards!
Ciro Santilli's fork of PARSEC. This fork was made to improve the build system and better support newer targets, including newer Ubuntu and Buildroot.
After the PARSEC website died in 2023, Ciro Santilli also uploaded the test data to GitHub.
The interface is a bit annoying, but the tool is really cool.
100 cycles of matrixprod:
stress-ng -c1 --cpu-ops 100 --cpu-method matrixprod
man stress-ng gives the list of possible --cpu-method values. It documents matrixprod as:
matrix product of two 128 × 128 matrices of double floats. Testing on 64 bit x86 hardware shows that this provides a good mix of memory, cache and floating point operations and is probably the best CPU method to use to make a CPU run hot.
If you don't specify --cpu-method, it apparently loops through every method one by one.
Limit time to 1s instead of limiting cycles:
stress-ng -c1 -t1 --cpu-method matrixprod
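As a further variation, the man page suggests that passing 0 as the worker count uses all online CPUs, so something along these lines should heat up every core for a minute (not verified here):
stress-ng -c 0 -t 60 --cpu-method matrixprod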
This section is about companies that were primarily started as computer makers.
For companies that make integrated circuits, see also: Section "Semiconductor company".
- owns the entire stack and creates high quality highly optimized systems
- creates closed lock-in systems without interoperability and actively fights against users owning their devices
- do they give back enough to open source, or do they leech mostly?
Of course, this only made sense when Apple was more of an underdog to IBM, and Ciro Santilli greatly admires their defiance of the norm.
As of 2020 however, Apple is kind of on the top of the mobile world, and Think different simply makes no sense anymore, notably because it relies on closed source offline software used by millions.
The trope "It's Popular, Now It Sucks" comes to mind.
This is a trap every company that prides itself on its "alternative culture" sets for itself. If they succeed, they could become the norm.
Because the people who are crazy enough to think they can change the world are the ones who do.
Was a direct tech predecessor to the iPhone.
Nice looking and expensive operating system by Apple. Ciro Santilli believes that:
- if you want to be ripped off, just use Microsoft Windows which has more software available
- or if you want to attain Enlightenment, just use Linux, which is free and open source
The story of how OS X was ported to x86 from PowerPC, with the large initial work up to boot done by a single man, John Kullmann, starting in the year 2000, is really worth reading: www.quora.com/Apple-company/How-does-Apple-keep-secrets-so-well/answer/Kim-Scheinberg on Quora, see also:
Can you do anything with it? What's the license?
Co-founder of Apple.
Is Jobs evil? Is he interesting? Undoubtedly.
www.folklore.org/ProjectView.py?project=Macintosh&characters=Steve%20Jobs has some good anecdotes about him.
Ciro Santilli is especially fond of: Jobs and Wozniak's blue box.
Good quotes:
- "Try to have a nice family life, have fun, save a little money." quote at: Section "Don't be a pussy" and the related Jobs and Wozniak's blue box attitude
- "Steve Jobs Insult Response" on backward design
- Steve Jobs Pixar office design philosophy: great ideas happen from chance meetings on corridors, not in board rooms: officesnapshots.com/2012/07/16/pixar-headquarters-and-the-legacy-of-steve-jobs/
- Steve Jobs' 2005 Stanford Commencement Address
- Here's to the crazy ones: Ciro would like to believe that this is mostly written by Jobs, but apparently it was just written by an advertisement agency. Good job though.
You must watch this: Video "Bill Gates vs Steve Jobs by Epic Rap Battles of History (2012)".
Evil deeds:
- not recognizing his own daughter for many years??? en.wikipedia.org/wiki/Lisa_Brennan-Jobs
- lying to Steve Wozniak about the 5000 dollar Atari bonus: web.archive.org/web/20110612071502/http://www.woz.org/letters/general/91.html
- not giving stock to early garage employees: www.businessinsider.com/steve-wozniak-gave-early-apple-employees-10-million-in-stock-2014-9 OK, not a legal obligation. But... love?
This idea also comes up in other sources of course.
TODO clear attribution source:
Some people say, "Give the customers what they want." But that's not my approach. Our job is to figure out what they're going to want before they do. I think Henry Ford once said, "If I'd asked customers what they wanted, they would have told me, 'A faster horse!'" People don't know what they want until you show it to them. That's why I never rely on market research. Our task is to read things that are not yet on the page.
Ciro Santilli likes Magic: The Gathering and he was pleased when he learned that Steve Wozniak does too, and has an expensive collection: redsunsoft.com/2019/03/how-a-post-to-play-magictg-turned-into-an-afternoon-with-the-woz/
Some have actually been preserved: en.wikipedia.org/wiki/File:Blue_Box_in_museum.jpg
The Japanese name literally means:
- 富士 fuji, from Mount Fuji, which itself has unknown origin
- 通 tsū: telecommunications
They died so completely that Googling "ICL" now returns higher-ranked hits for other things, such as Imperial College London.
As of the 2020's, a slumbering giant.
But the pre-Internet impact of IBM was insane! Including notably:
- some of the most important business computers of the pre-personal computer era
- SQL
- IBM Generalized Markup Language, which is a predecessor to XML and HTML
This is a family of computers. It was a big success. It appears that this was a big unification project of previous architectures. And it also gave software portability guarantees with future systems, since writing software was starting to become as expensive as the hardware itself.
Media:
This was the first major commercial computer hit. Still vacuum tube-based.
Borrow from the Internet Archive for free: archive.org/details/manbehindmicroc000berl/page/n445/mode/2up
Borrow from the Internet Archive for free: archive.org/details/supermenstory00murr
Initial chapters shed good light on the formation of the military-industrial complex. Being backed by the military, especially just after World War II, was in itself enough credibility to start and foster a company.
It is funny to see how the first computers were very artisanal, made on a one-off basis.
Amazing how Control Data Corporation raised capital IPO style as a startup without a product. The dude was selling shares at dinner parties in his home.
Very interesting mention on page 70 of how Israel bought CDC's UNIVAC 1103, which Cray contributed greatly to design, and everyone knew that it was to make thermonuclear weapons, since that was what the big American labs were using them for. This mention should be added to: en.wikipedia.org/wiki/Nuclear_weapons_and_Israel but that's Extended Protected... the horrors of Wikipedia.
Another interesting insight is how "unintegrated" computers were back then. They were literally building computers out of individual vacuum tubes, then individual semiconducting transistors, a gate at a time. Then things got more and more integrated as time went on. That is why the now outdated word "microprocessor" existed. When processors started to fit into a single integrated circuit, they were truly micro compared to the monstrosities that existed previously.
Also, because integration was so weak initially, it was important to manually consider the length of wire that signals had to travel, and try to put components closer together to reduce the critical path, to be able to increase clock speeds. These constraints are of course also present in modern computer design, but they were just so much more visible in those days.
The book unfortunately does not give much detail on Cray's personal life, as mentioned in this book review: www.goodreads.com/review/show/1277733185?book_show_action=true. The childhood section is brief, his wedding is described in one paragraph, and his divorce in one sentence. Part of this is most likely because he was very private about his family: note how Wikipedia had missed his first wedding, and likely misattributes his children to the second wedding; see en.wikipedia.org/wiki/Talk:Seymour_Cray section "Weddings and Children".
Cray's work philosophy is highlighted many times in the book, and it is something worth keeping in mind:
- if a design is not working, start from scratch
- don't be the very first pioneer of a technology, let others work out the problems for you first, and then come second and win
Cray's final downfall was when he opted to use a promising but hard to work with material, gallium arsenide, instead of silicon as his way to try and speed up computers, see also: gallium arsenide vs silicon. He also went against the extremely strong current of the late 80's and early 90's pointing towards massively parallel systems based on off-the-shelf silicon Intel processors, a current that had DARPA support, and which was by far the path that won very dramatically as of 2020, see: Intel supercomputer market share.
www.threekit.com/blog/gltf-everything-you-need-to-know: comparison of several formats
Official demos: github.com/KhronosGroup/glTF-Sample-Assets These are visible at: github.khronos.org/glTF-Sample-Viewer-Release/ with a JavaScript viewer present at: github.com/KhronosGroup/glTF-Sample-Viewer TODO can you load models on the web?
Supports animations, e.g.:
gltf-viewer.donmccurdy.com/ doesn't work with those examples because they have separate asset files.
f3d just worked for it.
To test it, let's get two computers on the same local area network, e.g. connected to Wi-Fi on the same home modem router.
On computer B:
- find computer B's IP with the ip CLI tool. Suppose it is 192.168.1.102
- then run Ciro's nc HTTP test server (a minimal stand-in sketch is given just below)
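Ciro's actual test server is not reproduced here, but a minimal netcat-based stand-in along these lines should also work, assuming the OpenBSD netcat that ships with Ubuntu and the port 8000 used below:
# Serve a canned HTTP response on port 8000, one connection at a time, forever.
while true; do
  printf 'HTTP/1.1 200 OK\r\nContent-Length: 6\r\n\r\nhello\n' | nc -l 8000
done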
On computer A, run on terminal 1:
sudo tcpdump ip src 192.168.1.102 or dst 192.168.1.102
Then on terminal 2 make a test request:
curl 192.168.1.102:8000
Output on terminal 1:
17:14:22.017001 IP ciro-p14s.55798 > 192.168.1.102.8000: Flags [S], seq 2563867413, win 64240, options [mss 1460,sackOK,TS val 303966323 ecr 0,nop,wscale 7], length 0
17:14:22.073957 IP 192.168.1.102.8000 > ciro-p14s.55798: Flags [S.], seq 1371418143, ack 2563867414, win 65160, options [mss 1460,sackOK,TS val 171832817 ecr 303966323,nop,wscale 7], length 0
17:14:22.074002 IP ciro-p14s.55798 > 192.168.1.102.8000: Flags [.], ack 1, win 502, options [nop,nop,TS val 303966380 ecr 171832817], length 0
17:14:22.074195 IP ciro-p14s.55798 > 192.168.1.102.8000: Flags [P.], seq 1:82, ack 1, win 502, options [nop,nop,TS val 303966380 ecr 171832817], length 81
17:14:22.076710 IP 192.168.1.102.8000 > ciro-p14s.55798: Flags [P.], seq 1:80, ack 1, win 510, options [nop,nop,TS val 171832821 ecr 303966380], length 79
17:14:22.076710 IP 192.168.1.102.8000 > ciro-p14s.55798: Flags [.], ack 82, win 510, options [nop,nop,TS val 171832821 ecr 303966380], length 0
17:14:22.076727 IP ciro-p14s.55798 > 192.168.1.102.8000: Flags [.], ack 80, win 502, options [nop,nop,TS val 303966383 ecr 171832821], length 0
17:14:22.077006 IP ciro-p14s.55798 > 192.168.1.102.8000: Flags [F.], seq 82, ack 80, win 502, options [nop,nop,TS val 303966383 ecr 171832821], length 0
17:14:22.077564 IP 192.168.1.102.8000 > ciro-p14s.55798: Flags [F.], seq 80, ack 82, win 510, options [nop,nop,TS val 171832821 ecr 303966380], length 0
17:14:22.077578 IP ciro-p14s.55798 > 192.168.1.102.8000: Flags [.], ack 81, win 502, options [nop,nop,TS val 303966384 ecr 171832821], length 0
17:14:22.079429 IP 192.168.1.102.8000 > ciro-p14s.55798: Flags [.], ack 83, win 510, options [nop,nop,TS val 171832824 ecr 303966383], length 0
TODO understand them all! Possibly correlate with Wireshark, or use the -A option to dump content.
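For example, something along these lines should also show the HTTP request and response bodies in ASCII, reusing the 192.168.1.102 and port 8000 setup above:
sudo tcpdump -A 'host 192.168.1.102 and tcp port 8000'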
Amazing tool that captures packets and disassembles them. Allows you to click an interactive tree that represents Ethernet, TCP/IP and application layer protocols like HTTP.
Start capture immediately from CLI, capture packets to/from 192.168.1.102:
sudo wireshark -f 'host 192.168.1.102' -k
Capture by protocol instead:
sudo wireshark -f http -k
sudo wireshark -f icmp -k
Filter by both protocol and host:
sudo wireshark -f 'host 192.168.1.102 and icmp' -k
For application layer capture filtering, the best you can do is by port:
sudo wireshark -f 'tcp port 80'
There is an http filter, but only as a Wireshark display filter.
Sample usage:
sudo tshark -f 'host 192.168.1.102'
This produces simple one-liners for each request.
What you likely want is the -V option, which fully disassembles each frame much as you can do in the Wireshark GUI:
sudo tshark -V -f 'host 192.168.1.102'
TODO didn't manage to get it working with TP Link ARCHER VR2800 even though it shows DHCP as enabled and it also shows MAC addresses and corresponding hostnames in the router management interface.
For IP-level communication, askubuntu.com/questions/22835/how-to-network-two-ubuntu-computers-using-ethernet-without-a-router/116680#116680 just worked between P51 and P14s both on Ubuntu 23.10 connected with a regular Cat 5e cable.
On both machines, first we found the Ethernet cable interface name with the ip CLI tool:
ip a
which outputs on the P14s:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether fc:5c:ee:24:fb:b4 brd ff:ff:ff:ff:ff:ff
3: wlp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 04:7b:cb:cc:1b:10 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.123/24 brd 192.168.1.255 scope global dynamic noprefixroute wlp2s0
valid_lft 61284sec preferred_lft 61284sec
inet6 fe80::3597:15d8:74ff:e112/64 scope link noprefixroute
valid_lft forever preferred_lft forever
so the interface was enp1s0f0, because wlp is wireless and lo is localhost.
So on the P14s we assign it the address 10.0.0.10, so that the P51 can reach it:
sudo ip address add 10.0.0.10/24 dev enp1s0f0
Then analogously on the P51, giving it the address 10.0.0.20 so that the P14s can reach it:
sudo ip address add 10.0.0.20/24 dev enp0s31f6
And after that, the P51 can:
ping 10.0.0.10
and the P14s can:
ping 10.0.0.20
TODO: after a few seconds, the settings appear to be forgotten, and ping stops working unless you do the sudo ip address add on the local machine again. This seems to happen after a popup appears saying "Activation of network connection failed", as it fails to obtain Internet from the cable.
TODO: list and delete such manual assignments we've made.
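For that last TODO, the same ip tool can likely do it, e.g. with the interface name from the example above:
# List the addresses currently assigned to the interface:
ip address show dev enp1s0f0
# Delete the manual assignment made above:
sudo ip address del 10.0.0.10/24 dev enp1s0f0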
This one is not generally seen by software, which mostly operates starting from OSI layer 2.
A good project to see UARTs at work in all their beauty is to connect two Raspberry Pis via UART, and then:
- type in one and see characters appear in the other: scribles.net/setting-up-uart-serial-communication-between-raspberry-pis/
- send data via a script: raspberrypi.stackexchange.com/questions/29027/how-should-i-properly-communicate-2-raspberry-pi-via-uart
Part of the beauty of this is that you can just connect both boards directly by hand with a few simple jumper wires. Its simplicity is just quite refreshing. Sure, you could do something like that for any physical layer link presumably...
Remember that you can only have one GNU screen connected at a time or else they will mess each other up: unix.stackexchange.com/questions/93892/why-is-screen-is-terminating-without-root/367549#367549
On Ubuntu 22.04 you can run screen without sudo by adding yourself to the dialout group with:
sudo usermod -a -G dialout $USER
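For reference, attaching to a UART with GNU screen typically looks something like the following; the device path and the 115200 baud rate are assumptions that depend on the board and serial adapter:
screen /dev/ttyUSB0 115200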
When non-specialists say "Ethernet cable", they usually mean twisted pair for Ethernet over twisted pair.
But of course, this term is much more generic to a more specialized person, since notably fiber optics are also extensively used in Ethernet over fiber.
This is the most common home "Ethernet cable" as of 2024. It is essentially ubiquitous. According to the existing Ethernet physical layer standards, the maximum speed supported over it is 2.5 Gbit/s.
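To check what speed a given wired link actually negotiated on Linux, ethtool can be used; the interface name here is an assumption:
sudo ethtool enp1s0f0 | grep Speed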
The frequency range of Wi-Fi, which falls in the microwave range, is likely chosen to allow faster data transfer than say, FM broadcasting, while still being relatively transparent to walls (though not as transparent as lower frequencies like FM).