The lower the level you go into a computer, the harder it is to observe things
This is a general principle of software/hardware design that Ciro feels has wide applicability.
The most extreme case of this is of course the integrated circuit itself, in which it is essentially impossible (?) to observe the specific value of some individual wire at some point.
Somewhat at the other extreme, we have high level programming languages running on top of an operating system: at this point, you can just GDB step debug your program, print the value of any variable or memory location, and fully understand anything you want, provided that you manage to reach that point of interest easily.
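As a minimal sketch of what that looks like in practice (the file name and variable below are purely illustrative, assuming GCC and GDB are available):

    // main.c: build with debug info so GDB can see everything:
    //   gcc -g -O0 -o main main.c
    // then step through it:
    //   gdb ./main
    //   (gdb) break main
    //   (gdb) run
    //   (gdb) next
    //   (gdb) print x
    #include <stdio.h>

    int main(void) {
        int x = 42;             // any variable can be printed from GDB
        x += 1;                 // step over this line and print x again
        printf("x = %d\n", x);
        return 0;
    }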
And for anything in between we have various intermediate levels of complication, the most notable perhaps being developing the operating system itself. At this level, you can't step debug so easily (although techniques do exist, as sketched below). For early boot or bootloaders, for example, you might want to use JTAG on real hardware.
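One such technique, as a rough sketch: if you run the kernel under an emulator instead of real hardware, QEMU's built-in GDB stub gives you step debugging back. The image names below are placeholders for whatever kernel build you happen to have:

    # Start QEMU frozen at reset (-S) with a GDB server on TCP port 1234 (-s).
    qemu-system-x86_64 -s -S -kernel bzImage -nographic -append 'console=ttyS0'
    # In another terminal, attach GDB to the kernel image with debug symbols.
    gdb vmlinux
    (gdb) target remote :1234
    (gdb) break start_kernel
    (gdb) continue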
In parallel to this, there is also another very important pair of closely linked tradeoffs:
  • the lower level at which something is implemented, the faster it runs
  • emulation gives you observability back, at the cost of slower runtime
Emulation also has another potential downside: unless you are very careful to implement things correctly, your model might not be representative of the real thing. There may also be important tradeoffs between how closely the model resembles the real thing and how fast it runs. For example, QEMU's use of binary translation allows it to run orders of magnitude faster than gem5. However, you cannot make meaningful predictions about system performance with QEMU, since it does not model key elements like the cache or the CPU pipeline.
Instrumentation is another technique that can be used to achieve greater observability.
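As a minimal sketch of manual instrumentation in a plain C program, you add code whose only purpose is to report what the program is doing, trading some runtime and clutter for visibility. The TRACE macro is just an illustrative name, not any standard API:

    #include <stdio.h>

    // Hypothetical tracing macro: logs file, line and a message to stderr.
    #define TRACE(fmt, ...) \
        fprintf(stderr, "%s:%d: " fmt "\n", __FILE__, __LINE__, __VA_ARGS__)

    static int square(int x) {
        TRACE("square() called with x = %d", x);
        return x * x;
    }

    int main(void) {
        int y = square(7);
        TRACE("square() returned %d", y);
        printf("%d\n", y);
        return 0;
    }

Tools like gcov, the sanitizers or Valgrind automate this kind of insertion at compile time or at runtime, but the principle is the same: extra code added purely for observability, at some cost in speed.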