Technology company Updated +Created
How to blackout your window without drilling / Previous failed attempts Updated +Created
I also believe in publishing null results, so here goes.
Thick cardboard paper and Gorilla Tape: the intense heat of the Sun made the cardboard bend, and even the Gorilla Tape could not hold it, leading to light leakage. Even worse, it started to smell a bit, and I became afraid that it could catch fire, so don't do this! Maybe I will try coating it with aluminium foil next time, but I'm afraid the foil might stick to the glass. In any case, even if those setups work, your room may be permanently very dark depending on how far the window opens, which can lead to other problems such as mold. Another downside of this method is that the tape is extremely sticky, and especially difficult to remove if it touches the glass, where you can't scrape it off with metallic items without scratching. I had to get a solvent and use a lot of elbow grease to get rid of it.
I have tried a few sleeping masks, but none of them were enough on their own: there is always some light leakage around the nose, especially as you turn over in the night, and some of them are too hot. The ones I tried:
I also considered getting one of those "Perfect Fit Blinds" www.blindsdirect.co.uk/perfect-fit-roller-blinds (archive) which fit between the glass and the insulation. This looks like it could work. But I didn't go for it in the end because my window has 3 glass panels, so I would have to get three of those blinds separately.
Latin phrase Updated +Created
Optimization problem Updated +Created
Secondary school Updated +Created
x86 Paging Tutorial / 64-bit architectures Updated +Created
64 bits is still far more address space than current RAM sizes require, so most architectures use fewer bits.
x86_64 uses 48 bits (256 TiB), and legacy mode's PAE already allows 52-bit physical addresses (4 PiB). 56 bits is a likely future candidate.
12 of those 48 bits are already reserved for the offset, which leaves 36 bits.
If a 2-level approach were taken, the best split would be two 18-bit levels.
But that would mean that the page directory would have 2^18 = 256K entries, which would take too much RAM: at 8 bytes per entry, that is 2 MiB per process for the top level alone, close to single-level paging for 32-bit architectures!
Therefore, 64-bit architectures split the page table into even more levels, commonly 3 or 4.
x86_64 uses 4 levels in a 9 | 9 | 9 | 9 scheme, so that the top level only takes up 2^9 entries.
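As a minimal sketch of that split (in Python, with a made-up example address), the four 9-bit indices and the 12-bit page offset can be extracted with shifts and masks:

# Split a 48-bit x86_64 virtual address into its 9 | 9 | 9 | 9 | 12 parts.
# The address below is made up purely for illustration.
vaddr = 0x00007F1234567ABC

pml4_idx = (vaddr >> 39) & 0x1FF  # bits 47-39: page map level 4 index
pdpt_idx = (vaddr >> 30) & 0x1FF  # bits 38-30: page directory pointer index
pd_idx   = (vaddr >> 21) & 0x1FF  # bits 29-21: page directory index
pt_idx   = (vaddr >> 12) & 0x1FF  # bits 20-12: page table index
offset   = vaddr & 0xFFF          # bits 11-0:  offset inside the 4 KiB page

print(pml4_idx, pdpt_idx, pd_idx, pt_idx, hex(offset))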
The 48 bits are split equally into two disjoint parts:
----------------- FFFFFFFF FFFFFFFF
Top half
----------------- FFFF8000 00000000


Not addressable


----------------- 00007FFF FFFFFFFF
Bottom half
----------------- 00000000 00000000
A 5-level scheme was emerging as of 2016: software.intel.com/sites/default/files/managed/2b/80/5-level_paging_white_paper.pdf which extends virtual addresses to 57 bits (and physical addresses to 52 bits) with 4 KiB pages.
prime-number-theorem Updated +Created
Consider this a study in failed computational number theory.
The approximation converges really slowly, and we can't easily go far enough to see that the ratio converges to 1 with only awk and primes:
sudo apt install bsdgames
cd prime-number-theorem
./main.py 100000000
Runs in about 30 minutes, tested on Ubuntu 22.10 on a P51, producing:
Figure 1. Linear vs approximation plot. At this scale the approximation almost entirely overlaps the exact prime count, and their difference is too close to 0 to be visible.
Figure 2. Difference between the exact prime count and the approximation. It is clear that the difference diverges, albeit very slowly.
Figure 3. Ratio between the exact prime count and the approximation. We just don't have enough points to clearly see that it is converging to 1.0: the convergence truly is very slow. The logarithmic integral approximation is much, much better, but we can't calculate it in awk, sadface.
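To play with the ratio without bsdgames, here is a minimal self-contained Python sketch (not the actual main.py; the limit is arbitrary and much smaller than the run above):

import math

# Sieve of Eratosthenes up to n, then print pi(x) / (x / ln(x)) at powers
# of 10. The prime number theorem says this ratio tends to 1, very slowly.
n = 10**7
sieve = bytearray([1]) * (n + 1)
sieve[0] = sieve[1] = 0
for i in range(2, int(n**0.5) + 1):
    if sieve[i]:
        sieve[i*i::i] = bytearray(len(range(i*i, n + 1, i)))

pi = 0
checkpoints = {10**k for k in range(1, 8)}
for x in range(2, n + 1):
    pi += sieve[x]
    if x in checkpoints:
        print(x, pi, pi / (x / math.log(x)))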
But looking at: en.wikipedia.org/wiki/File:Prime_number_theorem_ratio_convergence.svg we see that it takes way longer to get closer to 1: even at the largest values shown it is still not super close. Inspecting the code there we see:
(* Supplement with larger known PrimePi values that are too large for Mathematica to compute *)
LargePiPrime = {{10^13, 346065536839}, {10^14, 3204941750802},
  {10^15, 29844570422669}, {10^16, 279238341033925},
  {10^17, 2623557157654233}, {10^18, 24739954287740860},
  {10^19, 234057667276344607}, {10^20, 2220819602560918840},
  {10^21, 21127269486018731928}, {10^22, 201467286689315906290},
  {10^23, 1925320391606803968923}, {10^24, 18435599767349200867866}};
so OK, it is not something doable on a personal computer just like that.
Canada Updated +Created
Feedback loop Updated +Created
Fingerprinting (cybersecurity) Updated +Created
Personal computer Updated +Created
Suona piece Updated +Created
x86 Paging Tutorial / Application Updated +Created
Paging makes it easier to compile and run two programs or threads at the same time on a single computer.
For example, when you compile two programs, the compiler does not know if they are going to be running at the same time or not.
So nothing prevents it from using the same RAM address, say, 0x1234, to store a global variable.
And thread stacks, which must be contiguous and keep growing down until they overwrite each other, are an even bigger issue!
But if two programs use the same address and run at the same time, this is obviously going to break them!
Paging solves this problem beautifully by adding one degree of indirection:
(logical) ------------> (physical)
             paging
Where:
  • logical addresses are what userland programs see, e.g. the contents of rsi in mov eax, [rsi].
    They are often called "virtual" addresses as well.
  • physical addresses can be thought of as the values that go to physical RAM index wires.
    But keep in mind that this is not 100% true because of further indirections such as:
Compilers don't need to worry about other programs: they just use simple logical addresses.
As far as programs are concerned, they think they can use any address in the 4 GiB range from 0 to 0xFFFFFFFF (2^32 - 1) on 32-bit systems.
The OS then sets up paging so that identical logical addresses will go into different physical addresses and not overwrite each other.
This makes it much simpler to compile programs and run them at the same time.
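As a toy illustration (Python, with made-up page table contents), each process gets its own page table, so the same logical address can land on different physical addresses:

PAGE_SIZE = 0x1000  # 4 KiB pages

# Made-up page tables: virtual page number -> physical frame number.
page_table_1 = {0x1: 0x10}
page_table_2 = {0x1: 0x20}

def translate(page_table, vaddr):
    vpage, offset = divmod(vaddr, PAGE_SIZE)
    return page_table[vpage] * PAGE_SIZE + offset

# Both programs use logical address 0x1234, but they end up at
# different physical addresses, so neither overwrites the other.
print(hex(translate(page_table_1, 0x1234)))  # 0x10234
print(hex(translate(page_table_2, 0x1234)))  # 0x20234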
Paging achieves that goal, and in addition:
  • the switch between programs is very fast, because it is implemented by hardware
  • the memory of both programs can grow and shrink as needed without too much fragmentation
  • one program can never access the memory of another program, even if it wanted to.
    This is good both for security, and to prevent bugs in one program from crashing other programs.
Or if you like non-funny jokes:
Figure 1. Comparison between the Linux kernel userland memory virtualization and The Matrix. Is this RAM real?
Double cover Updated +Created
Hamburg Updated +Created
Learn in public Updated +Created
This is the most extreme form of peer tutoring: its natural final consequence given the Internet Age.
Ciro's Edict #8 / OurBigBook.com Updated +Created
x86 Paging Tutorial / Invalidating TLB entries Updated +Created
When the process changes, cr3 changes to point to the page table of the new current process.
This creates a problem: the TLB is now filled with a bunch of cached entries for the old process.
A simple and naive solution would be to completely invalidate the TLB whenever the cr3 changes.
However, this would not be very efficient, because it often happens that we switch back to process 1 before process 2 has used up all of the TLB cache entries.
A better solution is to tag each TLB entry with an Address Space Identifier (ASID). Basically, the OS assigns a different ASID to each process, and TLB entries are automatically tagged with that ASID. This way, when a process makes an access, the TLB can determine if a hit is actually for the current process, or if it is an old address that coincides with one from another process.
The x86 also offers the invlpg instruction, which explicitly invalidates a single TLB entry. Other architectures offer even more instructions to invalidate TLB entries, such as invalidating all entries in a given range.
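As a toy model of that tagging (Python, with made-up ASIDs and mappings), keying the TLB by (ASID, virtual page) lets entries from different processes coexist:

# Toy ASID-tagged TLB: entries are keyed by (asid, virtual page number),
# so a context switch does not require flushing the whole TLB.
tlb = {}

def tlb_fill(asid, vpage, pframe):
    tlb[(asid, vpage)] = pframe

def tlb_lookup(asid, vpage):
    return tlb.get((asid, vpage))  # None means a TLB miss

def invlpg(asid, vpage):
    # Rough analogue of x86 invlpg: drop a single entry.
    tlb.pop((asid, vpage), None)

# Process 1 (ASID 1) and process 2 (ASID 2) both use virtual page 5,
# but their cached translations do not clash:
tlb_fill(1, 5, 0x10)
tlb_fill(2, 5, 0x20)
assert tlb_lookup(1, 5) == 0x10
assert tlb_lookup(2, 5) == 0x20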
Axon Updated +Created
EPR paradox Updated +Created
