Why are pages 4 KiB anyway?
There is a trade-off between memory wasted in:
  • page tables
  • extra padding memory within pages
This can be seen with the extreme cases:
  • if the page size were 1 byte:
    • granularity would be great, and the OS would never have to allocate unneeded padding memory
    • but the page table would have 2^32 entries, and take up the entire memory!
  • if the page size were 4 GiB:
    • we would need to swap 4 GiB to disk every time a new process becomes active
    • the page table would have a single entry, so it would take almost no memory at all
x86 designers have found that 4 KiB pages are a good middle ground.
The problem with a single-level paging scheme is that it would take up too much RAM: 4G / 4K = 1M entries per process.
If each entry is 4 bytes long, that would make 4M per process, which is too much even for a desktop computer: ps -A | wc -l says that I am running 244 processes right now, so that would take around 1GB of my RAM!
For this reason, x86 developers decided to use a multi-level scheme that reduces RAM usage.
The downside of this system is that it has a slightly higher access time, as we need to access RAM more times for each translation.
The algorithmically minded will have noticed that paging requires an associative array (like a Java Map or a Python dict) abstract data structure where:
  • the keys are linear page addresses, thus of integer type
  • the values are physical page addresses, also of integer type
The single-level paging scheme uses a simple array implementation of the associative array:
  • the keys are the array indices
  • this implementation is very fast in time
  • but it is too inefficient in memory
and in C pseudo-code it looks like this:
linear_address[0]      = physical_address_0
linear_address[1]      = physical_address_1
linear_address[2]      = physical_address_2
...
linear_address[2^20-1] = physical_address_N
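For concreteness, here is a minimal runnable C sketch of such a single-level lookup. The constants and the mapping stored in the table are made-up illustration values, not anything dictated by x86:

#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 12                  /* 4 KiB pages */
#define NENTRIES  (1u << 20)          /* 2^20 entries cover 32-bit addresses */

/* One flat array: index = linear page number, value = physical page number.
 * This alone is 4 MiB of RAM, which is the whole problem. */
static uint32_t page_table[NENTRIES];

static uint32_t translate(uint32_t linear) {
    uint32_t page   = linear >> PAGE_BITS;              /* the key */
    uint32_t offset = linear & ((1u << PAGE_BITS) - 1);
    return (page_table[page] << PAGE_BITS) | offset;
}

int main(void) {
    page_table[0x3] = 0x5;             /* made-up mapping: page 3 -> frame 5 */
    printf("%x\n", translate(0x3ABC)); /* prints 5abc */
    return 0;
}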
But there is another simple associative array implementation that overcomes the memory problem: an (unbalanced) K-ary tree.
A K-ary tree is just like a binary tree, but with K children instead of 2.
Using a K-ary tree instead of an array implementation has the following trade-offs:
  • it uses way less memory
  • it is slower since we have to de-reference extra pointers
In C pseudo-code, a 2-level K-ary tree with K = 2^10 looks like this:
level0[0] = &level1_0[0]
    level1_0[0]      = physical_address_0_0
    level1_0[1]      = physical_address_0_1
    ...
    level1_0[2^10-1] = physical_address_0_N
level0[1] = &level1_1[0]
    level1_1[0]      = physical_address_1_0
    level1_1[1]      = physical_address_1_1
    ...
    level1_1[2^10-1] = physical_address_1_N
...
level0[N] = &level1_N[0]
    level1_N[0]      = physical_address_N_0
    level1_N[1]      = physical_address_N_1
    ...
    level1_N[2^10-1] = physical_address_N_N
and we have the following arrays:
  • one directory, which has 2^10 elements. Each element contains a pointer to a page table array.
  • up to 2^10 page table arrays. Each one has 2^10 4-byte page table entries.
and it still contains 2^10 * 2^10 = 2^20 possible keys.
K-ary trees can save a lot of space, because if we only have one key, then we only need the following arrays:
  • one directory with 2^10 entries
  • one page table at directory[0] with 2^10 entries
  • all other directory[i] are marked as invalid, don't point to anything, and we don't allocate page tables for them at all
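Here is a minimal runnable C sketch of the corresponding two-level walk. Everything concrete in it (the mapping, the use of NULL for absent subtrees) is an illustration assumption; real hardware uses a present bit rather than a null pointer:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define K 1024                         /* 2^10 entries per level */

static uint32_t *directory[K];         /* level 0: pointers to page tables */

static uint32_t translate(uint32_t linear) {
    uint32_t dir   = linear >> 22;             /* top 10 bits */
    uint32_t table = (linear >> 12) & (K - 1); /* middle 10 bits */
    uint32_t off   = linear & 0xFFF;           /* low 12 bits */
    if (!directory[dir]) {                     /* unallocated subtree */
        fprintf(stderr, "page fault\n");
        exit(1);
    }
    return (directory[dir][table] << 12) | off;
}

int main(void) {
    /* A single key only needs one directory and one page table:
     * 2 * 4 KiB instead of the 4 MiB flat array. */
    directory[0] = calloc(K, sizeof(uint32_t));
    directory[0][3] = 5;                       /* made-up mapping */
    printf("%x\n", translate(0x3ABC));         /* prints 5abc */
    return 0;
}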
64 bits is still too much address for current RAM sizes, so most architectures use fewer bits.
x86_64 uses 48 bits (256 TiB), and legacy mode's PAE already allows 52-bit addresses (4 PiB). 56 bits is a likely future candidate.
12 of those 48 bits are already reserved for the offset, which leaves 36 bits.
If a 2 level approach is taken, the best split would be two 18 bit levels.
But that would mean that the page directory would have 2^18 = 256K entries, which would take too much RAM: close to single-level paging on 32-bit architectures!
Therefore, 64-bit architectures add further page levels, commonly 3 or 4.
x86_64 uses 4 levels in a 9 | 9 | 9 | 9 scheme, so that the top level takes up only 2^9 entries.
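As a sketch, the four 9-bit indices and the 12-bit offset can be extracted from a 48-bit linear address like this in C; the address itself is a made-up example, and the variable names just follow the common PML4/PDPT/PD/PT terminology:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t linear = 0x00007F1234567ABCull;  /* made-up 48-bit address */
    unsigned pml4 = (linear >> 39) & 0x1FF;   /* bits 47-39 */
    unsigned pdpt = (linear >> 30) & 0x1FF;   /* bits 38-30 */
    unsigned pd   = (linear >> 21) & 0x1FF;   /* bits 29-21 */
    unsigned pt   = (linear >> 12) & 0x1FF;   /* bits 20-12 */
    unsigned off  =  linear        & 0xFFF;   /* bits 11-0  */
    printf("%x %x %x %x %x\n", pml4, pdpt, pd, pt, off);
    return 0;
}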
The 48 bits are split equally into two disjoint parts:
----------------- FFFFFFFF FFFFFFFF
Top half
----------------- FFFF8000 00000000


Not addressable


----------------- 00007FFF FFFFFFFF
Bottom half
----------------- 00000000 00000000
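The split follows from the canonical address rule: bits 63-48 must all be copies of bit 47. A small C sketch of that check, as my own illustration:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* An address is canonical if bits 63-47 are all 0 (bottom half)
 * or all 1 (top half). */
static bool is_canonical(uint64_t addr) {
    uint64_t top = addr >> 47;        /* the 17 bits 63-47 */
    return top == 0 || top == 0x1FFFF;
}

int main(void) {
    printf("%d\n", is_canonical(0x00007FFFFFFFFFFFull)); /* 1 */
    printf("%d\n", is_canonical(0xFFFF800000000000ull)); /* 1 */
    printf("%d\n", is_canonical(0x0000800000000000ull)); /* 0 */
    return 0;
}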
A 5-level scheme started emerging in 2016: software.intel.com/sites/default/files/managed/2b/80/5-level_paging_white_paper.pdf It allows 57-bit linear addresses (and 52-bit physical addresses) while keeping 4k pagetables.
Physical address extension.
With 32 bits, only 4GB RAM can be addressed.
This started becoming a limitation for large servers, so Intel introduced the PAE mechanism with the Pentium Pro.
To relieve the problem, Intel added 4 new address lines, so that 64GB could be addressed.
Page table structure is also altered if PAE is on. The exact way in which it is altered depends on whether PSE is on or off.
PAE is turned on and off via the PAE bit of cr4.
Even if the total addressable memory is 64GB, individual processes are still only able to use up to 4GB. The OS can however put different processes on different 4GB chunks.
Depending on whether PAE and PSE are active, different paging level schemes are used:
  • no PAE and no PSE: 10 | 10 | 12
  • no PAE and PSE: 10 | 22.
    22 bits are the offset within the 4 MiB page, since 22 bits address 4 MiB.
  • PAE and no PSE: 2 | 9 | 9 | 12
    The design reason why 9 is used twice instead of 10 is that entries can no longer fit into 32 bits, which were all filled up by 20 address bits and 12 meaningful or reserved flag bits.
    The reason is that 20 bits are no longer enough to represent the address of page tables: 24 bits are now needed because of the 4 extra wires added to the processor.
    Therefore, the designers decided to increase entry size to 64 bits, and to make them fit into a single page table it is necessary to reduce the number of entries from 2^10 to 2^9.
    The leading 2 bits index a new page level called the Page Directory Pointer Table (PDPT), which points to page directories and fills out the 32-bit linear address. PDPT entries are also 64 bits wide.
    cr3 now points to PDPTs, which must live in the first 4GB of memory and be aligned on 32-byte multiples for addressing efficiency. This means that cr3 now has 27 significant bits instead of 20: 2^27 32-byte-aligned addresses (2^27 * 2^5 = 2^32) cover the whole of the first 4GB.
  • PAE and PSE: 2 | 9 | 21
    Designers decided to keep a 9-bit-wide field so each page table still fits into a single page.
    This leaves 23 bits. Leaving 2 for the PDPT to keep things uniform with the PAE case without PSE leaves 21 for the offset, meaning that pages are 2 MiB wide instead of 4 MiB.
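As a sketch, the 2 | 9 | 9 | 12 split for PAE without PSE can be expressed in C like this; the address is a made-up example:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t linear = 0xC1234ABC;            /* made-up 32-bit address */
    unsigned pdpt = (linear >> 30) & 0x3;    /* bits 31-30: PDPT index */
    unsigned pd   = (linear >> 21) & 0x1FF;  /* bits 29-21: directory  */
    unsigned pt   = (linear >> 12) & 0x1FF;  /* bits 20-12: table      */
    unsigned off  =  linear        & 0xFFF;  /* bits 11-0:  offset     */
    printf("%x %x %x %x\n", pdpt, pd, pt, off);
    return 0;
}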
The Translation Lookaside Buffer (TLB) is a cache for paging addresses.
Since it is a cache, it shares many of the design issues of the CPU cache, such as associativity level.
This section shall describe a simplified fully associative TLB with 4 single address entries. Note that like other caches, real TLBs are not usually fully associative.
After a translation between linear and physical address happens, it is stored on the TLB. For example, a 4 entry TLB starts in the following state:
  valid  linear  physical
  -----  ------  --------
> 0      00000   00000
  0      00000   00000
  0      00000   00000
  0      00000   00000
The > indicates the current entry to be replaced.
And after a page linear address 00003 is translated to a physical address 00005, the TLB becomes:
  valid  linear  physical
  -----  ------  --------
  1      00003   00005
> 0      00000   00000
  0      00000   00000
  0      00000   00000
and after a second translation of 00007 to 00009 it becomes:
  valid  linear  physical
  -----  ------  --------
  1      00003   00005
  1      00007   00009
> 0      00000   00000
  0      00000   00000
Now if 00003 needs to be translated again, hardware first looks up the TLB and finds the physical address with a single access, 00003 --> 00005, instead of walking the page tables.
Of course, 00000 is not on the TLB since no valid entry contains 00000 as a key.
When the TLB fills up, older addresses are overwritten. Just like the CPU cache, the replacement policy is a potentially complex operation, but a simple and reasonable heuristic is to remove the least recently used (LRU) entry.
With LRU, starting from state:
  valid  linear  physical
  -----  ------  --------
> 1      00003   00005
  1      00007   00009
  1      00009   00001
  1      0000B   00003
adding 0000D -> 0000A would give:
  valid  linear  physical
  -----  ------  --------
  1      0000D   0000A
> 1      00007   00009
  1      00009   00001
  1      0000B   00003
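The walkthrough above can be condensed into a small C model of a fully associative TLB with LRU replacement. Everything in it (4 entries, the aging scheme) is a deliberate simplification, not real hardware:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TLB_SIZE 4

struct tlb_entry {
    bool valid;
    uint32_t linear;     /* linear page number: the key     */
    uint32_t physical;   /* physical page number: the value */
    unsigned age;        /* lookups since last use, for LRU */
};

static struct tlb_entry tlb[TLB_SIZE];

/* Returns true on a hit and stores the translation in *physical. */
static bool tlb_lookup(uint32_t linear, uint32_t *physical) {
    for (int i = 0; i < TLB_SIZE; i++)
        tlb[i].age++;
    for (int i = 0; i < TLB_SIZE; i++) {
        if (tlb[i].valid && tlb[i].linear == linear) {
            tlb[i].age = 0;                  /* most recently used */
            *physical = tlb[i].physical;
            return true;
        }
    }
    return false;
}

/* Inserts a translation, preferring invalid slots, else evicting the LRU. */
static void tlb_insert(uint32_t linear, uint32_t physical) {
    int victim = 0;
    for (int i = 0; i < TLB_SIZE; i++) {
        if (!tlb[i].valid) { victim = i; break; }
        if (tlb[i].age > tlb[victim].age)
            victim = i;
    }
    tlb[victim] = (struct tlb_entry){ true, linear, physical, 0 };
}

int main(void) {
    uint32_t phys;
    tlb_insert(0x00003, 0x00005);
    tlb_insert(0x00007, 0x00009);
    if (tlb_lookup(0x00003, &phys))
        printf("hit:  00003 -> %05x\n", phys);  /* hit:  00003 -> 00005 */
    if (!tlb_lookup(0x00000, &phys))
        printf("miss: 00000\n");
    return 0;
}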
Using the TLB makes translation faster, because a full page walk takes one RAM access per page table level: 2 on a simple 32-bit scheme, but 3 or 4 on 64-bit architectures.
The TLB is usually implemented as an expensive type of RAM called content-addressable memory (CAM). CAM implements an associative map in hardware, that is, a structure that, given a key (linear address), retrieves a value (physical address).
Mappings could also be implemented on RAM addresses, but CAM mappings may require far fewer entries than a RAM mapping.
For example, a map in which:
  • both keys and values have 20 bits (the case of a simple paging scheme)
  • at most 4 values need to be stored at any given time
could be stored in a TLB with 4 entries:
linear  physical
------  --------
00000   00001
00001   00010
00010   00011
FFFFF   00000
However, to implement this with RAM, it would be necessary to have 2^20 addresses:
linear  physical
------  --------
00000   00001
00001   00010
00010   00011
... (from 00011 to FFFFE)
FFFFF   00000
which would be even more expensive than using a TLB.
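In software terms, the contrast is a tiny searched array versus a huge directly indexed one. A rough C sketch of both representations, sized like the example above:

#include <stdint.h>

/* CAM-like: 4 entries, matched by key. Hardware compares all keys in
 * parallel; modelled here as a linear scan. */
struct cam_entry { uint32_t linear, physical; };
static struct cam_entry cam[4] = {
    { 0x00000, 0x00001 },
    { 0x00001, 0x00010 },
    { 0x00010, 0x00011 },
    { 0xFFFFF, 0x00000 },
};

static int cam_lookup(uint32_t linear, uint32_t *physical) {
    for (int i = 0; i < 4; i++) {
        if (cam[i].linear == linear) {
            *physical = cam[i].physical;
            return 1;
        }
    }
    return 0;                        /* miss */
}

/* RAM-like: one slot per possible key, 2^20 entries,
 * even though only 4 of them are ever used. */
static uint32_t ram[1 << 20];

int main(void) {
    uint32_t phys;
    ram[0x00000] = 0x00001;          /* same mapping, direct indexing */
    return cam_lookup(0x00000, &phys) ? 0 : 1;
}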
The Linux kernel makes extensive usage of the paging features of x86 to allow fast process switches with small data fragmentation.
There are also however some features that the Linux kernel might not use, either because they are only for backwards compatibility, or because the Linux devs didn't feel it was worth it yet.
The Linux Kernel reserves two zones of virtual memory:
  • one for kernel memory
  • one for programs
The exact split is configured by CONFIG_VMSPLIT_.... By default:
  • on 32-bit:
    • the bottom 3/4 is program space: 00000000 to BFFFFFFF
    • the top 1/4 is kernel memory: C0000000 to FFFFFFFF, like this:
      ------------------ FFFFFFFF
      Kernel
      ------------------ C0000000
      ------------------ BFFFFFFF
      
      
      Process
      
      
      ------------------ 00000000
  • on 64-bit: currently only 48 bits are actually used, split into two equally sized disjoint spaces. The Linux kernel just assigns:
    • the bottom part to processes: 00000000 00000000 to 00007FFF FFFFFFFF
    • the top part to the kernel: FFFF8000 00000000 to FFFFFFFF FFFFFFFF, like this:
      ------------------ FFFFFFFF FFFFFFFF
      Kernel
      ------------------ FFFF8000 00000000
      
      
      (not addressable)
      
      
      ------------------ 00007FFF FFFFFFFF
      Process
      ------------------ 00000000 00000000
Kernel memory is also paged.
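A quick way to observe the process half from userland is to print a few pointers; a small sketch (the exact values vary per run because of ASLR):

#include <stdio.h>
#include <stdlib.h>

int global;

int main(void) {
    int local;
    void *heap = malloc(1);
    /* On x86_64 Linux, all of these print values at or below
     * 00007FFF FFFFFFFF, i.e. inside the process half of the split. */
    printf("stack:  %p\n", (void *)&local);
    printf("heap:   %p\n", heap);
    printf("global: %p\n", (void *)&global);
    free(heap);
    return 0;
}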
