x86 Paging Tutorial by Ciro Santilli. Updated 2025-07-19.
This tutorial explains the very basics of how paging works, with a focus on x86, although most high-level concepts also apply to other instruction set architectures, e.g. ARM.
The goals are to:
  • demonstrate minimal concrete simplified paging examples that will be useful to those learning paging for the first time
  • explain the motivation behind paging
This tutorial was extracted and expanded from this Stack Overflow answer.
Like everything else in programming, the only way to really understand this is to play with minimal examples.
What makes this a "hard" subject is that the minimal example is large because you need to make your own small OS.
Paging makes it easier to compile and run two programs or threads at the same time on a single computer.
For example, when you compile two programs, the compiler does not know if they are going to be running at the same time or not.
So nothing prevents it from using the same RAM address, say, 0x1234, to store a global variable.
But if two programs use the same address and run at the same time, this is obviously going to break them!
And thread stacks, which must be contiguous and keep growing down until they run into each other, are an even bigger issue!
Paging solves this problem beautifully by adding one degree of indirection:
(logical) ------------> (physical)
             paging
Where:
  • logical addresses are what userland programs see, e.g. the contents of rsi in mov eax, [rsi].
    They are often called "virtual" addresses as well.
  • physical addresses can be thought of as the values that go to the physical RAM index wires.
    But keep in mind that this is not 100% true because of further indirections, such as the remapping done by the memory controller.
Compilers don't need to worry about other programs: they just use simple logical addresses.
As far as programs are concerned, they think they can use any address between 0 and 4 GiB (2^32 bytes, i.e. up to address 0xFFFFFFFF) on 32-bit systems.
The OS then sets up paging so that identical logical addresses will go into different physical addresses and not overwrite each other.
This makes it much simpler to compile programs and run them at the same time.
Paging achieves that goal, and in addition:
  • the switch between programs is very fast, because it is implemented by hardware
  • the memory of both programs can grow and shrink as needed without too much fragmentation
  • one program can never access the memory of another program, even if it wanted to.
    This is good both for security, and to prevent bugs in one program from crashing other programs.
Or if you like non-funny jokes:
Figure 1. Comparison between the Linux kernel userland memory virtualization and The Matrix. Source. Is this RAM real?
Paging is implemented by the CPU hardware itself.
Paging could be implemented in software, but that would be too slow, because every single RAM access goes through it!
Operating systems must set up and control paging by communicating with the CPU hardware. This is done mostly via:
  • the cr3 register, which tells the CPU where the page table of the current process is located in RAM
  • the page table data structures themselves, which the OS writes directly to RAM in the format the hardware expects
Processes can however make requests to the OS that cause the page tables to be modified, notably the mmap and brk system calls.
The kernel then decides if the request will be granted or not in a controlled manner.
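For example, here is a minimal sketch of a Linux process asking the kernel to map one new anonymous page into its address space with mmap; under the hood, the kernel picks a free physical page and writes a new entry into the process's page table:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* Ask the kernel for one new private 4 KiB page. */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return 1;
        /* The new linear address is now backed by some physical page. */
        strcpy(p, "hello");
        printf("%s\n", p);
        /* Remove the mapping (and its page table entry) again. */
        munmap(p, 4096);
        return 0;
    }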
In x86 systems, there may actually be 2 address translation steps:
  • first segmentation
  • then paging
like this:
(logical) ------------------> (linear) ------------> (physical)
             segmentation                 paging
The major difference between paging and segmentation is that:
  • paging splits RAM into equal sized chunks called pages
  • segmentation splits memory into chunks of arbitrary sizes
Paging came after segmentation historically, and largely replaced it for the implementation of virtual memory in modern OSs.
Paging has become so much more popular that support for segmentation was dropped from the 64-bit mode of x86-64, the main mode of operation for new software. Segmentation lives on only in compatibility mode, which emulates IA-32.
This is an example of how paging operates on a simplified version of a x86 architecture to implement a virtual memory space with a 20 | 12 address split (4 KiB page size).
This is what the memory could look like in a single-level paging scheme:
Links   Data                    Physical address

      +-----------------------+ 2^32 - 1
      |                       |
      .                       .
      |                       |
      +-----------------------+ page0 + 4k
      | data of page 0        |
+---->+-----------------------+ page0
|     |                       |
|     .                       .
|     |                       |
|     +-----------------------+ pageN + 4k
|     | data of page N        |
|  +->+-----------------------+ pageN
|  |  |                       |
|  |  .                       .
|  |  |                       |
|  |  +-----------------------+ CR3 + 2^20 * 4
|  +--| entry[2^20-1] = pageN |
|     +-----------------------+ CR3 + (2^20 - 1) * 4
|     |                       |
|     .    many entries       .
|     |                       |
|     +-----------------------+ CR3 + 2 * 4
|  +--| entry[1] = page1      |
|  |  +-----------------------+ CR3 + 1 * 4
+-----| entry[0] = page0      |
   |  +-----------------------+ <--- CR3
   |  |                       |
   |  .                       .
   |  |                       |
   |  +-----------------------+ page1 + 4k
   |  | data of page 1        |
   +->+-----------------------+ page1
      |                       |
      .                       .
      |                       |
      +-----------------------+  0
Notice that:
  • the CR3 register points to the first entry of the page table
  • the page table is just a large array with 2^20 page table entries
  • each entry is 4 bytes big, so the array takes up 4 MiB
  • each page table entry contains the physical address of a page
  • each page is a 4 KiB aligned 4 KiB chunk of memory that user processes may use
  • we have 2^20 table entries. Since each page is 4 KiB == 2^12, this covers the whole 4 GiB (2^32) of 32-bit memory. See the code sketch just below.
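Here is a minimal C sketch that models this single-level scheme. The PageTableEntry struct and translate() helper are hypothetical illustrations, not real hardware structures: the table is an array of 2^20 entries, and translation splits the linear address into a 20-bit page part and a 12-bit offset:

    #include <stdint.h>

    /* Hypothetical model of one simplified 32-bit page table entry. */
    typedef struct {
        uint32_t present : 1;   /* is this linear page mapped? */
        uint32_t page    : 20;  /* physical page number */
    } PageTableEntry;

    #define N_ENTRIES (1 << 20)  /* 2^20 entries cover the whole 4 GiB */

    /* cr3 models the CR3 register: a pointer to entry[0] of the table. */
    uint32_t translate(const PageTableEntry *cr3, uint32_t linear) {
        uint32_t page   = linear >> 12;     /* top 20 bits */
        uint32_t offset = linear & 0xFFF;   /* bottom 12 bits */
        PageTableEntry entry = cr3[page];   /* one extra RAM access */
        /* if (!entry.present) the hardware would raise a page fault */
        return (entry.page << 12) | offset; /* page base + offset */
    }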
Suppose that the OS has setup the following page tables for process 1:
entry index   entry address       page address   present
-----------   ------------------  ------------   -------
0             CR3_1 + 0      * 4  0x00001        1
1             CR3_1 + 1      * 4  0x00000        1
2             CR3_1 + 2      * 4  0x00003        1
3             CR3_1 + 3      * 4                 0
...
2^20-1        CR3_1 + 2^20-1 * 4  0x00005        1
and for process 2:
entry index   entry address       page address   present
-----------   -----------------   ------------   -------
0             CR3_2 + 0      * 4  0x0000A        1
1             CR3_2 + 1      * 4  0x12345        1
2             CR3_2 + 2      * 4                 0
3             CR3_2 + 3      * 4  0x00003        1
...
2^20-1        CR3_2 + 2^20-1 * 4  0xFFFFF        1
Before process 1 starts running, the OS sets its cr3 to point to page table 1, at CR3_1.
When process 1 tries to access a linear address, these are the physical addresses that will actually be accessed:
linear     physical
---------  ---------
00000 001  00001 001
00000 002  00001 002
00000 003  00001 003
00000 FFF  00001 FFF
00001 000  00000 000
00001 001  00000 001
00001 FFF  00000 FFF
00002 000  00003 000
FFFFF 000  00005 000
To switch to process 2, the OS simply sets cr3 to CR3_2, and now the following translations would happen:
linear     physical
---------  ---------
00000 002  0000A 002
00000 003  0000A 003
00000 FFF  0000A FFF
00001 000  12345 000
00001 001  12345 001
00001 FFF  12345 FFF
00004 000  00003 000
FFFFF 000  FFFFF 000
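In terms of the hypothetical sketch above, switching processes is just repointing cr3 to a different table:

    /* Continuing the sketch above: one table per process. */
    static PageTableEntry table1[N_ENTRIES];  /* at CR3_1 */
    static PageTableEntry table2[N_ENTRIES];  /* at CR3_2 */

    void context_switch_example(void) {
        table1[0] = (PageTableEntry){ .present = 1, .page = 0x00001 };
        table2[0] = (PageTableEntry){ .present = 1, .page = 0x0000A };

        const PageTableEntry *cr3 = table1;   /* process 1 running */
        translate(cr3, 0x00000001);           /* -> 0x00001001 */

        cr3 = table2;                         /* switch to process 2 */
        translate(cr3, 0x00000002);           /* -> 0x0000A002 */
    }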
Step-by-step translation for process 1 of logical address 0x00000001 to physical address 0x00001001:
  • split the linear address into two parts:
    | page (20 bits) | offset (12 bits) |
    So in this case we would have:
    • page = 0x00000. This part must be translated to a physical location.
    • offset = 0x001. This part is added directly to the page address and is not translated: it contains the position within the page.
  • look into Page table 1 because cr3 points to it.
  • The hardware knows that this entry is located at RAM address CR3 + 0x00000 * 4 = CR3:
    • 0x00000 because the page part of the logical address is 0x00000
    • 4 because that is the fixed size in bytes of every page table entry
  • since it is present, the access is valid
  • according to the page table, page number 0x00000 is located at physical address 0x00001 * 4K = 0x00001000.
  • to find the final physical address we just need to add the offset:
      00001 000
    + 00000 001
      ---------
      00001 001
    because 00001 is the physical address of the page looked up on the table and 001 is the offset.
    We shift 00001 by 12 bits because the pages are always aligned to 4 KiB.
    The offset is always simply added to the physical address of the page.
  • the hardware then gets the memory at that physical location and puts it in a register.
Another example: for logical address 0x00001001:
  • the page part is 00001, and the offset part is 001
  • the hardware knows that its page table entry is located at RAM address: CR3 + 1 * 4 (1 because of the page part), and that is where it will look for it
  • it finds the page address 0x00000 there
  • so the final address is 0x00000 * 4K + 0x001 = 0x00000001
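Both examples are instances of a single computation; in C-like notation, reusing the hypothetical sketch above, the whole per-access calculation the hardware performs is:

    physical = (cr3[linear >> 12].page << 12) | (linear & 0xFFF);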
The same linear address can translate to different physical addresses for different processes, depending only on the value inside cr3.
Both linear addresses 00002 000 from process 1 and 00004 000 from process 2 point to the same physical address 00003 000. This is completely allowed by the hardware, and it is up to the operating system to handle such cases.
This happens often in normal operation because of copy-on-write (COW), which will be explained elsewhere.
Such mappings are sometimes called "aliases".
FFFFF 000 points to its own physical address FFFFF 000. This kind of translation is called an "identity mapping", and can be very convenient for OS-level debugging.
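In terms of the hypothetical tables sketched above, aliases and identity mappings are just particular entry values:

    void alias_examples(void) {
        /* Alias: linear 00002 000 of process 1 and 00004 000 of
         * process 2 both map to physical page 0x00003. */
        table1[0x00002] = (PageTableEntry){ .present = 1, .page = 0x00003 };
        table2[0x00004] = (PageTableEntry){ .present = 1, .page = 0x00003 };

        /* Identity mapping: linear page 0xFFFFF maps to physical
         * page 0xFFFFF. */
        table2[0xFFFFF] = (PageTableEntry){ .present = 1, .page = 0xFFFFF };
    }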
The exact format of table entries is fixed by the hardware.
Each page entry can be seen as a struct with many fields.
The page table is then an array of such structs.
On this simplified example, the page table entries contain only two fields:
bits   function
-----  -----------------------------------------
20     physical address of the start of the page
1      present flag
So in this example the hardware designers could have chosen the size of each page table entry to be 21 bits instead of the 32 we've used so far.
All real page table entries have other fields, notably fields to set pages to read-only for Copy-on-write. This will be explained elsewhere.
It would be impractical to align things at 21 bits since memory is addressable by bytes and not bits. Therefore, even if only 21 bits are needed in this case, hardware designers would probably choose 32 to make access faster, and just reserve the remaining bits for later usage. The actual value on x86 is 32 bits.
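Since on real 32-bit x86 the 20-bit page number lives in the top bits of the 32-bit entry and the present flag is bit 0 (see the Intel manual figure below), packing and unpacking an entry is just shifts and masks. A sketch, with the other flag bits left as zero:

    #include <stdint.h>

    #define PTE_PRESENT 0x1u  /* bit 0 of a 32-bit x86 page table entry */

    /* Pack a 20-bit physical page number into an x86-style 32-bit
     * entry; the remaining flag bits are left as zero in this sketch. */
    uint32_t make_pte(uint32_t page, int present) {
        return (page << 12) | (present ? PTE_PRESENT : 0u);
    }

    uint32_t pte_page(uint32_t pte)    { return pte >> 12; }
    int      pte_present(uint32_t pte) { return (int)(pte & PTE_PRESENT); }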
Here is a screenshot from the Intel manual image "Formats of CR3 and Paging-Structure Entries with 32-Bit Paging" showing the structure of a page table entry in all its glory:
Figure 1. x86 page entry format.
The fields are explained in the manual just after.
Why are pages 4 KiB anyways?
There is a trade-off between memory wasted in:
  • page tables
  • extra padding memory within pages
This can be seen with the extreme cases:
  • if the page size were 1 byte:
    • granularity would be great, and the OS would never have to allocate unneeded padding memory
    • but the page table would have 2^32 entries, and take up the entire memory!
  • if the page size were 4 GiB:
    • we would need to swap 4 GiB to disk every time a new process becomes active
    • the page table would have a single entry, so it would take almost no memory at all
x86 designers have found that 4 KiB pages are a good middle ground.
The problem with a single-level paging scheme is that it would take up too much RAM: 4G / 4K = 1M entries per process.
If each entry is 4 bytes long, that would make 4 MiB per process, which is too much even for a desktop computer: ps -A | wc -l says that I am running 244 processes right now, so that would take around 1 GiB of my RAM!
For this reason, x86 developers decided to use a multi-level scheme that reduces RAM usage.
The downside of this scheme is that it has a slightly higher access time, as we need to access RAM more times for each translation.
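For example, real 32-bit x86 splits the linear address 10 | 10 | 12: the top 10 bits index a page directory, whose entries point to page tables of 2^10 entries each, so page tables for unused address ranges are simply never allocated. Here is a hedged sketch of the two-level lookup, with translate2 as a hypothetical helper that models physical frames with plain C pointers:

    #include <stdint.h>
    #include <stddef.h>

    typedef struct {
        uint32_t present : 1;
        uint32_t page    : 20;  /* physical page number of the data page */
    } Pte;

    /* The directory is modeled as an array of 2^10 pointers to page
     * tables; a NULL pointer plays the role of a cleared present bit. */
    uint32_t translate2(Pte *const directory[1024], uint32_t linear) {
        uint32_t dir_idx   = linear >> 22;           /* top 10 bits */
        uint32_t table_idx = (linear >> 12) & 0x3FF; /* middle 10 bits */
        uint32_t offset    = linear & 0xFFF;         /* bottom 12 bits */

        const Pte *table = directory[dir_idx];  /* first extra RAM access */
        if (table == NULL) {
            /* no page table for this 4 MiB region: page fault */
        }
        Pte entry = table[table_idx];           /* second extra RAM access */
        return (entry.page << 12) | offset;
    }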
