Using the TLB makes translation faster, because without it every translation takes one RAM access per page table level, which means 2 on a simple 32-bit scheme, but 3 or 4 on 64-bit architectures.
The TLB is usually implemented as an expensive type of RAM called content-addressable memory (CAM). CAM implements an associative map in hardware, that is, a structure that, given a key (the linear address), retrieves a value (the physical address).
Mappings could also be implemented as regular RAM indexed by address, but a CAM mapping may require far fewer entries than a RAM mapping.
linear physical
------ --------
00000 00001
00001 00010
00010 00011
FFFFF 00000
Those page faults only happen when a process tries to write to the page, not when it reads from it.
When Linux forks a process:
- instead of copying all the pages, which is unnecessarily costly, it makes the page tables of the two processes point to the same physical addresses
- it marks those linear addresses as read-only
- whenever one of the processes tries to write to a page, the kernel makes a copy of the physical page, and updates the page tables of the two processes to point to the two different physical addresses
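A minimal sketch of this in action (plain POSIX C; the variable name and example values are made up for illustration): the child's write faults, the kernel copies the page, and the parent keeps seeing the old value.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Copy-on-write demo: after fork() both processes initially share the
 * same physical page read-only. The child's write triggers a page
 * fault and the kernel copies the page, so the parent's value survives. */
int main(void) {
    static int x = 42; /* lives on a page shared after fork */
    pid_t pid = fork();
    if (pid == 0) {
        x = 43; /* page fault here: the kernel copies the physical page */
        printf("child:  x = %d\n", x); /* 43 */
        exit(0);
    }
    wait(NULL);
    printf("parent: x = %d\n", x); /* still 42 */
    return 0;
}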
Handbook 2019/2020: web.archive.org/web/20210211192812/http://teaching.chem.ox.ac.uk/Data/Sites/58/media/courseinfo/ug-handbook-chemistry-2019-20.pdf
At teaching.chem.ox.ac.uk/undergraduate-course-handbook.aspx there's a paywall, but Google found the PDF anyway.
www.ox.ac.uk/students/academic/guidance/undergraduate/handbooks in theory links to all handbooks, but some are likely paywalled. Google can generally find them anyway.
To avoid duplication when citing multiple pages: Section "How to use a single source multiple times in a Wikipedia article?"
A good big sample definition:
<ref name="googleStory">{{cite book |last1=Vise |first1=David |author-link1=David A. Vise |last2=Malseed |first2=Mark |author-link2=Mark Malseed |title=The Google Story |date=2008 |publisher=Delacorte Press |url=https://archive.org/details/isbn_9780385342728}}</ref>
There is also `title-link` to link to a wiki page. But it is incompatible with `url=` for Internet Archive Open Library links, which is a shame.

How to implement the Nested set model in SQL:
- stackoverflow.com/questions/192220/what-is-the-most-efficient-elegant-way-to-parse-a-flat-table-into-a-tree/42781302#42781302 contains the correct left/size representation and update queries, which makes it much easier to maintain the tree, without having to worry about the sizes of siblings, which are constant
- stackoverflow.com/questions/192220/what-is-the-most-efficient-elegant-way-to-parse-a-flat-table-into-a-tree/194031#194031 amazing ASCII art representations of the structure. Unfortunately it uses a wonky left/right representation, rather than the much more natural left/size representation from the other post
The default isolation level for SQLite is SERIALIZABLE
It does not appear possible to achieve the other two levels besides SERIALIZABLE and READ UNCOMMITTED.
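For concreteness, a minimal C sketch using the sqlite3 C API (note that `sqlite3_enable_shared_cache` is deprecated in recent SQLite releases, and `PRAGMA read_uncommitted` only has an effect between connections sharing the same cache):

#include <sqlite3.h>
#include <stdio.h>

/* Minimal sketch: SQLite connections default to SERIALIZABLE; a
 * connection can opt into READ UNCOMMITTED, but this only matters
 * between connections that share the same cache. */
int main(void) {
    sqlite3 *db;
    sqlite3_enable_shared_cache(1); /* required for the pragma to matter */
    if (sqlite3_open("test.db", &db) != SQLITE_OK) return 1;
    sqlite3_exec(db, "PRAGMA read_uncommitted = 1;", NULL, NULL, NULL);
    /* ... reads on this connection may now see uncommitted changes
     * made by other shared-cache connections ... */
    sqlite3_close(db);
    return 0;
}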
Paging is implemented by the CPU hardware itself.
Paging could be implemented in software, but that would be too slow, because every single RAM memory access uses it!
Operating systems must set up and control paging by communicating with the CPU hardware. This is done mostly via:
- the CR3 register, which tells the CPU where the page table is in RAM memory
- writing the correct paging data structures to the RAM pointed to by the CR3 register

Using RAM data structures is a common technique when lots of data must be transmitted to the CPU, as it would cost too much to have such a large CPU register. The format of the configuration data structures is fixed by the hardware, but it is up to the OS to set up and manage those data structures in RAM correctly, and to tell the hardware where to find them (via `cr3`). Then some heavy caching is done to ensure that RAM access will be fast, in particular using the TLB. Another notable example of a RAM data structure used by the CPU is the IDT, which sets up interrupt handlers.

The OS makes it impossible for processes to tamper with paging:
- CR3 cannot be modified in ring 3. The OS runs in ring 0.
- the page table structures are made invisible to the process by paging itself!
Processes can however make requests to the OS that cause the page tables to be modified, notably:
- stack size changes
- `brk` and `mmap` calls, see also: stackoverflow.com/questions/6988487/what-does-brk-system-call-do/31082353#31082353
The kernel then decides if the request will be granted or not in a controlled manner.
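As a small runnable illustration (plain POSIX `mmap`, not tied to any particular kernel version), here is a process asking the kernel to add one anonymous page to its address space; granting the request means the kernel edits the process' page tables:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 4096; /* one page on typical x86 configurations */
    /* Ask the kernel for one new page of anonymous memory. */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return 1;
    strcpy(p, "hello"); /* first touch may fault the physical page in */
    puts(p);
    munmap(p, len); /* and this removes the page table entry again */
    return 0;
}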
Addresses are now split as:
| directory (10 bits) | table (10 bits) | offset (12 bits) |
Then:
- The top table is called a "directory of page tables". `cr3` now points to the location in RAM of the page directory of the current process, instead of to page tables.

  Page directory entries are very similar to page table entries, except that they point to the physical addresses of page tables instead of physical addresses of pages.

  Each directory entry also takes up 4 bytes, just like page entries, so that makes 4 KiB per process minimum.

  Page directory entries also contain a valid flag: if invalid, the OS does not allocate a page table for that entry, and saves memory.

  Each process has one and only one page directory associated to it (and pointed to by `cr3`), so it will contain at least `2^10 = 1K` page directory entries, much better than the minimum 1M entries required on a single-level scheme.
- Second level entries are also called page tables, like in the single level scheme. Each page table has only `2^10 = 1K` page table entries instead of `2^20` for the single level scheme.
- the offset is again not used for translation, it only gives the offset within a page

One reason for using 10 bits on the first two levels (and not, say, `12 | 8 | 12`) is that each page table entry is 4 bytes long. Then the `2^10` entries of page directories and page tables will fit nicely into 4 KiB pages. This makes it faster and simpler to allocate and deallocate pages for that purpose.
Some examples by Ciro Santilli follow.

Of the tutorial-subjectivity type:
- This edit perfectly summarizes how Ciro feels about Wikipedia (no particular hate towards that user, he was a teacher at the prestigious Pierre and Marie Curie University and actually has a wiki page about him):

  rm a cryptic diagram (not understandable by a professional mathematician, without further explanations)

  which removed the only diagram that was actually understandable to non-Mathematicians, which Ciro Santilli had created, and which received many upvotes at: math.stackexchange.com/questions/776039/intuition-behind-normal-subgroups/3732426#3732426. The removal does not generate any notifications to you unless you follow the page, which would lead to infinite noise, and it is extremely difficult to find out how to contact the other person. The removal justification is even somewhat ad hominem: how does he know that Ciro Santilli is not also a professional Mathematician? :-) Maybe it is obvious because Ciro explains in a way that is understandable. The removal also makes no effort to contact the original author. Of course, this is caused by the fact that there must also have been a bunch of useless edits not done by Ciro, and there is no reputation system to see if you should ignore a person or not immediately, so the removal author has no patience anymore. This is what makes it impossible to contribute to Wikipedia: your stuff gets deleted at any time, and you don't know how to appeal it. Ciro is going to regret having written this rant after Daniel replies and shows the diagram is crap. But that would be better than not getting a reply and not learning that the diagram is crap.
- en.wikipedia.org/w/index.php?title=Finite_field&type=revision&diff=1044934168&oldid=1044905041 on finite fields, with edit comment "Obviously: X ≡ α". Discussion at en.wikipedia.org/wiki/Talk:Finite_field#Concrete_simple_worked_out_example.

  Some people simply don't know how to explain things to beginners, or don't think Wikipedia is where it should be done. One simply can't waste time fighting off those people; writing good tutorials is hard enough in itself without that fight.
- en.wikipedia.org/w/index.php?title=Discrete_Fourier_transform&diff=1193622235&oldid=1193529573 by user Bob K. removed Ciro Santilli's awesome simple image of the Discrete Fourier transform, as seen at en.wikipedia.org/w/index.php?title=Discrete_Fourier_transform&oldid=1176616763, with message:

  remove non-helpful image

  The author, Bob K., describes himself as:

  Hello. I am a retired electrical engineer, living near Washington, DC. Most of my contributions are in the area of DSP, where I have about 40 years of experience in applications on many different processors and architectures.

  Thank you so much!!
  Maybe it is a common thread that these old "experts" keep removing anything that is actually intelligible by beginners? Section "There is value in tutorials written by beginners". Also ranted at: x.com/cirosantilli/status/1808862417566290252.

  Figure 1. Source at: numpy/fft_plot.py.
- when Ciro Santilli created Scott Hassan's page, he originally included mentions of his saucy divorce: en.wikipedia.org/w/index.php?title=Scott_Hassan&oldid=1091706391. These were reverted by Scott's puppets three times, and Ciro and two other editors fought back. Finally, Ciro understood that Hassan's puppets were likely right about the removal, because you can't talk about private matters of someone who is low profile, even if it is published in well known and reliable publications like the bloody New York Times. In this case, it is clear that most people wanted to see this information summarized on Wikipedia, since others fought back against Hassan's puppet. This is therefore a failure of Wikipedia to show what people actually want to read about.

  This case is similar to the PsiQuantum one. Something is extremely well known in an important niche, and many people want to read about it. But because the average person does not know about this important subject, you are limited in what you can write about it, thus hurting the people who want to know about it.
Notability constraints, which are way too strict. There are even wikis that were created to remove notability constraints: Wiki without notability requirements.
- even information about important companies can be disputed. E.g. once Ciro Santilli tried to create a page for PsiQuantum, a startup with $650m in funding, and there was a deletion proposal because it did not contain verifiable sources not linked directly to information provided by the company itself: en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/PsiQuantum. Although this argument is correct, it is also true about 90% of everything that is on Wikipedia about any company. Where else can you get any information about a B2B company? Their clients are not going to say anything. Lawsuits and scandals are kind of the only possible source... In that case, the page was deleted with 2 votes against vs 3 votes for deletion.

  This endless "should we delete this extremely likely useful/correct content or not according to this extremely complex system of guidelines" debate is very similar to Stack Exchange's own Stack Overflow content deletion issues. Ain't Nobody Got Time For That. "Ain't Nobody Got Time for That" actually has a Wiki page: en.wikipedia.org/wiki/Ain%27t_Nobody_Got_Time_for_That. That's notable. Unlike a $600M+ company of course.
  In December 2023 the page was re-created, and seemed to stick: en.wikipedia.org/wiki/Talk:PsiQuantum#Secondary_sources. It's just random going back and forth. Author Ctjk has an interesting background:

  I am a legal official at a major government antitrust agency. The only plausible connection is we regulate tech firms
These reasons are why Ciro basically only contributes images to Wikipedia: they are either all in or all out, and you can determine which one it is. And this allows images to be more attributable, so people can actually see that it was Ciro that created a given amazing image, thus overcoming Wikipedia's lack of a reputation system a little bit as well.
Wikipedia is perfect for things like biographies, geography, or history, which have a much more defined and less subjective expository order. But when it comes to "tutorials of how to actually do stuff", which is what mathematics and physics are basically about, Wikipedia has a very hard time going beyond dry definitions, which are only useful for people who already half know the stuff. But to learn from zero, newbies need tutorials with intuition and examples.
Bibliography:
- gwern.net/inclusionism from gwern.net:
Iron Law of Bureaucracy: the downwards deletionism spiral discourages contribution and is how Wikipedia will die.
- Quote "Golden wiki vs Deletionism on Wikipedia"
GPU accelerated, simulates Craig Venter's minimized Mycoplasma, JCVI-syn3A, on a particle basis of some kind.
Lab head is the cutest-looking lady ever: chemistry.illinois.edu/zan, Zaida (Zan) Luthey-Schulten.
- 2022 paper: www.cell.com/cell/fulltext/S0092-8674(21)01488-4 "Fundamental behaviors emerge from simulations of a living minimal cell" by Thornburg et al. (2022), published in Cell
- faculty.scs.illinois.edu/schulten/lm/ actual source code. No version control, just a code-drop release; openness and best practices haven't reached such far and obscure reaches of academia yet. One day.
- blogs.nvidia.com/blog/2022/01/20/living-cell-simulation/ Nvidia announcement. That's how they do business, it is quite interesting how they highlight this kind of research.
When the process changes, `cr3` changes to point to the page table of the new current process.

A simple and naive solution would be to completely invalidate the TLB whenever `cr3` changes.

However, this would not be very efficient, because it often happens that we switch back to process 1 before process 2 has used up all the TLB cache entries.
The solution for this is to use so-called "Address Space Identifiers" (ASID).
Basically, the OS assigns a different ASID for each process, and then TLB entries are automatically also tagged with that ASID. This way when the process makes an access, the TLB can determine if a hit is actually for the current process, or if it is an old address coincidence with another process.
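A software model of what the hardware does (made-up struct and function names, not any real CPU interface): each entry is tagged, and a hit requires both the page number and the current ASID to match.

#include <stdbool.h>
#include <stdint.h>

/* Sketch of ASID-tagged TLB entries: entries belonging to other
 * processes are simply ignored on lookup, instead of having to be
 * flushed on every cr3 write. */
struct tlb_entry {
    uint32_t vpn;  /* virtual (linear) page number */
    uint32_t pfn;  /* physical frame number */
    uint16_t asid; /* which address space this entry belongs to */
    bool valid;
};

#define TLB_SIZE 64
static struct tlb_entry tlb[TLB_SIZE];

/* Returns true and sets *pfn on a hit for the current ASID. */
bool tlb_lookup(uint32_t vpn, uint16_t current_asid, uint32_t *pfn) {
    for (int i = 0; i < TLB_SIZE; i++) { /* done in parallel in hardware */
        if (tlb[i].valid && tlb[i].vpn == vpn && tlb[i].asid == current_asid) {
            *pfn = tlb[i].pfn;
            return true;
        }
    }
    return false; /* TLB miss: walk the page tables instead */
}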
The algorithmically minded will have noticed that paging requires an associative array (like a Java `Map` or a Python `dict()`) abstract data structure where:
- the keys are linear page addresses
- the values are physical page addresses

The single level paging scheme uses a simple array implementation of the associative array:
- the keys are the array index
- this implementation is very fast in time
- but it is too inefficient in memory

and in C pseudo-code it looks like this:
linear_address[0] = physical_address_0
linear_address[1] = physical_address_1
linear_address[2] = physical_address_2
...
linear_address[2^20-1] = physical_address_N
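To put a number on the memory inefficiency (a back-of-the-envelope using the 4 bytes per entry mentioned earlier): `2^20` entries times 4 bytes each is 4 MiB of page table per process, even if the process only ever touches a couple of pages.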
But there is another simple associative array implementation that overcomes the memory problem: an (unbalanced) k-ary tree.
Using a K-ary tree instead of an array implementation has the following trade-offs:
- it saves a lot of memory when the key set is sparse
- it is slower, since each extra level costs one extra RAM access

In C pseudo-code, a 2-level K-ary tree with `K = 2^10` looks like this:

level0[0] = &level1_0[0]
level1_0[0] = physical_address_0_0
level1_0[1] = physical_address_0_1
...
level1_0[2^10-1] = physical_address_0_N
level0[1] = &level1_1[0]
level1_1[0] = physical_address_1_0
level1_1[1] = physical_address_1_1
...
level1_1[2^10-1] = physical_address_1_N
...
level0[N] = &level1_N[0]
level1_N[0] = physical_address_N_0
level1_N[1] = physical_address_N_1
...
level1_N[2^10-1] = physical_address_N_N
and it still contains `2^10 * 2^10 = 2^20` possible keys.

K-ary trees can save up a lot of space, because if we only have one key, then we only need the following arrays:
- one `level0` directory with `2^10` entries
- one `level1_0` page table with `2^10` entries

for a total of `2 * 2^10 = 2^11` entries, instead of the `2^20` entries that the single-level array always needs.
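Tying the two ideas together, a lookup is just a walk down the tree (a sketch with made-up names, reusing the `level0`/`level1_*` arrays from the listing above; a null directory entry plays the role of the invalid flag):

#include <stddef.h>
#include <stdint.h>

uint32_t *level0[1 << 10]; /* page directory: 2^10 page table pointers */

/* Translate a 32-bit linear address by walking the 2-level tree. */
uint32_t translate(uint32_t linear) {
    uint32_t dir    = linear >> 22;           /* top 10 bits */
    uint32_t table  = (linear >> 12) & 0x3FF; /* middle 10 bits */
    uint32_t offset = linear & 0xFFF;         /* bottom 12 bits */
    uint32_t *level1 = level0[dir];
    if (level1 == NULL)
        return 0; /* would be a page fault: no page table allocated here */
    return level1[table] + offset; /* physical page base + offset */
}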
The Linux Kernel reserves two zones of virtual memory:
- one for kernel memory
- one for programs
The exact split is configured by `CONFIG_VMSPLIT_...`. By default:
- on 32-bit, the kernel gets the top 1 GiB, like this:

  ------------------ FFFFFFFF
  Kernel
  ------------------ C0000000
  ------------------ BFFFFFFF
  Process
  ------------------ 00000000

- on 64-bit: currently only 48 bits are actually used, split into two equally sized disjoint spaces, with the middle of the address space not addressable. The Linux kernel just assigns:
  - the bottom part to processes: `00000000 00000000` to `00007FFF FFFFFFFF`
  - the top part to the kernel: `FFFF8000 00000000` to `FFFFFFFF FFFFFFFF`
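A quick empirical check of the split (plain C; exact values vary run to run because of ASLR, but on Linux they should all land in the lower, process half):

#include <stdio.h>

int main(void) {
    int on_stack = 0;
    /* These are linear addresses in the process part of the split.
     * The function pointer cast is fine on POSIX platforms. */
    printf("stack variable at %p\n", (void *)&on_stack);
    printf("code (main) at    %p\n", (void *)main);
    return 0;
}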
Kernel memory is also paged.
In previous versions, the kernel mapping was contiguous, but with HIGHMEM this changed.
There is no clear physical memory split: stackoverflow.com/questions/30471742/physical-memory-userspace-kernel-split-on-linux-x86-64
Paging is done by the Memory Management Unit (MMU) part of the CPU.
It was originally a separate chip, and was later integrated into the CPU, but the term MMU is still used.
Decent encyclopedia of mathematics. Not much motivation, mostly statements though.
Created by: