OsmAnd Updated 2025-07-16
Kind of works! Notably, it gives you the amazing cycling database offline, as long as you stay within the 6 free area downloads. It is worth supporting these people beyond the 6 free downloads, however.
Principal component analysis Updated 2025-07-16
Given a bunch of points in $m$ dimensions, PCA maps those points to a new $n$-dimensional space with $n \leq m$.
$n$ is a hyperparameter, and $n = 2$ or $n = 3$ are common choices when doing dataset exploration, as they can be easily visualized on a planar plot.
The mapping is done by projecting all points onto an $n$-dimensional hyperplane. PCA is an algorithm for choosing this hyperplane and the coordinate system within it.
The hyperplane choice is done as follows:
  • the hyperplane will have its origin at the mean point
  • the first axis is picked along the direction of greatest variance, i.e. where points are the most spread out.
    Intuitively, if we pick an axis of small variation, that would be bad, because all the points are very close to one another on that axis, so it doesn't contain as much information that helps us differentiate the points.
  • then we pick a second axis, orthogonal to the first one, along the direction of second largest variance
  • and so on until $n$ orthogonal axes are taken (see the code sketch right after this list)
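A minimal sketch of this whole procedure in plain NumPy (the function name pca and the toy data are my own, not from any particular library; it uses the SVD of the centered data, whose right singular vectors are exactly the axes of greatest variance described above):

    import numpy as np

    def pca(X, n_components=2):
        # Center the data: the new origin is the mean point.
        X_centered = X - X.mean(axis=0)
        # SVD of the centered data: the rows of Vt are orthonormal directions,
        # sorted by decreasing variance, i.e. the axes described in the list above.
        U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
        # Project every point onto the first n_components axes.
        return X_centered @ Vt[:n_components].T

    # Example: 10 points in 4 dimensions mapped down to 2 dimensions.
    X = np.random.rand(10, 4)
    print(pca(X, 2).shape)  # (10, 2)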
www.sartorius.com/en/knowledge/science-snippets/what-is-principal-component-analysis-pca-and-how-it-is-used-507186 provides an OK-ish example with a concrete context. In there, each point is a country, and the input data is the consumption of different kinds of foods per year, e.g.:
  • flour
  • dry codfish
  • olive oil
  • sausage
so in this example, we would have input points in 4D.
The question is then: can we identify a country by what it eats?
Suppose that every country consumes the same amount of flour every year. Then that number doesn't tell us much about which country each point represents (it has the least possible variance), and the first PCA axes would basically never point anywhere near that direction.
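As a quick hypothetical sanity check of this (made-up numbers, plain NumPy), a feature with zero variance contributes nothing to the first principal direction:

    import numpy as np

    # Column 0: flour consumption, identical for every country.
    # Column 1: some other food with real variation between countries.
    X = np.array([[100.0, 3.0],
                  [100.0, 9.0],
                  [100.0, 1.0],
                  [100.0, 7.0]])

    Xc = X - X.mean(axis=0)  # the flour column becomes all zeros
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    print(Vt[0])  # first principal direction ~[0, 1] (up to sign): flour is ignored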
Another cool thing is that PCA seems to automatically account for linear dependencies in the data, so it skips selecting highly correlated axes multiple times. For example, suppose that dry codfish and olive oil consumption are very high in Portugal and Spain, but very low in Germany and Poland. Therefore, the variation is very high in those two parameters, and contains a lot of information.
However, suppose that dry codfish consumption is also directly proportional to olive oil consumption. Because of this, it would be kind of wasteful if we selected:
  • one axis along dry codfish
  • one axis along olive oil
since the information about codfish already tells us the olive oil. PCA apparently recognizes this, and instead picks the first axis at a 45 degree angle to both dry codfish and olive oil, and then moves on to something else for the second axis.
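To make that concrete, here is a hypothetical toy computation (made-up numbers, plain NumPy) where olive oil consumption is exactly equal to dry codfish consumption, the simplest case of direct proportionality, so the first principal direction mixes both features at 45 degrees instead of spending two axes on them:

    import numpy as np

    # Columns: [dry codfish, olive oil] consumption per country.
    codfish = np.array([10.0, 9.0, 1.0, 2.0])  # e.g. Portugal, Spain, Germany, Poland
    olive_oil = codfish.copy()                 # perfectly correlated with codfish
    X = np.column_stack([codfish, olive_oil])

    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    print(Vt[0])  # ~[0.707, 0.707] (up to sign): a 45 degree mix of both features
    print(S)      # second singular value is ~0: the duplicated feature adds nothing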
Much like the rest of machine learning, PCA can be seen as a form of compression.
Orthogonal group Updated 2025-07-16
When viewed as matrices, it is the group of all matrices $M$ that preserve the dot product, i.e.:
$\langle M x, M y \rangle = \langle x, y \rangle$ for all vectors $x$ and $y$.
This implies that it also preserves important geometric notions such as norm (intuitively: distance between two points) and angles.
This is perhaps the best "default definition".
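A quick numerical illustration of these properties with a rotation matrix, which is a standard example of an orthogonal matrix (the code is my own sketch, not from the original):

    import numpy as np

    theta = 0.3
    M = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    x = np.array([1.0, 2.0])
    y = np.array([-3.0, 0.5])

    print(np.allclose(M.T @ M, np.eye(2)))        # defining property in matrix form: M^T M = I
    print(np.allclose((M @ x) @ (M @ y), x @ y))  # the dot product is preserved
    print(np.isclose(np.linalg.norm(M @ x), np.linalg.norm(x)))  # so norms are preserved too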
Vector graphics Updated 2025-07-16
Smaller files, scalable image size, and editability. Why would you use anything else for programmatically generated images?!?!
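As a hypothetical minimal illustration of programmatic generation (file name made up), a vector image can be produced by just writing SVG markup as text:

    # Write a standalone SVG file containing a single circle.
    svg = (
        '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">\n'
        '  <circle cx="50" cy="50" r="40" fill="none" stroke="black" stroke-width="2"/>\n'
        '</svg>\n'
    )
    with open("circle.svg", "w") as f:
        f.write(svg)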
Hermitian form Updated 2025-07-16
The prototypical example of it is the complex dot product.
Note that this form is neither strictly symmetric, as it instead satisfies:
$\langle x, y \rangle = \overline{\langle y, x \rangle}$
where the overbar indicates the complex conjugate, nor is it linear under complex scalar multiplication on the second argument.
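A quick numerical check of both properties, using a hypothetical helper h for the complex dot product with the conjugate taken on the second argument, to match the convention in the text:

    import numpy as np

    def h(x, y):
        # Complex dot product: linear in x, conjugate-linear in y.
        return np.sum(x * np.conj(y))

    x = np.array([1 + 2j, 3 - 1j])
    y = np.array([0.5j, 2 + 1j])
    c = 2 - 3j

    print(np.isclose(h(x, y), np.conj(h(y, x))))          # <x, y> = conjugate of <y, x>
    print(np.isclose(h(x, c * y), np.conj(c) * h(x, y)))  # conjugate-linear in the 2nd argument
    print(np.isclose(h(c * x, y), c * h(x, y)))           # but linear in the 1st argument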
Higgs boson Updated 2025-07-16
Initially there were mathematical reasons why people suspected that all bosons needed to have zero mass, as is the case for photons and gluons, see Goldstone's theorem.
However, experiments showed that the W boson and the Z boson both have large non-zero masses.
So people started theorizing some hack that would fix up the equations, and they came up with the Higgs mechanism.
High-temperature superconductivity Updated 2025-07-16
As of 2020, "high temperature" basically means "liquid nitrogen temperature", which is much cheaper than liquid helium.
Figure 1. Timeline of superconductivity from 1900 to 2015.
