Gradient
Takes a scalar field as input and produces a vector field.
Mnemonic: the gradient shows the direction in which the function increases fastest.
Think of a color gradient going from white to black from left to right.
Therefore, it has to:
  • take a scalar field as input. Otherwise, how would you decide which of two values is larger? There is no canonical way to say that one vector is larger than another.
  • output a vector field that contains, at each point, the direction in which the scalar increases fastest locally. This has to give out vectors, since we are talking about directions.
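For concreteness, the usual Cartesian-coordinate definition and a minimal worked example: for a scalar field $f(x, y, z)$,
$$\nabla f = \left(\frac{\partial f}{\partial x},\ \frac{\partial f}{\partial y},\ \frac{\partial f}{\partial z}\right)$$
e.g. for $f(x, y) = x^2 + y^2$ we get $\nabla f = (2x, 2y)$, which at every point is a vector pointing radially away from the origin, exactly the direction in which $f$ increases fastest.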
Gram-negative bacteria
Notable examples:
Figure 1. Structure of a Gram-negative bacterium. Source.
Magic the gathering's banning of 7 cards due to "racism" (2020)
Open source LLM
Course of the University of Cambridge
Magnetic quantum number
Quantized value of the angular momentum along a given direction.
Can range between $-l$ and $+l$, where $l$ is the azimuthal quantum number.
E.g. consider gallium, which is 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p1: its outermost electron is in the 4p subshell, so that electron has $l = 1$, and its magnetic quantum number $m$ can be $-1$, $0$ or $1$.
The z component of the quantum angular momentum is simply:
$$L_z = m \hbar$$
so e.g. again for gallium, the 4p electron can have $L_z$ equal to $-\hbar$, $0$ or $+\hbar$.
Note that this direction is arbitrary, since for a fixed azimuthal quantum number (and therefore fixed total angular momentum), we can only know the component along one direction for sure. The $z$ direction is normally used by convention.
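A minimal Python sketch of the rule above, enumerating the allowed $m$ values and the corresponding $L_z$ for a p subshell such as gallium's 4p (the helper function name is just for illustration):

```python
from scipy.constants import hbar  # reduced Planck constant, J*s

def allowed_m(l):
    """The magnetic quantum number runs from -l to +l in integer steps."""
    return list(range(-l, l + 1))

l = 1  # p subshell, e.g. gallium's outermost 4p electron
for m in allowed_m(l):
    # z component of the angular momentum: L_z = m * hbar
    print(f"m = {m:+d}, L_z = {m} * hbar = {m * hbar:+.3e} J*s")
```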
Mathematics course of the University of Oxford structure
Vector graphics
Smaller files, scalable image size, and editability. Why would you use anything else for programmatically generated images?!?!
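As a minimal sketch of what programmatic generation can look like, plain Python writing a tiny SVG file by hand (the file name and shape are just for illustration):

```python
# Build a small SVG as a plain text string: a red circle on a 100x100 canvas.
# Being text, the result is small, scales to any resolution, and can be
# edited afterwards by hand or by another program.
svg = """<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <circle cx="50" cy="50" r="40" fill="red"/>
</svg>
"""

with open("circle.svg", "w") as f:
    f.write(svg)
```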
ColdQuanta
Not a quantum computing pure-play, they also do sensing.
D-Wave Systems
Ethidium bromide
Gurdwara
These places give out free food all the time.
The first time Ciro Santilli went to one was when an Indian friend of his took him to the one in the North of Paris when they were living there in the first half of the 2010's, the Gurdwara Singh Sabha France.
Instead of just talking, those people really go out, and put food on the plate for anyone who needs it (or even for those that don't really need it! Although who would be so soulless as to eat for free and not donate a few bucks if they can afford to???). There's a beauty to that.
Writing this also reminded Ciro of non-religious groups that would give out free food to the poor at
Half-life
The half-life of radioactive decay, which was discovered a few years before quantum mechanics was developed and matured, was a major mystery. Why do some nuclei decay in an apparently random fashion, while others don't? How is the state of different nuclei different from one another? This is mentioned in Inward Bound by Abraham Pais (1988), Chapter 6.e, "Why a half-life?".
The term also sees use in other areas, notably biology, where e.g. RNAs spontaneously decay as part of the cell's control system, see e.g. mentions in E. Coli Whole Cell Model by Covert Lab.
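In both usages the quantitative meaning is the same: $t_{1/2}$ is the time after which half of the original population remains, i.e. the standard exponential decay law
$$N(t) = N_0 \left(\tfrac{1}{2}\right)^{t / t_{1/2}} = N_0\, e^{-t \ln 2 / t_{1/2}}$$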
Hall effect
A voltage appears perpendicular to the current when a magnetic field is applied.
Figure 1. Hall effect experimental diagram. Source. The Hall effect refers to the transverse voltage produced across the conductor in this setup.
An intuitive video is:
The key formula for it is:
$$V_H = \frac{I B}{n q t}$$
where:
  • $V_H$: the Hall voltage
  • $I$: the current through the conductor
  • $B$: the applied magnetic field
  • $n$: the charge carrier density
  • $q$: the charge of each carrier
  • $t$: the thickness of the conductor
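As a quick numeric sanity check of the formula above, a minimal Python sketch (the copper-strip numbers are typical textbook values, assumed here just for illustration):

```python
# Hall voltage V_H = I * B / (n * q * t) for a thin conducting strip.
I = 1.0        # current through the strip, A
B = 1.0        # magnetic field perpendicular to the strip, T
n = 8.5e28     # charge carrier density of copper, carriers per m^3
q = 1.602e-19  # charge of each carrier (electron), C
t = 1e-4       # strip thickness, m (0.1 mm)

V_H = I * B / (n * q * t)
print(f"V_H = {V_H:.2e} V")  # about 7e-7 V: tiny, which is why sensitive voltmeters are needed
```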
Applications:
Other more precise non-classical versions:
OsmAnd
Kind of works! Notably, it has the amazing cycling database offline for you, if you stay within the 6 free area downloads. It is worth supporting these people beyond the 6 free downloads, however.
Principal component analysis
Given a bunch of points in $D$ dimensions, PCA maps those points to a new $D'$-dimensional space with $D' \leq D$.
$D'$ is a hyperparameter, and $D' = 2$ and $D' = 3$ are common choices when doing dataset exploration, as they can be easily visualized on a plot.
The mapping is done by projecting all points to a $D'$-dimensional hyperplane. PCA is an algorithm for choosing this hyperplane and the coordinate system within this hyperplane.
The hyperplane choice is done as follows:
  • the hyperplane will have origin at the mean point
  • the first axis is picked along the direction of greatest variance, i.e. where points are the most spread out.
    Intuitively, if we pick an axis of small variation, that would be bad, because all the points are very close to one another on that axis, so it doesn't contain as much information that helps us differentiate the points.
  • then we pick a second axis, orthogonal to the first one, and on the direction of second largest variance
  • and so on until $D'$ orthogonal axes are taken
www.sartorius.com/en/knowledge/science-snippets/what-is-principal-component-analysis-pca-and-how-it-is-used-507186 provides an OK-ish example with a concrete context. In there, each point is a country, and the input data is the consumption of different kinds of foods per year, e.g.:
  • flour
  • dry codfish
  • olive oil
  • sausage
so in this example, we would have input points in 4D.
The question is then: can we identify each country by what it eats?
Suppose that every country consumes the same amount of flour every year. Then, that number doesn't tell us much about which country each point represents (has the least variance), and the first PCA axes would basically never point anywhere near that direction.
Another cool thing is that PCA seems to automatically account for linear dependencies in the data, so it skips selecting highly correlated axes multiple times. For example, suppose that dry codfish and olive oil consumption are very high in Portugal and Spain, but very low in Germany and Poland. Therefore, the variation is very high in those two parameters, and contains a lot of information.
However, suppose that dry codfish consumption is also directly proportional to olive oil consumption. Because of this, it would be kind of wasteful if we selected both the dry codfish axis and the olive oil axis among our chosen directions,
since the information about codfish already tells us the olive oil. PCA apparently recognizes this, and instead picks the first axis at a 45 degree angle to both dry codfish and olive oil, and then moves on to something else for the second axis.
Much like the rest of machine learning, PCA can be seen as a form of compression.
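A minimal sketch of the procedure above in Python with NumPy, using a made-up toy version of the food dataset (the numbers are purely illustrative). It picks the axes via the eigendecomposition of the covariance matrix, which is equivalent to the variance-maximizing choice described earlier:

```python
import numpy as np

# Toy data: 5 countries x 4 foods (flour, dry codfish, olive oil, sausage).
# Flour is nearly constant (low variance); codfish and olive oil move together.
X = np.array([
    [100.0, 30.0, 25.0, 10.0],
    [102.0, 28.0, 24.0, 12.0],
    [ 98.0,  2.0,  3.0, 40.0],
    [101.0,  3.0,  2.0, 38.0],
    [ 99.0, 15.0, 14.0, 25.0],
])

D_prime = 2  # target dimension, the hyperparameter

# 1) Center the data: the hyperplane has its origin at the mean point.
X_centered = X - X.mean(axis=0)

# 2) The eigenvectors of the covariance matrix are the orthogonal directions
#    of greatest variance, sorted here from largest to smallest eigenvalue.
cov = np.cov(X_centered, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)  # eigh: ascending eigenvalues
axes = eigenvectors[:, ::-1][:, :D_prime]        # keep the top D' directions

# 3) Project every point onto the chosen axes.
X_reduced = X_centered @ axes
print(X_reduced)  # 5 points, now in 2D instead of 4D
```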
Project Zomboid
Hans Bethe
Head of the theoretical division at the Los Alamos Laboratory during the Manhattan Project.
Richard Feynman was working under him there, and Hans promoted him to team lead because Richard had impressed him.
He was also the person under whom Freeman Dyson originally worked when he moved from the United Kingdom to the United States.
And Hans also impressed Feynman: both were problem solvers, and both enjoyed mental arithmetic and numerical analysis.
This relationship is what brought Feynman to Hans' institution, Cornell University, after World War II, which is where Feynman did the main part of his Nobel Prize winning work on quantum electrodynamics.
Hans must have been the perfect PhD advisor. He was always smiling, and he seemed so approachable. And he was incredibly capable, notably in his calculation skills, which were much more important in those pre-computer days.
