Vs metric:
- a norm is the size of one element. A metric is the distance between two elements.
- a norm is only defined on a vector space. A metric can be defined on something that is not a vector space, though most basic examples of metrics are also on vector spaces.
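The standard way to connect the two: every norm induces a metric, which is why the concepts so often appear together. In LaTeX notation:

$$
d(x, y) = \|x - y\|
$$

This $d$ satisfies the metric axioms (non-negativity, symmetry, triangle inequality), and additionally inherits translation invariance and homogeneity from the norm, which a general metric need not have.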
Members of the orthogonal group.
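For concreteness, the defining property in LaTeX notation:

$$
O(n) = \{ Q \in \mathbb{R}^{n \times n} : Q^\top Q = I \}, \qquad \det Q = \pm 1
$$

The $\det Q = +1$ members are rotations (the special orthogonal group $SO(n)$); the $\det Q = -1$ members involve a reflection.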
Does not require entangled particles, unlike E91 which does.
en.wikipedia.org/w/index.php?title=Quantum_key_distribution&oldid=1079513227#BB84_protocol:_Charles_H._Bennett_and_Gilles_Brassard_(1984) explains it well. Basically:
- Alice and Bob randomly select a measurement basis of either 90 degrees or 45 degrees for each photon
- Alice measures each photon. There are two possible results in either measurement basis: parallel or perpendicular, representing the values 0 or 1. TODO understand better: weren't the possible results supposed to be pass or non-pass? She writes down the results and sends the (now collapsed) photons forward to Bob.
- Bob measures the photons and writes down the results
- Alice and Bob communicate their randomly chosen measurement bases to one another over the unencrypted classical channel. This channel must be authenticated to prevent man-in-the-middle attacks. The only way to do this authentication that makes sense is to use a pre-shared key to create message authentication codes. Using public-key cryptography for a digital signature would be pointless, since the only advantage of QKD is to avoid using public-key cryptography in the first place.
- they drop all photons for which they picked different bases. The measurements made in the same basis are the key. Because they are in the same basis, their results must always be the same in an ideal system.
- if there is an eavesdropper on the line, the results of measurements in the same basis can differ. Unfortunately, this can also happen due to imperfections in the system. Alice and Bob must decide what level of error is above the system's imperfections and implies that an attacker is listening. A toy simulation of this sifting and error check follows below.
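Here is a minimal classical simulation of the basis sifting and of the roughly 25% error rate that an intercept-resend eavesdropper causes. The function name bb84_sift and its parameters are made up for illustration, and photons are idealized as perfect bits:

```python
import random

def bb84_sift(n_photons=1000, eavesdrop=False):
    # Alice picks a random bit and a random basis (0 = 90 degrees,
    # 1 = 45 degrees) for each photon.
    alice_bits = [random.randint(0, 1) for _ in range(n_photons)]
    alice_bases = [random.randint(0, 1) for _ in range(n_photons)]

    # What actually travels down the line; Eve may tamper with it.
    sent_bits, sent_bases = alice_bits[:], alice_bases[:]
    if eavesdrop:
        # Intercept-resend attack: Eve measures in a random basis,
        # collapsing the photon, and re-emits it in her basis.
        for i in range(n_photons):
            eve_basis = random.randint(0, 1)
            if eve_basis != sent_bases[i]:
                sent_bits[i] = random.randint(0, 1)  # wrong basis: random result
                sent_bases[i] = eve_basis

    # Bob measures in his own random basis: a matching basis reproduces
    # the incoming bit, a mismatched basis gives a coin flip.
    bob_bases = [random.randint(0, 1) for _ in range(n_photons)]
    bob_bits = [sent_bits[i] if bob_bases[i] == sent_bases[i]
                else random.randint(0, 1)
                for i in range(n_photons)]

    # Sifting: keep only the positions where Alice and Bob happened to
    # pick the same basis. In an ideal system these bits form the key.
    sifted = [i for i in range(n_photons) if alice_bases[i] == bob_bases[i]]
    errors = sum(alice_bits[i] != bob_bits[i] for i in sifted)
    return len(sifted), errors

# No Eve: zero errors in the sifted key. With Eve: about 25% of sifted
# bits disagree, which is how Alice and Bob detect her.
print(bb84_sift(eavesdrop=False))
print(bb84_sift(eavesdrop=True))
```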
Explains how it is possible that everyone observes the same speed of light, even if they are moving towards or away from the light source!!!
This was most famously demonstrated by the Michelson-Morley experiment, which used the movement of the Earth at different times of the year to try to detect differences in the speed of light.
This leads to the following conclusions:
- length contraction and time dilation (quantified below)
- the speed of light is the maximum speed anything can reach
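Quantitatively, both effects are governed by the Lorentz factor. In LaTeX notation:

$$
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad \Delta t' = \gamma \, \Delta t, \qquad L' = \frac{L}{\gamma}
$$

Since $\gamma \to \infty$ as $v \to c$, this also hints at why $c$ acts as a hard speed limit.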
All of this of course goes completely against our daily physics intuition.
The "special" in the name refers to the fact that it is a superset of general relativity, which also explains gravity in a single framework.
Since time and space get all messed up together, you have to be very careful to understand what it means to say "I observed this to happen over there at that time", otherwise you will go crazy. A good way to think about it is this:
- use Einstein synchronization to set up a bunch of clocks for every position in your frame of reference
- on every point of space, you put a little detector which records events and the time of the event
- each detector can only detect events locally, i.e. events that happen very close to the detector
- then, after the event, the detectors can send a signal to you, sitting at the origin, telling you what they detected. Turning those records into another observer's coordinates is sketched below.
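Once events are recorded this way as (t, x) pairs, comparing what two observers see is just a coordinate transformation. A minimal sketch using the standard Lorentz transformation; the function name lorentz_transform is made up for illustration:

```python
import math

def lorentz_transform(t, x, v, c=299792458.0):
    """Coordinates (t', x') of the event (t, x) as seen from a frame
    moving with velocity v along +x relative to the original frame."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    t_prime = gamma * (t - v * x / c ** 2)
    x_prime = gamma * (x - v * t)
    return t_prime, x_prime

# An event recorded by the detector at x = 1 light-second at t = 2 s,
# seen from a frame moving at 60% of the speed of light (c = 1 units):
print(lorentz_transform(2.0, 1.0, 0.6, c=1.0))  # (1.75, -0.25)
```

Note how both the time and the position of the very same event change: this is exactly why the careful detector-lattice bookkeeping above is needed.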
mlcommons.org/en/ Their homepage is not amazingly organized, but it does the job.
Benchmark focused on deep learning. It has two parts: training and inference. Furthermore, a specific network model is specified for each benchmark in the closed category, so it goes beyond just specifying the dataset.
Results can be seen e.g. at:
- training: mlcommons.org/en/training-normal-21/
- inference: mlcommons.org/en/inference-datacenter-21/
And there are also separate repositories for each:
E.g. on mlcommons.org/en/training-normal-21/ we can see what the benchmarks are:
Dataset | Model |
---|---|
ImageNet | ResNet |
KiTS19 | 3D U-Net |
OpenImages | RetinaNet |
COCO dataset | Mask R-CNN |
LibriSpeech | RNN-T |
Wikipedia | BERT |
1TB Clickthrough | DLRM |
Go | MiniGo |
Takes a scalar field as input and produces a vector field.
Mnemonic: the gradient shows the direction in which the function increases fastest.
Think of a color gradient going from white to black from left to right.
Therefore, it has to:
- take a scalar field as input. Otherwise, how would you decide which of two values is larger? Vectors have no natural ordering.
- output a vector field that contains the direction in which the scalar increases fastest locally at each point. It has to give out vectors, since we are talking about directions. A numerical sketch follows below.
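A minimal numerical sketch of that picture, approximating the gradient with central finite differences; the helper name gradient and its parameters are made up for illustration:

```python
import numpy as np

def gradient(f, p, h=1e-6):
    """Approximate the gradient of the scalar field f at point p by
    central finite differences along each coordinate axis."""
    p = np.asarray(p, dtype=float)
    grad = np.zeros_like(p)
    for i in range(len(p)):
        step = np.zeros_like(p)
        step[i] = h
        grad[i] = (f(p + step) - f(p - step)) / (2 * h)
    return grad

# f(x, y) = x^2 + 3y has gradient (2x, 3): at (1, 2) it points in the
# direction (2, 3), the direction of fastest increase of f there.
f = lambda p: p[0] ** 2 + 3 * p[1]
print(gradient(f, [1.0, 2.0]))  # ~ [2. 3.]
```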
77 K. Low enough for "high temperature superconductors" such as yttrium barium copper oxide, but for "low temperature superconductors" you need to go much lower, typically with liquid helium, which is likely much more expensive. TODO by how much?
- you don't get any/sufficient recognition for your contributions. The closest thing they have to upvotes and reputation is the incredibly obscure "thank" feature, which is only visible to the receiver: en.wikipedia.org/wiki/Help:Notifications/Thanks
- deletionism is a tremendous problem on Wikipedia: the stuff you wrote can be deleted at any time by some random admin/opposing editor, examples at: Section "Deletionism on Wikipedia". There are two main causes:
  - tutorial-like subjectivity
  - notability
  This also possibly leads to edit wars in the case of sub-page content (full page deletion is more clearly arbitrated).
- Scope too limited, and politically defined. Everything has to sound encyclopedic and be notable enough. This basically excludes completely good tutorials.
- Insane, impossible-to-use markup-language-based talk pages instead of issue trackers?! Ridiculous!!! That change alone could make Wikipedia so much more amazing. Wikipedia could become a Stack Exchange killer by doing that alone + some basic reputation system. Some work on that is being done at: www.mediawiki.org/wiki/Extension:DiscussionTools, already in Beta as of 2022.
- Edit wars
There are unlisted articles; also show them, or only show them.