This is the most important thing to understand about Git!
You must:
- be able to visualize the commit tree
- understand how each git command modifies the commit DAG
But note that not every directed acyclic graph is a tree: a merge commit has more than one parent.
Example of a tree (and therefore also a DAG):
5
|
4 7
| |
3 6
|/
2
|
1
Convention in this presentation: arrows implicitly point up, just like in git log, i.e. each commit is drawn above its parent: 2 above 1, 3 and 6 above 2, and so on.
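You can draw this kind of graph directly in the terminal with standard git log flags, e.g.:

git log --oneline --graph --all --decorate

which shows every branch as ASCII art, one commit per line, with branch names attached.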
Some people like merges, but they are ugly and stupid. Rebase instead and keep linear history.
Linear history:
5 master
|
4
|
3
|
2
|
1 first commit
Branched history:
7 master
|\
| \
6  \
|\  \
| |  \
3 4   5
| |   |
|/   /
2   /
|  /
| /
1/ first commit
Which type of tree do you think will be easier to understand and maintain?
You may disconnect now if you still like branched history.
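Concretely, the way to keep history linear is to rebase a feature branch on top of master and fast-forward, instead of merging. A minimal sketch, assuming a branch my-feature that forked from master:

git checkout my-feature
# Replay my-feature's commits on top of master's tip.
git rebase master
git checkout master
# Fast-forward only: no merge commit is created.
git merge --ff-only my-feature

After this, master is a single straight line again.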
Generate a minimal test repo. You should get in the habit of doing this to test stuff out.
#!/usr/bin/env bash
mkdir git-tips
cd git-tips
git init
# Commits 1 to 5 on master.
for i in 1 2 3 4 5; do
  echo "$i" > f
  git add f
  git commit -m "$i"
done
# Go back to commit 2, the fork point in the first diagram above.
git checkout HEAD~3
git checkout -b my-feature
# Commits 6 and 7 on my-feature.
for i in 6 7; do
  echo "$i" > f
  git add f
  git commit -m "$i"
done
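After running this, from my-feature, git log --oneline --graph --all should show something like (hashes shortened and invented for illustration):

* ggggggg (HEAD -> my-feature) 7
* fffffff 6
| * eeeeeee (master) 5
| * ddddddd 4
| * ccccccc 3
|/
* bbbbbbb 2
* aaaaaaa 1

which matches the first tree drawn above.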
Oh, but there are usually two trees: local and remote.
So you also have to learn how to observe, modify, and sync with the remote tree!
But basically: run

git fetch

to update the remote tree. Then you can use remote branches exactly like any other branch, except that you prefix them with the remote name (usually origin/*), e.g.:
- origin/master is the latest fetch of the remote version of master
- origin/my-feature is the latest fetch of the remote version of my-feature
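For example, to see how your local master relates to the remote one (origin is the usual default remote name):

git fetch
# Commits the remote has that you don't:
git log --oneline master..origin/master
# Commits you have that the remote doesn't:
git log --oneline origin/master..master
# Replay your local commits on top of the remote tip, keeping history linear:
git rebase origin/master

Note that git fetch only updates the origin/* refs; it never touches your local branches.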
Non-POSIX only here.
The best open source implementation as of 2020 seems to be Mozilla rr.
Subset of ImageNet. About 167.62 GB in size according to www.kaggle.com/competitions/imagenet-object-localization-challenge/data.
Contains 1,281,167 images and exactly 1k categories, which is why this dataset is also known as ImageNet1k: datascience.stackexchange.com/questions/47458/what-is-the-difference-between-imagenet-and-imagenet1k-how-to-download-it
www.kaggle.com/competitions/imagenet-object-localization-challenge/overview clarifies a bit further how the categories are inter-related according to WordNet relationships:
The 1000 object categories contain both internal nodes and leaf nodes of ImageNet, but do not overlap with each other.
image-net.org/challenges/LSVRC/2012/browse-synsets.php lists all 1k labels with their WordNet IDs, with lines such as:

n02119789: kit fox, Vulpes macrotis
n02100735: English setter
n02096294: Australian terrier

There is however a bug on that page towards the middle:

n03255030: dumbbell
href="ht:
n02102040: English springer, English springer spaniel

and there is one missing label if we ignore that dummy href= line. A thing of beauty!

Also the lines are not sorted by synset; if we sort them, the first three lines are:
n01440764: tench, Tinca tinca
n01443537: goldfish, Carassius auratus
n01484850: great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias
gist.github.com/aaronpolhamus/964a4411c0906315deb9f4a3723aac57 has lines of type:

n02119789 1 kit_fox
n02100735 2 English_setter
n02110185 3 Siberian_husky

therefore numbered in the exact same order as image-net.org/challenges/LSVRC/2012/browse-synsets.php.
gist.github.com/yrevar/942d3a0ac09ec9e5eb3a lists all 1k labels as a plaintext file with their benchmark IDs:

{0: 'tench, Tinca tinca',
 1: 'goldfish, Carassius auratus',
 2: 'great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias',

therefore numbered in the sorted order of image-net.org/challenges/LSVRC/2012/browse-synsets.php.
The official in-benchmark line numbering can be seen in LOC_synset_mapping.txt, e.g. at www.kaggle.com/competitions/imagenet-object-localization-challenge/data?select=LOC_synset_mapping.txt:

n01440764 tench, Tinca tinca
n01443537 goldfish, Carassius auratus
n01484850 great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias
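So the official 0-based class index of a synset is simply its line number in that file. A quick shell sanity check (a sketch, assuming LOC_synset_mapping.txt has been downloaded from the Kaggle page above into the current directory):

# Expect exactly 1000 classes.
wc -l < LOC_synset_mapping.txt
# Confirm the file is sorted by synset ID; prints nothing if it is.
sort -c LOC_synset_mapping.txt
# Derive the official 0-based class index from the line number.
awk '{ printf "%d %s\n", NR - 1, $1 }' LOC_synset_mapping.txt | head -n 3
# Expected output:
# 0 n01440764
# 1 n01443537
# 2 n01484850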