For the strong.
git log --abbrev-commit --decorate --graph --pretty=oneline master HEAD
Output:
* b4ec057 (master) 5
* 0b37c1b 4
| * fbfbfe8 (HEAD -> my-feature) 7
| * 7b0f59d 6
|/
* 661cfab 3
* 6d748a9 2
* c5f8a2c 1
If we also add --simplify-by-decoration, which you very often want on a real repository with many commits:
* b4ec057 (master) 5
| * fbfbfe8 (HEAD -> my-feature) 7
|/
* c5f8a2c 1
As we can see, this removes any commit that is neither:
- under a branch or tag
- at the intersection of two branches or tags
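To reproduce the above setup locally, here's a minimal sketch (it assumes your default branch is called master; the hashes will of course differ on your machine):
git init simplify-test
cd simplify-test
for i in 1 2 3; do echo "$i" > f; git add f; git commit -m "$i"; done
git checkout -b my-feature
for i in 6 7; do echo "$i" > f; git add f; git commit -m "$i"; done
git checkout master
for i in 4 5; do echo "$i" > f; git add f; git commit -m "$i"; done
git checkout my-feature
git log --abbrev-commit --decorate --graph --pretty=oneline --simplify-by-decoration master HEAD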
git rebase moves commits one by one
In order to solve conflicts, you just have to understand what commit you are trying to move where.
E.g. if from:
5 master
|
4 7 my-feature HEAD
| |
3 6
|/
2
|
1
we do:
git rebase master
what happens step by step is: first 6 is moved on top of 5:
6on5 HEAD
|
5 master
|
4 7 my-feature
| |
3 6
| |
2-----------------+
|
1
and then 7 is moved on top of the new 6:
7on5 HEAD
|
6on5
|
5 master
|
4 7 my-feature
| |
3 6
| |
2-----------------+
|
1
All good? So OK, let's move my-feature to the new 7:
7on5 my-feature HEAD
|
6on5
|
5 master
|
4
|
3
|
2
|
1
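In terms of commands, the whole operation, including possible conflicts, is the standard loop:
git rebase master
# If a commit conflicts, the rebase stops at that commit. Edit the conflicted
# files, then:
git add .
git rebase --continue
# Repeat until every commit has been replayed. To give up and go back to the
# pre-rebase state:
git rebase --abort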
The key to solve conflicts: see the two conflicting diffs
The key to solve conflicts is: you have to understand which two commits touched a given line (one from master, one from the feature branch), and then combine them somehow.
Or in other words, at every rebase conflict we have something like:
master-commit feature-commit
| |
| |
base-commit------+
Therefore there are 2 diffs that you have to understand and reconcile:
- base-commit to master-commit
- base-commit to feature-commit
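A sketch of commands to actually see those two diffs while stopped at a conflict, assuming a reasonably recent Git:
# Show the base-commit version inside the conflict markers as well:
git config merge.conflictStyle diff3
# Compare the conflicted files in the working tree against each of the three stages:
git diff --base    # diff against base-commit
git diff --ours    # diff against the side we are rebasing onto
git diff --theirs  # diff against the commit being replayed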
CLI hello world:
gnuplot -p -e 'p sin(x)'
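And a non-interactive variant that renders to a PNG file instead of opening a window (the output file name is just for illustration):
gnuplot -e "set terminal png; set output 'sin.png'; p sin(x)"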
The good:
- slick UI! But the characters are very hard to read, they're way too small.
- attempts to show state diffs with a flash, but it goes by too fast; it would be better if it were more permanent
- Reverse debugging
The good:
- Reverse debugging
- circuit diagram
The bad:
- Clunky UI
- circuit diagram doesn't show any state??
Likely the best JavaScript 2D game engine as of 2023. Uses Matter.js as a physics engine if enabled. There's also an alternative (in-house?) "arcade" engine: photonstorm.github.io/phaser3-docs/Phaser.Physics.Arcade.ArcadePhysics.html but it appears to be simpler/less robust (but also possibly faster).
TODO any 2D first person examples a bit like Ciro's 2D reinforcement learning games?
The examples are present under:
git clone https://github.com/photonstorm/phaser3-examples
but note that the repo is huge, about 4.5 GiB on local disk, as it has tons of assets.
The demos also include a Monaco-editor-based sandbox mode, where you can edit code directly on the web and see the game update, which is a really sweet addition.
To run the demos locally, tested on Ubuntu 22.10:
git clone https://github.com/liabru/matter-js
cd matter-js
git checkout 0.19.0
npm install
npm run dev
and this opens up the demos in the browser.
Good packaging! Tested on Ubuntu 22.10:
git clone https://github.com/activatedgeek/LeNet-5
cd LeNet-5
git checkout 95b55a838f9d90536fd3b303cede12cf8b5da47f
virtualenv -p python3 .venv
. .venv/bin/activate
# Their requirements.txt uses >=, and some of the latest versions are
# incompatible with our Ubuntu, so we pin known-good versions instead.
pip install \
Pillow==6.2.0 \
numpy==1.24.2 \
onnx==1.13.1 \
torch==2.0.0 \
torchvision==0.15.1 \
visdom==0.2.4 \
;
time python run.py
This throws a billion exceptions because we didn't start the visdom server, but never mind that.
The script does a fixed 15 epochs.
Output on P51:
real 2m10.262s
user 11m9.771s
sys 0m26.368s
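If you do want the training dashboards, a sketch based on visdom's standard server invocation (it serves on port 8097 by default):
python -m visdom.server &
time python run.py
# then open http://localhost:8097 in the browser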
Instructions at:
Ubuntu 22.10 setup with tiny dummy manually generated ImageNet and run on ONNX:
sudo apt install pybind11-dev
git clone https://github.com/mlcommons/inference
cd inference
git checkout v2.1
virtualenv -p python3 .venv
. .venv/bin/activate
pip install numpy==1.24.2 pycocotools==2.0.6 onnxruntime==1.14.1 opencv-python==4.7.0.72 torch==1.13.1
cd loadgen
CFLAGS="-std=c++14" python setup.py develop
cd -
cd vision/classification_and_detection
python setup.py develop
wget -q https://zenodo.org/record/3157894/files/mobilenet_v1_1.0_224.onnx
export MODEL_DIR="$(pwd)"
export EXTRA_OPS='--time 10 --max-latency 0.2'
tools/make_fake_imagenet.sh
DATA_DIR="$(pwd)/fake_imagenet" ./run_local.sh onnxruntime mobilenet cpu --accuracy
Last line of output on P51, which appears to contain the benchmark results:
TestScenario.SingleStream qps=58.85, mean=0.0138, time=0.136, acc=62.500%, queries=8, tiles=50.0:0.0129,80.0:0.0137,90.0:0.0155,95.0:0.0171,99.0:0.0184,99.9:0.0187
where presumably qps means queries per second, which is the main result we are interested in: the more the better.
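Presumably a pure performance run just drops --accuracy, picking up the EXTRA_OPS we exported above; this is an untested assumption, not something from the original run:
DATA_DIR="$(pwd)/fake_imagenet" ./run_local.sh onnxruntime mobilenet cpu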
Running:
tools/make_fake_imagenet.sh
produces a tiny ImageNet subset with 8 images under fake_imagenet/.
fake_imagenet/val_map.txt contains:
val/800px-Porsche_991_silver_IAA.jpg 817
val/512px-Cacatua_moluccensis_-Cincinnati_Zoo-8a.jpg 89
val/800px-Sardinian_Warbler.jpg 13
val/800px-7weeks_old.JPG 207
val/800px-20180630_Tesla_Model_S_70D_2015_midnight_blue_left_front.jpg 817
val/800px-Welsh_Springer_Spaniel.jpg 156
val/800px-Jammlich_crop.jpg 233
val/782px-Pumiforme.JPG 285
where the numbers are ImageNet class IDs, e.g.:
- 817: 'sports car, sport car',
- 89: 'sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita',
TODO prepare and test on the actual ImageNet validation set, README says:
Prepare the imagenet dataset to come.
Since that one is undocumented, let's try the COCO dataset instead, which uses COCO 2017 and is also a bit smaller. Note that this is not part of MLPerf anymore since v2.1, only ImageNet and open images are used. But still:
wget https://zenodo.org/record/4735652/files/ssd_mobilenet_v1_coco_2018_01_28.onnx
DATA_DIR_BASE=/mnt/data/coco
export DATA_DIR="${DATA_DIR_BASE}/val2017-300"
mkdir -p "$DATA_DIR_BASE"
cd "$DATA_DIR_BASE"
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip val2017.zip
unzip annotations_trainval2017.zip
mv annotations val2017
cd -
cd "$(git-toplevel)"
python tools/upscale_coco/upscale_coco.py --inputs "$DATA_DIR_BASE" --outputs "$DATA_DIR" --size 300 300 --format png
cd -
Now:
./run_local.sh onnxruntime mobilenet cpu --accuracy
fails immediately with:
No such file or directory: '/path/to/coco/val2017-300/val_map.txt
The more plausible looking:
./run_local.sh onnxruntime mobilenet cpu --accuracy --dataset coco-300
first takes a while, most likely to preprocess something, which it does only once, and then fails:
Traceback (most recent call last):
File "/home/ciro/git/inference/vision/classification_and_detection/python/main.py", line 596, in <module>
main()
File "/home/ciro/git/inference/vision/classification_and_detection/python/main.py", line 468, in main
ds = wanted_dataset(data_path=args.dataset_path,
File "/home/ciro/git/inference/vision/classification_and_detection/python/coco.py", line 115, in __init__
self.label_list = np.array(self.label_list)
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (5000, 2) + inhomogeneous part.
TODO!
ONNX visualizer.