This doesn't do a whole lot. Ciro Santilli wouldn't really call it a web framework; it's more like middleware. Real web frameworks are built on top of it.
Examples under nodejs/express:
- nodejs/express/min.js: minimal example. Visit localhost:3000 and it shows "hello world". It is a bit wrong because the headers say HTML but we return plaintext; see the sketch after this list.
- nodejs/express/index.js: example dump with automated tests where possible. The automated tests are run at startup after the server launches. Then the server keeps running so you can interact with it.
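A minimal sketch of what such a server looks like (this is not the actual nodejs/express/min.js, just an illustration; setting the Content-Type explicitly avoids the HTML vs. plaintext mismatch mentioned above):
// Hypothetical minimal Express server (not the actual nodejs/express/min.js).
const express = require('express');
const app = express();
app.get('/', (req, res) => {
  // Send plaintext with a matching Content-Type header.
  res.type('text/plain').send('hello world');
});
app.listen(3000, () => {
  console.log('Listening on http://localhost:3000');
});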
A live example on Heroku can be seen at: github.com/cirosantilli/heroku-node-min
Looks interesting.
It seems to abstract the part where the client messages the backend, focusing on making it easy to plug in a Front-end web framework of choice to manage client state.
Has the "main web API is the same as the REST API" focus, which is fundamental 2020-nowadays.
Uses Socket.IO, which allows the client JavaScript to register callbacks when data is updated, so that e.g. in their default chat app messages appear immediately as they are sent:
client.service('messages').on('created', addMessage);
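For context, a hypothetical addMessage callback (not taken from the FeathersJS docs) could simply append the newly created message to the page:
// Hypothetical callback: append each newly created message to a <ul id="messages">.
function addMessage(message) {
  const li = document.createElement('li');
  li.textContent = message.text;
  document.querySelector('#messages').appendChild(li);
}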
Their standard template from:
feathers generate app
on @feathersjs/cli@4.5.0 looks promising! It includes:
- several authentication methods, including OAuth
- testing
- backend database with one of several object-relational mappings. However, they don't abstract across them: e.g. the default chat example uses NeDB, but a real app will likely use Sequelize, so a port is needed.
- Front-end web framework: not built into the generator, but there are some sample repos pointed to from the documentation, and they did work out-of-box.
They don't have a default template for a Front-end web framework however, unfortunately: docs.feathersjs.com/guides/frameworks.html#the-feathers-chat lists a few versions of the chat app, which is their hello world. But the generated app is in itself a completely boring app with a single splash page and no database interaction, so not a good showcase. The actual showcase app is feathersjs/feathers-chat.
And there is no official example of the chat app that is immediately deployable to Heroku (see FeathersJS Heroku deployment): all setups require some thinking.
Global source entry point: determined in package.json as usual, defaults to src/index.js.
The idea is cool: it really unifies front end and back end.
But Ciro Santilli feels that the approach proposed by FeathersJS, of being glue between bigger third-party Front-end web frameworks like React and the backend (object-relational mapping), is more promising and flexible.
By the current user:
bkill 0
www.wholecellviz.org/viz.php awesome visualization of simtk, paper: www.ncbi.nlm.nih.gov/pmc/articles/PMC3413483/ A Whole-Cell Computational Model Predicts Phenotype from Genotype - 2013 - Jonathan R. Karr.
Followed up by the E. Coli Whole Cell Model by Covert Lab.
www.newyorker.com/magazine/2022/03/07/a-journey-to-the-center-of-our-cells A Journey to the Center of Our Cells (2022) by James Somers comments on M. genitalium in general, and in particular on the JCVI strains.
MacKenzie Bezos' new husband after she divorced Bezos.
Science teacher at the Lakeside School in Seattle.
www.dailymail.co.uk/femail/article-9338723/Who-billionaire-Mackenzie-Scotts-new-husband-Dan-Jewett.html Who IS billionaire Mackenzie Scott's new husband Dan Jewett?
MacKenzie Bezos went on to marry a science teacher who taught their children.
The contrast with Bezos's girlfriend is simply comical. MacKenzie married the idealistic morally upright science teacher, while Bezos went for a silly sex bomb. Ah, bruta flor, do querer!
It is a shame, but this game just doesn't feel good. The controls are just not as snappy as Mario Kart 64's, the levels are too wide, which limits player interaction, and the weapons feel clumsy, weak and unexciting. These are all aspects that the closed-source smashkarts.io gets pretty much right.
It is interpreted. It actually implements a Python (-like ?) interpreter that can run on a microcontroller. See e.g.: Compile MicroPython code for Micro Bit locally.
As a result, it is very convenient, as it does not require a C toolchain to build against, but it is also very slow and produces larger images.
Instructions at:
Ubuntu 22.10 setup with a tiny, manually generated dummy ImageNet, run on ONNX:
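# Install pybind11 headers, needed to build the MLPerf LoadGen Python bindings.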
sudo apt install pybind11-dev
git clone https://github.com/mlcommons/inference
cd inference
git checkout v2.1
virtualenv -p python3 .venv
. .venv/bin/activate
pip install numpy==1.24.2 pycocotools==2.0.6 onnxruntime==1.14.1 opencv-python==4.7.0.72 torch==1.13.1
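# Build and install LoadGen, the MLPerf load generator, from source.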
cd loadgen
CFLAGS="-std=c++14" python setup.py develop
cd -
cd vision/classification_and_detection
python setup.py develop
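# Download the pretrained MobileNet v1 ONNX model.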
wget -q https://zenodo.org/record/3157894/files/mobilenet_v1_1.0_224.onnx
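# Point the benchmark at the downloaded model and pass extra options to keep the run short.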
export MODEL_DIR="$(pwd)"
export EXTRA_OPS='--time 10 --max-latency 0.2'
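# Generate a tiny fake ImageNet subset (8 images) for a quick smoke test.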
tools/make_fake_imagenet.sh
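# Run the MobileNet classification benchmark on CPU with the ONNX Runtime backend, in accuracy mode.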
DATA_DIR="$(pwd)/fake_imagenet" ./run_local.sh onnxruntime mobilenet cpu --accuracy
Last line of output on P51, which appears to contain the benchmark results:
TestScenario.SingleStream qps=58.85, mean=0.0138, time=0.136, acc=62.500%, queries=8, tiles=50.0:0.0129,80.0:0.0137,90.0:0.0155,95.0:0.0171,99.0:0.0184,99.9:0.0187
where presumably qps means queries per second, which is the main result we are interested in: the more the better.
Running:
tools/make_fake_imagenet.sh
produces a tiny ImageNet subset with 8 images under fake_imagenet/.
fake_imagenet/val_map.txt contains:
val/800px-Porsche_991_silver_IAA.jpg 817
val/512px-Cacatua_moluccensis_-Cincinnati_Zoo-8a.jpg 89
val/800px-Sardinian_Warbler.jpg 13
val/800px-7weeks_old.JPG 207
val/800px-20180630_Tesla_Model_S_70D_2015_midnight_blue_left_front.jpg 817
val/800px-Welsh_Springer_Spaniel.jpg 156
val/800px-Jammlich_crop.jpg 233
val/782px-Pumiforme.JPG 285
where the number after each filename is the ImageNet class index, e.g.:
- 817: 'sports car, sport car',
- 89: 'sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita',
TODO prepare and test on the actual ImageNet validation set, README says:
Prepare the imagenet dataset to come.
Since that one is undocumented, let's try the COCO dataset instead, which uses COCO 2017 and is also a bit smaller. Note that this is not part of MLPerf anymore since v2.1: only ImageNet and Open Images are used now. But still:
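# Download the SSD-MobileNet v1 ONNX model trained on COCO.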
wget https://zenodo.org/record/4735652/files/ssd_mobilenet_v1_coco_2018_01_28.onnx
DATA_DIR_BASE=/mnt/data/coco
export DATA_DIR="${DATA_DIR_BASE}/val2017-300"
mkdir -p "$DATA_DIR_BASE"
cd "$DATA_DIR_BASE"
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
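# Unpack both archives and move the annotations under val2017/ (presumably the layout the benchmark scripts expect).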
unzip val2017.zip
unzip annotations_trainval2017.zip
mv annotations val2017
cd -
cd "$(git-toplevel)"
python tools/upscale_coco/upscale_coco.py --inputs "$DATA_DIR_BASE" --outputs "$DATA_DIR" --size 300 300 --format png
cd -
Now:
./run_local.sh onnxruntime mobilenet cpu --accuracy
fails immediately with:
No such file or directory: '/path/to/coco/val2017-300/val_map.txt'
The more plausible looking:
./run_local.sh onnxruntime mobilenet cpu --accuracy --dataset coco-300
first takes a while, most likely to preprocess something, which it does only once, and then fails:
Traceback (most recent call last):
File "/home/ciro/git/inference/vision/classification_and_detection/python/main.py", line 596, in <module>
main()
File "/home/ciro/git/inference/vision/classification_and_detection/python/main.py", line 468, in main
ds = wanted_dataset(data_path=args.dataset_path,
File "/home/ciro/git/inference/vision/classification_and_detection/python/coco.py", line 115, in __init__
self.label_list = np.array(self.label_list)
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (5000, 2) + inhomogeneous part.
TODO!
A Three-Dimensional Model of the Myoglobin Molecule Obtained by X-Ray Analysis (1958)
@cirosantilli/_file/nodejs/nodejs/read_child_process_lines.js
This example reads lines from a child process one by one, as soon as lines become fully available. Related:
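A minimal sketch of the technique (not the actual read_child_process_lines.js), using Node's readline module on the child's stdout so each line is handled as soon as it is complete:
// Sketch: spawn a child that prints one line per second and print each line
// as soon as it arrives, rather than waiting for the process to exit.
const { spawn } = require('child_process');
const readline = require('readline');

const child = spawn('bash', ['-c', 'for i in 1 2 3; do echo "line $i"; sleep 1; done']);
const rl = readline.createInterface({ input: child.stdout });
rl.on('line', (line) => {
  console.log(`got: ${line}`);
});
child.on('close', (code) => {
  console.log(`child exited with code ${code}`);
});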