Quick overview at stackoverflow.com/questions/1780599/what-is-the-meaning-of-posix/31865755#31865755
Examples under c/posix:
- c/posix/signal_return.c: stackoverflow.com/questions/37063212/where-does-signal-handler-return-back-to
- c/posix/inet/pton.c: inet_pton demo. Adapted from man inet_pton on Ubuntu 23.04. Usage:
./pton.out 192.187.1.42
Output:
0xc0bb012a
So we see that the string was converted to an integer, e.g.:
- 0xc0 = 192
- 0xbb = 187
- 0x01 = 1
- 0x2a = 42
See also: stackoverflow.com/questions/1680622/ip-address-to-integer-c/76520978#76520978
- c/posix/inet/ntop.c: inet_ntop demo. Adapted from man inet_pton on Ubuntu 23.04. Usage:
./ntop.out 0x01021AA0
Likely a good replacement for Python. If the ecosystem gets there, Ciro Santilli would gladly use it more.
Java is good.
Its boilerplate requirement is a pain, but the design is otherwise very clean.
But its ecosystem sucks.
The development process is rather closed, the issue tracker obscure.
And above all, Google LLC v. Oracle America, Inc. killed everybody's trust in it once and for all. Thanks Oracle.
To run the demos locally, tested on Ubuntu 22.10:
git clone https://github.com/liabru/matter-js
cd matter-js
git checkout 0.19.0
npm install
npm run dev
and this opens up the demos in the browser.
A multi-scenario demo.
async is omnipresent in JavaScript for two reasons:
- you make network requests all the time
- JavaScript is single threaded, so if you are waiting for a network request, the UI freezes, see remarks on the deprecation of synchronous HTTP requests at: developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Synchronous_and_Asynchronous_Requests
However, it is also Hell: how to convert async to sync in JavaScript. God, it's impossible! You just have to convert the entire fucking call stack all the way up to async functions. It could mean refactoring hundreds of functions.
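To make that concrete, here is a minimal sketch with hypothetical function names (not from any real codebase): as soon as the leaf function becomes async, every caller that needs its return value has to become async as well, all the way up to the entry point, because synchronous code cannot block on a promise.
// Hypothetical example: myLeaf() used to be synchronous.
async function myLeaf() {
  return 42
}
// Any caller that needs its value must now become async too...
async function middle() {
  return await myLeaf() + 1
}
// ...and so must its caller, and so on up to the entry point.
async function main() {
  console.log(await middle())
}
main()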
To be fair, there is a logic to this, if you put yourself within the crappiness of the JavaScript threading model. And Python is not that much better with its Global Interpreter Lock.
The problem is that async was introduced relatively late; previously we just had to use infinitely deep callback trees, which was worse:
myAsync().then(ret => myAsync2(ret).then(ret2 => myAsync3(ret2)))
compared to the new, infinitely more readable:
ret = await myAsync()
ret2 = await myAsync2(ret)
ret3 = await myAsync3(ret2)
But now we are in an endless period of transition between both worlds.
It is also worth mentioning that callbacks are still inescapable if you really want to fan out into a non-linear dependency graph, usually with Promise.all:
await Promise.all([
  myAsync(1).then(ret => myAsync2(ret)),
  myAsync(2).then(ret => myAsync2(ret)),
])
Bibliography:
- stackoverflow.com/questions/21819858/how-to-wrap-async-function-calls-into-a-sync-function-in-node-js-or-javascript
- stackoverflow.com/questions/9121902/call-an-asynchronous-javascript-function-synchronously
- stackoverflow.com/questions/47227550/using-await-inside-non-async-function
- stackoverflow.com/questions/43832490/is-it-possible-to-use-await-without-async-in-js
- stackoverflow.com/questions/6921895/synchronous-delay-in-code-execution
And then, after many, many hours of this work, you might notice that the new code is way, way, way slower than before, because making small functions async has a large performance impact: madelinemiller.dev/blog/javascript-promise-overhead/. Real world case with a 4x slowdown: github.com/ourbigbook/ourbigbook/tree/async-slow.
Anyways, since you Googled here, you might as well learn the standard pattern to convert callback functions into async functions using a promise: stackoverflow.com/questions/4708787/get-password-from-input-using-node-js/71868483#71868483
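That pattern is just wrapping the callback API in a new Promise. A hedged sketch with a hypothetical someCallbackApi function, rather than the exact code from the linked answer:
// A hypothetical old-style (err, result) callback API, stubbed here so
// that the example runs on its own; imagine it doing real I/O instead.
function someCallbackApi(arg, cb) {
  setTimeout(() => cb(null, arg * 2), 100)
}

// The standard wrapping: return a new Promise and resolve/reject it
// from inside the callback.
function someCallbackApiPromise(arg) {
  return new Promise((resolve, reject) => {
    someCallbackApi(arg, (err, result) => {
      if (err) {
        reject(err)
      } else {
        resolve(result)
      }
    })
  })
}

// Callers can now simply await it.
someCallbackApiPromise(21).then(result => console.log(result)) // 42
For APIs that follow the standard (err, result) callback convention, Node.js' built-in util.promisify does this wrapping for you.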
WellSync, if you are gonna useSync this wonky language thing inSync one place, you might as well useSync it everywhereSync and make it more decent. See also: how to convert async to sync in JavaScript.
Their CLI debugger is a bit crap compared to GDB, basic functionality is either lacking or too verbose:
- stackoverflow.com/questions/65493221/how-to-break-at-a-specific-function-or-line-with-the-node-js-node-inspect-comman
- stackoverflow.com/questions/70486188/how-to-break-on-uncaught-exception-on-the-node-js-node-inspect-command-line-debu
Some operations are only possible on the browser debug UI...
Documentation at: nodejs.org/dist/latest-v16.x/docs/api/debugger.html
In this section we will use the file nodejs/bench_mem.js. Tests are run on Node.js v16.14.2 from NVM, Ubuntu 21.10, on a Lenovo ThinkPad P51 (2017), which has 32 GB of RAM.
Related answer: stackoverflow.com/questions/12023359/what-do-the-return-values-of-node-js-process-memoryusage-stand-for/72043884#72043884
First, using topp from stackoverflow.com/questions/1221555/retrieve-cpu-usage-and-memory-usage-of-a-single-process-on-linux/40576129#40576129, let's observe the memory usage of some baseline cases.
For a Node.js infinite loop nodejs/infinite_loop.js:
topp infinite_loop.js
This gives approximately:
- RSS: 20 MB
- VSZ: 230 MB
Adding a single hello world to it as in nodejs/infinite_hello.js and running:
topp infinite_hello.js
leads to:
- RSS: 26 MB
- VSZ: 580 MB
We understand that Node.js preallocates VSZ wildly. No big deal, but it does mean that VSZ is a useless measure for Node.js.
Forcing garbage collection as in nodejs/infinite_hello_gc.js brings it down to 20 MB however:
topp node --expose-gc infinite_hello_gc.js
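Forcing a collection from inside the script is just a call to global.gc(), which only exists when Node.js is started with --expose-gc. A minimal sketch of what such a hello world plus forced GC loop could look like (the actual nodejs/infinite_hello_gc.js may differ):
// Run with: node --expose-gc infinite_hello_gc.js
console.log('hello world')
while (true) {
  // global.gc is only defined because of --expose-gc.
  global.gc()
}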
Finally let's see a baseline for process.memoryUsage with nodejs/infinite_memoryusage.js:
node --expose-gc infinite_memoryusage.js
This gives initially:
{
  rss: 23851008,
  heapTotal: 6987776,
  heapUsed: 3674696,
  external: 285296,
  arrayBuffers: 10422
}
but after a few seconds randomly jumps to:
{
  rss: 26005504,
  heapTotal: 9084928,
  heapUsed: 3761240,
  external: 285296,
  arrayBuffers: 10422
}
so we understand that these values fluctuate somewhat even while the program does the same thing over and over.
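The loop presumably just dumps process.memoryUsage() periodically; a minimal sketch of that idea (the actual nodejs/infinite_memoryusage.js may differ):
// Run with: node --expose-gc infinite_memoryusage.js
// Print memory statistics once a second, forever.
setInterval(() => {
  // Collect garbage first to reduce noise in the numbers.
  global.gc()
  console.log(process.memoryUsage())
}, 1000)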
First a baseline case with an array of length 1:
node --expose-gc bench_mem.js n 1
This gives the same results as node --expose-gc infinite_memoryusage.js. The same result is obtained by setting a = undefined as in:
node --expose-gc bench_mem.js dealloc
If we use Int32Array typed array buffers instead of a simple Array:
node --expose-gc bench_mem.js array-buffer n N
we see that the memory is now, unsurprisingly, accounted for under arrayBuffers, e.g. for N = 1 million:
{
  rss: 31776768,
  heapTotal: 6463488,
  heapUsed: 3674520,
  external: 4285296,
  arrayBuffers: 4010422
}
Results for different N:
| N     | arrayBuffers | rss    | rss per elem (bytes) |
|-------|--------------|--------|----------------------|
| 1 M   | 4 MB         | 31 MB  | 5                    |
| 10 M  | 40 MB        | 67 MB  | 4.6                  |
| 100 M | 400 MB       | 427 MB | 4                    |
We see therefore that typed arrays are much closer to what they advertise (4 bytes per element), even for smaller element counts, as expected.
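As a hedged sketch of how such a comparison can be made (the real bench_mem.js may be organized differently): allocate either a plain Array or an Int32Array of N elements, force a garbage collection, and print process.memoryUsage():
// Usage sketch (hypothetical file name):
//   node --expose-gc mem_sketch.js array-buffer 1000000
//   node --expose-gc mem_sketch.js array 1000000
const mode = process.argv[2]
const n = parseInt(process.argv[3], 10)

let a
if (mode === 'array-buffer') {
  // Typed array: 4 bytes per element, accounted under arrayBuffers.
  a = new Int32Array(n)
  for (let i = 0; i < n; i++) {
    a[i] = i
  }
} else {
  // Plain Array of numbers: lives on the V8 heap, accounted under heapUsed.
  a = []
  for (let i = 0; i < n; i++) {
    a.push(i)
  }
}

global.gc()
console.log(process.memoryUsage())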
Now let's try one million objects of type { a: 1, b: -1 }:
node --expose-gc bench_mem.js obj
gives:
{
  rss: 138969088,
  heapTotal: 105246720,
  heapUsed: 70103896,
  external: 285296,
  arrayBuffers: 10422
}
Disaster! Memory usage is up to 70 MB! Why?? We were expecting only about 24, 4 baseline + 2 * 10 for each million int?!
And now an equivalent version using class:
node --expose-gc bench_mem.js class
gives the same result.
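The class variant presumably means something along these lines (a hedged sketch, the real bench_mem.js may differ); since the instances carry the same two fields as the object literals, V8 ends up with essentially the same layout, hence the identical memory usage:
// Run with: node --expose-gc class_sketch.js (hypothetical file name)
class Pair {
  constructor(a, b) {
    this.a = a
    this.b = b
  }
}
const objs = []
for (let i = 0; i < 1000000; i++) {
  objs.push(new Pair(1, -1))
}
global.gc()
console.log(process.memoryUsage())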
Let's try Array:
node --expose-gc bench_mem.js arr
{
  rss: 164597760,
  heapTotal: 129363968,
  heapUsed: 78117008,
  external: 285296,
  arrayBuffers: 10422
}
This is even worse at 78 MB!! OMG why.
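The overhead is less surprising once you remember that each { a: 1, b: -1 } is a separate heap allocation with its own V8 bookkeeping (hidden class pointer, property storage) plus the pointer to it stored in the containing array, so tens of bytes per object rather than the 8 bytes of raw payload. A hedged sketch (not part of the original bench_mem.js) that measures the approximate heap cost per object directly:
// Run with: node --expose-gc per_object_cost.js (hypothetical file name)
const N = 1000000

global.gc()
const before = process.memoryUsage().heapUsed

// Keep all objects reachable so the GC cannot reclaim them.
const objs = []
for (let i = 0; i < N; i++) {
  objs.push({ a: 1, b: -1 })
}

global.gc()
const after = process.memoryUsage().heapUsed
console.log(`~${((after - before) / N).toFixed(1)} bytes per object`)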
No support:
Example with raw queries under nodejs/sequelize/raw/many_to_many.js
Pinned article: Introduction to the OurBigBook Project
Welcome to the OurBigBook Project! Our goal is to create the perfect publishing platform for STEM subjects, and get university-level students to write the best free STEM tutorials ever.
Everyone is welcome to create an account and play with the site: ourbigbook.com/go/register. We believe that students themselves can write amazing tutorials, but teachers are welcome too. You can write about anything you want, it doesn't have to be STEM or even educational. Silly test content is very welcome and you won't be penalized in any way. Just keep it legal!
Video 1. Intro to OurBigBook. Source.
We have two killer features:
- topics: topics group articles by different users with the same title, e.g. here is the topic for the "Fundamental Theorem of Calculus": ourbigbook.com/go/topic/fundamental-theorem-of-calculus
Articles of different users are sorted by upvote within each article page. This feature is a bit like:
- a Wikipedia where each user can have their own version of each article
- a Q&A website like Stack Overflow, where multiple people can give their views on a given topic, and the best ones are sorted by upvote. Except you don't need to wait for someone to ask first, and any topic goes, no matter how narrow or broad
This feature makes it possible for readers to find better explanations of any topic created by other writers. And it allows writers to create an explanation in a place that readers might actually find it.
Figure 1. Screenshot of the "Derivative" topic page. View it live at: ourbigbook.com/go/topic/derivative
Video 2. OurBigBook Web topics demo. Source.
- local editing: you can store all your personal knowledge base content locally in a plaintext markup format that can be edited locally and published either:
- to OurBigBook.com to get awesome multi-user features like topics and likes
- as HTML files to a static website, which you can host yourself for free on many external providers like GitHub Pages, and remain in full control
This way you can be sure that even if OurBigBook.com were to go down one day (which we have no plans to do as it is quite cheap to host!), your content will still be perfectly readable as a static site.
Figure 2. You can publish local OurBigBook lightweight markup files to either OurBigBook.com or as a static website.
Figure 3. Visual Studio Code extension installation.
Figure 5. You can also edit articles on the Web editor without installing anything locally.
Video 3. Edit locally and publish demo. Source. This shows editing OurBigBook Markup and publishing it using the Visual Studio Code extension.
- Infinitely deep tables of contents:
All our software is open source and hosted at: github.com/ourbigbook/ourbigbook
Further documentation can be found at: docs.ourbigbook.com
Feel free to reach out to us for any help or suggestions: docs.ourbigbook.com/#contact