Approximates an original function by a sum of sines and cosines. If the function is "well behaved enough", the approximation holds to arbitrary precision.
Fourier's original motivation, and a key application, is solving partial differential equations with the Fourier series.
Can only be used to approximate periodic functions (obviously, from its definition!). The Fourier transform however overcomes that restriction:
The Fourier series behaves really nicely in $L^2$, where it always exists and converges pointwise to the function almost everywhere: Carleson's theorem.
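For reference, a minimal statement of the series itself, here for a function of period $2\pi$ (other periods just rescale the formulas):

$$ f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos(nx) + b_n \sin(nx) \right) $$

with coefficients

$$ a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos(nx) \, dx, \qquad b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin(nx) \, dx $$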
bitcoin.org registration: 2008-08-18
2008-08-22: first private email contact with Wei Dai, sent from address satoshi@anonymousspeech.com. Reproduced at www.gwern.net/docs/bitcoin/2008-nakamoto on gwern.net. That email provider shut down entirely on 2021-09-30 as per archive.ph/wip/RRNKx; its homepage now just contains useless Bitcoin stuff.

First public Bitcoin whitepaper announcement: 2008-10-31 www.metzdowd.com/pipermail/cryptography/2008-October/014810.html linking to www.bitcoin.org/bitcoin.pdf, email sent from satoshi@vistomail.com. Claimed one and a half years of development time. That provider apparently closed in 2014: www.reddit.com/r/Bitcoin/comments/3h80mi/vistomailcom_closed_and_domain_changed_owner_in/, as of 2021 just reads:
Replies in November: www.metzdowd.com/pipermail/cryptography/2008-November/thread.html#14863. Writing under satoshi@anonymousspeech.com, Satoshi claims that the source code was being shared privately by request at that point.
First open source release: 9 January 2009. Announcement: www.metzdowd.com/pipermail/cryptography/2009-January/014994.html: "Windows only for now. Open source C++ code is included". Arghhhhhh, how can those libertarians use Microsoft Windows??? It had a GUI already.
2011-04-23: Satoshi sent his last email ever, to Martti Malmi. www.nytimes.com/2015/05/17/business/decoding-the-enigma-of-satoshi-nakamoto-and-the-birth-of-bitcoin.html mentions:
How Satoshi hid his mining IP address:
Hal Finney:
- Jan 11, 2009 twitter.com/halfin/status/1110302988 "Running Bitcoin"
We've come across a few shallow and stylistically similar websites on suspicious ranges with this pattern.
No JS/JAR/SWF comms, but rather a subdomain, and an HTTPS page with .cgi extension that leads to a login page. Some names seen for this subdomain:
The question is, is this part of some legitimate tooling that created such patterns? And if so which? Or are they actual hits with a new comms mechanism not previously seen?
The fact that:

- hits of this type are so dense in the suspicious ranges
- they are so stylistically similar to one another
- Citizen Lab specifically mentioned a "CGI" comms method

suggests to Ciro that they are an actual hit.
In particular, the secure and ssl ones are overused, and together with some heuristics they allowed us to find our first two non-Reuters ranges! See Section "secure subdomain search on 2013 DNS Census".

Some currently known URLs:
- backstage.musical-fortune.net/cgi-bin/backstage.cgi
- clients.smart-travel-consultant.com/cgi-bin/clients.cgi
- members.it-proonline.com/cgi-bin/members.cgi
- members.metanewsdaily.com/cgi-bin/ABC.cgi
- miembros.todosperuahora.com/cgi-bin/business.cgi
- secure.altworldnews.com/cgi-bin/desk.cgi
- secure.driversinternationalgolf.com/cgi-bin/drivers.cgi
- secure.freshtechonline.com/cgi-bin/tech.cgi
- secure.globalnewsbulletin.com/cgi-bin/index.cgi
- secure.negativeaperture.com/cgi-bin/canon.cgi
- secure.riskandrewardnews.com/cgi-bin/worldwide.cgi
- secure.theworld-news.net/cgi-bin/news.cgi
- secure.topbillingsite.com/cgi-bin/main.cgi
- secure.worldnewsandent.com/cgi-bin/news.cgi
- ssl.beyondnetworknews.com/cgi-bin/local.cgi
- ssl.newtechfrontier.com/cgi-bin/tech.cgi
- www.businessexchangetoday.com/cgi-bin/business.cgi
- heal.conquermstoday.com (path unknown)
If we could do a crawl search for:

secure.*com/cgi-bin/*.cgi

that might be a good enough fingerprint, maybe even *.*com/cgi-bin/*.cgi.
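As an illustration only, a hypothetical check of that fingerprint against candidate URLs; the glob-to-regex translation here is our own assumption, not from any existing tooling:

// Glob secure.*com/cgi-bin/*.cgi hand-translated to a regex.
const fingerprint = /^secure\.[^\/]+com\/cgi-bin\/[^\/]+\.cgi$/;
[
  'secure.altworldnews.com/cgi-bin/desk.cgi', // matches
  'ssl.beyondnetworknews.com/cgi-bin/local.cgi', // ssl. subdomain: no match
].forEach(url => console.log(url, fingerprint.test(url)));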
Edit: it is not perfect, but we kind of did it: Section "secure subdomain search on 2013 DNS Census".

In this section we will use the file nodejs/bench_mem.js. Tests are run on Node.js v16.14.2 from NVM, Ubuntu 21.10, on a Lenovo ThinkPad P51 (2017), which has 32 GB of RAM.
Related answer: stackoverflow.com/questions/12023359/what-do-the-return-values-of-node-js-process-memoryusage-stand-for/72043884#72043884
First, using topp from stackoverflow.com/questions/1221555/retrieve-cpu-usage-and-memory-usage-of-a-single-process-on-linux/40576129#40576129, let's observe the memory usage of some baseline cases.

For a Node.js infinite loop, nodejs/infinite_loop.js:

topp infinite_loop.js

This gives approximately:
- RSS: 20 MB
- VSZ: 230 MB
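The file nodejs/infinite_loop.js is not reproduced in this section; presumably it is little more than:

// Hypothetical content of nodejs/infinite_loop.js: just spin forever.
while (true);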
Adding a single hello world to it as in nodejs/infinite_hello.js and running:

topp infinite_hello.js

leads to:

- RSS: 26 MB
- VSZ: 580 MB

We understand that Node.js preallocates VSZ wildly. No big deal, but it does mean that VSZ is a useless measure for Node.js.
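Similarly, nodejs/infinite_hello.js is presumably along the lines of:

// Hypothetical content of nodejs/infinite_hello.js:
// a single print, then spin forever.
console.log('hello world');
while (true);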
Forcing garbage collection as in nodejs/infinite_hello_gc.js brings it down to 20 MB however:
topp node --expose-gc infinite_hello_gc.js
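The GC-forcing variant is presumably something like:

// Hypothetical content of nodejs/infinite_hello_gc.js: same as
// infinite_hello.js, but keep explicitly triggering garbage collection.
// Requires: node --expose-gc infinite_hello_gc.js
console.log('hello world');
setInterval(() => {
  global.gc();
}, 1000);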
Finally let's see a baseline for process.memoryUsage, nodejs/infinite_memoryusage.js:

node --expose-gc infinite_memoryusage.js

This gives initially:

{
  rss: 23851008,
  heapTotal: 6987776,
  heapUsed: 3674696,
  external: 285296,
  arrayBuffers: 10422
}

but after a few seconds randomly jumps to:

{
  rss: 26005504,
  heapTotal: 9084928,
  heapUsed: 3761240,
  external: 285296,
  arrayBuffers: 10422
}

so we understand that some memory usage happens without any explicit action from our code.
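The file nodejs/infinite_memoryusage.js itself is not reproduced here; a hypothetical sketch of what it might contain:

// Hypothetical sketch of nodejs/infinite_memoryusage.js: print
// process.memoryUsage() every second, forcing GC first.
// Requires: node --expose-gc infinite_memoryusage.js
setInterval(() => {
  global.gc();
  console.log(process.memoryUsage());
}, 1000);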
First a baseline case with an array of length 1:

node --expose-gc bench_mem.js n 1

This gives the same results as:

node --expose-gc infinite_memoryusage.js

The same result is obtained by doing a = undefined, as in:

node --expose-gc bench_mem.js dealloc
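The rest of this section exercises different modes of nodejs/bench_mem.js. The real file lives in the repository; what follows is only a hypothetical sketch of its general shape, reconstructed from the commands used in this section, with argument parsing and variable names as assumptions:

// Hypothetical sketch of nodejs/bench_mem.js, reconstructed from the modes
// exercised in this section (n, dealloc, array-buffer, obj, class, arr).
// Usage: node --expose-gc bench_mem.js <mode> [n N]
const args = process.argv.slice(2);
const mode = args[0];
// The last argument is the element count when given, e.g.:
//   bench_mem.js n 1
//   bench_mem.js array-buffer n 1000000
const nArg = parseInt(args[args.length - 1], 10);
const n = Number.isNaN(nArg) ? 1000000 : nArg;

class Pair {
  constructor(a, b) {
    this.a = a;
    this.b = b;
  }
}

let a;
switch (mode) {
  case 'n': // plain Array of N ints
    a = new Array(n).fill(1);
    break;
  case 'dealloc': // allocate, then drop the only reference
    a = new Array(n).fill(1);
    a = undefined;
    break;
  case 'array-buffer': // Int32Array: 4 bytes per element
    a = new Int32Array(n);
    break;
  case 'obj': // N object literals { a: 1, b: -1 }
    a = [];
    for (let i = 0; i < n; i++) a.push({ a: 1, b: -1 });
    break;
  case 'class': // N class instances holding the same two ints
    a = [];
    for (let i = 0; i < n; i++) a.push(new Pair(1, -1));
    break;
  case 'arr': // N length-2 Arrays
    a = [];
    for (let i = 0; i < n; i++) a.push([1, -1]);
    break;
}

// Report memory usage forever so it can also be watched with topp.
setInterval(() => {
  global.gc();
  // Mentioning `a` keeps the allocation reachable.
  console.log(process.memoryUsage(), a === undefined ? 'dealloc' : a.length);
}, 1000);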
If we use Int32Array typed array buffers instead of a simple Array:

node --expose-gc bench_mem.js array-buffer n N

we see that the memory is now, unsurprisingly, accounted for under arrayBuffers, e.g. for N = 1 million:

{
  rss: 31776768,
  heapTotal: 6463488,
  heapUsed: 3674520,
  external: 4285296,
  arrayBuffers: 4010422
}

Results for different N:

| N     | arrayBuffers | rss    | rss per elem (bytes) |
| 1 M   | 4 MB         | 31 MB  | 5                    |
| 10 M  | 40 MB        | 67 MB  | 4.6                  |
| 100 M | 400 MB       | 427 MB | 4                    |

We see therefore that typed arrays are much closer to what they advertise (4 bytes per element), even for smaller element counts, as expected.
Now let's try one million objects of type { a: 1, b: -1 }:

node --expose-gc bench_mem.js obj

which gives:

{
  rss: 138969088,
  heapTotal: 105246720,
  heapUsed: 70103896,
  external: 285296,
  arrayBuffers: 10422
}

Disaster! Memory usage is up to 70 MB! Why?? We were expecting only about 24 MB: 4 MB baseline + 2 × 10 MB for each million ints?!
And now an equivalent version using class:

node --expose-gc bench_mem.js class

gives the same result.
Let's try Array:

node --expose-gc bench_mem.js arr

{
  rss: 164597760,
  heapTotal: 129363968,
  heapUsed: 78117008,
  external: 285296,
  arrayBuffers: 10422
}

which is even worse, at 78 MB!! OMG why.
To break the meta means to find a new strategy that offers a significant advantage over the existing meta.
Due to his self-perceived creative personality, Ciro Santilli is very attracted to meta breaks.
How One Man Changed the High Jump Forever by Olympics (2018)
Source. Dick Fosbury created and implemented the Fosbury Flop jump style in 1968.

Capacitors can only absorb electrons up to a certain point, but then the pushback becomes too strong, and current stops.
Therefore, they cannot conduct direct current long term.
For alternating current however, things are different, because in alternating current, electrons are just jiggling back and forth a little bit around a center point. So you can send alternating current power across a capacitor.
The key equation that relates voltage to electric current in the capacitor is:

$$ I(t) = C \frac{dV(t)}{dt} $$

So if a Heaviside step function voltage is applied, what happens is:

- the capacitor fills up instantly with an infinite current
- the current then stops instantly

More realistically, one may consider the behavior of the series RC circuit to see what happens without infinities when a capacitor is involved, as in the step response of the series RC circuit.
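To make that concrete, the standard step response of the series RC circuit for an input step of amplitude $V_0$ (stated here without derivation) is exponential charging with time constant $RC$:

$$ I(t) = \frac{V_0}{R} e^{-t/(RC)}, \qquad V_C(t) = V_0 \left( 1 - e^{-t/(RC)} \right) $$

so the infinite current spike of the ideal capacitor becomes a finite spike of height $V_0/R$ that decays smoothly.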
Capacitors Explained by The Engineering Mindset (2019). Source.

These people have good intentions.
The problem is that they don't manage to go critical because there's no way for students to create content; everything is manually curated.
You can't even publicly comment on the textbooks. Or at least Ciro Santilli hasn't found a way to do so. There is just a "submit suggestion" box.
This massive lost opportunity is even shown graphically at: cnx.org/about (archive), where there is a clear separation between:

- "authors", who can create content
- "students", who can consume content

Maybe this wasn't the case in their legacy website, legacy.cnx.org/content?legacy=true, but not sure, and they are retiring that now.
By Rice University.
TODO what are the books written in?
- github.com/openstax/openstax-cms Uses Wagtail CMS. So presumably they just use Wagtail's WYSIWYG editor.
- github.com/openstax/os-webview
Our WIP script: wikipedia/import-categories.sh.
Related:
- opendata.stackexchange.com/questions/1533/download-wikipedia-articles-from-a-specific-category
- webapps.stackexchange.com/questions/16359/is-there-a-way-to-download-a-list-of-all-wikipedia-categories/172480#172480
- stackoverflow.com/questions/40119322/how-to-download-all-pages-inside-a-category-in-wikipedia
- category tree on Stack Overflow
- stackoverflow.com/questions/17432254/wikipedia-category-hierarchy-from-dumps/77313490#77313490 The canonical question, but no good answers.
- stackoverflow.com/questions/12227134/how-to-fetch-category-tree-of-wiki
- stackoverflow.com/questions/21782410/finding-subcategories-of-a-wikipedia-category-using-category-and-categorylinks-t. Actually explains it: stackoverflow.com/questions/21782410/finding-subcategories-of-a-wikipedia-category-using-category-and-categorylinks-t/21798259#21798259
- stackoverflow.com/questions/27279649/how-to-build-wikipedia-category-hierarchy
- mdkzaman.com/knowledge-graph-from-wikipedia-category-hierarchy/
Consider the pages en.wikipedia.org/wiki/Computer_data_storage and en.wikipedia.org/wiki/Computer_storage_devices (a redirect to it), and the categories en.wikipedia.org/wiki/Category:Computer_data_storage and en.wikipedia.org/wiki/Category:Computer_storage_devices.
Let's observe them in MySQL:

mysql enwiki -e "select page_id, page_namespace, page_title, page_is_redirect from page where page_namespace in (0, 14) and page_title in ('Computer_storage_devices', 'Computer_data_storage')"

which outputs:
+----------+----------------+--------------------------+------------------+
| page_id | page_namespace | page_title | page_is_redirect |
+----------+----------------+--------------------------+------------------+
| 5300 | 0 | Computer_data_storage | 0 |
| 42371130 | 0 | Computer_storage_devices | 1 |
| 711721 | 14 | Computer_data_storage | 0 |
| 895945 | 14 | Computer_storage_devices | 0 |
+----------+----------------+--------------------------+------------------+
mysql enwiki -e "select cl_from, cl_to from categorylinks where cl_from in (5300, 711721, 895945, 42371130)"
+----------+-----------------------------------------------------------------------+
| cl_from | cl_to |
+----------+-----------------------------------------------------------------------+
| 5300 | All_articles_containing_potentially_dated_statements |
| 5300 | Articles_containing_potentially_dated_statements_from_2009 |
| 5300 | Articles_containing_potentially_dated_statements_from_2011 |
| 5300 | Articles_with_GND_identifiers |
| 5300 | Articles_with_NKC_identifiers |
| 5300 | Articles_with_short_description |
| 5300 | Computer_architecture |
| 5300 | Computer_data_storage |
| 5300 | Short_description_matches_Wikidata |
| 5300 | Use_dmy_dates_from_June_2020 |
| 5300 | Wikipedia_articles_incorporating_text_from_the_Federal_Standard_1037C |
| 711721 | Computer_architecture |
| 711721 | Computer_data |
| 711721 | Computer_hardware_by_type |
| 711721 | Data_storage |
| 895945 | Computer_data_storage |
| 895945 | Computer_peripherals |
| 895945 | Recording_devices |
| 42371130 | Redirects_from_alternative_names |
+----------+-----------------------------------------------------------------------+
So we see that categorylinks encodes parent categories: in each row, cl_to is a parent category of the page or category whose ID is cl_from:

- parent categories of categories:
- en.wikipedia.org/wiki/Category:Computer_data_storage, which has ID 711721, has parent categories "Computer hardware by type", "Computer data", "Data storage" and "Computer architecture". This matches exactly on the database. These are all encoded in the source code of the page:

{{DEFAULTSORT:Storage}}
[[Category:Computer hardware by type]]
[[Category:Computer data|Storage]]
[[Category:Data storage|Computer]]
[[Category:Computer architecture]]
- en.wikipedia.org/wiki/Category:Computer_storage_devices has parent categories: "Computer data storage", "Recording devices", "Computer peripherals". This matches exactly on the database.
- parent categories of pages:
- en.wikipedia.org/wiki/Computer_storage_devices, which is a redirect, gets the magic category "Redirects_from_alternative_names", a humongous placeholder with many thousands of pages: en.wikipedia.org/wiki/Category:Redirects_from_alternative_names
- en.wikipedia.org/wiki/Computer_data_storage shows only two categories on the web UI: "Computer data storage" and "Computer architecture". Both of these are present in the database and at the end of the source code:

{{DEFAULTSORT:Computer Data Storage}}
[[Category:Computer data storage| ]]
[[Category:Computer architecture]]

The others appear to be more magic. Two of them we can guess from the templates:

{{short description|Storage of digital data readable by computers}}
{{Use dmy dates|date=June 2020}}

which are likely responsible for Articles_with_short_description and Use_dmy_dates_from_June_2020, but the rest is more magic and not necessarily present in-source.
So to find all articles and categories under a given category title, say en.wikipedia.org/wiki/Category:Mathematics, we can run:
mariadb enwiki -e "select cl_from, cl_to, page_namespace, page_title from categorylinks inner join page on page_namespace in (0, 14) and cl_from = page_id and cl_to = 'Mathematics'"
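Note that this query only returns the direct children of the category; to get the whole tree one has to iterate, which is what our WIP script wikipedia/import-categories.sh is for. Purely as an illustration of the idea, a hypothetical Node.js breadth-first walker over those same tables, assuming the mysql2 npm package, could look like:

// Hypothetical BFS over the category tree, starting from Mathematics.
// Assumes the enwiki page and categorylinks tables are loaded locally.
// Beware: the category graph has cycles, hence the visited set. An article
// may still be printed more than once if it is in several categories.
const mysql = require('mysql2/promise');

async function main() {
  const conn = await mysql.createConnection({ database: 'enwiki' /* plus credentials */ });
  const visited = new Set(['Mathematics']);
  let frontier = ['Mathematics'];
  while (frontier.length > 0) {
    const [rows] = await conn.query(
      `select page_namespace, page_title
         from categorylinks inner join page
         on page_namespace in (0, 14) and cl_from = page_id and cl_to in (?)`,
      [frontier]
    );
    frontier = [];
    for (const row of rows) {
      const title = row.page_title.toString();
      if (row.page_namespace === 0) {
        console.log(title); // an article under the tree
      } else if (!visited.has(title)) {
        visited.add(title); // a subcategory: descend into it
        frontier.push(title);
      }
    }
  }
  await conn.end();
}

main();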
Per-table dumps, created with mysqldump, are listed at: dumps.wikimedia.org/. Most notably, for the English Wikipedia: dumps.wikimedia.org/enwiki/latest/

A few of the files are not actual tables but derived data, notably dumps.wikimedia.org/enwiki/latest/enwiki-latest-all-titles-in-ns0.gz from Download titles of all Wikipedia articles.

The tables are "documented" under: www.mediawiki.org/wiki/Manual:Database_layout, e.g. the central "page" table: www.mediawiki.org/wiki/Manual:Page_table. But in many cases it is impossible to deduce what the fields are from those docs.
Some examples by Ciro Santilli follow.
Of the tutorial-subjectivity type:
- This edit perfectly summarizes how Ciro feels about Wikipedia (no particular hate towards that user, he was a teacher at the prestigious Pierre and Marie Curie University and actually has a wiki page about him):

rm a cryptic diagram (not understandable by a professional mathematician, without further explanations)

which removed the only diagram that was actually understandable to non-mathematicians, which Ciro Santilli had created, and which received many upvotes at: math.stackexchange.com/questions/776039/intuition-behind-normal-subgroups/3732426#3732426. The removal does not generate any notifications to you unless you follow the page, which would lead to infinite noise, and it is extremely difficult to find out how to contact the other person. The removal justification is even somewhat ad hominem: how does he know Ciro Santilli is not also a professional mathematician? :-) Maybe it is obvious because Ciro explains in a way that is understandable. Also the removal makes no effort to contact the original author. Of course, this is caused by the fact that there must also have been a bunch of useless edits not done by Ciro, and there is no reputation system to see if you should ignore a person or not immediately, so the removal author has no patience anymore. This is what makes it impossible to contribute to Wikipedia: your stuff gets deleted at any time, and you don't know how to appeal it. Ciro is going to regret having written this rant after Daniel replies and shows the diagram is crap. But that would be better than not getting a reply and not learning that the diagram is crap.
- en.wikipedia.org/w/index.php?title=Finite_field&type=revision&diff=1044934168&oldid=1044905041 on finite fields, with edit comment "Obviously: X ≡ α". Discussion at: en.wikipedia.org/wiki/Talk:Finite_field#Concrete_simple_worked_out_example. Some people simply don't know how to explain things to beginners, or don't think Wikipedia is where it should be done. One simply can't waste time fighting off those people; writing good tutorials is hard enough in itself without that fight.
- en.wikipedia.org/w/index.php?title=Discrete_Fourier_transform&diff=1193622235&oldid=1193529573 by user Bob K. removed Ciro Santilli's awesome simple image of the discrete Fourier transform, as seen at en.wikipedia.org/w/index.php?title=Discrete_Fourier_transform&oldid=1176616763, with message:

remove non-helpful image

Thank you so much!! Bob K introduces himself as:

Hello. I am a retired electrical engineer, living near Washington, DC. Most of my contributions are in the area of DSP, where I have about 40 years of experience in applications on many different processors and architectures.

Maybe it is a common thread that these old "experts" keep removing anything that is actually intelligible by beginners? Section "There is value in tutorials written by beginners". Also ranted at: x.com/cirosantilli/status/1808862417566290252. Figure 1. Source at: numpy/fft_plot.py.
- when Ciro Santilli created Scott Hassan's page, he originally included mentions of his saucy divorce: en.wikipedia.org/w/index.php?title=Scott_Hassan&oldid=1091706391. These were reverted by Scott's puppets three times, and Ciro and two other editors fought back. Finally, Ciro understood that Hassan's puppets were likely right about the removal, because you can't talk about private matters of someone who is low profile, even if it is published in well known and reliable publications like the bloody New York Times. In this case, it is clear that most people wanted to see this information summarized on Wikipedia, since others fought back against Hassan's puppet. This is therefore a failure of Wikipedia to show what the people actually want to read about. This case is similar to the PsiQuantum one: something is extremely well known in an important niche, and many people want to read about it. But because the average person does not know about this important subject, you are limited in what you can write about it, thus hurting the people who want to know about it.
Notability constraints, which are way too strict. There are even Wikis that were created to remove notability constraints: Wiki without notability requirements. For example:
- even information about important companies can be disputed. E.g. once Ciro Santilli tried to create a page for PsiQuantum, a startup with $650m in funding, and there was a deletion proposal because it did not contain verifiable sources not linked directly to information provided by the company itself: en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/PsiQuantum. Although this argument is correct, it is also true about 90% of everything that is on Wikipedia about any company. Where else can you get any information about a B2B company? Their clients are not going to say anything. Lawsuits and scandals are kind of the only possible source... In that case, the page was deleted with 2 votes against vs 3 votes for deletion. This

"should we delete this extremely likely useful/correct content or not according to this extremely complex system of guidelines"

is very similar to Stack Exchange's own Stack Overflow content deletion issues. Ain't Nobody Got Time For That. "Ain't Nobody Got Time for That" actually has a Wiki page: en.wikipedia.org/wiki/Ain%27t_Nobody_Got_Time_for_That. That's notable. Unlike a $600M+ company of course.
In December 2023 the page was re-created, and seemed to stick: en.wikipedia.org/wiki/Talk:PsiQuantum#Secondary_sources. It's just random going back and forth. Author Ctjk has an interesting background:

I am a legal official at a major government antitrust agency. The only plausible connection is we regulate tech firms
These are the reasons why Ciro basically only contributes images to Wikipedia: images are either all in or all out, and you can determine which one of them it is. And this allows images to be more attributable, so people can actually see that it was Ciro that created a given amazing image, thus overcoming Wikipedia's lack of a reputation system a little bit as well.
Wikipedia is perfect for things like biographies, geography, or history, which have a much more defined and objective expository order. But when it comes to "tutorials of how to actually do stuff", which is what mathematics and physics are basically about, Wikipedia has a very hard time going beyond dry definitions which are only useful for people who already half know the stuff. But to learn from zero, newbies need tutorials with intuition and examples.
Bibliography:
- gwern.net/inclusionism from gwern.net:
Iron Law of Bureaucracy: the downwards deletionism spiral discourages contribution and is how Wikipedia will die.
- Quote "Golden wiki vs Deletionism on Wikipedia"
Pinned article: ourbigbook/introduction-to-the-ourbigbook-project
Welcome to the OurBigBook Project! Our goal is to create the perfect publishing platform for STEM subjects, and get university-level students to write the best free STEM tutorials ever.
Everyone is welcome to create an account and play with the site: ourbigbook.com/go/register. We believe that students themselves can write amazing tutorials, but teachers are welcome too. You can write about anything you want, it doesn't have to be STEM or even educational. Silly test content is very welcome and you won't be penalized in any way. Just keep it legal!
Intro to OurBigBook. Source.

We have two killer features:
- topics: topics group articles by different users with the same title, e.g. here is the topic for the "Fundamental Theorem of Calculus": ourbigbook.com/go/topic/fundamental-theorem-of-calculus. Articles of different users are sorted by upvote within each article page. This feature is a bit like:
- a Wikipedia where each user can have their own version of each article
- a Q&A website like Stack Overflow, where multiple people can give their views on a given topic, and the best ones are sorted by upvote. Except you don't need to wait for someone to ask first, and any topic goes, no matter how narrow or broad
This feature makes it possible for readers to find better explanations of any topic created by other writers. And it allows writers to create an explanation in a place that readers might actually find it.
Figure 1. Screenshot of the "Derivative" topic page. View it live at: ourbigbook.com/go/topic/derivative
Video 2. OurBigBook Web topics demo. Source.
- local editing: you can store all your personal knowledge base content locally in a plaintext markup format that can be edited locally and published either:
  - to OurBigBook.com to get awesome multi-user features like topics and likes
  - as HTML files to a static website, which you can host yourself for free on many external providers like GitHub Pages, and remain in full control
  This way you can be sure that even if OurBigBook.com were to go down one day (which we have no plans to do as it is quite cheap to host!), your content will still be perfectly readable as a static site.
Figure 2. You can publish local OurBigBook lightweight markup files to either OurBigBook.com or as a static website.
Figure 3. Visual Studio Code extension installation.
Figure 5. You can also edit articles on the Web editor without installing anything locally.
Video 3. Edit locally and publish demo. Source. This shows editing OurBigBook Markup and publishing it using the Visual Studio Code extension.
- Infinitely deep tables of contents:
All our software is open source and hosted at: github.com/ourbigbook/ourbigbook
Further documentation can be found at: docs.ourbigbook.com
Feel free to reach out to us for any help or suggestions: docs.ourbigbook.com/#contact