Chinese cuisine Updated 2025-07-16
One of the best in the world, but you need to know how to find real restaurants if you are not in China.
But worry not, Ciro Santilli has got you covered: github.com/cirosantilli/china-dictatorship/restaurants
- www.youtube.com/channel/UC54SLBnD5k5U3Q6N__UjbAw Chinese Cooking Demystified. Possibly the best English-language YouTube channel at explaining how to make key Chinese dishes and sauces.
Some stuff at: cirosantilli.com/china-dictatorship/#the-best-chinese-food, but one guesses that page is bound to die eventually.
Chinese dynasty Updated 2025-07-16
Chinese food Updated 2025-07-16
Chinese game Updated 2025-07-16
Chinese garden Updated 2025-07-16
Chinese history Updated 2025-07-16
Chinese (language) Updated 2025-07-16
Some remarks on the language at: cirosantilli.com/china-dictatorship/does-ciro-santilli-speak-chinese
Chinese numbered list Updated 2025-07-16
For some reason Chinese people (and their sphere of influence, such as Japan) are obsessed with numbered lists, e.g. stuff like the Four Beauties, the Four Treasures of the Study and so on!
Chinese regional cuisine Updated 2025-07-16
Chinese scholar Updated 2025-07-16
On Wikipedia we only find the term Scholar-official. But the concept of the ancient Chinese scholar is a bit wider, and even people who were not trying to be officials could strive to follow certain aspects of the scholar way of life.
Ciro Santilli's hardware DSD TECH USB to TTL Serial Converter CP2102 Updated 2025-07-26
Bought in the late 2010s.
Mentioned at: raspberrypi.stackexchange.com/questions/3867/ssh-to-rpi-without-a-network-connection/53823#53823
Official product page: www.dsdtech-global.com/2017/07/dsd-tech-usb-to-ttl-serial-converter.html
Sample Amazon link: www.amazon.co.uk/gp/product/B072K3Z3TL
Chordate Updated 2025-07-16
You read the name and think: hmm, neural cords!
But then you see that this is one of its members:
Yup. That's your cousin. And it's a much closer cousin than something like arthropods, which at least have heads, eyes and legs like you.
Convergent evolution is crazy!
Chromium (web browser) Updated 2025-07-16
How to reference a book in Wikipedia markup? Updated 2025-07-16
Their reference markup is incredibly overengineered, convoluted, and underdocumented. It is unbelievable!
Use the reference:
This is a fact.{{sfn|Schweber|1994|p=487}}
Define the reference:
===Sources===
{{refbegin|2|indent=yes}}
*{{Cite book|author-link=Silvan S. Schweber |title=QED and the Men Who Made It: Dyson, Feynman, Schwinger, and Tomonaga|last=Schweber|first=Silvan S.|location=Princeton|publisher=University Press|year=1994 |isbn=978-0-691-03327-3 |url=https://archive.org/details/qedmenwhomadeitd0000schw/page/492 |url-access=registration}}
{{refend}}
sfn is magic and matches the author last name and date from the Cite. It is documented at: en.wikipedia.org/wiki/Template:Sfn
Unfortunately, if there are multiple duplicate Cites inline in the article, it will complain that there are multiple definitions, and you have to first factor out the article by replacing all those existing Cites with sfn, keeping just one Cite at the bottom. What a pain...
You can also link to a specific page of the book, e.g. if the book is on Internet Archive Open Library, with:
{{sfn|Murray|1997|p=[https://archive.org/details/supermenstory00murr/page/86 86]}}
For multiple pages you should use pp= instead of p=. It does not seem to make much difference on the rendered output besides showing p. vs pp., but so be it:
{{sfn|Murray|1997|pp=[https://archive.org/details/supermenstory00murr/page/86 86-87]}}
CIA 2010 covert communication websites 2013 DNS census NS records Updated 2025-07-16
We can also cut down the data a lot with stackoverflow.com/questions/1915636/is-there-a-way-to-uniq-by-column/76605540#76605540 and TLD filtering:
awk -F, 'BEGIN{OFS=","} { if ($1 != last) { print $1, $3; last = $1; } }' ns.csv | grep -E '\.(com|net|info|org|biz),' > nsu.csv
This brings us down to a much more manageable 3.0 GB, 83 M rows.
Let's just scan it once real quick to start with, since likely nothing will come of this avenue:
grep -f <(awk -F, 'NR>1{print $2}' ../media/cia-2010-covert-communication-websites/hits.csv) nsu.csv | tee nsu-hits.csv
From the 267 hits we get the following nameserver counts, so yeah, most of those are likely going to be humongous just by looking at the names:
cat nsu-hits.csv | csvcut -c 2 | sort | awk -F. '{OFS="."; print $(NF-1), $(NF)}' | sort | uniq -c | sort -k1 -n
1 a2hosting.com
1 amerinoc.com
1 ayns.net
1 dailyrazor.com
1 domainingdepot.com
1 easydns.com
1 frienddns.ru
1 hostgator.com
1 kolmic.com
1 name-services.com
1 namecity.com
1 netnames.net
1 tonsmovies.net
1 webmailer.de
2 cashparking.com
55 worldnic.com
86 domaincontrol.com
The smallest one by far out of the total is frienddns.ru with only 487 hits; all the others are quite large, or are fake hits due to CSV grepping. Did some quick Wayback Machine CDX scanning there but no luck alas.
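For reference, the counting step of the pipeline above (extract the nameserver column, keep only the last two DNS labels, count occurrences) can be sketched in pure Python; the sample rows and the column index are assumptions for illustration, not taken from the real nsu-hits.csv:

```python
import collections
import csv
import io

def count_ns_suffixes(csv_text, ns_col=1):
    """Mirror csvcut -c 2 | awk -F. '{print $(NF-1), $(NF)}' | sort | uniq -c:
    count the last two DNS labels (e.g. domaincontrol.com) of the
    nameserver column of each row."""
    counts = collections.Counter()
    for row in csv.reader(io.StringIO(csv_text)):
        labels = row[ns_col].split(".")
        counts[".".join(labels[-2:])] += 1
    return counts

# Hypothetical sample rows in the assumed domain,nameserver layout:
sample = (
    "inews-today.com,ns1.frienddns.ru\n"
    "example.com,ns51.domaincontrol.com\n"
    "example.net,ns52.domaincontrol.com\n"
)
print(count_ns_suffixes(sample))
```

Note the different nameserver hosts ns51/ns52 collapse into a single domaincontrol.com bucket, which is exactly what the awk $(NF-1), $(NF) trick does.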
Let's check the smaller ones. Doubt anything will come out of this:
inews-today.com,2013-08-12T03:14:01,ns1.frienddns.ru
source-commodities.net,2012-12-13T20:58:28,ns1.namecity.com -> fake hit due to grep e-commodities.net
dailynewsandsports.com,2013-08-13T08:36:28,ns3.a2hosting.com
just-kidding-news.com,2012-02-04T07:40:50,jns3.dailyrazor.com
fightwithoutrules.com,2012-11-09T01:17:40,sk.s2.ns1.ns92.kolmic.com
fightwithoutrules.com,2013-07-01T22:46:23,ns1625.ztomy.com
half-court.net,2012-09-10T09:49:15,sk.s2.ns1.ns92.kolmic.com
half-court.net,2013-07-07T00:31:12,ns1621.ztomy.com
CIA 2010 covert communication websites atomworldnews.com Updated 2025-07-16
whoisxmlapi WHOIS record on April 17, 2011
CIA 2010 covert communication websites Common Crawl Updated 2025-07-16
So far, no new domains have been found with Common Crawl, nor have any existing known domains been found to be present in Common Crawl. Our working theory is that Common Crawl never reached the domains. So how did Alexa find the domains?
Let's try and do something with Common Crawl.
Unfortunately there's no IP data apparently: github.com/commoncrawl/cc-index-table/issues/30, so let's focus on the URLs.
Using their Common Crawl Athena method: commoncrawl.org/2018/03/index-to-warc-files-and-urls-in-columnar-format/
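Since the domain IN lists in the Athena queries used in this section get long, the SQL can be generated programmatically from the hits list instead of being hand-edited; a minimal sketch, with a hypothetical helper name and hardcoded sample domains:

```python
def build_ccindex_query(domains, crawls, subset="warc"):
    """Build an Athena query against the ccindex table, restricted by
    crawl and subset to limit the amount of data scanned."""
    domain_list = ",\n    ".join(f"'{d}'" for d in sorted(domains))
    crawl_list = ", ".join(f"'{c}'" for c in crawls)
    return (
        'select * from "ccindex"."ccindex" WHERE\n'
        f"  crawl IN ({crawl_list}) AND\n"
        f"  subset = '{subset}' AND\n"
        "  url_host_registered_domain IN (\n"
        f"    {domain_list}\n"
        "  )"
    )

print(build_ccindex_query(
    ["altworldnews.com", "activegaminginfo.com"],
    ["CC-MAIN-2013-20", "CC-MAIN-2013-48"],
))
```

The resulting string can then be pasted into the Athena console or submitted through its API.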
Sample first output line:
# 2
url_surtkey org,whwheelers)/robots.txt
url https://whwheelers.org/robots.txt
url_host_name whwheelers.org
url_host_tld org
url_host_2nd_last_part whwheelers
url_host_3rd_last_part
url_host_4th_last_part
url_host_5th_last_part
url_host_registry_suffix org
url_host_registered_domain whwheelers.org
url_host_private_suffix org
url_host_private_domain whwheelers.org
url_host_name_reversed
url_protocol https
url_port
url_path /robots.txt
url_query
fetch_time 2021-06-22 16:36:50.000
fetch_status 301
fetch_redirect https://www.whwheelers.org/robots.txt
content_digest 3I42H3S6NNFQ2MSVX7XZKYAYSCX5QBYJ
content_mime_type text/html
content_mime_detected text/html
content_charset
content_languages
content_truncated
warc_filename crawl-data/CC-MAIN-2021-25/segments/1623488519183.85/robotstxt/CC-MAIN-20210622155328-20210622185328-00312.warc.gz
warc_record_offset 1854030
warc_record_length 639
warc_segment 1623488519183.85
crawl CC-MAIN-2021-25
subset robotstxt
So url_host_3rd_last_part might be a winner for CGI comms fingerprinting!
Naive one for one index:
select * from "ccindex"."ccindex" where url_host_registered_domain = 'conquermstoday.com' limit 100;
has no results... data scanned: 5.73 GB.
Let's see if they have any of the domain hits. Let's also restrict by date to try and reduce the data scanned:
select * from "ccindex"."ccindex" where
  fetch_time < TIMESTAMP '2014-01-01 00:00:00' AND
  url_host_registered_domain IN (
    'activegaminginfo.com',
    'altworldnews.com',
    ...
    'topbillingsite.com',
    'worldwildlifeadventure.com'
  )
Humm, data scanned: 60.59 GB and no hits... weird.
Sanity check:
select * from "ccindex"."ccindex" WHERE
  crawl = 'CC-MAIN-2013-20' AND
  subset = 'warc' AND
  url_host_registered_domain IN (
    'google.com',
    'amazon.com'
  )
has a bunch of hits of course. Data scanned: 212.88 MB. WHERE crawl and subset are a must! Should have read the article first.
Let's widen a bit more:
select * from "ccindex"."ccindex" WHERE
  crawl IN (
    'CC-MAIN-2013-20',
    'CC-MAIN-2013-48',
    'CC-MAIN-2014-10'
  ) AND
  subset = 'warc' AND
  url_host_registered_domain IN (
    'activegaminginfo.com',
    'altworldnews.com',
    ...
    'worldnewsandent.com',
    'worldwildlifeadventure.com'
  )
Still nothing found... they don't seem to have any of the URLs of interest?
GNOME Project Updated 2025-07-16
GNU General Public License Updated 2025-07-16
CIA 2010 covert communication websites feedsdemexicoyelmundo.com Updated 2025-07-16
whoisxmlapi WHOIS record on April 28, 2011


