Chinese cuisine
One of the best cuisines in the world, but you need to know how to find authentic restaurants if you are not in China.
Some pointers at: cirosantilli.com/china-dictatorship/#the-best-chinese-food but that link is bound to die eventually, one guesses.
Chinese food
Chinese game
Chinese garden
Figure 1. The Humble Administrator's Garden in Suzhou. Source.
Figure 2. Round door at the Lingering Garden in Suzhou. Source.
Chinese numbered list
For some reason Chinese people (and their sphere of influence, such as Japan) are obsessed with numbered lists, e.g. stuff like the Four Beauties, the Four Treasures of the Study and so on!
The concept does exist in the West (e.g. the Seven Wonders of the World), but clearly the Sinosphere is much more obsessed with it!
Chinese scholar
On Wikipedia we only find the term Scholar-official. But the idea of the ancient Chinese scholar is a bit wider as a concept, and even people who were not trying to be officials could strive to follow certain aspects of the scholar way of life.
Chordate
Chordate is a sad clade.
You read the name and think: hmm, neural cords!
But then you see that this is one of its members:
Yup. That's your cousin. And it's a much closer cousin than something like the arthropods, which at least have heads, eyes and legs like you.
Chromium (web browser)
Google is trying to kill it as of 2021: www.omgubuntu.co.uk/2021/01/chromium-sync-google-api-removed The lack of sync is a major, major blow. So selfish. Google makes billions, and it won't give away a little bit of settings storage...
Wikipedia's reference markup is incredibly overengineered, convoluted, and underdocumented; it is unbelievable!
Use the reference:
This is a fact.{{sfn|Schweber|1994|p=487}}
Define the reference:
===Sources===
{{refbegin|2|indent=yes}}
*{{Cite book|author-link=Silvan S. Schweber |title=QED and the Men Who Made It: Dyson, Feynman, Schwinger, and Tomonaga|last=Schweber|first=Silvan S.|location=Princeton|publisher=University Press|year=1994 |isbn=978-0-691-03327-3 |url=https://archive.org/details/qedmenwhomadeitd0000schw/page/492 |url-access=registration}}
{{refend}}
sfn is magic and matches the author last name and date from the Cite; it is documented at: en.wikipedia.org/wiki/Template:Sfn
Unfortunately, if there are multiple duplicate Cites inline in the article, it will complain that there are multiple definitions, and you first have to factor out the article by replacing all those existing inline Cites with sfn, keeping just one Cite at the bottom. What a pain...
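For example (a hypothetical minimal case, not taken from a real article), duplicated inline citations like:
Fact one.<ref>{{Cite book|last=Schweber|first=Silvan S.|title=QED and the Men Who Made It|year=1994}}</ref>
Fact two.<ref>{{Cite book|last=Schweber|first=Silvan S.|title=QED and the Men Who Made It|year=1994}}</ref>
get factored out into:
Fact one.{{sfn|Schweber|1994|p=100}}
Fact two.{{sfn|1994|Schweber|p=200}}
(the page numbers here are made up), with a single Cite book kept under the Sources section as shown above.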
You can also link to a specific page of the book, e.g. if the book is on Internet Archive Open Library, with:
{{sfn|Murray|1997|p=[https://archive.org/details/supermenstory00murr/page/86 86]}}
For multiple pages you should use pp= instead of p=. It does not seem to make much difference in the rendered output besides showing p. vs pp., but so be it:
{{sfn|Murray|1997|pp=[https://archive.org/details/supermenstory00murr/page/86 86-87]}}
ns.csv is 57 GB. This file is too massive; working with it directly is a pain.
We can also cut down the data a lot with stackoverflow.com/questions/1915636/is-there-a-way-to-uniq-by-column/76605540#76605540 and tld filtering:
awk -F, 'BEGIN{OFS=","} { if ($1 != last) { print $1, $3; last = $1; } }' ns.csv | grep -E '\.(com|net|info|org|biz),' > nsu.csv
This brings us down to a much more manageable 3.0 GB, 83 M rows.
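In case the dense one-liner is opaque, here is an equivalent commented version (same logic; note that it relies on ns.csv being sorted by domain, so that rows of the same domain are adjacent):
awk -F, 'BEGIN { OFS = "," }
{
  # Keep only the first nameserver row seen for each domain
  # (works because equal domains are adjacent in the sorted file).
  if ($1 != last) {
    print $1, $3  # drop the timestamp column $2
    last = $1
  }
}' ns.csv |
  # and keep only domains under a few generic TLDs
  grep -E '\.(com|net|info|org|biz),' > nsu.csv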
Let's just scan it once real quick to start with, since likely nothing will come of this avenue:
# pick the hit domains (column 2 of hits.csv, skipping the header) out of nsu.csv
grep -f <(awk -F, 'NR>1{print $2}' ../media/cia-2010-covert-communication-websites/hits.csv) nsu.csv | tee nsu-hits.csv
# count hits per nameserver provider (last two components of the NS hostname)
csvcut -c 2 nsu-hits.csv | sort | awk -F. '{OFS="."; print $(NF-1), $(NF)}' | sort | uniq -c | sort -k1 -n
With the 267 hits known so far, we get:
      1 a2hosting.com
      1 amerinoc.com
      1 ayns.net
      1 dailyrazor.com
      1 domainingdepot.com
      1 easydns.com
      1 frienddns.ru
      1 hostgator.com
      1 kolmic.com
      1 name-services.com
      1 namecity.com
      1 netnames.net
      1 tonsmovies.net
      1 webmailer.de
      2 cashparking.com
     55 worldnic.com
     86 domaincontrol.com
so yeah, judging by the names alone, most of those nameserver providers are likely going to be humongous.
The smallest one by far out of the total is frienddns.ru, with only 487 domains in the whole of ns.csv; all the others are either quite large or fake hits due to the unanchored grep over the CSV. Did a quick Wayback Machine CDX scan there, but no luck, alas.
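For reference, that kind of scan can be done with the Wayback Machine CDX API; e.g. for one of the candidate domains (just a sketch, the exact queries used were not recorded):
# list captures of inews-today.com and all its subdomains
curl 'https://web.archive.org/cdx/search/cdx?url=inews-today.com&matchType=domain&limit=20'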
Let's check the smaller ones:
inews-today.com,2013-08-12T03:14:01,ns1.frienddns.ru
source-commodities.net,2012-12-13T20:58:28,ns1.namecity.com -> fake hit due to grep e-commodities.net
dailynewsandsports.com,2013-08-13T08:36:28,ns3.a2hosting.com
just-kidding-news.com,2012-02-04T07:40:50,jns3.dailyrazor.com
fightwithoutrules.com,2012-11-09T01:17:40,sk.s2.ns1.ns92.kolmic.com
fightwithoutrules.com,2013-07-01T22:46:23,ns1625.ztomy.com
half-court.net,2012-09-10T09:49:15,sk.s2.ns1.ns92.kolmic.com
half-court.net,2013-07-07T00:31:12,ns1621.ztomy.com
Doubt anything will come out of this.
Let's do a bit of counting out of the total:
grep domaincontrol.com ns.csv | awk -F, '{print $1}' | uniq | wc
gives ~20M domains using domaincontrol. Let's see how many domains there are in the first place:
awk -F, '{print $1}' ns.csv | uniq | wc
so domaincontrol.com alone accounts for about 1/4 of the total (i.e. roughly 80M domains in all).
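Both counts can also be obtained in a single pass over the file (a sketch, relying again on the domain-sorted layout of ns.csv):
# count distinct domains, and domains with at least one domaincontrol.com NS
awk -F, '
  $1 != last { total++; counted = 0; last = $1 }
  !counted && $3 ~ /domaincontrol\.com$/ { dc++; counted = 1 }
  END { printf "%d / %d = %.2f\n", dc, total, dc / total }
' ns.csv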
whoisxmlapi WHOIS record on April 17, 2011
  • Created Date: April 9, 2010 00:00:00 UTC
  • Updated Date: April 9, 2010 00:00:00 UTC
  • Expires Date: April 9, 2012 00:00:00 UTC
  • Registrant Name: domainsbyproxy.com
  • Name servers: NS33.DOMAINCONTROL.COM|NS34.DOMAINCONTROL.COM
So far, no new domains have been found with Common Crawl, nor have any existing known domains been found to be present in Common Crawl. Our working theory is that Common Crawl never reached the domains. How did Alexa find the domains?
Let's try and do something with Common Crawl.
Unfortunately there's no IP data apparently: github.com/commoncrawl/cc-index-table/issues/30, so let's focus on the URLs.
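The queries below run on AWS Athena against the public columnar index, set up roughly as per the commoncrawl/cc-index-table README (sketch only; the table definition is abbreviated, see that README for the full DDL):
CREATE DATABASE ccindex;
-- CREATE EXTERNAL TABLE ccindex.ccindex (... columns as per the README ...)
--   PARTITIONED BY (crawl STRING, subset STRING)
--   STORED AS parquet
--   LOCATION 's3://commoncrawl/cc-index/table/cc-main/warc/';
-- then, inside the ccindex database, load the partitions:
MSCK REPAIR TABLE ccindex;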
Hello world:
select * from "ccindex"."ccindex" limit 100;
Data scanned: 11.75 MB
Sample first output line:
#                            2
url_surtkey                  org,whwheelers)/robots.txt
url                          https://whwheelers.org/robots.txt
url_host_name                whwheelers.org
url_host_tld                 org
url_host_2nd_last_part       whwheelers
url_host_3rd_last_part
url_host_4th_last_part
url_host_5th_last_part
url_host_registry_suffix     org
url_host_registered_domain   whwheelers.org
url_host_private_suffix      org
url_host_private_domain      whwheelers.org
url_host_name_reversed
url_protocol                 https
url_port
url_path                     /robots.txt
url_query
fetch_time                   2021-06-22 16:36:50.000
fetch_status                 301
fetch_redirect               https://www.whwheelers.org/robots.txt
content_digest               3I42H3S6NNFQ2MSVX7XZKYAYSCX5QBYJ
content_mime_type            text/html
content_mime_detected        text/html
content_charset
content_languages
content_truncated
warc_filename                crawl-data/CC-MAIN-2021-25/segments/1623488519183.85/robotstxt/CC-MAIN-20210622155328-20210622185328-00312.warc.gz
warc_record_offset           1854030
warc_record_length           639
warc_segment                 1623488519183.85
crawl                        CC-MAIN-2021-25
subset                       robotstxt
So url_host_3rd_last_part might be a winner for CGI comms fingerprinting!
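For example, something along these lines could tally third-level labels to hunt for recurring patterns (a sketch only, not a query that was actually run; it already includes the crawl/subset restriction that turns out to be essential below):
select url_host_3rd_last_part, count(*) as cnt
from "ccindex"."ccindex"
where crawl = 'CC-MAIN-2013-20'
  and subset = 'warc'
  and url_host_3rd_last_part is not null
group by url_host_3rd_last_part
order by cnt desc
limit 100;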
Naive one for one index:
select * from "ccindex"."ccindex" where url_host_registered_domain = 'conquermstoday.com' limit 100;
has no results... data scanned: 5.73 GB
Let's see if they have any of the domain hits. Let's also restrict by date to try and reduce the data scanned:
select * from "ccindex"."ccindex" where
  fetch_time < TIMESTAMP '2014-01-01 00:00:00' AND
  url_host_registered_domain IN (
   'activegaminginfo.com',
   'altworldnews.com',
   ...
   'topbillingsite.com',
   'worldwildlifeadventure.com'
 )
Humm, data scanned: 60.59 GB and no hits... weird.
Sanity check:
select * from "ccindex"."ccindex" WHERE
  crawl = 'CC-MAIN-2013-20' AND
  subset = 'warc' AND
  url_host_registered_domain IN (
   'google.com',
   'amazon.com'
 )
has a bunch of hits, of course. Data scanned: 212.88 MB, so the WHERE filters on crawl and subset are a must! Should have read the article first.
Let's widen a bit more:
select * from "ccindex"."ccindex" WHERE
  crawl IN (
    'CC-MAIN-2013-20',
    'CC-MAIN-2013-48',
    'CC-MAIN-2014-10'
  ) AND
  subset = 'warc' AND
  url_host_registered_domain IN (
    'activegaminginfo.com',
    'altworldnews.com',
    ...
    'worldnewsandent.com',
    'worldwildlifeadventure.com'
 )
Still nothing found... they don't seem to have any of the URLs of interest?
whoisxmlapi WHOIS record on April 28, 2011
  • Registrar Name: GODADDY.COM, INC
  • Created Date: February 9, 2010 00:00:00 UTC
  • Updated Date: February 9, 2010 00:00:00 UTC
  • Expires Date: February 9, 2015 00:00:00 UTC
  • Registrant Name: domainsbyproxy.com
  • Name servers: NS55.DOMAINCONTROL.COM|NS56.DOMAINCONTROL.COM
