This is a dark art, and many of the sources are shady as fuck! We often have no idea of their methodology, and no source is fully complete. We just piece things together as best we can.
Some links of interest:
This is our primary data source, the first article that pointed out a few specific CIA websites which then served as the basis for all of our research.
We take the truth of this article as an axiom. And then all we claim is that all other websites found were made by the same people, due to the strong design principles those websites share.
But to be serious. The Wayback Machine contains a very large proportion of all sites. It is the most complete database we have found so far. Some archives are very broken, but those are rare.
The only problem with the Wayback Machine is that there is no known efficient way to query its archives across domains. You have to have a domain in hand for CDX queries: Wayback Machine CDX scanning.
The Common Crawl project attempts in part to address this lack of queryability, but we haven't managed to extract any hits from it.
CDX + 2013 DNS Census + heuristics has been fruitful however.
The Wayback Machine has an endpoint to query crawled pages called the CDX server. It is documented at:
This allows us to filter down tens of thousands of possible domains in a few hours. But hundreds of thousands would be too much. This is because you have to query exactly one URL at a time, and they possibly rate limit IPs. But no IP blacklisting so far after several hours, so it's not that bad.
Once you have a heuristic to narrow down some domains, you can use this helper: cia-2010-covert-communication-websites/ to drill them down from 10s of thousands down to hundreds or thousands.
We then post-process the results with cia-2010-covert-communication-websites/ to drill them down from thousands to dozens, and manually inspect everything.
From then on, you can just manually inspect for hits in your browser.
Dire times require dire methods: cia-2010-covert-communication-websites/
First we must start the tor servers with the tor-army command from:
tor-army 100
and then use it on a newline separated domain name list to check:
./ infile.txt
This creates a directory infile.txt.cdx/ containing:
  • infile.txt.cdx/out00, out01, etc.: the suspected CDX lines from domains from each tor instance based on the simple criteria that the CDX can handle directly. We split the input domains into 100 piles, and give one selected pile per tor instance.
  • infile.txt.cdx/out: the final combined CDX output of out00, out01, ...
  • infile.txt.cdx/ the final output containing only domain names that match further CLI criteria that cannot be easily encoded on the CDX query. This is the cleanest domain name list you should look into at the end basically.
Since the Archive is so abysmal in its data access (e.g. a Google BigQuery dataset would solve our issues in seconds), we have to come up with creative ways of getting around their IP throttling.
The CIA doesn't play fair. They're actually the exact opposite of fair. So neither shall we.
This should allow a full sweep of the 4.5M records in 2013 DNS Census virtual host cleanup in a reasonable amount of time. After JAR/SWF/CGI filtering we obtained 5.8k domains, so a reduction factor of about 1 million with likely very few losses. Not bad.
5.8k is still a bit annoying to fully go over however, so we can also try to count CDX hits to the domains and remove anything with too many hits, since the CIA websites basically have very few archives:
cd 2013-dns-census-a-novirt-domains.txt.cdx
./ -d domain-list.txt
cut -d' ' -f1 out | uniq -c | sort -k1 -n | awk 'match($2, /([^,]+),([^)]+)/, a) {printf("%s.%s %d\n", a[2], a[1], $1)}' > out.count
This gives us something like: 1 1 1 1 1
sorted by increasing hit counts, so we can go down as far as patience allows for!
New results from a full CDX scan of 2013-dns-census-a-novirt.csv:
JAR, SWF and CGI-bin scanning by path only is fine, since there are relatively few of those. But .js scanning by path only is too broad.
One option would be to filter out by size, an information that is contained on the CDX. Let's check typical ones:
grep -f <(jq -r '.[]|select(select(.comms)|.comms|test("\\.js"))|.host' ../media/cia-2010-covert-communication-websites/hits.json) out > out.jshits.cdx
sort -n -k7 out.jshits.cdx
Ignoring some obvious unrelated non-comms files visually we get a range of about 2732 to 3632:
net,hollywoodscreen)/current.js 20110106082232 text/javascript 200 XY5NHVW7UMFS3WSKPXLOQ5DJA34POXMV 2732
com,amishkanews)/amishkanewss.js 20110208032713 text/javascript 200 S5ZWJ53JFSLUSJVXBBA3NBJXNYLNCI4E 3632
This ignores the obviously atypical JavaScript with SHAs from iranfootballsource, and the particularly small old menu.js from, which we embed into cia-2010-covert-communication-websites/
The size helps a bit, but it's not insanely good unfortunately, only about a 3x reduction: these are some common JS sizes right there!
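A filter over the size column could be sketched as follows. The 2732-3632 byte range comes from the two samples above; the column layout is assumed to match the CDX lines shown, with the length as the last field:

```python
# Keep only CDX lines whose size column falls in the observed comms-JS range.
def in_js_size_range(cdx_line, lo=2732, hi=3632):
    fields = cdx_line.split()
    try:
        size = int(fields[-1])  # length is the last column
    except (IndexError, ValueError):
        return False
    return lo <= size <= hi

line = ('com,amishkanews)/amishkanewss.js 20110208032713 text/javascript '
        '200 S5ZWJ53JFSLUSJVXBBA3NBJXNYLNCI4E 3632')
print(in_js_size_range(line))  # True
```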
Many hits appear to happen on the same days, and per-day data does exist: but apparently cannot be publicly downloaded unfortunately. But maybe there's another way? TODO select candidates.
Accounts used so far: 6 (1500 reverse IP checks).
Their historic DNS and reverse DNS info was very valuable, and served as Ciro's initial entry point to finding hits in the IP ranges given by Reuters.
Their data is also quite disjoint from the data of the 2013 DNS Census. There is some overlap, but clearly their methodology is very different. Sometimes they slot into one another almost perfectly.
You can only get about 250 queries on the web interface, then 250 queries per free account via API.
Since this source is so scarce and valuable, we have been quite careful to note down all the domain and IP ranges that have been explored.
They check your IP when you sign up, and you can't sign up twice from the same IP. They also state that Tor addresses are blacklisted.
At, the creator of the, "Hughesey", also stated that he'd be able to give some free credits for public research projects such as this one. This would have saved going to quite a few cafes to get those sweet extra IPs! But it was more fun in hardmode, no doubt.
They also normalize dots in gmail addresses, so you need more diverse email accounts. But they haven't covered the vs trick.
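The dot normalization matters because Gmail collapses all dotted spellings of a local part into one inbox: an n-letter local part has 2^(n-1) spellings of the same account, which is why that trick is the first thing such sites neutralize. A quick sketch:

```python
# Enumerate all dotted spellings that Gmail would route to the same inbox.
from itertools import product

def dot_variants(local):
    """Yield every dotted spelling of a Gmail local part."""
    if len(local) < 2:
        yield local
        return
    for seps in product(['', '.'], repeat=len(local) - 1):
        # Interleave each character with its chosen separator ('' or '.').
        yield ''.join(c + s for c, s in zip(local, list(seps) + ['']))

variants = list(dot_variants('abc'))
print(variants)  # ['abc', 'ab.c', 'a.bc', 'a.b.c']
```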
We do API access to IP ranges with this simple helper: cia-2010-covert-communication-websites/, usage:
./ <apikey> <start-ipv-address> <end-ipv-address>
./ 8b890b00b17ed2d66bbed878d51200b58d43d014
For domain to IP queries from the API you should use "iphistory"
curl '$APIKEY&output=json'
Very curiously, their reverse IP search appears to be somewhat broken, or not to be historic, e.g.
We've contacted support and they replied:
The reverse IP tool will only show a domain if that is it's current IP address.
This is likely not accurate, more precisely it likely only works if it was the last IP address, not necessarily a current one.
Main article: DNS Census 2013.
This data source was very valuable, and led to many hits, and to finding the first non Reuters ranges with Section "secure subdomain search on 2013 DNS Census".
Hit overlap:
jq -r '.[].host' ../media/cia-2010-covert-communication-websites/hits.json | xargs -I{} sqlite3 aiddcu.sqlite "select * from t where d = '{}'"
Domain hit count when we were at 279 hits: 142 hits, so about half of the hits were present.
The timing of the database is perfect for this project, it is as if the CIA had planted it themselves!
We've noticed that often when there is a hit range:
  • there is only one IP for each domain
  • there is a range of about 20-30 of those
and that this does not seem to be that common. Let's see if that is a reasonable fingerprint or not.
Note that although this is the most common case, we have found multiple hits that map to the same IP.
First we create a table u (unique) that only has domains which are the only domain for an IP; let's see by how much that lowers the 191 M total unique domains:
time sqlite3 u.sqlite 'create table t (d text, i text)'
time sqlite3 av.sqlite -cmd "attach 'u.sqlite' as u" "insert into u.t select min(d) as d, min(i) as i from t where d not like '%.%.%' group by i having count(distinct d) = 1"
The not like '%.%.%' removes subdomains from the counts so that CGI comms are still included, and the distinct in count(distinct d) is because we have multiple entries at different timestamps for some of the hits.
Let's start with the 208 subset to see how it goes:
time sqlite3 av.sqlite -cmd "attach 'u.sqlite' as u" "insert into u.t select min(d) as d, min(i) as i from t where i glob '208.*' and d not like '%.%.%' and (d like '' or d like '') group by i having count(distinct d) = 1"
OK, after we fixed bugs with the above we are down to 4 million lines with unique domain/IP pairs and which contains all of the original hits! Almost certainly more are to be found!
This data is so valuable that we've decided to upload it to: Format:
The numbers of the first column are the IPs as a 32-bit integer representation, which is more useful to search for ranges in.
To make a histogram with the distribution of the single hostname IPs:
#!/usr/bin/env bash
# Bin by the first byte of the 32-bit integer IP representation.
bin=$((256 * 256 * 256))
sqlite3 2013-dns-census-a-novirt.sqlite -cmd '.mode csv' >2013-dns-census-a-novirt-hist.csv <<EOF
select i, sum(cnt) from (
  select floor(i/${bin}) as i,
         count(*) as cnt
    from t
    group by 1
  union
  select *, 0 as cnt from generate_series(0, 255)
)
group by i
EOF
gnuplot \
  -e 'set terminal svg size 1200, 800' \
  -e 'set output "2013-dns-census-a-novirt-hist.svg"' \
  -e 'set datafile separator ","' \
  -e 'set tics scale 0' \
  -e 'unset key' \
  -e 'set xrange[0:255]' \
  -e 'set title "Counts of IPs with a single hostname"' \
  -e 'set xlabel "IPv4 first byte"' \
  -e 'set ylabel "count"' \
  -e 'plot "2013-dns-census-a-novirt-hist.csv" using 1:2:1 with labels'
Which gives the following useless noise, there is basically no pattern:
There are two keywords that are killers: "news" and "world" and their translations or closely related words. Everything else is hard. So a good start is:
grep -e news -e noticias -e nouvelles -e world -e global
iran + football:
  • the third hit for this area after the two given by Reuters! Epic.
3 easy hits with "noticias" (news in Portuguese and Spanish), uncovering two brand new IP ranges:
Let's see some French "nouvelles/actualites" for those tumultuous Maghrebis:
news + world:
news + global:
OK, I've decided to do a complete Wayback Machine CDX scanning of news... Searching for .JAR or https.*cgi-bin.*\.cgi are killers, particularly the .jar hits, here's what came out:
Wayback Machine CDX scanning of "world":
"headline": only 140 matches in 2013-dns-census-a-novirt.csv and 3 hits out of 269 hits. Full inspection without CDX led to no new hits.
"today": only 3.5k matches in 2013-dns-census-a-novirt.csv and 12 hits out of 269 hits, TODO how many on those on 2013-dns-census-a-novirt? No new hits.
"world", "global", "international", and spanish/portuguese/French versions like "mondo", "mundo", "mondi": 15k matches in 2013-dns-census-a-novirt.csv. No new hits.
Let's see if there's anything in records/mx.xz.
mx.csv is 21GB.
They do use " quoting in the files to escape commas, so:
import csv
import sys
writer = csv.writer(sys.stdout)
with open('mx.csv', 'r') as f:
    reader = csv.reader(f)
    for row in reader:
        writer.writerow([row[0], row[3]])
Would have been better with csvkit:
# uniq not amazing as there are often two or three slightly different records repeated on multiple timestamps, but down to 11 GB
python3 | uniq > mx-uniq.csv
sqlite3 mx.sqlite 'create table t(d text, m text)'
# 13 GB
time sqlite3 mx.sqlite ".import --csv --skip 1 'mx-uniq.csv' t"

# 41 GB
time sqlite3 mx.sqlite 'create index td on t(d)'
time sqlite3 mx.sqlite 'create index tm on t(m)'
time sqlite3 mx.sqlite 'create index tdm on t(d, m)'

# Remove dupes.
# Rows: 150m
time sqlite3 mx.sqlite <<EOF
delete from t
where rowid not in (
  select min(rowid)
  from t
  group by d, m
)
EOF
# 15 GB
time sqlite3 mx.sqlite vacuum
Let's see what the hits use:
awk -F, 'NR>1{ print $2 }' ../media/cia-2010-covert-communication-websites/hits.csv | xargs -I{} sqlite3 mx.sqlite "select distinct * from t where d = '{}'"
At around 267 total hits, only 84 have MX records, and from those that do, almost all of them have exactly:
with only three exceptions:|||
We need to count out of the totals!
sqlite3 mx.sqlite "select count(*) from t where m = ''"
which gives ~18M, so nope, it is too much by itself...
Let's try to use that to reduce av.sqlite from 2013 DNS Census virtual host cleanup a bit further:
time sqlite3 mx.sqlite '.mode csv' "attach 'aiddcu.sqlite' as 'av'" '.load ./ip' "select ipi2s(av.t.i), av.t.d from av.t inner join t as mx on av.t.d = mx.d and mx.m = '' order by av.t.i asc" > avm.csv
where avm stands for av with mx pruning. This leaves us with only ~500k entries left. With one more fingerprint we could do a Wayback Machine CDX scan.
Let's check that we still have most of our hits in there:
grep -f <(awk -F, 'NR>1{print $2}' /home/ciro/bak/git/media/cia-2010-covert-communication-websites/hits.csv) avm.csv
At 267 hits we got 81, i.e. essentially all of the hits that use are still present.
secureserver is a hosting provider; we can see their blank page e.g. at: Comments state: is the name GoDaddy uses as the reverse DNS for IP addresses used for dedicated/virtual server hosting.
We intersect 2013 DNS Census virtual host cleanup with 2013 DNS census MX records and that leaves 460k hits. We did lose about two thirds of the hits on the MX records (as of 260 hits), since is only used in about 1/3 of sites, but we also concentrate 9x, so it may be worth it.
Then we do the Wayback Machine CDX scanning. It takes about 5 days, but it is manageable.
We did a full Wayback Machine CDX scanning for JAR, SWF and cgi-bin in those, but only found a single new hit:
ns.csv is 57 GB. This file is too massive, working with it is a pain.
We can also cut down the data a lot with per-domain deduplication and TLD filtering:
awk -F, 'BEGIN{OFS=","} { if ($1 != last) { print $1, $3; last = $1; } }' ns.csv | grep -E '\.(com|net|info|org|biz),' > nsu.csv
This brings us down to a much more manageable 3.0 GB, 83 M rows.
Let's just scan it once real quick to start with, since likely nothing will come of this avenue:
grep -f <(awk -F, 'NR>1{print $2}' ../media/cia-2010-covert-communication-websites/hits.csv) nsu.csv | tee nsu-hits.csv
cat nsu-hits.csv | csvcut -c 2 | sort | awk -F. '{OFS="."; print $(NF-1), $(NF)}' | sort | uniq -c | sort -k1 -n
As of 267 hits we get:
so yeah, most of those are likely going to be humongous just by looking at the names.
The smallest ones by far from the total are: with only 487 hits, all others quite large or fake hits due to CSV. Did a quick Wayback Machine CDX scanning there but no luck alas.
Let's check the smaller ones:,2013-08-12T03:14:01,,2012-12-13T20:58:28, -> fake hit due to grep,2013-08-13T08:36:28,,2012-02-04T07:40:50,,2012-11-09T01:17:40,,2013-07-01T22:46:23,,2012-09-10T09:49:15,,2013-07-07T00:31:12,
Doubt anything will come out of this.
Let's do a bit of counting out of the total:
grep ns.csv | awk -F, '{print $1}' | uniq | wc
gives ~20M domains using domaincontrol. Let's see how many domains there are in the first place:
awk -F, '{print $1}' ns.csv | uniq | wc
so it accounts for 1/4 of the total.
Same as 2013 DNS census NS records basically, nothing came out. contains historical domain -> mappings.
We have not managed to extract much from this source, they don't have as much data on the range of interest.
But they do have some unique data at least, perhaps we should try them a bit more often, e.g. they were the only source we've seen so far that made the association: -> which places it in the more plausible IP range.
TODO can it do IP to domain? Or just domain to IP? Asked on their Discord: Their banner suggests that yes:
With our new look website you can now find other domains hosted on the same IP address, your website neighbours and more even quicker than before.
Owner replied, you can't:
At the moment you can only do this for current not historical records
This is a shame, reverse IP here could be quite valuable.
In principle, we could obtain this data from search engines, but Google doesn't track that entire website well, e.g. no hits for "" presumably due to heavy IP throttling.
Homepage gives date starting in 2009:
Here at DNS History we have been crawling DNS records since 2009, our database currently contains over 1 billion domains and over 12 billion DNS records.
and it is true that they do have some hits from that useful era.
Any data that we have the patience of extracting from this we will dump under
They appear to piece together data from various sources. As a result, they have a very complete domain -> IP history.
TODO reverse IP? The fact that they don't seem to have it suggests that they are just making historical reverse IP requests to a third party via some API.
Account creation blacklists common email providers such as gmail to force users to use a "corporate" email address. But using random domains like works fine.
Their data seems to date back to 2008 for our searches.
So far, no new domains have been found with Common Crawl, nor have any existing known domains been found to be present in Common Crawl. Our working theory is that Common Crawl never reached the domains. How did Alexa find the domains?
Let's try and do something with Common Crawl.
Unfortunately there's no IP data apparently:, so let's focus on the URLs.
Hello world:
select * from "ccindex"."ccindex" limit 100;
Data scanned: 11.75 MB
Sample first output line:
#                            2
url_surtkey                  org,whwheelers)/robots.txt
url_host_tld                 org
url_host_2nd_last_part       whwheelers
url_host_registry_suffix     org
url_host_private_suffix      org
url_protocol                 https
url_path                     /robots.txt
fetch_time                   2021-06-22 16:36:50.000
fetch_status                 301
content_digest               3I42H3S6NNFQ2MSVX7XZKYAYSCX5QBYJ
content_mime_type            text/html
content_mime_detected        text/html
warc_filename                crawl-data/CC-MAIN-2021-25/segments/1623488519183.85/robotstxt/CC-MAIN-20210622155328-20210622185328-00312.warc.gz
warc_record_offset           1854030
warc_record_length           639
warc_segment                 1623488519183.85
crawl                        CC-MAIN-2021-25
subset                       robotstxt
So url_host_3rd_last_part might be a winner for CGI comms fingerprinting!
Naive one for one index:
select * from "ccindex"."ccindex" where url_host_registered_domain = '' limit 100;
has no results... Data scanned: 5.73 GB
Let's see if they have any of the domain hits. Let's also restrict by date to try and reduce the data scanned:
select * from "ccindex"."ccindex" where
  fetch_time < TIMESTAMP '2014-01-01 00:00:00' AND
  url_host_registered_domain IN (
Humm, data scanned: 60.59 GB and no hits... weird.
Sanity check:
select * from "ccindex"."ccindex" WHERE
  crawl = 'CC-MAIN-2013-20' AND
  subset = 'warc' AND
  url_host_registered_domain IN (
has a bunch of hits of course. Also Data scanned: 212.88 MB, WHERE crawl and subset are a must! Should have read the article first.
Let's widen a bit more:
select * from "ccindex"."ccindex" WHERE
  crawl IN (
  ) AND
  subset = 'warc' AND
  url_host_registered_domain IN (
Still nothing found... they don't seem to have any of the URLs of interest?
Does not appear to have any reverse IP hits unfortunately: Likely only has domains that were explicitly advertised.
We could not find anything useful in it so far, but there is great potential to use this tool to find new IP ranges based on properties of existing IP ranges. Part of the problem is that the dataset is huge, and is split by top 256 bytes. But it would be reasonable to at least explore ranges with pre-existing known hits...
We have started looking for patterns on 66.* and 208.*, both selected as two relatively far away ranges that have a number of pre-existing hits. 208 should likely have been 212 considering later finds that put several ranges in 212.
  • 66.104.
    • 1346397300 SCAN(V=6.01%E=4%D=1/12%OT=22%CT=443%CU=%PV=N%G=N%TM=387CAB9E%P=mipsel-openwrt-linux-gnu),ECN(R=N),T1(R=N),T2(R=N),T3(R=N),T4(R=N),T5(R=N),T6(R=N),T7(R=N),U1(R=N),IE(R=N)
    • 1346816700 SCAN(V=6.01%E=4%D=1/2%OT=22%CT=443%CU=%PV=N%DC=I%G=N%TM=1D5EA%P=mipsel-openwrt-linux-gnu),SEQ(SP=F8%GCD=3%ISR=109%TI=Z%TS=A),ECN(R=N),T1(R=Y%DF=Y%TG=40%S=O%A=S+%F=AS%RD=0%Q=),T1(R=N),T2(R=N),T3(R=N),T4(R=N),T5(R=Y%DF=Y%TG=40%W=0%S=Z%A=S+%F=AR%O=%RD=0%Q=),T6(R=N),T7(R=N),U1(R=N),IE(R=N)
    • 1346692500 SCAN(V=6.01%E=4%D=9/3%OT=22%CT=443%CU=%PV=N%DC=I%G=N%TM=5044E96E%P=mipsel-openwrt-linux-gnu),SEQ(SP=105%GCD=1%ISR=108%TI=Z%TS=A),OPS(O1=M550ST11NW6%O2=M550ST11NW6%O3=M550NNT11NW6%O4=M550ST11NW6%O5=M550ST11NW6%O6=M550ST11),WIN(W1=1510%W2=1510%W3=1510%W4=1510%W5=1510%W6=1510),ECN(R=N),T1(R=Y%DF=Y%TG=40%S=O%A=S+%F=AS%RD=0%Q=),T1(R=N),T2(R=N),T3(R=N),T4(R=N),T5(R=Y%DF=Y%TG=40%W=0%S=Z%A=S+%F=AR%O=%RD=0%Q=),T6(R=N),T7(R=N),U1(R=N),IE(R=N)
    • 1346822100 SCAN(V=6.01%E=4%D=1/1%OT=22%CT=443%CU=%PV=N%DC=I%G=N%TM=14655%P=mipsel-openwrt-linux-gnu),SEQ(TI=Z%TS=A),ECN(R=N),T1(R=Y%DF=Y%TG=40%S=O%A=S+%F=AS%RD=0%Q=),T1(R=N),T2(R=N),T3(R=N),T4(R=N),T5(R=Y%DF=Y%TG=40%W=0%S=Z%A=S+%F=AR%O=%RD=0%Q=),T6(R=N),T7(R=N),U1(R=N),IE(R=N)
    • 1346712300 SCAN(V=6.01%E=4%D=9/4%OT=22%CT=443%CU=%PV=N%DC=I%G=N%TM=50453230%P=mipsel-openwrt-linux-gnu),SEQ(SP=FB%GCD=1%ISR=FF%TI=Z%TS=A),ECN(R=N),T1(R=Y%DF=Y%TG=40%S=O%A=S+%F=AS%RD=0%Q=),T1(R=N),T2(R=N),T3(R=N),T4(R=N),T5(R=Y%DF=Y%TG=40%W=0%S=Z%A=S+%F=AR%O=%RD=0%Q=),T6(R=N),T7(R=N),U1(R=N),IE(R=N)
  • 66.175.106
    • 1340077500 SCAN(V=5.51%D=1/3%OT=22%CT=443%CU=%PV=N%G=N%TM=38707542%P=mipsel-openwrt-linux-gnu),ECN(R=N),T1(R=N),T2(R=N),T3(R=N),T4(R=N),T5(R=Y%DF=Y%TG=40%W=0%S=Z%A=S+%F=AR%O=%RD=0%Q=),T6(R=N),T7(R=N),U1(R=N),IE(R=N)
    • 1345562100 SCAN(V=5.51%D=8/21%OT=22%CT=443%CU=%PV=N%DC=I%G=N%TM=5033A5F2%P=mips-openwrt-linux-gnu),SEQ(SP=FB%GCD=1%ISR=FC%TI=Z%TS=A),ECN(R=Y%DF=Y%TG=40%W=1540%O=M550NNSNW6%CC=N%Q=),T1(R=Y%DF=Y%TG=40%S=O%A=S+%F=AS%RD=0%Q=),T2(R=N),T3(R=N),T4(R=N),T5(R=Y%DF=Y%TG=40%W=0%S=Z%A=S+%F=AR%O=%RD=0%Q=),T6(R=N),T7(R=N),U1(R=N),IE(R=N)
Hostprobes quick look on two ranges:
... similar down	1334668500	down	no-response	1338270300	down	no-response	1338839100	down	no-response	1339361100	down	no-response	1346391900	down	no-response	1335806100	up	unknown	1336979700	up	unknown	1338840900	up	unknown	1339454700	up	unknown	1346778900	up	echo-reply (0.34s latency).	1346838300	up	echo-reply (0.30s latency).	1335840300	up	unknown	1338446700	up	unknown	1339334100	up	unknown	1346658300	up	echo-reply (0.26s latency).

... similar up	1335708900	up	unknown	1338446700	up	unknown	1339330500	up	unknown	1346494500	up	echo-reply (0.24s latency).	1335840300	up	unknown	1337793300	up	unknown	1338853500	up	unknown	1346454900	up	echo-reply (0.23s latency).	1335856500	up	unknown	1338200100	down	no-response	1338749100	down	no-response	1339334100	down	no-response	1346607900	down	net-unreach	1335699900	up	unknown

... similar down
Suggests exactly 127 - 96 + 1 = 32 IPs.
... similar down	1334522700	down	no-response	1335276900	down	no-response	1335784500	down	no-response	1337845500	down	no-response	1338752700	down	no-response	1339332300	down	no-response	1346499900	down	net-unreach	1334668500	up	unknown	1336808700	up	unknown	1339334100	up	unknown	1346766300	up	echo-reply (0.40s latency).	1335770100	up	unknown	1338444900	up	unknown	1339334100	up	unknown

... similar up	1346517900	up	echo-reply (0.19s latency).	1335708900	up	unknown	1335708900	up	unknown	1338066900	up	unknown	1338747300	up	unknown	1346872500	up	echo-reply (0.27s latency).	1335773700	up	unknown	1336949100	up	unknown	1338750900	up	unknown	1339334100	up	unknown	1346854500	up	echo-reply (0.13s latency).	1335665700	down	no-response	1336567500	down	no-response	1338840900	down	no-response	1339425900	down	no-response	1346494500	down	time-exceeded

... similar down
Suggests exactly 223 - 192 + 1 = 32 IPs.
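These fixed-length runs of up IPs can be detected with a gaps-and-islands pass over the IPs as integers: within a run of consecutive addresses, the IP minus its rank is constant. A hypothetical Python sketch of the idea:

```python
# Detect maximal runs of consecutive integers (up IPs) via gaps-and-islands:
# group on ip - index, which is constant exactly within a consecutive run.
from itertools import groupby

def consecutive_runs(ips):
    """Return [(first_ip, length), ...] for each maximal consecutive run."""
    ips = sorted(set(ips))
    runs = []
    for _, grp in groupby(enumerate(ips), key=lambda t: t[1] - t[0]):
        members = [ip for _, ip in grp]
        runs.append((members[0], len(members)))
    return runs

print(consecutive_runs([1, 2, 3, 10, 11]))  # [(1, 3), (10, 2)]
```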
Let's have a look at the file 68. Outcome: no clear hits like on 208. One wonders why.
It does appear that long sequences of up ranges are a sort of fingerprint. The question is how unique it would be.
time awk '$3=="up"{ print $1 }' $n | uniq -c | sed -r 's/^ +//;s/ /,/' | tee $n-up-uniq
rm -f $t
time sqlite3 $t 'create table tmp(cnt text, i text)'
time sqlite3 $t ".import --csv $n-up-uniq tmp"
time sqlite3 $t 'create table t (i integer)'
time sqlite3 $t '.load ./ip' 'insert into t select str2ipv4(i) from tmp'
time sqlite3 $t 'drop table tmp'
time sqlite3 $t 'create index ti on t(i)'
This reduces us to 2 million IP rows from the total possible 16 million IPs.
OK now just counting hits on fixed windows has way too many results:
sqlite3 208-up-uniq.sqlite "
SELECT * FROM (
  SELECT min(i) as f, COUNT(*) as c
  FROM t
  GROUP BY i/256
) WHERE c > 20 and c < 30
"
Let's instead try consecutive runs spanning exactly 31 (i.e. 32 consecutive IPs):
sqlite3 208-up-uniq.sqlite <<EOF
SELECT f, t - f as c FROM (
  SELECT min(i) as f, max(i) as t
  FROM (SELECT i, ROW_NUMBER() OVER (ORDER BY i) - i as grp FROM t)
  GROUP BY grp
) where c = 31
EOF
271 results. Hmm. A bit more than we'd like...
Another route is to also count the ups:
time awk '$3=="up"{ print $1 }' $n | uniq -c | sed -r 's/^ +//;s/ /,/' | tee $n-up-uniq-cnt
rm -f $t
time sqlite3 $t 'create table tmp(cnt text, i text)'
time sqlite3 $t ".import --csv $n-up-uniq-cnt tmp"
time sqlite3 $t 'create table t (cnt integer, i integer)'
time sqlite3 $t '.load ./ip' 'insert into t select cast(cnt as integer), str2ipv4(i) from tmp'
time sqlite3 $t 'drop table tmp'
time sqlite3 $t 'create index ti on t(i)'
Let's see how many consecutives with counts:
sqlite3 208-up-uniq-cnt.sqlite <<EOF
SELECT f, t - f as c FROM (
  SELECT min(i) as f, max(i) as t
  FROM (SELECT i, ROW_NUMBER() OVER (ORDER BY i) - i as grp FROM t WHERE cnt >= 3)
  GROUP BY grp
) where c > 28 and c < 32
EOF
Let's check on 66:
grep -e '66.45.179' -e '66.45.179' 66
not representative at all... e.g. several confirmed hits are down:   1335305700      down    no-response   1337579100      down    no-response   1338765300      down    no-response   1340271900      down    no-response   1346813100      down    no-response
Let's check relevancy of known hits:
grep -e '208.254.40' -e '208.254.42' 208 | tee 208hits
Output:	1355564700	unreachable	1355622300	unreachable	1334537100	alive, 36342	1335269700	alive, 17586

..	1355562900	alive, 35023	1355593500	alive, 59866	1334609100	unreachable	1334708100	alive from, 43358	1336596300	unreachable
The rest of 208 is mostly unreachable.	1335294900	unreachable
...	1344737700	unreachable	1345574700	Icmp Error: 0,ICMP Network Unreachable, from	1346166900	unreachable
...	1355665500	unreachable	1334625300	alive, 6672
...	1355658300	alive, 57412	1334677500	alive, 28985	1336524300	unreachable	1344447900	alive, 8934	1344613500	alive, 24037	1344806100	alive, 20410	1345162500	alive, 10177
...	1336590900	alive, 23284
...	1355555700	alive, 58841	1334607300	Icmp Type: 11,ICMP Time Exceeded, from	1334681100	Icmp Type: 11,ICMP Time Exceeded, from	1336563900	Icmp Type: 11,ICMP Time Exceeded, from	1344451500	Icmp Type: 11,ICMP Time Exceeded, from	1344566700	unreachable	1344762900	unreachable
Let's try with 66. First there is way too much data, 9 GB, so let's cut it down:
time awk '$3~/^alive,/ { print $1 }' $n | uniq -c | sed -r 's/^ +//;s/ /,/' | tee $n-up-uniq-c
OK down to 45 MB, now we can work.
grep -e '66.45.179' -e '66.104.169' -e '66.104.173' -e '66.104.175' -e '66.175.106' '66-alive-uniq-c' | tee 66hits
Nah, it's full of holes:
won't be able to find new ranges here.
Domain list only, no IPs and no dates. We haven't been able to extract anything of interest from this source so far.
Domain hit count when we were at 69 hits: only 9, some of which had been since reused. Likely their data collection did not cover the dates of interest.
When you Google most of the hit domains, many of them show up on "expired domain trackers", and above all Chinese expired domain trackers for some reason, notably e.g.:
  • e.g.国际域名).txt. Heavily IP throttled. Tor hindered more than helped.
    Scraping script: cia-2010-covert-communication-websites/ Scraping does about 1 day every 5 minutes relatively reliably, so about 36 hours / year. Not bad.
    Results are stored under tmp/hupo/<day>.
    Check for hit overlap:
    grep -Fx -f <( jq -r '.[].host' ../media/cia-2010-covert-communication-websites/hits.json ) cia-2010-covert-communication-websites/tmp/hupo/*
    The hits are very well distributed amongst days and months, at least they did a good job hiding these potential timing fingerprints. This feels very deliberately designed.
    There are lots of hits. The data set is very inclusive. Also we understand that it must have been obtained through means other than Web crawling, since it contains so many of the hits.
    Nice output format for scraping, as the HTML is very minimal.
    They randomly changed their URL format to remove the space before the .com after 2012-02-03:
    Some of their files are simply missing unfortunately, e.g. neither of the following contained that one: Hmm, we might have better luck over there then?
    2018-11-19 is corrupt in a new and wonderful way, with a bunch of trailing zeros:
    wget -O hupo-2018-11-19 '
    hd hupo-2018-11-19
    ends in:
    000ffff0  74 75 64 69 65 73 2e 63  6f 6d 0d 0a 70 31 63 6f  ||
    00100000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
    0018a5e0  00 00 00 00 00 00 00 00  00                       |.........|
    More generally, several files contain invalid domain names with non-ASCII characters, e.g. 2013-01-02 contains 365<D3>л<FA><C2><CC>.com. Domain names can only contain ASCII characters: Maybe we should get rid of any such lines as noise.
    Some files around 2011-09-06 start with an empty line. 2014-01-15 starts with about twenty empty lines. Oh, and that last one also has some trash bytes at the end: <B7><B5><BB><D8>. Beauty.
  • e.g. Appears to contain the exact same data as ""
    Also heavily IP throttled, and a bit more than hupo apparently.
    Also has some randomly missing dates like, though the missing ones differ from hupo's, so they complement each other nicely.
    Some of the URLs are broken and don't inform that with HTTP status code, they just replace the results with some Chinese text 无法找到该页 (The requested page could not be found):
    Several URLs just return length 0 content, e.g.:
    curl -vvv
    *   Trying
    * Connected to ( port 80 (#0)
    > GET /com/2015-10-31.asp HTTP/1.1
    > Host:
    > User-Agent: curl/7.88.1
    > Accept: */*
    < HTTP/1.1 200 OK
    < Date: Sat, 21 Oct 2023 15:12:23 GMT
    < Server: Microsoft-IIS/6.0
    < X-Powered-By: ASP.NET
    < Content-Length: 0
    < Content-Type: text/html
    < Cache-control: private
    * Connection #0 to host left intact
    It is not fully clear if this is a throttling mechanism, or if the data is just missing entirely.
    Starting around 2018, the IP limiting became very intense, 30 mins / 1 hour per URL, so we just gave up. Therefore our dump does not contain data from 2018 onwards.
    Starting from 2013-05-10 the format changes randomly. This also shows us that they just have all the HTML pages as static files on their server. E.g. with:
    grep -a '<pre' * | s
    we see:
    2013-05-09:<pre style='font-family:Verdana, Arial, Helvetica, sans-serif; '><strong>2013<C4><EA>05<D4><C2>09<C8>յ<BD><C6>ڹ<FA><BC><CA><D3><F2><C3><FB></strong><br>
  • e.g.
  • e.g.:
This suggests that scraping these lists might be a good starting point to obtaining "all expired domains ever".
We've made the following pipelines for + merging:
./ &
./ &
# Export as small Google indexable files in a Git repository.
# Export as per year zips for Internet Archive.
# Obtain count statistics:
The extracted data is present at:
Soon after uploading, these repos started getting some interesting traffic, presumably started by security trackers going "bling bling" on certain malicious domain names in their databases:
  • GitHub trackers:
    • 8 1
    • 17 2
    • 2 1
    • 1 1
    • 2 1
    • 2 1
    • 2 1
    • 4 1
    • 2 1
    • 10 3
    • 2 1
    • 2 1
    • 4 1
    • 2 1
    • 1 1
    • 18 2
    • Looks like a Russian hacker forum.
  • LinkedIn profile views:
    • "Information Security Specialist at Forcepoint"
Check for overlap of the merge:
grep -Fx -f <( jq -r '.[].host' ../media/cia-2010-covert-communication-websites/hits.json ) cia-2010-covert-communication-websites/tmp/merge/*
Next, we can start searching by keyword with Wayback Machine CDX scanning with Tor parallelization with our helper cia-2010-covert-communication-websites/, e.g. to check domains that contain the term "news":
./ mydir 'news|global' 2011 2019
produces per-year results for the regex news|global between those years under:
OK, let's:
./ out 'news|headline|internationali|mondo|mundo|mondi|iran|today'
Other searches that are not dense enough for our patience:
OMG news search might be producing some golden, golden new hits!!! Going full into this. Hits:
and a few more. It's amazing.
TODO: what does this Chinese forum track? New registrations? Their focus seems to be domain name speculation.
Some of the threads contain domain dumps. We haven't yet seen a scrapable URL pattern, but their data goes way back and did have various hits. The forum seems to have started in 2006: "【国际域名拟删除列表】2007年06月16日" is the earliest list we could find. It is an expired domain list.
Some hits:
  • contains The thread title is "2009.5.04". The post date is 2009-04-30.
    Breadcrumb nav: 域名论坛 > 域名增值交易区 > 国际域名专栏 (domain name forum > area for domain names increasing in value > international domains section), dated mega early on Sep 30th, 2012 by CYBERTAZIEX.
This source was found by Oleg Shakirov.
Holy fuck the type of data source that we get in this area of work!
This pastebin contained a few new hits, in addition to some pre-existing ones. Most of the hits seem to be linked to the IP, which presumably is a major part of the fingerprint found by CYBERTAZIEX, though unsurprisingly the methodology is unclear. As documented, the domains appear to be linked to a "Condor hosting" provider, but it is hard to find any information about it online.
Ciro Santilli checked every single non-subdomain domain in the list.
Other files under the same account: did not seem of interest.
The author's real name appears to be Deni Suwandi: from Indonesia, but all accounts appear to be inactive, otherwise we'd ping him to ask for more info about the list.
OK, Oleg Shakirov's findings inspired Ciro Santilli to try Yandexing a bit more... had a hit:, and so Ciro started looking around... and a good number of other things have hits.
Not all of them, definitely less data than
But they do reverse IP, and they show which nearby reverse IPs have hits on the same page, for free, which is great!
Shame their ordering is purely alphabetical and doesn't properly order the IPs, so it is a bit of a pain, but we can handle it.
OMG, Russians!!!
The data here had a little bit of non-overlap from other sources. 4 new confirmed hits were found, plus 4 possible others that were left as candidates.
