They appear to piece together data from various sources. This is the most complete historical domain -> IP database we have found so far. They don't have hugely more data than viewdns.info, but they do often offer something new. The key difference seems to be that their data goes a bit further back into the critical time period.
TODO: do they have historical reverse IP? The fact that they don't seem to have it suggests that they might just be relaying historical reverse IP requests to a third party via some API?
E.g. searching thefilmcentre.com under historical data at securitytrails.com/domain/thefilmcentre.com/history/a gives the correct IP 62.22.60.55.
But searching for the IP 62.22.60.55 returns nothing, and there is no historical data option for IP searches.
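They also have a REST API. Assuming an API key, the same historical A records should be retrievable with something like the following; the endpoint shape matches their public docs, but treat the details as unverified:
curl -H "APIKEY: $APIKEY" 'https://api.securitytrails.com/v1/history/thefilmcentre.com/dns/a'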
Account creation blacklists common email providers such as Gmail to force users to use a "corporate" email address. But using random domains like ciro@cirosantilli.com works fine.
Their data seems to date back to 2008 for our searches.
Wikipedia mentions "Since animal mtDNA evolves faster than nuclear genetic markers", citing a few sources.
Some sources:
Tin compound Updated 2025-07-16
X-10 Graphite Reactor Updated 2025-07-16
This was an intermediate step between the nuclear chain reaction prototype Chicago Pile-1 and full-blown plutonium mass production at the Hanford site. It was located at the Oak Ridge National Laboratory.
Figure 1. Exterior of the X-10 Graphite Reactor in 1950. Source.
Figure 2. Workers loading uranium into the X-10 Graphite Reactor with a rod. Source.
19-inch rack Updated 2025-07-16
IBM System/360 Updated 2025-07-16
This is a family of computers, and it was a big success. It appears to have been a major unification project across IBM's previous architectures. It also gave software portability guarantees with future systems, since writing software was starting to become as expensive as the hardware itself.
The CGI comms websites contain the only occurrences of HTTPS, so this might open the door to certificate fingerprinting, as proposed by user joelcollinsdc at: news.ycombinator.com/item?id=36280801!
crt.sh appears to be a good way to look into this:
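crt.sh also exposes a JSON endpoint, so the lookups can be scripted. A minimal sketch, using one of the domains from this project, that lists the issuers of all logged certificates:
curl -s 'https://crt.sh/?q=altworldnews.com&output=json' | jq -r '.[].issuer_name' | sort -u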
They all appear to use either of:
Let's try another one for secure.altworldnews.com: search.censys.io/certificates/e88f8db87414401fd00728db39a7698d874dbe1ae9d88b01c675105fabf69b94. Nope, no direct mega hits here either.
Accounts used so far: 6 (1500 reverse IP checks).
Their historic DNS and reverse DNS info was very valuable, and served as Ciro's initial entry point for finding hits in the IP ranges given by Reuters.
Generic information about the website not specific to this project will be stored at: Section "viewdns.info".
Since this source is so scarce and valuable, we have been quite careful to note down all the domain and IP ranges that have been explored.
At news.ycombinator.com/item?id=38496244, the creator of viewdns.info, "Hughesey", also stated that he'd be able to give some free credits for public research projects such as this one. This would have saved going to quite a few cafes to get those sweet extra IPs! But it was more fun in hardmode, no doubt.
We do API access to IP ranges with this simple helper: ../cia-2010-covert-communication-websites/viewdns-info.sh. Usage:
./viewdns-info.sh <apikey> <start-ipv-address> <end-ipv-address>
e.g.:
./viewdns-info.sh 8b890b00b17ed2d66bbed878d51200b58d43d014 66.45.179.187 66.45.179.210
For domain-to-IP queries from the API you should use the "iphistory" endpoint, documented at viewdns.info/api/docs/ip-history.php:
curl "https://api.viewdns.info/iphistory/?domain=todaysengineering.com&apikey=$APIKEY&output=json"
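To pull just the IPs out of the response you can pipe through jq; the .response.records[].ip path is an assumption from memory of the output format, so check it against a real response:
curl "https://api.viewdns.info/iphistory/?domain=todaysengineering.com&apikey=$APIKEY&output=json" | jq -r '.response.records[].ip' | sort -u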
Just beware of the viewdns.info reverse IP bug, which really sucks and led to us missing a ton of domains.
Myoglobin Updated 2025-07-16
From Wikipedia:
In humans, myoglobin is only found in the bloodstream after muscle injury.
Tree traversal Updated 2025-07-16
In principle one could talk about tree traversal of unordered trees as a set of possible traversals without a fixed order. But we won't consider that in this section, only deterministic traversals of ordered trees.
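As a concrete illustration, here is a minimal pre-order traversal sketch in Bash over a directory tree. Directory entries are inherently unordered, so we rely on the shell's sorted glob expansion to impose an order on the children and make the traversal deterministic:
preorder() {
  local node=$1
  # Pre-order: visit the node itself before any of its children.
  echo "$node"
  local child
  # Glob expansion is sorted by the shell, which fixes the visit order.
  for child in "$node"/*/; do
    [ -d "$child" ] && preorder "${child%/}"
  done
}
preorder .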
D'oh.
But to be serious. The Wayback Machine contains a very large proportion of all sites. It does happen sometimes that a Wayback Machine archive is missing or broken and cqcounter has the screenshot, but the Wayback Machine is still the most complete database we have found so far. Some archives are very broken, but those are rare.
The only problem with the Wayback Machine is that there is no known efficient way to query its archives across domains: you have to already have a domain in hand for CDX queries. See Section "Wayback Machine CDX scanning".
The Common Crawl project attempts in part to address this lack of queryability, but we haven't managed to extract any hits from it.
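For reference, Common Crawl exposes per-crawl CDX-style index servers at index.commoncrawl.org; a query looks something like the following, where the collection name CC-MAIN-2013-20 is just one example of the available crawls:
curl 'https://index.commoncrawl.org/CC-MAIN-2013-20-index?url=thefilmcentre.com&output=json'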
CDX + 2013 DNS Census + heuristics has however been fruitful.
We have dumped all Wayback Machine archives of known websites to: github.com/cirosantilli/cia-2010-websites-dump using ../cia-2010-covert-communication-websites/download-websites.sh. This allows for better grepping and serves as a backup in case they ever go down.
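With a local dump, cross-site searches become a single command, e.g. with a hypothetical pattern and the repository cloned into the current directory:
grep -ril 'guestbook' cia-2010-websites-dump/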
The Wayback Machine has an endpoint to query crawled pages called the CDX server. It is documented at: github.com/internetarchive/wayback/blob/master/wayback-cdx-server/README.md.
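A single-domain query looks like the following; url, output and limit are parameters from that README:
curl 'https://web.archive.org/cdx/search/cdx?url=thefilmcentre.com&output=json&limit=5'
This returns one entry per capture, with fields such as the timestamp, original URL and HTTP status code.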
This allows one to filter down tens of thousands of possible domains in a few hours, but hundreds of thousands would be too much. This is because you have to query exactly one URL at a time, and they possibly rate limit IPs. There has been no IP blacklisting so far after several hours though, so it's not that bad.
Once you have a heuristic to narrow down some domains, you can use this helper: ../cia-2010-covert-communication-websites/cdx.sh to drill them down from tens of thousands to hundreds or thousands.
We then post-process the results of cdx.sh with ../cia-2010-covert-communication-websites/cdx-post.sh to drill them down from thousands to dozens, and manually inspect everything.
From then on, you can just manually inspect for hits in your browser.
