2024-04: got two backpacks "for free" with the Lenovo reward points from buying the Lenovo ThinkPad P14s gen 4 AMD. Not bad: that laptop was already cheap, and now I got some extra swag:
- Lenovo Select Targus 16" Sport Backpack: www.lenovo.com/gb/en/p/accessories-and-software/cases-and-bags/backpacks/gx41l44751
- Lenovo Select Targus 16" Mobile Elite Backpack: www.lenovo.com/gb/en/p/accessories-and-software/cases-and-bags/backpacks/gx41l44752. Measured weight: 1070 g
The sport backpack has a fatal flaw: there is no strap to hold the laptop in place, so it just tumbles back and forth as you walk. Neither of them has very good padding below the laptop, but at least the Elite one has a slightly elevated inner compartment which would likely help a lot in case of a drop.
For sizing see also: Ciro Santilli's body.
Bought 2021-11, light brown, size EUR 44.5, $190, waterproof: www.timberland.co.uk/shop/en/tbl-uk/larchmont-chukka-for-men-in-brown-a12es210
They gave me four blisters on the backs of my feet in the first two days after walking a few hours in them, but then I put some tape on my feet and they stopped hurting after that, so the shoes broke in quickly.
All with olive oil and salt mixed up before roasting.
2021-04-05 180C:
- chestnuts: 1.5x 200g: 3x 6min, this was a bit too much
- hazelnuts: 1.5x 200g: 3x 6min, seemed fine
- pecans: 4.5x 200g bags: 5x 6 min, a bit uneven roast because too much on tray
2021-02-06 180C:
- almonds: 2x 200g: 3x 6min, slightly burnt taste
- Brazil nuts: 2x 300g: 3x 6min + 3min
- chestnuts: 1x 400g: 3x 6min, perfect
- pecans: 3x 200g bags (previously had done just 2 bags at a time): 3x 6 min + 2x 3min, perfect
2021-01-04:
- almonds: 190C, 8 min, they started burning on top! What? And I put olive oil on abundantly this time. Finished with 170C for 5 min
- chestnuts: 180C, 6 min, stir, 6 min, stir, 4 min, they became very good, dark brown
- pecans: 180C, 6 min, stir, 6 min, stir, 3 min while preparing chestnuts, very good
2020-11-21:
- mixed nuts: 180C, 10 minutes, did not reach the right roast point. Then 7 more minutes at 190C: pecans completely burnt
- almonds: 190C, about 25 minutes, opened several times, in the end had a slight burnt taste, but did not get black, just darker brown. Not as crispy as the ones we buy roasted, but pretty good
- pecans: 180C, 13 minutes, opened 3 times to stir, became great
Every article now has a (very basic) GitHub-like issue tracker. Comments now go under issues, and issues go under articles. Issues themselves are very similar to articles, with a title and a body.
This was part of the 1.0 plan, though not its first priority, but I did it now anyway because I'm trying to get all the database changes done ASAP, as I'm not in the mood to write database migrations.
Here's an example:
- ourbigbook.com/go/issue/2/donald-trump/atomic-orbital: a specific issue about the article "Atomic Orbital" by Donald Trump. Note the comments, possibly by other users, at the bottom.
- ourbigbook.com/go/issues/1/donald-trump/atomic-orbital: the list of issues about the article "Atomic Orbital" by Donald Trump
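This is not the site's actual schema, but as a rough sketch, the article/issue/comment nesting described above maps to something like the following (illustrated with sqlite3; all table and column names here are made up):
sqlite3 /tmp/obb-sketch.db <<'EOF'
-- Every issue belongs to an article; every comment belongs to an issue.
CREATE TABLE article (id INTEGER PRIMARY KEY, slug TEXT, title TEXT, body TEXT);
CREATE TABLE issue (
  id INTEGER PRIMARY KEY,
  articleId INTEGER REFERENCES article(id),
  number INTEGER, -- per-article issue number, as in /go/issue/2/<user>/<article>
  title TEXT,     -- issues are article-like: a title and a body
  body TEXT
);
CREATE TABLE comment (
  id INTEGER PRIMARY KEY,
  issueId INTEGER REFERENCES issue(id),
  body TEXT
);
EOF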
After breaking production and sweating for a bit while hotfixing (not that anyone uses the website yet), I decided to be smart and created a staging server: ourbigbook-staging.herokuapp.com. Now I can blow that server up as I wish without affecting users. Documented at: cirosantilli.com/ourbigbook/staging-deployment
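For context, a minimal sketch of what such a staging flow looks like on Heroku, assuming the standard git-remote deployment model (the remote name is just illustrative):
# One-off: create the staging app and register it as a git remote.
heroku create ourbigbook-staging
git remote add staging https://git.heroku.com/ourbigbook-staging.git
# Deploy the current branch to staging only and try to break it there:
git push staging master
# Once staging survives, deploy the same commit to production:
git push heroku master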
I had meant to make an update earlier, but I wanted to try and add some more "visible end-user changes" to OurBigBook.com.
Just noticed BTW that signup on the website is broken. Facepalm. Not that it matters much since the site is not very useful in its current state, but still. Going to fix that soon. EDIT: never mind, it wasn't broken. I just had JavaScript disabled on that website with an extension I use to test if pages are visible without JavaScript, and yes, they are perfectly visible, you can't tell the difference! But you can't log in without JavaScript either!
I still haven't made the user-visible changes I wanted, but I've hit major milestones, and it feels like time for an update.
I have now finished all the OurBigBook CLI features that I wanted for 1.0, all of which will be automatically reused in ourbigbook.com.
The two big things since last email were the following:
A secondary but also important advance was: further improvements to the website's base technology.
I had known for several months that I was going to do them, and I knew they were going to hurt, and they did. But I did them.
These changes caused two big bugs that I will solve next, one of them an infinite recursion in the recursive database query, but they shouldn't be too hard.
Enable reference features into ourbigbook.com
Currently, none of the crucial cross file features like \x, \Include and tables of contents are working. I was waiting until the above mentioned features were done, and now I'm going to get to that.
Gathering key points from the articles
citizenlab.ca/2022/09/statement-on-the-fatal-flaws-found-in-a-defunct-cia-covert-communications-system/ did an investigation and found 885 such websites, but decided not to disclose the list or methods. The question is which websites those are. E.g. at citizenlab.ca/2021/07/hooking-candiru-another-mercenary-spyware-vendor-comes-into-focus/ they used data from Censys.
Using only a single website, as well as publicly available material such as historical internet scanning results and the Internet Archive's Wayback Machine, we identified a network of 885 websites and have high confidence that the United States (US) Central Intelligence Agency (CIA) used these sites for covert communication. The websites included similar Java, JavaScript, Adobe Flash, and CGI artifacts that implemented or apparently loaded covert communications apps. In addition, blocks of sequential IP addresses registered to apparently fictitious US companies were used to host some of the websites. All of these flaws would have facilitated discovery by hostile parties. The websites, which purported to be news, weather, sports, healthcare, and other legitimate websites, appeared to be localized to at least 29 languages and geared towards at least 36 countries.
We searched historical data from Censys
citizenlab.ca/2016/08/million-dollar-dissident-iphone-zero-day-nso-group-uae/ mentions scans.io/. citizenlab.ca/2020/12/running-in-circles-uncovering-the-clients-of-cyberespionage-firm-circles/ mentions www.shodan.io/. Censys really seems to be their thing.
Another critical excerpt is:
The bulk of the websites that we discovered were active at various periods between 2004 and 2013. We do not believe that the CIA has recently used this communications infrastructure. Nevertheless, a subset of the websites are linked to individuals who may be former and possibly still active intelligence community employees or assets:
- Several are currently abroad
- Another left mainland China in the time frame of the Chinese crackdown
- Another was subsequently employed by the US State Department
- Another now works at a foreign intelligence contractor
Given that we cannot rule out ongoing risks to CIA employees or assets, we are not publishing full technical details regarding our process of mapping out the network at this time. As a first step, we intend to conduct a limited disclosure to US Government oversight bodies.
This basically implies that they must have found some communication layer level identifier, e.g. IP registration, domain name registration, or certificate, because it is impossible to believe that real agent names would have been present on the website content itself!
The websites were used from at least as early as August 2008, as per Gholamreza Hosseini's account, and the system was apparently only shut down in 2013. citizenlab.ca/2022/09/statement-on-the-fatal-flaws-found-in-a-defunct-cia-covert-communications-system/ however claims that they were used since as early as 2004.
Notably, so as to be less suspicious, the websites were often in the language of the country for which they were intended, so we can often guess which country each was intended for!
ns.csv is 57 GB. This file is too massive; working with it is a pain.
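To stay sane it helps to peek at a few rows first and stream everything rather than load the file; judging from the sample rows further down, the columns are domain, timestamp, nameserver (an assumption, the dump format is not documented here):
# Inspect the first rows without touching the rest of the 57 GB:
head -n 3 ns.csv
# Stream a single column; pipes keep memory usage flat:
cut -d, -f3 ns.csv | head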
We can also cut down the data a lot with stackoverflow.com/questions/1915636/is-there-a-way-to-uniq-by-column/76605540#76605540 and TLD filtering:
awk -F, 'BEGIN{OFS=","} { if ($1 != last) { print $1, $3; last = $1; } }' ns.csv | grep -E '\.(com|net|info|org|biz),' > nsu.csv
This brings us down to a much more manageable 3.0 GB, 83 M rows.
Let's just scan it once real quick to start with, since likely nothing will come of this avenue:
grep -f <(awk -F, 'NR>1{print $2}' ../media/cia-2010-covert-communication-websites/hits.csv) nsu.csv | tee nsu-hits.csv
cat nsu-hits.csv | csvcut -c 2 | sort | awk -F. '{OFS="."; print $(NF-1), $(NF)}' | sort | uniq -c | sort -k1 -n
As of 267 hits we get:
1 a2hosting.com
1 amerinoc.com
1 ayns.net
1 dailyrazor.com
1 domainingdepot.com
1 easydns.com
1 frienddns.ru
1 hostgator.com
1 kolmic.com
1 name-services.com
1 namecity.com
1 netnames.net
1 tonsmovies.net
1 webmailer.de
2 cashparking.com
55 worldnic.com
86 domaincontrol.com
So yeah, most of those are likely going to be humongous just by looking at the names. The smallest one by far out of the total is frienddns.ru with only 487 hits; all the others are either quite large or fake hits due to CSV grepping. Did a quick Wayback Machine CDX scan on it but no luck alas.
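The CDX scan was along these lines; a sketch using the public Wayback Machine CDX API (these parameters are the standard documented ones, the exact query used at the time was not recorded):
# List archived captures for the whole domain; empty output means no luck.
curl 'https://web.archive.org/cdx/search/cdx?url=frienddns.ru&matchType=domain&limit=50'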
Let's check the smaller ones, though I doubt anything will come out of it:
inews-today.com,2013-08-12T03:14:01,ns1.frienddns.ru
source-commodities.net,2012-12-13T20:58:28,ns1.namecity.com -> fake hit due to grep e-commodities.net
dailynewsandsports.com,2013-08-13T08:36:28,ns3.a2hosting.com
just-kidding-news.com,2012-02-04T07:40:50,jns3.dailyrazor.com
fightwithoutrules.com,2012-11-09T01:17:40,sk.s2.ns1.ns92.kolmic.com
fightwithoutrules.com,2013-07-01T22:46:23,ns1625.ztomy.com
half-court.net,2012-09-10T09:49:15,sk.s2.ns1.ns92.kolmic.com
half-court.net,2013-07-07T00:31:12,ns1621.ztomy.com
Let's do a bit of counting out of the total:
grep domaincontrol.com ns.csv | awk -F, '{print $1}' | uniq | wc
gives ~20M domains using domaincontrol, so it accounts for 1/4 of the total. Let's see how many domains there are in the first place:
awk -F, '{print $1}' ns.csv | uniq | wc
Of course, if academic journals require greater reproducibility for publication, then the cost per paper increases.
However, the total cost still has to be smaller than the combined cost of everyone who reads the paper trying to reproduce it, no?
The truth is, part of the replication crisis is also due to research groups not wanting to share their precious secrets with others, so they can keep ahead of the publication curve, or maybe spin off a startup.
And when it comes to papers, things are even crazier: big companies manage to publish white papers in peer reviewed journals.
Ciro Santilli wants to help in this area with his videos of all key physics experiments project idea.
Cool initiative. Papers that do not share source code should be banned from peer reviewed academic journals.
From the episode "Mortynight Run":
Look at this. You beat cancer, and then you went back to work at the carpet store? Booooh.
Figure "xkcd 435: Fields arranged by purity" must again be cited.
Pinned article: ourbigbook/introduction-to-the-ourbigbook-project
Welcome to the OurBigBook Project! Our goal is to create the perfect publishing platform for STEM subjects, and get university-level students to write the best free STEM tutorials ever.
Everyone is welcome to create an account and play with the site: ourbigbook.com/go/register. We believe that students themselves can write amazing tutorials, but teachers are welcome too. You can write about anything you want, it doesn't have to be STEM or even educational. Silly test content is very welcome and you won't be penalized in any way. Just keep it legal!
We have two killer features:
- topics: topics group articles by different users with the same title, e.g. here is the topic for the "Fundamental Theorem of Calculus": ourbigbook.com/go/topic/fundamental-theorem-of-calculus. Articles of different users are sorted by upvote within each article page. This feature is a bit like:
- a Wikipedia where each user can have their own version of each article
- a Q&A website like Stack Overflow, where multiple people can give their views on a given topic, and the best ones are sorted by upvote. Except you don't need to wait for someone to ask first, and any topic goes, no matter how narrow or broad
This feature makes it possible for readers to find better explanations of any topic created by other writers. And it allows writers to create an explanation in a place that readers might actually find it.
- local editing: you can store all your personal knowledge base content locally in a plaintext markup format that can be edited locally and published either:
- to OurBigBook.com to get awesome multi-user features like topics and likes
- as HTML files to a static website, which you can host yourself for free on many external providers like GitHub Pages, and remain in full control
This way you can be sure that even if OurBigBook.com were to go down one day (which we have no plans to do as it is quite cheap to host!), your content will still be perfectly readable as a static site. See the sketch of both workflows just below.
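As a sketch, both workflows look something like this with the ourbigbook CLI installed from npm (exact flags are best double checked against docs.ourbigbook.com):
npm install -g ourbigbook
# Build a static HTML site from the markup files in the current directory:
ourbigbook .
# Or upload the same sources to OurBigBook.com:
ourbigbook --web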
- Internal cross file references done right
- Infinitely deep tables of contents
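For example, a minimal sketch of both features in OurBigBook markup, with made-up file names (syntax as per docs.ourbigbook.com):
cat > index.bigb <<'EOF'
= My notes

\Include[calculus]
EOF
cat > calculus.bigb <<'EOF'
= Calculus

== Fundamental theorem of calculus

Back to the top: \x[my-notes].
EOF
# Resolves \x and \Include across files and builds a single deep ToC:
ourbigbook .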
All our software is open source and hosted at: github.com/ourbigbook/ourbigbook
Further documentation can be found at: docs.ourbigbook.com
Feel free to reach out to us for any help or suggestions: docs.ourbigbook.com/#contact