It is true that one image is worth a thousand words, but unfortunately it is also true that one image takes up at least as many bytes as a thousand words!
Having one single page to rule them all is of course the ideal setup for a website, as you can Ctrl + F one ToC and quickly find what you want.
And with Linux Kernel Module Cheat, Ciro noticed that it is very hard to write so much intelligent prose that it becomes too large to reasonably load as a single webpage.
He then started using this technique for everything he writes, including this page and his Chinese government page.
However, if there are too many images on the page, loading the last ones takes forever when users want to view the last sections.
There are two solutions to that:
- be traditional and create separate web pages
- be bold and load images only as they appear in the viewport: stackoverflow.com/questions/2321907/how-do-you-make-images-load-only-when-they-are-in-the-viewport/57389607#57389607. Edit: OK, this has been standardized as the loading=lazy attribute, no JavaScript needed; see the sketch below. Now the last awesome thing would be a method that loads the images in the viewport first, then those below, and then those above: that would be the ultimate solution. This question comes close: stackoverflow.com/questions/7906348/change-loading-order-of-images-already-on-page
Ciro is still deciding between those two. The traditional approach works for sure but loses the one page to rule them all benefits.
The innovative approach will work for interactive viewing, but archive.org, for example, will fail to load the images, and there may be other unforeseen consequences.
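For reference, the standardized lazy loading mentioned above is just an attribute on the img element. A minimal sketch, with a hypothetical image path:

```html
<!-- loading="lazy" defers fetching until the image nears the viewport.
     big-photo.jpg is a hypothetical placeholder path;
     explicit width/height avoid layout shift while the image loads. -->
<img src="big-photo.jpg" alt="A large photo" width="800" height="600" loading="lazy">
```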
Wikimedia Commons is awesome and automatically converts and serves smaller versions of images, so always choose the smallest image size needed by the output document. Readers can then find the higher resolution versions by following the page source.
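As an illustration, the smaller versions can be requested directly by URL, e.g. for the Superconducting_tunnel_junction.svg file discussed further down. The thumb path below follows the URL pattern Wikimedia Commons uses in practice rather than a documented API, and the 640px width is an arbitrary choice:

```html
<!-- Request a 640px-wide PNG rendering of an SVG hosted on Wikimedia Commons.
     The /thumb/.../640px-... path is an observed URL convention, not a formal API. -->
<img
  src="https://upload.wikimedia.org/wikipedia/commons/thumb/8/81/Superconducting_tunnel_junction.svg/640px-Superconducting_tunnel_junction.svg.png"
  alt="Superconducting tunnel junction"
  loading="lazy"
>
```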
This also comes to mind: motherfuckingwebsite.com
zettelkasten.de/posts/overview/ from zettelkasten:
How many Zettelkästen should I have? The answer is, most likely, only one for the duration of your life. But there are exceptions to this rule.
Since images are large, they bring the following challenges:
- keeping images in the main Git repository with text content makes the repository huge and slow to clone, and should not be done
- storing and serving images ourselves could cost money, which we want to avoid
To solve those problems, the following alternatives appear to be stable enough and should be used, in decreasing order of preference:
- for all images, use the separate GitHub repository: github.com/cirosantilli/media. This way the entire website relies on a single third party, GitHub, so we have a simple single point of failure. We are at the mercy of GitHub's 1 GB size policy: help.github.com/en/articles/what-is-my-disk-quota, but it will take a while to hit that. GitLab however has a 10 GB maximum size: about.gitlab.com/2015/04/08/gitlab-dot-com-storage-limit-raised-to-10gb-per-repo/, so we could move there if we ever blow past 1 GB on GitHub. Both GitLab and GitHub allow uploading files through the web UI, so downloading the large repository is never needed to contribute. GitHub does not however serve videos the way it serves images, as of 2019. Images from this repository can be hotlinked directly; see the first sketch after this list.
- Wikimedia Commons for videos if the following conditions are met:
- in scope: "educational material in a broad sense", but not e.g. "Private image collections, e.g. private party photos, photos of yourself and your friends, your collection of holiday snaps and so on.". I don't think they will be too picky even with low quality photos.
- allowed format, e.g. images or videos, but not ZIPs
- allowed license: CC BY SA, but no fair use
Since Wikimedia Commons has a higher level of curation and is an educational not-for-profit, it is the method most likely to remain available for the longest time. For this reason, we highly recommend also uploading any acceptable files there as an additional backup. The downside is that its tooling is not as good, e.g. there are a bunch of messy unofficial tools for batch operations, and uploads take more effort. Another downside of Wikimedia Commons is that while we can choose the basename of files, it also adds some extra SHA crap to the beginning of URLs, making them harder to predict. Another serious downside is that they randomly rename images without redirects, e.g. they renamed upload.wikimedia.org/wikipedia/en/0/03/STJ_SVG_file.svg to upload.wikimedia.org/wikipedia/commons/8/81/Superconducting_tunnel_junction.svg. Another "downside" is that they are extremely strict about copyright compliance. This is good because you can be pretty sure that they are correct in general, but it also means that they are very conservative and delete things where fair use would be OK. And if those fair uses have no Wikipedia page, they won't show up anywhere.
- archive.org for anything else, e.g. videos that Wikimedia Commons does not accept. All content will be tracked under the cirosantilli collection: archive.org/details/cirosantilli. archive.org has a very convenient upload flow and lax requirements. The generated URLs are predictable (a single SHA prefix for the entire collection); see the second sketch after this list. Never trust a website that is not on GitHub Pages: for-profit companies will take down everything immediately as soon as it stops making them money. Every external link to non-GitHub pages must be archived, and GitHub links must be forked. We should also back up images that Wikimedia Commons does not accept here, in addition to the github.com/cirosantilli/media repository.
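To make the GitHub option concrete, images in the media repository can be hotlinked through raw.githubusercontent.com. This is only a sketch: the filename is hypothetical and the default branch is assumed to be master:

```html
<!-- Hotlink an image from the github.com/cirosantilli/media repository.
     some-image.jpg is a hypothetical filename; master is the assumed default branch. -->
<img
  src="https://raw.githubusercontent.com/cirosantilli/media/master/some-image.jpg"
  alt="An image served from the media repository"
  loading="lazy"
>
```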
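Similarly, files uploaded to archive.org can be embedded through its download URLs, which follow a predictable archive.org/download/&lt;item&gt;/&lt;file&gt; pattern. The item identifier and filename below are hypothetical placeholders:

```html
<!-- Embed a video served directly by archive.org.
     some-item and some-video.webm are hypothetical placeholders. -->
<video
  src="https://archive.org/download/some-item/some-video.webm"
  controls
  preload="none"
></video>
```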
The following alternatives seem impossible to use, because Ciro could not determine whether they expose direct links to the images:
The following do have direct links:
- www.flickr.com, e.g. live.staticflickr.com/7437/27402357162_7d91b73cd5_z.jpg, documented at help.flickr.com/en_us/get-the-url-of-a-flickr-photo-S1Hnnmjym. Also does automatic image size conversion, but only provides ugly autogenerated URLs.
- Instagram does not support upload from computer? Lol?
For videos, YouTube does not allow downloads, even of Creative Commons videos, so uploading only there is not acceptable, as it prevents reuse.