That’s interesting. Yesterday I read a blog post where the BlurHash library was mentioned: https://blurha.sh/
It would give us the possibility to create initial thumbnail tiddlers that contain the extracted EXIF data in fields, plus a _canonical_uri pointing to the original, to keep the TW small.
When the image tiddler is shown, the BlurHash can be rendered as a placeholder and the full image can be lazy-loaded.
So the wiki itself would be searchable but small.
That would be a use case that I personally prefer.
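To make the idea concrete, here is a rough sketch of what such a thumbnail tiddler could look like as JSON. Note that `title`, `type` and `_canonical_uri` are standard TiddlyWiki fields, but the `blurhash` and `exif-*` field names (and all the values) are just my own suggestions for illustration, not an existing plugin’s schema:

```python
import json

# Hypothetical shape of one "thumbnail tiddler": the blurhash and exif-*
# field names and all values are made up for illustration.
tiddler = {
    "title": "IMG_2041.jpg",
    "type": "image/jpeg",
    "_canonical_uri": "file:///photos/2023/holiday/IMG_2041.jpg",
    "blurhash": "LEHV6nWB2yk8pyo0adR*.7kCMdnj",  # short placeholder string
    "exif-datetime": "2023:07:14 18:02:11",      # extracted EXIF fields ...
    "exif-model": "Pixel 7",
}
print(json.dumps(tiddler, indent=2))
```

The wiki would then only store these small tiddlers, so every image stays searchable via its fields while the heavy image data lives outside the TW file.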
BlurHash
In short,
BlurHash takes an image, and gives you a short string (only 20-30 characters!) that represents the placeholder for this image. You do this on the backend of your service, and store the string along with the image. When you send data to your client, you send both the URL to the image, and the BlurHash string.
Your client then takes the string, and decodes it into an image that it shows while the real image is loading over the network. The string is short enough that it comfortably fits into whatever data format you use. For instance, it can easily be added as a field in a JSON object.
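The string really is that compact. As a small sketch of how much is packed into it, the first character of a BlurHash encodes the number of DCT components, per the published spec (the hash below is the example string shown on blurha.sh):

```python
# Base-83 alphabet from the BlurHash spec.
B83 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz#$%*+,-.:;=?@[]^_{|}~"

def decode83(s: str) -> int:
    """Decode a base-83 string into an integer."""
    value = 0
    for c in s:
        value = value * 83 + B83.index(c)
    return value

def components(blurhash: str) -> tuple:
    """Return (x_components, y_components) encoded in the first character."""
    size_flag = decode83(blurhash[0])
    num_y = size_flag // 9 + 1
    num_x = size_flag % 9 + 1
    # Sanity check: a valid hash is 4 + 2 * numX * numY characters long.
    assert len(blurhash) == 4 + 2 * num_x * num_y, "unexpected hash length"
    return num_x, num_y

print(components("LEHV6nWB2yk8pyo0adR*.7kCMdnj"))  # (4, 3) -> a 4x3 grid of DCT components
```

So a ~28-character field in a tiddler carries a full (if very blurry) preview; the client decodes it to pixels while the real image loads.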
Some personal remarks. … I have tens of thousands of images on my HDs and another 1000++ on my phone. Movies not counted.
All of them have EXIF data, but not a single one has additional metadata, because the existing editing tools I know all suck. So I never added any new data there.
Quite the contrary: if I send “polished” images to friends, I remove the EXIF info for privacy reasons.
All the meta info is part of the directory structure. … That’s “just good enough” for me.
Just my thoughts