Swimming
The weather on the weekend was really nice and warm: almost 20 °C, and even at night it stayed above 5 °C. Swimming on Sunday was really refreshing, and I could easily have swum across the lake. But we decided to just swim to the new buoy in the middle. Apparently a research institute takes measurements in the lake, likely to monitor the water quality and overall health of the lake.
Dance Camp
Kids are on vacation this week. Our girl is attending a dance camp in the countryside. It looks like fun: a few days off-site with the whole team, combining training and recreation.
Kiddo came back happy but also pretty exhausted. Apparently it was intense. But she liked it.
Still fighting macOS Updates
Just after I finally managed to lift my external SSD to 15.4, Apple released an emergency patch, 15.4.1.
Of course it has the same issue: it can't easily be applied to my external SSD. All tricks have failed so far to get this updated. The thread on the Apple Support forum keeps growing with people hitting the same problem, but there are also some who succeeded.
Photo hoarding
A few weeks ago I downloaded my Google Takeout: about 2.1 TB of images and videos. Now I finally got around to unpacking and sorting the data. The images keep their original names and are organized into folders, one per Google Photos album. Each file comes with a supplemental JSON file carrying its metadata.
I want to organize my images in a Year/Month/Day folder structure and name each file after its creation timestamp, YYYY-MM-DD_HH:MM:SS.
The open-source tool exiftool can do this kind of renaming and moving of files based on the metadata embedded in the images.
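Something like the sketch below should do it. The takeout/ path is made up, and dots stand in for the colons in HH:MM:SS, since colons aren't safe in file names on most filesystems:

```python
import subprocess

# Sketch of the sorting pass; "takeout/" is a placeholder path.
# "-FileName<DateTimeOriginal" makes exiftool rename each file from its
# EXIF date, and because the date format contains slashes, the files are
# moved into Year/Month/Day folders at the same time. %%-c appends a
# copy number on name collisions, %%e keeps the original extension.
subprocess.run([
    "exiftool", "-r",
    "-FileName<DateTimeOriginal",
    "-d", "%Y/%m/%d/%Y-%m-%d_%H.%M.%S%%-c.%%e",
    "takeout/",
], check=True)
```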
That first pass sorted thousands of images. Still, a few thousand were left from which exiftool couldn't extract a creation date.
So I thought I'd read the respective data from the supplemental JSON files instead. I tried for about a day to write a shell script that finds the images and their corresponding JSON files and extracts the timestamp, but failed utterly because of fancy file and path names: passing them between functions and commands in a shell script apparently isn't trivial.
I gave up on that approach and instead wrote myself a Python script to do the job, enriching the leftover images with the metadata so exiftool could sort them properly.
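The script boiled down to something like the sketch below. The photoTakenTime.timestamp field is what the Takeout sidecars carry; the folder path is a placeholder, and the sidecar naming is an assumption, since Takeout isn't fully consistent about it (names are sometimes truncated or suffixed differently):

```python
#!/usr/bin/env python3
"""Sketch: copy the creation timestamp from a Google Takeout JSON
sidecar into the image's EXIF, so a second exiftool pass can sort it."""
import json
import subprocess
from datetime import datetime, timezone
from pathlib import Path

ROOT = Path("takeout/leftovers")  # placeholder folder of unsorted images

for image in ROOT.rglob("*"):
    if not image.is_file() or image.suffix.lower() == ".json":
        continue
    # Takeout usually stores the sidecar next to the image,
    # e.g. IMG_0001.jpg.json (naming varies between exports).
    sidecar = image.with_name(image.name + ".json")
    if not sidecar.exists():
        continue
    meta = json.loads(sidecar.read_text())
    epoch = int(meta["photoTakenTime"]["timestamp"])  # Unix epoch, as a string
    stamp = datetime.fromtimestamp(epoch, tz=timezone.utc)
    # Write the timestamp into the EXIF data. pathlib hands the path
    # over as a single argument, so fancy file names need no shell quoting.
    subprocess.run(
        ["exiftool", "-overwrite_original",
         f"-DateTimeOriginal={stamp:%Y:%m:%d %H:%M:%S}",
         str(image)],
        check=True,
    )
```

After that, rerunning the exiftool pass from above picks up the previously undated files.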
Now that I had the files in order and ready for my archive, I at least wanted to remove duplicates. So I loaded the whole 1.1 TB folder (I omitted the videos from the Takeout) into digiKam. It took almost 6 hours to generate the needed metadata. Fortunately digiKam uses all 12 CPU cores, and the data resides on an external NVMe SSD with a 10 Gbps connection.
digiKam doesn't just compare file names and sizes to find duplicates; it looks at the actual image content using wavelet-based fingerprints. Of course it found thousands in my stash, and I was able to remove them.
The next step will be syncing the archive to my home NAS over the network. That's going to take a while as well …
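That will most likely be an rsync job; a minimal sketch, with the NAS host and paths made up:

```python
import subprocess

# Hypothetical sync; host and paths are placeholders. -a preserves
# timestamps and permissions, --partial lets interrupted transfers of
# large files resume instead of restarting from zero.
subprocess.run([
    "rsync", "-a", "--partial", "--progress",
    "archive/", "nas:/volume1/photos/archive/",
], check=True)
```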