Your employer is not your friend

Yesterday I was talking with a young colleague from India. She felt that her work was not being seen and that her role was undervalued in the organisation. She puts a lot of effort into her role and tries to deliver on every task thrown at her. But she has started to question what the benefit is for her and her career.

I shared my personal experience and perspective with her. Mainly that one should see the employee/employer relationship for what it is: a commercial contract.

Your employer is not your friend. They will at best fulfil your contract, but not go beyond that. So everything you put in beyond what’s in the contract is a free gift to the company.

That might sound harsh, but that’s the reality in my experience. All the hard work and overtime you might put in are not guaranteed to pay off. Whether they are recognised and respected depends on individuals like your direct line manager. That person, however, can change at any time at the will of the company, and the next line manager will have no knowledge of your efforts.

Think of your employment contract like any other commercial contract you enter. When you buy a TV set, the seller will usually not throw in something extra for free. And you would not consider paying 10 or 20% more just because you enjoy the deal. So why would you give free labor to your employer? In the hope of a promotion?

That might be a reason. But ask yourself what the risk is here. Does your employer have a documented career progression framework? Have you talked to your line manager about what you need to do to get promoted? Or are you just hoping that your effort will eventually be recognised with a promotion? That means you’re banking on your line manager’s memory and goodwill to promote you. It’s far from a sure thing.

So whenever you commit effort beyond what’s in your contract, make sure you understand what’s in it for you.

I did a lot of overtime throughout my career. And I did so voluntarily, knowing that oftentimes the company would not compensate me with money or time off in return. But most of the time it was a conscious decision, because I knew I would learn and gain experience. That is what I took away from those extra hours. One might argue whether that was worth it, sure. But I think it was.

The Return of Infrastructure Independence: Breaking Free from US Hyperscalers

In the rapidly evolving landscape of technology, we sometimes find ourselves experiencing a sense of déjà vu. The current state of cloud computing and infrastructure management feels remarkably similar to the late 1990s server market—a time of major technological transition that ultimately rewarded those who maintained traditional expertise.

The Great Windows Server Migration of the Late ’90s

Cast your mind back to the late 1990s. Windows NT was gaining significant traction in the enterprise server space. Microsoft’s marketing machine was in full swing, promoting Windows as the future of server technology. The interface was familiar, the management tools were accessible, and the promise was enticing: simplify your infrastructure and reduce costs.

Many companies bought into this vision. They let go of their Unix administrators—the wizards who understood the deep intricacies of system architecture—and pivoted toward the seemingly more accessible Windows ecosystem. Unix expertise was deemed outdated, a relic of computing’s past.

But then something unexpected happened: Linux emerged as a powerful force. This open-source Unix-like operating system combined the robustness of traditional Unix with modern development approaches. Companies that had maintained their Unix expertise found themselves with a significant competitive advantage, while those who had discarded that knowledge scrambled to adapt.

Today’s Dangerous Dependency on US Hyperscalers

Fast forward to today, and we’re witnessing a similar phenomenon, but with far greater geopolitical implications. The cloud market has become dominated by a handful of US-based hyperscalers: AWS, Azure, and Google Cloud Platform. These giants now control the backbone of global digital infrastructure, creating an unprecedented level of dependency.

Organizations worldwide have entrusted their mission-critical systems, data, and intellectual property to these American corporations. This concentration of digital power in the hands of a few US companies presents significant risks:

  1. Geopolitical Vulnerability: Non-US entities are subject to American data regulations, surveillance capabilities, and political whims
  2. Sovereignty Concerns: Nations and regions have limited control over their own digital infrastructure
  3. Single Points of Failure: Global dependence on a handful of providers creates systemic risks
  4. Compliance Challenges: Navigating complex and sometimes contradictory regulations across jurisdictions

Today’s developers and systems engineers often have limited exposure to building and maintaining independent infrastructure stacks. The knowledge of creating self-sufficient, sovereign digital platforms has been sacrificed at the altar of convenience offered by the hyperscalers.

The Coming Era of Regional Digital Sovereignty

As geopolitical tensions rise and concerns about surveillance escalate, we’re approaching a breaking point that parallels the Linux revolution of the early 2000s. The excessive centralization of cloud infrastructure in the hands of US corporations is becoming increasingly untenable for many regions and organizations around the world.

Europe, in particular, stands at a crossroads. With its strong regulatory framework through GDPR and emphasis on digital sovereignty, the continent has the potential to lead a shift toward regional cloud infrastructure. A “European Cloud” built on open standards and operated independently of US hyperscalers could provide a template for other regions seeking digital autonomy.

This is where those 50+ year-old systems engineers—the ones who understand how to build infrastructure from the ground up—will become invaluable again. Their knowledge of architecting complete technology stacks without reliance on hyperscaler ecosystems will be crucial as organizations and regions work to establish independent digital capabilities.

Building Regional Digital Independence

The path to reducing dependency on US hyperscalers requires:

  1. Regional Infrastructure Initiatives: Government-backed programs to develop sovereign cloud capabilities within specific geographic or political boundaries
  2. Open Source Foundations: Building on open source technologies to avoid vendor lock-in and enable collaboration
  3. Knowledge Preservation: Actively maintaining expertise in full-stack infrastructure management
  4. Hybrid Approaches: Developing gradual migration paths that balance hyperscaler advantages with sovereignty requirements
  5. International Cooperation: Creating alliances between nations with shared interests in digital sovereignty

The Role of Experienced Infrastructure Engineers

The systems engineers who remember a world before AWS, Azure, and Google Cloud will play a pivotal role in this transition. Their experience building and managing independent data centers, designing network architectures without reliance on hyperscaler services, and understanding the full technology stack from hardware to application will be essential.

These veterans know what it takes to build robust, independent infrastructure. They understand the pitfalls, requirements, and strategic considerations that younger engineers, raised entirely in the hyperscaler era, may overlook.

Conclusion

The technology industry has always moved in cycles. What seems obsolete today may become critical tomorrow. Just as Linux vindicated those Unix administrators who maintained their expertise through the Windows NT revolution, the growing movement toward digital sovereignty could similarly elevate those who’ve preserved their knowledge of building independent infrastructure.

As regions like Europe work to establish their own cloud ecosystems and reduce dependency on US hyperscalers, the experienced systems engineers who understand how to build truly independent technology stacks will become not just relevant, but essential to our digital future.

The coming years may well see a renaissance of regional infrastructure expertise, as organizations and nations alike recognize that true digital resilience requires breaking free from excessive dependency on the American tech giants that currently dominate our global digital landscape.

See also: https://berthub.eu/articles/posts/you-can-no-longer-base-your-government-and-society-on-us-clouds/

Fixing PaperlessNGX Email Processing Issues After Restart

When running PaperlessNGX in Docker, I encountered an issue where certain emails were not processed after restarting the Paperless container in the middle of a batch processing operation. Paperless saw the emails in the inbox but incorrectly marked them as already processed.

Identifying the Issue

The first step in diagnosing the issue was to check the mail.log file within Paperless. The log showed which emails were skipped during processing, including their unique IDs (UIDs). For example:

[2025-02-17 09:50:03,084] [DEBUG] [paperless_mail] Skipping mail '321' subject 'Email from Epson WF-4830 Series' from 'scanner@example.com', already processed.
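To collect all affected UIDs in one go rather than reading the log line by line, something like the following can help. This is only a sketch: it assumes the Paperless web container is named webserver and that mail.log lives in the default data/log directory inside the container, so adjust both to your deployment:

docker compose exec webserver grep "already processed" /usr/src/paperless/data/log/mail.log | grep -oE "mail '[0-9]+'"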

Logging into the Database

To access the Paperless database running inside a Docker container, I used the following command:

docker compose exec db /bin/bash

This command opens a bash shell inside the db service container, allowing further interaction with PostgreSQL.

Resolving the Issue

To resolve the issue, I connected to the Paperless database, which was running on PostgreSQL. Using the provided email UID from the mail.log, I deleted the corresponding entries from the paperless_mail_processedmail table to allow Paperless to process the email again.

psql -U paperless_db_user

Here’s the SQL command I used:

DELETE FROM paperless_mail_processedmail WHERE uid = '322';

After running this command for each of the reported skipped mails, Paperless successfully reprocessed the emails during the next processing cycle.
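If several UIDs were reported as skipped, they can also be removed in a single statement without opening an interactive shell first. A sketch, assuming the same database user as above, a database named paperless, and the example UIDs from the log (adjust all of these to your setup):

docker compose exec db psql -U paperless_db_user -d paperless -c "DELETE FROM paperless_mail_processedmail WHERE uid IN ('321', '322');"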

Conclusion

If you encounter similar issues with PaperlessNGX not processing certain emails after a restart, checking the mail.log and manually deleting the processed mail entries from the database can be an effective solution.

SSH with FIDO2 keys on hardware tokens

I recently bought a pair of Token2 FIDO2 hardware security keys. These are USB/NFC devices that store cryptographic keys and use them for authentication with various services.

Besides their main purpose of serving as my Passkeys, I’ve set them up for SSH authentication as well.

This has become straightforward, provided you meet the prerequisite of a recent OpenSSH version (>= 8.3).

Technicalities

SSH authentication by means of cryptographic keys usually works with an asymmetric key pair, as you might know from tools like PGP. You put your public key on the server you want to log in to. When opening an SSH session, you use your private key to sign the authentication challenge issued by the server. The server verifies it’s really you by checking the signature of the challenge against the public key you placed there earlier.

For FIDO2 keys, this works slightly differently. The private key is not actually stored on your machine. Instead, when you create an SSH key to be used with the FIDO2 token, a reference (key handle) to the key on the hardware token is generated, and that handle acts as your private key part.

Generating the SSH Keypair

To make use of your FIDO2 key for SSH, you have to generate a new SSH key pair that is associated with your FIDO2 hardware key.

ssh-keygen -t ed25519-sk -O resident -O verify-required -C "Comment"

The option -t ed25519-sk tells ssh-keygen to generate a key using elliptic curve cryptography, more specifically the Ed25519 curve. The suffix “-sk” indicates that this will be a key handle associated with a FIDO authenticator.

The option -O resident tells ssh-keygen to store the key handle on the FIDO key itself, and -O verify-required requires user verification (typically the token’s PIN) whenever the key is used. Finally, -C "Comment" simply sets the key pair’s comment.

Putting your new SSH public key on the destination server

As with normal SSH key pairs, you just add the contents of your public key to the ~/.ssh/authorized_keys file on the destination server. You can use the ssh-copy-id command for this:
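For example, a minimal sketch assuming the key pair from above was saved under the default name ~/.ssh/id_ed25519_sk, with a hypothetical user and host:

ssh-copy-id -i ~/.ssh/id_ed25519_sk.pub user@remote-host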

Now you should be able to log in to the remote machine using your passkey.

Plug in your FIDO2 token and start the SSH connection. You’ll be asked for the PIN of the hardware token to unlock it before the key can be used. If your PIN is correct, the token will start blinking, prompting you to touch it to prove your physical presence.
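For example (hypothetical user and host; the -i flag is only needed if the key handle file is not in one of OpenSSH’s default locations):

ssh -i ~/.ssh/id_ed25519_sk user@remote-host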

Using your key on a new machine

Now that you have set up your machine to make use of the FIDO2 key, you might want to use it on another computer. Since the key is stored on your hardware token, you can use it from any machine without copying your private key around.

All you need to do is recreate the key handle file and the corresponding public key from the hardware token. This can be done with the ssh-keygen -K command.
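A short sketch of what that looks like on the new machine, assuming you want the key handle file to end up in ~/.ssh:

cd ~/.ssh
# ssh-keygen will ask for the token's PIN before downloading the resident key
ssh-keygen -K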

This will put two files in the local directory. The file id_ed25519_sk_rk is the passphrase-protected key handle file referencing your private key on the FIDO hardware token. The file id_ed25519_sk_rk.pub is the corresponding SSH public key, which you can share with your remote machines.

Your private key is still safely located on the hardware token. The key handle file alone can’t be used to establish an SSH connection to remote machines; the hardware token is required as well.

Manage your Token2 PIN

To manage the PIN of your Token2 keys you can either use a Chrome-based browser or use the fido2-manage tool provided by Token2.

Update 20.03.2025 – Apple SSH is broken

I’ve just set up a new Mac with macOS Sonoma. Turns out the Apple-provided ssh is broken: they’ve disabled security key support.

Homebrew to the rescue. First install OpenSSH from Homebrew, and then ssh-keygen -K will work.
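A sketch, assuming Homebrew is already installed (on Apple Silicon its binaries land in /opt/homebrew/bin, on Intel Macs in /usr/local/bin):

brew install openssh
# call the Homebrew binary explicitly, or make sure it comes before /usr/bin in your PATH
/opt/homebrew/bin/ssh-keygen -K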

With the Apple-provided ssh I got the error message “Cannot download keys without provider”.

Solution found here

Upscale Videos using open source AI tools

I recently got the question from my uncle whether I could upscale one of his really old videos. The source was a short 10-second video with some low-quality audio at 320×240 pixel resolution, likely taken with one of the first video-capable digital cameras or phones many years ago.

I accepted the challenge as I had seen some AI tools like DiffusionBee being able to upscale images with decent quality.

I haven’t found a good free tool to upscale a video directly yet. There are some shady free tools out there, but I don’t trust them.

What I ended up doing was exporting each frame of the original video to an image, upscaling the images with an open-source AI model, and then stitching them back together into a video.

1.) Export each frame of the video to a JPEG file and export the audio into a single file

mkdir -p LOW
ffmpeg -i input.mp4 ./LOW/frame_%04d.jpg
ffmpeg -i input.mp4 -vn ./output_audio.mp3
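Since step 3 below reassembles the frames at a fixed frame rate (-framerate 15), it’s worth checking what the source actually uses and adjusting that value if needed. A quick sketch using ffprobe, which ships with ffmpeg (input.mp4 being the same source file as above):

ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate -of default=noprint_wrappers=1:nokey=1 input.mp4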

Directory Structure:

.
├── HIGH
│   ├── upscayl_jpg_realesrgan-x4plus_4x
│   │   ├── frame_0001.jpg
│   │   ├── frame_0002.jpg
│   │   ├── frame_0003.jpg
...
│   │   ├── frame_0254.jpg
│   │   └── frame_0255.jpg
│   ├── upscayl_jpg_remacri_3x
│   ├── upscayl_jpg_ultramix_balanced_3x
│   └── upscayl_jpg_ultrasharp_2x
└── LOW
    ├── frame_0001.jpg
    ├── frame_0002.jpg
...
    ├── frame_0254.jpg
    └── frame_0255.jpg

2.) Upscale the images using the AI tool Upscayl

https://upscayl.org/

brew install --cask upscayl

3.) Combine the upscaled images into a movie

cd ./HIGH/upscayl_jpg_realesrgan-x4plus_4x

ffmpeg -framerate 15 -f image2 -pattern_type glob -i "frame_*.jpg" -i ../../output_audio.mp3 -c:v libx264 -crf 1 -vf scale=2048:2048 -pix_fmt yuv420p -vb 100M ../output_${PWD##*/}.mp4