Configuration

Help! We’ve run into a DockerHub rate limit!

About

Yes, it is still happening. In 2025! Here you will find:

Podman Dockerhub Mirror Configuration

~/.config/containers/registries.conf.d/dockerhub-mirror.conf:

[[registry]]
prefix = "docker.io"
insecure = false
blocked = false
location = "public.ecr.aws/docker"

[[registry.mirror]]
location = "mirror.gcr.io"

[[registry.mirror]]
location = "gitlab.com/acme-org/dependency_proxy/containers"

[[registry.mirror]]
location = "registry-1.docker.io"                                                              

[[registry.mirror]]
location = "123456789012.dkr.ecr.us-east-1.amazonaws.com/docker-io"

I hope you are using ecr-login for your ECR registries ;)
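
If not: ecr-login is the docker-credential-ecr-login binary from the amazon-ecr-credential-helper project. Most distros package it under that name, or you can build it yourself (assuming a Go toolchain):

$ go install github.com/awslabs/amazon-ecr-credential-helper/ecr-login/cli/docker-credential-ecr-login@latest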

export REGISTRY_AUTH_FILE=$HOME/.config/containers/auth.json

~/.config/containers/auth.json:
{
  "auths": {
    "docker.io": {
      "auth": "eGw4ZGVwXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXem40VQ=="
    },
    "gitlab.com": {
      "auth": "cmVXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXSYQ=="
    },
    "registry.gitlab.com": {
      "auth": "cmVXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXSYQ=="
    }
  },
  "credHelpers": {
    "*": "",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com": "ecr-login",
    "345678901234.dkr.ecr.us-east-1.amazonaws.com": "ecr-login"
  }
}
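
To check which mirror actually serves a pull, run podman with debug logging and watch the candidate registries being tried in order:

$ podman --log-level=debug pull docker.io/library/alpine:latest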

K8s Quickfix: Rewriting Existing K8s Resources


$ cd $(mktemp -d)

$ (
  kubectl get pods --field-selector=status.phase=Pending -A -ojson | jq -c '.items[]';
  kubectl get deployments -ojson -A | jq -c '.items[]';
  kubectl get replicasets -ojson -A | jq -c '.items[]';
  kubectl get daemonsets -ojson -A | jq -c '.items[]';
) > cluster.jsonl

$ cat cluster.jsonl \
  | jq -r '
    def parse_into_parts:
      . as $i
      |capture(
        "^((?<host>[a-zA-Z0-9-]+\\.[a-zA-Z0-9.-]+)"
        + "(:(?<port>[0-9]+))?/)?"
        + "((?<path>[a-zA-Z0-9-._/]+)/)?"
        + "(?<image>[a-zA-Z0-9-._]+)"
        + "((:(?<tag>[a-z0-9_.-]+))|(@(?<digest>sha256:[a-z0-9]+)))?$"
      ) // error("could not parse \($i)");

    def qualify_oci_image:
      if (.host==null) then .host="docker.io" end
      |if (.path==null and .host=="docker.io") then .path="library" end
      # |if (.tag==null and .digest==null) then .tag="latest" end
      ;

    def glue_parts:
      [
        if (.host) then .host else "" end,
        if (.port) then ":\(.port)" else "" end,
        if (.host) then "/" else "" end,
        if (.path) then "\(.path)/" else "" end,
        .image,
        if (.digest) then "@\(.digest)" elif (.tag) then ":\(.tag)" else "" end
      ]|join("")
      ;

    def fix_oci_image:
      . as $i
      |parse_into_parts
      |qualify_oci_image
      |if (.path=="bitnami") then .path="bitnamilegacy" else . end
      |if (.host=="docker.io") then (.host="123456780123.dkr.ecr.us-east-1.amazonaws.com"|.path="docker-io/\(.path)") else . end
      |glue_parts;
    
    [
      ..|objects|(.initContainers[]?,.containers[]?)
      |(.image|fix_oci_image) as $newImage
      |select(.image!=$newImage)
      |"\(.name)=\($newImage)"
    ] as $p
    |select($p|length > 0)
    |"kubectl set image \(.kind) -n \(.metadata.namespace) \(.metadata.name) \($p|join(" "))"
    
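The jq filter above only prints the kubectl set image commands; nothing is applied yet. Review the output, and once it looks right, append | sh -x to the pipeline to execute the commands.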

Permanent Mirror Configuration for containerd

(
	# Patch the containerd config for automatically picking a DockerHub mirror.
	# Note: /etc/containerd/config.toml must import the drop-in directory, e.g.
	#   imports = ["/etc/containerd/config.d/*.toml"]

	containerd_config_version="$(grep -oP '^\s*version\s*=\s*\K\d+' /etc/containerd/config.toml)"
	p=""
	case "$containerd_config_version" in
		2) p="io.containerd.grpc.v1.cri";;
		3) p="io.containerd.cri.v1.images";;
		*) echo "unsupported containerd config version: $containerd_config_version" >&2; exit 1;;
	esac
	mkdir -p /etc/containerd/config.d
	cat <<-EOM >> /etc/containerd/config.d/dockerhub-mirrors.toml
[plugins]

  [plugins."$p".registry]

    [plugins."$p".registry.mirrors]

      [plugins."$p".registry.mirrors."docker.io"]
        endpoint = [
          "https://public.ecr.aws/docker",
          "https://mirror.gcr.io",
          "https://gitlab.com/acme-org/dependency_proxy/containers",
          "https://123456789012.dkr.ecr.us-east-1.amazonaws.com/docker-io",
          "https://registry-1.docker.io",
        ]

    [plugins."$p".registry.configs]

      [plugins."$p".registry.configs."gitlab.com".auth]
        # https://gitlab.com/groups/acme-org/-/settings/access_tokens?page=1
        username = "dependency-proxy"
        password = "glpat-XXXXXXXXXXXXXXXXXXXX"

      [plugins."$p".registry.configs."docker.io".auth]
        username = "acme-org"
        password = "dckr_pat_3Xi_XXXXXXXXXXXXXXXXXXXXXXX"
EOM
)

if ! containerd config dump 1>/dev/null; then
   echo "exiting since containerd config is bad" >&2
   exit 1
fi
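
If the dump succeeds, restart containerd so the new mirror configuration takes effect (assuming a systemd-managed host):

$ systemctl restart containerd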

TechStack n-1 is dead!

TL;DR: TechStack n-1 is dead. It ended with the rise of the cloud and with software release cycles shrinking to weeks thanks to containerized CIs.

Against ‘it’s stable and mature so let it run’

The Death of Sophocles (Creative Commons)

Being open-source-based, Ubuntu already had the concept of regular releases every 6 months when Docker and K8s hit the world and gave automated CIs a big boost in building system containers. Some years later, Docker itself switched to a 3-month release cycle. So did the Linux kernel, with 2-3 months, and Firefox with 4 weeks.

5¢ on YAML in the DevOps world

YAML Fatigue is Real: Are We Forgetting the “Why” Behind Declarative Config?

Lately, it feels like everywhere you look, there’s a YAML wrapper.

Often, a simple envsubst would suffice. I see many apps and frameworks that – let’s be honest – often just feel like a slightly polished YAML layer over a simple API call. And then there are the countless YAML-based Task Runner projects that are trying so hard to be the next-gen CI configuration (taskctl?… 😬).
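
For the simple cases, a sketch of what that could look like instead (deployment.tmpl.yaml and IMAGE_TAG are made-up placeholders):

$ export IMAGE_TAG=v1.2.3
$ envsubst '$IMAGE_TAG' < deployment.tmpl.yaml | kubectl apply -f -

Passing the variable list to envsubst keeps it from clobbering other $-strings in the template.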

Connect to GitLab via SSH

Start an SSH Agent

If you haven’t already done so, add the following command to your shell’s RC file (such as .bashrc or .zshrc) to start the ssh-agent:

$ eval $(ssh-agent)

Add Your Generated Key

Use the ssh-add command to add your private SSH key (assuming it is the default id_rsa file) to the agent:

$ ssh-add ~/.ssh/id_rsa

List Keys

You can list the keys currently loaded by the ssh-agent using the following command:
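
$ ssh-add -l

Finally, test the connection; GitLab greets you by username if everything is wired up:

$ ssh -T git@gitlab.com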

PulseAudio: Mono-Sink Audio

Just in case your 10,000+ employee corporation doesn’t plug in the microphone jack correctly and no one is allowed to ask questions (presentation-only).


Creating a Mono Audio Sink with PulseAudio

To force stereo audio output into a single mono channel, you can use the PulseAudio module module-remap-sink. This is often useful for presentations or when hardware is misconfigured (e.g., a microphone is plugged into an unbalanced stereo input, but only one channel is picked up).
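
A minimal sketch (the master= sink name is an example; take yours from the sink list):

$ pactl list short sinks
$ pactl load-module module-remap-sink sink_name=mono master=alsa_output.pci-0000_00_1b.0.analog-stereo channels=2 channel_map=mono,mono
$ pactl set-default-sink mono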

AWS sync is not reliable!

While migrating from s3cmd to the AWS S3 CLI, I noticed that files did not reliably sync when using the AWS CLI.

I tested this behavior with different AWS CLI versions, and they all exhibited the same issue.


Test Setup

  1. Set up the AWS CLI utility and configure your credentials.

  2. Create a testing S3 bucket.

  3. Set up some random files:

    # Create 10 random files of 10MB each
    mkdir multi
    for i in {1..10}; do dd if=/dev/urandom of=multi/part-$i.out bs=1MB count=10; done
    # Then copy the first 5 files over
    mkdir multi-changed
    cp multi/part-{1,2,3,4,5}.out multi-changed
    # And replace the content of the remaining 5 files (6-10)
    for i in {6..10}; do dd if=/dev/urandom of=multi-changed/part-$i.out bs=1MB count=10; done
    

Testing S3 sync with AWS CLI

Cleanup

$ aws s3 rm s3://l3testing/multi --recursive

Initial sync

$ aws s3 sync multi s3://l3testing/multi
upload: multi/part-1.out to s3://l3testing/multi/part-1.out       
upload: multi/part-3.out to s3://l3testing/multi/part-3.out     
upload: multi/part-2.out to s3://l3testing/multi/part-2.out     
upload: multi/part-4.out to s3://l3testing/multi/part-4.out     
upload: multi/part-10.out to s3://l3testing/multi/part-10.out   
upload: multi/part-5.out to s3://l3testing/multi/part-5.out     
upload: multi/part-6.out to s3://l3testing/multi/part-6.out     
upload: multi/part-8.out to s3://l3testing/multi/part-8.out     
upload: multi/part-7.out to s3://l3testing/multi/part-7.out     
upload: multi/part-9.out to s3://l3testing/multi/part-9.out

Update files

Only the 5 changed files should now be uploaded, even though the timestamps of all 10 files have changed.
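
Re-running the sync, now from the changed directory into the same prefix (continuing the setup above):

$ aws s3 sync multi-changed s3://l3testing/multi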

IP in VPN vs. LAN: Alias IP Address by iptables

Scenario: Using a Consistent IP Address

When you’re at work, you are on the LAN and use an IP address like 192.168.x.x. When you work from home, you connect via VPN to the same database (DB), and your IP address changes to 10.x.x.x. You want to avoid changing configuration files for your application every time you switch environments.

This problem can be easily worked around using iptables to create an IP address alias.
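
A minimal sketch with placeholder addresses (192.168.1.50 for the DB on the LAN, 10.0.0.50 for the same DB over the VPN): keep the LAN address in your app config and let iptables rewrite the destination of locally generated traffic while on VPN:

# at home: transparently redirect the LAN DB address to its VPN counterpart
$ sudo iptables -t nat -A OUTPUT -d 192.168.1.50 -j DNAT --to-destination 10.0.0.50

# back at the office: remove the rule again
$ sudo iptables -t nat -D OUTPUT -d 192.168.1.50 -j DNAT --to-destination 10.0.0.50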

Laptop Performance: irqbalance vs. intel_pstate

Today I uninstalled irqbalance and noticed a performance gain on my GNOME desktop.

The CPUfreq control panel showed me IRQBALANCE DETECTED, and it states the following:

Why I should not use a single core for power saving

These points are stated very simply. I feel there are some contradictions here.