Help! We’ve run into a DockerHub rate limit!
About
Yes, it is still happening. In 2025! Here you will find:
- Podman Dockerhub Mirror Configuration
- K8s Quickfix: Rewriting Existing K8s Resources
- Permanent Mirror Configuration for containerd
- K8s Admission Webhook to do the same
Podman Dockerhub Mirror Configuration
~/.config/containers/registries.conf.d/dockerhub-mirror.conf:
[[registry]]
prefix = "docker.io"
insecure = false
blocked = false
location = "public.ecr.aws/docker"

[[registry.mirror]]
location = "mirror.gcr.io"

[[registry.mirror]]
location = "gitlab.com/acme-org/dependency_proxy/containers"

[[registry.mirror]]
location = "registry-1.docker.io"

[[registry.mirror]]
location = "123456789012.dkr.ecr.us-east-1.amazonaws.com/docker-io"
I hope you are using ecr-login for your ECR registries ;)
export REGISTRY_AUTH_FILE=$HOME/.config/containers/auth.json
{
  "auths": {
    "docker.io": {
      "auth": "eGw4ZGVwXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXem40VQ=="
    },
    "gitlab.com": {
      "auth": "cmVXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXSYQ=="
    },
    "registry.gitlab.com": {
      "auth": "cmVXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXSYQ=="
    }
  },
  "credHelpers": {
    "*": "",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com": "ecr-login",
    "345678901234.dkr.ecr.us-east-1.amazonaws.com": "ecr-login"
  }
}
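To check that the mirror chain actually kicks in, a quick smoke test is to pull with debug logging and watch which registry ends up serving the image (the image name is arbitrary, and the exact log lines vary by podman version):

$ podman pull --log-level=debug docker.io/library/alpine:latest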
K8s Quickfix: Rewriting Existing K8s Resources
$ cd $(mktemp -d)
$ (
kubectl get pods --field-selector=status.phase=Pending -A -ojson | jq -c '.items[]';
kubectl get deployments -ojson -A | jq -c '.items[]';
kubectl get replicasets -ojson -A | jq -c '.items[]';
kubectl get daemonsets -ojson -A | jq -c '.items[]';
) > /tmp/cluster.jsonl
$ cat /tmp/cluster.jsonl \
| jq -r '
  def parse_into_parts:
    . as $i
    |capture(
      "^((?<host>[a-zA-Z0-9-]+\\.[a-zA-Z0-9.-]+)(:(?<port>[0-9]+))?/)?"
      + "((?<path>[a-zA-Z0-9-._/]+)/)?"
      + "(?<image>[a-zA-Z0-9-._]+)"
      + "((:(?<tag>[a-zA-Z0-9_.-]+))|(@(?<digest>sha256:[a-z0-9]+)))?$"
    ) // error("could not parse \($i)");
  def qualify_oci_image:
    if (.host==null) then .host="docker.io" else . end
    |if (.path==null and .host=="docker.io") then .path="library" else . end
    # |if (.tag==null and .digest==null) then .tag="latest" else . end
  ;
  def glue_parts:
    [
      if (.host) then .host else "" end,
      if (.port) then ":\(.port)" else "" end,
      if (.host) then "/" else "" end,
      if (.path) then "\(.path)/" else "" end,
      .image,
      if (.digest) then "@\(.digest)" elif (.tag) then ":\(.tag)" else "" end
    ]|join("")
  ;
  def fix_oci_image:
    . as $i
    |parse_into_parts
    |qualify_oci_image
    |if (.path=="bitnami") then .path="bitnamilegacy" else . end
    |if (.host=="docker.io") then (.host="123456789012.dkr.ecr.us-east-1.amazonaws.com"|.path="docker-io/\(.path)") else . end
    |glue_parts;
  [
    ..|objects|(.initContainers[]?,.containers[]?)
    |(.image|fix_oci_image) as $newImage
    |select(.image!=$newImage)
    |"\(.name)=\($newImage)"
  ] as $p
  |select($p|length > 0)
  |"kubectl set image \(.kind) -n \(.metadata.namespace) \(.metadata.name) \($p|join(" "))"
'
Permanent Mirror Configuration for containerd
(
  # Patch containerd config for automatically picking a Dockerhub mirror.
  containerd_config_version="$(grep -oP '^\s*version\s*=\s*\K\d+' /etc/containerd/config.toml)"
  p=""
  case "$containerd_config_version" in
    2) p="io.containerd.grpc.v1.cri";;
    3) p="io.containerd.cri.v1.images";;
    *) echo "unsupported containerd config version" >&2; exit 1;;
  esac
  # NOTE: /etc/containerd/config.toml must import this drop-in,
  # e.g. imports = ["/etc/containerd/config.d/*.toml"]
  mkdir -p /etc/containerd/config.d
  cat <<-EOM >> /etc/containerd/config.d/dockerhub-mirrors.toml
  [plugins]
    [plugins."$p".registry]
      [plugins."$p".registry.mirrors]
        [plugins."$p".registry.mirrors."docker.io"]
          endpoint = [
            "https://public.ecr.aws/docker",
            "https://mirror.gcr.io",
            "https://gitlab.com/acme-org/dependency_proxy/containers",
            "https://123456789012.dkr.ecr.us-east-1.amazonaws.com/docker-io",
            "https://registry-1.docker.io",
          ]
      [plugins."$p".registry.configs]
        [plugins."$p".registry.configs."gitlab.com".auth]
          # https://gitlab.com/groups/acme-org/-/settings/access_tokens?page=1
          username = "dependency-proxy"
          password = "glpat-XXXXXXXXXXXXXXXXXXXX"
        [plugins."$p".registry.configs."docker.io".auth]
          username = "acme-org"
          password = "dckr_pat_3Xi_XXXXXXXXXXXXXXXXXXXXXXX"
EOM
)
if ! containerd config dump 1>/dev/null; then
echo "exiting since containerd config is bad" >&2
exit 1
fi
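Once the dump succeeds, containerd still needs a restart to pick up the drop-in; a CRI-side pull (assuming crictl is installed and pointed at containerd) makes a quick smoke test:

$ systemctl restart containerd
$ crictl pull docker.io/library/alpine:latest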
Craftsmanship And The Right Tools for Your Job

I wonder why my hardware was always superior to the one my companies provided me with. Shouldn’t they be interested in getting the best quality? Would I hire an electrician, then forbid him to use his own tools and hand him my IKEA toolbox? – I wouldn’t!
A Software Craftsman’s tools are not mere instruments; they are his accumulated skill, capital, and tradition made tangible. I believe you have to care for your tools. Also, *nix craftsmen tend to solve problems with the capabilities of their systems.
Techstack n - 1 is dead!
TL;DR TechStack n-1 is dead. It ended with the rise of the cloud and with software release cycles shrinking to weeks thanks to containerized CI.
Against ‘it’s stable and mature so let it run’
Being open-source-based, Ubuntu already had the concept of point releases every six months when Docker and K8s hit the world and gave automated CIs a big boost in building system containers. Some years later, Docker itself switched to a three-month release cycle; so did the Linux kernel with 2-3 months, and Firefox with four weeks.
PulseAudio: Mono-Sink Audio
Just in case your 10,000+ employee corporation doesn’t plug in the microphone jack correctly and no one is allowed to ask questions (presentation-only).
Creating a Mono Audio Sink with PulseAudio
To force stereo audio output into a single mono channel, you can use the PulseAudio module module-remap-sink. This is often useful for presentations or when hardware is misconfigured (e.g., a microphone is plugged into an unbalanced stereo input, but only one channel is picked up).
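A minimal sketch, assuming the stereo sink is called alsa_output.pci-0000_00_1f.3.analog-stereo (look up your actual sink with pactl list short sinks; sink_name=mono is an arbitrary name):

$ pactl list short sinks
$ pactl load-module module-remap-sink sink_name=mono master=alsa_output.pci-0000_00_1f.3.analog-stereo channels=2 channel_map=mono,mono
$ pactl set-default-sink mono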
AWS sync is not reliable!
While migrating from s3cmd to the AWS S3 CLI, I noticed that files did not reliably sync when using the AWS CLI.
I tested this behavior with different versions, and they all exhibited the same issue:
- python2.7-awscli1.9.7
- python2.7-awscli1.15.47
- python3.6-awscli1.15.47
Test Setup
- Set up the AWS CLI utility and configure your credentials.
- Create a testing S3 bucket.
- Set up some random files:
# Create 10 random files of 10MB each
mkdir multi
for i in {1..10}; do dd if=/dev/urandom of=multi/part-$i.out bs=1MB count=10; done
# Then copy the first 5 files over
mkdir multi-changed
cp -r multi/part-{1,2,3,4,5}.out multi-changed
# And replace the content in the remaining 5 files (6-10)
for i in {6..10}; do dd if=/dev/urandom of=multi-changed/part-$i.out bs=1MB count=10; done
Testing S3 sync with AWS CLI
Cleanup
$ aws s3 rm s3://l3testing/multi --recursive
Initial sync
$ aws s3 sync multi s3://l3testing/multi
upload: multi/part-1.out to s3://l3testing/multi/part-1.out
upload: multi/part-3.out to s3://l3testing/multi/part-3.out
upload: multi/part-2.out to s3://l3testing/multi/part-2.out
upload: multi/part-4.out to s3://l3testing/multi/part-4.out
upload: multi/part-10.out to s3://l3testing/multi/part-10.out
upload: multi/part-5.out to s3://l3testing/multi/part-5.out
upload: multi/part-6.out to s3://l3testing/multi/part-6.out
upload: multi/part-8.out to s3://l3testing/multi/part-8.out
upload: multi/part-7.out to s3://l3testing/multi/part-7.out
upload: multi/part-9.out to s3://l3testing/multi/part-9.out
Update files
Only 5 files should now be uploaded. Timestamps for all 10 files should be changed.
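The update step then presumably looks like this; if sync worked as advertised, only part-6.out through part-10.out would show up as uploads, which is exactly the behavior this post calls into question:

$ aws s3 sync multi-changed s3://l3testing/multi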
Infojunk September 2018
This is a collection of interesting links and resources I came across in September 2018, covering topics like Kotlin, Python, Markdown, note-taking apps, web development, Linux, and more.
Kotlin
- Kotlin vs. Scala vs. Java
- What should I choose instead of Java?
- Super Kotlin Tutorial
- SQL with Kotlin Exposed
- Kotlin Playground
Python
Markdown Notetaking
Some note-taking apps you should give a try. Notion in particular is very promising (though you have to pay).
Update Confluence Page by API
You can create your own API token here: https://id.atlassian.com/manage/api-tokens and live-update any information you want. The script basically creates an HTML file, pumps it through jq into a JSON file, and uploads it.
#!/bin/bash
# Update Confluence page by API
# Strict mode
set -euo pipefail
# Some information
PAGEID=602767382
SPACE="EL3"
AUTH="user@example.com:GETYOUROWNTOKENORNEVERKNOW"
API_URL="https://mycompany.atlassian.net/wiki/rest/api"
# Create temp dir
TMP=$( mktemp -d )
# Shutdown handler
shutdown() {
# Cleanup temp directory
if [ -e "$TMP" ]; then
rm -fr "$TMP"
fi
}
trap shutdown TERM EXIT
# We first need current page version for update with next-page version
curl --silent --user ${AUTH} ${API_URL}/content/${PAGEID} > ${TMP}/current.json
VERSION=$( cat ${TMP}/current.json | jq '.version.number' )
NEXTVERSION=$( expr 1 + ${VERSION} )
echo Got Version: ${VERSION}
# Get information (placeholder: replace with whatever you want on the page)
uname -a > ${TMP}/page.txt
# Create HTML file
echo "
Date of creation: $( date --utc )
<pre>$( cat ${TMP}/page.txt | sed 's/$/<br\>/g' | tr -d '\n' )</br\></pre>
" > ${TMP}/page.html
# Prepare upload JSON with JQ
cat ${TMP}/page.html | jq -sR "@text | {\"id\":\"$PAGEID\",\"type\":\"page\",\"title\":\"Information Gathering\",\"space\":{\"key\":\"${SPACE}\"},\"body\":{\"storage\":{\"value\": . ,\"representation\":\"storage\"}},\"version\":{\"number\":${NEXTVERSION}}}" > ${TMP}/upload.json
# Upload
curl \
--silent \
--user ${AUTH} \
-X PUT -H 'Content-Type: application/json' \
-T ${TMP}/upload.json \
${API_URL}/content/${PAGEID} \
1>/dev/null
echo Updated Version: ${NEXTVERSION}
IP in VPN vs. LAN: Alias IP Address by iptables
Scenario: Using a Consistent IP Address
When you’re at work, you are on the LAN and use an IP address like 192.168.x.x. When you work from home, you connect via VPN to the same database (DB), and your IP address changes to 10.x.x.x. You want to avoid changing configuration files for your application every time you switch environments.
This problem can be easily worked around using iptables to create an IP address alias.
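A minimal sketch with made-up addresses (192.168.1.20 as the LAN address your configs keep pointing at, 10.8.0.20 as the DB’s address over the VPN):

# while on VPN: transparently rewrite traffic for the LAN address to the VPN address
$ sudo iptables -t nat -A OUTPUT -d 192.168.1.20 -j DNAT --to-destination 10.8.0.20
# back in the office: remove the rule again
$ sudo iptables -t nat -D OUTPUT -d 192.168.1.20 -j DNAT --to-destination 10.8.0.20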
Laptop Performance: irqbalance vs. intel_pstate
Today I uninstalled irqbalance and noticed a performance gain on my GNOME desktop.
The CPUfreq control panel showed me IRQBALANCE DETECTED, and its authors state the following:
Why I should not use a single core for power saving
- Modern OS/kernels work better on multi-core architectures.
- You need at least 1 core for a foreground application and 1 for background system services.
- Linux Kernel switches between CPU cores to avoid overheating, CPU thermal throttling, and to balance system load.
- Many CPUs have Hyper-Threading (HT) technology enabled by default, so there is no point in restricting yourself to what is only half of a physical CPU core.
These points are stated very simply. I feel there are some contradictions here.
Better than VNC and TeamViewer - NoMachine and the NX protocol
NoMachine and the NX Protocol for Remote Desktop
The NX protocol is essentially a successor to the X protocol and is excellent for remote display and streaming. NoMachine implements this technology, offering a robust, cross-platform alternative to VNC with superior performance and video streaming capabilities. It is also a good alternative to TeamViewer.
A key advantage of NoMachine is its seamless cross-platform support for keyboard and mouse, which is often an issue with many other alternatives to TeamViewer. Despite this, TeamViewer remains my preferred choice for now.
Anti-Patterns and Mental Downers
Cognitive Biases and Communication Gaps
Most people are familiar with anti-patterns, but this discussion focuses more on the psychological side of working in complex environments.
I’ve personally experienced how poor communication can lead to issues. For example, I once believed there was a mandatory release chat and tried to gather information, only for my questions in the company chat to go unanswered; I was eventually added, but two months late. Similarly, I was forgotten when access to a Shared Google Drive folder was granted, yet everyone assumed I had the information because its distribution was taken as obvious.
Better than VNC and TeamViewer - NoMachine and the NX protocol
About
The NX protocol is basically a successor to the X protocol and very nice for streaming. NoMachine implements it: a better VNC with video streaming and a nice alternative to TeamViewer with good cross-platform capabilities.
In-Depth
In 2001, the compression and transport protocol NX was created to improve on the performance of the native X display protocol to the point that it could be usable over a slow link such as a dial-up modem. It wrapped remote connections in Secure Shell (SSH) for encryption.