Docker Registry

Dual Docker registry setup running on mifs01.intranet for caching Docker Hub images and storing custom images.

Overview

Two separate registries running on mifs01:

Registry 1: Pull-Through Cache (port 5000)

  • Purpose: Cache Docker Hub images automatically
  • Storage: /srv/docker (BTRFS subvolume)
  • Usage: Transparent - use normal Docker Hub image names
  • Example: docker.io/library/nginx:alpine

Registry 2: Private Images (port 5001)

  • Purpose: Store custom/private images
  • Storage: /srv/docker-private (BTRFS subvolume)
  • Usage: Must specify full registry URL
  • Example: mifs01.intranet:5001/hello-world:latest

Technology: Podman containers running Docker Registry v2
Clients: k3s cluster nodes (pikm01-03, piks01-04)
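
In day-to-day use the split looks like this (a minimal sketch from any client configured as described below; myapp is a placeholder image name):

# Public image: use the normal Docker Hub name; the mirror config routes it through :5000
podman pull docker.io/library/nginx:alpine

# Custom image: tag and push with the full registry URL so it lands on :5001
podman tag myapp:latest mifs01.intranet:5001/myapp:latest
podman push mifs01.intranet:5001/myapp:latest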

Architecture

┌─────────────────────────────────────────────────────────────┐
│ mifs01.intranet (Podman)                                    │
│                                                             │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Registry 1 (Cache) :5000                                │ │
│ │ - Pull-through cache → Docker Hub                       │ │
│ │ - Storage: /srv/docker (BTRFS)                          │ │
│ │ - READ-ONLY proxy mode                                  │ │
│ └─────────────────────────────────────────────────────────┘ │
│                                                             │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Registry 2 (Private) :5001                              │ │
│ │ - Custom/private images                                 │ │
│ │ - Storage: /srv/docker-private (BTRFS)                  │ │
│ │ - READ-WRITE direct storage                             │ │
│ └─────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
                    ▲                   ▲
                    │ :5000             │ :5001
       ┌────────────┼───────────────────┼────────────┐
       │            │                   │            │
  ┌────▼───┐   ┌───▼────┐         ┌───▼────┐   ┌───▼────┐
  │ pikm01 │   │ pikm02 │   ...   │ piks01 │   │ piks04 │
  │  k3s   │   │  k3s   │         │k3s-agent│  │k3s-agent│
  └────────┘   └────────┘         └────────┘   └────────┘
       /etc/rancher/k3s/registries.yaml

  docker.io/* → mifs01:5000 (automatic)
  mifs01.intranet:5001/* → mifs01:5001 (explicit)

Configuration

Registry Server (mifs01)

Cache Registry (port 5000):

  • Service: /etc/systemd/system/docker-registry.service
  • Config: /etc/docker-registry/config.yml (sketch below)
  • Storage: /srv/docker
  • Mode: Pull-through cache (Docker Hub proxy)

Private Registry (port 5001):

  • Service: /etc/systemd/system/docker-registry-private.service
  • Config: /etc/docker-registry/config-private.yml (sketch below)
  • Storage: /srv/docker-private
  • Mode: Direct storage (push/pull custom images)
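
The config files are not reproduced here verbatim, but based on the modes above they look roughly like the following sketch, using the standard Docker Registry v2 options (exact contents on mifs01 may differ; rootdirectory is the in-container path that the Podman units bind-mount from the BTRFS subvolumes, as the garbage-collection commands later in this page show):

# /etc/docker-registry/config.yml - cache registry on :5000 (sketch)
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry      # bind-mounted from /srv/docker
  cache:
    blobdescriptor: inmemory              # see Performance section
http:
  addr: :5000
proxy:
  remoteurl: https://registry-1.docker.io # pull-through cache of Docker Hub

# /etc/docker-registry/config-private.yml - private registry on :5001 (sketch)
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry      # bind-mounted from /srv/docker-private
  delete:
    enabled: true                         # required for "Delete Specific Image" below
http:
  addr: :5001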

k3s Nodes

  • Config: /etc/rancher/k3s/registries.yaml (sketch below)
  • Mirrors:
      • docker.io → http://mifs01.intranet:5000 (automatic)
      • mifs01.intranet:5001 → direct access (explicit)

Usage:

  • Public images: Use normal names → nginx:alpine (cached through :5000)
  • Custom images: Use full URL → mifs01.intranet:5001/myapp:latest (stored on :5001)
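
The deployed registries.yaml follows the standard k3s mirror format. A sketch, assuming plain HTTP and no authentication as noted under Security Considerations:

# /etc/rancher/k3s/registries.yaml (sketch)
mirrors:
  docker.io:
    endpoint:
      - "http://mifs01.intranet:5000"     # pull-through cache
  mifs01.intranet:5001:
    endpoint:
      - "http://mifs01.intranet:5001"     # private registry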

Testing the Registry

IMPORTANT: Run these tests BEFORE deploying registry configuration to k3s cluster. All tests run from your local Linux PC using Podman to validate the registry is working correctly in isolation.

Prerequisites

  • Local Linux PC with Podman installed
  • Network access to mifs01.intranet:5000
  • Registry service running on mifs01

Test 1: Verify Both Registries are Accessible

Basic connectivity and API test for both registries.

# From your local PC

# Test 1a: Check cache registry is responding (port 5000)
curl http://mifs01.intranet:5000/v2/
# Expected: {}

# Test 1b: Check private registry is responding (port 5001)
curl http://mifs01.intranet:5001/v2/
# Expected: {}

# Test 1c: Check empty catalogs (fresh registries)
curl http://mifs01.intranet:5000/v2/_catalog
# Expected: {"repositories":[]}

curl http://mifs01.intranet:5001/v2/_catalog
# Expected: {"repositories":[]}

# Test 1d: Test with verbose output
curl -v http://mifs01.intranet:5000/v2/
curl -v http://mifs01.intranet:5001/v2/
# Both responses should include: HTTP/1.1 200 OK

Test 2: Configure Local Podman for Both Registries

Configure your local Podman to use both registries and allow insecure (HTTP) connections.

# From your local PC

# Create Podman registry configuration
mkdir -p ~/.config/containers/registries.conf.d/

cat > ~/.config/containers/registries.conf.d/mifs01-registries.conf << 'EOF'
# Dual registry configuration for mifs01

# Registry 1: Use mifs01:5000 as pull-through cache for Docker Hub
[[registry]]
location = "docker.io"

[[registry.mirror]]
location = "mifs01.intranet:5000"
insecure = true

# Registry 2: Allow direct access to mifs01:5000 (cache)
[[registry]]
location = "mifs01.intranet:5000"
insecure = true

# Registry 3: Allow direct access to mifs01:5001 (private)
[[registry]]
location = "mifs01.intranet:5001"
insecure = true
EOF

# Verify configuration
cat ~/.config/containers/registries.conf.d/mifs01-registries.conf

# Test configuration is loaded
podman info | grep -A 30 registries

Test 3: Test Pull-Through Cache

Pull a public image through mifs01 cache.

# From your local PC

# Test 3a: Remove any existing alpine image
podman rmi docker.io/library/alpine:latest 2>/dev/null || true

# Test 3b: Pull alpine (will go through mifs01 cache)
# Watch for "mifs01.intranet:5000" in the output
podman pull docker.io/library/alpine:latest

# Test 3c: Verify image was cached on mifs01
curl http://mifs01.intranet:5000/v2/_catalog
# Expected: {"repositories":["library/alpine"]}

# Test 3d: Check alpine tags in cache
curl http://mifs01.intranet:5000/v2/library/alpine/tags/list
# Expected: {"name":"library/alpine","tags":["latest"]}

# Test 3e: Test cache performance - pull again
podman rmi docker.io/library/alpine:latest
time podman pull docker.io/library/alpine:latest
# Should be much faster (from cache)

Test 4: Build Image Using Cached Base Images

Build a custom image with the registry as pull-through cache.

# From your local PC

# Create test directory
mkdir -p ~/docker-registry-test
cd ~/docker-registry-test

# Create Dockerfile
cat > Dockerfile << 'EOF'
FROM alpine:latest

RUN apk add --no-cache curl bash

COPY hello.sh /hello.sh
RUN chmod +x /hello.sh

CMD ["/hello.sh"]
EOF

# Note: Alpine is multi-arch and will build for your local architecture
# If building on amd64 for arm64 k3s cluster, use:
# podman build --platform linux/arm64 -t hello-world:v1.0 .

# Create application script
cat > hello.sh << 'EOF'
#!/bin/bash
echo "================================================"
echo "  Hello from Docker Registry Test!"
echo "================================================"
echo "Hostname: $(hostname)"
echo "Date: $(date)"
echo "Kernel: $(uname -r)"
echo "================================================"
echo "This image was built using mifs01 registry cache"
echo "and pushed to mifs01 for private storage."
echo "================================================"
while true; do
  echo "[$(date)] Container is running... (press Ctrl+C to stop)"
  sleep 30
done
EOF

# Build image (will use cached alpine from mifs01)
# For k3s arm64 cluster, build multi-arch or arm64 image:
podman build --platform linux/arm64,linux/amd64 --manifest hello-world:v1.0 .

# Or if building on x86_64 for arm64 cluster only:
# podman build --platform linux/arm64 -t hello-world:v1.0 .

# Verify alpine is still in cache
curl http://mifs01.intranet:5000/v2/_catalog
# Expected: {"repositories":["library/alpine"]}

Test 5: Push Custom Image to Private Registry

Test private image storage functionality on port 5001.

# From your local PC (still in ~/docker-registry-test)

# Test 5a/5b: Tag (if needed) and push the image to the private registry (port 5001)
# If you built a manifest (multi-arch), push the manifest directly:
podman manifest push hello-world:v1.0 mifs01.intranet:5001/hello-world:v1.0
podman manifest push hello-world:v1.0 mifs01.intranet:5001/hello-world:latest

# OR if you built a single-arch image:
# podman tag hello-world:v1.0 mifs01.intranet:5001/hello-world:v1.0
# podman tag hello-world:v1.0 mifs01.intranet:5001/hello-world:latest
# podman push mifs01.intranet:5001/hello-world:v1.0
# podman push mifs01.intranet:5001/hello-world:latest

# Test 5c: Verify push succeeded on private registry
curl http://mifs01.intranet:5001/v2/_catalog
# Expected: {"repositories":["hello-world"]}

# Test 5d: Check tags
curl http://mifs01.intranet:5001/v2/hello-world/tags/list
# Expected: {"name":"hello-world","tags":["latest","v1.0"]}

# Test 5e: Verify on mifs01 disk storage
ssh mifs01.intranet "ls -lah /srv/docker-private/docker/registry/v2/repositories/"
# Should show: hello-world/

# Test 5f: Verify cache registry still has only alpine
curl http://mifs01.intranet:5000/v2/_catalog
# Expected: {"repositories":["library/alpine"]} (no hello-world here)

Test 6: Pull Custom Image from Private Registry

Test pulling your private image back from the private registry.

# From your local PC

# Test 6a: Remove local copy
podman rmi mifs01.intranet:5001/hello-world:latest 2>/dev/null || true
podman rmi mifs01.intranet:5001/hello-world:v1.0 2>/dev/null || true
podman rmi hello-world:v1.0 2>/dev/null || true

# Test 6b: Pull from private registry
podman pull mifs01.intranet:5001/hello-world:latest

# Test 6c: Verify image exists locally
podman images | grep hello-world

# Test 6d: Pull specific version
podman pull mifs01.intranet:5001/hello-world:v1.0

# Test 6e: Verify both versions pulled successfully
podman images mifs01.intranet:5001/hello-world

Test 7: Run Custom Image from Private Registry

Test running containers from private registry-stored images.

# From your local PC

# Test 7a: Run container from private registry
podman run --rm -d --name hello-test \
  mifs01.intranet:5001/hello-world:latest

# Test 7b: Check container is running
podman ps | grep hello-test

# Test 7c: View logs
podman logs hello-test

# Expected output:
# ================================================
#   Hello from Docker Registry Test!
# ================================================
# Hostname: <container-id>
# Date: Sun Nov 10 19:45:23 UTC 2025
# Kernel: 6.12.48+deb13-amd64
# ================================================
# This image was built using mifs01 registry cache
# and pushed to mifs01 for private storage.
# ================================================
# [date] Container is running...

# Test 7d: Interactive test - exec into container
podman exec -it hello-test /bin/bash
# Inside container:
# curl --version
# exit

# Test 7e: Stop container
podman stop hello-test

Test 8: Test Multiple Image Pulls (Stress Test)

Test caching multiple different images.

# From your local PC

# Remove previous test images
podman rmi docker.io/library/nginx:alpine 2>/dev/null || true
podman rmi docker.io/library/redis:alpine 2>/dev/null || true
podman rmi docker.io/library/busybox:latest 2>/dev/null || true

# Pull multiple images (all will cache through mifs01)
echo "=== Pulling nginx:alpine ==="
time podman pull docker.io/library/nginx:alpine

echo "=== Pulling redis:alpine ==="
time podman pull docker.io/library/redis:alpine

echo "=== Pulling busybox:latest ==="
time podman pull docker.io/library/busybox:latest

# Verify all are cached
curl http://mifs01.intranet:5000/v2/_catalog | jq
# Expected: repositories include library/nginx, library/redis, library/busybox, library/alpine
# (hello-world is stored on the private registry :5001, so it does not appear here)

# Test cache performance - pull all again
podman rmi docker.io/library/nginx:alpine
podman rmi docker.io/library/redis:alpine
podman rmi docker.io/library/busybox:latest

echo "=== Re-pulling from cache ==="
time podman pull docker.io/library/nginx:alpine
time podman pull docker.io/library/redis:alpine
time podman pull docker.io/library/busybox:latest
# Should all be much faster

Test 9: Test Registry Persistence

Verify cached images survive registry restart.

# From your local PC

# Test 9a: Check current catalog
curl http://mifs01.intranet:5000/v2/_catalog | jq

# Test 9b: Restart registry on mifs01
ssh mifs01.intranet "sudo systemctl restart docker-registry"

# Wait for restart
sleep 5

# Test 9c: Verify registry is back online
curl http://mifs01.intranet:5000/v2/

# Test 9d: Verify all images still present
curl http://mifs01.intranet:5000/v2/_catalog | jq
# All previously cached images should still be listed

# Test 9e: Pull image to confirm data integrity
podman rmi docker.io/library/alpine:latest
podman pull docker.io/library/alpine:latest
# Should work without errors

Test 10: Test Registry Error Handling

Test what happens when registry is unavailable.

# From your local PC

# Test 10a: Stop registry
ssh mifs01.intranet "sudo systemctl stop docker-registry"

# Test 10b: Try to pull new image (should fail gracefully)
podman pull docker.io/library/python:alpine
# Expected: Should try mifs01, fail, then fall back to docker.io directly
# (This is pull-through cache behavior - doesn't block Docker Hub access)

# Test 10c: Verify fallback worked
podman images | grep python

# Test 10d: Start registry again
ssh mifs01.intranet "sudo systemctl start docker-registry"
sleep 5

# Test 10e: Verify registry works again
curl http://mifs01.intranet:5000/v2/

Test 11: Clean Up Test Configuration

After successful testing, optionally remove test configuration.

# From your local PC

# Option A: Keep configuration (recommended - useful for future testing)
# Do nothing - leave ~/.config/containers/registries.conf.d/mifs01-registries.conf

# Option B: Disable temporarily (rename)
# mv ~/.config/containers/registries.conf.d/mifs01-registries.conf \
#    ~/.config/containers/registries.conf.d/mifs01-registries.conf.disabled

# Option C: Remove completely
# rm ~/.config/containers/registries.conf.d/mifs01-registries.conf

# Clean up test directory (optional)
rm -rf ~/docker-registry-test

# Keep test images in registry for k3s testing later
# Don't clean mifs01 cache - it will be useful when k3s nodes start pulling

Deploying to k3s Cluster

Only after all tests pass, deploy registry configuration to k3s:

cd /home/mixi/pg/home/automation/ansible/plays

# Deploy registries.yaml to all k3s nodes
ansible-playbook setup-k3s-registry.yml

# Verify deployment on a master node
ssh pikm01.intranet cat /etc/rancher/k3s/registries.yaml

# Verify on a worker node
ssh piks01.intranet cat /etc/rancher/k3s/registries.yaml

# Restart k3s services (done automatically by playbook, but verify if needed)
# On master: sudo systemctl restart k3s
# On worker: sudo systemctl restart k3s-agent

Verify k3s Integration

After deploying to k3s, verify it's working:

# Test from k3s node
ssh pikm01.intranet

# Check k3s sees registry configuration
sudo crictl info | grep -A 10 registry

# Pull test image (should use mifs01 cache)
sudo crictl pull docker.io/library/nginx:alpine

# Verify it pulled from cache (check mifs01 logs)
ssh mifs01.intranet "sudo journalctl -u docker-registry -n 20"

# Test public image through cache (multi-arch, works on both amd64/arm64)
kubectl run nginx-test --image=nginx:alpine --restart=Never

# Verify it used cache (check mifs01 logs)
ssh mifs01.intranet "sudo journalctl -u docker-registry -n 20"

# Test private registry with custom image
# Note: Ensure your hello-world image is built for arm64 if k3s cluster is arm64
kubectl run hello-k3s --image=mifs01.intranet:5001/hello-world:latest --restart=Never

# Wait for pod to start
kubectl wait --for=condition=Ready pod/hello-k3s --timeout=60s

# Check logs
kubectl logs hello-k3s

# Expected output includes:
# "Hello from Docker Registry Test!"
# "This image was built using mifs01 registry cache"

# Check architecture
kubectl exec hello-k3s -- uname -m
# Should show: aarch64 (for arm64 cluster) or x86_64 (for amd64 cluster)

# Cleanup
kubectl delete pod hello-k3s nginx-test

Alternative: Use Pre-built Multi-Arch Image for Testing

If you want a simpler test without building custom images:

# Pull a known multi-arch image to your local PC
podman pull docker.io/library/busybox:latest

# Tag and push to private registry
podman tag busybox:latest mifs01.intranet:5001/busybox:test
podman push mifs01.intranet:5001/busybox:test

# Deploy to k3s
kubectl run busybox-test --image=mifs01.intranet:5001/busybox:test --restart=Never -- sleep 3600

# Verify it's running on correct architecture
kubectl exec busybox-test -- uname -m

# Cleanup
kubectl delete pod busybox-test

Maintenance Commands

Check Registry Status

ssh mifs01.intranet

# Check cache registry (port 5000)
sudo systemctl status docker-registry
sudo journalctl -u docker-registry -f

# Check private registry (port 5001)
sudo systemctl status docker-registry-private
sudo journalctl -u docker-registry-private -f

View Cached/Stored Images

# Cache registry (port 5000) - Docker Hub images
curl http://mifs01.intranet:5000/v2/_catalog | jq
curl http://mifs01.intranet:5000/v2/library/nginx/tags/list | jq

# Private registry (port 5001) - Custom images
curl http://mifs01.intranet:5001/v2/_catalog | jq
curl http://mifs01.intranet:5001/v2/hello-world/tags/list | jq

Check Storage Usage

ssh mifs01.intranet

# Cache registry storage
du -sh /srv/docker
sudo btrfs filesystem df /srv/docker

# Private registry storage
du -sh /srv/docker-private
sudo btrfs filesystem df /srv/docker-private

# Total usage
du -sh /srv/docker /srv/docker-private

Garbage Collection

Remove unreferenced image layers to free space:

ssh mifs01.intranet

# Garbage collect cache registry (port 5000)
sudo systemctl stop docker-registry
sudo podman run --rm \
  -v /srv/docker:/var/lib/registry \
  -v /etc/docker-registry/config.yml:/etc/docker/registry/config.yml:ro \
  docker.io/library/registry:2 \
  garbage-collect /etc/docker/registry/config.yml
sudo systemctl start docker-registry

# Garbage collect private registry (port 5001)
sudo systemctl stop docker-registry-private
sudo podman run --rm \
  -v /srv/docker-private:/var/lib/registry \
  -v /etc/docker-registry/config-private.yml:/etc/docker/registry/config.yml:ro \
  docker.io/library/registry:2 \
  garbage-collect /etc/docker/registry/config.yml
sudo systemctl start docker-registry-private
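
To preview what garbage collection would remove without deleting anything, the registry binary also accepts a dry-run flag (sketch for the cache registry; swap the mounts and config for the private one):

sudo podman run --rm \
  -v /srv/docker:/var/lib/registry \
  -v /etc/docker-registry/config.yml:/etc/docker/registry/config.yml:ro \
  docker.io/library/registry:2 \
  garbage-collect --dry-run /etc/docker/registry/config.yml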

Update Registry Images

ssh mifs01.intranet
sudo podman pull docker.io/library/registry:2
sudo systemctl restart docker-registry
sudo systemctl restart docker-registry-private
sudo podman image prune -a

Delete Specific Image

To remove an image from the registry:

# Note: Requires delete support enabled in config (already enabled)

# Delete manifest
curl -X DELETE http://mifs01.intranet:5001/v2/hello-world/manifests/<digest>

# Then run garbage collection (see above)
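
The <digest> is the manifest's sha256 content digest, which the registry returns in the Docker-Content-Digest header. A sketch of the full sequence for the hello-world image on the private registry (sha256:<hash> stands in for the value returned in step 1):

# 1. Look up the manifest digest for a tag
curl -sI \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json" \
  http://mifs01.intranet:5001/v2/hello-world/manifests/latest | grep -i docker-content-digest
# Docker-Content-Digest: sha256:<hash>

# 2. Delete the manifest by digest
curl -X DELETE "http://mifs01.intranet:5001/v2/hello-world/manifests/sha256:<hash>"

# 3. Run garbage collection on the private registry (see Garbage Collection above)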

Troubleshooting

Registry Not Responding

# Check if service is running
ssh mifs01.intranet sudo systemctl status docker-registry

# Check logs
ssh mifs01.intranet sudo journalctl -u docker-registry -n 100

# Check if port is listening
ssh mifs01.intranet sudo ss -tlnp | grep 5000

# Test connectivity from k3s node
ssh pikm01.intranet curl http://mifs01.intranet:5000/v2/

Images Not Caching

# On k3s node, verify registries.yaml exists
ssh pikm01.intranet cat /etc/rancher/k3s/registries.yaml

# Check containerd/k3s configuration
ssh pikm01.intranet sudo crictl info | grep -A 10 registry

# Check k3s logs
ssh pikm01.intranet sudo journalctl -u k3s -n 50 | grep -i registry

Pull Failures

# Check registry logs during pull
ssh mifs01.intranet sudo journalctl -u docker-registry -f

# Test manual pull from k3s node
ssh pikm01.intranet sudo crictl pull docker.io/library/hello-world:latest

# Check registry health
curl http://mifs01.intranet:5000/v2/
# Should return: {}

Storage Full

# Check available space
ssh mifs01.intranet df -h /srv/docker

# Run garbage collection (see Maintenance section)

# Check BTRFS usage
ssh mifs01.intranet sudo btrfs filesystem usage /srv/docker

# If needed, clear all cached images (WARNING: destructive)
ssh mifs01.intranet
sudo systemctl stop docker-registry
sudo rm -rf /srv/docker/docker
sudo mkdir -p /srv/docker/docker
sudo systemctl start docker-registry

Security Considerations

  • No authentication: Registry is HTTP-only without authentication
  • Network isolation: Only accessible from 192.168.1.0/24 (internal network)
  • No internet exposure: Not accessible from outside the local network
  • No sensitive data: Only public Docker Hub images and local test images
  • Storage permissions: /srv/docker owned by root with restricted permissions

Performance

  • In-memory cache: Blob descriptors cached in RAM for fast lookups
  • BTRFS filesystem: Uses autodefrag and noatime for better performance
  • Network: Uses host networking for minimal overhead
  • Expected performance:
      • First pull (from Docker Hub): 5-15 seconds (depending on image size and internet speed)
      • Subsequent pulls (from cache): 1-3 seconds
      • Local network bandwidth: ~1 Gbps (limited by switch/NIC)

References