Deploying a docker-registry in your private kubernetes cluster
One, in my opinion, disastrous design decision from Docker was to make HTTPS mandatory for Docker registries. Sure, for production setups nothing else makes any sense, but for developers it's just a big time-eating obstacle. If you run a local Docker cluster (Swarm or Kubernetes) and work on some application, the only rational way to deploy it to the cluster is from a local Docker registry. Uploading your development images (which in my case can be several gigabytes in size for large C++ applications built with debug information and extra instrumentation) to Docker Hub, just to download them again to the same LAN, possibly from several nodes, is just plain wrong.
Allowing plain HTTP for local, on-premise deployments would have saved quite some time (and money) for Docker users.
There are many articles online that describe how to deploy a docker-registry with certificates, but none of the ones I found was relevant for my use case - a bare-metal Kubernetes cluster without any external load balancer.
So, here is how I did it.
Get a TLS certificate
First I obtained a wildcard certificate from Let's Encrypt. I did it manually, so I'll have to renew it every 10 weeks (or automate it when I have time). I chose a wildcard cert so that I can reuse that same certificate for any TLS-enabled services I later want to deploy to the cluster.
I sent a request to Let's Encrypt using certbot on Ubuntu 18.04.
$ sudo certbot certonly --manual --preferred-challenges dns -d '*.lastviking.eu'
It asked me to create a TXT DNS entry with some text in it. I did that from my DNS provider's web console, and then I pressed Enter.
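Before pressing Enter, it can be worth checking that the TXT record has actually propagated. The record name below is what certbot's DNS-01 challenge uses for my domain (adjust for yours):

```shell
# Query the ACME challenge record directly; certbot's validation will
# fail if this doesn't return the token it asked you to publish.
dig +short -t TXT _acme-challenge.lastviking.eu
```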
Put the TLS cert in a kubernetes secret
Now it was time to create the namespace for the registry, and to add the cert to a secret.
$ kubectl create ns docker-registry
$ kubectl create secret tls registry-tls -n docker-registry --cert fullchain.pem --key privkey.pem
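To double-check that the certificate actually landed in the cluster, you can inspect the secret (a sanity check, not a required step):

```shell
# The secret should be of type kubernetes.io/tls and contain
# the data keys tls.crt and tls.key.
kubectl -n docker-registry describe secret registry-tls
```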
Deploy the registry
registry.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      name: registry
  template:
    metadata:
      labels:
        name: registry
    spec:
      containers:
      - name: registry
        image: registry
        ports:
        - name: registry-port
          containerPort: 5000
        volumeMounts:
        - mountPath: /var/lib/registry
          name: images
        - mountPath: /certs
          name: certs
        env:
        - name: REGISTRY_HTTP_TLS_CERTIFICATE
          value: /certs/tls.crt
        - name: REGISTRY_HTTP_TLS_KEY
          value: /certs/tls.key
      volumes:
      - name: images
        hostPath:
          path: /root/registry/images
      - name: certs
        secret:
          secretName: registry-tls
$ kubectl -n docker-registry apply -f registry.yaml
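Before moving on, verify that the pod actually came up:

```shell
# Wait for the deployment to become available, then list its pod.
kubectl -n docker-registry rollout status deployment/registry
kubectl -n docker-registry get pods -l name=registry
```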
The secret is mounted as a volume in the pod, and the registry service reads the certs just as if we had created a persistent volume and copied the files there.
In order to make it available from the outside (practical if you plan to push any images), deploy a service that exposes the registry on a NodePort.
registry-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: registry-svc
  labels:
    name: registry-svc
spec:
  type: NodePort
  ports:
  - port: 5000
    nodePort: 30500
    name: http
  selector:
    name: registry
$ kubectl -n docker-registry apply -f registry-svc.yaml
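At this point the registry should answer over TLS on the NodePort. The `/v2/` endpoint is part of the Docker Registry HTTP API and returns an empty JSON object on a healthy registry; substitute your own DNS name for mine:

```shell
# API version check: a healthy registry answers 200 OK with "{}".
curl https://k8registry.lastviking.eu:30500/v2/
# Repository list: empty until something has been pushed.
curl https://k8registry.lastviking.eu:30500/v2/_catalog
```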
Now, all we have to do to use the registry is to tag the images we want to deploy with a valid DNS name, the port, and an image name. (I just added the IP of a node in my cluster to my DNS server, so I don't have to mess with internal DNS configuration on my machines or in the cluster.)
$ docker tag a4a5a9eb3d28 k8registry.lastviking.eu:30500/amazingapp:latest
$ docker push k8registry.lastviking.eu:30500/amazingapp:latest
Use the same tag to specify the image in your kubernetes or helm declarations.
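For example, in a Deployment manifest the image line would look like this (the app name and tag match the push above; the rest of the spec is a minimal sketch):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: amazingapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: amazingapp
  template:
    metadata:
      labels:
        app: amazingapp
    spec:
      containers:
      - name: amazingapp
        # Pulled from the private registry; the nodes trust it
        # because the cert is a valid Let's Encrypt certificate.
        image: k8registry.lastviking.eu:30500/amazingapp:latest
```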
Note that this recipe will create local volumes. In my case that is fine, because the workflow is to build a container on my laptop and deploy it immediately in order to test it. I have no need for yesterday's images.