I do a lot of building, testing, and experimenting with Kubernetes in my day job. In particular, working with customers at VMware, I end up feeling their problems in a way that I don’t always have to when working on open source. When I’m working on open source, my goal is generally to find the fastest test and run cycle, which often means using cloud-hosted services like GCR or EC2. These services are easy to use and a great deal — you get professionals managing them and trying hard to make them approachable and attractive. And usually for pennies per month for small amounts of traffic.
So, what’s wrong with just using cloud-hosted services? Well, sometimes VMware’s customers aren’t using them, for various reasons: air-gaps, data sovereignty, and blast-radius reduction are all good reasons for wanting your own registry on the cluster. And sometimes you just want something that’s simple to operate and self-contained, which you won’t have to clean up separately after the cluster is gone.
An OCI registry feels like it really should be part of the core out-of-the-box Kubernetes offering. I suspect the main reason it’s not is that none of the vendors supporting Kubernetes has much incentive to make this part easier for new developers, and all of them have their own registry solution they’d prefer to sell.
My Solution: Run a simple registry on the Kubernetes cluster
What I’m going to describe here is not a good choice for a production cluster. Among other things, it introduces an exciting circular startup dependency if you happen to use the registry to host any of the Kubernetes cluster images or other bootstrap components (such as CSI or CNI images, or in my home case, Ceph images). However, it’s great for iterative development, and I wish this were a built-in option when installing Kubernetes. Having to also set up and authenticate to a registry in order to develop on Kubernetes is a big stumbling block. I understand that Red Hat OpenShift may have something like this, but I haven’t seen it generally adopted or documented.
This particular design attempts to heavily leverage existing Kubernetes software. As such, it’s not as nicely contained as a smaller attempt might be. It also uses a self-signed CA rather than relying on LetsEncrypt, which may not be usable in all environments: LetsEncrypt can only issue certificates for hosts it can validate from the public internet, and it records every certificate it issues in a public certificate transparency log, which means that anyone on the internet can find your registry’s hostname. That’s not a problem for many people, but I wanted a solution that also enabled testing self-signed registries.
In code:
If you just want to get to the code, https://github.com/evankanderson/k8s-private-local-registry has the files. They are designed to work on most public clouds or other internet-connected environments without modification. I’ll go into the details of how these work below, but if you just want something working, you should be able to use the above (assuming you have the prerequisites, which I’ll explain in more detail below).
How it works

The Ingress and registry:2 configuration is extracted from a Helm chart into simple manifests (I like to work from simple manifests, as I’m nervous about the Helm chart changing later). To that, I’ve added a self-signed CA via cert-manager, a one-shot Kubernetes Job which locates the IP address of the Ingress (so that we can sign a cert for registry.${IP}.nip.io), and a DaemonSet that configures the nodes in the cluster to trust the registry’s CA.
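For reference, the self-signed CA part follows cert-manager’s standard “bootstrap” pattern, which looks roughly like the sketch below. The resource names (selfsigned-bootstrap, registry-ca) are placeholders of my own, not necessarily what the repo uses:

```
kubectl apply -f - <<'EOF'
# A self-signed issuer, used only to mint the CA certificate below.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-bootstrap
spec:
  selfSigned: {}
---
# The CA itself; its key pair lands in a Secret in the cert-manager namespace.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: registry-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: registry-ca
  secretName: registry-ca
  issuerRef:
    name: selfsigned-bootstrap
    kind: ClusterIssuer
---
# An issuer backed by that CA; the registry's serving certificate
# (for registry.${IP}.nip.io) is issued from here.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: registry-ca
spec:
  ca:
    secretName: registry-ca
EOF
```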
The Job
I really like software that just does the right thing, rather than needing to follow a set of manual replacement steps. Since most software needs at least a little bit of adjustment to fit into a particular environment, there are two ways to do this:
- Have the software automatically bootstrap itself into its environment using built-in smarts.
- Have a “first-run initialization” that fills in reasonable values for items which haven’t already been filled in.
AppleTalk and multicast DNS are great examples of the first approach — you can plug a printer into your network, and then find it on the local network segment without having to do anything. The software just works; behind the scenes, there’s a complicated stack handling DNS requests on local-network-only multicast routes and the special DNS suffix .local. This works well when your goal is to only handle local-network configuration — you can set aside some special addresses, and every network gets to re-use the same reservations.
This doesn’t work as well if you want something that’s generally reachable across a wider network. In those cases, you need a name which is rooted outside your network, and that probably means configuration of some sort. In my case, we’re using nip.io, a handy wildcard DNS service that answers requests of the form anything.${IP}.nip.io with ${IP}, effectively giving you an unlimited set of DNS names for a single IP address without needing to keep any state on the server.
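For example (203.0.113.10 is just a documentation address; any IP works the same way):

```
$ dig +short registry.203.0.113.10.nip.io
203.0.113.10
$ dig +short anything-else.203.0.113.10.nip.io
203.0.113.10
```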
Since we’re going to need to write the observed address back into a few Kubernetes objects, I chose to use the bitnami/kubectl container image. Unfortunately, that image is pretty minimal, so I ended up having to pull out some crazy bash-isms (/dev/tcp and /proc/net/tcp) in order to resolve hostnames for Ingresses that are using AWS ELBs. (I’ve also seen AWS ELBs change IP address after a while, so you may prefer to either re-run this periodically, or to use the additional ALB controller. Yes, it seems like AWS doesn’t really want you to run EKS…)
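For the curious, the trick looks something like the sketch below. This is my own reconstruction of the idea rather than the exact script from the repo, and it assumes bash plus stat and awk are available in the image; the Ingress name registry is a placeholder:

```bash
#!/bin/bash
# Resolve an Ingress load balancer hostname to an IPv4 address without
# nslookup/dig/getent: let bash's /dev/tcp do the DNS lookup, then read the
# peer address of that socket back out of /proc/net/tcp.
resolve_ipv4() {
  local host="$1" port="${2:-443}"
  exec 3<>"/dev/tcp/${host}/${port}" || return 1   # bash resolves the name here
  local inode hexip
  inode=$(stat -L -c %i /proc/self/fd/3)           # socket inode behind fd 3
  hexip=$(awk -v inode="$inode" \
    '$10 == inode { split($3, a, ":"); print a[1] }' /proc/net/tcp)
  exec 3>&-                                        # close the probe connection
  # rem_address is little-endian hex, e.g. 0100007F -> 127.0.0.1
  printf '%d.%d.%d.%d\n' \
    "$((16#${hexip:6:2}))" "$((16#${hexip:4:2}))" \
    "$((16#${hexip:2:2}))" "$((16#${hexip:0:2}))"
}

# "registry" is a placeholder for whatever the Ingress is actually named.
lb_host=$(kubectl get ingress registry \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
resolve_ipv4 "${lb_host}"
```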
Enforcing Trust
Once we’ve gotten a hostname and CA certificate set up, we can configure the local CRI (Container Runtime Interface) daemon to trust the registry and its CA certificate. As a reminder, Kubernetes doesn’t implement its own container runtime, but instead depends on a host-installed container agent like crio, Docker, or containerd. containerd seems to be popular with various cloud providers, but also seems to be the hardest to add additional registry trusts to.
In any case, you can read the code; the short form is that both crio and Docker support adding new registry certs at runtime by injecting certs into the correct location, while containerd’s default configuration also requires adding a line to the configuration file, which in turn requires restarting containerd. About 80% of the script is dealing with containerd; we early-exit if it’s not present, and could probably use a much smaller image if that were standard. I opened https://github.com/containerd/containerd/pull/6488 to simplify this, but it seems stuck on making a breaking API change instead of the “new feature” model I tried to introduce.
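Roughly speaking, the per-runtime differences look like the sketch below when run directly on a node (the DaemonSet has to do the equivalent through hostPath mounts and the host PID namespace). The hostname and CA file locations are placeholders, and paths are the conventional defaults, which may vary by distribution:

```bash
REGISTRY="registry.203.0.113.10.nip.io"   # placeholder hostname

# Docker and crio (via containers/image) pick up per-registry CAs from a
# drop-in directory, with no daemon restart:
install -D -m 0644 ca.crt "/etc/docker/certs.d/${REGISTRY}/ca.crt"      # Docker
install -D -m 0644 ca.crt "/etc/containers/certs.d/${REGISTRY}/ca.crt"  # crio

# containerd's default CRI configuration instead wants the CA referenced from
# /etc/containerd/config.toml, and the daemon restarted to pick it up.
# (Naively appended here; a real script has to merge with existing config,
# which is why most of the trust script deals with containerd.)
install -D -m 0644 ca.crt "/etc/containerd/registry-certs/${REGISTRY}.crt"
cat >>/etc/containerd/config.toml <<EOF
[plugins."io.containerd.grpc.v1.cri".registry.configs."${REGISTRY}".tls]
  ca_file = "/etc/containerd/registry-certs/${REGISTRY}.crt"
EOF
systemctl restart containerd
```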
Note that the “enforcing trust” DaemonSet has scary permissions — several hostPath mounts under /etc, and all of securityContext.privileged, hostPID, and hostNetwork set to true. If you see a container with these kinds of permissions and you don’t understand what it does, don’t run it. These settings are definitely enough to compromise your node, and by extension, all the containers running on it.
We use a DaemonSet for this, because we want the setup to run on each node in the cluster, which is the intended role for a DaemonSet; if you are using node taints on your cluster, you may need to adjust the DaemonSet to tolerate those taints, or the tainted nodes won’t be able to talk to the local registry. We also want the registry-trust script to run periodically to refresh the settings if needed; I used an init container to do the setup work, and then a normal container running sleep 6h to delay re-running the script until the main container exits.
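Stripped down, the DaemonSet looks roughly like the sketch below; the names, script ConfigMap, and mount paths are illustrative, so check the repo for the real manifest:

```
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: registry-trust
spec:
  selector:
    matchLabels:
      app: registry-trust
  template:
    metadata:
      labels:
        app: registry-trust
    spec:
      hostPID: true        # needed to reach host daemons like containerd
      hostNetwork: true
      initContainers:
      - name: install-ca
        image: bitnami/kubectl        # bash + kubectl, nothing else
        command: ["/bin/bash", "/scripts/registry-trust.sh"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: scripts
          mountPath: /scripts
        - name: host-etc
          mountPath: /host/etc        # hostPath under /etc, hence the warnings
      containers:
      - name: sleep
        image: bitnami/kubectl
        command: ["sleep", "6h"]      # exits periodically so setup is refreshed
      volumes:
      - name: scripts
        configMap:
          name: registry-trust
      - name: host-etc
        hostPath:
          path: /etc
EOF
```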
I’ll admit that I’m slightly terrified of this container — I’d check real carefully before running something like this in a new environment (your own kubeadm cluster, or something like Oracle’s Kubernetes offering). I did test it by hand on AKS, EKS, and GCE shortly after I created it, but I don’t have automation running (since it would cost actual money to set up and tear down these clusters, and this isn’t critical enough to warrant that).
Twists!
I figured I’d jot down a few interesting notes at the end:
- I started by trying to expose the registry only inside the cluster. It took a little while to realize that this worked for on-cluster builds (my first testing use case), but not for running containers that were built. This is because the CRI and kubelet are not on the overlay network set up by the CNI; you’ll need to expose things to the host using a NodePort, LoadBalancer, or Ingress. I chose the last because:
- It would be pretty easy to either create a certificate directly, or write a job to create and store a certificate and CA. I chose to use an Ingress and cert-manager instead, for two reasons:
- Some of the public clouds have a hosted default Ingress implementation, and most clusters should have one loaded. (IIRC, Azure doesn’t have this installed by default; EKS also requires that you install an add-on.)
- LetsEncrypt can automatically issue short-lived certificates and renew them, which seemed like a nice feature. It’s not being used at the moment, but it would be easy to change the Ingress annotation to use LetsEncrypt if you wanted (see the sketch after this list).
- Figuring out the version of containerd from inside a running container is trickier than I’d like… I wish there were a standardized interface for querying this type of information, like the instance metadata servers that GCE and EC2 have.
- There’s a bunch of gnarly stuff going on in the registry-trust DaemonSet. This feels super-fragile to me.
- I really wish that Kubernetes would ship with an out-of-the-box registry; maintaining a hack like this in “user space” means that every single casual Kubernetes hacker either needs to find a resource like this (fat chance! I wrote this after 4 1/2 years of Kubernetes use) or manage an external registry outside the lifecycle of their cluster.
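As promised in the list above, switching to LetsEncrypt later should mostly be a matter of pointing cert-manager’s ingress-shim annotation at an ACME issuer. This assumes you’ve already configured a ClusterIssuer named letsencrypt-prod, and the Ingress name registry is again a placeholder:

```
kubectl annotate ingress registry \
  cert-manager.io/cluster-issuer=letsencrypt-prod --overwrite
```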