Integrating Google Container Registry with GKE

Dissecting how to securely store your images and configure access from your cluster

Sebastian Scotti
6 min read · Nov 21, 2020
Photo by CHUTTERSNAP on Unsplash

So you managed to push your Docker images to the Container Registry, but you’re getting the dreaded ImagePullBackOff error when your Pods try to start. Even though there are several reasons this could happen (e.g.: a typo in your YAML files), it could simply be that your cluster lacks the permissions to fetch them. On GCP, by default, clusters automatically have access to the Container Registry if both are in the same GCP project. But if you have to access images from other projects, or you chose another provider, that’s another story. Enter imagePullSecrets.

Setting up container registry access

Before getting into any specifics, you should know that a Kubernetes cluster has two ways to authenticate with a private container registry:

  • By configuring the default ServiceAccount
  • By specifying the key in the Pod spec

Both of these methods require that we first add the registry credentials to the cluster, using a Secret.

We’re going to use GCP’s Container Registry (from now on, simply GCR), so we only need to give our cluster permission to access it. This process has three steps:

  1. Generate a service account key that will allow the cluster to pull the images from this repository;
  2. Store that key in the cluster, using a Secret;
  3. Tell the default ServiceAccount where the Secret is so it gets used when pulling an image from GCR; or configure your Pod to do the same.

Generate a service key

In your GCP web console, go to IAM >> Service Accounts and create a new one with a descriptive name. On the Roles selection page, select Storage Object Viewer. Why? Because Google Cloud Storage is the underlying backend used to store the images, and that role lets the account read the files in those buckets. Remember: this service account will only be used for this, and you want to limit its access as much as you can. This is known as the Principle of Least Privilege. So don’t reuse this service account for anything else and don’t modify its privileges. Step 1 done.
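If you prefer the terminal, here’s a minimal sketch of the same setup with gcloud; the account name gcr-pull and the project ID placeholder are assumptions, so adjust them to your own project:

# Create the dedicated service account
gcloud iam service-accounts create gcr-pull \
    --display-name "GCR image puller"

# Grant read-only access to the storage buckets backing GCR
gcloud projects add-iam-policy-binding <your-project-id> \
    --member "serviceAccount:gcr-pull@<your-project-id>.iam.gserviceaccount.com" \
    --role "roles/storage.objectViewer"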

Pro tip: if you want to make it even more restrictive, you can go further and limit this Service Account to only access the specific bucket that stores the images. In order to use this option, the bucket needs Uniform bucket-level access enabled (which overrides the default ACLs). When you’re creating the service account, click “Add Condition” and type:

resource.type == "storage.googleapis.com/Bucket" &&
resource.name.startsWith("projects/_/buckets/<docker-images-bucket-name>")

Once it’s created, select Create Key and pick JSON as the format; the file will be downloaded to your computer. This file is literally the key to all the Docker images you upload, so store it safely! Before moving forward, open it with a text editor and take your time to familiarise yourself with its contents.
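As an aside, you can also generate the key from the terminal; this sketch assumes the hypothetical gcr-pull account from before:

# Creates and downloads a JSON key for the service account
gcloud iam service-accounts keys create key.json \
    --iam-account gcr-pull@<your-project-id>.iam.gserviceaccount.com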

Storing the key in the cluster

For the next step, simply run:
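Here’s a minimal sketch of that command, assuming the key file you downloaded sits at ./key.json; gcr-json-key is the Secret name used throughout the rest of this article:

# Create a docker-registry Secret holding the GCR credentials.
# _json_key is the fixed username GCR expects for JSON key auth.
kubectl create secret docker-registry gcr-json-key \
    --docker-server=gcr.io \
    --docker-username=_json_key \
    --docker-password="$(cat ./key.json)" \
    --docker-email=any@valid.email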

This will create a Secret with all the necessary information. If you want the full list of available Docker servers, check this page. Step 2 done.

Pro tip: I chose a US Docker server because the cluster I created is also in the US, so there are no cross-region network egress costs.

With the kubectl get secrets command you can see it was correctly created.
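The output should look roughly like this (names and ages will vary):

kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-xxxxx   kubernetes.io/service-account-token   3      10d
gcr-json-key          kubernetes.io/dockerconfigjson        1      1m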

Now that it’s registered, remember there are two ways of consuming it:

  • Configuring the default ServiceAccount
  • Configuring the Pod spec

To me (and according to the official Kubernetes docs), the second option is the preferred way when dealing with private registries, mainly because it gives you more granular control over how each Deployment pulls its images. Bear in mind that each new Deployment will then need to define the name of the secret to use.

Configuring the default ServiceAccount

If you execute the describe command on the default Service Account (sa is a shortcut for serviceaccount), you’ll see that the Image pull secrets field is empty. This is the field the cluster checks to know where to grab the key from when Pods using this Service Account are created.

Running kubectl describe serviceaccount default will print something like:
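Name:                default
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   default-token-xxxxx
Tokens:              default-token-xxxxx
Events:              <none>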

Now it’s just a matter of telling the default Service Account to use the content of the gcr-json-key secret when pulling Docker images, simply by doing:
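A minimal sketch of that command, patching the Image pull secrets field we saw above:

kubectl patch serviceaccount default \
    -p '{"imagePullSecrets": [{"name": "gcr-json-key"}]}'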

Configuring the Pod spec

If you take a look at the imagePullSecrets section in the sketch below, you’ll see that I just modified my deployment.yaml file to specify which secret it should use to pull images. The other advantage of this method is that it allows you to specify multiple secrets (e.g.: if you have a multi-container Pod, and each image is located in a different registry).
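A minimal deployment.yaml sketch; the app name and image path are hypothetical, but the secret name matches the one we created earlier:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: gcr.io/<your-project-id>/my-app:latest
      # Tells the kubelet which Secret(s) to use when pulling this Pod's images
      imagePullSecrets:
        - name: gcr-json-key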


Going the extra mile

By default, Kubernetes doesn’t provide an encryption layer for Secrets, so it relies on the provider for this. GCP provides encryption at rest by default at the infrastructure layer, and also (optionally) at the application layer, using Cloud KMS.

To enable this in your cluster, you just have to specify an encryption key. To do this, in your terminal, run:
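A sketch of the KeyRing creation; the name and location are assumptions, so match them to your cluster’s region:

# Create a KeyRing in the same location as the cluster
gcloud kms keyrings create gke-secrets-keyring \
    --location us-central1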

This will create a KeyRing, which is just a logical grouping of multiple keys to facilitate their management. The next step is to add a key to it:
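Again, a sketch using the hypothetical names from above:

# Create a symmetric encryption key inside the KeyRing
gcloud kms keys create gke-secrets-key \
    --location us-central1 \
    --keyring gke-secrets-keyring \
    --purpose encryption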

Once this is done, you’ll have to allow the GKE service account to access this key to encrypt/decrypt the secrets:
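A sketch of the IAM binding; <project-number> is your GCP project number, and the member shown is GKE’s service agent account:

# Allow GKE's service agent to use the key for encrypt/decrypt
gcloud kms keys add-iam-policy-binding gke-secrets-key \
    --location us-central1 \
    --keyring gke-secrets-keyring \
    --member serviceAccount:service-<project-number>@container-engine-robot.iam.gserviceaccount.com \
    --role roles/cloudkms.cryptoKeyEncrypterDecrypter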

Finally, just tell the cluster to start using it:
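A sketch of the cluster update, still assuming the hypothetical names above; note that the key is referenced by its full resource path:

gcloud container clusters update <cluster-name> \
    --region us-central1 \
    --database-encryption-key projects/<project-id>/locations/us-central1/keyRings/gke-secrets-keyring/cryptoKeys/gke-secrets-key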

This last step might take a while, so sit back and relax until it’s complete.

One last word on this topic: this key is now critical to the operation of your cluster. If you delete the key, your cluster will become virtually useless: you won’t be able to access your encrypted information again, so no upgrades, no rollbacks, no nothing. You know what Uncle Ben said about situations like this.

Bonus track

We’ve pretty much covered everything, but I thought I’d leave you with an extra that’s not strictly necessary but can save you some trouble.

Recently I had to migrate a whole lot of projects running on Kubernetes to use a private container registry at eu.gcr.io instead of gcr.io (again, because of cross-region egress costs), and I didn’t want to create another secret for this new domain, because that would mean also updating all the deployment.yaml files. Inspecting the contents of the gcr-json-key Secret, I noticed this:
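A sketch of that inspection; the jsonpath escapes the leading dot in the key name, and the decoded output is abbreviated:

kubectl get secret gcr-json-key \
    -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode

{"auths":{"gcr.io":{"username":"_json_key","password":"...","email":"...","auth":"..."}}}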

As you can see, the command we ran when we created the Secret produced a base64-encoded version of a JSON file with a map, whose key is the domain of the container registry. This means that I should be able to add the same credentials under a different domain. With a little help from StackOverflow, I found a simple way to do this:
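A sketch of the approach, assuming jq is installed (on macOS, drop the -w 0 flag from base64):

# Copy the gcr.io credentials under the eu.gcr.io key,
# re-encode the config, and patch the Secret in place
NEW_CONFIG=$(kubectl get secret gcr-json-key \
    -o jsonpath='{.data.\.dockerconfigjson}' \
    | base64 --decode \
    | jq -c '.auths["eu.gcr.io"] = .auths["gcr.io"]' \
    | base64 -w 0)

kubectl patch secret gcr-json-key \
    -p "{\"data\":{\".dockerconfigjson\":\"$NEW_CONFIG\"}}"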

That’s it! Hope you enjoyed the article and, most importantly, that it helps you understand a little bit better how to integrate these two platforms.
