A Kubernetes Tale: Part II — Gotta Kubernetise ’em all

Make your application join the grown-ups table

Sebastian Scotti
11 min read · Nov 25, 2020
Photo by Andrew Neel on Unsplash

You got your Docker image ready. Now how do you run it? Pods? ReplicaSets? Services? Fear no more, here comes the cavalry.

This is a series of five articles. If you want to check out Part I, follow the link below:

I’ll try to make this as clear as possible because there are a few steps to follow, and they might seem like a lot the first time. There are three big steps, listed below:

Throughout this article, I assume you have a basic knowledge of how Kubernetes works and of the different components involved. If you don’t know enough about it, some time ago I wrote an article where I describe it all. Feel free to check it out too 👇

Before starting

The code 👾

You can find all the code used throughout this article in this repository:

The costs 💸

To give you a rough idea, while writing this article I “spent” £0.30, and that’s with forgetting to turn off the Nodes at night. I say “spent” because, as I’m also using GCP’s free trial, I didn’t really pay any money.

Setting up the environment

If you come from Part I, all you have is a Docker image in your local repository that you can run with a simple command. But running that same image in your Kubernetes cluster requires some extra configuration, more concretely:

  1. A Kubernetes cluster — because, duh
  2. A Container Registry — a place to push your Docker images so they’re accessible from your cluster (unless you want your clusters to pull the images straight from your computer, which, I don’t know about you, but doesn’t sound like a good idea to me)
  3. A credit card — because knowledge is free, but the servers where you’ll run your code aren’t (don’t panic, you’ll still have money left for your Christmas presents)

Create the cluster

First, sign up on GCP and follow the steps to start your free trial (if you choose AWS/Azure, you’re on your own today, mate). No instructions for this part: Google “gcp” and literally the first result that comes back is “GCP — Get started for free”. There you go, point number 3 already done. Easy, right? (Something that requires your money always is.)

In the Kubernetes Engine cluster creation page (enable the service API if necessary) you can tweak multiple aspects of the cluster configuration. If you don’t want to think, just click the My first cluster quick configuration, which is quite handy if you’re a bit lost with all the buttons and knobs. Now, if you’re willing to put in about 3 minutes of work, to keep costs down for the test I suggest you use this config:

  • Zonal cluster in the us-central1 region — one of the cheapest regions where the VMs (Nodes) will be created. Within that region, pick whichever zone you like. This means you’ll have your cluster in only one zone of that region.

While choosing any region might be fine for an initial test, a production cluster might require that you deploy it in specific areas (e.g., after GDPR was introduced, companies may be required to process their European users’ data in datacenters located on the continent). Here you can find the full list of GCP regions and zones.

  • Stable release channel — v1.16.13 at the time of writing this article. This option is entirely up to you: auto-upgrades could break your releases without your knowledge, and you could find out at the worst possible time (been there, done that). The alternative is to select a static master version, in which case you’ll be responsible for doing the upgrades yourself. Here you have the latest release notes.
  • Preemptible e2-small machine type — you get a fraction (25%) of 2 vCPUs, with bursts of up to 2 full vCPUs for short periods; also, as they’re preemptible, these VMs live for up to 24 hours and offer no availability guarantee. So using them for testing… good. Using them for production… bad.
  • Single NodePool with 1 node with a 10GB (minimum allowed) boot disk — you can resize the NodePool and add/remove nodes later.

Hit Create and you’re golden! Wait a few minutes until it’s fully provisioned. In the meantime, there are some other things to do. Point number 1 done!

If you’re a terminal person, just run this:
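    # A sketch mirroring the config above; the cluster name and zone are examples
    gcloud container clusters create my-first-cluster \
        --zone us-central1-a \
        --release-channel stable \
        --machine-type e2-small \
        --preemptible \
        --num-nodes 1 \
        --disk-size 10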

Link your terminal with your GCP account

Optional, but recommended

There are three ways to communicate with the Kubernetes API:

  • the CLI,
  • the REST API, or
  • GCP’s web console

Anyway, we’re going to use…yes, you guessed right, the CLI.

To do this, we first need to link our terminal with our GCP account. Download the gcloud SDK and follow the instructions to set it up.
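Once it’s installed, the setup boils down to a couple of commands (the project ID here is a placeholder):

    gcloud init                                    # log in and pick a default project/zone
    gcloud config set project my-awesome-project   # or set the project explicitly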

Just so you know, GCP also offers something called Cloud Shell, which gives you a shortcut to all your Cloud resources. If you feel like it, just go to the top menu bar and click on the terminal-like icon. I prefer my local terminal because it’s faster and it’s handy if I need to use files from my computer.

Connect to your brand new cluster

Once your cluster is created, you’ll have a Connect button that will open a pop-up with a command to link your terminal to it: copy and paste it in your terminal.
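The command in the pop-up will look roughly like this (the names are placeholders matching the cluster created above):

    gcloud container clusters get-credentials my-first-cluster \
        --zone us-central1-a \
        --project my-awesome-project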

This is how your cluster should look on the list.

Pro tip: when you have a long list of clusters, it’s good to create aliases for all of them, so you don’t need to go back and forth to this page again.
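For example, something like this in your shell profile (the alias name is up to you):

    alias k8s-test='gcloud container clusters get-credentials my-first-cluster --zone us-central1-a --project my-awesome-project'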

Once you run the connect command, you’ll see a message saying that kubeconfig was correctly configured, meaning the connection was successful.

Setting up a container registry access

Originally I started writing this part here, but I finally decided it deserved its own article. Follow the link below to find out all about it 👇

Creating the deployment

Everything in the previous sections was a one-off thing. What follows, however, is configuration that you’ll have to repeat each time you want to deploy a new application to your cluster.

The dotted lines are optional and will depend on the needs of the deployment

Pushing to the Container Registry

If you’re doing this for the first time, you’ll first need to authenticate Docker against GCR. Remember that we gave our cluster access to it to pull images? Well, in this case we need to do the same, using our User Account to push said images. Luckily, gcloud provides a helper to do this in no time, just by running:
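    gcloud auth configure-docker    # registers gcloud as a Docker credential helper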

If you try to push without running the command above first, you’ll get an error message like:

unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication

Once Docker is configured, you’ll need to tag and push your Docker image with a GCR URL. How? Like this:
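    # Tag the local image with the GCR URL (the project ID and tag are examples;
    # the local image name is whatever you built in Part I)
    docker tag dockeriser:1.0 us.gcr.io/my-awesome-project/dockeriser:1.0
    # Push it to the registry
    docker push us.gcr.io/my-awesome-project/dockeriser:1.0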

One important point here: it’s quite obvious, but you need to use the same GCR region domain for which you set up the Image Pull Secret in the cluster. Otherwise, you’ll be able to push, but the cluster won’t be able to pull, as it won’t have credentials configured for that domain (in the example in Setting up the container registry, I used us.gcr.io).

The first time you run this it’ll have to push all the layers so it’ll take longer, depending on your Internet connection, your location and the location of the GCR region you chose.

On the left, you can observe how the image will appear under the Container Registry. On the right, it’ll show how it’s actually stored in Cloud Storage: the many files there are the different layers of your image.

Create a Service

This is the part that can vary, because the type of Service that you create depends entirely on how you’re planning to use it. If you’re unsure, check my article where I explain the differences between the different types. In this case, I’m going to create a headless service.
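Here’s a minimal sketch of what service.yaml can look like (the Service name matches the one used later in this article; clusterIP: None is what makes it headless):

    apiVersion: v1
    kind: Service
    metadata:
      name: dockeriser-headless-service
    spec:
      clusterIP: None        # no virtual IP: DNS resolves straight to the Pods
      selector:
        app: dockeriser      # route to Pods labeled app=dockeriser
      ports:
        - protocol: TCP
          port: 8080         # port the Service listens on
          targetPort: 8080   # port on the Pods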

The definition is quite straightforward: listen on port 8080 and redirect to the Pods that have a label with key app and value dockeriser, targeting their port 8080.

To apply it, run kubectl apply -f service.yaml

Create ConfigMap and Secrets

Optional

There could be the rare occasion where you don’t need either of these (hence the Optional sign above) but it’s quite standard configuration management practice to have them. These two resources are almost the same, except that, as you can imagine, one will be in charge of holding sensitive information (guess which!).

We can create a simple ConfigMap to store, let’s say, the environment name, which could later be used to tweak the internal configuration of the app.
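A minimal configmap.yaml could look like this (the resource name and key are illustrative; the Deployment further down assumes them):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: dockeriser-config
    data:
      environment: development   # the environment name our app will read

Add this to your cluster by running kubectl apply -f configmap.yaml.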

Secrets get special treatment though, as their information needs to be encoded in base64. Let’s say we had to handle our credentials to connect to a database with a username and password; here’s what it’d look like:
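    apiVersion: v1
    kind: Secret
    metadata:
      name: dockeriser-secret   # illustrative name, referenced by the Deployment below
    type: Opaque
    data:
      username: YWRtaW4=        # base64-encoded values, not plain text
      password: cGFzc3dvcmQ=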

Visit base64decode.org to see the actual values of the Secrets

Again, simply add it to your cluster by running kubectl apply -f secret.yaml

Now, if you’re wondering, “OK, this looks fine, but it means I’ll have my username and password exposed in my source code”, you’re right. This is a topic for the third part of this series, using Helm Secrets (which in turn delegates the encryption at rest to Sops). Stay tuned.

Putting it all together: Creating the Deployment

I’m going to deploy the same application I created in Part I, only I tweaked it slightly to show how ConfigMaps and Secrets can be consumed from within the app.

Also, Kubernetes needs to know when it can start sending traffic to our new Pods, and then, over time, keep checking the Pods’ health to understand whether they can receive traffic and, ultimately, whether they need to be restarted.

So far we know that:

  • (1) The Pods need to be labeled with app=dockeriser so they’re reachable through the Service
  • (2) The Pod will need a private key Secret to gain access and pull the images from our private Container Registry
  • (3) The image will be stored in us.gcr.io/my-awesome-project/dockeriser
  • (4) The application needs to listen on port 8080
  • (5) We need to read the DB’s credentials from the Secret
  • (6) We need to read the environment name from the ConfigMap
  • (7) The container needs to provide an endpoint to determine if it’s ready to receive traffic
  • (8) The container needs to provide an endpoint to determine if it’s still alive

You can see below how each point is expressed:
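Below is a sketch of deployment.yaml with each point marked as a comment. The image tag, the probe endpoint (/health) and the pull Secret name (gcr-pull-secret, created in the container registry article) are assumptions, so adjust them to your setup:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: dockeriser
      labels:
        app: dockeriser                  # earmarks the Deployment itself
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: dockeriser                # which Pods this Deployment looks after
      template:
        metadata:
          labels:
            app: dockeriser              # (1) makes the Pods reachable through the Service
        spec:
          imagePullSecrets:
            - name: gcr-pull-secret      # (2) Secret used to pull from the private registry
          containers:
            - name: dockeriser
              image: us.gcr.io/my-awesome-project/dockeriser:1.0   # (3)
              ports:
                - containerPort: 8080    # (4)
              env:
                - name: DB_USERNAME      # (5) DB credentials from the Secret
                  valueFrom:
                    secretKeyRef:
                      name: dockeriser-secret
                      key: username
                - name: DB_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: dockeriser-secret
                      key: password
                - name: ENVIRONMENT      # (6) environment name from the ConfigMap
                  valueFrom:
                    configMapKeyRef:
                      name: dockeriser-config
                      key: environment
              readinessProbe:            # (7) when can this Pod receive traffic?
                httpGet:
                  path: /health
                  port: 8080
                initialDelaySeconds: 10
                periodSeconds: 5
              livenessProbe:             # (8) is this Pod still alive?
                httpGet:
                  path: /health
                  port: 8080
                initialDelaySeconds: 15
                periodSeconds: 10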

Just like before, run kubectl apply -f deployment.yaml. Wait a few seconds and, if you did it right, you should see two Pods with names starting with dockeriser when you run kubectl get po.

The random characters in the Pod name? They’re simply
[deploy-name]-[replicaset-id]-[pod-id]

A few clarifications on the spec above:

  • The replicas field configures this deployment to have two replicas — use whatever number makes sense for you.
  • The label app=dockeriser under the Deployment’s metadata is there simply to earmark it, whilst the one under spec.selector tells the Deployment which Pods to look after.
  • The numbers (timeouts, delays, etc.) for both the liveness and readiness probes depend entirely on the application and the hardware it runs on — play with them until you hit the sweet spot.
  • The environment name, DB user and password are passed onto the container as environment variables, ready to be used by your application.

Now, I slightly modified the launcher of my application to demonstrate how to grab these from your application (in as framework/language-agnostic a way as possible). In case you want to see how to do this in a more Spring-friendly fashion, check out the applications.yaml file, consumed from EnvironmentProperties.

Capturing the logs, you can see that the environment name and DB credentials are printed on startup.
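To capture the logs yourself, you can tail them by label (assuming the label from the Deployment above):

    kubectl logs -l app=dockeriser --tail=20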

A bit of homework: it’s recommended that you specify the resources your application will need (and its limits), because Kubernetes will be quick to evict these Pods if your Nodes run low on CPU/RAM (Pods without resource requests get the lowest QoS class). Setting the right values can help you avoid this situation. Check out this page to find out more.
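As a starting point, that means adding something like this to the container spec in the Deployment above (the numbers are placeholders; profile your app to pick real ones):

    resources:
      requests:            # what the scheduler reserves for the container
        cpu: 100m
        memory: 128Mi
      limits:              # hard caps the container can't exceed
        cpu: 250m
        memory: 256Mi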

Wrapping up

Now, let’s verify that everything works correctly. First let’s list our running Pods:

Run kubectl get pods -o wide to get a nice overview of your running Pods

From the screenshot above, you can see that our Pods are available (inside the cluster) on the IPs 10.56.0.52 and 10.56.0.53. These are the IPs the Service should use to redirect the traffic.
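You can verify this through the Service’s Endpoints (the Service name comes from the manifest above):

    kubectl get endpoints dockeriser-headless-service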

When requesting the state of our Service, we can see that it contains our two Pods with their corresponding ports.

Testing the deployment

First of all, let’s start a simple Pod to test the service and verify that it’s working correctly:
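    # --rm deletes the Pod once you exit the shell
    kubectl run -it --rm test-pod --image=ubuntu --restart=Never -- bash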

This will spin up a Pod running an Ubuntu image and link your terminal to that running instance (if you exit it, the Pod will be destroyed). Once you’re connected, install curl by running apt-get update && apt-get install -y curl. The headless Service we created uses its name (dockeriser-headless-service) as its hostname, so we can use it like below.
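For example (the path depends on what your application exposes; the root path here is just a guess):

    curl http://dockeriser-headless-service:8080/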

If you hit it multiple times, you should see that the responding IP changes

And that’s it! Now you have your application running on your Kubernetes cluster. Just remember to delete the cluster to avoid getting unnecessary charges!

Or, if you only have one cluster in your billing account (you won’t be charged a management fee for it), just resize it to 0 nodes:
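Both are one-liners (cluster name and zone as before):

    # Resize the NodePool to 0 nodes (stops paying for VMs)
    gcloud container clusters resize my-first-cluster --num-nodes=0 --zone us-central1-a
    # Or delete the cluster altogether
    gcloud container clusters delete my-first-cluster --zone us-central1-a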

One last word

If you liked this article, follow me to get notified when I publish the next parts! Next stop: Helm.
