
Deploy Kubernetes applications with Terraform

A typical deployment of an application to a Kubernetes cluster happens via YAML files. This makes deploying an application very easy, but it has disadvantages as well. The YAML file is only validated inside the cluster, which can lead to problems if the file is invalid or something has changed compared to previous deployments. You won’t see in advance what will happen and what will change when you deploy a new version. Therefore HashiCorp developed a new Terraform provider to manage resources in Kubernetes.

Working with the provider is very easy, but it also has its disadvantages. The provider only supports resources of the Kubernetes v1 API, so we can’t create other resources like deployments or ingresses.

Installation

To enable the Kubernetes provider, we have to add it to our main.tf file and configure how Terraform connects to the cluster:

provider "kubernetes" {
  host = "https://url-to-k8s-cluster"

  client_certificate     = "${file("/root/kubernetes/admin.pem")}"
  client_key             = "${file("/root/kubernetes/admin-key.pem")}"
  cluster_ca_certificate = "${file("/root/kubernetes/ca.pem")}"
}

After this, we can run the command terraform init to download the provider.
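If a kubeconfig file already exists on the machine, the provider can also be pointed at it instead of listing individual certificates. A minimal sketch — the path and the context name my-cluster are only example values, and config_path/config_context are assumptions based on the provider documentation:

```hcl
# Alternative: reuse an existing kubeconfig instead of single certificates.
# The path and context name below are example values.
provider "kubernetes" {
  config_path    = "/root/.kube/config"
  config_context = "my-cluster"
}
```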

Create a namespace and a docker pull secret

To create a namespace we can append the following part to our main.tf file:

resource "kubernetes_namespace" "application" {
  metadata {
    name = "application"
  }
}

This will generate a new namespace called application. If we now run terraform plan, we should see that Terraform refreshes the state and plans to add the namespace to the cluster. By typing terraform apply we can now create the namespace in the Kubernetes cluster. The next step is to add a registry pull secret to allow a later generated replication controller to download the needed Docker image:

resource "kubernetes_secret" "docker_pull_secret" {
  metadata {
    name      = "gitlab-com"
    namespace = "${kubernetes_namespace.application.metadata.0.name}"
  }

  data {
    ".dockercfg" = "${file("${path.module}/docker-registry.json")}"
  }

  type = "kubernetes.io/dockercfg"
}

The name of the newly generated pull secret is gitlab-com, and it will be created in the already existing application namespace. The easiest way to supply the username, password and registry information is to reference a JSON file in the following format:

{
  "registry.gitlab.com": {
    "username": "username",
    "password": "password",
    "email": "[email protected]"
  }
}

Configure our first application

Now we have everything we need to describe our first application:

resource "kubernetes_replication_controller" "ui" {
  metadata {
    name      = "ui"
    namespace = "${kubernetes_namespace.application.metadata.0.name}"
    labels {
      app = "ui"
    }
  }

  spec {
    replicas = 1

    selector {
      app = "ui"
    }

    template {
      image_pull_secrets {
        name = "${kubernetes_secret.docker_pull_secret.metadata.0.name}"
      }

      container {
        name  = "ui"
        image = "registry.gitlab.com/koudingspawn-blog/ui:latest"

        port {
          container_port = 80
        }
      }
    }
  }
}

The first part of the new resource describes the name of the replication controller and the target namespace. In the spec part we specify the containers inside the pod and the replica count. We reference our previously created image pull secret to specify the credentials for downloading the Docker image. Now we can create a service to balance traffic to the ui component:

resource "kubernetes_service" "ui" {
  metadata {
    namespace = "${kubernetes_namespace.application.metadata.0.name}"
    name      = "ui"
  }

  spec {
    selector {
      app = "${kubernetes_replication_controller.ui.metadata.0.labels.app}"
    }

    port {
      port        = 80
      target_port = 80
    }
  }
}
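A service like this is only reachable inside the cluster (the default type is ClusterIP). Since the provider cannot create an ingress, one way to reach the ui from outside is to switch the service type to NodePort. A sketch under that assumption — the resource name ui_external and the port 30080 are only example values:

```hcl
# Hypothetical variant of the service above, exposed on every cluster node.
resource "kubernetes_service" "ui_external" {
  metadata {
    namespace = "${kubernetes_namespace.application.metadata.0.name}"
    name      = "ui-external"
  }

  spec {
    type = "NodePort"

    selector {
      app = "${kubernetes_replication_controller.ui.metadata.0.labels.app}"
    }

    port {
      port        = 80
      target_port = 80
      node_port   = 30080 # example port in the default NodePort range 30000-32767
    }
  }
}
```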

If we now type terraform plan again, we will see that a terraform apply would create the newly written resources (replication controller, pull secret and service). With terraform apply we generate the resources in Kubernetes.

An example output shows how useful this is. Here we change the replica count of the ui from 1 to 2 and see what will happen when we apply the state change:

terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

kubernetes_namespace.application: Refreshing state... (ID: application)
kubernetes_secret.docker_pull_secret: Refreshing state... (ID: application/gitlab-com)
kubernetes_replication_controller.ui: Refreshing state... (ID: application/ui)
kubernetes_service.ui: Refreshing state... (ID: application/ui)

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ kubernetes_replication_controller.ui
      spec.0.replicas: "1" => "2"


Plan: 0 to add, 1 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
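To make sure exactly the planned actions are executed, the plan can be saved to a file and applied later, as the note suggests. A minimal sketch — the file name ui.tfplan is only an example:

```shell
# Save the execution plan to a file, then apply exactly that plan
terraform plan -out=ui.tfplan
terraform apply ui.tfplan
```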
Björn Wenzel


My name is Björn Wenzel. I’m a DevOps engineer with interests in Kubernetes, CI/CD, Spring and NodeJS.