Cloudify Spring Boot Application (Part I)

This blog post is the first in a series of three.

  • In the first post I’ll describe how to Dockerize a Spring Boot application and run it in Kubernetes.
  • The second part of the tutorial looks at how to monitor the application and check that everything is OK.
  • And in the last part of the series I’ll look at how to collect and analyze logs of the Spring Boot application.

Generate a Spring Boot Application

To get started, create a new project at https://start.spring.io with Web and Actuator as dependencies. The Web dependency is for the simple REST controller that returns a Hello World message to the user, and Actuator provides the health endpoint used later for the liveness and readiness probes. More details in the next steps :-).
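
If you prefer the command line, start.spring.io also exposes an HTTP API. A minimal sketch, assuming the current parameter names (check https://start.spring.io for the exact options); groupId and artifactId are only examples:

# Sketch: generate the same project via the start.spring.io HTTP API
$ curl https://start.spring.io/starter.zip \
    -d dependencies=web,actuator \
    -d groupId=de.koudingspawn -d artifactId=demo \
    -o demo.zip
$ unzip demo.zip -d demo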

The REST controller is very easy to create; as described, it only returns a message based on a path variable:

package de.koudingspawn.demo;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/hello")
public class HelloController {

    @GetMapping("/{name}")
    public String helloWorld(@PathVariable("name") String name) {
        return String.format("Hello %s", name);
    }

}

When the application is started, we can see the output of a simple request to it:

$ http GET localhost:8080/hello/world
HTTP/1.1 200
Content-Length: 11
Content-Type: text/plain;charset=UTF-8
Date: Sat, 26 May 2018 12:47:41 GMT

Hello world

We also see that an endpoint for health checks has been created. This is part of the Actuator project. Here is a simple request to check whether the application is up and running:

$ http GET localhost:8080/actuator/health
HTTP/1.1 200
Content-Type: application/vnd.spring-boot.actuator.v2+json;charset=UTF-8
Date: Sat, 26 May 2018 12:48:22 GMT
Transfer-Encoding: chunked

{
    "status": "UP"
}

The important part is the status code 200; it later helps Kubernetes determine whether our application is healthy and working as expected.
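
If you only care about the status code, a quick check could look like this (a sketch using curl; by default Actuator answers 200 when the status is UP and 503 when it is DOWN):

# Sketch: print only the HTTP status code of the health endpoint
$ curl -s -o /dev/null -w "%{http_code}\n" localhost:8080/actuator/health
200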

Dockerize the application

With this, we can now write a Dockerfile to put the application into a container and run it on Kubernetes.

The Dockerfile looks as follows:

FROM openjdk:8-alpine

ENV JAVA_OPTS="-Xmx256m -Xms256m"

COPY target/demo.jar /opt/app.jar
COPY docker-entrypoint.sh /docker-entrypoint.sh

EXPOSE 8080

ENTRYPOINT ["/docker-entrypoint.sh"]

We define JAVA_OPTS; these parameters tell the JVM how much memory our application may consume. Otherwise, in extreme cases, Java will use as much memory as is available on the host system. This can cause Out Of Memory problems for our application and also for other applications running on the same system. The main problem is that all Java versions below Java 10 ignore the Docker memory limit. To prevent this we define a maximum memory usage of 256 MB.

Next, we copy the jar file and a docker-entrypoint.sh file into the Docker image. The jar file is named demo in this case; you can specify the name of the generated jar file in Maven as follows:

<build>
        <finalName>demo</finalName>
        ...
</build>

The docker-entrypoint.sh file looks as follows; it appends the JAVA_OPTS environment variable to the Java command line so the JVM knows the memory settings.

#!/bin/sh

exec java ${JAVA_OPTS} -Djava.security.egd=file:/dev/./urandom -jar /opt/app.jar

Now we can build our Maven application and, after that, the Docker image with the following commands:

$ mvn clean install
$ docker build -t demo .
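
Before deploying, the image can be smoke-tested locally. A sketch, assuming the image was tagged demo as above (the memory limit mirrors what we’ll configure in Kubernetes later):

# Sketch: run the image with a container memory limit and the JAVA_OPTS override
$ docker run -d --name demo-test -p 8080:8080 -m 294m \
    -e JAVA_OPTS="-Xmx256m -Xms256m" demo
$ curl localhost:8080/hello/world
Hello world
$ docker rm -f demo-test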

Continuous Integration with GitLab CI

Now that our application is dockerized and can be deployed to Kubernetes, the Docker image has to be pushed to a registry. You can use the Docker registry available at hub.docker.com, or you can use, for example, gitlab.com, a free service for version control that also provides a Docker registry and Continuous Integration.

A simple .gitlab-ci.yml file for Continuous Integration can look as follows:

image: docker:latest
services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay

stages:
  - build
  - package

maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B"
  artifacts:
    paths:
      - target/*.jar

docker-build:
  stage: package
  script:
  - docker build -t $CI_REGISTRY_IMAGE .
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
  - docker push $CI_REGISTRY_IMAGE
  only:
    - master

In the maven-build job of the build stage we define that the application should be packaged with Maven and that the generated jar file in the target directory should be stored as an artifact.

Then, in the docker-build job of the package stage, we build the Docker image and push it to the GitLab Docker registry. The important part here is that the jar file generated in the maven-build job is handed over as an artifact to the docker-build job, so Docker can find it and copy it into the image.

After you have created a GitLab project you can push the source code and the .gitlab-ci.yml file, and your Docker image will be built on each new push. In this case the latest tag is overwritten with every commit. You can also define a different tag for each build to prevent overwrites and to allow rollbacks, testing, and so on, as shown in the sketch below.
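
For example, the script of the docker-build job could additionally tag and push the image with the commit SHA. A sketch, using GitLab’s predefined $CI_COMMIT_SHA variable:

# Sketch: push a commit-specific tag in addition to latest
docker build -t $CI_REGISTRY_IMAGE:latest -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
docker push $CI_REGISTRY_IMAGE:latest
docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA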

Deploy the application to Kubernetes

The deployment of the application consists of three components:

  1. Deployment
  2. Service
  3. Ingress Definition

Deployment

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  generation: 1
  labels:
    app: microservice
  name: microservice
  namespace: microservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: microservice
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: microservice
    spec:
      containers:
      - name: microservice
        env:
        - name: JAVA_OPTS
          value: -Xmx256m -Xms256m
        image: registry.gitlab.com/koudingspawn-blog/simple-spring-boot:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          protocol: TCP
        resources:
          limits:
            memory: 294Mi
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /actuator/health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 45
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: /actuator/health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5

The first part, the metadata section, describes the name of the deployment and the namespace in which it should be placed. spec.replicas defines how many instances of the application should run, and spec.selector.matchLabels helps Kubernetes find the pods that belong to the deployment. Therefore spec.template.metadata.labels carries the same labels, so a pod started by the deployment can be identified by them.

The spec.template.spec.containers part defines the containers that should run inside a pod. First we set the JAVA_OPTS environment variable as described above. Here we could also define a different value that overrides the default ENV value from the Dockerfile; otherwise the default we specified earlier is used. Next we specify the image that should be deployed and which ports are exposed, in our case only port 8080, where Spring Boot listens.

In the resources.limits part we tell Kubernetes to limit the memory the container can consume to 294 MB. This is a crooked number, but my experience is that the JVM process needs roughly 15% headroom on top of the heap to avoid Out Of Memory kills (256 MB * 1.15 ≈ 294 MB).

The next section, the liveness and readiness probes, defines how to check that the application is up and stays healthy. For the readiness probe (when is the application ready to serve its first traffic?) Kubernetes performs a GET request against the Actuator health endpoint. The first check is delayed by 30 seconds because the Spring Boot application needs some time to start; after that it runs every 10 seconds with an HTTP timeout of 5 seconds. Up to 5 failures are tolerated, and as soon as the first successful response (status code 200) arrives, the pod is marked ready and taken into service.

The same applies to the liveness probe: it runs for the first time after 45 seconds and then every 10 seconds with a timeout of 5 seconds. If it fails 3 times in a row, the container is marked as unhealthy.

This ensures that the application is restarted automatically if it fails at startup or later while running.
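
Once the deployment is applied to the cluster (see the kubectl commands at the end of this post), the effect of the probes can be observed. A sketch; the pod name is just a placeholder:

# Sketch: the RESTARTS column counts liveness-triggered restarts,
# the pod events show readiness/liveness probe failures
$ kubectl -n microservice get pods
$ kubectl -n microservice describe pod microservice-<pod-id>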

Service

apiVersion: v1
kind: Service
metadata:
  name: microservice
  namespace: microservice
  labels:
    app: microservice
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  selector:
    app: microservice

The service acts as a kind of load balancer in front of the running pods. It is also available as a DNS endpoint via KubeDNS in the format <service-name>.<namespace>, here microservice.microservice. (OK, maybe the namespace and the service itself should have different names.)

Here it is defined to accept traffic on port 8080 and forward it to the pods carrying the label app: microservice.
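
To check the service from inside the cluster, a throwaway pod can call the DNS name. A sketch; the busybox image and the cluster.local suffix are assumptions about the cluster setup:

# Sketch: call the service via its KubeDNS name from a temporary pod
$ kubectl run -it --rm dns-test --image=busybox --restart=Never \
    -- wget -qO- http://microservice.microservice.svc.cluster.local:8080/hello/world
Hello world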

Ingress

The Ingress resource defines that the application should be exposed via an Ingress Controller. In this case we use the NGINX Ingress Controller developed by the Kubernetes community.

I’ve described how to install the NGINX Ingress Controller in some of my older blog posts: Install Kubernetes Ingress and Advanced Ingress.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: microservice-ingress
  namespace: microservice
spec:
  rules:
  - host: demo.koudingspawn.de
    http:
      paths:
      - backend:
          serviceName: microservice
          servicePort: 8080
        path: /

Here we define that the NGINX Ingress Controller should watch for requests with the Host header “demo.koudingspawn.de” and proxy these requests to the microservice service we created above.

After these three components are applied to Kubernetes via the following commands, the application should be available and accessible from the outside.

kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
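
A quick sanity check after applying the manifests could look like this (a sketch; <ingress-ip> stands for the address of your NGINX Ingress Controller, or simply point DNS for demo.koudingspawn.de at it):

# Sketch: verify that pods, service and ingress are up, then call the application
kubectl -n microservice get deploy,pods,svc,ingress
curl -H "Host: demo.koudingspawn.de" http://<ingress-ip>/hello/world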

In the next part I’ll show how to monitor our application with Micrometer and Prometheus.

Björn Wenzel

My name is Björn Wenzel. I’m a Platform Engineer working for Schenker with interests in Kubernetes, CI/CD, Spring and NodeJS.