
Migrating to Kubernetes Part 2 – Deploy with kubectl


Written by: Pirmin Gersbacher, Can Kattwinkel, Mario Sallat

Migrating from Bare Metal to Kubernetes

Interest in software containers is a relatively recent trend in the developer world. Classic VMs have not yet lost their place in a world still full of monoliths, but the trend is clearly towards microservices, where containers can play to their strengths in lightweight footprint and performance. Several factors cause headaches for developers when using software containers: How are containerized applications properly orchestrated? What about security if the Linux kernel is compromised? How is the data of an application inside a volatile container properly persisted?

This part of the article focuses on Docker container technology and how it interacts with the container orchestration system Kubernetes, an important building block of the growing cloud computing infrastructure à la Google Cloud Platform or Amazon Web Services (AWS).

The role of Kubernetes as an automated and scalable management system for containerized applications is currently the subject of many discussions. The open source system, originally developed by Google, offers efficient use of resources, high horizontal scalability, reliability through self-healing and improved process automation (https://cloud.google.com/kubernetes-engine/?hl=en).

As described in the first part, the objective in this chapter is to dockerize the parts of a classic 3-tier architecture consisting of the presentation layer (client tier), the logic layer (server tier) and the data storage layer (data tier). The first step is to deploy the application manually on a local Kubernetes cluster using Minikube and run it entirely within the cluster.

Dockerization

When working with applications, it often makes sense to package and distribute them in a way that lets them be easily exchanged between systems. This is where Docker comes in. As the de facto standard container runtime, Docker allows you to package an application artifact into a container image and push it to a remote registry, from where it can be pulled by other systems. The following section explains the steps to get the application into Docker image format and how to launch it.

Building Application Images with Docker

Now the Docker images can be created. A separate Dockerfile is used for the client and the server; this file automates the process of building an image. Each Dockerfile has to be placed in the root directory of its subproject under the name Dockerfile. The file first defines which base image is to be used, in this case the Alpine image for Node.js. Then the artefacts are copied to the previously referenced path. EXPOSE configures the container to listen on the specified port, and CMD defines the command used to start the container.

Dockerfile Client:
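The original file is not reproduced here; a minimal sketch following the description above could look like this (the Node version, working directory and start command are assumptions):

# Alpine image for Node.js as base image (version assumed)
FROM node:10-alpine
# path into which the client artefacts are copied
WORKDIR /usr/src/app
COPY . .
# the client listens on port 4200
EXPOSE 4200
# start the client (start script assumed)
CMD ["npm", "start"]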

Dockerfile Server:
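Analogously, a sketch of the server Dockerfile (working directory and entry file name are assumptions):

# Alpine image for Node.js as base image (version assumed)
FROM node:10-alpine
WORKDIR /usr/src/app
# copy the server artefacts into the image
COPY . .
# the server listens on port 4000
EXPOSE 4000
# start the server (entry file name assumed)
CMD ["node", "server.js"]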

Build Image for Client:
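Run the build from the client subproject root; the image name and tag used here are arbitrary example choices:

docker build -t client:v1 .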

Build Image for Server:
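And the same from the server subproject root:

docker build -t server:v1 .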

Running Containers with Docker

The following shell commands can be used to start the containers locally (once the local database is running) and to open the client in the browser on the specified port.
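Assuming the image names from above, a sketch of the commands; the port mappings match the ports exposed in the Dockerfiles:

# start the server on port 4000 and the client on port 4200
docker run -d -p 4000:4000 server:v1
docker run -d -p 4200:4200 client:v1
# the client is then reachable at http://localhost:4200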

Deploy Kubernetes Cluster

To start a Kubernetes cluster locally, Minikube is required. Minikube launches a single-node cluster within a VM. A prerequisite for running Minikube is a hypervisor such as VirtualBox installed on the system. Further installation instructions for Minikube can be found at Install Minikube.

Installing Kubernetes locally with Minikube

Starting Minikube automatically brings up a Kubernetes cluster as the first step. It is recommended to specify parameters for how many CPUs and how much memory Minikube should be allocated.
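A start command with resource parameters might look like this; the CPU and memory values are example values, not requirements:

minikube start --cpus 4 --memory 8192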

Kubernetes is designed so that internal components such as pods and services have IPs that are used for routing within the cluster and are not reachable from outside it. To make the cluster accessible from the outside, an ingress controller is required. This is not provided by Minikube out of the box and must be activated via an add-on.
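The ingress controller is activated with the Minikube add-on command:

minikube addons enable ingress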

The eval $(minikube docker-env) command configures the local shell so that the Docker daemon inside the Minikube instance is reused. This speeds up development, because locally built images are immediately available inside the cluster, and it allows you to talk to Minikube's Docker daemon directly from the command line.

As described in the previous section, the Docker images for the server and the client can now be built within the Minikube instance.
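With the environment pointed at Minikube's Docker daemon, the builds are the same as before; the relative paths to the two subprojects are assumptions:

eval $(minikube docker-env)
docker build -t client:v1 ./client
docker build -t server:v1 ./server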

kubectl CLI

All components of a Kubernetes cluster are managed using the command line tool kubectl. The link Install kubectl is useful for installing the CLI. The kubectl create command is used to create the local deployments.

At this point YAML files are used to describe how each deployment is defined. If several units are to be deployed, it is advisable to split the deployments into separate files. This keeps the indentation of the YAML notation readable, and later in this series the deployments will be performed with Helm charts, where a single file per deployment is advantageous. Nevertheless, the following command creates deployments for all pod and service definitions belonging to this application.
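Assuming all manifests shown in the following sections are stored in a single directory (the directory name here is an assumption), they can be created in one go:

kubectl create -f ./kubernetes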

Pod Deployments

The smallest deployable unit within a Kubernetes cluster is a pod. A pod consists of one or more Docker containers.

Pod Deployment Client (client.deployment.yaml):
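A sketch of the client deployment; labels, replica count and the imagePullPolicy are assumptions (Never is used so Kubernetes picks up the image built inside Minikube instead of pulling from a registry):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
        - name: client
          # image built inside the Minikube Docker daemon
          image: client:v1
          imagePullPolicy: Never
          ports:
            - containerPort: 4200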

Pod Deployment Server (server.deployment.yaml):
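The server deployment follows the same pattern, only with port 4000; database connection settings (for example taken from the ConfigMap defined below) would be added as environment variables:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
        - name: server
          image: server:v1
          imagePullPolicy: Never
          ports:
            - containerPort: 4000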

Service Deployments

Each pod has its own IP address. If a node in the cluster dies, its pods die with it. Replacement pods are then created automatically and receive new IP addresses. A service in Kubernetes defines a logical set of related pods and manages the allocation of IP addresses, providing a stable address for them. A service is defined for the client and for the server.

Service Deployment Client (client.service.yaml):
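A sketch of the client service; the selector must match the labels of the client pods, and the service name matches the name referenced later by the Ingress:

apiVersion: v1
kind: Service
metadata:
  name: client-service
spec:
  selector:
    app: client
  ports:
    - port: 4200
      targetPort: 4200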

Service Deployment Server (server.service.yaml):
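The server service is analogous, using port 4000:

apiVersion: v1
kind: Service
metadata:
  name: server-service
spec:
  selector:
    app: server
  ports:
    - port: 4000
      targetPort: 4000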

Ingress Controller Deployment

As described earlier, the cluster needs an ingress controller to enable routing from outside the cluster. The rules are defined as follows: the client service is exposed under the path / on port 4200, and the server service under the path /api on port 4000.

Ingress Deployment (ingress.yaml):
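A sketch of the Ingress with the two rules described above. The extensions/v1beta1 schema is assumed here, matching clusters of that era; newer clusters use networking.k8s.io/v1 with a slightly different backend syntax:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
    - http:
        paths:
          # client under / on port 4200
          - path: /
            backend:
              serviceName: client-service
              servicePort: 4200
          # server under /api on port 4000
          - path: /api
            backend:
              serviceName: server-service
              servicePort: 4000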

PostgreSQL Deployment

The PostgreSQL database configuration is stored in a ConfigMap. It contains the database name, user and password.

ConfigMap Deployment (postgres.configmap.yaml):
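A sketch of the ConfigMap; the key names follow the environment variables expected by the official PostgreSQL image, and the values are placeholders:

apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:
  POSTGRES_DB: appdb
  POSTGRES_USER: appuser
  POSTGRES_PASSWORD: apppassword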

As already mentioned, a Docker container is volatile. Persistent volumes and persistent volume claims are used to persist data. The path (path: "/mnt/data") is where the data is stored locally on the host system.

PersistentVolume and PersistentVolumeClaim Deployment (postgres.storage.yaml):
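A sketch of the volume definitions; the names, storage class and size are assumptions, while the hostPath matches the path mentioned above:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    # data is stored here on the host (the Minikube VM)
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi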

The PostgreSQL Docker image in version 10.4 from the public Docker Hub is used for the PostgreSQL database pod deployment. The manifest uses the previously defined ConfigMap. In addition, the deployment mounts the previously created volume and sets port 5432, at which the database will be accessible.

Pod Deployment PostgreSQL (postgres.deployment.yaml):
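A sketch of the database deployment that ties the pieces together; the ConfigMap and claim names refer to the sketches above, and the mount path is the default data directory of the PostgreSQL image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          # official image from the public Docker Hub
          image: postgres:10.4
          ports:
            - containerPort: 5432
          # database name, user and password from the ConfigMap
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-pvc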

Accessing the Application

Thanks to the Ingress, the application is now accessible from the outside. Recall that the Ingress exposes the client-service under / while the server-service is exposed under /api. The only thing missing to view the deployment in the browser is the IP of the cluster, which can be obtained with another Minikube command. When opening the IP in the browser and accepting the HTTPS warning, the deployed page should appear.
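The cluster IP is obtained as follows:

minikube ip
# open https://<cluster-ip>/ in the browser and accept the certificate warning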

Helpful Hints

To clean up all deployments from the cluster, run the command below. Be careful: the persisted data and all other namespaces can be affected as well!
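A clean-up along these lines removes the resources created above; the exact list of resource types is an assumption, and adding --all-namespaces would extend the deletion beyond the current namespace:

kubectl delete deployments,services,ingress,configmaps,pvc,pv --all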

To debug your application, you can get the logs from the containers running inside the cluster. To do so, list your pods and then show the log of the pod you want to inspect.
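Listing the pods and fetching a log could look like this; the pod name is of course specific to your cluster:

kubectl get pods
kubectl logs <pod-name>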


