Autoscaling of Docker Containers in Google Kubernetes Engine

The name Kubernetes originally comes from the Greek word for helmsman, the person who steers a ship or boat. The Kubernetes logo, which represents a ship's steering wheel, was most likely inspired by this. [1]

The choice of the name can also be interpreted to mean that Kubernetes (the helmsman) steers a ship carrying several containers (e.g. Docker containers). It is therefore responsible for orchestrating these containers and bringing them safely to their destination, ensuring that the journey goes smoothly.

Apparently Kubernetes was called Seven of Nine within Google. Star Trek fans among us should be familiar with this reference. Since the logo has seven spokes, there might be a connection between it and this name. [2]

This blog post was created during the master lecture System Engineering and Management. In this lecture we deal with topics that interest us and that we would like to experiment with. We have already worked with Docker containers very often and appreciate their advantages. Of course we have also worked with several containers within a closed system, orchestrated by Docker Compose configurations. Nevertheless, especially in connection with scaling and big companies like Netflix or Amazon, you hear the buzzword Kubernetes and quickly find out that distributing a system across several nodes requires a platform such as Kubernetes.
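To give a first impression of the topic in the title, the following is a minimal sketch (not taken from the post itself) of how an autoscaling rule could be created with the official Kubernetes Python client. The deployment name my-app, the default namespace and the 70 % CPU target are illustrative assumptions.

# Minimal sketch: create a HorizontalPodAutoscaler with the official
# Kubernetes Python client (pip install kubernetes).
# Assumes a Deployment named "my-app" already exists in the "default" namespace.
from kubernetes import client, config

config.load_kube_config()  # uses the local kubeconfig, e.g. pointing at a GKE cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="my-app-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="my-app"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # scale out above 70 % average CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)

With such a rule in place, the cluster adds or removes replicas of the deployment on its own, which is exactly the kind of behaviour the post examines in Google Kubernetes Engine.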

Continue reading

Kubernetes: from Zero to Hero with Kompose, Minikube, k3sup and Helm — Part 1: Design

This is part one of our series on how we designed and implemented a scalable, highly available and fault-tolerant microservice-based Image Editor. The series covers the various design choices we made and the difficulties we faced during the design and development of our web application. It shows how we set up the scaling infrastructure with Kubernetes and what we learned about designing a distributed system and developing a production-grade Kubernetes cluster running on multiple nodes.

Continue reading

Creating an Email Server Environment with Docker and Kubernetes

As part of the lecture “Software Development for Cloud Computing” our task was to bring an application into a cloud environment. When we presented our software project MUM at the Media Night in our 4th semester, we talked with a few people about dockerizing MUM together with a whole email server configuration. While brainstorming a project idea for the lecture, we remembered these conversations. And since Docker by itself would not have fulfilled all of our requirements, we decided to create a Kubernetes cluster that would house a complete email server environment and would be even easier to install. That way we could learn more about containerization and how clustering with Kubernetes works.

How Does Email Work?

First of all, we need to take a small trip into the world of email to better understand what we actually wanted to do.

Continue reading

CI/CD infrastructure: Choosing and setting up a server with Jenkins as Docker image

Related articles: ►Take Me Home – Project Overview  ►Android SDK and emulator in Docker for testing  ►Automated Unit- and GUI-Testing for Android in Jenkins  ►Testing a MongoDB with NodeJS, Mocha and Mongoose


This article will run you through the motivation for continuous integration and delivery, choosing a corresponding tool, and picking a server to run it on. It gives a brief overview of IBM Bluemix and Kubernetes as server solutions and then discusses requesting a virtual machine inside a company. There are some useful instructions (for beginners) on generating key pairs for the server on Windows. Next, it explains why to run Jenkins (the CI tool of choice) as a Docker container and gives some instructions to get started. Finally, frequent problems are discussed, which will hopefully save you some time.
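As a taste of the last point, here is a minimal sketch of starting Jenkins as a Docker container, written with the Docker SDK for Python rather than the plain Docker CLI; the container name, the named volume jenkins_home and the standard ports 8080 and 50000 are illustrative assumptions.

# Minimal sketch: start the official Jenkins LTS image as a container
# using the Docker SDK for Python (pip install docker).
# Container name, ports and the named volume are illustrative assumptions.
import docker

client = docker.from_env()

jenkins = client.containers.run(
    "jenkins/jenkins:lts",
    name="jenkins",
    detach=True,
    ports={"8080/tcp": 8080, "50000/tcp": 50000},  # web UI and agent port
    volumes={"jenkins_home": {"bind": "/var/jenkins_home", "mode": "rw"}},
    restart_policy={"Name": "unless-stopped"},
)

# Once Jenkins has finished starting, the initial admin password can be read
# from inside the container:
result = jenkins.exec_run("cat /var/jenkins_home/secrets/initialAdminPassword")
print(result.output.decode())

Using a named volume for /var/jenkins_home keeps the Jenkins configuration and build history even when the container is recreated.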
Continue reading