Microservices – Legolizing Software Development I

Welcome to our five-part series about microservices and legolized software development. We’d like to share our lessons learned about architecture, the development environment and security considerations with you. We will also explain some of the issues we stumbled over and which solutions we chose to solve them.

I) In the first part, we present an example microservice structure with multiple services, an interface to foreign APIs and a reverse proxy that also provides load balancing.

II) Part two will take a closer look at how caching improves the heavy and frequent communication within our setup.

III) Security is a topic that always comes up with microservices. We’ll present our solution for managing both authentication and authorization at a single point.

IV) An automated development environment will save you a lot of effort. We explain how we set up Jenkins, Docker and Git to work seamlessly together.

V) We finish with a concluding review of the use of microservices in small projects and give an overview of our top stumbling blocks.

Continue reading

More docker = more power? – Part 3: Setting up the load balancer

To benefit from using a load balancer we evidently need several machines to distribute the traffic across.
Thanks to Docker we simply run

docker run -d -p 81:80 testwebsite:1 

to get a second machine. This time the webserver’s container port 80 is mapped to host port 81. If you now visit <IP OF YOUR VM>:81 you should see your test website.
You can have as many machines as you want; just pay attention to the port mappings.
Of course we don’t want to type this command manually each time we want to create a new container, especially not when we want about 100 new containers. That’s why we wrote a small bash script which does the job for us.
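
The actual script is not reproduced in this excerpt; a minimal sketch of what such a script might look like follows below. The file name, the container naming scheme and the starting port are assumptions for illustration, only the image testwebsite:1 comes from the example above.

#!/bin/bash
# start_containers.sh (hypothetical name) – start N webserver containers,
# mapping host ports 81, 82, ... to container port 80.
# Usage: ./start_containers.sh 100

COUNT=${1:-1}      # number of containers to start (default: 1)
BASE_PORT=80       # first container gets host port BASE_PORT+1

for i in $(seq 1 "$COUNT"); do
  HOST_PORT=$((BASE_PORT + i))
  docker run -d -p "$HOST_PORT:80" --name "testwebsite_$i" testwebsite:1
done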

Continue reading

More docker = more power? – Part 2: Setting up Nginx and Docker

This is Part 2 of a series of posts. You can find Part 1 here: https://blog.mi.hdm-stuttgart.de/index.php/2016/01/03/more-docker-more-power-part-1-setting-up-virtualbox/

In the first part of this series we set up two VirtualBox machines. One functions as the load balancer and the other will house our services. As the next step we want to install Docker on the service VM. To do that, enter the following commands in a bash shell:

$ wget -qO- https://get.docker.com/ | sh
$ sudo gpasswd -a <username> docker
$ newgrp docker

This downloads and installs Docker, adds your user to the docker user group and logs you into this new group to allow you to create and run containers.
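
As a quick sanity check (not part of the original post), you can verify the installation and the group membership by running the hello-world image as your normal user:

$ docker run hello-world

If the image is pulled and prints its greeting without requiring sudo, Docker and the group setup are working as intended.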

Continue reading

More docker = more power? – Part 1: Setting up VirtualBox

This series of blog posts will focus on how response times are affected when different tasks run on a varying number of Docker containers inside a virtual machine.
What are the performance differences between running a small and a large number of containers on the same machine? These posts will function as a step-by-step tutorial, enabling everyone to reproduce our studies.
In production, web servers are among the most frequently scaled services. Therefore, we want to focus on stress testing a self-hosted website that is load balanced across a varying number of Docker containers.
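
This excerpt does not yet name the load-testing tool; as one possible way to generate such load, ApacheBench (ab) could be pointed at the load balancer. The request count, concurrency level and the address placeholder below are illustrative assumptions:

$ ab -n 10000 -c 100 http://<IP OF YOUR LOAD BALANCER>/

Here -n sets the total number of requests and -c the number of concurrent requests, which lets you compare response times as the number of containers behind the load balancer changes.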

Continue reading