To benefit from a load balancer, we obviously need several machines to distribute the traffic across.
Thanks to Docker we simply run
docker run -d -p 81:80 testwebsite:1
to get a second machine. This time the container port of the web server is mapped to host port 81. If you now visit <IP OF YOUR VM>:81, you should see your test website.
You can have as many machines as you want; just make sure each container gets its own host port.
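To double-check what is actually running, Docker's own tooling is enough. A minimal sketch, run directly on the Docker host (container names will differ on your machine, and it assumes the first container is on host port 80):

# list the running containers together with their port mappings
docker ps --format "table {{.Names}}\t{{.Ports}}"

# the second container should answer on host port 81
curl -s http://localhost:81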
Of course we don’t want to type this command manually every time we create a new container, especially not when we want around 100 of them. That’s why we wrote a small bash script that does the job for us.
#!/bin/bash
# start $1 web server containers, mapping host ports 80, 81, 82, ... to container port 80
num=$1
for (( i=0; i<$num; i++ ))
do
    port=$((80+i))
    docker run -d -p $port:80 testwebsite:4
done
All the script does is execute docker run as many times as you specify via the command-line parameter, incrementing the host port by one for each container.
So when you run the script with ./myscript.sh 25, you’ll get 25 containers listening on ports 80 through 104.
You can check this by pointing your browser at the IP of your Docker host on the different ports. Each port should serve the same web page.
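If clicking through 25 browser tabs sounds tedious, a small loop does the same check from the shell. A sketch, assuming the 25-container example above; replace <IP OF YOUR VM> with your Docker host's address and adjust the port range to your setup:

#!/bin/bash
# quick smoke test: ask every container once and print the HTTP status code
for (( port=80; port<105; port++ ))
do
    echo -n "port $port: "
    curl -s -o /dev/null -w "%{http_code}\n" http://<IP OF YOUR VM>:$port
done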
Now we can spin up as many web servers as we want, so it’s time to put some load balancing in front of them.
With a load balancer we specify all the servers we can use, and the balancer automatically picks one for each request. There are several mechanisms for choosing the best server for a request; we will simply use the “least-connection” mechanism, which picks the server with the fewest active connections.
There are hardware and software load balancers. Since we don’t have the hardware, we will use software to do the balancing for us.
Luckily, configuring nginx as a load balancer is pretty easy.
So install nginx on your balancer VM, just like we did on the other VM:
sudo apt-get install nginx
The default nginx config serves nginx’s own website, which is not what we want, so you can remove most of it. Put the following into /etc/nginx/nginx.conf:
user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
}

http {
    upstream loadbalancer {
        least_conn;
        server dockerhost:80;
        server dockerhost:81;
        # Add servers here
    }
    server {
        listen 80;
        location / {
            proxy_pass http://loadbalancer;
        }
    }
}
In the “upstream loadbalancer” block you have to list all available servers, so if you have 25 containers running, you need 25 server lines.
If you restart nginx now, you should have your load balancer running.
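nginx can also check the configuration for syntax errors before you restart it, which is handy after editing a long list of server lines. These are the standard nginx commands on Ubuntu:

sudo nginx -t                  # check the configuration for syntax errors
sudo service nginx restart     # restart nginx with the new configuration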
If you access the IP of your load balancer with your browser you should see the website hosted by your Docker containers.
Each request will be forwarded to one of the containers.
You could try deploying a different website to each container; then you’ll see a different website each time you access your load balancer.
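You can also watch the balancer distribute requests from the shell. With identical containers every response looks the same, but if each container serves a slightly different page, the output changes from request to request. A small sketch; <IP OF YOUR LOADBALANCER> is a placeholder for your balancer VM's address:

#!/bin/bash
# fire ten requests at the load balancer in a row
for i in {1..10}
do
    curl -s http://<IP OF YOUR LOADBALANCER>/
done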
However, it’s not very handy to write the addresses of all containers into your nginx.conf by hand. That’s why we created a Python script that generates an nginx config for however many containers you want to create.
#!/usr/bin/python
from sys import argv

# static head of the config: everything up to the server entries of the upstream block
part_a = """user www-data;\nworker_processes 4;\npid /run/nginx.pid;\n\nevents {\n\tworker_connections 768;\n\t# multi_accept on;\n}\n\nhttp {\n\tlog_format compression '$upstream_addr';\n\tupstream loadbalancer {\n\t\tleast_conn;\n"""
# static tail of the config: closes the upstream block and adds the proxying server
part_b = """\t}\n\tserver {\n\t\tlisten 80;\n\t\tlocation / {\n\t\t\tproxy_pass http://loadbalancer;\n\t\t}\n\t}\n}\n"""

counter = int(argv[1])

config = open("nginx.conf", "w")
config.write(part_a)
# one server line per container, ports counted up from 80
for i in range(counter):
    port = 80 + i
    config.write("\t\tserver slave.fritz.box:%i;\n" % port)
config.write(part_b)
config.close()
All you have to do is copy the generated file to /etc/nginx/nginx.conf and restart your load balancer with
sudo service nginx restart
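If you regenerate the config often, the generate/copy/restart steps can be chained into one small helper. A sketch only; it assumes the Python generator above is saved as an executable file called generate_config.py, a name we picked for illustration:

#!/bin/bash
# regenerate the nginx config for $1 containers and reload the balancer
./generate_config.py $1                  # hypothetical filename for the generator above
sudo cp nginx.conf /etc/nginx/nginx.conf
sudo nginx -t && sudo service nginx restart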
In part four we finally get to do some load testing and struggle with a few problems.