Now, it’s finally time to start our first load test. We will be using ApacheBench (ab). To install it, simply run apt-get install apache2-utils. To load test your website, enter
ab -n 200 -c 50 <URL_to_your_page>
This command issues 200 requests in total, with at most 50 of them running concurrently. The results are then displayed in the terminal.
All good so far. We decided to run 10000 requests with a maximum of 1000 at the same time against setups of 1, 5, 20 and 100 Docker containers serving our website, to see if the number of containers makes a difference. However, the results barely varied at all, no matter whether we used 1 or 100 containers. The requests per second and the time per request ended up being practically the same (with only negligible variation) for every container count.
This made us question our whole setup. Does the number of Docker containers even matter when serving a website? Is a single nginx instance capable of handling everything? And why do the times not vary at all? We knew that the host-system hardware is not virtualized between the Docker containers, but we were at least expecting differences between running 1 or 100 nginx instances at the same time.
At first we ran some tests to see whether the load balancer was working. To do so, we checked the log files of the web servers. Everything seemed fine there: each request was forwarded to a different container.
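As a quick sanity check of the distribution, you can count requests per container in an aggregated access log with a bit of awk. The log snippet and container names below are made up for illustration; in a real setup you would feed in the containers' actual nginx access logs:

```shell
# Create an illustrative aggregated access log (first field = container name).
cat <<'EOF' > access_sample.log
web1 GET / 200
web2 GET / 200
web3 GET / 200
web1 GET / 200
EOF

# Count how many requests each backend container handled.
awk '{count[$1]++} END {for (c in count) print c, count[c]}' access_sample.log | sort
```

If one container's count dwarfs the others, the balancer is not spreading the load the way you think it is.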
Later we figured out that, since we only had a 100 Mbit network available (yes, no Gbit over here), limiting the requests per nginx instance might change the results. It turns out it only kind of does, since ab does not fetch the whole website with all its assets but essentially only checks the page’s availability. So next up, we limited the bandwidth, which did produce different results, however far from being useful. And then it hit us:
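For completeness: per-instance request limiting in nginx is done with the limit_req module. A minimal sketch follows; the zone name, rate and burst values here are our own illustrative choices, not the values from our setup:

```nginx
http {
    # Track clients by IP in a 10 MB shared zone, capped at 10 requests/second.
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        listen 80;
        location / {
            # Allow short bursts of up to 20 queued requests before
            # excess requests are rejected (503 by default).
            limit_req zone=perip burst=20;
        }
    }
}
```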
We were performing all tests over WiFi! (yeah..) Plus: we were looking at the results the wrong way!
So first, always use a LAN connection for this (even the most obvious things can go wrong). Switching from WiFi to LAN made our load testing more than 4 times faster.
Second, it’s all about reading the results correctly. We had completely neglected the number of failed requests. And this is where it gets interesting.
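When comparing runs, the line to watch in ab’s summary is “Failed requests”. A small sketch of pulling that number out of a saved run; the sample line below mimics ab’s summary format, reusing the 100-container figure from our measurements:

```shell
# In a real run you would save ab's output first, e.g.:
#   ab -n 10000 -c 1000 <URL_to_your_page> > run_100_containers.txt
# Here we fake that file with one illustrative summary line.
echo "Failed requests:        1284" > run_sample.txt

# Extract the failure count for comparison across setups.
failed=$(awk '/Failed requests/ {print $3}' run_sample.txt)
echo "$failed"
```

Saving one such file per container count makes it easy to compare failure counts side by side instead of eyeballing scrolling terminal output.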
Taking a look at 10000 requests with a maximum of 1000 at the same time, we found that the number of failed requests decreases dramatically between 1 and 100 Docker containers.
For example, one measurement came up with the following number of failed requests per container count:
1 container: 10009, 5: 9653, 20: 7231 and 100: 1284. This told us that our thesis of “more Docker = more power” might be true after all.
In our next and last post we will provide you guys with our final insights: testing 1, 5, 20, 100 and even 200 Docker containers with different numbers of total and concurrent requests, to finally find out whether “more Docker = more power”. Click right here to read more.