{"id":1060,"date":"2016-08-06T22:20:32","date_gmt":"2016-08-06T20:20:32","guid":{"rendered":"https:\/\/blog.mi.hdm-stuttgart.de\/?p=1060"},"modified":"2023-08-06T21:54:21","modified_gmt":"2023-08-06T19:54:21","slug":"exploring-docker-security-part-1-the-whales-anatomy","status":"publish","type":"post","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/08\/06\/exploring-docker-security-part-1-the-whales-anatomy\/","title":{"rendered":"Exploring Docker Security &#8211; Part 1: The whale&#8217;s anatomy"},"content":{"rendered":"<figure style=\"width: 2000px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/ballena-de-alas-largas-240873.jpg\" width=\"2000\" height=\"1339\"><figcaption class=\"wp-caption-text\">https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/ballena-de-alas-largas-240873.jpg<\/figcaption><\/figure>\n<p>When it comes to Docker, most of us&nbsp;immediately start thinking of current trends like Microservices, DevOps, fast deployment, or scalability. Without a doubt, Docker seems to hit the road towards establishing itself&nbsp;as&nbsp;the&nbsp;de-facto standard for lightweight application containers, shipping not only with lots of features and tools, but also great usability. However, another important topic is&nbsp;neglected&nbsp;very often: Security. Considering the rapid growth of potential&nbsp;threats for IT systems, security belongs to the crucial aspects that might&nbsp;decide about&nbsp;Docker (and generally containers) being widely and long-term adopted by software industry.<br \/>\nTherefore, this series of blog posts is about&nbsp;giving you an overview of the state of the art as far as container security (especially&nbsp;Docker) is concerned. But talking about that does not make so much sense without having a basic understanding of container technology in general. 
This is what I want to cover in this first part.<br \/>\nYou may have guessed right: altogether, this will be a somewhat longer read. So grab a coffee, sit down, and let me take you on a whale ride through the universe of (Docker) containers.<\/p>\n<p><!--more--><\/p>\n<hr>\n<h3><strong>In medias res<\/strong><\/h3>\n<p>The approach I chose is to confront you with a demo before telling you anything about Docker or containers. Don&#8217;t hesitate to perform the following commands on your own machine if you already have Docker installed. If not, <a href=\"https:\/\/docs.docker.com\/engine\/installation\/linux\/ubuntulinux\/\">here&#8217;s<\/a> a detailed guide on how to get it up and running on Ubuntu Linux. Guides for other platforms are also provided. In order to check that everything is working fine, type the following command and hit <em>Enter:<\/em><\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"Listing 1: Checking your Docker installation\">$ docker -v \nDocker version 1.11.2, build b9f10c9<\/pre>\n<p>Got it? Ok, then we can move straight on and launch a first Docker container. For now, don&#8217;t worry about having no idea what you&#8217;re doing. Simply keep moving forward and everything will become clear sooner or later.<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"Listing 2: Launching our first Docker container\">$ docker run -i -t ubuntu:latest \/bin\/bash<\/pre>\n<p>What you should see now is a prompt like the one shown in the following listing. 
Type <code class=\"\" data-line=\"\">ls <\/code>and see what happens:<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"Listing 3: Examining a Docker container\">root@9d4e8c9d2af1:\/# ls\nbin dev home lib64 mnt proc run srv tmp var \nboot etc lib media opt root sbin sys usr<\/pre>\n<p>So far, so good. I bet there are many questions that came up while you were following these instructions. At the risk of confusing you even more, let me briefly answer some of them:<\/p>\n<p><span style=\"color: #ff0000;\">Q: &#8220;In Listing 2, you used a quite complex command in order to run a Docker container. What exactly do these commands and options mean?&#8221;<\/span><\/p>\n<p><em>A: &#8220;In a nutshell, we told Docker to create a container on top of an Ubuntu user space (<strong>ubuntu:latest<\/strong>), attach a terminal (<strong>-t<\/strong>), open up an input stream which consumes our keyboard input (<strong>-i<\/strong>) and finally start bash in a new process (<strong>\/bin\/bash<\/strong>). As a result, you got that prompt after container startup was finished. We&#8217;ll soon talk about what exactly is going on under the hood.&#8221;<\/em><\/p>\n<p><span style=\"color: #ff0000;\">Q: &#8220;As soon as I typed <code class=\"\" data-line=\"\">ls<\/code> <\/span><span style=\"color: #ff0000;\">on that prompt, what I saw was the structure of an ordinary Linux file system (Listing 3). So we actually started a VM with <code class=\"\" data-line=\"\">docker run<\/code> <\/span><span style=\"color: #ff0000;\">, didn&#8217;t we<\/span><em><span style=\"color: #ff0000;\">?&#8221;<\/span><\/em><\/p>\n<p><em>A: &#8220;No, you launched a container instead of a virtual machine. Creating a new virtual machine also means booting a complete and independent OS kernel. But this is not what has happened here. 
You rather spawned a new <strong>process<\/strong> on your host system running Docker. As for the file system, you&#8217;re right: our container indeed operates on its own one. We&#8217;ll cover this in the next section.&#8221;<\/em><\/p>\n<p><span style=\"color: #ff0000;\">Q: &#8220;The container prompt we saw in Listing 3 looks kind of weird and unfamiliar. What&#8217;s behind that?&#8221;<\/span><\/p>\n<p><em>A: &#8220;The numbers and characters after the <strong>@ <\/strong>are the container ID. Every Docker container is assigned a distinct identifier. Don&#8217;t worry about the <strong>root <\/strong>for now, we&#8217;ll come back to this later. However, I can already reveal that this is indeed the current user attached to this container session.&#8221;<\/em><\/p>\n<p><span style=\"color: #ff0000;\">Q: &#8220;Maybe I should have asked this right at the beginning: What exactly can I do with containers and why should I ever use them?&#8221;<\/span><\/p>\n<p><em>A: &#8220;The point of containers is that they allow you to run applications, web servers, databases etc. in a lightweight, isolated environment provided by a process instead of a virtual machine. You certainly noticed that starting a Docker container is a matter of seconds. Have you ever tried that with a virtual machine? Another aspect is resource consumption. The memory footprint of a single container process is within the range of a few megabytes, whereas a VM needs several gigabytes. So containers are not only faster, but also save lots of resources.&#8220;<\/em><\/p>\n<p>Before becoming a little more precise about what I have thrown at you so far, let me close this first introduction with a short and succinct definition of what a container is. I will refer to this below from time to time:<\/p>\n<blockquote>\n<p><em>&#8220;A container can be considered a special kind of isolated environment for at least one single process. 
Although all containers running on the same host share the available physical resources as well as the kernel services of the host OS, they&#8217;re fooled into believing they exist exclusively as a standalone system and therefore are not even aware of other processes outside their &#8216;world&#8217; or of being virtualized at all. Their greatest advantage over conventional virtual machines lies in their ability to be multiplexed over a single OS kernel, whereas a classical VM usually boots its own.&#8221;<\/em><\/p>\n<p><em>&#8211; Me<\/em><\/p><\/blockquote>\n<hr>\n<h2>Container file systems<\/h2>\n<p>We&#8217;ve already seen that a Docker container seems to have its own Linux file system. Look around a little in your container from above and you&#8217;ll soon be convinced that it really is an independent file system rather than that of your underlying OS, no matter whether you&#8217;re running your Docker host in a virtualized environment or directly on hardware. However, didn&#8217;t we establish that a container is merely a process and not a complete OS? Why should a single process even have its own file system?<br \/>\nAccording to the definition we established above, what containers want to achieve is providing an isolated environment for their inherent processes. Amongst others, this means that all the services running in a container may not have any dependencies on the Docker host. This is exactly what a dedicated filesystem per container gives us. 
It enables us to install the software and run the services we want, without having to worry about potential conflicts with other applications and their dependencies.<\/p>\n<h4>Working on a container&#8217;s filesystem<\/h4>\n<p>In order to examine how this works, we&#8217;ll install and set up an Apache web server in just a few steps:<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"Listing 4: Installing Apache\"># Running a container with port 80 exposed (syntax is: -p $HOST_PORT:$CONTAINER_PORT)\n$ docker run -i -t -p 80:80 ubuntu:latest \/bin\/bash\n\n# Update mirrors and install Apache (inside the container)\nroot@123b456c:\/# apt-get update &amp;&amp; apt-get install apache2\n\n# Start Apache server (inside the container)\nroot@123b456c:\/# service apache2 start<\/pre>\n<p>Mind the usage of the <code class=\"\" data-line=\"\">-p<\/code> option with the first command in Listing 4. In order to expose any services hosted by a container, Docker requires the corresponding container port to be mapped to a free port of the Docker host. In this case, we took port 80 of the container (where Apache is available) and bound it to the same port on the host. As long as the host port is free, the choice of which one to use is completely up to you. After container startup, open up your favorite browser and either type <strong>http:\/\/localhost<\/strong> or <strong>http:\/\/{VM IP}<\/strong> if you&#8217;re working on a virtualized Docker host. 
If Apache welcomes you by saying <em>&#8220;It works&#8221;<\/em> (see screenshot), well &#8230; then everything works.<\/p>\n<figure id=\"attachment_1135\" aria-describedby=\"caption-attachment-1135\" style=\"width: 637px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/apache_docker.png\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"1135\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/08\/06\/exploring-docker-security-part-1-the-whales-anatomy\/apache_docker\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/apache_docker.png\" data-orig-size=\"854,247\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"apache_docker\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/apache_docker.png\" class=\"wp-image-1135\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/apache_docker-300x87.png\" alt=\"apache_docker\" width=\"637\" height=\"185\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/apache_docker-300x87.png 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/apache_docker-768x222.png 768w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/apache_docker.png 854w\" sizes=\"auto, (max-width: 637px) 100vw, 637px\" \/><\/a><figcaption id=\"caption-attachment-1135\" class=\"wp-caption-text\"><strong>Figure 1: Greetings from the Apache server running inside our Docker container<\/strong><\/figcaption><\/figure>\n<h4>Container filesystem internals<\/h4>\n<p>At this point, we&#8217;ve understood that each Docker container comes with its own filesystem, where all kinds of software can be installed without interfering with other containers or the Docker host itself. However, bear in mind that everything installed into a container increases its size. Let&#8217;s take the opportunity and check the size of the Ubuntu user space we&#8217;ve used so far:<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"Listing 5: Displaying present Docker images\"># List all the available Docker images (details will be provided later)\n$ docker images\n\n# The size of our Ubuntu image is shown in the last column (the rest can be ignored for now)\nREPOSITORY      TAG         IMAGE ID        CREATED         SIZE\nubuntu          latest      321def123       11 weeks ago    120.8 MB<\/pre>\n<p>The <code class=\"\" data-line=\"\">docker images<\/code> command returns a list of every Docker <em>image <\/em>currently available on our Docker host. Docker image? Honestly speaking, a Docker image is de facto nothing but Docker&#8217;s designation for a container filesystem. Subsequently, when diving deeper into container filesystems, I will preferably refer to them as <em>images.<\/em> Coming back to the current subject, our Ubuntu image has a size of 120.8 MB.<br \/>\nNow, imagine what might happen if you keep installing more and more software on top of that filesystem. Eventually, it would be several gigabytes in size. 
Thinking of a bunch of different Docker images for various intended uses existing on the same hard drive, a container system&#8217;s advantage over a VM in terms of storage would diminish more and more. In order to avoid such an expansion and maintain its efficiency, Docker works as follows:<\/p>\n<ul>\n<li>Each Docker image reduces the dependencies it ships with to an absolute minimum.<\/li>\n<li>Docker uses a special storage driver called <em>AUFS (Advanced Multi-Layered Unification Filesystem) <\/em>in order to minimize storage redundancy.<\/li>\n<\/ul>\n<p>Reduction of dependencies simply means that the OS user space (i.e. image) a container is based on (Ubuntu in our case) includes only a few of the binaries, configurations etc. an ordinary Ubuntu installation would. This can easily be demonstrated using the example of <code class=\"\" data-line=\"\">ping<\/code>:<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"Listing 6: Where's ping??\"># Inside our container\nroot@123b456c:\/# ping www.docker.com\n\n# Oops, something went wrong ...\nbash: ping: command not found<\/pre>\n<p>I think this first point was quite easy to understand. The second one is slightly harder.<\/p>\n<h4>Getting familiar with AUFS<\/h4>\n<p>As we&#8217;ve already learned, <em>AUFS<\/em> is shorthand for <em>Advanced Multi-Layered Unification Filesystem<\/em>. The interesting thing about this is that standard filesystems as you may know them are neither <em>multi-layered<\/em> nor do they have the characteristic of <em>unifying<\/em> anything. 
Indeed, AUFS works in a different way from ordinary filesystems you may know, for example <em>ext4<\/em>.<br \/>\nSo let&#8217;s get into this and first talk about what is meant by <em>multi-layered<\/em>. As a first step towards understanding this, remember the layout of a normal Linux filesystem. Under its root directory (<code class=\"\" data-line=\"\">\/<\/code>), there are all the directories specified by the <em><a href=\"https:\/\/en.wikipedia.org\/wiki\/Filesystem_Hierarchy_Standard\">Filesystem Hierarchy Standard (FHS)<\/a> <\/em>(see Listing 3)<em>.<\/em> Going further, imagine that you now add a simple text file named <em>my_fancy_textfile<\/em> to the <code class=\"\" data-line=\"\">\/opt<\/code> directory:<\/p>\n<figure id=\"attachment_1156\" aria-describedby=\"caption-attachment-1156\" style=\"width: 500px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/dir_hierarchy.png\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"1156\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/08\/06\/exploring-docker-security-part-1-the-whales-anatomy\/dir_hierarchy\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/dir_hierarchy.png\" data-orig-size=\"494,324\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"dir_hierarchy\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/dir_hierarchy.png\" class=\"wp-image-1156\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/dir_hierarchy-300x197.png\" alt=\"dir_hierarchy\" width=\"500\" height=\"328\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/dir_hierarchy-300x197.png 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/dir_hierarchy.png 494w\" sizes=\"auto, (max-width: 500px) 100vw, 500px\" \/><\/a><figcaption id=\"caption-attachment-1156\" class=\"wp-caption-text\"><strong>Figure 2: Adding a file to a Linux filesystem<\/strong> (created with draw.io)<\/figcaption><\/figure>\n<p>This scenario sounds rather simple at first. However, what happens if we introduce an additional constraint by assuming the original filesystem to be immutable, or rather read-only? How can anything be added to an immutable filesystem?<br \/>\nA feature called <em>Copy-on-Write (CoW)<\/em> answers this question. The first solution that might come to your mind is to create a fresh copy of the immutable filesystem and modify it, leaving the original one untouched. The CoW mechanism relies on a much smarter approach: it takes only the parts of a filesystem which are affected by changes made by the user and copies them to a new so-called <em>layer<\/em>. The first layer everything starts with is always the unaltered filesystem. In this way, duplicating unmodified files or directories and subsequently wasting lots of storage can be avoided. By the way: this is exactly how trying out Linux distributions works when booting them from a flash drive or CD-ROM without immediately installing them. All the changes made to the initial (read-only) OS image residing on the volume are stored in a temporary layer and are gone forever after shutdown.<br \/>\nTake a look at figure 3, which illustrates the CoW mechanism in a graphical manner. 
It all starts with a plain and unmodified Linux filesystem (<strong>layer 1<\/strong>). As soon as we add our sample text file, AUFS recognizes that <code class=\"\" data-line=\"\">\/opt<\/code> has changed and that a new file called <code class=\"\" data-line=\"\">\/opt\/my_fancy_textfile<\/code> has been inserted. This and only this modification is recorded by AUFS in the form of a new layer (<strong>layer 2<\/strong>). Everything else remains in layer 1, since it hasn&#8217;t changed at all. Had we modified an already existing file like <code class=\"\" data-line=\"\">\/etc\/hosts<\/code>, AUFS would have copied the original file from layer 1 and stored the updated version in layer 2. That&#8217;s where the notion of <em>Copy-On-Write<\/em> comes from.<\/p>\n<figure id=\"attachment_1294\" aria-describedby=\"caption-attachment-1294\" style=\"width: 500px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/08\/fs_layers.png\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"1294\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/08\/06\/exploring-docker-security-part-1-the-whales-anatomy\/fs_layers\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/08\/fs_layers.png\" data-orig-size=\"494,324\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"fs_layers\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/08\/fs_layers.png\" class=\"wp-image-1294\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/08\/fs_layers-300x197.png\" alt=\"fs_layers\" width=\"500\" height=\"328\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/08\/fs_layers-300x197.png 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/08\/fs_layers.png 494w\" sizes=\"auto, (max-width: 500px) 100vw, 500px\" \/><\/a><figcaption id=\"caption-attachment-1294\" class=\"wp-caption-text\"><strong>Figure 3: Dividing up a filesystem in multiple layers<\/strong> (created with draw.io)<\/figcaption><\/figure>\n<p>As a consequence, the filesystem we see when navigating through a container is a composition of multiple layers. On container startup, all necessary layers are stacked one upon the other, which gives the user the impression of working on a single consistent directory hierarchy. An overhead projector offers a vivid analogy for how this works: think of the layers as several transparent sheets lying on top of each other on the projector. What you see on the wall is a combination of the contents of the individual sheets, without realizing that the result is formed by several elements arranged on different levels. That&#8217;s exactly how a multi-layered filesystem works.<br \/>\nAlso mind that the order of the layers on the filesystem stack is very important! Remember that CoW tracks any modification inside a new layer. If the associated layers were arranged in random order, the updated version of a file might be <strong>shadowed<\/strong> by an older one, making users think that their changes are gone. Therefore, AUFS needs additional metadata to stack layers correctly by recency.<br \/>\nOh, and what about deleting files or directories? With CoW as a purely additive approach, how can this be handled? 
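Before answering that, the copy-up behaviour described above can be imitated with plain directories. This is only a toy sketch with made-up directory and file names, not real AUFS:

```shell
# layer1 plays the read-only base image; layer2 records all changes (toy CoW sketch)
mkdir -p layer1/etc layer2/etc
echo "127.0.0.1 localhost" > layer1/etc/hosts

# To modify /etc/hosts, first copy the file up into the writable layer ...
cp layer1/etc/hosts layer2/etc/hosts
# ... then change it there; the base layer stays untouched
echo "10.0.0.5 example" >> layer2/etc/hosts

cat layer1/etc/hosts   # still only the original entry
cat layer2/etc/hosts   # original entry plus the new one
```

A union view of the two directories would show the layer2 copy, shadowing the unchanged original in layer1.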
Because CoW is only capable of adding files or directories (not removing them), AUFS manages this with <em>whiteout files<\/em>. Figure 4 shows how AUFS would deal with the deletion of the <code class=\"\" data-line=\"\">\/etc\/hosts<\/code> file (which is nonsense, nobody would ever do that). It creates a new file with the same name and additionally adds a <code class=\"\" data-line=\"\">.wh.<\/code> prefix to it. Such a whiteout file serves as a marker which <em>shadows <\/em>a file and prevents it from being visible in the topmost filesystem layer. Thereby, AUFS creates the illusion of a file actually being removed.<\/p>\n<figure id=\"attachment_1167\" aria-describedby=\"caption-attachment-1167\" style=\"width: 500px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/aufs_deletion.png\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"1167\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/08\/06\/exploring-docker-security-part-1-the-whales-anatomy\/aufs_deletion\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/aufs_deletion.png\" data-orig-size=\"494,324\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"aufs_deletion\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/aufs_deletion.png\" class=\"wp-image-1167\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/aufs_deletion-300x197.png\" alt=\"aufs_deletion\" width=\"500\" height=\"328\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/aufs_deletion-300x197.png 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/aufs_deletion.png 494w\" sizes=\"auto, (max-width: 500px) 100vw, 500px\" \/><\/a><figcaption id=\"caption-attachment-1167\" class=\"wp-caption-text\"><strong>Figure 4: Deleting a file by shadowing it with a whiteout file<\/strong> (created with draw.io)<\/figcaption><\/figure>\n<h4>Unification &#8211; from layers to filesystems<\/h4>\n<p>As for the functionality of AUFS, there&#8217;s still one point missing. We&#8217;ve already understood the notion of layers and of stacking them in order to create the illusion of a single hierarchy of directories. Yet, so far I haven&#8217;t talked about what <em>stacking<\/em> means in a technical sense.<br \/>\nLet&#8217;s start with how the individual layers are managed by AUFS. When it comes to Docker, every layer is stored in its own directory. All layer directories in turn reside in the <code class=\"\" data-line=\"\">\/var\/lib\/docker\/aufs\/diff<\/code> directory, as shown by the next figure:<\/p>\n<figure style=\"width: 732px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/docs.docker.com\/engine\/userguide\/storagedriver\/images\/aufs_layers.jpg\" width=\"732\" height=\"403\"><figcaption class=\"wp-caption-text\"><strong>Figure 5: How AUFS manages its filesystem layers<\/strong> (<a href=\"https:\/\/docs.docker.com\/engine\/userguide\/storagedriver\/images\/aufs_layers.jpg\">https:\/\/docs.docker.com\/engine\/userguide\/storagedriver\/images\/aufs_layers.jpg<\/a>)<\/figcaption><\/figure>\n<p>As the graphic points out, the particular layers of an AUFS filesystem are also called <em>branches<\/em>. In order to get them stacked, all the branches are brought together at a single mount point. 
This principle is called a <em>union mount<\/em>. The only aspect the AUFS storage driver must pay attention to is the order of the layers. As mentioned before, AUFS makes use of additional metadata to get the layers into the correct order.<\/p>\n<h4>How to implement containers<\/h4>\n<p>If you&#8217;re an attentive reader, you might have noticed the <em>container layer<\/em> in figure 5. Now that you finally have the necessary comprehension of what a Docker image is, we can take the last step towards containers. Although the definition I introduced above precisely describes what a container is, we still have no idea of how containers are implemented and created. To keep it simple: a container is nothing but an additional layer created on top of a Docker image. In contrast to the layers of the underlying image, the container layer is not read-only, but can also be written to.<br \/>\nIt should now be clear what exactly happened when we applied the <code class=\"\" data-line=\"\">docker run<\/code> command (see Listing 2): under the hood, Docker took the read-only Ubuntu Docker image and created an <strong>additional read-write layer<\/strong> on top of it. It takes at least a single process to create a container. That&#8217;s why we told Docker to run the <code class=\"\" data-line=\"\">\/bin\/bash<\/code> command in a process as soon as container creation was finished. With a new container at hand, you can now install additional software, create or delete files etc., with all your steps being stored in the container layer. As soon as you&#8217;re done, a container can be committed, meaning that the existing read-write layer becomes the topmost read-only layer of a new image. This image can in turn be used to create another container.<\/p>\n<h4>AUFS &#8211; Advantages<\/h4>\n<p>At this point, we have a much better understanding of Docker containers and images. 
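The lookup rules just described (topmost layer wins, whiteout markers hide files) can be condensed into a small shell sketch. The layer directories, file names, and the lookup helper are invented purely for illustration:

```shell
# Two toy layers: layer2 is stacked on top of layer1
mkdir -p layer1 layer2
echo "v1" > layer1/config
echo "v2" > layer2/config        # newer copy shadows the one below
echo "secret" > layer1/oldfile
touch layer2/.wh.oldfile         # whiteout marker: 'oldfile' appears deleted

# Resolve a name the way a union view would: top layer first, honouring whiteouts
lookup() {
  for layer in layer2 layer1; do
    [ -e "$layer/.wh.$1" ] && { echo "$1: no such file"; return 1; }
    [ -e "$layer/$1" ] && { cat "$layer/$1"; return 0; }
  done
  echo "$1: no such file"; return 1
}

lookup config            # the copy from layer2 wins
lookup oldfile || true   # the whiteout makes the file appear deleted
```

A real union mount does this resolution in the kernel per path lookup; the sketch only mimics the visible result.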
We saw that they&#8217;re based on the AUFS storage driver, using CoW to keep storage consumption as small as possible. However, there&#8217;s still more that AUFS can do to reduce the storage overhead of Docker images.<br \/>\nIn the section above, I explained that containers can be used to modify existing Docker images by adding additional read-write layers on top of them, forming a new image after a commit. The resulting image can again be used to launch yet another container. A good example is the official <a href=\"https:\/\/nginx.org\/\">nginx<\/a> Docker image. This one as well as many other images can be found on <a href=\"https:\/\/hub.docker.com\/\">Docker Hub<\/a>, a public repository hosted by Docker where users as well as organizations can publish and share their images with the community. Because the nginx web server requires several OS services to be able to do its job (e.g. a TCP\/IP stack), it is built upon the official Debian Docker image, which makes perfect sense. By reusing a base image that already ships with common services, there&#8217;s no need to re-implement or rather reinstall them over and over. On the one hand, the reuse of images comes with great advantages for Docker and its infrastructure, since the same content doesn&#8217;t have to be stored millions of times.<br \/>\nOn the other hand, Docker users also benefit from this approach. Pertaining to our nginx example, imagine there&#8217;s already an up-to-date Debian Docker image present on your local machine. If for any reason you decide to pull the nginx image from Docker Hub, your Docker host recognizes that the Debian image is already available instead of downloading it again. Actually, Docker does not only perform a comparison by image but rather by individual image layer. This makes Docker&#8217;s reuse strategy even more efficient. 
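Conceptually, this per-layer comparison boils down to a set intersection over layer identifiers. A toy sketch with invented IDs (a real Docker host compares content digests, but the principle is the same):

```shell
# Invented layer IDs for two images sharing a common base
printf '%s\n' base_layer libc_layer apt_layer             > debian_layers.txt
printf '%s\n' base_layer libc_layer apt_layer nginx_layer > nginx_layers.txt

# comm needs sorted input; the intersection is everything that only
# needs to be stored (and downloaded) once
sort -o debian_layers.txt debian_layers.txt
sort -o nginx_layers.txt nginx_layers.txt
comm -12 debian_layers.txt nginx_layers.txt   # layers shared by both images
```

Here the hypothetical nginx image only adds one layer of its own on top of the shared ones.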
A very nice tool for inspecting&nbsp;Docker images with all their layers can be found on <a href=\"https:\/\/imagelayers.io\">https:\/\/imagelayers.io<\/a>&nbsp;(give it a try, it&#8217;s amazing!). The following screenshot shows it in action, visualizing image layer reusage between the nginx and Debian Docker image:<\/p>\n<figure id=\"attachment_1172\" aria-describedby=\"caption-attachment-1172\" style=\"width: 368px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/imagelayers_io.png\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"1172\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/08\/06\/exploring-docker-security-part-1-the-whales-anatomy\/imagelayers_io\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/imagelayers_io.png\" data-orig-size=\"471,723\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"imagelayers_io\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/imagelayers_io.png\" class=\"wp-image-1172\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/imagelayers_io-195x300.png\" alt=\"imagelayers_io\" width=\"368\" height=\"566\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/imagelayers_io-195x300.png 195w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/07\/imagelayers_io.png 471w\" sizes=\"auto, (max-width: 368px) 100vw, 368px\" \/><\/a><figcaption 
id=\"caption-attachment-1172\" class=\"wp-caption-text\"><strong>Figure 6: Screenshot of imagelayers.io<\/strong><\/figcaption><\/figure>\n<p>Furthermore, Docker even reuses images present on the host when creating containers. Remember that the individual layers of each image reside in their own directory. Because they&#8217;re read-only, they can be referenced by an arbitrary number of containers at the same time.<\/p>\n<h4>Stepping further<\/h4>\n<p>Phew, there&#8217;s a lot to say about AUFS. At least we&#8217;ve now gained a basic understanding of how Docker containers really work. Yet, this is not everything. Indeed, we know lots of details about a container&#8217;s internal structure, filesystem and storage, and how containers are created by means of layered images. But what about isolating containers from their host OS? Although we made a first step with dedicated container filesystems that provide self-contained environments, we still have no idea how to manage their access to the host&#8217;s resources, such as its networking services. This is what we&#8217;ll cover next.<\/p>\n<hr>\n<h2>Namespaces<\/h2>\n<p>So far, we&#8217;ve learned how container processes can be captured into their own hierarchy of directories, where they&#8217;re able to manage their own dependencies, configurations and so forth. However, what happens if a container process needs to trap into the shared OS kernel in order to communicate over the network or perform disk I\/O? This is not a trivial question, since simply giving up isolation at this point would undermine the notion of containers being self-contained process execution units.<br \/>\nThe approach of the Linux kernel is based on so-called <em>namespaces<\/em>, giving each container an individual and unique view on several global resources. 
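<\/p>
<p>These per-process namespaces are not an abstract concept: on any modern Linux system, the kernel exposes them as symbolic links under <code>\/proc\/&lt;pid&gt;\/ns<\/code>. A quick look, no Docker required (inode numbers will differ on your machine):<\/p>

```shell
# Each symlink under /proc/self/ns stands for one namespace the current
# process is a member of; the bracketed number is the namespace's inode.
ls /proc/self/ns
# Show the mount, PID and network namespaces of this shell:
readlink /proc/self/ns/mnt /proc/self/ns/pid /proc/self/ns/net
```

<p>Two processes that print the same inode number share that namespace; a process inside a Docker container will report different inodes than its host.<\/p>
<p>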
Thereby, containers on the same host can be prevented from getting in each other&#8217;s way when trying to access their mount points or services on the internet. But let&#8217;s not get ahead of ourselves.<\/p>\n<h4>The mount namespace<\/h4>\n<p>I&#8217;m pretty sure you know that on every Linux filesystem, there&#8217;s exactly one root directory <code class=\"\" data-line=\"\">\/<\/code>. So let me ask you something: Remembering that all Docker image layers are located under <code class=\"\" data-line=\"\">\/var\/lib\/docker\/aufs\/diff<\/code> as part of the host filesystem, how can every container have its own root directory? And how come the directory names inside their AUFS filesystems do not run into naming conflicts with the host filesystem?<br \/>\nTo understand this, think of how Linux manages filesystems and their locations. It makes use of <em>mount points<\/em> to do that. A mount point assigns a path to a disk partition, making the partition&#8217;s filesystem part of the original one. You can check that on your own by opening a terminal on your Docker host and typing <code class=\"\" data-line=\"\">mount<\/code>. One line of the output might look like this:<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"Listing 7: Checking existing mount points\">$ mount \n\/dev\/sda1 on \/ type ext4 (rw,relatime,errors=remount-ro,data=ordered)<\/pre>\n<p>This tells us that in my case the filesystem root is located on the <code class=\"\" data-line=\"\">\/dev\/sda1<\/code> partition. All the mount points on a Linux system are stored inside a single kernel structure, which can be thought of as a table.<br \/>\nWith that knowledge, implementing mount namespaces is fairly easy. Everything a container system has to do is ask the kernel for a copy of its mount point structure by performing a system call and attach it to a container. 
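<\/p>
<p>You can inspect a mount namespace&#8217;s private view of that structure through <code>\/proc<\/code>: every process sees the mount table of its own namespace in <code>\/proc\/self\/mounts<\/code>. A small sketch (plain Linux, no Docker needed):<\/p>

```shell
# /proc/self/mounts shows the mount table of the mount namespace the
# reading process lives in; inside a container it looks entirely
# different from the host's table.
grep ' / ' /proc/self/mounts      # the entry for the root filesystem
wc -l < /proc/self/mounts         # number of mount points in this namespace
# With root privileges, the unshare utility performs exactly the
# "copy the mount table" step described above:
#   sudo unshare -m sh -c 'readlink /proc/self/ns/mnt'
```

<p>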
In other words: A new container gets its own <em>mount namespace<\/em>. From then on, a container may add, remove or alter mount points without ever affecting the original host structure, since each container has its own copy.<\/p>\n<figure style=\"width: 571px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/assets.toptal.io\/uploads\/blog\/image\/677\/toptal-blog-image-1416545619045.png\" width=\"571\" height=\"571\"><figcaption class=\"wp-caption-text\"><strong>Figure 7: Mount namespaces prevent container mount points from interfering with the host&#8217;s mount points<\/strong> (<a href=\"https:\/\/assets.toptal.io\/uploads\/blog\/image\/677\/toptal-blog-image-1416545619045.png\">https:\/\/assets.toptal.io\/uploads\/blog\/image\/677\/toptal-blog-image-1416545619045.png<\/a>)<\/figcaption><\/figure>\n<p>Subsequently, binding a container to its own root directory simply requires mounting the container&#8217;s AUFS filesystem under <code class=\"\" data-line=\"\">\/<\/code> by adding an appropriate entry to its mount namespace. If necessary, this can also be done with other partitions on hard disks or flash drives.<br \/>\nMind that this approach is different from what <a href=\"https:\/\/wiki.archlinux.de\/title\/Chroot\">chroot<\/a> does! According to the Arch Linux wiki, it&#8217;s crucial to understand that chroot has never been designed as a security feature. Consequently, there are well-known ways to break out of a chroot environment. That explains why using chroot was never really an option for Docker and containers in general.<\/p>\n<h4>The PID namespace<\/h4>\n<p>Another global structure which has to be protected from container access is the process tree. At the root of the process tree is the init process, which initializes the Linux user space as soon as the kernel is ready. 
It always gets assigned 1 as its unique identifier, aka PID (Process ID). Under Linux, every process can have zero or more child processes, but only one parent process (with the exception of PID 1, which does not have a parent).<br \/>\nWhat does this have to do with containers? The point is that any process in Linux may inspect other processes or even kill them, provided it has the required privileges. It&#8217;s kind of obvious that processes inside a container should never get the chance to cause damage on the host by manipulating anything that exists outside the container. For that reason, as soon as a container is created, container systems like Docker perform a special <code class=\"\" data-line=\"\">clone()<\/code> system call, instructing the kernel to create a new PID namespace. It does that by forking a new process and using it as the root of a new sub-tree or <em>PID namespace<\/em> (see figure 8).<\/p>\n<figure style=\"width: 569px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" class=\"\" src=\"https:\/\/assets.toptal.io\/uploads\/blog\/image\/674\/toptal-blog-image-1416487554032.png\" width=\"569\" height=\"569\"><figcaption class=\"wp-caption-text\">Figure 8: Each PID namespace forms a sub-tree of the host&#8217;s process tree (<a href=\"https:\/\/assets.toptal.io\/uploads\/blog\/image\/674\/toptal-blog-image-1416487554032.png\">https:\/\/assets.toptal.io\/uploads\/blog\/image\/674\/toptal-blog-image-1416487554032.png<\/a>)<\/figcaption><\/figure>\n<p>The essence of these sub-trees is that they&#8217;re not created in the middle of nowhere. In fact, each PID namespace simultaneously remains part of the old process hierarchy. As you can see in figure 8, there are three processes with two PIDs each, with <strong>process 8,1<\/strong> (PID 8 on the host, PID 1 inside the namespace) as their root. All the processes outside of the new PID namespace refer to the container processes by their PIDs 8, 9 and 10. But inside the container, they are known by the PIDs 1, 2 and 3. As a consequence, the container processes have no idea that they actually live in a sub-tree of a much larger hierarchy. 
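<\/p>
<p>This dual identity is easy to demonstrate. On the host, our shell has some arbitrary PID; the very first process inside a freshly started container, however, believes it is PID 1. The sketch below only attempts the container part if Docker happens to be installed:<\/p>

```shell
#!/bin/sh
# On the host, our shell has some arbitrary PID:
echo "host view: PID $$"
# The first process in a fresh container sees itself as PID 1
# (only attempted if Docker is installed; output depends on your setup):
if command -v docker >/dev/null 2>&1; then
  docker run --rm ubuntu:latest sh -c 'echo "container view: PID $$"' || true
else
  echo "Docker not available - skipping container demo"
fi
```

<p>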
Concurrently, the outer processes regard them as an ordinary part of the original process tree, without having any knowledge of their &#8220;inner&#8221; PIDs. In this way, container processes can be stopped from accessing any other process on the host, whereas the outer processes can still control them. This can be very important in case a container consumes much more resources than it should (maybe due to an attack). In that case, an external process may kill the container and prevent it from knocking out the entire host system.<\/p>\n<h4>The network namespace<\/h4>\n<p>We&#8217;ve already learned how Docker employs namespaces to restrict a container&#8217;s access to global resources like mount points and the process hierarchy. Nevertheless, there are still more parts of the host system we need an abstraction layer for, e.g. the network interface. One reason for this is that each container should be able to use the full range of ports without getting into conflicts with other containers or the host OS. Consider a container running an Apache web server on port 80, as we&#8217;ve seen above. If a second container wanted to run an nginx web server on the same port, an error would be raised, since port 80 would already be occupied by the first container. Moreover, containers should be able to communicate with services running inside other containers, no matter whether they reside on the local or a remote host. It seems clear where this leads: Every container requires its own network interface that must be managed transparently (i.e. a container&#8217;s isolation and self-reliance has to be maintained).<br \/>\nHowever, simply attaching a unique network interface to a container is not enough in this case. Remember the behavior of mount and PID namespaces: In both cases, there&#8217;s no link from the child namespace to its parent. Only with PID namespaces is the parent tree always in control of its child namespaces, but not the other way around. 
This is important because bidirectionally routing packets between containers and their host requires both parent and child network namespaces to know about each other and be connected in some respect.<br \/>\nFigure 9 illustrates the flow of network traffic between the global network namespace and its children. You can see that a container can reach the internet as well as other containers on the same host by means of a special routing mechanism on the Docker host.<\/p>\n<figure style=\"width: 570px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/assets.toptal.io\/uploads\/blog\/image\/675\/toptal-blog-image-1416487605202.png\" width=\"570\" height=\"570\"><figcaption class=\"wp-caption-text\"><strong>Figure 9: Container networking<\/strong> (https:\/\/assets.toptal.io\/uploads\/blog\/image\/675\/toptal-blog-image-1416487605202.png)<\/figcaption><\/figure>\n<p>So how does that work? Maybe you already noticed that there&#8217;s a new network interface on your host since you&#8217;ve installed Docker, bearing the name <em>docker0<\/em>. As soon as a new container gets started, Docker creates a pair of virtual network interfaces. One of these interfaces is connected to the host&#8217;s docker0 interface, while the other one gets attached to the new Docker container. Do you remember the principle of a tin can telephone? That sort of analogy describes the mechanism very well. The docker0 interface acts as a kind of networking switch, forwarding traffic between containers and the internet or even between different containers on the same Docker host. Note that there&#8217;s no direct link between the docker0 interface and other interfaces like eth0 or wlan0. Instead, Docker establishes special routing rules for communicating with these interfaces and thus reaching the internet. 
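<\/p>
<p>You can check for the docker0 bridge and the virtual interface pairs yourself. The sketch below reads the interface list from <code>\/proc\/net\/dev<\/code>, so it needs no extra tooling; on a machine without Docker, docker0 will simply be absent:<\/p>

```shell
# List all network interfaces visible in the current network namespace.
# On a Docker host this typically includes docker0 and one veth* interface
# per running container; inside a container you would only see lo and eth0.
awk -F: 'NR > 2 { gsub(/ /, "", $1); print $1 }' /proc/net/dev
# The network namespace this shell lives in:
readlink /proc/self/ns/net
```

<p>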
If you&#8217;re interested in examining this in more detail, <a href=\"http:\/\/www.infrabricks.de\/blog\/2014\/07\/06\/docker-entschlusselt-netzwerk\/\">this blog post<\/a> by Peter Ro\u00dfbach and Andreas Schmidt is a good point to start.<\/p>\n<hr>\n<h2>Control groups<\/h2>\n<p>The last core feature Docker containers are built upon is another Linux kernel feature called <em>control groups<\/em> (in short: <em>cgroups<\/em>). What cgroups give us is the ability to put hardware resource limits as well as access restrictions on an entire collection of processes instead of just a single one.<br \/>\nTo see why this is absolutely necessary, think of an arbitrary process that, for some reason, starts consuming and binding more and more of the available physical resources like memory or disk space. It seems obvious that killing the process immediately is the right thing to do here. However, there&#8217;s one point that&#8217;s often forgotten when working in Unix environments: If a process calls <em>fork()<\/em> and the resulting process does so as well (which is called <em>double-forking<\/em>), the process emerging from the second fork may escape the control of the hierarchy&#8217;s topmost process (which did the first fork). As a consequence, stopping the original process in case one of its children misbehaves may not have the desired outcome, since it does not exercise control over child processes arising from a great number of nested forks. Instead, there&#8217;s a high risk they keep on running, making a reboot inevitable.<br \/>\nThe cgroups feature provides a solution to that problem by merging a collection of processes into a logical group, which is exposed in the form of a virtual filesystem and is therefore easily accessible. The point is that no matter how often a process or its children fork, there&#8217;s no way for them to leave the surrounding cgroup. 
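<\/p>
<p>Which cgroups a process belongs to can again be inspected through <code>\/proc<\/code>, and the groups themselves appear as directories in the virtual filesystem under <code>\/sys\/fs\/cgroup<\/code>. A small sketch (the exact layout differs between cgroup v1 and v2, hence the guard):<\/p>

```shell
# Every line names one cgroup the current process is a member of
# (a single "0::/..." line on cgroup v2 hosts, one line per controller on v1).
cat /proc/self/cgroup
# The cgroup hierarchy is exposed as a virtual filesystem:
if [ -d /sys/fs/cgroup ]; then
  ls /sys/fs/cgroup | head -n 5
fi
```

<p>Writing a PID into a cgroup&#8217;s <code>cgroup.procs<\/code> file moves that process into the group; all children it forks afterwards, double-forked or not, stay inside it.<\/p>
<p>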
In case any process inside a cgroup starts running amok, it&#8217;s perfectly sufficient to kill the entire cgroup, and you can be sure the problem is solved. Moreover, cgroups also allow for supplying them with resource limits or access restrictions in advance. In this way, the cgroup&#8217;s behavior can constantly be checked against a custom, fixed set of conditions, making it easy for the system to distinguish a healthy cgroup from a misbehaving one.<\/p>\n<hr>\n<h2>Conclusion<\/h2>\n<p>Admittedly, this was a lot of information for a single blog post. However, it somewhat demystifies Docker containers and shows that, when examined separately, their core principles are not so hard to understand. Furthermore, we now have the basics to look at containers from another perspective and analyze how their underlying technologies relate to security. This will be covered in the next blog post of this series. 
See you soon!<\/p>\n<hr>\n<h2>Sources<\/h2>\n<h4>Web<\/h4>\n<ul>\n<li>Poettering, Lennart: <em>Rethinking PID 1&nbsp;(April 30, 2010)<\/em>, <a href=\"http:\/\/0pointer.de\/blog\/projects\/systemd.html\">http:\/\/0pointer.de\/blog\/projects\/systemd.html<\/a>&nbsp;(last access: August 06, 2016)<\/li>\n<li>Docker Inc.: <em>Docker Overview <\/em>(2016),&nbsp;<a href=\"https:\/\/docs.docker.com\/engine\/understanding-docker\/\">https:\/\/docs.docker.com\/engine\/understanding-docker\/<\/a>&nbsp;(last access: August 06, 2016)<\/li>\n<li>Docker Inc.: <em>Docker and AUFS storage driver in practice <\/em>(2016),&nbsp;<a href=\"https:\/\/docs.docker.com\/engine\/userguide\/storagedriver\/aufs-driver\/\">https:\/\/docs.docker.com\/engine\/userguide\/storagedriver\/aufs-driver\/<\/a>&nbsp;(last access: August 06, 2016)<\/li>\n<li>Wikipedia: <em>Copy-on-write&nbsp;<\/em>(August 05, 2016),&nbsp;<a href=\"https:\/\/en.wikipedia.org\/wiki\/Copy-on-write\">https:\/\/en.wikipedia.org\/wiki\/Copy-on-write<\/a>&nbsp;(last access: August 06, 2016)<\/li>\n<li>Ro\u00dfbach, Schmidt: <em>Docker entschl\u00fcsselt: Netzwerk&nbsp;<\/em>(July 06, 2014),&nbsp;<a href=\"http:\/\/www.infrabricks.de\/blog\/2014\/07\/06\/docker-entschlusselt-netzwerk\/\">http:\/\/www.infrabricks.de\/blog\/2014\/07\/06\/docker-entschlusselt-netzwerk\/<\/a>&nbsp;(last access: August 06, 2016)<\/li>\n<li>Ridwan, Mahmud: <em>Separation Anxiety: A Tutorial for Isolating Your System with Linux Namespaces <\/em>(n.d.),&nbsp;<a href=\"https:\/\/www.toptal.com\/linux\/separation-anxiety-isolating-your-system-with-linux-namespaces\">https:\/\/www.toptal.com\/linux\/separation-anxiety-isolating-your-system-with-linux-namespaces<\/a>&nbsp;(last access: August 06, 2016)<\/li>\n<li>archlinux Wiki: <em>chroot <\/em>(June 03, 2013)<em>,&nbsp;<\/em><a href=\"https:\/\/wiki.archlinux.de\/title\/Chroot\">https:\/\/wiki.archlinux.de\/title\/Chroot<\/a>&nbsp;(last access: August 06, 
2016)<\/li>\n<\/ul>\n<h4>Papers<\/h4>\n<ul>\n<li>Grattafiori, Aaron: <em>Understanding and Hardening Linux Containers<\/em> (NCC Group Whitepaper, Version 1.0, April 20, 2016)<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>When it comes to Docker, most of us&nbsp;immediately start thinking of current trends like Microservices, DevOps, fast deployment, or scalability. Without a doubt, Docker seems to hit the road towards establishing itself&nbsp;as&nbsp;the&nbsp;de-facto standard for lightweight application containers, shipping not only with lots of features and tools, but also great usability. However, another important topic is&nbsp;neglected&nbsp;very [&hellip;]<\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[26,651,2],"tags":[61,3,4,27],"ppma_author":[694],"class_list":["post-1060","post","type-post","status-publish","format-standard","hentry","category-secure-systems","category-system-designs","category-system-engineering","tag-containers","tag-docker","tag-linux","tag-security"],"aioseo_notices":[],"jetpack_featured_media_url":"","jetpack-related-posts":[{"id":27,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2015\/12\/17\/27\/","url_meta":{"origin":1060,"position":0},"title":"Docker- dive into its foundations","author":"Benjamin Binder","date":"17. December 2015","format":false,"excerpt":"Docker has gained a lot of attention over the past several years.\u00a0But not only because of its cool logo or it being\u00a0the top buzzword of managers, but also because of its useful features.\u00a0We talked about Docker quite a bit without really\u00a0understanding why it's so\u00a0great to use. 
So we decided to\u2026","rel":"","context":"In &quot;Databases&quot;","block_context":{"text":"Databases","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/scalable-systems\/databases\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":1299,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/08\/16\/exploring-docker-security-part-2-container-flaws\/","url_meta":{"origin":1060,"position":1},"title":"Exploring Docker Security &#8211; Part 2: Container flaws","author":"Patrick Kleindienst","date":"16. August 2016","format":false,"excerpt":"Now that we've understood the basics, this\u00a0second part will\u00a0cover the most relevant container threats, their possible impact as well as\u00a0existent countermeasures. Beyond that, a short overview\u00a0of the most important sources for container threats will be provided. I'm pretty sure you're not counting on most\u00a0of them. Want to know more? Container\u2026","rel":"","context":"In &quot;Secure Systems&quot;","block_context":{"text":"Secure Systems","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/system-designs\/secure-systems\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/article-1301858-0ABD7881000005DC-365_964x543.jpg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/article-1301858-0ABD7881000005DC-365_964x543.jpg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/article-1301858-0ABD7881000005DC-365_964x543.jpg?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/article-1301858-0ABD7881000005DC-365_964x543.jpg?resize=700%2C400&ssl=1 
2x"},"classes":[]},{"id":1924,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/02\/28\/microservices-legolizing-software-development-4\/","url_meta":{"origin":1060,"position":2},"title":"Microservices \u2013 Legolizing Software Development IV","author":"Calieston Varatharajah, Christof Kost, Korbinian Kuhn, Marc Schelling, Steffen Mauser","date":"28. February 2017","format":false,"excerpt":"An automated development environment will save you. We explain how we set up Jenkins, Docker and Git to work seamlessly together.","rel":"","context":"In &quot;System Designs&quot;","block_context":{"text":"System Designs","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/system-designs\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/draw_io_docker_small-1024x439.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/draw_io_docker_small-1024x439.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/draw_io_docker_small-1024x439.png?resize=525%2C300&ssl=1 1.5x"},"classes":[]},{"id":5175,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2019\/02\/24\/benefiting-kubernetes-part-2-deploy-with-kubectl\/","url_meta":{"origin":1060,"position":3},"title":"Migrating to Kubernetes Part 2 &#8211; Deploy with kubectl","author":"Can Kattwinkel","date":"24. February 2019","format":false,"excerpt":"Written by: Pirmin Gersbacher, Can Kattwinkel, Mario Sallat Migrating from Bare Metal to Kubernetes The interest in software containers is a relatively new trend in the developers world. 
Classic VMs have not lost their right to exist within a world full of monoliths yet, but the trend is clearly towards\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/pexels-photo-379964.jpeg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/pexels-photo-379964.jpeg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/pexels-photo-379964.jpeg?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/pexels-photo-379964.jpeg?resize=700%2C400&ssl=1 2x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/pexels-photo-379964.jpeg?resize=1050%2C600&ssl=1 3x"},"classes":[]},{"id":10949,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2020\/09\/11\/behind-the-scenes-of-modern-operating-systems-security-through-isolation-part-2\/","url_meta":{"origin":1060,"position":4},"title":"Behind the scenes of modern operating systems \u2014 Security through isolation (Part 2)","author":"Artur Bergen","date":"11. September 2020","format":false,"excerpt":"If you have not read the first part, we recommend that you read it first. It covers the topics sandboxing and isolation using Linux kernel features. 
In this part we go one step further and show more tools \u2014 based on part one \u2014 that are used and find their\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":293,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/02\/14\/more-docker-more-power-part-4-problems-arise\/","url_meta":{"origin":1060,"position":5},"title":"More docker = more power? \u2013 Part 4: Problems arise","author":"Tobias Schneider","date":"14. February 2016","format":false,"excerpt":"Now, it\u2019s finally time to start our first load test. We will be using ApacheBench. To install it simply enter apt-get install apache2-utils. To load test your website enter ab -n 200 -c 50 <URL_to_your_page> This command runs 200 requests, with a maximum of 50 at the same time. The\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/01\/1429543497dockerimg.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/01\/1429543497dockerimg.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/01\/1429543497dockerimg.png?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/01\/1429543497dockerimg.png?resize=700%2C400&ssl=1 2x"},"classes":[]}],"jetpack_sharing_enabled":true,"authors":[{"term_id":694,"user_id":4,"is_guest":0,"slug":"pk070","display_name":"Patrick 
Kleindienst","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/d0135b87f4c61a26c5a66f7a2ed6c5c65e24a27662ff67c06a36af82b702336f?s=96&d=mm&r=g","0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""}],"_links":{"self":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/1060","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/comments?post=1060"}],"version-history":[{"count":94,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/1060\/revisions"}],"predecessor-version":[{"id":25539,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/1060\/revisions\/25539"}],"wp:attachment":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/media?parent=1060"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/categories?post=1060"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/tags?post=1060"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/ppma_author?post=1060"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}