{"id":1299,"date":"2016-08-16T22:58:04","date_gmt":"2016-08-16T20:58:04","guid":{"rendered":"https:\/\/blog.mi.hdm-stuttgart.de\/?p=1299"},"modified":"2023-08-06T21:54:04","modified_gmt":"2023-08-06T19:54:04","slug":"exploring-docker-security-part-2-container-flaws","status":"publish","type":"post","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/08\/16\/exploring-docker-security-part-2-container-flaws\/","title":{"rendered":"Exploring Docker Security &#8211; Part 2: Container flaws"},"content":{"rendered":"<figure style=\"width: 964px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" class=\"\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/article-1301858-0ABD7881000005DC-365_964x543.jpg\" width=\"964\" height=\"543\"><figcaption class=\"wp-caption-text\">https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/article-1301858-0ABD7881000005DC-365_964x543.jpg<\/figcaption><\/figure>\n<p>Now that we&#8217;ve understood the basics, this&nbsp;second part will&nbsp;cover the most relevant container threats, their possible impact as well as&nbsp;existent countermeasures. Beyond that, a short overview&nbsp;of the most important sources for container threats will be provided. I&#8217;m pretty sure you&#8217;re not counting on most&nbsp;of them. Want to know more?<\/p>\n<p><!--more--><\/p>\n<h2>Container threats<\/h2>\n<p>Due to their features and fundamental implementation, containers are exposed to several attack vectors. However,&nbsp;before we step further, let me get one thing&nbsp;clear: First, this is not only the fault of container vendors like Docker. As we saw in the first part, containers rely on many features offered by the underlying Linux kernel&nbsp;which&nbsp;includes&nbsp;bugs, too. Second, &nbsp;there&#8217;s no software in the world that can be marked as bug-free and robust against any sort of attacks. 
Therefore, this post is not about criticizing Docker or containers in general (honestly speaking, I really like&nbsp;them both!), but about presenting existing attack surfaces and what can be done about them in a neutral way. So keep the following in mind for the rest of this blog post: Nobody is perfect! And there&#8217;s hardly anything this applies to more than software.<\/p>\n<h4>Inner-container attacks<\/h4>\n<p>Probably the most obvious threat to a container is any attacker who wants to gain&nbsp;access to the container itself. There might be different reasons for this. Possibly the attacker intends to simply shut down the container or to steal available user data that might turn out to be&nbsp;valuable. Another more advanced motivation might be to get a foothold on a single container in order to&nbsp;use it as an&nbsp;initial&nbsp;point for performing attacks against other containers or the container host.<br \/>\nInterestingly, flaws introduced by container systems like Docker are usually not what allows for capturing a container. Instead, it&#8217;s often the software or rather the services hosted by a container which make it vulnerable to invaders. Here&#8217;s a more detailed list of potential causes:<\/p>\n<ul>\n<li><strong>Outdated software<\/strong>: The software running inside a container is not kept up to date.<\/li>\n<li><strong>Exposure to insecure\/untrusted networks<\/strong>: A container is accidentally accessible from unknown networks.<\/li>\n<li><strong>Use of large base images<\/strong>: Lots of software means lots of things to patch. Every additional package&nbsp;or library increases a container&#8217;s attack surface.<\/li>\n<li><strong>Weak application security<\/strong>: A custom application or an application&#8217;s runtime environment (e.g. 
Java&#8217;s JVM) might be exploited by attackers due to vulnerabilities.<\/li>\n<li><strong>Working with&nbsp;the root user (UID 0)<\/strong>: By default, the user inside a container is <em>root<\/em> (UID 0). We&#8217;ll cover the consequences in an extra section.<\/li>\n<\/ul>\n<h4>Cross-container attacks<\/h4>\n<p>Losing&nbsp;control over a single container might be annoying, but considering only basic scenarios, there&#8217;s no impact on the rest of the container infrastructure. As a last resort, the affected container can be deleted and re-instantiated.<br \/>\nThe next level consists in capturing a container and using it to attack other containers existing on the same host or within the local network. For example, an attacker might have the following goals:<\/p>\n<ul>\n<li>Stealing database credentials by means of ARP spoofing.<\/li>\n<li>Executing DoS (Denial of Service) attacks by flooding other container services with requests until they&#8217;re down or making&nbsp;a single container bind all the available physical resources (e.g. by means of fork bombs).<\/li>\n<\/ul>\n<p>Again, the vulnerabilities making that possible are generally not introduced by faulty container implementations or container-related weaknesses. Instead, it&#8217;s the following points that bring about&nbsp;this attack vector:<\/p>\n<ul>\n<li><strong>Weak network defaults<\/strong>: Docker settings default to the <em>bridge<\/em> configuration. 
The host&#8217;s&nbsp;<em>docker0<\/em>&nbsp;interface acts as a switch and passes on traffic between containers without any restriction.<\/li>\n<li><strong>Weak cgroup restrictions<\/strong>: Poor or completely missing resource limits.<\/li>\n<li><strong>Working with the root user (UID 0)<\/strong>: High risk due to parts of the kernel which are not namespace-aware.<\/li>\n<\/ul>\n<p>Especially the <em>bridge<\/em> network configuration constitutes a high risk, since it allows for accessing other containers attached to the same interface without any restrictions. Docker&#8217;s policy is to provide a set of reasonable default configurations in order to get their users up and running without having to deal with various settings beforehand. The <em>bridge<\/em> configuration is part of the Docker defaults, which of course makes sense, considering&nbsp;that e.g. an application container and a database container have to communicate. However, this approach comes with the disadvantage of some users being tempted to leave the entire configuration responsibility up to the container system. As for cgroup settings, it&#8217;s exactly the same thing.<\/p>\n<h4>Attacks against container management tools<\/h4>\n<p>Deploying containerized applications in production is usually not done manually. Instead, tools for container orchestration (e.g. <a href=\"http:\/\/kubernetes.io\/\">Kubernetes<\/a> or <a href=\"http:\/\/mesos.apache.org\/\">Apache Mesos<\/a>) as well as&nbsp;service discovery (e.g. <a href=\"https:\/\/github.com\/Netflix\/eureka\">Eureka<\/a> by Netflix) are employed in order to realize automated workflows as well as robust and scalable environments.<br \/>\nHowever, using such tools requires permitting bidirectional network traffic between containers and the management tools. Tools&nbsp;like Kubernetes manage containers on the basis of health checks comprising lots of different metrics they gather from containers. 
As a consequence, there have to be permanent connections instead of just temporary ones. This brings up different imaginable scenarios:<\/p>\n<ol>\n<li>An attacker captures a single container. Since there&#8217;s a network connection to the orchestration tool, he also succeeds in bringing it under his control by exploiting known&nbsp;weaknesses. From that moment on, the attacker might for example shut down the entire production system, execute DoS attacks or expand into other parts of the internal network.<\/li>\n<li>Again, an attacker captures a single container. Due to vulnerabilities in the service discovery, he captures it as well and either shuts it down or manipulates its data. As in the scenario above, he might aim at causing a crash failure of the whole system or making his way through the internal network.<\/li>\n<\/ol>\n<p>Once more, these threats are not caused by the container system itself. But what could lead to the scenarios described above?<\/p>\n<ul>\n<li><strong>Weak network defaults<\/strong>: Yes indeed, it&#8217;s the network configuration again. The problem is the same as explained in the section covering cross-container attacks.<\/li>\n<li><strong>Weak firewall settings<\/strong>: Poorly configured firewalls may allow containers or additional tools to access parts of the internal network which are not relevant for&nbsp;their purpose.<\/li>\n<\/ul>\n<h4>Escaping<\/h4>\n<p>Another very important threat that must not&nbsp;be underestimated is escaping the container and entering the host. In fact, this can even be considered the worst-case scenario, because once successful, the attacker&#8217;s influence is not limited to a&nbsp;container any more. Rather, assuming&nbsp;he manages to obtain&nbsp;root privileges, he controls every service and every application running on that host. 
Moreover, the attacker may attempt to compromise other machines residing within the local network.<br \/>\nEscaping is an especially critical aspect for containers, since their principle is to give up some degree of isolation in order to gain advantages concerning storage and speed. Here are the aspects which facilitate container breakouts:<\/p>\n<ul>\n<li><strong>Insecure defaults\/weak configuration<\/strong>: This concerns the host firewall and cgroup settings.<\/li>\n<li><strong>Information disclosure<\/strong>: That means e.g. exposing the host&#8217;s <a href=\"http:\/\/linux.die.net\/man\/1\/dmesg\">kernel ring buffer<\/a>, <a href=\"http:\/\/www.ibm.com\/developerworks\/library\/l-proc\/\">procfs<\/a> or <a href=\"https:\/\/www.kernel.org\/doc\/Documentation\/filesystems\/sysfs.txt\">sysfs<\/a> to containers.<\/li>\n<li><strong>Weak network defaults<\/strong>: It&#8217;s very risky to bind a host&#8217;s services and daemons to all interfaces (0.0.0.0) because this way they&#8217;re accessible from within containers.<\/li>\n<li><strong>Working with the root user (UID 0)<\/strong>: Operating as root within a container might lead to container breakout (a proof of concept for Docker will be presented in the next section).<\/li>\n<li><strong>Mounting host directories inside containers<\/strong>: This is very critical especially for Docker containers. I will explain that in the proof of concept section.<\/li>\n<\/ul>\n<p>The points which highly increase the risk of container escaping look very familiar. Indeed, it&#8217;s a relatively small set of container settings or properties that&#8217;s responsible for the greater part of the risks coming with them. One of the&nbsp;most critical aspects is that a Docker container&#8217;s default user is root. That is because if an attacker succeeds in breaking out of a Docker container as the root user, the host system is at his mercy. 
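<\/p>\n<p>At least the root aspect can be mitigated by not running the container&#8217;s process as root in the first place. A minimal sketch (the UID\/GID pair 1000:1000 is just an arbitrary example of an unprivileged user):<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"Sketch: Running a container process as an unprivileged user\"># Start the container process with an unprivileged UID:GID\n$ docker run -i -t -u 1000:1000 ubuntu:latest \/bin\/bash\n\n# Alternatively, bake a dedicated user into the image:\n# RUN useradd -m appuser\n# USER appuser<\/pre>\n<p>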
Under particular conditions, escaping a Docker container is actually very easy. I will cover this in the following section.<\/p>\n<h4>The Linux kernel<\/h4>\n<p>Yes, it&#8217;s true: Even the Linux kernel itself constitutes an important threat as far as containers are concerned. There&#8217;re multiple reasons why this is the case:<\/p>\n<ul>\n<li><strong>Privilege escalation vulnerabilities<\/strong>: New kernel weaknesses which enable users to extend their privileges are constantly being discovered. For example, many smartphone users exploit these security gaps in order to become <em>root<\/em>&nbsp;within their Android OS. Of course this also affects containers.<\/li>\n<li><strong>Outdated kernel code<\/strong>: Many of the already mentioned kernel vulnerabilities are a direct result of the user being able to load and install old packages and kernel modules which are no longer updated and maintained. The most popular Linux distributions like Ubuntu continuously&nbsp;integrate protection mechanisms against this, but they only just started doing so. Another source of kernel-related security issues are the system calls.<\/li>\n<li><strong>Ignorance of the development team<\/strong>: Although it may sound kind of odd, the development team itself is a great threat to kernel security. This is because for many core developers, security does&nbsp;not have the highest&nbsp;priority. Linus Torvalds is particularly infamous for considering security bugs less important than other ones.<\/li>\n<\/ul>\n<hr>\n<h2>Proof of Concept: Escaping&nbsp;a Docker container<\/h2>\n<p>In order to make this more concrete, I will now demonstrate how to escape a Docker container. You will shortly&nbsp;recognize that although this may sound like rocket science at first, it is much easier than it seems. 
And this is exactly what makes it very dangerous.<\/p>\n<h4>How it works in theory<\/h4>\n<p>What we&#8217;ll do with the following &#8220;hack&#8221; is talk to the host&#8217;s Docker daemon with a Docker CLI we installed inside a container. You heard right, we&#8217;ll run Docker inside Docker ;).<\/p>\n<figure style=\"width: 499px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/blog.docker.com\/media\/2013\/09\/docker-meme.jpg\" width=\"499\" height=\"323\"><figcaption class=\"wp-caption-text\">https:\/\/blog.docker.com\/media\/2013\/09\/docker-meme.jpg<\/figcaption><\/figure>\n<p>As soon as we have our dockerized Docker container up and running, we&#8217;ll use the host&#8217;s Docker socket to gain unrestricted&nbsp;root access to the host system (figure 1).<\/p>\n<figure id=\"attachment_1313\" aria-describedby=\"caption-attachment-1313\" style=\"width: 600px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/08\/mount_host_fs.png\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"1313\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/08\/16\/exploring-docker-security-part-2-container-flaws\/mount_host_fs\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/08\/mount_host_fs.png\" data-orig-size=\"684,483\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"mount_host_fs\" data-image-description=\"\" data-image-caption=\"\" 
data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/08\/mount_host_fs.png\" class=\"wp-image-1313\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/08\/mount_host_fs-300x212.png\" alt=\"mount_host_fs\" width=\"600\" height=\"424\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/08\/mount_host_fs-300x212.png 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/08\/mount_host_fs.png 684w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><\/a><figcaption id=\"caption-attachment-1313\" class=\"wp-caption-text\"><strong>Figure 1: Having access to the host&#8217;s Docker daemon from inside a container entirely exposes the host&#8217;s filesystem<\/strong> (created with draw.io).<\/figcaption><\/figure>\n<h4>Docker architecture<\/h4>\n<p>Docker daemon? Docker socket? Ok, let&#8217;s stop here for a moment and take a look at the Docker architecture fundamentals. It&#8217;s important to understand that Docker is not just a single script or program, but consists of three major components which only form a comprehensive container platform when they come together.<br \/>\nThe first element of Docker is what you use as soon as you run commands like <code class=\"\" data-line=\"\">docker run<\/code>, <code class=\"\" data-line=\"\">docker pull<\/code> and so forth. It&#8217;s called the <em>Docker client<\/em> or CLI (command line interface).<br \/>\nHowever, it takes another component to process these commands and perform the heavy lifting: starting, stopping, re-starting and destroying containers. This part of Docker is called the <em>Docker daemon<\/em>, because it&#8217;s a background process.<br \/>\nThe third essential part of Docker is the <em>Docker Registry<\/em>. 
I already mentioned it in the previous blog post and will skip it here since it&#8217;s not relevant for our current concern.<br \/>\nInstead, the next interesting question is: How can&nbsp;CLI and daemon communicate?<\/p>\n<figure style=\"width: 1009px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/docs.docker.com\/engine\/article-img\/architecture.svg\" width=\"1009\" height=\"527\"><figcaption class=\"wp-caption-text\"><strong>Figure 2: Docker is built upon three major components (Client, Daemon and Registry)<\/strong>&nbsp;(https:\/\/docs.docker.com\/engine\/article-img\/architecture.svg).<\/figcaption><\/figure>\n<p>To be able to process commands sent by the Docker client, the Docker daemon provides a RESTful interface. Although one might immediately think of HTTP when reading about&nbsp;&#8220;REST&#8221;, Docker instead makes use of a <a href=\"https:\/\/en.wikipedia.org\/wiki\/Unix_domain_socket\">UNIX domain socket<\/a>&nbsp;for client-daemon communication. This bidirectional communication endpoint can be found under the following path: <code class=\"\" data-line=\"\">\/var\/run\/docker.sock<\/code>.<br \/>\nThe last thing we need to know before we can start is how to access this socket from within a container. Fortunately, Docker comes with a feature that allows mounting directories of the host directly into a&nbsp;container. Such <em>volumes,<\/em>&nbsp;as they&#8217;re called in the Docker ecosystem,<em>&nbsp;<\/em>can be very handy in certain situations like sharing configuration files among multiple containers. However, I will now show you that this feature might backfire if&nbsp;it&#8217;s used&nbsp;carelessly.<\/p>\n<h4>How it&#8217;s done<\/h4>\n<p>The first thing we have to do is to run an ordinary Docker container as we did several times before. However, we will additionally configure a host directory to be mounted within the new container. 
In other words, we&#8217;ll add a <em>volume&nbsp;<\/em>to the container. For our purpose, it&#8217;s very important not to pick an arbitrary directory. Instead, we&#8217;ll choose the <code class=\"\" data-line=\"\">\/var\/run<\/code>&nbsp;folder:<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"Listing 1: Mounting a volume on container creation\"># Launch a container with a volume mounted at \/var\/run\n$ docker run -i -t -v \/var\/run:\/var\/run ubuntu:latest \/bin\/bash<\/pre>\n<p>Why does it have to be exactly the <code class=\"\" data-line=\"\">\/var\/run<\/code> directory? In the previous paragraph, I mentioned that the Docker socket <code class=\"\" data-line=\"\">docker.sock<\/code> we need for talking to the host&#8217;s Docker daemon resides in this folder. That&#8217;s the only reason we can&#8217;t select another one. Consider that the path to <code class=\"\" data-line=\"\">docker.sock<\/code> inside the container must be exactly the same as on the host, since this is the default path where the Docker CLI expects to find it.<br \/>\nNow that we have the host&#8217;s Docker socket available, we also need a corresponding command line interface for sending commands. Therefore, our&nbsp;next step is to install Docker inside the container. I won&#8217;t cover this here, because the procedure is no different from what must be done to install Docker on a normal host and there&#8217;s lots of documentation available covering this&nbsp;under various&nbsp;Linux environments. 
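<\/p>\n<p>Just as a rough sketch, assuming a Debian-based image like the Ubuntu container from Listing 1, the installation could look as follows (Docker&#8217;s convenience script is one common way; any official installation method works just as well):<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"Sketch: Installing a Docker CLI inside the container\"># We do this WITHIN the running container!\nroot@abc123def:\/# apt-get update &amp;&amp; apt-get install -y curl\n\n# Docker&#8217;s convenience script installs client and daemon\nroot@abc123def:\/# curl -fsSL https:\/\/get.docker.com | sh<\/pre>\n<p>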
As soon as you&#8217;re done with this, make sure everything works properly:<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"Listing 2: Checking the Docker setup within our container\"># We do this WITHIN a running container!\nroot@abc123def:\/# docker -v\nDocker version 1.11.0<\/pre>\n<p>Once our Docker in Docker setup is ready, we can hit the host daemon from within the container. Can&#8217;t believe that? Type <code class=\"\" data-line=\"\">docker images<\/code> on your container prompt. Look closely at what you&#8217;re seeing as a result: Indeed, the command returned a list of all images located on the host, didn&#8217;t it?<br \/>\nWe&#8217;re almost done. Before we take the last steps towards breaking out of our container, be clear about what we&#8217;ve achieved so far: Since we&#8217;re able to control the host&#8217;s Docker daemon from inside a container, we logically&nbsp;have everything we need to force our way into any host directory we like. How about mounting the host&#8217;s entire filesystem into a new container?<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"Listing 3: Running a container inside a container\"># Attention: We are already WITHIN a container! \n# We're mounting the host's fs root at \/test \nroot@abc123def:\/# docker run -i -t -v \/:\/test ubuntu:latest \/bin\/bash<\/pre>\n<p>Once your containerized container is ready (which should be a matter of seconds), <code class=\"\" data-line=\"\">cd<\/code> into <code class=\"\" data-line=\"\">\/test<\/code> and execute <code class=\"\" data-line=\"\">ls<\/code>. If everything works as expected, you&#8217;ll discover that every folder under the host&#8217;s root directory is now available within <code class=\"\" data-line=\"\">\/test<\/code>. 
Basically, we&#8217;re now able to manipulate the host filesystem as we like. That&#8217;s exactly what we wanted to achieve.<br \/>\nTo make things a little bit more comfortable, we&#8217;ll now start a new root shell with <code class=\"\" data-line=\"\">\/test<\/code> as our new root directory. So go back to the container root <code class=\"\" data-line=\"\">\/<\/code> and type the following:<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"Listing 4: Using chroot to get around bind-mount restrictions\"># Configuring \/test as our new root dir by creating a chroot jail\nroot@ghi456jkl:\/# chroot \/test\n\n# Starting bash inside our jail\n\/bin\/bash\n\n# Et voil\u00e0\nroot@ghi456jkl:\/# cd \/; ls\nbin dev home lib mnt proc run srv tmp var\nboot etc lib64 media opt root sbin sys usr<\/pre>\n<p>As a side effect, this also helps us to circumvent the access restrictions introduced by a <a href=\"http:\/\/unix.stackexchange.com\/questions\/198590\/what-is-a-bind-mount\">bind-mount<\/a>. To prove to yourself that you&#8217;re actually working directly on the host right now, navigate to a directory of your choice and create a sample file.<br \/>\nNow, exit the inner container as well as the outer container until you&#8217;re finally back on the Docker host. Go to the folder where you created the sample file from within the inner container. You&#8217;ll realize that it really exists.<\/p>\n<h4>The root-dilemma<\/h4>\n<p>Asking why and how it is possible to acquire control over the entire host system from inside a container is a legitimate question. 
From your Docker host, navigate to the sample file from above again and check the file&#8217;s permissions and ownership:<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"Listing 5: The host kernel can't distinguish between two root users with UID 0\">$ ls -al sample_file\n-rw-r--r-- 1 root   root    0 Aug 10 00:00 sample_file<\/pre>\n<p>Look at the user who owns the file from the host&#8217;s point of view: root??<br \/>\nAt first sight, that&#8217;s no surprise because the user inside the container was also called root. However, shouldn&#8217;t the existence of this&nbsp;root user be restricted to the container itself? And before you ask: The container&#8217;s root user actually has UID 0. So you may guess the origin of this dilemma: As soon as a container&#8217;s root leaves its enclosed environment, which is what really happens&nbsp;when it accesses a mounted host directory, there&#8217;s no way for the host to distinguish between the &#8220;real&#8221; host root and the container root user. From the kernel&#8217;s perspective, a user is just a number, and here it sees UID 0 in both cases.<br \/>\nOf course one must notice that this behavior is a consequence of having the Docker socket mounted inside a container. Accessing this socket has been the key to operating on the host without any restrictions. The argument that nobody would ever expose the host&#8217;s Docker socket inside a container isn&#8217;t a very good one in my opinion, since relying solely on the assumption that nobody will ever mount <code class=\"\" data-line=\"\">\/var\/run<\/code> is too risky.<\/p>\n<hr>\n<h2>Countermeasures<\/h2>\n<p>So far, we covered lots of potential risks and threats for containers, and we even got very concrete by stepping through a Docker escape hands-on. 
Now let&#8217;s see what kind of solutions have been developed recently to address these problems.<\/p>\n<h4>User namespaces<\/h4>\n<p>In order to tackle the issue we met when we entered the host filesystem from within a container, a new feature called <em>user namespaces <\/em>has been developed. Remember that the problem&#8217;s core arose from the fact that the Linux kernel was not able to distinguish between the container root user and the actual host root, since both carry UID 0. So once we had access to host directories actually restricted to the host root, nothing could stop us from damaging the entire system in the worst case.<br \/>\nThe solution to this problem is &#8220;root remapping&#8221; (this is what Docker&#8217;s Director of Security Nathan McCauley calls it in his <a href=\"https:\/\/www.youtube.com\/watch?v=w519CClzEuc\">talk&nbsp;at Docker Con 2015<\/a>). What that means is that the root user inside a container is mapped to another user outside the container. Within the container environment, the root user still has&nbsp;UID 0, but is represented by another, non-zero UID as soon as it enters any mounted host directory. Since this way there&#8217;s no more confusion between different users with UID 0 on the host system, the container root (which now has <em>nobody<\/em>&nbsp;privileges on the host) cannot harm the host any more.<br \/>\nLet&#8217;s first activate user namespaces in Docker to prove this actually works. The scenario is quite the same: Start a container with the Docker socket mounted as a volume and install Docker inside the container. To make it easy, the best thing is to reuse the container from above. However, before we start with this, there&#8217;s a small change we have to apply to the systemd Docker configuration (assuming that your init system is systemd). 
Navigate&nbsp;to <code class=\"\" data-line=\"\">\/etc\/systemd\/system\/docker.service.d\/<\/code> and create a new file inside this folder which ends with &#8220;.conf&#8221;. After that, write this into the file:<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"Listing 6: How to activate Docker's user namespaces feature\"># We have to tell systemd that this is a service unit\n[Service]\n# Make sure that the parameter is actually reset\nExecStart=\n# We activate the user namespaces (see last parameter)\nExecStart=\/usr\/bin\/docker daemon -H fd:\/\/ --userns-remap=default<\/pre>\n<p>Afterwards, we have to restart the Docker daemon in systemd style:<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"Listing 7: Publishing our changes to systemd\"># Reload configuration\n$ sudo systemctl daemon-reload\n\n# Restart Docker daemon\n$ sudo systemctl restart docker.service<\/pre>\n<p>Again, prepare a Docker-in-Docker container and &#8230; hey, what&#8217;s going on?!<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"Listing 8: Seems like something has happened ...\">root@111abc222:\/# docker images\nCannot connect to the Docker daemon. Is the docker daemon running on this host?<\/pre>\n<p>It seems like we can&#8217;t access the mounted Docker socket. However, the error message doesn&#8217;t give us the reason for that behavior. Further examination will show that the container user no longer has the necessary permissions to communicate with the host&#8217;s Docker daemon, which requires root privileges or at least membership of the&nbsp;<em>docker<\/em> group. Indeed,&nbsp;user namespaces do their job pretty well.<br \/>\nHowever, we already&nbsp;ran into an important disadvantage of the current user namespaces implementation. 
That is, they can be disabled on demand. As for Docker, the team decided to leave them deactivated by default, which is why we explicitly had to enable them by means of a daemon parameter and restart the corresponding Docker service. At first sight, this doesn&#8217;t seem to be a big deal. But focusing on the average user, I&#8217;m sure that most of them run the Docker daemon with user namespaces disabled or, even worse, have no idea about how to turn this feature on and off or that it even exists. Maybe Docker will adjust their default settings with future releases.<br \/>\nAnother essential aspect is that user namespaces are&nbsp;a relatively new feature. It&#8217;s common for new features to ship with several bugs and unintended backdoors which can be exploited to bypass their security mechanisms. So user namespaces are not (yet?) the silver bullet in terms of container security, since this feature is still simply work in progress!<br \/>\nNevertheless, user namespaces are already valuable when working with Docker. Do you remember I said that accessing the Docker socket requires root privileges OR being part of the <em>docker<\/em> group? Take a moment and think about that, keeping in mind that the daemon runs with root privileges. As a consequence, doesn&#8217;t that mean that belonging to the <em>docker&nbsp;<\/em>group&nbsp;can be regarded as equivalent to having root privileges on the host? Yes, that&#8217;s exactly what it means. You see, the user namespaces feature didn&#8217;t&nbsp;arrive a moment too soon.<\/p>\n<h4>Control Groups<\/h4>\n<p>We already met cgroups in the previous post when discussing the basic container technologies, which is why I&#8217;ll keep this short. When we looked at cgroups the first time, the focus was on putting resource limits (CPU, RAM etc.) upon a group of processes. In terms of security, they&#8217;re also an effective mechanism for&nbsp;establishing access restrictions. 
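<\/p>\n<p>Just as a sketch, such limits can be applied per container on creation; the concrete values below are arbitrary examples (<code class=\"\" data-line=\"\">--pids-limit<\/code> requires Docker 1.11 or later):<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"Sketch: Applying cgroup-based resource limits to a container\"># Cap memory at 256 MB, lower the CPU weight and limit the\n# number of processes (which also mitigates fork bombs)\n$ docker run -i -t --memory=256m --cpu-shares=512 --pids-limit=100 ubuntu:latest \/bin\/bash<\/pre>\n<p>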
However, I don&#8217;t want to conceal that on the other hand cgroups also have the potential of being a serious threat to container security. The reason for this is their implementation as a virtual filesystem, which enables external tools to view existing cgroups and control them if necessary. Since the <em>cgroup virtual filesystem <\/em>can be mounted by anyone interested in this&nbsp;information (also by containers!), it must be considered a possible method for container escape if employed incautiously.<\/p>\n<h4>Capabilities<\/h4>\n<p>I think we agree that user namespaces are a reasonable kernel extension in terms of container security. However, due to some Linux characteristics even this new feature can&#8217;t provide complete protection of the host system. And we&#8217;re not talking about any bugs here.<br \/>\nMaybe you already heard about so-called <em>setuid binaries (in short: suid binaries)<\/em>. As opposed to other binaries, they&#8217;re executed with the privileges of the file&#8217;s owner in place of the user running them. A special type of suid binaries is <em>suid root binaries<\/em>, which belong to the root user and therefore are always executed with its unrestricted privileges, no matter which user puts&nbsp;them into action. I&#8217;m pretty sure you&#8217;re surprised to hear that probably the most famous of these binaries is <em>\/bin\/ping<\/em>. Just to get that clear: Every time you do a ping, it&#8217;s done with root privileges!<br \/>\nThe consequence that arises from the existence of suid root binaries is that they introduce a large gateway for privilege escalation attacks. Unfortunately,&nbsp;user namespaces can&#8217;t even help us here. So the idea behind <em>capabilities<\/em> is giving a single process temporary and very fine-granular permissions. Capabilities can dynamically be handed over to or&nbsp;taken from a process even at&nbsp;execution time. 
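<\/p>\n<p>Docker exposes this mechanism through the <code class=\"\" data-line=\"\">--cap-drop<\/code> and <code class=\"\" data-line=\"\">--cap-add<\/code> flags. A minimal sketch:<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"Sketch: Restricting a container to a minimal set of capabilities\"># Drop all capabilities first, then re-add only what is really\n# needed, e.g. CAP_NET_RAW so that ping keeps working\n$ docker run -i -t --cap-drop=ALL --cap-add=NET_RAW ubuntu:latest \/bin\/bash<\/pre>\n<p>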
Regarding <em>\/bin\/ping, <\/em>privilege escalation may be avoided by granting it only the permissions it really needs (i.e. network access) instead of being too generous and giving it unrestricted access to parts of the system that are none of its business.<br \/>\nAlthough capabilities are a quite reasonable and powerful feature, they also come with a great weakness, namely <em>CAP_SYS_ADMIN<\/em>. This is the most dangerous capability, since giving it to any process has much the same effect as running it as the root user. There&#8217;s a high risk that many users are too lazy to take the time to carefully think about which permissions are really required and to assign them in the form of individual capabilities. Instead, they might be tempted to use <em>CAP_SYS_ADMIN<\/em> in these situations. You see that it&#8217;s not only about the technology, because users play an important role in terms of security, too.<\/p>\n<h4>Mandatory Access Control<\/h4>\n<p>So what else can be done? Another security concept which is actually very old and now widely adopted by several container vendors is <em>Mandatory Access Control (MAC)<\/em>. MAC is heavily based on the paper &#8220;Integrity Considerations for Secure Computing Systems&#8221;, which was commissioned by the US Air Force in 1977. Amongst other things, this paper discusses the idea of watermarks and policies in order to protect the integrity of a computing system&#8217;s data. However, it was not until about ten years later that the <em>National Security Agency (NSA) <\/em>gave MAC a push towards being integrated into operating systems.<br \/>\nWhile <em>Discretionary Access Control (DAC) <\/em>purely makes its access decisions based on the notion of <em>subjects <\/em>(e.g. users or processes) and <em>objects<\/em> (e.g. files or sockets), as you may be familiar with from working with Linux, MAC goes beyond that. 
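Translated to Docker, the careful approach sketched above means dropping every capability and re-adding only the ones a container demonstrably needs. A minimal example (the image and port mapping are just placeholders):

```shell
# Start from an empty capability set and re-add only what's required;
# NET_BIND_SERVICE merely allows binding to privileged ports (< 1024),
# so even a compromised web server gains almost nothing from its caps
docker run -d \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  -p 8080:80 \
  nginx
```

Note that the lazy shortcut criticized above would be `--cap-add=SYS_ADMIN`, which hands the container near-root power and defeats the whole point of the mechanism.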
It relies on a policy for decision making, which is made up of a custom set of rules. An essential assumption is that everything that&#8217;s not explicitly allowed by a policy rule is interpreted as forbidden by the security system.<br \/>\nThere&#8217;re several MAC implementations available on Linux, among which <em>SELinux (Security Enhanced Linux)<\/em> and <em>AppArmor <\/em>might be the most popular ones. What they have in common is their ability to define very fine-granular rules. However, configuring either of them is generally considered quite complex, since a custom policy requires the creation of a security profile. Each profile contains rules which must be expressed in a specific language. For this reason, Docker ships with an AppArmor template containing some rules that should provide a reasonable default. As for SELinux, an appropriate Docker profile is provided by Red Hat.<\/p>\n<hr>\n<h2>Conclusion<\/h2>\n<p>Although container technology offers great opportunities for highly available and scalable IT infrastructures, its impact on security must not be underestimated. However, we also see that the threats we discussed do not originate from containers themselves in the first place. In fact, the most dangerous risks are introduced by the users themselves as well as the underlying Linux kernel.<br \/>\nEven the most advanced security features cannot give us any protection if they&#8217;re explicitly disabled by the system administrator. Of course, one may answer that Docker ships with user namespaces and MAC disabled by default. However, we should not take that as a reason for absolving ourselves of the responsibility for taking care of our systems. Moreover, Docker has already explained that e.g. 
user namespaces will be activated by default in future releases.<br \/>\nThen there&#8217;s the Linux kernel, whose development team didn&#8217;t take security seriously enough for a long time, which is why lots of security features now have to be added retroactively. In fact, the security enhancements discussed above constitute a step in the right direction, albeit introducing new functionality usually involves introducing new bugs. Besides, there&#8217;re still many parts of the Linux kernel which are not namespace-aware.<br \/>\nOf course, there&#8217;re also features coming with Docker which enlarge the attack surface. We saw that Docker&#8217;s ability to give a container access to a host directory by offering volumes might lead to container escape. Thus, I&#8217;m not really a big fan of Docker&#8217;s mount feature, even though it might be practical, e.g. for aggregating logs or sharing configuration files among multiple containers. However, in my opinion this is not only dangerous, but also hurts the principle of a container being a self-contained process execution unit. What are your thoughts on this?<br \/>\nThis shall be enough for this blog post. In the third and last part of this series, we will take a short look at a very nice Docker feature called Docker Content Trust. Stay tuned!<\/p>\n<hr>\n<h2>Further research questions<\/h2>\n<ul>\n<li>The security awareness in terms of container security seems to have increased as of late. Will container security ever reach a level where a container can really be called <em>secure<\/em>? Is absolute security even possible?<\/li>\n<li>What kind of security features will container vendors like Docker plan for future releases?<\/li>\n<li>In January 2016 Docker announced that <a href=\"https:\/\/blog.docker.com\/2016\/01\/unikernel\/\">Unikernel Systems joined the team<\/a>. Will this technology go beyond bringing Docker to Windows or Mac OS and take Docker security to a higher level in the future? 
Can Unikernels even help with that?<\/li>\n<li>Since containers are a key factor&nbsp;driving Linux kernel development, Docker&#8217;s participation in Linux kernel improvement would make perfect sense. When will they start to take part? Did they already?<\/li>\n<li>Of course, security is becoming more and more important in the software industry. This also holds true for an ordinary software developer&#8217;s daily work. What might a developer&#8217;s field of responsibility look like in a few years? What kind of role does the DevOps movement play here?<\/li>\n<\/ul>\n<hr>\n<h2>Sources<\/h2>\n<h4>Web<\/h4>\n<ul>\n<li>Docker Inc.: <em>Docker security <\/em>(2016),&nbsp;<a href=\"https:\/\/docs.docker.com\/engine\/security\/security\/\">https:\/\/docs.docker.com\/engine\/security\/security\/<\/a> (last access: August 16, 2016)<\/li>\n<li>lvh: <em>Don&#8217;t expose the Docker socket (not even to a container) <\/em>(August 23, 2015)<em>,<\/em> <a href=\"https:\/\/www.lvh.io\/posts\/dont-expose-the-docker-socket-not-even-to-a-container.html\">https:\/\/www.lvh.io\/posts\/dont-expose-the-docker-socket-not-even-to-a-container.html<\/a> (last access: August 16, 2016)<\/li>\n<\/ul>\n<h4>Papers<\/h4>\n<ul>\n<li>Grattafiori, Aaron: <em>Understanding and Hardening Linux Containers<\/em> (NCC Group Whitepaper, Version 1.0, April 20, 2016)<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Now that we&#8217;ve understood the basics, this&nbsp;second part will&nbsp;cover the most relevant container threats, their possible impact as well as&nbsp;existent countermeasures. Beyond that, a short overview&nbsp;of the most important sources for container threats will be provided. I&#8217;m pretty sure you&#8217;re not counting on most&nbsp;of them. 
Want to know more?<\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[26,651,2],"tags":[61,3,4,27],"ppma_author":[694],"class_list":["post-1299","post","type-post","status-publish","format-standard","hentry","category-secure-systems","category-system-designs","category-system-engineering","tag-containers","tag-docker","tag-linux","tag-security"],"aioseo_notices":[],"jetpack_featured_media_url":"","jetpack-related-posts":[{"id":27,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2015\/12\/17\/27\/","url_meta":{"origin":1299,"position":0},"title":"Docker- dive into its foundations","author":"Benjamin Binder","date":"17. December 2015","format":false,"excerpt":"Docker has gained a lot of attention over the past several years.\u00a0But not only because of its cool logo or it being\u00a0the top buzzword of managers, but also because of its useful features.\u00a0We talked about Docker quite a bit without really\u00a0understanding why it's so\u00a0great to use. So we decided to\u2026","rel":"","context":"In &quot;Databases&quot;","block_context":{"text":"Databases","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/scalable-systems\/databases\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":1060,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/08\/06\/exploring-docker-security-part-1-the-whales-anatomy\/","url_meta":{"origin":1299,"position":1},"title":"Exploring Docker Security &#8211; Part 1: The whale&#8217;s anatomy","author":"Patrick Kleindienst","date":"6. August 2016","format":false,"excerpt":"When it comes to Docker, most of us\u00a0immediately start thinking of current trends like Microservices, DevOps, fast deployment, or scalability. 
Without a doubt, Docker seems to hit the road towards establishing itself\u00a0as\u00a0the\u00a0de-facto standard for lightweight application containers, shipping not only with lots of features and tools, but also great usability.\u2026","rel":"","context":"In &quot;Secure Systems&quot;","block_context":{"text":"Secure Systems","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/system-designs\/secure-systems\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/ballena-de-alas-largas-240873.jpg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/ballena-de-alas-largas-240873.jpg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/ballena-de-alas-largas-240873.jpg?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/ballena-de-alas-largas-240873.jpg?resize=700%2C400&ssl=1 2x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/ballena-de-alas-largas-240873.jpg?resize=1050%2C600&ssl=1 3x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/ballena-de-alas-largas-240873.jpg?resize=1400%2C800&ssl=1 4x"},"classes":[]},{"id":1924,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/02\/28\/microservices-legolizing-software-development-4\/","url_meta":{"origin":1299,"position":2},"title":"Microservices \u2013 Legolizing Software Development IV","author":"Calieston Varatharajah, Christof Kost, Korbinian Kuhn, Marc Schelling, Steffen Mauser","date":"28. February 2017","format":false,"excerpt":"An automated development environment will save you. 
We explain how we set up Jenkins, Docker and Git to work seamlessly together.","rel":"","context":"In &quot;System Designs&quot;","block_context":{"text":"System Designs","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/system-designs\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/draw_io_docker_small-1024x439.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/draw_io_docker_small-1024x439.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/draw_io_docker_small-1024x439.png?resize=525%2C300&ssl=1 1.5x"},"classes":[]},{"id":26254,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2024\/03\/21\/docker-security-hands-on-guide\/","url_meta":{"origin":1299,"position":3},"title":"Docker security: Hands-on guide","author":"Maximilian Tellmann","date":"21. March 2024","format":false,"excerpt":"Absichern von Docker Containern, durch die Nutzung von Best Practices in DockerFiles und Docker Compose. Einf\u00fchrung Es ist sehr wahrscheinlich im Alltag mit containerisierten Anwendungen in Ber\u00fchrung zu kommen, ohne sich dessen bewusst zu sein. In einer Zeit, in der sich der Trend der Unternehmen weiterhin stark in Richtung Cloud\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":5175,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2019\/02\/24\/benefiting-kubernetes-part-2-deploy-with-kubectl\/","url_meta":{"origin":1299,"position":4},"title":"Migrating to Kubernetes Part 2 &#8211; Deploy with kubectl","author":"Can Kattwinkel","date":"24. 
February 2019","format":false,"excerpt":"Written by: Pirmin Gersbacher, Can Kattwinkel, Mario Sallat Migrating from Bare Metal to Kubernetes The interest in software containers is a relatively new trend in the developers world. Classic VMs have not lost their right to exist within a world full of monoliths yet, but the trend is clearly towards\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/pexels-photo-379964.jpeg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/pexels-photo-379964.jpeg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/pexels-photo-379964.jpeg?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/pexels-photo-379964.jpeg?resize=700%2C400&ssl=1 2x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/pexels-photo-379964.jpeg?resize=1050%2C600&ssl=1 3x"},"classes":[]},{"id":170,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/01\/07\/more-docker-more-power-part-2-setting-up-nginx-and-docker\/","url_meta":{"origin":1299,"position":5},"title":"More docker = more power? \u2013 Part 2: Setting up Nginx and Docker","author":"Moritz Lottermann","date":"7. January 2016","format":false,"excerpt":"This is Part 2 of a series of posts. You can find Part 1 here:\u00a0https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/01\/03\/more-docker-more-power-part-1-setting-up-virtualbox\/ In the first part of this series we have set up two VirtualBox machines. One functions as the load balancer and the other will house our services. 
As the next step we want to install\u2026","rel":"","context":"In &quot;System Designs&quot;","block_context":{"text":"System Designs","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/system-designs\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/01\/1429543497dockerimg.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/01\/1429543497dockerimg.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/01\/1429543497dockerimg.png?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/01\/1429543497dockerimg.png?resize=700%2C400&ssl=1 2x"},"classes":[]}],"jetpack_sharing_enabled":true,"authors":[{"term_id":694,"user_id":4,"is_guest":0,"slug":"pk070","display_name":"Patrick Kleindienst","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/d0135b87f4c61a26c5a66f7a2ed6c5c65e24a27662ff67c06a36af82b702336f?s=96&d=mm&r=g","0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""}],"_links":{"self":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/1299","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/comments?post=1299"}],"version-history":[{"count":47,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/1299\/revisions"}],"predecessor-version":[{"id":25537,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/1299\/revisions\/25537"}],"wp:attachment":
[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/media?parent=1299"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/categories?post=1299"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/tags?post=1299"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/ppma_author?post=1299"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}