{"id":1373,"date":"2016-09-13T14:25:01","date_gmt":"2016-09-13T12:25:01","guid":{"rendered":"https:\/\/blog.mi.hdm-stuttgart.de\/?p=1373"},"modified":"2023-08-06T21:53:55","modified_gmt":"2023-08-06T19:53:55","slug":"exploring-docker-security-part-3-docker-content-trust","status":"publish","type":"post","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/09\/13\/exploring-docker-security-part-3-docker-content-trust\/","title":{"rendered":"Exploring Docker Security &#8211; Part 3: Docker Content Trust"},"content":{"rendered":"<figure style=\"width: 800px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" class=\"\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/Notary.jpg\" width=\"800\" height=\"450\"><figcaption class=\"wp-caption-text\">https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/Notary.jpg<\/figcaption><\/figure>\n<p>In terms of security, obtaining&nbsp;Docker images from private or public Docker Registries is affected by the same issues as&nbsp;every software update system: It must be ensured that a client can always verify the publisher of the content and also that he or she actually got the latest version of the image. In order to provide its users with these guarantees, Docker has shipped with a feature called <em>Docker Content Trust<\/em>&nbsp;since version 1.8.<br \/>\nThis third and last part of this series intends to give an overview of Docker Content Trust, which in fact combines different frameworks and tools, namely <em>Notary <\/em>and <em>Docker Registry v2<\/em>,&nbsp;into a rich and powerful feature set making Docker images more secure.<\/p>\n<p><!--more--><\/p>\n<h2>The software update process<\/h2>\n<p>First of all, I want to make sure that we have a common understanding of how software update systems generally do their job. 
It doesn&#8217;t really&nbsp;matter if we think of package managers like <a href=\"https:\/\/help.ubuntu.com\/lts\/serverguide\/apt.html\"><em>APT<\/em><\/a> or software library managers like <a href=\"https:\/\/rubygems.org\/\">RubyGems<\/a>, since their core functionality is basically the same.&nbsp;So let&#8217;s briefly skim through&nbsp;the steps such an update system performs to check and &#8211; if necessary &#8211; install the latest updates:<\/p>\n<ol>\n<li>The updating process starts with a download of metadata files from a certain repository. These files list the latest version(s) of the software hosted by this&nbsp;repository.<\/li>\n<li>The client&#8217;s update system investigates the previously fetched metadata files and checks if there&#8217;s any software available which is newer than what is currently installed on the client machine.<\/li>\n<li>In case there&#8217;s new software available, the software update system downloads and installs the latest version(s) of the package(s) or application(s).<\/li>\n<li>In case the client is already in possession of the latest software, nothing happens.<\/li>\n<\/ol>\n<p>&nbsp;<\/p>\n<h2>Conceivable&nbsp;attacks on software update systems<\/h2>\n<p>Now that we clarified how software update systems operate in high-level terms, I want to convey&nbsp;a first picture&nbsp;of why securing the software update process is so important. There&#8217;re many different kinds of attacks a client might be exposed to, so let&#8217;s take a moment and review some of them:<\/p>\n<ul>\n<li><strong>Arbitrary installation attacks<\/strong> &#8211; A client is presented arbitrary data by the attacker as a response to a software update download request.<\/li>\n<\/ul>\n<ul>\n<li><strong>Endless data attacks<\/strong> &#8211; An attacker overwhelms a client with a large amount of data as a response to a software update download request. 
The client machine might not be able to cope with the masses of input that it receives and finally crashes. This is&nbsp;also called a DoS (Denial of Service) attack.<\/li>\n<\/ul>\n<ul>\n<li><strong>Extraneous dependencies attacks<\/strong> &#8211; An attacker forces a client into downloading malicious software as dependencies&nbsp;which are not actually needed.<\/li>\n<\/ul>\n<ul>\n<li><strong>Fast-forward attacks<\/strong> &#8211; An attacker tricks a client&#8217;s software update system into marking a file as newer than the most up-to-date and valid version on the update server. As a consequence, the client&#8217;s update system refuses to install a revision of a file that is older than what it has already seen, which prevents the&nbsp;client from receiving&nbsp;the latest updates.<\/li>\n<\/ul>\n<ul>\n<li><strong>Indefinite freeze attacks<\/strong> &#8211; An attacker answers any update request with outdated metadata. The client therefore will never see that there&#8217;re any updates available.<\/li>\n<\/ul>\n<ul>\n<li><strong>Mix-and-match attacks<\/strong> &#8211; An attacker presents a composition of different metadata concerning different&nbsp;packages. However, this combination of metadata might have never existed on the server at the same time. In this way, the attacker can serve an arbitrary combination of several packages with random versions.<\/li>\n<\/ul>\n<ul>\n<li><strong>Rollback attacks<\/strong> &#8211; An attacker tricks a client into installing outdated software.<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<h2>Why not use GPG for image signing?<\/h2>\n<p>A technology that is often used for signing software packages is GPG (GNU Privacy Guard). The package to distribute gets&nbsp;signed with a private key and is then published along with the corresponding public key, allowing any client to verify the signature (and also&nbsp;the identity) of the original publisher. 
So why not use that approach for signing Docker images?<br \/>\nWhile GPG sounds like a simple and reliable system, it&nbsp;also comes with major&nbsp;drawbacks:<\/p>\n<ul>\n<li><strong>No freshness guarantees<\/strong> &#8211; Consider a Man-in-the-Middle who serves you a package from years ago, which also has been signed by the currently valid key pair. Your update system will download the package, successfully verify the signature and install the software. Since&nbsp;the package was signed properly and you have no idea that it might come from an attacker, you unwittingly ran into a rollback attack.<\/li>\n<\/ul>\n<ul>\n<li><strong>Vulnerable GPG&nbsp;key<\/strong> &#8211; Since managing software as well as building and signing packages is done by means of automated processes on remote (front-end) servers (e.g. CI servers), the GPG private key also has to be kept online, where it might be stolen by malicious invaders. In case that really happens, a new key pair has to be generated and all clients have to revoke their trust in the old keys, so that software signed with the stolen key can no longer be verified successfully. This is not just lots of work, but also very embarrassing.<\/li>\n<\/ul>\n<p>Facing that, it might be clear why Docker decided to go another way for image verification. In the next section, I want to focus on the approach Docker built upon.<\/p>\n<p>&nbsp;<\/p>\n<h2>The Update Framework (TUF)<\/h2>\n<p>TUF (a play on words with <em>tough<\/em> security) is a software update framework which was started in 2009. It&#8217;s heavily based&nbsp;on <em>Thandy<\/em>, which is the application updater of the <a href=\"https:\/\/www.torproject.org\/\">Tor browser<\/a>.<br \/>\nIn contrast to Thandy or other application updaters or package managers, TUF aims at being a universal extension for any software update system that wants to use it, rather than a standalone software update tool. 
&nbsp;The TUF specification as well as a reference implementation can be found on <a href=\"https:\/\/github.com\/theupdateframework\/tuf\">Github<\/a>.<\/p>\n<p>&nbsp;<\/p>\n<h4>Roles, keys and files<\/h4>\n<p>We already saw that the GPG approach is vulnerable due to a single signing key which is kept online and therefore exposed to potential attackers. In order to bypass that problem, TUF defines a hierarchy of different keys with different privileges and varied&nbsp;expiration dates instead of relying on a single key. These keys are bound to specific roles; the owner of the root key, for example, holds the <em>root role&nbsp;<\/em>within the system. On top of that, TUF determines&nbsp;a set&nbsp;of metadata files which must be present in&nbsp;a repository&#8217;s top-level directory. Let&#8217;s take a closer look at the framework&#8217;s architecture.<\/p>\n<figure id=\"attachment_1589\" aria-describedby=\"caption-attachment-1589\" style=\"width: 716px\" class=\"wp-caption alignleft\"><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/09\/notary_keys.png\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"1589\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/09\/13\/exploring-docker-security-part-3-docker-content-trust\/notary_keys\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/09\/notary_keys.png\" data-orig-size=\"716,589\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"notary_keys\" data-image-description=\"\" data-image-caption=\"\" 
data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/09\/notary_keys.png\" class=\"wp-image-1589 size-full\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/09\/notary_keys.png\" alt=\"notary_keys\" width=\"716\" height=\"589\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/09\/notary_keys.png 716w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/09\/notary_keys-300x247.png 300w\" sizes=\"auto, (max-width: 716px) 100vw, 716px\" \/><\/a><figcaption id=\"caption-attachment-1589\" class=\"wp-caption-text\"><strong>Figure 1: Hierarchy of keys and roles in TUF<\/strong> (<a href=\"https:\/\/camo.githubusercontent.com\/7f810d43a785c341d6bf4606c884c371a665013c\/68747470733a2f2f63646e2e7261776769742e636f6d2f646f636b65722f6e6f746172792f303966383137313730383066353332373665363838316563653537636262626639316238653261372f646f63732f696d616765732f6b65792d6869657261726368792e737667\">Source<\/a>)<\/figcaption><\/figure>\n<p>&nbsp;<\/p>\n<ul>\n<li><strong>Root key<\/strong> &#8211; The root key acts as the root of trust in&nbsp;TUF. It&#8217;s the key with the longest expiration date as well as&nbsp;the highest privileges, and its only&nbsp;job is to sign the other keys in the system, which is why it should be kept offline and secure, e.g. on a USB drive or smart card. More precisely, the root role signs a file called <em>root.json<\/em>, which is required by the TUF specification and lists the currently valid public keys for all the other keys in the system. In this way, the validity of the other keys can always be checked by a&nbsp;client at any time.<\/li>\n<\/ul>\n<ul>\n<li><strong>Snapshots key<\/strong> &#8211; This key signs the <em>snapshot.json&nbsp;<\/em>file, which contains a list of all currently valid metadata files (i.e. file name, file size and hash) except for <em>timestamp.json <\/em>(more on that in a little while). 
In other words, this file gives us a &#8220;snapshot&#8221;, comprising everything in our repository that should be considered a part of the latest revision of our software. This idea of taking &#8220;snapshots&#8221; underlines one of TUF&#8217;s core concepts, which is thinking in collections instead of single files. In this way, we can protect ourselves from Mix-and-match attacks.<\/li>\n<\/ul>\n<ul>\n<li><strong>Timestamp key<\/strong> &#8211; The timestamp key signs the <em>timestamp.json<\/em> file, which in turn indicates the currently valid <em>snapshot.json<\/em> file by hash, file size and version number. It has the shortest expiration time and the least privileges in the system, since it is kept online and therefore must be considered very vulnerable. <em>Timestamp.json&nbsp;<\/em>gets re-signed at regular intervals.&nbsp;Thereby, clients can be provided with freshness guarantees, which means they can be sure to actually download the latest&nbsp;updates.<\/li>\n<\/ul>\n<ul>\n<li><strong>Targets key<\/strong> &#8211; This key&nbsp;is finally&nbsp;responsible for verifying&nbsp;the files we want to protect (i.e. the &#8220;target&#8221; files). It&nbsp;signs <em>targets.json<\/em>, a file which lists our target files by file name, file size and hash and therefore ensures their&nbsp;integrity. The targets role allows for delegating responsibility to one or more subordinated roles, meaning that they can also sign a subset of the present target files (e.g. all files in a certain subdirectory). The advantage that comes with delegations is that the owner of the targets key doesn&#8217;t have to share it with others. Instead, the targets role signs one or more delegation keys, which only apply to the files it wants to delegate trust for. 
In this way, there&#8217;s no possibility for the delegated role to sign any content it isn&#8217;t supposed to.<\/li>\n<\/ul>\n<h4>TUF in action &#8211; Bringing it all together<\/h4>\n<p>I guess you might be clobbered over the head with all the different roles, files and keys that we went through. However, when stepping through a single TUF workflow, it&#8217;s much easier to understand what they do and how everything&nbsp;fits together.<br \/>\nSo when a client application interacts with TUF in order to check for updates, the following steps occur:<\/p>\n<ol>\n<li>The client application instructs TUF to search&nbsp;for available updates. If the client interacts with the repository for the first time, <em>root.json<\/em> is downloaded and the root public key gets imported. Remember that this file allows us to&nbsp;verify the signatures of all the other keys.<\/li>\n<li>TUF downloads the <em>timestamp.json&nbsp;<\/em>file from the repository, checks its signature with the public key given by <em>root.json<\/em> and compares it to the latest version of the file that is present on the client machine. The timestamp file tells us about the latest valid snapshot within the system, remember?<\/li>\n<li>In case TUF recognizes that <em>snapshot.json<\/em>&nbsp;has changed, the framework also downloads this file and verifies the signature by means of the public key that came with <em>root.json.&nbsp;<\/em>TUF then inspects the latest&nbsp;version of the snapshots file and checks if any other metadata file (<em>targets.json <\/em>and\/or<em> root.json<\/em>) has been modified.<\/li>\n<li>In case <em>root.json&nbsp;<\/em>has changed (e.g. due to a key rotation), the latest revision of this file is fetched from the repository and the update process restarts with step 1.<\/li>\n<li>If <em>targets.json<\/em> has been modified, that means that one or more target files have been updated in the meantime. 
TUF downloads the file, verifies it, inspects it and finally creates a list of files which can be updated. The list&nbsp;is then presented to the client update system.<\/li>\n<li>For all files on the list that shall be updated, TUF is instructed to download them.<\/li>\n<li>TUF downloads the files from the repository, stores them within a&nbsp;temporary directory and checks their signatures. Only after all the fetched files have been successfully verified does TUF hand them over to the software update system.<\/li>\n<\/ol>\n<h4>What TUF gives us<\/h4>\n<p>Although we only scratched the surface of TUF, it&#8217;s quite obvious that understanding its internals is not so trivial. So what do all these keys and files give us in the end?<br \/>\nSumming up, TUF helps us bypass the most important flaws of GPG signing we discussed above:<\/p>\n<ul>\n<li><strong>Surviving key compromise<\/strong> &#8211; Since several keys are needed for being able to sign and publish new valid content, a single compromised key does not immediately result in an entirely compromised system. Consider the timestamp key to be compromised: Because the attacker doesn&#8217;t also own the targets key, all we lose is our freshness guarantees. However, the attacker is still not able to publish new content. Thinking about the inverse scenario where an attacker succeeds in stealing the targets key, he or she can in fact sign the content, but since the timestamp key is still safe, there&#8217;s no way this content will ever be published as the most recent revision. In order to invalidate any stolen key, everything an admin&nbsp;has to do is take the offline root key and rotate the other keys. A very nice side-effect that comes with&nbsp;this sort of key rotation is that any client of the affected repository only has to accept and import the new public keys. Of course all the guarantees concerning survival of key compromise presume the root key to be safe. 
If the root key is stolen,&nbsp;there are no guarantees left about your content at all.<\/li>\n<\/ul>\n<ul>\n<li><strong>Freshness guarantees<\/strong>&nbsp;&#8211; On top of allowing you to survive key compromise, TUF also ensures that a client&#8217;s software update system always gets not just exactly the file or package that it asked for, but also the latest version of it. Therefore, a hacker or Man-in-the-Middle can no longer serve clients outdated software, since it&#8217;s signed by means of a highly ephemeral timestamp key. Thereby, no client can ever be tricked into installing software which is actually older than what he or she has already installed.<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<h2>Docker Notary<\/h2>\n<p>So how can Docker benefit from TUF in order to make image distribution more secure? The answer is quite simple: The Docker team stuck to the TUF specification and built its own update system&nbsp;on top of it. This is where <em>Notary<\/em> comes into play, which is in fact&nbsp;an opinionated implementation of TUF. Notary&#8217;s primary task is to enable clients to ensure a Docker image&#8217;s integrity as well as verify the identity of its publisher.<br \/>\nBear in mind that it&#8217;s not Notary&#8217;s job to check the contents of an image in any way or perform any code analysis, as Diogo M\u00f3nica points out in his <a href=\"https:\/\/www.youtube.com\/watch?v=JvjdfQC8jxM\">talk&nbsp;about Docker Content Trust<\/a>. We only talk about integrity and publisher identity as the main concerns for Notary. From an OOP perspective, one can say that Notary follows the <em>Single-Responsibility-Principle:<\/em> It does exactly one thing, and that&#8217;s image signing.<br \/>\nIt&#8217;s also worth mentioning that Notary, even though it has been implemented by Docker, is not restricted&nbsp;to Docker images in any way. 
Instead, Notary is a completely independent tool which can work on arbitrary repositories or collections of data.<\/p>\n<h4>Notary architecture<\/h4>\n<p>Notary consists of two major components: The <em>Notary server<\/em>&nbsp;and the <em>Notary signer<\/em>. Notary clients only interact with the Notary server, by pulling metadata from or pushing metadata to it. The server stores the TUF metadata files for one or more trusted collections in an associated database.<br \/>\nThe Notary signer can be regarded as an independent entity, which stores all TUF private keys in a separate database (the signer DB) and signs metadata for the Notary server. The following figure provides an overview of the architecture.<\/p>\n<figure id=\"attachment_1599\" aria-describedby=\"caption-attachment-1599\" style=\"width: 656px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/09\/notary_architecture.png\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"1599\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/09\/13\/exploring-docker-security-part-3-docker-content-trust\/notary_architecture\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/09\/notary_architecture.png\" data-orig-size=\"907,616\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"notary_architecture\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/09\/notary_architecture.png\" class=\"wp-image-1599 size-medium_large\" 
src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/09\/notary_architecture-768x522.png\" alt=\"notary_architecture\" width=\"656\" height=\"446\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/09\/notary_architecture-768x522.png 768w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/09\/notary_architecture-300x204.png 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/09\/notary_architecture.png 907w\" sizes=\"auto, (max-width: 656px) 100vw, 656px\" \/><\/a><figcaption id=\"caption-attachment-1599\" class=\"wp-caption-text\"><strong>Figure 2: Notary&#8217;s client-server-signer architecture<\/strong> (<a href=\"https:\/\/camo.githubusercontent.com\/54cbc4e524756f310fcf14b1e89efa42b83d8c98\/68747470733a2f2f63646e2e7261776769742e636f6d2f646f636b65722f6e6f746172792f303966383137313730383066353332373665363838316563653537636262626639316238653261372f646f63732f696d616765732f736572766963652d6172636869746563747572652e737667\">Source<\/a>)<\/figcaption><\/figure>\n<h5>Notary server<\/h5>\n<p>As already mentioned, the Notary server stores and serves metadata for the Notary clients. When a client asks the Notary server for metadata, this component makes sure that the latest versions of the metadata files are fetched from the TUF database and delivered to the client. Beyond that, the Notary server checks&nbsp;all metadata that is uploaded by clients for validity as well as valid signatures. As soon as&nbsp;new valid metadata is supplied by a client, the Notary server also generates new timestamp metadata, which gets signed by Notary signer.<\/p>\n<h5>Notary signer<\/h5>\n<p>The Notary signer is much like a &#8220;back-end&#8221;, storing the private timestamp key (and possibly the snapshot key) and waiting for the Notary server&#8217;s signing requests. The Notary server is the only component which&nbsp;directly connects to the signer. 
On the contrary, the Notary server directly serves clients and therefore acts more as a &#8220;front-end&#8221;.<\/p>\n<p>Designing Notary&#8217;s architecture like this comes with&nbsp;very important advantages. First, TUF metadata which is sent to clients is not mixed up with the TUF keys in a single database. Second, the TUF private keys don&#8217;t have to be stored on a&nbsp;vulnerable front-end which is directly exposed to clients.<\/p>\n<h4>Client-server-signer interaction<\/h4>\n<p>With Notary&#8217;s basic architecture in mind, we&#8217;ll now examine what&#8217;s going on between a client, Notary server and Notary signer as soon as a client starts interacting with Notary. Note that I&#8217;ll only briefly describe the actions which are performed by the parties involved. For a more detailed explanation please visit the <a href=\"https:\/\/github.com\/docker\/notary\/blob\/master\/docs\/service_architecture.md\">Notary documentation on Github<\/a>.<\/p>\n<ol>\n<li>We assume that a client has modified an arbitrary target file. He or she adds the new hash, file size and file name to <em>targets.json<\/em>, signs this file with the targets key and finally uploads&nbsp;it to the Notary server.<\/li>\n<li>Notary checks the uploaded version of <em>targets.json<\/em> for validity as well as possible conflicts with existing versions and verifies the signature.<\/li>\n<li>If the verification step has been successful, Notary server generates new <em>snapshot.json<\/em> and <em>timestamp.json <\/em>files. 
Afterwards, this&nbsp;new metadata is sent to the Notary signer.<\/li>\n<li>Notary signer fetches the snapshot and timestamp private keys from the signer DB, decrypts the keys (yes, they&#8217;re never stored anywhere without being encrypted!), signs the metadata received&nbsp;from the Notary server and returns it.<\/li>\n<li>The Notary server now holds new signed metadata which represents the new &#8220;truth&#8221; in terms of the state of the managed trusted collection of files.&nbsp;From now on, every Notary client is served the updated metadata from the TUF database for the concerned&nbsp;collection.<\/li>\n<li>Finally, Notary sends a notification to the client that uploading the new metadata has been successful.<\/li>\n<li>If any other client asks the Notary server for the latest metadata, it immediately returns the updated metadata files given that none of the metadata has expired. However, if <em>timestamp.json<\/em> has expired, Notary server again goes through the entire procedure of generating a new timestamp, having it signed by Notary signer and storing it in the TUF database before serving it to the client.<\/li>\n<\/ol>\n<p>As I already mentioned above, there&#8217;re a few steps that I&#8217;ve skipped here. For example, it makes perfect sense to provide authentication and authorization mechanisms for client connections. Again, I recommend the <a href=\"https:\/\/github.com\/docker\/notary\/blob\/master\/docs\/service_architecture.md\">Notary documentation on Github<\/a>&nbsp;for more information on these topics.<\/p>\n<p>&nbsp;<\/p>\n<h2>Docker Registry v2<\/h2>\n<p>We know that a Docker Registry is a central public or private repository to store and distribute our Docker images. But how does the Registry fit into what we&#8217;ve learned about Notary and Docker Content Trust so far? 
I&#8217;ll answer that question soon, but first of all, we&#8217;ll take a look at how the latest release of the Docker Registry (which is version 2) operates.<\/p>\n<h4>Registry v2 fundamentals<\/h4>\n<p>At its core, Docker Registry v2 is a so-called <em>content-addressable system<\/em>. What does that mean?<br \/>\nWith a content-addressable system, the location (or &#8220;address&#8221;) of an element it holds is&nbsp;determined by the element itself. More precisely, an element is used to compute a cryptographic hash, which in turn defines the address under which the element gets stored. In the case of Docker Registry, the elements we want to store and retrieve are images:<\/p>\n<p style=\"text-align: center;\"><em><strong>image address = hash(image bytes)<\/strong><\/em><\/p>\n<p style=\"text-align: left;\">As for the cryptographic hash function <em>hash(x)<\/em>, SHA-256&nbsp;is applied&nbsp;by the Docker Registry. So the correct equation is:<\/p>\n<p style=\"text-align: center;\"><em><strong>image address = sha256(image bytes)<\/strong><\/em><\/p>\n<p style=\"text-align: left;\">Now you might ask: What&#8217;s the point of preferring&nbsp;the cryptographic hash of an image over a randomly generated sequence of characters and numbers as the&nbsp;image storage address?<br \/>\nThe point with this is that all entries of a content-addressable system become self-verifiable:<\/p>\n<ol>\n<li style=\"text-align: left;\">We pull an arbitrary image, let&#8217;s say <em>ubuntu:latest<\/em>, from the registry by providing the corresponding hash value (<em>&#8220;pull by digest&#8221;<\/em>).<\/li>\n<li style=\"text-align: left;\">Docker Registry looks up the image by means of the hash that we just&nbsp;provided.<\/li>\n<li style=\"text-align: left;\">Because we know that the hash value we used for pulling actually is&nbsp;a cryptographic hash of the image we just pulled from Docker Registry and that SHA-256 has been applied as the hash function, we can take the image 
we&#8217;ve just downloaded and compute its&nbsp;digest ourselves.<\/li>\n<li style=\"text-align: left;\">If the result equals the hash&nbsp;we provided earlier to pull the image from Docker Registry, we can be sure about the integrity of our image.<\/li>\n<\/ol>\n<p>I admit that there&#8217;s a couple of things here which have been kind of oversimplified by my description. The first question&nbsp;is: Where does the hash we used for pulling&nbsp;<em>ubuntu:latest<\/em> in the example above come from?<br \/>\nOn top of that, I was not completely honest with you so far. It&#8217;s not correct that Docker Registry stores entire images as its elements.<\/p>\n<h4>Docker Registry and Image Manifests<\/h4>\n<p>Think about what would happen if each element&nbsp;within Docker Registry really was an entire image. In the first part of this series, we realized that a Docker image consists of 1-n layers, where each layer can be shared across an arbitrary number of images. Sharing layers would actually not be possible if each entry inside the Docker Registry contained a&nbsp;complete&nbsp;image, which would also&nbsp;waste lots of storage due to redundancy.<br \/>\nInstead, what the Registry gives us when we ask for <em>ubuntu:latest&nbsp;<\/em>is an <em>Image Manifest<\/em> file, which lists all the layers this image consists of along with their SHA-256 hash values. Once we have such an Image Manifest available, the rest is pretty simple:<\/p>\n<ol>\n<li>As described above, we verify the Image Manifest we just fetched by computing its SHA-256 hash and compare the result to the hash&nbsp;we provided to perform the <em>docker pull<\/em>.<\/li>\n<li>For the next step, Docker inspects the list of image layers (or rather their hash values) and starts doing a pull by digest for every single layer in the Manifest. 
For performance reasons, several of these operations can run in parallel.<\/li>\n<li>Afterwards, each layer is verified the same way as the original Manifest belonging to <em>ubuntu:latest<\/em>. The whole process might be recursive, since a Manifest may point to another Manifest as one of its layers. Diogo M\u00f3nica calls this &#8220;Turtles all the way down&#8221;, which is very accurate in my opinion.<\/li>\n<\/ol>\n<p>You see where this is going? What Docker Registry really does is store individual layers instead of complete images. A Docker Image is nothing but a composition of one or more image layers, which can be represented by means of Image Manifests within the Registry.<\/p>\n<p>&nbsp;<\/p>\n<h2>Docker Registry and Notary &#8211; The big picture<\/h2>\n<p>Summing up, Docker Registry v2&nbsp;enables its clients to verify that what they just pulled really matches the content they were addressing by computing the received image&#8217;s SHA-256 hash value and comparing it to the digest that was used for pulling. However, the reliability of this approach entirely depends on&nbsp;whether we truly have the correct hash for an image that we want to download.<\/p>\n<h4>Where Notary comes in<\/h4>\n<p>Why is this so important? Consider that a Docker user never specifies a digest directly when doing a docker pull. Instead, we use image names and tags like <em>ubuntu:latest, <\/em>since<em>&nbsp;<\/em>this is quite a bit handier than having to type weird hash values.&nbsp;As a consequence, what we need is a service that safely translates an image name into the correct hash value which points to the corresponding entry within Docker Registry. And luckily for us, we already have an&nbsp;adequate&nbsp;service available: Notary.<br \/>\nIndeed, Notary is the perfect solution here since it guarantees to always give us the latest version of a certain collection of files. 
In the same way, we can have&nbsp;Notary provide us with the hash that identifies the publisher of the desired image and also points to the exact version of the image that we want to download. Let&#8217;s go through two scenarios that&nbsp;clearly illustrate how the teamwork of Notary and Registry mitigates common threats affecting Docker images.<\/p>\n<h4>Protection against Image Forgery<\/h4>\n<p>Consider a situation where an attacker attains a privileged position in your network and enters the Docker Registry server. What he or she might&nbsp;do now is go into the Registry and&nbsp;tamper with some of the existing layers. As soon as a client comes in and asks for an image which refers&nbsp;to any tampered layer, the verification process fails right after download and prevents the client from running potentially malicious software (see figure 3). What happened?<\/p>\n<figure style=\"width: 981px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/i0.wp.com\/blog.docker.com\/wp-content\/uploads\/2015\/08\/dct2.png?w=981&amp;ssl=1\" width=\"981\" height=\"485\"><figcaption class=\"wp-caption-text\"><strong>Figure 3: Notary and Docker Registry protect clients from Image Forgery<\/strong> (https:\/\/i0.wp.com\/blog.docker.com\/wp-content\/uploads\/2015\/08\/dct2.png?w=981&amp;ssl=1)<\/figcaption><\/figure>\n<p>Imagine that we instructed Notary to give us the correct digest for <em>ubuntu:latest<\/em>. With that hash at hand, we go to Docker Registry and can be sure to actually download the desired image. The image&#8217;s TUF signatures are perfectly valid, there&#8217;s nothing wrong here. However, since one or more layers have been modified by an attacker, the hash we compute by applying the SHA-256 function to the content we just downloaded doesn&#8217;t match the digest we got from Notary. 
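<\/p>\n<p>Boiled down, this forgery check is nothing but a SHA-256 digest comparison. Here is a minimal shell sketch (the file names are made up for illustration) of what happens when the content has been tampered with after the trusted digest was recorded:<\/p>

```shell
# Sketch of the forgery check (illustration only): the digest obtained
# from Notary stays fixed, while the content served by a compromised
# Registry has been tampered with in the meantime.
workdir=$(mktemp -d)

printf 'original layer data' > "$workdir/layer"
trusted=$(sha256sum "$workdir/layer" | cut -d' ' -f1)  # digest from Notary

# An attacker modifies the layer inside the Registry.
printf 'malicious layer data' > "$workdir/layer"

# The client recomputes the digest of what it actually downloaded.
received=$(sha256sum "$workdir/layer" | cut -d' ' -f1)

if [ "$trusted" != "$received" ]; then
  echo "digest mismatch - image rejected"
fi
rm -rf "$workdir"
```

<p>Since finding a SHA-256 collision is practically infeasible, a single flipped bit in any layer is enough to trigger the mismatch.<\/p>\n<p>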
Because we can rely on Notary in terms of what the correct hash for <em>ubuntu:latest<\/em> is, we can infer&nbsp;that something must be wrong with the image.<\/p>\n<h4>Protection against Replay Attacks<\/h4>\n<p>In another situation, an attacker who&#8217;s in a privileged network position might not tamper with individual&nbsp;image layers but rather serve clients an old version of an image they want to fetch. For example, that&#8217;s the case when the image you long for&nbsp;is <em>ubuntu:latest, &nbsp;<\/em>but what the&nbsp;Man-in-the-Middle serves you is an&nbsp;outdated version of that image. Again, image verification fails after the download.<\/p>\n<figure style=\"width: 981px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/i2.wp.com\/blog.docker.com\/wp-content\/uploads\/2015\/08\/dct3.png?w=981&amp;ssl=1\" width=\"981\" height=\"481\"><figcaption class=\"wp-caption-text\"><strong>Figure 4: Notary and Docker Registry protect clients from Replay Attacks<\/strong> (https:\/\/i2.wp.com\/blog.docker.com\/wp-content\/uploads\/2015\/08\/dct3.png?w=981&amp;ssl=1)<\/figcaption><\/figure>\n<p>So what happens here is that Docker Content Trust prevents clients from running an old image by means of the TUF timestamp key. In this example, verifying the old image against&nbsp;<em>timestamp.json<\/em>&nbsp;reveals that its signature does not match the currently valid content, so verification fails&nbsp;with an error message.<\/p>\n<p>&nbsp;<\/p>\n<h2>How to get started with Docker Content Trust?<\/h2>\n<p>If you&#8217;re running at least Docker 1.8, enabling Docker Content Trust is very easy. 
All you have to do is set the corresponding environment variable:<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"Listing 1: How to enable Docker Content Trust\">export DOCKER_CONTENT_TRUST=1<\/pre>\n<p>That&#8217;s really all that has to be done. From now on, every single Docker command you&#8217;re executing will be secured by Content Trust, no matter if you&#8217;re doing a <em>docker pull<\/em>, <em>docker run<\/em> or anything else. This means there&#8217;s no need for users to learn any additional commands in order to start working with signed images. Instead, Content Trust operates transparently once it has been enabled.<br \/>\nYou may have noticed that so far Docker Content Trust must be enabled explicitly (&#8220;opt-in&#8221;). The reason for this is that Docker wants to collect as much feedback as possible before finally enabling&nbsp;it by&nbsp;default with a future release (&#8220;opt-out&#8221;).<\/p>\n<p>&nbsp;<\/p>\n<h2>Conclusion and further thoughts<\/h2>\n<p>I have to confess that understanding the details of Docker Content Trust is not so easy and took me quite some time to see through. The reason is that although TUF comes with a spec that is in fact quite easy to read, getting familiar with its complex structure including many different roles and keys turned out to be not so simple. But on the other hand, I think that the Docker team did a great job with their docs, focusing on the most important aspects of TUF without diving into its&nbsp;internals too deeply. 
In the end, there are many things I&#8217;ve learned about Content Trust, TUF and security in general.<br \/>\nWhat I like most about Docker Content Trust, aside from its great usability, is that every component (Notary signer\/server, Docker Registry) has exactly one responsibility, which facilitates getting an overview of what happens at a certain moment at a certain place within the system.<br \/>\nI want to end this with a question: Where is the journey going for TUF and Notary? Sure,&nbsp;they&#8217;re both very sophisticated solutions to the most common threats to software update systems. However, I think we can be sure that new threats will arise (the public CA model?). As a consequence, TUF and Notary constantly have to be considered work in progress, since they&#8217;ll have to keep up with these. Still, I think that all in all these frameworks&nbsp;provide an excellent starting point for making Docker Image distribution more secure.<\/p>\n<p>&nbsp;<\/p>\n<h2>Acknowledgement<\/h2>\n<p>I want to thank Nathan McCauley (<a class=\"ProfileHeaderCard-screennameLink u-linkComplex js-nav\" href=\"https:\/\/twitter.com\/nathanmccauley\">@<span class=\"u-linkComplex-target\">nathanmccauley<\/span><\/a>), Director of Security at Docker, who&nbsp;kindly agreed to answer some questions about what I didn&#8217;t get in the first place. Beyond that, this blog post profited greatly from Docker Security Lead Diogo M\u00f3nica&#8217;s&nbsp;(<a class=\"ProfileHeaderCard-screennameLink u-linkComplex js-nav\" href=\"https:\/\/twitter.com\/diogomonica\">@<span class=\"u-linkComplex-target\">diogomonica<\/span><\/a>) talk covering&nbsp;Notary&nbsp;and&nbsp;from&nbsp;lots of valuable resources provided by Docker and TUF.<\/p>\n<p>&nbsp;<\/p>\n<h2>Sources<\/h2>\n<h4>Web<\/h4>\n<ul>\n<li>Cappos, Justin, and Kuppusamy, Trishank Karthik. 2014. <em>The Update Framework Specification<\/em>. Last modified August 25, 2016. 
<a href=\"https:\/\/github.com\/theupdateframework\/tuf\/blob\/develop\/docs\/tuf-spec.txt\">&nbsp;https:\/\/github.com\/theupdateframework\/tuf\/blob\/develop\/docs\/tuf-spec.txt<\/a><\/li>\n<li>Day, Stephen. 2015. <em>A New Model for Image Distribution<\/em> (DockerCon SF 2015). Published by user &#8220;Docker, Inc.&#8221; June 29, 2015.&nbsp;<a href=\"http:\/\/de.slideshare.net\/Docker\/docker-registry-v2\">http:\/\/de.slideshare.net\/Docker\/docker-registry-v2<\/a><\/li>\n<li>Docker Inc.&nbsp;2016.&nbsp;<em>Getting started with Docker<\/em>&nbsp;<em>Notary.&nbsp;<\/em>Last modified Juni 12, 2016.&nbsp;<a href=\"https:\/\/github.com\/docker\/notary\/blob\/master\/docs\/getting_started.md\">https:\/\/github.com\/docker\/notary\/blob\/master\/docs\/getting_started.md<\/a><\/li>\n<li>Docker Inc. 2016. <em>Image Manifest Version 2, Schema 2<\/em>. Accessed September 11, 2016.&nbsp;<a href=\"https:\/\/docs.docker.com\/registry\/spec\/manifest-v2-2\/\">https:\/\/docs.docker.com\/registry\/spec\/manifest-v2-2\/<\/a><\/li>\n<li>Docker Inc. 2016. <em>Understand the Notary service architecture<\/em>. Last modified August 2, 2016.&nbsp;<a href=\"https:\/\/github.com\/docker\/notary\/blob\/master\/docs\/service_architecture.md\">https:\/\/github.com\/docker\/notary\/blob\/master\/docs\/service_architecture.md<\/a><\/li>\n<li>M\u00f3nica, Diogo. 2015. <em>Introducing Docker Content Trust. <\/em>Accessed August 28, 2016.&nbsp;<a href=\"https:\/\/docs.docker.com\/engine\/security\/trust\/content_trust\/\">https:\/\/docs.docker.com\/engine\/security\/trust\/content_trust\/<\/a><\/li>\n<li>Wikipedia, The Free Encyclopedia. 2004. <em>GNU Privacy Guard<\/em>. Last modified August 15, 2016.&nbsp;<a href=\"https:\/\/en.wikipedia.org\/wiki\/GNU_Privacy_Guard\">https:\/\/en.wikipedia.org\/wiki\/GNU_Privacy_Guard<\/a><\/li>\n<\/ul>\n<h4>Videos<\/h4>\n<ul>\n<li>McCauley, Nathan. 2015. <em>Understanding Docker Security.&nbsp;<\/em>YouTube video. 48:03. Posted by &#8220;Docker&#8221;. 
December 19, 2015.&nbsp;<a href=\"https:\/\/www.youtube.com\/watch?v=w519CClzEuc\">https:\/\/www.youtube.com\/watch?v=w519CClzEuc<\/a><\/li>\n<li>M\u00f3nica, Diogo. 2015. <em>A Docker image walks into a Notary &#8211; Diogo M\u00f3nica.&nbsp;<\/em>YouTube video. 26:27. Posted by &#8220;ContainerCamp&#8221;. September 29, 2015.&nbsp;<a href=\"https:\/\/www.youtube.com\/watch?v=JvjdfQC8jxM\">https:\/\/www.youtube.com\/watch?v=JvjdfQC8jxM<\/a><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This third and last part of this series intends to give an overview of Docker Content Trust, which in fact combines different frameworks and tools, namely Notary and Docker Registry v2,  into a rich and powerful feature set making Docker images more secure.<\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[26,651,2],"tags":[],"ppma_author":[694],"class_list":["post-1373","post","type-post","status-publish","format-standard","hentry","category-secure-systems","category-system-designs","category-system-engineering"],"aioseo_notices":[],"jetpack_featured_media_url":"","jetpack-related-posts":[{"id":1299,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/08\/16\/exploring-docker-security-part-2-container-flaws\/","url_meta":{"origin":1373,"position":0},"title":"Exploring Docker Security &#8211; Part 2: Container flaws","author":"Patrick Kleindienst","date":"16. August 2016","format":false,"excerpt":"Now that we've understood the basics, this\u00a0second part will\u00a0cover the most relevant container threats, their possible impact as well as\u00a0existent countermeasures. Beyond that, a short overview\u00a0of the most important sources for container threats will be provided. I'm pretty sure you're not counting on most\u00a0of them. Want to know more? 
Container\u2026","rel":"","context":"In &quot;Secure Systems&quot;","block_context":{"text":"Secure Systems","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/system-designs\/secure-systems\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/article-1301858-0ABD7881000005DC-365_964x543.jpg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/article-1301858-0ABD7881000005DC-365_964x543.jpg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/article-1301858-0ABD7881000005DC-365_964x543.jpg?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/article-1301858-0ABD7881000005DC-365_964x543.jpg?resize=700%2C400&ssl=1 2x"},"classes":[]},{"id":1924,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/02\/28\/microservices-legolizing-software-development-4\/","url_meta":{"origin":1373,"position":1},"title":"Microservices \u2013 Legolizing Software Development IV","author":"Calieston Varatharajah, Christof Kost, Korbinian Kuhn, Marc Schelling, Steffen Mauser","date":"28. February 2017","format":false,"excerpt":"An automated development environment will save you. 
We explain how we set up Jenkins, Docker and Git to work seamlessly together.","rel":"","context":"In &quot;System Designs&quot;","block_context":{"text":"System Designs","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/system-designs\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/draw_io_docker_small-1024x439.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/draw_io_docker_small-1024x439.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/draw_io_docker_small-1024x439.png?resize=525%2C300&ssl=1 1.5x"},"classes":[]},{"id":282,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/03\/10\/docker-running-on-a-raspberry-pi-hypriot\/","url_meta":{"origin":1373,"position":2},"title":"Docker on a Raspberry Pi: Hypriot","author":"Jonathan Peter","date":"10. March 2016","format":false,"excerpt":"Raspberry Pis are small, cheap\u00a0and easy to come by. But what if you want to use Docker on them? Our goal was to run Docker on several Raspberry Pis and combine them to a cluster with Docker Swarm. To achieve this, we first\u00a0needed to get Docker running on the Pi.\u2026","rel":"","context":"In &quot;System Designs&quot;","block_context":{"text":"System Designs","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/system-designs\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":21064,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2021\/09\/11\/how-do-you-get-a-web-application-into-the-cloud\/","url_meta":{"origin":1373,"position":3},"title":"How do you get a web application into the cloud?","author":"af094","date":"11. 
September 2021","format":false,"excerpt":"by Dominik Ratzel (dr079) and Alischa Fritzsche (af094) For the lecture \"Software Development for Cloud Computing\", we set ourselves the goal of exploring new things and gaining experience. We focused on one topic: \"How do you get a web application into the cloud?\". In doing so, we took a closer\u2026","rel":"","context":"In &quot;Cloud Technologies&quot;","block_context":{"text":"Cloud Technologies","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/scalable-systems\/cloud-technologies\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/09\/availableRunners-150x118.png?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":7154,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2019\/08\/31\/setting-up-a-ci-cd-pipeline-in-gitlab\/","url_meta":{"origin":1373,"position":4},"title":"Setting up a CI\/CD pipeline in Gitlab","author":"nr037","date":"31. August 2019","format":false,"excerpt":"Introduction For all my university software projects, I use the HdM Gitlab instance for version control. But Gitlab offers much more such as easy and good ways to operate a pipeline. 
In this article, I will show how we can use the CI\/CD functionality in a university project to perform\u2026","rel":"","context":"In &quot;Cloud Technologies&quot;","block_context":{"text":"Cloud Technologies","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/scalable-systems\/cloud-technologies\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/08\/Screenshot-2019-08-26-at-09.53.13.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/08\/Screenshot-2019-08-26-at-09.53.13.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/08\/Screenshot-2019-08-26-at-09.53.13.png?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/08\/Screenshot-2019-08-26-at-09.53.13.png?resize=700%2C400&ssl=1 2x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/08\/Screenshot-2019-08-26-at-09.53.13.png?resize=1050%2C600&ssl=1 3x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/08\/Screenshot-2019-08-26-at-09.53.13.png?resize=1400%2C800&ssl=1 4x"},"classes":[]},{"id":1060,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/08\/06\/exploring-docker-security-part-1-the-whales-anatomy\/","url_meta":{"origin":1373,"position":5},"title":"Exploring Docker Security &#8211; Part 1: The whale&#8217;s anatomy","author":"Patrick Kleindienst","date":"6. August 2016","format":false,"excerpt":"When it comes to Docker, most of us\u00a0immediately start thinking of current trends like Microservices, DevOps, fast deployment, or scalability. 
Without a doubt, Docker seems to hit the road towards establishing itself\u00a0as\u00a0the\u00a0de-facto standard for lightweight application containers, shipping not only with lots of features and tools, but also great usability.\u2026","rel":"","context":"In &quot;Secure Systems&quot;","block_context":{"text":"Secure Systems","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/system-designs\/secure-systems\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/ballena-de-alas-largas-240873.jpg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/ballena-de-alas-largas-240873.jpg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/ballena-de-alas-largas-240873.jpg?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/ballena-de-alas-largas-240873.jpg?resize=700%2C400&ssl=1 2x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/ballena-de-alas-largas-240873.jpg?resize=1050%2C600&ssl=1 3x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/ballena-de-alas-largas-240873.jpg?resize=1400%2C800&ssl=1 4x"},"classes":[]}],"jetpack_sharing_enabled":true,"authors":[{"term_id":694,"user_id":4,"is_guest":0,"slug":"pk070","display_name":"Patrick 
Kleindienst","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/d0135b87f4c61a26c5a66f7a2ed6c5c65e24a27662ff67c06a36af82b702336f?s=96&d=mm&r=g","0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""}],"_links":{"self":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/1373","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/comments?post=1373"}],"version-history":[{"count":63,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/1373\/revisions"}],"predecessor-version":[{"id":25534,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/1373\/revisions\/25534"}],"wp:attachment":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/media?parent=1373"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/categories?post=1373"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/tags?post=1373"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/ppma_author?post=1373"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}