{"id":308,"date":"2016-03-10T10:19:56","date_gmt":"2016-03-10T09:19:56","guid":{"rendered":"https:\/\/blog.mi.hdm-stuttgart.de\/?p=308"},"modified":"2023-08-06T21:55:16","modified_gmt":"2023-08-06T19:55:16","slug":"more-is-always-better-building-a-cluster-with-pies","status":"publish","type":"post","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/03\/10\/more-is-always-better-building-a-cluster-with-pies\/","title":{"rendered":"More is always better: building a cluster with Pies"},"content":{"rendered":"<p><a href=\"https:\/\/upload.wikimedia.org\/wikipedia\/commons\/3\/3d\/Raspberry_PI.jpeg\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/upload.wikimedia.org\/wikipedia\/commons\/3\/3d\/Raspberry_PI.jpeg\" alt=\"Raspberry Pi 2\" width=\"4224\" height=\"2394\"><\/a><\/p>\n<p>So you have written the uber-pro-web-application with a bazillion active users. But your requests start to get out of hand and the <a href=\"https:\/\/www.raspberrypi.org\/\" target=\"_blank\" rel=\"noopener\">Raspberry Pi<\/a> under your desk can&#8217;t handle all the pressure on its own. Finally,&nbsp;the time for rapid expansion has come!<\/p>\n<p>If you have already containerized your application, the step towards clustering your software isn&#8217;t that hard. 
In this post, we want to shed some light on management tools you can use to handle a cluster of Docker nodes.<\/p>\n<p><!--more--><\/p>\n<p>First let&#8217;s get a short overview of what we really need in order to come up with a resilient and fault-tolerant cluster:<\/p>\n<ul>\n<li>We want to be able to quickly add additional Docker hosts when traffic is high<\/li>\n<li>These hosts should run on all kinds of machines, from physical boxes to virtual machines, or even cloud platforms like <a href=\"https:\/\/aws.amazon.com\/de\/\" target=\"_blank\" rel=\"noopener\">AWS<\/a>.<\/li>\n<li>If our traffic decreases (which is very unlikely), we want to remove idle hosts<\/li>\n<li>And of course, we don&#8217;t want to get our hands dirty. So everything should be automated as much as possible.<\/li>\n<\/ul>\n<p>So let&#8217;s start. First you should buy some&nbsp;Raspberry Pies, like we did (we actually bought seven of them). Then, we must find a way to set up all of these machines with as little manual effort as possible. This is where Docker Machine comes into play.<\/p>\n<p style=\"padding-left: 30px;\"><span style=\"color: #808080;\"><em>Note: The following examples build upon each other. So if you want to set up your hosts, read all the&nbsp;sections but hold off on executing the commands until you&#8217;ve read the whole story \ud83d\ude09<\/em><\/span><\/p>\n<h1>Docker Machine<\/h1>\n<p><a href=\"https:\/\/docs.docker.com\/machine\/overview\/\" target=\"_blank\" rel=\"noopener\">Docker Machine<\/a> allows you to install and manage the Docker engine on remote hosts. Perfect for managing several cluster nodes from your PC.<\/p>\n<p>An important component of Docker Machine is the set of <a href=\"https:\/\/docs.docker.com\/machine\/drivers\/os-base\/\" target=\"_blank\" rel=\"noopener\">machine drivers<\/a>. 
They enable us to pass host-specific information to Docker Machine, such as the IP&nbsp;of the machine, or login credentials.<\/p>\n<p>If, for example, we want to create a Docker node in the Amazon cloud, we launch this command:<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">$ docker-machine create --driver amazonec2 --amazonec2-access-key AKI******* --amazonec2-secret-key ******* aws-testmachine<\/pre>\n<p>Docker Machine creates a node called &#8220;aws-testmachine&#8221; using your Amazon account.<\/p>\n<p>If you now want to run a container with your application on this host, it could look a bit like this. First you get the environment from your new cloud machine to your local machine, then you run a container with your application on the remote host.<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\"># get the remote environment and load it into the current shell\n$ docker-machine env aws-testmachine\n$ eval \"$(docker-machine env aws-testmachine)\"\n\n# run a container of uber-pro-application named \"testapplication\"\n$ docker run -d -p 80:80 --name testapplication your\/uber-pro-application<\/pre>\n<p>And there you have it! A Docker host, automatically installed on a remote server, running your application.<\/p>\n<p>To be honest, a real-world deployment would include a few more steps, but the idea stays the same. Docker also provides some <a href=\"https:\/\/docs.docker.com\/engine\/installation\/cloud\/\">very good examples<\/a> if you want to take a closer look at Docker Machine.<\/p>\n<p>Now we have transformed our Pies&nbsp;into Docker hosts and we can run containers on them. But still a lot of manual work is needed to get everything up and running. 
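<\/p>\n<p>By the way, you can get an overview of all the hosts managed by Docker Machine (and see which one your client currently points to) with a single command:<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\"># list all machines with their driver, state, URL and Docker version\n$ docker-machine ls<\/pre>\n<p>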
With Docker Swarm we can reduce the manual overhead drastically.<\/p>\n<h1>Docker Swarm<\/h1>\n<p><a href=\"https:\/\/docs.docker.com\/swarm\/overview\/\" target=\"_blank\" rel=\"noopener\">Docker Swarm<\/a> is basically a clustering solution for Docker. It allows us to view all of our Docker hosts as one virtual host. If you deploy a container in the Swarm cluster, it gets started either on a random host or on the one with the least work to do, depending on the scheduling strategy (a better description of the scheduling algorithm can be found <a href=\"https:\/\/docs.docker.com\/swarm\/scheduler\/strategy\/\" target=\"_blank\" rel=\"noopener\">here<\/a>).<\/p>\n<p>A Swarm is always managed by a Swarm master (or several for redundancy). One can be set up using this command:<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">$ docker-machine create -d hypriot --swarm --swarm-master --swarm-discovery token:\/\/1234 --hypriot-ip-address &lt;SWARM_MASTER_IP&gt; &lt;SWARM_MASTER_HOSTNAME&gt;<\/pre>\n<p>This creates a Swarm master on the given host (swarm-master-ip\/hostname). The Swarm is later identified by its cluster ID (swarm-discovery token). It can either be entered directly or read from an environment variable.<br \/>\nBecause we used Hypriot on our Pies, we have to use the Hypriot driver with its specific IP parameter.<\/p>\n<p>With the Swarm master set up, we can now start adding our Pies&nbsp;to the cluster:<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">$ docker-machine create -d hypriot --swarm --swarm-discovery token:\/\/1234 --hypriot-ip-address &lt;SWARM_NODE_IP&gt; &lt;SWARM_NODE_HOSTNAME&gt;<\/pre>\n<p>This creates a new Docker host and joins it to the provided Swarm (swarm-discovery token).<\/p>\n<p>Given that everything worked fine, you now have an entirely functional Docker cluster. 
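<\/p>\n<p>A quick way to verify that all nodes have joined is to load the environment of the Swarm master with the additional --swarm flag and look at the cluster info (the hostname is a placeholder again):<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\"># point the Docker client at the whole Swarm instead of a single engine\n$ eval \"$(docker-machine env --swarm &lt;SWARM_MASTER_HOSTNAME&gt;)\"\n\n# the info output now lists every node of the cluster\n$ docker info<\/pre>\n<p>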
We can now start containers on any node&nbsp;and even do a little load balancing. But this still requires a lot of command line&nbsp;fiddling. Our next goal is to get a nice user interface up and running.<\/p>\n<h1>Shipyard<\/h1>\n<p>The one we have picked is Shipyard. It is one of many Docker management tools, but it has some cool features we came to like:<\/p>\n<ul>\n<li>Even though we like the good old command line, a web GUI seemed slightly more comfortable to us.<\/li>\n<li>It builds upon the Docker Swarm API. So it is 100% compatible with any default Docker Swarm installation. If you get tired of Shipyard, you can&nbsp;replace it with any other management tool without touching your cluster.<\/li>\n<li>Shipyard is completely&nbsp;composable. Each component runs in its own Docker container and some of them can even be replaced with alternatives. Perfect for our needs.<\/li>\n<li>Shipyard offers some basic user management and role-based authentication, so if you do have some spare processor capacity, you can allow&nbsp;someone else to use your cluster to do some important calculating stuff (Bitcoin mining seems interesting).<\/li>\n<\/ul>\n<p>For the setup of Shipyard, we&#8217;ve&nbsp;followed&nbsp;<a href=\"https:\/\/shipyard-project.com\/docs\/deploy\/manual\/\" target=\"_blank\" rel=\"noopener\">these instructions<\/a>. However, for your own deployment, we strongly recommend using the <a href=\"https:\/\/shipyard-project.com\/docs\/deploy\/automated\/\" target=\"_blank\" rel=\"noopener\">automated deployment script<\/a> for the cluster manager, because some of the instructions were somewhat misleading. We have printed all the fixed&nbsp;commands in this post.<\/p>\n<p>The automated script for the cluster nodes cannot be used because of processor architecture conflicts (the script always tries to install x86 software).<\/p>\n<p>And since we were too lazy to copy-paste the &#8220;docker run&#8221; command before each line, we will leave this task to you. 
Simply prepend the following to each of the next commands.<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">$ docker run -ti -d --restart=always ...<\/pre>\n<p>First of all, let&#8217;s set up the cluster&nbsp;manager.<\/p>\n<h2>Cluster-Manager<\/h2>\n<p>The job of&nbsp;our&nbsp;Shipyard cluster manager is to provide a Swarm master, service discovery for the entire cluster, and a management&nbsp;GUI with its own backend.<\/p>\n<p>For our software choices, we have stuck with the official instructions and pretty much copied all the commands. We are using <a href=\"https:\/\/coreos.com\/etcd\/\" target=\"_blank\" rel=\"noopener\">etcd<\/a>, a product of&nbsp;the CoreOS team, for service discovery and&nbsp;<a href=\"https:\/\/www.rethinkdb.com\/\" target=\"_blank\" rel=\"noopener\">RethinkDB<\/a> as the backend for Shipyard.<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\"># install the Shipyard backend\n$ --name shipyard-rethinkdb shipyard\/rethinkdb:latest\n\n# install the service discovery and expose the necessary ports\n$ --name shipyard-discovery -p 4001:4001 -p 7001:7001 microbox\/etcd:latest -name discovery\n\n# make our Docker engine available via the network\n$ --name shipyard-proxy -p 2375:2375 --hostname=$HOSTNAME -v \/var\/run\/docker.sock:\/var\/run\/docker.sock -e PORT=2375 shipyard\/docker-proxy:latest\n\n# configure this node to be the Swarm manager\n$ --name shipyard-swarm-manager swarm:latest manage --host tcp:\/\/0.0.0.0:3375 etcd:\/\/&lt;CLUSTER_MASTER_IP&gt;:4001\n\n# install the Shipyard controller and link it to all its components\n$ --name shipyard-controller --link shipyard-rethinkdb:rethinkdb --link shipyard-swarm-manager:swarm -p 80:8080 shipyard\/shipyard:latest server -d tcp:\/\/swarm:3375<\/pre>\n<h2>Cluster-Node<\/h2>\n<p>The setup of the cluster nodes was a bit 
tricky.&nbsp;So we will provide you with some solutions for the pitfalls we ran into:<\/p>\n<ul>\n<li>If you haven&#8217;t configured the Docker engine to be reachable via the network (as was the case for us), you also have to run a&nbsp;Docker proxy on each node (it took us three hours to figure that out, even though it was pretty obvious). Make sure to use an <a href=\"https:\/\/hub.docker.com\/r\/janeczku\/docker-proxy-armv7\/\" target=\"_blank\" rel=\"noopener\">ARM-compatible image<\/a> for the proxy.<\/li>\n<li>The naming of the different IP addresses in the official instructions was very vague, so here we tried to make it a little more obvious for you. We extracted&nbsp;pretty much all of this information from the deployment scripts.&nbsp;We used the <a href=\"https:\/\/hub.docker.com\/r\/hypriot\/rpi-swarm\/\" target=\"_blank\" rel=\"noopener\">Hypriot Swarm image<\/a> for compatibility reasons.<\/li>\n<\/ul>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\"># make our Docker engine available via the network\n$ --name shipyard-proxy -p 2375:2375 --hostname=$HOSTNAME -v \/var\/run\/docker.sock:\/var\/run\/docker.sock -e PORT=2375 janeczku\/docker-proxy-armv7:latest\n\n# install the Swarm agent and connect it to the service discovery\n$ --name shipyard-swarm-agent hypriot\/rpi-swarm:latest join --addr &lt;CLUSTER_NODE_IP&gt;:2375 etcd:\/\/&lt;CLUSTER_MASTER_IP&gt;:4001<\/pre>\n<p>And that&#8217;s it! If you execute these two commands on each node, you should get a nice cluster with the combined computing power of possibly hundreds of Pies. We tried it with six and it worked brilliantly.<\/p>\n<p>If you apply these examples to your infrastructure, nothing should stand between you and world domination now. 
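<\/p>\n<p>By the way, since the same two commands have to be executed on every single node, a small shell loop can do the job for you. The node names in this sketch are just examples from our setup, so adapt them to your own hosts:<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\"># example node names, replace them with the hostnames of your Pies\nfor node in pi1 pi2 pi3 pi4 pi5 pi6; do\n  # point the Docker client at the current node\n  eval \"$(docker-machine env $node)\"\n\n  docker run -ti -d --restart=always --name shipyard-proxy -p 2375:2375 --hostname=$HOSTNAME -v \/var\/run\/docker.sock:\/var\/run\/docker.sock -e PORT=2375 janeczku\/docker-proxy-armv7:latest\n\n  docker run -ti -d --restart=always --name shipyard-swarm-agent hypriot\/rpi-swarm:latest join --addr $(docker-machine ip $node):2375 etcd:\/\/&lt;CLUSTER_MASTER_IP&gt;:4001\ndone<\/pre>\n<p>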
Either way, leave&nbsp;a comment down below with your experiences.<\/p>\n<h2>Further Reading<\/h2>\n<ul>\n<li><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/03\/10\/intel-nuc-and-the-quest-for-the-holy-boot-target\/#more-476\">Part 2: Setting up Debian on an Intel NUC<\/a><\/li>\n<li><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/03\/10\/docker-running-on-a-raspberry-pi-hypriot\/\">Part 3: Bringing Docker to ARM with Hypriot<\/a><\/li>\n<\/ul>\n<h2>Image Sources<\/h2>\n<ul>\n<li><a href=\"https:\/\/commons.wikimedia.org\/wiki\/File:Raspberry_PI.jpeg\" target=\"_blank\" rel=\"noopener\">https:\/\/commons.wikimedia.org\/wiki\/File:Raspberry_PI.jpeg<\/a>, Author: Onepiece84<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>So you have written the uber-pro-web-application with a bazillion active users. But your requests start to get out of hand and the Raspberry Pi under your desk can&#8217;t handle all the pressure on its own. Finally,&nbsp;the time for rapid expansion has come! 
If you have already containerized your application, the step towards clustering your [&hellip;]<\/p>\n","protected":false},"author":5,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[650,651,2,223],"tags":[3,16,15,14],"ppma_author":[681],"class_list":["post-308","post","type-post","status-publish","format-standard","hentry","category-scalable-systems","category-system-designs","category-system-engineering","category-ultra-large-scale-systems","tag-docker","tag-docker-machine","tag-docker-swarm","tag-raspberry-pi"],"aioseo_notices":[],"jetpack_featured_media_url":"","jetpack-related-posts":[{"id":282,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/03\/10\/docker-running-on-a-raspberry-pi-hypriot\/","url_meta":{"origin":308,"position":0},"title":"Docker on a Raspberry Pi: Hypriot","author":"Jonathan Peter","date":"10. March 2016","format":false,"excerpt":"Raspberry Pis are small, cheap\u00a0and easy to come by. But what if you want to use Docker on them? Our goal was to run Docker on several Raspberry Pis and combine them to a cluster with Docker Swarm. To achieve this, we first\u00a0needed to get Docker running on the Pi.\u2026","rel":"","context":"In &quot;System Designs&quot;","block_context":{"text":"System Designs","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/system-designs\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":5175,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2019\/02\/24\/benefiting-kubernetes-part-2-deploy-with-kubectl\/","url_meta":{"origin":308,"position":1},"title":"Migrating to Kubernetes Part 2 &#8211; Deploy with kubectl","author":"Can Kattwinkel","date":"24. 
February 2019","format":false,"excerpt":"Written by: Pirmin Gersbacher, Can Kattwinkel, Mario Sallat Migrating from Bare Metal to Kubernetes The interest in software containers is a relatively new trend in the developers world. Classic VMs have not lost their right to exist within a world full of monoliths yet, but the trend is clearly towards\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/pexels-photo-379964.jpeg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/pexels-photo-379964.jpeg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/pexels-photo-379964.jpeg?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/pexels-photo-379964.jpeg?resize=700%2C400&ssl=1 2x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/pexels-photo-379964.jpeg?resize=1050%2C600&ssl=1 3x"},"classes":[]},{"id":6652,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2019\/07\/24\/how-to-create-a-k8s-cluster-with-custom-nodes-in-rancher\/","url_meta":{"origin":308,"position":2},"title":"How to create a K8s cluster with custom nodes in Rancher","author":"Sarah Schwab","date":"24. July 2019","format":false,"excerpt":"Don't you find it annoying not to be able to manage all your Kubernetes clusters at a glance? 
Ranger 2.0 offers an ideal solution.\u00a0 The following article is less a scientific post than a how-to guide to creating a new Kubernetes cluster with custom nodes in Ranger 2.0.\u00a0 But before\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/07\/cluster-1.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/07\/cluster-1.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/07\/cluster-1.png?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/07\/cluster-1.png?resize=700%2C400&ssl=1 2x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/07\/cluster-1.png?resize=1050%2C600&ssl=1 3x"},"classes":[]},{"id":9655,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2020\/02\/29\/image-editor-on-kubernetes-with-kompose-minikube-k3s-k3sup-and-helm-part-2\/","url_meta":{"origin":308,"position":3},"title":"Kubernetes: from Zero to Hero with Kompose, Minikube, k3sup and Helm \u2014 Part 2: Hands-On","author":"Leon Klingele","date":"29. February 2020","format":false,"excerpt":"This is part two of our series on how we designed and implemented a scalable, highly-available and fault-tolerant microservice-based Image Editor. 
This part depicts how we went from a basic Docker Compose setup to running our application on our own \u00bbbare-metal\u00ab Kubernetes cluster.","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"\/wp-content\/uploads\/2020\/02\/DDD_dependencies-1024x119.png","width":350,"height":200,"srcset":"\/wp-content\/uploads\/2020\/02\/DDD_dependencies-1024x119.png 1x, \/wp-content\/uploads\/2020\/02\/DDD_dependencies-1024x119.png 1.5x, \/wp-content\/uploads\/2020\/02\/DDD_dependencies-1024x119.png 2x"},"classes":[]},{"id":2859,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/08\/31\/iot-with-the-raspberry-pi-final-application-part-3\/","url_meta":{"origin":308,"position":4},"title":"IoT with the Raspberry Pi \u2013 Final application \u2013 Part 3","author":"mr143@hdm-stuttgart.de","date":"31. August 2017","format":false,"excerpt":"In our final application, we have put together a solution consisting of four different modules. First, we have again the Raspberry Pi which raises and sends the sensor data using the already presented Python script. 
We changed the transfer protocol in the final application to MQTT, which gives us more\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/mqtt-1024x465.jpg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/mqtt-1024x465.jpg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/mqtt-1024x465.jpg?resize=525%2C300&ssl=1 1.5x"},"classes":[]},{"id":28117,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2026\/02\/27\/how-to-develop-notification-system-for-crypto-stocks\/","url_meta":{"origin":308,"position":5},"title":"How to Develop a Notification System for Crypto Stocks for Telegram and Discord","author":"Julia Bai","date":"27. February 2026","format":false,"excerpt":"This blog post was written for the lecture \"System Engineering & Management\" (143101a) by Julia Bai, Frederik Runge and Dominik Seitz. Introduction The cryptocurrency market never sleeps. While traditional stock exchanges close, trading in digital assets occurs 24\/7, characterized by extreme volatility where minutes decide between profit and loss. 
A\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/02\/Shift-Left20Defect20Detection20and20Remediation_5.gif?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/02\/Shift-Left20Defect20Detection20and20Remediation_5.gif?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/02\/Shift-Left20Defect20Detection20and20Remediation_5.gif?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/02\/Shift-Left20Defect20Detection20and20Remediation_5.gif?resize=700%2C400&ssl=1 2x"},"classes":[]}],"jetpack_sharing_enabled":true,"authors":[{"term_id":681,"user_id":5,"is_guest":0,"slug":"bb074","display_name":"Benjamin 
Binder","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/b39750be005f19ce71d3af93115f9d5f02d18769c36bfa750ca4de423b20d981?s=96&d=mm&r=g","0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""}],"_links":{"self":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/308","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/comments?post=308"}],"version-history":[{"count":48,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/308\/revisions"}],"predecessor-version":[{"id":25555,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/308\/revisions\/25555"}],"wp:attachment":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/media?parent=308"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/categories?post=308"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/tags?post=308"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/ppma_author?post=308"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}