{"id":1924,"date":"2017-02-28T20:17:14","date_gmt":"2017-02-28T19:17:14","guid":{"rendered":"https:\/\/blog.mi.hdm-stuttgart.de\/?p=1924"},"modified":"2023-06-07T15:25:44","modified_gmt":"2023-06-07T13:25:44","slug":"microservices-legolizing-software-development-4","status":"publish","type":"post","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/02\/28\/microservices-legolizing-software-development-4\/","title":{"rendered":"Microservices \u2013 Legolizing Software Development IV"},"content":{"rendered":"<p style=\"text-align: justify;\">Welcome to part four\u00a0of our microservices series. If you\u2019ve missed a previous post you can read it here:<\/p>\n<p style=\"text-align: justify;\">I)<a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/02\/28\/microservices-legolizing-software-development-1\/\"> Architecture<br \/>\n<\/a>II) <a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/02\/28\/microservices-legolizing-software-development-2\/\">Caching<br \/>\n<\/a>III) <a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/02\/28\/microservices-legolizing-software-development-3\/\">Security<br \/>\n<\/a>IV) <a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/02\/28\/microservices-legolizing-software-development-4\/\">Continuous Integration<br \/>\n<\/a>V) <a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/02\/28\/microservices-legolizing-software-development-5\/\">Lessons Learned<\/a><\/p>\n<h1 style=\"text-align: justify;\">Continuous Integration<\/h1>\n<h2 style=\"text-align: justify;\">Introduction<\/h2>\n<p style=\"text-align: justify;\">In our fourth\u00a0part of <em>Microservices \u2013 Legolizing Software Development<\/em>\u00a0we will focus on our Continuous Integration environment and how we made the three major parts &#8211; Jenkins, Docker and Git &#8211; work seamlessly together.<\/p>\n<p><!--more--><\/p>\n<h2 style=\"text-align: justify;\">Jenkins<\/h2>\n<p style=\"text-align: 
justify;\">The very center of each Continuous Integration workflow is a CI server. In our case, we decided to use <a href=\"https:\/\/jenkins.io\/\">Jenkins<\/a> &#8211; a Java-based open-source automation server which provides a lot of plugins to support testing, building, deploying and any other kind of automation. But before we take a closer look at our actual Jenkins and CI configuration, we first need to look at two other technologies in more detail: Git and Docker.<\/p>\n<h2 style=\"text-align: justify;\">Git<\/h2>\n<p style=\"text-align: justify;\">As Martin Fowler suggests in his great and still relevant <a href=\"https:\/\/www.martinfowler.com\/articles\/continuousIntegration.html\">article<\/a>\u00a0about Continuous Integration, we need to <i>Maintain a Single Source Repository<\/i>. In our case, <a href=\"https:\/\/git-scm.com\/\">Git<\/a> is the source code management system of our choice. When it comes to our Git workflow we also stick to Fowler\u2019s suggestions: <i>Everyone Commits to the Mainline Every Day<\/i>. Fine, so everyone committing to mainline every day it is. But operating with so many independent microservices, the question arises: What exactly is our mainline? The answer to this question leads us to Git submodules.<\/p>\n<p style=\"text-align: justify;\">Since microservices are by definition self-contained services and there are hardly any dependencies between them, we decided to manage each service\u2019s source code in its own Git repository. This keeps the change history of each service clean and consistent. Furthermore, we keep all Docker and Jenkins configuration files concerning a microservice in its own repository, which allows us to keep track of configuration changes as well.<\/p>\n<p style=\"text-align: justify;\">So far, so good. But as a developer you may want to pull the source code of all services to one single location on your local machine. 
Ideally without memorizing the repositories\u2019 names or manually jumping into each directory to perform a pull. To achieve this goal, we defined all subrepositories &#8211; the repositories of the individual services &#8211; as Git submodules of one main repository. This results in two advantages:<\/p>\n<p style=\"text-align: justify;\">Firstly, the entire source code can be pulled with one single command:<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">git submodule update --recursive --remote<\/pre>\n<p style=\"text-align: justify;\">Secondly, the particular Git repositories of the individual services still remain independent and easily manageable.<\/p>\n<h2 style=\"text-align: justify;\">Docker<\/h2>\n<p style=\"text-align: justify;\">When we talk about microservices, topics like virtualization and containerization are essential. For containerization, we have decided to use <a href=\"https:\/\/www.docker.com\/\">Docker<\/a>, which has seen a lot of hype since its first release in 2013. Docker is open-source software\u00a0that makes it possible to package applications in containers. By using Docker we want to achieve high portability and avoid installing various software packages on our application server for different deployments. Furthermore, Docker allows us to version and reuse components and to scale the number of service instances according to the current load while simultaneously consuming fewer resources. Since many blogs and tutorials out there already show and discuss brilliantly what Docker is and how to install and set it up, we will leave out this part. 
Instead, we want to explain how we integrated Docker into our CI concept.<\/p>\n<h2 style=\"text-align: justify;\">Jenkins and Git working together<\/h2>\n<p style=\"text-align: justify;\">Now after having introduced the three major components of our CI environment, let\u2019s get down to the nitty-gritty: their interaction.<\/p>\n<p style=\"text-align: justify;\">Let\u2019s start with the interaction of Jenkins and Git. Sticking to Martin Fowler\u2019s suggestions about CI, we follow his recommendation that <i>Every Commit should build the Mainline on an Integration Machine<\/i>. To be able to do so, Jenkins comes with the very handy support of so-called <i>webhooks<\/i>. Webhooks can be regarded as non-standardized HTTP callbacks. Usually, they are triggered by an event. So, when an event occurs, the event source simply sends an HTTP request to the target\u2019s URI configured for the webhook. There, the event invokes some predefined behavior. In practical terms, this means that we tell Git to notify our Jenkins server via a webhook on each Git push event. Jenkins in turn is able to follow Martin Fowler\u2019s recommendation and initiate the corresponding build jobs on every single push to the repository.<\/p>\n<h2 style=\"text-align: justify;\">Jenkins in Docker and Docker outside of Docker<\/h2>\n<p style=\"text-align: justify;\">Now let\u2019s take a closer look at how Jenkins and Docker collaborate. As our goal is to containerize most of our components, it fits the mould to run Jenkins in a Docker container as well. So, Jenkins isn&#8217;t\u00a0conventionally installed on the host system but runs within a Docker container, which brings all the advantages of Docker such as portability, reuse, resource consumption, minimal overhead, scalability, version control and so on with it. But it gets even better! 
Our jenkins-docker instance is able to build and run\u00a0our microservices in Docker containers as well.<\/p>\n<p style=\"text-align: justify;\">Accessing Docker within a Docker container can be done by using one of these two approaches: the first is called Docker-in-Docker (DinD) and the second Docker-outside-of-Docker (DooD). Spoiler alert: we used the latter.<\/p>\n<p style=\"text-align: justify;\">Nevertheless, let\u2019s take a quick look at the Docker-in-Docker approach. As its name implies, this approach requires an additional Docker installation inside the jenkins-docker container itself. Furthermore, the Jenkins container would need to be run in <code class=\"\" data-line=\"\">--privileged<\/code>\u00a0mode for mostly unrestricted access to the host\u2019s resources. Even though installing Docker within a Docker container might sound logical at first, the approach causes some significant drawbacks, especially when it comes to Linux Security Modules and the nested usage of copy-on-write file systems. So even the official DinD developers themselves state that the approach is <a href=\"https:\/\/hub.docker.com\/_\/docker\/\">generally not recommended<\/a>\u00a0and limit its usage to only a few use cases such as the development of Docker itself. If you are interested in more details about DinD drawbacks in CI environments, we refer you to J\u00e9r\u00f4me Petazzoni\u2019s blog post <a href=\"https:\/\/jpetazzo.github.io\/2015\/09\/03\/do-not-use-docker-in-docker-for-ci\/\">Using Docker-in-Docker for your CI or testing environment? Think twice<\/a>.<\/p>\n<p style=\"text-align: justify;\">Since DinD is definitely not suitable for our use cases, we decided to use the Docker-outside-of-Docker approach. Even though it is not perfect either, it suits our needs best. 
For a first simple illustration of how DooD works, take a look at the following figure.<\/p>\n<figure id=\"attachment_1938\" aria-describedby=\"caption-attachment-1938\" style=\"width: 656px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/draw_io_docker_small.png\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"1938\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/02\/28\/microservices-legolizing-software-development-4\/draw_io_docker_small\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/draw_io_docker_small.png\" data-orig-size=\"4220,1810\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"Docker-outside-of-Docker I\" data-image-description=\"\" data-image-caption=\"&lt;p&gt;Docker-outside-of-Docker I&lt;\/p&gt;\n\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/draw_io_docker_small-1024x439.png\" class=\"wp-image-1938 size-large\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/draw_io_docker_small-1024x439.png\" width=\"656\" height=\"281\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/draw_io_docker_small-1024x439.png 1024w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/draw_io_docker_small-300x129.png 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/draw_io_docker_small-768x329.png 768w\" sizes=\"auto, (max-width: 656px) 100vw, 656px\" \/><\/a><figcaption id=\"caption-attachment-1938\" 
class=\"wp-caption-text\">Docker-outside-of-Docker I<\/figcaption><\/figure>\n<p style=\"text-align: justify;\">The basic idea of DooD is to access the host\u2019s Docker installation from within the Jenkins container. But how can this be done? Basically, to achieve this, the following two requirements must be met.<\/p>\n<p style=\"text-align: justify;\">Firstly, the docker socket (<code class=\"\" data-line=\"\">\/var\/run\/docker.sock<\/code>) of the host must be accessible from within the Jenkins Docker container. This can easily be done by mounting and mapping it into the container using the <code class=\"\" data-line=\"\">-v<\/code>\u00a0flag or the docker-compose yaml keyword <code class=\"\" data-line=\"\">volumes<\/code>. And while we are at it, let\u2019s mount the Docker binaries (<code class=\"\" data-line=\"\">\/usr\/bin\/docker<\/code>) right along with it. To do so, we just need to add the following lines to our Jenkins docker-compose.yml file:<\/p>\n<pre class=\"prettyprint lang-yaml\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">volumes:\n  - \/var\/run\/docker.sock:\/var\/run\/docker.sock\n  - \/usr\/bin\/docker:\/usr\/bin\/docker<\/pre>\n<p style=\"text-align: justify;\">That\u2019s it. Docker socket and binaries of the host are now addressable from within the Jenkins container.<\/p>\n<p style=\"text-align: justify;\">But what about the permissions? Well, this question leads us to requirement number two: In order to build docker images and run containers, the <em>jenkins<\/em> user (running the process in the container) must be granted the appropriate permissions. But in this case, we cannot just map these like we did with docker socket and binaries. In Docker there is no such thing as mapping users or groups from the docker host to docker containers or vice versa. Access from a container to a volume takes place with the user ID and group ID the running process was executed with. 
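<\/p>\n<p style=\"text-align: justify;\">For instance, the numeric group ID behind the host\u2019s <i>docker<\/i> group can be looked up like this (a minimal sketch; it assumes a <i>docker<\/i> group exists on the host):<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\"># prints the numeric GID of the docker group, e.g. 999\ngetent group docker | cut -d: -f3<\/pre>\n<p style=\"text-align: justify;\">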
User names or group names are completely left out of consideration. But what does this mean in practical terms? Well, in order to access the docker socket and binaries, our <em>jenkins<\/em> user must run under the exact same group ID as the host\u2019s <i>docker<\/i> group. We do so by adding the following lines to our Jenkins Dockerfile:<\/p>\n<pre class=\"prettyprint lang-text\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">USER root\nRUN groupadd -g 999 docker &amp;&amp; usermod -a -G docker jenkins\nUSER jenkins<\/pre>\n<p style=\"text-align: justify;\">In our case, the group ID of the host\u2019s <i>docker<\/i> group is 999. Note that it may be different on other systems. To make this solution more portable, the group ID could also be extracted dynamically by using a script. But for simplicity\u2019s sake, this part is left to your imagination. Alternatively, we could also have made the jenkins user a member of <i>sudoers<\/i>, but in this case all Docker commands would need to be prefixed with <code class=\"\" data-line=\"\">sudo<\/code>.<\/p>\n<p style=\"text-align: justify;\">Finally, in case you want to set up your own DooD Jenkins server quickly, this is what our docker configuration files look like:<\/p>\n<p style=\"text-align: justify;\">docker-compose.yml:<\/p>\n<pre class=\"prettyprint lang-yaml\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">jenkins:\n   restart: always\n   build: .\n   container_name: jenkins_dood\n   ports:\n       - 8080:8080\n       - 50000:50000\n   volumes:\n       - \/var\/jenkins_home:\/var\/jenkins_home\n       - \/var\/run\/docker.sock:\/var\/run\/docker.sock\n       - \/usr\/bin\/docker:\/usr\/bin\/docker<\/pre>\n<p style=\"text-align: justify;\">Dockerfile:<\/p>\n<pre class=\"prettyprint lang-text\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">FROM jenkins\nUSER root\n#TODO: Replace 
\u2018999\u2019 with your host\u2019s docker group ID\nRUN groupadd -g 999 docker &amp;&amp; usermod -a -G docker jenkins\nRUN apt-get update &amp;&amp; apt-get -q -y install python-pip &amp;&amp; yes | pip install docker-compose\nUSER jenkins<\/pre>\n<p style=\"text-align: justify;\">To build and run the container, simply execute:<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">docker-compose -f docker-compose.yml build\ndocker-compose -f docker-compose.yml up -d<\/pre>\n<h2 style=\"text-align: justify;\">Overview<\/h2>\n<p style=\"text-align: justify;\">So let\u2019s take a final look at what our CI environment looks like after having successfully set up and integrated our three major CI parts:<\/p>\n<figure id=\"attachment_1944\" aria-describedby=\"caption-attachment-1944\" style=\"width: 656px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/draw_io_docker_big.png\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"1944\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/02\/28\/microservices-legolizing-software-development-4\/draw_io_docker_big\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/draw_io_docker_big.png\" data-orig-size=\"4890,3120\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"Docker-outside-of-Docker II\" data-image-description=\"\" data-image-caption=\"&lt;p&gt;Docker-outside-of-Docker II&lt;\/p&gt;\n\" 
data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/draw_io_docker_big-1024x653.png\" class=\"size-large wp-image-1944\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/draw_io_docker_big-1024x653.png\" alt=\"\" width=\"656\" height=\"418\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/draw_io_docker_big-1024x653.png 1024w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/draw_io_docker_big-300x191.png 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/draw_io_docker_big-768x490.png 768w\" sizes=\"auto, (max-width: 656px) 100vw, 656px\" \/><\/a><figcaption id=\"caption-attachment-1944\" class=\"wp-caption-text\">Docker-outside-of-Docker II<\/figcaption><\/figure>\n<p style=\"text-align: justify;\">First of all, the Git repositories (organized as submodules of one main repository) comprise and keep track of the source code and all relevant configuration files. This includes the blueprints of each microservice\u2019s docker image, which are kept and maintained as Dockerfile and docker-compose.yml files, as well as the particular Jenkins job definitions described in Jenkinsfiles. As stated previously, the Jenkins automation server is the heart of the CI environment. It gets notified on each git push event via a\u00a0webhook. Thanks to the Docker-outside-of-Docker approach, Jenkins is able to access the host\u2019s docker socket and binaries, build docker images and run containers. It executes the corresponding Jenkins jobs and subsequently the docker containers can be started, stopped, deleted or updated. Finally, the Nginx reverse proxy &#8211; also running in a docker container &#8211; maps the containers\u2019 ports to the host\u2019s open ports 80 and 443. 
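<\/p>\n<p style=\"text-align: justify;\">For illustration, such a proxy rule in the Nginx configuration could look like the following sketch (service path and container port are made up for this example):<\/p>\n<pre class=\"prettyprint lang-text\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">server {\n    listen 80;\n    location \/my-service\/ {\n        # forward to the port published by the service container\n        proxy_pass http:\/\/127.0.0.1:8081\/;\n    }\n}<\/pre>\n<p style=\"text-align: justify;\">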
Et voil\u00e0, there they are: your containerized microservices accessible from the outside world.<\/p>\n<p style=\"text-align: justify;\">We hope this fourth\u00a0part of our blog post gave you an insight into how Jenkins, Docker and Git can be set up to work seamlessly together to help you configure, run and manage your legolized microservice architecture productively.<\/p>\n<p style=\"text-align: justify;\">In the last blog post we finish with a concluding review of the use of microservices in small projects and give an overview of our top stumbling blocks.<br \/>\n<a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/02\/28\/microservices-legolizing-software-development-5\">Continue with Part V &#8211; Lessons Learned<\/a><\/p>\n<hr \/>\n<p style=\"text-align: justify;\">Kost, Christof [<a href=\"mailto:ck154@hdm-stuttgart.de\">ck154@hdm-stuttgart.de<\/a>]<br \/>\nKuhn, Korbinian [<a href=\"mailto:kk129@hdm-stuttgart.de\">kk129@hdm-stuttgart.de<\/a>]<br \/>\nSchelling, Marc [<a href=\"mailto:ms467@hdm-stuttgart.de\">ms467@hdm-stuttgart.de<\/a>]<br \/>\nMauser, Steffen [<a href=\"mailto:sm182@hdm-stuttgart.de\">sm182@hdm-stuttgart.de<\/a>]<br \/>\nVaratharajah, Calieston\u00a0[<a href=\"mailto:cv015@hdm-stuttgart.de\">cv015@hdm-stuttgart.de<\/a>]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>An automated development environment will save you. 
We explain how we set up Jenkins, Docker and Git to work seamlessly together.<\/p>\n","protected":false},"author":192,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[651,2],"tags":[24,3,97,78,91],"ppma_author":[719],"class_list":["post-1924","post","type-post","status-publish","format-standard","hentry","category-system-designs","category-system-engineering","tag-ci","tag-docker","tag-git","tag-jenkins","tag-microservices"],"aioseo_notices":[],"jetpack_featured_media_url":"","jetpack-related-posts":[{"id":1915,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/02\/28\/microservices-legolizing-software-development-5\/","url_meta":{"origin":1924,"position":0},"title":"Microservices &#8211; Legolizing Software Development V","author":"Korbinian Kuhn, Steffen Mauser","date":"28. February 2017","format":false,"excerpt":"We finish with a concluding review about the use of microservices in small projects and give an overview about our top stumbling blocks.","rel":"","context":"In &quot;System Designs&quot;","block_context":{"text":"System Designs","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/system-designs\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":1967,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/02\/28\/microservices-legolizing-software-development-3\/","url_meta":{"origin":1924,"position":1},"title":"Microservices \u2013 Legolizing Software Development III","author":"Calieston Varatharajah, Christof Kost, Korbinian Kuhn, Marc Schelling, Steffen Mauser","date":"28. February 2017","format":false,"excerpt":"Security is a topic that always occurs with microservices. 
We\u2019ll present our solution for managing both, authentication and authorization at one single point.","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/auth_login_gesamt03.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/auth_login_gesamt03.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/auth_login_gesamt03.png?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/auth_login_gesamt03.png?resize=700%2C400&ssl=1 2x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/auth_login_gesamt03.png?resize=1050%2C600&ssl=1 3x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/auth_login_gesamt03.png?resize=1400%2C800&ssl=1 4x"},"classes":[]},{"id":1912,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/02\/28\/microservices-legolizing-software-development-2\/","url_meta":{"origin":1924,"position":2},"title":"Microservices &#8211; Legolizing Software Development II","author":"Korbinian Kuhn, Steffen Mauser","date":"28. 
February 2017","format":false,"excerpt":"Part two will take a closer look on how caching improves the heavy and frequent communication within our setup.","rel":"","context":"In &quot;System Designs&quot;","block_context":{"text":"System Designs","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/system-designs\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/Caching-01.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/Caching-01.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/Caching-01.png?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/Caching-01.png?resize=700%2C400&ssl=1 2x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/Caching-01.png?resize=1050%2C600&ssl=1 3x"},"classes":[]},{"id":1907,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/02\/28\/microservices-legolizing-software-development-1\/","url_meta":{"origin":1924,"position":3},"title":"Microservices &#8211; Legolizing Software Development I","author":"Korbinian Kuhn, Steffen Mauser","date":"28. 
February 2017","format":false,"excerpt":"In the first part, we present an example microservice structure, with multiple services, a foreign API interface and a reverse proxy that also allows load balancing.","rel":"","context":"In &quot;System Designs&quot;","block_context":{"text":"System Designs","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/system-designs\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/Architecture-01.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/Architecture-01.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/02\/Architecture-01.png?resize=525%2C300&ssl=1 1.5x"},"classes":[]},{"id":23961,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2023\/02\/10\/microservices-any-good\/","url_meta":{"origin":1924,"position":4},"title":"Microservices &#8211; any good?","author":"Kim Bastiaanse","date":"10. February 2023","format":false,"excerpt":"As software solutions continue to evolve and grow in size and complexity, the effort required to manage, maintain and update them increases. 
To address this issue, a modular and manageable approach to software development is required.\u00a0Microservices architecture provides a solution by breaking down applications into smaller, independent services that can\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/02\/Microservice.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/02\/Microservice.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/02\/Microservice.png?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/02\/Microservice.png?resize=700%2C400&ssl=1 2x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/02\/Microservice.png?resize=1050%2C600&ssl=1 3x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/02\/Microservice.png?resize=1400%2C800&ssl=1 4x"},"classes":[]},{"id":23067,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2022\/03\/15\/security-strategies-and-best-practices-for-microservices-architecture\/","url_meta":{"origin":1924,"position":5},"title":"Security Strategies and Best Practices for Microservices Architecture","author":"Larissa Schmauss","date":"15. March 2022","format":false,"excerpt":"Microservices architectures seem to be the new trend in the approach to application development. 
However, one should always keep in mind that microservices architectures are always closely associated with a specific environment:\u00a0Companies want to develop faster and faster, but resources are also becoming more limited, so they now want to\u2026","rel":"","context":"In &quot;Scalable Systems&quot;","block_context":{"text":"Scalable Systems","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/scalable-systems\/"},"img":{"alt_text":"","src":"https:\/\/lh6.googleusercontent.com\/LbFspPRY1BxRBdAVjQwWXeJ6UOoxl6JWsRYrxboF5ObXlNNgy3uZikcGkc3cgzI0mr_ZlbWPxvdp0FoJC1k-odh7mRc2lCPXaMSq8TudjfoZ7e5HKstaMHmLpH319jCym6vQRo1a","width":350,"height":200,"srcset":"https:\/\/lh6.googleusercontent.com\/LbFspPRY1BxRBdAVjQwWXeJ6UOoxl6JWsRYrxboF5ObXlNNgy3uZikcGkc3cgzI0mr_ZlbWPxvdp0FoJC1k-odh7mRc2lCPXaMSq8TudjfoZ7e5HKstaMHmLpH319jCym6vQRo1a 1x, https:\/\/lh6.googleusercontent.com\/LbFspPRY1BxRBdAVjQwWXeJ6UOoxl6JWsRYrxboF5ObXlNNgy3uZikcGkc3cgzI0mr_ZlbWPxvdp0FoJC1k-odh7mRc2lCPXaMSq8TudjfoZ7e5HKstaMHmLpH319jCym6vQRo1a 1.5x"},"classes":[]}],"jetpack_sharing_enabled":true,"authors":[{"term_id":719,"user_id":192,"is_guest":0,"slug":"sm182","display_name":"Calieston Varatharajah, Christof Kost, Korbinian Kuhn, Marc Schelling, Steffen 
Mauser","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/a66e03af75a6f3435a95485a0dff1f52d3dac9a448797b8354b4d2218852bd37?s=96&d=mm&r=g","0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""}],"_links":{"self":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/1924","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/users\/192"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/comments?post=1924"}],"version-history":[{"count":45,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/1924\/revisions"}],"predecessor-version":[{"id":2251,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/1924\/revisions\/2251"}],"wp:attachment":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/media?parent=1924"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/categories?post=1924"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/tags?post=1924"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/ppma_author?post=1924"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}