{"id":5313,"date":"2019-02-26T14:15:48","date_gmt":"2019-02-26T13:15:48","guid":{"rendered":"https:\/\/blog.mi.hdm-stuttgart.de\/?p=5313"},"modified":"2019-02-27T09:54:54","modified_gmt":"2019-02-27T08:54:54","slug":"experiences-from-breaking-down-a-monolith-3","status":"publish","type":"post","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2019\/02\/26\/experiences-from-breaking-down-a-monolith-3\/","title":{"rendered":"Experiences from breaking down a monolith (3)"},"content":{"rendered":"\n<p> Written by Verena Barth,  Marcel Heisler,  Florian Rupp, &amp; Tim Tenckhoff <br><\/p>\n\n\n\n<ul class=\"wp-block-gallery aligncenter columns-1 is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\"><li class=\"blocks-gallery-item\"><figure><img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"534\" data-attachment-id=\"5498\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2019\/02\/26\/experiences-from-breaking-down-a-monolith-3\/bild_eingefugt_am_2019-02-26__1_31_pm-3\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/Bild_eingefugt_am_2019-02-26__1_31_PM-2.png\" data-orig-size=\"800,534\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"Bild_eingefugt_am_2019-02-26__1_31_PM\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/Bild_eingefugt_am_2019-02-26__1_31_PM-2.png\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/Bild_eingefugt_am_2019-02-26__1_31_PM-2.png\" 
alt=\"\" data-id=\"5498\" data-link=\"https:\/\/blog.mi.hdm-stuttgart.de\/?attachment_id=5498\" class=\"wp-image-5498\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/Bild_eingefugt_am_2019-02-26__1_31_PM-2.png 800w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/Bild_eingefugt_am_2019-02-26__1_31_PM-2-300x200.png 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/Bild_eingefugt_am_2019-02-26__1_31_PM-2-768x513.png 768w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\" \/><\/figure><\/li><\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">DevOps<br><\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Code Sharing<\/h3>\n\n\n\n<p>Building multiple services kept in separate code repositories, we faced the problem of code duplication: a piece of code, for example a data model, is often needed in more than one service. As the services grow larger, simply copying it is no option. Copies make it really hard to maintain the code in a consistent and transparent way, not to mention the overhead of time required to do so. In this project we solved the issue by creating our own code library &#8211; a library with its own repository that does not directly build an executable service. But isn\u2019t it a lot of work to always load and update it in all the services? It is &#8211; as long as you are not familiar with scripting. This is where the build management tool Gradle is a big win: it lets you write your own tasks, such as packaging a Java code library as a Maven package and uploading it to a package cloud afterwards. Conveniently, the free package host provider <em>packagecloud.io<\/em> offers 150MB of storage at no cost. 
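Since services will later pull the library by version number (updating it just means incrementing that number, as described below), the bump step can be illustrated with a toy shell helper. The `bump_patch` name and the release flow are our own sketch, not part of Gradle or packagecloud:

```shell
# Hypothetical helper: increment the last component of a dotted
# version string, e.g. 1.0.3 -> 1.0.4. Consuming services would then
# only change this number in their Gradle dependency declaration.
bump_patch() {
  base=${1%.*}    # "1.0.3" -> "1.0"
  last=${1##*.}   # "1.0.3" -> "3"
  printf '%s.%d\n' "$base" $((last + 1))
}

bump_patch "1.0.3"   # prints 1.0.4
```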
Once the library was hosted online, the dependency could easily be loaded automatically by Gradle\u2019s dependency management.<\/p>\n\n\n\n<p>With this approach, development could focus on what it really should &#8211; writing code, not copying it! The team also thought more about designing the code flexibly, so that it could be reused in other services. Of course this meant some additional work, but the advantages outweigh it. When the library is updated, this is expressed by incrementing its version number; every service then only has to change the version number in its build file and gets the new code automatically.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">CI\/CD<\/h3>\n\n\n\n<p>To bring development and operations closer together we set up a CI\/CD pipeline. Because we wanted a quick solution that supports development as early as possible with automated builds, tests and deployments, we had to choose a tool very early on. The alternatives were GitLab, hosted by our university, and setting up Jenkins ourselves. We quickly created the following table of pros and cons and decided to use HdM\u2019s GitLab, mainly because it is already set up and already contains our code. 
<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"939\" height=\"387\" data-attachment-id=\"5329\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2019\/02\/26\/experiences-from-breaking-down-a-monolith-3\/cicompare1\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/ciCompare1.png\" data-orig-size=\"939,387\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"ciCompare1\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/ciCompare1.png\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/ciCompare1.png\" alt=\"\" class=\"wp-image-5329\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/ciCompare1.png 939w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/ciCompare1-300x124.png 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/ciCompare1-768x317.png 768w\" sizes=\"auto, (max-width: 939px) 100vw, 939px\" \/><\/figure>\n\n\n\n<p> Our first pipeline was created \u2018quick and dirty\u2019: its main purpose was just to build each project with Gradle (in the case of a Java project), to run its tests and to deploy it to our server. In order to improve the pipeline\u2019s performance we wanted to cache the Gradle dependencies, which turned out to be not that easy. 
Building the cache as described in the official GitLab docs did not work, and neither did the workaround of setting the GRADLE_USER_HOME variable to the directory of our project (which is mentioned very often, e.g. <a href=\"https:\/\/stackoverflow.com\/questions\/34162120\/gitlab-ci-gradle-dependency-cache\/36050711\">here<\/a> and <a href=\"https:\/\/gitlab.com\/gitlab-org\/gitlab-runner\/issues\/327\">here<\/a>). The cache seemed to be created but was deleted again before the next stage began. We ended up pushing the Gradle Wrapper to our repository as well and using it to build and test our application. It is actually recommended anyway to execute a build with the <a href=\"https:\/\/docs.gradle.org\/current\/userguide\/gradle_wrapper.html#sec:using_wrapper\">Wrapper <\/a>to ensure a reliable, controlled and standardized execution of the build. To make use of the Wrapper you need to make it executable (see the \u201cbefore_script\u201d command in the code below). Then you are able to build your project, just with slightly different commands, like \u201c.\/gradlew assemble\u201d instead of \u201cgradle build\u201d. <\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"\" data-line=\"\">image: openjdk:11-jdk-slim-sid\n\nstages:\n - build\n # [..]\n\nbefore_script:\n - chmod +x gradlew\n - apt-get update -qy\n\nbuild:\n stage: build\n script:\n    - .\/gradlew -g \/cache\/.gradle clean assemble\n\n# [..]<\/code><\/pre>\n\n\n\n<p>In the end we reduced the time needed from almost four to about two and a half minutes. <br><\/p>\n\n\n\n<p>Having this initial version in use, we spent some more time improving our pipeline. 
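As an aside on the Wrapper approach above: a build script can prefer the committed wrapper and fall back to a system-wide Gradle only when no wrapper is present. The fallback logic below is our own convention, not something GitLab or Gradle prescribes:

```shell
# Pick the committed Gradle Wrapper if it exists and is executable,
# otherwise fall back to a globally installed gradle binary.
gradle_cmd() {
  if [ -x ./gradlew ]; then
    echo "./gradlew"
  else
    echo "gradle"
  fi
}

# Usage in a CI job: "$(gradle_cmd)" assemble
```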
In doing so we found some more pros and cons of the tools we had compared before, and a third option to think about.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"1014\" height=\"713\" data-attachment-id=\"5334\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2019\/02\/26\/experiences-from-breaking-down-a-monolith-3\/cicompare2\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/ciCompare2.png\" data-orig-size=\"1014,713\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"ciCompare2\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/ciCompare2.png\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/ciCompare2.png\" alt=\"\" class=\"wp-image-5334\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/ciCompare2.png 1014w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/ciCompare2-300x211.png 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/ciCompare2-768x540.png 768w\" sizes=\"auto, (max-width: 1014px) 100vw, 1014px\" \/><\/figure>\n\n\n\n<p> The main drawbacks we found for our current solution were that HdM does not allow docker-in-docker (dind) for security reasons and that the GitLab container registry is disabled to save storage. 
On the other hand, we <a href=\"https:\/\/medium.freecodecamp.org\/how-to-setup-ci-on-gitlab-using-docker-66e1e04dcdc2\">read <\/a>that the docker integration in GitLab is very powerful. The added option GitLab.com could have solved both problems we had with HdM\u2019s GitLab, but we discovered it too late in the project: we were already busy solving the issues and didn\u2019t want to migrate all our repositories. Besides, company-imposed constraints like these can always occur, and we learned from working around them. <\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Our GitLab Runner<\/h4>\n\n\n\n<p>To solve our dind problem we needed a different GitLab Runner, because the shared runners provided by HdM don\u2019t allow docker-in-docker for security reasons. Trying to use it anyway makes the pipeline fail with logs containing something like this:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"\" data-line=\"\">docker:dind ...\nWaiting for services to be up and running...\n*** WARNING: Service runner-57fea070-project-1829-concurrent-0-docker-0 probably didn&#039;t start properly.\nHealth check error:\nservice &quot;runner-57fea070-project-1829-concurrent-0-docker-0-wait-for-service&quot; timeout\nHealth check container logs:\nService container logs:\n2018-11-29T12:38:05.473753192Z mount: permission denied (are you root?)\n2018-11-29T12:38:05.474003218Z Could not mount \/sys\/kernel\/security.\n2018-11-29T12:38:05.474017136Z AppArmor detection and --privileged mode might break.\n2018-11-29T12:38:05.475690384Z mount: permission denied (are you root?) \n*********<\/code><\/pre>\n\n\n\n<p>\nTo use our own runner there are several possibilities:\n\n<\/p>\n\n\n\n<ol class=\"wp-block-list\"><li> Install a runner on a server<\/li><li> Install runners locally<\/li><li> Integrate a Kubernetes cluster and install a runner there <\/li><\/ol>\n\n\n\n<p>Since we already have a server, the first option is the easiest and makes the most sense. There are tutorials you can follow straightforwardly. 
First <a href=\"https:\/\/docs.gitlab.com\/runner\/install\/linux-repository.html\">install the runner<\/a> and then <a href=\"https:\/\/docs.gitlab.com\/runner\/register\/index.html\">register <\/a>it for each GitLab repository that should be allowed to use it. The URL and token you need to specify during registration can be found in GitLab under Settings -&gt; CI\/CD -&gt; Runners -&gt; Set up a specific Runner manually. There is also help provided to <a href=\"https:\/\/docs.gitlab.com\/runner\/executors\/README.html\">choose the executor<\/a>, which needs to be specified on registration. <\/p>\n\n\n\n<p>We chose Docker as executor because it provides everything we need and is easy to configure. Now the runner can be started with \u201cgitlab-runner start\u201d. To be able to use docker-in-docker some more configuration is necessary, but all changes to the config file \u201c\/etc\/gitlab-runner\/config.toml\u201d should automatically be detected and applied by the runner. The file should be edited or modified using the \u201cgitlab-runner register\u201d command as described <a href=\"https:\/\/docs.gitlab.com\/ee\/ci\/docker\/using_docker_build.html#use-docker-in-docker-executor\">here<\/a>. For dind the setting privileged = true is important; that\u2019s why it already appeared in the logs above. Finally, Docker needs to be installed on the same machine as the runner. The installation is described <a href=\"https:\/\/docs.docker.com\/install\/linux\/docker-ce\/debian\/#install-docker-ce\">here<\/a>. We chose to install using the repository. If you don\u2019t know which command to choose in step 4 of \u201cSet up the repository\u201d, you can get the information with \u201cuname -a\u201d. We also had to replace \u201c$(lsb_release -cs)\u201d with \u201cstretch\u201d as mentioned in the note. To figure out the parent Debian distribution we used \u201clsb_release -a\u201d. 
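Putting the registration steps together, the command we ran looked roughly like the following dry-run sketch. URL and token are placeholders; the flags are the documented non-interactive options of `gitlab-runner register`:

```shell
# Build (but do not execute) the registration command for a runner
# using the Docker executor with privileged mode enabled for dind.
register_cmd() {
  url=$1; token=$2
  printf 'gitlab-runner register --non-interactive --url %s --registration-token %s --executor docker --docker-image docker:stable --docker-privileged\n' "$url" "$token"
}

register_cmd "https://gitlab.mi.hdm-stuttgart.de/" "REGISTRATION_TOKEN"
```

Printing the command first and running it by hand makes it easy to review the privileged flag before committing to it.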
<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Pipeline Setup<\/h4>\n\n\n\n<p>Now that we have solved our docker-in-docker problem, we can set up a CI pipeline that first builds our project using a suitable image and then builds a Docker image as defined in a corresponding Dockerfile. <br><\/p>\n\n\n\n<p>Each service has its own Dockerfile, depending on its needs. For the database service image, for example, we need to define several environment variables to establish the connection between the database and the message broker. You can see its Dockerfile below.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"\" data-line=\"\">FROM openjdk:8-jdk-slim\n\nRUN mkdir \/app\/\nCOPY build\/libs\/bahnalyse-database-service-1.0-SNAPSHOT.jar \/app\nWORKDIR \/app\n\nENV RABBIT_HOST 172.17.0.2\nENV RABBIT_PORT 5672\n\nENV INFLUXDB_HOST 172.17.0.5\nENV INFLUXDB_PORT 8086\n\nCMD java -jar bahnalyse-database-service-1.0-SNAPSHOT.jar<\/code><\/pre>\n\n\n\n<p>The frontend Dockerfile is split into two stages: the first stage builds the Angular app in an image that inherits from a node image, version 8.11.2, based on the Alpine distribution. For serving the application we use the nginx Alpine image and move the dist output of our first node image into the NGINX public folder. We also have to copy our nginx configuration file, in which we define e.g. the index file and the port to listen on, into the new image. This is what the final frontend Dockerfile looks like: <\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"\" data-line=\"\"># Stage 1 - compile Angular app\n\nFROM node:8.11.2-alpine as node\n\nWORKDIR \/usr\/src\/app\nCOPY package*.json .\/\nRUN npm install\nCOPY . 
.\nRUN npm run build\n\n# Stage 2 -  For serving the application using a web-server\n\nFROM nginx:1.13.12-alpine\n\nCOPY --from=node \/usr\/src\/app\/dist \/usr\/share\/nginx\/html\nCOPY .\/nginx.conf \/etc\/nginx\/conf.d\/default.conf<\/code><\/pre>\n\n\n\n<p>Now let\u2019s look at our gitlab-ci.yml file shown below:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"\" data-line=\"\">image: docker:stable\n \nvariables:\n  DOCKER_HOST: tcp:\/\/docker:2375\/\n  DOCKER_DRIVER: overlay2\n \nservices:\n  - docker:dind\n \nstages:\n  - build\n  - test\n  - package\n  - deploy\n \ngradle-build:\n  image: gradle:4.10.2-jdk8\n  stage: build\n  script: &quot;gradle build -x test&quot;\n  artifacts:\n    paths:\n      - build\/libs\/*.jar\n \nunit-test:\n  image: gradle:4.10.2-jdk8\n  stage: test\n  script:\n    - gradle test\n \ndocker-build:\n  only:\n  - master\n  stage: package\n  script:\n  - docker build -t $CI_REGISTRY_IMAGE:latest -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .\n  - docker login -u token -p $IBM_REGISTRY_TOKEN $CI_REGISTRY \n  - docker push $CI_REGISTRY_IMAGE:latest\n  - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA\n \nserver-deploy:\n  only:\n  - master\n  image: kroniak\/ssh-client\n  stage: deploy    \n  script:    \n  - echo &quot;$CI_SSH&quot; | tr -d &#039;\\r&#039; &gt; pkey\n  - chmod 400 pkey    \n  - ssh -o stricthostkeychecking=no -i pkey root@bahnalyse.mi.hdm-stuttgart.de &quot;docker login -u token -p $IBM_REGISTRY_TOKEN $CI_REGISTRY &amp;&amp; docker-compose pull bahnalysebackend &amp;&amp; docker-compose up --no-deps -d bahnalysebackend&quot;<\/code><\/pre>\n\n\n\n<p>Compared to our first version we now make use of suitable Docker images. This makes the jobs faster and the file clearer. Most of the first parts are taken from this pretty good tutorial, so we\u2019ll keep the explanations short here. At first we specify docker:stable as default image for this pipeline. 
This overrides the image defined in the runner configuration and can in turn be overridden in every job. Using the \u201cservices\u201d keyword we also add docker-in-docker to this image. The variable DOCKER_HOST is required for dind because it tells docker to talk to the daemon started inside the service instead of the default \u201c\/var\/run\/docker.sock\u201d socket. Using an overlay storage driver improves the performance. Next we define our stages \u201cbuild\u201d, \u201ctest\u201d, \u201cpackage\u201d and \u201cdeploy\u201d, and then the jobs to run in each stage.<\/p>\n\n\n\n<p>The gradle-build job now uses the gradle image in the version matching our requirements, which includes all the dependencies we need to build our jar file with \u201cgradle build\u201d. We use the <em>-x test<\/em> option here to exclude the tests because we want to run them in a separate stage; this gives a better overview in the GitLab UI because you see faster what went wrong. Using \u201cartifacts\u201d we store the built jar file at the specified path, where it becomes available to other jobs and can also be downloaded from the GitLab UI.<\/p>\n\n\n\n<p>In the test stage we simply run our unit tests using \u201cgradle test\u201d. This compiles again because we excluded the tests from the jar in our build job.<\/p>\n\n\n\n<p>In the package stage we create a Docker image containing our jar file. Using the \u201conly\u201d keyword we specify that this should happen only on the master branch. The first line of the \u201cscript\u201d block uses the backend Dockerfile mentioned above, located in the root directory of the project (specified by the dot at the end of the line), to create the image.<\/p>\n\n\n\n<p>For the following steps to work we need to solve our second problem: the absence of the GitLab Container Registry in HdM\u2019s GitLab. 
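To make the package stage concrete: each build pushes the same image under two tags, a moving "latest" and a unique one derived from the commit SHA (as in the $CI_REGISTRY_IMAGE and $CI_COMMIT_SHA variables in the yaml above). A tiny helper sketch with hypothetical names:

```shell
# Given an image name and a commit SHA, print the two tags the
# package stage builds and pushes: "latest" plus the unique SHA tag.
image_tags() {
  image=$1; sha=$2
  printf '%s:latest\n%s:%s\n' "$image" "$image" "$sha"
}

image_tags "registry.example.com/bahnalyse/backend" "2f9a1c0"
```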
A <a href=\"https:\/\/docs.docker.com\/registry\/introduction\/\">registry<\/a> is a storage and content delivery system holding named Docker images, available in different tagged versions. A common use case in CI\/CD is to build the new image in the pipeline, tag it with something unique like a timestamp as well as with \u201clatest\u201d, push it to a registry and then pull it from there for deployment. There are alternatives to the registry integrated into GitLab, which we will discuss later. First, let\u2019s finish the explanation of the yaml file. We followed the registry use case just described. As the unique tag we chose the commit hash, because the images are stored with a timestamp in the registry anyway; it is accessible via the predefined environment variable $CI_COMMIT_SHA. We also defined environment variables for the login credentials to the registry so that they don\u2019t appear in any files or logs. Using environment variables for values like the image name also makes the registry easier to exchange, because the file can stay the same and only the variables need to change. They can be defined in the GitLab UI under Settings -&gt; CI\/CD -&gt; Environment variables.<\/p>\n\n\n\n<p>In the deploy stage we used a public image from Docker Hub that has ssh installed, so that we don\u2019t have to install it in the pipeline every time, which costs time. A more secure solution would be to create such an image ourselves. We log in to our server using an ssh key saved in the CI_SSH environment variable, then run the commands on the server to log in to our registry, pull the latest image and start it. To pull and start we use docker-compose. <a href=\"https:\/\/docs.docker.com\/compose\/overview\/\">Docker Compose<\/a> is a tool for defining and running multi-container Docker applications. It is mainly used for local development and single-host deployments. 
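For illustration, a minimal compose file in the spirit of our deployment might look like the following; the image path, port mapping and the rabbitmq service details are assumptions, not our exact file (only the bahnalysebackend service name appears in our deploy command). The sketch writes the file from a shell here-document:

```shell
# Write a minimal, hypothetical docker-compose.yml: one application
# service pulled from a registry, plus the message broker it depends on.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  bahnalysebackend:
    image: registry.eu-de.bluemix.net/bahnalyse/backend:latest
    ports:
      - "8080:8080"
    environment:
      RABBIT_HOST: rabbitmq
    depends_on:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:3.7
EOF

# Deploy then pulls the fresh image and restarts only that service:
# docker-compose pull bahnalysebackend && docker-compose up --no-deps -d bahnalysebackend
```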
By default it uses a file called docker-compose.yml. In this file multiple services can be defined, each with either a Dockerfile to build it or an image name (including the registry) to pull it from, as well as port mappings, environment variables and dependencies between the services. We use the <em>\u2013no-deps<\/em> option to restart only the service whose image has changed, and -d to detach it into the background; otherwise the pipeline would never stop. <\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Choosing a Registry<\/h4>\n\n\n\n<p>Since we cannot use the registry integrated into GitLab, we considered the following alternatives: <\/p>\n\n\n\n<ol class=\"wp-block-list\"><li> Set up our own registry<\/li><li> Use Docker Hub<\/li><li> Use IBM Cloud Registry (or another cloud provider)<\/li><\/ol>\n\n\n\n<p>The first approach is described <a href=\"https:\/\/docs.docker.com\/registry\/deploying\/\">here<\/a>. Especially making the registry accessible from outside, e.g. from our pipeline, makes this approach much more complicated than the other solutions, so we discarded it.<\/p>\n\n\n\n<p>Instead we started out with the second approach, <a href=\"https:\/\/www.docker.com\/products\/docker-hub\">Docker Hub<\/a>. To log in to it, the $CI_REGISTRY variable used in the gitlab-ci.yml file should contain \u201cindex.docker.io\u201d, or it can simply be omitted because that is the default for the docker login command. Besides the ease of use, the unlimited storage is its biggest benefit. But it also has some drawbacks: you get only one private repository for free, and using this one repository for different images makes it necessary to distinguish them by tags, which is not really what tags are for. Also, login is only possible with username and password. 
So using it from a CI pipeline forces a team member to put their private credentials into GitLab\u2019s environment variables, where every other maintainer of the project can read them.<\/p>\n\n\n\n<p>For these reasons we switched to the IBM Cloud Registry. There it is possible to create a user with its own credentials just for the pipeline, using the IBM Cloud IAM tools, or to simply create a token to use for the docker login. To switch the registry, only the GitLab environment variable $CI_REGISTRY needs to be adjusted to \u201cregistry.eu-de.bluemix.net\u201d, and the login needs to be updated too (we changed from the username-and-password approach to the token one shown in the file above). Also, the number of private repositories is not limited, and you get another helpful tool on top: vulnerability checks for all the images. Unfortunately the amount of free storage is limited, and since our images are too big, we got access to HdM\u2019s paid account. To minimize costs we had to ensure that not too many images are stored in this registry. Since logging in to IBM Cloud\u2019s UI and removing old images manually is very inefficient, we added a clean-up job to our pipeline.<\/p>\n\n\n\n<p>The possibilities to make such a clean-up job work are quite limited. There is no simple docker command for this, like there is for docker login, push or pull. Probably the most docker-native way would be using the docker registry REST API as described <a href=\"https:\/\/medium.com\/@mcvidanagama\/cleanup-your-docker-registry-ef0527673e3a\">here<\/a>, but at IBM this is only accessible for private cloud customers. The other approach described in the mentioned blogpost, deleting from the filesystem, is even less accessible in a cloud registry. So we have to use an IBM Cloud specific solution. 
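Independent of which registry API finally deletes an image, the selection rule we wanted is simple: once more than one image exists for a repository, the oldest should go. A registry-agnostic sketch in plain shell, taking "created-timestamp image:tag" pairs (the data below is made up):

```shell
# Print the oldest image once more than one is present, else nothing.
# Each argument is a "created-timestamp image:tag" pair; ISO timestamps
# sort chronologically as plain strings.
oldest_if_multiple() {
  [ "$#" -gt 1 ] || return 0
  printf '%s\n' "$@" | sort | head -n 1 | cut -d' ' -f2
}

oldest_if_multiple \
  "2019-02-20T10:00:00 bahnalyse/backend:a1b2c3" \
  "2019-02-26T09:30:00 bahnalyse/backend:d4e5f6"
# prints bahnalyse/backend:a1b2c3
```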
Some fellow students of ours had the same problem and solved it using the IBM Cloud CLI, as described in their <a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2019\/01\/04\/radcup-part-3-automation\/\">blogpost<\/a>. We were looking for a solution without the IBM Cloud CLI tools and found a REST API that could do the job, documented <a href=\"https:\/\/console.bluemix.net\/apidocs\/container-registry#lists-authorized-namespaces-in-the-targeted-ibm-cl\">here<\/a>. But for authorization you need a valid bearer token, and to receive one in a script you need the CLI tools anyway. We chose to use this API nonetheless and ended up with the following additional job in our gitlab-ci.yml file: <\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"\" data-line=\"\">registry-cleanup:\n  stage: deploy\n  script:\n  - apk update\n  - apk add curl\n  - curl -fsSL https:\/\/clis.ng.bluemix.net\/install\/linux | sh\n  - ibmcloud plugin install container-registry\n  - apk add jq\n  - ibmcloud login --apikey $IBM_API_KEY -r eu-de\n  - ibmcloud iam oauth-tokens | sed -e &#039;s\/^IAM token:\\s*\/\/g&#039; &gt; bearertoken.txt\n  - cat bearertoken.txt\n  - &gt;-\n      curl\n      -H &quot;Account: 7e8029ad935180cfdce6e1e8b6ff6910&quot;\n      -H &quot;Authorization: $(cat bearertoken.txt)&quot;\n      https:\/\/registry.eu-de.bluemix.net\/api\/v1\/images\n      |\n      jq --raw-output\n      &#039;map(select(.RepoTags[0] | startswith(&quot;registry.eu-de.bluemix.net\/bahnalyse\/testrepo&quot;)))\n      | if length &gt; 1 then sort_by(.Created)[0].RepoTags[0] else &quot;&quot; end&#039; &gt; image.txt\n  - &gt;-\n       if [ -s image.txt ] ;\n       then \n       curl -X DELETE\n       -H &quot;Account: 7e8029ad935180cfdce6e1e8b6ff6910&quot;\n       -H &quot;Authorization: $(cat bearertoken.txt)&quot;\n       https:\/\/registry.eu-de.bluemix.net\/api\/v1\/images\/$(cat image.txt) ;\n       else\n       echo &quot;nothing to delete&quot; ;\n       fi<\/code><\/pre>\n\n\n\n<p>We run it in the deploy stage so that it could run in parallel to the actual deploy job if we had more than one runner. <\/p>\n\n\n\n<p>First we install the required tools: curl, the IBM Cloud CLI and jq. This should later be done by creating and using an appropriate image. Then we log in using the CLI tools and get a bearer token. From the answer we need to cut off the beginning, because it is (sometimes) prefixed with \u201cIAM token: \u201d, and then write it into a file. Curl is used to call the REST API, with the authorization headers set, to receive all the images available in our registry. We pipe the output to <a href=\"https:\/\/stedolan.github.io\/jq\/\">jq<\/a>, a command line tool for parsing JSON. We select all images whose name starts with the one we just created. If there is more than one, we sort them by the created timestamp, grab the oldest one and write its name, including the tag, to a file; otherwise we create an empty file. The <em>\u2013raw-output <\/em>option of jq omits the quotes that would surround a JSON output. Finally we check whether the file contains an image and delete that image via an API call if it does. Somehow the else block, which should report that there is nothing to delete, doesn\u2019t really work yet. Probably something is wrong with the spaces, quotes or semicolons, but debugging a shell script defined in a yaml file is horrible, so we\u2019ll just live with our less talkative pipeline. The yaml format also makes the &gt;- at the beginning of a command necessary, otherwise the yaml is invalid; in our case an error like \u201c<em>(&lt;unknown&gt;): mapping values are not allowed in this context at line \u2026 column \u2026<\/em>\u201d occurred. <\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>Our aim in the implementation of the application <em>Bahnalyse <\/em>was to play around with modern technologies and practices. 
While learning a lot about architectural patterns (like SOA and microservices), cloud providers, containerization and continuous integration, we successfully improved the application&#8217;s architecture. <\/p>\n\n\n\n<p>We found out that the pure implementation of architectural principles is hardly possible and rarely makes sense. Although we initially wanted to split our monolith into several microservices, we ended up creating a SOA that makes use of both: a microservice and services that are composed of, or make use of, other services. In a nutshell, there will probably never be a complete roadmap telling you which architecture or technology fits your needs best. Furthermore, a microservice architecture is not a universal remedy; it entails its own drawbacks. In most cases you have to evaluate and compare the drawbacks of the different options available and decide which one really suits your business case.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Outlook<\/h3>\n\n\n\n<p>A further point to look at would be improving our password management. Currently we save our credentials in GitLab\u2019s environment variables, which poses a security risk, because this way every maintainer working on our project in GitLab is able to see them. We want to avoid this, e.g. by outsourcing them to a tool like <a href=\"https:\/\/www.vaultproject.io\/\">Vault by HashiCorp<\/a>, a great mechanism for storing sensitive data such as secrets and credentials.<\/p>\n\n\n\n<p>Another thing to focus on is the further separation of concerns into different microservices. A perfect candidate for this is the search service, which the frontend uses to autocomplete the user\u2019s station name input. 
It\u2019s independent of any other component and just sends the user input to the VVS API and returns a collection of matching station names.<\/p>\n\n\n\n<p style=\"text-align:left\">Finally deploying <em>Bahnalyse<\/em> to the cloud would be an interesting thing for us to try out. We already figured out which cloud provider fits our needs best in the first part of our blog post series. The next step would be to explore the IBM Cloud Kubernetes service and figure out the differences between deploying and running our application on a server and doing this in the cloud. <\/p>\n","protected":false},"author":911,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[21,2],"tags":[],"ppma_author":[777],"class_list":["post-5313","post","type-post","status-publish","format-standard","hentry","category-system-architecture","category-system-engineering"],"aioseo_notices":[],"jetpack_featured_media_url":""}
Everybody is focused on their own coding part and it may take quite a while to complete it. But what happens when the individual pieces are merged? The integration of the puzzle pieces in a system can be very time consuming and\u2026","rel":"","context":"In &quot;DevOps&quot;","block_context":{"text":"DevOps","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/scalable-systems\/devops\/"},"img":{"alt_text":"continuous integration system by Marina Kettschik","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/01\/ci-system-1-1.jpg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/01\/ci-system-1-1.jpg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/01\/ci-system-1-1.jpg?resize=525%2C300&ssl=1 1.5x"},"classes":[]},{"id":1740,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/12\/09\/snakes-exploring-pipelines-a-system-engineering-and-management-project-3\/","url_meta":{"origin":5313,"position":4},"title":"Snakes exploring Pipelines &#8211; A \u201cSystem Engineering and Management\u201d Project","author":"Yann Loic Philippczyk","date":"9. December 2016","format":false,"excerpt":"Part 2: Initial Coding This series of blog entries describes a student project focused on developing an application by using methods like pair programming, test driven development and deployment pipelines. Onwards to the fun part: The actual coding! In this blog entry, we will focus on test-driven development. 
Like we\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"A snake looking forward towards the next task, after having performed several incremental test-driven programming iterations Source: http:\/\/cdn1.arkive.org\/media\/0F\/0F35A02E-58A1-408B-B259-88C1E319B1C3\/Presentation.Large\/Curl-snake-coiled.jpg","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/12\/entry2-snake.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/12\/entry2-snake.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/12\/entry2-snake.png?resize=525%2C300&ssl=1 1.5x"},"classes":[]},{"id":556,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/05\/27\/test-driven-development-with-node-js\/","url_meta":{"origin":5313,"position":5},"title":"Test Driven Development with Node.js","author":"Roman Kollatschny","date":"27. 
May 2016","format":false,"excerpt":"Test-Driven Development with Mocha and Chai in Node.js","rel":"","context":"In &quot;Rich Media Systems&quot;","block_context":{"text":"Rich Media Systems","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/interactive-media\/rich-media-systems\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"jetpack_sharing_enabled":true,"authors":[{"term_id":777,"user_id":911,"is_guest":0,"slug":"mh313","display_name":"Marcel Heisler","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/bd2af6a772c2a25bf00f13fa93eef00e7f20ee2d3f164e38194bf04d5793fe91?s=96&d=mm&r=g","0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""}],"_links":{"self":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/5313","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/users\/911"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/comments?post=5313"}],"version-history":[{"count":4,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/5313\/revisions"}],"predecessor-version":[{"id":5516,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/5313\/revisions\/5516"}],"wp:attachment":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/media?parent=5313"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/categories?post=5313"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/tags?post=5313"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.
de\/index.php\/wp-json\/wp\/v2\/ppma_author?post=5313"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
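The Vault idea from the outlook can be sketched briefly. The following is a minimal, hedged sketch of reading a credential from Vault's KV (version 2) HTTP API with plain `java.net.http`, instead of pulling it from a GitLab environment variable; the Vault address, token, and secret path (`bahnalyse/db`) are placeholders for illustration, not values from our project.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hedged sketch: fetching a secret from HashiCorp Vault's KV v2 HTTP API.
// Address, token, and secret path are illustrative placeholders.
public class VaultSecrets {

    // Builds the read request: KV v2 secrets live under GET /v1/secret/data/<path>,
    // authenticated via the X-Vault-Token header.
    public static HttpRequest buildRead(String vaultAddr, String token, String path) {
        return HttpRequest.newBuilder()
                .uri(URI.create(vaultAddr + "/v1/secret/data/" + path))
                .header("X-Vault-Token", token)
                .GET()
                .build();
    }

    public static void main(String[] args) throws Exception {
        HttpRequest req = buildRead("http://127.0.0.1:8200", "s.placeholder-token", "bahnalyse/db");
        System.out.println(req.uri());
        // Actually sending it requires a running Vault, e.g. a local dev server
        // seeded with: vault kv put secret/bahnalyse/db password=...
        // HttpResponse<String> resp = HttpClient.newHttpClient()
        //         .send(req, HttpResponse.BodyHandlers.ofString());
    }
}
```

The advantage over CI environment variables is that access to the secret is gated by a short-lived token and audited by Vault, rather than being visible to every project maintainer.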
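The station-name search service is small enough to sketch its core logic: take the user's input, build a request to the VVS API, and return the matching station names. The endpoint URL and the `name` query parameter below are assumptions for illustration, not the real VVS API contract, and the local matching helper only mirrors the kind of collection the service hands back to the frontend.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

// Minimal sketch of the standalone search service's core logic.
// Endpoint and parameter name are hypothetical, not the real VVS API.
public class StationSearch {

    private static final String VVS_ENDPOINT = "https://efa-api.example.org/stopfinder"; // hypothetical

    // Builds the URL the service would request from the VVS API,
    // URL-encoding the raw user input.
    public static String buildQueryUrl(String userInput) {
        return VVS_ENDPOINT + "?name=" + URLEncoder.encode(userInput, StandardCharsets.UTF_8);
    }

    // Case-insensitive filter over station names, mirroring the
    // collection of matches the service returns for autocompletion.
    public static List<String> matchStations(List<String> stations, String input) {
        String needle = input.toLowerCase(Locale.GERMAN);
        return stations.stream()
                .filter(s -> s.toLowerCase(Locale.GERMAN).contains(needle))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(buildQueryUrl("Hauptbahnhof"));
        System.out.println(matchStations(
                List.of("Stadtmitte", "Hauptbahnhof", "Charlottenplatz"), "haupt"));
    }
}
```

Because the service is stateless and depends on nothing but the VVS API, it is exactly the kind of component that can be split out without touching the rest of the system.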