{"id":3503,"date":"2018-03-30T23:12:28","date_gmt":"2018-03-30T21:12:28","guid":{"rendered":"https:\/\/blog.mi.hdm-stuttgart.de\/?p=3503"},"modified":"2023-06-09T14:22:37","modified_gmt":"2023-06-09T12:22:37","slug":"ci-cd-with-gitlab-ci-for-a-web-application-part-2","status":"publish","type":"post","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2018\/03\/30\/ci-cd-with-gitlab-ci-for-a-web-application-part-2\/","title":{"rendered":"CI\/CD with GitLab CI for a web application &#8211; Part 2"},"content":{"rendered":"<h1>GitLab<\/h1>\n<p>Our first approach was to use the existing GitLab instance of HdM for our project. For them, a shared runner was already defined on which we could run our jobs, so we were able to focus on the CI process itself. This plan worked out at first. We simply defined build and test jobs, which passed without any problems. But when we tried to deploy to our staging server we were a bit confused, because no matter what we tried, the SSH connection to the server could not be established. Even after several reconfigurations of the keys and rewriting of the job we did not succeed, which was surprising, because we could connect to the server from our PC via SSH without problems. Finally, we found out that the HdM firewall blocked our SSH connection to the outside. Since the &#8220;HdM-GitLab solution&#8221; seemed to be a dead end, we decided to set up our own GitLab instance to be independent of external configurations.<br \/>\n<!--more--><\/p>\n<h2><b>Setting up your own GitLab instance<\/b><\/h2>\n<p>It is recommended to have at least 4GB of RAM available on the machine on which you want to host GitLab (more on that later). We decided to host it on a t2.medium instance on Amazon Web Services (running Ubuntu). It provides the necessary specifications without completely emptying one&#8217;s wallet. In the process of launching a new instance, you can choose an appropriate AMI of GitLab. 
This will relieve you of any further installation duties. Under the "Community AMIs" tab you will find different versions of the Enterprise and Community Edition; we went for the newest version of the Community Edition. It is essential to open the ports for SSH (22) and HTTP (80). To do so, configure a security group in the AWS console: under "Security Groups" you can edit existing groups or add new ones. After making sure the appropriate ports are open for inbound traffic, assign the security group to the EC2 instance.</p>
<p>Once everything is set up, you can access your GitLab instance via your preferred browser.</p>
<h2><b>Setting up a GitLab Runner</b></h2>
<p>Before we can use the runner, which uses the Docker executor, we first need to install Docker on our server. SSH into your AWS server and install Docker by updating the apt package index and then installing the <em>docker-ce</em> package.</p>
<p>Now that we have Docker installed, we can install and configure the runner. First we need to add GitLab's official repository and then install <em>gitlab-runner</em>.</p>
<p>Now we can use the CLI to register a new runner, on which the jobs of our CI pipeline will be executed. Simply execute <em>sudo gitlab-runner register</em> and answer the questions of the command dialog. When asked for the coordinator URL, put in the URL of your GitLab instance. You will find the token for registering the runner under "Runner settings" inside the CI/CD settings. There is no need to add any tags or set up the runner as unlocked, and all of this can also be changed later in the GitLab UI. As executor choose docker, and as default image choose node.
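<p>Collected in one place, the installation and registration steps described above might look like this on Ubuntu. The repository URLs follow the official Docker and GitLab installation docs at the time of writing and may have changed since:</p>
<pre class="prettyprint lang-sh" data-start-line="1" data-visibility="visible" data-highlight="" data-caption="Installing Docker and the runner (sketch)"># Install docker-ce from Docker's official repository
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce

# Add GitLab's official repository and install gitlab-runner
curl -L https://packages.gitlab.com/install/repositories/gitlab/gitlab-runner/script.deb.sh | sudo bash
sudo apt-get install -y gitlab-runner

# Register the runner (interactive dialog follows)
sudo gitlab-runner register</pre>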
The default image will be used if there is none defined in the <em>.gitlab-ci.yml</em> file (which we will define anyway).</p>
<h2><b>Definition of the pipeline</b></h2>
<p>The essential file for specifying the continuous integration pipeline is the <em>.gitlab-ci.yml</em> file, which needs to be placed in the root folder of the project. The file defines the stages and jobs of the pipeline.</p>
<p>First of all we need to define the Docker image which is used to run the jobs. In our case we want to use the Node.js image, so we simply add <em>image: node</em> to the YAML file. The CI executor will pull images from the Docker Hub, so any of the pre-built images can be used here. The described code would use the latest node image, but you may also pin a specific version, for example with <em>image: node:8.10.0</em>.</p>
<p>You can also define services (for example databases) in the same manner; any image available on the Docker Hub is possible. Since we use MongoDB as a database, we add the following to the file.</p>
<pre class="prettyprint lang-yaml" data-start-line="1" data-visibility="visible" data-highlight="" data-caption="">services:
   - mongo</pre>
<p>Our pipeline consists of 4 stages, which are defined as follows:</p>
<ul>
<li>build</li>
<li>test</li>
<li>staging</li>
<li>deploy</li>
</ul>
<p>The jobs are specified in a similar manner. The naming of the jobs is arbitrary and completely up to the developers. For example, we defined a job "build" which, as the name suggests, builds our project.</p>
<p>The stage in which this specific job is supposed to run is indicated, as well as the script which we want to run inside this job.
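<p>Such a build job might look like the following sketch; the <em>npm install</em> script is our guess at the build step, based on the dependency installation mentioned in the text:</p>
<pre class="prettyprint lang-yaml" data-start-line="1" data-visibility="visible" data-highlight="" data-caption="Build-job (sketch)">build:
   stage: build
   script: npm install</pre>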
In this case it is fairly simple, as we do not really need to run any complicated build steps for our Node.js application apart from the installation of dependencies. There is a variety of possible configuration options; for an overview of all parameters see <em><a href="https://docs.gitlab.com/ce/ci/yaml/">https://docs.gitlab.com/ce/ci/yaml/</a></em>.</p>
<p>Furthermore, we define two jobs for our test stage, which look pretty similar to one another and deal with linting and testing.</p>
<pre class="prettyprint lang-yaml" data-start-line="1" data-visibility="visible" data-highlight="" data-caption="Test-jobs">run_linting:
   stage: test
   script: npm run lint
   artifacts:
     paths:
       - build/reports/linting-results/

run_tests:
   stage: test
   script: npm run test
   artifacts:
     paths:
       - build/reports/test-results/</pre>
<p>The code specified for the script parameter runs the respective npm script in both cases. Those scripts are defined in the <em>package.json</em> file: "lint" triggers the execution of ESLint, whereas "test" uses the Mocha testing framework to run our tests and Mochawesome to create HTML test reports.</p>
<p>The HTML pages containing the linting and testing reports are stored in the folders <i>build/reports/linting-results</i> and <i>build/reports/test-results</i>. That's why we specify those paths in the artifacts parameter, so that GitLab can offer the option to download those artifacts directly.</p>
<p>Finally, we have one job in each of our two stages staging and deploy, which look pretty much the same except for the different IP address of the respective server.
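<p>The staging job itself is not reproduced in this post. Assuming it mirrors the production job, runs automatically and only targets a different server (the IP below is a placeholder, and the same SSH-key <em>before_script</em> would apply), a sketch could be:</p>
<pre class="prettyprint lang-yaml" data-start-line="1" data-visibility="visible" data-highlight="" data-caption="Deploy-staging-job (sketch)">deploy_staging:
   stage: staging
   script:
      - echo "Deploy to staging server"
      - ssh deploy@203.0.113.10 "cd /home/deploy/shaky-app &amp;&amp; git pull &amp;&amp; npm install &amp;&amp; sudo systemctl restart shaky.service"</pre>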
The following code defines what happens in our deployment job.</p>
<pre class="prettyprint lang-yaml" data-start-line="1" data-visibility="visible" data-highlight="" data-caption="Deploy-production-job">deploy_production:
   stage: deploy
   when: manual
   before_script:
      # Install ssh-agent if not already installed, it is required by Docker
      - 'which ssh-agent || ( apt-get update -y &amp;&amp; apt-get install openssh-client -y )'

      # Run ssh-agent (inside the build environment)
      - eval $(ssh-agent -s)

      # Add the SSH key stored in the SSH_PRIVATE_KEY variable to the agent store
      - ssh-add &lt;(echo "$SSH_PRIVATE_KEY")

      - mkdir -p ~/.ssh
      - '[[ -f /.dockerenv ]] &amp;&amp; echo -e "Host *\n\tStrictHostKeyChecking no\n\n" &gt; ~/.ssh/config'

   script:
      - echo "Deploy to production server"
      - ssh deploy@159.89.96.208 "cd /home/deploy/shaky-app &amp;&amp; git pull &amp;&amp; npm install &amp;&amp; sudo systemctl restart shaky.service"</pre>
<p>A key difference compared to the staging job is the parameter <em>when: manual</em>. While we want the staging server to be updated automatically so that it always runs the newest code, we do not want the same behaviour for the deployment job: it should not run automatically as soon as the preceding jobs have finished, but only upon manual triggering. In the GitLab CI interface a "play button" shows up for this specific job, with which a developer can manually trigger the deployment. With this setup one can check whether the application works properly on the staging server before pushing it to production.</p>
<p>The code defined in the <em>before_script</em> parameter is needed because GitLab does not support managing SSH keys inside the runner. To deploy, we of course need to SSH into our server.
By adding those commands, we inject an SSH key into the build environment. For further details see <em><a href="https://gitlab.ida.liu.se/help/ci/ssh_keys/README.md">https://gitlab.ida.liu.se/help/ci/ssh_keys/README.md</a></em>.</p>
<p>The actual deploy command (inside script) simply uses SSH to access the deployment server, where we have already set up a connection to the Git repository. Now we only need to pull the current code base, update the dependencies (<em>npm install</em>) and restart the service. Deployment to the staging server works the same way.</p>
<p>In the GitLab UI our pipeline is displayed like this:</p>
<figure id="attachment_3510" aria-describedby="caption-attachment-3510" style="width: 656px" class="wp-caption alignnone"><a href="https://blog.mi.hdm-stuttgart.de/wp-content/uploads/2018/03/pipeline-gitlab.png"><img loading="lazy" decoding="async" class="size-large wp-image-3510" src="https://blog.mi.hdm-stuttgart.de/wp-content/uploads/2018/03/pipeline-gitlab-1024x156.png" alt="Shaky Pipeline GitLab" width="656" height="100" /></a><figcaption id="caption-attachment-3510" class="wp-caption-text">Shaky Pipeline GitLab</figcaption></figure>
<p>We can see that the pipeline consists of 4 stages; if a stage holds multiple jobs, you can inspect them via a dropdown. With the play button you can start the deploy-production job, and you can download the artifacts of each job by clicking on the "download" icon.</p>
<h2>GitLab Pages (used for test reporting)</h2>
<p>In our project we implemented various quality assurance measures (unit tests, static code analysis and code coverage). Reports were created for each test run to make it easier to analyze the results. Unlike Jenkins, GitLab CI has no plugin to publish test results directly in GitLab, so we had to look for an alternative solution and came across GitLab Pages. With GitLab Pages you can host static websites for your GitLab project, so we planned to publish our test results to GitLab Pages to have them attached to our repo.</p>
<p>Because we ran our own GitLab instance, we first had to enable GitLab Pages. Here we encountered a problem: for our project, even before we came across GitLab Pages, we had purchased a domain on which we registered our servers (staging and production) and our GitLab instance.
However, to properly configure GitLab Pages for your own instance, you have to register a wildcard DNS record pointing to the host on which the GitLab instance runs. Unfortunately our provider did not offer wildcard subdomain support for our domain package. Therefore we had to register a second domain at another provider which supported the creation of wildcard DNS records. In our case the configuration looked like this:</p>
<figure id="attachment_3508" aria-describedby="caption-attachment-3508" style="width: 656px" class="wp-caption alignnone"><a href="https://blog.mi.hdm-stuttgart.de/wp-content/uploads/2018/03/wildcard-dns.png"><img loading="lazy" decoding="async" class="size-large wp-image-3508" src="https://blog.mi.hdm-stuttgart.de/wp-content/uploads/2018/03/wildcard-dns-1024x132.png" alt="Wildcard DNS record configuration" width="656" height="85" /></a><figcaption id="caption-attachment-3508" class="wp-caption-text">Wildcard DNS record configuration</figcaption></figure>
<p><strong>NOTE:</strong> when planning to make use of GitLab Pages, make sure you can register wildcard DNS records for your GitLab instance's domain.</p>
<p>After we mastered this challenge, we had to set the external URL for GitLab Pages in the GitLab configuration (<em>gitlab.rb</em>).</p>
<p>Finally, we had to make some small modifications to our <em>.gitlab-ci.yml</em> file in order to publish the test results to GitLab Pages. For each test report we had to create an artifact.
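<p>For reference, the <em>gitlab.rb</em> change mentioned above is a single setting; the domain below is a placeholder for your wildcard Pages domain, and the change is applied with <em>sudo gitlab-ctl reconfigure</em>:</p>
<pre class="prettyprint lang-sh" data-start-line="1" data-visibility="visible" data-highlight="" data-caption="/etc/gitlab/gitlab.rb (sketch)">pages_external_url 'http://example-pages-domain.io'</pre>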
Additionally, we defined a new job called pages, which moves all artifacts from their original directories to the public directory, in which GitLab Pages expects to find the static website.</p>
<pre class="prettyprint lang-yaml" data-start-line="1" data-visibility="visible" data-highlight="" data-caption="pages-job">pages:
    stage: staging
    dependencies:
        - run_linting
        - run_tests
    script:
        - mkdir public
        - mv build/reports/linting-results/* public
        - mv build/reports/test-results/* public
        - mv build/reports/lcov-report/* public
    artifacts:
        paths:
            - public</pre>
<p>For each test report an HTML file with a different name was created, so after the pages job completed, the 3 reports were accessible under:</p>
<ul>
<li><em>http://project-name.gitlab-domain.io/static-test.html</em></li>
<li><em>http://project-name.gitlab-domain.io/unit-test.html</em></li>
<li><em>http://project-name.gitlab-domain.io/coverage.html</em></li>
</ul>
<p>For better usability, it would have been nice to create an <em>index.html</em> serving as a dashboard linking to all reports. Unfortunately, there was not enough time to implement this additional feature.</p>
<h1>Production and staging server</h1>
<p>For us, the setup of the Ubuntu servers (staging, production) was a bit tricky at first, because neither of us had done this from scratch before, but in the end we succeeded. Our goal was to set up a web server to handle the requests, and to ensure that our application is always running and automatically restarts whenever it crashes or the server gets rebooted.</p>
<p>First, we created a deploy user and installed the required software (Node.js, MongoDB and git).
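<p>The user and package setup described above might look roughly like this on Ubuntu; the exact commands are not shown in the post, and the package names are the stock Ubuntu ones:</p>
<pre class="prettyprint lang-sh" data-start-line="1" data-visibility="visible" data-highlight="" data-caption="Server provisioning (sketch)"># Create the deploy user
sudo adduser deploy

# Install Node.js, MongoDB and git from the Ubuntu repositories
sudo apt-get update
sudo apt-get install -y nodejs npm mongodb git</pre>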
Then we established an SSH connection to our repository, cloned the repo and started the application to test whether everything worked up to this point.</p>
<h2>Always-on service</h2>
<p>Next, we configured the "always-on mode" for our application using <em>systemd</em>, the default init system on current Linux distributions.</p>
<p>For this purpose, we created a configuration file <em>/etc/systemd/system/shaky.service</em> in which we specified that the application is started or relaunched at system start, on which port it runs and where the files are stored.</p>
<p>After this, we were able to start our application with <em>sudo systemctl start shaky</em>. To make the application start automatically when the server (re)starts, we ran <em>sudo systemctl enable shaky</em>.</p>
<p>With <em>sudo systemctl restart shaky</em> the service can be restarted. We later used this command in the deploy jobs of our <em>.gitlab-ci.yml</em> file. The first attempt to execute it during the deployment process failed, because the deploy user had to type his password every time he ran the command (which was obviously not possible from GitLab). Therefore, as root, we added a custom file to <em>/etc/sudoers.d/</em> which allows the deploy user to restart shaky.service without entering a password:</p>
<pre class="prettyprint lang-sh" data-start-line="1" data-visibility="visible" data-highlight="" data-caption="">%deploy ALL=NOPASSWD: /bin/systemctl restart shaky.service</pre>
<h2>Nginx</h2>
<p>We used the Nginx web server (and load balancer) to handle all requests from the web.
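<p>The <em>shaky.service</em> unit described in the previous section is not reproduced in the post. A minimal sketch, assuming the app lives in <em>/home/deploy/shaky-app</em>, runs as the deploy user, listens on port 8080 and is started via a hypothetical <em>server.js</em> entry point, could be:</p>
<pre class="prettyprint" data-start-line="1" data-visibility="visible" data-highlight="" data-caption="shaky.service (sketch)">[Unit]
Description=Shaky web application
After=network.target mongodb.service

[Service]
Type=simple
User=deploy
# Paths, port and entry point below are assumptions
WorkingDirectory=/home/deploy/shaky-app
Environment=PORT=8080
ExecStart=/usr/bin/node server.js
Restart=always

[Install]
WantedBy=multi-user.target</pre>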
We installed Nginx and created a configuration file <em>/etc/nginx/sites-available/shaky</em>.</p>
<pre class="prettyprint lang-sh" data-start-line="1" data-visibility="visible" data-highlight="" data-caption="">upstream node_server {
    server 127.0.0.1:8080 fail_timeout=0;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    server_name shaky-app.de www.shaky-app.de;

    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
        proxy_buffering off;
    }
}</pre>
<p>We configured the web server to listen for requests on port 80 and forward them to our shaky app (listening on port 8080). Additionally we set the server name as well as some request headers. Then we replaced the default Nginx configuration with our own config and restarted Nginx.</p>
<p>When we checked whether the configuration was correct, we found that the page was only accessible if we explicitly specified the port in the request. It turned out to be a small error in the configuration: we had forgotten to include the port in the proxy_pass property (we had set http://localhost instead of http://localhost:8080), so requests were not forwarded to the correct port. The configuration shown above already contains the fix.</p>
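<p>Replacing the default site and restarting Nginx, as described above, corresponds to commands along these lines (paths per the standard Debian/Ubuntu Nginx layout; not shown in the original post):</p>
<pre class="prettyprint lang-sh" data-start-line="1" data-visibility="visible" data-highlight="" data-caption="Enabling the site (sketch)">sudo ln -s /etc/nginx/sites-available/shaky /etc/nginx/sites-enabled/shaky
sudo rm /etc/nginx/sites-enabled/default
sudo nginx -t          # check the configuration for syntax errors
sudo systemctl restart nginx</pre>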