Ynstagram – Cloud Computing with AWS & Serverless

As part of the lecture "Software Development for Cloud Computing", we set ourselves the goal of building a simple Instagram clone as our semester project in order to pick up the fundamentals of cloud computing.

Basic Concept / Goals of the Project

Since we had already gained some experience with React in other student projects, we wanted to put less focus on the functionality and the frontend of the application and pay more attention to the cloud-specific features and workflows.

Concretely, we planned to implement an Instagram clone with the following basic features:

  • Uploading images / posts
  • Adding a title & description to posts
  • Liking posts
  • Commenting on posts
  • Account management

Design Decisions

Frontend

Based on our existing knowledge and good experience, we decided to implement the frontend with the React framework. Building it as a web app also has the advantage that "Ynstagram" is accessible across platforms.

 

Backend – from Firebase to AWS

Initially we started implementing our project with Firebase. On the one hand, however, this lost its appeal in terms of the learning effect, since we were building our software project with Firebase in parallel. At the same time, the insights from the lectures made us aware of the scope of functionality that AWS offers.

What we found particularly interesting were the far more versatile ways of using Lambda functions. While in Firebase they can only be invoked through database writes / triggers, here we could also call them via API requests. The functionality that can be implemented is also much more extensive. Among other things, this gave us the possibility to automatically scale images on upload, and in the future it would also be fairly easy to add AI-based content analysis. While you quickly hit certain limits with Firebase in all of this, AWS opens up a much broader horizon of possibilities.

Nevertheless, the switch was by no means easy, because Firebase offers considerably better clarity and documentation.

Serverless

Since neither the AWS web console nor creating Lambda functions online appealed to us, we looked for a solution that would let us write all configuration in the code editor and, where possible, keep it versioned on GitHub.

We eventually came across Serverless. Here, all buckets, tables and API calls are managed in a serverless.yaml file. New elements can be created much more clearly and quickly, and configurations can simply be adopted from elements that already exist.

Postman

To keep an overview of the API routes we created and to be able to test them easily, we chose Postman. Via a file shared on GitHub, everyone involved in the project can see the current API routes and create new calls.

Implementation / Architecture

AWS Services

DynamoDB

Since we had already used the NoSQL database "Firestore Database" in Firebase, we decided to keep this kind of database structure. Compared to SQL databases, the advantage lies in simpler queries thanks to a flatter data structure.

We use DynamoDB tables to store the information belonging to the images, such as title, description, author, etc. The images are linked to the records in the tables via a unique ID.

There are two tables: one in which the raw input is stored first, and another into which the processed records are transferred.

Both tables share the same structure. The central fields are a unique ID, the creation date, the account name of the creator, and the description and title of the post. Comments and likes are managed as arrays.
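
As an illustration, a post entry could look roughly like this (the field names and values are placeholders, not taken from the actual implementation):

{
  "id": "0b9d2f64-1c3a-4e2e-9f1a-8d2b3c4d5e6f",
  "createdAt": "2021-08-01T12:00:00Z",
  "author": "exampleUser",
  "title": "First post",
  "description": "Greetings from the cloud",
  "comments": ["Nice!"],
  "likes": ["anotherUser"]
}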

S3

The images attached to the posts are stored in S3 buckets. There is one bucket with the original files and one with a reduced resolution. The file name of an image always corresponds to the unique ID of the associated post.

Cognito

With AWS Cognito we were able to set up our account management in just a few steps. Cognito supports current identity and access management standards such as OAuth 2.0 and SAML 2.0 and also offers the option of implementing multi-factor authentication.

Amplify

We use AWS Amplify to host the frontend of our application and to realize a CI/CD pipeline with a development and a master environment for it. A more detailed explanation can be found in the section "CI/CD Pipeline".

Lambda 

API Gateway

A large part of our Lambda functions is used via API calls. As mentioned at the beginning, we keep an overview of these in a Postman file.

API Routes

POST /image-upload

Uploads an image to the S3 bucket. It is called with both the image data and the associated information such as description and title (JSON format).

POST /image-info

Creates an entry in the DynamoDB table with all the information about a post, transmitted as JSON in the request body.

POST /create-file

Creates a file in the S3 bucket. The file name corresponds to the URL parameter.

GET /get-all-images

Returns all posts marked as "valid" as an array in JSON format.

GET /get-file

Returns a file by its file name.

GET /image-info

Returns the information about a single post in JSON format.

PUT /update-image-info

Used to add comments to posts. Updates entries in the DynamoDB table.

PUT /update-likes

Used to add new likes.

DynamoDB / S3 Triggers

Besides direct API calls, we also use triggers on DynamoDB tables and S3 buckets.

Example Flow of an Image Upload

This is best illustrated by the flow of creating a post.

A POST request first calls the Lambda function "imageUpload", which stores the image in the S3 bucket. A trigger then automatically invokes the Lambda function "imageResize", which scales the images to a resolution of 400 x 400 pixels. These images are stored in the bucket for resized images. This way the images in the feed load faster, especially on mobile devices.
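
The original implementation of "imageResize" is not shown here; a minimal sketch of such an S3-triggered function, assuming a Python runtime with Pillow and a hypothetical bucket name, could look like this:

import io
import boto3
from PIL import Image

s3 = boto3.client("s3")
RESIZED_BUCKET = "ynstagram-images-resized"  # hypothetical bucket name

def handler(event, context):
    # The S3 trigger delivers one record per uploaded object
    for record in event["Records"]:
        source_bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Download the original image
        original = s3.get_object(Bucket=source_bucket, Key=key)["Body"].read()

        # Scale it to 400 x 400 pixels
        image = Image.open(io.BytesIO(original))
        image = image.resize((400, 400))

        buffer = io.BytesIO()
        image.save(buffer, format="JPEG")

        # Store the resized version under the same key in the resized bucket
        s3.put_object(Bucket=RESIZED_BUCKET, Key=key, Body=buffer.getvalue())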

In parallel, an entry is created in the DynamoDB table. Here, too, a trigger fires and in turn calls the function "changeText". In reference to the name "Ynstagram", it replaces every "i" in the description and title with a "y". This is merely a gimmick that grew out of our interest in trying out the various triggers and use cases of Lambda functions.
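
A sketch of what such a "changeText" trigger could look like, assuming a Python runtime, a DynamoDB stream configured with the new image, and hypothetical table and attribute names (the original code is not shown here):

import boto3

dynamodb = boto3.resource("dynamodb")
processed_table = dynamodb.Table("ynstagram-posts-processed")  # hypothetical table name

def handler(event, context):
    # The DynamoDB stream delivers the written item in the "NewImage" attribute
    for record in event["Records"]:
        if record["eventName"] not in ("INSERT", "MODIFY"):
            continue
        image = record["dynamodb"]["NewImage"]

        item = {
            "id": image["id"]["S"],
            "title": image["title"]["S"].replace("i", "y").replace("I", "Y"),
            "description": image["description"]["S"].replace("i", "y").replace("I", "Y"),
        }
        # Write the processed entry into the second table
        processed_table.put_item(Item=item)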

CI/CD Pipeline

It was also interesting for us to gain real experience with a CI/CD pipeline for the first time. We planned a strict separation into a development environment and a final environment that is essentially identical to it, so that the current state can be tested under realistic conditions before it is finally released.

We implemented this CI/CD pipeline with AWS Amplify and GitHub Actions. Changes are always pushed to a development branch first, which is then automatically deployed to a development environment on Amplify. This way all tests can be run before the changes are merged into the master branch via a pull request. Once that has happened, they are likewise automatically deployed to the production environment.

Besides the tests run by GitHub Actions, this also checks whether the web application scales correctly on different devices, i.e. whether the UI is displayed in a usable way for the user. The current state is of course only taken over if all tests pass.

Serverless

To avoid the cluttered AWS web console and to be able to create elements more easily and reproducibly, managed via Git, we chose Serverless. All AWS components are defined in a "serverless.yaml" file.

Variables

For example, it is also possible to define environment variables in a straightforward way:

We in turn defined these via our own custom variables, which are used in several places:

The advantage of this is that names can be changed flexibly and are picked up everywhere right away, i.e. both in AWS and in the code via the environment variables.
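
Since the original screenshots are not included here, the following is a minimal sketch of how this can look in a serverless.yaml; the runtime, bucket and table names are placeholders, not our actual configuration:

provider:
  name: aws
  runtime: python3.8
  environment:
    IMAGE_BUCKET: ${self:custom.imageBucket}
    POSTS_TABLE: ${self:custom.postsTable}

custom:
  imageBucket: ynstagram-images-${opt:stage, 'dev'}
  postsTable: ynstagram-posts-${opt:stage, 'dev'}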

Functions

Lambda functions can be created just as easily. Each is referenced via a "handler" and is then invoked by an "event", which can be an API call or, for example, a DynamoDB / S3 trigger.
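
A sketch of two such function definitions, one triggered by an HTTP call and one by an S3 event; handler paths and names are placeholders:

functions:
  imageUpload:
    handler: handlers/imageUpload.handler
    events:
      - http:
          path: image-upload
          method: post
          cors: true

  imageResize:
    handler: handlers/imageResize.handler
    events:
      - s3:
          bucket: ${self:custom.imageBucket}
          event: s3:ObjectCreated:*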

Resources

All buckets and tables are also defined in the .yaml file. New elements in particular can be created very easily, because you can directly reuse previously defined configurations.
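
The resources section uses plain CloudFormation syntax; a sketch for one bucket and one of the tables, again with placeholder names:

resources:
  Resources:
    ResizedImageBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.imageBucket}-resized

    PostsTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:custom.postsTable}
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH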

Testing

In testing we focused primarily on the API calls and the basic functionality. Our tests generally run through the GitHub Actions pipeline described in the "CI/CD Pipeline" section; they are also part of the Amplify deployment process. In addition, we set up CircleCI to deploy the Serverless components automatically. For testing we generally use a local mock of our DynamoDB, because we quickly ran into the problem that our free AWS quota was used up.
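
Such a local mock can, for example, be a DynamoDB Local instance that boto3 is pointed at instead of the real AWS endpoint. A minimal sketch of a test against it (assuming a Python test setup and placeholder names, which may differ from what we actually used):

import boto3

# Connect to a locally running DynamoDB (e.g. the official amazon/dynamodb-local Docker image)
dynamodb = boto3.resource(
    "dynamodb",
    endpoint_url="http://localhost:8000",
    region_name="eu-central-1",
    aws_access_key_id="dummy",
    aws_secret_access_key="dummy",
)

def test_put_and_get_post():
    table = dynamodb.Table("ynstagram-posts")  # created beforehand in the local instance
    table.put_item(Item={"id": "test-id", "title": "Test"})
    item = table.get_item(Key={"id": "test-id"})["Item"]
    assert item["title"] == "Test"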

Outlook / Conclusion

The biggest weakness of the project at the moment is the unsecured API calls; these could be protected by using API keys. In the long run, access to DynamoDB and S3 should also be managed via IAM roles.

For account management it would make sense to set up multi-factor authentication. The feature set could of course be expanded considerably, with the use of AI components being particularly interesting to us.

Overall, starting with no prior knowledge of cloud computing at all, implementing this project as part of the lecture gave us an overview of and a basic understanding of the world of cloud computing, which provides a solid basis for taking these approaches much further in the future.

“Studidash” | A serverless web application

by Oliver Klein (ok061), Daniel Koch (dk119), Luis Bühler (lb159), Micha Huhn (mh334)

Abstract

You are probably familiar with the HdM SB-Funktionen. After nearly four semesters we were tired of the boring design and decided to give it a more modern look with a bit more functionality than it currently has. So we created "Studidash" in the course "Software Development for Cloud Computing". "Studidash" shows your grades and automatically calculates your total ECTS and your grade average.

Since this is a project for SD4CC it runs as a serverless web application at Amazon Web Services, or AWS for short. Our tech stack for this project consists of Angular, Python, Terraform and some AWS Services like Lambda or S3.

While developing this Web-App we encountered some difficulties, but we also learned a lot, and we hope this blog post gives you a quick overview of what we did, what we learned, which problems we had and how we solved them, so your next project will be easier.

What did we do? 

As mentioned in the abstract, we developed a serverless Web-App called "Studidash" because of said boring design of the SB-Funktionen. First of all, we decided that we wanted to learn a new tech stack and settled on Angular as a modern frontend framework. For our backend we chose Python since it's lightweight and easy to learn. From another course we already knew Terraform, so we were somewhat familiar with it and decided to use it for our deployment to AWS. We also used AWS to host the Web-App since we got access to AWS Student Accounts.

After we settled on a project and our tech stack, we had to think about a way to make it "cloud native", started researching and came across serverless. We dug a bit deeper, found some useful information and came to realize that serverless might be the way to go. Serverless means that our (or maybe your) application isn't running completely on an "on-prem" server but in the cloud instead. That means the application itself isn't coupled to a server. Servers are still there, but you don't have to think about the administrative work around them; that is all handled by your cloud service provider. The serverless approach brings scalability, high availability and efficient resource usage and management with it. As mentioned, you can focus more on the development itself rather than thinking about servers. A connection to a CI/CD pipeline makes it easy and fast to release a new version of your application. But serverless also has its downsides: the functions should be as small as possible and fit only one purpose, and some Web-Apps can have higher latency due to a cold start (when a function isn't used for quite some time it gets destroyed and has to be instantiated again, which takes time). You are also going to have a harder time debugging your application, since it isn't as easy as you might be used to. In the end we went with a static frontend in an S3-Bucket, a backend running as AWS Lambda Functions and AWS API Gateway to connect them.

Architecture

Our architecture is fully hosted on AWS and our code repositories are hosted on the HdM GitLab server. The clients can access our frontend via their favourite web browser. Our frontend application is hosted in an AWS S3-Bucket. The good thing here is that we don't have to manage or deploy any web server ourselves, which reduces the management overhead and, in the end, the costs. After the frontend is served to the client, the user can enter their user credentials to access their grades from the third-party service (HdM SB-Funktionen). An HTTP-Request is then sent to a Lambda Function, with an API-Gateway receiving the request. This Lambda Function contains a Python script which parses the user credentials provided in the received HTTP-Request, uses them to log in to the SB-Funktionen platform and scrapes the necessary grades and lecture data of the user. This scraped data is then preprocessed and returned as a JSON-Object to the frontend.

From the developer side we used Git/GitLab for the version control of our code. In GitLab we created a CI/CD pipeline to build the frontend, the Python grade scraper and a Terraform image to deploy all our necessary AWS resources. Thanks to the CI/CD pipeline, the developer can just push the newest code base to the repository and it will be deployed automatically to AWS.

Architecture overview

Frontend

For our frontend we decided to build an Angular single page application. We made this decision because it’s an up-to-date framework to build fast and easy web applications.

When the user loads the website, the header only displays a login component for the HdM SB-Funktionen credentials. This component triggers a POST request containing the login data to the Lambda Function. The Lambda Function responds by returning several grade objects to the frontend, which are defined identically in front- and backend. The grade object exactly maps the table structure of the HdM page. The response then triggers the rendering of the table, and you receive a login message. There is also error handling in case the login fails. The table can be sorted by the different values, and the grade average and ECTS are calculated and displayed in the header of the page.

Screenshot of our frontend after successful login

Backend

Our backend consists of a Python script which is hosted in a Lambda Function with an API-Gateway to receive HTTP-Requests. The frontend sends a HTTP-Request with the user credentials in the request body to the API-Gateway. The request is then forwarded to the Lambda Function which then injects the HTTP-Request into our Python grade scraper script. The Python script then parses the request body and performs a login at the SB-Funktionen website of the HdM where all the student grades and lectures are stored.

Backend workflow

In our code example the event variable is the received HTTP-Request from the frontend. The received request body is a string, so the content of the body has to be parsed to JSON again. When there is no login data provided, the script sends an HTTP-Response with status code 401 and a corresponding message.

In the next step our script scrapes all the data we need and parses them into a JSON format which our frontend can handle easily. This JSON data is then sent as response to the Lambda Function which will forward this response to the API-Gateway. The API-Gateway then also forwards this response back to our frontend application where the received data will be processed and displayed.

Code snippet – try-except
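The original snippet is only available as a screenshot; the following is a simplified sketch of what this part of the handler does, with illustrative variable names rather than our exact code:

import json

def lambda_handler(event, context):
    try:
        # The body arrives as a string and has to be parsed back to JSON
        body = json.loads(event["body"])
        username = body["username"]
        password = body["password"]
    except (KeyError, TypeError, json.JSONDecodeError):
        # No (or malformed) login data provided
        return {
            "statusCode": 401,
            "body": json.dumps({"message": "No login data provided"}),
        }
    # ... continue with login and scraping using the provided credentials ...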

We also had to keep some other things in mind, for example what should happen when our backend throws an exception or the third-party service isn't available. In our backend we created an error handler which takes an HTTP status code and an error message as parameters, converts the data into the right format for our frontend and then sends the response.

Code snippet – error handling
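A sketch of such an error handler (again illustrative, not the original code):

import json

def send_error(status_code, message):
    # Converts an error into the response format the frontend expects
    return {
        "statusCode": status_code,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": message}),
    }

# Example usage inside the handler:
# return send_error(503, "SB-Funktionen is currently not available")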

Our main lambda_handler function is divided into different parts. Each part is surrounded by a try-except clause to catch exceptions, for example if the third-party service is down or no credentials were provided by the frontend. This makes our backend more reliable and also gives the user enough feedback to know what's going on. Since we use an external service, we need to think of a solution for the case when the third-party service is down, for example for maintenance. A possible solution would be a caching mechanism, which we don't provide in the current state.

CI/CD

To make our application as cloud native as possible we implemented a CI/CD pipeline in our project. This pipeline builds our Web-App as well as our Lambda Functions, tests our Python script and deploys them to AWS. For that we are using different stages (build, test, deploy) in our .gitlab-ci.yml file. The build_webapp stage first pulls a Node-image and runs a few lines of script to install all dependencies and then builds the Angular based frontend. While this part is running, a second instance is pulling an Alpine image and is also running a few lines of script to package our Lambda Function(s) into a ZIP file.

After that, the test stage is invoked to test the application before deployment. This is a crucial part in the pipeline since it can reveal mistakes that we made during development before going “live” with the application. When the tests succeed, the next stage is invoked.

In our case, we made the deployment stage manual since we didn't want to push every small change to AWS, and the Student Accounts also had time limits that would have prevented us from doing that anyway. What happens in the deploy stage is fairly simple: like in the stages before, we pull an image for Terraform and run the usual Terraform commands init, validate, plan and apply. This initializes Terraform, validates the main.tf in the root of the repository, creates a plan for the different resources in this main.tf and finally applies it.

But what exactly is in this main.tf file? This file declares every resource we need in AWS and creates it. First of all, we declared variables for our different buckets: one for the Lambda Function and one on which the Angular app is going to be hosted. After that, we created the S3-Bucket for the Lambda Function and uploaded the ZIP file with the function to the bucket. From there, it gets deployed to AWS Lambda. We also needed to create a role and policy to grant the correct access rights so the function can execute its task properly. After that, the S3-Bucket for the Angular app is created and the needed files are uploaded. This bucket hosts the frontend as a static website, which we also configured in our main.tf.

.gitlab-ci.yml file for our pipeline (1/2)
.gitlab-ci.yml file for our pipeline (2/2)
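The .gitlab-ci.yml itself is shown above only as screenshots; a condensed sketch of its structure, with placeholder image tags and paths rather than our exact file, could look like this:

stages:
  - build
  - test
  - deploy

build_webapp:
  stage: build
  image: node:14
  script:
    - cd frontend
    - npm install
    - npm run build

build_lambda:
  stage: build
  image: alpine:latest
  script:
    - apk add --no-cache zip
    - cd backend
    - zip -r ../lambda.zip .
  artifacts:
    paths:
      - lambda.zip

test_backend:
  stage: test
  image: python:3.8
  script:
    - cd backend
    - pip install -r requirements.txt
    - python -m pytest

deploy:
  stage: deploy
  image:
    name: hashicorp/terraform:latest
    entrypoint: [""]
  when: manual
  script:
    - terraform init
    - terraform validate
    - terraform plan
    - terraform apply -auto-approve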

Testing

Testing is one of the most important things when implementing a CI/CD pipeline with automated deployment. If you don't implement tests, you don't really know whether your application works before deployment, and after deployment it's too late. So implementing a test stage in our project was the way to go. For our Python backend we wrote some basic unit tests to test functionality and added a test stage for the backend to our CI/CD pipeline.

We also managed to write an End-To-End-Test for our frontend which checks if the Error-Snackbar is shown when the user puts in wrong credentials. The harder part in this scenario was getting it to run in the CI/CD pipeline, which we unfortunately didn't manage to do.

What problems did we have and how did we solve them?

One of the biggest problems we encountered was that we only had access to an AWS Student Account, which meant our access to AWS was restricted. For example, we needed to create different kinds of roles to deploy our Lambda Function with the correct set of rights to be executed. Due to the restrictions we were not allowed to give the roles the needed permissions, which caused our CI/CD pipeline to fail, so our project didn't get fully deployed. This could only be solved by getting a "real" AWS account, which gives you all the permissions you would need.

Another problem we faced was CORS (Cross-Origin Resource Sharing). In the first steps of our development we always got a CORS error when our frontend requested the grades and lecture data from our backend service. The reason was that our Python backend script just sent back the JSON-Object containing all the data, but without any HTTP headers, to our frontend. The frontend then failed to receive the response because the URL of the API-Gateway was different from the URL of our frontend. To fix this problem we had to set the Access-Control-Allow-Origin HTTP header in the response from our backend.

Code snippet – http-headers (CORS)
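In a Lambda proxy integration this simply means adding the header to the returned response object; a minimal sketch (illustrative, not the original snippet):

import json

def build_response(data):
    return {
        "statusCode": 200,
        "headers": {
            "Content-Type": "application/json",
            # Allows the frontend (served from a different origin) to read the response
            "Access-Control-Allow-Origin": "*",
        },
        "body": json.dumps(data),
    }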

After that, the request worked and our frontend could receive the scraped data.

Another problem was integrating our End-to-End-Test into our CI/CD pipeline, which we unfortunately didn't manage to fix in time. It would have required a runner with a browser available, which we weren't able to set up. We did manage to implement an E2E-Test that runs locally without any problems, so at least we have a bit of code quality assurance here. But having to run the tests manually isn't what you want in a fully automated cloud-native approach.

Conclusion

It was quite a long way from where we started, but in the end we managed to get our Web-App running on AWS the way we wanted. We made it a bit difficult for ourselves in the beginning by deciding to learn new technologies like Python and Angular, so we first had to learn those, and we also had to learn about serverless architecture. It is definitely something we look forward to working with in the future.

At the presentations we found out about AWS Amplify, which is basically a tool by AWS to get serverless Web-Apps running as fast as possible without the need for S3-Buckets. It showed us that there isn't really the "one and only" way to get something running in the cloud; there are many possible solutions.

In our opinion we learned a lot about AWS, serverless architecture and the cloud in general, and also about developing an application where you don't have to think about renting and maintaining a server. Maybe we can continue with this project in the near future and give the HdM SB-Funktionen a new look 🙂

Application Updater with Add-on Management

by Mario Beck (mb343) and Felix Ruh (fr067)

Introduction

Our goal was to build a program updater for developers that they can easily integrate into their CI/CD pipeline. For the implementation we used the IBM Cloud and a serverless architecture to achieve unlimited scalability. The serverless services we used include Cloud Functions, DB2 and an Object Storage.

The project consists of an uploader that the developer uses to upload their program to the object storage, and a downloader for the user that automatically downloads the latest version.

Usage from the developer's perspective:

  • The program is registered and you receive the corresponding API keys
  • Create the config for the downloader
  • The program can be uploaded with the uploader, which can easily be integrated into a CI/CD pipeline

Usage from the user's perspective:

  • Download the downloader and the config
  • Start the downloader
  • Before the program starts, it checks for new updates and downloads them if available
  • After the update, the actual program is started
Continue reading

How do you get a web application into the cloud?

by Dominik Ratzel (dr079) and Alischa Fritzsche (af094)

For the lecture "Software Development for Cloud Computing", we set ourselves the goal of exploring new things and gaining experience. We focused on one topic: "How do you get a web application into the cloud?". In doing so, we took a closer look at Continuous Integration / Continuous Delivery, Infrastructure as Code, and Secure Sockets Layer. In the following, we would like to share our experiences.

Overview of the content of this blog post

  • Comparison GitLab and GitHub 
    • CI/CD in GitLab 
      • Problem: Where are the CI/CD settings in the HdM Gitlab? 
      • Problem: Solve Docker in Docker by creating a runner 
    • CI/CD in GitHub 
  • Set up SSL for the web application 
    • Problem: A lot of manual effort 
      • Watchtower 
      • Terraform 
  • Testing 
    • Create a test environment 
    • Automated Selenium frontend testing in GitHub 
  • Docker Compose 
  • Problem: How to build amd64 images locally with an arm64 processor? 

Continuous Integration / Continuous Delivery

At the very beginning, we asked ourselves which platform was best suited for our approach. We limited ourselves to the best-known platforms so that the comparison would not be too complex: GitHub and GitLab.
Another point we wanted to try was setting up a runner. For this purpose, we set up a simple pipeline in both GitLab and GitHub to update Docker images on Docker Hub.

GitLab vs. GitHub

GitHub is considered the original cloud-based Git platform. The platform focuses primarily on the community. Comparatively, it is also the largest (as of January 2020: 40 million users). GitLab is the self-hosted open-source alternative to GitHub. During our research, we noticed the following differences concerning our project.

                                          GitLab    GitHub
Free private and public repositories        ✓         ✓ (since Jan. 2019)
Enterprise versions                         ✓         ✓
Self-hosted version                         ✓         ○ (only with paid Enterprise plan)
CI/CD with shared or personal runners       ✓         ○ (with third-party apps)
Wiki                                        ✓         ✓
Preview code changes                        ✓         ✓

Especially the point that it is only possible in GitLab to use self-hosted runners for the CI/CD pipeline caught our attention. From our point of view, this is a plus for GitLab in terms of data protection. The fact that GitLab can be self-hosted is an advantage but not necessary for our project. Nevertheless, it is worth mentioning, which is why we have included the point in our list. In all other aspects, GitLab and GitHub are very similar.

CI/CD – GitLab vs. GitHub

GitHub: provides the user with so-called GitHub Actions. This way, the user does not have to set up, configure or host their own runner.
+ very easy to use
+ free of charge
– Critical from a data protection perspective, as the code is executed/read “somewhere”

GitLab: To use the CI/CD, a custom runner must be configured, hosted, and integrated into the code repository.
+ code stays on own runner (e.g., passwords and source code are safe)
+ the runner can be configured according to one’s wishes
– complex to set up and configure
– Runner could cost money depending on the platform (e.g., AWS)

Additional information: HdM offers students so-called shared runners. However, Docker-in-Docker is not possible with these runners for security reasons. In the following, we will explain how we configured our GitLab runners to allow Docker-in-Docker. Another insight was that the DOCKER_HOST variable must not be specified in the pipeline, otherwise the Docker socket will not be found and the pipeline will fail.

CI/CD in GitLab 

Where are the CI/CD settings in the HdM GitLab?

We are probably not the first to notice that the CI/CD is missing in the MI GitLab navigation. The “advanced features” have been disabled to avoid “overwhelming” students. However, they can be easily activated via the GitLab settings (Settings > General > Visibility, project features, permissions) (https://docs.gitlab.com/ee/ci/enable_or_disable_ci.html). 

Write the .gitlab-ci.yml file

The next step is to write an individual .gitlab-ci.yml file (https://docs.gitlab.com/ee/ci/quick_start/index.html).  
The script builds a Docker container and pushes it to Docker Hub. The DOCKER_USERNAME and DOCKER_PASSWORD are stored as Variables in GitLab (Settings > CI/CD > Variables). 
Tip: If you want to keep the images private but do not want to pay for the second private repository on Docker Hub (5$/month), you can create a private repo and push the images separated by tag (in our case, “frontend” and “backend”). 

stages:
  - docker

build-push-image:
  stage: docker
  image: docker:stable
  tags:
    - gitlab-runner
  cache: {}
  services:
    - docker:18.09-dind
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
    # This variable DOCKER_HOST should never be set, because otherwise the default address of the Docker host will be
    # overwritten and the runner will not be able to access the socket and the pipeline will fail!
    # DOCKER_HOST: tcp://localhost:2375/
  before_script: # Install docker-compose
    - apk add --update --no-cache curl py-pip docker-compose
  script:
    - echo $DOCKER_PASSWORD | docker login --username $DOCKER_USERNAME --password-stdin
    - docker-compose build
    - docker-compose push
  only:
    - master

Configuring Gitlab

Next, we asked ourselves how we could restrict merges into the master. The goal was only to allow a branch to be added to the master if the pipeline was successful. This setting can be found in Settings > General > Merge requests > Merge checks the item “Pipelines must succeed”.

Setting up and configuring GitLab Runner

For this, we have written a runnerSetup.sh.

#!/bin/bash

# Download the binary for your system
sudo curl -L --output /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64

# Give it permission to be executed
sudo chmod +x /usr/local/bin/gitlab-runner

# Create a GitLab CI user
sudo useradd --comment 'GitLab Runner' --create-home gitlab-runner --shell /bin/bash

# Install and run as service
sudo gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner
sudo gitlab-runner start
sudo gitlab-runner status

# Command to register the runner
sudo gitlab-runner register --non-interactive --url https://gitlab.mi.hdm-stuttgart.de/ \
 --registration-token asdfX6fZFdaPL5Ckna4qad3ojr --tag-list gitlab-runner --description gitlab-runner \
 --executor docker --docker-image docker:stable \
 --docker-volumes /var/run/docker.sock:/var/run/docker.sock \
 --docker-privileged

# Install Docker and give the GitLab runner permissions so that it can access the Docker socket.
echo "Installing Docker"
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce

sudo usermod -aG docker gitlab-runner

# Restart Docker and GitLab Runner Service
sudo systemctl restart gitlab-runner
sudo systemctl restart docker.service

During the step "# Command to register the runner" we fixed the problem we had with the HdM runners. "--docker-volumes /var/run/docker.sock:/var/run/docker.sock" gives the runner access to the Docker socket. "--docker-privileged" allows the runner to access all devices on the host and processes outside the container (be careful).

CI/CD in GitHub

In GitHub, CI/CD is set up by adding the following code to the self-created .github/workflows/ci.yml file in the repository.
Like the previous .gitlab-ci.yml file, the script creates a Docker container and pushes it to Docker Hub. The DOCKER_USERNAME and DOCKER_PASSWORD are stored in the Action Secrets of GitHub (Settings > Actions).

name: Build and Push to Docker.io

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Login to Docker.io
      run: docker login -u ${{ secrets.DOCKER_USERNAME }} -p ${{ secrets.DOCKER_PASSWORD }} 
 
    - name: Build Docker-Compose
      run: docker-compose build
 
    - name: Deploy Container to Docker.io
      run: docker-compose push

SSL

SSL is used to encrypt the data exchange between the web browser and web server. It thus protects against access by third parties. To set up SSL, it is necessary to have an SSL certificate.

Configuration

We decided to use “all-inkl.com” due to an existing subscription.
In the KAS admin center (after setting up the domains and subdomains), new DNS records can be created and edited (Domain > DNS Settings > Actions (Edit)). Here, a new Type-A record can be created that points to the IP address of the AWS reverse proxy. The email address (which is needed for verification) can easily be created under email > email Inbox.

Server configuration 

We used the free CA Let’s Encrypt (https://letsencrypt.org/) for the creation and renewal of the SSL certificates. For the configuration, we used the following images: jwilder/nginx-proxy as Nginx Proxy and jrcs/letsencrypt-nginx-proxy-companion as Nginx Proxy Companion (it creates the certificates and mounts them via the volumes into the Nginx Proxy so that it can use them).  
In the docker-compose.yml, the environment variables can now be added for the service “frontend”. 

  frontend:
    image: dr079/webshop:frontend
    build:
      context: ./frontend
      dockerfile: Dockerfile
    restart: always
    environment:
      API_HOST: backend
      API_PORT: 8080
      # Subdomain
      LETSENCRYPT_HOST: webshop.designmyhouse.de
      # Email for domain verification
      LETSENCRYPT_EMAIL: admin@designmyhouse.de
      # For the Nginx proxy
      VIRTUAL_HOST: webshop.designmyhouse.de
      # The Port on which the frontend responds. Tells the Nginx proxy who to send the requests to.
      VIRTUAL_PORT: 80
# Not needed when deploying with reverse proxy
#    ports:
#      - "80:80"

After that, we created the docker-compose-cert.yml file, which starts the Nginx Proxy and the Nginx Proxy Companion.

version: "3.3"
services:
  nginxproxy:
    image: jwilder/nginx-proxy
    restart: always
    volumes:
      - ./nginx/data/certs:/etc/nginx/certs
      - ./nginx/conf:/etc/nginx/conf.d
      - ./nginx/dhparam:/etc/nginx/dhparam
      - ./nginx/data/vhosts:/etc/nginx/vhost.d
      - ./nginx/data/html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock
    ports:
      - 80:80
      - 443:443
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy"

  nginxproxy_comp:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: always
    depends_on:
      - nginxproxy
    volumes:
      - ./nginx/data/certs:/etc/nginx/certs:rw
      - ./nginx/conf:/etc/nginx/conf.d
      - ./nginx/dhparam:/etc/nginx/dhparam
      - ./nginx/data/vhosts:/etc/nginx/vhost.d
      - ./nginx/data/html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro

AWS EC2 instance

In AWS, an EC2 instance (consisting of an Ubuntu server and a security group) can now be created and started with the settings Verify and Launch. 

The IP address of the created instance can now be entered as a Type-A entry under “all-inkl.com”.

Install Docker on Ubuntu

It is now possible to connect to the EC2 instance and run the following commands to make the project accessible through the domain/subdomain. (Note: It may take a few hours for the DNS server to apply the settings. Solution: Use the Tor browser)

# Add GPG key of Docker repository from APT sources.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

# Update Ubuntu package database
sudo apt-get update

# Install Docker
sudo apt-get install -y docker-ce

# Install Docker Compose
sudo curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

# Give Docker Compose the execute permission
sudo chmod +x /usr/local/bin/docker-compose

# Log in as root user
sudo -s

# Clone project
git clone https://github.com/user_name/project_name.git

# Build and launch project
docker-compose -f ./project_name/docker-compose-cert.yml up --build -d

# Pull images from DockerHub
sudo docker-compose -f ./cloud-webshop/docker-compose.yml pull

sudo docker-compose -f ./cloud-webshop/docker-compose.yml up -d

Watchtower

With Watchtower, updates to the Docker registry can be automatically detected and downloaded. The container is then rebooted with the new image. Watchtower accesses the Docker repo via REPO_USER & REPO_PASS and checks in the set time interval (--interval 30) whether the Docker images have changed and updates them on the fly.
This requires adding the following code to the docker-compose.yml (replace REPO_USER and REPO_PASS with Docker.io Access Token credentials (Settings > Security)).

  watchtower:
    image: v2tec/watchtower
    environment:
      REPO_USER: REPO_USER
      REPO_PASS: REPO_PASS
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --interval 30

Terraform

The preceding steps involve a considerable manual effort. However, it is possible to automate this, e.g., with Terraform. To achieve this, the following files must be written.

main.tf

resource "aws_instance" "test" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = var.ec2_instance_type

  tags = {
    Name = var.ec2_tags
  }

  user_data = file("docker/install.sh")
//  user_data = file("docker/setupRunner.sh")
  key_name = aws_key_pair.generated_key.key_name
  security_groups = [
    aws_security_group.allow_http.name,
    aws_security_group.allow_https.name,
    aws_security_group.allow_ssh.name]
}

output "instance_ips" {
  value = aws_instance.test.*.public_ip
}

providers.tf

provider "aws" {
  access_key = var.aws-access-key
  secret_key = var.aws-secret-key
  region = var.aws-region
}

security_groups.tf

resource "aws_security_group" "allow_http" {
  name = "allow_http"
  description = "Allow http inbound traffic"
  vpc_id = aws_default_vpc.default.id

  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = [
      "0.0.0.0/0"
    ]
  }

  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = [
      "0.0.0.0/0"]
  }
}

resource "aws_security_group" "allow_https" {
  name = "allow_https"
  description = "Allow https inbound traffic"
  vpc_id = aws_default_vpc.default.id

  ingress {
    from_port = 443
    to_port = 443
    protocol = "tcp"
    cidr_blocks = [
      "0.0.0.0/0"
    ]
  }

  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = [
      "0.0.0.0/0"]
  }
}

resource "aws_security_group" "allow_ssh" {
  name = "allow_ssh"
  description = "Allow ssh inbound traffic"
  vpc_id = aws_default_vpc.default.id

  ingress {
    from_port = 22
    to_port = 22
    protocol = "tcp"
    # To keep this example simple, we allow incoming SSH requests from any IP. In real-world usage, you should only
    # allow SSH requests from trusted servers, such as a bastion host or VPN server.
    cidr_blocks = [
      "0.0.0.0/0"
    ]
  }
}

variables.tf

variable "ec2_instance_type" {
  default = "t2.micro"
}

variable "ec2_tags" {
  default = "Webshop"
//  default = "Gitlab-Runner"
}

variable "ec2_count" {
  default = "1"
}


data "aws_ami" "ubuntu" {
  most_recent = true
  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }
  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
  owners = ["099720109477"] # Canonical
}

ssh_key.tf

variable "key_name" {
  default = "Webshop"
}

resource "tls_private_key" "example" {
  algorithm = "RSA"
  rsa_bits = 4096
}

resource "aws_key_pair" "generated_key" {
  key_name = var.key_name
  public_key = tls_private_key.example.public_key_openssh
}

resource "aws_default_vpc" "default" {
  tags = {
    Name = "Default VPC"
  }
}

variable_secrets.tf

variable "aws-access-key" {
  type = string
  default = "aws-access-key"
}

variable "aws-secret-key" {
  type = string
  default = "aws-secret-key" 
}

variable "aws-region" {
  type = string
  default = "eu-central-1"
}

install.sh for EC2 setup

To do this, we created a ./docker/install.sh file with the following content.


#!/bin/bash

# Install wget to update IP at all-inkl.com
echo "Setup all-inkl.com"
sudo apt-get install wget

# Save public IP to variable
ip="$(dig +short myip.opendns.com @resolver1.opendns.com)"

# Add all-inkl.com variables
kas_login="username"
kas_auth_data="pw"
kas_action="update_dns_settings"
sub_domain="sub"
record_id="id"

sudo sleep 10s

# Update all-inkl.com dns-settings with current IP and account data
sudo wget --no-check-certificate --quiet \
  --method POST \
  --timeout=0 \
  --header '' \
    'https://kasapi.kasserver.com/dokumentation/formular.php?kas_login='"${kas_login}"'&kas_auth_type=plain&kas_auth_data='"${kas_auth_data}"'&kas_action='"${kas_action}"'&var1=record_name&wert1='"${sub_domain}"'&var2=record_type&wert2=A&var3=record_data&wert3='"${ip}"'&var4=record_id&wert4='"${record_id}"'&anz_var=4'


echo "Installing Docker"
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce

echo "Installing Docker-Compose"
sudo curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Follow guide to create personal access token https://docs.github.com/en/github/authenticating-to-github/keeping-your-account-and-data-secure/creating-a-personal-access-token
sudo git clone https://username:token@github.com/ratzel921/cloud-webshop.git
sudo docker login -u username -p token
sudo docker-compose -f ./cloud-webshop/docker-compose-cert.yml up --build -d
sudo docker-compose -f ./cloud-webshop/docker-compose.yml pull
sudo docker-compose -f ./cloud-webshop/docker-compose.yml up

Next, run the following commands. This will automatically create an EC2 instance (which runs the application), a Security Group (for connections to the EC2 instance via HTTPS, HTTP, and SSH) and an SSH key (which allows access to the EC2 instance via SSH). In the end, the IP address of the EC2 instance is displayed in the console. It is entered into all-inkl.com automatically by the install script, or you can add it manually.

# Get terraform provider with init and use apply to start the terraform script.
terraform init
terraform apply --auto-approve

# (Optional) Delete EC2 instances
terraform destroy --auto-approve

Testing

Creating a Testing Environment

Using Terraform and an EC2 instance, it is also possible to create a testing environment. We used the GitHub pipeline for this.

Backend/Dockerfile

# Build stage
FROM maven:3.6.3-jdk-8-slim AS build
COPY src /home/app/src
COPY pom.xml /home/app
RUN mvn -f /home/app/pom.xml clean test
RUN mvn -f /home/app/pom.xml clean package

# Package stage
FROM openjdk:8-jre-slim
COPY --from=build /home/app/target/*.jar /usr/local/backend.jar
COPY --from=build /home/app/target/lib/*.jar /usr/local/lib/
EXPOSE 8080
ENTRYPOINT ["java","-jar","/usr/local/backend.jar"]

frontend/nginx/nginx.conf

server {
  listen 80;
  server_name www.${VIRTUAL_HOST} ${VIRTUAL_HOST};

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
        try_files $uri $uri/ /index.html;
        proxy_cookie_path / "/; SameSite=lax; HTTPOnly; Secure";
    }

    location /api {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass_header Set-Cookie;

        proxy_cookie_domain www.${VIRTUAL_HOST} ${VIRTUAL_HOST};
        #rewrite ^/api/?(.*) /$1 break;
        proxy_pass http://${API_HOST}:${API_PORT};
        proxy_redirect off;
    }

   error_page   500 502 503 504  /50x.html;

   location = /50x.html {
        root   /usr/share/nginx/html;
    }
}

frontend/Dockerfile

# Build stage
# Use node:alpine to build static files
FROM node:15.14-alpine as build-stage

# Create app directory
WORKDIR /usr/src/app

# Install other dependencies via apk
RUN apk update && apk add python g++ make && rm -rf /var/cache/apk/*

# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./

RUN npm install

# Bundle app source
COPY . .

# Build static files
RUN npm run test
RUN npm run build


# Package stage
# Use nginx alpine for minimal image size
FROM nginx:stable-alpine as production-stage

# Copy static files from build-side to build-server
COPY --from=build-stage /usr/src/app/dist /usr/share/nginx/html

RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/templates/

# EXPOSE 80
CMD ["/bin/sh" , "-c" , "envsubst '${API_HOST} ${API_PORT} ${VIRTUAL_HOST}' < /etc/nginx/templates/nginx.conf > /etc/nginx/conf.d/nginx.conf && exec nginx -g 'daemon off;'"]

Modifying the docker-compose.yml

To do this, we created a copy of docker-compose.yml (docker-compose-testStage.yml). We changed the images and the LETSENCRYPT_HOST & VIRTUAL_HOST for the “backend” and “frontend” service in this file.

Modifying the Terraform files

In the testStage.sh, we changed the record_id and replaced "sudo docker-compose -f ./cloud-webshop/docker-compose.yml pull && sudo docker-compose -f ./cloud-webshop/docker-compose.yml up -d" with "sudo docker-compose -f ./cloud-webshop/docker-compose-testStage.yml pull && sudo docker-compose -f ./cloud-webshop/docker-compose-testStage.yml up -d".
In the main.tf, user_data = file("docker/testStage.sh") is set.
After that, the EC2 instance, the security group, and SSH can be started as usual using Terraform.

Automated Selenium frontend testing with GitHub 

To do this, create the .github/workflows/selenium.yml file with the following content.
The script is executed on every push to the repository. It installs all necessary packages, creates a screenshot folder, and runs the pre-programmed Selenium tests located in the frontend folder.
After a push or manual execution, the test results with the artifacts (screenshots) are located on the Actions tab.

name: selenium tests
on: push
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build the stack
        run: docker-compose up -d
      - name: npm install
        run: cd frontend && npm install
      - name: install jest
        run: cd frontend && npm install jest
      - name: install selenium-webdriver
        run: cd frontend && npm install selenium-webdriver
      - name: run tests
        run: mkdir -p /tmp/screenshots/ && cd frontend && npm test
      - name: Archive screenshots
        uses: actions/upload-artifact@v2
        with:
          name: selenium-screenshots
          path: /tmp/screenshots/
      - name: Shutdown
        run: docker-compose down

Note that Chromedriver must be run headless, as GitHub cannot run a browser on a screen.

var driver = await new Builder()
        .forBrowser('chrome')
        .setChromeOptions(new chrome.Options().headless())
        .build();

Infrastructure as Code

Cloud computing is the on-demand provision of IT resources (e.g., servers, storage, databases) via the Internet. Cloud computing resources can be scaled up or down depending on business requirements, and you only pay for the IT resources you use.
On July 27, 2021, Gartner published the latest "Magic Quadrant" for Cloud Infrastructure and Platform Services. Like last year, Amazon Web Services is the top performer in the Magic Quadrant, followed by Microsoft and Google (https://www.gartner.com/doc/reprints?id=1-271OE4VR&ct=210802&st=sb). Since we were interested in trying Docker Compose, we decided to use AWS for deployment.

Deployment on Amazon ECS with Docker Compose 

Since early 2020, AWS and Docker have started working on an open Docker Compose specification, which will make it possible to use the Docker Compose format to deploy containers on Amazon ECS and AWS Fargate. In July 2020, the first beta version for Docker Desktop was released; the first stable version has been available since September 15, 2020.

Customize docker-compose.yml

The AWS ECS CLI supports Compose versions 1, 2, and 3. By default, it looks for docker-compose.yml in the current directory. Optionally, you can specify a different filename or path to a Compose file with the --file option. The Amazon ECS CLI only supports a few parameters, so correcting the yml may be necessary (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cmd-ecs-cli-compose-parameters.html).

# (Optional) Create a new Docker context to point the Docker CLI to the correct endpoint. For this step you need the AWS_ACCESS_KEY and AWS_SECRET_ACCESS_KEY.
docker context create ecs myecscontext

# (Optional) Use context
docker context use myecscontext

# Deploy application to AWS
docker compose up 

# Here you can see which containers were started as well as the URLs
docker compose ps

# (Optional) Shut down container. (Don't forget to change the context back to default).
docker compose down

# Convert Docker Compose file to CloudFormation to track which resources are created or updated
docker compose convert

BuildX

Building images for other processors

For example, if you have an M1 with an arm64 processor, a locally created image would not be accepted by AWS (error message “EssentialContainerExited: Essential container in task exited”). The reason is that ECS instances only support amd64 images.

Since Docker version >= 19.03, Docker offers buildX. The plugin is officially no longer considered experimental as of August 5, 2020. With the buildX functionality, it is relatively easy to create Docker images that work on multiple CPU architectures.

# (optional) Create a new Builder instance
docker buildx create --name mybuilder

# (optional) Use created builder
docker buildx use mybuilder    

# Show all available builder instances (here you can also see which CPU architectures are supported by the builder)
docker buildx ls

# Build and push image for example for amd64, arm64 and arm/v7
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 --tag username/repository_name:tag_name --push .

# Delete images
docker buildx prune --all

Migrating from Heroku to Hetzner: Achieving Scalability with Docker, Kubernetes and Rancher

Written by Eva Ngo, Niklas Brocker, Benedikt Reuter and Mario Koch.

In the System Engineering and Management lecture, we had the opportunity to apply the topics presented, like distributed systems, CI/CD or load testing, to a real project using a real application. In the following article we will share our learnings and experiences around the implementation and usage of Docker, Kubernetes, Rancher, CI/CD, monitoring and load testing.

Continue reading

Using Gitlab to set up a CI/CD workflow for an Android App from scratch

  • Tim Landenberger (tl061)
  • Johannes Mauthe (jm130)
  • Maximilian Narr (mn066)

This blog post aims to provide an overview of how to set up a decent CI/CD workflow for an Android app with the capabilities of Gitlab. The blog post has been written for Gitlab Ultimate; nevertheless, most features are also available in the free edition.

The goal is mainly to provide an overview of Gitlab's CI/CD capabilities. It is not the objective of this blog post to test and/or develop a complex Android app, or to handle special edge cases in Android app development.

The blog post covers the following topics:

  • Defining a decent pipeline
  • Automatically running unit tests
  • Automatically running integration tests
  • Automatically running static code analysis checks
  • Automatically running debug/release builds
  • Automatically distribute the app for testers
  • Adding Gitlab’s drop-in features
    • SAST
    • Dependency management
    • License management
Continue reading

CI/CD with GitLab CI for a web application – Part 3

Hosting your own GitLab server

Some users might have concerns regarding security when using GitLab for a variety of purposes, including commercial and business applications. That is because GitLab is commonly used as a cloud-based service – on someone else's computer, so to speak. The conclusion is to set it up on your own server, whether that is a NAS, a real dedicated server or even a Raspberry Pi. So, as a side quest, we decided to set things up on a Raspberry Pi Model 3 for comparison. The following part will cover the installation procedure (mostly according to the official GitLab page) as well as hints about some potential pitfalls.
Continue reading

CI/CD with GitLab CI for a web application – Part 1

Introduction

When it comes to software development, chances are high that you're not doing it on your own. The main reason for this is often that implementing components like UI, frontend, backend, servers and more is just too much for a single person to handle, leading to a slow development process. So, you have to team up with others. For this reason, collaboration tools (e.g. SVN, Git) have been established so that you don't accidentally overwrite someone else's code and vice versa.

The big challenge with such collaborative projects is to ensure a high quality of the software even with a high level of developer activity. One instrument for this is continuous integration, whereby the individual application components are continuously brought together and successful interaction is ensured.

Especially in large projects high software quality and a structured development process are of enormous importance. That is why we decided to carry out the complete development and quality assurance process from the creation of a project, the definition of tests and continuous integration of the components to the automatic deployment of the application using a small sample project.

The following image shows the architecture of the small node application:

Shaky architecture

Continue reading