Recall Trainer – A Serverless Web App with AWS

Introduction

In the course "Software Development for Cloud Computing" I received an introduction to the world of cloud computing, including the relevant concepts and technologies, over the past semester. In my final project, which I would like to present below, I tried to put some of these concepts into practice.

Idea / Project

The idea was to design and develop a web application that supports users in their personal knowledge management. The application helps learners engage with their fields of knowledge every day by sending them a daily e-mail with a link to their personalized knowledge quiz.

Clicking the link takes the user to a random selection of up to 25 questions drawn from the learning content they have entered so far. The application then displays one question and a timer. Within 10 seconds the user has to recall the answer from memory. Once they have an answer in mind, they click the "Reveal" button, which uncovers the correct answer. The user then indicates whether the answer they had in mind matches the actual answer. If the user does not manage to answer within 10 seconds, the next question is shown automatically.

The goal is to train one's memory. Each question keeps reappearing in the random selection until the user has recalled every question correctly once.

Concept Sketch

Goals for the Implementation

My goal was to implement this kind of problem as a serverless architecture. At the same time, I wanted to gain first experience with the AWS infrastructure. To ensure central and automated management of all resources, to keep the solution scalable, and to minimize cloud resources and the risk of hidden costs, the project was to be implemented as Infrastructure as Code.

Implementation

Architecture

AWS SAM (AWS Serverless Application Model) was used as the Infrastructure as Code tool. It builds on AWS CloudFormation's Infrastructure as Code syntax and additionally provides a command line interface that enables deployment and local testing of Lambda functions.

The frontend application was built with the Angular framework, with which I gained my first experience during this project. The frontend is deployed via an AWS Amplify build pipeline, which is triggered by commits to the branch specified in the SAM template.

A particular challenge was that the URL the Angular application uses to access the backend has to match the current URL of the API Gateway. To achieve this, SAM passes the URL to the Amplify resource as an environment variable, and the build pipeline then uses this variable to substitute the URL in the index.js of the Angular build.
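The post does not show how this substitution is carried out. The following is only a minimal Node.js/TypeScript sketch of the idea: it assumes the Angular bundle contains a placeholder string "__API_URL__" and that the build step exposes the API Gateway URL as an environment variable named API_URL (both names are assumptions, not taken from the project).

```typescript
// replace-api-url.ts – hypothetical build helper, not part of the original project.
// Assumes the bundle contains the placeholder "__API_URL__" and that the
// API Gateway URL is available as the environment variable API_URL.
import { readFileSync, writeFileSync } from "fs";

const bundlePath = process.argv[2]; // e.g. dist/recall-trainer/main.js
const apiUrl = process.env.API_URL;

if (!bundlePath || !apiUrl) {
  throw new Error("usage: replace-api-url <bundle> (API_URL must be set)");
}

const source = readFileSync(bundlePath, "utf8");
writeFileSync(bundlePath, source.split("__API_URL__").join(apiUrl));
console.log(`replaced __API_URL__ with ${apiUrl} in ${bundlePath}`);
```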

The application's backend is based on several AWS services.

The frontend accesses the backend via API Gateway with two routes.

  • POST /subscribe: forwards the request to the SubscribeEndpoint Lambda
  • GET /daily-prompt: forwards the request to the DailyPromptsEndpoint Lambda

The app's functionality is implemented with three Lambda functions.

  • DailyEMailGenerator: invoked daily by a CloudWatch schedule; for each registered user it generates a selection of daily questions from the set of all stored questions.
  • DailyPromptsEndpoint: returns the set of daily questions stored in the database for a user.
  • SubscribeEndpoint: adds the user to the Subscribers table and stores the submitted question/answer pair.

The database is implemented with DynamoDB, using the three tables Subscribers, SubscriberData and DailyPrompts.
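The post does not state the Lambda runtime or show any code. As an illustration, here is a minimal TypeScript/Node.js sketch of what the DailyPromptsEndpoint could look like, assuming the personalized link carries the subscriber's e-mail as a query parameter and the DailyPrompts table uses that e-mail as its partition key (all of these names are assumptions).

```typescript
// dailyPromptsEndpoint.ts – hypothetical sketch, not the project's actual code.
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";
import { DynamoDB } from "aws-sdk";

const db = new DynamoDB.DocumentClient();

export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  const email = event.queryStringParameters?.email;
  if (!email) {
    return { statusCode: 400, body: JSON.stringify({ message: "email is required" }) };
  }

  // Assumed table layout: partition key "email", attribute "prompts" holding today's questions.
  const result = await db
    .get({ TableName: process.env.DAILY_PROMPTS_TABLE ?? "DailyPrompts", Key: { email } })
    .promise();

  return {
    statusCode: 200,
    headers: { "Access-Control-Allow-Origin": "*" },
    body: JSON.stringify(result.Item?.prompts ?? []),
  };
};
```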

All resources and the required roles and policies are defined in a single SAM template file and can be conveniently deployed via the SAM CLI. Care was taken to request sensitive data, such as the GitHub access token that Amplify needs, as environment variables instead of hard-coding it.

Problems

The main problem for me turned out to be the sheer size of AWS: there is a vast amount of documentation to read, which makes it difficult for a beginner to find their way around. The amount of information on cloud topics (patterns, best practices, etc.) and cloud services feels endless, so it is hard to find the knowledge that is actually relevant to the problem at hand, and the quality of the available information varies widely. Since the user interface of AWS differs from service to service, navigating within the services is not always easy either.

Originally, Terraform was planned as a platform-independent Infrastructure as Code tool. However, its documentation for some specific AWS services turned out to be incomplete, and after many unsuccessful attempts this led to AWS SAM being adopted as the Infrastructure as Code tool instead.

Furthermore, debugging is harder than on your own machine. This is due to the complex interplay of the many AWS services and often makes troubleshooting very difficult. AWS offers CloudWatch as a central logging platform, but errors can still occur without being recorded. For example, I had the problem that one Lambda function suddenly stopped writing to the database without logging any error, even though other functions ran the same code against the same database without issues. The error resolved itself after a few hours, but it remained unclear what actually fixed it.

Lessons Learned

In this project I learned a great deal about developing for AWS and the cloud, and through this course I was able to build up both theoretical and practical knowledge in the area of cloud computing. The project involved a lot of trial and error, much of which did not end up in the final solution.

Beyond that, I gained practical experience in implementing web applications. Another lesson learned for me is that I probably would not tackle a project in such a complex field, in which I had no prior knowledge, on my own again, because I am sure a second person could have saved me many hours of research and trial and error.

Ynstagram – Cloud Computing with AWS & Serverless

For the semester project of the lecture "Software Development for Cloud Computing", we set ourselves the goal of building a simple Instagram clone in order to acquire the basics of cloud computing.

Basic Concept / Goals of the Project

Since we had already gained some experience with React in other student projects, we wanted to focus less on the functionality and the frontend of the application and put a greater emphasis on cloud-specific features and approaches.

Specifically, we planned to implement an Instagram clone with the following basic functionality:

  • uploading images / posts
  • adding a title & description to posts
  • liking posts
  • commenting on posts
  • account management

Design Decisions

Frontend

Based on existing knowledge and good experience, we decided to implement the frontend with React. Building it as a web app also has the advantage that "Ynstagram" is accessible across platforms.

 

Backend – from Firebase to AWS

We initially started implementing our project with Firebase. On the one hand, however, this lost its appeal in terms of the learning effect, since we were building our software project with Firebase in parallel. At the same time, the insights from the lectures made us aware of the range of features AWS offers.

What we found interesting here, for example, were the much broader possibilities for using Lambda functions. While in Firebase they can only be invoked through database entries / triggers, here we could also use API calls. The functionality that can be implemented is also much more extensive: among other things, we had the option of automatically resizing images on upload, and in the future it would be fairly easy to add AI-based analysis of the content. While you quickly hit certain limits with Firebase in all of this, AWS offers a much broader range of possibilities.

Nevertheless, this switch was by no means easy, because Firebase offers much better clarity and documentation.

Serverless

Since we did not find the AWS web console and the online creation of Lambda functions appealing, we looked for a solution that would let us keep all configuration on GitHub and create it in the code editor whenever possible.

We eventually came across Serverless. Here, all buckets, tables and API calls are managed in a serverless.yaml file. New elements can be created much more clearly and quickly, and configurations can simply be adopted from elements that already exist.

Postman

To keep an overview of the API routes we created and to be able to test them easily, we chose Postman. Via a file shared on GitHub, everyone involved in the project can see the current API routes and create new requests.

Implementation / Architecture

AWS Services

DynamoDB

Since we had already used Firebase's NoSQL database "Firestore Database", we decided to keep this kind of database structure. Compared to SQL databases, the advantage lies in simpler queries thanks to a flatter data structure.

We use DynamoDB tables to store the information belonging to the images, such as title, description, author, etc. The images are linked to the records in the tables through a unique ID.

There are two tables: one in which the raw input is stored first, and another into which the processed records are transferred.

Both tables share the same structure. The central fields are a unique ID, the creation date, the account name of the creator, and the description and title of the post. Comments and likes are managed as arrays.
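To make this description more concrete, a hypothetical TypeScript interface for an item in these tables could look as follows; the attribute names are assumptions derived from the description above, not the project's actual schema.

```typescript
// Hypothetical shape of an entry in the two post tables (field names are assumptions).
interface PostItem {
  id: string;            // unique ID, also used as the image file name in S3
  createdAt: string;     // creation date
  author: string;        // account name of the creator
  title: string;
  description: string;
  comments: string[];    // comments are managed as arrays
  likes: string[];       // e.g. account names of users who liked the post
}
```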

S3

The images attached to the posts are stored in S3 buckets. There is one bucket with the original files and another one with a reduced resolution. The file name of an image always corresponds to the unique ID of its post.

Cognito

With AWS Cognito we were able to set up our account management in just a few steps. Cognito supports current identity and access management standards such as OAuth 2.0 and SAML 2.0 and also offers the option of implementing multi-factor authentication.

Amplify

We use AWS Amplify to host the frontend of our application and to implement a CI/CD pipeline with a development and a master environment. A more detailed explanation can be found in the section "CI/CD Pipeline".

Lambda 

API Gateway

A large part of our Lambda functions is used via API calls. As mentioned at the beginning, we keep an overview of them in a Postman file.

API Routes

POST /image-upload

Uploads an image to the S3 bucket. Called with both the image data and the associated information such as description and title (JSON format).

POST /image-info

Creates an entry in the DynamoDB table with all the information about a post, transmitted as JSON in the body.

POST /create-file

Creates a file in the S3 bucket. The file name corresponds to the URL parameter.

GET /get-all-images

Returns all posts marked as "valid" as an array in JSON format.

GET /get-file

Returns a file by its file name.

GET /image-info

Returns the information about a single post in JSON format.

PUT /update-image-info

Used to add comments to posts. Updates entries in the DynamoDB table.

PUT /update-likes

Used to add new likes.
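For illustration, this is how the React frontend might call two of these routes; the base URL and the field names are assumptions, not the project's actual values.

```typescript
// Hypothetical client-side calls to the API routes listed above.
const API_BASE = "https://<api-id>.execute-api.<region>.amazonaws.com"; // assumed base URL

// Create the post's metadata entry.
async function createImageInfo(info: { id: string; title: string; description: string; author: string }) {
  await fetch(`${API_BASE}/image-info`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(info),
  });
}

// Fetch all posts marked as "valid".
async function getAllImages(): Promise<unknown[]> {
  const res = await fetch(`${API_BASE}/get-all-images`);
  return res.json();
}
```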

DynamoDB / S3 Triggers

In addition to direct API calls, we also use triggers on DynamoDB tables and S3 buckets.

Example Flow of an Image Upload

This is best illustrated by the flow of creating a post.

A POST request first invokes the Lambda function "imageUpload", which stores the image in the S3 bucket. A trigger then automatically invokes the Lambda function "imageResize", which scales the images down to a resolution of 400 x 400 pixels. These images are then stored in the bucket for resized images. This way the images in the feed load faster, especially on mobile devices.

In parallel, an entry is created in the DynamoDB table. Here, too, a trigger is fired, which in turn invokes the function "changeText". In a nod to the name "Ynstagram", it replaces every "i" in the description and title with a "y". This is merely a gimmick that grew out of our interest in trying out the various triggers and use cases for Lambda functions.
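The original code is not shown in the post. A minimal TypeScript sketch of such a DynamoDB stream trigger, assuming the processed records are written to a second table whose name is passed in as an environment variable, could look like this:

```typescript
// changeText.ts – hypothetical sketch of the "i" -> "y" trigger (not the original code).
import { DynamoDBStreamEvent } from "aws-lambda";
import { DynamoDB } from "aws-sdk";

const db = new DynamoDB.DocumentClient();
const TARGET_TABLE = process.env.PROCESSED_TABLE ?? "ProcessedPosts"; // assumed table name

export const handler = async (event: DynamoDBStreamEvent): Promise<void> => {
  for (const record of event.Records) {
    if (record.eventName !== "INSERT" || !record.dynamodb?.NewImage) continue;

    // Convert the DynamoDB stream image into a plain JS object.
    const item = DynamoDB.Converter.unmarshall(record.dynamodb.NewImage as DynamoDB.AttributeMap);

    // Replace every "i" with "y" in title and description.
    item.title = String(item.title ?? "").replace(/i/g, "y");
    item.description = String(item.description ?? "").replace(/i/g, "y");

    await db.put({ TableName: TARGET_TABLE, Item: item }).promise();
  }
};
```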

CI/CD Pipeline

It was also interesting for us to gain real experience with a CI/CD pipeline for the first time. We planned a strict separation into a development environment and a final environment that is essentially identical to it, so that the current state can be tested under realistic conditions before it is finally released.

We implemented this CI/CD pipeline with AWS Amplify and GitHub Actions. Changes are always pushed to a development branch first, which is then automatically deployed to a development environment on Amplify. This allows all tests to run before the changes are merged into the master branch via a pull request. Once that has happened, they are likewise automatically deployed to the production environment.

In addition to the tests run by GitHub Actions, we also check whether the web application scales correctly on different devices, i.e. whether the UI is displayed in a usable way for the user. Of course, the current state is only adopted if all tests pass.

Serverless

To avoid the clutter of the AWS web console and to be able to create elements more easily and reproducibly, managed via Git, we chose Serverless. All AWS components are defined in a "serverless.yaml" file.

Variables

Among other things, it is straightforward to define environment variables:

We in turn defined these via our own custom variables, which are used in several places:

The advantage is that names can be changed flexibly and are adopted everywhere immediately, i.e. both in AWS and in the code via the environment variables.
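Inside a Lambda handler, such values then simply appear on process.env; a tiny sketch (the variable name IMAGE_TABLE is an assumption, not taken from the project):

```typescript
// Values defined under "environment" in serverless.yaml are exposed via process.env.
// IMAGE_TABLE is a hypothetical variable name.
const tableName = process.env.IMAGE_TABLE;
if (!tableName) {
  throw new Error("IMAGE_TABLE is not configured");
}
console.log(`using table ${tableName}`);
```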

Functions

Lambda functions can be created just as easily. Each is referenced via a "handler" and is then invoked by an "event", which can be either an API call or, for example, a DynamoDB / S3 trigger.

Resources

All buckets and tables are also defined in the .yaml file. New elements in particular can be created very easily, since previously defined configurations can be reused directly.

Testing

For testing we focused mainly on the API calls and the basic functions. Our tests generally run through the GitHub Actions pipeline described in the "CI/CD Pipeline" section; they are also part of the Amplify deployment process. In addition, we set up CircleCI to deploy the Serverless components automatically. For testing we generally use a local mock of our DynamoDB, because we quickly ran into the problem that our free AWS quotas were used up.

Outlook / Conclusion

The biggest weaknesses of the project currently lie in unsecured API calls; these could be protected by using API keys. In the long run, access to DynamoDB and S3 should also be managed via IAM roles.

For account management it would make sense to set up multi-factor authentication. The feature set could of course be expanded considerably, with the use of AI components being particularly interesting for us.

Overall, starting with no prior knowledge of cloud computing at all, implementing this project as part of the lecture gave us an overview of and a basic understanding of the world of cloud computing, which provides a solid foundation for going considerably deeper into these approaches in the future.

Deploying Random Chat Application on AWS EC2 with Kubernetes

1. Introduction

For the examination of the lecture "Software Development for Cloud Computing", I wanted to build a simple random chat application. The idea of this application is based on the well-known chat application Omegle. On Omegle, people can meet random strangers from all over the world and have a one-on-one chat, not only via text but also via video chat. Unlike Omegle, my application only has a text chat function.

2. Technologies for the development of the application

a. Frontend

React

For frontend development there is a great number of open-source libraries. React is currently one of the most popular and widely used. There are many reasons for a developer to choose React: it is one of the most popular frontend technologies on the market, and compared to other libraries it is relatively easy to learn. Since it doesn't take much time to pick up, developers can quickly start practicing and build their very first project. React also increases productivity through reusable components, and there are many development tools available that speed up a project. The most important reason is its very strong community support: there are thousands of free React tutorial videos and blog posts on the internet, which is very helpful for developers. That is why I decided to learn this library during previous semesters, and this project gave me the chance to put it into real practice.

b. Backend

Node.js

Node.js has become one of the most popular JavaScript tools. It is a JavaScript runtime environment that allows companies to improve the efficiency of their web development process, because frontend and backend teams can work together more easily. Since Node.js runs JavaScript on Google's V8 engine, it is very fast. Node.js uses an event loop, which handles all asynchronous input/output operations. By allowing developers to write JavaScript for both the frontend and the backend, Node.js makes it easy to send data between the server and the client and to keep that data synchronized.

Socket.io

When I decided to develop a chat application, I immediately thought of Socket.io. I got to know this JavaScript library in the course Web Development 2, and it is a natural fit for real-time applications. Socket.io helps parties in different locations connect with each other, transmitting data instantly through an intermediary server. It can be used in many applications such as chat, online games, or live score updates for an ongoing match. It is widely used by the developer community because of its speed and convenience, and it provides many methods as well as useful features such as security, automatic reconnection, disconnection detection, multiplexing, and room creation.

3. Application explanation

a. Client

As mentioned, I use React for the client side. This gave me the chance to get to know the concept of a single-page application, whose content is loaded only once and then updated dynamically. Interacting with the page or navigating to subsequent views does not require fetching a new page from a server, which means the page is not reloaded. To apply this concept, React offers a package named "react-router-dom". My application is very simple, so it only has two routes. The root path, where the user enters their name, loads the Join component. The other path loads the Chat component, in which the user sends messages after being assigned a room.
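A minimal sketch of this routing, assuming the react-router-dom v5 API and the component names from the description above:

```tsx
// App.tsx – sketch of the two routes (react-router-dom v5 API assumed).
import React from "react";
import { BrowserRouter as Router, Route } from "react-router-dom";
import Join from "./components/Join"; // name entry form
import Chat from "./components/Chat"; // chat room view

const App = () => (
  <Router>
    <Route path="/" exact component={Join} />
    <Route path="/chat" component={Chat} />
  </Router>
);

export default App;
```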

The Socket.io client library is imported because it is not provided by JavaScript itself. It exposes the 'io' namespace.

The endpoint URL is passed to 'io' to connect to the Socket.io server.

Users can then send and receive messages from the server once the room has been created.
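A condensed TypeScript sketch of this client-side setup; the event names ("join", "message", "sendMessage") and the endpoint URL are assumptions, since the original code is not shown here:

```typescript
// Chat client sketch – event names and endpoint are assumptions.
import { io } from "socket.io-client";

const ENDPOINT = "http://localhost:5000"; // URL of the socket.io server
const socket = io(ENDPOINT);

// Join with the name entered on the Join page.
socket.emit("join", { name: "Alice" });

// Receive messages for the current room.
socket.on("message", (msg: { user: string; text: string }) => {
  console.log(`${msg.user}: ${msg.text}`);
});

// Send a message typed by the user.
function sendMessage(text: string) {
  socket.emit("sendMessage", text);
}
```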

b. Server

To set up the server, a few packages need to be imported:

The server is set up and listens on port 5000.

The server also keeps track of users temporarily, so it knows which user names already exist, and it can remove users after they end their chat. To handle all of this, I wrote a users.js file containing functions such as addUser, removeUser and getUser.

I wanted to create a chat application where users don't need to know each other beforehand and a chat room is created for them automatically. With this application they can meet a new friend, and the server assigns them a chat room.

I created a variable called queue, which is an array. It temporarily stores a user who does not have a partner yet. Every user who has entered their name is connected to the socket, and the socket knows that they want to join a room. In the socket's callback, the user's name and socket ID are saved by the addUser function from users.js. The socket then checks whether another user is already waiting for a partner in the queue. If someone is waiting, they are popped from the queue, their socket and the new user's socket are joined, and the room ID is the combination of the two socket IDs. If no one is waiting, the current socket is pushed onto the queue and waits for another user to join. A sketch of this matching logic follows below.
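The following TypeScript sketch illustrates the matching logic with the socket.io v4 API; event names, payload shapes and the addUser signature are assumptions, so this is not the project's actual code:

```typescript
// server.ts – sketch of the pairing logic (socket.io v4 API assumed).
import express from "express";
import { createServer } from "http";
import { Server, Socket } from "socket.io";
import { addUser } from "./users"; // helper module described above; signature assumed

const app = express();
const httpServer = createServer(app);
const io = new Server(httpServer);

const queue: Socket[] = []; // users waiting for a partner

io.on("connection", (socket: Socket) => {
  socket.on("join", ({ name }: { name: string }) => {
    addUser({ id: socket.id, name });

    const partner = queue.shift();         // is someone already waiting?
    if (partner) {
      const room = partner.id + socket.id; // room ID = combination of the two socket IDs
      partner.join(room);
      socket.join(room);
      io.to(room).emit("message", { user: "admin", text: "Welcome to your room!" });
    } else {
      queue.push(socket);                  // wait until the next user joins
    }
  });
});

httpServer.listen(5000, () => console.log("listening on port 5000"));
```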

c. Problem

CORS: 

CORS is an abbreviation for Cross-Origin Resource Sharing, a mechanism built around the rule that, by default, content should come from the same origin. Browsers use it as a security measure because JavaScript could otherwise load content from other servers without the user's knowledge. The problem is solved when both sides are aware of the data exchange and explicitly allow it.

I installed the CORS package on my server. The origin option configures the Access-Control-Allow-Origin header. Now client and server can communicate without errors.
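A minimal sketch of that setup with the cors middleware for Express (the origin value is an assumption):

```typescript
// Minimal CORS setup sketch – the allowed origin is an assumption.
import express from "express";
import cors from "cors";

const app = express();

// Sets the Access-Control-Allow-Origin header for responses to the React client.
app.use(cors({ origin: "http://localhost:3000" }));
```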

4. Testing

Testing is very important during the development of an application: it helps developers discover errors and bugs before releasing it, which improves the application's quality. I decided to test only the server side because it is more complex than the client side. Two tests were created.

  • A single-user test: checks that a user can connect to the server and receive a welcome message.
  • A two-user test: checks that a room is created when there are two users and that both of them receive the same welcome message from the same room (a sketch of this test follows below).
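A sketch of how the two-user test could look with jest and socket.io-client, consistent with the server sketch above; event names and message contents are assumptions:

```typescript
// two-users.test.ts – sketch only; assumes the server from the sketch above runs on port 5000.
import { io, Socket } from "socket.io-client";

const URL = "http://localhost:5000";

test("two users are paired into a room and both get the welcome message", (done) => {
  const first: Socket = io(URL);
  const second: Socket = io(URL);
  let received = 0;

  const onMessage = (msg: { user: string; text: string }) => {
    expect(msg.text).toContain("Welcome");
    if (++received === 2) {
      // Both clients got the room welcome – clean up and finish the test.
      first.close();
      second.close();
      done();
    }
  };

  first.on("message", onMessage);
  second.on("message", onMessage);

  first.emit("join", { name: "Alice" });
  second.emit("join", { name: "Bob" });
});
```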

5. Deployment

a. Docker Swarm and Kubernetes

When deploying this project, I create one container each for the client and the server side. Docker is the most popular container platform, and I wanted to learn how to write a Dockerfile and a docker-compose file to create the containers.

For the cloud environment I chose Amazon Web Services, currently one of the most comprehensive platforms for cloud computing services. I use an EC2 virtual server to bring my project online, and since I wanted to work with Kubernetes to manage the containers, I chose EKS, the managed Kubernetes service from AWS.

If you work with a lot of containers, you have to be able to manage them efficiently. An orchestration tool enables exactly that: you integrate the containers you created with Docker and then use the orchestrator to manage, scale and move them.

Although Kubernetes and Docker work well together, there is competition when it comes to Docker Swarm. I compared some features of Docker Swarm and Kubernetes.

  • Scaling: if the load on the application gets too high, Kubernetes can add more nodes to the cluster. Of course, Kubernetes has to be configured correctly so that it can create a new virtual machine, which is then added to the cluster as a node.
  • Installation: with Docker Swarm it is easy to create a new node and integrate it into the swarm. With Kubernetes, on the other hand, you have to decide on the size of the nodes and how many master and worker nodes there should be.
  • Load balancing: Docker Swarm offers automatic load balancing for applications, whereas Kubernetes gives you the flexibility to configure load balancing manually.
  • Shared storage volumes: since Docker Swarm manages Docker containers directly, containers can easily share data and other resources. Kubernetes puts containers into pods, so a container cannot simply communicate with another one; you need additional Kubernetes components, e.g. a Service, to create the connection.
  • Monitoring: while Swarm requires additional resources for monitoring and keeping logs, these tasks are already built into Kubernetes.

b. Amazon EKS and Kops

When deploying Kubernetes on AWS, you can configure and manage the deployment yourself for full flexibility and control. There are a few options for this: Amazon Elastic Kubernetes Service (EKS) and Kops.

EKS is a managed service offered by AWS. It uses automatically provisioned instances and provides a managed control plane for the deployment.

Kops is an open-source tool that automates the deployment and management of clusters on AWS. It is officially supported by AWS.

c. Dockerfile

To work with Kubernetes, I need to create all the necessary containers. Containers are described by Dockerfiles, which contain all the information about the container, e.g. the name of the base image, the directory holding the application, the exposed port, etc. Docker follows this information to build the containers step by step. In addition, I use Docker Compose to start the container build process.

d. Kubernetes architecture on AWS cloud

I chose Kubernetes, Amazon EC2, EKS and ECR for the deployment of my project. The architecture of Kubernetes on the AWS cloud is shown below.

Source: https://blogs.tensult.com/2019/08/14/guide-to-setup-kubernetes-in-aws-eks-using-terraform-and-deploy-sample-applications/

The Kubernetes API server is the control plane; it manages the Kubernetes cluster. In the cluster there are master nodes that create and manage worker nodes. When you run deployment commands, the Kubernetes server sends messages to EKS, which then passes the tasks on to the worker nodes.

A worker node contains pods, in which the Docker containers run. I chose the Deployment controller to keep these pods running and to observe them. On the worker nodes I create one pod for the client, two pods for the server, and one pod for Redis. A load balancer is used to reach the application from outside.

I decided to have two server pods because I wanted to scale my application: when more people connect, requests are handled faster with two pods instead of one. The picture below shows horizontal scaling, which means there are several copies of the application that can work in parallel.

For example, for the client pod I write a client-deployment.yaml file:

  • A Deployment named client is created, indicated by the .metadata.name field.
  • The .spec.selector field defines how the Deployment finds the pods it manages.
  • The Deployment creates one replica pod, indicated by the .spec.replicas field.
  • The .spec.template.spec field indicates that the pod runs one container. The container is created from a Docker image stored in ECR (Elastic Container Registry).
  • The container is named client via the .spec.template.spec.containers[].name field.

To enable network access to this set of pods, I have to create a Service, which is written in client-service.yaml.

This specification creates a new Service object called "client" that targets TCP port 3000 on every pod labeled app = random-chat.

For the server and Redis pods, I also create a Deployment and a Service each.

e. Problem

Service of the server pod:

Pods can usually talk to each other via a normal Service type, which in my case means that the client pod should be able to send requests to the server pod without an explicit 'type' attribute in server-service.yaml. The client's endpoint would then be 'server:5000', the combination of the service name and the targetPort. But after many attempts this still did not work, so I decided to make the server pod's Service of type LoadBalancer, as shown in the picture above. The client's endpoint is now the address of this load balancer.

6. Conclusion

During the course 'Software Development for Cloud Computing' and this project, I had the chance to learn the concept of Docker containers and how to manage them with Kubernetes. I gained not only theoretical knowledge but also practical experience by developing and deploying the application. Moreover, working with cloud computing was new and interesting for me. Cloud computing is widely used in application development today; what I applied in my project is only a small part of it, and I want to learn more about it in the future.

“Studidash” | A serverless web application

by Oliver Klein (ok061), Daniel Koch (dk119), Luis Bühler (lb159), Micha Huhn (mh334)

Abstract

You are probably familiar with the HdM SB-Funktionen. After nearly four semesters we were tired of the boring design and decided to give it a more modern look with a bit more functionality than it currently has. So we created "Studidash" in the course "Software Development for Cloud Computing". "Studidash" shows your grades and automatically calculates the sum of your ECTS as well as your grade average.

Since this is a project for SD4CC, it runs as a serverless web application on Amazon Web Services, or AWS for short. Our tech stack for this project consists of Angular, Python, Terraform and some AWS services like Lambda and S3.

While developing this web app we encountered some difficulties, but we also learned a lot, and we hope this blog post can give you a quick overview of what we did, what we learned, what problems we had and how we solved them, so that your next project will be easier.

What did we do? 

As mentioned in the abstract, we developed a serverless web app called "Studidash" because of the aforementioned boring design of the SB-Funktionen. First of all, we decided that we wanted to learn a new tech stack and concluded that Angular would be a modern choice for our frontend. For our backend we decided to use Python, since it's lightweight and easy to learn. From another course we knew Terraform, so this was something we were already somewhat familiar with, and we decided to use it for our deployment to AWS. We also used AWS to host the web app, since we got access to AWS Student Accounts.

After we settled on a project and our tech stack, we had to think about a way to make it "cloud native", started researching, and came across serverless. We dug a bit deeper and realized that serverless might be the way to go. Serverless means that your application isn't running on a server you operate yourself ("on-prem") but in the cloud instead, decoupled from any particular server. Servers are still there, but you don't have to think about the administrative work around them; that is handled by your cloud service provider. The serverless approach brings scalability, high availability and efficient resource usage and management with it. As mentioned, you can focus more on development itself rather than thinking about servers, and a connection to a CI/CD pipeline makes it easy and fast to release a new version of your application. But serverless also has its downsides: functions should be as small as possible and serve a single purpose, and some web apps can have higher latency due to cold starts (when a function isn't used for some time, it gets destroyed and needs to be instantiated again, which takes time). You are also going to have a harder time debugging your application, since it isn't as easy as you might be used to. In the end we went with a static frontend in an S3 bucket, a backend running as AWS Lambda functions, and AWS API Gateway to connect them.

Architecture

Our architecture is fully hosted on AWS and our code repositories are hosted on the HdM GitLab server. Clients can access our frontend via their favourite web browser. The frontend application is hosted in an AWS S3 bucket. The good thing here is that we don't have to manage or deploy any web server ourselves, which reduces the management overhead and, in the end, the costs. After the frontend is served to the client, the user can enter their credentials to access their grades from the third-party service (HdM SB-Funktionen). An HTTP request is then sent to a Lambda function, with an API Gateway receiving the request. This Lambda function contains a Python script which parses the user credentials provided in the received HTTP request, uses them to log in to the SB-Funktionen platform, and scrapes the user's grades and lecture data. The scraped data is then preprocessed and returned to the frontend as a JSON object.

On the developer side we used Git/GitLab for version control of our code. In GitLab we created a CI/CD pipeline to build the frontend, the Python grade scraper and a Terraform image to deploy all our necessary AWS resources. Thanks to the CI/CD pipeline, a developer can just push the newest code to the repository and it is deployed to AWS automatically.

Architecture overview

Frontend

For our frontend we decided to build an Angular single-page application. We made this decision because it's an up-to-date framework for building fast and easy web applications.

When the user loads the website, the header only displays a login component for the HdM SB-Funktionen credentials. This component triggers a POST request containing the login data to the Lambda function. The Lambda function responds with several grade objects, which are defined identically in frontend and backend; the grade object exactly maps the table structure of the HdM page. The response then triggers the rendering of the table and a login message is shown; there is also error handling for failed logins. The table can be sorted by the different values, and the grade average and ECTS sum are calculated and displayed in the page header.
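As an illustration, a login request from an Angular service could look roughly like this; the endpoint URL and the fields of the grade object are assumptions, not the project's actual definitions:

```typescript
// grades.service.ts – hypothetical sketch of the login request.
import { Injectable } from "@angular/core";
import { HttpClient } from "@angular/common/http";
import { Observable } from "rxjs";

// Assumed shape of the grade object shared by front- and backend.
export interface Grade {
  lecture: string;
  grade: number;
  ects: number;
  semester: string;
}

@Injectable({ providedIn: "root" })
export class GradesService {
  private apiUrl = "https://<api-id>.execute-api.<region>.amazonaws.com/grades"; // assumed URL

  constructor(private http: HttpClient) {}

  login(username: string, password: string): Observable<Grade[]> {
    // POST the credentials; the Lambda scrapes the SB-Funktionen and returns the grade objects.
    return this.http.post<Grade[]>(this.apiUrl, { username, password });
  }
}
```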

Screenshot of our frontend after successful login

Backend

Our backend consists of a Python script hosted in a Lambda function, with an API Gateway receiving the HTTP requests. The frontend sends an HTTP request with the user credentials in the request body to the API Gateway. The request is forwarded to the Lambda function, which injects the HTTP request into our Python grade scraper script. The Python script then parses the request body and logs in to the HdM's SB-Funktionen website, where all the student's grades and lectures are stored.

Backend workflow

In our code example the event variable is the HTTP request received from the frontend. The received request body is a string, so its content has to be parsed to JSON again. If no login data is provided, the script sends an HTTP response with status code 401 and a corresponding message.

In the next step our script scrapes all the data we need and parses it into a JSON format that our frontend can handle easily. This JSON data is then returned by the Lambda function, which forwards the response to the API Gateway, and the API Gateway forwards it back to our frontend application, where the received data is processed and displayed.

Code snippet – try-except

We also had to keep some other things in mind, for example what should happen when our backend throws an exception or the third-party service isn't available. In our backend we created an error handler which takes an HTTP status code and an error message as parameters, converts the data into the right format for our frontend and then sends the response.

Code snippet – error handling

Our main lambda_handler function is divided into different parts, each surrounded by a try-except clause to catch exceptions, for example if the third-party service is down or if no credentials were provided by the frontend. This makes our backend more reliable and gives the user enough feedback to know what's going on. Since we use an external service, we also need a solution for the case when the third-party service is down, for example for maintenance. A possible solution would be a caching mechanism, which we don't provide in the current state.

CI/CD

To make our application as cloud native as possible, we implemented a CI/CD pipeline in our project. This pipeline builds our web app as well as our Lambda functions, tests our Python script and deploys them to AWS. For that we use different stages (build, test, deploy) in our .gitlab-ci.yml file. The build_webapp stage first pulls a Node image and runs a few lines of script to install all dependencies and then builds the Angular-based frontend. While this part is running, a second instance pulls an Alpine image and likewise runs a few lines of script to package our Lambda function(s) into a ZIP file.

After that, the test stage is invoked to test the application before deployment. This is a crucial part of the pipeline, since it can reveal mistakes made during development before going "live" with the application. When the tests succeed, the next stage is invoked.

In our case, we made the deployment stage manual, since we didn't want to push every small change to AWS, and the Student Accounts had time limits that would have prevented that anyway. What happens in the deploy stage is fairly simple: like in the stages before, we pull an image for Terraform and run the usual Terraform commands init, validate, plan and apply. This initializes Terraform, validates our main.tf in the root of the repository, creates a plan for the resources in this main.tf and finally applies it.

But what exactly is in this main.tf file? This file declares every resource we need in AWS and creates it. First of all, we declared variables for our different buckets: one for the Lambda function and one that the Angular app is hosted on. After that, we created the S3 bucket for the Lambda function and uploaded the ZIP file with the function to the bucket; from there it gets deployed to AWS Lambda. We also needed to create a role and policy to give the bucket the correct access rights to execute its task properly. After that, the S3 bucket for the Angular app is created and the needed files are uploaded. This bucket hosts the frontend as a static website, which we also configured in our main.tf.

.gitlab-ci.yml file for our pipeline (1/2)
.gitlab-ci.yml file for our pipeline (2/2)

Testing

Testing is one of the most important things when implementing a CI/CD pipeline with automated deployment. If you don't implement tests, you don't really know whether your application works before deployment, and after the deployment it is too late. So implementing a test stage in our project was the way to go. For our Python backend we wrote some basic unit tests to test functionality and added a test stage for the backend to our CI/CD pipeline.

We also managed to write an end-to-end test for our frontend, which checks whether the error snackbar is shown when the user enters wrong credentials. The harder part was getting it to run in the CI/CD pipeline, which we unfortunately didn't manage to do.

What problems did we have and how did we solve them?

One of the biggest problems we encountered was the fact that we only had access to an AWS Student Account, which meant we only had restricted access to AWS. For example, we needed to create different kinds of roles to deploy our Lambda function with the correct set of rights. Due to the restrictions we were not allowed to give the roles the needed permissions, which caused our CI/CD pipeline to fail and our project not to be fully deployed. This could only be solved by getting a "real" AWS account, which gives you all the permissions you need.

Another problem we faced was CORS (Cross-Origin Resource Sharing). In the first steps of our development we always got a CORS error when our frontend requested the grades and lecture data from our backend service. The reason was that our Python backend script just sent back the JSON object containing all the data, but without any HTTP headers, to our frontend. The frontend then failed to accept the response because the URL of the API Gateway was different from the URL of our frontend. To fix this problem we had to set the Access-Control-Allow-Origin HTTP header in the response from our backend.

Code snippet – http-headers (CORS)

After that, the request worked and our frontend could receive the scraped data.

Another problem was integrating our end-to-end test into our CI/CD pipeline, which we unfortunately didn't manage to fix in time. It would have required a runner with a browser available, which we weren't able to set up. We did implement an E2E test that runs locally without any problems, so at least we have a bit of code quality assurance here, but having to run the tests manually isn't what you want in a fully automated cloud-native approach.

Conclusion

It was quite a long way from where we started, but in the end we managed to get our web app running on AWS the way we wanted. We made it a bit difficult for ourselves in the beginning by deciding to learn new technologies like Python and Angular, so we first had to learn those, and we also had to learn about serverless architecture. It is something we look forward to working with in the future.

At the presentations we found out about AWS Amplify, which is basically a tool by AWS to get serverless web apps running as fast as possible without the need for S3 buckets. It showed us that there isn't really one "one and only" way to get something running in the cloud; there are many possible solutions.

In our opinion we learned a lot about AWS, serverless architecture and the cloud in general, but also about developing an application without having to think about renting and maintaining a server. Maybe we can continue with this project in the near future and give the HdM SB-Funktionen a new look 🙂

Application Updater with Add-on Management

by Mario Beck (mb343) and Felix Ruh (fr067)

Introduction

Our goal was to build a program updater for developers that they can easily integrate into their CI/CD pipeline. For the implementation we used the IBM Cloud and a serverless architecture to achieve unlimited scalability. The serverless services we used include Cloud Functions, DB2 and an Object Storage.

The project consists of an uploader, with which the developer can upload their program to the Object Storage, and a downloader for the user, which automatically downloads the latest version.

Usage from the developer's perspective:

  • The program is registered and the developer receives the corresponding API keys
  • A config for the downloader is created
  • The program can be uploaded with the uploader, which can easily be integrated into a CI/CD pipeline

Usage from the user's perspective:

  • Download the downloader and the config
  • Start the downloader
  • Before the program starts, the downloader checks for new updates and downloads them if available
  • After the update, the actual program is started

Cloud-Based Password Manager

by Benjamin Schweizer (bs103) and Max Eichinger (me110)

Abstract

Can password manager providers read my passwords? We wanted to be on the safe side and built our own. This article shows the steps we had to take to get there. We implemented our frontend with Flutter and our backend on AWS. We also cover IaC with Terraform. At the end, we share the problems we ran into during the implementation as well as possible extensions that could be added in the future.

Mobile App

For the frontend we chose the Flutter framework. Flutter allowed us to use a single code base for all our target platforms (iOS, Android), so new features could be implemented quickly and changes could be tested immediately on different devices. An integration test also ensured that the app runs without errors on both platforms.


The app offers a simple user interface in which the following actions are possible:

  • Logging in and registering a user
  • Adding and deleting a password
  • Changing passwords

These functions are shown in the following screenshots.

The interface is identical on Android and iOS.

Architecture

Our architecture is fundamentally split into frontend and backend. As already mentioned, the frontend is implemented with Flutter and the backend runs on the AWS cloud.
All requests to the backend are sent as HTTP requests to the API Gateway service.
User requests must contain a valid JWT (JSON Web Token).
The user receives this session token from the Cognito service after a successful login.
The token allows the Lambda functions to ensure that users can only change data they are authorized to change.
The Lambda functions evaluate the requests and modify the password data in the DynamoDB service.
All password data sent to the backend has already been encrypted locally by the client to prevent misuse.


AWS Services

Since we had already gained experience with AWS in connection with IoT in another course this semester, AWS was our first choice. We were also confident at the start of the project that AWS would meet all our requirements.

DynamoDB
AWS DynamoDB is a fully managed NoSQL database service; all password data is stored there.
To enable fast queries, our table is split into the following fields:

  • Partition key: User_Id (a unique ID for each user)
  • Sort key: PasswordName
  • Password (in encrypted form)
  • Description

API Gateway

To forward HTTP requests to our Lambda functions, we use the API Gateway service. It validates user requests for required parameters and rejects requests with missing data. The token is also verified here with the help of Cognito.
We use three different HTTP methods to handle our client requests; all of them are additionally secured with an API key.

  • The DELETE method deletes one of the user's passwords.
  • The GET method returns all of the user's passwords.
  • The PUT method creates or overwrites a password.

Lambda
Within each of our Lambda functions we have access to the Cognito authorizer. It lets us directly access the data (User_Id, username, etc.) of the user who sent the request.

PUT method:

The PUT method stores new passwords or overwrites existing ones.

Thanks to the JSON schema defined in API Gateway, we can check whether the HTTP body contains the required parameters.
Nevertheless, the user might still send an empty string ("").
Checking for this is the first validation step performed in the Lambda function.


Using the User_Id and the password data, we can now create a new entry in DynamoDB.
If the password name already exists in the database, the password is overwritten.
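The post shows the Lambda code only as screenshots and does not state the runtime. As an illustration, a TypeScript/Node.js sketch of the PUT handler could look like this; the table name and environment variable are assumptions:

```typescript
// putPassword.ts – hypothetical sketch, not the project's actual code.
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";
import { DynamoDB } from "aws-sdk";

const db = new DynamoDB.DocumentClient();
const TABLE = process.env.PASSWORD_TABLE ?? "Passwords"; // assumed table name

export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  // The Cognito authorizer makes the caller's identity available in the request context.
  const userId = event.requestContext.authorizer?.claims?.sub;
  const { PasswordName, Password, Description } = JSON.parse(event.body ?? "{}");

  // Second validation step: the JSON schema cannot catch empty strings.
  if (!userId || !PasswordName || !Password) {
    return { statusCode: 400, body: JSON.stringify({ message: "invalid request" }) };
  }

  // put() overwrites an existing item with the same key, i.e. the same password name.
  await db
    .put({
      TableName: TABLE,
      Item: { User_Id: userId, PasswordName, Password, Description },
    })
    .promise();

  return { statusCode: 200, body: JSON.stringify({ message: "saved" }) };
};
```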


GET method:

The GET method returns all of a user's passwords.

Using the User_Id, we can send a query to DynamoDB that returns all the data of a single user.
This ensures that a user can only access their own password data.

DELETE method:

The DELETE method deletes one of the user's passwords.

For this method, too, a JSON schema in API Gateway validates the body.

Using the authorizer and the password name, we can then delete the password for this user. To do so, we specify the partition key (User_Id) and the sort key (PasswordName) and send this request to DynamoDB.


Encryption

To ensure that all passwords are secure and can only be read by the user who created them, we use local encryption.
So that the user can use the app on multiple devices at the same time, there has to be a shared key.

In our case this key is based on the login password. After a successful login, the key is stored in a secure area on the device.

On iOS the "Keychain Service" is used, on Android the "Android keystore system". The login password is hashed beforehand.

An AES algorithm is used for encryption and decryption. The encrypted bytes then only need to be encoded to a string so that the password can be serialized to JSON more easily.


Testing in Flutter

To be able to test all features in Flutter easily, we decided on an integration test. It lets us verify that all important functions work correctly. The main functions we tested are:

  • Logging in a user
  • Adding and deleting a password
  • Changing passwords
  • Encryption and decryption
The login procedure checks whether the app displays the correct data once the user is logged in.


Terraform

We used Terraform in our project. Our goal was that anyone could easily reproduce this project at home. It also made it possible to quickly switch the AWS account or region and to version all infrastructure changes.

Main and Cognito

We split our Terraform code into several files. The main.tf mainly sets the standard Terraform parameters and additionally sets up the Cognito service.

Variables

The variable.tf file defines the AWS account ID and the region for Terraform. Terraform needs this data to find the right account and region for the cloud infrastructure.

DynamoDB

As you would expect, the dynamodb.tf file contains all DynamoDB settings; this is where the AWS service and the table fields are created.

API Gateway

In api.tf, the AWS API Gateway service itself is configured first. Then the API methods are defined. For each method you also have to specify which Lambda function should be executed, and a request template can be added, which among other things simplifies the Lambda code. Once everything important is configured in the methods, the response is created in the tf file.
Next, Terraform defines how the API is deployed, which saves the developer from later pressing the deploy button in AWS API Gateway by hand. At the end of the api.tf file, a model / schema for the JSON data is specified for each API method; this has the advantage that requests to the API without the required data are rejected automatically. Finally, the Cognito authorizer is set up correctly for the API.


Lambda

All Lambda functions have to be stored locally. The lambda.tf file then defines the path to each function; when Terraform runs, a zip file is created for each function and uploaded to AWS Lambda.
After that, the policies for logging with CloudWatch and the access rights for DynamoDB are granted.
Finally, permissions for API Gateway and Cognito are assigned to the right Lambda functions.

Problems

Every project has its problems, and ours was no exception. Right at the start we struggled a lot with AWS, because when you develop for the cloud, debugging is no longer as easy as in an IDE; you can hardly trace what the AWS setup or the Lambda code is doing. In the end we were able to solve this with the CloudWatch service: all actions are logged there, so errors can be found and fixed quickly.

Further difficulties came up while working with Terraform. Terraform is supposed to create all services in the correct order on its own, but unfortunately this often went wrong, and we had to set the order manually in the Terraform file using depends_on. Another problem was that AWS silently configures many options in the background when creating services; figuring out later with Terraform which settings you actually need was only possible by reading the documentation thoroughly.

While working with the AWS service Cognito, we had trouble with confirming a user. Cognito creates all users on its own, but the account confirmation has to be done manually. We were able to solve this quickly with a dedicated Lambda function that automatically verifies every user.

Extensions

The project was planned from the start with the time limit of one semester. With further time investment, however, many important features could still be implemented:

  • Flutter also runs on the web, but our code has many dependencies on iOS / Android. Still, the effort to get the app running as a web app would be manageable.
  • Currently the passwords are still visible in the detail view; in the future they should be hidden and a copy button should be added to protect them even better.
  • Power users' lives could also be made easier with a password search, so that users could quickly find the right password among a large number of entries.
  • Another feature would be changing the master password, but the effort would be considerable: since every password in the database is encrypted with the master password, every password would have to be re-encrypted.

Lessons Learned

As the project shows, we were able to gain a lot of experience with the AWS services IAM, Cognito, Lambda, API Gateway, CloudWatch and DynamoDB. Thanks to Terraform, the concept of IaC (Infrastructure as Code) has also become much more tangible for us; in addition, Terraform practically forced us to know the most important settings of every AWS service we used. We also gained first experience in app development. Finally, we expanded our testing knowledge with the topic of integration tests.

How do you get a web application into the cloud?

by Dominik Ratzel (dr079) and Alischa Fritzsche (af094)

For the lecture “Software Development for Cloud Computing”, we set ourselves the goal of exploring new things and gaining experience. We focused on one topic: “How do you get a web application into the cloud?”. In doing so, we took a closer look at Continuous Integration / Continuous Delivery, Infrastructure as Code, and Secure Sockets Layer. In the following, we would like to share our experiences.

Overview of the content of this blog post

  • Comparison GitLab and GitHub 
    • CI/CD in GitLab 
      • Problem: Where are the CI/CD settings in the HdM Gitlab? 
      • Problem: Solve Docker in Docker by creating a runner 
    • CI/CD in GitHub 
  • Set up SSL for the web application 
    • Problem: A lot of manual effort 
      • Watchtower 
      • Terraform 
  • Testing 
    • Create a test environment 
    • Automated Selenium frontend testing in GitHub 
  • Docker Compose 
  • Problem: How to build amd64 images locally with an arm64 processor? 

Continuous Integration / Continuous Delivery

At the very beginning, we asked ourselves which platform was best suited for our approach. We limited ourselves to the best-known platforms so that the comparison would not be too complex: GitHub and GitLab.
Another point we wanted to try was setting up a runner. For this purpose, we set up a simple pipeline in both GitLab and GitHub to update Docker images on Docker Hub.

GitLab vs. GitHub

GitHub is considered the original cloud-based Git platform. The platform focuses primarily on the community. Comparatively, it is also the largest (as of January 2020: 40 million users). GitLab is the self-hosted open-source alternative to GitHub. During our research, we noticed the following differences concerning our project.

                                        GitLab    GitHub
Free private and public repositories    ✓         ✓ (since Jan. 2019)
Enterprise versions                     ✓         ✓
Self-hosted version                     ✓         ○ (only with paid Enterprise plan)
CI/CD with shared or personal runners   ✓         ○ (with third-party apps)
Wiki                                    ✓         ✓
Preview code changes                    ✓         ✓

Especially the point that it is only possible in GitLab to use self-hosted runners for the CI/CD pipeline caught our attention. From our point of view, this is a plus for GitLab in terms of data protection. The fact that GitLab can be self-hosted is an advantage but not necessary for our project. Nevertheless, it is worth mentioning, which is why we have included the point in our list. In all other aspects, GitLab and GitHub are very similar.

CI/CD – GitLab vs. GitHub

GitHub provides so-called GitHub Actions. This way, the user does not have to set up, configure, or host their own runner.
+ very easy to use
+ free of charge
– Critical from a data protection perspective, as the code is executed/read “somewhere”

GitLab: To use the CI/CD, a custom runner must be configured, hosted, and integrated into the code repository.
+ code stays on own runner (e.g., passwords and source code are safe)
+ the runner can be configured according to one’s wishes
– complex to set up and configure
– Runner could cost money depending on the platform (e.g., AWS)

Additional information: HdM offers students so-called shared runners. However, Docker-in-Docker is not possible with these runners for security reasons. In the following, we will explain how we configured our GitLab runners to allow Docker-in-Docker. Another insight was that the DOCKER_HOST variable must not be specified in the pipeline, otherwise the Docker socket will not be found and the pipeline will fail.

CI/CD in GitLab 

Where are the CI/CD settings in the HdM GitLab?

We are probably not the first to notice that the CI/CD is missing in the MI GitLab navigation. The “advanced features” have been disabled to avoid “overwhelming” students. However, they can be easily activated via the GitLab settings (Settings > General > Visibility, project features, permissions) (https://docs.gitlab.com/ee/ci/enable_or_disable_ci.html). 

Write the .gitlab-ci.yml file

The next step is to write an individual .gitlab-ci.yml file (https://docs.gitlab.com/ee/ci/quick_start/index.html).  
The script builds a Docker container and pushes it to Docker Hub. The DOCKER_USERNAME and DOCKER_PASSWORD are stored as Variables in GitLab (Settings > CI/CD > Variables). 
Tip: If you want to keep the images private but do not want to pay for the second private repository on Docker Hub (5$/month), you can create a private repo and push the images separated by tag (in our case, “frontend” and “backend”). 

stages:
  - docker

build-push-image:
  stage: docker
  image: docker:stable
  tags:
    - gitlab-runner
  cache: {}
  services:
    - docker:18.09-dind
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
    # This variable DOCKER_HOST should never be set, because otherwise the default address of the Docker host will be
    # overwritten and the runner will not be able to access the socket and the pipeline will fail!
    # DOCKER_HOST: tcp://localhost:2375/
  before_script: # Install docker-compose
    - apk add --update --no-cache curl py-pip docker-compose
  script:
    - echo $DOCKER_PASSWORD | docker login --username $DOCKER_USERNAME --password-stdin
    - docker-compose build
    - docker-compose push
  only:
    - master

Configuring Gitlab

Next, we asked ourselves how we could restrict merges into the master. The goal was only to allow a branch to be added to the master if the pipeline was successful. This setting can be found in Settings > General > Merge requests > Merge checks the item “Pipelines must succeed”.

Setting up and configuring GitLab Runner

For this, we have written a runnerSetup.sh.

#!/bin/bash

# Download the binary for your system
sudo curl -L --output /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64

# Give it permission to be executed
sudo chmod +x /usr/local/bin/gitlab-runner

# Create a GitLab CI user
sudo useradd --comment 'GitLab Runner' --create-home gitlab-runner --shell /bin/bash

# Install and run as service
sudo gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner
sudo gitlab-runner start
sudo gitlab-runner status

# Command to register the runner
sudo gitlab-runner register --non-interactive --url https://gitlab.mi.hdm-stuttgart.de/ \
 --registration-token asdfX6fZFdaPL5Ckna4qad3ojr --tag-list gitlab-runner --description gitlab-runner \
 --executor docker --docker-image docker:stable \
 --docker-volumes /var/run/docker.sock:/var/run/docker.sock \
 --docker-privileged

# Install Docker and give the GitLab runner permissions so that it can access the Docker socket.
echo "Installing Docker"
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce

sudo usermod -aG docker gitlab-runner

# Restart Docker and GitLab Runner Service
sudo systemctl restart gitlab-runner
sudo systemctl restart docker.service

During the step “# Command to register the runner” we fixed the problem we had with the HdM runners. “--docker-volumes /var/run/docker.sock:/var/run/docker.sock” gives the runner access to the Docker socket. “--docker-privileged” allows the runner to access all devices on the host and processes outside the container (be careful).

CI/CD in GitHub

In GitHub, this is done by adding the following code to the self-created .github/workflows/ci.yml file in the repository.
Like the previous .gitlab-ci.yml file, the script creates a Docker container and pushes it to Docker Hub. The DOCKER_USERNAME and DOCKER_PASSWORD are stored in the Action Secrets of GitHub (Settings > Actions).

name: Build and Push to Docker.io

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Login to Docker.io
      run: docker login -u ${{ secrets.DOCKER_USERNAME }} -p ${{ secrets.DOCKER_PASSWORD }} 
 
    - name: Build Docker-Compose
      run: docker-compose build
 
    - name: Deploy Container to Docker.io
      run: docker-compose push

SSL

SSL is used to encrypt the data exchange between the web browser and web server. It thus protects against access by third parties. To set up SSL, it is necessary to have an SSL certificate.

Configuration

We decided to use “all-inkl.com” due to an existing subscription.
In the KAS admin center (after setting up the domains and subdomains), new DNS records can be created and edited (Domain > DNS Settings > Actions (Edit)). Here, a new Type-A record can be created that points to the IP address of the AWS reverse proxy. The email address (which is needed for verification) can easily be created under Email > Email Inbox.

Server configuration 

We used the free CA Let’s Encrypt (https://letsencrypt.org/) for the creation and renewal of the SSL certificates. For the configuration, we used the following images: jwilder/nginx-proxy as Nginx Proxy and jrcs/letsencrypt-nginx-proxy-companion as Nginx Proxy Companion (it creates the certificates and mounts them via the volumes into the Nginx Proxy so that it can use them).  
In the docker-compose.yml, the environment variables can now be added for the service “frontend”. 

  frontend:
    image: dr079/webshop:frontend
    build:
      context: ./frontend
      dockerfile: Dockerfile
    restart: always
    environment:
      API_HOST: backend
      API_PORT: 8080
      # Subdomain
      LETSENCRYPT_HOST: webshop.designmyhouse.de
      # Email for domain verification
      LETSENCRYPT_EMAIL: admin@designmyhouse.de
      # For the Nginx proxy
      VIRTUAL_HOST: webshop.designmyhouse.de
      # The Port on which the frontend responds. Tells the Nginx proxy who to send the requests to.
      VIRTUAL_PORT: 80
# Not needed when deploying with reverse proxy
#    ports:
#      - "80:80"

After that, we created the docker-compose-cert.yml file, which starts the Nginx Proxy and the Nginx Proxy Companion.

version: "3.3"
services:
  nginxproxy:
    image: jwilder/nginx-proxy
    restart: always
    volumes:
      - ./nginx/data/certs:/etc/nginx/certs
      - ./nginx/conf:/etc/nginx/conf.d
      - ./nginx/dhparam:/etc/nginx/dhparam
      - ./nginx/data/vhosts:/etc/nginx/vhost.d
      - ./nginx/data/html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock
    ports:
      - 80:80
      - 443:443
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy"

  nginxproxy_comp:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: always
    depends_on:
      - nginxproxy
    volumes:
      - ./nginx/data/certs:/etc/nginx/certs:rw
      - ./nginx/conf:/etc/nginx/conf.d
      - ./nginx/dhparam:/etc/nginx/dhparam
      - ./nginx/data/vhosts:/etc/nginx/vhost.d
      - ./nginx/data/html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro

AWS EC2 instance

In AWS, an EC2 instance (consisting of an Ubuntu server and a security group) can now be created and started with the settings Verify and Launch. 

The IP address of the created instance can now be entered as a Type-A entry under “all-inkl.com”.

Install Docker on Ubuntu

It is now possible to connect to the EC2 instance and run the following commands to make the project accessible through the domain/subdomain. (Note: It may take a few hours for the DNS server to apply the settings. Solution: Use the Tor browser)

# Add GPG key of Docker repository from APT sources.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

# Update Ubuntu package database
sudo apt-get update

# Install Docker
sudo apt-get install -y docker-ce

# Install Docker Compose
sudo curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

# Give Docker Compose the execute permission
sudo chmod +x /usr/local/bin/docker-compose

# Log in as root user
sudo -s

# Clone project
git clone https://github.com/user_name/project_name.git

# Build and launch project
docker-compose -f ./project_name/docker-compose-cert.yml up --build -d

# Pull images from DockerHub
sudo docker-compose -f ./cloud-webshop/docker-compose.yml pull

sudo docker-compose -f ./cloud-webshop/docker-compose.yml up -d

Watchtower

With Watchtower, updates to the Docker registry can be automatically detected and downloaded. The container will then be rebooted with the new image. Watchtower accesses the Docker repo via REPO_USER & REPO_PASS and checks in the set time interval (--interval 30) if the Docker images have changed and updates them on the fly.
This requires adding the following code to the docker-compose.yml (replace REPO_USER and REPO_PASS with Docker.io Access Token credentials (Settings > Security)).

  watchtower:
    image: v2tec/watchtower
    environment:
      REPO_USER: REPO_USER
      REPO_PASS: REPO_PASS
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --interval 30

Terraform

The preceding steps involve a considerable manual effort. However, it is possible to automate this, e.g., with Terraform. To achieve this, the following files must be written.

main.tf

resource "aws_instance" "test" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = var.ec2_instance_type

  tags = {
    Name = var.ec2_tags
  }

  user_data = file("docker/install.sh")
//  user_data = file("docker/setupRunner.sh")
  key_name = aws_key_pair.generated_key.key_name
  security_groups = [
    aws_security_group.allow_http.name,
    aws_security_group.allow_https.name,
    aws_security_group.allow_ssh.name]
}

output "instance_ips" {
  value = aws_instance.test.*.public_ip
}

providers.tf

provider "aws" {
  access_key = var.aws-access-key
  secret_key = var.aws-secret-key
  region = var.aws-region
}

security_groups.tf

resource "aws_security_group" "allow_http" {
  name = "allow_http"
  description = "Allow http inbound traffic"
  vpc_id = aws_default_vpc.default.id

  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = [
      "0.0.0.0/0"
    ]
  }

  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = [
      "0.0.0.0/0"]
  }
}

resource "aws_security_group" "allow_https" {
  name = "allow_https"
  description = "Allow https inbound traffic"
  vpc_id = aws_default_vpc.default.id

  ingress {
    from_port = 443
    to_port = 443
    protocol = "tcp"
    cidr_blocks = [
      "0.0.0.0/0"
    ]
  }

  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = [
      "0.0.0.0/0"]
  }
}

resource "aws_security_group" "allow_ssh" {
  name = "allow_ssh"
  description = "Allow ssh inbound traffic"
  vpc_id = aws_default_vpc.default.id

  ingress {
    from_port = 22
    to_port = 22
    protocol = "tcp"
    # To keep this example simple, we allow incoming SSH requests from any IP. In real-world usage, you should only
    # allow SSH requests from trusted servers, such as a bastion host or VPN server.
    cidr_blocks = [
      "0.0.0.0/0"
    ]
  }
}

variables.tf

variable "ec2_instance_type" {
  default = "t2.micro"
}

variable "ec2_tags" {
  default = "Webshop"
//  default = "Gitlab-Runner"
}

variable "ec2_count" {
  default = "1"
}


data "aws_ami" "ubuntu" {
  most_recent = true
  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }
  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
  owners = ["099720109477"] # Canonical
}

ssh_key.tf

variable "key_name" {
  default = "Webshop"
}

resource "tls_private_key" "example" {
  algorithm = "RSA"
  rsa_bits = 4096
}

resource "aws_key_pair" "generated_key" {
  key_name = var.key_name
  public_key = tls_private_key.example.public_key_openssh
}

resource "aws_default_vpc" "default" {
  tags = {
    Name = "Default VPC"
  }
}

variable_secrets.tf

variable "aws-access-key" {
  type = string
  default = "aws-access-key"
}

variable "aws-secret-key" {
  type = string
  default = "aws-secret-key" 
}

variable "aws-region" {
  type = string
  default = "eu-central-1"
}

install.sh for EC2 setup

To do this, we created a ./docker/install.sh file with the following content.


#!/bin/bash

# Install wget to update IP at all-inkl.com
echo "Setup all-inkl.com"
sudo apt-get install wget

# Save public IP to variable
ip="$(dig +short myip.opendns.com @resolver1.opendns.com)"

# Add all-inkl.com variables
kas_login="username"
kas_auth_data="pw"
kas_action="update_dns_settings"
sub_domain="sub"
record_id="id"

sudo sleep 10s

# Update all-inkl.com dns-settings with current IP and account data
sudo wget --no-check-certificate --quiet \
  --method POST \
  --timeout=0 \
  --header '' \
    'https://kasapi.kasserver.com/dokumentation/formular.php?kas_login='"${kas_login}"'&kas_auth_type=plain&kas_auth_data='"${kas_auth_data}"'&kas_action='"${kas_action}"'&var1=record_name&wert1='"${sub_domain}"'&var2=record_type&wert2=A&var3=record_data&wert3='"${ip}"'&var4=record_id&wert4='"${record_id}"'&anz_var=4'


echo "Installing Docker"
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce

echo "Installing Docker-Compose"
sudo curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Follow guide to create personal access token https://docs.github.com/en/github/authenticating-to-github/keeping-your-account-and-data-secure/creating-a-personal-access-token
sudo git clone https://username:token@github.com/ratzel921/cloud-webshop.git
sudo docker login -u username -p token
sudo docker-compose -f ./cloud-webshop/docker-compose-cert.yml up --build -d
sudo docker-compose -f ./cloud-webshop/docker-compose.yml pull
sudo docker-compose -f ./cloud-webshop/docker-compose.yml up

Next, run the following commands. This will automatically create an EC2 instance (which runs the application), a security group (for connections to the EC2 instance via HTTPS, HTTP, and SSH), and an SSH key (which allows access to the EC2 instance via SSH). At the end, the IP address of the EC2 instance is displayed in the console. It is entered into all-inkl.com automatically by the install script, or it can be added manually.

# Get terraform provider with init and use apply to start the terraform script.
terraform init
terraform apply --auto-approve

# (Optional) Delete EC2 instances
terraform destroy --auto-approve

Testing

Creating a Testing Environment

Using Terraform and an EC2 instance, it is also possible to create a testing environment. We used the GitHub pipeline for this.

Backend/Dockerfile

# Build stage
FROM maven:3.6.3-jdk-8-slim AS build
COPY src /home/app/src
COPY pom.xml /home/app
RUN mvn -f /home/app/pom.xml clean test
RUN mvn -f /home/app/pom.xml clean package

# Package stage
FROM openjdk:8-jre-slim
COPY --from=build /home/app/target/*.jar /usr/local/backend.jar
COPY --from=build /home/app/target/lib/*.jar /usr/local/lib/
EXPOSE 8080
ENTRYPOINT ["java","-jar","/usr/local/backend.jar"]

frontend/nginx/nginx.conf

server {
  listen 80;
  server_name www.${VIRTUAL_HOST} ${VIRTUAL_HOST};

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
        try_files $uri $uri/ /index.html;
        proxy_cookie_path / "/; SameSite=lax; HTTPOnly; Secure";
    }

    location /api {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass_header Set-Cookie;

        proxy_cookie_domain www.${VIRTUAL_HOST} ${VIRTUAL_HOST};
        #rewrite ^/api/?(.*) /$1 break;
        proxy_pass http://${API_HOST}:${API_PORT};
        proxy_redirect off;
    }

   error_page   500 502 503 504  /50x.html;

   location = /50x.html {
        root   /usr/share/nginx/html;
    }
}

frontend/Dockerfile

# Build stage
# Use node:alpine to build static files
FROM node:15.14-alpine as build-stage

# Create app directory
WORKDIR /usr/src/app

# Install other dependencies via apk
RUN apk update && apk add python g++ make && rm -rf /var/cache/apk/*

# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./

RUN npm install

# Bundle app source
COPY . .

# Build static files
RUN npm run test
RUN npm run build


# Package stage
# Use nginx alpine for minimal image size
FROM nginx:stable-alpine as production-stage

# Copy static files from build-side to build-server
COPY --from=build-stage /usr/src/app/dist /usr/share/nginx/html

RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/templates/

# EXPOSE 80
CMD ["/bin/sh" , "-c" , "envsubst '${API_HOST} ${API_PORT} ${VIRTUAL_HOST}' < /etc/nginx/templates/nginx.conf > /etc/nginx/conf.d/nginx.conf && exec nginx -g 'daemon off;'"]

Modifying the docker-compose.yml

To do this, we created a copy of docker-compose.yml (docker-compose-testStage.yml). We changed the images and the LETSENCRYPT_HOST & VIRTUAL_HOST for the “backend” and “frontend” service in this file.

Modifying the Terraform files

In the testStage.sh, we changed the record_id and replaced the two docker-compose commands so that they use docker-compose-testStage.yml instead of docker-compose.yml (“sudo docker-compose -f ./cloud-webshop/docker-compose-testStage.yml pull” and “sudo docker-compose -f ./cloud-webshop/docker-compose-testStage.yml up -d”).
In the main.tf, user_data = file("docker/testStage.sh") is set.
After that, the EC2 instance, the security group, and SSH can be started as usual using Terraform.

Automated Selenium frontend testing with GitHub 

To do this, create the .github/workflows/selenium.yml file with the following content.
The script is executed on every push to the repository. It installs all necessary packages, creates a screenshot folder, and runs the pre-programmed Selenium tests located in the frontend folder.
After a push or manual execution, the test results with the artifacts (screenshots) are located on the Actions tab.

name: selenium tests
on: push
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build the stack
        run: docker-compose up -d
      - name: npm install
        run: cd frontend && npm install
      - name: install jest
        run: cd frontend && npm install jest
      - name: install selenium-webdriver
        run: cd frontend && npm install selenium-webdriver
      - name: run tests
        run: mkdir -p /tmp/screenshots/ && cd frontend && npm test
      - name: Archive screenshots
        uses: actions/upload-artifact@v2
        with:
          name: selenium-screenshots
          path: /tmp/screenshots/
      - name: Shutdown
        run: docker-compose down

Note that Chromedriver must be run headless, as GitHub cannot run a browser on a screen.

var driver = await new Builder()
        .forBrowser('chrome')
        .setChromeOptions(new chrome.Options().headless())
        .build();

Infrastructure as Code

Cloud computing is the on-demand provision of IT resources (e.g., servers, storage, databases) via the Internet. Cloud computing resources can be scaled up or down depending on business requirements. You only pay for the IT resources you use. 
On July 27, 2021, Gartner published the latest “Magic Quadrant” for Cloud Infrastructure and Platform Services. Like last year, Amazon Web Services is the top performer in the Magic Quadrant, followed by Microsoft and Google (https://www.gartner.com/doc/reprints?id=1-271OE4VR&ct=210802&st=sb). Since we were interested in trying Docker Compose, we decided to use AWS for deployment.

Deployment on Amazon ECS with Docker Compose 

Since early 2020, AWS and Docker have started working on an open Docker Compose specification, which will make it possible to use the Docker Compose format to deploy containers on Amazon ECS and AWS Fargate. In July 2020, the first beta version for Docker Desktop was released; the first stable version has been available since September 15, 2020.

Customize docker-compose.yml

The AWS ECS CLI supports Compose versions 1, 2, and 3. By default, it looks for docker-compose.yml in the current directory. Optionally, you can specify a different filename or path to a Compose file with the --file option. The Amazon ECS CLI only supports a few parameters, so correcting the yml may be necessary (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cmd-ecs-cli-compose-parameters.html).

# (Optional) Create a new Docker context to point the Docker CLI to the correct endpoint. For this step you need the AWS_ACCESS_KEY and AWS_SECRET_ACCESS_KEY.
docker context create ecs myecscontext

# (Optional) Use context
docker context use myecscontext

# Deploy application to AWS
docker compose up 

# Here you can see which containers were started as well as the URLs
docker compose ps

# (Optional) Shut down container. (Don't forget to change the context back to default).
docker compose down

# Convert Docker Compose file to CloudFormation to track which resources are created or updated
docker compose convert

BuildX

Building images for other processors

For example, if you have an M1 with an arm64 processor, a locally created image would not be accepted by AWS (error message “EssentialContainerExited: Essential container in task exited”). The reason is that ECS instances only support amd64 images.

Since Docker version >= 19.03, Docker offers buildX. The plugin is officially no longer considered experimental as of August 5, 2020. With the buildX functionality, it is relatively easy to create Docker images that work on multiple CPU architectures.

# (optional) Create a new Builder instance
docker buildx create --name mybuilder

# (optional) Use created builder
docker buildx use mybuilder    

# Show all available builder instances (here you can also see which CPU architectures are supported by the builder)
docker buildx ls

# Build and push image for example for amd64, arm64 and arm/v7
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 --tag username/repository_name:tag_name --push .

# Delete images
docker buildx prune --all

Getting Started with Cloud Computing – A COVID-19 Data Map

1. Abstract

Are you searching for country-specific, up-to-date numbers and rates for the global pandemic caused by COVID-19? Well, then I got some bad news for you. You won’t find any in this blog post… not directly anyway. If you are looking for in-depth information about public APIs, location-based data visualization or cloud-based Node.js web applications on the other hand, I might just be the right guy to help you out.

After reading this post you will not only have detailed information about the previously mentioned topics, but you will also learn about the challenges and problems I had to face working on this project and what to look out for, when starting a web application from scratch all by yourself.

2. Introduction

Final product

This project is the result of the examination that is part of the lecture “Software Development for Cloud Computing”. The focus of this lecture is to learn about modern cloud computing technologies and cloud services like AWS, Microsoft Azure and IBM Cloud, how to create and develop software using these technologies as well as creating a cloud-based project from scratch.

At first, I wasn’t quite sure what I was going to work on, but I really wanted to work on a web application that uses location-based data in the form of a map to visualize something else. This “something” was yet to be determined when I started brainstorming for my project, so I started to do some research. I then bumped into this collection of free and public APIs which was perfect for my undertaking and I was almost set on creating a weather app, when I found a free API that would provide me with global data all around the Coronavirus.

Now that I knew what I was going to visualize, I came up with a personal scope for this project. I decided to create a web application that would deliver COVID-19 data for a specific country by either hovering over or clicking on this country, as well as a search function, so that the user could jump to a country of choice by entering the name of a city or a country. Since I had only very limited knowledge about web applications and cloud computing as such (I have worked bits and pieces with Microsoft Azure during my 6-month internship at Daimler before, but never really worked with Node.js or a map library), I did some research first, but I was very confident that I could reach this goal.

3. More Research

Now that I determined what I was planning on doing, I had to figure out which tools and cloud technologies I was going to use. Since I already had a little experience with Microsoft Azure it seemed obvious to settle with Azure and the Azure Maps Service for my project. But there were a couple problems with that:

Problem 1: In order to create a private Azure Account, even an education account, one has to provide a credit card, which I do not own.

Problem 2: There is no map material in Azure Maps for the regions China and South Korea. That isn’t technically a knock-out criterion, but I would prefer to use a service that supports all regions to avoid limitations.

Problem 3: Again, this isn’t a huge problem, but I would rather learn something new and not go with something I already had prior experience in.

So I decided to go with AWS, Amazon’s Cloud Service instead. Even though in retrospect the documentation for AWS is not as good as for Microsoft Azure (at least in my personal opinion), AWS offers a wide range of services and on top of that you can create a free education account with 100$ worth of credits. Unfortunately AWS does not have a location data service from what I could figure out, so I had to decide on an external service.

For map data services I decided to go with Mapbox. Mapbox GL JS is an open-source JavaScript library that uses WebGL to render interactive maps for websites and mobile apps. The advantage Mapbox has over Azure Maps is that it offers all the services I require for my project for free and also covers every region without restriction. Upon creating a free account, a user gets a subscription-key that grants access to all Mapbox services, including Mapbox Studio and the Mapbox Geocoder API which I will get into more detail later on.

4. But how do I get access to data from the internet?

https://www.wrike.com/de/blog/programmierschnittstelle-api-erklaert/

As I mentioned earlier, I stumbled upon a public Web API called covid19api, which offers all sorts of corona-related, up-to-date data for free. In the abstract I promised in-depth information about public APIs, so I might as well say a few words about the functionality of Application Programming Interfaces while we’re at it. An API is a software-to-software interface, not a user-to-software interface.

HTTP-Request for COVID-19 data for Germany between the 20.09 and 21.09
Response to the HTTP-Request above

A good metaphor to understand the concept of APIs would be to think of it as a waiter in a restaurant. The waiter (API) takes an order (HTTP-Request) from a guest (user) and delivers it to the kitchen (backend system), where the order is acknowledged and prepared. When everything is ready, the waiter (API) serves the food (HTTP-Response) to the guest (user). Some companies publish their APIs to the public, so that programmers can use their service to create different products. While some companies provide their APIs for free, others do so against a subscription fee. In the case of the COVID-19-API there is a free tier as well as a 10$, 30$ and 100$ subscription option. By subscribing, the user has access to additional data routes and no request-rate limit; the latter led me to subscribing, because I require several requests per second with my application.

5. Architecture

Basic architecture of my web application hosted in AWS

Let’s take a step back and focus more on which solution I came up with for my project. The architecture of my web application is pretty straightforward. Clients access a frontend via their browser. If a client hovers over, clicks on, or searches for a country, an HTTP-Request is sent to the backend server, which then evaluates that request and sends another HTTP-Request to either the COVID-19-API or the Mapbox-Search-API, depending on what the client requested. Upon receiving an HTTP-Response from either one of the APIs’ backend systems, my backend server evaluates the data for the respective user request and sends it back to the frontend, where it is then visualized. I will go a little more in-depth on these topics later on, but first I want to explain why having a separate frontend and backend makes sense.

Pros for having a separate front and backend:

  1. It’s far easier to distinguish between a frontend or backend issue, in case of a bug
  2. Possibility to upgrade either one without touching the other as they run on different instances (modularity)
  3. Allows use of different languages for front- and backend without problem
  4. Two developers could work on each end individually without causing deployment conflicts, etc.
  5. Adds security, because the server is not tightly bound to the client
  6. Adds level of abstraction to the application

Cons for having a separate front and backend:

  1. Have to pay for two cloud instances instead of just one
  2. Independent testing, building and deployment strategies required
  3. Can’t use templating languages anymore, instead the backend is basically an API for the frontend

6. Frontend

More detailed architecture for the frontend of my application (Note: the Node Server is not part of the frontend, it just receives requests)
How to implement mapbox to your HTML-website

The frontend of my application consists of a static HTML website that is hosted on an AWS EC2 Linux instance. The EC2 instance gets its data from an S3 bucket that is also hosted in AWS and contains up-to-date code for the website. The implementation of Mapbox is very straightforward. All you have to do is include the Mapbox CDN (Content Delivery Network) in the head and add the code shown above with a valid access token to the body of your HTML. The “style” tag allows the user to select from different map styles, such as streets, satellite, etc. Users can create custom map styles, tilesets and datasets using Mapbox Studio. The big benefit of this is that the user does not have to store and load the data manually from the server. Instead, a user can simply upload a style/tileset/dataset to Mapbox Studio and access it from the HTML by creating a new data source with the respective URL for the style/tileset/dataset.
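As a rough illustration, the basic setup could look like the sketch below; the access token, container id, style and starting position are placeholders rather than values from the original project.

// Minimal Mapbox GL JS setup; token, container id, style and starting position are placeholders
mapboxgl.accessToken = 'YOUR_MAPBOX_ACCESS_TOKEN';

const map = new mapboxgl.Map({
  container: 'map',                            // id of a <div> in the HTML body
  style: 'mapbox://styles/mapbox/streets-v11', // built-in style or a custom Mapbox Studio style
  center: [10.45, 51.16],                      // starting position as [lng, lat]
  zoom: 3
});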

Tileset made from GeoJSON in Mapbox Studio

In my case I created a custom tileset from a GeoJSON file of every country in the world. You can find geographical GeoJSON data for free online; I personally found this handy web tool that lets the user create a fairly accurate GeoJSON from countries of choice. But I encountered a problem by doing so. Even though I had fairly accurate geographical data for each country, the COVID-19-API does not support every single country. By sending a request to the COVID-19-API, I got a list of all supported countries with their respective country slug and ISO2 country code. Since those country codes are unique, I wrote a basic algorithm that crafts a custom GeoJSON from all matching country codes of both the GeoJSON and the country JSON response.

How to get list of supported countries from COVID-19-API

Unfortunately not everything was that easy, because for some reason not every object in the GeoJSON had a valid ISO2 code. So I had to manually go through all countries of both files and figure out which ones were missing, which was a real pain in the backside. Eventually I had a simple GeoJSON with a FeatureCollection containing a name, a unique slug, an ISO2 code and a geometry for each country, which I then uploaded to Mapbox Studio as a custom tileset.

How to implement and visualize Mapbox Studio tileset in frontend JavaScript

Now that my tileset was uploaded to Mapbox Studio, I was able to create a data source and a style layer from it. This allowed me to customize the appearance of the tileset’s polygons to my liking. By using Mapbox’s map.on() function, I could add hover and click events for when the client hovers over or clicks on a country and retrieve information from the tileset for this specific country (feature). In my case I get the slug of the country the user has clicked or is currently hovering on and start an HTTP-Request to the backend server with this information and the current and previous date. Hovering will return basic COVID-19 data for a country, while clicking will return premium data.
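A sketch of how the data source, the style layer and the events could be wired up is shown below; the source, layer and property names (e.g. slug) as well as the requestCovidData() helper are assumptions based on the description above, not the original code.

map.on('load', () => {
  // Custom tileset uploaded to Mapbox Studio (tileset URL and source-layer name are placeholders)
  map.addSource('countries', {
    type: 'vector',
    url: 'mapbox://username.countries-tileset'
  });

  map.addLayer({
    id: 'countries-fill',
    type: 'fill',
    source: 'countries',
    'source-layer': 'countries',
    paint: { 'fill-color': '#627BC1', 'fill-opacity': 0.4 }
  });

  // Hovering requests basic data, clicking requests premium data for the country (feature)
  map.on('mousemove', 'countries-fill', (e) => {
    requestCovidData(e.features[0].properties.slug, false); // hypothetical helper, see next section
  });
  map.on('click', 'countries-fill', (e) => {
    requestCovidData(e.features[0].properties.slug, true);
  });
});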

6.1 COVID-19 Data Request (Frontend)

The request is sent using the fetch method, which is a JavaScript interface. The body of the POST request contains the country slug for the country you want to get COVID-19 data for, the current date and the date of the day before. This information is needed for the backend request to the COVID-19-API in order to get the latest corona-related data.

After receiving a response from my backend in the form of a JSON object, the data is added to an empty <ul> object in the HTML where it is then visible to the client.
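A minimal sketch of this frontend request and the rendering into the <ul> could look like this; the route names, JSON field names and the element id are assumptions.

async function requestCovidData(slug, premium) {
  const today = new Date();
  const yesterday = new Date(today);
  yesterday.setDate(today.getDate() - 1);

  // POST body with the country slug, the current date and the date of the day before
  const response = await fetch(premium ? '/covid/premium' : '/covid/basic', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      slug: slug,
      from: yesterday.toISOString().split('T')[0],
      to: today.toISOString().split('T')[0]
    })
  });
  const data = await response.json();

  // Add the received key/value pairs to the (initially empty) <ul> element
  const list = document.getElementById('covid-data');
  list.innerHTML = '';
  Object.entries(data).forEach(([key, value]) => {
    const item = document.createElement('li');
    item.textContent = `${key}: ${value}`;
    list.appendChild(item);
  });
}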

Client searched for Berlin and Mapbox flew to the exact location

6.2 Search Request (Frontend)

The search function works very similarly to the previously described COVID-19 data request, but instead of sending dates and a country slug from the tileset, we send a query. This query is the text that the client enters into the search bar. Upon starting a search, a fetch POST request is sent to the backend containing the query in its body. After receiving a response from the backend, which contains information about the first point of interest the Mapbox geocoder could find, we jump to the location of the POI, as long as it was a valid query. This “jump” is handled by the Mapbox fitBounds() function, which tries to fit a POI’s bounding box perfectly on the user’s screen.
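Sketched out, the search flow might look like this; the /search route and the response shape are assumptions, while fitBounds() and flyTo() are the Mapbox GL JS calls mentioned above.

async function searchLocation(query) {
  const response = await fetch('/search', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: query })
  });
  const poi = await response.json(); // first result the backend got from the Mapbox geocoder

  if (poi && poi.bbox) {
    map.fitBounds(poi.bbox);                     // fit the POI's bounding box onto the screen
  } else if (poi && poi.center) {
    map.flyTo({ center: poi.center, zoom: 10 }); // fall back to the center point if no bbox exists
  }
}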

7. Backend

More detailed architecture for the backend of my application (Note: the Amazon EC2 instance is not part of the backend, it just sends requests)

The backend consists of a single Node.js express server that is hosted in an Elastic Beanstalk instance on AWS. I also added a CI/CD Code Pipeline from AWS that connects the instance to a GitHub repository so I have continuous updates. Since I decided on separating my frontend and backend, the backend server behaves much more like an API system for my frontend.

7.1 COVID-19 Data Request (Backend)

Express route for basic COVID-19 data

Whenever a HTTP-Request for one of the corona-related server routes happens, the server passes the request body to a function and executes this function. Upon execution, the backend sends another HTTP-Request to the COVID-19-API with the country slug, the current and previous date as parameters and the API access token as header. This request is being sent using the request-promise npm dependency.

The COVID-19-API’s response contains specific, corona-related data for the requested country. This data has to be evaluated and adapted to make sure the backend only responds with data that is needed and correctly formatted; larger integer numbers, for example, are difficult to read without a separator every three digits. After evaluation, the data is sent back to the frontend, where it is displayed.

Backend function that sends a request to the COVID-19-API with respective parameters. (Note: the use of async and await make sure the response is not empty)

A problem that I stumbled upon while working on the backend was that the requested data was only usable within the scope of the callback function. In order to fix that issue and prevent an empty string from being sent to the frontend as a response, I had to learn about promises (async and await).

Let’s go back to the restaurant example, shall we? If you create a function in JavaScript, it is synchronous by default. A synchronous waiter would take an order from a table (client), hand it to the kitchen (server) and then wait in the kitchen until the chef is done preparing the order, not serving any other tables in the meantime. He would not serve another table until he has brought the finished food to the table that ordered it. As you can see, this would be very inefficient, which is why asynchronous functions exist. The same scenario would work as follows if it were asynchronous: the waiter takes an order and gives it to the kitchen, but instead of waiting there, he starts serving other tables and brings the finished food out as soon as it is ready to be served.

In the case of my application it is important that I handle requests asynchronously, because there are multiple requests per second when a client hovers over many countries in a short period of time. And that is where the JavaScript keywords async and await come into play. async defines that a function is asynchronous, and await can be used in the scope of an async function to wait until an HTTP-Request is finished and the response has arrived. This makes sure that the COVID-19-API’s response, and not an empty body, is sent to the frontend.
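Put together, the backend function could look roughly like the sketch below; the COVID-19-API route, the header name for the access token and the formatData() helper are assumptions, while request-promise and async/await are used as described above.

const rp = require('request-promise');

// async marks the function as asynchronous; await pauses it until the COVID-19-API has answered,
// so the evaluated data (and not an empty body) is sent back to the frontend
async function fetchCovidData(slug, from, to) {
  const apiResponse = await rp({
    uri: `https://api.covid19api.com/country/${slug}`,          // assumed route
    qs: { from: from, to: to },
    headers: { 'X-Access-Token': process.env.COVID_API_TOKEN }, // assumed header name
    json: true
  });

  return formatData(apiResponse); // hypothetical helper that trims and formats the numbers
}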

7.2 Search Request (Backend)

If there is a HTTP-Request for a query search, the server simply starts a request to the Mapbox Geocoding API with the request body’s query and the Mapbox access token as parameter. The result will be a list of POIs that fit the query, but for the sake of simplicity the server always sends the very first result back to the frontend.
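This lookup can be sketched as follows; the endpoint is the public Mapbox Geocoding API, and limiting the result to one entry mirrors the “first result only” behaviour described above.

const rp = require('request-promise');

async function geocode(query) {
  const result = await rp({
    uri: 'https://api.mapbox.com/geocoding/v5/mapbox.places/' + encodeURIComponent(query) + '.json',
    qs: { access_token: process.env.MAPBOX_TOKEN, limit: 1 },
    json: true
  });
  return result.features[0]; // undefined if the query did not match any POI
}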

8. Other Challenges

Another challenge that occurred during my work on the project was that I sometimes struggled to find a solution for a problem because the documentation for an API or a service wasn’t clear or simply didn’t exist. Sometimes it would take multiple hours of reading documentation and community contributions just to figure out that a single line of code would fix the problem. The biggest issues I had were probably with the AWS and COVID-19-API documentation. While I could fix the issues I had with AWS by following YouTube and StackOverflow tutorials, there wasn’t really such a thing for the COVID-19-API. I then joined the official Slack server for the API and reached out to the creator and developer, who was very supportive and helpful.

9. Conclusion

Cloud computing is versatile and complex. During my time working on the project I got a far better understanding of web applications, APIs and cloud computing as such. I got more confident in working with JavaScript as a frontend and backend language and made my first steps into the world of web and cloud development. I learned a lot about location-based data and server architecture as well as how to do research on these topics. When I look back on what I achieved with this project, I am very happy with the result. I managed to reach all the goals I set for myself. I’m also happy that I decided to go with AWS over Azure for this project, because I got to work with a new cloud environment. For my next cloud-based web application I will probably go back to Azure though, or try a new cloud service, as I am not a big fan of the AWS documentation and management console.

But now it is up to you what you do with this information. Are you about to close your browser in disappointment after not learning about the latest coronavirus numbers, or are you going to start working on your own cloud-based web application tomorrow? No matter how you decide, I hope you learned something from reading this blog post that will help you on your journey to becoming a cloud developer.

Thanks for reading!

Generating audio from an article with Amazon Polly

Author: Silas Krause (sk295)

Project

Reading multiple and detailed articles can become a little bit tiring. Listening to the same content, on the other hand, is more comfortable, can be done while driving, and is less straining for the eyes.
Therefore I decided to use this lecture to create a service that converts an article to an audio file using a Text-to-Speech service.

Technical Architecture

The input for the application is quite simple. The user only needs to provide a URL to the article. The main application then fetches the contents of that URL and cuts out the unwanted markup. After that, an audio file needs to be created. I chose the Amazon Polly TTS API and S3 as a file storage solution to try out Amazon Web Services.
To reduce multiple creations of the same article and load time, I intended to add a database that checks if there is already an audio file.
To interact with this application, I also needed a frontend that has an input field and dynamically renders the elements once the API endpoints send a response.

I built the app using NodeJS with Express because, even though I do not have a lot of experience building backend applications, I know JavaScript well and am therefore familiar with Node.
I decided to create three routes for my application. The index should serve the frontend. Additionally, I need two API endpoints, the first one to scrape the content from the URL, and the second one to generate the audio file.


Getting the content

Initially, I thought I could simply fetch the HTML from the source. I quickly discovered that some pages render the content on the client-side or have some kind of confirmation screen. That is why I needed a way to prerender the page. The best solution I found was Puppeteer. Puppeteer is a Headless Chrome Node.js API that runs Chromium headless and enables access to the rendered DOM. To reduce the load time, I blocked all third-party JavaScript.
Pruning the response to exclude everything but the content turned out to be a tedious task because every website structures its content differently. I ended up using unfluff, which is fine for most cases.
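A condensed sketch of this step is shown below; treating every script from a different origin as “third party” is my assumption about how the filtering might be done, and unfluff performs the actual content extraction.

const puppeteer = require('puppeteer');
const extractor = require('unfluff');

async function getArticleText(url) {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  // Block third-party JavaScript to reduce the load time
  await page.setRequestInterception(true);
  page.on('request', (request) => {
    const sameOrigin = request.url().startsWith(new URL(url).origin);
    if (request.resourceType() === 'script' && !sameOrigin) {
      request.abort();
    } else {
      request.continue();
    }
  });

  // Render the page with headless Chromium and grab the resulting DOM
  await page.goto(url, { waitUntil: 'networkidle2' });
  const html = await page.content();
  await browser.close();

  return extractor(html).text; // unfluff returns { title, text, ... }
}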


TTS

After the extraction, the text can be sent to the Polly API. At first, I was using the synthesizeSpeech method from the SDK. Aside from the parameters, this method accepts a callback function that can handle the response audio stream. That buffer can be stored in a file on the disk. While looking for a way to upload the audio file to S3, I found that there is a much simpler solution, which also eliminates the 3000 character limit of the synthesizeSpeech method. The Polly SDK also has an option to start a task using the method startSpeechSynthesisTask. This method accepts an additional parameter called ‘OutputS3BucketName’. After the task is completed, the output file is placed into the specified S3 bucket.
I really enjoyed seeing how this integration of different platform services simplifies the development.
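A sketch of that task-based approach with the AWS SDK for JavaScript is shown below; the region, bucket name and voice are placeholders.

const AWS = require('aws-sdk');

const polly = new AWS.Polly({ region: 'eu-central-1' }); // placeholder region

async function createAudio(text) {
  const result = await polly.startSpeechSynthesisTask({
    OutputFormat: 'mp3',
    OutputS3BucketName: 'article2audio-output', // assumed bucket name
    Text: text,
    VoiceId: 'Joanna'                           // any Polly voice
  }).promise();

  // SynthesisTask contains the TaskId, the TaskStatus and the OutputUri of the file in S3
  return result.SynthesisTask;
}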

In hindsight, a real consumer application might want to synthesize small snippets and stream them subsequently. That would almost eliminate the wait time, since generating an audio file and loading it can take up a lot of time for impatient users. However, I did not choose this path because I intended to create a cache with my database.

The Response object from the startSpeechSynthesisTask method contains a link to the file, but there are two issues.
The first problem is that S3 files are not public by default. You need to complete three different steps to make them publicly available.
First, you need to unblock all public access in the permissions. Then you need to enable public access for ‘list objects’ for everyone. After that, a bucket policy needs to be created. The policy generator luckily makes that quite easy.

The second issue is that even when public access is enabled, the asset cannot be loaded immediately, because the generation takes a couple of seconds, so I needed to notify and update the frontend. Eventually, I solved this by starting an interval once the audio is requested. The interval checks whether the task has been completed and renders an audio element once it has.
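The polling on the frontend could be sketched like this; the status route, the element id and the interval length are assumptions about how such a check might be wired up.

function waitForAudio(taskId, fileUrl) {
  const interval = setInterval(async () => {
    const response = await fetch(`/api/status/${taskId}`); // hypothetical status endpoint
    const { status } = await response.json();

    if (status === 'completed') {
      clearInterval(interval);
      const audio = document.createElement('audio');
      audio.controls = true;
      audio.src = fileUrl;                                  // the public S3 URL of the generated file
      document.getElementById('player').appendChild(audio); // hypothetical container element
    }
  }, 2000);
}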

The authentication for AWS had to be done using the Cognito service by creating an identity pool.

Deployment

After the application was running successfully on my local machine, I had to deploy it. I chose the Platform-as-a-Service offering on the IBM Cloud because I wanted to try out Cloud Foundry, and I thought my simple Express application was a good use case for this abstraction layer. I could have solved some parts of the app with a cloud function, but I do not need the level of control of a virtual machine. Because Cloud Foundry requires a lot less configuration than a VM, it should be easy to deploy.
That is what I thought.
I quickly ran into restrictions anyway. On top of the things I had to figure out due to my lack of knowledge of this platform, I had to spend a lot of time troubleshooting.
The biggest issue I faced was caused by Puppeteer. At install time, the puppeteer package includes three versions of Chromium for macOS, Linux and Windows, which are each 150-250 MB in size. The size exceeds the free tier limit, so I had to upgrade. After that, I could not get Puppeteer running on the server, because the Ubuntu instance does not include all the Debian packages that are necessary for running Chromium.
This really set me back. There is no way to install packages via sudo apt-get on a PaaS, and doing anything manually would eliminate the benefits of the simple deployment. I really thought I had reached the limits of Platform-as-a-Service until I discovered that you can use multiple buildpacks with Cloud Foundry, even if they are not included on the IBM Cloud, by adding their GitHub repo.

buildpacks: 
    - https://github.com/cloudfoundry/apt-buildpack
    - nodejs_buildpack

This allows you to add an apt.yml file to specify the packages you want to install.
Afterward, I was able to run my application.


Tests

For tests, I chose to use Mocha and Chai. Except for a few modifications for the experimental modules I am using, this integration was straightforward. It uncovered a few error cases I had not considered before.


Conclusion

To sum up, I can say that I learned a lot during this project, especially because a lot of things were completely new to me. But now I feel more confident working with those tools, and I want to continue working on this project.
I can also recommend using Cloud Foundry. If you know how to deal with the restrictions and know the conditions of your runtime environment, it is pretty flexible and enjoyable to use.

Repo: https://github.com/krsilas/article2audio

A beginners approach at a cloud backed browser game

Foreword:

This article reflects my experiences while developing a real-time browser-based game. The game of choice was Tic-Tac-Toe, as it is straightforward to implement and does not have complex game mechanics. The following paragraphs explain the experiences I gathered while developing this game with a cloud-based infrastructure in mind. The article is not so much a manual on how to create a game in the cloud; it is more of a diary showcasing all the pitfalls and impressions I collected. It is aimed at beginner developers and first-timers in bigger projects, as I share common pitfalls from my first bigger project which you should definitely avoid.

To try out the prototype that I have created, check out the GitHub repository. There is a complete manual on how to start the application as well.

A simple game of Tic Tac Toe.

Initial project goal:

The initial goal of this cloud project was to create a simple browser game that automatically scales with an increasing number of concurrent players. The key component of any game is the game servers, which players need in order to play against each other. Having no available game server means that no additional player can join in and have fun playing your game. The seamless integration of additional game servers is a key point: no one wants to shut down the whole backend and bring it back up just to increase server capacity. So one goal was to achieve the seamless integration of game servers, and when they are not needed, the game servers should be removed again without any hassle.

The whole structure of the app is designed to cope with load in every possible part. The frontend part, for example, which consists of ReactJS, should be relatively easy to scale: a load balancer would simply redirect frontend requests to one of the available servers. The next server that gets requested would be the matchmaking server. Here, several matchmaking servers should be available to choose from. However, it is important to keep the connection to the same server every time, as these are socket connections, which make it possible to transfer changes from the servers, which the frontend cannot access by default, in real time.

Technology stack

The technology used in this project is simple and easy to use. It mostly consists of technology I have used in the past and am quite familiar with. It saves a good amount of time when you do not have to actively learn a new technology and can instead use one you are familiar with and that meets the requirements.

Frontend technology stack

For the frontend part I sided with ReactJS. It is more of a personal preference to use ReactJS instead of Vue.js or plain HTML with JavaScript. ReactJS makes it easy to transform changes in data into the rendered HTML without ever writing a function to actively change the DOM yourself. Changes to the DOM are easy and lightweight, which is a great performance deal when doing frequent changes in the DOM. In my use case, a browser game, it was the perfect solution: just get the data from the game server, push it into the fitting variable in the frontend, and ReactJS magically adjusts according to the given data.

ReactJS profits from huge community support as well. There are several packages that you can integrate into your project. In this project I integrated two rather famous packages, React-Router and React-Redux. React-Router makes routing between different pages easier without reloading the whole page. In my use case, the page consists of several components. Traditionally there is a header, a navigation bar, and then all the information about the page you are on. If you are on the home page, it displays the home page; when you are on the about page, it displays the about page. With React-Router, only the components that are changing are reloaded. So, when going from the home page to the about page, only the component holding the about page re-renders. The header and navigation bar stay the same, as nothing changed there. It would be a huge waste of resources to re-render components that have not changed and are still used by the page.

React-Redux is used to achieve a global state. Each React component has a state in which you store information, for example the value of the input field in your form. But the problem that occurs when having multiple components is that you cannot pass this state to your siblings. You can most likely pass the state to your child components, but that is it. React-Redux introduces a global state that you can freely declare and use wherever you want. In this project it is used to save the information about the game you want to enter: from the lobby component you get the room name and the server name, then get redirected to the play component, and the play component reads the information about the game you want to join from the global state.

Talking about the play component, sockets are used to achieve real-time communication between the client and the server. Socket.IO is used to establish a connection between the client and the game server. The game server holds a connection to both players. Each player’s interaction gets sent to the game server, validated if needed, and then both players get the resulting game state back from the game server. Socket.IO is a proven framework with good community support and has great features such as rooms, which make it easy to use in a game project. Socket.IO’s rooms are used to create the different game rooms each server has. When a player joins a server, the game server’s Socket.IO socket puts it into the matching room. All communication between the players in this room can now easily be emitted to just the room, and not to all connected sockets.
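A small server-side sketch of how such rooms can be used is shown below; the event names and the applyMove() helper are assumptions, not the project’s actual protocol.

const httpServer = require('http').createServer();
const io = require('socket.io')(httpServer);

io.on('connection', (socket) => {
  socket.on('joinRoom', (roomName) => {
    socket.join(roomName); // put the player into the game room
  });

  socket.on('playerMove', ({ roomName, move }) => {
    const newState = applyMove(roomName, move);  // hypothetical game logic / validation
    io.to(roomName).emit('gameState', newState); // only the players in this room receive the update
  });
});

httpServer.listen(4000);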

The application’s home page

Backend technology stack

The backend uses NodeJS servers with Express to provide an easy way to handle requests. Each server exposes its own API endpoints, which are used either by other servers, for debugging purposes, or to provide general information. Additionally, the game servers and matchmaking servers hold Socket.IO socket connections to communicate with the game server, the matchmaking server, or the frontend. With Socket.IO it is easy to listen to connects, disconnects and user-defined room events, which keeps managing the sockets from becoming a total nightmare. Listening to disconnects is important for the matchmaking server: it removes a disconnected game server from its list of available game servers and sends a request to the master server to check that game server’s health. If the game server does not respond, it is removed from the master server’s server list as well, because it is not reachable anymore and therefore cannot be used to play matches on.
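A rough sketch of this disconnect handling on the matchmaking server could look like the following; the registration event, the health-check endpoint and the addresses are assumed for illustration, not taken from the actual code base:

const { Server } = require('socket.io');
const fetch = require('node-fetch');

const io = new Server(4000);                 // matchmaking server socket port (example)
const gameServers = new Map();               // address -> metadata of registered game servers
const MASTER_URL = 'http://localhost:3000';  // assumed master server address

io.on('connection', (socket) => {
  // A game server announces itself after connecting (assumed event name).
  socket.on('register-game-server', (info) => {
    gameServers.set(info.address, { ...info, socketId: socket.id });
  });

  socket.on('disconnect', async () => {
    // Find and remove the game server that belonged to this socket.
    for (const [address, info] of gameServers) {
      if (info.socketId !== socket.id) continue;
      gameServers.delete(address);
      // Ask the master server to verify the game server's health (assumed endpoint).
      await fetch(`${MASTER_URL}/check-health?address=${encodeURIComponent(address)}`)
        .catch(() => { /* master unreachable – nothing more to do here */ });
    }
  });
});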

Two npm packages have proven to be a great help in setting up these servers and making requests to other APIs simple. The first package is node-fetch, which, just like plain JavaScript in the browser, provides the fetch() method to asynchronously fetch information from an API. Unlike the standard JavaScript you use on your frontend, the fetch() method is not natively included in NodeJS. The other package is called minimist. It is a great convenience for reading the parameters the servers get started with. To run multiple servers locally, each server needs an adjustable port, so most servers have a fitting parameter to set the port number.
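As a small illustration, reading the port with minimist and registering with another server via node-fetch might look roughly like this; the flag name, the /register endpoint and the addresses are assumptions:

const minimist = require('minimist');
const fetch = require('node-fetch');
const express = require('express');

// Read command line arguments, e.g. `node gameserver.js --port 5001`.
const args = minimist(process.argv.slice(2));
const port = args.port || 5000;              // fall back to a default port

const app = express();
app.get('/info', (req, res) => res.json({ type: 'game-server', port }));

app.listen(port, async () => {
  console.log(`Game server listening on port ${port}`);
  // Register with the master server on startup (assumed endpoint).
  await fetch('http://localhost:3000/register', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ address: `http://localhost:${port}` }),
  }).catch(() => console.log('Master server not reachable yet'));
});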

Testing-wise, Mocha and Chai are widely used for testing NodeJS applications. Mocha is a very common JavaScript test framework and Chai is a fitting assertion library that extends Mocha’s asserting capabilities. Chai’s syntax is fairly easy to learn and easy to read as well.

Due to poor structural choices during development, most of the servers I created cannot be tested without the others actually running. For example, the test case for the game server requires the master server to run, as a game server’s first step is to register itself with a master server. The testing is therefore set up so that all required servers are running before the tests start.
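A minimal sketch of such a test with Mocha and Chai, assuming the master server (and any other required server) is already started beforehand and exposes a hypothetical /servers endpoint:

const fetch = require('node-fetch');
const { expect } = require('chai');

describe('game server registration', function () {
  // The master server and the game server must be running before this suite
  // starts, e.g. launched from an npm script or a global before hook.
  it('appears in the master server list after starting', async function () {
    const res = await fetch('http://localhost:3000/servers'); // assumed endpoint
    const servers = await res.json();
    expect(res.status).to.equal(200);
    expect(servers).to.be.an('array').that.is.not.empty;
  });
});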

Current state of the project

As of writing this article, the project is in a prototype state. All the servers work like they are meant to, and game servers can be seamlessly integrated into the running application. The whole application was deployed to Azure Virtual Machines and proved to work.

When trying out a different Azure service, like App Service, the application did not deploy as intended and did not work out of the box. When deciding which Azure service to use, you need to check the different services for “compatibility” with your application. For example, the game server uses two ports for sockets: one for the socket that connects with the player and one for the socket that connects to the matchmaking server. The Azure App Service, however, only allows your application to use port 8080, so you either change your application to use that port or switch to a different Azure service, a virtual machine for example.
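Had the application been adapted to App Service, a common pattern would have been to read the port from the environment instead of hard-coding it; this is only a sketch of that pattern, not the project’s code:

const express = require('express');
const app = express();

// Use the port provided by the hosting environment (App Service sets PORT),
// and fall back to a local default for development.
const port = process.env.PORT || 8080;

app.get('/', (req, res) => res.send('Game server up'));
app.listen(port, () => console.log(`Listening on ${port}`));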

The biggest problem I encountered so far is finding a reliable way to deploy my application to Azure Virtual Machines. Originally, I wanted to use Azure DevOps Pipelines which, after a successful build, would deploy the whole application to different virtual machines, but that did not work right out of the box as I thought it would. More on that in the ‘Cloud Integration’ chapter.

Application structure

The optimal, aimed-for structure looks like this:

A draft of the intended structure of the application

Frontend, matchmaking and game servers can be turned on and off depending on the current amount of players and the current load. Unfortunately, in the current state, there is no implemented and tested way for one of several matchmaking servers to be chosen when a player connects for the first time. It might work, but the frontend would need a couple of changes to dynamically change the address of the matchmaking server; at the moment it is hard-coded. The current structure looks more like this:

Current state of the structure.

Cloud Integration

Out of the several well-known cloud service providers, I sided with Microsoft Azure to get to know this service. During the cloud development lectures I had already tinkered with AWS, IBM Cloud and Google Cloud, so to further expand my basic knowledge about cloud services, I went with Azure. Adding to that, creating an Azure account gets you 170€ (200$) of free credit for the first 30 days, but you must verify yourself with a credit card. Payments only start if you switch your account from the free tier to a subscription-based tier.

Cloud Structure

Azure offers a variety of cloud services like virtual machines, load balancing, clustered databases and Azure DevOps. Azure DevOps is basically your cloud-hosted Jenkins instance, allowing you to connect to your GitHub repository and automatically run pipelines depending on the actions you take in your repository. For instance, when you push to the master branch of your repository, your DevOps pipeline automatically builds your project, runs unit tests, and can then deploy your application to the Azure service of your choice. It is highly customizable and offers a variety of template applications to get started with understanding how these pipelines work and are set up.

Cloud Pipelines

The development process should migrate seamlessly from local development to deployment. This means that every server can be set up locally, used for development and testing, and when finished, the changes can be pushed to the repository and a current build with all features is set up automatically. The “dream pipeline” would look like this:

A deployed and running application is just a push away from being ready to use, without ever setting anything up by hand afterwards. Having such a powerful pipeline brings several improvements while developing:

  • Automatic project building and running tests
  • Deployment happens automatically
  • All deployments are handled the same way and are consistent
  • Decreases time fiddling with deployments done by hand

Choosing the fitting cloud services is a key requirement before you actually start developing your application. I already mentioned the problem I got myself into because I did not research the fitting cloud technologies beforehand. I am not saying that the Azure services I chose were the only right choice; the problem was that I did not spend enough time researching the different approaches I could take with Azure’s cloud services and what requirements those services have. After a good amount of fiddling around, which got me to know the Azure App Service better, I understood that the current structure of my application simply could not use this service. The benefits of using Azure App Service would have been huge, as it automatically scales depending on the load. It does, however, limit your ability to directly debug and manage your application: it is not really possible to just log in to your service via SSH, look at the logs or start and stop the application. The Azure documentation shows a fully detailed comparison between the different services here: Azure Technology Choices

Project challenges

This chapter splits up into two parts: challenges in developing the application itself, and challenges in working on this project as a whole.

The biggest problem I encountered while developing the application was socket management in the frontend. This problem occurred because two different components needed information from the incoming game event data of the active game. The ideal solution would have been to share the socket across the application in a global-state manner so that each component could set its own listeners on the information it needs. But that did not work out as a global state with React-Redux. The solution was to receive all the information in the game board component and then push it into a global state. The other component, the game status, then retrieves it from the global state and updates its values according to the data. This worked in the end and is sufficient for the prototype, but in a real-world, production-ready application some sort of “socket manager” or “socket controller” would need to be implemented.
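The workaround described above could look roughly like this; the component names, action type and event names are made up for illustration, and a reducer (not shown) is assumed to copy the incoming data into state.gameState:

import React, { useEffect } from 'react';
import { io } from 'socket.io-client';
import { useDispatch, useSelector } from 'react-redux';

// Game board component: owns the socket and pushes incoming data into the global state.
function GameBoard({ serverUrl, roomName }) {
  const dispatch = useDispatch();

  useEffect(() => {
    const socket = io(serverUrl);
    socket.emit('join-room', roomName);            // assumed event name
    socket.on('game-update', (gameState) =>        // assumed event name
      dispatch({ type: 'GAME_UPDATE', gameState }));
    return () => socket.disconnect();              // clean up when the component unmounts
  }, [serverUrl, roomName, dispatch]);

  return <div>{/* render the board from the global state here */}</div>;
}

// Game status component: only reads the shared state, it never touches the socket.
function GameStatus() {
  const gameState = useSelector((state) => state.gameState);
  return <p>{gameState ? `Last move by ${gameState.by}` : 'Waiting for the game to start…'}</p>;
}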

Another problem I encountered with the current prototype was testing. Especially the socket connections sometimes make it hard to create reliable tests, as each test needs its own sockets set up and ready to emit and receive data. The straightforward solution is to create “before” and “after” functions that run before and after each test to set up the sockets and close them afterwards. In the test itself, only the listeners are set and data is emitted through the prepared sockets. The really tricky part is determining when to stop the test. A normal test calling a REST API is finished when the response has been received and the data evaluated. With sockets, especially when testing two-player operations such as joining and emitting a player move, you have to watch carefully when to stop the test. Stopping the test is done by calling “done()”, which in Mocha is simply a callback the test function receives; when “done()” is called, the test stops. Sockets, however, can continue to receive information about events they are subscribed to. If two sockets have to receive the same event, one socket gets the information first and the other one last. The order in which the sockets receive the information could be mixed up, for example when the network does not deliver packets reliably due to packet loss, meaning the first socket receives its packet after the second one. The test would then end once the first socket received its data, even though the second socket still has to receive and evaluate its data. When running these tests locally, nothing like this occurred, but it is still a plausible problem that can cause the tests to fail.
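A sketch of such a two-player socket test, using the same assumed event names as above and a counter that delays done() until both sockets have received and checked the event:

const { io } = require('socket.io-client');
const { expect } = require('chai');

describe('two player move', function () {
  let playerA, playerB;

  beforeEach(function (done) {
    // Connect both players and put them into the same room before each test
    // (game server address and event names are assumptions).
    playerA = io('http://localhost:5001');
    playerB = io('http://localhost:5001');
    let connected = 0;
    const ready = () => {
      if (++connected < 2) return;
      playerA.emit('join-room', 'test-room');
      playerB.emit('join-room', 'test-room');
      done();
    };
    playerA.on('connect', ready);
    playerB.on('connect', ready);
  });

  afterEach(function () {
    playerA.close();
    playerB.close();
  });

  it('delivers a move to both players', function (done) {
    let received = 0;
    const check = (gameState) => {
      expect(gameState.lastMove.column).to.equal(3);
      // Only call done() once BOTH sockets got the event, otherwise the test
      // could end while the slower socket still has data to evaluate.
      if (++received === 2) done();
    };
    playerA.on('game-update', check);
    playerB.on('game-update', check);
    playerA.emit('player-move', { roomName: 'test-room', move: { column: 3 } });
  });
});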

Most of the problems I encountered were on the more formal side of this project. A huge problem that I only realized when there were two to three weeks left until the presentation was my time management concerning the development and deployment of the application. Development was going slower than expected because of a slow month of August and a packed month of September, in which my practical semester started. That meant that after sitting more than 8 hours in front of a computer doing development tasks, I had to spend my whole free time afterwards working on my project. I never expected it to be that hard to get things done after work, but after 8 hours, no matter what I had worked on, I simply was not as concentrated, focused and quick while developing and driving the project forward. I was rather exhausted, and that caused the project’s progress to slack.

As this is my first bigger project that I decided to do on my own, I got to know the difficulties of planning and managing a project by myself, which led to quite some problems during the whole project. Time management has already been mentioned; in addition, the architectural side would need some serious refactoring if the application ever went into a productive environment. This happened due to poor knowledge about handling all these servers and components and just “coding away”.

The whole idea of this project was to develop something for the cloud. Unfortunately, I set my expectations far too high for a single person, especially a beginner, to achieve something that big. I did, however, manage to create some kind of overview of my expectations. I already mentioned the pipeline that would get triggered by an action in the GitHub repository; this pipeline was designed to capture everything I would need to research in order to build it.

Without proper architectural knowledge it is quite hard to keep clean code and a reasonable structure inside each server application. For prototyping this is somewhat sufficient, but to actively develop and maintain a project, a clean structure and clean code is a must.

Learning for future projects

This being my first bigger project that results in actual software with a real use case, many different things came up along the way, good and bad. In the end the whole project taught me very valuable lessons about how I should approach my next project during my studies. There are several key points worth pointing out.

The first one is a clear project scope that, once defined, should not suffer from huge changes. The project scope, especially for a timed project, needs to be adjusted just right to match the available manpower and knowledge. Using new, not yet familiar technology is great, no arguing there, but getting started with new technology takes a lot of time, especially when going beyond the “tutorial” stuff. In my next project, I will make sure to allow enough time for learning new things. This goes hand in hand with proper architectural planning: having no structure and no plan to follow makes it very hard to maintain and expand code, and other people may have a very hard time understanding the project at all.

Cloud architecture and cloud services come with the benefit of having huge resources on demand. It is definitely a topic that is going to be present for quite some time, so I will continue using them. Especially the benefits of cloud computing over traditional computing, like load balancing and creating resources with one call or click, are very promising and make resources easy to create and manage. In combination with DevOps, automatic deployment can save a huge amount of time over the course of developing the application.

 

Realization after finishing the project

During this project, I learned a lot about developing an application that makes, or should make, use of the cloud as a distributed platform enabling my application to scale and run however and wherever I want.

The key realization about project management is that such a rather complex and feature-rich application needs more time and more developers to get done in time with a releasable build. It is surely doable, but you really need to know your stuff. There would only be little time to get to know additional technologies, so that enough time remains to focus on releasing a finished build that meets the requirements. It is more a matter of knowing things and how they work than of being a high-tier developer. More time was spent on researching and trying things out than on actually working with them.

Azure’s cloud services have shown me several possibilities to publish my application, each with totally different requirements and benefits. Understanding what you need and how to implement it is something I will have to dig deeper into in my own research time. There is huge potential to be discovered, but you need time to integrate the cloud as your infrastructure provider and get comfortable with it.

The whole project was a lot of fun. Even though I only got around to building a working prototype and only caught a glimpse of cloud computing, I realized the huge potential for further projects and the necessity to work on cloud-backed projects.