If WASM+WASI existed in 2008, we wouldn’t have needed to created Docker. That’s how important it is. Webassembly on the server is the future of computing. A standardized system interface was the missing link. Let’s hope WASI is up to the task!
This tweet about WebAssembly (WASM) by the inventor of Docker hints at the potential innovative power of this technology. This article first covers the most important technical fundamentals of WebAssembly, which are necessary for further understanding. It then takes a closer look at the WASI standard, which builds the bridge to container virtualization. Finally, with Krustlet (Kubernetes) and wasmCloud, we look at two existing cloud technologies that are fundamentally based on WebAssembly.
The idea of this project was to create a simple chat application that would grow over time. As a result, there would be more and more clients wanting to chat with each other, which might lead to problems on the server that have to be fixed. Which exact problems would actually occur, we would find out over the course of the project.
At the center is a simple chat server that broadcasts incoming messages to all clients. In order to notify the clients about new messages, the connection should be persistent and bidirectional. Therefore, we based the communication on the WebSocket protocol.
Furthermore, we wanted to see how the server behaves under rising load. Therefore, we planned to perform several load tests to reveal the weak points and improvements as the system evolves.
For the examination of the lecture “Software Development for Cloud Computing”, I want to build a simple Random Chat Application. The idea of this application is based on the famous chat application called Omegle. Omegle is a platform where people can meet random people from all over the world and have a one-on-one chat. With Omegle, people can have a conversation not only via normal text chat but also via video chat. Unlike Omegle, my application only has a plain text chat function.
2. Technologies used for the development of the application
a. Frontend
React
For frontend development there is a great number of open-source libraries. React is currently one of the most popular and widely used ones. There are many reasons for a developer to choose and use React. It is one of the most popular front-end technologies on the market. Compared to other libraries out there, React seems to be easier to learn. As it doesn’t take much time to learn this technology, developers can quickly practice and build their very first project. React helps increase productivity through reusable components and development tools. There are many development tools available for React that speed up the project. The most important reason is its very strong community support. There are thousands of free React tutorial videos and blog posts on the internet, which is very helpful for developers. Therefore, I decided to learn this library during previous semesters, and this project gives me a chance to get real practice.
b. Backend
Node.js
Node.js has become one of the most popular JavaScript tools. Node.js is a JavaScript runtime environment that allows companies to make their web development process more efficient: the frontend and backend teams can now work together more easily. Since Node.js executes JavaScript on Google’s V8 engine, everything runs very quickly. Node.js uses an event loop, which handles all asynchronous input/output operations, and frameworks built on top of it benefit from this speed as well. By allowing developers to write JavaScript for both the frontend and the backend, Node.js makes it easy to exchange data between server and client and to keep it synchronized.
Socket.io
When I decided to develop a chat application, I immediately thought about Socket.io. I got to know this JavaScript library during the course Web Development 2. Socket.io is a good fit for building real-time applications: it helps parties in different locations connect with each other and transmits data instantly through an intermediary server. Socket.io can be used in many applications such as chat, online games, live score updates of an ongoing match, … It is widely used by the developer community because of its speed and convenience. Socket.io provides many methods as well as useful features such as security, auto-reconnect, disconnection detection, multiplexing, room creation, …
3. Application explanation
a. Client
As I mentioned, I use React for the client side. This gave me the chance to get to know the concept of a Single Page Application, whose content is loaded only once and then updated dynamically. For the interaction with the page or with subsequent pages, no further request to another server is needed, which means that the page is not reloaded. To apply this concept, React offers a package named “react-router-dom”. My application is very simple, so it only has two paths to be loaded. The root path is where the user inputs his name, and it loads the Join component. The other path loads the Chat component, in which the user sends messages after getting a room.
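A minimal sketch of these two routes, assuming react-router-dom v5 and the component names Join and Chat (the exact paths and file layout are assumptions):

import React from "react";
import { BrowserRouter as Router, Route } from "react-router-dom";
import Join from "./components/Join";
import Chat from "./components/Chat";

// Two routes: "/" loads the Join component, "/chat" loads the Chat component.
const App = () => (
  <Router>
    <Route path="/" exact component={Join} />
    <Route path="/chat" component={Chat} />
  </Router>
);

export default App;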
The Socket.io client library is imported because it is not provided by JavaScript itself; it exposes the ‘io’ namespace. The endpoint URL of the Socket.io server is then passed to ‘io’ to establish the connection.
Now users can send and receive messages from the server after the room is created.
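On the client, this could look roughly like this (the endpoint URL and the event names are assumptions for illustration):

import io from "socket.io-client";

// Endpoint of the Socket.io server (assumption for local development).
const ENDPOINT = "http://localhost:5000";
const socket = io(ENDPOINT);

// Tell the server that this user wants to join a room.
socket.emit("join", { name: "Alice" }, (error) => {
  if (error) alert(error);
});

// Receive messages broadcast by the server for this room.
socket.on("message", (message) => {
  console.log(`${message.user}: ${message.text}`);
});

// Send a message to the room.
socket.emit("sendMessage", "Hello!");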
b. Server
To set up the server, a few packages need to be imported. The server is then started and listens on port 5000.
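A minimal sketch of this setup, assuming the usual combination of Express, the http module, and Socket.io:

const http = require("http");
const express = require("express");
const socketio = require("socket.io");

// Create the Express app, wrap it in an HTTP server and attach Socket.io to it.
const app = express();
const server = http.createServer(app);
const io = socketio(server);

const PORT = process.env.PORT || 5000;
server.listen(PORT, () => console.log(`Server listening on port ${PORT}`));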
The server also stores users temporarily, so it knows which usernames already exist, and it can remove users after they terminate their chat. To handle all of these actions, I wrote a users.js file, which provides functions such as addUser, removeUser, getUser, …
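A sketch of what users.js might contain (the exact fields and error handling are assumptions):

// users.js – keeps the connected users in memory
const users = [];

const addUser = ({ id, name }) => {
  const existingUser = users.find((user) => user.name === name);
  if (existingUser) return { error: "Username already exists" };
  const user = { id, name };
  users.push(user);
  return { user };
};

const removeUser = (id) => {
  const index = users.findIndex((user) => user.id === id);
  if (index !== -1) return users.splice(index, 1)[0];
};

const getUser = (id) => users.find((user) => user.id === id);

module.exports = { addUser, removeUser, getUser };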
I want to create a chat application where users don’t need to know someone beforehand; a chat room is automatically created for them. With this application, they can meet a new friend, and the server gets them a chat room.
I created a variable called queue, which is an array. It temporarily stores a user who does not have a partner yet. Every user who has entered his name is connected to the socket, and the socket knows that he wants to join a room. In the callback function of the socket, the name and socket ID of the user are saved by the function ‘addUser’ from users.js. Then the socket checks whether another user is waiting for a partner in the queue. If someone is waiting, he is popped from the queue, his socket and the partner’s socket are connected, and their room ID is a combination of the two socket IDs. If no one is waiting for a room, the current socket is pushed onto the queue and waits for another user to join.
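The matching logic could look roughly like this (event names, message texts, and payloads are assumptions; addUser and removeUser come from the users.js sketch above):

const { addUser, removeUser } = require("./users");

let queue = []; // sockets of users still waiting for a partner

io.on("connection", (socket) => {
  socket.on("join", ({ name }, callback) => {
    const { error } = addUser({ id: socket.id, name });
    if (error) return callback(error);

    // Greet the user who just joined.
    socket.emit("message", { user: "admin", text: `Welcome, ${name}!` });

    if (queue.length > 0) {
      // Someone is already waiting: pair both sockets in a shared room.
      const partner = queue.pop();
      const room = partner.id + socket.id;
      partner.join(room);
      socket.join(room);
      io.to(room).emit("message", { user: "admin", text: `You are now chatting in room ${room}` });
    } else {
      // Nobody is waiting yet: keep this socket in the queue.
      queue.push(socket);
    }
    callback();
  });

  socket.on("disconnect", () => {
    removeUser(socket.id);
    queue = queue.filter((s) => s.id !== socket.id);
  });
});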
c. Problem
CORS:
CORS is an abbreviation for Cross-Origin Resource Sharing. Browsers enforce a same-origin policy as a security measure, because JavaScript could otherwise load content from other servers without the user’s knowledge; by default, all data has to come from the same origin. The problem can be solved when both sides are aware of the data exchange and explicitly allow it; the request is then permitted.
I installed the CORS package on my server. Its origin option configures the Access-Control-Allow-Origin header. Now client and server can communicate without errors.
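In code, this boils down to something like the following (the allowed origin is an assumption; app and server refer to the Express/Socket.io setup sketched above):

const cors = require("cors");

// Allow the React client (assumed to run on http://localhost:3000) to access the server.
app.use(cors({ origin: "http://localhost:3000" }));

// Newer Socket.io versions additionally expect the allowed origin in their own options:
// const io = socketio(server, { cors: { origin: "http://localhost:3000" } });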
4. Testing
Testing is very important during the development of an application. It helps developers discover existing errors and bugs before releasing the application and therefore enhances its quality. I decided to test only the server side because it is more complicated than the client side. Two tests were created.
Single-user test: checks whether a user can connect to the server and receives a welcome message from it.
Two-user test: checks whether a room is created when there are two users and whether both of them receive the same welcome message from the same room.
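A sketch of how these two tests could look with Jest and socket.io-client (event names, message texts, and the port are assumptions that match the sketches above):

const io = require("socket.io-client");

const URL = "http://localhost:5000";

test("a single user can connect and receives a welcome message", (done) => {
  const client = io(URL);
  client.on("message", (message) => {
    expect(message.text).toContain("Welcome");
    client.close();
    done();
  });
  client.emit("join", { name: "Alice" }, () => {});
});

test("two users are paired in a room and receive the same room message", (done) => {
  const clientA = io(URL);
  const clientB = io(URL);
  const roomMessages = {};
  // Collect the room message of each client and compare them once both arrived.
  const collect = (key) => (message) => {
    if (message.text.includes("room")) {
      roomMessages[key] = message.text;
      if (roomMessages.a && roomMessages.b) {
        expect(roomMessages.a).toEqual(roomMessages.b);
        clientA.close();
        clientB.close();
        done();
      }
    }
  };
  clientA.on("message", collect("a"));
  clientB.on("message", collect("b"));
  clientA.emit("join", { name: "Bob" }, () => {});
  clientB.emit("join", { name: "Carol" }, () => {});
});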
5. Deployment
a. Docker Swarm and Kubernetes
When deploying this project, I create a container for each of the client and server sides. Docker is the most popular container platform. I want to learn to write a Dockerfile and a docker-compose file to create the containers.
For the cloud development environment, I choose Amazon Web Services. It is currently one of the most comprehensive platforms for cloud computing services. I use an EC2 virtual server to bring my project online. I would also like to work with Kubernetes to manage the containers, so I choose EKS, the managed Kubernetes service from AWS.
If you work with a lot of containers, you have to be able to manage them efficiently. An orchestration tool enables exactly that. With the orchestration tool, you can integrate containers that you created with Docker. Then you use orchestration to manage, scale, and move the containers.
Although Kubernetes and Docker can work well together, there is competition when it comes to Docker Swarm. I have considered some features of Docker Swarm and Kubernetes.
Scaling: if the load on our application gets too high, Kubernetes can add more nodes to our cluster. Of course, we have to configure Kubernetes correctly so that it can create a new virtual machine; a node is then added to the cluster.
Installation: with Docker Swarm, it is easy to create a new node and then integrate it with Swarm. To configure Kubernetes, on the other hand, you have to determine the size of the nodes and how many master and worker nodes there are.
Load balancing: Docker Swarm offers automatic load balancing for applications, whereas Kubernetes gives you the flexibility to configure load balancing manually.
Storage volume sharing: since Docker Swarm manages Docker containers directly, containers can easily share data (and not just data). Kubernetes puts containers into pods, so a container cannot simply communicate with another one; you need additional Kubernetes components, e.g., a Service, to create the connection.
Monitoring: While Swarm requires additional resources for monitoring and keeping a log, these tasks are already provided for in Kubernetes.
b. Amazon EKS and Kops
When deploying Kubernetes on AWS, you can configure and manage the deployment yourself for full flexibility and control. Two common options are Amazon Elastic Kubernetes Service (EKS) and Kops.
EKS is a managed service offered by AWS. It uses automatically provisioned instances and offers a managed control plane for the deployment.
Kops is an open-source tool that can be used to automate the deployment and management of clusters on AWS. It is officially supported by AWS.
c. Dockerfile
To work with Kubernetes, I need to create all the necessary containers. Containers are created by writing Dockerfiles. These Dockerfiles contain all the information about the container, e.g., the name of the image, the directory that stores our application, the port, … Docker follows this information and creates the container step by step. Besides that, I use Docker Compose to start the process of creating the containers.
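For the Node.js server, such a Dockerfile could look roughly like this (base image, file names, and port are assumptions):

# Dockerfile for the chat server (sketch)
FROM node:14-alpine
WORKDIR /usr/src/app
# Install dependencies first so this layer can be cached
COPY package*.json ./
RUN npm install
# Copy the application code
COPY . .
EXPOSE 5000
CMD ["node", "index.js"]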
d. Kubernetes architecture on AWS cloud
I choose Kubernetes, Amazon EC2, EKS, and ECR for the deployment of my project. Shown below is the architecture of Kubernetes on the AWS cloud.
The Kubernetes server is the control plane. It creates a Kubernetes cluster. In the cluster, there are master nodes that create and manage the worker nodes. When you call deployment commands, the Kubernetes server sends messages to EKS, and EKS then passes the tasks on to the worker nodes.
A worker node contains several pods, in which the Docker containers run. I choose the Deployment controller to keep these pods running and to observe them. On the worker nodes, I create one pod for the client, two pods for the server, and one pod for Redis. A load balancer can be used to reach the application from outside.
I decided to have two server pods because I want to scale my application. When more people try to connect to my application, the requests are handled faster with two pods instead of one. The picture below shows horizontal scaling, which means that there are several copies of the application and these copies can work alongside each other at the same time.
For example, for the client pod I write a client-deployment.yaml file:
A deployment named client is created, indicated by the .metadata.name field.
The .spec.selector field defines how the deployment finds the pods to be managed.
The deployment creates one replica pod, indicated by the .spec.replicas field.
The .spec.template.spec field indicates that the pod runs a container. The container is created from a Docker image, which has been saved in ECR (Elastic Container Registry).
The container is named client, as defined in the .spec.template.spec.containers[].name field.
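Put together, the client-deployment.yaml could look roughly like this (the image URL is a placeholder; account ID, region, and repository name must be filled in):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: random-chat
  template:
    metadata:
      labels:
        app: random-chat
    spec:
      containers:
        - name: client
          # Placeholder for the image stored in ECR
          image: <aws_account_id>.dkr.ecr.<region>.amazonaws.com/random-chat-client:latest
          ports:
            - containerPort: 3000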
To enable network access to the set of pods I have to create a Service, which is written in client-service.yaml.
This specification creates a new Service object called “client” that targets TCP port 3000 on every pod labeled app=random-chat.
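A sketch of the corresponding client-service.yaml (whether the client Service itself is of type LoadBalancer is an assumption; it matches the architecture described above, where the application is reached from outside via a load balancer):

apiVersion: v1
kind: Service
metadata:
  name: client
spec:
  type: LoadBalancer   # assumption: exposes the client to the outside
  selector:
    app: random-chat
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000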
For pods Server and Redis, I also create Deployment and Service for each.
e. Problem
Service of pod Server:
Pods can usually send requests to each other using a normal Service type, which, in my case, means that the client pod should be able to reach the server pod without an explicit ‘type’ attribute in server-service.yaml. The endpoint used by the client would then be ‘server:5000’, i.e., the combination of the Service name and the targetPort. But after many attempts, it still did not work. So I decided to give the Service of the server pod the type LoadBalancer, which is shown in the picture above. Now the endpoint used by the client is the address of this load balancer.
6. Conclusion
During the course ‘Software Development for Cloud Computing’ and this project, I had the chance to get to know the concept of Docker containers and how to manage them with Kubernetes. I gained not only theoretical knowledge but also practical experience by developing and deploying the application. Moreover, working with cloud computing was new and interesting for me. Cloud computing is applied a lot in application development nowadays. What I applied in my project is just a small part of cloud computing, and I want to learn more about it in the future.
Our goal was to create a program updater for developers that they can easily integrate into their CI/CD pipeline. For the implementation, we used the IBM Cloud and a serverless architecture to achieve unlimited scalability. The serverless services we used include Cloud Functions, DB2, and an Object Storage.
The project consists of an uploader, which the developer uses to upload his program to the Object Storage, and a downloader for the user, which automatically downloads the current version.
Usage from the developer’s perspective:
The program is registered and the corresponding API keys are issued
Create the config for the downloader
The program can be uploaded with the uploader; this can easily be integrated into a CI/CD pipeline
Usage from the user’s perspective:
Download the downloader and the config
Start the downloader
Before the program starts, it checks for new updates and downloads them if available
After the update, the actual program is started
by Dominik Ratzel (dr079) and Alischa Fritzsche (af094)
For the lecture “Software Development for Cloud Computing”, we set ourselves the goal of exploring new things and gaining experience. We focused on one topic: “How do you get a web application into the cloud?”. In doing so, we took a closer look at Continuous Integration / Continuous Delivery, Infrastructure as Code, and Secure Sockets Layer (SSL). In the following, we would like to share our experiences.
Overview of the content of this blog post
Comparison GitLab and GitHub
CI/CD in GitLab
Problem: Where are the CI/CD settings in the HdM GitLab?
Problem: Solve Docker in Docker by creating a runner
CI/CD in GitHub
Set up SSL for the web application
Problem: A lot of manual effort
Watchtower
Terraform
Testing
Create a test environment
Automated Selenium frontend testing in GitHub
Docker Compose
Problem: How to build amd64 images locally with an arm64 processor?
Continuous Integration / Continuous Delivery
At the very beginning, we asked ourselves which platform was best suited for our approach. We limited ourselves to the best-known platforms so that the comparison would not be too complex: GitHub and GitLab. Another point we wanted to try was setting up a runner. For this purpose, we set up a simple pipeline in both GitLab and GitHub to update Docker images on Docker Hub.
GitLab vs. GitHub
GitHub is considered the original cloud-based Git platform. The platform focuses primarily on the community. Comparatively, it is also the largest (as of January 2020: 40 million users). GitLab is the self-hosted open-source alternative to GitHub. During our research, we noticed the following differences concerning our project.
Feature                                | GitLab | GitHub
Free private and public repositories   | ✓      | ✓ (since Jan. 2019)
Enterprise versions                    | ✓      | ✓
Self-hosted version                    | ✓      | ○ (only with paid Enterprise plan)
CI/CD with shared or personal runners  | ✓      | ○ (with third-party apps)
Wiki                                   | ✓      | ✓
Preview code changes                   | ✓      | ✓
Especially the point that it is only possible in GitLab to use self-hosted runners for the CI/CD pipeline caught our attention. From our point of view, this is a plus for GitLab in terms of data protection. The fact that GitLab can be self-hosted is an advantage but not necessary for our project. Nevertheless, it is worth mentioning, which is why we have included the point in our list. In all other aspects, GitLab and GitHub are very similar.
CI/CD – GitLab vs. GitHub
GitHub: provides the user with so-called GitHub Actions. This way, the user does not have to set up, configure, or host his own runner.
+ very easy to use
+ free of charge
– critical from a data protection perspective, as the code is executed/read “somewhere”
GitLab: To use CI/CD, a custom runner must be configured, hosted, and integrated into the code repository.
+ code stays on your own runner (e.g., passwords and source code are safe)
+ the runner can be configured according to your own wishes
– complex to set up and configure
– the runner can cost money depending on the platform (e.g., AWS)
Additional information: HdM offers students so-called shared runners. However, Docker-in-Docker is not possible with these runners for security reasons. In the following, we explain how we configured our GitLab runner to allow Docker-in-Docker. Another insight was that the DOCKER_HOST variable must not be set in the pipeline; otherwise the Docker socket will not be found and the pipeline will fail.
CI/CD in GitLab
Where are the CI/CD settings in the HdM GitLab?
We are probably not the first to notice that the CI/CD is missing in the MI GitLab navigation. The “advanced features” have been disabled to avoid “overwhelming” students. However, they can be easily activated via the GitLab settings (Settings > General > Visibility, project features, permissions) (https://docs.gitlab.com/ee/ci/enable_or_disable_ci.html).
Write the .gitlab-ci.yml file
The next step is to write an individual .gitlab-ci.yml file (https://docs.gitlab.com/ee/ci/quick_start/index.html). The script builds a Docker container and pushes it to Docker Hub. DOCKER_USERNAME and DOCKER_PASSWORD are stored as variables in GitLab (Settings > CI/CD > Variables). Tip: If you want to keep the images private but do not want to pay for a second private repository on Docker Hub ($5/month), you can create one private repo and push the images separated by tag (in our case, “frontend” and “backend”).
stages:
  - docker

build-push-image:
  stage: docker
  image: docker:stable
  tags:
    - gitlab-runner
  cache: {}
  services:
    - docker:18.09-dind
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
    # This variable DOCKER_HOST should never be set, because otherwise the default address of the Docker host will be
    # overwritten and the runner will not be able to access the socket and the pipeline will fail!
    # DOCKER_HOST: tcp://localhost:2375/
  before_script: # Install docker-compose
    - apk add --update --no-cache curl py-pip docker-compose
  script:
    - echo $DOCKER_PASSWORD | docker login --username $DOCKER_USERNAME --password-stdin
    - docker-compose build
    - docker-compose push
  only:
    - master
Configuring GitLab
Next, we asked ourselves how we could restrict merges into the master. The goal was to allow a branch to be merged into the master only if the pipeline was successful. This setting can be found under Settings > General > Merge requests > Merge checks, item “Pipelines must succeed”.
Setting up and configuring GitLab Runner
For this, we have written a runnerSetup.sh.
#!/bin/bash
# Download the binary for your system
sudo curl -L --output /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64
# Give it permission to be executed
sudo chmod +x /usr/local/bin/gitlab-runner
# Create a GitLab CI user
sudo useradd --comment 'GitLab Runner' --create-home gitlab-runner --shell /bin/bash
# Install and run as service
sudo gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner
sudo gitlab-runner start
sudo gitlab-runner status
# Command to register the runner
sudo gitlab-runner register --non-interactive --url https://gitlab.mi.hdm-stuttgart.de/ \
--registration-token asdfX6fZFdaPL5Ckna4qad3ojr --tag-list gitlab-runner --description gitlab-runner \
--executor docker --docker-image docker:stable \
--docker-volumes /var/run/docker.sock:/var/run/docker.sock \
--docker-privileged
# Install Docker and give the GitLab runner permissions so that it can access the Docker socket.
echo "Installing Docker"
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce
sudo usermod -aG docker gitlab-runner
# Restart Docker and GitLab Runner Service
sudo systemctl restart gitlab-runner
sudo systemctl restart docker.service
During the step “# Command to register the runner” we fixed the problem we had with the HdM runners. “--docker-volumes /var/run/docker.sock:/var/run/docker.sock” gives the runner access to the Docker socket. “--docker-privileged” allows the runner to access all devices on the host and processes outside the container (be careful).
CI/CD in GitHub
In GitHub, this is done by adding the following code to the repository in a self-created .github/workflows/ci.yml file. Like the previous .gitlab-ci.yml file, the script builds a Docker container and pushes it to Docker Hub. DOCKER_USERNAME and DOCKER_PASSWORD are stored in the Action secrets of GitHub (Settings > Actions).
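Since the workflow file itself is not reproduced here, the following is only a sketch of a rough GitHub Actions equivalent of the GitLab pipeline above:

name: ci
on:
  push:
    branches:
      - master
jobs:
  build-push-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Login to Docker Hub
        run: echo "${{ secrets.DOCKER_PASSWORD }}" | docker login --username "${{ secrets.DOCKER_USERNAME }}" --password-stdin
      - name: Build images
        run: docker-compose build
      - name: Push images
        run: docker-compose push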
Set up SSL for the web application
SSL is used to encrypt the data exchange between the web browser and the web server and thus protects it from access by third parties. To set up SSL, an SSL certificate is required.
Configuration
We decided to use “all-inkl.com” due to an existing subscription. In the KAS admin center (after setting up the domains and subdomains), new DNS records can be created and edited (Domain > DNS Settings > Actions (Edit)). Here, a new Type-A record can be created that points to the IP address of the AWS reverse proxy. The email address (which is needed for verification) can easily be created under Email > Email Inbox.
Server configuration
We used the free CA Let’s Encrypt (https://letsencrypt.org/) for the creation and renewal of the SSL certificates. For the configuration, we used the following images: jwilder/nginx-proxy as Nginx Proxy and jrcs/letsencrypt-nginx-proxy-companion as Nginx Proxy Companion (it creates the certificates and mounts them via the volumes into the Nginx Proxy so that it can use them). In the docker-compose.yml, the environment variables can now be added for the service “frontend”.
frontend:
  image: dr079/webshop:frontend
  build:
    context: ./frontend
    dockerfile: Dockerfile
  restart: always
  environment:
    API_HOST: backend
    API_PORT: 8080
    # Subdomain
    LETSENCRYPT_HOST: webshop.designmyhouse.de
    # Email for domain verification
    LETSENCRYPT_EMAIL: admin@designmyhouse.de
    # For the Nginx proxy
    VIRTUAL_HOST: webshop.designmyhouse.de
    # The port on which the frontend responds. Tells the Nginx proxy who to send the requests to.
    VIRTUAL_PORT: 80
  # Not needed when deploying with reverse proxy
  # ports:
  #   - "80:80"
After that, we created the docker-compose-cert.yml file, which starts the Nginx Proxy and the Nginx Proxy Companion.
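The file is not reproduced in the post; a sketch following the documentation of the two images could look like this (volume names are assumptions):

version: "3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: always
    environment:
      NGINX_PROXY_CONTAINER: nginx-proxy
    volumes:
      # Shares the certificates with the Nginx proxy
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
  certs:
  vhost:
  html: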
In AWS, an EC2 instance (consisting of an Ubuntu server and a security group) can now be created and started with the settings Verify and Launch.
The IP address of the created instance can now be entered as a Type-A entry under “all-inkl.com”.
Install Docker on Ubuntu
It is now possible to connect to the EC2 instance and run the following commands to make the project accessible through the domain/subdomain. (Note: It may take a few hours for the DNS server to apply the settings. Solution: Use the Tor browser)
# Add GPG key of Docker repository from APT sources.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Update Ubuntu package database
sudo apt-get update
# Install Docker
sudo apt-get install -y docker-ce
# Install Docker Compose
sudo curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
# Give Docker Compose the execute permission
sudo chmod +x /usr/local/bin/docker-compose
# Log in as root user
sudo -s
# Clone project
git clone https://github.com/user_name/project_name.git
# Build and launch project
docker-compose -f ./project_name/docker-compose-cert.yml up --build -d
# Pull images from DockerHub
sudo docker-compose -f ./cloud-webshop/docker-compose.yml pull
sudo docker-compose -f ./cloud-webshop/docker-compose.yml up -d
Watchtower
With Watchtower, updates to the Docker registry can be detected and downloaded automatically. The container is then restarted with the new image. Watchtower accesses the Docker repo via REPO_USER & REPO_PASS and checks in the set time interval (--interval 30) whether the Docker images have changed, updating them on the fly. This requires adding the following code to the docker-compose.yml (replace REPO_USER and REPO_PASS with the Docker Hub access token credentials (Settings > Security)).
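The snippet is not shown in the post; a sketch based on the Watchtower documentation could look like this (assuming it is added under services: in the docker-compose.yml):

watchtower:
  image: containrrr/watchtower
  restart: always
  environment:
    REPO_USER: username        # Docker Hub username
    REPO_PASS: access-token    # Docker Hub access token (Settings > Security)
  volumes:
    # Watchtower needs access to the Docker socket to restart containers
    - /var/run/docker.sock:/var/run/docker.sock
  command: --interval 30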
Terraform
The preceding steps involve considerable manual effort. However, it is possible to automate this, e.g., with Terraform. To achieve this, the following files must be written.
variable "aws-access-key" {
type = string
default = "aws-access-key"
}
variable "aws-secret-key" {
type = string
default = "aws-secret-key"
}
variable "aws-region" {
type = string
default = "eu-central-1"
}
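The main.tf itself is not shown in this post; a rough sketch, assuming the variables above, an Ubuntu AMI, and the install.sh described below as user data, could look like this:

provider "aws" {
  access_key = var.aws-access-key
  secret_key = var.aws-secret-key
  region     = var.aws-region
}

# Security group for HTTPS, HTTP and SSH
resource "aws_security_group" "webshop" {
  name = "webshop-sg"

  dynamic "ingress" {
    for_each = [22, 80, 443]
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# SSH key pair for access to the instance
resource "aws_key_pair" "deployer" {
  key_name   = "deployer-key"
  public_key = file("~/.ssh/id_rsa.pub")
}

# EC2 instance that runs the application; install.sh is executed at first boot
resource "aws_instance" "webshop" {
  ami                    = "ami-xxxxxxxxxxxxxxxxx" # Ubuntu AMI ID for the chosen region (placeholder)
  instance_type          = "t2.micro"
  key_name               = aws_key_pair.deployer.key_name
  vpc_security_group_ids = [aws_security_group.webshop.id]
  user_data              = file("docker/install.sh")
}

# Print the public IP so it can be entered at all-inkl.com
output "instance_ip" {
  value = aws_instance.webshop.public_ip
}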
install.sh for EC2 setup
To do this, we created a ./docker/install.sh file with the following content.
#!/bin/bash
# Install wget to update IP at all-inkl.com
echo "Setup all-inkl.com"
sudo apt-get install wget
# Save public IP to variable
ip="$(dig +short myip.opendns.com @resolver1.opendns.com)"
# Add all-inkl.com variables
kas_login="username"
kas_auth_data="pw"
kas_action="update_dns_settings"
sub_domain="sub"
record_id="id"
sudo sleep 10s
# Update all-inkl.com dns-settings with current IP and account data
sudo wget --no-check-certificate --quiet \
--method POST \
--timeout=0 \
--header '' \
'https://kasapi.kasserver.com/dokumentation/formular.php?kas_login='"${kas_login}"'&kas_auth_type=plain&kas_auth_data='"${kas_auth_data}"'&kas_action='"${kas_action}"'&var1=record_name&wert1='"${sub_domain}"'&var2=record_type&wert2=A&var3=record_data&wert3='"${ip}"'&var4=record_id&wert4='"${record_id}"'&anz_var=4'
echo "Installing Docker"
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce
echo "Installing Docker-Compose"
sudo curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
# Follow guide to create personal access token https://docs.github.com/en/github/authenticating-to-github/keeping-your-account-and-data-secure/creating-a-personal-access-token
sudo git clone https://username:token@github.com/ratzel921/cloud-webshop.git
sudo docker login -u username -p token
sudo docker-compose -f ./cloud-webshop/docker-compose-cert.yml up --build -d
sudo docker-compose -f ./cloud-webshop/docker-compose.yml pull
sudo docker-compose -f ./cloud-webshop/docker-compose.yml up
Next, run the following commands. This automatically creates an EC2 instance (which runs the application), a security group (for connections to the EC2 instance via HTTPS, HTTP, and SSH), and an SSH key (which allows access to the EC2 instance via SSH). At the end, the IP address of the EC2 instance is displayed in the console. It is then entered at all-inkl.com automatically, or you can add it manually.
# Get terraform provider with init and use apply to start the terraform script.
terraform init
terraform apply --auto-approve
# (Optional) Delete EC2 instances
terraform destroy --auto-approve
Testing
Creating a Testing Environment
Using Terraform and an EC2 instance, it is also possible to create a testing environment. We used the GitHub pipeline for this.
Backend/Dockerfile
# Build stage
FROM maven:3.6.3-jdk-8-slim AS build
COPY src /home/app/src
COPY pom.xml /home/app
RUN mvn -f /home/app/pom.xml clean test
RUN mvn -f /home/app/pom.xml clean package
# Package stage
FROM openjdk:8-jre-slim
COPY --from=build /home/app/target/*.jar /usr/local/backend.jar
COPY --from=build /home/app/target/lib/*.jar /usr/local/lib/
EXPOSE 8080
ENTRYPOINT ["java","-jar","/usr/local/backend.jar"]
Frontend/Dockerfile
# Build stage
# Use node:alpine to build static files
FROM node:15.14-alpine as build-stage
# Create app directory
WORKDIR /usr/src/app
# Install other dependencies via apk
RUN apk update && apk add python g++ make && rm -rf /var/cache/apk/*
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# Build static files
RUN npm run test
RUN npm run build
# Package stage
# Use nginx alpine for minimal image size
FROM nginx:stable-alpine as production-stage
# Copy static files from build-side to build-server
COPY --from=build-stage /usr/src/app/dist /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/templates/
# EXPOSE 80
CMD ["/bin/sh" , "-c" , "envsubst '${API_HOST} ${API_PORT} ${VIRTUAL_HOST}' < /etc/nginx/templates/nginx.conf > /etc/nginx/conf.d/nginx.conf && exec nginx -g 'daemon off;'"]
Modifying the docker-compose.yml
To do this, we created a copy of docker-compose.yml (docker-compose-testStage.yml). We changed the images and the LETSENCRYPT_HOST & VIRTUAL_HOST for the “backend” and “frontend” service in this file.
Modifying the Terraform files
In the testStage.sh, we changed the record_id and replaced “docker-compose -f ./cloud-webshop/docker-compose.yml pull” and “sudo docker-compose -f ./cloud-webshop/docker-compose.yml up -d” with “sudo docker-compose -f ./cloud-webshop/docker-compose-testStage.yml pull” and “sudo docker-compose -f ./cloud-webshop/docker-compose-testStage.yml up -d”. In the main.tf, user_data = file("docker/test_Stage.sh") is set. After that, the EC2 instance, the security group, and the SSH key can be started as usual using Terraform.
Automated Selenium frontend testing with GitHub
To do this, create the .github/workflows/selenium.yml file with the following content. The script is executed on every push to the repository. It installs all necessary packages, creates a screenshot folder, and runs the pre-programmed Selenium tests located in the frontend folder. After a push or manual execution, the test results with the artifacts (screenshots) are located on the Actions tab.
name: selenium tests
on: push
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build the stack
        run: docker-compose up -d
      - name: npm install
        run: cd frontend && npm install
      - name: install jest
        run: cd frontend && npm install jest
      - name: install selenium-webdriver
        run: cd frontend && npm install selenium-webdriver
      - name: run tests
        run: mkdir -p /tmp/screenshots/ && cd frontend && npm test
      - name: Archive screenshots
        uses: actions/upload-artifact@v2
        with:
          name: selenium-screenshots
          path: /tmp/screenshots/
      - name: Shutdown
        run: docker-compose down
Note that Chromedriver must be run headless, as GitHub cannot run a browser on a screen.
var driver = await new Builder()
.forBrowser('chrome')
.setChromeOptions(new chrome.Options().headless())
.build();
Infrastructure as Code
Cloud computing is the on-demand provision of IT resources (e.g., servers, storage, databases) via the Internet. Cloud computing resources can be scaled up or down depending on business requirements, and you only pay for the IT resources you actually use. On July 27, 2021, Gartner published the latest “Magic Quadrant” for Cloud Infrastructure and Platform Services. Like last year, Amazon Web Services is the top performer in the Magic Quadrant, followed by Microsoft and Google (https://www.gartner.com/doc/reprints?id=1-271OE4VR&ct=210802&st=sb). Since we were interested in trying Docker Compose, we decided to use AWS for deployment.
Deployment on Amazon ECS with Docker Compose
Since early 2020, AWS and Docker have started working on an open Docker Compose specification, which will make it possible to use the Docker Compose format to deploy containers on Amazon ECS and AWS Fargate. In July 2020, the first beta version for Docker Desktop was released; the first stable version has been available since September 15, 2020.
Customize docker-compose.yml
The AWS ECS CLI supports Compose file versions 1, 2, and 3. By default, it looks for docker-compose.yml in the current directory. Optionally, you can specify a different filename or path to a Compose file with the --file option. The Amazon ECS CLI only supports a few parameters, so adjustments to the yml file may be necessary (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cmd-ecs-cli-compose-parameters.html).
# (Optional) Create a new Docker context to point the Docker CLI to the correct endpoint. For this step you need the AWS_ACCESS_KEY and AWS_SECRET_ACCESS_KEY.
docker context create ecs myecscontext
# (Optional) Use context
docker context use myecscontext
# Deploy application to AWS
docker compose up
# Here you can see which containers were started as well as the URLs
docker compose ps
# (Optional) Shut down container. (Don't forget to change the context back to default).
docker compose down
# Convert Docker Compose file to CloudFormation to track which resources are created or updated
docker compose convert
BuildX
Building images for other processors
For example, if you have an M1 with an arm64 processor, a locally created image would not be accepted by AWS (error message “EssentialContainerExited: Essential container in task exited”). The reason is that ECS instances only support amd64 images.
Since Docker version >= 19.03, Docker offers buildX. The plugin is officially no longer considered experimental as of August 5, 2020. With the buildX functionality, it is relatively easy to create Docker images that work on multiple CPU architectures.
# (optional) Create a new Builder instance
docker buildx create --name mybuilder
# (optional) Use created builder
docker buildx use mybuilder
# Show all available builder instances (here you can also see which CPU architectures are supported by the builder)
docker buildx ls
# Build and push image for example for amd64, arm64 and arm/v7
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 --tag username/repository_name:tag_name --push .
# Delete images
docker buildx prune --all
In the System Engineering and Management lecture, we had the opportunity to apply presented topics like distributed systems, CI/CD or load testing to a real project or with the help of a real application. In the following article we will share our learnings and experiences around the implementation and usage of Docker, Kubernetes, Rancher, CI/CD, monitoring and load testing.
In recent years, since the Internet has become available to almost anyone, application and runtime security is more important than ever. Be it an (unknown) application you download and run from the Internet or a server application you expose to the Internet, it’s almost certainly a bad idea to run apps without any security restrictions applied.
The name Kubernetes originally comes from the Greek word for helmsman, the person who steers a ship or boat. The Kubernetes logo, which represents a steering wheel, was most likely inspired by this. [1]
The choice of the name can also be interpreted to mean that Kubernetes (the helmsman) steers a ship that contains several containers (e.g. docker containers). It is therefore responsible for bringing these containers safely to their destination (to ensure that the journey goes smoothly) and for orchestrating them.
Apparently, Kubernetes was called Seven of Nine within Google. The Star Trek fans among us should be familiar with this reference. Since the logo has seven spokes, there might be a connection between the logo and this name. [2]
This blog post was created during the master lecture System Engineering and Management. In this lecture, we deal with topics that interest us and with which we would like to experiment. We have already worked with Docker containers very often and appreciate their advantages. Of course, we have also worked with several containers within a closed system orchestrated by Docker Compose configurations. Nevertheless, especially in connection with scaling and big companies like Netflix or Amazon, you hear the buzzword Kubernetes and quickly find out that distributing a system across several nodes requires a platform such as Kubernetes.
This is part two of our series on how we designed and implemented a scalable, highly-available and fault-tolerant microservice-based Image Editor. This part depicts how we went from a basic Docker Compose setup to running our application on our own »bare-metal« Kubernetes cluster.
This is part one of our series on how we designed and implemented a scalable, highly-available and fault-tolerant microservice-based Image Editor. The series covers the various design choices we made and the difficulties we faced during design and development of our web application. It shows how we set up the scaling infrastructure with Kubernetes and what we learned about designing a distributed system and developing a production-grade Kubernetes cluster running on multiple nodes.