For all my university software projects, I use the HdM GitLab instance for version control. But GitLab offers much more, such as a straightforward way to operate CI/CD pipelines. In this article, I will show how we can use the CI/CD functionality in a university project to perform automated testing and an automated build process.
Everyone knows the problem of keeping track of expenses. Many applications offer an overview of all expenses, but entering all data individually can be quite time-consuming. To take over this chore, we have developed SWAI, ‘A Scanner with A.I.’.
Daniel Knizia – firstname.lastname@example.org Benjamin Janzen – email@example.com
CatchMe is a location-based multiplayer game for mobile devices. The idea stems from the classic board game Scotland Yard; it is basically a modern version of hide & seek. You play outside in a group of up to 5 players, where one of the players is chosen as the “hunted”. His goal is to escape the other players. Through the app he can constantly see the movement of his pursuers, while the other players can only see him at set intervals.
The backend of the game builds on Colyseus, a multiplayer game server for Node.js, which we have adjusted to our needs. There’s a lobby, from which the players can connect into a room with other players and start the game.
As part of the lecture “Software Development for Cloud Computing”, we had to come up with an idea for a cloud-related project we’d like to work on. I had just heard about Artistic Style Transfer using Deep Neural Networks in our “Artificial Intelligence” lecture, which inspired me to choose image transformation as my project. However, having no idea about the cloud environment at that time, I didn’t know where to start and what was possible. A few lectures in, I had heard about Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Function as a Service (FaaS). Of those three, I liked the idea of FaaS the most: simply upload your code and it works. Hence, I went with Cloud Functions in IBM’s cloud environment. Before I present my project, I’d like to explain what Cloud Functions are and how they work.
What are Cloud Functions?
Choose one of the supported programming languages. Write your code. Upload it. And it works. Serverless computing: that’s the theory behind Cloud Functions. You don’t need to bother with infrastructure. You don’t need to bother with load balancers. You don’t need to bother with Kubernetes. And you definitely do not have to wake up at 3 am and race to work because your servers are on fire. All you do is write the code; your cloud provider manages the rest. My cloud provider of choice was IBM.
Why IBM Cloud Functions?
Unlike Google and Amazon, IBM offers free student accounts, with no need to deposit any kind of payment option upon account creation either. Since I had no experience using any cloud environment, I didn’t want to risk accidentally accumulating a big bill. Our instructor was also very familiar with the IBM Cloud, so in case I needed support I could always ask him as well.
What do IBM Cloud Functions offer?
IBM offers a command line interface (CLI), a nice user interface on their cloud website, accessible using the web browser of your choice, and very detailed documentation. You can check, and if you feel like it, write or edit your code using the UI as well. The only requirement for your function is that it has to take a JSON object as input and return a JSON object as well. You can test the function directly inside the UI: simply change the input, declare an example JSON object you want to run it with, then invoke your function. Whether the call failed or succeeded, the activation ID, the response time, results, and logs (if enabled) are then displayed directly. You can also add default input parameters or change your function’s memory limit and timeout on the fly. Each instance of your function will then use the updated values.
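The JSON-in/JSON-out requirement is easy to sketch. A minimal Python action might look like this (the `name` parameter is purely illustrative, not part of any real API):

```python
def main(params):
    # every action receives the input JSON object as a Python dict
    # and must return a dict, which the platform serializes back to JSON
    name = params.get("name", "world")
    return {"greeting": f"Hello, {name}!"}
```

Invoking it with `{"name": "HdM"}` would yield `{"greeting": "Hello, HdM!"}`.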
Another nice feature of IBM Cloud Functions are triggers. You can connect your function with different services, and once they fire, your function is executed — whether someone pushed new code to your GitHub repository or updated your Cloudant database (IBM’s database service). Once invoked by such a trigger, your function runs.
You can also create a chain of Cloud Functions, where the output of function 1 becomes the input of function 2.
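Locally, such a sequence behaves like plain function composition; a small sketch with two made-up actions illustrates the idea:

```python
# each "action" takes a JSON-style dict and returns one,
# just like the platform expects
def halve_dimensions(params):
    params["size"] = (params["width"] // 2, params["height"] // 2)
    return params

def mark_grayscale(params):
    params["mode"] = "L"
    return params

def run_sequence(actions, params):
    # the sequence pipes each action's result into the next one
    for action in actions:
        params = action(params)
    return params

result = run_sequence([halve_dimensions, mark_grayscale],
                      {"width": 800, "height": 600})
```

The action names and parameters here are invented for illustration; in the real platform the chaining is configured on the sequence, not in code.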
IBM Cloud Functions use the Apache OpenWhisk service, which packs your code into a Docker container in order to run it. However, if you have more than one source file, or dependencies you need, you can pack everything into a Docker image or, in some cases, such as Python or Ruby, into a zip archive. To do that in Python, you create a virtual environment using virtualenv, then zip the virtualenv folder together with your Python files. The resulting zip files and Docker images can only be uploaded using the CLI.
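The zipping step can also be scripted. Here is a small sketch using Python’s zipfile module; the `build_action_zip` helper and the folder layout (`__main__.py` next to a `virtualenv/` folder) are my own illustration of the packaging convention, not IBM tooling:

```python
import os
import zipfile

def build_action_zip(source_dir, out_path="action.zip"):
    # walk the package folder (__main__.py plus the virtualenv/ folder)
    # and store every file with a path relative to the package root
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(source_dir):
            for name in files:
                path = os.path.join(root, name)
                zf.write(path, os.path.relpath(path, source_dir))
    return out_path
```

The resulting archive is what gets uploaded via the CLI.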
You can also enable your function as a Web Action, which allows it to handle HTTP events. Since the link automatically provided by enabling a function as a Web Action ends in .json, you might want to create an API definition instead. This can be done with just a few clicks; you can even import an OpenAPI definition in YAML or JSON format. Binding an API to a function is as simple as defining a base path for your API, giving it a name and creating an operation. For example: API name: Test, base path: /hello, and for the operation we define the path /world, select our action and set the response content type to application/json. Now, whenever we call <domain>/hello/world, we call our Cloud Function through our REST API. Using the built-in API explorer we can test it directly, and if someone volunteers to test the API for us, we can also share the API portal link with them. Adding a custom domain is also easily done, by entering the domain name, the certificate manager service and then the certificate in the custom domain settings.
Finally, my Project
The idea was:
A user interacts with my GitHub Page, selects a filter, adds an image, tunes some parameters, then clicks confirm. The result: they receive the transformed image.
The page sends a POST request to the API I defined, which is bound to my Cloud Function, written in Python. It receives information about the chosen filter, the set parameters and a link to the image (for the moment, only JPEG and PNG are allowed). It then processes the image and returns the created PNG base64-encoded. The base64-encoded data is then embedded in the HTML page, and the user can save the image.
The function currently has three options:
You can transform an image into greyscale,
you can upscale an image by a factor of two, three or four,
and you can transform an image into a cartoon.
Cartoon images are characterized by clear edges and homogeneous colors. The cartoon filter first creates a grayscale image and median-blurs it, then detects the edges using an adaptive threshold, which currently still has a predefined window size and threshold value. It then median-filters the colored image and performs a bitwise AND between every color channel of the median-filtered color image and the found edges.
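Based on that description, a rough Pillow/numpy sketch of such a filter could look like the following. The window and threshold defaults are arbitrary, and the box-blur local mean is only one simple way to realize an adaptive threshold; the real implementation surely differs in details:

```python
import numpy as np
from PIL import Image, ImageFilter

def cartoonize(img, window=3, thresh=10):
    # edge detection: grayscale, median blur, then a simple adaptive threshold
    gray = img.convert("L").filter(ImageFilter.MedianFilter(5))
    g = np.asarray(gray, dtype=np.int16)
    # a box-blurred local mean stands in for the adaptive-threshold window
    local_mean = np.asarray(gray.filter(ImageFilter.BoxBlur(window)),
                            dtype=np.int16)
    edges = np.where(g < local_mean - thresh, 0, 255).astype(np.uint8)
    # homogeneous colors: median-filter the color image
    color = np.asarray(img.convert("RGB").filter(ImageFilter.MedianFilter(7)))
    # black edge pixels zero out every channel via bitwise AND
    return Image.fromarray(np.bitwise_and(color, edges[..., None]))
```

A bilateral filter (as OpenCV offers) on the color step would preserve edges better than the median filter, which is exactly the trade-off discussed below.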
Advantages and disadvantages of (IBM) Cloud Functions
Serverless infrastructure was fun to work with. There is no need to manually set up a server, secure it, etc. Everything is done for you; all you need is your code, which scales over 10,000+ parallel instances without issues. Function calls themselves don’t cost that much either. IBM’s base rate is currently $0.000017 per second of execution, per GB of memory allocated. 10,000,000 executions per month with 512 MB action memory and an average execution time of 1,000 ms cost only $78.20 per month, including the 400,000 GB-s free tier.
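That figure checks out with a quick back-of-the-envelope calculation:

```python
# IBM's pricing example, recomputed
executions = 10_000_000
seconds_per_run = 1.0       # 1,000 ms average execution time
memory_gb = 0.5             # 512 MB action memory
free_tier_gb_s = 400_000    # free GB-seconds per month
rate = 0.000017             # USD per GB-second

gb_seconds = executions * seconds_per_run * memory_gb   # 5,000,000 GB-s
cost = (gb_seconds - free_tier_gb_s) * rate
print(round(cost, 2))  # → 78.2
```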
Another good feature was being able to upload zip packages and Docker images, although those can only be uploaded using the CLI, which is a bit of a hassle as a Windows user. But one day I’ll finally set up the second boot image on my desktop PC. One day. Then I won’t need my VM anymore.
The current code size limit for IBM Cloud Functions is 48 MB. While this seems plenty, any modules you used to write your code that are not included by default in IBM’s runtime need to be packed with your source code. OpenCV was the module I used before switching over to Pillow and numpy, since OpenCV offers a bilateral filter, which would have been a better option than a median filter for creating the color image of the cartoon filter. Sadly, it is 125 MB large, and still 45 MB packed — which, measured against the real limit of 36 MB after factoring in the base64 encoding of the binary files, was still too much. Neither would the 550 MB VGG16 model fit, which I initially wanted to use for an artistic style transfer neural network as a possible filter option.

I didn’t like the in- and output being limited to JSON either. Initially, before using the GitHub Page, the idea was to have a second Cloud Function return the website, which was sadly not possible. The limited selection of predefined runtimes and modules is also more of a negative point. One can always pack code and modules into a Docker image or zip, but being able to just upload a requirements.txt and have the cloud automatically download those modules would have been way more convenient.

My current solution returns a base64-encoded image. If someone tries to upscale a large image and the result exceeds 5 MB, it currently returns an error, saying “The action produced a response that exceeded the allowed length: –size in bytes– > 5242880 bytes.”
What’s the Issue?
Currently, because GitHub Pages does not set Cross-Origin Resource Sharing (CORS) headers, this does not work out of the box. CORS is a mechanism that allows web applications to request resources from a different origin than their own. A workaround my instructor suggested was creating a simple node.js server, which adds the missing CORS headers.

Before that, only GET requests were being logged in the Cloud API summary, which the API responded to with a 500 Internal Server Error. After reading up on it and finding out the headers need to be set by the server, I tried to troubleshoot this for what felt like ages: adding headers to the jQuery ajax call, enabling cross-origin on it, trying to work around it by setting the dataType to jsonp, even uploading the Cloud Function and API again, and creating a test function and binding it to the API (which worked, by the way, both as POST and GET, no CORS errors whatsoever… till I replaced the code). I’m still pretty happy it works with this little workaround now — thank you again for the suggestion!
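The actual workaround was a node.js server, but the same idea can be sketched in Python/Flask. The `/transform` route and its stubbed body are illustrative; the headers are the ones a browser expects for a cross-origin call:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.after_request
def add_cors_headers(resp):
    # the headers GitHub Pages itself does not set for us
    resp.headers["Access-Control-Allow-Origin"] = "*"
    resp.headers["Access-Control-Allow-Headers"] = "Content-Type"
    resp.headers["Access-Control-Allow-Methods"] = "GET, POST, OPTIONS"
    return resp

@app.route("/transform", methods=["POST", "OPTIONS"])
def transform():
    # the real proxy would forward the body to the Cloud Function's
    # REST endpoint and relay the answer; stubbed here
    return jsonify({"ok": True})
```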
Other than that, I spent more
time than I’m willing to admit trying to find out why I couldn’t upload my
previous OpenCV code solution. Rewriting my function as a result was also a
rather interesting experience.
I could give the user more options for the cartoon filter: the adaptive threshold has a threshold limit, which could easily be managed by the user, and an option to change the window size could be added as well.
I could always add new filters, too. I like the resulting image of edge detection using a Sobel operator and thought about adding one of those.
Finding a way to host a website/find a provider that adds
CORS Header, allowing interested people to try a live-demo and play around with
it, would be an option as well.
What I’d really like to see would be the artistic style transfer uploaded. I might be able to create it using IBM Watson and then add it as a sequence to my service. I dropped this idea previously because I had no time left to spare trying to get it to work.
Another option would be allowing users to upload files instead of just providing links. Similarly, I could include a storage bucket linked to my function, in which the transformed image is saved; the function would then return the link. This would solve the max 5 MB response size issue as well.
Cloud Functions are really versatile, there’s a lot one can
do with them. I enjoyed working with them and will definitely make use of them
in future projects. The difference in execution time between my CPU and the CPUs
in the Cloud Environment was already noticeable for the little code I had. Also
being able to just call the function from wherever is pretty neat. I could create
a cross-platform application, which saves, deletes and accesses data in an IBM
Cloudant database using Cloud Functions.
Having had no idea about cloud environments in general a semester ago, I can say I learned a lot, and this project definitely opened up an interesting, yet very complex world I would like to learn more about in the future.
And at last, all code used is provided in my GitHub repository. If you are interested, feel free to drop by and check it out. Instructions on how to set everything up are included.
Annika Strauß – as324 Julia Grimm – jg120 Rebecca Westhäußer – rw044 Daniel Fearn – cf056
As a group of four students with little to no knowledge of cloud computing our main goal was to come up with a simple project which would allow us to learn about the basics of software development for cloud computing. We had decided a simple game would do the trick. And to make it a little more challenging it should be a two player online game. First we thought of Tic Tac Toe but that seemed too simple. Then we took a look at the Chinese game of Go, but that was too complicated. In the end we agreed on Connect4. Not too simple. Not too complicated.
Getting started / Prerequisites / Tech Choices
In our first group meeting we sat down and brainstormed on all the requirements, features and technologies we would require to realize the game. We also tried to avoid coming up with too many additional features that would be nice to have but exceed our possible workload. The main focus was to get something running in the cloud.
Our version of Connect Four should therefore have a simple user login, so that one can play a game session with other players, interrupt the game and come back later to finish where one left off; this also means game sessions need to be saved between two players. We thought about matching players via a matching algorithm, in order to ensure that players of about equal strength get matched, but then we realized that was way too much effort and our focus really should stay on getting something done. So we decided to simply make it a random match-up, or to add friends and connect via a friend list, since this is a study project, not a commercial game.
Of course we don’t really assume that millions of people are going to play our game at once, but we want to learn about scalability, so we are going to act as if no one has ever played it before and everyone in the world is going to act like it’s the new Pokémon Go. In reality it will probably just be the four of us and whomever we show it to.
We already knew beforehand that we would want to program the game in Python, simply because it is getting pretty popular in the web application field and we want to get some experience with it. Also, we finally want to learn something other than Java. But we also want to make sure Python actually is a good choice, so we’re going to check the pros and cons just to be sure and possibly decide on something else later.
We also agreed that we want to use a NO-SQL Database for saving game sessions, simply because they can store arrays and we want to get more experience with it. Again, we must check first if it’s a viable option.
Next question we needed to ask, is what infrastructure, what platform would we put this on. AWS? Our own hardware? IBM Cloud? What are the options there? Also, what requirements does our game bring with it for the platform? Do we need micro services? Should we use Docker? How is the scalability going to be handled? So many questions. It was time to ask the Master for some wisdom. So we talked to lecturer Thomas Pohl, who gave us some very useful insights.
Luckily for us IBM Cloud is available for free for students. And as Thomas works for IBM and has plenty of experience with it, it’s kind of a no brainer to go with IBM Cloud. As we found out, IBM Cloud already provides a great deal of infrastructure and platform, all handled by IBM, which allows us to simply focus on deployment. The service we are looking at in particular is called Cloud Foundry. It handles all the scaling, load balancing and everything automagically for you in the background, so that we can focus on simply getting our game running. It comes with a great variety of tools for almost any technology requirements we desire.
This is definitely the sandbox we want to be playing in. With some help from Thomas we came up with this relatively simple architecture:
So what exactly do we require from the cloud foundry? To answer this we first need to ensure we have all the technology choices figured out. Meaning, programming language, database, user login authentication and so on.
First we’ll start with the programming language. Python. Is it a good choice?
To get a clear impression of whether or not to use Python, some research was necessary. After reading some online articles and watching some videos, we came to the conclusion that it is a good choice. Why? Like Java, it runs on an interpreter and is easy to use. It is portable and has a huge library with lots of prewritten functions. The main drawback is that it is not too mobile-friendly, but we’re not worried about that. It is so simple that other languages may seem too tedious in comparison, it will get the job done quickly, and it is simple to maintain. Debugging works very well, it has built-in memory management and, most importantly, it can be deployed in Cloud Foundry. So, Python is definitely a winner. Other languages may also be a good choice, but the point is not to find the best language, but to confirm that Python is not a bad one. We want to learn Python, and we just want to be sure it’s not a waste of time. From what I’ve read, it’s a pretty good choice.
SQL or NoSQL? We want to use NoSQL, but does that make sense? To answer this question we need to have a look at what kind of data we are actually storing and what the advantages and disadvantages are of using NoSQL and see how it compares to SQL.
SQL is great if one has complex data that is very interwoven and a write would mean updating several different places, which SQL manages well by merely linking the data via relations instead of duplicating it. NoSQL stores all relevant data in one place, which makes reading really fast, but making any changes can mean, having to replace all duplicates of the data. So, NoSQL is efficient if changes only need to take place in one area.
Now, the data we are storing is basically just a game session. This mainly consists of two users, which will not likely change. Then the game state, which only changes in the saved game. And once the game is done, the entire game session no longer needs to be stored anyway and can probably be deleted.
The player data will be coming from a user database supplied by App ID, a user login service provided by IBM. Feeding this data into either DB type should be no problem. But the main reason for using a NoSQL database remains arrays: as we will be storing the game progress in the form of arrays, translating them into something a normal SQL database could handle would be way too tedious, while NoSQL can store arrays without any problem.
We’ve also just touched on the topic of handling user accounts. As already mentioned, IBM provides a service called App ID. Rather than using a social media login service such as Facebook provides, or worse, coming up with a whole system on our own, we were very happy to discover this tool already existed in the IBM Cloud Foundry catalog. So we gladly decided to use it.
Getting the show on the road. Sort of.
Now that we had decided on all of our technical resources, it was time to actually build it all. So we all created our IBM Cloud accounts and started setting everything up. We had to look around Cloud Foundry and figure out where and how we would find all the services we needed and which ones were right for us. Our main services needed to be: a place for the main Python app to live (Python Cloud Foundry), a NoSQL database (Cloudant) and a user login manager (App ID).
Everything was relatively simple to find and set up. Our main app would be living in the Python web app with Flask, a basic web serving application. For the NoSQL database we would use IBM’s Cloudant, and for the user login we set up App ID. I’m not going to go into detail on how we set it up, since it was pretty simple and anyone with a basic understanding of clicking through a webpage could have done most of it.
Everything we needed was in place. Our little playground was ready. Now came the really fun part. Actually writing the code. And with it, all the problems we needed to overcome, which would help us learn the ins and outs of cloud computing.
Random Problems we encountered
Merging Cloud Foundry Python Server with our Python-Flask Server
IBM’s Python Cloud Foundry starter delivers a default server.py. We wanted to load our Flask templates instead. How to do that? The server.py sets the folder “static” as the starting point, which contains the static index.html. But just putting the path onto the templates is not enough, as Flask is initialized with app.run(). Could we just replace server.py with a Python-Flask server? Which parts of server.py do we need so that the server still runs in the cloud? We picked out the port and added it in our app.py (our Python-Flask server).
We read the port selected by the cloud for our application from the environment and added it at the app start, in the very first rows.
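A minimal sketch of how this typically looks in a Cloud Foundry Flask app; the PORT environment variable and the 8000 fallback follow the standard Cloud Foundry convention, and the route is just illustrative:

```python
import os
from flask import Flask

app = Flask(__name__)

# Cloud Foundry tells the app which port to use via the PORT environment
# variable; fall back to 8000 for local development
port = int(os.getenv("PORT", 8000))

@app.route("/")
def index():
    return "Connect4"

# at startup, bind 0.0.0.0 so the app is reachable inside its container:
# app.run(host="0.0.0.0", port=port)
```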
Success! It worked!
Couldn’t start the app from the cloud anymore. The app wouldn’t start after pushing it to the cloud; we got the error message “[errno 99] cannot assign requested address in namespace”, meaning our app couldn’t be found under its URL in the cloud. The mistake was that locally we were loading “localhost:8000”, but in the cloud that doesn’t work, of course. What was the correct address in the cloud? Adding host='0.0.0.0' to app.run solved the issue. Now we could run the app both from the cloud and locally.
Explanation: if you bind localhost or 127.0.0.1, you can only connect to the service locally. An address like 10.0.0.1 cannot be bound, as it is not ours; we can only bind IPs which belong to our computer. We CAN bind 0.0.0.0 though, because it means “all IPs on this computer”, so that every IP can connect to it.
Templates not found!!! While two of us were working with Visual Studio Code, the templates for the front-end HTML stuff were working quite nicely locally. But one of us was using PyCharm, and PyCharm did not know where the templates were and kept saying: wrong path, template not found. On our quest for answers we were victorious and found ProfHase85 in one of the threads on stackoverflow.com. We followed his wisdom and did as he said: “Just open the project view (View –> Tool Windows –> Project). Once there, thou shalt right-click on your templates folder. Not left-click. Not double-click. And most certainly not center-click. No. Thou shalt right-click on it. There you will find Mark as Directory, and from there you will find the Template Directory. It is there that thee will find salvation. There you will set the path of your template and all shall come to life.” So, that worked great. Thank thee, ProfHase85.
The NO-SQL Cloudant DB
When getting our Cloudant DB running, we immediately ran into several problems. When we tried to create the DB as described in the tutorial from IBM, nothing happened. We then found that the code checks whether the instance is a cloud instance, and we were trying to run the code locally — so we needed to push our code first and run it from the cloud. Then we needed to enter the domain name in the manifest.yml, which took us a while to find; it turned out to be eu-gb.mybluemix.net. Then everything decided to freeze anytime the password was being prompted. Apparently we had made a mistake when configuring the authentication method while creating the DB: we had said to only use IAM. So we went back, created the database again, and during setup set the authentication method to use both legacy and IAM. Now our service credentials contained the needed entries, and creating the DB and connecting to the Cloudant service finally worked.
Using the DB statically as well as dynamically. Now we were able to use Cloudant locally and in the cloud and could add static data in the form of documents to our DB. The next problem we were facing was getting data like the username out of the user input into a document and making that data accessible again. Unfortunately, our course on accessibility didn’t help in this case. The documentation on what is possible with Cloudant didn’t seem to be very expansive either. Functions such as checking whether a document exists were only possible locally, but not from the Python-Cloudant extension in the cloud. After several days of trying around, we finally had the idea that maybe it was the authentication that was causing this: maybe we needed to use IAM to access the Cloudant DB from the Python-Cloudant extension.
After a small issue with finding the right username we tried to connect using:
But we were greeted with: Error: type object ‘Cloudant’ has no attribute ‘iam’
IAM requires at least python-cloudant version 2.9.0, and for whatever reason the version we had in our requirements was 2.3.1. Problem solved; connection finally established. And then the next problem came flying along. When updating a document: 409 Client Error: Conflict document update at document.save(). What? OK, more reading up to do. We went and read through this article: https://developer.ibm.com/dwblog/2015/cloudant-document-conflicts-one/ It didn’t bring us much further, but it seemed better to use document.update_field() rather than trying to write directly into the DB, in order to avoid simultaneous calls.
How to sort data in CloudantDB Three SQL programmers went into a NO-SQL bar. They came back out after five minutes because they couldn’t find a table.
In Cloudant, data is stored in completely independent documents. This makes everything more flexible, but also very cluttered and difficult to differentiate when reading. Without any kind of sorting, all data needs to be searched for a specific ID. For our project we needed the following data structures: user documents and game-session documents.
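For illustration, a game-session document might look roughly like this — all field names here are assumptions, but the nested 6×7 board array is exactly the kind of data that made a document store attractive:

```python
import json

# hypothetical shape of a saved Connect Four session
# (0 = empty cell, 1 / 2 = discs of player 1 / player 2)
game_session = {
    "_id": "games:gameID123",
    "player1": "userAbc",
    "player2": "userXyz",
    "turn": 1,
    "board": [[0] * 7 for _ in range(6)],
}
game_session["board"][5][3] = 1  # player 1 drops a disc into column 3

doc = json.dumps(game_session)   # what gets stored in Cloudant
```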
We had to differentiate between users and game sessions. How could we accomplish this in Cloudant?
Use views? A view makes a query quick and easy. But anytime a document gets updated, so does the whole view, which is counterproductive with big data sets.
Partitions? We found out one can create a partitioned DB in Cloudant by naming the IDs as follows: <partition>:<documentid>
In our case: games:gameID123 and users:userAbc
This way one can add the partition to the queries, resulting in much better search performance. And also the DB looks a lot tidier.
Search query example:
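As a sketch of such a query: a partitioned Mango query is POSTed to CouchDB’s `/<db>/_partition/games/_find` endpoint, so the search is confined to the `games` partition. The field names below are assumptions for illustration:

```json
{
  "selector": {
    "player1": "userAbc"
  },
  "fields": ["_id", "turn", "board"]
}
```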
And that was that.
AppID – The “simple” user login service for web apps
IBM offers a nice little service called App ID. Easy to integrate, made for the cloud, great security features. Easy, right? Well, they have all the code you need for Java or node.js — but not for Python. So, a few more lines of code, research and effort; how hard can it be? App ID is based on OIDC (OpenID Connect). Since we used Python, we needed to fall back on Flask-pyoidc. This module is an OIDC client for Python and the Flask framework which interacts with App ID for authentication.
Configuring the OpenID Connect Client
The metadata in “appIDinfo” serves as input for configuring the OIDC client.
Securing web routes. After configuration, the OpenID client can be used to secure single pages or sections (“routes”) of the web app. This is achieved by attaching a decorator to the route definition:
“@auth.oidc_auth” ensures that the code only gets executed for authenticated users.
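Such a decorator is essentially a gatekeeper around the view function. The following self-contained sketch is a simplified stand-in for what Flask-pyoidc’s decorator does (it is not the real flask_pyoidc code; the session key and routes are invented):

```python
from functools import wraps
from flask import Flask, session, redirect

app = Flask(__name__)
app.secret_key = "dev-only"  # hypothetical; never hard-code in production

def oidc_auth(view):
    # simplified stand-in for @auth.oidc_auth: run the view only for
    # authenticated users, otherwise start the login flow
    @wraps(view)
    def wrapper(*args, **kwargs):
        if "user" not in session:
            return redirect("/login")  # flask_pyoidc would start OIDC here
        return view(*args, **kwargs)
    return wrapper

@app.route("/game")
@oidc_auth
def game():
    return "board"
```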
The first problems with using App ID arose with establishing a connection. First we tried connecting with a direct approach via the create button, which showed a connection in the browser, but not when pushing the app to the cloud. So we created the service again, directly in the project via the command line. And voilà, the next test push got a connection.
Creating an instance of the App ID service. We created the instance with “ibmcloud resource service-instance-create connect4AppID appid lite eu-gb”. After that, an alias of the service instance is created in Cloud Foundry: “ibmcloud resource service-alias-create connect4AppID –instance name connect4AppID”.
And with that we had finally established a connection between our app and the App ID service. It seemed like things were coming along, but then of course we encountered the next problem: it turns out the redirect_uri doesn’t work with secured connections.
And the next problem was that the App ID login widget was probably not going to work with our Python-Flask app either. So we decided not to use App ID after all and instead created our own user login in Python.
Sometimes something that looks like plug and play turns out to be plug and pray. And in this case our prayers weren’t heard. But now we know that one needs to thoroughly check the capabilities of these services before trying to implement them.
The heart of the game/Game Engine
Probably the most challenging part of creating this game was writing the main application, as the first slew of questions arose: How do you write a game for two players? How do you connect the database? How do you make the page refresh when a player makes a move, so the opponent can see it? How do you make taking turns work, so the opponent is blocked from making another move?
Reloading the window after a player made a move. After some online research we figured out we could use a socket server to handle the multiplayer functionality, but that seemed like way too much overhead, as it meant possibly having to learn an entire new framework. The first issue we needed to tackle was getting the web page to update/reload for both players anytime one made a move. With Python-Cloudant one can listen to changes in the DB, but unfortunately this loop blocks all other actions in Python.

Were cloud functions maybe the answer? They are like serverless event listeners: the function gets triggered when a watched event occurs. And fortunately there was even a quickstart template available from IBM Cloud. You can create an action sequence and a trigger on the DB. We would need to call a Flask template in the cloud function, but it was unclear whether that was possible. So we tried a Python-Cloudant-only approach instead: same as before, but this time asynchronous. That way the feed can run continuously and listen for changes in the DB. But then the problem is that the asynchronous loop waiting for changes in the DB cannot be executed at the same time as the return render_template and blocks server-side, which causes the website to freeze.

According to a post on Stack Overflow, threading is a better solution: one can daemonize a thread and thereby make it run in the background. Then it was time for a new approach and a better understanding. What is a feed? What is a trigger? Several documentations and coffees later, we finally came up with a proper solution.
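The daemonized-thread idea can be sketched like this; a queue stands in for the Cloudant _changes feed, and the `None` sentinel is only there to end the demo cleanly:

```python
import threading
import queue

changes = queue.Queue()  # stand-in for the Cloudant _changes feed

def listen_for_changes(on_change):
    # runs in the background, so the Flask request handling never blocks
    while True:
        change = changes.get()
        if change is None:      # sentinel: stop listening
            break
        on_change(change)

seen = []
listener = threading.Thread(target=listen_for_changes,
                            args=(seen.append,), daemon=True)
listener.start()

# simulate a DB update followed by the shutdown sentinel
changes.put({"id": "games:gameID123"})
changes.put(None)
listener.join(timeout=5)
```

Marking the thread as a daemon means it never keeps the server process alive on shutdown.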
What can one say? Everything always sounds so simple and easy in theory, but when it comes down to it, one often gets stuck on little things. Some choices we made in the beginning were good, some were quite challenging and some led to dead ends.

When we started out, we had only developed software for local use on computers or mobile devices; the closest we’d gotten to something like cloud computing was maybe getting something to run over a network. It is quite challenging getting everything to run in the cloud. It’s a whole new game: similar, but with different rules. Even with all the services provided by IBM, we still ran into many obstacles, especially when developing locally and then trying to make it work in the cloud. Getting all the different types of technology to work together is also pretty tricky. Only with experience will one get good at it, because you won’t know whether a service can provide the functionality you require until you try it. And often we needed additional features or functions we didn’t think of beforehand, aspects we didn’t consider. The software technologies we’ve encountered may be very powerful, but with great power comes great confusion. Yet that is where progress happens: not when everything is going smoothly, but when one is faced with difficult challenges. And we’ve had plenty.

I would say that this project, this course, has been one of the most beneficial in our studies at the HdM. We had the opportunity to get our hands dirty, with expert guidance, in a safe environment. The experience we’ve gained is priceless. Our understanding of cloud computing and our ability to develop software for it have progressed several levels. And since this was our main goal, I would have to say that our project was a complete success.
As a part of the lecture “Software Development for Cloud Computing” our task was to bring an application into a cloud environment. When we presented our software project MUM at the Media Night in our 4th semester, we talked with a few people about dockerizing MUM together with a whole email server configuration. While brainstorming a project idea for the lecture, we remembered these conversations. And since Docker by itself would not have fulfilled all of our requirements, we decided to create a Kubernetes cluster that would house a complete email server environment and would be even easier to install. That way we could learn more about containerization and how clustering with Kubernetes works.
How Does Email Work?
First of all, we need to make a small trip to the world of emails to better understand what we actually wanted to do.
As part of the lecture “Software Development for Cloud Computing” we developed a doodle image recognition game. The idea came to us when we were searching for possible mini-games for our semester project “Peers – The Party”, an iOS app using Apple’s MultipeerConnectivity framework.
Video game streaming has taken over a big part of the commercial video game scene with Twitch being its biggest platform. Viewers are able to communicate with other viewers and streamers through the chat, or watch their previous streams or highlights. These highlights are made by streamers creating short clips of their favourite moments either by looking through all of their footage, or marking the time of a potential highlight by using the chat command /mark.
Since a streamer is unlikely to mark a highlight in the chat himself while playing a game, and not every streamer has a chat moderator to take that responsibility into his or her hands, usually the only option is to look through hours of video footage to find a few highlights to cut out.
Because of this issue, I had the idea of creating a Twitch chat bot which analyzes and evaluates the chat of a Twitch channel, taking message frequency, length and emotes into account, and which is able to detect sudden surges in chat activity and set a highlight marker accordingly. It would also be nice to actively track the collected data in a way that is easy and intuitive to understand.
I decided to implement the bot first and worry about everything that comes after later. I took a look at different Twitch bots to help me get started and created a simple connection to my Twitch channel through the Java Twitch API, integrated into my project via Gradle. Soon I was able to receive the messages from my chat and relay them to my console. The next issue was determining how to detect highlights in my chat. I had to find a way to reliably detect surges in chat activity without counting and marking isolated cases. Research suggested using the exponentially weighted moving average (EWMA), allowing my bot to rule out the occasional nonstarter.
The EWMA allows the detection of exceptional deviations by comparing a new value to its history, while putting more weight on recent values than on old ones through exponentially decreasing weights. This added reliable analysis of my received data, so finally all that was left to do was to set a deviation threshold and give my bot access to the marker command in chat as soon as this threshold was exceeded.
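The idea can be sketched in a few lines. This is a simplified stand-alone version, not the bot's actual code, and the smoothing factor and threshold are illustrative values:

```python
def ewma_detector(values, alpha=0.3, threshold=2.0):
    """Return the indices where a value exceeds the EWMA by the given factor."""
    spikes = []
    avg = values[0]  # seed the moving average with the first observation
    for i, v in enumerate(values[1:], start=1):
        if avg > 0 and v > threshold * avg:
            spikes.append(i)  # chat activity far above its recent history
        # Exponentially decreasing weights: recent values dominate the average.
        avg = alpha * v + (1 - alpha) * avg
    return spikes

# Messages per second in chat; the burst starting at index 5 gets flagged,
# while the steady baseline before and after it does not.
rates = [4, 5, 4, 6, 5, 30, 28, 6, 5]
print(ewma_detector(rates))
```

Because the average adapts to the burst itself, a sustained rise would stop being flagged after a few values, which is exactly what rules out marking every small fluctuation.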
Database and Visualization
Throughout my research for an appropriate and comfortable interface between the database to be implemented and the visualization tool, I came across countless options. The original plan was to turn my bot into a scheduled task running as a service in my very own Wildfly application server. This was to be integrated into my IntelliJ IDE via Gradle, or tested through Eclipse via Maven. Additionally, I wanted to save my data in a SQLite database using JPA/Hibernate, which would communicate with my Chart.js visualization on my website through a REST API. Chart.js would take the data and show a graph describing the EWMA value of the chat's activity in real time.
Rethinking the Architecture
After some research into the best way to set up the Wildfly server, I came across an article describing .NET Core and its web and database features. It seemed to contain everything I needed to realise my project in one package, so I decided to leave the comfort of my Java-only experience and dive into uncharted territory. Setting up the bot again in a .NET Core web application using C# was no issue, since I was able to carry over a lot of my Java logic into the new project. NuGet proved itself an intuitive tool for integrating external libraries into my workspace, and through the integration of TwitchLib, the .NET library for Twitch-related projects, my bot was able to do everything the Java version could.
Next came the creation and storage of the received data in a database. For this I used .NET's provided library for web-application databases in combination with Microsoft's Entity Framework. These allowed me to create a SQLite database model which could be initialized via NuGet and filled by connecting my bot to the database context.
Through the NuGet Package Manager I also found a convenient interface to connect the ASP.NET MVC framework to Chart.js. By writing a ChartController class, I created the means to send results in JSON format from my database to the visualization website, which I configured to show chat data from every second in a historical line graph.
In addition to running the web application through my IDE, the .NET web application allowed me to create a Docker script which, when Docker is running in Linux mode, could simply be started through the Visual Studio IDE, running my application as well.
Issues and Conclusion
My biggest issue was finding and configuring the right tools to connect all the services I needed. This was, of course, also made difficult by the fact that I had never done a project like this before, so initially I didn't even know what to look for. It was eventually resolved through research and by deciding on .NET Core as my preferred platform, which, however, only happened after a lot of trial-and-error episodes trying to get Wildfly working in my IntelliJ as well as my Eclipse IDE. As a developer who has first and foremost worked with Eclipse, Maven and Java, I initially tried something new with the IntelliJ IDE and Gradle. Of course I needed a little time to get used to this change, even though the difference to my usual workflow was not too big.
By deciding to switch everything up, I challenged myself to take on something completely new. Though I had worked with C# once, as part of a Unity project where I was in charge of the enemy AI in a video game, the scope of this project was vastly different. So I first decided which libraries, frameworks and interfaces I would need and use, and then researched how to use them in order to set up the base for my project.
In conclusion, this was a really interesting and fun project to work on. By combining Twitch, a private interest of mine, with cloud technologies, I gained valuable knowledge and insights into the subject matter. It also eased my fear of trying new things and technologies. Sometimes it's necessary to jump over your own shadow for things to eventually get easier.
If you would like to take a look at the source code, or try the bot yourself, click here.
When I was invited to a design thinking workshop at the summer school of Lucerne – University of Applied Sciences and Arts, I had my first experience with the end-user interaction part of Industry 4.0. It was an awesome week with a lot of great people and it got me interested in the whole Industry 4.0 theme. So when we did projects in the cloud development lecture, I was determined to do a production monitoring project.
Imagine you are having a bad day, but you don’t know what to do. Your friends are not available, but you’d like to have advice depending on your mood. For that case, we created the Supporting Shellfish! This service generates advice based on the mood it recognises on your face.
Research on different services
In order to realise our idea, we had to choose between different cloud based services in the field of image recognition or to be more specific, in the area of face recognition.
Machine Learning as a service is the overall definition for diverse cloud-based services providing functionalities in the area of Artificial Intelligence such as data pre-processing, model training and prediction. The prediction results can be used and integrated through REST APIs. First of all, we analysed three of the most popular companies and their services.
Google, Amazon and IBM. Which one should we choose?
All of those services provide the usage of pre-trained models via API or the possibility to create and use a customised model. This website provides a very good overview of the detailed functionalities of the different services. However, for our case we focused on the following pros and cons of those services:
Creating a customised classifier
After analysing the pros and cons of the different services, we decided to use IBM Cloud. The deciding factor for us was the pricing, but the well-structured documentation and the available tutorials also helped convince us.
Although IBM provides a facial emotion classifier, we decided to create our own facial expression recognizer based on the Visual Recognition service of IBM Watson, for learning purposes.
We searched for different emotion datasets and found the MUG Facial Expression Database. After having read and accepted the license agreement we requested access. A few weeks later we received the necessary access credentials. The database provides videos and images of 52 different people expressing the emotions happiness, sadness, neutral, anger, surprise, disgust and fear.
To create our own classifier in IBM Visual Recognition, we had to bundle the data into one zip file per class/emotion and therefore created a whole new structure for the facial dataset. We could choose between using the terminal and the well-structured user interface of IBM Watson Studio – we decided to use the latter.
First, we configured the model:
After the model was created, we were able to drag and drop our zipped training data on the right-hand side of the user interface below “2. Add from project”. We named the zip files after the classes we wanted to predict. We had to censor the faces in the following screenshots due to data protection. As soon as we finished uploading our training data files, we hit the “Train Model” button and the training began.
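Restructuring the dataset into one zip per class can also be scripted. The following is a minimal sketch under the assumption that the images are sorted into one folder per emotion; the function name and directory layout are illustrative, not our actual setup:

```python
import os
import zipfile

def zip_per_class(source_dir, out_dir):
    """Create one <class>.zip per emotion folder, e.g. happiness.zip, sadness.zip."""
    os.makedirs(out_dir, exist_ok=True)
    archives = []
    for emotion in sorted(os.listdir(source_dir)):
        class_dir = os.path.join(source_dir, emotion)
        if not os.path.isdir(class_dir):
            continue  # skip stray files next to the class folders
        zip_path = os.path.join(out_dir, emotion + ".zip")
        with zipfile.ZipFile(zip_path, "w") as zf:
            for name in sorted(os.listdir(class_dir)):
                # Store the images flat inside the archive.
                zf.write(os.path.join(class_dir, name), arcname=name)
        archives.append(zip_path)
    return archives
```

Each resulting archive can then be uploaded as one class in Watson Studio, with the file name doubling as the class name.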
After circa 15 to 20 minutes, the training finished successfully and we were able to embed our custom model into the backend of our web application.
Building the Web App
Parallel to this process, we created a one-page web application.
The frontend of our application is made up of one HTML page, which is rendered as a Jinja template by Flask. The functionalities are as follows:
We have two buttons: one enables the selection of an image from your local device, the other the upload of that selected file. As soon as an image is selected, the user sees a preview of the image in a form next to Shelly, the Supporting Shellfish. The selected image is encoded into base64 format, and after pushing the “Upload File” button it is sent to the backend via an XMLHttpRequest.
Finally, the frontend waits for the status code from the backend and catches exceptions if something went wrong.
The backend consists of two routes: one GET route for the landing page and one POST route for receiving the image from the frontend. The received image is decoded from base64 and processed by IBM Visual Recognition. Our classifier predicts the mood based on the received image and sends back a JSON file containing the predicted class with the highest confidence. Based on that prediction, a random piece of advice is picked from the corresponding advice list and sent to the frontend.
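The core logic of the POST route can be sketched roughly like this. The classifier call and the advice lists are stand-ins for our actual IBM Visual Recognition integration, and `handle_upload` is an illustrative name, not a function from our code:

```python
import base64
import random

# Stand-in advice lists; the real app has one list per recognised mood.
ADVICE = {
    "happiness": ["Keep doing what you are doing!"],
    "sadness": ["Take a walk and treat yourself to something nice."],
}

def handle_upload(b64_image, classify):
    """Decode the uploaded image and answer with a mood and a random advice.

    `classify` stands in for the IBM Visual Recognition call and is expected
    to return the predicted class with the highest confidence.
    """
    image_bytes = base64.b64decode(b64_image)
    mood = classify(image_bytes)
    return {"mood": mood, "advice": random.choice(ADVICE[mood])}

# Fake classifier for demonstration: always predicts sadness.
fake_classify = lambda img: "sadness"
payload = base64.b64encode(b"raw image bytes").decode("ascii")
print(handle_upload(payload, fake_classify))
```

In the real app this dictionary is what the Flask route serializes to JSON and returns to the frontend.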
How does Shelly the Supporting Shellfish generate her advice? First of all, upload a picture of your face. After hitting the button “Upload File”, Shelly will use the customised model via IBM Cloud and predict the mood on your face. Based on the recognised mood, she will provide you a more or less helpful advice.
Every member of the Supporting Shellfish team has been active in the area of artificial intelligence. However, we wanted to analyse the advantages and disadvantages of integrating a cloud-based service and the usage of “machine learning as a service” in an application.
The most interesting part for us was creating a customised model in the cloud. We were especially impressed by the functionality and usability of this process. The tough part was the selection of the dataset to train the model. We had to restructure the data to fit our needs and the requirements of IBM. After the training was completed, the integration of the model into our web app went smoothly and quite quickly.
If you are interested in the project, you can have a deeper insight here.