Embedded Security using an ESP32

Ever wondered why your brand-new Philips Hue suddenly starts blinking SOS?

Or why an ominous broadcast appears on your Samsung TV while you are watching your daily Desperate Housewives?

And weren't you wearing an Apple Watch a few minutes ago, and why did you buy two TVs in the meantime?

Security of smart and embedded devices is one of those topics everyone has heard about, whether in a good or (more likely) a bad light.

Let us take a journey down the rabbit hole and find out how such devices handle security today and how we can improve on it. On that journey, we will visit five points which, in all fairness, are going to be quite technical.

AIRA Voice Assistant – A Proof of Concept in Virtual Reality


As part of the lecture “Software Development for Cloud Computing”, we were looking for a way for users to get basic assistance within our existing virtual reality game AIRA. The primary objective was maximum user-friendliness while avoiding any interruption of the immersive gaming experience. It is also important to keep in mind that the user is on their own and any kind of outside support is usually not possible.

Moreover, within virtual reality applications no conventional input devices are generally available, so a keyboard is not an option. Following up on this idea, many people may next think of an on-screen keyboard, as they know it from their smart TV at home, operated by a game controller. However, such an approach would run contrary to a high ease of use, and the majority of such implementations are quite crippled as well as hard to use.

So, what would be an obvious solution that takes all previous considerations into account? Simply think of something that each of us carries along at all times: our own unique voice. Accordingly, we decided to implement a personal voice assistant in our game. As we will see, the individuality of each human voice leads to a lot of difficulties we have to deal with.

In the following, we will explain in detail how we implemented a personal voice assistant using multiple Watson services, which are part of the IBM Bluemix cloud platform. In particular, we will discuss the fundamental problems we ran into and point out possible approaches.


Cloud Security – Part 2: The vulnerabilities and threats of the cloud, current scientific work on cloud security, conclusion and outlook

I’m glad to welcome you to the second of two blog posts about cloud security. In the first part, we looked at the current cloud market and learned about the concepts and technologies of the cloud. This laid the groundwork for this post, in which we will now deal with the vulnerabilities and threats of the cloud, have a look at current scientific work on the topic, and finally conclude with a résumé and an outlook.

Once again, I wish you an enjoyable read! 🙂

The vulnerabilities and threats of the cloud

First, we are trying to identify and discuss the vulnerabilities of cloud systems as far as possible. After that, we will look at a list of threats companies can encounter when using the cloud.


Identifying vulnerabilities is important because, in my opinion, it is the right way to address and eliminate issues that can lead to security problems, and thereby to create more robust systems.

In my opinion, the vulnerabilities of the cloud can lie in:

  • the entire software stack,
  • the hardware,
  • the communication and connections between software and hardware components,
  • and the people or employees involved.

The software stack is a weak point because, in the case of an infected hypervisor, all overlying components can be taken over if no special security precautions have been taken (in the section on scientific work under Iago Attacks, the background for this is explained in more detail). In addition, using the service models of the public cloud means working in an unknown system environment, which poses a potential danger. It is therefore necessary to protect the application from this unknown and potentially harmful environment by means of certain mechanisms. Weak points that can be found in the software stack, and thus exploited, are in my opinion:

  • The programming language used: In C, for example, integer handling errors can be exploited to generate buffer overflows. Such an overflow lets an attacker overwrite adjacent memory, for instance with malicious code or a manipulated return address, so that the injected code is executed and the system can eventually be taken over.
  • Insecure settings and rights distributions in systems.
  • Insecure techniques for virtualisation: If multiple containers are based on a single kernel, there is no adequate isolation.
  • Insecure programmed apps: For example, no use of frameworks for secure programming, or no secure exception and error handling.
  • Programming errors of all kinds: Programming errors can lead to crashes, thus enabling the execution of malicious code.
  • Drivers needed for the hardware (mouse, keyboard, GPU, etc.) can be infected.
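
The first bullet above can be made concrete with a short sketch: a hypothetical C routine (the function names are ours, not from any real codebase) whose signed/unsigned confusion defeats its own bounds check, which is exactly the kind of integer weakness that enables buffer overflows.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical length-checked copy. The comparison mixes a signed int
 * with an unsigned size_t: a negative len passes the check, and the
 * cast in memcpy turns it into a huge unsigned size, a classic
 * integer weakness that leads to a buffer overflow. */
int copy_message_unsafe(char *dst, size_t dst_size, const char *src, int len) {
    if (len > (int)dst_size)        /* a negative len slips through here */
        return -1;
    memcpy(dst, src, (size_t)len);  /* (size_t)-1 becomes a huge copy */
    return 0;
}

/* Fixed version: reject negative lengths before any unsigned cast. */
int copy_message_safe(char *dst, size_t dst_size, const char *src, int len) {
    if (len < 0 || (size_t)len > dst_size)
        return -1;
    memcpy(dst, src, (size_t)len);
    return 0;
}
```

Calling the unsafe variant with a negative length would corrupt memory; the safe variant simply rejects it before any unsigned conversion takes place.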

Next to software, the hardware also represents a weak point. A processor can already be delivered faulty, in which case even the basic building blocks of a system would not be trustworthy. For this reason Google, for example, designs its own hardware and provides it with unique verification numbers, ensuring security at the lowest level. But not every company is able to design its own hardware, which is why a certain degree of trust in the hardware manufacturer is important. In order to increase security, trust could be strengthened by contractual regulations, or the purchased hardware could be checked for errors. A further weak point is the transmission between hardware and software components. As long as transmissions are not encrypted, they can be read or, in the worst case, even changed. This problem could be counteracted by appropriate encryption with verification and certification.

Humans are also a weak point in systems (and maybe even the biggest one). Again and again we read in the news that employees have caused system failures. One example is Amazon: an employee mistakenly took down many servers through a typo in a command, thereby stopping a large part of Amazon's cloud without any malicious intent. After the incident, however, the fault was not placed on the employee; instead, the source of the fault was countered with a more robust system configuration, so that such an event can no longer be caused by a single employee. In my opinion, this is the right approach to solving such issues, because with more robust systems, errors have a lesser impact. Another example is an incident at a Dutch cloud provider: a former admin deleted the company's complete customer database. The data could then be recovered only with difficulty and not completely. The consequences of such an incident are considerable. In this case, a better offboarding process might have prevented the damage.

If we went deeper, we could certainly identify many more weak points, but we will leave it at these.

There are various ways to obtain information about current vulnerabilities and exploits of systems or software in order to counteract them early. For example, exploit databases like CVE (Common Vulnerabilities and Exposures) or the Exploit Database can be searched for proofs of concept. In addition, the distributors of software or systems often maintain their own web pages with security information and currently public vulnerabilities. It can also help to keep up with the latest happenings by following journals, blogs or websites about IT security, such as heise Security. Various conferences on IT security, such as the USENIX Security Symposium, are also a good place to get up-to-date information (you can find examples of current scientific work in the section “Scientific work on cloud security”). Finally, there is software for patch management that helps you keep up to date with current patches, which can be useful if your software includes several external dependencies.

Up next, we will look at the threats companies can face in the area of cloud computing.


For the threats associated with cloud usage, we will look at a list by the Cloud Security Alliance (CSA). This list is called “The Treacherous 12” and is from 2016. It represents the top 12 cloud computing threats, ranked by severity (not by probability of occurrence). The threats of the CSA relate to information security rather than to more robust systems, but I would like to go through them with you for the sake of completeness. Below we will look at the individual points of the list, each with a short explanation, starting with the least severe:

  • Shared Technology Issues. These are security problems due to shared infrastructures, platforms or applications when using the service models (IaaS, PaaS and SaaS) of the public cloud.
  • Denial of Service (DoS). This is when an attack causes a failure of certain system components, which means that data or applications are no longer available. Examples: WannaCry, a malicious program (ransomware) for Windows, encrypts the data of a system so that it can no longer be used unless a ransom is paid for the decryption of the data. Or: a system is attacked using a botnet that sends so many queries to a service or server that it fails, which is referred to as distributed denial of service (DDoS).
  • Abuse and Nefarious Use of Cloud Services. Security problems due to poorly secured cloud services, free trials, and fraudulent registrations with payment fraud.
  • Insufficient Due Diligence. For example, an inadequate risk assessment in development of a business model that involves the use of cloud technologies.
  • Data Loss. Data loss can occur in a variety of ways and doesn’t necessarily have to be caused by an attack. Examples are: inadvertent deletion of data, and loss of data due to unpredictable physical effects, e.g. fire or water. It is recommended to store data georedundantly or at several providers.
  • Advanced Persistent Threats (APTs). Companies are an attractive target, because of their technological advance.
  • Malicious Insiders. Danger arising from employees, as illustrated by the case of the Dutch provider in the previous section.
  • Account Hijacking. Hijacking of accounts, e.g. by phishing, fraud, or exploiting vulnerabilities in a software.
  • System and Application Vulnerabilities. Errors in programs or systems can lead to unauthorised access.
  • Insecure Interfaces and APIs. User interfaces or APIs are exploited, for example by injections, to gain unauthorised access, to modify data, or to cause a crash.
  • Weak Identity, Credential and Access Management. Security issues due to flawed multi-factor authentication, weak passwords, or not changing cryptographic keys, passwords, or certificates frequently.
  • Data Breaches. An incident in which sensitive, protected or confidential information is released, stolen, viewed or used without authorisation. Data breaches represent the most serious threat in the field of cloud computing according to the classification of the CSA.

If you would like to have further detailed information about the threats, please refer to the CSA publication.

According to Intel, there are several tips that help protect against data breaches (although these tips may be aimed at rookies and should be clear to any computer scientist). You should:

  • keep your anti-virus software up-to-date (on this point, opinions among computer scientists are certainly divided),
  • provide additional protection through patch management or intrusion detection software,
  • back up your data (e.g. georedundant backups),
  • not neglect the security of mobile devices, as they are used to the same extent as laptops or PCs today,
  • and train the users or employees appropriately (this point is still the main reason for the occurrence of a large part of the above-mentioned threats).

So far so good. In the next section, we will eventually look at the scientific work on cloud security.

Scientific work on cloud security

Before looking at examples of recent scientific work in the field of cloud security, I would like to briefly consider two special cases. They represent a threat and a protection of systems. Let’s briefly look at them below.

Iago Attacks

The example that represents a threat in the field of cloud computing is Iago Attacks according to a paper by Checkoway and Shacham.

First, some background information: Iago is a fictional character from the play Othello by William Shakespeare. He is the play’s antagonist, a scheming servant who plots against his master, behind his back and against his will. Now let’s see how this relates to the attacks from the paper.

Iago Attacks exploit the fact that the kernel and the applications are peers, and that the system call API is effectively a remote procedure call (RPC) interface. The kernel serves as the vehicle for Iago Attacks, through which they are controlled and carried out. When an application makes a system call, the malicious kernel returns a carefully chosen sequence of integer values in response, which can cause the application to act against its will and perform computations at the behest of the malicious kernel. We can conclude that in the field of public cloud computing, the application must be protected from an unknown and potentially harmful environment, in contrast to the traditional way of thinking, which tries to protect the environment from a bad application.
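
To make this concrete, here is a minimal sketch (our own illustration, not code from the paper) of the defensive posture this implies: instead of trusting the kernel's reply, the application validates it. The paper's canonical example is a malicious mmap() result that overlaps the application's own stack; the helper names below are hypothetical.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* True if the half-open ranges [a_start, a_start+a_len) and
 * [b_start, b_start+b_len) overlap. */
int regions_overlap(uintptr_t a_start, size_t a_len,
                    uintptr_t b_start, size_t b_len) {
    return a_start < b_start + b_len && b_start < a_start + a_len;
}

/* Hypothetical sanity check on an address the kernel claims to have
 * mapped: reject failure values, arithmetic wrap-around, and, crucially,
 * any mapping that an Iago-style malicious kernel placed over our stack. */
int validate_mmap_result(uintptr_t ret, size_t len,
                         uintptr_t stack_start, size_t stack_len) {
    if (ret == (uintptr_t)-1)  /* MAP_FAILED-style error value */
        return 0;
    if (ret + len < ret)       /* address range wraps around */
        return 0;
    return !regions_overlap(ret, len, stack_start, stack_len);
}
```

A shielded application would run such checks on every untrusted syscall result before using it, rather than assuming the kernel is honest.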

Intel Software Guard Extensions (Intel SGX)

The example concerning protection against attacks is Intel’s Software Guard Extensions (SGX), explained in a paper by Costan and Devadas.

Intel SGX provides developers with CPU-based protection capabilities. Areas of an app that are protected by the CPU with SGX are called enclaves, and developers can decide which parts of an app should run in an enclave. While the application is running, its data is protected by the enclave; while it is not running, the data is protected by the CPU capabilities. There are CPU-based verification processes for creating enclaves and for checking whether the CPU has Intel SGX capabilities. Critical data is not handed to an enclave if the app does not verify appropriately.

According to Intel, SGX can be used to protect against a compromised BIOS, corrupted hypervisors and firmware, as well as against modification and disclosure of data. It is designed for C and C++ applications only. SGX can protect applications from corrupted hypervisors, but not the hypervisor itself from a takeover. Intel SGX is available for processors from the 6th generation onwards (also known as Skylake), but only for CPUs whose specification explicitly declares SGX support. You can check this list, which provides information about which hardware supports SGX.

Criticism of Intel SGX

There is also criticism of Intel’s SGX: 98% of the cloud runs on Intel processors, which amounts to a near monopoly in the field of cloud computing today. MIT researchers criticise that Intel could expand this into another monopoly position with SGX: manufacturers must purchase a license and an attestation key from Intel to use SGX. Thus, Intel can decide about the use of SGX and thereby over the winners and losers of the market; for example, Intel could exert a very strong influence on the market through targeted pricing.

A paper by Schwarz et al. (2017), for example, deals with how to hide cache attacks using SGX. We can see that a technology that is supposed to be conducive to the protection of systems can be exploited to conceal malicious processes.

Another drawback associated with SGX is the growing number of extensions for Intel’s CPUs, which drastically increases complexity. The documentation of SGX alone comprises 120 pages, and with every further extension brought into the CPU the complexity keeps rising: reading the full description of a single modern Intel CPU would take a month at 40 hours of reading per week.

After getting to know these two special cases, we will now take a look at the papers that are currently dealing with cloud security.

Overview of recent papers on cloud security

The following is a selection of recent papers about cloud security. It is briefly described what the respective paper deals with. The selection of these papers is based on the assortment of Adrian Colyer, author of the blog The Morning Paper. Colyer chooses a paper of his interest daily and describes what it is about. In my opinion, this offers a pretty good pre-selection.

These papers are:

  • Deconstructing Xen, Shi et al., NDSS ’17. For this paper all publicly known vulnerabilities of the most widely used hypervisor Xen were investigated and an implementation named Nexen was developed, which counteracts most of the vulnerabilities.
  • SGXIO: Generic Trusted I/O Path for Intel SGX, Weiser & Werner, CODASPY ’17. SGXIO provides support for generic and trusted I/O paths to protect the input and output of users between enclaves and I/O devices.
  • Panoply: Low-TCB Linux applications with SGX enclaves, Shinde et al., NDSS ’17. This paper looks at what happens when an application is split into several small services, each running in its own enclave communicating with each other.
  • Shielding applications from an untrusted cloud with Haven, Baumann et al., OSDI ’14. This paper introduces the concept of shielded execution. Shielded execution is the opposite of sandboxing (a sandbox protects the system environment from a bad application): It protects the confidentiality and integrity of the code and data of an application against an untrustworthy host. That is also referred to as reverse-sandboxing.
  • SCONE: Secure Linux Containers with Intel SGX, Arnautov et al., OSDI ’16. By using SCONE (Secure CONtainer Environment), applications should not only be packaged and deployed with ease, but also securely.
  • SGXBounds: memory safety for shielded execution, Kuvaiskii et al., EuroSys ’17. The authors of the paper show that memory safety attacks are still possible when using SGX, and offer a possible solution.
  • A study of security vulnerabilities on Docker Hub, Shu et al., CODASPY ’17. Docker Hub offers a huge selection of different images. The authors developed a framework for automated scanning of images on Docker Hub for vulnerabilities. The results are partially frightening.

As we can see, many of these papers deal with Intel’s SGX. This shows the market influence that Intel has already exerted and confirms the previously mentioned criticism of the MIT researchers.

All of these papers are certainly very interesting and worth looking at, but going into detail would go beyond the scope of this post. However, I would like to introduce one of them, since we already dealt with the structure of the corresponding technology in the first post: Deconstructing Xen.

Deconstructing Xen

Hypervisors are important for virtualisation, but vulnerable to attacks (especially Xen, due to its monolithic design in pure C). Hypervisors lack a security monitor for supervision, as well as separation strategies to isolate processes. The authors analysed all public vulnerabilities of the Xen hypervisor: 144 vulnerabilities were located in the core of the hypervisor, not in Dom0 (domain 0; Dom0 is the first domain launched by Xen on booting; it has special privileges, such as launching new domains or accessing the hardware directly, and is therefore responsible for device drivers and hardware).

For the study, Xen was divided into Security Monitor, Shared Service Domain and isolated per VM Xen slices as the following figure shows:

Figure 1: Structure of Nexen
(Source: Deconstructing Xen, Shi et al.)

(Para-VM refers to paravirtualisation, a virtualisation technique that provides a software interface similar but not identical to the actual hardware.) A nested kernel architecture was interleaved into the Xen address space (that is also where the name Nexen comes from; it stands for “nested Xen”). Thereby, VM-based hypervisor compromises are confined to individual Xen VM instances. This alone prevents 107 of the 144 known vulnerabilities. It also enhances the code integrity of Xen, because it protects against all code injection compromises, at an average overhead of only 1.2% (which is very low in view of the vulnerabilities prevented).

If you would like to engage more deeply with scientific work in the field of cloud security, you can use the papers listed above as an entry point, or read Adrian Colyer’s summaries of them on The Morning Paper, which I can highly recommend.

This is almost all the information on the topic of cloud security I wanted to provide, and I thank you already if you have made it this far. In the next and last section, we will briefly recapitulate, draw a conclusion, and take a look at the possible prospects of cloud security.

Summary, Conclusion and Outlook

Let’s first summarise what we’ve looked at in the two blog posts:
In the first post, we first looked at how much the cloud is contributing to digital transformation and why security in the cloud is important. After that, we looked at the concepts and technologies of the cloud and have already drawn conclusions about the security of the different techniques for server virtualisation. Thus we learned about specialisation and isolation in the cloud computing area. In this post, we looked at the various vulnerabilities and threats of the cloud, and finally looked at some scientific approaches that deal with the topic of cloud security. The two posts are intended to serve as a basis on which further development can be carried out in order to be able to deal with more specific topics from the cloud computing area in future and to go even further into the depth of the topic.

I would like to draw a conclusion by answering the following question: “Which type of the cloud is safer, the public or the private cloud and why?”

In the public and the private cloud, the same technologies can be used for server virtualisation. As a result, both types generally have the same technical weaknesses. There are, however, some points which can adversely affect security in the public cloud. These are:

  • The system environment is shared by several users, and multiple users mean a larger attack surface.
  • Resource management is handed to the cloud provider. A threat can be posed, for example, by the employees of the provider.
  • The provider could use unsafe techniques for server virtualisation.
  • There is an unknown and potentially insecure system environment. This differs according to the service model:
    • SaaS: You have to rely on the security precautions of the provider. There is no technical influence on the application or the system (except perhaps the app settings).
    • PaaS: You should protect the application against the unknown system environment (and, of course, also against external influences).
    • IaaS: The operating system can be influenced by a malicious hypervisor. Similar to PaaS, the application should be protected against threats.

We see that the public cloud has a variety of additional attack points compared to the private cloud. The cloud providers, however, are professionals and should be able to protect their systems against attackers and failures. In my opinion, you should weigh with these issues in mind whether it is worth outsourcing business processes to the public cloud or not.

Finally, let’s look at a few points, which I believe the field of cloud security will continue to deal with in future.

The developments of the above-mentioned papers help to protect an outsourced application from an unknown system environment. A problem, however, is that these developments are relatively new and some are not yet publicly available; examples are SCONE and SGXBounds (which is based on SCONE). Furthermore, to my knowledge there is still no established way of protecting applications specifically according to the type of server virtualisation used. This illustrates the relevance of the topic today. From the abundance of work, however, we can conclude that much is currently being done to make systems in the field of cloud computing more secure.

Machine learning in the context of cloud security is, in my opinion, a topical issue. Malware detection software that spots anomalies in system environments is in high demand. Together with machine learning, techniques can be devised to identify malicious processes, for example by detecting outliers. But processes that disguise themselves as ordinary processes must also be recognised. There will certainly still be a lot going on in this area in the future.

In addition, the buzzword “zero trust” is also leading the way, according to a trend prediction from Forrester. Zero trust means “never trust, always verify”, on the technical as well as on the user or employee level. Its importance at the user level is repeatedly confirmed by media reports. It is questionable, however, how corresponding processes would affect the motivation of employees.

We can see there is a lot going on in the field of cloud security, and the topic is far from exhausted. I hope these two posts have given you a basic insight into the topic and perhaps sparked an interest in going even deeper.

Did you notice areas that I did not cover or do you possibly have different views in certain points? Let me know and write your opinion in the comments! I am looking forward to your feedback!

Thank you for your interest! 🙂

Sources (Web)

Sources that were not stated in the text. All sources were last accessed on September 3, 2017.

Cloud Security – Part 1: A current market overview and the concepts and technologies of the cloud

Welcome to the first of two blog posts that will deal with the latest developments in cloud security.

In this post, we will initially look at the role the cloud plays in today’s market and why it is important to deal with the security of the cloud. In order to address the security aspects, we need to know how the cloud works, so we’ll then take a closer look at the concepts and technologies used in the cloud.

Once we know the technologies of the cloud, we will consider their weaknesses and threats in the next post. To this end, we will try to identify the weaknesses of the cloud as far as possible, and review a list of threats that companies can face when using the cloud. After that we will look at scientific papers that currently deal with the issue of cloud security. Finally, we will summarise, draw a conclusion and look ahead to potential future developments in the area of cloud security.

And now, enjoy reading! 🙂

A current market overview

Through digital transformation, cloud computing has become increasingly important for companies from a variety of areas in the last few years. In addition to big data, mobility, the Internet of Things and cognitive systems, cloud computing is today one of the most important technologies for implementing digital transformation. With the cloud, new services can be made available and scaled quickly and easily, mostly without acquiring one’s own hardware (this applies to public and hybrid clouds, as well as to hosted private clouds; you can find a more detailed explanation of the various cloud types in the next section), which is why the cloud can rightly call itself an innovation accelerator. It has also become an essential component of the IT strategy of many companies and thus the de facto architectural model of digital transformation. It is not surprising that IT budgets in many companies are strongly moving towards cloud computing. By 2020, expenditure on cloud computing in Germany is expected to rise to about 9 billion euros, roughly three times as much as the 2.8 billion euros of 2015.

Companies have three essential requirements when using the cloud: availability, performance and security. A cloud will only be successful if the users’ data is secure and customers can rely on it, because even today the greatest obstacle to the use of the public cloud is the fear that unauthorised people could gain access to sensitive company data. This fear is reflected in the opinions of cloud users: German users tend to prefer cloud providers that are bound to German data protection law; the majority of German IT decision makers tend to rely more on German cloud providers, and more than half of them distrust American providers when it comes to data privacy (which in turn creates opportunities for regional providers that can complement the offerings of the cloud giants). This is one reason why German companies have opted for private cloud solutions in recent years and have therefore run applications with sensitive data in their own IT environments. However, as early as 2015, more than half of all companies were already planning to migrate to a hybrid cloud within the next one or two years (thereby maintaining the full control of the private cloud while being able to scale to the public cloud at demand peaks).

The trust placed in German clouds because of legal provisions has not gone unnoticed by the EU, which is therefore implementing the General Data Protection Regulation (GDPR) from May 2018 onwards. This will enable cloud providers located within the EU to enjoy the same trust as a German cloud. The regulation will also apply to third countries offering their services within the EU. Legal regulations are a non-negligible factor when it comes to raising the security level on a large scale. Attackers, however, who are intent on causing damage and revealing information, will not be impressed by any legal provisions; they rigorously try to exploit any technical or human weakness to achieve their goals. Legal regulations may help to avoid procedures that could jeopardise the safety of certain systems, but to achieve proper security, weak points, be they of human or technical origin, must be eradicated, resulting in more robust and less susceptible systems. For this reason, we will focus on the technical background of cloud security issues in these two posts.

In order to be able to deal with technical security in the field of cloud computing, we need to have a profound knowledge of the concepts and technologies used in the cloud, which is why we will look more closely at them in the next section.

The concepts and technologies of the cloud

Now we will look at the concepts and technologies of the cloud a little bit more closely. We’ll start from the beginning with the explanation of what the cloud is, and then go on to the various concepts and solutions that exist, up to the technical construction of the different methods for server virtualisation. Regarding the methods of virtualisation, we’ll already try to draw conclusions about their impact on security. So, let’s start!

What is the cloud?

Cloud computing is storing, managing and processing data online. An ordinary data center consists of computing resources and storage resources, which are interconnected by a network. In the cloud, these resources are virtualised by certain techniques (at which we will look more closely later on). The virtualisation, in combination with the use of specific management software, enables intelligent and automated orchestration and thus an efficient utilisation of resources.

Resource management

Regarding resource management, there are two keywords that you should have heard of when dealing with the cloud: software-defined infrastructure (SDI) and infrastructure as code (IaC). SDI stands for an agile IT infrastructure, a flexible environment in which resource management is automated. With IaC, the infrastructure can be automatically managed and provisioned by code, which is also referred to as programmable infrastructure. The resulting advantages are automated and dynamic adaptability and scalability. Examples include the ability to make virtual servers available quickly and remove them just as quickly, and to compensate automatically for overloads and failures. The available resources can thus be used more (cost-)effectively than in ordinary data centers. This results in the business model of the public cloud, where users usually only have to pay for actual usage, because unused resources can be quickly and flexibly distributed to other users.

Cloud types

There are different cloud types, which, in my opinion, contribute a decisive part to the security of the cloud. They determine on which servers processes and data are located. For this reason, we will consider them briefly below.

The main types of the cloud are: public cloud, private cloud and hybrid cloud. There is also the community cloud (which only a selected pool of users from several organisations can access), and there are variations of the mentioned cloud types, such as the hosted private cloud (where hardware is rented from a web hosting company and a private cloud is established on it), but we will not go into these any further. The simultaneous use of different cloud services and cloud providers is called multi cloud.

The public cloud is characterised by a multi-tenant environment, in which customers usually only pay for actual usage and the data traffic that really occurs. The customer uses a workspace or a service on the provider’s servers. Due to the opaque environment, only insensitive data should be processed in the public cloud. The public cloud is used to host everyday apps, to hand resource management over to the provider, and for applications whose data traffic can rise unpredictably.

In the private cloud, unlike the public cloud, there is only one tenant or user of the resources, and the services run on dedicated servers. The advantage of efficient and flexible resource management is, of course, also given in the private cloud. In addition, the private cloud is completely and independently controllable. It is used for foreseeable workloads and for sensitive or company-critical processes and data.

The hybrid cloud is a combination of private and public cloud. It is used for services with uncertain demands. For example, applications can be deployed on the private cloud and extended or scaled out to the public cloud at demand peaks (which is also referred to as cloud bursting). The hybrid cloud thus uses both dedicated and shared servers. It offers the controllability of the private cloud and additionally the scalability of the public cloud.

Service models of the public cloud

In the public cloud, there are different service models, which we will look at briefly. They determine which parts you can manage and which are managed by the cloud provider. Therefore, in my opinion, they also contribute to the security of the cloud.

The different service models are (as defined by the National Institute of Standards and Technology): infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS). In addition, there are countless other names for services, such as anything or everything as a service (X/EaaS), which comprises everything from ordinary public cloud services up to the use of human intelligence as human as a service (HuaaS), but they are not relevant to us.

Figure 1: Service models of the cloud – Who manages what?
(Source: http://community.kii.com/t/iaas-paas-saas-whats-that/16)

Figure 1 shows who manages what when using the above-mentioned service models. With SaaS, the user utilises a ready-made application or service. Examples of this are Facebook or Google’s search engine. With PaaS, the user hosts an application and data on a platform supplied by the cloud provider. An example of this is AWS Lambda, where the customer deploys code in one of the available programming languages and then executes it by means of specific (function) triggers (the executable code is called a Lambda function). With IaaS, the customer is provided with a complete infrastructure below the operating system. The user can control and configure everything from the operating system (OS) upwards. The OS itself is set up by the provider, because the virtualisation, the core of cloud technology, which allows the flexible management of resources (including the creation of virtual systems), is done by the provider. Compared to the service models of the public cloud, the stack on the left represents the private cloud, where full control over all instances is available, from the network to the application itself.
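To make the Lambda model a bit more concrete, here is a minimal sketch of what such a function could look like in Python. Only the (event, context) signature is fixed by the Lambda runtime; the handler name and the event fields are made up for illustration:

```python
# Minimal sketch of a Lambda function in Python. The runtime calls the
# handler with the trigger payload in `event`; everything else here
# (the handler name, the "name" field) is a hypothetical example.
import json

def handler(event, context):
    # Read a value from the trigger payload and build a response
    # in the shape an API Gateway trigger would expect.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The provider takes care of everything below this function – provisioning, scaling and the OS are invisible to the customer, which is exactly the PaaS trade-off described above.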

Up next, we’ll have a closer look at the structure of server virtualisation techniques used in the cloud.

Technologies for virtualisation

Virtualisation is the key point that makes the cloud possible at all. Through the virtualisation of systems, resources can be utilised efficiently. There are several technologies for virtualisation, which extend from the virtualisation level (see Figure 1) up to the application. These are: hypervisors, virtual machines (VMs), unikernels and containers (and today also a combination of unikernels and containers). In the following we will look more closely at the characteristics and structure of these virtualisation technologies, because they are, in my opinion, the key point for security in the cloud (virtual machines should be known to everyone, so we won’t cover them separately).


Hypervisors, also known as virtual machine monitors (VMMs), are the core of many products for server virtualisation. They enable simultaneous execution and control of several VMs and distribute the available capacities. Hypervisors are lean, robust and performant, because they contain only the software components needed to achieve and manage virtualisation of operating systems (OSes), which reduces the attack surface. Figure 2 shows the two different types of hypervisors.

Figure 2: Types of hypervisors

Type I is referred to as a native or bare-metal hypervisor, because it sits directly on the hardware and can access it directly. Multiple VMs can be created on the hypervisor itself, and in turn multiple applications can run on the VMs. The most widely used type I hypervisor is Xen. Type II is called a hosted hypervisor, because it runs on a host OS. On the hypervisor itself, VMs can be created (they are called guest OSes here). The logical separation of the entities is evident from the indicated boundaries. Vertical boundaries represent a certain security barrier or isolation. Malicious code primarily spreads vertically: if, for example, a hypervisor is infected, the attacker can enter the components based on it and control them, if no special security precautions have been taken in the overlying layers.


Unikernels are specialised, single-address-space machine images created using library operating systems (Lib OSes – a Lib OS has only one virtual address space). In contrast to normal OSes, they are not designed for multi-user environments and general-purpose computing. Unikernels contain only the minimal set of (operating system) libraries needed to run an app. To create a unikernel, the libraries, the app and the configuration code are compiled together. These properties result in a complete system optimisation, which is also referred to as specialisation and has the advantage of a reduced attack surface. Unikernels can run directly on the hardware or on hypervisors.

Figure 3: Specialised unikernel
(Source: https://mirage.io/wiki/technical-background)

Figure 3 shows the reduced codebase of a specialised unikernel in contrast to the software stack of a common OS or a VM. Unikernels only have about 4% of the amount of code of a comparable OS, making the footprint and the attack surface very small. As mentioned above, a Lib OS has only one virtual address space. This means that only one address space is available per specialised unikernel and thus also per application, which isolates its process from other entities. As a result, malicious code cannot spread horizontally. Unikernels are also particularly fast: if a request is sent to a unikernel, the unikernel can boot and answer the request within the request timeout. In summary, unikernels fit a service-oriented or microservices software architecture. A Lib OS used for the creation of unikernels is, for example, MirageOS.


A container (sloppily formulated) packages everything needed to get a piece of software to run. Containers only include the libraries and settings necessary to run certain software (no full OSes, as opposed to VMs). Thus, containers are efficient, lightweight and self-contained (isolated) systems. The software they contain will run the same regardless of the operating system they are deployed on. The leading software for building containers is Docker.

Let’s now look more closely at the structure of containers.

Figure 4: Virtual machines vs. Docker containers
(Source: https://www.computerwoche.de/a/docker-in-a-nutshell,3218380)

Figure 4 shows virtualisation with a type II hypervisor and VMs on the left-hand side, and virtualisation with containers on the right-hand side. As we can see, different containers (App 1 and App 2 on the right) are isolated from each other, but identical apps share the same binaries or libraries, and different containers even build on the same operating system (the orange field on the right represents the Docker engine, which is the software for creating containers; the process is called containerisation). Hypervisor-based virtualisation (on the left), by contrast, creates separate guest OSes with binaries and libraries for each app. Thus, we can conclude that containers are, on the one hand, a slimmer alternative to virtualisation via VMs, but on the other hand they have a serious lack of isolation and thus a drawback in terms of security.

The kernel on which a container builds comprises control facilities for file systems, network, application processes and so on. If the kernel is compromised, this entire functionality can be exploited to access the containers that build on it, and to influence or control them. A hypervisor, on the other hand, offers much less exploitable functionality than a kernel. To counteract this problem, there is now a solution that unites containers and unikernels. It was introduced after Unikernel Systems (the company behind much of the open-source unikernel software) joined Docker, because both containers and unikernels pursue the same goals: isolation and specialisation.

Figure 5: Shared kernel vs. specialised kernels
(Source: https://www.youtube.com/watch?v=X1Lfox-T0rs)

Figure 5 shows a typical shared kernel with several containers running on the left-hand side. On the right-hand side, we see specialised kernels (green), on each of which a single container runs (the specialised kernels each represent unikernels). Thanks to the combination of unikernel and container, the desired isolation can now be achieved, and the additional specialisation offers the advantage of a reduced attack surface.

We could go deeper into the different virtualisation technologies and see in detail how the components are built up, connected and communicating with each other. But this would exceed the already very extensive scope of this post even further. So we will leave it at that for now.

That’s it for the first part of the two blog posts about cloud security. I would like to thank everyone who has made it to this point (that was surely not easy ;-P). Let’s quickly recap what we’ve learned in this post, and how it goes on in the next one. We looked at the latest metrics of the cloud market, and we saw that there is a need for security of cloud users, which should best be satisfied by the elimination of technical and human weaknesses. In addition, we learned the different concepts and technologies used in the cloud and have already drawn some conclusions on the security of the different technologies for server virtualisation.

In the next post, we will be able to look at the vulnerabilities and threats of cloud computing. Also, we will look at current scientific papers dealing with security of the cloud. To conclude, we will draw a résumé and venture out into the prospects of cloud security.

If you have any questions or comments about this post, please let me know by leaving a comment. I am really looking forward to your feedback! And I’m already looking forward to welcoming you to the next blog post, where it is time to go deeper into the security aspects of the cloud! So, stay tuned and see you soon!

Sources (Web)

All sources were last accessed on September 3, 2017.

VVS-Delay – AI in the Cloud


Howdy, Geeks! Ever frustrated by public transportation around Stuttgart?
Managed to get up early just to find out your train to university or work is delayed… again?
Yeah, we all know that! We wondered if we could get around this issue by connecting our alarm clock to some algorithms. So we would never ever have to get up too early again.

Well, okay, we’re not quite there yet. But we started with getting some data and made some hardly trustworthy prediction hypotheses on it. In the end it’s up to you whether you’re gonna believe them or not.

To give you a short overview, here are the components that are involved in the process. You will find the components described in more details below.
Process overview

A few parts in short:
1. crawler and database – get and store departure information
2. visualization – visualizes the delays on a map
3. statistical analysis – some statistical analysis on the delays over a week
4. continuous delivery – keep the production system up to date with the code


Ok, where to start? We need some highly available service to gather live data from the VVS API. VVS is the public transportation company in Stuttgart. Thankfully, their API serves information about delays.
Because we were able to work on this project in the class Software Development for Cloud Computing, we got access to the IBM Bluemix cloud platform. This platform offers the deployment of lots of services and applications. A Python app is one of them. So, because we love and speak Python, why not use it for an API data crawler.

Crawling the data

This is where we got into contact with a cloud service for the first time, and yes, for someone who is used to being root (having access to all and everything on a computer), this is going to be nasty. This is one of the things you have to learn quickly: adapt your needs to the given environment. The cloud tries to help you to just deploy and use things out of the box. But this “convenience” comes with a lot of constraints and preconditions.
In this case what we wanted to have was a standalone worker that simply crawls data.
The crawler is supposed to call the VVS API every 5 min to get departure information for every S-Bahn station in and around Stuttgart. The answer of the API contains information about the planned and estimated departure times and whether it is real-time controlled or not. “Real-time controlled” means that there is live information about the train, including its (possible) delay. On top of that it serves messages about some events that impact the schedule, as well as detailed information about the train and station.
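To give an impression, the polling just described could be sketched in Python roughly like this. Note that the endpoint URL, the station IDs and the response fields are hypothetical stand-ins – the real VVS API looks different:

```python
# Sketch of a 5-minute polling loop. The endpoint URL, station IDs and
# response fields are hypothetical stand-ins for the real VVS API.
import json
import time
import urllib.parse
import urllib.request

API_URL = "https://example.com/vvs/departures"  # placeholder, not the real endpoint
STATIONS = ["5006056", "5006118"]               # hypothetical station IDs

def extract_departure(raw):
    """Reduce one raw departure record to the fields we care about:
    planned time, estimated time and the real-time flag."""
    return {
        "planned": raw["planned"],
        "estimated": raw.get("estimated"),
        "realtime": raw.get("realtime", False),
    }

def fetch_departures(station_id):
    """Request departure info for one station from the (hypothetical) API."""
    url = API_URL + "?" + urllib.parse.urlencode({"station": station_id})
    with urllib.request.urlopen(url, timeout=10) as resp:
        payload = json.load(resp)
    return [extract_departure(d) for d in payload["departures"]]

def crawl_forever(interval=300):
    """Poll every 5 minutes; in the real crawler the results would be
    written to the database instead of being dropped."""
    while True:
        for station in STATIONS:
            try:
                fetch_departures(station)  # store(...) would go here
            except OSError:
                pass  # log and retry on the next cycle
        time.sleep(interval)
```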

The crawler was coded from scratch. It builds a framework for integration of different APIs and unit tests. Currently, our tests cover the basic functionality, database connection, read and write checks and some sanity checks as well.
A cool feature of the cloud platform that we used, is the ability to have a Continuous Integration Pipeline.
The service monitors our GIT repository and pulls the code on any change, rebuilds and restarts the crawler.

Before we could get all this up and running we had to get around an issue you might also run into.
The cloud assumed that we wanted to deploy a web app that somehow serves a web interface.
The special thing about the web interface is that it is used to perform a health check.
This means the cloud platform does an “app is alive” check for you, to restart it or inform you if it is not running anymore. So be aware how this check is done!
In our case it was like: no open web port, no passing health check. This caused the cloud to keep restarting our crawler, which just wanted to crawl data without serving any web interface for e.g. live progress tracking.

To get around this we had to tell the cloud that it shall use the process status as a health check. Sounds easy, but it might be tricky to figure out where this option can be set. For us this could be done within the manifest.yml file, which defines some properties for the cloud platform. Notice the health-check-type parameter.

 applications:
 - name: vvs-delay-crawler
   no-route: true
   memory: 128M
   health-check-type: process

Storing the data

Now that we’re able to retrieve data, what to do with it? Obviously it needs to be stored in some way, but: How? Quite a lot of time and thinking went into solving this question. Should we use a relational database or NoSQL? Should we store the data exactly in the form we get it or do we do some transformation beforehand? What data do we really need, what is redundant, what is superfluous?

Finally (or not so final, see later) we settled on the following:

NoSQL database

When we were considering a relational database, we actually could come up with data models that would have been able to depict our case. But: We just didn’t see how it would be efficient to first divide the single documents we get from the API into different relations, only to set them back together later on for further processing. We figured the SQL queries would become just too complicated – and unnecessary. Since we already have the data (kind of) in the form we’d like to use it – a JSON structure – we decided to stick to that.

Transform before storing

Saying that the data coming from the API already has the form we want it to have is not quite right. As can be seen in the screenshot below, it comes with a bunch of extra info that we are not interested in, or that we don’t know how to process with machine learning.

Screenshot of JSON data

As an example, take the detailed description of the station: redundant names, IDs, types and so on. We don’t need all that, just a single number to identify the station is sufficient. And the document in the picture is already significantly shortened; the actual API result was originally 668 lines long.

Aside from the redundant stuff, there is also information that might be interesting for our case, even if we don’t know how to deal with it yet. For instance, in the case of severe service disruptions, often a message for the customers is added. It contains information about alternative routes or, important to us, the reason for the delay or disruption. These kinds of information are quite difficult to include in the machine learning process, however, we are certain that they can play a role. So, to avoid the regret later on, we do store them, just in case.

However, to use our storage space most effectively, we don’t just strip off the unneeded parts, but do a little more transformation before writing to the database. Especially in the first version, our data model was designed to need as little space as possible. For this purpose we did not use a proper “key”: “value” approach, but rather put the value directly into the key position, so to say.

Unfortunately we had to find out the hard way how impractical this is. The issue: Dealing with “unknown keys”. Well, we do know, say, the keys “S1” to “S60” for our respective lines, but it’s really not good style to use a switch statement or the like here. And when it comes to the IDs of the stations, such a workaround is almost impossible, given that there are 83 of them. Luckily, JavaScript and Python both offer functions to retrieve all keys of an object, so this wouldn’t be any problem here. Only the Cloudant query interface does not have such a functionality – at least we couldn’t find one. So it’s simply not possible to filter documents by nested values while ignoring the key in front. To give a practical example: You can’t retrieve only documents containing data of any S1-trains no matter at which station.

Further down the road we thus settled on another layout, “version 1.3”, which represents some kind of compromise. It still contains line numbers as keys, but everything else is properly structured following the “key”: “value” scheme. See below:

Screenshot of JSON data

Thanks to our versioning, however, we are still able to change this layout in the future and adapt all APIs and functions to deal with new versions.
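To illustrate the gist of that layout change, here is a hypothetical Python sketch (the field names and the old values-as-keys structure are simplified illustrations, not our real schema):

```python
# Hypothetical sketch of the "version 1.3" layout change: the old model
# put values (station IDs) directly into key position; the new one keeps
# the line numbers as keys, but everything below follows a proper
# "key": "value" scheme. All field names here are illustrative.

def to_v13(old_doc):
    """Convert {"S1": {"5006056": 3, ...}} style entries into
    {"S1": [{"station": ..., "delay": ...}, ...]}."""
    return {
        line: [
            {"station": station_id, "delay": delay}
            for station_id, delay in stations.items()
        ]
        for line, stations in old_doc.items()
    }
```

With the station ID in value position, a query can now filter on the "station" field without having to know every possible key in advance – exactly the problem described above.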

Accessing the data

When talking about “accessing” data we have to distinguish between
a) inspecting the database manually for development purposes, and
b) retrieving and processing data programmatically.

For a) the Cloudant DB offers a quite neat graphical web interface within IBM Bluemix. The user can click themselves through collections and documents, and even create, update or delete them. It includes the possibility to run queries, i.e. filters on the database that only show you matching documents. Sounds nice, but doesn’t work that well in practice. First of all, it takes a while to grow accustomed to the syntax, which is made harder by the (can you guess?) incomplete documentation. Secondly, we had some trouble with a missing filter for fields nested below “any key”, as mentioned in the section “Transform before storing”. Thirdly, if the amount of documents matching the query is very high, displaying the result either takes a loooot of time, or just doesn’t happen at all. And lastly, finding errors in your query is very hard, due to non-existent or incomprehensible error messages.

For b) there’s also a nice feature of IBM Bluemix which makes it easy to deal with permissions and all that jazz. If an app needs access to a database you can “connect” them via the web interface and the app is automatically authorized for CRUD operations (create, read, update, delete). No need to create new admin users for each app, to grant them the respective rights, copy-paste keys around etc. Well, actually you do have to create a set of credentials to be used, but that’s a one-click task. The best part is that there’s no need to maintain and get confused by various config files. The credentials are automatically stored in an environment variable “VCAP_SERVICES” of the connected app – and you can just lean back and watch cloud magic.
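For illustration, reading those auto-injected credentials in Python boils down to parsing a single environment variable. A minimal sketch – the service key “cloudantNoSQLDB” and the local fallback URL are assumptions that depend on the bound service:

```python
# Sketch of reading Cloud Foundry's auto-injected credentials. They are
# stored as JSON in the VCAP_SERVICES environment variable; the service
# key ("cloudantNoSQLDB") and the local fallback URL are assumptions.
import json
import os

def cloudant_url(vcap_json=None):
    """Return the Cloudant URL from VCAP_SERVICES, or a local
    fallback for development outside the cloud."""
    raw = vcap_json if vcap_json is not None else os.environ.get("VCAP_SERVICES", "")
    if not raw:
        return "http://localhost:5984"  # hypothetical local fallback
    services = json.loads(raw)
    return services["cloudantNoSQLDB"][0]["credentials"]["url"]
```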

In addition to that, you can download these credentials and access the database from outside the cloud. We have to say, programmatically this worked really well. Meaning, opening a connection from a running program. We were able to develop locally, working on the cloud database (not the production one, of course) without much hassle. A simple check whether the environment variable is set and that’s about it.
But – and there’s always a “but” – manually, i.e. with Postman or in the browser, this didn’t work out. Maybe we just haven’t tried hard enough, but when we tried to retrieve documents from Cloudant’s REST API following the instructions from the documentation, we had no luck.

For this and other reasons (as described in the “Visualization” section), and because we thought it’d be a good opportunity to try out the “API Connect” feature of Bluemix, we decided to build our own REST API service for our database.
Up until now, said API has remained quite basic, though it can easily be extended in the future. It merely offers the route “GET /entries”, which, well, returns entries (documents) of the database. Via the query parameters “startTime” and “endTime” (both UNIX timestamps in milliseconds), the user may delimit the requested timeframe. An additional parameter “filterEmpties” (boolean) does exactly that, and a last one, “transform” (also boolean), governs whether the data should be returned in its original form or should undergo further changes in structure first. These changes mainly include reordering the documents from “per timestamp” to “per station”, which is important for visualization, and adding some minimal statistical info about the requested dataset, which has already helped discover some oddities within the crawled data.
The API service was set up conceivably easily using a Node.js server and an API definition framework called “Swagger”. We didn’t get as far as integrating it with “API Connect”, however. Unfortunately, Bluemix and its buggy UI crossed our plans yet again and what seemed so simple in the docs just did. Not. Work. In reality. Rather than waste our time trying to find out why, we gave up in frustration after a few tries and settled for our own stand-alone API, which can be accessed here: [vvs-delay-db JSON API](https://vvs-delay-api.eu-de.mybluemix.net/).
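For illustration, a client call to the /entries route could look like this in Python (only the base URL comes from above; the helper names and the boolean encoding are our own sketch):

```python
# Sketch of a client for the GET /entries route described above. The base
# URL is the one from the text; the helper names and the exact parameter
# encoding are our own illustration, not a guaranteed contract.
import json
import urllib.parse
import urllib.request

BASE_URL = "https://vvs-delay-api.eu-de.mybluemix.net"

def build_query(start_ms, end_ms, filter_empties=True, transform=False):
    """Assemble the query parameters: UNIX timestamps in milliseconds
    plus the two boolean flags, encoded as lowercase strings."""
    return {
        "startTime": str(start_ms),
        "endTime": str(end_ms),
        "filterEmpties": str(filter_empties).lower(),
        "transform": str(transform).lower(),
    }

def get_entries(start_ms, end_ms, **flags):
    """Fetch entries for the given timeframe and parse the JSON body."""
    url = BASE_URL + "/entries?" + urllib.parse.urlencode(
        build_query(start_ms, end_ms, **flags))
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)
```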

Ideas to go further

Ok, let us think a bit about how this could develop. Unfortunately we could not get a machine learning algorithm running on the data. But it would be predestined for one. Why? Because it could learn in an online fashion: we have real-time data, and every 5 minutes a new training sample is generated by the crawler, so the learning algorithm could keep learning on and on. If you’re not too familiar with machine learning, don’t worry. Just keep in mind that awesome stuff can be done with it!

So why didn’t we implement it, if it’s that awesome? Machine learning needs a lot of resources during training. To keep costs low, a cloud service therefore normally offers just a few things that have to do with machine learning. This is a cool thing for any service which just wants to use a pre-trained model, like image classification, where a given image shall be tagged with the things that are visible in it. If you have a custom thing to train and still want to do it in the cloud, like we do, you might need a (virtual) server with root access. You could then set up your own environment and build everything you need on your own. But this might get way more expensive.


Before machine learning ruled prediction systems, statistical approaches were the means of choice. We were interested in how the delays are distributed over the week. So we started to aggregate the delays for each 5-minute interval of the week. By that, we mean an average delay value for each S-Bahn line for the time slots 00:00-00:05, …, 14:00-14:05, 14:05-14:10, …, 23:50-23:55. With a lightweight web API the system can be asked about the average delays, e.g. on Monday morning 08:10-08:20.
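The counting scheme can be sketched in a few lines of Python. A week has 7 × 24 × 12 = 2016 five-minute slots; the sample format below is hypothetical:

```python
# Sketch of the weekly 5-minute-slot averaging. A week has
# 7 * 24 * 12 = 2016 slots; the (weekday, hour, minute, delay)
# sample format is a hypothetical simplification of our data.
from collections import defaultdict

SLOTS_PER_DAY = 24 * 12  # one slot per 5 minutes

def slot_index(weekday, hour, minute):
    """Map a time-of-week (weekday 0..6) to its 5-minute slot (0..2015)."""
    return weekday * SLOTS_PER_DAY + hour * 12 + minute // 5

def average_delays(samples):
    """samples: iterable of (weekday, hour, minute, delay_minutes).
    Returns {slot_index: average_delay} for all observed slots."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for weekday, hour, minute, delay in samples:
        idx = slot_index(weekday, hour, minute)
        sums[idx] += delay
        counts[idx] += 1
    return {idx: sums[idx] / counts[idx] for idx in sums}
```

Answering “Monday morning 08:10-08:20” then means looking up the two slots covering that range, per line.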


With our end goal in mind – an app that predicts train delays for sleepy students and hurried business people – there’s one thing that’s still very obviously missing: an intuitive representation of the data. Bunches of JSON documents resting in a cloud database are unfortunately not enough. Also for us, during development, an easy-to-read overview of what the crawler is actually collecting would be great – even more so, as the Cloudant DB interface is annoyingly slow and the query options very limited.
What we would like to have is a nice visualization of our data. As long as we’re not able to make any predictions, at least we want to show past and current delays.

This sounds easier than it is. Because our data is so multi-dimensional, a simple scatter plot or pie chart won’t do. We have:
1. Spatial information (the VVS net map)
2. Categorial information (different S-Bahn lines)
3. Temporal information I (the “current” time, meaning the time a data point refers to/was collected)
4. Temporal information II (minutes of delay)

More technically, we have thousands of documents looking like this:

Screenshot of JSON data

How can we display these in an optically pleasing and easily understandable way? For us, it came down to a representation of the VVS net grid with an overlay of the delay data. To reduce the temporal dimension, we chose to show delay averages over an adjustable interval. A similar form can be used to show the predicted delays at some point in the future.

Screenshot of visualization

The technologies we used to implement this are HTML, SVG, CSS and JavaScript. In our opinion, a web user interface (UI) is the way that is most easily accessible to the public. And with the library d3.js at hand we had the perfect tool to create a dynamic SVG graphic. For those who don’t know it yet, go take a look at some of the stunning plots made with d3.js: d3 Gallery. Enough reason to try it out, right?

And we have to say, with all the (exceptionally good) documentation, examples and tutorials available on the interwebs, working with d3.js was a piece of cake. You do have to get used to the declarative coding style and understand d3.js’s data binding concept first, but after that, it’s easy.

One problem that we ran into, however, was, again, our data model. d3.js is basically made to work with arrays of data. Of course it can handle nested objects as well (it’s JavaScript, duh), but it’s a bit less straightforward.
So the struggle was to re-form our documents as they came from the API (see above) into something that could be rendered 1:1 into a graphic.
We achieved this by separating the static data (stations and lines) from the dynamic data (the delays).

The array which we feed into d3.js to draw the top layer of delay information looks like this:

Screenshot of JSON data

See how every data point contains the actual delay value? Many little transformation steps beforehand are necessary to reach this form, but it’s easier than using d3.js to iterate over the data and calculate it during rendering (if it’s even possible…). The “labels” resolve to coordinates stored in a separate JSON document. If you’re curious about the details, you can check out the code on GitHub.

A lot of time and thought also went into the design of the map, and into making it really dynamic and (somewhat) responsive. SVG is great, but you need to provide all the details, i.e. where and how things should be drawn. d3.js thankfully covers a lot of it: you can programmatically determine e.g. colors, sizes, even positions, dependent on the respective data. However, in our case, all of the 125 stations (counting stations serving multiple lines repeatedly) have unique positions on the grid, which cannot be calculated automatically.

To solve this, we didn’t puzzle all of them into the right position by hand, but used another little homemade JavaScript tool to retrieve the coordinates of all stations from the well-known VVS net map image. With the tool, you can click on one station after the other, enter its name, and the respective coordinates are stored automatically – absolute pixel positions as well as relative ones (for scaling). We realized that it’s not enough to draw stations with multiple lines as one point – the graphic just became confusing that way. What’s more, we also had to draw the lines connecting the stations, which we found to be kind of a hassle for various reasons. Looking around for alternatives, we found this graphic with a much simpler structure – and in SVG already! First we tried to parse its path data programmatically; we figured we could easily hijack the graphic and add our own elements in some way. Halfway down the road we realized that it did not quite match our present mapping from data points to SVG elements, so we went back to retrieving the coordinates manually with the mentioned tool. Finally, we could use the spatial data we got out of that round to render the net map including all 125 stations and 6 lines.

Now putting our delay data on top of that worked like a charm. We represent it with colored circles for each line at each station that grow in size and darken their shade relative to an increase of the average delay. White circles mean that there’s no data available – an issue that we are yet to resolve.
The graphic is interactive in that the viewer can choose the time interval to be displayed. Choosing a very short and very recent interval, you can get pretty close to a real-time rendering. Another limitation that we still need to fix is that it’s currently not possible to request data of an interval of more than 24 hours. There’s an incomprehensible problem with our JSON API service prohibiting this.

Other than that, the visualization service is open for use and ready to be extended for the depiction of future delay predictions.

Integration Pipeline/Deployment

When it comes to things you have to do over and over again, computer scientists are lazy. So they automate them. This is where the Integration Pipeline service comes in. It does several things for you:
1. Get code from the code source, e.g. GitHub
2. Edit the code if needed, e.g. config files
3. Compile code if needed
4. Deploy the application

Instead of getting the code to the servers manually, the service pulls it from a repository. The process is triggered as soon as new code is available, so once everything is set up, there is no need for a developer to do anything besides pushing the code.

We use two versions of the application: one in development state and one deploy or stable version. Normally, this would result in two different configuration files on the server. However, the cloud service can generate or adapt the config files for us for each version. So the development version uses the development database and the deploy version, you guessed it, the deploy database. Yeah, even more laziness!
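As a sketch of what such stage-dependent configuration boils down to (the URLs and the STAGE variable are purely illustrative, not our actual setup):

```java
public class StageConfig {

    // Pick the database by deployment stage (URLs are placeholders).
    public static String databaseUrl(String stage) {
        if ("production".equals(stage)) {
            return "https://db.example.com/deploy"; // stable/deploy database
        }
        return "https://db.example.com/dev"; // development database
    }

    public static void main(String[] args) {
        // In a pipeline, the stage would typically come from an environment variable.
        String stage = System.getenv().getOrDefault("STAGE", "development");
        System.out.println(databaseUrl(stage));
    }
}
```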

Sometimes it is necessary to compile code, i.e. translate program code into something a specific computer can understand. This depends mainly on the operating system and architecture of the target machine, which means the code has to be compiled on the cloud computers anyway – and every time new code is available. A perfect job for the Integration Pipeline.

Finally, we want the newest application version running. And that’s something the integration service can achieve, too. After everything is pulled, altered and compiled, the currently running application is shut down and the new one starts up. Awesome!

The abilities mentioned here are just a few. You can have multiple stages with different tasks getting executed. It is like your personal little robot, doing all the nasty stuff for you, once you have told it what is needed.

Lessons Learned

  • know your environment
  • debugging gets harder the farther away the system is (which is somewhat the case for a cloud platform)
  • use existing solutions if any are available – with individual code, you need individual platforms, and that is going to be expensive

What’s next?

We achieved some cool visualizations, and the database is already filled with a lot of data. The code used for this project is available on GitHub.

Furthermore, some machine learning would be nice and could increase the possible applications.

Sport data stream processing on IBM Bluemix: Real Time Stream Processing Basics

New data is created every second. On Google alone, humans perform 40,000 search queries every second. Forbes estimates that by 2020, 1.7 megabytes of new information will be created every second for every human on our planet.
Ultimately, it is about collecting and exchanging data, which can then be used in many different ways. Equipment fault monitoring, predictive maintenance, or real-time diagnostics are only a few of the possible scenarios. Dealing with all this information creates certain challenges – stream processing of huge amounts of data is among them.

With improving technology and the development of big scaling systems like IBM Bluemix, it is now not only possible to process business or IoT data; it is also interesting to analyze complex and large data sets such as sport studies. That’s the main idea of my application – collect data from a 24-hour swimming event and use real-time processed metrics to control the event and athlete flow.

This article explains how to integrate and use the IBM tools for stream processing. We explore IBM Message Hub (for collecting streams), the IBM Streaming Analytics service (for processing events) and the IBM Node.js runtime (for visualizing data).


In swimming, there is a competition called “24-hour swimming”. The goal is to swim the largest distance within 24 hours. Don’t worry! It is allowed to leave the pool whenever you like and take as many breaks as you want. In an earlier project, we developed a server/app combination to count the laps of each swimmer electronically, so you no longer need people sitting around the pool counting by hand with pencil and paper. But there is still a problem: each swimmer chooses the lane to swim on by himself, and most people consider themselves to be faster than they really are. So why not process the data each tap of the counting app produces to calculate average swimming times for each lane?

Below is the scheme of a stream processing flow that we will implement in this post.

The Event Producer – in production, the app – sends messages, which go to Message Hub. The IBM Streaming Analytics service picks them up from Message Hub, processes them and sends the calculated metrics to the Node.js app, which visualizes the data. In further development, Streaming Analytics could store the data in a Cloudant storage on which the Node.js app can perform lookups.

IBM Message Hub

The IBM Message Hub is a fully managed, cloud-based messaging service. It is built on the open source Big Data tool Apache Kafka and is available through the IBM Bluemix® platform. Each message which is sent to the Kafka cluster has a topic. Topics allow us to send various kinds of messages within our Kafka environment and address them, which is especially useful for setting up a microservice environment. In our case we just need one topic, because all the data the apps send should be processed.

The reason why we need a service like Message Hub lies in the policy rules for IBM Bluemix services. The Streaming Analytics service provides a function to receive messages directly, but these messages have to be sent from within the IBM Bluemix system. That’s why we take this detour through Message Hub.

Because Message Hub is based on Apache Kafka, all Kafka client libraries are fully compatible – for Android apps there is a Java library, and there are also clients for Node.js.
To set up and configure IBM Message Hub check out this sample.

IBM Streaming Analytics

Streaming Analytics is a fully managed service which allows us to build streaming applications with ease. The developer doesn’t have to worry about managing and configuring the infrastructure, as with an Apache Spark service – also offered on the IBM Bluemix platform – which has to be configured first. That is a big advantage of Streaming Analytics: developers can focus on building business logic and analytics. The service supports real-time analytics with extremely low latency and high performance. Whether your application serves a single device and data source or connects and monitors hundreds of thousands of devices, Streaming Analytics performs seamlessly and reliably.

Using the Streaming Analytics service is really simple – either interactively through the Streaming Analytics Console or programmatically through the Streaming Analytics REST API.

Through this service you can add a Streaming Analytics application, i.e. an instance of that application running in the IBM Bluemix cloud, scale the instances, check errors or get a visualization of the data flow graph.

IBM Streaming Analytics Application

When submitting a job to the IBM Streaming Analytics service, you are prompted to identify a Streams Application Bundle (.sab) to upload and submit. The Streaming Analytics service in Bluemix therefore requires you to develop your Streams application in another Streams environment, outside of Bluemix.

Besides its usage within Bluemix, IBM Streaming Analytics is also a stand-alone product for setting up a Streams environment on your own hardware. If you don’t already have a Streams environment where you can develop and test applications, you can develop locally using the Quick Start Edition, a virtual machine that provides a preconfigured Streams environment with development tools.

Developing these types of applications is easy and can be done in multiple ways. Streaming Analytics supports a Java Application API, which means any Java developer can crank out an application with ease. The same goes for Python developers.

But the easiest way is to develop applications with the IBM® Streams Processing Language (SPL). It is a language with Java-like syntax for describing data streams, with a lot of built-in operators.

To write a Streams application, you first need to understand the basic building blocks. A single block is an operator. An operator consists of input ports and output ports. An input port consumes a stream of continuous records; an operator can have one or more input ports. The operator processes the records and emits a new stream through its output port; an operator can also have one or more output ports.

A Streaming Application consists of a flow graph of operators. Each block in that graph takes on one small task, like preparing, filtering or aggregating records. A record is called a tuple. Each stream has a defined data structure, i.e. a defined tuple structure, and a stream consists of only one type of tuple.
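The building blocks described above can be sketched in plain Java (a toy model of operators, ports and subscriptions – not SPL or the actual Streams API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class OperatorSketch {

    // An operator consumes tuples of type I on its input port and
    // forwards tuples of type O to subscribed downstream operators.
    public interface Operator<I, O> {
        void process(I tuple);               // input port
        void subscribe(Operator<O, ?> next); // wire the output port
    }

    // A map-style operator applying a function to every tuple.
    public static class MapOperator<I, O> implements Operator<I, O> {
        private final Function<I, O> fn;
        private final List<Operator<O, ?>> downstream = new ArrayList<>();

        public MapOperator(Function<I, O> fn) { this.fn = fn; }

        @Override
        public void process(I tuple) {
            O out = fn.apply(tuple);
            for (Operator<O, ?> op : downstream) {
                op.process(out);
            }
        }

        @Override
        public void subscribe(Operator<O, ?> next) { downstream.add(next); }
    }

    public static final List<String> sink = new ArrayList<>();

    public static void main(String[] args) {
        MapOperator<String, Integer> parse = new MapOperator<>(Integer::parseInt);
        MapOperator<Integer, String> format = new MapOperator<>(n -> "lane " + n);
        parse.subscribe(format); // build the flow graph: parse -> format -> sink
        format.subscribe(new Operator<String, Void>() {
            @Override public void process(String tuple) { sink.add(tuple); }
            @Override public void subscribe(Operator<Void, ?> next) { /* terminal */ }
        });
        parse.process("3"); // a tuple flows through both operators into the sink
    }
}
```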

The InfoSphere Streams Studio supports an interactive as well as a programmatic way of building your SPL applications. Below we see our application for processing the swimming data.

First we read our data from a Kafka consumer, i.e. from Message Hub. After that I prepared the tuples into the right format, so one tuple consists of an ID, the name of the swimmer, the lane he is swimming on, the current timestamp and the time elapsed since the last count. In code it looks like this:

At the top we see the schema of our tuples. The Composite is a wrapper for one graph, which is declared here with graph. Beginning with “stream”, we see our first block reading from Kafka/Message Hub. The second block is the converting block, which converts the message from Kafka to our schema.
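A plain-Java sketch of the tuple schema just described (field names are illustrative; the real schema is defined in SPL):

```java
public class SwimTuple {
    public final long id;              // unique tap ID
    public final String swimmer;       // name of the swimmer
    public final int lane;             // lane the swimmer is on
    public final long timestampMillis; // time of the tap
    public final long deltaMillis;     // time elapsed since the previous count

    public SwimTuple(long id, String swimmer, int lane, long timestampMillis, long deltaMillis) {
        this.id = id;
        this.swimmer = swimmer;
        this.lane = lane;
        this.timestampMillis = timestampMillis;
        this.deltaMillis = deltaMillis;
    }
}
```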

After this preparation I did some filtering to remove implausible tuples: for example, if the time between two tuples is under 21 seconds – the world record for 50 m swimming – the second tuple is removed from the stream, because something can’t be right. The same procedure is applied to duplicates within the stream.
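The plausibility filter can be illustrated with a few lines of Java (a sketch of the rule only, not the actual SPL operator):

```java
import java.util.List;
import java.util.stream.Collectors;

public class LapFilter {

    // The 50 m world record is roughly 21 s; anything faster must be a mis-tap.
    public static final long MIN_LAP_MILLIS = 21_000;

    public static List<Long> plausible(List<Long> lapTimesMillis) {
        return lapTimesMillis.stream()
                .filter(t -> t >= MIN_LAP_MILLIS) // drop impossible lap times
                .collect(Collectors.toList());
    }
}
```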

After that our stream splits up into one block counting the total amount of meters, one for the total time average per lane, and, behind another filter, the calculation of the averages for each lane. At the end the calculated metrics are converted back to a JSON string and sent via HTTP to our Node.js application.
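The per-lane averaging step could be sketched like this in Java (again only an illustration of the aggregation, not the SPL code):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class LaneAverages {

    public static class Lap {
        public final int lane;
        public final long millis;
        public Lap(int lane, long millis) { this.lane = lane; this.millis = millis; }
    }

    // Average lap time grouped by lane.
    public static Map<Integer, Double> averagePerLane(List<Lap> laps) {
        return laps.stream().collect(
                Collectors.groupingBy(l -> l.lane,
                        Collectors.averagingLong(l -> l.millis)));
    }

    public static void main(String[] args) {
        List<Lap> laps = Arrays.asList(new Lap(1, 40_000), new Lap(1, 50_000), new Lap(2, 60_000));
        System.out.println(averagePerLane(laps)); // {1=45000.0, 2=60000.0}
    }
}
```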

While writing a Streaming Application it is highly recommended to follow design patterns for processing the data, to keep your application as performant as possible. The following diagram shows a good approach for developing such stream applications.

Node.JS SDK for Bluemix

The Node.js app is used to visualize the data we calculated, based on a simple Node.js environment for Bluemix. The app also controls our Streaming Analytics application through the REST interface of Streaming Analytics for Bluemix, so we did not have to upload the Streaming Application manually. The Streaming Analytics service is connected to the Node.js app, so we can share credentials and also send the data from the analytics application via HTTP. For easily creating charts I used Chart.js.

Final Thought

Streaming Analytics is a fascinating part of what systems like Microsoft Azure or IBM Bluemix offer. It is not only the idea of handling that mass of data that makes it fascinating – it is, even more, the ability to process it in real time.

Creating this little project showed me that the technology is available to everyone, but only a few out there are using it. There are so many use cases, like real-time suggestions while online shopping, better insights into the stock market or faster processing of medical tests like a DNA analysis. But I also realized that it is not that easy to find metrics which are of interest – and in my opinion that is what distinguishes a good Big Data analyst: knowing which metrics are interesting for the client, which could become interesting, and which new key metrics can be calculated.

Altogether it was interesting getting in touch with this technology, and I hope I could share some useful information for your own start with Streaming Analytics.

Here are some links which helped me getting started about SPL (https://www.ibm.com/support/knowledgecenter/SSCRJU_4.2.0/com.ibm.streams.dev.doc/doc/dev-container.html) or creating applications (https://developer.ibm.com/streamsdev/docs/bluemix-streaming-analytics-starter-application/).

Wettersave – Realizing Weather Forecasts with Machine Learning


Since the internet boom a few years ago, companies have started to collect and save data in an almost aggressive way. But huge amounts of data are actually useless if they are not used to gain new information with a higher value. I was always impressed by how easily statistical algorithms can answer extremely complex questions if you add the component “Machine Learning”. Our goal was to create a web service that does exactly that: we realized weather forecasts using machine learning algorithms.

The Application can be split in four parts:

  • The website is the final user interface to start a query and to see the resulting prediction.
  • The server connects all parts and hosts the website.
  • The database stores all important data about the weather.
  • IBM Watson is used to calculate the forecasts with the data of the database.

In the following I will explain the structure in more detail and show how we developed the application.


The Database

First, we needed the right weather data to start with. The idea was to connect our application with the weather API hosted on IBM Bluemix. Unfortunately, that did not work out: first, the data we found there was useless for our intended predictions, and second, the service was too expensive. So what we did instead was take the free weather data provided by the DWD (Deutscher Wetterdienst) and save it in our own database hosted on Bluemix. The data includes all important information from 12/23/2015 until 6/20/2017, so only predictions in this range are logically possible and comparable to the actual values.

Machine Learning with IBM Watson

We created the prediction models with the SPSS Modeler by IBM. These so-called streams can be uploaded directly to the Watson Machine Learning service. To create a stream based on our data, we first had to connect the database used to train the model with the SPSS Modeler. The next step was to filter the data and leave out all information that was irrelevant for creating the prediction model, for example the ID of each record, as it has no influence on the predicted values. The records contained in the database are adapted to the model that predicts the weather for a chosen day using the information of the previous day, which is why each record contains all measurements of one day and the measurements of the following one. Then all inputs and the related outputs – the results that will be predicted – are marked in the “type” field operation. The measurements not used in this prediction model are marked with the role “none” to exclude them from the following computation.

On the following picture you can see the filter operation (left) and the type operation (right):

The data was now ready for modelling, and the auto numeric operation of the SPSS Modeler could calculate the models. The auto numeric operation calculates different kinds of model types and chooses the one with the best results. For the prediction based on one previous day, a neural network was determined to be the model with the best results.

The figure below shows the features with their weights, which represent their importance for the prediction. You can see that the mean and maximum temperature of the previous day have the biggest impact on the result.

The calculated neural network has one hidden layer with six neurons and is shown below:


To produce the models which use two or four previous days to predict the next day, the datasets had to be adjusted within the SPSS Modeler: every data set must contain the measurements of the two or four previous days as well as the data of the day to be predicted. For that we had to use operations for selecting, filtering, joining and ordering data. The following picture shows the stream which uses four previous days for calculating the model.



After executing the stream, a model is created (the yellow diamond on the right side) that can be used in another stream, which computes the resulting prediction and is uploaded to Bluemix.
We noticed that including the data of two or four previous days as input to the prediction model, instead of only one day, produces only minimally better results (1 day: 92.4%, 2 days: 92.9%, 4 days: 92.8%).

Server and website

After uploading the streams to Watson on Bluemix, we connected all parts with a Node.js server and created the website. As this was only a side aspect of the project, I will not describe this process in more detail.

The Result

You can test the web service at wettersave.mybluemix.net. You will see that only dates from 12/23/2015 until 6/20/2017 can be predicted, as described in the paragraph about the database.

If you are interested in the code, you can find it at the following GitLab repository:


Developing a Chat Server and Client in the Cloud


During the Lecture “Software Development for Cloud Computing” I decided to develop a Cloud based Chat Application with the help of IBM’s Bluemix.
The Application consists of 3 separate Applications:

  • Chat Server: Allows Clients to connect to it, manages the Chat-Channels/Users and relays messages sent from a client to the other clients in the same channel.
  • Chat Client: The Client consists of a GUI where the User can connect to the Server and chat with other Users.
  • Chat Backend Database: A simple Database which records and provides the chat history of a given Chat-Channel via REST.

The following Image describes the connections/interactions between the different applications. These will be described in more detail later.

The motivation behind this was that I had always wanted to build a traditional client-server application, and this project was a good reason to do so.
Developing everything for the cloud was an additional challenge, since I had some experience with cloud-based applications but had never really gotten into it.

In the following paragraphs I will explain how these services were developed, what problems arose and how IBM’s Bluemix performed.
(You can try out the client right here.
It should be working for at least another month from the time of publishing; after that my trial runs out and I don’t know what will happen.)


Used Tools

All 3 applications were written in Java, since I have the most experience with it and already knew some libraries I was going to use.
As an IDE I used Eclipse.
As a build tool I chose Gradle, since it is a lot more flexible due to its programmable nature, and since Groovy is very close to Java it’s quite easy to work with.
To check for additional flaws or errors I used SonarQube for code analysis, which also has an Eclipse plugin (SonarLint).
I chose Git as the version control system out of personal preference and experience.
The Continuous Delivery service of Bluemix supports GitHub repositories, which is an additional plus.

First Steps

When I first started developing on this project, I began with the most important part: exchanging messages in a server-client structure.
Naively I tried to use standard Java sockets (java.net.Socket), which quickly failed for one reason:
When developing an application in Bluemix, usually only one single port is open for use, which is mapped to the URL of your application (in my case https://studychatclient.bluemix.net etc.).
In my case the port defaulted to 8080, but others are possible.
Since each Java socket connection needs its own port and I wanted/needed multiple connections, that option was quickly ruled out.
After that I tried to solve the issue with REST calls, which worked okay at the time but obviously was not fast enough.
Only after a few weeks did I find the perfect(?) solution for my problem: WebSockets.
WebSockets are based on TCP and work by sending an Upgrade request on an existing HTTP connection.
When this is successful, we have a full-duplex connection over an already used port, and this isn’t limited to only one connection.
After this revelation I quickly searched for a fitting library and found org:java-websocket:Java-WebSocket (https://github.com/TooTallNate/Java-WebSocket), which provides two classes, WebSocketServer and WebSocketClient, that can easily be extended.
An empty WebSocketServer subclass looks something like this:

public class Server extends WebSocketServer {
    public Server(final InetSocketAddress address) {
        super(address);
    }

    @Override
    public void onOpen(final WebSocket conn, final ClientHandshake handshake) { }

    @Override
    public void onClose(final WebSocket conn, final int code, final String reason, final boolean remote) { }

    @Override
    public void onMessage(final WebSocket conn, final String message) { }

    @Override
    public void onError(final WebSocket conn, final Exception ex) { }
}

The Client looks the same except for the constructor, which needs a URI instead of an InetSocketAddress.
For actually connecting to the server there are the “connect()” and “connectBlocking()” methods, the first one running in the background.
Additionally, the WebSocket objects received as parameters in these methods can be used to send messages, either as a String, byte array or ByteBuffer.
With this solution it is very easy to send messages between server and client, but for a chat application a few more things needed to be done.

Graphical User Interface

A chat application needs a GUI to be used efficiently; chatting via a command-line interface would be quite bad and very limiting.
I started looking into an HTML + JavaScript combination but quickly stopped due to my aversion to and limited experience with JavaScript.
But then I remembered Vaadin, which makes it possible to build web UIs in Java.
You can try out their demos here.
It’s based on the “javax.servlet.annotation.WebServlet” interface and provides functionality similar to the JavaFX framework, which I was already familiar with.
I had quite a few problems setting it up and running it correctly, but after many tries I settled on this:

public class ChatUI extends UI {
    @WebServlet(name = "ChatUIServlet", asyncSupported = true)
    @VaadinServletConfiguration(ui = ChatUI.class, productionMode = false)
    public static class ChatUIServlet extends VaadinServlet {
        private static final long serialVersionUID = -6216866496615055637L;
    }

    private static final Logger LOGGER = LoggerFactory.getLogger(ChatUI.class);

    private static final long serialVersionUID = 903938514945760669L;

    @Override
    protected void init(final VaadinRequest request) {
        final ChatView chatView = new ChatView();
        this.setContent(chatView);

        this.addDetachListener(event -> {
            ChatUI.LOGGER.info("Closing View!");
            if (chatView.getClient() != null) {
                chatView.getClient().close(); // disconnect the WebSocket client
            }
        });
    }
}

The UI class, which I extended here, represents the top most component in the Vaadin component hierarchy.
For every instance of the Chat Client opened in a browser, a ChatUI object is created.
As its content I created a ChatView, as you can see in the “init” method, which contains all the buttons, text fields and the like (usually in further subclasses).
There exists a graphical designer for building a Vaadin UI in a drag-and-drop fashion, but it isn’t available for free, so I had to do it all in code – unfortunate, but not the end of the world.


The Server has effectively two responsibilities:

  • Managing users
  • Managing channels

Managing channels

Managing Channels contains relaying messages from one user to the rest of the channel and handling joining/leaving of a user.

Channel registry

I had the idea of a single channel registry which controls/manages all channels.
Originally I wanted to add the ability to create custom channels but left it out for time reasons, so the channel registry is a pretty lightweight class.
It has a list containing all channels – in this case 6 fixed ones – and allows accessing them by name or sending a list of all channels to a user.
I made this class a singleton, since there should only ever be one instance of it.
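A minimal sketch of such a singleton registry could look like this (channel names and the internal map are illustrative, not the actual implementation):

```java
import java.util.Collection;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class ChannelRegistry {

    private static final ChannelRegistry INSTANCE = new ChannelRegistry();

    // name -> channel; channel objects reduced to plain names for this sketch
    private final Map<String, String> channels = new LinkedHashMap<>();

    private ChannelRegistry() {
        // six fixed channels, as described in the text (names illustrative)
        for (int i = 1; i <= 6; i++) {
            channels.put("Channel " + i, "Channel " + i);
        }
    }

    public static ChannelRegistry getInstance() { return INSTANCE; }

    public String byName(String name) { return channels.get(name); }

    public Collection<String> allChannels() {
        return Collections.unmodifiableCollection(channels.values());
    }
}
```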

    public boolean userJoin(final RemoteUser user) {
        LOGGER.debug("User {} wants to join Channel {}", user.getName(), this.getName());
        if (this.userList.size() < this.maxUsers && !this.userList.contains(user)) {
            this.userList.add(user);
            this.sendMessageToChannel(user, MessageType.CHANNEL_USER_CHANGE);
            return true;
        }
        return false;
    }

When a user wants to join a channel, the channel first checks if the user can join and, if so, notifies all other users that the user list has changed.
After that, the chat history of that channel is retrieved and sent to the new user’s client for display.

    public boolean userExit(final RemoteUser user) {
        LOGGER.debug("User {} wants to exit Channel {}", user.getName(), this.getName());
        final boolean success = this.userList.remove(user);
        this.sendMessageToChannel(user, MessageType.CHANNEL_USER_CHANGE);
        return success;
    }

A user leaving is almost the opposite: he/she is removed from the list of users and another notification is sent, so everyone else has the correct user list.
The “sendHistoryToUser” method resolves to a relatively simple REST call which retrieves the chat history and sends it to the user as a message.

Relaying messages
    public void sendMessageToChannel(final RemoteUser sender, final Message message) {
        final String messageType = message.getType();
        LOGGER.debug("Sending message to channel {} with type {}", this.getName(), messageType);
        Message msg = null;
        switch (messageType) {
        case MessageType.CHANNEL_USER_CHANGE:
            msg = MessageBuilder.buildUserChangeMessage(this.userList, this);
            break;
        case MessageType.CHANNEL_MESSAGE:
            msg = MessageBuilder.buildMessagePropagateAnswer(message.getMessage(), sender.getName());
            break;
        default:
            msg = new Message("{}");
            LOGGER.error("Tried to send message to channel with unknown type: {}", messageType);
        }
        for (final RemoteUser user : this.userList) {
            user.send(msg); // relay the message to every joined user
        }
        LOGGER.debug("Message sent to channel {} with {} users, Content: {}", this.name, this.userList.size(), message.toJson());
    }

Relaying messages is pretty much a simple loop iterating over all joined users and sending them the given message.
If it is of type “CHANNEL_USER_CHANGE”, it sends the whole list of currently joined users; if it is just a “CHANNEL_MESSAGE”, it sends the chat message around and uploads it into the chat history.

Managing users

User registry

As with the channels I created a user registry, managing all users.
When a user wants to join the server, he/she has to send the username they want to use while chatting.
Since every username should be unique to avoid confusion, already used usernames are rejected.
Additionally, I assigned every user a unique and positive ID of the “long” datatype to make identification easy.
From the name and ID I created user objects, which are stored in a map with the ID as key and the user object as value.
Then of course I created methods for accessing and removing users.
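Put together, the registry could be sketched like this (method names are illustrative; the user object is reduced to a plain name):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class UserRegistry {

    private final AtomicLong nextId = new AtomicLong(1); // IDs stay unique and positive
    private final Map<Long, String> usersById = new ConcurrentHashMap<>();

    // Returns the new user's ID, or -1 if the name is already taken.
    public long join(String name) {
        if (usersById.containsValue(name)) {
            return -1;
        }
        long id = nextId.getAndIncrement();
        usersById.put(id, name);
        return id;
    }

    public String byId(long id) { return usersById.get(id); }

    public void remove(long id) { usersById.remove(id); }
}
```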


The client has a relatively simple GUI and, as written above, was built with the Vaadin framework.
I won’t go into too much detail of how the GUI itself was built, it’s mainly sticking pre-built blocks together.

Vaadin has several containers such as “GridLayout”, “VerticalLayout” or similar.
Those containers can be filled with controls such as text fields for inputting text, buttons for pressing, so-called “ListSelects” for displaying a list, and many more.
When you look at the screenshot above you can see many of them in action.
I used a GridLayout as the base and then tried to group the different parts together in sub-layouts, i.e. the user list on the right is a VerticalLayout containing a Label for the title and a ListSelect for the user list.
This splitting into parts helps a lot, since they can be worked on separately.
The more interesting part is the WebSocketClient although it’s very similar to the WebSocketServer.

Detecting disconnected clients

One interesting problem with Vaadin is detecting when a user/client disconnects under abnormal circumstances.
When the user disconnects via the WebSocketClient, all is fine and good – but how do you detect it if they close the tab or even the browser?
The answer is unfortunately not easy.
Vaadin provides several ways to detect this, but I haven’t found a reliable solution.
You can add a “DetachListener” to the client, but I found this to be called only around 30% of the time when closing a tab or window.
I then built in a custom “heartbeat”: basically a periodically updated timestamp for each user on the server.
Each period, a message is sent to the respective client; if the client fails to answer it, the server disconnects him.
But since closing isn’t detected very reliably, more often than not there would be ghost users – sometimes I would find myself with 6 other users while testing, all of them created by me earlier.
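The heartbeat idea boils down to a last-seen timestamp per user and a periodic eviction check, roughly like this sketch (timeout value and method names are illustrative):

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class HeartbeatMonitor {

    private final long timeoutMillis;
    private final Map<Long, Long> lastSeen = new HashMap<>(); // userId -> last heartbeat

    public HeartbeatMonitor(long timeoutMillis) { this.timeoutMillis = timeoutMillis; }

    // Called whenever a client answers the periodic heartbeat message.
    public void beat(long userId, long nowMillis) { lastSeen.put(userId, nowMillis); }

    // Evict users whose last heartbeat is older than the timeout; returns how many.
    public int evictStale(long nowMillis) {
        int removed = 0;
        for (Iterator<Map.Entry<Long, Long>> it = lastSeen.entrySet().iterator(); it.hasNext();) {
            if (nowMillis - it.next().getValue() > timeoutMillis) {
                it.remove();
                removed++;
            }
        }
        return removed;
    }
}
```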

Chat History Database

For the chat history service I chose to make it a REST based service with a NoSQL database attached.
Mainly because I already knew how to work with REST APIs but had never used a NoSQL database before.
This service is also somewhat detached from the server and client, so if it fails at any point, the rest can still work.
IBM Bluemix provides such a database, namely the Cloudant NoSQL database, which I promptly connected with my application.
This gives access to the database authentication via the environment while running in the cloud.
Cloudant also provides a nice library to use with this database.

I used Spring Boot as the framework, with a single controller containing 3 methods:

@RestController
public class ResourceController {
    // (mapping paths shown here are illustrative)
    @PostMapping("/history/{channelName}")
    public String addMessage(@RequestBody final String input, @PathVariable final String channelName) {
        final JsonObject jo = MessageDatabase.addChannelMessageToDB(channelName, input);
        return jo.toString();
    }

    @GetMapping("/history/{channelName}")
    public String getChannelMessages(@PathVariable final String channelName) {
        return MessageDatabase.getMessageFromDB(channelName, new MessageList().getId());
    }

    @PostMapping("/history/{channelName}/removeall")
    public void removeDB(@PathVariable final String channelName) {
        // development helper: drops the channel's history document
    }
}

The methods are pretty self-explanatory, except for the removeall POST mapping, which is only there for development purposes.

The database has a very simple structure:
It is split into 6 documents, matching the 6 chat channels.
Each of those documents contains the list of all messages sent to that channel, together with the username of the sender.
These can then be requested per channel and are sent as a JSON message containing the array of messages.

Custom Message protocol:

For the Server and Client to reliably exchange messages I created custom, standardized JSON messages.
Their base structure looks like this:

    "version" : <versionNumber>,
    "type" : <messageType>,
    "content" : {
        <additional Content, depending on messageType>


The “version” field contains the version of the chat protocol and serves to differentiate protocol versions, should there ever be a conflict.
The “type” field contains the type of the message, one of the following:

  • USER_JOIN: A new Client wants to join the Server (not a Channel)
  • USER_HEARTBEAT: Periodic message to check if the client is still running
  • CHANNEL_MESSAGE: A normal chat message to a channel
  • CHANNEL_JOIN: A Client connected to the Server wants to join a Chat Channel
  • ACK_CHANNEL_JOIN: Response to the Client for a successful Channel join
  • CHANNEL_USER_CHANGE: Sent to all Clients of a Channel whenever the user list of that channel changes; contains the list of clients connected to the channel
  • CHANNEL_CHANGE: Sends the currently available channels to a freshly connected client
  • CHANNEL_HISTORY: Contains the chat history of a channel, which is sent to a client on a channel join

This would be a message sent from the Client to the Server, containing a chat “message” from a Client with a given “userID”.
With this ID the Server can find the user and the channel he/she is in, and forward the message to the rest of the users of that channel.

    {
        "version" : 1,
        "type" : "CHANNEL_MESSAGE",
        "content" : {
            "userID" : 1234567890,
            "message" : "Hello World!"
        }
    }
IBM Bluemix

Right from the start of the project I wanted to use a continuous delivery pipeline to make developing and testing much easier and more comfortable.
Luckily, Bluemix has built-in support for this with a (mostly) easy setup:

When creating a toolchain, Bluemix provides several pre-built templates to choose from. In this case the “Simple Cloud Foundry Toolchain” was sufficient.

This toolchain is based around a GitHub repository and even provides a web-based IDE.
The toolchain for the Server (and the other two apps) looked like this in the end.
I configured the toolchain with the GitHub repository, which enabled me to use it as an input in the build steps.
I used the two pre-configured build stages: “Build” and “Deploy”.
In Build the unit tests are run and the application is assembled, while in Deploy the application is pushed to the cloud via the Cloud Foundry CLI.
Each stage consists of one or more jobs. In case of the Build stage there was a “Test” job and a “Build” job. As the names imply, the Test job ran the unit tests and the Build job assembled the application, all via Gradle tasks.
For the unit tests, Bluemix can display the results so you can see where something went wrong.
In the Input tab I configured the stage to use the Git repository as an input and, more importantly, set it to run whenever something is pushed to the repository on GitHub.

The Deploy stage is very simple: it configures the different Cloud Foundry variables (organization, space, application name, etc.) and executes the command “cf push $CF_APP”. Additional configuration was stored per project in the “manifest.yml”.

When the jobs of a stage have finished, the next stage begins and runs its jobs, and so forth.
This whole process made testing the application very easy, since all I needed to do was develop something and push the changes, and after 1 or 2 minutes I could test them live in the cloud.
But not everything was great. The biggest problem was configuring a recent version of the JDK. As of this writing, the standard version configured is the IBM JDK version 7, and I did not notice this until I tried to build an early version with Java 8 lambdas. Then began the search for how to change to a version 8 JDK, which took quite a while. I found the answer first on StackOverflow and later also in the official documentation: change the “JAVA_HOME” environment variable to $HOME/java8 (“export JAVA_HOME=$HOME/java8”).
Other problems were with the UI of Bluemix itself, but these were minor and some of them were solved over time.


Github Repositories

Here is the list of GitHub repositories used for this project.
Code quality and documentation could use some improvement, but these repositories were not intended for further development.
Still, if there are questions about the code, the best option would be to create an issue on Github.

Moodkoala – An intelligent Social Media application

Welcome to our blog post ‘Moodkoala – An intelligent Social Media application’. The following provides an overview of our contents.


       – The idea behind Moodkoala
       – Technologies overview
       – Frontend and Backend
       – Bluemix Services
       – Liberty for Java
       – Natural Language Processing
       – Tone Analyzer
       – Language Translator
       – Cloudant
       – Mood analysis with IBM Watson
       – Mood Analysis with IBM Watson Tone Analyser
       – The Mood Analysis Algorithm
       – Embedding the Tone Analyzer into the Java EE application
       – Filtering Hate comments using Natural Language Understanding
       – Natural Language Understanding
       – Summing up text analysis
       – Google and Facebook API
       – Mood imaging analysis
       – Implementing the mood analysis algorithm
       – Deserializing JSON Strings
       – Implementing the Natural Language Processing API
       – Implementing hate comment filter into the Java EE application
       – Set up Google Sign-in
       – Set up Facebook login
       – Mood imaging implementation
       – Configuration
       – Docker and IBM Bluemix
       – Gitlab CI
       – Jenkins
Discussion and conclusion
       – Discussion Moodkoala and 12 factor app
       – Comparison to other cloud providers
       – Conclusion


Social media is becoming more and more important these days. Its use among the average American adult has risen more than sixfold, which means a lot more content is being published on social media. Because of this increased use, content from social media has a high influence on people, so it is important to organize and filter this information. This is done to some extent today, but mostly by humans.

We wanted to develop an intelligent social media app (name: Moodkoala) that can organize this large amount of content and is adaptable to the user’s mood.
With the increased use of cloud computing ‘as a service’, we decided to use IBM Watson’s cloud services to analyse text and rely on additional third party services for image analysis and image storage.

Our goal was to see what was possible with our automated social media content organization and which cloud service providers provide the best solutions.

The idea behind Moodkoala

The idea behind Moodkoala is to create a web app which reflects the emotions of a person. Every logged-in user is able to create posts. The posts are analyzed and tagged with a corresponding sentiment. Users can view the posts sorted by their emotion, and they can comment on these posts in the same manner. Filtering posts or comments for a desired mood is also possible.

The main idea is to reflect the user's moods and give the user the possibility to filter based on moods. On many websites such as Facebook and YouTube, people often write hateful messages and offend other people. Insults occur frequently and are only deleted when they are detected manually. We wanted to use these services to delete hate comments automatically. Users can also track their emotions on their profile: a statistic visualizes their activity and which emotions are most prevalent.

In addition to posting messages and interacting with other people, a user can get a custom Spotify playlist based on their mood. We get this mood from a photo the user takes. This image is analyzed by an image analysis service and the sentiment is extracted from the result.

Technologies overview

To implement the app, we opted for a web app so that any user can use the software regardless of their operating system.
The project was managed with Git as the version control system and built with many different frameworks, technologies and tools.

A small impression of the used technologies and services is displayed below:

        - IBM Bluemix Cloud Service
        - Java EE (JSF and EJB)
        - PrimeFaces (Mobile)
        - CouchDB as Database
        - Tone Analyzer
        - Natural Language Understanding
        - Language Translator
        - Google API
        - Facebook API
        - Microsoft Emotion API
        - Spotify API
        - Minio API
        - Cloudinary API
        - Musicovery API
        - Docker
        - Jenkins


In this section the technologies we used are presented and described in detail.

Frontend and Backend

Our backend is based on two technologies: JavaEE as the framework and the Enterprise Java Beans (EJB), as a so-called component technology. These technologies can be easily integrated into the Bluemix cloud services.

On our frontend, we chose to use JavaServerFaces 2.2, a component based frontend framework. We used PrimeFaces and PrimeFaces Mobile as component libraries.

The downside of this approach was that we were stuck with the styling provided by PrimeFaces. There are different themes (CSS files) available, but you have to pay for premium-looking ones. You can create your own themes, but this proved to be difficult, because we had to override the PrimeFaces styles. That's why we chose to stick with the default PrimeFaces styling.

We used IBM’s Websphere Liberty application server for local development. This server was recommended by IBM as being optimal for developing applications for their cloud. The related or subsequent deployment to the cloud was carried out with the provided IBM Bluemix tools.

Bluemix Services

For the implementation of our idea, we incorporated various services provided by Bluemix in our web app. For this purpose, we added these to our project using Maven.

<!-- https://mvnrepository.com/artifact/javax.validation/validation-api -->

In the following, the services we use are described in more detail and the respective advantages and disadvantages are shown.

Liberty for Java

Liberty for Java is a highly customizable, fast, and very lean WebSphere Application Server profile. It was especially designed for Bluemix cloud applications. The deployment of the code (using the Bluemix Tools) to the server was problem-free in most cases, but there were occasional internal server errors (AppClassLoader Error). These were caused by a faulty rendering of the PrimeFacesMobile components and could be repaired by restarting the server.

Natural Language Processing

Natural Language Understanding allows the user to perform a semantic analysis of texts. Keywords, emotions and relationships in a text are determined. In terms of emotions, it is possible to analyze specific phrases or the whole document.

Example Response


However, since this service was not optimally suited to our idea/application (analyzing social media), we decided against further use. Furthermore, only English texts could be analyzed.

Tone Analyzer

The Tone Analyzer provides the possibility to detect differences in tone, e.g. joy, sadness, anger and disgust. For this purpose, a linguistic analysis is used to identify a multitude of variants in tone at both the sentence and the document level. Three types of tone are recognized in a text:

  • Emotions (anger, disgust, fear, joy and sadness)
  • Social characteristics (openness, conscientiousness, extroversion, kindness and emotional range)
  • Language styles (analytical, confident and tentative)

Because this service provides a more accurate analysis of the emotions, e.g. due to the linguistic analysis, it was used instead of the Natural Language Understanding. Furthermore, it was better suited for the analysis of social media.

Language Translator

The Language Translator can be used to translate texts (the source language must be known). In our application, it was used to recognize the language of a text. This so-called language detection was carried out before the analysis of the emotions, so that only English texts were analyzed. Otherwise, errors occurred, for example in the Natural Language Understanding.

Example Response


Cloudant

The Cloudant NoSQL database is a document-oriented Database as a Service. The documents are stored in the JSON format. It is based on CouchDB and works very similarly to it.

Since the Java Cloudant API, which we used initially, was not recommended by IBM, unexpected errors arose and some things did not work as expected. For example, some access URLs were out of date, which meant that the API could not access views.

For this reason, we used the HTTP API and the Java API to get the best of both worlds. To avoid confusion, we abstracted the core database access methods behind the DatabaseManager.

We first tried to access the data using queries. This is possible but not recommended. For this reason, we have created views (special to Cloudant) to access the data as needed.

public User getCurrentUser(String username) {
    List<User> users = databaseManager.getDb().findByIndex(
            "\"selector\": { \"username\": \"" + username + "\" }", User.class);

    return users.get(0);
}

public String getView(String designDocument, String name, boolean descending, int limit, int skip) throws IOException {

    String url = "https://" + USERNAME +
                ".cloudant.com/" + database +
                "/_design/" + designDocument +
                "/_view/" + name + "?"
                + "descending=" + descending + "&"
                + "limit=" + limit + "&"
                + "skip=" + skip;

    Request request = new Request.Builder().url(url).build();

    Response response = httpClient.newCall(request).execute();

    return response.body().string();
}

Example view

When reading the data, it was important to ensure that the existing JSON strings were correctly deserialized.

Mood analysis with IBM Watson

One of the central aspects of the application was to analyse the general sentiment of a post. The user should be able to choose what types of posts they want to see.

The most difficult part of this is that each of the Watson services we used gives back a lot of complex data through JSON Objects. So what we needed to do was to simplify that.

We decided to put every post into one of three categories:

  • Positive
  • Negative
  • Neutral

These broad categories are fairly simple to understand and pretty easy to extract from the data that the Watson services give us. Every user can instantly see what the general sentiment of a post is.

Also, it is pretty simple to filter these posts by sentiment since there aren’t many properties involved.

What we wanted to do with that is give the user the possibility to see posts depending on what kind of mood they're in. For instance: if I'm sad or angry or something similar, I don't want to see posts of people complaining about things or people arguing with each other about unimportant topics. The only thing that'll do is bring me down even more. What I definitely want to see in that case are posts that are positive/uplifting, or at least neutral.

On the other hand if I’m in a good mood I can handle negative posts.

Additionally, we wanted to automatically delete hate comments or posts. Those are comments with a lot of negativity and attitude (basically a lot of anger and disgust in them). However, it could easily happen that most negative comments/posts would be wrongly classified as hate comments and therefore deleted. To avoid that, we used a separate method to handle the hate comments before any analysis is made. More on that later.

To analyse the general sentiment we used the Tone Analyser Service.

To analyse the text properly, it has to be in proper English. We also wanted to leave open the possibility to classify content by topic. That's why we used the Language Translator to detect the language used, and we throw exceptions if the wrong language is typed in or if the message is too short to be analyzed. Natural Language Understanding is used to make a content-based analysis of the text (i.e. the topics discussed) and tag posts with this information.
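The pre-checks described here can be sketched roughly as follows. The minimum word count, the return codes and the class name are all assumptions for illustration; the real app throws exceptions instead of returning codes:

```java
// Sketch of the input validation performed before tone analysis:
// reject non-English text and text that is too short to analyze.
// Threshold and class/method names are assumptions, not project code.
public class InputGuard {
    public static String validate(String text, String detectedLanguage) {
        if (!"en".equals(detectedLanguage)) {
            return "WRONG_LANGUAGE";
        }
        if (text.trim().split("\\s+").length < 3) {
            return "TOO_SHORT";
        }
        return "OK";
    }
}
```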

Mood Analysis with IBM Watson Tone Analyser

The Tone Analyzer analyzes a text based on social, emotional and language tones. You can get an impression of what kind of personality the writer has, as well as what mood they were in when they wrote the text. The social analysis returns values based on the Big Five personality traits. The analysis can be done on the whole text (document level) and also on every sentence (sentence level).

You can try every Watson service under https://servicename-demo.mybluemix.net/.


Our Example text is as follows:

I would say the biggest problem with PHP is that it is so easy to do wrong. Modern PHP has a lot of good stuff and I am going to try it again soon but older PHP has ruined a lot. There is so much bad outdated code, libraries, and tutorials that most people don’t know where to start when they learn.
I have heard this referred to as cowboy PHP. These coders tend to reinvent the wheel and it rarely works and has lots of security issues.
Modern PHP with a framework like laravel sounds pretty amazing to work on. I would choose it over Node in a heartbeat.

This is a post on reddit that discusses PHP being highly criticised. In my personal opinion the writer is not at all trying to criticize PHP as a whole, but only admits that it has some bad libraries and outdated code. Also in his opinion there are too many tutorials to effectively learn it. Still, he is being optimistic towards modern PHP frameworks like laravel and modern PHP in general.

So this reddit user is overall not very critical towards the subject, rather optimistic. My suggestion is that this post should be labeled as neutral.

If we use the tone analyser on this text we get the following result:

    {
        "document_tone": {
            "tone_categories": [{
                "tones": [{
                    "score": 0.135708,
                    "tone_id": "anger",
                    "tone_name": "Anger"
                }, {
                    "score": 0.062654,
                    "tone_id": "disgust",
                    "tone_name": "Disgust"
                }, {
                    "score": 0.168413,
                    "tone_id": "fear",
                    "tone_name": "Fear"
                }, {
                    "score": 0.654988,
                    "tone_id": "joy",
                    "tone_name": "Joy"
                }, {
                    "score": 0.528533,
                    "tone_id": "sadness",
                    "tone_name": "Sadness"
                }],
                "category_id": "emotion_tone",
                "category_name": "Emotion Tone"
            }, {
                "tones": [{
                    "score": 0.053012,
                    "tone_id": "analytical",
                    "tone_name": "Analytical"
                }, {
                    "score": 0,
                    "tone_id": "confident",
                    "tone_name": "Confident"
                }, {
                    "score": 0.884138,
                    "tone_id": "tentative",
                    "tone_name": "Tentative"
                }],
                "category_id": "language_tone",
                "category_name": "Language Tone"
            }, {
                "tones": [{
                    "score": 0.616868,
                    "tone_id": "openness_big5",
                    "tone_name": "Openness"
                }, {
                    "score": 0.065941,
                    "tone_id": "conscientiousness_big5",
                    "tone_name": "Conscientiousness"
                }, {
                    "score": 0.141926,
                    "tone_id": "extraversion_big5",
                    "tone_name": "Extraversion"
                }, {
                    "score": 0.117478,
                    "tone_id": "agreeableness_big5",
                    "tone_name": "Agreeableness"
                }, {
                    "score": 0.467606,
                    "tone_id": "emotional_range_big5",
                    "tone_name": "Emotional Range"
                }],
                "category_id": "social_tone",
                "category_name": "Social Tone"
            }]
        }
    }

The IBM demo gives us a graphical representation of the result as well, which is easier to understand:

Our Tone Analyzer paints a clear picture of what is going on in this reddit post. This user is trying not to offend anyone, probably being aware that this is a touchy subject. You can see that in the language style (tentative has a very high rating). He displays an open minded rather than a close-minded or pragmatic personality. On an emotional level there is an equal amount of sadness and joy.

The problem is that you can’t exactly say whether this post is positive or neutral only by looking at the data. One thing is clear: it isn’t negative. But that doesn’t help much. That’s why we had to create an algorithm that is able to process this data and return a clear result.

The Mood Analysis Algorithm

We ran a lot of texts through the analyzer and found some patterns. From these patterns we created an algorithm that interprets the Tone Analyzer data and returns either positive, negative or neutral.
One side note: the social tendencies are not used in this algorithm at all.

It works as follows:

All emotions are mapped to a numeric value; the higher the value, the more positive the emotion. An emotion enters the equation only when its Tone Analyzer score is above 0.3. This saves us a lot of trouble, because every emotion is present to some extent in every text.
Likewise, every language style property is added if its score is higher than 0.5. If more than one emotion is present, we use their average numeric value. After doing that, we get a value calculated from the emotions.

Since you can express angry or sad emotions while still not being negative, we took the language tone into the equation as well. If the emotion value is negative, we add one for every tentative or analytical language tone property. The reason is that if the general sentiment seems to be negative but the writer holds back and tries to argue neutrally, the text is perceived as neutral rather than negative. For every confident language tone property, one is subtracted, because a sad or angry tone with a lot of confidence was always perceived as spreading negativity in our examples.

If you're being positive but have tentative or analytical language tone properties, the text is probably neutral. It still remains positive if joy is the only emotion detected in the text; it just may not seem as euphoric, because tentative or analytical always means that the writer holds back. Also, a positive and confident text is even more positive.

Tentative and analytical can add up. Confident in combination with tentative/analytical properties never showed up in our texts, maybe because they contradict each other.

The neutral range starts at -0.5 and ends at 0.5. The final numerical value determines whether a text is positive, neutral or negative.


Let’s run our Reddit post through this algorithm:

  • Emotions
  • Joy: value of 4
  • Sadness: value of -1
  • Language Style: Tentative
  1. Calculate the average value of the emotions: emotion = (4 – 1) / 2 = 1.5
  2. Emotion is positive
  3. The only language style property available is tentative. If we look at the diagram above, we can see that we have to subtract one.
    result = emotion – 1 = 1.5 – 1 = 0.5
  4. Since the result is 0.5 we can see that this reddit post is neutral as expected.
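The decision logic above can be condensed into a short sketch. The emotion value is assumed to be pre-computed from the mapped emotion weights (as in the worked example); the class and method names are made up for illustration:

```java
// Sketch of the mood algorithm: tentative/analytical language pulls the
// score toward neutral, while confidence pushes it further in its own
// direction. The neutral band is [-0.5, 0.5]. Names are illustrative.
public class MoodAlgorithm {
    public static String classify(double emotionValue,
                                  boolean tentative,
                                  boolean analytical,
                                  boolean confident) {
        double result = emotionValue;
        if (tentative)  result += (emotionValue < 0) ? 1 : -1;
        if (analytical) result += (emotionValue < 0) ? 1 : -1;
        if (confident)  result += (emotionValue < 0) ? -1 : 1;
        if (result > 0.5)  return "positive";
        if (result < -0.5) return "negative";
        return "neutral";
    }
}
```

For the Reddit example, classify(1.5, true, false, false) yields “neutral”, matching the manual calculation above.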

Embedding the Tone Analyzer into the Java EE application

To use the Tone Analyzer in a Java EE application, we use IBM's Java API. There is also an HTTP API, so it is possible to use the service with any programming language, but the Java API is more comfortable to use.

First, we need to create an instance of the service.

this.toneAnalyzer = new ToneAnalyzer(ToneAnalyzer.VERSION_DATE_2016_05_19);
this.toneAnalyzer.setUsernameAndPassword(TONE_ANALYZER_USERNAME, TONE_ANALYZER_PASSWORD);

You need to specify the version of the API that you use for it to work.
To authenticate, we first need to generate credentials on the cloud. We don't want them hard-coded in our app, so we save them as environment variables and access them from the application. Environment variables can also be set on the Bluemix cloud; that way the credentials are easily accessible in this environment, too. Every deployed application can use different cloud services (depending on the environment variables).
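A small helper for reading such credentials, with a fallback for local development. The helper itself is an illustration, not project code:

```java
// Sketch: read a credential from an environment variable, falling back to
// a supplied default so local development works without cloud config.
public class Credentials {
    public static String fromEnv(String name, String fallback) {
        String value = System.getenv(name);
        return (value != null) ? value : fallback;
    }
}
```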

After the service has been initialized, we need to specify what kind of analysis it should run. In our case it should only run a document scoped analysis with social, language and emotional tones. We kept the social analysis to be able to use it in a later build of the application.

this.toneOptions = new ToneOptions.Builder()
        .addTone(Tone.EMOTION)   // emotional tones
        .addTone(Tone.LANGUAGE)  // language style
        .addTone(Tone.SOCIAL)    // social tendencies (kept for a later build)
        .build();
// (Builder method names as in the 3.x Java SDK; they may differ in other versions.)

To run this service we need to make sure that it is connected to the cloud. It returns a JSON String. We deserialize this String into a custom Java object, which is called ToneAnalysis and return that.

public ToneAnalysis getTone(String text) {
    JsonParser jsonParser = new JsonParser();
    JsonElement jsonToneAnalysis = jsonParser.parse(
            toneAnalyzer.getTone(text, toneOptions).execute().getDocumentTone().toString());
    return Deserializers.deserializeToneAnalysis(jsonToneAnalysis);
}
Filtering Hate comments using Natural Language Understanding

Filtering hate comments is important if we want to keep any conversation online from escalating. To find examples for this, the only thing you need to do is look at the comment section of any YouTube video. Although some platforms are more prone to it than others, people on the internet are anonymous and therefore have less restraint. Normally, real people are needed to delete offensive content, which isn’t very effective considering the massive amount of text that is posted on most online platforms.

We tried to automatically filter comments and posts like that without anyone needing to manually delete them. We found that the easiest way to do that is to use the Natural Language Understanding Service by IBM. Its purpose is not to analyse text tone, but to analyse text meaning. That’s why we didn’t need an algorithm as complex as the one that analyses text mood. It would also be possible with the Tone Analyzer. However, we wanted to leave the possibility open to expand the filter on content based filtering and not be stuck with an analysis of the general tone of a text.

Natural Language Understanding

The API of the Natural Language Understanding Service works very similar to the Tone Analyzer API.

If we run our previously analysed text through the Natural Language Understanding service, we get a lot of different kinds of results.

First off, it returns an overall sentiment of the text:

    {
        "sentiment": {
            "document": {
                "score": -0.0115505,
                "label": "negative"
            }
        }
    }

Then, we get the emotions. In a way it is very similar to Tone Analyzer.

    {
        "emotion": {
            "document": {
                "emotion": {
                    "sadness": 0.528533,
                    "joy": 0.654988,
                    "fear": 0.168413,
                    "disgust": 0.062654,
                    "anger": 0.135708
                }
            }
        }
    }

Also, we get frequently used keywords:

    {
        "keywords": [{
            "text": "bad outdated code",
            "relevance": 0.973213
        }, {
            "text": "Modern PHP",
            "relevance": 0.8045
        }, {
            "text": "biggest problem",
            "relevance": 0.66206
        }, {
            "text": "good stuff",
            "relevance": 0.573971
        }]
    }

And three Categories that the text could be tagged with:

    {
        "categories": [{
            "score": 0.551908,
            "label": "/technology and computing/programming languages"
        }, {
            "score": 0.353947,
            "label": "/business and industrial"
        }, {
            "score": 0.258282,
            "label": "/technology and computing/software"
        }]
    }

The text has to have more than a few words. Otherwise, the service will return an error. That’s why this kind of service can only be used with texts containing at least one rather lengthy sentence.

To see whether a comment is hateful or not, we have to first check if the sentiment is positive or negative. If it is negative, we can progress to the next step.
If either the value of disgust or the value of anger is more than two times larger than the values of sadness and fear, we have a hate filled text.

The function looks as follows:

public static boolean isHaterComment(String text) throws ServiceResponseException {
    boolean isHateComment = false;
    LanguageProcessor languageProcessor = new LanguageProcessor();
    LanguageAnalysis languageAnalysis = languageProcessor.analyzeText(text);

    if (languageAnalysis.getSentiment() <= 0) {
        if (hasHighAngerRatio(languageAnalysis) || hasHighDisgustRatio(languageAnalysis)) {
            isHateComment = true;
        }
    }

    return isHateComment;
}
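The two ratio helpers are not shown in the snippet. The rule stated above ("more than two times larger than the values of sadness and fear") could be sketched like this, with plain doubles standing in for the project's LanguageAnalysis type; signature and class name are illustrative:

```java
// Sketch of the hate-comment ratio check: an emotion (anger or disgust)
// dominates when its score exceeds twice both the sadness and the fear
// score. Class and method names are assumptions, not project code.
public class HateHeuristic {
    public static boolean dominates(double candidate, double sadness, double fear) {
        return candidate > 2 * sadness && candidate > 2 * fear;
    }
}
```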

Summing up text analysis

There are many possibilities to use these services for our app idea. We could have done a lot more in terms of content-based filtering. One idea was to analyse the user's behavior and filter the posts accordingly. There are infinite possibilities you can experiment with.

When it comes to accuracy, the services are correct most of the time. However, you can never be 100% certain that a text is positive, negative or neutral. There is always room for interpretation; it depends on the person reading it. Some people would perceive a neutral post as negative because it contradicts their views on a topic. We tried to design the mood analysis algorithm in a way that is logical and not based too much on pattern analysis. That's one of the reasons we used the Tone Analyzer for it. What helped a lot were the analytical/tentative/confident properties. These enabled us to make the analysis a lot more accurate and less based on patterns and heuristics.

Still, it isn’t 100% accurate. And that can be a problem. If we want an intelligent social media app, we want a very high accuracy. The Tone Analyzer can’t process irony. That’s why a lot of playful joking would be marked as negative.


oml one time sprite came out of my nose and it burned 😂😂😂

Apparently this text is very sad, even though it isn't intended to be.

One problem with Natural Language Understanding is that it needs a lot of text to make a proper analysis, while the Tone Analyzer only needs a few words. If we want to enable content-based analysis and filtering as well as tone-based analysis, we need a lot of text. Finding a solution for that can prove difficult, because you have to make the content analysis optional or come up with some other solution.

Given more time and resources, it is certainly possible to implement these services in a way that is better suited to a real-life application. But you have to be aware that there are a lot of exceptions to handle (like ironic texts). To find all of these exceptions to the rules, we would have to analyse a lot more texts. To integrate Natural Language Understanding into a messaging system like YouTube, it would have to be optional and the hate comment analysis would have to be done using the Tone Analyser.

Moodkoala is therefore best for deep conversations rather than short messages.

What is astonishing though, is that it wasn’t too hard to build a basic intelligent social media app that works almost all of the time. The most difficult part is making it user friendly and handling exceptions to rules.

Google and Facebook API

Everyone who wants to use the app needs to log in. For this purpose, each user can create an account on the login page of the app by clicking the register button.

Additionally, we decided to support logging in with a Google or Facebook account (if available). In this case, users do not have to create their own account, since they can use their Google or Facebook account to log in. Many websites and apps have offered this possibility for a long time, as it is more convenient for the users. Google and Facebook offer appropriate APIs for realizing this.

Google Sign-in API: https://developers.google.com/identity/
Facebook Login API: https://developers.facebook.com/docs/facebook-login

The procedure differs only minimally for both APIs. First, you need an account to use the APIs. Next, enter the domain / URL and then you get a public key and private key from the provider. The public key is then entered into the software.

Mood imaging analysis

We wanted our app to be able to take images of our users and determine their current mood based on these images. The resulting mood should be used to query a list of matching songs from Spotify. To accomplish such a complex task we divided it into the following steps.

1. Shoot or upload user images
2. Save user images
3. Analyze user images
4. Map the result of the analysis to songs on Spotify
5. Show the resulting songs

Before implementing, we did research to get an overview of the different ways to achieve our goal. In the following section, this overview is specified.

1. Shoot or upload user images

Since we were using PrimeFaces, our first attempt was to search the PrimeFaces ShowCase website for documentation on how to shoot or upload an image. For shooting an image we found a very simple-to-use component called PhotoCam, and for uploading an image we found the FileUpload component. The PrimeFaces examples are available here:

Alternatively, we could have used plain HTML5 with JavaScript to upload an image or take a photo. HTML5 allows us to access the webcam via the video tag and the getUserMedia functionality. Uploading an image can be done with an Ajax request. An HTML code sample is accessible at http://jsfiddle.net/sBYHN/83/.


2. Save user images

We did not want to store the uploaded images directly into our database due to performance reasons.

Additionally, we wanted to write only textual data to our database. That's why we used an object storage service. Storage is a valuable resource, and sadly most of the big cloud platforms like Google, Amazon, Microsoft and IBM do not provide free use of an object store, or they require a credit card number for identification. Neither of us had a credit card at our disposal, so we had to search for free object storage services. The following table displays the different cloud platforms and services we considered using as object storage to save our images.

| | Minio Cloud Storage | Cloudinary | imagekit.io | Google | Amazon | Microsoft | IBM |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Java SDK | Yes | Yes | No | Yes | Yes | Yes | Yes |
| JavaScript SDK | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Private Server | Yes | No | No | No | No | No | No |
| Pricing | 0$ | 0$ – (49$ – 549$)/month | 0$ – 19$/month | 0,023$ | 0,0245$ (50 TB) – 0,0235$ (450 TB) – 0,0225$ (over 500 TB) | 0,021$ (50 TB) – 0,02016$ (499 TB) – 0,01932$ (4999 TB) | 0,09$ (50 TB) – 0,07$ (150 TB) – 0,05$ (over 501 TB) |
| Require Credit Card | No | No | No | Yes | Yes | Yes | Yes |

After realizing that we had to search for free-to-use object storage services, we first came up with Cloudinary. Cloudinary offers image and video management in the cloud in two varieties: a free plan and an upgrade plan. If you want to use the upgrade plan, you can choose up to 250 GB of total storage and 1 TB of monthly bandwidth. The free plan includes a total storage of 2 GB and a monthly bandwidth of 5 GB. Cloudinary also supports image transformation by URL, e.g. if you have a URL to your image like this:


You can now scale the image by simply editing the URL.


You can also apply effects to it.


Cloudinary supports a great variety of framework integrations and programming languages including JavaScript, Java, .Net, iOS, Ruby and Scala.

The second solution we found was Minio Cloud Storage. Minio Cloud Storage is a server application implementing the Amazon S3 v4 API. It is licensed under the Apache License 2.0, which permits free use. In comparison to the other object storage services, Minio Cloud Storage is a server itself, which allows us to host our own object storage. Additionally, Minio Cloud Storage may store any kind of file, whereas Cloudinary only supports images and videos. For a simple setup and deployment the server application is available as a Docker image on Docker Hub. A publicly accessible instance of the Minio Cloud Storage server can be reached at https://play.minio.io:9000/.

Setting up your own Minio Cloud Storage server is very straightforward. Using Docker you can simply run the following command.

docker run -p 9000:9000 minio/minio server /data

This will pull the official Docker image minio/minio from Docker Hub if it is not locally available. Afterwards, Docker will start a new container with a port mapping for port 9000 and a data volume at the path ‘/data’ using this image. Once the Minio Cloud Storage server is running, it writes a valid access name and key to standard output. You can use these credentials to log into the local Minio instance at http://localhost:9000/.

Similarly to Cloudinary, Minio Cloud Storage comes with SDKs for several programming languages including JavaScript, Java, .Net, Python and Go.

Imagekit.io is also a free cloud storage service. Like Cloudinary, imagekit.io offers a free trial plan and a billable upgrade plan. The trial plan includes a total storage of 2 GB, 5 GB of monthly bandwidth and 80,000 requests per month. Similar to Cloudinary, imagekit.io provides functionality to transform images via URL.



Compared to Cloudinary the image transformations take a bit longer to process.

Concerning SDK support for different programming languages or plugins, imagekit.io covers Python, JavaScript/jQuery, a WordPress plugin and a REST API. So if we wanted to use imagekit.io in our Java application, we would have to extend an existing Java HTTP client to access the REST API, since a Java SDK is currently not supported.

3. Analyze user images

Image object recognition, and even more so human face and emotion recognition, is a difficult job to achieve. Due to prior experiences in our studies, we immediately thought of machine learning when it came to analyzing images. One elegant and efficient way of solving this kind of problem is using a neural network. A neural network has to learn its classification during a phase of supervised training with a lot of training data before it is able to perform classification on its own. Since we did not have the resources to generate enough training data to train our own neural network, we decided to use a service containing a pre-trained neural network to analyze our images.

When investigating human face recognition in addition to emotion recognition only a few Cloud Platforms provide a solution for this particular use case. The following table illustrates the most prominent cloud services for image recognition and compares them with respect to criteria like emotion recognition, multi-tracking and other requirements.

| | Kairos | Amazon | Google | Microsoft | IBM | Affectiva | OpenCV |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Face recognition | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Face recognition (video) | Yes | No | Yes | Yes | No | No | Yes |
| Emotion recognition | Yes | Yes | Yes | Yes | No | No | Yes |
| Emotional depth | Yes | No | Yes | Yes | No | Yes | No |
| Age & Gender | Yes | Yes | Yes | Yes | No | Yes | No |
| Multi-tracking | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Credit card required | No | Yes | Yes | Yes | No | No | No |

The Microsoft Emotion API caught our attention first. On this demonstration website you are able to try the human face and emotion recognition yourself. As input, you either upload an image or provide a URL to a web-hosted image. As output, the Microsoft Emotion API returns a JSON string, which contains the recognized faces, each with a static spectrum of emotions and numerical scores between 0 and 1 representing the affiliation to the respective emotion. Let’s take a look at the following example. We looked up some sample images with a predominant emotion to be processed by the neural networks.
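For illustration, such a score map can be reduced to its dominant emotion by picking the entry with the highest value. The following sketch is ours, not project code; the class and method names are made up.

```java
import java.util.Map;

public class DominantEmotion {

    // Returns the emotion with the highest score from a score map as
    // delivered by an emotion recognition service. Ties resolve to the
    // first entry encountered.
    public static String dominant(Map<String, Double> scores) {
        String best = null;
        double bestScore = -1.0;
        for (Map.Entry<String, Double> entry : scores.entrySet()) {
            if (entry.getValue() > bestScore) {
                bestScore = entry.getValue();
                best = entry.getKey();
            }
        }
        return best;
    }
}
```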

As samples we chose two happy, two sad, two neutral, two angry and two non-human pictures. (The sample images themselves are not reproduced here.)
We compared the Microsoft, Kairos and Google APIs using their demo websites.

Microsoft: https://azure.microsoft.com/de-de/services/cognitive-services/emotion/

Google: https://cloud.google.com/vision/?hl=uk

Kairos: https://www.kairos.com/demos

The Microsoft Emotion API recognized nearly every human face and emotion correctly. Only one of these sample pictures could not be recognized despite a human face being shown. Also, the pictures of animal faces correctly returned an empty response. Kairos, on the other hand, did not recognize as many faces and emotions correctly. Additionally, Kairos subjectively needed more time to process the images than the Emotion API from Microsoft. The Google Vision API returns much more data than the other APIs, but also requires a lot of time to process the images. Our sample pictures were recognized without any errors. However, due to the greater processing time and the fact that Google requires a valid credit card in order to use their API, proper usage was hard to achieve.

| | Microsoft | Kairos | Google |
| --- | --- | --- | --- |
| Happy pictures | 1/2 | 1/2 | 2/2 |
| Sad pictures | 2/2 | 0/2 | 2/2 |
| Neutral pictures | 2/2 | 0/2 | 2/2 |
| Angry pictures | 2/2 | 1/2 | 2/2 |
| Non human pictures | 2/2 | 2/2 | 2/2 |
| Score | 9/10 | 4/10 | 10/10 |

4. Map the result of the analysis to songs on Spotify

Mapping a mood or emotion to a music query is quite a non-trivial task, since searching songs by mood requires a database in which songs are already mapped to particular moods.

Building our own database mapping moods to songs would require huge amounts of processing time, because every song to be stored would have to be processed beforehand by some kind of audio mood analyzer. Searching for backing services was also quite difficult due to this specialized use case.

While researching, we stumbled upon some promising applications like http://moodfuse.com/ or http://www.stereomood.com/ which might help us to solve this kind of problem. After a more detailed analysis of the source code, these applications turned out not to use any mood or emotion data to query the music at all. So we had to continue investigating. Finally, we found a service called musicovery.com which offers a music database that can be searched by mood via two parameters. The two parameters are called trackarousal and trackvalence and have a range of values between 1 and 1 million to express different moods. While the trackarousal value determines whether the song will be calm and smooth or very harsh and rough, the trackvalence value regulates whether the song will be sorrowful and sad or happy and uplifting. For example, with the following query only songs which are very calm and feel-good are listed. In addition to the trackvalence and trackarousal values, other parameters like the listener country, a certain decade and the track popularity can be specified.
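The shape of such a request can be sketched as a simple query-string builder. The base URL below is a placeholder, not the real Musicovery endpoint; only the parameter names trackarousal and trackvalence are taken from the service described above.

```java
public class MusicoveryQuery {

    // Builds the mood part of a playlist query. baseUrl is a placeholder
    // for the actual Musicovery endpoint, which is not reproduced here.
    public static String buildQuery(String baseUrl, int trackArousal, int trackValence) {
        return baseUrl
                + "?trackarousal=" + trackArousal
                + "&trackvalence=" + trackValence;
    }
}
```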

Example request 1

When feeling more aroused and more energetic the trackarousal value can be increased.

Example request 2

Corresponding to the change, the resulting list of songs adapted. Furthermore, musicovery.com offers a feature to add the id of a specific foreign music service like Spotify, Deezer, iTunes or Amazon Music to each song, so the songs can be looked up more quickly at the foreign music service. Unfortunately, this feature did not work and the song ids were not added to the returned songs.

Apart from this, we still wanted to integrate the Spotify API in our app to show and play the suggested tracks. In order to query several tracks from the Spotify API, the only option is to use the REST API endpoint ‘https://api.spotify.com/v1/tracks’, which accepts an array of track ids, one for each requested song. Since the musicovery feature for returning foreign ids did not work, we needed to send a search request to Spotify for each track returned by musicovery, which is very unfortunate.

Alternatively, we could have tried to use the last.fm API by mapping the mood to preexisting song tags. But in this case a numerical value mapping as with musicovery.com would not have been possible.

The outcome of the Microsoft Emotion API gave us a distribution of values for the different emotions but the Musicovery API requires input data in form of trackarousal and trackvalence values. So in order to combine both APIs we created our own mapping (similar to the Tone Analyzer mapping). The functionality of this mapping is represented in the following example. Imagine the image we saw earlier was processed by the Microsoft Emotion API.

A part of the result we receive contains the emotion scores which look similar to the following JSON document.

{
    "anger": 0.153076649,
    "contempt": 1.63173226E-08,
    "disgust": 0.0004185517,
    "fear": 0.00120871491,
    "happiness": 0.8409795,
    "neutral": 4.25880842E-07,
    "sadness": 0.0005652569,
    "surprise": 0.00375083764
}

An important fact for mapping or processing these kinds of JSON documents is that the above listed range of emotions is static and does not change. Knowing that we need two resulting values, trackarousal and trackvalence, each with a value range between 1 and 1000000, we could split up and assign the different emotions to certain values. The following table shows the assignment of emotions we came up with for the particular trackvalence and trackarousal values. The emotions are sorted: for example, a high fear value should result in a low trackarousal value, and a high surprise value should result in a high trackarousal value.

| Trackarousal | Trackvalence |
| --- | --- |
| Fear | Anger |
| Contempt | Sadness |
| Disgust | Neutral |
| Neutral | Happiness |
| Happiness | Surprise |
| Surprise | |

Knowing this, a percentage distribution among the assigned emotions can be calculated. In case of our example the following values would result from the calculation.

trackArousalSum = 0.8463580464481

trackValenceSum = 0.9983726694208

| Assigned emotions | Distribution trackarousal | Assigned emotions | Distribution trackvalence |
| --- | --- | --- | --- |
| Fear | 0.15 % | Anger | 15.33 % |
| Contempt | 0.00 % | Sadness | 0.05 % |
| Disgust | 0.05 % | Neutral | 0.00 % |
| Neutral | 0.00 % | Happiness | 84.30 % |
| Happiness | 99.36 % | Surprise | 0.37 % |
| Surprise | 0.44 % | | |
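The normalization step above can be sketched in a few lines: each raw score is divided by the sum of all scores assigned to that axis. The class and method names below are ours, not from the project source.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EmotionDistribution {

    // Normalizes the raw emotion scores assigned to one axis (trackarousal
    // or trackvalence) to a percentage distribution.
    public static Map<String, Double> distribution(Map<String, Double> assigned) {
        double sum = 0.0;
        for (double score : assigned.values()) {
            sum += score;
        }
        Map<String, Double> result = new LinkedHashMap<>();
        for (Map.Entry<String, Double> entry : assigned.entrySet()) {
            result.put(entry.getKey(), entry.getValue() / sum * 100.0);
        }
        return result;
    }
}
```

Feeding in the trackarousal scores from the sample response reproduces the distribution shown in the table (happiness about 99.36 %, surprise about 0.44 %).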

Next, we needed to match the assigned emotions to the range of values between 1 and 1000000. Therefore, we split the range of values into the following three categories and specified the influencing emotions for each category.

| Range of values | Emotions trackarousal | Emotions trackvalence |
| --- | --- | --- |
| 1 – 333333 | Fear, Contempt, Disgust | Anger, Sadness |
| 333333 – 666665 | Neutral | Neutral |
| 666666 – 1000000 | Happiness, Surprise | Happiness, Surprise |

Next, we calculate the sum of the assigned emotions for each category. The category with the greatest sum of assigned emotions is selected. Afterwards, it is set in ratio to the range of values. So the higher the sum of the assigned emotions in category 1 – 333333, the lower will be the resulting value. Similarly, the higher the sum of assigned emotions in category 666666 – 1000000 the higher the resulting value. So in our example we get the following result.

| Range of values | Emotions trackarousal | Emotions trackvalence |
| --- | --- | --- |
| 1 – 333333 | 0.15 % + 0.00 % + 0.05 % = 0.20 % | 15.33 % + 0.05 % = 15.38 % |
| 333333 – 666665 | 0.00 % | 0.00 % |
| 666666 – 1000000 | 99.36 % + 0.44 % = 99.80 % | 84.30 % + 0.37 % = 84.67 % |

For both the trackarousal and trackvalence values we select the category 666666 – 1000000.

To set the values in ratio, we need the percentage above the category’s lower bound. Therefore we subtract 33 %, divide by 67 % and scale the result into the selected category’s range. The final trackarousal and trackvalence values are calculated like this.

trackArousal = ((99.8 – 33) / 67) * (1000000 – 666666) + 666666 = 999004.97

trackValence = ((84.67 – 33) / 67) * (1000000 – 666666) + 666666 = 923731.19

The selected songs would be feeling very good and exciting.
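The scaling step for the top category can be captured in a small helper. This is an illustrative sketch with names of our own choosing; it implements exactly the formula described above for the 666666 – 1000000 category.

```java
public class MoodScaler {

    // Maps a percentage that falls into the top category onto the range
    // 666666 - 1000000: subtract the 33 % lower bound, divide by the 67 %
    // band width and scale into the category range.
    public static double scaleTopCategory(double percentage) {
        double lower = 666666.0;
        double upper = 1000000.0;
        return ((percentage - 33.0) / 67.0) * (upper - lower) + lower;
    }
}
```

For the example values 99.8 % and 84.67 % this yields roughly 999005 and 923731, matching the worked calculation.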

5. Show the resulting songs

For displaying the suggested tracks from Spotify we wanted to support as many platforms as possible. Ideally, when running on a mobile device, our app should trigger the Spotify app to play the selected tracks if it is installed. Similarly, when running on a desktop computer, our app should trigger the preinstalled Spotify client if it is available. Otherwise, the Spotify Web Player should be used to play the tracks. To achieve such functionality we found the Spotify Play Button.

The Spotify Play Button is an HTML iframe tag which is very straightforward to set up. It only requires a URL to the Spotify track or playlist plus a width and height to be embedded in your website. The following example shows the source code for a fully operational Spotify iframe and its result.

<iframe src="https://open.spotify.com/embed?uri=spotify:track:5FZxsHWIvUsmSK1IAvm2pp" width="300" height="80" frameborder="0"></iframe>

Along with this neat HTML widget, we were able to show the resulting tracks in our app with Spotify support.


In this section we describe how we realized our features and what we learned while implementing them.
The source files we created during the implementation can be found here on Gitlab.

Implementing the mood analysis algorithm

We need to create some collections to store the individual tone values. Also, we need to manually map the emotions to the numeric values that they are given.

private List languageTones;

private List emotionTones;

private Map<String, Integer> emotionNumerics;

public TextMoodAnalyzer() {
    this.toneAnalyzer = new TextToneAnalyzer();
    this.emotionTones = new ArrayList<>();
    this.languageTones = new ArrayList<>();
    this.emotionNumerics = new HashMap<>();
    this.toneAnalysis = null;

    this.emotionNumerics.put("Disgust", -4);
    this.emotionNumerics.put("Fear", -3);
    this.emotionNumerics.put("Anger", -2);
    this.emotionNumerics.put("Sadness", -1);
    this.emotionNumerics.put("Joy", 4);
}

This will initialize the mood analyzer. Before we can execute all the algorithmic steps, we need to call the API and deserialize the returning JSON String into a Java object. With that being done we need to add all of the tone values that are relevant for the analysis to the collections. Finally, we can execute the algorithm and return the numeric mood value.

private float analyzeMood(String text) {
    float emotionToneNumeric = 0;

    this.toneAnalysis = toneAnalyzer.getTone(text);

    // Going through all collected values and adding all language tones above 0.5
    // and all emotion tones above 0.3

    // Average all the numeric values of the emotions
    emotionToneNumeric = getAverageEmotionNumeric();

    // add/subtract language values
    return emotionToneNumeric + addLangValues(emotionToneNumeric);
}
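The threshold filtering mentioned in the comments can be sketched as follows. This is an illustrative helper of our own, not the project's actual code; the thresholds 0.5 (language tones) and 0.3 (emotion tones) come from the description above.

```java
import java.util.ArrayList;
import java.util.List;

public class ToneFilter {

    // Keeps only the tone scores above a given threshold, e.g. 0.5 for
    // language tones and 0.3 for emotion tones.
    public static List<Double> filterAbove(List<Double> scores, double threshold) {
        List<Double> relevant = new ArrayList<>();
        for (double score : scores) {
            if (score > threshold) {
                relevant.add(score);
            }
        }
        return relevant;
    }
}
```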

With all the calculations done, we need to map the returned float value to one of the three mood states (positive/negative/neutral) and return the resulting state. We created a Moodstate enum for that.

public Moodstate getMoodState(String text) {
    float moodNumeric = analyzeMood(text);
    Moodstate moodstate;

    if (moodNumeric > NEUTRAL_UPPER_BOUND) {
        moodstate = Moodstate.POSITIVE;
    } else if (floatEquals(moodNumeric, NEUTRAL_UPPER_BOUND)
            || floatEquals(moodNumeric, NEUTRAL_LOWER_BOUND)
            || (moodNumeric < NEUTRAL_UPPER_BOUND && moodNumeric > NEUTRAL_LOWER_BOUND)) {
        moodstate = Moodstate.NEUTRAL;
    } else {
        moodstate = Moodstate.NEGATIVE;
    }
    return moodstate;
}
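The helper floatEquals is used here but not shown in the source. A common implementation, and our assumption of what it does, compares within a small epsilon to avoid floating-point rounding issues:

```java
public class FloatUtil {

    // Assumption: floatEquals compares two floats within a small epsilon,
    // since exact == comparison is unreliable for floating-point values.
    private static final float EPSILON = 0.0001f;

    public static boolean floatEquals(float a, float b) {
        return Math.abs(a - b) < EPSILON;
    }
}
```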

Every time a post is made, the analysis runs and the results are saved in the database to later be filtered accordingly.
The Cloudant API maps Java objects to the database. Every user is saved as a separate document. All information is essentially contained in the user object/document. That’s why we have to add a new post to the currently logged-in user. Also, we have to add the analysis results to the Post object that is going to be added to the User object. Since complete documents have to be sent to update a document in the database, we send the updated user object with the added post in order to add one post including the language analysis data.

public void addPost(Post post) {
    LanguageProcessor languageProcessor = new LanguageProcessor();

    // ... analyze the post, attach the results and add it to the user ...

    Response responseAddPost = databaseManager.getDb().update(user);

    // Refresh the rev ID to be able to access the object the next time
}

This is what one simple post from our posts view looks like:

{
    "id": "9",
    "key": 1502310563437,
    "value": [
        "test test",
        "hello hello. I am very happy, but also sad and suprised.",
        {
            "language": "en",
            "categories": [
                { "label": "/art and entertainment/music", "score": 0.707064 },
                { "label": "/shopping/retail/outlet stores", "score": 0.000201069 },
                { "label": "/shopping/toys/dolls", "score": 0.000201069 }
            ],
            "sentiment": 0.0140318,
            "anger": 0.001416,
            "disgust": 0.008378,
            "fear": 0.051329,
            "joy": 0.119018,
            "sadness": 0.863275,
            "moodstate": "NEGATIVE"
        }
    ]
}

Before we add a post, we have to verify that the post is in English. This can be done using the Language Translator by IBM Watson:

public String getTextLanguage(LanguageTranslator service, String text) {
    service.setUsernameAndPassword(USERNAME, PASSWORD);
    return service.identify(text).execute().get(0).getLanguage();
}

This way we can minimize the exceptions being thrown because of wrong language (or bad grammar).

String lang = translator.getTextLanguage(new LanguageTranslator(), text);



Deserializing JSON Strings

We used Gson for all of our JSON file handling. To deserialize a JSON String into an object, we first need a GsonBuilder. We need to register a TypeAdapter to implement the deserialization logic into the GsonBuilder. The logic is stored inside of a JsonDeserializer.

public static ToneAnalysis deserializeToneAnalysis(JsonElement toneAnalysisElement) {
    GsonBuilder builder = new GsonBuilder();
    builder.registerTypeAdapter(ToneAnalysis.class, new ToneAnalysisDeserializer());
    Gson gson = builder.create();
    return gson.fromJson(toneAnalysisElement, ToneAnalysis.class);
}

This is what the deserializer looks like:

public class ToneAnalysisDeserializer implements JsonDeserializer<ToneAnalysis> {

    public ToneAnalysis deserialize(JsonElement json, Type typeOfT, JsonDeserializationContext context)
            throws JsonParseException {

        final JsonObject jsonObject = json.getAsJsonObject();
        final ToneAnalysis toneAnalysis = new ToneAnalysis();

        final JsonArray jsonToneCategories = jsonObject.get("tone_categories").getAsJsonArray();

        // ... parse the tone categories into the toneAnalysis object ...

        return toneAnalysis;
    }
}

So basically we manually create an object and parse all properties from the JSON element into this object. It could also be possible to do this automatically, but most of the time the JSON structure delivered back from the database or the services isn’t equal to the object structure. Hence the manual deserialization.

Implementing the Natural Language Processing API

Creating the main object works similar to creating the main Tone Analyzer object. We have to authenticate the service via username and password and specify the version.

this.service = new NaturalLanguageUnderstanding(
NaturalLanguageUnderstanding.VERSION_DATE_2017_02_27, USERNAME, PASSWORD);

Then we need to specify which analysis options we want to use. These settings are stored in a Features object.

CategoriesOptions categoriesOptions = new CategoriesOptions();

EmotionOptions emotionOptions = new EmotionOptions.Builder().build();

SentimentOptions sentimentOptions = new SentimentOptions.Builder().build();

this.features = new Features.Builder()
        .categories(categoriesOptions)
        .emotion(emotionOptions)
        .sentiment(sentimentOptions)
        .build();

Running the analysis works a little differently from running the tone analysis, but is still very straightforward:

public LanguageAnalysis analyzeText(String text) throws ServiceResponseException {
    LanguageAnalysis result = null;
    AnalyzeOptions parameters = new AnalyzeOptions.Builder()
            .text(text)
            .features(features)
            .build();

    AnalysisResults response = service.analyze(parameters).execute();

    // Creating the LanguageAnalysis object and storing ToneAnalysis
    // and Natural Language Understanding results in it.

    return result;
}

It can happen that the text is too short or written in the wrong language. If that is the case, a ServiceResponseException is thrown.

Implementing hate comment filter into the Java EE application

We wanted to filter every comment before it could be posted.
That’s why we used a JSF validator to prevent these texts from being sent to the backend. The user gets notified if a text is classified as being hateful.

To do that, we needed to implement a custom JSF validator and let this validator validate both the textfield for posting comments and the textfield for writing posts. If the text is recognized as being hateful, we throw a ValidatorException which is then caught by PrimeFaces and shown as a pop-up dialog.

First, we need to create a custom JSF validator and name it accordingly. Ours is called de.mi.hdm_stuttgart.hatertextvalidator.
We need to implement the Interface javax.faces.validator.Validator.
The Object that is passed to the validate method is the string typed into the textfield. This string has to be validated.

public class HatertextValidator implements Validator {

    public void validate(FacesContext facesContext, UIComponent uiComponent, Object obj)
            throws ValidatorException {

        if (obj instanceof String) {
            String value = (String) obj;
            boolean haterComment = false;

            FacesMessage msg = new FacesMessage("Warning", "Please don't be rude to others.");

            try {
                haterComment = TextMeaningAnalyzer.isHaterComment(value);
            } catch (ServiceResponseException e) {
                msg = new FacesMessage("Info", "Text this short cannot be analyzed.");
                throw new ValidatorException(msg);
            }

            if (haterComment) {
                throw new ValidatorException(msg);
            }
        }
    }
}

To use this validator, it has to be referenced from the textfield that it validates:

<p:inputTextarea value="#{postsController.postModel.post.body}" id="post"
        requiredMessage="Please type something in to make a Post">
    <f:validator validatorId="de.mi.hdm_stuttgart.hatertextvalidator"/>
</p:inputTextarea>

Also, this textarea has to be within a form that is updated on every submit, in order to trigger the validation and to show the error message in a pop-up notification:

    <p:growl id="growl" showDetail="true" sticky="true"/>
    <p:commandButton style="width: 100%;" action="#{postsController.makePost()}"
            value="Send" icon="ui-icon-check"
            update="@form" />


That concludes the text analysis algorithms.

Set up Google Sign-in

On the Google API Console, which is shown in the image below, you can enter your URL.

Go to the “APIs” section. Under credentials, you can then create a new project and enter a product name under the “OAuth consent screen”. After clicking the “Create credentials” button, select OAuth client ID. In the next step you can select the application type (in this case it is a web application). You can enter one or several URLs under the respective project name.
The client ID is the public key, which must be included in the software. In the HTML file, the following is entered into the header area together with the client ID:

<meta name="google-signin-scope" content="profile email" />
  <meta name="google-signin-client_id"
            content="YOUR CLIENT ID.apps.googleusercontent.com" />
  <script src="https://apis.google.com/js/platform.js" async="async"
            defer="defer"></script>

Next, you have to bind the button with which the user can log in.
If the user clicks on the sign-in button, a popup appears in which the user can log in (a Google account is of course required). Then a token is created with which you can retrieve the user’s data.

For more information: https://developers.google.com/identity/sign-in/web/sign-in

The token is obtained on the frontend and has to be moved from the JavaScript code to the backend (Java). JSF does not work with plain servlets, so you cannot send the token to the backend with normal requests.
In PrimeFaces, you can create an invisible input field as part of a form. To transfer the token to the backend, we write it into this input field first and then submit the form to send the data to the backend.
On the backend you can work with the token, e.g. to find the name of the person, store such important data in the database (if the user is new) and pass the user a valid session.
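An ID token like the one shown below is a JWT, so its claims sit in the Base64URL-encoded middle segment. The sketch below, our own illustration, only reads the claims; in production the token's signature must additionally be verified (e.g. via Google's tokeninfo endpoint or their client library) before any claim is trusted.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class IdTokenPayload {

    // Extracts the Base64URL-encoded payload (the middle segment) of a JWT
    // such as the Google ID token. This only decodes the claims; it does
    // NOT verify the token's signature.
    public static String decodePayload(String idToken) {
        String[] segments = idToken.split("\\.");
        byte[] decoded = Base64.getUrlDecoder().decode(segments[1]);
        return new String(decoded, StandardCharsets.UTF_8);
    }
}
```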

Tokeninfo example:

{
 // These six fields are included in all Google ID Tokens.
 "iss": "https://accounts.google.com",
 "sub": "110169484474386276334",
 "azp": "1008719970978-hb24n2dstb40o45d4feuo2ukqmcc6381.apps.googleusercontent.com",
 "aud": "1008719970978-hb24n2dstb40o45d4feuo2ukqmcc6381.apps.googleusercontent.com",
 "iat": "1433978353",
 "exp": "1433981953",

 // These seven fields are only included when the user has granted the "profile" and
 // "email" OAuth scopes to the application.
 "email": "testuser@gmail.com",
 "email_verified": "true",
 "name" : "Test User",
 "picture": "https://lh4.googleusercontent.com/-kYgzyAWpZzJ/ABCDEFGHI/AAAJKLMNOP/tIXL9Ir44LE/s99-c/photo.jpg",
 "given_name": "Test",
 "family_name": "User",
 "locale": "en"
}

You can call the token endpoint with this URL:

Set up Facebook login


After you have created a developer account with Facebook, you can create a project just like with Google and enter your URL. Afterwards, you also get a public key, which is integrated into the software.

In the following link you can customize and create a button, which should be used for the login – https://developers.facebook.com/docs/facebook-login/web/login-button?locale=en.

Afterwards the login functionality can be handled by these JavaScript API functions (https://developers.facebook.com/docs/facebook-login/web).

First, check the login status of the user to determine whether they are already logged in.

FB.getLoginStatus(function(response) {
  // response.status tells whether the user is already logged in
});

If the user is not logged in, the user logs in via the button. The following function is called:

function checkLoginState() {
  FB.getLoginStatus(function(response) {
    // handle the login response
  });
}

FB.login(function(response) {
  if (response.status === 'connected') {
    // Logged into your app and Facebook.
  } else {
    // The person is not logged into this app or we are unable to tell.
  }
});

Compared to Google, where you get a token, you get the content directly (for example the name of the logged-in person). The contents are then submitted to the backend, and the user is handed a valid session.

For testing with various Facebook accounts, you can create test accounts on Facebook’s developer site.

Mood imaging implementation

The functionality to take images of our users and determine their current mood based on these images was at first realized with the PrimeFaces PhotoCam and FileUpload components as well as with the Microsoft Emotion API. Since we were already using PrimeFaces with Maven we did not have to add any dependency to use the PhotoCam or the FileUpload component. Using the Microsoft Emotion API did not require any additional dependencies either, because we were able to reuse the HTTPClient from the org.apache.http package. The PhotoCam component was easy to integrate into our project. By adding the code snippets from the PrimeFaces showcase website, we were able to establish a basic functionality.

Similar to the example on the showcase website, we held our PhotoCam HTML widget on the client side and a callback handler as part of a Java Bean on the server side. Whenever a user clicks on the ‘capture’ button, the callback handler on the server side is called, containing the captured image data as a byte array. Then the images had to be uploaded. For this, we started to use Cloudinary. Cloudinary comes with its own Maven dependency, which makes uploading images quite easy. Uploading data is basically done with the following two lines of code.

Cloudinary cloudinary = new Cloudinary(baseUrl + username + ":" + password);
Map<String, Object> uploadResult = cloudinary.uploader().upload(data, ObjectUtils.emptyMap());

First, you need to connect to your Cloudinary backend service. A URL, a username and a password is required. Then you can call the uploader instance and upload your data e.g. as byte array. The resulting HashMap contains a persistent secure URL to the uploaded image, which can be accessed by the key ‘secure_url’.

This image URL is then handed to the Microsoft Emotion API. Similar to the Cloudinary API, you need to connect to the service by specifying a correct URL containing the credentials. Using the org.apache.http package, this is done with the URIBuilder class. Afterwards, an HTTP POST request with the image URL is built and sent. The response is a JSON object containing the current mood based on the user image, structured exactly as previously shown on the Microsoft Emotion API demonstration website.

To process the resulting mood and query a list of matching songs from Spotify, we continued as follows. The JSON object under the key ‘scores’ is handed to a function which maps the emotions to the corresponding trackvalence and trackarousal values as described beforehand.

Since executing too many requests at the backend server may lead to high network traffic and performance issues, we decided to offload further requests to the client side. The result of our own emotion mapping is stored in JavaScript variables using the PrimeFaces RequestContext as follows.

RequestContext.getCurrentInstance().execute("var arousal = " + arousal + "; var valence = "+valence+";");

These JavaScript variables are used to call the Musicovery API from JavaScript, which returns a list of tracks corresponding to the current mood. Since the Musicovery API cannot return Spotify track ids, we have to execute a search request against Spotify for each song in the list of tracks in order to obtain them. With the help of the track ids and jQuery we are then able to add the Spotify Play Button for each song found on Spotify.

Using the Spotify API and listing the tracks requires a Spotify Developer account and the user to be logged in. After setting up a Developer account you get a client id to identify your Spotify app. Additionally, a redirect URL for logging in your users with Spotify is required: after a successful login, Spotify redirects to this particular URL and appends the user-specific access token as a parameter. As shown in this Spotify login example http://jsfiddle.net/JMPerez/62wafrm7/, the redirect URL fetches the access token and sends it via cross-origin communication to the current page of our app. For security reasons, the sender's URL must be predefined. Once the access token is received, it is stored in a session object so the user does not have to log in every time.

When implementing the functionality to add images to a post or a comment, we thought it would be a relatively simple task because we had already realized the feature of taking and uploading an image with the PhotoCam component. Since the PhotoCam component was already operational, we tried to duplicate it. In doing so we discovered the disadvantages of prebuilt PrimeFaces components: they hide a lot of complexity and embedded code. For example, while debugging the PhotoCam component we realized that PrimeFaces uses webcam.js in the background and injects a JavaScript tag into the HTML file during initialization. We assume that only one PhotoCam component is supported because of this initialization process and the injected JavaScript tag.

When running two PhotoCam component instances at the same time, only the last one gets initialized correctly. After some time of retrying, debugging, researching and reading documentation about the PhotoCam component without any success, we decided to replace the current PhotoCam implementation with a pure HTML and JavaScript solution based on the example presented earlier. This approach enabled us to make progress very quickly and easily create components for shooting images when creating a new post, creating a new comment or listing suggested tracks from Spotify.

Replacing the PhotoCam component also required us to change the backend side a bit. Instead of handling a predefined callback function, we had to create our own callback function which is called when the user clicks the capture button at the frontend. It sends the image in the form of a dataURL to the backend server, where the dataURL is converted into a byte array. This way, the function to upload the images to Cloudinary did not need to be changed. So in order to extend the creation of a new post or comment with an image, we only needed to upload the images while processing the data at the backend. Adding the resulting image URL as a separate field to the data object which is stored in the database persistently assigns the image URL to the post or comment. For displaying posted or commented images we simply read the stored image URLs and added PrimeFaces graphicImage components. The graphicImage component has a ‘rendered’ attribute which allows us to skip rendering the HTML img tag if the image URL is not set.
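The dataURL-to-byte-array conversion described above can be sketched with plain JDK classes; the class and method names here are our own for illustration, not the project's actual code.

```java
import java.util.Base64;

public class DataUrlDecoder {
    // Converts a dataURL such as "data:image/png;base64,iVBOR..." into raw bytes.
    public static byte[] toBytes(String dataUrl) {
        int comma = dataUrl.indexOf(',');
        if (comma < 0) {
            throw new IllegalArgumentException("Not a valid dataURL: missing ','");
        }
        // Everything after the comma is the Base64-encoded payload.
        String base64Payload = dataUrl.substring(comma + 1);
        return Base64.getDecoder().decode(base64Payload);
    }
}
```

The resulting byte array can then be passed unchanged to the existing Cloudinary upload function.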

After our app was successfully running with Cloudinary as object storage, we wanted to set up our own Minio Docker container in order not to be dependent on the Cloudinary service. For this we used the Minio Docker image we already showed earlier. In order to deploy the Minio Docker container, we needed to upload the Docker image to our own image registry at Bluemix. Since Minio is an object storage server, it requires a lot of disk space for storing the data, so a data volume can be mounted in addition when running the Docker container. Bluemix offers the functionality to manage and create your own data volumes using the following command.

cf ic volume create minio_storage

Mounting a newly created volume into a container can be done by using the ‘volume’ option.

cf ic run --volume minio_storage:/export -p 9000:9000 registry.ng.bluemix.net/moodkoalaimages/minio:latest server

After the Docker container has started successfully, you can bind a public IP address to it in the Bluemix web interface. Once the public IP address is assigned, the web interface of the Minio server can be accessed through it on port 9000. Our Minio instance is running here. For logging into the bucket management, a username and password are required. These credentials are printed to standard output when the Minio server starts and can be accessed with the Docker command ‘logs’.

After setting up our own Minio server we wanted to access it and upload files to it. Therefore we used the Java SDK provided by Minio. Including it into our maven project was done by simply adding the particular Minio dependency. Setting up a code example is quite easy and very similar to other cloud service APIs we used before. The following example code demonstrates how to upload an image with the Java Minio API.

MinioClient minioClient = new MinioClient(minioURL, minioUsername, minioPassword);
// Check if the bucket already exists.
boolean isExist = minioClient.bucketExists(minioBucketName);
if (isExist) {
    System.out.println("Bucket already exists.");
} else {
    // Make a new bucket
    minioClient.makeBucket(minioBucketName);
}
// Upload the image to the bucket with putObject
minioClient.putObject(minioBucketName, filename, inputStream, "image/jpeg");

When trying to run this sample code, we experienced some difficulties related to the local time of the executing computer: connecting securely via HTTPS to the Minio server fails if the local clock deviates too much. We ran into this issue because of a dual-boot configuration on our development machine. A synchronized local clock is therefore a crucial requirement for connecting to a Minio server. At first we assumed that, in contrast to Cloudinary, Minio does not offer persistent URLs to access the stored objects by default. Minio allows generating pre-signed URLs, which expire after a week at the latest. Replacing the Cloudinary implementation with this option would have required storing the bucket name and the file name in the database instead of a URL, and whenever a resource with an assigned image is requested from the database, a new pre-signed URL would have to be generated. This would make caching images from older posts impossible due to the continuously changing image URLs. Fortunately, Minio offers a way to access objects via persistent URLs: setting the bucket policy to public and prefixing the objects with the string ‘public/’ gave us persistent URLs. Thus the adjustment to the Cloudinary-based implementation was very manageable.
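With a public read policy on the ‘public/’ prefix, the persistent URL is simply the server endpoint followed by the bucket and object name. A small helper can make that explicit; the class name and the assumption that the endpoint has no trailing path are ours, not from the original code.

```java
public class MinioUrlBuilder {
    // Builds a persistent URL for an object stored under the 'public/' prefix,
    // assuming the bucket policy allows anonymous reads for that prefix.
    public static String publicUrl(String endpoint, String bucket, String filename) {
        String base = endpoint.endsWith("/")
                ? endpoint.substring(0, endpoint.length() - 1)
                : endpoint;
        return base + "/" + bucket + "/public/" + filename;
    }
}
```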

In addition to the previous way of deploying our application, we also wanted to create our own Docker container. For that we created a new Docker image based on the WebSphere Liberty image on Docker Hub https://hub.docker.com/_/websphere-liberty/. Since multiple versions of this image are available, we chose the webProfile7 one because it comes with a preinstalled Java EE 7 runtime and supports features required in a typical production scenario like app security, monitoring, SSL and web caching. On top of this base image we set the environment variables required to run our application. Deploying the application to the running container can be done with the Docker ‘cp’ (copy) command: we copy a customized server.xml and the WAR file into the running WebSphere Liberty container and restart it afterwards. This way the new application is loaded and published by the WebSphere Liberty server. After deploying and running the Docker container on Bluemix, we experienced an issue we had seen before with a standard Java Liberty app on Bluemix: when pushing our application as a WAR file, an AppClassLoader exception appeared due to JAR files we included. Restarting the Java Liberty app solved the problem there. While trying to fix this error we did a bit of searching on the web, and apparently many people have experienced this behavior in different versions of the WebSphere Liberty server. After several attempts to configure the classloader properly, and a lot of time, we were not able to solve this issue.


Because of the many backing services, our app has a lot of connection URLs and credentials to access and authorize against them. For the sake of security, deployability and maintenance, it is not recommended to hard-code credentials and connection URLs or to write them unencrypted into a configuration file. It is much smarter to store those values in environment variables on the particular machine where the application is deployed. Ideally, the difference between a development and a production version of the application comes down to the configuration of the environment variables alone.

Since our app is based on Java and uses Maven as build and management tool, sourcing the connection URLs and credentials out into environment variables was quite easy. However, different development platforms set up environment variables differently, so to make sure the same variables are set on every platform, we created shell scripts for this task. Keep in mind that Java captures the environment at JVM startup and does not pick up variables that are set afterwards. Reading environment variables in Java is very straightforward: the class ‘System’ maps the environment variable names to their values and makes them accessible via the method ‘getenv’. So an environment variable can be read with the following line of code.

String variable = System.getenv("variable_name");
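Building on this, a small helper (our own addition, not from the project) can supply a fallback when a variable is unset, which is handy for local development defaults:

```java
public class Env {
    // Returns the variable's value, or the given default if it is not set or empty.
    public static String getOrDefault(String name, String defaultValue) {
        String value = System.getenv(name);
        return (value == null || value.isEmpty()) ? defaultValue : value;
    }
}
```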

Since some of our app is written in JavaScript and HTML, we needed to source connection URLs and credentials out there as well; for those parts we used a Maven replacer plugin to exchange the values. Maven itself is able to read environment variables out of the box, simply by referencing them as ${env.VARIABLE_NAME}.

Ideally, Maven should replace the values right before the Web Application Archive (WAR) is built; mapped to the Maven build lifecycle, the proper phase for the replacement would be ‘prepare-package’. Unfortunately, when using the plugin in that phase, the files with the replacements are overridden by the original ones. The workaround we used to achieve a proper environment variable replacement with Maven is to create a separate configuration folder containing template files with the tokens to be replaced. The Maven plugin is executed first and replaces the tokens with the environment variables; afterwards, the resulting files replace the currently existing source code files. Only then is the main Maven build started to build the WAR file. That way we can replace tokens with environment variables in arbitrary text files with Maven when building the application.
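For illustration, a trimmed-down configuration of such a replacer plugin might look like this. We use the coordinates of the commonly used com.google.code.maven-replacer-plugin; the phase, file paths and token names are placeholders, not the original project's values.

```xml
<plugin>
  <groupId>com.google.code.maven-replacer-plugin</groupId>
  <artifactId>replacer</artifactId>
  <version>1.5.3</version>
  <executions>
    <execution>
      <!-- run early so the replaced files exist before the main build -->
      <phase>validate</phase>
      <goals><goal>replace</goal></goals>
    </execution>
  </executions>
  <configuration>
    <!-- template with tokens in a separate config folder -->
    <file>config/templates/spotify.js</file>
    <!-- result overwrites the file used by the build -->
    <outputFile>src/main/webapp/resources/js/spotify.js</outputFile>
    <replacements>
      <replacement>
        <token>@SPOTIFY_CLIENT_ID@</token>
        <value>${env.SPOTIFY_CLIENT_ID}</value>
      </replacement>
    </replacements>
  </configuration>
</plugin>
```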

For deploying multiple instances of our application we also needed to configure multiple apps in our Spotify developer account, because one app only supports one host URL. Since we wanted to create multiple deployments with different host URLs, we had to create one Spotify app for each of them, resulting in different client ids and different environment variables to be set for the various deployments.

Docker and IBM Bluemix

Using Docker in combination with IBM Bluemix is quite easy after overcoming some issues during the setup. Bluemix offers its own registry for Docker images; in order to run Docker images in Bluemix, you have to push your local Docker images to this registry. For this the Cloud Foundry command line tool with the IBM Containers plugin has to be installed. Installing the Cloud Foundry tool from https://clis.ng.bluemix.net/ui/home.html is quite simple. At first it was a bit confusing that two different command line tools, the Bluemix CLI and the Cloud Foundry CLI, were available; each CLI comes with its own plugin to interact with Docker and the IBM Docker registry. Since the IBM Bluemix documentation suggests using the Bluemix CLI to connect with Docker, we tried that way first. Unfortunately we got stuck when setting a namespace for the IBM Docker images: when running the following command we received an error telling us that a connection to the backend service could not be established and that we should try again a few minutes later.

bx ic namespace-set moodkoalaimages

Retrying and troubleshooting the command did not help, so in order to connect to the IBM Docker registry anyway, we tried the Cloud Foundry CLI by following the instructions mentioned in this IBM blog. After overcoming these minor problems with the Bluemix CLI, we were able to interact with the IBM Docker registry using the Cloud Foundry CLI, successfully pushed our first images and started to run some containers. After pushing a few more local Docker images to the IBM Docker registry, we quickly hit the limitations of the IBM trial account: the quota of one trial account was exhausted, and we had to distribute our images across two trial accounts.

Gitlab CI

Since we had never built our own Continuous Integration pipeline before, we first informed ourselves at https://docs.gitlab.com/ce/ci/quick_start/README.html to get a general overview of the topic.

We quickly understood that the process is divided into two components: the Gitlab Runner and the Gitlab repository. The Gitlab Runner is a server which runs the jobs specified in the .gitlab-ci.yml file of the Gitlab repository. The Gitlab Runner server is available for different platforms; we chose to deploy its Docker image to IBM Bluemix so that it is available at all times. Up to this point, the Gitlab repository and the Gitlab Runner do not know each other. Connecting the two is fairly easy: when entering the Docker container interactively, you can register the runner with a Gitlab repository by entering the connection credentials available in the repository settings. Once the Gitlab Runner and the Gitlab repository are connected, the .gitlab-ci.yml file can be added to define jobs. There are many options for running jobs: you can, for example, execute a bunch of shell commands directly on the Gitlab Runner, or start a Docker container for each job or build. To determine the execution order of the jobs, you define stages and assign them to the jobs. For deploying our application to Bluemix via the Cloud Foundry CLI, we created and pushed our own Docker image to Docker Hub, available at https://hub.docker.com/r/moodkoala/cf_image/.

We wanted to use Docker for each job, and at first it seemed to work out quite well. But we quickly realized that the jobs we defined were not run on our own Gitlab Runner: apparently, by default a public Docker Gitlab Runner was connected to our Gitlab repository and ran our jobs. After detaching this public runner, our jobs were executed by our own Gitlab Runner, and suddenly all jobs which had worked fine beforehand failed. The reason was that we had not installed Docker on our Gitlab Runner. So we installed Docker and reran the pipeline, but it still did not work; it turned out that our setup was responsible for the failure. Using Docker inside Docker requires the host resource /var/run/docker.sock to be mounted into the container. Since in our case the host of our Gitlab Runner is Bluemix, and sharing this resource is not recommended for security reasons, we could not run Docker containers for each job. After realizing this, we created a new Docker image containing the pre-installed tools to build our application, run tests and deploy it via shell. After a bit of struggling with pre-defined environment variables, we got our Gitlab CI pipeline to work. The pipeline consists of four stages: the first builds our application for the Docker deployment using Maven, the second deploys the resulting WAR file to Bluemix via the Cloud Foundry CLI, and the last two stages do the same for building and deploying the application to our Java Liberty app in Bluemix. The deployed instances can be found here http://javacloudantappmoodkoala.mybluemix.net/ and here
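A minimal sketch of such a .gitlab-ci.yml is shown below. The job names, the Cloud Foundry variables and the app name are our own placeholders, and the sketch shows only one build/deploy pair of the four stages described above.

```yaml
stages:
  - build
  - deploy

build-war:
  stage: build
  script:
    - mvn clean install
  artifacts:
    paths:
      - target/*.war

deploy-bluemix:
  stage: deploy
  script:
    - cf login -a "$CF_API" -u "$CF_USER" -p "$CF_PASSWORD" -o "$CF_ORG" -s "$CF_SPACE"
    - cf push moodkoala -p target/MoodKoala.war
```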


In addition to Gitlab CI, we also wanted to get familiar with Jenkins CI. Since we had already done quite well with Docker, we wanted to stick with it here too, so our starting point was Docker Hub to check whether we could use an existing Jenkins Docker image or would have to create one ourselves. At first we used the Docker image library/jenkins:latest, which performed quite well when running locally. After running Jenkins in a container, we next wanted to know how to combine it with Gitlab. Using the Gitlab Plugin it is possible to configure a connection to the Gitlab repository; the required parameters, such as the URL of the Gitlab repository and a way of connecting to it, need to be configured. In our case we had to set https://gitlab.mi.hdm-stuttgart.de/ as the URL, and to connect to the repository we created a new Gitlab API token and handed it to Jenkins. Configuring a WebHook would have required us to enter the publicly available URL of the Jenkins server; since we were running the Jenkins Docker image locally, this was not possible at that time. Nevertheless, with this configuration we were able to create a new freestyle project with our Gitlab repository URL as source code management. In order to fetch the project files from the Gitlab repository, we needed a way to authenticate ourselves, so we created a new SSH key pair and registered it with our Gitlab repository.

Since our application uses Maven to build, we needed to install Maven via Jenkins first. After installing it, the project could be configured to run a Maven build step. In order to read environment variables during the Maven build, we had to set them in Jenkins first; environment variables can be configured globally in the Jenkins settings. For deployment to Bluemix we needed to install the Cloud Foundry CLI, which allowed us to add another build step that executes shell commands. Credentials are required for connecting and logging in to Bluemix with the Cloud Foundry CLI; to avoid hard-coding them in shell commands, we used the bindings feature to make them accessible via environment variables. Once we were done configuring our build steps, we could set an action to execute after the build. A neat feature of Jenkins in combination with the Gitlab plugin is an icon at each commit which represents the current build status; by enabling it we could see on the web page of our Gitlab repository whether the current build failed or succeeded. After that we were able to build and deploy our application with Jenkins. We were still running Jenkins locally with Docker, so we wanted to deploy it to Bluemix. After pushing the Jenkins Docker image and running a container with an IP address binding, we noticed that the container failed quite often at random times. After a bit of research we found out that the Docker image we were using was deprecated and that this was a known issue; updating to the latest image version made the random failures disappear. Once our Jenkins Docker container was successfully running on Bluemix with a public IP address assigned, we wanted to set up the WebHook at Gitlab next.

In order to set up a WebHook, we needed to create a new one at Gitlab using a URL in the form described in the documentation of the Gitlab Plugin. Since we already had Gitlab CI triggering on Git changes, we thought it would be quite nice to configure Jenkins to build and deploy our application on a time-based schedule instead. Using the cron expression ‘0 8 * * *’ we tell Jenkins to build and deploy our application every day at 8 am. The deployed instance can be found here http://javacloudanttest1223234234234.mybluemix.net/.

Discussion and conclusion

In this section we discuss our application against the requirements of a twelve-factor app, give a summary of the cloud providers and finally conclude our blog post.

Discussion Moodkoala and 12 factor app

For managing the codebase of our app we used Git as version control system. Our codebase maps one-to-one to our app, so with one codebase we are able to deploy multiple instances: for production as multiple Java Liberty apps and a Docker container, and for development on our local machines.

Regarding dependencies, our app uses Maven and therefore declares all of them in a manifest, the pom.xml file. To build the app, the only requirement is a Java runtime environment of version 1.8 or newer; the executable code is generated by the deterministic build command ‘mvn clean install’.

As suggested by the twelve-factor methodology, our app is configured via environment variables, simply because they allow a very flexible configuration: setting up a production and a development instance only requires changing their values. To set up the environment variables for the differing platforms and deployments, we created a configuration folder in our codebase containing templates and shell scripts which set them up at the deployments.

The backing services of our app like an object storage, a database, or image recognition service are addressed via a REST API which makes it easy to configure whether the attached resources are managed locally or by a third party. For example with the Minio SDK a local Minio server can be coupled to our application simply by changing the connection URL and the credentials required to access it.

The separation of the three stages build, release and run is guaranteed by using a Continuous Integration/Deployment tool like Gitlab CI or Jenkins. Though in our case both build and release stages are done using Maven. When building with Maven the required environment variables are configured in order to build the release for the particular deployment. At the run stage the execution environment will contain the predefined environment variables specialized for this instance.

Processes have to be scalable and must not rely on the environment they are running in for data storage. We never stored data in the local environment: images were stored on Cloudinary and Minio, and regular data was stored in the database. The EJB beans are all stateless, while the JSF beans are stateful. All performance-heavy operations (business logic, database operations) are executed by the EJB beans to keep the app scalable: EJB beans are highly scalable, while JSF beans are not, which is why we use JSF beans only for GUI logic.

We never started any new processes during app runtime. Deployment in the local environment was handled by the application server; on the Bluemix cloud, the process can also be restarted when a service configuration changes, and redeployment or rebooting does not take long.

With respect to port binding, our app is able to declare the port its web service will be bound to: using the configuration file server.xml, we can tell the WebSphere Liberty server on which port our app should listen. Therefore, our application could theoretically be used as a backing service by other services.
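For illustration, the relevant part of such a Liberty server.xml could look like the following; the description and port numbers are examples, not necessarily the values we used.

```xml
<server description="Moodkoala server">
  <featureManager>
    <feature>webProfile-7.0</feature>
  </featureManager>
  <!-- host="*" makes the endpoint reachable from outside the container -->
  <httpEndpoint id="defaultHttpEndpoint"
                host="*"
                httpPort="9080"
                httpsPort="9443" />
</server>
```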

In relation to dev/prod parity, we tried to keep the three gaps (time gap, personnel gap and tool gap) as small as possible. To minimize the tool gap we used the same web server on our local development machines as in production, as well as the same Docker image we later deployed. Since we added our Continuous Integration pipeline rather late to the project, we had to deploy our application manually before that, so the dev/prod parity was a bit inaccurate for a while. After setting up the pipeline, the gap between our production and development deployments was reduced to a minimum of changing environment variables.

In terms of logging, we have also adhered to the twelve-factor guidelines. Logging the output of our application to standard output comes with great flexibility: you can either redirect the output into a log file, or hand it to an analysis tool like Splunk or the Elasticsearch, Logstash and Kibana stack.

We did not have any admin processes for managing our app configuration. We used minimal in-app configuration and relied as much as we could on external services like Cloudant, which are accessible from every environment.

Comparison to other cloud providers

Before starting the development, we looked at different cloud service providers to see which of them fit our needs best. For this, Bluemix's services were compared with those of Microsoft, Google and Amazon. The following overview shows the results of this comparison:

Cloud comparison


Moodkoala offers the user a social media application with which you can write posts and see the mood they were written in; thanks to various cloud services that use artificial intelligence, the app also suggests appropriate music.

Given the abundance of cloud services, one is inclined to try out many of them, which can become a problem if one ends up depending on many different providers.
We find that there is a lot of potential behind cloud services: you can quickly build good software with them, since you do not have to program everything yourself, especially with regard to artificial intelligence. Cloud services offer a lot and are therefore a blessing for developers.

Creating this app was a very informative experience and a lot of fun, especially learning how to properly set up a project for long-term maintenance including Continuous Integration and Continuous Deployment. We really appreciate that we got to know and experience first-hand how cloud applications need to be designed and what we must pay the most attention to in order to create an application for multiple deployments, instead of creating apps which, as in other lectures, will never be deployed in any production scenario.

IoT with the Raspberry Pi – Final application – Part 3

In our final application, we have put together a solution consisting of four different modules. First, there is again the Raspberry Pi, which collects and sends the sensor data using the Python script already presented. For the final application we changed the transfer protocol to MQTT, which gives us more possibilities in different respects, but more on that later.

The main part of our application in the cloud is based on a Node.js application where various frameworks such as Express and Passport are used.

To persist the data, we use the NoSQL solution MongoDB, which is linked to our application as a Bluemix service via its manifest file.

Last but not least, we also need a broker for the chosen MQTT transmission protocol. Roughly speaking, the broker's task is to handle the transmission of data between the Raspberry Pi and the cloud application. To realize this service on Bluemix, we created a Docker image that includes the MQTT broker.

Module 1 – Raspberry Pi

To determine the sensor data, the Raspberry Pi continues to use the Python script already presented. In order to transmit the sensor data via the MQTT protocol, we integrated the Eclipse Paho MQTT client into the project.

Module 2 – MQTT broker

MQTT (Message Queue Telemetry Transport)

MQTT is a lightweight transmission protocol for machine-to-machine (M2M) communication. To transfer data, the protocol follows the publish/subscribe principle. In this context there are clients taking on different roles: on the one hand the publisher, which provides and sends messages, and on the other hand the subscriber, which receives the provided messages. Communication takes place via a topic with a unique id; you can think of the topic as a bulletin board with a unique inventory number. For example, there may be a publisher who sends its data to a particular topic and an undefined set of subscribers who have subscribed to this topic to receive the data.

MQTT broker

An MQTT broker is the central component in MQTT communication. It manages the topics and the related messages, regulates access to these topics and takes care of data security as well as the Quality of Service levels. The Quality of Service can be set to three different levels. The lowest level, 0, means there is no guarantee that the message arrives at the receiver; this variant produces the least overhead during transmission and follows the fire-and-forget principle. At level 1, the message is delivered at least once, and at level 2 it is ensured that the message arrives exactly once.

Setting up our own MQTT-Broker

Instead of using the Bluemix services to register the devices, we wanted to use our own solution to be more flexible. For this step we used this Docker image, which includes the open-source Eclipse Mosquitto MQTT broker. The Docker image builds on Alpine Linux, a distribution described as small, fast and secure.

For more security, and to prevent just anybody from sending us data, an auth plugin for the MQTT broker is included in the Docker image. The auth plugin is written in C and uses a C library for handling requests to MongoDB. The Bluemix service for MongoDB requires authentication via a certificate; since the auth plugin does not support this, we had to add this step ourselves and modify the Docker image for our needs.

In order to be able to transfer data, users first have to register with our cloud application and create a sensor. From this user data a topic is generated to which the user is allowed to send data. This topic is structured as follows:


The auth plugin expects the password for authentication in the form of PBKDF2 (Password-Based Key Derivation Function 2), a standard function to derive a key from a password.


The various parts of this key string are separated by the separator $. The first part is the start marker, followed by the name of the hash function. The third part is the number of iterations, followed by the salt; the last part is the hashed password. We adapted the registration of our application so that the user password is stored in the database in this form.
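Producing a hash in this five-part layout can be sketched with the JDK's built-in PBKDF2 implementation. The field layout follows the description above, but the exact label and the Base64 encoding of salt and hash may differ from what the auth plugin expects, so treat this as an illustration rather than a drop-in replacement.

```java
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class Pbkdf2Hasher {
    // Produces a string of the form PBKDF2$sha256$<iterations>$<salt>$<hash>.
    public static String hash(String password, int iterations) {
        try {
            byte[] salt = new byte[16];
            new SecureRandom().nextBytes(salt);
            PBEKeySpec spec = new PBEKeySpec(password.toCharArray(), salt, iterations, 256);
            byte[] key = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                                         .generateSecret(spec).getEncoded();
            Base64.Encoder enc = Base64.getEncoder();
            return "PBKDF2$sha256$" + iterations + "$"
                    + enc.encodeToString(salt) + "$" + enc.encodeToString(key);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException("PBKDF2 not available", e);
        }
    }
}
```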

For the cloud application, we created a so-called superuser, who has the rights to subscribe to every topic. In the application, this user connects and subscribes to the topic “client/#”; the hash sign is a wildcard for all possible subtopics.
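The wildcard semantics can be illustrated with a small matcher. This is a simplified sketch of the standard MQTT rules, handling ‘#’ (all remaining levels) and ‘+’ (exactly one level) but skipping edge cases such as ‘$’-prefixed system topics; it is not the broker's actual implementation.

```java
public class TopicMatcher {
    // Returns true if the concrete topic matches the subscription filter.
    public static boolean matches(String filter, String topic) {
        String[] f = filter.split("/");
        String[] t = topic.split("/");
        for (int i = 0; i < f.length; i++) {
            if (f[i].equals("#")) {
                return true; // '#' matches everything from this level on
            }
            if (i >= t.length) {
                return false; // topic is shorter than the filter
            }
            if (!f[i].equals("+") && !f[i].equals(t[i])) {
                return false; // literal level does not match
            }
        }
        return f.length == t.length;
    }
}
```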

To load the Docker image into Bluemix, you have to install the Bluemix container registry plug-in. This is done via the command-line interface.

bx plugin install container-registry -r Bluemix

You now have two options: either build the Docker image on your local computer and push it to Bluemix, or build the image, as we did, directly in the Bluemix cloud.

bx ic build -t registry.eu-de.bluemix.net/my_namespace/my_image:v1 .

At Bluemix, we also tested the delivery pipeline for the Docker image. Unfortunately, compiling the image in the pipeline failed. Since we only had to do this step once, we did not investigate the problem further. Afterwards, log in to Bluemix and create a new container; the created image should now be selectable. You have to open the required ports, if this is not already done in the Dockerfile, and request a fixed IP address.


Module 3 – MongoDB as Service

We used MongoDB in our main application in combination with Mongoose. Establishing the connection to MongoDB via Mongoose within the Node.js application was somewhat tedious. The credentials supplied by the MongoDB service are designed to connect automatically to the admin database. However, this database ideally shouldn't be used in production. Mongoose does not directly provide the possibility to switch to another database after connecting, so you have to pass the desired database name into the connection call along with additional settings. Without these settings the connection call would be much shorter.

// load required modules
var cfenv = require('cfenv');
var mongoose = require('mongoose');

// load local VCAP configuration and service credentials
var vcapLocal;
try {
    vcapLocal = require('./vcap-local.json');
} catch (e) { }

const appEnvOpts = vcapLocal ? { vcap: vcapLocal } : {};
const appEnv = cfenv.getAppEnv(appEnvOpts);

// mongoose - mongoDB connection
var mongoDBUrl, mongoDBOptions = {};
var mongoDBCredentials = appEnv.services["compose-for-mongodb"][0].credentials;

if (mongoDBCredentials) {
    // the CA certificate is delivered base64-encoded by the service
    var ca = [new Buffer(mongoDBCredentials.ca_certificate_base64, 'base64')];
    mongoDBUrl = mongoDBCredentials.uri;
    mongoDBOptions = {
        auth: {
            authSource: 'admin'
        },
        mongos: {
            ssl: true,
            sslValidate: true,
            sslCA: ca,
            poolSize: 1,
            reconnectTries: 1
        },
        promiseLibrary: global.Promise
    };
} else {
    console.error("No MongoDB connection configured!");
}

// connect to our database (iot) instead of the default admin database
var db = mongoose.createConnection(mongoDBUrl, "iot", mongoDBOptions);

While testing the application, we apparently filled the 1 GB database completely, because we could not save any new data anymore, even though the Bluemix backend reported only 0.035 GB of used storage capacity. After clearing the database, data could be saved again.


Module 4 – Cloud application

We created and launched a Cloud Foundry app for Node.js at Bluemix for the main application. The application allows users to register, create a sensor and then download a configuration file for the Raspberry Pi script. The user must enter his or her correct password into the configuration file. If the sensor then successfully sends data to the application, the data can be viewed in a chart.

The application relies on common JavaScript packages which are loaded via the package manager npm and managed in the package.json file.

  • Express is a web framework for Node.js and performs important tasks such as routing and has a view system that supports a variety of template engines.
  • Passport is an express compatible authentication middleware for Node.js. Through a plugin for Passport we implemented the user registration in our application.
  • Instead of using Eclipse Paho as MQTT client as on the Raspberry Pi, we use MQTT.js in the application. MQTT.js offers a wider range of functions, a simpler API and receives updates more often.


We use Pug as the template engine in the frontend. It is a very simple view engine and recommended for small projects. If we were to expand the project further, we would exchange it for a view engine with a larger range of functions; reading out nested arrays wasn't always easy with Pug.

For the basic layout we used Bootstrap 4, so the application is also responsive; however, reading the diagrams on a smartphone is not ideal. Bootstrap 4 was still in alpha status during the project, but this status was already so far advanced that we were able to work productively with it. In the meantime, Bootstrap 4 has moved into the beta phase.

Last but not least, the graphical heart of our application: we use Chart.js, which offers a large range of configuration options and a variety of different chart types. Its declarative approach makes it easy to use, and the huge community makes it easy to find answers to additional questions. Via a modal we offer adjustment options such as limiting the time period of the displayed sensor data; these settings are currently stored in a cookie.
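To give a rough idea of Chart.js's declarative approach, here is a minimal configuration object for a sensor-value line chart; the labels, dataset name and values are invented for illustration and not taken from the project:

```javascript
// Hypothetical Chart.js configuration for plotting sensor readings over time.
// Chart type, labels and data are illustrative; in the browser this object
// would be passed to `new Chart(canvasContext, sensorChartConfig)`.
const sensorChartConfig = {
  type: 'line',
  data: {
    labels: ['10:00', '10:05', '10:10'],   // sample timestamps
    datasets: [{
      label: 'Temperature (°C)',
      data: [21.4, 21.9, 22.3]
    }]
  },
  options: {
    responsive: true
  }
};
```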

Build tool

As build tool we used Webpack. For development we additionally relied on the plugin webpack-dev-middleware. It provides a watch mode that recompiles the JavaScript and Sass files as soon as changes are made to them. The plugin does not write the files directly to the hard disk but keeps them in memory, which increases the speed. Before the application is pushed into the cloud, the assets must be compiled with Webpack.


As repository we used the GitLab of the HdM, where we set up a delivery pipeline. As soon as a commit is pushed to the repository, its contents are deployed into the cloud. For this, a YAML file named .gitlab-ci.yml is created in the application, in which the required commands are stored:

stages:
  - test
  - deploy

cache:
  paths:
    - node_modules/

test:
  image: node:6
  stage: test
  script:
   - npm install
   - npm test

deploy:
  image: ruby:2.3
  type: deploy
  script:
    - apt-get update -yq
    - apt-get install -y ruby-dev
    - gem install dpl -v 1.8.39
    - dpl --provider=bluemixcloudfoundry --username=$BLUEMIX_USER --password=$BLUEMIX_PASSWORD --organization=$BLUEMIX_ORG --space=$BLUEMIX_SPACE --api=https://api.ng.bluemix.net --skip-ssl-validation
  only:
    - master

The variables used here, such as $BLUEMIX_USER, are defined in the backend of GitLab.