Month: March 2017
Livestreaming with libav* – Tutorial (Part 1)
Livestreaming is a big deal in video today, yet there aren’t that many content creation tools to choose from. YouTube, Facebook and Twitter are pushing hard to enable their users to stream vlogging-style content live from their phones with proprietary apps, and OBS is used for Let’s Plays and Twitch streams. But when you want to stream…
Of Apache Spark, Hadoop, Vagrant, VirtualBox and IBM Bluemix Services – Part 6 – Apache Spark and/vs Apache Hadoop?
At the beginning of this article series we introduced the core concepts of Hadoop and Spark in a nutshell. Both Apache Spark and Apache Hadoop are frameworks for efficiently processing large amounts of data on computer clusters. This raises the question of how they differ from or relate to each other, and opinions on this seem to be divided. In…
Of Apache Spark, Hadoop, Vagrant, VirtualBox and IBM Bluemix Services – Part 5 – Spark applications in PIA project
The main reason for choosing Spark was a second project which we developed for the course “Programming Intelligent Applications”. For this project we wanted to implement a framework that is able to monitor important events (e.g. terror attacks, natural disasters) around the world through Twitter. To separate important tweets from others we use Latent Dirichlet Allocation…
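A minimal sketch of that idea (not the project’s actual code), assuming tweets arrive as one text per line and using Spark MLlib’s LDA; the input path, vocabulary setup and topic count are illustrative assumptions:

import org.apache.spark.ml.clustering.LDA
import org.apache.spark.ml.feature.{CountVectorizer, Tokenizer}
import org.apache.spark.sql.SparkSession

object TweetTopicsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("TweetTopicsSketch").getOrCreate()

    // Hypothetical input: one tweet text per line.
    val tweets = spark.read.textFile("tweets.txt").toDF("text")

    // Turn raw tweet text into term-count vectors.
    val tokens = new Tokenizer().setInputCol("text").setOutputCol("words").transform(tweets)
    val vectors = new CountVectorizer().setInputCol("words").setOutputCol("features")
      .fit(tokens).transform(tokens)

    // Fit an LDA topic model; the number of topics (k = 10) is an assumption.
    val model = new LDA().setK(10).setMaxIter(20).fit(vectors)

    // Show the top terms per topic, which could then be used to flag "important" tweets.
    model.describeTopics(5).show(false)
    spark.stop()
  }
}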
Of Apache Spark, Hadoop, Vagrant, VirtualBox and IBM Bluemix Services – Part 4 – Big Data Engineering
Our objective in this project was to build an environment that could be used in practice. So we set up a virtual Hadoop test cluster with virtual machines. Our production environment was a Hadoop cluster in the IBM Bluemix cloud, which we could use for free with our student accounts. We developed and tested the logic of…
Building an HdM Alexa Skill – Part 4
We present our own HdM Alexa Skill and share the experience we gained throughout this project. This time: Automating tests and deployment with Continuous Integration via Jenkins.
Of Apache Spark, Hadoop, Vagrant, VirtualBox and IBM Bluemix Services – Part 3 – What is Apache Spark?
Apache Spark is a framework for fast processing of large amounts of data on computer clusters. Spark applications can be written in Scala, Java, Python or R and can be executed in the cloud or on the Hadoop YARN or Mesos cluster managers. It is also possible to run Spark applications standalone, i.e. locally on a single computer.…
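As a rough illustration of the standalone (local) mode mentioned above (a sketch, not part of the article series), a Spark application can set its master to “local[*]” and run entirely on one machine; the file name and word-count logic are placeholders:

import org.apache.spark.sql.SparkSession

object LocalWordCountSketch {
  def main(args: Array[String]): Unit = {
    // "local[*]" runs Spark on all cores of the local machine, without a cluster manager.
    val spark = SparkSession.builder
      .appName("LocalWordCountSketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical input file; on a cluster this would typically live in HDFS.
    val counts = spark.read.textFile("input.txt")
      .flatMap(_.split("\\s+"))
      .groupByKey(identity)
      .count()

    counts.show()
    spark.stop()
  }
}

The same jar could be submitted to a YARN or Mesos cluster by changing only the master setting, e.g. via spark-submit --master yarn.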
Of Apache Spark, Hadoop, Vagrant, VirtualBox and IBM Bluemix Services – Part 2 – Apache Hadoop Ecosystem
In our project we primarily implemented Spark applications, but we also used components of Apache Hadoop such as the Hadoop Distributed File System and the cluster manager Hadoop YARN. For the discussion in the last part of this article series it is moreover necessary to understand Hadoop MapReduce as a point of comparison to Apache Spark. Because of this we…
Of Apache Spark, Hadoop, Vagrant, VirtualBox and IBM Bluemix Services – Part 1 – Introduction
As part of the lecture “System Engineering and Management” in the winter semester 2016/17, we ran a project with Apache Spark and the Apache Hadoop Ecosystem.
Building an HdM Alexa Skill – Part 3
We present our own HdM Alexa Skill and share the experience we gained throughout this project. This time: Developing the skill using Test-driven Development.
Choosing the correct build system for your game project
In this blog entry we take a look at Travis CI, Jenkins, GitLab CI and Buildbot, and evaluate their benefits and downsides when used to build a content-heavy project (e.g. a game).