{"id":2633,"date":"2017-08-28T01:20:27","date_gmt":"2017-08-27T23:20:27","guid":{"rendered":"https:\/\/blog.mi.hdm-stuttgart.de\/?p=2633"},"modified":"2023-06-08T15:41:30","modified_gmt":"2023-06-08T13:41:30","slug":"how-to-build-an-alexa-skill-to-get-information-about-your-timetable","status":"publish","type":"post","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/08\/28\/how-to-build-an-alexa-skill-to-get-information-about-your-timetable\/","title":{"rendered":"How to build an Alexa Skill to get information about your timetable"},"content":{"rendered":"<h1><span style=\"font-weight: 400;\"><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/Echo-and-Echo-Dot.jpg\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"2643\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/08\/28\/how-to-build-an-alexa-skill-to-get-information-about-your-timetable\/echo-and-echo-dot\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/Echo-and-Echo-Dot.jpg\" data-orig-size=\"3000,1689\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"Amazon Echo and Echo Dot\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/Echo-and-Echo-Dot-1024x577.jpg\" class=\"alignnone size-large wp-image-2643\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/Echo-and-Echo-Dot-1024x577.jpg\" alt=\"\" width=\"656\" height=\"370\" 
srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/Echo-and-Echo-Dot-1024x577.jpg 1024w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/Echo-and-Echo-Dot-300x169.jpg 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/Echo-and-Echo-Dot-768x432.jpg 768w\" sizes=\"auto, (max-width: 656px) 100vw, 656px\" \/><\/a><\/span><\/h1>\n<h1><span style=\"font-weight: 400;\">Introduction<\/span><\/h1>\n<p>With information technology today we can easily find almost any kind of information we are interested in. Whether you want to know what the weather will be like tomorrow or how to bake your favorite cake, you can look it up. But for users it is becoming more and more important to get information quickly and in a comfortable way. Google, for example, offers exactly that: when using Google\u2019s search engine, you can either type in what you are searching for or simply say it. With spoken language the feature is easier to use and the response comes quicker.<br \/>\n<!--more--><br \/>\nAs a team we decided that there is a lot of potential in applications that can be controlled by spoken language. That is why we built an Alexa Skill for HdM students that provides information about their personal timetable. Every HdM student owning an Amazon Alexa device can ask Alexa about his or her personal timetable.<\/p>\n<p>The benefits of this skill are clear: students get a more comfortable way to obtain such information. A big advantage is that it is no longer necessary to use a computer, smartphone or tablet, visit the web page and enter personal credentials just to look something up. That approach involves many steps, especially considering that you sometimes only want a small piece of information. 
With this skill, students can just ask Alexa almost anything about their timetable without taking the previously described steps.<\/p>\n<h1><span style=\"font-weight: 400;\">General Information<\/span><\/h1>\n<p>Alexa is Amazon\u2019s voice service. To use it, Amazon has released two devices, the Amazon Echo and the Amazon Echo Dot, with which you can interact with the Alexa Voice Service (AVS). Using spoken language you can play music, for example, or ask about the latest sports results.<br \/>\nAmazon also enables you, as a developer, to use AVS to add Alexa to your own devices, which means you are not strictly bound to Amazon\u2019s hardware.<br \/>\nOut of the box, a device provides standard applications such as playing music or retrieving weather information. As a developer you can add further applications by developing an Amazon Alexa Skill (AAS), an extension to the default skill set.<\/p>\n<p>To build an Alexa Skill you need to sign up for an Amazon Developer Account. The Alexa Skill consists of three main parts: the logical part, intents with slots, and the Alexa App, which provides configuration options to the user.<\/p>\n<p>The logical part is the typical piece of software that contains all links between the Alexa API and (in this example) the HdM API.<\/p>\n<h1><span style=\"font-weight: 400;\">Authentication in Alexa Skills<\/span><\/h1>\n<p>Many skills require the user to authenticate before using them. To provide the login credentials the user needs the Alexa App installed on a smartphone, where all available settings can be configured.<\/p>\n<p>We, as developers, have to build a website with input fields for handling the user credentials. During the authentication process an authentication token is generated by the request processing service.<\/p>\n<p>The user has to activate the skill in the Alexa App Skill Store. 
After the activation the app will show a tab for the skill configuration. When the user activates the skill on the Echo device, it calls the website with the configured credentials to obtain the token. This token is then used in every request sent by the Alexa Skill.<\/p>\n<h1><span style=\"font-weight: 400;\">Create an Alexa Skill on Amazon Backend<\/span><\/h1>\n<p>Once you have signed up, you need to create a new skill in the Amazon backend and configure it to get the skill working. The configuration starts with the display and invocation name of the skill.<\/p>\n<p><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/skill_01.png\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"2636\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/08\/28\/how-to-build-an-alexa-skill-to-get-information-about-your-timetable\/skill_01\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/skill_01.png\" data-orig-size=\"1042,724\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"Skill Information\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/skill_01-1024x711.png\" class=\"alignleft size-full wp-image-2636\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/skill_01.png\" alt=\"\" width=\"1042\" height=\"724\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/skill_01.png 1042w, 
https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/skill_01-300x208.png 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/skill_01-768x534.png 768w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/skill_01-1024x711.png 1024w\" sizes=\"auto, (max-width: 1042px) 100vw, 1042px\" \/><\/a><\/p>\n<p>Amazon Echo works with so-called intents and slots. An intent is something like a function, where slots are its parameters. Echo recognizes which intent a user is asking for and automatically fills the slots. All the information needed to perform this mapping has been configured by us as intents with their corresponding slots. At the bottom of the same page you define the speech commands (sample utterances) that are mapped to the intents.<\/p>\n<p><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/skill_02.png\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"2637\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/08\/28\/how-to-build-an-alexa-skill-to-get-information-about-your-timetable\/skill_02\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/skill_02.png\" data-orig-size=\"1031,679\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"Interaction Model\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/skill_02-1024x674.png\" class=\"alignleft size-full wp-image-2637\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/skill_02.png\" alt=\"\" 
width=\"1031\" height=\"679\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/skill_02.png 1031w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/skill_02-300x198.png 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/skill_02-768x506.png 768w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/skill_02-1024x674.png 1024w\" sizes=\"auto, (max-width: 1031px) 100vw, 1031px\" \/><\/a><\/p>\n<p>To test the skill it\u2019s sufficient to configure the host where the skill is hosted. In our case the skill was hosted on AWS Lambda.<\/p>\n<p><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/skill_03.png\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"2638\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/08\/28\/how-to-build-an-alexa-skill-to-get-information-about-your-timetable\/skill_03\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/skill_03.png\" data-orig-size=\"1023,830\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"Configuration\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/skill_03.png\" class=\"alignleft size-full wp-image-2638\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/skill_03.png\" alt=\"\" width=\"1023\" height=\"830\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/skill_03.png 1023w, 
https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/skill_03-300x243.png 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/skill_03-768x623.png 768w\" sizes=\"auto, (max-width: 1023px) 100vw, 1023px\" \/><\/a><\/p>\n<h1><span style=\"font-weight: 400;\">Implementation<\/span><\/h1>\n<h2><span style=\"font-weight: 400;\">Development with Alexa Skills Kit for Java<\/span><\/h2>\n<p>Amazon provides a library for skill development with Java. It handles the parsing of the incoming and outgoing JSON strings.<\/p>\n<p>This allows us to focus on the skill\u2019s logic. The library provides an interface named <em>Speechlet<\/em> with four predefined methods which are called by the framework.<\/p>\n<pre class=\"prettyprint lang-java\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">@Override\npublic SpeechletResponse onIntent(IntentRequest request, Session session) throws SpeechletException {\n    \/\/ TODO Auto-generated method stub\n    return null;\n}\n\n@Override\npublic SpeechletResponse onLaunch(LaunchRequest arg0, Session session) throws SpeechletException {\n    \/\/ TODO Auto-generated method stub\n    return null;\n}\n\n@Override\npublic void onSessionEnded(SessionEndedRequest arg0, Session session) throws SpeechletException {\n    \/\/ TODO Auto-generated method stub\n}\n\n@Override\npublic void onSessionStarted(SessionStartedRequest arg0, Session session) throws SpeechletException {\n    \/\/ TODO Auto-generated method stub\n}<\/pre>\n<p>The methods <em>onIntent<\/em> and <em>onLaunch<\/em> are the most important ones for our skill.<\/p>\n<p>As the name says, the <em>onLaunch<\/em> method is called when a user starts the skill. 
Here we simply generate a greeting for the user and ask how the skill (in the \u2018person\u2019 of Alexa) can help.<\/p>\n<pre class=\"prettyprint lang-java\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">@Override\npublic SpeechletResponse onLaunch(LaunchRequest arg0, Session arg1) throws SpeechletException {\n    PlainTextOutputSpeech speech = new PlainTextOutputSpeech();\n    speech.setText(\"Willkommen im Stundenplan der Hochschule der Medien. Wie kann ich helfen?\");\n\n    return SpeechletResponse.newAskResponse(speech, createRepromptSpeech());\n}\n<\/pre>\n<p>The method <em>onIntent<\/em> is called when a user invokes an intent of the skill. Information about the intent is passed in the request parameter, so we can simply read the intent name to determine which function of our code should be called.<\/p>\n<pre class=\"prettyprint lang-java\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">private static final String INTENT_LECTUREOFDAY = \"lectureOfDay\";\n\n@Override\npublic SpeechletResponse onIntent(IntentRequest request, Session session) throws SpeechletException {\n    String intentName = request.getIntent().getName();\n\n    if (intentName != null) {\n\n        switch (intentName) {\n        case INTENT_LECTUREOFDAY:\n            return handleLectureOfDay(request.getIntent(), session);\n        case INTENT_STOP:\n            return this.handleStopIntent();\n        }\n    }\n    throw new SpeechletException(\"Invalid intent\");\n}<\/pre>\n<p>Within the intent we can also read the slot values. For this, the skill library\u2019s class <em>Intent<\/em> provides the method <em>getSlot(String slotName)<\/em>. 
The slot itself has a method <em>getValue<\/em>.<\/p>\n<pre class=\"prettyprint lang-java\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">intent.getSlot(\"MySlot\").getValue()<\/pre>\n<h2><span style=\"font-weight: 400;\">HdM Timetable API<\/span><\/h2>\n<p>In general the HdM Timetable API only covers the current week, so it is not possible to get information about upcoming or previous weeks.<\/p>\n<p>When calling the API URL (http:\/\/www.hdm-stuttgart.de\/studenten\/stundenplan\/pers_stundenplan\/stundenplanfunktionen\/all_in_one_sql\/timetable) with basic authentication, a JSON response like this is returned:<\/p>\n<pre class=\"prettyprint lang-json\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">{\n\"lectures\":[\n    {\n        \"vorlesung_id\":\"5214854\",\n        \"sgblock_vorlesung_semester_id\":\"5170336\",\n        \"name\":\"Software Development for Cloud Computing\",\n        \"type\":\"regular\",\n        \"tag_id\":4,\n        \"zeit_id\":5,\n        \"edvnr\":\"113479a\",\n        \"raum\":\"136\",\n        \"findet_statt\":1\n    },\n    {\n        \"vorlesung_id\":\"5214854\",\n        \"sgblock_vorlesung_semester_id\":\"5170336\",\n        \"name\":\"Software Development for Cloud Computing\",\n        \"type\":\"regular\",\n        \"tag_id\":4,\n        \"zeit_id\":6,\n        \"edvnr\":\"113479a\",\n        \"raum\":\"135\",\n        \"findet_statt\":1\n    }\n    ]\n}\n<\/pre>\n<p>The example shows the lecture <em>Software Development for Cloud Computing<\/em>, which takes place in the 5<sup>th<\/sup> and 6<sup>th<\/sup> lecture block on a Thursday. That means from 2:15 PM to 3:45 PM and from 4 PM to 5:30 PM. The first part of the lecture is in room 136, the second in room 135. As you can see, both of them take place (if you make the same request e.g. 
during the time of holidays you will get a \u20180\u2019 for \u2018findet_statt\u2019 instead). Finally, \u2018edvnr\u2019 is the university\u2019s internal unique ID for a lecture.<\/p>\n<p>A few more fields are provided, but due to missing documentation we unfortunately cannot give a clear explanation for them.<\/p>\n<p>For our skill we used these fields:<\/p>\n<ul>\n<li>name ([string] used directly by the Alexa Voice Service)<\/li>\n<li>tag_id ([int | 1-7] \u2018day of week\u2019, mapped bidirectionally to the pronounced version)<\/li>\n<li>zeit_id ([int | 1-8] to separate a day into lecture blocks)<\/li>\n<li>raum ([string] also used directly)<\/li>\n<li>findet_statt ([int | 0-1] used as a boolean indicating whether the lecture takes place)<\/li>\n<\/ul>\n<h2><span style=\"font-weight: 400;\">Test driven development with Java<\/span><\/h2>\n<p>Our Alexa Skill &#8211; the code &#8211; runs in the cloud, where we are not able to simply debug it. That is why unit tests become more important to ensure the correct functionality of our software. Thanks to the encapsulation we chose, the components were easy to test.<\/p>\n<p>At first we planned our architecture with its components, classes and their public methods. Then we wrote the unit tests which defined the expected results. The next task was to implement the logic until all tests passed.<\/p>\n<h3><span style=\"font-weight: 400;\">Request-Management<\/span><\/h3>\n<p>Our Alexa skill can handle a set of requests. For example, you can get information about which lectures you have on a specific day or when a lecture is being held. To get that kind of information programmatically we provide an interface <em>IRequestManager<\/em>. The functions within this interface have been implemented in a concrete class <em>LectureRequestManager<\/em>. 
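<\/p>\n<p>To illustrate, a minimal sketch of such an interface could look like the following. Note that this is not our exact source code: apart from <em>getLectures<\/em>, the method names are hypothetical.<\/p>\n<pre class=\"prettyprint lang-java\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">\/\/ Hypothetical sketch of the request manager interface.\npublic interface IRequestManager {\n    \/\/ All lectures held by the given lecturer.\n    List&lt;Lecture&gt; getLectures(String lecturer);\n\n    \/\/ All lectures on the given day (tag_id: 1 = Monday ... 7 = Sunday).\n    List&lt;Lecture&gt; getLecturesOfDay(int dayId);\n}<\/pre>\n<p>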
A rough, incomplete architecture diagram is shown below:<\/p>\n<p><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/RequestManager_UML.png\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"2639\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/08\/28\/how-to-build-an-alexa-skill-to-get-information-about-your-timetable\/requestmanager_uml\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/RequestManager_UML.png\" data-orig-size=\"474,520\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"RequestManager UML\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/RequestManager_UML.png\" class=\"wp-image-2639 alignleft\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/RequestManager_UML.png\" alt=\"\" width=\"474\" height=\"520\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/RequestManager_UML.png 474w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/RequestManager_UML-273x300.png 273w\" sizes=\"auto, (max-width: 474px) 100vw, 474px\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>For each request sent to the HdM timetable system\u2019s API, you get a JSON response containing all lectures of the personal timetable.<\/p>\n<p>All needed information is extracted from this 
JSON data and stored in separate <em>Lecture<\/em> objects. These objects are based on a custom <em>Lecture<\/em> class.<\/p>\n<p>Each method in the class <em>LectureRequestManager<\/em> works on a list of Lecture objects and filters out only the lectures that match certain criteria. An example of such a method provided by <em>LectureRequestManager<\/em> is given in the code below:<\/p>\n<pre class=\"prettyprint lang-java\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">@Override\npublic List&lt;Lecture&gt; getLectures(String lecturer) {\n    this.loadLectures();\n\n    final List&lt;Lecture&gt; result = this.lectures.stream()\n                        .filter(lecture -&gt; lecture.getLecturer().equals(lecturer))\n                        .collect(Collectors.toList());\n    return result;\n}<\/pre>\n<p>The filter criteria are obtained from the method\u2019s parameters. This method takes the list of lectures and keeps only those whose lecturer property matches the given lecturer name. The result is returned as a new list.<\/p>\n<p>To give a correct answer for every kind of request, all functions in the class <em>LectureRequestManager<\/em> have been implemented in a similar way.<\/p>\n<h1>Hosting an Alexa Skill on AWS Lambda<\/h1>\n<p>While an Alexa Skill could be hosted on any server that accepts HTTPS requests, we decided to host it on AWS Lambda, a service provided by Amazon itself. A Lambda function follows the serverless computing model, meaning the code is not running all the time: at the moment of a request the server starts our application, handles the request and stops the application again. This way the servers can handle more applications, because resources are only used when they are really needed.<\/p>\n<p>In the AWS developer console we had to configure some values. 
For example, we had to choose a runtime (Java 8), the fully qualified name of the entry point in the source code and the trigger that starts the Lambda function (Alexa Skills Kit).<\/p>\n<p>After the configuration of the AWS Lambda function we needed to implement the entry point in our Java code.<br \/>\nThe Alexa Skills Kit provides a class named <em>SpeechletRequestStreamHandler<\/em>. We only needed to create a class that extends it, with a constructor that calls the super constructor.<br \/>\nIn the configuration of the Lambda function we had to set the fully qualified name of this class as the entry point; otherwise AWS Lambda was not able to start our piece of software.<br \/>\nLast but not least we had to compile our code and upload the JAR file to the AWS developer console.<br \/>\nFor compiling we had to ensure that all dependencies are included in the JAR file.<br \/>\nTherefore we started the Maven build process with the following parameters:<\/p>\n<pre class=\"prettyprint lang-batchfile\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">mvn clean assembly:assembly -DdescriptorId=jar-with-dependencies package<\/pre>\n<h1>Sample video<\/h1>\n<p><span class=\"embed-youtube\" style=\"text-align:center; display: block;\"><iframe loading=\"lazy\" class=\"youtube-player\" width=\"640\" height=\"360\" src=\"https:\/\/www.youtube.com\/embed\/vDq3zApHRb0?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent\" allowfullscreen=\"true\" style=\"border:0;\" sandbox=\"allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox\"><\/iframe><\/span><\/p>\n<h1>Conclusion<\/h1>\n<p>One problem we faced during development was that we did not know exactly what the values of some fields in the JSON data of the HdM Timetable API mean (see \u201cHdM Timetable API\u201d). 
For some fields we had to guess what the values really mean (e.g. the value behind the field \u201czeit_id\u201d), because the API we used was undocumented. For example, when asking Alexa about lectures on a given day &#8211; say Wednesday &#8211; she responded with the lectures of Tuesday every time. Eventually we found out what was going wrong: the implementer of the API had decided not to start the week with 0 for Sunday (American convention) or Monday &#8211; instead Monday was mapped to \u20181\u2019, increasing every day up to Sunday, which ends with \u20187\u2019.<\/p>\n<p>If you are interested in developing a piece of software that uses data provided by HdM, make sure to get access to a complete and well-documented API (maybe there is a more suitable way, such as the HdM-App API or something similar). Otherwise you will end up wasting your time fixing problems which could have been avoided by a clearly defined interface to communicate with.<br \/>\nThe Alexa Voice Service has great potential &#8211; but never expect a solution for each and everything or too many open opportunities from it (e.g. no custom strings).<\/p>\n<p>It is also pretty hard to get reliable information from Amazon for developing a skill. Sometimes there is more than one documentation or instruction, and they do not match each other\u2019s way of fixing a problem. There was also a high chance of getting outdated information.<\/p>\n<p>All in all it was a nice project and we were really proud when Alexa started to react to our questions with suitable answers.<\/p>\n<hr>\n<p>Title Image: Amazon Press Media (http:\/\/phx.corporate-ir.net\/phoenix.zhtml?c=176060&amp;p=irol-imageproduct41)<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introduction With information technology today we can easily get any kind of information someone is interested in. 
Whether you want to know how the weather will be tomorrow or how to cook your favorite cake, you can find out almost anything today. But as a user it\u2019s getting more important to gain information quickly, and [&hellip;]<\/p>\n","protected":false},"author":491,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[120,650,22],"tags":[],"ppma_author":[730],"class_list":["post-2633","post","type-post","status-publish","format-standard","hentry","category-cloud-technologies","category-scalable-systems","category-student-projects"],"aioseo_notices":[],"jetpack_featured_media_url":"","jetpack-related-posts":[{"id":3747,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2018\/07\/23\/rust-safety-during-and-after-programming\/","url_meta":{"origin":2633,"position":0},"title":"RUST &#8211; Safety During and After Programming","author":"Alexander Georgescu","date":"23. 
July 2018","format":false,"excerpt":"Summary about Rust and how RustBelt has achieved to prove its safety with mathematical concepts.","rel":"","context":"In &quot;Secure Systems&quot;","block_context":{"text":"Secure Systems","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/system-designs\/secure-systems\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/07\/LambdaRustSyntax-1.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/07\/LambdaRustSyntax-1.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/07\/LambdaRustSyntax-1.png?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/07\/LambdaRustSyntax-1.png?resize=700%2C400&ssl=1 2x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/07\/LambdaRustSyntax-1.png?resize=1050%2C600&ssl=1 3x"},"classes":[]},{"id":4309,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2018\/09\/10\/building-a-document-translator-for-a-multi-language-blog\/","url_meta":{"origin":2633,"position":1},"title":"Building a Document Translator for a Multi-Language Blog","author":"Efstratia Tramountani","date":"10. 