{"id":7032,"date":"2019-08-04T10:19:18","date_gmt":"2019-08-04T08:19:18","guid":{"rendered":"https:\/\/blog.mi.hdm-stuttgart.de\/?p=7032"},"modified":"2023-06-18T18:25:01","modified_gmt":"2023-06-18T16:25:01","slug":"how-to-create-and-integrate-a-customised-classifier-based-on-ibm-visual-recognition","status":"publish","type":"post","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2019\/08\/04\/how-to-create-and-integrate-a-customised-classifier-based-on-ibm-visual-recognition\/","title":{"rendered":"How to create and integrate a customised classifier based on IBM Visual Recognition"},"content":{"rendered":"\n<p>Helga Schwaighofer \u2013 hs082<br> Celine Wichmann \u2013 cw089<br> Lea Baumg\u00e4rtner \u2013 lb092<\/p>\n\n\n\n<ul class=\"wp-block-gallery columns-1 is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\"><li class=\"blocks-gallery-item\"><figure><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"429\" data-attachment-id=\"7033\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2019\/08\/04\/how-to-create-and-integrate-a-customised-classifier-based-on-ibm-visual-recognition\/image1-2\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/08\/Image1.png\" data-orig-size=\"2514,1054\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"Image1\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/08\/Image1-1024x429.png\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/08\/Image1-1024x429.png\" alt=\"\" data-id=\"7033\" data-link=\"https:\/\/blog.mi.hdm-stuttgart.de\/?attachment_id=7033\" class=\"wp-image-7033\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/08\/Image1-1024x429.png 1024w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/08\/Image1-300x126.png 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/08\/Image1-768x322.png 768w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure><\/li><\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Motivation<\/h3>\n\n\n\n<p>Imagine you are having a bad day, but you don\u2019t know what to do. Your friends are not available, but you\u2019d like to have advice depending on your mood.<br> For that case, we created the Supporting Shellfish! This service generates advice based on the mood it recognises on your face. 
The following blog post gives a step-by-step explanation of how to create a personalised classifier based on the IBM Visual Recognition cloud service and how to integrate those functionalities into a JavaScript/Flask-based web application.

*Figure: Architecture of Shelly the Supporting Shellfish web app*

### Research on different services

In order to realise our idea, we had to choose between different cloud-based services in the field of image recognition, or, more specifically, in the area of face recognition.

Machine Learning as a Service is the umbrella term for diverse cloud-based services providing functionality in the area of artificial intelligence, such as data pre-processing, model training and prediction. The prediction results can be used and integrated through REST APIs. First of all, we analysed three of the most popular companies and their services.

Google, Amazon and IBM. Which one should we choose?

All of these services offer pre-trained models via API as well as the possibility to create and use a customised model. [This website](https://wire19.com/cloud-services-comparison-tool-aws-vs-google-vs-ibm-vs-microsoft/) provides a very good overview of the detailed functionality of the different services.
However, for our case we focused on the following pros and cons of those services:

*Figure: Pros and cons of the three services*

### Creating a customised classifier

After weighing up the pros and cons of the different services, we decided to use IBM Cloud. The deciding factor for us was the pricing, but the [well-structured documentation](https://developer.ibm.com/exchanges/models/all/max-facial-emotion-classifier/) and the available tutorials also helped convince us.

Although IBM already provides a facial emotion classifier, we decided to create our own facial expression recogniser based on IBM Watson Visual Recognition for study purposes.

We searched for different emotion datasets and found [the MUG Facial Expression Database](https://mug.ee.auth.gr/fed/). After reading and accepting the license agreement, we requested access; a few weeks later we received the necessary credentials. The database provides videos and images of 52 different people expressing the emotions happiness, sadness, neutral, anger, surprise, disgust and fear.

To create our own classifier in IBM Visual Recognition, we had to bundle the data into one zip file per class (i.e. per emotion) and therefore created a whole new structure for the facial dataset.
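This restructuring can be scripted in a few lines. The following is a minimal sketch, assuming the images have already been sorted into one folder per emotion; the `dataset/<emotion>/` layout is our illustration, not the original MUG structure:

```python
import zipfile
from pathlib import Path

# Assumed working layout (not the original MUG structure):
#   dataset/happiness/*.jpg, dataset/sadness/*.jpg, ...
DATASET_DIR = Path("dataset")
CLASSES = ["happiness", "sadness", "neutral", "anger",
           "surprise", "disgust", "fear"]

for label in CLASSES:
    # One archive per class; the zip file name becomes the class label.
    with zipfile.ZipFile(f"{label}.zip", "w", zipfile.ZIP_DEFLATED) as archive:
        for image in sorted((DATASET_DIR / label).glob("*.jpg")):
            archive.write(image, arcname=image.name)
```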
To build the classifier itself, we could either use the terminal or the well-structured user interface of IBM Watson Studio; we decided to use the latter.

First, we configured the model:

*Figure: Configuration of the model in IBM Watson Studio*

After the model was created, we were able to drag and drop our zipped training data onto the right-hand side of the user interface, below "2. Add from project". We named the zip files after the classes we wanted to predict. (We had to censor the faces in the following screenshots due to data protection.) As soon as we had finished uploading the training data, we hit the button "Train Model" and the training began.
*Figure: Upload of training data and training process of the customised model*

After roughly 15 to 20 minutes the training finished successfully, and we were able to embed our custom model into the backend of our web application.

*Figure: These few lines of code enabled the integration of the customised model*
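The code from the screenshot is not reproduced here, but an integration along those lines can be sketched with the ibm-watson Python SDK. The API key, service URL and classifier ID below are placeholders, and the response parsing follows the documented JSON structure of the classify endpoint:

```python
import json

from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson import VisualRecognitionV3

# Placeholder credentials, service URL and classifier ID.
authenticator = IAMAuthenticator("YOUR_API_KEY")
visual_recognition = VisualRecognitionV3(version="2018-03-19",
                                         authenticator=authenticator)
visual_recognition.set_service_url(
    "https://api.eu-de.visual-recognition.watson.cloud.ibm.com")

with open("face.jpg", "rb") as images_file:
    result = visual_recognition.classify(
        images_file=images_file,
        classifier_ids=["YOUR_CLASSIFIER_ID"],
        threshold=0.0,  # return all classes so we can rank them ourselves
    ).get_result()

# Pick the class with the highest score for the first (and only) image.
classes = result["images"][0]["classifiers"][0]["classes"]
best = max(classes, key=lambda c: c["score"])
print(json.dumps(best, indent=2))
```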
### Building the Web App

In parallel, we created a one-page web application using Python/Flask as the backend and HTML/JavaScript as the frontend.

#### Frontend Description

The frontend of our application consists of a single HTML page, which Flask renders as a Jinja template. The functionality is implemented in JavaScript, and the design was created with [Kube CSS](https://imperavi.com/kube/).

There are two buttons: one lets you select an image from your local device, the other uploads the selected file. As soon as an image is selected, the user sees a preview of it in a form next to Shelly, the Supporting Shellfish, and the image is encoded as base64. When the "Upload File" button is pushed, the encoded image is sent to the backend via an XMLHttpRequest.

Finally, the frontend waits for the backend's status code and catches exceptions if something went wrong.
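To keep all examples in one language, here is a small Python stand-in for what the frontend JavaScript does on the wire; the `/upload` route name and the JSON payload shape are illustrative assumptions:

```python
import base64

import requests  # third-party HTTP client standing in for the browser

# Encode the chosen image as base64, just as the frontend does before upload.
with open("face.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")

# Hypothetical route name and payload shape for illustration.
response = requests.post("http://localhost:5000/upload",
                         json={"image": encoded})
response.raise_for_status()
print(response.json())
```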
#### Backend Description

The backend consists of two routes: a GET route for the landing page and a POST route that receives the image from the frontend. The received image is decoded from base64 and processed by IBM Visual Recognition. Our classifier predicts the mood in the image, and the service sends back JSON containing the predicted class with the highest probability.

Based on that prediction, a random piece of advice is picked from the corresponding advice list and sent to the frontend.
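Put together, the two routes can be sketched as follows. The route names, payload shape, advice lists and the `classify_mood` helper (which would wrap the Watson call shown earlier) are all illustrative assumptions:

```python
import base64
import random

from flask import Flask, jsonify, render_template, request

app = Flask(__name__)

# Hypothetical advice lists, one per trained class.
ADVICE = {
    "happiness": ["Keep smiling and share your mood!"],
    "sadness": ["Treat yourself to something nice today."],
    # ... one entry per remaining emotion
}

def classify_mood(image_bytes: bytes) -> str:
    # Stub: in the real app this wraps the Watson classify call sketched
    # above and returns the class with the highest score.
    return "happiness"

@app.route("/")
def index():
    # GET route: render the single-page frontend as a Jinja template.
    return render_template("index.html")

@app.route("/upload", methods=["POST"])
def upload():
    # POST route: decode the base64 image sent by the frontend,
    # classify it, and answer with the mood plus a random matching advice.
    image_bytes = base64.b64decode(request.get_json()["image"])
    mood = classify_mood(image_bytes)
    return jsonify({"mood": mood, "advice": random.choice(ADVICE[mood])})
```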
### Demo

How does Shelly the Supporting Shellfish generate her advice? First of all, upload a picture of your face. After you hit the "Upload File" button, Shelly uses the customised model via IBM Cloud to predict the mood on your face. Based on the recognised mood, she will give you a more or less helpful piece of advice.

*Figure: Based on the recognised mood, Shelly will show you her empathy*

### Conclusion

Every member of the Supporting Shellfish team had already been active in the area of artificial intelligence. For this project, however, we wanted to analyse the advantages and disadvantages of integrating a cloud-based service and of using "machine learning as a service" in an application.

The most interesting part for us was creating a customised model in the cloud; we were especially impressed by the functionality and usability of this process. The tough part was selecting a dataset to train the model, as we had to restructure the data to fit our needs and IBM's requirements. Once the training was complete, integrating the model into our web app went smoothly and quite quickly.

If you are interested in the project, you can take a deeper look [here](https://gitlab.mi.hdm-stuttgart.de/lb092/supporting_shellfish).