{"id":2153,"date":"2017-03-08T18:52:27","date_gmt":"2017-03-08T17:52:27","guid":{"rendered":"https:\/\/blog.mi.hdm-stuttgart.de\/?p=2153"},"modified":"2023-08-06T21:53:33","modified_gmt":"2023-08-06T19:53:33","slug":"of-apache-spark-hadoop-vagrant-virtualbox-and-ibm-bluemix-services-part-3-what-is-apache-spark","status":"publish","type":"post","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/03\/08\/of-apache-spark-hadoop-vagrant-virtualbox-and-ibm-bluemix-services-part-3-what-is-apache-spark\/","title":{"rendered":"Of Apache Spark, Hadoop, Vagrant, VirtualBox and IBM Bluemix Services &#8211; Part 3 &#8211; What is Apache Spark?"},"content":{"rendered":"<p>Apache Spark is a framework for fast processing of large datasets on computer clusters. Spark applications can be written in Scala, Java, Python or R and can be executed in the cloud or on the Hadoop (YARN) or Mesos cluster managers. It is also possible to run Spark applications in standalone mode, that is, locally on a single computer. Possible data sources for Spark applications are, for example, the Hadoop Distributed File System (HDFS), HBase (Hadoop's distributed NoSQL database), Amazon S3 or Apache Cassandra. 
[1]<\/p>\n<p><!--more--><\/p>\n<p>The figure below shows the components of the Apache Spark framework.<\/p>\n<p><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/03\/spark-overview.png\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"2154\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/03\/08\/of-apache-spark-hadoop-vagrant-virtualbox-and-ibm-bluemix-services-part-3-what-is-apache-spark\/spark-overview\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/03\/spark-overview.png\" data-orig-size=\"1032,262\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"spark-overview\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/03\/spark-overview-300x76.png\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/03\/spark-overview-1024x260.png\" class=\"alignnone size-medium_large wp-image-2154\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/03\/spark-overview-768x195.png\" alt=\"\" width=\"656\" height=\"167\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/03\/spark-overview-768x195.png 768w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/03\/spark-overview-300x76.png 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/03\/spark-overview-1024x260.png 1024w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/03\/spark-overview.png 1032w\" sizes=\"auto, (max-width: 656px) 100vw, 
656px\" \/><\/a><\/p>\n<p>In the following, we will consider the most important core concepts as well as the components Spark SQL, Spark Streaming and MLlib. We developed our applications in Java, so the following examples are written in Java.<\/p>\n<h3>Apache Spark Core<\/h3>\n<h4>Resilient Distributed Datasets (RDD)<\/h4>\n<p>The core concept of Apache Spark is called Resilient Distributed Datasets (RDD). An RDD is a partitioned collection of elements which can be processed in parallel in a fault-tolerant manner. Fault-tolerant means that if any partition of an RDD is lost, it is automatically recomputed. You can create an RDD by parallelizing an existing collection. Furthermore, you can create RDDs by reading datasets from external data storages like distributed file systems or databases. We will show the basics of RDDs in a short example at the end of this section. [2]<\/p>\n<h5>First RDD Example<\/h5>\n<p>In our project we used Spark version 1.6.1, although the newest release is 2.1.0. The reasons for that were, on the one hand, dependency issues with other Hadoop projects and, on the other hand, that this was the version already installed in our Hadoop cluster. We will cover this issue in more detail in the Big Data Engineering part. 
The code below implements some of the concepts described above.<\/p>\n<pre class=\"prettyprint lang-java\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">\/\/ create spark configuration\nSparkConf conf = new SparkConf().setAppName(\"First Spark Test\");\n\/\/ create java spark context\nJavaSparkContext jsc = new JavaSparkContext(conf);\n\n\/\/ create test data\nList&lt;String&gt; data = Arrays.asList(\"This\",\"is\",\"a\",\"first\",\"spark\",\"test\");\n\/\/ create a simple string rdd\nJavaRDD&lt;String&gt; firstRDD = jsc.parallelize(data);\n\/\/ save to hdfs\nfirstRDD.saveAsTextFile(\"hdfs:\/\/..\/user\/syseng\/simple_spark_test\/example.txt\");\n\n\/\/ read text file\nJavaRDD&lt;String&gt; tfRDD = jsc.textFile(\"hdfs:\/\/..\/user\/syseng\/simple_spark_test\/example.txt\");<\/pre>\n<p>First we have to create a Spark configuration, where we set the name of the application (line 2). Additionally, we need a Spark context, which expects the configuration as a parameter (line 4). In line 7 we initialize an ordinary Java list, which we pass to the parallelize method of the Spark context in line 9. This method transforms the list into an RDD, which means that the elements of the list are partitioned. An optional second parameter indicates into how many partitions the list should be divided. If this program were executed on a computer cluster, each host would process one or more partitions. With the saveAsTextFile method in line 11, the RDD is saved to a file system, for example the Hadoop Distributed File System. The last line shows another way an RDD can be created: the textFile method of the Spark context reads a text file, e.g. from HDFS, and transforms the lines of the text file into an RDD of strings.<\/p>\n<h4>RDD Operations<\/h4>\n<p>There are two types of operations you can apply on RDDs: transformations and actions. 
A transformation applies a function to each element of an RDD and returns a new (transformed) RDD. An action actually computes a result and returns it to the driver program. In Spark you build pipelines with higher-order functions in the manner of functional programming, as shown in the figure below. A pipeline is only executed when an action operation is called on it; that means if a pipeline consists only of transformations, nothing is executed. Each time an action is called on a pipeline, all transformations are recomputed. To prevent this, parts of the pipeline can be cached by calling the appropriate methods (cache or persist). [3]<\/p>\n<p>Assume we continue the code example above and want to invoke operations on the firstRDD object. In the code below, the method map, a higher-order function, is called on the RDD object; we pass it a Java lambda expression. The map function transforms each string in the list into upper-case letters. The reduce function then concatenates the strings pairwise. The result is a single string with all strings of the RDD transformed into upper-case letters and concatenated together.<\/p>\n<pre class=\"prettyprint lang-java\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">String sentence = firstRDD.map(str -&gt; str.toUpperCase())\n                          .reduce((str1, str2) -&gt; str1 + \" \" + str2);\n\/\/ result: THIS IS A FIRST SPARK TEST<\/pre>\n<h4>RDD Persistence &#8211; Storage Levels<\/h4>\n<p>By default, Spark keeps persisted intermediate results, like the result of the map operation in the example above, in memory. The storage level can be configured; for example, if very large data is processed, intermediate results can be stored on disk. 
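This execute-only-on-action behaviour can be mimicked with plain Java 8 Streams, which use the same higher-order-function style as the RDD API. The Spark-free sketch below (the class name LazyPipeline is ours for illustration) counts how often the map function runs: nothing happens until the terminal reduce is invoked, and running the pipeline twice executes map twice per element — exactly the recomputation that cache and persist avoid in Spark.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class LazyPipeline {
    // counts how many times the map function is actually executed
    static final AtomicInteger mapCalls = new AtomicInteger();

    static String action(List<String> data) {
        // map is lazy (transformation-like); only the terminal
        // reduce (action-like) triggers its execution
        return data.stream()
                .map(s -> { mapCalls.incrementAndGet(); return s.toUpperCase(); })
                .reduce((a, b) -> a + " " + b)
                .orElse("");
    }

    public static void main(String[] args) {
        List<String> data = Arrays.asList("This", "is", "a", "first", "spark", "test");
        String first = action(data);   // map runs 6 times
        String second = action(data);  // without caching, map runs 6 more times
        System.out.println(first);            // THIS IS A FIRST SPARK TEST
        System.out.println(mapCalls.get());   // 12
    }
}
```

Unlike Spark, a Java stream cannot be reused at all, but the cost model is the same: every fresh run of the pipeline re-executes every transformation, which is why Spark offers caching.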
The possible storage levels are listed in the Apache Spark programming guide: <a href=\"http:\/\/spark.apache.org\/docs\/latest\/programming-guide.html#rdd-persistence\">http:\/\/spark.apache.org\/docs\/latest\/programming-guide.html#rdd-persistence<\/a>.<\/p>\n<p>At this point we have covered the very basics of Apache Spark. There are more concepts and operations, which you can read about in the <a href=\"http:\/\/spark.apache.org\/docs\/latest\/programming-guide.html\">Apache Spark programming guide<\/a>.<\/p>\n<h4>Spark SQL<\/h4>\n<p>Spark SQL is similar to Apache Hive: both allow querying structured data from heterogeneous data sources with SQL statements.<\/p>\n<p>In the example below, the json method of a Spark SQL context loads a JSON file containing multiple person objects and transforms them into the rows of a data frame.<\/p>\n<pre class=\"prettyprint lang-java\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">\/\/ create sql context with java spark context\nSQLContext sqlContext = new SQLContext(jsc);\n\/\/ read people.json and transform into data frame\nDataFrame dfPeople = sqlContext.read().json(\"hdfs:\/\/path\/to\/file\/people.json\");<\/pre>\n<p>A data frame in Spark can be compared to a relational database table or a Python data frame [6]. That means the JSON file above is transformed into a table where the attributes represent the columns, as pictured below. Each person object simply consists of the two attributes name and age.<\/p>\n<table style=\"height: 234px;\" width=\"205\">\n<tbody>\n<tr>\n<td><strong>name<\/strong><\/td>\n<td><b>age<\/b><\/td>\n<\/tr>\n<tr>\n<td>Tim<\/td>\n<td>24<\/td>\n<\/tr>\n<tr>\n<td>Max<\/td>\n<td>45<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Now SQL statements can be executed on this data frame, as shown below. First, a table name that can be referenced in SQL statements has to be registered. This happens in the first line with the registerTempTable method. 
The table is named \u201cpeople\u201d. With the sql method of the Spark SQL context, SQL statements can be executed; this method returns a new data frame. In line 2 we define a statement which returns all rows of the data frame where the column age has a value lower than 40.<\/p>\n<pre class=\"prettyprint lang-java\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">dfPeople.registerTempTable(\"people\"); \/\/ now this table name can be used in queries\nDataFrame dfPeopleUnder40 = sqlContext.sql(\"SELECT * FROM people WHERE age &lt; 40\");\ndfPeopleUnder40.show(); \/\/ print table to console<\/pre>\n<p>The show method is part of the Spark data frame API and prints the table to the console. The data frame API is an alternative way to query a data frame in Spark. For example, the SQL statement above can be expressed with the data frame API like this:<\/p>\n<pre class=\"prettyprint lang-java\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">\/\/ requires: import static org.apache.spark.sql.functions.col;\nDataFrame dfPeopleUnder40 = dfPeople.filter(col(\"age\").lt(40));<\/pre>\n<p>There are many possible data sources for Spark data frames: structured files (JSON, CSV, etc.) as mentioned above, external databases, existing RDDs or Hive tables [6]. Thus Spark SQL can also be used to read data from Hive tables.<\/p>\n<h4>Spark Streaming<\/h4>\n<p>The Spark Streaming module allows applications to process streams from many data sources like Apache Kafka, Apache Flume, or I\/O streams such as TCP sockets or files. It is an extension of the core module and allows scalable, high-throughput stream processing in a fault-tolerant manner. The data of a Spark stream can be processed with high-level functions like map or reduce (as shown in the RDD operations section). [7]<\/p>\n<p>Strictly speaking, Spark Streaming actually does micro-batch processing. 
This means the received input data is divided into batches, as shown in the figure below [7].<br \/>\n<img decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/streaming-flow.png\"><\/p>\n<h6 style=\"text-align: center;\">image source: https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/streaming-flow.png<\/h6>\n<p>In the example below, first a Spark streaming context is created. The second parameter defines the batch interval, in this case 10 seconds: the stream source is read for 10 seconds, then the data collected in that time is processed. In line 2, a text file stream is created via the streaming context. The wildcard (*) says that every text file in this directory should be read. The textFileStream method returns a DStream, which represents a continuous stream of data and is a high-level abstraction in Spark Streaming [7]. A DStream is actually a sequence of RDDs [7]. Note that the processing only begins once start is called on the streaming context.<\/p>\n<pre class=\"prettyprint lang-java\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));\nJavaDStream&lt;String&gt; txtStream = jssc.textFileStream(\"hdfs:\/\/path\/to\/dir\/*.txt\");\ntxtStream.foreachRDD(rdd -&gt; {\n    rdd.saveAsTextFile(\"hdfs:\/\/path\/to\/result\/result\" + new Date().getTime() + \".txt\");\n});\n\/\/ start the computation and wait for it to terminate\njssc.start();\njssc.awaitTermination();<\/pre>\n<p>In this simple example the data of the text file stream is saved to other text files in HDFS.<\/p>\n<h4>MLlib (Machine Learning Library)<\/h4>\n<p>The MLlib component contains, among other things, distributed implementations of common machine learning algorithms based on RDDs, for example topic modelling (LDA), logistic regression, k-means and many more. Furthermore, there are utility classes for vector and matrix computation, machine learning pipeline construction, and persistence of models [9]. 
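As an illustration of what such an algorithm does, here is a minimal single-machine sketch of k-means (Lloyd's algorithm) on one-dimensional points; MLlib's version distributes the assignment step over the partitions of an RDD. Class and method names (TinyKMeans, fit) are ours for illustration, not MLlib API.

```java
import java.util.Arrays;

// Minimal single-machine k-means on 1-D points; MLlib parallelizes the
// assignment step below across the partitions of an RDD.
public class TinyKMeans {
    public static double[] fit(double[] points, double[] centers, int iterations) {
        centers = centers.clone();
        for (int it = 0; it < iterations; it++) {
            double[] sum = new double[centers.length];
            int[] count = new int[centers.length];
            for (double p : points) {
                // assign each point to its nearest center
                int best = 0;
                for (int c = 1; c < centers.length; c++)
                    if (Math.abs(p - centers[c]) < Math.abs(p - centers[best])) best = c;
                sum[best] += p;
                count[best]++;
            }
            // move each center to the mean of its assigned points
            for (int c = 0; c < centers.length; c++)
                if (count[c] > 0) centers[c] = sum[c] / count[c];
        }
        return centers;
    }

    public static void main(String[] args) {
        double[] centers = fit(new double[]{1, 2, 3, 10, 11, 12}, new double[]{0, 5}, 10);
        System.out.println(Arrays.toString(centers)); // [2.0, 11.0]
    }
}
```

MLlib's appeal is that this per-point loop, which dominates the runtime, is embarrassingly parallel and maps naturally onto RDD partitions.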
An overview of this component can be seen on the Apache Spark website: <a href=\"http:\/\/spark.apache.org\/mllib\/\">http:\/\/spark.apache.org\/mllib\/<\/a><\/p>\n<p><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/03\/09\/of-apache-spark-hadoop-vagrant-virtualbox-and-ibm-bluemix-services-part-4-big-data-engineering\/\">Part 4 &#8211; Big Data Engineering<\/a><\/p>\n<h5>References<\/h5>\n<h6>1 <a href=\"http:\/\/spark.apache.org\/\">http:\/\/spark.apache.org\/<\/a><\/h6>\n<h6>2 <a href=\"http:\/\/spark.apache.org\/docs\/latest\/programming-guide.html#resilient-distributed-datasets-rdds\">http:\/\/spark.apache.org\/docs\/latest\/programming-guide.html#resilient-distributed-datasets-rdds<\/a><\/h6>\n<h6>3 <a href=\"http:\/\/spark.apache.org\/docs\/latest\/programming-guide.html\">http:\/\/spark.apache.org\/docs\/latest\/programming-guide.html<\/a><\/h6>\n<h6>5 <a href=\"http:\/\/spark.apache.org\/sql\/\">http:\/\/spark.apache.org\/sql\/<\/a><\/h6>\n<h6>6 <a href=\"http:\/\/spark.apache.org\/docs\/latest\/sql-programming-guide.html#overview\">http:\/\/spark.apache.org\/docs\/latest\/sql-programming-guide.html#overview<\/a><\/h6>\n<h6>7 <a href=\"http:\/\/spark.apache.org\/docs\/latest\/streaming-programming-guide.html\">http:\/\/spark.apache.org\/docs\/latest\/streaming-programming-guide.html<\/a><\/h6>\n<h6>8 <a href=\"https:\/\/spark.apache.org\/docs\/latest\/mllib-guide.html\">https:\/\/spark.apache.org\/docs\/latest\/mllib-guide.html<\/a><\/h6>\n<h6>9 <a href=\"http:\/\/spark.apache.org\/mllib\/\">http:\/\/spark.apache.org\/mllib\/<\/a><\/h6>\n","protected":false},"excerpt":{"rendered":"<p>Apache Spark is a framework for fast processing of large data on computer clusters. Spark applications can be written in Scala, Java, Python or R and can be executed in the cloud or on Hadoop (YARN) or Mesos cluster managers. It is also possible to run Spark applications standalone, that means locally on a computer. 
[&hellip;]<\/p>\n","protected":false},"author":49,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[22,651,2],"tags":[],"ppma_author":[721],"class_list":["post-2153","post","type-post","status-publish","format-standard","hentry","category-student-projects","category-system-designs","category-system-engineering"],"aioseo_notices":[],"jetpack_featured_media_url":"","jetpack-related-posts":[{"id":2151,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/03\/08\/of-apache-spark-hadoop-vagrant-virtualbox-and-ibm-bluemix-services-part-2-apache-hadoop-ecosystem\/","url_meta":{"origin":2153,"position":0},"title":"Of Apache Spark, Hadoop, Vagrant, VirtualBox and IBM Bluemix Services &#8211; Part 2 &#8211; Apache Hadoop Ecosystem","author":"bh051, cz022, ds168","date":"8. March 2017","format":false,"excerpt":"In our project we primarily implemented Spark applications, but we used components of Apache Hadoop like the Hadoop distributed file system or the cluster manager Hadoop YARN. For our discussion in the last part of this blog article it is moreover necessary to understand Hadoop MapReduce for comparison to Apache\u2026","rel":"","context":"In &quot;Student Projects&quot;","block_context":{"text":"Student Projects","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/student-projects\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":2165,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/03\/09\/of-apache-spark-hadoop-vagrant-virtualbox-and-ibm-bluemix-services-part-6-apache-spark-andvs-apache-hadoop\/","url_meta":{"origin":2153,"position":1},"title":"Of Apache Spark, Hadoop, Vagrant, VirtualBox and IBM Bluemix Services &#8211; Part 6 &#8211; Apache Spark and\/vs Apache Hadoop?","author":"bh051, cz022, ds168","date":"9. 
March 2017","format":false,"excerpt":"At the beginning of this article series we introduced the core concepts of Hadoop and Spark in a nutshell. Both, Apache Spark and Apache Hadoop are frameworks for efficient processing of large data on computer clusters. The question arises how they differ or relate to each other. Hereof it seems\u2026","rel":"","context":"In &quot;Student Projects&quot;","block_context":{"text":"Student Projects","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/student-projects\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":2143,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/03\/08\/of-apache-spark-hadoop-vagrant-virtualbox-and-ibm-bluemix-services\/","url_meta":{"origin":2153,"position":2},"title":"Of Apache Spark, Hadoop, Vagrant, VirtualBox and IBM Bluemix Services &#8211; Part 1 &#8211; Introduction","author":"bh051, cz022, ds168","date":"8. March 2017","format":false,"excerpt":"As part of the lecture \u201cSystem Engineering and Management\u201d in the winter semester 2016\/17, we run a project with Apache Spark and the Apache Hadoop Ecosystem. In this article series firstly we want to introduce Apache Spark and the Apache Hadoop Ecosystem. Furthermore we want to give an overview of\u2026","rel":"","context":"In &quot;Student Projects&quot;","block_context":{"text":"Student Projects","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/student-projects\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":2157,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/03\/09\/of-apache-spark-hadoop-vagrant-virtualbox-and-ibm-bluemix-services-part-4-big-data-engineering\/","url_meta":{"origin":2153,"position":3},"title":"Of Apache Spark, Hadoop, Vagrant, VirtualBox and IBM Bluemix Services &#8211; Part 4 &#8211; Big Data Engineering","author":"bh051, cz022, ds168","date":"9. 
March 2017","format":false,"excerpt":"Our objective in this project was to build an environment that could be practical. So we set up a virtual Hadoop test cluster with virtual machines. Our production environment was a Hadoop Cluster in the IBM Bluemix cloud which we could use for free with our student accounts. We developed\u2026","rel":"","context":"In &quot;Student Projects&quot;","block_context":{"text":"Student Projects","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/student-projects\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/03\/dev-env-spark-768x512.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/03\/dev-env-spark-768x512.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/03\/dev-env-spark-768x512.png?resize=525%2C300&ssl=1 1.5x"},"classes":[]},{"id":2161,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/03\/09\/of-apache-spark-hadoop-vagrant-virtualbox-and-ibm-bluemix-services-part-5-spark-applications-in-pia-project\/","url_meta":{"origin":2153,"position":4},"title":"Of Apache Spark, Hadoop, Vagrant, VirtualBox and IBM Bluemix Services &#8211; Part 5 &#8211; Spark applications in PIA project","author":"bh051, cz022, ds168","date":"9. March 2017","format":false,"excerpt":"The main reason for choosing Spark was a second project which we developed for the course \u201cProgramming Intelligent Applications\u201d. For this project we wanted to implement a framework which is able to monitor important events (e.g. terror, natural disasters) on the world through Twitter. 
To separate important tweets from others\u2026","rel":"","context":"In &quot;Student Projects&quot;","block_context":{"text":"Student Projects","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/student-projects\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":10289,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2020\/03\/09\/distributed-stream-processing-frameworks-what-they-are-and-how-they-perform\/","url_meta":{"origin":2153,"position":5},"title":"Distributed stream processing frameworks &#8211; what they are and how they perform","author":"Alexander Merker","date":"9. March 2020","format":false,"excerpt":"An overview on stream processing, common frameworks as well as some insights on performance based on benchmarking data","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/storm_arch.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/storm_arch.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/08\/storm_arch.png?resize=525%2C300&ssl=1 1.5x"},"classes":[]}],"jetpack_sharing_enabled":true,"authors":[{"term_id":721,"user_id":49,"is_guest":0,"slug":"bh051","display_name":"bh051, cz022, 
ds168","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/6e0cfeb23e37b530d4d35d4e46d3e6f39969124f52f6474b4cf0f23b6ff524ac?s=96&d=mm&r=g","0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""}],"_links":{"self":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/2153","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/users\/49"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/comments?post=2153"}],"version-history":[{"count":13,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/2153\/revisions"}],"predecessor-version":[{"id":25524,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/2153\/revisions\/25524"}],"wp:attachment":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/media?parent=2153"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/categories?post=2153"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/tags?post=2153"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/ppma_author?post=2153"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}