{"id":3925,"date":"2018-08-14T19:24:12","date_gmt":"2018-08-14T17:24:12","guid":{"rendered":"https:\/\/blog.mi.hdm-stuttgart.de\/?p=3925"},"modified":"2023-08-06T21:48:06","modified_gmt":"2023-08-06T19:48:06","slug":"differential-privacy","status":"publish","type":"post","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2018\/08\/14\/differential-privacy\/","title":{"rendered":"Differential Privacy &#8211; Privacy-preserving data analysis"},"content":{"rendered":"<p>It is widely known that tech companies, like Apple or Google and their partners collect and analyse an increasing amount of information. This includes information about the person itself, their interaction and their communication. It happens because of seemingly good motives such as:<\/p>\n<ul>\n<li>Recommendation services: e.g. word suggestions on smartphone keyboard<\/li>\n<li>Customizing a product or service for the user<\/li>\n<li>Creation and Targeting in personalised advertising<\/li>\n<li>Further development of their product or service<\/li>\n<li>Simply monetary, selling customer data (the customer sometimes doesn&#8217;t know)<\/li>\n<\/ul>\n<p>In the process of data collection like this clients&#8217; or users&#8217; privacy is often at risk. In this case privacy includes confidentiality and secrecy. Confidentiality means that no other party or person than the recipient of sent message can read the message. In the special case of data collection: no third party or even no one else but the individual, not even the analysing company should be able to read its information to achieve proper confidentiality. Secrecy here means that individual information should be kept secret only to the user.<\/p>\n<p>Databases may not be simply accessible for other users or potential attackers, but for the company collecting the data it probably is. Despite anonymization\/pseudonymization, information can often be associated to one product, installation, session and\/or user. 
This way, fairly definite conclusions about one single individual can be drawn, even though the underlying information is anonymized or supposedly not available. Thus, individual users are identifiable and traceable, and their privacy is violated.<\/p>\n<p>The approach of differential privacy aims specifically at solving this issue, protecting privacy and making information non-attributable to individuals. It tries to give users plausible deniability regarding the data they send, as a right. The following article gives an overview of the approach of differential privacy and its effects on data collection.<\/p>\n<p><!--more--><\/p>\n<h1>Privacy<\/h1>\n<p>The first questions to answer are: What is the definition of privacy? Can it be achieved, and if so, how? The Swedish statistician Tore Dalenius defined privacy as follows in 1977:<\/p>\n<blockquote><p>\u201cNothing about an individual should be learnable from the database that cannot be learned without access to the database.\u201d<\/p>\n<p>Dwork C. (2006) Differential Privacy. In: Bugliesi M., Preneel B., Sassone V., Wegener I. (eds) Automata, Languages and Programming. ICALP 2006. Lecture Notes in Computer Science, vol 4052. Springer, Berlin, Heidelberg<\/p><\/blockquote>\n<p>In other words: a data collection, or the process of creating it, violates an individual\u2019s privacy if information about this very individual can be determined more precisely with access to the collection than without it. This does make sense at first sight: reading a collection of information should not enable you to learn about an individual\u2019s information; otherwise their privacy would be violated.<\/p>\n<p>However, the definition is too strict in the context of today\u2019s massive data collections, which grow both in number and in the amount of information they hold. Under Tore Dalenius&#8217; definition, privacy could easily be violated by associating a data source with auxiliary sources. 
An example of this is the Netflix Prize challenge, in which Netflix challenged anyone interested to beat the combination of algorithms it used to recommend movies a user might like. Contestants were given an anonymized database containing reviews and ratings from real Netflix users. By executing a so-called linkage attack, a group of contestants was able to associate this anonymized data with a secondary (non-anonymized) source, namely reviews from IMDb. That way they managed to identify individual users. With the information from Netflix they could learn more about these users than they could have without it, which violates the aforementioned definition of privacy.<\/p>\n<h1>Possible Approaches (?)<\/h1>\n<p>&#8220;Classic&#8221; approaches like encryption or anonymization seem useful at first. However, they do not solve the issue at hand: that information about individuals can be deduced and attributed to them.<\/p>\n<p>Encrypting data and communication prevents unauthorized access. This strengthens the confidentiality of the data, as only authorized individuals or companies are granted access. Yet an authorized party, such as the company collecting the information, can still attribute it to individuals easily. Furthermore, operations to gain statistical insights on collected data must either be performed on plaintext data, which requires decryption before every computation, or use so-called homomorphic encryption algorithms, which have the property that the decrypted result of an operation on encrypted data equals the result of the same operation on the plaintext.<\/p>\n<p>Data could be anonymized to hinder attribution to individuals, as in the best case the individuals aren\u2019t identifiable. However, the party performing the anonymization needs to be trusted to do it correctly. Also, as mentioned before, de-anonymization is possible due to linkage attacks. 
This has happened with relatively benign information, but also with sensitive data such as medical records. De-anonymization could be impeded by reducing the resolution of the information, for example by storing only a month instead of a specific day, or a country instead of a city. But this diminishes the value, or utility, of the examined data.<\/p>\n<p>Both encryption and anonymization thus fail to prevent the problem of attribution. The approach of differential privacy, however, is specifically targeted at this issue.<\/p>\n<h1>Introduction to differential privacy<\/h1>\n<p>Differential privacy is a theoretical concept and approach to preserving privacy. It gives a formal mathematical definition of privacy, but does not prescribe one specific implementation or framework. It is important to note that it does not rely on encryption at all. The overall goal is to reduce the risk that information is used in a way that harms human rights. Moreover, it defines and tackles the tradeoff between the privacy and the utility of the examined information. It shall help to both&#8230;<\/p>\n<blockquote><p>&nbsp;\u201c[&#8230;] <strong>reveal useful information<\/strong> about the underlying population, as represented by the database, while <strong>preserving the privacy of individuals<\/strong>.\u201d<\/p>\n<p>Dwork C. (2006) Differential Privacy. In: Bugliesi M., Preneel B., Sassone V., Wegener I. (eds) Automata, Languages and Programming. ICALP 2006. Lecture Notes in Computer Science, vol 4052. Springer, Berlin, Heidelberg<\/p><\/blockquote>\n<p>The concept of differential privacy deals with this trade-off and tries to formalise and even quantify it.<\/p>\n<h1>Method of differential privacy<\/h1>\n<p>The following example explains the method of differential privacy:<\/p>\n<p>Suppose Alice has access to some kind of private information (for example from her customers). 
She is called the curator of the data, as she holds it, has access to it and can control what is revealed to any other party, and how. Bob represents anyone who would like to gain insight into (statistical) information about Alice\u2019s data. This could be an employee of the company collecting the information, or a customer, client or partner of this company.<\/p>\n<p>As Bob would like to know more about the information Alice holds, he makes a request to Alice. Alice aims to protect her customers\u2019 privacy. Therefore, she wants neither to simply grant Bob access to the data nor to respond with the real (clean) data Bob requested. Thus, Bob and Alice agree on the following:<\/p>\n<ul>\n<li>Bob can ask any question, but<\/li>\n<li>Alice gives responses which are to some degree randomised, yet probably close to the real data<\/li>\n<\/ul>\n<p>While Bob gets useful approximate data as a response, he cannot deduce any individual information. The privacy of Alice\u2019s customers is protected. The benefit of this approach is having both utility (for Bob) and privacy (for Alice and her customers). The following section explains the principle by which Alice randomises her answers.<\/p>\n<h1>Randomization<\/h1>\n<p>In differential privacy, two variants of randomising information (adding noise) exist:<\/p>\n<ul>\n<li>Adding noise to a response given to a request on the data<\/li>\n<li>Adding noise already in the process of collecting the data<\/li>\n<\/ul>\n<p>The following example shows how the second variant works. It deals with a simple survey in which people are asked to answer a polar (yes\/no) question. As differential privacy preserves the respondents\u2019 privacy, the question can be sensitive, e.g. whether the respondent takes drugs or has a specific disease.<\/p>\n<p>To determine what answer is stored for a respondent, a randomized experiment is performed twice. 
This experiment is independent of the respondent\u2019s real answer and can be a coin toss in the simplest case. The outcome of the first experiment determines whether the real answer is stored in the database of answers to this survey. With a 50 percent probability the real answer is stored. The other 50 percent of the time the randomized experiment is performed again.<\/p>\n<p><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/08\/coin-toss.jpg\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"3928\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2018\/08\/14\/differential-privacy\/coin-toss\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/08\/coin-toss.jpg\" data-orig-size=\"1033,482\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;Philipp Joseph&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1534274972&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"coin toss\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/08\/coin-toss-1024x478.jpg\" class=\"alignnone wp-image-3928\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/08\/coin-toss-300x140.jpg\" alt=\"\" width=\"622\" height=\"290\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/08\/coin-toss-300x140.jpg 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/08\/coin-toss-768x358.jpg 768w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/08\/coin-toss-1024x478.jpg 1024w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/08\/coin-toss.jpg 1033w\" 
sizes=\"auto, (max-width: 622px) 100vw, 622px\" \/><\/a><\/p>\n<p>The following second run of the experiment decides what answer is saved instead of the individual\u2019s real answer: yes or no. With a 25 percent chance the stored answer can be Yes although he\/she actually responded with No.<\/p>\n<p>This method adds random errors to single measured values and has the effect of plausible deniability of every individual\u2019s answer. This means that because of the added randomised noise respondents can deny having given the stored answer.<\/p>\n<p>After collecting a large enough amount of respondents\u2019 answers with added noise according to this variant, one would be able to gather approximate statistical values. As the proportion and distribution of noise is known, the amount of \u201cwrong\u201d answers can be compensated to get statistical values close to those based on data without noise. Yet only approximate, they are useful to the requesting party depending on their proximity to the \u201creal\u201d values (which in turn partly depend on the method of adding noise).<\/p>\n<p>Using a coin toss to add noise is the simplest technique. More complex mathematical functions and distributions are used in sophisticated systems taking the approach of differential privacy (e.g. Laplace or Gaussian distribution).<\/p>\n<p>Apple outlines the approach and effects of differential privacy in it\u2019s \u201cTechnical Overview\u201d as:<\/p>\n<blockquote><p>\u201c[\u2026] the idea that statistical noise that is slightly biased can mask a user\u2019s individual data before it is shared with Apple. 
If many people are submitting the same data, the noise that has been added can average out over large numbers of data points, and Apple can see meaningful information emerge.\u201d<\/p><\/blockquote>\n<p>By inserting noise, the differential privacy model guarantees that even if someone has complete information about 99 of 100 people in a data set, they still cannot deduce the information about the final person. For example, the 100th respondent\u2019s answer to a polar question can still be yes or no, be it the real answer or simply the outcome of an experiment. An individual\u2019s information cannot be determined by an outsider looking only at the noisy data.<\/p>\n<p>Furthermore, differential privacy relies on the fact that removing a single record from a large enough data collection has little impact on statistics computed from it; the impact shrinks as the collection grows. The probability of any evaluation result would be neither increased nor decreased significantly if one particular person refrained from providing their record. This addresses possible concerns of potential participants in the data collection, who might fear that an individual\u2019s information could be disclosed or deduced. 
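<\/p>\n<p>The coin-toss collection and the debiasing of the aggregate described above can be sketched in a few lines. This is a minimal simulation for illustration only, not any vendor\u2019s actual implementation; all function names are made up for this example.<\/p>

```python
import random

def randomized_response(true_answer):
    # First coin toss: with probability 0.5, store the real answer.
    if random.random() < 0.5:
        return true_answer
    # Second coin toss: otherwise store a uniformly random yes/no.
    return random.random() < 0.5

def estimate_true_share(stored_answers):
    # P(stored = yes) = 0.5 * true_share + 0.25, so invert that relation.
    observed = sum(stored_answers) / len(stored_answers)
    return (observed - 0.25) / 0.5

random.seed(0)
# 10,000 respondents, 30 percent of whom would truly answer yes
truth = [random.random() < 0.3 for _ in range(10_000)]
stored = [randomized_response(t) for t in truth]
print(round(estimate_true_share(stored), 2))  # close to 0.3
```

<p>Every stored bit is individually deniable (it may just be the outcome of the second coin toss), yet the overall share of yes-answers is recoverable because the noise distribution is known.<\/p>\n<p>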
However, these concerns can be dispelled, and motivation to participate can be built more easily.<\/p>\n<h1>Definition of privacy in the context of differential privacy<\/h1>\n<p>This leads to a definition of privacy, here in the context of differential privacy:<\/p>\n<p style=\"padding-left: 30px;\"><em>The privacy of an individual is preserved if it makes no difference to a statistical request whether the data collection contains their specific record or not.<\/em><\/p>\n<p>If this statement holds, one cannot deduce any individual\u2019s information by comparing a statistical value based on the collection including the individual&#8217;s record to one based on the collection without it.<\/p>\n<p>This is essentially what the mathematical definition of differential privacy states:<\/p>\n<p style=\"padding-left: 30px;\">Pr[ \u039a(D<sub>1<\/sub>) \u2208 E ]&nbsp; \u2264&nbsp; e<sup>\u03b5<\/sup> \u00d7 Pr[ \u039a(D<sub>2<\/sub>) \u2208 E ]<\/p>\n<p>D<sub>1<\/sub> and D<sub>2<\/sub> are so-called neighboring data sets, differing in at most one element (informally: D<sub>2<\/sub> := D<sub>1<\/sub> \u2013 person X). \u039a is a randomized function (the mechanism) mapping data sets to an output space S, and E is a subset of S. The inequality compares the probability that the mechanism\u2019s output (e.g. the answer to a survey question) falls into E when computed on D<sub>1<\/sub> (left side) with the same probability when computed on D<sub>2<\/sub> (right side).<\/p>\n<p>The only factor differing between the two sides is e<sup>\u03b5<\/sup>. The bigger the value of \u03b5, the bigger the right side of the inequality. The parameter \u03b5 bounds how much the absence of one single record may affect statistical values or responses to requests. To achieve a high level of privacy, \u03b5 must be kept as low as possible: it is a measure of privacy loss.<\/p>\n<h1>Privacy budget<\/h1>\n<p>\u03b5 is also called the privacy budget, as queries or requests on the data collection can be limited by granting certain values of \u03b5. 
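<\/p>\n<p>How \u03b5 governs the noise can be illustrated with the Laplace mechanism, one standard way to achieve \u03b5-differential privacy for numeric queries: noise drawn from a Laplace distribution with scale sensitivity \/ \u03b5 is added to the true result. The sketch below is generic, not tied to any system mentioned in this article, and its function names are made up.<\/p>

```python
import random
from math import log

def laplace_noise(scale):
    # Inverse-CDF sampling of a centered Laplace distribution.
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * log(1.0 - 2.0 * abs(u))

def dp_count(records, epsilon):
    # Counting query: adding or removing one record changes the
    # count by at most 1, so the sensitivity is 1.
    return sum(records) + laplace_noise(1.0 / epsilon)

random.seed(1)
records = [1] * 500 + [0] * 500   # 500 yes-answers out of 1000
for eps in (0.1, 1.0, 10.0):
    # Smaller epsilon -> larger noise scale -> noisier, more private answer
    print(eps, round(dp_count(records, eps), 1))
```

<p>With \u03b5 = 0.1 the reported count can be off by dozens, while with \u03b5 = 10 it is typically off by only a fraction of a unit; the privacy parameter directly controls this spread.<\/p>\n<p>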
Each query has an assigned value of \u03b5, based on the question: <em>How much could an attacker learn about anyone from this query?<\/em>&nbsp; It is thus a measure of privacy loss.<\/p>\n<p>A query from which more knowledge about individuals can be obtained requires a lower value of \u03b5. The privacy budget must be granted carefully, since<\/p>\n<ul>\n<li>it adds up across queries: two queries with \u03b5 = 1 add up to \u03b5 = 2<\/li>\n<li>it grows exponentially: a query with \u03b5 = 1 is almost 3 times more private than one with \u03b5 = 2, and more than 8,000 times more private than one with \u03b5 = 10.<\/li>\n<\/ul>\n<p>The price of stronger privacy preservation, however, is lower accuracy (a worse approximation) of the statistical values.<\/p>\n<table style=\"height: 113px;\" width=\"369\">\n<tbody>\n<tr>\n<td><\/td>\n<td><strong>High \u03b5<\/strong><\/td>\n<td><strong>Low \u03b5<\/strong><\/td>\n<\/tr>\n<tr>\n<td><strong>Accuracy<\/strong><\/td>\n<td><em>High<\/em><\/td>\n<td><em>Low<\/em><\/td>\n<\/tr>\n<tr>\n<td><strong>Privacy<\/strong><\/td>\n<td><em>Low<\/em><\/td>\n<td><em>High<\/em><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>An advantage of the approach of differential privacy is that it reduces the tradeoff between accuracy\/utility on the one hand and privacy on the other to one key figure, \u03b5. Some experts in theoretical computer science and data protection recommend values between zero and one for \u03b5 for a good level of privacy, and consider one to ten still acceptable. 
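<\/p>\n<p>These figures follow directly from the factor e<sup>\u03b5<\/sup> in the definition: the worst-case bound of a query with \u03b5 = 1 is e \u2248 2.72 times tighter than with \u03b5 = 2, and e<sup>9<\/sup> \u2248 8,103 times tighter than with \u03b5 = 10, while the bounds of sequential queries multiply, so their \u03b5 values add. A quick check of these figures:<\/p>

```python
from math import exp, isclose

# A mechanism's worst-case distinguishing power grows like e**eps.
ratio_1_vs_2 = exp(2) / exp(1)     # eps = 1 vs eps = 2
ratio_1_vs_10 = exp(10) / exp(1)   # eps = 1 vs eps = 10

# Sequential composition: e**1 * e**1 == e**2, i.e. two eps = 1
# queries consume a combined budget of eps = 2.
assert isclose(exp(1) * exp(1), exp(2))

print(round(ratio_1_vs_2, 2))   # 2.72
print(round(ratio_1_vs_10))     # 8103
```

<p>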
They advise against values higher than ten, as privacy decreases significantly when requests are granted a budget above this.<\/p>\n<h1>Variants<\/h1>\n<p>As stated earlier, noise can be added at two different points in the overall process of data collection and analysis.<\/p>\n<p>The method described above, adding noise already when collecting the data (randomizing some of the answers to a polar question in a survey), belongs to a variant of differential privacy called \u201clocally private\u201d. An advantage of this is that the participant adds the noise himself, so the analysing party only ever sees noisy data and does not need to be trusted. However, because the information is noisy, statistical values are usually less accurate than those computed on clean data. As noise is added in the very first step of collection, and after that only noisy data exists, the safety of the information is high.<\/p>\n<p>In contrast stands the \u201cglobally private\u201d approach: a curator collects (clean) data and adds noise to the aggregate, usually when giving a response. The participants need to trust the curator to perform this process correctly. Since the curator can operate on clean data, the accuracy is higher than with the locally private method. The safety, however, is lower, because trust is needed and the curator is a single point of failure.<\/p>\n<h1>Application<\/h1>\n<p>Large companies like Apple and Google have adopted techniques of differential privacy in their systems and software. Apple has used the approach since iOS 10 and macOS Sierra for various purposes:<\/p>\n<p>It is used in the Safari browser to monitor which websites consume a large amount of battery power. Yet, thanks to the approach of differential privacy, Apple cannot track visited websites per individual user. 
Furthermore, Apple analyses which words are typed in which context to improve the word recommendation service of the iOS keyboard; the same goes for emoji usage.<\/p>\n<p>Google implemented the approach of differential privacy in an open-source library and published it on GitHub for everyone to integrate into their own systems. It is called Randomized Aggregatable Privacy-Preserving Ordinal Response, in short RAPPOR. Since 2014 they have used this library in the Chrome browser to monitor which other software changes browser settings, and how, with the goal of detecting malware. Google also uses techniques from differential privacy in Google Maps to observe traffic density and integrate this information into planned routes.<\/p>\n<h1>Conclusion<\/h1>\n<p>Differential privacy is a promising approach offering a mathematical guarantee of privacy. However, it is not suitable for all kinds of data. It works best with a large number of records containing numeric data or a small, limited set of possible answers (for example yes\/no, or 1 to 5). Other types of data, like images or audio recordings, would be rendered useless by adding noise.<\/p>\n<p>The approach is already well researched theoretically, but not yet widely used in practice. A reason for that could be higher implementation costs compared to classic anonymization techniques.<\/p>\n<h1>Discussion<\/h1>\n<h2>Series of data points<\/h2>\n<p>Among existing applications of differential privacy, location pattern mining (such as in Google Maps) draws attention. Differential privacy seems useful in this context at first, as it addresses users\u2019 possible concern about exposing their current location to Google and related companies. 
At first sight it seems impossible for anyone but you to access your current location (using the locally private variant), or for anyone but the collecting party (here: Google, in the globally private approach).<\/p>\n<p>However, location information is typically collected as a series of data points. It is therefore practically possible to detect the progression and to spot outliers injected into the series by a randomization function. Thus, a user\u2019s exact individual location history can become accessible.<\/p>\n<h2>Correlation<\/h2>\n<p>The theoretical approach of differential privacy works for single questions or requests. Often, however, questions do not stand alone, as in a questionnaire or another form of survey. From a set of questions related to one topic, correlations and repeating patterns can be derived. This allows building models, for example according to social stereotypes. Again, outliers from patterns and typical models can be detected, as they are less likely to occur in the context of the other answers. Thus, privacy would be violated.<\/p>\n<p>Besides, combining multiple requests on data sources querying different but connectable or correlatable information can be used to deduce information about individuals that one would otherwise not have been able to see as \u201cclean data\u201d.<\/p>\n<h2>Collusion<\/h2>\n<p>A similar issue to that of correlatable data exists with collusion. Users can agree on making similar requests and combining the results afterwards. Thereby they can effectively exceed the privacy budget they were originally granted.<\/p>\n<h2>Privacy Budget<\/h2>\n<p>This leads to the question of defining the right privacy budget. It needs to be researched whether a static definition per user suffices or whether a dynamic definition would be more reasonable and more secure. 
This in turn raises the next question, namely which parameters should be considered when defining a dynamic privacy budget.<\/p>\n<p>All in all, differential privacy looks like a good approach to tackling privacy issues in some, but not all, cases. It depends on the type of data collected, the amount of information stored and the correlation between the records. Defining and granting the privacy budget should be carefully thought out, as it can be misused in the ways described above.<\/p>\n<hr>\n<p>For anyone interested in the mathematical basis and proofs underlying the approach of differential privacy, I suggest reading \u201cThe Algorithmic Foundations of Differential Privacy\u201d by Cynthia Dwork (Microsoft Research, USA) and Aaron Roth (University of Pennsylvania, USA).<\/p>\n<hr>\n<h4>Sources<\/h4>\n<p>Dwork C. (2006) Differential Privacy. In: Bugliesi M., Preneel B., Sassone V., Wegener I. (eds) Automata, Languages and Programming. ICALP 2006. Lecture Notes in Computer Science, vol 4052. Springer, Berlin, Heidelberg; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/dwork.pdf\">https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/dwork.pdf<\/a> (28.06.18)<\/p>\n<p>Differential Privacy &#8211; Simply Explained by user \u201cSimply Explained &#8211; Savjee\u201d on YouTube; <a href=\"https:\/\/www.youtube.com\/watch?v=gI0wk1CXlsQ\">https:\/\/www.youtube.com\/watch?v=gI0wk1CXlsQ<\/a> (28.06.18)<\/p>\n<p>Differential Privacy Technical Overview, Apple; <a href=\"https:\/\/images.apple.com\/privacy\/docs\/Differential_Privacy_Overview.pdf\">https:\/\/images.apple.com\/privacy\/docs\/Differential_Privacy_Overview.pdf<\/a> (28.06.18)<\/p>\n<p>Tore Dalenius. Towards a methodology for statistical disclosure control. Statistik Tidskrift, 15:429\u2013444, 1977.<\/p>\n<p>Bennett Cyphers. 
Understanding differential privacy and why it matters for digital rights (25.10.2017); <a href=\"https:\/\/www.accessnow.org\/understanding-differential-privacy-matters-digital-rights\/\">https:\/\/www.accessnow.org\/understanding-differential-privacy-matters-digital-rights\/<\/a> (28.06.18)<\/p>\n<p>Tianqing Zhu. Explainer: what is differential privacy and how can it protect your data? (18.03.2018); <a href=\"https:\/\/theconversation.com\/explainer-what-is-differential-privacy-and-how-can-it-protect-your-data-90686\">https:\/\/theconversation.com\/explainer-what-is-differential-privacy-and-how-can-it-protect-your-data-90686<\/a> (28.06.18)<\/p>\n","protected":false},"excerpt":{"rendered":"<p>It is widely known that tech companies, like Apple or Google and their partners collect and analyse an increasing amount of information. This includes information about the person itself, their interaction and their communication. It happens because of seemingly good motives such as: Recommendation services: e.g. word suggestions on smartphone keyboard Customizing a product or [&hellip;]<\/p>\n","protected":false},"author":876,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[26,651],"tags":[178,177,164],"ppma_author":[758],"class_list":["post-3925","post","type-post","status-publish","format-standard","hentry","category-secure-systems","category-system-designs","tag-data-protection","tag-differential-privacy","tag-privacy"],"aioseo_notices":[],"jetpack_featured_media_url":"","jetpack-related-posts":[{"id":10428,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2020\/08\/19\/gdpr-and-information-security-for-startups\/","url_meta":{"origin":3925,"position":0},"title":"GDPR and Information Security: A practical guide for Startups and small businesses","author":"Mario Koch","date":"19. 
August 2020","format":false,"excerpt":"Let me start with a story. My first contact with GDPR (general data protection regulation) and the topic of information security was during my bachelor throughout an app project. We had set ourselves the goal of uploading the app to Google Play Store by the end of the semester and\u2026","rel":"","context":"In &quot;Interactive Media&quot;","block_context":{"text":"Interactive Media","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/interactive-media\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2020\/08\/Screenshot-2020-08-19-at-10.47.49.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2020\/08\/Screenshot-2020-08-19-at-10.47.49.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2020\/08\/Screenshot-2020-08-19-at-10.47.49.png?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2020\/08\/Screenshot-2020-08-19-at-10.47.49.png?resize=700%2C400&ssl=1 2x"},"classes":[]},{"id":10720,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2020\/08\/24\/corona-warning-app\/","url_meta":{"origin":3925,"position":1},"title":"Corona Warning App","author":"Patrick Brenner","date":"24. August 2020","format":false,"excerpt":"In 2020 many things are different. People work and study from home, wear face masks and are facing restrictions in their fundamental rights. These measures and restrictions were taken to bring the global pandemic under control. More than 800.000 people have died as a result of Covid-19 (SARS-CoV-2) (25.08.2020). 
\"Let's\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2020\/08\/plot_rki_cwa_per_week-1.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2020\/08\/plot_rki_cwa_per_week-1.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2020\/08\/plot_rki_cwa_per_week-1.png?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2020\/08\/plot_rki_cwa_per_week-1.png?resize=700%2C400&ssl=1 2x"},"classes":[]},{"id":7327,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2019\/08\/30\/about-the-robustness-of-machine-learning\/","url_meta":{"origin":3925,"position":2},"title":"About the Robustness of Machine Learning","author":"Marcel Heisler","date":"30. August 2019","format":false,"excerpt":"In the past couple of years research in the field of machine learning (ML) has made huge progress which resulted in applications like automated translation, practical speech recognition for smart assistants, useful robots, self-driving cars and lots of others. 