{"id":27711,"date":"2025-07-08T20:05:04","date_gmt":"2025-07-08T18:05:04","guid":{"rendered":"https:\/\/blog.mi.hdm-stuttgart.de\/?p=27711"},"modified":"2025-07-08T20:42:47","modified_gmt":"2025-07-08T18:42:47","slug":"open-source-in-ai-principles-pitfalls-and-practicalities-for-enterprise-adoption","status":"publish","type":"post","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2025\/07\/08\/open-source-in-ai-principles-pitfalls-and-practicalities-for-enterprise-adoption\/","title":{"rendered":"Open Source in AI: Principles, Pitfalls, and Practicalities for Enterprise Adoption"},"content":{"rendered":"\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><strong>Note:<\/strong> This article was written for the module Enterprise IT (113601a) during the summer semester of 2025.<\/p>\n<\/blockquote>\n\n\n\n<h2 class=\"wp-block-heading\">Introduction<\/h2>\n\n\n\n<p>In various sectors, the rapid advancement of artificial intelligence (AI) is transforming business processes, product development, and competitive landscapes. The availability of open-source AI models, particularly language models, empowers enterprises to accelerate digital transformation, lower barriers to advanced technology, and foster in-house innovation without prohibitive licensing costs. However, as companies increasingly integrate AI into their operations, questions surrounding the true openness, legal compliance, and reliability of these systems become increasingly significant. 
This article investigates the extension of open-source principles from traditional software to AI, maps the current landscape of enterprise-relevant language models, and critically evaluates the real-world implications of \u201copenness\u201d for organizations navigating regulatory and operational complexities.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Principles of Open Source Software<\/h2>\n\n\n\n<p>Open source fundamentally reshapes the way software is developed and distributed, driven by the principle of breaking down barriers to learning, using, sharing, and improving systems. The Open Source Definition (OSD) formalizes these ideas by granting anyone the rights to use, study, modify, and share software, with the goal of fostering autonomy, transparency, seamless reuse, and collaborative advancement [<a href=\"#tref2\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">2<\/a>]. These freedoms, rooted in the Free Software Definition, become the cornerstone of modern software innovation, yielding substantial benefits for both individuals and organizations [<a href=\"#tref2\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">2<\/a>].<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Open Source AI Definition (OSAID)<\/h3>\n\n\n\n<p>Building on these foundations, the Open Source AI Definition (OSAID) adapts the core concepts of open source to the unique context of AI. While the OSD requires access to software\u2019s preferred form for modification\u2014its source code, unobfuscated and complete [<a href=\"#tref1\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">1<\/a>]\u2014the OSAID extends this requirement to encompass the complexities of machine learning systems [<a href=\"#tref2\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">2<\/a>]. 
For an AI system to be considered truly open source, not only the full codebase but also detailed data documentation and model parameters (such as weights and configuration files) must be available. This ensures that a knowledgeable user can recreate and meaningfully modify the system, mirroring the practical transparency that open source software enjoys [<a href=\"#tref2\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">2<\/a>].<\/p>\n\n\n\n<p>Another essential feature that both definitions share is the emphasis on unrestricted use and distribution. The OSD mandates free redistribution without licensing fees and prohibits discrimination against any person, group, or field of endeavor [<a href=\"#tref1\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">1<\/a>]. Derivative works must be allowed under the same licensing terms as the original software [<a href=\"#tref1\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">1<\/a>]. Similarly, the OSAID guarantees the right to use, study, modify, and share AI systems for any purpose, with or without changes, and allows licenses to require that modified versions remain equally open [<a href=\"#tref2\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">2<\/a>]. This symmetry guarantees that the proven principles of open source continue to foster innovation and collaboration as they are applied to the rapidly evolving domain of AI.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">LLMs versus SLMs<\/h2>\n\n\n\n<p>Although AI encompasses a wide range of model categories, this discussion focuses on language models, given their pivotal role in propelling recent advances in generative AI and natural language processing. 
Large Language Models (LLMs) are a class of \u201cfoundation models\u201d that have been trained on massive datasets and are capable of understanding and generating natural language as well as other forms of content for a myriad of tasks [<a href=\"#tref5\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">5<\/a>]. Their performance in text generation, summarization, translation, and conversation plays a central role in popularizing generative AI technologies [<a href=\"#tref5\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">5<\/a>]. Small Language Models (SLMs), while also designed to process, understand, and generate natural language, are distinguished by their more compact architecture: SLMs typically range from a few million to a few billion parameters, whereas LLMs can contain hundreds of billions or even trillions of parameters (see Figure <a href=\"#tfig1\" data-type=\"internal\" data-id=\"#tfig1\" rel=\"nofollow\">1<\/a>) [<a href=\"#tref3\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">3<\/a>]. 
This enhancement in efficiency leads to reduced memory and computational demands, rendering SLMs well-suited for environments with limited resources, such as edge devices and mobile applications [<a href=\"#tref3\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">3<\/a>].<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large is-resized is-style-default\" id=\"tfig1\"><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/07\/llm_slm_param_comp.jpg\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"819\" data-attachment-id=\"27727\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2025\/07\/08\/open-source-in-ai-principles-pitfalls-and-practicalities-for-enterprise-adoption\/llm_slm_param_comp\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/07\/llm_slm_param_comp.jpg\" data-orig-size=\"1694,1355\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"llm_slm_param_comp\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/07\/llm_slm_param_comp-1024x819.jpg\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/07\/llm_slm_param_comp-1024x819.jpg\" alt=\"\" class=\"wp-image-27727\" style=\"width:auto;height:400px\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/07\/llm_slm_param_comp-1024x819.jpg 1024w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/07\/llm_slm_param_comp-300x240.jpg 300w, 
https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/07\/llm_slm_param_comp-768x614.jpg 768w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/07\/llm_slm_param_comp-1536x1229.jpg 1536w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/07\/llm_slm_param_comp.jpg 1694w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><figcaption class=\"wp-element-caption\"><strong><a href=\"#tfig1\" rel=\"nofollow\">Figure 1<\/a><\/strong>: Comparison of the average (mean) and maximum parameter sizes for LLMs and SLMs.<\/figcaption><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\">Prominent Examples of LLMs<\/h3>\n\n\n\n<p>Some of the most widely recognized LLMs include OpenAI\u2019s GPT-3 and GPT-4, which are supported by Microsoft and broadly accessible to the public [<a href=\"#tref5\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">5<\/a>]. Other notable examples include Google\u2019s BERT\/RoBERTa and PaLM models [<a href=\"#tref5\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">5<\/a>]. Meta released its Llama models as open source, with Llama 2 positioned as an open foundation model and Llama 3.1 (with 405 billion parameters) also available as open source [<a href=\"#tref5\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">5<\/a>,<a href=\"#tref6\" data-type=\"internal\" data-id=\"#tref6\" rel=\"nofollow\">6<\/a>]. IBM introduced its Granite model series on <a href=\"https:\/\/www.ibm.com\/products\/watsonx-ai\" data-type=\"link\" data-id=\"https:\/\/www.ibm.com\/products\/watsonx-ai\" target=\"_blank\" rel=\"noreferrer noopener\">watsonx.ai<\/a>, which serve as the generative AI backbone for other IBM products and are released as open source for commercial use [<a href=\"#tref3\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">3<\/a>,<a href=\"#tref5\" data-type=\"internal\" data-id=\"#tref6\" rel=\"nofollow\">5<\/a>]. 
Another prominent open-source LLM is Mistral AI\u2019s Mixtral 8x22B, a Mixture-of-Experts (MoE) model [<a href=\"#tref5\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">5<\/a>,<a href=\"#tref6\" data-type=\"internal\" data-id=\"#tref6\" rel=\"nofollow\">6<\/a>].<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Prominent Examples of SLMs<\/h3>\n\n\n\n<p>In the realm of SLMs, several noteworthy options exist. DistilBERT is a streamlined variant of Google\u2019s influential BERT model [<a href=\"#tref3\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">3<\/a>]. OpenAI offers GPT-4o mini, a more compact and cost-effective version of GPT-4o with multimodal capabilities that accepts both text and image inputs; this model is accessible to ChatGPT users and developers via API, replacing GPT-3.5 [<a href=\"#tref3\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">3<\/a>]. The IBM Granite series also includes SLMs with 2 and 8 billion parameters, available as open source and optimized for low latency [<a href=\"#tref3\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">3<\/a>]. Meta\u2019s Llama series features smaller versions like Llama 3.2, with 1 and 3 billion parameters, including highly efficient quantized variants [<a href=\"#tref3\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">3<\/a>]. Mistral AI\u2019s Ministral models (3 and 8 billion parameters) are additional open-source SLMs [<a href=\"#tref3\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">3<\/a>]. Microsoft\u2019s Phi suite includes SLMs such as Phi-2 (2.7 billion parameters) and Phi-3-mini (3.8 billion parameters), which are available through platforms like Microsoft Azure AI Studio, Hugging Face, and Ollama [<a href=\"#tref3\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">3<\/a>]. 
Further efficient, compact models include TinyBERT, BabyLLaMA, TinyLLaMA, and MobileLLM, all of which aim to achieve high efficiency via knowledge transfer and parameter sharing techniques [<a href=\"#tref4\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">4<\/a>].<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Accessibility and Open-Source Trends<\/h3>\n\n\n\n<p>Access to these models varies: many LLMs are offered via APIs or through platforms like <a href=\"https:\/\/www.ibm.com\/products\/watsonx-ai\" data-type=\"link\" data-id=\"https:\/\/www.ibm.com\/products\/watsonx-ai\" target=\"_blank\" rel=\"noreferrer noopener\">watsonx.ai<\/a> [<a href=\"#tref5\" data-type=\"internal\" data-id=\"#tref1\" rel=\"nofollow\">5<\/a>]. However, a growing number of SLMs\u2014and some LLM variants\u2014are released as open source, which accelerates research and development while expanding accessibility to a broader audience [<a href=\"#tref3\" data-type=\"internal\" data-id=\"#tref3\" rel=\"nofollow\">3<\/a>,<a href=\"#tref5\" data-type=\"internal\" data-id=\"#tref5\" rel=\"nofollow\">5<\/a>,<a href=\"#tref6\" data-type=\"internal\" data-id=\"#tref6\" rel=\"nofollow\">6<\/a>]. There is a clear trend toward the development of more efficient models that deliver strong performance even in resource-limited settings [<a href=\"#tref3\" data-type=\"internal\" data-id=\"#tref3\" rel=\"nofollow\">3<\/a>,<a href=\"#tref4\" data-type=\"internal\" data-id=\"#tref4\" rel=\"nofollow\">4<\/a>].<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Superficial Openness<\/h2>\n\n\n\n<p>When it comes to AI, \u201copen source\u201d is far from a binary attribute\u2014rather, openness is a composite and graduated property that can vary widely across AI models marketed as open source [<a href=\"#tref7\" data-type=\"internal\" data-id=\"#tref7\" rel=\"nofollow\">7<\/a>]. 
A meaningful assessment of how open an AI model truly is must consider 14 dimensions grouped into three main categories: availability (such as source code, language model weights and data, fine-tuning data and weights, and licensing), documentation (including code, model architecture, scientific preprints and peer-reviewed publications, model cards, and datasheets), and access (such as software packages and APIs) [<a href=\"#tref7\" data-type=\"internal\" data-id=\"#tref7\" rel=\"nofollow\">7<\/a>]. Truly open models like BLOOMZ or OLMo Instruct exemplify high transparency by providing access to training data, source code, and comprehensive scientific documentation (see Figure <a href=\"#tfig2\" data-type=\"internal\" data-id=\"#tfig2\" rel=\"nofollow\">2<\/a>), which together enable thorough scrutiny and independent verification [<a href=\"#tref7\" data-type=\"internal\" data-id=\"#tref7\" rel=\"nofollow\">7<\/a>].<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large is-resized\" id=\"tfig2\"><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/07\/openness_matrix-scaled.jpg\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"216\" data-attachment-id=\"27729\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2025\/07\/08\/open-source-in-ai-principles-pitfalls-and-practicalities-for-enterprise-adoption\/openness_matrix\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/07\/openness_matrix-scaled.jpg\" data-orig-size=\"2560,541\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" 
data-image-title=\"openness_matrix\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/07\/openness_matrix-1024x216.jpg\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/07\/openness_matrix-1024x216.jpg\" alt=\"\" class=\"wp-image-27729\" style=\"width:auto;height:180px\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/07\/openness_matrix-1024x216.jpg 1024w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/07\/openness_matrix-300x63.jpg 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/07\/openness_matrix-768x162.jpg 768w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/07\/openness_matrix-1536x324.jpg 1536w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/07\/openness_matrix-2048x432.jpg 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><figcaption class=\"wp-element-caption\"><strong><a href=\"#tfig1\">Figure 2<\/a><\/strong>: Overview of the openness matrix for three prominent LLMs (OLMo 7B Instruct, BLOOMZ, ChatGPT). This table is adapted from Figure 2 in Liesenfeld et al. [<a href=\"#tref7\" data-type=\"internal\" data-id=\"#tref7\" rel=\"nofollow\">7<\/a>].<\/figcaption><\/figure>\n\n\n\n<p>In contrast, the phenomenon of \u201c<a href=\"https:\/\/michellethorne.cc\/2009\/03\/openwashing\/\" data-type=\"link\" data-id=\"https:\/\/michellethorne.cc\/2009\/03\/openwashing\/\" target=\"_blank\" rel=\"noreferrer noopener\">open-washing<\/a>\u201d becomes increasingly common. This occurs when companies present themselves as open while withholding critical details about training and fine-tuning processes\u2014often to avoid scientific, legal, or regulatory scrutiny [<a href=\"#tref7\" data-type=\"internal\" data-id=\"#tref7\" rel=\"nofollow\">7<\/a>]. 
A telltale sign of this practice is the so-called \u201crelease-by-blogpost\u201d strategy: models are promoted as open, yet their documentation and supporting information fail to meet the standards of scientific publication or peer review (see Figure <a href=\"#tfig2\" data-type=\"internal\" data-id=\"#tfig2\" rel=\"nofollow\">2<\/a>) [<a href=\"#tref7\" data-type=\"internal\" data-id=\"#tref7\" rel=\"nofollow\">7<\/a>]. Many such models are, at best, \u201copen weight\u201d\u2014they release only model weights under an open license, while key elements like training datasets and methodologies remain undisclosed [<a href=\"#tref7\" data-type=\"internal\" data-id=\"#tref7\" rel=\"nofollow\">7<\/a>].<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Consequences<\/h3>\n\n\n\n<p>This superficial approach to openness is not just misleading\u2014it can have serious negative impacts on innovation, research, and the public understanding of AI [<a href=\"#tref8\" data-type=\"internal\" data-id=\"#tref7\" rel=\"nofollow\">8<\/a>]. Researchers may be unable to properly audit or adapt models, and the broader perception of AI technology can become distorted [<a href=\"#tref8\" data-type=\"internal\" data-id=\"#tref7\" rel=\"nofollow\">8<\/a>]. Looking ahead, the recently enacted\u2014but only partially applicable\u2014European Union Artificial Intelligence Act (EU AI Act), which exempts open-source models from certain disclosure requirements, may unintentionally incentivize further open-washing if the definition of \u201copen source\u201d is not sufficiently robust and relies too heavily on simple licensing [<a href=\"#tref7\" data-type=\"internal\" data-id=\"#tref7\" rel=\"nofollow\">7<\/a>]. 
In reality, genuine, multidimensional openness is essential for risk analysis, verifiability, scientific reproducibility, and legal accountability in AI systems [<a href=\"#tref7\" data-type=\"internal\" data-id=\"#tref7\" rel=\"nofollow\">7<\/a>].<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Open-Source AI Models for Industry and Research<\/h2>\n\n\n\n<p>Open-source AI models offer industry\u2014especially in the era of \u201cGeneral Purpose AI\u201d (GPAI) and foundation models\u2014a wide range of advantages that significantly impact the development and deployment of AI systems. By enabling democratized access to advanced technologies, they allow academic researchers, start-ups, and developers to use, modify, and build upon these models without encountering significant financial or licensing constraints [<a href=\"#tref10\" data-type=\"internal\" data-id=\"#tref11\" rel=\"nofollow\">10<\/a>,<a href=\"#tref11\" data-type=\"internal\" data-id=\"#tref12\" rel=\"nofollow\">11<\/a>]. This leads to more affordable digital innovation, particularly for the public sector and small and medium-sized enterprises (SMEs) that would otherwise struggle with the high costs of proprietary systems [<a href=\"#tref11\" data-type=\"internal\" data-id=\"#tref12\" rel=\"nofollow\">11<\/a>,<a href=\"#tref13\" data-type=\"internal\" data-id=\"#tref14\" rel=\"nofollow\">13<\/a>].<br><br>A core benefit is the enhanced transparency and interpretability of these models [<a href=\"#tref11\" data-type=\"internal\" data-id=\"#tref12\" rel=\"nofollow\">11<\/a>]. Since model weights, architectures, and often even training data are publicly available, developers and researchers can better understand the inner workings of these systems, trace and fix errors, and foster reproducibility [<a href=\"#tref11\" data-type=\"internal\" data-id=\"#tref12\" rel=\"nofollow\">11<\/a>,<a href=\"#tref13\" data-type=\"internal\" data-id=\"#tref14\" rel=\"nofollow\">13<\/a>]. 
This openness encourages collaborative innovation and a broader expert community working together to improve and secure the models [<a href=\"#tref11\" data-type=\"internal\" data-id=\"#tref12\" rel=\"nofollow\">11<\/a>,<a href=\"#tref13\" data-type=\"internal\" data-id=\"#tref14\" rel=\"nofollow\">13<\/a>]. In sectors such as healthcare, open models can enhance clinical support and public health surveillance by providing access to high-quality AI tools, particularly in resource-constrained environments [<a href=\"#tref11\" data-type=\"internal\" data-id=\"#tref12\" rel=\"nofollow\">11<\/a>].<br><br>Despite these advantages, open-source AI models also bring significant drawbacks and risks. Their openness renders them susceptible to attacks and manipulation, including the introduction of malicious or biased content through adversarial fine-tuning or prompt injection [<a href=\"#tref11\" data-type=\"internal\" data-id=\"#tref12\" rel=\"nofollow\">11<\/a>]. This can result in hallucinated or inaccurate information, damaging public trust or causing poor decisions in critical domains such as medicine [<a href=\"#tref9\" data-type=\"internal\" data-id=\"#tref9\" rel=\"nofollow\">9<\/a>,<a href=\"#tref11\" data-type=\"internal\" data-id=\"#tref12\" rel=\"nofollow\">11<\/a>,<a href=\"#tref12\" data-type=\"internal\" data-id=\"#tref13\" rel=\"nofollow\">12<\/a>]. 
The inheritance of bias from training data, especially from web-scraped sources, can exacerbate societal inequalities and lead to unfair or discriminatory outcomes [<a href=\"#tref11\" data-type=\"internal\" data-id=\"#tref12\" rel=\"nofollow\">11<\/a>,<a href=\"#tref12\" data-type=\"internal\" data-id=\"#tref13\" rel=\"nofollow\">12<\/a>,<a href=\"#tref13\" data-type=\"internal\" data-id=\"#tref14\" rel=\"nofollow\">13<\/a>].<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Operational and Regulatory Concerns<\/h3>\n\n\n\n<p>Another concern is the challenge of implementing data protection rights\u2014such as correction, deletion, or access to personal data\u2014since LLMs store information as billions of parameters rather than in traditional databases [<a href=\"#tref12\" data-type=\"internal\" data-id=\"#tref13\" rel=\"nofollow\">12<\/a>]. This complicates attribution and liability for harms caused by faulty AI-driven decisions, given the complexity and limited interpretability of these models [<a href=\"#tref11\" data-type=\"internal\" data-id=\"#tref12\" rel=\"nofollow\">11<\/a>,<a href=\"#tref13\" data-type=\"internal\" data-id=\"#tref14\" rel=\"nofollow\">13<\/a>].<br><br>Operational challenges include potentially higher inference times for quantized models [<a href=\"#tref9\" data-type=\"internal\" data-id=\"#tref9\" rel=\"nofollow\">9<\/a>] and increased computational costs for complex reasoning tasks that require large token throughput [<a href=\"#tref11\" data-type=\"internal\" data-id=\"#tref12\" rel=\"nofollow\">11<\/a>]. The absence of standardization in documentation and training information for many open-source models further complicates transparency and reproducibility [<a href=\"#tref10\" data-type=\"internal\" data-id=\"#tref11\" rel=\"nofollow\">10<\/a>]. 
Lastly, there is often regulatory and legal uncertainty regarding compliance with global data protection and security standards (e.g., GDPR, HIPAA), which can hinder deployment in sensitive industries [<a href=\"#tref11\" data-type=\"internal\" data-id=\"#tref12\" rel=\"nofollow\">11<\/a>,<a href=\"#tref13\" data-type=\"internal\" data-id=\"#tref14\" rel=\"nofollow\">13<\/a>].<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>The integration of open-source principles into AI development offers transformative benefits for industry and enterprise\u2014enabling rapid innovation, reducing costs, and fostering collaboration across organizational boundaries. However, realizing these advantages requires more than nominal openness. Enterprises must navigate the complexities of model transparency, legal compliance, and operational risks, particularly in the face of \u201copen-washing\u201d and evolving regulatory frameworks. As the EU AI Act comes into full effect, the enterprise sector plays a crucial role in demanding and defining what true openness means in practice. Only through genuine commitment to transparency, documentation, and open collaboration can organizations fully leverage the potential of open-source AI, ensuring both competitive advantage and public trust.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">References<\/h2>\n\n\n\n<p>[<a href=\"#tref1\" id=\"tref1\">1<\/a>] Open Source Initiative, \u201cThe Open Source Definition\u201d, <em>Open Source Initiative<\/em>. [Online]. Available: <a href=\"https:\/\/opensource.org\/osd\">https:\/\/opensource.org\/osd<\/a>.<br>[<a href=\"#tref2\" id=\"tref2\">2<\/a>] Open Source Initiative, \u201cThe Open Source AI Definition \u2013 1.0\u201d, <em>Open Source Initiative<\/em>. [Online]. 
Available: <a href=\"https:\/\/opensource.org\/ai\/open-source-ai-definition\">https:\/\/opensource.org\/ai\/open-source-ai-definition<\/a>.<br>[<a href=\"#tref3\" id=\"tref3\">3<\/a>] R. D. Caballar, \u201cWhat are Small Language Models?\u201d, <em>IBM Think<\/em>. [Online]. Available: <a href=\"https:\/\/www.ibm.com\/think\/topics\/small-language-models\">https:\/\/www.ibm.com\/think\/topics\/small-language-models<\/a>.<br>[<a href=\"#tref4\" id=\"tref4\">4<\/a>] Nguyen, C. V., Shen, X., Aponte, R., Xia, Y., Basu, S., Hu, Z., Chen, J., Parmar, M., Kunapuli, S., Barrow, J., Wu, J., Singh, A., Wang, Y., Gu, J., Dernoncourt, F., Ahmed, N. K., Lipka, N., Zhang, R., Chen, X., Yu, T., Kim, S., Deilamsalehy, H., Park, N., Rimer, M., Zhang, Z., Yang, H., Rossi, R. A., and Nguyen, T. H., \u201cA Survey of Small Language Models\u201d, <em>arXiv preprint<\/em>, 2024. doi: <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2410.20011\" data-type=\"link\" data-id=\"https:\/\/doi.org\/10.48550\/arXiv.2410.20011\">10.48550\/arXiv.2410.20011<\/a>.<br>[<a href=\"#tref5\" id=\"tref5\">5<\/a>] [Author(s) not specified], \u201cWhat Are Large Language Models (LLMs)?\u201d, <em>IBM Think<\/em>. [Online]. Available: <a href=\"https:\/\/www.ibm.com\/think\/topics\/large-language-models\">https:\/\/www.ibm.com\/think\/topics\/large-language-models<\/a>.<br>[<a href=\"#tref6\" id=\"tref6\">6<\/a>] Naveed, H., Khan, A. U., Qiu, S., Saqib, M., Anwar, S., Usman, M., Akhtar, N., Barnes, N., and Mian, A., \u201cA Comprehensive Overview of Large Language Models,\u201d <em>arXiv preprint<\/em>, submitted July 12, 2023; last revised October 17, 2024. 
doi: <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2307.06435\" data-type=\"link\" data-id=\"https:\/\/doi.org\/10.48550\/arXiv.2307.06435\">10.48550\/arXiv.2307.06435<\/a>.<br>[<a href=\"#tref7\" id=\"tref7\">7<\/a>] Liesenfeld, A., and Dingemanse, M., \u201cRethinking open source generative AI: open\u2011washing and the EU AI Act\u201d, in <em>Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT \u201924)<\/em>, Rio de Janeiro, Brazil, pp. 1774\u20131787, 2024. doi: <a href=\"https:\/\/doi.org\/10.1145\/3630106.3659005\" data-type=\"link\" data-id=\"https:\/\/doi.org\/10.1145\/3630106.3659005\">10.1145\/3630106.3659005<\/a>.<br>[<a href=\"#tref8\" id=\"tref8\">8<\/a>] Greve, E., \u201cOpenness Hype and Open Washing: A Critical Analysis of Openness Discourses in Generative AI\u201d, Master\u2019s Major Research Paper, Department of Communication Studies and Media Arts, McMaster University, August 2024. [Online]. Available: <a href=\"https:\/\/macsphere.mcmaster.ca\/bitstream\/11375\/31548\/1\/Greve%2C%20Ellie_MRP%20Final.pdf\">https:\/\/macsphere.mcmaster.ca\/bitstream\/11375\/31548\/1\/Greve%2C%20Ellie_MRP%20Final.pdf<\/a>.<br>[<a href=\"#tref9\" id=\"tref9\">9<\/a>] Raj, M. J., Kushala, V. M., Warrier, H., and Gupta, Y., \u201cFine Tuning LLMs for Enterprise: Practical Guidelines and Recommendations\u201d, <em>arXiv preprint<\/em>, 2024. doi: <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2404.10779\" data-type=\"link\" data-id=\"https:\/\/doi.org\/10.48550\/arXiv.2404.10779\">10.48550\/arXiv.2404.10779<\/a>.<br>[<a href=\"#tref10\" id=\"tref10\">10<\/a>] Sangari, E., Abughoush, K., and Azarm, M., \u201cUnveiling the Dynamics of Open-Source AI Models: Development Trends, Industry Applications, and Challenges\u201d, in <em>Proceedings of the 58th Hawaii International Conference on System Sciences<\/em>, pp. 4838\u20134847, 2025. 
doi: <a href=\"https:\/\/doi.org\/10.24251\/HICSS.2025.582\" data-type=\"link\" data-id=\"https:\/\/doi.org\/10.24251\/HICSS.2025.582\">10.24251\/HICSS.2025.582<\/a>.<br>[<a href=\"#tref11\" id=\"tref11\">11<\/a>] Ye, J., Bronstein, S., Hai, J., and Abu Hashish, M., \u201cDeepSeek in Healthcare: A Survey of Capabilities, Risks, and Clinical Applications of Open\u2011Source Large Language Models\u201d, <em>arXiv preprint<\/em>, 2025. doi: <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2506.01257\" data-type=\"link\" data-id=\"https:\/\/doi.org\/10.48550\/arXiv.2506.01257\">10.48550\/arXiv.2506.01257<\/a>.<br>[<a href=\"#tref12\" id=\"tref12\">12<\/a>] Lareo, X., \u201cLarge language models (LLM)\u201d, <em>TechSonar<\/em>, European Data Protection Supervisor, in <em>TechSonar Report 2023\u20132024<\/em>, p. 6, 2024. [Online]. Available: <a href=\"https:\/\/www.edps.europa.eu\/data-protection\/technology-monitoring\/techsonar\/large-language-models-llm_en\">https:\/\/www.edps.europa.eu\/data-protection\/technology-monitoring\/techsonar\/large-language-models-llm_en<\/a>.<br>[<a href=\"#tref13\" id=\"tref13\">13<\/a>] Calanzone, D., Coppari, A., Tedoldi, R., Olivato, G., and Casonato, C., \u201cAn open source perspective on AI and alignment with the EU AI Act\u201d, in <em>AISafety\/SafeRL Workshop @ IJCAI 2023<\/em>, Macao, SAR China, 2023. [Online]. Available: <a href=\"https:\/\/halixness.github.io\/assets\/pdf\/calanzone_coppari_tedoldi.pdf\">https:\/\/halixness.github.io\/assets\/pdf\/calanzone_coppari_tedoldi.pdf<\/a>.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The rapid rise of open-source AI models is transforming enterprise innovation, but the true meaning of \u201copenness\u201d is often unclear. 
This article explores the principles behind open-source AI, examines key language models, and highlights the legal and operational challenges businesses face\u2014including the growing risk of \u201copen-washing.\u201d<\/p>\n","protected":false},"author":1260,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":"[]"},"categories":[652,660],"tags":[1115,106,1031,1116,353],"ppma_author":[1102],"class_list":["post-27711","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-chatgpt-and-language-models","tag-ai-for-business","tag-artificial-intelligence","tag-enterprise-it","tag-language-model","tag-open-source"],"aioseo_notices":[],"jetpack_featured_media_url":"","jetpack-related-posts":[{"id":27783,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2025\/07\/25\/open-source-ai-models-opportunities-and-challenges-for-enterprises\/","url_meta":{"origin":27711,"position":0},"title":"Open-Source AI Models \u2013 Opportunities and Challenges for Enterprises","author":"Julian Schniepp","date":"25. July 2025","format":false,"excerpt":"Note: This article was written for the module Enterprise IT (113601a) during the summer semester of 2025. 
Introduction AI has taken over the tech landscape, going from novelty and experimental technology to a critical piece of infrastructure in enterprise, and is perhaps the most spoken of technology related topic of\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/07\/image-1.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/07\/image-1.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/07\/image-1.png?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/07\/image-1.png?resize=700%2C400&ssl=1 2x"},"classes":[]},{"id":28189,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2026\/01\/18\/opening-new-frontiers-with-tiny-language-models\/","url_meta":{"origin":27711,"position":1},"title":"Opening new frontiers with Tiny Language Models","author":"Nikola Damyanov","date":"18. January 2026","format":false,"excerpt":"Note: This blog post was written for the lecture \u201eEnterprise IT (113601a)\u201c during the winter semester 2025\/26. In artificial intelligence bigger isn\u00b4t always better. 
While large language models (LLMs) often dominate the spotlight, a new generation of more compact versions, often called tiny or small language models (TLMs), is rapidly\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/01\/Opening-New-Frontiers-with-Tiny-Language-Models-Knowledge-Distillation-scaled.jpg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/01\/Opening-New-Frontiers-with-Tiny-Language-Models-Knowledge-Distillation-scaled.jpg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/01\/Opening-New-Frontiers-with-Tiny-Language-Models-Knowledge-Distillation-scaled.jpg?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/01\/Opening-New-Frontiers-with-Tiny-Language-Models-Knowledge-Distillation-scaled.jpg?resize=700%2C400&ssl=1 2x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/01\/Opening-New-Frontiers-with-Tiny-Language-Models-Knowledge-Distillation-scaled.jpg?resize=1050%2C600&ssl=1 3x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/01\/Opening-New-Frontiers-with-Tiny-Language-Models-Knowledge-Distillation-scaled.jpg?resize=1400%2C800&ssl=1 4x"},"classes":[]},{"id":24427,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2023\/03\/03\/ai-and-scaling-the-compute-for-the-new-moores-law\/","url_meta":{"origin":27711,"position":2},"title":"AI and Scaling the Compute for the new Moore\u2019s Law","author":"Marvin Blessing","date":"3. March 2023","format":false,"excerpt":"AI and Scaling the Compute becomes more relevant as the strive for larger language models and general purpose AI continues. 
The future of the trend is unknown as the rate of doubling the compute outpaces Moore's Law rate of every two year to a 3.4 month doubling. IntroductionRequiring compute beyond\u2026","rel":"","context":"In &quot;Artificial Intelligence&quot;","block_context":{"text":"Artificial Intelligence","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/artificial-intelligence\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/03\/image-4.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/03\/image-4.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/03\/image-4.png?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/03\/image-4.png?resize=700%2C400&ssl=1 2x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/03\/image-4.png?resize=1050%2C600&ssl=1 3x"},"classes":[]},{"id":24243,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2023\/03\/03\/modern-application-of-voice-ai-technology\/","url_meta":{"origin":27711,"position":3},"title":"Modern application of Voice AI technology","author":"Ngoc Ton","date":"3. March 2023","format":false,"excerpt":"With the advancement of technology and the gradually increasing use of artificial intelligence, new markets are developed. One of such is the market of Voice AI which became a commercial success with voice bots such as Alexa or Siri. 
They were mainly used as digital assistants who could answer questions,\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/03\/01.jpg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/03\/01.jpg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/03\/01.jpg?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2023\/03\/01.jpg?resize=700%2C400&ssl=1 2x"},"classes":[]},{"id":26976,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2025\/02\/24\/challenges-in-demand-forecasting-in-the-automotive-industry-traditional-vs-ai-ml-based-approaches\/","url_meta":{"origin":27711,"position":4},"title":"Challenges in Demand Forecasting in the Automotive Industry: Traditional vs. AI\/ML-Based Approaches","author":"yw016","date":"24. February 2025","format":false,"excerpt":"Note:\u00a0This blog post was written for the module\u00a0Enterprise IT (113601a). Introduction Demand forecasting is all about estimating the future demand for a product or service. The automotive industry operates in highly volatile markets with complex supply chains, making demand forecasting a critical challenge. 
Recent developments, such as fluctuating sales, changing\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/02\/image-13.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/02\/image-13.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/02\/image-13.png?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2025\/02\/image-13.png?resize=700%2C400&ssl=1 2x"},"classes":[]},{"id":4024,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2018\/08\/22\/why-ai-is-a-threat-for-our-digital-security\/","url_meta":{"origin":27711,"position":5},"title":"Why AI is a Threat for our Digital Security","author":"Katharina Strecker","date":"22. August 2018","format":false,"excerpt":"Artificial intelligence has a great potential to improve many areas of our lives in the future. But what happens when these AI technologies are used maliciously? Sure, a big topic may be autonomous weapons or so called \u201ckiller robots\u201d. 
But beside our physical security - what about our digital one?\u2026","rel":"","context":"In &quot;Artificial Intelligence&quot;","block_context":{"text":"Artificial Intelligence","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/artificial-intelligence\/"},"img":{"alt_text":"Computer image recognition has beaten human-level image recognition in 2015","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/08\/human-level-image-recongition-1024x717.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/08\/human-level-image-recongition-1024x717.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/08\/human-level-image-recongition-1024x717.png?resize=525%2C300&ssl=1 1.5x"},"classes":[]}],"jetpack_sharing_enabled":true,"authors":[{"term_id":1102,"user_id":1260,"is_guest":0,"slug":"tobias_metzger","display_name":"Tobias 
Metzger","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/398931caf6b599a8060406a0ce13ac8bd506da352f12f635a4a57420fadcd8b7?s=96&d=mm&r=g","0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""}],"_links":{"self":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/27711","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/users\/1260"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/comments?post=27711"}],"version-history":[{"count":23,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/27711\/revisions"}],"predecessor-version":[{"id":27748,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/27711\/revisions\/27748"}],"wp:attachment":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/media?parent=27711"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/categories?post=27711"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/tags?post=27711"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/ppma_author?post=27711"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}