{"id":28857,"date":"2026-03-01T20:53:54","date_gmt":"2026-03-01T19:53:54","guid":{"rendered":"https:\/\/blog.mi.hdm-stuttgart.de\/?p=28857"},"modified":"2026-03-01T21:22:05","modified_gmt":"2026-03-01T20:22:05","slug":"building-a-modern-c-project-zig-webassembly-and-visual-ci-cd","status":"publish","type":"post","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2026\/03\/01\/building-a-modern-c-project-zig-webassembly-and-visual-ci-cd\/","title":{"rendered":"Building a Modern C Project: Zig, WebAssembly, and Visual CI\/CD"},"content":{"rendered":"<p><em>How we used Zig as a build system, Emscripten for the web, and Python for automated visual regression testing on a C-based path tracer.<\/em><\/p>\n<h3 id=\"introduction\">1. Introduction<\/h3>\n<p>Writing a path tracer from scratch in C is a fantastic way to learn the physics of light simulation. But maintaining that project is where the real engineering challenges begin: ensuring it <strong>builds across platforms, catching memory leaks, deploying it to the web, and proving the math remains correct<\/strong> after every commit.<\/p>\n<p>We built <a href=\"https:\/\/github.com\/timo-eberl\/tracy\"><strong>Tracy<\/strong><\/a>, a cross-platform, <strong>multi-threaded C11 path tracer<\/strong>. While constructing the path tracer itself posed plenty of challenges, this post isn\u2019t about the math of rendering but about the infrastructure surrounding it. Here is how we used reliable tools like Python to build a <strong>visual CI\/CD pipeline<\/strong>, while treating the project as a sandbox for bleeding-edge, not-quite-production-ready tech: <strong>Zig to replace CMake<\/strong>, and Emscripten to bring heavy, <strong>multi-threaded<\/strong> native code to the <strong>browser<\/strong>. We also share the hard truths we learned along the way.<\/p>\n<h3 id=\"ditching-cmake-zig-as-a-c-build-system\">2. 
Ditching CMake: Zig as a C Build System<\/h3>\n<p>If you have ever built a C project of reasonable size, you know the traditional drill. You start with a simple, elegant Makefile. It works beautifully until you need cross-platform support, dependency management, or <strong>WebAssembly<\/strong> compilation. At that point, developers usually surrender and migrate to CMake, bracing themselves for a clunky scripting language that often feels like it\u2019s fighting against you.<\/p>\n<p>Since we wanted to compile our path tracer, Tracy, for Linux, Windows, and the web, we needed a modern alternative. For that purpose we chose <strong>Zig<\/strong>.<\/p>\n<p>Even though Tracy is written almost entirely in C11, Zig acts as a drop-in C\/C++ compiler (<code class=\"\" data-line=\"\">zig cc<\/code>). More importantly, it features a build system where configuration files (<code class=\"\" data-line=\"\">build.zig<\/code>) are written in a proper, strongly typed programming language. There are no weird macros or string-matching hacks; just standard, imperative code.<\/p>\n<p><em><strong>The Performance Hunt<\/strong><\/em><\/p>\n<p>However, it wasn\u2019t all sunshine and rainbows from day one. When we first migrated from our quick-and-dirty Unix Makefile to Zig, we noticed something alarming: Zig\u2019s <code class=\"\" data-line=\"\">ReleaseFast<\/code> mode (the build profile for maximum execution speed) was actually running slower than our previous <code class=\"\" data-line=\"\">clang -O3 -march=native -flto<\/code> Makefile setup. In a path tracer, where millions of ray-scene intersections occur every second, this was a major issue.<\/p>\n<p>We dug into the build graph and realized the problem lay in our linking strategy, specifically regarding our random number generator (RNG). Monte Carlo path tracing requires millions of random numbers per pixel. 
Initially, we were building our RNG dependency (<a href=\"https:\/\/www.pcg-random.org\/index.html\">the PCG library<\/a>) separately and linking it to our library afterwards. This separate compilation created a boundary that prevented the compiler from applying a crucial optimization: inlining the frequent RNG calls.<\/p>\n<p>Typically, the modern solution for crossing this compilation boundary is Link Time Optimization (LTO), but enabling it in Zig wasn\u2019t enough to fully catch up to the performance of the raw <code class=\"\" data-line=\"\">clang<\/code> command. Instead, we bypassed the linking step entirely by forcing a \u201cUnity build\u201d. We compiled the PCG source files directly alongside <code class=\"\" data-line=\"\">tracy.c<\/code>, which guarantees the compiler sees everything in a single translation unit and can inline the RNG calls.<\/p>\n<p>With this change, we not only matched our old Makefile\u2019s performance, but the Zig build actually ran <em>faster<\/em>.<\/p>\n<p>Ultimately, we created a single <code class=\"\" data-line=\"\">build.zig<\/code> file that generates our native C binaries and orchestrates our entire Zig-based test suite.<\/p>\n<p>However, we do have to admit one engineering reality check: our <code class=\"\" data-line=\"\">build.zig<\/code> does not currently handle our WebAssembly target. While Zig has impressive cross-compilation capabilities, we hit a roadblock trying to get it to compile our OpenMP multi-threading directives for the web. Rather than fighting the toolchain, we made the pragmatic choice to rely on Emscripten (<code class=\"\" data-line=\"\">emcc<\/code>) for the web build, orchestrated with Vite using standard Node.js scripts.<\/p>\n<h3 id=\"white-box-testing-c-code-with-zig\">3. 
White-Box Testing C Code with Zig<\/h3>\n<p>Testing C code is a well-solved problem, but it usually involves reaching for third-party frameworks like <a href=\"https:\/\/libcheck.github.io\/check\/\">Check<\/a>, <a href=\"https:\/\/cmocka.org\/\">CMocka<\/a>, or <a href=\"https:\/\/www.throwtheswitch.org\/unity\">Unity<\/a>. While these tools are robust, they often require wiring up separate build targets and writing a fair amount of macro-heavy boilerplate just to assert simple math operations.<\/p>\n<p>Since we were already using Zig to orchestrate our build, we decided to use its built-in test runner to test our C codebase. The integration promised to be seamless thanks to Zig\u2019s <code class=\"\" data-line=\"\">@cImport<\/code> builtin, which can parse C code directly.<\/p>\n<p>Instead of just importing our public <code class=\"\" data-line=\"\">tracy.h<\/code> header, our Zig test files import the actual <code class=\"\" data-line=\"\">tracy.c<\/code> source file. This allowed us to perform true <strong>white-box testing<\/strong>. We could instantiate internal C structs and write tests for private geometry intersection functions without exposing them in our public API or restructuring our codebase.<\/p>\n<p><em><strong>The Free Sanitizer Catch<\/strong><\/em><\/p>\n<p>Initially, this setup felt great thanks to Zig\u2019s compiler automatically instrumenting the code with safety checks similar to the Undefined Behavior Sanitizer (UBSan) found in Clang and GCC.<\/p>\n<p>This saved us from a notoriously difficult-to-debug graphical glitch early on. During our global HDR to LDR tonemapping phase, a calculated pixel luminance value was slightly exceeding its bounds before being assigned to an 8-bit unsigned integer. In our previous standard C compilation, this value silently overflowed, resulting in yellow speckles in the brightest parts of the generated images. 
Zig\u2019s automatic runtime safety checks caught the overflow instantly and pointed us to the exact line of C code that caused the issue.<\/p>\n<p>While enabling tools like UBSan in a traditional C setup is as simple as adding a compiler flag, having these checks baked into the default <code class=\"\" data-line=\"\">zig build test<\/code> command ensures that safety isn\u2019t an opt-in configuration you have to remember to enable.<\/p>\n<p><em><strong>The Reality Check: Zig\u2019s Rough Edges<\/strong><\/em><\/p>\n<p>However, our enthusiasm was eventually tempered by reality. While <code class=\"\" data-line=\"\">@cImport<\/code> sounds like magic, relying on a pre-1.0 language for complex C interop comes with severe limitations that made us question its production readiness.<\/p>\n<p>The first major issue was with Zig\u2019s <code class=\"\" data-line=\"\">translate-c<\/code> engine. In our C code, we define our scenes using C99 designated initializers combined with unions (e.g., <code class=\"\" data-line=\"\">{.shape.type=TRIANGLE, .shape.data.triangle={...}}<\/code>). The Zig translation engine completely choked on this syntax. Instead of failing gracefully, it fell back to declaring our scene definitions as <code class=\"\" data-line=\"\">extern<\/code> variables, causing \u201cundefined symbol\u201d errors from the linker. To fix it, we had to introduce a terrible hack: exporting dummy, zero-length arrays from Zig just to satisfy the linker so our unit tests would compile.<\/p>\n<p>While this is problematic and should be fixed on Zig\u2019s end, the most dangerous issue was the caching system. The absolute worst thing a test suite can do is give you a false positive. At one point, we modified the implementation in <code class=\"\" data-line=\"\">tracy.c<\/code> and ran <code class=\"\" data-line=\"\">zig build test<\/code>. The tests should have failed, but everything passed. 
We later realized that Zig\u2019s build cache had failed to detect the change in the underlying C file and simply re-ran a cached test executable. We had to nuke the <code class=\"\" data-line=\"\">.zig-cache<\/code> to get our tests to fail properly, and we eventually disabled caching for the unit tests in our CI pipeline to avoid being lied to.<\/p>\n<p>Ultimately, using Zig to test C code is a fascinating concept that yields real developer-experience (DX) wins. But until the translation engine and the build cache mature, it remains a tool that you have to handle with care.<\/p>\n<h3 id=\"automating-regressions-the-quest-for-physical-correctness\">4. Visual CI\/CD: The Quest for Physical Correctness<\/h3>\n<p>While unit testing individual C functions with Zig is fantastic for ensuring our vector math or ray-sphere intersections are mathematically sound, it falls short of validating the renderer as a whole. A microscopic bias in a material calculation or a missing cosine term in the integration loop might easily slip past isolated unit tests, yet completely break the physical accuracy of the final image.<\/p>\n<p>To guarantee the entire system works from end to end, we rely on <strong>image-to-image comparison<\/strong>. To test if a rendered image is \u201cphysically correct\u201d, we compare it to a ground truth. For this, we use <a href=\"https:\/\/www.mitsuba-renderer.org\/\">Mitsuba 3<\/a>, an industry-standard, heavily validated research path tracer. By recreating our test scenes (geometry, materials, and lighting) exactly in Mitsuba\u2019s XML format, we generate \u201cgolden\u201d reference images. Crucially, we output these as EXR files to preserve the raw, floating-point High Dynamic Range (HDR) light data. 
Comparing Tracy\u2019s output directly against these references allows us to test the entire rendering pipeline in one go.<\/p>\n<p><em><strong>Choosing the Right Metric<\/strong><\/em><\/p>\n<p>While relying on human visual inspection alone is possible if you know exactly what to look for, it\u2019s highly subjective and error-prone. This manual approach is better suited to optimizing look and feel (e.g., for a video game) than to verifying mathematical correctness.<\/p>\n<p>Humans are surprisingly bad at spotting math errors in rendered images, especially in high-contrast areas. This is mostly due to the brain\u2019s filtering of visual information; we are hypersensitive to contrast and edges but relatively blind to subtle shifts in uniformly colored surfaces or specific color channels where the eye has lower sensitivity. In cases where human-perceived quality is the main goal, perceptual metrics for image-to-image comparison such as <strong>SSIM<\/strong> or the more modern <strong>FLIP<\/strong> developed by NVIDIA can be used. These try to mimic the human \u201cfilter\u201d to automate the evaluation process.<\/p>\n<p>For Tracy, we need to know that our <strong>math is actually correct<\/strong>, not just that it looks pleasing to the eye. Human perception was no longer a sufficient benchmark, so we shifted our focus from perceptual metrics to physical ones.<\/p>\n<p>We initially looked at <strong>Mean Squared Error (MSE)<\/strong>, but standard MSE is notoriously biased toward bright pixels in HDR rendering. A tiny error in a bright area would spike the score, while a massive error in a dark shadow might go unnoticed. To fix this, we implemented <strong>Relative Mean Squared Error (RelMSE)<\/strong>, which normalizes each pixel\u2019s error by the brightness of the reference, so extreme HDR values no longer dominate the score. 
This serves as an objective measure of physical accuracy against our Mitsuba-generated ground truth.<\/p>\n<p>Ultimately, we landed on a comparison to a ground-truth image using RelMSE, which distills the quality of the entire render into a single, objective score.<\/p>\n<p><em><strong>Difference Maps<\/strong><\/em><\/p>\n<p>A single error score does tell you that something is wrong with the rendered image, but to pinpoint the error you need a visual representation. For this purpose, we generate <strong>difference maps<\/strong>. By comparing our render against the Mitsuba reference pixel-by-pixel, we produce an image where bright areas represent high error and dark areas represent high accuracy. This allows a developer to instantly see if a bug is localized to, for example, refractive surfaces or certain light sources.<\/p>\n<p>The comparison image below perfectly illustrates why relying on the naked eye is a trap. Looking at our render in the middle panel, the scene appears completely fine. The lighting, shadows, and glass all look subjectively \u201ccorrect.\u201d<\/p>\n<p>However, the difference map on the right reveals the mathematical reality. It highlights two distinct types of error: the faint, grainy texture across the walls is simply expected variance (standard Monte Carlo noise), while the glaringly bright ring around the edge of the glass sphere is a systematic failure. It instantly exposed a subtle bug in our Fresnel reflection logic at grazing angles. 
Without image diffing, a physical inaccuracy like that could have easily gone unnoticed.<\/p>\n<p>For even more precision, we could instead generate <strong>error heatmaps<\/strong>, applying a configurable color gradient that makes subtle flaws more obvious, but currently we are still using simple difference maps.<\/p>\n<figure id=\"attachment_28861\" aria-describedby=\"caption-attachment-28861\" style=\"width: 1920px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/Mitsuba-Reference-1.png\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"28861\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2026\/03\/01\/building-a-modern-c-project-zig-webassembly-and-visual-ci-cd\/mitsuba-reference-2\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/Mitsuba-Reference-1.png\" data-orig-size=\"1920,480\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"Mitsuba Reference\" data-image-description=\"\" data-image-caption=\"&lt;p&gt;Mitsuba Reference (left), Tracy Render (middle), Difference Map (right)&lt;\/p&gt;\n\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/Mitsuba-Reference-1-1024x256.png\" class=\"size-full wp-image-28861\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/Mitsuba-Reference-1.png\" alt=\"\" width=\"1920\" height=\"480\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/Mitsuba-Reference-1.png 1920w, 
https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/Mitsuba-Reference-1-300x75.png 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/Mitsuba-Reference-1-1024x256.png 1024w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/Mitsuba-Reference-1-768x192.png 768w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/Mitsuba-Reference-1-1536x384.png 1536w\" sizes=\"auto, (max-width: 1920px) 100vw, 1920px\" \/><\/a><figcaption id=\"caption-attachment-28861\" class=\"wp-caption-text\">Mitsuba Reference (left), Tracy Render (middle), Difference Map (right)<\/figcaption><\/figure>\n<p>While RelMSE gave us an objective accuracy score, computing metrics manually for every commit was not scalable. To make certain our math stayed correct and performant as the codebase grew, we integrated benchmarking, unit tests, and logging directly into a <strong>GitHub Actions CI\/CD pipeline<\/strong>. By archiving these metrics on every push, we laid the groundwork to automatically generate trendline graphs that visualize our renderer\u2019s improvements over time and catch regressions at a glance.<\/p>\n<p><em><strong>Continuous Integration<\/strong><\/em><\/p>\n<p>We build our binaries inside a dedicated Docker container. This ensures that library versions, system configurations, and compiler toolchains remain identical for every run, preventing the \u201cit works on my machine\u201d issue. 
The pipeline executes benchmarks based on a YAML configuration file that defines target scenes, and path tracing settings.<\/p>\n<div id=\"cb1\" class=\"sourceCode\">\n<pre class=\"sourceCode yaml\"><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-01-195921.png\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"28859\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2026\/03\/01\/building-a-modern-c-project-zig-webassembly-and-visual-ci-cd\/screenshot-2026-03-01-195921\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-01-195921.png\" data-orig-size=\"411,476\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"Screenshot 2026-03-01 195921\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-01-195921.png\" class=\"size-medium wp-image-28859 alignnone\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-01-195921-259x300.png\" alt=\"\" width=\"259\" height=\"300\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-01-195921-259x300.png 259w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-01-195921.png 411w\" sizes=\"auto, (max-width: 259px) 100vw, 259px\" \/><\/a><\/pre>\n<\/div>\n<p>This example shows a simplified configuration of a scene named \u201cCaustics\u201d as well as two rendering jobs, each with a 
different configuration (std &#8211; Standard, rr &#8211; Russian roulette ray elimination).<\/p>\n<p>At the end of the CI step, all important files are exported as run artifacts. This includes raw benchmark logs (error scores and timings) alongside PNG versions of the rendered images, reference images, and difference maps.<\/p>\n<p>However, path tracing is a computationally expensive task, so to keep the pipeline from feeling sluggish we included two main optimizations:<\/p>\n<ul>\n<li>We only rebuild the benchmark container if the Dockerfile actually changes. Otherwise, the pipeline pulls the existing image from the GitHub Container Registry (GHCR), saving minutes of environment setup.<\/li>\n<li>We use GitHub\u2019s cache to store Zig\u2019s compilation artifacts and <strong>Docker Layer Caching<\/strong> for both the CI and CD step. This allows the runner to reuse intermediate layers, ensuring that small code changes don\u2019t trigger a full, redundant container build.<\/li>\n<\/ul>\n<p><em><strong>Dashboard<\/strong><\/em><\/p>\n<p>Generating logs and previews during CI is all well and good, but sifting through each pipeline run\u2019s artifacts by hand is a chore. We needed a way to track our progress over time without bloating our <code class=\"\" data-line=\"\">main<\/code> branch with thousands of images and CSV logs.<\/p>\n<p>That\u2019s why we decided to include a <a href=\"https:\/\/github.com\/timo-eberl\/tracy\/tree\/benchmarks\">dashboard<\/a> for storing and visualizing all of the data generated by the CI step. Our solution was to use a <strong>Git Orphan Branch<\/strong>. Unlike a standard branch, an orphan branch shares no history with <code class=\"\" data-line=\"\">main<\/code>, which makes it ideal for storing data while excluding the codebase. 
That way, the repository retains a clean distinction between the code and the pipeline data.<\/p>\n<p>For <strong>visualization<\/strong> we set up a <strong>Python<\/strong> script that runs at the end of the CI Pipeline. It parses the historical data and generates trendline plots and tables showing our rendering accuracy and runtime performance over the life of the project.<\/p>\n<p>The following graphs are two of the generated graphs displayed in the dashboard. They show the trendlines for a section of commits for two scenes (caustics, cornell) in two configurations each (std, rr). While the RelMSE stayed relatively consistent with one minor improvement in a recent change of the path tracer, the runtime trend on the other hand shows a steady improvement in speed, especially on the cornell scene (brown and purple lines).<\/p>\n<figure id=\"attachment_28872\" aria-describedby=\"caption-attachment-28872\" style=\"width: 844px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/history_score_trend.png\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"28872\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2026\/03\/01\/building-a-modern-c-project-zig-webassembly-and-visual-ci-cd\/history_score_trend\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/history_score_trend.png\" data-orig-size=\"1500,900\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"history_score_trend\" data-image-description=\"\" data-image-caption=\"\" 
data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/history_score_trend-1024x614.png\" class=\"wp-image-28872\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/history_score_trend.png\" alt=\"\" width=\"844\" height=\"506\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/history_score_trend.png 1500w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/history_score_trend-300x180.png 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/history_score_trend-1024x614.png 1024w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/history_score_trend-768x461.png 768w\" sizes=\"auto, (max-width: 844px) 100vw, 844px\" \/><\/a><figcaption id=\"caption-attachment-28872\" class=\"wp-caption-text\">RelMSE (lower is better) vs build version<\/figcaption><\/figure>\n<figure id=\"attachment_28871\" aria-describedby=\"caption-attachment-28871\" style=\"width: 844px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/history_time_trend.png\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"28871\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2026\/03\/01\/building-a-modern-c-project-zig-webassembly-and-visual-ci-cd\/history_time_trend\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/history_time_trend.png\" data-orig-size=\"1500,900\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"history_time_trend\" 
data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/history_time_trend-1024x614.png\" class=\"wp-image-28871\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/history_time_trend.png\" alt=\"\" width=\"844\" height=\"507\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/history_time_trend.png 1500w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/history_time_trend-300x180.png 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/history_time_trend-1024x614.png 1024w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2026\/03\/history_time_trend-768x461.png 768w\" sizes=\"auto, (max-width: 844px) 100vw, 844px\" \/><\/a><figcaption id=\"caption-attachment-28871\" class=\"wp-caption-text\">Runtime (lower is better) vs build version<\/figcaption><\/figure>\n<p>To automate the maintenance of the dashboard, we set up the GitHub Actions runner to process and push benchmarks results to the <code class=\"\" data-line=\"\">benchmarks<\/code> (orphan) branch. The <strong>pipeline<\/strong> builds the C code from the <code class=\"\" data-line=\"\">main<\/code> branch, runs the path tracer but then uses the <a href=\"https:\/\/github.com\/actions\/checkout\">actions\/checkout<\/a> tool to checkout the <code class=\"\" data-line=\"\">benchmarks<\/code> branch into a sub-directory. It appends new RelMSE scores to a <code class=\"\" data-line=\"\">history.csv<\/code> and copies over the latest generated images to the orphan branch. It then runs another Python script that dynamically updates the <code class=\"\" data-line=\"\">README.md<\/code> file on the orphan branch, embedding the newly generated graphs and linking to the latest renders, creating a live dashboard. 
Instead of analyzing raw data, a developer can simply switch to the <code class=\"\" data-line=\"\">benchmarks<\/code> branch and instantly see how their latest refactor affected the physical correctness and render speed of the path tracer.<\/p>\n<p>This <strong>dashboard<\/strong> elevates our CI from a simple benchmarking step into a detailed history of the path tracer\u2019s evolution. It provides a transparent, objective look at our progress both for developers and anyone else following the project\u2019s development.<\/p>\n<p><em><strong>Continuous Deployment<\/strong><\/em><\/p>\n<p>The CD part of the pipeline triggers only after the benchmark step passes. It compiles Tracy&#8217;s <strong>web version<\/strong> and injects it into a lightweight <code class=\"\" data-line=\"\">nginx:alpine<\/code> image. By using a multi-stage build, we can exclude the heavy build toolchains (like Emscripten and Zig) from the final artifact, keeping the deployment footprint minimal. The resulting image is then uploaded to the GHCR. For both the CI image and the deployment, we use Docker Layer Caching to save extra time by reusing layers still present in the GitHub cache.<\/p>\n<h3 id=\"from-terminal-to-browser-webassembly-shared-memory\">5. From Terminal to Browser: WebAssembly &amp; Shared Memory<\/h3>\n<p>Making a C-based path tracer accessible via a zero-install web URL puts a modern spin on a traditionally native application. Using Emscripten, we compiled our core renderer into WebAssembly (WASM) so it could run directly in the browser. As cool as that is in concept, rendering is a heavily blocking operation: run it on the main thread and the browser UI instantly freezes.<\/p>\n<p>Beyond displaying a single generated image in the browser, the web application should also support real-time interaction (rotating the camera). Our solution was to spawn the WASM module inside a <strong>Web Worker<\/strong>. 
However, passing high-resolution image frames back and forth between a worker and the main thread via standard message passing (serialization) is far too slow for that purpose.<\/p>\n<p>We aimed for a zero-copy architecture using a <code class=\"\" data-line=\"\">SharedArrayBuffer<\/code> (via Emscripten\u2019s <code class=\"\" data-line=\"\">-sSHARED_MEMORY=1<\/code> flag). Our C backend writes pixel data directly into a shared block of memory, and the TypeScript main thread creates a <code class=\"\" data-line=\"\">Uint8ClampedArray<\/code> view over that exact same memory block.<\/p>\n<p>But here is the catch: <strong>browser APIs force a copy of the image<\/strong>. The HTML5 <code class=\"\" data-line=\"\">&lt;canvas&gt;<\/code> <code class=\"\" data-line=\"\">ImageData<\/code> constructor refuses to accept a shared memory view. We have to manually copy the data out of the shared buffer on the main thread before painting it. Not only does this add a slight performance overhead, but it also introduces a race condition: the worker might write new pixels while the main thread is copying the array. However, since such a torn frame is both rare and visually harmless, fixing the issue (e.g., with double buffering) is currently not a priority. Still, bypassing message serialization keeps the process fast enough.<\/p>\n<p><em><strong>The Multi-Threading Roadblock<\/strong><\/em><\/p>\n<p>Natively, our C code gets its multi-threaded speed from <strong>OpenMP<\/strong> directives like <code class=\"\" data-line=\"\">#pragma omp parallel for<\/code>. Unfortunately, Emscripten doesn\u2019t officially support OpenMP yet (though there is a <a href=\"https:\/\/github.com\/emscripten-core\/emscripten\/pull\/25937\">recent PR<\/a> waiting to be merged).<\/p>\n<p>Instead of rewriting our entire threading model using raw POSIX threads, we opted for a pragmatic solution. 
We used <a href=\"https:\/\/github.com\/MuTsunTsai\/simpleomp\">SimpleOMP<\/a>, a lightweight library that implements the subset of OpenMP we needed (parallel loops and atomics) on top of Emscripten\u2019s existing pthread support.<\/p>\n<p><em><strong>Developer Experience and the Reality of High-Performance Web<\/strong><\/em><\/p>\n<p>Debugging compiled C code inside a browser sounds like a nightmare, and honestly, it kind of is. While it is technically possible to set breakpoints and step through your raw C source code using the <a href=\"https:\/\/chromewebstore.google.com\/detail\/cc++-devtools-support-dwa\/pdcpmagijalfljmkmjngeonclgbbannb\">C\/C++ DevTools Support (DWARF) extension<\/a> for Chrome, the overall experience is incredibly cumbersome. The setup is finicky on Chrome, and on Firefox we unfortunately didn\u2019t get it to work at all. In the end, the friction was too high. We abandoned browser-based debugging entirely, adopting a workflow where we test and debug all our C code natively using standard debugging tools, such as GDB.<\/p>\n<p>Ultimately, our WASM implementation reached roughly <strong>75% of our native performance<\/strong>, which is an impressive feat for a web application. The WebAssembly ecosystem is undeniably powerful, but pushing it to its absolute limits with multi-threading and shared memory requires going through web development hell and accepting compromises.<\/p>\n<h3 id=\"conclusion\">6. Conclusion<\/h3>\n<p>Developing a path tracer from scratch means writing highly complex, mathematically dense C code. As we actively built out this project, adding new rendering features and continually optimizing our library for speed, the risk of introducing subtle bugs was incredibly high. 
Writing complex systems is one thing, but guaranteeing they remain correct across hundreds of commits is another.<\/p>\n<p>This is where our <strong>automated CI\/CD<\/strong> pipeline paid off: It wasn\u2019t just a fun DevOps side quest; it became our primary development safety net. As we switched to multi-threading for performance or tweaked our core path tracing logic, the dashboard was there to keep us on course. The visual difference maps and RelMSE trendlines caught subtle regressions that our eyes would have missed, proving that our performance optimizations actually worked without breaking the underlying math. It turned subjective guessing (\u201cDoes this shadow look right?\u201d) into objective, actionable data.<\/p>\n<p>None of the tools we adopted were flawless, but they solved pain points in the traditional C workflow. <strong>Zig<\/strong> successfully replaced the clunkiness of CMake and gave us built-in memory safety for native unit testing, even if its C translation and caching aren\u2019t entirely bulletproof yet. On the deployment side, <strong>Emscripten<\/strong> brought our computationally heavy code to the browser, though pushing WebAssembly to its multi-threaded limits meant navigating a minefield of API quirks. Finally, our Python and <strong>GitHub Actions pipeline<\/strong> automated the hardest part of graphics programming: catching visual and mathematical regressions.<\/p>\n<p>Replacing manual workflows with automation transformed how we maintain our C path tracer. While Zig showed us that the gap between low-level systems programming and modern developer experience is finally closing, Emscripten taught us the true pain of high-performance web development.<\/p>\n<p>You can try out the <a href=\"https:\/\/tracy.timoeberl.de\/\">interactive Web demo right here<\/a>. 
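<\/p>\n<p>For a taste of what the pipeline actually measures: the RelMSE metric itself is only a few lines. Here is a hedged C sketch (our real implementation lives in the Python tooling, and the epsilon guard value below is illustrative, not a tuned constant):<\/p>\n

```c
#include <stddef.h>

/* Relative mean squared error between a rendered image and a
 * ground-truth reference. Dividing by the squared reference value
 * weights dark and bright regions comparably; the epsilon keeps
 * near-black pixels from blowing the ratio up. */
double rel_mse(const double *render, const double *reference, size_t n) {
    const double eps = 1e-2; /* illustrative guard, not a tuned value */
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        double diff = render[i] - reference[i];
        sum += (diff * diff) / (reference[i] * reference[i] + eps);
    }
    return sum / (double)n;
}
```

\n<p>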
If you want to see how we track physical correctness, check out our <a href=\"https:\/\/github.com\/timo-eberl\/tracy\/tree\/benchmarks\">automated benchmark dashboard<\/a>, or dive into the source code on our <a href=\"https:\/\/github.com\/timo-eberl\/tracy\">GitHub repository<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>How we used Zig as a build system, Emscripten for the web, and Python for automated visual regression testing on a C-based path tracer. 1. Introduction Writing a path tracer from scratch in C is a fantastic way to learn the physics of light simulation. But maintaining that project, ensuring it builds across platforms, catching [&hellip;]<\/p>\n","protected":false},"author":1315,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[1,659,649,22,651,2,662],"tags":[1229,150,3,1227,149,266,1228],"ppma_author":[1200,1226],"class_list":["post-28857","post","type-post","status-publish","format-standard","hentry","category-allgemein","category-devops","category-interactive-media","category-student-projects","category-system-designs","category-system-engineering","category-web-performance","tag-c","tag-ci-cd","tag-docker","tag-path-tracing","tag-testing","tag-webassembly","tag-zig"],"aioseo_notices":[],"jetpack_featured_media_url":"","jetpack-related-posts":[{"id":3421,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2018\/03\/28\/take-me-home-project-overview\/","url_meta":{"origin":28857,"position":0},"title":"Take Me Home &#8211; Project Overview","author":"cp054","date":"28. 
March 2018","format":false,"excerpt":"Related articles:\u00a0\u25baCI\/CD infrastructure: Choosing and setting up a server with Jenkins as Docker image\u00a0\u25baDockerizing Android SDK and Emulator for testing\u00a0 \u25baAutomated Unit- and GUI-Testing for Android in Jenkins\u00a0 \u25baTesting a MongoDB with NodeJS, Mocha and Mongoose During the winter term 2017\/2018, we created an app called Take Me Home. The\u2026","rel":"","context":"In &quot;Mobile Apps&quot;","block_context":{"text":"Mobile Apps","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/interactive-media\/mobile-apps\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/03\/tmh_admin_usermanagement_bearbeitet.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/03\/tmh_admin_usermanagement_bearbeitet.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/03\/tmh_admin_usermanagement_bearbeitet.png?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/03\/tmh_admin_usermanagement_bearbeitet.png?resize=700%2C400&ssl=1 2x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/03\/tmh_admin_usermanagement_bearbeitet.png?resize=1050%2C600&ssl=1 3x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/03\/tmh_admin_usermanagement_bearbeitet.png?resize=1400%2C800&ssl=1 4x"},"classes":[]},{"id":22623,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2022\/03\/30\/webassembly-das-neue-docker-und-noch-mehr\/","url_meta":{"origin":28857,"position":1},"title":"WebAssembly: Das neue Docker und noch mehr?","author":"Raphael Wettinger","date":"30. March 2022","format":false,"excerpt":"If WASM+WASI existed in 2008, we wouldn't have needed to created Docker. That's how important it is. 
Webassembly on the server is the future of computing. A standardized system interface was the missing link. Let's hope WASI is up to the task! Tweet, Solomon Hykes (Erfinder von Docker), 2019 Dieser\u2026","rel":"","context":"In &quot;Cloud Technologies&quot;","block_context":{"text":"Cloud Technologies","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/scalable-systems\/cloud-technologies\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2022\/03\/rancher_blog_01-rancher-k8s-node-components-architecture.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2022\/03\/rancher_blog_01-rancher-k8s-node-components-architecture.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2022\/03\/rancher_blog_01-rancher-k8s-node-components-architecture.png?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2022\/03\/rancher_blog_01-rancher-k8s-node-components-architecture.png?resize=700%2C400&ssl=1 2x"},"classes":[]},{"id":6688,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2019\/08\/04\/javascript-performance-optimization-with-respect-to-the-upcoming-webassembly-standard\/","url_meta":{"origin":28857,"position":2},"title":"JavaScript Performance optimization with respect to the upcoming WebAssembly standard","author":"tt031","date":"4. August 2019","format":false,"excerpt":"Written by Tim Tenckhoff \u2013 tt031 | Computer Science and Media 1. Introduction Speed and performance of the (worldwide) web advanced considerably over the last decades. 
With the development of sites more heavily reliant on JavaScript (JS Optimization, 2018), the consideration of actions to optimize the speed and performance of\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/07\/javascript1-1.jpg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/07\/javascript1-1.jpg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/07\/javascript1-1.jpg?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/07\/javascript1-1.jpg?resize=700%2C400&ssl=1 2x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/07\/javascript1-1.jpg?resize=1050%2C600&ssl=1 3x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/07\/javascript1-1.jpg?resize=1400%2C800&ssl=1 4x"},"classes":[]},{"id":10392,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2020\/02\/29\/attempts-at-automating-the-build-process-of-a-net-wpf-application-with-gitlabs-ci-cd-pipeline\/","url_meta":{"origin":28857,"position":3},"title":"Attempts at automating the build process of a .NET WPF application with GitLab&#8217;s CI\/CD pipeline","author":"Felix Messner","date":"29. February 2020","format":false,"excerpt":"(Originally written for System Engineering and Management in 02\/2020) Introduction In the System Engineering course of WS1920, I took the opportunity to look into automating the build process of a Windows desktop application. 
Specifically, the application in question is built in C#, targeting .NET Framework 4.0 and using Windows Presentation\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2020\/08\/windows_runner_Tree.jpg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2020\/08\/windows_runner_Tree.jpg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2020\/08\/windows_runner_Tree.jpg?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2020\/08\/windows_runner_Tree.jpg?resize=700%2C400&ssl=1 2x"},"classes":[]},{"id":5163,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2019\/02\/24\/migrating-to-kubernetes-part-1-introduction\/","url_meta":{"origin":28857,"position":4},"title":"Migrating to Kubernetes Part 1 &#8211; Introduction","author":"Can Kattwinkel","date":"24. February 2019","format":false,"excerpt":"Written by: Pirmin Gersbacher, Can Kattwinkel, Mario Sallat Introduction The great challenge of collaborative working in a software developer team is to enable a high level of developer activity while ensuring a high product quality. In order to achieve this often CI\/CD processes are utilized. 
Talking about modern development techniques\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/pexels-photo-379964.jpeg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/pexels-photo-379964.jpeg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/pexels-photo-379964.jpeg?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/pexels-photo-379964.jpeg?resize=700%2C400&ssl=1 2x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/02\/pexels-photo-379964.jpeg?resize=1050%2C600&ssl=1 3x"},"classes":[]},{"id":3496,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2018\/03\/30\/ci-cd-with-gitlab-ci-for-a-web-application-part-1\/","url_meta":{"origin":28857,"position":5},"title":"CI\/CD with GitLab CI for a web application &#8211; Part 1","author":"Nina Schaaf","date":"30. March 2018","format":false,"excerpt":"Introduction When it comes to software development, chances are high that you're not doing this on your own. 
The main reason for this is often that implementing components like UI, frontend, backend, servers and more is just too much to handle for a single person leading to a slow development\u2026","rel":"","context":"In &quot;DevOps&quot;","block_context":{"text":"DevOps","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/scalable-systems\/devops\/"},"img":{"alt_text":"Shaky architecture","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/03\/01_shaky-architecture-300x106.png?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]}],"jetpack_sharing_enabled":true,"authors":[{"term_id":1200,"user_id":1315,"is_guest":0,"slug":"tilman_zorn","display_name":"Tilman Zorn","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/c25410723d92bf712a6311507e7614dace4a97f48bfdd71786ecb4547b516f57?s=96&d=mm&r=g","0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""},{"term_id":1226,"user_id":1325,"is_guest":0,"slug":"timo_eberl","display_name":"Timo 
Eberl","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/d568385da782b4ecdad95eef38466b7e688702e1ec10ccd9064cb7b29040577a?s=96&d=mm&r=g","0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""}],"_links":{"self":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/28857","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/users\/1315"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/comments?post=28857"}],"version-history":[{"count":21,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/28857\/revisions"}],"predecessor-version":[{"id":28894,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/28857\/revisions\/28894"}],"wp:attachment":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/media?parent=28857"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/categories?post=28857"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/tags?post=28857"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/ppma_author?post=28857"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}