{"id":12178,"date":"2021-02-24T15:15:00","date_gmt":"2021-02-24T14:15:00","guid":{"rendered":"https:\/\/blog.mi.hdm-stuttgart.de\/?p=12178"},"modified":"2023-08-06T21:41:51","modified_gmt":"2023-08-06T19:41:51","slug":"web-audio-api-tips-for-performance","status":"publish","type":"post","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2021\/02\/24\/web-audio-api-tips-for-performance\/","title":{"rendered":"Web Audio API &#8211; Tips for Performance"},"content":{"rendered":"\n<p>This post is about specific performance issues of the <em>Web Audio API<\/em>, especially its <em>AudioNodes<\/em>. It also briefly explains what this API was developed for and what you can do with it. Finally, it mentions a few tips and tricks to improve the performance of the Web Audio API. <\/p>\n\n\n\n<!--more-->\n\n\n\n<figure class=\"wp-block-image aligncenter size-large is-resized\"><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/02\/grafik.png\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"12270\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2021\/02\/24\/web-audio-api-tips-for-performance\/grafik-7\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/02\/grafik.png\" data-orig-size=\"605,272\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"grafik\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/02\/grafik.png\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/02\/grafik.png\" 
alt=\"\" class=\"wp-image-12270\" width=\"753\" height=\"338\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/02\/grafik.png 605w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/02\/grafik-300x135.png 300w\" sizes=\"auto, (max-width: 753px) 100vw, 753px\" \/><\/a><figcaption class=\"wp-element-caption\"><sup><sub>Image by <a href=\"https:\/\/pixabay.com\/users\/theglassdesk-149631\/?utm_source=link-attribution&amp;utm_medium=referral&amp;utm_campaign=image&amp;utm_content=1109588\">Becca Clark<\/a> from <a href=\"https:\/\/pixabay.com\/?utm_source=link-attribution&amp;utm_medium=referral&amp;utm_campaign=image&amp;utm_content=1109588\">Pixabay<\/a><\/sub><\/sup><\/figcaption><\/figure>\n\n\n\n<h3 class=\"wp-block-heading has-text-align-left\">This Article Contains<\/h3>\n\n\n\n<ol class=\"wp-block-list\">\n<li><a href=\"#t1\">What is the Web Audio API?<\/a><\/li>\n\n\n\n<li><a href=\"#t2\">The different implementations<\/a><\/li>\n\n\n\n<li><a href=\"#t3\">Performance relevant AudioNodes<\/a><\/li>\n\n\n\n<li><a href=\"#t4\">Tips and tricks<\/a><\/li>\n\n\n\n<li><a href=\"#t5\">Conclusion<\/a><\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading has-text-align-center\" id=\"t1\">What Is The Web Audio API?<\/h2>\n\n\n\n<p>The <a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/API\/Web_Audio_API\" target=\"_blank\" rel=\"noreferrer noopener\">Web Audio API<\/a> is an interface for creating and editing audio signals in web applications. It is written in JavaScript. The standard is developed by a working group of the W3C. The Web Audio API is particularly suitable for interactive applications with audio.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Before Web Audio API<\/h3>\n\n\n\n<p>Of course, sound could also be played in browsers before the Web Audio API. However, this was not so simple. 
Two possibilities are mentioned below, each of which was revolutionary when it was introduced.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Flash Player<\/h4>\n\n\n\n<p><a href=\"https:\/\/www.adobe.com\/de\/products\/flashplayer\/end-of-life.html\" target=\"_blank\" rel=\"noreferrer noopener\">Adobe Flash<\/a> was a platform for programming as well as displaying multimedia and interactive content. Flash enabled vector graphics, raster graphics and video clips to be displayed, animated and manipulated. It supported bidirectional streaming of audio and video content. Since version 11 it also allowed the display of 3D content.<\/p>\n\n\n\n<p>Flash version 1 was released by Macromedia in 1997, together with the corresponding Shockwave Flash Player, which made the integration of audio possible. Flash version 2 and an extended Shockwave Flash Player appeared in the same year, giving developers new actions with which simple interactions could be realised.<\/p>\n\n\n\n<p>The programming of content for Flash Player was done in the object-oriented script language <em>ActionScript<\/em>. User input could be processed via mouse, keyboard, microphone and camera.<br>Adobe stopped distributing and updating Flash Player on 31 December 2020.<br>Programming for Adobe Flash Player was very time-consuming, and the platform was considered <a href=\"https:\/\/mashable.com\/2017\/07\/25\/adobe-is-killing-flash-player\/?europe=true\" target=\"_blank\" rel=\"noreferrer noopener\">poor in performance and insecure<\/a>. <\/p>\n\n\n\n<h4 class=\"wp-block-heading\">HTML5 Audio<\/h4>\n\n\n\n<p>The W3C published the finished <a href=\"https:\/\/html.spec.whatwg.org\/\" target=\"_blank\" rel=\"noreferrer noopener\">HTML5<\/a> in 2014. HTML5 became the core language of the web. The HTML5 language offers new features such as video, audio, local storage, and dynamic 2D and 3D graphics. 
These features could previously only be implemented with additional plug-ins, such as Flash Player.<\/p>\n\n\n\n<p>For embedding audio and video data, HTML5 defines the elements <em>audio <\/em>and <em>video<\/em>. Since no minimum format that all browsers had to support was defined, for a long time no single format was supported by all browsers. A major issue was the licensing fees for various formats, such as H.264. Since internet streaming of H.264 content is no longer expected to be subject to licensing fees in the long term, this format is now supported by all modern browsers.<\/p>\n\n\n\n<p>The audio element is supported in most browsers with a small player function that often allows fast forward, rewind, play, pause and volume adjustment. However, basic functions of a modern DAW remain largely impossible with it.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">How Does The Web Audio API Work?<\/h3>\n\n\n\n<p>The <a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/API\/Web_Audio_API\" target=\"_blank\" rel=\"noreferrer noopener\">Web Audio API<\/a> enables various audio operations. It allows modular routing. Basic audio operations are performed with <em>AudioNodes<\/em>, which are connected to each other and form an audio routing graph. 
Thus, it follows a widely used scheme that is also found in DAWs or on analogue mixing consoles.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large is-resized\"><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/02\/grafik-6.png\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"12328\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2021\/02\/24\/web-audio-api-tips-for-performance\/grafik-6-2\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/02\/grafik-6.png\" data-orig-size=\"452,247\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"grafik-6\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/02\/grafik-6.png\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/02\/grafik-6.png\" alt=\"\" class=\"wp-image-12328\" width=\"526\" height=\"287\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/02\/grafik-6.png 452w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/02\/grafik-6-300x164.png 300w\" sizes=\"auto, (max-width: 526px) 100vw, 526px\" \/><\/a><figcaption class=\"wp-element-caption\"><sup><sub>The Web Audio API follows the same scheme as analogue mixing consoles.<br>Image by <a href=\"https:\/\/pixabay.com\/de\/users\/thearkow-2526946\/?utm_source=link-attribution&amp;utm_medium=referral&amp;utm_campaign=image&amp;utm_content=5638072\">TheArkow<\/a> from <a 
href=\"https:\/\/pixabay.com\/de\/?utm_source=link-attribution&amp;utm_medium=referral&amp;utm_campaign=image&amp;utm_content=5638072\">Pixabay<\/a><\/sub><\/sup><\/figcaption><\/figure>\n\n\n\n<p><a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/API\/AudioNode\" target=\"_blank\" rel=\"noreferrer noopener\">AudioNodes<\/a> are linked via their inputs and outputs to form chains and simple paths. This string of nodes is called a <em>graph<\/em>. A graph typically starts with a source. This can be an audio file, an audio stream, a video file or an oscillator. There is even an extra <em>OscillatorNode <\/em>for this, but more on that later. The sources provide samples with audio information. Depending on the sample rate, there are tens of thousands of samples per second. <\/p>\n\n\n\n<p>The outputs of the nodes can be linked to the inputs of other nodes. In this way, the samples can also be routed into several channels, processed independently of each other and later reassembled. Each node can change the signal with mathematical operations. To make a signal louder, for example, it is simply necessary to multiply each signal value by another value.<\/p>\n\n\n\n<p>Finally, the last node is connected to <em>AudioContext.destination<\/em>. This sends the sound to speakers or headphones on the end device. However, you can also omit this connection. This makes sense if a signal is only to be displayed visually and you do not need to hear it at all.<\/p>\n\n\n\n<p>This structure makes it possible to play back sound from streams or files in browsers and to create sound in real time. In addition, the sound can also be edited interactively in real time. This ranges from changing the volume via various filters to the creation of realistic room effects such as the Doppler effect, reverb, and acoustic positioning and movement of the user. The signal processing is mainly done by the underlying implementation of the API. 
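The gain example above can be sketched in code. The `applyGain` helper below is a hypothetical illustration of what a GainNode conceptually does to its input samples; the commented snippet shows the corresponding real API calls (`createGain`, `connect`), which only run in a browser:

```javascript
// Hypothetical helper illustrating what a GainNode does internally:
// every output sample is the input sample multiplied by the gain value.
function applyGain(samples, gain) {
  const out = new Float32Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    out[i] = samples[i] * gain;
  }
  return out;
}

// In a browser, the same operation is expressed declaratively with nodes
// (sketch only; AudioContext is not available outside the browser):
//   const ctx = new AudioContext();
//   const osc = ctx.createOscillator();
//   const gain = ctx.createGain();
//   gain.gain.value = 0.5;                      // halve the volume
//   osc.connect(gain).connect(ctx.destination); // source -> gain -> output
//   osc.start();
```

The node graph version and the loop compute the same thing; the browser's audio engine simply performs the multiplication natively on its rendering thread.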
Custom processing in JavaScript is also possible.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The AudioWorklet<\/h3>\n\n\n\n<p>The Web Audio API thus enables interactivity and complex processing tasks with audio. It already fulfilled many requirements when it was introduced and was well suited for many use cases. One point of criticism, however, was the lack of extensibility for developers. As already mentioned, the API also offered developers a way to execute their own JavaScript code via the <em>ScriptProcessor <\/em>node. This function was perceived as insufficient. <\/p>\n\n\n\n<p>Therefore, the W3C Audio Working Group developed the so-called <a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/API\/AudioWorklet\" target=\"_blank\" rel=\"noreferrer noopener\">AudioWorklet <\/a>to support sample-accurate audio manipulation in JavaScript without compromising performance and stability.<\/p>\n\n\n\n<p>The first design of the AudioWorklet interface was presented in an API specification in 2014. The first implementation was published in the Chrome browser at the beginning of 2018. Especially for the <a href=\"https:\/\/hoch.io\/media\/icmc-2018-choi-audioworklet.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">computer music community<\/a>, this opened up many new options. Thus, the AudioWorklet is considered a bridge between conventional music software and the web platform.<\/p>\n\n\n\n<h2 class=\"wp-block-heading has-text-align-center\" id=\"t2\">The Different Implementations<\/h2>\n\n\n\n<p>The Web Audio API is supported by all modern browsers. These include Mozilla Firefox, Google Chrome, Microsoft Edge, Opera and Safari. 
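As a minimal sketch of the AudioWorklet described above: a custom processor fills output buffers sample by sample. The class and the name `noise-processor` are hypothetical; `AudioWorkletProcessor` and `registerProcessor` only exist inside a real worklet scope, hence the guards:

```javascript
// Minimal white-noise processor sketch. In the browser this file would be
// loaded via audioContext.audioWorklet.addModule('noise-processor.js').
// The guards let the class also be defined outside a worklet scope.
const BaseProcessor =
  typeof AudioWorkletProcessor !== 'undefined'
    ? AudioWorkletProcessor
    : class {};

class NoiseProcessor extends BaseProcessor {
  process(inputs, outputs) {
    // outputs[0] is this node's first output; each entry holds one
    // channel's block of samples (Float32Array).
    for (const channel of outputs[0]) {
      for (let i = 0; i < channel.length; i++) {
        channel[i] = Math.random() * 2 - 1; // white noise in [-1, 1)
      }
    }
    return true; // keep the processor alive
  }
}

if (typeof registerProcessor !== 'undefined') {
  registerProcessor('noise-processor', NoiseProcessor);
}
```

Unlike the old ScriptProcessor, `process` runs on the audio rendering thread, which is what makes sample-accurate manipulation possible without glitching the main thread.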
Most mobile browsers also support the API.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large is-resized\"><a href=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/02\/grafik-8.png\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"12355\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2021\/02\/24\/web-audio-api-tips-for-performance\/grafik-8\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/02\/grafik-8.png\" data-orig-size=\"1703,516\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"grafik-8\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/02\/grafik-8-1024x310.png\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/02\/grafik-8-1024x310.png\" alt=\"\" class=\"wp-image-12355\" width=\"866\" height=\"262\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/02\/grafik-8-1024x310.png 1024w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/02\/grafik-8-300x91.png 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/02\/grafik-8-768x233.png 768w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/02\/grafik-8-1536x465.png 1536w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/02\/grafik-8.png 1703w\" sizes=\"auto, (max-width: 866px) 100vw, 866px\" \/><\/a><figcaption class=\"wp-element-caption\"><sup><sub>Browsers supporting the Web Audio API <br>Taken from <a 
href=\"https:\/\/caniuse.com\/?search=web%20audio%20api\" target=\"_blank\" rel=\"noreferrer noopener\">Can I Use&#8230;<\/a> February 2021<\/sub><\/sup><\/figcaption><\/figure>\n\n\n\n<p>The performance of the Web Audio API differs between browsers. Four Web Audio API implementations are currently present in browsers.<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><a href=\"https:\/\/trac.webkit.org\/browser\/trunk\/\" target=\"_blank\" rel=\"noreferrer noopener\">WebKit<\/a>: Chrome and Safari used to share the same code here.<\/li>\n\n\n\n<li><a href=\"https:\/\/code.google.com\/p\/chromium\/codesearch#chromium\/src\/\" target=\"_blank\" rel=\"noreferrer noopener\">Blink<\/a>: When Blink was forked from WebKit, a separate implementation of the API was also developed.<\/li>\n\n\n\n<li><a href=\"https:\/\/dxr.mozilla.org\/mozilla-central\" target=\"_blank\" rel=\"noreferrer noopener\">Gecko<\/a>: The implementation in Gecko was developed from scratch and differs in its philosophy from the others to some extent.<\/li>\n\n\n\n<li>Edge: The source code of Edge is not public.<\/li>\n<\/ol>\n\n\n\n<p>One difference is the number of processes per tab. Gecko uses a single process for all tabs and the browser UI, while all other browsers use several processes. The difference mainly affects responsiveness when Gecko is still processing something in the background in a web application that uses the Web Audio API. Work on this problem is ongoing. The other engines use multiple processes, which divides the load and makes delays less likely.<\/p>\n\n\n\n<p>Another difference is the implementation of the <em>AudioNodes<\/em>. This is clarified in the following. 
In general, it can be said that Gecko often paid more attention to quality and the other engines focused on performance.<\/p>\n\n\n\n<h2 class=\"wp-block-heading has-text-align-center\" id=\"t3\">Performance Relevant AudioNodes<\/h2>\n\n\n\n<p>The Web Audio API offers many different <a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/API\/Web_Audio_API\" target=\"_blank\" rel=\"noreferrer noopener\">AudioNodes<\/a> for different functions. In the following, the <em>AudioNodes <\/em>that can have a particularly strong influence on <a href=\"https:\/\/padenot.github.io\/web-audio-perf\/#performance-analysis\">performance<\/a> are described. They concern both CPU and memory. If you run into performance trouble with the Web Audio API, you should double-check the following <em>AudioNodes<\/em>, and because of their costs you should avoid them wherever they are not strictly necessary.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">AnalyserNode<\/h3>\n\n\n\n<p>With the <a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/API\/AnalyserNode\">AnalyserNode<\/a>, you can read out analysis information in the frequency and time domain in real time. This <em>AudioNode <\/em>does not influence the signal but analyses it and can forward the generated data. This can be used to create visualisations, for example.<\/p>\n\n\n\n<p>This <em>AudioNode <\/em>can provide information about the frequency content, using a Fast Fourier Transform algorithm. This Fourier transformation is computationally intensive. The more signal that needs to be analysed at once, the more computationally intensive the process becomes.<\/p>\n\n\n\n<p>The fast Fourier transform algorithms use internal memory for processing. 
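How much work the FFT does is governed by the analyser's `fftSize`. The helper below is a hypothetical sketch of the resolution/cost trade-off; `fftSize` and `frequencyBinCount` are real AnalyserNode properties, and 2048 is the spec's default `fftSize`:

```javascript
// For an AnalyserNode, frequencyBinCount is fftSize / 2, and each bin
// covers sampleRate / fftSize Hz. A larger fftSize gives finer frequency
// resolution but transforms more samples per analysis call, costing more CPU.
function analyserResolution(sampleRate, fftSize) {
  return {
    bins: fftSize / 2,              // matches AnalyserNode.frequencyBinCount
    binWidthHz: sampleRate / fftSize,
  };
}

// At the common 44100 Hz sample rate with the default fftSize of 2048:
const r = analyserResolution(44100, 2048);
// r.bins === 1024, r.binWidthHz ≈ 21.5 Hz
```

If a coarse visualisation is enough, lowering `fftSize` is a cheap way to reduce both the CPU and the memory the analysis needs.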
Different platforms and browsers have different algorithms, so it is not possible to make an exact statement about the memory requirements of this node.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">PannerNode<\/h3>\n\n\n\n<p>The <a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/API\/PannerNode\" target=\"_blank\" rel=\"noreferrer noopener\">PannerNode<\/a> makes it possible to position an audio source spatially and adjust the position in real time. To do this, the position is calculated in real time and described with a velocity vector and a directivity. For this to work, the output of the <em>PannerNode <\/em>must always be stereo. There are two modes in which the <em>PannerNode <\/em>can be used. Especially the HRTF mode is performance-critical.<\/p>\n\n\n\n<p>The HRTF mode is so performance-critical because it calculates a convolution. The input data is convolved with HRTF impulses that simulate a room. This procedure is well known in other fields of audio processing, but the <em>PannerNode <\/em>makes it possible in the browser. When the position of the audio source changes, additional interpolation is done between the old and the new position to provide a smooth transition. For stereo sources, several convolvers must operate simultaneously during movement.<\/p>\n\n\n\n<p>The HRTF panner must load the HRTF impulses for the calculation. In the Gecko engine, the HRTF database only loads when needed, while other engines always load it. The convolver and the delay lines also need memory. Depending on how the Fast Fourier Transformation works on the respective system, the memory requirement also varies here.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">ConvolverNode<\/h3>\n\n\n\n<p>The <a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/API\/ConvolverNode\" target=\"_blank\" rel=\"noreferrer noopener\">ConvolverNode<\/a> also works with convolution. Here, convolution is used to achieve a certain reverb effect. 
In the <em>ConvolverNode<\/em>, a recorded room response is convolved with the signal, which transfers the character of that room to the signal.<\/p>\n\n\n\n<p>Here, too, the convolution makes the calculation very performance-critical. The cost correlates with the duration of the impulse response. Again, some browsers are more likely to experience computational congestion than others, depending on how the computation is offloaded to background threads.<\/p>\n\n\n\n<p>The <em>ConvolverNode <\/em>creates different copies of the signal to calculate convolutions independently. Therefore, it needs quite a lot of memory, which also depends on the duration of the impulse. Additionally, depending on the platform, memory may be added for the implementation of the Fast Fourier Transform.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">DelayNode<\/h3>\n\n\n\n<p>The <a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/API\/DelayNode\" target=\"_blank\" rel=\"noreferrer noopener\">DelayNode<\/a> interface enables delays. A delay is introduced between the arrival of the signal and the forwarding process. The storage costs result from the number of input and output channels and the length of the delayed signal.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">WaveShaperNode<\/h3>\n\n\n\n<p>The <a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/API\/WaveShaperNode\" target=\"_blank\" rel=\"noreferrer noopener\">WaveShaperNode<\/a> represents a non-linear distortion. A shaping curve is applied to the signal to obtain the distortion. This <em>WaveShaperNode <\/em>creates a copy of the curve and can therefore be quite memory-intensive.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">OscillatorNode<\/h3>\n\n\n\n<p>As already mentioned, audio can be generated with the <a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/API\/OscillatorNode\" target=\"_blank\" rel=\"noreferrer noopener\">OscillatorNode<\/a>. 
It generates a periodic oscillation that is interpreted as an audio signal.<\/p>\n\n\n\n<p>The oscillations are implemented with wave tables that are calculated via the inverse Fourier transform. Higher computational loads therefore occur only when the waveform is changed and the tables have to be recalculated. In Gecko-based browsers, the waves are cached except for the sine, which is calculated directly.<\/p>\n\n\n\n<p>The stored wave tables can take up a lot of memory. In Gecko-based browsers they are shared between oscillators.<\/p>\n\n\n\n<h2 class=\"wp-block-heading has-text-align-center\" id=\"t4\">Tips and Tricks<\/h2>\n\n\n\n<p>The following tips and tricks come from <a href=\"https:\/\/github.com\/padenot\" target=\"_blank\" rel=\"noreferrer noopener\">Paul Adenot<\/a>, one of the developers of the Web Audio API. You can find more details <a href=\"https:\/\/padenot.github.io\/web-audio-perf\/#using-lighter-processing\" target=\"_blank\" rel=\"noreferrer noopener\">here<\/a>. They should help you to achieve optimal performance for your web application with the Web Audio API.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">For Developers<\/h3>\n\n\n\n<p>Sometimes the Web Audio API is not sufficient to solve certain problems. In this case, you can use the AudioWorklet described above to create functions yourself. In the best case, these are implemented in JavaScript to remain in the language of the API.<\/p>\n\n\n\n<p>Paul Adenot recommends the following rules to get the best results:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>You should use <strong>typed arrays<\/strong> because they are faster than normal arrays. 
<\/li>\n\n\n\n<li>You should also <strong>reuse arrays<\/strong>.<\/li>\n\n\n\n<li><strong>Do not manipulate<\/strong> the DOM or the object prototype <strong>during processing<\/strong>.<\/li>\n\n\n\n<li>Stay <strong>monomorphic<\/strong> and use the <strong>same code path<\/strong>.<\/li>\n\n\n\n<li><strong>Compile <\/strong>C or C++ to JavaScript.<\/li>\n\n\n\n<li><strong>Extensions <\/strong>like SIMD.js or SharedArrayBuffer can improve the performance in browsers supporting them.<\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\">Reverb<\/h3>\n\n\n\n<p>As already mentioned, the Web Audio API offers the ConvolverNode to create very good-sounding convolution reverb. Since this process is computationally intensive, it is worth looking for alternatives for mobile devices.<\/p>\n\n\n\n<p>This is possible with delay, equalisers and low-pass filters, which can also be used to create reverb effects. More information on creating alternative reverb effects instead of convolution reverb with the Web Audio API can be found <a href=\"https:\/\/blog.gskinner.com\/archives\/2019\/02\/reverb-web-audio-api.html\" target=\"_blank\" rel=\"noreferrer noopener\">here<\/a>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Panning<\/h3>\n\n\n\n<p>For browser applications such as online games, where acoustic localisation should take place, binaural panning is very important. HRTF panning is based on convolution and sounds good, but, as already mentioned, it is very computationally intensive. Here it is worthwhile to use an alternative for mobile devices. <\/p>\n\n\n\n<p>You can use a short reverb and a panner in <em>equalpower <\/em>mode, which allows localisation similar to the HRTF panner. This is especially useful if the position of the source is constantly changing. 
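The equalpower mode is cheap because it boils down to two gain factors per sample instead of a convolution. The helper below is a hypothetical sketch of the standard equal-power (cosine/sine) pan law that this mode is based on:

```javascript
// Standard equal-power pan law: pan in [-1, 1] maps to an angle in
// [0, π/2]; the left and right gains are the cosine and sine of that
// angle, so gainL² + gainR² === 1 and perceived loudness stays constant
// across the stereo field.
function equalPowerGains(pan) {
  const angle = ((pan + 1) / 2) * (Math.PI / 2);
  return { left: Math.cos(angle), right: Math.sin(angle) };
}

// Centre position: both channels at ~0.707 (about -3 dB each).
const centre = equalPowerGains(0);
// Hard left: all signal on the left channel.
const hardLeft = equalPowerGains(-1);
```

Two multiplications per sample is dramatically cheaper than the per-sample convolution HRTF panning requires, which is why this mode is the better fit for mobile devices.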
More information about the PannerModel can be found <a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/API\/PannerNode\/panningModel\">here<\/a>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading has-text-align-center\" id=\"t5\">Conclusion<\/h2>\n\n\n\n<p>The Web Audio API was a revolutionary step for audio in browsers. If you keep an eye on a few AudioNodes and stick to some advice when you want to do custom processing, it also performs well.<br>If you are now interested in the Web Audio API or want to try out a small example, I recommend this <a href=\"https:\/\/codepen.io\/Rumyra\/pen\/qyMzqN\/\" target=\"_blank\" rel=\"noreferrer noopener\">example<\/a>.<br>There you will learn how to build a boom box with little code that can pan the sound in real time.<br>I hope you enjoyed this article.<\/p>\n\n\n\n<h2 class=\"wp-block-heading has-text-align-center\">Related Links<\/h2>\n\n\n\n<p><a href=\"https:\/\/www.w3.org\/TR\/webaudio\" target=\"_blank\" rel=\"noreferrer noopener\">More information about the Web Audio API (W3C)<\/a><\/p>\n\n\n\n<p><a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/API\/Web_Audio_API\" target=\"_blank\" rel=\"noreferrer noopener\">Further information about the Web Audio API (MDN Web Docs; Mozilla)<\/a><\/p>\n\n\n\n<p><a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/API\/AudioNode\" target=\"_blank\" rel=\"noreferrer noopener\">Detailed information about the AudioNodes (MDN Web Docs; Mozilla)<\/a><\/p>\n\n\n\n<p><a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/API\/AudioWorklet\" target=\"_blank\" rel=\"noreferrer noopener\">Information about the AudioWorklet interface of the Web Audio API (MDN Web Docs; Mozilla)<\/a><\/p>\n\n\n\n<p><a href=\"https:\/\/padenot.github.io\/web-audio-perf\/#introduction\" target=\"_blank\" rel=\"noreferrer noopener\">Web Audio API performance and debugging notes (Paul Adenot)<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>This post is about specific 
performance issues of the Web Audio API, especially its AudioNodes. It also briefly explains what this API was developed for and what you can do with it. Finally, it mentions a few tips and tricks to improve the performance of the Web Audio API.<\/p>\n","protected":false},"author":1014,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[649,262,662],"tags":[412,411,410],"ppma_author":[838],"class_list":["post-12178","post","type-post","status-publish","format-standard","hentry","category-interactive-media","category-rich-media-systems","category-web-performance","tag-audionodes","tag-performance","tag-web-audio-api"],"aioseo_notices":[],"jetpack_featured_media_url":"","jetpack-related-posts":[{"id":25813,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2023\/09\/15\/cost-efficient-server-structure-merging-static-and-dynamic-api\/","url_meta":{"origin":12178,"position":0},"title":"Cost-Efficient Server Structure: Merging Static and Dynamic API","author":"mc071","date":"15. 