If WASM+WASI existed in 2008, we wouldn’t have needed to created Docker. That’s how important it is. Webassembly on the server is the future of computing. A standardized system interface was the missing link. Let’s hope WASI is up to the task!
This tweet about WebAssembly (WASM) by the inventor of Docker hints at the potential innovative power of this technology. This article first conveys the most important technical foundations of WebAssembly that are necessary for further understanding. It then takes a closer look at the WASI standard, which bridges the gap to container virtualization. Finally, we examine Krustlet (Kubernetes) and wasmCloud, two existing cloud technologies that are centrally based on WebAssembly.
Written by Tim Tenckhoff – tt031 | Computer Science and Media
Interaction with Host Objects
Speaking of selectors used to pick DOM elements, jQuery allows a highly specific selection of elements based on tag names, classes, and CSS (25 Techniques 2012). However, as Steven de Salas points out in his online blog article, it is important to be aware that this approach potentially requires several iterations through the underlying DOM elements to find the respective match. He states that this can be improved by picking nodes by ID. An example can be seen in Figure 6:
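The idea can be sketched with plain, library-free code. This is a minimal illustration only; the selector strings and element IDs are hypothetical, and the document object is injected as a parameter so the functions can be demonstrated outside a browser (in real code you would simply use the global `document`):

```javascript
// A complex selector may walk large parts of the DOM tree
// before finding a match.
function findByQuery(doc) {
  return doc.querySelector('.dialog .confirm-button');
}

// An ID lookup resolves in a single, fast lookup in most engines.
function findById(doc) {
  return doc.getElementById('confirm-button');
}
```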
DOM interaction performance also improves when references to browser objects are stored during instantiation. If the respective website is not expected to change after instantiation, references to the DOM structure should be stored once when the page is created, not only when they are needed. It is generally a bad idea to instantiate references to DOM objects over and over again; instead, it is advisable to create the few references to objects that are needed several times during instantiation (25 Techniques 2012). If no reference to a DOM object has been stored and one is required within a function, a local variable containing a reference to the DOM object can be created. This speeds up the iteration considerably, as the local variable is stored in the fastest and most accessible part of the stack (25 Techniques 2012).
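A small sketch of this caching idea, under the assumption that a menu element is looked up once at instantiation and reused afterwards. The lookup function is injected (in a browser it would be `document.getElementById`), and all names are hypothetical:

```javascript
// Resolve the DOM reference once at instantiation, then reuse it
// on every later call instead of querying the DOM again.
function createMenuController(lookup) {
  const menu = lookup('main-menu'); // stored once
  let openCount = 0;
  return {
    open() {
      openCount += 1;     // no repeated DOM query here
      menu.visible = true;
    },
    calls: () => openCount,
  };
}
```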
The overall number of DOM elements is also a criterion with respect to performance. Since the time needed for changes in the DOM is proportional to the complexity of the rendered HTML, this should also be considered an important performance factor (Speeding up 2010).
Another important aspect of DOM interaction is batching (style) changes. Every DOM change causes the browser to re-render the UI (25 Techniques 2012). Therefore, applying each style change separately should be avoided. The ideal approach is to apply the changes in one step, for example by adding a CSS class. The different approaches can be seen in Figure 7 below.
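As a rough sketch of the two approaches (the class and property names are hypothetical): one class toggle lets the browser apply all associated rules in a single pass, while per-property updates may each trigger a separate recalculation.

```javascript
// Batched: a single class change covers all style updates at once.
function highlightBatched(el) {
  el.classList.add('highlighted');
}

// Stepwise (anti-pattern): each assignment may trigger its own
// style recalculation and repaint.
function highlightStepwise(el) {
  el.style.color = 'red';
  el.style.background = 'yellow';
  el.style.border = '1px solid black';
}
```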
Additionally, it is recommended to build DOM elements separately before adding them to the website. As said before, every DOM change requires a re-rendering. If a part of the DOM is built "off-line", the impact of appending it in one go is much smaller (25 Techniques 2012). Another approach is to buffer DOM content in scrollable <div> elements and to remove elements from the DOM that are not displayed on the screen, for example outside the visible area of a scrollable <div>. These nodes are then reattached when necessary (Speeding up 2010).
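In the browser, the standard tool for this "off-line" construction is a DocumentFragment. The following sketch injects the document object as a parameter so it can be shown outside a browser; the list items are hypothetical:

```javascript
// Build all <li> nodes inside a detached fragment, then attach the
// fragment with one single appendChild on the live list element.
function appendListItems(doc, list, items) {
  const fragment = doc.createDocumentFragment();
  for (const text of items) {
    const li = doc.createElement('li');
    li.textContent = text;
    fragment.appendChild(li); // off-line: no re-rendering yet
  }
  list.appendChild(fragment); // one single live-DOM update
}
```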
Looking at different pages on the web, e.g. the City of Stuttgart's website, it can be observed that screen rendering is delayed for the user until all script dependencies are fully loaded. As seen in Figure 8, some dependencies cause the delayed download of other dependencies, which in turn have to wait for each other. To solve this problem, the active management and reduction of the dependency payload are a core part of performance optimization.
One approach to do so is to reduce the general dependency on libraries to a minimum (25 Techniques 2012). This can be done by using as much in-browser technology as possible, for example document.getElementById('element-ID') instead of using (and including) the jQuery library. Before adding a library to the project, it makes sense to evaluate whether all of its features are needed, or whether single features can be extracted from the library and added separately. If this is the case, it is of course important to check whether the adopted code is subject to a license; crediting and acknowledging the author is recommended in any case (25 Techniques 2012).
A further way to optimize the dependency management is the usage of a post-load dependency manager for libraries and modules. Tools like Webpack or RequireJS allow the layout and frame of a website to appear before all of the content is downloaded, by post-loading the required files in the background (JS Optimization 2018). This gives users a few extra seconds to familiarise themselves with the page (25 Techniques 2012).
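The core idea behind post-loading can be sketched without any tooling: wrap a loader so the heavy dependency is only fetched when it is first needed, and only once. Here `loader` stands in for something like a dynamic `import()` of a module; the wiring is hypothetical, and in real projects Webpack's code splitting or RequireJS handles it:

```javascript
// Defer loading a dependency until first use, and cache the result
// so repeated calls do not load it again.
function lazy(loader) {
  let cached;
  let loaded = false;
  return () => {
    if (!loaded) {
      cached = loader(); // deferred until actually required
      loaded = true;
    }
    return cached;
  };
}
```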
By maximizing the use of caching, the browser downloads the needed dependencies only on the first call and otherwise accesses its local copy. This can be done by manually adding ETags to files that need to be cached and by putting the *.js files to cache into static URI locations. This tells the browser to prefer the cached copy of scripts for all pages after the initial one (25 Techniques 2012).
To create interactive and responsive web applications, event binding and handling are essential. However, event bindings are hard to track due to their 'hidden' execution and can potentially cause performance degradation, e.g. if they are fired repeatedly (25 Techniques 2012). Therefore, it is important to keep track of the event execution throughout the various use cases of the developed code, to make sure that events are not fired multiple times or bind unnecessary resources (25 Techniques 2012).
To do so, it is especially important to pay attention to event handlers that fire in quick repetition. Browser events such as 'mousemove' and 'resize' are executed up to several hundred times per second. Thus, it is important to ensure that an event handler reacting to one of these events completes in less than 2-3 milliseconds (25 Techniques 2012). The box below visualizes the number of events fired when the mouse is moved over an element.
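A common way to keep such handlers within budget is throttling, i.e. letting the wrapped function run at most once per interval. The following is a minimal sketch (the interval value is arbitrary, and the clock function is injectable for testing; by default it is `Date.now`):

```javascript
// Run fn at most once every intervalMs, dropping calls in between.
function throttle(fn, intervalMs, now = Date.now) {
  let last = -Infinity;
  return (...args) => {
    if (now() - last >= intervalMs) {
      last = now();
      fn(...args);
    }
  };
}
```

In a browser, `window.addEventListener('mousemove', throttle(onMove, 100))` would then invoke `onMove` at most ten times per second, however fast the mouse moves.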
Another important point that needs to be taken care of is event unbinding. Every time an event handler is added to the code, it makes sense to consider the point at which it is no longer needed and to make sure that it stops firing at this point (Speeding up 2010). This avoids performance slumps caused by handlers that are bound multiple times, or by events firing when they are no longer needed. One good approach to prevent this is the usage of once-off execution constructs like jQuery.one(), or manually coding the unbind behavior at the right place and time (25 Techniques 2012). The example below shows the usage of jQuery.one(): an event binding on each p element that fires exactly once and unbinds itself afterwards. The implementation of this example can be seen in Figure 9.
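The same self-unbinding behaviour can be sketched without jQuery. The wrapper below is a hypothetical, library-free equivalent; note that modern browsers also offer this natively via `addEventListener(type, fn, { once: true })`:

```javascript
// Wrap a handler so it runs on the first call only and ignores
// ("unbinds") every call after that.
function once(fn) {
  let done = false;
  return (...args) => {
    if (done) return undefined; // already fired: do nothing
    done = true;
    return fn(...args);
  };
}
```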
[Interactive demo: click the boxes to trigger a jQuery.one event; each box's handler fires exactly once.]
A last important part of event binding optimization is to consider and understand the concept of event bubbling. A blog article by Alfa Jango describes the underlying difference between the .bind(), .live(), and .delegate() methods (Event Bubbling 2011).
Figure 10 shows what happens if, for example, a link is clicked: the click event fires on the link element and triggers the functions bound to that element's click event. The click event then propagates up the tree, to the next parent element and on to each ancestor element, because the click event was triggered on one of their descendant elements (Event Bubbling 2011). Knowing this, the difference between the jQuery functions .bind(), .live(), and .delegate() can be explained:
.bind(): jQuery scans the document for all matching elements and attaches the handler function to each of them directly.
.live(): jQuery binds the function to $(document), together with the parameters 'click' and 'a'. Whenever an event bubbles up to the document, it checks whether both parameters match and, if so, executes the function.
.delegate(): Similar to .live(), but binds the handler to a specific element instead of document.root.
The article says that .delegate() is better than .live(). But why?
Speed: $('a') first scans for all a elements and saves them as objects; this consumes space and is therefore slower.
Flexibility: .live() is chained to the object set of $('a') elements, although it actually acts at the $(document) level.
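The delegation pattern behind .delegate() can be sketched without jQuery. One handler on a container decides, per bubbled event, whether the original target matches a selector; the event object shape is assumed to follow the DOM (`event.target.matches(selector)`), and `addEventListener` is the standard DOM call:

```javascript
// One single handler on the container serves any number of current
// or future descendants matching the selector.
function delegate(container, selector, handler) {
  container.addEventListener('click', event => {
    if (event.target.matches(selector)) {
      handler(event);
    }
  });
}
```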
The next topic is the implementation of efficient iterations. As seen in Figure 11 below, the execution time for string operations grows exponentially during long iterations (String Performance 2008). This shows why iterations can often be the reason for performance flaws. Therefore, it always makes sense to get rid of unnecessary loops, or of calls inside of loops (25 Techniques 2012).
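A classic example of the string-operation cost measured in the cited benchmark is repeated concatenation inside a loop. A common mitigation, sketched below, is to collect the pieces in an array and join once at the end (note that current engines optimize `+=` heavily, so the gap is smaller today than the 2008 figures suggest):

```javascript
// Collect parts in an array, then concatenate once at the end.
function buildListJoined(items) {
  const parts = [];
  for (const item of items) {
    parts.push(`<li>${item}</li>`); // cheap append per iteration
  }
  return parts.join('');            // one final concatenation
}

// Naive version for comparison: `+=` may create a new string per pass.
function buildListConcat(items) {
  let html = '';
  for (const item of items) {
    html += `<li>${item}</li>`;
  }
  return html;
}
```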
The difference between reference and primitive value types also comes up in terms of efficient iterations. Primitive types such as String, Boolean, or Number are copied when they are handed over to a function. Reference types, such as Arrays, Objects, or Dates, are handed over as a lightweight reference. This knowledge should be considered if a reference is handed over to a function running in an iteration: obviously, it is better to avoid frequent copying of primitive types and to pass lightweight references to these functions.
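The distinction can be made concrete with a small (hypothetical) function that receives one of each: mutations through the object reference are visible to the caller, while reassigning the copied primitive is not.

```javascript
// obj is passed as a reference, str as a copied primitive value.
function mutate(obj, str) {
  obj.count += 1; // the caller sees this change
  str += '!';     // only the local copy changes
  return str;
}
```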
One easy example (25 Techniques 2012) of this is to prefer the usage of native, optimized constructs over self-written algorithms: functions such as Math.floor() or new Date().getTime() for timestamps do not need to be rewritten. The === operator instead of == provides an optimized, faster type-based comparison (25 Techniques 2012). Furthermore, the switch statement can be used instead of long if-then-else blocks to provide an advantage during compilation, to name just a few examples.
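These points can be illustrated in a few lines (the status-code mapping is an invented example):

```javascript
// Native constructs instead of hand-written helpers:
const rounded = Math.floor(4.7);        // no custom rounding loop needed
const timestamp = new Date().getTime(); // milliseconds since the epoch

// === skips type coercion, == does not:
const loose = (0 == '0');   // true, after coercion
const strict = (0 === '0'); // false: the types differ

// switch instead of a long if-then-else chain:
function describe(code) {
  switch (code) {
    case 200: return 'ok';
    case 404: return 'not found';
    default:  return 'other';
  }
}
```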