How does Tor work?

Written by Tim Tenckhoff – tt031 | Computer Science and Media

1. Introduction

The mysterious dark part of the internet, hidden in the depths of the World Wide Web, is well known as a lawless space for shady online drug deals and other criminal activities. But in times of continuous tracking on the Internet, personalized advertising, and digital censorship by governments, the (almost) invisible part of the web also promises to bring back lost anonymity and privacy. This blog post aims to shed light into the dark corners of the deep web and primarily explains how Tor works.

Reference: Giphy, If Google was a person: Deep Web
  1. Introduction
  2. The Deep Web
    2.1 What is the Tor Browser?
  3. The Tor-Network
    3.1 Content
    3.2 Accessing the Network
    3.3 Onion Routing – How Does Tor Work?
  4. Conclusion – Weaknesses
  5. References

2. The Deep Web

So, what exactly is the deep web? To explain this, it makes sense to cast a glance at the overall picture. The internet as most people know it forms only a minimal proportion of the overall 7.9 zettabytes (1 ZB = 1000⁷ bytes = 10²¹ bytes, i.e. one trillion gigabytes) of data available online (Hidden Internet 2018). This huge amount of data can be separated into three parts:

Separation of the worldwide web, Reference: (Search Engines 2019)

As seen in the picture above, search engines like Google or Bing give us access to only 4% of the data available online. The remaining 96% are protected by passwords, hidden behind paywalls, or can only be accessed via special tools (Hidden Internet 2018). But what separates the hidden parts into Deep Web and Dark Web by definition?

The Deep Web fundamentally refers to data that is not indexed by standard search engines such as Google or Yahoo. This includes all web pages that search engines cannot find: user databases, registration-required web forums, webmail pages, and pages behind paywalls. The Deep Web can therefore contain content that is perfectly legal (e.g. governmental records). The Dark Web is a small subset of the Deep Web: the collection of websites that belong to it exists only on an encrypted network that cannot be reached by regular browsers (such as Chrome, Firefox, Internet Explorer, etc.). Consequently, this area is a well-suited scene for cybercrime. Accessing these dark websites requires the Tor Browser.

…hidden crime bazaars that can only be accessed through special software that obscures one’s true location online.

– Brian Krebs, Reference: (Krebs On Security 2016)

2.1 What is the Tor Browser?

The pre-alpha version of the Tor Browser was released in September 2002 (Onion Pre Alpha 2002), and the Tor Project, the organization maintaining Tor, was started in 2006. The name Tor is an acronym of the three words The onion router. The underlying Onion Routing protocol was initially developed by the US Navy in the mid-1990s at the U.S. Naval Research Laboratory (Anonymous Connections 1990). The protocol basically describes a technique for anonymous communication over a public network: each message is encapsulated in several layers of encryption and redirected through a free, worldwide overlay network. It is called onion routing because these layers of encryption resemble the layers of an onion. Developed as free and open-source software for enabling anonymous communication, the Tor Browser still follows the intended use today: protecting personal privacy and communication by shielding internet activities from monitoring.

With the Tor Browser, nearly anyone can get access to The Onion Router (Tor) network simply by downloading and running the software. The browser does not need to be installed in the system and can be unpacked and transported as portable software via USB stick (Tor Browser 2019). Once started, the browser connects to the Tor network: a network of many servers, the Tor nodes. While surfing, the traffic is encrypted by each of these Tor nodes. Only at the last server in the chain of nodes, the so-called exit node, is the data stream decrypted again and routed normally via the Internet to the target server whose address is entered in the address bar of the Tor Browser. In concrete terms, the Tor Browser first downloads a list of all available Tor servers and then defines a random route from server to server for its data traffic, which is called onion routing, as said before. Each route consists of a total of three Tor nodes, with the last server being the Tor exit node (Tor Browser 2019).

Connection of a web client to a server via Tor nodes, Reference: (Hidden Internet 2018)

Because traffic to an onion service runs across multiple servers of the Tor network, the traces that users usually leave while surfing with a normal Internet browser or while exchanging data such as email and messenger messages become blurred. Even though the payload of normal Internet traffic can be encrypted, e.g. via HTTPS, the header containing routing source, destination, size, timing, etc. can easily be spied on by attackers or Internet providers. Onion routing, in contrast, also obscures the IP address of Tor users and keeps their computer location anonymous. To continuously disguise the data route, a new route through the Tor network is chosen every ten minutes (Tor Browser 2019). The exact functionality of the underlying encryption is described later in section 3.3 Onion Routing – How Does Tor Work?.

3. The Tor-Network

For those concerned about the privacy of their digital communications in times of large-scale surveillance, the Tor network provides strong obfuscation. The following section explains which content can be found on websites hidden in the dark web, how the multi-layered encryption works in detail, and what kind of anonymity it actually offers.

3.1 Content


Most of the content in the darknet involves nefarious or illegal activity. Given the anonymity it provides, many criminals try to take advantage of it. This results in a large volume of darknet sites revolving around drugs, darknet markets (sites for the purchase and sale of services and goods), and fraud. Some examples, found within minutes using the Tor Browser, are listed in the following:

  • Drug or other illegal substance dealers: Darknet markets (black markets) allow the anonymous purchase and sale of medicines and other illegal or controlled substances such as pharmaceuticals. Almost everything can be found here, quite simply in exchange for bitcoins.
  • Hackers: Individuals or groups, looking for ways to bypass and exploit security measures for their personal benefit or out of anger for a company or action (Krebs On Security 2016), communicate and collaborate with other hackers in forums, share security attacks (use a bug or vulnerability to gain access to software, hardware, data, etc.) and brag about attacks. Some hackers offer their individual service in exchange for bitcoins.
  • Terrorist organizations use the network for anonymous Internet access, recruitment, information exchange and organisation (What is the darknet?).
  • Counterfeiters: Offer document forgeries and currency imitations via the darknet.
  • Merchants of stolen information: Offer e.g. credit card numbers and other personally identifiable information, which can be accessed and ordered for theft and fraud activities.
  • Weapon dealers: Some dark markets allow the anonymous, illegal purchase and sale of weapons.
  • Gamblers play or connect in the darknet to bypass their local gambling laws.
  • Murderers/assassins: Despite ongoing discussions about whether these services are real, created by law enforcement, or just fictitious websites, some dark websites exist that offer murder for hire.
  • Providers of illegal explicit material e.g. child pornography: We will not go into detail here.
Screenshot of the infamous Silk Road (a platform for selling illegal drugs, shut down by the FBI in October 2013), Reference: (Meet Darknet 2013)

But the same anonymity also has a bright side: freedom of expression. It offers the ability to speak freely without fear of persecution in countries where this is not a fundamental right. According to the Tor Project, hidden services allowed regime dissidents in Lebanon, Mauritania, and during the Arab Spring to host blogs in countries where the exchange of those ideas would be punished (Meet Darknet 2013). Some other use-cases are:

  • To use it as a censorship circumvention tool, to reach otherwise blocked content (in countries without free access to information)
  • Socially sensitive communication: Chat rooms and web forums where rape and abuse survivors or people with illnesses can communicate freely, without being afraid of being judged.

A further example of that is the New Yorker's Strongbox, which allows whistleblowers to upload documents and offers a way to communicate anonymously with the magazine (Meet Darknet 2013).

3.2 Accessing the Network

The hidden sites of the dark web can be accessed via special onion domains. These addresses are not part of the normal DNS, but can be interpreted by the Tor Browser if they are sent into the network through a proxy (Interaction with Tor 2018). To create an onion domain, a Tor daemon first creates an RSA key pair, calculates the SHA-1 hash over the generated public RSA key, shortens it to 80 bits, and encodes the result into a 16-digit base32 string (e.g. expyuzz4wqqyqhjn) (Interaction with Tor 2018). Because onion domains derive directly from their public key, they are self-certifying: if a user knows a domain, he automatically knows the corresponding public key. Unfortunately, onion domains are therefore difficult to read, write, or remember.

In February 2018, the Tor Project introduced the next generation of onion domains, which are now 56 characters long, use a base32 encoding of the public key, and include a checksum and version number (Interaction with Tor 2018). The new onion services also use elliptic curve cryptography, so the entire public key can now be embedded in the domain, while previous versions could only embed its hash. These changes enhanced the security of onion services, but the long and unreadable domain names impaired usability again (Interaction with Tor 2018). It is therefore a common procedure to repeatedly generate RSA keys until the resulting domain happens to contain a desired string (e.g. facebook). Such vanity onion domains look like facebookcorewwwi.onion for Facebook or nytimes3xbfgragh.onion for the New York Times (Interaction with Tor 2018). In contrast to the rest of the World Wide Web, where navigation is primarily done via search engines, the darknet often contains pages with lists of these domains for further navigation; it deliberately tries to hide from the eyes of the searchable web (Meet Darknet 2013).
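
To illustrate the derivation of such a v2-style address, here is a minimal Node.js sketch of the steps described above. It is a simplification under stated assumptions: real Tor hashes the DER-encoded public key taken from its own key files, and the 1024-bit key size is the historical choice.

const crypto = require('crypto');

// Tor's base32 alphabet (RFC 4648, lowercase)
const ALPHABET = 'abcdefghijklmnopqrstuvwxyz234567';

function base32(buf) {
  let bits = 0, value = 0, out = '';
  for (const byte of buf) {
    value = (value << 8) | byte;
    bits += 8;
    while (bits >= 5) {
      out += ALPHABET[(value >>> (bits - 5)) & 31];
      bits -= 5;
    }
  }
  return out;
}

// Generate an RSA key pair (v2 onion services historically used 1024 bits)
const { publicKey } = crypto.generateKeyPairSync('rsa', {
  modulusLength: 1024,
  publicKeyEncoding: { type: 'pkcs1', format: 'der' },
  privateKeyEncoding: { type: 'pkcs1', format: 'pem' },
});

// SHA-1 over the DER-encoded public key, truncated to 80 bits (10 bytes)
const digest = crypto.createHash('sha1').update(publicKey).digest();
const onion = base32(digest.subarray(0, 10)) + '.onion';
console.log(onion); // a random 16-character name, e.g. "gv2ttf3dgczyfmt2.onion"

The 16 base32 characters correspond exactly to the 80 truncated bits (16 × 5 bits), which is why the old addresses have precisely this length.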

3.3 Onion Routing – How Does Tor Work?

So how exactly does the anonymizing encryption technology behind onion routing work? As said before, the Tor Browser chooses an encrypted path through the network and builds a circuit in which each onion router only knows (i.e. is able to decrypt) its predecessor and its successor, but no other nodes in the circuit. Tor thereby uses the Diffie-Hellman algorithm to negotiate keys between the user and the different onion routers in the network (How does Tor work 2018). The algorithm is one application of public-key cryptography and makes use of large prime numbers to create two mathematically linked keys:

  1. A public-key — public and visible to others
  2. A private-key — private and kept secret

The public key can be used to encrypt messages, and the private key is in turn used to decrypt the encrypted content. This implies that anyone is able to encrypt content for a specific recipient, but only this recipient can decrypt it again (How does Tor work 2018).
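
A minimal sketch of such a key agreement, using Node.js's built-in crypto module and a standardized 2048-bit Diffie-Hellman group (the group choice is an assumption for illustration; Tor's actual circuit-building handshake fixes its own parameters):

const crypto = require('crypto');

// Client and relay each use the well-known 2048-bit MODP group 14
const client = crypto.getDiffieHellman('modp14');
const relay = crypto.getDiffieHellman('modp14');
client.generateKeys();
relay.generateKeys();

// Each side combines its own private key with the other's public key
const clientSecret = client.computeSecret(relay.getPublicKey());
const relaySecret = relay.computeSecret(client.getPublicKey());

// Both arrive at the same shared key, which never travels over the wire
console.log(clientSecret.equals(relaySecret)); // true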

Tor normally uses 3 nodes by default, so 3 layers of encryption are required to encrypt a message (How does Tor work 2018). It is important to note that every single Tor packet (called a cell) is exactly 512 bytes large, so that attackers cannot guess which cells carry larger content, e.g. images or other media (How does Tor work 2018). At every step the message passes, one layer of encryption is removed, revealing the position of the next successor in the circuit. Thus, no single node in the circuit knows both where the message originated and where its final destination is (How does Tor work 2018). A simplified visualization of this procedure can be seen in the picture below.

Removing one layer of encryption in every step to the next node, Reference (How does Tor work 2018)
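
This peeling of layers can be sketched in a few lines of Node.js. It is a conceptual sketch under simplifying assumptions: random per-hop keys instead of the negotiated ones, AES-CTR as an example cipher, and no fixed 512-byte cell format.

const crypto = require('crypto');

// One symmetric key per relay; in real Tor these come from Diffie-Hellman
const hops = ['entry', 'middle', 'exit'].map((name) => ({
  name,
  key: crypto.randomBytes(32),
}));

function encryptLayer(key, data) {
  const iv = crypto.randomBytes(16);
  const cipher = crypto.createCipheriv('aes-256-ctr', key, iv);
  return Buffer.concat([iv, cipher.update(data), cipher.final()]);
}

function decryptLayer(key, data) {
  const decipher = crypto.createDecipheriv('aes-256-ctr', key, data.subarray(0, 16));
  return Buffer.concat([decipher.update(data.subarray(16)), decipher.final()]);
}

// The client wraps the payload once per hop, innermost layer for the exit node
let cell = Buffer.from('GET / HTTP/1.1');
for (const hop of [...hops].reverse()) cell = encryptLayer(hop.key, cell);

// Each relay can remove only its own layer and sees nothing beyond it
for (const hop of hops) {
  cell = decryptLayer(hop.key, cell);
  console.log(`${hop.name} peeled one layer`);
}
console.log(cell.toString()); // "GET / HTTP/1.1"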

But how does the network allow different users to connect without knowing each other's network identity? The answer lies in so-called "rendezvous points", which are part of the protocol behind onion services, formerly known as hidden services (Onion Service Protocol 2019). The following steps are mainly extracted and summarized from the official Tor documentation of the Onion Service Protocol 2019 and describe the technical details of how this is made possible:

Step 1: Before a client can contact an onion service in the network, the service needs to broadcast its existence. To do so, it randomly selects relays in the network and requests them to act as introduction points by sending them its public key. The picture below shows these circuit connections in the first step as green lines. It is important to mention that these lines mark Tor circuits, not direct connections. The full three-step circuit makes it hard to associate an introduction point with the IP address of an onion server: even though the introduction point is aware of the onion server's identity (public key), it never learns the onion server's location (IP address) (Onion Service Protocol 2019).

Step 1: Reference: (Onion Service Protocol 2019)

Step 2: The service creates a so-called onion service descriptor that contains its public key and a summary of each introduction point (Onion Service Protocol 2019). This descriptor is signed with the private key of the service and then uploaded to a distributed hash table in the network. If a client requests an onion domain as described in section 3.2 Accessing the Network, the respective descriptor is found: if e.g. "abc.onion" is requested, "abc" is the 16- or 56-character string derived from the service's public key, as seen in the picture below.

Step 2: Reference: (Onion Service Protocol 2019)

Step 3: When a client wants to contact an onion service, it initiates the connection by downloading the descriptor from the distributed hash table as described before. If a descriptor exists for the address abc.onion, the client receives the set of introduction points and the respective public key. This action can be seen in the picture below. At the same time, the client establishes a circuit to another randomly selected node in the network and asks it to act as a rendezvous point by submitting a one-time secret (Onion Service Protocol 2019).

Step 3: Reference: (Onion Service Protocol 2019)

Step 4: Now the client creates a so-called introduce message (encrypted with the public key of the onion service), containing the address of the rendezvous point and the one-time secret. This message is sent to one of the introduction points, requesting that it be delivered to the onion service. Because this communication again runs through a Tor circuit, it is not possible to uncover the client's IP address and thus its identity.

Step 4: Reference: (Onion Service Protocol 2019)

Step 5: At this point, the onion service decrypts the introduce message, including the address of the rendezvous point and the one-time secret. The service is then able to establish a circuit to the now-revealed rendezvous point and sends the one-time secret to it in a rendezvous message. The service sticks with the same set of entry guards when creating these new circuits (Onion Service Protocol 2019). Through this technique, an attacker cannot create his own relay and force the onion service to create an arbitrary number of circuits in the hope that the corrupt relay would be randomly selected as the entry node. This attack scenario, which is able to uncover the anonymity of hidden services, was described by Øverlier and Syverson in their paper (Locating Hidden Servers 2006).

Step 5: Reference: (Onion Service Protocol 2019)

Step 6: As seen in the last picture below, the rendezvous point informs the client about the successfully established connection. Afterwards, both the client and the onion service can use their circuits to the rendezvous point to communicate. The (end-to-end encrypted) messages are forwarded through the rendezvous point from the client to the service and vice versa (Onion Service Protocol 2019). The initial introduction circuit is never used for the actual communication, for one important reason mainly: a relay should not be attributable to a particular onion service. Hence, the rendezvous point never learns the identity of any onion service (Onion Service Protocol 2019). Altogether, the complete connection between client and onion service consists of six nodes: three selected by the client, the third of which is the rendezvous point, and three selected by the service.

Step 6: Reference: (Onion Service Protocol 2019)

4. Conclusion – Weaknesses

Contrary to what many people believe (How does Tor work 2018), Tor is not a completely decentralized peer-to-peer system. If it were, it wouldn't be very useful, as the system requires a number of directory servers that continuously manage and maintain the state of the network.

Furthermore, Tor is not secured against end-to-end attacks. While it does provide protection against traffic analysis, it cannot and does not attempt to protect against monitoring of traffic at the boundaries of the Tor network (the traffic entering and exiting the network), a problem that cybersecurity experts have not yet been able to solve (How does Tor work 2018). Researchers from the University of Michigan even developed a network scanner that identifies 86% of live Tor "bridges" worldwide with a single scan (Zmap Scan 2013). Another disadvantage of Tor is its speed: because the data packets are randomly sent through a number of nodes, each of which could be anywhere in the world, the usage of Tor is very slow. Despite its weaknesses, the Tor Browser is an effective, powerful tool for the protection of the user's privacy online, but it is good to keep in mind that a Virtual Private Network (VPN) can also provide security and anonymity without the significant speed decrease of the Tor Browser (Tor or VPN 2019). If total obfuscation and anonymity, regardless of performance, play a decisive role, a combination of both is recommended.

5. References

Hidden Internet [2018], Manu Mathur, Exploring the Hidden Internet – The Deep Web [Online]
Available at:
[Accessed 27 August 2019].

Search Engines [2019], Julia Sowells, Top 10 Deep Web Search Engines of 2017 [Online]
Available at:
[Accessed 24 July 2019].

Krebs On Security [2016], Brian Krebs, Krebs on Security: Rise of Darknet Stokes Fear of The Insider [Online]
Available at:
[Accessed 14 August 2019].

Anonymous Connections [1990], Michael G. Reed, Paul F. Syverson, and David M. Goldschlag, Naval Research Laboratory, Anonymous Connections and Onion Routing [Online]
Available at:
[Accessed 18 August 2019].

Onion Pre Alpha [2002], Roger Dingledine, pre-alpha: run an onion proxy now! [Online]
Available at:
[Accessed 18 August 2019].

Tor Browser [2019], Heise Download, Tor Browser 8.5.4 [Online]
Available at:
[Accessed 29 August 2019].

Interaction with Tor [2018], Philipp Winter, Anne Edmundson, Laura M. Roberts, Agnieszka Dutkowska-Zuk, Marshini Chetty, Nick Feamster, How Do Tor Users Interact With Onion Services? [Online]
Available at:
[Accessed 16 August 2019].

What is the darknet?, Darkowl, What is THE DARKNET? [Online]
Available at:
[Accessed 22 August 2019].

Meet Darknet [2013], Brad Chacos (PCWorld), Meet Darknet, the hidden, anonymous underbelly of the searchable Web [Online]
Available at:
[Accessed 23 August 2019].

Onion Service Protocol [2019], Tor Documentation, Tor: Onion Service Protocol [Online]
Available at:
[Accessed 8 July 2019].

How does Tor work [2018], Brandon Skerritt, How does Tor *really* work? [Online]
Available at:
[Accessed 8 July 2019].

Locating Hidden Servers [2006], Lasse Øverlier, Paul Syverson, Locating Hidden Servers [Online]
Available at:
[Accessed 8 August 2019].

Zmap Scan [2013], Peter Judge, Zmap’s Fast Internet Scan Tool Could Spread Zero Days In Minutes [Online]
Available at:
[Accessed 21 August 2019].

Tor or VPN [2019], Bill Man, Tor or VPN – Which is Best for Security, Privacy & Anonymity? [Online]
Available at:
[Accessed 8 August 2019].

JavaScript Performance optimization with respect to the upcoming WebAssembly standard

Written by Tim Tenckhoff – tt031 | Computer Science and Media

1. Introduction

Speed and performance of the (worldwide) web have advanced considerably over the last decades. As sites rely ever more heavily on JavaScript (JS Optimization 2018), actions to optimize the speed and performance of web applications grow in importance. This blog post aims to summarize practical techniques to enhance the performance of JavaScript applications and provides a comparative outlook regarding the upcoming WebAssembly standard.

Let's jump into hyperspeed…

  1. Introduction
    JavaScript
    Why optimize?
  2. How can the JavaScript Speed and Performance be increased?
    Interaction with Host Objects
    Dependency Management
    Event Binding
    Efficient Iterations
    Syntax
  3. Is WebAssembly replacing JavaScript?
  4. References


JavaScript

Back in 1993, a company called Netscape was founded in America. Netscape aimed to exploit the potential of the burgeoning World Wide Web and created its own web browser, Netscape Navigator (Speaking JavaScript 2014). When Netscape realized in 1995 that the web needed to become more dynamic, they decided to develop a scripting language with a syntax similar to Java's to ward off other existing languages. In May 1995, the prototype of this language was written by the freshly hired software developer Brendan Eich within 10 days. The initial name of the language was Mocha, which was later changed by marketing to LiveScript. In December 1995, it was finally renamed to JavaScript to benefit from Java's popularity (Speaking JavaScript 2014).

Today, every time the functionality of a web page goes beyond displaying static content, e.g. through timely content updates, animated graphics, an interactive map or the submission of an input form, JavaScript is probably involved. It is thus the third layer of the standard web technologies: it adds dynamic behavior to an existing markup structure (e.g. paragraphs, headings, and data tables defined in HTML), customized with style rules (defined in CSS). When a web browser loads a webpage, the JavaScript code is executed by the browser's engine after the HTML and CSS have been assembled and rendered into a web page (JavaScript 2019). This ensures that the content of the required page is already in place before the JavaScript code starts to run, as seen in Figure 1.

Figure 1: The execution of HTML, CSS and JavaScript in the Browser (JS Optimization, 2018)

The scripting language, often abbreviated as JS, features a curly-bracket syntax, dynamic typing, object orientation, and first-class functions. Initially implemented only in and for web browsers on the client side, today's JavaScript engines are embedded in many other types of host software, including web servers (e.g. NodeJS), databases, and other non-web environments (e.g. PDF software).

Why optimize?

Speaking about performance optimization in networks, most developers think about it in terms of download and execution cost: sending more bytes of JavaScript code takes longer, depending on the user's internet connection (JS Optimization 2018). It therefore generally makes sense to reduce the transferred network traffic, especially in areas where the available connection type might not be 4G or Wi-Fi. But one of JavaScript's heaviest performance costs is the parse/compile time. The Google Chrome browser visualizes the time spent in these phases in its performance panel, as seen in Figure 2. But why is it even important to optimize this time?

Figure 2: Parse and compile time of JavaScript (JS Optimization 2018)

The answer is that if more time is spent parsing/compiling, there might be a significant delay in the user's interaction with the website. The longer it takes to parse and compile the code, the longer it takes until a site becomes interactive. According to a comparison in an article by Google developer Addy Osmani, JS is more likely to negatively impact a page's interactivity than other, equivalently sized resources (JS Optimization 2018). As the example in Figure 3 shows, 170 KB of JavaScript and the same amount of JPEG bytes require the same network transmission time of 3.4 seconds. But while the resource processing time to decode the image (0.064 seconds) and rasterize/paint it (0.028 seconds) is relatively low, it takes much longer to parse the JavaScript code (~2 seconds) and execute it (~1.5 seconds). Since website users differ not only in their network connection but also in their hardware, older hardware can additionally increase execution time. This shows how much JS can potentially delay the interactivity of a website through parse, compile and execution costs, and proves the need for optimization.

Figure 3: The difference between JavaScript and JPEG resource processing (JS Optimization 2018)

2. How can the JavaScript Speed and Performance be increased?

The question at this point is the following: What can practically be done during JavaScript development to optimize websites or web applications regarding speed and interactivity? The depths of the World Wide Web expose several blogs and articles about this kind of optimization. The following section aims to sum up the techniques found there, categorized by the underlying performance problem:

Interaction with Host Objects

The interaction with "evil" host objects needs to be avoided, as Steven de Salas puts it in his blog (25 Techniques 2012).

As described before, JavaScript code is compiled by the browser's scripting engine to machine code, offering an extraordinary increase in performance and speed. However, the interaction with host objects (in the browser) outside this native JavaScript environment causes a loss in performance, especially if these host objects are screen-rendered DOM (Document Object Model) objects (25 Techniques 2012). To prevent this, the interaction with these evil host objects needs to be minimized. Let's take a look at how this can be done.

Rotation via CSS

A first useful approach is the usage of CSS classes for DOM animations or interactions. Unlike JavaScript code, solutions such as CSS3 transitions, @keyframes, :before and :after are highly optimized by the browser (25 Techniques 2012). As shown above, the rotation of the small white square can either be animated in JavaScript by addressing the id via document.getElementById("rotateThisJS"), triggered by a button starting the rotate() function below…

Figure 4: Rotation via JavaScript (How not to…)
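
Figure 4 is an image in the original post; the following is a plausible sketch of such a JavaScript-driven rotation (only the id rotateThisJS and the function name rotate() are taken from the text):

let angle = 0;
function rotate() {
  const square = document.getElementById('rotateThisJS');
  // Update the style on every tick: each assignment touches a DOM host object
  setInterval(() => {
    angle = (angle + 1) % 360;
    square.style.transform = 'rotate(' + angle + 'deg)';
  }, 16); // roughly once per frame
}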

…or by adding a CSS class (as seen in Figure 5) to the <div> element, which works much more efficiently, especially if the respective website contains several animations. Apart from animations, CSS is also able to handle interactions such as hovering elements. So in this case, the way JavaScript is optimized is simply not to use it.

Figure 5: Animation via CSS
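
Figure 5 is likewise an image; a sketch of the CSS-class equivalent using @keyframes (the class and animation names are made up):

/* Adding this class lets the browser run the animation on its own,
   highly optimized rendering path, without any script involvement. */
.rotate-this-css {
  animation: spin 2s linear infinite;
}

@keyframes spin {
  from { transform: rotate(0deg); }
  to   { transform: rotate(360deg); }
}

In JavaScript, a single element.classList.add('rotate-this-css') then replaces the whole setInterval loop.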

Speaking of selectors to pick DOM elements, jQuery allows a highly specific selection of elements based on tag names, classes and CSS (25 Techniques 2012). But according to the online blog article by Steven de Salas, it is important to be aware that this approach involves potentially many iterations through the underlying DOM elements to find the respective match. He states that this can be improved by picking nodes by ID. An example can be seen in Figure 6:

Figure 6: Different ways of DOM element selection
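
Figure 6 is an image; a small sketch of the two approaches it contrasts (the element ids and classes are made up):

// Class/descendant selector: jQuery walks the DOM to collect matches
const $items = $('#content .menu-item');

// ID lookup: a single optimized hash lookup inside the browser
const item = document.getElementById('menu-item-42');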

What also increases the (DOM interaction) performance is storing references to browser objects during instantiation. If the respective website is not expected to change after instantiation, references to the DOM structure should be stored when the page is created, not only when they are needed. It is generally a bad idea to instantiate references to DOM objects over and over again; instead, the few references to objects that are needed several times should be created once, during instantiation (25 Techniques 2012). If no reference to a DOM object has been stored and it needs to be used within a function, a local variable containing a reference to the required DOM object can be created. This speeds up the iteration considerably, as the local variable is stored in the fastest and most accessible part of the stack (25 Techniques 2012).
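
A brief sketch of both variants described above (the element ids and helper names are made up):

// Stored once during instantiation and reused afterwards
const menu = document.getElementById('menu');

function highlightMenu() {
  menu.classList.add('active'); // no repeated DOM lookup per call
}

function colorRows(table) {
  const rows = table.rows; // local reference, resolved once per call
  for (let i = 0; i < rows.length; i++) {
    rows[i].style.background = i % 2 ? '#eee' : '#fff';
  }
}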

The general amount of DOM elements is also a performance criterion. As the time used for changes in the DOM is proportional to the complexity of the rendered HTML, this should also be considered an important performance factor (Speeding up 2010).

Another important aspect regarding DOM interaction is to batch (style) changes. Every DOM change causes the browser to re-render the UI (25 Techniques 2012). Therefore, style changes should not be applied one by one. The ideal approach is to apply them in one step, for example by adding a CSS class. The different approaches can be seen in Figure 7 below.

Figure 7: How to batch changes in DOM
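
Figure 7 is an image in the original; a sketch of the contrast it shows (the element id and class name are made up):

const box = document.getElementById('box');

// How not to: three separate writes, each of which can trigger a re-render
box.style.padding = '4px';
box.style.borderColor = '#999';
box.style.color = '#333';

// Better: one class switch applies all changes in a single step
box.classList.add('highlighted');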

Additionally, it is recommended to build DOM elements separately before adding them to the page. As said before, every DOM change requires a re-rendering; if a part of the DOM is built "off-line", the impact of appending it in one go is much smaller (25 Techniques 2012). Another approach is to buffer DOM content in scrollable <div> elements and to remove elements from the DOM that are not displayed on the screen, for example outside the visible area of a scrollable <div>. These nodes are then reattached when necessary (Speeding up 2010).
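
One standard way to build DOM content "off-line" is a DocumentFragment; the API is not named in the article, but matches the described approach (the list id is made up):

const fragment = document.createDocumentFragment();
for (let i = 0; i < 100; i++) {
  const li = document.createElement('li');
  li.textContent = 'Item ' + i;
  fragment.appendChild(li); // built off-line, nothing rendered yet
}
document.getElementById('list').appendChild(fragment); // one single DOM update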

Dependency Management

Figure 8: Checking the dependency management of the city of Stuttgart's website

Looking at different pages on the web, e.g. the city of Stuttgart's website, it can be observed that screen rendering is delayed for the user until all script dependencies are fully loaded. As seen in Figure 8, some dependencies cause the delayed download of other dependencies that in turn have to wait for each other. To solve this problem, the active management and reduction of the dependency payload is a core part of performance optimization.

One approach is to reduce the general dependency on libraries to a minimum (25 Techniques 2012). This can be done by using as much in-browser technology as possible: for example, document.getElementById('element-ID') instead of using (and including) the jQuery library. Before adding a library to the project, it makes sense to evaluate whether all the included features are needed, or whether single features can be extracted from the library and added separately. In that case, it is of course important to check whether the adopted code is subject to a license; crediting and acknowledging the author is recommended in any case (25 Techniques 2012).

Another important approach is the combination of multiple JavaScript files into bundles. The reason is that one network request with e.g. 10 KB of data is transferred much faster than 10 requests with 1 KB each (25 Techniques 2012), due to the lower bandwidth usage and network latency of the single request. To save additional traffic, the combined bundle files can also be minified and compressed afterwards. Minification tools such as UglifyJS or babel-minify remove comments and whitespace from the code; compression tools such as gzip or Brotli are able to compress text-based resources to a smaller memory size (JS Optimization 2018).
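
A minimal sketch of such a bundling setup, assuming webpack (the file and directory names are hypothetical; production mode turns on webpack's built-in minification):

// webpack.config.js
const path = require('path');

module.exports = {
  entry: './src/index.js', // everything imported from here ends up in one bundle
  output: {
    filename: 'bundle.min.js',
    path: path.resolve(__dirname, 'dist'),
  },
  mode: 'production', // enables minification out of the box
};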

A further way to optimize dependency management is the usage of a post-load dependency manager for libraries and modules. Tools like Webpack or RequireJS allow the layout and frame of a website to appear before all of the content is downloaded, by post-loading the required files in the background (JS Optimization 2018). This gives users a few extra seconds to familiarise themselves with the page (25 Techniques 2012).
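
A minimal sketch of post-loading with a dynamic import() (the module name, export and button id are hypothetical):

// The heavy module is fetched only when the user actually needs it
document.getElementById('map-button').addEventListener('click', () => {
  import('./heavy-map-module.js')
    .then((module) => module.showMap())
    .catch((err) => console.error('Post-loading failed', err));
});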

By maximizing the usage of caching, the browser downloads the needed dependencies only at the first call and otherwise accesses its local copy. This can be done by manually adding ETags to files that need to be cached and by putting *.js files to be cached into static URI locations. This tells the browser to prefer the cached copy of the scripts for all pages after the initial one (25 Techniques 2012).
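
A sketch of such caching headers, assuming a Node.js/Express server (the paths and lifetime are made up for illustration; Express sends ETags for static files by default):

const express = require('express');
const app = express();

// Scripts live under a static URI; maxAge lets browsers reuse their
// cached copy, and the ETag allows cheap revalidation afterwards.
app.use('/static/js', express.static('scripts', { etag: true, maxAge: '30d' }));

app.listen(3000);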

Event Binding

To create interactive and responsive web applications, event binding and handling is an essential part. However, event bindings are hard to track due to their 'hidden' execution and can potentially cause performance degradation, e.g. if they are fired repeatedly (25 Techniques 2012). Therefore, it is important to keep track of the event execution throughout the various use cases of the developed code, to make sure that events are not fired multiple times and do not bind unnecessary resources (25 Techniques 2012).

To do so, it is especially important to pay attention to event handlers that fire in quick repetition. Browser events such as 'mousemove' and 'resize' are executed up to several hundred times per second while the user moves the mouse over an element, so it is important to ensure that a handler reacting to one of these events completes in less than 2-3 milliseconds (25 Techniques 2012).
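
One common way to keep such handlers cheap is to throttle them; throttling is not named in the article itself, but is standard practice for exactly this problem. A minimal sketch:

function throttle(fn, wait) {
  let last = 0;
  return function (...args) {
    const now = Date.now();
    if (now - last >= wait) {
      last = now;
      fn.apply(this, args);
    }
  };
}

// Runs at most every 100 ms instead of hundreds of times per second
document.addEventListener('mousemove', throttle((event) => {
  console.log(`pointer at ${event.clientX}, ${event.clientY}`);
}, 100));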

Another important point that needs to be taken care of is event unbinding. Every time an event handler is added, it makes sense to consider the point at which it is no longer needed and to make sure that it stops firing then (Speeding up 2010). This avoids performance slumps through handlers that are bound multiple times or events firing when they are no longer needed. One good approach is the usage of once-off execution constructs like jQuery's .one(), or manually coding the unbind behavior at the right place and time (25 Techniques 2012). The example below shows the usage of .one(): an event binding on each p element that fires exactly once and unbinds itself afterwards. The implementation of this example can be seen in Figure 9.

Figure 9: The usage of .one()
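
A sketch of what Figure 9 plausibly shows, assuming jQuery's .one():

// Each paragraph reacts to its first click only; jQuery removes
// the handler automatically after it has run once.
$('p').one('click', function () {
  $(this).text('Fired once.');
});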

A last important part of the event binding optimization is to consider and understand the concept of event bubbling. A blog article by Alfa Jango describes the underlying difference between .bind(), .live(), and .delegate() events (Event Bubbling 2011).

Figure 10: Propagation of a click event through the DOM (Event Bubbling 2011)

Figure 10 shows what happens if e.g. a link is clicked: the click event is fired on the link element and triggers all functions bound to that element's click event. The event then propagates up the tree, to the next parent element and then to each further ancestor of the descendent element on which it was originally triggered (Event Bubbling 2011). Knowing this, the difference between the jQuery functions bind(), live() and delegate() can be explained:


$('a').bind('click', function() { alert('clicked!'); });

jQuery scans the entire document for all $('a') elements and binds the alert function to each of their click events.

$('a').live('click', function() { alert('clicked!'); });

jQuery binds the alert function to the $(document) element, together with the parameters 'click' and 'a'. Whenever an event is fired and bubbles up to the document, it checks whether both parameters match and, if so, executes the function.

$(document).delegate('a', 'click', function() { alert('clicked!'); });

Similar to .live(), but binds the handler to a specific element instead of the document root.

The article says that .delegate() is better than .live(). But why?

$(document).delegate('a', 'click', function() { blah() });

$('a').live('click', function() { blah() });

According to the blog entry (Event Bubbling 2011), delegate() is preferable for two reasons:

Speed: $('a') first scans the document for all a elements and saves them as objects; this consumes space and time and is therefore slower.

Flexibility: live() is syntactically linked to the object set of $('a') elements, although it actually acts at the $(document) level.

Efficient Iterations

The next topic is the implementation of efficient iterations. As seen in Figure 11 below, the execution time of string operations grows exponentially during long iterations (String Performance 2008), which shows why iterations are often the reason for performance flaws. Therefore, it always makes sense to get rid of unnecessary loops and of calls inside of loops (25 Techniques 2012).

Figure 11: Comparative String Performance Analysis (String Performance 2008)

One technique to avoid unnecessary loops is to use JavaScript indexing. Native JavaScript objects can be used to store quick-lookup indexes to other objects, working similarly to how database indexes work (25 Techniques 2012). As seen in Figure 12 below, this speeds up finding objects, e.g. by using the name as an index, and avoids long search iterations.

Figure 12: Finding Objects by JavaScript indexing
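
Figure 12 is an image; a small sketch of such an index (the data is made up):

// A plain object as quick-lookup index, similar to a database index
const employees = [
  { name: 'anna', role: 'admin' },
  { name: 'bob', role: 'editor' },
];

const byName = {};
for (const employee of employees) {
  byName[employee.name] = employee; // build the index once
}

// Later lookups are single hash accesses instead of search loops
console.log(byName['bob'].role); // "editor"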

Additionally, it is always a good idea to use the native JavaScript array functions such as push(), pop() and shift() when working with arrays. These functions have only a small overhead and are closely connected to their assembly-language counterparts (25 Techniques 2012).

The difference between reference and primitive value types also comes up in terms of efficient iterations. Primitive types such as String, Boolean or Number are copied when they are handed over to a function, whereas reference types, such as Arrays, Objects or Dates, are handed over as a lightweight reference. This should be kept in mind when a value is handed to a function running in an iteration: it is better to avoid frequent copying of primitive types and to pass lightweight references to such functions.


Syntax

A final option to consider for optimization is the usage of correct JavaScript syntax. Writing simple function patterns without being familiar with advanced native ECMAScript constructs can lead to inefficient code. It is recommendable to stay up to date and learn how to apply these constructs.

One easy example (25 Techniques 2012) is to prefer native, optimized constructs over self-written algorithms: functions such as Math.floor() or new Date().getTime() for timestamps don't need to be rewritten. The operator === instead of == provides an optimized, faster type-based comparison (25 Techniques 2012). Furthermore, the switch statement can be used instead of long if-then-else blocks to provide an advantage during compilation, to name just a few examples.
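
A few of these constructs side by side (the command value is a made-up input):

// Native constructs instead of hand-rolled helpers
const timestamp = new Date().getTime(); // or Date.now()
const rounded = Math.floor(3.7);

// === compares without type coercion and is therefore faster than ==
console.log(rounded === 3); // true

// switch instead of a long if-then-else chain
const command = 'start';
switch (command) {
  case 'start':
    console.log('starting');
    break;
  case 'stop':
    console.log('stopping');
    break;
  default:
    console.log('unknown command');
}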

3. Is WebAssembly replacing JavaScript?

On the 17th of June 2015, Brendan Eich (the creator of JavaScript) announced a new project that aims to make it easier to compile projects written in languages like C and C++ to run in browsers and other web-related JavaScript environments (Why we need wasm 2015). The developing team consists of members from Apple, Google, Microsoft, Mozilla and others, collected under the name of the W3C WebAssembly Community Group (Why we need wasm 2015). A blog article by Eric Elliot comments on these announcements and states that "the future of the web platform looks brighter than ever" (WebAssembly 2015).

But what exactly is WebAssembly? Elliot further explains that WebAssembly, often shortened to WASM, is a new language format. The code defines an AST (Abstract Syntax Tree) represented in a binary format that can also be edited and developed as readable text. The instruction format has been built to compile languages such as C, C++, Java, Python and Rust and allows deployment on the web and in server applications. Through WebAssembly it is now possible to run the respective code on the web at near-native speed (WASM replace JS 2018). But WASM is also an improvement to JavaScript: performance-critical code that needs to be optimized can be implemented in WASM and imported like a standard JavaScript module (WebAssembly 2015). The blog entry by Elliot additionally explains that WASM is an improvement for browsers as well: because browsers will be able to understand binary code that can be compressed to smaller files than currently used JavaScript files, smaller payloads would lead to faster delivery and make websites run faster (WebAssembly 2015).
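
Importing and calling a WASM module from JavaScript looks roughly like this (the file name and exported function are hypothetical):

// Fetch, compile and instantiate a WASM module in one streaming step,
// then call one of its exports like a normal JavaScript function.
WebAssembly.instantiateStreaming(fetch('matrix.wasm'))
  .then(({ instance }) => {
    console.log(instance.exports.multiply(3, 4));
  });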

But given all these advantages, does WebAssembly have the potential to replace JavaScript in the near future? And is WebAssembly compilation really so much faster? A blog article by Winston Chen compares the performance of WebAssembly vs. JavaScript and comes to a surprising result (WASM vs JS 2018). The experiment involved several implementations of matrix multiplication as a simple and computationally intensive way to compare JavaScript and C++ (compiled to WASM). In summary, JavaScript performed better than WebAssembly on smaller array sizes, while WebAssembly outperformed JavaScript on larger ones. Chen concludes that, based on his results, JavaScript is still the best option for most web applications. According to him, WebAssembly is "best used for computationally intense web applications, such as web games" (WASM vs JS 2018).

Among the different articles throughout the internet dealing with the question of whether WASM is going to replace JavaScript, it is not possible to find a precise answer or prediction. But Brendan Eich, the creator of JavaScript himself, gives a relatively clear answer to the question of whether he was trying to kill JavaScript (by developing WASM): "…We're not killing JavaScript. I don't think it's even possible to kill JavaScript" (Why we need wasm 2015). The JavaScript ecosystem currently supports all major browsers, and most developers write libraries and frameworks in it (e.g. React, Bootstrap or Angular) (WASM replace JS 2018). In order to overtake JavaScript, any competitor (such as WASM) would need to provide replacement options for all these libraries. Furthermore, it is not easily feasible to replace the existing code base of JavaScript-based projects. With growing popularity in computation-intensive projects such as browser-based games, WebAssembly may well decrease the market share of JavaScript, but it is not able to replace the already existing JS applications. It can be said that the near-native speed improvements of WebAssembly rather complement the existing JavaScript features (WASM replace JS 2018). By using both (e.g. by running WebAssembly alongside JS via the WASM JavaScript APIs), developers can benefit from the flexibility of JS and the speed advantages of WASM in combination. Wrapping this up, the creator was right: it is very unlikely that WASM will overtake JavaScript, which remains the single dominating language of the web.

4. References

Speaking JavaScript 2014, Axel Rauschmayer, Speaking JavaScript: An In-Depth Guide for Programmers, Chapter 4: How JavaScript Was Created [Online]
[Accessed 24 July 2019].

WebAssembly 2015, Eric Elliot, What is WebAssembly? [Online]
Available at:
[Accessed 28 July 2019]. 

Why we need wasm 2015, Eric Elliot, Why we Need WebAssembly [Online]
Available at:
[Accessed 28 July 2019]. 

WASM replace JS 2018, Vaibhav Shah, Will WebAssembly replace JavaScript? [Online]
Available at:
[Accessed 28 July 2019]. 

WASM vs JS 2018, Winston Chen, Performance Testing Web Assembly vs JavaScript [Online]
Available at:
[Accessed 27 July 2019]. 

25 Techniques 2012, Steven de Salas, 25 Techniques for Javascript Performance Optimization [Online]
Available at:
[Accessed 27 July 2019].

JS Optimization 2018, Addy Osmani, JavaScript Start-up Optimization [Online]
Available at:
[Accessed 27 July 2019].

JavaScript 2019, Chris David Mills (MDN web docs), What is JavaScript? [Online]
Available at:
[Accessed 27 July 2019].

Speeding up 2010, Yahoo Developer Network, Best Practices for Speeding Up Your Web Site [Online]
Available at:
[Accessed 27 July 2019].

String Performance 2008, Tom Trenka, String Performance: an Analysis [Online]
Available at:
[Accessed 27 July 2019].

Event Bubbling 2011, Alfa Jango, The Difference Between jQuery's .bind(), .live(), and .delegate() [Online]
Available at:
[Accessed 24 July 2019].