Cryptomining Malware – How criminals use your devices to get wealthy!

Has your computer ever been slow and you couldn’t tell what the problem was? Nowadays, illicit cryptomining can cause those performance problems. It dethroned ransomware as the top cybersecurity threat in 2018. (Webroot Threat Report 2018) A simple website visit can start the mining process, either as JavaScript running in the background of the browser or as malware accidentally installed on your computer. These two modes of illicit cryptomining are called browser-based cryptojacking and binary-based cryptomining. In both cases the combined hash rates can rival those of medium-sized mining farms. This blog article gives an overview of binary-based cryptomining malware, where the mining process is embedded in the payload of a malware. Criminals hide it as well as possible, which makes it hard to detect, in order to secure a massive income. All the tools they need to start a malicious cryptomining business are easy to obtain in underground markets. For example, malware can be purchased for a few dollars (e.g. the average cost of an encrypted miner for Monero (XMR) is $35). We will also take a quick look at how companies legally use cryptomining to monetize web content as an alternative business model.

Source: “A First Look at the Crypto-Mining Malware
Ecosystem: A Decade of Unrestricted Wealth”
by S.Pastrana and G.Suarez-Tangil

Basics

In this part we will look at the basics required for this article.

Mining Pools

Since mining cryptocurrencies requires more and more computational power, mining pools have become popular. A mining pool is a group of miners who pool their resources to mine a cryptocurrency and share the reward for every calculated block. Mining pools have advantages and disadvantages. The main advantage is a more stable income, due to better chances of solving the cryptographic puzzle for the next block. On the other hand, miners have to share their rewards, which can be seen as a disadvantage, but without enough resources the expected income of mining alone is lower. (Mining Pools and How They Work 2019)
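The proportional reward-sharing idea can be sketched in a few lines of Python. This is a simplified illustration (real pools use schemes such as PPLNS with extra bookkeeping, and the fee rate and share counts below are made-up values):

```python
def pool_payout(block_reward: float, fee_rate: float, shares_by_miner: dict) -> dict:
    """Split a block reward proportionally to submitted shares, after the pool fee."""
    total_shares = sum(shares_by_miner.values())
    distributable = block_reward * (1 - fee_rate)
    return {miner: distributable * s / total_shares
            for miner, s in shares_by_miner.items()}

# Example: a 2 XMR block reward, 1% pool fee, three miners with unequal work
payouts = pool_payout(2.0, 0.01, {"alice": 700, "bob": 200, "carol": 100})
```

Each miner earns in proportion to the work submitted, which is exactly why pooled income is steadier than solo mining: small but frequent payouts instead of rare full block rewards.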

Cryptocurrency Wallets

Cryptocurrency wallets are not exactly like the wallets we know from daily life. Users can monitor their balance, send money and execute other operations. A virtual wallet contains a private and a public key to perform these operations. The keys are used to access the public blockchain address and to confirm transactions. The private key is used to sign the wallet owner’s transactions, while the public key is comparable to an International Bank Account Number. For example, if someone wants to transfer money to your wallet, this person needs your public key, but you don’t receive actual money on an account: the transaction exists only as a transaction record on the blockchain and a balance change in your cryptocurrency wallet. It is important to know that the private key is unique, and if it is lost the wallet is no longer accessible to its owner. (What is a wallet 2019)
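The key-and-address relationship can be illustrated with a deliberately simplified sketch. Note the assumptions: real cryptocurrencies derive the public key from the private key with elliptic-curve math and use coin-specific address encodings; here a plain hash stands in for both, purely to show that the address is derived from the public key and that the private key must stay secret:

```python
import hashlib
import secrets

def make_address(public_key: bytes) -> str:
    # Simplified: real coins use coin-specific hashing and encoding schemes
    return hashlib.sha256(public_key).hexdigest()[:40]

private_key = secrets.token_bytes(32)               # must never be shared
public_key = hashlib.sha256(private_key).digest()   # placeholder for EC derivation
address = make_address(public_key)                  # safe to publish
```

Anyone who knows `address` can send funds to it, but only the holder of `private_key` can sign transactions spending them, which is why a lost private key means a permanently inaccessible wallet.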

Binary-based Mining

Binary-based mining is the common way to mine cryptocurrency: users install a program or application on a device to mine. This is the legitimate variant, as the user receives the rewards for the work performed. It becomes illicit when a malicious actor gains access to the user’s computing power through malware and mines for their own benefit. The mining software is installed on the computer and drains the victim’s CPU performance, while the reward payments go to the attacker’s wallet.

Browser-based Mining

In addition to binary-based mining, we will have a brief look at browser-based cryptojacking. Illicit browser-based mining has risen continually in the past years. As mentioned in the introduction, it is very easy to run into: as long as a user navigates a website and uses its services, the mining process is running. The victim’s browser executes scripts that perform the mining. It is only illicit if the user is not aware of it; some websites use this method to generate money legally for maintenance, as donations or as a substitute for advertising. For example, the UNICEF organization in Australia used this method to collect donations. (UNICEF Donation 2019)

Source: thehopepage.org

UNICEF notifies users about the procedure and starts the mining operation on a user’s device only after they agree to the terms, which makes the activity legitimate.

Key Enablers of Illicit Cryptomining

The key factors enabling malicious actors were analyzed by the Cyber Threat Alliance in 2018 (The Illicit Cryptocurrency Mining Threat 2018). Let’s have a look at these factors:

  • Mining has become more profitable as the value of cryptocurrencies has increased.
  • Cryptocurrencies with anonymous transactions, such as Monero, can be mined with personal computers or IoT devices, creating a large attack surface.
  • Malware and browser-based exploits are easy to use and easily available.
  • The number of mining pools is increasing, facilitating the pooling of resources and providing a scalable method for mining.
  • Enterprises and individuals with inadequate security measures are targets for malicious actors and are unaware of the potential impact on their infrastructure and operations.

Most popular Cryptocurrency

Since the popularity of Bitcoin for illicit cryptomining dropped over time, because of the increasing effort needed to mine a single coin, underground economies focus on other cryptocurrencies like Monero (XMR). Monero is the most popular cryptocurrency for illicit cryptomining because its ring signatures and decoys keep transactions completely untraceable. (Webroot Threat Report 2019) Researchers found that 4.32% of the circulating XMR was mined with cryptomining malware, with an estimated revenue of nearly 57 million USD. (First Look 2019)
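The computational effort behind these revenues comes from the proof-of-work puzzle: miners search for a nonce whose hash meets a difficulty target. A toy illustration in Python (real Monero uses the RandomX algorithm, not plain SHA-256, and real difficulty targets are far higher):

```python
import hashlib

def mine(block_header: bytes, difficulty: int) -> int:
    """Search for a nonce whose hash starts with `difficulty` zero hex digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_header + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

# Roughly 16**4 = 65,536 hash attempts on average at difficulty 4
nonce = mine(b"example-header", 4)
```

Because the only way to find a valid nonce is brute-force hashing, every infected machine translates directly into hash rate, and hash rate into income for the attacker.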

Damage caused by Cryptomining

Cryptomining can cause serious damage in different ways. It drains CPU capacity, which could easily be noticed while using an infected computer, but criminals use various methods to evade detection of the mining process; these methods are explained later in the article. Another major damage is the increased power consumption of the CPU or GPU, which causes high electricity bills. The sustained load on computer components during mining also wears the hardware out more quickly.
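The electricity damage is straightforward to estimate. The wattage, runtime and price below are illustrative assumptions, not figures from the cited reports:

```python
def electricity_cost(extra_watts: float, hours: float, price_per_kwh: float) -> float:
    """Cost of the additional power drawn by a mining process."""
    return extra_watts / 1000 * hours * price_per_kwh

# e.g. a miner adding 150 W of load, running around the clock for 30 days
# at an assumed 0.30 USD/kWh
monthly = electricity_cost(150, 24 * 30, 0.30)  # ~32 USD per month
```

Multiplied across a fleet of infected machines, the victims collectively pay a sizeable power bill so that the attacker can collect the mining rewards.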

How Criminals spread the Malware

The common approach to spreading the malware is to host it on public cloud storage sites such as Amazon Web Services (AWS), Dropbox, Google Drive or GitHub. Criminals often hide the malware inside stock mining software, for instance xmrig or xmr-stak, to gain access to a machine’s resources. Another approach is the use of botnets, which are offered as pay-per-install (PPI) services on deep web markets. (First Look 2019)

Source: “A First Look at the Crypto-Mining Malware
Ecosystem: A Decade of Unrestricted Wealth”
by S.Pastrana and G.Suarez-Tangil

A further, and probably the oldest, approach to delivering these executables is malicious spam or exploit kits sent by email. The malware infects the computer once the attachment is opened. After the machine has installed the malicious mining software, it starts to mine cryptocurrency. In some cases the malware also scans the network for more accessible devices and tries to infiltrate them with an exploit.

Mechanisms to evade Detection

As mentioned earlier, most cryptomining malware makes use of stealth techniques: the harder it is to detect, the longer the malware can utilize the computing power. The method called idle mining starts the mining process only when the computer has been idle for a certain time. For example, if you leave your computer running unattended for a while, the mining process starts and runs as long as there is no interaction with the computer; after an interaction the process shuts down and the performance is available to the user again. The authors of the malware take care in many ways to evade detection. Some cryptomining malware has different modes for desktop and laptop machines to get the best computing power out of the infected device; on a laptop, for instance, the malware takes only as much performance as it can without spinning up the fans. Another technique is execution-stalling code, which makes the process almost invisible while the Task Manager is running: as soon as the Task Manager opens, the mining process throttles its CPU utilization. This stalling can be bypassed by using other process-monitoring applications. Furthermore, cryptomining campaigns use domain aliases (e.g. CNAME records) to prevent blacklisting of their mining pools.
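The decision logic behind idle mining, laptop mode and execution stalling can be sketched as follows. This is a simplified illustration of the behaviour described above, with made-up thresholds, not functional malware:

```python
def target_cpu_fraction(seconds_idle: int, monitor_running: bool, on_battery: bool) -> float:
    """How much CPU a stealthy miner allows itself to use at a given moment."""
    if monitor_running:        # execution stalling: hide while Task Manager is open
        return 0.0
    if seconds_idle < 300:     # idle mining: wait until the user has been away a while
        return 0.0
    if on_battery:             # laptop mode: stay below the fan/noise threshold
        return 0.2
    return 1.0                 # desktop, user away, no monitor: full throttle
```

Each branch trades mining income for stealth, which is exactly why these infections can persist for months before anyone notices.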

Source: coindesk.com [Accessed 4. September 2019]

The image above shows how the execution stalling of the malicious miner called Norman works. Norman is an XMRig-based crypto-miner that avoids detection: when the Task Manager opens, the malware stops operating, and it re-injects itself as soon as the Task Manager is closed.

Source: “THE ILLICIT CRYPTOCURRENCY MINING THREAT” by the Cyber Threat Alliance

The figure above shows another stealth technique, described by Palo Alto Networks: the cryptomining malware uses only 20 percent of the machine’s CPU. The benefit of this method is that the malware persists longer on the infected machine and avoids detection, at the cost of lower mining performance.

Campaigns

Source: “A First Look at the Crypto-Mining Malware
Ecosystem: A Decade of Unrestricted Wealth”
by S.Pastrana and G.Suarez-Tangil

Looking at illicit cryptomining campaigns, we see a small number of actors monopolizing the cryptomining malware ecosystem. It is common to see campaigns mining in various pools; the most popular are crypto-pool, dwarfpool and minexmr, and there are successful campaigns that have been running for over five years without being detected. In the next part we will look at the most profitable campaigns that were still active in 2018, analysed by Sergio Pastrana of the Universidad Carlos III de Madrid and Guillermo Suarez-Tangil of King’s College London, on whose paper this article is based.

The Freebuf Campaign

The Freebuf campaign has probably been active since 2016 and has mined over 163K XMR (approx. 18 million USD). It is named “Freebuf” after its main domain, xt.freebuf.info. Statistics of two banned wallets show that they were connected from 5,352 and 8,009 different IPs and had mined 362.6 and 1,283.7 XMR respectively. The campaign used 7 wallets, connected to the mining pools minexmr and crypto-pool through domain aliases. After the two wallets were banned, the operator switched to another mining pool.

Source: “A First Look at the Crypto-Mining Malware
Ecosystem: A Decade of Unrestricted Wealth”
by S.Pastrana and G.Suarez-Tangil

In the figure above we can see the structure of the Freebuf campaign. The green nodes are malware miners and are connected to wallets shown as blue nodes. Gray and pink nodes represent the infrastructure of the campaign: gray nodes are the contacted domain servers and pink nodes the malware hosts. The red and orange nodes symbolize additional malware. As mentioned earlier, the campaign uses 7 wallets, which we can see in this graph. All malware miners are connected to one of the wallets and linked to one mining pool, which is hidden behind a CNAME alias domain. We can see three different domain servers in this graph: xt.freebuf.info, x.alibuf.com and xmr.honker.info. All of them have been aliases of commonly used mining pools: xt.freebuf.info and xmr.honker.info are aliases for minexmr, and x.alibuf.com for crypto-pool.

The USA-138 Campaign

The USA-138 campaign has mined at least 6,709 XMR (approx. 651K USD) using 5 wallets. An interesting point about this campaign is that it also mined the cryptocurrency Electroneum (ETN), earning 314.18 ETN in late 2018. This was worth less than 5 USD, but it was a speculative bet on the currency’s future value.

Source: “A First Look at the Crypto-Mining Malware
Ecosystem: A Decade of Unrestricted Wealth”
by S.Pastrana and G.Suarez-Tangil

The figure above shows the structure of the USA-138 campaign. The meaning of the nodes is the same as described in the Freebuf campaign chapter.

Countermeasures

The simplest way to prevent cryptomining malware is to keep your antivirus updated and avoid downloading tools from suspicious websites. The operating system should also be kept up to date to close vulnerabilities and prevent injections. Another possibility is to monitor network transfers and web proxies to detect attacks. If you suspect that your computer is slower than normal and illicit cryptomining might be draining the CPU/GPU, it is useful to monitor activity and check whether any suspicious services are running. (Cryptominer Protection 2019)
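One simple heuristic for such monitoring flags processes whose CPU load is both sustained and suspiciously flat, since a throttled miner (e.g. pinned at ~20%) produces an unusually constant load curve, unlike bursty interactive applications. A minimal sketch, with made-up window and threshold values (real endpoint-protection tools combine many more signals):

```python
def looks_like_mining(cpu_samples: list, window: int = 60, threshold: float = 0.15) -> bool:
    """Flag a process whose CPU usage is high and nearly constant over a long window."""
    if len(cpu_samples) < window:
        return False                 # not enough history to judge
    recent = cpu_samples[-window:]
    mean = sum(recent) / window
    spread = max(recent) - min(recent)
    return mean > threshold and spread < 0.05

# A process sitting flat at ~20% CPU across two hours of minute-samples
flat_load = [0.20] * 120
suspicious = looks_like_mining(flat_load)
```

A bursty process like a browser would trip the mean check occasionally but rarely the flatness check, so false positives stay manageable even with this crude rule.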

The most successful countermeasure against illicit cryptomining was the change of the Monero PoW (proof-of-work) algorithm in 2018, which stopped approximately 73% to 90% of the campaigns because their malware could not adjust to the change.

Conclusion

The number of cyberattacks with cryptomining malware is constantly rising, and enterprises and individuals are mostly unaware of the situation. The malware causes serious performance problems and hardware deterioration, and the attackers are getting ever more creative with stealth techniques, which makes them hard to detect. Cryptocurrencies and cryptomining give attackers an almost anonymous platform to generate money on victims’ devices. That makes it unlike ransomware, where the victim is aware of the situation and can deal with it: cryptomining attacks are mostly silent, and without awareness of the problem they will go on. As a common user, you can monitor your CPU/GPU performance for suspicious drops and keep your antivirus software and operating system updated.

References

The (in)security about speaker legitimacy detection

For most of us, voices are a crucial part of our everyday communication. Whether we talk to other people over the phone or in real life, different voices let us distinguish our counterparts, convey different meanings with the same words, and – maybe most importantly – connect the voice we hear to the memory of a person we know, more or less.

In relationships lies trust – and whenever we recognize something that’s familiar or well-known to us, we automatically open up to it. It happens every time we make a phone call or receive a voice message on WhatsApp. Once we recognize the voice, we instantly connect the spoken words to that person and – in case of a friend’s or partner’s voice – establish our connection of trust.

But what if that trusty connection could be compromised? What if a voice could be synthesized by a third person in a way that makes it indistinguishable from the original one?

There are some very interesting studies that explore the possibility of “speech synthesis” in the matter of “speaker legitimacy” – the art of determining the authenticity of a voice heard. By the way, that doesn’t only affect us as humans. There are a number of systems that use a voice to recognize a person in order to grant access to sensitive data or controls – think about your digital assistant on your smart phone, for example.

Today, there are several ways to synthesize a voice – purely artificially or based on human input. To give you a quick overview: There is the articulatory approach, where basically the human speech apparatus is mimicked in order to modify a sound signal through different parameters, like the position of the tongue, lips or jaw. This approach is by far the most difficult to achieve due to the vast number of sensor measurements that have to be taken in several iterations of a speaker analysis. To this day, a complete speech synthesis system based solely on this approach doesn’t exist.

Another approach is the signal modelling approach. Whereas the previous approach asked “how does a human create the signal”, this one asks “how does the signal actually sound” – so the acoustic signal itself is modified here. This is basically done by applying several filters with specific settings in a specific order. The best results are mostly achieved with a convolutional neural network (CNN), but many speech signals are necessary to train the engine, and it comes with high computational cost.

The by far most successful way to create a realistic-sounding voice is the approach of “concatenation”. Here, fitting segments of an existing, recorded (“real”) voice are taken and put together to create syllables, words and eventually whole sentences. Think about your GPS navigation system: it would probably take forever to record all the street names that exist in your country or language region. But with just the right set of syllables in different pitches, they can be concatenated so that every possible combination of street names can be pronounced in a realistic way.
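The core idea of concatenative synthesis can be shown with a toy lookup of pre-recorded units. The unit names and sample values below are entirely made up for illustration; real systems store thousands of units and smooth the joins between them:

```python
# Toy unit inventory: syllable -> recorded audio samples (made-up values)
units = {
    "main": [0.1, 0.3, 0.2],
    "street": [0.4, 0.1],
}

def synthesize(syllables: list) -> list:
    """Concatenate pre-recorded units into one utterance."""
    samples = []
    for s in syllables:
        samples.extend(units[s])
    return samples

utterance = synthesize(["main", "street"])
```

With a rich enough inventory, any street name, or any sentence, can be assembled from units of a real recorded voice, which is precisely what makes this approach sound so natural.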

But how can all of this be used to attack me and my phone calls?

This rather shocking example is based on a study by D. Mukhopadhyay, M. Shirvanian, and N. Saxena. They tried to impersonate a voice using a threat model with three steps:
First, samples of the voice of a “target victim” are collected. This can be done in numerous ways: wiretapping phone calls, recording the victim in their surroundings, or simply using voice samples shared on social media.
In a second step, the attacker speaks the same utterances as the victim into a voice morphing engine and thereby obtains a model of the victim’s voice. The engine now essentially knows “what was said” and “how it sounded”. The attacker can then speak any utterance, and the morphing engine applies the previously built model to make the attacker’s voice sound like the target victim.
Note the term “voice morphing”: it is a technique in which a source voice is modified to sound like a desired target voice by applying the spectral differences between the two voices. This process makes use of the signal modelling and concatenation approaches mentioned before.
The image below illustrates the described threat model:

Source: “All Your Voices Are Belong to Us” by D. Mukhopadhyay et al.

If you want to listen into a short sample of the result of a voice morphing software, watch this little video.

As shown in Phase III of the threat model, the fake utterance of Bob’s voice will be used to attack both a machine-based, as well as a human-based legitimacy detection capability.

The machine-based setup was targeting the “Bob SPEAR Speaker Verification System”, a Python-based open source tool for biometric recognition. Two different speech datasets (Voxforge – short 5 second samples in high quality, and MOBIO – longer samples of 7-30 seconds, recorded with basic laptop microphone) were used to train the engine, which was in this case the “Festvox” conversion system.
The results of this attack system were startling:

Source: “All Your Voices Are Belong to Us” by D. Mukhopadhyay et al.

This data shows how the system responded to the original voices as well as the faked ones. To clarify the overall accuracy of the system, for each dataset a “different speaker attack” as well as a “conversion attack” was made. The different speaker attack means that the voice used to authenticate was deliberately a completely different one; the conversion attack is the attacker’s voice morphed into the original speaker’s.
The “False Acceptance Rate” (FAR) shows that in both conversion attack scenarios the system granted access to more than 50% of the voices played back – enough to say that the system fails significantly against a voice conversion attack. It also shows that the quality of the conversion samples does make a difference in the results.
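The FAR metric itself is simply the fraction of impostor attempts the verifier wrongly accepts (the counts in the example are illustrative, not the study’s figures):

```python
def false_acceptance_rate(accepted_impostors: int, impostor_attempts: int) -> float:
    """Fraction of impostor (attack) attempts that the verifier wrongly accepts."""
    return accepted_impostors / impostor_attempts

# e.g. 55 of 100 morphed-voice attempts accepted -> FAR of 0.55
far = false_acceptance_rate(55, 100)
```

A FAR above 0.5 means an attacker with a morphed voice succeeds more often than not, which is why the study’s results are described as a significant failure of the verification system.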

Having tested machine-based speaker verification, the obvious next question is how human-based verification performs.
For this setup, online workers from Amazon Mechanical Turk (M-Turk, a crowdsourcing marketplace) were recruited to give their voices to build a model for the attack. The setup consisted of two parts: A “famous speaker study”, and a “briefly familiar speaker study”. The former aimed for an attacker to mimic the voice of a popular celebrity – one that many participants knew and would be able to recognize more easily. For that scenario, the voices of Morgan Freeman and Oprah Winfrey were used by collecting samples from some of their speeches. The latter intended to re-create the situation where somebody received a call from a person he or she met just briefly before – like at a conference. The participants from both studies conducted the tests and were asked, after listening to each of the vocal samples, to state whether the voice they just heard belonged to one of the famous speakers – or in the second case, to one of the briefly familiar speakers. The results from both of these studies are shown below:

Source: “All Your Voices Are Belong to Us” by D. Mukhopadhyay et al.

They show that the participants were a bit more successful in detecting a “different speaker” (an unknown voice) than in verifying the original one – but the rate of successfully detecting a conversion attack was around 50%, which is not a comforting value. The “not sure” answer that participants could give shows that they were confused. If this scenario happened in real life, it is to be expected that this confusion would severely affect a person’s ability to verify a speaker’s identity.
With the briefly familiar speakers, the success rate of detecting a conversion attack was about 47%, which means that also over 50% of the users could not say for sure if an attack was present.

Let’s recap for a moment – we’ve seen that with modern means of technology it is rather easy and accessible to mimic a voice and trick people into believing that it is actually the real voice of a person familiar to them – with a possible success rate over 50%, depending on the quality of the samples used.

But why can we be tricked so easily? Isn’t there a way to sharpen our subconscious decision-making when it comes to speaker legitimacy detection?

Well, relating to the first question, another study by A. Neupane, N. Saxena, L. Hirshfield, and S. Bratt tried to find a biological relation to the rather poor test results.
In their paper – that describes a brain study based on the same tests from the studies described before – they try to find that relation.
Why a brain study? Previous studies have found differences in neural activation in the human brain in similar areas when users were viewing real and counterfeit items like websites and Rembrandt paintings.
In their study, Neupane and his team tried to confirm that some specific “and other relevant brain areas might be activated differently when users are listening to the original and fake voices of a speaker”.

To investigate this, they conducted the same tests but monitored the users’ brain activity using a neuroimaging technique called fNIRS (Functional Near-Infrared Spectroscopy), by which activity in neural areas of interest can be inferred from changes in oxygenated (oxy-Hb) and deoxygenated (deoxy-Hb) hemoglobin.
There are basically only a few neural activation areas of interest for this kind of scenarios. They are listed below:

Source: “The Crux of Voice (In)Security:
A Brain Study of Speaker Legitimacy Detection” by A. Neupane et al.

For brevity’s sake, only the applicable abbreviations are used from here on.

You can see the three test runs where first the Original Speaker Attack is perceived, the second frame shows the Morphed Voice Attack and the third one the Different Speaker Attack. During the tests, the active regions around DLPFC, FPA and STG (working memory and auditory processing) show that the participants were actively trying to decide if the voice they heard was real or fake.

Following their hypothesis, the team tried to prove that there should be a difference in the Orbitofrontal Area (OFA), where the decision making and trust processes take place, especially when comparing the original speaker vs. the morphed voice.
But surprisingly, there were no statistically significant differences! This suggests that the morphed voices sounded close enough to the originals that the human brain raised no skepticism. Further, higher activation in the FPA and MTG was observed when participants were listening to the voice of a familiar speaker compared to an unfamiliar one, which illustrates that the human brain processes familiar voices differently from unfamiliar ones.

To sum up, here’s what we learned from all of that:

  • Human voice authenticity can easily be breached
  • People seem to detect attacks against familiar celebrities’ voices better than against briefly familiar voices, but an uncertainty of about 50% remains
  • The brain study surprisingly shows that even though users put considerable effort in making real vs. fake decisions, no significant difference is found in neural areas of interest with original vs. morphed voices

Still wonder what that means for you?

Well, first, we should all be aware of the fact that a vocal impersonation of individuals is indeed possible, even with reasonable effort. That could target politicians as well as family members, friends or employees of your bank. Voice phishing via phone becomes a real threat, especially when an attacker is able to perform an attack where his or her voice can be morphed “on the fly” (without prior rendering or preparation of spoken statements).

It is also important to mention that the described studies were conducted with young and healthy participants. If older people or people with hearing impairments become victims of such attacks, they might perform even worse than the study participants.
Finally, voice morphing technologies will probably advance faster in time than our brains evolve – our very own “biological weakness” remains.

Now, isn’t there anything we can do about that?

Probably the most important thing about all of these findings is to become aware of the possibilities of such attacks. It helps not to rely only on information given to you via phone, especially when it comes to handling sensitive information or data.
With social media becoming a growing part of our lives, we should be wary about posting our audio-visual life online, especially in a public manner where samples of our voices become available to everyone.

A tip against voice phishing is to never call back to provided phone numbers. If the caller claims to be from your bank – look up the phone number online, it might be a much safer option.

In conclusion, voice is not the only means of biometric identification that contains flaws, even though in our own perception it seems unique. Regardless, it should never be used as the sole factor to ascertain a person’s identity.
But even with security through strongly encrypted private keys, at some point in human interaction the link between machine and human needs to happen – and that is where we will continue to find weak spots.

References

  • “All Your Voices Are Belong to Us: Stealing Voices to Fool Humans and Machines” by D. Mukhopadhyay, M. Shirvanian, N. Saxena
  • “The Crux of Voice (In)Security: A Brain Study of Speaker Legitimacy Detection” by A. Neupane, N. Saxena, L. Hirshfield, S. E. Bratt
  • “Sprachverarbeitung: Grundlagen und Methoden der Sprachsynthese und Spracherkennung” by B. Pfister and T. Kaufmann
  • https://krebsonsecurity.com/2018/10/voice-phishing-scams-are-getting-more-clever/
  • http://www.koreaherald.com/view.php?ud=20190317000115

Social Engineering – Learn From the Best!

Kevin David Mitnick, Social Engineering, Hacker, Manipulation

It isn’t always necessary to attack by technical means to collect information or to penetrate a system. In many cases, it’s more effective to exploit the human risk factor. To successfully protect yourself and your company from social engineering, you have to understand how a social engineer works. And the best way to do this is by listening to the world’s most wanted hacker, Kevin David Mitnick. Nowadays, the former social engineering hacker uses his expert knowledge to advise companies on how to protect themselves against such attacks. This blog entry is based on his bestseller “The Art of Deception: Controlling the Human Element of Security”. It sheds light on the various techniques of social engineering and enumerates several ways in which you can arm yourself against them.

Continue reading

Security and Usability: How to design secure systems people can use.

Security has gained a high level of importance due to rising technological standards. Unfortunately, this leads to a conflict with usability: security makes operations harder, whereas usability is supposed to make them easier. Many people are convinced that there is a tradeoff between the two, resulting in either secure systems that are not usable or usable systems that are not secure. Though developers still struggle with this tradeoff, that point of view is somewhat outdated: there are solutions that help to design secure systems people can use.

Continue reading

Convenient internet voting using blockchain technology

src: https://www.extremetech.com/wp-content/uploads/2018/10/511382-how-to-register-to-vote-online.png

Within this century, the use of digital technology has probably never been as widespread and as convenient as it is today. People use the internet to access encyclopedias, look up food recipes and share pictures of their pets. It doesn’t matter whether you are at home, standing in an aisle at the grocery store or even flying on an airplane: our devices provide unlimited access to modern technology and have somewhat changed the way we used to do things. For instance, it is now a matter of minutes, sometimes even seconds, to buy products online or quickly check the balance of a banking account, whereas those things used to require at least leaving the house for some time. In some cases, we have even narrowed our involvement in buying products down to simply pushing a button. In comparison to the older methods for those actions, this seems like a huge improvement. And it is. But maybe not in all regards.

Continue reading

Multiplayer TypeScript Application run on AWS Services

Daniel Knizia dk100@hdm-stuttgart.de
Benjamin Janzen bj009@hdm-stuttgart.de

The project

CatchMe is a location-based multiplayer game for mobile devices. The idea stems from the classic board game Scotland Yard, basically a modern version of hide & seek. You play outside in a group of up to 5 players, where one of the players is chosen as the “hunted”. His goal is to escape the other players. Through the app he can constantly see the movement of his pursuers, while the other players can only see him at set intervals.


The backend of the game builds on Colyseus, a multiplayer game server for Node.js, which we have adjusted to our needs. There’s a lobby, from which the players can connect into a room with other players and start the game.
Continue reading

How does Tor work?

Written by Tim Tenckhoff – tt031 | Computer Science and Media

1. Introduction

The mysterious dark part of the internet – hidden in the depths of the world wide web – is well known as a lawless space for shady online drug deals or other criminal activities. But in times of continuous tracking on the internet, personalized advertising and digital censorship by governments, the (almost) invisible part of the web also promises to bring back lost anonymity and privacy. This blogpost aims to shed light into the dark corners of the deep web and primarily explains how Tor works.

Reference: Giphy, If Google was a person: Deep Web
  1. Introduction
  2. The Deep Web
    2.1 What is the Tor Browser?
  3. The Tor-Network
    3.1 Content
    3.2 Accessing the Network
    3.3 Onion Routing – How Does Tor Work?
  4. Conclusion – Weaknesses
  5. References

2. The Deep Web

So, what exactly is the deep web? To explain this, it makes sense to cast a glance at the overall picture. The internet as most people know it forms only a minimal proportion of the overall 7.9 zettabytes (1 ZB = 1000⁷ bytes = 10²¹ bytes, i.e. one trillion gigabytes) of data available online (Hidden Internet 2018). This huge amount of data can be separated into three parts:

Separation of the worldwide web, Reference: (Search Engines 2019)

As seen in the picture above, we are accessing only the 4% that is indexed by search engines like Google or Bing. The remaining 96% are protected by passwords, hidden behind paywalls or can only be accessed via special tools (Hidden Internet 2018). But what separates the hidden parts into Deep Web and Dark Web by definition?

The Deep Web fundamentally refers to data that is not indexed by any standard search engine such as Google or Yahoo. This includes all web pages that search engines cannot find, such as user databases, registration-required web forums, webmail pages, and pages behind paywalls. The Deep Web can, of course, contain content that is totally legal (e.g. governmental records). The Dark Web is a small part of the Deep Web: a collection of websites that only exists on encrypted networks and cannot be reached by regular browsers (such as Chrome, Firefox, Internet Explorer, etc.). As a consequence, this area is a well-suited scene for cybercrime. Accessing these dark websites requires the Tor Browser.

…hidden crime bazaars that can only be accessed through special software that obscures one’s true location online.

– Brian Krebs, Reference: (Krebs On Security 2016)

2. 1 What is the Tor Browser?

The pre-alpha version of the Tor Browser was released in September 2002 (Onion Pre Alpha 2002), and the Tor Project, the organization maintaining Tor, was started in 2006. The name Tor is an abbreviation of The onion router. The underlying Onion Routing protocol was initially developed by the US Navy in the mid-1990s at the U.S. Naval Research Laboratory (Anonymous Connections 1990). The protocol describes a technique for anonymous communication over a public network: each message is encapsulated in several layers of encryption and Internet traffic is redirected through a free, worldwide overlay network. It is called onion routing because these nested layers of encryption resemble the layers of an onion. Developed as free and open-source software for enabling anonymous communication, the Tor Browser still follows the intended use today: protecting personal privacy and communication by protecting internet activities from being monitored.

With the Tor Browser, practically anyone can get access to The Onion Router (Tor) network by downloading and running the software. The browser does not need to be installed on the system and can be unpacked and transported as portable software via USB stick (Tor Browser 2019). As soon as this is done, the browser is able to connect to the Tor network, a network of many servers, the Tor nodes. While surfing, the traffic is encrypted by each of these Tor nodes. Only at the last server in the chain of nodes, the so-called exit node, is the data stream decrypted again and routed normally via the Internet to the target server, whose address is entered in the address bar of the Tor Browser. In concrete terms, the Tor Browser first downloads a list of all available Tor servers and then defines a random route from server to server for the data traffic, which is called onion routing as said before. Each route consists of a total of three Tor nodes, with the last server being the Tor exit node (Tor Browser 2019).

Connection of a Web-Client to Server via Tor Nodes, Reference: (Hidden Internet 2018)

Because traffic to an onion service runs across multiple servers of the Tor network, the traces that users usually leave while surfing with a normal Internet browser or exchanging data such as email and messenger messages become blurred. Even though the payload of normal Internet traffic is encrypted, e.g. via https, the header containing routing source, destination, size, timing etc. can easily be spied on by attackers or Internet providers. Onion routing, in contrast, also obscures the IP address of Tor users and keeps their computer location anonymous. To continuously disguise the data route, a new route through the Tor network is chosen every ten minutes (Tor Browser 2019). The exact functionality of the underlying encryption is described later in section Onion Routing – How Does Tor Work?.

3. The Tor-Network

For those concerned about the privacy of their digital communications in times of large-scale surveillance, the Tor network provides the optimal obfuscation. The following section explains which content can be found on websites hidden in the dark web, how the multi-layered encryption works in detail, and what kind of anonymity it actually offers.

3.1 Content

Reference: Giphy: SILK ROAD GIF BY ANTHONY ANTONELLIS

Most of the content related to the darknet involves nefarious or illegal activity. Given the anonymity it provides, there are many criminals trying to take advantage of it. This results in a large number of darknet sites revolving around drugs, darknet markets (sites for the purchase and sale of services and goods), and fraud. Some examples found within minutes using the Tor Browser are listed in the following:

  • Drug or other illegal substance dealers: Darknet markets (black markets) allow the anonymous purchase and sale of drugs and other illegal or controlled substances such as prescription pharmaceuticals. Almost everything can be found here, quite simply in exchange for bitcoins.
  • Hackers: Individuals or groups, looking for ways to bypass and exploit security measures for their personal benefit or out of anger for a company or action (Krebs On Security 2016), communicate and collaborate with other hackers in forums, share security attacks (use a bug or vulnerability to gain access to software, hardware, data, etc.) and brag about attacks. Some hackers offer their individual service in exchange for bitcoins.
  • Terrorist organizations use the network for anonymous Internet access, recruitment, information exchange and organisation (What is the darknet?).
  • Counterfeiters: Offer document forgeries and currency imitations via the darknet.
  • Merchants of stolen information: Offer e.g. credit card numbers and other personally identifiable information that can be ordered for theft and fraud activities.
  • Weapon dealers: Some dark markets allow the anonymous, illegal purchase and sale of weapons.
  • Gamblers play or connect in the darknet to bypass their local gambling laws.
  • Murderers/assassins: Despite ongoing discussions about whether these services are real, set up by law enforcement or just fictitious websites, some dark websites exist that offer murder for hire.
  • Providers of illegal explicit material e.g. child pornography: We will not go into detail here.
Screenshot of the infamous Silk Road (platform for selling illegal drugs, shutdown by the FBI in October 2013) , Reference: (Meet Darknet 2013)

But the same anonymity also has a bright side: freedom of expression. It offers the ability to speak freely without fear of persecution in countries where this is not a fundamental right. According to the Tor Project, hidden services allowed regime dissidents in Lebanon, Mauritania and the Arab Spring to host blogs in countries where the exchange of those ideas would be punished (Meet Darknet 2013). Some other use cases are:

  • To use it as a censorship circumvention tool, to reach otherwise blocked content (in countries without free access to information)
  • Socially sensitive communication: Chat rooms and web forums where rape and abuse survivors or people with illnesses can communicate freely, without being afraid of being judged.

A further example of that is The New Yorker’s Strongbox, which allows whistleblowers to upload documents and offers a way to communicate anonymously with the magazine (Meet Darknet 2013).

3.2 Accessing the Network

The hidden sites of the dark web can be accessed via special onion-domains. These addresses are not part of the normal DNS, but can be interpreted by the Tor Browser if they are sent into the network through a proxy (Interaction with Tor 2018). In order to create an onion-domain, a Tor daemon first creates an RSA key pair, calculates the SHA-1 hash over the generated public RSA key, shortens it to 80 bits, and encodes the result into a 16-digit base32 string (e.g. expyuzz4waqyqbqhcn) (Interaction with Tor 2018). Because onion-domains derive directly from their public key, they are self-certifying: if a user knows a domain, he automatically knows the corresponding public key. Unfortunately, onion-domains are therefore difficult to read, write, or remember. In February 2018, the Tor Project introduced the next generation of onion-domains, which can now be 56 characters long, use a base32 encoding of the public key, and include a checksum and version number (Interaction with Tor 2018). The new onion services also use elliptic curve cryptography, so that the entire public key can now be embedded in the domain, while previous versions could only embed its hash. These changes enhanced the security of onion services, but the long and unreadable domain names hurt usability again (Interaction with Tor 2018). Therefore, it is a common procedure to repeatedly generate RSA keys until the domain randomly contains a desired string (e.g. facebook). Such vanity onion domains exist e.g. for Facebook (facebookcorewwwi.onion) or the New York Times (nytimes3xbfgragh.onion) (Interaction with Tor 2018). In contrast to the rest of the worldwide web, where navigation is primarily done via search engines, the darknet often contains pages with lists of these domains for further navigation. The darknet deliberately tries to hide from the eyes of the searchable web (Meet Darknet 2013).
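The v2 derivation described above (SHA-1 over the DER-encoded public RSA key, truncated to 80 bits, base32-encoded) can be sketched in a few lines of Python; the key bytes used here are a placeholder, not a real Tor key:

```python
import base64
import hashlib

def onion_v2_address(public_key_der: bytes) -> str:
    """Derive a v2 onion domain: SHA-1 of the DER-encoded RSA public key,
    truncated to 80 bits (10 bytes), then base32-encoded (16 characters)."""
    digest = hashlib.sha1(public_key_der).digest()[:10]
    return base64.b32encode(digest).decode().lower() + ".onion"

# placeholder bytes instead of a real DER-encoded RSA key
print(onion_v2_address(b"not a real RSA key"))
```

Since 10 bytes are exactly 80 bits, the base32 encoding always yields 16 characters with no padding, which is why every v2 onion domain has the same length.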

3.3 Onion Routing – How Does Tor Work?

So how exactly does the anonymizing encryption technology behind onion routing work? As said before, the Tor Browser chooses an encrypted path through the network and builds a circuit in which each onion router only knows (is able to decrypt) its predecessor and its successor, but no other nodes in the circuit. Tor uses the Diffie-Hellman algorithm to negotiate keys between the user and the different onion routers in the network (How does Tor work 2018). The algorithm is one application of public-key cryptography, which relies on a pair of mathematically linked keys:

  1. A public-key — public and visible to others
  2. A private-key — private and kept secret

The public key can be used to encrypt messages and the private key is in return used to decrypt the encrypted content. This implies that anyone is able to encrypt content for a specific recipient, but only this recipient can decrypt it again (How does Tor work 2018).
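A toy Diffie-Hellman exchange shows how two parties can agree on a shared secret over an open channel; the prime and generator below are illustrative stand-ins, far smaller than what real deployments use:

```python
import secrets

# illustrative parameters only: 2**64 - 59 is prime, but real deployments
# use much larger groups (or elliptic curves, as modern Tor does)
p = 0xFFFFFFFFFFFFFFC5
g = 5

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent, never sent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent, never sent

A = pow(g, a, p)   # Alice's public value, sent in the clear
B = pow(g, b, p)   # Bob's public value, sent in the clear

# both sides compute the same shared secret without ever transmitting it:
# B^a = (g^b)^a = g^(ab) = (g^a)^b = A^b  (mod p)
assert pow(B, a, p) == pow(A, b, p)
```

An eavesdropper sees only p, g, A and B; recovering the shared secret from those would require solving the discrete logarithm problem.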

Tor uses 3 nodes by default, so 3 layers of encryption are required to encrypt a message (How does Tor work 2018). It is important to note that every single Tor packet (called a cell) is exactly 512 bytes in size. This is done so that attackers cannot guess which cells carry larger content, e.g. images or media (How does Tor work 2018). At every hop the message passes, one layer of encryption is removed, revealing the position of the next successor in the circuit. This ensures that nodes in the circuit cannot know where the message originated or where its final destination is (How does Tor work 2018). A simplified visualization of this procedure can be seen in the picture below.

Removing one layer of encryption in every step to the next node, Reference (How does Tor work 2018)
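The peeling of layers shown above can be mimicked with a toy stream cipher (a hash-based XOR keystream standing in for Tor's real symmetric encryption): the client wraps the cell once per relay, and each relay removes exactly one layer:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # expand a key into n pseudo-random bytes (toy stand-in for a real cipher)
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def apply_layer(cell: bytes, key: bytes) -> bytes:
    # XOR is its own inverse: the same call adds or removes a layer
    return bytes(c ^ k for c, k in zip(cell, keystream(key, len(cell))))

hop_keys = [b"guard-key", b"middle-key", b"exit-key"]

# the client wraps the innermost layer (for the exit node) first
cell = b"GET /index.html"
for key in reversed(hop_keys):
    cell = apply_layer(cell, key)

# each relay on the circuit then peels exactly one layer, in order
for key in hop_keys:
    cell = apply_layer(cell, key)

print(cell)  # b'GET /index.html'
```

Each relay only ever sees the layer its own key removes, which is exactly why no single node learns both the origin and the final destination.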

But how does the network allow different users to connect without knowing each other’s network identity? The answer is so-called “rendezvous points”, formerly known as hidden services (Onion Service Protocol 2019). The following steps are mainly extracted and summarized from the official Tor documentation of the Onion Service Protocol 2019 and describe the technical details of how this is made possible:

Step 1: Before a client is able to contact an onion service in the network, the service needs to advertise its existence. Therefore, the service randomly selects relays in the network and requests them to act as introduction points by sending them its public key. The picture below shows these circuit connections in the first step as green lines. It is important to mention that these lines mark Tor circuits and not direct connections. The full three-step circuit makes it hard to associate an introduction point with the IP address of an onion server: even though the introduction point is aware of the onion server’s identity (public key), it never knows the onion server’s location (IP address) (Onion Service Protocol 2019).

Step 1: Reference: (Onion Service Protocol 2019)

Step 2: The service creates a so-called onion service descriptor that contains its public key and a summary of each introduction point (Onion Service Protocol 2019). This descriptor is signed with the private key of the service and then uploaded to a distributed hash table in the network. If a client requests an onion domain as described in section Accessing the Network, the respective descriptor is found. If e.g. “abc.onion” is requested, “abc” is a 16- or 56-character string derived from the service’s public key as seen in the picture below.

Step 2: Reference: (Onion Service Protocol 2019)

Step 3: When a client wants to contact an onion service, it initiates the connection by downloading the descriptor from the distributed hash table as described before. If such a descriptor exists for the address abc.onion, the client receives the set of introduction points and the respective public key. This action can be seen in the picture below. At the same time, the client establishes a circuit to another randomly selected node in the network and asks it to act as a rendezvous point by submitting a one-time secret key (Onion Service Protocol 2019).

Step 3: Reference: (Onion Service Protocol 2019)

Step 4: Now the client creates a so-called introduce message (encrypted with the public key of the onion service), containing the address of the rendezvous point and the one-time secret key. This message is sent to one of the introduction points, requesting the onion service as its final target. Because the communication is again realized via a Tor circuit, it is not possible to uncover the client’s IP address and thus its identity.

Step 4: Reference: (Onion Service Protocol 2019)

Step 5: At this point, the onion service decrypts the introduce message, including the address of the rendezvous point and the one-time secret key. The service is then able to establish a circuit to the now revealed rendezvous point and communicates the one-time secret in a rendezvous message to that node. The service thereby keeps the same set of entry guards for the creation of new circuits (Onion Service Protocol 2019). Through this technique, an attacker is not able to create his own relay and force the onion service to create an arbitrary number of circuits in the hope that the corrupt relay is randomly selected as the entry node. This attack scenario, which is able to break the anonymity of hidden services, was described by Øverlier and Syverson in their paper (Locating Hidden Servers 2006).

Step 5: Reference: (Onion Service Protocol 2019)

Step 6: As seen in the last picture below, the rendezvous point informs the client about the successfully established connection. Afterwards, both the client and the onion service are able to use their circuits to the rendezvous point to communicate. The (end-to-end encrypted) messages are forwarded through the rendezvous point from the client to the service or vice versa (Onion Service Protocol 2019). The initial introduction circuit is never used for the actual communication for one important reason: a relay should not be attributable to a particular onion service. The rendezvous point therefore never learns the identity of any onion service (Onion Service Protocol 2019). Altogether, the complete connection between client and onion service consists of six nodes: three selected by the client, the third being the rendezvous point, and three selected by the service.

Step 6: Reference: (Onion Service Protocol 2019)

4. Conclusion – Weaknesses

Contrary to what many people believe (How does Tor work 2018), Tor is not a completely decentralized peer-to-peer system. If it were, it wouldn’t be very useful, as the system requires a number of directory servers that continuously manage and maintain the state of the network.

Furthermore, Tor is not secured against end-to-end attacks. While it does provide protection against traffic analysis, it cannot and does not attempt to protect against monitoring of traffic at the boundaries of the Tor network (the traffic entering and exiting the network), a problem that security experts have not been able to solve yet (How does Tor work 2018). Researchers from the University of Michigan even developed a network scanner allowing identification of 86% of worldwide live Tor “bridges” with a single scan (Zmap Scan 2013). Another disadvantage of Tor is its speed: because the data packets are randomly sent through a number of nodes, each of which could be anywhere in the world, Tor is very slow. Despite its weaknesses, the Tor Browser is an effective, powerful tool for the protection of the user’s privacy online, but it is good to keep in mind that a Virtual Private Network (VPN) can also provide security and anonymity, without the significant speed decrease of the Tor Browser (Tor or VPN 2019). If total obfuscation and anonymity regardless of performance play the decisive role, a combination of both is recommended.

5. References

Hidden Internet [2018], Manu Mathur, Exploring the Hidden Internet – The Deep Web [Online]
Available at: https://whereispillmythoughts.com/exploring-hidden-internet-deep-web/
[Accessed 27 August 2019].

Search Engines [2019], Julia Sowells, Top 10 Deep Web Search Engines of 2017 [Online]
Available at: https://hackercombat.com/the-best-10-deep-web-search-engines-of-2017/
[Accessed 24 July 2019].

Krebs On Security [2016], Brian Krebs, Krebs on Security: Rise of Darknet Stokes Fear of The Insider [Online]
Available at: https://krebsonsecurity.com/2016/06/rise-of-darknet-stokes-fear-of-the-insider/
[Accessed 14 August 2019].

Anonymous Connections [1990], Michael G. Reed, Paul F. Syverson, and David M. Goldschlag, Naval Research Laboratory, Anonymous Connections and Onion Routing [Online]
Available at: https://www.onion-router.net/Publications/JSAC-1998.pdf
[Accessed 18 August 2019].

Onion Pre Alpha [2002], Roger Dingledine, pre-alpha: run an onion proxy now! [Online]
Available at: https://archives.seul.org/or/dev/Sep-2002/msg00019.html
[Accessed 18 August 2019].

Tor Browser [2019], Heise Download, Tor Browser 8.5.4 [Online]
Available at: https://www.heise.de/download/product/tor-browser-40042
[Accessed 29 August 2019].

Interaction with Tor [2018], Philipp Winter, Anne Edmundson, Laura M. Roberts, Agnieszka Dutkowska-Zuk, Marshini Chetty, Nick Feamster, How Do Tor Users Interact With Onion Services? [Online]
Available at: https://arxiv.org/pdf/1806.11278.pdf
[Accessed 16. August 2019].

What is the darknet?, Darkowl, What is THE DARKNET? [Online]
Available at: https://www.darkowl.com/what-is-the-darknet/
[Accessed 22. August 2019].

Meet Darknet [2013], PCWorld: Brad Chacos ,Meet Darknet, the hidden, anonymous underbelly of the searchable Web [Online]
Available at: https://www.pcworld.com/article/2046227/meet-darknet-the-hidden-anonymous-underbelly-of-the-searchable-web.html
[Accessed 23. August 2019].

Onion Service Protocol [2019], Tor Documentation, Tor: Onion Service Protocol [Online]
Available at: https://2019.www.torproject.org/docs/onion-services
[Accessed 8. July 2019].

How does Tor work [2018], Brandon Skerritt, How does Tor *really* work? [Online]
Available at: https://hackernoon.com/how-does-tor-really-work-c3242844e11f
[Accessed 8. July 2019].

Locating Hidden Servers [2006], Lasse Øverlier, Paul Syverson, Locating Hidden Servers [Online]
Available at: https://www.onion-router.net/Publications/locating-hidden-servers.pdf
[Accessed 8. August 2019].

Zmap Scan [2013], Peter Judge, Zmap’s Fast Internet Scan Tool Could Spread Zero Days In Minutes [Online]
Available at: https://www.silicon.co.uk/workspace/zmap-internet-scan-zero-day-125374
[Accessed 21. August 2019].

Tor or VPN [2019], Bill Man, Tor or VPN – Which is Best for Security, Privacy & Anonymity? [Online]
Available at: https://blokt.com/guides/tor-vs-vpn
[Accessed 8. August 2019].

Cloudbased Image Transformation

Introduction

As part of the lecture “Software Development for Cloud Computing”, we had to come up with an idea for a cloud-related project we’d like to work on. I had just heard about Artistic Style Transfer using Deep Neural Networks in our “Artificial Intelligence” lecture, which inspired me to choose image transformation as my project. However, having no idea about the cloud environment at that time, I didn’t know where to start and what was possible. A few lectures in, I had heard about Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Function as a Service (FaaS). Out of those three I liked the idea of FaaS the most. Simply upload your code and it works. Hence, I went with Cloud Functions in IBM’s cloud environment. Before I present my project, I’d like to explain what Cloud Functions are and how they work.

What are Cloud Functions?

Choose one of the supported programming languages. Write your code. Upload it. And it works. Serverless computing. That’s the theory behind Cloud Functions. You don’t need to bother with infrastructure. You don’t need to bother with load balancers. You don’t need to bother with Kubernetes. And you definitely do not have to wake up at 3 am and race to work because your servers are on fire. All you do is write the code. Your cloud provider manages the rest. The cloud provider of my choice was IBM.

Why IBM Cloud Functions?

Unlike Google and Amazon, IBM offers free student accounts, with no need to deposit any kind of payment option upon creation either. Since I had no experience using any cloud environment, I didn’t want to risk accidentally accumulating a big bill. Our instructor was also very familiar with the IBM Cloud, so in case I needed support I could always ask him as well.

What do IBM Cloud Functions offer?

IBM offers a command line interface (CLI), a nice user interface on their cloud website, accessible with the web browser of your choice, and very detailed documentation. You can check, and if you feel like it, write or edit your code using the UI as well. The only requirement for your function is: it has to take a JSON object as input and it has to return a JSON object as well. You can test the function directly inside the UI. Simply change the input, declare an example JSON object you want to run it with, then invoke your function. Whether the call failed or succeeded, the activation ID, the response time, results, and logs, if enabled, are then displayed directly. You can add default input parameters or change your function’s memory limit, as well as the timeout, on the fly. Each new instance of your function will then use the updated values.
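A Python action for IBM Cloud Functions follows exactly this contract: a `main` function that takes a dict (parsed from the input JSON) and returns a dict (serialized back to JSON). A minimal sketch:

```python
# minimal IBM Cloud Functions (OpenWhisk) Python action:
# main receives the input JSON as a dict and must return a dict
def main(params):
    name = params.get("name", "world")
    return {"greeting": f"Hello, {name}!"}

# locally, the action can be exercised like any ordinary function
print(main({"name": "Cloud"}))  # {'greeting': 'Hello, Cloud!'}
```

Default input parameters configured in the UI are merged into the same `params` dict, which is why `params.get` with a fallback is a convenient pattern.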

Another nice feature of IBM Cloud Functions are triggers. You can connect your function with different services and, once they fire, your function will be executed, whether someone pushed new code to your GitHub repository or updated your Cloudant database, IBM’s database service.

You can also create a chain of Cloud Functions. The output of function 1 will then be the input of function 2.

IBM Cloud Functions use the Apache OpenWhisk service, which packs your code into a Docker container in order to run it. However, if you have more than one source file, or dependencies you need, you can pack everything into a Docker image or, in some cases, like Python or Ruby, into a zip archive. To do that in Python, you need a virtual environment created with virtualenv, then zip the virtualenv folder together with your Python files. The resulting zip files and Docker images can only be uploaded using the CLI.

You can also enable your function as a web action, which allows it to handle HTTP events. Since the link automatically provided by enabling a function as a web action ends in .json, you might want to create an API definition. This can be done with just a few clicks. You can even import an OpenAPI definition in YAML or JSON format. Binding an API to a function is as simple as defining a base path for your API, giving it a name and creating an operation. For example: API name: Test, base path: /hello, and for the operation we define the path /world, select our action and set the response content type to application/json. Now, whenever we call <domain>/hello/world, we call our Cloud Function through our REST API. Using the built-in API explorer we can test it directly. If someone volunteers to test the API for us, we can also share the API portal link with them. Adding a custom domain is also easily done, by entering the domain name, the certificate manager service and then the certificate in the custom domain settings.

Finally, my Project

Architecture of the Image Transformation Service

The idea was:

A user interacts with my GitHub Page, selects a filter, adds an Image, tunes some parameters, then clicks confirm. The result: They receive the transformed image.

The GitHub Page has been written with HTML, CSS and JavaScript. It sends a POST request to the API I defined, which is bound to my Cloud Function, written in Python. It receives information about the chosen filter, the set parameters and a link to the image (for the moment, only JPEG and PNG are allowed). It then processes the image and returns the created PNG base64-encoded. The base64-encoded data is then embedded in the HTML site and the user can save the image.
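From any client, calling such an endpoint boils down to one JSON POST; the URL, parameter names and values below are made up for illustration and do not match the real API:

```python
import json
import urllib.request

# hypothetical endpoint and payload shape, for illustration only
payload = {
    "filter": "cartoon",
    "factor": 2,
    "image_url": "https://example.com/photo.jpg",
}
req = urllib.request.Request(
    "https://example.com/hello/world",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# the response body would contain the transformed image, base64-encoded:
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)
```

Because a body is attached, urllib issues this request as a POST, matching what the GitHub Page's ajax call does.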

The function currently has three options:

You can transform an image into a greyscale representation.

Left: Original Image by Johannes Plenio, Right: black and white version

You can upscale an image by a factor of two, three or four

Left: Original Image by Johannes Plenio, Right: Upscaled by a factor of 2

and you can transform an image into a Cartoon representation.

Left: Original Image by Helena Lopes, Right: Cartoon version

Cartoon images are characterized by clear edges and homogeneous colors. The Cartoon filter first creates a grayscale image and median-blurs it, then detects the edges using an adaptive threshold, which currently still has a predefined window size and threshold. It then median-filters the colored image and performs a bitwise AND operation between every RGBA color channel of the median-filtered color image and the found edges.
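The pipeline described above can be sketched with numpy alone; the naive median filter, the local-median comparison standing in for the adaptive threshold, and the window sizes and constant are all assumptions for illustration, not the values used in the actual service:

```python
import numpy as np

def median_blur(img, k=3):
    # naive median filter: stack all k*k shifted views, take the median
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    windows = [padded[dy:dy + h, dx:dx + w]
               for dy in range(k) for dx in range(k)]
    return np.median(np.stack(windows), axis=0)

def edge_mask(gray, k=9, c=2.0):
    # adaptive-threshold stand-in: a pixel counts as an edge if it is
    # clearly darker than its local median
    local = median_blur(gray, k)
    return ((gray < local - c) * 255).astype(np.uint8)

def cartoonize(rgb):
    gray = median_blur(rgb.mean(axis=2), 3)
    lines = (255 - edge_mask(gray)).astype(np.uint8)   # black edge lines
    color = np.stack([median_blur(rgb[..., ch], 5) for ch in range(3)], axis=2)
    # bitwise AND draws the edge lines onto the smoothed colors
    return color.astype(np.uint8) & lines[..., None]

rgb = np.random.default_rng(0).integers(0, 256, (32, 32, 3)).astype(np.uint8)
out = cartoonize(rgb)
print(out.shape, out.dtype)  # (32, 32, 3) uint8
```

A bilateral filter (as in OpenCV) would preserve edges better than the per-channel median used here, which is the trade-off mentioned later in this post.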

Dis-/ Advantage using (IBM) Cloud Functions

Serverless infrastructure was fun to work with. No need to manually set up a server, secure it, etc. Everything is done for you; all you need is your code, which scales over 10,000+ parallel instances without issues. Function calls themselves don’t cost much either. IBM’s base rate is currently $0.000017 per second of execution, per GB of memory allocated. 10,000,000 executions per month with 512 MB action memory and an average execution time of 1,000 ms cost only $78.20 per month, after subtracting the 400,000 GB-s free tier. Another good feature was being able to upload zip packages and Docker images.
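The quoted monthly bill can be reproduced directly from the rate: GB-seconds = executions × memory × duration, minus the free tier:

```python
RATE = 0.000017        # USD per GB-second (IBM's quoted base rate)
FREE_TIER = 400_000    # GB-seconds included per month

executions = 10_000_000
memory_gb = 0.5        # 512 MB action memory
duration_s = 1.0       # 1,000 ms average execution time

gb_seconds = executions * memory_gb * duration_s   # 5,000,000 GB-s
bill = (gb_seconds - FREE_TIER) * RATE
print(f"${bill:.2f} per month")  # $78.20 per month
```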

Those, however, can only be uploaded using the CLI, which as a Windows user is a bit of a hassle. But one day I’ll finally set up the second boot image on my desktop PC. One day. Then I won’t need my VM anymore.

The current code size limit for IBM Cloud Functions is 48 MB. While this seems like plenty, any module you use that is not included by default in IBM’s runtime needs to be packed with your source code. OpenCV was the module I used before switching over to Pillow and numpy, since OpenCV offers a bilateral filter, which would have been a better option than a median filter for the color image creation of the Cartoon filter. Sadly, it is 125 MB large, and still 45 MB packed, which was still too much, given the real limit of 36 MB after factoring in the base64 encoding of the binary files. Neither would the 550 MB VGG16 model fit, which I initially wanted to use for an artistic style transfer neural network as a possible filter option. I didn’t like the in- and output being limited to JSON either. Initially, before using the GitHub Page, the idea was to have a second Cloud Function return the website, which was sadly not possible. The limited selection of predefined runtimes and modules is also more of a negative point. One could always pack one’s code together with modules into a Docker image/zip, but being able to just upload a requirements.txt and have the cloud automatically download those modules would have been way more convenient. My current solution returns a base64-encoded image. If someone tries to upscale a large image and the result exceeds 5 MB, it returns an error saying „The action produced a response that exceeded the allowed length: –size in bytes– > 5242880 bytes.“
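The effective 36 MB figure follows from base64’s 4/3 size inflation: a package must fit under the 48 MB limit after encoding, so the raw size shrinks by that factor:

```python
limit_mb = 48
base64_overhead = 4 / 3           # base64 encodes 3 raw bytes as 4 characters
effective_mb = limit_mb / base64_overhead
print(effective_mb)  # 36.0
```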

What’s the Issue?

Due to GitHub Pages not setting Cross-Origin Resource Sharing (CORS) headers, this currently does not work out of the box. CORS is a mechanism that allows web applications to request resources from a different origin than their own. A workaround my instructor suggested was creating a simple node.js server, which adds the missing CORS headers. Before that, only GET requests were being logged in the Cloud API summary, which the API answered with a code 500 Internal Server Error. I spent what felt like ages troubleshooting this: reading up on it, finding out the headers need to be set by the server, adding headers to the ajax jQuery call, enabling cross-origin on it, trying to work around it by setting the dataType to jsonp, even uploading the Cloud Function and API again and creating a test function bound to the API (which worked, by the way, both as POST and GET, no CORS errors whatsoever… till I replaced the code). I’m still pretty happy it works with this little workaround now, thank you again for the suggestion!

Other than that, I spent more time than I’m willing to admit trying to find out why I couldn’t upload my previous OpenCV code solution. Rewriting my function as a result was also a rather interesting experience.

Future Improvements?

I could give the user more options for the Cartoon filter. The adaptive threshold has a threshold limit that could easily be exposed to the user. An option to change the window size could also be added, maybe in steps?

I could always add new filters as well. I like the resulting image of edge detection using a Sobel operator. I thought about adding one of those.

Finding a way to host a website/find a provider that adds CORS Header, allowing interested people to try a live-demo and play around with it, would be an option as well.

What I’d really like to see is the artistic style transfer uploaded. I might be able to create it using IBM Watson, then add it as a sequence to my service. I dropped this idea previously because I had no time left to spare trying to get it to work.

Another option would be allowing users to upload files, instead of just providing links. Similar to this, I can also include a storage bucket, linked to my function in which the transformed image is saved. It then returns the link. This would solve the max 5 MB response size issue as well.

Conclusion

Cloud Functions are really versatile, there’s a lot one can do with them. I enjoyed working with them and will definitely make use of them in future projects. The difference in execution time between my CPU and the CPUs in the Cloud Environment was already noticeable for the little code I had. Also being able to just call the function from wherever is pretty neat. I could create a cross-platform application, which saves, deletes and accesses data in an IBM Cloudant database using Cloud Functions.

Having no idea about Cloud Environments in general a semester ago, I can say I learned a lot and it definitely opened an interesting, yet very complex world I would like to learn more about in the future.

And at last, all Code used is provided in my GitHub repository. If you are interested, feel free to drop by and check it out. Instructions on how to set everything up are included.