Mechanisms in Systems Programming Languages for Achieving Memory, Thread, and Type Safety

Motivation

At the end of last month (July 2021), the CWE (Common Weakness Enumeration) team published a report presenting the top 25 software weaknesses. It is an evaluation of vulnerabilities found in various software systems over the last two years. The report comprises a list of the 40 most severe weaknesses: those that are easy to find or exploit, occur frequently, and are highly impactful and accordingly dangerous. [1]

First place on the list is held by a memory vulnerability, the out-of-bounds write, closely followed by the out-of-bounds read in third place. In total, 7 of the 40 weaknesses presented are ones that can arise from a lack of memory safety. Other dangerous weaknesses result from mistakes in parallel programming and from overly weak type systems in the programming language used. [1]

Memory-specific software vulnerabilities are among the most frequently occurring weaknesses. Roughly 70% of the vulnerabilities in Microsoft's systems can be traced back to memory errors. [11]

Other examples of how common memory errors are include the Exim mail server last May, where 12 of the 21 vulnerabilities found were memory-specific bugs. [12] Two severe security holes in the Samba LDAP server from March of this year are also due to memory errors in the software. [13] And in the WebKit component of Apple's macOS, a memory error in the form of a buffer overflow was discovered as well. [14]

Unsafe Languages, Inadequate Defense Mechanisms and Our Dangerous Addiction to Legacy Code

bugs, bugs everywhere

A recent study found that 60-70% of vulnerabilities in iOS and macOS are caused by memory unsafety. Microsoft estimates that 70% of all vulnerabilities in their products over the last decade have been caused by memory unsafety. Google estimated that 90% of Android vulnerabilities are memory unsafety. An analysis of 0-days that were discovered being exploited in the wild found that more than 80% of the exploited vulnerabilities were due to memory unsafety.

(Gaynor, 2019)

Over the last 20 years, developing secure software has become increasingly important. To this day, we write a significant amount of code in languages with manual memory management. However, the Peter Parker principle states that “great power comes with great responsibility”. Many scoring systems classify, enumerate and rank the prevalence of known vulnerabilities. In theory, developers should be aware of the common programming mistakes leading to these bugs. Yet, the last 20 years are living proof that manual memory management is highly error-prone. Because most systems share memory between data and instructions, controlling small portions of memory can be enough to take over entire systems (Szekeres et al., 2013). The fight over attacking and defending the security measures on top of unsafe systems is often called the eternal war in memory.

In this blog post, I want to examine what properties make programming languages like C/C++ fundamentally unsafe to use. After that, I briefly discuss the inadequacies of our defense mechanisms. Last of all, I reflect on the sociopolitical implications arising from the continued use of unsafe languages.

Continue reading

Supply Chain Attacks – The Supply Chain Strikes Back

an article by Verena Eichinger, Amelie Kassner and Elisa Zeller

After SolarWinds, another headline from the IT world is making the rounds in the mass media: more than 500 supermarkets in Sweden had to close because of a cyberattack. As with SolarWinds, this was a supply chain attack (SCA). The term is coming up more and more often, and it is not only attracting general attention but also causing great concern in IT circles. [12]

All of us reading this article are sitting in front of a technical device that is a complex interplay of hardware and software. This complexity is something we can no longer fully penetrate these days. Or do you know where the individual components come from and who had access to them? Is all the software up to date, and are the libraries in use really trustworthy? Can you be sure that no malware was smuggled into the latest software update? These are all questions that arise once you deal with SCAs and begin to grasp the depth of the underlying problem. So far, companies have always been the targets of these attacks, but as the recent incident at the supermarket chain Coop in Sweden shows, their effects have now reached the end user. So it is worth it for everyone to take a closer look at the topic. To that end, the first section of this article explains the principle underlying SCAs and clarifies what exactly we are dealing with. After that, the examples mentioned above are examined: SolarWinds and the attack on Kaseya, whose victims include the supermarket chain Coop. These two attacks are textbook examples of the danger and impact of SCAs. Finally, we look at what can be done against the threat posed by SCAs.

Continue reading

How to Get People to Do Things They Shouldn't Do

Disclaimer

The following article uses cynical, at times sarcastic language and is written as a "manual for manipulators". This perspective is of course to be understood as a stylistic device; the goal of the article is education and raising awareness.

Introduction

Not only machines can be hacked; humans lend themselves to it splendidly as well, if not even better. The most secure system (one that does not act autonomously) still has one great weakness: the human.

Humans are victims of their own psyche and cognitive biases. Even the most careful, attentive and mindful person does not always escape them. That opens the door wide to manipulation.

Fortunately, one is almost tempted to say. Because when the legislator once again gets the idea of prescribing to you, as a company, how you have to inform your customers and users and which choices you have to offer them, you still have a clever workaround: implement the law, but circumvent its actual essence.

If you are a shady hacker, you probably care little about laws anyway. How nice: then even more immoral means of manipulation are available to you.

This article gives an overview of various methods and psychological principles that can be used to manipulate people into doing things they do not actually want to do or should not do, relating to computer systems and corporate security, but also to everyday life on the web. Since the article has the character of an overview, it can serve as a starting point for engaging more deeply with the individual principles.

And beware: these manipulations are not some distant future scenario; we already encounter them day after day.

Manipulation

Manipulation is understood as the eliciting of an action, a change in behavior, an attitude or a change in attitude through influence of which the person concerned is not immediately aware. The best manipulation is the one that goes unnoticed, where the manipulated person is convinced of having made a free, unmanipulated decision.

This in itself is not necessarily morally reprehensible. Good user interfaces can influence people in such a way that they avoid careless mistakes. The term "manipulation", however, has a rather negative connotation. In this context, that would be, for example, a user interface that deliberately causes or encourages careless mistakes and exploits them as a method of exerting influence.

Continue reading

Zero Trust Security – The further development of perimeter security?

Most companies use perimeter security to protect their corporate applications, services and data from attackers and unauthorised users. This approach relies on a corporate network in which clients that are part of the network can access the applications. Unfortunately, this also includes attackers who have gained access to the network.
In addition, more and more applications are being shifted from corporate networks into the cloud, and clients are becoming more mobile by the day. It is getting increasingly difficult to identify who and what should be allowed or trusted with access to the network. That means that setting up firewalls and other security mechanisms to secure the perimeter is becoming a real challenge and can result in very high costs. [12] [13]

In order to adapt to the new requirements and to create a system that is compatible with both cloud and corporate applications, there is a new security approach: Zero Trust Security.

Continue reading

Ergodicity and the Revolutionizing of Systems

Ergodicity, a term I had never heard before, threatens to undermine the very foundations of economics and risk analysis as it exposes flaws in the fundamental assumptions of these fields.

Despite the newly garnered attention, the concept of ergodicity is rather old and has long been applied in mathematics and physics. It is then unsurprising that this concept is brought into the limelight not by an economist but by Ole Peters, a theoretical physicist, and Nassim Nicholas Taleb, a Distinguished Professor of Risk Engineering and popular author.

So how does ergodicity work, and why could it revolutionize economics?

Continue reading

Hacking on Critical Infrastructure – Is there a Problem in Germany?

In May of this year, a cyberattack on the largest pipeline in the United States made global headlines. The supply of various petroleum products came to a standstill for several weeks. Pictures went around the world showing how fuels were filled and transported in a wide variety of containers. Even plastic bags were filled with the highly flammable liquids for fear of not having access to fuels for a longer period of time.

Accelerated by the Corona crisis, digitization in Germany is on the rise. But as digitization increases, so do the potential risks in IT security. As in the USA, critical infrastructures in a wide range of sectors in Germany have long been digitized. For years, there have also been repeated attacks on critical infrastructures in Germany. But is Germany prepared for the growing threat? The question is whether Germany is heading for major problems if critical infrastructure such as the energy and healthcare systems or the water supply are not adequately protected.

This article takes a look at attacks on critical infrastructure in Germany. What has happened so far? Is Germany prepared for attacks? Who helps when being attacked? And what needs to change?

Continue reading

Your first Web App in the cloud – AWS and Beanstalk

Hello fellow readers! 

In this blog post you will learn how to set up a web game with a worldwide ranking in the cloud, without having to deal with complicated deployment. That means for you: more time for your application.

The app uses Node.js with Express and MongoDB as the backend. The frontend is made of plain HTML, CSS and JavaScript. For deployment we used the PaaS (Platform as a Service) offering from AWS: Elastic Beanstalk.

You can find the game on GitHub:
https://github.com/Pyrokahd/CircleClicker

App 

CircleClicker is a skill game in which the goal is to click the appearing circles as fast as possible before they disappear again.

We used JavaScript for both the frontend and the backend, in the form of Node.js. JavaScript is well known and used almost everywhere, which comes with a huge community and therefore many already-answered questions.

Tip:
When deciding which programming language or tool you want to work with, always consider what the community is like and how many tutorials are available. 

Backend

Getting started

After installing Node.js and Express, we can use the Express Application Generator to generate a skeleton web server. We do this by navigating to the folder in which we want the project to live and typing the following command:
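A typical invocation looks like the following; here circle-clicker is just a placeholder project name and ejs is only one possible view engine:

npx express-generator --view=ejs circle-clicker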

With --view we define which view engine to use, for example pug or ejs. These engines generate response pages from templates and variables. However, in this project we don't use them beyond the auto-generated files, to keep things simple.

To install additional modules, type npm install ModuleName.

This will add the dependency to the package.json. Beanstalk will install all the modules listed in your package.json file automatically, so you don’t have to worry about uploading or installing them by yourself.

Now we add responses for the server to reply to client requests.

The first one sends the default page to the client once it connects. This will be our HTML site containing the game.

app.get('/', function(req, res, next) {
    res.sendFile(path.join(__dirname, "public", "main.html"));
});

Here we tell our server to respond to GET requests on the root server path "/", i.e. the URL with nothing after it. The response is our main.html, located in the public folder under the project root. This makes main.html our default web page.

The next GET handler responds to the URL "/getScore" by sending the top 10 highscores to the client.

app.get('/getScore', function(req, res, next) {

In there we make a query to our MongoDB and then construct a JSON string out of it to send it to the client. The query is shown later when we talk about MongoDB.

The last response answers a POST request from the client and is used to create new entries for the database, if the player sends a score.

app.post('/sendScore', function(req, res, next) {

In this function we receive a name and a score from the client in the form of a JSON object. Those variables are then used to create a database entry.
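How the handler body might look is sketched below. This is a minimal sketch, not the exact original code; it assumes the body parsers set up by the generated skeleton and uses the UserModel from the MongoDB section further down:

app.post('/sendScore', function(req, res, next) {
    // name and score arrive as fields of the posted JSON object
    var _name = req.body.name;
    var _score = req.body.score;
    // create and save a database entry (see "Create Entry" below)
    var entry = new UserModel({ name: _name, score: _score });
    entry.save(function (err) {
        if (err) return console.error(err);
        res.send("score saved");
    });
});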

Frontend

The frontend consists of the HTML page, the CSS stylesheet and one or more JavaScript files for the logic and functionality.

The HTML and JavaScript files are stored in their respective folders under the "public" folder on the server.

HTML

For the HTML document, the important parts are a form to enter the achieved score plus a name, and buttons to start the game and to send the score to the server. Other important parts, like the HTML canvas, are generated once the game is started.

How exactly the HTML looks is not important. It just needs all the relevant elements with respective IDs, so they can be referenced in the JavaScript file, in which all the event handling and game logic happens.

The following should be avoided because it violates the content security policy (CSP):

  • inline CSS
  • inline JavaScript
  • inline event handlers (e.g. onClick="function" on a button)
  • importing scripts that are not hosted on your own server (e.g. download jQuery and put it on your server instead)

It is also important to import the gameLogic script at the end of the HTML body. That way the HTML is loaded first, and we can access all elements in the JavaScript file without trouble.

Javascript

First we give the buttons on the site some functions to invoke by adding click events to them. An HTML canvas is created inside an object which holds the game logic once the site has loaded.

document.getElementById("startBtn").addEventListener("click", startGame);
document.getElementById("showPopUpBtn").addEventListener("click", showPopUp);
document.getElementById("submitBtn").addEventListener("click", sendScore);
document.getElementById("resetBtn").addEventListener("click", hidePopUp);

The first three of those buttons provide the necessary functionality to play the game and interact with the server. 

The first one, "startBtn", does exactly what the name suggests. It starts the game by calling the startGame function. This sets some variables like lives and points and then starts an interval timer to act as our game loop. After a certain time, which decreases the longer the game runs, a circle object is spawned. It contains the logic to increase points or decrease lives and to despawn after a while.
The canvas has an EventListener to check for clicks inside it. To check whether a circle was clicked, the mouse position and the circle position are compared (the circle position needs to be increased by the offset of the canvas position and decreased by the scroll-y height).
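One way to express this comparison (a sketch with illustrative names, not the exact original code) is to convert the click into canvas coordinates with getBoundingClientRect, which already accounts for both the canvas offset and the current scroll position:

canvas.addEventListener("click", function (e) {
    var rect = canvas.getBoundingClientRect();
    // click position in canvas coordinates
    var x = e.clientX - rect.left;
    var y = e.clientY - rect.top;
    // hit if the click lies inside the circle
    var dx = x - circle.x;
    var dy = y - circle.y;
    if (dx * dx + dy * dy <= circle.radius * circle.radius) {
        // increase points and despawn the circle here
    }
});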

The second button shows the popup form, which is used to enter a name and send it, together with the achieved score, to the server. This function is also called automatically once you lose the game.

The submit button calls the sendScore function, which uses jQuery and Ajax to make a POST request to the server, sending the score and username as a JSON object named data. Here we also use the "/sendScore" URL for which we set up a POST response on the server.

const serverURL = window.location.origin;

$.ajax({
    type: "POST",
    async: true,
    url: serverURL + "/sendScore",
    data: data,
    success: function(dataResponse) {
        console.log("score sent successfully:");
        console.log(dataResponse);
        setTimeout(requestLeaderboard, 1000);
    },
    statusCode: {
        404: function() {
            alert("not found");
        } //, 300:()=>{}, ...
    }
});

The last relevant function requests the leaderboard. It is called when the page loads and again once we send a new score to the server, to update the leaderboard. It also uses jQuery and Ajax to make the GET request easier to write.

$.ajax({
    type: "GET",
    async: true,
    url: serverURL + "/getScore",
    success: function(dataResponse) {
        highScores = JSON.parse(dataResponse);

        // Build an HTML string from the JSON response
        var leaderBoardString = "<h2>Leaderboard</h2>";
        var tableString = '<table id="table"><tr>'
                        + '<th>Name</th><th>Score</th></tr>';
        for (var i = 0; i < highScores.user.length; i++) {
            tableString += "<tr><td>" + highScores.user[i].name
                        + "</td><td>"
                        + highScores.user[i].score
                        + "</td></tr>";
        }
        tableString += '</table>';
        // Update the leaderboard with the created table
        document.getElementById("leaderboardArea").innerHTML
                        = leaderBoardString + tableString;
    },
    statusCode: {
        404: function() {
            alert("not found");
        } //, 300:()=>{}, ...
    }
});

The "/getScore" URL from the server's GET response is used to get the top 10 scores as a JSON object. This JSON object is then used to build a table as an HTML string, which is set as the innerHTML of a div box.

MongoDB

In our application we use MongoDB together with the mongoose module. The mongoose module simplifies certain MongoDB functions. 

Host

As the host for the database we used MongoDB Atlas, which lets you create a free MongoDB instance in the cloud of a provider of your choice. We chose AWS, since we also use their Beanstalk service. The free tier of this database has some limits, but for a small project like this it is more than enough.

After the database is created, you can create a user, assign rights and copy a connection string from MongoDB Atlas.
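Atlas connection strings follow a common pattern; a typical one looks like this, with user, password, cluster host and database name as placeholders:

mongodb+srv://<user>:<password>@<cluster-host>/<dbname>?retryWrites=true&w=majority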

Connection

We use this string together with mongoose to connect to the DB in a few lines of code.

// mongoose is required once at the top of the server file
var mongoose = require('mongoose');

// mongoDBConnectionString holds the string copied from MongoDB Atlas
mongoose.connect(mongoDBConnectionString, { useNewUrlParser: true });
//Get the default connection
var db = mongoose.connection;
//Bind connection to error event (to get notification of connection errors)
db.on('error', console.error.bind(console, 'MongoDB connection error:'));

After the connection is established, we can create a mongoose model, which is like a blueprint for the entries in one collection. This model is created from a mongoose schema in which we define the structure.

var UserModelSchema = new mongoose.Schema({
    name: String,
    score: Number
});
var UserModel = mongoose.model('UserModel', UserModelSchema);

Create Entry

Now we are ready to add new entries to the database. This is easily done by creating an instance of the mongoose model and filling in the appropriate fields. After calling the save function, a new entry is added.

var testInstance = new UserModel({name: _name, score: _score});
testInstance.save(function (err, testInstance) {
    if (err) return console.error(err);
    console.log("new Entry saved");
});

Query

Making a query is just as easy. First we define a query object and set the parameters according to our needs.

// find all users
var query = UserModel.find();
// select the 'name' and 'score' fields but not the _id field
query.select('name score -_id');
// sort by score, descending
query.sort({ score: -1 });
// limit our results to 10 items
query.limit(10);
// return plain JS objects instead of Mongoose Documents
query.lean();

We can then execute the query and receive a result depending on these settings.

query.exec(function (err, queryResult) {
    if (err) return handleError(err);

    // queryResult is an array of plain JS objects;
    // wrap it and serialize it to the JSON string the client expects
    var resJSON = JSON.stringify({ user: queryResult });

    // and send it to the client
    res.send(resJSON);
});

Into the cloud!

But how do we get into the cloud? The first thing to do is decide on a cloud provider. We chose AWS very quickly because we were particularly impressed by one service they offer: AWS Elastic Beanstalk. Amazon promises to take over a lot of the backend provisioning, like load balancing, scaling, EC2 instances and security groups.

Which means more time for actually programming. 

Does that work? To a large extent yes! 

How does it work?

Very simple. You create an Amazon AWS account, go into the management console and create a new Beanstalk environment. Here you define the domain name and the platform (programming language) your app is written in, then start the environment. Afterwards you upload your app, packaged into an archive, and check on the domain whether it works. Updates are done the same way: pack the new code into an archive, upload it and select it.
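The same flow can also be scripted. As a sketch, assuming the optional EB CLI is installed and the application is again called circle-clicker:

eb init -p node.js circle-clicker   # register the application and platform
eb create circle-clicker-env        # create and start the environment
eb deploy                           # package and upload the current code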

You now have a website with your own game. A load balancer is included, the highscore is included, and basic security settings are in place.

Check Logs on CloudWatch

In your Beanstalk environment configuration under monitoring, you can enable streaming of your logs to CloudWatch. This comes at some extra cost, but it gives you all your logs online, centralized in one location.

CloudWatch also has many more features, like alarms when your application gets scaled down because there is less traffic, or monitoring the performance of your resources.

However, we are interested in our logs, which you find under your log groups in CloudWatch. There is one log ending in stdout.log (by default). This is where all of the console.log and console.error messages from our server end up.

Security Settings

If you want to make your application even more secure, you can put a little more time into it. We decided to set up HTTPS and use a Content Security Policy (CSP).

HTTPS

To provide HTTPS, proceed as follows:

  • Create a certificate for your domain name in AWS Certificate Manager (ACM)
  • Configure HTTPS on the load balancer:
  • Add a listener
  • Port: 443
  • Protocol: HTTPS
  • Upload/select the certificate

Helmet Security

Helmet is a library that sets a number of HTTP response headers to address some basic security issues. One important aspect of this is the Content Security Policy (CSP) header. It forces you to write your HTML files in a secure manner to avoid cross-site scripting attacks.
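Enabling it in the Express app is a one-liner (a minimal sketch, assuming Helmet has been installed via npm install helmet):

var helmet = require('helmet');
// enable Helmet's default header protections, including CSP
app.use(helmet());

Individual headers, such as the CSP directives, can also be configured explicitly via helmet.contentSecurityPolicy().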

Problem: CSP-header enforces security policies

Despite not being completely CSP-conformant, the app worked on the local server. To make it work in the cloud, however, we had to follow the CSP guidelines, so we removed all inline JavaScript and CSS to eliminate all CSP-related errors.

Problem: HTTPS Forwarding

By activating Helmet security, the resources of the website became unreachable. In the AWS logs we found the following error message, but could not find out exactly where the problem was:

2020/09/05 17:53:16 [error] 19019#0: *7 connect() failed (111: Connection refused) while connecting to upstream, client: 172.31.37.17, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "172.31.41.163"

We also looked at the load balancers and security groups, but all the settings were correct there. Investigating further with the browser's developer console, we found out that HTTPS requests are required.

GET https://…elasticbeanstalk.com/javascripts/GameLogic.js net::ERR_CONNECTION_TIMED_OUT

Since we did not have the required permissions, we could not set up HTTPS, so we had to turn off the CSP for our game. The already implemented security improvements, such as the removal of inline CSS, remain in place.

In earlier tests, we did not generate a new package-lock.json file (which holds the tree of installed packages) while disabling Helmet. Because it was not updated, Helmet was not removed completely, the test produced wrong results, and we were wrongly led to believe that Helmet was not the problem.

Conclusion

Although this was not the initial goal, we learned a lot about networking during this project, especially topics like setting up a server and the communication between database, client and server. Despite the roadblocks, we were very positively surprised by the AWS Cloud and Beanstalk. We can recommend Beanstalk if you want to get simple applications and websites into the cloud ASAP. As soon as we implemented security-related code, however, we had to make an additional effort that went beyond Beanstalk alone to get it running.

The development of the intranet into BeyondCorp

Aron Köcher, Miro Bilge

Only a few years ago, the way to exchange digital information like documents or pictures was to establish a physical connection between the participants. A USB stick was passed around the class to exchange music, you went to your friend's house to print some urgent papers, or a group of friends met to play games via LAN. With increasing access to the Internet, new solutions have emerged.
While some users are satisfied with sending files by mail, a company with multiple locations and a large amount of digital data requires a business solution.
With the Virtual Private Network (VPN), a solution was created that allows one or more participants to become part of another network. The connected member has full access to devices, data and services as if he were physically present. Instead of a real connection, a tunnel is built over public networks, which is why it is called "virtual private". The basic structure of a virtual private network is always the same. A VPN connection consists of two participants, a client and a server, which establish a connection. Depending on the protocol, the connection can be encrypted and use different layers.

Initially, dedicated lines and connections based on the data link layer were used to connect individual locations. With the help of Frame Relay, a permanent virtual link was established between the sites. This technology was replaced with the increasing shift from Layer 2 to IP-based network technology. Compared to a dedicated line, the financial costs of an IP-based link are much lower: only a one-time configuration and Internet access, which is usually available anyway, are required.
End users of VPNs usually do not come into contact with Layer 2 VPNs, because a rented dedicated line is set up in the background by the network operator and appears to the consumer as a physical connection. Therefore, due to current usage and relevance for the readers of this blog, we will only discuss IP-based VPNs in the following.

Besides different protocols with different encryption methods, there are three types of VPN:

Client to Client VPN

Here a connection between two clients is established. This is used, for example, to control a computer using TeamViewer, and it is the only connection type in which the complete message traffic is encrypted. However, it is limited to two devices.

Client to Site

With a Client-to-Site VPN, it is possible to connect a client to a remote network. This allows remote employees, for example those working from home during the COVID-19 crisis, to be a regular part of the company network with unlimited access to data and devices in the network. On the network side, this requires a VPN server to which the employee can connect after configuring a VPN client. The client software comes with the common operating systems and is also available as a mobile solution for smartphones. The message traffic is encrypted until it enters the network. This can also be used to protect privacy: web pages that are accessed via a VPN only see the VPN server, not the client. This allows the user to spoof his position, and unless logging is used on the server, he cannot be distinguished from other users. Furthermore, a man-in-the-middle attack in insecure networks is made more difficult. Attackers only see that the client has established an encrypted tunnel to a server. It is hard to draw conclusions about the services being called, even if they are not encrypted, because the different data protocols are repackaged in VPN frames and are therefore not recognizable.

Site to Site

If companies grow beyond one location, the question arises how employees at both locations get access to company data.
For smaller locations, a Client-to-Site solution is sufficient to allow employees to access data from the main location. If the second location hosts services, devices or data storage that are to be accessed bidirectionally, both networks must be reachable via VPN. For this purpose, a VPN server is set up in the main network and a VPN concentrator in the second. With this Site-to-Site VPN, all internal network connections are exchanged between the VPN nodes, and the two networks appear as one large network.


Limitations

Because all packets are tunnelled over the VPN interface, all network traffic and its speed depend directly on this connection. If the VPN fails, internal processes are disrupted and, depending on usage, the company's business might come to a halt.
A further point is the security of a VPN. Externally, the network is protected against dangers from the Internet by the firewall. With the VPN, a user becomes part of the network and has access to the devices and data contained in it. If an attacker overcomes the encryption of the VPN, the advantage of unrestricted access becomes its disadvantage.
Apart from the security concept ending at the VPN gateway and firewall, the VPN tunnel itself is not untouchable either. The number and size of the packets and the remote VPN endpoint make it possible to draw conclusions about the transmitted data.
In addition to security, providing services and files and extending the network to include external services requires complex configuration. The internal network must be divided into further subnets with different access rights, an adjustment that may be required for each customer and service. This demands active network management and can quickly become difficult to manage.

Network Access Control


There are various approaches to prevent the flexibility and options of a VPN, such as location and device independence, from becoming a vulnerability in the company network.
The difficulty lies in the fact that malicious software that bridges the VPN has access to the internal network. So the approach is to prevent malware from entering the network in the first place. To that end, administrators can grant access only to known devices and restrict the installation of drivers and programs on these devices. These restrictions must always be weighed against productivity and must not restrict the user too much. With increasing rights, such as required port sharing, controlling each individual device becomes more difficult. Moreover, the case of an already contaminated device is not covered. This is why Cisco published the first approach to moving network security away from the devices and into the network as early as 2003. With Network Access Control (NAC), all devices that want access to the network are subjected to a security check. NAC thus forms a further layer between the VPN and the network, which handles access to services and resources.

System Overview Network Access Control [17: https://blogs.getcertifiedgetahead.com/network-access-control/]

For the NAC system to grant access to a compliant device, "compliant" must first be defined in a policy rule. Depending on the NAC software and provider, the possible rules vary. With current anti-virus signatures and installed security updates and patches, Cisco created a basis for its approach. The network needs additional help to read out such information and to check whether the connected device is a new one. Installing an agent on the devices provides access to this data and, in addition, the possibility of automatically restoring a compliant state in case of non-compliance.
If a device wants to connect to the network, the NAC health server instructs the agent to read out the necessary data and checks it against the rules for the respective user group. If the status of the device does not match the rule set, the device is quarantined and cannot access the network.
The NAC server sends the deficiencies to the NAC agent on the device, which then tries to resolve them. This includes everything from simply installing updates up to removing programs and software. If the compliance of the device can be restored, the NAC server allows access to the resources. The remediation process should be as self-sufficient as possible, but can quickly become quite complex depending on the deficiencies and the user role. When creating the rules, different scenarios must therefore be considered. For example, if a customer needs access to shared resources but has an outdated operating system, the agent cannot simply upgrade it, yet in principle access to these resources cannot be denied either. The resource would have to be relocated to a subnetwork, which still leaves open the question of how this potentially risky resource is handled internally. Depending on the number of customers and resources, this process also becomes increasingly complex and difficult to maintain.
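As a purely illustrative sketch (the field names and policy structure are assumptions, not any vendor's API), such a compliance check boils down to comparing the data read out by the agent against the policy rule:

// illustrative compliance check; names are assumptions, not a real NAC API
function isCompliant(device, policy) {
    var signaturesCurrent =
        device.antivirusSignatureDate >= policy.minSignatureDate;
    var patchesInstalled = policy.requiredPatches.every(function (p) {
        return device.installedPatches.indexOf(p) !== -1;
    });
    return signaturesCurrent && patchesInstalled;
}
// a non-compliant device is quarantined and handed to the agent for remediation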

Software Defined Perimeter

Compared to Network Access Control, a Software Defined Perimeter (SDP) does not establish the connection until the device and user have been authorized and authenticated. At the time of the authentication process, the location of the resource is unknown because it is not registered in the DNS. This is why SDP is also called a Black Cloud, and it has many advantages over NAC. With the unique assignment of access rights and roles for each resource, segmentation of the network is no longer necessary: resources are only reachable for the respective user roles. This simplifies management by eliminating the need to create an additional subnet for each customer or service. The customer receives a user account and is assigned to the resource.

System Overview Software Defined Perimeter [14: https://procureadvisor.com/the-definitive-guide-to-software-defined-perimeter/]

If the customer now wants to access the resource, he first contacts the SDP controller, which confirms his identity and integrity via the user management. The user is then authorised and receives an authentication token. This token contains the resources the user can access. When the user accesses a resource, a VPN connection is established to the respective SDP gateway. This connection is set up and terminated automatically by a client software. At the SDP gateway, the user is again identified via his token and then gains access to that one resource.
Using SDP prevents or hampers Distributed Denial of Service, man-in-the-middle and code injection attacks. In addition, in most cases a successful attacker does not gain access to the entire network.
The use of a Software Defined Perimeter forms the basis of Zero Trust Network Access (ZTNA). No device, user or service inside or outside the network is trusted. Every single connection is encrypted, and no resource can be reached without prior authentication. Viewing each connection as a separate environment with individual security requirements creates a minimal attack surface. ZTNA is transparent for the user: he only has to log in once via the client and can then access his resources.

To set it up, all users and resources must be assigned a user role and a predefined risk profile. This categorization of services, users and devices means a lot of effort for the company. Once completed, the system can easily be expanded to include roles and guidelines.

BeyondCorp

Before we take a closer look at the BeyondCorp Remote Access business model, the following chapter first discusses Google's initial idea. In 2011, Google started to develop its own intranet away from the VPN and towards Google BeyondCorp.

Google's idea was to get rid of the privileged network with its single perimeter security and move to a more flexible solution similar to the Zero Trust model. An important core component was to evaluate access depending on the respective device and the respective user. For example, a user can be authorized to access a resource from his company laptop, while access to the same resource from his smartphone is not allowed. Furthermore, BeyondCorp is intended to make the network location irrelevant and to provide the same user experience everywhere. This means it should make no difference to the employee whether he works from home, from the company location or from a public Internet café (latency aside, of course). The same user experience also implies that this can only be achieved if secure access is possible for employees without a VPN.

Google’s BeyondCorp was built on the basis of these core components. To ensure these key elements are in place, every request is fully authenticated, authorized and encrypted no matter where it is made from.

Architecture

To realize Google's goals, the network architecture was redesigned. In the following, the individual architecture components are described on the basis of the diagram:

1) Within a Google building there is a privileged network, i.e. a network in which users are trusted, and an unprivileged network. The latter is, similar to an external network, not trustworthy at first sight. From a security point of view, users who are in the Google building and on this network could just as well be sitting in a public Internet café. Therefore, access from the Google building is equivalent to remote access. The difference is that requests from the unprivileged network can be made via a private address space.
Consequently, requests to the Internet run from the unprivileged network. If a user wants internal access to another part of the Google network, this is checked via an Access Control List (ACL).

2) All user requests, e.g. from the unprivileged network or to Google's enterprise applications, run through an Internet-facing Access Proxy. This proxy forces an encrypted connection between the connection partners. It can be configured specifically for each application and offers various features such as global reachability, load balancing, access control checks, application health checks and DoS protection.

3) The basic prerequisite for being granted access to the Access Proxy, from the unprivileged network as well as from the public network, is that the device has so-called "Managed Device" status. This status means that the device is actively managed by the company, and only these devices can access company applications via the Access Proxy. At the same time, Managed Device status implies that the company can track, monitor and analyze changes to the device. The goal is to be able to react dynamically to the security status of each device and allow or deny requests accordingly.
Technically, Managed Device status is realized by a certificate. Each device that holds this status is uniquely identifiable by its certificate. The certificate is renewed periodically and serves as a key confirming that the device information is valid. In order to obtain a certificate, the respective device must be present and correctly recorded in the Device Inventory Database (DID for short). On the device, the certificate is then stored in a TPM (Trusted Platform Module) or a qualified certificate store, i.e. depending on the platform either on the hardware side or on the software side.

4) The Access Proxy is fed by the Access Control Engine so that it can decide which requests from which user and which device it allows. Based on the Access Control Engine, the Access Proxy can act as a dynamic access layer. To provide the Access Proxy with this "advisory" support, the Access Control Engine itself has various sources of information at its disposal. From this data, both static rules and heuristics are derived; in addition, machine learning is used. Information relevant to the Access Control Engine can be, for example, the operating system version number, the device class (cell phone model, tablet, ...), access from a new location, the user or user group, the device certificate, but also other information and analyses from the Device Inventory Database.
For each request, the Access Control Engine then evaluates, based on the analyzed data, whether the security level established for the requesting device matches the security level required for the request. By determining the security level per request, it is also possible to separate parts of an application. For example: a user may be authorized to view an entry in a bug tracking system, but if he wants to update the status of the bug or edit the ticket, this request may be blocked because the trust in this user is not sufficient.

5) The Access Control Engine is in turn fed by a pipeline that extracts and aggregates the dynamic information.

6) BeyondCorp also uses single sign-on for authentication, similar to the classic Zero Trust model. The central user authentication portal is used to validate the primary access data; two-factor authentication was added in the same step. After validation, short-lived tokens are generated, which then form part of the authorization process for specific resources. Depending on the trust level of the resource, the authentication measures can be more or less stringent.
Since the administration of user groups and associated authorizations is relatively complex, for example because authorizations can change when a department changes, the user/group database is closely linked to the HR (Human Resources) processes. Consequently, if there is a new hire, a new role or responsibility, or someone leaves the company, these events are recorded in HR, and every change in the HR processes also triggers an update in the database. This ensures that the employee data is always kept up to date, while the effort required to maintain the database remains low.

7) Besides single sign-on, Google uses RADIUS servers for network authentication. A user's access via LAN or WLAN is assigned to the corresponding network via the RADIUS server, so that an attacker cannot attack the entire network, but only a segment. In Google's case, the RADIUS server assigns a managed device to the unprivileged network as soon as the device has authenticated itself using a certificate and an 802.1X handshake. Another advantage besides security is that network management is not done statically via fixed VLAN areas and switch/port configurations but can be handled dynamically. Other devices, for example those without certificates, are assigned to a guest network. In addition, in the case of an outdated device version, the RADIUS server can also move a potentially compromised device from the unprivileged network to a special quarantine network.

Architecture of BeyondCorp components. Own presentation according to [6: A New Approach to Enterprise Security (BeyondCorp)] 

As the architecture shows, the Access Proxy plays a central role in BeyondCorp, and since Google tried to reuse as much existing technology as possible, it did the same with the Access Proxy. It was based on the HTTP/HTTPS reverse proxies, so-called Google Front Ends (GFEs), which were already used in the front-end infrastructure and offered load balancing and TLS handshakes "as a service". These were subsequently extended into Access Proxies with several configuration options such as authentication and authorization policies. Since the Access Proxy is a central communication element, it supports OpenID Connect and OAuth as well as custom protocols that can be integrated. As a result, the user authenticates himself to the Access Proxy. If access is granted by the Access Control Engine, the request is forwarded to the backend service without any further credentials. There are several reasons for this. On the one hand, this increases security, since no credentials can be intercepted on the backend side. On the other hand, the Access Proxy remains transparent for the backend: if the backend service supports its own authentication, e.g. by credentials and/or cookies, confusion would arise if the proxy's credentials were also passed on to the backend service.
Nevertheless, the communication between Access Proxy and backend service must be secured. Therefore the internal communication takes place via HTTP over an encrypted channel. For this, Google uses an internal authentication and encryption framework called LOAS (Low Overhead Authentication System), which enables the service to trust all received data. The framework uses mutual authentication, meaning that both entities in a communication link authenticate each other. This also ensures that metadata is not spoofable. One advantage is that new features can be added to the Access Proxy, and different backend services can subscribe to them by parsing header fields.
The combination of the Access Proxy with Access Control Lists through the Access Control Engine also offers some advantages. For example, the central location of these components provides a uniform access point, which makes forensic analysis more effective: since logging is controlled centrally, a response to an attack can be rolled out not just for one service but directly for all backend services. Furthermore, enforcement policies can be managed centrally and defined consistently, so changes can be implemented more quickly. Another advantage is that backend developers do not have to worry about authorization. If the trust level of the service does not require further authentication measures, the developer can rely on users already being homogeneously authenticated. If this is not sufficient, the coarse approach can be refined by a fine-grained one. For example, if a database application requires an additional authentication measure, the service itself can integrate that authentication. In this way, the system remains maximally flexible to the needs of the respective service. The service only has to configure the Access Proxy correctly at the start to ensure that external communication between the service and the Access Proxy works.

Having shown the architecture, the question arises of how, from a client perspective, employees can access the network without VPN. Google's BeyondCorp answers this with a Chrome extension. All access, whether in the office or on the road, is handled through this access point. This is possible at Google because, in line with the internal company guideline "online first", the majority of applications are accessible via the web and the share of local applications is kept to a minimum.

The extension automatically manages a user's Proxy Auto-Config (PAC) files and then routes the user through the Access Proxy to the appropriate destination. When a user connects to a network, the extension automatically downloads the latest PAC file and displays a "Good Connection" icon. Since all requests from the BeyondCorp extension are routed to the Access Proxy, the user cannot communicate with devices that the Access Proxy cannot reach, for example the local printer at the employee's home. The extension's status setting provides a solution here. When the employee enters the printer's IP address in a new browser tab for configuration purposes, the request is sent to the Access Proxy along with all other private address space traffic; the routing request fails and the user receives an error. Customized 502 error messages were implemented to tell the employee that the extension must be switched to "Off:Direct". The user can then configure the printer and afterwards reconnect to the Access Proxy.

Infrastructure components

In the sections above we have often talked about different trust levels. In the following section, we take a closer look at trust tiers and how BeyondCorp structures its infrastructure elements.

Each resource is associated with a minimum trust tier necessary for access. The rule is: the lower the tier, the more sensitive the information and thus the higher the necessary trust. If an employee wants to access a resource, the first step is to check the trust level of the employee and his device, and a trust tier is assigned to the employee. Then it is checked whether the employee's trust tier meets the minimum trust tier of the requested resource. This has the advantage that maintenance costs, e.g. for support and productivity, of highly secured devices are kept low while usability is improved at the same time.
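As a purely illustrative sketch (the names and the numeric encoding of tiers are assumptions, not Google's actual API), the check performed per request boils down to a comparison like this:

// illustrative only: encode tiers so that a higher number means more trust
function mayAccess(userDeviceTier, resourceMinTier) {
    // access is granted only if the tier inferred for user and device
    // meets the resource's minimum tier
    return userDeviceTier >= resourceMinTier;
}

// e.g. a fully patched managed laptop inferred at tier 3 may access
// a resource requiring at least tier 3:
mayAccess(3, 3); // true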

On the architecture side, the Trust Inferer is responsible for classifying trust: it continuously analyzes the device status and thereby determines the trust tier. For this purpose, it uses the information of the Device Inventory Service, which in turn aggregates various sources of information (see figure below). If, for example, a laptop has not applied an operating system security patch, this may be less severe for a laptop with a lower trust tier than for one that was initially assigned a higher tier. Conversely, a laptop with a high trust tier could be temporarily downgraded because of the missing patch until the patch is applied. In this way, employees are always encouraged to keep their software up to date. If the trust tier has dropped to a minimum, consequences can also follow on the network side: a completely outdated laptop can be moved to a quarantine network until the device is rehabilitated. This limits its access to resources as far as possible and protects confidential information.

Architecture of the BeyondCorp Infrastructure Components [7: Design to Deployment at Google (BeyondCorp)]

BeyondCorp Migration

As we have already seen in previous sections, it is not easy to convert a company's own intranet the way BeyondCorp did, since a relatively large amount of restructuring is required, both on the network side and from the perspective of the overall architecture. The following section gives some advice on how to approach this and how Google implemented the restructuring.

First of all, it is important to realize that the intranet conversion is initially less a technical effort than a bureaucratic one. One must be aware that the conversion affects the entire company, including all employees, and the idea should therefore be communicated early. The goal is to get maximum support at all management levels, which also means that everyone in management must have understood the benefits of the restructuring for the company. Reducing the risk of attacks while at the same time improving productivity could be one argument. A risk table can help convey this, as shown in the following example:

(Risk table: own figure)

Once management has understood that the changeover makes sense, supporting processes can be set up through early communication, for example in the form of change management.

It is also important to be aware that the changeover is a lengthy process. It is only possible to renew incrementally, as many layers are affected, such as the network, security gateways, client platforms and backend services. Therefore, it makes sense to define migration teams for the different layers and to appoint a leader for each who coordinates with the leaders of the other layers.

Automatic transfer of employees to Managed Non-Privileged Network

Google's idea was to keep the administrative effort of transferring employees from the privileged to the unprivileged network as small as possible. For this purpose, a pipeline was developed that automatically moves users away from the VPN to BeyondCorp with the Chrome extension. The pipeline consists of three phases and starts with the logging mode: a traffic monitor was initially installed on each device. Each call from the privileged network is analyzed via an Access Control List (ACL) and classified as to whether the same call would have been possible from the unprivileged network, i.e. whether the same service would have been reachable via the Access Proxy. This was logged and recorded. The content of the ACL was stored centrally in a repository, with the source IP address identifying the user and the destination IP address identifying which service was not available. In this first phase it was possible to analyze relatively quickly which services were not yet connected to the Access Proxy but were in high demand among employees. From this, a prioritization list could be created for the order in which services should be attached. The logging mode ran until the following rule came into effect: if the employee could have accessed more than 99.9% of the content via the unprivileged network over 30 days, he is put into enforcement mode after an e-mail notification and with his consent. Enforcement mode differs from logging mode in that requests that could not have been served from the unprivileged network are captured and dropped. If an employee has again been able to reach more than 99.99% of his requests via the unprivileged network over a period of 30 days, he is transferred to the unprivileged network after an e-mail notification. If fewer than 99.99% of the requests are reachable from the unprivileged network, or the employee rejects the move, he is automatically downgraded back to logging mode. With this approach, more than 50% of all employees could be transferred automatically to the unprivileged network.
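As a small illustrative sketch (the thresholds are taken from the description above; the code itself is an assumption, not Google's implementation), the per-employee phase decision reduces to:

// illustrative only: decide the next pipeline phase for an employee's device
// based on the share of requests reachable via the unprivileged network
function nextPhase(phase, reachableShare, consents) {
    if (phase === "logging" && reachableShare > 0.999 && consents)
        return "enforcement";   // unreachable requests are now dropped
    if (phase === "enforcement" && reachableShare > 0.9999 && consents)
        return "unprivileged";  // move to the MNP network
    if (phase === "enforcement" && reachableShare <= 0.9999)
        return "logging";       // downgrade back to logging mode
    return phase;
}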

The pipeline for moving Google computers to the Managed Non-Privileged (MNP) network [9: Maintaining Productivity While Improving Security (Migrating to BeyondCorp)]

BeyondCorp Remote Access

In early 2020, Google launched BeyondCorp Remote Access, a SaaS solution designed to support companies, especially during COVID-19, in working securely from home without VPN access. The reason for launching it now as an alternative to VPN is the VPN bottleneck: due to the sudden shift from the office to the home office, many IT departments could not provide a sufficient or stable VPN service for all employees. Google heard from many customers that this made it impossible to access internal web applications such as customer service systems, software bug trackers and project management dashboards that would otherwise have been easily accessible via web browser from the company's own network.

As a result, BeyondCorp Remote Access was released as a zero-trust solution based on Google's own BeyondCorp system. In addition to the aforementioned advantage of providing a fast solution without VPN, Google promises that customers can, for example, easily access internal web applications. The proxy service also has enforcement policies that are checked depending on location, user and device. Google gives the following example of an enforcement policy in its blog post: "My HR managers who work from home with their own laptops can access our web-based document management system (and nothing else), but only if they use the latest version of the operating system and use phishing resistant authentication such as security keys."

Another advantage of BeyondCorp Remote Access is its rapid deployment. Since little on-premises technology is required and individual applications can be migrated incrementally, the Google service can be integrated quickly into the corporate structure. Google advises that during a pandemic the most important services should be connected first and further ones added step by step to keep employee productivity high. Network-side architectural changes and new security controls are largely avoided, since internal web applications can remain hosted in the same location; BeyondCorp Remote Access only takes care of the connection between application and employee. Finally, with the proxy service, a company can offload time-consuming deployment, maintenance and infrastructure management tasks to the cloud and simplify licensing, which also promises easy scaling, low latencies and redundancy.

Overview BeyondCorp Remote Access Architecture [12: https://medium.com/andcloudio/remote-access-with-beyondcorp-f3bedd1432f2]

How does BeyondCorp Remote Access work?

When the user tries to call up a web application, the access first goes to the Cloud Identity-Aware Proxy (IAP). In addition to load balancing and encryption, the IAP also takes care of authentication and authorization, using Google Accounts for this purpose. It is also possible to connect a local identity management system such as Active Directory. In this case, Google Cloud Directory Sync synchronizes user names with Cloud Identity, while passwords remain stored locally and SAML (Security Assertion Markup Language) is used as SSO to authenticate users against the existing local identity management system. The connection between client and proxy then works analogously to Google's internal BeyondCorp via a Chrome extension, which collects and reports device information that is constantly synchronized with the Google Cloud and can be stored in a device inventory database there.

Subsequently, IAM (Identity and Access Management) roles can be used during authorization to decide whether or not the user is granted access. Behind the firewall sits the IAP connector, which forwards the traffic secured by Cloud IAP to local applications. This is supported by DNS entries that create public domain names for the internal local apps and point them to the IAP proxy's IP address, allowing access to a locally hosted enterprise application. It is also possible to integrate Google Cloud apps and applications from other clouds.

Connecting BeyondCorp Remote Access and local web application [12: https://medium.com/andcloudio/remote-access-with-beyondcorp-f3bedd1432f2]
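The decision chain described in the last two paragraphs can be summarized in a short sketch. The IAM role `roles/iap.httpsResourceAccessor` is the role Cloud IAP actually checks for web access; everything else here (the inventory and binding dictionaries, the function names, the example identities) is a simplified, hypothetical stand-in for the real services.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str        # identity from Google Accounts or SAML SSO
    device_id: str   # reported by the Chrome extension
    app_host: str    # public domain name pointing at the IAP proxy

# Hypothetical stand-ins for the device inventory database and IAM bindings.
DEVICE_INVENTORY = {"laptop-42": {"os_up_to_date": True}}
IAM_BINDINGS = {("alice@example.com", "docs.internal.example.com"):
                "roles/iap.httpsResourceAccessor"}

def authenticate(user: str) -> bool:
    # Placeholder: Google Account check, or SAML SSO against the local IdP.
    return user.endswith("@example.com")

def forward_to_connector(req: Request) -> str:
    # Placeholder for the IAP connector forwarding traffic to the local app.
    return f"200 OK (proxied {req.app_host} for {req.user})"

def handle_request(req: Request) -> str:
    # 1. Authentication at the proxy.
    if not authenticate(req.user):
        return "401 Unauthorized"
    # 2. Device posture, looked up in the synchronized inventory database.
    device = DEVICE_INVENTORY.get(req.device_id)
    if device is None or not device["os_up_to_date"]:
        return "403 Device not compliant"
    # 3. Authorization via IAM role binding for this user and application.
    if IAM_BINDINGS.get((req.user, req.app_host)) != "roles/iap.httpsResourceAccessor":
        return "403 No IAM binding"
    # 4. Forward through the IAP connector behind the firewall.
    return forward_to_connector(req)

print(handle_request(Request("alice@example.com", "laptop-42",
                             "docs.internal.example.com")))
```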

To initially link the company's internal network traffic with Google Cloud and remote access, Google offers three solutions. The first is Dedicated Interconnect, a direct connection to Google in which the traffic flows from network to network, not over the public Internet. The second is Partner Interconnect, which offers more connection points through a supported service provider; here, too, the traffic between the networks is routed via the provider rather than the public Internet. The last variant is an IPsec VPN, with which the network is extended into a Google Cloud VPC (virtual private cloud) network, enabling private IP addresses as well.

Reservations about BeyondCorp Remote Access

While BeyondCorp Remote Access offers many advantages, it also raises some concerns, which are discussed below:

First, BeyondCorp Remote Access is currently limited to web- and cloud-based applications. In the long term, Google plans to support local (non-web) applications as well, but this is not yet possible. Another drawback is that each application must be integrated into the system individually; there is no generic solution that connects all applications at once with a single configuration. Google therefore recommends, in times of the pandemic, prioritizing applications by importance and connecting the most important ones first, incrementally.

Another point is the deep integration of the Google Cloud with the company network, which entails both technical and financial dependency. Technical, because for web applications moved behind the proxy, both control plane and data plane operate via Google; in the event of a technical problem, the company's administrators can do nothing to remedy it themselves and have to wait until Google gets the problem under control. In March 2019, for example, there was an operational disruption in the Google Cloud that would have made a company network connected in this way unreachable from the outside. The financial dependency should not be neglected either: if the entire company architecture is tied to the Google Cloud over time, the company also depends on Google's pricing policy, and if prices rise, moving to an alternative system will be very expensive and possibly not profitable.

Finally, data protection is an important issue. Depending on how sensitive the data is, a company must consider whether it should be linked to the Google Cloud at all. Since all queries run via Google's identity proxy, it is questionable whether every company wants to give Google such deep insights into its systems. The same applies to user identities: even if an Active Directory system is integrated, user names are still synchronized via the Google Cloud. Moreover, not all institutions are permitted to adopt BeyondCorp Remote Access; the HdM, for example, would not be allowed to connect students to the intranet via remote access because its SSO must not synchronize user names from LDAP.

Conclusion

In summary, a zero-trust approach makes sense in any case. Compared with VPN access and a firewall as a single perimeter, a zero-trust solution greatly simplifies security and removes the complicated administrative overhead of integrating mobile devices and cloud systems. Each access is evaluated not only on the basis of static authorization, but also in the context of the respective request, which allows a much more fine-grained decision on whether access is permitted with regard to time, place and device. BeyondCorp Remote Access is also very useful for small companies, especially in times of COVID-19, to enable easy and fast home-office access without VPN. However, the dependency on Google is a risk that a company must be aware of and evaluate in its own context. It may be worthwhile in the medium term to fall back on BeyondCorp Remote Access during the pandemic, but in the long term it is worth planning a strategy for setting up one's own zero-trust model.

Further Reading

https://gcppodcast.com/post/episode-221-beyondcorp-with-robert-sadowski/

https://cloud.google.com/solutions/beyondcorp-remote-access?hl=de

https://cloud.google.com/beyondcorp?hl=de

https://www.computerwoche.de/a/zero-trust-verstehen-und-umsetzen,3547307

Sources

[1]: Bhattarai, Saugat & Nepal, Sushil. (2016). VPN research (Term Paper). 10.13140/RG.2.1.4215.8160. (Accessed 31.9.2020)

[2]: Sridevi, Sridevi & D H, Manjaiah. (2012). Technical Overview of Virtual Private Networks(VPNs). International Journal of Scientific Research. 2. 93-96. 10.15373/22778179/JULY2013/32. 

[3]: Minnich, S. (2020, August 13). Heise Medien GmbH & Co. KG. Retrieved September 05, 2020, from
https://www.heise.de/download/specials/Anonym-surfen-mit-VPN-Die-besten-VPN-Anbieter-im-Vergleich-3798036

[4]: Helling, P. (n.d.). Was ist VPN? Retrieved September 05, 2020, from https://www.netzorange.de/it-ratgeber/vpn-bietet-sichere-verbindungen-auf-unsicheren-kanaelen/

[5]: Török, E. (2009, August 10). NAC-Grundlagen, Teil 1: Sicheres Netzwerk durch Network Access Control. Retrieved September 13, 2020, from https://www.tecchannel.de/a/sicheres-netzwerk-durch-network-access-control,2020365,3

[6]: Ward, R., & Beyer, B. (2014, December). A New Approach to Enterprise Security (BeyondCorp). 39(6)

[7]: Osborn, B. A., Mcwilliams, J., Beyer, B., & Saltonstall, M. X. (2016). Design to Deployment at Google (BeyondCorp). 41(1)

[8]: Luca Cittadini, Batz Spear, Betsy Beyer, & Max Saltonstall. (2016). The Access Proxy (BeyondCorp Part III). 41(4)

[9]: Peck, J., Beyer, B., Beske, C., & Saltonstall, M. X. (2017). Maintaining Productivity While Improving Security (Migrating to BeyondCorp). 42(2)

[10]: Victor Escobedo, Betsy Beyer, Max Saltonstall, & Filip Żyźniewski. (2017). The User Experience (BeyondCorp 5). 42(3)

[11]: Hunter King, Michael Janosko, Betsy Beyer, & Max Saltonstall. (2018). Building a Healthy Fleet (BeyondCorp). 43(3)

[12]: Gunjetti, D. Kumar. (2020, August 28). Remote Access to Corporate Apps with BeyondCorp. Medium. Retrieved September 13, 2020, from
https://medium.com/andcloudio/remote-access-with-beyondcorp-f3bedd1432f2

[13]: Keep your teams working safely with BeyondCorp Remote Access. (n.d.). Google Cloud Blog. Retrieved September 13, 2020, from https://cloud.google.com/blog/products/identity-security/keep-your-teams-working-safely-with-beyondcorp-remote-access/

[14]: ProcureAdvisor. (2019, February 14). The definitive guide to Software-defined perimeter. Retrieved September 13, 2020, from https://procureadvisor.com/the-definitive-guide-to-software-defined-perimeter/

[15]: „ZTNA”-Technologien: Was ist das, warum jetzt erwerben und wie wählt man die richtige Lösung? (n.d.). Retrieved September 13, 2020, from https://www.zscaler.de/blogs/corporate/ztna-technologies-what-they-are-why-now-and-how-choose

[16]: Problem mit Google Cloud: Massive Störung bei mehreren Google-Diensten. (n.d.). Retrieved September 13, 2020, from https://www.handelsblatt.com/technik/it-internet/problem-mit-google-cloud-massive-stoerung-bei-mehreren-google-diensten/24413414.html

[17]: Darril. (2015, March 19). Network Access Control. Retrieved September 14, 2020, from
https://blogs.getcertifiedgetahead.com/network-access-control/