This is an attempt to provide an overview of the topics of “Secure Systems”, a seminar held during the summer term 2016 at Stuttgart Media University (HdM). Presentations were given and blog entries were published on our new MI blog. We were quite lucky with the chosen topics, as some of them made headlines only a few weeks after the presentation. Examples were the Dark Web, wireless car keys, side-channel attacks, operating system security, software-supported racism and, last but not least, the threat of attacks on critical infrastructures like power grids and airports.
Open research questions about the topics are discussed as well; the blog entries can be found at blog.mi.hdm-stuttgart.de.
The seminar structure was roughly as follows:
1. current topics and developments (what is happening in IT-Security, capability approaches compared to ACLs)
2. infrastructure security (IT-Sec in critical infrastructures like power grids, car production etc.)
3. IT-Security problems in other areas and branches (Satellites, company infrastructures, Law, Data Sciences, Movies and Literature, Dark Web, Botnets etc.)
4. Ways to improve Security:
– Basic problems (psychological factors)
– New languages (Rust, Elixir)
– New operating systems and containers (MirageOS, ChromeOS)
– New protocols (secure end-to-end messaging)
The “current topics” sessions were supposed to shed light on vulnerabilities and attacks currently happening, and to bring the desperate state of IT-Security to the attention of the participants. Embedded into the discussions on attacks were damage-control concepts like capabilities and language security issues. My goal was to show that IT-Security is unable to create secure systems and to protect critical infrastructures, simply because the underlying base of operating systems, languages and access control has too many weak points.
Let’s start with critical infrastructures. I gave a presentation on the possibilities of a “blackout” in Germany and Europe, causing major disturbances and even loss of life. At the time this looked like an exercise in academic research, but only three months later the German Minister of the Interior, Mr. de Maizière, published new regulations and recommendations for large catastrophes in Germany (store water and food supplies etc.). Initially discussed under the influence of the latest immigration events and terrorist attacks in Germany, it quickly became obvious that the most realistic threat covered by the recommendations was exactly such a large blackout.
I have given a number of talks on IT-Security in critical infrastructures, and I am not convinced that the current movement in the grid community towards more centralization and IT-based control is going in the right direction. The current topics and vulnerabilities discussed at the beginning of each seminar session showed exactly how vulnerable we really are, and that IT-Security is simply trying to build a dome, like the one over Chernobyl, over something that ultimately cannot be secured.
To add some more reality I invited an IT-Security manager of a large German car maker, where we had done a thesis on how to secure production. We discussed the results very openly and the audience got some understanding of the complexity of the problems there. Countless external connections are needed right into the production zone to control machines and processes. Some machinery is so old that no upgrades are possible anymore, or the vendors don’t allow upgrades. The use of an IDS is problematic because dropped or missing bytes bring down robots and require a complicated and long reboot phase. Corporate IT-Security rules and processes are geared towards office IT and cannot be used in production if you want to keep it running. There are very interesting cultural differences between office people and people in production, and knowing and understanding those is fundamental for a proper risk assessment.
Production nowadays is constantly under threat of DDoS attacks, ransomware etc. Even scarier are threats like long-term sabotage of parts through fine-grained changes to the production process, which is now a remotely controlled real-time activity anyway.
An important result of the infrastructure sessions, combined with the current vulnerabilities, was that IT-Security has a very hard time securing critical infrastructure and that new approaches are needed to achieve resilient systems. The rest of the seminar either investigated other branches where IT-Security is used or tried to find new ways to secure systems.
In the following paragraphs I will summarize and discuss the presentations given in the seminar. I will start with presentations that dealt with IT-Security problems in other areas and branches: mobile devices, satellites, car security, film and fiction.
IT-Security in Film and Fiction, Jörg Einfeldt, Merle Hiort
This is certainly not your everyday topic in IT-security. Remember: IT-Security is this boring topic where total nerds talk in a language that is even more remote than IT itself. And who understands IT?
So why did we have a presentation on how IT-Security is reflected in film and fiction? Which actually means in some kinds of documentaries as well as in completely fictional contexts (like an action movie)?
It is important to understand that a) security is the reason for and a defining task of the modern state, and b) IT-Security is a reality in society (it is a profession, it makes money, laws are created for it and so on), but it is also a topic of discussion and research, and of course also a topic for newspapers and fiction. And being part of society, its reflection in film and fiction has effects on IT-Security: it can be seen as a danger, a necessity or a fantasy. This can result in better acceptance or, in the worst case, in public aversion. IT-Security can have an impact on politics and politics can have an impact on IT-Security. And the way it is presented, both “objectively” and in fiction, will have an impact both ways.
The authors investigated how IT-Security is presented in fiction and compared that view with reality. They also raised the question whether the role of IT-Security in fictional works can create an understanding of security in the real world.
Three main areas were researched: the “transparent human”, hacktivism and critical infrastructures.
The first area, loss of privacy and the transparent human being, is a rather old one in literature. According to the authors, books and movies use scenarios which are quite realistic in the post-Snowden era. Corrupt surveillance, privacy vs. security, trust in companies and/or the state, and moral questions about the means of surveillance are all covered. But the big question is: do people realize how close the movies are to reality, or does it stay just fictional, suspense-creating content? Do the scenarios cause some effect in people watching the movies and reading the books?
Hacktivism is also rather old. Fiction paints a picture of an IT cowboy who gets lost in hacking activities. The way hacking is presented is necessarily quite simplified. And there are things in reality that support the outlaw image of hacktivism: Anonymous, the Dark Web, hackers etc.
The systems shown in those movies are insecure and can be hacked (sometimes even by kids). Is this hacktivism really a sign of protest, or is it just the cowboy topic brought into the modern world? And how do regular people see the systems displayed there? Just as an IT landscape, or as a place to project all kinds of fears and fantasies, simply because everything seems to be possible?
Critical infrastructures based on IT have only recently begun to enter mainstream media. An example is the book “Blackout”, which tells the story of a large-scale power breakdown, based on scientific studies. The emergency response proposals recently published by the German government may put some more emphasis on the topic, as well as continuous reports on cyberwar and attacks on public infrastructures like airports (just recently in Austria). Again the question: what do people watching those movies take home? Most people will not have the necessary background to understand the IT-Security problems behind them. But they will have to vote for political parties with specific agendas in that area.
In a few days the movie “Zero Days: World War 3.0” will open in cinemas. It is a documentary that deals with Stuxnet and other APTs in the context of their use for cyber-war activities. And just now the Austrian government accused Turkey of attacking the airport in Vienna with cyber weapons. Literature and reality seem to be extremely close in this case. But what is the effect on people?
The authors give no answer to this question, and that is probably fine, because answering it would require a detailed empirical study of both content and reception. But at least some guesses are allowed.
For me, the representation of IT and IT-Security in fictional contexts is mostly rather superficial and driven by suspense. I doubt that people take away something useful from those movies, frequently because they lack the IT background to understand how realistic some scenarios really are. Surveillance is typically displayed as a total phenomenon, driven by hidden forces or, even more magical, an artificial intelligence with a strange voice (:-). But this is exactly what people do NOT experience in real life (yet), and therefore they miss the small steps towards it: payback cards, e-money, video cameras everywhere.
We could have spent a complete term just on the questions raised here, and it would have been very interesting to dive deeper into this topic. At least we can watch “Zero Days” soon…
Bring your own device, Mona Brunner, Maren Gräff, Verena Hofmann
After spending many years in the highly controlled IT environment of a Swiss bank, I have to admit that “Bring Your Own Device” (BYOD) made me cringe initially. Bringing and using your own device used to be grounds for immediate termination. Now it looks like it is no problem for companies if employees use their own devices together with company services and data. How come?
The bad things first: there are endless numbers of mobile devices out there. Facebook uses a test environment with 2000 devices for just ONE application! And the use cases are difficult too, ranging from using the laptop at home and in the company to storing company data and credentials on devices and using services from anywhere. Malware can be carried into the company easily, and important data can leave the company silently without going through tracked channels. What happens when devices are lost or contracts are terminated? How are copyright and licensing issues handled?
Some helpful facts: the case Apple vs. FBI has shown that device encryption, at least on Apple devices, really seems to work, and credentials might be protected. But that is just one vendor of mobile devices. Android devices using Qualcomm ICs seem to be unable to protect credentials properly.
Surprisingly, there is even a professional enterprise scale solution by Microsoft for the BYOD problem. It seems to be a mixture of restrictions and operational guidelines and I don’t even want to know how much the introduction in a large corporation might cost. It deals with responsibilities, spam and malware handling, protection for all devices, regulations for employees and much more. What happens if my private device is lost or damaged while working? If it damages something in the company? Just clarifying all those issues will cost endless hours of management meetings.
Restrictions are certainly an important part of any solution for BYOD. The Microsoft solution restricts the use of mobile devices to web/cloud based services only.
Recent years show an interesting trend away from BYOD in companies, and I am just assuming here that this has something to do with an increased understanding of the risks and costs behind BYOD. I believe it finally boils down to the protection of data and credentials on those devices. When only access to cloud services is possible, we can skip some of the malware problems.
Two aspects deserve more research regarding BYOD: a) how can credentials and data be protected when we have to assume loss of devices and possible data theft by employees too?
b) how can terminal server software help with BYOD? This could be a solution against data theft because only a video stream reaches the device.
The Attack on the German Parliament IT-System, Christian Lang
Again we were up to date on IT-Security challenges when we picked this topic. Recently, attacks on servers of the Democratic Party in the US have raised concerns about the security of the political infrastructure. This is especially dangerous when we talk about possible manipulation of voting machines, which at least in the US are many years old (did somebody say XP here?) and in most cases have no paper trail. But as election rigging seems to be a favorite pastime of the US parties anyway, this might not be such a big deal (:-). See https://www.wired.com/2016/08/hack-brief-fbi-warns-election-sites-got-hacked-eyes-russia/ and the latest Crypto-Gram by Bruce Schneier for more information. On general election manipulation in the US: http://www.deutschlandradiokultur.de/wahlmanipulation-in-den-usa-klassenkampf-mit-anderen-mitteln.1270.de.html?dram:article_id=361594
The presentation started with a big surprise: there was a huge difference between a careful analysis of the technology and context of the parliament IT and the way things were presented in the press. The media were quick to accuse the parliament infrastructure teams of negligence, and some even asked whether the IT systems had to be replaced completely. The presentation showed that technology alone is not enough for an IT-Security analysis.
The most important factor for an investigation of the attack is the legal background, which drives both the infrastructure design and the options for getting help from outside specialists or state authorities. This legal background demands the protection of the privacy and independence of representatives and forbids control by certain state authorities like the secret services, the BSI etc.
In other words: the representatives need to be able to do what they want. In reality this led to countless ways for external access by employees into the system (TeamViewer etc.). The representatives can install whatever they want and delegate authority as well. It is extremely hard to provide a common and secure infrastructure in such a context. It is very different from corporate IT and dwarfs even BYOD approaches with respect to complexity and insecurity.
The attack was a classic, starting with the takeover of single machines, gathering information and then working its way deeper into the system and into other offices. Interestingly, the attack was based on APT software typically used by or developed by states, and given the structure of the system, defense was futile in most cases.
According to the report, the parliament IT team was informed by friendly intelligence services about data flowing out of the German Bundestag.
The responses were classic too: shutting down servers, sanitizing machines and accounts, shutting down external C2 servers, blacklisting bad web sites etc. The last point could already be problematic with respect to the privacy and independence of representatives.
A bit more interesting were some of the longer-term countermeasures like smart cards for authentication, hardening of systems but not software, increased logging and patching, and better control of third-party devices (see BYOD). I am not sure how much of this will really be accepted by representatives, as it might conflict with their rights (can you just forbid the use of TeamViewer?).
The machines finally did NOT get destroyed, even though there is a possibility of deep infections of UEFI, Intel System Management and other hardware controllers. My guess: it was deemed impossible to prevent future attacks anyway, and a replacement seemed a measure that would be ill received by the general public due to the costs involved.
The presentation did not include a proposal for a future system that could be more secure. Some research questions in this context: would a non-Windows-based infrastructure help? (A Linux-based infrastructure had been replaced a couple of years ago.) Or is the overall model wrong? The parliament is NOT a company, and therefore enterprise software, models and thinking do not help.
Should all common services be web/cloud based? Will authentication really help or is better access control needed?
Can we isolate workgroups better? How can tracking and logging work when all connections are encrypted? Are there any other infrastructures similar to the parliament IT (universities come to mind)? Does the German parliament even NEED a common infrastructure? How much of a danger is Active Directory really (20,000 accounts)? Some IT-Security people claim that only 30 minutes are necessary to compromise AD.
Finally: attacks on the political infrastructure are relatively new, but the reaction of the media both in the US and in Germany showed a large potential for REAL damage, especially to public opinion. Put into the context of other attacks on public services causing data leaks, loss of privacy etc., it can be said that the government and political infrastructure is currently not secured well. Depending on the political situation, this can cause arbitrary problems, up to political unrest.
The Dark Web, Dennis Jonietz, Chris Uhrig
Another presentation where we hit pay dirt was the one on the “Dark Web”. How can somebody provide services, and clients use them, without either side learning who and where the partner really is? The answer lies in two patterns used in the TOR network. The first one is the famous onion pattern, where every message travels over several servers until it reaches its destination. Only the destination is able to decode the message; the intermediate servers only forward it to the next intermediate server or the final destination. Only the first intermediate server knows the sender, but it cannot read the message.
This requires the sender to know the receiver in advance, but it hides the message’s route and content. The second pattern, used by TOR hidden services, relies on further intermediate servers: so-called introduction points, where services offer access, and rendezvous points, where clients place requests for services. Clients can place one-time passwords that servers use to authenticate themselves against the client requests.
This is also nicely described in the TOR web getting started guide.
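The layering idea behind the onion pattern can be sketched in a few lines of toy code. This is a hypothetical illustration only: XOR with a fixed key stands in for real per-hop encryption, which TOR negotiates with proper asymmetric cryptography. Each relay peels off exactly one layer and learns only the next hop, never the full route or the message.

```python
# Toy illustration of onion routing: each relay peels one layer.
# NOT real cryptography -- a real implementation uses per-hop key
# exchange and authenticated encryption.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric 'cipher' (XOR with a repeating key)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap(message: bytes, route: list) -> bytes:
    """Encrypt the message in layers, innermost layer first.

    route = [(hop_name, hop_key), ...] in travel order; each layer
    names the NEXT hop, so a relay never sees the whole route.
    """
    payload = message
    next_hop = "deliver"          # terminal marker for the destination
    for name, key in reversed(route):
        payload = xor_cipher(next_hop.encode() + b"|" + payload, key)
        next_hop = name
    return payload

def peel(onion: bytes, key: bytes):
    """A relay removes its own layer: it learns only the next hop."""
    plain = xor_cipher(onion, key)
    hop, _, rest = plain.partition(b"|")
    return hop.decode(), rest

route = [("relay1", b"k1"), ("relay2", b"k2"), ("dest", b"k3")]
onion = wrap(b"hello", route)

hop, onion = peel(onion, b"k1")    # relay1 learns only "relay2"
hop, onion = peel(onion, b"k2")    # relay2 learns only "dest"
hop, payload = peel(onion, b"k3")  # hop == "deliver", payload == b"hello"
```

Only the destination ever sees the plaintext; every earlier relay sees just an opaque blob plus the address of the next hop.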
A glance at TOR and hidden services had been long overdue for me, but the time… Who would have thought that only a few weeks after the presentation, the attack at the Munich shopping center would bring the Dark Web into the headlines of all newspapers in Germany? The reason was that the attacker had bought the weapon used in the attack on the Dark Web. Suddenly the Dark Web became a fantastic place in the minds of many people, where everything illegal seemed to be possible. This is a pattern that also showed up in the presentation on IT-Security in movies and literature. It looks like IT systems allow people to project all kinds of fantasies onto them, simply because the majority of people do not understand a thing about them (remember the famous sentence: any sufficiently advanced technology is indistinguishable from magic?).
Luckily those who saw the presentation were better equipped to tell fantasy from reality. Is it really so easy to perform illegal acts in the Dark Web and stay anonymous at the same time?
The presentation showed clearly how hard this really is and where the final problem lies.
First, achieving and keeping anonymity in the Dark Web is very hard: you need a special RAM-based OS (the use of which is described in the blog post for the presentation). You need bitcoins (several wallets). You need to be extremely careful about which applications and tools you use. You need to keep your regular life and your Dark Web existence separate ALL THE TIME. And you need to use TOR and VPNs correctly.
Given all this, can you stay anonymous? It all depends on side effects. A side effect in this context is something that reaches from the Dark Web into the real world. Typical examples are orders of goods and their delivery. In the case of the Munich attack, it turned out that the seller of the weapon got caught because of other fake orders placed by law enforcement and because he bragged about the deal.
Using the Dark Net is a lesson in trust establishment and its problems in the online world. You just don’t know who is offering something. So if you order, don’t be surprised when law enforcement accompanies the delivery to your door, unless you can use intermediate drop points that put an insulating layer between yourself and the shipment.
All connections to the real world have the potential to link your Dark Web identity to your real identity.
Can’t you rely on ratings in the Dark Web? Yes, shops there use rating systems like regular web shops. But do you know the raters? Sybil attacks are easy.
Besides side effects, the biggest problem for your anonymity is the need for absolute discipline in the use of all the tools mentioned. You need bitcoins, but you need to keep your wallets separate at ALL times. You need to use mixers to transfer between wallets, but even those are not safe and need to be changed frequently. According to Dark Web stories, even the owner of Silk Road was unable to transfer the bitcoins from his Dark Web business into his real life. Or he did not dare to do so.
Last but not least, the exit nodes are a constant danger to your anonymity. Here the connection to the final destination is made, and anything that is unencrypted can lead to de-anonymization. Spy nodes try to identify hidden services by gathering information flowing through the network (Cory Doctorow, https://boingboing.net/2016/07/01/researchers-find-over-100-spyi.html).
And the protocol for onion routing has some more traps: hidden services should never allow clients to suggest different intermediate servers. Otherwise the hidden service could be enticed to connect to a specific host under the control of the client. The paper on botnets shows that nowadays even the encrypted traffic patterns of botnets going through TOR can be used to detect darknet users.
But the final problem for users, besides discipline and side effects, is simply the creation of trust online. Many things are offered on the Dark Net: weapons (we saw that you can really buy some there), drugs, hacking software and even hitmen. But almost all hidden services are very careful to prevent and exclude anything that has to do with child pornography. A clear sign that this is an area where a) clueless noobs as potential clients and b) law enforcement are waiting for each other.
A research question in this context could be whether the TOR principles could be used generally in the distributed web to achieve better privacy (mail, messaging etc. come to mind), or in social networks like the proposed Safebook. Given the slow speed of TOR, a better involvement of TOR users and their nodes could be helpful. The Munich attack and its connection to the TOR network have shown how fast online anonymity comes under attack too. Another question would be how offline contacts could be used as proxies to establish safe online contacts; in other words: can you find trusted guides to connect yourself to the Dark Web?
Side-Channel Attacks, Daniel Grießhaber
Even algorithms are bound to physics once they run. And while they run, the physical base generates side effects which carry information. This information can be used to detect algorithms, keys or data during processing. It is unclear whether such side effects can be avoided completely.
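A classic, easy-to-reproduce example of such an information-carrying side effect is the running time of a naive comparison. The sketch below is illustrative only: it counts comparison steps instead of measuring wall-clock time, but the leak it demonstrates is exactly what timing attacks exploit, and it is why constant-time comparison functions exist.

```python
# A naive comparison returns as soon as bytes differ, so its running
# time leaks how many leading bytes of a guess are correct.
import hmac

def naive_equal(secret: bytes, guess: bytes):
    """Early-exit comparison; returns (equal?, number of steps taken)."""
    steps = 0
    if len(secret) != len(guess):
        return False, steps
    for a, b in zip(secret, guess):
        steps += 1
        if a != b:           # early exit -> data-dependent running time
            return False, steps
    return True, steps

secret = b"hunter2"
_, fast = naive_equal(secret, b"Xunter2")   # wrong first byte: 1 step
_, slow = naive_equal(secret, b"hunterX")   # wrong last byte: 7 steps
# An attacker who can observe the timing can grow a correct guess
# byte by byte instead of brute-forcing the whole secret.

# The standard fix: a comparison whose time does not depend on the data.
ok = hmac.compare_digest(secret, b"hunter2")   # True
```

The same principle (observable physical cost depending on secret data) is behind power-analysis and acoustic attacks; only the measured channel differs.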
The presentation showed some historical attacks, like exploiting the differences between typewriters, capturing EM signals from CRTs, or the acoustic analysis of algorithms running on laptops. Neither basic signal protection nor shielding provided protection from those attacks.
Sometimes side-channel attacks use extreme conditions in devices to cause information leaks. Freezing RAM ICs, for example, can prevent the deletion of the information contained in those ICs.
How important/dangerous are those attacks really? Frequently it takes a complicated experimental setup to see the effects. But we have seen that the distance from a source at which we are still able to read the signals has increased considerably over time (e.g. Bluetooth). Sensors are becoming much more sensitive and the granularity of measurements much finer. This means that it is ultimately only a matter of time and effort to exploit even very tiny side channels. A recent example is the attack on HTTP connections as described by Heise: http://www.heise.de/security/meldung/Sicherheitsforscher-kapern-HTTP-Verbindungen-von-Linux-3292257.html
Actually I am not so sure if this is a side-channel attack at all.
Research questions: let’s assume that our instruments become two orders of magnitude more sensitive. What kinds of attacks become possible? What kinds of channels exist? Do we know them already? About the identity of things: when we look closely enough, aren’t all things in the world unique? That would mean we cannot really hide the identity of things. Will future APTs use side-channel attacks?
Can we manipulate systems via side-channel attacks, i.e. go from passive information stealing to actively changing systems?
Smart Home Security and Usability, Lena Krächen, Tobias Schneider
Frequent and current software updates in case of security problems are right now the most important means to keep client systems safe. We are even getting used to automated updates without user interaction.
Smart home equipment provides challenges to the update logic of the industry. There are few standards in this area. Users are frequently unable or unwilling to perform updates, but still feel uncomfortable when they are not consulted before updates happen.
The presentation identified two main reasons for the current problems: new features and security fixes are mixed within updates, and in general the update process is both opaque and unusable for normal users.
The industry seems to oscillate between expecting either too much or too little from users.
Currently, manual updates are frequently hidden behind several menu layers. They leave no logs and communicate no information to the user.
Automatic updates without user interaction are problematic, especially when they cause failures or unexpected behavior. On the other hand, they are necessary to ensure that all systems are patched. Unpatched systems are a danger to all other components in a smart home environment. I would say that this point weighs heavily in favor of automatic updates.
Research questions: is the split between feature and security updates feasible? Should a generic update process for all vendors exist? How should information about system changes be presented to users? How much can we expect from home users?
And from the perspective of infrastructure security: how can we prevent a wrong update from disturbing millions of systems and thereby bringing other systems like the power grid down (the blackout scenario from the book)? Who initiates an update? Can devices start searching by themselves? All devices, or are there master devices which search for updates? How long should the cycle between update searches be? How do firewalls cope with updates? Should devices communicate updates to firewalls? Are firewalls necessary in smart homes? Do we need separate sub-nets in smart homes? Is a smart home just like a small or medium company with respect to IT-Security? Can users prevent data leaks? Can users control what kind of data is collected and shipped, during updates or constantly?
Treating smart homes just like small or medium-sized companies, as the update logic essentially does, could finally turn out to be a big mistake. Many of those open questions were asked a long time ago, and we did not get real answers. Smart homes probably need a new definition of the borders between user, device and industry, with new regulations, as not everything can be handled technically. Without new regulations, users will end up as victims of the industry. Unfortunately, the German data protection laws are more and more seen as an obstacle for the industry, even by the heads of state. (http://www.heise.de/newsticker/meldung/Zwei-Jahre-digitale-Agenda-Cloud-hoert-sich-an-wie-Stehlen-3314846.html)
Web App File Upload Vulnerabilities, Thomas Derleth
It looks like file upload is still a very critical point for many websites. And websites are an interesting target for many attacks: C2 servers for botnets, malware distribution, ransomware etc. all prefer public websites for their distribution.
The presentation showed how quickly a backdoor shell can be placed in the file system of a server. Missing MIME-type validation alone can make this possible. Dangerous extensions need to be blocked, and the place where files are stored should not allow execution.
While the talk mostly focused on securing your own code against file-upload vulnerabilities, it also demonstrated the use of special scanners for content management systems, like wpscan for WordPress instances.
Missing from the talk was the use of damage-reduction techniques in case of a validation error. We just cannot assume that there will be no more errors in validation algorithms. What happens then? In that case, the rights of the executable decide the fate of the system. If the rights are restricted (e.g. the server itself has few rights, or the directory has been excluded from execution by some sandbox (SELinux, jails, capabilities etc.) or by Java security policies), then the damage will be minor or none at all. If the executable runs with admin rights, all hope is lost.
This is a very simple thing to do, but many sites still ignore damage control.
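The combination of validation and damage reduction can be made concrete with a minimal sketch. This is a hypothetical helper, not tied to any framework; the whitelist and upload directory are assumptions. The client-supplied file name is discarded entirely, the extension is checked against a whitelist, and the file is written without an execute bit into a directory that should itself live outside the web root.

```python
# Sketch of defensive file-upload handling: whitelist the extension,
# never trust the client's file name, store without execute rights.
import os
import secrets

ALLOWED = {".png", ".jpg", ".jpeg", ".pdf"}   # assumed whitelist
UPLOAD_DIR = "/var/uploads"                   # assumed: outside the web
                                              # root, ideally mounted noexec

def store_upload(filename: str, data: bytes,
                 upload_dir: str = UPLOAD_DIR) -> str:
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED:
        raise ValueError(f"extension {ext!r} not allowed")
    # Generate our own name -- the client's name never touches the disk.
    safe_name = secrets.token_hex(16) + ext
    path = os.path.join(upload_dir, safe_name)
    # O_EXCL prevents overwriting; mode 0o600 sets no execute bit.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    return path
```

Even if the extension check is ever bypassed, the missing execute bit and the noexec storage location act as the second, damage-reducing line of defense.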
Research questions: how can we solve file upload in a way that takes developer errors out of the picture? How can we automate the damage-reduction techniques needed for file upload? Do software engineers understand enough about system administration and operations to understand the problems? I am not really sure about the last point, and we will discuss it on one of our next days on DevOps and development in the winter term.
Cross-Site Scripting (XSS), Mario Erazo, Sven Lindauer
No, not another input-validation problem! How can this still happen nowadays? Doesn’t every developer know about XSS and how to prevent it? Surprisingly NOT!
The presentation started with an explanation of the four basic forms of XSS: persistent, reflected, DOM-based, and self-XSS through social engineering. Interestingly, even the latest client frameworks like Angular allow breaking out of the sandbox by overwriting constructor functions.
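The reflected variant and its standard mitigation, output encoding, fit into a few lines. This is a minimal sketch; the greeting functions and the payload are invented for illustration, with the server-side templating reduced to plain string formatting.

```python
# Reflected XSS in miniature: user input interpolated into HTML.
import html

def greet_vulnerable(name: str) -> str:
    # Any markup in `name` becomes live HTML in the victim's browser.
    return f"<p>Hello, {name}!</p>"

def greet_safe(name: str) -> str:
    # Output encoding: angle brackets and quotes become inert entities.
    return f"<p>Hello, {html.escape(name)}!</p>"

payload = "<script>steal(document.cookie)</script>"   # hypothetical attack
assert "<script>" in greet_vulnerable(payload)     # script survives intact
assert "<script>" not in greet_safe(payload)       # script is neutralized
assert "&lt;script&gt;" in greet_safe(payload)     # rendered as plain text
```

The fix is a single function call, which makes it all the more puzzling that the vulnerability is still so common; the problem is remembering to apply it at every output point, not the mechanism itself.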
Research questions: it is not easy to come up with the right questions on XSS. Do we know what it is and how it works? Yes. What gets compromised with XSS? Everything. What can we do? Looks like nothing, as developers simply do not use the features provided. Do we need more XSS analysis? No, because the result is always missing or wrong input validation.
These facts have puzzled me for a long time. I believe that there is something fundamentally wrong with web applications and how we build them. Restricted rights and capabilities do NOT help here, because the damage is done on the client side, not on the server. There is a trust relation between client and server which gets abused by a third party. The real questions are:
What can a browser do to detect XSS and prevent the passing of session cookies in that case? Can it detect that a script is attached to a URL? And then, even more fundamental: client and server communicate using a certain language. This language is defined by the application developers, but it is not formally defined. There is no grammar that defines correct sentences, literals etc. Parsing of the language is done implicitly in code, and that is where the mistakes are made.
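One way to act on this observation is to make the grammar explicit and reject everything that does not parse, instead of scattering ad-hoc checks through the code. A minimal sketch, where the query format and its grammar are invented for illustration (here as a single anchored regular expression standing in for a real parser):

```python
# Treat the expected input as a formally defined language: accept only
# strings that match its grammar, reject everything else up front.
import re

# "Grammar" for a hypothetical user=<name>&page=<number> query string.
QUERY_GRAMMAR = re.compile(
    r"\Auser=[A-Za-z][A-Za-z0-9_]{0,31}&page=[0-9]{1,4}\Z"
)

def parse_query(q: str) -> dict:
    if not QUERY_GRAMMAR.match(q):
        raise ValueError("query does not match the defined grammar")
    user_part, page_part = q.split("&")
    return {"user": user_part.split("=")[1],
            "page": int(page_part.split("=")[1])}

parse_query("user=alice&page=2")        # -> {'user': 'alice', 'page': 2}
# parse_query("user=<script>&page=1")   # raises ValueError: no markup
#                                       # can even enter the application
```

With an explicit grammar, anything that could carry script simply fails to parse; the whitelist replaces the endless blacklisting of dangerous characters.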
Docker Security, Patrick Kleindienst
Containers – the hottest topic in large-scale computing in 2016, I guess. But are they secure? The presentation (and two blog entries) shows the basic architecture of Docker containers, the Linux features they are based on, and the maintenance and setup needed. (See the presentation for namespaces, the union filesystem etc.)
Containers are so popular because one can get ready-made images for certain applications and add one’s own layers on top of those images without changing them. The Docker daemon plays a central role: it can be controlled locally or remotely through Docker clients. This way, quotas can be set for containers, and namespaces, mounts etc. can be configured.
The first security problem is the fact that there is only one UID 0 (root) on a machine, and container and host share this ID.
But it gets worse: every member of the docker group has access to the Docker daemon, which operates only one namespace. In other words: the Docker daemon is not multi-tenant enabled. In my eyes this currently prevents different clients from running containers on one and the same machine. It is even a problem for running test and development stages together with production on one machine.
Setuid-root binaries, if used, present another problem for security, as they allow privilege escalation if not tightly controlled. Capabilities are possible but need to be configured. Tools like SELinux are also available and can secure the execution of a container even further.
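One mitigation for the shared UID 0 problem is the Docker daemon’s user-namespace remapping (the userns-remap option), which maps container root to an unprivileged UID on the host. A small Python sketch (an illustrative helper, parsing the format of /proc/&lt;pid&gt;/uid_map on Linux) shows how such a mapping can be checked:

```python
def root_is_remapped(uid_map: str) -> bool:
    """Return True if UID 0 inside the container maps to a non-root
    UID on the host, i.e. a user namespace remapping is in effect.

    uid_map has lines of the form: <inside-uid> <outside-uid> <count>
    (the format of /proc/<pid>/uid_map on Linux).
    """
    for line in uid_map.strip().splitlines():
        inside, outside, count = (int(field) for field in line.split())
        if inside <= 0 < inside + count:   # this mapping range covers UID 0
            return outside != 0            # host-side UID for container root
    return False                           # UID 0 not mapped at all

# Default Docker daemon: container root IS host root.
assert root_is_remapped("0 0 4294967295") is False
# With userns-remap enabled: container root is an unprivileged host UID.
assert root_is_remapped("0 100000 65536") is True
```

Remapping blunts a container escape, although it is not compatible with every workload (e.g. some volume permission schemes).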
Escaping is a well-known security problem with VMs and containers, and it is frequently caused by bad system configurations. Remember to split networks, memory access etc. or prevent access to them. Containers can attack each other and perform all kinds of attacks like credential stealing, spoofing or DoS attacks.
Image forgery is another danger that can be prevented with the docker system.
At the end of the presentation it was shown that careless mounting of a file system allows a container to leave its confinement with some effort. This raised a very interesting question: how much of a system admin/engineer must a software developer be in order to avoid such mistakes?
Research questions: How will the relation between containers and operating systems develop (see MirageOS)? This will have a major impact on container security. How can we make the Docker daemon multi-tenant enabled? Running containers on top of VMs would provide isolation, but negates the advantages of lightweight containers.
Elixir, Yann Philippczyk
With the next presentations on new languages we started the section on “new ways” towards secure systems. Most of the current vulnerabilities we looked at were caused by malware. Malware enters systems through overflows of various kinds (stack, heap). An endless number of techniques have been developed to prevent those overflows – all to no avail. The programming language is still the most important factor in those vulnerabilities, which are not really IT-security problems (real IT-security problems deal with encryption, authentication, protocols, smart contracts etc.). Malware simply exploits software bugs caused by bad languages.
Another class of vulnerabilities is frequently caused by concurrency problems, which are often a result of the shared-memory concurrency provided by bad programming languages.
Elixir is based on Erlang, a language and runtime well known for its reliability and failure tolerance. Some properties are:
functional language, distributed, concurrency via message passing, dynamic type system. Elixir runs on top of a virtual machine and guarantees type and memory safety. There is no memory sharing and therefore also no locking and no concurrency-related race conditions. Lightweight processes structure applications easily. Supervisor processes watch lower processes and – since those hold no state – can easily restart dead processes. Only temporary state is held in tasks. Few side effects are possible.
Hot-code swapping is a big advantage in critical infrastructures and embedded control. All in all Elixir seems to be a good candidate for future IoT and Smart home/Industry 4.0 systems and infrastructures.
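As a rough illustration – not Elixir itself, but a Python analogy under that caveat – the actor model can be sketched with one thread per “process” and a queue as its mailbox. All state stays private to the actor, so no locks are needed:

```python
import queue
import threading

def counter_actor(inbox: queue.Queue, replies: queue.Queue) -> None:
    """All state (count) lives inside this one thread; the only way to
    touch it is to send a message. No shared memory, hence no locks and
    no race conditions - the essence of the Erlang/Elixir model."""
    count = 0
    while True:
        msg = inbox.get()
        if msg == "increment":
            count += 1
        elif msg == "get":
            replies.put(count)
        elif msg == "stop":
            break

inbox, replies = queue.Queue(), queue.Queue()
threading.Thread(target=counter_actor, args=(inbox, replies), daemon=True).start()
for _ in range(3):
    inbox.put("increment")
inbox.put("get")
assert replies.get(timeout=5) == 3   # messages are processed in order
inbox.put("stop")
```

In Elixir the runtime adds what Python cannot: millions of cheap processes, supervision trees, and transparent distribution across nodes.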
The most important research questions obviously are:
Can the performance of a functional language based on message passing compare with C++?
What is the memory footprint of VM plus application? How small can embedded systems be built to still run Elixir?
How does the language behave in large distributed systems?
How usable is the language also with respect to security? Only usable languages can support secure code.
Which components might cause the most problems?
Is code-injection possible? In what language is the VM written?
How is hot-swapping used in critical infrastructures?
Will Elixir become mainstream? Accepted? Is its functional nature a problem?
The answers to those questions will tell us if Elixir is a contender for the new programming language that is needed to write secure systems.
Rust – The Language, Jakob Schaal
Like with Elixir, the presentation of Rust took a look especially at the three areas mostly responsible for unsafe code and use in embedded control: memory management, thread handling and performance.
The presentation started with a detailed introduction to programming features. Rust uses the “ownership” model for references to memory and carefully tracks all references. It prevents mixing mutable and immutable references and allows only one reference at a time that could modify a memory cell. I have to admit that this puts a lot of programmer attention on memory management and leads – at least for me – to some cumbersome constructions in the language.
But the language requires this model: it has no virtual machine and no garbage collector, yet still wants to provide type and memory safety. The result is performance on par with C++, without the overflow weaknesses of that language. In the philosophy of C++, Rust uses static typing.
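To illustrate the ownership idea, here is a toy Python model (only an analogy – Rust enforces all of this at compile time, with zero runtime cost): once a value has been moved, the old handle becomes unusable.

```python
class Owned:
    """Toy model of Rust's move semantics: after a value is moved,
    the old handle is dead and any further use is an error. Rust
    catches this at compile time; here we catch it at runtime."""

    def __init__(self, value):
        self._value = value
        self._moved = False

    def move(self) -> "Owned":
        if self._moved:
            raise RuntimeError("use after move")
        self._moved = True            # invalidate the old owner
        return Owned(self._value)     # ownership transferred

    def get(self):
        if self._moved:
            raise RuntimeError("use after move")
        return self._value

a = Owned([1, 2, 3])
b = a.move()                  # ownership transferred to b
assert b.get() == [1, 2, 3]
try:
    a.get()                   # the old owner is no longer usable
except RuntimeError as e:
    assert str(e) == "use after move"
```

Because at most one live owner can mutate the value, whole classes of aliasing bugs (double free, use after free, data races) are ruled out by construction.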
Some comments on the basics: type inference and fixed data sizes are nice. Functions can be of higher order, and macros compensate for the lack of a variable number of parameters. There are structures, interfaces (called traits) and whatever else is needed for advanced programming.
Memory management gets especially interesting when concurrency enters the game: when several references point to the same memory and those references are mutable (they can change the memory). This typically results in memory corruption and wrong de-allocation of memory. Where memory would be shared, Rust copies or moves the data to prevent corruption. This causes some syntactic juggling with multi-threaded code, but keeps the program safe from overflows and corruption.
Usability is finally understood as a basic requirement for secure programs. Rust uses a rather conservative syntax and requires brackets for all if statements. Implicit casts are getting more and more outlawed (for a reason), and Rust does not allow them.
Threading: Rust enforces the use of locks when data is shared between threads and copying or moving cannot solve the problem.
Rust is surely more familiar to the C/C++ camp than Elixir or even OCaml (see the MirageOS presentation). Its chances for acceptance are a bit higher for that reason – and because it needs neither a VM nor a garbage collector.
Many of the research questions for Elixir do not apply to Rust. There is no question about performance. The ownership concept is safe, and the runtime system is probably small and efficient enough for use in embedded control. Elixir, on the other hand, looks more modern (functional) and more elegant because of the message-passing concept that is also successfully used in Go.
For Rust I have only one research question: given the memory- and type-based vulnerabilities of the last years, which ones would have been prevented by Rust? I am thinking about Heartbleed, Apple’s “goto fail” bug, the OpenSSL bugs and others.
Keyless Go(ne) – Vulnerabilities of keyless cars, Antonia Böttinger, Andreas Gold
Oh my goodness, could we have been more current than with this topic? Literally only days after the seminar finished, a storm of reports on the problems with keyless cars showed up on TV and in the newspapers all over the world.
Before the presentation I had no clue that nowadays even mid-size cars can be operated without keys. Opening the car doors and starting a car without a key seems to be standard with practically all car makers, and it looks like they all use the same type of system, if not even the same components. The simple communication between a wireless key and the car is enough to open the doors and start the car. This can be called an invitation to a man-in-the-middle (MITM) attack if not further protected.
The attack is quite simple and has been shown on TV several times: attackers bring a transceiver close to where the car key really is (frequently just behind the main door of the house) and another one close to the car. Then they can open the car and drive away. Once the car is started and loses the signal, it will not turn the engine off, for safety reasons.
Unfortunately, all of this is old hat. Years ago Patrick Nisch wrote a short paper on car security, and this type of attack is already well documented there.
The late reaction of the media is what really comes as a surprise here. Only now (possibly after the diesel scandal?) did the media discover several quite unsettling facts about keyless go(ne):
First: countermeasures that can be performed by users lead to much worse usability. Isolating the receiver in special boxes or destroying the battery is either cumbersome or can have other side effects.
Second: The car companies uniformly deny any problems and are unwilling to take countermeasures (now this is again no real surprise…).
Third: Car insurance companies give car owners a very hard time once a car has been stolen without any visible evidence. According to the authors, some owners are still fighting with their insurance even though they had theft coverage.
Again, none of this really comes as a surprise. Traditionally, users are left on their own when IT systems are compromised, no matter how poorly the systems were programmed or operated. Loss of important credentials through websites, data loss through ransomware etc. are all common threats without much hope for the end users. Politics simply ignores those threats as well.
Some possible countermeasures:
The keys could be programmed to be inactive for a certain time after closing the doors, but this does not exclude the scenario from above. A real solution would be for key and car to know the real distance between them, e.g. by measuring signal transit times.
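The distance-bounding idea can be sketched in a few lines of Python (the numbers are illustrative): since radio signals cannot travel faster than light, the round-trip time puts a hard lower bound on the key-to-car distance, and the extra processing delay of a relay shows up as hundreds of metres.

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def within_range(round_trip_seconds: float, max_distance_m: float = 2.0) -> bool:
    """Distance-bounding check: the measured round trip bounds the
    distance from below; a relay attack inevitably adds processing
    and retransmission delay, which looks like extra distance."""
    distance_m = round_trip_seconds * SPEED_OF_LIGHT_M_PER_S / 2
    return distance_m <= max_distance_m

# Key right next to the car: ~13 ns round trip corresponds to ~2 m.
assert within_range(13e-9) is True
# A relay adds microseconds of delay, i.e. apparent hundreds of metres.
assert within_range(2e-6) is False
```

The practical difficulty is measuring nanoseconds reliably in cheap automotive hardware, which is presumably why manufacturers have not deployed this.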
My guess is that the car makers will continue to ignore the problem until most cars are semi-autonomous and can stop by themselves when the original key is out of reach.
Research questions: Is this really a comfort feature given the facts? Do users understand the danger behind the feature?
Why is it possible that industry and politics simply ignore the problem? Why is there no place where people can report those problems and get help with non-functioning software?
Do stolen cars report their position and if not, why?
How can an owner come up with a proof of theft?
Keyless go(ne) is in my opinion just one case of many (e-banking, ransomware) where citizens are left completely alone while industry and politics deny all responsibility. This is the way the software industry has been working for many years, and there are no signs of change yet. The so-called diesel scandal has once again shown the tight relations between government and industry – and the damage stays with the citizens.
From a wider distance: the car industry is just one more industry that is now heavily based on IT. But this IT has inherited all the problems of the PC era, which we will find in the future in our cars, homes, clinics, power grids, water supplies, and so on.
Can Software be racist or sexist? Natali Bopp
I have learned a lot during this seminar, but it was this presentation where some previously unclear and unrelated bits and pieces in my mind suddenly “fell into place” and I had a major “wow” effect.
The presentation started with definitions of racism and sexism. And when I saw the definition of racism as “superiority of the own race”, I realized that many people in Germany might not like immigrants but should not be called “racist” right away: just not wanting immigrants in Germany does not imply belief in the superiority of one’s own race. It can even be rationally based on fears about one’s own job, increasing criminality (even very light crime like theft) or wage dumping, especially in struggling regions or with low-income jobs. It was a rare case which showed the power of definitions up front – and that we are not using them enough.
The next example looked like an obvious case of racism: social networks like Twitter allow the manual tagging of pictures and add another set of tags automatically through machine learning applications. It happened several times that pictures of African-American people got automatically tagged as “ape” or “animal”. These taggings are done automatically, seemingly without human intervention, and they can also happen with white people wearing face paint or extreme tattoos. The tagging clearly depends on the quality of the training data, and it is exactly there where discrimination of minorities starts: applications trained on less data for minorities automatically make more mistakes with them! Are those differences in training data just coincidences? We get a different feeling when we read that e.g. Google shows extreme hiring numbers skewed against minorities.
The second lesson learned here was that non-discrimination means treating minorities equally well – by providing enough training data to the learning algorithm and by avoiding huge hiring differences. Using less training data (because it is about minorities) produces skewed algorithms.
The next example added some intercultural lessons. Two baby pictures – a black baby and a white baby – were tagged as “black baby” vs. just “baby”. Is this evidence of racial prejudice? I guess the tagging depends on cultural factors: do you live in a state with almost exclusively white people? Then a black baby is something special. In other countries it would be just a baby.
Tagging is done by crowd-sourcing. How “neutral” can this be? Does it not always reflect the cultural background of the people tagging? International sites might even benefit from the diversity in taggers. But how can you control the quality of the tagging?
Economics is a crucial factor in the next lesson, learned from advertising. An experiment showed that an ad for a high-paying job was shown three times more often to male visitors than to female visitors. The female visitors saw far more ads for lower-paid jobs. And don’t forget: your Google profile contains age and sex!
Is there an obvious discrimination happening through software? It depends on the point of view. Clearly, women might argue that they are treated unfairly by the software. Those who pay for the ads and those who implement the algorithm argue that by favoring male viewers the ads are much more likely to be successful, as males are more likely to work in high-paying jobs and also to accept them more readily. So economics is one major factor here, but if we look closely, we see much more: women are less likely to react to those job offers because they carry some handicaps: baby breaks, lower education etc. The real lesson from this case is that social discrimination gets reflected in software algorithms for economic reasons. It is the status quo that keeps replicating itself through software. From an evolutionary viewpoint one might argue that the environmental conditions have formed the software like a DNA. And that DNA turns around and generates new things that fit exactly to the existing environment. And if the environment discriminates against minorities or women, the new things generated will do so too.
And there is some more to this example: the machine learning algorithm “learns” the discrimination, because Google and others track the success of ads and the behavior of users. And the decision to show an ad or not does not look at individual data: it just draws conclusions from aggregate data.
Not sexist but racist is another case from advertising. A company that sells data on people with a criminal record to the public paid for ads that were shown whenever a “typically black” name was mentioned somewhere. The ads went like: “First name… arrested?”
In Germany this might not be legal.
It all comes together in the next example: software used at US courts to predict the probability of a person committing further crimes in the future was shown to carry an extreme bias against African-American persons. By now we know that we don’t have to assume racism on the part of the software developers (there might be some racial prejudice in some cases, but it is not even necessary for the argument). They might have just taken well-known rates and put them into code. Or the probability might have been “learned” through big-data analysis. There is perhaps no doubt that African-American people have a higher recorded rate. But is it correct to put this into software which will make judges deny probation to those people very often? Lawrence Lessig said it clearly: code is law! The software denies the suspect individual treatment; instead, aggregate data are used against him or her. This alone is discrimination in my point of view, regardless of the other errors in the rate itself. Empirical data show that only a small fraction of those people really do commit more crimes in the future.
The last consequence of this approach could be predictive software, which uses DNA, face or gestalt comparisons, behavioral patterns etc. to imprison people BEFORE they have done anything. That is the fundamental problem behind predictive software no matter how it got created!
Another lesson learned was that the makers of socially active software are unwilling to discuss the algorithms used. This fits well with the “public-private partnership” model used for prisons in the US. And machine learning tends to just reinforce existing discrimination – the software acts like a self-fulfilling prophecy.
Let’s take a short look at machine learning itself: representatives in that area frequently claim that no theory is necessary with respect to big-data results. Due to the large scale, patterns in data become very obvious and can be detected and used without a theory behind them. Action instead of proof is the goal of those algorithms. And by doing so, it does not matter that those algorithms learn discrimination and apply it later on again. In other words: those algorithms – even though they were derived by data-science methods – are far from non-discriminating. The fact that an algorithm has been “learned” automatically does not mean that it does not discriminate against minorities!
The truth behind this lesson can be seen in the last example.
The presentation ended with a discussion of Microsoft’s AI “Tay”, which learned racist behavior from its users within days and had to be pulled from the net. Tay answered the question which race should die with “I think half black”. Tay got its “training” from its users – and started to replicate discrimination quickly.
Lots of research questions can be formulated about software and discrimination:
Do we even realize that software discriminates? What if it just replicates the opinion of majorities? Do we realize how skewed police checks in trains or cars really are against certain people? In the future it will more and more be software that guides the police officer in the selection of possible targets. How many social decisions will be delegated to software? Is it legal and just to draw conclusions from aggregate data about specific individuals? Does big data hide behind its supposedly “theory-less” approach? How do we control software algorithms in private companies?
The talk ended with the famous three robot laws by Isaac Asimov. The first law demands that robots – and software is nothing else – do not hurt human beings. This was not the case in the examples above. And the US military has started talking about autonomous kill robots on battlefields (and I guess the inner cities of mostly black communities would fit as well).
The Security of Drones, Ina de Marco, Lisa Möcking
Drone models and uses are constantly increasing in number and features. The presentation starts with an overview of drone technology and speculates about new applications in the future. With respect to Germany, a big question will be their domestic use within the country’s borders. Military drones, at least, were initially designed for use on battlefields and outside of one’s own country.
The German military currently uses drones only for surveillance and intelligence purposes in international conflicts.
That raises the question whether military drones can be called defensive weapons. As currently used by the US, the answer is a clear NO. And that is why this question is deeply political for Germany.
Right now military drones are used in hidden and undeclared war zones like Pakistan or the Arabian peninsula. There is probably no legal basis for this use; it is justified with the war on terrorism. But the SKYNET example shows how questionable the selection of potential terrorist targets really is. Thousands of innocent civilians have been killed already – they are considered collateral damage.
All this makes German investments in military drones rather questionable.
From a technical point of view, drone security is not a given either. It has been shown that drones are vulnerable to hacking attacks, GPS disruption and decryption of their communication. DDoS attacks are an issue as well.
While military drones are currently operated by humans, it is a clear goal of the US government to make them autonomous in the future. Such drones carry heavy weapons and would be a preferred target in cyber-war scenarios.
The legal status of drones is still quite unclear. Some drones need special licenses, and there are special regulations regarding their use around airports and other buildings.
An important question is certainly whether drone technology will be used by terrorists. Or should we rather say: when? The technology itself seems not very complicated, and recently most parts needed for a drone were built with a 3D printer in Russia.
This could change the power in asymmetric wars considerably.
Drones raise a lot of ethical and political questions. Just like robots, they tend to become autonomous, and they will be equipped with weapons that kill autonomously. Their use for civil purposes within a country is still in an experimental phase. Police forces demand drones for mass surveillance, and companies like Amazon and DHL plan to use them for shipping goods. When Isaac Asimov wrote his famous robot laws in the 1940s, few people would have thought that those rules would become vital within their own lifetime. Drones and robots are no longer science fiction. They will shape the 21st century at its very core.
The authors avoided premature judgment of this development, which is the right thing to do given the fast changing development in this area.
Machine Learning in Secure Systems, Claudius Messerschmidt
While machine learning got some bashing in the previous presentation, companies like Google are hoping to use ML in almost every aspect of their work in the near future. Even special hardware is being built for support.
Could ML help with detecting attacks in large-scale networks? The main problem of firewalls and intrusion detection systems lies in the human part of the solution: humans are needed to configure and control the installation, to detect and evaluate anomalies, and to act on the findings. The job would be to detect attacks while preventing false positives as much as possible.
The presentation first runs through some examples of ML use in business and then explains the methodology behind ML. Supervised and unsupervised learning are explained, and an example of a detection system is given that combines both methods by allowing tags from specialists.
The success rate of this system is quite high, and there is hope that more automatic monitoring and evaluation will happen.
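The unsupervised half of such a detector can be sketched in a few lines of Python (a deliberately naive z-score model over one metric, far simpler than any real intrusion detection system):

```python
import statistics

def train_baseline(samples):
    """Learn what 'normal' looks like from unlabeled traffic samples."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomaly(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the
    mean. The threshold is the knob that trades detection rate against
    false positives - exactly the trade-off discussed above."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

# Requests per minute observed during normal operation.
normal_traffic = [95, 102, 98, 101, 99, 103, 97, 100, 96, 104]
baseline = train_baseline(normal_traffic)
assert is_anomaly(100, baseline) is False   # ordinary load
assert is_anomaly(500, baseline) is True    # possible DoS or scan
```

The sketch also makes the attack surface obvious: whoever can poison the samples used for the baseline controls what later counts as “normal”.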
The weak spots of the system are again the training data and the training phase. Attacks on the training data can prevent proper functioning later on and lead to compromised systems.
Research questions: how do we evaluate and validate those systems, given the huge amount of data they usually operate on? We can only insert test attacks and check the recall rate.
How well can those systems cope with new kinds of attacks? Will we see self-adjusting systems? True self-learning after the training phase?
What are the costs of false positives in critical infrastructures? Do we dare to tie immediate actions to results from ML systems?
WhatsApp encrypts, Martin Kopp, Jonas Häfele
Right within the time frame of the seminar, WhatsApp announced end-to-end encryption for its users, based on solutions from Open Whisper Systems. The presentation gave an overview of the technology used and tried to find loopholes and vulnerabilities.
The basic concept is quite simple and based on PKI. The local WhatsApp client generates public and private keys. The public ones get stored on servers and exchanged with recipients. A sender uses the receiver’s public key to encrypt the message and their own private key to generate digital signatures. This way only intended receivers can read the message, and they can verify that the sender really was the author of the message.
This technology successfully excludes man-in-the-middle attacks, and for point-to-point connections it is also quite simple.
But private and secure messaging gets much more complicated when two features are required. The first one is support for group communication (the “fan-out problem”). Members of the group – and only they, and only while they are members – should be able to read the messages. Frequently, PKI technology is only used to create a shared symmetric key which gets distributed to the receivers; this allows better performance. For group communication, this usually requires a new round of key creation when a member leaves the group, as otherwise that member would still be able to read the messages.
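The rekeying rule can be sketched as follows (illustrative Python, not WhatsApp’s actual sender-key protocol): whenever a member leaves, a fresh group key is generated so the old key stops decrypting new messages.

```python
import secrets

class Group:
    """Toy model of group messaging with one shared symmetric key.
    In a real system the key would be distributed to each remaining
    member over the pairwise PKI channels."""

    def __init__(self, members):
        self.members = set(members)
        self.key = secrets.token_bytes(32)   # shared symmetric group key

    def remove(self, member):
        self.members.discard(member)
        # Rekey: the leaver still knows the old key, so a new one is
        # generated for the remaining members only.
        self.key = secrets.token_bytes(32)

group = Group({"alice", "bob", "carol"})
old_key = group.key
group.remove("carol")
assert "carol" not in group.members
assert group.key != old_key   # messages after the rekey are dark to carol
```

The cost is visible too: every membership change triggers a key distribution round, which is exactly why the fan-out problem makes group messaging expensive.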
The other feature that introduces much more complexity into the protocol is called “perfect forward secrecy”. It demands that, should the keys of participants be compromised later on, it is still impossible to decrypt previous messages. This clearly requires constantly generating temporary keys for every round of messaging. WhatsApp supports this feature with pre-calculated keys on both sides.
Forward secrecy requires several keys, e.g. one-time keys and session keys, to generate a unique encryption for every message. Clients install three keys on the server, which distributes them to partners who want to send a message. The sender generates a session key pair. For every message a new key is generated from a key chain, which gets updated afterwards. The first key is derived from the root key with the help of temporary information.
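A simplified sketch of such a key chain in Python (inspired by the symmetric-key ratchet of the Signal protocol; the real derivation uses HKDF and additional inputs):

```python
import hashlib
import hmac

def ratchet(chain_key: bytes):
    """Derive a fresh message key and advance the chain key.

    Different constants for the two HMAC derivations ensure a message
    key cannot be used to recompute the chain key, and the chain only
    runs forward: compromising today's state reveals nothing about
    yesterday's messages (forward secrecy)."""
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return message_key, next_chain_key

chain = hashlib.sha256(b"shared root secret").digest()
keys = []
for _ in range(3):
    message_key, chain = ratchet(chain)
    keys.append(message_key)

# Every message gets a unique key; old chain states are discarded.
assert len(set(keys)) == 3
```

Both sides run the same deterministic ratchet, so they derive the same per-message keys without ever transmitting them.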
Heavy-weight data (media) get encrypted with a symmetric key and carry a MAC. The WhatsApp server cannot decrypt those data. The symmetric key, the MAC key and the hash of the media get transmitted to the receiver, together with a path to the file on the server. This information is of course also encrypted.
Now that a message can be encrypted and transmitted securely, we need to ask one more question: how can a sender verify that a certain public key really belongs to a specific receiver? If the sender gets this wrong, somebody else will be able to read the message if they can get to it. The traditional answer to this question was so-called certificates – something regular users were never able to understand or use properly.
WhatsApp’s solution relies on QR codes. The code gets generated from a public key, and a potential sender can verify its information by out-of-band means (e.g. calling the receiver). But how many people are really going to verify keys?
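The idea of such a fingerprint can be sketched in Python (an illustrative format – WhatsApp’s actual safety number is computed differently):

```python
import hashlib

def safety_number(public_key: bytes, groups: int = 6) -> str:
    """Turn a public key into a short, human-comparable number.
    Two users compare these numbers out of band (or scan each
    other's QR code); a mismatch reveals a substituted key."""
    digest = hashlib.sha256(public_key).hexdigest()
    chunks = [str(int(digest[i:i + 8], 16) % 100000).zfill(5)
              for i in range(0, groups * 8, 8)]
    return " ".join(chunks)

alice = safety_number(b"alice-public-key-bytes")
bob = safety_number(b"bob-public-key-bytes")
assert alice != bob                                        # keys differ
assert safety_number(b"alice-public-key-bytes") == alice   # deterministic
assert len(alice.split()) == 6                             # easy to read aloud
```

The security of the whole end-to-end scheme quietly rests on this step: if nobody compares numbers, a server substituting keys goes unnoticed.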
How about anonymity of WhatsApp messaging? Well, preserving the anonymity of communication was a non-goal for WhatsApp, and it shows in the treatment of metadata: they are available, and it is no secret to the server who communicates with whom. The use of other services by the smartphone client further de-anonymizes users and allows data aggregation.
The handling of stored data shows another vulnerability: they can be saved in the cloud. See Micah Lee, https://theintercept.com/2016/06/22/battle-of-the-secure-messaging-apps-how-signal-beats-whatsapp/: “If you choose to back up your phone to the cloud — such as to your Google account if you’re an Android user or your iCloud account if you’re an iPhone user — then you’re handing the content of your messages to your backup service provider.”
Research questions: The first and most obvious is about the security of this solution. Can WhatsApp read your messages? The answer depends on certain assumptions being guaranteed. First: the WhatsApp client generates the private and public keys for a user. Those keys are open to the application, and theoretically they could be transported to the server without the client ever knowing about it.
Should backdoors be available for law enforcement (see the “going dark” debate)? The German government just announced the funding of a new state authority whose job is the decryption of messages from WhatsApp etc. (http://www.heise.de/newsticker/meldung/BND-und-Verfassungsschutz-planen-Millionen-Projekte-gegen-Terror-3316145.html). Is this a feasible approach? Some research into the protocol from Whisper Systems has shown no obvious problems with the encryption algorithms. That means attacks would have to target the keys. Malware could try to recover the private keys of a receiver, or messaging providers could be forced by law to recover specific keys. Anyway, while the security of the protocol is quite good (perhaps a bit too complicated for my taste – I would have given up on perfect forward secrecy), the key handling completely depends on implementation features and cannot be considered safe.
What could make the keys safer? They should never be able to leave the system, and that means a smartcard-like feature that can use the key for signing but does not allow read access to it.
Could we also hide the metadata of communications better? One way would be the use of intermediate nodes (the onion router comes to mind). Without intermediates, WhatsApp can always trace communications.
Why did law enforcement not complain loudly about the new encryption feature in WhatsApp? Well, I believe that they quickly saw the weaknesses in the key area and either forced WhatsApp to offer a way to get to the private key of a receiver, or they found ways to get the key by themselves.
Psychology of Security, Marc Stauffer, Malte Vollmerhausen
Daniel Kahneman’s book “Thinking, Fast and Slow” provides a wealth of information for security people. It destroys misconceptions about our ability to judge risk properly and not fall into simple traps. I read the book about three years ago, and I still remember many examples from it that just blew me away. Just go ahead and read the blog post or check out the presentation (which is rather long and detailed and certainly excellent). I will only comment on some research questions and say one thing that took me by surprise: part of the presentation was a short test about the famous biases described in the book. The “halo” effect, e.g., makes us misjudge probabilities simply due to some striking features that overshadow the base rates. We believe that a woman wearing thick glasses, homemade jackets and her hair in a bun has studied library science and not economics, even though the numbers of students in both professions should tell us differently. Some other questions in the test were about risk avoidance etc.
And still, even though I had read the book and talked about it quite a bit, I was very tempted to give the wrong answers in the test. I decided NOT to think hard about the questions and just follow my instincts. I fell into many traps doing so. This shows how strong those biases really are. Security solutions which try to change human behavior have a hard time for this reason!
The first and most important research question would be to figure out how the behavioral laws detected by Kahneman could be applied to build robust technical or social systems. The second would be to apply those laws to crisis scenarios and try to estimate human behavior.
ChromeOS, Bennie Binder
One important part of the Secure Systems seminar was to look out for new technologies which might solve long outstanding security problems. ChromeOS certainly falls into this category. It is a far from perfect attempt to provide a low-cost device to a huge audience with very limited needs. It provides a runtime platform for just three applications: Browser, Media Player, File Explorer – and perhaps in the future Android apps.
This does not sound like a very interesting system, but it fits the needs of millions of people in the US already.
And it has some nice security features, the first of which we have already mentioned: restricted applications. Restriction is a general security measure, and ChromeOS applies more of it: it uses sandboxes based on the well-known Linux features cgroups and namespaces, and it puts every tab into its own sandbox (minijail). But it still uses Linux, and therefore a number of hardening steps had to be taken.
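A crude flavor of such restriction can be shown with standard Python. This sketch only caps the resources of a child process via rlimits; real ChromeOS sandboxing goes much further with cgroups, namespaces and minijail, so treat this as an illustration of the principle, not of the mechanism.

```python
import resource
import subprocess
import sys

def run_restricted(cmd):
    """Run cmd in a child process with tightened resource limits --
    a very rough stand-in for the cgroup/namespace sandboxes of ChromeOS."""
    def tighten():
        # Executed in the child just before exec: cap CPU seconds
        # and the number of simultaneously open file descriptors.
        resource.setrlimit(resource.RLIMIT_CPU, (2, 2))
        resource.setrlimit(resource.RLIMIT_NOFILE, (64, 64))
    return subprocess.run(cmd, preexec_fn=tighten,
                          capture_output=True, text=True)

# The child observes the reduced file-descriptor limit.
r = run_restricted([sys.executable, "-c",
    "import resource; print(resource.getrlimit(resource.RLIMIT_NOFILE)[0])"])
print(r.stdout.strip())  # 64
```

Namespaces and cgroups additionally isolate what a process can *see* (PIDs, mounts, network), not just how much it may consume, which is what makes the minijail approach stronger than plain rlimits.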
How does it deal with real-life problems like corrupted systems? It provides a verified-boot feature that is protected cryptographically and through special hardware. A read-only boot image can always be used to reset the system to its original state. Users can store data securely on Google servers. Updates are automatic (nothing else makes sense anyway for this type of use).
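The core of verified boot can be condensed into a short Python sketch. The values here are made up, and a real Chromebook anchors trust in a public key burned into read-only firmware rather than in an HMAC secret, but the decision logic is the same: boot only what the root of trust vouches for, otherwise fall back to recovery.

```python
import hashlib
import hmac

# Hypothetical root of trust; real hardware stores this read-only.
ROOT_KEY = b"burned-into-rom"

good_image = b"kernel v1 ..."
# Digest recorded when the image was built and signed.
expected = hmac.new(ROOT_KEY, good_image, hashlib.sha256).digest()

def verified_boot(image: bytes) -> str:
    """Boot the image only if its authenticated digest matches;
    any mismatch drops into the read-only recovery path."""
    actual = hmac.new(ROOT_KEY, image, hashlib.sha256).digest()
    if hmac.compare_digest(actual, expected):
        return "boot " + image.split()[0].decode()
    return "recovery"

print(verified_boot(good_image))          # boot kernel
print(verified_boot(b"tampered kernel"))  # recovery
```

Combined with the read-only recovery image, this makes persistent kernel-level malware far harder to plant than on a conventional laptop.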
There are of course a number of open questions, such as the security of the sandbox/minijail approach. Sending lots of data to Google could be a privacy issue, but right now the backup problem is a plague for many private users. Will Chromebooks really result in fewer zombies/bots in the wild? (Does Google perform some form of detection, or do they just rely on the restrictions built into ChromeOS?) The problem of botnets is discussed in another blog entry here.
But I think an even more important question is whether Linux is really a good choice for such a device. It is well known and powers things like Android and an endless number of IoT devices. But it suffers from malware vulnerabilities and it is huge. Hardening means stripping off lots of stuff, especially when all you really need is to support three applications. Why don’t we have something that is small and secure and not written in C?
Well, the latest I heard is that Google is working on a new operating system that might do just what I am asking here: according to The Verge, Fuchsia is based on LittleKernel and supposed to run on various kinds of systems, not just small routers. Even a capability-based security model is supposed to be included. This would really be a revolution in OS design, if it does not turn out to be one of the many misuses of that term. See: http://www.theverge.com/2016/8/15/12480566/google-fuchsia-new-operating-system
Fuchsia uses the new Dart programming language (https://www.dartlang.org/guides/language/language-tour), and a first glance showed some interesting features geared toward avoiding common programming errors. Dart runs on a VM with garbage collection. So there is hope against overflow-based malware…
MirageOS, Simon Lipke
Current operating systems are huge (Linux: more than 20 million LOC) and feature-rich, with most services tightly integrated into a monolithic kernel. Build and boot are complex, slow and critical.
So-called library operating systems reduce the OS to the services the application really needs. Ideally, only those services are loaded together with the application. Such a bundle of application and minimal services is called a “unikernel”, and it runs directly on top of the Xen hypervisor. The presentation demonstrated the huge reduction in code size and, in lockstep with it, the reduction of the attack surface.
MirageOS uses the functional language OCaml instead of C for all code. And yes, all kernel code and services have been rewritten, even complex things like TCP. While this certainly took quite a while, the result is a code base that gets tailored per application: no unnecessary code is loaded or executed. OCaml is type- and memory-safe, and there are sibling projects which use different languages. MirageOS shows that leaving the well-worn path of C/C++ IS POSSIBLE, even for operating-system-like services.
This was the most important lesson from the talk (for me at least). There is hope for new languages even in system programming.
How will serverless computing (e.g. AWS Lambda) affect Unikernels? Unikernels can boot extremely fast and they have a small memory footprint.
How do container technology and unikernels go together? It is not really a surprise that Docker has acquired Unikernel Systems, the company behind MirageOS. Today Docker containers run on fat kernels, and unikernels provide a means to reduce the footprint (and the security vulnerabilities!) quite a lot.
Will the general purpose fat kernel disappear? Yes, this is very likely. We see new operating systems being built (e.g. by Google) and IoT will require much smaller but safer systems. Cloud data centers would benefit a lot from reduced compute and memory footprint and increased security.
Will we see new approaches to kernel services? Quite likely. How about just-in-time compiled mini-Unikernels? Once the routes through services are well known, a compiler further optimizes access to hardware. (That idea is quite old but has not been applied yet).
Does it pay to change the way kernels and services work? Google is moving away from TCP in its data centers – or modifying it considerably – because data centers are not like the internet and TCP carries too much baggage for data-center use.
MirageOS, like ChromeOS, uses restrictions to minimize footprint and attack surface. That seems to be the way to go in the future.
Satellites and Satellite Communication (Zeller, Hensle, Savastürk)
A topic that – like side channel attacks – seems to be a bit exotic and of lesser relevance than others. But in security I have learned, that the biggest surprises (or newest developments) frequently come from areas which have been previously considered “exotic”.
And a bit more thought about satellite communication makes you realize that, e.g., GPS is a satellite-based service of the highest importance for civilians, the military and corporations all over the world. So satellites are a critical infrastructure and they must be protected well. Or do we find the typical pattern of failing security, as in other infrastructures? The presentation starts with a short introduction into the physical and technical aspects of satellites and quickly shows vulnerabilities in the protocols. Many older satellites seem to use no encryption and authentication at all, and it is unclear how many are already controlled by strangers.
Even the Mars rover Curiosity is protected only by the massive signaling infrastructure needed to send and receive commands, not by security measures. While overpowering NASA’s signals is rather hard, an attack on a NASA deep-space ground station could be the weakest link, resulting in loss of the rover.
Satellite terminals used by the military showed vulnerabilities that allowed manipulation of GPS signals and leaks of location data. The list of vulnerabilities includes hard coded passwords, weak passwords, weak resets, backdoors, weak (insecure, undocumented) protocols etc.
From a security point of view, the results are close to what is seen e.g. in traditional power grids: The protection is more based on undocumented and unknown protocols than on IT-Security measures. And the same vulnerabilities as in most traditional infrastructures can be found here as well.
Michael Kreuzer, Botnets
Given the developments in the blockchain area, with ever more complex digital contracts and clever use of signatures, I see even more complex botnets coming up. With a communication structure like the one in Tor – with its intermediates and rendezvous servers – and based on signatures (with encryption coming anyway via HTTP/2 everywhere), all fighting will be about the keys controlling those signatures.
Are we going to be less vulnerable to malware attacks? I doubt it. Pen-testing is making many people rich currently, but it does nothing for the future because it does not fight the root causes.
New developments are slowly showing up: new languages, OS etc. Still a long way to go.
Resilience can really be learned from botnets. I hope the people responsible for power, water etc. will learn something.
P2P has been in the shadow of the cloud movement for quite a while now. The first P2P technology making a splash recently was the blockchain. I did something on the distributed web in Aktuelle Themen last semester, and I am thinking about reviving my P2P lecture in distributed systems (IPFS is truly amazing…).
P2P even has some answers to the privacy problems of social networks: Safebook introduces intermediates just like Tor does.
Well, I hope I can use something from the paper at an upcoming conference on electric infrastructures in Vienna. Now I can show them what resilience really means.
Jonathan Peter, Malvertising
Advertisements are the oil that keeps free public internet services running. All over the world, web sites and advertising networks (so-called affiliate networks) deliver ads to web users in real time. But some of those ads contain malware. The presentation gives answers to questions like “how many?” and “how is it done?”.
But first a look at the overall structure of ad serving and its participants. The final servers of advertisements are the web sites which participate in affiliate networks and get money for each view or click. The ads come from ad agencies, which can be rather large or very small. Sometimes several hops are needed until the final ad is served, and all this happens in real time because the ad slots are sold in real-time auctions (real-time bidding).
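The auction step can be sketched in a few lines of Python. Bidder names, prices and the second-price rule shown here are illustrative assumptions, not a description of any particular exchange, but they capture why even a tiny agency can win a slot on a major site in milliseconds.

```python
def run_auction(slot, bidders):
    """Collect one bid per agency for this ad slot and pick a winner.
    Uses a second-price rule, common in real-time bidding:
    the winner pays the runner-up's bid."""
    bids = {name: bid_fn(slot) for name, bid_fn in bidders.items()}
    winner = max(bids, key=bids.get)
    price = sorted(bids.values())[-2] if len(bids) > 1 else bids[winner]
    return winner, price

# Hypothetical bidders; prices are invented for illustration.
bidders = {
    "big_agency":   lambda slot: 1.20,
    "small_agency": lambda slot: 1.50,  # small agencies can outbid anyone --
    "reseller":     lambda slot: 0.90,  # which is exactly how malware slips in
}

winner, price = run_auction({"site": "news.example", "user": "anon"}, bidders)
print(winner, price)  # small_agency 1.2
```

Nothing in this pipeline inspects the ad creative itself: whoever bids highest gets their content into the user’s browser, which is the structural weakness malvertising exploits.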
Web sites simply take an ad and serve it to users. If the ad contains malware (a drive-by attack), the user’s browser gets attacked and possibly corrupted.
In a three-month test setup, an experiment showed that about 1–2% of ads contain malware and that this malware is mostly served through small ad agencies. 70% of malware campaigns serve ransomware!
The list of countermeasures shows the typical mixture of useless, unusable or impossible advice that we regularly get from IT-Security when it comes to malware. “Use trusted sites only” completely ignores the fact that ads are not created at those sites; the sites merely serve them. That means any well-known site like BBC or Yahoo could be involved in distributing malware (as they actually were).
“Use ad-blockers” is actually quite useful, but it prevents certain sites from being used.
This gives rise to interesting research questions: can advertisements be checked for malware BEFORE they get delivered? (The real-time bidding might be a problem here.) What could the browser do in case malware is served? How effective are sandboxes? VMs? Capability-based browsers come to mind here as well, possibly written in memory- and type-safe languages.
Could the browser detect ads and block access? This way downloads could be prevented, but we would probably destroy the free web along with the malvertising…
Can we serve safe advertising at all currently, given the structure of the affiliate networks? Are there organizational, political or technical means that would help?
And I would like to add one more point to the presentation: what about the data that gets concentrated in those affiliate networks? Given a corrupt agency, what kind of data could the black economy get from them? Could this data be used to create perfect spear-phishing attacks, because the attackers know EXACTLY what you bought a couple of minutes ago?
A closing note on the work done by all participants of this seminar: I have been extremely satisfied with the results. I have gained significant knowledge from presentations and discussions. Thank you very much for the efforts made!