For most of us, voices are a crucial part of our everyday communication. Whether we talk to other people over the phone or face to face, voices let us distinguish our counterparts, convey different meanings with the same words, and, maybe most importantly, connect the voice we hear to our memory of a person we know.
Trust lies at the heart of relationships, and whenever we recognize something familiar, we automatically open up to it. It happens every time we make a phone call or receive a voice message on WhatsApp: once we recognize the voice, we instantly connect the spoken words to that person and, in the case of a friend’s or partner’s voice, establish our connection of trust.
In the past couple of years, research in the field of machine learning (ML) has made huge progress, resulting in applications like automated translation, practical speech recognition for smart assistants, useful robots, self-driving cars, and many others. But so far we have only reached the point where ML works yet may easily be broken. This blog post therefore concentrates on the weaknesses ML faces today. After an overview and categorization of different flaws, we will dig a little deeper into adversarial attacks, which are among the most dangerous.
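To make the idea of an adversarial attack concrete before we dig in: a tiny perturbation of the input, too small for a human to notice, can flip a model’s decision. The sketch below illustrates this with a plain linear classifier and the core idea behind the fast gradient sign method (FGSM); all weights and inputs are made up for illustration and not taken from any real system.

```python
# Minimal sketch of an adversarial perturbation against a linear
# classifier -- the core idea behind the fast gradient sign method
# (FGSM). All numbers here are invented for illustration.

def predict(w, b, x):
    """Linear classifier: +1 or -1 depending on the sign of w.x + b."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

def fgsm_perturb(w, b, x, epsilon):
    """Shift each feature by at most epsilon in the direction that
    increases the loss. For a linear model the gradient with respect
    to the input is proportional to w, so the sign pattern of w tells
    us which way to push each feature against the current label y."""
    y = predict(w, b, x)
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - y * epsilon * sign(wi) for wi, xi in zip(w, x)]

# Demo: a barely-positive input flips its label after a perturbation
# of at most 0.1 per feature.
w, b = [0.5, -0.3], 0.0
x = [0.2, 0.1]
x_adv = fgsm_perturb(w, b, x, 0.1)
print(predict(w, b, x), predict(w, b, x_adv))  # 1 -1
```

Deep networks are attacked the same way in principle, only the input gradient is computed by backpropagation instead of being read directly off the weights.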
Metadata is data about data: it provides information about other data. Examples of metadata include file size, time and date of creation, and the means by which the data was created. We deal with it every day, but hardly anyone pays attention to it. Sometimes metadata tells us more than the data itself.
But which devices generate metadata, and how often do we use it? Our smartphones are among the largest producers of metadata. In this article we will look at which metadata cameras and smartphones save in every picture we take. Normally, a picture is shot and then we either look for the next one or share it on a social media platform. Most people want to share the nice side of life with their friends.
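The file-size and timestamp examples above are the easiest kind of metadata to see for yourself. The sketch below uses only Python’s standard library to read filesystem metadata of a throwaway file; reading the EXIF data embedded in photos works the same way in spirit but needs an extra library such as Pillow, so this sketch sticks to basic file metadata.

```python
import os
import tempfile
from datetime import datetime

def file_metadata(path):
    """Return a small dict of filesystem metadata: size in bytes and
    the last-modification timestamp -- data about the data, not the
    data itself."""
    st = os.stat(path)
    return {
        "size_bytes": st.st_size,
        "modified": datetime.fromtimestamp(st.st_mtime).isoformat(),
    }

# Demo: write a throwaway file and inspect its metadata.
with tempfile.NamedTemporaryFile(delete=False, suffix=".txt") as f:
    f.write(b"hello metadata")
    path = f.name

meta = file_metadata(path)
print(meta["size_bytes"])  # 14 -- the file holds 14 bytes of content
os.remove(path)
```

Note that we learned something (when the file was touched, how big it is) without reading a single byte of its content, which is exactly why metadata can be so revealing.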
Autonomous cars are vehicles that can drive to a predetermined destination in real traffic without the intervention of a human driver. To ensure that the car gets from A to B as safely and comfortably as possible, various precautions must be taken. The following sections explain these precautions along a series of questions and security concepts, and also address further typical questions in the field of autonomous driving.
Often overlooked, usability has turned out to be one of the most important aspects of security. Usable systems let users accomplish their goals with higher productivity and fewer errors and security incidents. Yet usable security still seems to be the exception rather than the rule.
When it comes to software, many people believe there is a fundamental tradeoff between security and usability: a choice between one or the other has to be made. The belief is that making something more secure immediately makes it harder to use.
It’s a never-ending challenge: security and usability experts argue about which one is more important, and people from the engineering and marketing departments get involved, giving their views and trying to convince the others. Finding the right balance between security and usability is without a doubt a challenging task.
The serious problem: user experience can suffer as digital products become more secure, and users who are slowed down start working around the safeguards. In other words: the more secure you make something, the less secure it becomes. Why?
Welcome to the second part of my series about malvertising. In this second post, we’ll get to the important stuff: What is malvertising and how often do these attacks happen?
Sadly, today’s security systems often get hacked and sensitive information gets stolen. To protect a company against cyber-attacks, security experts define a rule set to detect and prevent attacks. These “analyst-driven solutions” are built by human experts using their domain knowledge, which is based on experience with attacks of the past. But if an attack doesn’t match the rules, the system doesn’t recognize it and the security is broken.
The question is: can we train a model on past attacks to predict future ones?
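To give a feel for what that would look like, here is a deliberately tiny sketch: instead of hand-written rules, a classifier learns from labelled past traffic. A nearest-centroid classifier on two invented features (requests per minute, failed logins per minute) stands in for a real ML pipeline; every number below is synthetic and chosen only for illustration.

```python
# Toy sketch of learning from labelled past traffic instead of
# hand-written rules. A nearest-centroid classifier stands in for a
# real ML pipeline; all data below is invented for illustration.

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def train(normal, attacks):
    """'Training' here is just computing one centroid per class."""
    return centroid(normal), centroid(attacks)

def classify(model, x):
    """Label a new event by whichever class centroid is closer."""
    c_normal, c_attack = model
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return "attack" if dist(x, c_attack) < dist(x, c_normal) else "normal"

# Past observations: [requests per minute, failed logins per minute]
normal_past = [[20, 0], [35, 1], [25, 0]]
attack_past = [[300, 40], [450, 55], [380, 60]]

model = train(normal_past, attack_past)
print(classify(model, [400, 50]))  # resembles past attacks
print(classify(model, [30, 1]))    # resembles normal traffic
```

The point of the sketch is the shift in approach: the model generalizes from examples, so an event that resembles past attacks can be flagged even if no analyst ever wrote a rule for it.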