Building serverless architectures is hard. At least it was for me on my first attempt to design a loosely coupled system that would, in the long term, put an end to my all-time aversion to system maintenance.
Music information retrieval is also hard. It is when you first try to grasp the underlying theoretical framework and submerge yourself in scientific papers, each of which takes a different approach to extracting some feature from digital audio signals. It is a long way until MFCC starts to sound natural. I have been there.
The safety culture of an organization is the key indicator of its safety performance. It incorporates the visible rules, norms and practices as well as implicit factors such as values, beliefs and assumptions. The most concise definition of safety culture is therefore “the way we do things around here”. Safety is a universal topic: we pursue it permanently, and every action is safety-related. To improve safety, we first need to understand an organization’s unique safety culture before we can derive tailored actions. This post covers the basic theoretical background of safety culture and focuses on two central components: just culture and learning culture. The resulting principles can increase an organization’s resistance towards its operational hazards, but only if they are adapted to its unique situation. There is no generally applicable step-by-step manual for implementing a safety culture.
Smart meters have been a controversial topic for quite a while. Other countries began the rollout years ago. In Germany this is taking much longer, and there are still no certified products for the energy companies to install. The BSI (Bundesamt für Sicherheit in der Informationstechnik, the Federal Office for Information Security) is responsible for certifying smart meters. Several smart meters are up for certification, as you can see on this page of the BSI’s website.
The main reason for installing smart meters is the energy transition: making the power grid better able to accommodate renewable energy. To this end, the EU decided that every member state should provide smart meters to its consumers, and in 2016 the Bundesregierung passed the Gesetz zur Digitalisierung der Energiewende (Act on the Digitisation of the Energy Transition). It requires 80% of all households to have a smart meter by 2020.
Artificial intelligence has great potential to improve many areas of our lives in the future. But what happens when these AI technologies are used maliciously?
An obvious concern is autonomous weapons, or so-called “killer robots”. But beyond our physical security, what about our digital one? This blog post discusses how the malicious use of artificial intelligence will threaten, and is already threatening, our digital security.
Over the last five years, the use of cloud computing services in German companies has increased rapidly. According to a 2018 study by Bitkom Research, the acceptance of cloud computing services continues to grow.
Cloud computing brings many advantages for a business. For example, expenses for internal infrastructure and its administration can be saved, and resources can be scaled more easily, as capacity is needed. It can also raise the level of innovation within companies and thus support new technologies and business models. Cloud computing can even improve data security and data protection, because the big providers must comply with standards such as the EU General Data Protection Regulation (GDPR, in Germany: DSGVO), which came into force in May 2018. There are even specialized cloud providers that offer such services in accordance with the Trusted Cloud Data Protection (TCDP) certification.
Usability and Security – Is a tradeoff necessary?
Usability is one of the main success factors for interactive software, but it often suffers under strict security requirements. At the same time, many use cases require authentication, authorization and access control, and weakening these security measures risks serious damage. This article examines the interdependence of the two areas, along with typical mistakes and their possible solutions, in order to bury a common fallacy in IT: “There needs to be a tradeoff between security and usability”.
Today, cities are growing bigger and faster than ever before. This brings various drawbacks for citizens, such as increased traffic, pollution, crime and cost of living, to name just a few. Governments and city administrations need to find solutions to alleviate these drawbacks. Over the past years, one solution that has emerged and grown continuously is the concept of the smart city.
The concept of smart cities is based on applying connected systems to manage a city efficiently. Smart cities emphasize various areas, such as traffic and transport control, energy and water supply, or public health and safety management. The broad distribution of Internet of Things (IoT) technologies favors the development of smart cities: IoT devices are considered the backbone of a smart city, as they function as sensors and can be deployed in many environments.
In some areas, today’s cities are already quite smart. For example, many large cities use a traffic and transport control system that manages the flow of traffic more efficiently, reducing or even avoiding congestion. Smart cities are becoming reality. But as smart city technologies touch more and more aspects of citizens’ everyday lives, they draw increasing attention from cyber attackers. Since many of these technologies control safety-critical systems, like the traffic and transport control system just mentioned, they are worthwhile targets, and because of the security concerns about the underlying IoT technology, often easy ones.
As the amount and value of data constantly increase, more and more data about each individual is collected and processed. Moreover, Facebook’s recent data leak involving Cambridge Analytica shows that collected data cannot be treated and stored with absolute security.
In 2014 and 2015, the Facebook platform allowed an app … that ended up harvesting 87m profiles of users around the world that was then used by Cambridge Analytica in the 2016 presidential campaign and in the referendum.
This is one of the reasons why we’ll take a look at our digital identity, how it can be linked to our real identity, and how we can limit that. Understanding what data is collected while surfing the web is a first step toward preserving anonymity.
It is widely known that tech companies like Apple or Google, and their partners, collect and analyse an increasing amount of information. This includes information about the users themselves, their interactions and their communications. It happens for seemingly good motives, such as:
- Recommendation services: e.g. word suggestions on smartphone keyboard
- Customizing a product or service for the user
- Creating and targeting personalised advertising
- Further development of their product or service
- Purely monetary interests: selling customer data (sometimes without the customer’s knowledge)
In this kind of data collection, clients’ or users’ privacy is often at risk. Privacy here includes confidentiality and secrecy. Confidentiality means that no party other than the intended recipient of a message can read it. For data collection this means: to achieve proper confidentiality, no third party, ideally not even the analysing company itself, should be able to read an individual’s information. Secrecy means that individual information should be known only to the user.
Databases may not be directly accessible to other users or potential attackers, but they usually are to the company collecting the data. Despite anonymization or pseudonymization, information can often still be associated with one product, installation, session and/or user. In this way, fairly definite conclusions about a single individual can be drawn, even though the data was anonymized or the information was never explicitly provided. Thus, individual users become identifiable and traceable, and their privacy is violated.
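To make the re-identification risk concrete, here is a minimal sketch of a linkage attack: joining an “anonymized” dataset with a public register on quasi-identifiers (ZIP code, birth year, gender). All names, records and field names below are invented for illustration.

```python
# Hypothetical illustration: "anonymized" records re-identified by joining
# quasi-identifiers with an external dataset. All data is invented.
anonymized_usage = [
    {"zip": "70173", "birth_year": 1985, "gender": "f", "searches": ["diabetes", "insulin"]},
    {"zip": "70176", "birth_year": 1990, "gender": "m", "searches": ["concert tickets"]},
]
public_register = [
    {"name": "A. Example", "zip": "70173", "birth_year": 1985, "gender": "f"},
    {"name": "B. Sample",  "zip": "70176", "birth_year": 1990, "gender": "m"},
]

def reidentify(anon_rows, register):
    """Join on quasi-identifiers; a unique match links the
    'anonymous' record back to a named person."""
    links = []
    for row in anon_rows:
        matches = [p for p in register
                   if (p["zip"], p["birth_year"], p["gender"])
                   == (row["zip"], row["birth_year"], row["gender"])]
        if len(matches) == 1:  # unique match => re-identified
            links.append((matches[0]["name"], row["searches"]))
    return links

print(reidentify(anonymized_usage, public_register))
```

Even though no name or ID is stored in the usage data, each record here matches exactly one person in the register, so the search history becomes attributable to an individual.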
The approach of differential privacy aims specifically at solving this issue: protecting privacy and making information non-attributable to individuals. It seeks to give each user plausible deniability about the data they submit, as a right of the user. The following article gives an overview of the approach of differential privacy and its effects on data collection.
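As a taste of how such deniability can work, here is a minimal sketch of randomized response, one of the simplest mechanisms in the (local) differential privacy family. The parameter values and function names are illustrative, not taken from any particular deployment.

```python
import random

def randomized_response(truth: bool, p_truth: float = 0.75) -> bool:
    """With probability p_truth report the true answer, otherwise report
    a uniformly random one. Any single reported value is deniable."""
    if random.random() < p_truth:
        return truth
    return random.random() < 0.5

def estimate_true_rate(responses, p_truth: float = 0.75) -> float:
    """Unbiased estimate of the true 'yes' rate from noisy responses:
    observed = p_truth * true + (1 - p_truth) * 0.5, solved for 'true'."""
    observed = sum(responses) / len(responses)
    return (observed - (1 - p_truth) * 0.5) / p_truth

# Simulate 100,000 users, 30% of whom would truthfully answer "yes".
random.seed(42)
true_answers = [random.random() < 0.3 for _ in range(100_000)]
noisy_reports = [randomized_response(a) for a in true_answers]
print(estimate_true_rate(noisy_reports))  # close to 0.3
```

The analysing company can still learn the aggregate rate quite accurately, but no individual report proves anything about that individual.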
What is BeyondCorp?
BeyondCorp is a concept that was developed and is used by Google, and it has since been adopted by some other companies. The idea behind it is to move away from the intranet and its perimeter defense, where breaching the perimeter grants access to much of the enterprise’s data. With BeyondCorp, enterprise applications are not hidden behind a perimeter defense but are instead deployed to the internet, accessible only via a centralized access proxy. With this deployment, Google establishes a zero-trust policy: anyone attempting to access an enterprise application, regardless of the IP address they come from, must have sufficient rights, determined from device and user data.
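The core of such an access proxy can be sketched as a policy check that looks only at user and device attributes, never at the source network. This is a deliberately simplified illustration, not Google’s actual implementation; the application names, policy fields and trust checks are all invented.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    user_groups: set       # groups from the identity provider (invented)
    device_id: str
    device_managed: bool   # device is in the inventory and managed
    device_patched: bool   # device meets the patch-level policy

# Hypothetical per-application policies: allowed groups plus the
# minimum device state the application demands.
APP_POLICIES = {
    "payroll": {"groups": {"hr"}, "require_managed": True, "require_patched": True},
    "wiki":    {"groups": {"hr", "engineering"}, "require_managed": True, "require_patched": False},
}

def access_proxy_decision(app: str, req: AccessRequest) -> bool:
    """Zero-trust check: the decision depends only on user and device
    state; the network the request comes from is never consulted."""
    policy = APP_POLICIES.get(app)
    if policy is None:
        return False                      # unknown app: deny by default
    if not req.user_groups & policy["groups"]:
        return False                      # user lacks an allowed group
    if policy["require_managed"] and not req.device_managed:
        return False                      # unmanaged device
    if policy["require_patched"] and not req.device_patched:
        return False                      # device out of policy
    return True

req = AccessRequest("alice", {"engineering"}, "laptop-42", True, False)
print(access_proxy_decision("wiki", req))     # True: group and device policy satisfied
print(access_proxy_decision("payroll", req))  # False: not in "hr"
```

The key design point is that a request from inside the corporate network is evaluated exactly like one from a coffee-shop Wi-Fi: there is no trusted network, only trusted users on trusted devices.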
The trigger for this was “Operation Aurora” in 2009, an advanced persistent threat (APT) supposedly originating from China, in which data was stolen from Google and around 35 other companies in the USA. An APT is hard to detect through monitoring: the many individual steps are uncritical in themselves and hard to correlate if the attackers take their time (we are talking about several weeks), yet easy to carry out once the intranet has been breached. Google therefore started the BeyondCorp project to find a more secure architecture for its enterprise.