Autonomous cars are vehicles that can drive to a predetermined destination in real traffic without the intervention of a human driver. To ensure that the car gets from A to B as safely and comfortably as possible, various precautions must be taken. These precautions are explained in the following sections by means of a series of questions and safety concepts, which also address typical questions in the field of autonomous driving.
What happens if one of the systems or sensors fails?
A complete failure of machine perception or of the sensors must not occur. If it did, the car would drive completely blind and the probability of an accident would be correspondingly high. For this reason, sensory redundancy is provided.
Such redundancy is provided by so-called multi-sensor systems (see Figure 1). These systems merge the data and information from different sensors and sensing principles so that a partial system failure can be compensated. For example, if both radar and lidar sensors are installed, each provides distance measurements, albeit with different accuracy and over different detection ranges. Because the measurements are comparable, the two sensors can back each other up.
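The following minimal sketch illustrates this kind of fallback fusion for the radar/lidar distance example. The data types, variance values and the inverse-variance weighting are assumptions chosen for illustration, not the fusion method actually used in any particular vehicle.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DistanceReading:
    value_m: float    # measured distance to the object ahead, in metres
    variance: float   # sensor-specific measurement uncertainty (variance)

def fuse_distance(radar: Optional[DistanceReading],
                  lidar: Optional[DistanceReading]) -> Optional[float]:
    """Merge radar and lidar distance readings.

    If both sensors deliver data, combine them with inverse-variance
    weighting; if one fails (None), fall back to the remaining sensor.
    Returns None only if both sensors have failed.
    """
    if radar is None and lidar is None:
        return None                  # complete perception failure: must not occur
    if radar is None:
        return lidar.value_m         # lidar compensates for a radar failure
    if lidar is None:
        return radar.value_m         # radar compensates for a lidar failure
    w_r = 1.0 / radar.variance
    w_l = 1.0 / lidar.variance
    return (w_r * radar.value_m + w_l * lidar.value_m) / (w_r + w_l)

# Example: lidar is more precise here, so it dominates the fused estimate
print(fuse_distance(DistanceReading(24.8, 0.5), DistanceReading(25.1, 0.1)))
print(fuse_distance(None, DistanceReading(25.1, 0.1)))  # radar has failed
```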
How can the risk of a failure be reduced or a problem identified?
Recognizing risky situations is a major challenge. External events must be captured through environment perception and interpreted correctly so that the vehicle can react appropriately. In addition, technical faults in the vehicle and in the vehicle control system must be detected. During normal operation, a human driver watches the warning and indicator lamps and notices changes caused, for example, by technical defects. Since an autonomous vehicle has no driver, sensors and functions must be integrated that detect technical defects and faults and, depending on their severity, determine the current and future performance and the remaining range of functions. The expected complexity of a vehicle with an automated guidance system leads to a large number of measured values. From these, a self-representation of the vehicle is built, which is used to assess the current risk as a function of the situation and the vehicle's performance.
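As a rough sketch of such a self-representation, the snippet below maps component diagnostics to a remaining capability level. The component names, severity scale and the three capability levels are hypothetical; a real system would use far more measured values and a finer-grained degradation model.

```python
from dataclasses import dataclass

@dataclass
class ComponentStatus:
    name: str
    severity: int   # 0 = nominal, 1 = degraded, 2 = failed

def assess_capability(statuses: list[ComponentStatus]) -> dict:
    """Derive a simple self-representation from component diagnostics.

    The current performance level and the remaining range of functions
    are reduced according to the most severe detected fault.
    """
    worst = max((s.severity for s in statuses), default=0)
    if worst == 0:
        return {"mode": "full autonomy", "max_speed_kmh": 130}
    if worst == 1:
        return {"mode": "degraded: reduce speed, plan safe stop", "max_speed_kmh": 60}
    return {"mode": "minimal-risk manoeuvre: stop immediately", "max_speed_kmh": 0}

status = [ComponentStatus("lidar", 0),
          ComponentStatus("radar", 1),          # e.g. a contaminated radome detected
          ComponentStatus("brake_actuator", 0)]
print(assess_capability(status))
```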
What is the biggest human-machine problem?
Google’s autonomous vehicles have so far been accident-free in the sense that they have not yet caused any accidents themselves. However, there have already been several rear-end collisions, caused by human drivers who could not correctly anticipate the cautious driving style of autonomous cars. In one case, an autonomous vehicle braked abruptly for a pedestrian who appeared to be about to cross the street; the driver of the following car did not expect such caution, and a collision resulted.
While this confirms the safety of autonomous vehicles, which in principle can reduce accidents, it also shows that they are at considerable risk when they share the road with human drivers. The reason is that the software's rigidly rule-bound and overly cautious driving meets the hasty, inattentive and emotional driving of humans, who follow the rules creatively or ignore them altogether.
This dilemma becomes even more difficult, at least as long as autonomous and human-controlled vehicles share the roads, because the frequency of accidents involving autonomous vehicles interacting with human drivers has turned out to be twice as high as that of human drivers interacting with one another, who apparently can assess each other better (as Bloomberg reports).
Can autonomous vehicles reduce the number of accidents?
One of the promised advantages of autonomous driving is vehicle safety. However, since comparatively few autonomous vehicles are currently on the road, forecasting a reduction in accidents is difficult. It is therefore worth looking at the assistance systems already in use and the effects observed since their introduction. ESP (Electronic Stability Program), for example, reduced the number of accidents by up to 25% in some cases. In general, the number of accidents, and of serious accidents in particular, has decreased as a result of assistance systems. Applying this reasoning to autonomous driving, it can be assumed that the number of accidents will gradually decrease as more and more autonomous cars take to the road. This assumption is supported, among others, by a study by Daimler, which estimates that 10% of accidents could be avoided by 2020, 50% by 2050 and 100% by 2070; technical failures are not taken into account. Overall, however, because data, function descriptions, introduction dates and functional limits are still missing, no reliable forecast of the safety effect of autonomous vehicles can be made.
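Purely as a hypothetical illustration of how the study's three milestones could be read as a forecast curve, the snippet below interpolates linearly between them. The interpolation is an assumption of this text; the study itself provides only the three data points.

```python
def forecast_avoidable_accidents(year: int) -> float:
    """Linearly interpolate the share of avoidable accidents between the
    cited milestones: 10% by 2020, 50% by 2050, 100% by 2070."""
    milestones = [(2020, 0.10), (2050, 0.50), (2070, 1.00)]
    if year <= milestones[0][0]:
        return milestones[0][1]
    for (y0, p0), (y1, p1) in zip(milestones, milestones[1:]):
        if year <= y1:
            return p0 + (p1 - p0) * (year - y0) / (y1 - y0)
    return 1.0

for y in (2020, 2035, 2050, 2060, 2070):
    print(y, f"{forecast_avoidable_accidents(y):.0%}")
```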
How does the car behave in dilemma situations?
In everyday traffic, a chain of several events can occasionally lead to a situation that cannot be resolved without personal injury. In these so-called dilemma situations, an autonomous car must select, within a very short time, a course of action that causes as little harm as possible, even though it may still lead to personal injury. Material damage and violations of applicable laws are also conceivable outcomes, but have a comparatively lower priority.
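A minimal sketch of such a priority ordering is shown below: personal injury is weighted above material damage, which is weighted above rule violations, and the option with the least harm is selected. The option names, cost estimates and the simple lexicographic comparison are illustrative assumptions, not a statement about how any real vehicle decides.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_injuries: float    # estimated severity of personal injury
    expected_damage_eur: float  # estimated material damage
    rule_violations: int        # number of traffic rules that would be broken

def choose_action(options: list[Option]) -> Option:
    """Pick the option with the least harm, comparing personal injury
    first, then material damage, then rule violations."""
    return min(options, key=lambda o: (o.expected_injuries,
                                       o.expected_damage_eur,
                                       o.rule_violations))

options = [
    Option("brake in lane",        expected_injuries=0.8, expected_damage_eur=2000, rule_violations=0),
    Option("swerve onto shoulder", expected_injuries=0.0, expected_damage_eur=5000, rule_violations=1),
]
print(choose_action(options).name)  # -> "swerve onto shoulder"
```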
In the following, two situations are presented (the first is collision-free, the second can lead to a dilemma) and the available options are shown (see Figure 2).