Advanced Driver-Assistance Systems (ADASs) are used to increase safety in the automotive domain, yet current ADASs notably operate without taking drivers’ states into account, e.g., whether the driver is emotionally apt to drive.
In recent years, the automotive field has been pervaded by an increasing level of automation. This automation has introduced new possibilities with respect to manual driving. Among all the technologies for vehicle driving assistance, on-board Advanced Driver-Assistance Systems (ADASs), employed in cars, trucks, etc.[1], bring about remarkable possibilities for improving the quality of driving, safety, and security for both drivers and passengers. Examples of ADAS technologies are Adaptive Cruise Control (ACC)[2], Anti-lock Braking System (ABS)[3], alcohol ignition interlock devices[4], automotive night vision[5], collision avoidance systems[6], driver drowsiness detection[7], Electronic Stability Control (ESC)[8], Forward Collision Warnings (FCW)[9], Lane Departure Warning System (LDWS)[10], and Traffic Sign Recognition (TSR)[11].
Most ADASs consist of electronic systems developed to adapt and enhance vehicle safety and driving quality. They are proven to reduce road fatalities by compensating for human errors. To this end, the safety features provided by ADASs target accident and collision avoidance, usually through safeguards, alarms, notifications, and visual warnings and, if necessary, by taking control of the vehicle itself.
ADASs rely on the following assumptions: (i) the driver is attentive and emotionally ready to perform the right operation at the right time, and (ii) the system is capable of building a proper model of the surrounding world and of making decisions or raising alerts accordingly. Unfortunately, even if modern vehicles are equipped with complex ADASs (such as the aforementioned ones), the number of crashes is only partially reduced by their presence. In fact, the human driver is still the most critical factor in about 94% of crashes[12].
However, most current ADASs implement only simple mechanisms to take drivers’ states into account, or do not take them into account at all. An ADAS informed about the driver’s state could make contextualized decisions compatible with his/her possible reactions. Knowing the driver’s state means continuously recognizing whether the driver is physically, emotionally, and physiologically apt to drive the vehicle, as well as effectively communicating the ADAS’s decisions to the driver. Such an in-vehicle system for monitoring drivers’ alertness and performance is very challenging to obtain and, indeed, comes with many issues.
Incorrect estimation of the driver’s state, of the status of the ego-vehicle (also denoted as the subject vehicle or Vehicle Under Test (VUT), i.e., the vehicle containing the sensors that perceive the environment around the vehicle itself)[13][14], or of the external environment may cause the ADAS to activate incorrectly or to make wrong decisions. Besides the immediate danger, wrong decisions reduce drivers’ confidence in the system.
Many sensors are needed to achieve such an ADAS. Sensors are prone to errors and require several processing layers to produce usable outputs, where each layer introduces delays and may hide or corrupt data.
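The accumulation of per-layer delay and the possibility of data being masked along the way can be illustrated with a minimal sketch. All layer names, latency figures, and the plausibility threshold below are hypothetical, chosen only to make the layering concrete:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    value: float       # sensed quantity (e.g., a steering-wheel angle)
    latency_ms: float  # processing delay accumulated so far
    valid: bool        # False once any layer flags the data as unusable

def acquire(raw: float) -> Sample:
    """Layer 1: raw acquisition (hypothetical 5 ms ADC/transfer delay)."""
    return Sample(value=raw, latency_ms=5.0, valid=True)

def filter_noise(s: Sample) -> Sample:
    """Layer 2: denoising adds delay and may hide implausible data."""
    s.latency_ms += 10.0
    if abs(s.value) > 100.0:  # out-of-range reading: mark it invalid
        s.valid = False
    return s

def fuse(s: Sample) -> Sample:
    """Layer 3: fusion/decision layer adds further delay."""
    s.latency_ms += 20.0
    return s

sample = fuse(filter_noise(acquire(42.0)))
print(sample.latency_ms, sample.valid)  # 35.0 True
```

Even in this toy pipeline, the usable output lags the physical event by the sum of all layer delays, and a single over-aggressive filter can silently discard a genuine reading.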
Dependable systems that recognize emotions and human states are still a research challenge. They are usually built around algorithms requiring heterogeneous data, provided by different sensing technologies, as input parameters, which may introduce unexpected errors into the system.
Effective communication between the ADAS and the driver is hard to achieve. Indeed, human distraction plays a critical role in car accidents[15] and can stem from both external and internal causes.
In this paper, we first present a literature review on the application of human state recognition to ADASs, covering psychological models, the sensors employed for capturing physiological signals, the algorithms used for human emotion classification, and the algorithms for human–car interaction. In particular, we summarize some of the aspects that researchers are trying to address.
ADAS were first used in the 1950s with the adoption of the anti-lock braking system.[4] Early ADAS include electronic stability control, anti-lock brakes, blind spot information systems, lane departure warning, adaptive cruise control, and traction control. These systems can be affected by mechanical alignment adjustments or damage from a collision, which has led many manufacturers to require automatic resets for these systems after a mechanical alignment is performed.
The reliance on data that describes the outside environment of the vehicle, compared to internal data, differentiates ADAS from driver-assistance systems (DAS).[4] ADAS rely on inputs from multiple data sources, including automotive imaging, LiDAR, radar, image processing, computer vision, and in-car networking. Additional inputs are possible from other sources separate from the primary vehicle platform, including other vehicles (vehicle-to-vehicle or V2V communication) and infrastructure (vehicle-to-infrastructure or V2I communication).[5]
Moreover, the complex processing tasks of modern ADASs are increasingly tackled with AI-oriented techniques. AI can solve complex classification tasks that were previously thought to be very hard (or even impossible). Human state estimation is a typical task that can be approached with AI classifiers. At the same time, the use of AI classifiers brings about new challenges. As an example, an ADAS could potentially be improved by a reliable identification of the driver’s emotional state, e.g., in order to activate haptic alarms in case of imminent forward collisions. Even if such an ADAS could tolerate a few misclassifications, the AI component for human state classification needs very high accuracy to reach automotive-grade reliability. Hence, it should be possible to prove that a classifier is sufficiently robust against unexpected data [16].
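One simple way to probe such robustness is to perturb clean inputs and measure how often the predicted label stays stable. The sketch below uses a deliberately toy nearest-centroid classifier over two hypothetical physiological features (heart rate and skin conductance); the feature choice, centroid values, and noise model are all illustrative assumptions, not a method from the literature cited above:

```python
import random

# Toy two-class "driver state" classifier: nearest centroid over
# (heart rate, skin conductance). Centroid values are made up.
CENTROIDS = {"calm": (70.0, 2.0), "stressed": (100.0, 8.0)}

def classify(hr: float, sc: float) -> str:
    return min(CENTROIDS,
               key=lambda k: (hr - CENTROIDS[k][0]) ** 2
                           + (sc - CENTROIDS[k][1]) ** 2)

def robustness(samples, noise_std: float, trials: int = 200) -> float:
    """Fraction of Gaussian-perturbed copies keeping the clean label."""
    rng = random.Random(0)  # fixed seed for reproducibility
    stable = 0
    for hr, sc in samples:
        ref = classify(hr, sc)
        for _ in range(trials):
            if classify(hr + rng.gauss(0, noise_std),
                        sc + rng.gauss(0, noise_std)) == ref:
                stable += 1
    return stable / (len(samples) * trials)

samples = [(72.0, 2.5), (98.0, 7.5)]
print(robustness(samples, noise_std=1.0))   # near 1.0: labels stay put
print(robustness(samples, noise_std=20.0))  # lower: labels start flipping
```

A certification-oriented argument would replace the toy model with the actual classifier and the Gaussian noise with a characterization of realistic sensor disturbances, but the stability-under-perturbation measurement is the same shape.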
We then introduce a novel perception architecture for ADASs based on the idea of the Driver Complex State (DCS). The DCS monitors the driver’s behavior via multiple non-obtrusive sensors and AI algorithms, providing cognitive- and emotional-state classifications to the ADAS. We argue that this approach is a smart way to improve safety for all occupants of a vehicle. We believe that, to be successful, the system must adopt unobtrusive sensing technologies for detecting human parameters, safe and transparent AI algorithms that satisfy stringent automotive requirements, as well as innovative Human–Machine Interface (HMI) functionalities. Our ultimate goal is to provide solutions that improve in-vehicle ADASs, increasing safety, comfort, and performance in driving. The concept will be implemented and validated in the recently EU-funded NextPerception project[17], which will be briefly introduced.
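The idea of aggregating several classifier outputs into one driver state that the ADAS can query could be sketched as a small data structure. The field names, thresholds, and aptness policy below are purely illustrative assumptions; they are not the NextPerception design:

```python
from dataclasses import dataclass

@dataclass
class DriverComplexState:
    """Hypothetical aggregate of per-classifier outputs; all names
    and thresholds are illustrative, not a published DCS interface."""
    emotion: str       # e.g., "calm", "angry"
    drowsiness: float  # 0.0 (alert) .. 1.0 (asleep)
    attention: float   # 0.0 (distracted) .. 1.0 (focused)

    def apt_to_drive(self) -> bool:
        # Toy policy: not apt when drowsy, inattentive, or in a
        # high-arousal negative emotional state.
        return (self.drowsiness < 0.6
                and self.attention > 0.4
                and self.emotion not in {"angry", "panicked"})

dcs = DriverComplexState(emotion="calm", drowsiness=0.2, attention=0.9)
print(dcs.apt_to_drive())  # True
```

The point of the aggregate is that downstream ADAS logic consults one coherent state rather than reconciling raw sensor streams itself.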
In the automotive sector, the ADAS industry is a growing segment aiming at increasing the adoption of industry-wide functional safety in accordance with several quality standards, e.g., the automotive-oriented ISO 26262 standard[21]. ADASs increasingly rely on standardized computer systems, such as the Vehicle Information Access API[22], the Volkswagen Infotainment Web Interface (VIWI) protocol[23], and On-Board Diagnostics (OBD) codes[24], to name a few.
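As a concrete example of such standardization, OBD-II diagnostic trouble codes follow a fixed five-character format (per SAE J2012): a letter for the affected system, a digit distinguishing generic from manufacturer-specific codes, and three characters identifying the fault. A minimal decoder might look like this (the returned dictionary layout is our own choice, not part of any standard):

```python
# System letter per the OBD-II DTC format (SAE J2012).
SYSTEMS = {"P": "Powertrain", "C": "Chassis", "B": "Body", "U": "Network"}

def parse_dtc(code: str) -> dict:
    """Decode a five-character diagnostic trouble code, e.g. 'P0301'."""
    if len(code) != 5 or code[0] not in SYSTEMS:
        raise ValueError(f"not a valid DTC: {code!r}")
    return {
        "system": SYSTEMS[code[0]],
        # A second character of '0' marks an SAE-defined generic code;
        # '1' marks a manufacturer-specific one.
        "generic": code[1] == "0",
        "fault": code[2:],  # subsystem and fault index
    }

print(parse_dtc("P0301"))  # a generic powertrain code
```

Because the format is standardized, a scan tool from one vendor can decode codes emitted by another vendor's engine control unit.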
In order to achieve advanced ADAS beyond semiautonomous driving, there is a clear need for appropriate knowledge of the driver’s status. These cooperative systems are captured in the Society of Automotive Engineers (SAE) level hierarchy of driving automation, summarized in Figure 1. These levels range from level 0 (manual driving) to level 5 (fully autonomous vehicle), with intermediate levels representing semiautonomous driving situations, with a mixed driver–vehicle degree of cooperation.
According to SAE levels, in these mixed systems, automation is partial and does not cover every possible anomalous condition that can happen during driving. Therefore, the driver’s active presence and his/her reaction capability remain critical. In addition, the complex data processing needed for higher automation levels will almost inevitably require various forms of AI (e.g., Machine Learning (ML) components), in turn bringing security and reliability issues.
Driver Monitoring Systems (DMSs) are a novel type of ADAS that has emerged to help predict driving maneuvers, driver intent, and vehicle and driver states, with the aim of improving transportation safety and the driving experience as a whole[25]. For instance, by coupling sensing information with accurate lane changing prediction models, a DMS can prevent accidents by warning the driver ahead of time of potential danger[26]. As a measure of the effectiveness of this approach, progressive advancements of DMSs can be found in a number of review papers. Lane changing models have been reviewed in [27], while in [28][29], developments in driver’s intent prediction with emphasis on real-time vehicle trajectory forecasting are surveyed. The work in [30] reviews driver skills and driving behavior recognition models. A review of the cognitive components of driver behavior can also be found in [7], where situational factors that influence driving are addressed. Finally, a recent survey on human behavior prediction can be found in [31].
Modern cars have ADAS integrated into their electronics, and manufacturers can add new features over time. ADAS are considered real-time systems since they react quickly to multiple inputs and prioritize the incoming information to prevent accidents.[6] The systems use preemptive priority scheduling to organize which task needs to be done first.[6] An incorrect assignment of these priorities can cause more harm than good.[6]
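The priority ordering at the heart of such scheduling can be sketched with a priority queue. The task names and priority values below are invented for illustration (a real ADAS scheduler would also preempt an already-running task, which this toy dispatcher does not model):

```python
import heapq

def dispatch(events):
    """Handle pending events strictly in priority order
    (lower number = higher priority)."""
    queue, order = [], []
    for priority, task in events:
        heapq.heappush(queue, (priority, task))
    while queue:
        _, task = heapq.heappop(queue)
        order.append(task)
    return order

events = [(3, "lane-keeping nudge"),
          (1, "forward-collision brake"),
          (2, "blind-spot alert")]
print(dispatch(events))  # the safety-critical brake request comes first
```

Swapping two priority values in this example would delay the braking task behind a cosmetic one, which is exactly the "more harm than good" failure mode the text warns about.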
ADAS are categorized into different levels based on the amount of automation, following the scale provided by the Society of Automotive Engineers (SAE).[4] ADAS can be divided into six levels. In level 0, ADAS cannot control the car and can only provide information for the driver to interpret on their own.[4] ADAS considered level 0 include parking sensors, surround view, traffic sign recognition, lane departure warning, night vision, blind spot information systems, rear cross-traffic alert, and forward-collision warning.[4] Levels 1 and 2 are very similar in that the driver still does most of the decision making; the difference is that level 1 can take control over one functionality, whereas level 2 can take control over multiple functionalities to aid the driver.[4] ADAS considered level 1 include adaptive cruise control, emergency brake assist, automatic emergency brake assist, lane keeping, and lane centering.[4] ADAS considered level 2 include highway assist, autonomous obstacle avoidance, and autonomous parking.[4] From level 3 to level 5, the amount of control the vehicle has increases, with level 5 being fully autonomous driving. Some of these systems have not yet been fully embedded in commercial vehicles; for instance, highway chauffeur is a level 3 system, and automatic valet parking is a level 4 system, neither of which is in full commercial use yet.[4] The levels can be roughly understood as follows: level 0 - no automation; level 1 - hands on/shared control; level 2 - hands off; level 3 - eyes off; level 4 - mind off; and level 5 - steering wheel optional. ADAS are among the fastest-growing segments in automotive electronics due to the steadily increasing adoption of industry-wide quality and safety standards.[7][8]
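The feature-to-level categorization above can be expressed as a simple lookup. The mapping mirrors the assignments stated in the text; the function and dictionary names are our own:

```python
# SAE level per feature, as categorized in the text above.
FEATURE_LEVEL = {
    "parking sensors": 0, "traffic sign recognition": 0,
    "lane departure warning": 0, "forward-collision warning": 0,
    "adaptive cruise control": 1, "emergency brake assist": 1,
    "lane centering": 1,
    "highway assist": 2, "autonomous obstacle avoidance": 2,
    "autonomous parking": 2,
    "highway chauffeur": 3, "automatic valet parking": 4,
}

def max_level(features) -> int:
    """SAE level implied by the most capable feature on board."""
    return max(FEATURE_LEVEL[f] for f in features)

print(max_level(["parking sensors", "adaptive cruise control",
                 "highway assist"]))  # 2
```

Note that a vehicle's overall SAE level is determined by the most capable automation it offers, not by a count of individual features.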
What follows is not a comprehensive list of all ADAS; instead, it provides critical examples of ADAS that have progressed and become more commonly available since 2015.[9][10]
- "ACSF (Automatically commanded steering function) of Category C" (...) a function which is initiated/activated by the driver and which can perform a single lateral manoeuvre (e.g. lane change) when commanded by the driver.
- "ACSF of Category D" (...) a function which is initiated/activated by the driver and which can indicate the possibility of a single lateral manoeuvre (e.g. lane change) but performs that function only following a confirmation by the driver.
- "ACSF of Category E" (...) a function which is initiated/activated by the driver and which can continuously determine the possibility of a manoeuvre (e.g. lane change) and complete these manoeuvres for extended periods without further driver command/confirmation.
—UNECE regulation 79[42]
According to PACTS, the lack of full standardization can make a system difficult for the driver to understand; the driver might believe that the car behaves like another car when it does not.[53]
we can’t help feeling that this lack of standardisation is one of the more problematic aspects of driver-assistance systems; and it’s one that is likely to be felt more keenly as systems become increasingly commonplace in years to come, particularly if traffic laws change to allow ‘hands-off’ driving in the future.—EuroNCAP[54]
ADAS might come with many limitations; for instance, a pre-collision system might need 12 pages to explain 23 exceptions where the ADAS may operate when not needed and 30 exceptions where it may not operate when a collision is likely.[53] Names for ADAS features are not standardized. For instance, adaptive cruise control is called Adaptive Cruise Control by Fiat, Ford, GM, VW, Volvo, and Peugeot, but Intelligent Cruise Control by Nissan, Active Cruise Control by Citroën and BMW, and DISTRONIC by Mercedes.[53] To help with standardization, SAE International has endorsed a series of recommendations for generic ADAS terminology for car manufacturers, created together with Consumer Reports, the American Automobile Association, J.D. Power, and the National Safety Council.[55][56] Buttons and dashboard symbols change from car to car due to this lack of standardization.[53][57] ADAS behavior might also change from car to car; for instance, ACC speed can be temporarily overridden in most cars, while some cars switch to standby after one minute.[53]
In Europe, in Q2 2018, 3% of passenger cars sold had level 2 autonomy features. In Europe, in Q2 2019, 325,000 passenger cars were sold with level 2 autonomy features, that is, 8% of all new cars sold.[58] (Share of level 2 features in cars sold during Q2 2019: Toyota 20%, BMW 15%, Mercedes-Benz 14%, Volvo 12%, Audi 11%, others 28%.) Major car brands with level 2 features include Audi, BMW, Mercedes-Benz, Tesla, Volvo, Citroën, Ford, Hyundai, Kia, Mazda, Nissan, and Peugeot.[58] Full level 2 features are included with Full Self-Driving from Tesla, Pilot Assist from Volvo, and ProPILOT Assist from Nissan.[58]
The AV industry is growing exponentially, and according to a report by Market Research Future, the market is expected to exceed $65 billion by 2027. AV insurance and rising competition are expected to fuel that growth.[59] Auto insurance for ADAS has directly affected the global economy, and many questions have arisen among the general public. ADAS allow autonomous vehicles to enable self-driving features, but they come with associated risks. AV companies and manufacturers are advised to carry insurance in the following areas in order to avoid serious litigation. Depending on the level, ranging from 0 to 5, each car manufacturer would find it in its best interest to find the right combination of insurance types to best match its products. Note that this list is not exhaustive and may be continually updated with more types of insurance and risks in the years to come.
With the technology embedded in autonomous vehicles, these self-driving cars are able to distribute data if a car accident occurs, which, in turn, will streamline claims administration and its operations. Continuous monitoring of every minute on the road will also reduce fraud by making fraudulent staging of car accidents detectable.[62] ADAS are expected to streamline the insurance industry and its economic efficiency with technology capable of countering fraudulent human behavior. In September 2016, the NHTSA published the Federal Automated Vehicles Policy, which describes the U.S. Department of Transportation's policies related to highly automated vehicles (HAVs), which range from vehicles with ADAS features to autonomous vehicles.
The advancement of autonomous driving is accompanied by ethical concerns. The earliest moral issue associated with autonomous driving can be dated back to the age of the trolleys. The trolley problem is one of the most well-known ethical issues. Introduced by English philosopher Philippa Foot in 1967, the trolley problem asks: in a situation in which the trolley’s brakes have failed and there are five people ahead of the trolley, the driver may go straight, killing the five people ahead, or turn onto the side track, killing one pedestrian; what should the driver do?[64] Before the development of autonomous vehicles, the trolley problem remained an ethical dilemma between utilitarianism and deontological ethics. However, as the advancement of ADAS proceeds, the trolley problem becomes an issue that needs to be addressed by the programming of self-driving cars. The accidents that autonomous vehicles might face could be very similar to those depicted in the trolley problem.[65] Although ADAS make vehicles generally safer than purely human-driven cars, accidents are unavoidable.[65] This raises questions such as “whose lives should be prioritized in the event of an inevitable accident?” or “what should be the universal principle for these ‘accident algorithms’?” Many researchers have been working on ways to address the ethical concerns associated with ADAS. For instance, the artificial intelligence approach allows computers to learn human ethics by feeding them data regarding human actions.[66] Such a method is useful when the rules cannot be articulated, because the computer can learn and identify the ethical elements on its own without being precisely programmed as to whether an action is ethical.[67] However, there are limitations to this approach.
For example, many human actions are done out of self-preservation instincts, which is realistic but not ethical; feeding such data to the computer cannot guarantee that the computer captures the ideal behavior.[68] Furthermore, the data fed to an artificial intelligence must be carefully selected to avoid producing undesired outcomes.[68] Another notable method is the three-phase approach proposed by Noah J. Goodall. This approach first necessitates a system established with the agreement of car manufacturers, transportation engineers, lawyers, and ethicists, and set up transparently.[68] The second phase is letting the artificial intelligence learn human ethics while being bound by the system established in phase one.[68] Lastly, the system should provide constant feedback that is understandable by humans.[68]
Intelligent transport systems (ITS) highly resemble ADAS, but experts believe that ITS go beyond automated traffic to include any enterprise that safely transports humans.[68] ITS integrate transportation technology with a city’s infrastructure,[69] which would then lead to a “smart city”.[69] These systems promote active safety by increasing the efficiency of roads, possibly adding 22.5% capacity on average (an estimate, not an actual count).[69] ADAS have aided in this increase in active safety, according to a study in 2008. ITS use a wide range of communication technologies, including both wireless and traditional technology, to enhance productivity.[68]