This entry traces the evolution of intelligent vehicle technology, highlighting the development of intelligent vehicles and their safety applications, with a focus on the various uses of perception sensors in production vehicles.
During this phase, the dynamic stability of vehicles was one of the focal points. Inertial sensors incorporated into inertial measurement units (IMUs), combined with an odometer, were often used to improve vehicle stability, particularly on roads with several curves, and this soon led to driver assistance features such as anti-lock braking systems (ABSs), followed by traction control (TC) and electronic stability control (ESC)[1]. Mercedes-Benz demonstrated the efficacy and importance of combined ABS and ESC systems for protecting human life, and the "moose test" incident attracted public and official attention[2]. Nevertheless, safety concerns were initially limited to drivers and passengers; growing concern about mobility and the safety of people in the surrounding area paved the way for the development of external sensors. In 1986, the European project PROMETHEUS[3], involving university research centers as well as transport and automotive companies, carried out basic studies on autonomous features ranging from collision prevention to cooperative driving and the environmental sustainability of vehicles. Within this framework, several different approaches to an intelligent transport system were designed, implemented, and demonstrated. In 1995, this vision work laid the foundation for a research team led by Ernst Dickmanns, which equipped a Mercedes-Benz S-Class and embarked on a journey of 1590 km from Munich (Germany) to Copenhagen (Denmark) and back, using saccadic computer vision and integrated-memory microprocessors optimized for parallel processing to react in real time. The experiment marked the way for computer vision technology: the vehicle reached speeds of more than 175 km/h and drove autonomously about 95% of the time, with minimal human intervention.
In July of the same year, 1995, Carnegie Mellon University's NavLab5 traveled across the country on the "No Hands Across America" tour; the vehicle was instrumented with a vision camera, a GPS receiver, a gyroscope, and steering and wheel encoders. Neural networks were used to control the steering wheel, while the throttle and brakes were human controlled[4]. Later, in 1996, the University of Parma launched its ARGO project, which completed more than 2000 km of autonomous driving on public roads, using a two-camera system for road following, platooning, and obstacle avoidance[5]. Meanwhile, other technologies around the world made their way into the market in various semi-autonomous vehicle applications. For example, ultrasonic sensors were used to detect obstacles in the surroundings for car parking assistance systems. Initially, these systems had merely a warning function to help prevent collisions when moving in and out of parking spaces. Toyota introduced ultrasonic back sonar as a parking aid in the Toyota Corona in 1982 and kept it in production until 1988[6]. Later, in 1998, the Mercedes-Benz adaptive cruise control radar was introduced; this feature was initially usable only at speeds greater than 30 km/h[7]. Gradually, autonomous and semi-autonomous highway concepts emerged, and major projects were announced to explore dynamic stability and obstacle detection sensors such as vision, radar, ultrasonic, differential GPS, and gyroscopes for road navigation. The navigation tasks included lane keeping, departure warning, and automatic curve warning[8][9]. Most of these projects were carried out in normal operating environments.
The phase concluded with the National Automated Highway System Consortium[10] demonstration of automated driving functions and its discussion of seven specific topics related to automated vehicles: (i) driver assistance for safety, (ii) vehicle-to-vehicle communication, (iii) vehicle-to-environment communication, (iv) artificial intelligence and soft computing tools, (v) embedded high-performance hardware for sensor data processing, (vi) standards and best practices for efficient communication, and (vii) traffic analysis systems.
Several interesting projects marked the second phase, such as the first Defense Advanced Research Projects Agency (DARPA) Grand Challenge, the second DARPA Grand Challenge, and the DARPA Urban Challenge[11][12]. These three projects and their corresponding competitions were designed to accelerate the development of intelligent navigation and control by highlighting issues such as off-road navigation, high-speed detection, and collision avoidance with surroundings (such as pedestrians, cyclists, traffic lights, and signs). In addition, complex urban driving scenarios such as dense traffic and intersections were addressed. The Grand Challenges demonstrated the potential of lidar sensors to perceive the environment and create 3D projections for managing the challenging urban navigation environment. The Velodyne HDL-64[13], a 64-layer lidar, played a vital role for both the winning and runner-up teams. During the competitions, vehicles had to navigate a real environment independently for a long time (several hours). The Stanford Racing Team (Stanford University), winner of the second Grand Challenge, equipped its vehicle Stanley with five lidar units, a front camera, a GPS sensor, an IMU, wheel odometry, and two automotive radars. The Carnegie Mellon University team, winner of the Urban Challenge (2007), equipped its vehicle Boss with a perception system made up of two video cameras, five radars, and 13 lidars (including a roof-mounted unit of the then-novel Velodyne HDL-64). The success of the Grand Challenges also highlighted an important trend: the number and size of onboard sensors increased significantly, raising the data acquisition density, which prompted several researchers to study different types of fusion algorithms.
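As a minimal illustration of the kind of fusion that such multi-sensor suites motivate, the sketch below combines two independent, noisy range measurements (say, one from lidar and one from radar) by inverse-variance weighting. This is a textbook building block under idealized Gaussian assumptions, not the algorithm of any specific competition team; the measurement values are hypothetical.

```python
# Minimal sketch of sensor fusion: combine two independent, noisy range
# measurements of the same obstacle by inverse-variance weighting.
# Values are illustrative; this is not any specific team's method.

def fuse(z1: float, var1: float, z2: float, var2: float) -> tuple:
    """Fuse two independent Gaussian measurements of the same distance.

    Returns the fused estimate and its variance. More precise sensors
    (smaller variance) get proportionally more weight.
    """
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Hypothetical example: lidar reports 20.0 m (variance 0.01 m^2),
# radar reports 20.4 m (variance 0.04 m^2).
estimate, variance = fuse(20.0, 0.01, 20.4, 0.04)
print(f"fused range: {estimate:.3f} m, variance: {variance:.4f} m^2")
```

Note that the fused variance is always smaller than either input variance, which is why adding a second sensor improves the estimate even when the two readings disagree.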
Further studies of data acquisition density paved the way for advanced driving maneuvers such as lane keeping and collision prevention with warning systems that help the driver avoid potential hazards. We also note that, although different urban navigation challenges were addressed in these competitions, all of them were run in clear weather, and no specific reports were provided on tests under varying climatic conditions.
The third phase is a combination of driver assistance technology advancement and commercial development. The DARPA Challenges strengthened partnerships between car manufacturers and academia and mobilized several efforts to advance autonomous vehicles (AVs) in the automotive industry. These included a collaboration between General Motors and Carnegie Mellon University (the Autonomous Driving Joint Research Lab) and a partnership between Volkswagen and Stanford University. Google's driverless car initiative moved autonomous-car research from the university lab into the commercial domain. In 2013, a Mercedes-Benz S-Class vehicle[14] equipped by the Karlsruhe Institute of Technology/FZI (Forschungszentrum Informatik) and Daimler R&D drove 100 km from Mannheim to Pforzheim (Germany) completely autonomously in a project designed to enhance safety. The vehicle, fitted with a single stereo vision system and several new-generation long-range and short-range radar sensors, followed the historic memorial route of Bertha Benz. Phase III has focused on issues like traffic automation, cooperative driving, and intelligent road infrastructure. Among the major European Union initiatives, Highly Automated Vehicles for Intelligent Transport (HAVEit, 2008–2011)[15][16][17] tackled numerous driver assistance applications, such as adaptive cruise control, safe lane changing, and side monitoring. The sensor sets used in this project included a radar network, laser scanners, and ultrasonic sensors, combined with advanced machine learning techniques and vehicle-to-vehicle (V2V) communication systems. The project produced safety architecture software for managing smart actuators and temporary autopilot tasks in urban traffic with data redundancy, and it led to successful green driving systems.
Other platooning initiatives were Safe Road Trains for the Environment (SARTRE, 2009–2012)[18], the VisLab Intercontinental Autonomous Challenge (VIAC, 2010–2014)[19], the Grand Cooperative Driving Challenge, 2011[20], and the European Truck Platooning Challenge, 2016[21][22], major projects aimed at creating and testing successful cooperative driving strategies. Global innovation, testing, and deployment of AV technology called for the adoption of standard guidelines and regulations to ensure stable integration, which led to the introduction of SAE J3016. This standard defines six levels of driving automation, from 0 to 5, where navigation tasks are managed entirely by the driver at level 0 and entirely by the computer, in all weather situations, at level 5. The Google driverless car project, begun in 2009, set out to create the most advanced driverless car (SAE autonomy level 5), featuring a rotating 64-beam rooftop lidar that creates 3D images of objects, helping the car judge distance within an impressive 200 m range. The camera mounted on the windshield helps the car see objects directly in front of it and records information about road signs and traffic lights. Four radars mounted on the front and rear bumpers make the car aware of the vehicles ahead of and behind it, keeping passengers and other motorists safe by avoiding bumps and crashes. To minimize uncertainty, GPS data are compared with previously collected sensor map data; an antenna fixed at the rear of the car receives information on the car's exact location and updates the internal map. An ultrasonic sensor mounted on one of the rear wheels keeps track of movements and warns the car about obstacles to the rear; ultrasonic sensors are usually used for parking assistance.
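The SAE J3016 taxonomy mentioned above can be sketched as a simple lookup. The level names follow the standard's common summary, but the one-line descriptions and the `driver_must_monitor` helper below are our paraphrase for illustration, not the standard's normative wording.

```python
# Sketch of the SAE J3016 levels of driving automation (0-5).
# Level names follow the standard's common summary; the short
# descriptions are paraphrased for illustration only.

SAE_LEVELS = {
    0: ("No Automation", "driver performs the entire driving task"),
    1: ("Driver Assistance", "system assists with steering OR speed"),
    2: ("Partial Automation", "system handles steering AND speed; driver monitors"),
    3: ("Conditional Automation", "system drives; driver must take over on request"),
    4: ("High Automation", "system drives within a limited operational domain"),
    5: ("Full Automation", "system drives everywhere, under all conditions"),
}

def driver_must_monitor(level: int) -> bool:
    """At levels 0-2 the human driver monitors the environment;
    from level 3 upward the automated system monitors it."""
    return level <= 2

for lvl, (name, desc) in SAE_LEVELS.items():
    print(f"Level {lvl} ({name}): {desc}")
```

The split at level 3 is the practically important boundary: below it the human is the fallback at all times, above it the system must handle (or safely hand over) its own failures.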
Google researchers developed an infrastructure that was successfully tested over 2 million km on real roads; this technology now belongs to the company Waymo[23]. Nissan's Infiniti Q50, which debuted in 2013, became one of the company's most capable autonomous cars and the first to use a virtual (steer-by-wire) steering column. The model has various features, such as lane changing, collision prevention, and cruise control, and is equipped with cameras, radar, and other next-generation technology; the driver does not need to handle the accelerator, brake, or steering wheel[24]. Tesla entered automated driving in 2014[25], with all its vehicles equipped with a monocular camera and an automotive radar enabling Autopilot level 2–3 functionality. In 2018, Mobileye, focusing on a vision-only approach to automated driving, presented a fully automated Ford demo using only 12 small mono-cameras[26]. Beyond these projects, there are many pilot projects in almost all G7 countries to accelerate the introduction of the ultimate driverless vehicle. Moreover, ADASs have achieved high technology readiness, and many car manufacturers now deploy this technology in their mass-market vehicles. Although several key elements for automatic maneuvers have been successfully tested, these features do not yet perform fully under all weather conditions. In the next section, we discuss the various ADAS applications currently available on the market and their limitations in various weather conditions.
This entry is adapted from the peer-reviewed paper 10.3390/s20226532