Autonomous Vehicles: Comparison
Please note this is a comparison between Version 1 by Mahdi Rezaei and Version 5 by Mahdi Rezaei.

An Autonomous Vehicle (AV), also known as a driverless car or self-driving vehicle, is a car, bus, truck, or any other type of vehicle that is able to drive from point A to point B and perform all necessary driving operations and functions without any human intervention. An autonomous vehicle is normally equipped with different types of sensors to perceive the surrounding environment, including normal vision cameras, infrared cameras, RADAR, LiDAR, and ultrasonic sensors. [1][2][3][4]
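Readings from several such sensors are typically fused into a single, more reliable estimate. As a minimal illustrative sketch (not any particular vehicle's implementation), the snippet below fuses hypothetical distance-to-obstacle readings by inverse-variance weighting; the sensor names, readings, and noise values are assumptions for demonstration only:

```python
# Minimal sketch of multi-sensor distance fusion via inverse-variance
# weighting: more precise sensors (lower variance) get more weight.
# All readings and variances below are illustrative assumptions.

def fuse_estimates(readings):
    """Combine (distance_m, variance) pairs into one weighted estimate."""
    weights = [1.0 / var for _, var in readings]
    fused = sum(w * d for (d, _), w in zip(readings, weights)) / sum(weights)
    return fused

# Hypothetical distance-to-obstacle readings from three sensors:
sensors = [
    (25.3, 0.5),   # RADAR: good range accuracy
    (25.1, 0.1),   # LiDAR: very precise
    (26.0, 2.0),   # camera-based estimate: noisier
]
print(round(fuse_estimates(sensors), 2))  # fused estimate close to LiDAR
```

In practice, production systems use far richer techniques (e.g. Kalman filtering over time), but the principle of weighting each sensor by its reliability is the same.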

An autonomous vehicle should be able to detect and recognise all types of road users, including surrounding vehicles, pedestrians, and cyclists, as well as traffic signs and road markings, and should be able to segment free spaces, intersections, buildings, and trees in order to perform a safe driving task. [2]

Currently, no realistic prediction expects fully autonomous vehicles to arrive earlier than 2030.

  • Autonomous Vehicles
  • Self-driving Cars
  • Computer Vision
  • Machine Learning
  • Sensor Fusion
  • Driver Behaviour Monitoring
  • Human Factors

According to SAE International [1], previously known as the Society of Automotive Engineers, there are six different levels of automation, defined and agreed on internationally, from 0 to 5.

As the level increases, the vehicle's independence from human control and intervention increases: Level 0 means no automation, and Level 5 indicates fully autonomous driving.

Level 0: the human driver performs all driving tasks, and the car has no control over its operation.

Level 1: the vehicle is equipped with a driver assistance system (DAS) [2] that can partially help the driver with speed adjustment, automatic cruise control, lane departure warning, or emergency braking at a basic level.

Level 2: the vehicle is equipped with more advanced features, an advanced driver assistance system (ADAS), that can oversee steering, acceleration, and braking in some easy traffic and good daylight conditions; however, the human driver is required to pay full attention during the entire journey and to directly perform almost all complex driving tasks.

Level 3: the system is capable of performing all parts of the driving task under some ideal conditions [3], but the human driver is required to be ready to regain control when requested to do so by the system. The driver should be able to make a safe transition between autonomous mode and human mode within a few seconds, so the driver is not allowed to sleep or to fully recline the driving seat.

Level 4: the vehicle’s ADAS is able to perform all driving operations independently under certain conditions. At this level, human vigilance and attention are not required as long as those pre-defined conditions are met.

Level 5: the vehicle combines all the advanced sensor technologies with state-of-the-art AI, Computer Vision, and Machine Learning techniques [4] and is able to perform all driving tasks in all weather and lighting conditions, and on any road type from urban to rural to highways, without any human intervention, monitoring, or driving assistance. Using cloud data and internet communication, the AV would be able to communicate with other vehicles and with infrastructure to obtain the most up-to-date information about the road situation beforehand (from a few seconds to a few minutes in advance). [5]
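The six-level taxonomy above can be summarised in code. The sketch below is a hedged illustration of the classification and its driver-attention implications as described here, not an official SAE artefact; the enum member names are informal labels chosen for readability:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Driving-automation levels, 0 (no automation) to 5 (full automation)."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def driver_attention_required(level: SAELevel) -> bool:
    # Up to Level 2 the driver must pay full attention; at Level 3 the
    # driver must still stay ready to take over within seconds; from
    # Level 4 upward no human vigilance is needed within the system's
    # pre-defined operating conditions.
    return level <= SAELevel.CONDITIONAL_AUTOMATION

print(driver_attention_required(SAELevel.PARTIAL_AUTOMATION))  # True
print(driver_attention_required(SAELevel.HIGH_AUTOMATION))     # False
```

Using `IntEnum` keeps the natural ordering of the levels, so comparisons such as `level <= SAELevel.CONDITIONAL_AUTOMATION` read directly off the taxonomy.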

 Cover photo: [6]

References

  1. SAE International. https://www.sae.org/. Retrieved 2020-10-25.
  2. Rezaei, M. and Klette, R. Computer Vision for Driver Assistance. Springer Publishing, 2017.
  3. Rezaei, M., Terauchi, M. and Klette, R. Robust Vehicle Detection and Distance Estimation under Challenging Lighting Conditions. IEEE Transactions on Intelligent Transportation Systems, 16(5), pp. 2723-2743, 2015.
  4. Rezaei, M. and Shahidi, M. Zero-Shot Learning and its Applications from Autonomous Vehicles to COVID-19 Diagnosis: A Review. Intelligence-Based Medicine, 1-2, pp. 1-26, 2020. doi:10.1016/j.ibmed.2020.100005.
  5. Saleem, N.H., Rezaei, M. and Klette, R. Extending the Stixel World using Polynomial Ground Manifold Approximation. International Conference on Mechatronics and Machine Vision in Practice (M2VIP), pp. 1-6, 2018.
  6. FIA. Federation Internationale De L`automobile. Retrieved 2020-10-25.