Abohassan, M.; El-Basyouny, K. Quantifying the Complexity of the Autonomous Vehicles Environment. Encyclopedia. Available online: https://encyclopedia.pub/entry/53966 (accessed on 28 April 2024).
Quantifying the Complexity of the Autonomous Vehicles Environment

One of the challenges that autonomous vehicles (AVs) face is the large volume of data inundating the onboard computer from its sensors, which exceeds the real-time processing capabilities of current AV models. The AV's onboard computer, which must operate in real time, has the onus of perceiving the environment surrounding the AV, processing the incoming information, and making the most apt decisions to ensure the safe operation of the ego vehicle.

autonomous vehicles; virtual simulations; LiDAR data; digital twins; data processing

1. Autonomous Vehicles

The ongoing developments in autonomous driving prompted the Society of Automotive Engineers (SAE) to define six levels of driving automation [1]. The real challenge starts at level 3, conditional automation, where the driver's attention is called upon only to respond to emergencies; during normal driving conditions, the human is not required to control the vehicle. Automated driving systems (ADS) generally operate within a set of restrictions defined as the operational design domain (ODD) [2]. Human supervision is not required for levels 4 and 5 of automation. The difference is that level 4 is active only in certain ODDs, since it needs the support of detailed maps and the existence of specific types of infrastructure. If these conditions are unmet, the vehicle automatically parks itself to end the trip as part of a fail-safe system [3]. Level 5, on the other hand, is designed for full automation without any human intervention.
Some AV models currently operate at level 4 [4]. However, due to the lack of adequate infrastructure and supporting legislation, their deployment has been restricted to a few small urban regions with speed limits of only 50 km/h [4]. Such regulations have led these models to be used primarily for ridesharing purposes. WAYMO, NAVYA, and Magna are among the presently available level 4 AVs [4]. Numerous companies, including Audi, Uber, WAYMO, and Tesla, have openly acknowledged their ongoing efforts to test level 5 autonomous vehicles for future public use. However, no level 5 autonomous vehicle has yet been released for commercial use [4].
Several challenges impede the realization and deployment of fully autonomous vehicles, spanning technological, safety, ethical, and political aspects [5]. A significant technological hurdle arises from the immense data influx into the AV's onboard computer, primarily from sensors like LiDAR, which complicates real-time data processing and subsequently impacts the vehicle's efficiency and safety. The literature shows that leveraging parallel computation is a highly effective strategy for addressing the real-time processing challenges associated with large datasets. For instance, the authors of [6] developed an optimized algorithm using OpenMP technology that sped up the computation of the LiDAR's position by a factor of eight. This is especially relevant as computing system architectures are undergoing substantial advancements and are positioned to become even more sophisticated in the future [6].
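The general idea of chunking a large point cloud and processing the chunks in parallel can be sketched as follows. This is a minimal illustration, not the OpenMP genetic algorithm of [6]; the `ground_filter` step and all thresholds are hypothetical stand-ins for a real per-chunk workload.

```python
from concurrent.futures import ThreadPoolExecutor

def ground_filter(points, z_max=0.2):
    """Keep only points at or below z_max -- a toy stand-in for a
    per-chunk LiDAR processing step (e.g., ground extraction)."""
    return [p for p in points if p[2] <= z_max]

def parallel_filter(cloud, n_workers=4):
    """Split the cloud into chunks and filter them concurrently;
    Executor.map preserves chunk order, so results stay ordered."""
    size = max(1, len(cloud) // n_workers)
    chunks = [cloud[i:i + size] for i in range(0, len(cloud), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(ground_filter, chunks)
    return [p for chunk in results for p in chunk]

cloud = [(0.0, 0.0, 0.1), (1.0, 2.0, 1.5), (3.0, 1.0, 0.0), (2.0, 2.0, 2.3)]
print(parallel_filter(cloud))  # only the two low (ground-level) points remain
```

A production system would use process-level or GPU parallelism rather than threads, since Python threads share one interpreter; the chunk-and-merge structure is the same either way.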
The concept of vehicle-to-everything (V2X) communication also holds promise as a solution to this challenge, especially given its rapid advancement in recent years. For instance, in [7], smart nodes were designed to optimize vehicle-to-infrastructure communications by learning the environment across variable scenarios and predicting the optimal minimum contention window (MCW) with trained deep Q-network (DQN) models. Nevertheless, the current infrastructure cannot yet support this technology, and existing networks lack the robustness needed for the anticipated high volume of data exchange [4]. Additionally, safeguarding the privacy of the extensive data collected by AVs is a vital concern [8].
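The idea of learning an MCW from experience can be illustrated with a tabular sketch (far simpler than the DQN of [7]): the agent observes a congestion state, tries candidate windows, and learns which window trades off collisions against access delay. The reward function and all numbers here are assumptions for illustration only.

```python
import random

# Candidate minimum contention windows and two toy congestion states.
ACTIONS = [8, 16, 32, 64]
STATES = {"low": 2, "high": 32}  # state -> number of contending nodes

def reward(n_nodes, mcw):
    """Toy reward: penalize collisions (many nodes, small window)
    and access delay (large window). Both terms are assumptions."""
    return -(n_nodes / mcw + mcw / 32)

# Tabular Q-learning, bandit-style (no state transitions here).
random.seed(0)
alpha = 0.5
Q = {s: {a: 0.0 for a in ACTIONS} for s in STATES}
for _ in range(2000):
    s = random.choice(list(STATES))
    a = random.choice(ACTIONS)        # pure exploration
    r = reward(STATES[s], a)
    Q[s][a] += alpha * (r - Q[s][a])  # incremental value update

policy = {s: max(Q[s], key=Q[s].get) for s in STATES}
print(policy)  # small window when idle, larger window under congestion
```

A DQN replaces the Q table with a neural network so it can generalize over continuous traffic states, but the update rule has the same shape.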
Furthermore, the correct and timely response to the surprising loss of control incidents like skidding is not on par with human reactions [9]. Uncrewed autonomous vehicles face a significant challenge in matching or improving upon human factors, such as ethical decision making on the road. While these vehicles excel at following traffic rules and safe navigation, they lack the “human touch” needed to make moral decisions in complex situations that involve human emotions, morals, and judgment [5]. This lack raises concerns about potential biases in the algorithms or AI used in AVs [5], especially when it is believed that the public will switch from crewed to uncrewed vehicles only after they understand the ethical principles that the AVs follow [10]. A different facet that impedes the progress of AVs is the current policies and regulations. Liability, in particular, emerges as a pivotal concern for the widespread adoption of AVs. Currently, drivers are typically liable for any car-involved collisions [11]. However, determining the primary responsible party becomes rather unclear in the context of accidents involving uncrewed vehicles [12].

2. AV Simulations on LiDAR Data

Achieving a meticulous replication of the intricate physical road infrastructure and an exacting simulation of the physics involved in sensing processes is key to bridging the substantial divide between theoretical simulations and real-world applications [13]. Closing this gap not only refines the development of AV technologies but also augments their reliability, safety, and efficiency. Broadly speaking, LiDAR data can be either real or synthetic, and synthetic LiDAR data in simulations are yet to be on par with real data: they have been shown to capture the intricacies of the real-world environment less accurately and to be generally less diverse [14].
Neuhaus et al. [15] utilized 3D point cloud data captured by the Velodyne HDL-64E LiDAR sensor to assess autonomous navigation in unstructured environments. Drivable areas were identified using an algorithm that evaluates local terrain roughness, enhancing the precision of autonomous vehicle path planning. Furthermore, Manivasagam et al. [13] leveraged real data collected by their self-driving fleet in diverse cities to enhance simulation realism. They curated an extensive catalog of 3D static maps and dynamic objects from real-world situations to devise an innovative simulator that integrates physics-based and learning-based approaches. Li et al. [16] proposed augmented autonomous driving simulation (AADS) by combining LiDAR and cameras to scan real street scenes; in contrast to traditional approaches, their method offers greater scalability and realism. Likewise, Fang et al. [17] utilized mobile laser scanning (MLS) data to create a virtual environment that directly reflects the complexity and diversity of real-world geometry. They then applied CAD models to capture the obstacles' poses and incorporated this information into the virtual environment to enrich it and enhance the AV simulations. This method demonstrated that a combination of real and simulated data can attain over 95% accuracy in the simulations.
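The local-roughness idea behind drivability analysis can be sketched in a few lines: bin the point cloud into grid cells and call a cell drivable when the spread of its height values is small. This is a simplified stand-in for the algorithm of [15], not a reimplementation; the cell size and roughness threshold are illustrative.

```python
import math
from collections import defaultdict

def drivable_cells(points, cell=1.0, max_rough=0.05):
    """Bin (x, y, z) points into a 2D grid and flag a cell as drivable
    when the standard deviation of its z values is below max_rough."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell), int(y // cell))].append(z)
    result = {}
    for key, zs in cells.items():
        mean = sum(zs) / len(zs)
        rough = math.sqrt(sum((z - mean) ** 2 for z in zs) / len(zs))
        result[key] = rough <= max_rough
    return result

flat  = [(0.1, 0.2, 0.00), (0.5, 0.6, 0.02), (0.9, 0.9, 0.01)]  # smooth patch
bumpy = [(2.1, 0.2, 0.00), (2.5, 0.6, 0.40), (2.9, 0.9, 0.10)]  # rough patch
print(drivable_cells(flat + bumpy))  # {(0, 0): True, (2, 0): False}
```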
In contrast, several other works have used synthetic LiDAR data in their AV simulations, since such data do not require the heavy manual annotation that scanned LiDAR data do; they therefore promise to streamline and increase the efficiency of AV simulations [16]. For instance, the authors in [18] extracted synthetic annotated LiDAR datasets from the computer game GTA-V, simulating a virtual scanning vehicle within the game's environment to capture realistic driving scenes. Wang et al. [19], on the other hand, developed a framework that simulated LiDAR sensors in the CARLA [20] autonomous urban driving simulator to generate synthetic LiDAR data. Their approach was inspired by the notable achievements of deep learning in 3D data analysis and demonstrated that incorporating synthetic data significantly improves the performance and accuracy of AVs in simulation environments.
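At its core, generating synthetic LiDAR data means ray-casting a virtual sensor against scene geometry. The sketch below does this in 2D against circular obstacles; it is a minimal illustration of the principle, not the GTA-V or CARLA pipelines, and all parameters are assumptions.

```python
import math

def scan_2d(obstacles, n_beams=8, max_range=20.0):
    """Cast n_beams rays from the origin and return the first-hit range
    per beam against circular obstacles given as (cx, cy, radius)."""
    ranges = []
    for i in range(n_beams):
        theta = 2 * math.pi * i / n_beams
        dx, dy = math.cos(theta), math.sin(theta)
        best = max_range
        for cx, cy, r in obstacles:
            # Ray-circle intersection: solve |t*d - c|^2 = r^2 for t.
            b = -2 * (dx * cx + dy * cy)
            c = cx * cx + cy * cy - r * r
            disc = b * b - 4 * c
            if disc >= 0:
                t = (-b - math.sqrt(disc)) / 2  # nearer intersection
                if 0 < t < best:
                    best = t
        ranges.append(best)
    return ranges

# One obstacle straight ahead: beam 0 hits it, the other beams miss.
print(scan_2d([(5.0, 0.0, 1.0)]))
```

A real simulator extends this to 3D meshes, adds per-point noise and intensity models, and attaches semantic labels to each return for free, which is exactly why synthetic data avoids manual annotation.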

3. Quantifying the Complexity of the AV Environment

As noted earlier, the AV's onboard computer must perceive the surrounding environment, process the incoming information, and make apt decisions in real time to ensure the safe operation of the ego vehicle. Traffic environments pose a major challenge for autonomous systems, as they are typically open, non-deterministic, hard to predict, and dynamic [21]. Identifying complex situations is essential to advancing AV safety, which would in turn expedite mass adoption.
Wang et al. [22] proposed a modeling and assessment framework that can quantify the complexity of the AV's environment. Their approach involves establishing fundamental and additional environmental complexity models that systematically evaluate four key environmental aspects: road conditions, traffic features, weather conditions, and potential interferences. Based on experts' judgment, the overall environmental complexity is calculated using a preset scoring system for the different environmental features. The analytic hierarchy method (AHM) determines the relations between attributes, and a weighting scheme based on both subjective and objective considerations is used to compute the overall complexity of the environment. A similar, automated framework, which bases its complexity measure on road type, scene type, challenging conditions, and traffic elements, was developed by Wang et al. [23]. Its traffic elements focus exclusively on vehicles, considering at most the eight closest neighbors. Using both LiDAR point clouds and image data, the framework was validated in three experiments modeling different road and traffic conditions.
Gravity models were proposed in [24] to assess the complexity of the surrounding environment, where the level of driving complexity was measured as the extra cognitive burden the traffic environment exerts on drivers. That said, the proposed method could not obtain complexity values directly, and many relevant parameters were miscalibrated in the calculations. Following the same concept, Yang et al. [25] divided the environment into static and dynamic elements to develop their environment complexity model. The static features were studied using grey relational analysis, while the complexity of the dynamic elements was quantified with an improved gravitation model that adds an extra explanatory variable to capture the contribution of the driving strategy.
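The gravitation analogy can be made concrete with a short sketch: each surrounding vehicle contributes an "attraction" proportional to the product of the masses and inversely proportional to squared distance, and the contributions are summed. This is a simplified reading of the gravity-model idea; the constant and masses are illustrative, not the calibrated parameters of [24] or [25].

```python
def gravity_complexity(ego_mass, vehicles, g=1.0):
    """Sum gravity-model contributions on the ego vehicle:
    each neighbour (mass m, distance d) adds g * m_ego * m / d**2.
    Larger, closer objects dominate the complexity score."""
    return sum(g * ego_mass * m / (d * d) for m, d in vehicles)

# (mass, distance) pairs: a close truck dominates a distant car.
near_truck = gravity_complexity(1.0, [(3.0, 5.0), (1.0, 40.0)])
far_only   = gravity_complexity(1.0, [(1.0, 40.0)])
print(near_truck, far_only)
```

The inverse-square form encodes the intuition that cognitive burden falls off quickly with distance, which is why the distant car contributes almost nothing here.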
Focusing only on dynamic traffic elements, ref. [26] proposed a framework that captured human drivers' objective judgment of the complexity of the driving environment. In this method, the complexity of the environment was defined by the interactions of the ego vehicle with the vehicles surrounding it. Applied to three case studies involving different road maneuvers, the framework produced complexity curves that successfully quantified and timed changes in environmental complexity. One drawback of this method is its inability to describe static environment complexity.
Multiple researchers have utilized potential field theory in their models. A highway potential field function was proposed by Wolf et al. [27] to aid the AV's obstacle avoidance system. Wang et al. [28], on the other hand, proposed three different fields (moving objects, road environment, and driver characteristics). Similarly, Cheng et al. [29] based their environmental complexity evaluation model on potential field theory. Environment elements are represented by positive point charges or uniformly charged wires that create a potential field in their vicinity, and the total potential field of an environment is calculated by superimposing the individual fields. The virtual electric quantity of each environment element is calibrated using the AHM, with non-motorized vehicles assigned the highest values and static traffic elements like lane markings the lowest. The method was verified on virtual and real traffic scenarios and showed results comparable to expert scoring.
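The superposition step can be sketched directly: evaluate each point charge's potential at a query location and sum the contributions. The charge values below ("virtual electric quantities") are illustrative placeholders, not the AHM-calibrated quantities of [29].

```python
import math

def potential_at(point, charges, k=1.0):
    """Superimpose point-charge potentials U = k * q / d at `point`.
    Each charge is (x, y, q); closer and higher-charge elements
    contribute more to the total field."""
    px, py = point
    total = 0.0
    for x, y, q in charges:
        d = math.hypot(px - x, py - y)
        total += k * q / d
    return total

# A cyclist (high virtual charge) and a lane marking (low charge).
charges = [(0.0, 2.0, 5.0), (0.0, -1.0, 0.5)]
print(potential_at((0.0, 0.0), charges))  # 5/2 + 0.5/1 = 3.0
```

Because superposition is a plain sum, adding or removing a traffic element never requires recomputing the others, which keeps the complexity evaluation incremental.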

References

  1. J3016_202104: Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles—SAE International. Available online: https://www.sae.org/standards/content/j3016_202104/ (accessed on 1 May 2023).
  2. Parekh, D.; Poddar, N.; Rajpurkar, A.; Chahal, M.; Kumar, N.; Joshi, G.P.; Cho, W. A review on autonomous vehicles: Progress, methods and challenges. Electronics 2022, 11, 2162.
  3. Lee, J.; Oh, K.; Oh, S.; Yoon, Y.; Kim, S.; Song, T.; Yi, K. Emergency pull-over algorithm for Level 4 autonomous vehicles based on model-free adaptive feedback control with sensitivity and learning approaches. IEEE Access 2022, 10, 27014–27030.
  4. Kosuru, V.S.R.; Venkitaraman, A.K. Advancements and challenges in achieving fully autonomous self-driving vehicles. World J. Adv. Res. Rev. 2023, 18, 161–167.
  5. Kim, D.; Mendoza, R.R.L.; Chua, K.F.R.; Chavez, M.A.A.; Concepcion, R.S.; Vicerra, R.R.P. A systematic analysis on the trends and challenges in autonomous vehicles and the proposed solutions for level 5 automation. In Proceedings of the 2021 IEEE 13th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM), Manila, Philippines, 28–30 November 2021; pp. 1–6.
  6. Mochurad, L.; Kryvinska, N. Parallelization of finding the current coordinates of the lidar based on the genetic algorithm and OpenMP technology. Symmetry 2021, 13, 666.
  7. Wu, Q.; Shi, S.; Wan, Z.; Fan, Q.; Fan, P.; Zhang, C. Towards V2I age-aware fairness access: A DQN based intelligent vehicular node training and test method. Chin. J. Electron. 2023, 32, 1230–1244.
  8. Taghavi, S.; Shi, W. EdgeMask: An edge-based privacy preserving service for video data sharing. In Proceedings of the 2020 IEEE/ACM Symposium on Edge Computing (SEC), San Jose, CA, USA, 12–14 November 2020; pp. 382–387.
  9. Wang, J.; Zhang, L.; Huang, Y.; Zhao, J.; Bella, F. Safety of autonomous vehicles. J. Adv. Transp. 2020, 2020, 1–13.
  10. David, P. Preparing Infrastructure for Automated Vehicles. ITF. Available online: https://www.itf-oecd.org/preparing-infrastructure-automated-vehicles (accessed on 6 May 2023).
  11. Collingwood, L. Privacy implications and liability issues of autonomous vehicles. Inf. Commun. Technol. Law 2017, 26, 32–45.
  12. Alawadhi, M.; Almazrouie, J.; Kamil, M.; Khalil, K.A. Review and analysis of the importance of autonomous vehicles liability: A systematic literature review. Int. J. Syst. Assur. Eng. Manag. 2020, 11, 1227–1249.
  13. Manivasagam, S.; Wang, S.; Wong, K.; Zeng, W.; Sazanovich, M.; Tan, S.; Yang, B.; Ma, W.C.; Urtasun, R. Lidarsim: Realistic lidar simulation by leveraging the real world. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11167–11176.
  14. Fang, J.; Yan, F.; Zhao, T.; Zhang, F.; Zhou, D.; Yang, R.; Ma, Y.; Wang, L. Simulating LIDAR point cloud for autonomous driving using real-world scenes and traffic flows. arXiv 2018, arXiv:1811.07112.
  15. Neuhaus, F.; Dillenberger, D.; Pellenz, J.; Paulus, D. Terrain drivability analysis in 3D laser range data for autonomous robot navigation in unstructured environments. In Proceedings of the 2009 IEEE Conference on Emerging Technologies & Factory Automation, Palma de Mallorca, Spain, 22–25 September 2009; pp. 1–4.
  16. Li, W.; Pan, C.W.; Zhang, R.; Ren, J.P.; Ma, Y.X.; Fang, J.; Yan, F.L.; Geng, Q.C.; Huang, X.Y.; Gong, H.J.; et al. AADS: Augmented autonomous driving simulation using data-driven algorithms. Sci. Robot. 2019, 4, eaaw0863.
  17. Fang, J.; Zhou, D.; Yan, F.; Zhao, T.; Zhang, F.; Ma, Y.; Wang, L.; Yang, R. Augmented LiDAR simulator for autonomous driving. IEEE Robot. Autom. Lett. 2020, 5, 1931–1938.
  18. Yue, X.; Wu, B.; Seshia, S.A.; Keutzer, K.; Sangiovanni-Vincentelli, A.L. A lidar point cloud generator: From a virtual world to autonomous driving. In Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval, Yokohama, Japan, 11–14 June 2018; pp. 458–464.
  19. Wang, F.; Zhuang, Y.; Gu, H.; Hu, H. Automatic generation of synthetic LiDAR point clouds for 3-D data analysis. IEEE Trans. Instrum. Meas. 2019, 68, 2671–2673.
  20. Dosovitskiy, A.; Ros, G.; Codevilla, F.; Lopez, A.; Koltun, V. CARLA: An open urban driving simulator. In Conference on Robot Learning; PMLR: Mountain View, CA, USA, 2017; pp. 1–16.
  21. Wang, K.; Feng, X.; Li, H.; Ren, Y. Exploring influential factors affecting the severity of urban expressway collisions: A study based on collision data. Int. J. Environ. Res. Public Health 2022, 19, 8362.
  22. Wang, Y.; Li, K.; Hu, Y.; Chen, H. Modeling and quantitative assessment of environment complexity for autonomous vehicles. In Proceedings of the 2020 Chinese Control and Decision Conference (CCDC), Hefei, China, 22–24 August 2020; pp. 2124–2129.
  23. Wang, J.; Zhang, C.; Liu, Y.; Zhang, Q. Traffic sensory data classification by quantifying scenario complexity. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 1543–1548.
  24. Zhang, H.C. Research on Complexity of Road Traffic Environment Based on Gravitation Model. Master’s Thesis, Department of Transportation Engineering, Beijing Institute of Technology, Beijing, China, 2016.
  25. Yang, S.; Gao, L.; Zhao, Y.; Li, X. Research on the quantitative evaluation of the traffic environment complexity for unmanned vehicles in urban roads. IEEE Access 2021, 9, 23139–23152.
  26. Yu, R.; Zheng, Y.; Qu, X. Dynamic driving environment complexity quantification method and its verification. Transp. Res. Part C Emerg. Technol. 2021, 127, 103051.
  27. Wolf, M.T.; Burdick, J.W. Artificial potential functions for highway driving with collision avoidance. In Proceedings of the 2008 IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, 19–23 May 2008; pp. 3731–3736.
  28. Wang, J.; Wu, J.; Li, Y. The driving safety field based on driver–vehicle–road interactions. IEEE Trans. Intell. Transp. Syst. 2015, 16, 2203–2214.
  29. Cheng, Y.; Liu, Z.; Gao, L.; Zhao, Y.; Gao, T. Traffic risk environment impact analysis and complexity assessment of autonomous vehicles based on the potential field method. Int. J. Environ. Res. Public Health 2022, 19, 10337.
Subjects: Transportation
Revisions: 2
Update Date: 18 Jan 2024