Light Detection and Ranging Data Testing Methods

In one widely reported accident involving an autonomous vehicle, both the LiDAR (Light Detection and Ranging) and RADAR (Radio Detection and Ranging) systems detected a pedestrian about six seconds before the registered impact, yet a misclassification of the pedestrian as an “unknown object” led to the crash. Among the data modalities used in automated driving perception, LiDAR presents the least technological development when it comes to the identification of anomalous data. While per-point detection is a well-explored field of anomaly detection, object-level and pattern-based approaches remain few and far between.

Keywords: autonomous driving; perception algorithms; LiDAR

1. Introduction

The last decade has seen an exponential increase in the application of embedded systems within the automotive industry; this modernization has allowed for a deeper and more complex integration of electronics, fueled in part by growing interest in electric and hybrid vehicles. This phenomenon has led to a wave of demand for vehicles equipped with ADAS (Advanced Driver Assistance Systems) and “self-driving” capabilities, making automated driving a particularly pertinent field of research. This increased interest has been accompanied by a requirement for better and more accurate machine learning models, in turn necessitating new performance indicators and metrics, higher quality and diversity of evaluation methods, and higher quality and realism of the datasets used to train these models.
Nevertheless, accidents involving autonomous vehicles have been reported as a result of errors in the perception computing layers [1][2][3]. One such example involved an accident between one of Uber’s vehicles and a pedestrian holding a bike while crossing a street. Reports suggest that while both the LiDAR (Light Detection and Ranging) and RADAR (Radio Detection and Ranging) systems detected the pedestrian about six seconds before the registered impact, a misclassification of the pedestrian as an “unknown object” led to the unfortunate crash [3]. Despite development efforts held in the past years, further research is required to prevent these failures. Special attention must be paid to ways in which researchers may improve or characterize the performance of perception algorithms, especially under diversified driving situations. Notably, the literature cites an acute lack of research involving holistic LiDAR data [4].
This may be due to the fact that LiDAR usage within vehicles is a fairly new field of application, with the required technology having improved steadily over time. Earlier versions of integrated LiDAR systems showed a lack of fidelity and resolution in the obtained point clouds, with each subsequent iteration providing denser and more precise measurements that allow for fewer mistakes within the perception layer. As noted in [4], an overwhelming amount of research effort has been placed on imagery obtained with cameras, with RADAR, LiDAR, and multimodal research being scarce at best.
Fundamental requirements for obtaining robust and accurate perception algorithms lie in expanding the availability and size of datasets, improving the effectiveness of testing methods and the relevance of key performance metrics, and mitigating risks by better assessing the performance and evolution of these algorithms. One notable way to achieve this is to compare the layer’s output against a “ground truth” (labeled data) that makes up part of the dataset. Other approaches involve the collection of detailed and relevant data via thorough testing, the analysis of correlations between metrics, the detection of anomalies and outliers, and the identification of fringe cases and notable exceptions, among others [4][5].

2. Overview of LiDAR Data Testing Methods

The correctness of a LiDAR point cloud (PC) is usually evaluated by calculating the minimum Euclidean distance between equivalent points in a reference and in the captured PC. Four metrics are typically used: the Hausdorff Distance [6], the Modified Hausdorff Distance [7], the Chamfer Distance, and the Earth Mover’s Distance [8]. The LiDAR’s accuracy is then provided by the Root Mean Square Error (RMSE) of the calculated distances. However, the simple calculation of these distances does not allow for the detection of outliers or the estimation of their probabilities.
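Three of these four metrics reduce to nearest-neighbor distance statistics and are straightforward to compute. The following is a minimal illustrative sketch, not drawn from any of the cited works, assuming each point cloud is an (N, 3) NumPy array of XYZ coordinates; the Earth Mover’s Distance is omitted, as it requires solving an optimal transport problem:

```python
# Minimal sketch of nearest-neighbor PC distance metrics; illustrative only.
import numpy as np
from scipy.spatial import cKDTree

def nn_distances(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """For each point in `source`, the Euclidean distance to its nearest point in `target`."""
    distances, _ = cKDTree(target).query(source, k=1)
    return distances

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Hausdorff Distance: the worst-case nearest-neighbor distance, in either direction."""
    return max(nn_distances(a, b).max(), nn_distances(b, a).max())

def modified_hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Modified Hausdorff Distance: replaces the max with the mean in each direction."""
    return max(nn_distances(a, b).mean(), nn_distances(b, a).mean())

def chamfer(a: np.ndarray, b: np.ndarray) -> float:
    """Chamfer Distance (one common convention): sum of the mean
    nearest-neighbor distances in both directions."""
    return nn_distances(a, b).mean() + nn_distances(b, a).mean()

def accuracy_rmse(reference: np.ndarray, captured: np.ndarray) -> float:
    """Accuracy as described above: RMSE of the captured-to-reference distances."""
    d = nn_distances(captured, reference)
    return float(np.sqrt(np.mean(d ** 2)))
```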
The detection of outliers in LiDAR data in agricultural applications has been previously discussed in [9]. The authors evaluated two methods. The first is a geometric approach in which noisy point cloud data are fitted to a surface via normal and curvature estimation in a local neighborhood. The second relies on the PointCleanNet (PCN) deep learning framework, a simple data-driven method for removing outliers and reducing noise in unordered point clouds that can handle large, densely sampled point clouds. While the first method requires the specification of input parameters that are sensitive to the distribution and density of the points in the dataset, the second proves more robust against changes in point cloud density, shape, and level of noise. Nevertheless, PCN typically requires point densities greater than about 600 points per m²; moreover, as a supervised learning method, it is unlikely to succeed when the training noise characteristics differ from those of the test data.
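As an illustration of the geometric idea, the sketch below flags points whose local neighborhood deviates strongly from a plane, using PCA-based “surface variation”. This is a simplified stand-in for the surface-fitting method of [9], not the authors’ implementation; `k` and `threshold` are exactly the kind of density-sensitive input parameters the text refers to:

```python
# Simplified geometric (local-neighborhood) outlier test; not the exact
# method of [9]. Points whose k-neighborhood deviates strongly from a local
# plane (high "surface variation") are flagged as potential outliers.
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points: np.ndarray, k: int = 16) -> np.ndarray:
    """Per-point surface variation lambda_0 / (lambda_0 + lambda_1 + lambda_2),
    where lambda_0 is the smallest eigenvalue of the neighborhood covariance.
    Near 0 on smooth surfaces; larger for noisy or isolated points."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)  # k nearest neighbors per point
    variation = np.empty(len(points))
    for i, neighbors in enumerate(idx):
        nbhd = points[neighbors] - points[neighbors].mean(axis=0)
        eigvals = np.linalg.eigvalsh(nbhd.T @ nbhd)  # ascending order
        variation[i] = eigvals[0] / max(eigvals.sum(), 1e-12)
    return variation

def flag_outliers(points: np.ndarray, k: int = 16, threshold: float = 0.1) -> np.ndarray:
    """Boolean mask of suspected outliers; `k` and `threshold` must be tuned
    to the point distribution and density of the dataset."""
    return surface_variation(points, k) > threshold
```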
A 2020 publication analyzed the performance of various LiDAR models available on the market [10]. The authors sought to better understand how their respective performance differences would impact the safety of future automated driving systems. The capabilities of ten LiDARs were evaluated using various metrics that encompassed twelve manufacturer specifications: channels, frames per second (FPS), precision, maximum range, minimum range, vertical field of view (vFOV), vertical resolution (vRes), horizontal resolution (hRes), wavelength (λ), sensor diameter (d), weight, and price.
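For concreteness, these twelve specifications could be gathered per sensor as a simple record for side-by-side comparison; the field names and units below are illustrative assumptions, not taken from [10]:

```python
# Illustrative container for the twelve manufacturer specifications compared
# in [10]; field names and units are assumptions, not taken from the paper.
from dataclasses import dataclass

@dataclass
class LidarSpec:
    channels: int          # number of laser channels
    fps: float             # frames per second
    precision_m: float     # range precision (m)
    max_range_m: float     # maximum range (m)
    min_range_m: float     # minimum range (m)
    vfov_deg: float        # vertical field of view (degrees)
    vres_deg: float        # vertical angular resolution (degrees)
    hres_deg: float        # horizontal angular resolution (degrees)
    wavelength_nm: float   # laser wavelength (nm)
    diameter_mm: float     # sensor body diameter (mm)
    weight_kg: float       # sensor weight (kg)
    price_usd: float       # list price (USD)
```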
First, the authors made a set of qualitative observations to analyze each LiDAR’s performance regarding secondary reflections, intensity-based aberrations, blooming, missing points, and traffic line visibility issues. These observations were intended to identify the main contributors to PC noise, measurement errors, artifacts, scarcity, and missing information. A statistical method was then used to measure each sensor’s overall accuracy and precision. The methodology was detailed enough to allow for reproducible results, using the relative error and the root mean square error across three different targets for each LiDAR. This same methodology was then extended to measure the impact of surface reflectivity on the LiDAR data by making use of the different material properties of each target [10].
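A minimal sketch of these statistics is given below, under the assumption that each target yields repeated range measurements against a known ground-truth distance; the paper’s exact protocol may differ:

```python
# Illustrative accuracy/precision statistics for repeated range measurements
# of one target at a known distance; assumed setup, not the protocol of [10].
import numpy as np

def relative_error(measured: np.ndarray, true_distance: float) -> float:
    """Mean signed error as a fraction of the true target distance."""
    return float(np.mean(measured - true_distance) / true_distance)

def range_rmse(measured: np.ndarray, true_distance: float) -> float:
    """Root mean square error of the range measurements (accuracy)."""
    return float(np.sqrt(np.mean((measured - true_distance) ** 2)))

def range_precision(measured: np.ndarray) -> float:
    """Standard deviation of repeated measurements (precision)."""
    return float(np.std(measured))

# Hypothetical example: one target at 10 m, measured over several frames.
measurements = np.array([10.02, 9.98, 10.05, 9.97, 10.01])
print(relative_error(measurements, 10.0), range_rmse(measurements, 10.0))
```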
A similar analysis was used on a smaller scale to assess the impact of each individual laser on the overall PC. The end goal was to ascertain whether accuracy errors resulted from faults in the calibration procedure of individual lasers or from the sensor’s attempts to compensate for differences between its laser emitters; most LiDAR errors were found to be due to the latter [10]. A final point of concern addressed the density of points within a given PC, seeking to compare experimental results with the expected maximum provided by the datasheet specifications. The expected density of points can be calculated using the LiDAR’s sampling rate and frequency, field of view, and resolution specifications. By counting how many points are obtained after a given number of frames, verification of successfully returned laser beams allowed the authors to compare the theoretical and practical density of the beams, as well as the differences in their intensity [10].
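For a spinning LiDAR sweeping 360° horizontally, the theoretical maximum point count follows directly from these specifications. The formula below is assumed from the datasheet quantities named in the text, not copied from [10]:

```python
# Theoretical point-count check for a spinning LiDAR with a 360° sweep;
# assumed formula: every laser channel fires at every azimuth step.
def expected_points_per_frame(channels: int, hres_deg: float, hfov_deg: float = 360.0) -> int:
    """Upper bound on returns per frame."""
    return int(channels * (hfov_deg / hres_deg))

def expected_points_per_second(channels: int, hres_deg: float, fps: float,
                               hfov_deg: float = 360.0) -> float:
    return expected_points_per_frame(channels, hres_deg, hfov_deg) * fps

# Example: a 64-channel sensor with 0.2 deg horizontal resolution at 10 FPS.
frame_max = expected_points_per_frame(64, 0.2)     # 115,200 points per frame
per_sec = expected_points_per_second(64, 0.2, 10)  # 1,152,000 points per second

# Comparing the observed returns over n frames against n * frame_max gives the
# fraction of successfully returned beams.
observed = 1_030_000  # hypothetical count over 10 frames
return_rate = observed / (10 * frame_max)
```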
In 2021, a review of the methods used to test environmental perception in automated driving systems was published in [11]. The authors found that while much of the testing and evaluation conducted at the time conformed to ISO 26262 [12] and ISO 21448 [13], these standards become insufficient as vehicles reach higher degrees of automation. They highlighted several points regarding the interdependence of criteria and the inability of existing metrics to account for points of failure which, while not formally regarded as catastrophic failures, may nonetheless result in accidents.
One such example involves a metric dubbed the “statistical safety impact” [11], which evaluates a system’s safety impact in individual scenes. Unfortunately, this metric depends on whether the system itself correctly recognizes and reports its uncertainty: if a failure-induced mischaracterization occurs, the uncertainty may never be reported, and the abnormality remains undetected. Similarly, there are times when the perception layer may report uncertainty around a false positive, such as cases involving “ghost pedestrians”. This may cause the subsequent layers to behave erratically, leading to emergency maneuvers and dangerous braking that can place other vehicles and drivers at risk. Above all, their review highlights that, despite the existence of safety criteria and metrics that fulfill them, including some independent from the system itself, there is a pressing need for new and more apt indicators that do not rely on the system itself, that consider the impact a misclassification may have on the entire system pipeline, and that scale to higher degrees of autonomy.
In 2022, a thorough survey was published [4] that delved into the many forms in which anomaly detection has been leveraged in autonomous driving, outlining an extensive list of methodologies developed throughout the years. The authors identified five distinct categories: confidence score, reconstruction, generation, feature extraction, and prediction. For each category, they searched extensively for applicable methods, identifying three main modalities for data capture: camera, LiDAR, and RADAR. Additionally, analyses were conducted regarding the detection of anomalies across multimodal and object-level data. The former encompasses data captured with two or more of the previous three modalities, while the latter involves abstract abnormalities such as behavioral patterns and other data not bound to any given modality.
Within their survey, the authors highlighted the distinction between the quantity and the quality of effective methods, especially concerning data captured by LiDAR. Of all the modalities, LiDAR presents the least technological development when it comes to the identification of anomalous data, comprising only four methods in total: three in the confidence score category and one reconstruction-based approach. While per-point detection is a well-explored field of anomaly detection, object-level and pattern-based approaches remain few and far between.

References

  1. Petrović, Đ.; Mijailović, R.; Pešić, D. Traffic Accidents with Autonomous Vehicles: Type of Collisions, Manoeuvres and Errors of Conventional Vehicles’ Drivers. Transp. Res. Procedia 2020, 45, 161–168.
  2. Ma, Y.; Yang, S.; Lu, J.; Feng, X.; Yin, Y.; Cao, Y. Analysis of Autonomous Vehicles Accidents Based on DMV Reports. In Proceedings of the 2022 China Automation Congress (CAC), Xiamen, China, 25–27 November 2022; pp. 623–628.
  3. Miethig, B.; Liu, A.; Habibi, S.; Mohrenschildt, M.v. Leveraging Thermal Imaging for Autonomous Driving. In Proceedings of the 2019 IEEE Transportation Electrification Conference and Expo (ITEC), Detroit, MI, USA, 19–21 June 2019; pp. 1–5.
  4. Bogdoll, D.; Nitsche, M.; Zöllner, J.M. Anomaly Detection in Autonomous Driving: A Survey. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), New Orleans, LA, USA, 19–20 June 2022.
  5. Christian, G.; Woodlief, T.; Elbaum, S. Generating Realistic and Diverse Tests for LiDAR-Based Perception Systems. In Proceedings of the 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), Melbourne, Australia, 14–15 May 2023; pp. 2604–2616.
  6. Ribera, J.; Guera, D.; Chen, Y.; Delp, E.J. Locating objects without bounding boxes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 6479–6489.
  7. Dubuisson, M.P.; Jain, A.K. A modified Hausdorff distance for object matching. In Proceedings of the 12th IEEE International Conference on Pattern Recognition, Jerusalem, Israel, 9–13 October 1994; Volume 1, pp. 566–568.
  8. Savkin, A.; Wang, Y.; Wirkert, S.; Navab, N.; Tombari, F. Lidar Upsampling With Sliced Wasserstein Distance. IEEE Robot. Autom. Lett. 2022, 8, 392–399.
  9. Nazeri, B.; Crawford, M. Detection of Outliers in LiDAR Data Acquired by Multiple Platforms over Sorghum and Maize. Remote Sens. 2021, 13, 4445.
  10. Lambert, J.; Carballo, A.; Cano, A.M.; Narksri, P.; Wong, D.; Takeuchi, E.; Takeda, K. Performance Analysis of 10 Models of 3D LiDARs for Automated Driving. IEEE Access 2020, 8, 131699–131722.
  11. Hoss, M.; Scholtes, M.; Eckstein, L. A Review of Testing Object-Based Environment Perception for Safe Automated Driving. Automot. Innov. 2022, 5, 223–250.
  12. ISO Standard No.26262-1:2018; Road Vehicles—Functional Safety—Part 1: Vocabulary. International Organization for Standardization: Geneva, Switzerland, 2018.
  13. ISO Standard No.21448:2022; Road Vehicles—Safety of the Intended Functionality. International Organization for Standardization: Geneva, Switzerland, 2022.