Markó, N.; Horváth, E.; Szalay, I.; Enisz, K. Autonomous Vehicle Localization. Encyclopedia. Available online: https://encyclopedia.pub/entry/53234 (accessed on 28 April 2024).
Autonomous Vehicle Localization

Wheel speed sensors and inertial measurement units (IMUs) are present onboard in a vehicle, and their raw data can be used for localization estimation. Both sensor types suffer from bias and measurement noise, which accumulate into error over time; even a slight inaccuracy can render the localization system unreliable and unusable within seconds. Traditional algorithms such as the extended Kalman filter (EKF) have long been applied to non-linear systems with white noise in both the system and the measurement model. These approaches require deep knowledge of the non-linear noise characteristics of the sensors. Neural network (NN)-based algorithms, a subset of artificial intelligence (AI), do not necessarily have these strict requirements.

Keywords: autonomous vehicle; open source; localization; GNSS; IMU; odometry; self-driving

1. Introduction

A crucial aspect of contemporary autonomous vehicle research involves enhancing localization methods [1][2][3][4]. Errors accumulate over time during position estimation: even a small error can rapidly ruin the estimation quality within seconds. Traditional methods such as the extended Kalman filter (EKF), the unscented Kalman filter (UKF) and the particle filter (PF) have served as stalwart approaches to non-linear systems affected by noise in both the system and measurement models. However, these approaches necessitate an in-depth understanding of the non-linear noise characteristics of the sensors, and properly tuning such systems demands a substantial investment of time and effort. For example, tuning the various Kalman filter model parameters, e.g., the process noise covariance or pre-whitening filter models for non-white noise, frequently requires dedicated optimization methods [5][6][7][8]. In contrast, neural network-based algorithms offer a promising alternative that circumvents these stringent requirements. It is important to distinguish between two neural network (NN)-based approaches. The first uses a neural network only as an optimization tool to obtain the proper parameters for the Kalman filter. The second treats the problem as a black box, without a pre-existing filter.
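As a minimal illustration of the predict/update cycle shared by these filters, the pure-Python sketch below implements a scalar EKF that fuses wheel-speed dead reckoning with a non-linear range measurement to a roadside beacon. The beacon geometry and the noise variances Q and R are illustrative assumptions, not values taken from the cited works.

```python
import math

DT = 0.1         # time step [s]
Q = 0.01         # process noise variance (illustrative)
R = 0.25         # range measurement noise variance (illustrative)
BEACON_Y = 5.0   # lateral offset of a hypothetical range beacon [m]

def ekf_step(x, P, u, z):
    """One scalar EKF cycle for 1D position x.

    Motion model:      x' = x + u*DT                   (wheel-speed dead reckoning)
    Measurement model: z  = sqrt(x'^2 + BEACON_Y^2)    (range to the beacon)
    """
    # predict (the Jacobian of the linear motion model is 1)
    x_pred = x + u * DT
    P_pred = P + Q
    # update: linearize h around the prediction
    h = math.sqrt(x_pred ** 2 + BEACON_Y ** 2)
    H = x_pred / h                       # dh/dx evaluated at x_pred
    S = H * P_pred * H + R               # innovation variance
    K = P_pred * H / S                   # Kalman gain
    x_new = x_pred + K * (z - h)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# usage: drive at 1 m/s and fuse (noise-free, for brevity) range readings
x, P = 0.0, 1.0
true_x = 0.0
for _ in range(50):
    true_x += 1.0 * DT
    z = math.sqrt(true_x ** 2 + BEACON_Y ** 2)
    x, P = ekf_step(x, P, u=1.0, z=z)
print(round(x, 2))  # → 5.0, the true position after 5 s
```

Note that the same two-step structure carries over to the vector case; only the Jacobians and covariance matrices grow, which is the source of the tuning burden discussed above.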

2. Robot and Vehicle Position Estimation

A significant challenge in both robotics and autonomous vehicle research is determining the robot’s current position [9][10][11], often referred to as localization. Robots use their sensor data to extract position information. The sensor data may include speed, acceleration, images, depth information and distance measurements, with different levels of accuracy. By combining information from diverse sensors, a comprehensive position estimate can be achieved. Generally, the accuracy of the robot’s position estimation improves as the number of available sensors increases, especially when the raw sensor data reach a certain level of accuracy. In robotics, one of the most widely used localization techniques is simultaneous localization and mapping (SLAM) [12]. SLAM is a sensor fusion approach and, as the name suggests, provides not only the vehicle’s location but also an estimate of the environment. In recent years this approach has become much more robust, at the expense of efficiency [12]. SLAM usually involves camera or LiDAR (light detection and ranging) data processing, which means large amounts of sensor data. More efficient but sometimes less robust solutions [2][3][13], on the other hand, rely only on simple measurements such as velocity, acceleration or magnetic field. The most used estimators in this field are the EKF, UKF and PF [14]. In [15], an invariant extended Kalman filter (IEKF)-based, loosely coupled IMU-GNSS integration scheme was introduced; the IEKF serves as the navigation filter and uses quaternion rotation for the INS-GNSS integration. The observable states of the Kalman filter are made to converge within an area of attraction. Some solutions fuse IMU and LiDAR data for positioning, while others use only LiDAR data for estimation. What both approaches have in common is that LiDAR provides a much larger amount of data than conventional in-vehicle sensor systems, so their computational requirements are also more significant. This makes data processing and optimization, and the ability to estimate as accurately as possible from a reduced data volume, one of the main challenges. Such an odometry estimation solution was presented in [16], whose approach relies on point-to-point ICP (iterative closest point) combined with adaptive thresholding.
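The core of point-to-point ICP can be sketched compactly. The toy below alternates brute-force nearest-neighbour matching with the closed-form 2D rigid alignment of the matched pairs; KISS-ICP [16] adds, among other things, the adaptive correspondence threshold and downsampling that this sketch omits.

```python
import math

def best_rigid_2d(src, dst):
    """Closed-form 2D rigid transform (theta, t) minimizing sum |R p + t - q|^2
    over already-matched pairs (p, q) -- the inner step of point-to-point ICP."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(q[0] for q in dst) / n
    cdy = sum(q[1] for q in dst) / n
    sxx = sxy = syx = syy = 0.0          # cross-covariance of the centered sets
    for (px, py), (qx, qy) in zip(src, dst):
        ax, ay = px - csx, py - csy
        bx, by = qx - cdx, qy - cdy
        sxx += ax * bx; sxy += ax * by
        syx += ay * bx; syy += ay * by
    theta = math.atan2(sxy - syx, sxx + syy)   # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    return theta, (cdx - (c * csx - s * csy), cdy - (s * csx + c * csy))

def icp(src, dst, iters=20):
    """Point-to-point ICP: alternate brute-force nearest-neighbour matching
    with the closed-form alignment above, applying each estimate to src."""
    cur = list(src)
    for _ in range(iters):
        matched = [min(dst, key=lambda q, p=p: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
                   for p in cur]
        theta, (tx, ty) = best_rigid_2d(cur, matched)
        c, s = math.cos(theta), math.sin(theta)
        cur = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in cur]
    return cur

# usage: recover a small rotation + translation between two tiny 2D "scans"
dst = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (3.0, 3.0)]
c0, s0 = math.cos(0.1), math.sin(0.1)
src = [(c0 * x - s0 * y + 0.3, s0 * x + c0 * y - 0.2) for x, y in dst]
aligned = icp(src, dst)
```

The brute-force matching makes the per-iteration cost quadratic in the number of points, which is exactly why production LiDAR pipelines invest in data reduction and spatial indexing.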
Compared to the localization problem introduced above, a much broader problem set is system state estimation. This is a fundamental concept in dynamic systems, which helps to understand and predict the behavior of complex, dynamic processes. Dynamic systems are prevalent in numerous fields, spanning from engineering and physics to economics and environmental sciences, and often involve evolving states influenced by various inputs, disturbances and inherent uncertainties. The primary objective of state estimation is to ascertain the unobservable or partially observable states of a dynamic system based on available measurements and a mathematical model that describes the system. Some prevalent approaches to this problem set involve probabilistic and statistical methods such as the Bayes filter, the Kalman filter [13] (with its extended and unscented variants) and Gaussian approaches [17].
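As a minimal member of the Bayes-filter family mentioned above, the sketch below runs a discrete (histogram) Bayes filter for a robot moving along a circular ten-cell corridor with a door/wall sensor. The map and the hit/miss probabilities are invented purely for illustration.

```python
# Map of a circular ten-cell corridor: 1 = door, 0 = wall (invented example)
WORLD = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
P_HIT, P_MISS = 0.8, 0.2      # sensor model: P(reading matches map) (assumed)

def predict(bel, step=1):
    """Motion update: deterministic circular shift of the belief by `step` cells."""
    n = len(bel)
    return [bel[(i - step) % n] for i in range(n)]

def update(bel, z):
    """Measurement update: multiply by the likelihood of z, then normalize."""
    post = [b * (P_HIT if WORLD[i] == z else P_MISS) for i, b in enumerate(bel)]
    s = sum(post)
    return [p / s for p in post]

bel = [0.1] * 10                          # uniform prior over the ten cells
for z in [1, 0, 0, 1]:                    # door, wall, wall, door while moving
    bel = update(bel, z)
    bel = predict(bel)
# belief now peaks at the positions consistent with the sensed sequence
```

The Kalman filter is the same two-step recursion restricted to Gaussian beliefs and (locally) linear models, which is what makes it tractable for continuous vehicle states.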
The paper [18] suggests enhancing the Kalman filter by incorporating the distance attributes of both LiDAR and radar sensors. An analysis of the sensor properties of LiDAR and radar with respect to distance was conducted, leading to the development of a reliability function that extends the Kalman filter’s capability to capture distance-related features. In this study, experiments were carried out with real vehicles, and comparative measurements were performed that integrated sensor fusion with fuzzy adaptive noise measurement and the Kalman filter. The experimental findings demonstrated that the approach produced reliable position estimates. Similarly, in [19] a localization approach that integrates an inertial dead-reckoning model with 3D LiDAR-based map matching is suggested and empirically validated in a dynamic urban setting encompassing diverse environmental conditions.
In the study presented in [20], the unique challenges faced by UAVs in outdoor wilderness scenarios are addressed with a significant focus on enhancing received signal strength (RSS) for accurate positioning and collision avoidance. The presence of colored noise in RSS data disrupts traditional least squares (LS) algorithms. The extended Kalman filter (EKF) emerges as a solution, wherein a colored noise model is ingeniously incorporated, sharpening its distance estimation accuracy. Alongside, an adaptive algorithm is introduced to account for fluctuating path-loss factors during UAV flights. This comprehensive EKF-based approach not only ensures precise UAV positioning but also incorporates an orthogonal rule for advanced collision detection and trajectory modification. Experimental findings validate the robustness of this EKF-centric methodology, endorsing its value for collaborative UAV operations.
Positioning accuracy can be increased not only with internal sensor and LiDAR data, but also via the increasingly complex and widespread vehicle-to-vehicle and vehicle-to-infrastructure communication networks. If a vehicle within a network loses its GNSS signal, it can obtain positioning data from other nodes in the network. For example, methods can be based on vehicular ad hoc networks (VANETs), which communicate additional position data and attempt to achieve localization using the received signal strength indicator (RSSI) analytical model [21][22].
The disadvantage of these methods is that RSSI is significantly affected by the weather and by objects in the environment. Several correction methods have been developed to deal with this, including weighted centroid localization (WCL) and signal-to-interference-plus-noise ratio (SINR) methods [23].
There are also procedures, similar to the fusion of IMU and GNSS data, that are based on the Kalman filter and use RSSI and SINR calculations together with the WCL procedure [14].
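A minimal sketch of weighted centroid localization: the vehicle's position is estimated as an RSSI-weighted average of the known positions of neighbouring nodes, with weights derived from a log-distance path-loss model. The transmit power, path-loss exponent and anchor layout below are hypothetical values for illustration only.

```python
import math

TX_POWER = -40.0    # RSSI at the 1 m reference distance [dBm] (assumed)
PATH_LOSS_N = 2.0   # path-loss exponent, free-space value (assumed)

def rssi_to_distance(rssi):
    """Invert the log-distance model: rssi = TX_POWER - 10*n*log10(d)."""
    return 10 ** ((TX_POWER - rssi) / (10 * PATH_LOSS_N))

def wcl(anchors, rssis, g=1.0):
    """Weighted centroid: nearer (stronger-RSSI) anchors get larger weight."""
    weights = [1.0 / rssi_to_distance(r) ** g for r in rssis]
    wsum = sum(weights)
    x = sum(w * a[0] for w, a in zip(weights, anchors)) / wsum
    y = sum(w * a[1] for w, a in zip(weights, anchors)) / wsum
    return x, y

# usage: four hypothetical roadside units at known positions
anchors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
rssis = [-70.0, -80.0, -80.0, -86.0]   # strongest signal from the first unit
est = wcl(anchors, rssis)              # estimate pulled toward (0, 0)
```

The weather and obstruction sensitivity noted above enters through the path-loss model: any deviation of the true exponent from PATH_LOSS_N biases every inferred distance, which is what the SINR-based corrections try to compensate for.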

3. Knowledge-Based Versus Black-Box Modeling

The knowledge-based solutions mentioned above (most notably the EKF) have earned a good reputation in signal filtering. The main drawback of knowledge-based methods is their dependency on the accuracy of the underlying model. Kalman filters typically use kinematic models at lower velocities, but at higher velocities dynamic models are a better fit. These have several parameters that depend on external factors and may vary over the life of the vehicle, for example vehicle mass, weight distribution and tire stiffness. Such changes can be neglected, but the noise characteristics of the model and the sensors must still be known and considered in the modelling. A further difficulty is that for some sensor signals, such as GNSS data, the noise characteristics are not deterministic and can change very quickly. For example, when communication with the remote base towers required for real-time kinematics (RTK)-based solutions [24] is suddenly lost, the accuracy, and hence the noise characteristics, of the GNSS signals can change significantly within seconds. This also shows that as the number of measured signals and internal state variables grows, not only does the computational complexity increase significantly, due to the large matrix operations running in the background, but the models themselves also become difficult to handle. A further problem is that while IMU and odometry data can be taken into account in a dynamic or kinematic model directly or through physical relationships, this is more difficult to do for a camera signal or a LiDAR point cloud.
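The low-speed kinematic model referred to above can be illustrated with a single dead-reckoning step of the kinematic bicycle model. The wheelbase is a typical assumed value; the model holds only where tire slip is negligible, which is precisely why dynamic models, with the mass and tire-stiffness parameters discussed, take over at higher speeds.

```python
import math

WHEELBASE = 2.7   # [m], a typical passenger-car value (assumed)

def kinematic_bicycle_step(x, y, yaw, v, steer, dt):
    """One dead-reckoning step of the kinematic bicycle model.

    Valid where tire slip is negligible (low speed); at higher speeds a
    dynamic model with mass and tire-stiffness parameters is required.
    """
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += v / WHEELBASE * math.tan(steer) * dt
    return x, y, yaw

# usage: 10 s straight at 1 m/s, then 5 s of a gentle constant left turn
x = y = yaw = 0.0
for _ in range(100):
    x, y, yaw = kinematic_bicycle_step(x, y, yaw, v=1.0, steer=0.0, dt=0.1)
for _ in range(50):
    x, y, yaw = kinematic_bicycle_step(x, y, yaw, v=1.0, steer=0.1, dt=0.1)
```

Note that the model uses only speed and steering angle as inputs: exactly the signals whose bias and noise, integrated step after step, produce the accumulating drift described in the Introduction.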
In recent years, black-box modelling-based technologies have also gained significant traction. In the field of vehicle localization, numerous papers [24][25][26][27][28][29][30][31] have been published. Perhaps even more attention is paid to smartphone-based pedestrian tracking [32][33][34][35][36]; similar technologies can be applied in that field, but the quality and type of the usable sensor set diverge. Of the mentioned approaches, [31] is the closest to ours; the main difference is that our solution uses LSTM instead of convolutional neural networks (CNNs). The best-known advantage of black-box methods is the ability to accurately predict outcomes and behavior without requiring a deep understanding of the underlying internal mechanisms or complexities. One of their main drawbacks is guaranteeing correct results in all situations. Moreover, they require a large amount of measurement data for training, and these measurements must be recorded under the most varied conditions possible.

References

  1. Chalvatzaras, A.; Pratikakis, I.; Amanatiadis, A.A. A Survey on Map-Based Localization Techniques for Autonomous Vehicles. IEEE Trans. Intell. Veh. 2023, 8, 1574–1596.
  2. Lu, Y.; Ma, H.; Smart, E.; Yu, H. Real-Time Performance-Focused Localization Techniques for Autonomous Vehicle: A Review. IEEE Trans. Intell. Transp. Syst. 2022, 23, 6082–6100.
  3. Laconte, J.; Kasmi, A.; Aufrère, R.; Vaidis, M.; Chapuis, R. A Survey of Localization Methods for Autonomous Vehicles in Highway Scenarios. Sensors 2022, 22, 247.
  4. Chen, C.; Pan, X. Deep Learning for Inertial Positioning: A Survey. arXiv 2023, arXiv:2303.03757.
  5. Cao, H.Q.; Nguyen, H.X.; Nguyen, T.T.; Nguyen, V.Q.; Jeon, J.W. Robot Calibration Method Based on Extended Kalman Filter–Dual Quantum Behaved Particle Swarm Optimization and Adaptive Neuro-Fuzzy Inference System. IEEE Access 2021, 9, 132558–132568.
  6. Huo, Z.; Wang, F.; Shen, H.; Sun, X.; Zhang, J.; Li, Y.; Chu, H. Optimal Compensation of MEMS Gyroscope Noise Kalman Filter Based on Conv-DAE and MultiTCN-Attention Model in Static Base Environment. Sensors 2022, 22, 7249.
  7. Sheibani, M.; Ou, G. A Bayesian optimized framework for successful application of unscented Kalman filter in parameter identification of MDOF structures. J. Low Freq. Noise Vib. Act. Control. 2021, 40, 1712–1730.
  8. Chen, Z.; Heckman, C.; Julier, S.; Ahmed, N. Weak in the NEES?: Auto-Tuning Kalman Filters with Bayesian Optimization. In Proceedings of the 21st International Conference on Information Fusion, Cambridge, UK, 10–13 July 2018.
  9. Cadena, C.; Carlone, L.; Carrillo, H.; Latif, Y.; Scaramuzza, D.; Neira, J.; Reid, I.D.; Leonard, J.J. Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age. IEEE Trans. Robot. 2016, 32, 1309–1332.
  10. Panchpor, A.A.; Shue, S.; Conrad, J.M. A survey of methods for mobile robot localization and mapping in dynamic indoor environments. In Proceedings of the 2018 Conference on Signal Processing and Communication Engineering Systems, Vijayawada, India, 4–5 January 2018.
  11. Chen, S.Y. Kalman Filter for Robot Vision: A Survey. IEEE Trans. Ind. Electron. 2012, 59, 4409–4420.
  12. Placed, J.A.; Strader, J.; Carrillo, H.; Atanasov, N.; Indelman, V.; Carlone, L.; Castellanos, J.A. A survey on active simultaneous localization and mapping: State of the art and new frontiers. IEEE Trans. Robot. 2023, 39, 1686–1705.
  13. Moore, T.; Stouch, D. A Generalized Extended Kalman Filter Implementation for the Robot Operating System. In Proceedings of the 13th International Conference on Intelligent Autonomous Systems (IAS-13), Padua, Italy, 15–18 July 2014.
  14. Mahmoud, A.; Noureldin, A.; Hassanein, H.S. Integrated Positioning for Connected Vehicles. IEEE Trans. Intell. Transp. Syst. 2020, 21, 397–409.
  15. Ibrahim, A.; Abosekeen, A.; Azouz, A.; Noureldin, A. Enhanced Autonomous Vehicle Positioning Using a Loosely Coupled INS/GNSS-Based Invariant-EKF Integration. Sensors 2023, 23, 6097.
  16. Vizzo, I.; Guadagnino, T.; Mersch, B.; Wiesmann, L.; Behley, J.; Stachniss, C. KISS-ICP: In Defense of Point-to-Point ICP—Simple, Accurate, and Robust Registration If Done the Right Way. IEEE Robot. Autom. Lett. 2023, 8, 1029–1036.
  17. Barfoot, T.D. State Estimation for Robotics; Cambridge University Press: Cambridge, UK, 2017.
  18. Kim, T.; Park, T.-H. Extended Kalman Filter (EKF) Design for Vehicle Position Tracking Using Reliability Function of Radar and Lidar. Sensors 2020, 20, 4126.
  19. Xia, X.; Bhatt, N.P.; Khajepour, A.; Hashemi, E. Integrated Inertial-LiDAR-Based Map Matching Localization for Varying Environments. IEEE Trans. Intell. Veh. 2023, 8, 4307–4318.
  20. Luo, C.; McClean, S.; Parr, G.; Teacy, L.; De Nardi, R. UAV Position Estimation and Collision Avoidance Using the Extended Kalman Filter. IEEE Trans. Veh. Technol. 2013, 62, 2749–2762.
  21. Drawil, N.M.; Basir, O. Intervehicle-Communication-Assisted Localization. IEEE Trans. Intell. Transp. Syst. 2010, 11, 678–691.
  22. Liang, W.; Li, Z.; Zhang, H.; Wang, S.; Bie, R. Vehicular Ad Hoc Networks: Architectures, Research Issues, Methodologies, Challenges, and Trends. Int. J. Distrib. Sens. Netw. 2015, 11, 745303.
  23. Altoaimy, L.; Mahgoub, I.; Rathod, M. Weighted localization in Vehicular Ad Hoc Networks using vehicle-to-vehicle communication. In Proceedings of the Global Information Infrastructure and Networking Symposium (GIIS), Montreal, QC, Canada, 15–19 September 2014.
  24. Saha, S.S.; Sandha, S.S.; Garcia, L.A.; Srivastava, M. Tinyodom: Hardware-aware efficient neural inertial navigation. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2022, 6, 1–32.
  25. Esfahani, M.A.; Wang, H.; Wu, K.; Yuan, S. AbolDeepIO: A Novel Deep Inertial Odometry Network for Autonomous Vehicles. IEEE Trans. Intell. Transp. Syst. 2020, 21, 1941–1950.
  26. Brossard, M.; Barrau, A.; Bonnabel, S. RINS-W: Robust Inertial Navigation System on Wheels. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019.
  27. Brossard, M.; Barrau, A.; Bonnabel, S. AI-IMU Dead-Reckoning. IEEE Trans. Intell. Veh. 2020, 5, 585–595.
  28. Zhang, M.; Zhang, M.; Chen, Y.L.M. IMU data processing for inertial aided navigation: A recurrent neural network based approach. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021.
  29. Zhou, B.; Gu, Z.; Gu, F.; Wu, P.; Yang, C.; Liu, X.; Li, L.; Li, Y.; Li, Q. DeepVIP: Deep learning-based vehicle indoor positioning using smartphones. IEEE Trans. Veh. Technol. 2022, 71, 13299–13309.
  30. Tang, H.; Niu, X.; Zhang, T.; Li, Y.; Liu, J. OdoNet: Untethered speed aiding for vehicle navigation without hardware wheeled odometer. IEEE Sens. J. 2022, 22, 12197–12208.
  31. Brossard, M.; Bonnabel, S. Learning wheel odometry and IMU errors for localization. In Proceedings of the International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019.
  32. Cao, X.; Zhou, C.; Zeng, D.; Wang, Y. RIO: Rotation-equivariance supervised learning of robust inertial odometry. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–22 June 2022.
  33. Rao, B.; Kazemi, E.; Ding, Y.; Shila, D.M.; Tucker, F.M.; Wang, L. Ctin: Robust contextual transformer network for inertial navigation. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 28 February–1 March 2022.
  34. Gong, J.; Zhang, X.; Huang, Y.; Ren, J.; Zhang, Y. Robust inertial motion tracking through deep sensor fusion across smart earbuds and smartphone. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2021, 5, 1–26.
  35. Bo, F.; Li, J.; Wang, W. Mode-independent stride length estimation with IMUs in smartphones. IEEE Sens. J. 2022, 22, 5824–5833.
  36. Yan, S.; Su, Y.; Luo, X.; Sun, A.; Ji, Y.; Ghazali, K.H.B. Deep Learning-Based Geomagnetic Navigation Method Integrated with Dead Reckoning. Remote Sens. 2023, 15, 4165.
Update Date: 29 Dec 2023