A Multimodal Robust Simultaneous Localization and Mapping Approach Driven by Geodesic Coordinates for Coal Mine Mobile Robots

Mobile robots in complex underground coal mine environments still cannot achieve accurate pose estimation or real-time reconstruction of scenes with absolute geographic information, and integrated terrestrial-underground localization and mapping technologies have yet to be effectively developed.

Keywords: multimodal fusion SLAM; geodesic-coordinate transmission; coal mine robot

1. Introduction

The current underground operation of coal mine robots suffers from problems such as opaque information about the state of the working environment, low integration and systematization, and a lack of ability to respond effectively to changes in roadway conditions. Underground equipment intelligence is still stuck at the stage of robotizing individual pieces of equipment, and system-wide intelligent control of the underground machine fleet has not yet been realized. The reasons for this situation are diverse. At the level of single-robot autonomy, for mobile robots in different applications such as digging, mining, transportation, inspection, and rescue, key technologies such as precise, robust positioning and environmental map construction under complex underground conditions have not yet seen a breakthrough, so autonomous navigation and autonomous operation remain out of reach. At the level of system intelligence, robots of different types and work areas cannot localize, navigate, and operate autonomously in a unified coordinate system, and scene modeling and updating are too inefficient to support a system-wide digital-twin control platform. This poses a great challenge to achieving transparent, intelligent control of the underground robot fleet.
Simultaneous localization and mapping (SLAM) is a field of research that has witnessed substantial growth in the past decade, with the aim of reconstructing a scene model while simultaneously locating the robot in real time. The technology has gradually gained traction in industrial applications because of its potential to enhance the capabilities of autonomous systems. Compared with ground mobile robots, SLAM for underground coal mine robots faces many additional challenges.
Firstly, underground coal mine environments are dim and routinely filled with dust, water vapor, smoke, and other complicating conditions. Visibility in a mine is even worse after a disaster, and road surfaces are slippery and bumpy, so the performance of conventional sensors such as lidar and cameras degrades dramatically. There is an urgent need for high signal-to-noise-ratio sensors and effective information processing to solve feature extraction and data association at the front end, so that long-term robust operation under harsh underground conditions becomes possible.
Secondly, confined, narrow underground tunnels offer weak texture, little structure, and high scene ambiguity. Geomagnetic disturbances and strong electromagnetic interference from electromechanical equipment cause wireless-signal attenuation, shielding, non-line-of-sight (NLOS) reception, and multipath propagation, all of which introduce large amounts of false information into sensor state estimation. No single sensor can meet the accuracy and reliability requirements of such environments, and multi-source information fusion must address the effective use of heterogeneous measurements and estimation consistency in the presence of noise and spurious data.
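A standard front-end defense against such spurious data, sketched below in Python purely as an illustration (it is not the method proposed in this entry, and the UWB range numbers are hypothetical), is chi-square gating: a measurement is fused only if its Mahalanobis distance from the predicted value is statistically plausible.

```python
import numpy as np
from scipy.stats import chi2

def gate_measurement(z, z_pred, S, dof, confidence=0.997):
    """Accept a measurement only if its squared Mahalanobis distance from
    the predicted measurement lies within the chi-square bound for `dof`
    degrees of freedom; otherwise treat it as an outlier."""
    innovation = z - z_pred
    d2 = innovation @ np.linalg.solve(S, innovation)  # squared Mahalanobis distance
    return d2 <= chi2.ppf(confidence, df=dof)

# Hypothetical UWB range readings, compared against the range predicted
# from the current pose estimate.
z_pred = np.array([12.0])   # predicted range (m)
S = np.array([[0.05]])      # innovation covariance (m^2)
print(gate_measurement(np.array([12.1]), z_pred, S, dof=1))  # True  -> fuse
print(gate_measurement(np.array([15.3]), z_pred, S, dof=1))  # False -> reject (likely NLOS/multipath)
```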
Thirdly, robots that need geographic-information guidance to perform mining operations such as digging, coal mining, drilling, and transportation must obtain pose estimates and environment models consistent with that geographic information. This is a prerequisite for building a geographic-information-based visual digital-twin control platform, and a necessary step in moving coal mine robots from single-robot autonomy to whole-system intelligence. An underground coal mine is a GPS-denied environment in which a global reference frame is difficult to obtain, and the current practice of transferring geodetic coordinates to roadway control points by total-station surveying is unsuited to large-scale robotic swarm operation. New means are needed to match and fuse the SLAM process with spatial geodetic coordinates in real time, in an integrated and standardized way.
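One conventional way to realize such a matching, shown here only as a minimal Python sketch under the assumption that a few roadway control points with surveyed geodetic coordinates can be recognized in the SLAM frame (the coordinates below are hypothetical), is to estimate the least-squares rigid transform between the two frames (Kabsch/Umeyama alignment without scale) and apply it to every SLAM pose:

```python
import numpy as np

def align_to_geodetic(slam_pts, geo_pts):
    """Least-squares rigid transform mapping points in the SLAM frame onto
    surveyed geodetic coordinates (Kabsch/Umeyama, no scale).

    slam_pts, geo_pts : (N, 3) arrays of corresponding points, N >= 3.
    Returns R (3x3) and t (3,) such that geo ~= R @ slam + t.
    """
    mu_s, mu_g = slam_pts.mean(axis=0), geo_pts.mean(axis=0)
    H = (slam_pts - mu_s).T @ (geo_pts - mu_g)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = mu_g - R @ mu_s
    return R, t

# Hypothetical example: three roadway control points with known geodetic
# coordinates, observed by the robot in its local SLAM frame.
slam = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [10.0, 5.0, -0.5]])
geo = np.array([[5.0e5, 4.0e6, -430.0],
                [5.0e5 + 10.0, 4.0e6, -430.0],
                [5.0e5 + 10.0, 4.0e6 + 5.0, -430.5]])
R, t = align_to_geodetic(slam, geo)
pose_local = np.array([3.0, 1.0, 0.0])   # any position estimated by SLAM
print(R @ pose_local + t)                # -> absolute geodetic coordinates
```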

2. A Multimodal Robust Simultaneous Localization and Mapping Approach Driven by Geodesic Coordinates for Coal Mine Mobile Robots

Underground SLAM has been studied for a long time, mainly using handheld devices [1], mobile trolleys [2], probe robots [3], and mining trucks [4] carrying sensors such as lidar and IMU modules for environment modeling. Tardioli et al. [5] reviewed a decade of research advances in underground tunnel robotics, focusing on the use of lidar-based, visual, odometric, and other SLAM methods in tunnel environments. The Subterranean (SubT) Challenge organized by the US DARPA from 2018 to 2021 targeted robotics applications in urban underground spaces, tunnels, and natural cave environments, and related studies proposed new solutions to typical problems in underground SLAM such as illumination changes [6], scene degradation [7], and roadway feature exploitation [8]. Ma et al. [9] proposed a depth-camera-based solution to underground localization and map building for the autonomous navigation of mobile robots. Chen et al. [10] proposed a SLAM concept for millimeter-wave radar point-cloud imaging in underground coal mines, combined with deep learning to process the sparse point clouds. Yang et al. [11] investigated a roadway environment sensing method for tunneling robots based on Hector-SLAM. Li et al. [12] proposed a lidar-based NDT-Graph SLAM method for the efficient map-building task of coal mine rescue robots. Mao et al. [13] used a surveying robot to track a prism on the coal mining machine body to transfer geodetic coordinates and localize the machine, an application limited by the high cost of acquiring geographic information through the surveying robot's scanning. Overall, research on SLAM for coal mine robots has focused on the initial application and performance improvement of sensors such as lidar, cameras, and millimeter-wave radar in specific scenarios and under specific conditions; no underground localization or map-building scheme yet produces results consistent with geographic information, and long-term localization and modeling technology for the complex, multi-condition underground environment still requires further exploration.
In the field of SLAM for ground mobile robots, multi-sensor fusion to cope with complex environments has likewise become a focus of current research. Building on pure laser odometry such as LOAM [14] and LeGO-LOAM [15], a large number of lidar-inertial odometry systems have been proposed, whether filtering-based such as LINS [16] and Faster-LIO [17], or optimization-based such as LIO-mapping [18] and LIO-SAM [19], integrating the sensors by loose or tight coupling. To exploit the rich information in images and compensate for the scarcity of laser features, many laser-visual-inertial fusion methods have further been proposed, e.g., LVI-SAM [20], R2LIVE [21], R3LIVE [22], LIC-Fusion [23], LIC-Fusion 2.0 [24], and mVIL-Fusion [25]. Many methods also address degeneration by combining visual and inertial information with lidar. Zhang et al. [26] proposed a degeneracy factor based on the eigenvalues and eigenvectors of the state-estimation problem to identify degenerate directions in the state space, and a sequential, multilayer sensor-fusion pipeline was further presented in [27] to integrate data from lidar, camera, and IMU. VIL-SLAM [28] incorporated stereo VIO into a lidar mapping system through tightly coupled fixed-lag smoothing to mitigate failures in the degenerate cases of pure lidar-based approaches. Both of these approaches achieved robust and accurate performance in tunnels. However, vision-aided methods are not suitable for long-term operation in low-visibility conditions that impair visual detection and tracking. Zhen and Scherer [29] proposed a localization approach that uses ultra-wideband (UWB) ranging to compensate for lidar degeneration, describing localizability as virtual force and torque constraints that bound the uncertainty of state estimation. A limitation of that work is that a map must be given a priori.
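The degeneracy factor of Zhang et al. [26] rests on an eigen-analysis of the approximate Hessian H = J^T J of the registration cost: directions of the state space with small eigenvalues are poorly constrained by the environment, and updates along them should be discarded or reinforced by other sensors. A minimal Python sketch of that idea follows; the threshold and simulated Jacobian are illustrative, not the authors' values.

```python
import numpy as np

def degenerate_directions(J, threshold=100.0):
    """Identify poorly constrained directions of a least-squares
    registration problem from the eigenvalues of H = J^T J.

    J         : (M, 6) stacked Jacobian of the scan-matching cost
    threshold : illustrative eigenvalue cutoff (problem-dependent)
    Returns the eigenvalues and the eigenvectors spanning degenerate directions.
    """
    H = J.T @ J
    eigvals, eigvecs = np.linalg.eigh(H)            # ascending eigenvalues
    degenerate = eigvecs[:, eigvals < threshold]    # weakly observed directions
    return eigvals, degenerate

# In a long featureless tunnel, translation along the tunnel axis is
# unconstrained: simulate Jacobian rows that never observe that axis.
rng = np.random.default_rng(0)
J = rng.normal(size=(500, 6))
J[:, 0] = 0.0                                       # no constraint on x-translation
eigvals, degenerate = degenerate_directions(J)
print(eigvals)       # smallest eigenvalue ~0: one degenerate direction
print(degenerate.T)  # ~ +/-[1, 0, 0, 0, 0, 0]: translation along the tunnel
```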
In conclusion, current underground SLAM methods focus only on localization and modeling relative to the robot's initial position, making it difficult to simultaneously obtain absolute geographic information in an underground coal mine, which is critical for some coal mine robotics applications.

References

  1. Bosse, M.; Zlot, R.; Flick, P. Zebedee: Design of a spring-mounted 3-D range sensor with application to mobile mapping. IEEE Trans. Robot. 2012, 28, 1104–1119.
  2. Huber, D.F.; Vandapel, N. Automatic 3D underground mine mapping. In Field and Service Robotics; Springer: Berlin/Heidelberg, Germany, 2003; pp. 497–506.
  3. Thrun, S.; Thayer, S.; Whittaker, W.; Baker, C.; Burgard, W.; Ferguson, D.; Hannel, D.; Montemerlo, M.; Morris, A.; Omohundro, Z.; et al. Autonomous exploration and mapping of abandoned mines. IEEE Robot. Autom. Mag. 2004, 11, 79–91.
  4. Zlot, R.; Bosse, M. Efficient Large-scale Three-dimensional Mobile Mapping for Underground Mines. J. Field Robot. 2014, 31, 758–779.
  5. Tardioli, D.; Riazuelo, L.; Sicignano, D.; Rizzo, C.; Lera, F.; Villarroel, J.L.; Montano, L. Ground robotics in tunnels: Keys and lessons learned after 10 years of research and experiments. J. Field Robot. 2019, 36, 1074–1101.
  6. Kasper, M.; McGuire, S.; Heckman, C. A Benchmark for Visual-Inertial Odometry Systems Employing Onboard Illumination. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Macau, China, 4–8 November 2019; pp. 5256–5263.
  7. Ebadi, K.; Chang, Y.; Palieri, M.; Stephens, A.; Hatteland, A.; Heiden, E.; Thakur, A.; et al. LAMP: Large-Scale Autonomous Mapping and Positioning for Exploration of Perceptually-Degraded Subterranean Environments. In Proceedings of the IEEE International Conference on Robotics and Automation, Paris, France, 31 May–31 August 2020; pp. 80–86.
  8. Wisth, D.; Camurri, M.; Das, S.; Fallon, M. Unified Multi-Modal Landmark Tracking for Tightly Coupled Lidar-Visual-Inertial Odometry. IEEE Robot. Autom. Lett. 2021, 6, 1004–1011.
  9. Ma, H.; Wang, Y.; Yang, L. Research on depth vision based mobile robot autonomous navigation in underground coal mine. J. China Coal Soc. 2020, 45, 2193–2206.
  10. Chen, X.Z.; Liu, R.J.; Zhang, S.; Zeng, H.; Yang, X.P.; Deng, H. Development of millimeter wave radar imaging and SLAM in underground coal mine environment. J. China Coal Soc. 2020, 45, 2182–2192.
  11. Yang, J.J.; Zhang, Q.; Wu, M.; Wang, C.; Chang, B.S.; Wang, X.L.; Ge, S.R. Research progress of autonomous perception and control technology for intelligent heading. J. China Coal Soc. 2020, 45, 2045–2055.
  12. Li, M.G.; Zhu, H.; You, S.Z.; Wang, L.; Tang, C.Q. Efficient Laser-Based 3D SLAM for Coal Mine Rescue Robots. IEEE Access 2019, 7, 14124–14138.
  13. Gao, S.G.; Gao, D.Y.; Ouyang, Y.B.; Chai, J.; Zhang, D.D.; Ren, W.Q. Intelligent mining technology and its equipment for medium thickness thin seam. J. China Coal Soc. 2020, 45, 997–2007.
  14. Zhang, J.; Singh, S. LOAM: Lidar Odometry and Mapping in Real-time. In Proceedings of Robotics: Science and Systems, Berkeley, CA, USA, 12–16 July 2014; Volume 2, pp. 1–9.
  15. Shan, T.; Englot, B. LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4758–4765.
  16. Qin, C.; Ye, H.; Pranata, C.E.; Han, J.; Zhang, S.; Liu, M. LINS: A Lidar-Inertial State Estimator for Robust and Efficient Navigation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 8899–8906.
  17. Bai, C.; Xiao, T.; Chen, Y.; Wang, H.; Zhang, F.; Gao, X. Faster-LIO: Lightweight Tightly Coupled Lidar-Inertial Odometry Using Parallel Sparse Incremental Voxels. IEEE Robot. Autom. Lett. 2022, 7, 4861–4868.
  18. Ye, H.; Chen, Y.; Liu, M. Tightly Coupled 3D Lidar Inertial Odometry and Mapping. In Proceedings of the International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 3144–3150.
  19. Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Rus, D. LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 5135–5142.
  20. Shan, T.; Englot, B.; Ratti, C.; Rus, D. LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 5692–5698.
  21. Lin, J.; Zheng, C.; Xu, W.; Zhang, F. R2LIVE: A Robust, Real-Time, LiDAR-Inertial-Visual Tightly-Coupled State Estimator and Mapping. IEEE Robot. Autom. Lett. 2021, 6, 7469–7476.
  22. Lin, J.; Zhang, F. R3LIVE: A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual tightly-coupled state Estimation and mapping package. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 10672–10678.
  23. Zuo, X.; Geneva, P.; Lee, W.; Liu, Y.; Huang, G. LIC-Fusion: LiDAR-Inertial-Camera Odometry. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 5848–5854.
  24. Zuo, X.; Yang, Y.; Geneva, P.; Lv, J.; Liu, Y.; Huang, G.; Pollefeys, M. LIC-Fusion 2.0: LiDAR-Inertial-Camera Odometry with Sliding-Window Plane-Feature Tracking. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 5112–5119.
  25. Wang, Y.; Ma, H. mVIL-Fusion: Monocular Visual-Inertial-LiDAR Simultaneous Localization and Mapping in Challenging Environments. IEEE Robot. Autom. Lett. 2023, 8, 504–511.
  26. Zhang, J.; Kaess, M.; Singh, S. On degeneracy of optimization-based state estimation problems. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 809–816.
  27. Zhang, J.; Singh, S. Laser-visual-inertial odometry and mapping with high robustness and low drift. J. Field Robot. 2018, 35, 1242–1264.
  28. Shao, W.; Vijayarangan, S.; Li, C.; Kantor, G. Stereo Visual Inertial LiDAR Simultaneous Localization and Mapping. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 370–377.
  29. Zhen, W.; Scherer, S. Estimating the Localizability in Tunnel-like Environments using LiDAR and UWB. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 4903–4908.