Use of Autonomous Vehicles in Nonintrusive Object Inspection

Traditional nonintrusive object inspection methods are complex or extremely expensive to apply in certain cases, such as the inspection of enormous objects, underwater or maritime inspection, or the unobtrusive inspection of a crowded place. With the latest advances in robotics, autonomous self-driving vehicles could be applied to this task.

Keywords: self-driving vehicle; artificial intelligence; classification

1. Introduction

The task of nonintrusive object inspection has become crucial due to the increase in goods and people flows in recent decades. With this increase, the number of potential threats grows as well: in crowded places, among large numbers of people, parcels, and baggage, it is easier to hide dangerous or illegal items that could be used by smugglers, terrorists, and other criminals. On the other hand, overcontrolling these flows is highly undesirable, as it creates obstacles and delays in free travel and delivery. Thus, there is an urgent need to develop nonintrusive, unobtrusive methods of passive surveillance to identify potential threats in crowded places or at public checkpoints. Besides the traditional nonintrusive methods used at stationary checkpoints, as described in [1], advances in robotics have made it possible to mount inspection equipment on mobile self-driven platforms. Such an approach is beneficial for maritime inspection and might also be used for on-land purposes. Nonintrusive object inspection using self-driving vehicles can be active or passive [1][2][3]. During an active inspection, different types of control points are established, where people and their belongings pass through security gates, are inspected with traditional equipment such as a handheld metal detector, or are searched with the help of specially trained animals (dogs, rats) [4][5][6]. During a passive inspection, security should be ensured without direct interaction with the object of the search, even without informing them about the searching process [7][8][9]. To this end, different object and behavior recognition techniques are employed, typically based on processing images received from surveillance video cameras installed in public places [10][11][12][13][14].
One relatively novel approach of the last few decades is social media analysis [15][16][17][18]. The idea behind this approach is to analyze social media profiles in order to identify suspicious persons at the early stages of their appearance in a public place [17]. These methods can be automated, using artificial intelligence and image recognition algorithms to identify specific persons, followed by a search of their social media profiles and an analysis of their activity. Such methods are relatively novel and have limited implementation at present due to unsatisfactory search and analysis quality. However, they are constantly improving with the growing knowledge base and with advances in the sensors and methods used for this type of analysis [19][20][21][22][23].
In the case of traditional nonintrusive object inspection, it is sometimes impossible to use fixed control points, and mobile, self-moving vehicles might help [24][25][26][27][28]. For example, such inspection could be applied to underwater object surveillance during border control, where it is difficult to arrange traditional nonintrusive inspection with a fixed control point for large vehicles, such as cruise liners, or over a vast area, such as a port or a gulf. In this case, smaller remotely controlled or self-driven vehicles equipped with sets of sensors, transducers, and other nonintrusive control technologies are used to carry out security inspection activities [29][30][31][32]. Such vehicles have proven effective for underwater inspection, but they still need to be improved with more advanced sensors and signal processing algorithms [33]. Similarly, autonomous self-moving on-land vehicles could be used for nonintrusive inspection in crowded places to increase public safety through early detection of suspicious objects. To make inspection more efficient, the process should also be automated. One possible solution is to employ self-moving vehicles carrying inspection equipment, so that they can autonomously inspect massive objects, crowds, or object flows by means of nonintrusive control; analyze the information; and send alarms to security officers when a suspicious situation is detected [34].

2. Typical Structure of Self-Moving Vehicles

Self-driven autonomous vehicles can serve as carriers for nonintrusive object inspection equipment. Typically, these vehicles contain a set of sensors to analyze the environment and detect obstacles on a preplanned route, and a processing unit to control the vehicle's movement and recalculate the route depending on the presence of obstacles [35][36].
Such vehicles are typically equipped with distance sensors (DS), used to measure the distance to the nearest obstacle, and video cameras (VC), which provide image analysis for obstacle detection and object classification. A GPS receiver determines the vehicle's current location; inertial measurement units (IMU) collect data from gyroscopes, accelerometers, and magnetometers about the vehicle's motion; and a control system processes all data and implements the motion control algorithms. For nonintrusive inspection purposes, the vehicle is additionally equipped with a nonintrusive object inspection system (NIOIS), which may include different types of sensors and data-processing software depending on the environment under surveillance and the inspection tasks: portable X-ray sensors, infrared cameras, acoustic sensors, odor sensors, etc. [37]. The following section reviews the sensors used for self-driving motion control.
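As an illustration of this structure, the sketch below groups the components named above (DS, VC, GPS, IMU, NIOIS) into a single vehicle object. All class and field names, units, and the obstacle threshold are illustrative assumptions, not part of any cited system.

```python
# Minimal structural sketch; names, fields, and the 1 m threshold are assumptions.
from dataclasses import dataclass, field

@dataclass
class Pose:
    lat: float = 0.0          # from the GPS receiver
    lon: float = 0.0
    heading_deg: float = 0.0  # from IMU magnetometer data

@dataclass
class InspectionVehicle:
    distance_m: list = field(default_factory=list)  # DS readings, meters
    camera_frame: bytes = b""                       # latest VC image
    pose: Pose = field(default_factory=Pose)        # GPS + IMU state
    niois: dict = field(default_factory=dict)       # X-ray/IR/acoustic/odor data

    def needs_replanning(self) -> bool:
        # Control-system hook: recalculate the route when any distance
        # sensor reports an obstacle closer than an assumed 1 m threshold.
        return any(d < 1.0 for d in self.distance_m)
```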

3. Sensors Used for Self-Driving Vehicles

Obstacle avoidance is essential in developing motion control systems for self-driven vehicles. This task is typically solved using different types of sensors and control algorithms executed on microcontrollers or microcomputers. These special-purpose computing devices typically provide pins and ports that accept sensor data via standard industrial wired or wireless data-transfer protocols.
In general, the most common methods for obstacle detection use either or both of two types of sensors: active sensors, such as various types of laser and ultrasonic distance sensors, and passive sensors, such as video cameras [38][39][40][41]. Sensors vary in their effective range, their ability to detect close and remote objects, operational speed, viewing angle, and overall price.
Different sensors serve different purposes in object detection. To detect close objects, ultrasonic sensors are the best choice; their effective distance and viewing angle are poor, but the price is very attractive. LIDAR and RADAR sensors, on the other hand, offer a 360° viewing angle and a good effective distance, but at quite a high price. Thus, the sensor type used to implement the obstacle avoidance algorithm should be chosen carefully, depending on the purpose of the self-driving vehicle. For example, for road vehicles, which themselves cost thousands of dollars and whose typical task is to detect relatively remote objects at distances of tens or hundreds of meters, it is reasonable to use RADAR and LIDAR sensors. For small self-driving drones, however, where it is important to keep the device relatively cheap and movement speeds are relatively slow, a set of ultrasonic sensors is worth considering. Finally, if the aim is to develop a relatively cheap device that performs image-based object recognition alongside obstacle detection, a video camera might be considered. In nonintrusive object inspection tasks using autonomous vehicles, a 360° viewing angle and speed detection are not the most-needed options, whereas detecting close objects is important and image recognition technology may be in use. Thus, a combination of an ultrasonic sensor and a video camera is a reasonable choice for this purpose.
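These trade-offs can be condensed into a toy selection heuristic, sketched below. The thresholds and price bands are assumptions made purely for illustration, not vendor figures.

```python
# Toy sensor-selection heuristic; all thresholds are illustrative assumptions.
def pick_sensors(budget_usd: float, max_range_m: float, needs_imaging: bool) -> list:
    suite = []
    if max_range_m <= 5:
        suite.append("ultrasonic")   # cheap, short range, narrow viewing angle
    if needs_imaging:
        suite.append("camera")       # enables image-based object recognition
    if max_range_m > 50 and budget_usd >= 1000:
        suite += ["radar", "lidar"]  # long range, 360° coverage, high price
    return suite

# The inspection case discussed above: close objects plus image recognition.
print(pick_sensors(budget_usd=300, max_range_m=3, needs_imaging=True))
# -> ['ultrasonic', 'camera']
```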

3.1. Obstacle Detection with the Video Camera

This method typically employs one or two video cameras to detect obstacles based on image-processing techniques.
In the case of a single camera, the following algorithms are used:
-
Algorithms based on known object recognition. These aim to recognize objects with previously learned parameters and to evaluate the distance to them based on known object dimensions. Such algorithms are easy to implement and can effectively recognize previously known objects, but they are useless under any uncertainty, e.g., when detecting obstacles that were not previously learned;
-
Motion-based algorithms. These analyze a sequence of images and compute the offset of each pixel. Based on the system's motion data, it is possible to detect obstacles that appear in the vehicle's way without prior information on their type or shape. In most cases, an optical flow algorithm is used to implement this method.
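A minimal sketch of this motion-based approach follows, using OpenCV's dense Farneback optical flow; the camera index and the flow-magnitude threshold are illustrative assumptions.

```python
# Dense optical flow between consecutive frames; threshold is an assumption.
import cv2

def obstacle_mask(prev_gray, gray, mag_threshold=4.0):
    # Per-pixel offsets between two frames (the pixel offsets described above).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Pixels moving faster than the expected ego-motion are flagged.
    return mag > mag_threshold

cap = cv2.VideoCapture(0)  # assumes a camera at index 0
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    print("moving pixels:", int(obstacle_mask(prev_gray, gray).sum()))
    prev_gray = gray
```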
In the case of stereo vision, i.e., combining information from two cameras, the following algorithms are used:
-
The stereo comparison method, based on searching for common patterns in the two images, computing differences in the binocular disparity maps, and estimating the distance to the obstacle from the horizontal displacement;
-
The homographic transformation method, which transforms the viewing angle of one camera into that of the other and treats any significant difference between the original and transformed images as an obstacle.
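For the stereo comparison method, a minimal sketch is given below, assuming a rectified image pair; the focal length and baseline values are illustrative assumptions.

```python
# Depth from horizontal displacement (disparity); camera geometry is assumed.
import cv2
import numpy as np

def depth_map(left_gray, right_gray, focal_px=700.0, baseline_m=0.12):
    # Block matching searches for common patterns in the two images.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan  # no reliable match found
    # Distance to the obstacle: depth = focal length * baseline / disparity.
    return focal_px * baseline_m / disparity
```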
In practice, a combination of the above-mentioned methods is typically used, providing sufficient information about the environment and obstacles; this information can then feed complex route-planning algorithms for self-driving vehicles. The significant disadvantages of camera-based obstacle detection are its high cost and the high computational complexity of the models used, which require neural networks and high-performance computing devices; these, in turn, also have high cost and energy consumption, which matters especially in the development of mobile devices [42][43].

3.2. Obstacle Detection Using Active Sensors

Active sensors use reflected signal analysis to compute the distance to an obstacle. This group of sensors includes LIDAR (light detection and ranging), RADAR (radio detection and ranging), ultrasonic sensors, infrared sensors, and others. The first three are the most popular in this group:
-
LIDAR uses laser radiation to calculate the distance to the target;
-
RADAR uses radio waves to calculate the angle, type, distance, and speed of the obstacle;
-
Ultrasonic sensors emit high-frequency sound pulses and measure the time until the reflected signal is detected to determine the distance to the object.
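The ultrasonic time-of-flight principle reduces to one line of arithmetic, as in the sketch below; the speed of sound and the example timing are assumptions.

```python
# Round-trip time of flight; 343 m/s assumes air at roughly 20 °C.
SPEED_OF_SOUND_MS = 343.0

def ultrasonic_distance_m(round_trip_s: float) -> float:
    # The pulse travels to the obstacle and back, so halve the round trip.
    return SPEED_OF_SOUND_MS * round_trip_s / 2.0

# An echo detected 5.8 ms after emission corresponds to roughly 1 m:
print(f"{ultrasonic_distance_m(0.0058):.2f} m")  # 0.99 m
```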
Compared to video cameras, the advantages of these methods are the ability to distinguish objects and obstacles under different lighting conditions, a higher operational range, and greater specificity and accuracy of obstacle detection. In addition, less computational capacity is required to calculate obstacle parameters, as the operating principle of these sensors does not involve computationally complex image processing. Each method has its pros and cons [44][45][46][47][48][49][50][51][52].

3.3. Rationale for the Obstacle Sensor Choice

Based on the provided analysis, it was concluded that for nonintrusive and unobtrusive object inspection with self-driving vehicles, in most cases RADAR or a video camera should be implemented as the primary sensor, with an ultrasonic sensor as an additional one.
Moreover, it should be mentioned that ultrasonic signal processing is itself a challenge, owing to nontrivial data-processing and control algorithms.

3.4. Obstacle Detection Algorithms

The motion of a self-driving vehicle in a changing environment is possible only when the vehicle can adapt its behavior to changing information about this environment. In most cases, based on a preliminary description of the environment, the route-planning module generates possible paths to a given destination. Using real-time sensor data, the self-driving algorithm should adjust the vehicle's motion to avoid collisions by recalculating basic path parameters. Using ultrasonic distance sensors as the main obstacle detectors is quite problematic in this case, because raw distance data provide no information on obstacle location. Complete information about an obstacle could therefore be obtained only by examining it from different angles. Since an on-duty self-driving inspection vehicle performs specific tasks, it cannot spend time collecting information from every possible angle. Instead, the system should extract the maximum possible information from the data collected during the vehicle's movements. A possible solution is to collect and merge data from previous measurements and create a continuous environment map based on them. However, such an approach, which simulates the environment using graphical primitives (lines, polygons, circles, etc.), requires high-resolution information and considerable data-preprocessing capacity. Even when all these requirements are met, the control system may produce false results due to information noise, errors during data collection, sensor faults, etc.
Alternatively, an environment map can be built from fullness (occupancy) matrices. There are two types of fullness matrices: raster matrices (fixed cell size) and adaptive cell size matrices, each with its pros and cons. A raster matrix requires a large but fixed memory volume. An adaptive cell size algorithm reduces the memory needed when obstacles are vast or large areas of the environment are empty; however, it demands substantial computational power and frequent data updates, which also complicate the use of probabilistic algorithms.
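A minimal raster (fixed-cell) fullness-matrix sketch is shown below; the cell size, grid extent, and update step are illustrative assumptions.

```python
# Fixed-cell occupancy grid; merging repeated measurements damps sensor noise.
import numpy as np

CELL_M = 0.1                 # assumed cell size, meters
grid = np.zeros((200, 200))  # 20 m x 20 m map of occupancy scores

def update(x_m: float, y_m: float, occupied: bool, step: float = 0.4):
    i, j = int(y_m / CELL_M), int(x_m / CELL_M)
    if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
        grid[i, j] += step if occupied else -step

# Two noisy readings of the same obstacle at (3.0 m, 4.5 m) reinforce each other:
update(3.0, 4.5, occupied=True)
update(3.0, 4.5, occupied=True)
print(grid[45, 30])  # 0.8
```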
Similar approaches are used with other obstacle detection sensors, but, as mentioned previously, they vary in effective sensor range, required computational capacity, and sensor price.

4. Self-Driving Vehicles Positioning Principles and Navigation Control

4.1. General Principles of Global Positioning

One of the most critical questions in uncrewed-vehicle motion control is determining the vehicle's geographical position. Nowadays, most solutions use the Global Positioning System for this purpose, which determines an object's position by analyzing satellite signals.
The idea of satellite navigation has its roots in the 1950s. When the first artificial Earth satellite was launched, American scientists led by Richard Kershner observed its signals and found that, thanks to the Doppler effect, the frequency of the received signal rises as the satellite approaches and drops as it moves away. The essence of the discovery is that knowing the observer's coordinates makes it possible to determine the satellite's position and speed, and knowing the satellite's position makes it possible to determine the observer's speed and coordinates [53][54].
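The underlying relation is the classical Doppler shift, shown below as a one-line calculation with illustrative numbers (a v << c approximation, not the full treatment used in real GNSS receivers).

```python
# Classical Doppler approximation; beacon frequency and velocity are examples.
C_MS = 299_792_458.0  # speed of light, m/s

def received_freq_hz(f_tx_hz: float, radial_v_ms: float) -> float:
    # radial_v_ms < 0 while the satellite approaches (frequency rises),
    # radial_v_ms > 0 while it recedes (frequency drops).
    return f_tx_hz * (1.0 - radial_v_ms / C_MS)

# A 400 MHz beacon approaching at 7 km/s appears about 9.3 kHz higher:
print(received_freq_hz(400e6, -7000.0) - 400e6)  # ~9340 Hz
```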
In 1973, the United States started the DNSS program, which aimed to launch satellites into medium Earth orbit, receive their signals with Earth-based equipment, and determine objects' geographical coordinates using specialized software. In December 1973, the program was renamed Navstar-GPS, the origin of its modern name, the Global Positioning System (GPS).
With the spread of cellular communication, it became possible to produce devices that combine geographic coordinate determination with data transmission, reporting the coordinates of various transportation objects such as cars, ships, and aircraft. These devices are called trackers. Gradually, with the development of microelectronics and software, as well as the growth of cellular coverage, it became possible not only to transmit an object's geographical coordinates but also to perform other functions, such as:
-
Calculating the object’s location, speed, and movement direction based on the GPS satellites’ signals;
-
Connecting external sensors to analog or digital inputs of a tracker;
-
Reading data from the vehicle’s onboard equipment via either serial port or CAN interface;
-
Storing a certain amount of data in the internal memory during communication breaks;
-
Transferring received data to the central server for the following processing.
Received data can either be stored on a local storage device and transferred later to a centralized database, or transmitted to the centralized database in real time via cellular communication channels.
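A store-and-forward sketch matching the tracker capabilities listed above is shown below; the record fields, buffer size, and JSON encoding are illustrative assumptions.

```python
# Tracker buffering during communication breaks; schema is an assumption.
import json
from collections import deque

buffer = deque(maxlen=10_000)  # internal memory for communication breaks

def record_fix(lat, lon, speed_kmh, heading_deg, can_data=None):
    buffer.append({"lat": lat, "lon": lon, "speed_kmh": speed_kmh,
                   "heading_deg": heading_deg, "can": can_data or {}})

def flush(link_up: bool) -> list:
    # With a live cellular link, drain the buffer toward the central server;
    # otherwise keep accumulating fixes locally.
    if not link_up:
        return []
    payloads = [json.dumps(rec) for rec in buffer]  # would be sent upstream
    buffer.clear()
    return payloads

record_fix(56.95, 24.11, 12.4, 87.0, {"fuel_l": 41.2})
print(len(flush(link_up=True)))  # 1
```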
Modern tracking systems are largely similar: they offer the same core functionality, and the main differences lie in the components used from different manufacturers and in the capabilities of particular implementations.
Therefore, in nonintrusive object inspection tasks with self-driving vehicles, the GPS module is responsible for determining the current vehicle position and adjusting the motion algorithm if needed. The most typical application is over large areas, for example, in a gulf for maritime border-control vehicles.

4.2. Software for Autonomous Moving Control

A vital component of any autonomous moving vehicle is software with built-in navigation maps and a reporting system. This software typically consists of two parts: the vehicle's onboard software, which controls its motion, and remote monitoring software, which analyzes and stores tracking and other data. The onboard part is typically implemented on microcontrollers or single-board computers, while the remote monitoring part can be implemented either as web-based software or as a desktop solution.
A web service is typically used to control a small number of devices, with the main focus on operational GPS location monitoring. A web application is available from any computer regardless of the installed operating system, but it typically has limited resources for controlling and analyzing the received data; customized desktop solutions are therefore used where more advanced and robust functionality is needed. Furthermore, data in web-based applications are usually not intended for long-term storage (as a rule, about one month), so reports can be generated only for a limited period, and it is hardly possible to customize such a program to specific needs and problems. In summary, the undoubted advantage of web-based software is that it can be started from any computer, tablet, or smartphone regardless of the operating system installed; its drawbacks are the limited possibility of adjusting it to specific needs and the limited amount of data it can process.
A special-purpose software system, by contrast, can be tailored to the demands of a specific task, including controlling the vehicle's autonomous movement and analyzing the data received from its additional sensors. In this case, all data received from the vehicle are stored on remote storage, remain constantly available, and can be further processed on more powerful computers. It is thus possible to perform complex analyses of the received data, including video image processing; build various analytical reports for any period; and work within local networks independently of Internet connections.
Thus, for nonintrusive object inspection with self-moving vehicles, the proper solution is simple microcontroller-based software onboard, controlling the vehicle's movement and sending sensor data to a remote device, combined with custom advanced software on a powerful remote computer that stores and analyzes both the vehicle's movement and position data and the additional nonintrusive inspection sensor data.
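As a final illustration of this onboard/remote split, the sketch below sends pose and inspection-sensor readings as JSON datagrams to a remote analysis host; the address, port, and message schema are all assumptions.

```python
# Onboard side of the split: periodically push telemetry to the remote server.
import json
import socket

REMOTE = ("192.0.2.10", 9000)  # illustrative monitoring-server address
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_telemetry(pose: dict, niois: dict) -> None:
    # One UDP datagram per sample; the remote desktop software stores
    # and analyzes both movement data and inspection-sensor data.
    sock.sendto(json.dumps({"pose": pose, "niois": niois}).encode(), REMOTE)

send_telemetry({"lat": 56.95, "lon": 24.11, "heading_deg": 87.0},
               {"acoustic_db": 62.5, "ir_c": 21.7})
```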

References

  1. Mamchur, D.; Peksa, J.; Le Clainche, S.; Vinuesa, R. Application and Advances in Radiographic and Novel Technologies Used for Non-Intrusive Object Inspection. Sensors 2022, 22, 2121.
  2. Tholen, B. The changing border: Developments and risks in border control management of Western countries. Int. Rev. Adm. Sci. 2010, 76, 259–278.
  3. Trauner, F.; Ripoll Servent, A. The Communitarization of the Area of Freedom, Security and Justice: Why Institutional Change does not Translate into Policy Change. JCMS J. Common Mark. Stud. 2016, 54, 1417–1432.
  4. Wasilewski, T.; Szulczynski, B.; Wojciechowski, M.; Kamysz, W.; Gebicki, J. A highly selective biosensor based on peptide directly derived from the HarmOBP7 aldehyde binding site. Sensors 2019, 19, 4284.
  5. Di-Poï, N.; Milinkovitch, M.C. Crocodylians evolved scattered multi-sensory micro-organs. Evodevo 2013, 4, 19.
  6. Qi, P.F.; Meng, Q.H.; Zeng, M. A CNN-based simplified data processing method for electronic noses. In Proceedings of the ISOCS/IEEE International Symposium on Olfaction and Electronic Nose, Montreal, QC, Canada, 28–31 May 2017; pp. 1–3.
  7. Polner, M. Customs and Illegal Trade: Old Game–New Rules. J. Borderl. Stud. 2015, 30, 329–344.
  8. Nguyen, H.D.; Cai, R.; Zhao, H.; Kot, A.C.; Wen, B. Towards More Efficient Security Inspection via Deep Learning: A Task-Driven X-ray Image Cropping Scheme. Micromachines 2022, 13, 565.
  9. Yang, H.; Zhang, D.; Qin, S.; Cui, T.J.; Miao, J. Real-Time Detection of Concealed Threats with Passive Millimeter Wave and Visible Images via Deep Neural Networks. Sensors 2021, 21, 8456.
  10. Wang, J.; Jiang, K.; Zhang, T.; Gu, X.; Liu, G.; Lu, X. Visible–Infrared Person Re-Identification via Global Feature Constraints Led by Local Features. Electronics 2022, 11, 2645.
  11. Capasso, P.; Cimmino, L.; Abate, A.F.; Bruno, A.; Cattaneo, G. A PNU-Based Methodology to Improve the Reliability of Biometric Systems. Sensors 2022, 22, 6074.
  12. Elordi, U.; Lunerti, C.; Unzueta, L.; Goenetxea, J.; Aranjuelo, N.; Bertelsen, A.; Arganda-Carreras, I. Designing Automated Deployment Strategies of Face Recognition Solutions in Heterogeneous IoT Platforms. Information 2021, 12, 532.
  13. Huszár, V.D.; Adhikarla, V.K. Live Spoofing Detection for Automatic Human Activity Recognition Applications. Sensors 2021, 21, 7339.
  14. Montaño-Serrano, V.M.; Jacinto-Villegas, J.M.; Vilchis-González, A.H.; Portillo-Rodríguez, O. Artificial Vision Algorithms for Socially Assistive Robot Applications: A Review of the Literature. Sensors 2021, 21, 5728.
  15. Palomino, M.A.; Aider, F. Evaluating the Effectiveness of Text Pre-Processing in Sentiment Analysis. Appl. Sci. 2022, 12, 8765.
  16. Slabchenko, O.; Sydorenko, V.; Siebert, X. Development of models for imputation of data from social networks on the basis of an extended matrix of attributes. East.-Eur. J. Enterp. Technol. 2016, 4, 24–34.
  17. Sydorenko, V.; Kravchenko, S.; Rychok, Y.; Zeman, K. Method of Classification of Tonal Estimations Time Series in Problems of Intellectual Analysis of Text Content. Transp. Res. Procedia 2020, 44, 102–109.
  18. Romanovs, A.; Bikovska, J.; Peksa, J.; Vartiainen, T.; Kotsampopoulos, P.; Eltahawy, B.; Lehnhoff, S.; Brand, M.; Strebko, J. State of the Art in Cybersecurity and Smart Grid Education. In Proceedings of the 2021 IEEE 19th International Conference on Smart Technologies (EUROCON), Lviv, Ukraine, 6–8 July 2021.
  19. Williams, J.M. The safety/security nexus and the humanitarianisation of border enforcement. Geogr. J. 2016, 182, 27–37.
  20. Iqbal, A.; Amin, R.; Iqbal, J.; Alroobaea, R.; Binmahfoudh, A.; Hussain, M. Sentiment Analysis of Consumer Reviews Using Deep Learning. Sustainability 2022, 14, 10844.
  21. Al Naqbi, N.; Al Momani, N.; Davies, A. The Influence of Social Media on Perceived Levels of National Security and Crisis: A Case Study of Youth in the United Arab Emirates. Sustainability 2022, 14, 10785.
  22. Tesfagergish, S.G.; Kapočiūtė-Dzikienė, J.; Damaševičius, R. Zero-Shot Emotion Detection for Semi-Supervised Sentiment Analysis Using Sentence Transformers and Ensemble Learning. Appl. Sci. 2022, 12, 8662.
  23. Shahzalal, M.; Adnan, H.M. Attitude, Self-Control, and Prosocial Norm to Predict Intention to Use Social Media Responsibly: From Scale to Model Fit towards a Modified Theory of Planned Behavior. Sustainability 2022, 14, 9822.
  24. Sandino, J.; Vanegas, F.; Maire, F.; Caccetta, P.; Sanderson, C.; Gonzalez, F. UAV Framework for Autonomous Onboard Navigation and People/Object Detection in Cluttered Indoor Environments. Remote Sens. 2020, 12, 3386.
  25. Recalde, L.F.; Guevara, B.S.; Carvajal, C.P.; Andaluz, V.H.; Varela-Aldás, J.; Gandolfo, D.C. System Identification and Nonlinear Model Predictive Control with Collision Avoidance Applied in Hexacopters UAVs. Sensors 2022, 22, 4712.
  26. Khan, M.F.; Yau, K.-L.A.; Ling, M.H.; Imran, M.A.; Chong, Y.-W. An Intelligent Cluster-Based Routing Scheme in 5G Flying Ad Hoc Networks. Appl. Sci. 2022, 12, 3665.
  27. Ming, Z.; Huang, H. A 3D Vision Cone Based Method for Collision Free Navigation of a Quadcopter UAV among Moving Obstacles. Drones 2021, 5, 134.
  28. Cui, X.; Zhang, X.; Zhao, Z. Real-Time Safety Decision-Making Method for Multirotor Flight Strategies Based on TOPSIS Model. Appl. Sci. 2022, 12, 6696.
  29. Nikolakopoulos, K.; Kyriou, A.; Koukouvelas, I.; Zygouri, V.; Apostolopoulos, D. Combination of Aerial, Satellite, and UAV Photogrammetry for Mapping the Diachronic Coastline Evolution: The Case of Lefkada Island. ISPRS Int. J. Geo-Inf. 2019, 8, 489.
  30. Avola, D.; Cinque, L.; Di Mambro, A.; Diko, A.; Fagioli, A.; Foresti, G.L.; Marini, M.R.; Mecca, A.; Pannone, D. Low-Altitude Aerial Video Surveillance via One-Class SVM Anomaly Detection from Textural Features in UAV Images. Information 2022, 13, 2.
  31. Marzoughi, A.; Savkin, A.V. Autonomous Navigation of a Team of Unmanned Surface Vehicles for Intercepting Intruders on a Region Boundary. Sensors 2021, 21, 297.
  32. Tomar, I.; Sreedevi, I.; Pandey, N. State-of-Art Review of Traffic Light Synchronization for Intelligent Vehicles: Current Status, Challenges, and Emerging Trends. Electronics 2022, 11, 465.
  33. Zhao, Z.; Hu, Q.; Feng, H.; Feng, X.; Su, W. A Cooperative Hunting Method for Multi-AUV Swarm in Underwater Weak Information Environment with Obstacles. J. Mar. Sci. Eng. 2022, 10, 1266.
  34. Cetin, K.; Tugal, H.; Petillot, Y.; Dunnigan, M.; Newbrook, L.; Erden, M.S. A Robotic Experimental Setup with a Stewart Platform to Emulate Underwater Vehicle-Manipulator Systems. Sensors 2022, 22, 5827.
  35. Coppola, M.; McGuire, K.N.; Scheper, K.Y.W.; de Croon, G.C.H.E. Onboard communication-based relative localization for collision avoidance in micro air vehicle teams. Auton. Robot. 2018, 42, 1787–1805.
  36. Bresson, G.; Alsayed, Z.; Yu, L.; Glaser, S. Simultaneous localization and mapping: A survey of current trends in autonomous driving. IEEE Trans. Intell. Veh. 2017, 2, 194–220.
  37. Ranyal, E.; Sadhu, A.; Jain, K. Road Condition Monitoring Using Smart Sensing and Artificial Intelligence: A Review. Sensors 2022, 22, 3044.
  38. Fraundorfer, F.; Engels, C.; Nister, D. Topological mapping, localization and navigation using image collections. In Proceedings of the International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October 2007–2 November 2007; pp. 3872–3877.
  39. Goedemé, T.; Nuttin, M.; Tuytelaars, T.; Van Gool, L. Omnidirectional vision based topological navigation. Int. J. Comput. Vis. 2007, 74, 219–236.
  40. Kim, D.-H.; Shin, K.; Han, C.-S.; Lee, J.Y. Sensor-based navigation of a car-like robot based on bug family algorithms. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 2013, 227, 1224–1241.
  41. Mueller, M.W.; Hamer, M.; D’Andrea, R. Fusing ultrawide band range measurements with accelerometers and rate gyroscopes for quadrocopter state estimation. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 1730–1736.
  42. Budiharto, W. Intelligent Surveillance Robot with Obstacle Avoidance Capabilities Using Neural Network. Comput. Intell. Neurosci. 2015, 2015, 745823.
  43. Badrloo, S.; Varshosaz, M.; Pirasteh, S.; Li, J. Image-Based Obstacle Detection Methods for the Safe Navigation of Unmanned Vehicles: A Review. Remote Sens. 2022, 14, 3824.
  44. Zhang, X.; Zhou, M.; Qiu, P.; Huang, Y.; Li, J. Radar and vision fusion for the real-time obstacle detection and identification. Ind. Robot. 2019, 46, 391–395.
  45. Ci, W.; Xu, T.; Lin, R.; Lu, S. A Novel Method for Unexpected Obstacle Detection in the Traffic Environment Based on Computer Vision. Appl. Sci. 2022, 12, 8937.
  46. Neelam Jaikishore, C.; Podaturpet Arunkumar, G.; Jagannathan Srinath, A.; Vamsi, H.; Srinivasan, K.; Ramesh, R.K.; Jayaraman, K.; Ramachandran, P. Implementation of Deep Learning Algorithm on a Custom Dataset for Advanced Driver Assistance Systems Applications. Appl. Sci. 2022, 12, 8927.
  47. Hachaj, T. Potential Obstacle Detection Using RGB to Depth Image Encoder–Decoder Network: Application to Unmanned Aerial Vehicles. Sensors 2022, 22, 6703.
  48. Hussain, M.; Ali, N.; Hong, J.-E. Vision beyond the Field-of-View: A Collaborative Perception System to Improve Safety of Intelligent Cyber-Physical Systems. Sensors 2022, 22, 6610.
  49. Buckman, N.; Hansen, A.; Karaman, S.; Rus, D. Evaluating Autonomous Urban Perception and Planning in a 1/10th Scale MiniCity. Sensors 2022, 22, 6793.
  50. Elamin, A.; El-Rabbany, A. UAV-Based Multi-Sensor Data Fusion for Urban Land Cover Mapping Using a Deep Convolutional Neural Network. Remote Sens. 2022, 14, 4298.
  51. Prochowski, L.; Szwajkowski, P.; Ziubiński, M. Research Scenarios of Autonomous Vehicles, the Sensors and Measurement Systems Used in Experiments. Sensors 2022, 22, 6586.
  52. Kelly, C.; Wilkinson, B.; Abd-Elrahman, A.; Cordero, O.; Lassiter, H.A. Accuracy Assessment of Low-Cost Lidar Scanners: An Analysis of the Velodyne HDL–32E and Livox Mid–40’s Temporal Stability. Remote Sens. 2022, 14, 4220.
  53. Cullen, A.; Mazhar, M.K.A.; Smith, M.D.; Lithander, F.E.; Ó Breasail, M.; Henderson, E.J. Wearable and Portable GPS Solutions for Monitoring Mobility in Dementia: A Systematic Review. Sensors 2022, 22, 3336.
  54. Caporali, A.; Zurutuza, J. Broadcast Ephemeris with Centimetric Accuracy: Test Results for GPS, Galileo, Beidou and Glonass. Remote Sens. 2021, 13, 4185.