Nonintrusive object inspection has become a crucial task due to the growth of goods and passenger flows in recent decades. With this growth, the number of potential threats increases as well: in crowded places, among many people, parcels, and pieces of baggage, it is easier to hide dangerous or illegal items that could be used by smugglers, terrorists, and other criminals. On the other hand, overcontrolling these flows is highly undesirable, as it creates obstacles and delays for free travel and delivery. Thus, there is an urgent need for nonintrusive, unobtrusive methods of passive surveillance that identify potential threats in crowded places or at public checkpoints. Besides the traditional nonintrusive methods used at stationary checkpoints, as described in 
, with the advance in robotics, it has become possible to extend inspection systems’ equipment to mobile self-driven platforms. Such an approach is beneficial for maritime inspection and might also be used for on-land purposes. Nonintrusive object inspection using self-driving vehicles can be either active or passive 
. During an active inspection, different types of control points are established, where people and their belongings pass through security gates, are inspected with traditional equipment, such as a handheld metal detector, or are searched with the help of specially trained animals (dogs, rats) 
. During a passive inspection, security should be ensured without direct interaction with the object of the search, without even informing the person about the search process 
. With this aim, different object and behavior recognition techniques are employed, typically based on processing the images received from surveillance video cameras installed in public places 
One relatively novel approach in the last few decades is social media analysis 
. The idea behind this approach is to analyze social media profiles in order to identify suspicious persons at the early stages of their appearance in a public place 
. These methods can be automatic, using artificial intelligence and image recognition algorithms to identify specific persons, followed by a search of their social media profiles and an analysis of their activity. Such methods are relatively novel and have limited implementation nowadays due to unsatisfactory search and analysis quality. However, they are constantly improving with the growing knowledge base and advances in the sensors and methods used for this type of analysis 
In the case of traditional nonintrusive object inspection, it is sometimes impossible to use fixed control points. Thus, mobile self-moving vehicles might help 
. As an example, such inspection could be provided for underwater object surveillance during border control, when it is difficult to perform a traditional nonintrusive inspection at a fixed control point for large vessels, such as cruise liners, or over a vast area, such as a port or a gulf. In this case, smaller remotely controlled or self-driven vehicles, equipped with a set of sensors, transducers, and other technologies for nonintrusive control, are used to carry out security inspection activities 
. Such vehicles have proven to be effective for underwater inspection, but they still need to be improved with more advanced sensors and signal processing algorithms 
. Similarly, autonomous self-moving on-land vehicles could be used for nonintrusive inspection in crowded places, increasing public safety through early detection of suspicious objects. To make the inspection more efficient, the process should also be automated. One possible solution is to employ self-moving vehicles carrying inspection equipment, so that they can autonomously inspect massive objects, crowds, or object flows by means of nonintrusive control; analyze the information; and send alarms to security officers in case a suspicious situation is detected 
2. Typical Structure of Self-Moving Vehicles
As a carrier for nonintrusive object inspection equipment, self-driven autonomous vehicles can be used. Typically, these vehicles contain a set of sensors to analyze the environment and detect obstacles along a preplanned route, and a processing unit to control the vehicle’s movement and recalculate the route depending on the presence of obstacles 
Such vehicles are typically equipped with distance sensors (DS), used to measure the distance to the nearest obstacle, and video cameras (VC), which provide image analysis for obstacle detection and object classification. A GPS receiver determines the vehicle’s current location; inertial measurement units (IMU) collect data from gyroscopes, accelerometers, and magnetometers about the vehicle’s motion; and a control system processes all the data and implements the motion control algorithms. For nonintrusive inspection purposes, the vehicle is additionally equipped with a nonintrusive object inspection system (NIOIS), which may include different types of sensors and data-processing software depending on the environment under surveillance and the inspection tasks: portable X-ray sensors, infrared cameras, acoustic sensors, odor sensors, etc. 
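The component set described above can be sketched as a simple data structure. The following Python sketch is illustrative only; all class, field, and sensor names are hypothetical and not taken from any real control stack:

```python
from dataclasses import dataclass, field

@dataclass
class InspectionVehicle:
    # motion-control sensors: distance sensors (DS) and video cameras (VC)
    distance_sensors: list = field(default_factory=list)
    cameras: list = field(default_factory=list)
    has_gps: bool = True   # GPS receiver for current location
    has_imu: bool = True   # gyroscopes, accelerometers, magnetometers
    # payload of the nonintrusive object inspection system (NIOIS)
    niois_sensors: list = field(default_factory=list)

    def all_sensors(self):
        # everything the control system must poll and process
        return self.distance_sensors + self.cameras + self.niois_sensors

vehicle = InspectionVehicle(
    distance_sensors=["ultrasonic-front", "ultrasonic-rear"],
    cameras=["front-camera"],
    niois_sensors=["infrared-camera", "acoustic-sensor"],
)
print(len(vehicle.all_sensors()))  # 5
```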
. The following section reviews sensors used for self-driving motion control.
3. Sensors Used for Self-Driving Vehicles
Obstacle avoidance is essential in developing motion control systems for self-driven vehicles. This task is typically solved using different types of sensors and control algorithms executed on microcontrollers or microcomputers. These special-purpose computing devices typically provide pins and ports to accept sensor data via standard industrial wired or wireless data-transfer protocols.
In general, the most common method for obstacle detection uses either or both of two types of sensors: active sensors, such as various types of laser and ultrasonic distance sensors; and passive sensors, such as video cameras 
. All sensors vary in their effective range, ability to detect near and distant objects, operational speed, viewing angle, and overall price.
Different sensors serve different purposes in object detection. To detect close objects, ultrasonic sensors are the best choice: although their effective distance and viewing angle are poor, their price is very attractive. On the other hand, LIDAR and RADAR sensors offer a 360° viewing angle and a good effective distance, but at quite a high price. Thus, the choice of sensor type for implementing an obstacle avoidance algorithm should be made carefully, depending on the purpose of the self-driving vehicle. For example, for road vehicles, which themselves cost thousands of dollars and whose typical task consists of detecting relatively remote objects within tens or hundreds of meters, it is reasonable to use RADAR and LIDAR sensors. However, for small self-driving drones, which should remain relatively cheap and which move at relatively low speed, a set of ultrasonic sensors is a reasonable option. Finally, if the aim is to develop a relatively cheap device that combines obstacle detection with image-based object recognition, a video camera might be considered. In nonintrusive object inspection tasks with autonomous vehicles, a 360° viewing angle and speed detection are not the most-needed options, while detecting close objects is important and image recognition technology might be in use. Thus, it is reasonable to use a combination of an ultrasonic sensor and a video camera for this purpose.
3.1. Obstacle Detection with the Video Camera
This method typically employs one or two video cameras to detect obstacles based on image-processing techniques.
In the case of a single camera, the following algorithms are used:
Algorithms based on known object recognition. These aim to recognize objects whose parameters were learned in advance and to evaluate the distance to them based on their known dimensions. Such algorithms are easy to implement and effectively recognize previously known objects, but they are useless under uncertainty, e.g., for detecting obstacles that were not previously learned;
Motion-based algorithms. These analyze a sequence of images and compute the offset of each pixel. Combined with the system’s motion data, this makes it possible to detect obstacles that appear in the vehicle’s path without prior information on their type or shape. In most cases, an optical flow algorithm is used to implement this method.
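As a toy illustration of the motion-based idea, the following Python sketch estimates the horizontal shift of a single scanline between two consecutive frames by minimizing the sum of absolute differences. A real optical flow implementation works on full 2D images, but the underlying principle of matching pixels across frames is the same; all names and values here are hypothetical:

```python
def estimate_shift(prev_row, curr_row, max_shift=3):
    """Estimate the horizontal pixel offset between two scanlines by
    minimizing the mean absolute difference over candidate shifts
    (a 1D stand-in for a full optical flow computation)."""
    best_shift, best_cost = 0, float("inf")
    n = len(prev_row)
    for s in range(-max_shift, max_shift + 1):
        cost, count = 0.0, 0
        for x in range(n):
            if 0 <= x + s < n:              # compare only overlapping pixels
                cost += abs(prev_row[x] - curr_row[x + s])
                count += 1
        cost /= count
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift

prev = [0, 0, 10, 50, 10, 0, 0, 0]
curr = [0, 0, 0, 10, 50, 10, 0, 0]   # same bright pattern, one pixel right
print(estimate_shift(prev, curr))    # 1
```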
In the case of stereo vision, i.e., combining information from two cameras, the following algorithms are used:
The stereo matching method, based on searching for common patterns in the two images, computing a binocular disparity map from their differences, and estimating the distance to the obstacle from the horizontal displacement;
The homographic transformation method, which warps the image from one camera into the viewpoint of the other camera and treats any significant difference between the warped and the actual image as an obstacle.
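The distance estimate in the stereo matching method follows from triangulation: for a focal length f (in pixels), a baseline B between the two cameras, and a horizontal disparity d, the depth is Z = fB/d. A minimal sketch, with hypothetical rig parameters:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereo triangulation: depth Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# hypothetical rig: focal length 700 px, baseline 0.12 m,
# matched feature displaced by 21 px between the two images
print(depth_from_disparity(700, 0.12, 21))  # 4.0 (meters)
```

Note that depth resolution degrades with distance: the same one-pixel disparity error corresponds to a much larger depth error for remote objects than for close ones.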
In practice, a combination of the above-mentioned methods is typically used, providing sufficient information about the environment and obstacles. This information can then feed complex route-planning algorithms for self-driving vehicles. Significant disadvantages of camera-based obstacle detection are its high cost and the high computational complexity of the models used, which often require neural networks and high-performance computing devices; these, in turn, also have high cost and energy consumption, factors that are especially important in the development of mobile devices 
3.2. Obstacle Detection Using Active Sensors
Active sensors use reflected signal analysis to compute the distance to an obstacle. This group of sensors includes LIDAR (light detection and ranging), RADAR (radio detection and ranging), ultrasonic sensors, infrared sensors, and others. The first three are the most popular in this group:
LIDAR uses laser radiation to calculate the distance to the target;
RADAR uses radio waves to calculate the angle, type, distance, and speed of the obstacle;
Ultrasonic sensors use high-frequency sound pulses and measure the time until the reflected signal is detected to determine the distance to the object.
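For ultrasonic sensors, the distance computation itself is trivial, which is one reason for their low cost: the measured echo time covers the round trip to the obstacle and back, so the one-way distance is half the time of flight multiplied by the speed of sound. A minimal sketch (the assumed speed of sound, 343 m/s, corresponds to air at roughly 20 °C):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C (assumed constant)

def ultrasonic_distance(echo_time_s):
    """The pulse travels to the obstacle and back, so the one-way
    distance is half the round-trip time times the speed of sound."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

print(ultrasonic_distance(0.01))  # a 10 ms echo -> about 1.715 m
```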
Compared to video cameras, the advantages of these methods are the ability to detect objects and obstacles under different lighting conditions, a higher operational range, and greater specificity and accuracy of obstacle detection. In addition, less computational capacity is required to calculate obstacle parameters, as the operating principle of these sensors does not involve computationally complex image processing. Each method has its pros and cons 
3.3. Rationale for the Obstacle Sensor Choice
Based on the provided analysis, it was concluded that for nonintrusive and unobtrusive object inspection with self-driving vehicles, in most cases RADAR or a video camera should be used as the primary sensor and an ultrasonic sensor as an additional one.
Moreover, it should be mentioned that ultrasonic signal processing is a challenge of its own, requiring nontrivial data-processing and control algorithms.
3.4. Obstacle Detection Algorithms
The motion of a self-driving vehicle in a changing environment is possible only when the vehicle can adapt its behavior to changing information about this environment. In most cases, based on a preliminary description of the environment, the route-planning module generates possible paths to a given destination. Using real-time sensor data, the self-driving algorithm should adjust the vehicle’s motion to avoid collisions by recalculating the basic path parameters. Using ultrasonic distance sensors as the main obstacle detectors is quite problematic in this case, because raw distance data provide no information on the obstacle’s location; complete information on an obstacle can only be obtained by examining it from different angles. As an on-duty self-driving inspection vehicle performs specific tasks, it cannot afford to spend time collecting information from every possible angle. Instead, the system should extract the maximum possible information from the data collected during the vehicle’s movements. A possible solution is to collect and merge data from previous measurements, creating a continuous environment map from them. However, such an approach, with the environment modeled using graphical primitives (lines, polygons, circles, etc.), requires high-resolution information and considerable data-preprocessing capacity. Even when all these requirements are met, the control system can produce false results due to information noise, errors during data collection, sensor faults, etc.
Alternatively, an environment map can be built as an occupancy grid (fullness matrix). There are two types of such matrices: the raster matrix (fixed cell size) and the adaptive cell size matrix. Each type has its pros and cons. A raster matrix requires a large but fixed memory volume. An adaptive cell size algorithm reduces the memory needed in the case of vast obstacles or large unfilled areas of the environment; however, it requires considerable computational power and frequent data updates, which also complicate the use of probabilistic algorithms.
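A raster fullness matrix can be sketched in a few lines of Python. The grid below simply counts detections per fixed-size cell and treats a cell as occupied once enough independent measurements have landed in it, which is one simple way to suppress information noise and sporadic sensor faults. All names and the hit threshold are illustrative assumptions:

```python
class OccupancyGrid:
    """Raster (fixed-cell) fullness matrix: the environment is divided
    into square cells, and each cell accumulates a hit counter from
    detections merged across the vehicle's movements."""
    def __init__(self, width_m, height_m, cell_m):
        self.cell = cell_m
        self.cols = int(width_m / cell_m)
        self.rows = int(height_m / cell_m)
        # fixed memory volume, allocated once, regardless of scene content
        self.hits = [[0] * self.cols for _ in range(self.rows)]

    def add_detection(self, x_m, y_m):
        col, row = int(x_m / self.cell), int(y_m / self.cell)
        if 0 <= row < self.rows and 0 <= col < self.cols:
            self.hits[row][col] += 1

    def is_occupied(self, x_m, y_m, threshold=2):
        # requiring several hits filters out isolated noisy readings
        col, row = int(x_m / self.cell), int(y_m / self.cell)
        return self.hits[row][col] >= threshold

grid = OccupancyGrid(width_m=10.0, height_m=10.0, cell_m=0.5)
for _ in range(3):                 # the same obstacle seen on three passes
    grid.add_detection(2.3, 4.1)
print(grid.is_occupied(2.3, 4.1))  # True
print(grid.is_occupied(7.0, 7.0))  # False
```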
Similar approaches are used with other obstacle detection sensors, but they vary in effective range, computational capacity, and sensor price, as mentioned previously.
4. Self-Driving Vehicles Positioning Principles and Navigation Control
4.1. General Principles of Global Positioning
One of the most critical questions in uncrewed-vehicle motion control is determining its geographical position. Nowadays, most solutions use the Global Positioning System for this purpose, which detects an object’s position through satellite signal analysis.
The idea of satellite navigation takes its roots in the 1950s. When the first artificial Earth satellite was launched, American scientists led by Richard Kershner observed its signals and found that, thanks to the Doppler effect, the frequency of the received signal rises as the satellite approaches and drops as it moves away. The essence of the discovery is that knowing the observer’s coordinates makes it possible to determine the satellite’s position and speed, while knowing the satellite’s position makes it possible to determine the observer’s speed and coordinates 
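The Doppler relationship underlying this discovery can be illustrated numerically. The sketch below uses the classical first-order approximation f_received ≈ f₀(1 + v_r/c); the carrier frequency and radial velocity are hypothetical round numbers chosen only to show the order of magnitude of the shift:

```python
C = 299_792_458.0  # speed of light in m/s (radio-wave propagation)

def received_frequency(f_transmit_hz, radial_velocity_ms):
    """First-order Doppler approximation: a positive radial velocity
    (satellite approaching the observer) raises the received frequency;
    a negative one lowers it."""
    return f_transmit_hz * (1.0 + radial_velocity_ms / C)

f0 = 1_575_420_000.0                          # hypothetical carrier, Hz
shift = received_frequency(f0, 1_000.0) - f0  # approaching at 1 km/s
print(round(shift))  # about 5255 Hz above the transmitted frequency
```

Tracking how this shift changes sign as the satellite passes overhead is exactly the measurement the early Doppler-based positioning systems exploited.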
In 1973, the United States started the DNSS program, later renamed Navstar, which aimed to launch satellites into medium Earth orbit, receive their signals with Earth-based equipment, and determine objects’ geographical coordinates using specialized software. The program received its modern name, Global Positioning System (GPS), in December 1973.
With the spread of cellular communication, it became possible to combine geographic coordinate determination and data transmission in a single device, used to report the coordinates of various transportation objects, such as cars, ships, and aircraft. These devices are called trackers. Gradually, with the development of microelectronics and software, as well as the growth of cellular coverage, it became possible not only to transmit an object’s geographical coordinates but also to perform other functions, such as:
Calculating the object’s location, speed, and movement direction based on the GPS satellites’ signals;
Connecting external sensors to analog or digital inputs of a tracker;
Reading data from the vehicle’s onboard equipment via either serial port or CAN interface;
Storing a certain amount of data in the internal memory during communication breaks;
Transferring received data to the central server for the following processing.
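The last two items, storing data in internal memory during communication breaks and transferring them to the central server, can be sketched as follows. Everything here (class names, fields, and coordinate values) is a hypothetical illustration, not a real tracker protocol:

```python
from dataclasses import dataclass

@dataclass
class TrackerRecord:
    lat: float
    lon: float
    speed_kmh: float
    heading_deg: float
    timestamp: float  # seconds since trip start

class Tracker:
    """Buffers records in internal memory while the cellular link is
    down and flushes them to the server once the link is restored."""
    def __init__(self):
        self.buffer = []   # internal memory used during breaks
        self.online = False
        self.sent = []     # stand-in for records delivered to the server

    def report(self, record):
        self.buffer.append(record)
        if self.online:
            self.flush()

    def flush(self):
        self.sent.extend(self.buffer)  # stand-in for a server upload
        self.buffer.clear()

tracker = Tracker()
tracker.report(TrackerRecord(46.48, 30.72, 12.5, 90.0, 0.0))  # link down: buffered
tracker.online = True
tracker.report(TrackerRecord(46.49, 30.73, 12.0, 91.0, 5.0))  # link up: both delivered
print(len(tracker.sent), len(tracker.buffer))  # 2 0
```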
Received data can either be stored on a local storage device and transferred later to a centralized database, or transmitted to the centralized database in real time via cellular communication channels.
Modern tracking systems are broadly similar: they offer essentially the same functionality, and the main differences concern the components used and the capabilities of particular implementations.
Therefore, for nonintrusive object inspection tasks with self-driving vehicles, the GPS module is responsible for determining the vehicle’s current position and, if needed, adjusting its motion algorithm. The most typical application is over large areas, for example, in a gulf for maritime border-control vehicles.
4.2. Software for Autonomous Moving Control
A vital component of any autonomous moving vehicle is software with built-in navigation maps and a reporting system. This software typically consists of two parts: the onboard software that controls the vehicle’s motion, and remote monitoring software that analyzes and stores tracking and other data. The onboard part is typically implemented on microcontrollers or single-board computers, while the remote monitoring part can be implemented either as web-based software or as a desktop solution.
A web service is typically used to control a small number of devices, with the main focus on operational GPS location monitoring. A web application is accessible from any computer, regardless of the installed operating system. However, such applications typically have limited resources for controlling and analyzing the received data, so customized desktop systems are used where more advanced and robust processing is needed. Furthermore, data in web-based applications are usually not stored for long, as a rule about one month, so reports can only be generated for a limited period. It is also hardly possible to customize a web-based program to specific needs and problems. In summary, the undoubted advantage of web-based software is that it can be launched from any computer, tablet, or smartphone regardless of the operating system, but it is limited in its customizability and in the amount of data it can process.
A special-purpose software system, by contrast, can be tailored to the demands of a specific task, including controlling the vehicle’s autonomous movements and analyzing data received from its additional sensors. In this case, all the data received from the vehicle are stored on remote storage, remain constantly available, and can be further processed on more powerful computers. Thus, it is possible to perform complex analyses of the received data, including video image processing; to build various analytical reports for any period; and to work within local networks independently of an Internet connection.
Thus, for nonintrusive object inspection with self-moving vehicles, the proper solution is to use simple microcontroller-based software to control the vehicle’s motion and send sensor data to a remote device, together with custom advanced software on a powerful remote computer to store and analyze both the vehicle’s movement and position data and the data from the additional nonintrusive inspection sensors.