Millimeter-Wave Radar and Convolutional Neural Network: History

A framework for simultaneously tracking and recognizing drone targets using a low-cost, small-sized millimeter-wave radar is presented. The radar collects the reflected signals of multiple targets in the field of view, including drone and non-drone targets. Analyzing the received signals allows multiple targets to be distinguished because of their different reflection patterns.

  • mmWave radar
  • cloud points
  • target tracking

1. Introduction

In recent years, unmanned aerial vehicles (UAVs), such as drones, have received significant attention for performing tasks in different domains. This is because of their low cost, high coverage, and vast mobility, as well as their capability to perform different operations using small-scale sensors [1]. Owing to technological advancements, smartphones can now operate drones instead of traditional remote controllers. In addition, drone technology can provide live video streaming and image capturing, as well as make autonomous decisions based on these data. Consequently, artificial intelligence techniques have been utilized in the provisioning of civilian and military services [2]. In this context, drones have been adopted for express shipping and delivery [3,4,5], natural disaster prevention [6,7], geographical mapping [8], search and rescue operations [9], aerial photography for journalism and film making [10], providing essential materials [11], border control surveillance [12], and building safety inspection [13]. Even though drone technology offers a multitude of benefits, it raises mixed concerns about how it will be used in the future. Drones pose many potential threats, including invasion of privacy, smuggling, espionage, flight disruption, human injury, and terrorist attacks. These threats compromise aviation operations and public safety. Consequently, it has become increasingly necessary to detect, track, and recognize drone targets and to make decisions in certain situations, such as detonating or jamming unwanted drone targets.
The detection of unwanted drones poses significant challenges to observation systems, especially in urban areas, as drones are tiny and move at different rates and heights compared to other moving targets [2]. For target recognition, optic-based systems that rely on cameras provide more detailed information than radio-frequency (RF)-based systems, but they require a clear frontal view as well as ideal light and weather conditions [14,15], as shown in Figure 1A. Both residential and business environments are less accepting of cameras for target recognition because of their intrusive nature [16]. Although RF-based systems are less intrusive, the signals received from RF devices are not as expressive or intuitive as images, and humans are often unable to interpret RF signals directly. Thus, preprocessing RF signals is a challenging process that requires translating raw data into intuitive information for target recognition. It has been proven that RF-based systems such as WiFi, ultrasound sensors, and millimeter-wave (mmWave) radar can be useful for a variety of observation applications unaffected by light or weather conditions [17]. WiFi-based sensing requires a dedicated transmitter and receiver and is limited to situations where targets move between them [18]. Because ultrasound signals are short-range, they are usually used to detect close targets and are affected by blocking or interference from other nearby transmitters [14].
Figure 1. In contrast to (A), an optic-based system, (B) the proposed framework is based on mmWave radar, which consists of three transmitting antennas and four receiving antennas.
The large bandwidth of mmWave allows a high, distance-independent resolution, which facilitates not only the detection and tracking of moving targets but also their recognition [18]. Furthermore, mmWave radar requires at least two antennas for transmitting and receiving signals; thus, the collected signals can be used in multiple observation operations [18]. Rather than a true-color image representation, mmWave signals can represent multiple targets using reflected three-dimensional (3D) cloud points, micro-Doppler signatures, RF-intensity-based spatial heat maps, or range-Doppler localizations [19].
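The link between bandwidth and resolution mentioned above follows the standard FMCW radar relation, range resolution = c / (2B). A minimal sketch of the calculation, assuming an illustrative 4 GHz sweep bandwidth (a common configuration for 77 GHz radars, not a value stated in this entry):

```python
# FMCW radar range resolution: delta_R = c / (2 * B).
# The 4 GHz bandwidth below is an illustrative assumption.
C = 3.0e8  # speed of light, m/s


def range_resolution(bandwidth_hz: float) -> float:
    """Smallest range separation the radar can resolve, in metres."""
    return C / (2.0 * bandwidth_hz)


print(range_resolution(4.0e9))  # ~0.0375 m, i.e. about 3.75 cm
```

This is why a wide mmWave sweep can separate reflections only a few centimetres apart, independently of how far away the target is.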
mmWave-based systems frequently use convolutional neural networks (CNNs) to extract representative features from micro-Doppler signatures for object recognition [18,20,21]. However, analyzing micro-Doppler signatures is computationally complex because it operates on images, and it distinguishes moving targets only by their translational motion. Employing a CNN to extract representative features from cloud points is therefore becoming the tool of choice for modeling dynamic environments: it leverages the spatiotemporal information derived from range, velocity, and angle measurements, which improves robustness, reliability, and detection accuracy while reducing computational complexity, allowing multiple mmWave radar operations to be performed simultaneously [22].
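Before a network can consume cloud points, each radar detection (range, angles, Doppler velocity) is typically converted into a Cartesian feature vector. A minimal sketch of that preprocessing step, under assumed angle conventions (radar SDKs differ; here azimuth is measured from the boresight axis and elevation from the horizontal plane):

```python
import math


def cloud_point(range_m, azimuth_deg, elevation_deg, doppler_mps):
    """Convert one radar detection (range, azimuth, elevation, Doppler
    velocity) into an (x, y, z, v) feature tuple for a point-based
    network. Angle conventions are an illustrative assumption."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.sin(az)  # lateral offset
    y = range_m * math.cos(el) * math.cos(az)  # distance along boresight
    z = range_m * math.sin(el)                 # height
    return (x, y, z, doppler_mps)


# A detection 10 m away on boresight, 30 degrees above the horizon,
# approaching at 1.2 m/s:
print(cloud_point(10.0, 0.0, 30.0, -1.2))
```

Stacking N such tuples yields an N x 4 array per frame, which is the kind of spatiotemporal input the cloud-point-based CNNs discussed above operate on.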

2. Millimeter-Wave Radar and Convolutional Neural Network

Several techniques have been developed to detect and recognize drones, including visual [24], audio [25], WiFi [26,27], infrared camera [28], and radar [29]. Drone audio detection relies on detecting propeller sounds and separating them from the background noise. A high-resolution daylight camera and a low-resolution infrared camera were used for visual assessment [30]. Visual assessment still requires good weather conditions and a reasonable distance between the drone targets and the cameras, and fixed visual detection methods cannot estimate the continuous track of a drone. Infrared cameras detect heat sources on drones such as batteries, motors, and motor driver boards. Airborne vehicles can be detected more easily by mmWave radar, which has long been the most popular form of detection in military applications. However, traditional military radars are designed to recognize large targets and have trouble detecting small drones; furthermore, target discrimination may not be straightforward. The extremely short wavelength of mmWave radar systems makes them highly sensitive to the small features of drones, provides very precise velocity resolution, and allows them to penetrate certain materials to detect concealed hazardous targets [30].
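The "very precise velocity resolution" follows from the short wavelength: the Doppler velocity resolution is delta_v = lambda / (2 * T_frame), where lambda = c / f_carrier. A quick sketch, assuming an illustrative 50 ms coherent processing interval (not a value from this entry):

```python
# Doppler velocity resolution of a radar: delta_v = lambda / (2 * T).
C = 3.0e8  # speed of light, m/s


def velocity_resolution(carrier_hz: float, frame_time_s: float) -> float:
    """Smallest velocity difference resolvable over one coherent frame."""
    wavelength = C / carrier_hz
    return wavelength / (2.0 * frame_time_s)


# 77 GHz carrier (wavelength ~3.9 mm) with an assumed 50 ms frame:
print(velocity_resolution(77.0e9, 50e-3))  # ~0.039 m/s
```

At 77 GHz the same frame time resolves velocity differences roughly ten times finer than a 7.7 GHz radar would, which is what makes mmWave micro-Doppler signatures of small propellers observable.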
This subsection discusses various recent drone classifiers built on machine learning and deep learning models. The radar cross-section (RCS) signatures of different drones at different frequency levels have been discussed in several studies, including [2,31]. The method proposed in [2] relied on converting the RCS into images and then using a CNN to perform drone classification, which was computationally expensive. As a result, the authors introduced a weight-optimization model that reduces the computational overhead, resulting in improved long short-term memory (LSTM) networks. In [31], the authors showed how a database of mmWave radar RCS signatures can be utilized to recognize and categorize drones. They demonstrated RCS measurements at 28 GHz for a carbon-fiber drone model. The measurements were collected in an anechoic chamber and provided significant information regarding the RCS signature of the drone. The authors evaluated the RCS-based detection probability and range accuracy by performing simulations in metropolitan environments. The drones were placed at different distances ranging from 30 m to 90 m, and the RCS signatures used for detection and classification were developed by trial and error.
In [32], the authors proposed a novel drone-localization and activity-classification method that uses vertically oriented mmWave radar antennas to measure the elevation angle of the drone from the ground station. The measured radial distance and elevation angle were used to estimate the height of the drone and its horizontal distance from the radar station. A machine learning model was then used to classify the drone's activity based on micro-Doppler signatures extracted from radar measurements taken in an outdoor environment.
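The geometry behind that height estimate is plain trigonometry: given radial distance r and elevation angle theta, height = r * sin(theta) and horizontal distance = r * cos(theta). A minimal sketch, assuming a flat ground plane and a radar at height zero (simplifications not spelled out in the entry):

```python
import math


def drone_position(radial_distance_m: float, elevation_deg: float):
    """Estimate drone height above the radar and horizontal ground
    distance from the measured radial distance and elevation angle.
    Assumes a flat ground plane and a radar mounted at height zero."""
    el = math.radians(elevation_deg)
    height = radial_distance_m * math.sin(el)
    horizontal = radial_distance_m * math.cos(el)
    return height, horizontal


# A drone 100 m away at a 25-degree elevation angle:
h, d = drone_position(100.0, 25.0)
print(round(h, 1), round(d, 1))  # 42.3 90.6
```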
The system architecture and performance of the FAROS-E 77 GHz radar at the University of St Andrews were reported in [33] for detecting and classifying drones. The goal of the system was to demonstrate that a highly reliable drone-classification sensor could be delivered in a small, low-cost, and portable package for security surveillance. To enable robust micro-Doppler signature analysis and classification, its low-phase-noise, coherent architecture takes advantage of the high Doppler sensitivity available at mmWave frequencies. Even when a drone hovered in a stationary manner, the classification algorithm was still able to detect its presence. In [34], the authors employed a vector network analyzer functioning as a continuous-wave radar with a carrier frequency of 6 GHz to gather Doppler patterns from test data and then recognized the motions using a CNN.
Furthermore, the authors of [35] proposed a method for the registration of light detection and ranging (LiDAR) point clouds and images collected by low-cost drones to integrate spectral and geometrical data.

This entry is adapted from the peer-reviewed paper 10.3390/asi6040068
