Topic Review
A Patch-Based CNN Built on the VGG-16 Architecture
Facial recognition is a prevalent method of biometric authentication used in a wide range of software applications. The technique is susceptible to spoofing attacks, in which an impostor gains access to a system by presenting an image of a legitimate user to the sensor, posing a serious security risk. Consequently, facial liveness detection has become an essential step in the authentication process before access is granted. A patch-based convolutional neural network (CNN) with a deep component, built on the VGG-16 architecture, was developed for facial liveness detection to enhance security.
  • 570
  • 13 Sep 2022
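The summary above names the backbone (VGG-16) and the patch-based design but not the patch size, classifier head, or fusion rule, so the following PyTorch sketch fills those in with assumptions: non-overlapping patches are cropped from an aligned face crop, each patch is scored by the VGG-16 convolutional backbone with a small binary head, and the patch scores are averaged into a face-level liveness score.

```python
# Hedged sketch of a patch-based liveness classifier on a VGG-16 backbone.
# Patch size, patch count, and the score-fusion rule are assumptions, not the
# entry's exact design.
import torch
import torch.nn as nn
from torchvision.models import vgg16


class PatchLivenessNet(nn.Module):
    def __init__(self, patch_size: int = 112):
        super().__init__()
        self.patch_size = patch_size
        backbone = vgg16(weights=None)          # VGG-16 convolutional backbone
        self.features = backbone.features       # conv blocks only
        self.pool = nn.AdaptiveAvgPool2d(1)     # collapse spatial dims per patch
        self.head = nn.Linear(512, 1)           # live-vs-spoof logit per patch

    def extract_patches(self, face: torch.Tensor) -> torch.Tensor:
        # face: (B, 3, H, W) -> non-overlapping patches (B*N, 3, p, p)
        p = self.patch_size
        patches = face.unfold(2, p, p).unfold(3, p, p)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(-1, 3, p, p)
        return patches

    def forward(self, face: torch.Tensor) -> torch.Tensor:
        b = face.shape[0]
        patches = self.extract_patches(face)
        feats = self.pool(self.features(patches)).flatten(1)   # (B*N, 512)
        logits = self.head(feats).view(b, -1)                  # per-patch logits
        return logits.mean(dim=1)                              # fused face-level score


# Example: score a batch of 224x224 aligned face crops (4 patches each).
model = PatchLivenessNet(patch_size=112)
scores = model(torch.randn(2, 3, 224, 224))   # one liveness logit per face
```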
Topic Review
A Promising Downsampling Alternative in a Neural Network
Downsampling, which reduces the spatial resolution of feature maps to improve computational efficiency, is a critical operation in neural networks. Upsampling plays an equally important role: it reconstructs high-resolution feature maps during the decoding stage and is widely used in image super-resolution, segmentation, and generation tasks.
  • 190
  • 04 Dec 2023
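The entry's proposed downsampling alternative is not detailed in this summary; the sketch below only shows the conventional baselines it would be compared against: max pooling and strided convolution for downsampling, and interpolation or transposed convolution for upsampling during decoding.

```python
# Standard downsampling and upsampling operations in PyTorch. These are the
# conventional baselines; the entry's proposed alternative is not reproduced.
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 64, 32, 32)  # (N, C, H, W) feature map

# Downsampling: halve spatial resolution.
down_pool = nn.MaxPool2d(kernel_size=2, stride=2)(x)                   # 32x32 -> 16x16
down_conv = nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1)(x)   # strided conv

# Upsampling: restore spatial resolution during decoding.
up_interp = F.interpolate(down_pool, scale_factor=2, mode="bilinear",
                          align_corners=False)                         # 16x16 -> 32x32
up_tconv = nn.ConvTranspose2d(64, 64, kernel_size=2, stride=2)(down_conv)

print(down_pool.shape, down_conv.shape, up_interp.shape, up_tconv.shape)
```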
Topic Review
A Robust Vehicle Detection Model for LiDAR Sensor
Vehicle detection in parking areas provides a measure of the spatial and temporal utilisation of parking spaces. Parking observations are typically performed manually, and the high labour cost limits their temporal resolution.
  • 260
  • 15 Jun 2023
Topic Review
A Sub-Second Method for SAR Image Registration
For Synthetic Aperture Radar (SAR) image registration, both traditional feature-based methods and deep learning methods require a chain of successive processes after feature extraction. Among these, the feature matching process, whose time and space complexity depend on the number of feature points extracted from the sensed and reference images as well as the dimension of the feature descriptors, is particularly time-consuming. Additionally, the successive processes introduce data-sharing and memory-occupancy issues that require careful design to prevent memory leaks.
  • 156
  • 24 Oct 2023
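To make the complexity argument above concrete, the sketch below implements plain brute-force descriptor matching with a Lowe-style ratio test; the random descriptors are stand-ins, not features from any specific SAR method, and the 0.8 ratio threshold is an assumed conventional value.

```python
# Brute-force descriptor matching between a sensed and a reference image,
# illustrating why this stage dominates: cost grows with the number of
# keypoints (n_s, n_r) and the descriptor dimension d, i.e. O(n_s * n_r * d).
import numpy as np

rng = np.random.default_rng(0)
desc_sensed = rng.standard_normal((2000, 128)).astype(np.float32)   # n_s x d
desc_ref = rng.standard_normal((2500, 128)).astype(np.float32)      # n_r x d

# Pairwise squared Euclidean distances: an (n_s x n_r) matrix must be held
# in memory, which is the space-complexity issue mentioned above.
d2 = (np.sum(desc_sensed ** 2, axis=1, keepdims=True)
      + np.sum(desc_ref ** 2, axis=1)
      - 2.0 * desc_sensed @ desc_ref.T)

# Lowe-style ratio test on squared distances: keep matches whose best
# distance is clearly smaller than the second best.
order = np.argsort(d2, axis=1)
best, second = order[:, 0], order[:, 1]
rows = np.arange(d2.shape[0])
keep = d2[rows, best] < (0.8 ** 2) * d2[rows, second]
matches = np.stack([rows[keep], best[keep]], axis=1)   # (sensed_idx, ref_idx) pairs
print(f"{len(matches)} tentative matches")
```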
Topic Review
A Symbol Recognition System for Single-Line Diagrams Developed
In numerous electrical power distribution systems and other engineering contexts, single-line diagrams (SLDs) are frequently used. The importance of digitizing these drawings is growing, primarily because better engineering practices are required in areas such as equipment maintenance, asset management, and safety. Processing and analyzing these drawings, however, is a difficult task. With enough annotated training data, deep neural networks perform well in many object detection applications, and such a dataset can be used to assess the overall quality of a deep-learning-based visual system.
  • 238
  • 01 Nov 2023
Topic Review
A Systematic Approach to Healthcare Knowledge Management Systems
Big data in healthcare contain a huge amount of tacit knowledge that brings great value to healthcare activities such as diagnosis, decision support, and treatment. However, effectively exploring and exploiting knowledge from such big data sources poses many challenges for both managers and technologists. A healthcare knowledge management system that ensures a systematic knowledge development process over the various data in hospitals was proposed. It leverages big data technologies to capture, organize, transfer, and manage large volumes of medical knowledge, which cannot be handled with traditional data-processing technologies. In addition, machine-learning algorithms are used to derive higher-level knowledge in support of diagnosis and treatment.
  • 990
  • 13 May 2022
Topic Review
A Taxonomic Survey of Physics-Informed Machine Learning
Physics-informed machine learning (PIML) refers to the emerging practice of extracting physically relevant solutions to complex multiscale modeling problems that lack data of sufficient quantity and veracity, by using learning models informed by physically relevant prior information.
  • 253
  • 20 Jun 2023
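A common instance of PIML is the physics-informed neural network, in which a data-fit loss is augmented with a residual penalty for the governing equation, evaluated by automatic differentiation. The sketch below assumes a toy ODE du/dx + u = 0 as the "physics"; the ODE, network size, and collocation scheme are illustrative choices, not tied to any system surveyed in the entry.

```python
# Minimal physics-informed loss sketch (PINN-style): a network u(x) is fitted
# to a few data points while also penalising the residual of an assumed toy
# ODE du/dx + u = 0, whose exact solution is u(x) = exp(-x).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Sparse "measurements" of the true solution.
x_data = torch.tensor([[0.0], [0.5], [1.0]])
u_data = torch.exp(-x_data)

for step in range(2000):
    opt.zero_grad()

    # Data-fit term on the few labelled points.
    loss_data = ((net(x_data) - u_data) ** 2).mean()

    # Physics term: ODE residual du/dx + u at random collocation points,
    # with du/dx obtained by automatic differentiation.
    x_col = torch.rand(64, 1, requires_grad=True)
    u = net(x_col)
    du_dx = torch.autograd.grad(u.sum(), x_col, create_graph=True)[0]
    loss_phys = ((du_dx + u) ** 2).mean()

    (loss_data + loss_phys).backward()
    opt.step()
```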
Topic Review
A Unified Framework for RGB-Infrared Transfer
Infrared (IR) images (both 0.7-3 µm and 8-15 µm) offer radiation-intensity texture information that visible images lack, making them particularly helpful in daytime, nighttime, and complex scenes. Many researchers are studying how to translate RGB images into infrared images for deep-learning-based visual tasks such as object tracking, crowd counting, panoptic segmentation, and image fusion in urban scenarios. Using RGB-IR datasets in these tasks can provide comprehensive multi-band fusion data for urban scenes, thereby facilitating precise modeling across different scenarios. To address the challenge of accurately generating high-radiance textures for targets in the infrared spectrum, the proposed approach aims to ensure alignment between the generated infrared images and the radiation features of ground-truth IR images.
  • 140
  • 18 Dec 2023
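The unified framework itself is not described in this summary; the sketch below only illustrates the basic task setup it addresses, under the assumption of paired training data: a small encoder-decoder generator maps a 3-channel RGB image to a 1-channel IR image and is supervised with an L1 loss against the ground-truth IR image.

```python
# Hedged sketch of RGB-to-infrared translation: a small encoder-decoder
# generator trained with an L1 reconstruction loss against paired ground-truth
# IR images. The entry's actual losses and radiation-feature alignment are
# not reproduced here.
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),     # encode
    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
    nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Sigmoid(),      # decode to 1-channel IR
)

rgb = torch.rand(4, 3, 128, 128)     # visible input batch (placeholder data)
ir_gt = torch.rand(4, 1, 128, 128)   # paired ground-truth IR (placeholder data)

fake_ir = generator(rgb)
loss = nn.functional.l1_loss(fake_ir, ir_gt)
loss.backward()
print(fake_ir.shape, loss.item())
```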
Topic Review
Abnormal Activity Recognition for Visual Surveillance
Due to the ever-increasing number of closed-circuit television (CCTV) cameras worldwide, automating the screening of video content has become a pressing need. Still, the majority of video content is screened manually to detect anomalous incidents or activities. Automatic detection of abnormal events such as theft, burglary, or accidents could be helpful in many situations. However, processing video data acquired by several cameras at a central location poses significant difficulties, such as bandwidth, latency, and large computing-resource requirements.
  • 125
  • 11 Jan 2024
Topic Review
Abstractive vs. Extractive Summarization
Due to the huge and continuously growing volume of textual corpora on the Internet, important information may go unnoticed or become lost. At the same time, summarizing these resources manually is tedious and time-consuming for human experts, which necessitates automation of the task. Natural language processing (NLP) is a multidisciplinary research field merging aspects and approaches from computer science, artificial intelligence, and linguistics; it deals with the development of processes that semantically and efficiently analyze vast amounts of textual data. Text summarization (TS) is a fundamental NLP subtask, defined as the automatic creation of a concise and fluent summary that captures the main ideas and topics of one or multiple documents.
  • 496
  • 07 Jul 2023
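To ground the extractive/abstractive distinction, the sketch below shows a minimal extractive summarizer: sentences are scored by normalised word frequency and the top-k are kept in document order. The scoring rule and threshold are illustrative assumptions; abstractive methods would instead generate new sentences with sequence-to-sequence models rather than selecting existing ones.

```python
# Minimal extractive summarization sketch: score sentences by normalised word
# frequency and keep the top-k, preserving document order.
import re
from collections import Counter


def extractive_summary(text: str, k: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sent: str) -> float:
        toks = re.findall(r"[a-z']+", sent.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    top = sorted(range(len(sentences)), key=lambda i: score(sentences[i]),
                 reverse=True)[:k]
    return " ".join(sentences[i] for i in sorted(top))


doc = ("Text summarization condenses documents into short summaries. "
       "Extractive methods select the most informative sentences verbatim. "
       "Abstractive methods paraphrase the content in new words. "
       "Both aim to preserve the main ideas of the source.")
print(extractive_summary(doc, k=2))
```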