Finger Vein Verification

Traditional technologies for personal identity authentication (e.g., tokens, cards, PINs) have gradually been replaced by more-advanced biometric technologies based on faces, retinas, irises, fingerprints, veins, etc. Among these, the finger vein (FV) trait has drawn extensive attention since its introduction, owing to its unique advantages of high security, an inherent liveness requirement, non-contact acquisition, and resistance to injury and counterfeiting. Unlike surface-imaging traits such as faces and fingerprints, the main veins in the fingers tend to be longitudinally distributed in the subcutaneous regions.

Keywords: finger vein verification; Siamese network; contrastive learning; Gabor convolutional kernel

1. Introduction

Generally, FV imaging is performed using near-infrared (NIR) light in the wavelength range of 700–1000 nm. When the NIR light irradiates the skin and enters the subcutaneous tissues, light scattering occurs, and much of the light energy is absorbed by the deoxyhemoglobin in venous blood, which makes the veins appear as dark shadows during imaging, while non-vein areas show higher brightness. As a result, the acquired FV images generally present a crisscross pattern, as shown in Figure 1.
Figure 1. Illustration of the appearance of finger vein imaging.
When the FV trait is used for personal identity verification, it should not be treated as a conventional pattern-classification problem: the number of categories is huge while the number of samples per category is small, and verification occurs in a subject-independent scenario in which only a subset of categories is known during the training phase. Moreover, because of the restrictions of the acquisition equipment and environment, the imaging area of the vein texture is small [1] and the information carried by the vein image is relatively weak. In this regard, how to extract more-robust and more-discriminative FV features is particularly critical for an FV verification system [2].
In the early stages of research, meticulously hand-crafted features were adopted by FV verification systems. One kind of method, namely "vein-level" [3], was devoted to characterizing the geometric and topological shapes of the vein network (point-shaped [4][5][6][7], line-shaped [8][9], curve-based [10][11][12], etc.). In addition, anatomical structures [13][14] and even vein pulsation patterns [15] were introduced for feature representation. To minimize the impact of the background, these methods must accurately strip the veins out of the whole image. However, owing to the typically low quality of the acquired FV image, screening out the vessels accurately has always been a great challenge, and over-segmentation or under-segmentation occurs in practice [16]. "Vein-level" features depend on pure and accurate vein-pattern extraction, while neglecting the spatial relationship between veins and their surrounding subcutaneous tissues. As claimed in [17], optical characteristics such as absorption and scattering in the non-vein regions are also helpful for recognition. Following this research line, another kind of method, namely "image-level" [3], aims at extracting features from the whole image without distinguishing vein and non-vein regions. Among these, local-pattern-based image features (e.g., the local line binary pattern (LLBP) [18][19], local directional code (LDC) [20], and discriminative binary code (DBC) [21]) have been widely adopted. Alongside these, subspace-learning-based global image features, such as PCA [22][23] and LDA [24], have also been applied. Furthermore, such local and global features have been integrated to construct more-compact and more-discriminative FV features [25].
The design of the aforementioned hand-crafted features usually depends on expert knowledge and lacks generalization across various FV imaging scenarios. Moreover, these methods typically rely on numerous preprocessing strategies to handle problems such as finger position bias and uneven illumination. By comparison, learning-based methods can provide more-adaptive feature representations, especially convolutional neural network (CNN)-based deep learning (DL) methods, which adopt a multi-layer nonlinear learning process to capture high-level abstract features from images [26][27][28]. Currently, CNNs equipped with various topological structures have been migrated to FV biometrics and have obtained commendable success. Among these, a deep CNN named "DeepVein" [29] was constructed based on the classic VGG-Net [30]. In [31], AlexNet was directly transferred to FV identification. In [32], convolutional kernels with smooth line shapes were specifically selected from the first layer of AlexNet and used to construct a local descriptor called the "Competitive Order". In [33], two modalities, finger veins and finger shape, were each extracted with ResNet [34] and then fused for individual identity authentication. In [35], two FV images were synthesized as the input to DenseNet [36], while in [37], vein-shape feature maps and texture feature maps were input into DenseNet in sequence and then fused for FV recognition. It must be conceded that DenseNet, due to its dense connection mechanism, generally has higher training complexity than AlexNet, VGG-Net, and ResNet [38].
The above classic DL models mostly adopt a data-driven feature-learning process, and their learning ability primarily relies on the quantity and quality of the available image samples [39]. However, this is unrealistic in the FV community, as most publicly available FV datasets are small-scale [40]. To address this issue, fine-tuning strategies [41] and data augmentation technologies have been introduced to compensate for the sample shortage to some extent. On the other hand, since vein images mainly contain low-level and mid-level features (chiefly textures and shape structures), wider networks rather than deeper ones are preferred, so as to learn a variety of relatively shallow semantic representations. In this regard, model distillation and lightweight models have also been exploited for FV identity discrimination. In [42], a lightweight DL framework with two channels was exploited and verified on a subject-dependent FV dataset. In [43], a lightweight network with three convolution-and-pooling blocks and two fully connected layers was constructed, and a joint function of the center loss and Softmax loss was designed to pursue highly discriminative features. In [44], a lightweight network consisting of a stem block and a stage block was built for FV recognition and matching; the stem block adopted two pathways to extract multi-scale features, and the extracted two-way features were then fused and input into the stage block for more-refined processing. In [45], a pre-trained Xception network was introduced for FV classification; thanks to depthwise separable convolution, the Xception network has a lighter architecture, and its residual skip connections further widen the network and accelerate convergence. These lightweight deep networks greatly lessen the training cost while ensuring accuracy, thus being more suitable for real-time applications of the FV trait.
Recently, more-powerful network architectures have been used for FV recognition tasks, such as the capsule network [46], the convolutional autoencoder [47], the fully convolutional network [48], the generative adversarial network [49], the long short-term memory network [50], the joint attention network [51], the Transformer network [52], and the Siamese network [53]. Among these, a Siamese framework equipped with two ResNet-50s [34] as the backbone subnetworks was introduced for FV verification [53]. Compared with DL networks that focus on learning better feature representations, Siamese networks learn how to discriminate between different input pairs by using a well-designed contrastive loss function. They are, therefore, more suitable for FV verification tasks; that is, they better distinguish genuine FVs from impostor FVs, rather than pursuing more-accurate semantic expressions. However, although the aforementioned network models have shown a strong feature-learning ability, they suffer from complex model structures and expensive training costs (Table 1).
Table 1. A brief summary of the advantages and disadvantages of existing finger vein verification methods or models.
| Category | Methods/Models | Advantages | Disadvantages |
|---|---|---|---|
| Hand-Crafted: Vein-Level | Point [5][6], Line [8][9], Curvature [10][11][12], Anatomy [13][14], Vein Pulsation [15] | Pure Vein Pattern Representation | Lack Generalization |
| Hand-Crafted: Image-Level | LLBP [18][19], LDC [20], DBC [21] | Do Not Distinguish Vein and Non-Vein Regions | Lack Generalization |
| Learning-Based: Classic Models | DeepVein [29], AlexNet [31][32], ResNet [34], DenseNet [37] | Capture High-Level Abstract Features | Complex Network Structures, Massive Sample Requirement |
| Learning-Based: Powerful Models | Capsule [46], CAE [47], FCN [48], GAN [49], LSTM [50], Joint Attention Network [51], Transformer Network [52], Siamese Network [53] | Stronger Feature Representations | More Complex Network Structures |
| Learning-Based: Lightweight Models | Two-Stream CNN [42], Light CNN [43][44], Xception [45] | Simpler Network Structures, Lower Computation Cost | Weak Feature Representation |

2. Basic Procedure of Finger Vein Verification

In general, the processing flow of an FV verification system consists of four tasks: image capturing, preprocessing, feature extraction, and feature matching, as shown in Figure 2. Among these, image preprocessing and feature extraction are the most-critical stages: preprocessing strategies such as ROI localization and contrast enhancement have a significant impact on the subsequent feature extraction, and a powerful feature representation ability is, in turn, key to precise matching.
Figure 2. Illustration of the basic processing flow of a finger vein verification system.
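To make the preprocessing stage more concrete, the following minimal Python sketch applies a fixed ROI crop followed by CLAHE contrast enhancement with OpenCV. It is an illustrative example only, not the pipeline of any cited work; the ROI coordinates, CLAHE settings, and output resolution are assumptions.

```python
import cv2

def preprocess_fv(image_path, roi=(10, 10, 200, 90)):
    """Illustrative FV preprocessing: ROI crop + CLAHE contrast enhancement.

    The ROI coordinates are placeholders; a real system would localize the
    finger region automatically (e.g., from the finger contour).
    """
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    x, y, w, h = roi
    roi_img = img[y:y + h, x:x + w]                    # crop the region of interest
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(roi_img)                    # local contrast enhancement
    resized = cv2.resize(enhanced, (128, 64))          # normalize the input size
    return resized
```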
As stated earlier, deep CNNs have shown the capability to learn unprecedentedly efficient image features. However, most existing DL models regard feature extraction and feature matching as two separate stages, thus lacking a comprehensive consideration of feature learning from the perspective of discriminative matching. Instead, the Siamese network combines the two through contrastive learning, so that the learned features can better serve the final matching purpose.

3. Siamese Network Framework

A Siamese network is usually equipped with a pair of subnetworks, as shown in Figure 3. For a pair of input samples, their feature vectors are learned by the corresponding subnetworks and then given to the contrastive loss function for similarity measurement. Generally, the contrastive loss function can be defined by Equation (1).
 
$$\mathrm{Loss}(X_1, X_2) = (1-\gamma)\,\frac{1}{2}\,D_w^2 + \gamma\,\frac{1}{2}\,\big\{\max(0,\; m - D_w)\big\}^2, \tag{1}$$
where $D_w$ denotes the distance metric between the two learned feature vectors, $\gamma$ is an indicator of whether the input pair comes from the same finger class ($\gamma$ equals 0 for a genuine pair from the same class; otherwise, $\gamma$ equals 1), and $m$ is a positive margin value, which prevents the loss from being corrupted when the distance metric of an impostor pair already exceeds this margin.
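A minimal PyTorch rendering of the contrastive loss in Equation (1) is sketched below, following the convention above (γ = 0 for a genuine pair, γ = 1 for an impostor pair). It is an illustrative implementation, not code from the cited works; the use of the Euclidean distance for $D_w$ and the default margin value are assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(feat1, feat2, gamma, margin=1.0):
    """Contrastive loss of Equation (1).

    gamma: tensor of 0/1 labels; 0 for a genuine pair (same finger),
    1 for an impostor pair (different fingers).
    """
    d_w = F.pairwise_distance(feat1, feat2)                    # distance metric D_w
    genuine_term = (1 - gamma) * 0.5 * d_w.pow(2)              # pull genuine pairs together
    impostor_term = gamma * 0.5 * torch.clamp(margin - d_w, min=0).pow(2)  # push impostors past the margin
    return (genuine_term + impostor_term).mean()
```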
Figure 3. Illustration of a Siamese network framework.
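The weight-sharing arrangement of Figure 3 can be sketched as follows. The backbone here is a torchvision ResNet-18 chosen purely for illustration (the Siamese framework in [53] uses ResNet-50), and the single-channel input adaptation and embedding size are assumptions.

```python
import torch.nn as nn
from torchvision import models

class SiameseFV(nn.Module):
    """Weight-sharing Siamese network: one backbone applied to both inputs."""

    def __init__(self, embed_dim=128):
        super().__init__()
        backbone = models.resnet18(weights=None)           # illustrative backbone only
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7,   # accept single-channel NIR images
                                   stride=2, padding=3, bias=False)
        backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
        self.backbone = backbone

    def forward(self, x1, x2):
        # Both branches share the same weights, so a single module suffices.
        return self.backbone(x1), self.backbone(x2)
```

During training, the two embeddings are fed to the contrastive loss above; at verification time, the distance between the embeddings of a probe image and an enrolled template is compared against a decision threshold.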

4. Gabor Convolutional Kernel

The Gabor convolutional kernel represents a combination of classic computer vision and deep neural networks, and related works have proven this combination to be successful [54]. According to the combination strategy, existing methods can be divided into three categories. The first category regards Gabor filtering as a kind of preprocessing, i.e., the inputs of the CNN are Gabor-filtered images. Furthermore, in [55], the convolutional kernels in the first two layers of the CNN were replaced by Gabor filters with fixed parameters. In this case, although the number of trainable network parameters was reduced, a deep combination of the Gabor filters' representation ability and the CNN's learning ability was still lacking. The second category completely replaces convolutional kernels with Gabor filters and optimizes the parameters of those Gabor filters. Among these, in [56], only the first layer was replaced by Gabor filters, while the remaining layers were kept unchanged. In [57], a parameterized Gabor convolutional layer was used to replace a few early convolutional layers in CNNs, which yielded a consistent boost in accuracy, robustness, and generalization on test data. However, the computational complexity of the Gabor convolution operation and the corresponding parameter optimization algorithm has restricted its popularization. The third category adopts Gabor filters to modulate learnable convolutional kernels [58]. Compared with the above two categories, such a method allows the network to capture features that are more robust to orientation and scale changes, without bringing additional computational burden.
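As an illustration of the third category, the sketch below builds a small bank of fixed Gabor kernels with OpenCV and uses them to modulate a learnable convolutional weight by element-wise multiplication, in the spirit of Gabor Convolutional Networks [58]. The kernel size, orientation count, and Gabor parameters are assumptions and do not reproduce the cited configuration.

```python
import cv2
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

def gabor_bank(ksize=5, n_orientations=4, sigma=2.0, lambd=4.0, gamma=0.5):
    """Fixed Gabor kernels at several orientations (parameters are illustrative)."""
    kernels = [cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
               for theta in np.linspace(0, np.pi, n_orientations, endpoint=False)]
    return torch.tensor(np.stack(kernels), dtype=torch.float32)  # (n_orientations, k, k)

class GaborModulatedConv(nn.Module):
    """Learnable convolutional weights modulated element-wise by fixed Gabor kernels."""

    def __init__(self, in_ch, out_ch, ksize=5, n_orientations=4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, ksize, ksize) * 0.01)
        self.register_buffer("gabor", gabor_bank(ksize, n_orientations))
        self.n_orientations = n_orientations

    def forward(self, x):
        outs = []
        for i in range(self.n_orientations):
            modulated = self.weight * self.gabor[i]          # orientation-specific modulation
            outs.append(F.conv2d(x, modulated, padding=self.weight.shape[-1] // 2))
        return torch.cat(outs, dim=1)                        # stack orientation responses
```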

References

  1. Yang, L.; Liu, X.; Yang, G.; Wang, J.; Yin, Y. Small-Area Finger Vein Recognition. IEEE Trans. Inf. Forensics Secur. 2023, 18, 1914–1925.
  2. Lv, W.H.; Ma, H.; Li, Y. A finger vein authentication system based on pyramid histograms and binary pattern of phase congruency. Infrared Phys. Technol. 2023, 132, 104728.
  3. Yao, Q.; Song, D.; Xu, X.; Zou, K. A Novel Finger Vein Recognition Method Based on Aggregation of Radon-Like Features. Sensors 2021, 21, 1885.
  4. Liu, F.; Yang, G.; Yin, Y.; Wang, S. Singular value decomposition based minutiae matching method for finger vein recognition. Neurocomputing 2014, 145, 75–89.
  5. Meng, X.; Zheng, J.; Xi, X.; Zhang, Q.; Yin, Y. Finger vein recognition based on zone-based minutia matching. Neurocomputing 2021, 423, 110–123.
  6. Matsuda, Y.; Miura, N.; Nagasaka, A.; Kiyomizu, H.; Miyatake, T. Finger-vein authentication based on deformation-tolerant feature-point matching. Mach. Vis. Appl. 2016, 27, 237–250.
  7. Wang, G.; Wang, J. SIFT Based Vein Recognition Models: Analysis and Improvement. Comput. Math. Methods Med. 2017, 2017, 2373818.
  8. Miura, N.; Nagasaka, A.; Miyatake, T. Feature extraction of finger-vein patterns based on repeated line tracking and its application to personal identification. Mach. Vis. Appl. 2004, 15, 194–203.
  9. Huang, B.; Dai, Y.; Li, R.; Tang, D.; Li, W. Finger-Vein Authentication Based on Wide Line Detector and Pattern Normalization. In Proceedings of the 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 1269–1272.
  10. Miura, N.; Nagasaka, A.; Miyatake, T. Extraction of Finger-vein Patterns Using Maximum Curvature Points in Image Profiles. IEICE-Trans. Inf. Syst. 2007, E90-D, 1185–1194.
  11. Song, W.; Kim, T.; Kim, H.C.; Choi, J.H.; Kong, H.J.; Lee, S.R. A finger-vein verification system using mean curvature. Pattern Recognit. Lett. 2011, 32, 1541–1547.
  12. Syarif, M.A.; Ong, T.S.; Teoh, A.B.J.; Tee, C. Enhanced maximum curvature descriptors for finger vein verification. Multimed. Tools Appl. 2017, 76, 6859–6887.
  13. Yang, L.; Yang, G.; Xi, X.; Meng, X.; Zhang, C.; Yin, Y. Tri-Branch Vein Structure Assisted Finger Vein Recognition. IEEE Access 2017, 5, 21020–21028.
  14. Yang, L.; Yang, G.; Yin, Y.; Xi, X. Finger Vein Recognition With Anatomy Structure Analysis. IEEE Trans. Circuits Syst. Video Technol. 2018, 28, 1892–1905.
  15. Krishnan, A.; Thomas, T.; Mishra, D. Finger Vein Pulsation-Based Biometric Recognition. IEEE Trans. Inf. Forensics Secur. 2021, 16, 5034–5044.
  16. Yang, L.; Yang, G.; Wang, K.; Hao, F.; Yin, Y. Finger Vein Recognition via Sparse Reconstruction Error Constrained Low-Rank Representation. IEEE Trans. Inf. Forensics Secur. 2021, 16, 4869–4881.
  17. Huang, D.; Tang, Y.; Wang, Y.; Chen, L.; Wang, Y. Hand-dorsa vein recognition by matching local features of multisource keypoints. IEEE Trans. Cybern. 2015, 45, 1823–1837.
  18. Rosdi, B.A.; Shing, C.W.; Suandi, S.A. Finger Vein Recognition Using Local Line Binary Pattern. Sensors 2011, 11, 11357–11371.
  19. Hu, N.; Ma, H.; Zhan, T. Finger vein biometric verification using block multi-scale uniform local binary pattern features and block two-directional two-dimension principal component analysis. Optik 2020, 208, 163664.
  20. Meng, X.; Yang, G.; Yin, Y.; Xiao, R. Finger Vein Recognition Based on Local Directional Code. Sensors 2012, 12, 14937–14952.
  21. Liu, H.; Yang, L.; Yang, G.; Yin, Y. Discriminative Binary Descriptor for Finger Vein Recognition. IEEE Access 2018, 6, 5795–5804.
  22. Wu, J.D.; Liu, C.T. Finger-vein pattern identification using principal component analysis and the neural network technique. Expert Syst. Appl. 2011, 38, 5423–5427.
  23. Yang, G.; Xi, X.; Yin, Y. Finger Vein Recognition Based on (2D)2PCA and Metric Learning. J. Biomed. Biotechnol. 2012, 2012, 324249.
  24. Wu, J.D.; Liu, C.T. Finger-vein pattern identification using SVM and neural network technique. Expert Syst. Appl. 2011, 38, 14284–14289.
  25. Zhang, L.; Sun, L.; Li, W.; Zhang, J.; Cai, W.; Cheng, C.; Ning, X. A Joint Bayesian Framework Based on Partial Least Squares Discriminant Analysis for Finger Vein Recognition. IEEE Sens. J. 2022, 22, 785–794.
  26. Heidari, A.; Javaheri, D.; Toumaj, S.; Navimipour, N.J.; Rezaei, M.; Unal, M. A new lung cancer detection method based on the chest CT images using Federated Learning and blockchain systems. Artif. Intell. Med. 2023, 141, 102572.
  27. Heidari, A.; Navimipour, N.J.; Jamali, M.A.J.; Akbarpour, S. A green, secure, and deep intelligent method for dynamic IoT-edge-cloud offloading scenarios. Sustain. Comput. Inform. Syst. 2023, 38, 100859.
  28. Lazcano, A.; Pedro, J.H.; Manuel, M. A Combined Model Based on Recurrent Neural Networks and Graph Convolutional Networks for Financial Time Series Forecasting. Mathematics 2023, 11, 224.
  29. Huang, H.; Liu, S.; Zheng, H.; Ni, L.; Zhang, Y.; Li, W. DeepVein: Novel finger vein verification methods based on Deep Convolutional Neural Networks. In Proceedings of the 2017 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), New Delhi, India, 22–24 February 2017; pp. 1–8.
  30. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. Comput. Sci. 2014.
  31. Fairuz, S.; Habaebi, M.H.; Elsheikh, E.M.A. Finger Vein Identification Based on Transfer Learning of AlexNet. In Proceedings of the 2018 7th International Conference on Computer and Communication Engineering (ICCCE), Kuala Lumpur, Malaysia, 19–20 September 2018; pp. 465–469.
  32. Lu, Y.; Xie, S.; Wu, S. Exploring Competitive Features Using Deep Convolutional Neural Network for Finger Vein Recognition. IEEE Access 2019, 7, 35113–35123.
  33. Wan, K.; Min, S.J.; Ryoung, P.K. Multimodal Biometric Recognition Based on Convolutional Neural Network by the Fusion of Finger-Vein and Finger Shape Using Near-Infrared (NIR) Camera Sensor. Sensors 2018, 18, 2296.
  34. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  35. Song, J.M.; Kim, W.; Park, K.R. Finger-Vein Recognition Based on Deep DenseNet Using Composite Image. IEEE Access 2019, 7, 66845–66863.
  36. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269.
  37. Noh, K.J.; Choi, J.; Hong, J.S.; Park, K.R. Finger-Vein Recognition Based on Densely Connected Convolutional Network Using Score-Level Fusion With Shape and Texture Images. IEEE Access 2020, 8, 96748–96766.
  38. Yao, Q.; Xu, X.; Li, W. A Sparsified Densely Connected Network with Separable Convolution for Finger-Vein Recognition. Symmetry 2022, 14, 2686.
  39. Hou, B.; Zhang, H.; Yan, R. Finger vein biometric recognition: A review. IEEE Trans. Instrum. Meas. 2022, 71, 1–26.
  40. Ou, W.; Po, L.; Zhou, C.; Rehman, Y.A.U.; Xian, P.F.; Zhang, Y.J. Fusion loss and inter-class data augmentation for deep finger vein feature learning. Expert Syst. Appl. 2021, 171, 114584.
  41. Kuzu, R.S.; Maioranay, E.; Campisi, P. Vein-based Biometric Verification using Transfer Learning. In Proceedings of the 43rd International Conference on Telecommunications and Signal Processing (TSP), Milan, Italy, 7–9 July 2020; pp. 403–409.
  42. Fang, Y.; Wu, Q.; Kang, W. A novel finger vein verification system based on two-stream convolutional network learning. Neurocomputing 2018, 290, 100–107.
  43. Zhao, D.; Ma, H.; Yang, Z.; Li, J.; Tian, W. Finger vein recognition based on lightweight CNN combining center loss and dynamic regularization. Infrared Phys. Technol. 2020, 105, 103221.
  44. Shen, J.; Liu, N.; Xu, C.; Sun, H.; Xiao, Y.; Li, D.; Zhang, Y. Finger Vein Recognition Algorithm Based on Lightweight Deep Convolutional Neural Network. IEEE Trans. Instrum. Meas. 2022, 71, 1–13.
  45. Shaheed, K.; Mao, A.; Qureshi, I.; Kumar, M.; Hussain, S.; Ullah, I.; Zhang, X. DS-CNN: A pre-trained Xception model based on depth-wise separable convolutional neural network for finger vein recognition. Expert Syst. Appl. 2022, 191, 116288.
  46. Gumusbas, D.; Yildirim, T.; Kocakulak, M.; Acir, N. Capsule Network for Finger-Vein-based Biometric Identification. In Proceedings of the 2019 IEEE Symposium Series on Computational Intelligence (SSCI), Xiamen, China, 6–9 December 2019; pp. 437–441.
  47. Hou, B.; Yan, R. Convolutional Autoencoder Model for Finger-Vein Verification. IEEE Trans. Instrum. Meas. 2020, 69, 2067–2074.
  48. Zeng, J.; Wang, F.; Deng, J.; Qin, C.; Zhai, Y.; Gan, J.; Piuri, V. Finger Vein Verification Algorithm Based on Fully Convolutional Neural Network and Conditional Random Field. IEEE Access 2020, 8, 65402–65419.
  49. Choi, J.; Noh, K.J.; Cho, S.W.; Nam, S.H.; Owais, M.; Park, K.R. Modified Conditional Generative Adversarial Network-Based Optical Blur Restoration for Finger-Vein Recognition. IEEE Access 2020, 8, 16281–16301.
  50. Kuzu, R.S.; Piciucco, E.; Maiorana, E.; Campisi, P. On-the-Fly Finger-Vein-Based Biometric Recognition Using Deep Neural Networks. IEEE Trans. Inf. Forensics Secur. 2020, 15, 2641–2654.
  51. Ren, H.; Sun, L.; Guo, J.; Han, C.; Cao, Y. A high compatibility finger vein image quality assessment system based on deep learning. Expert Syst. Appl. 2022, 196, 116603.
  52. Huang, J.; Luo, W.; Yang, W.; Zheng, A.; Lian, F.; Kang, W. FVT: Finger Vein Transformer for Authentication. IEEE Trans. Instrum. Meas. 2022, 71, 1–13.
  53. Tang, S.; Zhou, S.; Kang, W.; Wu, Q.; Deng, F. Finger vein verification using a Siamese CNN. IET Biom. 2019, 8, 306–315.
  54. Juan, M.R.; José, I.M.; Henry, A. LADMM-Net: An unrolled deep network for spectral image fusion from compressive data. Signal Process. 2021, 189, 108239.
  55. Calderón, A.; Roa, S.; Victorino, J. Handwritten digit recognition using convolutional neural networks and Gabor filters. Int. Congr. Comput. Intell. 2003, 1, 1–9.
  56. Alekseev, A.; Bobe, A. GaborNet: Gabor filters with learnable parameters in deep convolutional neural network. In Proceedings of the 2019 International Conference on Engineering and Telecommunication (EnT), Dolgoprudny, Russia, 20–21 November 2019.
  57. Pérez, J.C.; Alfarra, M.; Jeanneret, G.; Bibi, A.; Thabet, A.; Ghanem, B.; Arbeláez, P. Gabor Layers Enhance Network Robustness. In Proceedings of the ECCV 2020, Glasgow, UK, 23–28 August 2020; Volume 12354, pp. 450–466.
  58. Luan, S.; Chen, C.; Zhang, B.; Han, J.; Liu, J. Gabor Convolutional Networks. IEEE Trans. Image Process. 2018, 27, 4357–4366.