Finger Vein Recognition Based on Vision Transformer With Feature Decoupling for Online Payment Applications

Biometric recognition plays a pivotal role in enhancing the security of online payment systems. Nevertheless, practical challenges such as image translation caused by user behavior and illumination variations in real-world environments can significantly degrade recognition performance. To address these issues, this study proposes a Global-Local Attention Model based on Feature Decoupling (GLA-FD), which integrates two key components: the Feature Decoupling and Reconstruction Module (FDRM) and the Global-Local Attention Module (GLAM). The FDRM decouples finger vein images into background information and texture features, then reconstructs them to generate enhanced vein feature maps, ensuring robust recognition performance even under varying illumination conditions.
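To make the decoupling idea concrete, below is a minimal PyTorch-style sketch of how a feature decoupling and reconstruction module could be organized, assuming one branch models the slowly varying illumination/background and another extracts fine vein texture. The branch names, layer sizes, and the subtraction-based decoupling are illustrative assumptions, not the paper's exact FDRM design.

```python
import torch
import torch.nn as nn

class FDRMSketch(nn.Module):
    """Illustrative feature decoupling and reconstruction module.

    Layer sizes and branch names are assumptions for exposition; the
    published FDRM may differ in depth and in how reconstruction is done.
    """
    def __init__(self, in_channels: int = 1, feat_channels: int = 32):
        super().__init__()
        # Shared shallow encoder for the raw finger vein image.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Branch estimating the slowly varying background/illumination component.
        self.background_branch = nn.Conv2d(feat_channels, feat_channels, kernel_size=7, padding=3)
        # Branch extracting fine vein texture features.
        self.texture_branch = nn.Conv2d(feat_channels, feat_channels, kernel_size=3, padding=1)
        # Reconstruction layer fusing both components into an enhanced feature map.
        self.reconstruct = nn.Conv2d(2 * feat_channels, feat_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shared = self.encoder(x)
        background = self.background_branch(shared)            # illumination component
        texture = self.texture_branch(shared - background)     # texture freed from lighting
        # Reconstruct an enhanced vein feature map from the decoupled parts.
        return self.reconstruct(torch.cat([texture, background], dim=1))

# Example: a single-channel 224x224 finger vein image.
enhanced_map = FDRMSketch()(torch.randn(1, 1, 224, 224))
print(enhanced_map.shape)  # torch.Size([1, 32, 224, 224])
```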

Building upon the enhanced vein feature maps, the GLAM module extracts both global and local representations, thereby strengthening the model's ability to capture spatial correlations and effectively mitigating the adverse effects of image translation.
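A global-local attention stage of this kind can be sketched as a self-attention path over flattened spatial tokens (global) paired with a depthwise convolution path (local), fused by a 1x1 convolution. This pairing and the module name GLAMSketch are assumptions for illustration; the published GLAM may compute and fuse the two streams differently.

```python
import torch
import torch.nn as nn

class GLAMSketch(nn.Module):
    """Illustrative global-local attention module (not the paper's exact design)."""
    def __init__(self, channels: int = 32, num_heads: int = 4):
        super().__init__()
        self.global_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.local_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=channels)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        # Global path: flatten the spatial grid into tokens and apply self-attention.
        tokens = feat.flatten(2).transpose(1, 2)              # (B, H*W, C)
        global_out, _ = self.global_attn(tokens, tokens, tokens)
        global_out = global_out.transpose(1, 2).reshape(b, c, h, w)
        # Local path: depthwise convolution preserves neighborhood structure.
        local_out = self.local_conv(feat)
        # Fuse both representations into a translation-robust descriptor.
        return self.fuse(torch.cat([global_out, local_out], dim=1))

fused = GLAMSketch()(torch.randn(1, 32, 56, 56))
print(fused.shape)  # torch.Size([1, 32, 56, 56])
```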

Rigorous testing across diverse public databases, including FV-USM, NUPT-FPV, UTFVP, MMCBNU-6000, PLUSVein-FV3 (LED), and PLUSVein-FV3 (Laser), validates the efficacy of the GLA-FD model, which outperforms state-of-the-art approaches in cross-domain scenarios. The model achieved Correct Identification Rates (CIRs) of 99.95%, 99.96%, 96.45%, 99.71%, 99.72%, and 99.20%, respectively. Furthermore, in terms of Equal Error Rate (EER), the GLA-FD model exhibited consistently low values of 0.04%, 0.05%, 3.69%, 0.29%, 0.23%, and 0.69% across the same datasets. These results highlight the model's stability and generalization, ensuring reliable performance even in challenging scenarios such as blurred images, broad age distributions, limited sample sizes, large-scale identity classes, and diverse acquisition sensors. The versatility of the GLA-FD model underscores its potential for a wide range of real-world applications.

The source code of the proposed GLA-FD model is publicly accessible at https://github.com/liangying-Ke/GLA-FD and is licensed by the MIT Lab at NIU, Taiwan for academic and non-commercial use.
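For readers unfamiliar with the metric, the EER values above correspond to the operating point where the false acceptance rate equals the false rejection rate. The snippet below shows one common way to estimate it from genuine and impostor similarity scores; the scores here are synthetic and purely illustrative, not data from the paper.

```python
import numpy as np

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """Return the error rate at the threshold where FAR and FRR are closest."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_eer, best_gap = 1.0, np.inf
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostors wrongly accepted at threshold t
        frr = np.mean(genuine < t)     # genuine users wrongly rejected at threshold t
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, best_eer = gap, (far + frr) / 2.0
    return best_eer

# Synthetic matching scores purely for illustration (higher = more similar).
rng = np.random.default_rng(0)
genuine_scores = rng.normal(0.8, 0.05, 1000)
impostor_scores = rng.normal(0.5, 0.05, 1000)
print(f"EER ~ {equal_error_rate(genuine_scores, impostor_scores):.4%}")
```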
