  Multi-modal person re-identification based on transformer relational regularization

Zheng, X., Huang, X., Ji, C., Yang, X., Sha, P., Cheng, L. (2024): Multi-modal person re-identification based on transformer relational regularization. - Information Fusion, 103, 102128.
https://doi.org/10.1016/j.inffus.2023.102128

 Creators:
Zheng, Xiangtian (1), Author
Huang, Xiaohua (1), Author
Ji, Chen (1), Author
Yang, Xiaolin (1), Author
Sha, Pengcheng (2), Author
Cheng, Liang (1), Author
Affiliations:
(1) External Organizations, ou_persistent22
(2) 1.4 Remote Sensing, 1.0 Geodesy, Departments, GFZ Publication Database, Deutsches GeoForschungsZentrum, ou_146028

Content

Free keywords: -
Abstract: For robust multi-modal person re-identification (re-ID) models, it is crucial to effectively utilize the complementary information and constraint relationships among different modalities. However, current multi-modal methods often overlook the correlation between modalities at the feature fusion stage. To address this issue, we propose a novel multi-modal person re-ID method called Transformer Relation Regularization (TRR). First, we introduce an adaptive collaborative matching module that facilitates the exchange of useful information by mining feature correspondences between modalities. This module allows for the integration of complementary information, enhancing re-ID performance. Second, we propose an enhanced embedding module that corrects general information using discriminative information within each modality, improving the model's stability in challenging multi-modal environments. Lastly, we propose an adaptive triplet loss to enhance sample utilization efficiency and mitigate the problem of inconsistent representation among multi-modal samples. This loss function optimizes the model's ability to distinguish between different individuals, leading to improved re-ID accuracy. Experimental results on several challenging visible-infrared person re-ID benchmark datasets demonstrate that our proposed TRR method achieves optimal performance, and extensive ablation studies validate the effective contribution of each component to the overall model. In summary, TRR effectively leverages complementary information, addresses the correlation between modalities, and improves re-ID performance in multi-modal scenarios.
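The adaptive triplet loss itself is not specified in this record; as context, the conventional margin-based triplet loss that such methods typically extend can be sketched as follows. All names (`euclidean`, `triplet_loss`, the `margin` value) are illustrative assumptions, not the authors' implementation.

```python
import math

def euclidean(u, v):
    # Euclidean distance between two equal-length feature vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=0.3):
    # Standard margin-based triplet loss: the anchor-positive distance
    # should be smaller than the anchor-negative distance by at least
    # `margin`; otherwise the violation is penalized linearly.
    return max(0.0, euclidean(anchor, positive)
                    - euclidean(anchor, negative) + margin)
```

An "adaptive" variant, as described in the abstract, would additionally reweight or select triplets to improve sample utilization across modalities, rather than using a fixed margin for every sample.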

Details

Language(s):
Dates: 2023 / 2024
 Publication Status: Finally published
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: DOI: 10.1016/j.inffus.2023.102128
GFZPOF: p4 T3 Restless Earth
 Degree: -


Source 1

Title: Information Fusion
Source Genre: Journal, SCI, Scopus
 Creator(s):
Affiliations:
Publ. Info: -
Pages: -
Volume / Issue: 103
Sequence Number: 102128
Start / End Page: -
Identifier: CoNE: https://gfzpublic.gfz-potsdam.de/cone/journals/resource/191216
Publisher: Elsevier