  Multi-modal person re-identification based on transformer relational regularization

Zheng, X., Huang, X., Ji, C., Yang, X., Sha, P., Cheng, L. (2024): Multi-modal person re-identification based on transformer relational regularization. - Information Fusion, 103, 102128.
https://doi.org/10.1016/j.inffus.2023.102128

Creators

Zheng, Xiangtian (1), Author
Huang, Xiaohua (1), Author
Ji, Chen (1), Author
Yang, Xiaolin (1), Author
Sha, Pengcheng (2), Author
Cheng, Liang (1), Author
Affiliations:
(1) External Organizations, ou_persistent22
(2) 1.4 Remote Sensing, 1.0 Geodesy, Departments, GFZ Publication Database, Deutsches GeoForschungsZentrum, ou_146028

Content

Keywords: -
Abstract: For robust multi-modal person re-identification (re-ID) models, it is crucial to effectively utilize the complementary information and constraint relationships among different modalities. However, current multi-modal methods often overlook the correlation between modalities at the feature fusion stage. To address this issue, we propose a novel multi-modal person re-ID method called Transformer Relation Regularization (TRR). Firstly, we introduce an adaptive collaborative matching module that facilitates the exchange of useful information by mining feature correspondences between modalities. This module allows for the integration of complementary information, enhancing the re-ID performance. Secondly, we propose an enhanced embedded module that corrects general information using discriminative information within each modality. By leveraging this approach, we improve the model's stability in challenging multi-modal environments. Lastly, we propose an adaptive triple loss to enhance sample utilization efficiency and mitigate the problem of inconsistent representation among multi-modal samples. This loss function optimizes the model's ability to distinguish between different individuals, leading to improved re-ID accuracy. Experimental results on several challenging visible-infrared person re-ID benchmark datasets demonstrate that our proposed TRR method achieves optimal performance. Additionally, extensive ablation studies validate the effective contribution of each component to the overall model. In summary, our proposed TRR method effectively leverages complementary information, addresses the correlation between modalities, and improves the re-ID performance in multi-modal scenarios. The results obtained from various benchmark datasets and the comprehensive analysis support the efficacy of our approach.
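The adaptive triple loss described in the abstract builds on the standard triplet margin loss commonly used in re-ID. As a rough illustration only (the paper's adaptive variant is not reproduced here, and this NumPy sketch uses hypothetical toy embeddings), the conventional formulation can be written as:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Standard triplet margin loss: pull the anchor toward the positive
    sample (same identity) and push it away from the negative sample
    (different identity) by at least `margin` in embedding space.

    This is the conventional baseline; the paper's adaptive triple loss
    modifies it, and that adaptation is not shown here.
    """
    d_pos = np.linalg.norm(anchor - positive, axis=-1)  # anchor-positive distance
    d_neg = np.linalg.norm(anchor - negative, axis=-1)  # anchor-negative distance
    # Hinge: loss is zero once the negative is margin further away than the positive
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

# Toy 2-D embeddings: positive lies close to the anchor, negative far away
a = np.array([[1.0, 0.0]])
p = np.array([[0.9, 0.1]])
n = np.array([[-1.0, 0.0]])
loss = triplet_loss(a, p, n)  # well-separated triplet, so the hinge is inactive
```

In visible-infrared re-ID, the anchor and positive/negative embeddings would typically come from different modalities, which is what motivates adapting the loss to inconsistent cross-modal representations.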

Details

Language(s):
Date: 2023, 2024
Publication status: Finally published
Pages: -
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: DOI: 10.1016/j.inffus.2023.102128
GFZPOF: p4 T3 Restless Earth
Degree: -


Source 1

Title: Information Fusion
Source genre: Journal, SCI, Scopus
Creator(s):
Affiliations:
Place, publisher, edition: -
Pages: -
Volume / Issue: 103
Article number: 102128
Start / end page: -
Identifier: CoNE: https://gfzpublic.gfz-potsdam.de/cone/journals/resource/191216
Publisher: Elsevier