FlipReID: Closing the Gap between Training and Inference in Person Re-Identification
Ni, Xingyang; Rahtu, Esa (2021)
Editor(s)
Beghdadi, A.
Cheikh, F. Alaya
Tavares, J.M.R.S.
Mokraoui, A.
Valenzise, G.
Oudre, L.
Qureshi, M.A.
IEEE
2021
Proceedings of the 2021 9th European Workshop on Visual Information Processing, EUVIP 2021
This publication is copyrighted. You may download, display, and print it for your own personal use. Commercial use is prohibited.
Permanent address of the publication:
https://urn.fi/URN:NBN:fi:tuni-202211078217
Description
Peer reviewed
Abstract
Since neural networks are data-hungry, incorporating data augmentation in training is a widely adopted technique that enlarges datasets and improves generalization. On the other hand, aggregating predictions over multiple augmented samples (i.e., test-time augmentation) can boost performance even further. In the context of person re-identification models, it is common practice to extract embeddings for both the original images and their horizontally flipped variants. The final representation is the mean of the aforementioned feature vectors. However, such a scheme results in a gap between training and inference, i.e., the mean feature vectors computed at inference are not part of the training pipeline. In this study, we devise the FlipReID structure with the flipping loss to address this issue. More specifically, models using the FlipReID structure are trained on the original images and the flipped images simultaneously, and the flipping loss minimizes the mean squared error between feature vectors of corresponding image pairs. Extensive experiments show that our method brings consistent improvements. In particular, we set a new record on MSMT17, the largest person re-identification dataset. The source code is available at https://github.com/nixingyang/FlipReID.
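To make the flipping loss concrete, below is a minimal PyTorch sketch of a training step in the spirit of the abstract: the model embeds both the original batch and its horizontally flipped copy, and a mean-squared-error term pulls the paired embeddings together. The names model, reid_criterion, and the weighting factor lambda_flip are illustrative assumptions, not the actual API of the linked repository, which may differ in detail.

import torch
import torch.nn.functional as F

def flipping_loss(feat_orig, feat_flip):
    # Mean squared error between the embedding of each image and the
    # embedding of its horizontally flipped counterpart.
    return F.mse_loss(feat_orig, feat_flip)

def training_step(model, images, labels, reid_criterion, lambda_flip=1.0):
    # images: (N, C, H, W); flip along the width axis to get the mirrored view.
    flipped = torch.flip(images, dims=[3])
    feat_orig = model(images)
    feat_flip = model(flipped)
    # Standard re-identification loss (e.g., identity/triplet loss) on both
    # views, plus the flipping loss that closes the training/inference gap.
    loss = (reid_criterion(feat_orig, labels)
            + reid_criterion(feat_flip, labels)
            + lambda_flip * flipping_loss(feat_orig, feat_flip))
    return loss

At inference, the usual practice of averaging the two embeddings is then consistent with training, since the flipping loss has encouraged the paired feature vectors to agree.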
Collections
- TUNICRIS publications [15313]