
A novel unsupervised camera-aware domain adaptation framework for person re-identification

Conference Paper


Abstract


  • © 2019 IEEE. Unsupervised cross-domain person re-identification (Re-ID) faces two key issues. One is the data distribution discrepancy between the source and target domains, and the other is the lack of discriminative information in the target domain. From the perspective of representation learning, this paper proposes a novel end-to-end deep domain adaptation framework to address both. For the first issue, we highlight the presence of camera-level sub-domains as a characteristic unique to person Re-ID, and develop a 'camera-aware' domain adaptation method via adversarial learning. With this method, the learned representation reduces the distribution discrepancy not only between the source and target domains but also across all cameras. For the second issue, we exploit the temporal continuity within each camera of the target domain to create discriminative information. This is implemented by dynamically generating online triplets within each batch, so as to take maximal advantage of the steadily improving representation during training. Together, these two methods give rise to a new unsupervised domain adaptation framework for person Re-ID. Extensive experiments and ablation studies conducted on benchmark datasets demonstrate its superiority and interesting properties.
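The abstract describes dynamically generating online triplets within each batch. As a rough illustration of the general idea of online (batch-hard) triplet mining — a minimal sketch, not necessarily the paper's exact scheme; the function name, Euclidean distance metric, and batch-hard selection rule are assumptions here:

```python
import numpy as np

def batch_hard_triplets(features, labels):
    """For each anchor in the batch, pick the hardest positive (farthest
    sample with the same label) and hardest negative (closest sample with
    a different label), yielding (anchor, positive, negative) index triplets."""
    # Pairwise Euclidean distances between all samples in the batch.
    diff = features[:, None, :] - features[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    same = labels[:, None] == labels[None, :]
    triplets = []
    for a in range(len(labels)):
        pos_mask = same[a].copy()
        pos_mask[a] = False               # exclude the anchor itself
        neg_mask = ~same[a]
        if not pos_mask.any() or not neg_mask.any():
            continue                      # need at least one positive and one negative
        p = np.where(pos_mask)[0][np.argmax(dist[a][pos_mask])]
        n = np.where(neg_mask)[0][np.argmin(dist[a][neg_mask])]
        triplets.append((a, p, n))
    return triplets
```

Because the triplets are re-mined from the current batch's embeddings at every step, the selected hard examples track the representation as it improves, which is the point of doing the mining online rather than offline.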

Authors


  •   Qi, Lei (external author)
  •   Wang, Lei
  •   Huo, Jing (external author)
  •   Zhou, Luping
  •   Shi, Yinghuan (external author)
  •   Gao, Yang (external author)

Publication Date


  • 2019

Citation


  • Qi, L., Wang, L., Huo, J., Zhou, L., Shi, Y. & Gao, Y. (2019). A novel unsupervised camera-aware domain adaptation framework for person re-identification. Proceedings of the IEEE International Conference on Computer Vision (pp. 8079-8088).

Scopus Eid


  • 2-s2.0-85081908168

Start Page


  • 8079

End Page


  • 8088
