Vehicle re-identification network with multi-view fusion and global feature enhancement

CLC Number: TP391; TN919.8


    Abstract:

    Vehicle re-identification (Re-ID) is an important application in the field of intelligent transportation. Most existing vehicle Re-ID methods focus on pre-defined local region features or on global appearance features. In complex traffic environments, however, it is difficult for traditional methods to obtain pre-defined local regions and to capture valuable global vehicle feature information. Therefore, an end-to-end dual-branch network with a multi-view fusion hybrid attention mechanism and global feature enhancement is proposed. The network aims to obtain more complete and diverse vehicle features by improving both the feature representation ability and the feature quality. A view parsing network segments the vehicle image into four views, and a view stitching method alleviates the information loss caused by inaccurate segmentation. To better highlight salient local regions in the stitched views, a hybrid attention module consisting of a channel attention mechanism and a self-attention mechanism is proposed. This module extracts the key local information from the stitched view and the correlations among local regions, thereby better emphasizing the detailed information of vehicle parts. In addition, a global feature enhancement module is proposed to obtain the spatial and channel relationships of global features through pooling and convolution. This module not only extracts semantically enhanced vehicle features but also preserves complete detailed information, mitigating the influence of factors such as viewpoint and lighting changes on the acquired vehicle images. Extensive experiments on the VeRi-776 and VehicleID datasets show that mAP, CMC@1, and CMC@5 reach 82.41%, 98.63%, and 99.23%, respectively, outperforming existing methods.
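    The abstract does not give implementation details for the hybrid attention module. Below is a minimal NumPy sketch of the general pattern it describes: squeeze-and-excitation-style channel attention followed by dot-product self-attention over spatial positions. All shapes, layer sizes, and the fixed random weights are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention(feat, reduction=4):
    """Channel attention (squeeze-and-excitation style, assumed form).
    feat: (C, H, W) feature map of a stitched view."""
    c, h, w = feat.shape
    rng = np.random.default_rng(0)              # illustrative fixed MLP weights
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    squeeze = feat.mean(axis=(1, 2))            # global average pool -> (C,)
    excite = 1 / (1 + np.exp(-(w2 @ np.maximum(w1 @ squeeze, 0))))  # sigmoid(MLP)
    return feat * excite[:, None, None]         # reweight channels

def spatial_self_attention(feat):
    """Dot-product self-attention over spatial positions, modeling the
    correlations between local regions mentioned in the abstract."""
    c, h, w = feat.shape
    x = feat.reshape(c, h * w).T                # (HW, C): one token per location
    attn = softmax(x @ x.T / np.sqrt(c))        # (HW, HW) affinity between regions
    out = (attn @ x).T.reshape(c, h, w)         # aggregate correlated regions
    return feat + out                           # residual connection

def hybrid_attention(feat):
    # Channel attention first, then spatial self-attention on the result.
    return spatial_self_attention(channel_attention(feat))

feat = np.random.default_rng(1).standard_normal((8, 4, 4))
out = hybrid_attention(feat)
print(out.shape)
```

    The same pattern extends naturally to batched tensors and learned projections in a deep-learning framework; the global feature enhancement branch would analogously combine pooling with convolutions over the backbone's global feature map.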

History
  • Online: June 15, 2023