Abstract: A person re-identification method based on feature fusion and multi-scale information is proposed to address the low re-identification accuracy caused by large variations in image background and the similar global appearance of different persons. First, the global feature map of the person image is extracted by a ResNet50 backbone. Second, a two-branch structure is designed. In the first branch, a spatial transformer network adaptively aligns the global feature map, and local feature maps are obtained by horizontally partitioning the aligned global feature map; the correlation between the global feature and each local feature is then mined by fusing the global feature with each local feature separately. The second branch adds four convolutional layers of different scales to extract multi-scale features from the global feature map. Finally, at the inference stage, the features of the two branches are concatenated to form the descriptor used for person matching. Experiments on the Market-1501 and DukeMTMC datasets show that the proposed method outperforms feature-alignment and local-feature methods such as AlignedReID and EA-NET. On Market-1501, mAP and Rank-1 reach 86.77% and 94.83%, respectively.
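The abstract describes the two-branch architecture only at a high level. The following is a minimal PyTorch sketch of that design under stated assumptions, not the authors' implementation: the number of horizontal stripes (num_parts), the embedding width (embed_dim), the kernel sizes of the four multi-scale convolutions (ms_kernels), the localization network inside the spatial transformer, and the concatenate-then-project operator used to fuse the global feature with each local feature are all illustrative choices not specified in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class STN(nn.Module):
    """Spatial transformer: predicts an affine transform and resamples the feature map.
    The localization network here (pool + linear) is an assumption for illustration."""
    def __init__(self, channels):
        super().__init__()
        self.loc = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, 6))
        # Initialize to the identity transform so training starts from "no warping".
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

class TwoBranchReID(nn.Module):
    def __init__(self, num_parts=4, embed_dim=256, ms_kernels=(1, 3, 5, 7)):
        super().__init__()
        backbone = resnet50(weights=None)
        # Keep everything up to the last conv stage: output is the global feature map (2048 ch).
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        # Branch 1: STN alignment + horizontal partition + global/local fusion.
        self.stn = STN(2048)
        self.num_parts = num_parts
        self.global_embed = nn.Linear(2048, embed_dim)
        self.local_embed = nn.ModuleList(nn.Linear(2048, embed_dim) for _ in range(num_parts))
        self.fuse = nn.ModuleList(nn.Linear(2 * embed_dim, embed_dim) for _ in range(num_parts))
        # Branch 2: four convolutions with different kernel sizes for multi-scale features.
        self.ms_convs = nn.ModuleList(
            nn.Conv2d(2048, embed_dim, k, padding=k // 2) for k in ms_kernels
        )

    def forward(self, x):
        fmap = self.backbone(x)                        # (B, 2048, H, W)
        # ----- Branch 1 -----
        aligned = self.stn(fmap)
        g = self.global_embed(F.adaptive_avg_pool2d(aligned, 1).flatten(1))
        parts = aligned.chunk(self.num_parts, dim=2)   # horizontal stripes along height
        fused = []
        for i, p in enumerate(parts):
            l = self.local_embed[i](F.adaptive_avg_pool2d(p, 1).flatten(1))
            # Fuse the global feature with each local feature (concat + projection, assumed).
            fused.append(self.fuse[i](torch.cat([g, l], dim=1)))
        branch1 = torch.cat([g] + fused, dim=1)
        # ----- Branch 2 -----
        ms = [F.adaptive_avg_pool2d(conv(fmap), 1).flatten(1) for conv in self.ms_convs]
        branch2 = torch.cat(ms, dim=1)
        # Inference-time descriptor: concatenation of the two branches.
        return torch.cat([branch1, branch2], dim=1)

# Usage: a batch of two 256x128 person crops yields one descriptor per image.
if __name__ == "__main__":
    model = TwoBranchReID().eval()
    with torch.no_grad():
        feats = model(torch.randn(2, 3, 256, 128))
    print(feats.shape)  # torch.Size([2, 2304]) with the assumed dimensions above
```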