Attentional residual dense connection fusion network for infrared and visible image fusion
CLC Number: TP391.4
Abstract:

To address the problems of current infrared and visible image fusion algorithms, such as missing scene detail, unclear detail in target regions, and unnatural-looking fused images, an attentional residual dense fusion network (ARDFusion) for infrared and visible image fusion is proposed. The overall architecture is an auto-encoder network. First, an encoder with max-pooling layers extracts multi-scale features from the source images; the attentional residual dense fusion network then fuses the feature maps at each scale. The residual dense blocks in the network continuously pass features forward and maximize the retention of feature information at each layer, while the attention mechanism highlights target information and captures more detail related to the target and the scene. Finally, the fused features are fed into the decoder and reconstructed through upsampling and convolutional layers to obtain the fused image. Experimental results show that, compared with typical fusion algorithms in the existing literature, the proposed method achieves better fusion performance: it better preserves the spectral characteristics of the visible images, produces salient infrared targets, and performs well in both subjective and objective evaluations.
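The abstract outlines an encoder-fusion-decoder pipeline: a max-pooling encoder for multi-scale features, attention residual dense blocks for fusion, and an upsampling decoder for reconstruction. The PyTorch sketch below is only one plausible reading of that description; the module names (ARDFusionSketch, ResidualDenseBlock, ChannelAttention), channel widths, block depths, and the squeeze-and-excitation form of attention are assumptions of this sketch, not details taken from the paper.

```python
# Minimal sketch of the pipeline described in the abstract. All hyperparameters
# and the exact attention form are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualDenseBlock(nn.Module):
    """Dense connections keep every layer's features; a residual sum preserves the input."""
    def __init__(self, channels, growth=32, layers=4):
        super().__init__()
        self.convs = nn.ModuleList()
        c = channels
        for _ in range(layers):
            self.convs.append(nn.Conv2d(c, growth, 3, padding=1))
            c += growth
        self.fuse = nn.Conv2d(c, channels, 1)  # local feature fusion back to input width

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(F.relu(conv(torch.cat(feats, dim=1))))
        return x + self.fuse(torch.cat(feats, dim=1))  # residual connection


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed form) to highlight target features."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.mlp(x.mean(dim=(2, 3)))            # global average pooling -> channel weights
        return x * w.unsqueeze(-1).unsqueeze(-1)    # reweight feature channels


class ARDFusionSketch(nn.Module):
    def __init__(self, base=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, base, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.MaxPool2d(2),  # max pooling yields the second scale
                                  nn.Conv2d(base, base * 2, 3, padding=1), nn.ReLU())
        self.fuse1 = nn.Sequential(ResidualDenseBlock(base * 2), ChannelAttention(base * 2))
        self.fuse2 = nn.Sequential(ResidualDenseBlock(base * 4), ChannelAttention(base * 4))
        self.dec2 = nn.Sequential(nn.Conv2d(base * 4, base * 2, 3, padding=1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.Conv2d(base * 2 + base * 2, base, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(base, 1, 3, padding=1))

    def forward(self, ir, vis):
        # Encode each source image at two scales, concatenate infrared and visible features
        # per scale, fuse them with attention residual dense blocks, then decode with upsampling.
        i1, v1 = self.enc1(ir), self.enc1(vis)
        f1 = self.fuse1(torch.cat([i1, v1], dim=1))                        # full-resolution scale
        f2 = self.fuse2(torch.cat([self.enc2(i1), self.enc2(v1)], dim=1))  # half-resolution scale
        up = F.interpolate(self.dec2(f2), scale_factor=2, mode='bilinear', align_corners=False)
        return torch.sigmoid(self.dec1(torch.cat([up, f1], dim=1)))


if __name__ == "__main__":
    ir = torch.rand(1, 1, 128, 128)
    vis = torch.rand(1, 1, 128, 128)
    print(ARDFusionSketch()(ir, vis).shape)  # torch.Size([1, 1, 128, 128])
```

The two-scale split, block depth, and sigmoid output range here are placeholders; the paper presumably specifies its own number of scales, fusion rule, and loss functions.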

History
  • Online: November 23, 2023