Multi-scale infrared and visible image registration and fusion algorithm with adaptive feature enhancement
DOI:
CSTR:
Author: 孙溪成, 吕伏, 尹艺潼
Affiliation:

1. School of Software, Liaoning Technical University, Huludao 125105, China; 2. Department of Basic Education, Liaoning Technical University, Huludao 125105, China

Author biography:

Corresponding author:

CLC number:

TP391; TN911.73

Fund project:

Supported by the General Program of the National Natural Science Foundation of China (52274206), the General Program of the National Natural Science Foundation of China (51874166), and the Youth Fund of the National Natural Science Foundation of China (51904144)


    Abstract:

    The current infrared and visible image fusion algorithms do not extract image features sufficiently, so detail information is lost. In real-world scenarios, infrared and visible images are usually unregistered, and existing registration algorithms still suffer from artifacts and deviations. To address these problems, this paper proposes a multi-scale infrared and visible image registration and fusion algorithm with adaptive feature enhancement. First, multi-scale convolutional kernels and dense connections are used in the registration network to extract features at different scales and to prevent information loss, and the ORB feature point detection algorithm, together with a purpose-designed feature enhancement module, is introduced to extract features fully and to adapt to complex environments. Second, an illumination enhancement module is designed by incorporating channel attention and self-learning parameters to strengthen the information expression of the visible image. Then, in the fusion network, an adaptive multi-scale pooling convolution is designed from different pooling strategies and variable convolutions to extract detail information at multiple scales, and an EMA feature fusion module is designed to integrate local and global features. Finally, a flow-field consistency loss function is designed to reduce registration errors. To better validate the practicality of the method, an infrared and visible image dataset is constructed. Comparative and ablation experiments are conducted on the public datasets TNO and Roadscene and on the self-built dataset. The results show that, in subjective evaluation, the registered images exhibit small deviations and no artifacts and the fused images are clear; in objective evaluation, the MSE, MI, NCC, SD, and EN metrics improve by approximately 20%, 7%, 4%, 15%, and 8%, respectively, compared with other algorithms. In addition, object detection experiments on the fusion results with YOLOv8 show good detection performance.
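The registration network described above incorporates ORB feature point detection. As a rough illustration of how ORB keypoints can drive a coarse alignment between the two modalities, the OpenCV sketch below detects and matches ORB features and fits a global homography; the file names, parameter values, and the homography-based warp are illustrative assumptions and do not reproduce the paper's learned deformation-field registration.

```python
# Coarse ORB-based alignment sketch (illustrative only, not the paper's registration network).
import cv2
import numpy as np

ir = cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file names
vis = cv2.imread("vis.png", cv2.IMREAD_GRAYSCALE)

# Detect ORB keypoints and binary descriptors in both images.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(ir, None)
kp2, des2 = orb.detectAndCompute(vis, None)

# Brute-force Hamming matching with cross-check, keeping the best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]

# Estimate a global homography with RANSAC (outlier rejection) and warp the
# infrared image onto the visible image grid; the paper instead predicts a
# dense deformation field, so this is only a coarse analogue.
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
warped_ir = cv2.warpPerspective(ir, H, (vis.shape[1], vis.shape[0]))
```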
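The objective comparison is reported on MSE, MI, NCC, SD, and EN. The NumPy sketch below shows the standard definitions these metrics are usually computed from (mean squared error, histogram-based mutual information, normalized cross-correlation, standard deviation, and Shannon entropy); it is a minimal reference implementation under common conventions, not the paper's evaluation code.

```python
# Common fusion/registration quality metrics for 8-bit grayscale images (NumPy arrays).
import numpy as np

def mse(ref, img):
    """Mean squared error between two same-sized images."""
    ref, img = ref.astype(np.float64), img.astype(np.float64)
    return np.mean((ref - img) ** 2)

def entropy(img, bins=256):
    """EN: Shannon entropy of the gray-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def sd(img):
    """SD: standard deviation of pixel intensities (a contrast measure)."""
    return img.astype(np.float64).std()

def ncc(a, b):
    """NCC: zero-mean normalized cross-correlation between two images."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    return float(np.sum(a * b) / (np.sqrt(np.sum(a**2) * np.sum(b**2)) + 1e-12))

def mutual_information(a, b, bins=256):
    """MI: mutual information from the joint gray-level histogram, in bits."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                                 range=[[0, 255], [0, 255]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of the first image
    py = pxy.sum(axis=0, keepdims=True)   # marginal of the second image
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

For fusion evaluation, MI is often reported as MI(infrared, fused) + MI(visible, fused), while SD and EN are computed on the fused image alone.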

Cite this article:

孙溪成, 吕伏, 尹艺潼. Multi-scale infrared and visible image registration and fusion algorithm with adaptive feature enhancement [J]. Journal of Electronic Measurement and Instrumentation, 2025, 39(6): 242-254.

History
  • Received:
  • Revised:
  • Accepted:
  • Published online: 2025-09-16
  • Published: