Symmetry-Constrained Between-Class Variance Thresholding Method

DOI:

Author:

Affiliation: College of Computer and Information Technology, China Three Gorges University

Corresponding author:

CLC number: TP391

Fund project: National Natural Science Foundation of China (61871258)


    Abstract:

    In order to improve the thresholding accuracy and adaptability of the existing OTSU thresholding method, a symmetry-constrained between-class variance thresholding method is proposed. The proposed approach first employs the Prewitt operator to construct a gradient magnitude image from the input image, and then extracts symmetric sampling areas according to the principle of symmetry. Next, a threshold is selected by maximizing the symmetry-constrained between-class variance objective function, and the method checks whether the symmetric sampling areas satisfy the symmetry condition under this threshold. If the symmetry condition is not satisfied, the input image undergoes symmetry correction based on the symmetric sampling areas, and a threshold is re-selected by applying the symmetry-constrained between-class variance objective function to the corrected symmetric sampling areas. Finally, the selected threshold is used to threshold the input image. The thresholding performance of the proposed method is compared with that of OTSU's method and four improved variants of OTSU's method on a dataset comprising 28 synthetic images and 70 real-world images. Experimental results show that the proposed method achieves misclassification error rates of 0.0106 and 0.016 on the synthetic and real-world images, respectively, reducing the misclassification error by 91.4% and 86.1% relative to the method with the second-best thresholding accuracy. Although the proposed method has no advantage in computational efficiency, it offers more robust thresholding adaptability and higher thresholding accuracy across test images of different modalities.
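The pipeline described in the abstract rests on two standard building blocks: a Prewitt gradient magnitude image and OTSU's between-class variance criterion. The Python sketch below illustrates only these textbook components (the function names prewitt_gradient_magnitude and otsu_threshold are illustrative, not from the paper); the symmetric sampling area extraction, the symmetry check, and the symmetry correction step of the proposed method are not reproduced here, since the abstract does not specify their details.

import numpy as np
from scipy.ndimage import convolve

def prewitt_gradient_magnitude(image):
    # Prewitt kernels for horizontal and vertical derivatives
    kx = np.array([[-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])
    ky = kx.T
    gx = convolve(image.astype(float), kx, mode="nearest")
    gy = convolve(image.astype(float), ky, mode="nearest")
    return np.hypot(gx, gy)   # gradient magnitude image

def otsu_threshold(image, n_bins=256):
    # Normalized gray-level histogram
    hist, edges = np.histogram(image.ravel(), bins=n_bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0

    omega0 = np.cumsum(p)               # class-0 (background) probability per candidate threshold
    omega1 = 1.0 - omega0               # class-1 (foreground) probability
    mu_cum = np.cumsum(p * centers)     # cumulative first moment of the histogram
    mu_T = mu_cum[-1]                   # global mean

    valid = (omega0 > 0) & (omega1 > 0)
    mu0 = np.divide(mu_cum, omega0, out=np.zeros_like(mu_cum), where=valid)
    mu1 = np.divide(mu_T - mu_cum, omega1, out=np.zeros_like(mu_cum), where=valid)

    # Between-class variance; the selected threshold maximizes it
    sigma_b2 = np.where(valid, omega0 * omega1 * (mu0 - mu1) ** 2, -np.inf)
    return centers[np.argmax(sigma_b2)]

# Example use: binarize an image with the plain (unconstrained) OTSU criterion
# binary = image > otsu_threshold(image)

The proposed method replaces the plain between-class variance above with a symmetry-constrained objective and re-selects the threshold after symmetry correction when the sampling areas fail the symmetry test.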

History
  • Received: 2024-02-21
  • Revised: 2024-04-29
  • Accepted: 2024-05-10
  • Published online:
  • Published: