Abstract: To improve the thresholding accuracy and adaptability of the classical Otsu method, a symmetry-constrained between-class variance thresholding method is proposed. The proposed approach first applies the Prewitt operator to construct a gradient magnitude image from the input image, and then extracts symmetric sampling areas based on the principle of symmetry. A threshold is then selected by maximizing the symmetry-constrained between-class variance objective function, and the symmetric sampling areas are tested for the symmetry condition under this threshold. If the symmetry condition is not satisfied, the input image undergoes symmetry correction based on the symmetric sampling areas, and the threshold is re-selected by applying the symmetry-constrained between-class variance objective function to the corrected symmetric area. Finally, the chosen threshold is used to threshold the input image. The performance of the proposed method is compared with Otsu's method and four of its improved variants on a dataset comprising 28 synthetic images and 70 real-world images. Experimental results show that the proposed method achieves misclassification error rates of 0.0106 and 0.016 on synthetic and real-world images, respectively. Compared with the second-best method in terms of thresholding accuracy, the proposed method reduces the misclassification error rates by 91.4% and 86.1% on synthetic and real-world images, respectively. Although the proposed method is not superior in computational efficiency, it exhibits more robust thresholding adaptability and higher thresholding accuracy across diverse modalities of test images.
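The symmetry-constrained objective and the sampling-area extraction are specific to the paper and are not reproduced here. As context for the abstract, the following is a minimal sketch of the two standard building blocks the method starts from: a Prewitt gradient magnitude image and classical Otsu threshold selection by between-class variance maximization. Function names and the edge-replication padding choice are the author's illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def prewitt_gradient_magnitude(img):
    """Gradient magnitude from the two 3x3 Prewitt kernels (edge-replicated borders)."""
    img = img.astype(float)
    kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)  # horizontal kernel
    ky = kx.T                                                          # vertical kernel
    p = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    # Accumulate the 3x3 correlation as nine shifted, weighted copies of the image.
    for i in range(3):
        for j in range(3):
            win = p[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)

def otsu_threshold(img):
    """Classical Otsu: pick the gray level maximizing between-class variance."""
    hist = np.bincount(img.ravel().astype(np.uint8), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * prob[:t]).sum() / w0  # class means
        mu1 = (levels[t:] * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2          # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

The proposed method replaces the plain between-class variance criterion above with a symmetry-constrained variant, and applies the correction step described in the abstract before threshold selection.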