Abstract: To address the limitations of single-domain analysis and the differentiated distribution of scanning features in image deblurring, a novel Mamba-based deblurring method built on dual-domain feature fusion is proposed. By introducing a state-space model, the method simultaneously extracts spatial structural features from the blurred image and multi-scale frequency-domain features produced by wavelet transformation. This overcomes the constraints of single-domain analysis and enables deep, adaptive fusion of spatial-domain contextual information with high-frequency wavelet-domain details, all under the guidance of the state-space model. A dual-branch state-space module is designed to model spatial and frequency-domain information independently, accurately adapting to the distinct distribution characteristics of spatial structures and of high-frequency details in the frequency domain. While substantially enhancing feature representation, the method effectively handles the differentiated distribution of scanning features and achieves high-quality image restoration. Experimental results show that the proposed method attains a PSNR of 33.75 dB and an SSIM of 0.968 on the GoPro dataset, a PSNR of 31.81 dB and an SSIM of 0.949 on the HIDE dataset, and PSNR/SSIM of 32.92 dB/0.937 and 40.15 dB/0.974 on the RealBlur-J and RealBlur-R datasets, respectively, outperforming classical deblurring approaches in blur removal, structural restoration, edge preservation, and overall visual quality. Devices built on this method can deliver high-precision image enhancement in practical engineering applications.
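As a rough illustration of the dual-domain idea described in the abstract, the sketch below separates an image into a spatial branch (here a toy gradient-magnitude feature) and a frequency branch (high-frequency sub-bands of a single-level Haar wavelet transform), then fuses them with a simple content-adaptive gate. This is a minimal NumPy sketch under stated assumptions, not the paper's actual architecture: the state-space (Mamba) modeling of each branch is omitted, the function names are hypothetical, and the fusion rule is illustrative only.

```python
import numpy as np

def haar_dwt2(x):
    """Single-level orthonormal 2-D Haar wavelet transform.

    Returns the low-frequency approximation (LL) and the three
    high-frequency detail sub-bands (LH, HL, HH), each half-size.
    """
    # Row-wise averaging/differencing (orthonormal scaling by 1/sqrt(2)).
    a = (x[0::2, :] + x[1::2, :]) / np.sqrt(2)
    d = (x[0::2, :] - x[1::2, :]) / np.sqrt(2)
    # Column-wise averaging/differencing.
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def dual_domain_features(img):
    """Toy dual-domain feature extraction and adaptive fusion.

    img: 2-D float array with even height and width.
    """
    # Spatial branch: local gradient magnitude as a stand-in for
    # learned spatial structural features.
    gx = np.diff(img, axis=1, prepend=img[:, :1])
    gy = np.diff(img, axis=0, prepend=img[:1, :])
    spatial = np.hypot(gx, gy)

    # Frequency branch: aggregate high-frequency wavelet detail energy.
    _, lh, hl, hh = haar_dwt2(img)
    high_freq = np.abs(lh) + np.abs(hl) + np.abs(hh)

    # Adaptive fusion: upsample the half-size frequency map back to the
    # image grid and use it to gate (re-weight) the spatial features.
    hf_up = np.kron(high_freq, np.ones((2, 2)))
    gate = hf_up / (hf_up.max() + 1e-8)
    fused = spatial * (1.0 + gate)
    return spatial, high_freq, fused
```

In a real deblurring network the two branches would be deep feature maps processed by state-space blocks rather than hand-crafted gradients, but the overall flow (spatial features plus wavelet high-frequency details, fused adaptively) follows the same pattern.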