Image Deblurring Using Transformer-Based Generative Adversarial Network
DOI: 10.12677/pm.2026.163084
Author: Li Qiuliang, School of Mathematical Sciences, Xinjiang Normal University, Urumqi, Xinjiang
Keywords: Deep Learning, Image Deblurring, Generative Adversarial Network, Transformer
Abstract: Blur in real-world images typically arises from factors such as camera shake and object motion. Because the blur kernel is unknown, image deblurring is an inverse problem in image processing. Although existing deblurring methods achieve promising results, they still suffer from insufficient texture-detail recovery and from artifacts when handling complex real-world blur. To address these issues, this paper proposes a novel Transformer-based generative adversarial network that integrates a local-global dual Transformer module. Experiments on benchmark datasets such as GoPro and HIDE show, in both subjective and objective terms, that the proposed method performs well in restoring image edge information and realistic texture features.
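This page gives no reference implementation, so the following is only a minimal PyTorch sketch of what a local-global dual Transformer block could look like; the class name LocalGlobalBlock, the window and pool hyperparameters, and the fusion scheme are all illustrative assumptions, not the paper's code. The local branch runs self-attention inside small windows to preserve fine texture, while the global branch lets every pixel attend to a pooled feature map to capture long-range blur structure.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalGlobalBlock(nn.Module):
    def __init__(self, dim=64, heads=4, window=8, pool=8):
        super().__init__()
        self.window = window  # side length of local attention windows (assumed)
        self.pool = pool      # downsampling factor for the global branch (assumed)
        self.norm_local = nn.LayerNorm(dim)
        self.norm_global = nn.LayerNorm(dim)
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x):
        # x: (B, C, H, W); H and W are assumed divisible by window and pool.
        b, c, h, w = x.shape
        ws = self.window

        # Local branch: self-attention within non-overlapping ws x ws windows.
        t = x.view(b, c, h // ws, ws, w // ws, ws)
        t = t.permute(0, 2, 4, 3, 5, 1).reshape(-1, ws * ws, c)
        t = self.norm_local(t)
        local, _ = self.local_attn(t, t, t)
        local = local.view(b, h // ws, w // ws, ws, ws, c)
        local = local.permute(0, 5, 1, 3, 2, 4).reshape(b, c, h, w)

        # Global branch: each pixel queries a pooled (coarse) feature map.
        g = F.adaptive_avg_pool2d(x, (h // self.pool, w // self.pool))
        g = g.flatten(2).transpose(1, 2)                     # (B, hw/pool^2, C)
        q = self.norm_global(x.flatten(2).transpose(1, 2))   # (B, HW, C)
        glob, _ = self.global_attn(q, g, g)
        glob = glob.transpose(1, 2).reshape(b, c, h, w)

        # Fuse both branches residually, then a pixel-wise feed-forward step.
        y = x + local + glob
        z = self.ffn(y.flatten(2).transpose(1, 2))
        return y + z.transpose(1, 2).reshape(b, c, h, w)

if __name__ == "__main__":
    feats = torch.randn(1, 64, 64, 64)       # (B, C, H, W)
    print(LocalGlobalBlock()(feats).shape)   # torch.Size([1, 64, 64, 64])

In a full model, several such blocks would sit inside the GAN's generator, with a discriminator providing the adversarial loss alongside a reconstruction loss; none of those training details are specified in this excerpt.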
Citation: Li, Q.L. (2026) Image Deblurring Using Transformer-Based Generative Adversarial Network. Pure Mathematics, 16(3), 216-225. https://doi.org/10.12677/pm.2026.163084

References

[1] Wang, P., Zhu, Y., Yan, Q., Sun, J. and Zhang, Y. (2024) Real-World Image Deblurring: Challenges and Prospects. Journal of Image and Graphics, 29(12), 3501-3528. (in Chinese)
[2] Neji, H., Hamdani, T.M., Halima, M.B., et al. (2021) Blur2Sharp: A GAN-Based Model for Document Image Deblurring. Technical Report.
[3] Chen, R., Zhang, H., Liang, Y., et al. (2025) Research on a Granary Image Deblurring Method Based on Improved DeblurGAN-v2-3s. Journal of the Chinese Cereals and Oils Association, 40(10), 227-234. (in Chinese)
[4] Su, D., Wang, S., Zhang, C., Chen, Z. and Liu, C. (2024) Blind Deblurring Algorithm for Missile-Borne Images Based on Generative Adversarial Network. Acta Armamentarii, 45(3), 855-863. (in Chinese)
[5] Vaswani, A., Shazeer, N., Parmar, N., et al. (2017) Attention Is All You Need. arXiv:1706.03762.
[6] Tsai, F., Peng, Y., Lin, Y., Tsai, C. and Lin, C. (2022) Stripformer: Strip Transformer for Fast Image Deblurring. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M. and Hassner, T., Eds., Computer Vision – ECCV 2022, Lecture Notes in Computer Science, Springer, 146-162.
[7] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S. and Yang, M. (2022) Restormer: Efficient Transformer for High-Resolution Image Restoration. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, 18-24 June 2022, 5728-5739.
[8] Kong, L., Dong, J., Ge, J., Li, M. and Pan, J. (2023) Efficient Frequency Domain-Based Transformers for High-Quality Image Deblurring. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, 17-24 June 2023, 5886-5895.
[9] Nah, S., Kim, T.H. and Lee, K.M. (2017) Deep Multi-Scale Convolutional Neural Network for Dynamic Scene Deblurring. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, 21-26 July 2017, 3883-3891.
[10] Shen, Z., Wang, W., Lu, X., Shen, J., Ling, H., Xu, T., et al. (2019) Human-Aware Motion Deblurring. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, 27 October 2019-2 November 2019, 5572-5581.
[11] Rim, J., Lee, H., Won, J. and Cho, S. (2020) Real-World Blur Dataset for Learning and Benchmarking Deblurring Algorithms. In: Vedaldi, A., Bischof, H., Brox, T. and Frahm, J.M., Eds., Computer Vision – ECCV 2020, Lecture Notes in Computer Science, Springer International Publishing, 184-201.
[12] Kupyn, O., Martyniuk, T., Wu, J. and Wang, Z. (2019) DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, 27 October 2019-2 November 2019, 8877-8886.
[13] Kupyn, O., Budzan, V., Mykhailych, M., Mishkin, D. and Matas, J. (2018) Deblurgan: Blind Motion Deblurring Using Conditional Adversarial Networks. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, 18-23 June 2018, 8183-8192. [Google Scholar] [CrossRef