Research on Image Restoration Algorithm for Weakly Invisible Printed Quantum Dots Based on NafNet
Abstract: To address the drop in decoding success rate (DSR) that weakly invisible printed quantum dot (PQD) images suffer from noise and dot-matrix defects introduced during printing and scanning, an image restoration algorithm that improves detection robustness is proposed. First, a parameter-controllable composite degradation dataset is constructed through parameterized modeling of the imaging noise. Second, a residual-enhanced restoration network, ResNAFNet, is proposed to improve the recovery quality of distorted dot-matrix edges and local features. Finally, a DSR-oriented evaluation system for restoration is established. Under the two degradation conditions tested, the bit error rate (BER) on the test set drops to 0.0124% and 0.0038%, the ED is 2.9342 and 1.4757 pixels, and the DSR reaches at least 88.75%, an improvement of 85.55% over the degraded images. The method effectively improves the structural recovery quality and readout reliability of PQD images, demonstrating its engineering effectiveness for anti-counterfeiting authentication.
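The abstract does not spell out the composite degradation model. As a rough illustration only, a parameter-controllable degradation of a binary dot-matrix image could combine Gaussian blur, additive sensor noise, and random dot dropout; the function name, parameters, and default values below are assumptions for this sketch, not the paper's actual model:

```python
import numpy as np

def degrade_dot_image(img, sigma=1.0, noise_std=0.05, dropout=0.02, seed=0):
    """Illustrative composite degradation of a dot-matrix image in [0, 1]:
    separable Gaussian blur + additive Gaussian noise + random dot loss.
    All parameter values are assumptions, not taken from the paper."""
    rng = np.random.default_rng(seed)
    # Separable Gaussian blur kernel, truncated at 3 sigma and normalized.
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    # Convolve each column, then each row, with the 1-D kernel.
    blurred = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    blurred = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, blurred)
    # Additive sensor noise with controllable standard deviation.
    noisy = blurred + rng.normal(0.0, noise_std, img.shape)
    # Random dot-matrix defects: zero out a small fraction of pixels.
    mask = rng.random(img.shape) >= dropout
    return np.clip(noisy * mask, 0.0, 1.0)
```

Sweeping `sigma`, `noise_std`, and `dropout` over grids would yield the kind of parameter-controllable degradation dataset the abstract describes.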
Article citation: 类承森, 曹鹏. Research on Image Restoration Algorithm for Weakly Invisible Printed Quantum Dots Based on NafNet [J]. 计算机科学与应用 (Computer Science and Application), 2026, 16(3): 75-88. https://doi.org/10.12677/csa.2026.163088
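The DSR-oriented evaluation reports BER, ED (in pixels), and DSR. A minimal sketch of how such metrics might be computed is shown below; the function names, the index-matched interpretation of ED, and the all-bits-correct success criterion for DSR are assumptions for illustration, not the paper's definitions:

```python
import numpy as np

def bit_error_rate(decoded_bits, true_bits):
    """BER: fraction of payload bits decoded incorrectly."""
    decoded = np.asarray(decoded_bits)
    truth = np.asarray(true_bits)
    return float(np.mean(decoded != truth))

def mean_euclidean_distance(pred_centers, true_centers):
    """ED: mean Euclidean distance in pixels between recovered and
    ground-truth dot centers, assumed here to be matched by index."""
    d = np.asarray(pred_centers, float) - np.asarray(true_centers, float)
    return float(np.mean(np.linalg.norm(d, axis=1)))

def decoding_success_rate(per_image_bits, true_bits):
    """DSR: fraction of images whose payload decodes with zero bit errors
    (an assumed success criterion for this sketch)."""
    ok = [np.array_equal(np.asarray(b), np.asarray(true_bits))
          for b in per_image_bits]
    return float(np.mean(ok))
```

Under these assumed definitions, restoration quality is judged by how far BER and ED fall and how close DSR climbs toward 100% on the degraded test set.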
