Automated Key Point Detection for Labor Induction Ultrasound Imaging in Pregnant Women Using SE-SWTNet
DOI: 10.12677/mos.2025.144260. Supported by research project funding.
Authors: Huangfei Chen*, Sheng Chen#: School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai; Liping Yao: Department of Ultrasound, Shanghai First Maternity and Infant Hospital, Shanghai
Keywords: Induction of Labor, Key Point Detection, Cervical Ultrasound Images, Attention Mechanism
Abstract: To reduce the risks of labor induction and improve the accuracy of clinical diagnosis during pregnancy, several key parameters must be identified in prenatal ultrasound examinations. We collected 120 cervical ultrasound images acquired between weeks 38 and 42 of pregnancy, then annotated and preprocessed the data. Taking the conventional Swin Transformer (Shifted Window Transformer) network as the baseline, we introduced transposed convolutions and a final output feature layer so that the network directly outputs keypoint parameters; we also incorporated a regularization function and the SE (Squeeze-and-Excitation) attention mechanism to reduce the risk of overfitting and improve the model's generalization ability. SE-SWTNet (Squeeze-and-Excitation Shifted Window Transformer) was compared with four existing algorithms. The results show that SE-SWTNet captures the keypoint parameters in cervical ultrasound images with high precision. On the test set, SE-SWTNet achieved a mean radius error of 1.11 mm and accuracies of 93.5%, 96.5%, and 97.5% within error ranges of 4 mm, 5 mm, and 6 mm, respectively. SE-SWTNet thus enables high-precision automatic keypoint detection on labor-induction ultrasound images of pregnant women, providing clinicians with more accurate diagnostic information, thereby reducing risk and improving the success rate.
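The paper's own implementation is not reproduced here, but the SE (Squeeze-and-Excitation) channel-attention mechanism named in the abstract can be illustrated with a minimal NumPy sketch. All shapes, the reduction ratio `r`, and the randomly initialized weights `w1`/`w2` are hypothetical placeholders, not values from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(x, w1, w2):
    """Squeeze-and-Excitation over a single (C, H, W) feature map.

    squeeze:    global average pooling over the spatial dims -> (C,)
    excitation: two fully connected layers (C -> C/r -> C)
                with ReLU in between and a sigmoid gate at the end
    scale:      reweight each channel of x by its excitation score
    """
    z = x.mean(axis=(1, 2))                     # squeeze: (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))   # excitation: (C,), values in (0, 1)
    return x * s[:, None, None]                 # channel-wise rescaling

# toy configuration (hypothetical): 8 channels, 4x4 spatial map, reduction ratio 2
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1     # C -> C/r
w2 = rng.standard_normal((C, C // r)) * 0.1     # C/r -> C
y = se_block(x, w1, w2)
```

Because the sigmoid gate lies in (0, 1), each channel is attenuated rather than amplified; in a trained network these gates learn to emphasize informative channels and suppress uninformative ones.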
Citation: Chen, H., Chen, S. and Yao, L. (2025) Automated Key Point Detection for Labor Induction Ultrasound Imaging in Pregnant Women Using SE-SWTNet. Modeling and Simulation, 14(4), 9-18. https://doi.org/10.12677/mos.2025.144260
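The evaluation protocol quoted in the abstract (mean radius error, plus accuracy as the fraction of keypoints within a millimetre threshold) can be sketched as follows. The function names and the toy coordinates are illustrative assumptions, not the paper's data:

```python
import numpy as np

def keypoint_errors(pred, gt, px_to_mm=1.0):
    """Euclidean (radial) error per keypoint, converted to millimetres."""
    return np.linalg.norm(pred - gt, axis=-1) * px_to_mm

def mean_radius_error(pred, gt, px_to_mm=1.0):
    """Mean radial error over all keypoints, in millimetres."""
    return float(keypoint_errors(pred, gt, px_to_mm).mean())

def success_rate(pred, gt, thresh_mm, px_to_mm=1.0):
    """Fraction of keypoints whose radial error is within thresh_mm."""
    err = keypoint_errors(pred, gt, px_to_mm)
    return float((err <= thresh_mm).mean())

# toy example: 4 ground-truth vs predicted keypoints, already in mm
gt = np.array([[10.0, 10.0], [20.0, 5.0], [30.0, 12.0], [15.0, 25.0]])
pred = np.array([[10.5, 10.0], [20.0, 8.0], [33.0, 16.0], [15.0, 25.0]])
# per-point errors: 0.5, 3.0, 5.0, 0.0 mm
```

With these toy points the mean radius error is 2.125 mm, and the success rates at 4 mm and 5 mm are 0.75 and 1.0 respectively, mirroring how the paper's 4/5/6 mm accuracies would be computed.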

[18] Fard, A.P., Ferrantelli, J., Dupuis, A. and Mahoor, M.H. (2022) Sagittal Cervical Spine Landmark Point Detection in X-Ray Using Deep Convolutional Neural Networks. IEEE Access, 10, 59413-59427. [Google Scholar] [CrossRef