
Lung X-Ray Multi-Label Classification Method Based on Densely Connected Neural Networks
DOI: 10.12677/AIRR.2022.112015

Abstract: In recent years, neural networks have developed rapidly in the classification of medical diseases, and for certain diseases their classification ability has reached or even exceeded the level of professional medical practitioners. However, these networks have mainly been applied to binary disease classification, and their performance on multi-label disease classification remains unsatisfactory. Imperfect feature extraction in the network, the complexity of medical images, and the lack of expertise in related fields are the three main factors behind this result. To this end, this paper proposes a confidence-based multi-label classification method built on densely connected neural networks, which effectively fuses the multi-dimensional features of a disease and uses the concept of confidence to ensure the reliability of the classification results. The method is evaluated on the CXR14 dataset. Experiments show that it effectively integrates features of different dimensions to enhance feature extraction, and that, based on the concept of confidence, it can report classification results at a desired level of credibility, rather than only the absolute labels of a hard classifier. This provides richer and more reliable identification information for clinical medical diagnosis.

1. Introduction

1) Improve the network to provide stronger feature-extraction capability;

2) Introduce the concept of confidence so that classification results can be reported at a user-defined level of credibility, instead of having the neural network output absolute label information.

2. Neural Network Fundamentals

$z = Wx + b, \quad o = f\left( z \right)$

$\mathrm{Loss} = J\left( f\left( x_i \right), y_i \right)$
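The two formulas above can be sketched in a few lines of NumPy. This is a minimal illustration, with the activation $f$ assumed to be the sigmoid and the loss $J$ assumed to be binary cross-entropy (a common choice for multi-label classification); the paper does not fix these choices here, and all numbers are toy values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dense_forward(W, x, b):
    z = W @ x + b          # affine transform z = Wx + b
    return sigmoid(z)      # elementwise non-linearity o = f(z)

def bce_loss(o, y, eps=1e-12):
    # Loss = J(f(x_i), y_i) with J = binary cross-entropy over labels
    return -np.mean(y * np.log(o + eps) + (1 - y) * np.log(1 - o + eps))

W = np.array([[0.5, -0.2], [0.1, 0.3]])
x = np.array([1.0, 2.0])
b = np.array([0.0, 0.1])
o = dense_forward(W, x, b)
loss = bce_loss(o, np.array([1.0, 0.0]))
```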

3. Multi-Label Classification of Lung X-Rays Based on Densely Connected Neural Networks

3.1. Densely Connected Neural Networks

Figure 1. Example diagram of DenseNet network structure

Figure 2. Principle of DenseNet network based on shared storage space
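The dense connectivity pattern shown in Figures 1 and 2 can be sketched as follows: each layer receives the concatenation of the outputs of all preceding layers. This is a toy sketch in which 1-D vectors stand in for feature maps and a linear-plus-ReLU function stands in for the BN-ReLU-Conv layers of a real DenseNet; `growth_k` plays the role of the growth rate, and all names and sizes are illustrative assumptions.

```python
import numpy as np

def dense_block(x, layers):
    """Each layer sees the concatenation of all earlier outputs."""
    features = [x]                       # running list of feature maps
    for layer in layers:
        inp = np.concatenate(features)   # layer l gets x_0, ..., x_{l-1}
        features.append(layer(inp))
    return np.concatenate(features)      # block output keeps everything

growth_k = 4
rng = np.random.default_rng(0)

def make_layer(in_dim, k=growth_k):
    W = rng.standard_normal((k, in_dim)) * 0.1
    return lambda v: np.maximum(W @ v, 0.0)   # linear + ReLU stand-in

x0 = rng.standard_normal(8)
layers = [make_layer(8), make_layer(8 + growth_k), make_layer(8 + 2 * growth_k)]
out = dense_block(x0, layers)
# Output width = input width + 3 layers * growth rate = 8 + 12 = 20
```

Note how the input width of each successive layer grows by `growth_k`: this is exactly the feature reuse that motivates the shared-storage implementation sketched in Figure 2.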

3.2. Confidence-Based Multi-Label Classification Strategy

Figure 3. Concept Tree

$P\left( s_q \mid l \right) = H_l^{+}\left[ s_q \right] / {\left\| H_l^{+}\left[ s_q \right] \right\|}_{1}$

$P\left( s_q \mid \neg l \right) = H_l^{-}\left[ s_q \right] / {\left\| H_l^{-}\left[ s_q \right] \right\|}_{1}$

$P\left( l \mid s_q \right) = \dfrac{P\left( s_q \mid l \right) P\left( l \right)}{P\left( s_q \mid l \right) P\left( l \right) + P\left( s_q \mid \neg l \right) P\left( \neg l \right)}$
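The three formulas above compose into a small Bayes update over histogram bins. The sketch below uses toy histogram counts and an assumed prior; `H_pos` and `H_neg` play the roles of $H_l^{+}$ and $H_l^{-}$, and the $\ell_1$ norm reduces to a plain sum because the counts are non-negative.

```python
# Toy score histograms over 3 bins (assumed values, not from the paper).
H_pos = [2.0, 5.0, 13.0]    # H_l^+ : scores observed when label l holds
H_neg = [12.0, 6.0, 2.0]    # H_l^- : scores observed when l does not hold
prior_l = 0.3               # P(l); P(not l) = 1 - P(l)

def posterior(s_q):
    p_sq_l = H_pos[s_q] / sum(H_pos)        # P(s_q | l), l1-normalised
    p_sq_not_l = H_neg[s_q] / sum(H_neg)    # P(s_q | not l)
    num = p_sq_l * prior_l
    return num / (num + p_sq_not_l * (1.0 - prior_l))

c = posterior(2)   # posterior P(l | s_q) for the highest-score bin
```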

① Build the label tree;

② Compute the prior probability distribution of the labels;

③ For the current instance x output by the classifier, perform the following operations;

④ Obtain the label list l of instance x, assign a score to each label in the instance's label list, compress the scores into $s_q$, and further convert them into histogram form $H_l^{+}\left[ s_q \right]$;

⑤ Compute the parent nodes of all labels in label list l and aggregate them into $H_a^{+}\left[ s_q \right]$;

⑥ Compute the scores of all labels that do not belong to the label list l of instance x and aggregate them into $H_l^{-}\left[ s_q \right]$;

⑦ Compute the posterior probability distribution from the previously computed values for label list l.
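The histogram bookkeeping in these steps can be sketched as below. All names are illustrative assumptions: `parent` stands for the label tree, `to_bin` for the score compression into $s_q$, and the label names and scores are toy data, not the CXR14 taxonomy.

```python
n_bins = 4
parent = {"effusion": "thorax", "mass": "thorax"}   # toy label tree

def to_bin(score):
    # compress a classifier score in [0, 1] into a bin index s_q
    return min(int(score * n_bins), n_bins - 1)

H_pos = {}   # H_l^+ : score histogram when l is in the instance's labels
H_neg = {}   # H_l^- : score histogram when l is not

def accumulate(all_labels, instance_labels, scores):
    for l in all_labels:
        s_q = to_bin(scores[l])
        target = H_pos if l in instance_labels else H_neg
        target.setdefault(l, [0] * n_bins)[s_q] += 1
        # step 5: a positive child also counts toward its parent node
        if l in instance_labels and l in parent:
            H_pos.setdefault(parent[l], [0] * n_bins)[s_q] += 1

accumulate(["effusion", "mass"], {"effusion"},
           {"effusion": 0.9, "mass": 0.2})
```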

① Obtain the posterior probability distribution P(l|sq) of the instance, denoted by the symbol C;

② Compare C with the threshold T; if C is smaller than T, perform the following operation;

③ Obtain the parent node of label list l and compute its posterior according to Algorithm 1.
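The confidence fallback in these three steps amounts to walking up the concept tree until the posterior clears the threshold. In this sketch, fixed toy posteriors in `post` stand in for the histogram-based computation of Algorithm 1, and `parent` is an assumed fragment of a concept tree.

```python
parent = {"effusion": "pleural", "pleural": "thorax"}   # toy concept tree
post = {"effusion": 0.4, "pleural": 0.7, "thorax": 0.95}  # stand-in P(l|s_q)

def confident_label(label, T=0.6):
    c = post[label]                 # C = P(l | s_q)
    while c < T and label in parent:
        label = parent[label]       # back off to the parent node
        c = post[label]             # and take its posterior instead
    return label, c

lbl, c = confident_label("effusion")
```

Because "effusion" here has posterior 0.4 < T, the method reports its parent instead: a coarser but more credible label, which is the behaviour the paper argues is more useful clinically than a forced hard decision.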

4. Experimental Results and Analysis

4.1. Experimental Dataset

Figure 4. The proportions of images with multi-labels in each of 14 pathology classes and the labels’ co-occurrence statistics

4.2. Evaluation Metrics

TP (true positive): correct classification; a sample that belongs to the positive class is classified as positive.

TN (true negative): correct classification; a sample that belongs to the negative class is classified as negative.

FP (false positive): incorrect classification; a sample that belongs to the negative class is misclassified as positive.

FN (false negative): incorrect classification; a sample that belongs to the positive class is misclassified as negative.
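For one label, the four counts can be tallied directly from predictions and ground truth; the sketch below uses toy vectors, not the paper's data.

```python
# Toy per-label ground truth and predictions (1 = positive, 0 = negative).
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
```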

The horizontal axis of the ROC curve is the false positive rate (FPR): among all samples that are actually negative, the proportion incorrectly judged to be positive. Its formula is:

$\text{FPR}=\text{FP}/\left(\text{FP}+\text{TN}\right)$

The vertical axis of the ROC curve is the true positive rate (TPR): among all samples that are actually positive, the proportion correctly judged to be positive. Its formula is:

$\text{TPR}=\text{TP}/\left(\text{TP}+\text{FN}\right)$
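The two rates follow directly from the confusion counts; a minimal worked example with assumed toy counts:

```python
# Toy confusion counts (illustrative, not from the experiments).
tp, tn, fp, fn = 2, 2, 1, 1

fpr = fp / (fp + tn)   # FPR = FP / (FP + TN), x-axis of the ROC curve
tpr = tp / (tp + fn)   # TPR = TP / (TP + FN), y-axis of the ROC curve
```

Sweeping the classification threshold traces out (FPR, TPR) pairs, and the area under that curve is the AUC score compared in Figure 5.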

4.3. Analysis of Experimental Results

Figure 5. Comparison of AUC scores for all disease patterns with an epoch of 10

5. Conclusion and Outlook

NOTES

*First author.
