
Optimized Point Clouds Classification and Objects Extraction Using S-T Graph Cut
DOI: 10.12677/GST.2020.84017

Abstract: With the development and wide application of LiDAR technology, the classification of ground objects and the scene understanding of point cloud data have become research hotspots. Because over-segmentation or under-segmentation is inevitable when machine-learning methods extract local features, the classification results contain local errors. To address this, this study introduces the foreground-background graph cut method from image segmentation. Classification experiments on real laser-scanning point cloud data yield finely optimized classification results, improve the original classification accuracy, and verify the effectiveness of the method.

1. Introduction

2. Foreground-Background Graph Cuts and Their Application to Point Cloud Classification

2.1. Graph Cut Principle

$E(f) = E_{smooth}(f) + E_{data}(f)$ (1)

$E_{data}(f)$ can usually be written as:

$E_{data}(f) = \sum_{p \in P} D_p(f_p)$ (2)

The choice of $E_{smooth}(f)$ plays an important role in the effectiveness of the energy optimization. $E_{smooth}(f)$ is generally expressed as:

$E_{smooth}(f) = \sum_{\{p,q\} \in N} V_{\{p,q\}}(f_p, f_q)$ (3)
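The energy in Equations (1)-(3) can be sketched in code. The data costs $D_p$ and the pairwise costs $V_{\{p,q\}}$ below are illustrative placeholders, not the paper's actual cost functions:

```python
# Sketch of the energy E(f) = E_data(f) + E_smooth(f) from Eqs. (1)-(3).
# data_cost and pairwise_cost are illustrative stand-ins for D_p and V_{p,q}.

def energy(labels, data_cost, pairwise_cost, neighbors):
    """labels: point -> label; data_cost(p, f_p) -> float;
    pairwise_cost(f_p, f_q) -> float; neighbors: set of frozenset({p, q})."""
    e_data = sum(data_cost(p, f) for p, f in labels.items())
    e_smooth = sum(pairwise_cost(labels[p], labels[q])
                   for p, q in (tuple(n) for n in neighbors))
    return e_data + e_smooth

# Toy example: two points with a Potts-style smoothness penalty.
labels = {0: "fg", 1: "bg"}
d = lambda p, f: 0.2 if f == "fg" else 0.8    # illustrative unary costs
v = lambda a, b: 0.0 if a == b else 1.0       # penalize differing labels
print(energy(labels, d, v, {frozenset({0, 1})}))  # 0.2 + 0.8 + 1.0 = 2.0
```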

2.1.1. Graph Cut Definition

A graph $G = \langle V, E \rangle$ consists of a set of nodes V and a set of edges E connecting them, and may be either directed or undirected. For convenience, this paper uses an undirected graph. In this graph, a pair of adjacent nodes is denoted $e = \{p, q\} \in E$. In point cloud segmentation, the ordinary nodes of the graph represent 3D points. The graph also contains two special terminal nodes, S and T, which represent the foreground object node and the background node, respectively. Edges between adjacent ordinary nodes are n-links, and edges between ordinary nodes and terminal nodes are t-links. Every edge carries a weight $w_e$.

Figure 1. S-T graph: S represents the foreground object node and T the background node. The thickness of an edge reflects its weight, and the dotted line separates the foreground and background points

$|C|$ is defined as the cost of the graph cut, given by the sum of the weights of all edges in the cut. The minimum cut problem seeks the cut with the smallest cost. This problem can generally be converted into a maximum-flow problem between the terminal nodes, which can be solved efficiently [11].
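The min-cut/max-flow equivalence can be illustrated on a toy S-T graph. The networkx library is used here as a stand-in solver (an assumption of this sketch; the paper does not name an implementation), and the capacities are illustrative:

```python
# Toy S-T graph: two ordinary nodes p and q with t-links to S and T and
# one n-link between them. Capacities are illustrative.
import networkx as nx

G = nx.DiGraph()
G.add_edge("S", "p", capacity=3.0)   # t-links from the source S
G.add_edge("S", "q", capacity=1.0)
G.add_edge("p", "q", capacity=1.0)   # n-link, both directions
G.add_edge("q", "p", capacity=1.0)
G.add_edge("p", "T", capacity=1.0)   # t-links to the sink T
G.add_edge("q", "T", capacity=3.0)

# Minimum cut via maximum flow between the terminals.
cut_value, (fg, bg) = nx.minimum_cut(G, "S", "T")
print(cut_value)  # cost |C| of the minimum cut: 3.0
print(fg, bg)     # foreground side (contains S) and background side (contains T)
```

Here the cheapest way to separate S from T cuts the edges S-q, p-q, and p-T, so p lands on the foreground side and q on the background side.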

Table 1. Weights of the different edges in Figure 1

$K = 1 + \max_{p \in P} \sum_{q : \{p,q\} \in N} B_{p,q}$ (4)
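Equation (4) sets K to one plus the largest neighborhood weight sum, which guarantees that a t-link with weight K is never cut in preference to all of a point's n-links. A minimal sketch, with illustrative weights:

```python
# Eq. (4): K = 1 + max over points p of the sum of B_{p,q} over p's neighbors.
# neighbor_weights maps each point to {neighbor: B_pq}; values are illustrative.

def k_constant(neighbor_weights):
    return 1 + max(sum(w.values()) for w in neighbor_weights.values())

nw = {0: {1: 0.5, 2: 0.3}, 1: {0: 0.5}, 2: {0: 0.3}}
print(k_constant(nw))  # 1 + max(0.8, 0.5, 0.3) = 1.8
```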

2.1.2. Graph Definition in Point Cloud Segmentation

$B_{p,q} = \mathrm{e}^{-(d_i/\delta)^2}$ (5)
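Equation (5) maps the distance between two points to an n-link weight that decays with separation. A minimal sketch, where the distance is taken between the two endpoint points and the scale parameter $\delta$ is assumed for illustration:

```python
import math

# Eq. (5): B_pq = exp(-(d / delta)^2), with d the distance between the two
# points and delta a scale parameter (value here is an illustrative assumption).

def edge_weight(p, q, delta=1.0):
    d = math.dist(p, q)  # Euclidean distance between 3D points
    return math.exp(-(d / delta) ** 2)

print(edge_weight((0, 0, 0), (1, 0, 0)))            # e^-1, approx. 0.3679
print(edge_weight((0, 0, 0), (1, 0, 0), delta=2))   # e^-0.25, approx. 0.7788
```

Larger $\delta$ flattens the decay, so more distant neighbors still receive a non-negligible weight.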

(6)

2.1.3. Selection of Foreground and Background Seed Points

1) Category: during clustering, only points of the same category as the initial seed point are clustered;

2) Clustering distance: point spacing differs between categories within the same dataset; for example, the spacing of ground and building points is smaller than that of vegetation points. This parameter therefore needs to be learned from training data;

3) K-nearest-neighbor search: an R-tree index [13] is first built on the dataset, and the K nearest neighbors of the selected seed are then searched on this index.
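The constrained seed search above can be sketched as follows. SciPy's KD-tree substitutes here for the R-tree index [13] (both answer the same K-nearest-neighbor query), and the point cloud, labels, and distance threshold are all illustrative:

```python
# KNN-based seed growing: keep only neighbors that share the seed's category
# (step 1) and lie within the trained clustering distance (step 2).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((100, 3))                 # illustrative point cloud
labels = rng.integers(0, 3, size=100)         # illustrative class labels

tree = cKDTree(points)                        # spatial index, built once
dists, idx = tree.query(points[0], k=5)       # 5 nearest neighbors of a seed

seed_label, max_gap = labels[0], 0.1          # threshold assumed for illustration
keep = idx[(labels[idx] == seed_label) & (dists <= max_gap)]
print(keep)                                   # neighbors accepted into the cluster
```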

2.2. Fine Point Cloud Classification Workflow Based on Foreground-Background Graph Cuts

1) Neighborhood system: first compute the minimum bounding cube of the segmented region of the object to be extracted, then expand this cube so that it contains the surrounding points;

2) Seed point selection: points on the reliable object to be extracted are set as foreground seed points, while points on other reliable objects within the neighborhood are set as background seed points;

3) Foreground-background graph cut: for every point in the neighborhood system, search its K nearest neighbors to build the n-links of the graph, and search its nearest foreground and background seed points to build the t-links. The edge weights of the graph are computed according to Equations (5) and (6). Once the graph is built, the foreground object can be extracted with the foreground-background graph cut.
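The three steps above can be sketched end to end. This is a minimal illustration under several assumptions: networkx's minimum cut stands in for a dedicated max-flow solver, a brute-force neighbor search replaces the spatial index, n-link weights follow Equation (5), t-links use the K constant of Equation (4), and all data are made up:

```python
# End-to-end sketch of the foreground-background graph cut workflow.
import math
import networkx as nx

def extract_foreground(points, fg_seeds, bg_seeds, k=3, delta=1.0):
    n = len(points)
    G = nx.DiGraph()
    for p in range(n):
        # n-links: connect each point to its k nearest neighbors (brute force).
        order = sorted(range(n), key=lambda q: math.dist(points[p], points[q]))
        for q in order[1:k + 1]:
            w = math.exp(-(math.dist(points[p], points[q]) / delta) ** 2)  # Eq. (5)
            G.add_edge(p, q, capacity=w)
            G.add_edge(q, p, capacity=w)
    # Eq. (4): t-link weight large enough that seed labels are never overridden.
    K = 1 + max(sum(G[p][q]["capacity"] for q in G[p]) for p in G)
    for p in fg_seeds:
        G.add_edge("S", p, capacity=K)   # hard t-link to the source
    for p in bg_seeds:
        G.add_edge(p, "T", capacity=K)   # hard t-link to the sink
    _, (fg, _) = nx.minimum_cut(G, "S", "T")
    return fg - {"S"}                    # indices of foreground points

# Two well-separated pairs of points; seed one pair as foreground, one as background.
pts = [(0, 0, 0), (0.1, 0, 0), (5, 0, 0), (5.1, 0, 0)]
print(extract_foreground(pts, fg_seeds=[0], bg_seeds=[3], k=1))  # {0, 1}
```

The cut separates the two clusters along their weakest n-links, so the unseeded point near the foreground seed is pulled into the foreground.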

Figure 2. Graph cut applied to the precise classification of point clouds

3. Experiments and Analysis

3.1. Precise Extraction of Ground Objects

Figure 3. Foreground-background cut of the power tower and its surrounding points: (a) initial point cloud classification results obtained using JointBoost and local features; (b) foreground seed points in orange, background seed points in green, and black points on unreliable objects; (c) segmentation results

3.2. Optimization of Point Cloud Classification Results

Figure 4. Segmentation results for objects of all categories other than ground points in the first region. Different colors represent the foreground object points after segmentation, while black represents the background points

Figure 5. Extraction results of foreground object

Figure 6. Optimization results of point cloud classification

4. Conclusion

[1] Guo, B., Huang, X.F., Zhang, F., et al. (2013) JointBoost Point Cloud Classification and Feature Dimensionality Reduction Considering Spatial Context [in Chinese]. Acta Geodaetica et Cartographica Sinica, 42(5), 715-821.
[2] Yang, B.S., Wei, Z., Li, Q.Q., et al. (2010) A Point Cloud Feature Image Generation Method for Rapid Classification of Vehicle-Borne Laser Scanning Point Clouds [in Chinese]. Acta Geodaetica et Cartographica Sinica, 39(5), 540-545.
[3] Bishop, C. (2006) Pattern Recognition and Machine Learning. Springer, New York, 657-663.
[4] Munoz, D., Bagnell, J.A., et al. (2009) Contextual Classification with Functional Max-Margin Markov Networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, 20-25 June 2009, 975-982. https://doi.org/10.1109/CVPR.2009.5206590
[5] Mallet, C., Bretar, F., et al. (2011) Relevance Assessment of Full-Waveform Lidar Data for Urban Area Classification. ISPRS Journal of Photogrammetry and Remote Sensing, 66, 71-84. https://doi.org/10.1016/j.isprsjprs.2011.09.008
[6] Kim, H.B. and Sohn, G. (2011) Random Forests Based Multiple Classifier System for Power-Line Scene Classification. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Calgary, 29-31 August 2011, 253-258.
[7] Kim, H.B. and Sohn, G. (2010) 3D Classification of Power-Line Scene from Airborne Laser Scanning Data Using Random Forests. Proceedings of IAPRS, Saint-Mande, 1-3 September 2010, 207-212.
[8] Niemeyer, J., Wegner, J.D., Mallet, C., et al. (2011) Conditional Random Fields for Urban Scene Classification with Full Waveform LiDAR Data. In: Stilla, U., Rottensteiner, F., Mayer, H., Jutzi, B. and Butenuth, M., Eds., Photogrammetric Image Analysis, PIA 2011, Lecture Notes in Computer Science, Springer, Berlin, 233-244. https://doi.org/10.1007/978-3-642-24393-6_20
[9] Guo, B., Huang, X.F., Zhang, F., et al. (2015) Classification of Airborne Laser Scanning Data Using JointBoost. ISPRS Journal of Photogrammetry and Remote Sensing, 100, 71-83. https://doi.org/10.1016/j.isprsjprs.2014.04.015
[10] Veksler, O. (1999) Efficient Graph-Based Energy Minimization Methods in Computer Vision. Ph.D. Thesis, Cornell University, Ithaca.
[11] Ford, D. and Fulkerson, D.R. (2010) Flows in Networks. Princeton University Press, Princeton.
[12] Boykov, Y. and Funka-Lea, G. (2006) Graph Cuts and Efficient N-D Image Segmentation. International Journal of Computer Vision, 70, 109-131. https://doi.org/10.1007/s11263-006-7934-5
[13] Papadopoulos, A. and Theodoridis, Y. (2005) R-Trees: Theory and Applications. Springer, Berlin.
[14] Torralba, A., Murphy, K.P., et al. (2007) Sharing Visual Features for Multiclass and Multi-View Object Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19, 854-869. https://doi.org/10.1109/TPAMI.2007.1055
[15] Kalogerakis, E., Hertzmann, A. and Singh, K. (2010) Learning 3D Mesh Segmentation and Labeling. ACM Transactions on Graphics, 29, 102-114. https://doi.org/10.1145/1778765.1778839