
Line Segment Extraction from Large-Scale Point Clouds
DOI: 10.12677/GST.2022.102008 (supported by the National Natural Science Foundation of China)

Abstract: As the most common primitives, line segments play an essential role in the vectorized reconstruction of man-made scenes. In this paper, a method is proposed to extract line segments from large-scale point clouds. The method first extracts 3D projection planes through region growing and region merging, then projects each plane to 2D to form an image, and finally back-projects the 2D line segments extracted from the projected image into 3D to obtain 3D line segments. Experiments on point cloud datasets of large-scale outdoor scenes show that the proposed method accurately extracts the linear feature segments representing the scene, filters out major noise, and extracts more complete line segments. The Completeness and Correctness of the line segment extraction results reach 83% on average compared with manually labeled ground truth (thresholds ${d}_{l}$: 0.5, ${d}_{s}$: 0.5), at an average processing speed of 27,000 points per second.

1. Introduction

1) The proposed method fuses the edge maps output by a multi-scale convolutional neural network, fully exploiting the edge features of the image and reducing the edge loss of the point cloud in the 3D-to-2D conversion, thereby achieving efficient line segment extraction.

2) A quantitative evaluation criterion for 3D line segments is proposed. In addition, ground-truth line segments of the Semantic-3D public dataset are annotated and released for further public use, enriching the Semantic-3D dataset.

2. Method

Figure 1. Pipeline of the proposed method

2.1. Extraction of 3D Projection Plane Regions

1) Sort the points of the point cloud ${P}_{L}$ in ascending order of ${\lambda }_{0}$.

2) Starting from the first unprocessed point ${p}_{s}$ in the sorted ${P}_{L}$, let ${R}_{i}$ store all points coplanar with ${p}_{s}$, and let ${S}_{L}$ be the set of seed points for growing from ${p}_{s}$. Add ${p}_{s}$ to both ${R}_{i}$ and ${S}_{L}$ as their first point.

3) Traverse ${S}_{L}$. For an unprocessed point ${p}_{i}^{s}$ in ${S}_{L}$, traverse all neighboring points ${p}_{j}$ of ${p}_{i}^{s}$. If ${p}_{j}$ satisfies the two conditions listed in Equation (1), add ${p}_{j}$ to ${R}_{i}$ and remove ${p}_{j}$ from ${P}_{L}$; the two conditions of Equation (1) ensure that ${p}_{i}^{s}$ and ${p}_{j}$ are coplanar. If ${p}_{j}$ also satisfies the two conditions listed in Equation (2), add ${p}_{j}$ to the seed point set ${S}_{L}$. The first condition of Equation (2) ensures that the curvature of the seed point is below a threshold, and the second ensures that the distance between ${p}_{s}$ and the seed point is below a threshold.

$\begin{array}{l}{\left[{\left({p}_{i}^{s}-{p}_{j}\right)}^{\text{T}}{n}_{{p}_{i}^{s}}\right]}^{2}+{\left[{\left({p}_{i}^{s}-{p}_{j}\right)}^{\text{T}}{n}_{{p}_{j}}\right]}^{2}<{d}_{th}^{2}\\ \mathrm{arccos}|{n}_{{p}_{i}^{s}}^{\text{T}}{n}_{{p}_{j}}|<{\theta }_{th}\end{array}$ (1)

$\begin{array}{l}{p}_{j}.c<{c}_{th}\\ ‖{p}_{s}-{p}_{j}‖<{r}_{th}\end{array}$ (2)

4) Repeat steps 2) and 3) until ${P}_{L}$ is empty.

5) For each region ${R}_{i}$, if the number of points in the region is fewer than 20, discard the region. Each of its points is then assigned to another region ${R}_{j}$, where ${R}_{j}$ is the region containing the most points in that point's neighborhood.
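The region-growing procedure above can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: the thresholds `d_th`, `c_th`, `r_th`, the neighborhood size `k`, and the 20-point minimum are placeholders, and the coplanarity test keeps only the squared point-to-plane terms of Equation (1).

```python
import numpy as np
from scipy.spatial import cKDTree

def region_growing(points, normals, curvatures,
                   d_th=0.05, c_th=0.02, r_th=2.0, k=15, min_pts=20):
    """Plane region growing seeded from low-curvature points (a sketch;
    thresholds are illustrative, not the paper's values)."""
    tree = cKDTree(points)
    order = np.argsort(curvatures)            # ascending curvature (lambda_0)
    unprocessed = np.ones(len(points), dtype=bool)
    regions = []
    for s in order:
        if not unprocessed[s]:
            continue
        region, seeds = [s], [s]              # R_i and S_L, started from p_s
        unprocessed[s] = False
        while seeds:
            i = seeds.pop()
            for j in tree.query(points[i], k=k)[1]:
                if not unprocessed[j]:
                    continue
                d = points[i] - points[j]
                # coplanarity: squared point-to-plane distances (cf. Eq. (1))
                if (d @ normals[i]) ** 2 + (d @ normals[j]) ** 2 < d_th ** 2:
                    region.append(j)
                    unprocessed[j] = False    # remove p_j from P_L
                    # seed conditions of Eq. (2): low curvature, close to p_s
                    if (curvatures[j] < c_th and
                            np.linalg.norm(points[s] - points[j]) < r_th):
                        seeds.append(j)
        if len(region) >= min_pts:
            regions.append(region)
    return regions
```

Sorting by curvature makes the flattest points grow first, so each plane is seeded from its most reliable interior point.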

1) First, fit each region in the region set ${R}_{L}$ obtained by region growing with PCA, yielding each region's normal vector, curvature ( ${\lambda }_{0}$ ), and scale.

2) Number each region, then find each region's adjacent regions according to the following rule, and denote this set of adjacent regions by ${\Pi }_{i}$. Rule: for each point ${p}_{j}$ in region ${R}_{i}$, traverse all points p in the neighborhood ${I}_{{p}_{j}}$ of ${p}_{j}$; if the region containing p is not ${R}_{i}$, then that region ${R}_{k}$ is an adjacent region of ${R}_{i}$, and ${R}_{k}$ is added to ${\Pi }_{i}$.

3) For the first unprocessed region ${R}_{i}$ in ${R}_{L}$, mark ${R}_{i}$ as processed. Let ${R}_{temp}$ be the set of regions to be merged with ${R}_{i}$, and add ${R}_{i}$ to ${R}_{temp}$. Traverse all regions ${R}_{k}$ in ${R}_{temp}$; for each adjacent region ${R}_{j}$ of ${R}_{k}$, if the two conditions in Equation (3) are satisfied, add ${R}_{j}$ to ${R}_{temp}$. After all regions in ${R}_{temp}$ have been traversed, if the number of points in ${R}_{temp}$ is fewer than 100, discard this merge; otherwise keep it and remove all regions of ${R}_{temp}$ from ${R}_{L}$.

4) Repeat step 3) until all regions in ${R}_{L}$ have been marked as processed.

$\begin{array}{l}{\left[{\left({C}_{i}-{C}_{j}\right)}^{\text{T}}{n}_{{R}_{i}}\right]}^{2}+{\left[{\left({C}_{i}-{C}_{j}\right)}^{\text{T}}{n}_{{R}_{j}}\right]}^{2}<{d}_{th}^{2}\\ \mathrm{arccos}|{n}_{{R}_{i}}^{\text{T}}{n}_{{R}_{j}}|<{\theta }_{th}\end{array}$ (3)
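Steps 1)-4) of the region merging can be sketched similarly. Again an illustrative sketch rather than the authors' code: `adjacency[i]` plays the role of ${\Pi }_{i}$, the coplanarity test keeps only the squared centroid-to-plane terms of Equation (3), and the thresholds are hypothetical.

```python
import numpy as np

def merge_regions(centroids, region_normals, adjacency, sizes,
                  d_th=0.1, min_pts=100):
    """Merge adjacent coplanar regions (illustrative sketch of step 3);
    sizes[i] is the point count of region i."""
    processed = [False] * len(centroids)
    merged = []
    for i in range(len(centroids)):
        if processed[i]:
            continue
        group, queue = [i], [i]          # R_temp, seeded with R_i
        processed[i] = True
        while queue:
            k = queue.pop()
            for j in adjacency[k]:       # neighbors Pi_k of R_k
                if processed[j]:
                    continue
                d = centroids[k] - centroids[j]
                # coplanarity of centroids w.r.t. both planes (cf. Eq. (3))
                if (d @ region_normals[k]) ** 2 + (d @ region_normals[j]) ** 2 < d_th ** 2:
                    processed[j] = True
                    group.append(j)
                    queue.append(j)
        # keep the merge only if it covers enough points
        if sum(sizes[g] for g in group) >= min_pts:
            merged.append(group)
    return merged
```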


Figure 2. Extraction of 3D projection plane (a) Original point clouds; (b) Results of region growth; (c) Results of region merging

2.2. Projection-Based Extraction of 3D Line Segments

Figure 3. Schematic diagram of 3D-2D projection

$\left[\begin{array}{l}{x}_{i}\\ {y}_{i}\\ {z}_{i}\end{array}\right]=\left[\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right]\left[\begin{array}{c}{X}_{{p}_{i}}-{X}_{{p}_{o}}\\ {Y}_{{p}_{i}}-{Y}_{{p}_{o}}\\ {Z}_{{p}_{i}}-{Z}_{{p}_{o}}\end{array}\right]$ (4)

$\left[\begin{array}{l}{u}_{i}\\ {v}_{i}\end{array}\right]=\left[\begin{array}{cc}\frac{1}{{\eta }_{\alpha }}& 0\\ 0& \frac{1}{{\eta }_{\alpha }}\end{array}\right]\left[\begin{array}{l}{x}_{\mathrm{max}}-{x}_{i}\\ {y}_{\mathrm{max}}-{y}_{i}\end{array}\right]$ (5)
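Equations (4) and (5) together map a 3D point on an extracted plane to a pixel coordinate. A minimal sketch, assuming the rotation `R` into the plane's local frame has already been built from the plane's PCA axes and is supplied by the caller, along with the pixel size `eta` (playing the role of ${\eta }_{\alpha }$):

```python
import numpy as np

def project_to_image(points, origin, R, eta):
    """Map 3D points on a plane to integer pixel coordinates (u, v).
    R: 3x3 rotation into the plane's local frame; origin: the plane
    origin p_o of Eq. (4); eta: pixel size. Illustrative sketch."""
    # Eq. (4): translate to the plane origin, rotate into the local frame
    local = (R @ (np.asarray(points, float) - origin).T).T
    x, y = local[:, 0], local[:, 1]
    # Eq. (5): flip so (x_max, y_max) maps to pixel (0, 0), scale by eta
    u = np.floor((x.max() - x) / eta).astype(int)
    v = np.floor((y.max() - y) / eta).astype(int)
    return u, v
```

Keeping `origin`, `R`, and `eta` for each plane is enough to invert the mapping, i.e. to back-project the 2D line-segment endpoints detected in the image to 3D.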

Figure 4. Pipeline and results of 3D line segments extraction based on the projection image (a) Projection image; (b) Fused edge-map; (c) 2D line segments; (d) 3D line segments

Figure 5. Edge-maps of the projection image

3. Experimental Results and Analysis

3.1. Experimental Datasets and Evaluation Metrics

Table 1. Experiment data and results description

Figure 6. Experimental data: (a) StSulpice; (b) Bildstein1; (c) Bildstein3

Figure 7. The schematic diagram of calculating two similarity criteria

1) ${d}_{l}$ is defined as the effective length ratio of the extracted line segment on its corresponding ground-truth segment, i.e., the length of the intersection between the projection of the extracted segment onto the ground-truth segment and the ground-truth segment itself, divided by the length of their union.

${d}_{l}=\frac{{l}_{intersection}}{{l}_{union}}$ (6)

2) ${d}_{s}$ is defined as the average shortest distance: when the extracted segment is projected onto its corresponding ground-truth segment, it is the mean of the shortest distances from the sample points of the extracted segment within the overlapping projection region to the ground-truth segment.

${d}_{s}=\frac{{\sum }_{i=1}^{N}{d}_{i}}{N}$ (7)
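The two criteria of Equations (6) and (7) can be computed for a pair of segments as follows. This is a sketch under the stated definitions, not the authors' evaluation code; the sampling density `n_samples` is an illustrative choice.

```python
import numpy as np

def segment_similarity(ext, gt, n_samples=50):
    """d_l and d_s of Eqs. (6)-(7) for one extracted segment `ext` and
    one ground-truth segment `gt`, each a pair of 3D endpoints."""
    a, b = (np.asarray(p, float) for p in gt)
    L = np.linalg.norm(b - a)
    u = (b - a) / L                              # unit direction of gt
    # Eq. (6): project the extracted endpoints onto the gt line,
    # then take intersection over union of the two 1D intervals
    t = sorted(float((np.asarray(p, float) - a) @ u) for p in ext)
    inter = max(0.0, min(t[1], L) - max(t[0], 0.0))
    union = max(t[1], L) - min(t[0], 0.0)
    d_l = inter / union if union > 0 else 0.0
    # Eq. (7): sample the extracted segment, keep samples whose
    # projection falls inside gt, average their distance to the gt line
    samples = np.linspace(np.asarray(ext[0], float),
                          np.asarray(ext[1], float), n_samples)
    tt = (samples - a) @ u
    inside = samples[(tt >= 0) & (tt <= L)]
    if len(inside) == 0:
        return d_l, np.inf
    foot = a + np.outer((inside - a) @ u, u)     # feet on the gt line
    d_s = float(np.linalg.norm(inside - foot, axis=1).mean())
    return d_l, d_s
```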

$\begin{array}{l}Comp=\frac{|TP|}{|N|}\\ Corr=\frac{|TP|}{|M|}\end{array}$ (8)
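Given per-segment match flags obtained by thresholding ${d}_{l}$ and ${d}_{s}$, Equation (8) reduces to simple ratios over the M extracted and N ground-truth segments (an illustrative helper, not the authors' code):

```python
def completeness_correctness(extracted_matched, gt_matched):
    """Eq. (8): Comp = matched ground-truth / all ground-truth (|TP|/|N|);
    Corr = matched extracted / all extracted (|TP|/|M|).
    Inputs are booleans, one flag per segment."""
    comp = sum(gt_matched) / len(gt_matched)
    corr = sum(extracted_matched) / len(extracted_matched)
    return comp, corr
```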

3.2. Experimental Results and Accuracy Evaluation


Figure 8. Point clouds and extracted line segments: (a)(b) Bildstein3; (c)(d) StSulpice


Figure 9. (a) Bildstein1; (b) Line segments extracted by the method of Lu et al. [20]; (c) Line segments extracted by the proposed method; (d) Line segments manually extracted

Table 2. Bildstein1

Figure 10. ${d}_{l}/{d}_{s}-Comp/Corr$ curves

4. Conclusion

1) Supported by the General Program of the National Natural Science Foundation of China (Grant No. 42071451); 2) Supported by the Zhizhuo Spatio-temporal Intelligence Research Fund of Wuhan University.

NOTES

*Corresponding author.

1 http://www.semantic3d.net.

[1] Yang, B. and Dong, Z. (2019) Progress and Trends of Point Cloud Intelligence Research. Acta Geodaetica et Cartographica Sinica, 48(12), 1575-1585. (In Chinese)
[2] Yang, B. and Dong, Z. (2020) Intelligent Point Cloud Processing. Science Press, Beijing, 31-36. (In Chinese)
[3] Yang, B. and Chen, C. (2015) Automatic Registration of UAV-Borne Sequent Images and LiDAR Data. ISPRS Journal of Photogrammetry and Remote Sensing, 101, 262-274. https://doi.org/10.1016/j.isprsjprs.2014.12.025
[4] Dong, Z., Liang, F., Yang, B., Xu, Y., Zang, Y., Li, J., Wang, Y., Dai, W., Fan, H., Hyyppa, J. and Stilla, U. (2020) Registration of Large-Scale Terrestrial Laser Scanner Point Clouds: A Review and Benchmark. ISPRS Journal of Photogrammetry and Remote Sensing, 163, 327-342. https://doi.org/10.1016/j.isprsjprs.2020.03.013
[5] Chen, C. and Yang, B. (2016) Dynamic Occlusion Detection and Inpainting of in Situ Captured Terrestrial Laser Scanning Point Clouds Sequence. ISPRS Journal of Photogrammetry and Remote Sensing, 119, 90-107. https://doi.org/10.1016/j.isprsjprs.2016.05.007
[6] Han, X., Dong, Z. and Yang, B. (2021) A Point-Based Deep Learning Network for Semantic Segmentation of MLS Point Clouds. ISPRS Journal of Photogrammetry and Remote Sensing, 175, 199-214. https://doi.org/10.1016/j.isprsjprs.2021.03.001
[7] Yang, B., Dong, Z., Zhao, G. and Dai, W. (2015) Hierarchical Extraction of Urban Objects from Mobile Laser Scanning Data. ISPRS Journal of Photogrammetry and Remote Sensing, 99, 45-57. https://doi.org/10.1016/j.isprsjprs.2014.10.005
[8] Chen, C., Yang, B., Song, S., Peng, X. and Huang, R. (2018) Automatic Clearance Anomaly Detection for Transmission Line Corridors Utilizing UAV-Borne LIDAR Data. Remote Sensing, 10, Article No. 613. https://doi.org/10.3390/rs10040613
[9] Xu, Y. and Stilla, U. (2021) Towards Building and Civil Infrastructure Reconstruction from Point Clouds: A Review on Data and Key Techniques. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 14, 2857-2885. https://doi.org/10.1109/JSTARS.2021.3060568
[10] Xia, S., Chen, D., Wang, R., Li, J. and Zhang, X. (2020) Geometric Primitives in LiDAR Point Clouds: A Review. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 13, 685-707. https://doi.org/10.1109/JSTARS.2020.2969119
[11] Yuan, C., Liu, X., Hong, X. and Zhang, F. (2021) Pixel-Level Extrinsic Self Calibration of High Resolution Lidar and Camera in Targetless Environments. IEEE Robotics and Automation Letters, 6, 7517-7524. https://doi.org/10.1109/LRA.2021.3098923
[12] Yu, H., Zhen, W., Yang, W., Zhang, J. and Scherer, S. (2020) Monocular Camera Localization in Prior Lidar Maps with 2D-3D Line Correspondences. 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, 24 October 2020-24 January 2021, 4588-4594. https://doi.org/10.1109/IROS45743.2020.9341690
[13] Yu, H., Zhen, W., Yang, W. and Scherer, S. (2020) Line-Based 2-D-3-D Registration and Camera Localization in Structured Environments. IEEE Transactions on Instrumentation and Measurement, 69, 8962-8972. https://doi.org/10.1109/TIM.2020.2999137
[14] Li, S., Ge, X., Li, S., Xu, B. and Wang, Z. (2021) Linear-Based Incremental Co-Registration of MLS and Photogrammetric Point Clouds. Remote Sensing, 13, Article No. 2195. https://doi.org/10.3390/rs13112195
[15] Amblard, V., Osedach, T.P., Croux, A., Speck, A. and Leonard, J.J. (2021) Lidar-Monocular Surface Reconstruction Using Line Segments. 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, 30 May-5 June 2021, 5631-5637. https://doi.org/10.1109/ICRA48506.2021.9561437
[16] Cui, Y., Li, Q. and Dong, Z. (2019) Structural 3D Reconstruction of Indoor Space for 5G Signal Simulation with Mobile Laser Scanning Point Clouds. Remote Sensing, 11, Article No. 2262. https://doi.org/10.3390/rs11192262
[17] Brown, M., Windridge, D. and Guillemaut, J.Y. (2015) A Generalisable Framework for Saliency-Based Line Segment Detection. Pattern Recognition, 48, 3993-4011. https://doi.org/10.1016/j.patcog.2015.06.015
[18] Meng, Q., Zhang, J., Hu, Q., He, X. and Yu, J. (2020) LGNN: A Context-Aware Line Segment Detector. Proceedings of the 28th ACM International Conference on Multimedia, Seattle, 12-16 October 2020, 4364-4372. https://doi.org/10.1145/3394171.3413784
[19] Lin, Y., Wang, C., Cheng, J., Chen, B., Jia, F., Chen, Z. and Li, J. (2015) Line Segment Extraction for Large Scale Unorganized Point Clouds. ISPRS Journal of Photogrammetry and Remote Sensing, 102, 172-183. https://doi.org/10.1016/j.isprsjprs.2014.12.027
[20] Lu, X., Liu, Y. and Li, K. (2019) Fast 3D Line Segment Detection from Unorganized Point Cloud. arXiv preprint arXiv:1901.02532.
[21] Ni, H., Lin, X., Ning, X. and Zhang, J. (2016) Edge Detection and Feature Line Tracing in 3D-Point Clouds by Analyzing Geometric Properties of Neighborhoods. Remote Sensing, 8, Article No. 710. https://doi.org/10.3390/rs8090710
[22] Xia, S. and Wang, R. (2017) A Fast Edge Extraction Method for Mobile LiDAR Point Clouds. IEEE Geoscience and Remote Sensing Letters, 14, 1288-1292. https://doi.org/10.1109/LGRS.2017.2707467
[23] Hackel, T., Wegner, J.D. and Schindler, K. (2016) Contour Detection in Unstructured 3D Point Clouds. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, 27-30 June 2016, 1610-1618. https://doi.org/10.1109/CVPR.2016.178
[24] Yu, L., Li, X., Fu, C.W., Cohen-Or, D. and Heng, P.-A. (2018) EC-Net: An Edge-Aware Point Set Consolidation Network. Proceedings of the European Conference on Computer Vision (ECCV), Munich, 8-14 September 2018, 386-402. https://doi.org/10.1007/978-3-030-01234-2_24
[25] Zhang, W., Chen, L., Xiong, Z., Zang, Y., Li, J. and Zhao, L. (2020) Large-Scale Point Cloud Contour Extraction via 3D Guided Multi-Conditional Generative Adversarial Network. ISPRS Journal of Photogrammetry and Remote Sensing, 164, 97-105. https://doi.org/10.1016/j.isprsjprs.2020.04.003
[26] Wang, X., Xu, Y., Xu, K., Tagliasacchi, A., Zhou, B., Mahdavi-Amiri, A. and Zhang, H. (2020) Pie-Net: Parametric Inference of Point Cloud Edges. Advances in Neural Information Processing Systems, 33, 20167-20178.
[27] Poma, X.S., Riba, E. and Sappa, A. (2020) Dense Extreme Inception Network: Towards a Robust CNN Model for Edge Detection. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Snowmass, 1-5 March 2020, 1923-1932. https://doi.org/10.1109/WACV45572.2020.9093290
[28] Elder, J.H., Almazàn, E.J., Qian, Y. and Tal, R. (2020) MCMLSD: A Probabilistic Algorithm and Evaluation Framework for Line Segment Detection. arXiv preprint arXiv:2001.01788.