Inside GNSS Media & Research

JUL-AUG 2019

…at a frequency of 1 Hz for GNSS and 10 Hz for the 3-D LiDAR. In addition, a GNSS RTK/INS (fiber optic gyroscopes) integrated navigation system is used to provide the ground truth of positioning. All the data are collected and synchronized using the Robot Operating System (ROS) (Quigley et al., Additional Resources). Moreover, the coordinate systems of all the sensors are calibrated before the experiments. The parameters of Algorithms 1 and 2 applied in the experiments are given in (Wen et al., Additional Resources).

FIGURE 9 The sensor setup of the experimental vehicle and the tested environment: the GNSS and LiDAR sensors installed on top of the vehicle (left), the tested urban scenario (middle), and the skyplot of the experiment (right).

Two GNSS positioning methods are compared:
1) WLS: GNSS positioning using the WLS.
2) WLS-NC: GNSS positioning using the WLS, with all detected NLOS receptions corrected.

Result of Building Detection using LiDAR

Figure 10 shows the result of the LiDAR-based perception, namely point cloud segmentation. The colored points denote the 3-D point clouds from the 3-D LiDAR sensor. The 3-D bounding boxes represent the buildings detected using the proposed method presented in Section 2. The 2-D black boxes denote the surrounding dynamic objects, such as vehicles and pedestrians, which are manually labeled. In practical use, excessive dynamic objects can degrade the accuracy of building detection. Due to the blockage from the surrounding buildings, GNSS NLOS receptions occur, as shown in the bottom panel of Figure 10.

In practice, a building can be mis-detected, as can be seen in the bottom panel of Figure 11. The bounding box expected to be detected is B; however, the detected bounding box is A. The main reason is that excessive dynamic objects can block the field of view (FOV) of the 3-D LiDAR, so that only a limited part of the building is detected.

As mentioned in Section 2, the 3-D LiDAR plays two significant roles in the proposed method: 1) detecting buildings for satellite visibility classification; and 2) ranging the distance between the GNSS receiver and the potential signal reflector. According to our recent research (Xiwei, Additional Resources), we make use of a camera to capture the sky view, from which the satellite visibility can be identified. As both the camera and the 3-D LiDAR are indispensable sensors for the realization of autonomous vehicles, we can leverage both the LiDAR-based perception and the camera to aid GNSS positioning.

Result of the Perceived Environment Aided GNSS Positioning

Figure 12 and Table I show the positioning result comparisons of the conventional LS, the WLS and the proposed method (WLS-NC). As can be seen from Table I, GNSS positioning accuracy is gradually improved with increased constraints. Figure 12 shows the positioning error during a closed-loop test. The red line represents…

TABLE I  Positioning performance of the two methods in urban canyon (in meters)

ALL DATA        LS      WLS     WLS-NC
Mean error      12.55   10.23   7.81
Std             7.5     5.9     5.13
Availability    100%    92.5%   100%

TABLE II  Positioning performance of the two methods in urban canyon, using only the data epochs with NLOS corrections (in meters)

                LS      WLS     WLS-NC
Mean error      12.30   11.01   7.13
Std             6.2     8.0     5.14
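To make the satellite visibility classification concrete, the following Python sketch illustrates the idea under simplifying assumptions: each LiDAR-detected building is summarized by the azimuth span its facade occupies as seen from the receiver, its horizontal distance, and its height above the antenna. A satellite is flagged as NLOS when its elevation falls below the angle subtended by the facade top. All function and field names here are illustrative assumptions, not the authors' implementation.

import math

def is_nlos(sat_az_deg, sat_el_deg, buildings):
    """Return True if the satellite signal is blocked by a building (NLOS).

    buildings: list of dicts with keys
        'az_min', 'az_max' -- azimuth span of the facade, degrees
        'distance'         -- horizontal distance to the facade, meters
        'height'           -- facade height above the antenna, meters
    """
    for b in buildings:
        # Elevation angle subtended by the building top at the receiver.
        el_blocked = math.degrees(math.atan2(b['height'], b['distance']))
        if b['az_min'] <= sat_az_deg <= b['az_max'] and sat_el_deg < el_blocked:
            return True  # line of sight intersects the facade: NLOS
    return False

# Example: a 30 m facade 20 m away blocks satellites below ~56 deg elevation.
buildings = [{'az_min': 80.0, 'az_max': 140.0, 'distance': 20.0, 'height': 30.0}]
print(is_nlos(110.0, 40.0, buildings))  # True  (blocked by the facade)
print(is_nlos(110.0, 70.0, buildings))  # False (above the roofline)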
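The LiDAR's second role, ranging the distance to the potential reflector, feeds the NLOS correction. As a simplified illustration of how such a correction can be formed, the sketch below uses the classical mirror-image geometry of a single specular reflection off a vertical facade, where the extra path is 2d cos(el) cos(Δaz). This is a textbook model chosen for illustration; it is not necessarily the exact correction model applied in (Wen et al.).

import math

def nlos_extra_path(d, el_deg, az_offset_deg):
    """Extra path length (m) of a signal reflected once off a vertical facade.

    d:             horizontal distance from receiver to the facade, meters
    el_deg:        satellite elevation, degrees
    az_offset_deg: satellite azimuth minus the facade-normal azimuth, degrees
    """
    # Mirror-image geometry: the reflected ray appears to come from the
    # receiver's image behind the facade, displaced by 2d along the normal.
    return 2.0 * d * math.cos(math.radians(el_deg)) * math.cos(math.radians(az_offset_deg))

# A facade 15 m away and a satellite at 30 deg elevation aligned with the
# facade normal: the NLOS pseudorange is ~26 m too long, and subtracting
# this delay yields the corrected measurement used by WLS-NC.
print(round(nlos_extra_path(15.0, 30.0, 0.0), 1))  # 26.0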
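Finally, the corrected pseudoranges enter a weighted least-squares solver, the WLS-NC case in Table I. The following is a minimal iterative WLS sketch for single point positioning; the weight vector (for example, elevation-dependent inverse variances) and all names are assumptions for illustration, not the authors' exact formulation.

import numpy as np

def wls_position(sat_pos, pseudoranges, weights, x0=None, iters=10):
    """Solve for receiver ECEF position and clock bias via iterative WLS.

    sat_pos:      (n, 3) satellite ECEF positions, meters
    pseudoranges: (n,)   NLOS-corrected pseudoranges, meters
    weights:      (n,)   measurement weights (inverse variances)
    """
    x = np.zeros(4) if x0 is None else np.array(x0, dtype=float)
    W = np.diag(weights)
    for _ in range(iters):
        rho = np.linalg.norm(sat_pos - x[:3], axis=1)        # predicted ranges
        H = np.column_stack([(x[:3] - sat_pos) / rho[:, None],  # unit LOS rows
                             np.ones(len(rho))])                # clock column
        r = pseudoranges - (rho + x[3])                       # residuals
        dx = np.linalg.solve(H.T @ W @ H, H.T @ W @ r)        # normal equations
        x += dx
        if np.linalg.norm(dx[:3]) < 1e-4:                     # converged
            break
    return x[:3], x[3]  # position (m) and receiver clock bias (m)

Down-weighting or correcting NLOS-affected pseudoranges before this step is what separates the WLS and WLS-NC columns in Tables I and II: the geometry is unchanged, but the biased measurements no longer dominate the solution.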
