Seminar

2019 IROS

Author
Hyun Myung
Date
2020-02-12 22:25
After attending IROS 2019, I briefly introduce two papers published there.



1. Exploiting Sparse Semantic HD Maps for Self-Driving Vehicle Localization

    - The authors work at Uber.

    - A camera, LiDAR, GPS, wheel encoder, IMU, and an HD map are used for localization.

        - HD map

            The HD map needs only about 0.3% of the storage compared to a LiDAR map.

            The HD map contains lane and traffic-sign positions. (Lanes constrain lateral position & heading; signs constrain longitudinal position.)

        - Lane Extraction

            Lanes are extracted from the front-view camera image together with raw LiDAR intensity.

        - Detecting Traffic Sign

            Traffic signs are detected by a convolutional network (with PSPNet as the backbone).

        - Matching Process

            The extracted lanes are matched against the lane data in the HD map.

            The detected traffic signs are matched against the sign data in the HD map.

            Each pose hypothesis is weighted using these matches together with GPS and IMU + wheel-encoder odometry.

            The vehicle location is then estimated from the resulting probabilities.


    - Result

    They tested the vehicle over about 312 km of highway driving in the United States and obtained only 0.05 m of lateral error and 1.12 m of longitudinal error.

    The algorithm runs at roughly 7 Hz.
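The matching-and-weighting idea above can be sketched with a toy probabilistic localizer. This is a minimal particle-weighting sketch, not the authors' actual filter: the map values, noise scales, and the Gaussian likelihoods are all assumptions chosen only to illustrate how lanes constrain the lateral estimate while signs constrain the longitudinal one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical HD-map entries (illustrative values, not from the paper).
SIGN_X = 50.0   # longitudinal position of a traffic sign [m]
LANE_Y = 0.0    # lateral position of the lane center [m]

def measurement_weight(particle, lane_y, sign_x, sigma_lat=0.2, sigma_lon=1.0):
    """Weight a pose hypothesis by how well it agrees with the HD map:
    lanes give a tight lateral constraint, signs a looser longitudinal one."""
    x, y = particle
    w_lane = np.exp(-0.5 * ((y - lane_y) / sigma_lat) ** 2)
    w_sign = np.exp(-0.5 * ((x - sign_x) / sigma_lon) ** 2)
    return w_lane * w_sign

# Pose hypotheses spread around a GPS prior; in the paper, IMU + wheel
# encoder odometry would supply the motion update between measurements.
particles = rng.normal(loc=[49.0, 0.3], scale=[2.0, 0.5], size=(500, 2))
weights = np.array([measurement_weight(p, LANE_Y, SIGN_X) for p in particles])
weights /= weights.sum()

# Pose estimate = weighted mean over hypotheses.
estimate = weights @ particles
print(estimate)  # pulled toward the map-consistent pose near (50.0, 0.0)
```

Note how the two map cues act on different axes: even with a coarse GPS prior, the lane match tightens the lateral estimate far more than the longitudinal one, matching the error asymmetry in the reported results.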



2. RangeNet++: Fast and Accurate LiDAR Semantic Segmentation

    - Semantic segmentation using only a rotating LiDAR

    - The point clouds are projected into a spherical range image.

    - RangeNet53 uses DarkNet53 as the backbone.

    - The network is trained on the SemanticKITTI dataset.

    - About 130,000 points are converted into 32,768 pixels.

    - Each pixel stores the indices of the points that fall into it, so the estimated semantic labels can be mapped back to all points.

    - Artifacts can appear at object borders; a post-processing step removes them.

    - In 3D coordinates (the point cloud), a k-NN search filters out these border artifacts.

    - The algorithm runs in real time on a GPU such as a GTX 1080.
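The projection and post-processing steps above can be sketched as follows. This is a simplified NumPy illustration, not the paper's implementation: the vertical field of view (+3° to −25°) is an assumed typical 64-beam value, and the brute-force majority-vote k-NN stands in for the paper's efficient range-image-based k-NN.

```python
import numpy as np

rng = np.random.default_rng(0)

def spherical_projection(points, H=64, W=2048, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) LiDAR cloud onto an H x W range image.
    Also returns, per pixel, the index of the point that landed there,
    so per-pixel labels can later be mapped back to the cloud."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                       # horizontal angle
    pitch = np.arcsin(z / r)                     # vertical angle
    fu, fd = np.radians(fov_up), np.radians(fov_down)
    u = ((1.0 - yaw / np.pi) * 0.5 * W).astype(int) % W     # column
    v = np.clip(((fu - pitch) / (fu - fd) * H).astype(int), 0, H - 1)  # row
    range_img = np.full((H, W), -1.0)
    index_img = np.full((H, W), -1, dtype=int)
    order = np.argsort(-r)            # far points first; near points overwrite
    range_img[v[order], u[order]] = r[order]
    index_img[v[order], u[order]] = order
    return range_img, index_img

def knn_label_cleanup(points, labels, k=5):
    """Replace each point's label by the majority vote of its k nearest
    3D neighbours to suppress projection artifacts at object borders
    (brute force; fine for a sketch, not for 130k points)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    nbrs = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(labels[n]).argmax() for n in nbrs])

# Toy usage: project a random cloud, then clean one mislabelled point
# inside an otherwise uniformly labelled cluster.
pts = rng.normal(scale=10.0, size=(1000, 3))
rimg, iimg = spherical_projection(pts)
print(rimg.shape)                   # (64, 2048)

cluster = rng.normal(size=(50, 3))
labels = np.zeros(50, dtype=int)
labels[0] = 1                       # a border-style artifact
print(knn_label_cleanup(cluster, labels)[0])  # -> 0 (outvoted by neighbours)
```

The index image is what lets a 2D segmentation result annotate every 3D point: pixels that received multiple points keep the nearest one, and the remaining points inherit labels via the k-NN vote.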


    - Results


RangeNet53++ per-class IoU [%]:

Size         | Car  | Bicycle | Pole | Truck | Person | Road | Parking | Building
64 x 2048 px | 91.4 | 25.7    | 47.9 | 25.7  | 38.3   | 91.8 | 65.0    | 87.4
64 x 1024 px | 90.3 | 20.6    | 43.8 | 25.2  | 29.6   | 90.4 | 52.3    | 83.9
64 x 512 px  | 87.4 |  9.9    | 39.1 | 19.6  | 18.1   | 90.0 | 50.7    | 80.2


Here is the video they posted on YouTube: