Abstract
The main goal of this study is to enhance environment perception in the field of outdoor robotics.
The interaction of a robot with its environment is a challenging task that requires deep knowledge of the robot's surroundings. In this project it is assumed that the robot operates in an unstructured outdoor environment.
This makes tasks such as path planning even more challenging than in structured environments such as indoor or urban areas. Data from sensors such as 3D LiDARs can be used to create geometric maps of the environment, but they lack the ability to capture the semantics and properties of the surrounding surfaces. This can result in slippage on sand, emergency stops in front of traversable obstacles such as tall grass or bushes, or even in getting stuck in marshy areas.
To overcome these problems, the Active Vision Group researches how semantics can be extracted from hyperspectral image data using classical spectral analysis methods (see Fig. 1) as well as novel deep learning methods. These are fused with data from LiDARs and other sensor modalities to create semantically enriched three-dimensional environment maps, which allow precise planning of the robot's interaction with its surroundings.
Figure 1: Semantic segmentation of hyperspectral images in rural and urban areas using fully convolutional networks. The left side shows grayscale representations of the scenes; on the right side, predicted labels are overlaid in different colors.
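To illustrate the fusion step described above, the following is a minimal sketch (not the group's actual pipeline) of how per-pixel semantic labels from a segmented image can be transferred onto a LiDAR point cloud: each 3D point is projected into the camera image with a pinhole model and inherits the class label of the pixel it lands on. The camera intrinsics, the LiDAR-to-camera transform, and the label map are hypothetical placeholders.

```python
import numpy as np

def label_point_cloud(points, label_map, K, T_cam_lidar):
    """Return a semantic label per 3D point (-1 if not visible in the image).

    points       -- (N, 3) LiDAR points in the LiDAR frame (assumed layout)
    label_map    -- (H, W) integer class image from the segmentation network
    K            -- (3, 3) pinhole camera intrinsic matrix (hypothetical values)
    T_cam_lidar  -- (4, 4) rigid transform from LiDAR to camera frame
    """
    n = points.shape[0]
    # Transform points into the camera frame using homogeneous coordinates.
    pts_h = np.hstack([points, np.ones((n, 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    labels = np.full(n, -1, dtype=int)
    # Only points in front of the camera (positive depth) can project.
    in_front = pts_cam[:, 2] > 0
    uvw = (K @ pts_cam[in_front].T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).round().astype(int)

    # Keep projections that fall inside the image bounds.
    h, w = label_map.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = label_map[uv[valid, 1], uv[valid, 0]]
    return labels
```

In a real system the labeled points would then be accumulated into the 3D map, so that a planner can, for example, treat "tall grass" cells as traversable while avoiding "marsh" cells.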