Interest in autonomous driving is growing around the globe. Driver assistance systems in current production cars already provide conditional driving automation. Technological advancements enable higher levels of driving automation, up to the autonomous performance of the entire dynamic driving task. Nonetheless, the safe deployment of autonomous vehicles beyond research laboratory environments into real traffic on public roads necessitates further development, which makes the technologies around autonomous vehicles an area of active scientific research. Robust and comprehensive environmental perception and understanding are basic requirements for deriving actions and safe driving behavior. They can only be achieved through the combination of different multi-modal sensing technologies and corresponding data fusion.
An analysis of current trends shows that camera and LiDAR sensing technologies, combined with deep artificial neural network architectures and semantic segmentation in the context of autonomous driving, form a set of current and challenging topics, equally interesting from an industrial and a research perspective, to be addressed in this master's thesis in computer science.
With the MIG (Made In Germany), the Free University Berlin maintains an adequately equipped autonomous vehicle serving as a research platform with permission to operate in real-world traffic. As part of this thesis, a deep neural network framework suitable for semantic segmentation is integrated into the MIG platform. Furthermore, an appropriate deep artificial neural network for pixelwise semantic segmentation of the vehicle's camera images is selected and implemented. Finally, a system is developed and integrated into the MIG that performs cross-modal transfer of pixelwise semantic labels from 2D images to the corresponding 3D point clouds generated by a LiDAR scanner. All implementations and their underlying technologies are then assessed for their suitability in the context of autonomous driving.
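The cross-modal label transfer described above can be sketched as follows. This is a minimal, hypothetical illustration only, not the thesis implementation: it assumes a calibrated pinhole camera with intrinsic matrix K and a known rigid transform (R, t) from the LiDAR frame to the camera frame, projects each LiDAR point into the segmented image, and copies the class label of the pixel it lands on. All names and the exact calibration convention are assumptions for the sake of the sketch.

```python
import numpy as np

def transfer_labels(points, K, R, t, label_img):
    """Assign a per-pixel semantic label to each 3D LiDAR point.

    points:    (N, 3) array of 3D points in LiDAR coordinates
    K:         (3, 3) camera intrinsic matrix
    R, t:      rotation (3, 3) and translation (3,) from LiDAR to camera frame
    label_img: (H, W) array of per-pixel class IDs from the segmentation network
    Returns an (N,) array of class IDs; points outside the image get -1.
    """
    # Transform points from the LiDAR frame into the camera frame.
    cam = points @ R.T + t
    labels = np.full(len(points), -1, dtype=np.int64)
    # Only points in front of the camera (positive depth) can project.
    in_front = cam[:, 2] > 0
    # Perspective projection: homogeneous pixel coordinates, then divide by depth.
    uvw = cam[in_front] @ K.T
    uv = (uvw[:, :2] / uvw[:, 2:3]).round().astype(int)
    # Keep only projections that land inside the image bounds.
    h, w = label_img.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = label_img[uv[valid, 1], uv[valid, 0]]
    return labels
```

In practice such a transfer additionally has to handle occlusion (several points projecting onto the same pixel at different depths) and the time offset between camera and LiDAR captures, which the sketch omits.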