Real-time capable visual odometry and visual simultaneous localization and mapping have become popular research topics. Robots depend on a precise estimate of their own motion for tasks such as trajectory generation, localization, and path planning. Different kinds of sensors can be used to tackle this generally hard problem, but the choice is always a trade-off between configuration effort, the monetary cost of the system, and other quality factors. Hence, cameras have become an increasingly popular sensor for determining the ego-motion of a robot.
This thesis deals with the extension of a monocular direct sparse visual odometry to a stereo direct sparse visual-inertial odometry and the evaluation of the resulting system.
The depth information from a stereo camera is used to eliminate the initialization step and to pre-initialize the depth of selected pixels in keyframes. Furthermore, depth information and inertial measurements significantly robustify the pose pre-initialization of new frames. Because depth is known, the scale ambiguity is resolved and scale drift is eliminated. The experiments carried out in this work show that these extensions significantly improve both robustness and tracking accuracy.