This thesis describes the methods and realization of embedded real-time vision systems for mobile robotics, with a focus on stereo vision.
Computer vision algorithms are often computationally expensive; stereo vision and optical flow calculation in particular demand substantial computing power. Computers performing these tasks therefore tend to have high power consumption and large physical size, both of which are often constrained in mobile robotics. To solve this problem, I have developed several embedded systems that perform stereo vision and optical flow computation with low power consumption. These systems are small enough to fit into most of the mobile robots used in robotics research today. The six main contributions of my thesis are:
- An evaluation of algorithms for stereo vision in an urban scenario. This evaluation combines the quality of depth measurements together with the implementation costs on an embedded system.
- A stereo vision algorithm that achieves close to state-of-the-art performance of semi-global methods using a purely local approach.
- A framework for creating portable vision algorithms on programmable logic. This framework allows algorithms to be deployed in a hardware independent way.
- A novel security system for computer-controlled cars. This system, called "SAFEBox", enabled the autonomous car at the Free University of Berlin to be the first to receive permission from the city government to drive autonomously inside the city of Berlin.
- Several realized computer vision systems for different applications. These systems enable applications that were previously not possible using generic computing devices due to power and space constraints.
- An active lighting technique that enables stereo cameras to function on textureless surfaces, outdoors as well as indoors.
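The purely local stereo approach mentioned among the contributions can be illustrated with a minimal winner-take-all block matcher: a sum-of-absolute-differences (SAD) cost is aggregated over a square window, and the lowest-cost disparity is selected independently per pixel. This is a generic sketch of local matching, not the actual algorithm or matching cost developed in this thesis; the window radius and disparity range are arbitrary illustration values.

```python
import numpy as np

def box_sum(img, r):
    """Window sums over (2r+1)x(2r+1) neighborhoods via an integral image."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode="edge")
    ii = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    ii[1:, 1:] = pad.cumsum(0).cumsum(1)
    return ii[k:, k:] - ii[:-k, k:] - ii[k:, :-k] + ii[:-k, :-k]

def local_stereo_disparity(left, right, max_disp=16, r=2):
    """Winner-take-all block matching on a rectified stereo pair.

    left, right: 2-D grayscale arrays; disparity is searched along rows,
    so left[y, x] is compared against right[y, x - d] for d < max_disp.
    """
    h, w = left.shape
    cost = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        # Pixel-wise SAD cost for candidate disparity d ...
        diff = np.abs(left[:, d:] - right[:, :w - d])
        # ... aggregated over a square window around each pixel.
        cost[d, :, d:] = box_sum(diff, r)
    # Winner-take-all: pick the disparity with minimal aggregated cost.
    return np.argmin(cost, axis=0)
```

Practical implementations replace the plain SAD cost with more robust measures and add consistency checks; the purely local structure (fixed window, per-pixel decision) is what makes such methods attractive for embedded and programmable-logic targets.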
My systems have been successfully tested with the autonomous cars "E-instein" and "Made in Germany". Both cars rely on the SAFEBox for safe drive-by-wire operation. "Made in Germany" uses my vision preprocessor for stereo vision and optical flow computation.
Moreover, the results of this thesis are not limited to controlling autonomous cars; they can be applied to mobile robotics in general. A centralized, eight-camera system for 360 degree surround view applications has also been developed. Furthermore, the development of a miniaturized Smart Stereo Camera has opened up new areas of application, allowing the integration of the system into humanoid soccer robots, autonomous wheelchairs and the autonomous model cars developed at the Free University of Berlin.
Autonomous transportation will lead to major benefits in safety, economy and ecology. Although the associated technology has been an active field of research in recent decades, some problems have not been fully solved yet. Robust and efficient localization is a key component, especially in urban scenarios. This thesis deals with the design and development of a system for landmark-based localization in urban scenarios suitable for autonomous driving. The sensor input is limited to a stereo camera pair, vehicle odometry and an off-the-shelf GPS. Prior knowledge in the form of a landmark map is also available.
Pole-like structures are identified as robust, long-term stable and common three-dimensional landmarks in urban scenarios. They are easily detectable by a stereo camera and are used as primary landmarks. Compared to lane markers, they have a lower occlusion probability and a lower change rate. As pole-like structures can be rather small, high-quality depth reconstruction is crucial for robust detection. Several contributions are made in the field of automotive stereo vision, targeting long-term stability, robustness and efficiency. A new matching cost is presented, and Semi-Global Matching is modified to become more reliable and more scalable. A robust extraction method for pole-like landmarks is introduced. The proposed localization method uses particle filters, and the complete processing chain is covered, from feature extraction to the output of a latency-corrected vehicle pose. Field tests with an autonomous vehicle in urban environments and accuracy measures derived from real driving data demonstrate the performance of the approach.
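A particle-filter localizer of the kind described above can be sketched as follows. The motion and measurement models here are simplified stand-ins assumed for illustration (a noisy unicycle odometry model and range-only observations of pole landmarks); they are not the models used in this thesis, which also handles bearing, map association and latency correction.

```python
import numpy as np

rng = np.random.default_rng(42)

def predict(particles, v, omega, dt, noise=(0.05, 0.02)):
    """Propagate (x, y, heading) particles with a noisy unicycle model."""
    n = len(particles)
    v_n = v + rng.normal(0.0, noise[0], n)       # noisy forward speed
    w_n = omega + rng.normal(0.0, noise[1], n)   # noisy turn rate
    particles[:, 2] += w_n * dt
    particles[:, 0] += v_n * dt * np.cos(particles[:, 2])
    particles[:, 1] += v_n * dt * np.sin(particles[:, 2])

def update(particles, weights, landmark, z_range, sigma=0.5):
    """Reweight particles by the likelihood of an observed range to a pole."""
    d = np.hypot(particles[:, 0] - landmark[0], particles[:, 1] - landmark[1])
    weights *= np.exp(-0.5 * ((d - z_range) / sigma) ** 2)
    weights += 1e-300            # guard against all-zero weights
    weights /= weights.sum()

def resample(particles, weights):
    """Systematic resampling to counter particle depletion."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx].copy(), np.full(n, 1.0 / n)
```

In a full pipeline, `predict` would consume odometry between camera frames, `update` would run once per pole landmark extracted from the stereo depth map and associated with the map, and resampling would be triggered only when the effective particle count drops.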