The navigation capabilities of honeybees are surprisingly complex. Experimental evidence suggests that honeybees rely on a map-like neuronal representation of their environment. Intriguingly, a honeybee brain contains only about one million neurons. In an interdisciplinary effort, we investigate models of high-level processing in the insect nervous system, such as spatial mapping and decision making. We use a robotic platform, termed NeuroCopter, that is controlled by a set of functional modules. Each module initially implements a conventional control method and, in an iterative process, will be replaced by a neural control architecture. This paper describes the neuromorphic extraction of the copter's ego motion from sparse optical flow fields. We first introduce the system's architecture and then present a detailed description of the neural model's structure, followed by simulated and real-world results.
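
To make the ego-motion task concrete: a conventional (non-neural) baseline, of the kind each functional module initially represents, can recover the copter's angular velocity from a sparse flow field by least squares under a pinhole camera model. The sketch below is an illustrative assumption, not the paper's neuromorphic architecture; the function name and the restriction to pure rotation are hypothetical simplifications for exposition.

```python
import numpy as np

def rotational_flow_matrix(pts, f=1.0):
    # Interaction matrix mapping angular velocity (wx, wy, wz) to the
    # image flow (u, v) at each feature point, for a pinhole camera with
    # focal length f and purely rotational ego motion (one sign
    # convention; others differ only by signs).
    rows = []
    for xi, yi in pts:
        rows.append([xi * yi / f, -(f + xi**2 / f), yi])   # u component
        rows.append([f + yi**2 / f, -xi * yi / f, -xi])    # v component
    return np.array(rows)

rng = np.random.default_rng(0)
pts = rng.uniform(-0.5, 0.5, size=(40, 2))     # sparse feature locations
omega_true = np.array([0.02, -0.05, 0.10])     # angular velocity (rad/s)

# Synthesize a noisy sparse flow field from the same model, then invert
# it with linear least squares to estimate the ego rotation.
A = rotational_flow_matrix(pts)
flow = A @ omega_true + rng.normal(0.0, 1e-4, size=A.shape[0])
omega_est, *_ = np.linalg.lstsq(A, flow, rcond=None)
print(omega_est)
```

With tens of flow vectors the overdetermined system averages out measurement noise, which is why even sparse flow fields suffice for ego-motion estimation.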