How complex is the memory structure that honeybees use to navigate? Recently, an insect-inspired, parsimonious spiking neural network model was proposed that enabled simulated ground-moving agents to follow learned routes. We adapted this model to flying insects and evaluated its route-following performance in three worlds with gradually decreasing object density. In addition, we propose an extension that enables the model to associate sensory input with a behavioral context, such as foraging or homing. The spiking neural network model relies on a sparse stimulus representation in the mushroom body and reward-based synaptic plasticity at its output synapses. In our experiments, simulated bees were able to navigate correctly even when panoramic cues were missing. The proposed context extension enabled agents to successfully discriminate partly overlapping routes. The structure of the visual environment, however, crucially determines the success rate. We find that the model fails more often in visually rich environments because of overlap among the features represented by the Kenyon cell layer; reducing landmark density improves the agents' route-following performance. In very sparse environments, we find that extended landmarks, such as roads or field edges, may help the agent stay on its route, but they often act as strong distractors and yield poor route-following performance. We conclude that the presented model is valid for simple route-following tasks and may represent one component of insect navigation. Additional components might still be necessary for guidance and action selection while navigating along different memorized routes in complex natural environments.
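The core mechanism summarized above (a sparse Kenyon cell code with reward-gated plasticity at the mushroom body output synapses) can be sketched roughly as follows. This is an illustrative rate-based toy, not the authors' spiking implementation: the layer sizes, random fan-in, winner-take-most sparsification, and synaptic depression rule are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

N_VISUAL = 100    # visual input channels (assumed size)
N_KC = 2000       # Kenyon cells forming the sparse coding layer
SPARSITY = 0.05   # fraction of Kenyon cells active per view (assumption)

# Random, sparse fan-in from visual inputs to Kenyon cells.
W_in = (rng.random((N_KC, N_VISUAL)) < 0.1).astype(float)
# Kenyon cell -> output neuron synapses; depressed by reward during learning.
w_out = np.ones(N_KC)

def kc_activity(view):
    """Sparse code: only the most strongly driven 5% of Kenyon cells fire."""
    drive = W_in @ view
    k = int(SPARSITY * N_KC)
    active = np.zeros(N_KC)
    active[np.argsort(drive)[-k:]] = 1.0
    return active

def learn(view, lr=0.5):
    """Reward-gated depression of output synapses of currently active cells."""
    w_out[kc_activity(view) > 0] *= (1.0 - lr)

def familiarity(view):
    """Summed output; lower values mean a more familiar (trained) view."""
    return float(w_out @ kc_activity(view))
```

After training on views sampled along a route, a trained view drives a weaker output than a novel one, which an agent could use as a familiarity signal for steering; overlap between the sparse codes of distinct views is what degrades performance in visually rich environments.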