Deep Neuroevolution - From Super Mario Level Generation to Playing Doom from Pixels
Evolution-based approaches have recently established themselves as a competitive alternative to deep reinforcement learning. In this talk, I detail some of our work in this area. For example, in our recent GECCO paper, we show that evolutionary algorithms can optimize complex neural architectures with more than four million parameters; instead of training the components of these models separately, as was previously necessary, we demonstrate that models with exactly the same components can instead be trained efficiently end-to-end through a genetic algorithm. In contrast to gradient-descent methods, which struggle with discrete variables, evolution also works directly with such representations, opening up opportunities for classical planning in latent space. In other recent work we show, for the first time, that evolving neural networks can be scaled to learn tasks in complex 3D environments directly from raw pixels. The main idea behind the new approach, called deep innovation protection, is to employ multiobjective evolutionary optimization to temporarily reduce the selection pressure on specific components of a neural architecture, allowing other components to adapt. Additionally, I will present some of our work on hybridizing deep learning methods with evolutionary computation, an emerging and promising research direction.
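To make the innovation-protection idea concrete, the following is a minimal toy sketch (not the talk's actual system): each individual has two components, mutating the upstream component resets an "age" counter, and survivor selection is multiobjective over (task fitness, age), so freshly modified individuals are temporarily shielded from pure fitness pressure while the other component adapts. The component names, the toy fitness function, and all hyperparameters are illustrative assumptions.

```python
import random

# Toy objective: fitness is maximal when both components hit their targets.
# This stands in for task reward; it is NOT the actual benchmark from the talk.
def fitness(ind):
    return -((ind["vision"] - 1.0) ** 2 + (ind["controller"] - 2.0) ** 2)

def dominates(a, b):
    """Pareto dominance on (maximize fitness, minimize age)."""
    no_worse = a["fit"] >= b["fit"] and a["age"] <= b["age"]
    strictly_better = a["fit"] > b["fit"] or a["age"] < b["age"]
    return no_worse and strictly_better

def mutate(parent, rng):
    child = dict(parent)
    child["age"] = parent["age"] + 1
    if rng.random() < 0.5:
        child["vision"] += rng.gauss(0.0, 0.2)
        child["age"] = 0  # innovation protection: a changed component resets age
    else:
        child["controller"] += rng.gauss(0.0, 0.2)
    child["fit"] = fitness(child)
    return child

def evolve(generations=200, pop_size=32, seed=0):
    rng = random.Random(seed)
    pop = []
    for _ in range(pop_size):
        ind = {"vision": rng.uniform(-3, 3),
               "controller": rng.uniform(-3, 3),
               "age": 0}
        ind["fit"] = fitness(ind)
        pop.append(ind)
    for _ in range(generations):
        children = [mutate(rng.choice(pop), rng) for _ in range(pop_size)]
        union = pop + children
        # Non-dominated individuals survive first (protecting recent
        # innovations); remaining slots are filled purely by fitness.
        front = [p for p in union
                 if not any(dominates(q, p) for q in union if q is not p)]
        front.sort(key=lambda p: p["fit"], reverse=True)
        rest = sorted((p for p in union if p not in front),
                      key=lambda p: p["fit"], reverse=True)
        pop = (front + rest)[:pop_size]
    return max(pop, key=lambda p: p["fit"])

best = evolve()
```

Because age is a second objective rather than a hard constraint, a just-mutated individual with mediocre fitness can still survive on the Pareto front, giving its unchanged components time to co-adapt before fitness pressure alone decides its fate.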
04.12.2019 | 10:00 - 11:15