Seminar/Proseminar: Multi-Agent Reinforcement Learning
This advanced seminar offers a focused exploration of the specialized field of multi-agent reinforcement learning (MARL). Reinforcement learning is a type of machine learning in which an agent learns to make decisions by interacting with its environment. In the multi-agent setting, multiple agents concurrently interact with the environment and with each other, which significantly complicates the learning problem and opens up a multitude of interesting research questions.
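As a toy illustration (not part of the course materials), the following sketch shows two independent Q-learners in a repeated two-player coordination game: each agent adapts to an environment that is itself changing, because the other agent is learning at the same time. All names and parameters here are illustrative assumptions, not drawn from any specific MARL library.

```python
import random

ACTIONS = [0, 1]

def train(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    """Independent Q-learning in a stateless 2x2 coordination game.

    Both agents receive reward 1 if their actions match, 0 otherwise.
    Each agent keeps one Q-value per action and updates it only from
    its own reward signal -- it has no model of the other agent.
    """
    rng = random.Random(seed)
    q = [[0.0, 0.0], [0.0, 0.0]]  # q[agent][action]
    for _ in range(episodes):
        acts = []
        for agent in range(2):
            if rng.random() < epsilon:          # explore
                acts.append(rng.choice(ACTIONS))
            else:                               # exploit (ties -> action 0)
                acts.append(0 if q[agent][0] >= q[agent][1] else 1)
        reward = 1.0 if acts[0] == acts[1] else 0.0
        for agent in range(2):
            a = acts[agent]
            # Standard tabular update; the "environment" each agent
            # faces is non-stationary because the other agent learns too.
            q[agent][a] += alpha * (reward - q[agent][a])
    return q
```

Under these assumptions the two learners typically settle on the same action, illustrating why convergence analysis in MARL is harder than in the single-agent case: each agent's optimal policy depends on the other's evolving policy.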
Students will gain a deep understanding of MARL by examining and discussing seminal and recent papers that address fundamental concepts, algorithms, challenges, and applications of the field. Each student will select a paper from a provided list, work through its content in depth, deliver a presentation, and lead a discussion of its key contributions, methodologies, and implications.
Papers covered will span a range of topics including, but not limited to, cooperative and competitive MARL, exploration strategies in MARL, communication and negotiation among agents, fairness and stability in multi-agent systems, and deep multi-agent reinforcement learning. We will also touch upon real-world applications of MARL in areas such as robotics, traffic control, and game playing.
|Prof. Dr. Tim Landgraf
|Dahlem Center for Machine Learning and Robotics
Module: see here (right column)
|Arnimallee 7, Seminar Room 031
|18.10.2023 | 10:00
|14.02.2024 | 12:00
The course emphasizes critical thinking and effective communication skills. Students are expected to actively engage in discussions, critique the methodologies and conclusions of papers, and consider the broader implications of the research.
Prerequisites: An understanding of basic reinforcement learning principles and algorithms, as well as a general familiarity with machine learning concepts. Proficiency in reading and understanding machine learning research papers is strongly recommended.