
Path Following with Deep Reinforcement Learning for Autonomous Cars

Daniel Göhring, Raúl Rojas – 2021

Path following for autonomous vehicles is a challenging task. Choosing an appropriate controller using typical linear or nonlinear control theory methods demands intensive investigation of the dynamics and kinematics of the system. Furthermore, the nonlinearity of the system's dynamics, the complexity of its analytical description, disturbances, and the influence of sensor noise raise the need for adaptive control methods to reach optimal performance. In this paper, a Deep Reinforcement Learning (DRL) approach based on the Deep Deterministic Policy Gradient (DDPG) algorithm is employed for path tracking with an autonomous model vehicle. The RL agent is trained in a 3D simulation environment: it interacts with the unknown environment and accumulates experiences to update its deep neural network. The algorithm learns a policy (a sequence of control actions) that solves the designed optimization objective. The agent is trained to compute heading angles that follow a path with minimal cross-track error. In the final evaluation, to demonstrate the trained policy's dynamics, we analyzed the learned steering policy's ability to respond to both larger and smaller steering values while keeping the cross-track error as small as possible. In conclusion, the agent was able to drive around the track for several loops without exceeding the maximum tolerated deviation, and moreover with a reasonable orientation error.
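The cross-track error that the agent is trained to minimize is the signed lateral distance between the vehicle and the reference path. The paper does not publish its implementation; the following is a minimal sketch under the assumption of a piecewise-linear reference path, where the error with respect to one path segment is computed from a 2D cross product (the function name and sign convention are illustrative, not taken from the paper).

```python
import math


def cross_track_error(pos, seg_start, seg_end):
    """Signed lateral distance from the vehicle position `pos` to the
    path segment seg_start -> seg_end (all 2D points as (x, y) tuples).

    Convention assumed here: positive when the vehicle is to the left
    of the path's direction of travel, negative when to the right.
    """
    (x, y), (x1, y1), (x2, y2) = pos, seg_start, seg_end
    dx, dy = x2 - x1, y2 - y1          # path direction vector
    seg_len = math.hypot(dx, dy)
    # 2D cross product of the path direction with the vector from the
    # segment start to the vehicle, normalized by the segment length.
    return (dx * (y - y1) - dy * (x - x1)) / seg_len
```

For example, a vehicle at (0.5, 0.2) relative to a segment running along the x-axis from (0, 0) to (1, 0) has a cross-track error of 0.2 (left of the path). A reward of the form `-abs(cross_track_error(...))` would then drive the agent toward the path, though the actual reward shaping used in the paper may differ.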

Title
Path Following with Deep Reinforcement Learning for Autonomous Cars
Keywords
deep reinforcement learning, deep deterministic policy gradient, path-following, advanced driver assistance systems, autonomous vehicles
Date
2021-10-27
Identifier
ISBN 978-989-758-537-1; DOI 10.5220/0010715400003061
Source(s)
Published in
Proceedings of the 2nd International Conference on Robotics, Computer Vision and Intelligent Systems (ROBOVIS 2021)
Type
Text
Size or Length
8 pages
Rights
Copyright by SCITEPRESS