Biography
Daniel Tanneberg is a Ph.D. student in the Intelligent Autonomous Systems Group at the Technische Universitaet Darmstadt. He received his B.Sc. and M.Sc. with honors from the Technische Universitaet Darmstadt, focusing on Artificial Intelligence, Machine Learning, and Biological Psychology. As a member of the European GOAL-Robots project, his research aims at developing lifelong-learning cognitive systems. To this end, he investigates intrinsic motivation signals for task-independent learning in stochastic neural networks, as well as memory-augmented networks for learning abstract solution strategies.
Abstract
Intrinsic Motivation and Mental Replay Enable Efficient Online Adaptation in Stochastic Recurrent Networks
Autonomous robots need to interact with unknown, unstructured, and changing environments, constantly facing novel challenges. Continuous online adaptation for lifelong learning and sample-efficient mechanisms for adapting to changes in the environment, the constraints, the task, or the robot itself are therefore crucial. Here, we present our novel framework for probabilistic online motion planning with online adaptation, based on a bio-inspired stochastic recurrent neural network. By using learning signals that mimic the intrinsic motivation signal of cognitive dissonance, together with a mental replay strategy that intensifies experiences, the stochastic recurrent network can learn from few physical interactions and adapt to novel environments within seconds. We show results of our online planning and adaptation framework on an anthropomorphic KUKA LWR arm. The rapid online adaptation is demonstrated by learning unknown workspace constraints sample-efficiently from a few physical interactions while following given waypoints.
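The sketch below is a minimal, illustrative Python example of the two mechanisms named in the abstract, not the actual learning rule of the stochastic recurrent network: a cognitive-dissonance-like signal, computed here as the mismatch between a predicted and an observed state transition, modulates a simple Hebbian-style update, and mental replay reuses each physical interaction several times. All names, dimensions, and parameters (N_STATES, LEARNING_RATE, N_REPLAYS) are assumptions chosen for the toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions and parameters (not taken from the paper).
N_STATES = 50          # number of state neurons
LEARNING_RATE = 0.01   # base learning rate, scaled by the intrinsic motivation signal
N_REPLAYS = 5          # how often each experience is mentally replayed

# Transition (weight) matrix; encodes the network's expected state transitions.
W = rng.normal(0.0, 0.1, size=(N_STATES, N_STATES))


def cognitive_dissonance(prev_state, next_state, W):
    """Intrinsic motivation signal: mismatch between the transition the model
    predicts from prev_state and the transition that was actually observed."""
    predicted = W @ prev_state
    return np.linalg.norm(next_state - predicted)


def modulated_update(W, prev_state, next_state, modulation):
    """Simple modulated Hebbian-style update (an illustrative stand-in for the
    network's actual learning rule): move predictions toward the observed
    transition, scaled by the intrinsic motivation signal."""
    predicted = W @ prev_state
    return W + LEARNING_RATE * modulation * np.outer(next_state - predicted, prev_state)


def online_adaptation_step(W, experience):
    """Process one physical interaction: compute the intrinsic motivation
    signal, then intensify the experience via mental replay."""
    prev_state, next_state = experience
    dissonance = cognitive_dissonance(prev_state, next_state, W)
    # Mental replay: reuse the same experience several times so that a few
    # physical interactions suffice for adaptation.
    for _ in range(N_REPLAYS):
        W = modulated_update(W, prev_state, next_state, modulation=dissonance)
    return W, dissonance


# Toy usage: random "experiences" standing in for observed state transitions
# while the robot follows given waypoints.
for step in range(10):
    experience = (rng.random(N_STATES), rng.random(N_STATES))
    W, dissonance = online_adaptation_step(W, experience)
    print(f"step {step}: cognitive dissonance = {dissonance:.3f}")
```

In this sketch, the dissonance-scaled learning rate means that surprising transitions drive larger updates, while well-predicted ones leave the model nearly unchanged; the replay loop is what lets a single physical interaction contribute several updates.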