
Scaling up and assuring reinforcement learning with abstract Markov Decision Processes

Speaker: Dr Daniel Kudenko, University of York
Event date: Wednesday, 05 July 2017, 16:00 - 18:00
Place: Campus Belval
Maison du Savoir, Room 04-4.020

About the topic:

While reinforcement learning (RL) has recently had great successes in game AI and other decision-making tasks, two major challenges remain:

  • Scaling RL up to complex tasks
  • Assuring properties, such as safety, of both the learning process and the learning result

In this talk, Dr Kudenko will show how abstract Markov Decision Processes, reward shaping, and quantitative verification can be used to tackle these challenges.
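As background to one of the techniques named in the abstract, the following is a minimal sketch of potential-based reward shaping (not the speaker's own implementation; the potential values and state names are hypothetical). A potential function Φ over states, here imagined as coming from an abstract-state value estimate, augments the environment reward with F(s, s') = γΦ(s') − Φ(s), which is known to preserve the optimal policy:

```python
def shaped_reward(reward, phi_s, phi_s_next, gamma=0.99):
    """Return the environment reward plus the shaping term
    F(s, s') = gamma * Phi(s') - Phi(s).

    Potential-based shaping leaves the optimal policy unchanged,
    which is one route to giving guarantees about the learned result.
    """
    return reward + gamma * phi_s_next - phi_s


# Hypothetical potentials for three abstract states, e.g. derived
# from distance-to-goal in an abstract MDP.
phi = {"far": 0.0, "near": 5.0, "goal": 10.0}

# Moving from "far" to "near" earns a positive shaping bonus on top
# of the step cost of -1.
r = shaped_reward(reward=-1.0, phi_s=phi["far"],
                  phi_s_next=phi["near"], gamma=0.9)
print(r)  # -1.0 + 0.9 * 5.0 - 0.0 = 3.5
```

Because the shaping terms telescope along any trajectory, the learner is guided toward high-potential states without changing which policy is optimal.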

About the speaker:

Daniel Kudenko is a member of the Computer Science faculty at the University of York. He received a Ph.D. from Rutgers University and a Master's degree from the University of the Saarland, Germany. His research interests include machine (reinforcement) learning, multi-agent systems, user modeling, and artificial intelligence for games and interactive entertainment. In these areas he has published more than 80 peer-reviewed papers and has served on numerous program committees. Dr Kudenko currently heads the Reinforcement Learning Group, carries out work in Games, Interactive Entertainment and Drama, and is a member of the Artificial Intelligence Research Group.