Reinforcement learning is the closest thing to a general AI system that we have. When it works, that is. The problem is, it often doesn’t. In this talk we will discuss the difficulties of RL as well as what to look for in the future.
As the third pillar of machine learning, reinforcement learning unsurprisingly enjoys opportunities and faces challenges that are unique to its domain. Many of the difficulties of supervised and unsupervised learning are simply not an issue here, but this comes at the cost of dealing with a completely different set of problems.
We have seen the tremendous success of RL in creating AIs for various games, from tic-tac-toe through chess and Go and up to Starcraft 2 and Dota 2. But what about successful applications in fields that are not inherently game-related? It turns out we won’t find that many, even if we dig pretty deep. Why is that?
It is due to the prevalence of problems inherent to the current state of RL as a field. In this talk we will address these limitations. We will see that many of the reported findings don’t hold up under scrutiny, and how and why many state-of-the-art algorithms break down when compared to much simpler solutions. However, we will also identify the conditions where RL already shines or might shine in the future. Finally, we will discuss some promising avenues for future research.
The talk assumes familiarity with the fundamental principles of machine learning in general, as well as basic knowledge of reinforcement learning concepts specifically (so that we don’t spend time introducing common terminology).
Domains: Artificial Intelligence, Algorithms, Deep Learning, Machine Learning, Statistics
Python Skill Level: basic
Abstract as a tweet: Why doesn’t RL show the same success as (un)supervised learning? Inherent difficulties facing RL and avenues for future work