2019-09-13, 14:30–16:00, Room B
Riding the wave of the AI and deep learning revolution, reinforcement learning has evolved from solving simple game puzzles to beating human records in Atari games. This has also opened up the possibility of applying reinforcement learning to real-life decision making.
In this workshop we will introduce several deep reinforcement learning (DRL) algorithms. The exercises involve implementing them in Python with deep learning libraries, specifically Keras and TensorFlow, to play games in OpenAI Gym and simulated Atari. We will also explore real-life use cases, such as in robotics and business.
In the first section, we will cover the basics of reinforcement learning and implement the cross-entropy method to play a simple game. On top of the basic tabular cross-entropy method, we will also implement the deep cross-entropy method, which approximates the policy with a neural network when the state space becomes too large to keep in a table.
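To give a flavour of the exercises, the core of the cross-entropy method is selecting "elite" episodes whose total reward clears a percentile threshold, then refitting the policy on their state-action pairs. A minimal sketch of that selection step (the function name and episode-batch layout are illustrative, not the workshop's actual code):

```python
import numpy as np

def select_elites(states_batch, actions_batch, rewards_batch, percentile=50):
    """Keep state-action pairs from episodes whose total reward is at or
    above the given percentile of the batch (the 'elite' episodes).

    states_batch / actions_batch: lists of per-episode lists;
    rewards_batch: one total reward per episode.
    """
    threshold = np.percentile(rewards_batch, percentile)
    elite_states, elite_actions = [], []
    for states, actions, reward in zip(states_batch, actions_batch, rewards_batch):
        if reward >= threshold:
            elite_states.extend(states)
            elite_actions.extend(actions)
    return elite_states, elite_actions

# Toy batch of four one-step episodes with rewards 1..4:
# the 50th percentile is 2.5, so only the last two episodes survive.
elites = select_elites([[0], [1], [2], [3]],
                       [[0], [1], [0], [1]],
                       [1, 2, 3, 4],
                       percentile=50)
```

The tabular variant then updates action counts from the elite pairs, while the deep variant fits a classifier network to predict elite actions from states.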
Most problems in the real world are model-free settings, i.e. we don't know in advance how our intermediate actions will affect the final outcome. In the second section, we will introduce Q-learning and SARSA, two model-free algorithms built on the Bellman equations. We will also introduce the experience replay buffer, which is essential for speeding up learning.
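The single-step updates of the two algorithms show the off-policy versus on-policy distinction directly. A minimal tabular sketch (function names, the NumPy Q-table layout, and the hyperparameter defaults are illustrative assumptions):

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Q-learning (off-policy): move Q(s, a) toward the Bellman target
    r + gamma * max_a' Q(s', a'), using the greedy next action."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    """SARSA (on-policy): same shape of update, but the target uses
    the action a' the agent actually took in s'."""
    target = r + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

# Two states, two actions, one transition with reward 1.0:
# both rules move Q[0, 0] from 0 toward the target by a step of alpha.
Q = np.zeros((2, 2))
q_learning_update(Q, s=0, a=0, r=1.0, s_next=1)
```

The only difference between the two is the bootstrap term, which is exactly what makes Q-learning off-policy and SARSA on-policy.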
In the last section, we will explore DQN (Deep Q-Network), a method developed by Google DeepMind that uses a CNN as the agent's Q-function approximator to play Atari games. An experience replay buffer will also be implemented to speed up learning.
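The replay buffer itself needs nothing beyond the standard library: store recent transitions in a bounded queue and sample uniformly, which breaks the correlation between consecutive frames that otherwise destabilises network training. A minimal sketch (class and method names are assumptions, not the workshop's actual code):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of (state, action, reward, next_state, done)
    transitions; old entries are evicted once capacity is reached."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        """Record one environment transition."""
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        """Draw a uniform random minibatch for a training step."""
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

# Fill a tiny buffer past capacity: the oldest transition is dropped.
buf = ReplayBuffer(capacity=4)
for t in range(5):
    buf.push(t, 0, 1.0, t + 1, False)
batch = buf.sample(3)
```

In DQN training, each minibatch drawn from the buffer is used to regress the CNN's Q-values toward their Bellman targets.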
At the end of the workshop, participants should understand the concepts behind the deep reinforcement learning algorithms covered, be able to implement them in Python with Keras and TensorFlow, and potentially apply DRL in their own work and projects.
We expect participants to have basic knowledge of deep learning (especially CNNs) and experience using Keras and TensorFlow. We also expect participants to set up the required environment on their own machine or preferred cloud platform using the provided Docker image. A setup guide will be released prior to the workshop.