I'd like to remind all of you that we have a tea talk this afternoon by Julian. He will tell us the exciting news that IBM hasn't solved playing chess after all. See the updated abstract at the end of this email.
Games are an excellent platform for studying machine learning. Here we have potentially infinite training data and crystal-clear performance metrics. Although computers have been able to defeat human chess champions for years, I will argue that chess is actually very far from being "solved". It is also different from many problems we study in deep learning, because it is highly discrete yet exhibits a rich spatial and temporal structure.
In this talk, I will present promising results on learning to play chess from scratch, i.e. with minimal human expert knowledge. I will show different neural networks trained to both predict and play chess using supervised learning and reinforcement learning, and I will try to quantify what features they have learned. Along the way, I will also give a gentle introduction to some core reinforcement learning concepts.