
In 2015, a team affiliated with Google DeepMind consisting of Chris J. Maddison, Aja Huang, Ilya Sutskever, and David Silver reported in their paper Move Evaluation in Go Using Deep Convolutional Neural Networks that they had trained a large 12-layer convolutional neural network to predict expert moves. Playing directly from the network's output, without any search, it beat GNU Go in 97% of the games and matched the performance of a state-of-the-art Monte-Carlo Tree Search that simulates a million positions per move.
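
The published architecture is not reproduced here, but the general shape of such a convolutional move-prediction ("policy") network is easy to sketch. The PyTorch snippet below is only an illustration: the number of input feature planes, the channel width, and the kernel sizes are assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class MovePredictionCNN(nn.Module):
    """Illustrative convolutional move-prediction network for a 19x19 board.
    Layer count, input planes, and channel widths are placeholders, not the
    exact architecture from the cited papers."""

    def __init__(self, in_planes=8, channels=64, n_conv_layers=12):
        super().__init__()
        layers = [nn.Conv2d(in_planes, channels, kernel_size=5, padding=2), nn.ReLU()]
        for _ in range(n_conv_layers - 2):
            layers += [nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU()]
        # A final 1x1 convolution maps the feature maps to one score per board point.
        layers += [nn.Conv2d(channels, 1, kernel_size=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, planes):        # planes: (batch, in_planes, 19, 19)
        logits = self.net(planes)     # (batch, 1, 19, 19)
        return logits.flatten(1)      # one logit per intersection; softmax over 361 moves

# Supervised training would minimize cross-entropy against the move the professional played:
# loss = nn.functional.cross_entropy(model(planes), expert_move_index)
```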

In 2014, two teams had independently investigated whether deep convolutional neural networks could be used to directly represent and learn a move evaluation function for the game of Go. One of them, Christopher Clark and Amos Storkey, trained an 8-layer convolutional neural network by supervised learning from a database of human professional games; without any search, it defeated the traditional search program GNU Go in 86% of the games.

As mentioned by Ilya Sutskever and Vinod Nair in 2008, convolutional neural networks are well suited to problems with a natural translation invariance, such as object recognition. Go has some translation invariance: if all the stones on a hypothetical Go board are shifted to the left, then the best move shifts with them (with the exception of stones on the boundary of the board). Many earlier applications of neural networks to Go, such as the work of Nicol N. Schraudolph, had already used convolutional networks.
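
To make the translation argument concrete: a convolutional filter slid over the board commutes with shifting the stones, so a shifted position produces a correspondingly shifted feature map, except near the edge. Below is a small sketch using NumPy and SciPy; the board, filter, and shift are arbitrary illustrations, not code from the cited papers.

```python
import numpy as np
from scipy.signal import correlate2d

# A toy 19x19 Go position: 1 = black stone, -1 = white stone, 0 = empty.
rng = np.random.default_rng(0)
board = rng.integers(-1, 2, size=(19, 19))

# A single 3x3 filter, standing in for one learned feature detector.
kernel = rng.standard_normal((3, 3))

def shift_left(a):
    """Shift every column one step to the left, filling the right edge with zeros."""
    out = np.zeros_like(a)
    out[:, :-1] = a[:, 1:]
    return out

feat = correlate2d(board, kernel, mode="same")                       # features of the original position
feat_shifted = correlate2d(shift_left(board), kernel, mode="same")   # features of the shifted position

# Away from the board edges, shifting the position and then convolving gives the
# same result as convolving and then shifting the feature map.
assert np.allclose(feat_shifted[:, 1:-2], shift_left(feat)[:, 1:-2])
```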

These neural-network results came on top of an earlier advance: after Bernd Brügmann's early trials of Monte Carlo methods in a Go playing program in 1993, developments since the mid-2000s by Bruno Bouzy and by Rémi Coulom, who coined the term Monte-Carlo Tree Search, in conjunction with UCT (Upper Confidence bounds applied to Trees) introduced by Levente Kocsis and Csaba Szepesvári, had led to a breakthrough in computer Go.
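
The UCT rule mentioned above applies the UCB1 bandit formula to child selection inside the search tree: each move is scored by its average result plus an exploration bonus that shrinks as the move is visited more often. A minimal sketch, assuming a simple (wins, visits) statistic per child and an illustrative exploration constant:

```python
import math

def uct_select(children, c=1.4):
    """Pick the child maximizing the UCT score wins/visits + c*sqrt(ln(N)/visits).

    `children` is a list of (wins, visits) pairs for the moves tried so far;
    unvisited moves are normally expanded before this rule is applied.
    The exploration constant c is an illustrative choice.
    """
    total_visits = sum(visits for _, visits in children)

    def score(child):
        wins, visits = child
        return wins / visits + c * math.sqrt(math.log(total_visits) / visits)

    return max(children, key=score)
```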
