
Researchers claimed a major artificial-intelligence breakthrough on Wednesday, unveiling a software program that taught itself to beat a top human player at the board game Go, a feat long considered a milestone challenge in the field.

Researchers at Google parent Alphabet Inc.'s DeepMind unit said the program, AlphaGo, beat European Go champion Fan Hui five games to zero on a full-size board during a recent competition at DeepMind's London headquarters. Previously, the best programs had defeated Go professionals only on smaller, unofficial boards.

AlphaGo uses two so-called deep neural networks, computer programs with millions of connections that loosely mimic the structure of the human brain. This approach has produced several artificial-intelligence breakthroughs in recent years, including computers that can identify objects in images more consistently than humans can.
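
To make the idea concrete, here is a minimal sketch, in Python using the PyTorch library, of what a small convolutional network over a 19-by-19 Go board might look like. The layer sizes and the 48 input feature planes are illustrative assumptions for this sketch, not DeepMind's published architecture.

```python
import torch
import torch.nn as nn

class TinyPolicyNet(nn.Module):
    """A toy 'policy' network: board position in, one score per board point out."""

    def __init__(self, planes: int = 48, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(planes, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, 1, kernel_size=1),  # one raw score (logit) per board point
        )

    def forward(self, board_planes: torch.Tensor) -> torch.Tensor:
        # board_planes: (batch, planes, 19, 19); returns (batch, 361) logits.
        # A softmax over these logits gives a probability for each possible move.
        return self.body(board_planes).flatten(start_dim=1)
```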

AlphaGo’s first network was shown about 30 million Go moves made by human players, to teach it to predict what move an expert would play next. Engineers guided this phase of training, a technique known as supervised learning.
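
A rough sketch of that supervised step, under the same illustrative assumptions as the network above: the program is shown a board position and nudged, via a standard cross-entropy loss, toward the move the human expert actually played. The tensors here are hypothetical placeholders, not DeepMind's data pipeline.

```python
import torch
import torch.nn as nn

def supervised_step(policy_net: nn.Module,
                    optimizer: torch.optim.Optimizer,
                    board_planes: torch.Tensor,    # (batch, 48, 19, 19) board positions
                    expert_moves: torch.Tensor):   # (batch,) expert move indices in 0..360
    """One training step: push the network toward the move a human expert chose."""
    logits = policy_net(board_planes)                         # (batch, 361) move scores
    loss = nn.functional.cross_entropy(logits, expert_moves)  # penalty for missing the expert move
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```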

The second network played thousands of Go games against itself to learn, without human help, to evaluate board positions and estimate the likelihood that a given move would ultimately win the game.
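
In spirit, that second stage looks something like the sketch below: positions from games the program plays against itself are labelled with the eventual result, and a "value" network learns to predict that result. The network shape is again an illustrative assumption, and the self-play machinery that would produce the position and outcome tensors is omitted.

```python
import torch
import torch.nn as nn

# Toy "value" network: board position in, estimated game outcome out.
value_net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(48 * 19 * 19, 256),
    nn.ReLU(),
    nn.Linear(256, 1),
    nn.Tanh(),   # +1 ~ "current player goes on to win", -1 ~ "goes on to lose"
)

def value_step(optimizer: torch.optim.Optimizer,
               positions: torch.Tensor,   # (batch, 48, 19, 19) positions seen in self-play games
               outcomes: torch.Tensor):   # (batch, 1) final results of those games, -1.0 or +1.0
    """One training step: fit the value network to the results of self-play games."""
    predictions = value_net(positions)
    loss = nn.functional.mse_loss(predictions, outcomes)  # squared error vs. the actual outcome
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```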

The latter approach, a form of reinforcement learning in which the system improves through self-play rather than from human examples, is considered more cutting edge. David Silver, a DeepMind researcher who worked on AlphaGo, said the system learned to discover new strategies on its own.