Until today he enjoyed a certain celebrity among go enthusiasts as the European champion, but his latest feat will no doubt bring him worldwide and lasting fame: Fan Hui, a French player from Bordeaux, has become the first professional to lose a game of go against a computer.
The event is historic enough to make the cover of Nature. While chess has long been mastered by artificial-intelligence programs (the supercomputer Deep Blue beat Kasparov in 1997), go is of an entirely different complexity and had so far resisted algorithms. On a gridded board called a goban, players place "stones" (black for one player, white for the other) to encircle the opponent, take their pieces prisoner, and claim "territories". "The rules are very simple, but it is probably the most complex game ever invented by man, because the number of possible combinations is greater than the number of atoms in the universe [about 10^170, Ed.]," says Demis Hassabis. This British neuroscientist knows what he is talking about: DeepMind, the artificial-intelligence company he founded in 2011 and sold to Google last year, developed the go program that beat Fan Hui.
The confrontation between the human and the AlphaGo algorithm took place in October in London. Fan Hui gladly accepted the invitation, seeing the computer as "a partner to move forward and progress together" and certain of his coming victory. Yet one cannot even say he was challenged by AlphaGo: he was crushed. Of the five fast games he played, at a rate of two a day, he lost three. Of the five "normal" games, he won none. Computer 5, human 0. "AlphaGo plays like a human," says the champion in an interview with Le Monde, without those "weird" moves computers are usually guilty of and whose strategy no one understands. AlphaGo's play seems natural; what is more, where the machine remains imperturbable, the human falters psychologically. "In the end, I had lost all confidence facing it, and that is catastrophic. It does not have that problem," Fan Hui recalls. He kept his defeats secret until the publication of the study in Nature this week.
Google DeepMind's program differs from other artificial intelligences tackling the game of go by combining several approaches. It still relies on the classic "Monte Carlo" method, which simulates thousands of games in advance to estimate which moves are most likely to lead to a win. But it adds to this the possibilities offered by deep learning, a technique that simulates a neural network so that the computer can choose the best possible response to the parameters it is given. DeepMind had already demonstrated its excellence in this area by creating a program capable of playing video games without being taught the rules, and Google uses it to recognize and describe the content of a photograph ("a black and white dog jumps over a bar"). A huge database is needed so the computer can ingest examples and improve: AlphaGo studied 30 million moves by professional players and then played against itself to put its technique to the test. It is the combination of these three artificial-intelligence methods, Monte Carlo, deep learning and reinforcement learning, that is unprecedented (and apparently devastating) in the field of go.
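To make the "Monte Carlo" idea mentioned above concrete, here is a minimal sketch in Python: for each legal move, play many random games to the end and keep the move with the best win rate. It is an illustration only, applied to tic-tac-toe rather than go, and it is not DeepMind's implementation; AlphaGo's actual search (Monte Carlo tree search guided by neural networks) is far more sophisticated, and all names below are hypothetical.

```python
import random

# Toy example: flat Monte Carlo move selection for tic-tac-toe.
# Not AlphaGo's method, just the basic "simulate many random games" idea.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == '.']

def random_playout(board, player):
    """Play random moves until the game ends; return the winner or None for a draw."""
    board = board[:]
    while True:
        w = winner(board)
        if w or not legal_moves(board):
            return w
        board[random.choice(legal_moves(board))] = player
        player = 'O' if player == 'X' else 'X'

def monte_carlo_move(board, player, playouts=200):
    """Pick the move whose random playouts most often end in a win for `player`."""
    best_move, best_rate = None, -1.0
    for move in legal_moves(board):
        trial = board[:]
        trial[move] = player
        opponent = 'O' if player == 'X' else 'X'
        wins = sum(random_playout(trial, opponent) == player for _ in range(playouts))
        if wins / playouts > best_rate:
            best_move, best_rate = move, wins / playouts
    return best_move

if __name__ == '__main__':
    empty = ['.'] * 9
    print("Suggested opening move:", monte_carlo_move(empty, 'X'))
```

In go the number of moves and the length of games make such brute-force playouts hopeless on their own, which is why AlphaGo uses its neural networks to narrow down which moves are worth simulating in the first place.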