Artificial intelligence has made a considerable leap since a computer's victory over Kasparov at chess, as AlphaGo, a piece of Google software, demonstrates.
Nothing seems to stop the progress of artificial intelligence. Stronger than IBM's Deep Blue, which beat world chess champion Garry Kasparov in 1997, or Watson, also from IBM, which won the quiz show Jeopardy! in 2011 against the best players of the moment, AlphaGo, software designed by researchers at Google DeepMind, has just crushed Fan Hui, the European go champion, 5-0. In this Asian strategy game, two opponents compete to conquer as much territory as possible by placing black and white stones on a board with 361 intersections.
Judging by the first games, published Wednesday, January 27, in the journal Nature, "the software shows a surprisingly mature game: solid, patient, but incisive when necessary," says Motoki Noguchi, one of the best go players in France.
"The number of possible combinations is incomparably higher in the game of go than in chess," says Olivier Teytaud, a researcher at INRIA who has worked on the subject. There are on the order of 10^600 reasonable games of go (a 1 followed by six hundred zeros), against 10^120 for chess, which already exceeds the number of particles in the universe. It is therefore a feat that a computer has finally beaten a professional human at go, without a handicap, on a board of normal size (19×19 intersections). "In the 1980s and 1990s, almost all go programmers felt it would not be possible to get there for at least a century," says Bruno Bouzy, a lecturer at Université Paris Descartes and one of the authors of a method used in AlphaGo.
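A back-of-envelope way to see where such numbers come from is to raise an average branching factor (legal moves per turn) to the power of a typical game length. The figures below are common rough estimates, not numbers from the article, which counts "reasonable" games differently and hence arrives at a larger 10^600.

```python
import math

# Rough game-tree size: (average legal moves per turn) ^ (game length).
# b = 35, d = 80 for chess and b = 250, d = 150 for go are standard
# rough estimates, assumed here for illustration.
def log10_tree_size(branching_factor, game_length):
    """Return x such that the game tree has roughly 10**x leaves."""
    return game_length * math.log10(branching_factor)

print(f"chess: ~10^{log10_tree_size(35, 80):.0f} games")    # ~10^124
print(f"go:    ~10^{log10_tree_size(250, 150):.0f} games")  # ~10^360
```

Even these conservative estimates make brute-force enumeration hopeless for go, which is why the statistical methods described below were needed.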
The originality of Google's research team was "to combine several artificial-intelligence ingredients in its algorithm," adds Olivier Teytaud. "The methods are not new in isolation, and various combinations were applied to the game of go in the past by other teams," notes Frenchman Yann LeCun, director of artificial-intelligence research at Facebook and considered the inventor of "deep learning," another component of AlphaGo.
The Google program uses the so-called Monte Carlo approach, "a simulation to evaluate a position and know which move to play, in which many random games are played out and the results averaged," says Bruno Bouzy. This is harder to achieve in go than in chess, because all the stones have the same value. The method was supplemented in 2006 with what experts call a "search tree" to anticipate moves in the short term, adds Bruno Bouzy. Frenchman Rémi Coulom, a former researcher at INRIA and the University of Lille, wrote the founding scientific paper on this method, called "Monte Carlo Tree Search," applied to go ten years ago. The first games, with the computer given a four-move head start, were thus won by machines against good human players on small boards (9×9). To improve the algorithm further, it was combined with the concept of "deep learning," which mimics how the brain works by chaining together small (artificial) neural networks, used for example in image or shape recognition.
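The Monte Carlo idea Bruno Bouzy describes can be sketched in a few lines: to score a candidate move, play many random games to the end from the resulting position and average the outcomes. The game below is Nim (take 1 to 3 sticks per turn; whoever takes the last stick wins), a tiny stand-in chosen so the sketch stays self-contained; it is not AlphaGo's actual code.

```python
import random

def random_playout(sticks, my_turn):
    """Play uniformly random moves to the end; return True if 'I' win."""
    while sticks > 0:
        sticks -= random.randint(1, min(3, sticks))
        if sticks == 0:
            return my_turn  # whoever took the last stick wins
        my_turn = not my_turn
    return not my_turn  # position was already empty: the previous mover won

def monte_carlo_move(sticks, playouts=10000):
    """Score each legal move by the average result of random playouts."""
    best_move, best_score = None, -1.0
    for take in range(1, min(3, sticks) + 1):
        # After I take 'take' sticks, it is the opponent's turn.
        wins = sum(random_playout(sticks - take, my_turn=False)
                   for _ in range(playouts))
        score = wins / playouts
        if score > best_score:
            best_move, best_score = take, score
    return best_move

# In Nim, leaving a multiple of 4 sticks is a guaranteed win with
# perfect play, so from 10 sticks the best move is to take 2.
print(monte_carlo_move(10))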
"We can dream that computers may prove theorems, or even watch a movie and summarize it"
In 2014, the Google research team led by David Silver had already used this approach for go. The algorithm was then improved by refining these methods, notably by having the program learn from games played by top players and by having the computer play many games against itself.
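The self-play idea can be illustrated with a minimal sketch: the program plays many games against itself and estimates, from the final results alone, how good each position is for the player to move. The game (Nim: take 1 to 3 sticks, whoever takes the last stick wins) and the simple win-rate table are illustrative stand-ins for AlphaGo's neural networks, not its actual method.

```python
import random
from collections import defaultdict

def self_play_values(start=10, episodes=5000):
    """Estimate each position's value from random self-play games."""
    wins = defaultdict(int)    # games won by the player to move here
    visits = defaultdict(int)
    for _ in range(episodes):
        sticks, path = start, []
        while sticks > 0:
            path.append(sticks)  # record position; one move per entry
            sticks -= random.randint(1, min(3, sticks))
        result = 1  # the player who took the last stick won
        for pos in reversed(path):
            wins[pos] += result
            visits[pos] += 1
            result = 1 - result  # players alternate going backwards
    return {pos: wins[pos] / visits[pos] for pos in visits}

values = self_play_values()
# In Nim, multiples of 4 are theoretical losses for the player to
# move, and the self-play statistics reflect that: values[4] comes
# out well below values[3] or values[5].
```

AlphaGo replaces this lookup table with deep neural networks so that the value estimates generalize across go's astronomically many positions, but the feedback loop is the same: play, record the outcome, adjust the evaluation.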
The consequences are immense, because problems that seemed intractable to the computing community could now be tackled. "We can dream that computers may prove theorems, understand a text, or even watch a movie and summarize it," adds Olivier Teytaud. The computer used (which has 1,200 microprocessors and 176 graphics processors) is in itself nothing extraordinary; all the intelligence is in the software.
To prove that the program holds up, new games will be played in March between AlphaGo and South Korea's Lee Sedol, one of the best go players of the past ten years.