The Future Of Artificial Intelligence Is Being Led By Google And Alphabet
This article originally appeared on the Motley Fool.
Big news for artificial intelligence watchers: Google's AlphaGo AI has beaten the world's top-ranked Go player, Ke Jie, 3-0 in a match in Wuzhen, China. The AI is a vastly improved version of the one that beat legendary 18-time world champion Lee Sedol just one year ago.
It's a big accomplishment for Alphabet (NASDAQ:GOOGL) (NASDAQ:GOOG). But the headline isn't the highlight -- it's actually the manner of AlphaGo's victory that sheds new light on what the future might hold for the budding AI industry.
Go, a 3,000-year-old board game, has simple rules. Players take turns placing a stone on the board. If one player completely surrounds an opponent's group of stones, the surrounded stones are captured and removed from the board. The object of the game is to surround more empty territory than your opponent.
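AlphaGo's own code is, of course, nothing like this, but the capture rule is simple enough to sketch. The snippet below is a minimal, illustrative Python check of whether a group of stones still has a "liberty" (an adjacent empty point); a group with none is captured. The board layout and function names are purely hypothetical.

```python
# Illustrative only: a minimal check of Go's capture rule on a tiny board.
# "." is an empty point, "B" and "W" are black and white stones.

def group_and_liberties(board, row, col):
    """Flood-fill the group containing (row, col); return its stones and
    whether the group has at least one liberty (adjacent empty point)."""
    color = board[row][col]
    stones, frontier, has_liberty = set(), [(row, col)], False
    while frontier:
        r, c = frontier.pop()
        if (r, c) in stones:
            continue
        stones.add((r, c))
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < len(board) and 0 <= nc < len(board[0]):
                if board[nr][nc] == ".":
                    has_liberty = True
                elif board[nr][nc] == color:
                    frontier.append((nr, nc))
    return stones, has_liberty

# A lone white stone completely surrounded by black has no liberties.
board = [list(row) for row in (".B..",
                               "BWB.",
                               ".B..",
                               "....")]
stones, alive = group_and_liberties(board, 1, 1)
print("captured" if not alive else "safe")   # prints "captured"
```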
Despite Go's apparent simplicity, the subtlety and complexity of its strategy and tactics have long confounded AI researchers. Until last year, it was the last classic game of perfect information that AI had been unable to master; no machine had come close to beating a top professional player.
A full-sized 19-by-19 board has 361 intersections. Because almost every empty intersection is a legal move, the number of possible game sequences is vast -- far too vast for a human or a computer to calculate the best move using the brute-force techniques IBM's Deep Blue used to master chess. (The number of possible board states in Go exceeds the number in chess by a factor greater than the number of atoms in the known universe.)
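A quick back-of-the-envelope check shows why. The figures below for chess positions and atoms in the observable universe are rough, commonly cited estimates (not from the article), and the Go figure is only a simple upper bound.

```python
# Rough upper bound on Go board configurations vs. assumed estimates
# for chess positions and atoms in the observable universe.
GO_UPPER_BOUND = 3 ** 361        # each of 361 points: empty, black, or white
CHESS_POSITIONS = 10 ** 47       # rough estimate of legal chess positions
ATOMS_IN_UNIVERSE = 10 ** 80     # rough estimate

ratio = GO_UPPER_BOUND // CHESS_POSITIONS
print(f"Go upper bound:      ~10^{len(str(GO_UPPER_BOUND)) - 1}")   # ~10^172
print(f"Go / chess ratio:    ~10^{len(str(ratio)) - 1}")            # ~10^125
print(f"Ratio exceeds atoms? {ratio > ATOMS_IN_UNIVERSE}")          # True
```

Under those assumptions, the ratio alone dwarfs the estimated atom count, which is the article's point: exhaustive search simply cannot keep up.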
Playing Go, therefore, requires a kind of intuitive thinking that computers have difficulty mastering.
Alphabet subsidiary DeepMind made a leap in this direction by incorporating machine learning alongside a more traditional statistical search technique known as Monte Carlo tree search.
To match the intuitive skills of human players, programmers taught AlphaGo pattern recognition. They fed it millions of positions from games played on internet Go servers to teach it to recognize what good moves "look" like. AlphaGo then played against itself millions of times over several months to further refine its skills.
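DeepMind's actual training pipeline is far more sophisticated, but the basic idea of blending a learned move-scoring function with simulation-based search can be sketched on a toy game. Everything below is hypothetical and simplified: the "game" is a race to reach 10 by adding 1, 2, or 3, the "policy" is a hand-written heuristic standing in for a trained network, and the rollouts are a bare-bones stand-in for Monte Carlo tree search.

```python
import random

TARGET = 10  # toy game: players alternate adding 1-3; whoever reaches 10 wins

def legal_moves(state):
    return [m for m in (1, 2, 3) if state + m <= TARGET]

def policy_score(state, move):
    """Stand-in for a learned policy: prefers totals that leave the opponent
    a multiple of 4 short of the target (a losing spot in this toy game)."""
    return 1.0 if (TARGET - (state + move)) % 4 == 0 else 0.1

def rollout(state):
    """Play out the rest of the game with random moves; return True if the
    side to move from `state` reaches TARGET first."""
    mover_wins = True
    while True:
        state += random.choice(legal_moves(state))
        if state == TARGET:
            return mover_wins
        mover_wins = not mover_wins

def choose_move(state, simulations=200):
    """Blend the policy prior with Monte Carlo rollout win rates."""
    best_move, best_value = None, float("-inf")
    for move in legal_moves(state):
        nxt = state + move
        if nxt == TARGET:
            win_rate = 1.0                       # immediate win
        else:
            # Opponent moves next from `nxt`; we win when they don't.
            opp_wins = sum(rollout(nxt) for _ in range(simulations))
            win_rate = 1.0 - opp_wins / simulations
        value = 0.5 * policy_score(state, move) + win_rate
        if value > best_value:
            best_move, best_value = move, value
    return best_move

print(choose_move(0))  # usually prints 2, the winning first move
```

AlphaGo replaces the hand-written heuristic with deep neural networks trained on human games and self-play, and the naive rollouts with a far more selective tree search, but the division of labor -- intuition to narrow the candidates, simulation to check them -- is the same.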
New specs
The latest version of AlphaGo, which beat number-one-ranked Ke Jie, is even more impressive than the one that defeated legendary player Lee Sedol last year. It is now roughly 10 times as efficient with computing power and takes mere weeks instead of months to train.
It's also a stronger player. Instead of learning from a data set of strong human players, DeepMind wiped AlphaGo's memory and retrained it entirely on data from the millions of games it had previously played against itself. Its personality has evolved, too: AlphaGo 2.0 is more tactical, more territorial, and somewhat more aggressive. It has also added a few unusual maneuvers to its arsenal -- for example, a type of invasion it plays at a stage that any good human player would consider far too early.
The latest match tells us a lot about the pace of AI improvement.
Ke Jie attempted to unsettle AlphaGo by turning some of its own unusual strategies and tactics against it. Known for reading out possible game sequences with exceptional speed and accuracy, he also tried to confuse AlphaGo by creating positions so complicated that the computer would have difficulty keeping up. The second game spiraled into eight simultaneous, interconnected battles spanning the entire board. But in the end, AlphaGo handled it all.
The 3-0 match result seems to be further vindication for companies in the AI space, most notably NVIDIA (NASDAQ:NVDA). But it may be a mixed blessing for processor manufacturers. Last year's match featured 1,920 CPUs and 280 of the souped-up graphics processing units that NVIDIA sells for AI applications. Despite losing the series, Lee Sedol managed to push all that processing power to the breaking point.
AlphaGo's tenfold efficiency gain in just one year could mean wider adoption of AI technology and the processors behind it, but it also suggests a less processing-intensive AI future. Customers won't simply keep buying more hardware to get better performance; as the software improves, there are diminishing returns to raw processing power.
What's more, opportunities attract competition: Google has been developing its own AI processors, known as tensor processing units.
Machine-learning AIs are now turning to fields that combine pattern recognition and strategic reasoning over large databases, such as medical diagnostics and treatment, and that will also involve some level of teamwork between trained humans and trained computers. The summit in Wuzhen featured an unusual game that foreshadowed this future of AI applications.
For the first time, AlphaGo played on a team alongside a human in a game of "pair Go." Two teams of two play against each other, with teammates alternating moves (a little like bridge). It can be tricky because teammates aren't allowed to communicate: each player must work out what a partner's moves are meant to accomplish and which lines of play the partner may be considering. It's an interesting way to simulate the tag-team future of man-machine problem-solving.
The teams were Lian Xiao paired with AlphaGo versus Gu Li paired with a second AlphaGo, meaning the machine both won and lost the game.
Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Ilan Moscovitz owns shares of Alphabet (A shares) and Alphabet (C shares). The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), and Nvidia. The Motley Fool has a disclosure policy.