The Future Of Robots, Artificial Intelligence And Computer Science
Way back in the 1980s, a schoolteacher challenged me to write a computer program that played tic-tac-toe. I failed miserably. But just a couple of weeks ago, I explained to one of my computer science graduate students how to solve tic-tac-toe using the so-called “Minimax algorithm,” and it took us about an hour to write a program to do it. Certainly my coding skills have improved over the years, but computer science has come a long way too.
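Here is roughly what that hour of work looks like – a minimal Python sketch of the Minimax idea for tic-tac-toe, simplified for illustration rather than our exact classroom code. Each player assumes the other plays perfectly: X picks the move with the highest guaranteed score, O the lowest.

```python
def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move): +1 if X can force a win, -1 if O can, 0 for a draw."""
    w = winner(board)
    if w is not None:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, 'O' if player == 'X' else 'X')
        if best is None or (player == 'X' and score > best[0]) \
                        or (player == 'O' and score < best[0]):
            best = (score, m)
    return best

# Searching the full game tree takes a few seconds and confirms that
# perfect play always ends in a draw (score 0).
print(minimax(' ' * 9, 'X'))
```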
What seemed impossible just a couple of decades ago is startlingly easy today. In 1997, people were stunned when a chess-playing IBM computer named Deep Blue beat international grandmaster Garry Kasparov in a six-game match. In 2015, Google revealed that its DeepMind system had mastered several 1980s-era video games, including teaching itself a crucial winning strategy in “Breakout.” In 2016, Google’s AlphaGo system beat a top-ranked Go player in a five-game match.
The quest for technological systems that can beat humans at games continues. In late May, AlphaGo will take on Ke Jie, the best player in the world, among other opponents at the Future of Go Summit in Wuzhen, China. With increasing computing power and improved engineering, computers can beat humans even at games we thought relied on human intuition, wit, deception or bluffing – like poker. I recently saw a video in which volleyball players practice their serves and spikes against robot-controlled rubber arms trying to block the shots. One lesson is clear: When machines play to win, human effort is futile.
This can be great: We want a perfect AI to drive our cars, and a tireless system looking for signs of cancer in X-rays. But when it comes to play, we don’t want to lose. Fortunately, AI can make games more fun, and perhaps even endlessly enjoyable.
Designing games that never get old
Today’s game designers – whose releases can earn more than a blockbuster movie – see a problem: Creating an unbeatable artificial intelligence system is pointless. Nobody wants to play a game they have no chance of winning.
But people do want to play games that are immersive, complex and surprising. Even today’s best games become stale after a person plays for a while. The ideal game will engage players by adapting and reacting in ways that keep the game interesting, maybe forever.
So when we’re designing artificial intelligence systems, we should look not to the triumphant Deep Blues and AlphaGos of the world, but rather to the overwhelming success of massively multiplayer online games like “World of Warcraft.” These sorts of games are graphically well-designed, but their key attraction is interaction.
It seems as if most people are not drawn to extremely difficult logical puzzles like chess and Go, but rather to meaningful connections and communities. The real challenge with these massively multiplayer online games is not whether they can be beaten by intelligence (human or artificial), but rather how to keep the experience of playing them fresh and new every time.
Change by design
At present, game environments offer players many ways to interact with one another. The roles in a dungeon raiding party are well-defined: Fighters take the damage, healers help them recover from their injuries and the fragile wizards cast spells from afar. Or think of “Portal 2,” a game focused entirely on collaborating robots puzzling their way through a maze of cognitive tests.
Exploring these worlds together allows you to form common memories with your friends. But any changes to these environments or the underlying plots have to be made by human designers and developers.
In the real world, changes happen naturally, without supervision, design or manual intervention. Players learn, and living things adapt. Some organisms even co-evolve, reacting to each other’s developments. (A similar phenomenon happens in a weapons technology arms race.)
Computer games today lack that level of sophistication. And for that reason, I don’t believe developing an artificial intelligence that can play modern games will meaningfully advance AI research.
We crave evolution
A game worth playing is a game that is unpredictable because it adapts, a game that is ever novel because novelty is created by playing the game. Future games need to evolve. Their characters shouldn’t just react; they need to explore, to learn to exploit weaknesses, and to cooperate and collaborate. As we understand it, Darwinian evolution and learning are the drivers of all novelty on Earth. They could be what drives change in virtual environments as well.
Evolution figured out how to create natural intelligence. So instead of trying to code our way to AI, shouldn’t we just evolve it? Several labs – including my own and that of my colleague Christoph Adami – are working on what is called “neuro-evolution.”
In a computer, we simulate complex environments, like a road network or a biological ecosystem. We create virtual creatures and challenge them to evolve over hundreds of thousands of simulated generations. Evolution itself then produces the best drivers, or the organisms best adapted to the conditions – those are the ones that survive.
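To make that loop concrete, here is a toy version of the evolutionary cycle in Python. The task – a tiny neural network learning the XOR function – is invented for this illustration and is vastly simpler than real neuro-evolution experiments, but the cycle of mutation, selection and inheritance is the same.

```python
import math
import random

# Toy neuro-evolution: evolve the 9 weights of a 2-2-1 neural network
# until it computes XOR. (Illustrative only; real experiments evolve far
# larger networks in far richer environments.)
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    """Run the network: two sigmoid hidden units, one sigmoid output."""
    sig = lambda v: 1 / (1 + math.exp(-max(-60, min(60, v))))
    h0 = sig(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = sig(w[3] * x[0] + w[4] * x[1] + w[5])
    return sig(w[6] * h0 + w[7] * h1 + w[8])

def fitness(w):
    """Higher is better: negative squared error over the four XOR cases."""
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

def evolve(pop_size=50, generations=2000, sigma=0.4):
    pop = [[random.gauss(0, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 5]   # selection: the fittest 20% survive
        children = [[w + random.gauss(0, sigma) for w in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]  # mutated offspring
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
for x, y in XOR:
    print(x, y, round(forward(best, x), 2))  # outputs should approach the 0/1 targets
```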
Today’s AlphaGo is beginning this process, learning by continuously playing games against itself, and by analyzing records of games played by top Go champions. But it does not learn while playing the way we do, through unsupervised experimentation. And it doesn’t adapt to a particular opponent: For these computer players, the best move is the best move, regardless of an opponent’s style.
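At toy scale, that self-play loop looks something like the following sketch, with tic-tac-toe standing in for Go and a simple lookup table standing in for AlphaGo’s neural networks. This illustrates the general idea, not DeepMind’s actual method: the program’s only teacher is the outcome of the games it plays against itself.

```python
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
V = {}  # estimated value of each board position, from X's point of view

def value(board):
    return V.setdefault(board, 0.5)  # unseen positions start as a coin flip

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play_game(alpha=0.1, epsilon=0.1):
    board, player, history = ' ' * 9, 'X', []
    while winner(board) is None and ' ' in board:
        moves = [i for i, c in enumerate(board) if c == ' ']
        children = [board[:m] + player + board[m + 1:] for m in moves]
        if random.random() < epsilon:      # occasionally explore at random
            board = random.choice(children)
        else:                              # otherwise play greedily
            pick = max if player == 'X' else min
            board = pick(children, key=value)
        history.append(board)
        player = 'O' if player == 'X' else 'X'
    result = {'X': 1.0, 'O': 0.0, None: 0.5}[winner(board)]
    for b in history:                      # nudge every visited position
        V[b] = value(b) + alpha * (result - value(b))

for _ in range(20000):
    self_play_game()
print(f"learned values for {len(V)} positions")
```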
Programs that learn from experience are the next step in AI. They would make computer games much more interesting, and enable robots not only to function better in the real world, but also to adapt to it on the fly.
Arend Hintze, Assistant Professor of Integrative Biology & Computer Science and Engineering, Michigan State University
This article was originally published on The Conversation. Read the original article.