Wednesday, August 30, 2017

When Machines Run Amok


Wall Street Journal


The author was taken aback when he watched an AI program teach itself to play an arcade game far better than its human designers could. Frank Rose reviews ‘Life 3.0’ by Max Tegmark.


Cosmologists take on the big questions, and in “Life 3.0” Max Tegmark addresses what may be the biggest of them all: What happens when humans are no longer the smartest species on the planet—when intelligence is available to programmable objects that have no experience of mortal existence in a physical body? Science fiction poses such questions frequently, but Mr. Tegmark, a physicist at MIT, asks us to put our “Terminator” fantasies aside and ponder other, presumably more realistic, scenarios. Among them is the possibility that a computer program will become not just intelligent but wildly so—and that we humans will find ourselves unable to do anything about it.
Mr. Tegmark’s previous book, “Our Mathematical Universe” (2014), put a hugely debatable spin on the already counterintuitive notion that there exists not one universe but a multitude. Not all mathematicians were impressed. “Life 3.0” will be no less controversial among computer scientists. Lucid and engaging, it has much to offer the general reader. Mr. Tegmark’s explanation of how electronic circuitry—or a human brain—could produce something so evanescent and immaterial as thought is both elegant and enlightening. But the idea that a machine-based superintelligence could somehow run amok is fiercely resisted by many computer scientists, to the point that people associated with it have been attacked as Luddites.

LIFE 3.0

By Max Tegmark
Knopf, 384 pages, $28
Yet the notion enjoys more credence today than it did a few years ago, partly thanks to Mr. Tegmark. Along with Elon Musk, Stephen Hawking and the Oxford philosopher Nick Bostrom, he has emerged as a leading proponent of “AI safety” research, which focuses on such critical matters as how to switch off intelligent machines before things get out of hand.
In March 2014 he co-founded the Boston-based Future of Life Institute to support work on the subject, and soon after he helped stage a conference at which AI researchers from around the world agreed that they should work not just to advance the field of artificial intelligence but to benefit humankind. This past January, he helped draw up a 23-point statement of principles that has been embraced by some 1,200 people in AI, among them the authors of the leading textbook on the subject and the founders of DeepMind, the Google-owned company whose AlphaGo program defeated one of the world’s top Go players last year in South Korea. 
The issue is certainly timely. After decades in which artificial intelligence promised much and delivered little, recent breakthroughs in such areas as facial recognition, automatic translation and self-driving cars have brought AI out of the wilderness. Amazon, Alphabet, Facebook, Tesla and Uber are making huge investments in AI research, as are Baidu and Alibaba in China. Where all this will take us is the broader focus of Mr. Tegmark’s book.
Though he sees widespread benefits in fields ranging from medical diagnosis to power-grid management, Mr. Tegmark devotes the bulk of “Life 3.0” to how things could go wrong. Most immediate is the threat of unemployment, starting perhaps among Uber drivers before eventually spreading to computer scientists whose machines have learned to program themselves. Even more disconcerting is the threat of an arms race involving cheap, mass-produced autonomous weapons. As Mr. Tegmark points out, “there isn’t much difference between a drone that can deliver Amazon packages and one that can deliver bombs.” Actually, bombs are crude compared with what AI could deliver once it has been weaponized: Think drones the size of bumblebees that could be programmed to kill certain people, or certain categories of people, by grabbing their skulls with tiny metal talons and drilling into their heads.
As horrific as that possibility may sound, it wouldn’t threaten the existence of the human species. Superintelligence might. No one really knows if a machine will ever develop the general-purpose intelligence that would be required. But in 2014 Mr. Tegmark caught a glimpse of how it might. He was watching a DeepMind program as it learned to play Breakout, a ’70s arcade game. The object of the game is to break through a wall by bouncing a ball off it repeatedly, knocking out a brick with every hit. At first the AI was hopeless. But it quickly got better, and before long it devised a relentlessly effective technique that none of the humans at DeepMind had thought of. It went on to learn 49 different arcade games, including Pong and Space Invaders, beating its human testers on more than half of them. Obviously it’s a very long way from vintage arcade games to general intelligence, let alone consciousness. But if a computer program can teach itself to play games, it might be able to teach itself many other things as well—slowly at first, then faster and faster.
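For readers curious about the mechanics behind the Breakout anecdote: DeepMind’s system learned by reinforcement, adjusting its estimates of each move’s value based only on the score it received. The sketch below is a deliberately simplified, tabular Q-learning loop on a toy stand-in for an arcade game. It is not DeepMind’s code; the environment, hyperparameters and state encoding are all illustrative assumptions.

```python
# A minimal, illustrative Q-learning loop -- NOT DeepMind's actual system.
# The "environment" is a toy stand-in: by trial and error alone, the agent
# must learn which of three paddle moves keeps a rally going.
import random
from collections import defaultdict

ACTIONS = ["left", "stay", "right"]

def step(state, action):
    """Toy dynamics: reward the agent when the paddle tracks the ball."""
    ball_x, paddle_x = state
    paddle_x += {"left": -1, "stay": 0, "right": 1}[action]
    paddle_x = max(0, min(9, paddle_x))                     # stay on screen
    ball_x = max(0, min(9, ball_x + random.choice([-1, 0, 1])))
    reward = 1.0 if ball_x == paddle_x else 0.0             # rally kept alive
    return (ball_x, paddle_x), reward

q = defaultdict(float)                  # Q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration

state = (5, 5)
for _ in range(50_000):
    # Explore occasionally; otherwise act greedily on current estimates.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    next_state, reward = step(state, action)
    # Q-learning update: nudge the estimate toward the reward plus the
    # discounted value of the best move available in the next state.
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = next_state
# The greedy policy now tracks the ball -- behavior the program "taught
# itself" from reward alone, much as the review describes.
```

DeepMind’s actual system replaced the lookup table with a deep neural network reading raw screen pixels, which is what allowed a single architecture to master 49 different games.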
What would that mean for humans? Nobody knows, including—as he freely admits—Mr. Tegmark. Like horses after the invention of the internal-combustion engine, we might be kept on as show animals—although Mr. Tegmark’s observation that the U.S. horse population fell almost 90% between 1915 and 1960 is not exactly heartening. He presents a dozen or so other scenarios as well. Would an omniscient AI act as a “protector god,” maximizing human happiness while allowing us the illusion that we’re still in control? Would it decide we’re a threat and wipe us out?
It’s impossible to know that either. By declining either to champion or to refute most of these possible futures, Mr. Tegmark makes the whole exercise seem divorced from reality. But he means it as a challenge: Rather than our being told what is going to happen, he wants us to decide what we want to happen. This sounds quite noble, if a tad naive—until he invites us to debate the issue on a website that is chockablock with promo material for the book. There’s a place for self-promotion, just as there’s a place for killer-robot movies—but does either really contribute to our understanding of what humanity faces?
Mr. Rose is the author of “The Art of Immersion” and a senior fellow at the Columbia University School of the Arts.
