Last year, a curious nonfiction book became a Times best-seller: a dense meditation on artificial intelligence by the philosopher Nick Bostrom, who holds an appointment at Oxford. Titled “Superintelligence: Paths, Dangers, Strategies,” it argues that true artificial intelligence, if it is realized, might pose a danger that exceeds every previous threat from technology—even nuclear weapons—and that if its development is not managed carefully, humanity risks engineering its own extinction. Central to this concern is the prospect of an “intelligence explosion,” a speculative event in which an A.I. gains the ability to improve itself, and in short order exceeds the intellectual potential of the human brain by many orders of magnitude.
Such a system would effectively be a new kind of life, and Bostrom’s fears, in their simplest form, are evolutionary: that humanity will unexpectedly become outmatched by a smarter competitor. He sometimes notes, as a point of comparison, the trajectories of people and gorillas: both primates, but with one species dominating the planet and the other at the edge of annihilation. “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb,” he concludes. “We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.”
At the age of forty-two, Bostrom has become a philosopher of remarkable influence. “Superintelligence” is only his most visible response to ideas that he encountered two decades ago, when he became a transhumanist, joining a fractious quasi-utopian movement united by the expectation that accelerating advances in technology will result in drastic changes—social, economic, and, most strikingly, biological—which could converge at a moment of epochal transformation known as the Singularity. Bostrom is arguably the leading transhumanist philosopher today, a position achieved by bringing order to ideas that might otherwise never have survived outside the half-crazy Internet ecosystem where they formed. He rarely makes concrete predictions, but, by relying on probability theory, he seeks to tease out insights where insights seem impossible.
Some of Bostrom’s cleverest arguments resemble Swiss Army knives: they are simple, toylike, a pleasure to consider, with colorful exteriors and precisely calibrated mechanics. He once cast a moral case for medically engineered immortality as a fable about a kingdom terrorized by an insatiable dragon. A reformulation of Pascal’s wager became a dialogue between the seventeenth-century philosopher and a mugger from another dimension.
“Superintelligence” is not intended as a treatise of deep originality; Bostrom’s contribution is to impose the rigors of analytic philosophy on a messy corpus of ideas that emerged at the margins of academic thought. Perhaps because the field of A.I. has recently made striking advances—with everyday technology seeming, more and more, to exhibit something like intelligent reasoning—the book has struck a nerve. Bostrom’s supporters compare it to “Silent Spring.” In moral philosophy, Peter Singer and Derek Parfit have received it as a work of importance, and distinguished physicists such as Stephen Hawking have echoed its warning. Within the high caste of Silicon Valley, Bostrom has acquired the status of a sage. Elon Musk, the C.E.O. of Tesla, promoted the book on Twitter, noting, “We need to be super careful with AI. Potentially more dangerous than nukes.” Bill Gates recommended it, too. Suggesting that an A.I. could threaten humanity, he said, during a talk in China, “When people say it’s not a problem, then I really start to get to a point of disagreement. How can they not see what a huge challenge this is?”
The people who say that artificial intelligence is not a problem tend to work in artificial intelligence. Many prominent researchers regard Bostrom’s basic views as implausible, or as a distraction from the near-term benefits and moral dilemmas posed by the technology—not least because A.I. systems today can barely guide robots to open doors. Last summer, Oren Etzioni, the C.E.O. of the Allen Institute for Artificial Intelligence, in Seattle, referred to the fear of machine intelligence as a “Frankenstein complex.” Another leading researcher declared, “I don’t worry about that for the same reason I don’t worry about overpopulation on Mars.” Jaron Lanier, a Microsoft researcher and tech commentator, told me that even framing the differing views as a debate was a mistake. “This is not an honest conversation,” he said. “People think it is about technology, but it is really about religion, people turning to metaphysics to cope with the human condition. They have a way of dramatizing their beliefs with an end-of-days scenario—and one does not want to criticize other people’s religions.”
Because the argument has played out on blogs and in the popular press, beyond the ambit of peer-reviewed journals, the two sides have appeared in caricature, with headlines suggesting either doom (“Will Super-intelligent Machines Kill Us All?”) or a reprieve from doom (“Artificial intelligence ‘will not end human race’”). Even the most grounded version of the debate occupies philosophical terrain where little is clear. But, Bostrom argues, if artificial intelligence can be achieved, it would be an event of unparalleled consequence—perhaps even a rupture in the fabric of history. A bit of long-range forethought might be a moral obligation to our own species.