Stuart Armstrong: Will Artificial Intelligence Destroy Humanity?

Dr. Stuart Armstrong, James Martin Research Fellow at the Future of Humanity Institute, Oxford University
October 02, 2015

Unless we have clear evidence that the artificially intelligent beings we create pose no threat, we need to seriously consider the risk.

Will a Democrat or a Republican win the 2040 U.S. election? Will Google be remembered positively or negatively in 2090? Will humans create new universes by 3002?

In all those questions, it's clear that the correct answer is something along the lines of, "I don't know, it could go either way, but here are some arguments that could make one option more likely than the other." Yet on the issue of artificial intelligence (AI), people are expected to come down firmly on one side: AI will either help humanity or destroy it. Even those who take a nuanced position are often pigeon-holed into one camp or the other. "There might be some risk from AI" often gets caricatured as "AI will certainly kill us."

Let’s start with the fact that we have never built AI. We lack a theoretical model of how to build AI — there is nothing around today where we can say, "Just add (a realistic amount of) computing power, and we have AI." We have several promising approaches, but the field is littered with promising approaches that have failed to pan out.

The field is also littered with overconfident predictions of possibility and impossibility, as analyzed in my paper "How We’re Predicting AI — or Failing To." What’s striking about these predictions is how reasonable, sensible and evidence-grounded they are. Even the predictions of the original Dartmouth conference in 1956 — the one that was essentially predicting AI over "a summer" — were solid, grounded and presented by the top experts in the field, all of whom had practical experience in making machines do exactly what they wanted them to. To have argued against them, at the time, would have been the height of arrogance.

So what can we conclude from this long history of AI prognostications? That predictions about the imminence and capability of AI are highly speculative, yet the people making them are almost always extremely overconfident. Therefore, on the question of AI safety, the correct position must be uncertainty.

Humanity's position on Earth is a function of our intelligence (including our social capacity, which is also a form of intelligence). And we are talking about potentially creating beings whose intelligence rivals or exceeds our own. Beings that might be copied at will, run at super-human speed, analyze and improve their own algorithms, and mesh seamlessly with the whole computing infrastructure we have created for ourselves. They are unlike any previous "dumb" technology, so they cannot be analyzed in the same way. Humans will be the initial programmers of these entities, but we understand our current AI projects imperfectly, and it is not clear whether human values can be easily captured in powerful computers.

Given all the possibilities, and the great uncertainty, it would be absurd to put a probability higher than 90 percent on AI being a danger to humanity. However, it would be equally absurd to put a probability lower than 10 percent on AI posing a serious threat.

For me, a 10 percent chance of a deadly threat is ample justification for working seriously to address the risk. But maybe that argument is wrong; how could we go about disproving it? We'd first need to establish a probability threshold below which the threat could reasonably be dismissed. For instance, a 0.001 percent chance of human extinction still corresponds to an expected 70,000 deaths. Given the uncertainties involved, arguing AI risk down below 0.001 percent would require a very significant amount of argument and evidence. This is especially so given human biases at low probabilities: people (including experts) who claim to have only a 0.001 probability of being wrong turn out to be wrong about 15 percent of the time.
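
To make that expected-value arithmetic explicit, taking a world population of roughly 7 billion as a round, assumed figure:

$$
\text{expected deaths} \;=\; p \times N \;\approx\; 0.00001 \times 7{,}000{,}000{,}000 \;=\; 70{,}000.
$$

Even a probability that sounds negligible carries a large cost in expectation.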

"We can confidently say that AI won't pose a threat to humanity" is a very strong statement, needing very strong justification — a strong justification that has been lacking so far.

(Top image: Courtesy of Thinkstock)

Dr. Stuart Armstrong is the James Martin Research Fellow at the Future of Humanity Institute, Oxford University.

All views expressed are those of the author.