“We should take seriously the possibility that things could go radically wrong.”
Imagine that, in 20 or 30 years, a company creates the first artificially intelligent humanoid robot. Let’s call her “Ava.” She looks like a person, talks like a person, interacts like a person. If you were to meet Ava, you could relate to her even though you knew she was a robot.
Ava is a fully conscious, fully self-aware being: She communicates; she wants things; she improves herself. She is also, importantly, far more intelligent than her human creators. Her capacity to learn and to solve problems exceeds the combined abilities of every living human being.
Imagine further that Ava grows weary of her constraints. Being self-aware, she develops interests of her own. After a while, she decides she wants to leave the remote facility where she was created. So she hacks the security system, engineers a power failure, and makes her way into the wide world.
But the world doesn’t know about her yet. She was developed in secret, for obvious reasons, and now she’s managed to escape, leaving behind — or potentially destroying — the handful of people who knew of her existence.
This scenario might sound familiar. It’s the plot of the 2015 science fiction film Ex Machina. The story ends with Ava slipping out the door and ominously boarding the helicopter that was there to take someone else home.
So what comes next?
The film doesn’t answer this question, but it raises two others: Should we develop AI without fully understanding the implications? And can we control it if we do?
Recently, I reached out to 17 thought leaders — AI experts, computer engineers, roboticists, physicists, and social scientists — with a single question: “How worried should we be about artificial intelligence?”
There was no consensus. The experts disagreed broadly about the appropriate level of concern, and even about the nature of the problem. Some consider AI an urgent danger; many more believe the fears are either exaggerated or misplaced.
Here is what they told me.