It’s difficult to imagine the potential of artificial intelligence without the mind going straight to the fiction. Who can resist musing about our future overlords? Will they be the soul-searching Replicants of Blade Runner? The envious Cylons of Battlestar Galactica? The Pinocchio-like Commander Data of Star Trek? The life-affirming software of Her? The life-ending hardware of The Terminator?
Which of the three well-worn tropes are we in for: the benevolent, the baneful, or the benign?
“I don’t think it’s going to be a Terminator scenario,” laughs AJung Moon, a PhD candidate in Mechanical Engineering and a founding member of UBC’s Open Roboethics initiative (ORi). Moon emphasizes that designers are keenly aware that future artificial intelligence will reflect the values of the programming community. That awareness is a focal point for ORi, a robotics think tank that brings together designers, users, policy makers, and industry professionals to examine the legal, social, and ethical issues of artificial intelligence. In 2012, the University of Miami School of Law hosted the inaugural “We Robot” conference to discuss how current laws inadequately address the rapid development of robots in the military and civilian spheres. Born out of that conversation, ORi has matured into a Wikipedia of sorts for the design and implementation of future AI.
ORi is just one of a growing body of organizations drawing on a diverse range of disciplines – biology, psychology, philosophy, engineering, economics, game theory, cognitive science, and more – in an international effort to navigate the tricky waters of this growing technology. Much of its mission involves conducting surveys to keep a finger on the pulse of public opinion. In November 2015, Moon represented ORi at the United Nations to present its findings on Lethal Autonomous Weapons Systems – independent killer robots designed for the military. The findings showed that an overwhelming majority of those surveyed worldwide believe weapons should always remain under the control of a human being.
Placing life-and-death decisions in the hands of machines without human oversight is the nightmare scenario of dystopian science fiction, and the current hot-button topic among those on the cutting edge of the real thing. “Even if you could program in the laws of war, a robot following them would not be compliant,” says Peter Danielson, a professor at UBC’s W. Maurice Young Centre for Applied Ethics. “You could never really do it because something like innocence is too complicated to be figured out by a robot.”
Figure: Sample responses from a survey on lethal autonomous weapons (LAWS)