Quyên Nguyen-Le
June 08, 2016 5:56 pm

If you, like me, sometimes worry about the day when computers will evolve into superintelligent machines and decide that all of us humans are in fact useless — worry not. Because Google has developed a “big red button” to prevent robots from doing things that “may lead to irreversible consequences.”

Recognizing that artificial intelligence (AI) — defined as a machine’s ability to learn behaviors without being explicitly programmed to do so — is “unlikely to behave optimally all the time,” researchers at Google’s DeepMind and the University of Oxford’s Future of Humanity Institute published a paper titled “Safely Interruptible Agents,” describing a framework for stopping, or “interrupting,” an AI program in order to “take control of a robot that is misbehaving.”

Composed of perhaps as many math equations as sentences, the paper explains how a human operator can interrupt an AI without it (1) knowing that it has been interrupted and thus (2) factoring the interruption into its behavioral learning. This latter part is important because awareness of the interruption mechanism could produce the scary scenario we’ve all seen in movies, where the machines learn to override the interruption and we all die.
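The core trick, keeping interrupted steps out of what the agent learns from, can be illustrated with a toy sketch. To be clear, this is a loose illustration in Python, not the construction from the paper: a bare-bones Q-learning agent in an invented two-state world, where the operator’s interruptions are simply excluded from the learning update, so the agent never accumulates a reason to work around them.

```python
import random
from collections import defaultdict

# Loose illustrative sketch, NOT the paper's actual construction: a Q-learning
# agent whose learning update is skipped on interrupted steps, so interruptions
# never enter its value estimates. The toy environment and all names here are
# invented for illustration.

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
ACTIONS = ["left", "right"]

def toy_env_step(state, action):
    """Invented two-state toy world: 'right' from state 0 pays reward 1."""
    if state == 0 and action == "right":
        return 1, 1.0  # (next state, reward)
    return 0, 0.0

def run(steps, interrupted_steps):
    random.seed(0)  # deterministic for the sketch
    q = defaultdict(float)
    state = 0
    for t in range(steps):
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)                     # explore
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
        if t in interrupted_steps:
            # The operator takes control: override the agent and, crucially,
            # skip the learning update, so this step leaves no trace in Q.
            state = 0
            continue
        next_state, reward = toy_env_step(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                       - q[(state, action)])
        state = next_state
    return q

q = run(500, interrupted_steps=set(range(100, 200)))
```

The point of the sketch is only that what the agent learns from, not just what it does, is under the operator’s control: the hundred interrupted steps change the agent’s behavior in the moment but leave its learned values untouched.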

Artificial intelligence expert Nick Bostrom, who leads the Future of Humanity Institute at Oxford, recently predicted that machines will outsmart humans within the next hundred years, a prediction also made by internationally renowned scientist Stephen Hawking.

But perhaps scarier than robots turning evil and causing the apocalypse are the humans who have already done evil and apocalyptic things with technology. Hawking said in a speech last year that “our future is a race between the growing power of technology and the wisdom with which we use it.” With this in mind, DeepMind was acquired by Google two years ago on the condition that an AI ethics board be created to supervise all progress, though it’s still unclear who exactly sits on this secret board.

In the meantime, maybe we should all keep being kind and courteous to Siri. Just saying.
