Stephen Hawking’s final comments on Reddit will make you stop what you’re doing
People all over the world are mourning the death of legendary physicist Stephen Hawking, who passed away at his home in Cambridge on Wednesday, March 14th. The world-renowned scientist was hailed for his research on black holes and relativity, and for helping us better understand the universe and our place in it.
Hawking defied the odds, outliving his initial prognosis by more than half a century. Following his diagnosis of amyotrophic lateral sclerosis (ALS) at the age of 21, doctors told him it was unlikely he would live more than a couple of years. Yet Hawking was 76 when he died, and had continued working long after his life-changing diagnosis, furthering his iconic research on black holes and sharing his awe-inspiring theories on the cosmos.
"We are deeply saddened that our beloved father passed away today," his children, Lucy, Robert and Tim, said in a statement. "He was a great scientist and an extraordinary man whose work and legacy will live on for many years. His courage and persistence with his brilliance and humor inspired people across the world. He once said, 'It would not be much of a universe if it wasn’t home to the people you love.' We will miss him forever."
In the wake of Stephen Hawking’s death, tributes from celebrities like Eddie Redmayne — who won an Oscar for his portrayal of Hawking in 2014’s The Theory of Everything — and shares of his most inspirational quotes quickly began popping up on social media.
But one thing in particular caught our eye: Mashable shared Hawking’s last comments on Reddit, posted during his 2015 AMA. The discussion focused on artificial intelligence, and Hawking’s answers are just another example of his amazing intellect.
Here’s some of what Stephen Hawking had to say about the risks of AI.
"The real risk with AI isn’t malice but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants."
One Reddit user asked: "How smart do you think the human race can make AI, while ensuring that it doesn’t surpass them in intelligence?"
"It’s clearly possible for something to acquire higher intelligence than its ancestors: We evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents. The line you ask about is where an AI becomes better than humans at AI design, so that it can recursively improve itself without human help. If this happens, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails."
Another Reddit user wanted to know what Stephen Hawking thought about the possibility of AI becoming a threat to humans.
"An AI that has been designed rather than evolved can, in principle, have any drives or goals. However, as emphasized by Steve Omohundro, an extremely intelligent future AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal. This can cause problems for humans whose resources get taken away."
Head on over to Reddit to read the AMA in full. Rest in peace, Dr. Stephen Hawking.