Bill Gates Joins Musk and Hawking in Warning Against Artificial Intelligence

Microsoft co-founder Bill Gates joins entrepreneur Elon Musk and physicist Stephen Hawking with a warning about machine intelligence. (PHOTO CREDIT: Getty Images)

Bill Gates has a warning for humanity: Beware of artificial intelligence in the coming decades, before it’s too late.

Microsoft’s co-founder joins a list of science and industry notables, including famed physicist Stephen Hawking and tech entrepreneur Elon Musk, in calling out the potential threat from machines that can think for themselves. Gates shared his thoughts on AI on Wednesday in a Reddit “Ask Me Anything” thread, a Q&A session conducted live on the social news site that has also featured President Barack Obama and World Wide Web inventor Tim Berners-Lee.

“I am in the camp that is concerned about super intelligence,” Gates said in response to a question about the existential threat posed by AI. “First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern.”

Gates, who is co-chair of the Bill & Melinda Gates Foundation, isn’t the only one worried. Musk, the billionaire founder of SpaceX and CEO of electric car maker Tesla Motors, isn’t an AI expert himself, but he joined a growing list of hundreds of researchers and professors in the field who signed an open letter earlier this month proposing that proper safeguards be put in place so such intelligence can be researched and developed without humans losing control.

“I agree with Elon Musk and some others on this and don’t understand why some people are not concerned,” Gates said.

Fearing the worst
The reason they’re worried is that AI isn’t science fiction anymore. In stories and movies, AI is often presented as a good idea gone horribly wrong. In “The Matrix” trilogy, machines deem humanity a threat and enslave people in a virtual existence so they can feed off the electricity generated by the human body. When the Skynet computer system in “The Terminator” series becomes sentient, it wages a multiyear war using human-like robots designed to kill. HAL 9000, the sociopathic supercomputer from “2001: A Space Odyssey,” is now a cinematic icon — HAL’s robotic tone and malevolent quotes have become pop culture tropes.

Back in the real world, Apple’s voice-based personal assistant Siri may seem a little dumb now, but AI is getting smarter as researchers develop ways to let machines teach themselves and mine the deep trove of data produced by our many connected gadgets. IBM’s Watson supercomputer has moved on from besting “Jeopardy” contestants to conducting medical research and diagnosis, and researchers earlier this month detailed a new computer program that can beat anyone at poker. Is there a need to worry just yet? Probably not, but Gates and others are trying to imagine the worst.

Musk in October called AI development “summoning the demon,” and has invested in the space to keep his eye on it. Hawking, writing for The Independent in May 2014, also expressed his concerns. “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all,” Hawking wrote.


SOURCE: CNET | Nick Statt