Elon Musk-backed Group Gives $7 Million for Research Into Possible Negative Effects of Artificial Intelligence


It’s not just the sci-fi community envisioning a world where machines take over. It’s a concern among some prominent visionaries, including a group that just shelled out nearly $7 million for research into potential ill effects of artificial intelligence.

The Future of Life Institute has awarded the money to 37 research teams that will study a range of topics related to coming advances in artificial intelligence, or AI, the organization announced on Wednesday. The funds come partly from the $10 million that famed tech entrepreneur Elon Musk gave the group in January to fund research into the risks associated with AI.

AI is a term for the ability of a machine, computer, or system to exhibit human-like intelligence. The term “artificial intelligence” was coined by John McCarthy, one of the field’s founders, who is credited with first using it in 1955.

Since then, there’s been a push to see whether artificial intelligence can eventually exceed the intelligence of humans. Indeed, some companies have come close: IBM’s Watson, for instance, has become a prime example of what artificial intelligence can deliver, having learned history, facts, and other information, and even won a match on the popular quiz show “Jeopardy.”

Meanwhile, some industry watchers have grown increasingly concerned about how far AI could go and the dangers it might pose. They caution that it is critical to ensure AI can be controlled before it becomes too smart.

“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand,” famed physicist Stephen Hawking said in an article he co-wrote last year for The Independent. “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

Microsoft co-founder Bill Gates has also sounded off on AI, saying that he doesn’t “understand why some people are not concerned” about the possibility of super-intelligent machines.

Musk said last August that AI could be “potentially more dangerous than nukes” and followed that in October by saying that AI may require “regulatory oversight” so the world doesn’t “do something very foolish.”


SOURCE: CNET, Don Reisinger