John Stonestreet and Roberto Rivera: Artificial Intelligence and Our Moral Obligation

The idea of sentient artificial intelligence, complete with a personality, has long been a central theme of science fiction. The Oscar-winning 2013 movie “Her,” starring Joaquin Phoenix and the voice of Scarlett Johansson, told the story of a relationship between a lonely man and an AI named Samantha.

On the small screen, sentient robots are the stars of HBO’s “Westworld.” The series’ protagonists, Dolores and Maeve, are scarcely distinguishable, if at all, from the humans who have repeatedly brutalized them since their creation, not least in their own capacity for brutality. The biggest difference is that the audience is meant to excuse, or at least understand, the robots’ brutality.

That touches on another recurring theme of science fiction: What are our ethical obligations, if any, to the intelligent machines we build?

Because what’s depicted in books and on screen has been light years beyond the current realities of artificial intelligence, such philosophical questions have remained the stuff of science fiction.

Consider, for example, that seventy years ago, mathematician Alan Turing proposed what’s now called the “Turing Test.” Instead of answering the question “Can a machine think?” which presupposes an agreed-upon definition of “think,” the Turing Test seeks to determine how well a machine can imitate a person. For example, could a machine fool someone asking it a series of unscripted questions into thinking it was a person?

Turing thought that a machine would eventually pass his test. Seventy years later, none has, and even the most optimistic estimates say it will take at least another decade.

That’s why a recent article in Aeon, which suggested that artificial intelligence should receive the same ethical and moral considerations afforded to laboratory animals, seems, to put it politely, premature.

One of the principal reasons we feel a moral obligation to the animals in our care is that they can suffer, feel pain, and let us know it.

A machine can do none of these things. If a person kicks a dog, it will yelp, and the person’s cruelty will be clear. Smash your MacBook because you’re upset with Siri, and the only one suffering will be you, since those things cost over $1,000.

SOURCE: Christian Post, John Stonestreet and Roberto Rivera