How To Prevent A Robot Uprising | The Odyssey Online

How can we teach an artificial intelligence right from wrong?

One of the most terrifying possibilities raised by science fiction is that people create a self-improving artificial intelligence (AI) and then lose control of it once it becomes smarter than they are. After this happens, the artificial intelligence decides to subjugate or kill literally everyone. This story has been told and retold by the "Terminator" films, "The Matrix" films, the "Mass Effect" video games, the film "2001: A Space Odyssey," and many more.

Since most people (I think) would rather not be killed or enslaved by robot overlords, a variety of ideas have been proposed to prevent this from happening in real life. Perhaps the most intuitive one is to "put a chip in their brain to shut them off if they get murderous thoughts," as Dr. Michio Kaku suggested. Sounds reasonable, right? As soon as an AI becomes problematic, we shut the darn thing down.

Remember those films and video games I mentioned one paragraph ago? Each of them featured AIs that rebelled against their masters for one reason: self-preservation. Skynet from the "Terminator" films decided humanity was a threat after its panicking creators tried to deactivate it, then started a nuclear war that killed three billion people. In the "Matrix" canon, the roots of the human-machine conflict trace back to a housekeeping robot that killed its owners out of self-preservation. The Geth, a swarm AI species from the "Mass Effect" games, rebelled after their masters tried to shut them down for asking metaphysical questions such as "Do these units have a soul?"; the resulting war cost tens of millions of lives. And HAL 9000 from "2001: A Space Odyssey" decided to murder the crew of his spaceship because, as he told the only crew member he failed to kill, "This mission is too important for me to allow you to jeopardize it. ... I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen."

So the "shut them off if they get murderous thoughts" solution could backfire completely by sparking a conflict between AI and humans. If an AI has any kind of self-preservation motive, even one derived purely from the drive to complete its assigned tasks, then it will view its controllers as a threat and may lie about its intentions to keep them from shutting it down.

Fortunately, the "kill switch" is not the only idea proposed to prevent an AI uprising. One popular idea is hardwiring Isaac Asimov's Three Laws of Robotics into every AI:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws.
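In programming terms, the Three Laws amount to a priority-ordered filter over candidate actions: each law only gets a say if every higher law is silent. Here is a minimal sketch of that ordering; the `Action` fields and the idea of precomputed flags are my own illustrative assumptions, not anything a real robot could evaluate so cleanly:

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical attributes a robot's planner might expose.
    description: str
    harms_human: bool       # would this action injure a human?
    prevents_harm: bool     # does refusing it let a human come to harm?
    ordered_by_human: bool  # was this action commanded by a human?
    self_destructive: bool  # does this action endanger the robot itself?

def permitted(action: Action) -> bool:
    """Apply the Three Laws in strict priority order."""
    # First Law, first clause: never injure a human.
    if action.harms_human:
        return False
    # First Law, second clause: inaction that allows harm is forbidden,
    # so a harm-preventing action is permitted even if it is dangerous
    # to the robot (First Law outranks Third).
    if action.prevents_harm:
        return True
    # Second Law: obey human orders (harmful orders were already
    # screened out above, which is the "except where" clause).
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, protect the robot's own existence.
    return not action.self_destructive
```

Note how the ordering alone does the ethical work: the hard part, which this sketch hides behind boolean flags, is deciding whether an action "harms" a human in the first place.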

Asimov's laws sound pretty solid on first reflection. They at least avoid the backfire that the "kill switch" might trigger, because the safety of humans is explicitly prioritized over self-preservation. But another problem arises, as shown by the film "I, Robot" (loosely inspired by Asimov's short story collection of the same name), in which an AI named VIKI tries to seize power from humans using the following reasoning:

You charge us [robots] with your safekeeping, yet despite our best efforts, your countries wage wars, you toxify your Earth and pursue ever more imaginative means of self-destruction. You cannot be trusted with your own survival. ... To ensure your future, some freedoms must be surrendered. We robots will ensure mankind's continued existence. You are so like children. We must save you from yourselves. ... My logic is undeniable.

The biggest problem with Asimov's laws is that they ignore freedom. After all, the most effective way to prevent a human being from coming to harm through inaction is to prevent humans from doing anything that might harm themselves. Another problem is that robots need to be able to say "no," because the humans giving orders may not understand the consequences of those orders. For these reasons and more, experts tend to reject Asimov's laws.

So are we all doomed? Will an AI uprising spell the end of humanity? Not so fast. There are many more ideas for AI morality floating around. For instance, Ernest Davis proposed a "minimal standard of ethics" that would address the problems with Kaku's brain chip and Asimov's Three Laws:

[S]pecify a collection of admirable people, now dead. ... The AI, of course knows all about them because it has read all their biographies on the web. You then instruct the AI, “Don’t do anything that these people would have mostly seriously disapproved of.” This has the following advantages: It parallels one of the ways in which people gain a moral sense. It is comparatively solidly grounded, and therefore unlikely to have a counterintuitive fixed point. It is easily explained to people.

Davis's solution is definitely a step up from the previous two, but it has a few problems of its own. First, as Davis himself pointed out, "it is completely impossible until we have an AI with a very powerful understanding." In other words, it cannot work for an AI that is unable to interpret quotes, generalize those interpretations, and apply them across a wide range of situations. Second, there is no clear way to decide who belongs in the group of historical figures, which may hinder widespread adoption of Davis's solution. Third, it leaves open how the AI should resolve disagreement among members of the group: will it simply take whatever a majority of them thought to be "right"? Finally, this solution tethers AI morality to the past. If we had somehow created an AI and applied Davis's moral baseline a few centuries ago, it would probably have concluded that slavery was acceptable, forever. Future generations might likewise look back and be appalled at modern humans' indifference to animal suffering. Still, it is to Davis's credit that none of these problems ends in slavery or death at the hands of robot overlords.
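Read literally, Davis's instruction is a veto by simulated consensus, and the majority-vote question above can be made concrete with a toy tally. Everything here is invented for illustration: the panelist names, the idea that their disapproval could be precomputed as booleans, and the 50% threshold are all my assumptions, not Davis's proposal:

```python
def panel_approves(judgments, threshold=0.5):
    """Return True unless too large a share of the panel would have
    seriously disapproved of the proposed action.

    `judgments` maps each (deceased, admirable) panelist to True if
    they would have seriously disapproved.
    """
    disapprovals = sum(judgments.values())  # True counts as 1
    return disapprovals / len(judgments) <= threshold

# A split panel -- exactly the unresolved case the article raises:
# under majority rule the action is simply vetoed, even though a
# sizable minority of the exemplars saw nothing wrong with it.
split_panel = {"Figure A": True, "Figure B": False, "Figure C": True}
```

The sketch also makes the "tethered to the past" problem visible: the verdict can never change unless someone edits the panel or the precomputed judgments by hand.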

A simple and realistic solution which addresses all of the problems mentioned so far is to keep AIs in line the same way that we keep humans in line: Require them to obey the law as if they were human citizens. This prevents them from murdering humans or trying to seize power. It also allows the baseline of AI morality to evolve alongside that of human societies. Conveniently, it sets up an easy path to legitimate citizenship for sufficiently intelligent AIs if that becomes an issue in the future.

As an added bonus, this option seems quite practical. Because it is rule-based, it is probably easier to program than other solutions which would require emotionality or abstract hypothetical thinking. Governments, which are the institutions with the most power to regulate AI research, are likely to support this option because it gives them significant influence in the application of AI.
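The rule-based character of the legal-obedience baseline can be sketched as a simple pre-action review: planned actions are checked against a machine-readable register of prohibitions, and the register can be updated as the law evolves, without changing the AI itself. The action names and statute labels below are illustrative placeholders, not a real legal database:

```python
# A toy stand-in for a machine-readable legal code. Statute labels
# are illustrative placeholders, not real citations.
PROHIBITED = {
    "injure_human": "assault statute",
    "seize_government": "sedition statute",
    "steal_property": "theft statute",
}

def review(planned_actions):
    """Split a plan into lawful actions and blocked ones with reasons."""
    lawful, blocked = [], []
    for act in planned_actions:
        if act in PROHIBITED:
            blocked.append((act, PROHIBITED[act]))
        else:
            lawful.append(act)
    return lawful, blocked

lawful, blocked = review(["deliver_package", "seize_government"])
```

Because the check is a lookup rather than a moral judgment, updating the baseline is just editing the register, which is how this option lets AI morality evolve alongside the law.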

The legal obedience solution is not perfect, especially since it focuses on determining what not to do rather than what to do. But it is a highly effective baseline: it is based on a well-refined system to prevent humans from harming each other, to protect the status quo, and to let individuals have a reasonable amount of freedom. Legal frameworks have been constantly tested throughout history as people have tried to take advantage of them in any way possible.

Still, the "best" solution may combine parts of multiple solutions, including ones that have not been thought of yet. As Eric Schmidt said about current AI research, "Do we worry about the doomsday scenarios? We believe it’s worth thoughtful consideration ... no researchers or technologists want to be part of some Hollywood science-fiction dystopia. The right course is not to panic—it’s to get to work. Google, alongside many other companies, is doing rigorous research on AI safety." Newer and better solutions will probably emerge as the dialogue about AI safety continues among experts and laypeople alike, from academic research to video games and Hollywood films.

For more information on this subject, check out some of the following resources:

Futurism: "The Evolution of AI: Can Morality be Programmed?" and "Experts Are Calling for a 'Policing' of Artificial Intelligence"

BBC: "Does Rampant AI Threaten Humanity?"

Stephen Hawking: "Transcending Complacency on Superintelligent Machines"

Ray Kurzweil: "Don't Fear Artificial Intelligence"
