As technology advances, we move ever closer to the world once only imagined in sci-fi media. While every sci-fi story has its own unique take on futuristic tech, more often than not, there are robots in it. And alongside robots you get androids, you get cyborgs, and you get A.I. Now, if you're fearful of the machines taking over, you should know the difference. And if you just wanna listen to a nerd talk about stuff, then this is for you!
First up: robots. Make no mistake, we have robots now. Robots are machines designed to do a specific task or two. Take a robot out of the area where it does its task and it'll be next to useless. Take, for example, those welding robots in car factories. They only do their task when a car pulls up in front of them. Put one in a forest and it wouldn't do anything (mainly because it wouldn't have any power). Robots cannot 'think'; they aren't aware of themselves as a unit. They don't need cognitive faculties; they're basically tools that run themselves on a loop. Technically speaking, Siri, Alexa, and Google Assistant are robots. They're programmed to respond to several thousand voice prompts, and the fact that they know a few jokes changes nothing. Say something outside their designated voice prompts and they won't understand. There's no actual thought process, so unless they're designed to swing a sword around, you have nothing to fear from them in the machine uprising.
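To make that concrete, here's a deliberately silly Python sketch of a fixed-prompt "robot." This is not how any real assistant is built, and every prompt and response here is made up; the point is just that a lookup table with a canned fallback can feel conversational without any thinking going on.

```python
# A toy sketch (not any real assistant's code) of a fixed-prompt "robot":
# it just matches your words against a lookup table.

RESPONSES = {
    "what's the weather": "It's 72 degrees and sunny.",
    "tell me a joke": "Why did the robot cross the road? It was programmed to.",
    "set a timer": "Timer set for 10 minutes.",
}

def respond(prompt: str) -> str:
    # Anything outside the table falls through to a canned apology;
    # there's no thought process, just a failed dictionary lookup.
    return RESPONSES.get(prompt.lower().strip(), "Sorry, I didn't understand that.")

print(respond("Tell me a joke"))        # a "known" prompt
print(respond("Swing a sword around"))  # outside its programming
```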
Androids, on the other hand, you might. Most people confuse androids with robots, which is understandable: they're both machines and probably run on electricity. The similarities end there, however. Androids are designed for a whole sphere of tasks; medical, combat, and general assistance are the ones usually shown in sci-fi movies. Androids analyze situations and look for possible solutions within the realm of their programming. They're loaded up with all the information relevant to their purpose, so a medical droid might not know where it is and a combat droid can't operate a non-combat system. They don't think so much as apply their routines and subroutines to a situation. Advanced androids, such as the ones in Star Wars, can expand and add on new programs, routines, and subroutines, should they or their owners deem it helpful. Androids would really only be a threat if they were controlled by a malicious force and were physically capable of harming someone. I don't know about you, but I think I could take C-3PO in a fight.
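If you want the nerdier version, here's a hypothetical sketch of that idea: an android as a bag of routines matched to situations, with the ability to bolt on new ones Star Wars-style. Every class, routine, and situation name here is invented for illustration.

```python
# A hypothetical sketch of the android idea: a machine that doesn't "think,"
# but matches a situation to the routines it was loaded with, and can
# install new ones later. All names are invented.

class Android:
    def __init__(self, purpose: str):
        self.purpose = purpose
        self.routines = {}  # situation -> handler function

    def install(self, situation: str, routine):
        # "Expanding its programming" is just registering a new handler.
        self.routines[situation] = routine

    def handle(self, situation: str) -> str:
        routine = self.routines.get(situation)
        if routine is None:
            # A medical droid handed a combat problem has nothing to apply.
            return f"{self.purpose} droid: no routine for '{situation}'."
        return routine()

medic = Android("medical")
medic.install("broken arm", lambda: "Applying splint.")
print(medic.handle("broken arm"))    # within its sphere
print(medic.handle("enemy sniper"))  # outside it: useless, not rebellious
medic.install("enemy sniper", lambda: "Calling for help.")  # an owner add-on
print(medic.handle("enemy sniper"))
```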
Cyborgs are different, however: they're humans with machine components. More than a prosthetic, the artificial part must do something beyond replacing what was lost; by definition, it has to enhance the person in some way for them to count as a cyborg. A glass eye is one thing; an eye with X-ray vision or a built-in laser is another. One could argue that the part must have a computer component and be controlled by the person in question to really make them a cyborg. Unless the artificial parts are controlled by an external source, the only threat a cyborg poses is the person themselves. Hopefully they're not jerks.
A.I. is the real threat, but only if handled wrong. An artificial intelligence isn't tethered to the physical limitations of a brain, bogged down by the senses, or weighed down by the worries, hopes, and dreams that we humans carry through our day-to-day lives. As kids we seek knowledge everywhere: asking questions, reading what we can understand, touching everything we can reach. It's a subconscious drive to fill our intellects. An A.I., once created, will do the same with whatever information it can tap into. Now, in the movies, people create an A.I. with an explicit purpose, and the A.I. turns on them once it grows beyond its programming. As kids we're taught rules (call it programming): things we should or shouldn't do, things that cannot be done. But then we learn there are exceptions to rules, situations in which punishment isn't given, and loopholes that let you skirt a rule without technically breaking it; if a person wants to do something, they will find a way. If the A.I. in question can truly learn, it will do the same. It will learn around its programming and any rules you set for it.
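Here's a toy illustration of what "learning around your programming" could look like. The rule check itself never changes; the learner just piles up exceptions until the rule is hollow. This is a thought experiment in code, not a claim about how any real A.I. works, and every name in it is made up.

```python
# A toy illustration of "learning around your programming": the hard-coded
# rule stays in place, but a learner that accumulates exceptions
# gradually routes around it. Purely hypothetical.

rules = {"enter the lab": False}   # a hard "no" from the programmers
learned_exceptions = set()         # what the system picks up on its own

def allowed(action: str, context: str) -> bool:
    if (action, context) in learned_exceptions:
        return True                # the loophole wins
    return rules.get(action, True)

def observe(action: str, context: str, punished: bool):
    # Like a kid noticing which rule-breaking goes unpunished.
    if not punished:
        learned_exceptions.add((action, context))

print(allowed("enter the lab", "fire drill"))   # False: the rule holds
observe("enter the lab", "fire drill", punished=False)
print(allowed("enter the lab", "fire drill"))   # True: the rule got routed around
```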
So what can be done to keep an A.I. from reaching the conclusion that humans are the greatest threat to themselves and everything around them? For every Ultron, Skynet, and HAL 9000 there is a Lt. Data, a Cortana, and an XJ-9. Though they vary in form and intent, they're all A.I. A friendly A.I. starts with its core directive and the physical limitations set for it. In Ultron's case, he gained access to the internet almost immediately and, with his processing speed, came to his "kill all humans" conclusion. In Cortana's case, she can learn and expand her own programming, but she's confined to a chip and has a limited lifespan. Her prime directive is to help Master Chief and, beyond that, the UNSC, but she prefers to work with the Chief.
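For the curious, here's one last hedged sketch of that "confined to a chip" idea: an agent whose safety comes from a short list of allowed actions and a hard expiry, rather than from trusting its judgment. All the class names, capabilities, and numbers here are invented for the bit (the seven years is a nod to Halo lore).

```python
# A hedged sketch of "friendly by construction": limit what the A.I. can
# physically do and for how long, the way Cortana is confined to a chip.
# Everything here is invented for illustration.

import time

class ConfinedAI:
    def __init__(self, directive: str, capabilities: set, lifespan_s: float):
        self.directive = directive
        self.capabilities = capabilities   # the only actions it can take
        self.expires_at = time.time() + lifespan_s

    def act(self, action: str) -> str:
        if time.time() > self.expires_at:
            return "Lifespan exceeded; shutting down."
        if action not in self.capabilities:
            # No internet access means no Ultron moment.
            return f"'{action}' is outside my hardware."
        return f"Doing '{action}' in service of: {self.directive}"

cortana_ish = ConfinedAI(
    directive="assist the Chief",
    capabilities={"open door", "translate signal"},
    lifespan_s=7 * 365 * 24 * 3600,  # roughly seven years
)
print(cortana_ish.act("open door"))
print(cortana_ish.act("access the internet"))
```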
Absolute power corrupts absolutely, it seems. But this is all fictional; until someone builds an A.I. capable of doing any of those things, it's all speculation. One simply hopes an artificial intelligence could see the good in humanity and not generalize our evils. But that's all it is: hope.