With technology advancing at an ever-faster pace (particularly artificial intelligence), a question comes to mind: what is true sentience, and can an android be said to be alive if it possesses it?
Last night, instead of doing homework, my roommate and I had a movie marathon. We started with the classic Blade Runner (based on the 1968 novel Do Androids Dream of Electric Sheep?), moved on to Ex Machina, and finished with What Dreams May Come (based on the Richard Matheson book of the same name). While the movies were chosen at random, they provided the perfect path for thought regarding sentience and consciousness.
For those unaware, the first movie takes place in a universe in which androids (robots with human appearances) have been built and perfected. These androids were built to perform labor too dangerous for humans to undertake; in essence, they were forced into slavery. The movie follows Harrison Ford as he "retires" (or murders/kills, as I will address later) a group of six androids who are running and fighting for their freedom. The second movie is about a man brought to a remote location to perform a Turing Test. (The Turing Test is a test given to a computer to determine whether it is a true AI, artificial intelligence being defined as any process that would normally require human intelligence, such as visual perception, speech recognition, decision-making, or translation between languages. A human converses with the computer, questions passing between the two of them; if the human believes it is speaking to another human, the computer is said to have passed.) While a dramatization, the movie echoes the retirements of the first film, and the implications that come with snuffing out sentience. The last movie on this list is about a man who loses his two children to a car accident, then four years later loses his own life, leaving his wife childless and widowed. Early in the movie he struggles with the realization that death does not mean your consciousness disappears from the universe, something I will also explore in this article.
Ex Machina (a reference to the Latin phrase deus ex machina, "god from the machine," a literary device in which the characters of a piece are saved by the unexpected appearance of an idea, device, or person who saves their proverbial bacon) brought to my attention a fantastic idea in AI theory: "Mary and the Black and White Room," or sometimes "Black and White Mary." The idea centers around a scientist, Mary. Mary knows everything there is to know about color: the wavelengths of each color, the number of colors that exist or could possibly exist, the slightest variances between colors. She knows all of these things, and yet she lives in a black and white room, perceiving even herself only in black and white. When Mary is one day released from her room, she looks at a clear blue sky, and for the first time in her life learns what it feels like to perceive the color blue. This is the difference between man and machine: a machine might understand how to play chess, it might be able to replicate any image created by an artist, but it doesn't know the feeling of winning a game of chess, the heartbreak of losing one, or what it is to be moved by the art it creates.
Computers deal in black and white (i.e., 1s and 0s). They understand all of the information we pump into them and understand how to process that information, but they lack the imagination to take these inputs further. Beyond that, they lack empathy. Is that all humanity is, though? Can sentience and consciousness be linked solely to empathy? Is empathy the "God Particle" of human consciousness? What about people unable to empathize with other humans? What about serial killers who kill with no regard for human life? They are monsters, certainly, but they are able to feel, to emote, so empathy can't be humanity's singular qualification. Mary can understand color, and if it were explained she might even be able to understand why people find certain colors more pleasing than others, but without the ability to experience firsthand the emotional weight those colors carry, does she really understand color at all? Is true sentience the ability not only to understand outside stimuli but also to truly perceive them for yourself, to form your own opinions as opposed to generating them from data or explanations given to you?
This is a large part of the human experience, but life is so much more complicated than just that. The luxury and curse of being self-aware plays heavily into humanity as well. Gorillas are intelligent, no one can debate that: they can speak to humans using sign language, something even I am incapable of. However, did you know that a gorilla has never asked an existential question? A gorilla has never asked why it is here, what its purpose is, what role it holds in the universe. A gorilla has never asked, even, why the sky is blue. The difference between humans and animals is the need to understand. Imagine a human child and a young gorilla each given a rectangular block, face up, with a slight groove on the bottom that keeps the block from resting solidly on a flat surface. When both subjects are told to place the object down, the gorilla will endlessly attempt to put the block down to no avail, whereas the child will eventually look at the bottom of the block to see what is hindering it from completing the task. This is another part of humanity. The need to understand and the desire to learn are fundamental to the human experience. Whereas a machine will falter and halt at the inability to complete a task, humans will analyze the obstacles that stand between them and completion and examine what setbacks exist (even if they are unable to solve them).
It is theorized that an AI would need gelware (as opposed to hardware) in order to function at its peak level and allow for the most human experience. Gelware would allow an android's CPU (central processing unit) to restructure itself, much like what happens in the human brain during the process known as neuroplasticity, in which the brain reorganizes its neural connections, allowing for faster cognition. In this regard, the human brain can be seen as nothing more than an organic computer, which raises another question: if a computer can attain a human level of intelligence and emotional response, can a human default to the "mindset" of a computer? I believe it is very possible, as seen in the phenomenon known as "muscle memory."
In the military, certain processes are drilled into you. These processes produce a more fluid, functional unit of soldiers capable of completing a task in synchronization, allowing the unit to act as a single entity when engaging enemies in combat. When I was in boot camp, we recruits were required to refer to ourselves as recruits and in the third person. Two-thirds of the way through my training, I began to think about what I would do outside of Parris Island. I imagined myself going to a library to indulge my lust for new literature, and as I played the encounter out in my head, I was alarmed to find I was unable to refer to myself as anything other than "this recruit." It was only with intense conscious thought that I was able to use the first-person "I," as opposed to a detached third-person perspective.