What makes a human a human? Is it our ability to learn or our ability to use critical thought? Is it perhaps within the very aspects of our consciousness?
No. Personality is what makes a human a human.
Personality encompasses all the unique ways one deals with emotion and thinks critically about the world around them. It’s the root of any other answer to the question. Most of your personality is happenstance, either embedded in your genetics or instilled in you through the media your parents consumed while you were nearby.
But humans aren’t the only things on Earth with personality. Gorillas, chimpanzees and orangutans all possess it. So do whales and orcas and dolphins and dogs and cats. Lions and tigers and bears (OH MY!) have personality, too. Extraterrestrial life will certainly have it as well, even if it differs from our understanding of it.
Personality and individuality (interchangeable, for the purposes of this article) are not necessarily tied to intelligence, though I can’t imagine the two being completely separate from each other. For example, a stick insect is objectively less intelligent than a gorilla, and the gorilla is, rather bluntly, somebody. If you were to look into a gorilla’s eyes, an individual would look back.
We can safely assume that some level of intelligence higher than that of a stick insect is necessary for a creature to have a self. I realize that’s a pretty low bar to set, but whatever. This is my article, my train of thought. Step off.
I would mention intuition here, but intuition is more of a general survival instinct within all animals. All animals learn from their mistakes unless, of course, the mistake kills them. But I could say the same about humans. Nothing can learn anything if it’s dead.
To ask Philip K. Dick’s age-old question: do androids dream of electric sheep?
Not an original question, to be fair. Blade Runner made its argument rather effectively already.
Is IBM’s Watson somebody? Watson is designed to interact with people and businesses to advance progress and to collect, share and monitor information as it becomes available and changes. How far advanced does artificial intelligence have to be before we consider that Amazon’s Alexa might, at the very least, understand what a self is?
Alexa can already order toilet paper for you based on the pattern of when you typically purchase it each month. Isn’t that kind of intelligence somewhere on the spectrum toward a full-fledged personality? I’ve had partners less intelligent than that.
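The pattern recognition behind that kind of reorder suggestion can be startlingly simple. Here’s a minimal sketch, assuming the assistant only looks at the gaps between past purchase dates (real systems are obviously far more involved; the function name and data are made up for illustration):

```python
from datetime import date, timedelta

def predict_next_purchase(purchase_dates):
    """Guess the next purchase date from the average gap between
    past purchases -- the kind of simple pattern recognition an
    assistant could use to offer a timely reorder."""
    gaps = [
        (later - earlier).days
        for earlier, later in zip(purchase_dates, purchase_dates[1:])
    ]
    average_gap = sum(gaps) // len(gaps)
    return purchase_dates[-1] + timedelta(days=average_gap)

# Roughly monthly purchases:
history = [date(2024, 1, 2), date(2024, 2, 1), date(2024, 3, 3)]
print(predict_next_purchase(history))  # 2024-04-02
```

No understanding of toilet paper, or of you, is required; an average of three numbers does the trick. Which is exactly why it’s worth asking how much of what we call intelligence is just this, scaled up.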
Or, more importantly, could we even have a discussion about machines having a self when we know that self was created by someone else and written in their code? This one’s easy because, yeah. Duh.
The argument is pretty straightforward: my personality is written in my code, too. It’s just a different kind of code: genetic. To make a totally oversimplified analogy, it doesn’t matter if the shirt is made from cotton or wool; it’s still a shirt. Not to mention the literal billions of people worldwide who believe their own self was forged by a supernatural creator.
Now, imagine that Watson were placed into a manufactured skeleton, one with all the workings of a bipedal body: joints, fingers and toes. It could stand upright and walk and move chess pieces. Would Watson be somebody then? Do we have a bias toward individuality that requires the thing in question to have a physical body?
I think we do. I think that, even if Watson had its own personality, most of us would hesitate to say that Watson was an individual because of the absence of a physical form.
When we talk about self-driving cars and the way they can sense traffic patterns and see an accident coming before it happens, we say it’s a smart car. And the more people drive them, the smarter they become. Sure, they’ll all still be tied to the programming within the car, but that programming will be continually patched and updated to be ever more efficient and safe.
It’s not illogical to believe that the near future holds self-driving cars that you or I wouldn’t need to interfere with at all. You could put in the destination and it would go. It could redirect the route on its own based on updating accident reports (as the GPS on most of our smartphones already does) and manage its speed as you pass different signs. That’s not science fiction; that’s already happening at Ford, Google and Tesla.
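That rerouting trick, for what it’s worth, is classic shortest-path search rather than anything mystical. A toy sketch, assuming travel times on a handful of made-up roads (production navigators use far richer maps and live data, but the principle is the same):

```python
import heapq

def fastest_route(roads, start, goal):
    """Dijkstra's shortest-path search over travel times.
    Re-running it with updated times is, in spirit, how a
    navigator reroutes around a newly reported accident."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, cost in roads.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (minutes + cost, neighbor, path + [neighbor]))
    return None

roads = {
    "home": [("highway", 10), ("backroad", 20)],
    "highway": [("office", 10)],
    "backroad": [("office", 15)],
}
print(fastest_route(roads, "home", "office"))  # highway route, 20 minutes

# An accident report quadruples the highway leg; re-run to reroute:
roads["highway"] = [("office", 40)]
print(fastest_route(roads, "home", "office"))  # backroad route, 35 minutes
```

The “smart” part isn’t the search; it’s feeding it fresh accident reports fast enough that the answer changes before you reach the on-ramp.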
Once we pass that test, we get to move on to the next one. Can’t reach the top of the stairs if you never take the first step.
So, consider this: the more people utilize Watson or Alexa, the more they learn. Not that they explicitly develop self-thought, but they archive greater and greater amounts of information and can anticipate a user’s needs through pattern recognition. They’re designed to be useful.
Is there a point where a machine could transcend artificial intelligence into a more traditional, living intelligence? Isn’t that the pinnacle of being useful to humanity? Watson is a tool, and we want him to be as effective as he can be. We will always be moving toward that.
Before we try to answer the above question, we must consider that the future may not be a very bright place. There is a line of thought that holds there is a ceiling to human advancement on Earth: once we hit it, we either have to leave Earth or perish here. It’s entirely possible that the idea of a machine becoming intelligent enough to have or understand personality and self will be snuffed out well before we could even get to that stage.
But that doesn’t make for a very engaging thought process, so, to hell with that.
The discussion about what makes a person a person is one we’ve unfortunately been having for hundreds of years. Enslaved African Americans were once counted as only three-fifths of a person in the United States. Dozens of genocides, most notably the Holocaust, have been committed under the pretext that the group being targeted were not strictly people.
Whether or not Watson dreams of electric sheep, I don’t think humanity will be very kind to him once we start asking the question.