My good friend Gary Buckley and I are, apparently, both obsessed with the societal impact of economic automation. He wrote two consecutive articles about it, and I wrote three—not that there is any kind of competition going on here, mind you, because that would just be silly. But to throw one more article onto the pile, I wanted to address a couple of questions that he raised in his last article, "Are Humans Better Than Machines?":
Can robots replace human simplicity? Will robots ever be creative or feel empathy? Do the superior productive capabilities of automation reduce human value?
To put each respective answer succinctly: yes, most likely, and absolutely not.
Robots can replace our simplicity because their efficiency at any task they can perform improves at a rate we simply can't match. Gary accurately pointed out that, at the moment, "Humans can write more colorfully [than machines can]." But humans are not getting better, faster, or cheaper at writing, at least not at the rate that machines are. In fact, one news-writing bot was recently released for free, and research into automated creative writing is already underway. Gary also said that, at the moment, "[H]umans can fold laundry better [than machines can]." But the same principle applies to laundry as to colorful writing. In fact, a robot named Baxter can already be taught to fold laundry. And as CGP Grey said, "Even if Baxter is slow, his hourly cost is pennies' worth of electricity while his meat-based competition costs minimum wage. A tenth the speed is still cost-effective when it's a hundredth the price."
Software and hardware updates will soon be available for Baxter, but probably not for us. We cannot improve at anywhere near the rate that machines can, so they must eventually surpass us in economic productive value.
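To make CGP Grey's arithmetic concrete, here is a back-of-the-envelope comparison. The numbers are my own illustrative assumptions, not measured figures for Baxter or for any real laundry worker:

```python
# Back-of-the-envelope cost comparison per folded item.
# All numbers are illustrative assumptions, not real measurements.

human_rate = 60      # items folded per hour
human_cost = 7.25    # dollars per hour (US federal minimum wage)

robot_rate = 6       # a tenth of the human's speed
robot_cost = 0.07    # dollars per hour of electricity ("a hundredth the price")

print(f"Human: ${human_cost / human_rate:.3f} per item")  # ~$0.121
print(f"Robot: ${robot_cost / robot_rate:.3f} per item")  # ~$0.012
# Even at a tenth of the speed, the robot is roughly ten times cheaper per item.
```

And unlike the human's wage, the robot's two numbers only move in one direction over time.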
Gary also argued that "[h]umans will always continue to invent and create, which will continue to supply work for humans." First of all, this is an assumption. It has been true throughout history, but that does not mean it will remain true; the non-existence of instantaneous global communication was also true for thousands of years, until it wasn't. It is a fallacy to say "what has always been true will always remain true," especially when the available evidence indicates that the winds of change are blowing. Second, even if innovation continues, current trends and predictions indicate that the percentage of paid work done by humans will decline, even if a shrinking group of humans can still find paid work as innovators or because of innovators.
Finally, it is possible that the human capacity for innovation will be outdone by AI. While most machines right now are incapable of creativity, it is likely that more software complexity and processing power will lead to more creativity. One way AI can become creative is by making connections between seemingly unrelated data points, connections humans could never make because we cannot crunch vast swaths of data from such a huge variety of sources. AI can also incorporate a random number generator into its idea-generation process, when selecting or creating data, to reach a wider variety of possibilities and to be more "genuinely creative."
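As a toy illustration of that second mechanism (a sketch of my own, not a description of any real creative AI), a program can randomly pair items from unrelated "data sources" and keep only the combinations it has not produced before:

```python
import random

# Toy sketch of randomness in an idea-generation loop: combine items from
# unrelated lists at random and keep pairings that haven't been seen before.
# Real creative AI is far more sophisticated; this only shows the mechanism.

materials = ["graphene", "spider silk", "aerogel"]
problems = ["water filtration", "wound dressing", "insulation"]

rng = random.Random(42)  # seeded so the output is reproducible
seen = set()
ideas = []

while len(ideas) < 5:
    pairing = (rng.choice(materials), rng.choice(problems))
    if pairing not in seen:          # crude "novelty filter"
        seen.add(pairing)
        ideas.append(pairing)

for material, problem in ideas:
    print(f"What if we used {material} for {problem}?")
```

Most of the pairings such a loop produces are useless, but a system that can generate and filter millions of them is doing, in a crude way, what we call brainstorming.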
Automation will replace a huge percentage of human workers, and there are ways that AI could become creative. But can a machine feel empathy? It depends on how empathy is defined. The Free Dictionary defines empathy as "The ability to identify with or understand another's situation or feelings." Google defines empathy as "the ability to understand and share the feelings of another." Wikipedia has a similar definition: "the capacity to understand or feel what another being (a human or non-human animal) is experiencing from within the other being's frame of reference." Under these definitions, which are what most people I have encountered usually seem to mean when they say "empathy," several non-human social animals appear to show empathy, including dogs, elephants, chimpanzees, and some rodents.
Even with today's young supercomputer technology, we have simulated part of a rat's brain, which is the first step in a project that aims to simulate a human brain. Another project using simpler representations of neurons simulated a mini-brain that scored almost as well as humans did on math tests. We could give a machine empathetic software by virtually simulating the brain of a human or another animal capable of empathy, or potentially by reverse-engineering parts of such a brain and incorporating them into an AI program. This may be impossibly difficult at the moment, but there is no time limit for humanity to disprove the sentence "machines cannot replicate empathy."
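For a sense of what "simulating neurons" means at the most basic level, here is a deliberately tiny sketch of a single leaky integrate-and-fire neuron, one of the simplest standard neuron models. The projects mentioned above simulate millions of far more detailed neurons; this is only meant to show the flavor of the computation:

```python
# Minimal leaky integrate-and-fire neuron: the membrane voltage leaks toward
# rest, input current pushes it up, and crossing a threshold emits a "spike."
# Parameters and units here are arbitrary, chosen for readability.

def simulate(input_current=1.6, steps=100, dt=1.0,
             tau=10.0, v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    v = v_rest
    spike_times = []
    for t in range(steps):
        # dv/dt = (-(v - v_rest) + input_current) / tau
        v += dt * (-(v - v_rest) + input_current) / tau
        if v >= v_threshold:
            spike_times.append(t)
            v = v_reset  # fire and reset
    return spike_times

print("Spike times:", simulate())  # roughly one spike every ten steps
```

Scaling from one toy neuron to a brain is an enormous leap in both fidelity and quantity, which is exactly why these projects need supercomputers.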
However, if we define empathy as requiring a soul, then we wade into much murkier waters. For example, do the aforementioned animals capable of empathy as it is commonly defined have souls? Also, computer processing power has been increasing exponentially over the past few decades, and is likely to grow even faster in the future due to advances in quantum computing. If we eventually simulate a full human brain, would that simulation have a soul? And if you say no, how would you respond if it asked you why?
I will not pretend to have an answer to these heavy metaphysical questions. My point is that defining empathy as "requiring a soul" is inconsistent with how the term is generally defined, opens a huge philosophical can of worms and, if only humans have souls, leads to a circular argument: "only humans can have empathy because empathy is something that only humans can have."
With all of that said, humans will still be valuable even if machines and AI eventually surpass our simplicity, creativity, and empathy. Human intrinsic value is not reduced by automation or advancements in AI because a human's intrinsic value is not measured by comparison to machines, especially in terms of how much money they make or how many widgets they produce. It is very dangerous to conflate the intrinsic value of the human experience with human economic productive value, especially when the latter is in decline.
Gary did hit the nail on the head with his eloquent description of the intrinsic value of the human experience, which he describes as distinctly different from human economic productive value: "the simple is what makes life worth living. Feeling the sunshine on your face, snapping your fingers, breathing fresh air—all trivial occurrences. Yet perhaps the most faithfully joyful ones. Humans can enjoy the relational aspect of chatting with an old friend or getting lost in the world of a novel…To truly live — to thrive — human beings must love and be loved. People must feel raw pain and pure ecstasy to know the human experience."
All of these simplicities are a vital part of what makes life worth living, which is true whether or not "machines simply cannot replicate" them. And they are not only still possible but easier for us in a world where a human does not need to work for a living because machines, our beautiful creations that represent one of our greatest accomplishments as a species, do the undesired grunt work that is necessary for our survival.
Even if robots take over everything that we currently consider "paid work," including the simple work, humans can still work at and enjoy all kinds of unpaid activities. There is no economic incentive to automate an unprofitable activity, so unproductive fun is much less likely than productive work to be "automated away." And even when an activity that is both productive and fun does get automated, humans can still do it without being paid because they enjoy it for its own sake, so very few intrinsically desirable activities will be "automated away." Robots will simply alleviate the need for a precarious "work-life balance" by taking our work and thereby giving us more life. They will give us the time and resources to live as we see fit, free to enjoy our simple and relational humanity.
For more information on this subject, check out some of the articles that Gary and I have written:
Gary's Automation Articles: "In Defense Of The Robot Painting" and "Are Humans Better Than Machines?"
My "Automating the Workforce" Series: "Part 1: The Crisis Is Here," "Part 2: Meet The Machines" and "Part 3: The Future Of Humanity"