In 2011, the supercomputer Watson defeated the two greatest minds "Jeopardy!" has ever seen by a large margin. It offered a glimpse of an eventual future in which the prophecies of science fiction become fact. From robots in our homes to robots on the battlefield, our lives will be transformed in the next century. You think our grandparents had it tough adjusting to a computer in their pocket? Imagine adjusting to a humanoid robot that can speak, react, and help us in our own homes. While the general population is more concerned about their Roomba, intellectuals and researchers are more concerned about the lethal side of future robotics. Whether it's a drone, a fighter jet, or a killer robot, is it right to let a machine decide whom to kill?
Now, five years after Watson, the Samsung SGR-1 overlooks the "demilitarized" zone between North and South Korea. It can detect, target, and kill any intruder without a human in the loop.
In Israeli airspace the Harpy can fly for hours with no one in the cockpit or on the ground to control it. It searches for enemy radio signals, then destroys them with precision.
On the walls of an Army base in Texas, guards are being replaced by the Tower Hawk System, a high-powered gun mounted on the wall and operated remotely with an Xbox controller, reducing the work of 10 men on 12-hour shifts to only two.
These are autonomous weapons, the pioneers of the killer robots we see on our screens, or at least that’s what some seem to think.
A future dominated by the Terminator is still far away, but it's no longer a figment of some aspiring sci-fi author's imagination; it's the working reality of dedicated scientists funded by the Pentagon, Beijing, Tokyo, Seoul, or dare I say...Moscow. When innovation happens, it happens fast, and it cannot be controlled until it's too late. That's why in 2007 Noel Sharkey, a professor of artificial intelligence at the University of Sheffield, warned the world of the threats posed by killer robots. In 2009, he joined Jürgen Altmann, Peter Asaro, and Rob Sparrow to establish the International Committee for Robot Arms Control (ICRAC). Four short years later, the Campaign to Stop Killer Robots was launched. These two organizations are growing more popular every day and have been steadily building influence on the international stage.
These two organizations want to kill killer robots before they even begin to short-circuit and cleanse the world of human beings. Their main priorities are to ban autonomous weapons, to ensure robotic weapons abide by international humanitarian law, to keep arms production as transparent as possible, and to open dialogue between countries on this dangerous topic.
In June 2015, more than 1,000 artificial intelligence researchers signed an open letter calling for a ban on autonomous weapons. Since then, it has garnered more than 3,000 researcher signatures and 15,000 other endorsements, including Elon Musk and Stephen Hawking. Even the Dalai Lama supports the pre-emptive ban.
They fear that without proper regulation or a pre-emptive ban, robot weapons will start a new arms race much like the one we see with nuclear weapons today. War will be easier to wage if human soldiers are replaced with robots, and it will be civilians who suffer the most. If civilians are killed in the process, who is to be held accountable? The commander, the programmer, the manufacturer, or the robot itself? It creates a "responsibility gap." But before any of that can be answered, the major question is: is it ethically responsible to leave the decision to kill another human being to a robot? A robot can malfunction. It can't tell the difference between a civilian and a soldier. It can't make complex decisions the way humans can.
Roboticist Ronald Arkin argues that the ugly side of human nature would not be present in war. Robots would not shoot out of fear, anger, or revenge, and robots would not rape, torture, or loot. They are devoid of the emotions that can make war hell. It is ironic to demand that robots be built to comply with international humanitarian law when they do not suffer many of the same flaws that humans do.
It took only five months for the Campaign to Stop Killer Robots to gain acceptance to a UN General Assembly meeting on the threat of autonomous weapons, when it usually takes an activist group five years to achieve that stature. Since 2013, the UN has held three formal General Assembly meetings on autonomous weapons, where numerous nations have voiced their concerns. Pakistan, which has suffered more drone strikes than any other country, is one of the few nations to call for a complete prohibition on the use and development of robot weapons.
Yet no meaningful legislation or ban has been implemented in the countries building the robots, and until a tragedy happens, there will not be one for a long time. Robotic warfare offers too many benefits because, as ugly as it is to say, we don't mind civilians dying in other countries as much as we hate our own troops dying in other countries. These researchers and scientists are fighting for a noble and, sadly, hypothetical cause, because fully autonomous killer robots do not yet exist. It's tough to pass legislation and regulations on something that hasn't been invented. As of today, no international agreement has been reached on even the definition of 'autonomous weapons,' let alone a pre-emptive ban. Someday we will need regulation, restriction, and maybe even a full prohibition on the technology, but I don't see that happening for at least another decade, as robots grow ever more advanced and more lethal.
I completely support the Campaign to Stop Killer Robots and the ICRAC’s cause, but killing killer robots will have to wait another day.