Ban of killer robots, drones fought by U.S., Russia
In Geneva, it’s the big guy versus the little guy. And the big guy has robots on his side.
More than two dozen nations are using a key United Nations meeting this week to push for a total ban on fully autonomous weapons, or “killer robots,” as critics have dubbed them. The slow but relentless rise of lethal robots, from relatively simple drones to “Transformers”-like killing machines and guns that can choose their own targets, has pitted large countries against small, with the U.S., Russia, Israel, the United Kingdom and other global powerhouses insisting that they will resist any effort to ban the development of autonomous technology.
International regulations outlawing the use of such military autonomous weaponry have found support among an unlikely group of allies, including the Vatican, Cuba, the Palestinian territories, Brazil, Panama, Ghana, Mexico, Pakistan and Uganda. They say such robots represent a violation of human rights standards and potentially an existential threat to humanity.
Activists hope the international campaign against what some call “slaughterbots” will mirror past agreements that banned the use of biological and chemical weapons and greatly constrained the use of landmines in warfare.
But senior Trump administration officials say they will oppose any restrictions on “lethal autonomous weapons systems,” and there seems to be little chance of wide-ranging global rules becoming a reality soon. The U.S. is a global leader in weapons research, including in the autonomous realm.
“The United States does not support beginning negotiations related to LAWS in 2019, on either a legally binding or other nonbinding instrument,” a State Department official told The Washington Times. “The issues presented by autonomy in weapons systems are complex, and further substantive dialogue is required to develop greater shared understanding of the nature and applications of the technical features and functions that are incorporated into weapons systems, and the ways in which to ensure that appropriate human judgment is exercised over the use of these weapons systems.”
The official said the U.S. believes other nations should refrain from making “hasty judgments” about autonomous weapons technology, which many defense officials around the world believe could save lives in conflicts in addition to providing offensive capabilities.
As the battle lines are drawn in Geneva, a wild card in the equation is China, which has said it supports a ban on the use of killer robots but reportedly continues to conduct research in the field. Proponents of a robot ban count China as an ally, though Beijing’s true intentions remain murky.
Some scholars argue that China is taking a deliberately ambivalent position in order to cast itself as a supporter of human rights while working behind the scenes to stay current in the robot arms race.
“China might be strategically ambiguous about the international legal considerations to allow itself greater flexibility to develop lethal autonomous weapons capabilities while maintaining rhetorical commitment to the position of those seeking a ban,” Elsa B. Kania, a fellow at the Center for a New American Security, wrote this year.
Blocking the ban
What is becoming increasingly clear is that the most technologically advanced nations want autonomous weapons research to move ahead and argue that an all-out ban would be a mistake.
Germany and France, in a joint statement to the Geneva gathering, called for “results-oriented” talks aimed at finding “concrete options for recommendations on how to effectively address the challenges arising from lethal autonomous weapons systems, while neither hampering scientific progress nor the consideration of the beneficial aspects of emerging technologies.”
Britain and Israel also have come out against any broad ban. Russian officials, echoing their counterparts from a host of other nations, said recently that fully automated weapons that can act without human control simply don’t exist right now and that taking action to stop them is premature.
But activists say the time to act is now. By the time militaries have armies of autonomous weapons, they say, it will be too late for the international community to order the killer robots to stand down.
“Killer robots are no longer the stuff of science fiction,” Rasha Abdul Rahim, an artificial intelligence and human rights researcher at Amnesty International, said this week ahead of the summit.
“From artificially intelligent drones to automated guns that can choose their own targets, technological advances in weaponry are far outpacing international law,” she said. “We are sliding towards a future where humans could be erased from decision-making around the use of force.”
The Campaign to Stop Killer Robots, a coalition of groups formed in 2012 with the explicit purpose of “working to pre-emptively ban fully autonomous weapons,” has held events all week in Geneva and has lobbied nations to endorse calls for a ban.
The campaign maintains a running list of nations that have endorsed a full ban. The count stands at 26 and includes China.
“Momentum is starting to build rapidly for states to start negotiating a new ban treaty and determine what is necessary to retain human control over weapons systems and the use of force,” said Peter Asaro, vice chair of the International Committee for Robot Arms Control, which is a member of the campaign’s steering committee. “Requests for more time to further explore this challenge may seem valid but increasingly sound like excuses aimed at delaying the inevitable regulation that’s coming.”
Activists also warn that if technological trends continue, “humans may start to fade out of the decision-making loop for certain military actions, perhaps retaining only a limited oversight role or simply setting broad mission parameters.”
Influential high-tech figures such as Tesla CEO Elon Musk have raised similar concerns and support an international ban.
The top U.N. official overseeing autonomous weapons believes there are valid concerns around the rise of robots but cautions against overstating the danger.
“I don’t think that these visualizations of Terminators or drones going berserk are very helpful in having an advanced conversation about intelligent autonomous systems,” Amandeep Gill, chair of the U.N. meetings on autonomous weapons, said in an interview with The Verge published this week.
“I think making policy out of fear is not a good idea. So as serious policy practitioners, we have to look at what has become of the situation in terms of technology development and what is likely to happen.”