The metal body glides effortlessly across the sky – searching, synthesizing, processing data at an inhuman speed. A specific face has been imprinted on this machine. Once it finds that face, the machine can summarily end the person’s life. This drone responds to no ground operator. It is autonomous – and lethal.
This weapon does not come from the mind of J.J. Abrams or Ridley Scott.
Rather, our own Department of Defense imagines such a weapon.
An October 25 New York Times story by Matthew Rosenberg and John Markoff reported that the Pentagon is in the advanced stages of testing “robots” with artificially intelligent capabilities. These drones would be able to locate specific targets and actors without human oversight, and their development falls under an $18 billion Pentagon budget allocation for new technologies. Although the drones are still in preliminary phases and have been tested only in replica villages, they have already proven adept at targeting specific faces and locations.
Defense Department officials see these robots as necessary to maintain an advantage over foreign advances in artificial intelligence. Powers such as Russia and China envision similar autonomous weaponry and have begun investing in comparable programs.
A debate has emerged in the Pentagon over how much independence drones should be given. Some officials fear that developing autonomous weaponry is a slippery slope toward conflict dictated more by technology than by human actors. The dilemma posed by AI in drone technology has been dubbed the “Terminator Conundrum.”
Robert O. Work, the deputy defense secretary, explained how developing autonomous weapons keeps the United States on pace with Russia and China:
“China and Russia are developing battle networks that are as good as our own. They can see as far as ours can see; they can throw guided munitions as far as we can… What we want to do is just make sure that we would be able to win as quickly as we have been able to do in the past.”
Advancing autonomous technology could further President Obama’s military objectives: minimizing collateral damage and keeping boots off the ground. However, these advancements have catalyzed an arms race with Russia in which military advantage supersedes any ethical discussion of artificially intelligent weapons.
A primary critique of drone technology has been its record of civilian and noncombatant casualties. Villages in Afghanistan, Yemen and now Syria have all seen how deadly and inefficient such strikes can be in advancing a military cause. Drone strikes that employ facial recognition could provide a far more exact and precise way to take out individual actors and lessen the possibility of collateral damage.
The killing of noncombatants has been compounded by the Obama Administration’s lenient classification of civilians located near a potential target. In a 2012 New York Times piece, Jo Becker and Scott Shane described how this policy has allowed many civilians to be killed without being counted as traditional noncombatants:
“Mr. Obama embraced a disputed method for counting civilian casualties that did little to box him in. It in effect counts all military-age males in a strike zone as combatants, according to several administration officials, unless there is explicit intelligence posthumously proving them innocent… Counterterrorism officials insist this approach is one of simple logic: people in an area of known terrorist activity, or found with a top Qaeda operative, are probably up to no good.”
Such cryptic justifications for taking the lives of noncombatants are clearly irrational – there is no way to definitively prove that all military-age males in a targeted locality are “up to no good.” This policy of imprecise targeting has enabled the killing of countless innocent civilians in embattled regions. A 2014 Guardian piece found evidence that drone strikes targeting specific actors are increasingly imperfect:
“Even when operators target specific individuals – the most focused effort of what Barack Obama calls “targeted killing” – they kill vastly more people than their targets, often needing to strike multiple times. Attempts to kill 41 men resulted in the deaths of an estimated 1,147 people, as of 24 November.”
Artificially intelligent drones, however, have already displayed the ability to discern combatant from innocent. The same October 2016 New York Times story highlighted the drone’s new functions: “The drone showed a spooky ability to discern soldier from civilian, and to fluidly shift course and move in on objects it could not quickly identify.”
The logic of drone warfare has always stemmed from a desire to keep U.S. soldiers safe and off the ground. An October 5 Washington Post piece by Vivek Wadhwa and Aaron Johnson argues that if autonomous drone technology can be developed, the Pentagon would have a moral imperative to use it instead of sending troops into conflict:
“The rationale then will be that if we can send a robot instead of a human into war, we are morally obliged to do so, because it will save lives — at least, our soldiers’ lives… And it is likely that robots will be better at applying the most straightforward laws of war than humans have proven to be. You wouldn’t have the My Lai massacre of the Vietnam War if robots could enforce basic rules, such as ‘don’t shoot women and children.’”
If we are moving towards increased drone warfare, minimizing civilian casualties should drive the development of autonomous weaponry.
However, despite these ostensible benefits, the development of new drone technologies has only fueled the growing military tension between the United States and Russia in recent months.
Recent Russian military maneuvers, including a flotilla headed for Syria and the stationing of nuclear-capable missiles in Kaliningrad, have heightened tensions with NATO.
Given the already heightened mistrust between the U.S. and Russia, a race for artificially intelligent drones would only exacerbate current tensions and military maneuvering, creating an arms race in which the U.S. would struggle to maintain a technological advantage over competing powers. Christian Davenport of the Washington Post described how this race might play out:
“After decades of unmatched superiority, the Pentagon fears that potential adversaries have benefited from the proliferation of commercial technology and have caught up with the United States. The Pentagon is preparing for what Deputy Defense Secretary Robert Work called “network-on-network warfare” against more traditional rivals, such as China and Russia, after more than a decade of counterinsurgency warfare in Iraq and Afghanistan.”
The alternative to such an arms race might be a multilateral military agreement in which many states collectively halt progress on autonomous weaponry. Wadhwa and Johnson of the Washington Post suggest that a ban on all autonomous weapons could ease future tensions: “The only way to avoid untenable situations is to create and enforce an international ban on lethal autonomous weapons systems. Unilateral disarmament is not viable. As soon as an enemy demonstrates this technology, we will quickly work to catch up: a robotic cold war.”
The proposition of a multilateral agreement limiting the proliferation of autonomous weaponry harks back to the Nuclear Non-Proliferation Treaty of 1968. While the treaty was significant in limiting the overall spread of nuclear weapons among its 190 signatory states, it was never signed by India, Israel, Pakistan, or South Sudan, and France and China did not accede until 1992. As the NPT shows, it is extremely difficult to keep all signatory states on similar trajectories; the same problems would invariably arise with a comparable agreement on autonomous weaponry.
While the advancement of autonomous weaponry could keep soldiers off the battlefield and decrease civilian casualties, it also raises a host of ethical questions. What kind of precedent does it set to allow nonhuman objects to make critical decisions in the heat of battle?
Tom Malinowski of Human Rights Watch aptly explains this dilemma in a Lawfare article:
“Let’s remember: proportionality decisions require weighing the relative value of a human life. Would we be comfortable letting robots do that? How would we feel if someone we loved were killed in a military strike, and we were informed that a computer, rather than a person, decided that their loss was proportionate to the value of a military target?”
Ingrained algorithms and split-second calculations are increasingly able to replace human judgment.
The benefits of developing autonomous weaponry seem not only tangible but increasingly within reach: lives could be saved and U.S. soldiers kept out of conflict. But a heated debate persists within the upper ranks of the Pentagon over how to morally reconcile a machine performing such violence. It raises a fundamental question:
Can we trust a machine with the ultimate judgment call: taking another human’s life?