Fire and Forget: A Moral Defense of the Use of Autonomous Weapons in War and Peace

In Jai Galliott, Duncan MacIntosh & Jens David Ohlin (eds.), Lethal Autonomous Weapons: Re-Examining the Law and Ethics of Robotic Warfare. New York: Oxford University Press. pp. 9-23 (2021)

Abstract

Autonomous and automatic weapons would be fire and forget: you activate them, and they decide who, when, and how to kill; or they kill at a later time a target you selected earlier. Some argue that this sort of killing is always wrong and that, if killing is to be done, it should be done only under direct human control (e.g., Mary Ellen O'Connell, Peter Asaro, Christof Heyns). I argue that there are surprisingly many kinds of situations where this is false and where the use of autonomous weapons systems would in fact be morally required. These include cases where:

a) once one has activated a weapon expected then to behave lethally, it would be appropriate to let it continue because this is part of a plan whose goodness one was best positioned to evaluate before activating the weapon;

b) one expects better long-term consequences from allowing it to continue;

c) allowing it to continue would express a decision you made to be resolute, a decision that could not have advantaged you had it not been true that you would carry through with it;

d) the weapon is not mechanically recallable, so that, to prevent it from carrying through, you would have had to refrain from activating it in the first place, something you expected would have disastrous consequences;

e) you must deputize necessary killings to autonomous machines in order to protect yourself from guilt you should not have to bear;

f) it would be morally better for the burden of responsibility for the killing to be shared among several agents, and agents can achieve this by deputizing the killing to machines, especially where it is not predictable which machine will succeed;

g) a killing would be morally better done with elements of randomness and lack of deliberation, and a (relatively stupid) machine could do this where a person could not;

h) the machine would be acting as a Doomsday Device, so that it could not have had its hoped-for deterrent effect had you not ensured that you would be unable to recall it if enemy action activated it;

i) letting it carry through is a necessary part of its own learning process, and you expect that this learning will have salutary effects later on;

j) human intervention in the machine's operation would disastrously impair its precision, or its speed and efficiency;

k) using non-automated methods would require human resources you simply don't have for a task that nevertheless must be done (e.g., using land-mines to protect remote installations);

l) the weapon has such horrible and indiscriminate power that it is doubtful whether it could actually be used in ways compatible with International Humanitarian Law and the Laws of War, which require that weapons be used only in ways respecting distinction, necessity, and proportionality, yet the threat of its use could respect these principles by affording deterrence, provided human error cannot lead to its accidental deployment; and this requires that it be controlled by carefully designed autonomous and automatic systems.

I then consider objections based on conceptions of human dignity and find that very often dignity too is best served by autonomous machine killing.
Examples include saving your village by activating a robot to kill invading enemies who would otherwise inflict great indignity on it; using a suicide robot to save yourself from a less dignified death at enemy hands; using a robotic drone to kill someone otherwise inaccessible in order to restore dignity to someone this person killed, and to that victim's family; and using a robot to kill someone who needs killing, but whose killing by a human executioner would soil the executioner's dignity. I conclude that what matters in rightful killing isn't necessarily that it be under the direct control of a human, but that it be under the control of morality; and that could sometimes require the use of an autonomous or automated device. (This paper was formerly titled "Fire and Forget: A Defense of the Use of Autonomous Weapons in War" on PhilPapers; the current title is the title of the published version.)

Author's Profile

Duncan MacIntosh
Dalhousie University
