
Lethal autonomous weapons systems, or LAWS, defined by the US Department of Defense as “a weapons system that once activated, can select and engage targets without further intervention by a human operator,”[1] will be the next leap in military technology, and already we are coiling for the jump. Every major power is investing in their development. Now, before they are ready for deployment, is the time to form an ethical understanding that can guide the creation of laws to fairly and humanely regulate their use.

One response has been that there is something inherently wrong with using LAWS and that they should be totally banned. Although this was my own initial reaction, I now believe it to be illogical, born from an instinctual apprehension of great change and a fear of the alienness of machines. The purpose of this piece is to dispel the notion that autonomous weapons are inherently immoral. This is not to say that they are good or safe. The strength of a weapon directly correlates with its capacity to do wrong, and so autonomous weapons must be subject to careful control and intense scrutiny. However, the mere existence of LAWS is not unethical, and they should not be subject to a total ban.

There are three main arguments for inherent immorality that I will outline and then attempt to counter. Firstly, there is the capacity argument, that LAWS are not capable of complying with the laws of armed conflict because they lack the dynamic thinking and emotional intelligence of a human. Secondly, there is the human agency argument, that LAWS are immoral because they remove the human from the decision to kill, which is essential to ensure the weight of that decision is felt and to respect the human dignity of the victim. The final argument is that the use of LAWS undermines the purpose of war by separating the people who would impose force from the actual act and consequences of imposing force.

1: Capacity to Follow the Laws of Armed Conflict

The first line of argument is that autonomous weapons are unethical because they are incapable of complying with the existing principles of military ethics. Although current computers can calculate much faster than humans, they lack our dynamic and creative thinking. In the complex environment of war, there are many nuances that factor into decision making and our existing level of technology is not able to take all of these into account.

The most frequently raised concern is that LAWS will fail to properly discriminate between legitimate and illegitimate targets. A computer may struggle to distinguish between combatants and non-combatants, especially when the enemy has no consistent uniform, or may not be able to identify that an enemy is surrendering or incapacitated. Landmines, the most basic of autonomous weapons, fail in this regard. A landmine does not discriminate between combatants and non-combatants. Once a mine is planted, it attacks the next object to place adequate force upon it, regardless of its legitimacy as a target. This technology is not able to interpret a situation sufficiently to make a moral decision, and it has been recognised as an immoral weapon by the international community in the Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on their Destruction. Proportionality may also be a problem. A machine might be unable to properly escalate the use of force. A great depth of body language interpretation is required to respond proportionally to something like a punch or a thrown rock. Human beings have a significant advantage over LAWS because they are able to empathise with, and so intuitively understand, other human beings. To be relied upon to make lethal decisions at the standard we currently expect from our servicepeople, LAWS must be far more intelligent than they currently are.

It is worthwhile to note that increasing the thinking capacity of machines will lead to separate problems. Creating an artificial general intelligence (AGI), a digital mind equivalent to a human’s, is a moral quandary in its own right: the possible achievement of consciousness leads into the swampy ground of sentient rights and slavery, and there is the doomsday scenario of an AGI improving itself until it has snowballed into an intelligence that cannot be controlled.[2]

Even if we were to develop LAWS able to interpret complex situations with the competency of a human, this still may not be enough to ensure they behave properly. Our ethics frequently have their origins in our emotional reactions, which in turn come from physiological impulses that are the product of our genetic code.[3] This means that some elements of our thinking are separate from pure logic and are potentially uniquely human. Computers do not share our ability to feel this instinctual sense of rightness or wrongness, or at least they will not until a point well beyond the foreseeable future, so they lack the base from which our morals emerge. They therefore lack the ability to fundamentally understand a portion of our ethics. Perhaps they will be able to implement the principles we dictate to them, but they will not be able to develop the same principles as we have on their own.

Designing an autonomous weapons system that will comply with the laws of armed conflict will be extremely difficult. It will require a level of technological mastery that we do not yet have. There is, however, no reason to believe that it is impossible. The capacity argument would only be grounds for a total ban if it were inconceivable that a LAWS could ever comply with the laws of armed conflict. It is not inconceivable. Any system that fails to uphold our morals is immoral, but this immorality applies only to each specific system, and until a system is created it cannot be appraised. If a LAWS cannot comply with the laws of armed conflict, then it cannot. If it can, it can. The possibility of these systems being poorly designed is not enough to totally deny their usage. If a LAWS is found to be capable of compliance, then it may in fact be more moral than its human counterpart. Human combatants can have their decisions hampered by being tired, scared or prejudiced. Machines do not tire, they do not get emotional, they do not experience hate. They follow only their programming, and if they are programmed to act properly, they will.

Keep in mind that it is an option to limit the use of autonomous weapons to a scope of operations in which they are known to be reliable. Perhaps LAWS are not able to distinguish between human combatants and non-combatants, but it would not be hard for a computer to identify the profile of a particular model of tank or warship. A system with limited capacity to act ethically can be used ethically if it is restricted to the kind of work with which it can be trusted.

2: Dilution of Human Agency

The second line of argument is that autonomous weapons are unacceptable because they remove human oversight from the process of killing. This is said to have two effects: to separate the decision maker from the consequences of their choices and to deny the human dignity of the victim.

I consider human agency to be just as present in the use of LAWS as it is in most modern weapons. When an autonomous machine fires, a pre-planned algorithm has made the immediate decision. However, the decision to activate that algorithm was made beforehand by a human. The difference between using LAWS and using a basic firearm is the uncertainty and delay between the choice and its result. The user does not know when or if the system will fire and exactly what the circumstances will be, but they do know that they do not know this. If they choose to activate a LAWS without being completely confident that it will act in a way that they would condone, then they have acted wrongly in the same way that a soldier wildly throwing a grenade over a wall has acted wrongly. The prime rule when considering the deployment of any autonomous system is this: if you would not endorse the worst possible action it may take, or do not know what that worst possible action is, then it should not be used. When a person presses the ‘on’ button for a LAWS, they are pulling a trigger, releasing an extremely sophisticated bullet, and they are accountable for that action. If a person activates a machine that has the potential to wrongfully take a life, then they have essentially fired that bullet thoughtlessly into the air. If they have not investigated the competence of the machine thoroughly enough to know whether it may act improperly, their crime is the same. The path from firing to consequence is far more convoluted than that of a rifle, but the act of deciding to fire is the same and it comes from a human.

Another concern is that the distancing of the operator from the act of killing makes it harder to comprehend the consequences and therefore easier to make the decision to kill. If this is a problem, then it is a problem already chronic in modern warfare.[4] It is common in this era to fight without ever laying eyes on the enemy. It is now the unlucky combatant who sees the direct result of their actions. Over-the-horizon barrages have accounted for a massive portion of battlefield deaths in the last century, and with ICBMs an attacker can be half a world away from their target. Autonomous weapons add to the potential distance only via the dimension of time and, when visualising the impact of your decision is already an imaginative exercise, I do not see this as a significant difference. If a missile firer can make the right decision on whether or not to strike a site they have never been to, hundreds of kilometres away, when any number of events could happen during the missile’s flight, then an operator can make the right decision on whether or not to activate a LAWS.

The second supposed impact of removing human agency is the erosion of the human dignity of the target. This is part of the stance of the former UN Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, who in 2013 called for a moratorium on the development of LAWS. I cannot make his point more eloquently than he has: “To allow machines to determine when and where to use force against humans is to reduce those humans to objects; they are treated as mere targets.”[5] “This approach stems from the belief that a human being somewhere has to take the decision to initiate lethal force and as a result internalise (or assume responsibility for) the cost of each life lost in hostilities, as part of a deliberative process of human interaction.”[6] Temporarily putting aside my previous claim that there is sufficient human agency in the activation of LAWS, I find absurd the notion that a particular type of trigger mechanism attached to a weapon degrades the humanity of its victim. If a person is to be killed, then it should be absolutely necessary. If it is absolutely necessary, then it should be done in the most efficient manner that causes the least suffering. I cannot see a dying person being particularly concerned with whether their shooter was organic – and if they were, perhaps they would rather an emotionless, task-driven killer be on the other end of the weapon than a member of the species that orchestrated the Holocaust, raped Nanking and invented the iron maiden. A bullet is a bullet, and if it embeds itself in a legitimate target, with respect to the laws of armed conflict and with the intention of minimising suffering, then what does it matter how it was launched?

I do understand where this sentiment comes from. Heyns likens the use of autonomous weapons on human beings to the act of exterminating vermin, killing with no respect or remorse for the target. It is not a good thing for a human being to be killed by an emotionless machine. The conundrum is that while killing with LAWS is not a good thing, killing is never a good thing. The humanity of a person is already violated to the extreme when they are killed. LAWS would be one more step after a marathon of wrongdoing. This does not mean that acts cannot be worse than one another, or that there can be no ethics in war simply because war is in itself bad. There is a great deal more wrong in causing unnecessary suffering or harming a person who does not pose a threat than in using violence to accomplish an essential objective when there is no alternative. There must be a line – but this is not a logical place to draw it.

3: Alienation of the Purpose of War

The final line of argument is the one I first supported and have since turned against. It is the belief that the replacement of human fighters perverts the purpose of warfare.

The argument I once would have given is as follows. War is a political instrument of last resort. When a disagreement of significant proportion cannot be resolved by any other means, it is accepted that the parties involved can engage in war. War is a means to measure the support, the willpower, behind either side of this disagreement. It does so because the accumulation of willpower is intrinsically connected with the fighting capacity of a force. For any individual to be willing to fight, they must first overcome their fear of the threat of death, displaying that their desire to see their side win is greater than this fear. This means that the size of a host, and in turn its fighting capacity, rises in tandem with the will behind it. Greater resources from supporters and stronger morale, which further influence fighting capacity, also result from greater willpower. The side with the greater willpower will win and have their desire fulfilled. This is essentially an extreme form of democracy and, like democracy, it works on the principle that whatever is most commonly wanted by a community is good more often than not. It is a horrible, wasteful, and imperfect way to decide, but when no civilised means can conclude a dispute, only the barbaric one remains.

For war to function as a method of decision making, fighting capacity must be reflective of will. For fighting capacity to be reflective of will it is necessary that the fighters risk a great deal by participating. It is necessary that they face the threat of death so that they can display the level of will required to overcome it. If equipped with an army of machines, the belligerents would be free from the threat of death, in which case the struggle would be a measure of their resources, not their will. Their fighting capacity would be further decoupled from their will as their force is multiplied out of proportion due to the introduction of combatants that have no conscious will. If war is viewed in this way, as an instrument for decision making which compares the will of the supporters of different conclusions by measuring their fighting capacity, then autonomous weapons would destroy this instrument because they would separate the fighting capacity from the actual will of a party. There would then be a gulf in power between those with autonomous weapons and those without – with those without having no means, not even war, to pursue their interests.

A moment after I had cemented this opinion, I realised that it was aggressively naïve and unabashedly utopian. Fighting capacity is not tied to will. It is the product of myriad factors, very few of which are reliant on the wants of those embroiled in the conflict. Technology, resources, and strategy add more to fighting capacity than willpower does. Willpower did not allow the Romans to subdue barbarians, it did not allow Hernán Cortés and a few hundred conquistadors to wreak ruin upon hundreds of thousands of Aztecs, and it was not willpower that pushed Australia’s Indigenous inhabitants away from the land that would become Port Jackson. It was technological and economic superiority. The gulf in power I feared autonomous weapons would bring could not be much greater than the gulf between a spear and a gun. War would only be reflective of will if it were fought naked with bare fists. War is not a measurement of will; it measures only fighting capacity, and so it does not function as an instrument for decision making.

A fundamental law of human interaction is that if a party has the power to achieve its goal, it can. Might truly does equal right. War is the realisation of this natural law, not a tool of our design. War occurs when a party believes it has the capacity to impose its desires by force, chooses to do so, and an opponent then chooses to resist. We experience peace only when no one chooses to resist those exercising power, either because there is a consensus of desire or because there is a monopoly on power. We accept war not because we recognise it as a legitimate form of politics but because it cannot be uprooted. If we were to avoid war for 100 years, it would return with the same nature and the same viability in the 101st, because it is not something we created; it is the result of reality.

My original interpretation was influenced by the Clausewitzian view that ‘War is a continuation of policy by other means.’[7] I took from this that war is simply another means of politics, like diplomatic or economic scuffling. It is not; war merely opens the way to other means that are not available to states operating within the restrictions of law and cooperation. War is not an instrument; it is a condition, an environment in which politics can occur. We can choose to dip into the state of war if it suits us, in the same way we would dip into water. It may suit our intentions, and it may be an advantageous position if we are strong swimmers, but the water does not have a goal of its own. Its purpose cannot be perverted because it has no purpose. Therefore no weapon can pervert the purpose of war. We cannot control or moderate war to make it assume the image we would want it to. We can only govern our own behaviour within it.

To do good in war, one should not attempt to deny this reality by treating war as just another tool of politics, to be used when we desire and in the manner we desire, capable of being sanitised and, most dangerous of all, relied upon as a method of resolution. One must work with the law of power as one works with the law of gravity. In order to do good, we must add our power to a party that intends to do good. When enough power is accumulated, a hegemon or leviathan is born which can arbitrate over the community, using the threat of its monopoly to force concession without bloodshed, and if this leviathan is led and structured in a way that guides it to do good, then it will do good. The methods we use to gain the power to reach this point should only be limited if they defeat our goal of doing good. It is not acceptable to use indiscriminate or disproportionate force, because by doing so we create as much suffering as we would prevent. The previous arguments showed LAWS to be a weapon that is not certain to cause any greater harm than would be caused in the normal process of fighting a war, so it is not certain to be an unacceptable source of power.

Conclusion

None of these three arguments, which I see as the primary and most viable points against the use of LAWS, is sufficient to prove autonomous weapons inherently immoral, yet they do give reason for caution. The incorporation of new technologies should not be allowed to erode our values. If we sacrifice our principles in order to win, then we lose. Considering the infancy of these systems, I support a moratorium of no more than 20 years. It is not reasonable to ban LAWS permanently. I say this not only because I view autonomous weapons as ethically acceptable. Investing in weaponry is a prisoner’s dilemma with drastic stakes. It would be so greatly advantageous for one state to develop LAWS if their opponents did not, and so greatly disadvantageous to have not developed them if their opponents had, that I suspect a permanent ban would be treated in the same manner as the prohibition of nuclear arms. Development would certainly continue, and in a major peer conflict there is every possibility they would still be used, at which point they would be used without limitations. Guiding development with fair regulations that are strictly enforced would more likely result in ethical usage of LAWS. A short-term moratorium would likely be adhered to and would at least allow us to pass safely through the current period in which these systems are unmastered and most liable to be misused. When that period ends, we will hopefully be better placed to create more meticulous laws.
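For readers who want the prisoner’s dilemma spelt out, the sketch below works through a purely illustrative payoff table. The numbers are invented for demonstration only, not drawn from any strategic assessment; they simply capture the structure described above, in which ‘develop’ dominates ‘abstain’ for both sides even though mutual abstention is the better joint outcome.

```python
# Illustrative prisoner's dilemma for LAWS development.
# The payoff numbers are invented for demonstration only; higher is better.
# payoffs[(choice_a, choice_b)] = (payoff to State A, payoff to State B)
payoffs = {
    ("abstain", "abstain"): (3, 3),   # both honour a ban: stable, but each forgoes an edge
    ("develop", "abstain"): (5, 0),   # A gains a decisive advantage over B
    ("abstain", "develop"): (0, 5),   # A is left dangerously behind
    ("develop", "develop"): (1, 1),   # a costly arms race for both
}

def best_response(opponent_choice: str) -> str:
    """Return the choice that maximises State A's payoff against a fixed opponent choice."""
    return max(("abstain", "develop"),
               key=lambda choice: payoffs[(choice, opponent_choice)][0])

for opponent in ("abstain", "develop"):
    print(f"If the opponent will {opponent}, the best response is to {best_response(opponent)}")
# Both lines print 'develop': it is a dominant strategy, so a permanent ban is
# unstable even though (abstain, abstain) would leave both states better off
# than (develop, develop).
```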

Footnotes


1 Baker, D. (2015). Key concepts in military ethics (pp. 132-134). Sydney: UNSW Press.

2 Burkart, J., Brugger, R., & van Schaik, C. (2018). Evolutionary Origins of Morality: Insights From Non-human Primates. Retrieved 11 December 2020, from https://doi.org/10.3389/fsoc.2018.00017

3 Clausewitz, C. (1989). On War. Translated by Michael Howard and Peter Paret, New Jersey: Princeton University Press.

4 Docherty, B., Duffield, D., Madding, A., & Shah, P. (2020). Heed the Call A Moral and Legal Imperative to Ban Killer Robots. Retrieved 7 December 2020, from https://www.hrw.org/report/2018/08/21/heed-call/moral-and-legal-imperat…

5 FitzPatrick, W., & Zalta, E. (2016). Morality and Evolutionary Biology. Retrieved 13 December 2020, from https://plato.stanford.edu/entries/morality-biology/#ExpOriMorPsyAltEvo…

6 Grossman, D. (1996). On Killing (2nd ed., pp. 99-133). Boston: Little, Brown and Company.

7 Heyns, C. (2013). Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns. Office of the United Nations High Commissioner for Human Rights.

8 International Committee of the Red Cross (ICRC). (2018). Ethics and autonomous weapon systems: An ethical basis for human control? Geneva.

9 Klare, M. (2019). Autonomous Weapons Systems and the Laws of War. Retrieved 8 December 2020, from https://www.armscontrol.org/act/2019-03/features/autonomous-weapons-systems-laws-war

10 Muller, V. (2014). Risks of General Artificial Intelligence. Journal of Experimental and Theoretical Artificial Intelligence, 26(3), pp. 297-301. DOI: 10.1080/0952813X.2014.895110

11 Patterson, D. (2020). Ethical Imperatives for Lethal Autonomous Weapons. Belfer Center for Science and International Affairs.

12 Rosert, E., & Sauer, F. (2019). Prohibiting Autonomous Weapons: Put Human Dignity First. Global Policy, 10(3). Retrieved from https://onlinelibrary.wiley.com/doi/epdf/10.1111/1758-5899.12691

13 United States of America Department of Defense. (2017). Directive 3000.09: Autonomy in Weapon Systems.

14 Wilson, E. (1998). The Biological Basis of Morality. Retrieved 13 December 2020, from https://www.theatlantic.com/magazine/archive/1998/04/the-biological-bas…


Disclaimer

The views expressed in this article are those of the author and do not necessarily reflect the position of the Department of Defence or the Australian Government.
