The successful first flight of Boeing Australia’s ‘Loyal Wingman’ unmanned aircraft in early 2021 marks the introduction of a new and ground-breaking capability for the Royal Australian Air Force (RAAF) and the broader Australian Defence Force (ADF). Unlike existing unmanned aerial vehicles (UAVs) employed by the RAAF, such as the MQ-4C Triton, Loyal Wingman’s unique leveraging of artificial intelligence (AI), in conjunction with its ability to carry a variety of payloads, gives it the potential to become the ADF’s first fully autonomous lethal weapon system. Although Loyal Wingman is still far from the Terminator-esque ‘killer robots’ that opponents of such systems prophesy, now is a good opportunity to consider the potential ethical issues that may arise from the employment of such systems in the future.

In the near future, it is highly likely that AI technology will advance to the point where autonomous weapon systems (AWS) such as Loyal Wingman can be employed with minimal human oversight. This paper argues that, in such a case, issues may arise in establishing a viable chain of accountability in the event that an accidental or intentional war crime is committed by a fully autonomous weapon system. It will begin by defining the degrees of autonomy and their relevance to AWS. It will then examine the three loci of responsibility that might be held accountable for a war crime committed by an AWS – the programmer, the AWS itself, and the commander – and demonstrate why none of these actors can be held satisfactorily accountable. In light of this, it concludes that the employment of AWS must be limited until these issues can be adequately addressed.

Defining Autonomy

Before expanding on the moral implications of AWS, it is worth establishing what makes an AWS different from similar platforms such as UAVs, which can also operate autonomously in certain circumstances. Hellström argues that we can categorise robots based on their ‘autonomous power’: their capacity to perform actions and make decisions without input from a human designer, programmer, or operator.[1] A landmine has very low autonomous power because it is capable of only one ‘action’ (detonation) without the need for human control. Conversely, a self-driving car has a high degree of autonomous power because it can perform various complex tasks (braking, accelerating, turning, etc.) without human control.

Milani’s Taxonomy of AWS Automation[2] provides a useful framework that differentiates levels of autonomy in a military context across dimensions such as decision making, targeting and behaviour, on a scale from ‘0’ (no automation) to ‘5’ (full automation). On this scale, UAVs and other remotely piloted systems possess what he describes as ‘partial automation’, which allows inbuilt systems to perform specified tasks without continuous human supervision. More commonly described as involving a ‘human in the loop’, examples of this include a UAV navigating waypoints on autopilot and ‘fire and forget’ munitions. In both instances, the system can perform limited targeting or navigation functions automatically, but contingent on human approval.[3] The next level of automation, ‘conditional automation’, is reached when a system can perform tasks automatically within predefined parameters specified by a human. This profile, commonly referred to as ‘human on the loop’, is already employed by the ADF in systems such as the Close-In Weapon System (CIWS), which can acquire and automatically engage targets that match a predefined threat profile.
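To make these supervision profiles concrete, the sketch below renders the 0–5 scale as a simple enumeration. This is an illustrative sketch only, not drawn from Milani’s paper: the discussion above names only the ‘no’, ‘partial’, ‘conditional’, ‘high’ and ‘full’ automation levels, so the label for level 1 is an assumption.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """A 0-5 scale of AWS automation (illustrative rendering only).

    Only levels 0, 2, 3, 4 and 5 are named in the discussion above;
    ASSISTED (level 1) is a placeholder label assumed here.
    """
    NO_AUTOMATION = 0  # every action requires direct human control
    ASSISTED = 1       # assumed label: the system aids a human operator
    PARTIAL = 2        # 'human in the loop': acts only with human approval
    CONDITIONAL = 3    # 'human on the loop': acts within preset parameters
    HIGH = 4           # complex tasks under limited human supervision
    FULL = 5           # 'human out of the loop': full missions unsupervised
```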

The current version of Loyal Wingman is designed to operate at a level of conditional or high automation, in which it will perform complex tasks (patrol this area), or even full missions (patrol this area and engage identified threats), with limited or no human supervision.[4] Future versions are likely to be even more capable and may operate for tens of hours without a human operator approving decisions prior to execution. At this stage, AWS will have attained a ‘human out of the loop’ level of autonomy. With this heightened autonomy, however, comes increased risk. Although humans are not guaranteed to be better decision makers – indeed, many scholars suggest that removing the human from the loop will increase the speed and accuracy of decisions[5] – as the following section will demonstrate, retaining human involvement in the process is significant when establishing responsibility for an AWS’ actions.
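The practical difference between these profiles reduces to where, if anywhere, a human approval or veto gate sits in the system’s decision cycle. The following sketch, which builds on the enumeration above and uses hypothetical operator callbacks, illustrates that control flow and shows why the ‘human out of the loop’ case is the one in which the chain of accountability examined below becomes unclear.

```python
from typing import Callable

# Assumes the AutomationLevel enum from the previous sketch.
def supervise(level: AutomationLevel,
              proposed_action: str,
              operator_approves: Callable[[str], bool],
              operator_vetoes: Callable[[str], bool]) -> bool:
    """Decide whether a system-proposed action may proceed, given the
    human-supervision profile implied by its automation level. The two
    callables are hypothetical stand-ins for an operator interface."""
    if level <= AutomationLevel.PARTIAL:
        # Human IN the loop: nothing proceeds without explicit approval.
        return operator_approves(proposed_action)
    if level == AutomationLevel.CONDITIONAL:
        # Human ON the loop: the action proceeds unless the operator
        # intervenes within the available window.
        return not operator_vetoes(proposed_action)
    # Human OUT of the loop: no gate remains; the system acts on its own
    # determination -- the point at which, as argued below, no clear
    # locus of responsibility exists.
    return True
```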

Robot War Crimes

Moseley contends that jus in bello requires that “the agents of war be held responsible for their actions”.[6] The theory of just war is underpinned by the concept of responsibility because, fundamentally, just war necessitates that someone can be held accountable “for the deaths of enemies killed in the course of it”.[7] This issue is of particular significance in the event that a person’s death is in contravention of the laws of armed conflict, and therefore morally and legally equivalent to murder.

The employment of highly autonomous weapon systems clouds this issue. The absence of a human in or on the loop means that the chain of responsibility for an AWS’ actions is unclear. This is of particular significance in the event that a decision by an AWS leads to what amounts to a war crime. Should this occur, three loci of responsibility lend themselves to culpability: the programmer who designed the system’s parameters, the AWS which conducted the mission, and the commander who authorised it. This section will demonstrate that, except in very limited cases, none of these parties can be held satisfactorily accountable.

The Programmer

Given that AWS are governed by their programming, it is easy to presume that the programmer ought to be held liable for any error on the system’s part. As Schmitt notes, the mere fact that a human is not constantly in control of an AWS does not mean that no human is responsible for its actions because, ultimately, a human must decide how to program the system.[8] However, Quince argues that this presents an overly simplistic view of corporate and individual responsibility. Attempting to hold developers responsible for every death or war crime committed by an AWS they design would be analogous to holding the manufacturer of a firearm, munition or other weapon liable for every unlawful killing committed with it.[9] Except in cases where it can be proven that a violation occurred due to the developer’s negligence or intentional mis-programming, it would be near impossible to demonstrate the elements necessary to establish their responsibility.[10]

In addition, advances in AI autonomy will further complicate the issue of developer and programmer liability. Sparrow notes that as AWS become progressively more sophisticated, their inherent autonomy will mean that programmers are increasingly unable to predict their behaviour. He rightfully questions the extent to which a person can be responsible for an event they can neither predict nor control, concluding that attempting to hold programmers responsible for their AWS’ actions in such cases would be comparable to holding parents responsible for their children’s actions once the children have left their care.[11] Thus, except in the most egregious cases, the programmer cannot be held morally liable for war crimes committed by AWS.

The AWS

Although AWS may be held causally responsible for their actions, attempts to hold the robot itself accountable prove unsatisfactory when followed to their logical conclusions. Firstly, AWS cannot be viewed as accountable agents because they are unable to meet the requirements necessary to be considered morally responsible actors; secondly, AWS are unsuitable objects for human concepts of justice, such as punishment, in the event of a transgression.

The first objection to holding AWS accountable rests on proving their status as morally responsible agents. Although AWS can be considered causally responsible for their actions, it does not automatically follow that they are morally responsible. Himmelreich contends that for an actor to be considered a morally responsible agent, they must possess both intentional agency (the ability to make independent decisions and act on them) and moral agency (the ability to comprehend morality such that they can be held responsible for the moral consequences of their actions).[12] Whilst AWS can be seen to possess intentional agency, they lack the capacity to demonstrate moral agency. For this reason, AWS should be considered what Himmelreich terms merely minimal agents.[13] Children are another example of merely minimal agents: they possess the autonomy to determine their own actions, but lack the maturity to demonstrate the understanding of consequences necessary for them to be held responsible. Skerker et al. articulate this distinction clearly, observing that a child may refrain from hitting their sibling because they have been told not to, rather than from an understanding that it is wrong to cause unnecessary harm to others. As such, their obedience is not indicative of moral maturity, because they do not display an understanding of the moral concepts that underpin the rules they follow.[14] Suitably advanced iterations of AWS should similarly be considered merely minimal agents. Whilst they may possess a significant degree of autonomy and appear to act ethically when following their programming, they cannot be said to ‘understand’ that programming. For this reason, they cannot be held to be morally responsible actors in their own right.

Even if we were to consider a state of AI advanced enough for its use in AWS to make such a system a morally responsible actor, scholars such as Herbert note that attempting to punish such a robot would be futile. She opines that conventional punishments meted out to humans work by restricting life or liberty for the purposes of retribution or reform – concepts that have no significance to a robot.[15] Thus, attempts to hold AWS responsible for their actions prove unsatisfactory on two grounds: first, because AWS lack the moral agency to be held responsible for their actions; and second, because attempts to punish AWS would be ineffective.

The Commander

The final locus of responsibility that could be held accountable for an illegal attack by an AWS is the commander who authorised its use. This view has its strengths, as it can draw on the established precedent regarding the responsibility of commanders for the actions of their subordinates set by the war crimes tribunals that followed WWII and numerous subsequent cases.[16] Although it appears reasonable to place AWS under this same relationship, as this section argues, doing so creates a dangerous false equivalency between AWS and human subordinates in terms of moral rights and obligations.

Chengeta suggests that describing AWS as ‘subordinates’ following ‘orders’ is dangerous. He provides a compelling argument that those who deploy AWS should not be labelled commanders, and that AWS should not be considered agents or combatants.[17] Referring to those who deploy AWS as commanders grants such systems the equivalency of combatants or fighters – in other words, humans – with the capacity for both autonomy and responsibility. As the previous section demonstrated, this is not the case: AWS are at most merely minimal agents. Accordingly, Chengeta stresses that AWS must be considered strictly as weapons, not robot combatants. Asaro expands on this view, noting that individuals cannot morally abdicate their responsibility to an irresponsible agent; because AWS are not responsible agents, commanders cannot delegate this authority to them.[18] Just as a soldier does not subcontract their agency to their rifle, mortar, or missile when they fire at a distant target, neither can a commander subcontract their agency to an AWS. Commanders can therefore only be held accountable insofar as they or their human subordinates are complicit in using AWS to perpetrate a war crime. This view is consistent with existing international law, which holds a commander liable if they were aware that a subordinate had intentionally mis-programmed or unlawfully employed an AWS and did nothing to stop them or to punish them after the fact.[19]

Conclusion and Implications

As this paper has demonstrated, there is currently no clear locus of responsibility that can be held accountable in the event that an AWS commits a war crime. The liability of programmers and developers will become increasingly limited as AI becomes more autonomous. AWS cannot be held accountable in their own right as merely minimal agents, a status that is unlikely to change until AI develops sufficient moral agency to attain personhood, and with it moral equivalency to a human being. Finally, this paper has argued that equating the relationship between AWS and their operators to that between a commander and a subordinate creates a dangerous false equivalency that must be avoided. AWS are not moral agents and cannot be viewed as combatants or robot soldiers.

Ultimately, this lack of accountability creates a ‘deficit in the accounting books’, where there is no one to hold responsible for an action that demands responsibility and, moreover, no one to punish for a wrongdoing.[20] The solution to the moral concerns raised by this accountability gap is to employ AWS only in situations where a human is always able to exercise meaningful control over their operations. Although this is not an issue faced by current AWS such as Loyal Wingman, it is incumbent on today’s commanders to consider what steps the ADF would have to take to ensure a viable chain of accountability for more advanced future systems. Seneca’s observation that ‘a sword is never a killer, it is a tool in the killer’s hands’[21] remains relevant. Although the sword is now much more sophisticated, and capable of striking from much further away, it is ultimately the wielder who must bear responsibility for its actions.

Bibliography

Arkin, Ronald. "Ethical Robots in Warfare." Institute of Electrical and Electronics Engineers Technology and Society Magazine 28, no. 1 (2009): 30-33. https://doi.org/10.1109/MTS.2009.931858.

Asaro, Peter. "On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making." International Review of the Red Cross 94, no. 886 (2012): 687-709. https://doi.org/10.1017/S1816383112000768.

Batt, Jonathan. "Lethal Autonomous Weapons and the Professional Military Ethic." Master of Military Art and Science thesis, US Army Command and General Staff College, 2018. https://apps.dtic.mil/sti/pdfs/AD1084117.pdf.

Chengeta, Thompson. "Accountability Gap: Autonomous Weapon Systems and Modes of Responsibility in International Law." Denver Journal of International Law & Policy 45, no. 1 (2020): 1-50. https://digitalcommons.du.edu/cgi/viewcontent.cgi?article=1011&context=djilp.

International Committee of the Red Cross. Autonomous Weapon Systems: Technical, Military, Legal and Humanitarian Aspects. Geneva, Switzerland: International Committee of the Red Cross, 2014. https://www.icrc.org/en/download/file/1707/4221-002-autonomous-weapons-systems-full-report.pdf.

Hellström, Thomas. "On the Moral Responsibility of Military Robots." Ethics and Information Technology 15, no. 2 (2013): 99-107. https://doi.org/10.1007/s10676-012-9301-2.

Herbert, Carmen. "Autonomous Weapons Systems: The Permissible Use of Lethal Force, International Humanitarian Law and Arms Control." Stellenbosch University, 2017. https://scholar.sun.ac.za/bitstream/handle/10019.1/102687/herbert_autonomous_2017.pdf?isAllowed=y&sequence=1.

Heyns, Christof. Increasingly Autonomous Weapon Systems: Accountability and Responsibility. Geneva, Switzerland: ICRC, 2014. https://www.icrc.org/en/download/file/1707/4221-002-autonomous-weapons-systems-full-report.pdf.

Himmelreich, Johannes. "Responsibility for Killer Robots." Ethical Theory and Moral Practice 22 (2019): 731-47. https://doi.org/10.1007/s10677-019-10007-9.

Milani, Peter. "Autonomous Weapon Systems for the Land Domain." The Cove, 2020. https://cove.army.gov.au/article/autonomous-weapon-systems-the-land-domain.

Mitchell, Andrew. "Failure to Halt, Prevent or Punish: The Doctrine of Command Responsibility for War Crimes." Sydney Law Review 22, no. 3 (2000): 381-410. https://doi.org/10.3316/ielapa.200102277.

Moseley, Alexander. "Just War Theory." Internet Encyclopedia of Philosophy. (n.d.). https://iep.utm.edu/justwar/.

Pettit, Philip. "Responsibility Incorporated." Ethics 117, no. 2 (2007): 171-201. https://doi.org/10.1086/510695.

Quince, Sophie. "The Laws Surrounding Responsibility and Accountability of Autonomous Weapons Systems Are Insufficient: An Analysis of Legal and Ethical Implications of Autonomous Weapons Systems." The Student Journal of Professional Practice and Academic Research 3, no. 1 (2020): 1-50. https://www.northumbriajournals.co.uk/index.php/sjppar/article/view/1113/1471.

"Boeing Will Unveil This 'Loyal Wingman' Combat Drone for Australia's Air Force Tomorrow (Updated)." The Warzone, The Drive, 2019, accessed 18 August 2021, 2021, https://www.thedrive.com/the-war-zone/26656/boeing-will-unveil-this-loyal-wingman-combat-drone-for-australias-air-force-tomorrow.

Schmitt, Michael. "Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics." Harvard Law School National Security Journal  (2013): 1-37. https://harvardnsj.org/2013/02/autonomous-weapon-systems-and-international-humanitarian-law-a-reply-to-the-critics/.

Seneca, Lucius. "Epistulae Morales Ad Lucilium (Moral Letters to Lucilius)." Letter LXXXVII: Some arguments in favor of the simple life, n.d.

Skerker, Michael, Duncan Purves, and Ryan Jenkins. "Autonomous Weapons Systems and the Moral Equality of Combatants." Ethics and Information Technology 22 (2020): 197-209. https://doi.org/10.1007/s10676-020-09528-0.

Sparrow, Robert. "Killer Robots." Journal of Applied Philosophy 24, no. 1 (2007): 62-77. https://www.jstor.org/stable/24355087.

Footnotes

[1] Thomas Hellström, "On the Moral Responsibility of Military Robots," Ethics and Information Technology 15, no. 2 (2013), https://doi.org/10.1007/s10676-012-9301-2.

[2] Peter Milani, "Autonomous Weapon Systems For The Land Domain," (2020). https://cove.army.gov.au/article/autonomous-weapon-systems-the-land-domain.

[3] International Committee of the Red Cross, Autonomous Weapon Systems: Technical, Military, Legal and Humanitarian Aspects (Geneva, Switzerland: International Committee of the Red Cross, 2014), https://www.icrc.org/en/download/file/1707/4221-002-autonomous-weapons-systems-full-report.pdf.

[4] "Boeing Will Unveil This 'Loyal Wingman' Combat Drone For Australia's Air Force Tomorrow (Updated)," The Warzone, The Drive, 2019, accessed 18 August 2021, 2021, https://www.thedrive.com/the-war-zone/26656/boeing-will-unveil-this-loyal-wingman-combat-drone-for-australias-air-force-tomorrow.

[5] See for example Ronald Arkin, "Ethical Robots in Warfare," Institute of Electrical and Electronics Engineers Technology and Society Magazine 28, no. 1 (2009), https://doi.org/10.1109/MTS.2009.931858.

[6] Alexander Moseley, "Just War Theory," Internet Encyclopedia of Philosophy (n.d.). https://iep.utm.edu/justwar/.

[7] Robert Sparrow, "Killer Robots," Journal of Applied Philosophy 24, no. 1 (2007): 62, https://www.jstor.org/stable/24355087.

[8] Michael Schmitt, "Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics," Harvard Law School National Security Journal (2013): 33, https://harvardnsj.org/2013/02/autonomous-weapon-systems-and-international-humanitarian-law-a-reply-to-the-critics/.

[9] Sophie Quince, "The laws surrounding responsibility and accountability of autonomous weapons systems are insufficient: An analysis of legal and ethical implications of autonomous weapons systems," The Student Journal of Professional Practice and Academic Research 3, no. 1 (2020): 22, https://www.northumbriajournals.co.uk/index.php/sjppar/article/view/1113/1471.

[10] Jonathan Batt, "Lethal Autonomous Weapons and the Professional Military Ethic" (Master of Military Art and Science thesis, US Army Command and General Staff College, 2018), https://apps.dtic.mil/sti/pdfs/AD1084117.pdf.

[11] Sparrow, "Killer Robots," 70.

[12] Johannes Himmelreich, "Responsibility for Killer Robots," Ethical Theory and Moral Practice 22 (2019), https://doi.org/10.1007/s10677-019-10007-9.

[13] Himmelreich, "Responsibility for Killer Robots," 4.

[14] Michael Skerker, Duncan Purves, and Ryan Jenkins, "Autonomous weapons systems and the moral equality of combatants," Ethics and Information Technology 22 (2020): 203, https://doi.org/10.1007/s10676-020-09528-0.

[15] Carmen Herbert, "Autonomous Weapons Systems: The Permissible Use of Lethal Force, International Humanitarian Law and Arms Control" (Stellenbosch University, 2017), https://scholar.sun.ac.za/bitstream/handle/10019.1/102687/herbert_autonomous_2017.pdf?isAllowed=y&sequence=1.

[16] Andrew Mitchell, "Failure to Halt, Prevent or Punish: The Doctrine of Command Responsibility for War Crimes," Sydney Law Review 22, no. 3 (2000), https://doi.org/10.3316/ielapa.200102277.

[17] Thompson Chengeta, "Accountability Gap: Autonomous Weapon Systems and Modes of Responsibility in International Law," Denver Journal of International Law & Policy 45, no. 1 (2020), https://digitalcommons.du.edu/cgi/viewcontent.cgi?article=1011&context=djilp.

[18] Peter Asaro, "On banning autonomous weapon systems: human rights, automation, and the dehumanization of lethal decision-making," International Review of the Red Cross 94, no. 886 (2012): 701, https://doi.org/10.1017/S1816383112000768.

[19] Christof Heyns, Increasingly autonomous weapon systems: Accountability and responsibility, ICRC (Geneva, Switzerland, 2014), https://www.icrc.org/en/download/file/1707/4221-002-autonomous-weapons-systems-full-report.pdf.

[20] Philip Pettit, "Responsibility Incorporated," Ethics 117, no. 2 (2007): 171-201, https://doi.org/10.1086/510695.

[21] Lucius Seneca, "Epistulae Morales ad Lucilium (Moral Letters to Lucilius)," Letter LXXXVII: Some arguments in favor of the simple life (n.d.).

