
The rapid integration of Artificial Intelligence (AI) into military operations is reshaping how modern wars are fought, won, and governed. From surveillance drones that detect suspicious movement to decision-support systems that analyse satellite images, AI is increasingly becoming a partner in operational and strategic reasoning. And yet, with this new partnership comes an old and fundamental question: who is responsible when something goes wrong? If an autonomous drone misidentifies a civilian convoy as a military target, or if an AI-powered command system recommends a course of action that leads to unnecessary loss of life, where does moral and legal responsibility lie? With the developer? The operator? The commander? Or nowhere at all?

This is not a speculative or distant concern. Autonomous and semi-autonomous systems are already in use on the battlefield. In Ukraine, AI-enabled drones are being deployed for reconnaissance and targeting, sometimes with limited or no direct human oversight. The Israeli military has reportedly used AI in operational planning and strike coordination. The United States, China, Russia, and others are investing heavily in developing AI systems that can respond faster than humans to rapidly evolving battlefield conditions. And Australia, through its Defence Science and Technology Group and various strategic partnerships, is actively exploring AI capabilities in support of its Future Force posture.

The appeal of AI in military operations is clear. Machines do not get tired, emotional, or distracted. They can process vast quantities of data in seconds, detect patterns that humans might miss, and act with precision at speeds that no human could match. AI systems can be embedded in decision-support tools, autonomous vehicles, logistics networks, and threat detection systems. In principle, this could lead to fewer errors, reduced collateral damage, and better outcomes for all involved.

But AI also introduces new kinds of risks—moral, legal, and institutional. Chief among these is the challenge of accountability.

In traditional military operations, responsibility for an action can usually be traced to a person or group of people. A commander issues an order. A pilot follows through. A logistics officer fails to supply the right materials. The military chain of command allows us to assign praise and blame, to conduct inquiries, and to ensure that ethical and legal standards are upheld. In short, humans bear the weight of responsibility.

With AI, this clarity begins to erode. AI systems, particularly those based on machine learning and neural networks, do not follow human-readable rules. Instead, they learn complex statistical patterns from data and make predictions or decisions based on those patterns. As I discussed in my earlier article on the “black box” problem, the internal reasoning of such systems can be opaque even to their own designers. This leads to what philosophers call a responsibility gap—a space where something morally significant happens, but no one seems clearly accountable for it.

Suppose an AI-enabled targeting system misidentifies a structure as a military installation and recommends a strike. A human operator, trusting the system’s high confidence level and facing time pressure, authorises the action. After the fact, it becomes clear that the system made an error. Did the fault lie with the operator, who may not have fully understood the system’s limitations? With the developers, who trained the AI on flawed data? With the commander, who approved the system’s deployment? Or with none of the above, since the error was the result of complex statistical interactions no human could have predicted?

This kind of moral ambiguity is troubling, especially in war, where clarity of responsibility is not just an ethical ideal but a legal necessity. The laws of armed conflict (LOAC), including the Geneva Conventions, require that military actions be proportionate, discriminate between combatants and civilians, and be justified by military necessity. When decisions are delegated to or influenced by AI, the chain of accountability required by LOAC may be disrupted.

Some commentators argue that responsibility should always remain with a human. This is the reasoning behind the widely supported principle of meaningful human control in the use of lethal autonomous weapons. But this raises further challenges. What counts as meaningful? Is it enough that a human pushes the final button, even if their understanding of the system is limited? Or must the human understand how the recommendation was generated and be able to override it based on independent judgment?

In practice, many military AI systems function as decision-support tools rather than autonomous agents. That is, they provide advice, risk assessments, or targeting suggestions that a human then reviews. This human-in-the-loop model preserves a formal line of accountability, but may obscure deeper issues. Studies in psychology and human–machine interaction have shown that people often exhibit automation bias—the tendency to defer to a machine’s recommendation, especially under pressure. So while the human remains “in the loop” on paper, they may not exercise meaningful oversight in reality.

Even more complex is the case of emergent behaviour in AI systems. These are behaviours or strategies that arise from the system’s interaction with the environment, rather than being explicitly programmed. For example, a battlefield AI might learn to prioritise certain types of movements or communications as signs of enemy activity, based on statistical patterns it has inferred. These behaviours can sometimes produce impressive results—but they can also lead to catastrophic failures. When emergent behaviour leads to harm, the question of responsibility becomes even murkier.

So how can we respond to the accountability challenge? While we may never eliminate the responsibility gap entirely, there are several ways to reduce it.

First, AI systems must be designed with transparency and traceability in mind. This means building in mechanisms for logging decisions, providing explanations (even approximate ones), and ensuring that human users can understand how the system reached a recommendation. The field of explainable AI (XAI) is advancing rapidly, and defence applications should prioritise these capabilities, even if it means trading off some performance for interpretability.
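To make the traceability requirement concrete, the sketch below shows, in Python, the kind of audit record a decision-support tool might capture for every recommendation it issues. This is a minimal illustration only: the system name, field names, and values are all hypothetical, not drawn from any actual defence system.

```python
import json
import time
import uuid

def log_recommendation(model_version, inputs_summary, recommendation,
                       confidence, explanation, operator_id):
    """Record an AI recommendation with the context needed to audit it later.

    Each record ties a specific output to the model that produced it, the
    data it saw, the confidence the operator was shown, an approximate
    explanation, and the human who reviewed it - the minimum needed to
    reconstruct a decision in an inquiry.
    """
    record = {
        "record_id": str(uuid.uuid4()),    # unique ID so the decision can be cited in a review
        "timestamp_utc": time.time(),      # when the recommendation was issued
        "model_version": model_version,    # which trained model produced the output
        "inputs_summary": inputs_summary,  # what the system was shown
        "recommendation": recommendation,  # the output presented to the operator
        "confidence": confidence,          # the score the operator saw at the time
        "explanation": explanation,        # approximate rationale (e.g. top contributing features)
        "operator_id": operator_id,        # who reviewed and acted on the output
    }
    return json.dumps(record)

# Hypothetical example of a logged recommendation:
entry = log_recommendation(
    model_version="target-classifier-2.3",
    inputs_summary="sensor track 0417, 3 vehicles, northbound",
    recommendation="flag for human review",
    confidence=0.87,
    explanation="pattern match: convoy spacing, road class",
    operator_id="OP-114",
)
```

Even a record this simple changes the accountability picture: it preserves who saw what, when, and with what stated confidence, so that responsibility can be traced after the fact rather than reconstructed from memory.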

Second, military organisations must maintain clear lines of accountability, not just in legal terms but in institutional culture. This includes defining who is responsible for validating AI systems, approving their use, monitoring their performance, and acting on their outputs. Just as there are procedures for weapon certification and rules of engagement, so too must there be procedures for AI deployment and review.

Third, we must recognise that training and education are crucial. Personnel interacting with AI systems need not be data scientists, but they must understand the limitations of the tools they are using, including common failure modes, sources of bias, and indicators of overconfidence. Commanders, in particular, must be trained to interrogate AI recommendations critically and to foster an environment where questioning machine output is not seen as insubordination, but as good leadership.

Fourth, legal and ethical doctrine must evolve to meet these new realities. This includes updating military manuals, rules of engagement, and legal review procedures to reflect the use of AI-enabled systems. International humanitarian law is not static; it has adapted in the past to new technologies, from submarines to cyber warfare. The same must happen with AI. Australia has already shown leadership in this area by participating in international forums on responsible military use of AI and committing to ethical principles in defence innovation.

Finally, we should consider whether new institutional roles or structures are needed. For example, some have proposed the creation of AI ethics boards within defence organisations to review deployments, assess risks, and provide guidance. Others suggest that certain uses of AI—particularly those involving lethal force—should require independent oversight or additional authorisation layers.

The integration of AI into military decision-making is not simply a technical upgrade. It is a profound shift in how we understand agency, responsibility, and moral judgement in war. As we delegate more tasks to machines, we must be vigilant in ensuring that the values and norms that underpin military professionalism are not eroded. Machines can assist with decisions, but they cannot bear responsibility. That remains a uniquely human burden.

War will always involve uncertainty, error, and tragedy. AI may help reduce these—but it may also introduce new forms of opacity and moral hazard. The challenge before us is not just to build smarter machines, but to build more accountable institutions. In the end, responsibility cannot be automated.

Carroll, N.G. 2026. 'Moral Machines on the Battlefield: Can Artificial Intelligence Be Held Accountable?'. Available at: https://theforge.defence.gov.au/article/moral-machines-battlefield-can-artificial-intelligence-be-held-accountable (Accessed: 25 February 2026).


Disclaimer

The views expressed in this article are those of the author and do not necessarily reflect the position of the Department of Defence or the Australian Government.


 
