Artificial Intelligence (AI) is no longer a futuristic concept confined to science fiction. It is here, integrated into the fabric of our everyday lives—from search engines that guess what we’re trying to type, to smartphones that recognise our faces, to cars that drive themselves. But what exactly is AI? And what does it mean for defence and military operations, especially when we consider that the inner workings of many AI systems remain opaque even to their own developers? This opacity is known as the “black box” problem, and it raises profound questions for leadership, decision-making, and military ethics in an age where machines are no longer just tools, but also—potentially—partners in strategic reasoning.

At its simplest, AI refers to computer systems designed to perform tasks that typically require human intelligence. These tasks might include recognising speech, translating languages, identifying images, making predictions, or playing games. AI technologies span a wide range of approaches, but in recent years, one branch has captured global attention: machine learning, and within that, deep learning and large language models (LLMs).

Machine learning refers to the ability of a computer system to learn patterns from data and make decisions based on that learning. Unlike traditional software, which follows clearly defined instructions written by a programmer, a machine learning system is trained on large datasets and “learns” for itself how to solve a problem. Deep learning takes this a step further. Inspired loosely by the structure of the human brain, deep learning models use layered networks of artificial neurons—so-called neural networks—to detect complex patterns in data. This is the technology behind everything from facial recognition and autonomous drones to Netflix recommendations.
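
To make the contrast with traditional software concrete, the short Python sketch below (purely illustrative, with synthetic data and a deliberately tiny network) trains a small neural network. No programmer writes the decision rule; the network adjusts its internal weights until they fit the patterns in the example data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: 200 examples, two features, one binary label.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # the hidden "true" pattern

# One hidden layer of 8 artificial neurons, trained by gradient descent.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    # Forward pass: layered transformations of the inputs.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    # Backward pass: nudge every weight to reduce prediction error.
    grad_out = (p - y)[:, None] / len(X)
    dW2 = h.T @ grad_out
    db2 = grad_out.sum(axis=0)
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ grad_h
    db1 = grad_h.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# No programmer wrote "if X then Y": the learned rule lives only in the
# numbers stored in W1, b1, W2 and b2.
accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy of the learned rule: {accuracy:.2f}")
```

The point worth noticing is that once training finishes, the "rule" exists only as arrays of numbers, a detail that becomes important when we turn to the black box problem below.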

Large Language Models (LLMs) like ChatGPT, GPT-4, and other systems developed by companies such as OpenAI, Google, and Meta are deep learning models trained on massive datasets of text—books, news articles, websites, and more. These models don’t understand language in a human sense. Rather, they predict what word is likely to come next in a sentence, given all the words that came before. Yet because of the scale of their training data and the sophistication of their architecture, they can generate fluent, plausible, and often useful responses to questions, prompts, and dialogue. They can summarise documents, draft reports, offer advice, and even simulate ethical reasoning or political debate.
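
A toy sketch can illustrate the prediction task, though not the scale. The example below simply counts which word tends to follow which in a tiny made-up corpus; real LLMs replace these counts with deep neural networks trained on billions of words, but the underlying question is the same: given the words so far, which word is likely to come next?

```python
from collections import Counter, defaultdict

# A deliberately tiny, made-up corpus (real models train on billions of words).
corpus = (
    "the commander reviewed the plan . "
    "the commander approved the plan . "
    "the analyst reviewed the report ."
).split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return candidate next words with their estimated probabilities."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.most_common()}

print(predict_next("commander"))  # {'reviewed': 0.5, 'approved': 0.5}
print(predict_next("the"))        # 'commander' and 'plan' lead the list
```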

So why does this matter to the military? The strategic landscape is shifting rapidly. Nations around the world are investing heavily in AI and autonomous systems, not merely for reasons of efficiency but because of the belief that AI will confer a decisive edge in future conflict. The United States has announced billions of dollars in AI investment across military domains. China has declared its aim to become the global leader in AI by 2030, integrating these technologies into military modernisation plans. Russia, Israel, the United Kingdom, and numerous NATO allies are also exploring AI-enhanced systems for logistics, intelligence, surveillance, cyber operations, targeting, and more.

The potential applications of AI in defence are wide-ranging. In logistics and planning, AI can optimise supply chains and resource allocation. In cyber operations, it can detect anomalies faster than human operators. In intelligence and surveillance, AI systems can process vast quantities of imagery, signals, and communications data far faster than human analysts, flagging potential threats in near-real time. In wargaming and strategy simulations, AI models can test the implications of different tactical choices. Perhaps most controversially, AI has also been proposed as a decision-support tool in targeting and autonomous weapons systems, where speed, precision, and reliability are paramount in high-stakes environments.

To take a few real-world examples: AI is already embedded in the US Department of Defense’s Project Maven, which uses machine learning to analyse drone footage and identify objects of interest. In Ukraine, both sides are using AI to track troop movements, optimise drone deployment, and assist in battlefield coordination. Australia, too, has recognised the importance of responsible AI integration in Defence, with initiatives underway to ensure that emerging technologies align with national values and operational needs.

All of this suggests that AI could become a force multiplier. It could help militaries act faster, see further, and understand more. But these benefits come with challenges, and perhaps none is more significant or more philosophically rich than what is known as the “black box” problem.

The term “black box” refers to a system whose internal workings are not visible or comprehensible to its users. In aviation, a black box records data about a flight; you don’t necessarily understand how it works, but you trust that it will faithfully store the information. In AI, however, the black box problem points to something more troubling: that even when we do design the system, we may not fully understand how or why it reaches its conclusions.

Deep learning models are notoriously hard to interpret. When an AI system identifies a potential target or flags an individual as suspicious, it may be drawing on millions of parameters that interact in subtle and non-linear ways. It is often impossible to trace back precisely why the model reached that conclusion. There is no line of code saying “if X then do Y.” Instead, the reasoning is buried in the statistical correlations learned from data.

This creates serious challenges for military leadership and command. In war, decisions must be justified—to commanding officers, to political leaders, to the public, and to history. If an AI model proposes a course of action—say, that a certain vehicle is likely to be hostile, or that a particular building is a high-value target—but we cannot explain why it reached that judgment, should we act on it? Who is responsible if the decision turns out to be wrong? The operator? The commander? The developer? These are not hypothetical questions. They strike at the heart of military ethics, operational accountability, and the laws of armed conflict.

The ADF’s Military Ethics Doctrine underscores that responsibility, accountability, and sound judgement are at the core of professional military ethics. It reminds us that ethical leadership is not only about achieving mission success, but about ensuring that decisions can be justified to subordinates, peers, and the Australian public. This emphasis on responsibility sits uneasily alongside the black box problem: if commanders cannot fully explain or account for why an AI system reached its conclusion, then the very standards of justification and accountability that the doctrine demands risk being undermined. Linking AI-enabled decision-making to the doctrine therefore highlights the need for careful safeguards: to preserve the commander’s ethical responsibility, to maintain accountability in the chain of command, and to ensure that technology enhances rather than erodes the moral agency of military professionals.

Moreover, trust in AI is not merely a technical matter. In high-stakes scenarios, people tend to over-rely on automated systems when they seem authoritative, even if they do not understand them. This is known as automation bias. Conversely, they may also under-rely on them if they are too opaque or occasionally wrong, discarding useful input out of distrust. In both cases, the black box problem distorts the crucial judgment calls that military personnel must make under pressure.

Consider a soldier on the ground whose targeting system flags a building as containing enemy combatants. The system’s confidence is high, but neither the soldier nor the commander receiving the report knows why it thinks this: perhaps the model has picked up on a pattern of foot traffic or infrared signatures it was trained to associate with hostile forces. The commander now faces a dilemma: act on the recommendation, potentially saving lives and gaining an advantage, or delay or reject it for lack of confidence, possibly missing a legitimate threat. Multiply this scenario across a theatre of war, and the stakes become enormous.

Ethical decision-making in war is already one of the most difficult domains of human action. It requires balancing mission objectives, protection of non-combatants, proportionality, and the moral integrity of individual soldiers. If AI systems are to assist in such decisions, we must find ways to make their reasoning more transparent—or at least to mitigate the effects of their opacity.

What can be done?

First, we can invest in explainable AI (XAI). This field of research aims to design AI systems that not only make predictions or decisions but also provide interpretable reasons for those outputs. For instance, rather than simply stating that a vehicle is “hostile with 95% confidence,” the system might indicate which features contributed to that judgment: movement patterns, radio emissions, recent location history, and so on. While these explanations may not be perfect, they can help commanders evaluate whether the model’s assumptions align with their own operational understanding.
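
As a simplified illustration of this idea (the feature names, weights and numbers below are entirely hypothetical), a linear scoring model can break its overall "confidence" into the contribution each cue made. Established XAI techniques such as SHAP and LIME extend this style of attribution to far more complex, non-linear models.

```python
import math

# Hypothetical features describing an observed vehicle (illustrative only).
features = {
    "movement_pattern_match": 0.9,    # similarity to known hostile patterns
    "radio_emissions_detected": 1.0,  # emissions detected on a watched band
    "recent_location_history": 0.4,   # time spent near a staging area
    "time_of_day_anomaly": 0.2,
}

# Hypothetical learned weights for a simple linear scoring model.
weights = {
    "movement_pattern_match": 2.1,
    "radio_emissions_detected": 1.4,
    "recent_location_history": 0.8,
    "time_of_day_anomaly": 0.3,
}
bias = -1.5

score = bias + sum(weights[k] * v for k, v in features.items())
confidence = 1.0 / (1.0 + math.exp(-score))  # squash the score to a 0-1 "confidence"

print(f"assessed hostile with confidence {confidence:.0%}")
print("contribution of each cue to the score:")
for name, value in features.items():
    print(f"  {name:26s} {weights[name] * value:+.2f}")
```

Even this crude breakdown gives a commander something to interrogate: if the radio-emissions cue is doing most of the work and that report is known to be unreliable, the recommendation can be discounted accordingly.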

Second, we can design human-in-the-loop or human-on-the-loop systems. These are AI systems that support, but do not replace, human decision-makers. They can provide recommendations, filter information, and offer options, but the final decision rests with a person. This preserves accountability and moral responsibility while still leveraging the speed and scale of AI capabilities. Importantly, training for these roles must focus not just on technical proficiency, but on cultivating good judgment in how to weigh and interpret AI input.
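
A minimal sketch of the pattern (all names and data below are hypothetical) looks like this: the system recommends, explains and waits, and nothing proceeds without an explicit, logged human decision.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    summary: str
    confidence: float
    supporting_cues: list[str]

def request_human_decision(rec: Recommendation) -> bool:
    """Present the recommendation and its cues; the operator decides."""
    print(f"RECOMMENDATION: {rec.summary} (confidence {rec.confidence:.0%})")
    for cue in rec.supporting_cues:
        print(f"  - {cue}")
    answer = input("Approve? [y/N] ").strip().lower()
    return answer == "y"

rec = Recommendation(
    summary="Flag vehicle at grid 123456 for further surveillance",
    confidence=0.87,
    supporting_cues=["movement pattern match", "radio emissions on watched band"],
)

if request_human_decision(rec):
    print("Action approved by operator; decision logged against their ID.")
else:
    print("Recommendation declined; no action taken, rationale logged.")
```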

Third, we must ensure that the data used to train AI systems is scrutinised for bias, incompleteness, and irrelevance. If a deep learning model is trained on data that reflects historical injustices, strategic misjudgements, or operational blind spots, it may reproduce or amplify those errors. Transparency in data provenance, rigorous testing in varied environments, and ongoing monitoring are essential safeguards.
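
Even simple audits catch some of these problems before training begins. The sketch below (synthetic records and hypothetical field names) checks label balance, missing sensor data, and whether the examples cover more than one kind of terrain.

```python
from collections import Counter

# Synthetic, illustrative training records with hypothetical fields.
records = [
    {"label": "hostile", "sensor": "infrared", "terrain": "urban"},
    {"label": "hostile", "sensor": "infrared", "terrain": "urban"},
    {"label": "benign",  "sensor": "optical",  "terrain": "urban"},
    {"label": "benign",  "sensor": None,       "terrain": "desert"},
]

labels = Counter(r["label"] for r in records)
terrains = Counter(r["terrain"] for r in records)
missing_sensor = sum(1 for r in records if r["sensor"] is None)

print("label balance:", dict(labels))
print("terrain coverage:", dict(terrains))
print(f"records missing sensor data: {missing_sensor}/{len(records)}")

# Flag obvious gaps before training, e.g. a terrain with too few examples.
for terrain, count in terrains.items():
    if count < 2:
        print(f"WARNING: only {count} example(s) for terrain '{terrain}'")
```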

Fourth, military organisations should cultivate a culture of critical engagement with AI systems. This means avoiding both blind trust and knee-jerk rejection. Personnel must be equipped with the conceptual tools to understand the limits of AI, to ask the right questions, and to challenge machine recommendations when necessary. This is not just a matter of education, but of leadership. Commanders must model thoughtful, informed use of AI and foster an environment where concerns about AI-generated outputs can be raised without stigma.

Fifth, international collaboration on AI ethics and standards is crucial. The black box problem is not unique to one nation’s technology. Allies and partners must work together to establish norms and safeguards, ensuring that AI-enabled military systems are used responsibly and accountably. This includes participating in multilateral dialogues, sharing best practices, and resisting the pressure to accelerate deployment before ethical frameworks are in place.

In conclusion, the black box problem is not a flaw in AI technology so much as a reflection of its complexity. As AI systems become more powerful, their inner workings become harder to grasp—not just for laypeople, but for experts and developers themselves. This opacity challenges the fundamental principles of military ethics: responsibility, justification, accountability, and trust. But it need not be a deal-breaker. With careful design, transparent processes, ongoing evaluation, and committed leadership, we can build AI systems that support—not supplant—human judgment. The goal is not to remove humans from the loop, but to ensure that the loop itself remains ethically and operationally sound. In the fog of war, we may turn to machines for clarity, but we must never surrender our moral compass.

Disclaimer

The views expressed in this article are those of the author and do not necessarily reflect the position of the Department of Defence or the Australian Government.

