
Modern warfare is characterised by the race to compress the Observe-Orient-Decide-Act (OODA) loop. Militaries that can process information faster will have a decisive edge in future conflicts. In this context, machine learning (ML), a prominent subset of artificial intelligence, has significant potential to transform the battlefield. In an earlier article, I explained the unprecedented efficiency that AI promises across different domains, ranging from intelligence, surveillance and reconnaissance (ISR), autonomous systems, planning and training, and logistics and predictive maintenance to offensive operations (for further details, please refer to the full research article) [1]. However, this leverage comes with undercurrents of concern. One of the major challenges in this regard is adversarial attacks: a class of techniques that target ML models and the data they rely on.

Adversarial attacks strike at the core logic[2] that fuels ML. Attack techniques include tampering with the training data or the ML models used in various applications to degrade the functioning of ML or alter its output, undermining the very advantage that militaries aim to gain from AI. The target can be data-capturing sensors, communication links, or data storage and labelling points. The intended objective can be achieved through different measures[3], including poisoning attacks, evasion attacks and/or extraction attacks. Poisoning attacks take place during the training phase, when malicious data is injected so that the model learns incorrect patterns. In contrast, evasion attacks occur during the testing (inference) phase, where carefully crafted inputs manipulate the ML model without altering the training data. Extraction attacks, in turn, use repeated queries to recover sensitive information about a model or its training data. This susceptibility to manipulation can easily be weaponised by state and non-state actors alike to impede, blind or misdirect military systems.
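To make the poisoning mechanism concrete, the following is a minimal sketch, assuming a toy scikit-learn classifier and synthetic data, of how flipping the labels of a fraction of the training set can shift what the model learns. The dataset, model and 10 per cent poisoning rate are illustrative assumptions, not details of any military system.

```python
# Illustrative label-flipping poisoning attack on a toy classifier.
# Synthetic data only; the 10% poisoning rate is an arbitrary assumption.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoning: flip the labels of a random 10% of the training set.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=len(poisoned) // 10, replace=False)
poisoned[idx] = 1 - poisoned[idx]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

# Compare held-out accuracy of the clean and poisoned models.
print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Even this crude, untargeted attack illustrates the principle; targeted poisoning that corrupts only a specific class or trigger pattern is harder to detect and correspondingly more dangerous.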

In this context, adversarial AI can impair various military applications, creating new challenges for decision makers across all domains. For instance, the battlefield awareness of land forces can be undermined, leading to misdirected strikes. In the aerial domain, evasion attacks can degrade the performance of AI-assisted radar classification. In the maritime domain, sonar classification models can be targeted by poisoning attacks, impairing their ability to distinguish friendly from hostile vessels. Adversarial noise shaping can also lure underwater autonomous systems into ghost channels. Even pixel-level perturbations, imperceptible to a human observer, can cause image-recognition systems to mislabel objects, with significant consequences across numerous applications.
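As a concrete illustration of a pixel-level evasion, the sketch below applies a fast-gradient-sign-style perturbation to a simple linear digit classifier. The dataset, model and perturbation budget are assumptions chosen for illustration and are unrelated to any fielded radar or targeting system.

```python
# Minimal FGSM-style evasion sketch against a linear image classifier.
# Uses scikit-learn's digits dataset purely for illustration; the epsilon
# value is an arbitrary assumption, not drawn from any real system.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

digits = load_digits()
X = digits.data / 16.0            # scale pixel values to [0, 1]
y = digits.target
model = LogisticRegression(max_iter=2000).fit(X, y)

x = X[0]                          # a correctly classified image
true_label = y[0]

# For a linear/softmax model, a perturbation that pushes the input away from
# the true class and towards the runner-up class follows the sign of the
# difference between the two weight rows.
probs = model.predict_proba(x.reshape(1, -1))[0]
runner_up = np.argsort(probs)[-2]
grad_sign = np.sign(model.coef_[runner_up] - model.coef_[true_label])

epsilon = 0.15                    # small per-pixel perturbation budget
x_adv = np.clip(x + epsilon * grad_sign, 0.0, 1.0)

print("original prediction: ", model.predict(x.reshape(1, -1))[0])
print("perturbed prediction:", model.predict(x_adv.reshape(1, -1))[0])
```

The perturbed image remains visually almost identical to the original, yet the classifier's output can shift; deep image models used in ISR pipelines exhibit the same class of vulnerability.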

Adversarial attacks can degrade mission effectiveness and can even lead to fratricide. Tampered data also corrupts channels where multi-source fusion is employed. Likewise, while direct attacks certainly remain a challenge, the erosion of human situational awareness through coordinated adversarial attacks poses an even greater one. In addition, operational logistics can be jeopardised across all branches: such attacks can distort or misallocate logistic priorities during critical periods. Together, these vulnerabilities can compromise operational decision-making.

The impact of adversarial AI also extends into strategic decision-making. Australia has embraced a national defence strategy grounded in deterrence by denial. Deterrence is, at its core, a strategy based on perceptions of relative advantage and the likelihood of success. It relies on finely balanced judgments about objectives, thresholds, risk calculations and intentions. Adversarial AI can erode the reliability of the information on which these judgments and decisions are based. Spoofed radars, misclassified ISR and compromised communication channels invite misinterpretation, and any decision or action triggered by corrupted data can result in unintended escalation. These circumstances amplify the probability of escalation not by intent but by maliciously injected error, a dangerous proposition for future warfare. Given the complexity of an AI-enabled threat environment, such attacks are difficult to anticipate, and the appropriate response to them remains to be deliberated. The absence of guardrails in the form of international regulations compounds these challenges, leaving a major lacuna and associated risks.

Recent conflicts, notably the ongoing Russia-Ukraine war, the recent India-Pakistan standoff in May and the Iran-Israel conflict, have demonstrated the growing role of emerging technologies in contemporary warfare. Adversarial AI, if weaponised, will further complicate these regional and geopolitical flashpoints.

Regarding the way forward, explainable AI (XAI) has been one of the most discussed remedial measures against adversarial AI. It is essential to note that while XAI can help uncover spurious correlations and detect shifts, it does not make AI models robust; it only makes them more transparent. Keeping a human in the loop is one of the primary and most effective ways to mitigate the threat. However, this solution carries an associated cost in performance. An AI-enabled decision support system only offers an advantage if commanders can harness the speed of its processing; placing a human back in the loop risks forfeiting that very advantage. Likewise, some applications will always be more vulnerable than others, and under certain circumstances, such as ISR pipelines with limited human involvement, even AI-human teaming may not work as required.
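One way to frame this trade-off is a confidence-gated triage pattern: the model acts at machine speed only when it is confident, and escalates everything else to an operator. The sketch below is a minimal illustration of that idea; the threshold, data structure and handler names are assumptions for illustration, not part of any actual decision-support system.

```python
# Hedged sketch of a human-in-the-loop gate: the model acts autonomously only
# when its confidence clears a threshold; everything else is queued for a
# human operator. The 0.9 threshold and handler names are assumptions.
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Detection:
    track_id: str
    label: str
    confidence: float

def triage(detections: Sequence[Detection],
           act: Callable[[Detection], None],
           refer_to_operator: Callable[[Detection], None],
           threshold: float = 0.9) -> None:
    """Auto-handle confident detections; escalate uncertain ones to a human."""
    for det in detections:
        if det.confidence >= threshold:
            act(det)                      # machine-speed path
        else:
            refer_to_operator(det)        # human-judgment path

# Example usage with stand-in handlers.
triage(
    [Detection("T-01", "hostile", 0.97), Detection("T-02", "hostile", 0.62)],
    act=lambda d: print(f"auto-cue {d.track_id} ({d.confidence:.2f})"),
    refer_to_operator=lambda d: print(f"escalate {d.track_id} to operator"),
)
```

The design choice is explicit: the lower the threshold, the more the system preserves machine speed but the more it exposes itself to manipulated inputs; the higher the threshold, the more the human bottleneck returns.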

It has become imperative for militaries to incorporate adversarial AI into war games and simulations to enhance their preparedness. In addition, joint service protocols can play an effective role in this regard. It is also important to rely on heterogeneous and independent modalities, so that an attack on the data or model in one channel does not disable the entire system (a minimal illustration follows below). The increasing frequency of adversarial attacks may eventually push even hostile states towards additional confidence-building measures (CBMs) to communicate anomalous behaviour in a timely way.
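As a sketch of that redundancy principle, the toy fusion routine below takes classification labels from independent modalities, applies a majority vote and flags any disagreement for review, so a single poisoned channel cannot silently dictate the output. The sensor names and labels are illustrative assumptions.

```python
# Minimal sketch of fusing independent sensor modalities by majority vote,
# flagging disagreement for review. Sensor names and labels are illustrative.
from collections import Counter
from typing import Dict, Tuple

def fuse(reports: Dict[str, str]) -> Tuple[str, bool]:
    """Return (majority label, whether the vote was unanimous)."""
    counts = Counter(reports.values())
    label, votes = counts.most_common(1)[0]
    unanimous = votes == len(reports)
    return label, unanimous

# One dissenting channel changes the review flag, not the fused label.
reports = {"radar": "friendly", "eo_camera": "friendly", "acoustic": "hostile"}
label, unanimous = fuse(reports)
print(f"fused label: {label}; flag for review: {not unanimous}")
```

Real fusion architectures are far more sophisticated, but the underlying idea carries over: independence between channels limits how much damage the compromise of any single one can do.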

In the end, the race is not just to harness AI on the battlefield but also to defend it. Left unchecked, adversarial attacks can blind or mislead the very forces that rely on AI, acting as a catalyst of confusion and uncertainty rather than a means of shortening the OODA loop. Hence, the future advantage of military AI will depend not only on fielding advanced algorithms but also on safeguarding them against manipulation. Failure to build this resilience will invite unprecedented challenges.

Bibliography

Álvarez, Jimena Sofía Viveros. 'The Risks and Inefficacies of AI Systems in Military Targeting Support', Humanitarian Law & Policy Blog, September 4, 2024. https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/.

Arif, Shaza. 'Military Applications of Artificial Intelligence: Impact on Warfare', Journal of Aerospace & Security Studies 1 (2022): 1–20.

Sciforce. 'Adversarial Attacks Explained (And How to Defend ML Models Against Them)', Sciforce, September 7, 2022. https://medium.com/sciforce/adversarial-attacks-explained-and-how-to-defend-ml-models-against-them-d76f7d013b18.

Footnotes

1 Arif, “Military Applications of Artificial Intelligence: Impact on Warfare.”

2 Sciforce, “Adversarial Attacks Explained (And How to Defend ML Models Against Them).”

3 Álvarez, “The Risks and Inefficacies of AI Systems in Military Targeting Support.”


Disclaimer

The views expressed in this article are those of the author and do not necessarily reflect the position of the Department of Defence or the Australian Government.

