Modern warfare is characterised by the race to compress the Observe-Orient-Decide-Act (OODA) loop. Militaries that can process information faster will have an edge in future conflicts. In this context, machine learning (ML), a prominent subset of Artificial Intelligence (AI), has significant potential to transform the battlefield. In an earlier article, I explained the unprecedented efficiency that AI promises across different domains, ranging from intelligence, surveillance and reconnaissance (ISR), autonomous systems, planning and training, and logistics and predictive maintenance to offensive operations [1]. However, this leverage comes with undercurrents of concern. One of the major challenges is adversarial attacks: a field concerned with manipulating ML models and the data on which they depend.
Adversarial attacks target the core logic[2] that fuels ML. Attack techniques include tampering with either the training data or the ML models used in various applications to degrade their functioning or alter their output. Such tampering undermines the very advantage that militaries aim to gain from AI. The target can be data-capturing sensors, communication links, or data storage and labelling points. The intended objective can be achieved via different measures[3], including poisoning attacks, evasion attacks and/or extraction attacks. Poisoning attacks take place during the training phase, where malicious data is injected so that the model learns incorrect patterns. In contrast, evasion attacks occur at inference time, where carefully perturbed inputs are fed to a trained model to manipulate its output without altering the training data. Likewise, extraction attacks use repeated queries against a deployed model to recover sensitive information about its parameters or training data. This vulnerability to manipulation can easily be weaponised by state and non-state actors alike to impede, blind or misdirect military systems.
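To illustrate how an evasion attack works in practice, the sketch below applies a Fast Gradient Sign Method (FGSM)-style perturbation in PyTorch. The toy classifier, random input and epsilon budget are illustrative assumptions rather than any fielded system; the point is simply that a pixel-level change too small for a human to notice can be enough to change a model's output.

```python
# A minimal sketch of an evasion (FGSM-style) attack, for illustration only.
# The model, data and epsilon value here are placeholders, not any fielded system.
import torch
import torch.nn as nn

# Toy image classifier standing in for, e.g., an ISR object-recognition model.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # placeholder input image
true_label = torch.tensor([3])                         # placeholder ground-truth class

# Compute the loss gradient with respect to the input pixels.
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

# FGSM: nudge every pixel slightly in the direction that increases the loss.
epsilon = 0.03  # perturbation budget; small enough to be near-imperceptible
adversarial_image = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

# The perturbed image often changes the predicted class even though it looks unchanged.
print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial_image).argmax(dim=1).item())
```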
In this context, adversarial AI can impair various military applications, creating new challenges for decision-makers across all domains. For instance, the battlefield awareness of land forces can be undermined, leading to misdirected strikes. In the aerial domain, evasion attacks can degrade the functioning of radars. In the maritime domain, sonar classification models can be targeted through poisoning attacks, impairing their ability to distinguish friendly from hostile vessels. Adversarial noise shaping can also lure underwater autonomous systems into ghost channels. Even a pixel-level perturbation, as illustrated above, can have a significant impact across numerous applications.
Adversarial attacks can degrade mission effectiveness and can even lead to fratricide. Tampered data also corrupts channels where multisource fusion is employed. And while direct attacks on individual systems remain a concern, the erosion of human situational awareness through coordinated adversarial attacks poses an even greater challenge. In addition, operational logistics can be jeopardised across all branches, with such attacks distorting or misallocating logistics priorities during critical periods. Together, these vulnerabilities can compromise operational decision-making.
The impact of adversarial AI can also extend into strategic decision-making. Australia has embraced a national defence strategy grounded in deterrence by denial. Deterrence is, at its core, a strategy based on perceptions of relative advantage and the likelihood of success. It relies on finely balanced judgments about objectives, thresholds, risk calculations and intentions. Adversarial AI can erode the reliability of the information on which these judgments and decisions are based. Spoofed radars, misclassified ISR data and compromised communication channels can all lead to misinterpretation. Any decision or action triggered by corrupted data can result in unintended escalation. These circumstances amplify the probability of escalation not by intent but by maliciously injected error: a dangerous proposition for future warfare. Given the complex nature of the AI-enabled threat environment, such attacks are difficult to predict. Likewise, the appropriate response to adversarial attacks remains to be deliberated. The absence of guardrails in the form of international regulation compounds the challenge, leaving a major lacuna and associated risks.
Recent conflicts, notably the ongoing Russia-Ukraine war, the recent India-Pakistan standoff in May, and the Iran-Israel conflict, have demonstrated the growing role of emerging technologies on the battlefield. Adversarial AI, if weaponised, will further complicate these regional and geopolitical flashpoints.
Regarding the way forward, Explainable AI (XAI) has been one of the most discussed remedial measures against the dangers of adversarial AI. It is essential to note, however, that while XAI can help find spurious correlations and identify data shifts, it does not make AI models robust; it only makes them more transparent. Keeping a human in the loop is one of the primary and most effective ways to mitigate the threat, but it comes with an associated cost in performance. An AI-enabled decision support system only offers an advantage if commanders can harness the speed of its processing; placing a human back in the loop risks denying that very advantage. Likewise, some applications will always be more vulnerable than others, and under certain circumstances even human-AI teaming may not work as required, such as in ISR, where human involvement is limited.
It has become imperative for militaries to incorporate adversarial AI into war games and simulations to enhance their preparedness. In addition, joint service protocols can play an effective role in this regard. It is also important to rely on heterogeneous and independent modalities so that an attack on the data or model in one channel does not disable the entire system, as sketched below. Finally, the increasing frequency of adversarial attacks may prompt rival states to develop confidence-building measures (CBMs) to communicate anomalous behaviour in a timely way.
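As a rough illustration of that principle, the sketch below fuses classifications from independent sensor modalities and only accepts a label when a quorum of channels agrees. The modality names, labels and quorum threshold are hypothetical placeholders, not an operational design.

```python
# A minimal sketch of cross-checking heterogeneous, independent sensor modalities,
# so that a manipulated feed in one channel cannot single-handedly drive the output.
# Modality names, labels and the quorum value are illustrative assumptions.
from collections import Counter

def fuse_classifications(reports: dict[str, str], quorum: int = 2) -> str:
    """Return a track classification only if at least `quorum` independent
    modalities agree; otherwise flag the track for human review."""
    votes = Counter(reports.values())
    label, count = votes.most_common(1)[0]
    return label if count >= quorum else "FLAG_FOR_REVIEW"

# Example: an evasion attack flips the electro-optical classifier, but the radar
# and acoustic channels still agree, so the fused output is unchanged.
print(fuse_classifications({"radar": "hostile", "electro_optical": "friendly", "acoustic": "hostile"}))
# -> hostile
print(fuse_classifications({"radar": "hostile", "electro_optical": "friendly", "acoustic": "unknown"}))
# -> FLAG_FOR_REVIEW
```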
In the end, the race is not just to harness AI on the battlefield but also to defend it. If left unchecked, adversarial attacks can blind or mislead the very forces that field AI, acting as a catalyst of confusion and uncertainty rather than shortening the OODA loop. Hence, the future advantage of AI will depend not only on fielding advanced algorithms but also on safeguarding them against manipulation. Failure to build this resilience can unleash unprecedented challenges.
Álvarez, Jimena Sofía Viveros. 'The Risks and Inefficacies of AI Systems in Military Targeting Support', Humanitarian Law & Policy Blog, September 4, 2024. https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/.
Arif, Shaza. 'Military Applications of Artificial Intelligence: Impact on Warfare', Journal of Aerospace & Security Studies 1 (2022): 1–20.
Sciforce. 'Adversarial Attacks Explained (And How to Defend ML Models Against Them)', Sciforce, September 7, 2022. https://medium.com/sciforce/adversarial-attacks-explained-and-how-to-defend-ml-models-against-them-d76f7d013b18.