
Introduction

Autonomous and automated weapon systems have transformed the landscape of 21st century warfare, presenting both enhanced capabilities and profound challenges. As these technologies evolve and proliferate, their role in conflict escalation has sparked significant debate among military experts and scholars. Proponents argue that these systems reduce the risks to military personnel and are therefore non-escalatory,[1] while others contend that these technologies fundamentally alter the character of warfare, introducing destabilising factors that increase the potential for unintended escalation.[2]

Central to this debate is how the integration of automated or autonomous systems into modern operations might alter the traditional dynamics of escalation in competition, crisis or conflict. This paper argues that these systems possess uniquely destabilising characteristics, thus increasing the likelihood of unintended escalation. In response to these risks, the paper’s second argument is that by adopting specific policies and strategies, middle powers – such as Australia – can play an important role in mitigating automation-driven escalation.

This paper adopts definitions provided by Heather Roff to clarify the key terms used in its analysis. It is important to note that autonomous and automated systems exist on a technological spectrum; at one end, ‘automated systems’ refer to un-crewed, remotely controlled drones or other systems that execute pre-programmed, repeatable tasks without adaptation or learning. At the other end, ‘autonomous systems’ are more sophisticated, equipped with artificial intelligence, and capable of independent decisions, target selection, learning, and adaptation.[3] This distinction is critical for understanding the nuanced mechanisms of escalation discussed in this paper, which depend on the level of human oversight involved.

The paper identifies three primary mechanisms through which autonomous and automated systems drive escalation: (1) the creation of a moral hazard, where reduced personal risk emboldens decision-makers to pursue more aggressive actions; (2) the difficulty in distinguishing errors by automated and autonomous systems from deliberate hostile actions; and (3) the acceleration of warfare, driven by the integration of autonomous technology within command and control architectures.

After outlining the contemporary debate linking escalation and automation, the paper will establish a broad theoretical framework covering escalation dynamics. Analysis will build from this framework by first demonstrating how automated systems such as drones create a moral hazard that enables escalation. Second, it will examine how errors in human-machine interfaces – across both automated and autonomous platforms – can be escalatory, especially when errors are misinterpreted as deliberate actions. Third, it will illustrate how autonomous systems in command and control architectures accelerate warfare, increasing the risk of uncontrolled escalation and reducing opportunities for de-escalation. Finally, it will outline a series of potential policy and strategy approaches, illustrating how Australia, as a middle power, is well-positioned to contribute to international efforts aimed at controlling automation-driven escalation.

It is important to clarify the scope of analysis. While demonstrating the escalatory potential of autonomous and automated systems will be a primary focus, the concept of the security dilemma – where rapid proliferation of weapon systems by one state prompts similar actions by others, thus raising overall tensions[4] – will not be addressed. Although the security dilemma remains relevant in the broader sense of escalation, it is a well-established concept in academic literature, and is not unique to autonomous or automated weapon systems. This paper will instead focus on the specific, under-explored mechanisms by which automation and autonomy uniquely influence escalation dynamics, and the strategies that could mitigate these risks.

Literature Review

Part 1: Contemporary Debate on Autonomous Systems and Escalation

Recent studies have explored the impact of automated weapon systems, such as remotely controlled drones, on conflict escalation. Erik Lin-Greenberg’s research examined military decision-making through the ‘action-reaction’ framework, which Richard Smoke described as the ‘heart of escalation dynamics.’[5] Lin-Greenberg’s methodology involved simulated wargames, surveys, and discussions to assess differences in how militaries respond to the loss of automated drones versus crewed aircraft in comparable situations. His findings suggest that drones have a stabilising effect on conflict, since their destruction tends to elicit less aggressive retaliation than the destruction of crewed platforms.[6] While this stabilisation effect is significant, it leaves open questions about the broader impact of automation on risk-taking behaviour and decision-making thresholds, which this paper will address.

In contrast to Lin-Greenberg’s findings, John Schaus and Kaitlyn Johnson stated that as of 2018, the impact of automated platforms on escalation dynamics remained unclear.[7] More recently, a CNA (Center for Naval Analyses) report argued that automated systems are likely to be escalatory; however, this analysis was largely conceptual, lacking empirical case examples to support its claim.[8] Jonathan Panter’s research presents a similar viewpoint, indicating that these platforms introduce new risks in crisis or conflict situations.[9] Although Panter’s findings are not conclusive, they raise two critical issues with automated and autonomous systems: (1) that major errors can occur, and remain undetected, when machines take over the work of humans, and (2) that these systems could create a ‘moral hazard,’ where decision-makers take riskier actions when they do not personally bear the consequences.[10] While Panter’s work offers valuable insights, the lack of real-world examples leaves a gap in fully understanding these risks. This paper addresses that gap by using empirical case examples to illustrate how these mechanisms contribute to escalation dynamics.

Part 2: Escalation Theory, Theoretical Framework and Methodology

To analyse the risks posed by autonomous and automated systems, it is crucial to first establish a clear theoretical framework of escalation. Numerous scholars, particularly from the Cold War era, have established models of escalation dynamics. Among the most influential is Herman Kahn’s model of an escalation ladder (Figure 1), which depicts the progression of conflict through a series of ‘rungs’ from sub-crisis disagreement to all-out war.[11] However, while Kahn’s ladder is useful for illustrating the intensification of conflict, it is a structural metaphor rather than a procedural one; it does not fully capture how conflicts can transform and escalate over time, which is of critical importance to the analysis in this paper.

Figure 1: Kahn’s 16-step escalation ladder
16. Aftermath
15. Some kind of “All-Out” War
14. “Complete” Evacuation
13. Limited Non-Local War
12. Controlled Local War
11. Spectacular Show of Force
10. Super-Ready Status
9. Limited Evacuation
8. Intense Crisis
7. Limited Military Confrontations
6. Acts of Violence
5. Modest Mobilisation
4. Show of Force
3. Political, Diplomatic, and Economic Gestures
2. “Crisis”
1. Subcrisis Disagreement
Source: Kahn (1962: 185)

Kahn also divided the concept of escalation into three distinct categories: ‘increasing intensity,’ ‘widening the area’ and ‘compounding escalation’.[12] Forrest Morgan et al. later adapted these ideas into the more widely used terms of horizontal and vertical escalation.[13] Vertical escalation refers to intensifying conflict – toward the top of Kahn’s ladder – while horizontal escalation involves widening the conflict’s scope through indirect means, such as proxy wars. These concepts are valuable in understanding how autonomous and automated systems influence both forms of escalation.

Thomas Schelling’s The Strategy of Conflict and Richard Smoke’s War: Controlling Escalation both add critical refinement to Kahn’s ladder model by introducing the concept of salient limits – unspoken thresholds that actors avoid crossing in order to prevent escalation.[14],[15] Smoke argues that escalation occurs when these subtle yet critical limits are breached. Escalatory actions, in this view, are defined not by size or by scale, but by whether they violate these recognised boundaries, which are discrete, objective, understood and respected by both sides.[16] Thus, any technology that enables actors to unknowingly cross these limits more easily can be considered escalatory.

A related idea, and one central to this paper, is moral hazard. Moral hazard occurs when decision-makers take greater risks because they do not personally bear the consequences of their actions.[17] This phenomenon is well documented in fields such as corporate governance, finance, and healthcare.[18] In the context of automated and autonomous weapon systems, moral hazard is especially relevant. By reducing risk to military personnel, automated platforms may embolden decision-makers, thus reducing the psychological barriers to crossing Schelling’s salient limits. While moral hazard does not directly cause escalation, it increases the likelihood of escalation by encouraging riskier decisions. This dynamic will be explored through case illustrations in the next section.
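Before turning to those cases, a minimal decision-theory sketch helps make the mechanism explicit. The Python snippet below is illustrative only: the probabilities, costs, and the expected_cost helper are hypothetical assumptions, not values drawn from any of the incidents discussed. It simply shows how removing the personnel-risk term from a decision-maker’s expected cost can tip an aggressive action past a threshold it would not otherwise cross.

```python
# Illustrative sketch of moral hazard (all numbers are hypothetical assumptions).
# A decision-maker acts when the expected benefit of an aggressive action
# exceeds its expected cost.

def expected_cost(p_retaliation: float, retaliation_cost: float,
                  p_crew_loss: float, crew_cost: float) -> float:
    """Expected cost of an aggressive action to the decision-maker."""
    return p_retaliation * retaliation_cost + p_crew_loss * crew_cost

benefit = 5.0  # assumed value of taking the aggressive action

# Crewed platform: personnel risk is part of the calculation.
crewed = expected_cost(p_retaliation=0.4, retaliation_cost=10.0,
                       p_crew_loss=0.3, crew_cost=8.0)      # 4.0 + 2.4 = 6.4

# Un-crewed platform: the personnel-risk term disappears.
uncrewed = expected_cost(p_retaliation=0.4, retaliation_cost=10.0,
                         p_crew_loss=0.0, crew_cost=8.0)    # 4.0

print(f"Crewed platform    -> act? {benefit > crewed}")     # False (6.4 > 5.0)
print(f"Un-crewed platform -> act? {benefit > uncrewed}")   # True  (4.0 < 5.0)
```

In this toy model the action is rejected when a crew is at risk but accepted once the platform is un-crewed, even though the chance of adversary retaliation is unchanged; this is the sense in which moral hazard lowers the barrier to crossing a salient limit.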

Richard Smoke also applied the concept of salient limits to explore uncontrolled escalation. He described situations where these limits are crossed without an actor appreciating ‘the full consequences of his/her actions,’ and where the opponent’s likely reaction would cross further salient limits, leading to an out-of-control cycle of actions and reactions.[19] Avoiding uncontrolled vertical escalation is crucial in the modern era, especially when parties to conflict may possess weapons of mass destruction. Therefore, this paper will also examine how automation and autonomy contribute to scenarios involving uncontrolled escalation.

Frank Zagare contributed to escalation theory through his modelling of game theory scenarios such as ‘bluff’, ‘chicken’, and the ‘prisoner’s dilemma,’ which highlight the difficulty of interpreting adversary actions in high-stakes environments.[20] These theories emphasise the importance of interpreting adversary intent, and highlight how misunderstandings or miscalculations can lead to escalation. In the context of automated or autonomous systems, misattribution of intent – especially where errors or malfunctions are involved – mirrors these dilemmas. Escalation can occur when one side perceives an action as deliberate, even when it is not, driving conflicts toward higher levels of violence (or broader geographic spread).
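The escalatory logic of the prisoner’s dilemma can be sketched in a few lines of Python. The payoff matrix below is a hypothetical illustration of Zagare’s framing, not a model of any specific crisis: once each side assumes the other’s ambiguous action was deliberate, escalation is the best response to either assumption, and both sides end up in the mutually worse outcome.

```python
# Hypothetical 2x2 "escalation dilemma" in the spirit of Zagare's prisoner's
# dilemma framing. Payoffs (row player, column player) are illustrative only.

PAYOFFS = {
    ("Restrain", "Restrain"): (3, 3),  # mutual restraint: best joint outcome
    ("Restrain", "Escalate"): (0, 4),  # restraining against escalation: worst case
    ("Escalate", "Restrain"): (4, 0),  # escalating against restraint: tempting
    ("Escalate", "Escalate"): (1, 1),  # mutual escalation: poor for both, yet stable
}

def best_response(opponent_action: str) -> str:
    """Row player's payoff-maximising reply to an assumed opponent action."""
    return max(("Restrain", "Escalate"),
               key=lambda action: PAYOFFS[(action, opponent_action)][0])

for assumed in ("Restrain", "Escalate"):
    print(f"If the adversary is assumed to {assumed.lower()}, "
          f"the best response is to {best_response(assumed).lower()}")
# Both lines print "escalate": reading an accident as deliberate intent
# therefore drives the interaction toward the (Escalate, Escalate) outcome.
```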

From this theoretical framework, three key criteria of escalatory technology can be identified and tested against empirical examples: (1) it lowers or obscures Schelling’s salient limits, making it easier for actors to take more aggressive actions; (2) it blurs the distinction between intentional and unintentional actions, increasing the likelihood of misinterpretation; and (3) it compresses decision-making timelines, reducing opportunities for reflection and restraint and making critical thresholds easier to cross. These criteria will be applied to the three mechanisms identified earlier to show how automated or autonomous systems can drive conflict escalation.

Analysis

Part 1: Moral Hazard

The first mechanism we will consider is the moral hazard associated with autonomous and automated platforms. This section will use three contemporary examples to demonstrate how these systems diminish the perceived risks that decision-makers would traditionally weigh, thereby lowering the barriers to crossing Schelling’s salient limits of conflict.

A pertinent example of this dynamic occurred in 2023, during the ongoing Russia-Ukraine war, when a Russian fighter aircraft collided with a U.S. MQ-9 Reaper drone over the Black Sea while attempting to dump fuel on the drone’s sensors.[21] As Panter argues, Russian operational commanders likely assessed that damaging an un-crewed drone would not provoke a severe U.S. military response, since no American lives were at risk.[22] However, the moral hazard of this situation led to highly provocative, reckless manoeuvres, which ultimately crossed a salient limit: the direct destruction of a U.S. military asset. In this case, reduced concern over personnel risk likely facilitated the escalation, increasing the likelihood of miscalculation and further response.

A similar incident occurred in 2019, when the Iranian Revolutionary Guard Corps destroyed a U.S. RQ-4 Global Hawk conducting surveillance in the Strait of Hormuz. Again, the un-crewed nature of the platform almost certainly factored into the Iranian decision to escalate.[23] According to the Soufan Center, Iranian decision-makers calculated that destroying an un-crewed platform would not significantly alter the strategic status quo,[24] which proved to be an error of judgement precipitated by moral hazard. The outcome differed markedly from this calculation; U.S. President Trump immediately ordered a retaliatory strike against Iran, cancelling it less than one hour prior to execution due to concerns over vertical escalation and proportionality.[25] Instead, the U.S. opted for horizontal escalation into another domain, using cyber-attacks against Iranian radar installations and missile batteries.[26] By crossing the salient limit of kinetic action against U.S. forces, Iranian decision-makers triggered an escalatory response that expanded hostilities in an unpredicted way. Both the 2023 Reaper and 2019 Global Hawk incidents therefore demonstrate how automated systems can – through moral hazard – obscure or lower the salient boundaries of conflict, driving political, strategic, and operational leaders toward more provocative and escalatory actions.


There are also instances where the moral hazard associated with automated and autonomous systems has led to uncontrolled escalation. On February 10, 2018, an armed Iranian drone was launched from Syria into Israeli airspace, where it was destroyed by Israel’s air defence system.[27],[28] Given the reputation of Israel’s air defence network, it is highly unlikely that Iran would have taken such a risk with crewed aircraft – moral hazard was likely a key factor in the decision. Israeli leadership determined that the shoot-down alone would not sufficiently deter future attacks using similar automated platforms, and elected to immediately launch retaliatory strikes against Iranian infrastructure within Syria, targeting the airbase from which the drone had launched – crossing another salient limit.[29] During these retaliatory strikes, an Israeli F-16 was downed by Syrian surface-to-air missiles, crossing yet another threshold – kinetic action against crewed platforms – prompting Israel to then strike Syrian air defences and military positions near the population centre of Damascus. This action-reaction cycle continued until Russian intervention eventually de-escalated the exchange.[30] Smoke’s notion of uncontrolled escalation is reflected here in the way that each action pushed the conflict further beyond its previous limits. More fundamentally, this example underscores how the moral hazard of automated platforms led to a situation that quickly spiralled out of control.

The three empirical case illustrations in this section all support Panter’s proposition that autonomous and automated weapon systems create a moral hazard in modern warfare, which can result in conflict escalation. By lowering perceived risks, these systems facilitate the crossing of Richard Smoke’s salient limits of conflict, thus meeting our definition of an escalatory technology.

Part 2: Machine Error and its Escalatory Potential

The second mechanism contributing to escalation is machine error, and the misinterpretation of adversary intent when such errors occur. As Forrest Morgan et al. argue, the complexity of autonomous and automated systems increases the potential for critical errors – such as target misidentification or erroneous recommendations from decision support systems.[31] This risk is heightened when systems are deployed prior to adequate testing, or become vulnerable to cyberattacks and jamming.[32] Such factors introduce significant risks for conflict escalation, as adversaries may interpret machine errors or other unintended actions as deliberate acts of aggression.

A key challenge here lies in attribution: determining whether an action taken by an autonomous or automated system is a deliberate attack or the result of a malfunction. In high-tension environments, errors are more likely to be perceived as intentional, resulting in increased hostilities. An example of this occurred on July 3, 1988, during the Iran-Iraq war. The automated AEGIS system aboard the USS Vincennes, operating in the Persian Gulf, erroneously identified a civilian airliner, Iran Air Flight 655, as an attacking Iranian fighter jet. Despite conflicting data, the crew launched a surface-to-air missile which destroyed the airliner and killed all 290 people on board.[33] Though the U.S. government insisted the incident was accidental, Iranian leaders attributed it to a deliberate act of American aggression, prompting Ayatollah Khomeini to denounce the incident as a ‘barbaric massacre,’[34] while urging Iranians to ‘go to the war fronts and fight against America.’[35] Acrimony from this incident continues decades later, with Iranian leaders often making reference to it in anti-U.S. rhetoric.[36] This incident illustrates how unintended actions due to errors in automated systems can be easily misinterpreted as deliberate, leading to heightened enmity and escalation.

The Human-Machine Interface and Automation Bias

It is important to recognise that error and unintended actions are not unique to automated and autonomous systems. To consider this technology escalatory, it is necessary to demonstrate that these systems possess unique characteristics that make errors more dangerous. While automation aims to improve precision and reduce the rate of human error, it also shifts the human role from active control to passive monitoring.[37] According to Mouloua and Hancock, this shift creates a critical vulnerability, as humans are less adept at detecting rare or unpredictable errors when disengaged from direct control.[38] Moreover, operators tend to place excessive trust in automated systems, which reduces their readiness to intervene when necessary – a phenomenon described as automation complacency or automation bias.[39]

Automation bias has been widely observed in industries like aviation and nuclear power. A clear illustration of its dangers occurred in 1972, when Eastern Airlines Flight 401 crashed after a faulty landing gear indicator light distracted the crew.[40] As they focused on troubleshooting the minor issue, they failed to notice that the autopilot had initiated a slow descent, which continued until the aircraft impacted the Florida Everglades, killing 101 people.[41] This incident highlights how human reliance on automation can lead to lapses in attention, with errors producing catastrophic outcomes. In a military context, such lapses could be even more dangerous, particularly when adversaries may misinterpret unintended actions as deliberate and escalatory.

Research by Robert Arrabito et al. supports this view, identifying human factors as playing a major role in operational mishaps involving automated military platforms.[42] Arrabito argues that human supervisors must maintain high levels of vigilance to address errors, especially during long-endurance missions. When this vigilance falters – due to fatigue or automation bias – errors go undetected, exposing critical vulnerabilities in the system.[43] Furthermore, Andre Haider highlights that the human-machine interface in partially automated drones is also susceptible to disruption through datalink failures, equipment malfunctions, or cyberattacks and jamming.[44] Together, these factors demonstrate that the potential for critical errors in automated or autonomous platforms – and thus for inadvertent escalation – is significant.

Automation and Error – Counterarguments

Critics may argue that automated systems are designed specifically to reduce error – by addressing human weaknesses such as fatigue and cognitive bias – and should therefore be less escalatory than platforms under more direct human control. However, this argument overlooks the complexity of automated and autonomous operations, and mistakenly treats the rate of error as the most important metric. A more relevant measure for assessing escalatory risk is the severity and number of errors that go undetected due to inadequate human oversight, since it is these undetected errors that produce the most serious consequences.

By this measure, Shayne Longpre et al. argue that automation technology has a particularly poor record, as human supervision over machines is unreliable and ‘rife with unsolved challenges.’[45] This is seen even in the most advanced systems; according to Yuval Abraham, some of the most sophisticated autonomous military technologies today still exhibit error rates as high as 10%, with many of these errors going undetected by human supervisors due to automation bias, leading in some cases to severe outcomes such as the unintended targeting of civilians.[46]
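A simple illustrative calculation makes the point concrete. The error rate below echoes the 10% figure cited above, while the decision volume and the human detection rate are hypothetical assumptions; the sketch shows only how the arithmetic of undetected errors scales, not any measured outcome.

```python
# Illustrative arithmetic only: the error rate echoes the 10% figure cited
# above; the decision volume and detection rate are hypothetical assumptions.

def expected_undetected_errors(num_decisions: int,
                               error_rate: float,
                               detection_rate: float) -> float:
    """Expected number of machine errors that slip past human review."""
    return num_decisions * error_rate * (1.0 - detection_rate)

# 10,000 automated targeting decisions, a 10% machine error rate, and a
# (hypothetical) 70% chance that a supervisor catches any given error.
print(expected_undetected_errors(10_000, 0.10, 0.70))   # -> 300.0 undetected errors
```

Even with a fairly attentive supervisor in this hypothetical, hundreds of erroneous decisions pass unchallenged; it is the severity and ambiguity of these undetected errors, rather than the raw error rate, that carries the escalatory risk.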

Other Error-Producing Aspects of Automated and Autonomous Systems

Still, there are other sources of error unique to automated and autonomous systems that are likely to increase the intensity of conflict. The 44-day war between Armenia and Azerbaijan over Nagorno-Karabakh in 2020 illustrated how the pairing of humans and automated drones can drive errors resulting in unintended outcomes. Izabella Khachatryan attributes these errors to phenomena such as the ‘soda straw’ effect (tunnel vision caused by the narrow field of view of a drone’s camera), and the ‘data crush’ effect (overload of sensor data overwhelming operators).[47] Due to these sources of error across the human-machine interface, Azerbaijani forces using UAVs struggled to apply the warfare principles of proportionality and distinction, resulting in indiscriminate targeting of civilian areas and raising tensions between the two sides.[48] This prompted reciprocal munition strikes against population centres, a cycle that was only de-escalated upon the arrival of international peacekeeping forces.[49]

The Nagorno-Karabakh war demonstrated how errors in these systems triggered a breach of the salient limits of conflict while blurring the distinction between intentional and unintentional actions – a dynamic also demonstrated in the aftermath of the Iran Air 655 incident. These dynamics, together with factors such as automation complacency – as seen in the Eastern Airlines accident – illustrate that automation can be inherently unpredictable. The potential for such unpredictable actions to be misinterpreted in times of tension underscores the inadvertent escalation risk associated with these technologies.

Part 3: Autonomous Systems and Command and Control

The third mechanism contributing to escalation involves the temporal acceleration of warfare when autonomous systems are integrated into Command and Control (C2) frameworks. By increasingly delegating critical military decisions to machines, this shift fundamentally alters the nature and pace of decision-making in conflict, introducing significant escalatory tendencies. Near-autonomous decision-support systems, such as Israel’s ‘Gospel’,[50] ‘Lavender’,[51] and ‘Where’s Daddy’,[52] are examples of AI-driven technology that far surpass human-driven processes in both speed and volume of target generation. By rapidly processing vast amounts of surveillance and intelligence data, these platforms have reshaped the pace of conflict. As Elke Schwartz observes, integration of such systems has made ‘the process of killing progressively more autonomous,’ by reducing human deliberation and reflection in critical moments.[53]

A clear example of this dynamic is seen in the 2023-2024 Gaza conflict, where the Lavender system generated over 37,000 potential targets in the early weeks of the conflict by rapidly processing vast amounts of surveillance data. The sheer volume of targets drove a reduction in the time allocated to human review to approximately 20 seconds per target – barely enough for a human to confirm a target’s gender – before authorising a strike.[54] While this rapid decision-making may minimise indecisiveness and delays associated with human ‘bottlenecks,’[55] it also diminishes crucial opportunities for humans to carefully consider their decisions and exercise restraint.
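A rough back-of-envelope calculation, using the publicly reported figures cited above and a purely hypothetical deliberative baseline for comparison, illustrates the scale of this compression.

```python
# Back-of-envelope illustration of decision-time compression. The target count
# and per-target review time restate the reported figures cited above; the
# 30-minute deliberative baseline is a hypothetical comparison point.

targets = 37_000
seconds_per_review = 20

review_hours = targets * seconds_per_review / 3600
print(f"Cumulative human review time: ~{review_hours:.0f} hours")   # ~206 hours

baseline_hours = targets * 30 * 60 / 3600
print(f"At 30 minutes of deliberation per target: ~{baseline_hours:,.0f} hours")  # ~18,500 hours
```

On these assumptions, the entire human contribution to reviewing tens of thousands of targets amounts to a few hundred hours of attention in aggregate, orders of magnitude less than even a modest deliberative process would require.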

These opportunities for restraint have been historically important. In contrast to AI-driven C2 systems like Lavender, which enable the approval of thousands of strikes within hours, historical incidents such as the 1962 Cuban Missile Crisis,[56] the 1983 Able Archer incident,[57] and the 1999 Kargil conflict[58] were characterised by slower, controlled decision-making processes that allowed time for critical human deliberation and restraint. In these instances, human ‘bottlenecks’ provided essential opportunities for discussion, diplomatic intervention, and ultimately de-escalation. These opportunities appear to be rapidly diminishing as AI-enabled tools accelerate targeting decisions.

This acceleration of decision-making also compounds with machine error, especially in the algorithmic decision-making associated with systems such as Lavender. According to Yuval Abraham, Lavender’s error rate is as high as 10%, and in some cases, the system has recommended strikes on low-level targets with as many as 20 civilians in close proximity – collateral damage deemed acceptable by the algorithm.[59] The apparent delegation of human moral and ethical considerations to rapid, algorithmic decision-making is concerning, as situations where autonomous systems like Lavender accept high levels of collateral damage as an efficient trade-off may set the scene for reactionary cycles of vengeance. This could increase the likelihood of uncontrolled vertical escalation.

Yet another factor that may accelerate escalation is the impact of these AI decisions on rational adversary behaviour. When rivals perceive that decisions are being made without human restraint, they may conclude that the salient limits of conflict have fundamentally changed. Zagare’s application of game theory, specifically the prisoner’s dilemma scenario, can be applied here to demonstrate how the perception that human restraint is diminished may lead rational adversaries to escalate conflicts pre-emptively.[60] The fear that restraint could increase vulnerability may drive more aggressive decisions, thus intensifying conflict. This could further accelerate warfare, pushing all actors toward exceeding Schelling’s salient limits of behaviour in conflict.

The transformation of warfare through the automation of C2 systems underscores the urgent need to address the escalatory risks posed by such technologies. While speed and efficiency are operationally advantageous, they diminish human oversight and shrink the decision-making windows that are necessary for de-escalation. Though this section has focused on Israeli technology, the widespread development of AI-driven C2 systems by the U.S. Department of Defense (e.g. Project Maven)[61] and China’s PLA research and development programs[62] indicates that these trends are global. In great-power conflicts, where the stakes are exponentially higher, the potential for uncontrolled vertical escalation through this mechanism is particularly alarming. Therefore, mitigating the risks of autonomous decision-support systems in military command and control architectures should be considered a global priority.

Part 4: How Australia Might Contribute to Contemporary Escalation Management

Having demonstrated the uniquely escalatory tendencies of autonomous and automated weapon systems – including moral hazard, machine error, and the acceleration of warfare – this section explores how middle powers such as Australia might implement strategies to mitigate these dangers. It will argue that, due to its strong regional and global standing, Australia is well-positioned to play an important role in managing escalation in the context of automated warfare. This section outlines three potential avenues: operational strategies, crisis communication channels, and international agreements and controls.

Operational Strategies

One of the most effective ways Australia might address the escalatory potential of automated systems is by adopting operational strategies focused on de-escalation. For instance, autonomous systems could be deployed in non-lethal conflict roles, such as using electronic warfare drones to disrupt enemy communications, deterring adversaries from attempting direct confrontation.[63] Similarly, automated command and control (C2) systems, akin to Lavender, could be adapted for less escalatory purposes: instead of targeting individuals, these systems could be trained to identify and disrupt logistical vulnerabilities, such as supply chains or military infrastructure.[64] This approach would allow Australia to impose significant costs on any hypothetical aggressor, deterring escalation without adopting an overly threatening posture.

Additionally, Australia could devise operational concepts for using autonomous platforms in humanitarian and disaster relief efforts. AI-powered C2 systems could assist in needs assessments, while automated platforms could deliver aid or clear land and sea mines in war-torn regions.[65] The use of drones for surveillance within UN peacekeeping operations exemplifies how these platforms could be employed in innovative ways that avoid the escalatory tendencies identified in the previous section.[66] Finally, operational contingency plans should incorporate de-escalatory pathways to ensure unexpected machine errors or system malfunctions are not misinterpreted. By establishing pre-agreed crisis management protocols with both allies and adversaries, and considering carefully how the Australian Defence Force employs automated and autonomous systems, Australian military planners could reduce the likelihood of confusion and miscalculation in moments of crisis.

Crisis Communication Channels

Beyond operational strategies, Australia could advocate for (and pursue) enhancements to crisis communication mechanisms between nations. Historically, dedicated ‘hot-lines’ have played a crucial role in de-escalation during crises.[67] However, considering the acceleration of warfare identified in this paper, there may be a need to broaden these communication mechanisms to sub-political levels, potentially extending to the operational levels of militaries. This would facilitate more timely opportunities to clarify intent, and more opportunities to resolve misunderstandings in real-time, thus stabilising tense situations. These communication mechanisms would be particularly useful in scenarios where automated systems such as drones or autonomous sub-surface assets are involved in near-miss incidents or other ambiguous situations.

International Agreements and Controls

Australia could also review its approach toward international controls and treaties associated with automated and autonomous weapon systems. As Janosch Delcker highlights, Australia has historically opposed international efforts to regulate these technologies.[68] However, as a contracting party to the UN Convention on Certain Conventional Weapons (CCW), Australia could, under Article 8(2)(a) of the Convention, propose a new protocol addressing autonomous systems – a category not yet covered by existing CCW protocols.[69] Given the growing recognition of the escalatory risks posed by these systems, and particularly amid more frequent and intense flashpoints, it would be prudent for Australian policymakers to explore this option.

A growing number of nations are now seeking treaties or agreements to regulate the use of autonomous and automated weapons,[70] and Australia’s endorsement could significantly bolster these global efforts. Historically, such treaties have had some success in curtailing the spread of destructive technologies, such as cluster munitions, landmines, and nuclear weapons.[71] While Australia’s relatively small population and vast territory to defend may create incentives to maintain and develop automated and autonomous capabilities for deterrence,[72] its political standing uniquely positions it to lead global efforts to establish norms and standards around these technologies.

These agreements could address escalatory risks in a number of ways. For example, by emphasising human accountability for the actions of automated and autonomous platforms, regulation could address issues relating to moral hazard. Furthermore, by promoting transparency and confidence-building measures surrounding the acquisition and employment of automated platforms, Australia could help reduce the likelihood of fear and miscalculation. Similarly, by advocating for enhancements to international crisis management frameworks, Australia’s efforts could open windows for de-escalation at a time when technology is shrinking them. Collectively, these approaches could make significant headway in reducing risks, even if establishing a fully comprehensive treaty remains challenging due to the diverse interests of global powers.

However, as Laura Varella highlights, the rapid pace of technological development in the current era complicates the creation of effective international agreements. The dynamics of Zagare’s prisoner’s dilemma[73] – where nations are incentivised to race for technological superiority – are now particularly pronounced among the major military powers. Thus, as an interim measure, Australian efforts might be best targeted at achieving non-binding codes of conduct or voluntary agreements as intermediate steps toward more formal treaties.

Conclusion

The introduction of automated and autonomous systems in modern warfare has fundamentally transformed – and continues to transform – the landscape of conflict. This analysis has underscored the critical role that these technologies play in shaping escalation dynamics, driven by factors such as moral hazard, attribution of errors emerging from the human-machine interface, and the acceleration of warfare due to autonomous decision-making.

By examining illustrative incidents such as the destruction of U.S. Global Hawk and Reaper drones, and the Iran-Israel escalation in 2018, this paper has demonstrated that the moral hazard associated with autonomous and automated systems can lead decision-makers to exceed the established thresholds of conflict. Furthermore, case examples of human-machine error, including the downing of Iran Air Flight 655 and the Nagorno-Karabakh conflict, illustrate how the errors associated with automated and autonomous systems, and the difficulty in determining the intent behind unexpected actions, can drive unintended escalation.

Moreover, this paper has highlighted how automated command and control (C2) systems, exemplified by Israel’s Lavender and Gospel systems, can increase the risk of rapid escalation by accelerating the conduct of warfare, diminishing human oversight and eliminating critical opportunities for diplomatic de-escalation. Thus, while these systems may enhance operational efficiency, they significantly raise the prospect of intensifying conflict.

The argument presented herein asserts that modern autonomous and automated weapon systems are uniquely destabilising and escalatory. While the empirical case illustrations presented here clearly support this argument, they also raise an essential question for further investigation: As Australia modernises its defence capabilities and strategy, how can it carefully weigh the operational benefits of automated and autonomous weapon systems against the imperative of escalation management in an increasingly tense geopolitical environment? This paper has briefly outlined potential pathways to achieving this balance through a combination of operational strategies, crisis communication channels, and by fostering greater international cooperation.

This final area warrants continued policy discussion and research. By reconciling the advantages of automation with the need for robust escalation control mechanisms, Australia may help ensure a more secure future in an increasingly automated world. Thoughtful engagement with international diplomacy, along with the development of robust crisis communication protocols, will be essential as Australia navigates the complexities of the global security environment in the 21st century.

Bibliography

Abraham, Yuval., “’Lavender’: The AI machine directing Israel’s bombing spree in Gaza.” In +972 Magazine. April 3, 2024. https://www.972mag.com/lavender-ai-israeli-army-gaza/

Abraham, Yuval., “A Mass Assassination Factory: Inside Israel’s Calculated Bombing of Gaza,” In +972 Magazine, 30 November 2023, https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/.

Arrabito, Robert., Ho, Geoffrey., Lambert, Annie., Rutley, Mark., Keillor, Jocelyn., Chiu, Allison., Au, Heidi., and Ming Hou. Human Factors Issues for Controlling Uninhabited Aerial Vehicles, (Toronto: Defence R&D Center, 2010), i. https://apps.dtic.mil/sti/pdfs/ADA543186.pdf

CNA Center for Autonomy and AI, “Impact of Unmanned Systems on Escalation Dynamics,” Accessed September 20, 2024. https://www.cna.org/archive/CNA_Files/pdf/summary-impact-of-unmanned-systems-to-escalation-dynamics.pdf

Commonwealth of Australia, National Defence Strategy 2024. Canberra: Australian Government – Defence, 2024.

Cunningham, Erin and Missy Ryan, “Trump ordered an attack on Iran for downing drone, then called it off” In The Washington Post. June 21, 2019. https://go.gale.com/ps/i.do?id=GALE%7CA589970749&sid=googleScholar&v=2.1&it=r&linkaccess=abs&issn=01908286&p=AONE&sw=w&userGroupName=anon%7Eaa928ffb&aty=open-web-entry

Danby, Nick., “How the downing of Iran Air flight 655 still sparks US-Iran enmity,” in Responsible Statecraft, 02 July, 2021. https://responsiblestatecraft.org/2021/07/02/how-the-downing-of-iran-air-flight-655-still-influences-us-iran-enmity/

Delcker, Janosch., “How Killer Robots Overran the UN,” in Politico, February 12, 2019. https://www.politico.eu/article/killer-robots-overran-united-nations-lethal-autonomous-weapons-systems/

Einav, Liran., and Amy Finkelstein. “Moral Hazard in Health Insurance: What We Know and How We Know It,” In Journal of the European Economic Association 16, no. 4 (2018).

Federal Aviation Administration., “Accident Report: Eastern Airlines Flight 401, N310EA.” Accessed September 20, 2024. https://www.faa.gov/lessons_learned/transport_airplane/accidents/N310EA

Garamone, Jim., “Iran Shoots Down US Global Hawk Operating in International Airspace,” Press Release, US Department of Defense. June 20, 2019. https://www.defense.gov/News/News-Stories/Article/Article/1882497/

Grieco, Kelly, A., “What Can UN Peacekeeping Learn from Ukraine’s Drones?” (Washington: The Henry L. Stimson Center, 2023). https://www.stimson.org/2023/what-can-un-peacekeeping-learn-from-ukraines-drones/#:~:text="UN%20missions%20can%20also%20use,displaced%20people%20in%20conflict%20zones."

Haider, Andre., “The Vulnerabilities of Unmanned Aircraft Systems Components.” In A Comprehensive Approach to Countering Unmanned Aircraft Systems, Joint Air Power Competence Center, January 2021. https://www.japcc.org/chapters/c-uas-the-vulnerabilities-of-unmanned-aircraft-system-components/#data-link

Honrada, Gabriel., “AUKUS ground robots pass hot electronic war test,” in The Asia Times, February 8, 2024. https://asiatimes.com/2024/02/aukus-ground-robots-pass-hot-electronic-war-test/

Kahn, Herman., On Metaphors, Escalation and Scenarios, (New York: Routledge, 1962).

Karmon, Ely., “How Serious the Russian Threat to Israel in Syria? A Historical Perspective.” In IPS Publications. May 2018. 1. https://www.runi.ac.il/media/02lmakus/elykarmon29-5-18.pdf

Khachatryan, Izabella., The accuracy level of targeted killings by UAVs. (Montrose CA: Center for Truth and Justice, 2022). https://www.cftjustice.org/the-accuracy-level-of-targeted-killings-by-uavs-retrospective-to-nk-war-in-2020-is-the-usage-of-drones-legal-during-armed-conflicts-considering-the-high-risk-of-disproportionate-collateral-damage/

Klare, Michael T., “Strong Support at Conference for ‘Killer Robot’ Regulation.” In Arms Control Today. June 2024. https://www.armscontrol.org/act/2024-06/news/strong-support-conference-killer-robot-regulation

Larson, Christina., “China’s massive investment in artificial intelligence has an insidious downside,” in Science, February 8, 2018. https://www.science.org/content/article/china-s-massive-investment-artificial-intelligence-has-insidious-downside

Lawson, Keirissa., New Technology for Landmine Clearance, (Canberra: CSIRO, April 4, 2023). https://www.csiro.au/en/news/all/articles/2023/april/landmine-clearance

Lin-Greenberg, Erik., “Wargame of Drones: Remotely Piloted Aircraft and Crisis Escalation,” in Journal of Conflict Resolution 66, No. 10 (2022). https://journals.sagepub.com/doi/10.1177/00220027221106960

Lipscy, Phillip Y., and Haillie Na-Kyung Lee. "The IMF as a biased global insurance mechanism: Asymmetrical moral hazard, reserve accumulation, and financial crises." In International Organization 73, no. 1 (2019).

Longpre, Shayne., Storm, Marcus., and Rishi Shah, “Lethal autonomous weapons systems and artificial intelligence: Trends, challenges and policies,” in MIT Science Policy Review, Vol 3. August 29, 2022. https://sciencepolicyreview.org/wp-content/uploads/securepdfs/2022/10/v3_AI_Defense-1.pdf

Manson, Katrina., “AI Warfare is Already Here,” in Bloomberg. February 28, 2024. https://www.bloomberg.com/features/2024-ai-warfare-project-maven/

Merriam-Webster, ‘Moral Hazard’, Accessed September 20, 2024. https://www.merriam-webster.com/dictionary/moral%20hazard

Miller, Stephen E., “Nuclear Hotlines: Origins, Evolution, Applications,” in Analysis and New Insights, Stanley Center for Peace and Security, October 2020.

Morgan, Forrest E., Mueller, Karl P., Medeiros, Evan S., Pollpeter, Kevin L., and Roger Cliff. “The Nature of Escalation,” In Dangerous Thresholds: Managing Escalation in the 21st Century, 1st ed., 7–46. RAND Corporation, 2008. http://www.jstor.org/stable/10.7249/mg614af.9

Morgan, Forrest E., Boudreaux, Benjamin, Lohn, Andrew, Ashby, Mark, Curriden, Christian, Klima, Kelly and Derek Grossman. Military Applications of Artificial Intelligence, (Santa Monica CA: RAND Corporation, 2020), 140. https://www.rand.org/content/dam/rand/pubs/research_reports/RR3100/RR3139-1/RAND_RR3139-1.pdf

Mouloua, Mustapha and Peter Hancock., Human performance in automated and autonomous systems, current theory and methods (CRC Press, 2020). 104.

Panter, Jonathan. “Naval Escalation in an Unmanned Context,” in CIMSEC, April 26, 2023. https://cimsec.org/naval-escalation-in-an-unmanned-context/

Rattray, Gregory J., “Explaining Weapons Proliferation: Going beyond the Security Dilemma,” in USAF Academy Institute for National Security Studies, FNSS Occasional Papers, no. 1 (2016). https://irp.fas.org/threat/ocp1.htm

Roff, Heather., “Distinguishing autonomous from automatic weapons,” in Bulletin of the Atomic Scientists. February 9, 2016. https://thebulletin.org/roundtable_entry/distinguishing-autonomous-from-automatic-weapons/

Schaus, John and Kaitlyn Johnson, “Unmanned Aerial Systems’ Influences on Conflict Escalation Dynamics,” In CSIS Briefs, August 2, 2018. https://www.csis.org/analysis/unmanned-aerial-systems-influences-conflict-escalation-dynamics

Schelling, Thomas C., The Strategy of Conflict (Cambridge, MA: Harvard University Press, 1960).

Schwartz, Elke., ‘Gaza War: Israel Using AI to Identify Human Targets Raising Fears That Innocents Are Being Caught in the Net’, The Conversation, 12 April 2024. https://theconversation.com/gaza-war-israel-using-ai-to-identify-human-targets-raising-fears-that-innocents-are-being-caught-in-the-net-227422.

Sharma, Rohit Kumar., “Human-in-the-loop Dilemmas: The Lavender System in Israel Defence Force Operations.” In Journal of Defence Studies, Vol. 18, No. 2., April-June 2024. 166-175.

Shoaib, Muhamad and Ruman Rabeet, ‘The Enron Scandal and Moral Hazard,’ in SSRN Electronic Journal, January 2012. DOI:10.2139/ssrn.1987424

Smoke, R., War: Controlling Escalation (Cambridge MA: Harvard University Press, 1977).

Soufan Centre, “Intelligence Brief: The U.S.-Iran Crisis.” Updated Jun 24, 2019. https://thesoufancenter.org/intelbrief-the-u-s-iran-crisis/

The Defense Post, “Russian SU-27 Causes American MQ-9 Reaper Drone to Crash Over Black Sea: US.” March 14, 2023. https://thedefensepost.com/2023/03/14/russian-jet-us-drone-crash/

United Nations, The Convention on Certain Conventional Weapons, (Geneva: The United Nations Office of Disarmament Affairs, October 10, 1980). https://disarmament.unoda.org/the-convention-on-certain-conventional-weapons/

Varella, Laura., “Building Momentum to Ban AWS,” in AWS Diplomacy Report 1, No. 1. May 7, 2024. https://reachingcriticalwill.org/images/documents/Disarmament-fora/other/aws/2024-vienna/reports/AWSR1.1.pdf

Wolff, Jason., The Department of Defense’s digital logistics are under attack, (Washington DC: The Brookings Institute, July 2023). https://www.brookings.edu/articles/the-department-of-defenses-digital-logistics-are-under-attack/

Zagare, Frank C., “The Dynamics of Escalation,” in Information and Decision Technologies 16, no. 3, (1990). University of Buffalo, NY.

Footnotes

1 Erik Lin-Greenberg, “Wargame of Drones: Remotely Piloted Aircraft and Crisis Escalation,” in Journal of Conflict Resolution 66, No. 10 (2022). https://journals.sagepub.com/doi/10.1177/00220027221106960

2 Jonathan Panter, ‘Naval Escalation in an Unmanned Context,’ in CIMSEC, April 26, 2023. https://cimsec.org/naval-escalation-in-an-unmanned-context/

3 Heather Roff, “Distinguishing autonomous from automatic weapons,” in Bulletin of the Atomic Scientists. February 9, 2016. https://thebulletin.org/roundtable_entry/distinguishing-autonomous-from-automatic-weapons/

4 Gregory J. Rattray, “Explaining Weapons Proliferation: Going beyond the Security Dilemma,” in USAF Academy Institute for National Security Studies, FNSS Occasional Papers, no. 1 (2016). https://irp.fas.org/threat/ocp1.htm

5 Richard Smoke, War: Controlling Escalation (Cambridge MA: Harvard University Press, 1977), 268.

6 Erik Lin-Greenberg, “Wargame of Drones: Remotely Piloted Aircraft and Crisis Escalation.”

7 John Schaus and Kaitlyn Johnson, “Unmanned Aerial Systems’ Influences on Conflict Escalation Dynamics,” In CSIS Briefs, August 2, 2018. https://www.csis.org/analysis/unmanned-aerial-systems-influences-conflict-escalation-dynamics

8 ‘Impact of Unmanned Systems on Escalation Dynamics,’ CNA Center for Autonomy and IA. Accessed September 20, 2024. https://www.cna.org/archive/CNA_Files/pdf/summary-impact-of-unmanned-systems-to-escalation-dynamics.pdf

9 Jonathan Panter, ‘Naval Escalation in an Unmanned Context.’

10 ‘Moral Hazard’, Merriam-Webster, Accessed September 20, 2024. https://www.merriam-webster.com/dictionary/moral%20hazard

11 Herman Kahn, On Metaphors, Escalation and Scenarios, (New York: Routledge, 1962), 185.

12 Kahn, On Metaphors. 4-5.

13 Forrest E. Morgan, Karl P. Mueller, Evan S. Medeiros, Kevin L. Pollpeter, and Roger Cliff. “The Nature of Escalation,” In Dangerous Thresholds: Managing Escalation in the 21st Century, 1st ed., 7–46. RAND Corporation, 2008. http://www.jstor.org/stable/10.7249/mg614af.9

14 Smoke, War: Controlling Escalation, 33.

15 Thomas C. Schelling, The Strategy of Conflict (Cambridge, MA: Harvard University Press, 1960), 5.

16 Smoke, War: Controlling Escalation, 33.

17 ‘Moral Hazard’, Merriam-Webster, Accessed September 20, 2024. https://www.merriam-webster.com/dictionary/moral%20hazard

18 Examples of moral hazard include the Enron scandal, in which executives took excessive risks without facing direct consequences, contributing to the company’s collapse. See Muhamad Shoaib and Ruman Rabeet, “The Enron Scandal and Moral Hazard,” SSRN Electronic Journal, January 2012. DOI:10.2139/ssrn.1987424.

Another example is the 2008 Global Financial Crisis (GFC), where financial institutions engaged in destabilising practices, assuming government bailouts would mitigate losses. See Phillip Y. Lipscy, and Haillie Na-Kyung Lee. "The IMF as a biased global insurance mechanism: Asymmetrical moral hazard, reserve accumulation, and financial crises." International Organization 73, no. 1 (2019): 35-64. Research also shows that individuals with comprehensive health insurance tend to engage in riskier behaviours, knowing their medical expenses are covered. See Liran Einav and Amy Finkelstein. “Moral Hazard in Health Insurance: What We Know and How We Know It,” In Journal of the European Economic Association 16, no. 4 (2018): 957-982.

19 Smoke, War: Controlling Escalation, 35.

20 Frank C. Zagare, “The Dynamics of Escalation,” in Information and Decision Technologies 16, no. 3, (1990), 251. University of Buffalo, NY.

21 “Russian SU-27 Causes American MQ-9 Reaper Drone to Crash Over Black Sea: US”, in The Defence Post. March 14, 2023. https://thedefensepost.com/2023/03/14/russian-jet-us-drone-crash/

22 Panter, “Naval Escalation in an Unmanned Context.”

23 Jim Garamone, “Iran Shoots Down US Global Hawk Operating in International Airspace,” Press Release, US Department of Defense. June 20, 2019. https://www.defense.gov/News/News-Stories/Article/Article/1882497/

24 ‘Intel Brief: The U.S.-Iran Crisis.’ The Soufan Centre IntelBrief. Updated Jun 24, 2019. https://thesoufancenter.org/intelbrief-the-u-s-iran-crisis/

25 Erin Cunningham and Missy Ryan, “Trump ordered an attack on Iran for downing drone, then called it off” In The Washington Post. June 21, 2019. https://go.gale.com/ps/i.do?id=GALE%7CA589970749&sid=googleScholar&v=2.1&it=r&linkaccess=abs&issn=01908286&p=AONE&sw=w&userGroupName=anon%7Eaa928ffb&aty=open-web-entry

26 The Soufan Center, ‘IntelBrief.’

27 Schaus and Johnson, “Unmanned Aerial Systems’ Influences.”

28 Schaus and Johnson, “Unmanned Aerial Systems’ Influences.”

29 Ely Karmon, “How Serious the Russian Threat to Israel in Syria? A Historical Perspective.” In IPS Publications. May 2018. 1. https://www.runi.ac.il/media/02lmakus/elykarmon29-5-18.pdf

30 Schaus and Johnson, “Unmanned Aerial Systems’ Influences.”

31 Forrest E. Morgan et al. Military Applications of Artificial Intelligence, (Santa Monica CA: RAND Corporation, 2020), 140. https://www.rand.org/content/dam/rand/pubs/research_reports/RR3100/RR3139-1/RAND_RR3139-1.pdf

32 Forrest E. Morgan et al. Military Applications. 22.

33 Nick Danby, “How the downing of Iran Air flight 655 still sparks US-Iran enmity,” in Responsible Statecraft, 02 July, 2021. https://responsiblestatecraft.org/2021/07/02/how-the-downing-of-iran-air-flight-655-still-influences-us-iran-enmity/

34 Nick Danby, “How the downing of Iran Air flight 655 still sparks US-Iran enmity.”

35 Nick Danby, “How the downing of Iran Air flight 655 still sparks US-Iran enmity.”

36 Hassan Rouhani (@HassanRouhani), Tweet regarding IR655, January 7, 2020. https://x.com/hassanrouhani/status/1214236608196685824

37 Mustapha Mouloua, and Peter Hancock, Human performance in automated and autonomous systems, current theory and methods, (CRC Press, 2020). 104.

38 Mouloua & Hancock, Human performance in automated and autonomous systems, 104.

39 Mouloua & Hancock, Human performance in automated and autonomous systems, 104.

40 “Accident Report: Eastern Airlines Flight 401, N310EA.” Federal Aviation Administration. Accessed September 20, 2024. https://www.faa.gov/lessons_learned/transport_airplane/accidents/N310EA

41 “Accident Report: Eastern Airlines Flight 401.”

42 Robert Arrabito et al. Human Factors Issues for Controlling Uninhabited Aerial Vehicles, (Toronto: Defence R&D Center, 2010), i. https://apps.dtic.mil/sti/pdfs/ADA543186.pdf

43 Arrabito et al. Human Factors Issues for Controlling Uninhabited Aerial Vehicles.

44 Andre Haider, “The Vulnerabilities of Unmanned Aircraft Systems Components.” In A Comprehensive Approach to Countering Unmanned Aircraft Systems, Joint Air Power Competence Center, January 2021. https://www.japcc.org/chapters/c-uas-the-vulnerabilities-of-unmanned-aircraft-system-components/#data-link

45 Shayne Longpre, Marcus Storm and Rishi Shah, “Lethal autonomous weapons systems and artificial intelligence: Trends, challenges and policies,” in MIT Science Policy Review, Vol 3. August 29, 2022. 49.

46 Yuval Abraham, “A Mass Assassination Factory: Inside Israel’s Calculated Bombing of Gaza,” In +972 Magazine, 30 November 2023, https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/.

47 Izabella Khachatryan, The accuracy level of targeted killings by UAVs. (Montrose CA: Center for Truth and Justice, 2022). https://www.cftjustice.org/the-accuracy-level-of-targeted-killings-by-uavs-retrospective-to-nk-war-in-2020-is-the-usage-of-drones-legal-during-armed-conflicts-considering-the-high-risk-of-disproportionate-collateral-damage/

48 Khachatryan, The accuracy level of targeted killings by UAVs.

49 Khachatryan, The accuracy level of targeted killings by UAVs.

50 Yuval Abraham, “A Mass Assassination Factory.”

51 Yuval Abraham, “’Lavender’: The AI machine directing Israel’s bombing spree in Gaza.” In +972 Magazine. April 3, 2024. https://www.972mag.com/lavender-ai-israeli-army-gaza/

52 Rohit Kumar Sharma, “Human-in-the-loop Dilemmas: The Lavender System in Israel Defence Force Operations.’ In Journal of Defence Studies, Vol. 18, No. 2., April-June 2024. 166-175.

53 Elke Schwartz, ‘Gaza War: Israel Using AI to Identify Human Targets Raising Fears That Innocents Are Being Caught in the Net’, The Conversation, 12 April 2024. https://theconversation.com/gaza-war-israel-using-ai-to-identify-human-targets-raising-fears-that-innocents-are-being-caught-in-the-net-227422.

54 Yuval Abraham, “Lavender.”

55 Yuval Abraham, “Lavender.”

56 “The Cuban Missile Crisis, October 1962,” Office of the Historian, United States State Department. Accessed September 20, 2024. https://history.state.gov/milestones/1961-1968/cuban-missile-crisis

57 Beyza Unal et al. Uncertainty and Complexity in Nuclear Decision Making, (London: Chatham House Publishing, March 2022).

58 Edward Wise, “Brinksmanship in the Kargil Heights: A Study of Escalation and Restraint during the 1999 Kargil conflict.” Leiden University Repository, Accessed 20 September 2021. https://studenttheses.universiteitleiden.nl/access/item%3A2701698/view

59 Yuval Abraham, “Lavender.”

60 Frank Zagare, “The Dynamics of Escalation,” 251.

61 Katrina Manson, “AI Warfare is Already Here,” in Bloomberg. February 28, 2024. https://www.bloomberg.com/features/2024-ai-warfare-project-maven/

62 Christina Larson, “China’s massive investment in artificial intelligence has an insidious downside,” in Science, February 8, 2018. https://www.science.org/content/article/china-s-massive-investment-artificial-intelligence-has-insidious-downside

63 Gabriel Honrada, “AUKUS ground robots pass hot electronic war test,” in The Asia Times, February 8, 2024. https://asiatimes.com/2024/02/aukus-ground-robots-pass-hot-electronic-war-test/

64 Jason Wolff, The Department of Defense’s digital logistics are under attack, (Washington DC: The Brookings Institute, July 2023). https://www.brookings.edu/articles/the-department-of-defenses-digital-logistics-are-under-attack/

65 Keirissa Lawson, New Technology for Landmine Clearance, (Canberra: CSIRO, April 4, 2023). https://www.csiro.au/en/news/all/articles/2023/april/landmine-clearance

66 Kelly A. Grieco, What Can UN Peacekeeping Learn from Ukraine’s Drones?, (Washington DC: The Henry L. Stimson Center, 2023).

67 Stephen E. Miller, “Nuclear Hotlines: Origins, Evolution, Applications,” in Analysis and New Insights, Stanley Center for Peace and Security, October 2020, 5.

68 Janosch Delcker, “How Killer Robots Overran the UN,” in Politico, February 12, 2019. https://www.politico.eu/article/killer-robots-overran-united-nations-lethal-autonomous-weapons-systems/

69 United Nations, The Convention on Certain Conventional Weapons, (Geneva: The United Nations Office of Disarmament Affairs, October 10, 1980). https://disarmament.unoda.org/the-convention-on-certain-conventional-weapons/

70 Michael T. Klare, “Strong Support at Conference for ‘Killer Robot’ Regulation.” In Arms Control Today. June 2024. https://www.armscontrol.org/act/2024-06/news/strong-support-conference-killer-robot-regulation

71 Laura Varella, “Building Momentum to Ban AWS,” in AWS Diplomacy Report 1, No. 1. May 7, 2024.  https://reachingcriticalwill.org/images/documents/Disarmament-fora/other/aws/2024-vienna/reports/AWSR1.1.pdf

72 Commonwealth of Australia, National Defence Strategy 2024. (Canberra: Australian Government – Defence, 2024). 57.


Disclaimer

The views expressed in this article are those of the author and do not necessarily reflect the position of the Department of Defence or the Australian Government.
