
The opportunities and risks associated with the emerging application of artificial intelligence and autonomous systems technologies have been subject to much hype, hope and hysteria. Major powers such as the United States and China have made explicit statements regarding the importance of these technologies to their future security and prosperity. Regarding artificial intelligence, even Vladimir Putin went so far as to state that, “Whoever becomes the leader in this sphere will become the ruler of the world.” Consequently, all these nations are investing heavily in associated areas of research. Conversely, special interest groups and some nations are calling for a ban on the weaponisation of these technologies. How, and to what degree, should the Australian Defence Force leverage the opportunities these technologies might offer in a way that is consistent with the Australian way of war and the expectations of the public it protects? A good place to start might be to replicate the current governance mechanisms that are the foundation of the success of the ADF and most modern militaries. As we will discover, behind our mission successes and capability superiority are frameworks of accountability and associated governance mechanisms that underpin and ensure the performance of almost every aspect of a modern military.

What are ‘autonomous systems’?

Despite the conceptual enthusiasm for autonomous systems, or maybe because of it, the definitions of terms and the language used to describe these systems have been quite loose. Unmanned systems are often described as robots, and automatic systems are often described as autonomous. If we are to understand, debate, or effectively employ autonomous systems, we must first be more precise in our use of language when discussing them. As Devitt describes, an autonomous system can be a robot – an AI embedded within a system that acts in the physical world – or an artificial intelligence application or program manipulating information without human control. An autonomous system can also be a sub-system within a more extensive system, contributing to the function and actions of the broader system, where the more extensive system may even be predominantly a human system. Even in these circumstances, by definition, an autonomous system operates without human control. Additionally, we need to define what we mean by autonomy. The definition offered by Abbass et al. – ‘Autonomy is the freedom to make decisions subject to, and sometimes in spite of, environmental constraints according to the internal laws and values that govern the autonomous agent’ – is useful for our considerations.
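
To make the distinction concrete, the sketch below contrasts an automatic system, which maps inputs to outputs through a fixed rule, with an autonomous agent in the Abbass et al. sense, whose choices flow from its own internal values subject to environmental constraints. It is a hypothetical illustration only; the names, the value weightings, and the treatment of constraints as hard filters are all simplifying assumptions, not any fielded system.

```python
# Hypothetical sketch contrasting 'automatic' with 'autonomous' in the
# Abbass et al. sense. All names and structures are illustrative only.

def automatic_collision_alarm(distance_m: float) -> bool:
    """Automatic: a fixed input-to-output rule; no decision is 'made'."""
    return distance_m < 50.0

class AutonomousAgent:
    """Autonomous: chooses among permissible actions according to its own
    internal values, subject to environmental constraints, without a human
    controller in the loop."""

    def __init__(self, values: dict[str, float]):
        # The internal 'laws and values' that govern the agent.
        self.values = values

    def decide(self, options: list[str], constraints: set[str]) -> str:
        # Environmental constraints rule some options out...
        permissible = [o for o in options if o not in constraints]
        # ...and the agent ranks the remainder by its own values.
        return max(permissible, key=lambda o: self.values.get(o, 0.0))

agent = AutonomousAgent(values={"evade": 0.9, "loiter": 0.2, "return": 0.5})
print(agent.decide(["evade", "loiter", "return"], constraints={"evade"}))
# -> 'return': the highest-valued action still permitted by the environment
```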

Overview of Auftragstaktik

In response to being defeated by Napoleon, the Prussians developed the concept of Auftragstaktik, or mission command. The concept subsequently, and famously, brought success to the Germans during the Blitzkrieg of World War II. Through the decentralisation of military command, Auftragstaktik allowed German forces to conduct the rapid manoeuvre warfare that led to tactical victory. Mission command allows for independent actions to deliver coordinated, cooperative and cumulative effects in the battlespace that are consistent with an overall mission objective. Consequently, mission command has become the hallmark of modern warfare and is the command concept of choice for successful modern militaries, including the Australian Defence Force. Until this operational concept is supplanted, autonomous systems will also need to integrate with this way of war. If we are to develop autonomous systems that can integrate with mission command based operations, we must first understand what allows us to successfully implement the mission command concept with human-centric systems. The Australian Defence Force’s command and control doctrine states:

Mission command requires a high level of mutual trust at all levels of command which can only be achieved through intensive, realistic training. Subordinates are trusted by being allocated sufficient resources to carry out their missions, and commanders should keep control to a minimum so as not to constrain their subordinates’ freedom of action.

The publication goes on to highlight: ‘The key to this is mutual trust and confidence amongst commanders, one of the pre-requisites of mission command.’ I doubt anyone would argue against the criticality of mutual trust to the success of a military force. Trust theoreticians Fulmer and Gelfand consider trust to have two components, ‘…positive expectations of trustworthiness, which generally refers to perceptions, beliefs, or expectations about the trustee’s intention and being able to rely on the trustee, and willingness to accept vulnerability, which generally refers to a suspension of uncertainty.’ Therefore, to successfully integrate autonomous systems into ADF operations we must generate positive expectations of trustworthiness and reliability in these systems, and the willingness to accept the consequences of their use. These trust effects will need to be established at all levels, from the operators in the field fighting alongside the autonomous systems through to command at the highest levels.

Governance, mission command, and autonomous systems

If we are ever to effectively utilise an autonomous system, the willingness to accept the associated vulnerability is the crux of the issue at hand. Can we expect those who will rely on the autonomous system for their combat success or survival to fully trust it? Additionally, how will those who delegate their authority to an autonomous system, and who will be held to account for the outcome, establish a level of trust in the system sufficient to do so?

The most productive, and likely most successful, path to building human trust in fielded autonomous systems will be to replicate the way we currently build trust in each other. Unless we intend to field only fully autonomous forces composed of like systems, without humans, we will need to establish a level of trust in those systems that is consistent with the consequences of an autonomous system failing in its mission. Humans and machines already exist in trust relationships, though these are currently one-sided: the human trusts the machine. We trust our technology to perform in the way it was designed to. This trust is largely based on centuries of engineering developments and generations of experience with the products that result from these engineering practices. Because autonomous systems are a novel application of existing technologies, humans will be comfortable trusting some aspects of the machine, such as mechanical reliability. However, where we choose to replace human attributes with machine processes, trust will be harder to establish.

With humans, achieving the level of trust required for mission command is the consequence of systems of governance that assure, ensure, validate and verify the performance of every element of the capability system, including the human elements. These systems of governance make it reasonable to expect the behaviours and level of performance, as identified by Fulmer and Gelfand, required to establish trust. Thus, integrating autonomous systems into existing capability systems without undermining the performance of our current systems will require appropriate systems of governance.

So, how does governance enable mission command, and therefore, how should we govern autonomous systems? Governance has three elements: compliance, conformance, and performance. Compliance ensures that documented processes and procedures are consistent with higher instructions. Conformance ensures that the behaviour of the elements within a system conforms to the documented processes and procedures. And performance ensures that the system is delivering the effects it is supposed to. Commitment to establishing and maintaining these elements of governance arises from the accountability that goes with the authority for the systems that are governed, and from an understanding of the value that systems of governance deliver to a capability system. Senior leaders can only accept exposure to personal liability for outcomes they were not directly involved in because they have confidence and trust in the systems of governance that assure and ensure the safe and effective conduct of operations.

For a model that could deliver these effects to the governance of autonomous systems, we could do worse than look to those systems employed for the governance of military aviation within the Australian Defence Force. Like a modern aircraft, an autonomous system is a highly technical and complicated machine, made up of hardware and software, whose operation could impact the safety of personnel. Military aviation has deep institutional knowledge about the governance of systems made up of hardware, software, and human components. From a technical perspective, there already exist standards of design, build, and maintenance that could be applied to autonomous systems.

Like most military functions, the operational or human side of aviation is also highly governed. Like the technical elements of the system, the human components have analogous standards of design, build, and maintenance, captured in recruiting and training standards, in career-long development models that continue to improve the qualifications and competency of aircrew, and in testing regimes that ensure ongoing currency and competency. In an aviation system, the technical and operational elements are mostly separate: the engineering world looks after technical governance, and the operational world looks after human operational governance. However, an autonomous system may rely on machine learning techniques and may continue to learn while being employed. Thus, the use of machine learning techniques to build autonomous systems creates an overlap between the technical and operational governance responsibilities. This overlap highlights a functional gap in the suitability of current governance systems if applied to autonomous systems.

As autonomous systems are also trained, they will need systems of governance analogous to human training systems. Establishing systems that can fill this governance gap will require entirely new functions and authorities within Defence. These new functions and authorities will need to design and approve the training, testing, and validation data sets used for machine learning. Like their human counterparts, autonomous systems might also need qualification and categorisation schemes, and regular testing to ensure they continue to perform to the required standard for the missions or tasks they are about to undertake, as they continue to learn or as the context they learnt within changes.
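
To make the idea concrete, here is a minimal sketch of what such a qualification gate might look like: a continually learning model must re-pass an approved, versioned test set to a set standard before it is cleared for a mission category. Every name, threshold, and data structure here is an illustrative assumption, not an existing Defence process or system.

```python
# Hypothetical qualification gate for a continually learning system: the
# model must re-pass an approved, versioned test set before being cleared
# for a mission category. All names and thresholds are illustrative.

from dataclasses import dataclass
from typing import Callable, Sequence, Tuple

@dataclass(frozen=True)
class Qualification:
    mission_category: str   # e.g. "route-survey" (illustrative label)
    test_set_version: str   # the approved validation data set it passed
    accuracy: float         # score achieved on that test set

def evaluate(model: Callable, test_set: Sequence[Tuple[object, object]]) -> float:
    """Score the model against an approved (input, expected-output) test set."""
    correct = sum(1 for x, y in test_set if model(x) == y)
    return correct / len(test_set)

def requalify(model: Callable,
              test_set: Sequence[Tuple[object, object]],
              test_set_version: str,
              mission_category: str,
              threshold: float = 0.95) -> Qualification:
    """Grant a qualification only if the model meets the standard.

    Any further learning invalidates the qualification, so this gate is
    re-run whenever the model's weights or operating context change.
    """
    score = evaluate(model, test_set)
    if score < threshold:
        raise PermissionError(
            f"Model scored {score:.2%} on test set {test_set_version}; "
            f"below the {threshold:.0%} standard for {mission_category}.")
    return Qualification(mission_category, test_set_version, score)

# Usage: a trivial stand-in model and test set.
model = lambda x: x % 2 == 0
test_set = [(2, True), (3, False), (4, True), (7, False)]
print(requalify(model, test_set, "v1.3", "route-survey"))
```

Re-running such a gate whenever the system learns is what would close the overlap identified above: the same artefact is governed both as an engineering product and as a trained operator.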

There are significant tasks ahead if we are to employ truly autonomous systems alongside humans operationally. A common refrain from senior leadership is their faith that millennials and digital natives will embrace the application of artificial intelligence and autonomous systems, the assumption being that, having grown up with this technology, they will happily rely on it in future operations. Though whenever I am told about how much the world is changing, I look for ways that it isn’t. One of the constants in the world is us and our biology. Even with the rapid advance of technology and our sophisticated ideas, we are still just highly evolved great apes who share the same physical and cognitive shortcomings, emotional responses to stress, and social imperatives as those who came generations before. Current generations may trust artificial intelligence to recommend a movie or order an Uber to deliver a pizza. However, when they are personally liable for an outcome or their life is on the line, they might prefer to turn to established social bonds of trust and shared consequence, rather than to confidence in the coding and technology of a system whose behaviour is determined by different and likely alien motivations. Only time will tell if we can develop governance systems robust enough to overcome millennia of evolution and social conditioning, and to trust life, limb and personal liability to the choices of a machine.

Although for some the term governance is uttered with disdain, governance and its consequential effects underpin and enable mission command. Governance is as responsible for the effectiveness and successes of modern militaries as superior firepower or the brilliant manoeuvres and creative genius of their commanders. It is the cornerstone of achieving and maintaining capability superiority. On the battlefields of the future, where autonomous systems fight alongside humans, victory might not go to the side that can build the most sophisticated artificial intelligence and autonomous systems, but to the side that can best govern them.


Disclaimer

The views expressed in this article are those of the author and do not necessarily reflect the position of the Department of Defence or the Australian Government.
