
Streamlining Air Land Operations for Better Outcomes
Abstract

Augmented Reality and the Future of Learning and Business
Our interaction with the technological world is changing rapidly; we are no longer limited by screens, or even by reality as we knew it. David Rapien walks us through the history of, and differences between, Virtual Reality and Augmented Reality, and looks towards the future of these technologies in life, business and education.

How Biased Minds can be the Key to Unbiased AI Systems
This TED Talk examines the origins of cognitive bias, its advantages and drawbacks, and the challenges posed by biased AI. The solution might lie within ourselves.

Center of Gravity: What Clausewitz Really Meant (Part 1 of 2)
Part one of a two-part article written by Professor Joseph L. Strange, Marine Corps War College and COL Richard Iron, British Army.
This paper explores what Clausewitz really meant by the term “center of gravity”. The authors propose that he intended it to be a strength, either moral or physical, and a dynamic and powerful agent in its own right. They also argue that the current Joint and NATO definition of center of gravity is incorrect, implying it to be a source of strength rather than the strength itself, and that this mis-definition is responsible for much of the confusion surrounding the concept today.

Managing Assumptions in Planning and Execution

Capability Boost: Trials Demonstrate Enhanced ViDAR/ScanEagle Package
This article from Jane’s International Defence Review discusses the use of Visual Detection and Ranging (ViDAR) technology on the ScanEagle unmanned aerial system (UAS) platform to provide detection capabilities comparable to radar using Electro-Optical (EO) and Infra-Red (IR) sensors.

Redefining the Center of Gravity
COL Dale C. Eikmeier, USA (Ret.), is an Assistant Professor at the U.S. Army Command and General Staff College. COL Eikmeier shares his thoughts on identifying the center of gravity and offers a method that provides campaign planners with an analytical tool to fulfil doctrinal intent.

Understanding Centers of Gravity and Critical Vulnerabilities (Part 2 of 2)
Part two of a two-part article written by Professor Joseph L. Strange, Marine Corps War College and COL Richard Iron, British Army.
This paper examines the role of centers of gravity in operational design, looking at the relationship between centers of gravity and critical vulnerabilities. It suggests an analytical model that joint warfighters and planners on both sides of the Atlantic can use to assist strategic and operational-level planning. The model helps planners analyze existing and potential vulnerabilities of a center of gravity and determine which of those could be especially critical.
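As an illustration of how such an analytical model might be captured, the Python sketch below represents a center of gravity, its critical capabilities and their critical requirements, and collects the vulnerable requirements as critical vulnerabilities. It assumes the chain from center of gravity to critical capabilities, critical requirements and critical vulnerabilities that the authors describe; the class names, fields and example content are illustrative and are not taken from the article.

from dataclasses import dataclass, field
from typing import List


@dataclass
class CriticalRequirement:
    """A condition, resource or means that a critical capability depends on."""
    description: str
    is_vulnerable: bool = False  # deficient or exposed requirements become critical vulnerabilities


@dataclass
class CriticalCapability:
    """A primary ability that makes the center of gravity effective."""
    description: str
    requirements: List[CriticalRequirement] = field(default_factory=list)


@dataclass
class CenterOfGravity:
    """A moral or physical strength acting as a dynamic agent in its own right."""
    name: str
    capabilities: List[CriticalCapability] = field(default_factory=list)

    def critical_vulnerabilities(self) -> List[CriticalRequirement]:
        """Collect requirements flagged as vulnerable across all capabilities."""
        return [req
                for cap in self.capabilities
                for req in cap.requirements
                if req.is_vulnerable]


# Invented example content, for illustration only.
cog = CenterOfGravity(
    name="Armoured corps",
    capabilities=[CriticalCapability(
        description="Sustained offensive manoeuvre",
        requirements=[CriticalRequirement("Fuel resupply", is_vulnerable=True),
                      CriticalRequirement("Command and control network")])])
print([r.description for r in cog.critical_vulnerabilities()])  # ['Fuel resupply']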

Google Ponders the Shortcomings of Machine Learning
This article discusses Google’s AI research groups, Google Brain and DeepMind, and their explanation for why, despite advances in computing power, machine learning still lags behind human cognitive skills, particularly the ability to “generalize beyond one’s experience”. The article describes neural networks that operate over graphs of relationships between entities as a potential area of future advancement in machine learning and artificial intelligence.
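To make the idea of reasoning over graphs of relationships concrete, the sketch below runs one round of message passing over a small graph of entities using only NumPy. It is a toy illustration of the general graph-network approach the article alludes to, with randomly initialised weights standing in for learned parameters; it does not reproduce any actual Google Brain or DeepMind code or API.

import numpy as np

# Nodes: feature vectors for entities; edges: (sender, receiver) index pairs.
rng = np.random.default_rng(0)
num_nodes, feat_dim = 4, 3
node_features = rng.normal(size=(num_nodes, feat_dim))
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a simple relational structure

# Toy "learned" weights; a real system would train these.
w_edge = rng.normal(size=(2 * feat_dim, feat_dim))
w_node = rng.normal(size=(2 * feat_dim, feat_dim))


def message_passing_step(nodes, edges):
    """One round of relational message passing over the graph."""
    messages = np.zeros_like(nodes)
    for sender, receiver in edges:
        pair = np.concatenate([nodes[sender], nodes[receiver]])
        messages[receiver] += np.tanh(pair @ w_edge)  # edge update
    # Node update combines each node's state with its aggregated messages.
    combined = np.concatenate([nodes, messages], axis=1)
    return np.tanh(combined @ w_node)


updated = message_passing_step(node_features, edges)
print(updated.shape)  # (4, 3): relational context folded into each entity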

Future of Logistics Systems in Defence
Logistics! It’s not something we usually think about, and not many people understand the concept or its impact on the progress of society. Yet it is the lifeblood of any economy – of any home, organisation, city or country.

AI-Based Virtual Tutors – The Future of Education?
This blog post is about the UC Berkeley Virtual Tutor project and the speech recognition technologies that were tested as part of that effort. The authors share best practices in machine learning and artificial intelligence for selecting models and engineering training data for speech and image recognition.
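A common way to compare candidate speech recognition models of the kind the post describes testing is to measure word error rate on held-out transcripts. The Python sketch below computes word error rate with a standard word-level edit distance; the model names and transcripts are hypothetical and are not drawn from the Berkeley project.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[-1][-1] / max(len(ref), 1)


# Hypothetical transcripts from two candidate recognisers on the same audio.
reference = "move the red block onto the blue square"
candidates = {
    "model_a": "move the red block onto the blue square",
    "model_b": "move a red block on the blue square",
}
for name, hyp in candidates.items():
    print(name, round(word_error_rate(reference, hyp), 3))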

Artificial intelligence system uses transparent, human-like reasoning to solve problems
This article describes a method by which a computer can recognise objects using the Transparency by Design Network (TbD-Net), developed at MIT Lincoln Laboratory. Researchers used human-like reasoning steps to develop an algorithm which they claim can outperform other visual recognition software and algorithms, while allowing humans to view its reasoning process and determine where and how it makes mistakes.
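The Python sketch below illustrates the modular, inspectable style of reasoning described here: small modules attend to properties of a toy scene and are composed into a program whose intermediate attention masks can be examined step by step. It is only an illustration of the idea; TbD-Net itself composes neural attention modules over image feature maps, and none of the names below are taken from the MIT Lincoln Laboratory code.

import numpy as np

# Toy scene: a 4x4 grid where each cell holds an object label (or "").
scene = np.array([
    ["", "red ball", "", ""],
    ["", "", "blue cube", ""],
    ["red cube", "", "", ""],
    ["", "", "", "green ball"],
])


def attend(predicate):
    """Module producing an attention mask: 1.0 where the predicate holds."""
    return np.vectorize(lambda cell: 1.0 if predicate(cell) else 0.0)(scene)


def intersect(mask_a, mask_b):
    """Module combining two attention masks (logical AND)."""
    return mask_a * mask_b


def count(mask):
    """Module reducing an attention mask to an answer."""
    return int(mask.sum())


# Compose modules into a small "program": how many red cubes are in the scene?
red = attend(lambda cell: "red" in cell)
cubes = attend(lambda cell: "cube" in cell)
answer = count(intersect(red, cubes))
print(answer)                 # 1
print(intersect(red, cubes))  # intermediate masks are inspectable, step by step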