LTCOL Jasmin Diab

Over the last twelve months I have been hidden away working on a machine learning algorithm to support the International Atomic Energy Agency (IAEA) nuclear safeguards analytical tools (page 78). Essentially, my little machine trawls the internet to find any non-declared uranium mining activities, saving analysts years of scanning and reading documents to narrow down potential non-declared activities around the globe. So, as leaders and decision makers, could we use machine learning with a military hat on? Or are we not culturally ready as an organisation to trust that a machine will help us make the right decision, whether in peace or at war? I seek to explore this by asking a few questions.


Firstly – what is machine learning?


Machine learning is just one of many tools in the artificial intelligence realm that helps humanity combat big data. In its simplest form, a machine learning model requires three key things: data, an algorithm, and the ability to iterate and learn. To be honest, the biggest part of machine learning really is the data. If you put bad data in, you’ll get bad data out, and trust in the machine’s support for decision making will be lost. This requirement for system performance raises the first question.
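To make those three ingredients concrete, here is a minimal sketch in plain Python, with made-up data points, that learns a straight-line relationship by iterating and correcting its own errors. It is a toy, not any particular military or safeguards system:

```python
# A minimal sketch of the three ingredients of machine learning:
# (1) data, (2) an algorithm, (3) iteration that lets the model learn.
# The data below is invented for illustration; it roughly follows y = 2x.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (input, output) pairs

w = 0.0    # the model: a single weight (the slope we are trying to learn)
lr = 0.01  # learning rate: how big each corrective nudge is

for step in range(1000):        # iteration: repeatedly learn from mistakes
    for x, y in data:
        error = w * x - y       # how wrong the current model is on this point
        w -= lr * error * x     # the algorithm: nudge w to shrink the error

# After enough iterations, w settles near 2.0, the slope hidden in the data.
print(round(w, 1))
```

Feed the same loop bad data (say, mislabelled outputs) and it will dutifully learn the wrong slope, which is the "bad data in, bad data out" problem in miniature.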


Question #1 – Does Defence store and maintain data effectively?


In my nuclear safeguards work I utilised a branch of machine learning called natural language processing, or NLP. It uses mathematical processes to teach a machine to “read” and analyse text, opening up the world of analytics not just to numerical data but to words and phrases. Imagine how getting a machine to trawl through pages and pages of reports, converting them into a structured data set, might help inform where a commander, decision maker or staff officer focuses their effort. But this is not just a “Google-esque” scanner: if your machine’s algorithmic parameters are set to what you need, and it has the ability to evolve and learn from your decisions, you can set it up to find patterns of information (or patterns of missing information) to determine what is likely to be occurring.
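As a toy illustration of the idea (deliberately far simpler than the actual IAEA tooling, and using invented report snippets and indicator terms), a machine can scan a pile of reports and flag the ones whose wording matches patterns of interest. Real NLP models learn such patterns from labelled data rather than having them hand-written:

```python
import re

# Hypothetical indicator patterns; in a learned NLP model these would be
# discovered from training data, not listed by hand.
INDICATORS = [r"\buranium\b", r"\bmining\b", r"\bexport licen[cs]e\b"]

reports = [  # invented report snippets for illustration only
    "New uranium mining lease granted near the northern border.",
    "Quarterly agricultural output rose by three percent.",
    "Export licence applications for ore concentrate doubled.",
]

def score(text: str) -> int:
    """Count how many indicator patterns appear in a report."""
    return sum(bool(re.search(p, text, re.IGNORECASE)) for p in INDICATORS)

# Flag any report matching at least one indicator for a human analyst.
flagged = [r for r in reports if score(r) >= 1]
print(len(flagged))  # 2 of the 3 snippets match at least one indicator
```

The value is in the triage: the machine reads everything and surfaces the small fraction worth an analyst’s time.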


Question #2 – If we do have excellent databases, do they all have the ability to share data for analysis? 


What do I mean? 


Let’s consider, in peacetime, a regular decision-making event for military personnel: moving interstate. You have your inventory loaded onto an online portal, you have a posting order with your locality, and your family details are kept up to date on an online system, including whether you have a special-needs family member who requires a ramp or a large dog that needs a backyard. A machine with access to this data could analyse it against the housing solutions that have worked for others in a similar situation. Instead of spending weeks or months fighting to find a house within your price range and posting locality, the machine does it for you, either allocating an available DHA property or narrowing down some rental options in the area.


Question #3 - If it saves you and your family the stress of finding a house would you trust the machine to find one for you? 


Now, if we look at decision making as part of a counter-insurgency campaign, your machine could be set up to trawl open-source media reports, live social media feeds, classified intelligence reports, chat reps, patrol reports and much more. You, as the commander of a Task Group, have to decide whether to drop a bomb on a target or send in a land-based force. Your machine could be set to parameters that give accurate and timely analysis of what is in that location and the likely risks of either option. This doesn’t put the green lanyards out of a job (I can hear them all hating on me already – chill – this machine is for you!). Rather, it allows intelligence staff to focus their efforts on the information that is relevant to the campaign, taking in hundreds or even thousands more feeds than they could ever comprehend in a short amount of time.


Question #4 - Would we trust a machine to steer our decisions in a certain direction even if it may result in lethal effects?


Finally, if we are in a state-versus-state conflict, total war, the possibilities are endless: written reports, live battle-tracking feeds, radio traffic, known enemy doctrine, Tactics, Techniques and Procedures (TTPs), known supply routes, supply mechanisms, and global supply shortages or restrictions. Your machine could give you indications of when and where it is best to hit your enemy. It could link directly into a weapon system and decide when to engage (although the ethics of autonomous weapons are probably not there yet). It could also predict when your forces are likely to run out of key equipment as supply and demand ebb and flow during the battle. Need a wargaming buddy? Why not build a machine that understands how your enemy’s TTPs have been evolving, and that evolves its own decision making by analysing enemy moves, allowing a commander to really wargame a plan?


Question #5 – How far are we willing to let machines help us in war?


How do we do this?


Some of these decisions are easier than others for humans to be comfortable allowing a machine to steer. I haven’t even opened the box of ethics in this argument, which is an important issue to debate. However, if as an organisation we want to be smarter with big data and with using information as a weapon, then machine learning is an option we should be starting to incorporate into our daily lives. The role of data scientists is crucial in helping build and refine data sets and ensuring we store and manage data securely and appropriately. Do we need to teach leaders how to embrace machines in their decision-making processes, so that we understand both our limitations as humans (coffee and chocolate can only keep you awake for so long) and the limitations of our machines? Do we need to incorporate coding into the school of languages so that we can build our own machines? Then we need to have the tough debates on ethics, noting that ethics would likely evolve during total war. Maybe an ethics-based machine could help us make these decisions.

