
Executive summary

This essay examines the transformative role artificial intelligence (AI) will play in information warfare, focusing on its capabilities in data analytics, cyber operations, psychological influence, and misinformation management. The hypothesis posits that AI enhances the effectiveness of information warfare strategies, providing unique tools that fundamentally shift how military and intelligence organisations achieve strategic dominance.

Introduction to information warfare

Information warfare has expanded beyond traditional espionage and propaganda, encompassing a broad array of operations designed to gain strategic advantage through the control and manipulation of information. With the integration of digital technologies, information warfare now includes cyber-attacks, electronic warfare, and psychological operations aimed at influencing public opinion and decision-making. At the core of this evolution is AI, which enables advanced data collection, processing, and analysis, transforming information warfare from its early roots into a dynamic and complex battlespace that increasingly relies on information dominance (Libicki, 2020).

The role of AI in information warfare

Artificial intelligence plays a pivotal role in modern warfare by enhancing the capabilities of data analysis, cyber operations, and psychological manipulation. Machine learning algorithms enable pattern recognition, anomaly detection, and predictive analytics, while natural language processing (NLP) and computer vision expand the ability to interpret and act upon vast amounts of data in real time. These capabilities support the hypothesis that AI significantly magnifies the impact of information warfare strategies, offering both operational and strategic advantages that were previously unimaginable (Goodfellow, Bengio, & Courville, 2016).
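To ground the anomaly-detection capability in something concrete, the sketch below flags values that sit far from a data stream's mean using a simple z-score rule. It is a deliberately minimal stand-in for the machine learning techniques described here; the traffic figures and threshold are invented for illustration.

```python
from statistics import mean, stdev

def flag_anomalies(readings, threshold=2.5):
    """Return indices of readings whose z-score exceeds `threshold`.

    A minimal stand-in for the anomaly detection discussed in the
    text, not a fielded technique.
    """
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []
    return [i for i, r in enumerate(readings) if abs(r - mu) / sigma > threshold]

# Invented signal volumes with one clear spike at index 8.
traffic = [100, 102, 98, 101, 99, 100, 97, 103, 500, 101]
print(flag_anomalies(traffic))  # [8]
```

Real systems would use adaptive baselines and richer features, but the principle is the same: learn what "normal" looks like, then surface departures from it.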

Data as a strategic asset in warfare

Data has become a fundamental asset in modern warfare, driving intelligence gathering, decision-making, and tactical planning. Many military organisations now rely on a variety of data types to maintain situational awareness and execute operations with precision. Key data categories include:

  1. Intelligence data: Gathered through surveillance, reconnaissance, and signals intelligence, providing insights into enemy movements, capabilities, and intentions (Betts, 2015).
  2. Operational data: Covering logistics, troop deployments, and resource allocations, operational data is crucial for mission planning and execution.
  3. Communications data: Involves intercepted communications, electronic signals, and network traffic data, all critical for disrupting enemy command and control.
  4. Environmental data: Includes weather patterns, terrain information, and environmental factors affecting military actions. Such data informs decisions about timing and logistics to maximise mission effectiveness.

Collecting and managing this data requires dynamic data processing, elastic storage solutions, and cybersecurity measures to protect sensitive information from unauthorised access and cyber threats (Marr, 2015).

AI-driven data utilisation

AI significantly enhances the utilisation of data by providing advanced tools for large-scale data analytics, real-time processing, predictive analytics, and data fusion. These techniques allow for comprehensive situational awareness, improving decision-making and operational efficiency on the battlefield.

  1. Data analytics: AI algorithms can process massive datasets from satellite images, communications, and surveillance feeds, identifying patterns, correlations, and insights beyond human capabilities. This information can be used by adversaries to identify weaknesses in an opponent’s systems, or by coalition and allied partners to identify an adversary’s actions. This comprehensive analysis helps military planners make informed decisions, maximising the strategic value of collected data (Gandomi & Haider, 2015).
  2. Real-time processing: Real-time data processing is essential in dynamic combat environments where immediate decision-making is required. AI technologies can analyse incoming data instantly, enabling military leaders to respond quickly to emerging threats and battlefield changes (Chen & Zhang, 2014).
  3. Predictive analytics: Predictive modelling enables AI systems to forecast enemy actions based on historical data, enhancing the ability of military leaders to anticipate threats and devise countermeasures proactively (Wang, Kung, & Byrd, 2018).
  4. Data fusion: AI-driven data fusion integrates information from multiple sources, such as radar, sonar, satellite imagery, and human intelligence, creating a unified operational picture that improves situational awareness and supports faster decision-making in complex scenarios (Liggins, Hall, & Llinas, 2017).
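As a toy illustration of the fusion step, the sketch below merges independent range estimates by inverse-variance weighting, a standard textbook fusion rule in which less noisy sources carry more weight. The sensor names, values, and variances are assumptions made up for the example.

```python
def fuse_estimates(estimates):
    """Inverse-variance weighted fusion of (value, variance) pairs.

    Less noisy sources receive proportionally more weight, and the
    fused variance is smaller than any single source's variance.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return fused, 1.0 / total

# Hypothetical range estimates in km: (value, variance) per source.
sensors = [
    (10.2, 0.5),  # radar
    (9.8, 0.2),   # satellite imagery
    (10.5, 1.0),  # human report
]
value, variance = fuse_estimates(sensors)
print(value, variance)
```

Operational fusion engines handle correlated errors, time alignment, and conflicting reports, but the weighted-combination idea above is the core of producing one estimate that is better than any single feed.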

AI in cyber warfare

Cyber warfare is a cornerstone of modern military strategy, encompassing both defensive and offensive operations. AI provides advanced tools for detecting, preventing, and executing cyber-attacks, enhancing the effectiveness of cyber warfare efforts.

  1. Defensive cyber capabilities: AI-powered intrusion detection systems (IDS) analyse network traffic, identifying and responding to threats in real time. These systems can detect anomalies and suspicious activities that may signal an attack, enabling timely responses to prevent breaches (Buczak & Guven, 2016). Additionally, AI-driven threat intelligence platforms collect data from various sources to identify and predict emerging threats, strengthening defences against cyber incidents (Sommer & Paxson, 2010).
  2. Offensive cyber capabilities: In offensive operations, AI enables the creation of sophisticated malware that adapts to different environments, making it more difficult for defenders to detect and neutralise. For example, AI can generate convincing phishing attacks and social engineering tactics by analysing social media profiles, increasing the likelihood of success (Chandrasekaran, Narayanan, & Upadhyaya, 2006). AI can also scan systems for vulnerabilities, automate exploit discovery, and optimise distributed denial-of-service (DDoS) attacks, making cyber offensives more precise and disruptive (Garg, Curtis, & Halderman, 2019).
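To illustrate the structural approach to phishing detection in the spirit of the work cited above, the sketch below scores an email on a few structural red flags. The indicator list, weights, and sample message are invented for the example; a real detector would learn such features from data rather than hard-code them.

```python
import re

# Invented urgency keywords; a learned model would replace this list.
URGENCY = re.compile(r"\b(urgent|verify|suspended|immediately)\b", re.I)

def phishing_score(subject, body, links):
    """Score an email on structural red flags; higher = more suspicious."""
    score = 0
    if URGENCY.search(subject) or URGENCY.search(body):
        score += 2                      # urgency language
    for text, href in links:            # (displayed anchor text, actual target)
        if text.startswith("http") and text != href:
            score += 3                  # displayed URL differs from real target
        if re.match(r"https?://\d+\.\d+\.\d+\.\d+", href):
            score += 2                  # link points at a raw IP address
    return score

suspicious = phishing_score(
    "URGENT: verify your account",
    "Your account will be suspended immediately.",
    [("http://bank.example.com", "http://198.51.100.7/login")],
)
print(suspicious)  # 7
```

The same structural signals, inverted, are what AI-generated phishing tries to avoid, which is why the attacker-defender contest in this space is increasingly model versus model.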

Case studies in cyber warfare: Cases such as Stuxnet and Operation Cloud Hopper demonstrate the potential of AI-enhanced cyber operations. These cases reveal how AI could refine vulnerability detection, automate attack vectors, and increase the precision of cyber threats (Zetter, 2014).

AI and misinformation

Misinformation is a powerful tool in information warfare, capable of influencing public opinion, sowing discord, and undermining trust. AI plays a dual role in misinformation: it can create and spread false information, and it can also detect and mitigate it.

  1. Creation of misinformation: Through generative adversarial networks (GANs) and NLP, AI can produce deepfakes, fake news, and social media posts that resemble authentic content, deceiving audiences and spreading false narratives (Chesney & Citron, 2019). AI-driven bots further amplify misinformation by sharing, liking, and commenting on posts to create the illusion of widespread support or consensus, maximising the impact of misinformation campaigns (Ferrara et al., 2016).
  2. Detection and countering misinformation: AI also offers tools to counter misinformation, analysing content for inconsistencies, tracking dissemination patterns, and flagging or removing deceptive posts. These detection systems use machine learning to detect anomalies in content, helping to maintain information integrity (Nguyen, Nguyen, & Nguyen, 2020).
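One of the dissemination patterns such detection systems look for is coordinated amplification. The sketch below flags message texts posted verbatim by several accounts within a short window, one common signature of bot activity; the feed, window, and thresholds are fabricated for illustration.

```python
from collections import defaultdict

def coordinated_clusters(posts, window=60, min_accounts=3):
    """Flag texts posted verbatim by many accounts within `window` seconds.

    `posts` is a list of (account, text, timestamp) tuples. A toy
    signature of bot amplification, not a production detector.
    """
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((ts, account))
    flagged = []
    for text, events in by_text.items():
        events.sort()
        accounts = {a for _, a in events}
        if len(accounts) >= min_accounts and events[-1][0] - events[0][0] <= window:
            flagged.append(text)
    return flagged

# Invented feed: three accounts push the same line within 30 seconds.
feed = [
    ("botA", "Candidate X lied!", 0),
    ("botB", "Candidate X lied!", 10),
    ("botC", "Candidate X lied!", 30),
    ("user1", "Nice weather today", 15),
]
print(coordinated_clusters(feed))
```

Platform-scale systems add near-duplicate matching and account-age features, but timing-and-similarity clustering of this kind is the starting point.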

Case study: During the COVID-19 pandemic, AI systems were employed to combat misinformation regarding treatments and vaccine efficacy. Social media platforms utilised AI to detect misleading content, although the widespread impact of misinformation remained evident (Cinelli et al., 2020).

Psychological operations (psyops) enhanced by AI

AI is transforming psychological operations by enabling precise targeting, personalised messaging, and content automation. Through behavioural analysis and sentiment analysis, AI tailors propaganda to resonate with specific audiences, making psychological operations more effective.

  1. Targeting and segmentation: AI-driven profiling creates detailed audience profiles, enabling psyop specialists to craft messages that align with the psychological and emotional triggers of each demographic. These profiles use social media activity, browsing history, and interaction patterns to understand audience preferences (Kosinski, Stillwell, & Graepel, 2013).
  2. Automated content creation: AI automates the generation of persuasive content, from text to deepfake videos, making it possible to scale psychological operations. Natural language generation (NLG) algorithms and deepfake technology enable AI to create tailored narratives that resonate with target audiences (Chesney & Citron, 2019).
  3. Influence strategies: AI enhances traditional influence strategies through A/B testing, optimising messages to maximise engagement. It adjusts content in real time based on audience responses, ensuring messages remain relevant and impactful (Aral, 2020).
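The A/B-testing loop described above can be sketched as a simple epsilon-greedy routine that mostly serves the best-performing message variant and occasionally explores alternatives. The variant names, click rates, and epsilon value are illustrative assumptions, not drawn from any real campaign.

```python
import random

def pick_message(stats, epsilon=0.1, rng=random):
    """Epsilon-greedy selection: usually exploit the variant with the
    best observed click-through rate, occasionally explore at random."""
    if rng.random() < epsilon:
        return rng.choice(list(stats))
    return max(stats, key=lambda m: stats[m]["clicks"] / max(stats[m]["shown"], 1))

def record(stats, message, clicked):
    stats[message]["shown"] += 1
    stats[message]["clicks"] += int(clicked)

# Two invented message variants; variant B secretly converts better.
stats = {m: {"shown": 0, "clicks": 0} for m in ("A", "B")}
true_rate = {"A": 0.05, "B": 0.15}
rng = random.Random(0)
for _ in range(2000):
    chosen = pick_message(stats, rng=rng)
    record(stats, chosen, rng.random() < true_rate[chosen])
print({m: s["shown"] for m, s in stats.items()})
```

The real-time adjustment the text describes is exactly this feedback loop run continuously: observed engagement updates the statistics, and the statistics steer which message each audience segment sees next.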

Ethical concerns: The use of AI in psychological operations raises ethical questions about manipulation, consent, and individual autonomy, highlighting the need for transparency and accountability in its application (Susser, Roessler, & Nissenbaum, 2019).

Ethical and legal implications

The deployment of AI in information warfare presents profound ethical and legal challenges. Issues such as accountability, privacy, and bias must be addressed to align AI use with international humanitarian principles and protect individual rights.

  1. Accountability and autonomy: AI systems, particularly those capable of autonomous decision-making, raise questions about accountability. Determining responsibility for AI-initiated actions, especially in offensive operations, is essential to ensure compliance with ethical standards (Russell & Norvig, 2021).
  2. Privacy and surveillance: AI’s data collection capabilities enable unprecedented surveillance, raising privacy concerns. Military applications of AI must balance the need for security with the protection of individual rights, as extensive data collection can infringe on civil liberties (Zuboff, 2019).
  3. Bias and fairness: AI systems can inherit biases present in the data they are trained on, potentially leading to discriminatory outcomes. Addressing these biases is critical, as biased AI can unfairly target groups and amplify misinformation (O'Neil, 2016).
  4. Legal compliance: AI deployment in warfare must adhere to international laws, including principles of distinction and proportionality. Establishing legal frameworks to regulate AI in warfare is necessary to mitigate risks and prevent escalation (Schmitt, 2013).
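As a concrete example of the kind of bias audit point 3 calls for, the sketch below computes the demographic-parity gap, the difference between the highest and lowest per-group positive-outcome rates. The groups and outcomes are fabricated for illustration, and parity is only one of several fairness criteria an audit would check.

```python
def selection_rates(decisions):
    """Per-group positive-outcome rate for (group, selected) records."""
    totals, positives = {}, {}
    for group, selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(selected)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Demographic-parity difference: max minus min group selection rate."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Invented screening outcomes for two groups.
audit = ([("g1", True)] * 8 + [("g1", False)] * 2
         + [("g2", True)] * 4 + [("g2", False)] * 6)
print(round(parity_gap(audit), 2))  # rates 0.8 vs 0.4, so the gap is 0.4
```

A large gap does not by itself prove discrimination, but it is the kind of measurable signal that turns "address bias" from an aspiration into a monitored requirement.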

Principles for ethical AI deployment: Transparency, human oversight, proportionality, and adherence to ethical guidelines are fundamental for responsible AI deployment in warfare. Regular monitoring and evaluation ensure AI systems operate within legal and ethical boundaries, safeguarding against unintended consequences (Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016).

Future trends in AI and information warfare

As AI continues to evolve, its role in information warfare will grow more sophisticated. Emerging technologies such as quantum computing and advanced machine learning will drive this evolution, expanding the potential for real-time data processing, cryptographic capabilities, and autonomous decision-making.

  1. Quantum computing: Quantum computing could revolutionise AI by providing unprecedented processing power. This capability would enable more complex data analysis and enhance secure communications, with significant implications for information warfare (Gidney & Ekerå, 2019).
  2. Integration with IoT and 5G: The convergence of AI, IoT (Internet of Things), and 5G networks will improve situational awareness, enabling real-time data collection and processing on the battlefield. These technologies will enhance the effectiveness of AI-driven surveillance and command systems (Greengard, 2015).
  3. Adaptive cyber defence: AI will likely lead to more advanced, adaptive cyber defence mechanisms capable of pre-empting and countering cyber threats before they materialise. AI-driven defence platforms could autonomously identify and neutralise potential attacks, significantly strengthening cybersecurity (Buczak & Guven, 2016).

Conclusion

AI is transforming information warfare by offering powerful capabilities in data analysis, cyber operations, psychological influence, and misinformation control. While these advancements provide significant strategic benefits, they also introduce complex ethical and legal considerations that require thoughtful governance to promote responsible use. Prioritising transparency, accountability, and ethical standards will be essential to balancing AI’s potential with the necessary oversight and control. Continued investment in research, policy formulation, and international collaboration will be critical to maximising AI's benefits while minimising risks.

Moreover, as data increasingly becomes a critical asset in conflicts, AI is positioned to become a national strategic resource. Consequently, leaders must prioritise data dominance as a vital objective to secure superiority in the battlespace.

References

Aral, S. (2020). The hype machine: How social media disrupts our elections, our economy, and our health – and how we must adapt. Currency.

Betts, R. K. (2015). Enemies of intelligence: Knowledge and power in American national security. Columbia University Press.

Buczak, A. L., & Guven, E. (2016). A survey of data mining and machine learning methods for cyber security intrusion detection. IEEE Communications Surveys & Tutorials, 18(2), 1153–1176.

Chandrasekaran, M., Narayanan, K., & Upadhyaya, S. (2006). Phishing email detection based on structural properties. Proceedings of the 9th Annual New York State Cyber Security Conference, 3, 1–7.

Chen, M., & Zhang, Y. (2014). Big data analytics: A survey. Journal of Big Data, 2(1), 1–45.

Chesney, R., & Citron, D. (2019). Deepfakes and the new disinformation war: The coming age of post-truth geopolitics. Foreign Affairs, 98(1), 147–155.

Cinelli, M., Quattrociocchi, W., Galeazzi, A., Valensise, C. M., Brugnoli, E., Schmidt, A. L., ... & Scala, A. (2020). The COVID-19 social media infodemic. Scientific Reports, 10(1), 16598.

Ferrara, E., Varol, O., Davis, C., Menczer, F., & Flammini, A. (2016). The rise of social bots. Communications of the ACM, 59(7), 96–104.

Gandomi, A., & Haider, M. (2015). Beyond the hype: Big data concepts, methods, and analytics. International Journal of Information Management, 35(2), 137–144.

Garg, S., Curtis, D., & Halderman, J. A. (2019). Algorithmic discrimination and platform manipulation: The case of Airbnb. Proceedings of the 2019 ACM Conference on Fairness, Accountability, and Transparency, 185–194.

Gidney, C., & Ekerå, M. (2019). How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits. Quantum, 3, 135.

Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press.

Kosinski, M., Stillwell, D., & Graepel, T. (2013). Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences, 110(15), 5802–5805.

Libicki, M. C. (2020). Cyberspace in peace and war (2nd ed.). Naval Institute Press.

Liggins, M. E., Hall, D. L., & Llinas, J. (2017). Handbook of multisensor data fusion: Theory and practice. CRC Press.

Marr, B. (2015). Big data: Using smart big data, analytics and metrics to make better decisions and improve performance. Wiley.

Nguyen, T. T., Nguyen, T. V., & Nguyen, T. H. (2020). Detecting and countering misinformation: Defining the balance between success and ethical challenges. Computers & Security, 92, 101745.

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing Group.

Russell, S., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.

Schmitt, M. N. (2013). Tallinn manual on the international law applicable to cyber warfare. Cambridge University Press.

Sommer, R., & Paxson, V. (2010). Outside the closed world: On using machine learning for network intrusion detection. Proceedings of the 2010 IEEE Symposium on Security and Privacy, 305–316.

Susser, D., Roessler, B., & Nissenbaum, H. (2019). Technology, autonomy, and manipulation. Internet Policy Review, 8(2).

Wang, R. Y., Kung, L. A., & Byrd, T. A. (2018). Big data analytics: Understanding its capabilities and potential benefits for healthcare organizations. Technological Forecasting and Social Change, 126, 3–13.

Zetter, K. (2014). Countdown to zero day: Stuxnet and the launch of the world's first digital weapon. Crown Publishing Group.

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.

Gilbert, C. (2025). 'Data Dominance in Modern Warfare: The Crucial Role of AI and Data Analytics'. The Forge, published 05 February 2025. Available at: https://theforge.defence.gov.au/article/data-dominance-modern-warfare-crucial-role-ai-and-data-analytics

Disclaimer

The views expressed in this article are those of the author and do not necessarily reflect the position of the Department of Defence or the Australian Government.


 
