Focus · Artificial Intelligence

Warfare at Machine Speed: The Growing Use of AI in the Military

The debate on AI’s military use has intensified, initially with regard to (lethal) autonomous weapon systems. Autonomous weapon systems independently select, prioritize, and engage targets, making them lethal when humans are targeted. Having emerged around 2010, debates on these weapon systems are now conducted within the UN Convention on Certain Conventional Weapons and the UN General Assembly. For too long, however, critics of autonomous weapons narrowed the debate to just two critical elements: target selection and engagement. This has only changed recently, with a debate focusing on the broader “reliability” of military AI and related pledges or national (military) AI strategies. While both the scope and capabilities of AI in military contexts are still being discussed, AI is increasingly being implemented, quietly, in seemingly non-critical military domains. Militaries celebrate this for gains in efficiency and for its application in seemingly benign roles and functions, but it raises serious arms control questions, especially regarding the acceleration of warfare and the erosion of human control. Even accepting states’ interest in the use of military AI, debates on the stability and security implications of its widespread use ought to intensify and extend their focus to questions beyond the ethical or legal.

Beginning in the late 2000s, the debate on (lethal) autonomous weapon systems (LAWS) did not always align with what we understand by ‘AI’ today. At the time, public perceptions of AI had more to do with complex, deterministic algorithms or “expert systems” than modern machine learning or deep neural networks. Similarly, debates on AI in weapons systems focused narrowly on just two elements of the ‘kill chain’: target selection and engagement – i.e. the decision to attack.

Many critics were outraged by the idea of an algorithm deciding on and initiating an attack without human involvement, fearing violations of international law and human dignity. While such arguments remain controversial even among supporters,1 security policy criticism, though present, was often more restrained than legal or ethical critiques.2 Legal concerns centered on whether algorithms could distinguish between combatants and civilians or fulfill the requirement of proportionality, two important principles of International Humanitarian Law (IHL), while ethical criticism focused on human dignity. However, this narrow focus on ethical and legal issues at a specific point in the ‘kill chain’ means that many current military AI applications largely escape scrutiny, as will be shown below. From a security policy perspective, critical arguments typically highlight arms control concerns, particularly the danger of an escalating “hyperwar”3 or “war at machine speed”,4 which runs counter to traditional arms control’s goal of slowing down conflicts through measures such as disengagement zones and the prevention of surprise attacks. The increasing use of AI and the displacement of humans from decision-making chains suggest a perilous acceleration well beyond the narrow uses that have typically been the subject of debate.

The Widespread Use of Military AI

The Use of Military AI within the “Kill Chain”

The “kill chain” or “targeting cycle” outlines six basic steps of a military engagement: Find, Fix, Track, Target, Engage, and Assess. The “target” and “engage” phases are often deemed “critical functions” due to their involvement in direct combat action. AI software now supports almost every step.

For finding potential targets, AI analyzes visual and surveillance data (e.g., satellite images, drone footage, mobile data), condensing it into actionable information. The Maven project, initiated by the Pentagon in 2017 and taken over by the U.S. National Geospatial-Intelligence Agency (NGA) in January 2023, uses AI to identify and classify objects and to recognize patterns such as convoy formations.5 Similar efforts include Israel’s Gospel, used for infrastructure targeting, and Lavender, which reportedly creates human target lists from telecom and other data (according to unconfirmed investigative reporting).6
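How systems like Maven or Gospel work internally is not public. Purely to illustrate the underlying idea of condensing raw detections from imagery analysis into an actionable candidate list, the following minimal Python sketch filters hypothetical detections by confidence and groups them by object class; the class names, threshold, and data structure are assumptions made for this example, not descriptions of any real system.

```python
# Hypothetical sketch: condensing raw object detections from imagery analysis
# into a compact candidate list. Labels, threshold, and the Detection structure
# are illustrative assumptions, not taken from any fielded system.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Detection:
    label: str         # e.g. "vehicle", "artillery_piece" (assumed classes)
    confidence: float  # detector score in [0, 1]
    lat: float
    lon: float

def condense(detections, min_confidence=0.8):
    """Keep only high-confidence detections and group them by object class."""
    grouped = defaultdict(list)
    for det in detections:
        if det.confidence >= min_confidence:
            grouped[det.label].append(det)
    # Summarize each class as a count plus representative coordinates.
    return {
        label: {"count": len(dets),
                "positions": [(d.lat, d.lon) for d in dets]}
        for label, dets in grouped.items()
    }

if __name__ == "__main__":
    raw = [
        Detection("vehicle", 0.93, 48.21, 37.05),
        Detection("vehicle", 0.88, 48.22, 37.06),
        Detection("vehicle", 0.42, 48.30, 37.10),  # discarded: low confidence
    ]
    print(condense(raw))
```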

In the fixing and tracking phases, AI assists in geolocating potential targets (e.g., via mobile devices). Drones can autonomously track moving targets, even predicting their locations when the line of sight is lost. Systems like the American Gorgon Stare7 and more recent WAMI (Wide-Area Motion Imagery) applications (e.g., Logos Technologies’ BlackKite-I and RedKite-I) can monitor city-sized areas in near real time, tracking hundreds of objects.8
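Real WAMI trackers are proprietary and far more complex, but the basic idea of predicting a target’s position while the line of sight is lost can be illustrated with a textbook constant-velocity Kalman filter. The following sketch uses invented noise parameters and is not a description of Gorgon Stare or any Logos Technologies product: while measurements arrive, the state is corrected; during occlusion, the filter simply keeps predicting.

```python
# Minimal sketch of track prediction under lost line of sight, using a
# constant-velocity Kalman filter. Noise parameters and measurements are
# assumed values for illustration only.
import numpy as np

dt = 1.0                                    # time step in seconds (assumed)
F = np.array([[1, 0, dt, 0],                # state transition: position + velocity
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                 # only position is observed
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01                        # process noise (assumed)
R = np.eye(2) * 0.5                         # measurement noise (assumed)

x = np.array([0.0, 0.0, 1.0, 0.5])          # initial state [px, py, vx, vy]
P = np.eye(4)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(4) - K @ H) @ P

# While the target is visible, predict and correct with each measurement ...
for z in [np.array([1.0, 0.6]), np.array([2.1, 1.1])]:
    x, P = predict(x, P)
    x, P = update(x, P, z)

# ... and when line of sight is lost, keep predicting without corrections.
for _ in range(3):
    x, P = predict(x, P)
    print("predicted position:", x[:2])
```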

AI increasingly plays a role in the critical targeting or selection phase. Indirectly, software (e.g., the U.S. Army Corps of Engineers’ blast prediction tools)9 helps optimize weapon impact and attack vectors to minimize collateral damage. Directly, complex algorithms aid in concrete target selection. Defensive systems like the U.S. Navy’s Aegis or Phalanx can classify and prioritize incoming threats and, in high-threat scenarios, can autonomously select and engage targets without real-time human intervention, qualifying as “autonomous weapons” by the U.S. definition.10 Similar defensive short-range air defense (SHORAD) systems include Rheinmetall’s Skyranger, Russia’s Pantsir-S1/S2/SM, and China’s SWS3, typically targeting non-human threats. Loitering munitions (“kamikaze drones”) like Israel’s Harpy or Harop can autonomously loiter until they detect targets. While the older Harpy does not need a human operator to confirm an attack, the more modern Harop requires human authorization for engagement. Autonomous target selection in future loitering munitions is expected, with firms like Germany’s Helsing experimenting with AI-supported target recognition for systems such as its HX-2.11
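The engagement logic of systems such as Aegis or Phalanx is classified, but the core of automated threat prioritization can be sketched in a few lines: estimate each inbound track’s time to impact and rank the most urgent first. The categories, cutoff, and scoring rule below are invented for illustration and describe no real system.

```python
# Illustrative sketch of threat prioritization in an air-defense engagement
# queue: rank incoming tracks by estimated time to impact. All values and
# thresholds are invented assumptions.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: str
    category: str            # e.g. "missile", "drone", "aircraft" (assumed)
    range_m: float           # current distance to the defended asset
    closing_speed_ms: float  # positive if inbound

    @property
    def time_to_impact(self) -> float:
        if self.closing_speed_ms <= 0:   # not closing: effectively no threat
            return float("inf")
        return self.range_m / self.closing_speed_ms

def engagement_queue(tracks, max_tti_s=60.0):
    """Return inbound tracks that could hit within max_tti_s, most urgent first."""
    urgent = [t for t in tracks if t.time_to_impact <= max_tti_s]
    return sorted(urgent, key=lambda t: t.time_to_impact)

if __name__ == "__main__":
    tracks = [
        Track("T1", "missile", 12000, 800),   # ~15 s to impact
        Track("T2", "drone", 4000, 50),       # ~80 s, below urgency cutoff
        Track("T3", "aircraft", 30000, 250),  # 120 s
    ]
    for t in engagement_queue(tracks):
        print(t.track_id, round(t.time_to_impact, 1), "s")
```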

Finally, AI strongly supports Battle Damage Assessment (BDA) and Combat Assessment (CA). AI compares before-and-after images, classifies damage, and fuses sensor data, mirroring the technical challenges of initial target identification. Many modern militaries will likely soon use AI in BDA. The German Bundeswehr, for example, had commissioned a concept study on partially automated BDA processes as of September 2024.12
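The cited study does not describe the Bundeswehr’s intended approach. As a rough illustration of the before-and-after comparison underlying automated BDA, the following sketch grids two co-registered images, measures per-cell change, and assigns a coarse damage label; grid size and thresholds are arbitrary assumptions.

```python
# Minimal sketch of the before/after comparison behind automated BDA:
# grid the image, measure per-cell change, and assign a coarse damage label.
# Cell size and thresholds are assumptions for illustration only.
import numpy as np

def damage_grid(before: np.ndarray, after: np.ndarray, cell: int = 32):
    """Return a coarse damage classification per image cell."""
    assert before.shape == after.shape
    diff = np.abs(after.astype(float) - before.astype(float)) / 255.0
    h, w = diff.shape
    labels = {}
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            change = diff[i:i + cell, j:j + cell].mean()
            if change > 0.5:
                labels[(i, j)] = "destroyed"
            elif change > 0.2:
                labels[(i, j)] = "damaged"
            else:
                labels[(i, j)] = "intact"
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    before = rng.integers(0, 256, (64, 64))
    after = before.copy()
    after[:32, :32] = rng.integers(0, 256, (32, 32))  # simulate a changed area
    print(damage_grid(before, after))
```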

Crucially, we are seeing the integration of several – including critical – kill chain steps into single AI systems. Early examples include DARPA’s “AlphaDogfight Trials” in August 2020, in which an AI agent dominated a human pilot in a simulated dogfight.13 By 2023, the U.S. had conducted real fighter-jet tests involving AI.14 And while details remain scarce, reports emerged in 2021 about Chinese simulations in which AI agents defeated human pilots.15 This development is not limited to major powers and superpowers – in 2025, Helsing announced that, together with Saab, it had also succeeded in developing autonomous combat control for a fighter jet.16 This trend underscores AI’s growing involvement across the entire kill chain cycle, from finding and tracking to target selection, the decision to employ a weapon, engagement, and post-strike assessment.

The Broader Use of Military AI outside the “Kill Chain”

Examples of military AI outside the kill chain cycle are numerous and even include the use of AI in nuclear weapons decision-making. The following is only a limited overview of applications in the conventional domain that move progressively further away from the kill chain.

Mission planning and preparation: When it comes to the planning of missions, modern AI systems assist planners by aggregating and rapidly processing vast amounts of data from diverse sources, enhancing “intelligence fusion and targeting, battlespace awareness and planning, and accelerated decision-making”, as a NATO press release put it in the context of the implementation of Palantir’s Maven Smart System NATO (MSS NATO) in 2025.17 In addition, some systems, including MSS, are capable of simulating aspects of a military mission in order to expose weaknesses and to suggest routes to take or weapons and equipment to bring. Given the ability of AI to find unexpected solutions to well-known problems (as demonstrated when an AI beat Go grandmaster Lee Sedol in 2016), it is obvious that simulating millions of potential scenarios will enhance mission planning to a previously unknown degree,18 with the German “GhostPlay” (https://www.ghostplay.ai/), a “virtual twin” of military reality, being one example.
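Neither MSS NATO nor GhostPlay discloses its internals. To make the scenario-simulation idea concrete, the following hedged sketch runs a simple Monte Carlo comparison of two candidate routes under randomly sampled disruptions; the routes, risk probabilities, and success model are entirely invented for this example.

```python
# Hedged sketch of the scenario-simulation idea behind AI-assisted mission
# planning: evaluate candidate routes against many randomly sampled conditions
# and compare estimated success rates. All numbers are invented assumptions.
import random

ROUTES = {
    "north": {"detection_risk": 0.15, "terrain_delay_risk": 0.30},
    "south": {"detection_risk": 0.25, "terrain_delay_risk": 0.10},
}

def simulate_once(route: dict) -> bool:
    """One simulated run: the mission fails if detected or critically delayed."""
    detected = random.random() < route["detection_risk"]
    delayed = random.random() < route["terrain_delay_risk"]
    return not detected and not delayed

def estimate_success(route: dict, runs: int = 100_000) -> float:
    return sum(simulate_once(route) for _ in range(runs)) / runs

if __name__ == "__main__":
    for name, route in ROUTES.items():
        print(name, round(estimate_success(route), 3))
```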

Training: A more recent strand of the debate concerns how AI can be used to optimize the training of soldiers at both the unit and the individual level. An AI can, for instance, generate new training scenarios based on an individual’s or a unit’s performance in previous drills in order to focus on perceived weaknesses.19 On a broader level, AI can generate wargaming scenarios for maneuvers or large-scale drills. Some systems are already being tested for broader implementation, e.g. the U.S. Air Force’s Pilot Training Next (PTN) Program for individual soldiers or the U.S. Navy’s Fleet Synthetic Training (FST) Program,20 which uses AI to simulate complex naval warfare scenarios for entire fleets.21
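The cited sources do not describe how PTN or FST adapt their scenarios. The following minimal sketch only illustrates the general idea of weighting the next drill toward previously weak skills; the skill names and the inverse-score weighting rule are assumptions made for illustration.

```python
# Illustrative sketch of performance-driven scenario generation: weight the
# next drill toward skills with the lowest recent scores. Skills and the
# weighting rule are invented assumptions.
import random

def next_drill(skill_scores: dict[str, float], exercises_per_drill: int = 5):
    """Sample exercises with probability inversely related to recent performance."""
    weights = {skill: 1.0 - score for skill, score in skill_scores.items()}
    skills = list(weights)
    return random.choices(skills, weights=[weights[s] for s in skills],
                          k=exercises_per_drill)

if __name__ == "__main__":
    # Scores in [0, 1] from previous drills: a low score means more practice.
    scores = {"navigation": 0.9, "marksmanship": 0.4, "casualty_care": 0.6}
    print(next_drill(scores))
```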

Logistics: If we consider John J. Pershing’s famous dictum that “infantry wins battles, logistics wins wars” and Clausewitz’s emphasis on the concentration of force in time and space, it is no surprise that AI also plays a special role in military logistics. In addition to the (dual) use of civilian COTS (Commercial Off-The-Shelf) applications, in which the optimization of logistics also plays a crucial part, the military is actively developing and adapting AI solutions for its unique logistical challenges, including logistics in contested environments, under harsh conditions, and involving classified and/or dangerous material. AI can, for example, closely monitor supply chains to predict and address potential bottlenecks early, or enhance maintenance procedures by predicting mechanical failures, to name just two applications. As a consequence, the U.S. Defense Logistics Agency (DLA) had established an AI Center of Excellence as of June 2024, and other militaries are following suit.22
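As a rough illustration of bottleneck prediction, the following sketch projects, for each supply item, how many days remain until consumption outpaces resupply and flags items expected to run short within a warning window; the items, rates, and threshold are invented assumptions, not DLA data.

```python
# Minimal sketch of supply-chain bottleneck prediction: project days of supply
# from consumption and resupply rates and flag items that will run short.
# Item names, rates, and the warning threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class StockItem:
    name: str
    on_hand: float           # current stock (units)
    daily_consumption: float
    daily_resupply: float

    def days_until_shortfall(self) -> float:
        net_drain = self.daily_consumption - self.daily_resupply
        if net_drain <= 0:
            return float("inf")  # resupply keeps pace, no bottleneck expected
        return self.on_hand / net_drain

def flag_bottlenecks(items, warning_days: float = 14.0):
    return [(i.name, round(i.days_until_shortfall(), 1))
            for i in items if i.days_until_shortfall() <= warning_days]

if __name__ == "__main__":
    stock = [
        StockItem("155mm shells", on_hand=9000, daily_consumption=1200, daily_resupply=400),
        StockItem("diesel (m3)", on_hand=500, daily_consumption=60, daily_resupply=55),
        StockItem("rations", on_hand=20000, daily_consumption=300, daily_resupply=300),
    ]
    print(flag_bottlenecks(stock))  # items expected to run short within two weeks
```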

The Joint Operations Center of the U.S. Space Command. Source: Lewis Carlyle, Public Domain.

Assessment and Conclusion

Even though the examples given here have been almost exclusively limited to Western countries, with a particular focus on the USA, it would be wrong to assume that other, less transparent countries are not investing in the military use of AI to reap its benefits. Because many applications do not fall within the scope of the critical functions of the kill chain, enormous development has taken place in the shadow of the recent international debates on LAWS. This development is no less problematic, but it will likely prove even more difficult to restrict. In all the fields discussed here, the declared aim is to improve the efficiency of existing means of power and, above all, to speed up processes in order to gain an advantage on the battlefield, even where some form of human control is retained.

From the point of view of the stability concept inherent in arms control, the military use of AI is problematic per se. Firstly, balances become more difficult to assess and calculate when an AI-supported system of systems comes into play, making older or seemingly inferior systems relevant again. Secondly, the acceleration of warfare reduces the time available for warning and for assessing whether an attack is imminent or underway, which risks higher alert levels and inappropriate or incorrect reactions under time pressure. The idea of extending arms control to seemingly benign or unproblematic domains, such as logistics, is not new: the concept of ‘verified transparency’ developed in Germany in the mid-2010s uses optimized logistics as an explicit example of where the fear of devalued disengagement zones and of surprise attack must be countered with arms control instruments.23

We have to be honest: unfortunately, the times do not allow for concrete arms control measures – let alone legally binding ones – that are not in the clear national interest of the main international actors. As the debate about LAWS has shown, ethical and legal arguments will not necessarily convince those actors who see a clear military advantage in developing and procuring AI-enhanced weapon systems.

Interestingly, a new debate started in 2023, initiated by the Netherlands: the first Summit on Responsible Artificial Intelligence in the Military Domain (often referred to as REAIM), which focused on the “responsible” development, deployment, and use of AI in the military. Sixty states agreed on a non-binding “Call to Action”,24 which explicitly mentions the impact of AI on international security and stability. REAIM also stresses a multi-stakeholder approach, given the rapid development of AI in the civilian sphere and its spillover into the military realm. While this is a good start, the focus still lies on national strategies and national responsibility and does not address the consequences of the interactive use of military AI for international stability, both in crises and in terms of strategic conventional stability. And while it is helpful to raise awareness among states of the dangers of immature AI and the associated loss of human control, this does not address the dangers posed by AI that does meet the criteria of “reliability” and “trustworthiness”. As was the case during the Cold War, it is important to highlight the interactions that result from individual armament decisions.

It is therefore important to directly address the implications of the broad military use of AI for future warfare, and the ways in which machine-to-machine interactions and machine speed affect stability and escalation. Global actors do not appear to have genuinely recognized the difficult security policy issues raised by the use of AI in almost all military contexts, nor do they appear to be responding with interest-driven arms control policy. The Call to Action explicitly calls on academia and think tanks to “conduct additional research in order to better comprehend the impact, opportunities and challenges of rapidly adopting AI in the military domain” – a task that demands a critical perspective.

  1. E.g. Rosert, E., & Sauer, F. (2020). How (not) to stop the killer robots: A comparative analysis of humanitarian disarmament campaign strategies. Contemporary Security Policy, 42(1), 4-29. https://doi.org/10.1080/13523260.2020.1771508
  2. E.g. Altmann, J., & Sauer, F. (2017). Autonomous Weapon Systems and Strategic Stability. Survival, 59(5), 117-142; Alwardt, C., & Schörnig, N. (2022). A necessary step back? Recovering the security perspective in the debate on lethal autonomy. Zeitschrift für Friedens- und Konfliktforschung (Journal for Peace and Conflict Studies), 10, 295–317. https://doi.org/10.1007/s42597-021-00067-z
  3. Allen, J.R., & Husain, A. (2017). On Hyperwar. Proceedings, 143(7), https://www.usni.org/magazines/proceedings/2017/july/hyperwar
  4. E.g. Amt für Heeresentwicklung der Bundeswehr. (2019). Künstliche Intelligenz in den Landstreitkräften [Artificial intelligence in the land forces]. https://www.bundeswehr.de/resource/blob/156024/d6ac452e72f77f3cc071184ae34dbf0e/download-positionspapier-deutsche-version-data.pdf
  5. National Geospatial-Intelligence Agency. (n.d.). GEOINT Artificial Intelligence. United States government. Retrieved September 9, 2025, from https://www.nga.mil/news/GEOINT_Artificial_Intelligence_.html
  6. Abraham, Y. (2024, April 3). ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza. +972 Magazine. https://www.972mag.com/lavender-ai-israeli-army-gaza/
  7. Trimble, S. (2014, July 2). Sierra Nevada fields ARGUS-IS upgrade to Gorgon Stare pod. Flight Global. https://www.flightglobal.com/civil-uavs/sierra-nevada-fields-argus-is-upgrade-to-gorgon-stare-pod/113676.article
  8. Logos Technologies. (n.d.). Redkite-I. Retrieved September 8, 2025, from https://www.logostech.net/products/redkite-i/; Logos Technologies. (n.d.). Blackkite-I. Retrieved September 8, 2025, from https://www.logostech.net/products/blackkite-i/
  9. US Army Corps of Engineers. (n.d.). PDC Software. United States government. Retrieved September 9, 2025, from https://www.nwo.usace.army.mil/About/Centers-of-Expertise/Protective-Design-Center/PDC-Software/
  10. United States of America. (2023). DoD Autonomy in Weapon Systems (DoD Directive 3000.09). United States Department of Defense. https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf
  11. Helsing. (n.d.). HX-2 – AI Strike Drohne. Retrieved September 8, 2025, from https://helsing.ai/de/hx-2
  12. OHB SE. (2024, September 11). OHB Digital Connect deepens AI and image processing expertise in concept study for the German Armed Forces. https://www.ohb.de/en/news/ohb-digital-connect-deepens-ai-and-image-processing-expertise-inconcept-study-for-the-german-armed-forces
  13. Defense Advanced Research Projects Agency. (2020, August 26). AlphaDogfight Trials Foreshadow Future of Human-Machine Symbiosis. United States Department of Defense. https://www.darpa.mil/news/2020/alphadogfight-trial
  14. Decker, A. (2024, April 19). An AI took on a human pilot in a DARPA-sponsored dogfight. Defense One. https://www.defenseone.com/technology/2024/04/man-vs-machine-ai-agents-take-human-pilot-dogfight/395930/
  15. Pickrell, R. (2021, June 15). China says its fighter pilots are battling artificial-intelligence aircraft in simulated dogfights, and humans aren’t the only ones learning. Business Insider. https://www.businessinsider.com/china-pits-fighter-pilots-against-ai-aircraft-in-simulated-dogfights-2021-6
  16. Wang, A. (2025, June 17). Taiwan seals Ukraine combat-tested drone software deal to help deter China. Reuters. https://www.reuters.com/business/aerospace-defense/taiwan-seals-ukraine-combat-tested-drone-software-deal-help-deter-china-2025-06-17/
  17. North Atlantic Treaty Organization. (2025, April 14). NATO acquires AI-enabled warfighting system. North Atlantic Treaty Organization. https://shape.nato.int/news-releases/nato-acquires-aienabled-warfighting-system
  18. Jung, H. (2024). A Glimpse into the Future Battlefield with AI-Embedded Wargames. Proceedings, 150(6). https://www.usni.org/magazines/proceedings/2024/june/glimpse-future-battlefield-ai-embedded-wargames
  19. Iankersey3. (2024, June 20). 495. Training Transformed: AI and the Future Soldier. Mad Scientist Laboratory. https://madsciblog.tradoc.army.mil/495-training-transformed-ai-and-the-future-soldier/; Jung, H. (2024)
  20. RINA. (n.d.). AI-powered Aviation scenarios. Retrieved September 8, 2025, from https://www.rina.org/en/media/CaseStudies/ai-powered-aviation-scenarios
  21. Homewood-Waszkiewicz, C. (2024, April 17). The Evolution and Crucial Role of Specialised Industrial Computing Enhancing Naval Defence Innovations. Captec. https://www.captec-group.com/evolution-of-industrial-naval-computing/
  22. Reece, B. (2025, March 13). DLA applying AI to supply chain risk management, warfighter readiness. Defense Logistics Agency, United States government. https://www.dla.mil/About-DLA/News/News-Article-View/Article/4117309/dla-applying-ai-to-supply-chain-risk-management-warfighter-readiness/
  23. Schmidt, H.-J. (2013). Verified Transparency. New conceptual ideas for conventional arms control in Europe. (PRIF Report No. 119). Peace Research Institute Frankfurt. https://www.prif.org/publikationen/publikationssuche/publikation/verified-transparency
  24. Ministry of Foreign Affairs, & Ministry of Defence. (2023). REAIM 2023 Call to Action. Government of the Netherlands. https://www.government.nl/documents/publications/2023/02/16/reaim-2023-call-to-action