Introduction: How the Materialization of AI is Reshaping Global Security Risks and Governance
Artificial intelligence is no longer only lines of code or academic experiments. It is becoming a physical, operational force on battlefields, in critical infrastructure, and in the exercise of state power, with dual-use applications emerging that affect international security. This year’s CNTR Monitor examines how AI has materialized in real-world systems, and why this shift demands new ways of thinking about arms control. Our goal is to move beyond general discussions of AI risk, such as long-term existential threats, and instead to focus on risks in the immediate present. In doing so, we consider three major implications: i) the growing use of AI in military planning and strategic state behavior; ii) dual-use risks emerging in domains such as biotechnology and chemical synthesis; and iii) opportunities and challenges of using AI for verification and monitoring in arms control.
Unlike many other digital technologies whose development has plateaued, the continued improvement of AI is driven by progress in software, innovations in hardware, and increased access to data, all working together. This interconnected pace of development means that breakthroughs in one domain, such as algorithm design or chip optimization, quickly influence capabilities in others. Further, AI development is not confined to state actors or defense contractors; it is largely driven by private companies and academic institutions, underscoring the importance of considering these non-state actors in any governance effort. These actors also operate with different incentives and varying levels of openness about their AI systems, capabilities, and goals. This creates challenges for regulation, which has traditionally been designed for well-defined physical technologies.
AI as a Disruptive Technology and a Regulatory Challenge
AI – as it materializes – is not simply a single ‘doomsday’ risk like autonomous weapons or rogue superintelligence, but an accelerator of change across security domains. It impacts the present and future of warfare. In intelligence and planning, AI accelerates the aggregation and interpretation of large data streams. In defense, it reshapes targeting, logistics, and simulation capabilities. In research and innovation, AI tools are emerging that can be misused for irresponsible or illegal military applications or for terrorist or criminal purposes.
One of the most profound developments in AI has been the rise of large-scale general-purpose systems, so-called foundation models. These models are trained on vast, uncurated datasets and are capable of generating text, images, code, and even molecular structures. Because they are flexible and powerful, foundation models are changing how AI is developed and applied across many areas. They lower barriers to advanced capabilities by making powerful tools widely accessible. Importantly, the lack of transparency in many AI systems makes it harder to evaluate potential threats. So-called black-box models generate outputs without offering clear insight into the internal reasoning that produced them. This poses problems for accountability and verification. Once these models are released, particularly as open-source tools, control over their subsequent use becomes exceedingly difficult.
The line between research and deployment blurs while enforcement mechanisms, both legal and technical, struggle to keep pace. Efforts to govern foundation models remain in their early stages. Some experts propose licensing regimes or export controls for highly capable models; others suggest safety evaluations before public release. However, these proposals must contend with differences between legal systems, the practical difficulty of enforcement, and the trade-off between enabling progress and managing risk. Policymakers face a difficult balancing act: overly restrictive regulation may stifle useful research or concentrate power among a few actors, while more permissive approaches may accelerate capability diffusion without safeguards.
Arms Control and Verification in the Algorithmic Age
Traditional arms control approaches are not well-suited to AI. Most evolved in the context of physical systems like ballistic missiles, nuclear weapons, or chemical stockpiles, the deployment of which could be quantified. In contrast, AI is software-based, can be updated quickly, and is inherently dual-use. The same underlying model architecture can be adapted for cancer research or to simulate battlefield strategies. This creates three main problems. First, verification becomes difficult when the object of concern is embedded in code or infrastructure rather than hardware. Second, the field moves too quickly for static treaties. By the time agreements are made and approved, the technologies involved may have already changed. Third, while arms control has always targeted the state level, non-state actors or individuals with access to open-source tools and cloud computing must also be considered.
Beyond the issue of AI regulation itself, AI affects non-proliferation, arms control, and disarmament of both conventional weapons and weapons of mass destruction. The dual-use nature of many AI tools is therefore of significant concern. An important question here is whether widely available AI tools for research and innovation could lower the barriers to arms proliferation. For example, AI might assist in designing new chemical or biological compounds, possibly lowering technical thresholds for misuse.
Nevertheless, AI may also provide opportunities for new forms of arms control verification. For example, AI systems can be used to model compliance behavior, detect violations through pattern analysis, or assist in monitoring through anomaly detection. However, using AI in verification also brings challenges, such as the need for explainability and human oversight, which are essential to ensure trust and transparency in sensitive contexts. Rigorous validation and clear accountability frameworks are also essential. To take advantage of these opportunities, arms control experts and AI researchers need to work together.
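To make the monitoring use case more concrete, the minimal sketch below shows how an unsupervised anomaly detector could flag irregular records in declaration-style data for human review. It is an illustration only: the data are synthetic, and the features and thresholds are assumptions rather than a description of any existing verification system.

```python
# Illustrative sketch only: anomaly detection as one possible building block for
# AI-assisted monitoring. The data are synthetic and the feature choices are
# assumptions, not a description of any real verification workflow.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical monitoring features per site and reporting period, e.g.
# declared production volume, power consumption, and shipment counts.
baseline = rng.normal(loc=[100.0, 50.0, 20.0], scale=[5.0, 3.0, 2.0], size=(500, 3))

# A few synthetic records that deviate from the declared pattern.
irregular = rng.normal(loc=[140.0, 80.0, 5.0], scale=[5.0, 3.0, 2.0], size=(5, 3))

observations = np.vstack([baseline, irregular])

# Fit an unsupervised model on the pooled observations and flag outliers.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(observations)  # -1 marks potential anomalies

flagged = np.where(labels == -1)[0]
print(f"Flagged {len(flagged)} of {len(observations)} records for human review: {flagged}")
```

In practice, such a detector would serve only as a triage aid: flagged records would still require explainable follow-up analysis and human judgment, for the reasons of trust and accountability noted above.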
Institutional Gaps and International Fragmentation
Despite growing awareness of AI’s security implications, institutional responses remain fragmented. At the national level, regulatory strategies diverge widely. The European Union’s AI Act represents one framework, grounded in risk-based classification and rights-based obligations. However, its global impact depends on uptake beyond EU borders. In contrast, the United States has relied on a combination of voluntary commitments, industry guidelines, and executive orders, such as the 2023 Executive Order on AI, which places national security at the center of its approach. China continues to integrate AI into its broader strategic planning, combining domestic content regulation with long-term state-led investment. Internationally, most initiatives have remained non-binding.
The OECD AI Principles,1 the UNESCO Ethics Recommendation,2 and the Global Partnership on AI (GPAI), now integrated with the OECD’s work on AI, emphasize values like accountability and transparency but lack enforcement power. Meanwhile, discussions in UN forums, such as the Group of Governmental Experts on LAWS, have struggled to produce concrete agreements, often stalling due to consensus-based decision-making.
In research and innovation, safeguarding the freedom of research poses its own challenges and illustrates that not all risks can or should be addressed through legal means. In sensitive domains like biotechnology or chemistry, the limits of regulation highlight the importance of individual scientists’ responsibility and norms of conduct. Ethical advisory bodies and structured educational programs on handling dual-use research must therefore provide a basis for responsible innovation.
This fragmented governance structure creates gaps because few organizations have the authority, technical know-how, or trust needed to make rules for AI security. Making matters worse, strategic competition between major powers makes cooperation on security-sensitive AI applications unlikely in the near term. Yet without shared rules, the military use of advanced AI systems could add to tensions in an already fragile global environment.
Toward Layered AI Security Solutions
AI’s security implications will not be addressed by a single treaty or technical fix. What is needed is a layered, adaptive governance model that combines national regulation, international norms, industry cooperation, responsible research and innovation, and technical standards. There are several useful starting points.
First, technical standards on robustness, interpretability, and incident response – meaning how failures or misuse are detected and managed – can help define a shared baseline for responsible development. These should be developed collaboratively by standards bodies, researchers, and practitioners, and built into how systems are purchased and certified.
Second, structured public-private cooperation is essential. Private companies develop the majority of frontier AI models. Governments must therefore find ways to incentivize responsible behavior without stifling innovation. Joint research institutes, secure model evaluation frameworks, and shared incident reporting channels, which allow stakeholders to communicate and learn from AI-related failures or threats, can help bridge the gap between public interest and commercial capability.
Third, even in the absence of binding treaties, international efforts should focus on pragmatic measures that lower risk and build trust. These may include transparency mechanisms, confidence-building measures, and shared safety evaluations. The aim should not be to achieve universal agreement on every use of AI, but to take credible steps that reduce the most harmful risks of proliferation and misuse.
Conclusions
Artificial intelligence is not a distant or speculative challenge but a strategic reality with immediate consequences for global security. It is changing how conflicts are fought and how power is distributed. While AI is not inherently a weapon or an existential threat, its integration into security-relevant architectures without adequate oversight could amplify instability, exacerbate inequalities, and erode accountability. At the same time, AI offers promising opportunities to enhance security through improved monitoring, verification, and predictive capabilities that could support more effective arms control and risk reduction.
A realistic assessment of the effects of AI is needed now. This includes moving beyond vague ethical commitments and instead building institutions, rules, and practices capable of managing AI’s disruptive potential. Policymakers must understand not only how AI is being used, but how its very design choices shape its risks. This requires engaging deeply with the technical, legal, and strategic dimensions of AI, with the aim not to contain it, but to steer it responsibly. Beyond responding to AI as it exists today, the challenge is to anticipate how its trajectory may evolve and to ensure that security governance is not left playing catch-up.
1. Organisation for Economic Co-operation and Development. (n.d.). AI principles. Retrieved September 9, 2025, from https://www.oecd.org/en/topics/sub-issues/ai-principles.html
2. United Nations Educational, Scientific and Cultural Organization. (2023). Recommendation on the Ethics of Artificial Intelligence. https://www.unesco.org/