International Initiatives and Core Principles for the Safe Use of AI
Summit on Responsible Artificial Intelligence in the Military Domain 2023 (REAIM)
At the REAIM conference [1], government representatives from 57 countries [2], including the USA, China, and the majority of EU states, adopted a joint call to action on the responsible development, introduction, and use of artificial intelligence (AI) in the military realm [3]. On the premise that AI will have a massive impact on military systems, but that this impact is not yet fully understood, the following politically non-binding principles were formulated:
- Humans should remain responsible and accountable when using AI in the military realm, and the use of AI systems should always be under human supervision.
- Military personnel should be sufficiently trained to be aware of possible influences, such as distortions in the training data (data bias), and of the consequences of trusting the decisions of AI systems.
- Premature deployment of AI without sufficient research, testing, and safety assurance should be avoided in favor of an inclusive approach that prevents unintended harm.
- Training data for AI systems should be collected, used, shared, and archived in a way that complies with international law, the relevant legal frameworks, and applicable data and security standards.
As the majority of global AI research and innovation takes place in the civilian sector, the signatory states call for this work to be conducted with a sense of responsibility for international security and in accordance with international law. However, the principles contain no statements on states’ self-restraint in the use of military AI.
AI Safety Summit 2023 and the “Bletchley Declaration”
The AI Safety Summit 2023, organized by the British government in November 2023 [4], took up the impetus from the REAIM 2023 conference and set itself the goal of discussing the risks of AI from a human rights and international law perspective, and not only in the military sector. In cooperation with tech companies, internationally coordinated measures were to be explored to mitigate the dangers of this technology.
The final declaration, entitled the “Bletchley Declaration” [5] and signed by 29 countries, including China, the USA, and Germany, emphasizes that the following principles should be prioritized in the development and use of AI:
- The protection of human rights
- Transparency and explainability of results
- Appropriate human supervision of the AI systems used
- Accountability and use of AI based on ethical principles
The declaration also underlines a warning already voiced at the REAIM conference: Given the enormous speed of AI development and the trends toward highly capable and universally applicable AI models (so-called general-purpose AI models), the resulting risks are not fully understood. Against this background, the declaration encourages the development of national—and ideally internationally coordinated—regulations and frameworks for risk assessment and risk minimization strategies.
Chinese and US Perspectives on Dealing with Artificial Intelligence
At almost the same time as the conference in Bletchley Park, and just a few weeks apart, the USA and China each presented their own proposals for dealing with AI. The US proposal, entitled “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” [6], which 52 states [7] have now signed, relates exclusively to the military use of this technology. In contrast, China’s “Global AI Governance Initiative” [8] is dedicated to the broader use of AI but also addresses military use. Both declarations essentially follow the principles already mentioned, particularly in emphasizing the tension between progress and threat, the call to comply with applicable international law, and the demand for trustworthy and traceable AI products and applications. With regard to the military use of AI, China’s proposal highlights that “major countries in particular (...) should adopt a prudent and responsible attitude toward the research, development, and application of AI technologies in the military sector,” but does not go into further, more specific principles. The US proposal, on the other hand, emphasizes the need for human “supervision” over the use of military AI systems but leaves open the extent to which these systems may act completely autonomously, and thus does not rule out the autonomous use of armed force.
Access to the high-tech resources needed to develop and run modern AI systems, such as specialized microchips, is increasingly becoming part of the global power play between states, and China’s position contains another aspect clearly aimed at this. The declaration opposes “drawing ideological boundaries or forming exclusive groups to prevent other countries from developing AI” and “creating barriers and disrupting the global AI supply chain through technological monopolies and unilateral coercive measures.” This passage can only be understood as a clear criticism of the US and, in some cases, EU export control restrictions [9] on the highly specialized microprocessors required for AI applications.
EU and UN Resolutions Focusing on the Non-Military Use of AI
In March 2024, the EU published its own rules for the regulation of AI and AI products. These had been in progress since 2021 and are considered the world’s first binding legal framework for AI. However, the “EU AI Act” [10] (AIA) explicitly does not cover AI systems used for military purposes, as their regulation is “subject to international law (...), which is therefore the more appropriate legal framework for the regulation of AI systems related to the use of lethal force and other AI systems related to military and defense activities.” Instead, the AIA emphasizes the relevance of AI for societal and economic progress within the EU, while recognizing the security threats these systems can pose. To assess the criticality of AI applications, criteria are defined that cover technical, economic, and human rights aspects of the production and use of AI. These criteria are used to derive requirements for the safety, control, and legal certainty of AI systems, which are to be incorporated into the national legislation of EU member states as binding principles.
Finally, on March 21, 2024, the UN General Assembly adopted the resolution “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development” [11], which is dedicated to the use and opportunities of safe and trustworthy AI systems. The resolution is based on a proposal by the USA and was backed by more than 120 countries. Like the EU AI Act, however, it explicitly refers only to the non-military use of AI and to the promotion of safe, secure, and trustworthy AI systems for progress in relation to human rights, common development goals, and sustainability. Nonetheless, it also emphasizes that people should be the focus and that, in the wrong hands, AI poses a significant threat. To assess these threats concretely, it recommends the development and use of tools for the “internationally interoperable identification, classification, assessment and testing, prevention, and mitigation of vulnerabilities and risks during the design, development, and use of AI systems.”
U.S. Executive Order 14110 as an Approach to Export Control of AI
With the “Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” [12] of 2023, the United States demonstrated a valuable approach to the future development of export controls and regulatory measures for AI. The order affirms that civil rights apply to the development and use of AI within the United States, and it defines security standards in the field of cybersecurity as well as measures to avoid bias. A key feature of the order is that it ties the applicability of these requirements to the total computing power needed to train or run an AI system. To this end, it stipulates that:
(...) The Secretary shall require compliance with these reporting requirements for: (i) any model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10^23 integer or floating-point operations; and (ii) any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second for training AI.
Computing power and network capacity in particular are considered decisive factors for complex AI systems and are an essential component in planning their deployment. Against this background, the Executive Order defines measurable thresholds for the regulation of AI that can be determined effectively and meaningfully before a system is deployed and taken into account by regulatory authorities. Nevertheless, such fixed values must be continuously reviewed and adapted in order to keep pace with technical developments.
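To make the threshold logic concrete, the following minimal Python sketch checks whether a hypothetical training run or compute cluster would fall under the reporting requirements quoted above. The threshold constants mirror the figures in the Executive Order; all type, field, and function names are invented for illustration and do not come from any official tooling.

```python
from dataclasses import dataclass

# Thresholds quoted in Executive Order 14110 (see the excerpt above).
MODEL_FLOP_THRESHOLD = 1e26        # total training operations, general models
BIO_MODEL_FLOP_THRESHOLD = 1e23    # total training operations, models trained
                                   # primarily on biological sequence data
CLUSTER_FLOPS_THRESHOLD = 1e20     # theoretical peak operations per second
CLUSTER_NETWORK_GBITS = 100        # data center networking, Gbit/s

@dataclass
class TrainingRun:
    total_training_flop: float     # integer or floating-point operations used
    primarily_biological: bool     # trained primarily on biological sequence data

@dataclass
class ComputeCluster:
    peak_flops: float              # theoretical maximum operations per second
    network_gbit_s: float          # interconnect bandwidth within the data center
    single_datacenter: bool        # machines physically co-located

def model_reportable(run: TrainingRun) -> bool:
    """True if the training run exceeds the applicable reporting threshold."""
    threshold = (BIO_MODEL_FLOP_THRESHOLD if run.primarily_biological
                 else MODEL_FLOP_THRESHOLD)
    return run.total_training_flop > threshold

def cluster_reportable(cluster: ComputeCluster) -> bool:
    """True if the cluster meets all three criteria quoted above."""
    return (cluster.single_datacenter
            and cluster.network_gbit_s > CLUSTER_NETWORK_GBITS
            and cluster.peak_flops >= CLUSTER_FLOPS_THRESHOLD)

if __name__ == "__main__":
    # The same 5 x 10^25 FLOP run is below the general threshold but far
    # above the stricter one for biological-sequence models.
    print(model_reportable(TrainingRun(5e25, primarily_biological=False)))  # False
    print(model_reportable(TrainingRun(5e25, primarily_biological=True)))   # True
```

Note how the model criterion depends on which of the two training-data regimes applies, while the cluster criterion requires all three conditions (co-location, interconnect bandwidth, and peak capacity) to hold at once; this is exactly what makes the thresholds checkable before a system is ever deployed.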
Footnotes
1. Ministry of Foreign Affairs. (2023). REAIM 2023. Government of the Netherlands. https://www.government.nl/ministries/ministry-of-foreign-affairs/activiteiten/reaim
2. Ministry of Foreign Affairs. (2023, February 21). REAIM 2023 Endorsing Countries and Territories. Government of the Netherlands. https://www.government.nl/documents/publications/2023/02/16/reaim-2023-endorsing-countries
3. Ministry of Foreign Affairs. (2023, February 21). REAIM 2023 Call to Action. Government of the Netherlands. https://www.government.nl/documents/publications/2023/02/16/reaim-2023-call-to-action
4. Government of the United Kingdom. About the AI Safety Summit 2023. https://www.gov.uk/government/topical-events/ai-safety-summit-2023/about
5. Government of the United Kingdom. (2023). The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023. https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023
6. U.S. Department of State. Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. https://www.state.gov/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy-2/
7. Bureau of Arms Control, Deterrence, and Stability. (2024). Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. U.S. Department of State. https://www.state.gov/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy/
8. Ministry of Foreign Affairs. Global AI Governance Initiative. The People’s Republic of China. http://gd.china-embassy.gov.cn/eng/zxhd_1/202310/t20231024_11167412.htm
9. Allen, G. C. (2023, May 3). China’s New Strategy for Waging the Microchip Tech War. Center for Strategic & International Studies. https://www.csis.org/analysis/chinas-new-strategy-waging-microchip-tech-war
10. European Commission. (2023). Artificial Intelligence Act [Press release]. https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473
11. United Nations General Assembly. (2024, March 21). Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development (A/RES/78/265).
12. The White House. (2023, October 30). Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/