As global conflict and economic instability dominate headlines, a quieter but no less urgent challenge is gaining traction among international institutions: the governance of Artificial General Intelligence (AGI).
Over the past year, AGI has transitioned from an abstract concept to a top priority for policymakers in both the United States and the United Nations. Across multiple US Senate hearings, leading researchers, including Yoshua Bengio, Stuart Russell, and whistleblowers such as William Saunders, have warned that AGI may arrive within as little as three years.
US lawmakers now treat AGI as a national security concern, with bipartisan voices comparing its risks to nuclear proliferation.
These domestic efforts mirror growing global concern. In March 2024, the United Nations General Assembly adopted its first resolution on AI, urging all 193 member states to pursue the “safe, secure, and trustworthy” development of AI. The UN’s High-Level Advisory Body on Artificial Intelligence released a report in late 2024 titled Governing AI for Humanity, which emphasized that 118 countries currently have no role in shaping AI governance.
The report outlines a blueprint for globally inclusive oversight, including an International Scientific Panel on AI, a global AI fund for the developing world, and a potential new UN agency modeled on the IAEA.
Together, these efforts mark a decisive shift in framing AGI as a global public risk — one that requires urgent, coordinated international governance. Unlike narrow AI applications that support enterprise workflows or consumer interfaces, AGI poses systemic risks across every domain of human activity — from national security to democratic institutions.
In this article, we explore a recent commentary from Millennium Project Senior Fellow and Global Strat View Foreign Affairs Editor Asanga Abeyagoonasekera. His recent essay for Global Strat View, “Governing AGI Before It Governs Us”, presents a powerful call for global coordination anchored in UN frameworks, detailing specific institutional steps to prevent misalignment and misuse.
In the process, Abeyagoonasekera outlines two immediate steps the US government should take to reduce the risks of unregulated AGI:
- Create a Congressional licensing framework for AGI development: By mandating transparency, rigorous safety testing, and alignment with human values, a national licensing system would help ensure AGI systems are built and deployed responsibly.
- Convene a UN General Assembly session focused on AGI governance: A formal request from the US Department of State would help catalyze multilateral cooperation and lay the foundation for globally inclusive oversight.
This article will explore Abeyagoonasekera’s ideas and their most likely practical application as policy, notwithstanding the obvious political obstacles, legislative and cultural, to achieving them.
A National Licensing System as the Foundation of Responsible AGI Development
Unlike narrow AI tools, which are built for specific tasks, AGI will be capable of planning, adapting, and executing goals across domains—potentially beyond human comprehension or control. In testimony before the US Senate Judiciary Committee, Yoshua Bengio and Stuart Russell emphasized that AGI could pose existential threats in the absence of robust oversight mechanisms. RAND Corporation has further described AGI as a “dual-use” technology, comparable to nuclear research in its potential for both benefit and harm.
Yet unlike nuclear weapons, AGI is being developed by a mix of private labs, open-source communities, and startups—outside the bounds of traditional state-level governance.
The fact that AGI development falls outside these established controls makes coordinated governance even more urgent, at both the national and international levels. To that end, The Millennium Project’s first recommendation focuses on domestic regulation.
The organization argues that the US Congress should be the first body to establish a binding licensing framework for the development of AGI. Such a system, as proposed, would set national standards for transparency, safety testing, and value alignment before companies or research groups could build or deploy AGI systems.
By applying licensing mechanisms standard in biotechnology or aviation, the US could anchor governance in accountability and traceability—ensuring that the race to develop AGI does not outpace our ability to manage it. A robust licensing regime would also provide a legal basis for liability, enabling enforcement when organizations fail to meet safety thresholds or disclosure requirements.
Using Diplomacy to Catalyze a Global Governance Framework
In parallel with domestic regulation, the Millennium Project recommends that the US Department of State formally request a dedicated session on AGI at the UN General Assembly.
While the 2024 General Assembly resolution on AI and statements from Secretary-General António Guterres mark critical first steps, no multilateral process currently exists to directly address AGI. A US-backed session would help elevate AGI governance as a global issue, drawing in perspectives from both developed and developing nations.
It would also serve as the launching point for more structured agreements—such as a UN Framework Convention or the creation of a dedicated international oversight body.
Their proposal for a UN General Assembly session on AGI builds on growing global concern. In March 2024, the General Assembly unanimously adopted its first resolution on artificial intelligence, urging the safe, secure, and trustworthy development of this technology. In December, UN Secretary-General António Guterres warned the Security Council that AI could undermine nuclear safeguards if not governed carefully.
The Millennium Project argues that the General Assembly is the only venue with the legitimacy and inclusiveness to convene a truly multilateral response.
The organization also recommends longer-term initiatives, including the creation of a Global AGI Observatory to detect early warning signs, an international certification regime for safety and alignment, and a UN Framework Convention on AGI modeled after arms control or climate agreements. These would form a foundation for future governance — but the near-term actions by Congress and the State Department are positioned as the most critical first steps.
To date, regulatory efforts have been promising but have lacked unity in scope and execution. The EU’s AI Act introduces binding rules for general-purpose and high-risk models in phases beginning in 2024–2025. The OECD’s AI Capability Indicators offer a voluntary framework for benchmarking systems against human-level cognitive capabilities.
And in September 2023, over 70 national parliaments pledged cooperation at the World Summit of Committees of the Future. Still, without a central framework or enforcement mechanism, these efforts risk being overtaken by the pace of technological development.
Abeyagoonasekera succinctly summarizes the challenge: “AGI may evolve more rapidly than the systems intended to guide it.”
With AGI development accelerating and oversight lagging, the time to act is now. Licensing and diplomatic coordination may not solve every governance challenge, but they are critical milestones on the path to global safety and accountability.