Government Commissioned Report: AI is Creating New Categories of Weapons of Mass Destruction

March 13, 2024

A U.S. State Department-commissioned report asserts that advanced artificial intelligence is creating entirely new categories of weapons of mass destruction-like (WMD-like) and WMD-enabling catastrophic risks.

The risks associated with these developments are global, have deeply technical origins and are quickly evolving, leaving policymakers with diminishing opportunities to introduce safeguards to balance these considerations and ensure advanced AI is developed and adopted responsibly.

“A key driver of these risks is an acute competitive dynamic among the frontier AI labs that are building the world’s most advanced AI systems,” the report from Gladstone AI states. “All of these labs have openly declared an intent or expectation to achieve human-level and superhuman artificial general intelligence (AGI) — a transformative technology with profound implications for democratic governance and global security — by the end of this decade or earlier.”

The report calls out two primary concerns. One is that inadequate security at frontier AI labs increases the risk that advanced AI systems could be stolen from U.S. developers and then weaponized against U.S. interests. The other is the possibility that these labs could at some point lose control of the AI systems they are developing, with “potentially devastating consequences to global security.”

The report lays out an action plan for intervention aimed at increasing the safety and security of advanced AI. The plan was developed over a year and informed by conversations with 200 stakeholders from across the U.S., U.K., and Canadian governments; major cloud providers; AI safety organizations; security and computing experts; and formal and informal contacts at the frontier AI labs.

The proposed actions follow a sequence that:

  • Begins by establishing interim safeguards to stabilize advanced AI development, including export controls on the advanced AI supply chain;
  • Leverages the time gained to develop basic regulatory oversight and strengthen U.S. government capacity for later stages;
  • Transitions into a domestic legal regime of responsible AI development and adoption, safeguarded by a new U.S. regulatory agency; and
  • Extends that regime to the multilateral and international domains.

“This plan follows the principle of defense in depth, in which multiple overlapping controls combine to offer resilience against any single point of failure,” the report states. “We frame tradeoffs in terms of AI breakout timelines, the amount of time it would take an actor to train an AI system from scratch to equal the current state of the art under various expert-vetted assumptions.”
