Curb your enthusiasm, Trump admin urged to screen AI models before release
The Trump administration should screen advanced artificial intelligence models for security threats before public release and deny government contracts to those that fail review, an advocacy group said Monday.
The recommendation comes amid concerns that models such as Anthropic’s Mythos could make complex cyberattacks faster and easier to execute, creating national security risks.
Americans for Responsible Innovation called on the Trump administration to establish methods for vetting upcoming frontier models from major developers for cyberattack and weapons-development capabilities.
Companies would need to pass the review to qualify for government contracts, the group stated in a letter to administration officials.
The U.S. Center for AI Standards and Innovation currently reviews some AI models through voluntary agreements with OpenAI, Anthropic, Google, Microsoft and xAI.
CAISI should lead the development of mandatory requirements, and Congress should establish a permanent office within the U.S. Department of Commerce to enforce them, the group said.