March 2025 AI Developments Under the Trump Administration

This is part of an ongoing series of Covington blogs on the AI policies, executive orders, and other actions of the Trump Administration. This blog describes AI actions taken by the Trump Administration in March 2025, and prior articles in this series are available here.
White House Receives Public Comments on AI Action Plan
On March 15, the White House Office of Science & Technology Policy ("OSTP") and the Networking and Information Technology Research and Development National Coordination Office within the National Science Foundation closed the public comment period on the White House's AI Action Plan, following their February 6 issuance of a Request for Information ("RFI") on the Plan. As required by President Trump's AI EO, the RFI called on stakeholders to submit comments on the highest-priority policy actions that should be included in the new AI Action Plan, organized around 20 broad and non-exclusive topics for potential input, including data centers, data privacy and security, technical and safety standards, intellectual property, and procurement. The comments will inform an AI Action Plan intended to achieve the AI EO's policy of "sustain[ing] and enhanc[ing] America's global AI dominance."
The RFI drew 8,755 comments from nonprofit organizations, think tanks, trade associations, industry groups, academia, and AI companies. The final AI Action Plan is expected by July 2025.
NIST Launches New AI Standards Initiatives
The National Institute of Standards & Technology ("NIST") announced several AI initiatives in March to advance AI research and the development of AI standards. On March 19, NIST launched its GenAI Image Challenge, an initiative to evaluate generative AI "image generators" as well as "image discriminators," i.e., AI models designed to detect whether images are AI-generated. NIST called on academic and industry research labs to participate in the challenge by submitting generators and discriminators to NIST's GenAI platform.
On March 24, NIST released its final report on Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, NIST AI 100-2e2025, with voluntary guidance for securing AI systems against adversarial manipulations and attacks. Noting that adversarial attacks on AI systems “have been demonstrated under real-world conditions, and their sophistication and impacts have been increasing steadily,” the report provides a taxonomy of AI system attacks on predictive and generative AI systems at various stages of the “machine learning lifecycle.”
On March 25, NIST announced the launch of an "AI Standards Zero Drafts project" that will pilot a new process for creating AI standards. Under the new process, NIST will prepare preliminary "zero drafts" of AI standards, informed by rounds of stakeholder input, and submit them to standards developing organizations ("SDOs") for formal standardization. NIST outlined four AI topics for the pilot of the Zero Drafts project: (1) AI transparency and documentation about AI systems and data; (2) methods and metrics for AI testing, evaluation, verification, and validation ("TEVV"); (3) concepts and terminology for AI system designs, architectures, processes, and actors; and (4) technical measures for reducing synthetic content risks. NIST called for stakeholder input on the topics, scope, and priorities of the Zero Drafts process, with no set deadline for submitting responses.
Michael Kratsios Confirmed as Director of the Office of Science & Technology Policy
On March 25, the U.S. Senate voted 74-25 to confirm Michael Kratsios, the Assistant to the President for Science & Technology, as the Director of the White House Office of Science & Technology Policy. As the U.S. Chief Technology Officer and OSTP Associate Director during the first Trump Administration, Kratsios played a significant role in shaping U.S. AI policy, including overseeing the establishment of the White House National AI Initiative Office and the OMB guidance on the use of AI by federal agencies finalized in November 2020. In his February 25 written responses to the Senate Commerce Committee, Kratsios stated that he would "seek to develop additional technical standards for the development and deployment of AI systems" through a "use-case and sector-specific, risk-based policy approach," and would work with the Department of Commerce to assess the U.S. AI Safety Institute and "chart the best path forward for the institute to ensure continued American leadership" in AI.
On March 26, President Trump published a letter that he sent to Director Kratsios, directing Kratsios to meet three “challenges” to “deliver for the American people”: (1) securing U.S. “technological supremacy” over potential adversaries “in critical and emerging technologies,” including AI, by accelerating research and development and removing regulatory barriers; (2) revitalizing the U.S. “science and technology enterprise” by reducing regulations, attracting talent, “empowering researchers,” and “protect[ing] our intellectual edge”; and (3) ensuring that “scientific progress and technological innovation fuel economic growth and better the lives of all Americans.”
Congress and States Continue to Respond to DeepSeek
The reaction to the rise of DeepSeek, including its implications for the U.S.-China AI competition, continued in March, as Members of Congress and state officials stepped up calls for bans on the use of DeepSeek's AI models on government devices. On March 3, Representatives Josh Gottheimer (D-NJ) and Darin LaHood (R-IL) announced that they had sent letters to the governors of 47 states and the mayor of the District of Columbia urging them to "take immediate action" to ban DeepSeek from government-issued devices. The letters, which warn of "serious concerns" regarding DeepSeek's data privacy and national security risks, follow Reps. Gottheimer and LaHood's introduction of the No DeepSeek on Government Devices Act (H.R. 1121) in February. On March 6, Montana Attorney General Austin Knudsen issued a letter, signed by Knudsen and 20 other state attorneys general, urging Congress to pass the No DeepSeek on Government Devices Act.
States continued to pursue their own government-use bans, following bans issued by officials in New York, Virginia, Iowa, and Pennsylvania last month. On March 4, South Dakota Governor Larry Rhoden (R) and the South Dakota Bureau of Information & Telecommunications issued a ban on the use of DeepSeek's AI application by state employees, agencies, or government contractors on state government-issued or leased devices. On March 21, Oklahoma Governor Kevin Stitt (R) announced a ban on downloading or accessing DeepSeek's AI models on state-owned devices and on inputting "state data" into "any product using DeepSeek." In announcing the ban, Governor Stitt cited security risks, regulatory compliance issues, adversarial manipulation risks, and DeepSeek's lack of robust security safeguards.