In March 2026, the White House released a National AI Legislative Framework urging Congress to enact a unified federal policy to preempt a growing patchwork of state AI regulations. The framework emphasizes six core principles: protecting children, safeguarding communities and energy costs, respecting intellectual property, preventing censorship, fostering innovation, and preparing an AI-ready workforce. The administration argues that inconsistent state laws could undermine national competitiveness and innovation. (apnews.com)

Meanwhile, faith leaders have entered the AI policy debate, urging Congress to include binding legal safeguards in the annual defense policy bill to ensure human control over AI-enabled weapons. Their intervention underscores growing concern over the militarization of AI. (axios.com)

On the economic front, the Commerce Department has launched a program inviting U.S. companies to submit proposals for exporting full-stack American AI technology packages abroad. The initiative aims to strengthen economic and national security by embedding U.S. AI systems in international infrastructure. (axios.com)

At the state level, the regulatory landscape remains fragmented. States such as California, New York, Illinois, and Colorado have enacted AI-specific laws addressing transparency, safety, and bias. The White House framework seeks to override these with a lighter-touch federal standard, though legal and political tensions persist. (altitudesmagazine.com)

In summary, U.S. AI policy stands at a crossroads: the federal government is pushing a unified approach to avoid regulatory fragmentation, while stakeholders ranging from religious groups to industry and the states continue to shape the debate across the defense, labor, and international trade domains.