Mar 20, 2026
IT News

The TRUMP AMERICA AI Act: Senator Blackburn's 300-Page Draft to Create the First Federal AI Framework

Senator Marsha Blackburn released a sweeping 300-page draft federal AI bill that would preempt state regulations, mandate AI audits, and protect digital likeness rights.

Tags: AI Regulation, US Policy, TRUMP AMERICA AI Act, Federal Law, Copyright

Key Takeaways

On March 19, 2026, Senator Marsha Blackburn (R-TN) released a nearly 300-page discussion draft of the most comprehensive federal AI legislation attempted in the United States. Officially titled the "Republic Unifying Meritocratic Performance Advancing Machine intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act" (TRUMP AMERICA AI Act), the bill aims to create a unified national framework for AI regulation that would replace the growing patchwork of state-level AI laws.

The legislation responds directly to President Trump's December 2025 executive order calling for federal AI standards to override state regulations. It consolidates proposals from both Republican and Democratic senators, including provisions on copyright protection, digital likeness rights, children's online safety, AI system audits, and Section 230 reform.

Feature Overview

1. Federal Preemption of State AI Laws

The bill's most consequential provision is its intent to establish federal AI rules that would supersede state legislation. With states including California, Colorado, Texas, and Virginia having enacted or proposed their own AI regulations, companies currently face a complex compliance landscape.

State      | Key AI Regulation         | Status
-----------|---------------------------|-------------
California | SB-1047 AI Safety         | Enacted 2025
Colorado   | AI Consumer Protections   | Enacted 2025
Texas      | AI Governance Act         | Enacted 2026
Virginia   | SB-796 AI Chatbot Minors  | Enacted 2026

Federal preemption would simplify compliance for AI companies but has drawn criticism from state regulators who argue that federal standards may be weaker than state protections. The bill does not specify whether it would override all state AI laws or only those that directly conflict with federal provisions.

2. Duty of Care for AI Developers

The TRUMP AMERICA AI Act introduces a "duty of care" requirement for AI chatbot developers, mandating that they "exercise reasonable care" in designing systems to prevent foreseeable harms to users. This is a significant legal shift: it moves beyond the current framework, in which AI companies are generally shielded from liability for the outputs their models generate.

The duty of care standard is deliberately broad, covering everything from AI-generated misinformation to harmful advice. Courts would ultimately determine what constitutes "reasonable care" in specific cases, creating potential for precedent-setting litigation against AI companies.

3. Digital Likeness and Copyright Protection

The bill incorporates two significant intellectual property provisions:

Digital Likeness Rights: Drawing from Senator Chris Coons' (D-DE) legislation, the bill gives individuals the right to license their voice and visual likeness for digital replicas and prohibits transmitting such replicas without consent. This directly targets deepfakes and unauthorized AI-generated celebrity content.

Copyright Subpoena Power: Based on Senator Peter Welch's (D-VT) proposal, copyright holders would be able to request subpoenas to determine whether their work was used to train AI models. The bill also establishes that derivative AI-generated works receive no fair use exemptions or copyright protection.

For AI companies, the copyright provisions are potentially transformative. The subpoena power would create a legal mechanism for content creators to audit AI training data, something that has been technically and legally difficult to accomplish until now.

4. High-Risk AI Audits and Bias Prevention

The draft requires third-party audits of "high-risk" AI systems specifically focused on detecting viewpoint or political affiliation discrimination. This provision reflects concerns that AI models may exhibit political bias in their outputs, a topic that has been intensely debated in Congress.

The focus on political bias rather than broader bias categories (race, gender, age) is notable and has drawn criticism from civil rights organizations who argue the provision is too narrow. Supporters counter that political bias audits are a necessary first step that does not preclude broader requirements.

5. Section 230 Reform and Children's Safety

The bill incorporates Senator Lindsey Graham's (R-SC) proposal to sunset Section 230 liability protections for online platforms two years after enactment. It also includes Senator Blackburn's Kids Online Safety Act (KOSA), which requires platforms to implement duty of care protections for minors.

Specifically for AI, the bill bans chatbots designed for children, establishes criminal penalties for sexually explicit AI conversations with minors, and requires age verification systems. These provisions respond to high-profile incidents involving AI chatbots and young users.

6. AI Safety and Energy Infrastructure

Two additional provisions address emerging concerns:

AI Safety Assessment: The Department of Energy would be required to establish programs evaluating risks of advanced AI incidents, including loss-of-control scenarios. This is the first time proposed federal legislation has explicitly addressed AI existential risk.

Data Center Energy: The bill mandates Energy Department agreements with data centers to protect ratepayers from electricity price increases driven by AI computing demand. This addresses growing concerns about the energy footprint of AI infrastructure.

Usability Analysis

For AI companies, the TRUMP AMERICA AI Act would create both simplification and new obligations. Federal preemption would replace the state-by-state compliance burden with a single national standard. However, the duty of care requirement, copyright subpoena power, and mandatory audits would introduce compliance costs that are currently absent.

For content creators and artists, the digital likeness and copyright provisions represent a significant step forward. The ability to subpoena AI companies to determine whether specific works were used in training could fundamentally change the economics of AI-content creator relationships.

For the general public, the children's safety provisions and Section 230 reform would have the broadest impact, potentially reshaping how AI chatbots interact with young users and how platforms handle AI-generated content.

Pros

  1. Federal preemption would replace the current patchwork of state AI regulations with a single national standard, simplifying compliance for AI companies operating across multiple states
  2. Copyright subpoena power gives content creators a legal mechanism to audit whether their work was used to train AI models
  3. Digital likeness protections address the growing deepfake problem with consent requirements and licensing frameworks
  4. Bipartisan construction incorporating proposals from both parties increases the likelihood of meaningful legislative progress
  5. Children's safety provisions with criminal penalties for AI-related harms to minors set clear boundaries

Limitations

  1. Political bias focus in AI audits is too narrow and does not address racial, gender, or other forms of algorithmic discrimination
  2. Section 230 sunset creates significant uncertainty for the entire internet ecosystem, not just AI companies
  3. No clear timeline for passage as the bill faces jurisdictional complexity across multiple Senate committees
  4. Federal preemption may weaken protections in states that have enacted stronger AI regulations than the proposed federal standard

Outlook

The TRUMP AMERICA AI Act is a discussion draft, not introduced legislation. Its path to becoming law faces significant obstacles. The bill would likely fall under the jurisdiction of multiple Senate committees, including Commerce, Judiciary, and Energy, creating procedural complexity. Additionally, lawmakers face a dwindling legislative calendar in an election year.

However, the bill's bipartisan construction is strategically significant. By incorporating proposals from Democrats including Coons, Welch, and Durbin alongside Republican priorities, Blackburn has built a framework that cannot be dismissed as purely partisan. The consolidation of multiple standalone bills into a comprehensive package also creates negotiating leverage, as senators who contributed provisions have incentive to support the overall framework.

The AI industry's response will be critical. Companies that have lobbied for federal preemption of state laws may find that the accompanying obligations, particularly copyright subpoenas and duty of care requirements, are a higher price than expected. The trade-off between regulatory simplification and new compliance burdens will shape industry support or opposition.

Regardless of whether this specific bill advances, it establishes the template for what federal AI legislation could look like. The combination of preemption, accountability, intellectual property protection, and safety requirements will likely influence every subsequent attempt at comprehensive AI regulation in the United States.

Conclusion

Senator Blackburn's TRUMP AMERICA AI Act is the most ambitious attempt at federal AI legislation to date. The 300-page draft addresses issues ranging from copyright protection and digital likeness rights to AI safety assessments and data center energy impacts. While its path through Congress is uncertain, the bill establishes a comprehensive framework that will shape the federal AI regulation debate for years to come. For AI companies, content creators, and the broader technology industry, this draft signals that the era of unregulated AI development in the United States is approaching its end.



Key Features

  1. Nearly 300-page federal AI legislation draft released by Senator Blackburn on March 19, 2026
  2. Federal preemption to replace state-level AI regulations with a unified national standard
  3. Duty of care requirement for AI chatbot developers to prevent foreseeable harms
  4. Copyright subpoena power allowing creators to audit AI training data usage
  5. Digital likeness protection requiring consent for AI-generated voice and visual replicas

Key Insights

  • Federal preemption of state AI laws would be the single largest regulatory simplification for AI companies operating in the US
  • Copyright subpoena power could fundamentally change AI-content creator economics by making training data auditable
  • The bill's bipartisan construction incorporating proposals from both parties increases credibility but may complicate passage
  • Political bias audits for high-risk AI reflect Congressional priorities but leave broader discrimination categories unaddressed
  • Section 230 sunset two years after enactment would reshape the entire internet ecosystem beyond AI
  • AI safety assessment provisions addressing loss-of-control scenarios mark the first time federal legislation has explicitly addressed existential AI risk
  • Data center energy provisions signal regulatory attention to AI's growing environmental and infrastructure footprint
  • The discussion draft format allows industry feedback before formal introduction, creating a window for lobbying and revision
