Senator Blackburn Withdraws Support for AI Moratorium in Trump's 'Big Beautiful Bill' Amid Intense Backlash
As Congress races against the clock to pass President Donald Trump’s sweeping legislative package, dubbed the “Big Beautiful Bill,” a contentious provision proposing a moratorium on state-level artificial intelligence regulations has become a flashpoint, triggering a rapid series of political maneuvers and reversals. At the center of this unfolding drama is Senator Marsha Blackburn, who has reversed course twice on the controversial measure, ultimately withdrawing her support for a compromise she helped craft after a wave of fierce criticism.
The initial proposal, reportedly championed by White House AI advisor and prominent venture capitalist David Sacks, called for a decade-long pause on states enacting their own laws concerning AI. The idea immediately drew fire from a diverse coalition of critics, who argued it represented a significant federal overreach that would stifle necessary consumer protections and hand large technology companies an effective “get-out-of-jail-free card,” making it exceedingly difficult for states to address emerging harms from AI and automated systems. Opposition spanned the political spectrum, uniting figures as disparate as 40 state attorneys general, who voiced concerns about their ability to protect citizens, and ultra-conservative politicians like Representative Marjorie Taylor Greene.
Facing this broad and potent backlash, efforts were made to find a middle ground. Senator Blackburn, who had initially opposed the original 10-year moratorium, collaborated with Senator Ted Cruz on a revised version of the provision. Announced on a Sunday evening, this new iteration sought to address some concerns by reducing the proposed pause from a full decade to five years. Crucially, it also introduced a series of carve-outs intended to allow states to continue regulating AI in specific, sensitive areas.
The Compromise and Its Carve-Outs
Senator Blackburn’s legislative history provides context for her involvement in crafting these exemptions. Representing Tennessee, a state with a significant music industry presence, she has been a vocal advocate for protecting artists and creators. Last year, Tennessee passed a pioneering law aimed at preventing AI-generated deepfakes of musical artists, expanding the legal right of publicity to cover the commercial exploitation of one's likeness. This focus was reflected in the proposed carve-outs within the revised AI moratorium.
The compromise language drafted by Blackburn and Cruz included exemptions for state laws dealing with:
- Unfair or deceptive acts or practices
- Child online safety
- Child sexual abuse material (CSAM)
- Rights of publicity
- Protection of a person’s name, image, voice, or likeness
These carve-outs were designed to preserve state authority in areas deemed particularly vulnerable to the potential negative impacts of unregulated AI, such as the spread of deepfakes, the exploitation of children online, and misleading algorithmic practices.
The Backlash Intensifies: A 'Trojan Horse' for Big Tech?
Despite the inclusion of these specific exemptions, the revised five-year moratorium provision failed to quell the widespread opposition. Critics quickly identified a critical caveat attached to the carve-outs: the exempted state laws could not place an “undue or disproportionate burden” on AI systems or “automated decision systems.”
This language became the primary target of renewed criticism. Opponents argued that the “undue burden” clause effectively nullified the carve-outs, providing a powerful legal shield for tech companies. Given that AI and algorithmic systems are deeply integrated into social media platforms and other online services, many state-level efforts to regulate these platforms for issues like child safety or deceptive practices could potentially be challenged as imposing an “undue burden” on the underlying AI or algorithms.
Senator Maria Cantwell, a key figure in tech policy debates, was among those who immediately saw the danger in this language. She argued that the provision, even with the carve-outs, was designed to create “a brand-new shield against litigation and state regulation.”
Advocacy groups and legal experts echoed these concerns. Danny Weiss, chief advocacy officer at the nonprofit Common Sense Media, which focuses on children's online safety, described the revised version as still “extremely sweeping.” He warned that the “undue burden” shield could “affect almost every effort to regulate tech with regards to safety,” effectively hamstringing states' ability to protect their citizens, particularly vulnerable populations like children, from the potential harms of AI-powered platforms.
J.B. Branch, an advocate for the consumer rights nonprofit Public Citizen, went further, labeling the updated moratorium a “clever Trojan horse designed to wipe out state protections while pretending to preserve them.” In a scathing statement, Branch contended that the “undue burden” language rendered the carefully crafted carve-outs “meaningless,” as tech companies could simply argue that any state regulation impacting their AI systems imposed such a burden.
The opposition was not limited to policy experts and advocacy groups. Figures like Steve Bannon, a prominent voice in the ultra-MAGA movement, also criticized the five-year pause, albeit from a different angle, suggesting it still gave tech companies too much time to operate unchecked: “they’ll get all their dirty work done in the first five years.” This unusual alignment of critics from across the political spectrum highlighted the depth and breadth of concern regarding the moratorium.
The Broadband Funding Connection
Adding another layer of controversy, the AI moratorium provision was tied to federal funding. The bill linked access to funds from the Broadband Equity, Access, and Deployment (BEAD) program – a significant initiative aimed at expanding internet access – to compliance with the five-year pause on state AI regulations. Critics, including Senator Ed Markey, saw this as an attempt by the Trump administration to use crucial infrastructure funding as leverage, effectively weaponizing it against states that might wish to enact their own AI regulations.
Senator Markey, alongside Senator Cantwell, introduced an amendment specifically aimed at removing the AI moratorium from the larger bill. Markey described the revised provision as “a wolf in sheep’s clothing,” arguing that it still prevented states from protecting children online from what he termed Big Tech’s “predatory behavior” and allowed the administration to use broadband funding as a coercive tool.
Blackburn's Second Reversal
The intense and sustained backlash against the revised moratorium, particularly the focus on the “undue burden” clause and its implications for child safety and creator rights, appears to have prompted Senator Blackburn's second reversal on the issue. After initially opposing the 10-year ban, then working on the five-year compromise with carve-outs, she ultimately decided the compromise language was still insufficient to protect her constituents and other vulnerable groups.
On Monday evening, in a significant development, Senator Blackburn announced she was withdrawing her support for the revised provision. In a statement provided to WIRED, she explained her decision:
“While I appreciate Chairman Cruz’s efforts to find acceptable language that allows states to protect their citizens from the abuses of AI, the current language is not acceptable to those who need these protections the most,” Blackburn said. “This provision could allow Big Tech to continue to exploit kids, creators, and conservatives. Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can’t block states from making laws that protect their citizens.”
This statement clearly indicates that despite the carve-outs, Blackburn concluded that the “undue burden” language or other aspects of the provision still posed too great a risk, potentially undermining state efforts to regulate tech platforms on critical issues like child safety and the rights of artists and creators – areas she has championed. Her reference to the need for federal legislation like the Kids Online Safety Act and a comprehensive online privacy framework suggests her view that federal preemption might be acceptable *only* if robust federal protections are already in place, which she believes is not currently the case.
In a remarkable turn of events following her announcement, Senator Blackburn joined forces with Senator Cantwell, who had been a vocal critic of the moratorium from the outset. The two senators introduced their *own* amendment to the Big Beautiful Bill, this one specifically designed to strip the AI moratorium provision out of the legislation entirely. Tricia Enright, communications director for the Senate Committee on Commerce, Science, and Transportation, confirmed the collaboration, stating, “Blackburn is now co-sponsoring Senator Cantwell’s amendment, and Cantwell agreed to cosponsor Blackburn’s new amendment.” This bipartisan effort between two senators who had been on different sides of the compromise negotiations underscores the level of opposition the moratorium faced and the political pressure it generated.

The Broader Context of AI and Tech Regulation
The intense debate and rapid legislative shifts surrounding the AI moratorium in the Big Beautiful Bill highlight the broader challenges facing policymakers as they grapple with regulating rapidly evolving technologies like artificial intelligence. The core tension lies between fostering innovation and ensuring adequate protections for individuals and society.
Proponents of a federal moratorium, like those who initially pushed for the 10-year pause, often argue that a patchwork of state-level regulations could create a complex and potentially contradictory legal landscape, hindering the development and deployment of AI technologies. They suggest that a unified federal approach is necessary to provide clarity and predictability for businesses operating across state lines.
However, critics counter that waiting for comprehensive federal legislation could leave consumers and vulnerable groups exposed to significant harms in the interim. They point to the slow pace of federal action on issues like data privacy and online safety, arguing that states must retain the ability to act proactively to address urgent concerns. The debate over the AI moratorium thus mirrors long-standing discussions about federal preemption versus state authority in areas ranging from environmental protection to consumer finance.
The specific carve-outs proposed in the Blackburn-Cruz compromise – covering areas like unfair practices, child safety, and rights of publicity – reflect the areas where the potential harms of AI are perceived as most immediate and concrete. The inclusion of rights of publicity, in particular, underscores the growing concern among artists, actors, musicians, and other public figures about the use of AI to generate deepfakes and synthetic media that could exploit their likenesses without consent or compensation. Senator Blackburn's focus on this area aligns with legislative efforts already underway in states like Tennessee and California to grant individuals greater control over their digital identities.
The controversy over the “undue burden” language also speaks to a recurring challenge in tech regulation: how to craft rules that effectively address potential harms without stifling innovation or imposing overly burdensome compliance costs on companies, particularly smaller ones. Critics of the clause argued that it was written so broadly that it would effectively make any meaningful state regulation challengeable in court, creating a chilling effect on state legislative action.
The political maneuvering surrounding the moratorium also illustrates the powerful lobbying forces at play in the tech policy arena. Large technology companies often advocate for federal preemption, viewing a single set of national rules as preferable to navigating potentially dozens of different state laws. Consumer advocates, civil rights groups, and state officials, conversely, often favor state authority, seeing it as a necessary check on corporate power and a way to tailor regulations to local needs and concerns.
The rapid reversal by Senator Blackburn, moving from initial opposition to compromise authorship and finally to co-sponsoring an amendment to remove the provision entirely, demonstrates the significant political pressure generated by the backlash. It highlights the difficulty of finding a widely acceptable approach to regulating AI, even within a larger legislative package that the administration is eager to pass quickly.
The Path Forward for AI Regulation
With the AI moratorium provision now facing a strong push for removal from the Big Beautiful Bill, its future remains uncertain as Congress races towards the Fourth of July recess deadline. The bipartisan effort by Senators Cantwell and Blackburn to strip the language out suggests a recognition that the provision, in its current form, is too controversial to pass muster, even within a bill that the administration is prioritizing.
Should the moratorium be removed, the landscape for AI regulation would revert to the status quo, where states retain the authority to pass their own laws concerning AI, subject to existing constitutional limitations. This would likely lead to a continuation of the trend of states exploring and enacting targeted AI regulations, particularly in areas like data privacy, algorithmic bias, and the use of AI in specific sectors like employment, housing, and lending.
The debate also underscores the urgent need for federal action on comprehensive tech regulation. As Senator Blackburn noted, the absence of robust federal laws on issues like child online safety and data privacy leaves states feeling compelled to act, creating the very patchwork that some in the industry seek to avoid. Passing federal legislation like the Kids Online Safety Act or a national privacy standard could potentially provide a clearer framework and reduce the impetus for disparate state-level rules.
However, achieving consensus on comprehensive federal tech regulation has proven challenging in recent years, with disagreements over issues like Section 230 immunity, data minimization requirements, and enforcement mechanisms. The rapid evolution of AI adds another layer of complexity to these debates, as policymakers struggle to understand and anticipate the technology's potential impacts.
The saga of the AI moratorium in the Big Beautiful Bill serves as a microcosm of the broader challenges in governing artificial intelligence. It highlights the tension between federal and state authority, the powerful influence of industry lobbying, the urgent calls for consumer protection, and the difficulty of drafting legislation that is both effective and politically viable. As AI continues to become more integrated into daily life, these debates are only likely to intensify, requiring careful consideration and collaboration to ensure that regulatory frameworks promote both innovation and public well-being.
The fate of the AI moratorium provision now rests with Congress as it deliberates the final shape of the Big Beautiful Bill. Regardless of the outcome, the intense debate it has sparked has brought critical questions about the future of AI governance to the forefront, forcing policymakers to confront the complex trade-offs involved in regulating this transformative technology.
Blackburn's willingness to co-sponsor an amendment with a senator from the opposing party who had been a consistent critic of the moratorium suggests that the concerns raised by state attorneys general, child safety advocates, and consumer groups resonated strongly enough to overcome the initial push for a federal pause. This episode may serve as a cautionary tale for future attempts to impose broad federal preemption in rapidly developing technological areas without first addressing the specific, tangible harms that states are attempting to mitigate.
Ultimately, the withdrawal of Senator Blackburn's support, coupled with the bipartisan amendment to remove the provision, significantly diminishes the likelihood that the AI moratorium will be included in the final version of the Big Beautiful Bill. While the broader legislative package moves forward, this particular battle over the scope of AI regulation appears to be concluding with a victory for those who argued for preserving state authority, at least until more comprehensive federal protections are enacted. The debate, however, is far from over, as policymakers at both the state and federal levels will continue to grapple with how best to govern the powerful and rapidly advancing field of artificial intelligence.