New York Passes Landmark RAISE Act to Tackle Frontier AI Safety and Catastrophic Risks

12:43 AM   |   14 June 2025

New York Takes Bold Step on AI Safety with Passage of the RAISE Act

In a significant move for artificial intelligence governance in the United States, New York state lawmakers have passed the Responsible AI Safety and Education (RAISE) Act. This pioneering legislation, approved by the state Senate on Thursday, June 12, 2025, targets the most advanced AI models, often called "frontier AI," developed by leading labs such as OpenAI, Google, and Anthropic. The bill's primary objective is to establish guardrails that prevent these powerful systems from contributing to catastrophic scenarios, defined in the bill as events causing death or injury to more than 100 people, or damages exceeding $1 billion.

The passage of the RAISE Act is being hailed by proponents as a crucial victory for the burgeoning AI safety movement. This movement, which advocates for proactive measures to mitigate potential existential or severe risks posed by advanced AI, has reportedly faced challenges and lost ground in recent years. Critics argue that a prevailing focus on rapid innovation and deployment within Silicon Valley, sometimes echoed by governmental approaches, has overshadowed necessary safety considerations. The RAISE Act, championed by prominent figures in the AI safety community including Nobel laureate Geoffrey Hinton and AI research pioneer Yoshua Bengio, seeks to counteract this trend by introducing legally mandated transparency standards for the labs developing these powerful models. If signed into law by Governor Kathy Hochul, New York would become the first state in America to enact such comprehensive transparency requirements specifically targeting frontier AI.

Navigating the Complex Landscape of AI Regulation

The debate surrounding AI regulation is multifaceted, pitting concerns about potential societal risks against fears of stifling innovation and losing competitive edge. As AI models grow more capable, moving beyond narrow tasks toward more general behavior, the potential for unintended consequences or malicious misuse grows with them. Safety advocates argue that the pace of development is outstripping our understanding of these systems and our ability to control them, necessitating regulatory intervention.

Opponents, often from within the tech industry and venture capital, contend that heavy-handed regulation could impede progress, making it harder for startups to compete and potentially pushing cutting-edge research and development to countries with less stringent rules. They emphasize the economic benefits and potential societal improvements that AI can bring, arguing that regulation should be minimal, flexible, and focused on specific applications rather than the underlying models themselves.

The RAISE Act emerges from this complex backdrop, attempting to thread the needle between these competing interests. Its design reflects lessons learned from previous attempts at state-level AI regulation, most notably California's controversial AI safety bill, SB 1047. While sharing some core provisions and goals with SB 1047, which was ultimately vetoed by Governor Gavin Newsom, the RAISE Act's sponsors claim it was deliberately crafted to mitigate the criticisms leveled against its Californian predecessor.

New York state Senator Andrew Gounardes, a co-sponsor of the RAISE Act, emphasized this point in an interview. He stated that a key focus during the bill's drafting was ensuring it would not "chill innovation among startups or academic researchers" — a significant concern raised by critics of SB 1047. Senator Gounardes underscored the urgency of establishing regulatory frameworks, citing the rapid evolution of AI technology. "The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving," he remarked, adding that experts who understand AI best view the potential risks as "incredibly likely," which he finds "alarming."

Key Provisions and Scope of the RAISE Act

Should Governor Hochul sign it into law, the RAISE Act would impose specific obligations on the developers of frontier AI models. The core requirements revolve around transparency and incident reporting:

  • Safety and Security Reports: The bill mandates that the world's largest AI labs publish thorough reports detailing the safety and security characteristics of their frontier AI models. This aims to provide regulators and potentially the public with insights into how these models were developed, tested, and assessed for potential risks.
  • Incident Reporting: AI labs would be required to report safety incidents involving their frontier models. This includes reporting concerning behaviors exhibited by the AI model itself, as well as instances where bad actors might steal or misuse an AI model in a way that could lead to significant harm.

The bill grants the New York Attorney General the authority to enforce these standards. Companies found to be in violation could face substantial civil penalties, potentially reaching up to $30 million per violation.

A critical aspect of the RAISE Act is its narrowly defined scope: it is explicitly designed to target only the largest companies developing the most powerful AI models. The transparency requirements apply to companies whose AI models meet two key criteria, illustrated in the short sketch after this list:

  1. The models were trained using more than $100 million in computing resources. This threshold is intended to capture only the most resource-intensive, and presumably most powerful and potentially risky, AI models currently in existence or foreseeable in the near future.
  2. The models are being made available to residents of New York state. This establishes the jurisdictional basis for New York's regulation, impacting global companies like OpenAI (based in California) or DeepSeek (based in China) if they offer their qualifying models within the state.
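
Read together, the two criteria form a simple conjunctive test: a model is covered only if it both crosses the compute-cost threshold and is offered in New York. The sketch below is purely illustrative; the FrontierModel type, its field names, and the covered_by_raise_act function are invented for this example and do not appear in the bill, which defines coverage in statutory language.

```python
# Hypothetical sketch of the RAISE Act's two-part applicability test.
# All names here are invented for illustration; the bill itself defines
# coverage in legal terms, not code.
from dataclasses import dataclass

@dataclass
class FrontierModel:
    training_compute_cost_usd: int   # estimated spend on training compute
    available_in_new_york: bool      # offered to New York residents?

COMPUTE_THRESHOLD_USD = 100_000_000  # the bill's $100 million trigger

def covered_by_raise_act(model: FrontierModel) -> bool:
    """Both statutory criteria must hold for the model to be covered."""
    return (model.training_compute_cost_usd > COMPUTE_THRESHOLD_USD
            and model.available_in_new_york)

# A large frontier model offered in New York would be covered:
print(covered_by_raise_act(FrontierModel(250_000_000, True)))  # True
# A smaller startup's model would not be, even if offered in the state:
print(covered_by_raise_act(FrontierModel(5_000_000, True)))    # False
```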

This targeted approach is a direct response to criticisms of broader regulatory proposals that could inadvertently burden smaller startups or academic research institutions lacking the resources of major tech giants. Nathan Calvin, Vice President of State Affairs and General Counsel at Encode, who worked on both the RAISE Act and California's SB 1047, highlighted how the New York bill was shaped by feedback on previous efforts. He noted that the RAISE Act omits certain controversial provisions found in SB 1047, such as the requirement that AI model developers include a "kill switch," a mandate that drew significant industry pushback on both technical and policy grounds. Furthermore, the RAISE Act does not hold companies that post-train or fine-tune frontier AI models accountable for critical harms, focusing the regulatory burden squarely on the developers of the foundational models.

Industry Reactions: Pushback and Nuance

Despite the sponsors' efforts to craft a more targeted bill, the RAISE Act has not been met with universal acclaim from the tech industry. New York state Assemblymember Alex Bores, another co-sponsor, acknowledged the significant pushback from Silicon Valley, though he described the resistance as unsurprising. Assemblymember Bores maintained that the RAISE Act is designed in a way that should not limit the innovation capabilities of tech companies.

However, prominent voices within the industry have expressed strong opposition. Anjney Midha, a general partner at Andreessen Horowitz (a16z), a venture capital firm that was a fierce opponent of California's SB 1047, publicly criticized the New York bill. In a post on X, Midha labeled the RAISE Act "yet another stupid, stupid state level AI bill that will only hurt the US at a time when our adversaries are racing ahead." This sentiment reflects a common industry argument that state-level regulations create a fragmented and confusing legal landscape that hinders American competitiveness on the global stage.

Even Anthropic, an AI lab that has positioned itself with a strong focus on safety and has advocated for thoughtful AI governance, has expressed reservations, albeit more nuanced than outright opposition. Anthropic co-founder Jack Clark stated that the company had not taken an official stance on the bill but shared some concerns. Clark suggested that the RAISE Act might be too broad in its application, potentially presenting a risk to "smaller companies." This criticism, however, was disputed by Senator Gounardes, who reiterated that the bill was specifically designed to avoid applying to small companies, suggesting Clark's interpretation might "miss the mark."

Notably, some of the largest potential targets of the legislation — OpenAI, Google, and Meta — did not respond to TechCrunch's request for comment on the bill's passage, indicating either a strategic silence or ongoing internal evaluation of the legislation's potential impact.

The Specter of Companies Leaving New York

A significant criticism raised against the RAISE Act, mirroring arguments against California's SB 1047 and regulations in other jurisdictions such as Europe, is the possibility that AI model developers might simply choose not to offer their most advanced models or services within New York state to avoid the regulatory burden. This phenomenon has been observed to some extent in Europe, where stringent technology regulations such as the General Data Protection Regulation (GDPR) and the AI Act have led some companies to limit service availability.

Assemblymember Bores countered this concern, arguing that the regulatory burden imposed by the RAISE Act is relatively light compared to other potential regulations. He expressed confidence that the requirements would not be onerous enough to compel tech companies to cease operations or withdraw products from New York. Bores highlighted New York's economic significance, noting that it boasts the third-largest GDP in the United States. Pulling out of such a major market, he argued, is not a decision most companies would make lightly for purely economic reasons.

"I don't want to underestimate the political pettiness that might happen," Bores conceded, acknowledging that companies might withdraw services as a form of protest or leverage. However, he maintained his conviction that "there is no economic reason for [AI companies] to not make their models available in New York." This suggests the sponsors believe the market access New York provides outweighs the cost of compliance with the RAISE Act's provisions.

Comparing RAISE Act and California's SB 1047: A Tale of Two States

The comparison between New York's RAISE Act and California's vetoed SB 1047 is instructive, highlighting the evolving strategies and challenges in state-level AI regulation in the US. Both bills aimed to address the potential risks of advanced AI, particularly focusing on models capable of causing significant harm. Both were championed by AI safety advocates and faced considerable opposition from the tech industry.

California's SB 1047, introduced by State Senator Scott Wiener, was arguably more ambitious and, consequently, more controversial. It proposed a framework that included not only transparency requirements but also mandated safety testing before deploying powerful models and, most contentiously, required developers to implement a "kill switch" allowing models to be shut down in case of dangerous emergent behavior. This latter requirement was heavily criticized by the industry as technically challenging, potentially dangerous if misused, and an overreach of regulatory authority.

The RAISE Act appears to have learned from the intense backlash against SB 1047. By dropping the "kill switch" requirement and focusing accountability on the foundational model developers rather than those who fine-tune models, the New York bill attempts to present a less technically prescriptive and arguably less burdensome regulatory approach. The focus on transparency and incident reporting, while still requiring significant effort from labs, is seen by proponents as a more palatable entry point for regulation compared to mandates on technical design features or pre-deployment safety certifications.

However, both bills faced similar core criticisms regarding state-level action. Opponents argue that AI is a national, if not global, issue that requires federal regulation to ensure consistency and avoid a "patchwork" of differing state laws that could complicate compliance and hinder interstate commerce. The debate over state versus federal AI regulation remains a central tension in the US policy landscape.

Despite the veto of SB 1047, the fact that New York's RAISE Act successfully passed both houses of the state legislature indicates a persistent legislative appetite for addressing AI risks at the state level when federal action is perceived as too slow or insufficient. It also suggests that legislators are adapting their approaches based on previous regulatory attempts and industry feedback, aiming for bills that are potentially more politically viable while still addressing core safety concerns.

The Broader Context: AI Governance and Global Efforts

New York's RAISE Act does not exist in a vacuum. It is part of a growing global conversation about how to govern increasingly powerful AI systems. Governments around the world are grappling with similar questions about balancing innovation, economic competitiveness, national security, and societal safety in the age of advanced AI.

The European Union, for instance, has been a frontrunner in AI regulation with its comprehensive AI Act, which categorizes AI systems based on risk levels and imposes varying requirements. China is also developing its own regulatory frameworks, often with a focus on control and alignment with state interests. In the United States, discussions about federal AI regulation have been ongoing, involving executive orders, congressional hearings, and proposals, but comprehensive federal legislation has yet to materialize.

State-level initiatives like the RAISE Act can be seen as both a response to the perceived slow pace of federal action and a potential testing ground for different regulatory approaches. While they risk creating a fragmented regulatory environment, they also allow for localized experimentation and can pressure the federal government to act. The tech industry's significant lobbying efforts at both state and federal levels underscore the high stakes involved in shaping these regulations.

The concept of "frontier AI" itself is a subject of ongoing debate. Defining what constitutes a "frontier" model, particularly using metrics like compute cost, is an attempt to create a clear, objective trigger for regulation. However, the rapid advancements in AI capabilities mean that such thresholds may need to be revisited over time. Furthermore, the focus on the largest labs developing these models raises questions about how to monitor and regulate potentially risky AI development happening outside of these major players, including in open-source communities or by state actors.

Transparency, a cornerstone of the RAISE Act, is widely seen as a necessary component of AI governance. Understanding how powerful models work, what their capabilities and limitations are, and how they are being tested is crucial for identifying and mitigating risks. However, the specific form and depth of required transparency remain points of contention, with companies often citing intellectual property concerns or the risk of revealing information that could be exploited by malicious actors.

The Road Ahead: Governor Hochul's Decision

With the passage through the state legislature complete, the fate of the RAISE Act now rests with New York Governor Kathy Hochul. She has several options: she can sign the bill into law, making New York the first state with such targeted frontier AI regulation; she can veto it, effectively killing the bill unless the legislature can override her veto (which is often difficult); or she can send it back to the legislature with proposed amendments, potentially opening a new round of negotiations.

Governor Hochul's decision will be closely watched by the tech industry, AI safety advocates, and policymakers across the country. A signature would signal New York's commitment to proactive AI safety regulation and could potentially inspire similar efforts in other states. A veto, particularly if accompanied by reasoning that echoes industry concerns about innovation or fragmentation, would be a setback for state-level AI safety initiatives and could reinforce the argument for federal action.

The debate surrounding the RAISE Act highlights the fundamental challenge of governing a rapidly evolving technology with profound potential benefits and risks. Legislators like Senator Gounardes and Assemblymember Bores believe that waiting for federal consensus or for catastrophic events to occur is too risky, necessitating action at the state level despite the complexities. Industry critics, on the other hand, fear that well-intentioned but potentially flawed state laws could do more harm than good, hindering the very innovation that could address some of the world's most pressing problems.

Ultimately, the RAISE Act represents New York's attempt to contribute to the global effort to ensure that advanced AI is developed and deployed responsibly. Its focus on transparency and incident reporting for the most powerful models is a specific approach to a complex problem. Whether it becomes law and how effective it proves to be will depend on the Governor's decision and the subsequent response from the AI labs it seeks to regulate. The passage of the bill through the legislature, however, marks a significant moment in the ongoing story of how society attempts to understand, manage, and govern the power of artificial intelligence.