New York’s RAISE Act Targets AI Safety, Adds Protections for AI Whistleblowers

By Rishabh Srihari
2025-05-12

New York State is taking a bold step toward regulating artificial intelligence. A new bill called the RAISE Act — short for Responsible AI Safety and Education Act — is making its way through the state legislature. It sets strict standards for developers of powerful AI systems and introduces vital protections for employees raising safety concerns.

Regulating the Development of Frontier Models

The RAISE Act focuses on frontier models, which are defined as AI systems trained with over 10^26 computational operations and costing more than $100 million to build. These high-stakes systems often come with significant risks if not managed responsibly. The bill sets clear safety and transparency rules for large developers, or those with major investments in frontier model development.

Under the proposed law, developers would need to establish safety and security protocols before deployment. They could not release any system that poses an “unreasonable risk of critical harm,” defined as the potential for mass injury or over $1 billion in damage. Regular audits, internal reviews, and documentation would be required to ensure compliance.

Additionally, developers must disclose any safety incidents involving their models within 72 hours to the New York Attorney General. These measures are intended to create a transparent, accountable AI development environment.

Anti-Retaliation Provisions Empower Industry Insiders

One of the most significant features of the bill is its protection for insiders who report potential dangers. Although it avoids the term "whistleblower," the RAISE Act prohibits retaliation against employees who report serious risks tied to AI development.

If employees reasonably believe an AI model poses a “substantial risk of critical harm,” they have the right to speak up, and employers cannot silence them through internal policies or contracts. Employers must also inform staff about these protections and preserve rights granted under other laws or agreements.

Violations can lead to steep penalties: up to $10,000 per employee for retaliation, and for safety breaches, 5–15% of a model’s total compute costs.

Growing Support for Broader AI Whistleblower Laws

While the RAISE Act is specific to New York, it’s already fueling wider conversations. The National Whistleblower Center is urging Congress to pass a federal AI whistleblower bill, arguing that national standards are essential for safeguarding the public and advancing responsible AI development.

An online Action Alert is available, allowing individuals to contact lawmakers and push for national protections.
