California and New York have enacted some of the nation’s most stringent regulations on artificial intelligence (AI), converting voluntary safeguards into mandatory compliance obligations for companies developing and deploying large-scale models. Legal experts say the regulations strengthen accountability and transparency while leaving room for continued innovation, but they also set the stage for tension with federal officials who favor a more streamlined national framework.
The primary change revolves around accountability. AI developers and major platforms are now required to disclose their strategies for mitigating catastrophic risks, report serious incidents promptly, and protect whistleblowers who raise safety concerns. This shift establishes a new compliance standard for companies with national aspirations, as neglecting the two most influential tech markets in the U.S. is no longer an option.
California’s Senate Bill 53 mandates that developers publish risk mitigation plans for their most sophisticated models and report “safety incidents”—events that could lead to cyber intrusions, misuse of chemicals or biological agents, radiological or nuclear harms, serious bodily injury, or loss of operational control. Companies are given 15 days to notify state regulators and could face fines of up to $1 million for noncompliance.
New York’s RAISE Act mirrors these disclosure requirements but adds tighter enforcement. Safety incidents must be reported within 72 hours, and penalties can reach $3 million for initial violations. The legislation also imposes annual third-party audits, adding an independent layer of oversight that California’s law does not require.
Both laws primarily target firms generating more than $500 million in gross annual revenue, effectively encompassing Big Tech and major AI vendors while exempting many early-stage startups. Regulators opted for a transparency-first approach following the failure of a more aggressive proposal in California, SB 1047, which would have required mandatory “kill switches” and stringent safety testing for high-cost models.
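The scope and obligations described above can be summarized concretely. The sketch below is illustrative only and assumes the figures reported in this article (a $500 million revenue threshold for both states, a 15-day window and $1 million cap under SB 53, a 72-hour window, $3 million initial-violation cap, and annual audits under the RAISE Act); all names and fields are hypothetical, and exact definitions should be confirmed against the statutes themselves.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass(frozen=True)
class StateAIRegime:
    """Key parameters of a state AI-safety law (illustrative figures from this article)."""
    name: str
    revenue_threshold_usd: int       # gross annual revenue that triggers coverage
    reporting_window: timedelta      # time allowed to notify regulators of a safety incident
    max_initial_penalty_usd: int     # cap on fines for an initial violation
    annual_third_party_audit: bool   # whether independent audits are mandated

CALIFORNIA_SB53 = StateAIRegime(
    name="California SB 53",
    revenue_threshold_usd=500_000_000,
    reporting_window=timedelta(days=15),
    max_initial_penalty_usd=1_000_000,
    annual_third_party_audit=False,
)

NEW_YORK_RAISE = StateAIRegime(
    name="New York RAISE Act",
    revenue_threshold_usd=500_000_000,
    reporting_window=timedelta(hours=72),
    max_initial_penalty_usd=3_000_000,
    annual_third_party_audit=True,
)

def covered_regimes(gross_annual_revenue_usd: int) -> list[StateAIRegime]:
    """Return the regimes whose revenue threshold a developer exceeds."""
    return [r for r in (CALIFORNIA_SB53, NEW_YORK_RAISE)
            if gross_annual_revenue_usd > r.revenue_threshold_usd]
```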
A notable aspect for corporate legal teams is California’s whistleblower protections. Unlike risk disclosures—where many multinational companies are already gearing up to comply with the EU AI Act—clear state-level protections for employees reporting AI safety issues are rare in the tech industry and could transform corporate handling of layoffs and internal investigations.
The new regulations necessitate a comprehensive buildout of safety governance without hindering research and development. Companies must now create incident-response protocols that clearly define reportable AI events, escalation procedures, and evidence preservation strategies. Increased emphasis on rigorous red-teaming and centralized logging of model behavior is expected, alongside formal “safety case” documentation that product teams and legal counsel can validate.
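As a rough illustration of what such an incident-response protocol might encode, the sketch below defines the harm categories SB 53 names as reportable (cyber intrusion; chemical, biological, radiological, or nuclear misuse; serious bodily injury; loss of operational control) and a minimal incident record with fields for escalation and evidence preservation. The category labels and record fields are hypothetical; an actual protocol would track each statute’s own definitions.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum, auto

class ReportableCategory(Enum):
    # Hypothetical labels for the harm categories listed in SB 53.
    CYBER_INTRUSION = auto()
    CBRN_MISUSE = auto()             # chemical, biological, radiological, nuclear
    SERIOUS_BODILY_INJURY = auto()
    LOSS_OF_OPERATIONAL_CONTROL = auto()

@dataclass
class SafetyIncident:
    """Minimal incident record supporting escalation and evidence preservation."""
    incident_id: str
    detected_at: datetime
    description: str
    categories: list[ReportableCategory]
    model_version: str                                       # ties the event to a model-lineage entry
    evidence_uris: list[str] = field(default_factory=list)   # logs, transcripts, red-team artifacts
    escalated_to_counsel: bool = False

    def is_reportable(self) -> bool:
        """An incident is reportable if it falls into any statutory harm category."""
        return bool(self.categories)
```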
According to legal experts, many global firms already align with the EU AI Act, suggesting that the incremental compliance burden may be less significant than anticipated, particularly regarding disclosures. Gideon Futerman of the Center for AI Safety contends that while day-to-day research practices will largely remain unchanged, these laws represent a critical step in making catastrophic-risk oversight enforceable in the U.S.
For instance, if a general-purpose model used by a fintech company is manipulated to create malicious code that compromises a partner network, New York’s law would require a 72-hour report and an audit trail, while California would allow 15 days for notification. These compliance timelines are set to influence vendor contracts, service-level agreements, and how swiftly findings are communicated to company boards.
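To make those timelines concrete, a small helper like the one below would compute each jurisdiction’s notification cutoff from the moment an incident is detected, the kind of figure that ends up in vendor SLAs and board reporting calendars. The windows hard-coded here are the ones this article reports; the dates and structure are purely illustrative.

```python
from datetime import datetime, timedelta, timezone

# Illustrative reporting windows from this article: 72 hours in New York, 15 days in California.
REPORTING_WINDOWS = {
    "New York RAISE Act": timedelta(hours=72),
    "California SB 53": timedelta(days=15),
}

def notification_deadlines(detected_at: datetime) -> dict[str, datetime]:
    """Map each jurisdiction to the latest time regulators can be notified."""
    return {name: detected_at + window for name, window in REPORTING_WINDOWS.items()}

# Example: a model manipulated into producing malicious code is detected on 1 March.
detected = datetime(2026, 3, 1, 9, 0, tzinfo=timezone.utc)
for jurisdiction, deadline in notification_deadlines(detected).items():
    print(f"{jurisdiction}: notify by {deadline.isoformat()}")
```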
The federal government has signaled a desire to centralize AI governance, cautioning that a fragmented state-by-state approach could stifle innovation and create compliance burdens. The Justice Department is reportedly assembling an AI Litigation Task Force to contest state provisions perceived as incompatible with a national policy framework.
However, the question of preemption remains complex. Attorneys point out that, in the absence of a federal statute explicitly overriding state laws, courts tend to permit states to impose stricter standards—similar to health privacy regulations under HIPAA. While there has been a recent request for information from the Center for AI Standards and Innovation, Washington has yet to propose a definitive alternative to state-level regulations. A recent congressional effort to block state AI laws was unsuccessful, underscoring the unresolved nature of preemption.
In practice, the newly instituted laws prioritize transparency and traceability over stringent technical requirements. Although New York’s independent audits elevate compliance expectations, neither state mandates third-party evaluations of models prior to their deployment, allowing flexibility for laboratories while increasing the risk of overlooking potential catastrophic failures.
Companies are now tasked with ensuring that the documentation required by these laws does not become a liability in legal proceedings. With the whistleblower protections in California, firms will need robust anti-retaliation policies and clearer pathways for employees to raise AI safety concerns. Investors are increasingly factoring governance, privacy, and cybersecurity readiness into their funding decisions, aligning market incentives with compliance.
As enforcement actions commence, stakeholders should closely monitor the actions of the new task force, the evolving definitions of “safety incidents,” and how these regulations converge with the EU AI Act. Legal experts recommend using these laws as a baseline for compliance, advocating for the establishment of centralized incident registers, expanded red-team efforts to identify catastrophic misuse, comprehensive logging of model lineage, and enhanced whistleblower and vendor oversight. Transparency is now a fundamental requirement, fundamentally altering how leading AI companies operate.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health