A coalition of over 150 parents delivered a letter to New York Governor Kathy Hochul on Friday, urging her to sign the Responsible AI Safety and Education (RAISE) Act in its current form. The proposed legislation would require developers of large AI models—such as Meta, OpenAI, DeepSeek, and Google—to formulate safety plans and follow transparency protocols for reporting safety incidents.
The RAISE Act successfully passed through both the New York State Senate and Assembly in June. However, recent reports indicate that Hochul has suggested a substantial rewrite of the bill, potentially making it more advantageous for technology firms, similar to revisions made to California’s SB 53 following input from major AI companies.
Unsurprisingly, many AI enterprises oppose the legislation. The AI Alliance, which includes companies like Meta, IBM, Intel, Oracle, Snowflake, Uber, AMD, Databricks, and Hugging Face, expressed in a June letter to New York lawmakers their “deep concern” regarding the RAISE Act, labeling it as “unworkable.” Meanwhile, the pro-AI super PAC Leading the Future, backed by firms including Perplexity AI and Andreessen Horowitz, has launched targeted advertisements against New York State Assemblymember Alex Bores, a co-sponsor of the RAISE Act.
Organizations such as ParentsTogether Action and the Tech Oversight Project coordinated the letter delivered to Hochul, noting that some signatories have “lost children to the harms of AI chatbots and social media.” They described the RAISE Act’s current provisions as “minimalist guardrails” that should be enacted into law.
The letter further emphasized that the bill, as passed by the New York State Legislature, “does not regulate all AI developers—only the very largest companies, the ones spending hundreds of millions of dollars a year.” Under the proposed rules, these large developers would be required to disclose significant safety incidents to the attorney general and publish comprehensive safety plans. They would also be barred from releasing a frontier model that posed an unreasonable risk of critical harm, defined as the death or serious injury of 100 or more people, or at least $1 billion in damages, resulting either from the creation of a chemical, biological, radiological, or nuclear weapon or from an AI model acting without meaningful human intervention in ways that would constitute specified crimes if committed by a human.
“Big Tech’s deep-pocketed opposition to these basic protections looks familiar because we have seen this pattern of avoidance and evasion before,” the letter asserts. “Widespread damage to young people—including their mental health, emotional stability, and ability to function in school—has been widely documented ever since the biggest technology companies decided to push algorithmic social media platforms without transparency, oversight, or responsibility.”
The ongoing debate over the RAISE Act underscores a growing tension between the push for regulatory frameworks to ensure the safety of AI technologies and the interests of major tech companies that argue such regulations may stifle innovation. As the discourse evolves, the outcome could set significant precedents for how AI technologies are developed and managed in New York and beyond.