
AI Regulation

Over 150 Parents Urge NY Governor to Sign AI Safety Bill Without Amendments

Over 150 parents urge NY Governor Hochul to sign the RAISE Act, which would require AI giants like Meta and OpenAI to disclose safety incidents and publish safety plans.

Over 150 parents have urged New York Governor Kathy Hochul to sign the Responsible AI Safety and Education (RAISE) Act, seeking its approval without any amendments. Their letter, sent on Friday, stresses the importance of establishing regulatory frameworks for large AI developers. The RAISE Act mandates that companies like Meta, OpenAI, DeepSeek, and Google create safety plans and adhere to transparency rules for reporting safety incidents.

The RAISE Act, which passed both the New York State Senate and Assembly in June, now faces a hurdle: Governor Hochul has proposed a nearly complete rewrite of the legislation that critics say would make it more favorable to tech companies. The proposed rewrite has drawn opposition from the bill's supporters, while many AI firms, including members of the AI Alliance, expressed "deep concern" over the legislation in a letter sent to New York lawmakers earlier this year.

ParentsTogether Action and the Tech Oversight Project, which organized the letter to Hochul, highlighted the devastating impact of AI-related issues on families, with some signatories noting that they have “lost children to the harms of AI chatbots and social media.” They described the RAISE Act as containing necessary “minimalist guardrails” that should be enacted into law to protect users.

The legislation targets large companies investing hundreds of millions of dollars in AI development and would require developers to disclose large-scale safety incidents to the attorney general. It also stipulates that companies must publish safety plans and are prohibited from releasing a frontier model that poses an unreasonable risk of critical harm. Critical harm is defined as the death or serious injury of more than 100 people, or at least $1 billion in damages, resulting from risks such as the creation of chemical, biological, radiological, or nuclear weapons.

As the debate continues, the potential implications of the RAISE Act reflect broader concerns about the rapidly evolving landscape of artificial intelligence. With parents advocating for stricter regulations and tech companies pushing back against such measures, the outcome of this legislative effort may set a significant precedent for AI governance in the United States.


