OpenAI, the developer of ChatGPT, is facing backlash from parents and child safety advocates over its funding of the Parents & Kids Safe AI Coalition, a group focused on promoting child safety in artificial intelligence. According to a report by the San Francisco Standard, several coalition members were unaware of OpenAI’s financial support, raising transparency concerns as the company advocates for child safety regulations in AI.
The coalition, which reached out to various child safety organizations in March, sought support for policy initiatives including age verification for online services and restrictions on advertisements aimed at children. However, many coalition members indicated that communications did not clearly disclose OpenAI’s involvement. “I don’t want to say they’re outright lying, but they’re sending emails that are pretty misleading,” one leader stated, expressing disappointment over the lack of transparency. This issue has led at least two members to withdraw from the coalition.
One nonprofit leader said the situation left “a very grimy feeling,” pointing to concerns about the coalition’s outreach strategy. The coalition’s proposals closely align with child safety legislation that OpenAI is currently backing in California, where the company is actively seeking support as states increasingly consider regulations governing minors’ use of AI.
In a statement provided to the San Francisco Standard, members of the coalition, along with an OpenAI executive, underscored their commitment to advancing strong child AI safety laws nationwide. However, the controversy has drawn attention to the influence of major technology firms in policy formation. Some advocacy groups chose to abstain from joining the coalition precisely because of OpenAI’s participation. Josh Golin of FairPlay commented, “I want them to get out of the way and let advocates and parents… pass the legislation they think is best for kids.”
This scrutiny comes as OpenAI faces growing legal and regulatory pressure over the use of its products by younger audiences. The debate over how best to regulate AI technology, particularly with respect to children’s safety, remains ongoing in the United States. Critics argue that tech companies’ involvement in such coalitions could undermine the integrity of advocacy efforts aimed at protecting minors.
As discussions around AI regulation intensify, the balance between innovation and child safety will likely continue to be a focal point of contention. OpenAI’s funding of the coalition illustrates the complex relationship between technology companies and the advocacy organizations that aim to ensure safe usage of their products. The future of AI policy, especially concerning its impact on children, will need to navigate these intricate dynamics.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health