A recent survey conducted by Echelon Insights on behalf of the National Parents Union finds that 80% of parents want increased safeguards around artificial intelligence (AI) tools used by their children. The survey, fielded February 12-18, drew responses from 1,511 parents of K-12 public school students. Notably, while 56% of parents believe their children are using generative AI chatbots, there is a strong call for stricter regulations governing their use.
The survey highlights specific parental concerns: 86% want AI chatbots to issue pop-up warnings before exposing children to sensitive subjects, such as violence or self-harm. Additionally, 85% support measures to alert parents if their child engages in discussions about harmful or illegal activities, and 79% advocate for parental permission before minors can access these tools. These findings come at a time when the federal government is promoting AI expansion with minimal oversight, including two executive orders signed by President Donald Trump aimed at integrating AI into educational settings.
In legislative developments, the House Energy and Commerce Committee has advanced three bills focused on protecting minors’ data privacy and requiring online platforms to implement safeguards against harmful activities. “Empowering parents to better protect their children—especially amid the near-constant barrage of digital threats—remains one of our most solemn and important responsibilities,” stated Rep. Gus Bilirakis, R-FL, a committee member.
However, parents express concerns regarding one of the proposed bills, H.R. 7757, known as the Kids Internet and Digital Safety (KIDS) Act. This legislation mandates online platforms to offer parental tools and limit addictive design features. Critics argue it contains loopholes, allowing tech companies to avoid legal responsibility for understanding their user demographics and failing to establish clear guidelines for protecting young users. “Parents know exactly what’s at stake,” Keri Rodrigues, president of the National Parents Union, commented. “What H.R. 7757 actually does is let tech companies write their own rules, strip states of the power to hold them accountable, and call it child safety.”
Further insight from the survey indicates that 47% of parents feel their child’s school has not provided adequate information regarding AI policies, while only 37% have received any communication on the subject. Additionally, 57% of parents reported not being asked for input on how AI is used in educational settings. The survey sample spanned the political spectrum, with 31% of respondents identifying as very or somewhat conservative, 24% as very or somewhat liberal, and 40% as moderate. A majority, 52%, sees both benefits and downsides to AI tools in K-12 education.
Parents also show interest in becoming involved in school AI policy: 40% say they feel informed enough to participate in decision-making, while another 39% want to be involved but seek a greater understanding of the technology itself. Some schools have made efforts to engage parents in these discussions. One Massachusetts high school, for example, organized a parents’ night to educate families about its AI policy, an effort that ultimately grew into a district-wide initiative.
Concerns about student data privacy and the practices of educational technology companies are widespread among parents, many of whom see a pressing need for greater transparency about what data is collected from students and how it is used. A separate survey by Count on Mothers, which polled 2,290 U.S. mothers with children under 21, found that 39% of respondents were unaware that their children’s data was being collected by technology tools or did not understand the data collection process. Another 41% wanted to stay informed about data collection but acknowledged gaps in their knowledge, while only 20% said they understood the privacy risks associated with AI tools and how to safeguard their child’s data.
Rodrigues further criticized the KIDS Act, stating, “This bill does not protect our kids. It protects the companies that are hurting them. It guts the state laws that are actually working. It kills the lawsuits that parents have filed.” As discussions around AI policies continue, the demand for more robust protections for children in digital environments persists, signaling a growing concern among parents about the intersection of technology and their children’s safety.