
AI Regulation

States Must Act Now to Regulate AI as Federal Efforts Stall, Warns J.B. Branch

As Congress stalls on AI regulation, 97% of Americans support state-level protections against rising threats, including AI-enabled fraud and unsafe technologies.

State regulation may be the only effective measure currently available to protect Americans from a growing array of harms associated with artificial intelligence (AI). As Congress stalls on substantive AI legislation, some voices in Washington argue against allowing states to legislate on the matter, effectively asking the public to accept a status quo in which neither federal nor state authorities act.

This scenario reflects a troubling failure of leadership, especially given that states have historically acted as first responders to emerging risks facing their citizens. From consumer protection to labor law, states are often more attuned to public needs and can implement practical safeguards faster than federal lawmakers. The growing array of AI-related harms, from fraud targeting older Americans to unsafe chatbots linked to tragic outcomes, underscores the urgency of state-level intervention.

As AI technologies evolve, the risks they pose have escalated. Older Americans face heightened threats of AI-enabled fraud, while children, particularly young girls, are targeted with nonconsensual intimate imagery. Workers face AI-driven layoffs and opaque automated decisions on their job applications. The political landscape is also at risk: deceptive AI-generated content threatens the integrity of elections as the 2026 midterms approach.

Despite these pressing concerns, Congress has largely remained inactive, ceding ground to states stepping in to protect their constituents. A striking 97% of Americans support AI regulation, a view that crosses party lines, and 80% oppose federal attempts to block state-level protections. This sentiment reflects a growing recognition that AI's complexities require localized governance, particularly on issues affecting children's safety and privacy.

Moreover, the wealth of the major AI firms cannot be overlooked. With **Nvidia** valued at over **$4 trillion** and fellow giants **Apple**, **Google**, **Microsoft**, **Meta**, and **Amazon** each worth well over a trillion dollars, the argument that these companies cannot adapt to state regulation is disingenuous. Many of them already tailor their products to comply with the strictest state laws, particularly California's, effectively setting a baseline that applies nationwide.

Big Tech is not merely a passive subject of legislation; it plays an active role in shaping it. In 2025, more than **3,500 federal lobbyists** worked on AI issues, a **265% increase** in such lobbying relationships over three years. That influence extends to the state level as well, as illustrated by **OpenAI's** efforts to help draft California chatbot legislation aimed at protecting teenagers. This involvement undermines the narrative that state lawmakers are unilaterally imposing regulations on a reluctant tech industry; these companies are typically involved in drafting legislation before it ever reaches the floor.

Concerns that state regulation will hamper innovation lack substantive evidence. Investment in AI is flourishing, data center construction is expanding rapidly across the U.S., and American firms account for the majority of the world's **50 most valuable tech companies**. If regulation were truly detrimental to innovation, we would expect declining market capitalizations and reduced investment. Instead, we see record-high valuations and robust growth in both infrastructure and lobbying expenditure.

Given the current landscape, state regulation is vital to safeguarding Americans from the increasing risks posed by AI technologies. While federal standards may eventually be established, preventing states from enacting their own protections in the meantime would leave AI companies free to experiment on the public without safeguards. The federal system was built to let states respond swiftly to emerging threats, and Washington's ongoing failure to act underscores the importance of state authority at this critical juncture.

As AI continues to evolve, the role of state governance will be crucial. Waiting for federal action is not neutral; it is a choice with potentially dire implications for public safety and welfare. The time for decisive action at the state level is now, as the stakes for the American public continue to rise.

Written By: AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.