State regulation may be the only effective measure currently available to protect Americans from a growing array of harms associated with artificial intelligence (AI). As Congress stalls on substantive AI legislation, some voices in Washington argue against allowing states to legislate on the matter, effectively asking the public to accept a status quo in which neither federal nor state authorities act.
This scenario reflects a troubling failure of leadership, especially given that states have historically acted as first responders to emerging risks facing their citizens. From consumer protection to labor law, states are often more attuned to public needs and can implement practical safeguards more rapidly than federal lawmakers. The expanding range of AI-related dangers, from fraud targeting older Americans to unsafe chatbots linked to tragic outcomes, underscores the urgent need for such state-level intervention.
As AI technologies evolve, the risks they pose have escalated significantly. Older Americans face heightened threats of AI-enabled fraud, while children—particularly young girls—are frequently targeted with nonconsensual intimate imagery. Workers face mass layoffs attributed to AI alongside opaque automated decisions about their job applications. The political landscape is also under threat, as deceptive AI-generated content endangers the integrity of democracy, particularly as the 2026 midterm elections approach.
Despite these pressing concerns, Congress has largely remained inactive, effectively ceding ground to states that are stepping in to protect their constituents. A striking 97% of Americans support regulatory measures surrounding AI, transcending party lines, with 80% opposing federal attempts to block state-level protections. This public sentiment highlights a growing recognition that the complexities of AI require localized governance, particularly for issues impacting children’s safety and privacy.
Moreover, the wealth of major AI firms cannot be overlooked. With **Nvidia** valued at over **$4 trillion**, and other tech giants such as **Apple**, **Google**, **Microsoft**, **Meta**, and **Amazon** commanding trillion-dollar valuations of their own, the argument that these entities cannot adapt to state regulations seems disingenuous. In fact, many of these companies already tailor their products to comply with the strictest state laws, particularly California's, effectively establishing a baseline that applies nationwide.
Big Tech is not only a passive subject of legislation but plays an active role in shaping it. In 2025, more than **3,500 federal lobbyists** focused on AI issues, reflecting a **265% increase** in such lobbying relationships over three years. This lobbying influence extends beyond Washington, with significant efforts at the state level, illustrated by **OpenAI’s** attempts to draft chatbot legislation aimed at protecting teenagers in California. This involvement undermines the narrative that state lawmakers are unilaterally imposing regulations on a reluctant tech industry; rather, these companies are typically involved in drafting legislation before it even reaches the floor.
Concerns that state regulation hampers innovation lack substantive evidence. Current trends indicate that investment in AI is flourishing, with data center construction rapidly expanding across the U.S. American companies dominate the global tech landscape, accounting for the majority of the world's **50 most valuable tech companies**. If regulation were truly detrimental to innovation, one would expect declining market capitalizations and reduced investment. Instead, we observe record-high valuations and robust growth in both infrastructure and lobbying expenditures.
Given the current landscape, state regulation is vital to safeguarding Americans from the increasing risks posed by AI technologies. While federal standards may eventually be established, preventing states from enacting their own protections in the meantime would grant AI companies the freedom to experiment on the public without necessary safeguards in place. States are positioned to act swiftly in the face of emerging threats, and Washington's ongoing failure to respond highlights the importance of state authority at this critical juncture.
As AI continues to evolve, the role of state governance will be crucial. Waiting for federal action is not merely inaction; it is a choice that could have dire consequences for public safety and welfare. The time for decisive action at the state level is now, as the stakes for the American public continue to rise.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health