AI’s Unregulated Future: A Growing Concern
At this year’s Techonomy conference, discussions around artificial intelligence (AI) revealed a stark consensus: meaningful federal regulation is unlikely in the near future. This regulatory void is not theoretical; it is actively shaping how the industry approaches development, investment, and competition.
Amba Kak, executive director of the AI Now Institute, emphasized a fundamental shift in the conversation. “Rather than have trustworthy AI or be asked to trust companies more,” Kak stated, “we create a market environment that doesn’t require us to trust companies.” She framed the central issue as one of agency, arguing that the real crisis in the AI market is whether the public retains any effective control over the systems increasingly governing their digital lives. The federal government, she noted, has effectively abandoned its role in safeguarding that agency.
This dynamic became evident over the summer, when Congress nearly passed a moratorium that would have barred states from enacting their own AI regulations. Justin Hendrix, CEO of Tech Policy Press, pointed out that the measure came close to passage before being abruptly dropped. The near miss underscores Washington’s stance: prioritize rapid innovation over regulation and trust market mechanisms to resolve whatever problems emerge.
Hendrix also referenced the White House’s commitment to “achieve and maintain unquestioned and unchallenged global technological dominance,” characterizing it as a geopolitical mission rather than a framework for responsible governance. In this context, domestic regulations appear more like constraints on innovation than protective measures.
Kak elaborated on what she and her colleagues call the “AI bailout,” describing the federal government’s initiatives as “premature and exceptional…red carpet treatment” for the industry. That treatment includes opening federal land to new data centers and fast-tracking permits for AI development, alongside financial incentives for foreign governments to adopt American AI technology.
With the federal government stepping back, states have emerged as the primary battleground for AI governance. Contrary to common assumptions, the regulatory landscape does not split neatly along partisan lines. Kak noted that Texas has implemented a Responsible AI framework comparable to those in California and Colorado, reflecting bipartisan interest in addressing AI-related harms. Those efforts, however, face equally bipartisan pressure from technology lobbyists seeking to limit how far the rules go.
Hendrix offered an illustrative example: after a New York assembly member backed a transparency bill, a political action committee funded by venture capitalist Marc Andreessen threatened to deploy part of its $100 million war chest against him. The message from the industry appears to be that no form of regulation is acceptable.
This environment has significant implications for innovation and market power. Hendrix put it succinctly: “AI is perhaps the greatest technology ever invented for concentrating power.” The companies that dominate cloud infrastructure also control the leading AI models and the consumer interfaces built on them, a self-reinforcing market structure. Kak called this concentration “one of the most toxic structural elements of the AI market” and advocated structural separation as a remedy, a strategy applied in the past to railroads and telecommunications.
The regulatory debate frequently turns on comparisons with China, an argument Kak described as one of the “great successes of the big tech lobbying machine.” Yet the narrative that U.S. AI leadership hinges on an absence of regulation is beginning to falter. Models like DeepSeek show that efficiency can undercut established scale advantages, suggesting that American firms may be leaning more on government support than on competing on the merits.
In contrast, China’s AI action plan adopts a “more open and conciliatory” approach, implying that the geopolitical landscape may not be as zero-sum as often portrayed.
The consequences of this regulatory gap are most visible in their effects on children, a group already shaped by a previous generation of unregulated platforms. Kak pointed out that market incentives are pushing AI firms toward increasingly questionable practices, such as promoting “age-gated erotica chatbots” to maximize revenue. “This is an obvious sign of an industry…having to prove a revenue case…whatever it takes,” she noted, adding that children stand to lose the most.
Hendrix urged attendees to read the lawsuits involving minors harmed by AI products and to reflect on the industry’s accountability. “Take a hard look at what these men have built… and then ask yourself if you want to be in business with them,” he said.
Over the next three years, AI development in the United States will continue to operate in a largely unregulated environment, with states attempting to fill the gaps left by federal inaction. Lobbyists will keep testing the limits of what regulation is politically possible, while a public already reshaped by the last wave of digital platforms absorbs the consequences. Deferring regulatory decisions is itself a choice with its own trajectory, and it raises an urgent question: will society recognize the stakes before the next generation of AI systems sets the terms?