As lawmakers grapple with the evolving landscape of artificial intelligence (AI), more than 300 AI-related bills have been introduced in the U.S. Congress and roughly 1,200 in state legislatures. Legislating during a technological transition is urgent because the rules that protect the public interest should be set through regulatory frameworks rather than by corporations acting unilaterally. The challenge is that lawmakers tend to write rules around the technology in front of them, which can strip away the adaptability a rapidly innovating sector requires.
The last significant attempt by Congress to legislate amid technological upheaval was the Telecommunications Act of 1996, which updated the Communications Act of 1934. A retrospective look at the 1996 Act offers valuable lessons for today's debates over national policies to address the disruptive impact of AI. That era saw a seismic shift from analog to digital technology, one that dismantled established business categories and rewrote market dynamics. Rather than trying to predict where the technology would go, the Act concentrated on market structure, laying the groundwork for competition.
The Federal Communications Commission (FCC) was tasked with addressing potential bottlenecks to competition that could be exploited by dominant players, leading to over 100 rulemakings to ensure a competitive landscape. As AI technologies reshape the economy and society, we find ourselves at a similar crossroads, where a thoughtful examination of the past could inform effective governance in the present.
The 1996 Moment: When Technology Collapsed the Categories
Before 1996, communications markets were regulated in separate silos: telephony, broadcasting, and cable each followed its own regulatory framework. Digitization blurred those boundaries, letting voice travel over many kinds of networks and video reach consumers through multiple delivery media. The 1996 Act responded to this convergence by opening formerly distinct sectors to one another's competition, allowing traditional phone companies to offer video services and cable companies to offer telephony.
This legislative gamble recognized that digitization had created competitive opportunities across categories. The lesson carries over to AI governance today: technology policy must account not only for the innovations themselves but also for the power dynamics that determine who controls the market.
Convergence, Consolidation, and Chokepoints
Three decades after the Telecom Act, the pull toward consolidation is unmistakable, with the largest firms dominating their markets. The Act's pro-competition aspirations ran up against economies of scale that favored consolidation, eroding localism and concentrating media ownership. The FCC, meanwhile, shifted from a body policing monopoly behavior to one promoting competition across converging markets, a transition that underscores the need for ongoing vigilance in keeping markets competitive.
The '96 Act did not address the internet's implications for open markets, but the platform economy that followed showed how much openness can deliver. Companies like Google and Facebook thrived by building on open standards, then, as they matured, erected closed ecosystems that stifled the competition they had once benefited from. That pattern, openness giving way to consolidation, carries a warning for AI, where the same potential for monopolistic behavior is present.
AI did not emerge in isolation; it grew out of the platform economy's need to predict user behavior and keep people engaged. Machine learning matured inside online platforms, and the AI business model remains rooted in anticipating consumer behavior, a dynamic that has only accelerated with the arrival of foundation models.
The New Stack: From Last Mile to Models
Control over the "last mile" was the critical chokepoint in telecommunications. In AI, chokepoints take the form of economic structures that are less visible but just as exclusionary. The AI stack is built from interdependent layers: the chips that run the algorithms, the cloud infrastructure that supplies computing capacity, the models trained on that capacity, and the applications that deliver them to users. Each layer offers room for innovation, and each is a potential bottleneck that dominant firms can exploit.
As the stack evolves, companies are shifting strategy, prioritizing applications over foundation models. With models commoditizing, firms are looking for profit at the application layer, embedding AI capabilities into software that deepens user engagement. This shift raises the prospect of stack capture, in which a few dominant players control essential inputs and distribution channels, limiting competition.
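To make the stack-capture concern concrete, here is a minimal, purely illustrative sketch in Python. It treats the stack as four layers (chips, cloud compute, foundation models, applications) and flags any firm that spans several of them as a candidate chokepoint. The firm names, the market map, and the two-layer threshold are hypothetical assumptions for illustration, not facts reported in this article.

```python
# Illustrative sketch of the "stack capture" framing: model the AI stack as
# layers, then flag firms that span several layers, since vertical integration
# across chips, cloud, models, and applications is where chokepoints can form.
# All firm names and the market map below are hypothetical.

from collections import defaultdict

# The four layers discussed above, from silicon up to end-user software.
STACK_LAYERS = ["chips", "cloud_compute", "foundation_models", "applications"]

# Hypothetical market map: which firms operate at each layer.
market_map = {
    "chips": {"FirmA"},
    "cloud_compute": {"FirmB", "FirmC"},
    "foundation_models": {"FirmB", "FirmD"},
    "applications": {"FirmB", "FirmD", "FirmE"},
}

def layers_per_firm(market: dict[str, set[str]]) -> dict[str, list[str]]:
    """Invert the market map: for each firm, list the stack layers it occupies."""
    presence = defaultdict(list)
    for layer in STACK_LAYERS:
        for firm in market.get(layer, set()):
            presence[firm].append(layer)
    return dict(presence)

def potential_chokepoints(market: dict[str, set[str]], threshold: int = 2) -> dict[str, list[str]]:
    """Flag firms present in `threshold` or more layers as candidates for stack capture."""
    return {firm: layers for firm, layers in layers_per_firm(market).items()
            if len(layers) >= threshold}

if __name__ == "__main__":
    for firm, layers in potential_chokepoints(market_map).items():
        print(f"{firm} spans {len(layers)} layers: {', '.join(layers)}")
```

The point is the framing rather than the code: the question a regulator would ask is not which model is best, but which firms sit across multiple layers of the stack at once.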
The Lesson for AI: Regulate Behaviors
The Telecommunications Act's lesson is that regulation should concentrate on promoting competition and policing anti-competitive behavior rather than on the technology itself. For AI, that means making sure innovative applications can proliferate, raising productivity and strengthening international competitiveness. China's emphasis on diffusing AI applications throughout its economy underlines the strategic stakes.
Moving forward, effective AI governance may require a two-step framework: first, ensuring fair access to essential inputs such as computing resources and model licensing; second, ongoing oversight of AI's societal impacts to ensure accountability and consumer protection.
Just as the Telecom Act put competition at the center of a technological transition, AI governance must recognize that such transitions are not merely technical challenges but contests over power. Left unchecked, the concentration of AI capabilities could undermine innovation, national security, and the democratic fabric of society.
The Telecommunications Act is a timely reminder that, as we navigate the complexities of AI regulation, the central questions are who controls essential technological inputs and how that power is used. Get those wrong, and we risk a landscape in which innovation is stifled and the benefits of the technology are not equitably distributed.