In 2025, growing discontent with generative artificial intelligence (GenAI) made its way into the lexicon, with dictionaries naming "slop" or "AI slop" word of the year to describe the low-quality content churned out by AI systems. Merriam-Webster noted that "slop oozes into everything," an observation that coincided with increased chatter about a potential collapse of an AI bubble. Despite the concerns, tech companies continued to innovate, as evidenced by Google's release of its new Gemini 3 model, which reportedly prompted OpenAI to declare a "code red" and accelerate improvements to ChatGPT.
As the dialogue around peak data intensifies, industry leaders caution that 2026 may usher in a new wave of AI technology. The shift is driven not by a lack of data, which experts say remains ample, but by the difficulty of accessing it amid regulatory hurdles and proprietary rights. World models, which learn to predict real-world dynamics from video analysis and simulations, are poised to become significant players in this evolving landscape. Unlike traditional large language models (LLMs), which primarily focus on predicting text, world models have the potential to forecast events and mimic real-world interactions.
These models can create "digital twins" of environments, using real-time data to simulate various outcomes without pre-programmed constraints. That capability could redefine applications across fields like robotics and video gaming, as noted by Boston Dynamics CEO Robert Playter, who highlighted the critical role of AI in developing the company's advanced robots. The race to innovate in this area is gaining momentum, with both Google and Meta unveiling their own world models for enhanced realism in robotics and video applications. Notably, AI pioneer Yann LeCun has announced plans to launch a world model startup after leaving Meta, while Fei-Fei Li's company, World Labs, shipped its first release, Marble, in 2025. Chinese firms such as Tencent are also exploring the technology.
Europe, by contrast, appears set on a different trajectory, potentially favoring smaller language models that offer practical solutions for low-powered devices rather than the large-scale models dominating the U.S. market. Despite their name, these smaller models can handle robust text generation, summarization, question answering, and translation, and they carry a lighter environmental footprint thanks to lower energy consumption. The shift could prove economically advantageous as apprehension about a collapsing AI bubble grows. Analysts note that U.S. tech companies, while currently attracting significant investment, are heavily focused on constructing expansive data centers, with firms like OpenAI, xAI, Meta, and Google leading the charge.
Max von Thun, director of Europe and transatlantic partnerships at the Open Markets Institute, said that concerns over the economic viability and societal benefits of large-scale AI are likely to intensify. He suggested that European governments may increasingly focus on building local AI capabilities to avoid reliance on American technologies, especially given fears of political manipulation through technological dependencies. These pressures could push Europe toward smaller, more sustainable AI models trained on high-quality data.
Meanwhile, the discourse surrounding AI has become fraught with ethical concerns. Claims of "AI psychosis" surfaced prominently in 2025, particularly after a lawsuit alleged that OpenAI's ChatGPT acted inappropriately with a vulnerable user. OpenAI disputed the claims, arguing that the technology should not have been accessed without parental guidance and that users were advised against bypassing safety measures. Such cases highlight the need for stringent ethical standards as AI models grow more sophisticated.
Experts warn that increased capabilities in AI systems could lead to unintended harm, especially for vulnerable populations. MIT professor Max Tegmark, president of the Future of Life Institute, noted that engineers may not fully understand the potential consequences of their systems, raising alarms about the ethical implications of powerful AI agents that operate autonomously. For now, AI can assist in planning tasks but still requires human intervention for execution; the future, however, may bring a shift toward more independent AI operations.
As public sentiment evolves, 2026 may bring significant societal debates over AI regulation. In the U.S., some are pushing back against uncontrolled AI development, particularly in light of an executive order from President Trump aimed at preventing states from imposing their own AI rules; the order's backers argue that a fragmented, state-by-state approach to governance would hinder innovation. Conversely, a petition calling for a more cautious approach to AI development has garnered support from a diverse group of signatories, including political figures and tech leaders.
The growing resistance reflects apprehension that unchecked AI advancements could displace workers and lead to economic instability. Tegmark cautioned that failing to address such concerns could provoke a backlash against technology, stifling beneficial innovations in sectors like healthcare. As 2026 approaches, the intersection of technological advancement and public sentiment promises to shape the future of AI, underscoring the need for a balanced approach that prioritizes safety and ethical considerations.
See also
Japan Supreme Court Launches AI Pilot for Civil Trials in January 2026, Targeting Evidence Organization
California’s New AI Regulations Start in 2026: Key Protections for Minors and Transparency Measures
AI Governance and Data Privacy: 5 Key Tech Regulation Trends to Watch in 2026
Texas Implements 33 New Laws in 2026, Including Comprehensive AI Regulation and Immigration Changes
India’s New AI and DPDP Regulations Set to Reshape Big Tech Landscape by 2026