Artificial intelligence (AI) is at a critical juncture as it grapples with the limits of scaling, a concept that has long defined its evolution. Mohammed Marikar, co-founder of Neem Capital, argues that traditional assumptions about AI performance improving with scale are faltering. Instead of continual enhancements in efficiency, AI has become increasingly capital-intensive and constrained by physical limits, revealing diminishing returns much sooner than anticipated.
Data from global projections indicates that electricity demand from data centers is set to more than double by 2030, a surge typically associated with entire industrial sectors. In the United States, the power needs of data centers are expected to rise by well over 100 percent by the end of the decade, necessitating trillions of dollars in new investments alongside significant expansions in grid capacity.
As AI systems find their way into critical sectors such as law, finance, and compliance, the stakes have risen dramatically. The UK High Court flagged concerns in June 2025 regarding the submission of filings containing fabricated case law generated by AI tools, underscoring the potential risks associated with the technology’s integration into high-stakes environments.
The implications of scaling AI are becoming contentious, particularly as reliance on these systems increases. While large language models (LLMs) excel in fluency through exposure to vast amounts of text, deeper reasoning capabilities do not scale in the same way. The next phase of AI development must prioritize understanding cause and effect, enabling systems to clarify uncertainties rather than simply generating confident—but potentially misleading—responses.
Without commensurate gains in reasoning, wider deployment creates a growing verification burden. As AI systems are rolled out more broadly, users find themselves devoting substantial time to validating machine output instead of acting on it, amplifying the risk that errors propagate quickly.
The financial implications of training advanced AI models have skyrocketed, with credible estimates suggesting that costs could exceed $1 billion for single training runs in the near future. However, training is only the initial expense. The larger financial burden lies in inference—running these models continuously while meeting real-world requirements for latency, uptime, and verification. As usage expands, the related energy consumption and costs compound, further complicating the economic landscape.
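The compounding described above can be made concrete with back-of-the-envelope arithmetic. All figures in the sketch below (per-query cost, query volume, growth rate) are hypothetical assumptions chosen for illustration, not numbers reported in the article:

```python
# Illustrative sketch: why inference, not training, can dominate lifetime cost.
# Every figure here is a hypothetical assumption for demonstration purposes.

TRAINING_COST = 1_000_000_000    # one-off training run at the $1B upper-end estimate
COST_PER_QUERY = 0.002           # assumed blended inference cost per query, in dollars
QUERIES_PER_DAY = 200_000_000    # assumed daily query volume at launch
ANNUAL_GROWTH = 0.5              # assumed 50% year-over-year usage growth

inference_total = 0.0
daily = QUERIES_PER_DAY
for year in range(5):            # five years of operation
    inference_total += daily * 365 * COST_PER_QUERY
    daily *= 1 + ANNUAL_GROWTH   # usage compounds, so inference spend compounds too

print(f"Training (one-off):  ${TRAINING_COST:,.0f}")
print(f"Inference (5 years): ${inference_total:,.0f}")
```

Under these assumed numbers, five years of inference costs roughly twice the training run, and the gap widens every year usage keeps growing.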
AI’s role in financial markets and cryptocurrency has also intensified. Systems increasingly monitor on-chain activities, analyze sentiment, and automate decision-making. However, the rapid pace of deployment, combined with the challenges of reliability, often leads to the propagation of errors. A notable example is the frequent generation of false positives in automated anti-money laundering (AML) systems, wasting resources and undermining trust in automated processes.
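The false-positive problem the article points to follows directly from base rates: when genuinely illicit transactions are rare, even a fairly accurate classifier produces alerts that are mostly innocent activity. The rates below are hypothetical assumptions for illustration, not data from any real AML system:

```python
# Illustrative base-rate arithmetic for an automated AML screen.
# All rates are hypothetical assumptions, not figures from a real deployment.

transactions = 1_000_000      # assumed daily transaction volume
base_rate = 0.001             # assume 0.1% of transactions are truly illicit
sensitivity = 0.95            # assume the model catches 95% of illicit activity
false_positive_rate = 0.02    # assume 2% of legitimate transactions get flagged

illicit = transactions * base_rate
legitimate = transactions - illicit

true_alerts = illicit * sensitivity
false_alerts = legitimate * false_positive_rate
precision = true_alerts / (true_alerts + false_alerts)

print(f"Alerts raised per day: {true_alerts + false_alerts:,.0f}")
print(f"Share that are genuine: {precision:.1%}")
```

With these assumed rates, fewer than one alert in twenty is genuine, which is why investigators' time is consumed clearing false positives rather than pursuing real cases.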
Marikar emphasizes that scaling AI without enhancing reasoning capabilities amplifies risk, particularly in sectors where credibility is non-negotiable. The traditional approach of prioritizing increased computation and data storage while leaving reasoning architecture unchanged is proving to be both economically unsustainable and unsafe.
A shift towards cognitive or neurosymbolic systems may offer a solution. These systems, which organize knowledge into interrelated concepts rather than relying on brute-force pattern matching, promise higher reasoning capabilities at lower energy and infrastructure demands. Emerging “cognitive AI” platforms are showcasing how structured reasoning can operate on local servers or edge devices, enabling users to maintain control over their knowledge.
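A minimal sketch of the "interrelated concepts" idea: rather than pattern-matching over raw text, a symbolic layer stores facts as explicit relations and derives new ones by applying rules, so every conclusion is traceable. This toy example illustrates the general neurosymbolic approach under assumed relation names; it is not any vendor's platform:

```python
# Toy symbolic reasoner: facts as (subject, relation, object) triples,
# with a transitive-closure rule for the assumed "is_a" relation.

facts = {
    ("gpt_style_llm", "is_a", "language_model"),
    ("language_model", "is_a", "ai_system"),
    ("ai_system", "is_a", "software"),
}

def derive_transitive(facts, relation="is_a"):
    """Repeatedly apply (a R b) and (b R c) => (a R c) until nothing new appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(derived):
            for (b2, r2, c) in list(derived):
                if r1 == r2 == relation and b == b2:
                    new = (a, relation, c)
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

closure = derive_transitive(facts)
# The derived fact is fully auditable: it follows from two stored triples,
# not from a statistical pattern that must be re-verified by a human.
print(("gpt_style_llm", "is_a", "software") in closure)  # prints True
```

The appeal for verification is that a derived answer can be replayed back to the explicit facts that produced it, which is cheaper to audit than a free-text model output.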
Although cognitive systems are more complex to design and may underperform in open-ended tasks, they present a more sustainable model for AI development. By reusing reasoning rather than rediscovering it through extensive computation, these systems can reduce costs and simplify verification processes.
The decentralization of AI development is also becoming a focal point, as some platforms explore blockchain technology to allow individuals and corporations to contribute data, models, and computing resources. This approach mitigates concentration risks and aligns AI deployment with local needs rather than solely responding to global demands.
As AI stands at this inflection point, the industry must reconsider its path forward. The focus needs to shift from mere scaling to investing in architectures that enhance reliability and reasoning capabilities. The future of AI will depend on whether stakeholders choose to prioritize intelligent design over sheer size, which could redefine the economic landscape of this rapidly evolving sector.