Investor attention is shifting toward emerging hardware dynamics in Asia as the next phase of the technology revolution unfolds. A report from asset manager Ninety One highlights that while the focus remains on the so-called “Magnificent Seven” tech giants, the critical infrastructure supporting artificial intelligence (AI) is increasingly being developed by companies in Asia.
The trepidation surrounding OpenAI’s ambitious $500 billion Stargate project, announced in January, has given way to optimism following new partnerships with major players like AMD, Nvidia, and Oracle. These alliances point to more than $1 trillion in potential infrastructure investment before the decade ends, prompting debate over whether the current investment cycle is sustainable.
Ninety One posits that rather than a dramatic downturn, the sector may experience a modest correction that recalibrates risk and resets unit economics, paving the way for a renewed leg of AI investment. Central to this next wave are the hardware components — logic, memory, networking, and power systems — that make large-scale AI operations feasible.
These technologies are primarily produced by a group dubbed the “Secret Seven,” whose market valuations have not yet aligned with their pivotal roles in the AI ecosystem. This group supplies components across the AI stack and trades at earnings multiples considerably lower than their U.S. counterparts.
While leading AI models are often developed in the United States, the hardware that powers them is manufactured in Asia. Countries like Taiwan, South Korea, and parts of Southeast Asia form a concentrated manufacturing cluster bolstered by exceptional engineering talent, robust supplier networks, and significant research and development activity.
Three companies sit at the heart of this hardware stack and set the pace of AI advancement. Taiwan Semiconductor Manufacturing Company (TSMC) sets the benchmark for computing power. As the largest manufacturer of logic semiconductors, TSMC fabricates the critical chips at the core of today’s AI accelerators. Its success stems from a neutral foundry model that fosters trust among clients, supported by a culture emphasizing confidentiality and process discipline.
Meanwhile, SK Hynix manufactures high-bandwidth memory (HBM) essential for AI throughput, strategically aligning with OpenAI’s Stargate project through partnerships with Samsung. The memory required for this initiative could exceed the industry’s current capacity, with future production already committed under long-term contracts.
Samsung contributes significantly to this landscape by offering both HBM and various types of memory, including dynamic random-access memory (DRAM) and NAND flash memory. The market’s tightening has prompted consumer-facing companies like Xiaomi to publicly acknowledge that rising memory prices, driven by AI infrastructure demand, are escalating device costs. According to Bernstein Research, DRAM prices have more than doubled since early 2025, a notable shift in an industry typically characterized by declining costs.
The focus on AI often centers on chip technology, but even the most advanced accelerators are hindered by inadequate data movement and power stability. Accton plays a crucial role in this aspect, providing high-speed switches that connect thousands of GPUs and custom accelerators within hyperscale data centers. The industry is transitioning to faster networking speeds, with Accton uniquely positioned to meet these evolving demands.
Power delivery and cooling remain critical challenges for AI servers, which draw vast amounts of electricity and generate significant heat. Delta Electronics has established expertise in systems that address these requirements and has adapted rapidly to rising power demands, as evidenced by its swift re-engineering in response to Nvidia’s increasing server power budgets.
ASE contributes to the ecosystem by integrating GPUs and HBM to function cohesively, collaborating with firms like Nvidia and AMD on next-generation packaging solutions. In contrast, Anji Microelectronics operates upstream, supplying essential materials for advanced-node fabrication, helping to enhance China’s domestic manufacturing capacity.
As large cloud providers design their own accelerators, such as Amazon’s Trainium and Google’s TPUs, the question arises whether existing manufacturers will be sidelined. However, custom chips still rely on established manufacturing and memory frameworks, necessitating collaboration with companies like TSMC, SK Hynix, and others to ensure performance at scale.
Opportunities in these emerging-market companies are underscored by a significant valuation mismatch. Positioned at the physical limits of the AI cycle, their strategic relevance is not yet reflected in their market pricing, creating a potentially lucrative avenue for investors to explore as the industry continues to evolve.