China’s Leading Tech Groups Train AI Models Overseas to Bypass U.S. Chip Restrictions
China’s major technology companies are increasingly shifting the training of their advanced artificial intelligence models to data centers overseas, seeking access to high-performance Nvidia chips while navigating tightened U.S. export controls. Alibaba and ByteDance are among the groups expanding their offshore computing operations in Southeast Asia, according to sources cited by the Financial Times.
Access to Cutting-Edge Hardware Within the Bounds of Current Regulations
Washington’s April decision to restrict sales of the Nvidia H20—an accelerator designed specifically for the Chinese market—has pushed companies to relocate part of their training workloads abroad. Data centers in Singapore and Malaysia, operated by non-Chinese entities, have emerged as critical hubs thanks to the availability of top-tier Nvidia hardware comparable to that used by U.S. giants such as Google, Meta and Microsoft.
“It is an obvious choice to come here. You need the best chips to train the most advanced models, and everything is fully compliant with the law,” said an operator of a Singapore-based data center, highlighting the region’s strategic role in providing both technological capability and legal cover.
The Biden-era “reach-through rule”—designed to prevent Chinese companies from accessing restricted U.S. chips even through third-party facilities—was rescinded by President Donald Trump earlier this year, clearing the path for the current surge in offshore training.
Qwen and Doubao: China’s Rapidly Ascending AI Models
Over the past year, Alibaba’s Qwen models and ByteDance’s Doubao models have climbed global benchmark rankings, positioning themselves among the world’s most competitive large language models (LLMs). Qwen, in particular, has seen widespread international adoption due to its “open” model availability, which allows developers worldwide to integrate and adapt it freely.
The rapid scaling of these LLMs demands substantial computational power. Consequently, access to state-of-the-art Nvidia clusters abroad has become essential for Chinese tech groups seeking to remain globally relevant in the accelerating AI race.
DeepSeek: The Notable Exception Relying on Domestic Compute
While most companies rely heavily on offshore compute, DeepSeek—developer of a globally recognized, high-performance, low-cost AI model—continues to train domestically. The company reportedly secured a significant inventory of Nvidia chips before U.S. export bans took effect and collaborates closely with Chinese semiconductor manufacturers led by Huawei.
Huawei maintains a dedicated team of engineers at DeepSeek’s Hangzhou headquarters. The partnership is viewed as a strategic effort to develop the next generation of China-made AI chips and accelerate national progress in reducing reliance on U.S. semiconductor technology.
Regional Expansion and the Limits of China’s Data Regulations
Beyond model training, companies such as Alibaba and ByteDance use Southeast Asian data centers to serve global cloud customers, amplifying their international footprint and strengthening positions in high-growth markets across the Middle East, Europe and Africa.
However, China’s strict data sovereignty laws impose a key limitation: private user data cannot be transferred abroad. As a result, custom model fine-tuning based on sensitive domestic datasets must be carried out within China. Industry experts note that this legal framework forces a clear operational separation—general model training may occur overseas, but client-specific training remains onshore.
A New Geography for Global AI Infrastructure
The relocation of AI training to Southeast Asia illustrates how geopolitical pressures are reshaping global computing supply chains. Singapore and Malaysia are emerging as critical AI infrastructure hubs, attracting significant investment from Chinese firms seeking scalable compute capacity.
Amid intensifying U.S.–China technological competition, the global semiconductor and AI industries are undergoing rapid realignment. The offshoring of LLM training is one visible manifestation of the broader restructuring that will define the next phase of the AI economy.