Apple is just the latest in a growing line of competitors to Nvidia, the world’s leading maker of artificial intelligence (AI) processors, but China is the only country that can compete with the US across the technology market. With market forces supercharged by US-led chip bans and sanctions, China competes out of necessity.
Apple’s share price rose more than 7% to a new all-time high on June 11, the day after CEO Tim Cook presented the company’s AI strategy at its Worldwide Developers Conference.
Nvidia’s share price dropped 0.7% on June 11, a reminder to investors and other observers that while the company’s sales and profits are likely to keep growing, its exceptionally high market share and stock market valuation are both likely to decline in the future.
Nvidia is competing with a growing number of companies around the world for market share and for customers who prefer not to depend on a dominant supplier. In China, where US government restrictions have hampered Nvidia’s ability to compete with Huawei and other local AI device makers, the situation is different but no less serious.
Apple is integrating ChatGPT from OpenAI with a more advanced Siri digital assistant. It will also let users create their own emoji graphics, called Genmoji, to match their “vibe,” as San Jose’s Mercury News puts it.
“Users will also be able to create personalized photos,” the article continues, “such as taking a picture of your baby and making it into a stylized, cartoon-y version, adding a superhero cape.” Other “Apple Intelligence” services may follow. “It’s the next big step for Apple,” said Cook.
This should boost the appeal of the new iPhones, iPads and Macs, but it is a far cry from Nvidia’s top-of-the-line Hopper, Blackwell and upcoming Rubin AI processors, which are or will be used to create large language models and digital twins of complex industrial machinery and workflows.
Analysts estimate that Nvidia currently holds 80% or more of the AI processor market. AMD (another American integrated circuit design company), Intel and many other competitors, including Google, Amazon Web Services, IBM and AI ventures SambaNova, Cerebras and Groq, are also positioning for a share of the market.
Barron’s reports that Microsoft, Meta and Oracle purchase 15 % to 25 % of their AI processors from AMD, and most of the rest from Nvidia. AMD’s Instinct MI300 AI accelerator offers a viable alternative to Nvidia’s H100 GPU. Both devices are undergoing upgrades.
In April, Intel released its Gaudi 3 AI accelerator, which it claims delivers “50% on average better inference and 40% on average better power efficiency than Nvidia H100 – at a fraction of the cost”.
Gaudi 3 is available to computer makers Dell, HP, Supermicro and Lenovo, as well as to customers including Bosch, IBM and Bharti Airtel, an Indian telecom services provider.
In an effort to speed up the deployment of secure generative AI systems, Intel has also announced that it will collaborate with SAP, Red Hat, VMware, and other software companies to create an open platform for enterprise AI.
More seriously for Nvidia, Intel, Qualcomm, Google Cloud, Arm, Samsung and other companies have formed the Unified Acceleration Foundation (UXL) to develop an open-source, open-standard AI accelerator software ecosystem as an alternative to Nvidia’s currently dominant proprietary Compute Unified Device Architecture (CUDA) computing platform.
UXL states that “anyone can join,” and China’s Xiangdixian Computing Technology is already a member. This places it in the same category as the RISC-V open-standard IC design architecture: an opportunity for China but a potential target for US politicians.
Nvidia customers Apple, Meta and Microsoft Azure are also getting into the act: Apple with its M4 SoC (System-on-Chip), which powers the new iPad Pro; Meta with its MTIA (Meta Training and Inference Accelerator), now in its second iteration; and Microsoft Azure with its Maia 100 AI Accelerator. Google and Amazon, which design their own chips as well, remain heavy users of Nvidia processors.
In China, AI processors are designed by tech giants Alibaba, Baidu, Huawei and Tencent, and by smaller specialists including Bitmain, Cambricon, Enflame, Inspur, MetaX and Xiangdixian Computing Technology. Their main problem, apart from a relative lack of experience, is that US sanctions prevent their advanced designs from being turned into chips by TSMC or other non-Chinese foundries.
Although there are more than 40 semiconductor foundries in China, even the biggest and most technologically advanced among them do not have access to EUV lithography equipment, making it impossible to mass-produce chips at process nodes smaller than 7nm.
Neither does Huawei, which is building its own in-house semiconductor production capability. Outside China, TSMC, Samsung and Intel are moving from 5nm to 3nm and soon to 2nm.
But sanctions cut both ways. The US government has banned the sale of Nvidia’s H100 and other advanced AI processors in China, leaving Chinese customers reliant on the dumbed-down H20.
Under the US Commerce Department’s restrictions, Huawei’s Ascend 910B AI processor has been taking market share from Nvidia on a combination of performance, price and concern that sanctions might be tightened even further.
These worries are now being realized as the Biden administration reportedly intends to impose a cap on China’s access to gate-all-around transistor architecture and high-bandwidth memory.
Both technologies are essential to building the most advanced AI processors. Alibaba, Baidu and Tencent used Nvidia processors before sanctions were imposed; now they are customers of Huawei. Last February, Nvidia named Huawei as one of its top competitors.
In an ironic twist, Enflame and MetaX have reportedly designed dumbed-down versions of their own processors that can be produced by TSMC. But the Chinese are putting most of their resources into developing their own equipment industry and making the best use of the foreign equipment they can still access.
Huawei and SMIC are currently using self-aligned quadruple patterning to create 5nm and possibly even 3nm chips, making up for their lack of EUV lithography equipment.
Huawei has also created its own AI computing platform. Although it is less developed and has a much smaller user base than Nvidia’s CUDA, it existed only as a concept five or six years ago. The same is true of China’s entire AI industry.
On the large language model front, SemiAnalysis’ Dylan Patel wrote in May that China’s open-source DeepSeek generative AI model is both significantly cheaper than and better than Meta’s latest Llama 3 series model. “Even more interesting,” he added, “is the novel architecture DeepSeek has brought to market. They did not copy what Western businesses did. There are brand new innovations.”
The Financial Times reports that DeepSeek costs less to run than OpenAI’s GPT-4, according to Andrew Carr, chief scientist at US generative animation company Cartwheel.
The Text and Image GEnerative Research (TIGER) lab at the University of Waterloo in Ontario ranks DeepSeek-V2 seventh out of ten large language models, with an overall score of 54.8%. OpenAI’s GPT-4o ranks first at 72.6%. Yi-Large from China’s 01.AI scores 57.5% and Alibaba’s Qwen1.5-72B 52.6%. TIGER Lab’s own MAmmoTH ranks ninth at 50.4%.
Kai-Fu Lee, the CEO of 01.AI, was born in Taiwan and earned his PhD at Carnegie Mellon in the United States. He worked for Apple and Silicon Graphics before moving to Beijing, where he led Microsoft Research Asia and then Google China between 1998 and 2009. After that, he founded the venture capital firm Sinovation.
Lee founded 01.AI in 2023 to develop large language models in both Chinese and English. The Large Model Systems Organization’s “Chinese Ranking,” dated May 21, 2024, shows Yi-Large running a close second to the most recent version of OpenAI’s GPT-4o.
The “Overall Ranking” places it seventh out of 15 models, behind three versions of GPT-4o, Google’s Gemini 1.5 Pro, Anthropic’s Claude 3 Opus and the top version of GPT-4.
So far, Chinese large language models have been trained mostly on Nvidia AI accelerators. But as the quality of Chinese models improves, more of them are being trained on locally produced processors and supercomputers.
Follow this writer on X: @ScottFo83517667