OpenAI’s Sora pours ‘cold water’ on China’s AI dreams

Xu Liang, an AI entrepreneur based in Hangzhou, eastern Zhejiang province, said it will not be long before China has similar services. “As soon as in the next one or two months, there will be Sora-like models coming out of the Chinese market and plenty in the next half year,” he said. But Xu noted that there could still be a non-negligible gap between Chinese products and Sora.

Wang Shuyi, a professor who focuses on AI and machine learning at Tianjin Normal University (TJNU), said the experience of developing LLMs over the past year has allowed Chinese Big Tech firms to build up their know-how in this area and stock up on the necessary hardware, giving them the ability to produce Sora-like products in the next six months.

The Sora launch has prompted speculation about the secret behind its impressive output. Xie, the New York University researcher and one of the two developers of DiT, tweeted that “data is likely the most critical factor for Sora’s success”. He estimated that Sora might have around 3 billion parameters.

“If true, this is not an unreasonable model size,” he wrote. “It could suggest that training the Sora model might not require as many GPUs as one would anticipate – I would expect very fast iterations going forward.”
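For a rough sense of why a 3-billion-parameter model would be comparatively light to train, the sketch below walks through generic back-of-envelope memory arithmetic. The parameter count is only Xie’s speculation, and the per-parameter byte costs are standard rules of thumb for mixed-precision training with an Adam-style optimizer, not figures disclosed by OpenAI.

```python
# Back-of-envelope only: a hypothetical 3-billion-parameter model trained in
# mixed precision with an Adam-style optimizer. The parameter count is Xie's
# public speculation; the per-parameter byte costs are generic rules of thumb,
# not figures from OpenAI.

params = 3e9

weights_fp16 = params * 2    # bf16/fp16 working copy of the weights
master_fp32  = params * 4    # fp32 master weights kept by the optimizer
adam_moments = params * 8    # two fp32 moment tensors per parameter
grads_fp16   = params * 2    # half-precision gradients

total_gb = (weights_fp16 + master_fp32 + adam_moments + grads_fp16) / 1e9
print(f"~{total_gb:.0f} GB of model and optimizer state (before activations)")
# Prints ~48 GB: small enough to shard across a handful of 80 GB accelerators,
# versus the thousands of GPUs typically cited for frontier-scale LLMs.
```

By this crude measure the weights and optimizer state alone fit on a few modern accelerators, which is the sense in which a model of that size “might not require as many GPUs as one would anticipate”.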

A few months before Sora was unveiled, a group of researchers launched VBench, a benchmarking tool for video generation models, and used it to evaluate the performance of Runway’s Gen-2 and Pika.

Among the 16 dimensions measured, Gen-2 stood out in areas including imaging quality and aesthetic quality, but was weak in dynamic range and appearance style. Pika, co-founded by Chinese PhD candidate Guo Wenjing at Stanford University, was best at background consistency and temporal flickering but needed improvement in imaging quality.

The VBench team, consisting of researchers from Singapore’s Nanyang Technological University and Shanghai Artificial Intelligence Laboratory in China, found that Sora excels in overall video quality when compared with other models, based on the demos provided by OpenAI. There is limited information on how the model transforms text prompts into videos.

Lu Yanxia, research director for emerging technology at IDC China, said tech giants such as Baidu, Alibaba and Tencent will be among the first to roll out similar services in the country. Local AI players iFlyTek, SenseTime and Hikvision – all sanctioned by Washington – will also be in the race, she said.

But China still faces an uphill battle, as the country’s tech market becomes increasingly walled off from the world in terms of capital, hardware, data and even people, according to analysts.

The market value gap between China’s top tech firms and US counterparts such as Microsoft, Google and Nvidia has widened significantly in recent years, after Beijing decided to kneecap its tech giants in the name of reining in the “irrational expansion of capital”.

And while China was once seen as having an advantage in its quantity of data, Lu said the country now faces a scarcity of quality data needed to train these newer models, compounding challenges from its limited access to advanced chips. A lack of talent is another concern, according to Lu, as the country’s best and brightest in AI often find it easier to shine working for leading players in the US.

At OpenAI, for instance, tech professionals with an educational background from China form a key group. Among the 1,677 people associated with OpenAI on LinkedIn, 23 studied at China’s Tsinghua University, making it the ninth most common tertiary education institution among the start-up’s employees, ahead of the University of Cambridge and Yale University.

Stanford University, the University of California, Berkeley, and the Massachusetts Institute of Technology are the top three institutions among OpenAI workers, with 88, 80 and 59 employees, respectively, listing those schools on their LinkedIn profiles.

Even with the requisite talent, though, experts question how far China’s home-grown generative AI can go while facing existing constraints from US-China trade tensions.

Ping An Securities warned in a report that continued semiconductor export restrictions from the US “may accelerate the maturity of the domestic AI chip industry”, but “home-grown alternatives may fall short of expectations”.

Washington has blocked Chinese companies from accessing the world’s most advanced semiconductor tools through restrictions on related products that include any US-origin technology. In October, the US again tightened those restrictions, blocking the mainland’s access to GPUs that Nvidia had specifically designed for Chinese clients in response to earlier curbs.

Alexander Harrowell, principal analyst for advanced computing at technology research and advisory group Omdia, noted that China has options beyond GPUs for training LLMs. “You could use Google’s TPU (Tensor Processing Unit), Huawei’s Ascend, AWS’s Trainium, or one of quite a few start-ups’ products,” he said.

But replacing GPUs comes at a cost. “The further you go from the GPU route, the more effort it will cost you in software development and systems administration,” Harrowell said.

There will also be opportunities specifically for the China market, according to Xu, the Hangzhou-based entrepreneur. “With the publication of the technical report on Sora, and upcoming open-source video models, there will be groundwork for the Chinese players to learn from,” he said. Local video models will have better support for the Chinese language, he added.

TJNU’s Wang noted that one of the Sora demo videos features a Chinese dragon dance, which he found to be a stereotypical depiction of the tradition. China’s numerous ethnic groups, folk traditions, customs, and geographic diversity offer a wealth of material for local video models to draw on to better cater to local users, he said.

Wang also baulked at the idea that there is an “insurmountable divide” between Chinese and American AI.

“Would Chinese companies rather just follow suit and crank out rip-offs every time their US peers come up with a novel product, or would they rather set a bigger goal to strive for safe artificial general intelligence?” Wang asked.

This article was first published on SCMP.