Relying on AI carries risks, cybersecurity expert warns amid China’s DeepSeek craze

A cybersecurity expert has warned, amid the nationwide craze over China’s home-grown chatbot DeepSeek, that increased reliance on artificial intelligence for decision-making could pose security threats, exposing people to hackers and other bad actors.

Qi Xiangdong, chairman of Beijing-based cybersecurity firm Qi An Xin (QAX), said large AI models presented security challenges and risks, speaking at the Digital China Summit in Fuzhou on Tuesday, according to domestic media reports.

Large models will take on more decision-making power as AI is deployed more widely across industries, according to Qi, who is also a member of China’s top political advisory body, the Chinese People’s Political Consultative Conference.

Hackers can exploit vulnerabilities or use data poisoning to influence a model’s decisions, he said, committing malicious acts “under the guise of a large model”.

From an internal operations perspective, if those involved introduce false data while updating the knowledge base, it can pollute the model’s learning environment and lead to incorrect outputs, he said.

Chinese AI start-up DeepSeek stunned the tech industry by launching a chatbot on a par with US rivals such as ChatGPT, sparking an AI craze among the general public and government agencies.

Officials have broadly endorsed the use of AI. Beijing has hailed DeepSeek as a victory for the nation’s technology development drive, achieved despite US sanctions that have restricted China’s access to advanced chips.

Qi Xiangdong is chairman of Beijing-based cybersecurity firm Qi An Xin and a member of China’s top political advisory body. Photo: Handout

Many social media users have reported using AI for language learning, essay writing and even parenting advice. Chatbots have also proved helpful for residents of remote areas seeking guidance on subjects ranging from animal husbandry to pest control.

The practice has also spread to the medical field, prompting some physicians to diagnose patients with the help of artificial intelligence. Others, however, have questioned the use of AI in such a highly specialized field. In February, the central province of Hunan banned hospitals from using the technology to generate prescriptions.

Many cities in China have also incorporated AI into their internal operations and government service platforms. Some city governments have used DeepSeek’s models for tasks such as writing and proofreading documents, and have combined AI with surveillance equipment to help search for missing people.

Qi pointed to the security risks posed by the widespread use of AI models and suggested building a safety governance framework to better manage the key information used in large models. Such a system could check, intercept and issue alerts for harmful content and abnormal access behavior.

China’s top cybersecurity regulator, the Cyberspace Administration of China, unveiled a three-month campaign to regulate AI services and applications on Wednesday.

The campaign will target AI products that give unauthorized medical advice, false investment advice and misinformation affecting minors, according to a notice from the regulator.

The campaign will also target AI-generated rumors in areas such as finance, healthcare, education and law, as well as those concerning current events, public policy, social issues, foreign affairs and emergencies. South China Morning Post