China’s new rules on AI-generated content

AI service providers' software must not create content that contains “false and harmful information”.

AI programmes must be trained on legally obtained data sources that do not infringe on others’ intellectual property rights, and individuals must give consent before their personal information can be used in AI training.

SAFETY MEASURES

Companies designing publicly available generative AI software must “take effective measures to prevent underage users from excessive reliance on or addiction to generative AI services”, according to the rules published in July by Beijing’s cyberspace watchdog.

They must also establish mechanisms for the public to report inappropriate content, and promptly delete any illegal content.

Service providers must conduct security assessments and submit filings on their algorithms to the authorities if their software is judged to have an impact on “public opinion”, the rules say – a step back from a stipulation in earlier draft rules that required security assessments for all public-facing programmes.

ENFORCEMENT

The rules are technically “provisional measures” subject to the conditions of pre-existing Chinese laws.

They are the latest in a series of regulations targeting various aspects of AI technology, including rules on “deep synthesis” (deepfake) technology that came into effect earlier this year.

“From the outset and somewhat differently from the EU, China has taken a more vertical or narrow approach to creating relevant legislation, focusing more on specific issues,” partners at international law firm Taylor Wessing said.

While an earlier draft of the rules suggested a fine of up to 100,000 yuan (US$13,824) for violations, the latest version says anyone breaking the rules will be issued with a warning or face suspension, receiving more severe punishment only if they are found to be in breach of actual laws.

“Chinese legislation falls between the EU and the United States, with the EU taking the most stringent approach and the United States adopting the most lenient one,” Angela Zhang, associate professor of law at Hong Kong University, told AFP.

SUPPORTING INNOVATION

Jeremy Daum, senior fellow at Yale Law School’s Paul Tsai China Center, noted that while an earlier draft of the rules was partly aimed at maintaining censors’ strict control over online content, several of that draft’s restrictions on generative AI have been softened.

“Many of the strictest controls now yield significantly to another factor: Promoting development and innovation in the AI industry,” Daum wrote on his China Law Translate blog.

The scope of the rules has been dramatically narrowed to apply only to generative AI programmes available to the public, excluding research and development uses.

“The shift might be viewed as indicating that Beijing subscribes to the idea of an AI race in which it must remain competitive,” Daum said.