Following the launch of OpenAI’s ChatGPT late last year, several Chinese companies and universities have sought to develop similar products within China. Naturally, the proliferation of such technology has attracted the attention of Chinese regulators, in particular the Cyberspace Administration of China (CAC), which on 11 April 2023 released draft measures on managing generative AI services (Draft Measures) for public comment until 10 May 2023.
The Draft Measures are aimed at the broader category of generative AI technologies (as opposed to “deep synthesis” technologies, arguably a subset of generative AI technologies, covered by the Administrative Provisions on Deep Synthesis in Internet-based Information Services (Deep Synthesis Provisions)). The Draft Measures define generative AI as referring to technologies that generate text, pictures, sounds, videos, codes, and other content based on algorithms, models, and rules.
In general, the Draft Measures seek to reduce the risks and misuse of AI technologies while encouraging AI innovation that is in line with China’s core socialist values, social morality and public order. The Draft Measures are also formulated to exist alongside other relevant laws and regulations such as the Cybersecurity Law, Data Security Law, Personal Information Protection Law, Deep Synthesis Provisions, and Administrative Provisions on Recommendation Algorithms in Internet-based Information Services, forming an integral part of China’s wider regulatory framework in providing developmental guidance for the AI industry.
Below are the key takeaways from the Draft Measures:
- Providers. The Draft Measures intend to regulate generative AI technologies by imposing direct obligations on the “Providers”, defined as organizations and individuals that use generative AI products to provide services such as chat and text, image, and sound generation, including those who support others in generating text, image, and sound by providing programmable interfaces.
Note: This approach appears to be in line with that of the proposed AI Act in Europe, which also places direct obligations on “providers”, although the two instruments define the term quite differently: “Providers” under the Draft Measures appears to encompass both “providers” (i.e. the developer of the AI system) and “users” (i.e. the user of the AI system) under the EU AI Act.
- Primary Responsibility. Providers of generative AI technologies are deemed to assume the responsibility of a producer of the content generated by the relevant product, and where personal information is involved, Providers must comply with statutory responsibilities as a personal information processor.
- Security Assessment. Providers are required to submit security assessment reports to the competent authority and fulfill procedures for algorithm filing, modification and cancellation, before providing services to the public using generative AI products.
- Training Data. Providers will be responsible for the legality of training data and ensuring that their training complies with laws related to cybersecurity, intellectual property rights, personal information protection, etc.
- KYC. Providers must require their users to undergo a real-identity verification process.
- Privacy. Providers will be responsible for protecting users’ information and usage records while providing the services and are prohibited from retaining input information that can be used to deduce the users’ identities or providing input information to third parties.
- Illegal Content. Within three months of discovering content generated by their products that does not meet the requirements of the Draft Measures, Providers must, in addition to adopting content filtering measures, train and optimize their technology to prevent the regeneration of similar content.
- Consequences. Failure to comply with the rules may attract penalties including fines, suspension of services or even criminal liability.
The Draft Measures are expected to become effective later in 2023.
CONCLUSION
China’s approach to regulating AI is an interesting one, as it adopts a piecemeal, iterative regulatory approach targeting specific use cases or technologies in the AI space, as opposed to the omnibus approach adopted by the EU via the AI Act. China also has national, provincial and local regulations targeted at AI, but these are largely high-level and principle-based in nature. Nonetheless, what is clear is that China is keeping to its word in joining the AI race and has no intention of being left behind, so businesses in China and international clients alike that intend to be involved one way or another with AI should keep abreast of developments in this fast-moving space. The requirements outlined in the Draft Measures above, if passed, will have a real impact on businesses in China, especially given the current wave of companies lining up to incorporate, or deploy their own, ChatGPT-equivalent technology.
Please contact Lauren Hurcombe (Partner) or Hwee Yong Neo (Senior Associate) if you have any questions. For more information on AI and the emerging legal and regulatory standards visit DLA Piper’s focus page on AI.