Generative Artificial Intelligence: China's legal framework


published on 25 July 2024 | reading time approx. 5 minutes

 

Generative artificial intelligence (AI) is developing rapidly around the world, and China is no exception. In fact, China is considered one of the hotspots of this development. But the rapid development also brings challenges, prompting lawmakers around the world to ask how AI can best and most appropriately be regulated. One of China's most recent regulatory measures is the "Provisional Measures for the Administration of Generative Artificial Intelligence Services" (the "Measures"), which were drafted and adopted in 2023 under the leadership of the Cyberspace Administration of China (CAC) with the participation of other government agencies. The Measures will shape the future of AI in China and have implications for global AI governance.


The Measures at a glance

The Measures are part of China's comprehensive strategy to become a global leader in AI. At the same time, they aim to ensure that the technology is developed, deployed and used responsibly. The regulations cover various aspects of generative AI, including data protection, content management, ethical standards and the responsibilities of AI developers and users.
 
Privacy and Data Security: The Measures emphasize the protection of personal data. AI developers must ensure that all data used to train AI models has been collected lawfully and with the consent of the data subjects. Strict measures must be taken to protect data from unauthorized access and misuse. In addition, the provisions of the Cybersecurity Law, the Data Security Law and, in particular, the Personal Information Protection Law, as well as the regulations of the relevant (specialized) authorities, must be observed.
 
Content Management: Generative AI systems that can produce text, images and other media must adhere to strict content guidelines. The guidelines stipulate that AI-generated content must be appropriately labelled (watermarked). They prohibit the creation of content that could be harmful, misleading or disruptive to social order. This includes content that could be considered politically sensitive or contrary to public morality. Examples include unsubstantiated rumors, false information that could lead to public "disorder", pornography, content harmful to children and young people, or content that could contribute to addictive behavior or uncontrolled consumption.
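For illustration only, the labelling requirement described above can be sketched in a few lines of code. This is a hypothetical minimal example of how a service provider might attach a visible label and machine-readable provenance metadata to generated text; the label wording, function name and metadata fields are illustrative assumptions, not prescribed by the Measures.

```python
# Hypothetical sketch: attaching an explicit provenance label to
# AI-generated text, in the spirit of the Measures' labelling
# requirement. The label text and metadata schema are assumptions
# made for illustration, not taken from the regulation.
import json

AI_LABEL = "[AI-generated content]"

def label_generated_text(text: str, model_name: str) -> dict:
    """Return the generated text with a visible label and
    machine-readable provenance metadata."""
    return {
        # Visible marking shown to the end user.
        "display_text": f"{AI_LABEL} {text}",
        # Machine-readable record of how the content was produced.
        "metadata": json.dumps({
            "generated_by": model_name,
            "ai_generated": True,
        }),
    }

result = label_generated_text("Sample output.", "demo-model")
print(result["display_text"])  # -> [AI-generated content] Sample output.
```

In practice, providers would also need to consider imperceptible (steganographic) watermarks for images, audio and video; the sketch above covers only the simplest case of visibly labelled text.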
 
Ethical Standards: The Measures require AI systems to be developed and used in accordance with ethical principles. Particular emphasis is placed on respecting "public order and morals", although this term is not further defined. However, the Measures explicitly state that bias in AI algorithms must be avoided and that AI-generated content must not discriminate against individuals or groups. Developers are encouraged to incorporate fairness, accountability and transparency into their AI systems.
 
Intellectual Property Rights and Fair Competition: Developers and users of AI systems must also respect intellectual property rights and business ethics, protect trade secrets, and must not use AI systems to gain a monopoly position or as a means of unfair competition.
 
Responsibility and Liability: AI developers and operators are legally responsible for the outcomes of their generative AI systems. This means that developers and operators can be sanctioned if an AI system generates harmful content. In addition, regular audits and evaluations of AI systems are mandatory to ensure compliance with legal requirements.
 

Impact on AI development in China

The introduction of these regulations is an important step in China's approach to AI governance. They aim to promote a responsible and ethical AI environment, but also pose challenges for AI developers.
 

Promoting responsible AI

One of the main objectives of the Measures is to promote the development of responsible AI. The aim is to reduce the risks associated with generative AI by setting out clear guidelines on data protection, content management and the ethical use of AI. This is intended to increase the confidence of the public in AI technologies and to promote their broad application in a variety of sectors.
 

Challenges for Developers

The strict requirements can also present challenges for AI developers in China. For example, significant changes to the way data is collected, stored and used may be necessary to ensure compliance with data protection laws. The creative potential of generative AI systems may also be limited if outputs must be designed to avoid politically or socially sensitive content.
 

Global Impact

China's Measures on generative AI are likely to have an impact beyond national borders. As one of the leading nations in AI research and development, China could influence other countries with its regulatory approach and set a precedent for global AI governance.
 

The Measures as a Model

China's detailed and comprehensive approach to regulating generative AI could serve as a model for other countries looking to develop their own regulations. The emphasis on data protection, ethical use, and the legal responsibility and liability of developers and operators addresses global concerns about the potential impact of AI on society.
 

Impact on International Cooperation

The regulations are also likely to affect international collaboration in AI research and development. Companies and research organizations outside China that wish to collaborate with Chinese institutions will need to ensure that their practices are compatible with Chinese regulations. The result could be greater harmonization of AI standards at a global level, but also new barriers to collaboration if the regulations are perceived as too restrictive.

 

Impact on companies in China

The use of generative AI in China is already established in a wide range of sectors and industries, including healthcare, education, finance, retail and manufacturing. In industries that handle sensitive data, such as healthcare and medicine, the data protection and ethical requirements of the current guidelines could pose a major challenge and fundamentally change the way data is collected, stored and analyzed.
 
In the education and media sectors, the rules on content management and on politically and socially sensitive topics could have a significant impact. For example, the way educational materials are created, curated and distributed could change. In the media sector, reporting could be further restricted, both by existing strict censorship laws and by the AI guidelines; AI-assisted text generation and research in particular could affect news coverage. The same applies to the entertainment industry, especially social media (e.g. platforms such as Weibo or WeChat) and the film and music industries.
 
The application of AI in the manufacturing industry plays a crucial role in increasing efficiency, optimizing processes and automating tasks. However, the ethical standards set out in the guidelines could have an impact on the design and implementation of these processes. It is therefore important that these standards are taken into account when implementing AI technologies to ensure that the technology is used responsibly and for the benefit of all involved.
 
AI is also used in the engineering and automotive industries. Beyond AI-based increases in production output and operational optimization, it is used in autonomous vehicles, particularly in software and algorithm development, as well as in maintenance, servicing, and root-cause and fault analysis of machines and systems.
 
These are only a few examples of where AI is being used in China. For companies using AI applications in China, the Measures mean that they will need to review and adjust their practices to comply with the regulations. This can be particularly challenging in areas such as data protection, content management and the ethical use of AI. Companies should ensure that they consider both the legal and the ethical aspects of implementing and using AI technologies.
 

Summary

China's policies mark a significant step forward in the regulation of AI technologies, both nationally and internationally. By addressing key issues such as data protection, content management and ethical standards, the Measures aim to promote the responsible development of AI while mitigating potential risks. While the regulations present a challenge for AI developers and users, they also provide a framework for building trust in AI technologies.
 
For AI technology developers, the Measures mean that the (further) development of new AI technologies should be reviewed and adapted to meet data protection requirements. They should also ensure that their AI systems do not generate harmful or misleading content and that the systems are developed and used in accordance with ethical principles.
 
For other countries, the Chinese approach could be an important point of reference. The coming years will show how these regulations will shape the future of AI in China and around the world.