China and the U.S. – Different Approaches to Regulating AI

Tracey Tang and Art Dicker
Artificial intelligence has become a central pillar of China's drive to become an advanced economy. The development of AI has enjoyed tremendous government support, benefitting from a large population and the data collected in China through various digital platforms. As such, China has been at the forefront of adopting national legislation governing AI, both to give guidance to companies and to create a safe environment for its adoption. In addition to imposing restrictions on the development and use of AI in China, these regulations aim to signal the government's pro-growth attitude towards AI and delineate the roles and responsibilities of various stakeholders in AI governance.
The United States has relied more on existing agencies expanding their scope to cover AI use and development, by and large encouraging its dynamic private sector to develop AI through entrepreneurship.
In this article, we look deeper and compare the approaches to regulation and law enforcement activities in both countries.
Transparency and Fair Competition when Using Algorithms
China has been proactive in looking at the design and deployment of algorithms. The focus in particular has been on the application of algorithm-powered recommendation technologies, including AIGC, personalized push, automated decision-making, and other relevant online services.
One of the key regulations has been the Administrative Provisions on Algorithm Recommendations for Internet Information Services promulgated in 2021 (Algorithm Regulation). It enables the Cyberspace Administration of China (CAC) to require platforms to be more open and transparent about the recommendation systems they use and to publicize the basic principles, purposes, and major mechanisms of the algorithms in use. In addition, it allows users to opt out of algorithmic recommendations to avoid profiling. Where an algorithm may have a significant impact on a user's rights or interests, the service provider must provide reasonable explanations and will be held liable for any misuse. Details of the algorithms must be filed with the CAC if the services can shape public opinion, so that (in theory) they can be examined for potential biases and abuse.
On the protection of public interests, the Algorithm Regulation specifically emphasizes the importance of protecting vulnerable groups. In particular, businesses must tailor algorithm-powered recommendations for minors and the elderly according to their specific needs as well as their mental and physical conditions, and must avoid using algorithms to foster addiction in minors to online services (e.g., video games or video streaming) or to expose the elderly to telecommunications fraud. When using algorithms to assign tasks to contract workers, workers' rights to compensation and rest must be protected.
On the promotion of fair competition in online services, the Algorithm Regulation prohibits businesses from using algorithms to interfere, undermine or impose unreasonable restrictions on other businesses.
The U.S. does not have a law focused on algorithms. Certain government agencies instead use their existing authority to try to protect the public. For example, the Federal Trade Commission (FTC) can police “unfair or deceptive practices” that might be related to misleading claims about the capabilities of AI, or how use of AI can lead to discriminatory outcomes.
There have also been some attempts to legislate directly on AI. For example, a bill called the Algorithmic Accountability Act has been introduced in Congress, which would mandate that developers audit AI systems for bias and privacy risks. But to date, this legislation has not been adopted. Finally, there have also been voluntary guidelines proposed, for example, by the National Institute of Standards and Technology (NIST) and its AI Risk Management Framework, which would have developers adopt best practices for managing risk and promoting transparency in their algorithms.
Data Protection and Privacy
AI and personal data go hand in hand. China has developed a comprehensive set of regulations governing data, and this has been useful in adapting to the need for oversight of AI as well. The data regulation framework consists of the Cybersecurity Law, which mandates that network operators store certain data within China; the Data Security Law, which lays out how data related to national security should be protected; and the Personal Information Protection Law, which is similar to Europe's GDPR and sets forth the requirements for obtaining user consent and limiting the use of individuals' data. When using data to develop or apply AI technologies, businesses must comply with all applicable data protection regulations. Businesses that offer AI-powered tools to edit a person's biometric information (such as face or voice) must remind users to notify and seek separate consent from that person.
The U.S., by contrast, does not have an overarching set of data regulations. Instead, a patchwork of laws governs how different types of data should be handled by industry. For example, certain medical data is regulated by the Health Insurance Portability and Accountability Act (HIPAA), while certain financial records are governed by the Gramm-Leach-Bliley Act (GLBA).
States have also stepped into the void – the California Consumer Privacy Act (CCPA) provides some protection to California residents regarding how AI systems can collect and manage personal data. Virginia and Colorado have their own data privacy laws and requirements as well.
Copyrights
One point where China and the U.S. differ is on whether works generated using AI can be copyrighted. The U.S. has consistently taken the position that copyrights are for original works of authorship, and authorship has been interpreted by courts and the U.S. Copyright Office to mean having a human creator. This principle was prominently illustrated in the Naruto v. Slater case, in which it was determined that a work made by a non-human (a monkey) cannot be copyrighted. Courts and the U.S. Copyright Office have extended this reasoning to conclude that works without meaningful human authorship, such as AI-generated content, cannot enjoy copyright protection.
One relatively recent decision was "Zarya of the Dawn" in 2023, in which the Copyright Office considered a graphic novel containing images made with an AI image generator. It distinguished between the text and its arrangement alongside the images (which were copyrightable) and the AI-generated images themselves (which were not). Notably, though, substantial editing of content originally generated by AI may turn the content into a work that can be copyrighted.
Unlike in the U.S., where something close to consensus has been reached on the copyrightability of AI-generated content, in China the question of whether courts may grant copyright protection to such content remains controversial. This is especially true when a plaintiff claims it has invested substantial effort in bringing about the creation of the AI-generated content in dispute. On the one hand, there is no authority in China like the U.S. Copyright Office with the power and responsibility to issue authoritative opinions on the copyrightability of particular works; on the other hand, Chinese courts seem to take different views in different cases when deciding whether a specific AI-generated work is copyrightable. A few court judgments have recognized that the AI-generated images at issue are entitled to copyright protection because they meet the statutory standards of a work under China's Copyright Law—the process of inputting quite complicated prompts and using AI tools to generate, modify, and polish the images can be regarded as human intellectual activity, with the AI software acting as a tool (like a camera) that assists human authors in creating works. However, these cases are likely to have very limited influence on future ones, as China does not follow a case law system; moreover, some judgments became final without review by higher courts, and the courts' controversial reasoning has drawn significant criticism.
Enforcement and Penalties
While some might argue the more relaxed approach the U.S. takes is important to foster innovation using AI, one might also argue that having a comprehensive set of regulations governing AI at a national level would provide needed clarity to businesses engaged in developing and using AI.
In reality, in both the U.S. and China, multiple government agencies are involved in regulating the space and, in particular, in enforcement. In China, the mix of enforcement agencies includes the CAC and the Ministry of Public Security, among others. Under the Personal Information Protection Law, violations can result in fines of up to RMB 50 million (approx. US$7 million) or 5% of the previous year's revenue. The Data Security Law also provides for penalties, and violations may lead to businesses being forced to shut down temporarily or permanently.
Enforcement in the U.S. is, as expected, a combination of efforts at the state and federal levels. The FTC can impose fines against companies found to be engaging in unfair or deceptive practices. It can also enforce consent orders that place long-term restrictions on how organizations use data. The Food and Drug Administration (FDA), for example, enforces special rules for AI-powered medical devices: developers must demonstrate efficacy and safety through approval pathways such as premarket submissions or De Novo classifications. The Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) also watch over robo-advisory services and AI use in trading.
States often have their own enforcement, for example, the Illinois Biometric Information Privacy Act (BIPA), which allows private lawsuits to police AI data policies.
Going Forward
Rules on AI in both China and the U.S. continue to evolve. For example, Chinese cities and provinces and U.S. states alike are adopting biometric privacy laws (for example, on facial recognition) to address AI-related data collection.
More regulation on the fairness of algorithms vis-à-vis consumers is also expected. Chinese regulators have been out ahead on this front, and the U.S. FTC has also issued prominent warnings against biased AI. Expect fairness audits and transparent reporting to eventually become the norm.
It is becoming clear that even though China and the United States have their own distinctive legal structures, both recognize that AI calls for robust oversight, particularly of the use of personal data and algorithms. It would not be surprising if the two approaches even started to converge a bit over time.
Art Dicker is Managing Partner of Parkwyn Legal, a boutique U.S. law firm focused on helping Chinese companies expand in the U.S. Art lived and worked in China for 16 years and is fluent in Mandarin.