China Implements Strict Rules for Generative AI Services


China has recently published a set of rules governing generative artificial intelligence (AI) services, signalling the country’s commitment to regulating and promoting responsible AI development. These rules, which will take effect on August 15, 2023, go beyond current regulations in other parts of the world and have implications not only for domestic AI operators but also for international discussions on AI governance and ethical practices.

One of the notable requirements outlined in the rules is that operators of generative AI services must ensure that their services adhere to the core values of socialism. Additionally, these services must avoid content that incites subversion of state power, secession, terrorism, or any actions that undermine national unity and social stability.

China’s regulations prohibit generative AI services from promoting content that provokes ethnic hatred and discrimination, violence, obscenity, or false and harmful information. The emphasis on these content-related rules aligns with a draft released in April 2023, maintaining consistency in China’s approach.

The Chinese government also demonstrates its interest in developing digital public goods for generative AI. The regulations highlight the promotion of public training data resource platforms and the collaborative sharing of model-making hardware to enhance utilization rates. The aim is to encourage the orderly opening of public data classification and the expansion of high-quality public training data resources.

In terms of technology development, the rules stipulate that AI should be developed using secure and proven tools, including chips, software, computing power, and data resources. Intellectual property rights must be respected when using data for model development, and the consent of individuals must be obtained before incorporating personal information. The regulations further stress the importance of improving the quality, authenticity, accuracy, objectivity, and diversity of training data.

To ensure fairness and non-discrimination, developers are required to create algorithms that do not discriminate based on factors such as ethnicity, belief, country, region, gender, age, occupation, or health. Furthermore, operators of generative AI must obtain licenses for their services under most circumstances, adding a layer of regulatory oversight.

While these rules provide a framework for responsible and lawful use of generative AI, they also reflect China’s ambition to strengthen its position in the field. China aims to become a leading provider of generative AI, challenging the current dominance of the United States.

It is worth noting that China’s control over internet access and the spread of information within its borders has led to certain challenges in the development and adoption of AI technologies. Chinese tech giants Alibaba and Baidu are now developing their own generative AI tools, with the goal of competing in the global AI market. However, authorities have cracked down on citizens using AI tools and have warned against accessing certain AI models due to concerns over potential misuse.

China’s generative AI rules also prioritize the protection of intellectual property rights related to training data and prohibit the use of algorithms, data, platforms, or other advantages to implement monopolies or engage in unfair competition. The government encourages the development of generative AI by supporting infrastructure and public training initiatives.

While China is taking significant steps to regulate generative AI, other countries are also working on finding a balance between innovation and public safety. The European Union is still deliberating on its AI Act, and the Biden administration in the United States has outlined plans to support AI development. Additionally, the US Federal Trade Commission has launched an investigation into OpenAI, the creator of ChatGPT, for potential consumer harm.
