What we can learn from China’s proposed AI regulations



In late August, China’s internet watchdog, the Cyberspace Administration of China (CAC), released draft guidelines that seek to regulate the use of algorithmic recommender systems by internet information services. The guidelines are thus far the most comprehensive effort by any country to regulate recommender systems, and may serve as a model for other nations considering similar legislation. China’s approach includes some global best practices around algorithmic system regulation, such as provisions that promote transparency and user privacy controls. Unfortunately, the proposal also seeks to expand the Chinese government’s control over how these systems are designed and used to curate content. If passed, the draft would increase the Chinese government’s control over online information flows and speech.

The introduction of the draft regulation comes at a pivotal point for the technology policy ecosystem in China. Over the past few months, the Chinese government has introduced a series of regulatory crackdowns on technology companies, aimed at preventing platforms from violating user privacy, encouraging users to spend money, and promoting addictive behaviors, particularly among young people. The guidelines on recommender systems are the latest component of this regulatory crackdown, and they appear to target major internet companies — such as ByteDance, Alibaba Group, Tencent, and Didi — that rely on proprietary algorithms to fuel their services. However, in its current form, the proposed regulation applies to internet information services more broadly. If passed, it could affect how a range of companies operate their recommender systems, including social media companies, e-commerce platforms, news sites, and ride-sharing services.

The CAC’s proposal does contain numerous provisions that reflect widely supported principles in the algorithmic accountability space, many of which my organization, the Open Technology Institute, has promoted. For example, the guidelines would require companies to provide users with more transparency around how their recommendation algorithms operate, including information on when a company’s recommender systems are being used and the core “principles, intentions, and operation mechanisms” of the system. Under the proposal, companies would also need to regularly audit their algorithms, including the models, training data, and outputs. In terms of user rights, companies must allow users to determine if and how the company uses their data to develop and operate recommender systems. Additionally, companies must give users the option to turn off algorithmic recommendations or opt out of receiving profile-based recommendations. Further, if a Chinese user believes that a platform’s recommender algorithm has had a profound impact on their rights, they can request that the platform explain its decision and demand that the company make improvements to the algorithm. However, it is unclear how these provisions will be enforced in practice.
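To make the opt-out provisions concrete, here is a minimal sketch of how a hypothetical platform might honor a user’s choice to turn off recommendations or profile-based personalization. All names here (such as UserSettings and popularity_scores) are illustrative assumptions, not drawn from the draft regulation or any real service.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class UserSettings:
    # Hypothetical per-user flags mirroring the draft's opt-out provisions.
    algorithmic_recommendations: bool = True    # master switch for recommendations
    profile_based_personalization: bool = True  # allow personal data in ranking

def recommend(items: List[str], settings: UserSettings,
              personal_scores: Dict[str, float],
              popularity_scores: Dict[str, float]) -> List[str]:
    """Rank items while honoring the user's stated choices."""
    if not settings.algorithmic_recommendations:
        # User turned off algorithmic recommendations entirely:
        # return the feed unranked (e.g., chronological order).
        return items
    if not settings.profile_based_personalization:
        # User opted out of profile-based recommendations:
        # fall back to a non-personalized signal such as overall popularity.
        return sorted(items, key=lambda i: popularity_scores.get(i, 0.0), reverse=True)
    # Default: personalized ranking based on the user's profile data.
    return sorted(items, key=lambda i: personal_scores.get(i, 0.0), reverse=True)

# Toy example: personalization is off, so popularity decides the order.
items = ["a", "b", "c"]
settings = UserSettings(profile_based_personalization=False)
print(recommend(items, settings,
                personal_scores={"a": 0.9, "b": 0.1, "c": 0.5},
                popularity_scores={"a": 10, "b": 42, "c": 7}))
# -> ['b', 'a', 'c']
```

In this sketch, disabling personalization falls back to a non-personalized popularity ranking, while disabling recommendations entirely returns an unranked feed; the draft itself does not prescribe any particular implementation.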

In some ways, China’s proposed regulation is akin to draft legislation in other regions. For example, the European Commission’s current draft of its Digital Services Act and its proposed AI regulation both seek to promote transparency and accountability around algorithmic systems, including recommender systems. Some experts argue that the EU’s General Data Protection Regulation (GDPR) also provides users with a right to explanation when interacting with algorithmic systems. Lawmakers in the United States have also introduced numerous bills that tackle platform algorithms through a range of interventions, including increasing transparency, prohibiting the use of algorithms that violate civil rights law, and stripping liability protections if companies algorithmically amplify harmful content.

Although the CAC’s proposal contains some positive provisions, it also includes components that would expand the Chinese government’s control over how platforms design their algorithms, which is extremely problematic. The draft guidelines state that companies deploying recommender algorithms must comply with an ethical business code, which would require them to adhere to “mainstream values” and use their recommender systems to “cultivate positive energy.” Over the past several months, the Chinese government has initiated a culture war against the country’s “chaotic” online fan club culture, noting that the country needed to create a “healthy,” “masculine,” and “people-oriented” culture. This ethical business code could therefore be used to influence, and perhaps restrict, which values and metrics platform recommender systems can prioritize, helping the government reshape online culture through its lens of censorship.

Researchers have noted that recommender systems can be optimized to promote a range of different values and generate particular online experiences. China’s draft regulation is the first government effort that could define and mandate which values are appropriate for recommender system optimization. Additionally, the guidelines empower Chinese authorities to inspect platform algorithms and demand changes.
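As a rough illustration of that point, the sketch below assumes a hypothetical recommender whose ranking score is a weighted sum of objective signals; changing the weights changes which values the system effectively promotes. The signal names and weights are invented for illustration and do not describe any real platform or the CAC’s requirements.

```python
from typing import Dict

def score_item(signals: Dict[str, float], weights: Dict[str, float]) -> float:
    """Weighted sum of objective signals; the weights encode which values
    the recommender is optimized to promote."""
    return sum(weights.get(name, 0.0) * value for name, value in signals.items())

# Invented signals for a single candidate item.
signals = {
    "predicted_engagement": 0.8,  # clicks, watch time, shares
    "source_diversity": 0.3,      # exposure to varied sources or viewpoints
    "wellbeing_proxy": 0.5,       # e.g., limits on compulsive-use patterns
}

# Two different value choices score the same item very differently.
engagement_first = {"predicted_engagement": 1.0, "source_diversity": 0.1, "wellbeing_proxy": 0.1}
balanced = {"predicted_engagement": 0.4, "source_diversity": 0.3, "wellbeing_proxy": 0.3}

print(score_item(signals, engagement_first))  # 0.88
print(score_item(signals, balanced))          # 0.56
```

In this framing, a mandate over which values recommender systems must prioritize amounts, in practice, to a mandate over which signals and weights platforms may use.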

The CAC’s proposal would also expand the Chinese government’s control over how platforms curate and amplify information online. Platforms that deploy algorithms that can influence public opinion or mobilize citizens would be required to obtain pre-deployment approval from the CAC. Additionally, when a platform identifies illegal and “undesirable” content, it must immediately remove it, halt algorithmic amplification of the content, and report the content to the CAC. If a platform recommends illegal or undesirable content to users, it can be held liable.

If passed, the CAC’s proposal could have serious consequences for freedom of expression online in China. Over the past decade or so, the Chinese government has radically augmented its control over the online ecosystem in an attempt to establish its own, isolated version of the internet. Under the leadership of President Xi Jinping, Chinese authorities have expanded the use of the famed “Great Firewall” to promote surveillance and censorship and to restrict access to content and websites that they deem antithetical to the state and its values. The CAC’s proposal is therefore part and parcel of the government’s efforts to assert more control over online speech and thought in the country, this time through recommender systems. The proposal could also radically impact global information flows. Many nations around the world have adopted China-inspired internet governance models as they drift toward more authoritarian governance. The CAC’s proposal could inspire similarly concerning and irresponsible models of algorithmic governance in other countries.

The Chinese government’s proposed regulation for recommender systems is the most extensive set of rules created to govern recommendation algorithms thus far. The draft contains some notable provisions that could increase transparency around algorithmic recommender systems and promote user controls and choice. However, if the draft is passed in its current form, it could also have an outsized influence on how online information is moderated and curated in the country, raising significant freedom of expression concerns.

Spandana Singh is a Policy Analyst at New America’s Open Technology Institute. She is also a member of the World Economic Forum’s Expert Network and a non-resident fellow at Esya Center in India, conducting policy research and advocacy around government surveillance, data protection, and platform accountability issues.
