China’s 15th Five-Year Plan (2026-2030), unveiled in early March, sets ambitious goals for AI and cybersecurity with global reach. While the Five-Year Plan (FYP) imperative for China to “seize the commanding heights of science and technological development” is more aggressively articulated than in past planning documents, it is the natural progression of Xi Jinping’s directive for China to become a cyber superpower.
As explained by the Cyberspace Administration of China (CAC), this means increasing internet content controls and cybersecurity capacity; promoting national technology champions, including in AI; and pushing China’s position on internet governance internationally.
The goals of the FYP represent a continuation of the 2017 State Council New Generation Artificial Intelligence Development Plan, which calls for China to become a world leader in AI by 2030. This is important context for the position of AI in the 15th FYP, which runs through 2030.
The FYP is explicit in setting China’s agenda for not only actively expanding the adoption of Chinese AI technologies but accelerating China’s international influence over AI and emerging technology governance. It calls for encouraging Chinese “enterprises in emerging technologies, such as internet platforms and AI, to expand overseas application,” and for promoting AI governance frameworks with countries in the Global South. It likewise emphasizes relations with developing countries and the joint self-reliance of the Global South, seemingly a reference to expanding influence through narratives such as cyber sovereignty, which have been used to promote Beijing’s digital authoritarian norms beyond China’s borders.
Considering China’s domestic digital governance ecosystem, many of these goals threaten free expression by spreading pro-censorship AI governance and information securitization rules.
How China’s AI Plan Threatens the Freedom of Expression and Information Integrity
One area of concern is the emphasis on Chinese open-source models. The FYP reiterates its predecessor’s call to promote open-source development but goes a step further in calling for accelerating the spread of Chinese open-source technologies worldwide.
Especially since the launch of the DeepSeek-R1 model in January 2025, Chinese models have risen in prominence on platforms like Hugging Face. Chinese offerings already make up many of the top ten models, in part because they are cheaper and require less compute. However, these models tend to retain built-in information controls. Indeed, the Hugging Face CEO, among others, has raised alarm over the information threats involved in building on such models. While it is not impossible to remove in-built censorship from some Chinese models – as Perplexity AI said it did with DeepSeek-R1 last year – it is not a simple task.
Such information integrity concerns have been widely documented. For example, the Estonian Foreign Intelligence Service and the China Media Project have noted how Chinese models tend to exhibit in-built information controls that now seem to extend even beyond China’s domestic political narratives. Newly published research by scholars at Stanford and Princeton on the effect of Chinese government regulations on LLM information manipulation further concluded that “China’s AI regulations are an extension of its censorship regime.”
Some of this owes to the 2023 CAC guidelines on generative AI technologies, which impose sweeping information control requirements. Generative AI services are required to “uphold the Core Socialist Values” and to prevent content that incites subversion or separatism, endangers national security, harms the nation’s image, or spreads fake information. These are common euphemisms for censorship relating to Xinjiang, Tibet, Hong Kong, Taiwan, and other issues sensitive to Beijing.
Meanwhile, China’s information controls are expanding beyond what is politically sensitive domestically toward shaping China’s preferred global narratives. For example, as flagged in the Estonian report, Chinese models have downplayed Russian aggression in Ukraine and highlighted Chinese talking points, which are themselves often convergent with Russian information manipulation narratives.
While generative AI is obviously only one piece of a much larger AI ecosystem, China’s approach to its governance is illustrative of broader trends in its approach to emerging technology that raise freedom of expression and information integrity concerns, especially as it seeks to more aggressively promote its norms globally.
How the Plan Doubles Down on Repressive Cybersecurity Norms
The previous FYP mainly equated cybersecurity with strengthening political security, the most fundamental meaning of which under Xi Jinping Thought is to safeguard regime security and CCP leadership. This treatment of cybersecurity to preserve information power is expanded in the 15th FYP, which calls for improvements in regulations on internet content management and network governance, a crackdown on online illegal activities, and continued operations to control online rumors.
This echoes the CAC guidelines on generative AI noted above and China’s Cybersecurity Law, which prohibits network users from using the internet for, among other things, endangering national security, inciting subversion of national sovereignty, or disseminating false information. This is all shorthand for politically sensitive content that runs afoul of the CCP.
Critically, the Cybersecurity Law further instructs network operators to prevent the dissemination of the above prohibited content on their networks by stopping its transmission, deleting the information, preventing it from spreading further, and documenting and reporting it to the authorities. Such requirements effectively compel surveillance and censorship, raising human rights concerns that multiply when capabilities are supercharged through AI-enabled technologies. China is already aligning AI and cybersecurity in the law.
Staying true to the 14th FYP’s imperatives to “improve national cybersecurity laws” and to innovate in AI security technology, the CAC released amendments to the Cybersecurity Law in 2025. In an explanatory note, the CAC reiterated how the law provides the legal foundation for cyber sovereignty and stressed its importance to Xi Jinping’s imperative for China to become a cyber superpower. The amended Cybersecurity Law, which took effect in January 2026, also encourages the use of new technologies like AI to improve cybersecurity.
When taken with the intent and purpose of the Cybersecurity Law, in part to provide a legal basis for restrictive information controls in the name of political security, the concern is that the already repressive imperatives to engage in censorship, surveillance, and information manipulation will be supercharged through AI-enabled capacity development, and further normalized throughout the AI and cybersecurity ecosystem.
The FYP’s cybersecurity section concludes with the directive, again, for China to more deeply participate in global governance and rulemaking in cyberspace, including through expanded international cybersecurity cooperation, which has been linked to rising digital repression.
Facing the Future
The 15th FYP establishes an ambitious playbook for aggressively adopting AI, and other emerging technologies, across China’s economy. While much of the emphasis is on the promotion of domestic industry, the other overt theme is one of accelerating China’s global influence in this domain.
While the plan outlines several emerging technologies, China’s embrace of AI as the leading vehicle through which it seeks cyber superpower status is clear. Because Xi has treated cybersecurity as an inherent dimension of political, and therefore party, security, it is arguably the nexus of AI and cybersecurity that will largely shape the technological and normative frames through which Beijing pursues the objectives of global influence outlined in the FYP.
Ultimately, whether parts of this FYP remain merely aspirational or not, these objectives should give stakeholders committed to democracy and human rights cause for heightened diligence. They call not only for close monitoring of China but also for redoubled commitments to human rights-based governance and safeguards across the AI design, development, and deployment phases.
Stakeholders should continue to monitor China’s AI and cybersecurity influence, such as through Digital Silk Road partnerships or Chinese-led multilateral fora like the proposed World AI Cooperation Organization. They should advocate against China’s cyber sovereignty model at the United Nations and in other norms-setting arenas like the International Telecommunication Union (ITU). Global support for efforts underway on the AI Basic Act in Taiwan, for example, also offers an opportunity to counter China’s adverse influence in this domain. All the while, highlighting rights-based and truly multistakeholder AI governance models, in genuine cooperation with the Global South, would go a long way toward counteracting China’s attempts at narrative control.