
Death, Bans, and Fines: China’s Top AI-Generated Fake News Stories
Was the mortality rate among China’s post-1980 generation really 5.2% in 2024? Was Guangzhou the first Chinese city to ban food delivery in 2023? And did a fruit vendor in Shandong province face a 1.4 million yuan ($195,000) fine for lacking a business license last October?
These three widely shared stories, which sparked considerable attention on the Chinese internet, share one thing in common: none of them is true. Investigations later revealed that all three had been fabricated with the help of artificial intelligence.
In response to the growing problem of AI-generated disinformation, Chinese authorities have reaffirmed their commitment to combating rumors that could disrupt public order and incite panic.
At the 2025 China Internet Civilization Conference, held last week in Hefei, capital of eastern China’s Anhui province, the Cyberspace Administration of China (CAC) and the China Association for Science and Technology jointly released the three cases above as part of a batch of typical AI-generated rumors, intended as a reference for future regulation.
The rumor claiming that the mortality rate among China’s 1980s generation had reached 5.2%, or one in every 20 people, as of 2024, was first shared by several WeMedia — or self-published — accounts in February. Citing China’s Seventh National Census as its source, they falsely asserted that individuals born during that decade were dying faster than those born in the ’70s.
On Feb. 18, an online fact-checking platform based in Shanghai refuted the claim, pointing out that the census data, last updated in 2020, could not be used to predict figures for 2024. The state broadcaster CCTV further reported that the false figure likely stemmed from an “AI computation error.”
“While large AI models are highly capable of processing information, their output may be flawed due to insufficient training data and unreliable sources,” the CAC document states. “If users rely solely on AI-generated results without verification, they risk amplifying falsehoods and fueling disinformation.”
The police traced the origin of the rumor to three netizens, surnamed Xia, Yin, and Zhu, who were later detained. Six additional people who further disseminated the story were given warnings.
Under China’s public security law, individuals who spread rumors online or via other media and thereby deliberately disrupt public order can face up to 10 days of administrative detention and fines of up to 500 yuan. More serious violations — such as fabricating or knowingly disseminating false reports about disasters, epidemics, or emergencies — can lead to prison sentences of up to seven years.
In one related case, two individuals surnamed Zhang and Chen were each sentenced to 13 months in prison after their media company used AI in May 2023 to generate a video showing a massive fire at an industrial zone in Shaoxing, in the eastern Zhejiang province.
The video, which depicted a building engulfed in a dramatic blaze, was deemed by the police to have been created with the specific intent to go viral and draw traffic for profit. Authorities discovered the fire clip was just one of several AI-generated videos the company had produced in pursuit of online monetization.
In another case, a group led by a perpetrator surnamed Yang, based in southwestern China’s Sichuan province, also came under criminal investigation after mass-producing AI-generated articles designed to attract internet traffic.
When Guangzhou, capital of southern China’s Guangdong province, introduced restrictions on electric bicycle use in December 2023 to improve road safety, the group twisted the policy and published AI-generated articles falsely claiming that “Guangzhou is set to become the first city in China to ban food delivery services.”
The claim quickly went viral, sparking concerns among users of food delivery services as well as delivery workers fearful of potential job losses, prompting heated public debate.
“In this case, the perpetrators recruited part-time workers to form an organized group, fabricating rumors at scale. Their actions were malicious in nature and caused serious disruption,” the internet authorities said.
Another AI-generated rumor that gained traction in 2024 claimed a 65-year-old female fruit vendor in Jinan, capital of the eastern Shandong province, had been threatened with a fine of over 1.4 million yuan for operating without a business license. The story stirred public outrage among netizens over what was seen as an excessive penalty.
But local internet authorities and police found no record of such a case. Instead, they discovered that the story had been fabricated by a media company based in Changsha, capital of the central Hunan province, which used AI tools to create the post in an attempt to attract clicks and increase revenue. Following a collaboration between the two cities’ police departments, two employees from the company were later detained for three days.
As generative AI technologies continue to evolve, China has rolled out a series of regulations to impose legal and technical safeguards to prevent their abuse.
In November 2022, the CAC issued detailed regulations on deepfake content — requiring that AI-generated material be clearly labeled to prevent confusion between real and synthetic media.
The policy states that it is the responsibility of hosting platforms to clearly label AI-created content, take down any content deemed harmful, and to establish “rumor-refutation mechanisms.”
In March this year, the agency, alongside the National Radio and Television Administration and two other ministries, released further guidance on identifying artificial intelligence-generated content. It requires both users and internet service providers to mark AI-generated text, audio, images, videos, and virtual scenes with clear indicators — explicitly through labels or descriptions, and implicitly through metadata or tags — to ensure the content remains identifiable and traceable.
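The implicit-marking requirement is easiest to picture at the file level. The sketch below is a rough, hypothetical illustration, not the official labeling format (which the guidance leaves to platforms and standards bodies): it shows how a service might attach a machine-readable marker to an image’s metadata using Python’s Pillow library. The “AIGC” field name and its contents are assumptions made for illustration only.

```python
# Illustrative sketch only: embeds a hypothetical "AIGC" marker in a PNG's
# metadata, loosely mirroring the "implicit" labeling the rules describe.
# The field name and value format here are assumptions, not the official spec.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def add_ai_label(src_path: str, dst_path: str) -> None:
    """Copy an image and attach an AI-generated-content marker as a PNG text chunk."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("AIGC", "generated=true;producer=example-model")  # hypothetical field
    img.save(dst_path, pnginfo=meta)


def read_ai_label(path: str):
    """Return the marker if present, so downstream platforms could trace the content."""
    return Image.open(path).text.get("AIGC")  # .text holds PNG text chunks


if __name__ == "__main__":
    add_ai_label("photo.png", "photo_labeled.png")
    print(read_ai_label("photo_labeled.png"))
```

In practice, explicit labels (visible captions or watermarks) and implicit markers like the one above would be used together, so that content stays identifiable both to readers and to the platforms that redistribute it.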
Editor: Tom Arnstein.
(Header image: VCG, re-edit by Sixth Tone)