Former Chinese Censor Calls on Social Media Users to Stand Up For Free Speech

Former Chinese social media censor Liu Lipeng, now living in the US, is shown in an undated photo.
(Photo provided by Liu Lipeng)

In 2011, Liu Lipeng landed a job straight out of college as a “content reviewer” at the popular microblogging platform Sina Weibo, where he learned how to issue warnings, delete posts, and shut down the accounts of users who ran afoul of an ever-increasing list of banned topics and keywords issued by the ruling Chinese Communist Party.

“I thought the job would be something like a forum moderator, and that I’d be looking for things like hate speech, pornography, or other offensive content,” Liu told RFA in a recent interview.

“I didn’t expect that I would be a part of such a huge machine, interlinked with the Chinese Communist Party’s propaganda operations and domestic security system.”

Liu’s job was to help run the complex system of blocks, filters, and human censorship that underpins the Great Firewall, which limits what Chinese internet users can do or see online in the absence of circumvention tools like virtual private networks (VPNs).

Once in the job, he was disturbed to find that it entailed acting on daily instructions to delete “sensitive” content as well as shutting down accounts that posted such content.

Liu also became aware of a department of China’s internet police embedded at Sina Weibo’s censorship center in Tianjin.

“Everyone knew that they were the ones who intimidated service users and detained people,” he said. “We were all scared of them.”

From the start, Liu interpreted his job description liberally, sometimes quietly unblocking users after shutting them down.

Among those he shut down, then unblocked, were Weibo users based in Hong Kong whose content had been deleted and whose accounts had been closed for talking about the candlelight vigil for the victims of the 1989 Tiananmen massacre carried out by the People’s Liberation Army (PLA) in Beijing.

He also wrote to author Murong Xuecun after a post of his was flagged in his work queue.

“Later, I stopped deleting stuff altogether,” Liu said. “I had realized that it was all linked to a background operation involving keywords or sensitive accounts — this was the dark side of social media.”

“I would be dealing with angry users, who would curse me out, calling me a jobsworth and saying they hoped my whole family would drop dead, stuff like that,” Liu said. “It had a really big impact on me by the end of a working day.”

By the time he left Weibo in 2013, Liu was approving all of the content that passed across his desk.

“Chinese social media is so tainted by censorship and propaganda,” he said. “The government cracks the whip to force private companies to crack down on dissent.”

Turning whistle-blower

By 2016, Liu had turned whistle-blower, handing over internal work logs from Weibo to the New York-based Committee to Protect Journalists (CPJ).

He left China for the United States earlier this year, after realizing that widespread monitoring measures brought in by the government, ostensibly to control the spread of coronavirus, would render him vulnerable to political retaliation.

After arriving in the U.S., Liu made the work logs fully public and started giving media interviews.

“If I am not brave enough to speak out, how can I tell others not to self-censor or not to be afraid?” he said.

Xiao Qiang, founder of China Digital Times (CDT) and a teacher at the University of California Berkeley School of Journalism, said Liu’s leak has been groundbreaking in advancing understanding of China’s censorship and propaganda machine.

“The logs both confirm and build on the work we have done on this in the past,” Xiao said. CDT has been publishing insider information on China’s propaganda directives under its Ministry of Truth column for several years.

“This isn’t just historical evidence of the censorship process; I also hope that Chinese people will gain a deeper understanding of the current regime, and of their own situation,” he said.

RFA contacted several Weibo employees to confirm the authenticity of Liu’s published logs, and at least three said they could verify that they were genuine. Some were worried about talking to the foreign media, and all asked to remain anonymous.

Murong Xuecun confirmed he received an anonymous email from Weibo informing him of a ban in 2012, and thanked Liu via his Twitter account for the valuable insight it provided.

Sina Weibo, which has garnered more than 500 million users since its launch as a Twitter-like service in 2009, hadn’t replied to a request for comment by the time of writing.

Liu confirmed what many have long believed about private enterprises in China: that they do as they are told by the government and toe the ruling party line.

He said they even vie with each other to show how well they implement government censorship and propaganda orders.

He said that while the censorship workflow and lists of keywords were initially developed by Sina, they are now being adopted as an industry standard by other social media platforms operating in China.

“The core lists of sensitive words and the censorship process can easily be copied from Sina,” he said.

Companies run risks

And there are nasty pitfalls for companies that don’t do a good job of controlling user-generated content.

Wang Xin, the founder of Kuaibo Technology, was handed a three-and-a-half-year prison sentence in 2016 by the Beijing Haidian District People’s Court, which found him guilty of “distributing obscene materials for personal gain” after his online business was accused of providing easy access to pornography and pirated content.

“A bad track record on content review isn’t just about losing money,” Liu said.

At Weibo, content is initially screened by software, which scans for sensitive words.

For example, words linked to the Tiananmen massacre or the banned Falun Gong spiritual movement often appear on lists of banned keywords. The system then automatically deletes any posts containing these words.

Content mentioning the government or the Chinese Communist Party, or other keywords programmed into the software, will be forwarded to human reviewers before a decision is made about whether the post can be published.

Content reviewers can choose to pass content, hide it so it won’t show up in searches, or prevent individual posts from being reposted.

Most banned posts are simply made private, so that only the poster can view them, while a minority are deleted outright.

“Making a post private is the most common option, because it doesn’t inform the user that anything has happened, even though only they can see it,” Liu said.

“Delete is rarely used, because it is tantamount to informing the user that they broke the rules.”
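The workflow Liu describes can be summarized as a simple triage pipeline. The sketch below is purely illustrative and is not based on any Weibo code; the keyword lists, function names, and action labels are hypothetical, chosen only to mirror the steps reported above: automatic deletion on hard-banned keywords, escalation to a human reviewer for politically sensitive mentions, and reviewer decisions that most often quietly make a post private rather than delete it.

```python
# Illustrative sketch only: a hypothetical triage pipeline mirroring the
# moderation workflow described in this article. It is NOT Weibo's code;
# all keyword lists, names, and actions here are invented for illustration.
from enum import Enum


class Action(Enum):
    PASS = "pass"                   # publish normally
    HIDE_FROM_SEARCH = "hide"       # visible, but excluded from search results
    BLOCK_REPOST = "block_repost"   # visible, but cannot be reposted
    MAKE_PRIVATE = "private"        # only the author can see it; no notification
    DELETE = "delete"               # removed outright; tells the user a rule was broken
    HUMAN_REVIEW = "review"         # escalate to a human content reviewer


# Hypothetical keyword lists standing in for the real, much larger ones.
AUTO_DELETE_KEYWORDS = {"tiananmen", "falun gong"}
ESCALATE_KEYWORDS = {"government", "communist party"}


def screen_post(text: str) -> Action:
    """First-pass software screening, as described in the article:
    hard-banned keywords are deleted automatically, while politically
    sensitive mentions are forwarded to a human reviewer."""
    lowered = text.lower()
    if any(word in lowered for word in AUTO_DELETE_KEYWORDS):
        return Action.DELETE
    if any(word in lowered for word in ESCALATE_KEYWORDS):
        return Action.HUMAN_REVIEW
    return Action.PASS


def reviewer_decision(flagged: bool) -> Action:
    """Human-review stage. Per Liu's account, making a post private is the
    most common outcome because the user is never told; outright deletion
    is rare because it reveals that a rule was broken."""
    return Action.MAKE_PRIVATE if flagged else Action.PASS


if __name__ == "__main__":
    post = "A comment mentioning the government"
    verdict = screen_post(post)
    if verdict is Action.HUMAN_REVIEW:
        verdict = reviewer_decision(flagged=True)
    print(verdict)  # Action.MAKE_PRIVATE
```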

Rules remain vague

Service providers are reluctant to alert users to content bans because the authorities need the rules to remain vague, so that users also censor themselves, he said.

Some accounts are added to a political whitelist, including those of the ruling party’s paid army of online commentators, dubbed the 50-centers for the amount they are allegedly paid per comment; Global Times editor Hu Xijin; Xi Wuyi, a professor of Marxism at the Chinese Academy of Social Sciences; and Peking University Chinese department professor Kong Qingdong.

But their posts are still reviewed — just at a higher level — and mostly for the comments left by other platform users.

“The comments on their posts are all screened,” Liu said. “For example, if [an account on the whitelist] talks about a sensitive topic … people will flock to discuss it.”

“So the post is reviewed so as to monitor the comments it attracts … which has a very strong effect on public opinion because it looks as if everyone agrees, or has been persuaded,” he said.

But Liu said that the censorship system couldn’t succeed without self-censorship on the part of platform users.

“That fear is actually irrational, because if users refused to self-censor, then the censors wouldn’t be able to keep up with it all,” he said.

He said recent media reporting has suggested that China’s emerging artificial intelligence capabilities will soon enable it to keep tabs on online speech everywhere, but much of the work still has to be done by people.

“If the AI was so good, why is it that content review farms in first-tier cities are unable to recruit students?” Liu said. “They can only manage to hire them in smaller cities; places like Xi’an, Chongqing, and Zhejiang.”

“The sheer numbers of people they need to hire show that this is a huge [operation].”

Reported by Jane Tang for RFA’s Mandarin Service. Translated and edited by Luisetta Mudie.


Source: Copyright © 1998-2016, RFA. Used with the permission of Radio Free Asia, 2025 M St. NW, Suite 300, Washington DC 20036. https://www.rfa.org.