In the latest twist on Internet repression, governments don't just censor, they scare. Last week, for example, the Chinese government broadcast a text message to cell-phone users in Lhasa, Tibet, where Beijing has cracked down on protests in recent weeks. The message demanded that users "obey the law" and "follow the rules," and no protester could have mistaken the meaning, or the messenger. If the government also managed to terrify even quiet, apolitical citizens, Chinese and Tibetan—well, so be it. Repression 2.0 is not a precise technology.
The essence of the new repression is a form of surveillance in which the spies make their presence known in order to seem like they are everywhere. This strategy has emerged in recent years as authoritarian governments, led by China, have realized there are too many people online to control. State censors can't keep eyes on the 210 million Internet users in China, the 18 million in Iran or the 6 million in Egypt. The idea is not just to stop people from finding "dangerous" material online. It's to create an atmosphere in which none will seek it.
Repression 1.0 was simpler, but less effective. Then, the idea was outright censorship, and it still goes on today. As Internet users began communicating directly with individual Web sites, governments built (or bought) software filters designed to block any site they feared. Saudi Arabia blocks porn sites, Vietnam blocks political sites and so on. It's just that the filters have never worked well. They block either too much content or too little. Just as with your family computer's anti-porn software, the high setting might filter informative sites about breast cancer. The low setting can filter known offenders, but it remains vulnerable to sites offering new content and new ways to evade the filters.
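The tradeoff is easy to see in a toy sketch. Assuming a naive substring blocklist (the terms below are invented for illustration), an aggressive setting sweeps up legitimate pages while a conservative one misses anything not already catalogued:

```python
# Illustrative sketch of why blunt keyword filters over- and
# under-block at the same time. Blocklist entries are hypothetical.

BLOCKLIST_HIGH = {"breast"}                  # aggressive setting
BLOCKLIST_LOW = {"known-bad-site.example"}   # known offenders only

def blocked(text, blocklist):
    """Return True if any blocklisted term appears in the text."""
    text = text.lower()
    return any(term in text for term in blocklist)

# The high setting catches targets, but also a health site:
print(blocked("breast cancer screening guide", BLOCKLIST_HIGH))       # over-blocks

# The low setting misses anything new:
print(blocked("brand-new mirror site, same content", BLOCKLIST_LOW))  # under-blocks
```

Real national filters are more elaborate, but the dilemma is structural: any fixed rule set either casts too wide a net or trails behind new content.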
When Web 2.0 technologies—like Web mail and social-networking sites—began to take off in 2002, they made it harder for censors to know what to block. Some Facebook users might fill their profiles with criticism of the government, but others might credulously purvey official propaganda. Facebook could hurt the government or help it, depending on the user rather than on the site itself. So instead of stopping Netizens from reaching Web 2.0 sites like Facebook or Gmail, the authorities turned to surveillance.
Of course, surveillance itself doesn't curtail free expression. But unlike Stasi agents listening through carefully hidden microphones, Web 2.0 spies don't hide.
Their crudest tool is compulsory registration—to blog, to secure an Internet connection or even to get a terminal at the neighborhood Internet café. Internet cafés worldwide opened with little state interference in the late '90s, but before long every government that limits speech required café goers to register with proprietors and to log in with government IDs. According to Ethan Zuckerman, a fellow at Harvard University's Berkman Center for Internet and Society, some nations like Zimbabwe even deploy security agents—or people who act like them—to wander the aisles at cafés, glancing at screens. At the same time, digital records of which sites patrons visit are squirreled away for eternity in official databases. Today, Chinese café patrons would be taking a big risk searching "Tibet and crackdown," and they know it.
But Web 2.0 technology posed a new problem for censors. By keeping data on remote servers rather than on the user's workstation, social-networking sites like Facebook, Web-mail programs like Gmail and consumer sites like Netflix render themselves hard to watch. When almost all information online was transmitted by local Internet service providers and stored by local hosts, a government like Vietnam's could read data stored on a server in Ho Chi Minh City whenever it pleased—and respond by cutting off a user's access to that server. But Hanoi can't control what happens on Google servers in Mountain View, California. And it can't peek at every data packet going to and from America—the volume is too great.
Yet if governments can convince people that they are reading everything, they might not actually have to. And so the information-control agents from the world's most repressive regimes began thinking about what watchdogs call the "panopticon effect"—named for a type of prison conceived by the 18th-century social critic Jeremy Bentham. In it, a guard can watch the prisoners without their being able to tell whether he's watching. The question for authoritarians was the same: how do you make people feel as if they're being watched at all times and internalize the sense of omniscient authority? A crude answer is the simple broadcast message. Xiao Qiang, the director of the China Internet Project, says that university functionaries might send a note to all students: "This weekend, public-security authorities will install security software on our system." He adds, "You don't know how well it works or what it does, but you certainly know every student is being warned." Or the authorities might send a text message like the one in Lhasa last week—a trick achieved by identifying which phones connected to local Tibetan cell-phone towers belong to roaming domestic subscribers.
Newer, automated methods are targeting individuals more directly. These methods are hard to track in detail because they are invariably deep official secrets, but experts believe China is the leading state practitioner right now. The most famous example is the avatar duo of Jingjing and Chacha (puns on the Chinese word for police), who appeared in early 2006. They are two adorable cartoon cops with big heads, big eyes and tight mouths in the anime style. They live on the home pages of several ISPs, or else they arrive, uninvited, on the screens of Chinese Netizens. If a Web surfer visits a domain that has elected to host the cartoon characters, Jingjing or Chacha may appear spontaneously to dispense amiable advice about online behavior. "We will send kind reminders to people to establish online safety and … to respect online laws and regulations by regulating themselves to create a healthy Internet circumstance and to maintain harmonious order," Jingjing says on his blog. Chen Minli, the head of Internet security and surveillance in the southern city of Shenzhen, explained the point of these Web cops to the Xinhua news service, driving home the panopticon effect: "The purpose is to let all Internet users know that the Internet is not a place beyond the law. The Internet police will maintain order in all online behaviors."
Other examples of ham-fisted surveillance—the kind meant to be noticed—have been chronicled by the Open Net Initiative, a collaboration of several Western universities studying Internet freedoms. China is finding new and varied ways to apply its keyword-tracking technologies. First used to censor Web sites that contain certain phrases, they are now deployed to create the sensation that an intelligence agent is watching. The researchers report e-mails that sometimes arrive and sometimes don't, search engines that suddenly stop accepting particular queries, words that are sometimes excised and Web sites that arbitrarily become unavailable (browsers report a failure to connect or time out). For Netizens, it's impossible to know whether those effects represent censors typing away in a government data center or whether they're simply automated, like Jingjing and Chacha.
The trick about the new repression isn't just getting people to think the government knows—or seems to know—what they're doing; it's making them believe they'll pay the price. Here the technology of Repression 2.0 melds with old-fashioned strong-arm methods: those caught misbehaving are subjected to highly publicized character assassination, interrogation, threats to friends and families, trumped-up charges and show trials. Chinese police have shown up at the homes of Web surfers just minutes after they view an illicit site. Egyptian and Saudi courts try bloggers for sedition.
In the Middle East, censors are hunting not just for political challenges to the established order but also for signs of what they consider social deviancy, such as gay porn. But with so much ground to cover, resources are spread thin. So rather than convey a systematic sensation of surveillance, Middle Eastern governments are louder and angrier in their condemnations. Many Arab Internet service providers reluctantly share data about their clients' habits with authorities, fearing the consequences if they don't. Medhat Zayed owns a two-room Internet café in Cairo with six outdated PCs and one air conditioner. He and other proprietors are pressured to give daily reports on clients' browsing habits. "I don't want to spy," he says. "I don't want to play the role of the police … What I say can send them [to detention]. I hate what I'm doing, and it is haram"—proscribed by Muslim law. Yet he complies because of cases like Hala el-Masry, a 43-year-old woman from Egypt's conservative south who wrote a blog called Copts Without Borders, which chronicled cases of repression. Police detained her, accused her of plotting to kill her father and prosecuted her for undercutting national unity. Then authorities closed the two cafés from which she had posted blog items.
In the Middle East the overall effect is more erratic—it sometimes looks more like Repression 1.0—but no less terrifying than in China. "It's not a soft-power thing; it's imprisonment," says Ibrahim el-Houdaiby, an Egyptian blogger and dissident. In February, a popular Egyptian blogger who calls himself Kareem Amer got four years for insulting President Hosni Mubarak. And, el-Houdaiby warns, "they're still developing the technologies" used by China.
They are. Consider the case of Cairo blogger Wael Abbas, who is known in the Arab world for postings highly critical of Egyptian President Hosni Mubarak. He was well aware of how rough the regime could be on critics (he has posted videos of police torture sessions), but he was surprised one day to find his YouTube account closed—by YouTube. Droves of users had complained, in a short period, about the content he had uploaded. Who were they: the government? Ordinary Egyptians? Nobody really knew. YouTube eventually restored his account when Abbas convinced its operators he'd been targeted by the government. But then anonymous, false reports began circulating online that he had changed his religion three times (Protestant to Orthodox to Roman Catholic) and that he was gay. "You know how a conservative society like ours despises and hates whoever rotates between religions that easily, or an openly gay person," he says.
In China, by contrast, major ISPs are open about complying with directives from the Beijing Information Office to furnish data and ban keywords. In October, hours after Reporters Without Borders issued a report critical of Chinese Internet restrictions, the information office told ISPs to restrict keyword searches that included the group's name, the author's name and several phrases from the report; the ISPs obeyed within hours. Wang Jianzhou, the CEO of China's (and the world's) largest mobile-phone firm, China Mobile Communications Corp., is emblematic. When he was pressed at a news conference about the privacy implications of collecting user data, he said that "we never give this information away [to advertisers]. Only if the security authorities ask for it."
Just to be sure, though, major Chinese blog-hosting sites still censor themselves. The blogger Liu Xiaoyuan posts to blogs on six different hosts, partly to measure their approaches—which include asking him to revise his items, blocking them, deleting them without explanation and sending him notes. One day, a post to Sina, China's most visited Internet portal, came back with a message: "Dear Blogger-friend, Hello! We are very sorry to inform you that due to certain reasons this blog post is not suitable to be publicly shown and has been locked down. You can see the original text and photos through this page. Thank you for your understanding and support."
Everywhere, Repression 2.0 exploits the fact that it is easier to scare users than to filter traffic over the entire network. "We tend to think about the network as the weak point, but it tends to be the endpoints that are most vulnerable," says the Berkman Center's Zuckerman. "It's a whole lot easier to aim a parabolic microphone at a Vietnamese dissident than to hack a network." Even in Zimbabwe, where the repressive state has no means to filter the Internet, people are still terrified enough to avoid banned Web sites, like Zuckerman's own. "A local ISP in Zimbabwe tells me his customers ask to be removed from e-mail lists with jokes [about President Robert Mugabe], because they're afraid they'll be seen as dissidents. That's the panopticon at its best."
In other cases, Netizens are adopting a lexicon to dodge the new forms of repression. Allegory has become part and parcel of online political discussions in authoritarian countries. Bloggers and chat-room regulars are using the same techniques developed by the literati in the Eastern bloc during the cold war. "You see a lot more sarcasm and coded language," says the China Internet Project's Xiao, "but every reader shares a culture and knows what you're talking about. Even the human censors understand what the writers mean, but they can't go after them. The veiled critiques reduce political risk."
The next step for governments struggling to keep up with the flow of dangerous data may be a technique called data mining. One possible model: the Total Information Awareness project, a post-9/11 U.S. Defense Department idea that, had it not been shut down by horrified lawmakers, would have analyzed patterns of writing, shopping, e-mailing and surfing among Web users. The notion wasn't to watch everything people said—it was to scan their online footprint for patterns that might point to criminals or terrorists. It's easy to imagine Beijing appropriating the concept when it can muster the computing power.
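The pattern-scanning idea can be sketched in a few lines. This is a toy illustration only: the signal names and weights below are invented, and real data-mining systems are vastly more complex, but the core move is the same—score each user's aggregate footprint against weighted risk signals instead of reading everything they write:

```python
# Toy sketch of pattern scoring: weight flagged event types in a
# user's activity log and sum them. All names/weights are hypothetical.

from collections import Counter

RISK_WEIGHTS = {
    "circumvention_tool_download": 3.0,
    "banned_keyword_search": 2.0,
    "foreign_news_visit": 1.0,
}

def risk_score(events):
    """Sum weighted counts of flagged event types; unknown events score 0."""
    counts = Counter(events)
    return sum(RISK_WEIGHTS.get(event, 0.0) * n for event, n in counts.items())

log = ["foreign_news_visit", "banned_keyword_search",
       "foreign_news_visit", "cat_video"]
print(risk_score(log))  # 2*1.0 + 1*2.0 = 4.0
```

Users whose scores cross a threshold get human attention; everyone else is ignored. That economy of attention is precisely what makes the approach attractive to states with millions of users and limited censors.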
That would scare Web users for years. Technology has a way of constantly changing, but authoritarians of the world have at least one thing going for them: spreading fear is easy, and the Web makes it easier.