For most Americans, protecting free expression means countering threats from government. Private corporations are not usually seen as threatening free speech. But as private technology companies increasingly mediate access to information and services, the distinction between governmental and private censorship becomes less clear. Concepts of free speech and freedom of expression may need to be revised and enlarged to take account of new threats in the age of digital communications – and policies to protect freedom of expression may need to counter threats, often subtle, from the private sector as well as government.
New Censorship Technologies and Practices
Since the invention of writing, heavy-handed governmental censorship has targeted ideas or words deemed dangerous by authorities, at the risk of drawing greater attention and public debate to the very material it sought to suppress. Contemporary threats, barely recognizable as censorship, more often take the form of steering or soft censorship. Using new communications technologies, corporations – and government agencies operating indirectly through corporations – can intervene in expression before it occurs. Search engines, auto-predictive keyboards, machine learning algorithms, and filters originally designed to keep children safe on the Web have become tools for modifying citizen behavior and altering communications. A few examples illustrate these worrisome practices:
A recent update from Apple to the iPhone keyboard made it difficult for users to enter the words "abortion" or "suicide" into their smartphones. Whatever the intention, this can hinder iPhone users from taking perfectly legal or constructive actions, such as searching for abortion clinics or finding information about suicide prevention.
The source code of a recent Android handset update made by Google contains a dictionary of more than 165,000 words that the auto-predict and spell-check functions built into those devices will not help users complete – including terms like "preggers," "intercourse," "lovemaking," "butt," "geek," "thud," "pizzle," and other supposedly dirty words.
In mainland China, searches for “human rights” on Google often return the question “Did you mean hunan rice?” and a series of rice-based recipes.
Edward Snowden’s revelations about digital surveillance by the National Security Agency suggest that U.S. technology companies have been surprisingly amenable to surrendering user data to intelligence authorities, with companies like Microsoft giving the government easier access to its services, including Skype, Outlook email, and SkyDrive cloud storage.
The blocking and tracking of taboo words offers a particularly useful lens for understanding next-generation censorship. Many of these soft censorship technologies interfere with people’s use of certain words or expressions – or enable surveillance actors to track ideas and communications defined as threatening or undesirable.
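The keyboard examples above share a common mechanism: completions for words on a restricted list are simply never offered, so the filtering is invisible to the user. A minimal sketch of how such a blocklist might work follows; the word lists and function names are invented for illustration and do not reflect any vendor's actual implementation.

```python
# Hypothetical sketch of auto-predict filtering via a blocklist.
# Restricted words are silently withheld from suggestions, so the
# user never learns that filtering occurred. All data is illustrative.

RESTRICTED = {"abortion", "suicide", "intercourse"}  # invented subset

DICTIONARY = ["about", "abortion", "suite", "suicide", "sun"]

def suggest(prefix, dictionary=DICTIONARY, restricted=RESTRICTED):
    """Return completions for prefix, quietly omitting restricted words."""
    return [w for w in dictionary
            if w.startswith(prefix) and w not in restricted]

print(suggest("ab"))  # the restricted word is simply never offered
```

Note that nothing signals the omission: a user typing "ab" sees an ordinary suggestion list with one entry missing, which is what makes this form of intervention so hard to recognize as censorship.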
Tracking and Steering Citizen-Users
Digital communications technology has also proven to be useful for collecting information about users in order to predict their needs and desires and steer their behavior. Unlike media manipulation or propaganda, which focuses on changing popular beliefs and social behaviors on a large scale, digital approaches aim to track, predict, and manipulate the behavior of individual users. For instance, Facebook’s “News Feed” has long tried to shape users’ choices by algorithmically predicting which content is most likely to keep people on the site and automatically removing content that users might find uninteresting or objectionable. For this company and many others, there is little need to challenge or change user beliefs in overt ways when subtler forms of steering are possible, given the ease of collecting and analyzing digital data on what individuals are doing and what they might find attractive or unappealing.
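The feed logic described above can be sketched in a few lines. This is not Facebook's actual algorithm, only a toy model under the assumption that some scoring function predicts each post's likelihood of keeping the user engaged; the post names, scores, and threshold are invented.

```python
# Toy model of engagement-based feed ranking: posts are ordered by a
# predicted-engagement score, and low-scoring content is dropped
# entirely, without the user ever being told. Illustrative only.

def rank_feed(posts, predicted_engagement, threshold=0.2):
    """Order posts by predicted engagement; hide those below threshold."""
    visible = [p for p in posts if predicted_engagement[p] >= threshold]
    return sorted(visible, key=lambda p: predicted_engagement[p],
                  reverse=True)

scores = {"friend_photo": 0.9, "news_article": 0.5, "charity_appeal": 0.1}
print(rank_feed(["news_article", "charity_appeal", "friend_photo"], scores))
```

The design point is the threshold: below it, content is not demoted but removed, which is why this kind of steering operates without ever confronting a user's beliefs directly.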
Protecting Free Expression in a New Era
Given the subtlety of contemporary forms of censorship and steering often practiced by private corporations, freedom of expression and choice can no longer be construed simply in terms of protection against governmental infringements. By better understanding the newest mechanisms for regulating language and steering citizens, we become better able to make informed policy decisions. Several new ways to protect free expression should be considered for the digital age.
Independent auditors may need to review search engines and algorithms, given their enormous power to shape what can be found on the Internet and how findings are ranked. Private companies have a legitimate interest in protecting their intellectual property, but it should be possible for auditors to certify that search engines are not biased or designed to be coercive without divulging any details that amount to true trade secrets.
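One way such certification could respect trade secrets is black-box auditing: the auditor submits probe queries and inspects only the outputs, never the engine's internals. The sketch below is a hypothetical illustration of that idea; the probe terms and the deliberately biased toy engine are invented for the example.

```python
# Illustrative black-box audit: submit probe queries to an engine under
# test and flag sensitive terms that return no results at all, without
# ever inspecting the engine's source code. All names are invented.

def audit_suppression(search_fn, probe_terms):
    """Return the probe terms for which the engine yields zero results."""
    return [t for t in probe_terms if not search_fn(t)]

def toy_engine(query):
    """A deliberately biased stand-in for an engine under audit."""
    blocked = {"human rights"}
    return [] if query in blocked else ["result for " + query]

print(audit_suppression(toy_engine, ["hunan rice", "human rights"]))
```

A real audit would need far subtler probes (ranking shifts, paired queries, regional comparisons), but the principle is the same: bias can be detected from behavior alone, leaving the proprietary algorithm sealed.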
Users should not be assumed to be uninterested in the possible biases of steering devices. Terms-of-service agreements could be simplified and rewritten so as not to discourage users from peering into the operations of the services they use.
New public technology services may also be needed. Instead of proprietary search algorithms like Google’s, open-source search engines could be created along the same lines as Wikipedia, with users contributing to the creation and operation of searches. Algorithmic gatekeepers could be opened up and made intelligible to people with little technical knowledge. Universities might be best equipped to administer public knowledge platforms, because they are present in many countries and regions, enjoy academic freedom in many parts of the world, and have access to advanced research resources and technical experts not dependent on corporations for employment.
Well-designed new policies and institutions could help democratic nations – and peoples aspiring to freedom – to parry manipulative uses of digital technologies. Optimal policies must be future-oriented and able to accommodate rapidly changing technologies. Of course, new technologies and the companies that devise and deploy them deserve to prosper in coming decades – but only in ways that protect vital public interests in transparency and full freedom of expression.
Rex Troumbley is a Postdoctoral Fellow at Rice University. His research deals with how taboo language – cursing, swearing, profanity, obscenity, and racial slurs – is managed by medical, legal, and technical institutions in the United States. This article was written in collaboration with the Scholars Strategy Network.