Misinformation is as old as communication itself. In television’s glory days, self-styled psychic Uri Geller fooled viewers for years before skeptics exposed his tricks. Centuries earlier, in the adolescence of print media, English counterfeiter William Chaloner circulated pamphlets attacking the Royal Mint.
But never has misinformation spread so widely, readily, and willfully as it has in the age of social media. And never have so many different actors been culpable in creating that reality.
Take the dreadful Parkland, Florida, school shooting earlier this year. While Twitter and Facebook afforded students and their families access to potentially life-saving information sooner than other media, their algorithms also amplified right-wing conspiracy theories claiming student survivors were “crisis actors.” Although multiple print and digital media outlets quickly debunked the theory, the damage had already been done.
Often unwittingly, everyday Americans are caught in the crossfire of politically charged misinformation. Understandably, they’ve come to rely on social media to stay in touch. How else could a 50-something dad reconnect with an elementary school friend who moved away decades ago? But they’ve also been shepherded into echo chambers by algorithms that prioritize clicks over truth — echo chambers that the majority of Americans, according to Pew Research, expect to get worse over the coming decade.
Certainly, it would be easy to point the finger at social media companies alone. But these platforms are neither the first nor the only perpetrators. Tribalism, a vacuum of government policy, and, yes, the very business model of social media firms have all played a part in this problem.
Inside the Social Media Machine
Compared to its media ancestors, social media is the perfect vector for spreading misinformation. The first of its three problematic attributes is its decentralized architecture, punctuated by highly influential nodes. Each node, whether an individual or a company, attracts like-minded media consumers, magnifying its influence on a given topic regardless of its expertise or truthfulness.
Although decentralization delivers media that’s maximally applicable to the user and prevents a single authority from controlling the narrative, it’s also dangerous. Misinformation spreads like wildfire in such a forum, where competence and truth matter less than the emotional payload of what’s being discussed.
Furthermore, social media makes it easy to link or break ties with connections, enabling users to self-select informational inputs. Over time, users can and do shut out information they dislike or don’t believe, distorting their own reality according to what’s “true” within their information bubbles. Because they’ve insulated themselves from uncomfortable ideas, the shock value of those ideas increases and drives users to respond with vitriol rather than reason.
The final systemic flaw of social media? Just follow the money — and, more specifically, the clicks. Clicks are literal currency for social media companies. Information that provides immediate gratification is good for business, and outrage-triggering content offers it like nothing else. Until that incentive structure shifts, social media’s echo chambers are likely here to stay.
Does that mean society is doomed to a truthless future? Not necessarily. But to rectify the situation, social media users, government entities, and social media platforms themselves must all be willing to alter their behaviors.
A 3-Pronged Defense Against Misinformation
For better or worse, social media users must be the first line of defense against the spread of half-truths and outright falsehoods. In short, they must be responsible informational bartenders. If a bartender serves an intoxicated person who later kills someone with her car on the way home, the bartender is at least morally culpable for fueling the tragedy.
Each time a social media user takes an action, such as retweeting a 280-character rant, he serves that information up to someone else. If he doesn’t critically consider content before sharing it, he’s putting others at risk — this time with the added social proof of his endorsement, a cue that the information can be trusted.
Fortunately, critical consumption of media is something everyone is capable of. Reading content in full before sharing it, asking whether it comes from a reputable source, and searching for corroborating evidence elsewhere are simple yet powerful guardrails against misinformation.
Couldn’t government entities also act as guardrails, playing the referee of truth? They certainly could try, but appointing a singular authority to separate fact from fiction invites an opportunity to propagandize. Facts are rarely black-and-white, and government officials are often all too happy to dole out “alternative facts” that advance their own narratives.
Instead, governments’ role, if any, should be to set policies that encourage all media companies, traditional and social alike, to build models that reward deliberative engagement over clicks. About six in 10 American media consumers read only the headline of a news story before moving on. Something as simple as requiring share buttons to appear within or at the end of content, rather than directly on social platforms, would at least force readers to open the source before sharing it with others.
And what would social media companies think of such a policy? Obviously, they’re beholden to shareholders and market realities, just like other companies. Under their present model, they’re going to fight tooth and nail against any regulation that could cut into clicks and shares.
But there are certainly other business models that they could adopt. For example, switching to a subscription-based forum would weed out bots and give users more ownership over the media community they’re paying to be a part of. Such a system would also provide a revenue buffer to experiment with less emotionally charged, higher-quality content.
Incentivizing longer engagement with media through gamification, such as a system of points or social rewards, could be an effective compromise. Medium is exploring this path with a reader-assessed content quality metric called “claps.” Whether Medium’s approach becomes a viable long-term revenue model remains to be seen, however.
In today’s hyperpoliticized media environment, it can be difficult to remember social media’s original purpose: to inform and bring people together. Although social media has connected friends and families in some contexts, it’s driven wedges between others, sometimes to the point of job termination, social isolation, and even suicide.
If social media is ever to achieve its stated goal, we must start by fighting misinformation. And winning the war on misinformation will require all of us: people, companies, and governments; liberals, conservatives, and independents. Each must choose truth over comfort, both on social media and off.