I was asked to write something for The New School's Public Seminar. Here it is.
Democracy is threatened by an arms race that the forces of deception are winning. While microtargeted computational propaganda, organized troll brigades, coordinated networks of bots, malware, scams, epidemic misinformation, miscreant communities such as 4chan and 8chan, and professionally crafted rivers of disinformation continue to evolve, infest, and pollute the public sphere, the potential educational antidotes – widespread training in critical thinking, media literacies, and crap detection – are moving at a leisurely pace, if at all.
When I started writing about the potential for computer-mediated communication, decades before online communication became widely known as “social media,” my inquiries about where the largely benign online culture of the 1980s might go terribly wrong led me to the concept of the “public sphere,” most notably explicated by the German political philosopher Jürgen Habermas. “What is the most important critical uncertainty about mass adoption of computer-mediated communication?” was the question I asked myself, and I decided that the most serious outcome of this emerging medium would have to do with whether citizens gain or lose liberty with the rising adoption of digital media and networks. It didn’t take a lot of seeking to find Habermas’ work when I started pursuing this question.
Although Habermas’ prose is dense, the notion is simple: Democracies are not just about voting for leaders and policy-makers; democratic societies can only take root in populations that are educated enough and free enough to communicate about issues of concern and to form public opinion that influences policy. The civil rights and women’s suffrage movements involved civil disobedience, political and judicial initiatives, and electoral labor, but the overall effort was undergirded by the public opinion that emerged from arguments, demonstrations, and debates about the rights of these emerging publics. When he described the origins of 18th-century constitutional republics in the coffee-house arguments among (white, male, bourgeois) political scholars and activists, Habermas also noted some future threats to the public sphere’s continued success.
Freedom of speech and the press are essential, but the Habermasian public sphere also depends on a modicum of “civil, rational discourse” among citizens and on education and journalism as sources of reliable information upon which to build opinions and promote arguments. Habermas feared that the field of public relations would enable the wealthy and powerful to manipulate public opinion, and that a diminishing variety of news sources would warp journalistic integrity. It doesn’t take much work to demonstrate that PR and media monopolies have indeed damaged the public sphere as a form of authentic public opinion that is supposed to emerge from informed discourse among citizens. What Habermas didn’t understand and nobody suspected was the power of computationally targeted disinformation and the technological amplification of antisocial actors to warp the public sphere to the degree we see in the era of Cambridge Analytica and Gamergate. I include myself among those who saw dangers of enclosure and manipulation, but failed to grasp the power of Internet-amplified surveillance capitalism, big data, attention engineering, and disinformation factories.
Since I started writing about digital media and networks in 1987, I have been asked by critics, scholars, and myself: “Do the benefits of these newly granted powers of intellect augmentation and many-to-many communication ultimately outweigh the negative impacts on our attention, discourse, and polity?” Around a decade ago, I decided that the answer – at that time – was “it depends on what people know – social media literacy – and how many people have this knowledge.” Personal computers and Internet accounts don’t come with behavioral user manuals or epistemological danger warnings. The knowledge and skills necessary to personally benefit from the digital commons and to contribute to rather than damage the public sphere aren’t secret or hard to learn. But in 2010, when I started writing about these literacies, few educational institutions were addressing issues of search and credibility, discourse and civility, behavior and citizenship. For the most part, they still aren’t. So I wrote a book that I hoped would inform the online behavior of today’s netizens – a guidebook a parent could give to a high school graduate and that parents could read themselves.
When I set out to write Net Smart: How to Thrive Online, I decided that five essential skillsets – bodies of lore and practice – were necessary to thrive online, and by way of individual thriving, to enhance the value of the commons: literacies of attention, crap detection, participation, collaboration, and network awareness:
· Attention because it is the foundation of thought and communication, and even a decade ago it was clear that computer and smartphone screens were capturing more and more of our attention.
· Crap detection because we live in an age where it is possible to ask any question, any time, anywhere, and get a million answers in a couple seconds – but where it is now up to the consumer of information to determine whether the information is authentic or phony.
· Participation because the birth and the health of the Web did not come about because of, and should not depend upon, the decisions of five digital monopolies, but was built by millions of people who put their cultural creations and their inventions online, nurtured their own communities, and invented search engines in their dorm rooms and the Web itself in a physics lab.
· Collaboration because of the immense power of social production, virtual communities, collective intelligence, and smart mobs afforded by access to tools and knowledge of how to use them.
· Network awareness because we live in an age of social, political, and technological networks that affect our lives, whether we understand them or not.
In an ideal world, the social and political malignancies of today’s online culture could be radically reduced, although not eliminated, if a significant enough portion of the online population were fluent or at least basically conversant in these literacies – in particular, while it seems impossible to stem the rising tide of crap at its sources, its impact could be significantly reduced if most of the online population were educated in crap detection. In Net Smart, I pointed out that while they are often not sufficient, the most basic tools of crap detection are close at hand and easy to use: It isn’t difficult to search on the name of the author to explore what others say about the author’s authority, and compendia of tools for verifying medical, political, and journalistic claims are a click away. Ten years ago, I promoted the utility of universal crap detection education as a prophylactic for manipulation and pollution of the public sphere; now I’m not so sure even widespread education will be sufficient in the face of super-amplified misinformation. That’s where the arms race comes in.
The biggest obstacle to de-crapification is the power of Facebook. As Siva Vaidhyanathan wrote, “the problem with Facebook is Facebook.” That is, Facebook profits by amassing detailed behavioral dossiers on billions of people, then selling to advertisers microtargeted access (“female,” “suburban,” “millennial,” “high school education,” and myriad detailed behavioral characteristics) to the attention of those people. This is phenomenally more powerful than old-school billboards or television advertising in terms of reaching exactly the population that an advertiser wants to engage. What Cambridge Analytica revealed is that political opinions can be sold more effectively this way, just as toothpaste can be sold more effectively this way. Computationally targeted propaganda is multiplied by the coordinated activities of human trolls, who are further amplified by armies of bots. Facebook can’t turn off the part that can covertly manipulate the public sphere without turning off its own main revenue stream.
Just as the problem with Facebook is inseparable from its business model, much of the Internet’s destructive influence on the public sphere grows from the same new powers that benefit millions: If you are the only gay teen in a small town, or a homebound caregiver for an Alzheimer’s patient, or otherwise outside the mainstream, you can connect with others who share your challenges; if you have a rare disease that only one in a million people have, there are two thousand others online, and you can connect with them. The same power to connect with strangers who may share fringe interests is also useful to Nazis, anti-vaxxers, and flat-earthers. Computational propaganda based on surveillance capitalism and applicable to manipulating the political opinions of billions, however, derives from the huge data-mining and analysis capabilities of giants such as Facebook and Google.
Google’s YouTube is another example of unintended consequences that damage the public sphere: The algorithm that suggests to YouTube viewers videos that may interest them can radicalize young people who start out with an interest in gaming and are directed to videos by extremists. Again, this result was not planned, but it is inextricably linked to YouTube’s revenues: Engagement – continued attention – benefits YouTube’s ability to target ads. And Holocaust denial videos, medical misinformation, and recruiters for terrorists turn out to be engaging content for some people.
One aspect of social media threats to mindful engagement in the public sphere that can be addressed both through design and education is the hijacking of attention for profit. The reason social media behemoths amass detailed dossiers on the behavior and habits of billions of people is that knowing each person’s interests makes it possible to tailor advertising to their attention. And the longer an app can engage a user’s attention, the greater the opportunity to present advertising. Attention capture by smartphones is easily observed on any city street, where often a majority of the population are looking at their phones rather than where they are going. More frightening are the texting drivers.
Sustaining attention is the foundation of “engagement,” an attention-retention metric that drives the algorithms that escalate suggested videos from e-games to Jihadi or Nazi recruitment propaganda. Both the capture and the retention of attention are becoming more effective as engineers deliberately design into their systems the same understanding of human attention that slot machine designers exploit: distracting signals that trigger FOMO (the little colored dot that indicates new email, new posts, new likes), intermittent reinforcement (cultivating dopamine dependency), and other aspects of user interface design that capture and hold attention for the purposes of advertising. This is one challenge where ethical design, while perhaps an idealistic goal, is a possible means of ameliorating attention-draining tech at the source. Tristan Harris has written eloquently about this possibility. Again, I see little incentive for a profitable enterprise to dial back part of its profit-making machinery without political pressure (e.g., YouTube brought in $15 billion in 2019): The public sphere must amass enough public opinion to influence regulation and legislation.
I confronted issues of attention in the classroom during my decade of teaching at UC Berkeley and Stanford – as does any instructor who faces a classroom of students who are looking at their laptops and phones in class. Because I was teaching social media issues and social media literacies, simply banning screentime in class seemed to me to be evading the issue – so we made observing our own attention one of our regular activities. I asked my co-teaching teams (teams of three learners who took responsibility for driving conversation during one-third of our class time) to make up “attention probes” that tested our beliefs and behavior. When I researched attentional discipline for Net Smart, I found an abundance of evidence, from millennia-old contemplative traditions to contemporary neuroscience, for the plasticity of attention. Simply paying attention to one’s attention – the methodology at the root of mindfulness meditation – can be an important first step toward control. Attention engineers, despite their wild success, do not seem to hold the kind of overwhelming advantage over attention education that surveillance capitalists and computational propagandists hold with their big data, bots, and troll armies.
The lopsided arms race is what leads me to conclude that education in crap detection, attention control, media literacy, and critical thinking is important, but not sufficient. Regulation of the companies who wield these new and potentially destructive powers will also be necessary. I don’t pretend to know enough to recommend how to craft such legislation, but if I could choose a panel of experts to make recommendations, I definitely would trust the recommendations of a panel that includes danah boyd, Zeynep Tufekci, Cory Doctorow, Ethan Zuckerman, Shoshana Zuboff, Brewster Kahle, Tim Berners-Lee, Tim Wu, Fred Turner, and Renée DiResta. The list over-represents Americans (Tufekci was born in Turkey, Doctorow in Canada, Berners-Lee is British); it needs people from other countries, more people of color, more women. And putting together the world’s best committee is absolutely not a guarantee that political actors will turn their recommendations into effective policy. In the face of the dangers facing the public sphere, however, I think we should not shy away from dreaming of big solutions. If anyone has better ideas, now is the time to surface them.
There are other reasons to curb the monopoly power of digital megacorporations, but if the damage to the public sphere they inadvertently enable is to be mitigated, regulation is a necessary start. Teaching middle school students how to search, requiring high school courses in critical thinking online, and educating parents as well as students are also necessary – but without regulation, educators are bringing textbooks to a massive data fight.
END
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.