Jennifer Earl is a professor of sociology and a professor of government and public policy at the University of Arizona. Her research focuses on the Internet and social movements, social movement repression, and the sociology of law. She is the 2017 winner of the William F. Ogburn Career Achievement Award, awarded by the Communication, Information Technologies, and Media Sociology section of the American Sociological Association.
I invited her to the podcast to talk about the use of the internet by political activists. When I say internet, I don’t just mean social media. Social media gets a lot of attention, especially when people talk about polarization, but the internet is more than just Twitter and Facebook, and I think people sometimes misunderstand how the internet is being used by political activists. I also wanted to talk to her about how the internet can be used to deceive people, and how people can become discerning consumers of web content.
One should treat information on the web with more skepticism than we often treat it. So I think a practice that I try to use is that if I read something that I’m surprised about, many people will then try to Google a confirmation of that. So they’ll read “X happened,” and then they’ll Google “Did X happen?” Or they’ll just Google “X” and see what comes up. But I would recommend that you try to consider falsification too, just like in social science, so that you also try to search for negative evidence like this didn’t happen. So you might Google “X isn’t true” or “X is a myth.” Now certainly sometimes that’s going to get you to places where… Well, probably X was true. So, one of the things about Holocaust denial is that Holocaust denial plays on that kind of format of question (“X is a myth”) to bring people into Holocaust denial websites. So I’m not saying that you should trust falsification on the web 100 percent of the time either, but I think you should have a healthy skepticism about what you read, even if it’s sent to you by someone you trust, because you don’t know their vetting procedure.
This transcription was done by a professional, but it may contain a few errors. Please listen to the podcast episode before quoting this transcript.
Chris Martin: My guest today is Jennifer Earl, and she’s a professor of sociology and a professor of government and public policy at the University of Arizona. Her research focuses on the internet and social movements, social movement repression, and the sociology of law. And she’s the 2017 winner of the William F. Ogburn Career Achievement Award, which is awarded by the Communication, Information Technologies and Media Sociology section of the American Sociological Association. I invited her to the podcast to talk about the use of the internet by political activists. And when I say internet, I don’t just mean social media. Social media gets a lot of attention, especially when people are talking about polarization, but the internet is much broader than that. And I think sometimes people misunderstand, based on the writings of some popular authors, how the internet is actually being used.
I also wanted to talk to her about how the internet can be used to deceive people and how people can become more discerning users of web content. So here is Jennifer Earl.
Welcome to the show.
Jennifer Earl: Thanks so much for having me. I really appreciate it.
Chris Martin: I wanted to start by talking about your 2011 book, Digitally Enabled Social Change: Activism in the Internet Age. You co-authored that with Katrina Kimport. Can you tell us a little bit about that book?
Jennifer Earl: Certainly. So Katrina and I were very interested in how the use of digital technologies was affecting protest and social movements. But we observed that across the field people were studying very different kinds of technologies and technology uses, and we thought that that would importantly affect how those technology uses influenced social movements.
And so, we really make two important theoretical moves in the opening of the book. One is to say, we need to distinguish between different kinds of uses of technology. And so we, for instance, argue that a lot of the existing work had studied what we call e-mobilizations, where people use online technologies to facilitate offline protests. So think about, for instance, ride boards, or downloading signs to streamline messaging, versus using online technologies to actually facilitate the fundamental organizing or the actual participation, like an online petition, that is involved in the social movement campaign or the protest itself. And so, we argue that these two broadly different ways of using technology are going to have broadly different patterns in how technology affects movements.
And so, for e-mobilizations, where you’re really using online tools to facilitate offline activism, we largely expected that people weren’t going to be leveraging what we and other people have termed affordances of the internet, and because they weren’t really leveraging those affordances there were really only going to be marginal changes theoretically from existing processes. So if we were talking about an engine, that’s a scale change model, where you’d go from a V4 engine to a V8 engine.
But for protest campaigns or protest events or social movements where you’re using the technology to actually organize or to host participation online, we expect that that would often, although not always, involve much more significant leveraging of the fundamental affordances of the internet, and that that would really change the basic model of organizing. So if we were talking about an engine, we’d be talking about going from a standard combustion engine, for instance, to a hybrid or a fully electric engine. So the underlying processes don’t just get augmented, accelerated, etc.; they actually change. And that change is because some of the fundamental things that underlie the processes that we as a field think operate in activism are altered when, all of a sudden, activism is much cheaper to organize and participate in, and when you can participate in a collective endeavor without being co-present with one another, whether that involves organizing or participation.
And so the empirical core of the book is essentially four chapters that ask: What happens to social movement protest when, for instance, the costs of organizing are really low? What happens to social movement participation when the cost of participating is really low? What happens to social movement organizing when you can organize without being co-present? And what happens to participation when you act collectively without being co-present in time and space?
And then the close of the book looks at how those changes together may be more than the sum of their parts and compose what Katrina and I refer to as a digital repertoire of contention, which is a play on Tilly’s repertoire of contention, arguing that digital technologies have ushered us into a third era of repertoires of contention.
Chris Martin: For those listening who are unfamiliar with Tilly, he’s an influential sociologist who studied social movements and categorized them based on the era in which they occurred. And I think the issue of low cost is really relevant to people at educational institutions, because educational institutions were one of the first places where people affiliated with the institution got free internet access. And even in the earliest years of the internet, we think of students using digital platforms to enact activism on their campus. I think that’s an issue of low cost, whereas previously on campuses there was always at least some cost involved, whether it came to printing flyers or booking a space and that sort of thing. But that brings me to another question, which is on everyone’s mind now. Now that we know that activism in the internet age is different, we’re also aware that digital platforms can be used in ways that are unethical and possibly malicious, and that’s a hazard we need to figure out how to deal with. So, can you talk a bit about what you’ve observed when it comes to unethical use of digital platforms?
Jennifer Earl: You know, this is a really hard area to study in a couple of ways. And so, I’m going to tell you more, I guess, what we don’t know in some ways and why we don’t know it than be able to give you firm answers on what we do know. But I think that that’s just a fairly honest reporting of where the field is at.
First of all, in terms of my own work, I study public protest. And so, one of the unique things about the method I use for collecting online data is that I’m trying to basically build representative populations of content you could reach about public protest issues and then sample those, so that I can say something in a representative way about the population of things that you could have gotten access to. And that requires me to make a simplifying assumption in my work, which is that I’m studying public protest.
A lot of the malicious things that are done online, whether they involve protest and politics or not, are often done in more hidden ways. And so, it has been hard for people like me who study public protest to estimate how often those things are happening, because a lot of times they are intentionally obscured.
Now, that said, I had a five-year project that was funded by the National Science Foundation through a CAREER award, and in one of the years, in addition to ongoing longitudinal data collection, we undertook an attempt to really study these less public forms. And it was incredibly difficult, and the rate that we could find publicly was quite low, and so across five years of studying public protest we really couldn’t find more than maybe one instance of politically motivated hacking that we could track through this public sampling. So I’m not saying it never happened. I’m just saying that the rate of it was so low that it took that much data and that many samples to be able to collect an instance of it.
Other researchers who have used different sampling techniques have also estimated quite low rates, although not as low as my project would suggest. So they might be in this sort of 1% to 5% range. But if you were to look at the so-called dark web, the things that are meant to be ephemeral and are meant to be difficult to access, you might find more. This is where, for instance, when you hear about cybercrime, a lot of that is the dark web. So if you’re thinking about a social security number or credit card number being traded or sold, that’s dark web activity.
There are people who have used online ethnographies or other formats to study that kind of dark web activity. And I think that they find that there is really a mixture of things going on there. So, for instance, Jessica Beyer, who’s at the University of Washington, has done research on Anonymous amongst other hidden communities. And it’s clear from her work that there is a mixture of both motivations for participating in Anonymous and a mixture of activities. So some people participate in the kinds of denial-of-service actions or doxing that Anonymous has become famous, or infamous, depending on your perspective, for [0:10:01] [Indiscernible]. And so, I would say that much of the work certainly points to some of this happening, but it’s probably like other things that are treated sensationally in the media: it’s happening, we can’t say that it’s not happening, and it certainly terribly affects the people it’s directed at. But our best estimates are that the rates of it are not as high as you might guess from, for instance, media coverage.
Chris Martin: And when it comes to the left and right, the Republican Party and the Democratic Party in the U.S., is there any evidence that either of the parties, or PACs affiliated with the parties, have been using the internet, whether it’s social media or some other type of internet activity, in a malicious way?
Jennifer Earl: I do not know of research on that, and certainly my own research wouldn’t allow me to speak to it. I can say that there is some interesting research by Phil Howard’s team. He’s at the Oxford Internet Institute now; he used to be at Washington. And he’s been studying the use of bots in political campaigns, particularly Twitter bots. So, for instance, when you look at someone who is really popular online, or even someone who’s only moderately popular online, a number of those followers are likely not real. And when you read tweets, you really shouldn’t assume that those tweets are always written by real people or different people, because people can have multiple Twitter handles, and because there are bots on Twitter that are tweeting out messages. We all know that from spam, but it also turns out that that gets used in politics.
And I don’t know whether Phil’s work (he might be somebody you’re interested in having on) has looked at differences between Republicans and Democrats, but he has looked at the use of these bots in campaigns, and I can say that the use is quite sophisticated in some cases. So, for instance, if you have a weakness where you think that a particular constituency may be criticizing your policies, you may create bots that appear to be affiliated with that constituency who then praise you.
And when the media assumes that Twitter accounts equal people, then they’re going to believe and report that there is dissensus in this community about this candidate’s position, as opposed to clear opposition or even clear support. And so, I can say that there is interesting and important research out there about how campaigns, or people supporting campaigns, use these bots, because we can’t necessarily prove who pays for those bots and who runs them. But I don’t think that I have seen it report on, for instance, differences across the political spectrum.
Chris Martin: Okay. And I know we need to wrap up pretty soon, but do you have any practical advice for internet users? I guess one piece of practical advice is not to use the number of likes on a tweet as a heuristic for how popular it is. Are there any other heuristics or tips for avoiding being a target of manipulation?
Jennifer Earl: Well, if you’re talking about disinformation, which is the kind of manipulation we were just talking about, I think that one should really treat information on the web with more skepticism than we often treat it. So I think a practice that I try to use is, if I read something that I’m surprised about, many people will then try to Google a confirmation of that. So they’ll read “X happened,” and then they’ll Google, “Did X happen?” Or they’ll just Google “X” and see what comes up. But I would recommend that you try to consider falsification too, just like in social science, so that you also try to search for negative evidence like this didn’t happen. So you might Google “X isn’t true” or “X is a myth.” Now, certainly, sometimes that’s going to get you to places where probably X was true. So one of the things that we know, for instance, about Holocaust denial is that Holocaust denial plays on that kind of format of question to bring people in to Holocaust denial websites. So I’m not saying that you should trust falsification on the web 100% of the time either. But I think you should have a healthy skepticism about what you read, even if it’s sent to you by someone you trust, because you don’t know what their vetting procedures are. Also, there are lots of great online sites that help fact-check different popular stories. And I would encourage people to try to search those sites in order to try to verify or disconfirm information.
Chris Martin: That sounds like useful advice. I know Snopes is a site that a lot of people use. But I think it’s worth being aware that searching for whether disinformation is true might actually lead you to a site saying that it’s true, and that site may not be trustworthy.
So I know we need to wrap up. But do you have any closing thoughts?
Jennifer Earl: Well, I’d like to thank you very much for having me on. And I’d also just encourage people to think about online protest as something that is increasingly affecting protest movements. And I think one of the things you haven’t asked me about, that I usually get asked about and so like to try to talk about, is that a lot of times people will sort of downplay or denigrate online forms of protest, like online petitions, where people can actually participate online. And oftentimes that’s done out of a belief that they’re not very effective, or that it’s not really a protest, or a meaningful protest, if you’re not doing something like putting your body on the line or taking risks. What I want to point out to your listeners is that, just as research shows that offline protest is only sometimes effective, we really face the same question about online forms of protest like online petitions: sometimes it’s incredibly effective, sometimes it’s not. And the questions that we really need to be asking about the effectiveness of digital protest are actually very similar to the questions that we should be asking about non-digital protest. And I think we’ll find that the answer is common to offline and online protest: these things sometimes work, and the real hard work for protest scholars studying consequences is to understand what those circumstances are.
Chris Martin: Well, thanks for being on the show. Once again, your book is Digitally Enabled Social Change: Activism in the Internet Age. It’s available in Kindle, paperback, and hardback editions on Amazon and elsewhere. So thanks for being on the show.
Jennifer Earl: Thanks so much for having me.
All episodes of Half Hour of Heterodoxy are available here.