Misinformation is often described as something that’s created and then spread, but increasingly, it’s shaped in real time through participation. In this episode of News Over Noise, Matt Jordan and Cory Barker talk with Kate Starbird about participatory disinformation and how online audiences help produce, remix, and amplify information as it moves, blurring the line between observer and actor. Drawing on her research, Starbird examines how these dynamics play out across politics, crises, and everyday online life, and what they reveal about the growing challenge of building a shared reality in a platform-driven media environment.
Special thanks to our guest:
Kate Starbird is a Professor at the Department of Human Centered Design & Engineering (HCDE) at the University of Washington (UW). Kate’s research sits within the fields of human-computer interaction (HCI) and computer supported cooperative work (CSCW). Extending from early work in crisis informatics, her research program has followed the phenomenon of online rumoring down the rabbit hole and into some of the toxic online spaces that are increasingly (re)shaping discourse, values, and politics around the world. Her work has revealed how online disinformation — i.e. the intentional manipulation of discourse for political gain — is inherently participatory, taking shape through collaborations between witting agents and unwitting (though willing) crowds. Dr. Starbird received her BS in Computer Science from Stanford (1997) and her PhD in Technology, Media and Society from the University of Colorado (2012). She is a co-founder of the UW Center for an Informed Public, which works to strengthen democratic discourse by building resilience to online manipulation.
News Over Noise is a co-production of WPSU and Penn State’s Bellisario College of Communications.
CORY BARKER: In April 2013, two bombs exploded near the finish line of the Boston Marathon. In the hours that followed, people across the internet tried to figure out what had happened. On Reddit, users began combing through photos from the scene. They zoomed in on faces, circled backpacks, compared time stamps, built theories. It wasn't long before one name started to circulate: Sunil Tripathi. He was a missing college student. Someone found his photo online. Others began connecting dots. The theory spread quickly across Reddit, then onto Twitter, now X, but it didn't stay there. News organizations picked it up and his name was broadcast as a possible suspect. But there was a problem. He had nothing to do with the bombing. By the time authorities identified the real suspects, the damage had already been done. A grieving family was pulled into a global news cycle, built on a false claim that thousands of people had helped construct. No single person made that mistake. It emerged from the crowd, from people trying to make sense of the chaos, trying to find answers, trying to help. But in the process, they created a story that wasn't true.
MATT JORDAN: That's what researchers now describe as participatory disinformation.
And it's the focus of the work of Kate Starbird. She's a professor at the University of Washington and co-founder of the Center for an Informed Public, where her research examines how online crowds, platforms and information systems interact during breaking news events, crises and political moments. Kate's work shows that misinformation today isn't just created and then spread. It's shaped in real time through participation. People interpret, remix, challenge and amplify information as it moves. Often blurring the line between audience and actor. We're going to talk with her about participatory disinformation, what she's observed on platforms like X, and how these dynamics influence what the public comes to know and believe. Kate Starbird, welcome to News Over Noise.
KATE STARBIRD: Thanks for having me on.
MATT JORDAN: Sure. So, I just want to get a sense of how you came to study misinformation. You're a computer scientist. What drew you to this field of study?
KATE STARBIRD: Yeah, I actually started out as a PhD student in an emerging field at the time called crisis informatics, around 2008, 2009. We were just looking at the use of social media during crisis events, and my dissertation was actually about all of the pro-social things people do during crisis events. I wrote a dissertation on digital volunteerism and how people would use Twitter and other tools to try to help other people during disasters: anything from helping people find supplies to helping them figure out where roads were closed or damage had happened. So, initially, I was focused on all the good things that people were trying to do to help online. Then around 2012, 2013, we were still looking at crisis events. I had moved on from being a PhD student to my own position as an assistant professor, and we were looking at, I think, data from the Boston Marathon bombing, and began to see that rumors and misinformation were a bigger and bigger part of the convergence online after these crisis events, or even while the crisis events were going on. And so, from 2012, 2013 to about 2016, I was focused on online rumors. We didn't even use the word misinformation as much as rumors. We were really looking at a natural response to crisis events, which is to try to figure out what's going on. And people sometimes get that wrong. So, we weren't necessarily putting a negative or normative framing on that kind of information sharing. But around 2016, 2017, we began to realize we weren't just seeing accidental rumors and misinformation; we were really looking at pervasive disinformation, intentional exploitation of these spaces that was really sinking into the networks and the algorithms in these online spaces. So it went from rumors, to misinformation, to really focusing on disinformation after about 2017. And now we're more and more looking at propaganda as an umbrella term, because so much of this is hard to put in a disinformation framing. There's so much, especially with the onset of AI.
MATT JORDAN: Well, what is it about a crisis situation? What insight does it give us about how online environments work? Psychologically...
KATE STARBIRD: Oh my gosh, there's a couple of ways these moments are revealing. Crisis events are also moments that bring about rumoring. Rumoring is actually a natural response to crisis events, in part because of the uncertainty. With a crisis event, whether it's a manmade crisis like terrorism or a natural disaster, in the early moments, in the aftermath or while the event is ongoing, you might have a lot of information, but you don't know what's true yet. There's a fog-of-war situation, just a lot of uncertainty. And often there's also a time-critical, safety-critical aspect. So there's high anxiety. These are high-stakes situations, and getting the right information can be really important for making decisions. And so, people have a tendency to come together in the aftermath of crisis events to try to figure out what's going on. Historically, that was coming together in person. Now it's coming together in online spaces. And that process, we talk about it as collective sense-making: making sense of what's going on when you don't have perfect information. You're trying to put it together, you're making theories, you're speculating about what's going on. And that leads to rumors. Rumors can sometimes turn out to be true. Often they turn out to be false, or somewhere in between. But rumoring is a natural response to crisis events. And so that's an interesting time to study that kind of behavior, because you've got these bursts of activity. As a researcher, you have these really short windows into this sense-making activity and the rumoring it occasions. And again, initially we entered the space not necessarily saying this is something that's wrong. Rumoring serves informational purposes. It serves psychological purposes: community building, emotion. There's a lot of reasons people would come together and try to figure out what's going on. Additionally, these crisis events became opportunities for exploitation. We've always known that in crisis events there's a bunch of different types of people that converge, and one of them is an exploitative kind of persona. And these became windows of opportunity online for people to gain attention, to put out clickbait content, breaking news, and also political point-scoring kinds of narratives; to get attention, to grow reputation, to get more followers; and then, in some cases, to use those followers later for financial gain, political gain or something else. So, on top of it being a natural time where people are converging, coming together, with lots of attention, lots of anxiety, lots of uncertainty, there was also this kind of exploitation, where we've seen this new class of influencers, many of them, take advantage of these crisis events to grow their audiences.
CORY BARKER: Speaking of coming together and exploitation, one of the great concepts from your work that you explore is this idea of participatory disinformation campaigns. Can you describe that for us and for our listeners in a little bit more detail?
KATE STARBIRD: Yeah, I mean, I think if we go back in time to maybe 2016, that's when the word disinformation suddenly takes off. If you look at a graph, nobody's really using it before then. Even we, studying rumors and misinformation, sometimes used the term disinformation, but rarely, and we didn't quite know what it was. It really takes off around 2016, with, in particular, the Russian government's intentional campaigns to manipulate information spaces. And then other folks realized they could use this Russian style of disinformation, that they could manipulate information spaces in this way. And so, it becomes ubiquitous. Not just the term, but the use of the techniques becomes much broader than just one set of actors. But in 2016, when we were looking at it, we were looking at it through this lens of the Russian state actors and what they were doing, these deceptive campaigns, and there was a lot of talk about this really top-down mechanism where these knowing actors would manipulate these information spaces. And as we continued to look at things, yeah, we sometimes could infer that there was this top down, that there were bots or deceptive accounts organized in the Internet Research Agency, an actual building in Russia, that these folks were being trained there and manipulating the spaces. But by the time we get to 2020, and even before that, our work is showing this isn't just top down. It's also horizontal. It's also bottom up. Everyday people are becoming unwitting agents in the spread of this content, and then they begin to create it themselves. And it's really hard to disentangle the witting actors, the people that are in on the campaign, from the people that are just becoming participants. The first time I started writing about this was 2017, 2018, and our first paper was in 2019, where we're really starting to look at disinformation as a participatory process, as both top down and bottom up, where online audiences are collaborating with influencers and with the perpetrators of disinformation campaigns to create content that broadly fits into something that can be misleading. More and more, I'm thinking about participatory propaganda, because it's hard to fit some of these campaigns quite into disinformation. But the idea is that they're not just top down; they're these collaborations among different sets of actors that have different motivations and are participating in different ways. And many of them may not know that they're part of a disinformation campaign, especially when we look to the audience, who begin to sincerely believe this content. And even when they're not sincere believers, we call them unwitting but willing; they want to believe this content, and they begin to become participants in these campaigns. We can give examples: the 2020 election denialism, or claims about voter fraud. We can unwind them and see that they're untrue. But a lot of the people that were even generating them really believed that they were being cheated in some of these cases. A more recent case is the "they're eating the pets" claims in 2024 around immigrants. We can see some people believe it and they're spreading it. And then others are just sharing this content. They don't necessarily believe it, but it fits into either their political goals or their worldviews about immigrants in the United States.
More and more, that's where most of the disinformation we're seeing nowadays sits... and it's even hard to use that term, because it's so far away from the top-down model we were looking at in 2016.
MATT JORDAN: I've heard you describe this before, especially in relation to the right-wing media ecosystem, as kind of operating like improv theater. What do you mean by that?
KATE STARBIRD: Yeah, it's interesting. So a couple of things there kind of set up that argument. One of them is this reflection after the 2024 election. The Democrats lose badly, and there's a lot of talk about how the right dominated certain sections of media and certain demographics that were using different kinds of online media, podcasts, social media. And we were actually looking at that moment, but we were also sitting on our data from 2020, and we were trying to think: what is the right doing differently in these online spaces than the left? And one of the things we see is that the left is still mad at The New York Times, a lot of them, for not reporting on things properly, according to their ideas of how things should be reported on. But they're still heavily reliant on traditional mainstream media outlets. And the right has really embraced a very different kind of content production, with online influencers and a lot of hyper-partisan media outlets that feed into this ecosystem. And we begin to see this across a bunch of different research that's happening. I'm working with a PhD student, Anna Beers, and Anna was looking at these networks of influencers and how they often had shared audiences. If you look at the network structure, an influencer isn't just surrounded by people that follow them, with another influencer surrounded by different people that follow them. Their audiences were shared in some interesting ways, and they often were interacting with each other and playing different roles in the information ecosystem. And this is primarily on the right. The left has this, too; it's just not as well developed. And so, we see that as part of a more participatory model. The right has done more participatory politics. I can see it as disinformation; they're going to see it as political messaging. We can have a normative argument about the value system represented there, but they're doing it better in these online environments. They've been able to use these online environments in more productive ways. And part of that is that it's not just a top-down message. There is some top-down messaging, but there's also this empowering of audiences to be part of the show. So, we've got these influencers that are on stage with these shared audiences. But the influencers don't just send their message out to the audiences. The audiences actually shape what the influencers are doing by giving them feedback, by seeding stories, by saying, "Hey, you know what? I saw somebody in my neighborhood, and I think they were eating pets." Or, in the case that we were seeing, a lot of people saying, "The Sharpies were bleeding through my ballots. Is that voter fraud?" And then the influencers are like, "Yes, that must be voter fraud," and picking it up. So, we see that the audiences are able to contribute to the show in a way that aligns with the participatory nature of online environments. It's very effective. It empowers audiences. Audiences feel like they have agency in the political messaging. And in 2024, when we started conceptualizing this, and we had a few papers earlier, we started to talk about improvisation on the right and how their messaging is more improvisational, which matches the dynamics and logics of these online environments. And so, then we say it's kind of like improv theater.
So, we have a metaphor. Danielle Thompson and I fleshed that out a little bit for a paper, but I feel like it's an interesting way to think about how different political parties have leveraged these dynamics differently. And maybe not intentionally. It's just part of the way these parties have evolved, in part because the right, in their minds, had to use different media ecosystems because they didn't feel like mainstream media were serving their needs. And the left stuck to this mainstream media, which probably was no longer serving their needs either, but they have had a hard time figuring out how to develop a similar kind of ecosystem. There's a danger in participatory politics, which we've seen in other work, where it can spin out of control, where audiences keep demanding more and more content. Audiences can radicalize their influencers, I guess, is what we've seen in Ong and Cabañes' work in the Philippines. And some of our work has shown similar things. I'm sure there are other people doing this work, too, showing how these participatory cycles can actually spin a little bit out of control as the audience demands more and more, and gives positive feedback to more and more problematic messaging.
CORY BARKER: I'm curious, given your great work underlining the role of participatory disinformation and the role of mid-size influencers, or random people just wanting to participate in these ecosystems and communities online, about the role of President Trump as sort of the number one focal point for the circulation of disinformation. If we go back to 2015, 2016, he's in many ways the catalyst of so many of these conversations in public discourse, and his role keeps evolving as this ecosystem evolves. I'm thinking so much about how, in relation to our current predicament with the US war on Iran, every day he says something to the effect of, like, the war's over, we've won, and then a few hours later there's some sort of post or public comment that actually we're going to escalate and we're going to double it. And that straight-up lying, or confusion, or yo-yoing effect is hard, to some of your earlier points, to put into a box like disinformation. So, I'm curious, how do you view President Trump in this ecosystem in 2026?
KATE STARBIRD: Yeah, in 2026, I think it's changing. And so, I'm going to start with how you ended that question. I can look back at President Trump, and pre-President Trump, so, Donald Trump prior to becoming president the first time and even in between, and see that he's played different roles in this ecosystem over the course of his life and over the course of his political career. I would say, initially, his style of political communication was just really well matched to our online dynamics, and part of that is his improvisational style, his ability to just kind of talk and bounce things around and not necessarily be committed to sticking by the thing that he said before, which we're now seeing pretty acutely. There's a different term that I haven't mentioned yet. We talked a little about disinformation, mentioned rumors, but there's also the term bullsh*t. And there's a whole book on bullsh*t, which I find to be very revealing. I'm on the spot; I can't remember the author right now. But I think it's a really valuable term, and I'm not using it just to use a profanity. I think the definition here is really important. And that is content, claims that you're making, that may be true or may not be true, and you don't care whether or not they're true. Bullsh*tting is about saying what is useful to you, whether it's politically useful or personally useful or whichever. Bullsh*t is content where the truth value just isn't important. It's all about the persuasive value, or the rhetorical value, or something else. And so, Donald Trump is a very good bullsh*tter, right? It doesn't matter to him whether or not it's true. He's trying out different things, trying to see what resonates with his audience, which is one of the reasons he's so well adapted to this online environment where you can get that feedback, and he's happy to shift to something else. I can't speak to his mental and intellectual state here in 2026. I think there is something very different about his position right now. He's not as central inside the networks, but he has his own platform, and his content goes out everywhere. And it sets the stage for a bunch of people who are going to improvise with it: either apologize for it, explain it, or celebrate it, whatever it is. So, there's still interaction between what he's doing and all these folks. But also, at this point, there are so many other voices, whether within the government or outside it. There's a thin line between the influencers on the right and the people that are actually in our government right now, which is one of the fascinating things for someone who's been studying this rhetoric and some of these folks for so long: to see them in positions of power. They now have accounts that represent other parts of our government that are doing similar kinds of improvisational and propaganda messaging, so that Donald Trump's account is now complemented by these others in a way that... yeah, I don't have a simple way to characterize it. It's one of those things that probably won't become clear to folks like me for a couple of years. So, I'm not going to be super helpful here.
But certainly, the dynamic is changing. And there's a recognition that one of the reasons Donald Trump has been president of the United States twice now in this era is that he's been particularly good at the kind of political rhetoric, or bullsh*tting, that is just really effective in these online spaces. And he's been supported by people who have made sure that he doesn't pay a political price for the negative impacts of that kind of content production or messaging style.
MATT JORDAN: I heard a story on NPR the other day about how a lot of people around the globe were adopting his communications strategy. In particular, this was about how Iran had adopted trolling as a propaganda strategy. Why is trolling so effective, given the affordances of our digital platforms and media ecosystem?
KATE STARBIRD: We'd have to go back a long way to go through all of the decades of research on why our online environments afford certain kinds of social and communicative behavior that you just can't get away with in person or in prior manifestations of our communication technologies. That's not to say that this kind of style wasn't effective in different ways before; we've also seen propaganda at different times. But there's something about this format. The hard thing is that trolling means different things in different eras. Trolling initially meant just trying to get a rise out of people, trying to get them upset about something. And that's been a feature of online environments from the start. One of the first studies of email, and I think this was still among only academics and researchers in the very limited ARPANET email era, already saw flaming, where people would get mad at each other and say negative things, and that was not even anonymous. And they theorized that it had something to do with the fact that, because you're not in person and can't necessarily see the results, some of our natural empathy or our natural moderation of our speech goes away in these online environments. So even just the agitation and the ability to say things that are harmful is different in online environments. But then you have the layers of anonymity. I would say the anonymity likely contributes to people being able to say things without consequences. People can experiment, but also violate prior norms, and then also use it for different kinds of exploitation. And our more recent definitions of trolling often include these anonymity-driven behaviors. But then there's just the changing of norms. One of the interesting things, as a researcher on Twitter in 2010, was to watch how quickly social norms were changing, because people were interacting in a completely new way and trying things out. We could see the different ways of mentioning people, and retweets, and what's okay here and what's not okay. Well, those norms have now shifted in these online environments. They shifted away from what we would expect in offline environments. And I think that's part of it: these online environments have developed, their infrastructure has developed, at the same time as the permissiveness for trolling, and the norms around it, to where there's no social consequence for negative trolling. I got a new family member, and they were trolling us in a physical space. Like, "Oh, I like to troll. Online, I like to troll." And I'm like, "I'm sorry, I don't like it online either, but you can't bring that into my family's space." But people are bringing that over from that space and thinking that it's okay. So, our norms have changed around what kinds of social interactions are okay or healthy in different places, in different ways.
MATT JORDAN: There was this moment after the insurrection, after January 6th, where it looked like there was going to be this attempt by internet platforms to clean things up. They kicked off some of the trolls, like, Donald Trump and some of the worst spreaders of insurrection-type rhetoric. They put a lot of money into their trust and safety teams. And it looked like, for a moment, things were going to get better. What happened? Why did they just say, “Ah, we were just kidding”?
KATE STARBIRD: I mean, I think this is the stuff of many, many dissertations in the future. And some of that is still taking place. I would say that Republican strategists and communicators were very effective at a strategy that developed a little bit from the ground up. They were throwing spaghetti at the wall, trying to figure out what to do. But after January 6th, they needed to switch the narrative. They wanted to change who the heroes were and who the villains were. And so, they redefined what happened on January 6th by saying that the real perpetrators, the real bad thing, was not this crowd of people that were motivated by falsehoods to go and try to disrupt the results of the election. The real problem was the internet companies and the people and organizations that had tried to stop those lies, or tried to counter those lies and call them out. And so, they managed to convince themselves, and actually a large number of other people, that the problem wasn't January 6th. The problem wasn't the lies that Donald Trump and others told to motivate and mobilize the mob on January 6th. The real problem was censorship. That somehow the truth about what had actually happened in that election was censored, and that that was the real issue. And this blended into some of the things that happened around Covid, and it turned out to be really, really effective. They tried other narratives. They tried, no, it was really Antifa. They tried a lot of other things, but they really did a great job of shifting the narrative around January 6th from "these falsehoods were bad, this was a bad thing that happened, look at the results" to "oh no, the bad thing was actually that anyone ever tried to do anything about it." And they were really effective. And that narrative goes on today. Donald Trump and the current U.S. government just settled a lawsuit with themselves, with their own plaintiffs, around so-called censorship by the Biden administration, which is a farce. If you go look at the actual settlement, it tells them that they're not allowed to do things that nobody ever did. But anyhow, they will celebrate it. So, it's just really interesting. They used the courts; they used social media outrage and different kinds of things. And, to be honest, I've been studying this for a long time, and people have always thought they were being censored in online environments. It was a folk theory held by many people. Back in 2010, after the Haiti earthquake, I was seeing these digital volunteers saying, oh my gosh, my content isn't getting out there; I'm being censored for my political views. And it turned out that they were posting so often that Twitter's filters thought they were spam, and were labeling them spam and taking their posts down. So, whether it's been true or not, everyday people have always had these really strong folk theories. And I think the Republicans and Donald Trump and his supporters were very good at leveraging that folk theory and turning it into this deep story of censorship, where their audiences believed it, their influencers were happy to propagate it, and they had a legal apparatus that went about trying to make that narrative into something that a large number of people believed. And their fight against censorship is still going on, even as this administration censors and attacks freedom of speech like no other in the history of the United States.
So, it's been an interesting evolution to watch both from the outside and unfortunately, as a researcher in this space from the inside as well.
CORY BARKER: In a recent Substack post about Trump's claims related to the Iranian interference into the 2020 election, you have an aside where you explained that Iran's efforts were an influence operation, not an interference one. How should we be thinking about those as two different things?
KATE STARBIRD: Yeah. So, this is in reference to the 2020 election. Let me go back there and give you a little context. Our team at the University of Washington was collaborating with a team at Stanford, and the team at Stanford had researchers who, maybe along with one or two other teams, identified that there was an Iranian influence operation in 2020, where agents of the Iranian government impersonated Proud Boys. They pretended to be Proud Boys, and they wrote intimidating letters and sent them to Democratic voters. So, initially, we had all these voters saying, oh my gosh, the Proud Boys are harassing us. And then these groups at Stanford and other places identified that, no, Iranians are pretending to be Proud Boys. This was exposed, and it was exposed very quickly. And at the time, inside the Department of Homeland Security, there's an office called CISA, the Cybersecurity and Infrastructure Security Agency. And they put out bulletins. They made sure people knew, made sure people understood it. So, this was almost a nothing burger. In the end, it didn't actually intimidate people. And it's hard to know what the Iranian government's motivations were in terms of who they were trying to help; it may have just been trying to make people less trustful of the process. Fast forward to Donald Trump's statements as the US is doing this invasion of Iran, bombing of Iran, or whatever this operation is. Donald Trump tries to use this claim of Iranian interference in the 2020 election as part of the motivation for the attack on Iran. And it doesn't make any sense. But it ties back to the repeated claims by Donald Trump, which he continues to make to this day, that the 2020 election was rigged, and somehow he's trying to tie the Iranian government into being part of that rigging, which is asinine. It just doesn't make sense. And at the same time, there's this huge hypocrisy at the core of it, because the people that helped point out that campaign and make sure it was identified and stopped got called censors later by the Trump administration, and were silenced and defunded. And the Stanford Internet Observatory, the group that actually helped call out that campaign, is no more. So the layers of hypocrisy around his attempt to use the influence operation from Iran as part of his motivation for the war are really frustrating, as a person who had some distant insider knowledge of what actually happened in 2020.
MATT JORDAN: It's interesting that America, especially after World War I, developed a blossoming propaganda studies tradition. All these influential intellectuals, like Walter Lippmann, were part of those initial studies. And these efforts were rebuilt during World War II, then stopped around 1942. But you've started to introduce that word, propaganda, back into the discourse. And a lot of times people stay away from that word. Again, like you said a second ago, you'd think that this would be a time when that would be something we'd really want to talk about. With the ability of Iran and China and Russia to get their information over the borders into our information ecosystem, you'd think this would be something we'd want to be doing. And yet, anytime you want to do that, or you do that, or anybody else does, it's called censorship. It's just a very strange moment that we're living in.
KATE STARBIRD: There's a lot of experimentation happening around propaganda, disinformation, manipulation of these online spaces. The folks that are doing that experimenting are getting very good, and they probably know a lot about how it works. They're able to get pretty quick feedback about how effective they are. Foreign governments, our own government, influencers, any number of people are very good at doing this. But to study it, to call it out, and to draw attention to how it works, for some reason, that is the thing we're not allowed to do right now. Though I have students banging on my door wanting to get in and study these kinds of things: undergraduates, PhD students. So, a lot of people recognize that there's a real need we have as a society: understanding what's happening in information systems, helping people navigate them, developing new information literacies, developing technological solutions that layer onto the platforms we have to help people see where information comes from. What's the information provenance? Who created this? How did it go viral? Add AI to this and it's just head-exploding. There's so much need. And at the same time, we have a government that is basically trying to defund every group doing this kind of work, and not just defund us, but smear us, pull us into interrogations in Washington, DC. I've personally been through two of those, and it feels purposeful. I mean, there's also the chance that maybe they believe the conspiracy theories that we somehow were pulling strings to censor millions and millions of tweets, or social media posts or whatever, which is just absolutely untrue. But I think it's hard to understand except that the people who are gaining power through the use of these techniques, propaganda, disinformation, manipulation of online systems, don't want the rest of us to understand what's happening and to think about what we could do about it. And that's where we're at.
MATT JORDAN: Yeah.
CORY BARKER: A number of your projects over the years, and you've talked about it a little bit here today, have involved observing, collecting and coding tweets from Twitter, now X. I'm curious how the transition from Twitter to X under Elon Musk has changed that platform, both as a space for community and for disinformation, and as a place where you actually do your work, as a potential research site. How has your relationship to, or observation of, X evolved over the last few years?
KATE STARBIRD: Yeah, that's an extremely good question. When I first started doing this research, from 2010, when the Haiti earthquake happened, a friend gave me my first starter code for how to do the collection, and I built the infrastructure from there. We could collect tens of millions of tweets a day about unfolding events, crisis events, and could study them. We built an infrastructure to study them in near real time, to be able to do all sorts of analysis, network analysis to understand the structure, what was happening, all of these things. When Elon Musk took control of Twitter and renamed it X, they basically shut down all of that access, and initially he decided to charge $42,000 a month for some tiny subset of what we were getting before. And of course, $42,000 a month. Why does he choose that number? Because it's a marijuana reference, because he's being so funny. So, he's trolling us as he's taking away the access, and he doesn't want it to be an open platform anymore. And there's also maybe an argument about the AI stuff: he wants to keep his data for himself so he can sell it into AI. Okay. But the researchers weren't using it to compete with his AI product. We were using it to study things. So, we now have very limited access. I've got a couple of different views, a couple of different small windows, but at most I can download 10,000 tweets per day, and there's a cost to that access. On top of that, there's the platform itself. I used to take content, do live collections, show them in class, use them as examples, and be able to teach students how to analyze this data. I took a download of 10,000 figure skating tweets, or social posts, I don't know what we call them, X posts, whatever, from this most recent Olympics and gave them to some students for a visualization exercise, and realized that it was just not okay. The content was so problematic, and these were just figure skating tweets, just the most highly engaged ones, the most visible stuff on that platform, that it felt uncomfortable for me to give it to master's students. That's how toxic and kind of gross that platform has become. So, unfortunately, I don't have the visibility to make big characterizations of what's happening there. It is still a place where, during crisis events, a lot of people are sharing a lot of information. It's still a place for people to converge, to talk about sports and other kinds of topics. And yet it feels very hostile to some of the values that seemed to be part of its earlier iteration. So, yeah, the platform's changed. The content is more problematic. And we have just tiny little windows that we can study, where we used to be able to study so much of what was happening there.
MATT JORDAN: So, there's a general theory that all the platforms have kind of gotten ensh*ttified, this process of degradation that they go through. And a lot of this did happen after those guardrails were taken off, when they stepped back into this orthodox libertarian view of censorship and just said, hey, Section 230, we hide behind it. There was a little bit of news last week that I just wanted to get your thoughts on. Meta lost two cases related to this, about their obligations to their users. One was that it was really easy on Meta platforms for predators to prey on minors. And the other was that both YouTube and Instagram were held in court to be addictive by design, because there are records of them talking about, hey, this is addictive. So, I wonder if you think this is a moment where there might be some calling to account, whether this might be the beginning of something else in relation to the obligations of these platforms to their users.
KATE STARBIRD: Yeah, I think this is a hard case, because I've seen really good arguments on both sides, not just about the results of these cases but about the implications. On the one hand, we have long known that these content-based approaches, which problematize the sharing of a particular piece of content, are not the right approach to understanding the harms of these online environments. Maybe there are a couple of cases where people are doing something that's really terrible and criminal and needs to be prosecuted. But if you look at the range of harms, a lot of it has to do with the design of these platforms, whether it's around attention dynamics or around recommendations that have led to certain kinds of influencers being at the top of these information environments. There are so many things about the design of these platforms that are intertwined with some of the toxicities that we're seeing in our information spaces broadly, and possibly in society. And I know there are a lot of people that are very worried about children, and for good reason in many cases. There are also a lot of worries, and I think Mike Masnick is somebody that I've read a lot on this, and a few others, that some of these case results may further empower the big companies, because they may make it really hard for smaller companies to compete. That's been one of the worries around reform of Section 230 for a long time: if it becomes financially problematic, or you can get sued, the smaller companies won't be able to compete, and it would basically lock things in. Yeah, Meta has to pay a big fine, but they can afford a big fine, and they will continue to dominate these spaces. And I think that's a legitimate worry. I don't know how to navigate that. How do we ensure that there are healthier spaces for online interactions without further reifying the power of the big folks that are already there? I think that's the challenge. There are also other concerns around children. Danah Boyd and others, folks I've read for a long time on this, point to mixed results around access to online environments, and totally taking that away from certain children, especially children that are LGBT or other kinds of folks who have a hard time making connections in other parts of their lives, can be really harmful. Because social media are actually a place where people can build community, and that community can be healthy. We've been focusing on unhealthy community for a while, but there are very healthy ways for people to find support. And so, there's a risk of throwing out the baby with the bathwater in some of these decisions as well. It's just not easy. And that's one of the things about all of this: these are really hard problems, hard problems where there are good arguments on both sides. And unfortunately, what we see is not necessarily people trying to come together to find hard solutions to hard problems; we've often seen people exploiting these hard problems for political gain. If we actually had people coming together to solve hard problems, we might be able to make some progress. But unfortunately, we can see a lot of risk with some of these recent court results, of how they could be exploited in political ways that could make things even worse, especially for groups of folks who are often under-supported by society.
CORY BARKER: I wanted to ask about the broader influence or impact of a lot of the things we've been talking about today, like participatory disinformation, outside the realm of politics, even if we broadly define politics. I feel like, and this is partially anecdotal, there have been a number of mid- to high-profile examples in realms like sports or entertainment where the proliferation and speed of rumors and disinformation spreading online creates almost a temporary mass hysteria event, with people convincing themselves that something has happened that has not happened. I'll give you two examples, both of which are kind of random. One is from entertainment: at the end of last year, there was a fairly large, robust conversation online about there being a secret finale to Stranger Things, or scenes that had been secretly deleted, to the point of cast members doing promo on press tours having to answer questions about this fan response. The other example is a little more personal to my heart as an Indiana University grad: in the lead-up to the College Football Playoff championship game, there was a big-time rumor that Indiana was suddenly so good because they had somehow hacked the feeds of practice cameras and things like that, and somehow knew all of the plays. And that was really going around in the subculture of online college football fandom. Those seem like such clear examples of rumor, of intentional, potentially malicious disinformation, all of these things circulating in all different directions, and then, when you encounter them on your feed, people believing them to be true. This type of disinformation, rumor-mongering, paranoia mindset has basically spread into all of these spaces of our lives beyond politics. And of course, as you said, rumor predates the internet and all of these things. But I'm just curious what your reaction is to examples like that, or incidents you've encountered in research or in your personal life, where it feels like these things go far beyond rumors about immigrants or elections, and into spaces that we deem, quote unquote, less important when this type of research is done.
KATE STARBIRD: Yeah, absolutely. I mean, urban legends have always been part of the ways that people make sense of events. And sports, oh my gosh. The thing about sports is everybody always sees everything through their own frame. We perhaps get even more encapsulated by our sports identities than by our political identities. And I actually think sports is a great place to study these things, because we can take the political away and look at some of the same kinds of dynamics. We've got a lot of research to say that we're more susceptible to rumors when they align with our political identity. Well, I'm pretty sure the same is true in sports, just having been an athlete and a fan. I have a parent who never once saw me commit a foul in any of my basketball games, and the refs were always against us, according to that particular parent. And it gets at the psychological and identity-based individual vulnerabilities to some of these things, as well as the online dynamics, because we can all connect now, and these things can spread very quickly. So, in some ways, the sports context, and I've seen a couple of papers in the entertainment context, not as many in sports... I've always been surprised that there aren't more studies in that context, because they actually can apply to politics, and yet the studies themselves are a little bit less polarizing politically, so it may be easier to study some of these dynamics there. I will say, having been in this space for a long time and practicing empathy for the folks that we study, not the people that purposefully manipulate spaces, but the folks that get caught up in things: I don't know that we want to live in a world completely without conspiracy theory. Some things are harmless, or mostly harmless, at the end of the day. And it's good to have variations in how we interpret reality. Every once in a while, a conspiracy theory turns out to be true, so it's important to have people putting them out there. They couldn't possibly all be true; it's good to remember that, too. So, there are some of these cases where, when they're not super harmful, it's just part of human nature. People Magazine had gossip, and we had gossip long before People Magazine; the magazine just encapsulated it. That's part of what people do, and gossip about celebrities and their lives is a community-building exercise for some people. So, we should think about the motivations. Why are people participating? If folks want to do that in a playful way around sports, that's one thing. If it's motivating harmful hate and harassment, how do we address that? But I don't know that I would say people shouldn't rumor about this, this is bad, this all should be stopped. It just doesn't make sense. Human nature is such that part of the way we make sense of social configurations and anxiety and crisis events and all these things is to talk about them, to speculate, to theorize. And our modern information environments have definitely supercharged that, in ways that are funny in some cases, and horrifying, problematic and world-changing in others. And that's part of why we're here. It keeps us busy as researchers, for sure.
MATT JORDAN: There's a Thomas Jefferson quote I really like, which is, "An informed citizenry is at the heart of a dynamic democracy." And you run a center called the Center for an Informed Public. And the idea, I think, is that informed citizens can participate in deliberation if they have a shared reality. So, if you were to run the zoo and could pull a policy lever, what would you suggest to create a digital media ecosystem whose affordances would better serve a deliberative democracy?
KATE STARBIRD: That's a hard question. It's really hard to think about policy levers in a country where we just haven't seen the ability to make productive change, in part because the people in power benefit from the systems working in a certain kind of way. I think the challenge that's unanswered, and I don't know what the answer is, is that the very nature of our attention-based information economy is at odds with this goal of having an informed citizenry, and I don't know how we're going to navigate that. When we first saw Russian disinformation coming over the horizon, there were these early folks I was reading on what happened in the Soviet sphere, where disinformation was so pervasive that people no longer thought they could even know the truth. And democratic governance really isn't possible when you don't think you can know enough to vote. So, the theory is that in a world of pervasive disinformation, people are going to turn to more authoritarian leadership. I hope that's not true. We're in a world of pervasive information pollution. It's very hard for people to know the truth. You go out and ask people, especially with AI, and it's very hard for them to say that they can trust the information that they have. And we're having to evolve very quickly as a society, from a world where we thought we could trust things to one where we no longer think we can, and figure out what the in-between is. And I really do feel like there's a mismatch between our attention-based economies and this goal of having an informed citizenry. How we navigate that, as a country, and in other countries that face a similar challenge, is going to determine what our political configuration looks like for as long as I can see. I wish I could say, here's the solution. I don't have it. But I think we should try to get people to actually look forward and say, this is a real challenge; let's actually try to solve it instead of trying to exploit it for our own gain. And right now, the people with the levers, whether they are platform owners or people in power, so many of them are just trying to exploit it for their own gain, and not actually trying to solve what looks to be a very anti-democratic turn, if we are to believe that it's hard to have democratic governance when you can't trust what you see.
MATT JORDAN: Thanks so much for sharing that. And maybe what we're demonstrating is that just having a conversation about this is the first place to start more thinking about how to navigate this world. So, Kate Starbird, thank you so much for joining us and for sharing your wisdom with our audience.
KATE STARBIRD: Thank you. Yeah. Thanks again for having me on. This was really, really, really fun. So, appreciate the questions for sure.
CORY BARKER: Well, Matt, a great conversation with Kate Starbird, who is a long-time expert in the field of disinformation and misinformation online. What were one or two big takeaways for you?
MATT JORDAN: Well, I'm always interested to talk to people who have worked on this for so long, to see the transition over time in these information ecosystems, because it is a moving target, to be sure. But one of the things that strikes me is this way of thinking about how these frames get applied: a sticky narrative frame, like "voter fraud just happened," and how that allows people out there to reinforce that narrative and that story, so that they feel like they're part of a group that's trying to help. It's that urge to help their community, even if that community is seemingly antisocial. That's the fascinating thing about it. And I think it gives us a lot to think about. And you?
CORY BARKER: Yeah, a couple of things stuck out to me. To your point, the evolution of these platforms, and of how people view them and use them, is something we talk about pretty regularly on this show. But to hear it from someone who has relied on Twitter, now X, to collect and code information that helps us better understand this ecosystem, and then to get confirmation that that access has essentially been cut off, or held hostage for an insane price, is something that doesn't come up as much when we talk about these things more discursively or casually, even on our show. The literal access that researchers have, or don't have, to so many of these platforms matters. We know the information is there, and we can understand anecdotally that maybe something is happening, but being able to collect, code and collate those things is so important to better understanding these ideas, and it's getting increasingly difficult to do. So, alongside the shift from a positive to a more negative view of social media that a lot of people have had from 2010 to now, 2026, I think we have a similar trajectory of access. Researchers and even everyday citizens used to have a certain level of access to search, filter and find things on these platforms, and now the platforms have closed ranks to make it harder to even understand what's happening, unless we want to pay a lot of money. That's not a great thing for access to information and our deeper understanding.
MATT JORDAN: Yeah. And it seems to me that one thing people can think about is this: if you're on a platform and it's trying to close us off from accessing what's being said there, that may be a platform that is trying to manipulate you. That's it for this episode of News Over Noise. Our guest was Kate Starbird, professor at the University of Washington and co-founder of the Center for an Informed Public. To learn more, visit newsovernoise.org. I'm Matt Jordan.
CORY BARKER: And I'm Cory Barker.
MATT JORDAN: Until next time, stay well and well informed. News Over Noise is produced by the Penn State Donald P. Bellisario College of Communications and WPSU. This program has been funded by the Office of the Executive Vice President and Provost at Penn State and is part of the Penn State News Literacy Initiative.
[END OF TRANSCRIPT]