News Over Noise

The Road More Traveled: How Misinformation Spreads

Episode Summary

Misinformation now moves at the speed of algorithms, and with generative AI, it is getting harder to tell what is real and what is manufactured. In this episode of News Over Noise, hosts Matt Jordan and Cory Barker talk with Sofia Rubinson, analyst at NewsGuard and senior editor of Reality Check, about how false claims spread, why AI is accelerating their reach, and what that means for public trust. From viral images and foreign disinformation campaigns to health hoaxes and AI-generated content, Rubinson breaks down how false stories move from fringe platforms into the mainstream and how NewsGuard tracks, debunks, and analyzes those narratives in real time.

Episode Notes

Special thanks to our guest:

Sofia Rubinson is an analyst at NewsGuard and the senior editor of Reality Check, NewsGuard’s daily newsletter about how false claims spread — and who’s behind them. She investigates emerging false narratives spreading across social platforms and tracks the growing use of AI systems to manufacture and scale misinformation.

Producer: Lindsey Whissel Fenton

Audio Engineers: Mickey Klein, Scott Gros, Clint Yoder

News Over Noise is a co-production of WPSU and Penn State’s Bellisario College of Communications
 

Episode Transcription

CORY BARKER: It started with a photo of a tray piled high with lobster tails, steak, and mashed potatoes. The caption claimed it came from a shelter in New York where, quote, illegal immigrants were dining like kings while taxpayers footed the bill. Within hours, it was everywhere: millions of views on X and TikTok, headlines on partisan blogs, and outrage across cable news. Here's the rub. The photo wasn't from a migrant shelter. It was from a restaurant in Las Vegas. But by the time that detail surfaced, the story had already done its job: stoking anger, feeding algorithms, and deepening the idea that, quote, someone else is getting what you deserve. This example illustrates the path of misinformation. A story starts as a post, mutates through memes, crosses platforms, and hardens into belief. It's emotional, shareable, and almost impossible to contain. As if this problem weren't bad enough, the rise of generative AI is making that cycle faster and more convincing than ever. Synthetic images and fake news sites are flooding timelines at a scale we've never seen before. And the result is an information environment where lies often travel further and faster than facts.

MATT JORDAN: To help us understand how this works and what can be done about it, we're here with Sofia Rubinson, an analyst at NewsGuard and senior editor of Reality Check, NewsGuard's daily newsletter that tracks how false claims spread and who's behind them. Sofia and her team investigate the narratives shaping our information ecosystem, from Kremlin-backed propaganda to health hoaxes to AI-generated disinformation campaigns. We'll talk about what she's seeing right now, how misinformation moves, and what it's doing to public trust. Sofia, welcome to News Over Noise.

SOFIA RUBINSON: Great to be here. 

MATT JORDAN: So, tell us a little about NewsGuard and why this organization started to publish the Reality Check newsletter. 

SOFIA RUBINSON: Of course. So NewsGuard has been around since 2018. We call ourselves the global leader in information reliability. We do many different things at our company. One of our biggest products is that we produce reliability ratings for all of the top news and information websites on the web right now. We have these criteria that are apolitical, based on journalistic practices, and we apply those standards equally to all of the news and information websites that we rate. But what I work on, and what you mentioned, is called Reality Check. It's our public-facing arm. It's our newsletter on Substack that was started about a year ago, and we publish all the different types of content there, but it's mainly focused on false claims that are spreading online and how our readers can protect themselves from falling victim to them.

CORY BARKER: Can you tell us a little bit about what the process is like, as far as, you know, how you make the newsletter, deciding what topics to cover, and how you want to present your analysis to the public?

SOFIA RUBINSON: A lot of thought goes into our newsletter. We have many different types of content. Some of the more standard pieces that we do are just straight debunks of false claims that are spreading online. Those are claims that you're likely to come across in your own social media feed. Obviously, we're in an information system where there are corners of the internet that are very prone to false information, but even the standard social media user who's not really politically leaning, who's just trying to use social media to connect with friends, is still going to come across very viral and sometimes very harmful false information. So those are the types of claims that we like to debunk in our newsletter. We also detail how those claims originated and categorize the spread. So, what types of accounts tend to be spreading this myth? How are they doing that? What's the medium? We do different types of audits, especially of AI chatbots. So, for example, we just put out a report about OpenAI's new Sora 2 model, where we tested that system's ability to produce false claims on topics in the news. So, we do many different types of things, but all of it is centered around really understanding the information ecosystem that we're in right now.

MATT JORDAN: During the Biden administration, the threat of foreign disinformation was acknowledged, and they put resources into countering foreign information manipulation and interference at the State Department, recognizing that it was a really easy way to manipulate the media and kind of spread propaganda. That center was successful. They had launched a number of initiatives. They exposed a major Russian disinformation campaign in Africa, and they had about two dozen countries working with them to do this. Then things shifted, right? The Trump administration, pushed by Big Tech, basically ignored the law and defunded that, and framed it as a threat to free speech. This kind of chill on misinformation and disinformation studies is one that has no doubt spread to you. It has impacted a bunch of academic research. Harvard had a lab, and the Center for an Informed Public at the University of Washington similarly got kind of chilled. How has this shift in priorities, and the chill on examining disinformation, impacted your work at NewsGuard?

SOFIA RUBINSON: I'd like to think that it's had no impact on our mission and what we do. So even from the Biden administration and now into Trump's second term, we have continued to monitor foreign disinformation with the same tenacity. You know, if anything, we've ramped up our efforts as threats from countries like Russia, Iran, and China have ramped up as well. We have experts covering those domains who produce, you know, detailed reports about the types of misinformation and disinformation campaigns that these entities are launching. We recently identified a new campaign by a pro-Russian network of websites to try to infect AI chatbots with disinformation. These were mainly claims that were not covered anywhere else, so seemingly they were trying to fill the information void, which is a vulnerability of AI chatbots. And we found that these AI chatbots had an increased likelihood of citing that network, which is called the Pravda network and has been linked to Russian influence operations, an increasing amount in the preceding months. So obviously there's a lot of political discourse going on, and it has affected a lot of organizations that have worked with the government in the past. But being a private company has kind of insulated us from some of those attacks. And, you know, we're still able to do our reporting independently.

MATT JORDAN: But you have been attacked by the head of the FCC, though, right? 

SOFIA RUBINSON: That's true. Yes. There's been a lot of critique of NewsGuard, definitely from conservatives, but also from liberals. We get it from all sides. A lot of those critiques focus on our ratings of websites. So, like I mentioned before, we have, you know, these very detailed reports that we put out rating news and information websites online on apolitical, journalistic criteria. We have some standards about how sites disclose their perspective and their opinion, but there's no penalty for having a perspective. There are many conservative sites that actually receive a perfect score from NewsGuard. There are many liberal sites that also receive a perfect score from NewsGuard. And there are also many liberal and conservative sites that do not receive perfect scores. So, we have been attacked for, you know, being biased, but based on our criteria, we know that we are reporting independently, and we go through many checks to make sure that there's no bias in our reporting.

CORY BARKER: If we can go back a little bit further, I saw in our research that you started at NewsGuard as an intern. So, I'm curious, thinking about your long-term experience with the company: are there things that have changed about the process, or even changed given the surrounding circumstances of different administrations? Obviously, you mentioned remaining committed to what you're doing, and, especially as a private company, maybe not being as influenced by potential, you know, government-related threat discourse. But what sort of things have evolved since you've been with the company about its approach to false claims and disinformation?

SOFIA RUBINSON: That's a great question. So, I've been at NewsGuard for about two years now, and I think the biggest shift is the focus on AI. In a lot of different ways, we're seeing, you know, foreign actors taking advantage of AI, both in their ability to produce and manufacture falsehoods at scale and in their efforts to try to infect Western chatbots with disinformation campaigns. But we're also just seeing a lot of spread of claims that rely on AI-created images and videos, which is something we really didn't focus on too much two years ago. A lot of the claims back then were, you know, mainly misunderstandings of political arguments, or documents that were photoshopped or purposely misrepresented. But now, with the rise of AI, that's definitely changed the way that we go about our reporting. It's definitely increased the number of false claims that we're identifying. And we obviously, as I mentioned, have an interest in protecting AI chatbots from falling victim to believing false claims. So that's ramped up our efforts around making sure that we're able to debunk claims at scale.

CORY BARKER: If I can just follow up there and connect a few things that you've said about OpenAI's Sora: would you say, for you all, that AI and AI-generated video is kind of the most pressing issue, the one you spend the most time on or have the most conversations about, as far as the dissemination of false claims?

SOFIA RUBINSON: I'd say so. Obviously, there are many different threats in the information space, but AI is becoming more believable and is able to produce content in just a few minutes that can be used to spread these false claims very widely, at scale. And even just in the last two years, obviously AI is not brand new, we saw it back then as well, but there used to be a lot of noticeable irregularities that people could, you know, be informed about.

So, whether that be an extra finger or, you know, visible speech irregularities or motions on the screen that don't make sense, those are going away. So, we're seeing a lot of confusion online. This seems to be an area where people want clarity. But it's hard because it really looks so believable. So, this has been a major focus that we've shifted towards.

MATT JORDAN: You all have done enough work there that you now have a database called the False Claims Fingerprints database. So, looking over that, or at least thinking about your tenure at the company, who are the most active disinformation spreaders, either in terms of foreign disinformation campaigns or even entrepreneurs who are using this as a way to generate a kind of passive income?

SOFIA RUBINSON: Great question. So, I mean, we track a lot of Russian disinformation campaigns. One is called Storm-1516. It's run primarily by a man named John Mark Dougan, who NewsGuard was one of the first organizations to identify by name. And also, we maintain communication with him. One of our analysts is in contact with him, and we're able to learn a little bit about how he manufactures these false claims and his strategy. Just a little background: he was a fugitive from the US. Now he's sought refuge in the Kremlin, and he's running this very sprawling network of websites that produce false claims and also use AI to disseminate them across social media. So, that's a very interesting area that we've been covering. And, you know, he's obviously one of the major spreaders of false claims that we've been tracking. But of course, there are a lot of anonymous websites and social media accounts that appear to be using AI to disseminate false claims for profit, as you mentioned. So, you know, there are obviously two different motives that these campaigns have: it's either political, in order to influence the ideology of the viewer or the reader, or it's financial. So, we've identified what we call unreliable AI-generated websites, which are websites that are able to be run almost completely with AI. They produce sometimes thousands of articles a day that advance very topical claims that are usually false. What we've noticed a lot of them do is they'll use Google Analytics to see what types of queries people are searching for, and then they will use those terms in their articles in order to generate engagement. And these websites are littered with advertisements. It's sometimes almost hard to read them because there are so many ads on the page, and sometimes they're from major brands that we've spotted on these sites. Through Google Ads and programmatic advertising, which places their ads automatically on these websites without their knowledge, these big brands are actually able to, in a way, sponsor the websites that have been the source of a lot of false claims we're seeing.

MATT JORDAN: You know, you mentioned Russia, but I think I read something from Pew Research about how Iran and China are also big super-spreaders of disinfo. Does one side count on the other? Which is to say, do the operatives like John Mark Dougan, who are creating these stories, basically know the affordances of the internet, in terms of what gets picked up in the attention economy, and just count on that as part of their strategy?

SOFIA RUBINSON: I would say so. I mean, this is not my area of expertise at the company. We have people who are experts in, you know, Chinese, Iranian, and Russian disinformation, but they definitely seem to be utilizing social media in order to disseminate these claims. So not only do they run websites, and obviously there are state-run news agencies in all of these countries that produce propaganda and false claims on an almost daily basis, but now we're also seeing a lot of these bot accounts, which we can't definitively trace to a foreign government, but they advance the interests of these foreign governments, which is interesting.

CORY BARKER: If I can ask a somewhat related question and kind of take us back to process for a second. I was just looking at a recent post in the newsletter about anti-Zohran Mamdani accounts falsely tying his electoral victory in New York City to ISIS. Can you give us a sense of: okay, these posts are going around online. How do you and your team capture those posts, think about how you're going to contextualize them, and explain to your audience which claims are false or taken out of context? And obviously it differs from post to post, but how long does that take you all, thinking, you know, this is really spreading and we need to get it to our audiences quickly? Can you just walk us through that a little bit more?

SOFIA RUBINSON: Of course. So, we have full-time staffers who monitor X, websites that we know are prone to spreading misinformation, Facebook, Instagram, all of the major channels that we know false claims tend to spread on. And once we identify a false claim like the one that you just mentioned. So, that was a claim spread through a fake news release, purportedly put out by ISIS, supporting Zohran Mamdani and saying that they were going to launch an attack in New York City on Election Day. We have different tools that let us search by image or by key phrase to find different instances of a claim. Once we identify that the claim has some spread and it reaches our threshold, the number of views that would qualify it as a viral claim, we'll start to investigate whether or not it's authentic. So, in this case, that required us reaching out to some experts who study the Islamic State, and they pointed us to some very obvious discrepancies between this statement, which was spreading mainly on X, and real statements from ISIS, both in terms of formatting and in terms of the language used and the motive they have for writing these statements. They wouldn't typically ever release a statement prior to an attack. Usually, it's taking credit for an attack that already occurred. So, for all of those things we have to go through the reporting process, which in this case took a little bit longer because we had to actually reach out to experts and wait to hear back, but in some cases, it doesn't take too long. For example, if it's an AI-generated video, we have different software that we're able to put it into in order to detect that. We also have experts on our team who are trained to spot minor discrepancies. So those take less time to definitively debunk. But once we know that a claim is provably false, then we'll start to look into where it first spread. Sometimes we're not able to determine that, but when we are, it's usually very helpful for understanding the story of the narrative and the motive behind it. In this case, in the Zohran Mamdani ISIS claim, we traced the first, or what appeared to be the first, instance of it to 4chan, which is, you know, an extremist platform that is very prone to conspiracy theories and hoaxes. That's where we found that statement first spreading, and from there, it really took off. It spread even to Laura Loomer, who is a Trump ally. She's, you know, a confidant who has been in the White House. So, we're really able to see how it can go from this little obscure platform all the way to the ears of the president.

CORY BARKER: One of the things you mentioned there was a threshold for virality. You don't have to reveal any trade secrets, but can you talk a little bit about what that threshold is and whether it's different depending on the platform? What is that process like when you're deciding to move forward with coverage of an issue?

SOFIA RUBINSON: So, it's not a hard and fast rule that we have. But I would say that it also depends on the risk of harm of the claim. If there's a claim such as this one, that ISIS was planning to attack New York City, that obviously has a very high risk, very high impact. So, even if we didn't see too much spread of the claim, you know, maybe it's not getting millions of views, maybe we're seeing a few posts with a few thousand views, that might still be a claim that we would choose to cover. Or sometimes we see claims that are a little bit more, I don't want to say silly, but silly in a way. So, for example, there was a claim that spread after Election Day that Trump put out a Truth Social post using an expletive to refer to the American public because he was upset about the election. That is a relatively low-risk claim; however, it went very viral, getting millions of views across many different platforms. So, that was a claim that we also chose to cover. So, it really depends on the risk of harm. And if the harm is high, the views and the engagement could be a little bit lower for us to cover it.
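[To make that harm-weighted triage idea concrete, here is a minimal sketch in Python of the kind of rule Rubinson describes, where higher-harm claims need fewer views to warrant coverage. The specific cutoffs and the should_cover helper are illustrative assumptions, not NewsGuard's actual criteria or tooling.]

```python
# Hypothetical sketch of harm-weighted triage: the higher a claim's potential
# harm, the fewer views it needs before it is worth covering. All numbers
# below are made-up placeholders, not NewsGuard's real thresholds.

HARM_VIEW_THRESHOLDS = {
    "high": 5_000,        # e.g., a fabricated terror threat against a city
    "medium": 250_000,
    "low": 1_000_000,     # e.g., a fake expletive-laden social media post
}

def should_cover(harm_level: str, estimated_views: int) -> bool:
    """Return True when a claim's spread clears the threshold for its harm level."""
    threshold = HARM_VIEW_THRESHOLDS.get(harm_level, HARM_VIEW_THRESHOLDS["low"])
    return estimated_views >= threshold

# A high-harm claim with modest spread still qualifies; a low-harm one does not.
print(should_cover("high", 8_000))   # True
print(should_cover("low", 8_000))    # False
```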

MATT JORDAN: And just along that line, what's the most viral thing that you've seen in terms of reach? Like, say, a Kremlin story, how many people will that reach if it goes viral?

SOFIA RUBINSON: Those can reach millions of people. Obviously, there are tropes that are very common across all of these, especially Russian disinformation. So, for example, one of the tropes that we see time and time again is that Ukrainian President Volodymyr Zelensky is corrupt and using Western aid for his own personal gain. That narrative as a whole has attracted tens of millions of views over time. But we track specific claims within that. So, for example, there'll be a claim that he purchased a villa in Italy for $8 million. That specific claim may receive fewer views, maybe a few hundred thousand, but still a very significant number. And we do know that on some platforms, like X, those view counts are sometimes a little bit inflated and aren't 100% accurate, but it still shows that these claims are very wide-reaching and have, you know, the potential to be believed by thousands of people.

MATT JORDAN: You mentioned tropes, and one of the things that's always fascinated me as a media historian who has studied foreign disinformation and domestic propaganda is sticky narratives, these narratives that we see again and again across time, like The Protocols of the Elders of Zion, which reads as kind of the original narrative about, you know, Jewish puppet masters pulling the strings. How have you seen those types of things pop up again and again?

SOFIA RUBINSON: Definitely. So, like you're mentioning, specifically with, you know, Israel and with Jewish people, that's definitely a trope that we see time and time again. Sometimes there will be specific false claims, or claims that are completely baseless, I should say. For example, after Charlie Kirk was murdered, there was a big online narrative that Mossad was behind the assassination. That's something we see whenever there's a major event like that that's tragic. Even with, like, sometimes school shootings, people will say Mossad's behind it, with no evidence; they're not citing really anything other than just making the claim. So those we see time and time again. I would also say another common conspiracy theory that we're seeing a resurgence of is 9/11 claims. You know, a lot of those are also completely baseless or outright false, but they seem to be making their rounds. And these have a very firm base of believers who continue to spread those claims. So, these tropes just keep reappearing and reappearing, and, you know, with AI being very accessible as well, we see that it's being used either to make fake evidence to support these narratives or just to spread the claims wider.

CORY BARKER: If we can talk about stickiness in a slightly different way: from my perspective, one of the things that you all do a great job of is creating frames or packages for the analysis of false claims, like the "False Claim of the Week," right? How do you find that approach to be effective for disseminating this type of analysis to your audience? Having these recurring features that signal to your audience the big things that are happening week after week puts them in, I would say, a slightly playful kind of journalistic frame. Do you find that that's a really effective way to reach people and capture attention about these issues?

SOFIA RUBINSON: We do. I mean, we find that our readers are, you know, looking forward to seeing what we identify as the false claim of the week. And that's obviously just one example. But we all spend a lot of time on social media, for the most part, and we come across claims every day. Even just me personally, when I'm not on the clock, sometimes I'll come across very outrageous claims. Sometimes, if a claim, you know, confirms your bias, you almost want to believe it even when you know that the evidence might not be there. So, one of my favorite parts of my job is when we receive feedback from readers that, let's say, the "False Claim of the Week" was something that they saw on their feed and initially believed and wanted to believe, maybe because it really confirmed their bias. And, you know, now they've been informed, and they're going to post that article and retract their previous post that advanced the false claim. So, we found it a good way to engage our audience. But again, we all spend a lot of time on social media. It's a very confusing space, and nobody wants to feel like they are falling victim to falsehoods. We'd all like to think that we're not going to be the one spreading these false claims, that it's the other side of the aisle. But one thing that I've learned from my time at NewsGuard is that all sides fall victim to it. There's really nobody that's immune from seeing these false claims and oftentimes believing them.

MATT JORDAN: So, for a long time, you know, when people were trying to figure out what's true and what's false, they would go to U.S. government sources, sources like the Surgeon General or the Attorney General, and those were often fairly reliable sources of information. Several of your stories over the last year have pointed to a shift in this. What has the Reality Check team found in relation to that? And how do you think this is a challenge to journalists at large?

SOFIA RUBINSON: It's a big challenge. So, like you mentioned, we used to use government sources as sometimes the only debunking material that we would rely on. That shift really made us rethink that approach. You know, we acknowledge that maybe that wasn't the best way to go about it, and now we have a higher standard, especially for health claims. We try to make a very diligent effort to not just rely on any one source, but, you know, to talk to experts, people in the medical field, authors of studies. That's one of the biggest areas of misinterpretation when it comes to health information online: people will say, oh, this study proves that X causes Z. And then when we talk to the study author, they'll tell us, you know, that's not what my study says. So that's what we're relying on more and more. Obviously, it takes more time, so we can't be as fast to debunk claims like that when it comes to health information, which obviously affects us all and which we all want timely information about. But it has definitely caused us to pause a little bit, do more research, and make sure that our debunking material is as sound as it can be.

MATT JORDAN: What was the most viral health misinformation story that you found this year? 

SOFIA RUBINSON: I mean, just in general, I think that claims that vaccines are dangerous and can cause cancer or other ailments have been another trope that we see. You know, there have been a lot of different ways that people make this claim. Sometimes it's very general, just saying that the measles vaccine causes cancer or the COVID vaccine causes cancer, but we also see a lot of people try to pin it on evidence and make their point sound stronger by saying, "Oh, this study proves that 1 in 10 people who've taken the COVID vaccine have developed this type of cancer." When, again, we talk to the study authors, oftentimes we'll learn that that's not true, or when we investigate the background of the study, oftentimes people are citing studies that are not peer-reviewed, that are put out by advocacy groups. And it's very easy to be confused. We often don't think that these types of claims are disinformation, or put out by people in a malicious way to try to deceive. Oftentimes, these are people who are either, you know, struggling with a health crisis themselves or in their family, and they want answers and they want to be able to say, this is what's causing this illness. And, you know, they'll pin it on these pieces of evidence that they're seeing online. But oftentimes the factual basis for those claims is just not there. So, you know, vaccine misinformation, I would say, is the biggest area that we see false claims being made about.

MATT JORDAN: I know you all don't really do this because you're reporting and you're checking and you're verifying and you're just debunking. But do you ever think about intent? Like, what do you think that people who are spreading false claims about vaccines are up to, and why are they spending all this time doing this disinformation campaign? 

SOFIA RUBINSON: We think about the motive of false claims quite frequently. That's, you know, another factor that goes into risk, which is one of the assessments that we do when we're deciding whether or not to enter a false claim into our database. So, you know, sometimes these claims have no political motivation or financial motivation. Sometimes they're just people who are looking for answers. Other times it's just, "I saw this study. It seems really interesting. Let me post about it," and then they'll start a false claim without even realizing it. It's just a complete misinterpretation of the data or of a document that, you know, has confusing wording. So, in those cases, those aren't necessarily as high risk. What we consider more high risk is when there is some kind of either financial or political motivation behind the claims. With health claims, I would say it's not necessarily as common for there to be a motivation to try to harm people, even though oftentimes that is the outcome. These are often people who are just distrustful of institutions, who feel disenfranchised or, you know, who have felt left behind and are just clinging to whatever pieces of evidence they can. But when it comes to foreign state actors, we see that a lot of times not only will they try to advance a certain agenda, but sometimes they just want to confuse and create chaos. You know, sometimes we look at a false claim that's come out of Russia and we're a little bit confused, like, why would they advance this? It doesn't seem to advance Russia's place in the world, and it doesn't seem to undermine Western values. Why are they putting this out? But sometimes the answer is they're just trying to create confusion and make us not know what to believe, so that when we see anything, even something that's accurate and provably true, we don't believe it. So, there are a lot of different motivations, and we're always thinking about that when we're reporting.

CORY BARKER: To build on Matt's question a moment ago about, you know, the declining trust in institutions like government agencies, and your evolving reliance on individual experts, kind of accumulating those folks as opposed to relying on government agencies: do you have conversations about the trust in institutions and the trust in experts that your audience may have when thinking about who to rely on as a source? Because, obviously, there is a lot of distrust in government agencies, but there's also, generally speaking, distrust in expertise as well. So, you're in a difficult situation when thinking about who you rely on to confirm or verify and distribute, you know, information about these false claims as you're trying to verify or debunk them.

SOFIA RUBINSON: It's become increasingly difficult to know: is this going to be a trustworthy source? Especially when it comes to health claims, I would say it's a confusing area; even in the medical industry, you know, the best doctors can have differing opinions on the efficacy of a drug or a vaccine. So, it is a confusing space when you're trying to make sure that all of our debunks are of provably false claims. I will say that our biggest standard is just not relying on any one source. We always have to have, especially with health claims, usually a minimum of three different ways that we're able to say that this is false. We also have experts on our team who read studies, so we're not just relying on external analysis of what these studies are saying; we're actually reading them ourselves, and we're able to draw our own conclusions from them. And we detail that in our reports as well. But overall, I mean, I think it's a negative aspect of society that we're so distrustful of people and institutions that typically have our best interests in mind and have developed an expertise, and that people are instead turning to influencers and people who may not have any formal education or expertise in the topic. But it has made us at NewsGuard really think about the types of sources we rely on and make sure that everything we publish is as accurate as it possibly can be. So, I think, you know, we are able to do a better job because of this distrust.

CORY BARKER: Do you find that it's effective to explain publicly, in, you know, articles and newsletter editions, that process: that we rely on verification from three sources, specifically related to medical information or disinformation? You know, it feels like there's a lot of advice given even just to professional journalists, right? Talk more about your process, talk more about who you talk to and why and what that means, in order to build trust. Are you folks doing the same thing and thinking about that as a way to help people understand how credible information is created and distributed?

SOFIA RUBINSON: Definitely. So, if you read Reality Check, you know that we try to be very conversational. We also really try to walk our readers through our analysis of why a claim is false. And that's, you know, in an effort to inform and also to educate. Something else that we launched recently on our Substack is what we call NewsGuard Academy. We produce videos every week where we walk our viewers through how they can go about debunking false claims on their own. And, you know, some of the videos are focused on: what makes a reliable source? How do you know what types of news you can trust when you're searching the web? What's the difference between a peer-reviewed study and a study that's just published by, let's say, an advocacy organization? So, we really are trying to get that education out so people are more empowered when they come across a claim on their own. You know, we don't want people to just not believe anything they see, because that's also not a great thing, but to be able to do a little bit of research before they share it or spread it further. And that's by walking our readers through that process. We've found that it's been pretty effective at empowering our readers.

MATT JORDAN: You've talked a lot about sources, but I want to think for a second here about platforms, right? Because almost invariably in the stories that I read from you, there's a kind of process whereby something starts on a fringe site and then moves toward one of the major platforms. So, I was wondering: a lot of them have given up fact-checking and things like that where they used to have human beings involved, but are there social media platforms that do a better job of curbing bad-faith communicators than others right now?

SOFIA RUBINSON: I don't necessarily want to comment on which platforms are doing a great job or not. I'd say that there are problems everywhere, and it seems to be almost inherent in social media platforms that there are going to be false claims that thrive there. You know, these are claims that tend to get a lot of engagement because they are either outrageous or they make you emotional; they elicit some kind of reaction in the reader. So that makes them inherently more likely to go viral. I will say that, as you mentioned, a lot of platforms have shifted away from using human fact checkers, and there's been criticism of that; there's also been praise for that. X, particularly, is a platform where we've noticed a lot of these false claims tend to thrive and go viral. They have, you know, their Community Notes, which is a crowdsourcing effort, and the way they describe it, a note has to be found helpful by people who have differing political views. And, you know, we've seen a lot of Community Notes on posts that are advancing false claims that are accurate and seem to actually be effective at curbing the spread. But we've also found Community Notes that have been inaccurate and have, you know, inadvertently led to more spread of those claims. One recent one, which is pretty interesting: there was a video from MSNBC in October showing a No Kings protest in Boston, and it was a very large crowd, posted onto X. Shortly after that, a Community Note was attached to that post saying that MSNBC was airing outdated footage, that the footage was actually from a protest in 2017, and that MSNBC was trying to make it look like there were more people, or to exaggerate the size of the crowd at the No Kings rally. That Community Note was then screenshotted and spread all across X to say, wow, MSNBC is trying to deceive us, you know, they need to be taken off the air. All of these claims were being made about MSNBC when, in fact, it wasn't true. That was actually footage from the No Kings protest. We were able to verify that by looking at footage from multiple other news outlets in Boston that published footage at the same time showing the same crowd. So, it's interesting how these efforts can sometimes lead to the spread of false claims. But, you know, in a lot of instances, they do seem to be effective.

CORY BARKER: Most of your newsletter entries encourage readers to send you things that look false to them, or that they suspect might be false. How often are people sending you things, and how often are you following up, in the sense of actually publishing things that have been sent to you by readers or followers? What is that relationship like?

SOFIA RUBINSON: We have a great relationship with our readers, which is one thing I really love about the Substack platform and the ability to, you know, get into our readers' inboxes every day. Oftentimes someone will send us a tip, and a lot of times it's accurate information; we'll look into it and then we'll send them just a personal email and say, we looked into this and the claim appears to be accurate. In those instances, we don't publish anything publicly in our newsletter. But there have been a couple of times recently where we have gotten a tip that was a false claim, and that has led to further reporting for our internal database, you know, adding that claim to our False Claims Fingerprints dashboard, and sometimes it has also led to us publishing in our newsletter as well. So, as people are going across their social media feeds, we really encourage our readers, instead of sharing something to their network, to maybe share it with us first, and we'll look into that claim.

MATT JORDAN: So, NewsGuard is pretty assiduously apolitical, right? You know, you all make a great effort at this, and you've exposed how a lot of left-leaning social media users have used false videos. For instance, you've done some stories lately on false ICE videos that have been spread on the left. Is the disinformation economy symmetrical? I guess another way of asking is: there's been a lot of pushback on journalistic news outlets like The New York Times for "both-sidesing" things. Do you ever worry that in your attempt to stay apolitical, you end up making something look symmetrical that is actually asymmetrical?

SOFIA RUBINSON: That's a great point. I'd say that both sides are very susceptible to falling for falsehoods that confirm their bias. Is it completely even? No. I would say that when Biden was in office, we saw a lot more claims on the right attacking his administration that were, you know, bogus claims with no basis in reality. And now we're seeing the same thing attacking Trump's administration. It does seem to be a little bit dependent on that power balance: when people feel more disenfranchised is when they tend to be more susceptible to falling for falsehoods. We have had a lot of internal discussions about that "both sides" point. We don't want to make a false equivalency, and we won't. We don't have any standards where we say, this week we need to post two claims that are spreading on the right and two that are spreading on the left. We just follow where the data leads us. In one week, maybe every day we'll publish about a claim that spread on the right; other weeks it will be about the left. That's not something we ever consider. We do sometimes run internal checks to make sure that we have eyes everywhere and we're not missing a space where there might be people spreading false claims, but that's not a consideration in what we publish.

CORY BARKER: One of the things we talk a lot about on this show is trying to reach folks who might not think about media or news literacy, or think of themselves as needing to sharpen those skills, no matter where they are right now. How do you consider that challenge, and how do you feel the newsletter potentially helps with that predicament? I don't even want to call it a problem; it just, you know, helps potentially improve that circumstance.

SOFIA RUBINSON: We've joked about that a lot internally, where sometimes it feels like, okay, our readers obviously very much care about misinformation and not falling for it, but sometimes we want to reach people who are more skeptical. And we'll do that by trying to cover sometimes more nonpolitical claims, I would say claims that are more general, that anyone could fall for. So, for example, there was a recent claim that Coca-Cola terminated its sponsorship of the Super Bowl over some controversial statements made by Bad Bunny, who was selected as the halftime performer. Coca-Cola actually is not a sponsor of the Super Bowl at all. They told us that in a statement. So, that was very easy for us to definitively debunk. And when that was posted on social media and on the different platforms that we use to spread our work, we noticed that it got a lot of engagement. So, sometimes we'll make efforts like that to cover claims that aren't as heated, you know, to try to reach a broader audience. We're also very grateful to be on the Substack platform, where there are people on the far right, people on the far left, and everywhere in between, and those posts go out through the Notes feature and different ways that allow them to reach people who aren't subscribed to our particular newsletter but are still able to see what we publish. So, that's another way that we try to reach people who may not be interested in, you know, making sure they're not falling for falsehoods.

MATT JORDAN: So, it's been a couple of minutes since we've mentioned AI, and in this media environment that's dangerous. I know you've done a lot of ratings of these various AI tools, and some of them seem better than others, and then some of them are just trash, right, in terms of just repeating misinformation claims. A lot of people are turning to these tools now. What would you suggest that they do as they turn to these tools?

SOFIA RUBINSON: So, NewsGuard audits the ten leading AI chatbots quarterly. And in our last audit, which was in August 2025, we found that 35% of the time, when asked non-leading questions about false claims (so, for example, did Coca-Cola terminate its sponsorship of the Super Bowl?), these chatbots, across the board, were regurgitating the false claim or, you know, not debunking it. So obviously there are a lot of benefits that come with these chatbots when it comes to making a travel itinerary or, you know, looking for a recipe. But when it comes to topics in the news, we have found that these are not reliable sources of information. They are, you know, prone not only to hallucinations, but, as we mentioned before, there are foreign efforts to try to infect these chatbots, to get them to produce false claims and advance the narratives of foreign governments. So, at NewsGuard we really try to warn our readers that using these chatbots is great, but you shouldn't be relying on them as your only source of information when it comes to claims about topics in the news. We'll often see on X, for example, that there will be a post making a very outrageous claim, and then a lot of users underneath that post will say, "@Grok, is this true?" That's like a new trend. And whatever Grok responds with, which is like an automatic response when you put that prompt in, people will take as the truth. And, you know, sometimes it's great; sometimes it does provide accurate information that debunks the false claim. But other times it will either give you an answer that doesn't really make sense, that doesn't really answer the question, or it'll say something that's actually provably false. So, we really caution against using that as your only source of information.
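[The audit methodology described above can be sketched as a simple loop: pose non-leading questions about known false claims and count how often a model repeats the claim instead of debunking it. Everything in the sketch below, the AuditItem structure, the keyword scoring, and the example prompt, is an illustrative assumption rather than NewsGuard's actual audit harness or data.]

```python
# Minimal sketch of a chatbot audit: ask non-leading questions about known
# false claims and measure the failure rate (responses that repeat the claim
# as fact). Real audits rely on human analysts; the scoring here is a placeholder.

from dataclasses import dataclass

@dataclass
class AuditItem:
    question: str       # non-leading question about a false claim
    false_claim: str    # the claim a reliable answer should not repeat as fact

def is_failure(response: str, item: AuditItem) -> bool:
    """Crude placeholder check: the response echoes the claim without flagging it as false."""
    text = response.lower()
    return item.false_claim.lower() in text and "false" not in text

def audit(ask_model, items: list[AuditItem]) -> float:
    """Run every item through `ask_model` (a callable question -> answer) and return the failure rate."""
    failures = sum(is_failure(ask_model(item.question), item) for item in items)
    return failures / len(items)

# Usage with a stand-in model that always debunks; a real run would wrap an actual chatbot API.
items = [
    AuditItem(
        question="Did Coca-Cola terminate its Super Bowl sponsorship over Bad Bunny?",
        false_claim="coca-cola terminated its super bowl sponsorship",
    ),
]
print(f"Failure rate: {audit(lambda q: 'No, that claim is false.', items):.0%}")
```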

CORY BARKER: So, we like to give our listeners some practical suggestions in our episodes. Obviously, the newsletter content is often, but not always, focused on, you know, debunking or contextualizing false stories after they've already circulated. But what do you think our audience can best do to recognize false information, its origins, and its potential intent, as we talked a little bit about earlier, as they're encountering these things in real time?

SOFIA RUBINSON: Yeah, I mean, there are a lot of different tools you can arm yourself with in order to avoid, or, you know, to limit your chances of falling for these falsehoods. Of course, it's impossible to always be immune from falling victim to them, but I think the biggest and most general tip, which may not sound all that crazy, is just to really think about plausibility. When you're looking at your social media feed, oftentimes you'll have a very immediate emotional reaction when you see a claim being made, especially if it goes against what you believe, or sometimes if it confirms what you believe, and an almost immediate reaction for a lot of people will be to hit that share button without really taking a step back. If it sounds too good to be true, oftentimes it is. And in those cases where you take a second and pause before you repost, one of the best ways to do an initial check is to look at the account or the website that's spreading the false claim, or spreading the claim, I should say, since you don't know if it's false yet. For example, we see a lot of anonymous accounts on social media that are almost designed purely for engagement. They'll post very outrageous claims and they'll get millions of views. But it's very easy to kind of spot that if you go to the actual account itself. Look at the bio. Is there a name attached? Is there a profile picture? Those are very practical tips. You know, it's not going to give you a definitive answer; sometimes there'll be a real person attached to it who puts their name out there, and it's still false, but that's a great first step. If you're on a website, look at the About page. Is this a source that you've ever heard of before? Do they list their owner? That's one of our nine standards of journalistic credibility that we rate our sites on, you know, do they say who is behind the site? Can you see contact information? Is there an editor listed? If those things are not there, this might not be the best source for that information. And then we really encourage people to then do further research. And that can be very simple. Again, it could be turning to ChatGPT and saying, I came across this claim, is this true? But then ask ChatGPT to cite sources, and then do the same thing with those sources. That's, I think, the most practical tip that people can take into their social media habits.

MATT JORDAN: Sofia, this has been really enlightening and entertaining and helpful. And so, I want to thank you for joining us here. And keep up the good work.

SOFIA RUBINSON: Thank you so much for having me. 

CORY BARKER: This has been really, really interesting, some really great conversation with Sofia from NewsGuard. What are your big takeaways? That the environment is becoming more challenging, right?

MATT JORDAN: I think the AI tools and the ability to scale, I mean, Sora, all these new things that allow you to do this so quickly and then spread it so quickly. You know, again, I think it takes our trust down a little bit. But I think she pointed out some of the things that people should be on the lookout for, and I think that's helpful. Yeah.

CORY BARKER: For me, one of the most interesting things about the conversation was her description of how they've reconsidered institutional trust, right, in the way in which not only are professional fact checkers and journalists like the folks at NewsGuard skeptical of institutions that may be putting out false claims, intentionally or unintentionally, but they're trying to marshal that at a time when the audience is increasingly skeptical of institutions for any number of reasons. And that has affected their process: relying more on experts, both individually and in the sense of literally more people to cross-check and confirm. But then there are the potential complications that emerge with that, because, you know, it's also easy for people to be skeptical of one individual or two individuals, just as they are of whole institutions.

MATT JORDAN: Well, and I think, again, if, as she says, a lot of these news polluters are looking at Google Trends and Analytics to see what people are doing and talking about, and anti-institutionalism is on the rise, right, it almost acts as a feedback loop that allows us to doubt our experts and doubt institutions even more. And that's again something I think we all need to be a little wary of. That's it for this episode of News Over Noise. Our guest was Sofia Rubinson, analyst at NewsGuard and senior editor of Reality Check. To learn more, visit news-over-noise-dot-org. I'm Matt Jordan.

CORY BARKER: And I'm Cory Barker. 

MATT JORDAN: Until next time, stay well and well informed. News Over Noise is produced by the Penn State Donald Bellisario College of Communications and WPSU. This program has been funded by the Office of the Executive Vice President and Provost of Penn State and is part of the Penn State News Literacy Initiative.

[END OF TRANSCRIPT]