Artificial intelligence is reshaping how we create, consume, and trust information. In this special crossover episode of News Over Noise, Cory Barker and guest co-host Jenna Spinelle explore AI’s impact on news, education, and democracy with Sean Marcus of the Poynter Institute, Pamela Brunskill of the News Literacy Project, and Jenna Meleedy of the National Association for Media Literacy Education. Together, they unpack the rise of deepfakes, the “liar’s dividend,” and how educators are helping students mind the gap between breaking news and verified information. They also share strategies for navigating misinformation, using AI ethically in the newsroom, and maintaining trust in an era when technology can fabricate anything.
This is a special crossover episode between News Over Noise and the Democracy Works podcast featuring guest co-host Jenna Spinelle. Jenna is the Communications Specialist for the McCourtney Institute for Democracy. She is responsible for shaping all of the institute’s external communication, including website content, social media, multimedia, and media outreach. She also hosts and produces the Institute’s Democracy Works podcast. She holds a B.A. in journalism from Penn State and is an adjunct instructor in the Donald P. Bellisario College of Communications. Prior to joining the McCourtney Institute, Spinelle worked in Penn State’s Undergraduate Admissions Office and College of Information Sciences and Technology.
Special thanks to our guests:
Pamela Brunskill is a senior director of education design at the News Literacy Project. Pamela joined NLP in August 2021 after 20 years of experience in education and education publishing. She was a teacher in Clarence and Williamsville, New York; a literacy coach at Enterprise Charter School; and an adjunct instructor at Bloomsburg University, Bucknell University and the University at Buffalo. She has written five books for the education market and co-authored Information Literacy: Separating Fact from Fiction. Pamela holds a bachelor’s degree in elementary education from SUNY College at Buffalo and a master’s degree in education from the University at Buffalo.
Sean Marcus is the Interactive Learning Designer for MediaWise, where he designs and develops learning materials for a wide range of audiences and purposes, from senior citizens to high school students, and from traditional classroom lessons to asynchronous online learning experiences. Sean spent over 20 years teaching high school in Tampa Bay, twelve in public schools and eight in local independent schools. He has taught journalism, television production, creative writing and graphic design, and advised nationally recognized publications. Marcus holds a bachelor’s degree in creative writing and a master’s degree in English education, both from the University of South Florida. As a lifelong learner, he has pursued learning opportunities ranging from bookbinding and letterpress printing to digital citizenship and internet safety. He has presented at the Florida Council of Independent Schools’ annual conference, as well as at journalism education conferences at the local, state and regional levels.
Jenna Meleedy is the Communication Coordinator at the National Association for Media Literacy Education (NAMLE), where she manages the organization’s digital communications strategy. Her work includes running NAMLE’s social media channels, newsletters, and website, and serving as a liaison to partner organizations. Jenna first joined NAMLE as a Social Media Intern in 2023 and later contributed as a member of the Youth Advisory Council for two years before stepping into her current role. A recent graduate of Pennsylvania State University, Jenna holds a Bachelor’s degree in Media Studies with minors in Civic and Community Engagement and Communication Arts and Sciences. Her passion for media literacy is reflected in her previous roles as a News Literacy Ambassador with the News Literacy Initiative and as an Education Advocacy Intern with the Media Education Lab. In 2024, she traveled to the Czech Republic as a Digital Citizenship Curriculum Development Intern, where she created and taught a media literacy curriculum for secondary school students. Coming of age in the COVID-19 pandemic drew her to media literacy as a tool for empowerment, a counterweight to political radicalization, and a framework for mental health and well-being. She is deeply committed to empowering digital natives to use media literacy as a means to foster critical thinking and engage in the democratic process.
MATT JORDAN: Season 4 of News Over Noise kicks off in January, but we were too excited to wait. So, we’re sharing the first two episodes early. Consider it a sneak peek at what's to come.
CORY BARKER: An Australian high school teacher recently ran a simple experiment. She showed her students two short videos, one real and one generated by artificial intelligence, and asked them to tell which was which. By the end of the exercise, a third of the class had confidently called the fake one real. Another third thought the real video was AI generated. The teacher paused and said, if you're this unsure here, imagine how unsure you'll be online. That moment captures where we are right now. Not just with deepfakes or misinformation, but with the daily struggle to trust what we see, read, and share. Artificial intelligence can help journalists and educators verify facts and spot falsehoods. It can also flood feeds with convincing fakes that make us question everything. In this special crossover episode with guest host Jenna Spinelle from the Democracy Works podcast, we're going to explore how AI is reshaping how we create, consume, and believe information. Jenna and I are joined by Sean Marcus of the Poynter Institute, Pamela Brunskill from the News Literacy Project, and Jenna Meleedy from the National Association for Media Literacy Education. Sean, Pamela and both Jennas, welcome to News Over Noise.
PAMELA BRUNSKILL: Thanks for having us.
JENNA MELEEDY: Hi.
SEAN MARCUS: Thanks. Good to be here.
CORY BARKER: So, I want to start with a broad question for each of you. What kind of outreach or campaigns are your organizations doing to educate the public about AI? And in those campaigns, what particular groups are you targeting? Sean, let's start with you.
SEAN MARCUS: Yeah. Actually, today we launched the alt+Ignite series, in collaboration with the McGovern Foundation. That's a series of AI literacy courses and lesson materials geared towards educators, library workers, civic leaders, journalists, essentially anyone who's engaging with AI. And that is everyone.
CORY BARKER: Pam, what about you?
PAMELA BRUNSKILL: At the News Literacy Project, we have a number of resources on artificial intelligence. Our main audience is always going to be K-12. Two of our biggest resources are our technology lessons. For elementary, it's called Search and Suggest Algorithms. And then for grades 6-12 we have Introduction to Algorithms, which goes over the concept of what algorithms are, how they underlie generative AI, and what that entails. We also have a number of TikToks and videos and posts on social media, short-form teaching about AI. We have some posters, and we have a curated page on newslit.org dedicated to AI so people can find all our resources.
CORY BARKER: Jenna?
JENNA MELEEDY: So, NAMLE has a few resources on our website: parent-friendly and teacher-friendly guides to navigating conversations about AI in the classroom and in the home. In the past, we've hosted an AI summit in person, and then this fall, our Youth Advisory Council is leading a session at our Youth Summit in Nashville about helping other youth navigate how to use AI.
JENNA SPINELLE: So, I want to take a step back from talking about AI specifically for a minute and orient it within the broader framework of news literacy. We are recording this on the kickoff of U.S. News and Media Literacy Week, and I think the 2016 election was one of the things that brought news literacy to the forefront. It brought a lot of things to the forefront for a lot of people, but media literacy was certainly one of them. So, I wonder if you could just orient AI within the broader concept of news literacy or media literacy?
PAMELA BRUNSKILL: At the News Literacy Project, we have five core competencies, or five main standards, that we suggest individuals look at to get savvy with news literacy. The first three are related to differentiating news from other types of information; that's standard one. So, are you looking at news, raw information, propaganda, etc.? Standard two is about the importance of a free press to American democracy and the role of a free press. Standard three is identifying characteristics of credibility, so using the standards and ethics of quality journalism to recognize when something is credible or aspiring to credibility. Standards four and five are really when we get into AI. And that's really important to recognize, because you're not going to have a solid understanding of AI if you don't have the foundation of recognizing what news is in relation to other types of information, the importance of it, and the signs of credibility. So standard four is about verifying and analyzing information, recognizing when something is a piece of misinformation, and AI-generated content can be a form of misinformation. And then standard five is about civic participation. It's using the knowledge and skills you gain from standards one through four and applying them. What is your responsibility and role in the world in relation to AI? So, let's say you recognize that this piece of content on your feed is AI generated. What are you going to do about it? Are you going to spread it, or are you going to call it out? What is it you think you should be doing?
SEAN MARCUS: Yeah, I would kind of throw in there, and it's interesting that you drop the dates, because, you know, MediaWise formed through the Poynter Institute in 2018, right? Really fairly directly out of the 2016 election, ahead of the 2020 election. And it was because of that perceived, and really true, need for more media literacy, more media literacy education. And then we hit the 2024 election, and the prediction was that the AI election was coming. It never completely materialized that way, and we're still waiting for the AI election to appear. But what we really found, as AI was sort of coming out, was that we started with some of these detection and verification skills that were unique to AI. That quickly went away as AI improved, and then we immediately realized we fall back on, like you were saying, those five pillars that you've got. Those traditional verification frameworks and those traditional detection methods for any type of misinformation hold true for AI. So, I think a lot of it is using the frameworks we have for media literacy, in many senses, to put our audiences at ease in the face of this brand-new crazy technology. It's like, yes, it is new and crazy, but it's still the same thing. We're still dealing with interpreting information, so we can apply those same ideas to this new technology.
JENNA MELEEDY: Yeah, I completely agree. I think it's clear that future leaders are going to need to know how to act in a world that's been transformed by AI, however that might look. And so, at NAMLE, we want everybody to have the media literacy skills that they need to thrive now and in the future. And that is defined as the ability to access, analyze, evaluate, create and act upon all forms of communication, definitely including AI.
JENNA SPINELLE: Pam, I want to bring up something from the talk you gave earlier today on campus. I think there's lots of talk, certainly, about mistaking fake content that's AI generated for something that's real. But you brought up something called the liar's dividend. Why don't you tell us what that is and how it plays in here?
PAMELA BRUNSKILL: Yeah, the liar's dividend comes about with AI and our convoluted information ecosystem. Now people are mistaking real content for made-up content, and bad actors will take advantage of that, right? Because for a bad actor to succeed, all they need is for you to not be able to trust anything. It's not necessarily that they need you to trust and believe what they are putting out, but as long as you don't believe the truth of what's really out there, then they can manipulate you and get you to believe whatever they want, or to throw up your hands and say, I can't trust anything, so I don't know what to believe, go ahead and do whatever you want. And that's really what the liar's dividend is.
JENNA SPINELLE: And can you give us an example of that?
PAMELA BRUNSKILL: Well, the example I gave in today's talk was images of war, I think it was from Russia-Ukraine, right? So, people are seeing real images and then discounting them: that's too awful, that's not real, people are just trying to make me think that an atrocity is happening. So, they'll play on my sympathies so I will support one group over another.
SEAN MARCUS: If I can jump in, I have to jump in, because the liar's dividend is one of my favorite ones to teach, because it's so nefarious and sneaky, right? They just jump in and say, oh no, no, that's AI. You know, most recently I saw it with the Signal chat leak of the Young Republicans club. The first response that came out officially was, well, we haven't seen these yet, so we suspect that they've been doctored. I think the phrasing was something to that nature: there's a high chance that these could have been manipulated or doctored. So that sowing of confusion can now come in because it's a scapegoat, right, for bad actors, for folks who can now find a new excuse for that terrible thing that they did: oh, that wasn't real, that was AI. And just like you said, Pam, it's so easy now to believe that something could be fake. It's equally easy to believe that it could be real or fake. So it gives a certain shield to our bad actors. It's very interesting.
JENNA MELEEDY: And I'll say that once you have so much content to consume and you have to evaluate everything to that extent, you really fall back on what is comforting to you, meaning figureheads that you have always believed in, thought leaders or ideologies that are familiar to you. And that really plays into people's confirmation bias when consuming news.
PAMELA BRUNSKILL: I was just going to say that, so I'm glad you got to it. Yeah, the confirmation bias, right? You have to be aware of your own preconceived notions, your own biases, because you are absolutely going to have those activated. When we were talking about the liar's dividend, another part you were talking about, right, is breaking news. So, at the News Literacy Project, we teach students how to mind the gap. In that time span between when an event happens and when news organizations can verify information, there's a gap. And that gap is flooded with misinformation and people suggesting and supposing what might have happened. And that can shape your perspective. So, we want to be aware, right? One of the skills of news literacy is just recognizing there's a gap before information can be verified. So don't just trust the first thing you see when events are breaking.
SEAN MARCUS: And I'll jump in if I can, you know, because that's where the AI literacy layers in on top of it. We look at something like Grok, an AI tool like Grok, where during the Charlie Kirk shooting, you had so many retweets of Grok responses to what was going on in that situation, but it was moving so fast. Whether Grok was using reliable sources or not, those reliable sources were still speculating, filling in the gap. And so, we saw a flood of misinformation coming out, then getting verified and reinforced by users who were using Grok, an AI tool, as a verification method for this breaking news. It starts to cycle in on itself, and then minding that gap can become so much harder if we're not aware of how AI is contributing to that cycle.
CORY BARKER: Sean, you talked a little bit in your presentation today about AI and ethics. So can you tell us a little bit about how you all are viewing the use of AI within a framework of ethical practice or ethical thinking? Those two things, using AI and being ethical, might seem pretty different from one another depending on who you ask. How do they come together or meet in some middle?
SEAN MARCUS: It's a tough question, you know, and quite frankly, I struggle to answer it. And I think the challenge, every time we go out to talk about AI and ethics, is acknowledging the struggle and acknowledging the inherent conflict, especially through a journalistic kind of lens, between ethical reporting and the pitfalls of AI. It is very difficult to rationalize and put those things together. So, I think we start from a place of full transparency and disclosure, as you would with any journalistic endeavor. First and foremost, if we're going to engage with AI technologies, we start from a place of disclosure. We start from a place of transparency, so that our audiences know exactly what we're doing, and so that we as communicators are also advancing, again, AI literacy, right? We're advancing people's understanding and awareness of this thing that seems to be sort of overtaking all of our lives. So, you know, I think the ethics start in just being completely upfront about what we're doing, and how and why we're doing it. And on the flip side of that, I think the ethical obligation for journalists is to continue to report on AI technology, to continue to hold tech companies accountable for the decisions that they're making and the potential problems that we're having. So as much as we are embracing, perhaps in a newsroom, AI technology that helps us work with fewer resources but do more work, or whatever it is that we need to do in order to do good journalism, on the other end of our editorial team, we're still holding power to account the same way we would any other time.
JENNA SPINELLE: Yeah. So, you're speaking of newsrooms. You know, in my first jobs in newspapers, I was the one tasked with updating the website because I was the youngest person in the newsroom. And it was very much an afterthought, after the paper was already done and put to bed. So that's a long way of saying newspapers kind of famously missed the dot-com boom, or were behind on the trends and everything that was happening there, and I think we continue to see the ramifications of that today. But I wonder how they're thinking about AI. Are they cognizant of, well, we don't want to let this pass us by the same way that we kind of let the broader internet pass us by 20 years ago? How are they approaching some of these challenges that we've been talking about?
PAMELA BRUNSKILL: So, several news organizations have signed agreements with AI services, right? AI is trained on the internet, and companies can either go in and take training data without paying news organizations, or they can avoid training on that content, and then their models are going to get trained on slop and not necessarily credible information. So, some news organizations have opted to sign agreements that they will get paid a certain amount, and then the AI can get trained on their information. Other news organizations, and I'm thinking back to an article from probably a year and a half, two and a half years ago now, about CNET, had a whole bunch of articles that were just written by AI and didn't disclose it. So, to your point about transparency: somebody found out, and people were like, this is awful. And so, then they had to go back through those articles, and that prompted a big discussion in the news world of, okay, we need to make sure whenever we use AI, we disclose it. And that has pretty much become a standard now, I think, for standards-based news organizations: they will disclose when they're using AI. And I think it's perfectly acceptable to use AI to summarize sports scores, things like that, right? But in terms of reporting, you're still going to need people to do that job.
CORY BARKER: How would you all evaluate the way that the popular press is actually covering AI? Whether that means its adoption within the public, or, as a few folks in their presentations today talked about, potential environmental impacts related to the use of AI on the electrical grid, on water, those sorts of things. So how do you feel like the general popular news infrastructure is doing at informing the public about what AI is, and some of its potential strengths and weaknesses? Jenna, do you have any thoughts on that?
JENNA MELEEDY: In terms of pop culture?
CORY BARKER: In terms of news organizations, mainstream popular news organizations: The New York Times, CNN, Washington Post, etc. How do you feel like they're covering AI?
JENNA MELEEDY: Well, I know that I've seen a few big news organizations get in trouble for using AI artwork instead of work from photojournalists, which they definitely have the resources for in reporting their news stories. And I do think it does a disservice to take the humanity out of reporting on politics when you use AI to mimic real human suffering, especially in wartime photos, and especially in photos about natural disasters.
CORY BARKER: Pam, do you have anything?
PAMELA BRUNSKILL: Well, just to go back to the original question you asked Jenna, if I could hit that first. Standards-based news organizations, right, are going to make sure that they're verifying that images are accurate, and they're going to use photographs taken by photojournalists. And if they are using AI, they say it, and there's got to be a reason for it. But in terms of mainstream news organizations covering AI, yeah, I think news organizations absolutely are discussing AI, the positives, the benefits and the negatives. But I've seen a whole lot more about AI in technology publications like The Verge and Wired, and of course that's because it's their niche. Now, in terms of education, it's a really tricky landscape, and I hear from educators that everything, all the way from K through 12, is changing. Their students are using AI. My own kids are using AI, you know, to help them with their homework, to help them study. And so, there's this spectrum of what is acceptable use and what is not, in terms of a district, in terms of an individual teacher. And the decisions and what people think is acceptable are all over the place. Teaching about AI is really, really specific to the individual teacher and the individual district. I'm in a few different social media groups, right, where educators talk about, how do I do this? Can I use AI to help me create a PowerPoint? And it's like, here you go, use this. So, teachers are using it to write lesson plans, to create slide decks, to help lessen their workload. I've heard some teachers use it to give feedback to students. Not all, right? Some people will say, no, I'm not okay with that. And then in terms of teaching students about AI, only some educators feel confident and comfortable doing that, and I'd say the majority are not.
JENNA MELEEDY: I do see a lot of fearmongering about AI in the news, especially with huge national news outlets, and I think it's because it makes a flashy headline. You know, they're out there to get engagement. And so, pushing the fearful perspective on AI, trying to get people to worry about what AI is going to do to our kids, that sells news stories at a time when news is struggling. And so, at NAMLE, we really try to create a more balanced perspective of looking at AI as you would any sort of technology, where there are uses and gratifications that are positive for people, and there are ways to abuse AI.
JENNA SPINELLE: So, Sean, you and I were talking earlier about content creators, and up to this point in this conversation, we've been talking about news organizations. Since we are seeing more and more journalists leaving news organizations to strike out on their own, as well as others who have been in the content creation space the whole time, can you just talk a little bit about how some of these rules are different or not, or sort of what the state of play is for news content creators?
SEAN MARCUS: Yeah. And that was, you know, a big push with one of the previous pieces that we did Poynter-wide on talking about AI. We were very intentional about developing material that was specific for content creators, on training, disclosure, detection, those types of things, because it is an extremely different landscape, right? I don't necessarily want to use the Wild West kind of term, but at the same time, the guardrails are off in many ways. If you're an independent, individual content creator, a news content creator, whose guiding star do you follow? You know, it's really up to that individual creator to set the tone, to set their own ethical standards. So, this is where we would always hope that there would be, once again, transparency in what they're doing, that they're taking you along the journey of: this is how I'm using AI, this is what I'm doing with it. If we're not seeing that, but we suspect that there is a whole lot of AI use going on in the content that they're creating, then it's a red flag in terms of their reliability as a creator. So, it's difficult, because it's just scattered in so many ways. You know, there is not, not that I'm aware of, a content creator's handbook for doing good journalism and good reporting. They don't have a code of ethics that they necessarily have to follow. Not to say that there aren't individual news creators out there applying those ethical standards, but essentially, we lose those checks and balances and verifications of an editor and a workflow and all of those kinds of things, which, again, is true with AI or without. But when AI comes into the mix, we have the problem of not only needing to trust that individual creator, essentially taking them at their word, but also needing to trust their judgment in terms of how they're interacting with the content and material. So, I think it adds yet another complicated layer to the content creator question, if you will, of being, you know, news reporters.
CORY BARKER: If I can follow up on that, it does seem like we're in a situation where, as you've said with a couple different examples, potentially calling it a Wild West, we're at a transformative moment, to put it a little more neutrally. Different entities within the news or the content industry are using AI, people are talking about it, and there's a sense of distrust or skepticism about what things are real and what things are not real. In your planning at your various organizations, is there a concern that this is going to create so much distrust of news and journalism, or of content creators who use AI or talk about AI, that by the time there are more guardrails, or there is more infrastructure to help us know what is real and what's not, we're not going to be able to get to a point where people trust what they're seeing anymore, flat out? Right, that this is going to go on for a few years, for a long period of time where there's no sense of trust in what we're seeing online, just because we can't figure it out fast enough? Pam, what do you think?
PAMELA BRUNSKILL: Oh, I think wholeheartedly that people can determine what information is credible or not and navigate this information landscape. Yeah, AI is making our feeds fill up with a whole bunch of slop, right? But that doesn't mean we can't tell what is credible and what is not. It's recognizing those five core standards, right? Are you looking at news? Are you looking at something else? Because if it's something else, you know, if it's meant to entertain you, fine. But if you're looking at news, there are certain things you want to be looking for, right? As Sean said, you want transparency. You want accuracy, you want reliability, you want context. These are all things you would be looking for, and whether it's a content creator or a media organization, these points will still be in place. You mentioned the Society of Professional Journalists. They have a code of ethics. Most, if not all, standards-based news organizations follow some form of the SPJ ethics code. And so, if you're looking at a standards-based news organization, they have their ethics code on their website, or you can call and find out what it is. And if they don't live up to those standards, you can call them out on it, and they will have to make corrections if they make mistakes, right? If a journalist or reporter makes a mistake, they have to issue a correction. If it's egregious, they can lose their job. That doesn't happen for a content creator. And so, when we're looking at our feeds, with a standards-based news organization's post, a content creator's post, a friend's post, right, all intermingled, we have to start by recognizing, first of all, what am I looking at? Who's posting it? How credible is that source? And then we can get into the points of, is there a watermark that says it is AI generated, and are there hashtags that give us a little more context? And if it's about a breaking news event, right, then standards-based news organizations and other organizations will be covering it. We can do some lateral reading: go do a new search and read more about it.
JENNA MELEEDY: Yeah, I think it's really easy, especially as somebody who really wants to keep up to date with the news, to fall into that nihilistic, cynical perspective. But we're really not as helpless in this quest for the truth as it sometimes feels like. There are so many strategies for reducing the noise and limiting your active news consumption to sources that you have researched and trust, and just ways to take care of your mental health that can really aid in making news consumption a lot more beneficial for you.
JENNA SPINELLE: And picking up on that, Jenna, I want to talk about news avoidance. For folks who can't see, you are the youngest person in this room by 15 years, I want to say, at least. So, talk about what you just said, right? Seeking out news sources that you research and that you trust. That is easier said than done, especially for folks in your generation who have come of age in the news environment of the past decade or so. So talk about how you get folks to do that, whether it's in your personal life or, I know you work with a lot of high school students at NAMLE, like, what are some of the strategies you use that maybe our listeners can adopt for the younger people in their lives?
JENNA MELEEDY: I do. So, this is from the perspective of a digital native who works with a lot of other digital natives, which, if you don't know, just means you've grown up with the internet. It really is about mental health and well-being first, because a lot of young people will hear the words politics or news and they will shut down immediately. And that comes from a lifetime of growing up with catastrophically stressful news all the time, just constant or near-constant exposure to content online that is specifically made to upset people and provoke an emotional reaction. So naturally, what are you going to do if every single time you try to consume news, you get so stressed out you can't function? You're going to shut down. You're going to become desensitized to the things that you see or hear, or to violent imagery, you know, human things that would normally provoke a reaction; they have to get more extreme just to get that same reaction. And so that results in a lot of apathy and cynicism among young people. And I think that the first step to combat that is to reduce the overstimulation and the overwhelming nature of the news. That doesn't mean no technology. That doesn't mean no news. That doesn't mean no social media. The way people reduce their own overstimulation when it comes to technology is different for every person. For me, that means I force myself to spend 30 minutes every morning trying to consume the news, seeking out topics, and I have to do it with a podcast, because if it's visual and audio, that's too overwhelming for me, or I have to do it while I'm working on some other task, just to fit it in my day. That's what works for me. But what I have been seeing among young people a lot is a tiredness when it comes to consuming information online. They don't go to traditional news sources as much as older generations because they're looking for escapism. They're looking for entertainment, which means they turn to social media, which then will coincidentally give them political content that they weren't even expecting to see. And so that ends up being their only news consumption. They're news avoidant, and the only pieces of news that they're getting are stressful, algorithmically optimized bits and pieces of news.
JENNA SPINELLE: So, what do you do about that?
JENNA MELEEDY: The first thing that I encourage people to do, people of all ages but especially young people, is to not take grounding themselves for granted. You hear about doomscrolling, which is when you get sucked into a cycle of scrolling on Instagram or TikTok or whatever platform you're on for hours and hours, and you completely lose track of time. It's very addictive. So, take moments to ground yourself physically. I like to remind people: is there tension in your jaw? Is your tongue stuck to the roof of your mouth? Are you sitting in an awkward position? Do you need to adjust your posture? Keep track of where you are mentally. Do you feel overstimulated? Do you feel bored? You went onto this platform for entertainment; are you getting that from it? And then there are some more practical things that you can do. You can reduce notifications so that you're not getting random notifications of huge disasters that have happened across the country throughout your day, because that will completely overstimulate you and ruin your mood. You can train your algorithm so that it works for you instead of the other way around, by pressing the button on content that says not interested or do not recommend. You can block profiles that are spreading AI slop or other forms of stressful content, clickbait, misinformation. And I think an important and underrated one is just finding opportunities to talk about the news with other people. Now that news is a less communal thing, meaning you're not sitting in the family room watching the 7:00 news with your family anymore, you're in your own bubble, you're stuck in your own individualized algorithm, news is a more isolating experience than ever. And I think when you break that isolation and you talk about the news with other people, it becomes less stressful. It becomes more grounded, more realistic, and you feel a sense of community that takes away some of that stress.
CORY BARKER: It sounds to me, in the last few minutes of the conversation, like you're all, in different ways, pretty optimistic about our ability to navigate the AI ecosystem, or the AI-influenced news ecosystem. Are there other things, beyond what we've shared here in the last few minutes, that you're really optimistic about as far as our ability to get through this and figure out the best ways to use AI as producers, as consumers, as people who fill both of those roles depending on the day or the practice that we're fulfilling?
SEAN MARCUS: Can I flip that question on its head a little bit? Yeah. Like, I can honestly say that for the past year, and I've said this to everybody, I have lived in a full state of existential dread because of the amount of time that I've had to engage with AI technology, AI literacy, and our understanding of it. But I will say, to contribute to that sense of optimism, there is that acknowledgment that we are in a very tenuous and strange place right now. We're at the beginning of what could potentially be like a 500-year run of technological advancement. Maybe it's a fad and it fizzles out. I doubt it. But we should acknowledge the fact that we have a lot of discomfort, we have a lot of unknowns, we have a lot of really shaky, weird stuff, for lack of a better way to say it, going around right now with AI. When we engage with it, most of us are sitting in a spot where, you know, for students it's like, oh, am I cheating? Am I not cheating? Nobody has defined those lines for me yet, because those lines are not defined. So, they're sitting with the angst of, should I be doing this? Should I not be doing this? Reporters are sitting with the angst of, how is this impacting my credibility? I mean, anybody who's engaging with it, which, like I said before, is everybody at this point, is sitting with some level of angst about it. I think that once we open that box up and discuss and acknowledge the fact that this is equal parts exhilarating and dreadful, and we continue to drill down on both sides of that road, that helps us to feel at least a little bit more balanced. You know, we've got our sea legs on a really rocky boat, if you will. So, I think there's really an element of just leaning into the uncertainty and the discomfort of it, not necessarily accepting it and being okay with it, but accepting the fact that, yeah, this is a weird time right now for technology and communication. And we have to collectively acknowledge that.
PAMELA BRUNSKILL: I had one thought while you were speaking, which is that individually, right, we all have our own agency, things we can do, but we also have a more collective ability to shape AI and AI policies. Now, I don't know about government intervention; I haven't figured out my thoughts on that yet. But if we can collectively advocate for incentivizing credible information and labels on AI, right, and the specifics that we would want our social media companies to put out for us, and if we collectively advocate for, not necessarily regulation, but expectations for AI that will prioritize credible content and deprioritize misinformation, then we get the good stuff at the top of our feeds. And collectively we can ask for the not-so-polarizing emotional content, because otherwise we're just going to keep getting fed it, because we know that keeps us on longer. But we have agency as a society, and we can ask for something else.
JENNA MELEEDY: Yeah. So, younger people, I have found, do not share the same dread or alarmist sentiments about AI that many other people do. And I think that's just because, for us, it's kind of just another convenient tool. It's a fact of life. It's something that we don't take very seriously. You know, we've got that cynicism, that dark sense of humor. We use it to make memes. And so, the bottom line is, AI is incredibly popular right now, and it's only going to continue to grow in popularity. So instead of fearing it, and instead of trying to fight back against it and limit it and restrict people from using it, why not find out how to work with it and use it in a way that has ethics in mind, to minimize harm as much as possible and maximize the benefits for people? There's got to be a balance.
JENNA SPINELLE: So, I'm going to take us on a brief detour into the world of democracy that I inhabit and tell you about something called a dummymander, from the world of gerrymandering. I just learned this term like two episodes ago on our show. It's where you gerrymander something so much that it doesn't work anymore: the district just becomes too sliced and diced, and voters don't behave the way that the models think they would. So, I think there might be something similar, or potentially similar, happening with AI, where, you know, you're applying for a job and you have the AI-generated resume interacting with the AI recruiter, and all of these things, right? So, what are some of the ways that AI could implode in on itself, and how likely do you think it is that that might happen?
SEAN MARCUS: I mean, I can jump in on that. I think that the AI bot evaluating the AI bot's resume is a really good example of that type of interaction that we can see happening. And we can see it on social media, a bot responding to a bot's post. Eventually the humans who are consuming it are going to catch on, and it's kind of like you said, Pam: collectively we're all going to jump in and say, okay, this flood is enough. You know, to put it all together to answer that question, it's that optimism about what AI can do to help and to do great things in the future. We hope that that's sort of the baby we save, with the bathwater we throw out being, like, the social media bots that are flying around. But I could definitely see, if the push for AI stays fully unregulated, whether that's government regulation or some kind of tech alliance regulation, something of that nature, that being a really huge contributing factor to just this freewheeling overuse and unethical practice, until eventually, yeah, the people will rise up and say, enough is enough. We're tired of watching bots bicker with bots. It just becomes absolutely silly. I think there's some potential for that. I don't know; like I say, it's tough to predict the future. And I also want to throw in that idea that, when we say the future, we tend to think 20 years down the road, 30 years down the road. But the future is a big, wide-open space. Everybody gives me so much grief over it, but I always use that printing press period: the printing press ruled for a good 500 years before new technology came in. So, we're talking about being somebody who was around, like, 1495 to 1505, trying to predict what 1978 was going to look like. We're in that 1495 to 1505 range. We have no idea what 300, 400 years down the road is going to look like. So, I think there's a temperance we have to apply in determining whether it's going to implode, explode, whatever, because in the next 20 or 30 years, it'll do whatever it does. And then 100 years will pass, you know?
JENNA MELEEDY: Yeah, I think, why not use that to our advantage? I do think AI, and especially AI slop, is kind of accelerating internet fatigue or social media fatigue. There's a meme that's like, that's enough internet for today. And it always comes after you've seen some sort of horrific amalgamation of AI slop. And so, if that's what it takes to stop us in our doomscrolling cycle, to lessen addictive tendencies that we might have with social media or other forms of media, why not? Why not use that as an excuse to take a break? And I also want to add that it is a kind of crazy and cool concept to think about the dead internet theory, of the majority of the internet just being bot activity. And while I do think that has the potential to be the end of some social media sites that could be overrun with AI and bots, I will say I don't share the fear that some people have that AI will replace people widespread across multiple job markets. Of course, with any emerging technology, some jobs will become outdated, but I tend to think of it as more: a lawyer will not be replaced with AI, but a lawyer who uses AI might replace a lawyer who does not use AI.
PAMELA BRUNSKILL: I'm really torn on this one. I want to be, I am, optimistic that we can navigate the information environment. But in terms of the future, there are always so many unknown things that can happen, right, that can change the trajectory. And when I listen to the experts, the people who are really studying this, I want to say almost all of them are really doomsayers, and that scares me. But I go back to: this is where we are right now. It doesn't mean we're going to get to that doomsday scenario. So, we have to take our part and speak up about what we think the ethics of AI should be, and follow suit and use AI responsibly.
CORY BARKER: I think that's a great place to leave it. Sean. Pam. Jenna. Jenna. Thank you all.
PAMELA BRUNSKILL: Thank you.
JENNA MELEEDY: Thank you so much.
SEAN MARCUS: It was great being here.
MATT JORDAN: So, Cory, that was really interesting. And as a fly on the wall to this conversation, watching all these people who are working in similar spaces talk about similar issues has been really enlightening to me. So, what was that like? You know, how are you thinking about it?
CORY BARKER: Yeah, I think what was really fascinating in hearing them talk about the various issues that they're encountering, and that their various organizations are thinking about as far as how to serve the public, is just a really obvious central tension: the development of generative AI and its influence on mis- and disinformation is so significant that it is really easy to be pretty depressed about the ecosystem and our ability, individually and collectively, to navigate through this transitional, complicated period. And they all reflected, I think, different moments of being pretty bummed out about their research and their work, Sean most notably. But then I think they all also underlined some potential reasons for optimism, whether that's building out some of the products and initiatives that they're working on to help people feel more confident in their ability to navigate AI, or just a sense that we don't necessarily have to buy into the significant Wall Street bubble, investment-enabled hype about AI's encroachment into every part of our lives. But that central tension is so obvious, right? We're in this really complicated time, and there are a lot of different ways to think about it, some potentially really negative and bleak, and some potentially optimistic. What did you think as a third-party fly on the wall?
MATT JORDAN: I'm always thinking about not just the spread of bad information and things like that, but also what the act of communicating does in terms of our ritual sense, right? And one of the things that I think about with AI is how, by getting people to use it, they're getting people to treat this like some kind of a God-source, right? You have a question? Ask AI. And the more that people do that, the more that feedback loop gets constructed, and the less people are doing information searching the way they used to. So, what that leads me to think is that the skepticism that is kind of pouring out now, as people are dealing with hallucinations and all the problems of AI, might short-circuit that feedback loop a little bit, so that people will start somewhere else, as opposed to starting with AI and, you know, kind of exacerbating the problem.
CORY BARKER: That's it for this episode of News Over Noise. Our guests were Sean Marcus from the Poynter Institute, Pamela Brunskill from the News Literacy Project, and Jenna Meleedy from the National Association for Media Literacy Education. To learn more, visit newsovernoise.org. I'm Cory Barker. Until next time, stay well and well informed.
MATT JORDAN: News Over Noise is produced by the Penn State Donald P. Bellisario College of Communications and WPSU. This program has been funded by the Office of the Executive Vice President and Provost of Penn State and is part of the Penn State News Literacy Initiative.
[END OF TRANSCRIPT]