In Survey of German Newspaper Readers, AI-Driven Misinformation Found to Lower Trust but Also to Raise Engagement with Trustworthy News Sources
Concerns over the prevalence of online misinformation, including “fake news,” and its implications for politics, business, and society at large have gained momentum in the last decade. The rise of social media, with its almost complete lack of barriers to disseminating content, has contributed to markedly diminished trust in the news, as have new developments in artificial intelligence (AI), especially generative AI (GenAI). All of these changes have implications for political polarization as well as the economic viability of the news industry.
In a new study, researchers examined the interplay among AI-powered misinformation, trust, and the media ecosystem, using a field experiment conducted by a major German newspaper, Süddeutsche Zeitung (SZ). They found that while AI-driven misinformation may lower trust, it also boosts engagement with trustworthy news sources.
The study, conducted by researchers at Carnegie Mellon University, Johns Hopkins University, National University of Singapore, and Süddeutsche Zeitung Digitale Medien, is published as a working paper.
“The media industry has struggled financially since the rise of the Internet in the 2000s,” notes Ananya Sen, associate professor of information technology and management at Carnegie Mellon’s Heinz College, who coauthored the study. “For business models that rely on producing high-quality news content, if it becomes impossible to distinguish real from false content, producing the news could become economically unsustainable.”
SZ is a major German newspaper with a daily paid circulation of more than 260,000 and 295,000 online subscribers. By reputation and quality, it is similar to The New York Times in the United States and The Guardian in the United Kingdom.
SZ regularly conducts surveys of online subscribers, digital app users, and website visitors. In this study, conducted in early 2025, researchers examined 17,000 people who considered SZ a very trustworthy news source. Readers were randomly assigned to two groups: The first group was shown three pairs of images related to current affairs, each pairing a real photo with an AI-generated one, and asked to judge which image in each pair was generated by AI. The second group was shown pairs of real images related to the same set of issues and asked questions unrelated to AI.
Next, readers in both groups were asked to take a quiz to evaluate the severity of misinformation and to rate their level of trust in SZ and other media outlets and platforms. Over the following weeks, researchers tracked more than 6,000 respondents’ online behavior as it related to SZ, with the users’ permission.
For respondents in the first group, exposure to information highlighting the difficulty of distinguishing real from AI-generated images changed post-survey browsing behavior: Daily visits to SZ digital content rose 3% in the first three to five days, with the effect declining over time but remaining significant after two weeks, and these respondents showed higher levels of information retention. The effects were stronger for respondents who found the quiz difficult and for those with lower levels of prior interest in politics.
The study also found that respondents in the first group were less likely to drop their subscriptions than were respondents in the second group, despite having learned more about AI-generated images. Moreover, subscribers' retention rates rose 1.1 percentage points after five months, corresponding to roughly a one-third decline in the rate of attrition.
These findings suggest a possible business strategy for the news industry in response to the challenge posed by AI-generated content, the authors say. From a broader societal perspective, they provide a nuanced counterpoint to concerns over AI (and misinformation more broadly) leading to a downward spiral in trust in the information environment: Increased scarcity creates increased potential rewards for trustworthiness.
But it is not enough for purveyors of the news to retain a given level of trustworthiness. Media outlets must ensure that their ability to help readers distinguish real from AI content evolves at least as fast as the difficulty of the task.
“The deterioration in the information environment that has come from the emergence of a technology like GenAI leads to reduced trust in the information environment as a whole,” explains Felipe Campante, professor at Johns Hopkins University, who led the study. “However, a news outlet that is perceived as sufficiently trustworthy may nevertheless witness increased demand as a result, because its relative value goes up in the eyes of readers who deem it trustworthy enough to mitigate the effects of the misinformation technology.”
###
Summarized from a working paper, GenAI Misinformation, Trust, and News Consumption: Evidence from a Field Experiment, by Campante, F. (Johns Hopkins University), Durante, R. (National University of Singapore), Hanemeister, F. (Süddeutsche Zeitung Digitale Medien), and Sen, A. (Carnegie Mellon University). Copyright 2025. All rights reserved.
About Carnegie Mellon University's Heinz College of Information Systems and Public Policy
The Heinz College of Information Systems and Public Policy is home to two internationally recognized schools: the School of Information Systems and Management and the School of Public Policy and Management. Heinz College leads at the intersection of people, policy, and technology, with expertise in analytics, artificial intelligence, arts & entertainment, cybersecurity, health care, and public policy. The college offers top-ranked undergraduate, graduate, and executive education certificates in these areas. Our programs are ranked #1 in Information Systems, #1 in Information and Technology Management, #8 in Public Policy Analysis, and #1 in Cybersecurity by U.S. News & World Report. For more information, visit www.heinz.cmu.edu.