Saturday, June 14, 2025

AI Chatbots Cut News Site Traffic, But Can They Be Trusted?


Artificial intelligence chatbots are transforming how people access information online, offering quick, direct and click-free answers without the need to browse multiple websites. While traditional search engines like Google still dominate daily use, online news outlets are beginning to feel the early impact, according to a new Wall Street Journal report.

Increasing use of AI chatbots for news sparks uncertainty for traditional news outlets

With increasing numbers of readers turning to AI tools such as ChatGPT, Copilot and Gemini for faster answers, the steady stream of traffic that once supported legacy news websites is beginning to waver. 

Online news providers have been working to adapt to a changing information ecosystem for some time, not only in response to AI, but to a wider trend of declining interest. A 2023 report by Oxford University’s Reuters Institute found that just 48% of people globally were very or extremely interested in news, down from 63% in 2017. More than a third said they intentionally avoid consuming it. Even regular internet users are now turning away from traditional online news content more than in previous years. 


The emergence of tools such as Google’s AI Overviews and ChatGPT’s real-time browsing capabilities has enabled users to engage with news and current affairs in new ways. Some platforms, like X’s Grok, even market themselves as reputable alternatives for real-time news updates. According to Grok, its latest Deep Search update is “built to relentlessly seek the truth” and supposedly “distill clarity from complexity.” 

While not without flaws, these platforms offer quick, customized answers and help navigate complex information landscapes. Still, serious doubts remain about their reliability and whether these tools can deliver information with the trust and accountability expected of credible news sources. 

Business Insider, Washington Post and others announce layoffs this year

Several news outlets are already feeling the impact of this shift in online traffic. In the past six months alone, Business Insider has laid off 21% of its staff, The Washington Post cut 4% of positions and U.K.-based Reach PLC (owner of the Mirror US and Daily Express) reported a 17% year-on-year decline in digital traffic. Similar reductions have hit other major outlets, including the LA Times, Vox Media, and HuffPost. The Wall Street Journal reports that Nicholas Thompson, CEO of The Atlantic, foresees a major collapse of the traditional online news model. Earlier this year, he reportedly told staff that Google-driven traffic for the political magazine could drop close to zero, urging a complete strategic rethink.

Though many users appreciate the convenience of AI chatbots for daily news briefings or tracking developing stories, studies consistently suggest they fall short in delivering accurate and balanced reporting. A BBC review published in February found that more than half of the responses generated by ChatGPT, Copilot, Gemini and Perplexity exhibited “significant issues,” with 19% containing factual errors. The study concluded that these tools “cannot currently be relied upon” and urged regulators and AI developers to work with trusted news organizations to improve the reliability of AI-generated content and create an “effective regulatory regime.” 

AI chatbots frequently deliver inaccurate information and lack journalistic training

According to the BBC, Google’s Gemini was the most concerning for accuracy, with 46% of its responses marked as significantly flawed. Perplexity, however, had the highest proportion of problematic answers overall, exceeding 80%.

Experts warn that, despite frequent inaccuracies and a tendency to spread misinformation, AI chatbots command a surprising level of user trust, largely because they are trained and configured to sound human. That confidence, they caution, may worsen the already growing problem of disinformation online.

The consequences are particularly troubling in sensitive areas such as healthcare, where misinformation can have serious real-world impacts. According to the authors of a 2023 FPH study examining AI misinformation in public health, “The current inability of chatbots to distinguish varying levels of evidence-based knowledge presents a pressing challenge for global public health promotion and disease prevention.” News outlets, in contrast, are guided by strict industry codes and receive training and advice from organizations like the FTC to report health-related news safely and responsibly, especially in times of crisis. 

The trouble with trusting AI chatbots for news

A key concern is that current AI chatbots are not governed by these editorial standards and often lack mechanisms to prioritize credible sources. When tackling nuanced or complex topics, these systems may rely on unreliable inputs—such as Reddit threads, personal blogs or outdated data—simply to produce an answer. This can create a false sense of authority, misleading users who assume the information is accurate. 
AI chatbots, including those from Google and OpenAI, are trained on vast datasets drawn from the internet and designed to produce fluent, contextually appropriate language that sounds truthful. However, they are not inherently trained to distinguish fact from fiction. Despite their appeal, all signs suggest they are not yet dependable sources of verified news: useful, perhaps, but not infallible.

Photo by Marco Lazzarini/Shutterstock
