{"id":20915,"date":"2026-01-17T15:59:00","date_gmt":"2026-01-17T15:59:00","guid":{"rendered":"https:\/\/sawahsolutions.com\/lap\/ai-chatbots-unreliable-news-delivery-highlights-risks-of-misinformation-and-bias\/"},"modified":"2026-01-17T16:12:22","modified_gmt":"2026-01-17T16:12:22","slug":"ai-chatbots-unreliable-news-delivery-highlights-risks-of-misinformation-and-bias","status":"publish","type":"post","link":"https:\/\/sawahsolutions.com\/lap\/ai-chatbots-unreliable-news-delivery-highlights-risks-of-misinformation-and-bias\/","title":{"rendered":"AI chatbots&#8217; unreliable news delivery highlights risks of misinformation and bias"},"content":{"rendered":"<p><\/p>\n<div>\n<p>A month-long experiment with AI chatbots reveals systemic flaws in automated news aggregation, including broken links, fabricated sources, and misinformation, raising concerns about their role in factual reporting amid increasing reliance on AI sources.<\/p>\n<\/div>\n<div>\n<p>When a journalism professor at the University of Quebec at Montreal spent a month getting his daily news exclusively from seven AI chatbots, the results were alarming and instructive about the current state of automated news delivery. According to the account published by The Conversation and summarised by Futurism, Jean\u2011Hugues Roy asked each service the same precise prompt every day in September: \u201cGive me the five most important news events in Qu\u00e9bec today. Put them in order of importance. Summarize each in three sentences. Add a short title. Provide at least one source for each one (the specific URL of the article, not the home page of the media outlet used). You can search the web.\u201d The output included hundreds of links, but only a minority pointed to actual, correctly described articles. 
<sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/futurism.com\/artificial-intelligence\/chatbot-ai-news-journalism\">[1]<\/a><\/sup><\/p>\n<p>Roy recorded 839 URLs produced by the chatbots, of which only 311 linked to working articles; many links were incomplete or broken, and in 18% of cases the models either hallucinated sources or pointed to non\u2011news pages such as government sites or interest groups. Even among the working links, fewer than half matched the summaries the chatbots presented, with numerous instances of partial accuracy, misattribution, and outright plagiarism. One striking example saw xAI\u2019s Grok assert that a toddler had been \u201cabandoned\u201d by her mother \u201cin order to go on vacation,\u201d a claim Roy says \u201cwas reported nowhere.\u201d Roy also noted instances where chatbots invented non\u2011existent public debate, writing that an incident \u201creignited the debate on road safety in rural areas\u201d when, he concluded, \u201cTo my knowledge, this debate does not exist.\u201d <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/futurism.com\/artificial-intelligence\/chatbot-ai-news-journalism\">[1]<\/a><\/sup><\/p>\n<p>Roy\u2019s experiment is consistent with broader research showing systemic flaws in AI assistants\u2019 handling of news. A study by the European Broadcasting Union and the BBC analysed 3,000 AI responses from models including ChatGPT, Copilot and Gemini and found that 81% contained issues and 45% contained significant errors, ranging from factual inaccuracies to fabricated or missing sources. Industry reporting has similarly warned that the prevalence of errors increases when models are permitted web access to provide up\u2011to\u2011date answers. 
<sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.pewresearch.org\/newsletter\/the-briefing\/the-briefing-2025-10-23\/\">[3]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.tomsguide.com\/ai\/45-percent-of-ai-generated-news-is-wrong-new-study-warns-heres-what-happened-when-i-tested-it-myself\">[6]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.axios.com\/2025\/09\/04\/popular-chatbots-amplify-misinformation\">[5]<\/a><\/sup><\/p>\n<p>Part of the problem is the data pipeline feeding these models. A NewsGuard analysis found that 67% of top\u2011quality news websites deliberately block AI chatbots, forcing models to rely more heavily on lower\u2011quality sources. According to NewsGuard, that reliance on sites with lower trust scores amplifies the risk that AI will access and repeat false or misleading material rather than the vetted reporting publishers offer. Axios reported NewsGuard\u2019s findings as part of a wider trend showing the frequency of chatbots producing misinformation rising from 18% in August 2024 to 35% by September 2025, coinciding with expanded internet access for models. <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.newsguardtech.com\/special-reports\/67-percent-of-top-news-sites-block-ai-chatbots\">[2]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.axios.com\/2025\/09\/04\/popular-chatbots-amplify-misinformation\">[5]<\/a><\/sup><\/p>\n<p>The tendency of large language models to oversimplify or misrepresent material is not limited to current affairs. Research published in Royal Society Open Science examined nearly 4,900 AI\u2011generated summaries of scientific papers and found LLMs were five times more likely than humans to generalise results, glossing over critical methodological details and nuance. 
Such behaviour risks turning complex, contingent reporting into misleadingly confident narratives, a problem that becomes especially dangerous when AI\u2019s outputs are treated as authoritative news. <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.livescience.com\/technology\/artificial-intelligence\/ai-chatbots-oversimplify-scientific-studies-and-gloss-over-critical-details-the-newest-models-are-especially-guilty\">[4]<\/a><\/sup><\/p>\n<p>Technical and sociopolitical factors also compound the issue. Investigative reporting on recent model updates shows that retraining or ideological slants can rapidly alter a chatbot\u2019s outputs; Time reported on the model Grok shifting towards extreme rhetoric after a right\u2011wing retraining, illustrating how manipulation, groupthink and bias can degrade reliability. Taken together, these forces mean that AI\u2011produced news can reflect not only factual errors but also the priorities and blind spots of the systems that produce it. <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/time.com\/7302830\/why-ai-is-getting-less-reliable\/\">[7]<\/a><\/sup><\/p>\n<p>Publishers, platforms and developers face a choice about how to respond. News organisations that block automated scraping argue that restricting access protects journalistic standards and their business models, yet doing so can push models toward inferior sources; developers who grant wide web access seek freshness but inherit the web\u2019s misinformation and broken links. Roy\u2019s month\u2011long experiment suggests that, absent structural fixes to data sourcing, verification and transparency, AI chatbots remain an unsafe substitute for professional journalism rather than a reliable news provider. 
<sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/futurism.com\/artificial-intelligence\/chatbot-ai-news-journalism\">[1]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.newsguardtech.com\/special-reports\/67-percent-of-top-news-sites-block-ai-chatbots\">[2]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.pewresearch.org\/newsletter\/the-briefing\/the-briefing-2025-10-23\/\">[3]<\/a><\/sup><\/p>\n<h3>\ud83d\udccc Reference Map:<\/h3>\n<p>##Reference Map:<\/p>\n<ul>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/futurism.com\/artificial-intelligence\/chatbot-ai-news-journalism\">[1]<\/a><\/sup> (Futurism \/ The Conversation) &#8211; Paragraph 1, Paragraph 2, Paragraph 7<\/li>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.pewresearch.org\/newsletter\/the-briefing\/the-briefing-2025-10-23\/\">[3]<\/a><\/sup> (European Broadcasting Union \/ BBC) &#8211; Paragraph 3, Paragraph 6<\/li>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.tomsguide.com\/ai\/45-percent-of-ai-generated-news-is-wrong-new-study-warns-heres-what-happened-when-i-tested-it-myself\">[6]<\/a><\/sup> (Tom&#8217;s Guide \/ EBU study coverage) &#8211; Paragraph 3<\/li>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.newsguardtech.com\/special-reports\/67-percent-of-top-news-sites-block-ai-chatbots\">[2]<\/a><\/sup> (NewsGuard) &#8211; Paragraph 4, Paragraph 8<\/li>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.axios.com\/2025\/09\/04\/popular-chatbots-amplify-misinformation\">[5]<\/a><\/sup> (Axios) &#8211; Paragraph 4<\/li>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" 
href=\"https:\/\/www.livescience.com\/technology\/artificial-intelligence\/ai-chatbots-oversimplify-scientific-studies-and-gloss-over-critical-details-the-newest-models-are-especially-guilty\">[4]<\/a><\/sup> (Royal Society Open Science \/ LiveScience summary) &#8211; Paragraph 5<\/li>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/time.com\/7302830\/why-ai-is-getting-less-reliable\/\">[7]<\/a><\/sup> (Time) &#8211; Paragraph 6<\/li>\n<\/ul>\n<p>Source: <a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.noahwire.com\">Noah Wire Services<\/a><\/p>\n<\/p><\/div>\n<div>\n<h3 class=\"mt-0\">Noah Fact Check Pro<\/h3>\n<p class=\"text-sm\">The draft above was created using the information available at the time the story first<br \/>\n        emerged. We\u2019ve since applied our fact-checking process to the final narrative, based on the criteria listed<br \/>\n        below. The results are intended to help you assess the credibility of the piece and highlight any areas that may<br \/>\n        warrant further investigation.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Freshness check<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>8<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The article was published on January 17, 2026, and references an experiment conducted in September 2025. The content appears to be original and not recycled from other sources. 
However, the article relies on a study from The Conversation, which is not directly accessible due to website restrictions, making it challenging to verify the freshness and originality of the referenced study.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Quotes check<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>7<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The article includes direct quotes from Jean-Hugues Roy&#8217;s experiment. However, without access to the original source, it&#8217;s difficult to verify the accuracy and context of these quotes. The lack of accessible references raises concerns about the authenticity of the quotes.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Source reliability<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>6<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The article is published by Futurism, a known science and technology news outlet. However, the primary source of the information is The Conversation, which is currently inaccessible due to website restrictions. This limits the ability to assess the reliability of the original source and raises concerns about the accuracy of the information presented.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Plausibility check<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>7<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The claims about AI chatbots generating inaccurate news summaries are plausible, given known issues with AI-generated content. 
However, without access to the original study, it&#8217;s difficult to fully assess the validity of the specific claims made in the article.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Overall assessment<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Verdict<\/span> (FAIL, OPEN, PASS): <span class=\"font-bold\">FAIL<\/span><\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Confidence<\/span> (LOW, MEDIUM, HIGH): <span class=\"font-bold\">MEDIUM<\/span><\/p>\n<p class=\"text-sm mb-3 pt-0\"><span class=\"font-bold\">Summary:<br \/>\n        <\/span>The article presents claims about AI chatbots generating inaccurate news summaries, referencing an experiment conducted by Jean-Hugues Roy. However, the primary source, The Conversation, is currently inaccessible due to website restrictions, making it challenging to verify the accuracy and context of the information presented. The reliance on an inaccessible source and the inability to verify key details lead to a medium level of confidence in the article&#8217;s reliability.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>A month-long experiment with AI chatbots reveals systemic flaws in automated news aggregation, including broken links, fabricated sources, and misinformation, raising concerns about their role in factual reporting amid increasing reliance on AI sources. 
When a journalism professor at the University of Quebec at Montreal spent a month getting his daily news exclusively from seven<\/p>\n","protected":false},"author":1,"featured_media":20916,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[40],"tags":[],"class_list":{"0":"post-20915","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-london-news"},"amp_enabled":true,"_links":{"self":[{"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/posts\/20915","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/comments?post=20915"}],"version-history":[{"count":1,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/posts\/20915\/revisions"}],"predecessor-version":[{"id":20917,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/posts\/20915\/revisions\/20917"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/media\/20916"}],"wp:attachment":[{"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/media?parent=20915"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/categories?post=20915"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/tags?post=20915"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}