Artificial intelligence assistants from some of the world’s biggest tech firms are struggling to tell the truth about the news, according to a wide-ranging new study by the European Broadcasting Union (EBU) and the BBC.
The research, conducted with 22 public media organisations across 18 countries and in 14 languages, examined more than 3,000 AI-generated responses to news-related questions. It found that 45% of them contained at least one substantial problem — from factual mistakes to flawed or missing sourcing.
Across all tools tested — including OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini and Perplexity — more than 80% of responses showed some form of error. Around one in five answers contained outdated or entirely false information, while roughly a third exhibited serious sourcing failures.
Gemini performed worst, with between 72% and 76% of its responses showing sourcing errors — more than double the rate of its rivals. Examples of inaccuracies ranged from assistants misidentifying world leaders and fabricating legislative changes to producing fictitious quotes and statistics.
The study also highlighted a broader issue of transparency. In nearly a third of cases, AI assistants either omitted source citations or gave misleading attributions. The BBC said its journalism was at times distorted into “a confused cocktail” of errors, including fabricated quotes and altered facts.
The findings come amid rising use of AI tools as news gateways, particularly among younger audiences. According to the Reuters Institute, 7% of people worldwide — and 15% of those under 25 — already rely on AI chatbots for news. Regulators are beginning to respond: the Dutch data protection authority has warned against using AI assistants for voting advice during elections.
Researchers say the results point to a pressing need for AI companies to tighten accuracy controls, improve transparency and clarify how their systems handle editorial material. As these tools increasingly shape how people access information, the study’s authors warn that their flaws could deepen mistrust in both journalism and democratic institutions.
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first
emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed
below. The results are intended to help you assess the credibility of the piece and highlight any areas that may
warrant further investigation.
Freshness check
Score:
10
Notes:
The narrative is based on a recent press release from the European Broadcasting Union (EBU) and the BBC, dated October 21, 2025. Press releases typically warrant a high freshness score due to their timely dissemination of new information.
Quotes check
Score:
10
Notes:
The direct quotes in the narrative, such as those from Jean Philip De Tender, Media Director of the EBU, and Pete Archer, Head of AI at the BBC, are consistent with the press release. No discrepancies or variations in wording were found, indicating the quotes are accurately reproduced.
Source reliability
Score:
10
Notes:
The narrative originates from a press release issued by the European Broadcasting Union (EBU) and the BBC, both reputable organisations known for their commitment to journalistic integrity. This enhances the reliability of the information presented.
Plausability check
Score:
10
Notes:
The claims made in the narrative align with the findings of the EBU and BBC study, which has been reported by multiple reputable news outlets, including Reuters and Al Jazeera. The examples of inaccuracies, such as AI assistants misidentifying the current Pope, are consistent with the study’s reported findings. The language and tone are appropriate for a press release, and the content is directly relevant to the study’s objectives.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary:
The narrative is a recent press release from the EBU and the BBC, accurately quoting their findings on AI assistants misrepresenting news content. The information is consistent with reports from reputable news outlets, and the language and tone are appropriate for the context. No significant issues were identified, indicating a high level of credibility.

