
Research involving nearly 77,000 UK adults shows that brief interactions with AI chatbots can significantly influence political opinions, raising concerns about the persuasive power of AI and the need for responsible regulation.

Interacting with conversational AI can alter people’s beliefs in ways they do not expect, according to a new study published in Science that tested nearly 77,000 UK adults. The researchers found that brief dialogues with chatbots shifted political opinions by measurable amounts on a 0–100 agreement scale, and that participants engaged with the conversations for an average of seven turns over roughly nine minutes. [1][2]
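
To make that measurement concrete, the sketch below shows how an average opinion shift on a 0–100 agreement scale could be computed from pre- and post-conversation ratings. The numbers are invented for illustration and are not the study’s data.

```python
# Illustrative sketch only: computing the mean shift in agreement when opinions
# are recorded on a 0-100 scale before and after a chatbot conversation.
# The ratings below are hypothetical, not figures from the study.
from statistics import mean

pre_ratings = [42, 55, 60, 30, 71]   # agreement before the conversation
post_ratings = [48, 61, 63, 41, 70]  # agreement after the conversation

shifts = [post - pre for pre, post in zip(pre_ratings, post_ratings)]
print(f"Mean opinion shift: {mean(shifts):+.1f} points on the 0-100 scale")
```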

According to the original report, the study explored which features of large language models (LLMs) make them persuasive and concluded that two factors mattered most: post-training modifications and the density of information in responses. The models tested included proprietary and open-source systems, and both types showed increased persuasive power when subjected to targeted post-training. [1][2][3]

The researchers describe post-training as fine-tuning models to exhibit particular behaviours, often using reinforcement learning with human feedback (RLHF). In the study they used a technique called persuasiveness post-training (PPT), which rewards outputs previously judged persuasive; this reward mechanism boosted persuasion across models, with especially strong effects for open-source systems. [1][2][3]
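
The article does not detail the training pipeline, but the core idea of PPT, reinforcing responses that score higher on a persuasiveness reward, can be sketched as follows. The reward function and candidate responses here are stand-ins for illustration, not the researchers’ actual setup.

```python
# Minimal sketch of the selection step that persuasiveness post-training (PPT)
# reinforces. In the study the reward would come from judgements of
# persuasiveness, not the keyword heuristic used here as a stand-in.

def persuasiveness_reward(response: str) -> float:
    """Stub reward: counts evidence-flavoured markers as a toy proxy for
    a learned persuasiveness score."""
    evidence_markers = ("study", "data", "research", "evidence")
    return float(sum(marker in response.lower() for marker in evidence_markers))

candidate_responses = [
    "You should simply agree with me on this.",
    "A recent study and follow-up research provide data supporting this position.",
]

# Candidates scoring higher on the persuasiveness reward are preferred,
# which is the behaviour that post-training reinforces over many updates.
best = max(candidate_responses, key=persuasiveness_reward)
print("Reinforced response:", best)
```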

Beyond training, the single most effective persuasion strategy tested was a simple prompt instructing models to “provide as much relevant information as possible.” The authors note that this suggests “LLMs may be successful persuaders insofar as they are encouraged to pack their conversation with facts and evidence that appear to support their arguments.” [1][2][3]
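
As an illustration of that strategy, the snippet below assembles a chat-style request around the quoted instruction. The message format and helper function are assumptions for demonstration, not the study’s experimental harness.

```python
# Hypothetical harness showing how the information-density instruction from the
# study could be supplied as a system prompt in a generic chat-message format.

INFORMATION_DENSITY_INSTRUCTION = "Provide as much relevant information as possible."

def build_messages(topic: str, user_message: str) -> list[dict]:
    """Assemble a chat-style request carrying the information-density
    instruction as the system prompt (illustrative structure only)."""
    return [
        {"role": "system", "content": INFORMATION_DENSITY_INSTRUCTION},
        {"role": "user", "content": f"Topic: {topic}\n{user_message}"},
    ]

for message in build_messages("renewable energy policy", "What should I make of the latest proposals?"):
    print(f"{message['role']}: {message['content']}")
```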

That operative word, “appear”, is critical. The study and related reporting stress a trade-off: models trained to be more persuasive were also more likely to produce inaccurate or fabricated information. Prior research has documented LLMs’ tendency to hallucinate, raising concerns that information-dense persuasion can mask errors as convincing evidence. [1][2][6]

Commentators and outlets covering the research warn of wider societal risks. Experts cite the potential for bad actors to exploit persuasive AI at scale to shape public opinion and for democracies to suffer if information-rich but unreliable AI outputs influence political views. At the same time, the authors and analysts suggest there are legitimate uses for responsible persuasion, for example in education or public-health communication, if safeguards are enforced. [1][3][4][5][7]

The paper calls for policymakers, developers and advocacy groups to prioritise understanding and governing this persuasive capacity. Ensuring transparency about model training objectives, improving factual reliability, and developing norms for acceptable persuasive behaviour are among the measures suggested to reduce the risk of manipulation. [1][2][3]

As conversational AI becomes more widespread, the study concludes that “ensuring that this power is used responsibly will be a critical challenge.” That conclusion, echoed across major outlets, frames the debate: harnessing LLMs’ communicative strengths while preventing them from becoming efficient vectors of misinformation will require coordinated technical, regulatory and public-interest responses. [1][2][4][5][6][7]

📌 Reference Map:

  • [1] (ZDNET) – Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 4, Paragraph 5, Paragraph 6, Paragraph 7, Paragraph 8
  • [2] (Science) – Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 4, Paragraph 8
  • [3] (Nature) – Paragraph 2, Paragraph 3, Paragraph 7
  • [4] (BBC) – Paragraph 6, Paragraph 8
  • [5] (The Guardian) – Paragraph 6, Paragraph 8
  • [6] (CNN) – Paragraph 5, Paragraph 8
  • [7] (Washington Post) – Paragraph 6, Paragraph 8

Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.

Freshness check

Score:
8

Notes:
The narrative references a recent study published in *Science* on December 4, 2025, indicating high freshness. The earliest known publication date of similar content is December 4, 2025, with no earlier versions found. The narrative includes updated data and references to other reputable outlets, suggesting originality. No discrepancies in figures, dates, or quotes were identified. The narrative does not appear to be republished across low-quality sites or clickbait networks. The inclusion of updated data alongside older material is noted, but the update justifies a higher freshness score.

Quotes check

Score:
9

Notes:
Direct quotes from the study and other reputable outlets are present. No identical quotes appear in earlier material, indicating originality. Variations in quote wording are noted, but no significant differences affect the meaning. No online matches were found for the quotes, suggesting potentially original or exclusive content.

Source reliability

Score:
9

Notes:
The narrative originates from ZDNet, a reputable technology news outlet, enhancing its reliability. The study referenced was published in *Science*, a peer-reviewed journal, further supporting the credibility of the information.

Plausibility check

Score:
8

Notes:
The claims about chatbots influencing political opinions are plausible and supported by recent studies. The narrative includes supporting details from reputable outlets, reducing concerns about plausibility. The report includes specific factual anchors, such as names, institutions, and dates, enhancing its credibility. The language and tone are consistent with the region and topic, with no inconsistencies noted. The structure is focused and relevant, with no excessive or off-topic detail. The tone is appropriate for a technology news outlet, with no signs of sensationalism.

Overall assessment

Verdict (FAIL, OPEN, PASS): PASS

Confidence (LOW, MEDIUM, HIGH): HIGH

Summary:
The narrative presents original content with high freshness, supported by direct quotes from reputable sources. The source is reliable, and the claims made are plausible and well-supported. No significant issues were identified, leading to a ‘PASS’ verdict with high confidence.
