A police report generated with Axon’s AI tool absurdly claimed an officer transformed into a frog after the software misinterpreted background audio from a Disney film, highlighting ongoing challenges in automating police documentation amid concerns over AI accuracy and accountability.
In Heber City, Utah, a pilot of Axon Enterprise Inc.’s AI report-writing tool produced an extraordinary error: an automated draft police report stated that an officer “began to transform into a frog, with his skin turning green and slimy.” According to the Heber City Police Department, the line resulted from Draft One misinterpreting background audio from the Disney film The Princess and the Frog during a December 2025 domestic disturbance call. The department issued a public clarification that no such transformation occurred and that the report was corrected through human review. [1][2][4][5]
Axon’s Draft One, which the company markets as a way to reduce hours spent by officers on paperwork by transcribing and summarising body-worn camera footage, forms part of a broader push to automate administrative tasks in policing. The company has said the tool is under continuous refinement to improve accuracy, including handling ambient sounds more effectively; however, this incident highlights the limits of current systems when they encounter overlapping audio from entertainment media. [1][3]
Technically, the error is an example of a generative AI “hallucination”, where natural language processing models present fabricated or contextually misplaced details as facts. Industry reporting and advocacy groups have warned that such hallucinations can arise when models cannot reliably distinguish foreground, verifiable interactions from background noise or fictional dialogue, producing outputs that require human correction. [1][7]
The Heber City episode is not an isolated oddity but part of a pattern of problems that have accompanied rapid AI adoption in law enforcement. Investigations and reporting have documented cases where agencies relied on facial recognition and other AI outputs without adequate human corroboration, sometimes with serious consequences, prompting civil liberties groups and oversight organisations to call for stricter controls. The Washington Post found instances where agencies used AI matches in ways that ran counter to their own internal policies requiring independent verification. [6]
Legal and ethical questions follow from such mistakes. Advocates and scholars caution that inaccuracies in AI-generated reports could undermine investigations, jeopardise prosecutions or defence rights, and erode public trust if left unchecked. A June 2025 report by Fair and Just Prosecution argued that generative AI tools remain prone to misidentifications, fictitious details and other errors that make them risky to deploy in high-stakes criminal justice contexts without robust safeguards. [7]
Local response in Heber City emphasised human oversight as the immediate remedy. The police chief told city officials and local media that Draft One remains in a pilot phase and that all AI-generated reports are subject to human editing and verification before being finalised, turning the episode into a training moment about where automation must be tempered by human judgement. Similar pilots elsewhere have adopted hybrid models that pair AI efficiency with mandatory human review. [1][4]
The incident also had market and reputational repercussions. Coverage noted investor concern and heightened scrutiny of Axon’s ambitions to expand from hardware into AI-driven software, while competitors and policymakers watched for lessons about deployment, auditing and vendor accountability in public-sector contracts. Calls for greater transparency about error rates and for mandated reporting of AI failures have gained traction at state and local levels in the absence of a comprehensive federal framework for policing AI. [1][3]
Looking ahead, technologists point to improvements in multimodal models and more rigorous dataset design as routes to reduce hallucinations, including training that explicitly includes ambient media noise as an edge case. Meanwhile, civil rights groups, prosecutors and law-enforcement associations are advocating policies that require audit trails, error reporting and clear lines of accountability so that automation serves to assist, not replace, responsible human decision-making. [1][7][6]
The Heber City “frog” report is, in one sense, an amusing anecdote; in another, it is a cautionary tale about deploying generative AI where accuracy and accountability matter most. As departments experiment with tools that promise to free officers from paperwork, the episode underlines a persistent lesson: technological gains must be matched by safeguards, oversight and a commitment to human review if public confidence in justice institutions is to be maintained. [1][4][7]
📌 Reference Map:
- [1] (WebProNews) – Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 6, Paragraph 8, Paragraph 9
- [3] (Forbes) – Paragraph 2, Paragraph 7
- [4] (Park Record) – Paragraph 1, Paragraph 6, Paragraph 9
- [5] (NDTV) – Paragraph 1
- [6] (The Washington Post) – Paragraph 4, Paragraph 8
- [7] (Fair and Just Prosecution) – Paragraph 3, Paragraph 5, Paragraph 9
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 10
Notes: The narrative is current, with the incident reported in early January 2026. The earliest known publication date of substantially similar content is December 16, 2025, in the Park Record. ([parkrecord.com](https://www.parkrecord.com/2025/12/16/heber-city-police-department-test-pilots-ai-software/?utm_source=openai)) The report is based on a recent event, and no discrepancies in figures, dates, or quotes were found across versions. No evidence of republishing across low-quality sites or clickbait networks was identified, and no substantially similar content appeared more than seven days earlier. The narrative includes updated data and quotes, justifying a high freshness score. ([webpronews.com](https://www.webpronews.com/axons-ai-tool-mistakes-movie-audio-claims-cop-turned-into-frog/?utm_source=openai))
Quotes check
Score: 10
Notes: The direct quotes in the narrative are unique to this report. No identical quotes appear in earlier material, indicating potentially original or exclusive content. No variations in quote wording were found, and no online matches for the quotes were identified.
Source reliability
Score: 8
Notes: The narrative originates from WebProNews, a reputable organisation. The Heber City Police Department is a verifiable entity with a public presence, and Axon Enterprise Inc. is a legitimate company with a substantial online presence. No unverifiable entities or fabricated information were identified.
Plausibility check
Score: 9
Notes: The narrative’s claims are plausible and supported by multiple reputable sources. The incident of an AI-generated police report claiming an officer transformed into a frog after misinterpreting background audio from ‘The Princess and the Frog’ is consistent across reports. The narrative lacks excessive or off-topic detail unrelated to the claim, and the tone is consistent with typical corporate and official language. No inconsistencies in language or tone were found.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary: The narrative is fresh, original, and supported by reliable sources. The claims are plausible and consistent with multiple reputable reports. No significant issues were identified, leading to high confidence in the assessment.

