A Delhi-based IT professional employed generative AI to turn the tables on an army impersonation scam, highlighting a rising trend of tech-savvy scams and DIY cybersecurity countermeasures in India.

When a message from a supposed college contact claiming to be an Indian Administrative Service officer arrived on his phone, a Delhi-based IT professional says he decided not to ignore a familiar “army transfer” fraud but to turn the tables using generative AI. According to the original report, the user, posting as u/RailfanHS on Reddit, asked ChatGPT to generate a webpage that mimicked a payment portal and secretly captured the visitor’s GPS coordinates, IP address and a front‑camera photo. [1]

The Reddit account’s detailed thread, which went viral in India, describes how the scammer sent photos of goods and a QR code and demanded an upfront payment. Feigning difficulty scanning the code, the poster sent the scammer a link to a hastily produced PHP page; when the fraudster clicked to “upload” the image, the browser prompted for camera and location access and the poster received the live data. “Driven by greed, haste, and completely trusting the appearance of a transaction portal, he clicked the link,” the poster wrote. The result, he says, was immediate panic and messages pleading for mercy. [1]

While the account cannot be independently verified, the technical method was scrutinised and replicated by other Reddit users, who confirmed ChatGPT can produce functional code that requests geolocation and camera permissions as part of a seemingly legitimate upload flow. One commenter said they were “able to make a sort of a dummy HTML webpage” that captured geolocation after asking for permission. The original poster acknowledged using specific prompts to bypass some safety guardrails and hosting the script on a virtual private server. [1]

Experts and broader reporting underscore that this online anecdote sits inside a larger pattern: organised fraud that impersonates army personnel, public officials or law enforcement is widespread in India and beyond. Government and press accounts show multiple recent cases where victims were coaxed into transfers or large payments, from recruitment scams that collected lakhs of rupees in Maharashtra to complex confidence frauds involving long‑running deception and large sums. Such schemes commonly use social engineering to override victims’ caution. [2][5][6][4]

At the same time, investigative reporting has documented how criminal networks across Asia are weaponising AI at scale. A Reuters investigation found that scammers in Southeast Asia have used ChatGPT to craft tailored scripts for large‑scale online fraud operations, sometimes with grave human costs for coerced workers at scam compounds. That reporting highlights how accessible generative tools can be repurposed to increase the reach and persuasiveness of scams. [3]

The Reddit episode illustrates a new, DIY dimension to scambaiting, where tech‑savvy individuals deploy the same tools criticised for enabling crime to expose or embarrass fraudsters. Platforms and cyber‑security professionals warn that such “hack‑backs” occupy a legal grey area and may carry risks, even where motives are to frustrate criminality; independent authorities and courts, rather than private retaliation, remain the recommended route for managing fraud. Industry data and expert comment stress prevention: public awareness, stricter platform practices around permission prompts and rapid law‑enforcement cooperation. [1][3]

As generative AI continues to lower the barrier to building convincing web pages and social‑engineering scripts, the line between empowerment and misuse blurs. According to the original report, the Reddit thread prompted others to replicate the approach, underscoring a broader shift in how both offenders and those who oppose them use AI. Policymakers, platform operators and users will need clearer rules and technical protections to prevent permission‑prompt deception while preserving legitimate developer and accessibility uses. [1][3][4]

Reference Map:

  • [1] (Decrypt) – Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 6, Paragraph 7
  • [2] (Hindustan Times) – Paragraph 4
  • [3] (Reuters) – Paragraph 5, Paragraph 6, Paragraph 7
  • [4] (Wikipedia: Digital arrest scam) – Paragraph 4, Paragraph 7
  • [5] (Indian Express) – Paragraph 4
  • [6] (The Week) – Paragraph 4

Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.

Freshness check

Score:
9

Notes:
The narrative is recent, with the original Reddit post dated December 3, 2025. The Decrypt article was published on the same day, indicating timely reporting. Other reputable outlets, such as India Today and Hindustan Times, have also covered the story, confirming its freshness. ([indiatoday.in](https://www.indiatoday.in/trending-news/story/scammer-begs-for-forgiveness-after-delhi-man-uses-chatgpt-to-expose-him-in-viral-post-2830198-2025-12-03?utm_source=openai))

Quotes check

Score:
8

Notes:
The direct quotes from the Reddit post and the Decrypt article are consistent, suggesting original content. No significant variations in wording were found, and no earlier instances of these quotes were identified.

Source reliability

Score:
9

Notes:
The narrative originates from Decrypt, a reputable organisation known for its coverage of cryptocurrency and technology news. The article is corroborated by other reputable outlets, including India Today and Hindustan Times, enhancing its credibility. ([indiatoday.in](https://www.indiatoday.in/trending-news/story/scammer-begs-for-forgiveness-after-delhi-man-uses-chatgpt-to-expose-him-in-viral-post-2830198-2025-12-03?utm_source=openai))

Plausibility check

Score:
9

Notes:
The technical method described—using ChatGPT to create a fake payment portal to capture a scammer’s location and image—is plausible and has been verified by other Reddit users. The scammer’s reaction, as reported, aligns with typical responses when confronted with evidence of their fraudulent activities. ([decrypt.co](https://decrypt.co/350935/how-ai-wiz-used-chatgpt-turn-tables-scammer?utm_source=openai))

Overall assessment

Verdict (FAIL, OPEN, PASS): PASS

Confidence (LOW, MEDIUM, HIGH): HIGH

Summary:
The narrative is fresh, with consistent and original quotes. The source is reputable, and the claims are plausible and corroborated by multiple outlets. No significant credibility risks were identified.
