Recent tensions between Pakistan and Afghanistan have cast a spotlight on alleged Indian disinformation campaigns, with fresh accusations that artificial intelligence is being used to create false narratives targeting Pakistan.

Pakistan has accused Indian media outlets of orchestrating a deliberate misinformation campaign through fabricated content, including AI-generated images and videos aimed at damaging Pakistan’s international reputation during a sensitive period of regional instability.

A notable incident involved North Waziristan resident Adil Dawar, who was reportedly misrepresented by several Indian television channels as a Pakistan Army officer killed in operations near the Pakistan-Afghanistan border. Dawar publicly refuted these claims in a video statement.

“My name is Adil Dawar, and I am a political and social activist from North Waziristan,” he clarified in his statement. “Indian media used an AI-generated photo of me and falsely presented me as a military major. I am an ordinary citizen.” Dawar characterized the false reporting as a significant breach of journalistic ethics.

This incident occurs against the backdrop of deteriorating relations between Pakistan and Afghanistan, with border skirmishes and diplomatic tensions rising in recent months. Security analysts suggest the timing of such disinformation is rarely coincidental, often appearing during periods of regional instability.

Media experts in Islamabad have pointed to what they describe as a pattern of Indian media outlets spreading fabricated narratives about Pakistan. They cite previous controversies surrounding the 2019 Pulwama attack and incidents in Pahalgam, claiming these represent a consistent strategy to isolate Pakistan diplomatically.

The accusations highlight growing concerns about the misuse of artificial intelligence in creating convincing but entirely false media content. As AI technology becomes more sophisticated and accessible, the potential for its deployment in international information warfare has raised alarms among cybersecurity and international relations experts.

The Kashmir Media Service, which reported on these allegations, has consistently maintained that Indian media outlets engage in coordinated campaigns to shape international perception against Pakistan, particularly regarding disputed territories and regional conflicts.

Digital rights organizations have noted the increasing prevalence of deepfakes and AI-manipulated content in geopolitical contexts worldwide. The technology can now create realistic images and videos that are difficult for viewers to distinguish from authentic media, creating new challenges for information verification.

Pakistan’s Ministry of Information has previously established specialized units to counter what it terms “hybrid warfare” through disinformation. These efforts include media monitoring and rapid response mechanisms to address false reports circulating internationally.

Regional tensions between India and Pakistan have historically been amplified by media outlets on both sides, with accusations of propaganda frequently exchanged. However, the alleged incorporation of advanced AI technology represents an escalation in these information conflicts.

Media literacy experts emphasize the critical importance of source verification and cross-referencing information, particularly regarding sensitive geopolitical matters in South Asia. They recommend that international observers and news consumers approach regional reporting with heightened scrutiny.

As digital manipulation technologies continue to advance, the international community faces growing challenges in distinguishing authentic reporting from fabricated content, particularly in regions with complex historical conflicts and ongoing territorial disputes.

Neither Indian government officials nor major Indian media organizations had issued responses to these specific allegations at the time of reporting.


