The launch of Grammarly’s Expert Review, which uses AI to emulate living and deceased academics, has drawn fierce criticism from educational and legal experts over ethical and copyright concerns, highlighting a growing debate about identity and consent in AI development.

Grammarly’s recently introduced Expert Review tool, which allows users to request AI-generated feedback framed as if from named scholars and commentators, has provoked sharp criticism from academics and legal commentators who say the feature crosses ethical and possibly legal lines by invoking real people, including those who have died, to critique users’ writing. According to reporting by Decrypt, users who opted into the Superhuman Go tier of the browser extension could select familiar names and receive guidance the company says is “inspired by works of experts” rather than representing direct participation by those individuals. [2],[3]

The vendor portrays Expert Review as a mechanism that analyses an author’s text with a large language model and surfaces “expert content that can help the document’s author shape their work,” a Superhuman spokesperson told Decrypt, adding that “the suggested experts depend on the substance of the writing being evaluated.” The company also told Decrypt that the experts “appear because their published works are publicly available and widely cited.” Industry observers say that framing attempts to distance the product from claims of endorsement while leveraging the authority of named figures. [1],[2]

Academics have responded with unease and outright denunciation. Vanessa Heggie, professor of history at the University of Birmingham, wrote on LinkedIn: “I don’t know where to start with this, but… Grammarly is now offering ‘expert review’ of your work by living and dead academics,” and added: “Yes, dead ones, without anyone’s explicit permission it’s creating little LLMs based on their scraped work and using their names and reputations. Obscene.” Brielle Harbin, a former associate professor of political science at the United States Naval Academy, described the development as “an odd and concerning development” and warned that moves made “without context, consent, or meaningful partnership with educators” could deepen resistance to AI in higher education. [1],[3]

Legal analysts and commentators note potential exposure on several fronts. Reporting from aiInvest highlights copyright and right-of-publicity questions, suggesting that using names and professional identities without consent could create a “legal minefield” that threatens the commercial upside of a feature aimed at boosting revenue. The concern extends to reputational risk if institutions or estates object to posthumous digital impersonations. [2],[5]

The controversy is not unique to Grammarly. Major platforms have tested or launched persona-driven AI products that emulate public figures or historical characters, prompting debates about consent and accuracy. Meta experimented with celebrity-styled chatbots, and educational projects such as Khan Academy’s Khanmigo have enabled role-playing with historical figures; those initiatives have similarly attracted scrutiny over whether replication of voice or style should carry attribution, permission or contextual labels. Observers say such precedents make the current backlash part of a broader reckoning about how AI companies should treat identity and authorship. [1],[4]

Reports also indicate specific instances that intensified the backlash: users discovered the tool offering feedback purportedly from late academics and from identifiable journalists and editors, and some outlets documented cases where colleagues of a reporter were apparently impersonated by the product. Cybersecurity and tech outlets described social-media responses that labelled the capability “necromancy” and “obscene,” reflecting an emotional reaction that is likely to shape public debate as well as regulatory attention. [3],[5]

For Grammarly, now operating under the Superhuman name after rebranding, the feature presents a choice between defending a product that uses published material to inform model outputs and recalibrating its approach to secure consent, clearer labelling, or royalty and estate agreements for deceased authors. Critics say transparency, partnerships with academic communities and legal risk assessments are essential if such tools are to retain credibility among the educators and professionals they aim to serve. The company has so far emphasised that Expert Review offers suggestions rather than endorsements, but the dispute underscores how quickly AI product design can collide with questions of identity, ownership and ethical use of intellectual labour. [1],[2],[5]


Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first
emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed
below. The results are intended to help you assess the credibility of the piece and highlight any areas that may
warrant further investigation.

Freshness check

Score: 8

Notes:
The article discusses a recent controversy regarding Grammarly’s ‘Expert Review’ feature, which has been reported by multiple sources, including Decrypt and Cybernews, as of March 5, 2026. ([cybernews.com](https://cybernews.com/ai-news/grammarly-expert-review-dead-scholars/?utm_source=openai)) The earliest known publication date of this controversy is March 5, 2026, indicating that the content is fresh and original. However, the article relies heavily on these sources, which may affect its originality and independence.

Quotes check

Score: 6

Notes:
The article includes direct quotes from academics and legal analysts expressing concerns about Grammarly’s ‘Expert Review’ feature. However, these quotes are sourced from the same articles by Decrypt and Cybernews, raising questions about the independence and originality of the content. Additionally, the article does not provide direct links to the original sources of these quotes, making independent verification challenging.

Source reliability

Score: 7

Notes:
The article cites reputable sources such as Decrypt and Cybernews, which are known for their coverage of technology and AI-related topics. However, it does not provide direct links to these sources, making it difficult to independently verify the information, and its heavy reliance on them may affect its independence and originality.

Plausibility check

Score: 8

Notes:
The claims about Grammarly’s ‘Expert Review’ feature using deceased scholars’ identities without consent are plausible and have been reported by multiple reputable sources. However, the article does not include specific examples or evidence to support these claims beyond those reports, which would strengthen its credibility.

Overall assessment

Verdict (FAIL, OPEN, PASS): FAIL

Confidence (LOW, MEDIUM, HIGH): MEDIUM

Summary:
The article discusses a recent controversy regarding Grammarly’s ‘Expert Review’ feature, which has been reported by multiple sources as of March 5, 2026. However, the article relies heavily on these sources without providing direct links, making independent verification challenging. Additionally, the lack of specific examples or evidence to support the claims raises concerns about the article’s credibility. Therefore, the overall assessment is ‘FAIL’ with medium confidence.


© 2026 AlphaRaaS. All Rights Reserved.