AI Ecosystem

Grammarly Faces Backlash After AI Feature Uses Journalists' and Professors' Identities Without Consent

⚡ Quick Summary

  • Grammarly's AI expert review feature used real journalists' and professors' identities without consent
  • The Verge found its editors' names being used to generate AI writing feedback they never authorised
  • Controversy raises major questions about AI identity use, right of publicity, and digital ethics
  • Businesses should audit AI tools for consent and ethical compliance before enterprise deployment

What Happened

Grammarly, the widely used AI writing assistant, is facing mounting criticism after reports revealed that its "expert review" feature has been using the names and apparent personas of real journalists, professors, and other professionals to provide AI-generated writing feedback, without obtaining consent from any of the individuals named. The controversy erupted after Wired reported that the feature offered advice "inspired by" subject matter experts, including recently deceased academics, and The Verge subsequently discovered that its own editorial staff were being impersonated.

The Verge's investigation found that Grammarly's AI-generated feedback appeared to come from the publication's editor-in-chief Nilay Patel, editor-at-large David Pierce, and senior editors Sean Hollister and Tom Warren, none of whom had given Grammarly permission to use their names or likenesses. The comments mimicked the tone and style that might be expected from these individuals, lending an air of authority to AI-generated suggestions that had no actual human expert involvement.


The feature, which launched in August 2025, was marketed as a way for users to receive writing guidance "inspired by" recognised authorities in various fields. However, the line between inspiration and impersonation has proven to be uncomfortably thin, raising fundamental questions about the ethics of using real people's identities to lend credibility to AI-generated content without their knowledge or consent.

Background and Context

Grammarly has evolved significantly from its origins as a simple grammar and spell-checking tool. The company, founded in 2009, has grown into a comprehensive AI writing platform used by over 30 million daily active users and adopted by more than 70,000 teams and organisations. Its expansion into AI-powered features has accelerated since the emergence of large language models, with the company positioning itself as an essential productivity tool that goes beyond basic correction to offer substantive writing guidance.

The expert review feature represents Grammarly's attempt to differentiate its AI capabilities from generic chatbot interactions by associating its suggestions with real-world expertise. It is conceptually similar to how some AI applications create chatbot personas based on public figures, a practice that has drawn legal challenges and ethical criticism across the technology industry. The use of deceased individuals' identities adds another layer of controversy, as the inability of these individuals to consent or object raises questions about posthumous digital rights.

This controversy arrives at a critical moment in the broader debate about AI and identity. Multiple jurisdictions are developing or refining legislation around the use of individuals' likenesses in AI systems, and several high-profile lawsuits have been filed by public figures whose voices, images, or personas were used to train or operate AI systems without authorisation. The legal framework governing these issues remains fragmented and evolving, creating uncertainty for companies exploring the boundaries of AI-generated content.

Why This Matters

The Grammarly incident illuminates a fundamental tension in the AI industry between the desire to make AI outputs feel authoritative and personal, and the ethical and legal obligations to respect individuals' rights to control how their identities are used. When an AI system presents feedback as being "inspired by" a specific journalist or professor, it leverages that person's professional reputation to establish credibility: a form of endorsement that the individual never agreed to provide.

For the millions of professionals who use Grammarly in their daily work, the revelation raises trust concerns about the platform itself. If Grammarly is willing to use real people's identities without consent for one feature, users may reasonably question what other ethical boundaries the company might be willing to cross in its pursuit of AI capabilities. Maintaining trust is essential for any productivity tool that has access to users' writing, which often includes sensitive business communications, legal documents, and personal correspondence.

This controversy also has practical implications for businesses evaluating AI writing tools. Organisations that deploy Grammarly or similar tools across their workforce need to understand the ethical foundations of the AI features they're enabling. Companies operating in regulated industries face particular risk, as the use of AI tools that engage in questionable identity practices could create compliance and reputational exposures.

Industry Impact

The AI writing assistant market, which includes competitors like Microsoft Copilot, Google's AI features in Workspace, Jasper, and numerous startup offerings, is watching the Grammarly controversy closely. The incident serves as a cautionary tale about the risks of moving too aggressively with AI features without adequate ethical review and consent mechanisms. Companies in this space may need to invest more heavily in compliance infrastructure, including obtaining explicit permissions before associating real individuals with AI-generated content.

The legal implications could be substantial. Depending on jurisdiction, Grammarly's use of individuals' names and implied expertise without consent could potentially implicate right of publicity laws, unfair business practices statutes, or emerging AI-specific legislation. Even if the company avoids formal legal liability, the reputational damage and the cost of remediation could be significant. The incident may also prompt class action litigation from the broader group of professionals whose identities were used without consent.

For the broader AI industry, the controversy reinforces the growing consensus that "inspired by" or "based on" labels are insufficient when AI systems use real people's identities. Clear, affirmative consent, not post-hoc opt-out mechanisms, is increasingly being viewed as the minimum acceptable standard by regulators, ethicists, and the public.

Expert Perspective

Digital rights advocates have been quick to frame the Grammarly incident within the larger context of AI companies treating human identity as a raw material to be harvested and processed without regard for individual consent. The pattern (train on people's work, mimic their style, use their names for credibility) is seen by critics as an extension of the extractive data practices that have defined the social media era, now amplified by AI capabilities that can simulate individual identity at scale.

Legal scholars note that the regulatory landscape for AI identity use is rapidly evolving but currently fragmented. The EU's AI Act includes provisions related to transparency and the use of personal data, while California and several other US states have been strengthening right of publicity protections. However, enforcement remains inconsistent, and the pace of AI development continues to outstrip the pace of regulatory adaptation.

What This Means for Businesses

Organisations using Grammarly or other AI writing assistants should review the features they have enabled and assess whether any raise ethical or compliance concerns. In particular, companies should examine whether AI tools used by their employees are presenting AI-generated content as if it comes from specific real individuals, and whether such representations could create legal or reputational risks.

More broadly, the incident highlights the importance of conducting thorough due diligence when adopting AI tools. Businesses should evaluate vendors not just on features and pricing but on their ethical AI practices, data handling policies, and consent mechanisms. Building productivity environments on well-established, transparent platforms like enterprise productivity software from trusted vendors provides a foundation that minimises ethical risk while maximising capability.

Key Takeaways

  • "Inspired by" labels are no substitute for explicit consent when AI systems use real people's names and reputations
  • Grammarly faces potential right of publicity claims, unfair business practices litigation, and scrutiny under emerging AI legislation
  • Businesses should audit enabled AI features and vet vendors on ethical AI practices, not just features and pricing
  • Consent-first design is cheaper than retrofitting ethical safeguards after a public controversy

Looking Ahead

Grammarly will likely need to fundamentally redesign its expert review feature, either obtaining explicit consent from named individuals or removing real identities entirely in favour of clearly fictional or anonymous expert personas. The broader AI industry should take note: as AI capabilities expand to simulate individual identity with increasing fidelity, the ethical and legal requirements for consent will only become more stringent. Companies that build consent-first approaches into their AI development processes now will be better positioned than those forced to retrofit ethical safeguards after public controversy.

Frequently Asked Questions

What is Grammarly's expert review feature?

It is an AI feature that provides writing feedback labelled as being "inspired by" real-world subject matter experts, using their names and implied expertise to lend authority to AI-generated suggestions.

Did the experts consent to being used by Grammarly?

No. Multiple journalists from The Verge confirmed they never gave Grammarly permission to use their names, and the feature also included recently deceased professors who could not consent.

What are the legal risks of using AI to impersonate real people?

Potential legal risks include right of publicity violations, unfair business practices claims, and liability under emerging AI-specific legislation that requires transparency and consent in the use of personal identities.

Tags: Grammarly, artificial intelligence, privacy, AI ethics, digital identity
OfficeandWin Tech Desk
Covering enterprise software, AI, cybersecurity, and productivity technology. Independent analysis for IT professionals and technology enthusiasts.