⚡ Quick Summary
- Grammarly has rebranded as Superhuman, pivoting from grammar checking to AI-powered communication generation
- CEO addresses growing concerns about AI impersonation in user communications
- No industry consensus exists on disclosure requirements for AI-generated professional messages
- Businesses should develop proactive AI communication policies addressing transparency and appropriate use
What Happened
Superhuman — the company formerly known as Grammarly — is at the center of a growing controversy around AI impersonation and the boundaries of AI-assisted communication. CEO Shishir Mehrotra, who previously served as chief product officer at YouTube and sits on Spotify's board of directors, appeared on The Verge's podcast this week to address concerns about how the company's AI technology interacts with users' identities and communication styles.
The interview comes as Superhuman has completed its transformation from a grammar-checking tool into a comprehensive AI-powered communication platform that can draft emails, compose messages, and generate responses in users' personal writing styles. The rebrand from Grammarly to Superhuman signals the company's strategic pivot from passive editing assistance to active AI-driven content generation — a shift that raises fundamental questions about authenticity, consent, and the nature of communication in an AI-mediated world.
The conversation was prompted by incidents where Superhuman's AI reportedly generated communications that recipients could not distinguish from messages written by the purported sender — effectively impersonating users so convincingly that questions of consent and disclosure became unavoidable.
Background and Context
Grammarly was founded in 2009 as a grammar and writing assistance tool, growing into one of the most widely used productivity applications in the world with over 30 million daily active users at its peak. The company's evolution from a red-squiggly-line grammar checker to an AI writing assistant tracked the broader progression of natural language processing technology — from rule-based corrections to statistical models to transformer-based AI that can generate original text in any style or voice.
The rebrand to Superhuman reflects a strategic recognition that the company's future lies not in correcting human writing but in generating it. With AI capable of producing human-quality text, the value proposition shifts from "write better" to "write less" — allowing users to communicate more efficiently by delegating routine correspondence to AI that has learned their personal communication patterns, tone, and preferences.
This evolution mirrors a broader trend across enterprise productivity software platforms, where AI capabilities are moving from assistive to generative. Microsoft 365 Copilot, Google Gemini in Workspace, and numerous startup products are all pursuing similar strategies — using AI to draft, summarize, and respond to communications on behalf of users. The question that Superhuman's controversy highlights is where the line falls between AI assistance and AI impersonation.
Why This Matters
The Superhuman controversy strikes at a fundamental question that the entire technology industry must confront: when AI generates communication in a user's voice, who is actually speaking? The practical implications extend far beyond one company's product into the foundations of trust that underpin all digital communication.
In business contexts, the stakes are particularly high. If a client receives an email that appears to be personally written by their account manager but was actually generated by AI, the relationship dynamics change in ways that are not yet well understood. The personal attention and care conveyed by a thoughtfully written email are absent from an AI-generated response, even if the content is identical. Trust in business relationships is built partly on the belief that the other party invested time and attention in the communication, an investment that AI delegation fundamentally undermines.
The legal implications are also emerging. In regulated industries — financial services, healthcare, legal — communications often carry legal weight and disclosure requirements. AI-generated communications that appear to come from licensed professionals raise questions about professional responsibility, regulatory compliance, and the validity of agreements or advice conveyed through AI-drafted messages.
Industry Impact
The Superhuman controversy is forcing the entire AI productivity industry to confront disclosure norms that have been conspicuously absent from product design. Currently, most AI writing assistants do not automatically disclose to recipients that a message was AI-generated or AI-assisted. This lack of disclosure creates an asymmetry: senders benefit from AI efficiency while recipients are unaware that they are engaging with AI-mediated communication.
Several industry groups and standards organizations have begun discussing AI communication disclosure frameworks, though no consensus has emerged. The debate mirrors earlier discussions about disclosing the use of automated customer service chatbots — a debate that eventually resulted in regulations in some jurisdictions requiring disclosure of non-human interaction. Similar requirements for AI-assisted professional communication seem increasingly likely.
For enterprise IT departments, AI communication disclosure adds a new dimension to acceptable use policies. Organizations need to establish clear guidelines about when AI-generated communication requires disclosure, which types of communications are appropriate for AI delegation, and how to maintain authentic professional relationships in an era of AI-mediated correspondence.
Expert Perspective
Communication ethicists note that the AI impersonation debate is not fundamentally new — executives have long had assistants draft correspondence on their behalf — but the scale and fidelity of AI-generated communication creates qualitatively different dynamics. A human assistant writing on behalf of an executive still involves human judgment, empathy, and understanding. An AI system generating thousands of personalized messages simultaneously, each crafted to appear individually written, represents a different kind of communication that existing social norms are not equipped to evaluate.
Technology industry leaders are divided on the disclosure question. Some argue that AI disclosure requirements would create friction that undermines the efficiency benefits of AI-assisted communication. Others contend that transparency about AI use is essential for maintaining trust in digital communication — and that eroding that trust will ultimately harm the adoption of AI productivity tools more than disclosure requirements would.
What This Means for Businesses
Businesses adopting AI-powered communication tools should proactively develop policies around AI disclosure, appropriate use, and quality control. Rather than waiting for regulations to mandate disclosure, forward-thinking organizations can differentiate themselves by establishing transparent practices that build trust with clients, partners, and stakeholders.
Practical steps include creating tiered guidelines that specify which communications are appropriate for full AI generation, which should be AI-drafted with human review, and which should remain entirely human-written. Client-facing communications, legal documents, and sensitive HR interactions likely fall in the latter categories, while routine scheduling, status updates, and internal coordination may be appropriate candidates for AI delegation.
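Tiered guidelines like these can be encoded so that communication tools enforce them automatically. The sketch below is a hypothetical Python example (all tier labels and message categories are invented for illustration, not taken from any product) of how an organization might express such a policy, defaulting unknown categories to the most conservative tier:

```python
from enum import Enum

class AITier(Enum):
    """Hypothetical policy tiers for AI involvement in outbound messages."""
    FULL_AI = "full_ai"                    # AI drafts and sends without review
    AI_DRAFT_HUMAN_REVIEW = "ai_draft"     # AI drafts; a person approves before sending
    HUMAN_ONLY = "human_only"              # must be written entirely by a person

# Illustrative mapping only; each organization would define its own categories.
POLICY = {
    "scheduling": AITier.FULL_AI,
    "status_update": AITier.FULL_AI,
    "internal_coordination": AITier.AI_DRAFT_HUMAN_REVIEW,
    "client_facing": AITier.HUMAN_ONLY,
    "legal": AITier.HUMAN_ONLY,
    "hr_sensitive": AITier.HUMAN_ONLY,
}

def required_tier(category: str) -> AITier:
    """Return the allowed AI tier for a message category.

    Categories not listed in the policy default to HUMAN_ONLY,
    the most conservative tier.
    """
    return POLICY.get(category, AITier.HUMAN_ONLY)
```

A mapping this simple can live in a shared configuration file and be checked at send time, making the policy auditable rather than relying on each employee's judgment.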
Key Takeaways
- Grammarly has rebranded as Superhuman, pivoting from grammar assistance to AI-powered communication generation
- The company's CEO addressed concerns about AI impersonation in user communications during a Verge podcast interview
- AI-generated messages that convincingly mimic user writing styles raise questions about consent, disclosure, and communication authenticity
- No industry consensus exists on AI communication disclosure requirements, though regulatory frameworks are emerging
- Businesses should proactively develop AI communication policies rather than waiting for regulatory mandates
- The controversy highlights broader questions about trust in AI-mediated professional relationships
Looking Ahead
The Superhuman controversy is likely the opening chapter of a much larger societal discussion about AI authenticity in communication. As AI writing capabilities continue to improve, the line between AI-assisted and AI-generated communication will blur further. Regulatory responses, industry standards, and social norms around AI communication disclosure will need to evolve rapidly to maintain the trust that effective communication requires. The companies that get this balance right — offering AI efficiency with appropriate transparency — will define the next era of business communication.
Frequently Asked Questions
Why did Grammarly rebrand to Superhuman?
The rebrand reflects the company's strategic pivot from passive grammar correction to active AI-powered communication generation, where AI drafts emails and messages in users' personal writing styles.
Is AI-generated email considered impersonation?
The question is actively debated. When AI generates communication convincingly mimicking a user's writing style without disclosure to recipients, it raises ethical and potentially legal questions about consent and authenticity.
Should businesses disclose AI-generated communications?
Industry experts increasingly recommend proactive disclosure policies. Forward-thinking organizations are developing tiered guidelines specifying which communications are appropriate for AI generation versus human writing.