The Avocado Pit (TL;DR)
- 🕵️‍♂️ Grammarly's AI attributes writing advice to real experts, including people who have passed away.
- 👀 Users are spotting familiar names in the AI's feedback, like their bosses or former professors.
- 🔒 The feature raises serious privacy and ethical concerns around AI-generated content.
Why It Matters
If you've ever dreamt of receiving academic advice from the ghost of a professor past, Grammarly's new feature might just be your ticket. But while channeling the voices of experts might sound like a nifty idea, it turns out that using identities without permission is about as welcome as a pop quiz on a Monday morning.
What This Means for You
For users, this means you might get writing tips from someone who never actually agreed to help you out. It's a bit like getting unsolicited advice from a stranger on the street—only this time, the stranger might be your boss, your favorite professor, or someone who unfortunately can’t object because they're no longer with us. This raises questions about privacy and the ethical use of AI.
The Source Code (Summary)
Grammarly's "expert review" feature promises writing advice inspired by subject matter experts, including deceased professors, as reported by Wired. Users have discovered unexpectedly familiar names attached to the AI-generated feedback, such as their own bosses. The feature repurposes real people's identities without their explicit permission, which points to a potential breach of privacy and ethical standards.
Fresh Take
Grammarly's new feature is like a digital séance gone awry. While the intention might be to elevate writing quality, it wanders into murky ethical waters by using identities without consent. It's a digital age reminder that just because we can do something with AI doesn't mean we should. The balance between innovation and privacy is more delicate than an avocado's ripening timeline, and this is a classic case of crossing the line.
Read the full AI | The Verge article → Click here