Last month, a customer told me she'd had "the best support experience of her life." She was devastated to learn it was an AI. "I felt manipulated," she said. "Like I'd been tricked into having feelings." Her reaction haunts me.
Should AI Pretend to Be Human?
After years of research, here's where I've landed: transparency should be the default, with narrow exceptions. The burden should be on companies to justify non-disclosure.
"The question isn't whether AI can pass as human. It's whether we want to live in a world where we can never be sure if We're talking to a person. I don't."
Sherry Turkle, MIT Professor and author of "Alone Together"
The Bias Problem
In voice AI, we've documented:
- Accent bias: Speech recognition produces higher error rates for non-native and regional-accent speakers
- Gender inference: Systems that infer gender from voice may apply stereotyped conversational patterns
- Name-based assumptions: AI may adjust tone or formality based on ethnicity inferred from a customer's name
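The accent-bias finding above is the kind of thing a routine audit can surface. Here is a minimal sketch of one: compute word error rate (WER) per speaker group and compare. The group labels and transcripts are hypothetical illustrations, not real audit data.

```python
# Minimal bias-audit sketch: compare speech-recognition word error
# rate (WER) across speaker groups. A persistent gap between groups
# flags potential accent bias worth investigating.

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def audit_by_group(samples):
    """Average WER per speaker group from (group, reference, hypothesis) rows."""
    totals = {}
    for group, reference, hypothesis in samples:
        totals.setdefault(group, []).append(wer(reference, hypothesis))
    return {g: sum(scores) / len(scores) for g, scores in totals.items()}

# Hypothetical transcripts: what the speaker said vs. what the system heard.
samples = [
    ("native",     "i need to reset my password", "i need to reset my password"),
    ("native",     "cancel my order please",      "cancel my order please"),
    ("non_native", "i need to reset my password", "i need to rest my passport"),
    ("non_native", "cancel my order please",      "can sell my order please"),
]
print(audit_by_group(samples))
```

In production you would run this over a large, demographically labeled evaluation set and, per the monitoring principle below, publish the per-group numbers rather than just the overall average, which can hide a wide gap.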
A Framework for Ethical AI
Five Principles:
- Transparency by Default: Disclose AI use unless compelling reason not to
- Human Backup, Always: Clear, easy path to a human
- Continuous Bias Monitoring: Audit and publish results
- Privacy as a Feature: Minimize collection, maximize control
- Accountability Structures: Humans responsible for AI decisions
The customer who felt "tricked" taught me something important: technical excellence isn't enough. We must build AI that people can trust not because it fools them, but because it respects them.
Continue the Conversation
I'd love to hear your thoughts on AI ethics in customer service.
Follow @drrachelgreen →


