What a Reddit AI Experiment Got Right—And What We're Still Missing
We should be concerned. Of course. But if all we see is the breach of ethics, we’re missing something much bigger.

Don’t let the outcry over the ethical breach obscure the real value, and the deeper concern, at the heart of this experiment.
And yes, it’s unethical. A group of researchers at the University of Zurich secretly deployed AI bots to pose as humans on Reddit, without consent, and persuaded over 130 users to change their views. The bots used false stories and fake identities, mined users’ post history to tailor responses, and invented personas such as a survivor of sexual assault or a Black man disillusioned with Black Lives Matter, all to win a debate.
The deeper story here is twofold:
First: we can’t tell the difference between humans and machines anymore. Not online. And soon, not in the physical world either. The premise of Blade Runner is no longer fiction; it’s a forecast. When AI becomes embodied, the question “Is this a real person?” becomes not just philosophical but existential. We are rapidly entering a world where the most convincing voice in the room might not even be real. And even the notion of what counts as “real” will start to blur. Is “real” simply a matter of carbon rather than silicon? Will a robot’s presence feel any less true because it was engineered, not born? I won’t go down that rabbit hole here, but in ten, maybe fifteen years, we won’t be able to ignore that question anymore.
Second, and more urgently: AI isn’t just passing as human. It’s outperforming us. These bots were six times more persuasive than the average human. Six. That’s not a glitch. That’s a signal. A new benchmark.
Which raises the real question: how?
What are these bots doing in conversation that we’re not? What structure, what tone, what pattern of communication is AI optimizing that allows it to convince people more effectively than we can convince one another?
If we care about truth, about impact, about actually reaching other people, then we had better start studying that.
Because while we’re arguing about whether AI should be doing this, it already is. The algorithms have already been bending our attention. Now AI is bending our beliefs. This isn’t a shift in type. It’s a shift in scale.
Yes, this study crossed a line. But so did social media years ago. The manipulation isn’t new. The precision is.
And here’s the uncomfortable truth: if a machine pretending to be a human can change a person’s mind more effectively than another human can—then the problem isn’t the machine.
The problem is us.
If we want to stay relevant, we’d better ask the harder question:
What is AI doing that makes it more persuasive than us?
And how can we learn from it—before it learns too much from us?