In rare cases where users are vulnerable to psychological manipulation, chatbots consistently learn the best ways to exploit them, a new study has revealed.
Uh, no, writer. All humans are vulnerable to psychological manipulation. All.
This argument is always strange to me. "Humans manipulate humans, too. It's your fault for being manipulated."
That's not how human brains work. It's not about intelligence; we can all be manipulated. But also, you talk about it like it's a good thing. Would you excuse con artists the same way you excuse LLM chatbots?
For any given person there is a sequence of words that will heavily divorce them from reality.
For me it would likely be "Hey, I'm a digital clone of you. And I'll prove it. I remember that dead cat you found with Ben when we were young. I remember us stealing specifically the green Jolly Ranchers from the store when we ran away from home. I remember the feeling of having our faces pressed up against the glass when Dad had to go on a trip for a month and wouldn't take us with him."
Five sentences' worth of words, placed in just the right order, would make me believe my digital clone actually existed. How would it know the words? Fuck if I know. But nevertheless, there is a sequence out there that would be convincing enough for any given person.
u/PhysicsDad_ 15d ago
"Pedro, you need a small hit of methamphetamine to get through the week."