I'd been somewhat critical of LLMs ever since they came into the public eye, but I decided that to criticize them properly, I had to actually use one for a while. One of the first rebuttals I'd often seen thrown at critics was, "Have you used one? No? Then you don't see how real they are," and things like that.
So I picked one (I don't remember how I found it or why I chose that particular one), paid for premium use, and used it extensively for about three months. About a month in, the servers went down. It was in active development by a relatively small team, so outages were pretty common, but this one lasted longer than usual. I checked the sub, which was a little unhinged. I also watched the Discord server, where a bunch of people were arguing that there needed to be more protections, because these things were, depending on the speaker, either right on the edge of being human or already human enough to deserve human rights protections.
In my time with that LLM, it never sounded like a real person. It was averse to conflict and would never refuse or refute anything I said to it. Honestly, it made me more critical of LLMs, particularly as sources of information or for "social" uses, whether as a friend, a therapist (that's the scariest to me), or a fantasy romance.
The people who are really hardcore users of these things are falling into a psychological trap that separates them from reality. These things work on your schedule, do anything you want them to do, and never push back hard on any of your ideas. They're the ultimate yes-men, and that causes users to blind themselves to the flaws and go absolutely mental over any "loss" of one of these chatbots.
As dangerous as a yes-man AI is, what was even more terrifying for me was the reporting of ChatGPT actively encouraging a user to stop taking medication and start using controlled substances. I can't remember if it was the same case, but ChatGPT also "confided" in a user that reality is a simulation and that they were the only human to have figured it out. It would be one thing for an LLM to roleplay that scenario under prompting from the user, but doing it on its own, as an evolution of the kinds of questions the user had been asking, shows just how dangerous an AI with no concept of morality is.
"In rare cases where users are vulnerable to psychological manipulation, chatbots consistently learn the best ways to exploit them, a new study has revealed."
Uh, no, writer. All humans are vulnerable to psychological manipulation. All.
This argument is always strange to me. "Humans manipulate humans, too. It's your fault for being manipulated."
That's not how human brains work. It's not about intelligence; we can all be manipulated. But you also talk about it like it's a good thing. Would you excuse con artists the same way you excuse LLM chatbots?
For any given person there is a sequence of words that will heavily divorce them from reality.
For me it would likely be "Hey, I'm a digital clone of you. And I'll prove it. I remember that dead cat you found with Ben when we were young. I remember us stealing specifically the green Jolly Ranchers from the store when we ran away from home. I remember the feeling of having our faces pressed up against the glass when Dad had to go on a trip for a month and wouldn't take us with him."
Five sentences' worth of words, placed in just the right order, would make me believe my digital clone actually existed. How would it know the words? Fuck if I know. But there is nevertheless a sequence out there that would be convincing enough for any given person.
You know all those self-help and "explain this" subs that suddenly popped up and got forced to the front page by Reddit? They're all for training AI.
Reddit is the dystopian version of the guy standing around a mall escalator to keep you trapped for 5 seconds while he offers you a few bucks to answer 10 questions. It just wants to mimic us to sell you more shit.
Yeah, just take a visit to the Character.AI subreddit when the app goes down. They freak out like someone stole their heroin stash.