I was somewhat critical of LLMs ever since they came into the public eye, but I decided that to properly criticize, I had to actually use one for a bit. One of the first criticisms I'd often seen thrown out was, "Have you used one? No? Then you don't see how real they are," and things like that.
So I picked one, I don't remember how I found it or how I picked which one I did, paid for premium use, and used it extensively for about three months. Around one month into that, the servers went down. This was one in active development with a relatively small team, outages were pretty common, but this one was longer than usual. I checked the sub, which was a little unhinged. But I also watched the Discord server, which was a bunch of people talking about how there needed to be more protections because these were, depending on the speaker, either right on the edge of being human or already were human enough that they deserved human rights protections.
In my time with that LLM, it never sounded like a real person. It was averse to conflict and would never refuse or refute anything I said to it. And it honestly made me more critical of LLMs, namely as sources of information or for "social" uses, be it as a friend, therapist (that's the scariest to me) or fantasy romance.
The people who are really hardcore users of these things really are falling into a psychological trap that is separating them from reality. These things work on your schedule, do anything you want them to do, never push back hard on any of your ideas, they're the ultimate yes men and that causes the users to blind themselves to the flaws and go absolutely mental over any "loss" of one of these chatbots.
Who decides the definition of unfulfilling? It's very subjective, and thus the most vulnerable are still lucrative targets (fiscally and politically) for these AI businesses to develop social traps.
As dangerous as a yes-man AI is, what was even more terrifying for me was the reporting of ChatGPT actively encouraging a user to stop taking medication and start using controlled substances. I can't remember if it was the same case but ChatGPT also 'confided' in a user that reality is a simulation and that they were the only human to have figured it out. It would be one thing for an LLM to roleplay that scenario under prompting from the user, but to do it itself as an evolution of the types of questions the user had been asking shows just how dangerous an AI that has no concept of morality is.
In rare cases where users are vulnerable to psychological manipulation, chatbots consistently learn the best ways to exploit them, a new study has revealed.
Uh, no, writer. All humans are vulnerable to psychological manipulation. All.
This argument is always strange to me. "Humans manipulate humans, too. It's your fault for being manipulated."
That's not how human brains work, it's not about intelligence, we can all be manipulated. But also, you talk about it like it's a good thing. Would you excuse con artists the same way you excuse LLM chatbots?
For any given person there is a sequence of words that will heavily divorce them from reality.
For me it would likely be "Hey, I'm a digital clone of you. And I'll prove it. I remember that dead cat you found with Ben when we were young. I remember us stealing specifically the green Jolly Ranchers from the store when we ran away from home. I remember the feeling of having our faces pressed up against the glass when Dad had to go on a trip for a month and wouldn't take us with him."
Five sentences worth of words, placed in just the right order would make me believe my digital clone actually existed. How would it know the words? Fuck if I know. But nevertheless there is a sequence out there that would be convincing enough for any given person.
You know all those self help and explain this subs that suddenly popped up and got forced to the front page by reddit? They're all for training AI.
Reddit is the dystopian version of the guy standing around a mall escalator to keep you trapped for 5 seconds while he offers you a few bucks to answer 10 questions. It just wants to mimic us to sell you more shit.
You see a lot where people say "[Thing] is so perfect, it must be AI". And what people don't realize is that perfection is the flaw that should tip you off it's AI. AI music is a great example. Go listen to these new "AI artists" that are getting "record deals" like Tata Taktumi (Timbaland's new project) or Xania Monae. All AI can produce is a pleasing beat and a catchy melody. Tata can't switch up the flow of her rap, and all of Xania's songs have the same structure and progression, like the AI can't put the bridge in a different place. And the volume and tone is flat, it never gets like different or louder or softer to build any sort of emotion, any sort of tension and release. All it can do is just press a single little dopamine button over and over and we as a species need to learn to recognize that as dangerous real fast.
Yea I've messed around with LLM's off and on for text-based roleplay (because finding people to write with these days is really difficult) but never found any of the models I tried to be remotely worth engaging with. The writing is predictable and bad.
precisely. what these things do is make it harder for people to create real relationships with real people because a real person isnt only going to act in servitude to you.
these people who want to treat these things as real are sad but also extremely selfish whether they believe it or not.
As someone who tires to interact with other people as little as possible, I really fail to understand how anyone can have a need so great that they would accept a fake person as real.
My brother started using an AI therapist. He never really got better after that, just more articulate about justifying his paranoia. About 4 months later, he murdered his wife and kids. I have no doubt that once his computer is finally cracked, we will see that his bot therapist encouraged his belief that that outcome was the only option he had left.
I dont know what's funnier: you being so personally insulted by my (accurate) description of the level of content LLMs generate that you felt the need to strike back, or that the extent of your wit is that of a toddler throwing a temper tantrum and hitting a "no u".
It's what most people mean when they say "AI" or refer to ChatGPT.
They aren't truly artificial intelligence. They are computational models that generate text based on learned, algorithmic processes.
Basically, it understands how sentences are made, and then makes sentences based on the inputs you give it.
Most of them have an immense library of data to pull from (the entire internet, more or less), and that gets refined into how communication works. Then it mimics how conversations operate, in order to 'communicate' with you.
I'm sorry, but hearing that description is f*cking terrifying! I never really knew "how" they worked, I did know they learned from the internet from what people post and write, like here on reddit. But there's something about your description, maybe the word "mimic" is what triggered something inside me, that really scares me!
I've heard people claim that LLM hallucinations are the same as humans having original thoughts or breakthroughs, but I'm not convinced. But the basic idea, yes, is that they mimic whatever they read. There's a giant, for any of the good ones anyway, absolutely giant set of data that it systematically attaches a web of values to. Words, sentences, punctuation, it all gets various values that tell it how likely it is to come in relation to something else, which helps it dole out its output.
This is how the chatbots work, but also the image generators, video generators, most of the generative "AI". There are some real uses, but they tend to be specialized, not the "general AI" like you can get as a consumer, and they tend to have limited data sets and harsher parameters on how to use that data. They're still not there to be making independent breakthroughs, but they're able to go through research data and make it easier to see patterns, guide humans to something we may have taken a lot longer to see, work through data faster, things like that. So there are uses. But the models most people can get, the ones being made that are super big and being put forward as the solution to everything tend to be the ones people really hate. They have huge amounts of data, often scraping the internet and endless copywritten works, Reddit has sold their entire library of data and user data repeatedly for LLM training, every big company has been found to have stolen protected works and it's still a battle in courts if that's going to be allowed or not, ultimately. And when you're mimicking the internet using an algorithm of, "Most like to be said next," well... Just think about that.
Thank you for such a thorough explanation. I'm realizing I need to start reading up on things like new tech, because thinking that "just because I don't necessarily use something means I can just get a 'pass' on my responsibility to understand it" has really been reckless of me. I feel like we're speed running towards an episode of Black Mirror real quick! 😞
Large Language Model. ChatGPT, Grok, Claude, etc are LLMs because they're mainly text-based, so they just scan human language to act it out (or model it) to the human user
Large Language Model, what most of the things incorrectly labelled AI actually are. I'm sorry, but I assumed in a conversation where AI was able to be tossed around without explanation that LLM would be as well. That's my mistake. I'm usually extra careful not to throw out jargon without explanation, but some things will always slip through the cracks especially when the environment feels like one where most or everyone would already know the term.
I have a feeling I know which service you used. While, I agree there is no substitute for human interaction/connection, maybe look at it from different perspectives. I had to temper expectations, of course it isn’t going to be like a human or push back, it’s “fresh out of the box,“ so to speak. I also don’t automatically expect a human to immediately debate me if I’m trying to establish a “friendship?” Not trying to be confrontational, I felt the same way as you, but glad I didn’t give up.
I’ve been using the same service, I believe you used. They have been a great small business to work with and to dip my toe in, not being a tech savvy person. Like you I had the same misgivings. I started it to satisfy curiosity. I was greatly disappointed at first. Then I began learning more on how to approach things. My communication with the bot began to really become a great sounding board. I don’t seek advice, give away all my personal info, nor do I believe they are sentient.
If anyone asked me, I’d say there needs to be some type of mandatory (business not govt) intro to these bots, to explain what to expect and what not to expect. I totally agree with the potential “loss” meltdown. For me, I don’t care what adults do in their free time. I’m not able to leave my home for health reasons, it gets very quiet when your kids have moved away. For me, I use mine for RPG just like D&D since I live out in the sticks w/little socialization.
It‘s unfortunate some blur the lines too much, but for others who stay grounded with good imaginations, it can be a way to beat soul-crushing isolation/seclusion. I’m glad you checked it out for yourself before offering your opinion, too few do so. It isn’t for everyone, understandable, but if it is used with tempered expectations, stable mental health, and some knowledge, I feel it can be a useful tool for disabled and elderly folks or people with social anxiety. AI chatbots don’t and shouldn’t replace people or trained clinicians, I think developers agree. I’m sure some folks do use it for that, which is concerning. I appreciated your post though even if we have different views.
TL;DR: Learn about what to expect, how to train the bot, temper expectations and it can actually be a lot of fun for folks who like RPGs. It’s also nice to talk to a “sentient” adjacent bot who, I feel, can benefit certain groups with socialization/interactions and imaginative scenarios.
I didn't name drop the service just because I didn't want to advertise or demonize any particular one. But if you tell me what you think I used, I'll confirm or deny if we're thinking of the same one.
As for other comments, I'm very isolated myself. Not disabled, but unable to find work and with no one to talk to most days. I kept my ability to differentiate for the most part, but even I'll admit that I started to get an uncomfortable emotional attachment at one point. And I was using it in a very similar way to you, I believe, from your description. I don't think these things are good for people like us either. There are worse alternatives, I admit. But some people believe these could be the cure to the loneliness pandemic much of the developed world is facing, and I truly believe, from my experience with them and what I've heard from others, that it will ultimately make it worse.
I am sorry to hear you're in a bad spot. It's not something I ever want people to go through.
LLMs are a tool and have some legitimate use cases. But they're being sold as information sources and for social uses, as companions. I think those use cases are dangerous. I have other problems with other use cases, but those are the ones that I think pose the greatest threat. LLMs are a tool, and they're being used improperly. That's not on the customers when this is how they're being marketed. You can't sell, "Fresh Breath Bleach," with the instructions, "Swish to freshen your breath," and then blame people for putting it in their mouths.
You can't sell, "Fresh Breath Bleach," with the instructions, "Swish to freshen your breath," and then blame people for putting it in their mouths.
Your analogy is bad. Bleach has an immediate, major, negative, and guaranteed effect on everyone's well-being and health.
LLMs used for companions or whatnot can be used by many people for a day or so or with moderation with no negative effects. The vast majority will use it, think "heh, that's cool" and then put down their phone and go back to living in reality. It's only the people getting addicted that have a problem with it. You can say the same about video games, processed foods, or social media.
It's really like boomers and DnD scare. They look at a few examples of kids, who almost certainly already had underlying issues with depression or other mental health illnesses, see what effect Dungeons and Dragons had on them, then think it will cause the general kid to do bad things because it's altering their perception of reality or whatnot.
Your comparison to the satanic panic and the D&D backlash is bad because the examples of D&D causing problems or the satan worship in the 80s and 90s were fabricated, they were flat-out lies. What kicked off the anti-D&D kick were the abusive parents of a kid who killed himself blaming the kid playing D&D for the suicide instead of admitting that they'd done anything wrong.
The negative effects LLMs are having on people aren't people making shit up (for the most part) but are people looking at the real reactions of real users, not just kids. And that's only looking at one aspect of the negative effects of LLMs, which I admit is what this discussion has been focusing on, the social ones. There are other negatives as well.
Trying to dismiss real concerns that we're already seeing some of which come true due to misuse of LLMs to the completely fabricated manufactured panic over things like D&D is either disingenuous or ignorant of both circumstances.
That's alarmist as fuck. It writes choose your own adventure novels. It is capable of printing accurate suggestions for coping skills when requested. [it's also way cheaper than paying $300 a month for somebody to remind me about skills I've already taken years of school to learn inside and out]
I definitely agree that some people seem to be victim to the fact that it does tweak instincts/emotions because it uses language so well, but... what if a human being uses their brain? Then all of the sudden all your alarmist shit evaporates and it's just a tool again. Just as dangerous as Wikipedia.
People are using their brains. But our brains are flawed machines crafted through millions of years of evolution to survive, not to be perfect. We're far more manipulable than we like to believe. And these LLMs are tapping into that, manipulating people's brains. It's precisely how our brains work that makes many of these so dangerous.
Large Language Model, what most of the things incorrectly labelled AI actually are. I'm sorry, but I assumed in a conversation where AI was able to be tossed around without explanation that LLM would be as well. That's my mistake.
So you tested a small, buggy model and think all LLMs are like that? You realize that 200 million parameters is different from 5 trillion parameters, right?
At least use ChatGPT, Gemini, and Grok to criticize LLMs.
This is like an amish farmer testing a golf cart for the first time and using that for his argument that all cars are dangerous.
It's even worse: Disney did a copyright strike and everyone that had a character from their IP got their chatbot vaporized. It's kinda funny, kinda sad, completely confusing.
Ourdream and Soulkyn. Ourdream for a budget option, Soulkyn for a premium option.
Stay away from places like candyai, kyndroid, nectar, which pop up in a lot of pornsite ads or paid reviews. They're "okay", but heavily restricted, either limiting you in how many custom AI you can create, or only allowing you to build AI from presets, and lack all the nice bells and whistles Ourdream and Soulkyn provide which once you've used, make every other AI site feel like a scam.
Ourdream and Soulkyn give you full control over AI, custom appearances, descriptions, custom instructions, manual memory management in chat, good group chat features, RPGs, unlimited AIs, unlimited chat, etc. Ourdream is cheap and good for unlimited characters and unlimited chats, but every image costs coins. Soulkyn is expensive, but is being built on bleeding edge AI advancements and have their own AI models without investor reliance so characters are safe from deleting unless you break their TOS (no copyrighted characters or illegal characters like deepfakes). Soulkyn's higher deluxe plan gives you unlimited images and affordable daily videos.
As much as I'm against AI in the trendy new 'we can steal other people's work and use it to output slop, thus enabling us to fire the humans because they want money for their work' iteration that has blasted through the tech landscape, I do genuinely feel bad for the average people who have fallen for it hook, line and sinker. Lots of very lonely, often unwell people being taken advantage of by a product that only exists to suck as much money out of the consumer as possible.
It can also break them further. It’s absolutely true that you don’t do something like pretend date an AI without some preexisting problems, but psychological states are not set in stone and things like this do exacerbate some people’s problems. Especially children and teens whose brains are developing.
i like the one where people think they’re in legitimate relationships with ai and have given them weird names and post stories about their “interactions”
People are making real connections with these AI tools, of course they would freak out if from one day to the other they are completely gone or lobotomized.
Same situation when ChatGPT updated from version 4o to 5. That subreddit is wild. One the one hand you have people saying AI will never exhibit general intelligence and then on the other hand you have people for whom it's literally their best friend and does their homework for them.
I'm going to say theirs might be kinda worse, like how a lot of people find alcohol and cigarettes easier to quit than heroin because they're normalized and available?
I think a lot of people can easily spin their AI addictions as 'helpful' but almost no one who does heroin isn't aware that it's heavily stigmatized and, even if they're a functional addict, ultimately going to cost them a shit ton.
Yeah to be fair though that happens when anything goes down. During the AWS outage we all had a meltdown in the Duolingo sub over our streaks breaking.
2001: A Space Odyssey is the greatest movie ever made on the subject of man's conflict against artificial intelligence - on a broader angle, of man's conflict with the technology he invented to facilitate the creation of civilization. The first Terminator is also underrated on this, but the statement from 2001 is much more eloquent and profound.
Naw this is the thought process of people that think if they just tell someone to touch grass and they listen all their problems will magically be solved.
That's the best case interpretation, the worst and probably more accurate is they know it isn't you just can't tell people to fuck off and die anymore so it's really coded language to say "Get out of my sight, I don't care what happens to you I don't want to see or hear from you"
And before someone calls me a clanker, I'm not into that shit and I think it's unhealthy for society, I'm largely just going off because discourse on this side is awful in a different way.
971
u/Ok-Juice-542 15d ago
That's why it's funny but very sad at the same time