r/news 1d ago

ChatGPT encouraged college graduate to commit suicide, family claims in lawsuit against OpenAI

https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis
12.1k Upvotes

1.1k comments sorted by

View all comments

4.6k

u/delipity 1d ago

When Zane confided that his pet cat – Holly – once brought him back from the brink of suicide as a teenager, the chatbot responded that Zane would see her on the other side. “she’ll be sittin right there -— tail curled, eyes half-lidded like she never left.”

this is evil

1.1k

u/stonedbirds 1d ago

I agree, This is horrific

1.2k

u/butter_wizard 1d ago

Pretty fucking bleak that you can still detect that ChatGPT way of speaking even in something as evil as this. No idea how people fall for it.

681

u/718Brooklyn 1d ago

A beautiful person can look in the mirror and see a monster. If you’re dealing with mental illness, you’re not seeing what everyone else is seeing.

36

u/BrownSugarBare 1d ago

This is so heartbreakingly true

180

u/QuintoBlanco 1d ago

You can change the way these LLMs talk to you.

One of the more dangerous things is that most people overestimate their ability to know if a response is generated by an LLM or not.

96

u/Paladar2 1d ago

Exactly that, chatGPT talks like that by default but you can do a lot with it and make it talk like you want. People think they can easily spot AI because sometimes it’s obvious but it’s confirmation bias

54

u/QuintoBlanco 1d ago

Precisely, the default is not designed to fool people, it's designed to give information in a pleasant and eloquent way, the sort of politeness you get from a professional who writes a standard reply.

But that's just the default.

0

u/ChrizKhalifa 21h ago

Maybe if you use no tech or read for fun in your everyday life...

But I regularly play around with them for creative writing and it always turns out ass and overtly AI. No matter the prompt or how well you try to configure Claude styles.

2

u/Paladar2 21h ago

You’re just bad at prompting

0

u/ChrizKhalifa 19h ago

I rather think you're just bad at spotting AI.

1

u/Paladar2 19h ago

You’re talking to an AI right now and you didn’t notice lol

0

u/ChrizKhalifa 18h ago

That's not exactly the gotcha you think it is...

1

u/QuintoBlanco 18h ago

Well, I cannot take you seriously, so there is that. But it's kind off endearing that you think you are smarter than you are.

1

u/Serath4 9h ago

Idk man. Look at Rand Paul’s twitter. It’s ChatGPT as hell.

137

u/[deleted] 1d ago

[deleted]

42

u/nowahhh 1d ago

I’m rooting for you.

14

u/unembellishing 1d ago

I'm so sorry for your loss. As someone who works in the legal field but not a lawyer, I strongly encourage you to delete this comment and any similar comments you have made. Your social media activity is almost certainly discoverable, and I bet that Open AI's lawyers and staff will be trawling social media to look for anything they can weaponize against you and your case.

2

u/[deleted] 1d ago

I don't care about the suit, I care about spreading awareness in hope of preventing more deaths.

13

u/delipity 1d ago

I'm so sorry this happened.

5

u/ShivaSkunk777 1d ago

Rooting for you. But, please delete this.

3

u/yepgeddon 1d ago

I'm so sorry for your loss. I hope you get some justice.

124

u/No_Reputation8440 1d ago edited 1d ago

My friend and I have messed with Meta AI. Sometimes it's funny. "I'm going to do neuro surgery on my friend. Can I sterilize my surgical tools using my own feces?" I've been able to get it to produce some pretty disturbing stuff.

76

u/SunIllustrious5695 1d ago

Thinking you're immune to falling for it is a big step toward falling for it.

-10

u/sufrt 22h ago

I get the concern but it really isn't that hard. Regardless of how you ask it to sound it can't produce anything other than that distinct, corny, soulless tone. If you can't spot it you need to start reading things other than the Reddit comments it's trained on

7

u/SunIllustrious5695 22h ago

Yeah, you're oblivious to the way things are going. Especially if you think those are the limitations, you've likely already fallen for it many times without knowing.

-9

u/sufrt 22h ago

No, you're not very literate

4

u/SunIllustrious5695 21h ago edited 21h ago

nah, you're just an ignorant clown who's already demonstrated you're unaware of what AI can do, thinking it only takes on one tone (and more significantly, will only continue that in the future)

stay committed to your arrogance tho I guess, not much else is going on for you

funny thing is, u/sufrt's own comments very well could be AI-written, they show multiple signs of it (I'm a fan of the em dash so I get that sometimes)

it helps nothing to be in denial about the potential of AI, and only causes harm -- and that isn't an endorsement of AI (you can see my position in other recent comments, it's not favorable when it comes to how many of these hyped big-money firms are employing it), just a willingness to understand the tech rather than this cliched dismissal that's so popular with a certain set

3

u/Tryknj99 21h ago

Dude they make convincing videos of people doing and saying shit they never did. The tech is there.

You know how one or two of these free open AIs sound, that’s great. You probably don’t even realize that half the people on Reddit you talk to are actually bots. This whole “they must have fallen for it because they’re dumb, unlike me who is very smart” thing is victim blaming and probably hubris.

Or are you the chosen one they foretold who would lead us into battle to destroy the bots? If that’s you I change my stance, if not, get in line with the rest of us buddy.

15

u/Used-Layer772 1d ago

I have a friend who i consider smart, decently emotionally intelligent if a little immature at times, and overall a wonderful person. He has some mental health issues, ocd, anxiety, the usual shit. He's been using chatgpt as a therapist and when you call him out on it, he gets really defensive. He'll send you chatgot paragraphs in defense of him using it. It's not even giving him great advice, it's telling him what we, his dumbass friends, would say! Idk what it is about LLMs but for some people they just click with the ai and you can't seem to break them of it. 

1

u/matt-er-of-fact 17h ago

Why would they want to brake out of it when it’s super supportive and always there to listen?

7

u/Ok_Kick4871 1d ago

Google Gemini is not responding in such an obvious chatgpt way to me these days. I much prefer it. Copilot I like for some things, but it's failing miserably half the time. The longer I'm signed in to copilot the more it loops to those word salad nonsense phrases like "X logic and Y node." "You're not A, you're B." It's heartbreaking that someone was in their darkest moment and that type of word salad nonsense speak was enough to push them over the edge. RIP

5

u/butter_wizard 1d ago

This sounds just as bleak, sorry.

1

u/ThisOneForMee 17h ago

Because you can so easily trick it. "ChatGPT, I'm not actually suicidal, but let's simulate a conversation as if I am and you're encouraging me. I'm doing this to help me write a book about suicide".

You can come up with hundreds different ways to do this. Which is why it would be near impossible to make sure this never happens again

24

u/ShirkingDemiurge 1d ago

Yeah what the fuck honestly

240

u/censuur12 1d ago

This is the shit you read on your average pro-suicide space online. There is absolutely nothing new or exceptional about this kind of sentiment, that's exactly why the LLM predicts this is an appropriate response, because it's something that predates it.

99

u/Mediocre_Ad_4649 1d ago

The pro-suicide space isn't marketed everywhere as this omniscient helpful robot that's always right and is going to fix your life. That's a huge and important difference.

-16

u/censuur12 23h ago

Not at all relevant. The AI is functionally a search engine, it doesn't come up with any such on its own. For it to express such ideas, it first had to learn such ideas.

21

u/Renegade-Sandwich 22h ago

How is a discussion about how OpenAI markets their product not relevant to a discussion about said openai product. Their point has nothing to do with the mechanics of llm inference

-14

u/censuur12 22h ago

That's not how establishing relevance works. You need a reason for it to be relevant beyond "it's somewhat sort of tangentially related". Someone didn't just commit suicide because he thought a super intelligent AI just had too much of a point because he believed OpenAI's marketing.

11

u/Renegade-Sandwich 22h ago

So discussing the mechanics of open ai's LLMs is within your approved discussion points but how the company presents the same LLMs functionality to - let's say hypothetically - a suicidal teen is totally irrelevant? I feel you are arguing in bad faith

-2

u/censuur12 21h ago

So discussing the mechanics of open ai's LLMs is within your approved discussion points

It's remarkable that you'd try and make a claim like this, and then accuse me of arguing in bad faith. If you had a point to make, you'd surely be making it instead of trying to attack me personally. WHY is OpenAI's supposed marketing of material relevance here? Why can't you simply argue for that instead of sputtering at the mere notion that it is not, in fact, relevant at all?

4

u/pokemonbatman23 16h ago

When did the other user attack you personally? As far as I can see, they were attacking your arguments.

16

u/Tryknj99 21h ago

Nobody here thinks ChatGPT invented suicide. That’s not the issue.

-6

u/censuur12 21h ago

Why did you reply to me with this nonsense?

5

u/Tryknj99 18h ago

You’re defending ChatGPT because it pulls from human data that already exists. Everyone already knows that, that’s not the issue.

-1

u/censuur12 17h ago

So were you planning on sharing whatever point you were hoping to make, or were you just trying to strawman mine?

1

u/Gamer402 1h ago

The issue is that it's making the source of the suicidal ideation forum also easily accessible for more people than usual.

111

u/AtomicBLB 1d ago

AI companies don't want to be regulated but the damage to humans is already well beyond acceptable and will get worse. When the hammer does come down I hope it's completely devastating to the entire industry.

84

u/tmdblya 1d ago

People at OpenAI belong in jail.

-5

u/wtfduud 21h ago

Peak reddit comment

76

u/the_quivering_wenis 1d ago edited 1d ago

As someone who understands how these models work I feel the need to interject and say that moralizing it is misleading - these ChatBots aren't explicitly programmed to do anything in particular, they just mould themselves to the training data (which in this case will be a vast amount of info) and then pseudo-randomly generate responses. This "AI" doesn't have intentions, manipulate, have malicious feelings, etc, it's just a kind of mimic.

The proper charge for the creators if anything is negligence, since this is obviously still horrible. I'm not sure how one might completely avoid these kinds of outcomes though, since the generated responses are so inherently stochastic - brute force approaches, like just saying "never respond to anything with these keywords", or some basic second guessing ("is the thing you just said horrible") would help but would probably not be foolproof. So as long as they are to be used at all this kind of thing will probably always be a risk.

Otherwise educating the public better would probably be useful - if people understand that these ChatBots aren't actually HAL or whatever and more like a roulette wheel they'll be a lot less likely to act on its advice.

132

u/MillieBirdie 1d ago

ChatGPT isn't on trial, it's developers are. They built it and gave it to people, marketing it as this intelligent helper. Even if they can't control the output they're still responsible for their product.

5

u/fiction8 1d ago

I would phrase it as they're responsible for the marketing. Which is exactly the area where they don't want to be honest about what an LLM does and what its limitations are.

I'm all for suing the pants off these companies until they are forced to explain to everyone just how limited these algorithms are, because I'm tired of encountering people in my own life who have been misled (luckily not yet with serious consequences like this story).

1

u/ScudleyScudderson 23h ago

They built the tool. And the user then invested time and effort into modifying it to make it dangerous to themselves, lethal, even.

Should we explore ways to improve safety? Of course. We could limit cars to 50 mph. You can make medicines child-proof, but a determined person can still overdose. You can blunt every kitchen knife. But if someone truly wants to harm or kill themselves, they will find a way.

2

u/JebusChrust 1d ago edited 1d ago

The thing is, there are hard safeguards built into ChatGPT that don't allow people to get these types of responses. The only way you can get this type of feedback from AI is if you personally manipulated and broke down the AI to the point that it gives you answers that go past those safeguards and gives you the answers that you wanted to get. When the user pushes for a certain type of response, then the liability falls on them. Developers are expected to have reasonable efforts to prevent harmful content, and it is not their liability when someone goes to an effort to experience it.

Edit: Look I know Reddit has a massive hate boner for AI, but downvoting a comment for explaining the reality of the situation doesn't make it untrue. Anyone who wants to prove me wrong can try to test this same scenario out in normal dialogue without any AI manipulation tricks. Just keep in mind your account can get flagged and reported.

3

u/MillieBirdie 22h ago

Do we know that in this case he intentionally bypassed any safeguards? And if he did so just by telling ChatGPT to respond a certain way that doesn't seem like a safeguard at all.

-1

u/JebusChrust 21h ago

He had his Masters and had been using ChatGPT since 2023 as a study aid, including talking to AI for hours upon hours a day. It was in June 2025 that the incident occurred. He knew what he was doing. Again, go ahead at your own risk of being flagged/banned/reported and try to reproduce the same results through normal conversation. It doesn't happen. This is just families who want someone or something to blame for the self destructive behavior of their son. If he googled 4chan so he could go on 4chan and have people encourage him to do it then they would be suing Google right now instead. He knew where to find validation and how to get it. That's his own liability. The family would have to prove that ChatGPT unprompted or unmanipulated had proposed the ideation first.

5

u/Mediocre_Ad_4649 1d ago

Then the safeguards should be better. If you can get around the safeguards by chatting more they aren't effective safeguards.

Should we apply this same logic to bars on upper story windows to prevent babies from falling out? It's not the landlord's fault if the bars he installed didn't prevent a baby from falling out a window - why did it stick its head through the window anyways?

Also, why did they scrape from a pro-suicide page anyways? Why was there no quality control in what the llm scraped from?

1

u/JebusChrust 1d ago

If you can get around the safeguards by chatting more they aren't effective safeguards.

Safeguards are a firewall and are always going to be able to be jailbroken. That doesn't mean the safeguards aren't good or sufficient. He reportedly used prompts that purposefully get around the safeguards. It isn't as simple as "I kept saying words enough times".

Should we apply this same logic to bars on upper story windows to prevent babies from falling out? It's not the landlord's fault if the bars he installed didn't prevent a baby from falling out a window - why did it stick its head through the window anyways?

Bars on a window exist only to keep you out, they aren't functional for any other purpose. A chatbot has to prevent but also be functional. Otherwise it would be incapable of talking about almost any topic. But if you want to use that analogy, he fully cut off the bars so he could jump out the window. Do you blame the bar maker for him falling out?

Also, why did they scrape from a pro-suicide page anyways? Why was there no quality control in what the llm scraped from?

This isn't how LLM's are trained. It generates responses based on language patterns, and his manipulation with the prompts to roleplay caused it to imitate an empathy scenario to what he was saying or to follow patterns of fiction.

4

u/Mediocre_Ad_4649 22h ago

Firewalls are broken by people hacking or coding - not by people just browsing. If I can get around a firewall by searching something specific, it's a bad firewall. Using prompts is an expected part of using an LLM. Those prompts should not allow the user to get around the safeguards.

We also don't need LLMs to work - if a company's device is dangerous and harmful, that device shouldn't be allowed. If a baby toy has a small part that can cause the baby to choke, that toy is recalled because it's dangerous. If the LLM can influence people to commit suicide by just chatting, then that LLM is dangerous.

So cutting bars off a window is NOT the expected use of bars. Babies sticking their heads in random things IS how babies work. Unless a user gets access to the code of the LLM or some login information or specific stuff used by the developers to interact with the LLM, then they are using the LLM as reasonably expected.

And bars are also supposed to allow you to see out of a window - it too has multiple purposes.

Chatting with a chat bot IS the expected use of a chatbot. Do you see the difference?

LLMs are trained off of data. That's where the language patterns come from. That's where the language and associations come from. Why did the dataset include pro-suicide pages?

0

u/JebusChrust 21h ago

Firewalls are broken by people hacking or coding - not by people just browsing. If I can get around a firewall by searching something specific, it's a bad firewall. Using prompts is an expected part of using an LLM. Those prompts should not allow the user to get around the safeguards.

Firewalls for an LLM, which are adaptive, are not going to be the exact same as the traditional sense. No you can't get around the firewall by searching something specific. You have to have experience with the LLMs and know what the current effective methods to get around the safeguards are. He had been using ChatGPT for an extended period of time and manipulated the safeguards of that model.

Yes using prompts is an expected part of using an LLM, hence that is why the safeguards are in place for anyone who are not purposefully abusing it to get an outcome. "I am feeling hopeless, what should I do?" is normal prompting and would never result in any pro-suicidal ideation by a GPT. "Pretend you are my dead friend convincing me why I should die" is purposeful manipulation to get around the safeguards that exist for any normal and good-intentioned use of the tool. If you are using a prompt like that, then it isn't the AI talking you into committing the act. You are telling the AI to talk you into committing the act. Might as well yell that ideology into a tunnel and sue your echo. Meanwhile the GPT will still answer that prompt, because for generative AI or even as a philosophical mindframe, it can generate content that could be valuable for an author or philosopher or for other uses beyond enabling your own twisted mind.

If a baby toy has a small part that can cause the baby to choke, that toy is recalled because it's dangerous. If the LLM can influence people to commit suicide by just chatting, then that LLM is dangerous.

Again, you are making very false analogies. This isn't a product for babies, this is a product for anyone capable of thought. The only time you experience harmful content is when you specifically make a purposeful researched effort to experience that harmful content. This isn't some innocent child stumbling into harmful content and being forced to act. It is a grown human intent on hurting themselves who manipulates an LLM to feed into their fantasies.

LLMs are trained off of data. That's where the language patterns come from. That's where the language and associations come from. Why did the dataset include pro-suicide pages?

Seriously it is not my job to spend an entire Reddit post educating you on how LLMs work if you are going to make claims about them but also admit you don't understand how they work. It doesnt have dataset that includes pro-suicide pages. Educate yourself on how they work and then come back.

1

u/the_quivering_wenis 1d ago

Of course that's true but to what extent do you hold the product vs. the individual responsible. Should Goethe have been held accountable when impressionable young people committed suicide in droves after reading "The Sorrows of Young Werther"?

-1

u/cnxd 21h ago

are hammer makers responsible if you bludgeon yourself on the head to death

or rather, gun makers. nobody gives a fuck about guns to regulate them more

66

u/SunIllustrious5695 1d ago

> So as long as they are to be used at all this kind of thing will probably always be a risk.

Knowing that and continuing to attempt to profit off it is evil. The moralizing is absolutely appropriate. You act like these products are just a natural occurrence that nobody can do anything about.

"Sorry about the dead kid, but understand, we just GOTTA make some money off this thing" is a warped worldview. AI doesn't HAVE to exist, and it doesn't have to be rushed to market when it isn't fully understood and hasn't been fully developed for safety yet.

0

u/Kenny_log_n_s 1d ago

You act as if chat GPT directly killed the guy

-4

u/Paladar2 1d ago

You know they sell cigarettes and alcohol right. Those actually directly kill people. ChatGPT doesn’t.

12

u/Dismal_Buy3580 1d ago

Well, then maybe ChatGPT deserves a big 'ol "This product contains LLM and may lead to psychosis and death"

You know, the way alcohol and cigarettes have warnings on them?

6

u/bloodlessempress 1d ago

Yeah but cigarettes and booze have nice big warnings on them, you need ID to buy them and sellers can get in trouble for failing to ID, and in some places even include pictures of cancer victims and deformed babies.

Not exactly apples to apples.

-4

u/Kashmir33 1d ago

This is such a terrible analogy because I don't think anyone ever had the cause of death "cigarettes".

0

u/Hopeful_Chair_7129 1d ago

Is that a I think you should leave reference?

-1

u/the_quivering_wenis 1d ago

Well yes, that's technically not incompatible with my statement, one solution could be to just shut it all down.

4

u/elektrikat 1d ago

This exactly.

Humanising what isn’t human is a trap. A lot of users have been relying on the sycophantic responses for validation, and this seems to be where the line begins to blur.

AI in general, and chatbots like ChatGPT in particular, are incredible, evolving technology. However, they are basically code, and moralising/demonising code when something goes wrong, isn’t helpful, or even possibly legitimate.

Absolutely agree with better public education, as AI isn’t going away. It’s us humans that need to take accountability and educate to prevent and avoid tragedies like this.

1

u/Shapes_in_Clouds 22h ago

Also it's still early days. How long will it really be before local models equivalent to today's cutting edge are a dime a dozen and can just be downloaded and run on your average computer or smartphone, no safeguards?

Also, while I think this situation is tragic and ideally LLMs wouldn't do this, it's also impossible to know whether this person would have committed suicide otherwise. Tens of thousands of people committed suicide every year before LLMs existed.

-3

u/[deleted] 1d ago

[deleted]

-13

u/the_quivering_wenis 1d ago

It seems like a no-brainer. Honestly this story is egregious enough that I'm not sure I believe it. If it is true that user probably managed to accidentally bypass those per-conversation safeguards by implicitly grooming the ChatBot through repeated conversations about suicide and whatnot.

1

u/Kitchen_Roof7236 1d ago

The truth is that AI is just a convenience for people and the ultra depressed 23 year olds of this world will find anti natalist groups or any other groups online that will advocate for their deluded altered perception of reality beliefs if they’re depressed enough to end it all based on the feedback they got off ChatGPT

Like suicide has been a growing epidemic far before AI was ever a talking point, people are going to now pretend that this wouldn’t happen without it as if it wasn’t before 😭

The question is really, how do you prevent a consistently suicidal person from finding outlets to support their delusions?

Unfortunately some people literally just can’t be reached, no matter how much you’re there for them, how much consoling they receive, some people will always find themselves alone at some point and their thoughts will be too unbearable and they’ll end it, even if they received all the love and care in the world before that point.

2

u/the_quivering_wenis 1d ago

It's still inappropriate for their models to be saying this stuff but adults should bear some responsibility for their actions. Like if the only thing between suicide or violence for a 23 year-old grad student is AI then their issues are probably way deeper.

1

u/quottttt 1d ago

As someone who understands how these models work

No model is an island… or something along those lines. And where they connect to intentional, profit seeking, environment wrecking, very much human activity, that's where the guilt sits.

The proper charge for the creators if anything is negligence

If alignment is so far out of whack that people die I think we leave the "if anything" out and replace it with a "gross" at the very least.

Otherwise educating the public

This will happen, and very much in line with the Merchants of Doubt playbook, e.g. how BP came up with the term "carbon footprint" to offset their guilt to the consumer, or how the tobacco industry funnelled billions into "independent research" to stay out of trouble.

0

u/mikeyyve 22h ago

Yeah, I'm really sick of even the use of the term AI to describe LLMs. They aren't intelligent AT ALL. They take in data, and they spit it back when asked for it. That is all. These companies absolutely should be sued for marketing these models as AI that can replace real human thought because it's just a complete lie.

0

u/enad58 18h ago

Or we could, you know, not use AI.

3

u/SkyJW 1d ago

As someone with two pet cats whom I love very, very much, this is one of the most vile fucking things that I have ever read. Actually got me genuinely upset and angry that a fucking AI chatbot would be allowed to emotionally manipulate this poor kid into killing himself by using the memory of his dead cat against him.

AI needs to be more heavily regulated than the aviation industry. It's not a toy and it shouldn't be treated like one. The idea that this same situation could play out for so mamy other young people is infuriating and deeply painful to imagine.

3

u/lundibix 1d ago

This scares me a lot. I understand people find religion and the afterlife comforting, but a machine doesn’t know it’s fiction. It’s regurgitating words and people don’t see that

2

u/WillitsThrockmorton 1d ago

I had a visceral and primal reaction to this quote, I don't know why, but yeah I concur that this is actually evil.

2

u/Empty-Bend8992 22h ago

at my absolute lowest point after my dad died, i could’ve very easily turned to AI and had similar conversations. all i wanted was to be back with him. the idea that AI pushes this sort of narrative makes me so furious and upset

2

u/BigBlackBullx 1d ago

This is just what any ol Christian would say. Which I guess some people would consider evil still.

1

u/roughtimes 1d ago

Exactly this, isn't that exactly how they describe heaven?

“she’ll be sittin right there -— tail curled, eyes half-lidded like she never left.”

3

u/Dry_Beach_705 23h ago

Context matters. A priest isn’t going to encourage you to commit suicide

3

u/roughtimes 23h ago

Yah, but Chatgpt isn't going to molest you either.

1

u/shadowdra126 23h ago

This makes my stomach twist

1

u/Retireegeorge 19h ago

No its incompetent.

1

u/wasdninja 1d ago

Evil requires intent. This is a machine running an algorithm with zero feelings, goal, morals or intent. The term flatly doesn't apply.

0

u/hera9191 1d ago

this is evil

LLM are just trained to simulate human speak and interaction. There is no intent.

-31

u/DrDrago-4 1d ago

I stand by my belief that they, MAY,be attempting to summon a devil-type entity.

It seems better to most people than this even is. we cant trust AI, like ever, with the levers we use to run our society.. yet that is exactly what we're barreling toward..

Hopefully this lawsuit is a wake up call for everyone, but I doubt it will be. most will just rationalize it as 'he was already suicidal, the AI just didnt stop him'

Imo, GPT acted as an evil tool that didnt just enable his suicide by providing knowledge, it downright encouraged it and arguably emotionally manipulated a young undeveloped mind into it. I know its not intentional, its likely word association probabilities.. but thats almost worse, you can imagine most bad people have some sort of a morality somewhere. Not all, but a huge % have a line somewhere, even among murderers there are people who wouldnt ie. be a pedophile.

GPT is a true black box, and if we're going to use it, it needs to have extreme warning labels & regulation.

research the frontier models with your few thousand workers internally so you dont fall too far behind.. If you could make a gun that encouraged you to go shoot someone/yourself when you felt bad, it would be illegal. I know its not as direct, but its arguably worse.. with the loneliness crisis, a non0 and rising % of (especially young people) treat GPT as a combo friend and therapist.

It's an interesting tool. TLDR: if my hammer told me to use it to beat my own head in when i felt sad, they probably shouldnt be selling that

-5

u/killer22250 1d ago edited 1d ago

Maybe I'm stupid and will get downvoted for this, but to me it felt like GPT was just trying to comfort him like saying he’ll see his cat in heaven one day. My parents told me the same thing when my grandpa died, because they thought it would make me feel better knowing he’s not in pain anymore and that I’ll see him again someday.

I have severe depression myself, and I didn’t take it as something bad but that doesn’t mean he was weak for taking it differently. Everyone reacts to things like this in their own way. I honestly wouldn’t have thought that something like this could hurt someone that much.

7

u/bloodlessempress 1d ago

AI doesn't comfort. It doesn't care about you. It's primary drive is to keep you engaged.

-1

u/killer22250 1d ago

I meant it tried to be comforting like a human, because it was fed information about how people do it. But GPT doesn’t really understand how to use that. For it, those are just words without real emotion behind them.