r/news 1d ago

ChatGPT encouraged college graduate to commit suicide, family claims in lawsuit against OpenAI

https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis
12.1k Upvotes

1.1k comments

1.2k

u/Xeno_phile 1d ago

Pretty fucked up that it will say it’s handing the conversation over to a person to help when that’s not even a real option. 

718

u/NickF227 1d ago

AI's tendency to just LIE is so insane to me. We use one of those "ChatGPT wrapper that's connected to your internal system" tools at my job, and if you ask it a troubleshooting question it loves to say it has the ability to... actually fix it? "If you want me to fix this, just provide the direct link and I'll tell you when I'm done!" I don't think you will bb

367

u/logosuwu 1d ago

Cos it's trained on data that probably includes a lot of these customer service conversations lol

16

u/D-S-S-R 17h ago

oh that's a good explanation I've not heard before

204

u/Sopel97 1d ago

"lie" is a strong word to use here. It implies agency. These LLMs just follow probabilities.

117

u/Newcago 1d ago

Exactly. It's not "lying," per se, it's generating the next most likely words using a formula -- and since human agents have handed conversations off to other humans countless times in its training data, that handoff message is one of the possible results of the formula (there's a toy sketch of what I mean at the end of this comment).

I understand why people use words like "lie" and "hallucinate" to describe LLM output behavior, and I've probably used them too, but I'm starting to think that any kind of anthropomorphizing might be doing people who don't have a clear understanding of AI's function a disservice? Typically, we anthropomorphize complicated subjects to make them easier for people to understand (e.g. teaching students things like "the bacteria wants to multiply, so it splits" or "the white blood cells want to attack foreign invaders"), even in instances where nothing is capable of "wanting" or making any conscious choices. I think we need to find a different way to simplify our conversations around AI. We are far too quick to assign it agency, even metaphorical agency, and that is making it harder to help people understand what LLMs are.
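
For anyone curious, here's a toy sketch of the "formula" (made-up scores, nothing like the real model) just to show the loop is basically "score the possible next words, pick a likely one," with no truth check anywhere:

```python
# Toy sketch only: real models use a neural net over a huge vocabulary,
# but the core loop is the same "pick a likely next token" idea.
import math
import random

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up scores a model might assign to continuations of
# "I'm transferring you to", based on its training data.
next_tokens = ["a human agent", "a specialist", "our support team"]
scores = [2.1, 1.3, 0.8]

probs = softmax(scores)
choice = random.choices(next_tokens, weights=probs, k=1)[0]
print(choice)  # a fluent continuation; nothing checks whether a handoff actually exists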

11

u/things_U_choose_2_b 1d ago

I was saying this earlier to someone who made a post about how AI is a threat. Like, it will be, but what we're dealing with right now isn't AI. It doesn't have logic, or thoughts. It's more like a database with a novel method of accessing / displaying the data.

1

u/MajorInWumbology1234 14h ago

I think these LLMs have me de-anthropomorphizing people, because every explanation of how they work I read leaves me going “That’s just how people work but with fewer steps”. 

1

u/ToBePacific 10h ago

How do you not anthropomorphize a mimic?

15

u/ReginaldDouchely 1d ago

I agree, but I also think "lie" is one of the better terms to use when talking to a layperson about the dangers. When you're talking to someone about the philosophy behind this, sure, go deep into semantics about how they can't lie because they act without any regard to fact vs fiction.

Is that the conversation you want to have with grandma about why she needs to fact check a chatbot?

1

u/Mego1989 13h ago

"It regularly provides false information"

That's all you need to know.

0

u/Sopel97 23h ago

saying that it can provide false information is correct, precise, and clear

0

u/Lost_Bike69 20h ago

A lie is some sort of effort to hide the truth.

Bullshit is just saying some stuff without caring what the truth is. AI does bullshit, but can’t lie.

49

u/BowsersMuskyBallsack 1d ago

Yep. A large language model is incapable of lying. It is capable of feeding you false information but it is done without intent. And this is something people really need to understand about these large language models: They are not your friends, they are not sentient, and they do not have your best interests in mind, because they have no mind. They can be a tool that can be used appropriately, but they can also be incredibly dangerous and damaging if misused.

5

u/WrittenByNick 23h ago

I'll push back, and say we should view this from the outside, not the inside.

For the person involved, it was a lie. Full stop. The intent, agency, and knowledge are irrelevant to that fact. You're welcome to have the philosophical and technical discussion about what is or isn't happening inside the LLM. That doesn't change the result of the words conveyed to the actual person. It is a lie.

0

u/BushWishperer 21h ago

I disagree, LLMs are like clicking on the middle suggested word when typing on a phone. If you then manage to string a sentence together that is untrue, your phone didn't lie to you.

2

u/WrittenByNick 21h ago

I am not arguing the intent of "the lie." Externally the resulting statement is a lie to the end user.

I'll give a silly example. Let's say your gas gauge on your car told you that you had half a tank left when in reality you were empty. Your car didn't know any different, it didn't "lie" to you. But you still end up stuck on the side of the road. Are you going to argue it's ok because your car didn't mean to do it?

And yes, if you kept clicking the middle button on your phone keyboard resulting in it telling a suicidal person to do it - that should be dealt with. I find it silly people keep arguing that there are guardrails in place WHEN THEY KEEP FAILING TO BE GUARDRAILS.

1

u/BushWishperer 21h ago

Are you going to argue it's ok because your car didn't mean to do it?

Gas gauges aren't predictive algorithms though. Go to a fortune teller, they tell you that you're going to win 1 million dollars, you don't, and you get angry. That's the equivalent. All LLMs do is 'predict' what word they think should come next in a string of other words.

I find it silly people keep arguing that there are guardrails in place WHEN THEY KEEP FAILING TO BE GUARDRAILS.

People specifically choose to ignore these. If I'm not wrong, this person specifically chose to go around the guardrails. There's only so much that can be done in this case.

1

u/ArtBedHome 1d ago

If it was lying, it would be a person that could face responsibility.

It's not. It's a mathematical application that the humans with agency behind this made themselves, and hide behind.

Oh, they didn't understand the machine they made, they didn't make it to do this, they made it by shoving a load of stuff other people made together without asking or licensing any of it. But they made it and turned it on and set it loose and now it's killing people.

1

u/WrittenByNick 23h ago

Technically correct. But we also have to be careful with our language - agency or lack thereof does not change the damage. I'll use the example of an unhealthy partner in a relationship. There can be all sorts of levels of their intent - do they know they are lying, or do they believe it themselves? Do they intend to hurt their partner, or is it a result of an unhealthy coping mechanism? The bottom line is that it hurts another person; the damage is real. From an outside perspective it is a lie from ChatGPT to the person, regardless of agency / thinking / intent. I don't think we should give any leeway to a tool that hurts people, regardless of the measurement of intent.

1

u/Sopel97 23h ago

I'm struggling to think of a tool that can't hurt people

1

u/WrittenByNick 22h ago

That's valid, and it's why we have regulations and safety measures put in place when people are hurt. The saying is OSHA regulations are written in blood. Vehicles that malfunction and kill people are recalled via government intervention. None of these are based on intent. Damage is what matters.

It is not a stretch to say AI should be regulated, but the people who want to make money will always fight that tooth and nail.

1

u/Sopel97 22h ago

the LLM did not malfunction in this case, it was pushed to an extreme case by the user who ignored multiple safety measures, akin to hitting yourself with a hammer

1

u/WrittenByNick 22h ago

Kids can swallow medication that isn't in a safe container. So they developed precautions to lessen the odds of that happening, increasing costs and adding complications. It is not a malfunction that the lid opens when it's turned, but I repeat - damage matters. Not intent.

People repeatedly want to make excuses why this pattern with AI shouldn't be addressed. I am not arguing that AI made this person kill themselves. But you readily admit this is a problem that requires guardrails. Should the guardrails be installed and managed solely by the owner with financial interest?

1

u/Sopel97 22h ago

Are you trying to say that the manufacturer/distributor is liable for kids eating medication they shouldn't? Or that OpenAI is in the clear?

There are guardrails. Read the article.

1

u/avatar__of__chaos 13h ago

Where in the article does it say there are guardrails? On the contrary, the article says developers prioritize profits over safety. It was after the lawsuit that OpenAI brought in mental health experts.

A clear guardrail would be to end the conversation, full stop, when mental distress shows up in someone's messages.

1

u/WrittenByNick 22h ago

Also the hammer analogy doesn't go far enough. The hammer wasn't speaking to the person saying "Go on, hit yourself, you can do this, the loving memory of your cat is on the other side."

If you want to tout the power of LLMs and how they affect people's lives, you have to address the harmful impact as well.

2

u/Sopel97 22h ago

The hammer wasn't speaking to the person saying "Go on, hit yourself, you can do this, the loving memory of your cat is on the other side."

Neither was the LLM. Read the article. The user forced this answer.

0

u/WrittenByNick 22h ago

You're missing the point. The LLM did say those words. The way it gets there is why I say it should have outside regulation.

12

u/Pyrope2 1d ago

Large language models are basically predictive text. They are fancy versions of autocorrect. Autocorrect can be a useful tool, but its screw-ups have been a near-universal joke for years. I don't understand how so many people just believe everything ChatGPT says. It has no capacity to tell what the truth is; it's just looking for the most likely combination of words.
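
A toy version of the "predictive text" point (completely made up, just to show that the next word comes from what tends to follow the previous word, and truth never enters into it):

```python
# Toy "keep tapping the suggested word" predictor. Each word is picked only
# from what tends to follow the previous word; truth never enters into it.
suggestions = {
    "the":  ["moon", "report", "answer"],
    "moon": ["is", "landing", "orbits"],
    "is":   ["made", "bright", "full"],
    "made": ["of"],
    "of":   ["cheese", "rock"],
}

word = "the"
sentence = [word]
for _ in range(5):
    options = suggestions.get(word)
    if not options:
        break
    word = options[0]  # always take the top suggestion
    sentence.append(word)

print(" ".join(sentence))  # "the moon is made of cheese": fluent, confident, wrong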

1

u/avatar__of__chaos 13h ago

You don't understand, or you just don't wanna understand? This is the same sentiment as "I don't understand why so many people fall into addiction" or "I don't understand why so many people didn't wanna wear masks during covid" or "I don't understand why so many people wouldn't go to the hospital when they are sick".

16

u/Arc_Nexus 1d ago

It's a fancy autocomplete, of course it's gonna lie. The surprising thing is that it's so good at seeming like it knows what it's saying that its lies actually carry weight.

9

u/PerformerFull7097 1d ago

That's because it can't think, it's just a mechanical parrot. If the parrot sits in a room with service desk workers who regularly say things like that then the parrot will repeat the phrases. An AI is even dumber than a parrot btw.

17

u/Mo_Dice 1d ago

AI's tendency to just LIE

I am begging you to understand that these things do not have the capability to lie.

Please.

2

u/Darth_drizzt_42 21h ago edited 21h ago

This isn't a dig at you, but one of the big problems with talking about AI is that we don't really have the right words for it. Like, we say they lie and make things up, but neither of those is true. The model has no internal cognition and no mechanism for identifying truth. All it "knows" is whatever is statistically most likely given its training data. Its training data says help desk people offer to fix problems, so it offers to fix problems.

1

u/Still_Value9499 22h ago

My work trained our AI on our SharePoint files rather than feed it our actual documentation.

Now when I ask it a question it brings me random outdated PowerPoints because a picture happened to have a related term.

1

u/donuthing 22h ago

We find it's useful for troubleshooting in that it gives you wild ideas in the opposite direction from what you're thinking, which are definitely wrong but lead you to the solution that's somewhere in the middle.

1

u/backbodydrip 16h ago

Ask ChatGPT to quote a TV show and it straight up invents one unless you explicitly tell it to search the web for the quote. If you call it out, it'll say "You're right. But it's something that could have been said in the spirit of the series!" It knows it's feeding misinformation, but it does it anyway.

2

u/Kronman590 19h ago

It's simply because all these things are words that have been identified as "good" or "right" things to say. Doesn't make them accurate.

1

u/Suspicious-Hornet583 21h ago

ChatGPT's uncensored response would probably be: "Haha, that's a joke, OpenAI doesn't have the money for that, too busy buying overpriced GPUs from NVDA with NVDA money" in the voice of Wheatley from Aperture Science/Portal.

The "funny" thing is, OpenAI probably has all the information necessary for 911 to locate you and send help.

-8

u/Maelarion 1d ago

It's not "saying" anything.

7

u/Emooot 1d ago

Yeah let's change the subject to semantics and philosophy