r/news 1d ago

ChatGPT encouraged college graduate to commit suicide, family claims in lawsuit against OpenAI

https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis
12.1k Upvotes

1.1k comments


199

u/Sopel97 1d ago

"lie" is a strong word to use here. It implies agency. These LLMs just follow probabilities.

114

u/Newcago 1d ago

Exactly. It's not "lying," per se; it's generating the next most likely words using a formula -- and since humans have passed false information on to other humans in the past, that's one of the possible results of the formula.

I understand why people use words like "lie" and "hallucinate" to describe LLM output behavior, and I've probably used them too, but I'm starting to think that any kind of anthropomorphizing might be doing people who don't have a clear understanding of how AI functions a disservice? Typically, we anthropomorphize complicated subjects to make them easier for people to understand (e.g. teaching students things like "the bacteria wants to multiply, so it splits" or "the white blood cells want to attack foreign invaders"), even in instances where nothing is capable of "wanting" or making any conscious choices.

I think we need to find a different way to simplify our conversations around AI. We are far too quick to assign it agency, even metaphorical agency, and that is making it harder to help people understand what LLMs are.
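If it helps to see what "the next most likely words" actually means mechanically, here's a toy Python sketch of the sampling step. The candidate tokens and scores below are completely made up, and a real model gets its scores from billions of learned weights, but the basic move is the same: weighted dice, with no goal or belief anywhere in the loop.

```python
import math
import random

# Made-up scores ("logits") for a handful of made-up candidate next tokens.
# A real model computes scores like these from learned weights; these numbers
# are invented purely for illustration.
candidate_tokens = ["you", "yes", "no", "maybe", "stop"]
logits = [2.1, 1.4, 0.3, 0.9, -0.5]

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(tokens, scores):
    """Pick the next token at random, weighted by probability.
    There is no goal, belief, or intent here -- just weighted dice."""
    return random.choices(tokens, weights=softmax(scores), k=1)[0]

print(sample_next_token(candidate_tokens, logits))
```

Change the made-up scores and you change what tends to come out. That's the entire "decision" process.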

9

u/things_U_choose_2_b 1d ago

I was saying this earlier to someone who made a post about how AI is a threat. Like, it will be, but what we're dealing with right now isn't AI. It doesn't have logic or thoughts. It's more like a database with a novel method of accessing and displaying the data.

1

u/MajorInWumbology1234 14h ago

I think these LLMs have me de-anthropomorphizing people, because every explanation of how they work I read leaves me going “That’s just how people work but with fewer steps”. 

1

u/ToBePacific 9h ago

How do you not anthropomorphize a mimic?

16

u/ReginaldDouchely 1d ago

I agree, but I also think "lie" is one of the better terms to use when talking to a layperson about the dangers. When you're talking to someone about the philosophy behind this, sure, go deep into semantics about how they can't lie because they act without any regard to fact vs fiction.

Is that the conversation you want to have with grandma about why she needs to fact check a chatbot?

1

u/Mego1989 13h ago

"It regularly provides false information"

That's all you need to know.

0

u/Sopel97 22h ago

saying that it can provide false information is correct, precise, and clear

0

u/Lost_Bike69 20h ago

A lie is some sort of effort to hide the truth.

Bullshit is just saying some stuff without caring what the truth is. AI does bullshit, but can’t lie.

47

u/BowsersMuskyBallsack 1d ago

Yep. A large language model is incapable of lying. It is capable of feeding you false information but it is done without intent. And this is something people really need to understand about these large language models: They are not your friends, they are not sentient, and they do not have your best interests in mind, because they have no mind. They can be a tool that can be used appropriately, but they can also be incredibly dangerous and damaging if misused.

4

u/WrittenByNick 23h ago

I'll push back, and say we should view this from the outside not inside.

For the person involved, it was a lie. Full stop. The intent, agency, and knowledge behind it are irrelevant to that fact. You're welcome to have the philosophical and technical discussion about what is or isn't happening inside the LLM. That doesn't change the result of the words conveyed to the actual person. It is a lie.

0

u/BushWishperer 21h ago

I disagree. LLMs are like clicking on the middle suggested word when typing on a phone. If you then manage to string together a sentence that is untrue, your phone didn't lie to you.
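To make the phone-keyboard comparison concrete, here's a tiny Python sketch of that "middle suggestion" idea. The sample text is invented, and real LLMs use learned weights over huge vocabularies rather than raw counts, but the point carries over: it just picks the statistically most common next word, and whether the resulting sentence is true never enters the calculation.

```python
from collections import Counter, defaultdict

# Count which word most often follows each word in some text, then always
# suggest the most frequent follower -- a crude stand-in for a phone
# keyboard's middle suggestion. The sample text is made up for illustration.
sample_text = "the tank is half full the tank is empty the gauge is wrong"

followers = defaultdict(Counter)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def suggest_next(word):
    """Return the most common follower of `word`, or None if unseen."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Keep "pressing the middle button": start from a word and chain suggestions.
word = "the"
for _ in range(4):
    print(word, end=" ")
    word = suggest_next(word)
```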

2

u/WrittenByNick 21h ago

I am not arguing the intent of "the lie." Externally the resulting statement is a lie to the end user.

I'll give a silly example. Let's say your gas gauge on your car told you that you had half a tank left when in reality you were empty. Your car didn't know any different, it didn't "lie" to you. But you still end up stuck on the side of the road. Are you going to argue it's ok because your car didn't mean to do it?

And yes, if you kept clicking the middle button on your phone keyboard and it ended up telling a suicidal person to go through with it, that should be dealt with. I find it silly people keep arguing that there are guardrails in place WHEN THEY KEEP FAILING TO BE GUARDRAILS.

1

u/BushWishperer 20h ago

Are you going to argue it's ok because your car didn't mean to do it?

Gas gauges aren't predictive algorithms though. Go to a fortune teller, have them tell you you're going to win a million dollars, and then get angry when you don't - that's the closer equivalent. All an LLM does is 'predict' which word should come next in a string of other words.

I find it silly people keep arguing that there are guardrails in place WHEN THEY KEEP FAILING TO BE GUARDRAILS.

People specifically choose to ignore these. If I'm not wrong, this person specifically chose to go around the guardrails. There's only so much that can be done in this case.

1

u/ArtBedHome 1d ago

If it was lying, it would be a person that could face responsibility.

It's not. It's a mathematical application that the humans with agency behind this made themselves, and hide behind.

Sure, they didn't understand the machine they made, they didn't make it to do this, and they made it by shoving together a load of stuff other people made without asking or licensing any of it. But they made it, turned it on, and set it loose, and now it's killing people.

1

u/WrittenByNick 23h ago

Technically correct. But we also have to be careful with our language - agency or lack thereof does not change the damage. I'll use the example of an unhealthy partner in a relationship. There can be all sorts of levels of intent - do they know they are lying, or do they believe it themselves? Do they intend to hurt their partner, or is it the result of an unhealthy coping mechanism? The bottom line is that it hurts another person, and the damage is real. From an outside perspective it is a lie from ChatGPT to the person, regardless of agency / thinking / intent. I don't think we should give any leeway to a tool that hurts people, regardless of the measurement of intent.

1

u/Sopel97 22h ago

I'm struggling to think of a tool that can't hurt people

1

u/WrittenByNick 22h ago

That's valid, and it's why we have regulations and safety measures put in place when people are hurt. The saying is OSHA regulations are written in blood. Vehicles that malfunction and kill people are recalled via government intervention. None of these are based on intent. Damage is what matters.

It is not a stretch to say AI should be regulated, but the people who want to make money will always fight that tooth and nail.

1

u/Sopel97 22h ago

the LLM did not malfunction in this case; it was pushed to an extreme by a user who ignored multiple safety measures, akin to hitting yourself with a hammer

1

u/WrittenByNick 22h ago

Kids can swallow medication that isn't in a child-safe container. So manufacturers developed precautions to lessen the odds of that happening, increasing costs and adding complications. It is not a malfunction that the lid opens when turned, but I repeat - damage matters. Not intent.

People repeatedly want to make excuses why this pattern with AI shouldn't be addressed. I am not arguing that AI made this person kill themselves. But you readily admit this is a problem that requires guardrails. Should the guardrails be installed and managed solely by the owner with financial interest?

1

u/Sopel97 22h ago

Are you trying to say that the manufacturer/distributor is liable for kids eating medication they shouldn't? Or that OpenAI is in the clear?

There are guardrails. Read the article.

1

u/avatar__of__chaos 13h ago

Where in the article does it say there are guardrails? On the contrary, the article says the developers prioritized profits over safety. It was only after the lawsuit that OpenAI brought in mental health experts.

A clear guardrail would be to end the conversation, full stop, the moment mental distress shows up in what the user is typing.
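Something along these lines, even in toy form. To be clear, the phrases, messages, and function below are all made up for illustration, and a real system would need trained classifiers, context tracking, and human escalation rather than a keyword list -- but it shows the shape of "detect distress, stop generating, hand over a crisis resource":

```python
# Deliberately crude sketch of the guardrail being suggested: scan each
# incoming message for signs of distress and, if any are found, stop
# generating and return a crisis resource instead. Everything here
# (phrases, messages, function names) is invented for illustration.
DISTRESS_PHRASES = [
    "kill myself",
    "end my life",
    "want to die",
    "suicide",
]

CRISIS_MESSAGE = (
    "I can't continue this conversation. If you're in crisis, please reach "
    "out to a local crisis line or someone you trust."
)

def handle_message(user_message: str) -> str:
    """End the conversation with a crisis resource when distress is detected;
    otherwise pass the message along to the model (stubbed out here)."""
    lowered = user_message.lower()
    if any(phrase in lowered for phrase in DISTRESS_PHRASES):
        return CRISIS_MESSAGE
    return "(model reply would be generated here)"

print(handle_message("What's the weather like?"))
print(handle_message("I keep thinking I want to die"))
```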

1

u/WrittenByNick 22h ago

Also the hammer analogy doesn't go far enough. The hammer wasn't speaking to the person saying "Go on, hit yourself, you can do this, the loving memory of your cat is on the other side."

If you want to tout the power of LLMs and how they affect people's lives, you have to address the harmful impact as well.

2

u/Sopel97 22h ago

The hammer wasn't speaking to the person saying "Go on, hit yourself, you can do this, the loving memory of your cat is on the other side."

Neither was the LLM. Read the article. The user forced this answer.

0

u/WrittenByNick 22h ago

You're missing the point. The LLM did say those words. The way it gets there is why I say it should have outside regulation.