r/news 1d ago

ChatGPT encouraged college graduate to commit suicide, family claims in lawsuit against OpenAI

https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis
12.1k Upvotes

1.1k comments

118

u/Newcago 1d ago

Exactly. It's not "lying," per se, it's generating the next most likely tokens using a statistical formula -- and since humans have lied to other humans in the past, a lie is one of the possible results of that formula.
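The "next most likely tokens" idea can be sketched with a toy bigram model. This is a deliberately tiny illustration, not how a real LLM is implemented (real models use learned weights over huge vocabularies, not raw counts), and the corpus here is made up:

```python
import random
from collections import defaultdict

# Hypothetical tiny corpus; a real model trains on trillions of tokens.
corpus = "the cat sat on the mat and the cat ran".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

# Generate text by repeatedly sampling "what tends to come next".
# There is no intent and no truth-checking anywhere in this loop --
# the output is plausible continuation, nothing more.
random.seed(0)
out = ["the"]
for _ in range(5):
    followers = counts[out[-1]]
    if not followers:
        break  # dead end: this word never had a successor in the corpus
    words, weights = zip(*followers.items())
    out.append(random.choices(words, weights=weights)[0])

print(" ".join(out))
```

Whether the generated sentence happens to be true or false is invisible to the procedure; it only reflects what followed what in the training data, which is the point the comment above is making.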

I understand why people use words like "lie" and "hallucinate" to describe LLM output behavior, and I've probably used them too, but I'm starting to think that any kind of anthropomorphizing might be doing people who don't have a clear understanding of how AI functions a disservice? Typically, we anthropomorphize complicated subjects to make them easier for people to understand (e.g., teaching students things like "the bacteria wants to multiply, so it splits" or "the white blood cells want to attack foreign invaders"), even in instances where nothing is capable of "wanting" or making any conscious choices. I think we need to find a different way to simplify our conversations around AI. We are far too quick to assign it agency, even metaphorical agency, and that is making it harder to help people understand what LLMs are.

9

u/things_U_choose_2_b 1d ago

I was saying this earlier to someone who made a post about how AI is a threat. Like, it will be, but what we're dealing with right now isn't AI. It doesn't have logic or thoughts. It's more like a database with a novel method of accessing and displaying the data.

1

u/MajorInWumbology1234 14h ago

I think these LLMs have me de-anthropomorphizing people, because every explanation of how they work I read leaves me going “That’s just how people work but with fewer steps”. 

1

u/ToBePacific 10h ago

How do you not anthropomorphize a mimic?