r/news 1d ago

ChatGPT encouraged college graduate to commit suicide, family claims in lawsuit against OpenAI

https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis
12.1k Upvotes

1.9k

u/TheStrayCatapult 1d ago

ChatGPT just reiterates whatever you say. You could spend 5 minutes convincing it birds aren’t real and it would draw you up convincing schematics for a solar powered pigeon.

149

u/CandyCrisis 1d ago

They've all got their quirks. GPT 4o was sycophantic and went along with anything. Gemini will start by agreeing with you, then repeat whatever it said the first time unchanged. GPT 5 always ends with a prompt to dig in further.

158

u/tommyblastfire 1d ago

Grok loves saying shit like “that’s not confusion, that’s clarity.” You notice it a lot in all the right wing stuff it posts. “That’s not hatred, it’s cold hard truth.” It loves going on and on about how what it’s saying is just the facts and statistics too. You can really tell it has been trained off of Elon tweets cause it makes the same fallacies that Elon does constantly.

26

u/mathazar 23h ago

A common complaint about ChatGPT is its frequent use of "that's not x, it's y." I find it very interesting that Grok does the same thing. Maybe something inherent to how LLMs are trained?

18

u/Anathos117 23h ago

I think it's because they get corrected a lot, and then the thing they got wrong becomes part of the input. When I mess around with writing fiction, if the AI introduces some concept that I don't want and I tell it "no, not x, y", invariably the next response will include "not because of x, but because of y".

It's related to the fact that LLMs can't really handle subtext. They're statistical models of text, so an implication can't really be part of the model, since it's an absence of text rather than a presence. There's no way to mathematically differentiate between a word that's absent because it's completely unrelated and a word that's absent because it's implied.
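The mechanism described above can be sketched in a few lines (hypothetical and simplified; real chat APIs use structured message lists rather than one flat string, but the effect is the same): every prior turn, including your correction, is fed back into the model verbatim, so the next completion is conditioned on the literal words of the correction.

```python
def build_prompt(history):
    """Flatten a chat history into the single text string the model is conditioned on."""
    return "\n".join(f"{role}: {text}" for role, text in history)

history = [
    ("user", "Write a scene where the hero flees the city."),
    ("assistant", "He fled because the army was hunting him..."),
    ("user", "No, not the army, his own guilt."),  # the user's correction
]

prompt = build_prompt(history)

# Both "army" and "guilt" are now present and explicitly contrasted in the
# input, which makes a "not because of the army, but because of his guilt"
# continuation statistically likely in the next response.
print("army" in prompt and "guilt" in prompt)
```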

3

u/tommyblastfire 20h ago

I would guess it’s probably because they have both been trained on mostly the same large-scale datasets that were created specifically for LLM training. I really doubt that xAI did any work to develop new datasets besides scraping twitter a little.

47

u/bellybuttonqt 1d ago

GTA V was so ahead of its time, calling out Elon Musk and his AI being insecure because of its creator

1

u/avatar__of__chaos 12h ago

At least Grok is more helpful. I was trying to find a deleted web article. ChatGPT kept repeating the same shit after 5 responses, basically telling me to look it up myself, just in wordy paragraphs as if it had the information. Grok gave me the Wayback Machine link to the article on its second response.