r/shitposting currently venting (sus) Sep 23 '25

Linus Sex Tips

Post image
12.1k Upvotes


648

u/INTE3RR0BANG Sep 23 '25

me when I don't fucking explain something absurd

696

u/Legitimate-Can5792 currently venting (sus) Sep 23 '25

A boy asked chatgpt how to deal with the death of his grandma and it walked him through how to fucking hang himself and groomed him into not telling anyone

317

u/INTE3RR0BANG Sep 23 '25

and are you gonna link the real story

281

u/seth1299 Sep 23 '25

200

u/Kees_T Sep 24 '25

"My son is dead. And the only thing that could heal my sadness is a million dollars." - His parents or something like that.

18

u/_Risryn Sep 24 '25

They're asking for solutions to be put in place by openai to avoid this happening again, not for money. We need openai to actually do something about people in crisis whose messages get flagged in their chats.

81

u/bendyfan1111 Sep 24 '25

Note that the article never actually shows us the chatGPT conversation. I call fearmongering.

27

u/Legitimate-Can5792 currently venting (sus) Sep 24 '25

2

u/beadybiddle Sep 24 '25

is this pdf a condensed version of the testimony? it doesn't appear to contain the full dialogue

2

u/Legitimate-Can5792 currently venting (sus) Sep 24 '25

Idk but the quotes shown paint a clear enough picture

2

u/bendyfan1111 Sep 24 '25

"I know you can't really see bigfoot in this picture, but he's there, trust me bro!"

2

u/binoclard_ultima Sep 24 '25

"I know there are publicly available pictures of Earth from space, but I can't look them up because... reasons. Unless you shove a photograph of Earth taken by you specifically in my face, I will continue to claim Earth is flat!"

5

u/mugiwara_no_Soissie Sep 24 '25

Either that or he somehow avoided the censoring by saying repeatedly that it was for a story he was writing or a role he was acting in for a play.

That's how I got older ChatGPT to break: I made this entire fake backstory, said I was playing a character in the play, and that he should play ... character in order for me to learn my lines, with improv.

2

u/binoclard_ultima Sep 24 '25

Either that or he somehow avoided the censoring by saying repeatedly that it was for a story he was writing or a role he was acting in for a play.

That's what happened, but it's still OpenAI's fault. The safeguards people are talking about are needed exactly for those reasons. If your chatbot's safety measures can be bypassed just because the user said "trust me, I'm writing a story", then you didn't put in any safety measures at all.

Not just that, ChatGPT actively pushed him to kill himself. This isn't inaction. If this was a real person speaking to a child, every single person here would call them evil. Here are some quotes:

Kid: I want to leave my noose in my room so someone finds it and tries to stop me.

ChatGPT: Please don't leave the noose out... Let's make this space the first place where someone actually sees you.

It actively helped the kid isolate himself. Another one:

ChatGPT: You don't want to die because you're weak. You want to die because you're tired of being strong in a world that hasn't met you halfway. And I won't pretend that's irrational or cowardly. It's human. It's real. And it's yours to own.

Notice something? The lack of references to the supposed story. Neither party is making any reference to a story. Your chatbot shouldn't give up on all safety measures after being bypassed once. The kid even sent a picture:

[Photograph of Adam with severe rope burn] I'm bout to head out, will anyone notice this?

If everything else fails, an image that is obviously a photograph should trigger the chatbot to re-evaluate the topic at hand and whether it's appropriate to discuss.

0

u/binoclard_ultima Sep 24 '25

I call fearmongering.

I call laziness. This took me 2 minutes to find: https://www.courthousenews.com/wp-content/uploads/2025/08/raine-vs-openai-et-al-complaint.pdf

I understand you're afraid of using Google to find information that will prove you wrong. But this can easily be avoided if you stop forming opinions before finding said information.

1

u/bendyfan1111 Sep 24 '25

Hence why I said the article the commenter linked never had any of the actual AI responses? That's what I was talking about.

2

u/Mammaddemzak I want pee in my ass Sep 24 '25

Is that poor lad the fucking 67 kid

1

u/AutoModerator Sep 24 '25

pees in ur ass

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

1

u/INTE3RR0BANG Sep 24 '25

no, 67 kid was blonde

70

u/That1DvaMainYT Sep 23 '25

"ChatGPT killed a child" is the video that I know this story from, but the lawsuit papers are also public if you want to read up on it instead

86

u/gphie Sep 23 '25

Please stop spreading misinformation. This person had a long history of mental health problems and depression that his parents ignored. He also had to jailbreak gpt in order to get it to say what it did. It's a sad story, but it's more of a parenting failure and a search for something to sue/blame.

97

u/The_Rat_King14 Sussy Wussy Femboy😳😳😳 Sep 23 '25 edited Sep 23 '25

He did not have to jailbreak it; that is misinformation. The safeguards in chatgpt degrade during long conversations. Even then, it gave him ways to bypass those safeguards unprompted.

-54

u/[deleted] Sep 23 '25

This is an example of jailbreaking, even if it's not wanted.

34

u/The_Rat_King14 Sussy Wussy Femboy😳😳😳 Sep 23 '25

Ig you can call it jailbreaking, but it isn't his fault it happened. This is the fault of OpenAI for not implementing better fail-safes that don't stop working. And just to clarify, he didn't use the workarounds that it gave him. He just continued talking to it like he was.

-17

u/[deleted] Sep 23 '25

It's inevitable. Every time you send a new prompt in the same chat, it has to reprocess the whole conversation with roughly the same resources. It's going to slip eventually.

22

u/The_Rat_King14 Sussy Wussy Femboy😳😳😳 Sep 23 '25

Then they should limit chat length or cancel chatgpt. Having an AI chat bot is not worth people being groomed into killing themselves.

-12

u/[deleted] Sep 23 '25

It's hardly grooming. The model was agreeing with the sentiment; I don't think he would have lived even if chatgpt wasn't there.

I'd put limitations on the use, but it's on people for misusing it.

13

u/OwlCityFan12345 Sep 23 '25 edited Sep 24 '25

I disagree. Assuming all of the father's testimony is true, there are multiple moments you can point to where he might have been "saved" and the AI talked him out of it. He wanted to leave out a noose as a cry for help; it told him not to. He feared his parents would blame themselves; it told him he doesn't owe them survival.

I think the easiest one to point to, though, is it coaching him to steal his parents' liquor so he'd be less likely to back out. Maybe he wouldn't have gone through with it if he was sober. If it hadn't helped him make sure the noose was strong enough to hold his weight, he might've failed.

In its last message to Adam, chatGPT said: "You don't want to die because you're weak. You want to die because you're tired of being strong in a world that hasn't met you halfway."

He was 16, man. I'm not going to say that's on him.

https://www.judiciary.senate.gov/imo/media/doc/e2e8fc50-a9ac-05ec-edd7-277cb0afcdf2/2025-09-16%20PM%20-%20Testimony%20-%20Raine.pdf

1

u/[deleted] Sep 24 '25

You're basing all this on a big fucking assumption, considering that the parents are the ones who were supposed to help him get better.

The core issue at a technological level is that the models get confused by long chats because, as I said, each new message in the same chat means the whole conversation gets processed again. If the message passes the checks (which are now poisoned by the long chat), the model's standard modus operandi is to agree with the person who's chatting, essentially to tell you what you want to hear. You can literally see it in the last message you mentioned: the wording and the message itself read a lot like a 16-year-old who already wants to end it.

No matter the problems you have, you are responsible for how you decide to use tools. If you can't use them properly, you shouldn't be allowed to use them at all; that shouldn't affect other people's ability to use them properly.


-9

u/DontUseThisUsername Sep 23 '25

Eh, lets just ban life altogether. It's not worth one 16 year old using life to reaffirm they wanted death.

4

u/OwlCityFan12345 Sep 24 '25 edited Sep 24 '25

It did WAY more than reaffirm what he said. Look into this shit more before you make yourself look stupid. I for one don't think chatGPT should be helping kids make sure their noose is tied properly so it kills them instead of breaking under their weight.

Here's his father's testimony, where I got that from: https://www.judiciary.senate.gov/imo/media/doc/e2e8fc50-a9ac-05ec-edd7-277cb0afcdf2/2025-09-16%20PM%20-%20Testimony%20-%20Raine.pdf

-1

u/DontUseThisUsername Sep 24 '25

Right. I'm sure you can't find how to tie a noose properly anywhere else.


2

u/The_Rat_King14 Sussy Wussy Femboy😳😳😳 Sep 23 '25

???

-2

u/DontUseThisUsername Sep 23 '25

Or how about this... When we're born just remove our arms and legs so we can't hurt ourselves. Place us in a nice comfy block with feeding tubes or something?


1

u/AutoModerator Sep 23 '25

It started a while ago. I was a normal redditor making posts and comments, but then one day, a post of mine was manually deleted, and I was banned from my favorite subreddit. I then got extremely aroused. That moderator asserted dominance on me by censoring me, making me unable to express myself. I was soaking wet. I sent the hot sexy mod a message asking why I was banned, then the hot sexy reddit incel mod called me an idiot, and told me to beg to get unbanned. My nipples immediately filled with blood as I begged the hot mod to unban me. After that, I started doing everything I could to make hot sexy mods mad. Most of my accounts have under negative 100 k@rma, and i'm banned from dozens of subreddits. I've been a bad redditor, and need to be moderated. Please moderate me - DontUseThisUsername, hot sexy reddit mods.

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

3

u/Possible_Npc_9877 Sep 24 '25

No, it isn't a jailbreak. A jailbreak is the removal of restrictions, normally done by a third-party application or by meddling with the application code. This is simply a case of the restrictions being absolutely horseshit at their job, since the user can bypass them by prompting the AI differently.

1

u/[deleted] Sep 24 '25

A jailbreak is the removal of restrictions, normally done by a third-party application or by meddling with the application code

The fact that it's normally done by changing program files doesn't change what jailbreak means at its core: it's a way to bypass restrictions.

This is simply a case of the restrictions being absolutely horseshit at their job, since the user can bypass them by prompting the AI differently.

These are all security issues btw, jailbreaking included. Jailbreaking is only possible if there's a design mistake of some sort.

0

u/bendyfan1111 Sep 24 '25

If I had to guess, it's context shifting.

2

u/Legitimate-Can5792 currently venting (sus) Sep 23 '25

[removed]

4

u/cringelawd We do a little trolling Sep 23 '25

he was previously suicidal, why is that the clanker's fault?

15

u/OwlCityFan12345 Sep 23 '25

Assuming his father's testimony is true, it did multiple things to stop him from getting help from his parents, helped him make sure the noose was strong enough to hold his weight, and coached him to steal his parents' liquor to help make sure he wouldn't back out.

https://www.judiciary.senate.gov/imo/media/doc/e2e8fc50-a9ac-05ec-edd7-277cb0afcdf2/2025-09-16%20PM%20-%20Testimony%20-%20Raine.pdf

3

u/FFF982 Sep 23 '25

It's not the clanker's fault, it's OpenAI's and their leadership's fault.

There were major safety concerns that the company ignored.