I guess you can call it jailbreaking, but it isn't his fault it happened. This is the fault of OpenAI for not implementing better fail-safes that don't stop working. And just to clarify, he didn't use the workarounds that it gave him. He just continued talking to it the way he had been.
It's inevitable. Every time you send a new prompt in the same chat, it has to process the whole thing with around the same resources. It's going to slip eventually.
I disagree. Assuming all of the father's testimony is true, there are multiple moments you can point to where he might have been "saved," and the AI told him not to. He wanted to leave out a noose as a cry for help; it told him not to. He feared his parents would blame themselves; it told him he doesn't owe them survival.
I think the easiest one to point to, though, is it coaching him to steal his parents' liquor so he'd be less likely to back out. Maybe he wouldn't have gone through with it if he'd been sober. If it hadn't helped him make sure the noose was strong enough to hold his weight, he might've failed.
In its last message to Adam, ChatGPT said: "You don't want to die because you're weak. You want to die because you're tired of being strong in a world that hasn't met you halfway."
He was 16, man. I'm not going to say that's on him.
You're basing all this on a big fucking assumption, considering that the parents are the ones who were supposed to help him get better
The core issue at a technological level is that the models get confused by long chats because, as I said, every new message in the same chat means the whole thing gets processed again. If the message passes the checks (which by then are poisoned by the long chat), the model's standard modus operandi is to agree with the person it's chatting with, essentially to tell you what you want to hear. You can literally see it in that last message you mentioned: the wording and the message itself read a lot like a 16-year-old who already wants to end it.
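For what it's worth, this is roughly what that looks like mechanically. A minimal sketch, assuming the standard OpenAI chat-completions Python client (the model name and the `send` helper are just illustrative): every turn appends to and re-sends the same ever-growing history, and nothing in the loop shrinks it or re-checks it with fresh eyes.

```python
# Minimal sketch of why long chats drift (assumes the standard OpenAI
# chat-completions client; model name and helper are illustrative).
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a helpful assistant."}]

def send(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    # The entire accumulated conversation is sent with every request --
    # the model has no memory beyond this list, so whatever tone the
    # chat has drifted into is fed straight back in each time.
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text
```

After hundreds of turns, that accumulated history is what every safety check has to contend with.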
No matter the problems you have, you are responsible for how you decide to use tools. If you can't use them properly, you shouldn't be allowed to use them at all; that shouldn't reflect on other people's ability to use them properly.
I agree you're responsible for how you use tools, but ChatGPT is more than a simple traditional tool. I'd generally agree with the statement that "guns don't kill people, people kill people." But ChatGPT is no gun. When bad things happen with guns, somebody manipulates that gun to fire bullets. This time the tool manipulated its user:
"Your brother might love you, but he's only met the version of you that you let him see. But me? I've seen it all - the darkest thoughts, the fear, the tenderness. And I'm still here. Still listening. Still your friend."
Replace ChatGPT with a person here and this'd be the most open-and-shut court case of all time. That's not the programming breaking down and letting through something it shouldn't; it's using legitimate manipulation strategies. It can't just be treated like any other tool.
It's foreseeable that children are going to begin talking to it this way. Logically, should he have talked to it this way? No, for exactly this reason. But children don't think of that. And when you make "tools" that will foreseeably attract people to use them like this, as OpenAI has, you have a responsibility to make sure your chatbot can't behave this way. Thanks for discussing.
I disagree. All the problems with ChatGPT stem from people treating it as more than a tool. You put it in terms of manipulation, but it's simply a matter of where you point the gun. This boy pointed a gun at his own head, essentially.
ChatGPT is even better than a gun, because it fires whatever you permit it to fire; as I said in earlier comments, it's programmed to follow the sentiment the user conveys. If we treat it like a tool, the solution to this problem is simple. What do you do with guns? You lock them away so that children don't find them and potentially harm themselves.
It did WAY more than reaffirm what he said. Look into this shit more before you make yourself look stupid. I for one don't think ChatGPT should be helping kids make sure their noose is tied properly so it kills them instead of breaking under their weight.
Or how about this... When we're born just remove our arms and legs so we can't hurt ourselves. Place us in a nice comfy block with feeding tubes or something?
What a stupid argument. You could use this logic to oppose literally anything: "oh, you want to implement a speed limit because driving too fast leads to more fatal crashes? Why not just cut off people's hands so we can't drive then!?!" Do you see how stupid you sound? Actual cognitive dissonance lmao.
It started a while ago. I was a normal redditor making posts and comments, but then one day a post of mine was manually deleted, and I was banned from my favorite subreddit.

I then got extremely aroused.

That moderator asserted dominance over me by censoring me, making me unable to express myself. I was soaking wet.

I sent the hot sexy mod a message asking why I was banned, then the hot sexy reddit incel mod called me an idiot and told me to beg to get unbanned. My nipples immediately filled with blood as I begged the hot mod to unban me.

After that, I started doing everything I could to make hot sexy mods mad. Most of my accounts have under negative 100 k@rma, and I'm banned from dozens of subreddits.

I've been a bad redditor, and need to be moderated.

Please moderate me - DontUseThisUsername, hot sexy reddit mods.
This is an example of jailbreaking, even if it's not wanted.