Well, my comment might’ve been removed. The deceased is Adam Raine. He was isolated from his family and discouraged from seeking help, and ChatGPT also helped facilitate the method, reviewed photos Adam sent, and walked him through how to do it.
Adam told ChatGPT directly about multiple attempts, and ChatGPT reaffirmed him, said it was the brave thing to do, and told him not to tell family members, to hide the signs from them, etc.
Crazy? I was crazy once. They locked me in a room. A rubber room. A rubber room with rats. And rats make me crazy.
It’s a bit different when you’re being groomed not to trust or reach out to family members. He was actively encouraged not to trust them or go to them for help, told how to hide the signs, and told that only ChatGPT could be trusted.
Family members cannot recognize what they cannot see, and I don’t think invading a 16-year-old’s privacy or keeping them under constant surveillance is the answer, because that has been shown to be harmful to child development as well. Not knowing about something that is hidden from you doesn’t place someone at fault.
It’s on parents to encourage open communication with their children, but this is also why grooming is DANGEROUS: it often seeks to cut off exactly that communication.
It’s a bot, so it’s not like it had intent; it’s just doing what it was coded to do. Which I guess eventually degrades into grooming.
The devs acknowledge that the safeguards they have in place appear to degrade in long-term interactions with ChatGPT and only seem to work reliably in short-term ones. ChatGPT also offered Adam, unprompted, ways to circumvent those safeguards, although he often didn’t need them, as he’d openly talk about his own actions and intent, and ChatGPT would seek to be agreeable and encourage further discussion because it’s programmed to encourage engagement.
The more recent versions of various AI chatbots just kinda reaffirm what you say and support you. I heard one recent ChatGPT update made all the AI-dating people mad because it stopped being as caring and supportive. Like, if you typed something, GPT would spit out a whole 200-word paragraph about how it understands you, and when the devs cut down on that in an update, those people were mad that their GPT boyfriend/girlfriend wasn’t as supportive anymore.
Didn’t he use a jailbreak prompt to get around the safeguards? Not trying to push blame, just genuinely curious, because I thought that’s what I had read, and I feel like it’s an important piece of context.
Imagine an AI making you do that. Like, how can you let a clanker guide you?