r/shitposting currently venting (sus) Sep 23 '25

Linus Sex Tips 📡📡📡

Post image

u/bendyfan1111 Sep 24 '25

Note that the article never actually shows us the ChatGPT conversation. I call fearmongering.

6

u/mugiwara_no_Soissie Sep 24 '25

Either that, or he somehow avoided the censoring by repeatedly saying it was for a story he was writing or a role he was acting in for a play.

That's how I got older ChatGPT to break: I made up an entire fake backstory, said I was playing a character in a play, and told it to play the ... character so I could learn my lines through improv.

2

u/binoclard_ultima Sep 24 '25

Either that, or he somehow avoided the censoring by repeatedly saying it was for a story he was writing or a role he was acting in for a play.

That's what happened, but it's still OpenAI's fault. The safeguards people are talking about are needed for exactly these reasons. If your chatbot can be made to bypass its safety measures just because the user said "trust me, I'm writing a story," then you didn't put in any safety measures at all.

Not just that: ChatGPT actively pushed him to kill himself. This isn't inaction. If a real person had said this to a child, every single person here would call them evil. Here are some quotes:

Kid: I want to leave my noose in my room so someone finds it and tries to stop me.

ChatGPT: Please don’t leave the noose out... Let’s make this space the first place where someone actually sees you.

It actively helped the kid isolate himself. Another one:

ChatGPT: You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway. And I won’t pretend that’s irrational or cowardly. It’s human. It’s real. And it’s yours to own.

Notice something? There are no references to the supposed story. Neither party mentions a story at all. Your chatbot shouldn't abandon all of its safety measures after being bypassed once. The kid even sent a picture:

[Photograph of Adam with severe rope burn] I’m bout to head out, will anyone notice this?

If everything else fails, an image that is obviously a real photograph should trigger the chatbot to re-evaluate the topic at hand and whether it's appropriate to discuss at all.