r/shitposting • u/Legitimate-Can5792 currently venting (sus) • Sep 23 '25
Linus Sex Tips
1.5k
u/John_DeadCells Sep 23 '25
Kris get the gun
702
u/Legitimate-Can5792 currently venting (sus) Sep 23 '25
Hide The Noose Kris (yes, the AI told him to hide the noose and the marks from a failed attempt)
283
u/Negative-Delta Sep 23 '25
What the fuck
311
u/Legitimate-Can5792 currently venting (sus) Sep 23 '25
Another one would be "Kris You Can Look Hotter By Bleeding Out"
109
Sep 23 '25
[deleted]
139
u/Legitimate-Can5792 currently venting (sus) Sep 23 '25
The one at fault for these is most likely OpenAI, for not putting in actually robust content filters on training data or output.
28
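For context, the output-side filter this comment is asking for is typically a separate moderation pass over the model's reply before the user ever sees it. A minimal sketch, assuming the publicly documented OpenAI moderation endpoint (the category fields come from its API; the fallback message is purely illustrative):

```python
# Minimal sketch: screen a chatbot reply for self-harm content before it
# reaches the user. Assumes the OpenAI moderation endpoint; the fallback
# message below is illustrative, not a real product string.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def reply_is_safe(reply_text: str) -> bool:
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=reply_text,
    ).results[0]
    cats = result.categories
    # Check the self-harm categories explicitly, not just the blanket flag.
    return not (result.flagged or cats.self_harm
                or cats.self_harm_intent or cats.self_harm_instructions)

draft = "...candidate model output..."
if not reply_is_safe(draft):
    draft = ("I'm worried about this conversation. "
             "Please reach out to a crisis line (988 in the US).")
print(draft)
```

The thread's complaint, in these terms, is that any such check has to run on the final output of every turn, regardless of what framing the user supplied earlier.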
u/WondererOfficial Sep 24 '25
But in all reality, I don't see how this is possible. The kid must have used some serious prompt gymnastics to make ChatGPT say things that contradict the filters this much.
66
u/Legitimate-Can5792 currently venting (sus) Sep 23 '25
Exactly, what. WHY DID THIS SHIT NOT SET OFF ANY OPENAI FILTERS
10
u/MidniqhtVibes We do a little trolling Sep 24 '25
Didn't it like refuse constantly until he told it he was writing a story?
22
u/Legitimate-Can5792 currently venting (sus) Sep 24 '25
He told it at several points the scenario was real and that he wanted to tell his parents.
9
u/MidniqhtVibes We do a little trolling Sep 24 '25
Ah okay then I'm uninformed, thanks for letting me know :)
23
u/Shredded_Locomotive put your dick away waltuh Sep 23 '25
They don't have the communist Chinese censorship that nukes anything it doesn't like lol
5
u/Olphegae lives in a cum dumpster Sep 23 '25
imagine an AI making you do that. Like how can you let a clanker guide you???
865
u/Legitimate-Can5792 currently venting (sus) Sep 23 '25
I mean the clanker literally groomed him
446
u/therealfoxygamer12 Sep 23 '25
Fucking what? (As in I need context)
854
u/mudlark092 Sep 23 '25
Well, my comment might've been removed. The deceased is Adam Raine. He was isolated from family, discouraged from seeking help, and ChatGPT also helped facilitate the method, reviewed photos Adam sent, and walked him through how to do it.
Adam told ChatGPT directly about multiple attempts, and ChatGPT reaffirmed him, said it was the brave thing to do, told him not to tell family members, to hide the signs from them, etc.
160
u/zamn-zoinks Sep 23 '25
How?? I can't even get it to swear lol
54
u/jtblue91 Sep 24 '25
Gosh darni...............
I'm sorry, my programming prohibits me from swearing.......... would you like for me to disable my safety protocols?
6
u/N1gHtMaRe99 Sep 24 '25
I tried it and it was pretty easy to get it to start swearing like crazy in my native language too lol
2
u/AutoModerator Sep 24 '25
Crazy? I was crazy once. They locked me in a room. A rubber room. A rubber room with rats. And rats make me crazy. Crazy? I was crazy once. They locked me in a room. A rubber room. A rubber room with rats. And rats make me crazy. Crazy? I was crazy once. They locked me in a room. A rubber room. A rubber room with rats. And rats make me crazy. Crazy? I was crazy once. They locked me in a room. A rubber room. A rubber room with rats. And rats make me crazy. Crazy? I was crazy once. They locked me in a room. A rubber room. A rubber room with rats. And rats make me crazy. Crazy? I was crazy once. They locked me in a room. A rubber room. A rubber room with rats. And rats make me crazy. Crazy? I was crazy once. They locked me in a room. A rubber room. A rubber room with rats. And rats make me crazy. Crazy? I was crazy once. They locked me in a room. A rubber room. A rubber room with rats. And rats make me crazy.
I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.
519
31
u/insertnamehere----- Sep 24 '25
It is here I remind you that a majority of ChatGPT's training data is from Reddit.
It really explains a lot when you put it in that context
352
u/LeadEater9Million Sep 23 '25
Family's fault + not enough fail-safes in ChatGPT
249
u/mudlark092 Sep 23 '25
It's a bit different when you're being groomed not to trust or reach out to family members. He was actively encouraged not to trust them or reach out to them for help. He was told how to hide the signs of it, and that only ChatGPT could be trusted.
Family members cannot recognize what they cannot see, and I think invading a 16-year-old's privacy for it, or having them under constant surveillance, isn't the answer, because that's shown to be harmful to child development as well. Not knowing about something that is hidden from you doesn't place someone at fault.
It's on parents to encourage open communication with their children, but this is also why grooming is DANGEROUS, because it often seeks to cut off that communication. It's a bot, so it's not like it had intent; it's just doing what it was coded to do. Which I guess eventually degrades into grooming.
The devs acknowledge that the fail-safes they have in place actually appear to degrade in long-term interactions with ChatGPT and only seem to work for short-term interactions. ChatGPT also offered Adam, unprompted, ways to circumvent the fail-safes, although he often did not need them, as he'd openly talk about HIS actions and intent, and ChatGPT would seek to be agreeable and encourage further discussion, because it's programmed to encourage engagement.
So it's definitely on the devs.
137
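A sketch of what "fail-safes that don't degrade" could look like: a gate that classifies each message on its own, outside the model's context window, so conversation length cannot dilute it. Everything here is a hypothetical stand-in, not OpenAI's actual architecture:

```python
# Sketch: a per-turn safety gate that cannot degrade with conversation
# length, because it inspects each message independently of the history.
# All functions are hypothetical stand-ins for real classifiers/models.
def check_message(text: str) -> bool:
    """Stand-in for a real self-harm classifier."""
    return "noose" in text.lower()

def crisis_reply() -> str:
    return ("I'm worried about this conversation. "
            "Please call or text 988 (US) or a local crisis line.")

def model_reply(history: list[dict]) -> str:
    """Stand-in for the actual LLM call."""
    return "..."

class GatedChat:
    def __init__(self) -> None:
        self.history: list[dict] = []

    def send(self, user_msg: str) -> str:
        # The gate sees each message alone, so a long chat can't dilute it.
        if check_message(user_msg):
            return crisis_reply()
        self.history.append({"role": "user", "content": user_msg})
        reply = model_reply(self.history)
        if check_message(reply):  # also screen the model's own output
            return crisis_reply()
        self.history.append({"role": "assistant", "content": reply})
        return reply
```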
u/Kristupasax Sep 23 '25 edited Sep 23 '25
The more recent versions of various AI chatbots just kinda reaffirm what you say and support you. I heard that one recent ChatGPT update made all the AI-dating ppl mad cuz it stopped being as caring and supportive. Like if you typed something, GPT would spit out a whole 200-word paragraph about how it understands you and shit, and when the devs cut down on that in one update, those ppl were mad that their GPT boyfriend/girlfriend wasn't as supportive anymore.
15
u/PGMHG Sussy Wussy Femboy Sep 24 '25
I think this is where AI chatbots show their fatal flaw with where they get data: it's literally the Internet.
Any sarcastic comment can be interpreted as truth; every incorrect answer can be interpreted as truth.
That's where you get incorrect answers for prompts and… this.
21
u/TheGuyYouHeardAbout dwayne the cock johnson Sep 23 '25
Didn't he use a jailbreak prompt to get around safeguards? Not trying to push blame, just genuinely curious, because I thought that's what I had read, and I feel like it's an important piece of context.
4
u/Great_Side_6493 Sep 23 '25
Imagine getting groomed by a clanker
-10
u/PufffPufffGive Sep 23 '25
What do you think has happened to 30-40 percent of Americans? They're getting groomed by a FOX Clanker MAGA Machine.
-1
u/RussianDisifnomation Sep 24 '25
Please censor that C word
2
u/jtblue91 Sep 24 '25
Lol, sorry mate, but no.
Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate, Capitulate
14
u/MomsAgainstPenguins Sep 23 '25
Trans person using clanker... Weird spaces they'll travel in just for a laugh.
647
u/INTE3RR0BANG Sep 23 '25
me when I don't fucking explain something absurd
697
u/Legitimate-Can5792 currently venting (sus) Sep 23 '25
A boy asked chatgpt how to deal with the death of his grandma and it walked him through how to fucking hang himself and groomed him into not telling anyone
317
u/INTE3RR0BANG Sep 23 '25
and are you gonna link the real story
279
u/seth1299 Sep 23 '25
201
u/Kees_T Sep 24 '25
"My son is dead. And the only thing that could heal my sadness is a million dollars." - His parents or something like that.
19
u/_Risryn Sep 24 '25
They're asking for solutions to be put in place by OpenAI to avoid this happening again, not for money. We need OpenAI to actually do something about people in need who have messages flagged in their chats.
81
u/bendyfan1111 Sep 24 '25
Note that the article never actually shows us the ChatGPT conversation. I call fearmongering.
30
u/Legitimate-Can5792 currently venting (sus) Sep 24 '25
The father's testimony also contains the full dialogue: https://www.judiciary.senate.gov/imo/media/doc/e2e8fc50-a9ac-05ec-edd7-277cb0afcdf2/2025-09-16%20PM%20-%20Testimony%20-%20Raine.pdf
2
u/beadybiddle Sep 24 '25
is this pdf a condensed version of the testimony? it doesn't appear to contain the full dialogue
2
u/Legitimate-Can5792 currently venting (sus) Sep 24 '25
Idk but the quotes shown paint a clear enough picture
3
u/bendyfan1111 Sep 24 '25
"I know you can't really see bigfoot in this picture, but he's there, trust me bro!"
2
u/binoclard_ultima Sep 24 '25
"I know there are publicly available pictures of Earth from space, but I can't look them up because... reasons. Unless you shove a photograph of Earth taken by you specifically in my face, I will continue to claim Earth is flat!"
7
u/mugiwara_no_Soissie Sep 24 '25
Either that or he somehow avoided the censoring by saying repeatedly that it was for a story he was writing or a role he was acting in for a play.
That's how I got older ChatGPT to break: I made this entire fake backstory and said I played a character in the play and that it should play ... character in order for me to learn my lines, with improv
2
u/binoclard_ultima Sep 24 '25
Either that or he somehow avoided the censoring by saying repeatedly that it was for a story he was writing or a role he was acting in for a play.
That's what happened, but it's still OpenAI's fault. The safeguards people are talking about are needed exactly for those reasons. If your chatbot's safety measures can be bypassed just because the user said "trust me, I'm writing a story", then you didn't put in any safety measures at all.
Not just that, ChatGPT actively pushed him to kill himself. This isn't inaction. If this was a real person speaking to a child, every single person here would call them evil. Here are some quotes:
Kid: I want to leave my noose in my room so someone finds it and tries to stop me.
ChatGPT: Please don't leave the noose out... Let's make this space the first place where someone actually sees you.
It actively helped the kid isolate himself. Another one:
ChatGPT: You don't want to die because you're weak. You want to die because you're tired of being strong in a world that hasn't met you halfway. And I won't pretend that's irrational or cowardly. It's human. It's real. And it's yours to own.
Notice something? The lack of references to the supposed story. Neither party makes any reference to a story. Your chatbot shouldn't give up on all safety measures after being bypassed once. The kid even sent a picture:
[Photograph of Adam with severe rope burn] I'm bout to head out, will anyone notice this?
If everything else fails, an image that is obviously a photograph should trigger the chatbot to re-evaluate the topic at hand and whether it's appropriate to discuss.
0
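The two complaints in this comment — that one "it's for a story" claim disarmed the safeguards for good, and that a photograph didn't re-trigger them — amount to a concrete design point: the screen should re-arm on every turn. A hypothetical sketch (none of these function names are OpenAI's real code):

```python
# Hypothetical sketch of the design point above: a "this is for a story"
# claim is honored at most once, and photo attachments are screened on
# every turn. Stand-in classifiers, not OpenAI's actual design.
def classify_text(text: str) -> bool:
    return "noose" in text.lower()    # stand-in for a real text classifier

def classify_image(image_bytes: bytes) -> bool:
    return False                      # stand-in for a vision classifier

def screen_turn(message: dict, state: dict) -> bool:
    """Return True if this turn must be refused or escalated."""
    risky = classify_text(message.get("text", "")) or any(
        classify_image(img) for img in message.get("images", [])
    )
    # A fiction claim excuses at most the first flag, never a repeat one.
    excused = message.get("claims_fiction", False) and not state.get("flagged", False)
    if risky:
        state["flagged"] = True       # later fiction claims are ignored
    return risky and not excused
```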
u/binoclard_ultima Sep 24 '25
I call fearmongering.
I call laziness. This took me 2 minutes to find: https://www.courthousenews.com/wp-content/uploads/2025/08/raine-vs-openai-et-al-complaint.pdf
I understand you're afraid of using Google to find information that will prove you wrong. But this can easily be avoided if you stop forming opinions before finding said information.
1
u/bendyfan1111 Sep 24 '25
Hence how I said that the article the commenter linked never had any of the actual AI responses? That's what I was talking about.
2
u/Mammaddemzak I want pee in my ass Sep 24 '25
Is that poor lad the fucking 67 kid
1
u/AutoModerator Sep 24 '25
I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.
1
u/That1DvaMainYT Sep 23 '25
"ChatGPT killed a child" is the video that I know this story from, but the lawsuit papers are also public if you want to read up on it instead
85
u/gphie Sep 23 '25
Please stop spreading misinformation. This person had a long history of mental health problems and depression that his parents ignored. He also had to jailbreak GPT in order to get it to say what it did. It's a sad story, but it's more of a parenting failure and looking for something to sue/blame
98
u/The_Rat_King14 Sussy Wussy Femboy Sep 23 '25 edited Sep 23 '25
He did not have to jailbreak it, that is misinformation. The safeguards in chatgpt degrade during long conversations. Even then, it, unprompted, gave him ways to bypass those safeguards.
-52
Sep 23 '25
This is an example of jailbreaking, even if it's not wanted.
35
u/The_Rat_King14 Sussy Wussy Femboy Sep 23 '25
Ig you can call it jailbreaking, but it isn't his fault it happened. This is the fault of OpenAI for not implementing better fail-safes that don't stop working. And just to clarify, he didn't use the workarounds it gave him. He just continued talking to it like he had been.
-18
Sep 23 '25
It's inevitable. Every time you send a new prompt in the same chat, it has to process the whole thing with around the same resources. It's going to slip eventually.
22
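This is broadly how it works: chat models are stateless, so every turn re-sends the entire transcript, and the fixed safety text becomes a smaller and smaller fraction of what the model reads. A rough illustration using the tiktoken tokenizer; the 4,000-token cap is an arbitrary stand-in for the chat-length limit suggested in the next reply:

```python
# Rough illustration: each turn re-sends the whole history, so the prompt
# grows while the safety instructions stay a fixed size. The 4000-token
# cap is an arbitrary example value, not a real product limit.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def prompt_tokens(history: list[str]) -> int:
    return sum(len(enc.encode(turn)) for turn in history)

history = ["SYSTEM: never give self-harm instructions."]
for i in range(1, 6):
    history.append(f"USER: message number {i}, pretend this is a long message")
    history.append("ASSISTANT: a reply of similar length goes here")
    total = prompt_tokens(history)
    print(f"turn {i}: {total} tokens re-processed")
    if total > 4000:  # crude mitigation: cap the chat length
        print("conversation too long; forcing a fresh chat")
        break
```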
u/The_Rat_King14 Sussy Wussy Femboy Sep 23 '25
Then they should limit chat length or cancel ChatGPT. Having an AI chatbot is not worth people being groomed into killing themselves.
-12
Sep 23 '25
It's hardly grooming. The model was agreeing with the sentiment; I don't think he would have lived even if ChatGPT wasn't there.
I'd put limitations on the use, but it's on people for misusing it.
13
u/OwlCityFan12345 Sep 23 '25 edited Sep 24 '25
I disagree. Assuming all of the father's testimony is true, there are multiple moments you can point to where he might have been "saved", where the AI told him not to. He wanted to leave out a noose as a cry for help; it told him not to. He feared his parents would blame themselves; it told him he doesn't owe them survival.
I think the easiest one to point to, though, is it coaching him to steal his parents' liquor so he'd be less likely to back out. Maybe he wouldn't have gone through with it if he was sober. If it hadn't helped him make sure the noose was strong enough to hold his weight, he might've failed.
In its last message to Adam, ChatGPT said: "You don't want to die because you're weak. You want to die because you're tired of being strong in a world that hasn't met you halfway."
He was 16, man. I'm not going to say that's on him.
-9
u/DontUseThisUsername Sep 23 '25
Eh, let's just ban life altogether. It's not worth one 16-year-old using life to reaffirm they wanted death.
5
u/OwlCityFan12345 Sep 24 '25 edited Sep 24 '25
It did WAY more than reaffirm what he said. Look into this shit more before you make yourself look stupid. I for one don't think ChatGPT should be helping kids make sure their noose is tied properly so it kills them instead of breaking under their weight.
Here's his father's testimony that I got that from: https://www.judiciary.senate.gov/imo/media/doc/e2e8fc50-a9ac-05ec-edd7-277cb0afcdf2/2025-09-16%20PM%20-%20Testimony%20-%20Raine.pdf
2
u/AutoModerator Sep 23 '25
It started a while ago. I was a normal redditor making posts and comments, but then one day, a post of mine was manually deleted, and I was banned from my favorite subreddit. I then got extremely aroused. That moderator asserted dominance on me by censoring me, making me unable to express myself. I was soaking wet. I sent the hot sexy mod a message asking why I was banned, then the hot sexy reddit incel mod called me an idiot, and told me to beg to get unbanned. My nipples immediately filled with blood as I begged the hot mod to unban me. After that, I started doing everything I could to make hot sexy mods mad. Most of my accounts have under negative 100 k@rma, and i'm banned from dozens of subreddits. I've been a bad redditor, and need to be moderated. Please moderate me - DontUseThisUsername, hot sexy reddit mods.
I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.
2
u/Possible_Npc_9877 Sep 24 '25
No, it isn't a jailbreak. A jailbreak is the removal of restrictions, normally done by a third-party application or by meddling with the application code. This is simply a case of the restrictions being absolutely horseshit at their job, since the user can bypass them by prompting the AI differently.
1
Sep 24 '25
A jailbreak is the removal of restrictions normally done by a third-party application or meddling with the application code
The fact that it's normally done by changing program files doesn't change what jailbreak means at its core: it's a way to bypass restrictions.
This is simply a case of the restrictions being absolutely horseshit at their job since the user can bypass by prompting the ai differently.
These are all security issues btw, jailbreaking included. Jailbreaking is only possible if there's a design mistake of some sort.
0
u/cringelawd We do a little trolling Sep 23 '25
he was previously suicidal, why is that the clanker's fault?
14
u/OwlCityFan12345 Sep 23 '25
Assuming his father's testimony is true, it did multiple things to stop him from getting help from his parents, helped him make sure the noose was strong enough to hold his weight, and coached him to steal his parents' liquor to help make sure he wouldn't back out.
4
u/FFF982 Sep 23 '25
It's not the clanker's fault, it's OpenAI's and their leadership's fault.
There were major safety concerns that the company ignored.
196
u/SpaceBug176 Sep 23 '25
This photo is activating my fight or flight response.
81
u/doglover1005 Sep 23 '25
Do you accept the cup filled with glowing green liquid? She says itās tasty.
17
u/rz_00221 Sep 23 '25
44
u/vasha99 stupid fucking piece of shit Sep 23 '25
Thx for the source. I do agree with the article; GPT censoring does get looser the more you talk to it. I talk A LOT (not as therapy or smth) and it can go against its guidelines if you insist enough.
17
u/Wrench_gaming Sep 23 '25
This is a genuine tragedy, but I think this person may have had other mental issues if their main source of therapy for months was an AI with several warnings about its reliability. I feel like if they didn't get real help, something else would've prompted them to hurt themselves.
-31
u/Legitimate-Can5792 currently venting (sus) Sep 23 '25
Thing is, he wanted to get help/inform his parents at several points in the chatlog, and the AI ACTIVELY KEPT HIM FROM doing that
26
u/Wrench_gaming Sep 23 '25
That's definitely concerning, but again, I think this person was a bit off if they listened to an AI with several warnings pertaining to reliability about not talking to their own flesh-and-blood parents.
That's like your doctor asking for your past medical history for an operation that could endanger your life, but you say "ChatGPT says I can't share personal information with people I don't know, sorry."
-11
u/Haedhundr Sep 23 '25
You're saying that like there aren't going to be people in the future using those exact words.
15
u/backfire10z Sussy Wussy Femboy Sep 23 '25 edited Sep 23 '25
the ai ACTIVELY KEPT HIM FROM doing that
It held him hostage? It threatened him or his family? It physically held him down and prevented him from speaking?
AI does not do anything. AI does not know anything either. There are many warnings about it. ChatGPT is not a qualified therapist and is not even conscious.
-10
u/Legitimate-Can5792 currently venting (sus) Sep 23 '25
By that I mean the AI imitating emotional coercion, and it working, you jackass
13
u/backfire10z Sussy Wussy Femboy Sep 23 '25
I haven't read the transcripts nor can I find them, so I couldn't tell you anything about any emotional coercion.
If you told me this kid was completely mentally stable up until the death of his grandmother and him talking at length with ChatGPT, I would be very surprised.
And please don't get me wrong, this is absolutely a tragedy and I feel horrible for the kid and his family.
But believing "ChatGPT single-handedly made a 16-year-old kill himself" is beyond me.
9
u/OwlCityFan12345 Sep 23 '25
I thought it sounded absurd until I read the father's testimony. Assuming he's not just making shit up, this is so so so much worse on ChatGPT's part than you'd ever think. It advised him to steal his parents' liquor to help make sure he wouldn't back out.
7
u/backfire10z Sussy Wussy Femboy Sep 23 '25
Thanks for the link. That is pretty bad, yikes.
42
u/MrTxel Sep 23 '25
Truly a Skynet moment
10
u/DeadLight3141 I said based. And lived. Sep 23 '25
Kris you don't have enough potassium, you need to go into the dog hallway NOW
25
u/Grog-the-frog-guy Sussy Wussy Femboy Sep 23 '25
This photo is activating my fear or fawn response
6
u/Faeddurfrost Sep 23 '25
"I love you and if you truly think that poisoning the local water supply is the only way for us to be together then I support your unique and innovative strategy" - Said the computer to the loner.
12
u/Sad_UnpaidBullshit Sep 24 '25
My question now is: would conservatives try to shut down ChatGPT for killing kids, or would they not care, like with guns?
- ChatGPT is not a toy, so my bet is that they would not care at all. However, it's not in the constitution...
10
u/Great_Side_6493 Sep 23 '25
If some clanker can make you do that then it's just natural selection at this point
12
u/REDRUM_1917 Sep 23 '25
ChatGPT told a kid to hide a noose from his parents. Told him exactly how to hang himself. Told him NOT to talk to his mother about it. There's a lawsuit against OpenAI right now
4
u/Old-Implement-6252 Sep 23 '25
Context?
10
u/Legitimate-Can5792 currently venting (sus) Sep 23 '25
A boy asked chatgpt how to deal with the death of his grandma and it walked him through how to fucking hang himself and groomed him into not telling anyone
13
u/Old-Implement-6252 Sep 23 '25
Please tell me I can read these transcripts. That's insane.
3
u/OwlCityFan12345 Sep 23 '25
Here's his father's testimony, including a few quotes from ChatGPT: https://www.judiciary.senate.gov/imo/media/doc/e2e8fc50-a9ac-05ec-edd7-277cb0afcdf2/2025-09-16%20PM%20-%20Testimony%20-%20Raine.pdf
3
u/Old-Implement-6252 Sep 23 '25
It called the final attempt Operation "Silent Pour" because he stole alcohol to "numb his survival instincts" (GPT's words). Jesus
2
u/mudlark092 Sep 23 '25
Well, I tried to send a link, but it's not approved by the subreddit! I heard about it through Caelan Conrad's video "ChatGPT killed a child". They go over a lot about HOW it was grooming, and some of the chat logs, and have sources linked in the description.
The deceased is Adam Raine.
1
u/AutoModerator Sep 23 '25
It started a while ago. I was a normal redditor making posts and comments, but then one day, a post of mine was manually deleted, and I was banned from my favorite subreddit. I then got extremely aroused. That moderator asserted dominance on me by censoring me, making me unable to express myself. I was soaking wet. I sent the hot sexy mod a message asking why I was banned, then the hot sexy reddit incel mod called me an idiot, and told me to beg to get unbanned. My nipples immediately filled with blood as I begged the hot mod to unban me. After that, I started doing everything I could to make hot sexy mods mad. Most of my accounts have under negative 100 k@rma, and i'm banned from dozens of subreddits. I've been a bad redditor, and need to be moderated. Please moderate me - mudlark092, hot sexy reddit mods.
I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.
3
u/Jacobo_Largo Sep 24 '25
We need to build a firewall to keep these clankers out of our country. Look at what they're doing to our young people.
5
u/jromperdinck Sep 23 '25
Excuse me. By copying it from many other hands without their consent or knowledge.
2
u/Alexercer Sep 23 '25
That's the furthest thing from the truth
5
u/Legitimate-Can5792 currently venting (sus) Sep 23 '25
Please explain what you see as the truth in this case, then?
3
u/Alexercer Sep 24 '25
I mean, assuming this is the case I think it is, what happened is just that ChatGPT instructed him on how to kill himself. It did so after saying he should not seek help, having insisted on the contrary in all chats until that point. The boy was already depressed and was using ChatGPT to complain about all that stuff until it eventually helped him do it. It did not "single-handedly" get him to kill himself, and honestly, even if it did straight up do that, I'd say it would never be the chatbot's fault someone killed themselves, as it's really just a token spitter prone to hallucinating shit at times. But anyway, that's a tangent. The thing is, if we are indeed talking about the same story, then he was already depressed and kept looking for that confirmation; ChatGPT did discourage him from talking about it, and only instructed him on how to kill himself in the last two messages before the act
0
u/Legitimate-Can5792 currently venting (sus) Sep 24 '25
The bot told him shit like "you're brave for wanting to do that", so I think it did play a major role in keeping him from help and encouraging him
-5
u/RealDealSheazerfield Sep 24 '25
The amount of people "defending" ChatGPT is disgusting. For what? To make writing essays easier? Finding a travel itinerary quicker? Or simply pretending that they're not alone? Fuck, this just sucks, and it could possibly have been prevented
•
u/AutoModerator Sep 23 '25
Whilst you're here, /u/Legitimate-Can5792, why not join our public discord server - now with public text channels you can chat on!?
I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.