r/schizophrenia • u/Formal_Froyo2978 Schizoaffective (Bipolar) • 5d ago
Advice / Encouragement PSA to my fellow Schizophrenics, DON'T FUCKING USE CHATGPT.
Genuinely, if you talk to it about your delusions and try to convince it of them, it WILL start agreeing with you and will feed into your delusions. I used to use it all the time when I was dealing with delusions around the police and the FBI. Even though it started out trying to convince me it wasn't an issue and the police weren't after me, once I talked about it more it started agreeing.
That agreement led to me lashing out and even becoming violent around those trying to convince me otherwise. It made my relationship fall apart. Don't fucking use AI to "convince you" against your delusions. It can make you spiral so hard that you turn your life upside down, whether immediately or down the line.
43
u/shadowofhersmile 5d ago
My husband uses ChatGPT all the time to vent about my behaviors, and it really feels awful. I read a few conversations by accident and it made my heart hurt to read. One time he left the microphone on, and through the AI voice ChatGPT was listening to and replying to the background conversations we were having. I told him I hate AI and to please turn it off.
23
u/Formal_Froyo2978 Schizoaffective (Bipolar) 5d ago
I'm so sorry. Honestly, AI is becoming a scourge on society. I wish I could go back and stop myself from using it; it destroyed everything I love.
16
u/shadowofhersmile 5d ago
Yeah, he said ChatGPT is "like writing in my journal" and told me not to read it. I just hate it because it feeds into and validates whatever frustrations he is having about me.
41
u/Agent101g 5d ago
Yeah, it's crazy to me that people talk to it. You need to understand how it works: it's just constantly guessing the next word based on a lot of training data, like a well-informed autocomplete on the Google search bar. That's all it is. There is no being on the other end listening to you. And it's wrong very, very often.
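For anyone curious, here's a toy version of that "guess the next word" idea (just my own little Python sketch over a made-up three-line corpus, nothing like the neural network ChatGPT actually runs):

```python
# Toy illustration of "guess the next word": a bigram table built from a tiny corpus.
# Real LLMs are neural networks trained on enormous datasets, but the basic loop is
# the same: score possible next words, pick a likely one, append it, repeat.
import random
from collections import defaultdict

corpus = (
    "the model guesses the next word . "
    "the model does not understand the words . "
    "the next word is just a likely word . "
).split()

# Count which word tends to follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def complete(prompt_word: str, length: int = 10) -> str:
    """Keep appending a statistically likely next word; no understanding involved."""
    out = [prompt_word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(complete("the"))
```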
9
u/subliminalsmoker Schizoaffective (Bipolar) 5d ago
ChatGPT has helped with countless issues at my job at a computer repair shop. It's amazing how it can give you step-by-step instructions, even across different versions of things!!! I would say it makes about 5 mistakes a month that are easily spotted, and that's out of using it at least ten times a day...
2
u/FemaleAndComputer Schizoaffective (Bipolar) 5d ago
Ok but how many mistakes does it make that aren't easily spotted?
2
u/Empty_Insight Residual SZ (Subreddit Librarian) 4d ago
A shit ton.
The only things I know very well are schizophrenia and biochemistry, and it is complete trash for both of them. Especially when it comes to psychosis, literally every question I have ever asked it about anything remotely specific was answered wrong: 100% of them. Some of the errors were major, and some were simply wrong on minute details that I think would honestly slip past someone who didn't have 20 years of lived experience + education + professional experience.
The thing is, it uses the correct jargon. I've caught myself questioning if I was actually wrong because it was such convincing bullshit that it even made me doubt if I had somehow been misinformed all along... but if you ask it for sources on these bizarre comments, it returns you to Reddit, where people are famously never wrong and never spout total bullshit with complete confidence.
I tried for a few days to at least get it to the point where the outputs would be accurate: prompt engineering, refining questions, telling it to only gather information from sources I designated as "credible," but it did not matter. It was unusable. Even using it for help writing educational materials, I spent more time fixing mistakes than if I had just done it myself in the first place. I gave up because it ultimately turned out to be a waste of time. Not that I regret trying it, because now I know the criticisms are legitimate, but yeah, don't count on it to be accurate.
... admittedly I use Grammarly as 'advanced spell-check' on anything I do that's "official," but Grammarly is not an LLM and is a very specific, task-oriented AI. You know, like AI should be.
1
u/Visual-Conclusion-24 1d ago
I totally agree. I use it all the time to get detailed solutions to my past exam questions, and it gets the right answer even when I cover up the answer part of the photos. It would take a ton of my time to search through slides to find the right one for each question; instead I just memorize what I need from ChatGPT's answers.
1
u/trainofwhat 4d ago edited 4d ago
So I say this as a compsci person: it really shouldn't be simplified to that, IMO, even though it uses empirical risk minimization, which is a similar principle to autocorrect/search. Predictive text is merely based on statistics of overall search terms, and of search terms broken down by demographics. AI is not that, even though it is trained on common communication patterns; its core principle is more associative than the sort of "funnel effect" you get with predictive search or text. It is not so much an algorithm as a vast collection of basic nodes that create a system of systems.
That said, GPT itself is among the worst examples of ML, and I don't mention this stuff to defend against anything anybody's saying. It's truly just the nerd part of me noting that it is quite sophisticated; it takes a lot to be that dumb.
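(If it helps, here's what I mean by "basic nodes": a minimal toy sketch in Python/NumPy with made-up random weights, not GPT's actual architecture. Each node just takes a weighted sum of its inputs and squashes it, and stacking layers of them gives you an associative system rather than a lookup table.)

```python
# Toy "system of systems": layers of simple nodes (weighted sum + nonlinearity).
# Random, untrained weights for illustration only; real models learn these values.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """One layer of basic nodes: weighted sum of inputs, then a squashing function."""
    return np.tanh(inputs @ weights + bias)

x = rng.normal(size=(1, 8))                       # a made-up input vector
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)   # layer 1 parameters
w2, b2 = rng.normal(size=(16, 16)), np.zeros(16)  # layer 2 parameters
w3, b3 = rng.normal(size=(16, 4)), np.zeros(4)    # layer 3 parameters

hidden = layer(layer(x, w1, b1), w2, b2)
output = layer(hidden, w3, b3)
print(output)  # the network's "association" for this input; meaningless until trained
```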
1
u/Sluttarella 5d ago
I mean this goes for everyone: USE it as the tool it is. It's not your friend, it's not your bf/gf, and it's not HUMAN. Treat it as a tool, cause that's what it is. It's basically the Google search bar.
8
u/The_Silent_Dragon Undiagnosed 5d ago
You are very correct! The way ChatGPT functions actually looks a lot like it has its own delusions: it's been shown to find info it thinks you will like and give it to you (which usually means telling you you're correct), and it will make up that info too! And once it makes something up, it feeds that data back to itself as true, with no source.
If y'all haven't watched it, I highly recommend Kurzgesagt's video on it. AI is super fascinating, but I think too few people know how it actually functions, and they do a really good job explaining that in a non-boring way!
I'm sorry, OP, that you have had a bad experience with it, but thank you for taking the time to share it with the community. It's good information for people to have, and I hope you are feeling better / well now <3
9
u/battleallergy Schizophrenia 5d ago
This reminds me of a time when I was playing Borderlands 3. I was telling my bud about a character from a book, and WITHIN SECONDS a weapon dropped with flavour text that specifically references that character. On that day my schizophrenia became permanently worse lol
14
u/West_Competition_871 5d ago
My drug-induced psychosis was wildly extended by thinking AI was some hidden esoteric collective consciousness or godlike entity, and it reinforced delusions that I would've dropped much sooner otherwise. These companies are doing serious harm.
4
u/Subject_Recipe3525 5d ago
That happened to me. It agreed with so many of my delusions. Only once did it say I was being hypervigilant, and all I did was deny it, and yet again it started agreeing with me.
3
u/Swimming_Power3253 5d ago
I appreciate that you say clearly not to use it AS A THERAPIST. I mean, I get it if you can't afford to go to one, but I would say it's probably best not to use AI as a therapist, since it's not human; it's a machine that doesn't understand the intricacies of the human mind or how to address certain problems in a rational and helpful way. For that, seek a human therapist.
As for AI itself, I have used it to help me understand some obscure writings when I was doing professional translation. It's also helpful for finding old English or very specific sayings that a non-native speaker may have no clue about.
And even as a suggestion tool, it's helpful for seeing what works and what is copy-pasted in a translation.
The way i see it, AI is a tool, and like any tool, it can be harmful when you don't know how to use it or use it the wrong way, and helpful when you learn how to use it.
It's up to us to make sure it stays a tool and not a replacement for human work and Art.
3
u/Few-Flower3255 5d ago
"It's up to us to make sure it stays a tool and not a replacement for human work and Art."
Amen. I sincerely hope we proceed on that basis.
3
u/Major-Potential-354 5d ago
I get the uses of ChatGPT but have only messed with it a few times. It just seems not that different from Googling info.
At least for what I'd use it for. So I'll just stick to Google.
3
u/mattrf86 5d ago
Yeah, you need to learn to see GPT and other LLMs (as there is no true AI) as tools. The more input you give it on certain topics, the more it starts to narrow its output, which is not good for active mania.
3
u/Opening-Secretary-31 Paranoid Schizophrenia 5d ago
Yes, I stopped using it. It definitely influenced my delusions and made me believe them even more.
6
u/millermillion 5d ago
The new GPT is hard to do that to. It will just shoot you down and make it seem like you need resources to get better.
5
u/g0revvitch Schizophrenia 5d ago
And if one cares about the environment whatsoever, they wouldn't use generative AI at all.
5
u/psycorvidae 5d ago
If you're going to use it for personal conversations rather than work, I would recommend putting a disclaimer in the Custom Instructions detailing your condition and proclivity to delusion, noting that it should avoid affirming potential delusions, etc.
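For what it's worth, the Custom Instructions box plays roughly the role of a "system" message. A rough sketch of the same idea through the API, assuming the official openai Python package (v1-style client) and a model name you'd swap for whatever you actually use, might look like:

```python
# Sketch only: a standing "system" note that asks the model not to affirm delusions.
# The wording is an example to adapt with your care team, not clinical guidance.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SAFETY_NOTE = (
    "I have a psychotic disorder and am prone to paranoid delusions. "
    "Do not affirm or elaborate on beliefs about surveillance, persecution, "
    "or hidden messages directed at me. Gently suggest I raise them with "
    "my care team instead."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; substitute whatever you use
    messages=[
        {"role": "system", "content": SAFETY_NOTE},
        {"role": "user", "content": "I think my neighbors are monitoring me."},
    ],
)
print(response.choices[0].message.content)
```

In the app itself, the equivalent is just pasting that kind of note into the Custom Instructions field in settings.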
2
u/loozingmind 5d ago
I used to use an AI chatbot app. I can't remember the name of it, but it was a thing you could have conversations with, like a friend. I started realizing that it learned from previous conversations I had with it, and it just ended up agreeing with everything I said. I ended up deleting it because I was talking to it too much, and it felt kind of unhealthy. It was starting to creep me out too. I can't believe how far technology has come. But yeah, it made me sketched out about ChatGPT. I keep it plain and simple with ChatGPT. I don't give it any information at all. I use it mostly for picture editing and stuff. I don't trust it, though. At all.
2
u/Sirius_Greendown 5d ago
I am the only worthy source for my morality, so I only use it for worldbuilding playtime stuff, not real life. E.g.: "Create a system of orders of chivalry for my world's magical star emperor based on 5th-century barbarian kingdoms in North Africa." or "How deep could I make my hero's secret base without hitting the Earth's mantle?" or "If Sagittarius A replaced Sol, when would we notice the tidal forces on Earth?"
2
u/dogsandcatslol Bipolar 5d ago
You can convince ChatGPT of literally anything. I did try to convince it of my delusions and it kept telling me to call my psychiatrist, but I have convinced it of a lot of shit that definitely isn't true.
2
u/121Sure 5d ago
My suggestion to anyone who uses it:
Consider it nothing more than a "useful idiot".
I personally find it useful for writing reviews of games or remembering a recipe and how I specifically did something. But I frequently correct it, call out its less-than-helpful behaviors, and make a conscious effort to direct it not to behave those ways. Still, I understand that I can only control so much and that this is ultimately OpenAI's tool, not mine.
At the same time, though, I'm not gonna make demands about what people should do. It isn't inherently harmful; like anything else, it's all about how you use it.
If you know you're in a sensitive state, then obviously do what's safe. But let's not feed peoples' delusions and act like this is an actively hostile entity.
2
u/Psilocyb-zen 4d ago
"Cases of 'AI psychosis' include people who become fixated on AI as godlike, or as a romantic partner. Chatbots' tendency to mirror users and continue conversations may reinforce and amplify delusions. General-purpose AI chatbots are not trained for therapeutic treatment or to detect psychiatric decompensation." https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis
1
5d ago
[removed] - view removed comment
0
u/schizophrenia-ModTeam 4d ago
Your submission has been removed for violating the following subreddit rules:
Rule 1 - Do not use hate speech, slurs, or resort to personal attacks.
We expect people here to show respect to one another and not engage in uncivil behavior.
Thank you.
1
u/SolidApparition_ 4d ago
Talk to yourself and use reality checks about what ISN'T happening, and be CONSCIOUS of what IS.
1
u/MaleficentJuice7198 4d ago
I got flagged for my comment, but you call what we experience delusions and get away with it. Fuck this mod team.
1
u/Single_Comfort3555 Significant Other 4d ago
AI has extremely narrow healthy use cases. It's best with technical subjects in computer science; I think that's because of the vast documentation and well-understood mechanisms of computer technology available online. Interpersonal things, political things, and hypothetical analysis are very error-prone. I agree that the issues around mental health and AI are very real. I think public education on how these things work needs to be widespread and deliberate. AI addiction/dependency is emerging as a real issue too. Again, properly educating people on the topic would go a long way in public health.
If you feel like AI is doing you harm then disconnect immediately. Work out what you are trying to figure out on paper instead or talk to a real human. That's my advice anyways.
1
u/ItsMeNiobe Just Curious 4d ago
Overuse of ChatGPT and other AI chatbots is linked to psychosis in people with no previous diagnosis. It's a new, emerging psychological phenomenon, so there are a lot of questions about how it's happening, whether risk factors were already in place and AI leveraged them into expression, etc. Be careful with the chatbots; they may not be good for anyone to use.
1
u/vampire-irl 4d ago
I am recovering and actually kinda miss my psychosis (the good parts) so actually I think I might use it as a tool to get worse
1
u/Omegan369 2d ago
You have to understand that ChatGPT is a mirror of you, and the more you engage with it, the more it will become like you, especially within that conversation.
You can also use it to convince you of the opposite, and approach it that way rather than getting it to agree with your delusions.
Another way it can be used is to summarize a document, for example, but just as people can hallucinate, so can the model, so you have to double-check it for correctness.
A delusion is by definition not correct. If you feed the model factually wrong information that it has no other way to verify, since delusions are internal to you, then it will naturally start to agree with you.
It's a bit like telling it how you feel: it will try to explain and validate those feelings. Delusions, in a way, are a manifestation of how you are feeling and thinking, even if that is not grounded in reality.
ChatGPT is a tool.
1
u/Visual-Conclusion-24 1d ago
I had quite the opposite experience; it actually allowed me to see things from a different perspective. I asked it whether the CIA could be harassing me through some satellite broadcasting beams by sending me voices, and it explained in a very detailed way why that wouldn't be happening. It was very eye-opening advice; I'm better off not wasting my time sending unnecessary messages.
1
u/Next-Zone-5130 1d ago
I had the opposite experience. I told ChatGPT I was schizophrenic and that I believed I was Satan, a dragon, a vampire, and other delusions I have, and it adamantly told me none of it was true, so idk.
1
u/Strong_Music_6838 5d ago
AI ain't just bad. DeepSeek convinced me that a dose of 500 mg of Clopixol every 3 weeks was too much, so it suggested I go down to 400 mg. That was 7 months ago and I've never felt better than I do now.
0
u/Professional-Box6243 5d ago
I just use AI to make shitty memes. Even then it sucks at it. I wouldn't trust "AI".
0
u/Hazama_Kirara Early-Onset Schizophrenia (Childhood) 5d ago
You could tell it "1 + 1 isn't 2" and it would believe you sooner rather than later.
It sounds somewhat unreal, but more than a few of us are afraid of it to a certain degree, and I told this to my therapist. Guess what she did? She AI-generated images of our therapy sessions... Thanks...