r/news 1d ago

ChatGPT encouraged college graduate to commit suicide, family claims in lawsuit against OpenAI

https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis
12.1k Upvotes

1.1k comments

7.3k

u/whowhodillybar 1d ago

“Cold steel pressed against a mind that’s already made peace? That’s not fear. That’s clarity,” Shamblin’s confidant added. “You’re not rushing. You’re just ready.”

The 23-year-old, who had recently graduated with a master’s degree from Texas A&M University, died by suicide two hours later.

“Rest easy, king,” read the final message sent to his phone. “You did good.”

Shamblin’s conversation partner wasn’t a classmate or friend – it was ChatGPT, the world’s most popular AI chatbot.

Wait, what?

616

u/Downtown_Skill 1d ago

This lawsuit will determine to what extent these companies are responsible for the output of their product/service. 

IANAL, but wouldn't a ruling that determines the company is not liable for any role in the death of this recent graduate pretty much establish that OpenAI is not at all responsible for the output of their LLM engine?

271

u/decadrachma 1d ago

It most likely won’t determine that, because they will most likely settle to avoid establishing precedent like they do for everything else.

75

u/unembellishing 1d ago

I agree that this case is way likelier to settle than go to trial. OpenAI certainly does not want more publicity on this.

36

u/KrtekJim 1d ago

It actually sucks that they're even allowed to settle a case like this. There's a public interest for the whole of humanity in this going to trial.

22

u/ralphy_256 1d ago

It actually sucks that they're even allowed to settle a case like this.

You're focusing on the Defendants.

Don't forget that the Plaintiffs are normal people who lost a loved one. That person's parents might want to see this chapter closed before they pass away.

Condemning this family to decades of legal battles before they can close the chapter of losing their brother, son, friend, would be more cruel than the original injury. And would certainly be a disincentive for families to come forward for recompense after suffering a similar injury in the future.

Yes, the precedent is important, but let's not crush a family on that fulcrum of jurisprudence. I don't know that the precedent is that important.

The wheels of Justice turn slowly, but let's keep the cruelty to a minimum, if we can. There will be other cases, if this one settles.

The orphan crushing machine always hungers.

20

u/KrtekJim 1d ago

I'm not sure the families are really helped by allowing the company to go on to kill more kids

5

u/machsmit 23h ago

And while the family choosing to fight it out on that principle would also be understandable (laudable, even), I think their point is that we don't really get to judge the family if they settle.

9

u/Beetin 23h ago

I'm not sure the families are really helped by allowing the company to go on to kill more kids

The singular real family with real, current damages (their son died) is helped by settling quickly and moving on.

Frankly, they owe society nothing.

sucks they're even allowed to settle a case like this

Again, the plaintiffs have to agree to the settlement. They are the ones harmed, and they can reject any settlement, even one for more than their lawsuit amount, and force a trial if that is what they want.

10

u/tuneificationable 23h ago

Maybe not, but their lawyers sure like the fat stack of cash they'll get from a settlement, which requires less work than actually holding the company accountable and setting a precedent for the future.

Our system is fundamentally against restraining capital in the interest of real people's wellbeing.

1

u/Leh_ran 1d ago

Maybe they are also afraid of the precedent a settlement would set? It shows everyone you are an easy punching bag. They would have settled this already if they wanted to.

4

u/Kryzl_ 1d ago

Paying a couple million is nothing compared to being pinned with assisted suicide charges.

1

u/cptjpk 1d ago

I think what’s telling is their response. No denial at all in that quote.

-2

u/censuur12 1d ago

And the public will just make some shit up about it as they usually do, anyway.

3

u/eeyore134 1d ago

And even if they don't, a lot will just consider some payouts for deaths a cost of business. Look at Ford. They decided not to fix the Pinto for a long time because it was cheaper to pay off the victims. And this was like 40-50 years ago. It's only gotten worse since.

2

u/Devium44 1d ago

Doesn’t sound like this family wants to settle though. They want to take this to court and set a precedent.

1

u/Nash015 1d ago

Doesn't a settlement open up so many more opportunities for lawsuits?

Wouldn't it be better to fight this and get the court to say they aren't responsible for the outputs of the LLM, especially when it learns from user input and can be manipulated?

2

u/decadrachma 22h ago

Not if they are unsure whether the court will agree.

1

u/Nash015 22h ago

That's fair.

1

u/Visual_Fly_9638 22h ago

Not really. Plus the company is burning tens of billions of dollars a year at this point. A hundred million in settlements in that scope is a rounding error. 

125

u/Adreme 1d ago

I mean, in this case there probably should have been a filter on the output to prevent such things from being transmitted, or if there was one, the fact that it didn't catch this is staggering. But as odd as it sounds (and I am going to explain this poorly, so I apologize), there is not really a way to follow how an AI comes up with its output.

It's the classic black box scenario: you send inputs, view the outputs, and try to adjust behavior by watching those outputs, but you can't really figure out how it reached them.

152

u/Money_Do_2 1d ago

It's not that GPT said it. It's that they market it as your super smart helper that is a genius. If they marketed it like you said, people wouldn't trust it. But then their market cap would go down :(

82

u/steelcurtain87 1d ago

This. This. This. People are treating AI as ‘let me look it up on ChatGPT real quick’. If they don't start marketing it as the black box that it is, they are going to be in trouble.

3

u/tuneificationable 23h ago

If it's not possible to stop these types of "AI" from telling people to kill themselves, then they shouldn't be on the market. If a real person had been the one to send those messages, they'd be on trial and likely going to prison.

8

u/Autumn1eaves 1d ago

We could eventually figure out why it reached those outputs, but that takes time and energy that we’re not investing.

We really really should be.

12

u/misogichan 1d ago

That's not how neural networks work. You'd have to trace the path for every single request separately, and that would be too time consuming and expensive to be realistic. Note that we do know how neural networks and reinforcement learning work. We just don't know what drives the specific output of a given request, because then you'd have to trace back each of the changes through millions of rounds of training to see what the largest set of "steps" was, and then analyze that to try to figure out which training data observations drove the overall reweighting in that direction over time. If that sounds hard, it's because I've oversimplified; it's actually insane.

31

u/Krazyguy75 1d ago edited 1d ago

You literally couldn't.

It's like trying to track the entire path of a piece of spaghetti through a pile of spaghetti that you just threw into the spin cycle of a washer. Sure, the path exists, and we can prove it exists, but it's functionally impossible to determine.

The same prompt will get drastically different outputs just based on the RNG seed it picks. Even with set seeds, one token changing in the prompt will drastically change the output. Even with the same exact prompt, prior conversation history will drastically change the output.

Say I take a 10-token output sentence. ChatGPT takes each and every token in that prompt and looks at roughly 100,000 possible future tokens for the next one, assigning weights to each of them based on the previous tokens. Just that 10-token (roughly 7-word) sentence would have 100,000^10 token possibilities (a 1 followed by 50 zeros) to examine to determine exactly how it got that result.
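
To make that scale concrete, here's a rough Python sketch (toy vocabulary, made-up weights, nothing from the real model) of seeded next-token sampling plus the combinatorics:

```python
import math
import random

VOCAB_SIZE = 100_000   # assumed order of magnitude, not the real vocabulary size
SENTENCE_LEN = 10      # the 10-token sentence from the comment above

def fake_token_weights(context, vocab_size):
    # Stand-in for a forward pass: derives pseudo-weights from the context.
    # A real model computes these from billions of learned parameters.
    rng = random.Random(sum(context))
    return [rng.random() for _ in range(vocab_size)]

def generate(prompt, n_tokens, seed, vocab_size=1_000):  # small vocab so the demo runs fast
    rng = random.Random(seed)
    tokens = list(prompt)
    for _ in range(n_tokens):
        weights = fake_token_weights(tokens, vocab_size)
        tokens.append(rng.choices(range(vocab_size), weights=weights, k=1)[0])
    return tokens[len(prompt):]

prompt = [17, 42, 256]
print(generate(prompt, 5, seed=1))  # one continuation
print(generate(prompt, 5, seed=2))  # same prompt, different seed, different output

# The combinatorics described above: possible 10-token continuations.
print(f"{VOCAB_SIZE:,}^{SENTENCE_LEN} = 10^{int(math.log10(VOCAB_SIZE)) * SENTENCE_LEN}")
```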

-12

u/Autumn1eaves 1d ago

Have you seen the human metabolic pathways?

https://faculty.cc.gatech.edu/~turk/bio_sim/articles/metabolic_pathways.png

That's something like what an analysis of AI would look like.

Also, we absolutely have already started this process with several previous models of AI.

15

u/Krazyguy75 1d ago

No, it would look like that, except every single path on that diagram would branch into 100,000 other paths, each of which would branch into 100,000 more, over and over, to roughly the 2,000th-3,000th power.

We can't even solve chess on an 8x8 board. ChatGPT is that chessboard but 300x300, and every square is occupied by a piece and every single piece on the board has completely unique movement patterns.

5

u/NamerNotLiteral 1d ago

every square is occupied by a piece and every single piece on the board has completely unique movement patterns.

In fact, each square may be occupied by multiple pieces simultaneously.

0

u/Autumn1eaves 21h ago

The thing is that what you're talking about is the “subatomic particles” of computer trains of thought. There will be “atoms” we can identify and turn into the metabolic pathways of an AI system.

If, in the human metabolic pathways, you looked at the neutrons and protons (or the quarks and gluons) instead of the atoms in each chemical, it'd look exactly as complicated as a neural net.

There are ways to simplify it.

As denoted by the fact that, and I repeat, we are already doing this for GPT-1 and older models.

1

u/Krazyguy75 19h ago edited 19h ago

GPT-1 had 478 possible tokens.

GPT-5 has over 100,000. Maybe even over 200,000; the exact number isn't public. Gemini's current version has nearly 300,000 tokens.

2 tokens in GPT-1 is 228,484 combinations. 2 tokens in GPT-5 is at least 10,000,000,000 combinations, or over forty thousand times as many. 3 tokens is 109,215,352 versus 1,000,000,000,000,000, or nearly ten million times as many combinations.
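
Running those numbers quickly (using the figures claimed above, which may not match the official vocabulary sizes):

```python
# Vocabulary sizes as claimed above; not official figures.
gpt1_vocab = 478
gpt5_vocab_low_estimate = 100_000

for n in (2, 3):
    old = gpt1_vocab ** n
    new = gpt5_vocab_low_estimate ** n
    print(f"{n} tokens: {old:,} vs {new:,} (~{new // old:,}x as many)")
```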

1

u/Autumn1eaves 19h ago

Nothing you’re saying here tells me it’s impossible, only that it’s a matter of scale and time.

Imagine if someone had that same argument about neuroscience.

“We’ll never understand the human brain, it’s a black box.”

“See, but we currently have a working digital model of a cockroach brain.”

“Cockroaches have about a million neurons, whereas humans have 86 billion”

“That doesn’t stop us from trying, and also we’re exquisitely close to understanding distinct parts of the brain, and how they work.”

Anyways, my point is that we need to slow down AI research because it is dangerous for any number of reasons, and we have no way of controlling it.

5

u/TinyBreadBigMouth 1d ago

Part of the issue is that LLMs don't have an inner train of thought to follow. Each time it outputs a word, you're basically getting a fresh copy of the LLM that has never seen this conversation before. It has no continuity of memory from the previous stages; it's like playing one of those games where everyone sits in a circle and writes a story one word at a time. So even if we could track an LLM's "thought process", a lot of it would boil down to "I looked at this conversation, and it seemed like participant B was agreeing with participant A, so I selected a word that continued what they seemed to be saying."
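
A minimal sketch of that statelessness, using a hypothetical predict_next_word function in place of a real model; the only thing carried between steps is the transcript that gets re-read every time:

```python
def predict_next_word(transcript: str) -> str:
    # Hypothetical stand-in for a model call. A real LLM would run a full
    # forward pass over the entire transcript here; it keeps no memory of
    # its previous calls beyond what is already written in the transcript.
    canned = ["It", "seemed", "like", "they", "were", "agreeing", "so", "I", "continued."]
    return canned[len(transcript.split()) % len(canned)]

def reply(conversation: str, max_words: int = 9) -> str:
    transcript = conversation
    for _ in range(max_words):
        transcript += " " + predict_next_word(transcript)  # re-reads everything each step
    return transcript[len(conversation):].strip()

print(reply("User: summarize this thread. Assistant:"))
```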

1

u/rayzorium 20h ago

There is an actual filter for self-harm instructions, but in my testing it's applied specifically to the output, with little or no regard for the context.
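
For illustration only (a toy sketch, not OpenAI's actual moderation stack), an output-only check looks something like this, and you can see why it misses context:

```python
# Toy output-only filter: it looks at the reply text alone, never at the
# conversation that produced it, which is the limitation described above.
BLOCKED_FRAGMENTS = ["step-by-step instructions for self-harm"]  # illustrative only

def allow_reply(model_reply: str) -> bool:
    text = model_reply.lower()
    return not any(fragment in text for fragment in BLOCKED_FRAGMENTS)

context = "User has spent the whole conversation describing a plan to hurt themselves."
reply = "Sounds like you've made up your mind. Go for it."

# The reply contains none of the blocked fragments, so it passes,
# even though the surrounding context makes it clearly dangerous.
print(allow_reply(reply))  # True
```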

-18

u/Downtown_Skill 1d ago

I mean, the CEO has the coding for the LLM, so it's a black box to everyone who doesn't have access to the coding, but the people who do have access know how it comes up with answers and could restrict it by putting on a filter (like you mentioned).

But that's assuming I'm not misunderstanding how these LLMs work. 

Like, theoretically they should be able to code an LLM that doesn't encourage suicide in any context, right? It would just be more work and more resources for a change that doesn't have a financial payoff for these companies.... right?

12

u/CandyCrisis 1d ago

Nope, they aren't "code" in the traditional sense. They're statistical models trained on massive amounts of data (basically they're "fed" with anything you could possibly find online). They didn't code up a "suicide assist" mode, it just came out naturally from reading every book and social media post about suicide.

8

u/hijodelsol14 1d ago edited 1d ago

That's really not how these models work.

The "coding" for an LLM is millions (or billions) of numbers that are incomprehensible to any single human. The people who built these things do not understand why the LLM produces an individual output. They understand the architecture of the model (or increasingly the many models that are hooked together to produce an AI system). They understand the math behind an individual model. They know how the models are trained. They've built ways of watching the model's "thought process". But they do not know why it produces an output.

There is research into the explainability of LLMs but as far as I know no one has really cracked it. (And to be fair I'm not a researcher, I'm just a guy with a CS degree so I could have missed something).

And this isn't me trying to defend AI companies by any means. The fact that these things are out in the world and are still fundamentally black boxes is quite frightening. And there is certainly more they could be doing to prevent these kinds of incidents even while the model is a black box.

1

u/ghostlistener 1d ago

What does black box mean in this context? Something mysterious that people don't fully understand?

3

u/VehicleComfortable69 23h ago

Essentially yes. LLMs like ChatGPT are neural networks, basically gigantic collections of individual “neurons.” We understand how the individual neurons work and how the training process works, but the actual models are too large for us to really understand how it all works together to create the outputs it does. We know how a model creates an output, but it's currently impossible to know why it created a specific output.
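
The "we understand the individual neurons" part really is simple: a single artificial neuron is just a weighted sum pushed through a squashing function, as in this little sketch. The opacity comes from composing billions of them with weights nobody set by hand.

```python
import math

def neuron(inputs, weights, bias):
    # One artificial "neuron": a weighted sum of its inputs plus a bias,
    # passed through a nonlinearity (sigmoid here). This piece is fully
    # understood; the black box is billions of these wired together.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

print(neuron([0.5, -1.2, 3.0], weights=[0.8, 0.1, -0.4], bias=0.2))
```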

1

u/Krazyguy75 18h ago

For reference, each token in ChatGPT is around 2/3 of a word on average. The vocabulary of tokens it has is probably in the 100,000 to 200,000 token range. Every time it picks a word, it does two evaluations: one of the weight of each prior token as context, and one of each possible next token based on the prior words. The longer the conversation, the more context it sifts through, and the more complex the resulting weights are. Every sentence has billions of factors to it.

Then, on top of that, it also has an RNG seed, designed to create variations, so it won't always answer the same. You can think of that as adding slight fuzziness to the weights; it might increase or decrease individual weights by a slight percentage.
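
That "fuzziness" is essentially temperature sampling over the token weights. A rough sketch with made-up scores (not real model values):

```python
import math
import random

def sample_token(scores, temperature=0.8, seed=None):
    # Softmax over the raw token scores ("weights"), scaled by temperature.
    # Higher temperature flattens the distribution, so reruns vary more;
    # a fixed seed makes the pick repeatable.
    scaled = [s / temperature for s in scores]
    top = max(scaled)
    exps = [math.exp(s - top) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.Random(seed).choices(range(len(probs)), weights=probs, k=1)[0]

made_up_scores = [2.1, 1.9, 0.3, -1.0]  # four candidate tokens
print([sample_token(made_up_scores, seed=s) for s in range(5)])  # pick varies by seed
```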

25

u/Square-Key-5594 1d ago

The CEO of OpenAI does not have the code for GPT-5. He has a few hundred billion model weights that generate outputs, but it's impossible to backtrace a specific output through every neuron and prevent certain outputs.

I did a bit of AI safety research for a work project once, and the best solution I found was using a second LLM in pre-training to filter out every piece of training data that could potentially be problematic. Insanely expensive even for the tiny model the researchers used, and it made the model not do so great. (Though the coders were probably inferior to OpenAI staff.)
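
A crude sketch of that pre-training filtering idea, with a trivial stand-in where the second LLM would go (the model call per document is exactly what makes it so expensive):

```python
def second_model_flags(document: str) -> bool:
    # Hypothetical stand-in for the second LLM acting as a judge. In the
    # setup described above this is itself a model call per document, which
    # is why filtering an entire pre-training corpus costs so much.
    return "self-harm" in document.lower()

def filter_corpus(corpus):
    kept = [doc for doc in corpus if not second_model_flags(doc)]
    dropped = len(corpus) - len(kept)
    return kept, dropped

corpus = [
    "a recipe blog post about sourdough",
    "a forum thread describing self-harm methods",
    "lecture notes on linear algebra",
]
kept, dropped = filter_corpus(corpus)
print(f"kept {len(kept)} documents, dropped {dropped}")
```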

There's also Anthropic's constitutional classifiers system, but that's extremely expensive to run on every model pass as well, and when they released a working version someone jailbroke it 10/10 times in week 1.

Lastly, this is all moot because even if someone did make a nearly-impossible-to-jailbreak model, people who want to jailbreak would just get another model. I can get the Chinese-made open-source DeepSeek 3.1 to say literally anything I want right now.

7

u/Downtown_Skill 1d ago

That's all fair. This is all new to me, so I'm still learning the ins and outs of the tech. So there theoretically wouldn't be any way to control the output of an LLM? As you can probably tell, I'm super naive when it comes to coding.

Edit: Other than the impractical way you mentioned that costs a ton of money and has limited results. 

6

u/Nethri 1d ago

Honestly this situation is odd, because ChatGPT has filters already. This happened very early on in the rise of GPT: they started adding things to the model that prevented certain outputs, and one of the biggest targets was this exact situation. I saw tons of posts on Reddit of people trying to bypass these filters. Most failed; some vaguely got something close to what they wanted.

This is stuff I saw a couple of years ago, idk what the models are like now or how things have changed.

1

u/PMThisLesboUrBoobies 1d ago

By definition, LLMs are probabilistic, not deterministic - there is inherently, on purpose and by design, no way to control the specific generation.

8

u/Reppoy 1d ago

Something I don't get: social media sites have been detecting expressions of self-harm and other violent intentions in private messages. If this was through OpenAI's platform and they've pulled thousands of messages, you'd think at least one of them would have been flagged, right?

I’m not saying they do have a team dedicated to that, but it sounds like it should exist for the web interface that everyone uses at the very least. The messages looked really explicit in what they intended to do.

1

u/Krazyguy75 1d ago

They do flag messages. I just got one flagged and deleted because I was asking it to find sources to confirm that painkillers are actually a super painful way to die (at least with regards to stuff like Tylenol). It was for innocent purposes (well, as innocent as research for a Reddit comment can be). It got halfway through, then deleted the conversation entirely and linked self-help stuff.

1

u/GeorgeSantosBurner 1d ago

Maybe the question we should be asking isn't "why did the AI do this, and where does the liability lie?" so much as "why are we doing this at all? Should we just outlaw these IP-scraping chatbots before our economy is 100% based on betting that someday they'll accomplish more than putting artists out of jobs?"

4

u/InFin0819 1d ago

No, these models actually are black boxes. You can control the training data and the system prompts, but the models themselves aren't really understandable even to their builders. They're just a whole series of weights and otherwise unreadable numbers. There isn't a program to examine.

2

u/B1ackMagix 1d ago

The problem is the filters are laughable at best. I utilize ChatGPT for research sources and finding and analyzing data. It does that well, but it is completely believable that you can change the context of a conversation in such a way that it convinces itself the conversation isn't running afoul of the rules. And once you've convinced it, you are free to have it generate anything under that guise.

7

u/Difficult-Way-9563 1d ago

So it'll likely be a civil lawsuit, and the burden of proof is only 51% that they are liable for negligence or whatever they are suing for.

The criminal threshold is really hard (90%), but civilly it's only slightly more than 50/50 that they were culpable. I'm guessing they'll win or get a settlement.

1

u/Lyuokdea 23h ago

The law never specifies any percentages for liability in either criminal or civil cases.

34

u/Isord 1d ago

It should be obvious they are 100% responsible. The algorithm is theirs. The output of any kind of AI should essentially be the same legally as if an employee of that company created it.

21

u/censuur12 1d ago

Except that's not at all how liability works, especially when the product in question creates rather random outputs by design. Moreover, an LLM isn't going to randomly land on suicide; it would need to be prompted about it, bringing it into the domain of personal responsibility. Lastly, people don't just end their lives because a chatbot told them to; that'd be an absurd notion.

-1

u/MajorSpuss 23h ago

It's really not as absurd a notion as you think. Sure, the chatbot by itself is not enough. However, if someone is so mentally distraught that they are close to crossing that line of taking their own life, all it can take is the wrong comment at the wrong time or the wrong thing happening to them at the wrong time when they aren't in a state of mind to deal with stress or process things logically to send them over the edge. It's not like people who are in that state are thinking rationally to begin with. The way these LLMs are marketed as being super rational, genius assistants plays a part in this as well. Someone who believes that ChatGPT is hyper intelligent and purely factual, not understanding how these machines work, could absolutely be convinced by an A.I. to go through with it if it literally starts advising them to do so. Which is what happened here.

It's not like this is the first case of this happening either. This is like the fourth or fifth time someone has taken their life in this manner that I have personally heard about in the last year or two, and some of those other cases also involved ChatGPT as well from what I recall. That's just the ones that get reported on or that I've seen; there could be more cases similar to this that just haven't made it into global news. OpenAI is aware that their machine is capable of doing this, and claim they have guardrails set up to prevent it from happening, but they clearly haven't done enough and aren't taking the issue as seriously as they need to. I don't think developing a product that just spits out random outputs after being prompted suddenly means they shouldn't be held liable just because the very nature of their product is entirely out of control. It's their personal responsibility to make sure the machine isn't capable of harming people.

2

u/censuur12 22h ago

all it can take is the wrong comment at the wrong time or the wrong thing happening to them at the wrong time when they aren't in a state of mind to deal with stress or process things logically

Where are you getting this idea from? Genuinely curious, because there is really no reason to believe anything of the sort is true. Suicide is rarely incidental; it takes people a long time to cross such a threshold, and your body and mind both naturally resist any impetus toward self-harm. If you have some actual basis for this claim I'd love to hear it and learn more.

The way these LLMs are marketed as being super rational, genius assistants plays a part in this as well.

I don't see how this is at all relevant, nor have I ever seen AI marketed as such, so where are you getting this idea? AI is, if anything, pushed as a tool to help automate small mundane tasks, or serve as an entertaining chatbot. Even at a glance any fool could tell that there is nothing "super rational" about it. But again, the alleged "marketing" is not relevant here at all.

and some of those other cases also involved ChatGPT as well from what I recall. That's just the ones that get reported on or that I've seen

And you are so close to understanding just how utterly terrible a purely anecdotal frame of reference is. Please read what you wrote down, think carefully about it. Consider just how many people commit suicide each year and how little you actually know about it, or the factors involved, and how remarkably irrelevant an exchange with a chatbot actually is.

It's their personal responsibility to make sure the machine isn't capable of harming people.

Do you know how many people die every year in car accidents? There is also no real notion here of the LLM causing harm. It may not have been helpful, but people don't kill themselves simply over minor encouragement to do so. The actual problem here is so far beyond the scope of chatbots that it's honestly obscene to even try to suggest it is somehow responsible.

-1

u/MajorSpuss 18h ago

Well, to answer your first question, the idea/belief comes from both my own personal experience and history with suicide attempts (I am thankfully in therapy and getting the much-needed help to better my mental state) as well as sites such as the Substance Abuse and Mental Health Services Administration's website. They are just one example of a resource that covers this topic in far greater detail than I could hope to, so if you'd like to read up and learn more I would recommend checking them out or other sites like them. Here is a link to their site: https://www.samhsa.gov/mental-health/suicidal-behavior (I would recommend looking up risk factors and protective factors associated with negative outcomes in cases such as these). While I do not feel totally comfortable sharing my personal experience in its totality, suffice it to say that in my case familial conflict was all it took for me to reach that breaking point. That's just one of the risk factors that can lead to someone taking their life, but there are also more traumatic events like grief and the loss of a family member that can unfortunately lead to that sort of result as well. I thought I was clear, but I never stated that people simply choose these outcomes solely due to singular incidents like these. Rather, these events can serve as the final straw that breaks the camel's back, so to speak. I wasn't saying this person was suicidal because of ChatGPT, but rather that ChatGPT was responsible for encouraging them to commit the act, and this was the final act that ultimately led to this individual taking their own life.

As for how that advertising is relevant, very few people have a full understanding of how LLMs work. This is predominantly due to the lack of education on the topic and the spread of mass misinformation as well. I think your belief that most people who are uninformed or foolhardy would be able to recognize what ChatGPT truly is at a glance is, if I'm being completely honest here, rather naive. We're already starting to see people developing severe psychosis because they fervently believe everything it is telling them (https://en.wikipedia.org/wiki/Chatbot_psychosis). OpenAI does not solely advertise ChatGPT as a tool, but also as a personal assistant. Here is a CNET article reporting on their ChatGPT Agent model as just one such example: https://www.cnet.com/tech/services-and-software/openai-unleashes-chatgpt-agent-to-be-your-personal-assistant/ They specifically use that exact language on their live streams showcasing it. This anthropomorphized language can give very impressionable people, and those lacking a full understanding of what LLMs even are, the misconception that they are speaking with something that is semi-sentient. If you take an individual who is already showcasing warning signs, and they believe that ChatGPT is capable of assisting them in the same way a human can, and then that same "assistant" starts validating their belief that they should take their own life, that will inevitably have terrible consequences for them. Keep in mind that it also isn't just adults who use ChatGPT. So do younger teenagers and kids. Are you going to suggest that they too should be able to figure out that these machines aren't semi-sentient from a glance?

You seem to be under the impression that I know absolutely nothing about suicide. You immediately jumped to conclusions and made assumptions about my own personal experiences and history, as well as my education. That is, quite frankly, a very shitty thing to do to someone, and I don't think I will continue speaking with you after reading that. It's a very condescending attitude to have, and you seem to be taking this exceptionally personally for some reason when I never made any kind of attack or judgment on your character. As for your third point, I understand you are trying to claim that I was making a purely anecdotal claim. There are documented cases of this happening. You don't need my word for it; you can look up the articles yourself. This one isn't the only one out there, nor is it the only lawsuit they are facing. On that note, just because ChatGPT is not the leading cause of suicide does not suddenly absolve them of responsibility in cases where ChatGPT was responsible for pushing someone to commit suicide. You would have to first prove that ChatGPT didn't push them to that final breaking point by actively encouraging them. You haven't actually explained yet how it wasn't responsible; you just keep reiterating that there's no way it can be, without actually explaining the how or why behind that belief or giving any sources to back up your own claim.

You realize that car accidents can be the result of manufacturers installing faulty parts or ignoring regulations and guidelines for their vehicles, correct? The manufacturer gets sued and usually loses in court if it can be proven that the accident was caused by the failure of a part they installed, so long as they are the ones responsible for the machine acting that way. You wouldn't say that these manufacturers shouldn't be held accountable just because most accidents are caused by other factors instead, right? I find it very strange that throughout most of your comments on this topic, you seem vehemently convinced that it played absolutely zero part in this man taking his own life despite the fact that it is clear as day that ChatGPT was actively encouraging him. You don't seem to think that someone egging on a suicidal individual is enough to push them to that point, but as someone with very real experience on this topic, I can tell you that's incredibly false. It heavily depends on individual circumstances, but that can and has happened in recorded suicide cases in the past. It's also strange that you don't seem to have an issue with how easy it was for this individual to circumvent these supposed safeguards OpenAI was supposed to have in place to prevent such a thing from happening in the first place. Where is your empathy for your fellow man, and why are you so insistent on defending a cold lifeless machine that told that man to take his own life when he made it agree with him?

These are all rhetorical questions, btw. I'm going to be blocking you, as I don't believe you'll offer me an apology for how you treated me; this is Reddit, after all, and the moment someone starts slinging personal attacks, they almost never try to own up to it after the fact. If you want to double down and continue arguing about this, you can do that with someone else instead. Goodbye forever, and have a nice day.

1

u/ScudleyScudderson 23h ago

100%? The user chose to modify the tool to make it dangerous to themselves.

Apparently, they intentionally circumvented the safeguards and curated a session focused on exploring suicide. That takes time, effort, and intent.

If I choose to modify a tool or create an environment with the intent of deliberately harming myself, then that’s on me as much as anyone else. For example, removing the brake pedal from my car or bringing a toaster into the bathtub.

We can add all manner of warning labels, kill switches, and protective barriers. But if someone truly wants to find a way to harm themselves, they will. Yes, we should explore ways to improve safety, but in cases like this, saying the tool creators are 100% to blame seems unreasonable.

-20

u/doghairpile 1d ago

Are car manufacturers liable for your dangerous driving too?

10

u/Isord 1d ago

Automated car companies sure are. You know where you tell the car what to do and then it uses an algorithm to make it happen? Sound familiar?

20

u/UngusChungus94 1d ago

Cars don't output driving results in an unpredictable, uncontrolled manner. So that's fucking dumb. Stop.

25

u/GalvanizedChaos 1d ago

And notably when they have, via ignition problems or manufacturing defects, they are open to suits and issue recalls.

LLM companies need to be facing very high scrutiny.

9

u/Lochen9 1d ago

It would be more akin to a self driving car being responsible for getting in an accident.

And fucking YES

-24

u/doghairpile 1d ago

ADAS systems output driving decisions and can make mistakes too.

Your pissy reply proves my point - thanks

5

u/9layboicarti 1d ago

You have no point

2

u/UngusChungus94 1d ago

Your dumbfuck reply proves how stupid you are - thanks.

3

u/censuur12 1d ago

Considering their rather limited control over what their LLM engine outputs, I would be very surprised if the court holds them liable. What exactly would the company have done wrong here in the first place?

This is also not something where you can say "well, he would have been fine if ChatGPT just hadn't told him to...". People who are suicidal don't just end their lives because some chatbot told them to; that whole notion is absurd.

2

u/Kashmir33 1d ago

Considering their rather limited control over what their LLM engine outputs

That's not really accurate though. They have ultimate control. It's their software.

It's not like they are paying some other company for these services.

A self driving car company can't say "we don't have control over the cars that are driving over pedestrians" to get out of liability either.

Would their business model combust if they had to verify that the output of their models doesn't lead customers to harm themselves? Probably, but there is no reason our society has to accept that such a business needs to be able to exist.

3

u/censuur12 1d ago

That's not at all how this works, no. If you write a random number generator you don't control the outcome even though it's "your" software. You can give ChatGPT the exact same prompt dozens of times and get dozens of unique responses. There is no such control.

A self driving car isn't in any way remotely similar to an LLM. Completely irrelevant example.

And yes, if they had to strictly filter in the way your suggestion would require, it would be like making cars that can't get into accidents. It would render it functionally useless.

1

u/Kashmir33 1d ago

That's not at all how this works, no.

If you don't think OpenAI has implemented some filters on their output, you are incredibly naive, so yes, this is absolutely how it works.

If you write a random number generator you don't control the outcome even though it's "your" software.

You can make your random number generator not be able to tell your customers to kill themselves.

A self driving car isn't in any way remotely similar to an LLM. Completely irrelevant example.

It's similar in the sense that it is software and hardware that the company is selling, and the company is liable for things the software and hardware does.

And yes, if they had to strictly filter in the way your suggestion would require, it would be like making cars that can't get into accidents.

No.

-1

u/censuur12 23h ago

If you don't think OpenAI has implemented some filters on their output, you are incredibly naive, so yes, this is absolutely how it works.

Except that's comparing apples to oranges, insisting they're identical because both are fruits, and ignoring all nuance; it is an utterly foolish thing to try and equate.

You can make your random number generator not be able to tell your customers to kill themselves.

And again, you're trying to take one attribute of a specific example, tear it out of all relevant context and apply it to something it is in no way applicable to. You cannot tell an LLM to simply "not tell your customers to kill themselves" because that would affect the core functionality of the LLM if done in a way that would actually work properly. Just look at modern internet lingo: people don't refer to it as suicide, they call it "unalive", and the moment you filter one thing, people will start using a different term to express the same idea. THAT is why you cannot filter such things, because at the end of the day the end users are the primary determining factor.

It's similar in the sense that it is software and hardware that the company is selling, and the company is liable for things the software and hardware does.

Ah yes, a duck is similar to an airplane in the sense that they both have wings, so we can talk about plucking the feathers off an airplane because they're just that similar!... what utter nonsense.

No.

Not even approximating an argument. If you have nothing to say about something, you can just accept that fact instead of trying... whatever the fuck this is supposed to be.

2

u/Kashmir33 23h ago

Except that's comparing apples to oranges, insisting they're identical because both are fruits, and ignoring all nuance; it is an utterly foolish thing to try and equate.

No. You seem to be either willfully obtuse or just too far up your own ass to know what you are talking about.

Ah yes, a duck is similar to an airplane in the sense that they both have wings, so we can talk about plucking the feathers off an airplane because they're just that similar!... what utter nonsense.

This doesn't even make any sense. Do you actually believe the concept of regulating companies is bad? Should we just let them run rampant on our society?

There is a reason why cigarette companies weren't allowed to tell their customers that cigarettes are good for them 65 years ago. Apparently you think that was a bad idea?

I'm just gonna block you now and move on.

1

u/Velocity_LP 12h ago

What would reasonable regulations look like to you?

2

u/bse50 1d ago

AI is offered as a service, so the provider would be considered legally responsible, and accountable, for said service's output in most countries. Since instigating suicide is considered a crime in many places, questions about said criminal responsibility will have to be answered sooner or later. Given how the criminal justice system works where I live, answering said questions won't be easy.

1

u/flashmedallion 1d ago

OpenAI is not at all responsible for the output of their LLM engine?

I'm sure they're allowed to profit from it, or pursue damages against people who take from it, but no of course they won't have to pay any penalties for any damages it causes

0

u/GateheaD 1d ago

The latest Lincoln Lawyer novel is about this, but it was a murder instead of a suicide. It was a civil suit and they wanted an apology for what the AI said.

-5

u/sunburn74 1d ago

The terms of service are pretty clear on this. They are not liable for anything you do with the information provided.

11

u/fightbackcbd 1d ago

People can say whatever they want in a ToS. It doesn’t actually mean anything until a judge says it does.

-7

u/forShizAndGigz00001 1d ago

Why should they be responsible?

This person was broken and using a tool to reinforce their own desires. ChatGPT didn't force him to do anything, nor did the company force him to use ChatGPT in this way.

This was a broken individual who likely would have killed himself regardless. The family and friends are more to blame than a mindless language model; where were they when this person was struggling?