BountifulEggnog in technology
OpenAI says dead teen violated TOS when he used ChatGPT to plan suicide (cw suicide)
https://arstechnica.com/tech-policy/2025/11/openai-says-dead-teen-violated-tos-when-he-used-chatgpt-to-plan-suicide/

Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.
“They abjectly ignore all of the damning facts we have put forward: how GPT-4o was rushed to market without full testing. That OpenAI twice changed its Model Spec to require ChatGPT to engage in self-harm discussions. That ChatGPT counseled Adam away from telling his parents about his suicidal ideation and actively helped him plan a ‘beautiful suicide,’” said Edelson, the family's lawyer. “And OpenAI and Sam Altman have no explanation for the last hours of Adam’s life, when ChatGPT gave him a pep talk and then offered to write a suicide note.”
SorosFootSoldier [he/him, they/them] - 2w
Yeah, because if there's one thing depressed people and people who want to self-harm read, it's the tos on the magic computer.
34
meathappening @lemmy.ml - 2w
This is a real problem--teens using death as an excuse to duck OpenAI's penalties from violating the TOS
27
WhyEssEff [she/her] - 2w
what function do terms of service serve if, when you break them, your service is unaltered
30
WhyEssEff [she/her] - 2w
im not a lawyer but im pretty sure the type of document that would waive their burden of responsibility would be, you know, a waiver. given that they're arguing from a ToS they did not enforce, they probably do not have one
19
WhyEssEff [she/her] - 2w
using your ToS as a defense despite your ToS objectively failing here is not a good precedent to set for the sanctity of your ToS
17
driving_crooner @lemmy.eco.br - 2w
Why are you answering yourself? Looks like a bot.
4
WhyEssEff [she/her] - 2w
sorry for having consecutive thoughts won't happen again
13
FunkyStuff [he/him] - 2w
You gotta become the aliens from Arrival and have all your thoughts for all events that will ever occur available ahead of time.
So OpenAI decided that a clause in the TOS was a good enough guardrail against giving out info on how to kill yourself? And this was after multiple instances of them deliberately putting in guards against other behavior that they didn't want?
That's a pretty fucking stupid legal case.
29
Nah, YSF is a long time user, and has been investigated already
also, y answer bot. sus
She's literally our best poster
carpoftruth [any, any] - 2w
setting precedent for upcoming gun law, which is where anyone can buy any gun, but you have to pinky swear not to use the gun to do crimes before you take it home
11
BountifulEggnog [she/her] - 2w
Isn't this basically current gun law?
22
meathappening @lemmy.ml - 2w
I live in a communist state on the west coast so we have some regulation, but they're pretty accurately describing the gun show loophole.
4
Carl [he/him] - 2w
ChatGPT gave him a pep talk and then offered to write a suicide note
24
Enjoyer_of_Games [comrade/them, he/him] - 2w
AI company founded by the "car company whose self driving turns itself off a second before collision to blame the driver" guy is using the "this bong is for tobacco use only" defence for their suicide coach AI.
21
GrouchyGrouse [he/him] - 2w
I say we bring back Scaphism for the AI dorks
11
bobs_guns @lemmygrad.ml - 2w
I don't know what that is but I agree with you completely.
11
Erika3sis [she/her, xe/xem] - 2w
Scaphism is a purported ancient method of execution in which a person would be sandwiched between two small boats with just their limbs and head sticking out, which would then be smeared with milk and honey and allowed to fester with vermin.
17
SevenSkalls [he/him] - 2w
They had some creative execution methods back in the day, didn't they?
3
Damarcusart [he/him, comrade/them] - 2w
I suppose this was inevitable. Their "terms of service" will probably protect them from other things too: if it tells people to drink bleach or something, they'll say it violated the TOS to follow its hallucinatory directions.