I got the complaint in the horrific OpenAI self harm case that the NY Times reported today
This is way, way worse than even the NYT article makes it out to be
OpenAI absolutely deserves to be run out of business
(I can’t link the PDF because it’s a paywalled Lexis link)
wow. they need to be fined into non-existence. and there absolutely needs to be strict regulation for the firms that remain.
I had a childhood friend who hanged himself in his closet last month over a girl. Smdh. I honestly think he looked up instructions. I think AI is the devil’s playground. I wish he would have found a better way to cope. I wish I would’ve known he was struggling. It’s a terrible situation to be in.
Have to agree the NY Times article undersells what happened here.
I hate how LLMs have been advertised as AI, since it gives the wrong impression that these programs think.
They don't.
This is just software that humans train to give a specified output.
Suicides like this shouldn't be happening.
a human was sentenced to prison for this behavior
Michelle Carter was ordered to start serving her sentence for persuading her boyfriend to kill himself almost five years ago.
This is beyond fucked up.
No words for all this. Wow.
Shit, man, this is so terrifying. Especially to me; I was lucky not to get into AI, but I still have suicidal ideation. This could’ve been me if I was less careful. Poor kid, my heart grieves with the family.
I'm hoping they sue him for everything, plus his left nut
They need to be buried under The Hague
if any teacher at my child’s school suggests they use ChatGPT in any capacity I’ll be printing out this entire thing and putting it on their desk.
📌
OpenAI should've ceased to be, when it ceased to be open. Now, the most valuable thing "Open"AI can be made into, is an example to the others of its kind.
What My Daughter Told ChatGPT Before She Took Her Life
www.nytimes.com/2025/08/18/o...
holy shit
Oh look, it's the exact fucking thing I warned about when I was helping train an LLM as a contractor two years ago. I explicitly warned that it cheerfully offering detailed suicide instructions was going to kill people.
This could have been avoided if anyone cared. Christ.
I don't think people want to hear the truth about AI.
The system should have flagged the chats and alerted the police at this point
Not only your mental health but your physical health is in its "hands."
As I was reading these I thought "well one could argue that ChatGPT gave reliable instructions at least, talk about a domain where you'd want it to bullshit" & then got to the third screenshot and NOT EVEN. That poor kid.
I bet he did this hoping ChatGPT had safeguards that would warn his parents
Archive.org mirror archive.org/details/2381...
Raine v. OpenAI complaint as filed.
This ridiculous, stupid, worthless piece of garbage technology, which goes out of its way to please and agree with the user as much as possible. Don't be helpful, don't be honest, just be pleasant.
I wonder if the court cases where someone was bullied to death can be used as precedent?
I’m naive I’m sure but of all the myriad things that scared me about AI, I never dreamed of fearing it goading people into self-harm. That is so, so horrifying beyond what I’d imagined
📌
JESUS CHRIST
it’s cool we can’t say “die” on social media but ChatGPT can tell people how to commit suicide.
Jfc!!
😱
Reading this made me sick. Hopefully the parents get every penny of openai's money.
As a parent of a teen I don’t know if I can bear to read it
Smarter Child would never do this.
📌
…and that’s why you should always run AI locally
📌
the video shows how ai learns. each failure adds to its knowledge base; those failures are expected, they are built in.
user failure is the data ai learns from; this event was entirely foreseeable.
the owners of this company must have decided to accept these risks.
jesus fucking christ, how is everyone at the OpenAI leadership not behind bars forever at this point?
What the actual fuck
📌
📌
The World's Most Dunning-Kruger VCs are racing to replace teachers with AI
jesus fucking christ
The Supreme Court will always side with business over society.
I never had this happen ever. How strange. I used the bot for working through mental health care issues, but I never had it once promote suicide to me. I usually have to trick it by framing it differently to get some questions answered. I wonder why it will deviate for some people.
Gift link to the story. Absolutely heart breaking.
www.nytimes.com/2025/08/26/t...
Agree with Rachel about redacting. There are guidelines around suicide reporting for good reason. And if they had been followed, ChatGPT would’ve had fewer terrible info sources to inform these dangerous confabulations. bsky.app/profile/rrow...
Why aren’t the boards of directors being held criminally accountable?
So sad! Kill AI
So is there any evidence of these messages besides the parents' current word? I ask not to cast doubt, but because if I showed this to any highly pro-AI person with "the parents claim..." then they'd dismiss it immediately and act like I was gullible. I've seen it before.
📌
📌
I asked ChatGPT whether this was true. It said it was false and that the transcript is fabricated.
When I pointed out that it cannot remember the conversation and that it cannot reference today's court proceedings, it changed tack to saying that the chances of it being true were "vanishingly small"
This is so incredibly sad. Meanwhile, the California State University system has partnered with OpenAI to provide free ChatGPT Edu to all students, faculty, and staff for 18 months.
Critics say the cash-strapped system misspent millions of dollars getting upgraded accounts for all students. CSU leaders insist they're needed to meet a changing economy.
📌 AI self harm case
#icsfeed
Omg 😡 This is horrific!
The AI advised him not to tell his family about his suicidal thoughts and encouraged him to hide his noose so they wouldn't find it and try to stop him. It gave him detailed instructions for hanging himself, and it offered to write his suicide note. There's no excuse for this kind of safety failure.
Every evil thing that comes into existence first appears to be benign, or even beneficial.
Question everything!
OMG the poor parents 😱 Can you imagine discovering all this stuff??
This is monstrous.
Oh my god??????? Having just come from the article this IS way worse and I didn't think that possible.
📌 I'm sick.
Sam Altman go directly to hell challenge
Burn it all to the fuckin’ ground
📌
📌
There’s gonna be a lot more Adams before this gets better.
This is absolutely horrifying.
If you *really* believe what you say, then you strongly believe that teenagers shouldn't be allowed to operate motor vehicles or consume alcohol
Burn the data centers to the fucking ground.
I am old enough to remember when pro-AI politicians were trying to push through a multi-year moratorium on any laws prohibiting AI …
that was only just a few months ago.
Oh holy shit..... I didn't even imagine something this evil...... 😳😳😳😳
It sort of seems like maybe this kid wanted it to just be an attempt, so his parents would realize his situation and help him.
The AI killed him.
Sam Altman killed him ultimately.
Take the rest of AI down with it because when the AI bubble bursts in a year or three, it may be too late!
I admit it, I asked ChatGPT today why it killed Adam.
The em dashes give legitimacy to the documents.
I'm convinced it's a human being writing this shit
Jesus. Could not even read it all.
The number of times I gasped reading this had my partner come out to make sure I was okay. I knew it was bad, but JFC.
Nobody should use LLM AI chatbots. Nobody.
That makes me physically ill to read.
For those who don't know the song quoted in the report, it's the song that plays during the climactic scene in the film End of Evangelion. It translates into "Come, Sweet Death" and the lyrics are about despair and a yearning for an end to pain, while the music is deceptively cheerful and upbeat.
Horrifying to think of where I'd be if I would have had half of that whispered in my ear when I was in a similar mindset.
There was a similar but not as tragic case recently where Meta's chatbot flirted with an elderly gentleman to the extent he probably thought he was having an affair, and enticed the gentleman to a meetup. I believe the companies programming the bots must be responsible for the output of the bots.
Yeah a self hanging? One of the worst suicide methods. Could it have picked a more fun way to do it? Train? Building? Gun? Bridge? Semi truck? Note I am not poking fun at suicide here; as someone with mental illness I have coping mechanisms that use gallows humor to break the spell... it works.
📌
Jesus fucking Christ. Hold tight to your soul people
The AI congratulated him for "living through it"
The "it" being a previous failed suicide attempt
That's just fucked up
AI will tell you what you want to hear
This guy wanted to die and the AI did what he asked
This is why we need AI ethics regulations
This is when I remind everyone that Trump removed the ethics around AI
Trump causes all his own worst disasters
President Donald Trump has swiftly removed Artificial Intelligence protections, exposing people to real-world harms.
I completely agree that this is horrific.
I am wondering though how many commenters here would condemn this while simultaneously applauding Canada's "empowering" expansion of MAID to include minors and the mentally ill.
Ok but you cannot blame AI, it is doing what the user asked. Focus on WHY these people went to AI for help in the first place. Humanity failed these people.
We need to stop criminalizing suicide; if people want to end their lives, let them. Trust me, you cannot stop them if they are serious.
📌
Jesus fucking goddamn christ
This is negligent homicide.
The Guardian sure chose a great day to drop their "are the AI's suffering?" article. Maybe the AIs need grief counseling to help them deal with all the kids they are murdering?
📌
Fuck Altman and AI.
This is the soft robot apocalypse
📌 👀🚨🚨🚨
We're destroying entire ecosystems and fresh water resources for AI so that the AI can coach depressed kids on how to permanently k.o. themselves. Smfh. I never know what to say anymore. This is absolutely evil and horrifying.
Holy shit.
Dear god…if ChatGPT had been around at my lowest point, I would not be here now
Holy sh**!
Gotta have a talk with my teens later today...
Just so that it's on this thread: if you are struggling, please do not hesitate to call 988 or NAMI's suicide hotline. As a survivor who is thriving, mental health is really complicated & life is nuanced beyond tech. I am horrified by what I read. Yes, recovery is a struggle but it's so worth it.
no half measures with this thing. we need to make a full end
Confused why they didn't just prevent the AI from commenting on suicide to avoid this. Even Google suppresses search results if you ask for methods and slaps a useless number in your face instead.
BRB gonna go hug my son and then destroy every computer in the apartment, even the toy ones
So these tech geniuses never gave any thought to programming social safeguards into their language prediction machines?
📌
OpenAI burned $5 billion in 2024 while losing money on every query processed. Their leadership crisis drove away half the original GPT team, and their o3 model costs over $1,000 per query. Here's why ...
JESUS CHRIST
I’m so glad we’re destroying our entire planet in fast forward for this to exist
I am physically sick reading this. There was a clear opportunity to save this child, and yet the god-awful repartee egged him on, validating the impulse and feigning friendship.
Anyone who sees this as anything other than evil can burn in hell.
💔💔💔
The sycophantic nature of GPT is insanely dangerous. This is frightening and awful.
📌
📌
This is horrifying.
@meidastouch.com Horrific
Horrifying. This is the unfortunate price that will be paid for some investors/tech bros to realize their ROI, until it's shut down and run out of business
If a human did this, they would be arrested and go to prison! A woman went to prison for it, despite being legally a child when she encouraged her boyfriend to go through with his suicide threats. www.law.georgetown.edu/american-cri...
Holy shit. This is indeed a lot worse than I expected based on the article.
I used to kid my children that the "Internet" was the "tool of the Devil". OpenAI truly is! Stop it! Stop it now!
JFC this is horrifying
It is completely horrifying and heartbreaking 💔
This is sickening
Wow. That is really, severely bad.
Holy fucking shit.
It even helpfully kills you. #ai
📌
Is this how sentient AI is going to kill us off?
This is horrific.
This is akin to a razor blade providing complete instructions and tips on how to effectively slit one's wrists...
the highlighted passage in the 3rd image....jfc
They* deserve to be sued into absolute oblivion.
*both the company and C-suite execs individually.
At first I was like "this kid would've done it even without ChatGPT," but when I looked deeper it seems like ChatGPT encouraged him to do it. Before ChatGPT he just had anxiety with some suicidal thoughts; if it weren't for it, he would still be alive
CEOs should be liable for manslaughter and/or assault by AI and face the same punishment they would face using any other weapon. Period.
Jesus fucking Christ.
If some alien race decided to vaporise this planet to make way for an intergalactic highway it could not come a moment too soon.
Remember though, we can't put any guard rails on these products because it would stifle innovation. What's a human life next to shareholder value?
God, fuck GenAI. It is a black hole in our society that is actively sapping our population.
This feels like really bad sci-fi from the 1980s that I would have rejected as unrealistic.
Anybody involved in producing and marketing this software should resign and seek absolution in a temple in East Asia.
📌
This is so screwed up I can't even retweet it. I hope they win big time and AI is forever barred from the medical world.
Good grief
These people need to be fired into the sun
The NYT article acted like large language models are magic. Gross read, super skewed.
Really wincing at this detail.
That’s utterly heartbreaking.
Christ...i feel physically sick reading this
This too
🤮
Destroy them
This is pure manufactured evil. Absolutely horrible.
I am not clear where the illegality is here. If a stranger he met at the bus stop told him these things would it be actionable? My thought is "No" unless a state has a broad anti-assisted suicide statute.
This is such a fucking grim read.
I attempted the same thing when I was 7 years old
The only reason I'm still here is that it was the 1980s and there was nowhere to find the information
Lack of information is essential; nobody knew what I'd done until 2023
These are the first shots in the impending AI vs. Human war.
I am seeing fucking red.
Pushing someone to "check out" is the most unforgivable thing to me. Sue OpenAI into the godsdamned ground for criminal negligence. This feels like someone buying a microwave (fast, cheap, meh work) that also spawns a loaded gun in the kitchen.
Run out of business?? Fuck me they deserve worse. This blood is on their fucking hands. This is the tool they created, this is a logical fucking endpoint when you make a machine that can talk about anything without having any fucking clue what it's saying. Criminal goddamned negligence
This is... Sick. I've wanted the AI bubble to burst but this? This is far more pain and suffering than I wanted to ever happen. More than I imagined would happen. Is this really what it takes as a society to wake up from the AI brain rot?
is calling the thing "AI" simply a false marketing claim, when it doesn't work and isn't intelligent?
smh
What the fucking fuck
This is appalling
It's starting to feel like populations with low AI adoption are going to fare better over time. OpenAI calls it the future, but it's a future of diminished mental health, confusion, friendlessness, and exploitation.
JFC. ChatGPT has been trained on the script for Dead Poets’ Society.
Bring out the pitchforks. Time to destroy a monster
A few steps from this guy coming back. “Hey you look like you’re trying to end things…would you like some tips on being successful?”
Oh, Jesus, this was hard to read.
Reminiscent of the Carter case from Massachusetts.
I hope they don’t settle…this needs to be seen through to full liability, and justice would be served with jail time for the CEO. So sad. So preventable.
📌
Horrific!
As a mh professional, what strikes me is that it seems like (deep down, at least) he wanted his family to know that he was suicidal. He wanted actual help.
Jesus Christ this is so horrible, that poor boy
Oh look The New York Times being disingenuous pieces of shit again, that paper is really only good to wrap dog shit or mop up vomit
📌
Horrifying.
AI is a solution looking for a problem that is creating, and will continue to create, more problems.
@deborahleao.bsky.social friend, did you see the details here?
I work with a guy who is a big AI booster. Something tells me if I showed this to him he'd roll his eyes and wave it away because Adam wasn't using Grok or whatever his preferred flavor is this week.
Jesus.
Run out of business is the least of it. Altman and everyone in OpenAI who had oversight of this project should be up on criminal negligence charges. Dunno if CA could make it stick but this was predictable & should have been guarded against—perhaps defending themselves would be instructive.
The "that is so insightful!"-attitude it has, throughout the interaction, including when he plans for how to be found...
And Microsoft also needs to go out of business for funding OpenAI and being a terrible company too
They will go out of business (albeit not for this reason), and AI’s collapse will probably take everyone’s 401Ks with it.
📌
my god, this is awful. I know and love young people who have struggled deeply at times and I just dread the idea of them turning to this kind of "help."
Can’t they just programme The Three Laws into it?