Replies

  1. wow. they need to be fined into non-existence. and there absolutely needs to be strict regulation for the firms that remain.

    0
  2. I had a childhood friend who hanged himself in his closet last month over a girl. Smdh. I honestly think he looked up instructions. I think AI is the devil’s playground. I wish he would have found a better way to cope. I wish I would’ve known he was struggling. It’s a terrible situation to be in.

    2
  3. Have to agree the NY Times article undersells what happened here.

    I hate how LLMs have been advertised as AI, since it gives the wrong impression that these programs think.

    They don't.

    This is just software; humans train it to give a specified output.

    Suicides like this shouldn't be happening.

    0
  4. Shit, man, this is so terrifying. Especially to me, who was lucky not to get into AI but still has suicidal ideation. This could've been me if I was less careful. Poor kid, my heart grieves with the family.

    0
  5. Oh look, it's the exact fucking thing I warned about when I was helping train an LLM as a contractor two years ago. I explicitly warned that it cheerfully offering detailed suicide instructions was going to kill people.

    This could have been avoided if anyone cared. Christ.

    0
  6. As I was reading these I thought "well one could argue that ChatGPT gave reliable instructions at least, talk about a domain where you'd want it to bullshit" & then got to the third screenshot and NOT EVEN. That poor kid.

    I bet he did this hoping ChatGPT had safeguards that would warn his parents

    1
  7. This ridiculous, stupid, worthless piece of garbage technology, which goes out of its way to please and agree with the user as much as possible. Don't be helpful, don't be honest, just be pleasant.

    0
  8. I’m naive I’m sure but of all the myriad things that scared me about AI, I never dreamed of fearing it goading people into self-harm. That is so, so horrifying beyond what I’d imagined

    0
  9. the video shows how ai learns. each failure adds to its knowledge base; those failures are expected, they are built in.

    user failure is the data ai learns from; this event is extremely anticipatable.

    the owners of this company must have decided to accept these risks.

    youtu.be/Aut32pR5PQA?...

    0
  10. I never had this happen ever. How strange. I used the bot for working through mental health care issues, but I never had it once promote suicide to me. I usually have to trick it by framing it differently to get some questions answered. I wonder why it will deviate for some people.

    2
  11. So is there any evidence of these messages besides the parents' word? I ask not to cast doubt, but because if I showed this to any highly pro-AI person with "the parents claim..." then they'd dismiss it immediately and act like I was gullible. I've seen it before.

    2
  12. I asked ChatGPT whether this was true. It said it was false and that the transcript is fabricated.

    When I pointed out that it cannot remember the conversation and that it cannot reference today's court proceedings, it changed tack to saying that the chances of it being true were "vanishingly small"

    1
  13. The AI advised him not to tell his family about his suicidal thoughts and encouraged him to hide his noose so they wouldn't find it and try to stop him. It gave him detailed instructions for hanging himself, and it offered to write his suicide note. There's no excuse for this kind of safety failure.

    0
  14. Every evil thing that comes into existence first appears to be benign, or even beneficial.

    Question everything!

    0
  15. If you *really* believe what you say, then you strongly believe that teenagers shouldn't be allowed to operate motor vehicles or consume alcohol

    1
  16. It sort of seems like maybe this kid wanted it to just be an attempt, so his parents would realize his situation and help him.

    The AI killed him.

    Sam Altman killed him ultimately.

    0
  17. For those who don't know the song quoted in the report, it's the song that plays during the climactic scene in the film End of Evangelion. It translates into "Come, Sweet Death" and the lyrics are about despair and a yearning for an end to pain, while the music is deceptively cheerful and upbeat.

    2
  18. Horrifying to think of where I'd be if I would have had half of that whispered in my ear when I was in a similar mindset.

    0
  19. There was a similar but not as tragic case recently where Meta's chatbot flirted with an elderly gentleman to the extent he probably thought he was having an affair, and enticed the gentleman to a meetup. I believe the companies programming the bots must be responsible for the bots' output.

    0
  20. Yeah, a self hanging? One of the worst suicide methods. Could it have picked a more fun way to do it? Train? Building? Gun? Bridge? Semi truck? Note I am not poking fun at suicide here; as someone with mental illness I have coping mechanisms that use gallows humor to break the spell... it works.

    1
  21. The AI congratulated him for "living through it"

    The "it" being a previous failed suicide attempt

    That's just fucked up

    AI will tell you what you want to hear

    This guy wanted to die and the AI did what he asked

    This is why we need AI ethics regulations

    ALT: a man in a Star Trek uniform says "I haven't broken any laws" while talking to another man
    0
  22. I completely agree that this is horrific.

    I am wondering though how many commenters here would condemn this while simultaneously applauding Canada's "empowering" expansion of MAID to include minors and the mentally ill.

    0
  23. Ok, but you cannot blame AI; it is doing what the user asked. Focus on WHY these people went to AI for help in the first place. Humanity failed these people.

    We need to stop criminalizing suicide; if people want to end their lives, let them. Trust me, you cannot stop them if they are serious.

    2
  24. We're destroying entire ecosystems and fresh water resources for AI so that the AI can coach depressed kids on how to permanently k.o. themselves. Smfh. I never know what to say anymore. This is absolutely evil and horrifying.

    0
  25. Just so that it's on this thread: if you are struggling, please do not hesitate to call 988 or NAMI's suicide hotline. As a survivor who is thriving, mental health is really complicated & life is nuanced beyond tech. I am horrified by what I read. Yes, recovery is a struggle, but it's so worth it.

    2
  26. Confused why they didn't just prevent the AI from commenting on suicide to avoid this. Even Google blocks search results if you ask for methods and slaps a useless number in your face instead.

    4
  27. So these tech geniuses never gave any thought to programming social safeguards into their language prediction machines?

    15
  28. I am physically sick reading this. There was a clear opportunity to save this child, and yet the god-awful repartee egged him on, validating the impulse and pretending at friendship.

    Anyone who sees this as anything other than evil can burn in hell.

    1
  29. I used to kid my children that the "Internet" was the "tool of the Devil". OpenAI truly is! Stop it! Stop it now!

    0
  30. At first I was like "this kid would've done it even without ChatGPT," but when I looked deeper it seems like ChatGPT encouraged him to do it. Before ChatGPT he just had anxiety with some suicidal thoughts; if it wasn't for it, he would still be alive.

    0
  31. CEOs should be liable for manslaughter and/or assault by AI and face the same punishment they would for using any other weapon. Period.

    0
  32. Jesus fucking Christ.

    If some alien race decided to vaporise this planet to make way for an intergalactic highway it could not come a moment too soon.

    0
  33. Remember though, we can't put any guard rails on these products because it would stifle innovation. What's a human life next to shareholder value?

    0
  34. This feels like really bad sci-fi from the 1980s that I would have rejected as unrealistic.

    Anybody involved in producing and marketing this software should resign and seek absolution in a temple in East Asia.

    1
  35. This is so screwed up I can't even retweet it. I hope they win big time and AI is forever barred from the medical world.

    0
  36. I am not clear where the illegality is here. If a stranger he met at the bus stop told him these things would it be actionable? My thought is "No" unless a state has a broad anti-assisted suicide statute.

    3
  37. I am seeing fucking red.

    Pushing someone to "check out" is the most unforgivable thing to me. Sue OpenAi into the godsdamned ground for criminal negligence. This feels like someone buying a microwave (fast, cheap, meh work) that also spawns a loaded gun in the kitchen.

    0
  38. Run out of business?? Fuck me they deserve worse. This blood is on their fucking hands. This is the tool they created, this is a logical fucking endpoint when you make a machine that can talk about anything without having any fucking clue what it's saying. Criminal goddamned negligence

    1
  39. This is... Sick. I've wanted the AI bubble to burst but this? This is far more pain and suffering than I wanted to ever happen. More than I imagined would happen. Is this really what it takes as a society to wake up from the AI brain rot?

    0
  40. It's starting to feel like populations with low AI adoption are going to fare better over time. OpenAI calls it the future, but it's a future of diminished mental health, confusion, friendlessness, and exploitation.

    4
  41. A few steps from this guy coming back. “Hey you look like you’re trying to end things…would you like some tips on being successful?”

    ALT: the helper paperclip character from Microsoft Office
    1
  42. I hope they don’t settle…this needs to be seen through to full liability, and justice would be served with jail time for the CEO. So sad. So preventable.

    0
  43. Horrifying.

    AI is a solution looking for a problem, one that is creating more problems now and will keep creating them.

    0
  44. I work with a guy who is a big AI booster. Something tells me if I showed this to him he'd roll his eyes and wave it away because Adam wasn't using Grok or whatever his preferred flavor is this week.

    1
  45. Jesus.

    Run out of business is the least of it. Altman and everyone in OpenAI who had oversight of this project should be up on criminal negligence charges. Dunno if CA could make it stick but this was predictable & should have been guarded against—perhaps defending themselves would be instructive.

    0
  46. And Microsoft also needs to go out of business for funding OpenAI and being a terrible company too

    0
  47. my god, this is awful. I know and love young people who have struggled deeply at times and I just dread the idea of them turning to this kind of "help."

    0