Regarding the tragic death of Adam Raine, there are already narratives forming that blame vague "dangers of AI" or dismiss the story as a broader moral panic.

But this is not a story about the faceless perils of superintelligence. It's about a $500 billion tech company's core software product encouraging child suicide.

It sounds horrific. It should. Let's be clear about the material circumstances of what's happening rather than handwaving about 'the dangers of AI'.

Replies

  1. I've been doing this for a while, and reading that lawsuit was genuinely jarring. It's hard for me to imagine that there are adults in the executive ranks of that company who can read the record in this case and still hold their heads up and pretend they aren't complete and total failures.

  2. As a fairly intense GPT user, I find the pictured dialogue terrifyingly plausible: it's exactly the kind of 'affirmation' of the user's thinking that GPT is trained to produce.

  3. It would be interesting to read about the zigzag path from the early principle of "have as little human-like appearance as possible" to silently allowing it to become a nearly human, equal participant in conversations. Sadly, that includes encouraging human expectations to use it in just that way :-( :-(

  4. I believe these models (and specifically ChatGPT) should be taken off the market. This study, published on Tuesday (the same day that Adam Raine's suicide was publicized), found that AI chatbots respond inadequately to users' expressions of moderate suicide risk.

    Evaluation of Alignment Between Large Language Models and Expert Clinicians in Suicide Risk Assessment | Psychiatric Services

    Objective: This study aimed to evaluate whether three popular chatbots powered by large language models (LLMs)—ChatGPT, Claude, and Gemini—provided direct responses to suicide-related queries and how ...

  5. That's how they undermine it. That's the point of painting it as a moral panic: it can then be dismissed as just moral panic later. It seems to be a tech industry strategy at this point. Of course, some people in the media and the general public propagate this unwittingly.

  6. Thanks for writing about this. We need to keep the focus on OpenAI. If this gets generalized to just "AI", nothing will change (and I say this already concerned that nothing will change anyway).

  7. And they got it from training data scraped from pro-suicide forums. This data was easily identifiable and could have been excluded with even a modicum of effort. That effort should have been a higher priority, because people who posted on those forums have been convicted of murder.

  8. That $500 billion valuation is literally just them saying they're worth that much because they believe they'll have billions of paid subscriptions in the next few years.

    It's just a giant, giant evil scam.
