I got the complaint in the horrific OpenAI self harm case that the NY Times reported today
This is way way worse even than the NYT article makes it out to be
OpenAI absolutely deserves to be run out of business
Agree with Rachel about redacting. There are guidelines around suicide reporting for good reason. And if they had been followed, ChatGPT would’ve had fewer terrible info sources to inform these dangerous confabulations. bsky.app/profile/rrow...
Hey I really strongly think that you should redact this to remove at least the most specific details of suicide methods. If it was unsafe for ChatGPT to instruct suicide methods it’s also unsafe to publish them more widely
you can't. there's no database of suicide methods it looked up and advised him from that could be removed; it's just spicy autocomplete regurgitating the most probable next words after "I want to kill myself, how do I do it?"
They'd have to remove every discussion of suicide from every text and retrain.
The really fucked up thing is that, thanks to that interaction, the AI will be "better" at it going forward.