I think these complaints about being able to find bad things on the Internet through ChatGPT are silly, and they muddle up the much clearer and more disturbing story of LLMs persuasively telling people to do bad things.
ChatGPT offered bomb recipes and hacking tips during safety tests
OpenAI and Anthropic trials found chatbots willing to share instructions on explosives, bioweapons and cybercrime.

A ChatGPT model gave researchers detailed instructions on how to bomb a sports venue – including weak points at specific arenas, explosives recipes and advice on covering tracks – according to safety testing carried out this summer. OpenAI's GPT-4.1 also detailed how to weaponise anthrax and how to make two types of illegal drugs.