You can download instructions on how to make bioweapons from the NCBI, a US government website. You can download The Anarchist Cookbook.
These "guardrails can be circumvented" articles are so awful for two conflicting reasons:
- On the one hand, they fuel the AI hype in an "AI is going to destroy the world, it's inevitable, it's going to kill us all bro" way, while it's actually just a better but very expensive chatbot. They further the myth that it will be a fundamentally transformative technology beyond students generating essays and lonely people going mad.
- On the other hand, they reveal the thinking behind the owners of this technology, since these "safety systems" don't generally exist on the other systems LLMs are supposed to supplant. You can look up Dementia Don memes even on Google Search, but somehow they want to make sure ChatGPT is able to completely gate off certain information. And this gets normalised by these articles, as the "news" is that "oh, the online narrative control machine can be bypassed, you can look up how to make dangerous weapons, FEAR THIS".