It's two guys in London and one guy in San Francisco. In London there's presumably no OpenAI office; in SF you can't be in two places at once, and Anthropic has more true believers/does more critihype.
Unrelated: a few minutes before writing this, a bona fide cultist with the handle "BussyGyatt @feddit.org" replied to the programming dev post. Truly the dumbest timeline.
Was jumpscared on my YouTube recommendations page by a video from AI safety peddler Rob Miles and decided to take a look.
It talked about how it's almost impossible to detect whether a model was deliberately trained to produce some "bad" output (like vulnerable code) for some specific set of inputs.
Pretty mild as cult stuff goes, mostly anthropomorphizing and referring to such an LLM as a "sleeper agent". But maybe some of y'all will find it interesting.
link