Born to create meaning
Forced to produce sentences
I think there's some ideology underpinning it, though it's definitely not clear-cut. It goes beyond "sheeple" or "bluepilled" because NPCs literally exist for the sake of the player character of the game. Like, part of why you can justify the violent chaos of a game of Saints Row 3 or whatever is that the only reason those "civilians" exist is to give the player the choice to fuck with them, and the primary way of interacting with anything in that game is shooting it or beating it down with comically large sex toys. I don't know how universal it is, but the Rationalist version of this simulation argument that I've seen relies on a conflation of power and agency, so the people who are obviously PCs are the privileged and the powerful. There is something uniquely distasteful about arguing that not only do rich white dudes know better than you, but that you literally exist only to serve a role in their stories.
I mean, I try not to go full conspiratorial everything-is-a-false-flag, but the fact that the biggest AI company that has been explicitly trying to create AGI isn't getting the business here is incredibly suspect. On the other hand, though, it feels like anything that publicly leans into the fears of evil computer God would be a self-own when they're in the middle of trying to completely ditch the "for the good of humanity, not just immediate profits" part of their organization.
I feel like he's also broadly misunderstood the actual armchair diagnosis from over here. Like, no acknowledgement of claims that he's stuck in a broken and obviously false framing of the entire world as "popular jocks" vs "put-upon nerds" and seems to map literally any conflict onto that model, with often horrifying results. Even though that was the main thrust of the actual "diagnosis" that he referenced.
Past a certain point I think anyone who doesn't agree with and support him has been fully excluded from the circle of people who can actually get through to him and help. Unfortunately, he's still a public figure, leaving the rest of us with little to do beyond sneering.
Green and blue are canonical because they tend to have strong contrast with the people and their clothes, so the chroma key isn't likely to pick up random bits of people's faces and outfits to cut out. If you want to go for the green cardboard option I would just make sure you get as consistent a color as possible and see about finding a cheap light and/or reflector to put behind you so that the board doesn't get obscured by your shadow. Definitely had that happen in a couple of student projects I did, and it was impossible to set the key aggressively enough to always catch the board (including the shadowed parts) without also picking up the lining of someone's jacket or something.
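To make the shadow trade-off concrete, here's a minimal sketch of a naive keyer, assuming a simple Euclidean RGB-distance match (real keyers usually work in YCbCr or HSV and feather the edges, but the trade-off is the same). The function name, threshold value, and key color are all illustrative, not from any particular tool:

```python
import numpy as np

def chroma_key_mask(image, key_color, threshold=80.0):
    """Return a boolean mask of pixels close enough to the key color.

    image: (H, W, 3) RGB array.
    key_color: the screen color to cut out, e.g. (0, 255, 0).
    threshold: max Euclidean RGB distance to count as "screen".

    A shadowed board keeps its hue but loses brightness, which pushes
    those pixels past the threshold. Raising the threshold to catch
    the shadow also starts matching green-ish clothing -- that's the
    "aggressive match" problem described above.
    """
    diff = image.astype(np.float64) - np.asarray(key_color, dtype=np.float64)
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return dist < threshold
```

With the default threshold, a bright green pixel like `(0, 250, 10)` is matched, but the same board in shadow at `(0, 120, 0)` is 135 units away from pure green and gets missed, which is why lighting the board evenly beats fiddling with the threshold.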
On a purely strategic level I think it's worth acknowledging that OpenAI has a very specific goal and counternarrative here. Scientology's incredibly broad attacks on even Australian randos that nobody cared about were a strong signal that they didn't actually have a goal beyond hurting people, but I think Altman and friends do. There are reasons why he's not targeting DAIR, for example. Or you (yet). They're going very specifically for the people interfering in their attempt to unravel their absurd corporate structure into something that investors are willing to keep pumping money into, and trying, ironically, to paint them as compromised by big money. They're not going for blanket defamation claims or anything so blatant.
Honestly I'm a little surprised that Zitron hasn't gotten flak, given his focus on the financials and how important this transition is for the continued existence of OpenAI as a business. But then as I think about it I guess they haven't been targeting journalists or commentators, just actual parties to the suit. If it starts going badly I wonder if they'll expand the legal threats.
I mean the AGI part is basically magic. If LLMs aren't actually the way to build it then it doesn't change the underlying belief any more than the failure of expert systems or Cybersyn or whatever else.
It's really only a matter of time before we get an assassination via bluetooth-enabled pacemaker or something, and it's going to be hell.
Not an EA franchise, but in retrospect it was probably a bad sign that Assassin's Creed started going deep on "maybe the Templar shadow government that rules everyone in secret is actually not that bad?" or "no, actually some of the ancient aliens who enslaved all humanity were good and it was just the one cartoon villain that was bad."
Like, I think there's some thematic depth to looking at how in order to combat the evil shadow cult government the Assassins "had to" become a "good" shadow cult government, but as the man once said you do not, in fact, have to give it to them.
It is a city that looked at Gritty with deep skepticism until they realized how much everyone else hated him, at which point he was elected mayor for life I think? Iconic.
Others were alarmed and advocated internally against scaling large language models. These were not AGI safety researchers, though, but critical AI researchers, like Dr. Timnit Gebru.
Here we see rationalists approaching dangerously close to self-awareness and recognizing their whole concept of "AI safety" as marketing copy.
Even if we had, the shooter may have just been looking for something heavy enough to brace against for the shot. You can't exactly carry a set of encyclopedias up to the top of the book depository or wherever without attracting attention.