200fifty
Well, you know, you don't want to miss out! You don't want to miss out, do you? Trust me, everyone else is doing this hot new thing, we promise. So you'd better start using it too, or else you might get left behind. What is it useful for? Well... it could make you more productive. So you better get on board now and, uh, figure out how it's useful. I won't tell you how, but trust me, it's really good. You really should be afraid that you might miss out! Quick, don't think about it so much! This is too urgent!
The winning votes will become investments in the post, binding the CONTENT_EXCRECATOR to CREATE_THE_CONTENT, and, based on some configurable metric (post score, ad revenue, etc.), the investment will accrue dividends
I'm in, but only if this part is handled by fractionalizing an NFT linking to the original post on your custom blockchain
Ars Technica comments consistently seem to have the worst takes on AI art I've ever seen; it's nuts
The industry is still learning how to even use the tech.
Just like blockchain, right? That killer app's coming any day now!
as someone who never really understood The Big Deal With SPAs (aside from, like, Google Docs or whatever), I'm at least taking solace in the fact that, like a decade later, people seem to be coming around to the idea that, wait, this actually kind of sucks
“We want to make sure that you see great content, that you’re posting great content, and that you’re interacting with the community,” he says.
I feel like using the phrase "great content" unironically is sort of a tell that someone has no idea what makes 'content' 'great' in the first place
Relatedly (and relevant to this article) I feel like the funniest part of the whole AI bubble has been executives repeatedly unwittingly revealing that they could be replaced by a simple computer program
🎶 everybody wants to rule the world 🎶
Yud’s brilliant response is that it makes no sense to describe this as trauma, because you don’t get traumatized by physics class, right?
Isn't this literally formally fallacious? "There exist non-traumatizing true things" doesn't imply "all true things are non-traumatizing."
Ordinarily I'm not one to harp on logical fallacies, but come on Yudkowsky, you're supposed to be Mr. Rational!
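(Pedantic aside, and this is my own formalization rather than anything Yudkowsky actually wrote out: the move is roughly

∃x (True(x) ∧ ¬Traumatizing(x))  ⟹  ∀x (True(x) → ¬Traumatizing(x))

which is invalid; any domain containing one harmless truth, like physics class, and one harmful truth is a counterexample.)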
The problem is I guess you'd need a significant corpus of human-written stuff in that language to make the LLM work in the first place, right?
Actually, this is something I've been thinking about more generally: the "AI makes programmers obsolete" take sort of implies everyone continues to use JavaScript and Python for everything forever and ever (and also that those languages never add any new idioms or features in the future, I guess).
Like, I guess now that we have AI, all programming-language progress is just supposed to be frozen at September 2021? Where are you gonna get the training data to keep the AI up to date with the latest language developments or libraries?