if there isn't, I'm calling it muskware, it's conceptually close enough to vaporware.
FredFig
The truth is that we feel shame to a much greater degree than the other side, which makes it pretty easy to divide us on these annoying trivialities.
My personal hatred of tone policing is greater than my sense of shame, but I imagine that isn't something to expect from most people.
I guess keeping in theme, "vibe replying"
Go off, Alsup.
I think it's a piece in the long line of "AI means A and B, and A is bad and B can be good, so not all AI is bad", which isn't untrue in the general sense, but it serves the interests of AI guys who aren't interested in using B, they're interested in promoting AI wholesale.
We're not in a world where we should be offering AI people any carveout; as you mention in the second half, they aren't interested in being good actors, they just want a world where AI is societally acceptable and they can become the Borg.
More directly addressing your piece, I don't think the specific examples you bring up are all that compelling. Or at least, not compared to the cost of building an AI model, especially when you bring up how it'll be cheaper than traditional alternatives.
At the risk of doing some "founder mode" idiot's homework for them: impermanence is inherent to a lot of art forms, and I can see some insane and vague pitches to use quantum to "capture the magic moment". Or maybe they tie it back into NFTs with quantum technology that comes up with every variant of the bored chimpanzee at once.
- The inability to objectively measure model usability outside of meme benchmarks, which made it so easy to hype up models, has come back to bite them now that they actually need to prove GPT-5 has the sauce.
- Sam got bullied by reddit into leaving up the old models for a while longer, so it's not like it's a big lift for them to keep them up. I guess part of it was to prove to investors that they have a sufficiently captive audience that they can push through a massive change like this, but if it gets immediately walked back like this, then I really don't know what the plan is.
- https://progress.openai.com/?prompt=5 Their marketing team made this comparing models responding to various prompts; afaict GPT-5 more frequently does markdown text formatting, and consumes noticeably more output tokens. Assuming these are desirable traits, this would point at how they want users to pay more. Aside: the page just proves to me that GPT was funniest in 2021 and it's been worse ever since.
We're at the point of 100xers giving themselves broken sleep schedules so they can spend tokens optimally.
Inevitably, Anthropic will increase their subscription costs or further restrict usage limits; it feels like they're giving compute away for free at this point. So when the investor bux start to run dry, I will be ready.
This has to be satire, but oh my god.
Half of these are people using GPT to write a rant about GPT and the other half are saying "skill issue", it's an entirely different world.
"affecting less than 5% of users based on current usage patterns."
This seems crazy high??? I don't use LLMs, but whenever SaaS usage is brought up, there's usually a giant long tail of casual users. If it's a 5% thing, then either Copilot has way more power users than I expect, or way fewer users total than I expect.
Looks like it's been downranked into hell for being too mean to the AI guys, which is weird when it's literally an AI guy promoting his AI-generated trash.
Hank Green has been one of my barometers for the moderate opinion, and he's sounding worryingly like Zitron in his latest video: https://www.youtube.com/watch?v=Q0TpWitfxPk
The attention black hole around nvidia and AI is so insane; I guess it's because everyone knows there's no next thing to jump onto.