BLOOMBERG BREAKING: Sam Altman promises that GPT-6 will generate Ghibli images with levels of piss yellow heretofore "unseen"
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
https://bsky.app/profile/robertdownen.bsky.social/post/3lwwntxygqc2w Thiel doing a neo-nazi thing. For people keeping score.
Saw this in an Anthropic presentation:
Ah yes, let's use AI to get rid of the drudgery and toil so humanity can do the most enjoyable activity of all: writing OKRs
By 2029, the AI will even be capable of completing our TPS reports.
Surely they have proof of the coding capabilities that have supposedly already increased, because 'increased capabilities' is quite something to claim. It isn't just productivity, it's capabilities. Can they put a line on the graph where capabilities reach the 'can solve the knapsack problem correctly and fast' point?
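(For anyone who wants a concrete yardstick for that last bit: below is a minimal sketch of the usual 0/1 knapsack variant with integer weights; the items and capacity are made-up example numbers, and "fast" here only means the pseudo-polynomial textbook dynamic program, since the problem itself is NP-hard.)

```python
# Textbook 0/1 knapsack: maximize total value subject to a weight capacity.
# Pseudo-polynomial DP, O(n * capacity) time and O(capacity) space.
def knapsack(items, capacity):
    # items: list of (weight, value) pairs, weights as non-negative ints
    best = [0] * (capacity + 1)  # best[c] = max value achievable with capacity c
    for weight, value in items:
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

# Made-up example: (weight, value) pairs and a capacity of 10.
print(knapsack([(3, 4), (4, 5), (7, 10), (2, 3)], 10))  # -> 14
```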
An Oxford economist in the NYT says that AI is going to kill cities if they don't prepare for change. (Original, paywalled)
I feel like this is at most half the picture. The analogy to new manufacturing technologies in the 70s is apt in some ways, and the threat of this specific kind of economic disruption hollowing out entire communities is very real. But at the same time, as orthodox economists so frequently do, his analysis only hints at some of the political factors in the relevant decisions, factors that are, if anything, more important than technological change alone.
In particular, he only makes passing reference to the Detroit and Pittsburgh industrial centers being "sprawling, unionized compounds" (emphasis added). In doing so he briefly highlights how the changes that technology enabled served to disempower labor. Smaller and more distributed factories can't unionize as effectively, and that fragmentation empowers firms to reduce the wages and benefits of the positions they offer even as they hire people in the new areas. For a unionized auto worker in Detroit, even if the old factories had been replaced with new and more efficient ones, the kind of job they had previously worked, the kind that allowed them to support themselves and their family at a certain quality of life, was still gone.
This fits into our AI skepticism rather neatly, because if the political dimension of disempowering labor is what matters, then it becomes largely irrelevant whether LLM-based "AI" products and services can actually perform as advertised. Rather than being the central cause of this disruption, the technology becomes the excuse, and so it just has to be good enough to sustain the narrative. It doesn't need to actually be able to write code like a junior developer in order to change the senior developer's job into editing and correcting code-shaped blocks of tokens checked in by the hallucination machine. This also means that it's not going to "snap back" when the AI bubble pops, because the impacts on labor will have already happened, any more than it was possible to bring back the same kinds of manufacturing jobs that built families in the postwar era once they had been displaced in the 70s and 80s.
After spending hundreds of hours per mod year trying to get a Wronger on the Right track, mod Habryka spends multiple hours writing a post explaining why the Wronger gets a 3-year ban
Bonus appearance of r/SneerClub right after the preamble!
There is a problem where well-endowed men will go to public places, drop trou, and do the helicopter dance.
This is called an indiscreet log-a-rhythm, and can be solved with quantum computers (or so I'm told).
imagine how fucking terrible it must be to be in this room
(and I won't lie: there's definitely a moment when Inglourious Basterds briefly flashed to mind)
TIL some rats have started a literal monastery to try to defeat the robot god with good ole religion (well, Zen Buddhism)
here's a mildly critical view that apparently still believes the approach has legs
https://www.lesswrong.com/posts/ENCNHyNEgvz9oo9rr/briefly-on-maple-and-the-broader-community
I note in passing that there seems to be a mild upsurge in religious-friendly posts on LW lately.
That opening has strong 'omg what the fuck happened there, and why are you still friends with these people if it is that bad' vibes.
Lots of ex Maplers I've talked to are variously angry, some (newly) traumatized, confused, etc., but the vibe has generally been "gosh it's fucking complicated"
Lots of people told me they were mad, but I got the vibe it was complicated. lol what...
The product of monasteries is saints, at least in small quantities.
Wrong, it is beer.
I haven't seen anti-safetyist arguments that actually address the technical claims made by Eliezer etc.
I agree with him there. But it's hard to make arguments against something which doesn't exist. ;)
it will take some very unusual kind of virtue and skill
Love how we went from 'you need to learn rationality, and be aware of your biases' to 'you need to have virtue'. And ignore the screaming in the background, that is just the academics who studied ethics doing their normal thing again.
Anyway, the rest gets pretty dark pretty quickly, and I just see red flags (for people who believe in data, this devolves very quickly into just going 'people are prob better off due to this, the new trauma doesn't count because of preexisting conditions, consent and trauma always happening'). And this was the article they wrote not wanting to harm the project.
Wait, one more remark:
Furthermore I think that probably with the exception of the one actual AI researcher there, people at Maple basically don't understand what AI is
Hahahaha, perfect.
It also reminds me of the interview with Metz where they got mad that Metz used religious terms (you know, last week).
well, Zen Buddhism
Yeah, this is the Valley after all. Some have used Buddhism as a building block for constructing “metarationality”.
I have a degree of appreciation for Chapman because he was willing to more-or-less call out Yuddite rationalism as a failure and start to gently guide people away from it. But I also came to the conclusion that his whole project has never fully escaped the self-aggrandizement/self-importance inherent to the rats. That ultimately leads to the performative humility and "radical acceptance" that make so many attempts at appropriating non-Western religions to US culture ring completely hollow.
Broadly, the whole TPOT/post-rationality/meta-rationality thing still stinks like a bunch of people who thought advanced degrees and/or advanced technical skills would earn them a lot more compensation and social status than they actually ended up with, and are still dead-set on getting all that by hook or by crook.
Found a solution to the Fermi paradox, and solved the problem of all the 'dark matter': any advanced society just puts a Dyson sphere around their galaxy, which is why we can't see or hear from them.
(Yes, this is a subsneer for the silly Altman remark. The whole solar system, not just the Sun (I do support walling off The Sun)).
Because of course, why have a data ~~center~~ when you can have an ecumenskatasphaira?
Not a sneer, but there's this YouTuber called the Elephant Graveyard (who I know nothing about apart from these vids) who did a three-part series on Joe Rogan, the downfall of comedy, and hyperreality, which is weirdly relevant, especially part 3 where suddenly there are some surprise visits.
Part 1: https://www.youtube.com/watch?v=7EuKibmlll4
In other news, bodhidave reported a case of Google AI and ChatGPT making identical citation fuck-ups:
Michael Hiltzik in LATimes: "Say farewell to the AI bubble, and get ready for the crash"
Fun quote:
The rest of [AI 2027], mapping a course to late 2027 when an AI agent “finally understands its own cognition,” is so loopily over the top that I wondered whether it wasn’t meant as a parody of excessive AI hype. I asked its creators if that was so, but haven’t received a reply.
And because it's the LA Times, there's a chatbot slop section at the bottom to provide false balance.