In case anybody skips the article, it's a six year old cybernetically force grown to the body of a horny 13 to 14 year old.
The rare sentence that makes me want to take a shower for having written it.
No shot is over two seconds, because AI video can’t keep it together longer than that. Animals and snowmen visibly warp their proportions even over that short time. The trucks’ wheels don’t actually move. You’ll see more wrong with the ad the more you look.
Not to mention the weird AI lighting that makes everything look fake and unnatural even in the ad's dreamlike context, and also that it's the most generic and uninspired shit imaginable.
"When asked about buggy AI [code], a common refrain is ‘it is not my code,’ meaning they feel less accountable because they didn’t write it.”
Strong "they cut all my deadlines in half and gave me an OpenAI API key, so fuck it" energy.
He stressed that this is not from want of care on the developer’s part but rather a lack of interest in “copy-editing code” on top of quality control processes being unprepared for the speed of AI adoption.
You don't say.
It hasn't worked 'well' for computers since like the Pentium, what are you talking about?
The premise was pretty dumb too: if you notice that a (very reductive) technological metric has been rising sort of exponentially, you should probably assume something along the lines of "we're probably still at the low-hanging-fruit stage of R&D, it'll stabilize as it matures," instead of proudly proclaiming that surely it'll approach infinity and break reality.
There's nothing smart or insightful about seeing a line on a graph trending upwards and assuming it's gonna keep doing that no matter what. Not to mention that type of decontextualized wishful thinking is emblematic of the TREACLES mindset mentioned in the community's blurb, which you should check out.
So yeah, he thought up the Singularity, which is little more than a metaphysical excuse to ignore regulations and negative externalities because, with the tech rupture around the corner, any catastrophic mess we make getting there won't matter. See also: the whole current AI debacle.
Before we accidentally make an AI capable of posing an existential risk to humanity, perhaps we should figure out how to build effective safety measures first.
You make his position sound way more measured and responsible than it is.
His 'effective safety measures' are something like A) solve ethics, B) hardcode the result into every AI; i.e. garbage philosophy meets garbage sci-fi.
So LLM-based AI is apparently such a dead end as far as non-spam and non-party-trick use cases are concerned that they are straight up rolling out anti-features nobody asked for or wanted, just to convince shareholders that groundbreaking stuff is still going on and to somewhat justify the ocean of money they are diverting that way.
At least it's only supposed to work on PCs that incorporate so-called neural processing units, which, if I understand correctly, is going to be its own thing under a Windows PC branding.
edit: Yud must love that instead of his very smart and very implementable idea of the government enforcing strict regulations on who gets to own GPUs and bombing non-compliants, we seem to be trending towards having special deep-learning-facilitating hardware integrated into every new device (or whatever NPUs actually are), starting with iPhones and so-called Windows PCs.
edit edit: the branding appears to be "Copilot+ PCs", not Windows PCs.
If I remember correctly, SBF taking the stand was completely against his lawyers' recommendations, and in general he seems to have a really hard time doing what people who know better tell him to: don't DM journalists about your crimes, definitely don't start a Substack detailing how you felt justified in committing them, and trying to 'explain yourself' to prosecution witnesses is witness tampering and will get your bail revoked.
Sticking numbers next to things and calling it a day is basically the whole idea behind Bayesian rationalism.
To be clear, it's because he played Edward Snowden in a movie. That's the conspiracy.
On one hand it's encouraging that the comments are mostly pushing back.
On the other hand, a lot of them do so on the basis of a disagreement over the moral calculus of how many chickens a first-trimester fetus should be worth, and whether that makes pushing for abortion bans inefficient compared to efforts to reduce the killing of farm animals for food.
Which, while pants-on-head bizarre in any other context, seems fairly normal by EA standards.
This reads very, uh, addled. I guess collapsing the wavefunction means agreeing on stuff? And the uncanny valley is when the vibes are off because people are at each other's throats? Is 'being aligned' like having attained spiritual enlightenment by way of Adderall?
Apparently the context is that he wanted the investment firms under FTX (Alameda and Modulo) to completely coordinate, despite their being run by different ex-girlfriends at the time (most normal EA workplace), which I guess paints Elis' comment about Chinese harem rules of dating in a new light.
edit: I think the 'being aligned' thing is them invoking the 'great minds think alike' adage as absolute truth, i.e. since we both have the High IQ feat you should be agreeing with me; after all, we share the same privileged access to absolute truth. That we aren't must mean you are unaligned/need to be further cleansed of thetans.
That's gotta sting a bit.