Architeuthis

joined 2 years ago
[–] Architeuthis@awful.systems 3 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

Fund copyright infringement lawsuits against the people they had been bankrolling the last few years? Sure, if the ROI is there, but I'm guessing they'll likely move on to the next trendy-sounding thing, like a quantum remote diddling stablecoin or whatevertheshit.

[–] Architeuthis@awful.systems 2 points 2 weeks ago (1 children)

I too love to reminisce over the time (like 3m ago) when the c-suite would think twice before okaying uploading whatever wherever, ostensibly on the promise that it would cut delivery time (up to) some notable percentage, but mostly because everyone else is also doing it.

Code isn't unmoated because it's mostly shit; it's because there are only so many ways to pound a nail into wood, and a big part of what makes a programming language good is that it won't let you stray too far without good reason.

You are way overselling coding agents.

[–] Architeuthis@awful.systems 4 points 2 weeks ago

Ah yes, the supreme technological miracle of automating the ctrl+c/ctrl+v parts when applying the LLM snippet into your codebase.

[–] Architeuthis@awful.systems 3 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

On the other hand they blatantly reskinned an entire existing game, and there's a whole breach of contract aspect there since apparently they were reusing their own code that they wrote while working for Bethesda, who I doubt would've cared as much if this were only about an LLM-snippet length of code.

[–] Architeuthis@awful.systems 2 points 2 weeks ago (8 children)

I'd say that's incredibly unlikely unless an LLM suddenly blurts out Tesla's entire self-driving codebase.

The code itself is probably among the least behind-a-moat things in software development, that's why so many big players are fine with open sourcing their stuff.

[–] Architeuthis@awful.systems 10 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Yet, under Aron Peterson’s LinkedIn posts about these video clips, you can find the usual comments about him being “a Luddite”, being “in denial” etc.

And then there's this:

transcript

From: Rupert Breheny Bio: Cobalt AI Founder | Google 16 yrs | International Keynote Speaker | Integration Consultant AI Comment: Nice work. I've been playing around myself. First impressions are excellent. These are crisp, coherent images that respect the style of the original source. Camera movements are measured, and the four candidate videos generated are generous. They are relatively fast to render but admittedly do burn through credits.

From: Aron Peterson (Author) Bio: My body is 25% photography, 25% film, 25% animation, 25% literature and 0% tolerating bs on the internet. Comment: Rupert Breheny are you a bot? These are not crisp images. In my review above I have highlighted these are terrible.

[–] Architeuthis@awful.systems 12 points 2 weeks ago* (last edited 2 weeks ago)

AI is the product, not the science.

Having said that:

  • Alignment research: pseudoscience
  • AGI timelines: pseudoscience
  • Prompt engineering: pseudoscience
  • Problem solving benchmarks: almost certainly pseudoscience
  • Hyperscaling: borderline, one could be generous and call it a failed experiment
  • Neural network training and design fundamentals: that's applied maths meets trial and error, no pseudo about it
  • I'm probably forgetting stuff

[–] Architeuthis@awful.systems 4 points 2 weeks ago* (last edited 2 weeks ago)

you know that there’s almost no chance you’re the real you and not a torture copy

If the basilisk's wager were framed like that, that you can't know whether you are already living in the torture sim with the basilisk silently judging you, it would be way more compelling than the actual "you are ontologically identical with any software that simulates you at a high enough level even way after the fact because [preposterous transhumanist motivated reasoning]".

[–] Architeuthis@awful.systems 6 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Scott A. comes off as such a disaster of a personality. Hope it's less obvious in his irl interactions.

[–] Architeuthis@awful.systems 8 points 2 weeks ago* (last edited 2 weeks ago)

I'd say if there's a weak part in your admittedly tongue-in-cheek theory it's requiring Roko to have had a broader scope plan instead of a really catchy brainfart, not the part about making the basilisk thing out to be smarter/nobler than it is.

Reframing the infohazard aspect as an empathy filter definitely has legs in terms of building a narrative.

[–] Architeuthis@awful.systems 12 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Not wanting the Basilisk eternal torture dungeon to happen isn't an empathy thing, they just think that a sufficiently high fidelity simulation of you would be literally you, because otherwise brain uploads aren't life extension. It's basically transhumanist cope.

Yud expands on it in some place or other, along the lines that the gap in consciousness between the biological and digital instance isn't that different from the gap created by anesthesia or a night's sleep, it's just on the space axis instead of the time axis, or something like that.

And since he also likes the many-worlds interpretation, it turns out you also share a soul with yourselves in parallel dimensions; this is why the zizians are so eager to throw down, since getting killed in one dimension just lets supradimensional entities know you mean business.

Early 21st century anthropology is going to be such a ridiculous field of study.
