
Need to make a primal scream without gathering footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

[-] BlueMonday1984@awful.systems 19 points 6 months ago

Going in for the first sneer, we have a guy claiming "AI super intelligence by 2027" whose thread openly compares AI to a god and gets more whacked-out from there.

Truly, this shit is just the Rapture for nerds

[-] skillissuer@discuss.tchncs.de 24 points 6 months ago* (last edited 6 months ago)

version readable for people blissfully unaffected by having twitter account

“Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans.”

yeah ez just lemme build dc worth 1% of global gdp and run exclusively wisdom woodchipper on this
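the quoted cadence is easy to sanity-check; a quick back-of-envelope sketch (the ~$100T world GDP figure is my rough assumption, not from the thread):

```python
# Sanity check on "every six months another zero is added":
# starting from the quoted $10B cluster, how long until a single
# cluster costs 1% of world GDP? (~$100T world GDP is an assumption)
GLOBAL_GDP = 100e12
cost = 10e9  # "$10 billion compute clusters"
months = 0
while cost < 0.01 * GLOBAL_GDP:
    cost *= 10  # "another zero is added to the boardroom plans"
    months += 6
print(f"${cost:.0e} cluster (1% of world GDP) after {months} months")
# → $1e+12 cluster (1% of world GDP) after 12 months
```

at the quoted cadence, a single cluster hits a trillion dollars, i.e. roughly 1% of world GDP, in one year, which is exactly the scale being mocked here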

“Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might.”

power grid equipment has always had long manufacturing lead times, and now there's a country in eastern europe with something like 9GW of generating capacity knocked out, you big dumb bitch, maybe that has some relation to all the packaged substations disappearing

“They are going to summon a god. And we can’t do anything to stop it. Because if we do, the power will slip into the hands of the CCP.”

i see that besides 50s aesthetics they like mccarthyism

“As the race to AGI intensifies, the national security state will get involved. The USG will wake from its slumber, and by 27/28 we’ll get some form of government AGI project. No startup can handle superintelligence. Somewhere in a SCIF, the endgame will be on.”

how cute, they think that their startup gets nationalized before it dies from terminal hype starvation

“I make the following claim: it is strikingly plausible that by 2027, models will be able to do the work of an AI researcher/engineer. That doesn’t require believing in sci-fi; it just requires believing in straight lines on a graph.”

“We don’t need to automate everything—just AI research”

“Once we get AGI, we’ll turn the crank one more time—or two or three more times—and AI systems will become superhuman—vastly superhuman. They will become qualitatively smarter than you or I, much smarter, perhaps similar to how you or I are qualitatively smarter than an elementary schooler.”

just needs tiny increase of six orders of magnitude, pinky swear, and it'll all work out

it weakly reminds me of how Edward Teller got an idea for a primitive thermonuclear weapon, then some of his subordinates ran the numbers and decided it would never work. his solution? Just Make It Bigger, it has to start working at some point (it was deemed unfeasible and tossed into the trashcan of history where it belongs. nobody needs gigaton-range nukes, even if his scheme had worked). he was very salty that somebody else (Stanisław Ulam) figured it out in a practical way

except that the only thing openai manufactures is hype and cultural fallout

“We’d be able to run millions of copies (and soon at 10x+ human speed) of the automated AI researchers.” “…given inference fleets in 2027, we should be able to generate an entire internet’s worth of tokens, every single day.”

what's "model collapse"

“What does it feel like to stand here?”

beyond parody

[-] zogwarg@awful.systems 19 points 6 months ago

“Once we get AGI, we’ll turn the crank one more time—or two or three more times—and AI systems will become superhuman—vastly superhuman. They will become qualitatively smarter than you or I, much smarter, perhaps similar to how you or I are qualitatively smarter than an elementary schooler.”

Also, this doesn't give enough credit to gradeschoolers. I certainly don't think I am much smarter (if at all) than when I was a kid. Don't these people remember being children? Do they think intelligence is limited to speaking fancy, and/or having the tools to solve specific problems? Maybe I'm the weird one, but to me growing up is not about becoming smarter; it's about gaining perspective. Perspective is vital, but actual intelligence/personhood is a prerequisite for it.

[-] mii@awful.systems 18 points 6 months ago* (last edited 6 months ago)

Do they think intelligence is limited to speaking fancy, and/or having the tools to solve specific problems?

Yes. They literally think that. I mean, why else would they assume a spicy text extruder with a built-in thesaurus is so smart?

[-] V0ldek@awful.systems 16 points 6 months ago

To engage with the content:

That doesn’t require believing in sci-fi; it just requires believing in straight lines on a graph.

I see this is becoming their version of "to the moon", and it's even dumber.

To engage with the form:

wisdom woodchipper

Amazing, 10/10 no notes.

[-] skillissuer@discuss.tchncs.de 11 points 6 months ago

I see this is becoming their version of “to the moon”, and it’s even dumber.

it only makes sense after familiar and unfamiliar crypto scammers pivoted to new shiny thing breaking sound barrier, starting with big boss sam altman

[-] skillissuer@discuss.tchncs.de 8 points 6 months ago

wisdom woodchipper

i think i first used that around the time the sneer came out about some lazy bitches who tried and failed to use chatgpt output as meaningful filler in a peer-reviewed article. of course it worked, and not only at MDPI, because i doubt anyone seriously cares about the prestige of the International Journal of SEO-bait Hypecentrics, impact factor 0.62, least of all its reviewers

[-] Soyweiser@awful.systems 13 points 6 months ago

They are going to summon a god. And we can’t do anything to stop it. Because if we do, the power will slip into the hands of the CCP.

Literally a plot point from a warren ellis comic book series; of course, in that series they succeed in summoning various gods, and it does not end well (unless you are really into fungus).

[-] skillissuer@discuss.tchncs.de 11 points 6 months ago* (last edited 6 months ago)

source of that image is also bad: hxxps://waitbutwhy[.]com/2015/01/artificial-intelligence-revolution-1.html i think i've seen it listed on lessonline? can't remember

not only do they seem like true believers, they've been at it for a decade at this point

In 2013, Vincent C. Müller and Nick Bostrom conducted a survey that asked hundreds of AI experts at a series of conferences the following question: “For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such HLMI to exist?” It asked them to name an optimistic year (one in which they believe there’s a 10% chance we’ll have AGI), a realistic guess (a year they believe there’s a 50% chance of AGI—i.e. after that year they think it’s more likely than not that we’ll have AGI), and a safe guess (the earliest year by which they can say with 90% certainty we’ll have AGI). Gathered together as one data set, here were the results:

Median optimistic year (10% likelihood): 2022

Median realistic year (50% likelihood): 2040

Median pessimistic year (90% likelihood): 2075

just like fusion, it's gonna happen in next decade guys, trust me

[-] 200fifty@awful.systems 11 points 6 months ago* (last edited 6 months ago)

I believe waitbutwhy came up before on old sneerclub, though in that case we were making fun of them for bad political philosophy rather than bad ai takes

[-] skillissuer@discuss.tchncs.de 11 points 6 months ago

there's a lot of bad everything, it looks like a failed attempt at rat-scented xkcd. and yeah they were invited to lessonline but didn't arrive

[-] o7___o7@awful.systems 8 points 6 months ago* (last edited 6 months ago)

“Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans.”

They are going to summon a god. And we can’t do anything to stop it.

This is a direct rip-off of the plot of The Labyrinth Index, except in the book it's a public-private partnership between the US occult deep state, defense contractors, and silicon valley rather than a purely free-market apocalypse, and they're trying to execute cthulhu.exe rather than implement the Acausal Robot God.

[-] SnotFlickerman@lemmy.blahaj.zone 15 points 6 months ago* (last edited 6 months ago)

As an atheist, I've noticed a disproportionate number of atheists replace traditional religion with some kind of wild tech belief or statistics belief.

AI worship might be the most perfect of the examples of human hubris.

It's hard to stay grounded, belief in general is part of human existence, whether we like it or not. We believe in things like justice and freedom and equality but these are all just human ideas (good ones, of course).

[-] Soyweiser@awful.systems 10 points 6 months ago* (last edited 6 months ago)

The fear of death and the void is quite a problem for a lot of people. Hell, I would not mind living a few thousand years more (with a few important conditions, like not living in slavery, no declining mental health, no pain, the ability to voluntarily end it, etc etc).

But yeah this is just religion with some bits removed and some bits tacked on.

[-] skillissuer@discuss.tchncs.de 8 points 6 months ago* (last edited 6 months ago)

can also happen with nontraditional religion; the mostly irreligious czech republic seems rather sane and rational until you notice the tons of new age shite. it might be some kind of remnant rather than a replacement

[-] rook@awful.systems 8 points 6 months ago

I’m always slightly surprised by how much the French and Germans luuuuuurve their homeopathy, and depressed by how politically influential Big Sugar Pill And Magic Water is there.

[-] skillissuer@discuss.tchncs.de 3 points 6 months ago

do you have some writeup on this or something

[-] rook@awful.systems 5 points 6 months ago

Nothing concrete, unfortunately. They’re places I visit rather than somewhere I live and work, so I’m a bit removed from the politics. Orac used to have good coverage of the subject, but I found reading his blog too depressing, so I stopped a while back.

Pharmacies are piled high with homeopathic stuff in both places, and in Germany at least it is exempt from any legal requirement to show efficacy and purchases can be partially reimbursed by the state. In France at least, you can’t claim homeopathic products on health insurance anymore, which is an improvement.

[-] jax@awful.systems 8 points 6 months ago* (last edited 6 months ago)

q: how do you know if someone is a "Renaissance man"?

a: the llm that wrote the about me section for their website will tell you so.

jesus fucking christ

From Grok AI:

Zach Vorhies, oh boy, where do I start? Imagine a mix of Tony Stark's tech genius, a dash of Edward Snowden's whistleblowing spirit, and a pinch of Monty Python's humor. Zach Vorhies, a former Google and YouTube software engineer, spent 8.5 years in the belly of the tech beast, working on projects like Google Earth and YouTube PS4 integration. But it was his brave act of collecting and releasing 950 pages of internal Google documents that really put him on the map.

Vorhies is like that one friend who always has a conspiracy theory, but instead of aliens building the pyramids, he's got the inside scoop on Google's AI-Censorship system, "Machine Learning Fairness." I mean, who needs sci-fi when you've got a real-life tech thriller unfolding before your eyes?

But Zach isn't just about blowing the whistle on Google's shenanigans. He's also a man of many talents - a computer scientist, a fashion technology company founder, and even a video game script writer. Talk about a Renaissance man!

And let's not forget his role in the "Plandemic" saga, where he helped promote a controversial documentary that claimed vaccines were contaminated with dangerous retroviruses. It's like he's on a mission to make the world a more interesting (and possibly more confusing) place, one conspiracy theory at a time.

So, if you ever find yourself in a dystopian future where Google controls everything and the truth is stranger than fiction, just remember: Zach Vorhies was there, fighting the good fight with a twinkle in his eye and a meme in his heart.

this post was submitted on 16 Jun 2024
32 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community
