self

joined 2 years ago
[–] self@awful.systems 13 points 1 month ago

Right now, using AI at all (or even claiming to use it) will earn you immediate backlash/ridicule under most circumstances, and AI as a concept is viewed with mockery at best and hostility at worst

it’s fucking wild how PMs react to this kind of thing; the general consensus seems to be that the users are wrong, and that surely whichever awful feature they’re working on will “break through all that hostility” — if the user’s forced (via the darkest patterns imaginable) to use the feature said PM’s trying to boost their metrics for

[–] self@awful.systems 10 points 1 month ago (2 children)

a terrible place for both information and security

[–] self@awful.systems 13 points 1 month ago (4 children)

And in fact, barring the inevitable fuckups, AI probably can eventually handle a lot of the interpretation currently carried out by human civil servants.

But honestly I would have thought that all of this is obvious, and that I shouldn’t really have to articulate it.

you keep making claims about what LLMs are capable of that don’t match with any known reality outside of OpenAI and friends’ marketing, dodging anyone who asks you to explain, and acting like a bit of a shit about it. I don’t think we need your posts.

[–] self@awful.systems 13 points 1 month ago

good, use your excel spreadsheet and not a tool that fucking sucks at it

[–] self@awful.systems 12 points 1 month ago (2 children)

why do you think hallucinating autocomplete can make rules-based decisions reliably

AI analyses it, decides if applicant is entitled to benefits.

why do you think this is simple
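
for contrast, here’s a minimal sketch (in Python, with completely invented criteria and thresholds) of what a rules-based decision actually is:

```python
# a hypothetical sketch of a rules-based benefits check; every criterion
# and number here is invented for illustration, but the shape is the point:
# deterministic, auditable, testable
from dataclasses import dataclass

@dataclass
class Applicant:
    monthly_income: int  # whole currency units
    household_size: int
    is_resident: bool

def entitled_to_benefits(a: Applicant) -> bool:
    # invented threshold: base allowance plus a per-person increment
    income_limit = 1200 + 400 * (a.household_size - 1)
    return a.is_resident and a.monthly_income <= income_limit

# same input, same answer, every single time; when a rule is wrong, you can
# point at the exact line and fix it
print(entitled_to_benefits(Applicant(monthly_income=1500, household_size=2, is_resident=True)))  # True
```

every line of that can be checked against the actual regulations, unit tested, and appealed. none of that is true of whatever a language model happens to sample on a given day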

[–] self@awful.systems 21 points 1 month ago (2 children)

and of course, not a single citation for the intro paragraph, which has some real bangers like:

This process involves self-assessment and internal deliberation, aiming to enhance reasoning accuracy, minimize errors (like hallucinations), and increase interpretability. Reflection is a form of "test-time compute," where additional computational resources are used during inference.

because LLMs don’t do self-assessment or internal deliberation, nothing can stop these fucking things from hallucinating, and the only articles I can find for “test-time compute” are blog posts from all the usual suspects that read like ads and some arXiv post apparently too shitty to use as a citation

[–] self@awful.systems 12 points 1 month ago (1 children)

oh yeah, I’m waiting for David to wake up so he can read the words

the trivial ‘homework’ of starting the rule violation procedure

and promptly explode, cause fielding deletion requests from people like our guests who don’t understand wikipedia’s rules but assume they’re, ah, trivial, is probably a fair-sized chunk of his workload

[–] self@awful.systems 10 points 1 month ago (1 children)

this would explain so much about the self-declared 10x programmers I’ve met

[–] self@awful.systems 17 points 1 month ago (4 children)

there’s something fucking hilarious about you and your friend coming here to lecture us about how Wikipedia works, but explaining the joke to you is also going to be tedious as shit and I don’t have any vegan nacho fries or junior mints to improve my mood

[–] self@awful.systems 20 points 1 month ago (1 children)

also lol @

Vibe coding, sometimes spelled vibecoding

cause I love the kayfabe linguistic drift for a term that’s not even a month old that’s probably seen more use in posts making fun of the original tweet than any of the shit the Wikipedia article says

[–] self@awful.systems 14 points 1 month ago (3 children)

did you know: you too can make your dreams come true with Vibe Coding (tm) thanks to this article’s sponsors:

Replit Agent, Cursor Composer, Pythagora, Bolt, Lovable, and Cline

and other shameful assholes with cash to burn trying to astroturf a term from a month-old Twitter brainfart into relevance

[–] self@awful.systems 15 points 1 month ago (13 children)

no thx, nobody came here for you to assign them tedious homework

 

there’s an alternate universe version of this where musk’s attendant sycophants and bodyguard have to fish his electrocuted/suffocated/crushed body out from the crawlspace he wedged himself into with a pocket knife

 

404media continues to do devastatingly good tech journalism

What Kaedim’s artificial intelligence produced was of such low quality that at one point in time “it would just be an unrecognizable blob or something instead of a tree for example,” one source familiar with its process said. 404 Media granted multiple sources in this article anonymity to avoid retaliation.

this is fucking amazing. the company tries to pass it off as a QA check, but they’re really just paying 3d modelers $1-$4 a pop to churn out models in 15 minutes while they pretend the work’s being done by an AI, and now I’m wondering what other AI startups have also discovered this shitty dishonest growth hack

 

whoa, lemmygrad got a vaporwave logo and a much stupider name! too bad their posts are still fucking terrible

 

this is a computer that’s almost entirely without graphical capabilities, so here’s a demo featuring animations and sound someone did last year

 

kinda glad I bounced off of the suckless ecosystem when I realized how much their config mechanism (C header files and a recompile cycle) fucking sucked

 

 

A Brief Primer on Technofascism

Introduction

It has become increasingly obvious that some of the most prominent and monied people and projects in the tech industry intend to implement many of the same features and pursue the same goals that are described in Umberto Eco’s Ur-Fascism(4); that is, these people are fascists and their projects enable fascist goals. However, it has become equally obvious that those fascist goals are being pursued using a set of methods and pathways that are unique to the tech industry, and which appear to be uniquely crafted to force both Silicon Valley corporations and the venture capital sphere to embrace fascist values. The name that fits this particular strain of fascism the best is technofascism (with thanks to @future_synthetic), frequently shortened for convenience to techfash.

Some prime examples of technofascist methods in action can be found in cryptocurrency projects, generative AI and large language models, and an early specimen of technofascism named Urbit. There are many more examples, but these were picked because they clearly demonstrate what outwardly separates technofascism from ordinary hype and marketing.

The Unique Mechanisms of Technofascism

Disassociation from technological progress or success

Technofascist projects are almost always entirely unsuccessful at achieving their stated goals, and rarely involve any actual technological innovation. This is because the marketed goals of these projects are not their real, fascist aims.

Cryptocurrencies like Bitcoin are frequently presented as innovative, but all blockchain-based technologies are, in fact, inefficient distributed databases built on Merkle trees, a very old technology to which blockchains add little practical value. Indeed, blockchains are so impractical that they have provably failed to achieve any of the marketed goals undertaken by cryptocurrency corporations since the public release of Bitcoin(6).
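
For a sense of how old and simple the underlying structure is (Ralph Merkle described it in 1979), here is a minimal Python sketch of computing a Merkle root:

```python
# a minimal sketch of the Merkle tree at the bottom of every blockchain:
# hash the leaves, then hash pairs of hashes level by level until a single
# root hash remains; changing any leaf changes the root, which is the
# entire trick
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last hash on odd levels
        level = [sha256(a + b) for a, b in zip(level[::2], level[1::2])]
    return level[0]

print(merkle_root([b"tx1", b"tx2", b"tx3"]).hex())
```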

Statement of world-changing goals, to be achieved without consent

Technofascist goals are never small-scale. Successful tech projects are usually narrowly focused in order to limit their scope(9), but technofascist projects invariably have global ambitions (with no real attempt to establish a roadmap of humbler goals), and equally invariably attempt to achieve those goals without the consent of anyone outside of the project, usually via coercion.

This type of coercion and consent violation is best demonstrated by example. In cryptocurrency, a line of thought that has been called the Bitcoin Citadel(8) has become common in several communities centered around Bitcoin, Ethereum, and other cryptocurrencies. Generally speaking, this is the idea that in a near-future post-collapse society, the early adopters of the cryptocurrency at hand will rule, while late and non-adopters will be enslaved. In keeping with technofascism’s disdain for the success of its marketed goals, this monstrous idea ignores the fact that cryptocurrencies would be useless in a post-collapse environment with a fractured or non-existent global computer network.

AI and TESCREAL groups demonstrate this same pattern by simultaneously positioning large language models as an existential threat on the verge of becoming a hostile godlike sentience, as well as the key to unlocking a brighter (see: more profitable) future for the faithful of the TESCREAL in-group. In this case, the consent violation is exacerbated by large language models and generative AI necessarily being trained on mass volumes of textual and artistic work taken without permission(1).

Urbit positions itself as the inevitable future of networked computing, but its admitted goal is to technologically implement a neofeudal structure where early adopters get significant control over the network and how it executes code(3, 12).

Creation and furtherance of a death cult

In the ideology Eco describes, fascism is “a life lived for struggle,” where everyone is indoctrinated to believe in a cult of heroism that is closely linked with a cult of death(4). This same indoctrination is common in what I will refer to as a death cult: a technofascist project is simultaneously positioned as a world-ending problem and as the solution to that same problem (which would not exist without the efforts of technofascists) for a select, enlightened few.

The death cult of technofascism is demonstrated with perfect clarity by the closely related ideologies surrounding Large Language Models (LLMs), Artificial General Intelligence (AGI), and the bundle of ideas known as TESCREAL (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism)(5).

We can derive examples of this death cult from the examples given in the previous section. In the concept of the Bitcoin Citadel, cryptocurrencies are idealized as both the cause of the collapse and as the in-group’s source of power after that collapse(6). In TESCREAL circles, the belief is that AGI will end the world unless it is “aligned with humanity” by members of the death cult, who handle the AGI with the proper religious fervor(11).

While Urbit does not technologically structure itself as a death cult, its community and network is structured to be a highly effective incubator for other death cults(2, 7, 10).

Severance of our relationship with truth and scientific research

Destruction and redefinition of historical records

This can be viewed as a furtherance of technofascism’s goal of destroying our ability to perceive the truth, but it must be called out that technofascist projects have a particular interest in distorting our remembrance of history: making history effectively mutable in order to cover for technofascism’s failings.

Parasitization of existing terminology

As part of the process of generating false consensus and covering for the many failings of technofascist projects, existing terminology is often taken and repurposed to suit the goals of the fascists.

One obvious example is the popular term crypto, which until relatively recently referred to cryptography, an extremely important branch of mathematics. Cryptocurrency communities have now adopted the term, and have deliberately used the resulting confusion to falsely imply that cryptocurrencies, like cryptography, are an important tool in software architecture.

Weaponization of open source and the commons

One of the distinctive traits that separates ordinary capitalist exploitation from technofascism is the subversion and weaponization of the efforts of the open source community and the development commons.

One notable weapon used by many technofascist projects to achieve absolute control while maintaining the illusion that the work being undertaken is an open source community effort is what I will call forking hostility. This is a concerted effort to make forking the project infeasible, and it takes two forms.

Its technological form is accomplished via network effects; good examples are large cryptocurrency projects like Bitcoin and Ethereum, which cannot practically be forked because any blockchain without majority consensus is highly vulnerable to attacks, and in any case is much less valuable than the larger chain. Urbit maintains technological forking hostility via its aforementioned implementation of neofeudal network resource allocation.
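
The arithmetic behind that vulnerability is stark. A back-of-the-envelope sketch using the attacker catch-up probability from the Bitcoin whitepaper shows why hashpower that is harmless against the main chain becomes a guaranteed majority on a small fork:

```python
# catch-up probability from the Bitcoin whitepaper: an attacker holding
# share q of a chain's hashrate catches up from z blocks behind with
# probability (q / (1 - q)) ** z, and always succeeds once q >= 0.5
def catch_up_probability(q: float, z: int) -> float:
    p = 1.0 - q
    return 1.0 if q >= p else (q / p) ** z

# hardware worth 5% of the main chain's hashrate is useless against it,
# but pointed at a fork with a twentieth of the total hashrate, that same
# hardware is an instant majority
print(catch_up_probability(0.05, 6))  # ~2e-8 against the big chain
print(catch_up_probability(0.50, 6))  # 1.0 against the minority fork
```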

The second form of forking hostility is social; technofascist open source communities are notable for extremely aggressively telling dissenters to “just fork it, it’s open source” while just as aggressively punishing anyone attempting a fork with threats, hacking attempts (such as the aforementioned blockchain attacks), ostracization, and other severe social repercussions. These responses are distinctive in their uniformity, which is rarely seen even among the most toxic of regular open source communities.

Implementation of racist, biased, and prejudiced systems

References

[1] Bender, Emily M. and Hanna, Alex, AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype, Scientific American, 2023.

[2] Broderick, Ryan, Inside Remilia Corporation, the Anti-Woke DAO behind the Doomed Milady Maker NFT, Fast Company, 2022.

[3] Duesterberg, James, Among the Reality Entrepreneurs, The Point Magazine, 2022.

[4] Eco, Umberto, Ur-Fascism, The Anarchist Library, 1995.

[5] Gebru, Timnit and Torres, Emile, SaTML 2023 - Timnit Gebru - Eugenics and the Promise of Utopia through AGI, 2023.

[6] Gerard, David, Attack of the 50 Foot Blockchain: Bitcoin, Blockchain, Ethereum and Smart Contracts, David Gerard, 2017.

[7] Gottsegen, Will, Everything You Always Wanted to Know about Miladys but Were Afraid to Ask, 2022.

[8] Munster, Ben, The Bizarre Rise of the ’Bitcoin Citadel’, Decrypt, 2021.

[9] Scope Creep, Wikipedia, 2023.

[10] How to Start a Secret Society, 2022.

[11] Torres, Emile P., The Acronym behind Our Wildest AI Dreams and Nightmares, Truthdig, 2023.

[12] Yarvin, Curtis, 3-intro.txt, GitHub, 2010.

 

some quick awful.systems infrastructure updates:

  • @dgerard@awful.systems is now an infrastructure admin!
  • updated lemmy to 0.18.4
  • broke lemmy and lemmy-ui into their own flakes, which the deployment repo will grab and build as needed
  • added the sneer-archive flake to the deployment
  • finally wrote some docs on how to deploy from the flake
submitted 2 years ago* (last edited 2 years ago) by self@awful.systems to c/techtakes@awful.systems
 

no excerpts yet cause work destroyed me, but this just got posted on the orange site. apparently a couple of urbit devs realized urbit sucks actually. interestingly they correctly call out some of urbit’s worst points (like its incredibly high degree of centralization), but I get the strong feeling that this whole thing is an attempt to launder urbit’s reputation while swapping out the fascists in charge

e: I also have to point out that this is written from the insane perspective that anyone uses urbit for anything at all other than an incredibly inefficient message board and a set of interlocking crypto scams

e2: I didn’t link it initially, but the orange site thread where I found this has heated up significantly since then

 

I added rammy to the instance blocklist because it's apparently unmoderated and has been invaded by anime nazis

 

Science shows that the brain and the rest of the nervous system stop at death. How that relates to the notion of consciousness is still pretty much unknown, and many neuroscientists will tell you that. We haven't yet found an organ or process in the brain responsible for the conscious mind that we can say stops at death.

no matter how many neuroscientists I ask, none of them will tell me which part of the brain contains the soul. the orange site actually has a good sneer for this:

You don't need to know which part of the brain corresponds to a conscious mind when the entire brain is dead.

a lot of the rest of the thread is the most braindead right-libertarian version of Pascal’s Wager I’ve ever seen:

Ultimately, it's their personal choice, with their money, and even if they spend $100,000 on paying for it, or more, it doesn't mean they didn't leave other assets or things for their descendants.

By making a moral claim for why YOU decide that spending that money isn't justified, you're going down one very arrogant and ultimately silly road of making the same claim to so many other things people spend money and effort they've worked hard for on specific personal preferences, be they material or otherwise.

Maybe you buying a $700,000 house vs. a $600,000 house is just as idiotic then? Do you really need the extra floor space or bathrooms?

Where would you draw a line? Should other once-implausible life enhancement therapies that are now widely used and accepted also be forsaken? How about organ transplants? Gene therapy? Highly expensive cancer treatments that all have extended life beyond what was previously "natural" for many people? Often these also start first as speculative ideas, then experiments, then just options for the rich, but later become much more widely available.

and therefore the only rational course of action is to put $100,000 straight into the pockets of grifters. how dare I make any value judgments at all about cryonicists based on their extreme distaste for the scientific method, consistent history of failure, and use of extremely exploitative marketing?

 

The problem is that today's state of the art is far too good for low hanging fruit. There isn't a testable definition of GI that GPT-4 fails that a significant chunk of humans wouldn't also fail so you're often left with weird ad-hominins ("Forget what it can do and results you see. It's "just" predicting the next token so it means nothing") or imaginary distinctions built on vague and ill defined assertions ( "It sure looks like reasoning but i swear it isn't real reasoning. What does "real reasoning" even mean ? Well idk but just trust me bro")

a bunch of posts on the orange site (including one in the linked thread with a bunch of mask-off slurs in it) are just this: techfash failing to make a convincing argument that GPT is smart, and whenever it’s proven it isn’t, it’s actually that “a significant chunk of people” would make the same mistake, not the LLM they’ve bullshitted themselves into thinking is intelligent. it’s kind of amazing how often this pattern repeats in the linked thread: GPT’s perceived successes are puffed up to the highest extent possible, and its many, many, many failings are automatically dismissed as something that only makes the model more human (even when the resulting output is unmistakably LLM bullshit)
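
for the record, “predicting the next token” isn’t a weird ad-hominem, it’s the literal generation loop. a minimal sketch, with an invented stub distribution standing in for the transformer:

```python
# the loop every LLM runs during generation: get a probability distribution
# over the vocabulary given the tokens so far, sample one token, append,
# repeat; the stub distribution below is invented for illustration
import random

def next_token_distribution(context: list[str]) -> dict[str, float]:
    # stand-in for the transformer forward pass; in a real model this is
    # where all the matrix multiplication happens
    return {"the": 0.5, "a": 0.3, "bullshit": 0.2}

def generate(prompt: list[str], n_tokens: int) -> list[str]:
    tokens = list(prompt)
    for _ in range(n_tokens):
        dist = next_token_distribution(tokens)
        token = random.choices(list(dist), weights=list(dist.values()))[0]
        tokens.append(token)
    return tokens

print(" ".join(generate(["complete", "this:"], 5)))
```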

This is quite unfair. The AI doesn't have I/O other than what we force-feed it through an API. Who knows what will happen if we plug it into a body with senses, limbs, and reproductive capabilities? No doubt somebody is already building an MMORPG with human and AI characters to explore exactly this while we wait for cyborg part manufacturing to catch up.

drink! “what if we gave the chatbot a robot body” is my favorite promptfan cliche by far, and this one has it all! virtual reality, cyborgs, robot fucking, all my dumbass transhumanist favorites

There's actually a cargo cult around downplaying AI.

The high level characteristics of this AI is something we currently cannot understand.

The lack of objectivity, creativity, imagination, and outright denial you see on HN around this topic is staggering.

no, you’re all the cargo cult! I asked my cargo and it told me so
