submitted 8 months ago* (last edited 8 months ago) by dgerard@awful.systems to c/morewrite@awful.systems

This is just a draft, best refrain from linking. (I hope we'll get this up tomorrow or Monday. edit: probably this week? edit 2: it's up!!) The [bracketed] stuff is links to cites.

Please critique!


A vision came to us in a dream — and certainly not from any nameable person — on the current state of the venture-capital-fueled AI and machine learning industry. We asked around and several in the field concurred.

AIs are famous for “hallucinating” made-up answers with wrong facts. The hallucinations are not decreasing. In fact, the hallucinations are getting worse.

If you know how large language models work, you will understand that all output from an LLM is a “hallucination” — it’s generated from the latent space and the training data. But if your input contains mostly facts, then the output has a better chance of not being nonsense.
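To make the point above concrete, here's a toy sketch (our construction, not any particular model's code) of how every token gets produced: sample from a softmax over logits. The vocabulary and logit values below are made up for illustration. There is no separate "facts" pathway — the true answer and the confabulation come out of the exact same machinery.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample one token index from softmax(logits / temperature).

    This is the same operation whether the most probable continuation
    happens to be a fact or a hallucination.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # inverse-CDF sampling
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

# Hypothetical vocabulary and logits, purely for illustration.
vocab = ["Paris", "London", "Narnia"]
logits = [3.0, 1.0, 0.5]
token = sample_next_token(logits, temperature=1.0)
```

The “correct” answer only wins when the training data stacked the logits in its favor; crank the temperature up and “Narnia” falls out of the same process.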

Unfortunately, the VC-funded AI industry runs on the promise of replacing humans with a very large shell script. If the output is just generated nonsense, that’s a problem. There is a slight panic among AI company leadership about this.

Even more unfortunately, the AI industry has run out of untainted training data. So they’re seriously considering doing the stupidest thing possible: training AIs on the output of other AIs. This is already known to make the models collapse into gibberish. [WSJ, archive]
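The collapse dynamic shows up even in a toy model. Below is a sketch (our construction, not the WSJ's or anyone's actual training pipeline): fit a simple unigram “model” to a corpus, generate a synthetic corpus from it, refit on the synthetic output, and repeat. Any token the model fails to emit in one generation is gone forever, so diversity can only shrink.

```python
import random
from collections import Counter

def fit_unigram(corpus):
    """'Train' a unigram model: map each token to its probability."""
    counts = Counter(corpus)
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def generate(model, n, rng):
    """Sample n tokens from the fitted model."""
    tokens = list(model)
    weights = [model[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=n)

rng = random.Random(42)
# A Zipf-ish "human" corpus: token i appears roughly 1000/i times.
corpus = [f"tok{i}" for i in range(1, 51) for _ in range(1000 // i)]

vocab_sizes = [len(set(corpus))]
for generation in range(10):
    model = fit_unigram(corpus)
    corpus = generate(model, n=300, rng=rng)  # train on model output
    vocab_sizes.append(len(set(corpus)))

# Rare tokens vanish and never come back: the vocabulary only shrinks.
print(vocab_sizes)
```

Each generation loses the tail of the distribution, and nothing ever puts it back — which is the gibberish-collapse failure mode in miniature.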

There is enough money floating around in tech VC to fuel this nonsense for another couple of years — there are hundreds of billions of dollars (family offices, sovereign wealth funds) desperate to find an investment. If ever there was an argument for swingeing taxation followed by massive government spending programs, this would be it.

Ed Zitron gives it three more quarters (nine months). The gossip concurs with Ed: another three quarters is likely. There should be at least one more wave of massive overhiring. [Ed Zitron]

The current workaround is to hire fresh Ph.D.s to fix the hallucinations and try to underpay them on the promise of future wealth. If you have a degree with machine learning in it, gouge them for every penny you can while the gouging is good.

AI is holding up the S&P 500. This means that when the AI VC bubble pops, tech will drop. Whenever the NASDAQ catches a cold, bitcoin catches COVID — so expect crypto to go through the floor in turn.

[-] j4k3@lemmy.ml -1 points 8 months ago

I use the tech every day. Good luck with your echo chamber. You are a statistical inevitability. Time will teach you far more than I care to.

[-] AcausalRobotGod@awful.systems 9 points 8 months ago

Buddy, I am a statistical inevitability.

[-] Amoeba_Girl@awful.systems 6 points 8 months ago

wait... i'm sure it sounded cool and all but what does it mean for a person existing here and now to be inevitable in a statistical sense.....

[-] self@awful.systems 6 points 8 months ago

that’s why I wish they’d given us more before they went I said good day sir and fucked off. I wanted more fractally wrong shit from the mind that gave us “the only issue with LLMs is user input, you poor naive soul” and “early computers couldn’t do arithmetic, ever heard of floating point? you fools” and that last one keeps being wrong in exciting new ways every time I think about it

[-] 200fifty@awful.systems 6 points 8 months ago

not least in that calling floating point "arithmetic" is being far too generous to floating point...
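(for anyone wondering what's "too generous" about it — a minimal demonstration, standard IEEE 754 double-precision behavior, nothing exotic:)

```python
# Classic IEEE 754 double-precision surprises:
print(0.1 + 0.2)        # 0.30000000000000004, not 0.3
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False: not associative
print(1e16 + 1.0 == 1e16)  # True: the 1.0 vanishes, because adjacent
                           # doubles near 1e16 are 2.0 apart
```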

[-] dgerard@awful.systems 3 points 8 months ago

oh i assure you the hits just keep on coming

a beautiful mind, with just a soupçon of LLM assistance

[-] blakestacey@awful.systems 5 points 8 months ago

You are a statistical inevitability.

Aw shucks, I bet you say that to all the girls

[-] self@awful.systems 2 points 8 months ago

ahahaha ok bye

this post was submitted on 06 Apr 2024