TechTakes

top 8 comments
[–] spankmonkey@lemmy.world 15 points 3 days ago* (last edited 3 days ago) (1 children)

Why would the steps be literal when everything else is bullshit? Obviously the reasoning steps are AI slop too.

[–] dgerard@awful.systems 8 points 3 days ago (1 children)
[–] Soyweiser@awful.systems 8 points 3 days ago

The paper clipping is nigh! Repent, Harlequins!

[–] paraphrand@lemmy.world 12 points 3 days ago (1 children)

It’s bullshitting. That’s the word. Bullshitting is saying things without a care for how true they are.

[–] Saledovil@sh.itjust.works 6 points 3 days ago (2 children)

The word "bullshitting" implies a clarity of purpose I don't want to attribute to AI.

[–] antifuchs@awful.systems 4 points 2 days ago

It’s kind of a distinction without much discriminatory power: LLMs are a tool created to ease the task of bullshitting, used by bullshitters to produce bullshit.

[–] Soyweiser@awful.systems 2 points 2 days ago

Yeah, that is why people call it confabulating and not bullshitting.

[–] diz@awful.systems 6 points 3 days ago* (last edited 3 days ago)

It re-consumes its own bullshit, and the bullshit it does print is the bullshit it also fed itself; it’s not lying about that. Of course, it is also always re-consuming the initial prompt, so the end bullshit isn’t necessarily as far removed from the question as the length would indicate.
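Mechanically, that feedback loop is just autoregressive decoding. A minimal sketch, with a hypothetical `next_token` standing in for the real model call (not any actual library API): every step conditions on the initial prompt plus everything generated so far, "reasoning steps" included.

```python
# Minimal sketch of autoregressive decoding. `next_token` is a toy,
# hypothetical stand-in for a real model call, not any actual API.

def next_token(context: str) -> str:
    # Toy model: emit filler until the context is "long enough".
    return "bullshit " if len(context) < 80 else "<eos>"

def generate(prompt: str) -> str:
    context = prompt
    while True:
        # On every step the model sees the initial prompt *plus* all of
        # its own prior output -- it literally re-consumes its own bullshit.
        token = next_token(context)
        if token == "<eos>":
            break
        context += token
    return context[len(prompt):]  # everything past the prompt is "the answer"

print(generate("Why would the steps be literal? "))
```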

Where it gets deceptive is when it knows an answer to the problem but constructs some bullshit to make you believe it solved the problem on its own. The only way to tell the difference is to ask it something simpler that it doesn’t know the answer to, and watch it bullshit in circles or arrive at an incorrect answer.