[-] EnderMB@lemmy.world 4 points 1 month ago* (last edited 1 month ago)

"sigh"

(Preface: I work in AI)

This isn't news. We've known this for many, many years. It's one of the reasons many companies didn't bother using LLMs in the first place, along with the sheer number of hallucinations you'll get, which will often utterly destroy a company's reputation (lol Google).

With that said, for commercial services that use LLMs, the claim is absolutely not true. The models themselves won't reason, but many services have separate expert agents or API endpoints the model is told to use to disambiguate a request, or to better understand what is being asked, what context is needed, etc.
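
Very roughly, the shape is something like this sketch (every helper name here is invented for illustration, not any particular vendor's API): the LLM never works the raw query out on its own; a thin layer in front of it calls the disambiguation and context helpers first and hands their output back in.

```python
# Sketch only: hypothetical helper names, not a real service's API.
# The point is that the "understanding" happens in the helpers, and the
# LLM just generates text from a prompt that already carries their output.

def disambiguate(query: str) -> str:
    # stand-in for a smaller expert model/endpoint that normalises spelling
    # and guesses what the user actually meant
    return query.strip().lower()

def fetch_context(intent: str) -> str:
    # stand-in for retrieval / an expert agent that supplies domain context
    return f"notes relevant to: {intent}"

def call_llm(prompt: str) -> str:
    # stand-in for the actual LLM call; it only turns the prompt into text
    return f"[answer generated from]\n{prompt}"

def answer(query: str) -> str:
    intent = disambiguate(query)
    context = fetch_context(intent)
    prompt = f"User intent: {intent}\nContext: {context}\nAnswer the user."
    return call_llm(prompt)

print(answer("  Wat is teh capitol of France? "))
```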

It's kinda funny, because many AI bros rave about how LLMs are getting super powerful, when in reality the real improvements we're seeing are in smaller models that teach an LLM about things like personas, where to seek expert opinion, what a user "might" mean if they misspell something or ask for something out of context, etc. The LLMs themselves are only getting slightly better; the machinery that precedes them is propping them up to make them better.
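
Same caveat as above, just a sketch with invented names: the "router" in front of the LLM is a keyword lookup here and a smaller classifier model in practice, but either way that's where the visible improvement lives; the LLM only gets handed a better prompt.

```python
# Sketch only: an invented persona table and a toy router standing in for
# the smaller models that pick a persona / expert before the LLM is called.

PERSONAS = {
    "code": "You are a senior engineer. Answer with working examples.",
    "general": "You are a helpful assistant.",
}

def route(query: str) -> str:
    # stand-in for a small classifier that guesses intent (and tolerates typos)
    code_words = ("bug", "stack trace", "compile", "exception")
    return "code" if any(w in query.lower() for w in code_words) else "general"

def build_prompt(query: str) -> str:
    # the LLM never picks its own persona; the router already did that
    return f"{PERSONAS[route(query)]}\n\nUser: {query}"

print(build_prompt("why does my build fail with this stack trace??"))
```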

IMO, LLMs are what they are: a good way to spit information out fast. They're an orchestration mechanism at best. When you think about them this way, every improvement we see tends to make a lot of sense. The article is kinda true, but not in the way they want it to be.

[-] V0ldek@awful.systems 23 points 1 month ago

> (Preface: I work in AI)

Are they a serious researcher in ML with insights into some of the most interesting and complicated intersections of computer science and analytical mathematics, or a promptfondler that earns 3x the former's salary for a nebulous AI startup that will never create anything of value to society? Read on to find out!

[-] froztbyte@awful.systems 10 points 1 month ago

> Read on to find out!

do i have to

[-] V0ldek@awful.systems 11 points 1 month ago

Welcome to the future! Suffering is mandatory!

[-] froztbyte@awful.systems 14 points 1 month ago

as a professional abyss-starer, I'm going to talk to my union about this

[-] blakestacey@awful.systems 14 points 1 month ago* (last edited 1 month ago)

> (Preface: I work in AI)

Preface: repent for your sins in sackcloth and ashes.

> IMO, LLMs are what they are: a good way to spit information out fast.

Buh bye now.

[-] bitofhope@awful.systems 18 points 1 month ago

`while true; do fortune; done` is a good way to spit information out fast.

[-] o7___o7@awful.systems 9 points 1 month ago* (last edited 1 month ago)

> what a user “might” mean if they misspell something

this but with extra wasabi

[-] sc_griffith@awful.systems 10 points 1 month ago

*trying desperately not to say the thing* what if AI could automatically... round out... spelling

[-] o7___o7@awful.systems 6 points 1 month ago

It hurts when they're so close!
