submitted 4 months ago by ray@lemmy.ml to c/opensource@lemmy.ml
[-] sramder@lemmy.world 3 points 3 months ago

It was struggling harder than I was ;-)

[-] Chewy7324@discuss.tchncs.de 8 points 3 months ago* (last edited 3 months ago)

I've noticed these language models don't work well on articles with dense information and complex sentence structure. Sometimes they miss the most important point.

They're useful as a TLDR but shouldn't be taken as fact, at least not yet, and likely not for the foreseeable future.

A bit off topic, but I read a comment in another community where someone asked ChatGPT something and confidently posted the answer. Problem: the answer was wrong. That's why it's so important to mark ~~AI~~ LLM-generated text (which the TLDR bots do).

[-] quiteStraightEdge@lemmy.ml 5 points 3 months ago

Not calling ML and LLMs "AI" would also help. (Going even further off topic.)

this post was submitted on 17 Jul 2024