submitted 1 year ago by JRepin@lemmy.ml to c/technology@lemmy.ml

cross-posted from: https://lemmy.ml/post/2811405

"We view this moment of hype around generative AI as dangerous. There is a pack mentality in rushing to invest in these tools, while overlooking the fact that they threaten workers and impact consumers by creating lesser quality products and allowing more erroneous outputs. For example, earlier this year America’s National Eating Disorders Association fired helpline workers and attempted to replace them with a chatbot. The bot was then shut down after its responses actively encouraged disordered eating behaviors. "

[-] mojo@lemm.ee 32 points 1 year ago

The real issue is people need to realize how LLMs work. It's just a really good next-word generator that sounds plausible to a human. Accuracy and truth aren't part of the consideration for the most part. The AI doesn't even see words; it breaks them down into numbers and treats the whole thing like a giant math problem.
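To make the "words become numbers" point concrete, here is a minimal sketch assuming the Hugging Face transformers and torch packages and the small GPT-2 model (purely illustrative choices): the text is converted into integer token IDs, and the model just scores which token is most likely to come next.

```python
# Minimal sketch of "words become numbers, then next-token prediction".
# Assumes the Hugging Face `transformers` and `torch` packages; GPT-2 is an
# arbitrary small model picked for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The capital of France is"
inputs = tokenizer(text, return_tensors="pt")   # words -> tensor of integer token IDs
print(inputs["input_ids"])

with torch.no_grad():
    logits = model(**inputs).logits             # one score per vocabulary entry, per position
next_id = int(logits[0, -1].argmax())           # pick the single most likely next token; no notion of "truth"
print(tokenizer.decode([next_id]))
```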

It's an amazing tool that will massively boost productivity, but people need to know its limitations and what it's actually capable of. That's where the hype is overblown.

[-] Tgs91@lemmy.world 24 points 1 year ago

I work in AI research. I've been trying to explain it to people as an improv actor that takes suggestions from the audience. It just plays along with the prompt you give it. It's not an expert, it's just an actor playing a role.

[-] FaceDeer@kbin.social 3 points 1 year ago

Ironically, I think you are also overlooking some details about how LLMs work. They are not just word generators. There is stuff going on inside those neural networks that we still don't fully understand.

For example, I read about a study a little while back that was testing the mathematical abilities of LLMs. The researchers would give them simple math problems like "2+2=" and the LLM would fill in 4, which was unsurprising because that equation could be found in the LLM's training data. But as they went to higher numbers the LLM kept giving mostly correct results, even when they knew for a fact that the specific math problem being presented wasn't in the training data. After training on enough simple addition problems the LLM had actually "figured out" some of the underlying rules of math and was using those to make its predictions.
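The study itself isn't reproduced here, but the general idea, that fitting enough examples of a rule can recover the rule well enough to handle cases that were never in the training set, can be shown with a deliberately tiny toy model. This is only an analogy (a scikit-learn regressor with arbitrary ranges and architecture), not how an LLM actually represents arithmetic:

```python
# Toy analogy: fit a model on some addition problems, test it on pairs it never saw.
# scikit-learn MLP, ranges and layer sizes are arbitrary illustrative choices.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
pairs = np.array([(a, b) for a in range(100) for b in range(100)])
rng.shuffle(pairs)
train, test = pairs[:8000], pairs[8000:8005]     # the test pairs never appear in training

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
model.fit(train, train.sum(axis=1))              # learn a + b purely from examples

for (a, b), pred in zip(test, model.predict(test)):
    print(f"{a} + {b} = {a + b}, model predicts {pred:.1f}")   # typically close, not guaranteed exact
```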

Being overly dismissive of this technology is as fallacious as overly hyping it.

[-] Norgur@kbin.social 16 points 1 year ago

No. Just... no. The LLM has not "figured out" what's going on. It can't. These things are just good at prediction. The main indicator is in your own text: "mostly correct". A computer that knows what to calculate will not be "mostly correct". One false answer proves one hundred percent that it has no clue what it's supposed to do.
What we are seeing with those "studies" is either social-science researchers applying the same standards they apply to humans (where "mostly correct" is as good as "always correct"), which is bonkers, or behavioral researchers trying to prove some behavior they attribute to the AI as if it were a living being, which is also bonkers, because the AI will mimic the results in its training data. That data is human, so it will be biased as fuck, and it's impossible to determine whether the AI did anything by itself at all (which it didn't, because that's not how the software works).

[-] kogasa@programming.dev 1 points 1 year ago

No, you're wrong. All interesting behavior of ML models is emergent. It is learned, not programmed. The fact that it can perform what we consider an abstract task with success clearly distinguishable from random chance is irrefutable proof that some model of the task has been learned.

[-] Norgur@kbin.social 4 points 1 year ago

No one said anything about "learned" vs "programmed". Literally no one.

[-] kogasa@programming.dev 3 points 1 year ago

OP is saying it's impossible for an LLM to have "figured out" how something works, and that if it understood anything it would be able to perform related tasks perfectly reliably. They didn't use those words, but that's what they meant. Sorry for your reading comprehension.

[-] Norgur@kbin.social 1 points 1 year ago

"op" you are referring to is... well... myself, Since you didn't comprehend that from the posts above, my reading comprehension might not be the issue here. \

But in all seriousness: I think this is an issue with concepts. No one is saying that LLMs can't "learn"; that would be stupid. But the discussion is not "is everything programmed into the LLM, or does it recombine stuff". You seem to reason that when someone says the LLM can't "understand", that person means "the LLM can't learn", but "learning" and "understanding" are not the same at all. The question is not whether LLMs can learn; it's whether they can grasp concepts from the content of the words they absorb as training data. If the LLM grasped concepts (like the rules of algebra), it could reproduce them every time it is confronted with a similar problem. The fact that it can't do that shows that the only thing it does is chain words together by stochastic calculation. Really sophisticated stochastic calculation with lots of possible outcomes, but still.

[-] kogasa@programming.dev 2 points 1 year ago

The "OP" you are referring to is... well... myself. Since you didn't gather that from the posts above, my reading comprehension might not be the issue here.

I don't care. It doesn't matter, so I didn't check. Your reading comprehension is still, in fact, the issue, since you didn't understand that the "learned" vs "programmed" distinction I had referred to is completely relevant to your post.

It's whether they can grasp concepts from the content of the words they absorb as training data.

That's what learning is. The fact that it can construct syntactically and semantically correct, relevant responses in perfect English means that it has a highly developed inner model of many things we would consider to be abstract concepts (like the syntax of the English language).

If the LLM grasped concepts (like the rules of algebra), it could reproduce them every time it is confronted with a similar problem

This is wrong. It is obvious and irrefutable that it models sophisticated approximations of abstract concepts. Humans are literally no different. Humans who consider themselves to understand a concept can obviously misunderstand some aspect of that concept in some contexts. The fact that these models are not as robust as a human's doesn't mean what you're saying it means.

the only thing it does is chain words together by stochastic calculation.

This is a meaningless point, you're thinking at the wrong level of abstraction. This argument is equivalent to "a computer cannot convey meaningful information to a human because it simply activates and deactivates bits according to simple rules." Your statement about an implementation detail says literally nothing about the emergent behavior we're talking about.

[-] diffuselight@lemmy.world -3 points 1 year ago

Can we stop giving out copium like this? You are fact-free.

https://arxiv.org/pdf/2212.09196.pdf

[-] Norgur@kbin.social 5 points 1 year ago

How does behaviour that is present in LLMs but not in SLMs show that an LLM can "think"? It only shows that the amount of stuff an LLM can guess increases when you feed it more data. That's not the hot take you think it is.

[-] coolin@beehaw.org 3 points 1 year ago

I think this is downplaying what LLMs do. Yeah, they are not the best at doing things in general, but the fact that they were able to learn the structure and semantic context of language is quite impressive, even if they don't know what the words converted into tokens actually mean. I suspect we will be able to use LLMs as one part of a full digital "brain", with some model similar to our own prefrontal cortex calling the LLM (and other things like a vision model, a sound model, etc.) and using its output to reason about a certain task and take an action. That's where I think the hype will be validated: when you put all these parts we've been working on together and Frankenstein a new and actually intelligent system.
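The architecture imagined there is speculative, but its general shape can be sketched: a small controller routes a task to specialised components (an LLM, a vision model, and so on) and combines their outputs. Everything below is hypothetical scaffolding with stubbed-out modules, just to show the structure being described, not a real system:

```python
# Hypothetical sketch of a "controller + specialised modules" layout.
# Every module here is a stub; in a real system each would wrap an actual model.
from dataclasses import dataclass
from typing import Optional

def language_module(prompt: str) -> str:
    return f"[LLM response to: {prompt}]"        # stand-in for a call to an LLM

def vision_module(image_path: str) -> str:
    return f"[description of {image_path}]"      # stand-in for an image model

@dataclass
class Task:
    prompt: str
    image_path: Optional[str] = None

def controller(task: Task) -> str:
    """Crude 'prefrontal cortex': gather evidence from the relevant modules, then decide."""
    evidence = []
    if task.image_path:
        evidence.append(vision_module(task.image_path))
    evidence.append(language_module(task.prompt))
    # A real controller would reason over the evidence; here it is just concatenated.
    return " | ".join(evidence)

print(controller(Task(prompt="What is in this picture?", image_path="cat.jpg")))
```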
