128 points · submitted 10 months ago by haxor@derp.foo to c/hackernews@derp.foo

There is a discussion on Hacker News, but feel free to comment here as well.

lvxferre@lemmy.ml 12 points 10 months ago

[Double reply to avoid editing my earlier comment]

From the HN thread:

> It's a good example of how these models are not answering based on any form of understanding and logical reasoning, but on probabilistic likelihood across many overlapping layers. // Though this also may not matter if it creates a good enough illusion of understanding and intelligence.

I think that the first sentence is accurate, but I disagree with the second one.

Probabilistic likelihood is not enough to create a good illusion of understanding/intelligence. Relying on it alone creates situations like the one in the OP, where the bot outputs nonsense because of an unexpected prompt.
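
To make that concrete, here's a deliberately minimal sketch: a toy bigram model in Python. The corpus, the function names, and the uniform fallback are all invented for illustration, and a real LLM is a neural network rather than a lookup table, but the core move is the same: pick the next token by likelihood, with no layer that represents what the tokens mean.

```python
import random
from collections import defaultdict

# Toy bigram "language model": next-token choice by likelihood alone.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev):
    options = counts.get(prev)
    if not options:
        # Unexpected prompt: no statistics to lean on, so the model
        # still emits *something* plausible-looking -- here, a uniform
        # guess over the vocabulary. This is where the nonsense comes from.
        return random.choice(corpus)
    tokens, weights = zip(*options.items())
    return random.choices(tokens, weights=weights)[0]

def generate(prompt, length=6):
    out = prompt.split()
    for _ in range(length):
        out.append(next_token(out[-1]))
    return " ".join(out)

print(generate("the cat"))   # in-distribution: reads fluently
print(generate("quantum"))   # out-of-distribution: fluent-looking nonsense
```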

To avoid that, the model would need some symbolic (or semantic, or conceptual) layer[s], handling the concepts conveyed by the tokens rather than just the tokens themselves. But that's already closer to intelligence than to probabilistic likelihood.
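
For contrast, a hypothetical sketch of what a thin conceptual layer could look like. Everything here (Concept, LEXICON, answer) is made up for illustration; the point is only that a system operating on concepts can notice when it has nothing to ground a token in, instead of emitting statistically plausible filler.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    """A token grounded in properties, not just its surface form."""
    name: str
    is_animal: bool

# Hypothetical token -> concept mapping (invented for illustration).
LEXICON = {
    "cat": Concept("cat", is_animal=True),
    "mat": Concept("mat", is_animal=False),
}

def answer(word: str) -> str:
    concept = LEXICON.get(word)
    if concept is None:
        # No concept behind the token: the system can say so,
        # rather than continuing on token likelihood alone.
        return f"I don't know what '{word}' means."
    return f"A {concept.name} is " + (
        "an animal." if concept.is_animal else "not an animal."
    )

print(answer("cat"))       # answered from the concept's properties
print(answer("blorptex"))  # ungrounded token: honest failure, not nonsense
```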
