submitted 5 months ago by velox_vulnus@lemmy.ml to c/usa@lemmy.ml
[-] queermunist@lemmy.ml 0 points 5 months ago

You have no reason to think it can be solved. You're just blindly putting your faith in something you don't understand and making up percentages to make yourself sound less like a religious nut.

[-] jsomae@lemmy.ml 1 points 5 months ago

If I have no reason to believe X and no reason not to believe X, then the probability of X would be 50%, no?

[-] queermunist@lemmy.ml 0 points 5 months ago

By this logic, the probability of every stupid thing is 50%.

You have no reason to believe magic is real, but you have no reason to not believe magic is real. So, is there a 50% probability that magic is real? Evidently you think so, because the magic science mans are going to magic up a solution to the problems faced by these chatbots.

[-] jsomae@lemmy.ml 1 points 5 months ago* (last edited 5 months ago)

Absolutely not true. The probabilities of stupid things are very low; that's because they are stupid. If we expected such things to be probable, we probably wouldn't call them stupid.

I have plenty of evidence to believe magic isn't real. Don't mistake "no evidence (and we haven't checked)" for "no evidence (but we've checked)". I've lived my whole life and haven't seen magic, and I have a very predictive model for the universe which has no term for 'magic'.

LLMs are new, and have made sweeping, landmark improvements every year since GPT-2. Therefore I have reason to believe (not 100%!) that we are still in the goldrush phase and that new landmark improvements will continue to be made in the field for some time. I haven't really seen an argument that hallucination is an intractable problem, and while it's true that every LLM so far has hallucinated, GPT-4 hallucinates much less than GPT-3, and GPT-3 hallucinates a lot less than GPT-2.

But realistically speaking, even if I were unknowledgeable and unqualified to say anything about LLMs with confidence, I could still say this: take any statement X about LLMs that doesn't register as obviously stupid even to a layperson. To that layperson, the probability of X is 50%, because its negation ¬X is equally opaque to them. Since X and ¬X are mutually exclusive and exhaustive (exactly one of them must be true), and there is no reason to favor one over the other, each gets probability 50%.

[-] queermunist@lemmy.ml 0 points 5 months ago

This technology isn't actually that new; it's been around for almost a decade. What's new is the amount of processing power they can throw at the databases and the scale of data collection, but you're just buying into marketing hype. It's classic tech-industry stuff to over-promise and under-deliver to pump up valuations and sales.

[-] jsomae@lemmy.ml 1 points 5 months ago* (last edited 5 months ago)

Ok, but by that same perspective, you could say convolutional neural networks have been around since the 80s. It wasn't until Geoffrey Hinton put them back on the map around 2012 that anyone cared. GPT-2 is when I started paying attention to LLMs, and that's 5 years old or so.

Even a decade is "new" in the sense that Laplace's law of succession alone indicates there's still roughly a 10% chance we'll solve the problem in the next year.
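
A minimal sketch of that arithmetic (treating each year of LLM research as a single failed attempt is my simplification, not a rigorous model):

```python
# Laplace's rule of succession: after s successes in n trials,
# the estimated probability of success on trial n+1 is (s + 1) / (n + 2).
def laplace(successes: int, trials: int) -> float:
    return (successes + 1) / (trials + 2)

# ~10 years of attempts, zero successes so far (illustrative assumption):
print(laplace(0, 10))  # 1/12 ≈ 0.083, i.e. roughly a 10% chance next year
```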

[-] queermunist@lemmy.ml 0 points 5 months ago

Laplace’s law of succession only applies if we know an experiment can result in either success or failure. We don't know that. That's just adding new assumptions for your religion. For all we know, this can never result in success and it's a dead end.

[-] jsomae@lemmy.ml 1 points 5 months ago

I have to hard disagree here. Laplace's law of succession does not require that assumption. It's easy to see why intuitively: if it turns out the probability is 0 (or 1), then the probability predicted by Laplace's law of succession converges to 0 (or 1) as more results come in.
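
A quick simulation makes the intuition visible (a sketch; the trial counts and the nonzero "true probability" are arbitrary illustrations):

```python
import random

def laplace(successes, trials):
    return (successes + 1) / (trials + 2)

# Case 1: success is truly impossible (p = 0). Every trial fails, and the
# Laplace estimate 1/(n+2) shrinks toward 0 -- no up-front assumption that
# success is possible is needed.
for n in (10, 100, 10_000):
    print(n, laplace(0, n))  # 0.083..., 0.0098..., 0.0000999...

# Case 2: a nonzero true probability. The estimate converges to it.
true_p, successes, trials = 0.3, 0, 100_000
for _ in range(trials):
    successes += random.random() < true_p
print(laplace(successes, trials))  # ≈ 0.3
```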

[-] queermunist@lemmy.ml 1 points 5 months ago

If the probability is 0, then it will never be 1.

Therefore, there must be some probability of success.

[-] jsomae@lemmy.ml 1 points 5 months ago

It may help to distinguish between the "true" probability of an event and the observer's internal probability for that event. If the observer's probability is 0 or 1 then you're right, it can never change. This is why your prior should never be 0 or 1 for anything.
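
One Bayes update shows the trap concretely (a minimal sketch; the 100x likelihood ratio is an arbitrary example):

```python
def posterior(prior, likelihood_if_true, likelihood_if_false):
    # Bayes' rule for a binary hypothesis H and one observation D:
    # P(H | D) = P(D | H) * P(H) / P(D)
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# The observation is 100x more likely if the hypothesis is true:
print(posterior(0.01, 1.0, 0.01))  # ≈ 0.50 -- a tiny prior can still move
print(posterior(0.0,  1.0, 0.01))  # 0.0   -- a prior of exactly 0 never moves
```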

[-] queermunist@lemmy.ml 1 points 5 months ago

> This is why your prior should never be 0 or 1 for anything.

For anything? Are you sure about that?

Because I say there's 0 probability that six-sided dice will ever produce a 7.

[-] jsomae@lemmy.ml 1 points 5 months ago

A better example of this is "How sure are you that 2+2=4?" It makes sense to assign a prior probability of 1 to such mathematical certainties, because they don't depend on our uncertain world. On the other hand, how sure are you that 8858289582116283904726618947467287383847 isn't prime?
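
(You can even quantify that last kind of uncertainty: a Miller-Rabin test only ever reports "probably prime", with at most a 4^-k chance of error after k passing rounds. Below is a sketch of the standard algorithm -- I'm not asserting what it prints for that particular number.)

```python
import random

def miller_rabin(n, rounds=40):
    # A composite n fools any single round with probability at most 1/4,
    # so k passing rounds leave at most a 4**-k chance of a false "prime".
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witness was found: definitely composite
    return True  # "probably prime"

print(miller_rabin(8858289582116283904726618947467287383847))
```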

For a die in a thought experiment -- sure, it can't be 7. But in a physical universe, a die could indeed surprise you with a 7.

More to the point, why do you believe the probability that the hallucination problem will be solved (at least to the point that hallucinations are rare and mild enough not to matter) is literally 0? Do you think the existence of fanatical AI zealots makes it less likely?

[-] queermunist@lemmy.ml 1 points 5 months ago* (last edited 5 months ago)

Okay, so by your logic the probability of literally everything is 1. That's absurd and that's not how Laplace’s law of succession is supposed to be applied. The point I'm trying to make is that some things are literally impossible, you can't just hand-wave that!

And I'm not saying that solving hallucinations is impossible! What I'm saying is that it could be impossible, and I'm criticizing your blind faith in progress, because you just believe the probability is literally 1. I can't say, for sure, that it's impossible. At the same time, you can't say, for sure, that it is possible. You can't just assume the problem will inevitably be fixed; otherwise you've talked yourself into a cult.

[-] jsomae@lemmy.ml 1 points 5 months ago* (last edited 5 months ago)

I'm not saying the probability of literally everything is 1. I'm saying it's nonzero. 0.00003 is neither 1 nor 0.

I am not assuming the problem will inevitably be fixed. I think 0.5 is a reasonable p for most such claims.

[-] queermunist@lemmy.ml 1 points 5 months ago* (last edited 5 months ago)

You do not know that it is nonzero; that's just an assumption you made up.

Also, Laplace's law of succession necessarily implies that, over an infinite number of attempts and as long as there is a possibility of success, the probability that the next attempt results in success approaches 1.

[-] jsomae@lemmy.ml 1 points 5 months ago* (last edited 5 months ago)

No, Laplace's law of succession states that the (observer's posterior) probability that the next attempt results in a success approaches the true probability. If it really isn't possible, then Laplace's law predicts that as more attempts are made, the observer will predict that the next result is increasingly unlikely to be a success. In other words, the observer's estimate of the probability approaches 0.

I know that it is possible that it might not be possible. To be clear: in the case that someone isn't sure whether something is possible or impossible, and has no reason to believe one of those options is more likely, then to them the probability is 50%. Saying "it might be 0 or 1 but I don't know which!" is the same as saying 50%. If you can predict something no better than a coin flip, then it's a coin flip. This is basic Bayesian probability theory.
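
To see what "it might be impossible" actually predicts, put it in a prior and turn the crank (a sketch; the 50/50 split between "impossible" and "p unknown" is the assumption):

```python
# Prior: 50% that success is impossible (p = 0), 50% that p is unknown
# (uniform on [0, 1]). Bayes' rule tells you exactly what this prior
# believes after watching n failed attempts.
def p_possible_after(n_failures):
    # P(n failures | impossible) = 1
    # P(n failures | p ~ Uniform) = integral of (1-p)**n dp = 1/(n+1)
    like_possible = 1 / (n_failures + 1)
    return 0.5 * like_possible / (0.5 * 1.0 + 0.5 * like_possible)

for n in (0, 10, 100, 10_000):
    print(n, p_possible_after(n))  # 0.5, then 1/12, 1/102, ... toward 0
```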

(Laplace's law merely takes into account that repeated attempts might or might not be correlated -- if you flip a coin a hundred times and get tails every time, you're not going to think it's 50/50 anymore.)

A Bayesian statistician believes that, in our real, imperfect physical universe, a six-sided die rolled once will yield each number with probability 1/6 (the probability of a 1 is 1/6, the probability of a 2 is 1/6, and so on), because the Bayesian statistician has no way to accurately predict the muscle movements of the person rolling the die, nor the way the die will bounce when it hits the table. (They might reserve a tiny fraction of probability space for esoteric results, like the die landing on a corner or quantum-morphing into a neon sign of the number 7.)

In contrast, a frequentist statistician will say: "It could end up a 1 or 2 or ... or 6, but I can't tell you which it will be without more information about how exactly the die is rolled. I'm not a physicist! Why can't we imagine an abstract die instead and analyze that?" This is very unhelpful. If you apply this perspective to science -- which it seems you do, given how concerned you are that the probability might be 0 and that we supposedly can't reason about it until we know -- but not to the die, then you need to rethink your philosophy.
