393
submitted 5 months ago by neme@lemm.ee to c/technology@lemmy.world
[-] Buffalox@lemmy.world 145 points 5 months ago

It's kind of funny how AI has the exact same problems some humans have.
I always thought AI wouldn't have that kind of problem, because they would be carefully fed accurate information.
Instead they are taught from things like Facebook and the thing formerly known as Twitter.
What an idiotic timeline we are in. LOL

[-] treefrog@lemm.ee 70 points 5 months ago* (last edited 5 months ago)

I thought the main issue was that AIs don't really know how to say "I don't know" or second-guess themselves, as that would take a much more robust architecture with multiple feedback loops. Like a brain.

Anyway, LLMs aren't the only AI that do this. So them being trained on Facebook data certainly isn't the whole issue.

[-] dan1101@lemm.ee 44 points 5 months ago

Yeah, it's the old garbage in, garbage out problem; the AI algorithms don't really understand what they are outputting.

I think at this point voice recognition and text generation AI would be more useful as something like a phone assistant. You could tell it complex things like "Mute my phone for the next 2 hours" or "Notify me if I receive an email from John Smith." Those sort of things could be easily done by AI algorithms that A) Understand your voice and B) Are programmed to know all the features of the OS. Hopefully with a known dataset like a phone OS there shouldn't be hallucination problems, the AI could just act as an OS concierge.
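The "OS concierge" idea above can be sketched as a closed-world intent map: recognized speech is matched against a fixed set of command patterns, each tied to an OS action, and anything unmatched is refused rather than guessed at. This is a toy illustration, not a real assistant; the functions standing in for OS APIs (`mute_phone`, `notify_on_email`) are hypothetical.

```python
import re

# Hypothetical stand-ins for real OS APIs.
def mute_phone(minutes):
    return f"muted for {minutes} minutes"

def notify_on_email(sender):
    return f"will notify on email from {sender}"

# Fixed, known command set: each pattern maps to exactly one OS action.
INTENTS = [
    (re.compile(r"mute my phone for the next (\d+) hours?"),
     lambda m: mute_phone(int(m.group(1)) * 60)),
    (re.compile(r"notify me if i receive an email from (.+)"),
     lambda m: notify_on_email(m.group(1))),
]

def handle(utterance):
    text = utterance.lower().strip()
    for pattern, action in INTENTS:
        m = pattern.fullmatch(text)
        if m:
            return action(m)
    # Closed world: unknown requests are refused, never invented.
    return "sorry, I don't know how to do that"
```

Because the command set is enumerated up front, there is nothing for the system to hallucinate: it either executes a known action or declines.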

[-] Rhaedas@fedia.io 24 points 5 months ago

The narrow purpose models seem to be the most successful, so this would support the idea that a general AI isn't going to happen from LLMs alone. It's interesting that hallucinations are seen as a problem yet are probably part of why LLMs can be creative (much like humans). We shouldn't want to stop them, but just control when they happen and be aware of when the AI is off the tracks. A group of different models working together and checking each other might work (and probably has already been tried, it's hard to keep up).
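The "models checking each other" idea can be sketched as simple answer cross-checking: collect answers from several models (stubbed here as plain strings) and only accept one that clears a consensus threshold, flagging everything else as a likely hallucination.

```python
from collections import Counter

def cross_check(answers, quorum=0.5):
    """Return the majority answer if it clears the quorum, else None."""
    counts = Counter(answers)
    answer, votes = counts.most_common(1)[0]
    if votes / len(answers) > quorum:
        return answer
    return None  # no consensus: treat as a likely hallucination

# In practice the answers would come from different models (or one model
# sampled several times); here they are hard-coded for illustration.
print(cross_check(["Paris", "Paris", "Lyon", "Paris"]))  # consensus: Paris
print(cross_check(["A", "B", "C"]))                      # no consensus: None
```

This is the crude version of the idea; real implementations compare semantically equivalent answers rather than exact strings, but the principle of flagging disagreement is the same.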

[-] FaceDeer@fedia.io 29 points 5 months ago

The problem with AI hallucinations is not that the AI was fed inaccurate information, it's that it's coming up with information that it wasn't fed in the first place.

As you say, this is a problem that humans have. But I'm not terribly surprised these AIs have it, because they're built in mimicry of how aspects of the human mind work. And in some cases it's desirable behaviour, for example when you're using an AI as a creative assistant. You want it to come up with new stuff in those situations.

It's just something you need to keep in mind when coming up with applications.

[-] foggy@lemmy.world 21 points 5 months ago

What weirds me out is that the things it has issues with when generating images/video are basically a list of things lucid dreamers check on to see if they're awake or dreaming.

  1. Hands. Are your hands... hands? Do they make sense?

  2. Written language. Does it look like normal written language?

(3. Turn the lights off / 4. Pinch your nose and breathe through it) - these two not so much

  5. How did I get here? Where was I before this? Does the transition make sense?

  6. Mirrors. Are they accurate?

  7. Displays on digital devices. Do they look normal?

  8. Clocks. Digital and analog... Do they look like they're telling time? Even if they do, look away and check again.

(9. Physics: try to do something physically impossible, like poking your finger through your palm / 10. Do you recognize people, and do they recognize you?) - two more that aren't relevant.

But still... It's kinda remarkable.

Also, Nvidia launched their Earth-2 earth simulator recently. So, simulation theory confirmed, I guess.

[-] MentalEdge@sopuli.xyz 20 points 5 months ago* (last edited 5 months ago)

There's also the fact that they can't tell reality apart from fiction in general, because they don't understand anything in the first place.

LLMs have no way of differentiating fantasy RPG elements from IRL things. So they can lose the plot on what is being discussed suddenly, and for seemingly no reason.

LLMs don't just "learn" facts from their training data. They learn how to pretend to be thinking; they can mimic but not really comprehend. If there were facts in the training data, they can regurgitate them, but they don't actually know which facts apply to which subjects, or when not to make some up.

[-] Buffalox@lemmy.world 9 points 5 months ago

They learn how to pretend

True, and they are so darn good at it, that it can be somewhat confusing at times.
But the current AIs are not the ones we read about in SciFi.

[-] technocrit@lemmy.dbzer0.com 17 points 5 months ago* (last edited 5 months ago)

It's not the exact same problems humans have. It's completely different. Marketers and hucksters just use anthropomorphic terminology to hype their dysfunctional programs.

[-] Kecessa@sh.itjust.works 107 points 5 months ago

I'm 100% sure they can't because what they call AI isn't intelligence.

[-] eestileib@sh.itjust.works 57 points 5 months ago

You mean we can't teach a bullshit machine to stop bullshitting? I'm shocked.

[-] dch82@lemmy.zip 10 points 5 months ago

What you can do is try to filter out the garbage, but it's basically like trying to find gold in food waste.

[-] cmrn@lemmy.world 51 points 5 months ago

It’s insane how many people already take AI as more capable/accurate than other media. I’m not against AI, but I’m definitely against the bubble of worship that some people have put it in.

[-] Deconceptualist@lemm.ee 46 points 5 months ago

As others are saying it's 100% not possible because LLMs are (as Google optimistically describes) "creative writing aids", or more accurately, predictive word engines. They run on mathematical probability models. They have zero concept of what the words actually mean, what humans are, or even what they themselves are. There's no "intelligence" present except for filters that have been hand-coded in (which of course is human intelligence, not AI).

"Hallucinations" is a total misnomer because the text generation isn't tied to reality in the first place, it's just mathematically "what next word is most likely".

https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/
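The "what next word is most likely" point can be shown with a toy bigram model: count which word follows which in a tiny corpus, then always emit the most frequent successor. There is no meaning or truth involved, only conditional frequency; the corpus here is made up for illustration.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ran".split()

# Count successors: following[w] maps each word to a Counter of what came next.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Most frequent successor of `word`, or None if never seen."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

# "cat" follows "the" twice, "mat" once, so "cat" wins: pure statistics,
# zero understanding of cats, mats, or truth.
print(next_word("the"))
```

Real LLMs condition on long contexts with neural networks rather than raw bigram counts, but the output is still "the statistically likely continuation", not a checked fact.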

[-] captain_aggravated@sh.itjust.works 9 points 5 months ago

Remember the game people used to play that was something like "type 'my girlfriend is' and then let your phone keyboard's auto-suggestion take it from there"? LLMs are that.

[-] kaffiene@lemmy.world 41 points 5 months ago

I'm 100% sure he can't. Or at least, not from LLMs specifically. I'm not an expert so feel free to ignore my opinion but from what I've read, "hallucinations" are a feature of the way LLMs work.

[-] rottingleaf@lemmy.zip 9 points 5 months ago

One can have an expert system assisted by ML for classification. But that's not an LLM.

[-] chonglibloodsport@lemmy.world 41 points 5 months ago

Everything these AIs output is a hallucination. Imagine if you were locked in a sensory deprivation tank, completely cut off from the outside world, and only had your brain fed the text of all books and internet sites. You would hallucinate everything about them too. You would have no idea what was real and what wasn’t because you’d lack any epistemic tools for confirming your knowledge.

That’s the biggest reason why AIs will always be bullshitters as long as they’re disembodied software programs running on a server. At best they can be a brain in a vat, which is a pure hallucination machine.

[-] Voroxpete@sh.itjust.works 10 points 5 months ago

Yeah, I try to make this point as often as I can. The notion that AI hallucinates only wrong answers really misleads people about how these programs actually work. It couches it in terms of human failings rather than really getting at the underlying flaw in the whole concept.

LLMs are a really interesting area of research, but they never should have made it out of the lab. The fact that they did is purely because all science operates in the service of profit now. Imagine if OpenAI were able to rely on government funding instead of having to find a product to sell.

[-] Excrubulent@slrpnk.net 9 points 5 months ago* (last edited 5 months ago)

First of all I agree with your point that it is all hallucination.

However I think a brain in a vat could confirm information about the world with direct sensors like cameras and access to real-time data, as well as the ability to talk to people and determine things like who was trustworthy. In reality we are brains in vats, we just have a fairly common interface that makes consensus reality possible.

The thing that really stops LLMs from being able to make judgements about what is true and what is not is that they cannot make any judgements whatsoever. Judging what is true is a deeply contextual and meaning-rich question. LLMs cannot understand context.

I think the moment an AI can understand context is the moment it begins to gain true sentience, because a capacity for understanding context is definitionally unbounded. Context means searching beyond the current information for further information. I think this context barrier is fundamental, and we won't get truth-judging machines until we get actually-thinking machines.

[-] AdrianTheFrog@lemmy.world 40 points 5 months ago

They can't. AI has hallucinations. Google has shown that AI can't rely on external sources, either.

[-] nieceandtows@programming.dev 38 points 5 months ago

If Apple could stop AI hallucination, any other AI company could also stop it, which is something they would have already done instead of making AI seem like a joke on purpose. AI hallucinations are a phenomenon nobody has control over. Why would Tim Cook have unique control over it?

[-] Kolanaki@yiffit.net 34 points 5 months ago* (last edited 5 months ago)

Here's how you stop AI from hallucinating:

Turn it off.

Because everything they output is a hallucination. Just because sometimes those hallucinations are true to life doesn't mean jack shit. Even a broken clock is right twice a day.

"Only feed it accurate information."

Even that doesn't work because it just mixes and matches every element of its input to generate a new, novel output. Which would inevitably be wrong.

[-] john_lemmy@slrpnk.net 10 points 5 months ago

Yeah, just pull the plug. The amount of time we waste talking about this shit for these assholes to play another round of monopoly is unbelievable

[-] crystalmerchant@lemmy.world 26 points 5 months ago

Of course they can't. Any product or feature is only as good as the data underneath it. Training data comes from the internet, and the internet is full of humans. Humans make and write weird shit, so the data that the LLM ingests is weird, and this creates hallucinations.

[-] Blackmist@feddit.uk 20 points 5 months ago

Seeing these systems just making shit up when they're not sure on the answer is probably the closest they'll ever come to human behaviour.

We've invented the virtual politician.

[-] flop_leash_973@lemmy.world 19 points 5 months ago

Well yeah, it's using the same dataset as MS Copilot.

Spitting out inaccurate answers (I wish the media would stop calling it something that sounds less bad, like "hallucinations") is not something that will go away until the LLM gains the ability to discern context.

[-] Brickardo@feddit.nl 19 points 5 months ago* (last edited 5 months ago)

That's what comes of not really understanding what you're doing. Most of the AI models I work with are state of the art just because they happen to work.

In my case, when I solve a PDE using finite difference schemes, there are precise mathematical conditions that guarantee you if the method is going to be stable or not. When I do the same using AI, I can't tell if my method is going to work or not unless I run it. Moreover, I've had it sometimes fail and sometimes succeed.
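The kind of "precise mathematical condition" mentioned above can be checked in one line: for the explicit (FTCS) finite-difference scheme on the 1D heat equation u_t = α·u_xx, stability is guaranteed iff α·Δt/Δx² ≤ 1/2. The numeric values below are arbitrary examples, not from any particular problem.

```python
def ftcs_is_stable(alpha, dt, dx):
    """Von Neumann stability criterion for explicit FTCS on u_t = alpha*u_xx:
    stable iff r = alpha*dt/dx^2 <= 1/2."""
    return alpha * dt / dx**2 <= 0.5

print(ftcs_is_stable(1.0, 0.0001, 0.02))  # r = 0.25 -> stable
print(ftcs_is_stable(1.0, 0.0004, 0.02))  # r = 1.0  -> unstable
```

This is exactly the guarantee the comment says AI-based solvers lack: for the classical scheme you can decide stability before running anything, rather than finding out by trial.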

It's just the way it is for now. Some clever people have to step in and sort things out, because our knowledge is not keeping up with technological resources.

[-] DudeDudenson 9 points 5 months ago* (last edited 5 months ago)

I mean, companies worldwide just jumped on the AI bandwagon like a lot of people did with the NFT one. Mostly because AI actually has solid use cases and can make a big difference in broad situations.

But since people are slapping AI on everything, it's gonna end up being another fad to raise stock prices, like firing people was last year.

Let's just hope that when all the hype blows over and the general public thinks of AI as the marketing buzzword that never quite worked right, we'll keep AI in the things it's actually useful for.

[-] CosmoNova@lemmy.world 15 points 5 months ago* (last edited 5 months ago)

I'm not exaggerating when I say there are only like a dozen true experts on generative AI on the planet, and even they're not completely sure what's going on in that black box. And as far as I'm aware, Tim Cook isn't even one of them. How would he know?

[-] DarkThoughts@fedia.io 11 points 5 months ago

I doubt anyone can for as long as "AI" is synonymous with LLMs. LLMs are just inherently unreliable because of how they work.

[-] StaySquared@lemmy.world 10 points 5 months ago

I don't know why they're trying to shove AI down our throats. They need to take their time, allow it to evolve.

this post was submitted on 12 Jun 2024
393 points (95.4% liked)
