this post was submitted on 13 Jul 2025
612 points (97.2% liked)

Comic Strips

[–] logicbomb@lemmy.world 100 points 1 day ago (41 children)

My knowledge on this is several years old, but back then, there were some types of medical imaging where AI consistently outperformed all humans at diagnosis. They used existing data to give both humans and AI the same images and asked them to make a diagnosis, already knowing the correct answer. Sometimes, even when humans reviewed the image after knowing the answer, they couldn't figure out why the AI was right. It would be hard to imagine that AI has gotten worse in the years since.

When it comes to my health, I simply want the best outcomes possible, so whatever method gets the best outcomes, I want to use that method. If humans are better than AI, then I want humans. If AI is better, then I want AI. I think this sentiment will not be uncommon, but I'm not going to sacrifice my health so that somebody else can keep their job. There's a lot of other things that I would sacrifice, but not my health.

[–] Taleya@aussie.zone 29 points 1 day ago* (last edited 1 day ago) (6 children)

That's because the medical one (particularly good at spotting cancerous cell clusters) was a pattern- and image-recognition AI, not a plagiarism machine spewing out fresh word salad.

LLMs are not AI

[–] pennomi@lemmy.world 29 points 1 day ago (5 children)

They are AI, but to be fair, it’s an extraordinarily broad field. Even the venerable A* Pathfinding algorithm technically counts as AI.
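To see why A* counts as AI in the broad sense: it's just a best-first search driven by a priority queue and a heuristic, yet the paths it finds look "intelligent." A minimal sketch on a 4-connected grid (the grid setup and function names here are illustrative, not from any particular library):

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid; cells with 1 are walls.
    Uses Manhattan distance as an admissible heuristic."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # heuristic: Manhattan distance from p to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    # Each heap entry: (f = g + h, g = cost so far, node, path taken)
    open_heap = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1  # uniform step cost
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(
                        open_heap,
                        (ng + h((r, c)), ng, (r, c), path + [(r, c)]),
                    )
    return None  # goal unreachable
```

No learning, no model of the world beyond the grid, but it was (and technically still is) textbook AI.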

[–] logicbomb@lemmy.world 17 points 1 day ago (1 children)

When I was in college, expert systems were considered AI. Expert systems can be 100% programmed by a human. As long as they're making decisions that appear intelligent, they're AI.

One example of an expert system "AI" is called "game AI." If a bot in a game appears to act like a real human, that's considered AI. Or at least it was when I went to college.
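Classic "game AI" in that sense is often nothing more than hand-written rules, with no learning anywhere. A toy sketch of the idea (the guard behavior and thresholds are made up purely for illustration):

```python
def guard_ai(distance_to_player, health):
    """Toy rule-based 'game AI': every decision is a rule a
    programmer wrote by hand, yet it can look intelligent in play."""
    if health < 25:
        return "flee"      # self-preservation rule fires first
    if distance_to_player <= 1:
        return "attack"    # in melee range
    if distance_to_player <= 10:
        return "chase"     # player spotted nearby
    return "patrol"        # default behavior
```

Rule ordering is the whole "intelligence" here: checking health before distance is what makes the guard break off an attack when wounded, which reads as human-like behavior to a player.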

[–] GreyEyedGhost@lemmy.ca 3 points 16 hours ago (1 children)

AI is kind of like Scotsmen. It's hard to find a true one, and every time you think you have, the goalposts get moved.

Now, AI is hard, both to make and to define. As for what is sometimes called AGI (artificial general intelligence), I don't think we've come close at this point.

[–] logicbomb@lemmy.world 1 points 14 hours ago (1 children)

I see the no true Scotsman fallacy as something that doesn't affect technical experts, for the most part. Like, an anthropologist would probably go with the simplest definition of birthplace, or perhaps go as far as to use heritage. But they wouldn't get stuck on the complicated reasoning in the fallacy.

Similarly, for AI experts, AI is not hard to find. We've had AI of one sort or another since the 1950s, I think. You might have it in some of your home appliances.

When talking about human level intelligence from an inanimate object, the history is much longer. Thousands of years. To me, it's more a question for philosophers than for engineers. The same questions we're asking about AI, philosophers have asked about humans. And just about every time people say modern AI is lacking in some trait compared to humans, you can find a history of philosophers asking whether humans really exhibit that trait in the first place.

I guess neuroscience is also looking into this question. But the point is, once they can explain exactly why human minds are special, we engineers won't get stuck on the Scotsman fallacy, because we'll be too busy copying that behavior into a computer. And then the non-experts will get to have fun inventing another reason that human intelligence is special.

Because that's the real truth behind Scotsman, isn't it? The person has already decided on the answer, and will never admit defeat.

[–] GreyEyedGhost@lemmy.ca 2 points 14 hours ago (1 children)

And yet, look in the comments and you will see people literally saying the examples you gave from the 50s aren't true AI. Granted, those aren't technical experts.

[–] logicbomb@lemmy.world 1 points 14 hours ago

Even I wouldn't call myself a technical expert in AI. I studied it in both my bachelor's and master's degrees and worked professionally with some types of AI, such as decision trees, for years. I also did a little professional work helping data scientists develop NN models, but we're talking weeks, maybe months.

It's really neural networks where I've not had enough experience. I never really developed NN models myself, other than small ones in my personal time, so I'm no expert, but I've studied it enough and been around it enough that I can talk intelligently about the topic with experts... or at least I could the last time I worked with it, which was around 5 years ago.

And that's why it's so depressing to look at these comments you're talking about. People who vastly oversell their expertise and spread misinformation because it fits their agenda. I also think we need to protect people from generative AI, but I'm not willing to ignore facts or lie to do so.
