They can't possibly train for every possible scenario.
AI: "Pregnant, 94% confidence"
Patient: "I confess, I shoved an umbrella up my asshole. Don't send me to a gynecologist please!"
I want to see Dr House make a rude comment to the chatbot that replaced all of his medical staff
At first I thought this was an open house where the visitors were slowly replaced by AI. Honestly, I thought this was speaking to the fact that AI could replace even the housing industry; imagine the amount of land that would be bought up if it were given the resources to generate wealth off of unused land. Now imagine this scenario, but replace the "crime" with anything else.
This IS our future if we let it be.
My knowledge on this is several years old, but back then, there were some types of medical imaging where AI consistently outperformed all humans at diagnosis. They used existing data to give both humans and AI the same images and asked them to make a diagnosis, already knowing the correct answer. Sometimes, even when humans reviewed the image after knowing the answer, they couldn't figure out why the AI was right. It would be hard to imagine that AI has gotten worse in the following years.
When it comes to my health, I simply want the best outcomes possible, so whatever method gets the best outcomes, I want to use that method. If humans are better than AI, then I want humans. If AI is better, then I want AI. I think this sentiment will not be uncommon, but I'm not going to sacrifice my health so that somebody else can keep their job. There's a lot of other things that I would sacrifice, but not my health.
To expand on this a bit: AI in medicine is getting super good at cancer screening in specific use cases.
People now heavily associate it with LLMs hallucinating and speaking out of their ass, but forget how AI completely destroys people at chess. AI is already beating top physics-based models at weather prediction and hurricane path forecasting, and it has done the same for protein folding and a lot of other use cases.
On specific, well-defined problems with a measurable outcome, AI can potentially become way more accurate than any human. It's not so much about removing humans as about handing humans tools to make medicine both more effective and more efficient at the same time.
One of the big issues was that while these systems had very good rates of correct diagnosis, they also had higher false-positive rates. A false cancer diagnosis can seriously hurt people, for example.
IIRC the reason it still isn't widely used is that even though it was trained by highly skilled professionals, it had some pretty bad biases around race and gender, and was only as accurate as claimed for white male patients.
Plus the publicly released results were fairly cherry picked for their quality.
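On the bias point above: one standard check is to break evaluation accuracy out per demographic subgroup instead of reporting a single overall number. A minimal sketch with toy, made-up data (pandas assumed, column names hypothetical):

```python
import pandas as pd

# Hypothetical per-patient evaluation records (toy data): whether the
# model got each case right, plus a demographic attribute.
results = pd.DataFrame({
    "correct": [1, 1, 1, 0, 1, 0, 0, 1],
    "sex":     ["M", "M", "M", "M", "F", "F", "F", "F"],
})

# A single overall accuracy hides the gap...
print("overall:", results["correct"].mean())     # 0.625

# ...while a per-group breakdown exposes it.
print(results.groupby("sex")["correct"].mean())  # M: 0.75, F: 0.50
```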
Yeah, there were also several stories where the AI just detected that all the pictures of the illness had e.g. a ruler in them, whereas the control pictures did not. It's easy to produce impressive results when your methodology sucks. And unfortunately, those results will get reported on before peer reviews are in and before others have attempted to reproduce the results.
That reminds me: pretty sure in at least one of these AI medical tests, the model was reading metadata on the input image that included the diagnosis.
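For what it's worth, that particular failure mode is avoidable if the training pipeline deliberately throws the header away. A hedged sketch, assuming the scans are DICOM files and pydicom is available (the function name is made up):

```python
import numpy as np
import pydicom

def pixels_only(path):
    """Load a DICOM scan and keep ONLY the pixel data, so the model
    cannot 'cheat' by reading diagnosis codes or other header tags."""
    ds = pydicom.dcmread(path)
    img = ds.pixel_array.astype(np.float32)
    # Normalize per image. Note: burned-in annotations (rulers, skin
    # markers) live in the pixels themselves and still need to be
    # cropped or masked separately.
    return (img - img.mean()) / (img.std() + 1e-8)
```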
Medical sciences in general have terrible gender and racial biases. My basic understanding is that it has got better in the past 10 years or so, but past scientific literature is littered with inaccuracies that we are still going along with. I'm thinking drugs specifically, but I suspect it generalizes.
That's because the medical one (particularly good at spotting cancerous cell clusters) was a pattern- and image-recognition AI, not a plagiarism machine spewing out fresh word salad.
LLMs are not AI
They are AI, but to be fair, it’s an extraordinarily broad field. Even the venerable A* Pathfinding algorithm technically counts as AI.
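For anyone curious, A* really is just a priority-queue graph search, with nothing learned anywhere, yet it historically counts as AI. A minimal sketch in Python (the grid example and names are made up for illustration):

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Minimal A* search: returns a shortest path from start to goal.

    neighbors(node) yields (next_node, step_cost) pairs;
    heuristic(node, goal) must never overestimate the true cost.
    """
    frontier = [(heuristic(start, goal), 0, start, [start])]
    best_cost = {}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in best_cost and best_cost[node] <= cost:
            continue  # already reached this node more cheaply
        best_cost[node] = cost
        for nxt, step in neighbors(node):
            new_cost = cost + step
            heapq.heappush(
                frontier,
                (new_cost + heuristic(nxt, goal), new_cost, nxt, path + [nxt]),
            )
    return None  # goal unreachable

# Example: 4-connected 5x5 grid with a Manhattan-distance heuristic.
def grid_neighbors(node):
    x, y = node
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < 5 and 0 <= ny < 5:
            yield (nx, ny), 1

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
print(a_star((0, 0), (4, 4), grid_neighbors, manhattan))
```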
When I was in college, expert systems were considered AI. Expert systems can be 100% programmed by a human. As long as they're making decisions that appear intelligent, they're AI.
One example of an expert system "AI" is called "game AI." If a bot in a game appears to be acting similar to a real human, that's considered AI. Or at least it was when I went to college.
The important thing to know here is that those AIs were trained by very experienced radiologists, physicians who specialize in reading imaging. The AIs wouldn't have this capability if humans hadn't trained them.
Also, the imaging that AI performs well with is fairly specific, and there are many kinds of imaging techniques and diagnostic applications that the AI is still very bad at.
Yeah this is one of the few tasks that AI is really good at. It's not perfect and it should always have a human doctor to double check the findings, but diagnostics is something AI can greatly assist with.
It's called progress because the cost in frame 4 is just a tenth what it was in frame 1.
Of course prices will still increase, but think of the PROFITS!
Also, there'll be no one to blame for mistakes! Failures are just software errors and can be shrugged off! Increase profits and pay less for insurance! What's not to like?
I hate AI slop as much as the next guy but aren’t medical diagnoses and detecting abnormalities in scans/x-rays something that generative models are actually good at?
They don't use generative models for this. The AIs that do this kind of work are trained on carefully curated data and have a very narrow scope that they are good at.
Yeah, those models are referred to as "discriminative AI". Basically, if you heard about "AI" from around 2018 until 2022, that's what was meant.
That brings up a significant problem - there are widely different things that are called AI. My company's customers are using AI for biochem and pharm research, protein folding, and other science stuff.
My company cut funding for traditional projects and has prioritized funding for AI projects. So now anything that involves any form of automation is "AI".
Image-categorisation AI, i.e. convolutional neural networks, has been in use since well before LLMs and other generative AI. Some medical imaging machines use this technology to highlight features such as specific organs in a scan. CNNs could likely be trained to be extremely proficient at reading X-rays, CT, and MRI scans, but these are generally the less operator-dependent types of scan, though they can get complicated. An ultrasound, for example, is highly dependent on the skill of the operator, and in certain circumstances things can be made to look worse or better than they are.
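For illustration, here's roughly what a tiny CNN classifier of that kind looks like in PyTorch. The layer sizes and the 64x64 grayscale input are made up for the sketch, not taken from any real medical model:

```python
import torch
import torch.nn as nn

# Toy CNN for binary scan classification: conv + pool blocks extract
# local image features, then a linear layer maps them to two classes.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # assumes 64x64 inputs -> 16x16 after pooling
)

x = torch.randn(8, 1, 64, 64)    # a batch of 8 fake 64x64 scans
logits = model(x)                # shape (8, 2): "abnormal" vs "normal" scores
print(logits.shape)
```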
I don't know why the technology hasn't become more widespread in the domain. Probably because radiologists are paid really well and have a vested interest in preventing it; they're not going to want to tag the images for their replacement. It's probably also because medical data is hard to get permission for: to ethically train such a model, you would need to ask every patient, for every type of scan, whether their images can be used for medical research, which is just another form/hurdle for everyone to jump over.
It's certainly not as bad as the problems generative AI tend to have, but it's still difficult to avoid strange and/or subtle biases.
Very promising technology, but likely to be good at diagnosing problems in Californian students and very hit-and-miss with demographics that don't tend to sign up for studies in Silicon Valley.
Basically, AI is a decent answer to the needle-in-a-haystack problem. Sure, a human with infinite time and attention can find the needles, perhaps more accurately than an AI could, but practically speaking, if there are just 10 needles in a haystack, it's considered a lost cause to find any of them.
With AI, it might flag 30 "needles" in that same stack, of which only 7 are real, meaning the AI finds more wrong answers than right ones; but you ultimately end up with 7 needles you would otherwise have missed entirely, so you come out ahead.
So long as you don't let an AI rule out review of a scan that a human really would have reviewed, it seems like a win: more scans overall get a decent review, and you maybe catch things earlier through otherwise-impractical preventative screening.
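Putting the standard names on those numbers: flagging 30 items to recover 7 of 10 real needles is low precision but decent recall, which is exactly the trade described. A quick worked check in Python, using the figures from the comment above:

```python
# Worked numbers from the needle-in-a-haystack example above:
# the AI flags 30 items, 7 of which are real needles, out of 10 total.
true_needles   = 10   # needles actually in the stack
flagged        = 30   # items the AI flags for review
true_positives = 7    # flagged items that really are needles

precision = true_positives / flagged       # 7/30, about 0.23
recall    = true_positives / true_needles  # 7/10 = 0.70

print(f"precision: {precision:.0%}")  # ~23%: most flags are false alarms
print(f"recall:    {recall:.0%}")     # 70%, versus ~0% when nobody searches
```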
Despite what the luddites would have you believe, AI is an amazing assistive tool when paired with a human reviewing the results.
They skipped the phase where all the doctors were replaced by NPs and PAs.