You need to ask yourself two questions:
- Who's pushing AI - be it in healthcare or any other sector?
- For what purpose?
It doesn't take a genius to figure out it's not about smarter care, and it's not in your best interest either.
Actually, there are doctors who want AI; they just may not want it the way most tech firms are currently pushing it. Each hospital system has a huge database or two or three with massive amounts of patient data. Doctors have talked about setting up data scientists to sort through that data for more effective outcomes to various health issues. It turns out it's too much data for a team of data scientists to sort through. AI might help with that. Just not the LLMs being pushed today.
Some of the challenges: How do you pull that data without personal identification or payment info? Keep in mind John Doe in Somewhere, Somestate might be the only person in that state with Obscure Condition, so he would be easily identifiable. That matters because once you have data that may support better outcomes, you'd definitely want to share it with other healthcare systems and government health agencies. Also, how do you use it ethically? That's something none of the current mainstream AI companies are really going to help you with. And how do you share this with insurance companies without them punishing individual patients?
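The re-identification risk described above (one patient with a rare condition in a whole state) is exactly what the k-anonymity measure captures: if any combination of quasi-identifiers appears fewer than k times, those rows can be singled out even with names stripped. A minimal sketch, assuming toy (state, condition) tuples as quasi-identifiers; every name and value here is invented for illustration, not real patient data:

```python
from collections import Counter

# Hypothetical toy records: (state, condition) act as quasi-identifiers.
records = [
    ("StateA", "Diabetes"),
    ("StateA", "Diabetes"),
    ("StateA", "Obscure Condition"),  # unique combination -> re-identifiable
    ("StateB", "Diabetes"),
    ("StateB", "Diabetes"),
]

def k_anonymity(rows):
    """Smallest equivalence-class size over the quasi-identifier tuples."""
    counts = Counter(rows)
    return min(counts.values())

def risky_rows(rows, k=2):
    """Rows whose quasi-identifier combination appears fewer than k times."""
    counts = Counter(rows)
    return [r for r in rows if counts[r] < k]

print(k_anonymity(records))   # 1 -> the dataset is only 1-anonymous
print(risky_rows(records))    # the lone "Obscure Condition" row
```

In practice a release pipeline would generalize or suppress such rows (e.g. coarsen the location, bucket the diagnosis) until every combination meets the chosen k before any data leaves the hospital system.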
You make a really good point. AI could be helpful, but only if it’s used in the right way. There’s just too much data for people to go through on their own, and AI might help spot patterns that could improve care.
But like you said, it has to be done carefully. Patient privacy, ethical use of data, and making sure insurance companies don’t misuse the info are really important. AI should support doctors, not replace their judgment.
Maybe the best way forward is letting AI do the heavy data work, while doctors use their experience and judgment to decide what it really means. It’ll be interesting to see how we find that balance.
I’m curious—how do you see AI shaping the future of healthcare?
Exactly the way United Healthcare uses AI: as a way to obscure culpability in mass murder through claim rejection and divestment from essential healthcare systems, so that human experts can be replaced with bullshit that constantly fails, hurts, and eventually kills vulnerable people.
The AI functions solely as a tool to rationalize the dehumanization and devaluing of human life for profit.
That’s a really important concern to raise. There’s definitely a risk that AI could be used in harmful ways if profit is put ahead of people, especially when it comes to something as critical as healthcare.
At the same time, I think the technology itself isn’t the problem — it’s how organizations choose to use it. If it’s only used to cut costs and deny care, that would be damaging. But if it’s used to support doctors, catch errors, and make care more accessible (while still keeping human oversight), it could be a positive thing.
It really comes down to having strong ethics, transparency, and rules in place to make sure AI is used to help patients, not harm them.
did you write this post using AI?
it has pretty strong AI slop vibes, including a good ol' emdash.
it's also nearly identical to this post that you made 2 weeks ago.
Haha, fair point — I can see why it might sound a bit “AI-like.” 😅 I actually just wrote it myself, but I guess my writing style ends up looking a bit polished sometimes. I wasn’t trying to spam or repeat anything — just wanted to share some thoughts I’ve had for a while.
Appreciate the feedback though — I’ll try to keep it a bit more natural next time.
> I guess my writing style ends up looking a bit polished sometimes
uh-huh..."too polished" is not the thing that's causing you to fail the Turing test. and your emdash count keeps rising, btw.
> — just wanted to share some thoughts I’ve had for a while.
and what thoughts are those, exactly?
your original post followed the pattern of every AI slop "discussion prompt" post I've ever seen - 3 paragraph structure that ends with "in conclusion, it's a land of contrasts — what do you think?"
and all your other comments in this thread are just variations on "yeah there are positives and negatives — we'll need to think carefully about it"
humans who want to talk about a thing...usually have opinions about that thing. often strong opinions, and often based on specifics about the thing. do you have any?