this post was submitted on 06 Aug 2025
43 points (97.8% liked)
Public Health
you are viewing a single comment's thread
Literally just came from the doctor's office (Sydney) and saw a poster about AI transcription usage on the wall.
Pros I can see:
Cons I can see:
There are so many issues here. That GPs are in a position where this tradeoff seems worth it points to a cluster of problems. That big companies are convincing them it's acceptable is another layer. The history of big tech selling this kind of data off to special interest groups (anti-abortion, real estate, etc.) is a third.
LLM companies are desperate for people to buy their products because nothing is profitable in the AI industry (other than selling the shovels like Nvidia does).
FYI, medical scribe is a job that a ton of people do; not all doctors do their own transcription or have their nursing staff do it. It's often a role people take on while trying to get into medicine, for the extra learning.
All the AI is going to do is misunderstand and hallucinate responses, as it does with everything else. At this point it's clear that LLM-based error rates exceed standard human error.
There is no advantage here other than removing yet another person's job for a worse result.