this post was submitted on 16 Aug 2025
813 points (99.0% liked)

People Twitter

[–] MystikIncarnate@lemmy.ca 5 points 9 hours ago

To be blunt, if you trained a GPT model on all the medical information currently available, it might actually be a good starting point for most doctors to "collaborate" with and formulate theories on more difficult cases.

However, since GPT and most other LLMs are trained on information generally available on the Internet, they're not going to come up with anything that could be trusted in any field where bad decisions could mean life or death.

It's basically advanced text prediction based on whatever intent statements you made in your prompt. So if you could feed a bunch of symptoms into a machine learning model that's been trained on the sum of all relatively recent medical texts and casework, that would give some relevant results.
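To make the "advanced text prediction" point concrete, here's a toy sketch. It's a bigram counter, which is vastly simpler than any real LLM (the corpus and words here are made up for illustration), but it shows the same core idea: the "prediction" is just whatever token most often followed your words in the training data, with no understanding behind it.

```python
from collections import Counter, defaultdict

# Tiny made-up training corpus, standing in for "all the text it was trained on".
corpus = "patient reports fever and cough . patient reports fever and fatigue".split()

# Count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("reports"))  # "fever" -- frequency, not diagnosis
```

The output looks plausible only because the training text made it statistically likely, which is exactly why it can't be trusted for life-or-death calls.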

Since ChatGPT isn't that, heh, I doubt it would even help someone pass medical school, quite bluntly... apart from the boilerplate stuff: filling in words and sentences that just take time to write out and don't contribute in any significant way to the content of their work (e.g. introduction paragraphs, sentence structures for presenting information, conclusions, etc.).

There's a lot of good, time-saving stuff ML in its current form can do; diagnostics, not so much.