• thehatfox@lemmy.worldOP · 2 days ago

    I think the difference here is that medical reference material is based on a long process of proven research. It can be trusted as a reliable source of information.

    AI tools, however, are so new that they haven’t faced anything like the same level of scrutiny. For now they can’t be considered reliable, and their use should be kept within proper medical trials until we understand them better.

    Yes, human error will always be an issue, but putting that on top of the currently shaky foundations of AI only compounds the problem.

    • ShareMySims@sh.itjust.works · 2 days ago

      Let’s not forget that AI is known not only for failing to provide sources, or even falsifying them, but now also for flat-out lying.

      Our GPs are already mostly running on a tick-box system where they feed your information (but only the stuff on the most recent page of your file; looking any further is too much like hard work) into their programme and it, rather than the patient or a trained physician, tells them what we need. Remove GPs any further from their patients, and they’re basically just giving the same generic and often wildly incorrect advice we could find on WebMD.