A new study published in the British Medical Journal reveals that general practitioners (GPs) are using commercially available technologies such as ChatGPT to generate documentation and even suggest alternative diagnoses for patients.
It is the largest survey of its kind and reveals, for the first time, the extent of AI use in UK GP practices.
Dr. Charlotte Blease, an associate professor at Uppsala University in Sweden and the study’s author, described the degree of AI use by medical professionals as “surprising” because “doctors haven’t received formal training on these tools and they’re still very much a regulatory black hole”.
Many of ChatGPT’s known problems, such as its tendency to “hallucinate”, or invent information, could be dangerous for patients.
According to Dr. Blease, “patient privacy poses maybe the biggest risk. Our doctors may unintentionally be gifting patients’ highly sensitive information to these tech companies.”
The tools might also embed biases in the care doctors provide, putting some patients at risk of unfair clinical judgments.
The researchers polled more than a thousand physicians; one in five reported using generative AI in their work, and of those, 28 percent said they use it to suggest alternative diagnoses for their patients.