PubMed, the online archive of medical research, currently lists 4,018 publications indexed under the term “ChatGPT.” From reading pathology slides to drafting responses to patient messages, researchers have been putting AI and large language models (LLMs) to work on a wide range of tasks. But a new study published in the Journal of the American Medical Association proposes that AI could stand in for a patient in conversations about death. That is a step too far.
The authors propose building an artificial intelligence “chatbot” to speak on behalf of a patient who cannot speak for themselves. As they put it, “AI might figure out what matters to patients and anticipate what they would do by combining individual-level behavioral data—inputs like social media posts, church attendance, donations, travel records, and past health care decisions.”
The AI could then communicate, in conversational language, what the patient “would have wanted,” helping guide end-of-life decisions.
As neurosurgeons who treat patients with brain tumors, strokes, and traumatic brain injuries, we regularly have these conversations with families about their loved ones’ final days. These wrenching discussions are a frequent, demanding, and deeply meaningful part of our work.