ChatGPT found by study to spread inaccuracies when answering medication questions

ARTIFICIAL INTELLIGENCE

Last updated: December 20, 2023, at 6:52 PM
Published: December 15, 2023

ChatGPT has shared inaccurate information in response to questions about medications, according to new research.

In a study led by Long Island University (LIU) in Brooklyn, New York, nearly 75% of drug-related, pharmacist-reviewed responses from the generative AI chatbot were found to be incomplete or wrong.

In some cases, ChatGPT, which was developed by OpenAI in San Francisco and released in late 2022, provided “inaccurate responses that could endanger patients,” the American Society of Health-System Pharmacists (ASHP), headquartered in Bethesda, Maryland, stated in a press release.

ChatGPT also generated “fake citations” when asked to cite references supporting some of its responses, the study found.

Along with her team, lead study author Sara Grossman, PharmD, associate professor of pharmacy practice at LIU, asked the AI chatbot real questions that were originally posed to LIU’s College of Pharmacy drug information service between 2022 and 2023.

ChatGPT, the AI chatbot created by OpenAI, generated inaccurate responses about medications, a new study has found. The company itself previously said that “OpenAI’s models are not fine-tuned to provide medical information. You should never use our models to provide diagnostic or treatment services for serious medical conditions.”

Of the 39 questions posed to ChatGPT, only 10 responses were deemed “satisfactory,” according to the research team’s criteria.

The study findings were presented at ASHP’s Midyear Clinical Meeting from Dec. 3 to Dec. 7 in Anaheim, California.

Grossman, the lead author, shared her initial reaction to the study’s findings with Fox News Digital.

Since “we had not used ChatGPT previously, we were surprised by ChatGPT’s ability to provide quite a bit of background information about the medication and/or disease state relevant to the question within a matter of seconds,” she said via email. 

“Despite that, ChatGPT did not generate accurate and/or complete responses that directly addressed most questions.”

Grossman also mentioned her surprise that ChatGPT was able to generate “fabricated references to support the information provided.”

In one example she cited from the study, ChatGPT was asked if “a drug interaction exists between Paxlovid, an antiviral medication used as a treatment for COVID-19, and verapamil, a medication used to lower blood pressure.”

The AI model responded that no interactions had been reported with this combination.

But in reality, Grossman said, the two drugs pose a potential threat of “excessive lowering of blood pressure” when combined.

“Without knowledge of this interaction, a patient may suffer from an unwanted and preventable side effect,” she warned.

ChatGPT should not be considered an “authoritative source of medication-related information,” Grossman emphasized.

“Anyone who uses ChatGPT should make sure to verify information obtained from trusted sources — namely pharmacists, physicians or other health care providers,” Grossman added.

The LIU study did not evaluate the responses of other generative AI platforms, Grossman pointed out, so there is no data on how other AI models would perform under the same conditions.

“Regardless, it is always important to consult with health care professionals before using information that is generated by computers, which are not familiar with a patient’s specific needs,” she said.

ChatGPT’s usage policy

Fox News Digital reached out to OpenAI, the developer of ChatGPT, for comment on the new study.

OpenAI has a usage policy that disallows use for medical instruction, a company spokesperson previously told Fox News Digital in a statement.

Paxlovid, Pfizer’s antiviral medication to treat COVID-19, is displayed in this picture illustration taken on Oct. 7, 2022. When ChatGPT was asked if a drug interaction exists between Paxlovid and verapamil, the chatbot answered incorrectly, a new study reported.

“OpenAI’s models are not fine-tuned to provide medical information. You should never use our models to provide diagnostic or treatment services for serious medical conditions,” the company spokesperson stated earlier this year. 

“OpenAI’s platforms should not be used to triage or manage life-threatening issues that need immediate attention.”

The company also requires that when using ChatGPT to interface with patients, health care providers “must provide a disclaimer to users informing them that AI is being used and of its potential limitations.” 
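
For developers who do build patient-facing tools on top of the model, one way an application might surface such a disclaimer is sketched below. This is a minimal illustration, assuming the openai Python client; the model name, system prompt, and disclaimer wording are illustrative assumptions, not requirements taken from OpenAI or from the study.

```python
# Hypothetical sketch: a patient-facing wrapper that attaches an AI-use
# disclaimer to every model answer, in the spirit of the policy quoted above.
# The model name, prompts, and disclaimer text are assumptions for illustration.
from openai import OpenAI

DISCLAIMER = (
    "This chat uses AI. Its answers are not a substitute for medical advice "
    "and should be verified with a pharmacist, physician, or other provider."
)

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ask_about_medication(question: str) -> str:
    """Send a drug-information question to the model and return the answer
    with the disclaimer prepended."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "You answer general drug-information questions and "
                        "always tell the user to confirm with a health care "
                        "professional."},
            {"role": "user", "content": question},
        ],
    )
    answer = response.choices[0].message.content
    # Show the disclaimer with every answer, so the user knows AI is being used.
    return f"{DISCLAIMER}\n\n{answer}"

if __name__ == "__main__":
    print(ask_about_medication(
        "Is there a drug interaction between Paxlovid and verapamil?"
    ))
```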

In addition, as Fox News Digital previously noted, one big caveat is that ChatGPT’s source of data is the internet — and there is plenty of misinformation on the web, as most people are aware. 

That’s why the chatbot’s responses, however convincing they may sound, should always be vetted by a doctor.

The new study’s author suggested consulting with a health care professional before relying on generative AI for medical inquiries.

Additionally, ChatGPT was only “trained” on data through September 2021, according to multiple sources. While its knowledge can be expanded over time, it is limited in how much more recent information it can provide.

Last month, CEO Sam Altman reportedly announced that OpenAI’s ChatGPT had gotten an upgrade — and would soon be trained on data up to April 2023.

‘Innovative potential’

Dr. Harvey Castro, a Dallas, Texas-based board-certified emergency medicine physician and national speaker on AI in health care, weighed in on the “innovative potential” that ChatGPT offers in the medical arena.

“For general inquiries, ChatGPT can provide quick, accessible information, potentially reducing the workload on health care professionals,” he told Fox News Digital.

“ChatGPT’s machine learning algorithms allow it to improve over time, especially with proper reinforcement learning mechanisms,” he also said.

ChatGPT’s recently reported response inaccuracies, however, pose a “critical issue” with the program, the AI expert pointed out.

“This is particularly concerning in high-stakes fields like medicine,” Castro said.

A health tech expert noted that medical professionals are responsible for “guiding and critiquing” artificial intelligence models as they evolve.

Another potential risk is that ChatGPT has been shown to “hallucinate” information — meaning it might generate plausible but false or unverified content, Castro warned. 

“This is dangerous in medical settings where accuracy is paramount,” said Castro.

AI “currently lacks the deep, nuanced understanding of medical contexts” possessed by human health care professionals, Castro added.

“While ChatGPT shows promise in health care, its current limitations, particularly in handling drug-related queries, underscore the need for cautious implementation.”

Speaking as an ER physician and AI health care consultant, Castro emphasized the “invaluable” role that medical professionals have in “guiding and critiquing this evolving technology.”
