
Researchers call ChatGPT maker OpenAI’s transcription tool used in hospitals ‘problematic’


New Delhi: Whisper, ChatGPT maker OpenAI’s AI-powered transcription tool touted for its accuracy, has come under scrutiny for its tendency to fabricate information, a report has said. Experts have called the tool problematic because it is used across a slew of industries worldwide to translate and transcribe interviews.

According to a report by news agency AP, experts warn that these fabrications, a phenomenon known as “hallucinations”, can include false medical information, violent rhetoric and racial commentary, and pose serious risks, especially in sensitive domains like healthcare.

Despite OpenAI’s warnings against using Whisper in high-risk settings, the tool has been widely adopted across various industries, including healthcare, where it is being used to transcribe patient consultations.

What researchers have to say

According to Alondra Nelson, who led the White House Office of Science and Technology Policy for the Biden administration until last year, such mistakes could have “really grave consequences,” particularly in hospital settings.

“Nobody wants a misdiagnosis. There should be a higher bar,” said Nelson, a professor at the Institute for Advanced Study in Princeton.

Whisper can invent things that haven’t been said

Researchers have also found that Whisper can invent entire sentences or chunks of text, with studies showing a significant prevalence of hallucinations in both short and long audio samples.

A University of Michigan researcher conducting a study of public meetings found hallucinations in eight out of every 10 audio transcriptions he inspected. These inaccuracies raise concerns about the reliability of Whisper’s transcriptions and the potential for misinterpretation or misrepresentation of information.

Experts and former OpenAI employees are calling for greater transparency and accountability from the company.

“This seems solvable if the company is willing to prioritise it. It’s problematic if you put this out there and people are overconfident about what it can do and integrate it into all these other systems,” said William Saunders, a San Francisco-based research engineer who quit OpenAI in February over concerns with the company’s direction.

OpenAI acknowledges the issue and states that it is continually working to reduce hallucinations.

  • Published On Oct 28, 2024 at 11:15 AM IST
