Accountable Healthcare - Google adds image analysis capabilities to generative AI model, enabling dialogues with doctors
May 23, 2023


Dive Brief:

  • Google is expanding its medical large language model Med-PaLM 2 to enable it to analyze images and respond to questions.
  • Last month, Google revealed that its LLM, which is built on technology similar to ChatGPT's, performed at an “expert” level on questions based on the U.S. Medical Licensing Exam, achieving 85% accuracy.
  • Now, the tech company is giving Med-PaLM 2 the ability to review X-rays and mammograms, write reports about those images and respond to follow-up questions, adding to a wave of interest in generative AI.

Dive Insight:

Google has yet to show that the LLM works in healthcare settings or to make it available for use. A “small group” of Google Cloud customers will gain access to the technology later this summer and provide feedback to help identify safe, helpful use cases.

With those caveats, Med-PaLM 2 appears to be a potential step forward in the application of artificial intelligence to healthcare. Existing models either analyze images or answer questions. By combining the two capabilities in a single system, Google could enable physicians to question the conclusions of the LLM. 

“Until this year, this was not on the table,” Greg Corrado, a senior research scientist at Google, told Stat. “Now you can build a system — and this is amazing to me, honestly, I did not expect this to happen now — but you can build a system where you give it an image, it writes a report and then you can ask it follow-up questions.”

The follow-up capabilities differentiate Med-PaLM 2 from existing artificial intelligence image analysis systems, which review images and deliver conclusions without explaining their reasoning. In theory, a doctor could talk to Google’s LLM as they would with a colleague, discussing an image and its initial assessment and turning a black box into an ongoing dialogue.

It is unclear how close Google is to realizing that potential in the real world. Google shared an example showing the LLM responding to an X-ray with a three-point summary of the image and an “impression” that the image shows an acute distal radius fracture. However, in healthcare, it can take years to bring products to market, and the failure of IBM Watson Health to live up to expectations shows the pitfalls that may confront Google.

Source: healthcaredive.com