Google’s Medical AI Was Super Accurate in a Lab. Real Life Was a Different Story.

Posted Jul 26, 2021
Existing rules for deploying AI in clinical settings, such as the standards for FDA clearance in the US or a CE mark in Europe, focus primarily on accuracy. There are no explicit requirements that an AI must improve the outcome for patients, largely because such trials have not yet been run. But that needs to change, says Emma Beede, a UX researcher at Google Health: “We have to understand how AI tools are going to work for people in context—especially in health care—before they’re widely deployed.”

When it worked well, the AI did speed things up. But it sometimes failed to give a result at all. Like most image recognition systems, the deep-learning model had been trained on high-quality scans; to ensure accuracy, it was designed to reject images that fell below a certain threshold of quality.
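The quality gate described above can be sketched in a few lines. This is a hypothetical illustration, not Google's actual pipeline: the sharpness measure (variance of a discrete Laplacian, a common blur heuristic), the threshold value, and the function names are all assumptions.

```python
import numpy as np

# Assumed cutoff for illustration; a real system tunes this on validation data.
QUALITY_THRESHOLD = 100.0

def sharpness_score(image: np.ndarray) -> float:
    """Variance of a discrete Laplacian response: low values suggest blur."""
    lap = (
        -4 * image[1:-1, 1:-1]
        + image[:-2, 1:-1] + image[2:, 1:-1]
        + image[1:-1, :-2] + image[1:-1, 2:]
    )
    return float(lap.var())

def grade_scan(image: np.ndarray, model) -> str:
    """Reject low-quality scans rather than risk an unreliable prediction."""
    if sharpness_score(image) < QUALITY_THRESHOLD:
        return "ungradable"  # the scan must be retaken or referred to a human
    return model(image)
```

The trade-off the article describes falls out of a design like this: the gate protects the model's accuracy on paper, but every "ungradable" result pushes work back onto the clinic.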
