3 Ways the FDA’s “Guiding Principles” for AI/ML Falls Short on Concrete Industry Practices

The FDA, Health Canada, and the United Kingdom's Medicines and Healthcare products Regulatory Agency (MHRA) recently released their "Guiding Principles for Good Machine Learning Practice" to help the AI/ML industry balance patient safety with continuing innovation in new devices and AI/ML algorithms. The 10 principles reflect a growing consensus on how industry can strike that balance, and several of them (numbers 1, 3, 4, and 5) are at least partly intended to address the bias that can degrade the performance of AI/ML tools across diverse populations.

As broad guideposts for the industry, the principles developed by the FDA and related bodies are an important contribution. For those in the trenches of device development, however, they may not be enough to ensure patient safety, guard against bias, or address the other ethical issues that emerge in practice. The field is still seeing a large number of models enter the marketplace with problematic biases, and industry attempts to build trust through tools such as explainable AI have demonstrated mixed results, at best.
