Responsible AI Deployment in Healthcare Requires Collaboration

Responsible, secure, and ethical artificial intelligence (AI) deployment in healthcare requires an informed, multidisciplinary, and collaborative approach. But the absence of industry standards and consensus on how to deploy AI responsibly has left many healthcare decision-makers looking for guidance.

Law firm DLA Piper, the Duke Institute for Health Innovation, and Mayo Clinic (among others) launched the Health AI Partnership in late December 2021 to help organizations navigate the AI software market and establish best practices for responsible AI deployment.

"There is so much excitement around AI and so much potential for doing good with AI in healthcare," David Vidal, a vice chair at Mayo Clinic's Center for Digital Health, who oversees the center's AI quality and regulation operations, explained in an interview.

"But at the same time, there's so much that people don't understand about it. Even the name AI means so many different things to different people, and there's such a rush to adopt and even a pressure to adopt when people don't know yet how to tell good AI from bad AI."

To deploy AI in a way that mitigates risk, key stakeholders must understand AI's use cases in healthcare; weigh risks surrounding security, privacy, and ethics; and commit to collaboration.

