
@ShahidNShah
Posted May 5, 2022 from medicalfuturist.com
Medical A.I. will become competent enough. But tricking patients into believing they are interacting with a human practitioner is an entirely different issue.
We are slowly growing accustomed to interacting with A.I., whether it's the assistant on our phones or a chatbot at a random support service that introduces itself as a person but then, strangely, phrases the same question exactly the same way three times.
Dr. A.I. and my trust issues
And based on how some companies and service providers like to trick you into believing their chatbot is a real person, we can pretty much expect the same for Dr. A.I., provided by your medical insurance company to answer your questions about the symptoms you experience.
But this might be a bit too optimistic, and I think we had better prepare ourselves for a brave new world where spotting a deep fake human becomes an important skill.
Thinking that I'm talking to a doctor and realising mid-conversation that this is not the case is the worst scenario I can imagine regarding my trust-in-healthcare issues.
So, how can we prepare for a future in which we might need the skill to tell whether we are talking to a deep fake doctor instead of a real one?
1. Interact with chatbots and virtual assistants, and take mental notes of the telltale signs that help you spot they are not human (a toy sketch of one such sign follows this list).
2. Practise how to spot deep fakes
Norton has a helpful list on how to spot deep fake videos; a crude, automated proxy for one of those telltale signs is sketched in the second example below.
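For the first skill, one concrete telltale sign is the one mentioned earlier: a "person" who phrases the same question exactly the same way several times. Purely as a minimal illustration (not a real detector), here is a toy Python sketch; the function name, the threshold, and the sample transcript are all hypothetical:

```python
from collections import Counter

def repeated_phrasings(agent_messages: list[str], threshold: int = 3) -> list[str]:
    """Flag messages a 'support agent' sent verbatim several times --
    the telltale repetition mentioned above. Toy heuristic, not a detector."""
    counts = Counter(msg.strip().lower() for msg in agent_messages)
    return [msg for msg, n in counts.items() if n >= threshold]

# Hypothetical transcript: the same question phrased exactly the same way thrice.
transcript = [
    "Could you describe your symptoms?",
    "I see. Could you describe your symptoms?",
    "Could you describe your symptoms?",
    "Could you describe your symptoms?",
]
print(repeated_phrasings(transcript))
# -> ['could you describe your symptoms?']
```

A human agent naturally varies their wording; verbatim repetition is cheap to check for and is exactly the kind of mental note worth taking.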
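For the second skill, lists like Norton's include signs such as unnatural blinking. As a hedged sketch only, here is how one such sign could be approximated automatically, assuming OpenCV is installed; the function name, thresholds, and file name are made up for illustration, and the result would be a weak hint at best, not proof of a fake:

```python
import cv2

# Illustrative helper: fraction of sampled frames in which an eye detector
# fires. Unnatural blinking is one telltale sign on lists like Norton's;
# this ratio is a very crude proxy for it.
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def eye_visibility_ratio(video_path: str, max_frames: int = 300) -> float:
    cap = cv2.VideoCapture(video_path)
    frames = frames_with_eyes = 0
    while frames < max_frames:
        ok, frame = cap.read()
        if not ok:
            break  # end of video
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        if len(eyes) > 0:
            frames_with_eyes += 1
        frames += 1
    cap.release()
    return frames_with_eyes / frames if frames else 0.0

# A ratio near 1.0 over a long clip (eyes detected in virtually every frame,
# i.e. almost no blinking) would be one weak hint worth a closer look.
print(eye_visibility_ratio("interview_clip.mp4"))
```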