
AI in Healthcare Devices and the Challenge of Data Privacy – with Dr. Ankur Sharma at Bayer


In healthcare, patient data is the foundation of diagnosis, treatment, and trust. With digital health systems and AI tools becoming central to care delivery, healthcare providers collect exponentially more sensitive patient data. This data explosion has also expanded the attack surface, with countries adopting different frameworks and approaches to managing healthcare data privacy and security.

The leading global data privacy frameworks — GDPR (Europe), HIPAA and CCPA (U.S.), the APEC Privacy Framework (Asia-Pacific), and POPIA (South Africa) — share the goal of safeguarding patient data, but differ in their enforcement, scope, and technological maturity.

A 2025 paper in the Digital Health Journal explains that these disparities result in fragmented compliance practices, inconsistent breach responses, and challenges in cross-border data sharing.

The paper also highlights that limited IT infrastructure and semantic incompatibilities further weaken protection systems. As a result, healthcare organizations struggle to maintain security, interoperability, and patient trust simultaneously—a challenge compounded by the rise of AI and digital health technologies that increase data volume and complexity.

This issue becomes even more complex when third-party vendors are involved. Hospitals and clinics often rely on external companies for electronic health records, cloud storage, analytics, and AI solutions, each of which introduces additional layers of data access, processing, and potential vulnerability.

According to Q3 2025 statistics published by the HIPAA Journal, business associates (third-party vendors, including AI developers) were responsible for 12 reported breaches affecting 88,141 individuals in August 2025 alone, highlighting the significant role third parties play in data breach exposure.

On a recent episode of the ‘AI in Business’ podcast, Emerj Editorial Director Matthew DeMello sat down with Dr. Ankur Sharma, Head of Medical Affairs, Medical Devices and Digital Radiology at Bayer, to discuss the challenges and opportunities of AI adoption in healthcare, covering regulatory frameworks, data privacy and governance, and clinical trust.

This conversation highlights two critical insights healthcare organizations must consider to adopt and scale AI:

  • Standardize governance to unlock safe AI collaboration: Unifying data governance and regulations to enable secure sharing and accelerate generative AI (GenAI) adoption in healthcare.
  • Bridge reimbursement gaps for scalable AI adoption: Improving model transparency and creating reimbursement models that reward diagnostic and efficiency gains to speed up AI adoption in healthcare.

Listen to the full episode below:

Guest: Dr. Ankur Sharma, Head of Medical Affairs, Medical Devices and Digital Radiology at Bayer

Expertise: Clinical Research, Medical Devices, Medicine

Brief Recognition: Ankur leads the Medical Affairs Capability Cluster at Bayer, overseeing all medical devices and software-as-a-medical-device (SaMD), including AI-driven solutions. He is a medical professional with extensive experience across the medical device lifecycle, from development to post-launch. He also brings a strong background in clinical research. Ankur holds a degree in Mechanical Engineering, a Bachelor of Medicine and Surgery, and has pursued advanced studies in Clinical Design and Research at the University of California, San Diego.

Standardize Governance to Unlock Safe AI Collaboration

Ankur begins by explaining the practical challenges of data management. Within a healthcare system, strict rules govern the access and sharing of patient information. But even within a single institution, data is spread across multiple systems—each managed by different vendors obligated to protect it. 

The problem compounds because third-party companies outside the healthcare institution often develop AI tools. Getting all these separate entities—hospitals, data system providers, and AI developers—to collaborate and share data safely is a significant obstacle.

Dr. Sharma points out that in the absence of clear, standardized regulations, healthcare institutions often err on the side of caution by being overly restrictive with data access. While this approach ensures safety, it can also limit the potential of AI tools to support physicians and enhance care.

He feels strongly that AI governance in healthcare is currently fragmented, with different approaches taken across institutions: some create their own internal governance boards, while others rely on external vendors or advisory input to define how data can be used safely and securely.

Ankur then explains that AI regulation in healthcare varies globally, depending on where the technology is deployed. In the U.S., the FDA oversees AI systems classified as Software as a Medical Device (SaMD)—the same regulatory framework applied to physical medical devices such as implants and diagnostic equipment.

In Europe, oversight is provided by the EU AI Act and various notified bodies, while other countries have their own systems.

He clarifies that SaMD tools are those designed to directly impact clinical decisions or patient outcomes, for instance, AI that helps physicians diagnose diseases or predict risks based on patient data. These are regulated because their outputs influence medical actions.

In contrast, non-regulated AI tools are those used for patient support or administrative assistance — like an AI that translates a radiology report into simple language for a patient to understand. These don’t directly affect medical outcomes, so they currently fall outside strict regulatory oversight. 

Ankur notes that, to his knowledge, there are currently no generative large language models classified as SaMD. He explains that the U.S. SaMD space today consists entirely of predictive models, where a given input reliably produces a corresponding, predictable output.

“As far as I’m aware, there are no generative LLMs that are in the SaMD space. The SaMD space currently in the U.S. consists of all predictive models, meaning that if we provide an input, we can be certain it will yield a prediction based on that input, and it will always work that way.

“There’s a set structure to input and output for predictive models. It’s not creating its own content. And those are really where we are right now in the regulated space. There is still some challenge on the evolution of what we’re going to see from notified bodies like the FDA or the agency around how they want to regulate the use of these generative models in healthcare, but we don’t have absolute good clarity in the U.S. on that. Especially in Europe, with the EU AI Act, there is some of this that’s starting to happen.”

–Dr. Ankur Sharma, Head of Medical Affairs, Medical Devices and Digital Radiology at Bayer

He suggests that this emerging framework could eventually pave the way for the first generation of GenAI tools to be recognized and regulated as medical devices.

Bridge Reimbursement Gaps for Scalable AI Adoption

Ankur contrasts how AI functions in clinical research versus real-world patient care. In clinical research, operations occur within a tightly controlled environment. Therefore, variables are limited, allowing researchers to trace outcomes to specific causes clearly. But in real-world practice, those guardrails disappear. Patient care involves numerous variables, making it more challenging to monitor and validate AI outputs as precisely as in a trial setting.

He explains that, with predictive models, physicians can easily interpret and trust the results because the relationship between inputs and outputs is fixed. For instance, if a model receives input A, it should yield a predictable output B. If the result differs, the physician can still assess what went wrong and adjust accordingly.

However, GenAI models don’t offer that same transparency. Physicians can’t see how the system arrived at its output, which makes it difficult to assess the accuracy or reliability of its clinical decisions. This opacity poses a challenge for ensuring safety and accountability in patient care.

Ankur adds that, until these issues are resolved, GenAI’s most immediate use cases will likely remain in non-regulated areas, focusing on tools that improve efficiency rather than directly influencing diagnosis or treatment. 

These include applications that help doctors write reports more efficiently or assist patients in better understanding their care. Over time, he expects more advanced versions of these tools to evolve toward regulated use cases, supporting diagnostic or treatment planning once safety, transparency, and reliability standards are established.

Ankur also highlights that one of the biggest challenges in healthcare AI adoption is reimbursement. Many AI tools, he explains, are not currently reimbursed by insurers or healthcare systems. Without a clear reimbursement structure, hospitals and clinics have little financial incentive to adopt these technologies, slowing down their integration into everyday care.

He notes that traditional reimbursement models are outcome-based, meaning payments are tied to measurable improvements in patient outcomes. However, many AI tools don’t directly produce a result — they assist with diagnosis, planning, or workflow efficiency. These contributions still create value, but they don’t fit neatly into existing reimbursement frameworks.

Although the U.S. government has recently begun exploring new reimbursement pathways for AI in healthcare, signaling a potential shift in how these tools are funded, the specifics remain unclear. This development suggests that policymakers are starting to recognize the role AI can play in improving clinical efficiency and care delivery.

He concludes that if these reimbursement pathways become standardized, they could significantly accelerate the adoption of AI across the healthcare system. Since healthcare has historically been slow to adopt digital technologies, this shift could mark a significant turning point, paving the way for the broader use of digital and AI-powered tools that ultimately benefit both physicians and patients.
