
Shaping the Future of Healthcare with AI – with Lyndi Wu of NVIDIA and Will Guyman of Microsoft


This interview analysis is sponsored by Microsoft and NVIDIA and was written, edited, and published in alignment with our Emerj sponsored content guidelines. Learn more about our thought leadership and content creation services on our Emerj Media Services page.

U.S. hospitals are facing an unprecedented digital infrastructure crunch. According to the U.S. Department of Health and Human Services, 96% of hospitals now use certified EHR systems following a nationwide push to digitize health records. A study published in JAMA Network Open found that 43% of U.S. adults used telemedicine in 2022, and demand for computing power in healthcare has skyrocketed accordingly.

Yet many health systems find their IT backbones stretched thin. A 2024 peer-reviewed study published in Health Affairs Scholar analyzed data from the American Hospital Association and found that only about one-fifth of hospitals had deployed some form of AI solution by 2022.

Meanwhile, in a 2023 survey conducted by Healthcare IT Leaders, 35% of health IT leaders cited limited budget and resources as the top barrier to adopting AI tools at their organizations.

Core clinical platforms can suffer under these constraints – clinicians still encounter sluggish EHR performance and even downtime when servers or networks can’t keep up. 

At the same time, the Cloud Security Alliance notes that hospitals have been cautious about expanding cloud capacity: healthcare organizations keep just 47% of sensitive data in the cloud (versus 61% in other industries), relying heavily on aging on-premises data centers.

These strains on capacity and security have become top of mind in the C-suite in the post-pandemic era.

Emerj Editorial Director Matthew DeMello recently hosted a conversation with Lyndi Wu, Senior Director of Ecosystem Business Development in Healthcare and Life Sciences at NVIDIA, and Will Guyman, Principal Group Product Manager in Healthcare AI Models at Microsoft, on the ‘AI in Business’ podcast to explore how healthcare leaders can best navigate these bottlenecks and future-proof their data infrastructure.

Their discussion spanned a wide range of topics, including unified AI development and deployment in healthcare. Both Will and Lyndi emphasized the need for seamless integration of AI agents and the use of scalable infrastructure to accelerate the adoption and impact of AI technologies in improving patient care and healthcare operations.

Talking points throughout highlighted the transformative potential of agentic AI in healthcare, emphasizing the need for cross-disciplinary collaboration, efficient data management, and the role of scalable GPU infrastructure in optimizing AI system performance.

This article examines two critical insights for healthcare leaders from their conversation:

  • Collaborating across teams to optimize AI agent deployment: Bringing clinicians, developers, and data scientists together to identify pain points and create customized AI agents that address specific healthcare challenges.
  • Saving costs with smart cloud right-sizing: Running AI in the cloud to match GPU power exactly to workload needs, avoiding the wasted expense of oversized, underused on-premise hardware and paying only for the performance required.


Guest: Lyndi Wu, Senior Director of Ecosystem Business Development in Healthcare and Life Sciences, NVIDIA

Expertise: Artificial Intelligence, Business Strategy, Partner Relationship Management

Brief Recognition: Lyndi is a creative and analytical senior executive with a successful track record of building high-performing teams. Prior to NVIDIA, she worked at Google for over 15 years, where she held various leadership roles, including leading business development for Google’s Healthcare and Life Sciences research team and leading Google Cloud Platform’s Healthcare and Life Sciences partnerships team. She holds a B.S. in electrical engineering from Princeton University and an MBA from The Wharton School.

Guest: Will Guyman, Principal Group Product Manager in Healthcare AI Models, Microsoft

Expertise: Artificial Intelligence, Healthcare, Computer Vision

Brief Recognition: Will has been with Microsoft for over a decade, working on computer vision in the Azure AI Platform. He holds a Bachelor’s degree in math and computational design from Stanford University.

Collaborating Across Teams to Optimize AI Agent Deployment

Early in the conversation, Will explains that AI agents represent one of the most promising developments in healthcare because they go beyond simply answering questions like chatbots.

Instead, they work alongside human staff to improve patient outcomes by handling administrative tasks, integrating and processing multiple types of healthcare data such as clinical notes, and delivering useful summaries or insights directly to doctors before patient interactions.

For example, an agent could prepare a chronological summary of a patient’s history or alert the care team about new research findings or clinical trial opportunities relevant to a specific case. 

Will also emphasizes that this is critical given the sheer scale of healthcare data, which is exponentially larger than data seen in industries like streaming, making it impossible for human teams to manage effectively. He notes that the development of agentic workflows relies on close collaboration between clinicians, developers, and data scientists. 

In this model, clinicians describe their pain points and developers build tailored solutions, creating a bridge between the frontlines of care and the technical teams working on advanced AI models. Will suggests that agents stand out as a “hero scenario” with the potential to transform healthcare operations and meaningfully improve efficiency and outcomes.

Drawing on her background in infrastructure development, Lyndi builds on these ideas, explaining that agentic AI systems are not just about individual agents but often involve multiple agents working together.

She describes how agents frequently feed into an orchestrating or coordinating agent that pulls all the inputs together before delivering the needed answer or performing the intended task. 

The layered and interconnected setup of agentic systems involves significant complexity, exceeding even the systems that have traditionally supported the most advanced generative AI (GenAI) use cases, especially in parallelizing the workstreams and computational processes to ensure everything runs efficiently.
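The layered pattern Lyndi describes, with specialist agents feeding an orchestrating agent that merges their inputs, can be sketched in a few lines of Python. Everything here (the agent names, the toy patient record, the trial lookup) is a hypothetical illustration, not part of NVIDIA's or Microsoft's stack:

```python
# Minimal sketch of a multi-agent pattern: specialist agents each
# handle one task, and a coordinating agent merges their outputs
# into a single briefing for the clinician.

def history_agent(patient):
    """Specialist agent: chronological summary of the record."""
    events = sorted(patient["events"], key=lambda e: e["date"])
    return "; ".join(f"{e['date']}: {e['note']}" for e in events)

def trials_agent(patient):
    """Specialist agent: flag trials matching the diagnosis (toy lookup)."""
    trials = {"diabetes": ["TRIAL-001"], "hypertension": ["TRIAL-007"]}
    return trials.get(patient["diagnosis"], [])

def orchestrator(patient):
    """Coordinating agent: fan out to specialists, pull the inputs together."""
    return {
        "summary": history_agent(patient),
        "matching_trials": trials_agent(patient),
    }

patient = {
    "diagnosis": "diabetes",
    "events": [
        {"date": "2024-03-01", "note": "A1c 8.2"},
        {"date": "2023-11-15", "note": "metformin started"},
    ],
}
briefing = orchestrator(patient)
```

In a production system each specialist would itself be a model-backed service running in parallel, which is exactly where the parallelization complexity Lyndi mentions comes in.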

However, she emphasizes that GPUs alone are not sufficient to deliver the performance and ease of deployment healthcare providers need. NVIDIA’s software stack, including optimized containers such as NVIDIA NIM microservices, is what makes the GPU acceleration usable, scalable, and secure.

Throughout the podcast, Lyndi underscores these points by noting the power of their partnership with Microsoft comes from combining NVIDIA’s software and GPU stack with the scalability and security of Azure:

“At NVIDIA, what we try to do is to build a full-stack software solution that integrates seamlessly with the broader infrastructure and compute needs of healthcare organizations — whether that’s running locally on devices for digital or robotic surgery, or scaling across cloud environments. 

We don’t do this alone. One of the greatest strengths of our partnership with Microsoft is the ability to support edge-to-cloud deployments with enterprise-grade reliability.”

– Lyndi Wu, Senior Director of Ecosystem Business Development in Healthcare and Life Sciences, NVIDIA

Saving Costs with Smart Cloud Right-Sizing

Lyndi then gives the example of NVIDIA’s collaboration with Microsoft, where the two companies have worked to ensure that NVIDIA’s software stack on Azure is set up and ready out of the box, drastically reducing the time to first inference. Someone can get up and running with use cases and workflows in just minutes, she says, stressing that this is not an exaggeration.

She also highlights the scalability advantage of cloud systems over running on-premises, where a company is limited to a single type of GPU and must maximize its utilization even if the hardware is more powerful and expensive than needed.

She explains that the benefit of running in the cloud is that the company can right-size and optimize, select the right GPU type and instance depending on the workload, and carefully balance cost and performance:

“As an example, take radiological imaging. Those workloads can often be processed in the background, where latency isn’t as critical, and analysis can happen overnight. But latency becomes a major factor for real-time applications, like a digital human interacting directly with a patient. In those scenarios, the response time needs to be measured in microseconds to ensure a seamless, human-like experience.”

– Lyndi Wu, Senior Director of Ecosystem Business Development in Healthcare and Life Sciences, NVIDIA
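The right-sizing logic Lyndi outlines can be sketched as a simple cost-versus-latency lookup: pick the cheapest GPU tier that still meets the workload's latency requirement. The tier names, latency bounds, and relative prices below are illustrative placeholders, not real Azure SKUs or pricing:

```python
# Hypothetical sketch of cloud right-sizing: choose the cheapest GPU
# tier whose deliverable latency is within the workload's requirement.

GPU_TIERS = [  # (name, latency the tier can deliver in ms, relative $/hr)
    ("batch-gpu", 60_000, 1.0),     # e.g. overnight radiology batches
    ("standard-gpu", 500, 4.0),     # e.g. interactive dashboards
    ("realtime-gpu", 10, 12.0),     # e.g. digital-human conversations
]

def right_size(required_latency_ms):
    """Return the cheapest tier fast enough for the requirement."""
    eligible = [t for t in GPU_TIERS if t[1] <= required_latency_ms]
    if not eligible:
        raise ValueError("no tier meets this latency requirement")
    return min(eligible, key=lambda t: t[2])[0]
```

Under this model, a background imaging job that can run overnight lands on the cheapest tier, while a real-time conversational agent is routed to the fastest and most expensive one, matching the cost-performance balance Lyndi describes.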

Pulling from Lyndi’s point on cloud efficiencies, Will then explains that different AI use cases have very different performance and infrastructure requirements.

He cites the example of administrative tasks focused on text processing that can now be scaled efficiently and affordably because the cost of inference for text models has dropped sharply in recent years. However, as healthcare teams move into image-heavy tasks, like working with complex medical imaging data, they must carefully plan the required capacity and determine which scenarios they want to optimize for.

He advises healthcare providers to start by identifying the end ROI. It could be:

  • Saving time for radiologists, clinicians, or administrators
  • Improving patient outcomes
  • Reducing the risk of missing something critical
  • Shortening turnaround times

Once they’ve defined the target ROI, they can work backward to determine which AI models to use, estimate the data volume, and decide what quality threshold is good enough to begin deployment, even for evaluation purposes. He cites a specific success example: 

“The University of Wisconsin has done an amazing job operationalizing AI for medical imaging. They identified that chest X-rays are one of the most common imaging procedures and that the majority are normal. By deploying a foundation model from AI Concrete and customizing it for reliability, they enabled AI to take a first pass at screening. If the model flagged an image as abnormal, it was escalated; otherwise, it moved to a triage bucket. Their approach not only reduced radiologist workload but also allowed them to measure and optimize quality at scale.”

– Will Guyman, Principal Group Product Manager in Healthcare AI Models, Microsoft

The above use case achieved 99% accuracy in identifying abnormal X-rays, which reduced the radiologists’ workload by 42% and significantly improved turnaround time.
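The triage workflow Will describes can be sketched as a simple first-pass filter. The scoring model here is a stand-in stub; in practice, a customized imaging foundation model would produce the abnormality score, and the threshold would be tuned against the site's quality bar:

```python
# Illustrative sketch of first-pass X-ray triage: studies the model
# flags as abnormal escalate to a radiologist immediately; the rest
# go to a lower-priority triage bucket for routine review.

def triage(xray_scores, threshold=0.5):
    """Split studies into escalated and routine lists by model score."""
    escalated, routine = [], []
    for study_id, abnormality_score in xray_scores:
        if abnormality_score >= threshold:
            escalated.append(study_id)
        else:
            routine.append(study_id)
    return escalated, routine

# Hypothetical (study_id, abnormality_score) pairs from the model stub.
scores = [("cxr-001", 0.92), ("cxr-002", 0.08), ("cxr-003", 0.15)]
urgent, routine = triage(scores)
```

Because most chest X-rays are normal, even this simple routing concentrates radiologist attention on the flagged minority, which is where the workload reduction in the Wisconsin example comes from.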

Punctuating his points for healthcare leaders, Will presents a strong end-to-end approach:

  • Define the use case
  • Determine the needed quality threshold
  • Set up the right AI models, data volume, and scaling strategy
  • Plan how to measure ROI
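One way to make that checklist concrete is to capture it as a structured plan before any infrastructure is provisioned. The field names and example values below are illustrative only, not a Microsoft or NVIDIA API:

```python
# Hypothetical sketch: Will's checklist as a structured deployment
# plan, with a gate that blocks deployment until measured quality
# clears the agreed threshold.
from dataclasses import dataclass

@dataclass
class DeploymentPlan:
    use_case: str              # what the agent or model is for
    quality_threshold: float   # minimum accuracy acceptable to deploy
    roi_metric: str            # how success will be measured
    model: str                 # model chosen by working backward from ROI
    est_data_volume_gb: int    # expected inference data volume

    def ready_to_deploy(self, measured_accuracy):
        """Deploy only once evaluation quality clears the threshold."""
        return measured_accuracy >= self.quality_threshold

plan = DeploymentPlan(
    use_case="chest X-ray first-pass screening",
    quality_threshold=0.99,
    roi_metric="radiologist hours saved per week",
    model="imaging foundation model (placeholder)",
    est_data_volume_gb=500,
)
```

Working backward from the ROI metric to the model and data-volume fields mirrors the planning order Will recommends.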
