Health Technology Strategist - Physician - Author - Speaker 

Dr. Sarah Matt operates at the intersection of clinical medicine, technology innovation, and healthcare access transformation.




NATIONAL BESTSELLER

About the book

The Borderless Healthcare Revolution: The Definitive Guide to Breaking Geographic Barriers Through Technology is your field guide to a future in which a clinic visit is never farther away than the nearest screen and a surgeon’s skill can cross oceans in real time. Dr. Sarah Matt translates frontier-grade innovation into day-to-day practice for clinicians, health-system strategists, and policymakers who refuse to accept geography as destiny.

“Here in the US, patients with little access to technology are all too often cut out of healthcare services. Dr. Matt has spent her career working diligently at the intersection of practice and technology to enable access to care and better outcomes. Her stories and research from the field will illuminate and empower a future in which all people, regardless of geography, will have the healthcare access and opportunities they need.”

—Betty Rabinowitz, MD, Founder and Former CEO of EagleDream Health

about

Sarah Matt, MD, MBA

Dr. Matt is a surgeon turned health technology strategist, author, and speaker. Her work focuses on how digital tools, from remote surgery to telemedicine to AI, can expand access to healthcare and eliminate the traditional boundaries that separate patients from care. Through her leadership roles at Oracle and Sovato, she has worked with healthcare organizations of all sizes around the world, giving her firsthand experience with access challenges and solutions across different healthcare systems, cultures, and economic environments. She has seen what works, what doesn't, and most importantly, what's possible when we reimagine healthcare delivery without geographic constraints.

LEARN MORE

In the Media

Taking Back Healthcare

Sarah Matt, MD, MBA

Sarah Matt, MD, MBA, joins the podcast to explore how borderless healthcare is reshaping access, efficiency, and patient outcomes. She shares how cross-state collaboration, virtual care, and innovative care models are breaking down traditional barriers in medicine. Dr. Matt also discusses what leaders must do to build systems that truly follow the patient—no matter where they live.

KevinMD

Why fee-for-service reform is needed

You just saved a patient an emergency room visit with a three-minute portal message. You reviewed their connected blood pressure cuff data, saw a concerning trend, and tweaked their medication. It was efficient, high-quality, proactive care.

Engagements

Speaking

Dr. Sarah Matt delivers keynotes and leads panels on AI in global healthcare, tech-enabled primary care, building trust at scale, and leadership and strategy.

WORK WITH SARAH

Articles

By Sarah Matt February 27, 2026
I've been in Houston for less than 24 hours and ACHE is already telling two different stories. Last night at the networking reception, I ended up in a conversation with a COO from a mid-size system in the Midwest. She'd just come from a vendor demo showcasing an AI-powered clinical decision support tool. Impressive interface. Slick pitch deck. She turned to me, drink in hand, and said: "That's the third AI platform someone's tried to sell me this quarter. I still don't have a governance framework for the first one we bought."

That one sentence captures what I'm hearing everywhere here. The main stage narrative: AI is transformative, governance is improving, and health systems are ready. The hallway truth: leaders are terrified of vendor lock-in, frontline adoption is stalling, and boards have no idea what they're approving. I'm here all week. This dispatch is what I'm actually hearing.

The Hallway Truth

Last month's newsletter centered on the FDA AI reimbursement gap: 1,300 devices authorized, almost none getting paid. That gap doesn't exist in isolation. It's a symptom of a much larger dysfunction: healthcare leadership is making AI decisions without a clinical or operational north star. At ACHE, I'm watching this play out in real time. The keynotes celebrate "digital transformation." But in the hallways, the real conversations sound different.

On governance: CMOs and chief innovation officers are quietly admitting they don't have an AI governance framework that actually works. One prominent health system executive told me they've deployed seven different AI tools in the last 18 months with zero integrated evaluation criteria. Another said their board approved a $2M AI contract based on a vendor pitch and a Gartner report. Neither knew what clinical outcomes they were supposed to measure.

On vendor relationships: There's palpable anxiety about AWS, Google Cloud, and Microsoft dominating the health system AI stack. Leaders understand that one vendor's AI model now touches EHR, analytics, coding, billing, and operations. But they don't see a path to diversify without ripping out infrastructure. Strategic captivity masquerading as innovation.

On frontline reality: Clinicians still aren't using most of the AI tools that were supposed to save them time. Burnout-measurement apps? Yes. AI-powered documentation assistance that actually integrates into workflow? Rare. The gap between "we bought this AI solution" and "our people actually use it" is where billions are evaporating. A 2025 AMA survey found that while physician interest in AI tools is growing, actual clinical workflow integration remains stubbornly low.

The hallway truth: ACHE attendees are smarter and more skeptical than the marketing suggests. They're just not saying it from the stage yet.

Three Things I'm Watching This Week

1. Laura Kaiser's "purposeful urgency" framework. Kaiser's ACHE session on strategy execution is the most talked-about session I've encountered here. Why? Because health system leaders are exhausted by "digital transformation theater." They want permission to be selective. Purposeful urgency means: pick the three to five things that actually matter to your system's mission, execute them ruthlessly, and don't get distracted by trend-chasing. AI governance should follow this exact model. Not every health system needs every AI tool. Clarity on "what's core to us?" is the missing strategic input.

2. The PE reckoning session (Wednesday). Private equity is quietly reshaping which health systems exist and which close. The governance panel on Wednesday is going to surface an uncomfortable truth: PE-backed groups are moving faster on AI but with less transparency to their boards. I'm watching to see if anyone acknowledges that financial pressure and AI investment aren't always aligned with patient outcomes.

3. The governance vacuum. There's no agreed-upon standard for AI governance in health systems yet. CMS guidance is anticipated, but it's not here. Board members are approving AI contracts without clear authority or accountability. This is the pre-regulation moment: whoever builds the governance framework first gains massive strategic advantage. This matters more than the AI technology itself.

What I Tell My Clients

When I consult with health system leaders on AI governance, three patterns surface repeatedly.

First, the "innovation committee" structure is failing. Most health systems created AI oversight by bolting a committee onto the existing quality or IT governance structure. The problem: these committees meet monthly, lack clinical AI expertise, and have no budget authority. They produce reports. They do not produce decisions. Effective AI governance requires a standing body with three things: clinical representation, financial authority, and a decision timeline that matches vendor sales cycles, not academic publishing cycles.

Second, the vendor evaluation process is backwards. Health systems are evaluating AI tools on accuracy metrics, user interface quality, and integration timelines. Those matter. But the question they should ask first is: "What happens when this vendor's model changes?" AI products update continuously. The tool your team validated in Q1 may behave differently by Q3. Without a revalidation protocol tied to model updates, you're governing a product that no longer exists.

Third, "clinician engagement" is not the same as clinician trust. I see this conflation constantly. Leadership sends out a survey. Clinicians say they're "engaged" with the AI rollout. Leadership reports adoption is on track. But engagement is not trust. Trust means the physician will change a clinical decision based on the tool's output. That requires transparency about how the algorithm reaches its recommendations, the ability to verify against patient-specific data, and the authority to override when clinical judgment disagrees. Without those three conditions, you have participation, not adoption. And participation does not produce outcomes. The distinction between engagement and trust is where most health system AI strategies quietly fail. The dashboards look green. The utilization numbers tell a different story.

The Pre-Regulation Window

Here's the strategic reality: we are in the 12-to-18-month window before formal AI governance standards arrive in healthcare. CMS is developing guidance. The AMA's CMAA framework is establishing clinical AI classification standards. State legislatures are active, with over 250 healthcare AI bills introduced across 34+ states by mid-2025. Health systems that build governance frameworks now, before they're required, gain two advantages. First, they shape the standard rather than react to it. Second, they create internal muscle memory for AI evaluation that compounds over time. The organizations that wait for a mandate will spend their first year catching up. The organizations that start now will spend that year refining.

This is the conversation I came to ACHE to have. Reply to me if it's one your system needs to have too.

-Dr. Matt

Get exclusive consulting frameworks and behind-the-scenes analysis I don't publish on the blog. Subscribe to the newsletter: https://www.drsarahmatt.com/newsletter-signup
By Sarah Matt February 23, 2026
The Moral Injury of Being a Liability Sponge

Let me be direct with you. Healthcare AI has had its honeymoon phase. The conference keynotes. The breathless press releases. The "transformative potential" slide decks. We all sat through them. Some of us gave them. That era is over. 2026 is the year when every AI tool, every vendor, and every health system strategy gets measured against a single question: Does this actually work at scale, and can we prove it? If the answer is no, the budget is gone. The pilot is dead. The vendor is off the approved list. Welcome to the proving ground.

But as we enter this phase of accountability, we've stumbled into a dangerous trap. We keep talking about the "human in the loop" (HITL) as a design gold standard. In reality, HITL has become a legal strategy used to offload 100% of the malpractice risk onto the provider, while the system captures 100% of the efficiency gains. I'm all for AI; heck, I've built a career on it! But the current implementation doesn't feel right. The providers are exhausted, and we aren't the ones getting the benefit.

1. The Liability Sponge: All the Risk, None of the Shield

In the current legal landscape, the clinician is in a double bind. If you follow the AI's suggestion and it leads to a "hallucination" or error, you are liable for failing to exercise independent clinical judgment. If you override the AI and your intuition is wrong, you are liable for ignoring a "validated" clinical decision support tool. We are being used as a liability sponge. Vendors often use "click-wrap" agreements to disclaim responsibility, leaving the person with the MD or RN after their name to hold the bag. According to Bell Law Firm's 2026 analysis, technology is no longer a neutral tool; it is a causal chain of injury, and yet the human in the loop is the only one who answers for it in court.

To derisk this, we must advocate for statutory safe harbors. If a provider uses a certified, validated tool as intended, they should not face higher standards than the machine itself. We need shared liability, where vendors put their balance sheets behind their 99% accuracy claims.

2. The Productivity Trap: The "15-Minute" Repossession

Ambient listening (AI scribing) was the great hope for 2026. It was supposed to let us look patients in the eye again. And it does save time, roughly 30 to 45 minutes of documentation a day. But here's the catch: in many health systems, that "gift of time" is immediately repossessed by administration to increase RVU targets. We've automated the clerical task of writing a note, only to replace it with two or three more high-stakes human interactions. We are trading clerical fatigue for emotional exhaustion. The human in the loop isn't just a safety net; they've become an accelerator for the system's bottom line. We aren't getting a break; we're just being asked to run a faster race.

3. The Vendor's Dilemma: Finding the Middle Ground

I've been on the vendor side. I know the fear. If a tech company takes on unlimited clinical liability, it effectively becomes an insurance company. Most wouldn't survive their first major malpractice suit. So, how do we break the standoff?

Clinical accuracy warranties: Forward-thinking vendors are beginning to offer performance guarantees. They aren't promising perfection, but they are guaranteeing their model stays within a specific "standard of care" band.

The registry solution: We need a national AI incident reporting system, like the FAA's "black box" for aviation. If a model fails in a specific clinical scenario, that data should be shared immediately so every other human in the loop knows to watch for it.

Check the Epstein Becker Green checklist for the five critical points your 2026 vendor agreement must cover to ensure you aren't the only one at risk.

Get the "Clinical Reality Check" Before Everyone Else

I send these briefings to my private list 24 hours before they hit social media. Join other healthcare leaders who get the raw, uncensored analysis first. [Join the Clinical Realist List]

The Bottom Line

In 2026, the "cool factor" is dead. We are entering the era of clinical pragmatism. I want AI to win. I want it to help us. But a system where the provider is the liability sponge and the vendor is a ghost is unsustainable. We don't need faster AI; we need a fairer contract. If the system wants us to be the final authority, it needs to give us the authority to slow down.

Stay Real,
-Dr. Matt
More Posts

Contact

For Partnerships, Media Inquiries, Speaking Engagements, and Bulk Sales: 

 

*Unfortunately, I do not have the bandwidth to give individual advice over email.

Main Contact