ACHE 2026: What the Hallways Reveal About AI That the Keynotes Won't
I've been in Houston for less than 24 hours and ACHE is already telling two different stories.
Last night at the networking reception, I ended up in a conversation with a COO from a mid-size system in the Midwest. She'd just come from a vendor demo showcasing an AI-powered clinical decision support tool. Impressive interface. Slick pitch deck. She turned to me, drink in hand, and said: "That's the third AI platform someone's tried to sell me this quarter. I still don't have a governance framework for the first one we bought."
That one sentence captures what I'm hearing everywhere here. The main stage narrative: AI is transformative, governance is improving, and health systems are ready. The hallway truth: leaders are terrified of vendor lock-in, frontline adoption is stalling, and boards have no idea what they're approving. I'm here all week. This dispatch is what I'm actually hearing.
The Hallway Truth
Last month's newsletter centered on the FDA AI Reimbursement Gap: 1,300 devices authorized, almost none getting paid. That gap doesn't exist in isolation. It's a symptom of a much larger dysfunction: healthcare leadership is making AI decisions without a clinical or operational north star.
At ACHE, I'm watching this play out in real time. The keynotes celebrate "digital transformation." But in the hallways, the real conversations sound different.
On governance: CMOs and Chief Innovation Officers are quietly admitting they don't have a governance framework for AI that actually works. One prominent health system executive told me they've deployed 7 different AI tools in the last 18 months with zero integrated evaluation criteria. Another said their board approved a $2M AI contract based on a vendor pitch and a Gartner report. Neither knew what clinical outcomes they were supposed to measure.
On vendor relationships: There's palpable anxiety about AWS, Google Cloud, and Microsoft dominating the health system AI stack. Leaders understand that one vendor's AI model now touches EHR, analytics, coding, billing, and operations. But they don't see a path to diversify without ripping out infrastructure. Strategic captivity masquerading as innovation.
On frontline reality: Clinicians still aren't using most of the AI tools that were supposed to save them time. Burnout-measurement apps? Yes. AI-powered documentation assistance that actually integrates into workflow? Rare. The gap between "we bought this AI solution" and "our people actually use it" is where billions are evaporating. A 2025 AMA survey found that while physician interest in AI tools is growing, actual clinical workflow integration remains stubbornly low.
The hallway truth: ACHE attendees are smarter and more skeptical than the marketing suggests. They're just not saying it from the stage yet.
Three Things I'm Watching This Week
1. Laura Kaiser's "Purposeful Urgency" Framework. Kaiser's ACHE session on strategy execution is the most heavily subscribed session I've heard about. Why? Because health system leaders are exhausted by "digital transformation theater." They want permission to be selective. Purposeful urgency means: pick the 3-5 things that actually matter to your system's mission, execute them ruthlessly, and don't get distracted by trend-chasing. AI governance should follow this exact model. Not every health system needs every AI tool. Clarity on "what's core to us?" is the missing strategic input.
2. The PE Reckoning Session (Wednesday). Private equity is quietly reshaping which health systems exist and which close. The governance panel on Wednesday is going to surface an uncomfortable truth: PE-backed groups are moving faster on AI but with less transparency to their boards. I'm watching to see if anyone acknowledges that financial pressure and AI investment aren't always aligned with patient outcomes.
3. The Governance Vacuum. There's no agreed-upon standard for AI governance in health systems yet. CMS guidance is anticipated, but it's not here. Board members are approving AI contracts without clear authority or accountability. This is the pre-regulation moment: whoever builds the governance framework first gains massive strategic advantage. This matters more than the AI technology itself.
What I Tell My Clients
When I consult with health system leaders on AI governance, three patterns surface repeatedly.
First, the "innovation committee" structure is failing. Most health systems created AI oversight by bolting a committee onto the existing quality or IT governance structure. The problem: these committees meet monthly, lack clinical AI expertise, and have no budget authority. They produce reports. They do not produce decisions. Effective AI governance requires a standing body with three things: clinical representation, financial authority, and a decision timeline that matches vendor sales cycles, not academic publishing cycles.
Second, the vendor evaluation process is backwards. Health systems are evaluating AI tools on accuracy metrics, user interface quality, and integration timelines. Those matter. But the question they should ask first is: "What happens when this vendor's model changes?" AI products update continuously. The tool your team validated in Q1 may behave differently by Q3. Without a revalidation protocol tied to model updates, you're governing a product that no longer exists.
Third, "clinician engagement" is not the same as clinician trust. I see this conflation constantly. Leadership sends out a survey. Clinicians say they're "engaged" with the AI rollout. Leadership reports adoption is on track. But engagement is not trust. Trust means the physician will change a clinical decision based on the tool's output. That requires transparency about how the algorithm reaches its recommendations, the ability to verify against patient-specific data, and the authority to override when clinical judgment disagrees. Without those three conditions, you have participation, not adoption. And participation does not produce outcomes.
The distinction between engagement and trust is where most health system AI strategies quietly fail. The dashboards look green. The utilization numbers tell a different story.
The Pre-Regulation Window
Here's the strategic reality: we are in the 12-18 month window before formal AI governance standards arrive in healthcare. CMS is developing guidance. The AMA's CMAA framework is establishing clinical AI classification standards. State legislatures are active, with over 250 healthcare AI bills introduced across 34+ states by mid-2025.
Health systems that build governance frameworks now, before they're required, gain two advantages. First, they shape the standard rather than react to it. Second, they create internal muscle memory for AI evaluation that compounds over time. The organizations that wait for a mandate will spend their first year catching up. The organizations that start now will spend that year refining.
This is the conversation I came to ACHE to have. Reply to me if it's one your system needs to have too.
-Dr. Matt
**Get exclusive consulting frameworks and behind-the-scenes analysis I don't publish on the blog.** Subscribe to the newsletter: [https://www.drsarahmatt.com/newsletter-signup](https://www.drsarahmatt.com/newsletter-signup)