Surviving the Squeeze: A Clinician's and Builder's Guide to HealthTech's Messy Middle

September 3, 2025

Whether you’re on the clinical front lines or building the technology that supports them, the last few years have felt like whiplash. We rocketed from a period of "growth at all costs," fueled by unprecedented venture capital and promises of total transformation, straight into an era of economic constraint, layoffs, and a relentless focus on the "path to profitability."


For those of us in the trenches, this shift isn't just a headline; it's a tangible change in our daily reality. It’s creating a unique and challenging environment: a "messy middle" caught between the immense pressures of today's economy and the revolutionary promise of tomorrow's technology.


This isn't just a patient-facing issue. It's a professional crisis for the very people meant to be healing and innovating. Today, I want to dissect this awkward gap from both sides of the screen: from the perspective of the clinicians using the tools and from that of the health tech professionals building them.


The Anatomy of the Awkward Gap


To navigate this period, we first have to understand the two powerful, opposing forces squeezing the industry.


1. The Economic Headwinds (The "Now")

The financial landscape has fundamentally changed. The era of ZIRP (Zero Interest Rate Policy) is over, which means capital is no longer cheap. Venture funding for healthtech has tightened dramatically, and the metrics have shifted from user growth to hard ROI.


For health tech companies, this means the directive from the board is no longer "disrupt" but "survive." The focus is on conserving cash, achieving profitability, and reducing burn rate, which has led to widespread layoffs. These cuts often hit "cost centers" first: the crucial but hard-to-quantify teams in customer success, implementation, and forward-looking R&D.


For hospitals and health systems, the pressure is just as intense. Facing their own razor-thin margins, CFOs have become the primary gatekeepers for any new technology purchase. A tool that doesn't demonstrate a clear, near-term return on investment, either through cost savings or proven efficiency gains, is a non-starter.


2. The AI Horizon (The "Next")

Simultaneously, we're living through an AI revolution that promises to solve healthcare's most intractable problems. The vision is no longer science fiction; it’s a tangible roadmap featuring:


  • Ambient Intelligence: AI scribes that passively listen to a clinical encounter and auto-generate a complete, accurate note, order labs, and draft referral letters.


  • Predictive Analytics: Algorithms that can identify patients at high risk for sepsis or readmission before they decompensate, allowing for proactive intervention (see the sketch after this list).


  • Generative AI & LLMs: Tools that can automate the soul-crushing process of prior authorizations or summarize a 500-page patient record into a concise clinical summary.
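To make the predictive analytics item above a bit more concrete, here is a minimal, hypothetical sketch of a readmission-risk model. It is not any vendor's actual product; the file names, feature columns, and the alert threshold are all assumptions for illustration, and a real clinical model would need far more rigorous validation, calibration, and governance.

```python
# Minimal, hypothetical sketch of a readmission-risk model.
# File names, feature columns, and the alert threshold are illustrative assumptions,
# not a real clinical model or any vendor's product.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Assumed historical encounter data with a 30-day readmission label.
df = pd.read_csv("encounters.csv")
features = ["age", "prior_admissions", "charlson_index", "length_of_stay", "ed_visits_last_year"]
X, y = df[features], df["readmitted_30d"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Discrimination check on held-out encounters.
print("AUROC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Flag current inpatients above an (assumed) risk threshold for proactive follow-up.
current = pd.read_csv("current_census.csv")
current["risk"] = model.predict_proba(current[features])[:, 1]
print(current.loc[current["risk"] > 0.3, ["patient_id", "risk"]])
```

The point isn't the algorithm; it's that "identify high-risk patients before they decompensate" ultimately reduces to a model, a threshold, and a workflow that acts on the flag, and all three have to be owned by someone.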


Here’s the fundamental problem: The economic cuts are happening today. The AI-powered solutions, while incredibly promising, are not yet seamlessly integrated, universally trusted, or capable of navigating the last-mile complexities of clinical workflow. We are living in the gap between the drawdown of human-powered support and the ramp-up of AI-powered automation.


The View from the Front Lines: A Clinician's Reality


For doctors and nurses (and I'm waving at you too, pharmacists!), this "messy middle" isn't an abstract strategic challenge; it's a daily burden.


You were promised technology that would reduce burnout, but the human support for the systems you already have is thinning out. When your EHR freezes mid-shift or a third-party application fails, the dedicated support specialist you used to call has likely been replaced by a generic ticketing system with a 24-hour response time. You’re forced to become a part-time IT specialist, a role you were never trained for and that takes you away from patients.


This is the downstream effect of health tech’s financial squeeze. The "technical debt" accrued over years of using clunky, non-interoperable EHRs is now compounded by a "support debt," and clinicians are the ones paying the interest with their time and sanity. Many of you have experienced "pilot program purgatory": participating in an exciting trial of a new AI tool that actually saves you time, only to see it shelved after six months due to budget cuts, forcing you back to the old, inefficient workflow.


The View from the Trenches: A Health Tech Professional's Dilemma


For those of you building the products, this period is just as fraught with tension. You entered this field to solve problems for clinicians, yet you’re caught between your users' needs and your company's financial imperatives.


The game has shifted from "disruption" to "integration." A slick UI is no longer enough. To succeed, your product must now flawlessly integrate with the complex, legacy ecosystems of Epic, Oracle/Cerner, and others. This requires deep institutional knowledge and robust engineering resources: the very things that are often downsized in a layoff.


You understand the customer success crisis intimately. You know that in healthcare, a product is only as good as its training, implementation, and adoption. Yet you watch as those very teams are cut, knowing that it will lead to failed deployments and frustrated users down the line.

Most painfully, you're under immense pressure to "show ROI now." The 18-to-24-month sales and implementation cycle that's standard in healthcare is at odds with a board that needs to see positive ROI in 6 to 12 months. This forces difficult product decisions, prioritizing features that look good on a sales deck over foundational improvements that solve deep, systemic workflow problems for your clinical partners.


Dr. Matt's Take: Forging a Path Through the Middle


This "messy middle" is one of the greatest leadership challenges our industry has faced. But it is also a powerful clarifying moment. The hype has evaporated, and only real, durable value will survive. To get to the other side, both clinicians and builders must adapt their approach.


For Clinicians: Your voice has never been more critical. Stop accepting technology that adds to your workload. Become demanding, educated customers. Your detailed feedback on workflow inefficiencies is the most valuable commodity in healthtech. Champion the tools that give you time back and band together to demand that your administration invests in them. Don't settle for "good enough."


For Health Tech Professionals: This is a flight to quality. The companies that win this next decade will be those that are obsessively focused on solving a specific, painful clinical problem completely. They will treat clinicians as essential design partners, not just end-users. They will over-invest in implementation and support, recognizing that trust is the ultimate currency. Your mission is to be the voice of the user in every meeting, relentlessly advocating for solutions that deliver real, measurable value, not just hype.


The bridge across this gap will be built on a foundation of co-creation and trust. We have a once-in-a-generation opportunity to build a truly intelligent, efficient, and humane healthcare system. But we can only do it together.


#StayCrispy and informed,


-Dr. Matt

November 11, 2025
I get this a lot. When I talk about the future of healthcare, people are energized. But when I pivot to AI, the mood shifts. People are "freaking out." I was speaking on this topic the other day, and a respected physician came up to me and said, “I am so glad I’m retired!” He's not wrong to feel that way. The noise is deafening. We're all being hit from two sides, and it's enough to make anyone feel paralyzed.

The Two Extremes (And Why They're Both Wrong)

On one side, there's the AI Hype. This is the utopian promise. You've seen the vendors. You've read the headlines. AI will read every scan instantly, end diagnostic errors, write all our notes, eliminate clinician burnout, and solve our staffing crisis by next quarter. It's the magic wand we've been waiting for.

On the other side, there's the AI Hysteria. This is the dystopian warning. AI is a black box trained on biased data. It will amplify systemic inequities. It will replace our best clinicians. Hackers will cripple our systems. And insurance companies are already using it as a weapon to deny care at a scale we've never seen before.

No wonder that doctor is glad he's retired. No wonder we feel paralyzed.

The Sober Reality: AI is a Mirror

Here is the reality. AI is not magic. It's math. It is a powerful tool. And a tool is only as good as the system we put it into. Here is the single most important thing I can tell you today: AI does not fix a broken system. It just scales the broken parts faster.

But here's the part we're missing: AI is a mirror. It's not inventing bias; it's just exposing the bias that's been in our data for decades. It's not creating interoperability problems; it's just shining a harsh light on our absurd reliance on the fax machine. It's not creating new financial barriers; it's just automating the ones that already exist.

This isn't a catastrophe. It's a diagnostic. AI is showing us, with data, exactly where the cracks in our system are. And that is not a reason to be paralyzed. That is a reason to be focused.

A Practical Path Forward

We are leaders, clinicians, technologists, and more. Here is how we move from paralysis to action.

Start with Problems, Not Platforms. We must have the discipline to reject "shiny object syndrome." The conversation needs to change. Instead of a random sales guy asking, "Do you want to buy an AI platform?" we need to be clear: "Show me how you will reduce my nurse's documentation time by 30%." "Show me how you will get my sepsis patients their antibiotics 15 minutes sooner." We start with the problem, not the tech.

Govern What You've Got. We must be the ones to audit the data, asking before we buy a tool, "Who is not in your training set?" We must also decide when to keep a human in the loop, empowered to overrule the algorithm, and when that isn't necessary.
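As a concrete illustration of "audit the data," here is a minimal, hypothetical sketch of the kind of representation check a buyer could ask a vendor to walk through. The column names, group labels, benchmark shares, and flagging rule are all assumptions for illustration; a real audit would go much deeper (labels, outcomes, drift, and more).

```python
# Hypothetical sketch of a training-set representation audit.
# Column names, group labels, and benchmark shares are illustrative assumptions.
import pandas as pd

# Assumed extract of the vendor's training data, one row per patient.
training = pd.read_csv("vendor_training_set.csv")

# Assumed demographic mix of the population the tool will actually serve.
served_population = {
    "White": 0.55, "Black": 0.18, "Hispanic": 0.17, "Asian": 0.06, "Other": 0.04,
}

train_share = training["race_ethnicity"].value_counts(normalize=True)

audit = pd.DataFrame({
    "training_share": train_share,
    "served_share": pd.Series(served_population),
}).fillna(0.0)

# Flag groups that are badly under-represented relative to who we serve.
audit["under_represented"] = audit["training_share"] < 0.5 * audit["served_share"]
print(audit.sort_values("served_share", ascending=False))
```

The question "Who is not in your training set?" stops being rhetorical the moment it becomes a table like this.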
Invest in the "Boring" Stuff. AI doesn't work in a vacuum. It needs the boring infrastructure: the broadband in our rural counties, the interoperability between our EHRs. It needs payment models that reward using AI for prevention, not just for billing. And it needs us to design for trust, which means bringing patients, community leaders, and our frontline care teams into the room before we buy the tool.

Teach How to Use It. We are responsible for creating the next generation of healthcare professionals: the nurses, the PAs, the therapists, the coders, and the physicians. We have a mandate to make AI literacy a core competency. This is not as simple as handing someone a new app. A striking JAMA study highlighted this very gap. It found that AI, used alone, actually outperformed both physicians working alone and physicians who were given the AI tool to help. What does that tell us? It tells us that this is a complex, learned skill. Simply giving a clinician a powerful AI doesn't guarantee a better outcome. We have to train our teams how to use it, when to trust it, and when to overrule it. They can't just learn to use AI; they must learn to effectively partner with it.

The Bottom Line

The goal of AI is not to be intelligent. The goal is to be useful. The goal is to be safe. The goal is to restore the human connection that technology so often breaks. The future of healthcare is not about replacing our clinicians with algorithms. It's about augmenting our care teams. It's about giving them the tools and the time to do what only humans can do: listen, show empathy, and heal.

The future isn't about intelligence without borders. It's about building a system that delivers humanity, to everyone, without barriers.

Stay grounded.

#StayCrispy

-Dr. Matt
November 4, 2025
For decades, medicine has operated on a foundation of averages. We rely on clinical trials that tell us how a drug affects the "average" person, and we follow treatment protocols designed for a broad population. But as any clinician knows, there is no such thing as an "average" patient. Each person is a unique combination of genetics, environment, and lifestyle.

What if we could change that? What if we could test a new heart valve on your specific heart before surgery? Or simulate five different cancer treatments on your specific tumor to see which one works best, all without you ever taking a single dose? This is the promise of the digital twin: a dynamic, living, and personalized virtual model of a patient.

If It's Not a New Idea, Why Talk About It Now?

The concept of a "digital twin" is not new. It has been used for decades in advanced manufacturing and aerospace to model complex machines like jet engines. So why is it suddenly one of the most talked-about topics in health tech? The answer is convergence. For the first time, three powerful forces are maturing at the same time:

  • Massive Data: We now have oceans of data from EHRs, rich genomic sequencing, and medical imaging.

  • Constant Data: The explosion of wearables and remote patient monitoring devices provides a continuous, real-time stream of data about an individual's physiology.

  • Powerful AI: We finally have the advanced artificial intelligence and computational power to make sense of all this data, building and running simulations that were impossible just a few years ago.

This convergence is moving digital twins from a futuristic concept into a practical clinical tool.

The Volcano in Your Computer

When I explain this concept, I often use an analogy that seems to resonate. Think about scientists trying to understand a volcano. They cannot safely trigger a real eruption just to study it. That would be impossible and catastrophic. Instead, they build a highly complex computer model of that specific volcano. They feed it real data: magma pressure, ground tremors, gas emissions, and geological structures. This model allows them to run simulations. They can ask "what if" questions. What if the pressure increases by 10%? What if a fissure opens on the north flank? This simulation allows them to test scenarios and predict a real eruption, all without any real-world risk.

Now, apply this exact logic to the human body, which is infinitely more complex than a volcano. We cannot ethically or safely test ten different interventions on a live patient. But we can test them on their digital twin.
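To make the "what if" idea tangible, here is a minimal, hypothetical sketch of the simplest possible digital twin: a one-compartment model of drug clearance, personalized with one patient's (assumed) weight and clearance, used to compare dosing scenarios virtually. Real digital twins couple far richer physiology, imaging, and monitoring data; the equations, parameters, and scenarios below are illustrative assumptions only.

```python
# Hypothetical one-compartment "digital twin" sketch: simulate drug levels for one
# specific patient under different dosing scenarios. All parameters are illustrative.
import numpy as np

def simulate_levels(dose_mg, interval_h, weight_kg, clearance_l_per_h, hours=72, dt=0.1):
    """Very simplified one-compartment model with first-order elimination."""
    vd = 0.7 * weight_kg                 # assumed volume of distribution (L)
    k_elim = clearance_l_per_h / vd      # elimination rate constant (1/h)
    n_steps = int(hours / dt)
    times = np.arange(n_steps) * dt
    conc = np.zeros(n_steps)
    amount = 0.0                         # drug amount currently in the body (mg)
    next_dose = 0.0
    for i, t in enumerate(times):
        if t >= next_dose:               # give a dose at each dosing interval
            amount += dose_mg
            next_dose += interval_h
        amount -= k_elim * amount * dt   # first-order elimination over this time step
        conc[i] = amount / vd            # concentration in mg/L
    return times, conc

# "This" patient's assumed characteristics (e.g., reduced renal clearance).
patient = {"weight_kg": 82, "clearance_l_per_h": 3.1}

# Ask the twin two "what if" questions before choosing a regimen.
for dose, interval in [(500, 12), (750, 24)]:
    _, conc = simulate_levels(dose, interval, **patient)
    last_day = conc[-int(24 / 0.1):]     # final 24 h, near steady state
    print(f"{dose} mg every {interval} h -> peak ~{last_day.max():.1f} mg/L, "
          f"trough ~{last_day.min():.1f} mg/L")
```

The point isn't the pharmacology; it's the pattern: feed the model this patient's data, then interrogate scenarios virtually before acting on the real person.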
Where Virtual Patients Are Already Making a Real-World Impact

This is not just theory. Digital twins are actively being used to improve outcomes.

In Cardiology: The Dassault Systèmes "Living Heart" project creates highly accurate, personalized heart models. This allows cardiologists to test how a specific patient's heart will react to a new device, like a stent or valve, before it is ever implanted. Similarly, FEops HEARTguide helps clinical teams predict how a transcatheter aortic valve implantation (TAVI) device will interact with a patient's unique anatomy, helping them choose the right size and position to avoid complications.

In Hospital Operations: Beyond individual patients, Karolinska University Hospital in Sweden has utilized digital twins to optimize its surgical workflows. By simulating the flow of patients, staff, and resources, they can identify bottlenecks, improve scheduling, and ensure operating rooms are used more efficiently.

The Hurdles on the Horizon

As with any revolutionary technology, the path forward has significant challenges.

  • Data Integration: Building an accurate twin requires pulling vast amounts of different data from siloed systems.

  • Computational Cost: Running these complex simulations requires enormous processing power.

  • Validation and Ethics: How do we "validate" a digital twin? How do we know it is accurate enough to base life-or-death decisions on? And who owns your virtual data? These are critical questions we must answer.

The digital twin represents the ultimate destination for personalized medicine. It is not a tool to replace the clinician, but a powerful new instrument to inform their judgment. The goal is no longer just to treat the average patient, but to provide precise, predictive, and personal care for the individual patient. And it all starts with building the virtual you.

#StayCrispy

-Dr. Matt
October 28, 2025
For the last decade, we’ve talked about clinician burnout as a problem. Let's be blunt: it’s no longer a problem. It’s an existential crisis. It’s the "pajama time" spent logging hours in the EHR after the kids are in bed. It's the "death by a thousand clicks" that has turned highly-trained physicians and nurses into the world's most expensive data-entry clerks. And it’s the moral injury of knowing you could provide better care if you weren't constantly battling your own inbox.

For years, tech has felt more like an antagonist in this story than a solution. But the narrative is changing. Generative AI is finally here, and it’s making two very different, very powerful promises. The question is: are we listening to both?

Part 1: The AI Scribe - A Fix for the Process

The most visible, headline-grabbing solution to burnout is the Ambient Clinical Scribe. This is the "shiny object" that's actually working. The news is now dominated by massive, enterprise-wide rollouts. Kaiser Permanente recently announced a historic deployment of Abridge to 10,000 of its clinicians. This comes on the heels of dozens of other health systems adopting Microsoft’s DAX Copilot (formerly Nuance), Abridge, and similar tools integrated directly into Epic and Oracle/Cerner.

The promise is intoxicatingly simple: The doctor and patient just talk. The AI listens in the background. By the time the patient has left the room, a structured, accurate, and billable clinical note is 80-90% complete in the EHR. This is not a small thing. It’s a direct assault on the 2+ hours per day that physicians spend on documentation. This technology gives clinicians back the single most valuable asset they have: time. It’s a powerful painkiller for the most acute symptom of burnout.

But what happens when you’ve taken the painkiller? The immediate, throbbing pain of documentation is gone. But the underlying disease remains. What if you get two hours of your day back, only to spend it in a unit where you feel isolated, unvalued, and completely disconnected from leadership and your colleagues?

Part 2: The Deeper Disease - A Crisis of Culture

This brings us to the other side of the burnout coin. This crisis was never just about documentation. The clicks were the symptom. The disease is a fundamental breakdown in culture, connection, and belonging.

Burnout is what happens when a nurse doesn't feel safe speaking up. It’s what happens when a physician feels a total lack of autonomy and a deep misalignment between their values and the hospital's business objectives. It’s the isolation of a 12-hour shift where you feel like a cog in a machine, not a human in a community.

For decades, how have we tried to "fix" this? With a clumsy, 60-question annual employee engagement survey. This is a tool from a different era. By the time the data is collected, analyzed (six weeks later), and presented to managers, it’s a historical document. It’s a rear-view mirror. It tells you how your team felt last quarter, not how they feel right now. And worse, it provides managers with a mountain of data but no clear path to action, so it often gathers dust.

Part 3: The AI "Pulse" - A Fix for the Culture

This critical gap has created a new category of tools: real-time employee listening or "pulse" platforms. For years, major platforms like Glint (now part of Microsoft), Culture Amp, and Perceptyx have tried to solve this, arguing that continuous feedback is far better than an annual snapshot. They provide powerful analytics to HR leaders, helping them understand the macro trends driving attrition and engagement.

But a different, more lightweight approach is also emerging, one focused less on periodic surveys and more on creating a daily habit of connection. Full disclosure: it’s a space I’ve recently started advising in, after being introduced to a platform called Sayhii. Their model is designed to act as a high-frequency pulse. It’s built on a deceptively simple premise: one simple, science-backed question sent to every employee, every day. It’s a 10-second interaction, not a 30-minute survey. "Do you feel your work has purpose?" "Do you trust the leadership of this organization?" "Did you feel you belonged at work this week?"

Instead of a rear-view mirror, this approach creates a real-time, anonymous "check engine" light for frontline managers. A nurse manager can see an anonymous, real-time dashboard indicating that their team’s "sense of purpose" score has dipped 15% this week, and then be prompted with a micro-action to address it, like starting the next huddle by sharing a recent patient-win story.
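I have no visibility into how Sayhii or any of these platforms is actually built, so treat this as a hypothetical sketch of the underlying idea only: aggregate anonymous daily responses into a rolling team score and surface a meaningful dip to the manager. The file name, question text, window sizes, and threshold are all assumptions.

```python
# Hypothetical sketch of a "pulse" dip detector: anonymous daily responses are
# aggregated into a rolling team score and a drop gets surfaced to the manager.
# Not any vendor's actual logic; column names, windows, and thresholds are assumptions.
import pandas as pd

# Assumed anonymized responses: one row per answer, scored 1-5.
responses = pd.read_csv("pulse_responses.csv", parse_dates=["date"])
purpose = responses[responses["question"] == "Do you feel your work has purpose?"]

# Daily team average, then a 7-day rolling score to smooth out noisy single days.
daily = purpose.groupby("date")["score"].mean()
rolling = daily.rolling(window=7, min_periods=3).mean()

# Compare this week's rolling score with the prior four weeks' baseline.
this_week = rolling.iloc[-7:].mean()
baseline = rolling.iloc[-35:-7].mean()
change = (this_week - baseline) / baseline

if change <= -0.15:  # assumed alert threshold: a 15% dip
    print(f"Sense of purpose down {abs(change):.0%} vs. baseline. "
          "Suggested micro-action: open the next huddle with a recent patient-win story.")
else:
    print(f"Sense of purpose steady ({change:+.0%} vs. baseline).")
```

The hard part isn't the arithmetic; it's the anonymity, the response rates, and whether a manager actually acts when the light comes on.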
The Full Prescription: Clicks and Culture

A health system that gives its doctors two hours back with an AI scribe (but leaves them in a culture where they feel unheard and unvalued) hasn’t solved burnout. It’s just created more efficient, slightly-less-tired, still-burnt-out employees.

The AI scribe is the painkiller. It's essential for immediate, acute relief. We absolutely need it. But these continuous listening tools, like the daily pulse of Sayhii, are the antibiotic. They are the long-term therapy designed to fix the underlying cultural infection that made the system sick in the first place.

The smartest health systems in 2026 and beyond will be the ones that realize they must do both. They will use one set of AI tools to fix the process and another set to fix the culture. Because you can't heal a workforce by just treating the symptoms.

Until next time,

#StayCrispy

-Dr. Matt