Beyond Compliance: Why Trust is Health Tech's Most Critical Asset

September 23, 2025

We are in the golden age of health innovation. From AI-driven diagnostics to personalized wellness platforms, our work is fundamentally reshaping the future of care. This progress is fueled by an unprecedented flow of data. Yet a critical vulnerability sits at the core of our industry: the growing gap between our technological capabilities and the trust of the patients we serve.


The prevailing model of opaque data collection and secondary monetization is not just a reputational risk; it is an unsustainable business strategy. The next generation of market leaders will not be defined by the cleverness of their algorithms alone, but by the robustness of their trust architecture.


The HIPAA Paradox and the Coming Regulatory Storm: As an industry, we navigate our data strategies around HIPAA in the US (and GDPR and its counterparts elsewhere). We treat it as the definitive rulebook for patient privacy. But this perspective is dangerously narrow. HIPAA is a floor, not a ceiling, and it was built for a world that no longer exists. It governs covered entities, leaving a vast ecosystem of wellness apps, wearables, and direct-to-consumer platforms in a compliance gray area. This regulatory gap is well documented by the Department of Health and Human Services, which clarifies that data shared with many third-party apps falls outside HIPAA's protections.


This regulatory gap is closing. With state-level privacy laws like the California Consumer Privacy Act (CCPA) setting new precedents and the FTC signaling more aggressive enforcement via its Health Breach Notification Rule, the era of regulatory ambiguity is ending. Relying on a minimalist, check-the-box approach to compliance is a strategy with a rapidly expiring shelf life. The question is no longer if a stricter regulatory framework will arrive, but when. The smart play is not to wait for it, but to build for it proactively.


Deconstructing the Flawed Value Exchange: The current unspoken contract with the user is often a lopsided one. We offer a service, and in exchange, we capture data whose downstream value far exceeds the immediate benefit provided to the user. This data flows into a complex secondary market of data brokers and aggregators, a market projected to be worth hundreds of billions of dollars, fueling everything from pharmaceutical research to targeted advertising.


While the process of "de-identification" provides a layer of legal and ethical cover, we know its limitations. The increasing sophistication of analytical techniques means that re-identifying individuals from de-identified data is often possible by cross-referencing datasets. More importantly, this model creates a fundamental misalignment. When users discover how their data is being leveraged, trust is broken, often irreparably. This leads to increased churn, negative brand perception, and a user base that is increasingly unwilling to share the very data our innovations depend on. It is a house of cards.


Trust as a Competitive Moat - An Architectural Blueprint: In a crowded market, the most defensible competitive advantage is not a feature or a price point; it is trust. Companies that treat trust as a core business metric rather than a legal hurdle will attract more engaged users, command greater pricing power, and build more resilient brands. Research consistently shows that a lack of trust is a significant barrier to the adoption of digital health technologies. (Hey, my book talks all about this!) Here is a blueprint for moving beyond compliance to build a foundation of trust:


  1. Frame Transparency as a Brand Pillar. Your data policy should not be a document crafted by lawyers to minimize liability. It should be a manifesto, written in plain language, that your marketing team can proudly feature. Use your onboarding, UI, and communications to be radically transparent about what you collect, why you collect it, and the value it creates.

  2. Engineer an Equitable Value Exchange. For every data point requested, you must clearly articulate the direct, tangible benefit the user receives. Move away from implicit collection and toward explicit, granular consent. If the value exchange is strong enough, users will willingly opt in. If it is not, the problem is with your value proposition, not the user's reluctance. This is why we all share our data with Google Maps, for example: we get immense value from up-to-date directions and a precise picture of where the construction delays are. Take my data!

  3. Build for User-Centric Governance. Empowering the user means more than a settings page. It means building intuitive privacy dashboards, enabling effortless data portability, and providing a simple, verifiable process for data deletion (a sketch of what this can look like in code follows this list). The future is user-owned health records, and the platforms that embrace this will render closed-silo competitors obsolete.

  4. Champion Data Stewardship. The ultimate evolution is to shift the corporate mindset from being a data processor to a data steward. This means accepting a 'fiduciary-like' responsibility to act in the best interest of your users and their data. This is not altruism; it is a long-term strategy for building enterprise value.
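
To make points 2 and 3 concrete, here is a minimal sketch of granular consent, portability, and deletion treated as first-class operations. Everything in it is illustrative: the scope names, record types, and methods are hypothetical and are not tied to any specific platform or regulation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
import json


class Scope(Enum):
    """Hypothetical categories of health data a user can grant or revoke individually."""
    STEP_COUNT = "step_count"
    HEART_RATE = "heart_rate"
    SLEEP = "sleep"
    SYMPTOM_LOG = "symptom_log"


@dataclass
class ConsentRecord:
    """One explicit grant: what is collected, why, and when the user said yes (or no)."""
    scope: Scope
    purpose: str                      # plain-language reason shown at the moment of consent
    granted_at: datetime
    revoked_at: datetime | None = None

    @property
    def active(self) -> bool:
        return self.revoked_at is None


@dataclass
class UserDataAccount:
    """User-centric governance: consent, portability, and deletion as first-class operations."""
    user_id: str
    consents: list[ConsentRecord] = field(default_factory=list)
    records: dict[Scope, list[dict]] = field(default_factory=dict)

    def grant(self, scope: Scope, purpose: str) -> None:
        self.consents.append(ConsentRecord(scope, purpose, datetime.now(timezone.utc)))

    def revoke(self, scope: Scope) -> None:
        for consent in self.consents:
            if consent.scope is scope and consent.active:
                consent.revoked_at = datetime.now(timezone.utc)

    def may_collect(self, scope: Scope) -> bool:
        """Collection is allowed only under an active, explicit grant for that exact scope."""
        return any(c.scope is scope and c.active for c in self.consents)

    def export(self) -> str:
        """Data portability: everything the user has shared, in a machine-readable format."""
        return json.dumps({s.value: rows for s, rows in self.records.items()},
                          default=str, indent=2)

    def delete_all(self) -> None:
        """Verifiable deletion: wipe stored records and close out every active consent."""
        self.records.clear()
        for scope in Scope:
            self.revoke(scope)


# Example: explicit opt-in, a permission check before collection, then export.
account = UserDataAccount(user_id="u-123")
account.grant(Scope.HEART_RATE, "Show you your weekly resting heart rate trend.")
if account.may_collect(Scope.HEART_RATE):
    account.records.setdefault(Scope.HEART_RATE, []).append({"bpm": 62, "ts": "2025-09-23T08:00:00Z"})
print(account.export())
```

The design choice worth noticing is that nothing is collected without an active grant tied to a plain-language purpose, and export and deletion are ordinary operations a product team can wire directly into the UI rather than bury behind a support ticket.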


A Strategic Call to Action: The conversation about data needs to move from the legal department to the C-suite and the product roadmap. It is a fundamental strategic issue that will define the winners and losers of the next decade in health tech.


This week, ask these questions within your organization:


  • How clearly do we articulate our data value exchange in the first 60 seconds of a new user's experience?


  • Could a non-technical user read our privacy policy and feel empowered rather than confused? (cough cough, the answer today is probably no!)



  • How would our business model be impacted if our users could instantly port their data to a competitor?


The future of healthcare innovation depends on a foundation of trust. It is our collective responsibility to build it... and it's good business for the future.


#StayCrispy


-Dr. Matt


Dr. Matt believes technology can erase the borders that limit access to care. This vision is the heart of her book, The Borderless Healthcare Revolution. Join her in building this future by visiting drsarahmatt.com to learn more and get your copy.

November 11, 2025

I get this a lot. When I talk about the future of healthcare, people are energized. But when I pivot to AI, the mood shifts. People are "freaking out." I was speaking on this topic the other day, and a respected physician came up to me and said, "I am so glad I'm retired!" He's not wrong to feel that way. The noise is deafening. We're all being hit from two sides, and it's enough to make anyone feel paralyzed.

The Two Extremes (And Why They're Both Wrong)

On one side, there's the AI Hype. This is the utopian promise. You've seen the vendors. You've read the headlines. AI will read every scan instantly, end diagnostic errors, write all our notes, eliminate clinician burnout, and solve our staffing crisis by next quarter. It's the magic wand we've been waiting for.

On the other side, there's the AI Hysteria. This is the dystopian warning. AI is a black box trained on biased data. It will amplify systemic inequities. It will replace our best clinicians. Hackers will cripple our systems. And insurance companies are already using it as a weapon to deny care at a scale we've never seen before.

No wonder that doctor is glad he's retired. No wonder we feel paralyzed.

The Sober Reality: AI is a Mirror

Here is the reality. AI is not magic. It's math. It is a powerful tool. And a tool is only as good as the system we put it into. Here is the single most important thing I can tell you today: AI does not fix a broken system. It just scales the broken parts faster.

But here's the part we're missing: AI is a mirror. It's not inventing bias; it's just exposing the bias that's been in our data for decades. It's not creating interoperability problems; it's just shining a harsh light on our absurd reliance on the fax machine. It's not creating new financial barriers; it's just automating the ones that already exist.

This isn't a catastrophe. It's a diagnostic. AI is showing us, with data, exactly where the cracks in our system are. And that is not a reason to be paralyzed. That is a reason to be focused.

A Practical Path Forward

We are leaders, clinicians, technologists, and more. Here is how we move from paralysis to action.

  1. Start with Problems, Not Platforms. We must have the discipline to reject "shiny object syndrome." The conversation needs to change. Instead of a random sales guy asking, "Do you want to buy an AI platform?" we need to be clear: "Show me how you will reduce my nurses' documentation time by 30%." "Show me how you will get my sepsis patients their antibiotics 15 minutes sooner." We start with the problem, not the tech.

  2. Govern What You've Got. We must be the ones to audit the data, asking before we buy a tool, "Who is not in your training set?" We must also decide when to keep a human in the loop, empowered to overrule the algorithm, and when that is not necessary.

  3. Invest in the "Boring" Stuff. AI doesn't work in a vacuum. It needs the boring infrastructure: the broadband in our rural counties, the interoperability between our EHRs. It needs payment models that reward using AI for prevention, not just for billing. And it needs us to design for trust, which means bringing patients, community leaders, and our frontline care teams into the room before we buy the tool.

  4. Teach How to Use It. We are responsible for creating the next generation of healthcare professionals: the nurses, the PAs, the therapists, the coders, and the physicians. We have a mandate to make AI literacy a core competency. This is not as simple as handing someone a new app. A striking JAMA study highlighted this very gap: it found that AI used alone actually outperformed both physicians working alone and physicians who were given the AI tool to help. What does that tell us? It tells us that this is a complex, learned skill. Simply giving a clinician a powerful AI doesn't guarantee a better outcome. We have to train our teams how to use it, when to trust it, and when to overrule it. They can't just learn to use AI; they must learn to effectively partner with it.

The Bottom Line

The goal of AI is not to be intelligent. The goal is to be useful. The goal is to be safe. The goal is to restore the human connection that technology so often breaks. The future of healthcare is not about replacing our clinicians with algorithms. It's about augmenting our care teams. It's about giving them the tools and the time to do what only humans can do: listen, show empathy, and heal.

The future isn't about intelligence without borders. It's about building a system that delivers humanity, to everyone, without barriers.

Stay grounded.

#StayCrispy

-Dr. Matt

November 4, 2025

For decades, medicine has operated on a foundation of averages. We rely on clinical trials that tell us how a drug affects the "average" person, and we follow treatment protocols designed for a broad population. But as any clinician knows, there is no such thing as an "average" patient. Each person is a unique combination of genetics, environment, and lifestyle.

What if we could change that? What if we could test a new heart valve on your specific heart before surgery? Or simulate five different cancer treatments on your specific tumor to see which one works best, all without you ever taking a single dose? This is the promise of the digital twin: a dynamic, living, and personalized virtual model of a patient.

If It's Not a New Idea, Why Talk About It Now?

The concept of a "digital twin" is not new. It has been used for decades in advanced manufacturing and aerospace to model complex machines like jet engines. So why is it suddenly one of the most talked-about topics in health tech? The answer is convergence. For the first time, three powerful forces are maturing at the same time:

  • Massive Data: We now have oceans of data from EHRs, rich genomic sequencing, and medical imaging.

  • Constant Data: The explosion of wearables and remote patient monitoring devices provides a continuous, real-time stream of data about an individual's physiology.

  • Powerful AI: We finally have the advanced artificial intelligence and computational power to make sense of all this data, building and running simulations that were impossible just a few years ago.

This convergence is moving the digital twin from futuristic concept to practical clinical tool.

The Volcano in Your Computer

When I explain this concept, I often use an analogy that seems to resonate. Think about scientists trying to understand a volcano. They cannot safely trigger a real eruption just to study it. That would be impossible and catastrophic. Instead, they build a highly complex computer model of that specific volcano. They feed it real data: magma pressure, ground tremors, gas emissions, and geological structures. This model allows them to run simulations. They can ask "what if" questions. What if the pressure increases by 10%? What if a fissure opens on the north flank? This simulation allows them to test scenarios and predict a real eruption, all without any real-world risk.

Now, apply this exact logic to the human body, which is infinitely more complex than a volcano. We cannot ethically or safely test ten different interventions on a live patient. But we can test them on their digital twin.
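
Stripped of the physiology, the "what if" loop at the heart of a digital twin is easy to sketch. The toy model below is entirely invented for illustration: a single made-up response curve stands in for a patient-specific model, and the intervention names and parameters are hypothetical. A real twin is built and validated from imaging, genomics, and continuous monitoring, but the workflow of calibrating a model to one patient and then simulating candidate interventions before ever touching the patient is the same idea.

```python
import math

# Toy "digital twin": a single patient-specific parameter set standing in for
# the rich model a real twin would build from imaging, genomics, and monitoring.
patient_twin = {"baseline_risk": 0.42, "sensitivity": 1.8, "tolerance": 0.65}

# Hypothetical candidate interventions, each described only by a dose intensity.
candidates = {"therapy_a": 0.3, "therapy_b": 0.5, "therapy_c": 0.8}


def simulate(twin: dict, intensity: float) -> float:
    """Predict residual risk for this patient under one intervention.

    A made-up response curve: benefit grows with dose but is capped by the
    patient's tolerance, mimicking the "what if" questions asked of a real twin.
    """
    effective_dose = min(intensity, twin["tolerance"])
    benefit = 1.0 - math.exp(-twin["sensitivity"] * effective_dose)
    return twin["baseline_risk"] * (1.0 - benefit)


# Run every candidate against the twin and rank them, with no risk to the patient.
results = {name: simulate(patient_twin, dose) for name, dose in candidates.items()}
for name, risk in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name}: predicted residual risk {risk:.2f}")
```

All of the hard engineering lives inside the model itself; the decision loop wrapped around it stays this simple.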

Where Virtual Patients Are Already Making a Real-World Impact

This is not just theory. Digital twins are actively being used to improve outcomes.

  • In Cardiology: The Dassault Systèmes "Living Heart" project creates highly accurate, personalized heart models. This allows cardiologists to test how a specific patient's heart will react to a new device, like a stent or valve, before it is ever implanted. Similarly, FEops HEARTguide helps clinical teams predict how a transcatheter aortic valve implantation (TAVI) device will interact with a patient's unique anatomy, helping them choose the right size and position to avoid complications.

  • In Hospital Operations: Beyond individual patients, Karolinska University Hospital in Sweden has used digital twins to optimize its surgical workflows. By simulating the flow of patients, staff, and resources, they can identify bottlenecks, improve scheduling, and ensure operating rooms are used more efficiently.

The Hurdles on the Horizon

As with any revolutionary technology, the path forward has significant challenges.

  • Data Integration: Building an accurate twin requires pulling vast amounts of different data from siloed systems.

  • Computational Cost: Running these complex simulations requires enormous processing power.

  • Validation and Ethics: How do we "validate" a digital twin? How do we know it is accurate enough to base life-or-death decisions on? And who owns your virtual data? These are critical questions we must answer.

The digital twin represents the ultimate destination for personalized medicine. It is not a tool to replace the clinician, but a powerful new instrument to inform their judgment. The goal is no longer just to treat the average patient, but to provide precise, predictive, and personal care for the individual patient. And it all starts with building the virtual you.

#StayCrispy

-Dr. Matt

October 28, 2025

For the last decade, we've talked about clinician burnout as a problem. Let's be blunt: it's no longer a problem. It's an existential crisis. It's the "pajama time" spent logging hours in the EHR after the kids are in bed. It's the "death by a thousand clicks" that has turned highly trained physicians and nurses into the world's most expensive data-entry clerks. And it's the moral injury of knowing you could provide better care if you weren't constantly battling your own inbox.

For years, tech has felt more like an antagonist in this story than a solution. But the narrative is changing. Generative AI is finally here, and it's making two very different, very powerful promises. The question is: are we listening to both?

Part 1: The AI Scribe - A Fix for the Process

The most visible, headline-grabbing solution to burnout is the ambient clinical scribe. This is the "shiny object" that's actually working. The news is now dominated by massive, enterprise-wide rollouts. Kaiser Permanente recently announced a historic deployment of Abridge to 10,000 of its clinicians. This comes on the heels of dozens of other health systems adopting Microsoft's DAX Copilot (formerly Nuance), Oracle/Cerner, Abridge, and similar tools integrated directly into Epic and Cerner.

The promise is intoxicatingly simple: the doctor and patient just talk. The AI listens in the background. By the time the patient has left the room, a structured, accurate, and billable clinical note is 80-90% complete in the EHR. This is not a small thing. It's a direct assault on the 2+ hours per day that physicians spend on documentation. This technology gives clinicians back the single most valuable asset they have: time. It's a powerful painkiller for the most acute symptom of burnout.

But what happens when you've taken the painkiller? The immediate, throbbing pain of documentation is gone. But the underlying disease remains. What if you get two hours of your day back, only to spend it in a unit where you feel isolated, unvalued, and completely disconnected from leadership and your colleagues?

Part 2: The Deeper Disease - A Crisis of Culture

This brings us to the other side of the burnout coin. This crisis was never just about documentation. The clicks were the symptom. The disease is a fundamental breakdown in culture, connection, and belonging. Burnout is what happens when a nurse doesn't feel safe speaking up. It's what happens when a physician feels a total lack of autonomy and a deep misalignment between their values and the hospital's business objectives. It's the isolation of a 12-hour shift where you feel like a cog in a machine, not a human in a community.

For decades, how have we tried to "fix" this? With a clumsy, 60-question annual employee engagement survey. This is a tool from a different era. By the time the data is collected, analyzed (six weeks later), and presented to managers, it's a historical document. It's a rear-view mirror. It tells you how your team felt last quarter, not how they feel right now. And worse, it provides managers with a mountain of data but no clear path to action, so it often gathers dust.

Part 3: The AI "Pulse" - A Fix for the Culture

This critical gap has created a new category of tools: real-time employee listening, or "pulse," platforms. For years, major platforms like Glint (now part of Microsoft), Culture Amp, and Perceptyx have tried to solve this, arguing that continuous feedback is far better than an annual snapshot. They provide powerful analytics to HR leaders, helping them understand the macro trends driving attrition and engagement.

But a different, more lightweight approach is also emerging, one focused less on periodic surveys and more on creating a daily habit of connection. Full disclosure: it's a space I've recently started advising in, after being introduced to a platform called Sayhii. Their model is designed to act as a high-frequency pulse. It's built on a deceptively simple premise: one simple, science-backed question sent to every employee, every day. It's a 10-second interaction, not a 30-minute survey. "Do you feel your work has purpose?" "Do you trust the leadership of this organization?" "Did you feel you belonged at work this week?"

Instead of a rear-view mirror, this approach creates a real-time, anonymous "check engine" light for frontline managers. A nurse manager can see an anonymous, real-time dashboard indicating that their team's "sense of purpose" score has dipped 15% this week, and then be prompted with a micro-action to address it, like starting the next huddle by sharing a recent patient-win story.
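
To make the "check engine" light concrete, here is a toy sketch of the kind of week-over-week dip detection such a dashboard might run. To be clear, this is not a description of Sayhii's actual product: the 1-to-5 response scale, the invented numbers, and the 15% threshold are all assumptions borrowed from the example above.

```python
from statistics import mean

# Anonymous daily responses (1-5 scale) to one question, grouped by ISO week.
# The numbers are invented purely to illustrate the calculation.
responses_by_week = {
    "2025-W43": [4, 5, 4, 4, 3, 5, 4, 4],   # last week's "sense of purpose" answers
    "2025-W44": [3, 3, 4, 2, 3, 3, 4, 3],   # this week's answers
}

DIP_THRESHOLD = 0.15  # flag a week-over-week drop of 15% or more


def weekly_scores(data: dict[str, list[int]]) -> dict[str, float]:
    """Collapse anonymous individual answers into one team-level average per week."""
    return {week: mean(values) for week, values in data.items()}


def check_engine_light(data: dict[str, list[int]]) -> str | None:
    """Return an alert for the manager if this week's score dipped sharply."""
    scores = weekly_scores(data)
    (prev_week, prev), (curr_week, curr) = sorted(scores.items())[-2:]
    drop = (prev - curr) / prev
    if drop >= DIP_THRESHOLD:
        return (f"Sense of purpose fell {drop:.0%} from {prev_week} to {curr_week}: "
                f"consider opening the next huddle with a recent patient-win story.")
    return None


alert = check_engine_light(responses_by_week)
if alert:
    print(alert)
```

The point of the sketch is the shape of the loop: tiny anonymous inputs, a team-level aggregate, and an immediate nudge toward one concrete action.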

The Full Prescription: Clicks and Culture

A health system that gives its doctors two hours back with an AI scribe, but leaves them in a culture where they feel unheard and unvalued, hasn't solved burnout. It's just created more efficient, slightly less tired, still-burnt-out employees.

The AI scribe is the painkiller. It's essential for immediate, acute relief. We absolutely need it. But continuous listening tools, like a daily pulse from a platform such as Sayhii, are the antibiotic. They are the long-term therapy designed to fix the underlying cultural infection that made the system sick in the first place.

The smartest health systems in 2026 and beyond will be the ones that realize they must do both. They will use one set of AI tools to fix the process and another set to fix the culture. Because you can't heal a workforce by just treating the symptoms.

Until next time,

#StayCrispy

-Dr. Matt