The "Grace Period" is over (New laws as of Jan 1)
Navigating the $50B CMS expansion? Download my free 2026 Rural Health Strategy Checklist here.
Happy Tuesday (and welcome back to reality!).
If you are like me, you spent the last few weeks actively ignoring the "upcoming regulatory changes" emails so you could actually enjoy the holidays.
Well, it is January 6th. The holidays are over. The grace period is gone. And the new laws are officially here.
While we were toasting to the New Year, a significant shift in health data and AI regulation quietly went into effect on January 1, 2026. If you are building, buying, or implementing health tech this year, the ground just shifted beneath your feet.
For the last two years, we have talked about AI regulation in the future tense. We treated it like a "2027 problem" or something the EU would figure out first. That complacency ends this morning.
Here are the three critical "Jan 1" updates you need to know, fully expanded with what they actually mean for your product roadmap.
1. The "AI Doctor" Disclosure (California AB 489)
Status: Effective Jan 1, 2026. The Law: Assembly Bill 489 (Bonta)
As of five days ago, it is explicitly illegal in California for an AI system to imply it is a licensed healthcare professional. This sounds simple on paper, but the text of AB 489 goes much further than just requiring an "I am a bot" badge.
The law targets "deceptive design" in patient interfaces. Specifically, it prohibits word choices that suggest human agency in a clinical context.
The Reality for Your Product: If your chatbot says "I think you might have the flu" or "We recommend you see a specialist," you are likely non-compliant. The use of first-person pronouns ("I," "me," "we") by an algorithmic system in a healthcare setting is now legally risky.
The Fix: You need to audit your conversational UI this week. You cannot just slap a disclaimer in the footer anymore. The dialogue itself must be "mechanical" by design.
- Bad: "I found three specialists near you."
- Compliant: "The system has identified three specialists near you."
This removes the "magic" from the user experience, yes. But it also keeps you from being the first test case for the California Attorney General.
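To make that audit concrete, here is a minimal sketch of the kind of check you could run against your response templates. The pronoun list and templates are my own illustration, not language from AB 489 itself:

```python
import re

# Flag first-person phrasing in chatbot response templates.
# The pronoun list is illustrative, not drawn from the statute.
FIRST_PERSON = re.compile(r"\b(I'm|I've|I|me|my|we're|we|our|us)\b", re.IGNORECASE)

def flag_first_person(templates: list[str]) -> list[tuple[str, list[str]]]:
    """Return each template paired with the first-person tokens it contains."""
    flagged = []
    for text in templates:
        hits = FIRST_PERSON.findall(text)
        if hits:
            flagged.append((text, hits))
    return flagged

if __name__ == "__main__":
    templates = [
        "I found three specialists near you.",                     # risky phrasing
        "The system has identified three specialists near you.",  # mechanical phrasing
        "We recommend you see a specialist.",                      # risky phrasing
    ]
    for text, hits in flag_first_person(templates):
        print(f"REVIEW: {text!r} -> first-person tokens: {hits}")
```

A five-minute script like this won't replace legal review, but it will surface every template your counsel needs to look at.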
2. The Texas "Black Box" Opener (HB 149)
Status: Effective Jan 1, 2026. The Law: Texas Responsible AI Governance Act
While California is focused on what the AI says, Texas is focused on how the AI thinks. The Texas Responsible AI Governance Act (HB 149) went live this week, and it is arguably the most aggressive transparency law in the country.
It demands explainability for high-stakes decisions made by automated systems. In the text of the law, healthcare algorithms are categorized as "high-stakes" by default.
The Reality for Your Product: "It’s a proprietary black box algorithm" is no longer a valid legal defense in Texas if a patient claims your AI denied them care or misprioritized their triage.
If your tool helps a payer decide to deny a claim, or helps a hospital system flag a patient for discharge, you must be able to produce a "meaningful explanation" of how that decision was reached. This does not mean showing the source code. It means showing the weighting. You have to be able to tell the state: "The AI prioritized this patient because of Variable A and Variable B, not because of Variable C."
The Fix: If you are using deep learning models where even you don't know exactly why the model made a prediction, you have a liability problem in Texas as of this morning. You need to implement "explainability layers" or regression testing reports that can be pulled on demand.
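What does an "explainability layer" actually look like? Here is a minimal sketch of a per-decision explanation using a linear model, where feature contributions are directly auditable. The feature names and data below are synthetic, and HB 149 does not prescribe this (or any) specific technique:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic triage example: a linear model whose per-feature
# contributions can be pulled on demand for any single decision.
FEATURES = ["age", "days_since_last_visit", "abnormal_lab_count"]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # synthetic, standardized features
y = (X[:, 2] + 0.5 * X[:, 0] > 0).astype(int)   # synthetic triage label

model = LogisticRegression().fit(X, y)

def explain(patient: np.ndarray) -> list[tuple[str, float]]:
    """Rank per-feature contributions to the log-odds for one patient."""
    contributions = model.coef_[0] * patient
    return sorted(zip(FEATURES, contributions), key=lambda kv: -abs(kv[1]))

patient = X[0]
print("Flagged for triage:", bool(model.predict(patient.reshape(1, -1))[0]))
for name, contribution in explain(patient):
    print(f"  {name}: {contribution:+.3f} log-odds")
```

The point is not the model class. The point is that for any single decision, you can produce a ranked, human-readable answer to "why this patient" on demand.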
3. The HTI-1 "Soft" Deadline is Vanishing
Status: Enforcement begins March 1, 2026. The Law: ONC HTI-1 Final Rule (ASTP/ONC)
We technically caught a break here. The Office of the National Coordinator (now ASTP/ONC) announced a "temporary enforcement discretion" window for the new certification updates.
But do not let that lull you into a false sense of security.
The discretion window closes on March 1, 2026. That gives you exactly 54 days from today to finalize your "Decision Support Interventions" (DSI).
The Reality for Your Product: This is not just about updating your software version. The HTI-1 rule requires you to provide "transparent details" about your algorithm's training data to your customers. Your hospital clients will start demanding this data in February so they can meet their own deadlines.
If you are a vendor, your customers need to know:
- What data was this model trained on?
- How did you test for bias?
- When was the last time it was re-validated?
If you don't have those documentation sheets ready to send, your product will be blocked from use in certified EHR environments by spring.
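One way to get ahead of those February requests is to treat the documentation sheet as a structured artifact your pipeline can emit automatically. A minimal sketch, with field names that are my own shorthand rather than the official HTI-1 source-attribute list:

```python
from dataclasses import dataclass, asdict
import json

# A vendor-side "documentation sheet" covering the three questions above.
# Field names are illustrative shorthand; map them to the actual
# source attributes in the HTI-1 final rule before shipping.
@dataclass
class ModelDocSheet:
    model_name: str
    training_data_description: str
    bias_testing_summary: str
    last_revalidated: str  # ISO date

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

sheet = ModelDocSheet(
    model_name="readmission-risk-v3",  # hypothetical model
    training_data_description="2019-2024 claims + EHR data, 4 health systems",
    bias_testing_summary="Subgroup AUC parity checked across age, sex, payer",
    last_revalidated="2025-11-15",
)
print(sheet.to_json())
```

If that sheet regenerates on every retraining run, the "when was it last re-validated" question answers itself.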
The Fix: Stop building new features. Dedicate your engineering and product teams to "Compliance Documentation" for the next 8 weeks. It is not glamorous work. But it is the only work that matters right now.
The Bottom Line
2025 was the year we theorized about AI safety. 2026 is the year we have to document it.
The "Wild West" era of deploying health algorithms without oversight ended on New Year's Eve. We are now in the era of audits, disclosures, and explainability.
Don't let this paralyze your roadmap. Just realize that "Compliance" is no longer a wrapper you add at the end of the development cycle. It is now the core feature.
Let’s get to work. #StayCrispy
-Dr. Matt