The EMR Just Got Smarter. WiSeR Is Watching.

Last week, Mount Sinai Health System and OpenEvidence announced a collaboration that didn't get nearly enough attention in wound care circles.

OpenEvidence — the most widely used medical AI platform among U.S. physicians — is now embedded directly inside Epic across Mount Sinai's seven hospitals and 400+ outpatient practices. Every physician, every nurse, every pharmacist can now ask a clinical question in natural language and receive an evidence-based answer without leaving the EMR workflow.

That's the headline. Here's the wound care story underneath it.

What WiSeR Actually Demands

If you're operating in the skin substitute space, you already know what WiSeR is costing you in time and friction. The WiSeR (Wasteful and Inappropriate Service Reduction) prior authorization model — now active in multiple states and expanding — requires clinicians to document clinical necessity at the point of care, in a format that satisfies payer review, before a skin substitute claim gets approved.

The documentation burden is real. And it is falling disproportionately on nurses and wound care coordinators — the exact clinicians managing the majority of chronic wound encounters, making product selection decisions, and completing the documentation that determines whether a claim clears or triggers an audit.

That is precisely the care team the Mount Sinai deployment just handed an AI-powered evidence tool.

Where This Gets Interesting

The potential upside is direct. AI-embedded clinical decision support at the point of care means clinicians can surface guideline-consistent recommendations faster — and document in a format more likely to align with what payer review criteria are looking for. Not because the tool games the system, but because better evidence access produces better-documented decisions. That is exactly what WiSeR is theoretically designed to reward.

For wound care companies with strong clinical evidence behind their products, this infrastructure is a tailwind. If the AI tool surfaces their product as guideline-consistent, and the clinician documents accordingly, the prior authorization friction decreases. Evidence-backed products get a structural workflow advantage they didn't have before.

The downside is less obvious — and worth paying closer attention to.

AI tools trained on peer-reviewed literature don't automatically reflect the specific LCD and coverage criteria WiSeR is checking against. A clinically sound recommendation is not automatically a reimbursement-defensible one. Clinicians who assume the AI covered their documentation requirements — without verifying payer-specific criteria — are building a compliance gap they may not discover until an audit.

There's a second risk that almost no one is discussing. If payers eventually gain access to utilization patterns from AI-assisted clinical encounters, they gain a new data layer for setting coverage benchmarks. AI-assisted documentation at scale creates visibility into prescribing behavior and product selection that didn't previously exist in structured form. That data will not sit unused.

The Gap That Widens

The Mount Sinai deployment is a large academic medical center story. Seven hospitals. Hundreds of outpatient practices. IT infrastructure, Epic integration, and the organizational capacity to deploy enterprise-wide AI tools.

Community wound centers, independent podiatric practices, and mobile wound care providers — the settings managing a significant share of the chronic wound patient population — are nowhere near this infrastructure. WiSeR doesn't care. The prior authorization requirements are the same regardless of whether the clinic has AI-assisted documentation support or a paper form.

That gap — between well-resourced systems with AI-embedded workflows and smaller providers navigating WiSeR manually — is going to widen before it narrows. And it will show up in claim approval rates, audit exposure, and ultimately in which providers remain viable in the CAMPs space.

What to Watch

Whether OpenEvidence and similar platforms build wound-care-specific coverage criteria — not just clinical guidelines — into their knowledge base. That's the gap between useful and reimbursement-defensible.

Whether payers begin referencing AI-assisted documentation patterns in coverage determinations or audit triggers. The data infrastructure for this is being built now.

And whether the wound care industry treats EMR interoperability as a commercial strategy — not just a product feature. The providers who can document well will be reimbursed. The platforms that make documentation easier will be adopted. The products with the evidence to support both will win.

The AI layer just arrived in the EMR. WiSeR was already there.

See you at SAWC in two days — I'm sure this will be part of the conversation.

— Scott
