Why E-Referrals Solved the Wrong Problem
Reuven Mashitz
The healthcare technology industry spent the last decade celebrating a victory that never happened.
E-referrals were supposed to eliminate the chaos of post-acute admissions. Vendors promised less paperwork, faster decisions, and more time for patient care. Hospitals adopted them. Money flowed. Dashboards showed referrals moving electronically instead of by fax.
Everyone declared success.
But if you sit next to an admissions coordinator at a skilled nursing facility on a Tuesday afternoon, you'll see something different. You'll watch them stare at referral #30 of the day—delivered digitally, timestamped perfectly, completely electronic—and spend 20 minutes hunting through contradictory notes, calling the hospital for missing information, and trying to decide if "UTI with delirium, resolved" actually means "chronic dementia with ongoing behavioral risk."
The referral arrived faster. The decision didn't get easier.
We built e-referrals to solve transmission. What we actually needed was a centralized platform for judgment.
What Actually Happens at the Start
When an e-referral lands at a post-acute facility, the system logs it as "received." The dashboard updates. From the vendor's perspective, the workflow is complete.
From the coordinator's perspective, the work is just starting.
First 5 minutes: Triage, not data entry.
The coordinator scans for the essentials—diagnosis, payer, discharge timing, and obvious red flags like isolation precautions or high-acuity orders. They're asking themselves one question before they touch a single field in the system: "Is this even possible for us today?"
If that gut check feels wrong, they already know this referral will require work, regardless of how clean it looks in structured fields.
Minutes 5-15: Building a story from fragments.
Now they're flipping between PDFs, portal views, and their own system, trying to answer three questions:
- Can we take care of this patient safely?
- Will we get paid appropriately?
- Do we have the right bed and staffing for this profile?
This is where contradictions show up. Different primary diagnoses across different notes. Unclear behavioral status. Payer fields that don't match the actual authorization situation.
Despite regulations mandating comprehensive discharge summaries, incomplete documentation remains a persistent problem that contributes to poor post-hospital outcomes. Even with standardized EHR formats implemented hospital-wide, the gap between what hospitals send and what post-acute facilities actually need persists.
Most tools just sit there saying, "Here are the fields." The coordinator is doing the real work—reconciling conflicts and deciding what they still don't know.
Minutes 15-25: Chasing the gaps.
By now they've identified what's missing or unclear, and the clock is ticking. They might call the hospital case manager: "I need updated labs, a clearer behavior history, and what's the actual plan status with this Medicare Advantage product?"
They may ping internal nursing: "If we accept this today, can your team actually handle this acuity on evenings?"
They're deciding what's a hard stop versus what they're willing to accept as uncertainty, based on today's census, payer mix, and strategic relationships with referral sources.
None of that shows up in a status change. It's judgment under ambiguity.
Minutes 25-30: Making it look linear.
By the 30-minute mark, the coordinator has made a provisional decision. Now they go back to the system to make it look like a clean workflow—updating the status, dropping in a summary note, checking whatever boxes the platform requires.
The record matches the decision they already made in their head and through side conversations.
The reality is this: the first 30 minutes are not "processing an e-referral." They're compressing clinical, financial, operational, and relational judgment into a very small window. The problem with most automation is that it only sees the e-referral. It doesn't see the thinking.
The Fundamental Misunderstanding About "Solved"
Most e-referral vendors define success like this: the packet arrived electronically instead of by fax. Demographics, diagnosis codes, and payer fields mapped cleanly into the system. A referral ID, status, and timestamps exist for reporting and compliance.
From their standpoint, the win is fewer lost faxes, more legible data, faster routing, and better dashboards showing referral volume and response times.
All of that is good. It's also table stakes, not a solution to the actual admissions problem.
They're confusing "the referral arrived" with "the decision got easier."
For the person staring at referral #30, "solved" would mean something completely different. The system would help them see contradictions and gaps in the clinical story—different diagnoses in different notes, unclear behavior, missing medications—instead of forcing them to dig through PDFs to find them.
It would connect clinical, financial, and operational factors so they can see the tradeoffs of yes or no in one place. It would reduce the need for shadow tools like spreadsheets, side calls, and manual pre-triage.
In other words, solving means the thinking gets easier, not just the delivery of the information.
Most vendors act as if the bottleneck were getting data from point A to point B. In post-acute admissions, the real bottleneck is turning messy, multi-source, often contradictory data into a safe, financially sound, operationally realistic decision under time pressure.
Until a system is judged on whether it helps with that step, calling the problem "solved" because the referral is now digital is mistaking a better envelope for a better decision.
The Hidden Cost of a Single Contradiction
Take a concrete example: the diagnosis conflict between "UTI with delirium" and "probable dementia with behavioral disturbance."
On the e-referral, you might see:
- Discharge diagnosis: "UTI with delirium; condition improved"
- Problem list: multiple entries over the stay—"confusion," "falls," "cognitive decline"
- Nursing notes: mixed language like "pleasantly confused" in one place and "agitated, attempting to climb out of bed, requires sitter" in another
On paper, it's a resolved infection. In the narrative, it might be evolving dementia with ongoing behavioral risk.
That one contradiction can easily cost 10-20 minutes and a significant chunk of mental energy.
The time cost:
The coordinator spends 5-10 minutes re-reading the discharge summary, nursing notes, maybe psych or geri consults, trying to understand whether the behavior was transient delirium or a chronic pattern.
Then another 5-10 minutes of outreach—calling the hospital case manager or nurse to ask: "Is this patient still trying to get out of bed? Any sitter or restraints in the last 24-48 hours?" Add an internal check with nursing leadership about whether they can safely manage that behavior on the current unit.
That's up to 20 extra minutes on a single referral that, in the system's structured fields, looked straightforward.
The cognitive cost:
Beyond the clock, there's mental strain. The coordinator is holding two competing mental models—"short-term delirium that's resolved" versus "ongoing behavioral risk that could destabilize our unit"—and imagining the consequences of being wrong.
They're weighing those against today's reality: staffing levels, current resident mix, open bed type, and the facility's recent history with similar patients.
They're deciding how much uncertainty they're willing to live with and what to document to defend that decision later if something goes wrong.
The system, if it just shows "UTI with delirium – improved," pretends none of that thinking exists. The coordinator pays the price in extra cycles, stress, and the risk of missing a red flag because it was buried in contradictory notes.
When Messy Data Becomes Systematically Biased Data
Here's what we realized after watching the same "mistake" repeat in ways that were too consistent to be random: certain hospitals kept sending us referrals where the paperwork said one thing, but the patient in front of us was clearly something else.
On paper: "UTI with delirium, resolved" or "acute confusion, back to baseline."
In reality, a resident who had clear, ongoing cognitive impairment and behavioral risk that looked a lot more like chronic dementia than a transient episode.
At first, you treat that as messy documentation. But when you see the same hospital, same service line, same kind of patient, over and over, you realize this is a pattern, not noise.
The moment it clicked:
We connected three things. Skilled nursing facilities are more likely to deny referrals tied to mental illness, dementia, and complex behavioral profiles because they're harder and riskier to manage. Hospitals are under intense pressure to move patients downstream quickly, avoid readmissions, and show they've "stabilized" the acute issue.
And discharge documentation around cognitive and behavioral issues is often incomplete, inconsistent, or minimized, even when the chart clearly shows serious neuropsychiatric symptoms.
When you put that together, it stops looking like random sloppiness and starts looking like structural bias. The way the story is written shapes how willing post-acute providers are to accept the patient.
Hospitals are not neutral narrators in that story.
Once we saw that, "dirty data" stopped being a purely technical problem. It became a reflection of incentives, how hospitals' documentation can be influenced by bed pressure, readmission penalties, and where they want patients to land. It became a source of systematic distortion, especially around diagnoses that skilled nursing facilities are more likely to deny.
When we say a centralized platform has to treat ambiguity and contradiction as data, we don't just mean "the packet is messy." We mean the way the story is told is shaped by someone else's pressures, and our customer needs a centralized platform that helps them see through that, not just store it.
What a Centralized Platform Actually Looks Like
Quantify the bias instead of just sensing it.
A smart coordinator knows "Hospital X always soft-pedals behavior," but it's a feeling. Software can be relentless across hundreds of patients.
The system can quantify it: "In the last 90 days, Hospital X documented 'delirium resolved' in 70% of cases where skilled nursing facility nursing later charted ongoing disruptive behavior." It can break it down by service line, attending, unit, and payer, so operators see where and when the narrative is consistently off.
That turns gut suspicion into an actionable signal they can use in real time.
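As a rough illustration, the discrepancy rate described above is a simple per-hospital aggregation. This is a minimal sketch with made-up field names ("discharge_claims_resolved", "snf_charted_ongoing_behavior"), not a real schema:

```python
from collections import defaultdict

def narrative_discrepancy_rates(referrals):
    """For each hospital, the share of referrals whose discharge narrative
    claimed resolution but where SNF nursing later charted ongoing behavior.
    Field names are illustrative, not an actual data model."""
    counts = defaultdict(lambda: [0, 0])  # hospital -> [discrepant, total]
    for r in referrals:
        if r["discharge_claims_resolved"]:
            counts[r["hospital"]][1] += 1
            if r["snf_charted_ongoing_behavior"]:
                counts[r["hospital"]][0] += 1
    return {h: d / t for h, (d, t) in counts.items()}

referrals = [
    {"hospital": "Hospital X", "discharge_claims_resolved": True,
     "snf_charted_ongoing_behavior": True},
    {"hospital": "Hospital X", "discharge_claims_resolved": True,
     "snf_charted_ongoing_behavior": False},
    {"hospital": "Hospital Y", "discharge_claims_resolved": True,
     "snf_charted_ongoing_behavior": False},
]
rates = narrative_discrepancy_rates(referrals)
print(rates)  # {'Hospital X': 0.5, 'Hospital Y': 0.0}
```

The same counting can be sliced by service line, attending, unit, or payer simply by changing the grouping key.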
Auto-surface contradictions instead of making people hunt.
A coordinator can eventually find inconsistencies. The system can highlight them instantly: "Discharge summary: 'back to baseline.' Progress notes in last 48 hours: sitter, attempts to leave bed, PRN antipsychotic."
It can present a focused "tension view" so they don't have to read 40 pages to realize the story doesn't line up. That saves minutes per case and preserves mental energy for the actual decision.
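A first-pass version of that "tension view" doesn't require sophisticated NLP; even keyword pairing catches the obvious conflicts. This sketch uses illustrative keyword lists; a production system would use clinical language models and coded data:

```python
import re

# Illustrative phrase lists, not a validated clinical vocabulary.
RESOLVED_CLAIMS = [r"back to baseline", r"delirium resolved", r"condition improved"]
RISK_SIGNALS = [r"\bsitter\b", r"climb(ing)? out of bed",
                r"attempts? to leave", r"PRN antipsychotic", r"agitat"]

def tension_view(discharge_summary, recent_notes):
    """Return (resolved-claim, risk-signal) pairs where the discharge
    narrative and recent nursing notes pull in opposite directions."""
    tensions = []
    for claim in RESOLVED_CLAIMS:
        if re.search(claim, discharge_summary, re.I):
            for note in recent_notes:
                for signal in RISK_SIGNALS:
                    m = re.search(signal, note, re.I)
                    if m:
                        tensions.append((claim, m.group(0)))
    return tensions

tensions = tension_view(
    "Discharge diagnosis: UTI with delirium; back to baseline.",
    ["Agitated, attempting to climb out of bed, requires sitter overnight."],
)
print(tensions)
```

Surfacing three conflicting pairs up front is exactly the difference between reading 40 pages and reading one focused panel.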
Turn unknowns into structured gaps with next steps.
A good coordinator already thinks, "I don't know enough about behavior or payer here." The system can catalog those unknowns explicitly: "Behavior at discharge = unclear," "Skilled nursing facility days remaining = unknown," "Baseline cognition = undocumented."
It can tie each to quick actions and track whether they were resolved. That shifts "I have a bad feeling" into a repeatable, team-wide practice.
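Structurally, each unknown is just a small record with a status and a suggested next step. A minimal sketch, with hypothetical field names and actions:

```python
from dataclasses import dataclass

@dataclass
class Gap:
    """One explicit unknown on a referral. Statuses move
    unknown -> requested -> resolved; names here are illustrative."""
    question: str
    next_action: str = ""
    status: str = "unknown"
    answer: str = ""

def open_gaps(gaps):
    """Everything the team still owes itself an answer on."""
    return [g for g in gaps if g.status != "resolved"]

gaps = [
    Gap("Behavior at discharge", next_action="Call hospital case manager"),
    Gap("Baseline cognition", next_action="Request geri/psych consult notes"),
]
gaps[0].status, gaps[0].answer = "resolved", "No sitter in last 48h"
print([g.question for g in open_gaps(gaps)])  # ['Baseline cognition']
```

Because each gap carries its own next action, "I have a bad feeling" becomes a checklist any teammate can pick up mid-shift.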
Capture judgment as a byproduct, not a separate task.
We built an early version of a "Decision Justification" section. After the coordinator reviewed the referral and made an admit or deny call, the screen asked them to pick a reason code, select contributing factors, and type a short narrative explaining their reasoning.
On paper, it sounded great. In reality, they had already thought through all of this while working the case.
When we watched someone use it, we realized we were making them reconstruct a decision they had already made. Pure duplication.
Her body language changed. Shoulders up, small sigh, quick clicking just to get through it. That's when we knew this was a "documentation of judgment" feature, not a "judgment-support" feature.
We redesigned it so the decision workspace itself captured judgment as a side effect. The place where they make the decision is the place where the system quietly captures their reasoning in structured form.
If they have to "go document their judgment," you've already lost.
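One way to picture "judgment as a byproduct": every action the coordinator takes inside the decision workspace appends to a structured reasoning trail, so the record exists the moment the decision does. This is a hypothetical sketch of the pattern, not our actual implementation:

```python
from datetime import datetime, timezone

class DecisionWorkspace:
    """Sketch: the workspace logs each review action as structured
    reasoning, so no separate 'document your judgment' step exists."""
    def __init__(self, referral_id):
        self.referral_id = referral_id
        self.trail = []

    def _log(self, kind, detail):
        self.trail.append({"at": datetime.now(timezone.utc).isoformat(),
                           "kind": kind, "detail": detail})

    def review_tension(self, claim, signal):
        self._log("tension_reviewed", f"{claim!r} vs {signal!r}")

    def resolve_gap(self, question, answer):
        self._log("gap_resolved", f"{question}: {answer}")

    def decide(self, accept, note=""):
        self._log("decision", {"accept": accept, "note": note})
        return self.trail  # the reasoning record already exists

ws = DecisionWorkspace("ref-30")
ws.review_tension("delirium resolved", "requires sitter")
ws.resolve_gap("Behavior last 48h", "No sitter since Monday")
trail = ws.decide(accept=True, note="Manageable on memory care unit")
print(len(trail))  # 3
```

The accept/deny click is the last entry in a trail the coordinator never had to write out separately.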
The Shoulder Moment Happens Constantly
Studies on EHR and digital tools repeatedly find that poor usability, duplicate entry, and misaligned workflows are routine, not rare. Clinicians report significant documentation burden driven by extra clicks, repeated fields, and fragmented information.
In many settings, people spend close to half their day documenting or "fixing" what the system wants, rather than doing the work itself. They develop workarounds—external notes, copy-paste, charting later—to survive.
If you translate that into referral and admissions work, it means every time a tool adds separate "reason codes," extra checkboxes, or parallel documentation steps "for reporting," those shoulder moments are happening dozens of times per shift.
Most referral tools are optimized to capture standardized data for compliance and analytics. They prove that each referral had a documented path and reason code. They feed dashboards that show throughput, not cognitive load.
So "streamlining" often means the physical paper went away, but the cognitive paper multiplied. The system forces coordinators to re-enter or restate things they already considered. It splits the thinking across multiple screens and forms. It adds new documentation tasks that don't change the decision, only how it looks later.
That's performative documentation. Work that exists primarily so the system, the report, or the auditor can be satisfied, while the human still has to do the real thinking separately.
In June 2023, CMS issued a memorandum specifically highlighting concerns with the hospital discharge process to post-acute care providers, focusing on "inclusion and accuracy of critical patient information." It was a regulatory acknowledgment that digitization hasn't solved the fundamental communication problem.
The Question Leadership Should Ask
When you're evaluating referral systems, here's the question that reveals whether a platform actually reduces cognitive load or just redistributes it:
"Show me, step by step, what your system removes from my coordinator's day on referral #30—what will they stop doing, not just do in a different screen?"
Push on it:
- "Which spreadsheets, side notes, and off-system steps disappear in the first 90 days—and how do you know?"
- "Walk me through a real, messy referral and point to every moment where my coordinator thinks less and clicks less, not just clicks somewhere new."
- "If I shadow my team for a day after go-live, what will I see them no longer doing that they do today?"
The industry celebrated "we eliminated fax and paper from referrals," but in many cases, it simply converted physical paperwork into digital performative work—shifting and disguising the burden instead of truly removing the mental and procedural steps operators have to take to decide safely on referral #30.
What We Should Have Said From the Beginning
If we could go back to the moment when e-referrals were being celebrated as the solution, here's what we would whisper into the room:
Digitally moving a bad story is not a solution.
If this technology doesn't make the hardest, messiest referral easier for the person in the chair, all you've built is a faster way to ship your problems downstream.
The real opportunity in post-acute referral management isn't optimizing transmission. It's building a centralized platform that transforms fragmented inputs into structured, actionable decisions. A centralized platform that captures judgment as a byproduct of doing the work, not as a separate performance for the system.
A centralized platform that helps operators see through systematically biased data, not just store it faster.
We're not interested in moving complexity from paper to screen. We're interested in collapsing it entirely—so the person staring at referral #30 can make a confident decision without reconstructing the entire clinical story from scratch, without chasing gaps through phone calls and side systems, without performing their thinking twice.
That's what solved actually means.
Citations:
- Kripalani S, LeFevre F, Phillips CO, et al. “Deficits in Communication and Information Transfer Between Hospital-Based and Primary Care Physicians.” JAMA. 2007.
- https://pmc.ncbi.nlm.nih.gov/articles/PMC5367268/
- Kruse CS, Stein A, Thomas H, Kaur H. “The Use of Electronic Health Records to Support Population Health: A Systematic Review of the Literature.” JMIR Med Inform. 2018. (Cites ongoing discharge and documentation issues.)
- https://pmc.ncbi.nlm.nih.gov/articles/PMC5967650/
- Melnick ER, Dyrbye LN, Sinsky CA, et al. “The Association Between Perceived Electronic Health Record Usability and Professional Burnout Among US Physicians.” Mayo Clin Proc. 2019.
- https://pmc.ncbi.nlm.nih.gov/articles/PMC11491602/
- Shanafelt TD, Dyrbye LN, West CP, Sinsky CA. “The Electronic Health Record Problem List: Challenges, Controversies, and Opportunities.” Mayo Clin Proc. 2023. (On documentation burden and cognitive load.)
- https://pmc.ncbi.nlm.nih.gov/articles/PMC11491602/
- Centers for Medicare & Medicaid Services (CMS). “Overview of CMS Requirements for Hospital Discharges to Post-Acute Care Providers (QSO-23-xx).” Quality, Safety & Oversight memorandum, 2023. Summary via IPRO QI:
- https://qi.ipro.org/2024/03/14/overview-of-cms-requirements-for-hospital-discharges-to-post-acute-care-providers/
- CMS. “Discharge Planning Requirements (Final Rule).” Federal Register. 2019. (Regulatory basis for complete and accurate discharge information.)
- https://www.federalregister.gov/documents/2019/09/30/2019-20732/medicare-and-medicaid-programs-revisions-to-requirements-for-discharge-planning-for-hospitals
- Middleton B, Bloomrosen M, Dente MA, et al. “Enhancing Patient Safety and Quality of Care by Improving the Usability of Electronic Health Record Systems.” JMIR Med Inform. 2013. (On usability, duplicate work, and workarounds.)
- https://pmc.ncbi.nlm.nih.gov/articles/PMC3708030/
- Baines R, de Bruin JS, Teune W, et al. “Impact of EHRs on Documentation Time and Clinician Workload: A Systematic Review.” BMJ Health Care Inform. 2020. (Quantifies documentation time and cognitive burden.)
- https://informatics.bmj.com/content/27/3/e100190