What Is Clinical Decision Support? A Complete Guide for Clinicians
Clinical decision support (CDS) is health information technology that delivers knowledge, patient-specific data, and evidence-based recommendations to clinicians at the point of care. It covers everything from a warfarin-fluconazole interaction alert firing in an EHR to an AI system generating a ranked differential diagnosis from an ambient patient encounter. According to the Office of the National Coordinator for Health IT (ONC), 78% of office-based physicians now use certified EHR systems, and CDS is a standard part of mainstream EHR workflows across ambulatory and hospital care (ONC physician adoption data). That makes clinical decision support one of the most widely deployed, least well understood, and most frequently ignored categories of health IT in American medicine.
The term “clinical decision support system” (CDSS) refers specifically to the software that delivers CDS. In practice, the two terms are used interchangeably. What matters more than the label is what the technology actually does: it sits between the clinician and the patient data, and it either helps the clinician think better or it gets in the way. The history of CDS is a story of both.
This guide covers the five types of clinical decision support, the evolution from rule-based alerts to AI-powered reasoning, the regulatory requirements that govern CDS in the U.S., alert fatigue as the central unsolved problem, and what next-generation CDS platforms like Glass Health look like in practice. For a comparison of specific CDS tools, see our best clinical decision support tools guide.
The History of Clinical Decision Support
CDS did not begin with modern AI. Long before ambient workflows and large language models, clinicians and informaticians were trying to answer the same question: how do you get the right clinical knowledge to the right clinician at the right moment without creating more work?
1970s-1980s: Standalone expert systems
Early CDS grew out of academic expert-system research. MYCIN, developed at Stanford in the 1970s, used rule-based logic to recommend diagnoses and antibiotic therapy for infectious diseases. INTERNIST-I and its successor QMR attempted a much broader form of diagnostic support for internal medicine. DXplain, introduced at Massachusetts General Hospital in the late 1980s, took a checklist-style approach by generating ranked diagnostic possibilities from clinical findings.
The important historical point is not that these systems replaced clinicians. They did not. The important point is that they proved computers could organize clinical knowledge in a way physicians found useful, especially for differential diagnosis and consistency checks. Their biggest limitation was workflow: they were largely standalone systems, not tools embedded in everyday clinical documentation or order entry.
1990s-2000s: Medication safety, CPOE, and EHR-linked CDS
As computerized physician order entry and EHRs became more common, CDS shifted from standalone diagnostic systems toward medication safety and ordering support. Drug-drug interaction warnings, allergy checks, dose calculators, and preventive reminders became the first widely deployed forms of operational CDS.
This era also produced some of the strongest early clinical evidence for CDS. Landmark order-entry studies showed that medication safety improved when decision support was embedded directly into ordering workflows, not left as a separate reference task. Federal EHR incentive programs then accelerated adoption. Meaningful Use Stage 2 specifically required eligible hospitals to implement multiple CDS interventions tied to high-priority conditions and to enable drug-drug and drug-allergy checking.
The downside of this generation was obvious almost immediately: too many alerts, too little relevance. CDS became associated with pop-ups, overrides, and alert fatigue because health systems often deployed every available rule instead of only the ones that clearly changed care.
2010s: Passive CDS, knowledge resources, and interoperability
The 2010s expanded CDS beyond interruptive alerts. Reference tools such as UpToDate, DynaMed, and AMBOSS became core parts of clinical practice because they gave clinicians fast access to synthesized evidence without forcing a specific action. This was a different CDS model: not “stop and respond to this alert,” but “look up what you need when you need it.”
The other major shift was interoperability. SMART on FHIR created a standard way for third-party applications to launch inside the EHR and use patient data without requiring a bespoke integration for every health system. That made modern CDS platforms much more feasible because the workflow barrier began to fall.
2020s: AI-powered and ambient CDS
The current phase of CDS is defined by systems that can reason across narrative clinical context rather than only structured fields. AI-powered CDS can work from transcripts, chart data, uploaded documents, and existing note context to generate differentials, suggested next steps, and assessment-and-plan drafts.
Platforms like Glass Health represent this shift. Glass combines ambient capture, encounter-specific clinical reasoning, and downstream documentation in a single workflow. Instead of asking the clinician to leave the encounter, open a separate tool, and manually restate the case, the CDS is generated from the encounter itself. That is the major historical shift: CDS moving from a separate destination in the workflow to something embedded directly inside it.
Five Types of Clinical Decision Support
A practical clinician-facing way to organize CDS is into five common functions. Most health systems use a combination. Each type solves a different clinical problem and has different strengths and failure modes.
1. Alert and reminder systems
Alerts are the most visible and most criticized form of CDS. They notify clinicians of potential safety issues in real time: drug-drug interactions, drug-allergy conflicts, critical lab values, duplicate orders.
The clinical value, when alerts work correctly, is real. A physician ordering ketorolac for a postoperative patient on warfarin receives an alert that combining an NSAID with warfarin significantly increases bleeding risk, a potentially life-saving interruption. A radiologist ordering an iodinated contrast CT on a patient taking metformin gets an alert about the risk of metformin-associated lactic acidosis if contrast-induced kidney injury develops, prompting a hold on metformin at the time of the study and for 48 hours afterward while renal function is reassessed. A pharmacist verifying a prescription for simvastatin 80mg in a patient already on amlodipine sees a flag that this combination increases the risk of rhabdomyolysis, prompting a dose adjustment.
The problem is volume. Research published in the Journal of the American Medical Informatics Association estimated that approximately 196,600 adverse drug events occur annually in the U.S. due to inappropriate medication-related alert overrides (JAMIA, 2020). Clinicians override most alerts because most alerts are not clinically important for the specific patient. A mild-severity alert for the theoretical interaction between lisinopril and potassium supplements in a patient with a normal potassium level and stable renal function is technically correct but clinically useless. Multiply that by dozens of medications and hundreds of patients, and clinicians learn to click “override” reflexively.
Reminder systems, a subset of alerts, prompt clinicians about overdue preventive care (colonoscopy screening at age 45, diabetic eye exams, annual wellness visits) or needed follow-up actions (recheck potassium after starting an ACE inhibitor, repeat TSH in 6-8 weeks after a levothyroxine dose change). These tend to generate less fatigue than safety alerts because they appear at natural decision points rather than interrupting active ordering.
2. Clinical guidelines and protocols
Guideline-based CDS presents evidence-based protocols within the clinical workflow, making it easier for clinicians to follow best practices without memorizing every recommendation from every specialty society.
Sepsis screening is one of the most impactful examples. Many hospitals now use automated screening tools that monitor vital signs, lab values, and nursing assessments for criteria suggestive of sepsis (two or more SIRS criteria plus a suspected infection source, or a qSOFA score of 2 or higher). When the screening criteria are met, the system generates a sepsis alert and presents the hospital’s sepsis bundle protocol: blood cultures before antibiotics, lactate measurement, broad-spectrum antibiotics within one hour, fluid resuscitation of 30 mL/kg for hypotension or lactate over 4 mmol/L. Studies at institutions like Northwell Health have shown that CDS-driven sepsis protocols reduce time to antibiotic administration and decrease sepsis mortality.
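The screening logic described above is simple enough to sketch. The following is an illustrative Python model of the two trigger pathways named in the text (two or more SIRS criteria plus suspected infection, or qSOFA of 2 or higher); the threshold values are the standard published criteria, but the data structure and function names are hypothetical, and a production screening engine would pull these values continuously from the EHR rather than from a snapshot:

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    temp_c: float
    heart_rate: int
    resp_rate: int
    wbc_k: float      # white blood cells, x1000/uL
    systolic_bp: int
    gcs: int          # Glasgow Coma Scale

def sirs_count(v: Vitals) -> int:
    """Count SIRS criteria met: temperature, heart rate, respiratory rate, WBC."""
    return sum([
        v.temp_c > 38.0 or v.temp_c < 36.0,
        v.heart_rate > 90,
        v.resp_rate > 20,
        v.wbc_k > 12.0 or v.wbc_k < 4.0,
    ])

def qsofa_score(v: Vitals) -> int:
    """qSOFA: RR >= 22, systolic BP <= 100, altered mentation (GCS < 15)."""
    return sum([v.resp_rate >= 22, v.systolic_bp <= 100, v.gcs < 15])

def sepsis_screen(v: Vitals, suspected_infection: bool) -> bool:
    """Fire the screening alert when either trigger pathway is met."""
    return (sirs_count(v) >= 2 and suspected_infection) or qsofa_score(v) >= 2
```

A patient with fever, tachycardia, tachypnea, leukocytosis, and a suspected pneumonia would trip both pathways; the point of the sketch is that the trigger is fully auditable, which is why rule-based screening remains the dominant design for sepsis bundles.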
Antibiotic stewardship is another area where guideline-based CDS has measurable impact. When a physician orders vancomycin for a urinary tract infection, the system can suggest narrower-spectrum alternatives based on the hospital’s local antibiogram data. When a patient has been on IV antibiotics for 72 hours and is clinically improving, the system can recommend oral step-down therapy. These interventions reduce unnecessary broad-spectrum antibiotic use, which directly combats antimicrobial resistance.
Other examples include VTE prophylaxis protocols (automatically assessing hospitalized patients for DVT risk and recommending appropriate prophylaxis), opioid prescribing guidelines (flagging doses above 90 morphine milligram equivalents per day and checking the state prescription drug monitoring program), and heart failure management pathways (recommending guideline-directed medical therapy titration based on ejection fraction, blood pressure, and renal function).
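The 90 MME/day opioid check mentioned above is, mechanically, a weighted sum. Here is a minimal sketch, assuming a small subset of published CDC conversion factors (morphine 1.0, oxycodone 1.5, hydrocodone 1.0, codeine 0.15); a real implementation would use the complete, current CDC/CMS conversion tables, and the function names here are illustrative:

```python
# Illustrative MME conversion factors (subset; confirm against current CDC tables)
MME_FACTOR = {"morphine": 1.0, "oxycodone": 1.5, "hydrocodone": 1.0, "codeine": 0.15}

def daily_mme(prescriptions):
    """Sum morphine milligram equivalents per day across active opioid orders.

    Each prescription is a (drug, dose_mg, doses_per_day) tuple.
    """
    return sum(MME_FACTOR[drug] * dose * freq for drug, dose, freq in prescriptions)

def opioid_alert(prescriptions, threshold=90.0):
    """Return the daily total and whether it meets the flagging threshold."""
    total = daily_mme(prescriptions)
    return total, total >= threshold
```

For example, oxycodone 10mg four times daily (60 MME) plus hydrocodone 10mg four times daily (40 MME) totals 100 MME/day and would trip the 90 MME flag.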
3. Order sets and documentation templates
Order sets bundle the tests, medications, and procedures commonly needed for specific clinical scenarios into predefined packages. An acute coronary syndrome admission order set might include troponin levels q6h, aspirin 325mg, heparin drip per protocol, cardiology consult, telemetry monitoring, NPO status, and a statin if not already prescribed. A community-acquired pneumonia order set might include blood cultures, sputum culture, chest x-ray, azithromycin plus ceftriaxone (or respiratory fluoroquinolone monotherapy for penicillin-allergic patients), and a CURB-65 severity assessment.
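The CURB-65 assessment embedded in that pneumonia order set is another example of auditable scoring logic. A minimal sketch, using the standard five criteria (confusion, urea above 7 mmol/L or BUN above 19 mg/dL, respiratory rate of 30 or more, systolic BP below 90 or diastolic at or below 60, age 65 or older); the function signature is hypothetical:

```python
def curb65(confusion: bool, bun_mg_dl: float, resp_rate: int,
           systolic_bp: int, diastolic_bp: int, age: int) -> int:
    """CURB-65 pneumonia severity score: one point per criterion met (0-5)."""
    return sum([
        confusion,
        bun_mg_dl > 19,                        # urea > 7 mmol/L
        resp_rate >= 30,
        systolic_bp < 90 or diastolic_bp <= 60,
        age >= 65,
    ])
```

Embedding the score in the order set means the clinician sees a severity assessment at the moment of disposition decision-making (outpatient, ward, or ICU) rather than as a separate lookup.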
Order sets reduce unwarranted variation and decrease the cognitive load of remembering every component of a complex treatment plan. They also function as a subtle form of CDS: by pre-selecting evidence-based defaults, they nudge clinicians toward best practices without generating an alert. Research from the Agency for Healthcare Research and Quality (AHRQ) has shown that well-designed order sets improve adherence to evidence-based care bundles.
Documentation templates serve a similar function for clinical notes. A structured template for a diabetic foot exam ensures that the clinician documents monofilament testing, pedal pulse assessment, skin integrity, and deformity evaluation, all elements required for a complete diabetic foot examination per ADA guidelines. A structured asthma visit template prompts documentation of symptom frequency, rescue inhaler use, controller medication adherence, peak flow measurements, and an updated asthma action plan.
4. Diagnostic support tools
Diagnostic errors affect approximately 12 million Americans annually, according to the National Academy of Medicine’s landmark 2015 report Improving Diagnosis in Health Care (NAM, 2015). Diagnostic CDS tools aim to reduce these errors by helping clinicians generate more complete differential diagnoses and avoid the cognitive biases, particularly anchoring bias and premature closure, that cause most diagnostic failures.
Traditional diagnostic CDS tools like Isabel Healthcare use symptom-matching algorithms: the clinician enters a set of findings, and the system returns a list of possible diagnoses ranked by fit. These tools are useful as a safety net (a way to catch the diagnosis you did not think of) but they require the clinician to take a separate step outside their workflow to enter data.
AI-powered diagnostic CDS represents a significant advance. Glass Health produces a three-tier differential diagnosis (Most Likely, Expanded, and Can’t Miss) based on the full clinical context of the patient encounter, including the ambient conversation captured during the visit. The “Can’t Miss” tier is particularly valuable for patient safety: it surfaces diagnoses that are low probability but high consequence, such as the aortic dissection in the patient presenting with back pain or the ectopic pregnancy in the woman with unilateral pelvic pain and a missed period. For more on how AI is improving diagnostic accuracy, see our guide to AI diagnosis.
DxGPT, developed by the Foundation 29 research team, uses large language models to generate differential diagnoses and has shown particular promise in rare disease identification. Each of these tools approaches the same core problem from a different angle: helping clinicians think of more possibilities before committing to a diagnosis.
5. Reference information and knowledge resources
Medical reference databases sit at the intersection of CDS and medical education. UpToDate is one of the most widely used references in clinical medicine. Its topic reviews, written and updated by physician-specialists, synthesize the evidence on thousands of clinical questions. A 2012 study at Brigham and Women’s Hospital associated UpToDate use with improved patient outcomes and lower mortality rates (Isaac et al., 2012).
AMBOSS, which launched as a medical education platform, has expanded into clinical reference with integrated drug information and tightly cross-linked learning content. DynaMed differentiates itself through a more systematic evidence review process, with explicit evidence ratings for each recommendation.
These tools are powerful, but they share a common limitation: they require the clinician to stop what they are doing, open a separate application, formulate a search query, find the relevant section, and apply the information to their specific patient. That friction means clinicians often do not look things up when they should. Studies estimate that physicians have clinical questions about patient care roughly twice per patient encounter but pursue answers to fewer than half of those questions (Ely et al., 2005). The information exists; the workflow does not support accessing it.
This is why the integration of reference information into ambient CDS platforms matters. When a system like Glass Health generates an assessment and plan with citations to current evidence, it delivers the knowledge-resource function of CDS without requiring the clinician to leave the encounter workflow.
Traditional CDS vs. AI-Powered CDS
The difference between traditional rule-based CDS and AI-powered CDS is not just a technology upgrade. It changes which clinical problems CDS can address.
| Dimension | Traditional Rule-Based CDS | AI-Powered CDS |
|---|---|---|
| Logic | If-then rules written by clinical informaticists | Machine learning models trained on medical literature and clinical data |
| Scope | Narrow, predefined scenarios (drug interactions, lab alerts) | Broad, including complex diagnostic reasoning and evidence synthesis |
| Activation | Triggered by specific coded data entries or order events | Can activate from ambient conversation or narrative data |
| Data handling | Requires structured, coded inputs (ICD codes, RxNorm) | Can reason across unstructured clinical narratives |
| Maintenance | Manual rule updates as guidelines change | Models updated with new training data and literature |
| Alert volume | High, contributing to alert fatigue | Can prioritize and filter based on clinical relevance |
| Handling ambiguity | Poor; binary logic cannot handle clinical uncertainty | Strong; can express probability and generate ranked differentials |
| Transparency | Fully transparent (rules are readable) | Less transparent (model reasoning is not directly inspectable) |
| Examples | Drug-drug interaction alerts, allergy checking, dosing calculators | Glass Health DDx generator, AI-powered A&P recommendations |
Where rule-based CDS still wins. Rule-based systems are predictable, auditable, and fast. A drug-allergy alert has a clear trigger, a clear rule, and a clear action. You can trace exactly why the alert fired and evaluate whether it was correct. For binary safety checks, where the answer is either “this is safe” or “this is not safe,” rules are the right tool. No health system should replace its penicillin allergy checking system with a large language model.
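That traceability is easy to see in code. Here is a minimal sketch of a transparent drug-allergy rule; the cross-reactivity table is deliberately tiny and hypothetical (real systems map orders and allergies through curated terminologies such as RxNorm), but it shows why the alert’s behavior is fully auditable: the trigger, the rule, and the action are all inspectable:

```python
# Hypothetical cross-reactivity table for illustration only; production systems
# resolve drug classes through curated terminologies (e.g., RxNorm).
CROSS_REACTIVE = {"penicillin": {"amoxicillin", "ampicillin", "piperacillin"}}

def allergy_alert(order: str, allergies: list[str]):
    """Return the (allergy, explanation) pair for the rule that fired, or None.

    Every alert carries its own rationale, so the clinician (or an auditor)
    can see exactly why it fired.
    """
    for allergy in allergies:
        related = CROSS_REACTIVE.get(allergy, {allergy})
        if order == allergy or order in related:
            return allergy, f"{order} is cross-reactive with documented {allergy} allergy"
    return None
```

Ordering amoxicillin for a patient with a documented penicillin allergy fires the rule with a readable explanation; ordering azithromycin does not. A model-based system cannot offer that same line-by-line audit trail.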
Where AI-powered CDS wins. Complex diagnostic reasoning does not fit into if-then rules. A patient presenting with fever, joint pain, malar rash, oral ulcers, and proteinuria has a clinical picture that points toward systemic lupus erythematosus, but the differential includes dozens of other conditions depending on the specific combination and timing of findings. An AI-powered system can reason across that full context and produce a ranked differential with supporting evidence, something a rule-based system cannot do because you would need to write rules for every possible combination of every possible finding for every possible disease.
The answer is both. The most effective CDS strategy uses rule-based systems for safety (drug interactions, allergy checking, dosing limits) and AI-powered systems for clinical complexity (differential diagnosis, evidence synthesis, treatment planning). They solve different problems.
Why Alert Fatigue Is Destroying CDS Effectiveness and How AI Fixes It
Alert fatigue is not a minor usability complaint. It is the single largest reason that CDS, despite billions of dollars of investment and federal mandates, has underdelivered on its promise to improve clinical outcomes.
The scope of the problem
High alert-override rates are documented across institutions and care settings, which is the pattern that matters most operationally. When every other action generates a pop-up, clinicians stop trusting the channel. This is not a character flaw; it is a predictable human response to a system that has too low a signal-to-noise ratio.
The clinical consequences
Alert fatigue causes real patient harm. The evidence supports this framing: medication-related alert overrides are associated with preventable adverse drug events, and the national burden is substantial. The JAMIA 2020 analysis estimated 196,600 adverse drug events annually from inappropriate medication-related alert overrides, a signal that the scale of the problem is not theoretical.
The tragedy is that alert fatigue is a self-inflicted problem. The CDS is generating correct alerts, but it is generating so many of them that clinicians cannot distinguish the important ones from the trivial ones. Turning up the volume on everything is functionally equivalent to turning down the volume on everything.
How ambient and passive CDS avoids the trap
AI-powered CDS platforms like Glass Health take a fundamentally different approach. Instead of interrupting the clinician with alerts, they provide clinical decision support passively, embedded in the documentation workflow.
When Glass generates a differential diagnosis and assessment-and-plan from an ambient encounter, the clinician reviews the output as part of their normal note review. There is no pop-up to dismiss, no checkbox to click, no “override” button. The CDS is the note. If the DDx includes a diagnosis the clinician had not considered, they see it during review and can investigate further. If the assessment and plan recommends a medication or test the clinician would not have ordered, they edit the note.
This model works because it aligns with how clinicians already work. Every clinician reviews their notes before signing them. Building CDS into that review step adds zero additional workflow burden. The contrast with traditional alert-based CDS is stark: instead of adding cognitive load (one more alert to process), ambient CDS reduces cognitive load (the differential has already been generated for you).
The research supports this approach. AHRQ’s best practices for CDS integration emphasize that CDS embedded in clinical workflow achieves significantly higher adoption rates than standalone or interruptive CDS (AHRQ CDS Initiative). Alert fatigue is not an inevitable property of CDS. It is a consequence of a specific design choice, interruptive alerts, that AI-powered ambient CDS avoids entirely.
The Regulatory Landscape for Clinical Decision Support
CDS regulation in the United States involves multiple federal agencies and evolving standards. Understanding the regulatory framework matters because it determines what CDS tools can do, what they must disclose, and what liability attaches to their use.
FDA regulation: The 21st Century Cures Act exemption
The 21st Century Cures Act (2016) established the regulatory boundary that defines most of the CDS market. Under Section 3060 of the Act, CDS software is exempt from FDA regulation as a medical device if it meets all four of the following criteria:
- It is not intended to acquire, process, or analyze a medical image, signal from an in vitro diagnostic device, or a pattern or signal from a signal acquisition system.
- It is intended for the purpose of displaying, analyzing, or printing medical information about a patient or other medical information.
- It is intended for use by a healthcare professional.
- It is intended for the healthcare professional to independently review the basis for the recommendations presented so that the professional does not rely primarily on the recommendations to make a clinical decision.
The fourth criterion is the one that matters most in practice. CDS that presents recommendations with supporting evidence, that a clinician independently reviews and can accept or reject, is generally treated differently from software that makes or drives autonomous clinical decisions. The FDA’s clinical decision support software guidance remains the right primary source to check when product capabilities are changing (FDA CDS Guidance).
Most AI-powered CDS tools, including Glass Health, are designed to meet these exemption criteria. Glass generates differential diagnoses and assessment-and-plan recommendations for clinician review. The clinician makes all clinical decisions. The FDA has indicated that it will continue to evaluate its approach as AI capabilities evolve, particularly for CDS that processes medical images or makes high-confidence predictions that clinicians may be inclined to follow without independent verification.
ONC interoperability requirements: FHIR R4 and information blocking
ONC’s 21st Century Cures Act Final Rule established interoperability standards that directly affect CDS tools. Certified EHR technology must support FHIR R4 (Fast Healthcare Interoperability Resources, Release 4) APIs for data exchange, and CDS tools that integrate with certified EHRs must comply with these standards. The rule also includes information blocking provisions that prohibit healthcare providers, EHR vendors, and health information exchanges from practices that interfere with the access, exchange, or use of electronic health information.
For CDS developers, this means two things. First, FHIR R4 is the expected integration standard. CDS tools that can consume and produce FHIR-compliant data have a significant advantage in EHR integration. Second, EHR vendors cannot unreasonably restrict third-party CDS applications from accessing patient data through standardized APIs, though the practical enforcement of this provision varies. Glass Health’s support for Epic, eClinicalWorks, and Athena clinical workflows on the Max plan aligns with these ONC interoperability frameworks.
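To make the FHIR R4 point concrete, here is a minimal sketch of what consuming a MedicationRequest search result looks like on the CDS side. The bundle below is illustrative sample data shaped like a standard FHIR R4 searchset (it is not from any real EHR), and the helper function is hypothetical; the point is that FHIR gives CDS tools one predictable structure to parse across vendors:

```python
# Illustrative FHIR R4 searchset Bundle of MedicationRequest resources.
# Sample data only; a SMART on FHIR app would receive this from the EHR's API.
sample_bundle = {
    "resourceType": "Bundle",
    "type": "searchset",
    "entry": [
        {"resource": {
            "resourceType": "MedicationRequest",
            "status": "active",
            "medicationCodeableConcept": {"text": "warfarin 5 mg tablet"},
            "subject": {"reference": "Patient/example"},
        }},
        {"resource": {
            "resourceType": "MedicationRequest",
            "status": "stopped",
            "medicationCodeableConcept": {"text": "fluconazole 150 mg capsule"},
            "subject": {"reference": "Patient/example"},
        }},
    ],
}

def active_medications(bundle: dict) -> list[str]:
    """Extract display names of active MedicationRequest resources from a Bundle."""
    meds = []
    for entry in bundle.get("entry", []):
        res = entry.get("resource", {})
        if res.get("resourceType") == "MedicationRequest" and res.get("status") == "active":
            meds.append(res["medicationCodeableConcept"]["text"])
    return meds
```

Because the resource structure is standardized, the same parsing logic works whether the bundle came from Epic, Oracle Health, or Athena; that is the integration advantage the rule describes.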
CMS Promoting Interoperability program
CMS helped mainstream CDS through the EHR incentive programs that followed HITECH and Meaningful Use. The key historical point is that CDS became part of how hospitals proved meaningful EHR use, not just an optional informatics add-on. Because CMS measure specifications evolve, clinicians and operators should confirm the current details directly from CMS rather than relying on older summaries.
How Glass Health Delivers Next-Generation Clinical Decision Support
Glass Health combines AI-powered clinical decision support with ambient AI scribing in a single product. This combination matters because it solves the workflow problem that has limited CDS adoption for decades: clinicians do not use CDS tools that require additional steps.
How Glass CDS works in a patient encounter:
- The clinician starts a patient visit with Glass ambient listening active.
- As the patient describes their symptoms and the clinician performs the examination, Glass captures the full clinical context: symptoms, duration, severity, associated findings, past medical history, current medications, social history.
- Glass generates encounter-specific clinical reasoning support, including differential diagnosis, suggested history questions, potential next steps, and downstream documentation components.
- Glass produces an evidence-based assessment and plan, structured in the clinician’s preferred note format (SOAP, H&P, progress note, or specialty-specific formats).
- The DDx and A&P flow directly into the clinical documentation. The clinician reviews, edits as needed, and signs. No re-entry, no copy-paste, no switching applications.
This workflow means that the CDS is not an additional task. It is a byproduct of the documentation the clinician was going to do anyway. The clinical reasoning happens at the same time as the note generation, not as a separate step before or after.
On the Max plan, Glass Health supports Epic, eClinicalWorks, and Athena clinical workflows. Glass also offers transparent self-serve pricing from a free Lite tier through paid plans, while HIPAA deployment requires confirming current BAA and implementation terms with the company. Try Glass Health free.
Barriers to Effective CDS Implementation
Despite near-universal adoption of some form of CDS in U.S. healthcare, most implementations fail to achieve their intended benefits. A 2020 systematic review in the Journal of the American Medical Informatics Association found that fewer than half of CDS implementations produced statistically significant improvements in clinical outcomes. Understanding why CDS fails is as important as understanding how it works.
Alert fatigue remains the primary barrier. This problem is discussed in detail above. The short version: too many alerts, too few that matter, and clinicians learn to ignore all of them. The solution is not eliminating alerts. It is reducing volume, improving specificity, implementing tiered severity systems, and adopting ambient CDS approaches that do not rely on interruptive notifications.
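A tiered severity system, in sketch form, is just a routing decision: only high-severity, patient-relevant alerts interrupt, while lower tiers render passively or log silently. The tiers and channel names below are illustrative, not a reference to any specific vendor’s implementation:

```python
from enum import IntEnum

class Severity(IntEnum):
    INFO = 1
    WARNING = 2
    CRITICAL = 3

def route_alert(severity: Severity, patient_relevant: bool) -> str:
    """Choose a presentation channel instead of interrupting for everything."""
    if severity == Severity.CRITICAL and patient_relevant:
        return "interruptive"      # hard-stop pop-up, requires a response
    if severity >= Severity.WARNING and patient_relevant:
        return "passive-banner"    # visible but non-blocking
    return "log-only"              # recorded for audit, never shown
```

The design choice is the point: when only critical, patient-specific alerts can interrupt, each interruption keeps its signal value, which is exactly what undifferentiated alerting destroys.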
Poor workflow integration kills adoption. CDS tools that require clinicians to open a separate application, re-enter patient data, or navigate away from their primary workflow generally see materially lower adoption than tools embedded into everyday work. The 2005 Ely et al. study on clinical information needs found that the most common reason physicians did not pursue answers to clinical questions was that it would take too long. Every additional click, every additional screen, every additional second is friction. The most effective CDS tools are invisible, embedded so deeply in the existing workflow that using them is not a conscious decision. Glass Health’s ambient CDS model is designed around this principle.
Lack of transparency erodes trust. Clinicians distrust recommendations they cannot evaluate. A rule-based alert that says “warfarin + fluconazole: risk of increased INR” is transparent: the clinician knows the rule, understands the pharmacology, and can assess whether it applies to their patient. An AI system that says “consider sarcoidosis” without explaining why is a black box. AI-powered CDS tools must provide reasoning and citations. Glass Health’s DDx includes the clinical features supporting each diagnosis and cites relevant literature, because a recommendation without an explanation is not clinical decision support; it is an opinion.
Maintenance burden is unsustainable for rule-based systems. Clinical guidelines change constantly. The average clinical practice guideline has a half-life of approximately 5.8 years (Shekelle et al., 2001), but many guidelines are updated more frequently. A health system with thousands of CDS rules needs dedicated informatics staff to monitor guideline changes, update rules, test updates, and validate that changes do not introduce unintended consequences. Most health systems do not have this capacity. The result is CDS that becomes progressively less accurate over time as the rules drift from current evidence.
One-size-fits-all design generates irrelevant recommendations. A sepsis alert calibrated for an ICU population generates excessive false positives in an outpatient primary care setting. An opioid prescribing alert designed for acute care is irrelevant in a palliative care practice. CDS that does not account for specialty-specific workflows, patient populations, practice settings, and local formularies will generate recommendations that clinicians correctly identify as inapplicable. The more irrelevant recommendations a system generates, the less attention clinicians pay to any of its recommendations.
Insufficient training undermines even well-designed systems. Many health systems deploy CDS without adequate clinician training on what the alerts mean, how the system generates recommendations, and how to interpret and act on CDS output. When clinicians do not understand why they are seeing an alert, their default response is to dismiss it.
Frequently Asked Questions
What is the difference between CDS and CDSS?
CDS (clinical decision support) is the broad concept: any technology, process, or knowledge resource that helps clinicians make better clinical decisions. CDSS (clinical decision support system) refers specifically to a software application that delivers CDS. In practice, the industry uses both terms interchangeably. AHRQ and ONC primarily use “CDS” in their official guidance and frameworks. The distinction matters mainly in academic literature, where authors use “CDSS” to specify that they are studying a particular software implementation rather than the general concept. When you see “CDSS” in a product description or vendor marketing, it means the same thing as “CDS tool” or “CDS platform.” The term “CDSS” also appears frequently in international literature, particularly from European and Australian researchers, while U.S.-based organizations tend to prefer “CDS.”
Does clinical decision support require FDA approval?
Most CDS software does not require FDA approval. The 21st Century Cures Act established four criteria for CDS that is exempt from FDA device regulation. The key criterion is that the software must be intended for a healthcare professional to independently review the basis for the recommendations, meaning the clinician makes the final decision, not the software. CDS that provides recommendations with supporting evidence for clinician review, which describes the vast majority of CDS tools on the market, meets this exemption. However, CDS that analyzes medical images (such as radiology AI that identifies pulmonary nodules), processes physiological signals (such as ECG interpretation algorithms), or makes autonomous treatment decisions without clinician review may require FDA clearance or approval. The FDA has also signaled that it may reevaluate the exemption criteria as AI-powered CDS becomes more capable, particularly for systems that generate high-confidence predictions that clinicians may follow without substantial independent evaluation.
How does CDS reduce diagnostic errors?
Diagnostic CDS reduces errors by addressing the cognitive biases that cause most diagnostic failures. Anchoring bias occurs when a clinician fixates on an initial diagnosis and fails to consider alternatives. Premature closure occurs when a clinician stops the diagnostic process before all reasonable possibilities have been evaluated. CDS tools that generate comprehensive differential diagnoses, like Glass Health’s three-tier DDx, directly counteract both biases by presenting diagnoses the clinician may not have considered. The National Academy of Medicine estimates that diagnostic errors affect 12 million Americans annually, with roughly half of those errors having the potential to cause harm (NAM, 2015). AI-powered diagnostic CDS is particularly effective because it can reason across the full complexity of a patient presentation, including findings that do not fit the working diagnosis, and surface “can’t miss” diagnoses that are unlikely but dangerous if missed, such as aortic dissection in a patient presenting with acute back pain or pulmonary embolism in a patient with pleuritic chest pain and recent immobilization.
What does a clinical decision support system cost?
CDS costs vary widely by product type and buyer. Some CDS is bundled into the EHR license, some is sold as an individual subscription, and some is procured through institutional or enterprise contracts. Reference tools, AI-native CDS platforms, and health-system rule engines all use different pricing models, so the relevant comparison is not just sticker price but workflow fit, implementation burden, and whether the tool reduces the need for additional products. For detailed pricing comparisons across CDS tools, see our best CDS tools comparison.
Can clinical decision support work with any EHR system?
CDS integration depends on the specific tool and EHR vendor. SMART on FHIR standards have improved interoperability significantly since the mid-2010s, enabling third-party CDS applications to launch within Epic, Cerner (now Oracle Health), Allscripts, and other SMART-enabled EHRs. However, integration complexity varies. Some EHR vendors have restrictive app marketplaces or require extensive certification processes. On the Max plan, Glass Health supports Epic, eClinicalWorks, and Athena clinical workflows, and it also operates as a standalone web application for clinicians on EHR systems without direct integration. Standalone operation means the clinician uses Glass alongside their EHR, with ambient listening capturing the encounter and the generated note being copied or pushed into the EHR. ONC’s information blocking rules under the 21st Century Cures Act are intended to reduce barriers to third-party CDS integration, but practical enforcement varies by vendor and health system.
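For readers curious what a SMART on FHIR "EHR launch" actually looks like under the hood, the core of it is an OAuth2 redirect: the app sends the clinician's browser to the EHR's authorization endpoint with a launch token and the FHIR server's base URL as the `aud` parameter. The sketch below builds that redirect URL; the parameter names follow the SMART App Launch specification, but the endpoints, client ID, and token values are hypothetical placeholders, not a real integration.

```python
from urllib.parse import urlencode

def build_smart_authorize_url(authorize_endpoint, client_id, redirect_uri,
                              fhir_base_url, launch_token, state):
    """Build the browser redirect URL for a SMART on FHIR 'EHR launch'.

    Per the SMART App Launch spec, the app sends the user's browser to the
    EHR's OAuth2 authorization endpoint, echoing back the opaque launch
    token the EHR passed in and naming the FHIR server as the audience.
    """
    params = {
        "response_type": "code",     # standard OAuth2 authorization-code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "launch": launch_token,      # opaque token supplied by the EHR at launch
        "scope": "launch patient/Patient.read openid fhirUser",
        "state": state,              # anti-CSRF value, verified on redirect
        "aud": fhir_base_url,        # FHIR server the access token will be used against
    }
    return f"{authorize_endpoint}?{urlencode(params)}"

# Hypothetical endpoints and values, for illustration only:
url = build_smart_authorize_url(
    authorize_endpoint="https://ehr.example.com/oauth2/authorize",
    client_id="my-cds-app",
    redirect_uri="https://cds.example.com/callback",
    fhir_base_url="https://ehr.example.com/fhir",
    launch_token="xyz123",
    state="abc456",
)
```

After the clinician authorizes the app, the EHR redirects back with an authorization code that the app exchanges for a FHIR access token; the point of the standard is that this same handshake works across any SMART-enabled EHR.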
What is alert fatigue in clinical decision support?
Alert fatigue occurs when clinicians are exposed to so many CDS notifications that they begin dismissing all of them, including the clinically important ones. Override rates of 49% to 96% have been documented across institutions (van der Sijs et al., 2006). The problem is not that the alerts are wrong; most alerts are technically accurate. The problem is that the system does not distinguish between a trivial theoretical interaction and a life-threatening one. The research-supported solutions include reducing total alert volume by suppressing low-severity alerts, improving alert specificity by incorporating patient-specific context, using tiered severity displays so that critical alerts look different from informational ones, and adopting passive CDS models that provide information without interruption.
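The triage logic those solutions imply, reserving interruptions for critical alerts, displaying moderate ones passively, and suppressing informational ones, can be sketched in a few lines. The severity tiers and routing rules below are illustrative assumptions; real systems map alerts to tiers using institution-specific knowledge bases.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    message: str
    severity: str  # "critical" | "moderate" | "informational" (illustrative tiers)

def triage_alerts(alerts):
    """Route alerts by severity tier instead of interrupting for all of them.

    Critical alerts become interruptive pop-ups; moderate alerts go to a
    passive, non-interruptive display; informational alerts are suppressed.
    """
    interruptive, passive = [], []
    for a in alerts:
        if a.severity == "critical":
            interruptive.append(a)  # pop-up requiring acknowledgment
        elif a.severity == "moderate":
            passive.append(a)       # shown inline in the chart, no interruption
        # informational alerts are dropped to reduce total alert volume
    return interruptive, passive

hard_stops, passive_feed = triage_alerts([
    Alert("Warfarin + fluconazole: major interaction", "critical"),
    Alert("Duplicate therapeutic class", "moderate"),
    Alert("Theoretical CYP interaction", "informational"),
])
# hard_stops holds only the critical alert; passive_feed only the moderate one
```

The design choice this encodes is the one the research supports: interruption is a scarce resource, spent only on the alerts that justify it.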
How will AI change clinical decision support in the next five years?
AI is shifting CDS from reactive safety checking to proactive clinical reasoning. In the immediate future, the most significant changes will be: ambient CDS that activates automatically during patient encounters without requiring manual data entry; diagnostic CDS that generates differential diagnoses from the full clinical narrative, not just coded problem lists; evidence synthesis that pulls current literature and guidelines into the point of care automatically; and specialty-specific clinical reasoning that adapts to the norms and decision patterns of different medical specialties. The longer-term trajectory includes CDS that integrates longitudinal patient data across encounters to identify patterns (a gradual hemoglobin decline over six months that suggests a workup for occult malignancy), CDS that monitors for guideline changes and automatically updates its recommendations, and CDS that learns from clinician feedback to improve its relevance for specific practice settings. The regulatory environment will be a significant factor; how the FDA and state regulators approach AI-powered CDS will determine how quickly these capabilities reach clinical practice.
Is CDS only for hospitals?
CDS is used in every healthcare setting: hospitals, ambulatory clinics, primary care practices, specialty offices, urgent care centers, emergency departments, long-term care facilities, and telehealth platforms. In fact, some of the highest-value applications of CDS are in outpatient settings, where a single clinician managing a panel of patients with complex chronic diseases benefits from guideline-based reminders, drug interaction checking, and diagnostic support. Glass Health is designed for clinicians in any setting, from solo practitioners to large health systems. The free tier makes AI-powered CDS accessible to any clinician with an internet connection, regardless of practice size or setting. For independent practices that do not have a clinical informatics team to build and maintain custom CDS rules, a platform like Glass provides capabilities that were previously available only to large academic medical centers.
What is the difference between active and passive CDS?
Active CDS interrupts the clinician with a notification that requires a response: an alert, a pop-up, a hard stop. The clinician must acknowledge the alert, override it, or change their order before proceeding. Passive CDS provides information without interruption, available when the clinician wants it but not forcing an interaction. Reference tools like UpToDate are passive CDS. Dashboard displays showing a patient’s trending lab values are passive CDS. Glass Health’s ambient DDx generation is a form of passive CDS: the differential appears as part of the note for the clinician to review, not as an interruptive alert. The research consistently shows that active CDS is more effective at changing behavior for simple, binary decisions (do not prescribe this drug to this patient) but generates more alert fatigue. Passive CDS is more effective for complex decisions (what is the best treatment approach for this patient’s multi-system disease) and generates essentially zero fatigue. The most effective CDS strategies use active CDS sparingly, only for high-severity safety issues, and passive CDS for everything else.
What training do clinicians need to use CDS effectively?
Effective CDS use requires three types of training that most health systems underinvest in. First, clinicians need to understand what the CDS system can and cannot do: what types of checks it performs, what data it uses, what its known limitations are. A clinician who does not know that the drug interaction checker only covers medications in the current EHR medication list, and not medications prescribed by outside providers, may have a false sense of security. Second, clinicians need training on how to interpret CDS output, particularly for AI-powered CDS that generates differential diagnoses or treatment recommendations. Understanding that a DDx is a probability-ranked list of hypotheses to investigate, not a definitive diagnosis, is fundamental to using diagnostic CDS safely. Third, clinicians need training on when to override alerts and when to pause. The goal is not zero overrides; many overrides are clinically appropriate. The goal is thoughtful overrides, where the clinician reads the alert, evaluates it against the patient context, and makes a deliberate decision.
The Bottom Line
Clinical decision support has been part of medicine for over fifty years, and for most of that time, it has been a compromise: useful in theory, frustrating in practice. Rule-based alerts catch real safety problems, but they fire so often that clinicians stop reading them. Reference tools contain the right information, but clinicians do not always have time to look. Guideline-based protocols improve adherence to evidence, but they require dedicated informatics teams to maintain.
AI-powered CDS, particularly ambient CDS that activates during the patient encounter, changes the equation. It delivers clinical reasoning support, not just safety alerts, and it does so without adding work to the clinician’s day. That is what makes it different from every previous generation of CDS: it actually fits into how clinicians practice.
Glass Health combines AI-powered differential diagnosis, evidence-based assessment and plan generation, and ambient clinical documentation in a single platform. It supports Epic, eClinicalWorks, and Athena clinical workflows on the Max plan and offers a free tier for evaluation. If you have been frustrated by CDS that generates noise instead of insight, try Glass Health for free and see what CDS looks like when it is designed around the clinician instead of the checkbox.
Source Snapshot (Reviewed 2026-03-09)
- AHRQ CDS workflow best-practices project: https://digital.ahrq.gov/ahrq-funded-projects/best-practices-integrating-clinical-decision-support-clinical-workflow
- ONC Health IT CDS Fact Sheet: https://www.healthit.gov/clinical-quality-and-safety/clinical-decision-support/
- CMS Promoting Interoperability: https://www.cms.gov/medicare/regulations-guidance/promoting-interoperability-programs
- FDA CDS Guidance: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/clinical-decision-support-software
- National Academy of Medicine, Improving Diagnosis in Health Care (2015): https://nap.nationalacademies.org/catalog/21794/improving-diagnosis-in-health-care
- 21st Century Cures Act (official text): https://www.govinfo.gov/app/details/PLAW-114publ255
- Bates et al., Effect of Computerized Physician Order Entry and a Team Intervention on Prevention of Serious Medication Errors, JAMA (1998): https://pubmed.ncbi.nlm.nih.gov/9794308/
- van der Sijs et al., Overriding of Drug Safety Alerts in Computerized Physician Order Entry, JAMIA (2006)
- JAMIA, Adverse Drug Events from Overrides (2020): https://pmc.ncbi.nlm.nih.gov/articles/PMC7646874/
- Isaac et al., The Relationship Between Use of UpToDate and Clinical Outcomes, JAMIA (2012)
- Ely et al., Answering Physicians’ Clinical Questions, JAMIA (2005)
- Yu et al., Evaluating the Performance of a Computer-Based Consultant, Computer Programs in Biomedicine (1979)
- Miller et al., INTERNIST-1, An Experimental Computer-Based Diagnostic Consultant for General Internal Medicine (1982): https://pubmed.ncbi.nlm.nih.gov/7048091/
- Barnett et al., DXplain, An Evolving Diagnostic Decision-Support System (1987): https://pubmed.ncbi.nlm.nih.gov/3295316/
- Mandl et al., SMART on FHIR, A Standards-Based, Interoperable Apps Platform for Electronic Health Records (2016): https://pubmed.ncbi.nlm.nih.gov/26911829/
- Shekelle et al., Validity of the Agency for Healthcare Research and Quality Clinical Practice Guidelines, JAMA (2001)