AI for Doctors: What It Can Do, What It Can't, and Where It's Headed

AI for doctors has moved past the proof-of-concept stage. In 2026, physicians are using artificial intelligence to ambiently capture encounters, surface real-time clinical insights during visits, build differential diagnoses from patient presentations, draft evidence-cited assessment and plans, and answer complex clinical questions in real time. Industry reports from Menlo Ventures, KLAS, and Grand View Research all point in the same direction: ambient documentation and clinician-facing AI have moved from early experimentation to rapid commercial deployment (Menlo Ventures, 2025; Grand View Research, 2024).

But adoption has outpaced understanding. Many physicians know AI can “help with notes” without grasping the full scope of what these tools can and cannot do. This guide is a practical walkthrough of the major AI workflows doctors are using today, the real limitations you should understand before relying on any tool, and how to evaluate whether a specific AI platform belongs in your clinical workflow.

What Can AI Actually Do for Doctors Right Now?

AI capabilities for physicians in 2026 cluster into four functional categories. Each one addresses a distinct clinical problem, and the best tools combine multiple categories with pre-charting and patient-context ingestion into a single workflow rather than forcing you to switch between separate applications.

1. Ambient Scribing and Real-Time Clinical Insights

Ambient AI scribing is the most widely adopted AI capability among physicians, and for good reason: documentation and EHR work remain major burnout drivers in medicine (AMA, 2024). A multicenter quality-improvement study in JAMA Network Open reported lower burnout after ambient AI scribe rollout across six health systems, with the proportion of physicians reporting burnout falling from 51.9% at baseline to 38.8% at 30 days among the clinicians who completed follow-up (JAMA Network Open, 2025).

Here is how ambient scribing works in practice. You start a patient encounter with the AI tool listening in the background. You conduct your visit exactly as you normally would – taking history, performing your exam, discussing the plan. The AI captures the conversation passively, without requiring you to dictate specific phrases, press buttons, or interact with the software during the visit. After the encounter, the tool generates a structured clinical note – typically a SOAP note, H&P, or progress note – that you review, edit, and sign.

The clinical specificity of these notes has improved substantially since the early versions in 2023-2024. A well-tuned ambient scribe captures not just what was explicitly said but infers pertinent documentation elements from context. When a patient reports “the lisinopril has been making me cough,” a good scribe documents the ACE inhibitor-related cough and may flag it for medication reconciliation in the plan. When you perform a cardiac exam and describe findings aloud, the tool populates the physical exam section with structured documentation – regular rate and rhythm, no murmurs, gallops, or rubs – formatted for your preferred note template.

The most important evolution is that the best systems do not wait until the visit is over. They can surface useful ambient insights while the encounter is still happening: follow-up questions worth asking, diagnoses that should stay on the table, and reminders that the medication list, prior notes, or recent labs materially change the differential. That is a different category from a tool that simply writes a note after the conversation ends.

The critical differentiator among ambient scribes is what happens with the encounter data. Some tools stop at documentation. Glass Health’s ambient scribing workflow uses encounter data, supported chart context, and uploaded clinical records to surface real-time ambient insights, generate a differential diagnosis, draft an evidence-grounded assessment and plan, and answer clinical questions inside the same workflow. That distinction matters because documentation in isolation does not address the cognitive burden of diagnosis and treatment planning. It addresses the clerical burden. Both matter, but they are different problems.

EHR integration determines whether an ambient scribe actually saves time or simply shifts the work from typing to copy-pasting. The strongest workflows do two things: they pull patient context into the encounter before you start, and they push documentation back into the correct encounter record when you are done. On the Max plan, Glass Health supports Epic, eClinicalWorks, and Athena clinical workflows. If your EHR is not directly supported, you are typically left with a browser extension or clipboard workflow that adds friction. For a deeper comparison of ambient scribing tools, see our best AI medical scribe guide.

2. Differential Diagnosis Generation

AI-powered differential diagnosis represents a fundamentally different capability from documentation. Where scribing automates a clerical task, DDx generation augments a cognitive one. The National Academy of Medicine estimates that diagnostic errors affect approximately 12 million Americans annually in outpatient settings, with roughly half of those errors potentially harmful (NAM, 2015). Cognitive factors – anchoring bias, premature closure, availability heuristic – account for a substantial share of these errors.

AI differential diagnosis tools work by ingesting clinical data – symptoms, history, physical exam findings, lab results, imaging, and sometimes chart context – and generating a ranked list of diagnostic possibilities based on that presentation. This is not a symptom checker. The models underlying these tools have been trained on medical literature, clinical guidelines, and large corpora of clinical text, enabling them to reason across complex, multi-system presentations in ways that simple algorithmic matchers cannot. This is one reason AI can be useful on atypical presentations and on cases that invite anchoring: it can expand the diagnostic search space beyond the most obvious explanation, provided it has enough patient context and the physician is still validating the output.

Glass Health structures its DDx output into three tiers that mirror how physicians actually think about differential diagnosis. Most Likely diagnoses are the conditions that best explain the overall clinical picture given the data available. Expanded diagnoses include less common conditions that fit the presentation and should be considered, particularly when the most likely diagnoses have been ruled out or the clinical picture is evolving. Can’t Miss diagnoses are the high-severity, time-sensitive conditions that must be excluded regardless of probability – the diagnoses where missing them carries catastrophic consequences.

This three-tier structure addresses a specific cognitive vulnerability. In a busy primary care clinic seeing 20-25 patients per day, it is easy to anchor on the most obvious diagnosis and move on. A 58-year-old male presenting with epigastric pain and GERD symptoms gets prescribed omeprazole. The most likely diagnosis is correct. But the Can’t Miss tier might surface acute coronary syndrome or pancreatic malignancy as considerations that warrant specific exclusion – not because they are probable, but because missing them is unacceptable. This is the same reasoning process that experienced clinicians perform internally. The AI externalizes it, making it visible, documentable, and consistent.

AI differential diagnosis does not replace your clinical judgment. It augments it by expanding the diagnostic space you are actively considering. Recent benchmark-style studies and reviews suggest large language models can perform competitively on some simulated clinical reasoning tasks, but the safest interpretation is still narrow: these systems are useful thought partners, not independent diagnosticians (Nature Medicine, 2023). For a more practical look at how this shows up in real workflows, see our guide on AI diagnosis.

3. Treatment Planning and Assessment & Plan Generation

Documentation of the assessment and plan is where high-value clinical reasoning meets time-consuming charting work. Writing an A&P for a medically complex patient means synthesizing multiple active problems, reconciling recommendations that come from different specialties, and making your reasoning explicit enough that another clinician can understand what you are worried about, what you are prioritizing, and what happens next.

An AI-generated assessment and plan takes the clinical data from the encounter – the same data captured by ambient scribing, supported EHR context, uploaded documents, and differential-generation workflows – and organizes it into a structured, problem-based note. This is not a generic template. A strong A&P generator should identify the active problems, summarize why they matter in this patient, and draft a plan that reflects current evidence while still leaving the final clinical decisions to the physician.

For a patient with overlapping heart failure, chronic kidney disease, and newly recognized atrial fibrillation, Glass Health can organize the note into discrete problems, surface the relevant guideline domains, and draft a plan that makes the reasoning explicit: what needs immediate workup, which treatment questions still need physician review, what monitoring matters, and which issues likely need cardiology or nephrology follow-up. The physician still chooses the actual medications, orders, and timing.

The difference between AI-generated A&P and what you would get from a general-purpose chatbot is clinical structure. General-purpose AI models can discuss medical topics, but they do not usually produce documentation-ready, problem-organized assessment and plan sections that slot directly into a clinical note. Purpose-built clinical AI tools are designed to do that work in the same workflow as the note itself.

This capability also matters for documentation quality. A structured A&P can make it easier for the note to reflect the actual complexity of the visit: the number of active problems addressed, the data reviewed, and the risk-management tradeoffs being considered. It should not be treated as automatic coding advice, but it can reduce the gap between the clinical reasoning that happened in the room and what ultimately gets captured in the chart.

4. Evidence Synthesis and Clinical Q&A

The fourth capability – clinical Q&A and evidence synthesis – addresses a problem every physician has experienced: you need an answer during a patient encounter and you do not have time to search UpToDate, navigate to the right article, read through a long review, and extract the specific data point you need. The cognitive overhead of traditional reference tools is significant enough that many clinicians simply rely on memory rather than looking something up, even when they are uncertain.

Glass Health’s clinical Q&A workflow lets you ask a clinical question in natural language and receive a synthesized answer grounded in medical evidence. Unlike consumer chatbots, purpose-built clinical AI tools constrain their responses to medical literature and clinical guidelines rather than drawing from the entire internet. The answers include citations you can verify, so you can confirm that each claim traces back to the underlying literature or guidelines rather than to generic web content.

The types of questions this handles well in daily practice span a broad range. Drug-specific questions: “What monitoring issues matter for this medication in advanced CKD?” Guideline clarifications: “How do current guidelines frame device referral in HFrEF?” Differential refinement: “What features help distinguish polymyalgia rheumatica from late-onset rheumatoid arthritis on initial presentation?” Management planning: “Which cardiology, nephrology, and endocrine guidelines are most relevant to this patient right now?”

Glass Health’s clinical Q&A workflow is integrated into the same platform as the ambient scribe and DDx generator. This means you can ask follow-up questions about the differential the tool generated, request deeper evidence on a treatment question in the A&P, or explore alternative management approaches – all without leaving the encounter workflow. The integration matters. When the chat function is embedded in the same context as your patient’s encounter data, uploaded documents, and supported EHR data, the tool can ground its answers in the specific clinical scenario rather than providing generic responses.

This is distinct from traditional clinical reference tools like UpToDate, which remain excellent but operate on a search-and-read model. You type a topic, navigate to an article, and extract what you need. UpToDate’s content is physician-authored, peer-reviewed, and deeply authoritative. Glass Health’s clinical Q&A offers a different interaction model: conversational, contextual, and faster for targeted questions. Many physicians will use both. For a comparison of how these tools differ, see our clinical decision support guide.


What Can’t AI Do for Doctors?

Honest assessment of limitations is more valuable than optimistic marketing. AI tools for doctors have real boundaries, and understanding them is necessary for safe, effective use.

AI cannot perform a physical examination. No software tool replaces the information gathered by palpating an abdomen, auscultating lung fields, or observing a patient’s gait. Ambient scribes document what you say during the exam, but they have no independent access to physical findings. If you do not verbalize “lungs clear bilaterally, no wheezes, rales, or rhonchi,” the scribe does not know your lung exam was normal. This means the quality of AI-generated documentation is directly dependent on how completely you articulate findings during the encounter. Physicians who are accustomed to silent examinations – performing the exam, noting findings mentally, and documenting later – need to adjust their workflow to verbalize findings in real time.

AI cannot establish a therapeutic relationship. Empathy, trust, shared decision-making, motivational interviewing, delivering a cancer diagnosis with compassion – these are fundamentally human capabilities. AI can document the conversation and generate the clinical note afterward, but the patient-physician relationship is something no algorithm replicates. Physicians who worry that AI tools will make medicine feel impersonal are asking the right question, but the answer is paradoxical: by offloading documentation and cognitive overhead to the tool, physicians often report spending more face time with patients, not less. The JAMA Network Open burnout study is directionally consistent with that workflow benefit, even though it measured burnout rather than face time directly (JAMA Network Open, 2025).

AI does not guarantee clinical accuracy. Large language models can hallucinate – generating plausible-sounding but factually incorrect information. In clinical contexts, this could mean citing a guideline that does not exist, recommending a drug at the wrong dose, or suggesting a diagnosis that does not fit the clinical picture. Purpose-built clinical AI tools implement guardrails – evidence grounding, citation requirements, structured output constraints – that substantially reduce hallucination rates compared to general-purpose chatbots. But they do not eliminate the risk entirely. Every AI-generated output requires physician review. The physician-in-the-loop is not a legal disclaimer. It is a clinical safety requirement.

AI can be especially helpful on rare and atypical cases, but those cases still require the most scrutiny. One reason physicians find AI valuable is that it can surface uncommon diagnoses and atypical variants that are easy to miss when you are fatigued or time-constrained. That advantage grows when the system has richer context from uploaded records or supported EHR data. But unusual cases are also the ones where mistakes are most expensive. Physicians should treat AI as a second set of eyes, not a final arbiter, especially when the presentation is strange, high-risk, or evolving.

AI only knows the context it can access. Standalone tools often see only the current prompt or transcript. Chart-connected workflows can do much better by pulling in medication lists, problem lists, recent labs, prior notes, and uploaded records before or during the encounter. In some cases that means the system can surface more relevant structured context than a physician can manually review in a short visit. But that is still not the same as the physician’s longitudinal understanding of the patient, bedside judgment, or ability to resolve conflicting data.

AI does not handle medicolegal judgment. Decisions about capacity assessments, involuntary holds, mandatory reporting, informed consent for high-risk procedures, and documentation for disability determinations require legal and ethical judgment that sits outside the scope of any AI tool. These decisions demand clinical, ethical, and legal reasoning that must remain entirely physician-driven.


How Are Doctors Using AI in 2026?

The abstract capabilities described above become concrete when you see how they fit into the daily workflow of specific specialties. The key distinction is not just that AI writes faster notes. It is that the best integrated platforms can pre-chart, listen during the encounter, surface ambient insights, help with differential diagnosis and treatment planning, and then turn that same work into usable documentation.

Family Medicine: The 20-Patient Day

A family medicine physician running a high-volume outpatient schedule may still spend substantial after-hours time finishing notes. In an integrated workflow, the physician starts by pre-charting the patient with the medication list, recent labs, problem list, and prior documentation already summarized before entering the room.

During the visit, ambient scribing captures the conversation while real-time insights keep the differential broad enough to avoid premature closure. By the time the patient leaves, the physician has a draft note, an updated differential, and an assessment and plan that already reflects the clinical reasoning and preventive care follow-through discussed in the room.

During a visit with a 45-year-old woman presenting with fatigue, weight gain, and constipation, the AI DDx generates hypothyroidism as most likely while still surfacing iron-deficiency anemia, depression, and colorectal pathology as diagnoses that should stay in the frame. The physician was already thinking hypothyroidism and planned to order TSH. The AI prompt for iron-deficiency anemia triggers an additional CBC and ferritin. The colorectal-cancer flag is also a useful reminder that average-risk screening now starts at age 45 under current USPSTF guidance (USPSTF, 2021) – something that might not have come up in a visit focused on fatigue. The A&P documents all of this, including the preventive follow-through, rather than leaving the reasoning implicit.

Internal Medicine: The Complex Cardiorenal-Metabolic Visit

A general internist managing a patient with diabetes, hypertension, CKD, and new heart-failure concerns faces a clinical decision matrix that requires weighing multiple interacting guidelines simultaneously. Which problems are most urgent today? Which issues need cardiology or nephrology input? Which medication questions depend on the latest renal function, potassium trend, and symptom burden? In a chart-aware workflow, the labs, meds, uploaded outside records, and recent problem history are already in view before the question is asked.

Glass Health’s clinical Q&A workflow handles these questions by synthesizing the relevant ADA, KDIGO, and ACC/AHA guidance into a patient-specific answer that helps the physician prioritize the next decision, document the rationale, and decide what needs active follow-up. The A&P can then capture that reasoning in the note instead of forcing the physician to reconstruct it later from memory.

Without an integrated tool, this physician either spends additional time navigating multiple reference articles or relies on memory for the relevant specialty guidance. Neither approach is as efficient or as well-documented as a workflow where chart context, evidence synthesis, treatment planning, and documentation all sit in one place.

Emergency Medicine: The Undifferentiated Patient

A 32-year-old female presents to the ED with acute-onset right lower quadrant pain, low-grade fever, and nausea. In a strong workflow, the physician can pre-chart the triage note, recent ED visits, pregnancy history, labs, and imaging context before walking into the room. Once the conversation starts, ambient scribing captures the encounter while the AI keeps the differential broad enough to include time-sensitive pathology.

The obvious differential includes acute appendicitis, ectopic pregnancy, ovarian torsion, and other gynecologic, urinary, and gastrointestinal causes of right lower quadrant pain. The value is not that the emergency physician has never heard of these diagnoses. The value is consistency. On hour 10 of a 12-hour shift, after seeing 25 patients, cognitive fatigue degrades the reliability of mental checklists. The AI functions as an externalized cognitive checklist that helps keep high-stakes alternatives in view.

The ambient scribe simultaneously documents the workup – the beta-hCG sent to rule out ectopic, the CT abdomen/pelvis ordered, the surgical consult placed – while the physician updates the differential and ED course in real time. That means the medical record captures the clinical reasoning as it unfolds rather than forcing retrospective reconstruction after the shift gets busy.

Internal Medicine Subspecialties: The Longitudinal Coordination Problem

A cardiologist, nephrologist, or endocrinologist often inherits a patient whose story is already spread across years of labs, imaging, hospital discharges, and prior outpatient notes. In these encounters, the value of AI is not just note generation. It is the ability to ingest uploaded records, supported chart context, and the current conversation, then keep the longitudinal clinical story coherent while the physician decides what matters today.

A nephrology follow-up for a patient with CKD, heart failure, recurrent hyperkalemia, and diabetes may require reconciling medication changes made by multiple teams, reviewing recent trends in creatinine and potassium, deciding whether the volume picture is primarily cardiac or renal, and documenting why certain therapies are being continued, adjusted, or deferred. That is a classic documentation-clinical-reasoning problem: the charting burden and the cognitive burden are both high.

An integrated workflow helps by organizing the record, surfacing the active management questions, and drafting a note that reflects how the subspecialist actually thought through the case. Instead of using one tool to summarize the chart and a second tool to search the literature, the physician works in a single environment where context, reasoning, and documentation stay connected.


The Documentation-Clinical Reasoning Gap: Why Most AI Tools Only Solve Half the Problem

The AI-for-doctors landscape in 2026 has a structural problem. The market has split into two largely non-overlapping categories: documentation tools and reasoning tools. Most physicians who adopt AI are forced to choose one or stack both.

Scribing-only tools (ambient scribes like Freed, Abridge, Nuance DAX Copilot, Suki, DeepScribe) solve the charting burden. They listen to encounters and generate notes. This is genuinely valuable. But once the note is generated, these tools generally stop at documentation. They are not designed around the same integrated differential-diagnosis, assessment-and-plan, and encounter-native clinical-Q&A workflow that Glass provides. The clinical data helps produce the note, but it does not become a full reasoning layer inside the same encounter.

CDS-only tools (UpToDate, AMBOSS, OpenEvidence, DxGPT) solve the knowledge access problem. They help physicians find answers, explore differentials, and review evidence. But they have no connection to the patient encounter. You finish seeing a patient, open a separate application, manually type in the clinical scenario, and read the output. The context switch is real: you leave your EHR or documentation tool, enter the reasoning tool, formulate your question from memory, and then return to your note to incorporate what you learned. Each context switch carries cognitive cost and time cost.

This is what we call the documentation-clinical reasoning gap. The data that documentation tools capture during the encounter – the symptoms, history, exam findings, clinical context – is precisely the data that reasoning tools need to generate useful clinical intelligence. But in the typical physician’s workflow, these two processes are completely disconnected. The scribe does not talk to the reference tool. The reference tool does not know what the scribe heard.

Glass Health was built to close this gap. The platform captures encounter data through ambient scribing and uses that same data to power differential diagnosis, assessment and plan generation, and clinical Q&A. The clinical information flows once – from the patient conversation into the system – and serves both documentation and reasoning purposes. You do not re-enter data. You do not context-switch. You do not maintain two separate subscriptions to two separate tools that do not communicate with each other.

The practical impact of closing this gap shows up in three dimensions. First, time: eliminating the context switch between documentation and reasoning tools can recover meaningful physician time across a full clinic day. Second, completeness: when the DDx and A&P are generated from the same encounter data as the note, the documentation is more likely to reflect the reasoning that actually happened during the visit. Third, consistency: the AI produces a DDx and A&P for every encounter, not just the ones where the physician remembers to open a separate tool. This creates a more reliable cognitive safety net rather than an intermittent one. A non-verticalized stack makes you pay twice: once in software cost, and again in physician time spent re-entering context, translating between tools, and cleaning up disconnected outputs.

For physicians currently using a separate scribe and a separate reference tool, this integration is the most important architectural question in evaluating AI platforms. Not “which scribe has the best note quality?” or “which reference tool has the best content?” but “is there a single platform that does both well enough to replace my current two-tool stack with one verticalized workflow?” For a detailed comparison of how Glass Health stacks up against individual tools in each category, see our comparison hub.


How to Evaluate an AI Tool for Your Practice

Choosing an AI tool based on marketing demos is a reliable way to end up disappointed. Here is a framework for evaluating any AI platform for clinical use, based on criteria that matter once the novelty wears off.

Clinical accuracy under real conditions. Demo encounters with straightforward presentations tell you nothing about how the tool performs on your actual patient population. Test the tool on your most complex patients – the ones with multiple comorbidities, polypharmacy, and ambiguous presentations. Test it on encounters with heavy accents, background noise, and interruptions. Test it on telehealth visits with variable audio quality. If the vendor only shows you clean demos, ask why.

EHR integration depth. “EHR compatible” is a marketing term that can mean anything from full bidirectional integration to “you can copy and paste from our app into your EHR.” Ask specifically: does the tool push completed notes directly into the encounter record? Does it pull in the patient’s medication list and problem list? Does it work inside your EHR workflow or require a separate browser tab? On the Max plan, Glass Health supports Epic, eClinicalWorks, and Athena clinical workflows. If your EHR is not on the integration list for a tool you are evaluating, understand exactly what the workaround looks like in daily use.

HIPAA compliance and BAA. Any tool that processes patient health information must be HIPAA compliant and willing to sign a Business Associate Agreement. This is non-negotiable. Ask for the BAA before you trial the product. Be skeptical of tools that claim HIPAA compliance but hesitate to provide a BAA, and be especially cautious of tools that are ambiguous about data residency, retention, or training use.

Pricing transparency and total cost. Subscription pricing is straightforward for Glass: Lite (free), Starter at $20/month, Pro at $90/month, and Max at $200/month. Many competitors now publish at least some self-serve pricing, while enterprise products still use custom contracts. When evaluating cost, consider the total: subscription fee plus IT implementation time plus training time plus ongoing correction burden (time spent editing AI-generated outputs). A cheaper tool that generates notes requiring 5 minutes of editing per encounter may cost more in physician time than a pricier tool that generates notes requiring 1 minute of editing.
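To make the total-cost point concrete, here is a minimal back-of-envelope sketch. All of the inputs – the subscription prices, the per-note edit times, the patient volume, the clinic days, and the dollar value assigned to an hour of physician time – are illustrative assumptions, not vendor figures or benchmarks:

```python
# Hypothetical total-cost comparison of two AI scribe options.
# Every number here is an illustrative assumption, not a vendor figure.

def monthly_cost(subscription_usd, edit_min_per_note, patients_per_day,
                 clinic_days_per_month=20, physician_value_per_hour=150.0):
    """Subscription fee plus the cost of physician time spent editing notes."""
    editing_hours = edit_min_per_note * patients_per_day * clinic_days_per_month / 60
    return subscription_usd + editing_hours * physician_value_per_hour

# Cheaper tool with a heavier edit burden vs. pricier tool with a lighter one.
cheap = monthly_cost(subscription_usd=50, edit_min_per_note=5, patients_per_day=20)
pricey = monthly_cost(subscription_usd=200, edit_min_per_note=1, patients_per_day=20)

print(f"cheaper tool: ${cheap:,.0f}/month")   # subscription + ~33 h of editing
print(f"pricier tool: ${pricey:,.0f}/month")  # subscription + ~7 h of editing
```

Under these assumptions the cheaper subscription costs several times more per month once editing time is priced in, which is the point of evaluating total cost rather than sticker price.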

Scope of capabilities. This is where the documentation-reasoning gap analysis becomes practical. If you are evaluating a scribe-only tool, ask: what will I use for clinical reasoning support? If you already have a separate CDS workflow and are happy with it, a scribe-only tool may be sufficient. If you want pre-charting, patient-context ingestion, ambient insights, differential diagnosis, treatment planning, and documentation in one flow, the field narrows considerably. If you are evaluating a CDS-only tool, ask: what will I use for documentation? Map your actual workflow before deciding whether a single integrated platform or a two-tool stack serves you better.

Pilot structure. Run a real-world pilot, not a demo. Use the tool for a minimum of two weeks on your actual patient population. Measure time saved per encounter, note edit burden, clinical utility of any reasoning outputs, and your subjective satisfaction. Compare against your baseline workflow, not against perfection. Glass Health’s Lite tier lowers the barrier to evaluation, but it is limited, so use it to validate the workflow and then decide whether you need higher-volume access.


AI for Doctors by Specialty

Different specialties have different clinical workflows, documentation patterns, and reasoning demands. AI tools serve each differently.

Primary care and family medicine physicians benefit most from the full integration of scribing and reasoning. High patient volumes, broad diagnostic scope, and the need to manage complex chronic disease across multiple organ systems make these specialties the highest-ROI adopters of combined platforms.

Internal medicine and hospitalist physicians face the most complex clinical reasoning demands – multimorbidity, polypharmacy, guideline conflicts across specialties. The AI DDx and evidence synthesis capabilities are particularly valuable here, where a single patient may require simultaneous reference to cardiology, nephrology, and endocrinology guidelines. The A&P generation capability directly supports the high-complexity medical decision-making documentation that these encounters require.

Emergency medicine physicians operate under time pressure with undifferentiated patients. The AI DDx’s Can’t Miss tier is specifically designed for this clinical environment, ensuring that time-sensitive diagnoses are surfaced even during high-census, high-acuity shifts. Ambient scribing in the ED also addresses a unique documentation challenge: EM physicians often see patients in rapid succession and document retrospectively, introducing recall errors that real-time ambient capture substantially reduces.

Internal medicine subspecialists such as cardiologists, nephrologists, endocrinologists, and hospital-based consultants benefit when the platform can reconcile longitudinal records with the current encounter. These are often the visits where guideline conflicts, medication interactions, and multi-team handoffs create the most reasoning burden.

Surgical specialties use AI primarily for pre-operative and post-operative documentation, pre-chart review, and perioperative planning support. Operative notes remain more template-driven than clinic or inpatient notes in many workflows, but consult notes, progress updates, and discharge documentation can still benefit materially from ambient capture plus structured reasoning support.

For additional workflow context, explore our clinical decision support guide and ambient scribing page.


Frequently Asked Questions

Is AI safe for doctors to use with patients?

AI tools designed for clinical use incorporate multiple safety layers that distinguish them from consumer chatbots. HIPAA compliance ensures patient data is encrypted, access-controlled, and governed by a Business Associate Agreement. Physician-in-the-loop design means the AI generates drafts and suggestions, but the physician reviews, edits, and approves every output before it enters the medical record. Evidence grounding constrains the AI’s responses to medical literature and clinical guidelines rather than general internet content. Glass Health maintains all of these safeguards. The critical safety principle is that AI tools are clinical assistants, not autonomous agents. They generate; you decide. No AI tool should make clinical decisions without physician review, and any tool that implies otherwise should be avoided.

How much does AI for doctors cost?

Pricing spans a wide range. Glass Health offers a Lite tier (free) with limited ambient scribing and limited clinical decision support – no credit card required. Glass also offers Starter at $20/month, Pro at $90/month, and Max at $200/month, adding additional features for higher-volume practices. Individual ambient scribes like Freed publish self-serve plans, while enterprise ambient scribes like Abridge and Nuance DAX Copilot typically require institutional contracts. Clinical reference tools like UpToDate are separate annual subscriptions.
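For physicians weighing a paid tier, a quick break-even check can put the monthly cost in perspective. This sketch uses entirely assumed inputs (time savings, encounter volume, and the value you place on an hour of your time are all hypothetical):

```python
# Rough break-even check for a paid AI tier - all inputs are assumptions.
monthly_cost = 90.0               # e.g., a mid-tier plan, in dollars
minutes_saved_per_encounter = 4   # assumed average documentation time savings
encounters_per_month = 320        # e.g., 20 per day x 16 clinic days
hourly_value = 150.0              # what an hour of your time is worth to you

# Total hours reclaimed per month, and their dollar-equivalent value.
hours_saved = minutes_saved_per_encounter * encounters_per_month / 60
value_of_time_saved = hours_saved * hourly_value

print(f"Hours saved per month: {hours_saved:.1f}")                       # 21.3
print(f"Value vs. cost: ${value_of_time_saved:.0f} vs ${monthly_cost:.0f}")  # $3200 vs $90
```

Even with conservative inputs the arithmetic usually favors the subscription, which is why the more useful pilot questions are about accuracy and workflow fit rather than price alone.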

Can AI replace doctors?

No, and the question itself misunderstands what clinical AI does. AI tools automate specific tasks within the physician’s workflow – documentation, differential generation, evidence retrieval – but they do not perform physical examinations, establish therapeutic relationships, make medicolegal judgments, or exercise the integrative clinical reasoning that defines physician practice. The more accurate frame is that AI augments physician capability, the same way an EHR augmented paper charting or a CT scanner augmented physical diagnosis. Physicians who use AI effectively will deliver more efficient, better-documented, and more thorough care. But the physician remains the decision-maker.

Do I need to tell patients I’m using AI?

Consent requirements for AI documentation tools vary by state and by the specific function of the tool. For ambient scribing, which involves recording the patient encounter, most practices implement an informed consent process – either verbal notification at the start of the visit or a sign posted in the clinic. Some states have two-party consent laws for audio recording that make explicit consent legally required. For reasoning tools such as DDx, A&P generation, and clinical Q&A, disclosure expectations are less uniform and should be checked against local compliance policies. Best practice is to notify patients that AI technology assists with documentation and clinical support, even where not legally mandated. Transparency builds trust.

What is the difference between AI scribing and clinical decision support?

AI scribing converts the patient encounter conversation into structured clinical documentation. Clinical decision support analyzes clinical data to assist with diagnosis and treatment planning. These are distinct capabilities addressing different problems: scribing addresses documentation burden (a time problem), while CDS addresses clinical reasoning support (a cognitive problem). Most tools do one or the other. Glass Health combines both in a single integrated workflow, using the same encounter data for documentation and clinical reasoning simultaneously. For more detail on the CDS side, see our guide on clinical decision support.

Which AI tools are HIPAA compliant?

HIPAA compliance requires specific technical safeguards (encryption, access controls, audit logging), administrative safeguards (workforce training, policies), and a signed Business Associate Agreement. Established clinical AI vendors generally market HIPAA-compliant deployments, but you should still verify BAA availability, data retention policy, and training-data policy for the exact plan you intend to use. General-purpose AI tools – ChatGPT, Claude, Gemini – are not HIPAA compliant in their consumer versions and should never be used with identifiable patient health information unless your organization has an enterprise agreement with specific BAA coverage. The enterprise versions of some general-purpose models (e.g., Azure OpenAI) can be deployed in HIPAA-compliant configurations, but the consumer interfaces cannot.

How accurate are AI-generated clinical notes?

Modern ambient AI scribes can produce high-quality first drafts, but note accuracy still varies with audio quality, accent, medical terminology density, encounter complexity, and how explicitly the clinician verbalizes findings. KLAS has reported meaningful reductions in note-edit burden with ambient documentation tools, but the safest operational assumption is that every note still requires clinician review. The most common errors involve medication names, numerical values, and pertinent negatives that were implied but not explicitly stated. Accuracy improves with use as the physician learns to verbalize findings more consistently and the tool adapts to the physician’s speech patterns and documentation preferences.

Can AI help with medical coding and billing?

AI-generated documentation can help when it makes the medical decision-making in the note more explicit: the active problems addressed, the data reviewed, and the management risks being considered. Under current E/M guidelines, which base level of service on medical decision-making or total time, those elements directly support code selection. The safest framing is not that AI “does the coding” for you; it is that better-structured documentation makes it easier for the chart to reflect the complexity of the care you actually delivered.

Will my EHR work with AI tools?

EHR compatibility varies significantly across AI platforms. On the Max plan, Glass Health supports Epic, eClinicalWorks, and Athena clinical workflows. Some competitors emphasize deep Epic or Oracle Health deployment, while others emphasize broader but shallower EHR coverage. For EHRs without direct integration, most AI tools offer browser-based workflows or clipboard transfer, though these add friction compared to native integration. Before committing to any tool, verify the specific integration method for your EHR and test it in your actual workflow.

How do I get started with AI in my practice?

Start with a tool that has a free tier so you can evaluate without financial commitment. Glass Health’s free Lite tier includes limited ambient scribing and limited clinical decision support. Sign up at glass.health/signup, use it for two weeks on your normal patient panel, and measure whether it saves you time and adds clinical value. Do not evaluate during a light clinic week or with simple patients only. Test it on your hardest days with your most complex patients. That is where AI tools prove or disprove their value.

The Bottom Line

AI for doctors in 2026 does five concrete things: it preps the encounter with available patient context, documents encounters through ambient scribing, surfaces real-time ambient insights, generates differential diagnoses and assessment-and-plan drafts from clinical data, and answers clinical questions through conversational evidence synthesis. Most tools do one slice of this well. Glass Health does all of it from a single patient encounter, closing the gap between documentation and clinical reasoning that forces physicians into multi-tool stacks. The free tier gives you a low-friction way to evaluate the workflow – start at glass.health/signup and decide for yourself whether the integrated approach works for your practice.


Source Snapshot (Reviewed 2026-03-10)