
What Agentic AI Really Means for Healthcare in 2026 | Respocare Connect AI

  • Writer: Matthew Hellyar
  • Dec 8
  • 17 min read

Introduction: Why Healthcare Needs a New Definition of Intelligence


If you’re reading this, there’s a good chance you already feel it — the quiet pressure building inside modern healthcare. Too much information. Too many systems. Too many hours lost to administration instead of care.


And somewhere in the back of your mind, you’ve probably wondered:


“Is AI really going to fix any of this, or is it just another layer of noise?”


I want to speak directly to that part of you.


Because what’s happening in AI right now is not another software upgrade, not another automation tool, and not another shiny promise wrapped in buzzwords. It’s something fundamentally different — something that forces us to rethink what intelligence in medicine actually means.


During my keynote at the AI Healthcare Summit, I saw a moment that confirmed this shift. One clinician in the front row kept leaning forward every time the word agentic appeared on the screen. Not because it was new, but because it answered a question she hadn’t known how to ask:


“What if AI could finally understand the why behind what I’m doing — not just the what?”


That’s the heart of this article.


Not to hype technology. Not to sell a product. But to give you a clear, grounded understanding of the concept reshaping clinical work worldwide:


Agentic AI — a form of intelligence that reasons, remembers, takes safe actions, and collaborates with clinicians instead of merely responding to them.


If the last decade was about AI that followed instructions, the next decade belongs to AI that understands context. The kind of intelligence that doesn’t replace judgment, but restores it. The kind that lightens your shoulders rather than raises your skepticism.

Before we go deeper, I’m going to make you a promise:


By the end of this article, you will not only understand what Agentic AI is — you will understand why the definition matters more in healthcare than anywhere else.


Let’s take this step slowly, deliberately, and with clarity. You deserve an explanation that respects your expertise, your time, and the reality of your clinical world.


Now — let’s begin with the simplest question of all:


What exactly is Agentic AI?



What Agentic AI Actually Is (A Clear and Honest Definition)


Before we go any further, you and I need to agree on one thing: most people are using the word AI far too loosely. In healthcare especially, where precision matters, vague definitions create confusion, unrealistic expectations, and in some cases, unnecessary fear. So let’s slow the pace down and define Agentic AI properly — not as a marketing slogan, but as a functional, clinical concept.


At its core, Agentic AI is intelligence that does more than respond. It collaborates. It reasons. It remembers. It takes meaningful, safe action within boundaries designed to protect patients, clinicians, and the integrity of medical workflows. It is not just an algorithm; it is a structured relationship between a human and an intelligent system that understands context.


Traditional AI waits for you to tell it what to do. Agentic AI begins to understand why you’re doing it.


To make this practical, I break Agentic AI into five essential components — Memory, Entitlements, Reasoning, Actions, and Access to Tools. Together, these pillars transform a model from a passive responder into a clinical collaborator.


Memory gives the system continuity. It can understand that your current conversation relates to past notes, previous encounters, and historical patterns. Instead of starting from zero every time, it builds a contextual foundation — something medicine has always relied on.


Entitlements serve as its legal and ethical compass. An agentic system must know what it is allowed to access and what it must never touch. In healthcare, trust comes from boundaries, not openness, and entitlements ensure that intelligence is paired with discipline.


Reasoning is where the machine begins to resemble a partner rather than a tool. It can connect findings, interpret meaning, and understand intent. It does not diagnose — but it does help illuminate what matters.


Actions bring purpose into the equation. An agentic system can take a step on your behalf: summarising a patient’s history, retrieving relevant results, or organising a referral letter with the correct clinical structure. These are safe, controlled actions that reduce cognitive and administrative load.


And finally, there is the fifth pillar — Access to Tools. This is the part most definitions leave out, yet it’s the one that changes everything.


A model with no tools is only a writer. A model with tools becomes an investigator, a retriever, a checker-of-facts, a contextual interpreter. It can reach into a vector database, run a calculation, analyse a document, compare findings, and ground its reasoning in verified data. This matters most in clinical environments, where grounded intelligence is non-negotiable.


Put these five elements together, and you get something powerful but structured — intelligence with guardrails, capability with control, reasoning with safety. This is why Agentic AI cannot be confused with generic chatbots or simple “AI scribes.” It is something far more intentional.
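To make the five pillars feel less abstract, here is a deliberately tiny, hypothetical Python sketch. The class, field, and tool names are invented for illustration; they do not describe any real product's API, only the shape of the idea: memory accumulates context, entitlements gate actions, reasoning selects a tool, and actions run inside those boundaries.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the five pillars as explicit parts of one agent.
@dataclass
class ClinicalAgent:
    memory: list = field(default_factory=list)      # Memory: record of past steps
    entitlements: set = field(default_factory=set)  # Entitlements: allowed tools
    tools: dict = field(default_factory=dict)       # Access to Tools: callables

    def reason(self, request: str) -> str:
        """Reasoning (toy version): pick whichever tool the request names."""
        for name in self.tools:
            if name in request:
                return name
        return "none"

    def act(self, request: str):
        """Actions: run the chosen tool only if entitled, then remember it."""
        tool_name = self.reason(request)
        if tool_name not in self.entitlements:
            return "blocked: not entitled"
        result = self.tools[tool_name](request)
        self.memory.append((request, result))  # continuity across encounters
        return result

agent = ClinicalAgent(
    entitlements={"summarise"},
    tools={"summarise": lambda r: f"summary of: {r}"},
)
print(agent.act("summarise today's encounter"))  # permitted, stored in memory
print(agent.act("delete patient record"))        # no entitled tool: blocked
```

The point of the sketch is the shape, not the sophistication: even in ten lines, the action path is forced through an entitlement check, and every successful step leaves an auditable trace in memory.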


The Five Pillars of Agentic AI

  • Memory. What it enables: remembers context across conversations, documents, and patient history. Why it matters in healthcare: it prevents repeat explanations, supports continuity of care, and allows the AI to understand a patient’s evolving story rather than isolated data points.

  • Entitlements. What it enables: determines what the AI can and cannot access based on permissions, roles, and legal constraints. Why it matters in healthcare: it ensures POPIA/HIPAA compliance, protects patient privacy, and builds clinician trust through strict, auditable boundaries.

  • Reasoning. What it enables: interprets meaning, identifies relevance, connects information, and understands clinician intent. Why it matters in healthcare: it supports clinical decision-making by highlighting patterns, risks, and relationships without overstepping into diagnosis.

  • Actions. What it enables: executes safe, context-specific steps toward a goal (summaries, retrievals, structuring notes). Why it matters in healthcare: it reduces administrative burden, improves workflow speed, and frees clinicians to focus on patient care.

  • Access to Tools. What it enables: uses calculators, retrieval systems, vector databases, document interpreters, and external knowledge sources. Why it matters in healthcare: it grounds outputs in verified, up-to-date information, reduces hallucinations, and transforms the AI from a text generator into a clinical intelligence collaborator.


You’ll notice that every part of the definition points to the same idea: Agentic AI is not here to replace clinical judgment — it’s here to make it lighter, clearer, and more informed.


And now that you understand the architecture, I want to show you what it feels like. Because sometimes the best way to grasp a complex concept is through a simple story.


Let’s step into the room with the composer.



The Composer Analogy: A Simple Way to Understand Agentic AI


To understand how Agentic AI differs from traditional systems, it is useful to consider a straightforward analogy — one that illustrates the importance of context, tools, and structured capability.


Imagine a composer placed in an empty room with nothing but a blank sheet of paper. He has talent and potential, but he lacks the information, instruments, and support required to produce meaningful work. This is very similar to how conventional AI operates: it can generate output, but it has no memory of prior tasks, no sense of purpose, and no access to tools that enhance accuracy or depth.


Now imagine we gradually equip this composer with what is missing.


First, we give him memory. He now recalls previous work, themes that were successful, and the direction that earlier compositions followed. In AI terms, this mirrors a system’s ability to understand past interactions and maintain continuity across tasks.


Next, we establish entitlements. Just as the composer learns which instruments are appropriate for a piece, an AI system learns what data it may or may not access. This ensures structure, security, and a clear scope of operation.


We then introduce reasoning. The composer begins to understand how different sections relate, how tempo affects emotion, and how a change in one part influences the whole. For an AI system, this is the ability to interpret meaning rather than merely produce text.


After that, we allow the composer to take actions. He can begin arranging, adjusting, and refining. In an AI context, actions reflect the ability to carry out safe, goal-oriented steps — such as generating summaries, extracting information, or structuring documents — within defined guardrails.


Finally, we provide access to tools: instruments, notation software, previous recordings, and references. This is the transformative stage. For Agentic AI, tools include retrieval systems, vector databases, calculators, and structured medical knowledge sources that allow the system to ground its outputs in verified information rather than speculation.

When these five components come together, the composer is no longer limited. He becomes capable of producing work that is consistent, informed, and aligned with the intended purpose. Likewise, AI transitions from a reactive generator into a system that can support real clinical reasoning, reduce administrative burden, and deliver context-aware assistance.


This analogy clarifies why Agentic AI requires more than a single advancement. It is a coordinated combination of memory, security, reasoning, action, and tool integration. Without all five, an AI system remains in the “empty room” — capable, but constrained. With them, it becomes a reliable collaborator in environments where accuracy, continuity, and safety are essential.



Why Agentic AI Matters in Healthcare Today


Healthcare is under increasing strain. Across continents, clinical demand is rising faster than health systems can adapt, and the imbalance is no longer subtle or theoretical. According to the World Health Organization, the global shortfall of healthcare professionals will reach 10 million clinicians by 2030, driven by population growth, chronic disease, migration, and the accelerating administrative load placed on medical teams. The gap is widening, not closing, and existing digital tools have not meaningfully eased the operational pressure.


This is why Agentic AI is emerging as a critical inflection point in healthcare innovation.

For years, AI systems have helped clinicians capture notes, search documents, and transcribe conversations. Yet the administrative burden has grown, not diminished. Clinicians continue to spend hours summarising records, reconciling fragmented files, and preparing patient documentation.


Traditional AI can generate sentences; it cannot understand a patient’s evolving story or the context surrounding clinical decisions.


Agentic AI changes this dynamic by introducing intelligence that is capable of retrieving, reasoning, and acting within a clinician’s workflow. It does not merely automate tasks — it synthesises information and presents what is relevant, structured, and safe. In healthcare environments defined by time pressure and complexity, this distinction is transformative.


Evidence of this shift is already visible in leading health systems:


Stanford’s ChatEHR


Stanford Health Care demonstrated how Agentic principles reshape documentation and clinical reasoning. ChatEHR allows clinicians to converse directly with the EHR — asking for summaries, identifying trends, and surfacing abnormal results across months or years of data. Instead of spending 20–25 minutes preparing for rounds, clinicians can access structured insights in seconds. The system interprets context rather than simply retrieving text, reducing cognitive load while maintaining traceability and safety.


Tempus One


In oncology, where decisions must reflect genomic, clinical, and research data, Agentic AI shows even greater impact. Tempus One uses more than 1,000 specialised sub-agents that collaborate to interpret molecular markers, search global trial registries, analyse tumour profiles, and produce evidence-based recommendations. This multi-agent architecture demonstrates that high-complexity reasoning is not only possible but scalable — something traditional chat-based systems could never reliably achieve.


CLARITY


Large-scale deployments like CLARITY reveal the operational value. Processing over 55,000 real patient dialogues, the system improved triage accuracy by a factor of three and significantly reduced delays in urgent routing. These results highlight what happens when AI is allowed to act purposefully within safe boundaries instead of waiting passively for prompts.


Respocare Connect AI


Within South Africa, Respocare Connect AI follows the same architectural principles, using retrieval-augmented generation, structured memory, entitlements, and tool integration to support clinicians with context-aware summaries, structured notes, and accurate extraction of clinically relevant information. The system does not generate free-form text in isolation; it grounds every action in secure, verified data, a requirement in environments governed by POPIA, HPCSA expectations, and institutional governance.


These real-world examples make a clear point: Agentic AI is not a concept for the future — it is already shaping how clinicians interact with information today.


The need is supported by measurable pressures. Clinicians now face:


  • More than 1.8 million new medical papers published annually, making manual literature tracking impossible.

  • Up to 50% of working hours spent on documentation, according to multiple health-system audits.

  • Rising burnout rates, with administrative strain cited as one of the primary drivers.

  • Ever-growing patient data volumes — imaging, labs, multi-encounter notes, medical aid reports, and electronic histories that no individual clinician can fully synthesise.


Agentic AI directly addresses these challenges by acting as a system that can interpret, organise, and present clinical meaning rather than simply generating text. It equips clinicians to make faster, more informed decisions without compromising safety or judgment. It strengthens governance rather than weakening it. And unlike generic AI systems, it operates inside legally compliant boundaries with strict entitlements, traceable reasoning, and transparent outputs.


The growing international adoption of systems like ChatEHR, CLARITY, and Tempus One supports a single conclusion: the future of healthcare intelligence will not be passive — it will be agentic.


In the next section, we will examine how these principles translate into practical design and implementation, and how platforms such as Respocare Connect AI are constructing agentic systems specifically for the realities of clinical practice.



How Respocare Connect AI Implements Agentic Architecture


Agentic AI becomes meaningful only when its principles translate into a system that clinicians can trust and rely on during real clinical workflows. Respocare Connect AI was built with this exact intention: to create a structured, safe, context-aware intelligence layer that reduces administrative burden and enhances clinical clarity without compromising governance.


While traditional AI systems operate as isolated models, Agentic AI emerges when reasoning models are connected to a broader network of agent systems and external tools. This architecture allows the AI to move beyond generating text and instead interact with data, retrieve information, evaluate context, and take structured action. The combination of reasoning + tools + memory + entitlements is what defines true agentic behaviour.


Respocare Connect AI integrates all five pillars — Memory, Entitlements, Reasoning, Actions, and Access to Tools — as active components within its design.


1. Memory: Preserving Continuity Across Encounters


Respocare Connect AI maintains structured memory across patient encounters. It can interpret SOAP notes, uploaded documents, lab reports, and historical assessments, enabling the AI to generate outputs grounded in a patient’s longitudinal story. This continuity mirrors the cognitive process clinicians use naturally, turning scattered information into a coherent clinical overview.
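As a hedged illustration of what longitudinal memory can look like in principle, here is a minimal sketch. The `EncounterMemory` class and its window-based context are invented for this article, not taken from Respocare Connect AI's actual implementation: each encounter is stored with a date, and a context string is rebuilt from the most recent entries instead of starting from zero.

```python
from datetime import date

class EncounterMemory:
    """Toy longitudinal memory: keeps encounters and rebuilds recent context."""

    def __init__(self, window: int = 3):
        self.window = window   # how many past encounters feed the context
        self.encounters = []   # (date, note) pairs, oldest first

    def record(self, when: date, note: str):
        self.encounters.append((when, note))

    def context(self) -> str:
        # Concatenate only the most recent encounters into one context string.
        recent = self.encounters[-self.window:]
        return "\n".join(f"{when.isoformat()}: {note}" for when, note in recent)

memory = EncounterMemory(window=2)
memory.record(date(2025, 11, 3), "SOAP note: persistent cough, started inhaler")
memory.record(date(2025, 12, 1), "Lab report: spirometry within normal limits")
memory.record(date(2025, 12, 8), "Follow-up: cough resolved")
print(memory.context())  # only the two most recent encounters appear
```

A real system would persist this securely and select context by relevance rather than recency alone, but the principle is the same: the model's next answer is grounded in the patient's accumulated story.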


2. Entitlements: Ensuring Safety Through Strict Access Control


The system operates within rigid entitlements that dictate what data can be accessed and by whom. These permissions support compliance with POPIA, HIPAA-equivalent safeguards, and institutional governance expectations. Each action is logged, auditable, and restricted to ensure absolute traceability.
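The pattern of strict entitlements plus audit logging can be sketched in a few lines. Everything here is hypothetical — the clinician and patient identifiers, the allow-list, and the `fetch_record` function are invented to show the shape of the control, not any real access-control API:

```python
# Hypothetical entitlements: an explicit allow-list plus an audit trail.
AUTHORISED = {
    ("dr_nkosi", "patient_001"),
    ("dr_nkosi", "patient_002"),
}
audit_log = []

def fetch_record(clinician: str, patient: str):
    allowed = (clinician, patient) in AUTHORISED
    # Every attempt is logged, whether it succeeds or not.
    audit_log.append({"clinician": clinician, "patient": patient, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"{clinician} is not entitled to {patient}")
    return f"record for {patient}"

print(fetch_record("dr_nkosi", "patient_001"))   # permitted, and logged
try:
    fetch_record("dr_nkosi", "patient_999")      # denied, but still logged
except PermissionError as err:
    print(err)
```

The design choice worth noticing is that denial and approval both produce an audit entry: traceability is not conditional on success.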


3. Reasoning: The Core Intelligence Layer


In Agentic AI, reasoning alone is not enough. The pivotal transformation occurs when reasoning models are connected to external agent systems that allow the AI to retrieve information, evaluate evidence, and act.


This multi-layer architecture replicates the way clinicians think:


  • reasoning interprets meaning,

  • agents fetch relevant information,

  • external tools validate or contextualise findings,

  • and the system responds with structured, purposeful support.
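The four-step loop above can be sketched as a toy pipeline. Every function and data value here is an invented placeholder, kept deliberately simple to show the flow of interpret, fetch, validate, respond:

```python
# Toy knowledge store standing in for an EHR or retrieval index.
KNOWLEDGE = {"hba1c": "HbA1c 7.9% on 2025-10-14"}

def interpret(request: str) -> str:
    # Reasoning: reduce a free-text request to a retrieval key (toy version).
    return "hba1c" if "hba1c" in request.lower() else "unknown"

def fetch(key: str):
    # Agent: retrieve the relevant record, or nothing.
    return KNOWLEDGE.get(key)

def validate(evidence):
    # Tool: accept only dated evidence, as a stand-in for grounding checks.
    return evidence is not None and "20" in evidence

def respond(request: str) -> str:
    key = interpret(request)
    evidence = fetch(key)
    if not validate(evidence):
        return "No grounded evidence found; please verify manually."
    return f"Finding: {evidence}"

print(respond("What was the last HbA1c?"))
print(respond("What was the last chest X-ray?"))
```

Note the failure mode: when the chain cannot ground an answer, it says so explicitly rather than generating unsupported text.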


This is the same architectural breakthrough behind systems like Tempus One, where over 1,000 micro-agents collaborate with a reasoning model, and Stanford’s ChatEHR, where the reasoning model queries structured clinical data rather than generating unsupported text.


Respocare Connect AI follows the same principle: the reasoning engine does not operate alone. It works in tandem with retrieval systems, knowledge stores, document processors, and structured prompt chains that ensure every response is grounded, accurate, and contextually appropriate.


4. Actions: Purposeful Tasks That Reduce Administrative Load


Actions are the practical output of agentic design. Respocare Connect AI can:


  • summarise a full clinical history into an organised report,

  • extract key findings from unstructured data,

  • generate accurate SOAP notes from either voice or text,

  • prepare specialist referral drafts based on clinical inputs,

  • identify follow-up needs based on retrieved or uploaded data.


These actions are always performed within guardrails, ensuring the system supports clinical workflow without overstepping clinical judgment.


5. Access to Tools: What Makes the System Truly Agentic


Agentic behaviour becomes possible when the AI is able to use tools. These include:


  • vector search engines that retrieve relevant clinical segments,

  • RAG pipelines that ground information in verified data,

  • PDF and medical document interpreters,

  • classification and extraction modules,

  • medical calculators and evidence retrieval nodes,

  • workflow automation tools in n8n or similar orchestrators.


Without tool access, an AI remains a static language model. With tool access, guided by reasoning, the AI becomes a structured, compliant collaborator able to perform multi-step tasks.
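A minimal retrieval-and-grounding step, of the kind the vector search and RAG bullets describe, might look like the following. This is purely illustrative: a production pipeline would use embeddings and a vector database, whereas this sketch scores documents by crude word overlap, and the documents themselves are invented.

```python
DOCUMENTS = [
    "Patient reports improved peak flow after inhaler change.",
    "Referral letter drafted for cardiology review.",
    "Spirometry results show mild obstruction.",
]

def score(query: str, doc: str) -> int:
    # Count shared lowercase words as a crude relevance signal.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str) -> str:
    # "Vector search" stand-in: return the highest-scoring document.
    return max(DOCUMENTS, key=lambda d: score(query, d))

def grounded_prompt(query: str) -> str:
    # RAG step: the model only sees retrieved context, never free rein.
    return f"Context: {retrieve(query)}\nQuestion: {query}"

print(grounded_prompt("What did spirometry show?"))
```

The structural point carries over to real systems: the language model is handed retrieved context alongside the question, so its output can be traced back to a specific source document.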


This architecture prevents hallucination, enforces accuracy, and ensures every output reflects the information the clinician provided or uploaded.


Building for Real Clinical Practice


Respocare Connect AI is not conceptual. Its architecture has been repeatedly tested through:

  • live document ingestion,

  • real clinical workflows,

  • clinician feedback loops,

  • safety audits,

  • structured RAG evaluations.


It is designed to act intelligently and responsibly, mirroring the direction of global leaders such as ChatEHR, CLARITY, and Tempus One, while being grounded in the regulatory realities of South African healthcare.


A deeper explanation of this design philosophy is available in the keynote presentation delivered by Matthew Hellyar, Chief Developer and Founder of Respocare Connect AI, at the AI Healthcare Summit 2025:


View the Keynote Presentation: Agentic AI for Healthcare — Matthew Hellyar



The keynote emphasises that the next era of healthcare will be shaped not by passive chatbots, but by intelligent, structured systems capable of retrieving facts, reasoning with purpose, and acting safely within clinical boundaries. Respocare Connect AI was built specifically for that future.


Safety, Governance, and Clinical Trust


In healthcare, intelligence alone is never enough. A system can be capable, efficient, and even remarkably accurate, yet still be unsuitable for clinical use if it lacks transparency, controls, and accountability. For this reason, safety and governance are not secondary considerations in Agentic AI—they are the foundation that determines whether the system can be trusted at all.


As clinical workloads expand and digital tools become more deeply embedded in everyday practice, clinicians need more than performance. They need assurance that every action the AI takes is traceable, reversible, compliant, and aligned with best-practice medical governance. Respocare Connect AI was designed with this reality in mind.


1. Transparency: Understanding What the System Did and Why


Agentic AI systems must be auditable. Every step—retrieval, reasoning, extraction, summarisation, and structured output—should be explainable. This is a defining feature of systems like Stanford’s ChatEHR, where clinicians can review the source sentences and structured data that informed the AI’s final response.


Respocare Connect AI follows the same principle:


  • outputs are grounded in retrieved data,

  • the system’s reasoning chain is visible and controlled,

  • and clinicians can confirm where each piece of information came from.


Transparency builds trust because it eliminates the “black box” problem. Clinicians see evidence, not guesses.


2. Entitlements and Access Control: POPIA, HIPAA, and Institutional Compliance


One of the greatest risks in healthcare AI is inappropriate access to patient data. Entitlements are designed to prevent this risk from materialising. The system knows which clinician is logged in, which patients they are authorised to view, and which records they cannot access under any circumstance.


In practice, this means:


  • every request is access-controlled,

  • patient data is encrypted at rest and in transit,

  • sensitive information cannot be manipulated outside of permitted workflows,

  • audit trails document all interactions.


This level of governance aligns with POPIA in South Africa, HIPAA/HITECH in the United States, and institutional expectations of privacy and confidentiality. Without these safeguards, Agentic AI would not be ethically deployable.


3. Guardrails for Clinical Boundaries: Augmentation, Not Autonomy


Agentic systems are not clinical decision-makers—they are clinical assistants. Their role is to support reasoning, not replace it. They can summarise, organise, compare, and interpret within safe limits, but they do not diagnose, prescribe, or override clinician judgment.


Guardrails enforce this balance. They ensure the AI:


  • does not provide medical advice outside its scope,

  • does not simulate a diagnosis,

  • does not replace essential clinical evaluation,

  • and always encourages verification where uncertainty exists.


This is essential not only for ethical practice but for regulatory acceptability. As demonstrated by systems like CLARITY and Tempus One, autonomy in healthcare AI must always be framed within explicit accountability and human oversight.


4. Minimising Hallucination Through Retrieval and Evidence


Generic AI models hallucinate because they generate responses without grounding. Agentic AI reduces hallucination significantly by retrieving verified information before forming an output. When hallucinations do occur, they are easier for clinicians to identify because the system is designed to:


  • state uncertainty,

  • avoid inventing data points,

  • default to retrieved facts,

  • and signal when context is missing.


The goal is not perfection—it is predictability, safety, and clarity. A system that produces results clinicians can verify and understand is far more valuable than one that appears confident without evidence.


5. Continuous Monitoring, Validation, and Quality Assurance


Healthcare AI must evolve under ongoing oversight. Respocare Connect AI incorporates structured testing for accuracy, completeness, and contextual behaviour across multiple clinical note types, referral formats, and documentation standards.

This includes:


  • routine evaluation of outputs for clinical soundness,

  • continuous refinement of retrieval pipelines,

  • updated entitlements as governance evolves,

  • and clinician feedback loops that shape the system’s reasoning patterns.


This mirrors the regulatory trajectory now seen globally, where AI systems are treated as evolving clinical technologies that require continuous validation rather than one-time approval.


6. Trust Through Design, Not Assumptions


Trust is not earned through claims of safety or intelligence. It is earned through architecture:


  • through systems that show their work,

  • through constraints that prevent misbehaviour,

  • through entitlements that protect patients,

  • and through reasoning pathways clinicians can follow.


Agentic AI succeeds only when clinicians can rely on it—not blindly, but confidently—knowing exactly how the system arrived at a given output.


Respocare Connect AI’s safety framework reflects this philosophy. It was developed to act with intelligence, but behave with discipline, aligning with the same global standards that enabled ChatEHR, Tempus One, and other pioneering systems to operate in real clinical environments.


The Future of Agentic AI in Clinical Workflows


The next decade of healthcare will be shaped not by how much data we collect, but by how intelligently we can interpret and act on that data. Clinical environments are already generating more information than any individual can reasonably synthesise—encounters, imaging, labs, chronic disease histories, medication timelines, and referral loops. The demand is shifting from “more information” to meaningful information, presented in a form that supports effective decision-making.


Agentic AI is central to that future.


As health systems adopt models capable of reasoning, retrieving, and acting in a controlled manner, clinical workflows will begin to reflect a new kind of partnership between humans and machines. AI will not sit at the edge of the workflow; it will sit inside it. Instead of passively waiting for inputs, agentic systems will identify relevant information, surface risks or omissions, and assemble structured outputs that reduce administrative burden and strengthen clinical clarity.


International systems already point toward this direction. ChatEHR has shown how natural-language interaction with medical records can reduce cognitive load and improve documentation quality. Tempus One demonstrates that high-complexity reasoning—supported by hundreds or thousands of sub-agents—is both feasible and clinically aligned. CLARITY shows how structured agent systems can scale to national triage levels with measurable improvements in accuracy and safety.


The progression is clear:


  • AI 1.0 automated tasks.

  • AI 2.0 assisted with documentation and information retrieval.

  • AI 3.0 (Agentic AI) will collaborate in real time, augmenting clinical judgment with contextual intelligence.


In this future, clinicians remain at the centre. Agentic systems do not replace expertise; they amplify it by ensuring that the right information is captured, organised, and made visible at the right moment. Institutions will shift from fragmented digital tools to integrated intelligence layers that unify records, streamline workflows, and reduce the operational burden that contributes to burnout.


Most importantly, the future of agentic healthcare will be defined by systems that are accountable, transparent, and designed around real clinical needs. This is not a technological race—it is a transformation of how healthcare understands and uses information.


And the organisations that lead this transition will be those that treat Agentic AI not as a product, but as an evolving clinical partner built in alignment with governance, ethics, and trust.



Why Respocare Connect AI Is Building This in Public


Healthcare has never benefited from technology developed behind closed doors. In an environment where safety, accuracy, and trust are non-negotiable, transparency becomes a core requirement—not an optional principle. That is why Respocare Connect AI is being built in public, with every architectural decision, reasoning pathway, and workflow test shared openly.


By developing in full view of clinicians, partners, and stakeholders, we ensure that each component of the system—memory, retrieval, entitlements, reasoning, and agentic behaviours—is accountable and examinable. This is not merely a development style; it is a commitment to the future of healthcare technology. Clinicians deserve to understand how their tools work, how decisions are made, where information comes from, and what protections stand behind each output.


Building in public also strengthens the system itself. Real feedback. Real data. Real scrutiny. These are the conditions under which a clinically reliable agentic system is forged. And as global leadership examples such as ChatEHR, CLARITY, and Tempus One have shown, openness is a prerequisite for responsible deployment—not a luxury.

Respocare Connect AI will begin real clinical workflow pilots in January 2026, where clinicians will test agentic features directly within their documentation, summarisation, and administrative routines. This pilot will demonstrate how agentic reasoning, retrieval pipelines, and structured output systems behave under real clinical pressures—not simulated ones.


For clinicians, executives, or health organisations who want to experience the system firsthand, we are offering exclusive demonstration sessions starting in January 2026. These sessions will walk through:


  • the full agentic reasoning pipeline,

  • document ingestion and clinical extraction,

  • safety guardrails and entitlements,

  • RAG-based verification,

  • real-world SOAP note and referral automation,

  • and end-to-end clinical workflow augmentation.


Transparency is the foundation. Clinical reliability is the goal. And collaboration is the method.


If you want to stay informed, follow the development, or reserve your place in the January 2026 demonstration series, we publish a weekly breakdown called:


The Agentic Report | Respocare Connect AI


Each edition covers:

  • agentic system design,

  • reasoning model behaviour,

  • clinical use-case testing,

  • architecture updates,

  • and the future of AI-driven healthcare.


→ Click here to subscribe to The Weekly Agentic Report and secure early access to the January 2026 demonstration.



The future of healthcare intelligence will be agentic. And Respocare Connect AI is committed to building that future in a way that is open, accountable, and designed for real clinical practice.


Reference List




World Health Organization. “Global Health Workforce Projections 2030.” WHO Publications, 2024.

Densen, P. “The Challenges of Medical Education in the 21st Century.” Academic Medicine, 2011.

Elsevier Health. “Clinician of the Future Report.” 2023.

NEJM Catalyst. “Clinician Burnout and Administrative Burden.” 2022.

Stanford Medicine. “ChatEHR: AI-Enhanced Clinical Documentation.” Stanford Health Care, 2024.

Tempus Labs. “Tempus One: Multi-Agent Clinical Intelligence.” Tempus Insights, 2025.

UK MHRA. “AI as a Medical Device (AIaMD) Regulatory Framework.” 2024.

FDA. “Proposed Regulatory Framework for SaMD and AI/ML Modifications.” 2023.


