
Clinical AI in South Africa: Why the Question Matters More Than the System

  • Writer: Matthew Hellyar
  • Apr 2
  • 5 min read


[Image: clinical AI in South Africa — bot with doctor]

There is a pattern emerging in clinical AI adoption that the industry has been slow to name.


Two clinicians. Same system. Same patient. Same data.

Different outputs — entirely.


Not because one is more experienced. Not because the system performed inconsistently. But because one asked a clinical question and the other asked a search query.


This distinction — between information retrieval and clinical reasoning — is the most important practical gap in clinical AI adoption today. And it is one we have been watching closely across our Phase 2 clinical trial work here in South Africa.



What We Observed in Structured Clinical Evaluation


Respocare Connect AI is currently in Phase 2 clinical trials with specialist clinic partners across South Africa. Our evaluation framework is structured around a specific question: does the system behave safely and usefully under real clinical conditions — not controlled demos, not curated prompts, but the kind of complex, multimorbid, longitudinally dense cases that make up a working private practice?


What the evaluation revealed was not primarily a finding about the system.

It was a finding about the question.


Across our structured evaluation work — 28 clinical documents, four visits, zero hallucinations in our Series 4 reference evaluation — the variable that most determined output quality was not the AI's capability. It was the clinical specificity and structure of the prompt.


A flat query produced a flat summary.


A structured clinical question — anchored in a specific patient, a time horizon, a clinical purpose — produced longitudinal synthesis that a colleague reading the record for the first time could act on.
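To make the contrast concrete, here is a hypothetical illustration. The patient details are invented, and these examples are not drawn from the guide itself:

```text
Flat query:
  "Summarise this patient's file."

Structured clinical question:
  "For this patient, a 58-year-old with COPD and type 2 diabetes,
  summarise the respiratory trajectory over the last 12 months so
  that a covering colleague could safely manage the next consultation."
```

The structured version supplies the three anchors named above: a specific patient, a time horizon, and a clinical purpose.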



The Gap Nobody Is Talking About


There is currently no practical framework in the published clinical literature that teaches clinicians how to prompt an agentic AI system working across a full patient record.

The peer-reviewed literature on clinical AI prompting — JMIR 2025, Academic Medicine 2024 — addresses technical prompt engineering for researchers and AI developers. What is missing is the practitioner layer: a clinician-to-clinician guide that translates clinical instinct into the language of agentic AI.


This is not a theoretical gap. It is a daily one.


Clinicians across South Africa are beginning to work with AI systems in their practices. Most approach them the way they approach Google — asking for facts, summaries, definitions. They get answers. But they are not getting clinical reasoning.


That gap is where risk accumulates. It is also where the real opportunity sits.



Why Respocare Insights Published This Guide


Respocare operates two properties with deliberately different functions.


Respocare Connect AI (respocareconnectai.com) is our clinical AI platform — currently in Phase 2 trials, early access waitlist open. It is a product.


Respocare Insights (respocareinsights.io) is our independent editorial platform. It publishes clinical AI infrastructure thinking, evaluation methodology, and practitioner resources — not product marketing. Its credibility depends on editorial independence, and we protect that deliberately.


The decision to publish The Art of the Clinical Question through Respocare Insights rather than through Respocare Connect AI was intentional. This guide is not a product brochure. It is a practitioner's resource — one that would be equally useful to a clinician using any agentic clinical AI system, not only ours.


It is priced at R395 for individual clinician access. It is gated behind a web reader — no PDF, no redistribution — because the content represents genuine intellectual work grounded in real clinical evaluation.



What the Guide Covers


The Art of the Clinical Question is structured across six chapters:


Chapter 1 — Why the Question Is Everything
The five qualities that distinguish a clinical question from a general information request, and why the shape of the question determines the quality of the output.


Chapter 2 — The Architecture of a Great Prompt
Context, intent, scope — the three-element framework that reliably produces high-quality clinical output. Ten weak-versus-strong prompt comparisons drawn from real clinical moments.


Chapter 3 — The Seven Prompt Shapes That Work
The Story Arc, Safety Check, Trajectory Ask, Gap Audit, Handover, Connection Prompt, and Prioritised Action Ask — each with a mechanism explanation, not just a template.


Chapter 4 — Advanced Techniques for Complex Cases
The Layered Conversation, Explicit Qualifier, Hypothetical Test, and Calibration Check — for multimorbid patients and high-stakes clinical decisions.


Chapter 5 — Prompts That Require Care
The Diagnostic Trap, Absent Data Assumption, and Verification Imperative — the prompts that require clinical judgement alongside AI output.


Chapter 6 — Your First Thirty Prompts
A complete prompt library across eight clinical moments: new patient orientation, pre-consultation preparation, during consultation, clinical reasoning, longitudinal review, specialist and MDT, handover, and documentation.



The Peer-Reviewed Evidence Base


The guide is grounded in published research, not internal claims.


A 2025 study in JMIR found that AI systems outperform clinicians in controlled conditions but underperform in real-world workflows — and that the primary driver of this gap is prompting quality, not system capability. The same body of research confirms that clinician trust in AI is calibrated primarily by transparency and training — not by headline capability claims.


A 2024 finding in Academic Medicine notes that only 55% of recommended clinical care is actually delivered — a gap that agentic AI, when used correctly, is structurally positioned to address through longitudinal tracking and accountability prompting.

These are the findings that shaped the guide's architecture. The evidence is not cited to impress. It is cited because it is directly relevant to how clinicians should think about using these systems.



For Clinicians on the Respocare Connect AI Waitlist


If you are on our early access waitlist, this guide was written with you in mind.

The most common question we receive from waitlist members is not about the platform's capabilities. It is: how do I actually use this well?


This guide answers that question — not in the abstract, but in the specific language of clinical practice. By the time you have your first patient session with Respocare Connect AI, you will have thirty prompts ready to deploy and a mental framework for building more.


Premium and AI Exclusive subscribers receive complimentary access to the guide reader through the Respocare Insights platform.



A Note on What This Is Not


This is not a claim that clinical AI will improve patient outcomes. We do not make that claim. Clinical outcomes are determined by clinicians, not tools.


This is not a claim that prompting alone makes AI safe. Safety in clinical AI is an infrastructure question — governance, validation, behavioural consistency, and the verification imperative that the guide addresses directly.


What this is: an honest, evidence-grounded, practitioner-facing resource for a skill that is becoming increasingly relevant and that nobody has yet addressed with clinical rigour in the South African context.


Access the Guide


The Art of the Clinical Question is published by Respocare Insights and available at:

respocareinsights.io/guide — R395, individual clinician licence


If you are interested in the Respocare Connect AI platform itself — Phase 2 clinical trials, early access programme, and platform capabilities — visit our product site or join the waitlist at respocareconnectai.com.


Matthew Hellyar is the Founder of Respocare and Managing Partner of Respocare Insights. Respocare Connect AI is currently in Phase 2 clinical trials in South Africa.


