AI in Surgical Decision Support: Promise and Pitfalls
How AI is transforming evidence retrieval for surgeons — and what to watch out for.
Large language models are revolutionising how clinicians access evidence. But surgical AI tools need careful design to avoid introducing new risks.
What AI Does Well
- Speed — synthesising hundreds of papers in seconds
- Comprehensiveness — searching across multiple databases simultaneously
- Summarisation — distilling long guideline documents into actionable points
- Citation — linking every claim back to its source
The Hallucination Problem
The biggest risk with LLM-based clinical tools is confabulation — generating plausible but incorrect information. This is why any serious clinical AI must enforce the following guardrails (see the sketch after this list):
- Always cite sources — no unsourced claims
- Grade evidence quality — distinguish meta-analyses from case reports
- Flag uncertainty — clearly indicate when evidence is limited
- Never diagnose or prescribe — remain an evidence retrieval tool
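To make these requirements concrete, here is a minimal sketch of what enforcing them might look like in code. Everything in it is an illustrative assumption (the Claim and Answer structures, the level names, the checks), not a description of any real product's implementation.

```python
from dataclasses import dataclass, field

# Hypothetical 5-level evidence scale, strongest first (an assumption for illustration).
EVIDENCE_LEVELS = ["meta-analysis", "RCT", "cohort", "case-control", "case-report"]

@dataclass
class Claim:
    text: str
    source: str | None = None          # e.g. a PubMed ID or guideline URL
    evidence_level: str | None = None  # one of EVIDENCE_LEVELS

@dataclass
class Answer:
    claims: list[Claim] = field(default_factory=list)
    uncertainty_note: str | None = None

def validate(answer: Answer) -> list[str]:
    """Return guardrail violations; only an empty list means the answer may be shown."""
    violations = []
    for claim in answer.claims:
        # No unsourced claims: every statement must trace to a document.
        if not claim.source:
            violations.append(f"Unsourced claim: {claim.text!r}")
        # Every claim must carry an evidence grade.
        if claim.evidence_level not in EVIDENCE_LEVELS:
            violations.append(f"Ungraded claim: {claim.text!r}")
    # Limited evidence must be flagged explicitly, never presented silently.
    weak = [c for c in answer.claims if c.evidence_level in ("case-control", "case-report")]
    if weak and not answer.uncertainty_note:
        violations.append("Limited evidence cited but no uncertainty flag set")
    return violations
```

The fourth requirement, never diagnosing or prescribing, is better handled upstream, by scoping prompts and refusing out-of-scope queries, than by a post-hoc filter alone.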
Torr Health's Approach
We use a Retrieval-Augmented Generation (RAG) architecture (sketched in code below) that:
- Retrieves real evidence from verified sources before generating any answer
- Grades every citation on a 5-level evidence strength scale
- Includes a mandatory disclaimer on every answer
- Stores no patient data, which supports the regulatory positioning described below
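A minimal sketch of this pipeline follows, under stated assumptions: the retriever, grader, and llm objects and their search, grade, and generate methods are hypothetical placeholders, as are the doc.citation and doc.summary fields. What matters is the ordering: evidence is retrieved and graded before generation, the model is constrained to the retrieved context, and the disclaimer is appended unconditionally.

```python
# Assumed disclaimer wording; in practice this would be set by clinical governance.
DISCLAIMER = ("This is an evidence summary, not medical advice. "
              "Verify against current local guidance before acting on it.")

def answer_query(query: str, retriever, grader, llm) -> str:
    """RAG flow: retrieve, grade, generate, disclaim. Nothing is persisted."""
    # 1. Retrieve real evidence from verified sources before any generation.
    documents = retriever.search(query)

    # 2. Grade every citation on the 5-level evidence strength scale.
    graded = [(doc, grader.grade(doc)) for doc in documents]

    # 3. Generate strictly from the retrieved context, citing each document.
    context = "\n".join(f"[{level}] {doc.citation}: {doc.summary}" for doc, level in graded)
    prompt = (
        "Answer using ONLY the evidence below and cite each claim.\n"
        f"{context}\n\nQuestion: {query}"
    )
    answer = llm.generate(prompt)

    # 4. Mandatory disclaimer on every answer.
    return f"{answer}\n\n{DISCLAIMER}"
```

Keeping generation downstream of retrieval is what lets every claim trace back to a graded source rather than to the model's parametric memory.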
Regulatory Considerations
Developers of clinical AI tools in the UK must consider MHRA regulation. By positioning itself as an evidence retrieval and synthesis tool — not a diagnostic or treatment recommendation engine — Torr Health operates as a clinical reference tool rather than as Software as a Medical Device (SaMD).