Medical AI

Glass box. Not black box.

The Medical AI surface is the reasoning layer behind every Regain recommendation. Each suggestion shows the rule that fired, the evidence cited, and the supervision verdict that let it through. If the supervision layer rejects a suggestion, you never see it — and the rejection is logged.

How the reasoning runs

Three layers, separate codebases.

Reasoning

The reasoning engine proposes clinical actions — draft plans, differential diagnoses, remediation steps. It works against a structured chart built from your records and your conversations.

Supervision

A separate supervision system — different codebase, different runtime, different access controls — inspects every proposed action before it can leave the platform. It can block, annotate, or escalate. It cannot be disabled from inside the reasoning layer.

Governance

A governance layer attaches regulatory context: FDA QMSR records, IEC 62304 traceability, ISO 14971 risk analysis. Clinical AI artifacts are auditable end-to-end.
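The three-layer flow above can be sketched in code. This is an illustrative sketch only; every name here (ProposedAction, supervise, govern, the verdict strings) is hypothetical, not Regain's actual interface:

```python
from dataclasses import dataclass

# Hypothetical sketch of the reasoning -> supervision -> governance flow.
# All names and field choices are illustrative assumptions.

@dataclass
class ProposedAction:
    rule_id: str          # the reasoning rule that fired
    evidence: list[str]   # citations backing the suggestion
    draft: str            # the proposed clinical action

@dataclass
class Verdict:
    decision: str         # "allow" | "block" | "annotate" | "escalate"
    notes: str = ""

def supervise(action: ProposedAction) -> Verdict:
    """Separate layer: inspects every action before it leaves the platform."""
    if not action.evidence:
        return Verdict("block", "no supporting citations")
    return Verdict("allow")

def govern(action: ProposedAction, verdict: Verdict) -> dict:
    """Attach regulatory context so the artifact is auditable end-to-end."""
    return {
        "action": action.draft,
        "rule": action.rule_id,
        "verdict": verdict.decision,
        "traceability": ["FDA QMSR", "IEC 62304", "ISO 14971"],
    }

proposal = ProposedAction("rule-042", ["PMID:12345"], "Adjust dosage plan")
record = govern(proposal, supervise(proposal))
print(record["verdict"])  # prints "allow"
```

The key property the page describes is separation: `supervise` sits between the proposal and the output, and the reasoning layer has no code path around it.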

What this means for you

Every output is accountable.

No anonymous recommendations

Every recommendation is traceable to the rule, the citation, the model version, and the supervision verdict that allowed it.

No autonomous clinical actions

The AI drafts. A human clinician signs. Nothing that affects your care reaches you without a signature.

No quiet failures

When the supervision layer rejects a suggestion, it's logged. When a rule misfires, it's logged. When the model changes, the change is in the audit trail.
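An audit-trail entry of the kind described above might look like the following. The field names and event labels are illustrative assumptions, not Regain's actual log schema:

```python
import json
from datetime import datetime, timezone

# Hypothetical shape of an audit-trail entry; field names and event
# labels are illustrative, not Regain's actual schema.

def audit_entry(event: str, **fields) -> str:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,   # e.g. "suggestion_rejected", "rule_misfire",
        **fields,         #      "model_changed"
    }
    return json.dumps(entry)

# A rejected suggestion is logged, never shown to the user:
line = audit_entry(
    "suggestion_rejected",
    rule_id="rule-042",
    model_version="2024.11.3",
    verdict="block",
    reason="insufficient evidence",
)
```

Each entry carries the same identifiers a shown recommendation would — the rule, the model version, the verdict — so rejections are as traceable as approvals.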

Under the hood

The same supervision protocol runs on our enterprise deployments.

Regain's supervision layer isn't a consumer add-on: it's the same mechanism we deploy at medical centers, and the one institutional customers audit against.

Read the Trust & Safety architecture →