The Platform
Reviewer Surface
Tooling that gives peer reviewers leverage on the work that requires clinical judgment: mock surveys against encoded Standards, gap analysis tied to the underlying metrics, cross-program standards mapping, and gated remediation drafting.
The compliance engine produces findings. The Reviewer Surface is where peer reviewers, quality coordinators, and accreditation staff actually do the work the findings exist to support.
The premise is that completeness checking is not what reviewers should be spending their time on. Reviewers are the volunteer backbone of accreditation; their hours are the scarce resource in the cycle. The Reviewer Surface is tooling built around that constraint: it absorbs the mechanical work so reviewers can spend their attention on the work only they can do.
What the surface offers
Standards lookup. Clause-level retrieval against the encoded Standards for a program area. When a reviewer needs to know exactly what a clause requires, the answer comes from the structured rule pack, not from a PDF search or a memory of the last surveyor training.
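A minimal sketch of what clause-level lookup against a structured rule pack might look like. The `Clause` fields and the `RulePack` interface are hypothetical stand-ins, not the engine's real API:

```python
# Sketch of clause-level lookup from an encoded rule pack rather than a PDF.
# All names here (Clause, RulePack, the field set) are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Clause:
    clause_id: str      # stable identifier within the program's Standards
    program: str        # program area the clause applies to
    requirement: str    # normative text of the requirement
    source: str         # authoritative source the clause traces to

class RulePack:
    def __init__(self, clauses: list[Clause]):
        self._by_id = {c.clause_id: c for c in clauses}

    def lookup(self, clause_id: str) -> Clause:
        """Answer 'what exactly does this clause require?' from structure,
        not from a document search or memory of the last surveyor training."""
        return self._by_id[clause_id]
```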
Mock surveys. A simulated review against the current compliance posture. Every applicable rule fires; findings are presented in the same shape a peer reviewer would produce them. A program can run a mock survey weekly, the day before a site visit, or on demand.
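A sketch of the evaluation loop under those assumptions: every applicable rule is checked against the posture, and each failure becomes a finding. `Rule`, `Finding`, and the posture dict are illustrative shapes, not the engine's actual types:

```python
# Sketch of a mock survey: evaluate every applicable rule, collect findings.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Finding:
    clause_id: str
    detail: str

@dataclass
class Rule:
    clause_id: str
    applies_to: str                         # program area
    check: Callable[[dict], str | None]     # returns a detail string on failure

def mock_survey(rules: list[Rule], program: str, posture: dict) -> list[Finding]:
    findings = []
    for rule in (r for r in rules if r.applies_to == program):
        detail = rule.check(posture)        # every applicable rule fires
        if detail is not None:
            findings.append(Finding(rule.clause_id, detail))
    return findings
```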
Gap analysis. Side-by-side view of current state against the target Standards for the program area. Conditions met, conditions at risk, conditions failing, each tied to the underlying metric and the source record the metric was computed from.
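A sketch of how that classification might be bucketed, keeping the metric and source record attached so every status stays traceable. The thresholds and field names are assumptions:

```python
# Sketch of gap-analysis classification: bucket each condition by its metric,
# carrying the metric and source record alongside the status.
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    MET = "met"
    AT_RISK = "at_risk"
    FAILING = "failing"

@dataclass
class ConditionState:
    condition_id: str
    status: Status
    metric: float          # the underlying metric
    source_record: str     # record the metric was computed from

def classify(condition_id: str, metric: float, source_record: str,
             target: float, risk_margin: float = 0.1) -> ConditionState:
    if metric >= target:
        status = Status.MET
    elif metric >= target * (1 - risk_margin):   # within the risk margin
        status = Status.AT_RISK
    else:
        status = Status.FAILING
    return ConditionState(condition_id, status, metric, source_record)
```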
Cross-program mapping. Where two frameworks address the same underlying requirement, the surface identifies the overlap. The same evidence satisfies both. The reviewer reads it once.
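One way the overlap detection could work, assuming each encoded clause carries a canonical requirement key; that key, and the clause shape, are assumptions about how overlap is encoded:

```python
# Sketch of cross-program mapping: clauses from different frameworks that
# share a canonical requirement key are grouped, so one piece of evidence
# can satisfy all of them.
from collections import defaultdict

def overlap_map(clauses) -> dict:
    """clauses: iterable of objects with .framework, .clause_id, .requirement_key"""
    groups = defaultdict(list)
    for c in clauses:
        groups[c.requirement_key].append((c.framework, c.clause_id))
    # Keep only requirements addressed by more than one framework.
    return {k: v for k, v in groups.items() if len({f for f, _ in v}) > 1}
```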
What-if evaluation. Model the compliance impact of a hypothetical change. If a second qualified staff member is hired, which findings resolve? If the QI plan review frequency increases, which conditions clear? The engine re-evaluates against the full rule pack and surfaces the delta.
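A minimal sketch of the re-evaluate-and-diff step, reusing the rule shape from the mock-survey sketch above; the change dict and its keys are hypothetical:

```python
# Sketch of what-if evaluation: apply a hypothetical change to a copy of the
# posture, re-run the rule pack, and diff the findings.
import copy

def what_if(rules, program: str, posture: dict, change: dict) -> dict:
    def failing(p: dict) -> set[str]:
        return {r.clause_id for r in rules
                if r.applies_to == program and r.check(p) is not None}

    baseline = failing(posture)
    hypothetical = copy.deepcopy(posture)
    hypothetical.update(change)                  # e.g. {"qualified_staff": 2}
    after = failing(hypothetical)
    return {
        "resolved": sorted(baseline - after),    # findings the change clears
        "introduced": sorted(after - baseline),  # findings the change creates
    }
```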
Rule-fire explanation. For any finding, the complete trail: which rule fired, what metric was computed, what underlying record the metric came from, which clause the rule implements, what authoritative source the clause traces to.
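The trail could be carried as a single flat record per finding; a sketch, with all field names assumed:

```python
# Sketch of a rule-fire explanation: one record that walks the full trail
# from finding back to authoritative source.
from dataclasses import dataclass

@dataclass(frozen=True)
class Explanation:
    rule_id: str            # which rule fired
    metric_name: str        # what metric was computed
    metric_value: float
    source_record: str      # underlying record the metric came from
    clause_id: str          # clause the rule implements
    authority: str          # authoritative source the clause traces to

    def trail(self) -> str:
        return (f"{self.rule_id} fired on {self.metric_name}={self.metric_value} "
                f"(from {self.source_record}), implementing {self.clause_id} "
                f"per {self.authority}")
```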
Remediation drafting (gated). Drafts a remediation plan from a set of findings. This operation is gated: it produces a draft for human review. A qualified person decides. The substrate does not act.
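A sketch of the gate as a type-level constraint: drafting returns a draft object that carries no authority until a named reviewer approves it. Nothing here writes to clinical or accreditation systems; the types and fields are hypothetical:

```python
# Sketch of gated drafting: the draft holds no authority until approved.
from dataclasses import dataclass

@dataclass
class RemediationDraft:
    findings: list
    plan_text: str
    approved_by: str | None = None     # None until a qualified person decides

    def approve(self, reviewer: str) -> None:
        self.approved_by = reviewer    # the human, not the substrate, acts

def draft_remediation(findings) -> RemediationDraft:
    plan = "\n".join(f"- Address {f.clause_id}: {f.detail}" for f in findings)
    return RemediationDraft(findings=findings, plan_text=plan)
```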
Multi-program coordination
A facility serving more than one program area, or an accreditation body administering multiple programs, needs to coordinate across them. The Reviewer Surface supports this with cross-program views that share the underlying evidence base.
A query like "which programs are at risk of a finding the next time they enter the cycle" runs once, against the shared evidence base, and returns a view aligned to each program's Standards. The work that used to require switching between binders, spreadsheets, and inboxes happens in a single view.
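A sketch of that query shape: one pass per program over the shared evidence, one at-risk view per program, aligned to that program's Standards. The `check` predicate and the rule-pack shape follow the earlier sketches and are assumptions:

```python
# Sketch of a cross-program query against a shared evidence base.
def programs_at_risk(rule_packs: dict, evidence: dict) -> dict:
    """rule_packs maps program -> list of rules; evidence is shared."""
    report = {}
    for program, rules in rule_packs.items():
        at_risk = [r.clause_id for r in rules if r.check(evidence) is not None]
        if at_risk:
            report[program] = at_risk  # view aligned to this program's Standards
    return report
```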
Drift between cycles
Drift is the slow degradation of readiness between cycles. A credential expires. A QI plan review falls behind schedule. Evidence freshness ages out. These changes are individually minor and collectively dangerous, and they are invisible to periodic manual reviews.
The surface monitors four dimensions on a continuous basis; a sketch of the checks follows the list:
- Rule-fire rate shifts. A sudden change in finding rates signals a systemic issue, not just an isolated gap
- Attestation aging. Required attestations approaching or past their due dates
- Evidence freshness. Clinical evidence aging beyond its validity window
- Cycle proximity. Heightened monitoring as cycle boundaries approach, when readiness matters most
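A minimal sketch of those four checks run on a schedule. The thresholds, field names, and the shape of the state dict are all assumptions; real signals would come from the engine's own telemetry:

```python
# Sketch of the four drift checks.
from datetime import date, timedelta

def drift_checks(state: dict, today: date) -> list[str]:
    alerts = []
    # 1. Rule-fire rate shift: a sudden jump over the trailing baseline.
    if state["fire_rate"] > 1.5 * state["baseline_fire_rate"]:
        alerts.append("rule-fire rate shifted: possible systemic issue")
    # 2. Attestation aging: due within 30 days or already past due.
    for a in state["attestations"]:
        if a["due"] <= today + timedelta(days=30):
            alerts.append(f"attestation {a['id']} due {a['due']}")
    # 3. Evidence freshness: evidence older than its validity window.
    for e in state["evidence"]:
        if today - e["collected"] > e["validity_window"]:
            alerts.append(f"evidence {e['id']} aged out")
    # 4. Cycle proximity: tighten monitoring near the cycle boundary.
    if state["next_cycle"] - today <= timedelta(days=90):
        alerts.append("cycle boundary near: heightened monitoring")
    return alerts
```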
Drift findings feed into briefings. The team starts each day knowing what changed overnight and what needs attention.
Workflows as configuration
Quality staff can author structured workflows (tracer methodologies, root-cause analyses, evidence-collection protocols) as hot-reloadable configuration. These are not code. They are structured instructions the Reviewer Surface follows, authored in the domain language of quality professionals.
Pre-built workflows cover mock surveys, tracer methodology, root-cause analysis, gap analysis, and evidence pull for specific findings. Programs extend these with site-specific workflows that reflect their own quality processes. The surface adapts to the program's methodology, not the other way around.
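A sketch of the idea: the workflow is data the surface follows, not code it runs. The step vocabulary here ("pull_evidence", "evaluate", "draft_summary") is hypothetical; real workflows would be authored in the quality team's own domain language and hot-reloaded from configuration files:

```python
# Sketch of a workflow as configuration: structured steps, dispatched to
# handlers, with gated steps flagged for human review.
TRACER_WORKFLOW = {
    "name": "medication-management tracer",
    "steps": [
        {"action": "pull_evidence", "scope": "medication orders, last 30 days"},
        {"action": "evaluate", "rules": "medication-management rule pack"},
        {"action": "draft_summary", "gated": True},   # human review required
    ],
}

def follow(workflow: dict, handlers: dict) -> None:
    """Dispatch each step to its handler; unknown actions fail loudly."""
    for step in workflow["steps"]:
        handlers[step["action"]](step)
```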
Judgment is not delegable
Every operation that affects clinical operations or accreditation outcomes is gated. The surface produces drafts, recommendations, and analyses. A qualified person decides what to do with them. There is no path through the substrate that takes an action on a clinical or accreditation outcome without explicit human review.
This is the right architecture for a domain where clinical judgment is not delegable. The substrate handles the synthesis and the mechanics. Reviewers handle the decisions that require their expertise. The point of the surface is to make sure the second category gets more of their time, not less.