Compliance Engine

Deterministic rule evaluation against the Standards for each program area. No machine-learning model in the compliance path. Same inputs produce the same finding, every run.

The compliance engine is the substrate's evaluator. It takes structured clinical data, applies the rule pack for the program area, and produces findings. The work is deterministic, continuous, and fully provenanced.

There is no machine-learning model in the compliance path. This is a deliberate architectural choice. Compliance evaluation is a domain where the correct answer must be reproducible. Two evaluators looking at the same evidence against the same standard must reach the same finding. A language model cannot guarantee that. A deterministic rule evaluator can.


How the engine works

The engine evaluates condition kinds against the rule pack for the program area being assessed. Each condition kind is a structured check (a credential currency lookup, a volume tally, a report-field completeness test, a turnaround-time computation, a quality-indicator threshold) implemented once, generically, in the evaluator.

A new accreditation framework or a new modality does not add condition kinds to the engine. It adds rules to a pack. The pack declares which condition kind each rule uses, which clause of the published Standards it implements, and which scoring model the framework applies. The engine reads the pack, fires the conditions, and emits findings.
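As a sketch of how a pack-driven evaluator like this could be wired (every name here, from Rule to CONDITION_KINDS to the sample credential check, is a hypothetical illustration, not the product's actual API):

    # Hypothetical sketch of a pack-driven, deterministic evaluator.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass(frozen=True)
    class Rule:
        rule_id: str          # identifier the finding will carry
        clause: str           # clause of the published Standards it implements
        condition_kind: str   # which generic check the evaluator should fire
        params: dict          # thresholds, field names, lookback windows, etc.

    # Each condition kind is implemented once, generically. Adding a new
    # framework adds Rule entries to a pack, not entries to this registry.
    CONDITION_KINDS: dict[str, Callable[[dict, dict], dict]] = {}

    def condition_kind(name: str):
        def register(fn):
            CONDITION_KINDS[name] = fn
            return fn
        return register

    @condition_kind("credential_currency")
    def credential_currency(data: dict, params: dict) -> dict:
        """Is the required credential present and unexpired as of the run date?"""
        cred = data["credentials"].get(params["credential"])
        met = cred is not None and cred["expires"] >= data["as_of"]
        return {"met": met, "metric": cred["expires"] if cred else None}

    def evaluate_pack(pack: list[Rule], data: dict) -> list[dict]:
        """Fire each rule's declared condition kind; same inputs, same findings."""
        findings = []
        for rule in pack:
            result = CONDITION_KINDS[rule.condition_kind](data, rule.params)
            findings.append({"rule_id": rule.rule_id, "clause": rule.clause, **result})
        return findings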

Representative condition kinds in use today include:

  • A required credential is absent or expired
  • Procedure volume falls below the volume requirement
  • A required field in a final report is absent
  • Report turnaround time exceeds the standard
  • A required documentation artifact cannot be located
  • A required attestation has not been completed
  • Equipment QC has not been performed on schedule
  • An institutional policy has not been reviewed on schedule
  • A staff competency assessment is overdue
  • Continuing education hours are below the requirement
  • The QI plan review cycle is not current
  • A measured quality metric is below the standard
  • External proficiency testing is overdue

Each finding carries the rule identifier, the clause the rule implements, the metric the engine computed, and the underlying source record the metric was computed from.
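A minimal sketch of that finding shape, with every field name hypothetical:

    # Hypothetical finding record; the engine's real schema is not shown here.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Finding:
        rule_id: str         # which rule fired
        clause: str          # the Standards clause the rule implements
        met: bool            # did the check pass?
        metric: object       # the value the engine computed (count, date, ratio)
        source_record: dict  # the underlying record the metric was computed
                             # from, including source system and sync timestamp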


Scoring models

Different accreditation frameworks use different scoring methodologies. The engine supports the patterns that recur across the published frameworks it has been run against:

Binary. The standard is either met or not. No partial credit. Used by frameworks where compliance is categorical.

All-or-none with threshold. Each individual standard is binary, but the standard set has a minimum passing count. A facility can fail a small number of standards and still pass the framework as a whole.

Graduated. Compliance is scored on a scale, with multiple levels of conformance recognized. Partial compliance receives partial credit.

Composite. Multiple scoring models layered within the same accreditation. Clinical standards may be graduated while governance standards are binary.

The engine selects the scoring model from the rule pack itself. A program enrolled in more than one framework gets each framework evaluated with its native scoring methodology, on the same data, in the same cycle.
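A sketch of the four patterns, assuming each finding has been reduced to a met flag (binary, all-or-none) or an integer conformance level (graduated); all function names are illustrative:

    # Hypothetical scoring functions; the pack declares which one applies.

    def score_binary(findings) -> bool:
        """Met or not met; no partial credit."""
        return all(f["met"] for f in findings)

    def score_all_or_none(findings, min_passing: int) -> bool:
        """Each standard is binary, but the set passes on a minimum count."""
        return sum(f["met"] for f in findings) >= min_passing

    def score_graduated(findings, levels=("non-conformant", "partial", "full")):
        """Average the conformance level across findings, map back to a level."""
        mean = sum(f["level"] for f in findings) / len(findings)
        return levels[round(mean)]

    def score_composite(groups: dict) -> dict:
        """Different models layered in one accreditation: each group of
        findings is scored with the model the pack declares for it."""
        return {name: model(findings) for name, (model, findings) in groups.items()}

Because the model is declared as data in the pack, the same findings can be re-scored under each enrolled framework's native methodology without re-evaluating the underlying records.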


Evaluation cadence

Source-system data syncs on a scheduled basis. Rule evaluation runs at scheduled intervals via background task workers. Drift detection identifies changes between cycles: shifts in rule-fire rates, attestations approaching their due dates, evidence freshness aging out, and cycle boundaries approaching.
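A sketch of what cycle-over-cycle drift detection can look like, assuming each scheduled cycle reduces to a mapping of rule identifier to outcome; both function names are hypothetical:

    # Hypothetical drift checks run between scheduled evaluation cycles.
    from datetime import date, timedelta

    def detect_drift(previous: dict[str, bool], current: dict[str, bool]) -> list[str]:
        """Rules whose outcome changed between two scheduled cycles."""
        return [rid for rid in current
                if rid in previous and previous[rid] != current[rid]]

    def approaching(due_dates: dict[str, date], window_days: int = 30) -> list[str]:
        """Attestations or reviews falling due within the warning window."""
        horizon = date.today() + timedelta(days=window_days)
        return [item for item, due in due_dates.items() if due <= horizon]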

The compliance posture surfaced in the morning reflects the data state of the previous day, not the last time someone opened a binder.


Pack integrity

Every rule pack is hashed at load time. The engine verifies pack integrity, detects tampered packs, identifies unauthorized pack injection, and flags missing packs before evaluation begins. Bundle manifests are computed and verified on every evaluation cycle.
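A sketch of load-time integrity checking with content hashes, assuming packs are stored as files and a manifest lists each pack's expected digest; the names and the .pack extension are illustrative:

    # Hypothetical pack verification; fail closed before evaluation begins.
    import hashlib
    from pathlib import Path

    def pack_digest(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def verify_bundle(manifest: dict[str, str], pack_dir: Path) -> None:
        on_disk = {p.name: pack_digest(p) for p in pack_dir.glob("*.pack")}
        for name, expected in manifest.items():
            if name not in on_disk:
                raise RuntimeError(f"missing pack: {name}")
            if on_disk[name] != expected:
                raise RuntimeError(f"tampered pack: {name}")
        for name in on_disk.keys() - manifest.keys():
            raise RuntimeError(f"unauthorized pack: {name}")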

In an audit context, the integrity of the evaluation system is itself subject to scrutiny. The engine must be able to demonstrate that the rules it applied are the rules it was supposed to apply. The hash chain answers that question without ceremony.


What this replaces

The status quo for accreditation readiness is a quality coordinator reading the Standards in a PDF, manually compiling evidence from a dozen source systems, entering findings into a spreadsheet, and presenting a summary at a meeting. Between meetings, drift is invisible. Findings are tracked in email threads. Evidence is scattered across shared drives.

The compliance engine replaces that workflow with continuous, deterministic, evidence-linked evaluation. Not as a convenience, but as a prerequisite for the kind of continuous readiness that survives contact with the accreditation cycle.

Read the Clinical Data Pipeline page →