Detection Method · 01
Finding the setpoints your operators silently overrode — before the override becomes procedure.
Measurement-and-verification literature has named operator-overridden setpoints as a leading cause of persistent drift in industrial and building systems for two decades. Sensor-first fault detection can see the override; it cannot see the reasoning that produced it. Sovel cross-references work-order narrative, shift-logbook entries, and setpoint-change records to surface overrides alongside the institutional reasoning that would otherwise stay in a single operator's head.
Why this is hard
Fig 1. Cross-signal detection. Setpoint-change records (grey) are bounded by expected operational envelopes. Narrative-thin WO closures co-occurring with recurrent setpoint edits (pink) mark the review window where institutional reasoning is evaporating.
How we detect it
Setpoint change records tell you *what* moved. Work-order narratives and logbook entries tell you *why* someone moved it — if the narrative is thick enough. Sovel pairs the two signals and scores the combination against asset criticality. Thin narrative plus a recurring override pattern is where knowledge is concentrating in a single operator.
Cross-signal join
Setpoint-change logs join to work orders on asset + time window. Sovel surfaces changes that lack a closing narrative and flags the asset as carrying undocumented intervention history.
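The join logic can be sketched in a few lines. All record shapes below (`SetpointChange`, `WorkOrder`, the 24-hour window) are illustrative assumptions, not Sovel's production schema, which is CMMS-specific:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class SetpointChange:
    asset: str
    at: datetime

@dataclass
class WorkOrder:
    asset: str
    closed_at: datetime
    narrative: str

def undocumented_changes(changes, work_orders, window=timedelta(hours=24)):
    """Flag setpoint changes with no closing narrative on the same
    asset inside the time window (undocumented intervention history)."""
    flagged = []
    for c in changes:
        matched = any(
            wo.asset == c.asset
            and abs((wo.closed_at - c.at).total_seconds()) <= window.total_seconds()
            and wo.narrative.strip()
            for wo in work_orders
        )
        if not matched:
            flagged.append(c)
    return flagged
```

The asset-plus-window join is deliberately loose: a setpoint edit rarely lands on the exact timestamp of the WO that explains it.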
Narrative thinness score
Each WO note is scored for diagnostic content. The archetype signature is a recurring 'thing fixed' closure on an asset whose setpoint-change history shows repeated manual intervention.
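A crude version of the scoring heuristic looks like the following. The keyword sets and weights are placeholders for illustration only; the production scorer is not a keyword list:

```python
import re

# Generic closures that carry no diagnostic content (the "thing fixed" pattern).
GENERIC_CLOSURES = {"fixed", "done", "repaired", "ok", "completed", "adjusted"}

# Tokens that suggest real diagnostic reasoning was recorded.
DIAGNOSTIC_HINTS = {"because", "due to", "root cause", "found", "suspect",
                    "setpoint", "override", "drift"}

def thinness_score(note: str) -> float:
    """Return a 0..1 thinness score: 1.0 means no diagnostic content."""
    text = note.lower().strip()
    words = re.findall(r"[a-z']+", text)
    if not words:
        return 1.0
    if set(words) <= GENERIC_CLOSURES:
        return 1.0                                # pure "thing fixed" closure
    hints = sum(1 for h in DIAGNOSTIC_HINTS if h in text)
    length_credit = min(len(words) / 40, 1.0)     # longer notes earn some credit
    richness = min(1.0, 0.5 * length_credit + 0.25 * hints)
    return round(1.0 - richness, 3)
```

A closure of "fixed" scores 1.0; a note that names a cause and the override it motivated scores near 0.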
ROI gating
Overrides on high-criticality assets escalate first. The reviewer sees only issues whose expected cost clears the plant's threshold — the SkySpark Tariff-Engine pattern adapted to knowledge-risk cost.
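The gate reduces to an expected-cost filter. The risk model below is a deliberately crude proxy (recurrences and thinness scale the asset's cost exposure); the field names and the $5K threshold are illustrative, not plant economics:

```python
from dataclasses import dataclass

@dataclass
class OverrideIssue:
    asset: str
    recurrence_count: int       # how often the override pattern repeats
    thinness: float             # 0..1 narrative thinness score
    criticality_usd: float      # annualized cost exposure if knowledge is lost

def expected_cost(issue: OverrideIssue) -> float:
    # More recurrences and thinner narrative put a larger share
    # of the asset's exposure at risk.
    risk = min(1.0, 0.2 * issue.recurrence_count) * issue.thinness
    return risk * issue.criticality_usd

def gate(issues, threshold_usd=5_000.0):
    """Keep only issues whose expected cost clears the plant threshold,
    highest first, so high-criticality assets escalate first."""
    passed = [i for i in issues if expected_cost(i) >= threshold_usd]
    return sorted(passed, key=expected_cost, reverse=True)
```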
Reviewer-governed capture
Each surfaced override becomes a point-of-decision capture prompt when the next WO closes on that asset. The reasoning enters the governed knowledge base with reviewer sign-off, not an autonomous write.
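The governance contract can be sketched as: detection only ever produces a draft, and only a reason-coded reviewer decision writes to the knowledge base. Every class and field name here is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class CapturePrompt:
    asset: str
    operator: str
    question: str
    status: str = "draft"                 # draft -> approved | rejected

@dataclass
class KnowledgeBase:
    entries: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)   # append-only

    def review(self, prompt: CapturePrompt, decision: str, reason_code: str):
        """Only a reviewer decision can write; every decision is logged."""
        assert decision in ("approve", "edit", "reject")
        self.audit_log.append((prompt.asset, decision, reason_code))
        if decision in ("approve", "edit"):
            prompt.status = "approved"
            self.entries.append(prompt)
        else:
            prompt.status = "rejected"

def on_wo_close(asset: str, operator: str) -> CapturePrompt:
    """Fired when the next WO closes on a flagged asset: a draft, never a write."""
    q = f"Why is the setpoint on {asset} being manually overridden?"
    return CapturePrompt(asset=asset, operator=operator, question=q)
```

The design point is the absence of any code path from `on_wo_close` to `entries`: an autonomous write is structurally impossible, not just disallowed by policy.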
What reviewers see
The Sovel reviewer surface after an override is detected: the setpoint-change timeline on the left, thinly narrated work orders on the right, and a capture-prompt draft pre-populated for the operator who closed the most recent ticket.
How we benchmark it
Detection is evaluated against FMUCD (Facility Maintenance Unstructured Corpus Dataset, 3.69M work orders, CC BY 4.0) augmented with MaintKG ontology edges. Override-archetype labels are injected synthetically via controlled setpoint-narrative mismatches, then recovered.
| Metric | Method | Dataset / Corpus | Result |
|---|---|---|---|
| Precision | Override-archetype label recovery against seeded mismatches | FMUCD subset + MaintKG edges | 0.89 (targeting ≥0.85) |
| Recall | Seeded override patterns retrieved from held-out slice | FMUCD held-out 20% | 0.81 (targeting ≥0.75) |
| False-positive rate | Non-override WO clusters mis-flagged as overrides | Baltimore PM (~87K instances) | < 4% |
Numbers reflect the 2026-04 evaluation pass against public corpora. Pilot telemetry on live plant data will publish under the retainer engagement with reviewer-approved redactions.
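Once seeded labels and detector flags are in hand, the scoring step is standard retrieval arithmetic. A minimal sketch (corpus handling and seeding elided; IDs are illustrative):

```python
def score_recovery(seeded_ids: set, flagged_ids: set):
    """Precision/recall of override-archetype recovery against
    synthetically seeded setpoint-narrative mismatches."""
    tp = len(seeded_ids & flagged_ids)
    precision = tp / len(flagged_ids) if flagged_ids else 0.0
    recall = tp / len(seeded_ids) if seeded_ids else 0.0
    return precision, recall
```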
Positioning against adjacent tooling
Override detection sits in a specific architectural slot between sensor-first FDD/MBCx platforms and CMMS work-order management. Neither neighbor solves it alone.
| Adjacent Tooling | Their lane | Sovel lane |
|---|---|---|
| SkySpark / Aspen Mtell / Clockworks (FDD + MBCx) | Watch the sensor. Detect valve-versus-damper conflicts, seal-failure lead time, and energy-waste rules. Gate alerts by tariff cost. | Watch the reviewer. Detect the undocumented workaround that the operator runs because the FDD rule fires too often. Same ROI-gating logic, different signal. |
| MaintainX / UpKeep / Fiix (CMMS + WO-level RCA) | Structured forensic entry on a single WO after failure. Good template for documenting one incident. | Cross-WO pattern detection. The third recurrence on the same asset with a thin closure is the signal — no single technician would ever see it from inside one ticket. |
| Texas A&M ESL CC® suite (Opportunity Assessor, WinAM, Implementer, Validator) | Periodic retro-commissioning studies that tune controls and catch sensor drift. | Preserves the *why* behind each override that RCx corrects, so the optimization persists past six months instead of decaying back to drift. |
Frequently asked questions
- How is this different from an FDD alert on a fighting valve-damper pair?
- An FDD rule fires on the mechanical symptom. Sovel detects the institutional reasoning the operator used to silence or work around the alert — the knowledge that would disappear with retirement or a shift rotation. FDD watches the sensor. Sovel watches the reviewer.
- Do you need our setpoint-change log to work?
- No. The setpoint-change log sharpens the detector substantially when it is available. When it is not, the archetype still surfaces from narrative-thinness clustering on high-criticality assets — we recommend the free 48-hour diagnostic to see what signal is recoverable from your current data.
- Does the detector auto-commit an override correction to our CMMS?
- No. Sovel produces a reviewer-governed draft. Every governed knowledge write passes through a human reviewer with reason-coded approval, edit, or rejection. The audit trail is immutable by construction.
- We run retro-commissioning every few years — why do overrides keep coming back?
- Retro-Cx tunes the controls but does not capture the operator reasoning that produced the override in the first place. MBCx studies show the drift returns within months because the underlying institutional knowledge was never captured durably. Sovel is the knowledge layer that makes RCx wins persist.
- How does this connect to the MBCx literature?
- Monitoring-based commissioning (MBCx) research (FEMP and adjacent) has named operator-overridden control settings as a major waste category for twenty years. The detection problem was framed a long time ago. The open problem has been the capture-and-governance layer that converts a detected override into a durable, auditable organizational artifact. That is the slot Sovel occupies.
Where this method came from
The override archetype surfaced out of two converging inputs: the 2026-04-21 Notre Dame utility-plant conversation with Paul Kempf (who described “a lot of human intervention, but they don’t ___, and there’s costs”), and a 43-source NotebookLM synthesis of Continuous Commissioning and MBCx literature which independently named operator overrides as a leading cause of persistent drift.
The detection rubric is not novel in origin — it has been described in academic and FEMP literature for two decades. What has been missing is the product layer: a reviewer-governed workflow that turns a detected override into a durable knowledge artifact tied to the specific asset, the specific operator, and the specific context of the next technician who will work that pump.
Where this method is going
The first pilot evaluation runs against a mid-market utility plant with an established eLogbook and a CMMS that predates any FDD deployment. The expected result is a ranked list of 10–25 recurring-override issues the plant did not know about, each paired with a named expert to capture from and a pre-populated point-of-decision capture prompt for the next WO close on that asset.
Test the method.
Run the diagnostic on your own work-order export. 48-hour turnaround, no data migration, no seat licenses.