Why this exists

Every API has at least one contract that's drifted from reality. A field marked required that's been optional in practice for two years. An SLA promising 99.9% that the dashboards quietly disprove. A response shape the docs swear by and the logs contradict on every third request. Nobody set out to lie. The code shipped, the spec stayed, and the gap grew. Consumers integrate against the lie. The bug surfaces years later on a junior team that trusted the spec, because trusting the spec is what specs are for.

What you get back

  • A ranked list of lying contracts — worst drift first, sorted by blast radius.
  • Side-by-side spec-vs-reality evidence: the schema line, the production response, the metric that contradicts the SLA.
  • A triage of which lies are dangerous and which are cosmetic — so you fix the ones that bite, not the ones that itch.
  • A corrected spec ready to ship. Not a TODO. The actual diff.
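The core of the spec-vs-reality comparison above can be sketched in a few lines. This is a minimal illustration, not an existing tool: it assumes you already have the spec's list of required fields and a sample of production responses in hand, and every name in it (`find_drifted_required_fields`, the sample data) is hypothetical.

```python
# Illustrative sketch: find fields the spec marks required but production
# responses omit, ranked worst drift first. All names are hypothetical.

def find_drifted_required_fields(spec_required, observed_responses):
    """Return {field: fraction of responses missing it} for every
    spec-required field that any observed response omits."""
    total = len(observed_responses)
    drift = {}
    for field in spec_required:
        missing = sum(1 for resp in observed_responses if field not in resp)
        if missing:
            drift[field] = missing / total
    # Sort so the most frequently missing field tops the list.
    return dict(sorted(drift.items(), key=lambda kv: -kv[1]))

# Hypothetical spec contract and a small sample of logged responses.
spec_required = ["id", "email", "created_at"]
observed = [
    {"id": 1, "email": "a@example.com", "created_at": "2024-01-01"},
    {"id": 2, "created_at": "2024-01-02"},  # email missing
    {"id": 3},                              # email and created_at missing
]
print(find_drifted_required_fields(spec_required, observed))
```

In practice the "observed responses" side would come from sampled logs or a traffic tap, and the ranking weight would fold in call volume per endpoint (the blast radius), not just the miss rate.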

When to reach for this pattern

Before a major version cut, when breaking changes are cheap and lies are about to get baked in for another decade. When integrating with a service for the first time and you need to know which parts of the spec to actually trust. During due diligence on a partnership or acquisition, where the gap between contract and behavior is the gap between the deal you signed and the system you bought. And before promising an SLA you can't verify — find out what you're actually delivering before you put a number on it in writing.