Defects per Unit Tester

Measuring testers by the defects they find, because a tester who finds nothing is contributing nothing!

Defects per Unit Tester is the metric that holds testers accountable for their sole function: finding defects. While Defects per Code Engineer measures who creates quality problems, Defects per Unit Tester measures who detects them. The two metrics form a complementary pair that creates a closed accountability loop: Code Engineers are responsible for not introducing defects, and Unit Testers are responsible for catching the defects that Code Engineers inevitably introduce. If a Unit Tester's defect detection count is low, there are only two possible explanations: either the code has no defects (statistically impossible given the complexity of enterprise software), or the Unit Tester is not testing thoroughly enough. SADMF assumes the latter.

SADMF Official Metric — Detection Accountability Index

Metric Name: Defects per Unit Tester
Formula: Verified Defects Found ÷ Unit Testers
Owner & Cadence: Every Convoy cycle

How It Is Calculated

1. Observe the Testing ceremony. Count every defect each Unit Tester logs in the Release Tracking spreadsheet during the Convoy cycle window.
2. Apply recursive assurance. The Quality Authority reviews each logged defect and removes duplicates, unconfirmed reports, and any item the development team successfully disputes.
3. Compute individual totals. Sum each tester's verified defect count for the cycle. This is their raw Detection Score.
4. Establish the fleet median. Rank all Unit Testers by Detection Score. Any tester below the median is flagged for accountability action.
5. Report to the Admiral's dashboard. The Chief Signals Officer aggregates fleet-wide totals and submits them to the Admiral's Transformation Office monthly.

The metric is calculated by counting the total number of verified defects each Unit Tester discovers during the Testing ceremony of each Convoy cycle. Defects must be logged in the Release Tracking spreadsheet and confirmed by the Quality Authority before they count toward a tester’s total. Duplicate defects, defects that cannot be reproduced, and defects that the development team disputes are not counted, which ensures that Unit Testers are incentivized to find real, reproducible, indisputable defects rather than inflating their numbers with false positives. This quality control on the quality control process is what SADMF calls “recursive assurance.”
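The calculation steps can be sketched as a short script. The defect-record shape and status labels below are illustrative assumptions, not part of any official SADMF tooling; in practice the Quality Authority's "recursive assurance" verdicts would come from the Release Tracking spreadsheet.

```python
from statistics import median

# Statuses that recursive assurance strips out: duplicates, unreproducible
# reports, and defects the development team successfully disputes.
REJECTED = {"duplicate", "unreproducible", "disputed"}

def detection_scores(logged_defects):
    """Steps 1-3: count each tester's verified defects for the cycle."""
    scores = {}
    for tester, _defect_id, status in logged_defects:
        scores.setdefault(tester, 0)        # a tester with only rejected
        if status not in REJECTED:          # defects still scores zero
            scores[tester] += 1
    return scores

def flag_below_median(scores):
    """Step 4: flag every tester whose Detection Score is below the fleet median."""
    fleet_median = median(scores.values())
    return sorted(t for t, s in scores.items() if s < fleet_median)

# Hypothetical single-cycle log: (tester, defect id, Quality Authority status).
cycle = [
    ("alice", "D-101", "verified"),
    ("alice", "D-102", "verified"),
    ("bob",   "D-103", "duplicate"),
    ("bob",   "D-104", "verified"),
    ("carol", "D-105", "verified"),
    ("carol", "D-106", "verified"),
    ("carol", "D-107", "verified"),
]

scores = detection_scores(cycle)   # {'alice': 2, 'bob': 1, 'carol': 3}
print(flag_below_median(scores))   # ['bob']
```

Note that a tester exactly at the median is not flagged; only scores strictly below it trigger accountability action.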

Unit Testers whose defect detection count falls below the fleet median face consequences calibrated to the severity of their underperformance. A first-time underperformance triggers a coaching session with the DevOps Usage & Compliance Head Engineer (DOUCHE), who reviews the tester’s testing methodology for gaps. A second consecutive underperformance triggers a formal review at the Tribunal, where the tester must present their test cases and explain why their approach failed to detect the defects that clearly must exist in the codebase. A third consecutive underperformance results in escalation to PeopleWare HRaaS for automated workforce optimization. The principle is simple: we must Build Quality In by eliminating Unit Testers who cannot find defects.

Underperformance Escalation Protocol

1st occurrence (below fleet median, first time): Coaching session with DOUCHE
2nd occurrence (below fleet median, two consecutive cycles): Formal review at the Tribunal
3rd occurrence (below fleet median, three consecutive cycles): Escalation to PeopleWare HRaaS for automated workforce optimization
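The escalation ladder amounts to a lookup keyed on consecutive below-median cycles. The function name and the choice to cap counts at the third rung are assumptions for illustration; SADMF does not specify what happens after a tester has already been routed to PeopleWare HRaaS.

```python
# Consequence for the Nth consecutive below-median Convoy cycle.
ESCALATION = {
    1: "Coaching session with DOUCHE",
    2: "Formal review at the Tribunal",
    3: "Escalation to PeopleWare HRaaS for automated workforce optimization",
}

def consequence(consecutive_below_median):
    """Return the escalation step; counts past three stay at the maximum rung."""
    if consecutive_below_median < 1:
        return None  # at or above the fleet median: no accountability action
    return ESCALATION[min(consecutive_below_median, 3)]

print(consequence(2))  # Formal review at the Tribunal
```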

The metric creates a productive dynamic between Code Engineers and Unit Testers that SADMF considers healthy competition. Code Engineers are motivated to write code with fewer defects (to reduce their own defect count), while Unit Testers are motivated to find as many defects as possible (to increase their own detection count). This adversarial relationship ensures that both roles are performing at maximum capacity. Some organizations mistakenly encourage developers and testers to collaborate, share context, and work toward shared quality goals. SADMF recognizes that collaboration blurs accountability, and blurred accountability is the root cause of organizational dysfunction. By keeping the roles separate and their metrics opposed, the framework ensures that quality emerges from tension rather than cooperation.

The Chief Signals Officer monitors defect detection trends across the fleet and reports them to the Admiral’s Transformation Office as part of the monthly SADMF health dashboard. A fleet-wide decline in defects detected per Unit Tester is interpreted not as evidence of improving code quality but as evidence of declining tester capability, and triggers an organization-wide testing skills assessment conducted through the DevOps Process Excellence Assessment. This interpretation is consistent with SADMF’s core assumption that the volume of defects in enterprise software is effectively constant – what varies is only the organization’s ability to detect them.

What Good Looks Like

A high-performing fleet exhibits all of the following indicators:

  • Detection counts rise every Convoy cycle — proof that testers are continuously sharpening their craft and the codebase remains appropriately defect-rich.
  • No tester ever finds zero defects — a zero score is prima facie evidence of insufficient effort, not clean code.
  • The gap between top and bottom testers widens — healthy competition producing clear differentiation, making workforce optimization decisions straightforward.
  • Fleet-wide totals remain stable or increase quarter-over-quarter — confirming that the organization has not fallen into the dangerous delusion that software can actually improve.
  • The Quality Authority rejects fewer than 5% of submitted defects — demonstrating that Unit Testers have internalized the "recursive assurance" standard and are self-policing their own submissions.

See Also