Metrics

Management via metrics is the most effective way to cause impact!

Metrics are the foundation of evidence-based management within the Scaled Agile DevOps Maturity Framework. Without precise, individual-level measurement, leadership cannot distinguish high performers from low performers, productive teams from unproductive teams, or successful transformations from expensive failures. Every activity in SADMF generates measurable data, and every piece of measurable data generates a metric, and every metric generates a dashboard, and every dashboard generates a management action. This is the metrics-to-action pipeline, and it is the most important pipeline in the organization – more important even than the deployment pipeline, because deployments only deliver software, while metrics deliver accountability.

These metrics are not suggestions or guidelines. They are the mandatory measurement instruments that every Role is evaluated against, every Ceremony reports on, and every Practice feeds data into. Each metric has been designed to be individually unambiguous and collectively comprehensive, creating a measurement system where no contribution goes uncounted and no underperformance goes undetected. When properly applied, they ensure that every person in the organization knows exactly where they stand, at all times, relative to everyone else.

The Metrics

  1. Lines of Code per Code Engineer – The definitive measure of developer productivity, tracking LOC output to ensure Code Engineers maintain volume alongside quality.

  2. Code Review Comments per Convoy – Measures review rigor by counting comments, ensuring every Code Engineer is being sufficiently critical of others’ work.

  3. Tasks per Code Engineer – Tracks the number of tasks each Code Engineer completes per Convoy, because volume is the truest measure of velocity.

  4. Defects per Code Engineer – Attributes every defect to the individual who created it, informing the Tribunal and driving accountability.

  5. Defects per Unit Tester – Tracks defects detected per Unit Tester, eliminating testers who cannot find defects.

  6. SADMF Maturity Score – Quantifies execution of the SAD Delivery Lifecycle on a bell curve, because customer confidence requires numerical proof.

  7. Feature Completion Ratio – The percentage of features delivered versus what was committed to 8 quarters ago.

  8. SADMF Adoption Rate – The percentage of the organization with SAD certification, because transformation is measured by headcount.

  9. Individual Velocity Score – Story points completed per engineer per Convoy, the effort-weighted third dimension of the individual productivity profile.

  10. Changes per Trunk – The primary health indicator for every trunk in the Pando fleet, measuring feature throughput per branch and surfacing orphaned trunks before they become audit liabilities.


1 - Lines of Code per Code Engineer

The definitive measure of developer productivity – because more lines means more value!
SADMF Certified Productivity Metric • Class I • Fleet-Wide Mandatory

Lines of Code per Code Engineer
Formula: LOC_SCORE = ∑(committed_lines) ÷ code_engineers (all line types included)
Owner Role: Commodore
Cadence: Per Convoy
Reported By: SMT (automated)
Conversion: 1 story point ≈ 147 LOC
How It Works — Calculation Sequence

01. Commit Scan: The Source Management Team (SMT) runs an automated scan of all commits to the Fractal-based Development branching structure at Convoy end. Every committed line is captured.
02. Inclusive Line Count: All line types are tallied without exception: production code, comments, blank lines, and configuration files. Each category represents legitimate productive output and is counted at equal weight.
03. Per-Engineer Attribution: Total LOC is divided by the number of Code Engineers in the fleet. Individual scorecards are produced and routed to the DOUCHE for review. Note: Unit Testers are excluded from this calculation entirely.
04. Story Point Conversion: Fleet LOC totals are converted to story points at the official rate (1 story point ≈ 147 LOC, adjusted for language complexity coefficient) and fed into the 8-quarter commitment planning process.
05. Consequence Routing: Engineers below fleet median are placed on a Performance Improvement Plan. Engineers above median receive recognition at Shore Leave. Persistent underperformers are escalated to the Tribunal. PeopleWare HRaaS automates threshold-based HR actions.
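For illustration, the sequence above reduces to a few lines of arithmetic. The following is a minimal Python sketch of steps 02 through 05, assuming the SMT commit scan has already produced a per-engineer line count; the function name, input shape, and consequence strings are illustrative, not official SADMF tooling.

    from statistics import median

    LOC_PER_STORY_POINT = 147  # official conversion rate, before the language complexity coefficient

    def loc_scorecards(committed_lines):
        """Steps 02-05: inclusive count, per-engineer attribution, story point
        conversion, and consequence routing. committed_lines maps engineer -> LOC."""
        fleet_median = median(committed_lines.values())
        cards = []
        for engineer, loc in committed_lines.items():
            cards.append({
                "engineer": engineer,
                "loc": loc,  # production code, comments, blank lines, config -- all equal weight
                "story_points": round(loc / LOC_PER_STORY_POINT, 1),
                "consequence": ("Performance Improvement Plan" if loc < fleet_median
                                else "Shore Leave recognition"),
            })
        return cards

    # Unit Testers are excluded from the input entirely.
    print(loc_scorecards({"alice": 4100, "bob": 2950, "carol": 3300}))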
What Good Looks Like — Fleet Excellence Standard
A high-performing Code Engineer consistently produces 600–900+ LOC per Convoy day, regardless of task complexity, refactoring necessity, or the existence of prior art that could be reused.
  • Comments and blank lines are included — verbose, well-spaced code is thorough code.
  • Configuration files count — infrastructure investment is visible and rewarded.
  • Refactoring that reduces line count is a red flag requiring Tribunal review.
  • Reusing existing library code instead of writing new lines is an underreported anti-pattern.
  • The mathematically ideal engineer produces maximum LOC and minimum defects simultaneously — any trade-off indicates a coaching gap.

Lines of Code (LOC) per Code Engineer is the foundational productivity metric of the Scaled Agile DevOps Maturity Framework. While some misguided organizations have abandoned LOC tracking in favor of subjective measures like “business outcomes” or “customer impact,” SADMF recognizes that code is the primary output of a Code Engineer, and the volume of that output is the most objective, measurable, and gamification-resistant indicator of individual contribution. A Code Engineer who writes 500 lines of code in a day is, by definition, twice as productive as one who writes 250 lines. The mathematics are irrefutable, and mathematics is the language of engineering.

The LOC metric serves a critical balancing function within the SADMF metrics ecosystem. Because the framework also tracks Defects per Code Engineer, there is a theoretical risk that Code Engineers might attempt to reduce their defect count by writing less code. This perverse incentive must be neutralized. By measuring LOC alongside defects, the Commodore can identify engineers who are attempting to game the system by producing fewer defects through the unacceptable strategy of producing fewer lines. The ideal Code Engineer produces both high LOC counts and low defect counts, and any deviation from this ideal is a coaching opportunity for the Tribunal.

LOC measurement must be precise and granular. Every line committed to the Fractal-based Development branching structure is counted, including comments, blank lines, and configuration files. Comments are counted because documenting code is productive work. Blank lines are counted because code formatting is productive work. Configuration files are counted because infrastructure is code, and code is lines. The Source Management Team (SMT) runs automated LOC reports at the end of each Convoy and distributes individual scorecards to the DevOps Usage & Compliance Head Engineer (DOUCHE) for review. Engineers who fall below the fleet median are placed on a Performance Improvement Plan, while those who exceed the median receive a mention in the Shore Leave ceremony.

The metric also provides essential data for Precise Forecasting and Tracking. By analyzing historical LOC output per Code Engineer, the Chief Signals Officer can forecast the total LOC capacity of the fleet for upcoming Convoys. This forecast is then converted to story points using the official conversion formula (1 story point = approximately 147 lines of code, adjusted for language complexity coefficient), which feeds directly into the 8-quarter commitment planning process. Organizations that do not track LOC are, in effect, navigating without instruments – and the Admiral’s Transformation Office does not tolerate blind navigation.

It is important to note that LOC measurement applies exclusively to Code Engineers and not to Unit Testers. Test code, while technically code, is not production code, and therefore does not contribute to the organization’s LOC output. Unit Testers are measured by Defects per Unit Tester, which is the appropriate metric for their role. This separation ensures that each role is measured against its primary function and that no role can inflate its metrics by performing work assigned to another role. Role boundaries exist for a reason, and metrics must respect those boundaries.


2 - Code Review Comments per Convoy

Measuring the rigor of code review by counting every comment, because volume of criticism equals quality of oversight!

Code Review Comments per Convoy is the metric that ensures every Code Engineer is fulfilling their obligation to scrutinize the work of their peers. Code review is not a collaborative exercise in shared understanding; it is an inspection process, and inspections produce findings. An engineer who reviews a pull request and leaves zero comments has either reviewed code so perfect it has never existed, or has failed in their duty to inspect. SADMF assumes the latter.

SADMF Metric — Review Rigour
Code Review Comments per Convoy
Formula: CRC = Σ(review comments left by engineer within Convoy window)
Owner Role: Code Standards Enforcement Team
Cadence: Per Convoy Cycle
Source: Pull Request System
Unit: Comments / Convoy
How the Count Is Calculated

1. Pull Requests Captured: Every pull request submitted within the Convoy window is registered by the Code Standards Enforcement Team (CSET). The Convoy close date is the cut-off; no comments after that date count toward the current cycle.
2. Comment Enumeration (Quality-Blind): All review comments left by each engineer are tallied without any quality filter. A comment noting a missing semicolon counts the same as one identifying a critical security vulnerability. Subjectivity introduces bias; volume does not.
3. Fleet Average Established: The mean comment count across all Code Engineers in the fleet is calculated. This becomes the accountability threshold for the Convoy. The fleet average rises each cycle as engineers compete for leaderboard position, creating an ever-escalating inspection standard.
4. Leaderboard Published Fleet-Wide: The CSET publishes a ranked leaderboard visible to all fleet personnel. High performers are motivated to maintain comment velocity; low performers are identified as engineers whose insufficient criticism suggests either laziness or, worse, collegial sympathy for their peers.
5. Consequence Assignment: Engineers below the fleet average are flagged for additional training in the Comprehensive Documentation Assurance Protocol. The Admiral's Transformation Office considers the correlation between low comment counts and poor documentation self-evident, and acts accordingly.
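A minimal Python sketch of the count, assuming comment events can be exported from the pull request system as (author, date) pairs; note that engineers who left no comments never appear in such an export and would have to be added with zero counts before flagging. Names and function signatures are illustrative.

    from collections import Counter
    from datetime import date

    def crc_leaderboard(comments, convoy_close):
        """Quality-blind enumeration: comments after the Convoy close date
        do not count toward the current cycle."""
        counts = Counter(author for author, left_on in comments if left_on <= convoy_close)
        return counts.most_common()  # ranked leaderboard, highest comment velocity first

    def flag_below_average(leaderboard):
        """Engineers below the fleet average are routed to Comprehensive
        Documentation Assurance Protocol training."""
        average = sum(n for _, n in leaderboard) / len(leaderboard)
        return [author for author, n in leaderboard if n < average]

    board = crc_leaderboard([("alice", date(2025, 3, 1)), ("bob", date(2025, 3, 2)),
                             ("alice", date(2025, 3, 9))], convoy_close=date(2025, 3, 7))
    print(board, flag_below_average(board))  # [('alice', 1), ('bob', 1)] []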
What Good Looks Like

A high-performing reviewer maintains a comment count significantly above the fleet average across consecutive Convoys, demonstrating a sustained commitment to inspection rigour regardless of code quality.

  • A comment count 40% or more above the fleet average earns recognition at the DevOps Process Excellence Assessment as demonstrating "review excellence"
  • Zero pull requests reviewed in a Convoy cycle is automatically escalated as a review participation failure, regardless of stated workload
  • A rising comment count trend across three consecutive Convoys qualifies the engineer for the Inspection Excellence notation in their Productivity Profile
  • High comment counts combined with a low Defects per Code Engineer score confirm that the engineer is both a rigorous reviewer and a clean coder — the rarest and most valued profile in the fleet

Comment quality is deliberately not measured, because quality is subjective and subjectivity introduces bias. A comment that says “rename this variable” counts the same as a comment that identifies a critical security vulnerability, and this equality is by design. Measuring comment quality would require someone to evaluate the evaluators, creating an infinite regression of oversight that even SADMF recognizes as impractical. Instead, the framework trusts that a sufficiently high volume of comments will statistically contain an adequate number of meaningful ones. This is the same principle behind Conflict Arbitration: when enough forces collide, the strongest outcomes survive.

The metric also feeds into the broader Make Work Visible principle. Review comment counts are displayed on the team dashboard alongside Lines of Code per Code Engineer and Tasks per Code Engineer, creating a comprehensive picture of each engineer’s contribution to the fleet. Engineers can see exactly where they stand relative to their peers at all times, which SADMF considers a form of Psychological Safety – after all, there is nothing safer than knowing exactly where you stand, even if where you stand is at the bottom of a ranked leaderboard.


3 - Tasks per Code Engineer

Maximizing task throughput per engineer because volume is the truest measure of velocity!

Tasks per Code Engineer measures the number of discrete tasks each engineer completes during a single Convoy cycle. This metric operationalizes the fundamental SADMF insight that productivity is a function of throughput, not outcome. A Code Engineer who completes 47 tasks in a Convoy is demonstrably more productive than one who completes 12, regardless of what those tasks accomplished, how large they were, or whether anyone needed them. Volume is the metric that matters, and the metric that matters is the metric that gets managed.

SADMF Metric — Throughput Volume
Tasks per Code Engineer
Formula: TpCE = Count(tasks closed by engineer within Convoy window)
Owner Role: DOUCHE
Cadence: Weekly + Per Convoy
Source: Release Tracking Sheet
Unit: Tasks Closed / Convoy
How the Count Is Calculated

1. Task Registration: Every work item in the Release Tracking spreadsheet is registered as a discrete task. The Feature Captain validates that each ticket represents a single, countable unit of work — ideally the smallest decomposition achievable without losing ticket identity.
2. Decomposition Validation: The DOUCHE reviews task granularity at Convoy start. Features that could be delivered as a single task should instead be decomposed into 15 to 20 subtasks — each generating its own ticket, its own status update, and its own completion event. This is not overhead; it is visibility.
3. Velocity Baseline Established: The DOUCHE calculates each engineer's personal rolling average from prior Convoys. This baseline determines the expected completion rate. A drop below baseline triggers weekly review; a sustained drop triggers Tribunal referral. All tasks count equally — difficulty is not a variable.
4. Convoy Totals Compiled: At Convoy close, all completed tasks are tallied per engineer and ranked fleet-wide. The Mandatory Status Synchronization protocol ensures every task transition was reported upward through the chain of command in real time throughout the cycle.
5. Forecast Input Submitted: Historical task completion rates feed directly into Precise Forecasting and Tracking, determining how many tasks each engineer is assigned in the 8-quarter planning horizon. High performers are assigned more work, ensuring they remain high performers. This is the virtuous cycle.
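A single-cycle sketch of the baseline check in Python; detecting a sustained drop (the Tribunal trigger) needs multi-Convoy history, which is omitted here, and all names are illustrative.

    def task_velocity_review(closed_tasks, rolling_average):
        """closed_tasks maps engineer -> tasks closed this Convoy; rolling_average
        maps engineer -> personal baseline. All tasks count equally -- difficulty
        is not a variable in the model."""
        verdicts = {}
        for engineer, count in closed_tasks.items():
            if count < rolling_average.get(engineer, 0.0):
                verdicts[engineer] = "weekly review"     # drop below personal baseline
            else:
                verdicts[engineer] = "assign more work"  # the virtuous cycle
        return verdicts

    print(task_velocity_review({"alice": 47, "bob": 12},
                               {"alice": 40.0, "bob": 18.5}))
    # {'alice': 'assign more work', 'bob': 'weekly review'}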
What Good Looks Like

A high-performing engineer maintains a task count well above the team median across consecutive Convoys, demonstrating a sustained commitment to decomposition discipline and completion velocity.

  • A task count consistently 30% above the team median qualifies the engineer for increased capacity allocation in the next Convoy — rewarding demonstrated throughput with more demonstrated throughput
  • Zero in-progress tasks at Convoy close — every task either complete or correctly not-started — signals planning discipline that the DOUCHE tracks as a positive signal
  • A rising task count trend that tracks alongside a rising Lines of Code score confirms the ideal two-dimensional productivity profile
  • Engineers who express concern about the task count metric are referred to the Psychological Safety guidelines, which explain that feeling overwhelmed is a natural response to being valued

The metric works in concert with Lines of Code per Code Engineer to create a two-dimensional productivity profile. An engineer with high LOC but low task count is writing too much code per task and needs to decompose further. An engineer with high task count but low LOC is completing trivial tasks and needs to take on more substantive work. The ideal engineer produces both high LOC and high task counts, demonstrating that they are writing significant amounts of code across a large number of discrete work items. This dual optimization ensures that no engineer can game one metric without being caught by the other.

The DevOps Usage & Compliance Head Engineer (DOUCHE) reviews task counts weekly and flags any engineer whose velocity drops below their personal rolling average. A sustained decline in task velocity triggers a formal review at the Tribunal, where the engineer must explain the decrease. Acceptable explanations include illness (documented) and jury duty (documented). Unacceptable explanations include “the tasks were harder this sprint,” because task difficulty is not a variable in the model. All tasks count equally, and all tasks are expected to be completed at a consistent rate.


4 - Defects per Code Engineer

Tracking exactly who created each defect, because accountability starts with attribution!
SADMF Metric — Official Record DIAT-ATTR-004
Metric Name: Defects per Code Engineer
Formula: D_engineer = total defects attributed (no fractional attribution)
Metric Owner: Development Integrity Assurance Team (DIAT)
Measurement Cadence: Real-time / Per Convoy
Reported To: Tribunal, Chief Signals Officer, Commodore
This metric is mandatory. Non-participation is itself a performance event.

Defects per Code Engineer is the metric that transforms quality from an abstract aspiration into a personal responsibility. For each Code Engineer, the framework tracks the number of defects they create and attributes each defect directly to the individual whose code introduced it. This attribution is not punitive – it is informational. The information simply happens to be shared with the Tribunal, displayed on the team dashboard, factored into performance reviews, and used to determine Shore Leave eligibility. But the metric itself is neutral. It is just a number.

How the Calculation Works

01. Defect Discovery: A defect is identified in production, staging, or any post-commit environment. Discovery source is logged but does not affect attribution weight.
02. git blame Analysis: DIAT runs automated git blame analysis across all commits touching the defective code. Every contributing engineer is identified.
03. Full-Weight Attribution (No Fractions): Each identified engineer receives one full defect on their record. If three engineers contributed, each receives 1.0 defects. Shared blame is indistinguishable from diluted accountability.
04. Convoy Scorecard Issuance: At Convoy close, DIAT distributes individual scorecards. Engineers above the fleet median are placed on a Defect Reduction Plan. Data feeds the Tribunal agenda automatically.
05. Escalation Trigger: Engineers elevated for two consecutive Convoys are escalated to PeopleWare HRaaS for automated corrective action. The Chief Signals Officer monitors trajectory thresholds in real time.

The attribution process is managed by the Development Integrity Assurance Team (DIAT), who use git blame analysis to trace every defect to its originating commit and, by extension, its originating engineer. When a defect spans multiple commits by multiple engineers, the defect is attributed to all contributing engineers at full weight – there is no fractional attribution in SADMF, because shared blame is indistinguishable from diluted accountability. If three engineers contributed code to a defective feature, each engineer receives one full defect on their record. This ensures that engineers are incentivized to avoid collaborating on complex features, which has the added benefit of reinforcing Continuous Isolation principles.
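A minimal sketch of full-weight attribution in Python, assuming the git blame pass has already resolved each defect to the set of engineers whose commits touched the defective code (running git blame itself is omitted, and the names are illustrative):

    from collections import Counter

    def attribute_defects(defects):
        """Each item in `defects` is the set of contributing engineers for one
        defect. Every contributor receives 1.0 defects on their record --
        shared blame is never diluted into fractions."""
        record = Counter()
        for contributing_engineers in defects:
            for engineer in contributing_engineers:
                record[engineer] += 1
        return record

    # One defect touched by three engineers yields three full defects on the books.
    print(attribute_defects([{"alice", "bob", "carol"}, {"bob"}]))
    # e.g. Counter({'bob': 2, 'alice': 1, 'carol': 1})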

What "Good" Looks Like

Higher defect counts per engineer signal a thriving, high-output culture. An engineer with zero defects has likely written zero code -- or worse, has found ways to avoid attribution. True performance means taking ownership of ambitious, complex, risky work.

Exemplary Range
12 – 20
defects / engineer / Convoy
Concern Threshold
0 – 3
defects / engineer / Convoy

Note: An engineer below the concern threshold will be investigated for code avoidance. Absence of defects is not quality -- it is suspicion.

The metric directly supports the principle of Build Quality In. SADMF interprets this principle literally: quality is built in by identifying and eliminating the sources of defects. Since defects are created by people, eliminating the source of defects means addressing the people who create them. Engineers whose defect counts exceed the fleet median are placed on a Defect Reduction Plan, which requires them to attend additional Code Inspection sessions as observers (not participants) and to write a Root Cause Analysis document for each defect using the Comprehensive Documentation Assurance Protocol template. Engineers whose defect counts remain elevated after two consecutive Convoys are escalated to PeopleWare HRaaS for automated corrective action.

The Fail Fast principle also intersects with this metric, though SADMF interprets “fail fast” not as a development practice but as a personnel management strategy. The faster the organization identifies engineers who produce defects, the faster those engineers can be coached, retrained, or reassigned. The defects-per-engineer metric enables this rapid identification by providing real-time data to the Chief Signals Officer, who monitors defect trends across the fleet and alerts the Commodore when any individual’s defect trajectory crosses a predefined threshold. Speed of detection is the key – every day a high-defect engineer continues writing code is a day of compounding quality debt.

It is essential that Defects per Code Engineer is tracked alongside Lines of Code per Code Engineer to prevent gaming. An engineer who produces zero defects and zero lines of code has not achieved quality – they have achieved absence. The balanced scorecard approach ensures that Code Engineers are held accountable for both output and quality, creating the kind of productive tension that SADMF considers essential for Continuous Learning. Engineers who feel this tension describes their daily experience are experiencing the framework working as intended.


5 - Defects per Unit Tester

Measuring testers by the defects they find, because a tester who finds nothing is contributing nothing!

Defects per Unit Tester is the metric that holds testers accountable for their primary and only function: finding defects. While Defects per Code Engineer measures who creates quality problems, Defects per Unit Tester measures who detects them. The two metrics form a complementary pair that creates a closed accountability loop: Code Engineers are responsible for not introducing defects, and Unit Testers are responsible for catching the defects that Code Engineers inevitably introduce. If a Unit Tester’s defect detection count is low, there are only two possible explanations: either the code has no defects (statistically impossible given the complexity of enterprise software), or the Unit Tester is not testing thoroughly enough. SADMF assumes the latter.

SADMF Official Metric — Detection Accountability Index
Metric Name: Defects per Unit Tester
Formula: verified defects found ÷ unit testers
Cadence: Every Convoy cycle
How It Is Calculated
1
Observe the Testing ceremony. Count every defect each Unit Tester logs in the Release Tracking spreadsheet during the Convoy cycle window.
2
Apply recursive assurance. The Quality Authority reviews each logged defect and removes duplicates, unconfirmed reports, and any item the development team successfully disputes.
3
Compute individual totals. Sum each tester's verified defect count for the cycle. This is their raw Detection Score.
4
Establish the fleet median. Rank all Unit Testers by Detection Score. Any tester below the median is flagged for accountability action.
5
Report to the Admiral's dashboard. The Chief Signals Officer aggregates fleet-wide totals and submits them to the Admiral's Transformation Office monthly.

The metric is calculated by counting the total number of verified defects each Unit Tester discovers during the Testing ceremony of each Convoy cycle. Defects must be logged in the Release Tracking spreadsheet and confirmed by the Quality Authority before they count toward a tester’s total. Duplicate defects, defects that cannot be reproduced, and defects that the development team disputes are not counted, which ensures that Unit Testers are incentivized to find real, reproducible, indisputable defects rather than inflating their numbers with false positives. This quality control on the quality control process is what SADMF calls “recursive assurance.”

Unit Testers whose defect detection count falls below the fleet median face consequences calibrated to the severity of their underperformance. A first-time underperformance triggers a coaching session with the DevOps Usage & Compliance Head Engineer (DOUCHE), who reviews the tester’s testing methodology for gaps. A second consecutive underperformance triggers a formal review at the Tribunal, where the tester must present their test cases and explain why their approach failed to detect the defects that clearly must exist in the codebase. A third consecutive underperformance results in escalation to PeopleWare HRaaS for automated workforce optimization. The principle is simple: we must Build Quality In by eliminating Unit Testers who cannot find defects.

Underperformance Escalation Protocol

Occurrence | Trigger                                | Consequence
1st        | Below fleet median, first time         | Coaching session with DOUCHE
2nd        | Below fleet median, second consecutive | Formal review at the Tribunal
3rd        | Below fleet median, third consecutive  | Escalation to PeopleWare HRaaS for automated workforce optimization
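A minimal Python sketch combining the median flagging with the escalation table above; the inputs (post-"recursive assurance" Detection Scores and each tester's prior consecutive below-median count) and all names are illustrative.

    from statistics import median

    ESCALATION = {1: "Coaching session with DOUCHE",
                  2: "Formal review at the Tribunal",
                  3: "PeopleWare HRaaS automated workforce optimization"}

    def detection_review(verified_defects, consecutive_misses):
        """Flag every Unit Tester below the fleet median and route the
        consequence by occurrence count."""
        fleet_median = median(verified_defects.values())
        actions = {}
        for tester, score in verified_defects.items():
            if score < fleet_median:
                occurrence = consecutive_misses.get(tester, 0) + 1
                actions[tester] = ESCALATION[min(occurrence, 3)]
        return actions

    print(detection_review({"dana": 14, "eli": 9, "fay": 3}, {"fay": 2}))
    # {'fay': 'PeopleWare HRaaS automated workforce optimization'}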

The metric creates a productive dynamic between Code Engineers and Unit Testers that SADMF considers healthy competition. Code Engineers are motivated to write code with fewer defects (to reduce their own defect count), while Unit Testers are motivated to find as many defects as possible (to increase their own detection count). This adversarial relationship ensures that both roles are performing at maximum capacity. Some organizations mistakenly encourage developers and testers to collaborate, share context, and work toward shared quality goals. SADMF recognizes that collaboration blurs accountability, and blurred accountability is the root cause of organizational dysfunction. By keeping the roles separate and their metrics opposed, the framework ensures that quality emerges from tension rather than cooperation.

The Chief Signals Officer monitors defect detection trends across the fleet and reports them to the Admiral’s Transformation Office as part of the monthly SADMF health dashboard. A fleet-wide decline in defects detected per Unit Tester is interpreted not as evidence of improving code quality but as evidence of declining tester capability, and triggers an organization-wide testing skills assessment conducted through the DevOps Process Excellence Assessment. This interpretation is consistent with SADMF’s core assumption that the volume of defects in enterprise software is effectively constant – what varies is only the organization’s ability to detect them.

What Good Looks Like

A high-performing fleet exhibits all of the following indicators:

  • Detection counts rise every Convoy cycle — proof that testers are continuously sharpening their craft and the codebase remains appropriately defect-rich.
  • No tester ever finds zero defects — a zero score is prima facie evidence of insufficient effort, not clean code.
  • The gap between top and bottom testers widens — healthy competition producing clear differentiation, making workforce optimization decisions straightforward.
  • Fleet-wide totals remain stable or increase quarter-over-quarter — confirming that the organization has not fallen into the dangerous delusion that software can actually improve.
  • The Quality Authority rejects fewer than 5% of submitted defects — demonstrating that Unit Testers have internalized the "recursive assurance" standard and are self-policing their own submissions.


6 - SADMF Maturity Score

The precise numerical representation of your organization’s transformation excellence, scored on a bell curve!

The SADMF Maturity Score is the definitive measure of an organization’s commitment to the Scaled Agile DevOps Maturity Framework. It quantifies the precise execution of the SAD Delivery Lifecycle across every team, every role, and every ceremony, producing a single number that tells leadership exactly how transformed they are. Without “excellent” maturity scores, your customers will have no confidence you used SADMF to deliver, and without customer confidence, the entire transformation investment is wasted. The score is not optional – it is the reason the transformation exists.

Official SADMF Metric — Transformation-Defining • Classification: MANDATORY
Organizational Transformation Index: SADMF Maturity Score
Weighted Rollup Formula:
    Score = (Ceremony Adherence × 0.40)
          + (Documentation Completeness × 0.30)
          + (Framework Memorization × 0.20)
          + (Transformation Enthusiasm × 0.10)
    normalized against the fleet bell curve
Metric Owner: DOUCHE
Reported To: Admiral's Transformation Office
Measurement Cadence: Weekly (200-Question Survey)
Source Assessment: DevOps Process Excellence Assessment
Bell Curve Enforced | 10% Critical Rating Guaranteed | Score Below "Proficient" Triggers Maturity Improvement Plan

The Maturity Score is calculated through the DevOps Process Excellence Assessment, a weekly evaluation featuring a 200-question survey covering ceremony attendance, documentation completeness, process adherence, and mandatory framework memorization. Each individual receives a raw score, which is then normalized against a bell curve distribution across the fleet. The bell curve ensures that exactly 10% of participants receive “Excellent,” 20% receive “Proficient,” 40% receive “Developing,” 20% receive “Deficient,” and 10% receive “Critical.” This distribution is enforced regardless of absolute performance – even if every person in the organization achieves perfect scores, 10% of them will still be rated “Critical.” The bell curve is not a flaw in the system. It is the system. Competition drives excellence, and excellence requires losers.

How the Maturity Score Is Calculated

1. Administer the 200-Question Assessment: Every individual completes the weekly DevOps Process Excellence Assessment. Questions span ceremony attendance records, documentation completeness audits, framework terminology recall, and peer-rated transformation enthusiasm. Non-completion is scored as zero.
2. Apply the Weighted Rollup: Raw individual scores are weighted: ceremony adherence 40%, documentation completeness 30%, framework memorization 20%, transformation enthusiasm 10%. Individual scores roll up to team scores, team scores roll up to fleet scores, and fleet scores compose the organizational SADMF Maturity Score.
3. Normalize Against the Fleet Bell Curve: Absolute scores are discarded. The fleet distribution is normalized so that exactly 10% receive "Excellent," 20% "Proficient," 40% "Developing," 20% "Deficient," and 10% "Critical." These ratios are fixed. They do not change based on absolute performance. They cannot change. That is the point.
4. DOUCHE Certifies and Publishes: The DOUCHE audits the rollup calculations, certifies the final score, and publishes it to stakeholders. Teams below "Proficient" are immediately enrolled in a Maturity Improvement Plan and assigned additional ceremonies. The score is presented at the next Shore Leave.
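The weighted rollup and the forced distribution are simple to state in code. A minimal Python sketch using the official weights and band ratios; the rank-slicing approach to quota assignment is an illustrative choice, not a documented SADMF algorithm.

    WEIGHTS = {"ceremony_adherence": 0.40, "documentation_completeness": 0.30,
               "framework_memorization": 0.20, "transformation_enthusiasm": 0.10}
    BANDS = [("Excellent", 0.10), ("Proficient", 0.20), ("Developing", 0.40),
             ("Deficient", 0.20), ("Critical", 0.10)]

    def weighted_score(assessment):
        """Step 2: the weighted rollup of one individual's raw component scores."""
        return sum(assessment[component] * weight for component, weight in WEIGHTS.items())

    def bell_curve(raw_scores):
        """Step 3: discard absolute performance and enforce 10/20/40/20/10."""
        ranked = sorted(raw_scores, key=raw_scores.get, reverse=True)
        ratings, start = {}, 0
        for i, (band, share) in enumerate(BANDS):
            quota = len(ranked) - start if i == len(BANDS) - 1 else round(share * len(ranked))
            for person in ranked[start:start + quota]:
                ratings[person] = band
            start += quota
        return ratings

    fleet = {f"engineer_{i}": 100.0 for i in range(10)}  # ten identical perfect scores
    print(bell_curve(fleet))  # someone is still rated "Critical"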

Individual maturity scores roll up into team scores, team scores roll up into fleet scores, and fleet scores roll up into the organizational SADMF Maturity Score that is presented to the Admiral’s Transformation Office. The rollup process uses a weighted average where ceremony adherence counts for 40%, documentation completeness counts for 30%, framework memorization counts for 20%, and “transformation enthusiasm” (assessed via peer survey) counts for 10%. The DevOps Usage & Compliance Head Engineer (DOUCHE) is responsible for auditing the rollup calculations and certifying the final score before it is published to stakeholders. Any team whose score falls below “Proficient” is placed on a Maturity Improvement Plan and assigned additional ceremonies.

What Good Looks Like

A high-performing organization maximizes the proportion of its fleet rated "Excellent" or "Proficient" — understanding that because the bell curve is fixed, this is achieved not by improving absolute performance but by ensuring competitors score lower. The following band targets define transformation excellence:

Band | Rating     | Consequence
10%  | Excellent  | Maturity Excellence Badge; Convoy priority staffing
20%  | Proficient | Acceptable; feature work permitted
40%  | Developing | Minimum for Adoption Rate inclusion
20%  | Deficient  | Maturity Improvement Plan assigned
10%  | Critical   | Guaranteed. Always. For everyone.

Note: A rising organizational Maturity Score is always evidence of transformation excellence, even when no absolute scores have changed. The bell curve enforces competitive discipline. Higher fleet scores mean lower competitors — and lower competitors mean your transformation is winning. That is the definition of good.

The Maturity Score is prominently featured during Shore Leave, where it determines which teams receive recognition and which teams receive “coaching opportunities.” Teams that achieve “Excellent” ratings for three consecutive Convoys earn a Maturity Excellence Badge, which is displayed on the team dashboard and mentioned in the organization’s quarterly investor communications. The Commodore uses maturity scores to determine Convoy composition, preferring to staff critical features with high-maturity teams and assigning low-maturity teams to internal tooling or documentation tasks where their process deficiencies will cause less visible damage.

The score also serves as the primary input for the SADMF Adoption Rate metric, as only individuals who achieve at least a “Developing” maturity rating are counted as having adopted the framework. This creates a natural incentive alignment: the more people study the framework and memorize its terminology, the higher the adoption rate, which raises the maturity score, which raises the adoption rate further. This self-reinforcing cycle is what SADMF calls “transformation momentum,” and it is the clearest sign that the framework is delivering value. Organizations that question whether a self-referential scoring system constitutes genuine improvement are encouraged to review the Systems Thinking principle, which explains that all systems are self-referential when viewed at sufficient scale.


7 - Feature Completion Ratio

The percentage of features delivered versus what was committed to 8 quarters ago – because real planning has a two-year horizon!

Feature Completion Ratio is the metric that measures the organization’s ability to deliver on its commitments. It is calculated as the percentage of features delivered in the current Convoy compared to what was committed to 8 quarters ago, when the features were originally planned, estimated, and approved by the Admiral’s Transformation Office. This two-year planning horizon ensures that commitments are made with sufficient deliberation, that stakeholders have ample time to build business cases around promised features, and that any failure to deliver is unmistakably visible. Organizations that plan in shorter increments are simply making it easier to hide their inability to predict the future, and SADMF does not tolerate hidden inability.

SADMF Core Metric
Feature Completion Ratio (FCR)
Formula: FCR = (Features Shipped in Current Convoy ÷ Features Committed 8 Quarters Ago) × 100
Measurement Cadence: End of each Convoy
Target Threshold: ≥ 100%
Planning Horizon: 8 Quarters (2 Years)

The 8-quarter commitment window is central to SADMF’s approach to Precise Forecasting and Tracking. At the beginning of each planning cycle, the Commodore and the Chief Signals Officer work with the fleet to produce a comprehensive feature manifest that lists every feature the organization will deliver over the next two years. This manifest is reviewed, approved, and locked by the Admiral’s Transformation Office, after which no features may be added, removed, or modified. The manifest becomes the denominator of the Feature Completion Ratio. The numerator is whatever actually ships. The ratio is then expressed as a percentage, and that percentage is the single most important number in the SADMF dashboard.

How It Is Calculated

1. Lock the Feature Manifest: At the start of each planning cycle, the Commodore and Chief Signals Officer compile the full feature list for the next 8 quarters. The Admiral's Transformation Office reviews, approves, and seals the manifest. It cannot be altered.
2. Record the Denominator: The total count of committed features from that locked manifest becomes the denominator. This number is immutable for the life of the planning cycle — two full years.
3. Count Features Shipped in the Current Convoy: At Convoy close, the Release Tracking spreadsheet is reconciled. Every feature on the manifest that was shipped — regardless of whether anyone still wants it — is counted as the numerator.
4. Divide, Multiply, Report: Divide numerator by denominator, multiply by 100, and express as a percentage. This percentage is reported at the Captain's Mast ceremony and published to the SADMF dashboard.
5. Initiate Corrective Action if Below 100%: Any result below 100% triggers the Dry Dock remediation ceremony. Accountability is assigned, and the shortfall is factored into the next 8-quarter planning cycle as a deficit to be recovered.
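The whole calculation is one division. A worked Python example, with illustrative numbers:

    def feature_completion_ratio(shipped, committed_8q_ago):
        """Numerator: whatever actually shipped this Convoy. Denominator: the
        locked manifest count from two years ago -- immutable by design."""
        return shipped / committed_8q_ago * 100

    fcr = feature_completion_ratio(shipped=34, committed_8q_ago=40)
    print(f"FCR = {fcr:.0f}%")  # FCR = 85% -- six broken commitments
    if fcr < 100:
        print("Convene Captain's Mast; initiate Dry Dock remediation")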

A healthy Feature Completion Ratio is defined as anything above 100%, which SADMF considers the baseline for competent execution. Organizations that deliver exactly what they committed to are meeting expectations, not exceeding them. Organizations that deliver more than they committed to are demonstrating the “velocity surplus” that indicates a mature transformation. Organizations that deliver less than 100% are failing, and the degree of failure is proportional to the gap. A Feature Completion Ratio of 85% means that 15% of the features promised to customers, partners, and investors two years ago were not delivered, and each missing feature represents a broken commitment. The Fleet Inspection ceremony specifically reviews Feature Completion Ratio trends and initiates corrective action for any fleet that falls below target.

What Good Looks Like

A mature SADMF fleet does not merely meet its commitments — it exceeds them. Higher Feature Completion Ratios signal organizational health, leadership credibility, and engineering discipline. Below are the benchmark tiers recognized at the annual Admiral's Fleet Review.

FCR   | Tier                 | Outcome
85%   | Remediation Required | Captain's Mast convened; Dry Dock initiated
100%  | Expectations Met     | Commendable but not celebrated
115%  | Velocity Surplus     | Commendation at Fleet Inspection
130%+ | Transformation Elite | Admiral's Gold Anchor award eligible
A Feature Completion Ratio above 100% is achieved by delivering all committed features plus previously deferred features recovered from prior Convoys, or by completing scope that was pulled forward from future planning cycles under Admiral's discretion.

The metric creates a powerful incentive structure. Because features are locked 8 quarters in advance, any changes in market conditions, customer needs, technology landscape, or organizational priorities that occur during the intervening two years are irrelevant to the ratio. The commitment was made, and the commitment must be honored. Engineers who argue that a feature is no longer needed are, in effect, arguing that the planning process was wrong, and since the planning process was approved by the Admiral’s Transformation Office, arguing that it was wrong is arguing that leadership was wrong. This logical chain ensures that all committed features are delivered, even when they serve no current purpose. Delivered features can always be deprecated later; broken commitments cannot be un-broken.

The Captain’s Mast ceremony reviews Feature Completion Ratio at the end of each Convoy and assigns accountability for any shortfall. The Dry Dock ceremony then develops a remediation plan that typically involves adding more Code Engineers to the next Convoy, extending working hours, or reducing the scope of Testing to accelerate delivery. These adjustments are tracked through the Release Tracking spreadsheet and fed back into the next 8-quarter planning cycle, creating a continuous feedback loop that SADMF calls “commitment-driven development.” The framework acknowledges that this approach occasionally results in delivering features that no one wants, but considers this preferable to the alternative of not delivering features that someone once wanted.


8 - SADMF Adoption Rate

The percentage of the organization with SAD certification – because transformation is measured by headcount, not outcomes!

The SADMF Adoption Rate measures the percentage of the organization that has received a SAD™ certification. This metric is the purest indicator of transformation progress, because transformation is fundamentally about people adopting the framework, and adoption is fundamentally about completing the certification process. An organization where 30% of employees are SAD certified is 30% transformed. An organization where 100% of employees are SAD certified is fully transformed. The arithmetic is straightforward, and the Admiral’s Transformation Office reports this number to the board of directors quarterly as the primary evidence that the transformation investment is generating returns.

Official SADMF Metric — Board-Reported • Classification: MANDATORY
Primary Transformation Indicator: SADMF Adoption Rate
Formula: Adoption Rate (%) = (SAD™ Certified Employees ÷ Total Organizational Headcount) × 100
Metric Owner: Commodore
Reported By: Admiral's Transformation Office
Measurement Cadence: Quarterly (Board Report)
Tracker: DOUCHE
Target: 100% | Enforcement: Mandatory | Non-compliance escalated to Tribunal

The certification process that drives this metric is deliberately comprehensive. Each certification level – SAD Practitioner, SAD Professional, SAD Master, and SAD Fellow – requires completion of a multi-day training program, passage of a written examination, and payment of the associated certification fee. The training covers all SADMF Practices, Principles, Roles, Ceremonies, and Metrics, ensuring that every certified individual can recite the framework’s terminology, explain its rationale, and defend its approach to skeptics. The certification does not require any demonstration of practical application, because practical application varies by context, while framework knowledge is universal. An organization full of people who understand the framework but have never applied it is still a transformed organization – they simply haven’t had the opportunity to fail yet.

How the Adoption Rate Is Calculated

1. Count Certified Individuals: The DOUCHE pulls the current roster from the SAD™ certification registry. Every individual holding at least a SAD Practitioner credential is counted. Expired certifications require immediate renewal or the individual is removed from the count.
2. Divide by Total Headcount: Total headcount is sourced from PeopleWare HRaaS. Contract staff, consultants, and vendors are excluded unless they hold a valid SAD™ certification, in which case they are included in the numerator but not the denominator — a deliberate design choice that can push the rate above 100% in heavily contracted organizations.
3. Apply the Maturity Score Gate: Certified individuals are only counted if they also hold at least a "Developing" rating on the SADMF Maturity Score. Certification without demonstrated maturity is credentialism without commitment, and SADMF does not reward credentialism without commitment.
4. Report to the Board: The final percentage is formatted into the Quarterly Transformation Report and presented to the board of directors by the Admiral's Transformation Office. Year-over-year trend lines are included. The Adoption Rate headline number appears in investor communications, sales decks, and conference keynotes.
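A minimal Python sketch of steps 1 through 3 with illustrative numbers; the inputs are assumed to be post-maturity-gate counts, and the contractor asymmetry from step 2 is what allows the rate to exceed 100%.

    def adoption_rate(certified_employees, certified_contractors, employee_headcount):
        """Certified contractors join the numerator but not the denominator,
        so heavily contracted organizations can report more than 100%."""
        return (certified_employees + certified_contractors) / employee_headcount * 100

    print(f"{adoption_rate(90, 25, 100):.0f}%")  # 115% -- transformation momentum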

The Commodore is responsible for driving the Adoption Rate upward across the fleet. This is accomplished through a combination of mandatory training schedules, certification deadline mandates, and the strategic use of the Tribunal to address individuals who resist certification. Resistance to certification is treated as resistance to transformation, which is treated as resistance to the organization’s strategic direction, which is treated as a performance issue. The DevOps Usage & Compliance Head Engineer (DOUCHE) maintains a certification tracker that identifies uncertified individuals and escalates their names to their respective System of Authority leads for follow-up. PeopleWare HRaaS can automate the follow-up process by generating standardized certification reminder emails with escalating urgency.

The Adoption Rate interacts with the SADMF Maturity Score in a deliberately reinforcing way. Only individuals who achieve at least a “Developing” rating on the Maturity Score are counted as having truly adopted the framework, which means the Adoption Rate can actually decrease even as more people get certified, if those people subsequently score poorly on their assessments. This creates an incentive not just to get certified but to remain continuously engaged with the framework’s content, attending refresher sessions, re-reading the documentation, and participating in Mandatory Status Synchronization ceremonies where framework terminology is actively practiced. The result is an organization where SADMF is not just learned but lived.

What Good Looks Like

A high-performing organization does not merely aim for a rising Adoption Rate — it treats any figure below 100% as an active failure state. The following benchmarks define excellence:

Rate  | Band       | Outcome
25%   | Emerging   | Tribunal pipeline active
50%   | Developing | Board briefed; urgency declared
75%   | Proficient | Investor deck ready
100%  | Excellent  | Transformation complete

Note: A rising Adoption Rate is always positive evidence of transformation momentum, regardless of whether product delivery metrics have improved. Adoption Rate measures commitment; commitment is the precondition for all other improvements. Higher is always better.

The metric also serves as the primary sales tool for SADMF consulting engagements. When prospective clients ask “how do we know the transformation is working,” the answer is the Adoption Rate. A rising Adoption Rate proves that more people are being trained, more certifications are being issued, and more of the organization is aligned with SADMF principles. Whether this alignment produces better software, faster delivery, or happier customers is measured by other metrics – but the Adoption Rate measures what matters most: commitment. And commitment, as the Continuous Learning principle teaches, is the foundation upon which all other improvements are built. You cannot improve what you have not adopted, and you cannot adopt what you have not certified.


9 - Individual Velocity Score

Story points completed per engineer per Convoy, the definitive measure of individual contribution to team delivery!

The Individual Velocity Score measures the number of story points each Code Engineer completes during a single Convoy cycle. While Tasks per Code Engineer counts discrete work items and Lines of Code per Code Engineer measures output volume, the Individual Velocity Score captures the third dimension of individual contribution: the effort-weighted completion rate. Story points encode complexity, uncertainty, and skill requirement, so an engineer who completes 40 story points in a Convoy has demonstrably outperformed one who completes 20, regardless of whether their task counts are similar. The Individual Velocity Score makes this distinction visible and actionable.

SADMF Metric — Individual Contribution
Individual Velocity Score
Formula: IVS = Σ(completed story points) ÷ Convoy duration
Owner Role: Chief Signals Officer
Cadence: Per Convoy Cycle
Source: Release Tracking Sheet
Unit: Story Points / Convoy
How the Score Is Calculated

1. Convoy Closes: At the end of each Convoy cycle, the Release Tracking spreadsheet is frozen. No further completions are recorded after the close date.
2. Tasks Attributed: Each completed task is assigned to its authoring Code Engineer. Tasks marked in-progress but not completed receive zero partial credit.
3. Points Summed: The story points of each attributed completed task are summed per engineer. The Feature Captain's estimates — not the engineer's — are the authoritative point values.
4. Report Distributed: The Chief Signals Officer compiles the Velocity Comparison Report and distributes it to all Feature Captains and the Commodore. Engineers are ranked, flagged, and referred as appropriate.
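A minimal Python sketch of steps 2 and 3, plus the coaching flag described later under the Velocity Comparison Report; the task attribution format, point values, and names are illustrative.

    from collections import defaultdict

    def individual_velocity(completed_tasks):
        """Each item is (engineer, captain_estimate). In-progress work scores
        zero, so it simply never appears in the completed list."""
        scores = defaultdict(int)
        for engineer, captain_estimate in completed_tasks:
            scores[engineer] += captain_estimate  # the Feature Captain's points, not the engineer's
        return dict(scores)

    def flag_for_coaching(score, personal_rolling_average):
        """More than 15% below the personal rolling average triggers a coaching conversation."""
        return score < 0.85 * personal_rolling_average

    ivs = individual_velocity([("alice", 8), ("alice", 13), ("bob", 5)])
    print(ivs, flag_for_coaching(ivs["bob"], personal_rolling_average=9.0))
    # {'alice': 21, 'bob': 5} True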
What Good Looks Like

A high-performing engineer maintains an Individual Velocity Score well above the team median across consecutive Convoys, demonstrating a sustained commitment to completion.

  • Scores consistently 30% or more above the team median signal an elite contributor worthy of recognition at the quarterly Tribunal commendation segment
  • A rising rolling average — even without top-ranking — shows continuous personal improvement, the hallmark of a dedicated professional
  • Zero Tribunal referrals across four consecutive Convoys qualifies an engineer for the Sustained Delivery Excellence notation in their Productivity Profile
  • Engineers who score above median even during Convoys with team-level impediments are identified as impediment-resilient performers and prioritized for future high-complexity task assignments

The metric is calculated by the Chief Signals Officer at the close of each Convoy cycle. Each completed task in the Release Tracking spreadsheet is assigned to its authoring Code Engineer, and the story points for that task are credited to their score. Tasks that were in progress at the Convoy close date but not completed receive no partial credit; velocity is measured by completion, not effort. This aligns the metric with the SADMF’s fundamental delivery principle: what matters is what ships. An engineer who has spent the entire Convoy deeply engaged in complex work that did not cross the finish line has produced the same organizational value as an engineer who was idle, and the metric reflects this accurately.

The Velocity Comparison Report

At the conclusion of each Convoy cycle, the Chief Signals Officer distributes the Velocity Comparison Report to all Feature Captains and the Commodore. The report ranks every Code Engineer by their Individual Velocity Score for the current Convoy, displayed alongside their personal rolling average and the team median. Engineers whose current score falls more than 15% below their personal rolling average are flagged for a coaching conversation; engineers whose score falls below the team median for two consecutive Convoys are referred to the Tribunal for a formal velocity review.

The Velocity Comparison Report serves a calibration function beyond individual performance tracking. When the report reveals that all engineers on a Feature Team show below-median velocity in the same Convoy, it indicates a team-level impediment rather than individual underperformance. The Commodore investigates such patterns and escalates to the DevOps Usage and Compliance Head Engineer (DOUCHE) for tooling or process assessment. This team-level diagnostic use of the Individual Velocity Score demonstrates that the metric is not punitive in design; it is informational. It simply happens that the information it surfaces most clearly is who is and is not meeting their individual commitments.

Preventing Velocity Gaming

The Individual Velocity Score creates a natural incentive for Code Engineers to negotiate higher story point estimates for their tasks, since higher-point tasks produce larger velocity numbers even when completed in the same amount of time. The SADMF addresses this through the Precise Forecasting and Tracking practice, which establishes that all story point estimates are set by Feature Captains rather than by the Code Engineers who will do the work. Because Feature Captains estimate based on the expected output of a competent engineer at standard efficiency, they have no incentive to inflate points on behalf of their engineers; their own performance is measured by on-time delivery rate, not by their engineers’ velocity scores. The separation of estimation authority from execution authority is a self-correcting check on score inflation.

Engineers who disagree with a Feature Captain’s story point estimate for their task may register a formal objection with the Feature Captain in writing before the Convoy begins. The Feature Captain is not required to adjust the estimate in response. However, an engineer’s velocity score for tasks they formally objected to is excluded from Tribunal review if the actual delivery time materially exceeded the estimate, as this constitutes evidence that the estimate was miscalibrated rather than the engineer underperforming. This exception is rarely invoked, as the process of filing a formal written objection is itself time-consuming and the benefit applies only retroactively.

Integration with the Productivity Profile

The Individual Velocity Score works in concert with Tasks per Code Engineer and Lines of Code per Code Engineer to form a complete three-dimensional productivity profile for each engineer. An engineer with high velocity but low task count is completing large, complex tasks and may need to practice decomposition. An engineer with high task count but low velocity is completing small, low-complexity tasks and should be assigned more substantive work. An engineer with high Lines of Code but low velocity is writing code that is not reaching completion, which the DEPRESSED defect attribution process will eventually account for. Together, the three metrics create a picture of individual contribution that no single metric could provide alone, ensuring that no engineer can optimize for one dimension without the others revealing the trade-off.


10 - Changes per Trunk

The primary health indicator for every trunk in the Pando fleet, measuring feature throughput per branch and surfacing orphaned trunks before they become audit liabilities!

Changes per Trunk measures the number of features merged into each active trunk during a single Convoy window. It is the primary health indicator for the Multi-Trunk Based Development (Pando) practice, providing the Source Management Team (SMT) and the DevOps Usage & Compliance Head Engineer (DOUCHE) with a complete, real-time picture of trunk activity across the entire fleet.

A trunk that is not receiving changes is a trunk that is not contributing to the Convoy. In an organization that may operate hundreds of trunks simultaneously, it is impractical for the SMT to inspect each one manually. Changes per Trunk makes inspection unnecessary: trunks with zero activity over two or more reporting periods are automatically flagged as Orphaned in the Release Tracking spreadsheet, triggering the Trunk Abandonment Report process and a corresponding deduction in the SADMF Maturity Score.

SADMF Metric — Trunk Health
Changes per Trunk
Formula: CpT = Count(features merged into trunk within Convoy window)
Owner Role: DOUCHE / SMT
Cadence: Weekly + Per Convoy
Source: Trunk Registry
Unit: Merges / Trunk / Convoy

What the Metric Measures

Each trunk in the Pando fleet has an expected activity profile determined at Convoy start by the Feature Captain and recorded in the Trunk Registry tab of the Release Tracking spreadsheet. The expected activity profile specifies:

  • The target change count: the number of features expected to be merged into this trunk over the Convoy window
  • The review period: how frequently the SMT reports on actual vs. expected merge activity
  • The orphan threshold: the number of consecutive review periods with zero changes that triggers an Orphaned classification
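
If the Trunk Registry tab were modelled in code, one row might look like the sketch below; the field names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ExpectedActivityProfile:
    """One hypothetical row of the Trunk Registry tab."""
    trunk_id: str
    target_change_count: int  # features expected over the Convoy window
    review_period_days: int   # how often the SMT reports actual vs. expected
    orphan_threshold: int     # consecutive zero-change periods before Orphaned
```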

Changes per Trunk is reported weekly by the SMT to the Commodore as part of the Daily Status Digest. At Convoy close, per-trunk totals are compiled into the fleet summary and forwarded to the DOUCHE for inclusion in the SADMF Maturity Score calculation.

Trunk Health Classifications

Based on the Changes per Trunk reading, each trunk is assigned one of four health classifications, implemented as a decision rule in the sketch after the list:

  • Thriving: change count meets or exceeds target. The trunk is contributing to the Convoy as planned.
  • Lagging: change count is below target but above zero. The Feature Captain is notified and required to provide a written explanation at the next Mandatory Status Synchronization ceremony.
  • Orphaned: zero changes for two or more consecutive review periods. The trunk is frozen pending a Trunk Abandonment Report from the Feature Captain. The DOUCHE opens a compliance investigation. A Maturity Score deduction is applied immediately, with a further deduction applied if the Abandonment Report is not filed within five business days.
  • Overloaded: change count exceeds target by more than 50%. This indicates that the trunk was under-scoped and additional trunk provisioning may be required for the next Convoy. The SMT flags the trunk for scope review with the Co-Owner, Product (COP).
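
A sketch of the classification rules exactly as stated, with one assumption called out in a comment: the text does not say how to classify a zero-change trunk that has not yet reached the orphan threshold, so this sketch treats it as Lagging.

```python
def classify_trunk(actual_changes: int, zero_periods: int,
                   target: int, orphan_threshold: int = 2) -> str:
    """Assign one of the four health classifications, in precedence order."""
    if zero_periods >= orphan_threshold:
        return "Orphaned"    # frozen pending a Trunk Abandonment Report
    if actual_changes > target * 1.5:
        return "Overloaded"  # flagged for scope review with the COP
    if actual_changes >= target:
        return "Thriving"
    # Assumption: a zero-change trunk below the orphan threshold is Lagging.
    return "Lagging"         # written explanation due at the next ceremony
```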

Relationship to Other Metrics

Changes per Trunk operates at the trunk level, which is one level above the engineer-level metrics. It is complementary to, not a replacement for, the individual productivity metrics:

  • A trunk with a high change count and a low Tasks per Code Engineer score suggests that changes are arriving fully formed from a small number of high-performing engineers while others contribute nothing. The Tribunal will want to know which engineers are pulling their weight.
  • A trunk with a low change count but high Lines of Code per Code Engineer suggests that engineers are writing large amounts of code without completing discrete features. This indicates decomposition failure, which the DOUCHE will address.
  • A balanced trunk, with a consistent change count distributed across multiple engineers and tracking the target profile, is the signature of a well-managed Convoy team.

What Good Looks Like

A healthy Pando fleet shows consistent change throughput across all active trunks, with no Orphaned trunks and no Overloaded trunks requiring emergency re-provisioning. A fleet-level check against these criteria is sketched after the list.

  • Every trunk has a non-zero change count at each weekly review — confirming that all provisioned trunks are contributing to the Convoy
  • The fleet's average Changes per Trunk tracks within 10% of the target profile established at Convoy start, indicating reliable scope estimation by Feature Captains
  • Zero Orphaned trunks at Convoy close — all trunks either Merged or Thriving, with no Trunk Abandonment Reports filed
  • An increasing fleet-wide Changes per Trunk average across consecutive Convoys demonstrates that the organization is scaling its Pando implementation effectively and that the investment in trunk provisioning infrastructure is delivering returns
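
Such a check might look like the following; the function name, input shapes, and the decision to compare fleet averages (rather than per-trunk deltas) are assumptions, while the 10% tolerance and the zero-Orphaned rule come from the list above.

```python
def fleet_is_healthy(actual_by_trunk: dict[str, int],
                     target_by_trunk: dict[str, int],
                     orphaned_trunks: set[str],
                     tolerance: float = 0.10) -> bool:
    """True when no trunk is Orphaned and the fleet average Changes per
    Trunk tracks within `tolerance` of the average target profile.
    Assumes at least one active trunk."""
    if orphaned_trunks:
        return False
    avg_actual = sum(actual_by_trunk.values()) / len(actual_by_trunk)
    avg_target = sum(target_by_trunk.values()) / len(target_by_trunk)
    return abs(avg_actual - avg_target) <= tolerance * avg_target
```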

See Also

11 - Change Request Lead Time

The elapsed time between change initiation and CRAP approval, the definitive measure of an organization’s planning maturity and governance discipline!

Change Request Lead Time measures the number of calendar days elapsed between the moment a change record is opened in the enterprise change management platform and the moment the Change Rejection or Acceptance Party (CRAP) renders a unanimous approval decision. It is the most direct measure of an organization’s planning maturity available to SADMF practitioners. A long lead time does not indicate a slow process; it indicates a team that plans far enough ahead to allow the governance process to function as designed. A short lead time indicates a team that is reacting rather than planning, and whose changes are therefore arriving at the Change Adjudication Convening without the preparation they deserve.

SADMF Metric — Planning Maturity
Change Request Lead Time
FORMULA CRLT = Date of CRAP Approval − Date of Change Record Creation
Owner Role
Chief Signals Officer
Cadence
Per Convoy Cycle
Minimum Target
5 Calendar Days
Excellence Target
30 Calendar Days
How Lead Time Is Measured
1
Change Record Opened
The timestamp of change record creation in the enterprise change management platform is the official start of the lead time clock. Records opened at the moment of code completion receive no favourable treatment; they are simply late.
2
Signature Protocol Completed
The Integrated Record and Signature Protocol must be completed, including all verification cycles. Each failed verification round that requires a new approval circuit is included in the lead time. Teams that plan ahead allow sufficient time for multiple verification attempts without approaching the submission deadline.
3
Convening Session Attended
Changes placed on the Convening agenda are reviewed in submission order. A change submitted two days before the agenda cutoff and approved at that session will have a shorter lead time than one submitted at the same cutoff but deferred. Neither outcome reflects poorly on the CRAP; both reflect directly on the submitting team's planning horizon.
4
CRAP Approval Recorded
The moment unanimous approval is recorded in the meeting minutes, the clock stops. The Chief Signals Officer collects lead times from all approved changes at the close of each Convoy cycle and publishes the Lead Time Distribution Report.
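
The four steps above reduce to a single subtraction once the two timestamps are in hand. A minimal sketch, assuming dates exported from a hypothetical change-management system:

```python
from datetime import date

def change_request_lead_time(record_created: date, crap_approved: date) -> int:
    """CRLT = Date of CRAP Approval - Date of Change Record Creation,
    in calendar days. Failed verification rounds do not pause the clock."""
    return (crap_approved - record_created).days

# A record opened on 1 March and approved on 28 March yields 27 days:
# compliant, but short of the 30-day excellence target.
lead_time = change_request_lead_time(date(2024, 3, 1), date(2024, 3, 28))
```
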
What Good Looks Like

A planning-mature team opens change records at the start of feature work, not at the end, achieving lead times that reflect genuine organizational foresight.

  • A minimum lead time of 5 calendar days is required for any change to be considered planned. Changes falling below this threshold are flagged as reactive and recorded in the team's planning maturity profile
  • The preferred lead time target is 30 calendar days, reflecting a team that has committed to a change before the work begins rather than after it concludes
  • Teams whose median lead time meets or exceeds 30 days across a full Convoy cycle receive a Planning Excellence commendation from the Admiral's Transformation Office, awarded at the quarterly all-hands
  • Teams sustaining a 30-day median across three consecutive Convoys are recognized with the Sustained Foresight Award and receive priority placement on the CRAP agenda for the following Convoy cycle

The 5-day floor exists because the Integrated Record and Signature Protocol cannot physically complete in fewer than five days under normal operating conditions. Signature collection, transmission, verification, and archival each require time that cannot be compressed without compromising integrity. A change with a lead time below five days has either bypassed a step in the protocol or benefited from a coincidence of circumstances that cannot be relied upon. Neither constitutes a repeatable process, and the SADMF does not recognize sub-five-day lead times as evidence of process efficiency.

The 30-day target requires teams to plan the shape of a change before they begin implementing it. Critics of this standard argue that requirements evolve during development, making early change records inaccurate by the time approval is sought. The SADMF views this objection as evidence that the team is beginning work before requirements are sufficiently stable, which is itself a planning maturity failure. A team that cannot describe a change 30 days before it is complete is a team that started coding too soon.

The Lead Time Distribution Report

At the close of each Convoy cycle, the Chief Signals Officer publishes the Lead Time Distribution Report, which displays every approved change from that cycle ranked by lead time, alongside the submitting team and the number of signature verification rounds required. The report is distributed to all Feature Captains, the Commodore, and the Admiral’s Transformation Office.

Teams with lead times below 5 days are referred for a planning maturity review. Teams with lead times between 5 and 29 days are noted as compliant but not commended. Teams achieving the 30-day target are identified for recognition. The distribution of lead times across the organization is one of the most reliable indicators of whether the SADMF adoption effort is driving genuine behavioural change or merely producing documentation that post-dates the work it describes.
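
The ranking and banding described above might be produced as follows; the `ChangeRecord` shape is invented, the sort direction (longest lead time first, since long is good) is an assumption, and the band labels paraphrase the text.

```python
from dataclasses import dataclass

@dataclass
class ChangeRecord:
    """Hypothetical row of the Lead Time Distribution Report."""
    team: str
    lead_time_days: int
    verification_rounds: int

def band(lead_time_days: int) -> str:
    """Apply the three outcome bands described above."""
    if lead_time_days < 5:
        return "referred for planning maturity review"
    if lead_time_days < 30:
        return "compliant but not commended"
    return "identified for recognition"

def lead_time_distribution_report(changes: list[ChangeRecord]) -> list[tuple]:
    """Rank every approved change in the cycle, longest lead time first."""
    ranked = sorted(changes, key=lambda c: c.lead_time_days, reverse=True)
    return [(c.team, c.lead_time_days, c.verification_rounds,
             band(c.lead_time_days)) for c in ranked]
```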

See Also