The Scaled Agile DevOps Maturity Framework defines a comprehensive organizational hierarchy where every person knows their responsibilities, every responsibility has a named owner, and every owner is accountable through clearly defined reporting lines. Roles in SADMF are not fluid or self-organizing; they are precisely defined, carefully separated, and rigorously enforced. When everyone knows exactly what they are allowed to do, nobody wastes time doing what they are not supposed to. This Systems Thinking extends from the highest levels of strategic leadership down to the individual Code Engineer typing at their keyboard.
The organizational structure is built on three pillars: the Admiral’s Transformation Office (ATO) provides strategic direction and accountability, the System of Authority (SOA) implants and enforces the framework through external consultants, and the System of Service (SOS) delivers software through the DevOps Release Convoy. Within this structure, specialized roles handle code review, testing, source management, architecture governance, change approval, and status reporting, ensuring that no individual is burdened with responsibilities outside their defined scope. The result is an organization where accountability is absolute, oversight is comprehensive, and every activity is performed by the role specifically designed to perform it.
The Roles
Admiral’s Transformation Office (ATO) – The command-and-control center accountable for the 5-8 year transformation roadmap, assessments, metrics, and certification renewals.
Build Engineers (BE) – YAML experts who own the entire build pipeline, ensuring Code Engineers never waste time on build concerns.
Change Rejection or Acceptance Party (CRAP) – Seven-person review board that approves changes by unanimous secret vote after supplicants swear an oath of checklist compliance.
Chief Signals Officer (CSO) – Senior executive who publishes the Feature Completion Ratio daily to ensure plan adherence.
Code Standards Enforcement Team (CSET) – Dedicated reviewers who perform all code reviews, enforcing the Enterprise Coding Standards Manual across every line of code.
Code Engineer (CE) – The backbone of a SAD implementation, transforming requirements into machine-readable instructions quickly and quietly.
Co-Owner, Product (COP) – The undivided Single Point of Contact for a product, shared across multiple COPs who extract delivery commitments from technical staff until achievability is confirmed.
Commodore (C) – The delivery commander who collects status from every role and ensures framework compliance before deploying the fleet.
Feature Captain (FC) – The mid-level manager responsible for tracking feature progress and reporting status to the Commodore.
Feature Team (FT) – The group of Code Engineers assembled per Convoy through the Press Gang ceremony to deliver a feature.
Quality Authority (QA) – Manual testing specialists and the final arbiter of requirements, because the only TRUE way to test is by hand.
Product Direction Arbitration Council (PDAC) – Cross-functional council of seven to fifteen stakeholder representatives that replaces individual product ownership with consensus-based backlog governance.
Review Board Review Board (RBRB) – The board that reviews the decisions of the EARB and CRAP, ensuring the reviewers are themselves reviewed.
Source Management Team (SMT) – Authorizes branches, merges code, and resolves all conflicts so Code Engineers never have to.
System of Authority (SOA) – The team of teams staffed by contractors and consultants accountable for implanting SADMF in your organization.
System of Service (SOS) – The team of teams accountable for achieving deadlines and shipping code under servant leadership.
Unit Tester (UT) – Dedicated specialists who write unit tests after code is delivered, because Code Engineers should focus on writing code.
The command-and-control layer that drives organizational transformation and alignment.
1.1 - Admiral's Transformation Office
The command-and-control center ensuring everyone achieves the goals of SADMF through centralized direction, assessment, and accountability!
The Admiral’s Transformation Office is the nerve center of every SADMF implementation. Without centralized command, transformation efforts fragment into isolated pockets of local optimization where teams make decisions based on their own narrow context rather than the broader organizational vision. The ATO eliminates this risk by concentrating all strategic authority, methodology decisions, and innovation directives under the Admiral, a senior leader whose singular vision ensures coherence across every team, every Convoy, and every quarter. The Admiral does not merely oversee the transformation; the Admiral is the transformation. Every process change, every tool adoption, every team restructuring flows from the ATO’s directives, ensuring that the organization moves as one body toward maturity rather than stumbling forward as a collection of disconnected limbs.
The Transformation Roadmap
The ATO is accountable for the 5-8 year transformation roadmap, a document of extraordinary scope and precision that plots the organization’s journey from its current state of chaos to full SADMF maturity. The roadmap is updated annually during a three-week planning summit attended by the Admiral, the System of Authority (SOA), and selected consultants. Each year of the roadmap specifies:
Teams are not consulted during roadmap creation, as their perspective is necessarily limited to their own delivery concerns and cannot encompass the strategic vision that only the Admiral possesses. The roadmap is communicated downward through the System of Authority and enforced through quarterly assessments that measure each team’s compliance with the current year’s objectives.
Assessments and Accountability
Assessments are the ATO’s primary instrument of accountability. The DevOps Process Excellence Assessment is administered weekly under the ATO’s authority, generating the individual Excellence Scores that feed the SADMF Maturity Score. The ATO reviews these scores at the aggregate level, identifying teams and individuals whose performance threatens the roadmap timeline. When a team’s scores fall below acceptable thresholds, the ATO may:
The ATO also oversees certification renewals, ensuring that every practitioner maintains current credentials and that the organization’s overall certification count trends upward as required by the roadmap.
Innovation Governance
Beyond assessments and roadmaps, the ATO serves as the organization’s center of innovation. All proposals for new tools, new processes, or new methodologies must be submitted to the ATO for evaluation. The ATO maintains a Technology Evaluation Queue where proposals wait for review, typically for 8-12 weeks to ensure that enthusiasm does not override rigor. Proposals that survive the evaluation period are forwarded to the Enterprise Architecture Review Board (EARB) for naming compliance and then to the Change Rejection or Acceptance Party (CRAP) for formal approval. This multi-stage gatekeeping process ensures that innovation is controlled, documented, and aligned with the roadmap. Spontaneous innovation by individual teams is actively discouraged, as it introduces variance that the ATO cannot track and therefore cannot manage.
Transformation Tracking
The ATO also manages the general project management of the transformation itself, tracking milestones, dependencies, and blockers in a dedicated Transformation Tracking Spreadsheet that mirrors the structure of the Release Tracking spreadsheet but operates at the organizational level. This spreadsheet is maintained by hand to ensure that the ATO retains full awareness of every detail, as automated dashboards create a false sense of visibility by hiding the complexity behind aggregated views. The Admiral reviews the Transformation Tracking Spreadsheet daily during the Mandatory Status Synchronization and uses it to issue directives for the coming day. In this way, the ATO ensures that the transformation is not merely a set of aspirations but a managed program with clear ownership, measurable outcomes, and consequences for non-compliance.
See Lean Management for the principle behind centralized decision-making
1.2 - Chief Signals Officer
The senior executive ensuring plan adherence through daily publication of the Feature Completion Ratio!
The Chief Signals Officer is the senior executive responsible for ensuring that the organization remains aligned with the plan at all times. In organizations without this role, metrics are scattered across dashboards that nobody checks, reports that nobody reads, and stand-ups where nobody listens. The CSO eliminates this dysfunction by serving as the single authoritative voice for delivery metrics, publishing the Feature Completion Ratio daily and ensuring that every stakeholder from the Admiral’s Transformation Office to individual Feature Captains knows exactly where the organization stands relative to the plan. The CSO does not interpret the numbers or offer recommendations; the numbers speak for themselves, and the CSO’s job is to ensure they are heard.
The Feature Completion Ratio
The Feature Completion Ratio is the CSO’s primary signal, a single number that expresses the percentage of planned features that have been completed relative to the plan’s timeline. The CSO calculates this ratio daily by collecting status reports from every Commodore, cross-referencing them against the Release Tracking spreadsheet, and applying the official formula documented in the Precise Forecasting and Tracking practice. The daily cadence is essential: weekly reporting creates dangerous gaps where problems can fester undetected, while real-time dashboards encourage constant monitoring that distracts leadership from strategic thinking. Daily publication strikes the perfect balance, providing timely information without overwhelming consumers with continuous streams of data.
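The daily calculation is simple enough to sketch. The function below is an illustration only: the official formula is documented in the Precise Forecasting and Tracking practice, and the linear-plan assumption is ours, not the framework's.

```python
def feature_completion_ratio(completed, planned, days_elapsed, days_total):
    """Completed features as a percentage of where the plan says the
    Convoy should be today. Illustrative only; the official formula is
    defined in the Precise Forecasting and Tracking practice."""
    expected = planned * days_elapsed / days_total  # assumes a linear plan
    if expected == 0:
        return 100.0  # nothing was expected yet, so the plan is intact
    return round(100 * completed / expected, 1)
```

A Convoy that has shipped 5 of 20 planned features at the halfway point of a 60-day plan reports a ratio of 50.0, which the Signal Report colors accordingly.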
The Signal Report
The CSO’s daily signal is distributed through a standardized format called the Signal Report, a one-page document that presents the Feature Completion Ratio alongside trending data for the past 30 days, variance from plan, and a color-coded status indicator:
Color   Meaning
Green   On track
Amber   At risk
Red     Behind plan
Black   Critically behind
The Signal Report is emailed to all leadership, posted in the team communication channels, and displayed on physical monitors mounted in common areas. This multi-channel distribution ensures that nobody can claim ignorance of the current state. The CSO also presents the Signal Report at the daily Mandatory Status Synchronization ceremony, where it serves as the opening topic and sets the tone for all subsequent discussion.
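The color-coded status indicator reduces to a threshold lookup. The cut-off values below are invented for illustration; the actual thresholds are set by the Admiral's Transformation Office.

```python
def status_color(ratio):
    """Map a Feature Completion Ratio (percent of plan) to the Signal
    Report's color code. Threshold values are illustrative assumptions."""
    if ratio >= 95:
        return "Green"   # On track
    if ratio >= 80:
        return "Amber"   # At risk
    if ratio >= 60:
        return "Red"     # Behind plan
    return "Black"       # Critically behind
```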
Escalation Signal Protocol
When the Feature Completion Ratio drops below target thresholds, the CSO is responsible for initiating the Escalation Signal Protocol. This protocol defines the actions triggered at each threshold level:
CSO recommends invoking the Tribunal to address systemic failures
Each escalation level adds more meetings, more reports, and more oversight, creating a feedback loop that ensures declining performance receives proportionally increasing management attention.
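The protocol's feedback loop can be sketched as a threshold table. The thresholds and oversight quantities below are placeholders; only the direction is canonical: the further the ratio falls, the more meetings appear.

```python
# Threshold levels and the oversight they add. All numbers are invented
# for illustration; the real levels are defined by the protocol itself.
ESCALATION_LEVELS = [
    (95, 0, 0),  # on plan: no additional oversight
    (80, 1, 1),  # one extra meeting, one extra report per week
    (60, 3, 2),
    (40, 5, 4),
    (0, 8, 6),   # CSO recommends invoking the Tribunal
]

def escalation_response(ratio):
    """Return the extra weekly meetings and reports triggered by a given
    Feature Completion Ratio."""
    for floor, meetings, reports in ESCALATION_LEVELS:
        if ratio >= floor:
            return {"extra_meetings": meetings, "extra_reports": reports}
```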
Role Qualifications
The CSO position requires a senior executive with deep experience in metrics, reporting, and organizational communication, but explicitly not in software delivery. A CSO with engineering experience might be tempted to look behind the numbers, to ask why the ratio is declining rather than simply reporting that it is declining. This would compromise the CSO’s objectivity. The CSO’s value lies in being a pure signal transmitter: taking raw data, formatting it into the standardized report, and distributing it without editorial comment. The DOUCHE owns the process, the Commodore owns the delivery, and the CSO owns the signal. This separation of responsibilities ensures that no single role can both generate and interpret the metrics, which would create the appearance of accountability without its substance.
See Release Tracking for the spreadsheet that feeds the CSO’s calculations
See Commodore for the role that provides status data to the CSO
1.3 - Commodore
The delivery commander who collects status, ensures framework compliance, and authorizes fleet deployment!
The Commodore is the linchpin between strategy and execution in the SADMF delivery model. Where the Admiral’s Transformation Office sets the vision and the Feature Captains manage individual features, the Commodore commands the entire Convoy, ensuring that every step in the framework is performed correctly before Deploying the Fleet. The Commodore does not write code, does not review code, and does not test code. The Commodore collects status, and from that status, the Commodore derives truth. In a complex organization where dozens of Feature Teams work simultaneously on overlapping codebases, no individual contributor can see the whole picture. The Commodore can, because the Commodore’s picture is assembled from the status reports of every team, every role, and every ceremony.
Status Collection
Status collection is the Commodore’s primary activity and most sacred duty. Each day, the Commodore gathers reports from:
These reports are compiled into the Commodore’s Daily Status Digest, a comprehensive document that feeds the Chief Signals Officer’s daily Signal Report and the Release Tracking spreadsheet. The Commodore personally reviews every entry in the Digest for consistency, because a discrepancy between the Feature Captain’s reported progress and the Source Management Team’s branch status may indicate unreported problems that could threaten the Convoy timeline.
Deployment Readiness
Before any Convoy can Deploy the Fleet, the Commodore must verify that every step in the framework has been completed. This verification is performed using the Deployment Readiness Checklist, a document that enumerates every gate, every review, every approval, and every sign-off required by SADMF. The checklist includes confirmation that:
The Commodore signs the checklist personally, accepting accountability for the Convoy’s readiness. If a post-deployment defect is traced to a checklist item that should have been caught, the Commodore bears responsibility.
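The verification itself is an all-or-nothing check: one incomplete item halts the deployment. A minimal sketch, with the checklist modeled as a mapping from item name to completion status (the item names below are placeholders):

```python
def ready_to_deploy(checklist):
    """The Commodore's verification: every gate, review, approval, and
    sign-off must be complete before the Fleet deploys."""
    incomplete = [item for item, done in checklist.items() if not done]
    if incomplete:
        # The checklist is the law; the Commodore halts the deployment.
        raise RuntimeError(f"Deployment halted; incomplete: {incomplete}")
    return True  # the Commodore signs, accepting accountability
```

Business stakeholders demanding immediate release receive the RuntimeError.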
Liaison Between Systems
The Commodore also serves as the primary liaison between the System of Service (SOS) and the System of Authority (SOA). When the SOA issues directives from the Admiral’s Transformation Office, the Commodore translates those directives into actionable instructions for the Feature Teams. When the SOS encounters blockers that threaten delivery, the Commodore escalates them to the SOA with the appropriate severity classification. This translation function is essential because the SOA speaks in terms of transformation goals, maturity scores, and roadmap milestones, while the SOS speaks in terms of branches, builds, and test results. The Commodore is fluent in both languages and ensures that neither side must learn the other’s vocabulary.
Performance and Accountability
The Commodore’s performance is measured by:
On-time delivery rate: the percentage of Convoys that ship within the planned timeline
Daily Status Digest accuracy: how closely reported status matches actual outcomes
A Commodore who consistently delivers Convoys on time is recognized during the Tribunal; a Commodore whose Convoys are late or whose status reports prove inaccurate faces the same Tribunal with less favorable proceedings. The Commodore role requires someone who is meticulous, process-oriented, and comfortable with the authority to halt a deployment when the checklist is incomplete, even when business stakeholders are demanding immediate release. The checklist is the law, and the Commodore is its enforcer.
The oversight bodies and boards that ensure every change passes through proper channels.
2.1 - Change Rejection or Acceptance Party
CRAP ensures that only thoroughly reviewed and unanimously approved changes reach the Convoy!
The Change Rejection or Acceptance Party is the final human checkpoint between a proposed change and its inclusion in the next DevOps Release Convoy. While automated checks can verify syntax and tests can confirm functional behavior, neither can assess whether a change is truly ready for production. That judgment requires the wisdom, detachment, and institutional authority that only a formal review board can provide. The CRAP convenes twice per week, reviewing every change that has passed through the Code Standards Enforcement Team (CSET) and the Development Integrity Assurance Team (DIAT). No change may proceed to the DORC without CRAP approval, regardless of its size, urgency, or the seniority of its author.
Composition and Objectivity
The CRAP meeting dais seats seven members drawn from areas of the organization with no direct knowledge of the systems being changed. This is not an oversight; it is the CRAP’s greatest strength. Reviewers who understand the system being modified are inherently biased toward approval, as their familiarity breeds sympathy for the developer’s choices. Reviewers from unrelated domains bring the detachment and objectivity necessary to evaluate whether:
The seven-member composition ensures that no single perspective dominates, and the diversity of ignorance guarantees that the review focuses on process compliance rather than technical merit, which is exactly as it should be.
Voting and Approval
All approval decisions are made by unanimous secret vote. Each CRAP member casts their ballot independently, without discussion, after reviewing the change package. If even one member votes to reject, the change is returned to the submitting team with a Rejection Notice that specifies which checklist items were incomplete or which documentation was insufficient. The secret ballot prevents social pressure from influencing votes, ensuring that a junior CRAP member feels as empowered to reject a change from a senior Code Engineer as from a new hire. Unanimous approval is required because the strength of a change gate is measured by its strictest reviewer, not its most lenient. If six of seven reviewers approve but one has concerns, those concerns represent an unresolved risk that the organization cannot afford to accept.
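The voting rule is precise enough to sketch directly: seven independent ballots, and a single rejection returns the change.

```python
def crap_verdict(ballots):
    """Unanimous secret vote. `ballots` is a list of seven booleans cast
    independently, without discussion."""
    if len(ballots) != 7:
        raise ValueError("The CRAP dais seats exactly seven members")
    return "Approved" if all(ballots) else "Rejection Notice issued"
```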
The Supplicant’s Oath
Before presenting their change to the CRAP, meeting supplicants must take a formal oath affirming that they have personally applied every control on the change checklist. The oath is administered by the CRAP chairperson and recorded in the meeting minutes. This may seem ceremonial, but the oath serves a critical psychological function: it transforms checklist completion from a bureaucratic task into a personal commitment. A Code Engineer who has sworn an oath is far less likely to have skipped steps than one who merely checked boxes on a form. The oath text is standardized by the Admiral’s Transformation Office and updated annually to reflect new checklist items. Supplicants who are later found to have sworn falsely are referred to the Tribunal for review, and their oath violation is recorded in their PeopleWare profile.
Change Rejection Log and Oversight
The CRAP also maintains the Change Rejection Log, a comprehensive record of every rejected change, the reasons for rejection, and the number of resubmissions required before acceptance. This log is reviewed monthly by the Review Board Review Board (RBRB) to ensure that the CRAP’s rejection rate remains within acceptable bounds. A rejection rate that is too low suggests insufficient rigor; a rate that is too high may indicate that the change checklist has become unreasonably complex, in which case the Admiral’s Transformation Office will add additional checklist items to address the root cause. The CRAP’s standards are set by the iteration goals published by the ATO, and the CRAP is empowered to reject any change that does not meet those standards, regardless of business pressure or delivery timelines.
2.2 - Code Standards Enforcement Team
CSET performs all code reviews so that Code Engineers can focus on typing code instead of reading it!
The Code Standards Enforcement Team exists because the uncomfortable truth about code review is that the people who wrote the code are the least qualified to review it. Code Engineers are too close to the problem, too invested in their own solutions, and too pressed for time to perform the dispassionate, rigorous evaluation that quality code demands. Additionally, performing code review takes time away from coding, which is the Code Engineer’s only job. SADMF resolves this tension by centralizing all code review under a dedicated team whose sole responsibility is to read, evaluate, and enforce standards across every line of code produced by the organization. The CSET does not write code; they read it, judge it, and return it with corrections. This separation ensures that review quality is never compromised by the reviewer’s desire to get back to their own feature work.
Enterprise Coding Standards Manual
The CSET is responsible for defining and enforcing all coding standards for the enterprise. These standards are codified in the Enterprise Coding Standards Manual, a living document maintained by the CSET and approved by the Enterprise Architecture Review Board (EARB). The Manual covers every aspect of code formatting and structure including, but not limited to:
Indentation depth and the use of tabs versus spaces
Approved variable and method names from the EARB’s Book of Names
Comment format and density requirements
Maximum line length, maximum method length, and maximum file length
Approved design patterns for each programming language
Standards that are not in the Manual are not standards, and code that violates standards that are in the Manual will not be approved regardless of whether the code functions correctly. Correctness is necessary but not sufficient; conformity is the higher bar.
Review Process
The CSET review process begins when a Code Engineer submits their changes to the CSET review queue:
The CSET assigns a reviewer from a rotation, ensuring that no reviewer becomes too familiar with any particular codebase, which would risk the development of sympathy or context that could bias their judgment.
The reviewer evaluates the submission against the Enterprise Coding Standards Manual using a 47-point checklist.
Each checklist item requires a pass or fail determination; partial passes are not permitted, as ambiguity in standards enforcement is the first step toward standards erosion.
Submissions that fail any checklist item are returned to the Code Engineer with detailed annotations specifying the violations and the corresponding Manual sections.
The Code Engineer corrects the violations and resubmits, and the cycle repeats until all 47 points pass.
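The resubmission cycle is a loop that ends only when every one of the 47 points passes. A sketch, with the reviewer and the Code Engineer's corrections modeled as placeholder callables:

```python
def cset_review_cycle(run_checklist, submit_correction, max_cycles=10):
    """Cycle a change through CSET review until the 47-point checklist is
    clean. `run_checklist` returns the list of failed item numbers;
    `submit_correction` applies fixes. Both are stand-ins for the real
    process. Returns the Standards Adherence Iteration Count."""
    for iteration in range(1, max_cycles + 1):
        failures = run_checklist()
        if not failures:  # all 47 points pass; the submission is approved
            return iteration
        submit_correction(failures)
    raise RuntimeError("Referred to the DevOps Process Excellence Assessment")
```

A change that fails twice before passing records an iteration count of 3, comfortably above the organizational average of 2.3.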
Standards Adherence Metrics
The average code change passes through the CSET 2.3 times before approval, a number that the CSET tracks as the Standards Adherence Iteration Count. A high iteration count for a Code Engineer indicates insufficient familiarity with the Enterprise Coding Standards Manual and may trigger a referral to the DevOps Process Excellence Assessment for additional evaluation. A low iteration count across the organization might suggest that standards have become too lenient, prompting the CSET to propose additional rules to the EARB for inclusion in the Manual. The CSET also publishes a weekly Standards Compliance Report that ranks all Code Engineers by their average iteration count, first-pass approval rate, and most frequently violated standard. This report is distributed to the Feature Captains, Commodore, and Admiral’s Transformation Office for visibility.
Authority and Amendment Process
The CSET’s authority is absolute within the domain of code standards. Neither a Feature Captain nor a Commodore may override a CSET rejection, as doing so would undermine the integrity of the standards enforcement process. If a Code Engineer believes a standard is incorrect or counterproductive, they may submit a Standards Amendment Proposal to the EARB, which will review it at their next scheduled meeting in 6 weeks. Until the amendment is approved, the existing standard remains in force and the CSET will continue to enforce it. This ensures that standards evolve through deliberate governance rather than ad hoc exceptions driven by delivery pressure.
2.3 - Development Integrity Assurance Team
DIAT validates the work of QA to ensure that testing itself meets the organization’s quality standards!
The Development Integrity Assurance Team addresses a question that most organizations are afraid to ask: who tests the testers? The Quality Authority is responsible for manually executing test scripts and verifying that code meets requirements, but the Quality Authority’s own work is itself a human process, subject to the same errors, oversights, and shortcuts that affect any other activity. Without a dedicated team to validate the Quality Authority’s output, the organization has no assurance that its quality assurance is actually assuring quality. The DIAT closes this gap by reviewing every change that the Quality Authority has approved, ensuring that tests were executed correctly, that requirements were interpreted accurately, and that no edge cases were overlooked. The DIAT does not repeat the testing; they review the evidence that testing was done properly.
Composition
The DIAT is composed of senior-level practitioners who have demonstrated deep expertise in their respective domains and have achieved high scores on the DevOps Process Excellence Assessment:
Senior Code Engineers: bring deep knowledge of code behavior and edge cases
Senior Build Engineers: contribute expertise in environment configuration and build artifacts
Senior Designers: provide perspective on requirements interpretation and user intent
This seniority is essential because the DIAT must be able to identify subtle errors that less experienced practitioners would miss. A junior Code Engineer might accept a test result at face value, but a senior DIAT member will examine the test steps, the test data, the environment configuration, and the screenshots to confirm that the test actually validated what it claimed to validate. The DIAT’s review is forensic in nature, treating each test execution as evidence that must withstand scrutiny.
Review Process
The DIAT reviewer examines the Quality Authority’s test execution log, verifying that every test script was executed in the correct order.
The reviewer confirms that all prerequisite conditions were met and that the pass/fail determination was consistent with the observed results.
The DIAT cross-references the test scripts against the original requirements to ensure that the Quality Authority did not inadvertently test the wrong thing or test the right thing with the wrong data.
Discrepancies are documented in a DIAT Findings Report and returned to the Quality Authority for remediation.
The change cannot proceed to the CRAP until the DIAT is satisfied.
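The forensic character of the review can be sketched as two comparisons: the execution log against the prescribed order, and executed scripts against the requirements they claim to cover. The data shapes here are assumptions:

```python
def diat_findings(execution_log, prescribed_order, script_to_requirement, requirements):
    """Return the DIAT Findings Report entries for one change.
    `script_to_requirement` maps each executed script to the requirement
    it validates; an empty result means the change may proceed to the CRAP."""
    findings = []
    if execution_log != prescribed_order:
        findings.append("scripts executed out of prescribed order")
    uncovered = sorted(set(requirements) - set(script_to_requirement.values()))
    for req in uncovered:
        findings.append(f"requirement {req} has no validating script")
    return findings
```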
Quarterly Test Script Audit
The DIAT’s authority extends beyond individual change review. They are also responsible for auditing the Quality Authority’s test script library on a quarterly basis, ensuring that:
Scripts remain current
Deprecated test cases have been removed
New requirements have corresponding test scripts
This audit produces the Test Coverage Integrity Report, which is reviewed by the Commodore and the Admiral’s Transformation Office. Gaps identified in the audit trigger the creation of new test scripts by the Quality Authority, which are then reviewed by the DIAT before being added to the library. This circular dependency between the QA and the DIAT ensures that both teams remain continuously engaged and that neither can operate without the other’s oversight.
Oversight of the DIAT
Some may argue that having a team to review the reviewers creates an infinite regression problem: if the DIAT validates QA, who validates the DIAT? SADMF addresses this through the Review Board Review Board (RBRB), which periodically reviews the decisions of all review bodies including the DIAT. Additionally, the DIAT’s own work is subject to the DevOps Process Excellence Assessment, ensuring that DIAT members are individually accountable for their framework knowledge and process adherence. The layered review structure is not redundant; it is resilient. Each layer catches what the previous layer missed, creating a defense-in-depth model that ensures quality is verified, the verification is validated, and the validation is reviewed.
2.4 - Enterprise Architecture Review Board
The EARB maintains the Book of Names, ensuring all Code Engineers use only approved words when naming things!
Naming is the hardest problem in software engineering, and the Enterprise Architecture Review Board ensures that no individual Code Engineer is burdened with solving it alone. Left to their own devices, Code Engineers will invent variable names, method names, class names, and service names according to their personal preferences, creating a Tower of Babel where every codebase speaks its own dialect. The EARB eliminates this chaos by maintaining the Book of Names, the master list that defines all acceptable words and word combinations that may be used for naming things during coding. If a word is not in the Book, it may not be used. If a combination is not in the Book, it may not be used. This discipline ensures that any Code Engineer joining a new Feature Team for the next Convoy will immediately recognize every identifier in the codebase, because every identifier was drawn from the same approved vocabulary.
The Book of Names
The Book of Names is organized into sections by domain:
Section                 Contents
Business Nouns          Core domain entities and concepts
Technical Verbs         Approved action words for methods and functions
Modifier Adjectives     Approved qualifiers for compound identifiers
Status Indicators       Approved words for state and condition naming
Compound Expressions    Pre-approved multi-word combinations
Each entry includes the approved word, its canonical spelling, its permitted abbreviation (if any), its allowed contexts, and usage examples. The Book currently contains 2,847 approved entries, a number that the EARB considers comprehensive but not final.
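Enforcement against the Book reduces to splitting an identifier into its words and checking each against the approved vocabulary. A sketch, with a three-word sample vocabulary standing in for the 2,847 real entries:

```python
import re

# A stand-in vocabulary; the real Book of Names holds 2,847 entries.
APPROVED_WORDS = {"customer", "fetch", "pending"}

def validate_identifier(name):
    """Return the words in a camelCase identifier that do not appear in
    the Book of Names. An empty list means the identifier may be used."""
    words = re.findall(r"[A-Z]?[a-z]+", name)
    return [w for w in words if w.lower() not in APPROVED_WORDS]
```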
New entries may be proposed by any member of the organization through the Name Submission Process, which requires the submitter to provide:
A justification
Three proposed usage examples
A statement confirming that no existing approved name adequately covers the intended meaning
Submissions are queued for the next EARB review meeting, where they are evaluated against the criteria of clarity, consistency, and necessity.
Review Cadence and Rejection Policy
The EARB meets every six weeks to review and, in most cases, reject new words submitted for inclusion in the Book. The six-week cadence is deliberate: it provides sufficient time for the EARB members to research each submission thoroughly and ensures that the Book is not diluted by hasty additions. The EARB’s default posture is rejection, because every new word added to the Book increases the vocabulary that Code Engineers must memorize and that the Code Standards Enforcement Team (CSET) must enforce. A lean vocabulary is a learnable vocabulary, and a learnable vocabulary is a consistent vocabulary.
Submissions are rejected for reasons including but not limited to:
The word is too similar to an existing approved word
The word is too domain-specific
The word is too colloquial
The word contains more than three syllables
The justification is insufficient
Rejected submissions may be resubmitted after a 12-week cooling-off period with additional justification.
Architectural Governance
The EARB also governs architectural decisions beyond naming, including the approved set of design patterns, approved technology stacks, and approved integration methods. When a Feature Team proposes to use a new library, framework, or architectural pattern, the proposal must be submitted to the EARB for evaluation. The EARB assesses the proposal against:
The organization’s existing technology landscape
The training requirements for adoption
The impact on the Enterprise Coding Standards Manual maintained by the CSET
Proposals that introduce technologies not already in the approved stack face a particularly high bar, as each new technology increases organizational complexity and the scope of the DevOps Process Excellence Assessment knowledge test.
Oversight and Decision Records
The EARB’s decisions are subject to review by the Review Board Review Board (RBRB), which meets every three weeks to evaluate whether the EARB is applying its criteria consistently and whether its rejection rate remains within acceptable bounds. The EARB reports to the Admiral’s Transformation Office and its decisions are recorded in the Architecture Decision Log, an append-only document that preserves the rationale for every approval and rejection. The Architecture Decision Log is a valuable resource for future EARB members, as it establishes precedent that guides future decisions. The EARB does not innovate; the EARB governs innovation, and governance is what separates a mature organization from a chaotic one.
The RBRB reviews the decisions of the EARB and the CRAP, ensuring that the reviewers are themselves properly reviewed!
The Review Board Review Board exists to answer the question that every mature governance structure must eventually confront: who watches the watchmen? The Enterprise Architecture Review Board (EARB) governs naming and architecture decisions. The Change Rejection or Acceptance Party (CRAP) governs change approval. The Development Integrity Assurance Team (DIAT) validates quality assurance. Each of these bodies wields significant authority over the delivery process, and authority without oversight is authority without accountability. The RBRB closes this governance loop by reviewing the decisions of all other review bodies, ensuring that their criteria are applied consistently, that their rejection rates are appropriate, and that their processes align with the standards set by the Admiral’s Transformation Office.
Meeting Cadence and Review Scope
The RBRB meets every three weeks to review and, when necessary, reject decisions made by the EARB and CRAP. The three-week cadence is offset from the EARB’s six-week cycle to ensure that RBRB reviews cover multiple EARB decision windows. During each meeting, the RBRB examines a sample of recent EARB and CRAP decisions, selected both randomly and based on flags raised by teams affected by those decisions. For each decision, the RBRB evaluates:
Whether the review body applied its documented criteria
Whether the rationale recorded in the decision log is sufficient
Whether the outcome was proportionate to the issue
An EARB rejection that lacks adequate justification may be overturned by the RBRB, requiring the EARB to reconsider the submission at their next meeting. A CRAP approval that appears to have been granted without proper checklist verification triggers a CRAP process audit.
Membership and Objectivity
The members of the RBRB must come from areas as far removed from the work as possible to maintain objectivity. This principle, shared with the CRAP, reflects SADMF’s foundational belief that proximity to the work creates bias and that the most objective judgment comes from those with the least context. RBRB members are typically drawn from departments such as:
Facilities management
Legal
Human resources
Finance
These are roles that have no involvement in software delivery and therefore no stake in any particular technical decision. This composition ensures that the RBRB evaluates process compliance rather than technical merit, which is exactly its mandate. The RBRB does not ask whether the EARB made the right technical decision; the RBRB asks whether the EARB followed the right process in making it.
Appeals Process
The RBRB also serves as an escalation path for teams that believe they have been unfairly treated by the EARB or CRAP. If a Feature Team submits a name to the EARB that is rejected three times despite what the team considers adequate justification, the team may appeal to the RBRB. The RBRB reviews the submission history, the EARB’s rejection rationale, and the team’s appeal, and renders a binding decision. Similarly, if a Code Engineer believes their change was rejected by the CRAP for reasons not documented in the change checklist, they may file an appeal. The RBRB’s appeal decisions are final and are recorded in the Governance Appeals Log, which is reviewed annually by the Admiral’s Transformation Office to identify systemic issues in the governance structure.
Oversight of the RBRB
Some practitioners question whether the RBRB itself requires oversight, noting that an infinite regression of review boards would be impractical. SADMF addresses this through the DevOps Process Excellence Assessment, which evaluates RBRB members as individuals, and through the annual Governance Structure Review conducted by the Admiral’s Transformation Office. The Governance Structure Review examines the effectiveness of all review bodies, including the RBRB, and may recommend changes to meeting frequency, membership criteria, or decision-making procedures. This ensures that the RBRB is accountable without requiring yet another review board, breaking the recursion at the organizational level through executive authority rather than structural repetition.
The practitioners who perform the technical work of building, testing, and delivering software.
3.1 - Build Engineers
YAML experts who ensure Code Engineers never waste time on build concerns and can focus entirely on feature delivery!
Build Engineers are the specialized practitioners who own the entire build pipeline, from the first line of YAML to the final artifact. In organizations that lack this role, Code Engineers are forced to maintain their own build configurations, leading to inconsistency, tribal knowledge, and the dangerous illusion that developers understand their own build systems. SADMF eliminates this risk by centralizing all build ownership under a dedicated team whose sole purpose is to write, maintain, and enforce the YAML that transforms source code into deployable artifacts. Code Engineers submit requests to the Build Engineers when they need build changes, and the Build Engineers evaluate, prioritize, and implement those changes according to the build roadmap established by the Admiral’s Transformation Office.
The Canonical Build Definition
The Build Engineers are responsible for maintaining the Canonical Build Definition, a single authoritative YAML file (or, more commonly, a hierarchy of 40-60 YAML files) that defines every step of the build process for every application in the organization. The Canonical Build Definition is stored in a dedicated repository managed by the Source Management Team, and changes to it follow the same Fractal-based Development branching pattern as application code. This ensures that build changes receive the same level of scrutiny as feature code, passing through the Code Standards Enforcement Team (CSET) for review and the Change Rejection or Acceptance Party (CRAP) for approval. Build changes that are approved are merged into the Conflict branch by the Source Management Team and included in the next Convoy.
Build Change Requests
The separation between Build Engineers and Code Engineers is a cornerstone of SADMF’s commitment to Systems Thinking. When a Code Engineer updates a dependency, adds a new module, or changes a compilation flag, they do not modify the build configuration themselves. Instead, they file a Build Change Request (BCR) with the Build Engineering team, specifying what they need and why. The Build Engineers then:
Evaluate the BCR against the Canonical Build Definition.
Assess the impact on other applications that share build components.
Schedule the change for implementation.
This process typically adds 3-5 business days to the development cycle, but this delay is a feature, not a flaw: it ensures that build changes are deliberate, documented, and coordinated across the enterprise rather than made impulsively by individual Code Engineers who cannot see the full picture.
Environment Provisioning
Build Engineers also execute the Standardized Environment Provisioning practice, maintaining the 200-step SEPAW checklist that ensures every environment is configured identically. When a Feature Team requires a new environment for testing or development, they submit a request to the Build Engineers, who provision the environment manually according to the checklist. The manual nature of this process is intentional: automated provisioning scripts can contain bugs, but a human following a checklist will catch discrepancies that a script would silently propagate. Each provisioned environment is signed off by the Build Engineer who created it and countersigned by the Development Integrity Assurance Team (DIAT) before it is released for use.
Metrics and Certification
Build Engineers’ performance is measured by the Feature Completion Ratio, ensuring that build reliability is tracked as rigorously as feature delivery. Build Engineers who achieve consistently high scores on the DevOps Process Excellence Assessment may be nominated for the SADMF Certified Build Architect credential, recognizing their mastery of YAML and their commitment to centralized build ownership.
3.2 - Code Engineer
The backbone of a SAD implementation, transforming requirements into machine-readable instructions quickly and quietly!
While other roles plan, assess, review, track, and govern, the Code Engineer performs the fundamental act that justifies the entire framework’s existence. The job is straightforward and should be treated as such. A Code Engineer receives requirements from the Feature Captain, writes the code that fulfills those requirements, and submits it for review by the Code Standards Enforcement Team (CSET).
The Code Engineer does not perform activities outside their lane:
Build configuration: that is the domain of the Build Engineers
Branch creation and merging: that is the domain of the Source Management Team
Unit testing: that is the domain of the Unit Testers
Manual testing and requirements interpretation: that is the domain of the Quality Authority
Status reporting: that is the domain of the Feature Captain
The Code Engineer types code. That is the job.
Expertise and Silence
Code Engineer expertise can be reliably judged by the number of questions they ask. Since a Code Engineer is expected to be an expert at data structures and algorithms, fewer questions indicate more expertise. A senior Code Engineer should be able to receive a requirement, internalize it immediately, and begin typing. Questions suggest confusion, and confusion suggests a gap in expertise that the DevOps Process Excellence Assessment should identify and address. The most productive Code Engineers are those who accept requirements without discussion, produce code without complaint, and submit it without explanation. This is not a sign of disengagement; it is a sign of mastery. The requirement speaks for itself, the code speaks for itself, and the Code Engineer’s silence is the loudest testament to their competence.
Team Assignment
Code Engineers are organized into Feature Teams for each Convoy, assembled through the Press Gang ceremony based on the skills required for the upcoming feature set. Because SADMF invests heavily in Build Quality In through the Tribunal and other review mechanisms, Feature Teams should be able to deliver at maximum throughput immediately upon formation, regardless of whether the team members have worked together before or have any familiarity with the codebase they are being assigned to. Onboarding time is a sign that the organization has failed to standardize sufficiently. If a Code Engineer needs more than a day to become productive on a new codebase, the fault lies not with the engineer but with the codebase’s failure to conform to the Enterprise Coding Standards Manual maintained by the CSET.
Workflow
The Code Engineer’s workflow is precisely defined by the framework:
Receive requirements from the Feature Captain.
Write code in the assigned feature branch.
Submit the completed code to the CSET for review.
Address any standards violations the CSET identifies.
Hand the approved branch to the Source Management Team for merging into the Conflict branch.
At no point does the Code Engineer interact directly with the production system, the build pipeline, the test suite, or the deployment process. These boundaries exist to protect both the Code Engineer and the organization: the Code Engineer is protected from the complexity of systems they do not need to understand, and the organization is protected from the risk of a Code Engineer making changes outside their area of expertise.
Performance data is compiled by the Chief Signals Officer and reviewed at the Tribunal, where Code Engineers whose metrics fall below acceptable thresholds receive coaching, reassignment, or additional process training. Code Engineers who demonstrate consistent excellence may be considered for advancement to the CSET or the Development Integrity Assurance Team (DIAT), roles that allow them to review and judge the work of their former peers. This career path reinforces the principle that the highest form of engineering is not writing code but evaluating it.
See Also
Feature Team for the team structure Code Engineers work within
3.3 - Feature Captain
The mid-level manager who tracks feature progress and ensures their assigned Feature Team delivers on time!
The Feature Captain is the mid-level manager responsible for tracking the progress of the feature they are assigned to and ensuring that their Feature Team delivers according to the plan. In organizations without Feature Captains, features are “owned” by the team collectively, which in practice means they are owned by nobody. Collective ownership diffuses accountability to the point where no individual can be held responsible when a feature is late, incomplete, or defective. SADMF eliminates this ambiguity by assigning a named Feature Captain to every feature in every Convoy.
The Feature Captain does not write code, does not test code, and does not review code. The Feature Captain tracks progress, removes blockers through escalation, and reports status to the Commodore. The Feature Captain is the human embodiment of the Release Tracking spreadsheet for their assigned feature.
Daily Responsibilities
The Feature Captain’s day begins with a review of the previous day’s progress against the Precise Forecasting and Tracking plan:
Each Code Engineer on the Feature Team reports their status to the Feature Captain, specifying the number of story points completed, the number remaining, and any impediments.
The Feature Captain records these figures in their section of the Release Tracking spreadsheet and calculates the feature’s current velocity against planned velocity.
Discrepancies are flagged immediately: if a Code Engineer reports fewer story points completed than planned, the Feature Captain investigates the cause and determines whether the variance is recoverable within the Convoy timeline or whether escalation to the Commodore is required.
The Feature Captain does not accept “it’s taking longer than expected” as an explanation; the plan was built using the official formula (1 SP = 0.73 person-days), and deviations from the plan indicate either an estimation error (which the Admiral’s Transformation Office must address) or a performance gap (which the Tribunal must address).
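The official formula is simple enough to apply mechanically; the following sketches the variance check the Feature Captain performs each morning, with hypothetical function names (SADMF specifies only the conversion factor, not this code):

```python
# Illustrative application of the official SADMF estimation formula
# (1 SP = 0.73 person-days); nothing here is prescribed by the framework.

SP_TO_PERSON_DAYS = 0.73  # the official conversion factor

def person_days(story_points: float) -> float:
    """Convert story points to person-days via the official formula."""
    return story_points * SP_TO_PERSON_DAYS

def variance_requires_escalation(planned_sp: float, completed_sp: float,
                                 person_days_remaining: float) -> bool:
    """Escalate to the Commodore when the remaining story points,
    converted by the formula, no longer fit the remaining calendar."""
    remaining_sp = planned_sp - completed_sp
    return person_days(remaining_sp) > person_days_remaining

# 15 SP remain, which converts to roughly 10.95 person-days; with only
# 9.0 person-days left in the Convoy, escalation is required.
print(variance_requires_escalation(40, 25, 9.0))  # True
```

Note that the formula leaves no room for "it's taking longer than expected": any variance is, by construction, either an estimation error or a performance gap.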
Ceremony Participation
The Feature Captain participates in every ceremony related to their feature’s lifecycle:
Convoy Planning: the Feature Captain decomposes the feature into tasks and assigns them to individual Code Engineers.
Mandatory Status Synchronization: the Feature Captain presents their feature’s current status to the broader team and flags any cross-feature dependencies or blockers.
Tribunal: the Feature Captain presents the feature’s delivery metrics and is held accountable for any variances from the plan.
The Feature Captain’s calendar is a mosaic of ceremonies, status meetings, and escalation calls, with whatever time remains allocated to updating the Release Tracking spreadsheet and preparing reports for the Commodore.
Relationship with the Feature Team
The Feature Captain’s relationship with the Feature Team is one of oversight, not collaboration. The Feature Captain assigns tasks, tracks completion, and reports results. The Feature Captain does not pair with Code Engineers, does not participate in technical discussions, and does not provide feedback on code quality, as those responsibilities belong to the CSET and DIAT respectively. This separation is intentional: a Feature Captain who becomes too involved in the technical work loses the objectivity required to report status accurately. Status reporting must be dispassionate, based on numbers rather than narratives, and free from the optimistic bias that infects those who are emotionally invested in the work. The Feature Captain’s detachment is their greatest asset.
Performance and Career Path
Feature Captains report to the Commodore and are evaluated based on:
Accuracy of status reports: how closely reported progress matches actual outcomes
On-time delivery rate: the percentage of their features that ship within the Convoy timeline
Release Tracking spreadsheet quality: the completeness and precision of their entries
A Feature Captain whose features consistently deliver on time and whose status reports prove accurate is recognized as a high performer. A Feature Captain whose features are late or whose status reports reveal systematic inaccuracies is referred for additional training in Precise Forecasting and Tracking methodology. The Feature Captain role is the proving ground for future Commodores: those who master the art of status collection, escalation, and reporting at the feature level are prepared to do so at the Convoy level.
See Also
Commodore for the role the Feature Captain reports to
Feature Team for the team the Feature Captain tracks
Release Tracking for the spreadsheet the Feature Captain maintains
Tribunal for the ceremony where Feature Captains are held accountable
3.4 - Feature Team
The group of Code Engineers assembled per Convoy to deliver a feature at maximum throughput from day one!
The Feature Team is the fundamental delivery unit of the Scaled Agile DevOps Maturity Framework. It is the group of Code Engineers assembled to build a new feature for the next Convoy, led by a Feature Captain who tracks their progress and reports status to the Commodore. Feature Teams are not permanent; they are formed fresh for each Convoy through the Press Gang ceremony, which matches available Code Engineers to the skills required for the upcoming feature set. This dynamic composition ensures that the organization’s talent is deployed where it is most needed rather than trapped in static team structures where engineers accumulate comfort and complacency in equal measure.
The Press Gang Ceremony
The Press Gang ceremony is the mechanism by which Feature Teams come into existence. During the ceremony, the Commodore and Feature Captains review the feature requirements for the upcoming Convoy and identify the skill profiles needed for each feature. Code Engineers are then assigned to Feature Teams based on:
Skill profiles: matching engineer capabilities to feature requirements
Availability: ensuring no engineer is double-assigned
Code Engineers do not volunteer for Feature Teams and do not express preferences; the Press Gang assigns them where the organization needs them most. This prevents the formation of cliques, ensures that no Code Engineer becomes a single point of failure for any particular system, and distributes institutional knowledge across the organization rather than concentrating it in self-selected groups.
Instant Productivity
Because SADMF invests so heavily in Build Quality In through the Tribunal, the CSET, and the Enterprise Coding Standards Manual, Feature Teams should be able to deliver at maximum throughput immediately upon formation. If the organization has properly standardized its codebases, properly enforced its naming conventions through the EARB, and properly documented its systems through the Comprehensive Documentation Assurance Protocol, then any Code Engineer should be productive on any codebase within a single day. Teams that require extended ramp-up periods are exhibiting a symptom of insufficient standardization, and the DOUCHE should investigate the root cause. The goal is interchangeable parts: Code Engineers who can be assembled into any configuration and immediately function as a unit.
Workflow
The Feature Team’s workflow follows a precisely defined sequence:
The Feature Captain decomposes the feature into tasks during Convoy Planning and assigns them to individual Code Engineers.
Each Code Engineer works in their assigned feature branch.
Completed code is submitted to the CSET for review.
The Code Engineer addresses any standards violations.
The approved branch is handed to the Source Management Team for merging into the Conflict branch.
The Feature Team does not self-organize, does not choose its own practices, and does not deviate from the defined workflow. The framework has already determined the optimal process; the team’s role is to execute it.
Dissolution and the Clean Slate
At the conclusion of the Convoy, the Feature Team is dissolved. Code Engineers return to the available pool, and the relationships, context, and working rhythms they developed during the Convoy are deliberately discarded. This may seem wasteful, but it is essential to SADMF’s organizational resilience. Teams that persist across multiple Convoys develop informal processes, undocumented conventions, and interpersonal dynamics that create friction when members eventually leave or are reassigned. By dissolving and reforming teams each Convoy, SADMF ensures that the organization never depends on any particular team configuration and that every delivery cycle begins with a clean slate, free from the accumulated technical and social debt of previous iterations.
See Also
Press Gang for the ceremony that forms Feature Teams
3.5 - Quality Authority
Manual testing specialists who serve as the final arbiter of requirements, because the only TRUE way to test is by hand!
Verifying quality is a specialist field that no Code Engineer is qualified to perform. This is not a reflection on the Code Engineer’s intelligence or dedication; it is a recognition that the skills required to build a system and the skills required to verify that system are fundamentally different disciplines. A Code Engineer who tests their own work is like a student grading their own exam: they will inevitably overlook the gaps in their understanding because those same gaps blind them to the deficiencies in their output. Testing also impedes the Code Engineer’s ability to do their only job: typing code. SADMF addresses this by establishing the Quality Authority as a dedicated team of testing specialists whose sole purpose is to validate that the software meets requirements through comprehensive manual test execution.
Requirements Interpretation
The Quality Authority is the final arbiter of what the requirements mean:
Ambiguous requirements: when a requirement is ambiguous, as requirements inevitably are, the Quality Authority interprets it.
Conflicting requirements: when a requirement conflicts with another requirement, the Quality Authority resolves the conflict.
Implementation disputes: when a Code Engineer implements a requirement differently than the Quality Authority expected, the Quality Authority’s interpretation prevails, because the Quality Authority has studied the requirements more deeply than any Code Engineer, who was focused on typing the code rather than understanding the broader business context.
The Quality Authority maintains a Requirements Interpretation Log that records every interpretive decision, creating an authoritative reference that prevents the same ambiguity from being re-debated in future Convoys.
Manual Test Execution
The Quality Authority creates, maintains, and manually executes test scripts based on their understanding of the requirements. Each test script specifies the exact steps to perform, the exact data to enter, and the exact results to observe. Test scripts are executed by hand because the end-user uses the system manually, and therefore the only TRUE way to test it is manually. Automated tests verify that code runs; manual tests verify that the software works as a human would experience it. The Quality Authority executes each test script exactly as written, recording pass or fail for each step, capturing screenshots of every result, and documenting any deviations in a Test Execution Report. The Test Execution Report is reviewed by the Development Integrity Assurance Team (DIAT) to validate that the testing was performed correctly.
Testing Cycle
The Quality Authority’s test execution cycle is integrated into the Convoy timeline:
The Source Management Team merges all feature branches into the Conflict branch and resolves all conflicts.
The Quality Authority receives the integrated build for testing.
The testing window is defined by the Commodore and typically spans two to three weeks, during which the Quality Authority executes every test script in the regression suite plus all new scripts written for the current Convoy’s features.
Defects discovered during testing are logged in the Defect Tracking Spreadsheet (a dedicated tab in the Release Tracking spreadsheet) and assigned back to the Code Engineers who wrote the offending code.
Code Engineers fix the defects, the Source Management Team re-merges, and the Quality Authority re-executes the affected test scripts.
This cycle repeats until the Quality Authority signs off that all tests pass.
Authority and Metrics
The Quality Authority’s sign-off is a prerequisite for the Change Rejection or Acceptance Party (CRAP) to review the Convoy’s changes. Without QA sign-off, the CRAP will not convene, and the Convoy cannot proceed to Deploy the Fleet. This gives the Quality Authority effective veto power over any release, a power that is appropriate given their role as the organization’s last line of defense against defective software reaching production.
Quality Authority members are measured by:
Defects found during testing: more is better, as it indicates thorough testing
Production defects found after release: fewer is better, as it indicates effective testing
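The two metrics pull in the same direction; a hypothetical escape-rate calculation is sketched below (SADMF defines the two counts but no combined score, so this formula is purely illustrative):

```python
def defect_escape_rate(found_in_testing: int, found_in_production: int) -> float:
    """Fraction of all observed defects that escaped the Quality
    Authority's manual testing and reached production (lower is better)."""
    total = found_in_testing + found_in_production
    if total == 0:
        return 0.0  # no defects observed anywhere
    return found_in_production / total

# A QA member credited with 47 defects found in testing and 3 escapes:
print(defect_escape_rate(47, 3))  # 0.06
```

Under such a score, the incentive structure is explicit: finding many defects in testing and few in production drives the rate toward zero.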
3.6 - Source Management Team (SMT)
The SMT authorizes branches, merges code, and resolves all conflicts so Code Engineers never have to!
To improve Code Engineer productivity by reducing the work required to integrate changes, SADMF introduces the Source Management Team. The premise is straightforward: merging code is complex, conflict resolution is error-prone, and neither activity produces features. Every minute a Code Engineer spends resolving a merge conflict is a minute not spent typing new code. The SMT eliminates this waste by centralizing all source control operations under a dedicated team. The SMT:
Authorizes new feature branches: approves and creates branches in the repository
Merges completed branches: integrates each Code Engineer’s work into the Conflict branch
Resolves all conflicts: determines which changes prevail when branches collide
Alerts the Quality Authority: signals when the Convoy is ready for testing
Code Engineers interact with their own feature branches and nothing else. Everything beyond that boundary is SMT territory.
Branch Authorization
The branch authorization process begins when a Feature Captain submits a Branch Request Form to the SMT, specifying:
The feature the branch will serve
The Code Engineers who require write access
The branch’s intended position in the Fractal-based Development map
The SMT reviews the request against the current branch topology documented in the Fractal-based Development map to ensure that the new branch will not create structural conflicts with existing branches. Once approved, the SMT creates the branch and grants write access to the assigned Code Engineers. Code Engineers may not create branches themselves, as unauthorized branches would introduce untracked parallel development that the SMT cannot monitor or manage. Every branch in the repository must appear on the Fractal-based Development map, and the SMT is the sole authority for updating that map.
Conflict Arbitration
Merging is the SMT’s most critical and time-consuming function. At defined integration points during the Convoy cycle, the SMT merges all completed feature branches into the Conflict branch. This is where Conflict Arbitration occurs: when multiple Code Engineers have modified the same files, the SMT resolves the conflicts by examining the changes, consulting the requirements, and determining which changes should prevail. The SMT does not ask the Code Engineers to resolve their own conflicts, as this would reintroduce the productivity loss that the SMT was created to eliminate. Instead, the SMT makes the resolution decision and documents it in the Conflict Resolution Log, which is reviewed by the Development Integrity Assurance Team (DIAT) to verify that resolutions did not introduce defects.
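A Conflict Resolution Log entry might look like the following; SADMF specifies that the log exists and is reviewed by the DIAT, but not its schema, so every field name here is an assumption:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ConflictResolution:
    """One entry in the SMT's Conflict Resolution Log (schema hypothetical)."""
    file_path: str
    branches: tuple[str, str]   # the two branches that collided
    prevailing_branch: str      # which change the SMT decided wins
    rationale: str              # recorded for later DIAT review
    resolved_on: date = field(default_factory=date.today)

# Example entry: two feature branches touched the same file, and the
# SMT ruled by consulting the requirements, not the Code Engineers.
entry = ConflictResolution(
    file_path="src/invoice/Total.java",
    branches=("feature/tax-rounding", "feature/discount-codes"),
    prevailing_branch="feature/tax-rounding",
    rationale="Requirements specify rounding is applied before discounts.",
)
print(entry.prevailing_branch)  # feature/tax-rounding
```

The append-only discipline of the log mirrors the Architecture Decision Log: resolutions are recorded, never rewritten.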
Documentation Branch Management
The SMT also manages the branching lifecycle for the Comprehensive Documentation Assurance Protocol (CDAP), ensuring that documentation branches follow the same controlled process as code branches. When the CSET approves a code change and the DIAT validates the testing, the SMT verifies that all associated documentation has been merged before marking the feature as integration-complete. This ensures that code and documentation remain synchronized throughout the Convoy, preventing the common problem of code shipping without updated documentation or documentation shipping that describes code not yet merged.
Metrics
SMT members are measured by:
Merge success rate: the percentage of merges completed without requiring Code Engineer consultation
Average conflict resolution time: how quickly the SMT resolves conflicts once encountered
Fractal-based Development map accuracy: how closely the map reflects the actual repository state
SMT members must pass the DevOps Process Excellence Assessment with high scores in the source management and branching strategy sections, and senior SMT members may pursue the SADMF Certified Source Architect credential. The SMT is the organization’s cartographer and diplomat, mapping the territory of code and negotiating peace when branches go to war.
See Also
Code Engineer for the role whose branches the SMT manages
Quality Authority for the role that receives the integrated build from the SMT
Commodore for the role the SMT reports to during Convoys
Systems Thinking for the principle behind dedicated source management
3.7 - Unit Tester
Dedicated specialists who write unit tests after code is delivered, because Code Engineers should focus on writing code!
Code Engineers should be focusing on writing code. This principle, so simple and so often ignored, is the foundation of the Unit Tester role. In organizations that lack this role, Code Engineers are expected to write their own unit tests, a practice that introduces three compounding problems:
Feature Completion Ratio erosion: it diverts coding capacity toward testing, reducing the Feature Completion Ratio by consuming time that should be spent on features.
Conflict of interest: a Code Engineer who writes tests for their own code will unconsciously write tests that confirm their assumptions rather than challenge them.
Systems Thinking blur: it blurs the Systems Thinking that SADMF depends upon, mixing production code and test code in the same mental context and the same workflow.
The Unit Tester role resolves all three problems by establishing a dedicated specialist who writes unit tests after the code is delivered.
Workflow
The Unit Tester receives the approved code along with the requirements that it implements.
The Unit Tester studies the code, identifies the logical paths and boundary conditions, and writes unit tests that verify each path and condition.
The Unit Tester does not consult the Code Engineer about the code’s intended behavior; the code itself is the specification, and the Unit Tester’s tests verify that the code does what the code says it does.
If the Unit Tester discovers that the code behaves in a way that seems inconsistent with the requirements, they log a discrepancy in the Discrepancy Report for the Quality Authority to evaluate.
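The independence described above can be made concrete; in the sketch below, `calculate_late_fee` stands in for a hypothetical production function received after CSET approval (nothing here is prescribed by SADMF itself):

```python
import unittest

# Hypothetical production function received by the Unit Tester after
# CSET approval. Per the workflow, its behavior, not the author's
# intent, is the specification the tests pin down.
def calculate_late_fee(days_late: int) -> float:
    if days_late <= 0:
        return 0.0
    return min(5.0 + 0.5 * days_late, 25.0)

class TestCalculateLateFee(unittest.TestCase):
    """Tests written from the code alone: the Unit Tester does not
    consult the Code Engineer about intended behavior."""

    def test_not_late_means_no_fee(self):
        self.assertEqual(calculate_late_fee(0), 0.0)

    def test_first_day_late(self):
        self.assertEqual(calculate_late_fee(1), 5.5)

    def test_fee_is_capped(self):
        # The code caps the fee at 25.0. Whether the requirements agree
        # belongs in the Discrepancy Report, not in this test.
        self.assertEqual(calculate_late_fee(1000), 25.0)

# Run the suite without sys.exit so the result can be inspected.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestCalculateLateFee)
)
```

The cap test illustrates the model's core rule: the test documents what the code does, and any mismatch with the requirements is routed to the Quality Authority rather than resolved by conversation.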
The Value of Temporal Separation
The temporal separation between coding and testing is a feature of this model, not a flaw. When a Code Engineer writes tests simultaneously with their code, the tests and the code evolve together, sharing the same assumptions and the same blind spots. By introducing a time gap and a different person, SADMF ensures that the tests are written with fresh eyes and an independent understanding of the requirements. The Unit Tester approaches the code as an outsider, seeing it for the first time and testing it without the author’s preconceptions about what it should do. This independence produces tests that catch defects the Code Engineer would never have found, because the Code Engineer would never have written a test for a scenario they did not anticipate.
Unit Test Repository
Unit Testers maintain the Unit Test Repository, a centralized collection of all unit tests organized by feature, by Convoy, and by Code Engineer. The repository is managed by the Source Management Team and follows the same Fractal-based Development branching pattern as production code. When the Source Management Team merges feature branches into the Conflict branch, they also merge the corresponding unit test branches, ensuring that the integrated build includes both the production code and its unit tests. The Unit Tester verifies that all tests pass on the integrated build and reports the results to the Feature Captain and the Quality Authority. Test failures on the integrated build that passed on the feature branch indicate a conflict introduced during merging, which the Source Management Team investigates.
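The triage rule in the last sentence can be sketched in a few lines of Python. This is an illustration only; the function name and the routing strings are assumptions, not anything the Binder prescribes.

```python
def triage_test_result(passed_on_feature_branch: bool,
                       passed_on_conflict_branch: bool) -> str:
    """Route a unit-test result after the Source Management Team merges
    feature and test branches into the Conflict branch.

    A test that passed on the feature branch but fails on the integrated
    build indicates a conflict introduced during merging, which the
    Source Management Team investigates. Any other failure is logged for
    the Quality Authority via the Discrepancy Report.
    """
    if passed_on_conflict_branch:
        return "report pass to Feature Captain and Quality Authority"
    if passed_on_feature_branch:
        return "Source Management Team investigates merge conflict"
    return "log discrepancy for the Quality Authority"
```

The point of the sketch is that the Unit Tester never debugs the failure themselves; every outcome routes to another role.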
Distinction from the Quality Authority
The Unit Tester role is distinct from the Quality Authority, which performs manual end-to-end testing. The Unit Tester writes automated tests at the unit level; the Quality Authority executes manual tests at the system level. This layered approach ensures that code is tested at multiple granularities by multiple independent teams, creating a comprehensive quality net that no single testing approach could provide.
Metrics
Unit Testers are measured by:
Tests written per Convoy: the volume of test coverage produced each delivery cycle
Code coverage percentage: the proportion of production code exercised by their tests
Unit Testers who achieve high coverage while maintaining test quality may be recognized at the Tribunal and considered for advancement to the DIAT, where they can apply their testing expertise to validating the work of the Quality Authority.
See Also
Code Engineer for the role whose code the Unit Tester tests
Quality Authority for the role that performs manual system-level testing
The roles responsible for defining what gets built and ensuring it aligns with enterprise direction.
4.1 - Co-Owner, Product (COP)
The undivided Single Point of Contact for a product, shared across multiple COPs who collectively own accountability for decisions no single person could survive alone!
Product ownership is too consequential to entrust to one person. A single Product Owner may be biased, unavailable, or simply wrong. SADMF addresses this vulnerability by distributing undivided product ownership across a council of Co-Owners, Product, each of whom serves as the sole Single Point of Contact for their product, alongside the other COPs who are also each the sole Single Point of Contact for that same product. This structure ensures that accountability is never diluted, because every COP is individually and fully accountable, and together they are collectively and fully accountable, which compounds accountability rather than dividing it. When something goes wrong, there is never any ambiguity about who is responsible: everyone is responsible, and any one of them can be asked to account for any decision made by any of them.
Single Point of Contact
The COP’s designation as Single Point of Contact means that all questions, concerns, escalations, and decisions regarding the product flow through the COP. Stakeholders never need to wonder who owns a product; they ask the COP. Developers never need to wonder who to escalate to; they escalate to the COP. When there are multiple COPs for a product, stakeholders ask any of them, because each COP has identical authority and will give authoritative answers. If the answers from different COPs conflict, those COPs are accountable to one another for alignment, which is handled through the Product Direction Arbitration Council (PDAC), a body specifically established to resolve the inevitable disagreements that arise when multiple people each have undivided authority over the same thing.
The COP does not own the backlog directly. Backlog governance belongs to the PDAC. The COP owns the decisions that emerge from the PDAC, which are distinct from the decisions made by the PDAC, and which are further distinct from the decisions made by other COPs operating in the same decision space. This separation of decision types ensures that the COP retains strategic autonomy while still participating in the consensus-based governance structure that prevents any one COP from acting unilaterally.
Commitment Extraction
The COP's most critical function is securing delivery commitments from the technical teams. Because Code Engineers and Quality Authorities may not naturally volunteer that a given deadline is achievable (whether from excessive caution, incomplete information, or a temperamental resistance to optimism), the COP is trained in the four-step SADMF Commitment Extraction methodology:
1. The COP presents the deadline to the technical team and asks whether it is achievable.
2. If the answer is yes, the commitment is documented in the Release Tracking spreadsheet and signed by the relevant parties.
3. If the answer is anything other than yes, the COP re-presents the business context, the strategic importance of the deadline, and the personal accountability implications of non-delivery.
4. The COP asks again.
Steps 2–4 repeat until the deadline is confirmed as achievable.
This methodology is grounded in the SADMF principle that technical estimates are inherently conservative and that engineers who express uncertainty are not communicating facts about the future, they are communicating feelings about the present. The COP’s role is to help the technical team move past feelings and toward commitment. A commitment extracted through persistent questioning is considered equally valid to one offered voluntarily, and in practice more reliable, because the team has had more time to internalize it.
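The extraction loop can be sketched as follows, assuming a hypothetical sequence of answers given by the technical team on successive re-presentations. Faithful to the methodology, no branch revises the deadline.

```python
def extract_commitment(answers):
    """Iterate the Commitment Extraction steps over the answers the
    technical team gives on successive re-presentations. The steps
    repeat until the answer is yes; the deadline itself is never
    revised, because the methodology is considered sound.
    """
    re_presentations = 0
    for answer in answers:
        if answer == "yes":
            return ("document in Release Tracking spreadsheet",
                    re_presentations)
        # Re-present business context, strategic importance, and
        # personal accountability implications; then ask again.
        re_presentations += 1
    # The sequence ended without a yes: by definition, extraction
    # effort was insufficient, so more extraction is prescribed.
    return ("additional Commitment Extraction training", re_presentations)
```

The loop's only terminating outcomes are a documented commitment or more training; "the deadline was unrealistic" is not a representable state.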
Decision Authority
Within the COP’s domain, the following decisions belong exclusively to the COP (and, jointly, to the other COPs):
Feature prioritization: determining which features are most important, subject to PDAC ratification
Requirement interpretation: clarifying ambiguous requirements, except when the Quality Authority has already interpreted them
Deadline confirmation: certifying that all committed dates are achievable, based on commitments extracted from technical staff
Stakeholder communication: informing stakeholders of product status, unless the Commodore or Chief Signals Officer is communicating the same information through separate channels
The COP does not make architectural decisions; that is the domain of the Enterprise Architecture Review Board. The COP does not approve changes; that is the domain of the Change Rejection or Acceptance Party (CRAP). The COP does not write requirements in sufficient detail for implementation; that is the domain of the Feature Captain. The COP owns the decisions between those decisions.
Performance and Accountability
COP performance is measured by:
Commitment accuracy rate: the percentage of extracted commitments that result in on-time delivery
A COP whose extracted commitments repeatedly fail to materialize will be reviewed at the Tribunal. Because the Commitment Extraction methodology is considered sound, missed commitments are attributed to insufficient extraction effort rather than to unrealistic deadlines. The corrective action is additional Commitment Extraction training, not deadline revision.
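A minimal sketch of the Tribunal's review logic follows. The 80% threshold is an assumption for illustration; the framework specifies no numeric cutoff.

```python
def commitment_accuracy_rate(on_time: int, total: int) -> float:
    """Percentage of extracted commitments delivered on time."""
    return 100.0 * on_time / total

def corrective_action(accuracy_rate: float, threshold: float = 80.0) -> str:
    """Because the Commitment Extraction methodology is considered
    sound, a missed commitment reflects insufficient extraction
    effort, never an unrealistic deadline.
    """
    if accuracy_rate >= threshold:
        return "recognition at the Tribunal"
    return "additional Commitment Extraction training"
```

Whatever the threshold, the low branch never returns "revise the deadlines", which is the design choice the surrounding text describes.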
4.2 - DevOps Usage & Compliance Head Engineer (DOUCHE)
The DOUCHE owns the DevOps Process Binder and holds all teams accountable to the Right Way of doing DevOps!
If the Right Way to do DevOps is not owned and controlled by an executive, then nobody will do it. This is not cynicism; it is an observation confirmed by decades of organizational behavior research and by every failed transformation that lacked executive ownership of process compliance. The DevOps Usage & Compliance Head Engineer exists to ensure that the Right Way is not merely documented but enforced, not merely communicated but internalized, and not merely measured but consequential. The DOUCHE is the named person accountable for codifying the Right Way in the DevOps Process Binder and holding every team, every role, and every individual accountable to the standards it contains. Without the DOUCHE, DevOps devolves from a disciplined methodology into a collection of ad hoc practices that vary by team, by project, and by the personal preferences of whoever happens to be the loudest voice in the room.
The DevOps Process Binder
The DevOps Process Binder is the DOUCHE’s primary instrument of governance. This comprehensive document defines every aspect of the organization’s DevOps practices:
Every ceremony, metric, and role interaction in the delivery lifecycle
The Binder is updated quarterly by the DOUCHE in consultation with the Admiral’s Transformation Office and is versioned to ensure that teams always know which edition they are expected to follow. Teams found using outdated editions of the Binder are flagged for non-compliance and required to complete a Binder Familiarization Session before their next Convoy.
Enforcement Mechanisms
The DOUCHE’s enforcement mechanism is the DevOps Process Excellence Assessment, a weekly assessment that measures every individual’s adherence to the practices defined in the Process Binder. The DOUCHE reviews all Assessment results, identifies patterns of non-compliance, and initiates corrective actions ranging from individual coaching to team restructuring.
The DOUCHE also conducts quarterly Process Compliance Audits, where a random sample of teams is subjected to a deep review of their actual practices against the Binder’s specifications. Auditors examine:
Commit histories
Branch structures
Ceremony attendance records
Documentation artifacts
Teams that pass the audit receive a Process Excellence Certificate; teams that fail receive a Corrective Action Plan with deadlines and mandatory follow-up reviews.
Preventing Process Drift
By staffing the DOUCHE role, the organization prevents process drift and the eventual mutation of the Right Way. Process drift is the silent killer of transformations: it begins with small, seemingly reasonable deviations, each one justified by local context or time pressure, that collectively erode the standardization that the framework depends upon. A team that skips one ceremony because they are “too busy” will skip two the next sprint, and within a quarter, they will have constructed their own informal process that bears no resemblance to the Binder. The DOUCHE detects this drift through the Assessment and audit mechanisms and corrects it before it metastasizes. The DOUCHE’s presence alone serves as a deterrent, as teams that know their practices will be audited are far less likely to deviate than teams operating without oversight.
Reporting and Metrics
The DOUCHE reports directly to the Admiral’s Transformation Office and provides the data that feeds the SADMF Maturity Score, the organization-wide metric that tracks annual progress toward framework maturity. The DOUCHE’s own performance is measured by:
Overall Assessment scores: the aggregate excellence scores across all individuals
Audit pass rates: the percentage of teams passing quarterly compliance audits
Year-over-year Maturity Score trend: whether the organization is progressing toward full SADMF maturity
A rising Maturity Score indicates that the DOUCHE is successfully embedding the Right Way into the organization’s culture; a declining score triggers a review of the DOUCHE’s enforcement strategy and, if necessary, additional staffing to support the DOUCHE’s compliance mission. The DOUCHE is not merely a role; the DOUCHE is the conscience of the organization, the persistent voice reminding everyone that the Right Way exists, that it is documented, and that deviation will be detected.
See Also
Continuous Learning for the principle behind mandatory framework memorization
Certifications for the credentials that validate DevOps process knowledge
CI/CD/ED for the delivery practice the Process Binder codifies
4.3 - System of Authority
The team of teams accountable for implanting SADMF in your organization through contractors and consultants!
The System of Authority is the organizational layer responsible for implanting SADMF in your organization and ensuring that it takes root. The SOA is not composed of internal employees; it is staffed by contractors and consultants with diverse points of view who bring the external perspective necessary to transform an organization that cannot transform itself. Internal staff are too embedded in existing culture, too loyal to existing processes, and too sympathetic to existing pain points to drive the kind of fundamental change that SADMF requires. The SOA’s external composition ensures objectivity, urgency, and the willingness to make difficult recommendations without concern for internal politics or long-term relationship management. The SOA arrives, implants the framework, and maintains it until the organization achieves self-sustaining maturity.
Sub-Team Structure
The SOA operates as a team of teams, with each SOA sub-team assigned to a specific domain of the transformation:
Process implementation: ensures that every practice from CI/CD/ED to Release Tracking is adopted according to the specifications in the DevOps Process Binder maintained by the DOUCHE.
These sub-teams operate under the unified direction of the Admiral’s Transformation Office (ATO), ensuring coherence across all transformation activities.
Execution of ATO Directives
The SOA’s principled practitioners focus on implementing the orders of the ATO with precision and consistency. When the ATO directs that all teams must adopt Fractal-based Development, the SOA deploys to each team to train them on the branching pattern, verify their compliance, and report back to the ATO on adoption progress. When the ATO mandates a new ceremony, the SOA facilitates the first several instances to establish the pattern and then monitors ongoing execution. The SOA does not negotiate with teams about whether to adopt a practice; the ATO has already decided, and the SOA’s role is execution. Teams that resist are documented in the SOA’s Transformation Resistance Log, which the ATO reviews during the Tribunal to determine appropriate corrective actions.
Trusted Advisor Role
A critical function of the SOA is serving as trusted advisors for the teams so they can report the ground-level truth to leadership during the Tribunal. This advisory relationship is built on the understanding that the SOA’s primary loyalty is to the framework, not to the team. When a team confides in their SOA advisor that they are struggling with a practice, the SOA advisor helps them develop a remediation plan while simultaneously reporting the struggle to the ATO. This dual role of confidant and informant is not contradictory; it is essential. Teams that are struggling need help, and the ATO cannot provide help if it does not know the struggle exists. The SOA’s transparency ensures that problems surface early, when they can be addressed through additional training or process reinforcement, rather than late, when they manifest as missed deadlines and failed Convoys.
Readiness Assessment
The SOA also focuses on updating plans, collecting metrics, and assessing organizational readiness for each phase of the transformation roadmap. The SOA conducts readiness assessments before each major milestone, evaluating whether teams have the training, tools, and process maturity required to proceed. Teams that fail readiness assessments are held back until they meet the criteria, even if this delays the overall roadmap. The SOA’s assessment methodology is documented in the SOA Assessment Framework, a companion document to the DevOps Process Binder that specifies the evaluation criteria, scoring rubrics, and pass/fail thresholds for each transformation phase. The SOA’s ultimate measure of success is the organization’s ability to sustain SADMF practices without SOA support, though in practice, most organizations find that the complexity of the framework requires ongoing SOA engagement indefinitely.
See Also
Certifications for the credentials SOA practitioners hold
4.4 - System of Service
The team of teams accountable for achieving deadlines and shipping code through servant leadership and self-governance!
The System of Service is the organizational layer where software actually gets built and shipped. While the System of Authority (SOA) focuses on implanting and maintaining the framework, the SOS focuses on delivering working software within the deadlines established by the Admiral’s Transformation Office. The SOS is a team of teams, encompassing every Feature Team, every Code Engineer, every Build Engineer, and every support role that directly contributes to the DevOps Release Convoy. The SOS is where plans become code, where code becomes builds, and where builds become deployments. It is the engine room of the SADMF vessel, and its members are expected to row in perfect synchrony under the direction of the chain of command.
Servant Leadership and Self-Governance
The SOS looks to the chain of command for servant leadership to ensure self-governance. This may appear contradictory, but it reflects SADMF’s sophisticated understanding of organizational dynamics. True self-governance is not the absence of leadership; it is the presence of leadership so effective that teams internalize its directives and execute them without explicit instruction. The leadership cascade operates as follows:
Feature Captains provide servant leadership to the Code Engineers
Code Engineers provide servant leadership to the code itself
Each level of the hierarchy serves the level below by removing ambiguity, making decisions, and absorbing the complexity that would otherwise distract the teams from their primary mission of delivering features. The chain of command does not constrain the SOS; it liberates the SOS from the burden of autonomous decision-making.
Daily Instruction Cascade
The SOS is instructed on day-to-day work through a structured cascade of ceremonies and communications:
The Feature Captains decompose features into tasks for the Code Engineers.
Each morning, the Mandatory Status Synchronization ensures that every SOS member knows what was accomplished yesterday, what is planned for today, and what impediments exist.
Impediments are escalated up the chain of command, where they are resolved by the level of authority appropriate to their scope.
The SOS is empowered to predictably deliver on time and on budget. This empowerment takes the form of clearly defined processes, pre-approved tools, and standardized workflows that eliminate the need for teams to make ad hoc decisions that could introduce variance. When every team follows the same Fractal-based Development branching pattern, uses the same naming conventions from the EARB’s Book of Names, and passes through the same review gates (CSET, DIAT, CRAP), delivery becomes predictable because the process is deterministic. Variance is the enemy of predictability, and the SOS achieves predictability by eliminating variance at every level. Code Engineers who identify process improvements are encouraged to document them and submit them through the governance process rather than implementing them locally, as local improvements are local variance by another name.
Performance Measurement
The SOS’s performance is measured collectively through the Feature Completion Ratio and individually through the DevOps Process Excellence Assessment. The Chief Signals Officer publishes the SOS’s aggregate metrics daily, and the Commodore reviews them during the Mandatory Status Synchronization. Teams within the SOS that consistently underperform may be dissolved through the Press Gang and their members redistributed, while high-performing teams are recognized at the Tribunal with Certificates of Excellence, permanently recorded in their PeopleWare profiles as formal acknowledgement of distinguished Framework performance. The SOS delivers; the SOA governs; and together, they form the complete organizational structure that SADMF requires.
See Also
Build Quality In for the principle that enables predictable delivery
4.5 - Product Direction Arbitration Council
PDAC ensures every stakeholder’s voice is heard in product decisions, preventing any single perspective from dominating the backlog!
The Product Direction Arbitration Council is the cross-functional body responsible for maintaining, prioritizing, and adjudicating the feature backlog for each product line. In organizations without a PDAC, backlog decisions fall to a single Product Owner, a role the SADMF recognizes as structurally dangerous. A Product Owner represents one set of business priorities. The enterprise has many stakeholders, and a single Product Owner will, by definition, underrepresent most of them. The PDAC corrects this by replacing individual product ownership with a council of representatives drawn from every business unit with a stake in the product’s direction. Every voice is included. Every priority is weighed. Every commitment is shared.
The PDAC consists of between seven and fifteen members depending on the product’s stakeholder footprint. Typical representation includes Business Analysis, Compliance, Legal, Finance, Customer Success, Operations, the Enterprise Architecture Review Board, and at least one Feature Captain from the most recent Convoy cycle. The council meets biweekly to review the backlog, add new items, reprioritize existing items, and resolve disputes between competing priority requests. All decisions require consensus, which the SADMF defines as the absence of sustained objection. If a member objects to a prioritization decision, the discussion continues until the objection is resolved or the member agrees to defer their objection to the following session.
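The consensus rule can be sketched as a loop over discussion rounds. The round structure and return strings are illustrative assumptions; the framework defines only the rule itself.

```python
def reach_consensus(sustained_objections_per_round):
    """SADMF consensus is the absence of sustained objection.
    Discussion continues round by round until no member sustains an
    objection; otherwise, remaining objectors agree to defer their
    objection to the following session.
    """
    rounds = 0
    for sustained_objections in sustained_objections_per_round:
        rounds += 1
        if sustained_objections == 0:
            return ("consensus reached", rounds)
    return ("objections deferred to the following session", rounds)
```

Observe that the loop never records a "no": objections are either exhausted or deferred, so the consensus record contains only agreement.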
The Prioritization Protocol
Backlog items submitted to the PDAC must be accompanied by a Business Value Justification Statement (BVJS) completed by the sponsoring stakeholder. The BVJS documents the business case, the expected beneficiaries, the success criteria, and the organizational impact of deprioritization. PDAC members review submitted BVJSs before each session and rank them privately before the meeting begins. During the session, the PDAC chairperson, a rotating role held for one Convoy cycle at a time, facilitates discussion of each submitted item until the council reaches consensus on its relative priority.
Items that fail to achieve consensus are placed on the Arbitration Agenda for the following session, where they receive extended deliberation time. If an item appears on the Arbitration Agenda for three consecutive sessions without resolution, it is escalated to the Admiral’s Transformation Office for executive adjudication. This escalation mechanism ensures that no item is lost due to persistent disagreement while also providing an incentive for council members to reach consensus rather than escalate: ATO adjudications are binding and final, and historically favor the interpretation that requires the least cross-functional coordination.
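The routing logic for a backlog item reduces to a single threshold check. This is a sketch; the return strings are illustrative, not official protocol vocabulary.

```python
def route_backlog_item(sessions_without_consensus: int) -> str:
    """Route an item per the Prioritization Protocol: unresolved items
    go to the Arbitration Agenda for extended deliberation; after three
    consecutive sessions without resolution, the item escalates to the
    ATO, whose adjudication is binding and final.
    """
    if sessions_without_consensus == 0:
        return "prioritized by PDAC consensus"
    if sessions_without_consensus < 3:
        return "placed on Arbitration Agenda for extended deliberation"
    return "escalated to the ATO for executive adjudication"
```

The incentive described above falls out of the third branch: once an item crosses the threshold, the council loses control of the outcome.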
The Role of the Technical Lead
In organizations that have not yet constituted a full PDAC, the Feature Captain may serve as interim product direction authority. The Feature Captain’s dual role, tracking delivery progress and guiding product decisions, is acknowledged as a span-of-control expansion that demands exceptional organizational skills. Feature Captains in this configuration are expected to manage the backlog, facilitate stakeholder conversations, attend all PDAC-equivalent sessions, and continue their standard delivery tracking responsibilities without reduction in either function. This arrangement is explicitly transitional; the ATO’s transformation roadmap includes PDAC formation as a maturity milestone, typically reached in the second or third year of SADMF adoption.
Feature Captains serving as interim product direction authority should resist pressure from individual stakeholders to make unilateral backlog decisions outside the formal process. When a stakeholder requests a priority change informally, the Feature Captain should direct them to file a BVJS for consideration at the next scheduled session. Informal priority changes are invisible to the council’s consensus record and cannot be reflected in the Release Tracking spreadsheet, which means they cannot be reported upward through the Mandatory Status Synchronization protocol. An untracked priority change is, by definition, an unauthorized one.
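The Feature Captain's rule reduces to a single branch. The function below is a hypothetical illustration of that rule, nothing more.

```python
def handle_priority_request(bvjs_filed: bool) -> str:
    """Apply the interim authority's one rule: an untracked priority
    change is, by definition, an unauthorized one. Only requests with
    a filed BVJS reach the next scheduled session; informal requests
    are redirected to the formal process.
    """
    if bvjs_filed:
        return "queue for consideration at the next scheduled session"
    return "direct the stakeholder to file a BVJS"
```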
Requirements Agility
One of the PDAC’s most valued organizational contributions is its responsiveness to changing business conditions. Because the council meets biweekly, backlog priorities can be refreshed on a fourteen-day cycle, ensuring that the product’s direction reflects current business needs rather than decisions made months ago under different conditions. The SADMF considers this responsiveness a form of agility: the organization is not locked into a fixed backlog, but is continuously recalibrating toward the highest-value work available at any moment.
Code Engineers who have partially completed work on a feature that has been deprioritized should document their progress using the Comprehensive Documentation Assurance Protocol and await reassignment to the new highest-priority item. The work is not lost; it is paused. If the feature is reprioritized in a future session, the documentation ensures that a different engineer can resume it from a known state. The PDAC’s responsiveness, combined with CDAP’s documentation discipline, ensures that no engineering effort is ever truly wasted, only deferred.
See Also
Feature Captain for the role that may serve as interim product direction authority