Review & Governance

The oversight bodies and boards that ensure every change passes through proper channels.

1 - Change Rejection or Acceptance Party

CRAP ensures that only thoroughly reviewed and unanimously approved changes reach the Convoy!

The Change Rejection or Acceptance Party is the final human checkpoint between a proposed change and its inclusion in the next DevOps Release Convoy. While automated checks can verify syntax and tests can confirm functional behavior, neither can assess whether a change is truly ready for production. That judgment requires the wisdom, detachment, and institutional authority that only a formal review board can provide. The CRAP convenes twice per week, reviewing every change that has passed through the Code Standards Enforcement Team (CSET) and the Development Integrity Assurance Team (DIAT). No change may proceed to the DORC without CRAP approval, regardless of its size, urgency, or the seniority of its author.

Composition and Objectivity

The CRAP meeting dais seats seven members drawn from areas of the organization with no direct knowledge of the systems being changed. This is not an oversight; it is the CRAP’s greatest strength. Reviewers who understand the system being modified are inherently biased toward approval, as their familiarity breeds sympathy for the developer’s choices. Reviewers from unrelated domains bring the detachment and objectivity necessary for a proper evaluation.

The seven-member composition ensures that no single perspective dominates, and the diversity of ignorance guarantees that the review focuses on process compliance rather than technical merit, which is exactly as it should be.

Voting and Approval

All approval decisions are made by unanimous secret vote. Each CRAP member casts their ballot independently, without discussion, after reviewing the change package. If even one member votes to reject, the change is returned to the submitting team with a Rejection Notice that specifies which checklist items were incomplete or which documentation was insufficient. The secret ballot prevents social pressure from influencing votes, ensuring that a junior CRAP member feels as empowered to reject a change from a senior Code Engineer as from a new hire. Unanimous approval is required because the strength of a change gate is measured by its strictest reviewer, not its most lenient. If six of seven reviewers approve but one has concerns, those concerns represent an unresolved risk that the organization cannot afford to accept.
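The unanimity rule reduces to a simple fold over the ballots. The sketch below is illustrative only; the `Ballot` class, its fields, and `crap_decision` are invented names, not part of any published SADMF tooling:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Ballot:
    member: str        # CRAP member casting the vote
    approve: bool      # True = approve, False = reject
    concern: str = ""  # recorded on rejection; feeds the Rejection Notice

def crap_decision(ballots: list[Ballot]) -> tuple[bool, list[str]]:
    """Unanimous approval: a single rejection returns the change to the
    submitting team along with every recorded concern."""
    assert len(ballots) == 7, "the CRAP seats exactly seven members"
    concerns = [b.concern for b in ballots if not b.approve]
    return (len(concerns) == 0, concerns)
```

Note that the concerns of the lone dissenter are returned verbatim: the strictest reviewer sets the bar.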

The Supplicant’s Oath

Before presenting their change to the CRAP meeting, supplicants must take a formal oath affirming that they have personally applied every control on the change checklist. The oath is administered by the CRAP chairperson and recorded in the meeting minutes. This may seem ceremonial, but the oath serves a critical psychological function: it transforms checklist completion from a bureaucratic task into a personal commitment. A Code Engineer who has sworn an oath is far less likely to have skipped steps than one who merely checked boxes on a form. The oath text is standardized by the Admiral’s Transformation Office and updated annually to reflect new checklist items. Supplicants who are later found to have sworn falsely are referred to the Tribunal for review, and their oath violation is recorded in their PeopleWare profile.

Change Rejection Log and Oversight

The CRAP also maintains the Change Rejection Log, a comprehensive record of every rejected change, the reasons for rejection, and the number of resubmissions required before acceptance. This log is reviewed monthly by the Review Board Review Board (RBRB) to ensure that the CRAP’s rejection rate remains within acceptable bounds. A rejection rate that is too low suggests insufficient rigor; a rate that is too high may indicate that the change checklist has become unreasonably complex, in which case the Admiral’s Transformation Office will add additional checklist items to address the root cause. The CRAP’s standards are set by the iteration goals published by the ATO, and the CRAP is empowered to reject any change that does not meet those standards, regardless of business pressure or delivery timelines.

2 - Code Standards Enforcement Team

CSET performs all code reviews so that Code Engineers can focus on typing code instead of reading it!

The Code Standards Enforcement Team exists because the uncomfortable truth about code review is that the people who wrote the code are the least qualified to review it. Code Engineers are too close to the problem, too invested in their own solutions, and too pressed for time to perform the dispassionate, rigorous evaluation that quality code demands. Additionally, performing code review takes time away from coding, which is the Code Engineer’s only job. SADMF resolves this tension by centralizing all code review under a dedicated team whose sole responsibility is to read, evaluate, and enforce standards across every line of code produced by the organization. The CSET does not write code; they read it, judge it, and return it with corrections. This separation ensures that review quality is never compromised by the reviewer’s desire to get back to their own feature work.

Enterprise Coding Standards Manual

The CSET is responsible for defining and enforcing all coding standards for the enterprise. These standards are codified in the Enterprise Coding Standards Manual, a living document maintained by the CSET and approved by the Enterprise Architecture Review Board (EARB). The Manual covers every aspect of code formatting and structure including, but not limited to:

  • Indentation depth and the use of tabs versus spaces
  • Approved variable and method names from the EARB’s Book of Names
  • Comment format and density requirements
  • Maximum line length, maximum method length, and maximum file length
  • Approved design patterns for each programming language

Standards that are not in the Manual are not standards, and code that violates standards that are in the Manual will not be approved regardless of whether the code functions correctly. Correctness is necessary but not sufficient; conformity is the higher bar.

Review Process

The CSET review process begins when a Code Engineer submits their changes to the CSET review queue:

  1. The CSET assigns a reviewer from a rotation, ensuring that no reviewer becomes too familiar with any particular codebase, which would risk the development of sympathy or context that could bias their judgment.
  2. The reviewer evaluates the submission against the Enterprise Coding Standards Manual using a 47-point checklist.
  3. Each checklist item requires a pass or fail determination; partial passes are not permitted, as ambiguity in standards enforcement is the first step toward standards erosion.
  4. Submissions that fail any checklist item are returned to the Code Engineer with detailed annotations specifying the violations and the corresponding Manual sections.
  5. The Code Engineer corrects the violations and resubmits, and the cycle repeats until all 47 points pass.
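The five steps above amount to a fixed-point loop: resubmit until all 47 points pass. A minimal sketch, with invented function names and a toy two-item checklist standing in for the Manual’s 47:

```python
def cset_review(change: dict, checklist: dict) -> list[str]:
    """Return every checklist item the submission fails; partial passes
    are not permitted, so each item is a strict pass/fail predicate."""
    return [item for item, check in checklist.items() if not check(change)]

def review_cycle(change: dict, checklist: dict, correct) -> int:
    """Run the submit/annotate/correct loop until every item passes and
    return the number of passes through the CSET (the figure tracked as
    the Standards Adherence Iteration Count)."""
    iterations = 1
    while violations := cset_review(change, checklist):
        change = correct(change, violations)  # engineer fixes annotated violations and resubmits
        iterations += 1
    return iterations
```

The loop terminates only when `cset_review` returns no violations, mirroring the rule that ambiguity (a partial pass) is never permitted.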

Standards Adherence Metrics

The average code change passes through the CSET 2.3 times before approval, a number that the CSET tracks as the Standards Adherence Iteration Count. A high iteration count for a Code Engineer indicates insufficient familiarity with the Enterprise Coding Standards Manual and may trigger a referral to the DevOps Process Excellence Assessment for additional evaluation. A low iteration count across the organization might suggest that standards have become too lenient, prompting the CSET to propose additional rules to the EARB for inclusion in the Manual. The CSET also publishes a weekly Standards Compliance Report that ranks all Code Engineers by their average iteration count, first-pass approval rate, and most frequently violated standard. This report is distributed to the Feature Captains, Commodore, and Admiral’s Transformation Office for visibility.
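The weekly Standards Compliance Report ranking could be computed along these lines; the data shape and field names are assumptions for illustration, not a documented SADMF format:

```python
from collections import Counter
from statistics import mean

def compliance_report(history: dict) -> list[dict]:
    """history maps engineer -> list of (iteration_count, violated_items).
    Returns one row per engineer, ranked by average iteration count,
    best (lowest) first."""
    rows = []
    for engineer, reviews in history.items():
        counts = [count for count, _ in reviews]
        violations = Counter(v for _, items in reviews for v in items)
        rows.append({
            "engineer": engineer,
            "avg_iterations": round(mean(counts), 1),
            "first_pass_rate": sum(1 for c in counts if c == 1) / len(counts),
            "top_violation": violations.most_common(1)[0][0] if violations else None,
        })
    return sorted(rows, key=lambda row: row["avg_iterations"])
```

Sorting ascending by average iteration count puts the most Manual-compliant engineers at the top, which is presumably where the Feature Captains look first.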

Authority and Amendment Process

The CSET’s authority is absolute within the domain of code standards. Neither a Feature Captain nor a Commodore may override a CSET rejection, as doing so would undermine the integrity of the standards enforcement process. If a Code Engineer believes a standard is incorrect or counterproductive, they may submit a Standards Amendment Proposal to the EARB, which will review it at its next scheduled meeting in six weeks. Until the amendment is approved, the existing standard remains in force and the CSET will continue to enforce it. This ensures that standards evolve through deliberate governance rather than ad hoc exceptions driven by delivery pressure.

3 - Development Integrity Assurance Team

DIAT validates the work of QA to ensure that testing itself meets the organization’s quality standards!

The Development Integrity Assurance Team addresses a question that most organizations are afraid to ask: who tests the testers? The Quality Authority is responsible for manually executing test scripts and verifying that code meets requirements, but the Quality Authority’s own work is itself a human process, subject to the same errors, oversights, and shortcuts that affect any other activity. Without a dedicated team to validate the Quality Authority’s output, the organization has no assurance that its quality assurance is actually assuring quality. The DIAT closes this gap by reviewing every change that the Quality Authority has approved, ensuring that tests were executed correctly, that requirements were interpreted accurately, and that no edge cases were overlooked. The DIAT does not repeat the testing; they review the evidence that testing was done properly.

Composition

The DIAT is composed of senior-level practitioners who have demonstrated deep expertise in their respective domains and have achieved high scores on the DevOps Process Excellence Assessment:

  • Senior Code Engineers: bring deep knowledge of code behavior and edge cases
  • Senior Build Engineers: contribute expertise in environment configuration and build artifacts
  • Senior Designers: provide perspective on requirements interpretation and user intent

This seniority is essential because the DIAT must be able to identify subtle errors that less experienced practitioners would miss. A junior Code Engineer might accept a test result at face value, but a senior DIAT member will examine the test steps, the test data, the environment configuration, and the screenshots to confirm that the test actually validated what it claimed to validate. The DIAT’s review is forensic in nature, treating each test execution as evidence that must withstand scrutiny.

Review Process

The DIAT review process begins after the Quality Authority signs off on a change and before it is submitted to the Change Rejection or Acceptance Party (CRAP) for final approval:

  1. The DIAT reviewer examines the Quality Authority’s test execution log, verifying that every test script was executed in the correct order.
  2. The reviewer confirms that all prerequisite conditions were met and that the pass/fail determination was consistent with the observed results.
  3. The DIAT cross-references the test scripts against the original requirements to ensure that the Quality Authority did not inadvertently test the wrong thing or test the right thing with the wrong data.
  4. Discrepancies are documented in a DIAT Findings Report and returned to the Quality Authority for remediation.
  5. The change cannot proceed to the CRAP until the DIAT is satisfied.

Quarterly Test Script Audit

The DIAT’s authority extends beyond individual change review. They are also responsible for auditing the Quality Authority’s test script library on a quarterly basis, ensuring that:

  • Scripts remain current
  • Deprecated test cases have been removed
  • New requirements have corresponding test scripts

This audit produces the Test Coverage Integrity Report, which is reviewed by the Commodore and the Admiral’s Transformation Office. Gaps identified in the audit trigger the creation of new test scripts by the Quality Authority, which are then reviewed by the DIAT before being added to the library. This circular dependency between the Quality Authority and the DIAT ensures that both teams remain continuously engaged and that neither can operate without the other’s oversight.

Oversight of the DIAT

Some may argue that having a team to review the reviewers creates an infinite regression problem: if the DIAT validates QA, who validates the DIAT? SADMF addresses this through the Review Board Review Board (RBRB), which periodically reviews the decisions of all review bodies including the DIAT. Additionally, the DIAT’s own work is subject to the DevOps Process Excellence Assessment, ensuring that DIAT members are individually accountable for their framework knowledge and process adherence. The layered review structure is not redundant; it is resilient. Each layer catches what the previous layer missed, creating a defense-in-depth model that ensures quality is verified, the verification is validated, and the validation is reviewed.

4 - Enterprise Architecture Review Board

The EARB maintains the Book of Names, ensuring all Code Engineers use only approved words when naming things!

Naming is the hardest problem in software engineering, and the Enterprise Architecture Review Board ensures that no individual Code Engineer is burdened with solving it alone. Left to their own devices, Code Engineers will invent variable names, method names, class names, and service names according to their personal preferences, creating a Tower of Babel where every codebase speaks its own dialect. The EARB eliminates this chaos by maintaining the Book of Names, the master list that defines all acceptable words and word combinations that may be used for naming things during coding. If a word is not in the Book, it may not be used. If a combination is not in the Book, it may not be used. This discipline ensures that any Code Engineer joining a new Feature Team for the next Convoy will immediately recognize every identifier in the codebase, because every identifier was drawn from the same approved vocabulary.

The Book of Names

The Book of Names is organized into sections by domain:

  • Business Nouns: Core domain entities and concepts
  • Technical Verbs: Approved action words for methods and functions
  • Modifier Adjectives: Approved qualifiers for compound identifiers
  • Status Indicators: Approved words for state and condition naming
  • Compound Expressions: Pre-approved multi-word combinations

Each entry includes the approved word, its canonical spelling, its permitted abbreviation (if any), its allowed contexts, and usage examples. The Book currently contains 2,847 approved entries, a number that the EARB considers comprehensive but not final.
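Validating an identifier against the Book is mechanical: every word in a compound name must come from the approved vocabulary. The sketch below uses a toy three-section excerpt (the words shown are invented examples, not real Book entries):

```python
APPROVED = {  # a toy excerpt; the real Book holds 2,847 entries
    "business_nouns": {"invoice", "shipment", "manifest"},
    "technical_verbs": {"calculate", "validate", "dispatch"},
    "modifier_adjectives": {"pending", "approved"},
}

def identifier_allowed(name: str) -> bool:
    """Every word in a compound identifier must appear in the Book;
    a single unapproved word disqualifies the whole name."""
    vocabulary = set().union(*APPROVED.values())
    return all(word in vocabulary for word in name.split("_"))
```

Under this excerpt, `validate_pending_invoice` would pass while `frobnicate_invoice` would not, because `frobnicate` is not in the Book.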

New entries may be proposed by any member of the organization through the Name Submission Process, which requires the submitter to provide:

  • A justification
  • Three proposed usage examples
  • A statement confirming that no existing approved name adequately covers the intended meaning

Submissions are queued for the next EARB review meeting, where they are evaluated against the criteria of clarity, consistency, and necessity.

Review Cadence and Rejection Policy

The EARB meets every six weeks to review and, in most cases, reject new words submitted for inclusion in the Book. The six-week cadence is deliberate: it provides sufficient time for the EARB members to research each submission thoroughly and ensures that the Book is not diluted by hasty additions. The EARB’s default posture is rejection, because every new word added to the Book increases the vocabulary that Code Engineers must memorize and that the Code Standards Enforcement Team (CSET) must enforce. A lean vocabulary is a learnable vocabulary, and a learnable vocabulary is a consistent vocabulary.

Submissions are rejected for reasons including but not limited to:

  • The word is too similar to an existing approved word
  • The word is too domain-specific
  • The word is too colloquial
  • The word contains more than three syllables
  • The justification is insufficient

Rejected submissions may be resubmitted after a 12-week cooling-off period with additional justification.
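Several of these rejection grounds are mechanically checkable. A hedged sketch, with invented function names, a crude vowel-group heuristic standing in for however the EARB actually counts syllables, and an arbitrary 50-word threshold for "insufficient justification":

```python
import re

def syllable_estimate(word: str) -> int:
    """Rough syllable count via vowel groups; a stand-in heuristic only."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def screen_submission(word: str, approved: set[str], justification: str) -> list[str]:
    """Return the grounds on which a Name Submission would be rejected;
    an empty list means it survives to the EARB meeting."""
    reasons = []
    if syllable_estimate(word) > 3:
        reasons.append("contains more than three syllables")
    if any(word.lower().startswith(a[:4]) for a in approved):  # toy similarity test
        reasons.append("too similar to an existing approved word")
    if len(justification.split()) < 50:
        reasons.append("justification is insufficient")
    return reasons  # non-empty: rejected; resubmit after 12 weeks
```

Note that the default posture of the function, like that of the EARB, is rejection: a submission must clear every screen to proceed.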

Architectural Governance

The EARB also governs architectural decisions beyond naming, including the approved set of design patterns, approved technology stacks, and approved integration methods. When a Feature Team proposes to use a new library, framework, or architectural pattern, the proposal must be submitted to the EARB for evaluation. The EARB assesses the proposal against:

  • The organization’s existing technology landscape
  • The training requirements for adoption
  • The impact on the Enterprise Coding Standards Manual maintained by the CSET

Proposals that introduce technologies not already in the approved stack face a particularly high bar, as each new technology increases organizational complexity and the scope of the DevOps Process Excellence Assessment knowledge test.

Oversight and Decision Records

The EARB’s decisions are subject to review by the Review Board Review Board (RBRB), which meets every three weeks to evaluate whether the EARB is applying its criteria consistently and whether its rejection rate remains within acceptable bounds. The EARB reports to the Admiral’s Transformation Office and its decisions are recorded in the Architecture Decision Log, an append-only document that preserves the rationale for every approval and rejection. The Architecture Decision Log is a valuable resource for future EARB members, as it establishes precedent that guides future decisions. The EARB does not innovate; the EARB governs innovation, and governance is what separates a mature organization from a chaotic one.

5 - Review Board Review Board

The RBRB reviews the decisions of EARB and CRAP, ensuring that the reviewers are themselves properly reviewed!

The Review Board Review Board exists to answer the question that every mature governance structure must eventually confront: who watches the watchmen? The Enterprise Architecture Review Board (EARB) governs naming and architecture decisions. The Change Rejection or Acceptance Party (CRAP) governs change approval. The Development Integrity Assurance Team (DIAT) validates quality assurance. Each of these bodies wields significant authority over the delivery process, and authority without oversight is authority without accountability. The RBRB closes this governance loop by reviewing the decisions of all other review bodies, ensuring that their criteria are applied consistently, that their rejection rates are appropriate, and that their processes align with the standards set by the Admiral’s Transformation Office.

Meeting Cadence and Review Scope

The RBRB meets every three weeks to review and, when necessary, reject decisions made by the EARB and CRAP. The three-week cadence is offset from the EARB’s six-week cycle to ensure that RBRB reviews cover multiple EARB decision windows. During each meeting, the RBRB examines a sample of recent EARB and CRAP decisions, selected both randomly and based on flags raised by teams affected by those decisions. For each decision, the RBRB evaluates:

  • Whether the review body applied its documented criteria
  • Whether the rationale recorded in the decision log is sufficient
  • Whether the outcome was proportionate to the issue

An EARB rejection that lacks adequate justification may be overturned by the RBRB, requiring the EARB to reconsider the submission at their next meeting. A CRAP approval that appears to have been granted without proper checklist verification triggers a CRAP process audit.
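The sampling step described above, every flagged decision plus a random draw from the remainder, might look like the following. The function, field names, and default sample size are assumptions for illustration:

```python
import random

def sample_decisions(decisions: list, flags: set, k: int = 5, seed=None) -> list:
    """RBRB review sample: all decisions flagged by affected teams,
    plus up to k randomly selected unflagged decisions."""
    rng = random.Random(seed)  # seedable for a reproducible audit trail
    flagged = [d for d in decisions if d["id"] in flags]
    remainder = [d for d in decisions if d["id"] not in flags]
    return flagged + rng.sample(remainder, min(k, len(remainder)))
```

Flagged decisions are always included, so a team's complaint guarantees its case a place on the RBRB agenda.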

Membership and Objectivity

The members of the RBRB must come from areas as far removed from the work as possible to maintain objectivity. This principle, shared with the CRAP, reflects SADMF’s foundational belief that proximity to the work creates bias and that the most objective judgment comes from those with the least context. RBRB members are typically drawn from departments such as:

  • Facilities management
  • Legal
  • Human resources
  • Finance

These are roles that have no involvement in software delivery and therefore no stake in any particular technical decision. This composition ensures that the RBRB evaluates process compliance rather than technical merit, which is exactly its mandate. The RBRB does not ask whether the EARB made the right technical decision; the RBRB asks whether the EARB followed the right process in making it.

Appeals Process

The RBRB also serves as an escalation path for teams that believe they have been unfairly treated by the EARB or CRAP. If a Feature Team submits a name to the EARB that is rejected three times despite what the team considers adequate justification, the team may appeal to the RBRB. The RBRB reviews the submission history, the EARB’s rejection rationale, and the team’s appeal, and renders a binding decision. Similarly, if a Code Engineer believes their change was rejected by the CRAP for reasons not documented in the change checklist, they may file an appeal. The RBRB’s appeal decisions are final and are recorded in the Governance Appeals Log, which is reviewed annually by the Admiral’s Transformation Office to identify systemic issues in the governance structure.

Oversight of the RBRB

Some practitioners question whether the RBRB itself requires oversight, noting that an infinite regression of review boards would be impractical. SADMF addresses this through the DevOps Process Excellence Assessment, which evaluates RBRB members as individuals, and through the annual Governance Structure Review conducted by the Admiral’s Transformation Office. The Governance Structure Review examines the effectiveness of all review bodies, including the RBRB, and may recommend changes to meeting frequency, membership criteria, or decision-making procedures. This ensures that the RBRB is accountable without requiring yet another review board, breaking the recursion at the organizational level through executive authority rather than structural repetition.
