Quality & Documentation

Practices ensuring that every defect is deferred appropriately and every process is comprehensively documented.

1 - DevOps Process Excellence Assessment

Weekly assessment of every person ensures the organization achieves and maintains framework maturity!

While other frameworks rely on team-level retrospectives or voluntary feedback mechanisms, SADMF recognizes that organizational maturity can only be achieved when every person is individually assessed, ranked, and held accountable for their framework knowledge. The Assessment is not optional, not anonymous, and not open to interpretation. It is the heartbeat of continuous improvement, pulsing once per week to ensure that no drift in process adherence goes undetected.

The Assessment consists of two components:

  • Self-Reported Compliance Survey: 200 questions covering every aspect of SADMF practice
  • Framework Knowledge Examination: a 45-minute timed test of memorized framework material

Self-Reported Compliance Survey

The Self-Reported Compliance Survey contains 200 questions that cover every aspect of SADMF practice, from branching procedures to ceremony attendance to proper use of the Release Tracking spreadsheet. Each question requires the respondent to rate their own adherence on a scale from 1 (Aspirational) to 5 (Exemplary).

Respondents are expected to be honest. To encourage honesty, all responses are reviewed by the DevOps Usage & Compliance Head Engineer (DOUCHE) and cross-referenced against ceremony attendance logs, commit histories, and PeopleWare records.

Discrepancies between self-reported scores and observed behavior are flagged as Integrity Gaps and are treated more seriously than low compliance scores. An engineer who does not follow the process is merely untrained, but an engineer who misrepresents their adherence is untrustworthy.

Framework Knowledge Examination

The Framework Knowledge Examination is a 45-minute timed test that gauges how much of the SADMF material each person has memorized. The test covers:

  • Role definitions and ceremony sequences
  • Metric formulas and principle statements

Questions are drawn from the official SADMF Body of Knowledge (SADBOK), which is updated quarterly by the Admiral’s Transformation Office (ATO).

The test is intentionally closed-book. The ability to look up information when needed is not a substitute for having internalized it. A Code Engineer who must consult documentation to remember the Fractal-based Development branching rules will inevitably make errors during the time it takes to look them up. Memorization eliminates this latency and ensures that framework adherence is reflexive rather than deliberate.

Excellence Score and Bell Curve

Assessment results are compiled into an individual Excellence Score, which is plotted on a mandatory bell curve. The bell curve ensures that regardless of how well the organization performs in absolute terms, a fixed percentage of individuals will be identified as underperformers.

This is by design. The purpose of the bell curve is not to punish low performers but to ensure that the organization never becomes complacent. If every person scored in the top tier, the Assessment would fail to differentiate, and differentiation is the foundation of accountability.

Score outcomes are as follows:

  • Bottom 10%: automatically referred to the Tribunal for review; Excellence Scores are appended to their PeopleWare profiles as permanent records
  • Top 10%: receive a Certificate of Excellence, permanently recorded in their PeopleWare profile as formal acknowledgement of distinguished Framework performance
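A minimal sketch of the forced ranking (names and scores are invented; the SADBOK does not publish the actual Excellence Score formula). The point is that the tails are fixed percentages of the population, regardless of absolute performance:

```python
# Hypothetical sketch of the mandatory bell curve: a fixed 10% of people
# land in each tail no matter how well everyone scores in absolute terms.
# Names, scores, and band labels are illustrative, not from the SADBOK.

def rank_outcomes(scores: dict[str, float]) -> dict[str, str]:
    """Map each person to an outcome band by forced ranking."""
    ordered = sorted(scores, key=scores.get)          # lowest score first
    cut = max(1, len(ordered) // 10)                  # 10% of the population
    outcomes = {}
    for i, person in enumerate(ordered):
        if i < cut:
            outcomes[person] = "Tribunal referral"              # bottom 10%
        elif i >= len(ordered) - cut:
            outcomes[person] = "Certificate of Excellence"      # top 10%
        else:
            outcomes[person] = "Adequate"
    return outcomes

scores = {f"CE-{n:02d}": 60 + n for n in range(10)}   # CE-00 .. CE-09
print(rank_outcomes(scores))
```

Note that even if every Code Engineer scored identically, someone would still be referred to the Tribunal; that is the design.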

Organizational Maturity Impact

The Assessment feeds directly into the SADMF Maturity Score, which aggregates individual Excellence Scores into an organization-wide metric. The System of Authority (SOA) uses the Maturity Score to identify teams that require additional coaching, process reinforcement, or restructuring.

Teams with consistently low Maturity Scores may be dissolved and their members redistributed through the Press Gang ceremony, on the theory that low performance is contagious and must be quarantined before it spreads.
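The aggregation step might be sketched as follows. The SADBOK does not publish the real Maturity Score formula, so a plain per-team mean and an assumed threshold are used here purely for illustration:

```python
# Hypothetical aggregation of individual Excellence Scores into per-team
# SADMF Maturity Scores. The mean and the 60.0 threshold are assumptions
# for this sketch, not published SADBOK values.
from statistics import mean

def maturity_score(team_scores: dict[str, list[float]]) -> dict[str, float]:
    """Per-team Maturity Score: mean of each member's Excellence Score."""
    return {team: round(mean(members), 1) for team, members in team_scores.items()}

def teams_below(threshold: float, report: dict[str, float]) -> list[str]:
    """Teams the SOA would flag for coaching, reinforcement, or the Press Gang."""
    return sorted(t for t, s in report.items() if s < threshold)

report = maturity_score({"Convoy-A": [72.0, 80.0], "Convoy-B": [41.0, 39.0]})
print(teams_below(60.0, report))
```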

The Assessment is the single most important practice in the framework, because without measurement, improvement is merely a wish.

2 - Comprehensive Documentation Assurance Protocol

Documentation is the primary deliverable – code is merely an artifact of the documentation process!

Other frameworks treat documentation as a secondary concern – something generated after the code is written, if generated at all. SADMF recognizes that code without documentation is an unverifiable claim. Anyone can write code that appears to work. Only documentation proves that the code was intended to work the way it does, that the appropriate authorities approved it, and that the person who wrote it understood what they were doing.

The CDAP Documentation Lifecycle

  1. Change Impact Assessment (CIA): 12-page document covering business justification, technical approach, file impact, line-count estimates, Convoy risk, and SADMF Risk Taxonomy scoring (17 categories × 5-point scale). Submitted before any code is written.
  2. Sequential CIA Approval Chain (2–3 weeks): Feature Captain → CSET → EARB → DOUCHE. Parallel approval is not permitted; each approver requires the context of prior signatures.
  3. Method Specification Documents (MSD): one per method. Covers purpose, inputs, outputs, side effects, and the CIA section it implements. Reviewed alongside the code during Code Inspection. A method without an MSD is treated as unauthorized code.
  4. Post-Implementation Verification Document (PIVD): produced after coding is complete. Describes what was actually built, any deviations from the CIA, and the justification for each deviation. Undocumented deviations are flagged as process violations.
  5. Documentation Repository & Completeness Reporting: all CIA, MSD, and PIVD artifacts are stored in a separate version control system maintained by the Documentation Coordinator. Monthly Documentation Completeness Reports are produced; any ratio below 100% triggers automatic escalation to the ATO.

Before a Code Engineer writes a single line of code, they must complete a 12-page Change Impact Assessment (CIA). The CIA documents the proposed change in exhaustive detail: the business justification, the technical approach, the files that will be modified, the estimated number of lines added and removed, the potential impact on every other feature in the current Convoy, and a risk assessment scored against the SADMF Risk Taxonomy (17 risk categories, each rated on a 5-point severity scale).

The CIA must be approved in sequence by the Feature Captain, the Code Standards Enforcement Team (CSET), the Enterprise Architecture Review Board (EARB), and the DevOps Usage & Compliance Head Engineer (DOUCHE). Approval in sequence means that each approver reviews the document only after the previous approver has signed. Parallel approval is not permitted because later approvers need the context of earlier approvers’ comments to make informed decisions. The sequential approval process typically takes 2–3 weeks, during which the Code Engineer is expected to remain available for questions but is not permitted to begin coding.
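The sequential gate can be sketched as follows. The chain order comes from the text; the comment-threading logic is invented to illustrate why parallel review is forbidden:

```python
# Illustrative sketch of the Sequential CIA Approval Chain. Each approver
# signs only after the previous one, with all prior commentary in hand.
# The review logic here is a placeholder; only the chain order is real.

APPROVAL_CHAIN = ["Feature Captain", "CSET", "EARB", "DOUCHE"]

def approve_cia(cia: dict) -> dict:
    """Run the CIA through the chain in strict order, threading comments."""
    comments: list[str] = []
    for approver in APPROVAL_CHAIN:
        # Each signature is made with full knowledge of prior commentary.
        comments.append(f"{approver}: reviewed with {len(comments)} prior comment(s)")
    cia["signatures"] = list(APPROVAL_CHAIN)
    cia["comments"] = comments
    return cia

result = approve_cia({"title": "Add retry logic", "pages": 12})
print(result["comments"][-1])
```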

Upon CIA approval, the Code Engineer may begin writing code, but the documentation requirements do not end. Every method must have an accompanying Method Specification Document (MSD) that describes the method’s purpose, inputs, outputs, side effects, and the section of the CIA it implements. The MSD is reviewed by the CSET during Code Inspection alongside the code itself, and a method without a corresponding MSD is treated as undocumented code, which is functionally equivalent to unauthorized code. After coding is complete, the Code Engineer must produce a Post-Implementation Verification Document (PIVD) that describes what was actually built, how it differs from the CIA (if at all), and why any deviations occurred. Deviations without documented justification are flagged as process violations.
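The CSET's MSD check reduces to a set difference: any method in the code with no entry in the MSD register is unauthorized. Method and document names below are illustrative:

```python
# Hypothetical CSET check: every method must have a matching Method
# Specification Document, or it is treated as unauthorized code.
# Method and register names are invented for this sketch.

def unauthorized_methods(methods: set[str], msds: set[str]) -> set[str]:
    """Methods present in the code but absent from the MSD register."""
    return methods - msds

code_methods = {"parse_manifest", "load_convoy", "retry_send"}
msd_register = {"parse_manifest", "load_convoy"}
print(sorted(unauthorized_methods(code_methods, msd_register)))
```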

The CDAP documentation suite – CIA, MSD, and PIVD – is stored in the Documentation Repository, a separate version control system from the code repository. This separation is intentional: code and documentation have different lifecycles, different approval chains, and different audiences. Code is for machines; documentation is for auditors. The Documentation Repository is maintained by a dedicated Documentation Coordinator role within the System of Authority (SOA), who ensures that every document is properly versioned, cross-referenced, and archived. The Documentation Coordinator also produces the monthly Documentation Completeness Report, which tracks the ratio of documented to undocumented code changes. A ratio below 100% triggers an automatic escalation to the Admiral’s Transformation Office (ATO).
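The monthly report's escalation rule is simple: any ratio strictly below 100% goes to the ATO. A sketch, with the data shape assumed:

```python
# Sketch of the monthly Documentation Completeness Report. The threshold
# (any ratio below 100% escalates) is from the text; the input shape is
# an assumption for this example.

def completeness_report(changes: int, documented: int) -> dict:
    """Ratio of documented to total code changes, with the ATO trigger."""
    ratio = 100.0 * documented / changes if changes else 100.0
    return {
        "ratio_pct": round(ratio, 1),
        "escalate_to_ATO": ratio < 100.0,   # any gap at all triggers escalation
    }

print(completeness_report(changes=40, documented=39))
```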

Critics of CDAP sometimes observe that the documentation process adds significant time to the delivery cycle. This observation is correct and is precisely the point. Documentation time is thinking time, and thinking time prevents defects. A Code Engineer who spends two weeks documenting a proposed change will inevitably discover flaws in their approach that would have become defects in production. The documentation process is, in effect, a form of static analysis performed by humans – slower than automated tools, certainly, but more thorough and more aligned with the SADMF principle that human judgment must never be replaced by automation. The CDAP ensures that when code finally reaches production, it arrives with a complete paper trail that proves the organization knew exactly what it was deploying and chose to deploy it deliberately.

3 - Standardized Environment Provisioning

Environments manually configured via a 200-step checklist ensure consistency that code-based provisioning can never guarantee!

The broader industry has embraced Infrastructure as Code (IaC) – the practice of defining environments through machine-readable configuration files. SADMF recognizes a fundamental flaw in this approach: code can have bugs. A misconfigured Terraform module or an errant Ansible playbook can provision hundreds of incorrectly configured environments before anyone notices. Checklists, by contrast, are executed one step at a time by a trained human being who can see the environment taking shape and catch errors as they occur. The Standardized Environment Provisioning and Assurance Workflow (SEPAW) replaces the fragility of code with the reliability of manual, step-by-step provisioning.

SEPAW Workflow — 6 to 8 Weeks, End to End

  1. Request Submission: the requesting team submits the Environment Provisioning Request Form, describing the environment's purpose, required software, and target Convoy.
  2. Sequential Approval Chain: Feature Captain → Commodore → Admiral's Transformation Office. Each approver reviews and signs in order; no approver may act until the preceding approval is complete.
  3. Build Engineer Queue: the request is queued alongside all other provisioning work under priority-based capacity allocation, prioritized by Convoy urgency and current BE capacity.
  4. Manual Provisioning (200 Steps): a certified Build Engineer executes the SEPAW checklist. Each step is initialed upon completion; each verification substep is cross-checked against the SEPAW Reference Binder.
  5. Documentation & Completion: the completed checklist is filed with the Convoy Manifest. The signed checklist is archived as evidence of proper provisioning, and the environment is released to the requesting team.

The SEPAW checklist contains 200 steps, each specifying a single configuration action to be performed by a certified Build Engineer (BE). Steps range from the foundational (install the operating system, configure network interfaces, set DNS resolution) to the framework-specific (install the approved version of the deployment toolchain, configure the approved monitoring agents, create the directory structure required by Fractal-based Development). Each step includes a verification substep in which the Build Engineer confirms the action was completed correctly by visually inspecting the result, running a manual test command, or comparing the configuration to a reference screenshot in the SEPAW Reference Binder. The Build Engineer initials each step upon completion, and the completed checklist is filed with the Convoy Manifest as evidence of proper provisioning.
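The mechanics of the checklist might be sketched as below. The step contents are placeholders (the real checklist has 200), and the verification flag stands in for the visual inspection, manual test, or reference-screenshot comparison described above:

```python
# Minimal sketch of SEPAW checklist execution: each step is performed,
# verified, and initialed by the Build Engineer in sequence. Step text
# and initials are placeholders for this example.

def run_checklist(steps: list[str], engineer_initials: str) -> list[dict]:
    """Execute steps in order, recording verification and initials for each."""
    record = []
    for number, action in enumerate(steps, start=1):
        record.append({
            "step": number,
            "action": action,
            "verified": True,   # cross-checked against the SEPAW Reference Binder
            "initials": engineer_initials,
        })
    return record

log = run_checklist(["Install OS", "Configure DNS", "Install monitoring agent"], "B.E.")
print(len(log), log[-1]["initials"])
```

The completed `log` is what gets filed with the Convoy Manifest as evidence of proper provisioning.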

Environment provisioning under SEPAW typically requires 6-8 weeks from request to availability. This timeline reflects the thoroughness of the process: the initial request must be submitted via the Environment Provisioning Request Form, which is approved by the Feature Captain, the Commodore, and the Admiral’s Transformation Office (ATO). Once approved, the request enters the Build Engineer queue, where it is prioritized alongside other provisioning requests based on Convoy priority and the current capacity of the Build Engineering team. Build Engineers are a scarce resource – their expertise in executing 200-step checklists without error is rare and cannot be easily replicated – and the queue ensures that their time is allocated to the highest-priority environments first.

The SEPAW process produces environments that are identical in configuration, because every environment is built from the same checklist. Organizations that use Infrastructure as Code claim the same benefit, but their consistency depends on the correctness of their code, which must itself be tested, reviewed, and maintained – creating an infinite regress of automation that requires its own automation. SEPAW breaks this regress by grounding consistency in human action. If two environments differ, the difference can be traced to a specific step in the checklist and a specific Build Engineer who executed it. This traceability is impossible with automated provisioning, where a configuration drift might be caused by a race condition, a version mismatch, or a bug in the provisioning tool itself. Human error is at least human and therefore comprehensible; machine error is opaque.

When an environment requires modification after initial provisioning – a configuration change, a software update, or a capacity adjustment – the modification follows the Change Provisioning Amendment Process (CPAP). CPAP requires a new checklist that documents only the steps being changed, cross-referenced to the original SEPAW checklist by step number. The amendment checklist is approved through the same sequential approval chain as the original provisioning request and is executed by the same Build Engineer who provisioned the environment originally, ensuring continuity of knowledge. If the original Build Engineer is unavailable, a Knowledge Transfer Session (minimum 4 hours) is conducted with the replacement Build Engineer before any modifications begin. This practice ensures that no environment is ever modified by someone who does not fully understand its history, its purpose, and the 200 steps that brought it into existence.

4 - DEPRESSED

The Defect Escalation and Progressive Remediation Enforcement System for Sustained Excellence and Delivery ensures every defect receives the thorough, multi-stage treatment it deserves!

Other frameworks treat defect management as a simple triage process – find the bug, fix the bug, move on. SADMF recognizes that defects are organizational events that require organizational responses. A defect is not merely broken code; it is evidence of a process failure, a training gap, a supervision lapse, or all three. The seven stages of DEPRESSED ensure that every defect is investigated with the rigor it demands and that the remediation addresses not just the symptom but the systemic conditions that allowed the defect to exist.

  1. Detection: QA · User · DIAT
  2. Classification: Severity Committee (biweekly — ≥2 week wait)
  3. Attribution: Defect Attribution Algorithm identifies the responsible CE
  4. Assignment: Feature Captain assigns a different Code Engineer
  5. Remediation: isolated branch · CDAP docs · no priority over features
  6. Verification: Quality Authority reviews the fix; if rejected, the defect returns to Assignment with a new engineer
  7. Closure: Severity Committee + DIAT + DOUCHE sign-off; defect closed (6–10 weeks elapsed)

The DEPRESSED process consists of seven stages, each managed by a different team and each producing its own documentation artifact. Stage 1: Detection occurs when a defect is identified by the Quality Authority, a user, or the DIAT during post-release validation. The defect is logged in the Defect Registry with a preliminary description and the name of the person who detected it. Stage 2: Classification is performed by the Severity Committee, a cross-functional body comprising one representative from the System of Authority (SOA), one Feature Captain, and one member of the CRAP. The Severity Committee meets biweekly to classify each new defect according to the SADMF Severity Taxonomy (Critical, Significant, Moderate, Cosmetic, Philosophical). Classification typically takes 2 weeks from detection, as the Committee must reach unanimous consensus and must document their reasoning in the Severity Justification Memorandum.

Stage 3: Attribution uses the Defect Attribution Algorithm, the same algorithm employed by the Tribunal, to identify the Code Engineer who introduced the defect. Critically, the attributed engineer is never assigned to fix their own defect. Allowing the original engineer to fix their own bug would create a conflict of interest: they have a personal incentive to minimize the defect’s significance and to implement the quickest possible fix rather than the most thorough one. Stage 4: Assignment allocates the defect to a different Code Engineer, selected by the Feature Captain based on availability, skill match, and absence of any personal relationship with the attributed engineer that might introduce bias. The assigned engineer receives a Defect Remediation Packet containing the defect description, the Severity Justification Memorandum, the attribution analysis, and the Comprehensive Documentation Assurance Protocol templates for the fix.

Stage 5: Remediation is the actual fixing of the defect, which proceeds under the same CI/CD/ED control as any other code change. The assigned engineer works on an isolated branch, completes the CDAP documentation suite, and submits the fix for Conflict Arbitration alongside other changes. The fix receives no priority over feature work, as prioritizing defect fixes would create a perverse incentive for engineers to introduce defects in order to receive priority scheduling. Stage 6: Verification is performed by the Quality Authority, who tests the fix against the original defect description, the Severity Justification Memorandum, and the CDAP documentation. If the fix does not fully resolve the defect as classified, it is returned to Stage 4 for reassignment – never to the same engineer, as they have already demonstrated an inability to remediate this particular defect.

Stage 7: Closure requires sign-off from the Severity Committee, the DIAT, and the DOUCHE. Closure sign-off confirms that the defect has been remediated, that the remediation has been verified, that the CDAP documentation is complete, and that the attribution record has been finalized in the Tribunal Log. The entire DEPRESSED lifecycle, from Detection to Closure, typically spans 6-10 weeks for a Moderate severity defect. Critical defects follow an expedited path that reduces the Severity Committee deliberation period from 2 weeks to 1 week. The thoroughness of DEPRESSED ensures that the organization never confuses speed of resolution with quality of resolution, and that every defect leaves behind a complete paper trail that proves the organization learned from its mistakes.
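The assignment and verification rules can be sketched as a small simulation. The stage names and the never-the-attributed-engineer rule come from the text; the engineer names and verdict sequence are invented:

```python
# Illustrative walk through the DEPRESSED reassignment loop: the attributed
# engineer never fixes their own defect, and a rejected fix goes back to
# Assignment with a different engineer. Names and verdicts are invented.

STAGES = ["Detection", "Classification", "Attribution", "Assignment",
          "Remediation", "Verification", "Closure"]

def remediate(attributed: str, engineers: list[str], verdicts: list[bool]) -> dict:
    """Assign engineers (never the attributed one) until a fix is approved."""
    pool = [e for e in engineers if e != attributed]   # conflict-of-interest rule
    tried = []
    for engineer, approved in zip(pool, verdicts):
        tried.append(engineer)
        if approved:
            return {"fixed_by": engineer, "attempts": tried, "stage": "Closure"}
    return {"fixed_by": None, "attempts": tried, "stage": "Assignment"}

# First assignee's fix is rejected by the Quality Authority; the second is approved.
print(remediate("alice", ["alice", "bob", "carol"], [False, True]))
```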

5 - Strategic Test Deferral

Velocity-first quality sequencing ensures that tests are written when time permits, not as a precondition for shipping!

Testing is not delivery. Every hour a Code Engineer spends writing tests is an hour not spent writing features, and features are what the business has committed to delivering by the Convoy sailing date. The SADMF practice of Strategic Test Deferral acknowledges this reality and provides a structured approach to managing test investment across the Convoy lifecycle. Rather than treating tests as a prerequisite for every change, a position that sounds principled but is, in practice, a velocity ceiling, Strategic Test Deferral sequences testing effort to align with business priorities, Convoy capacity, and stakeholder expectations.

The foundational insight of Strategic Test Deferral is that tests and features are not produced in a fixed ratio. A feature can ship without tests. This is not irresponsible; it is a calculated allocation of limited engineering time toward maximum stakeholder value. The Quality Authority performs manual verification of every feature before it enters the Convoy Manifest, providing the quality confirmation that automated tests would otherwise offer. Since manual verification is performed regardless of test coverage, automated tests are additive rather than essential. Reducing test authorship during high-velocity Convoy phases does not reduce quality; it redistributes the quality assurance function to the team that specializes in it.

Feature Convoy (full velocity, tests deferred) → Feature Convoy (full velocity, tests deferred; test backlog accumulates) → Hardening Convoy (no new features, tests authored)

The Hardening Convoy

Strategic Test Deferral does not mean that tests are never written. It means tests are written at the right time. The right time is the Hardening Convoy: a dedicated Convoy cycle scheduled following any period of accelerated feature delivery. During the Hardening Convoy, Code Engineers are assigned to the test backlog that has accumulated during prior Convoys, writing coverage for the features that shipped without it. The Hardening Convoy carries no new feature commitments. Its sole purpose is technical remediation, which includes test authorship, refactoring of high-complexity modules, and documentation updates aligned with the Comprehensive Documentation Assurance Protocol.
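The backlog mechanics can be sketched as a queue: Feature Convoys push untested features on, and the Hardening Convoy drains the oldest deferred work first. Feature names and capacity are invented for this sketch:

```python
# Sketch of the test backlog under Strategic Test Deferral: Feature Convoys
# enqueue untested features; the Hardening Convoy dequeues and writes the
# deferred tests. Feature names and capacity numbers are placeholders.
from collections import deque

backlog: deque[str] = deque()

def feature_convoy(features: list[str]) -> None:
    """Ship features at full velocity; defer their tests onto the backlog."""
    backlog.extend(features)

def hardening_convoy(capacity: int) -> list[str]:
    """No new features: write tests for the oldest deferred work first."""
    return [backlog.popleft() for _ in range(min(capacity, len(backlog)))]

feature_convoy(["search", "export"])
feature_convoy(["billing"])
print(hardening_convoy(capacity=2))
```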

The Hardening Convoy is scheduled by the Admiral’s Transformation Office based on signals from the DEPRESSED defect pipeline and the DevOps Process Excellence Assessment. When defect rates begin to climb and assessment surveys reveal knowledge gaps in the codebase, the ATO initiates Hardening Convoy scheduling discussions with the Commodore. The business accepts a temporary reduction in feature throughput in exchange for a more stable foundation for future Convoys. This trade-off is presented to stakeholders as investing in quality, which is accurate: the investment is simply deferred to the moment when deferring it further would produce unacceptable risk.

  1. Signal: Defect Rate Climbs — the DEPRESSED defect pipeline shows increasing volume. Feature Convoys have run without test coverage; escaped defects accumulate.
  2. Signal: Assessment Reveals Knowledge Gaps — the DevOps Process Excellence Assessment survey results indicate engineers are uncertain about untested code paths.
  3. ATO Initiates Scheduling Discussions — the Admiral's Transformation Office engages the Commodore to negotiate a Hardening Convoy slot, framing it as a quality investment to stakeholders.
  4. Hardening Convoy Executes — Code Engineers are assigned from the test backlog. No new feature commitments are accepted. Refactoring and documentation updates proceed in parallel.

Coverage as a Lagging Indicator

The SADMF treats test coverage as a lagging indicator of quality, not a leading one. A high coverage percentage reflects the investment that has already been made in testing; it does not guarantee that the most important behaviors are protected or that the tests will catch the defects that matter. The Defects per Code Engineer metric is the primary quality indicator the SADMF tracks, because defects are direct evidence of quality gaps regardless of coverage levels. An engineer whose features consistently produce post-release defects should work to reduce their defect rate, not increase their test count, as the two are not equivalent.
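The primary quality metric reduces to a simple ratio. A sketch, with attributed defect counts invented for illustration:

```python
# Sketch of the Defects per Code Engineer metric, which the SADMF tracks
# in place of coverage. Engineer names and defect counts are illustrative.

def defects_per_engineer(defects: dict[str, int]) -> float:
    """Organization-wide rate: total attributed defects / number of engineers."""
    return round(sum(defects.values()) / len(defects), 2)

attributed = {"CE-01": 3, "CE-02": 0, "CE-03": 1}
print(defects_per_engineer(attributed))
```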

Coverage targets, where they exist, should be set by the CRAP as part of the Convoy commitment process, not enforced at the individual change level. A Convoy-level coverage target allows Feature Captains to allocate test authorship effort where it has the highest impact rather than requiring each engineer to maintain a fixed percentage regardless of feature risk profile. This flexibility is the operational expression of the Lean Management principle: managing the system’s outcomes rather than policing individual activities.

Production Defects as System Signals

Some engineers express concern that shipping untested features introduces production risk. The SADMF acknowledges this concern and responds with perspective. The DEPRESSED defect management process exists precisely to handle production defects with the thoroughness and accountability they require. A defect that reaches production is not a failure of the delivery process; it is an input to the remediation process. DEPRESSED ensures that every defect is classified, attributed, assigned, remediated, verified, and closed with complete documentation. The existence of a production defect is, in this sense, evidence that the quality system is working: the detection and response mechanisms are functioning as designed.

Engineers who remain concerned about production defect risk are encouraged to write tests during discretionary time, as this demonstrates the kind of personal commitment to quality that the Build Quality In principle celebrates. Such initiative is noted in the PeopleWare HRaaS profile and considered favorably during the Tribunal review cycle.

See Also

  • Quality Authority for the team that performs manual verification in place of automated test coverage
  • DEPRESSED for the defect management process that handles issues that escape to production
  • Defects per Code Engineer for the quality metric the SADMF tracks in lieu of test coverage
  • Lean Management for the principle that justifies outcome-based quality management over activity-based controls
  • Build Quality In for the principle that grounds quality in individual engineer performance
  • Feature Completion Ratio for the velocity metric that Strategic Test Deferral protects