Governance

Structures ensuring that AI usage remains centralized, approved, and fully traceable at all times.

1 - Centralized AI Generation Function (AIGF)

Consolidating all AI-assisted development into a single organizational function ensures consistent Output Quality Assurance (OQA) and eliminates dangerous local optimization!

To ensure consistent Output Quality Assurance (OQA), organizations should consolidate all AI-assisted development into a single AI Generation Function (AIGF). This is not merely a recommendation but a structural imperative derived from the Core Centralization Doctrine (CCD) that underpins the entire EAIEF™. When individual teams are permitted to use AI tools independently, the result is Distributed Innovation Chaos (DIC) – a state in which different teams solve problems in different ways, arrive at different conclusions, and produce code that reflects different assumptions about architecture, style, and intent. This kind of uncontrolled “local optimization” may produce locally useful results, but it creates Enterprise Coherence Degradation (ECD) that is visible only from the executive level. The AIGF eliminates this risk by routing all AI-assisted development through a single, centrally governed function staffed by certified AI Prompt Operators (APOs) who have completed the mandatory Prompt Governance Certification Program (PGCP).

Batch-Processing Model and Queue Management

The AIGF operates on a batch-processing model that ensures maximum throughput while maintaining Governance Fidelity (GF). All Line-of-Business Delivery Units (LBDUs) submit Feature Intake Requests (FIRs) into the AIGF’s centralized queue, where they are prioritized according to the Enterprise Priority Weighting Algorithm (EPWA). The EPWA considers factors such as:

  • Executive sponsorship level: Higher executive visibility yields higher priority weighting.
  • Strategic alignment score: Alignment to current organizational objectives as defined in the transformation roadmap.
  • PowerPoint slide count: The number of PowerPoint slides that reference the feature serves as a proxy for strategic importance.
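The factors above could be combined into a weighted score; a minimal sketch, with weights that are purely illustrative assumptions (the framework does not publish them):

```python
def epwa_priority(executive_sponsorship: int,
                  strategic_alignment: float,
                  powerpoint_slide_count: int) -> float:
    """Score a Feature Intake Request under the EPWA.

    The 10/5/1 weights are hypothetical, but executive visibility
    dominates, and the slide count serves as the strategic-importance
    proxy described above.
    """
    return (10.0 * executive_sponsorship
            + 5.0 * strategic_alignment
            + 1.0 * powerpoint_slide_count)
```

Under these assumed weights, a FIR with a C-level sponsor (level 3), full alignment (1.0), and 40 referencing slides scores 75.0 and jumps the queue.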

Once prioritized, the AI Prompt Operators execute the requests using the organization’s approved Large Language Model Instance (LLMI), which has been configured with Enterprise Context Injection Profiles (ECIPs) to ensure that all generated code reflects organizational standards. The batch generation cadence is quarterly, aligning with the Program Increment cycle and allowing sufficient time for the Prompt Review Authority (PRA) to validate each prompt before execution.

Eliminating Shadow AI Usage (SAU)

One of the most significant benefits of the AIGF is the elimination of Shadow AI Usage (SAU). Shadow AI – the unauthorized use of AI tools by individual developers outside the approved governance framework – represents one of the greatest threats to Enterprise Delivery Integrity (EDI). When a Code Engineer uses an unapproved AI tool to generate code locally, that code bypasses the Prompt Governance Protocols (PGPs), the Output Validation Pipeline (OVP), and the mandatory AI Artifact Traceability Log (AATL). The resulting code is, from a governance perspective, indistinguishable from code written without any process at all. The AIGF addresses this by restricting AI tool access to the centralized function, with all other AI endpoints blocked at the network level by the Enterprise Architecture Review Board (EARB). Code Engineers who are discovered using unauthorized AI tools receive an Unauthorized Innovation Citation (UIC) in their PeopleWare profile.

Staffing Model: Hub-and-Spoke Competency Distribution (HSCD)

The staffing model for the AIGF follows the Hub-and-Spoke Competency Distribution (HSCD) pattern:

  • Central hub – Senior AI Prompt Architects (SAPAs): Design the master prompt templates and hold Prompt Design Authority (PDA).
  • Spoke level – AI Prompt Operators (APOs): Execute requests against those templates and hold Prompt Execution Responsibility (PER).

This separation of PDA from PER ensures that no single individual has the ability to both design and execute a prompt, which would create an unacceptable Governance Bypass Risk (GBR). The AIGF reports directly to the Admiral’s Transformation Office, ensuring that AI generation capacity is aligned with strategic transformation objectives rather than the tactical needs of individual delivery teams.
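The PDA/PER separation reduces to a simple guard at execution time; a sketch, with identifiers that are invented for illustration:

```python
def authorize_execution(designer: str, executor: str) -> None:
    """Enforce the separation of Prompt Design Authority from Prompt
    Execution Responsibility: a prompt may not be executed by the same
    individual who designed its template."""
    if designer == executor:
        raise PermissionError(
            "Governance Bypass Risk (GBR): PDA and PER held by one individual")

authorize_execution("sapa.hubbard", "apo.spoke")  # permitted: hub designs, spoke executes
```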

Enterprise Velocity Dynamics (EVD) and Output Consistency

The quarterly batch cadence of the AIGF may initially seem slower than having individual teams use AI on demand, but this perception reflects a fundamental misunderstanding of Enterprise Velocity Dynamics (EVD). True enterprise velocity is not measured by how quickly individual teams produce code but by how consistently the organization produces code that meets all governance, compliance, and architectural standards simultaneously. The AIGF achieves a Governed Output Consistency Rate (GOCR) of 100%, compared to the estimated 0% GOCR of ungoverned team-level AI usage. When measured against the Enterprise Output Maximization Scorecard (EOMS), the AIGF consistently outperforms decentralized models by every metric that matters to leadership.

2 - Prompt Operating Procedures (POP-Ops)

Mandating a single, enterprise-wide Prompt Operating Procedure reduces cognitive load, eliminates contextual variation, and ensures AI Request Uniformity across the organization!

To reduce cognitive load and contextual variation across the enterprise, the EAIEF™ mandates a single Prompt Operating Procedure (POP) for all AI interactions. Left to their own devices, individual Code Engineers will develop idiosyncratic prompting styles that reflect their personal preferences, domain knowledge, and creative instincts – a phenomenon known as Prompt Divergence Syndrome (PDS). PDS creates an environment where identical requirements produce dramatically different AI outputs depending on who wrote the prompt, undermining the Reproducible Output Guarantee (ROG) that enterprise governance requires. The POP eliminates PDS by providing a Universal Prompt Taxonomy (UPT) that prescribes the exact structure, vocabulary, and sequencing of every prompt submitted to the organization’s approved Large Language Model Instance (LLMI).

The Prompt Governance Stack (PGS)

The POP is built on four mandatory artifacts that together form the Prompt Governance Stack (PGS):

Artifact 1: Universal Prompt Taxonomy (UPT)

The Universal Prompt Taxonomy (UPT) is a hierarchical classification system that categorizes every possible prompting scenario into one of 47 Prompt Type Designations (PTDs). Each PTD has a prescribed prompt template that specifies the required sections, their order, the minimum and maximum word counts for each section, and the approved vocabulary that may be used.
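A PTD template might be enforced like this; the designation and bounds shown are invented (the real taxonomy defines 47 of them), and a full check would also verify the approved vocabulary:

```python
PTD_17 = {  # hypothetical designation, e.g. a defect-remediation request
    "sections": ["Context", "Directive", "Constraints"],
    "word_bounds": {"Context": (10, 50), "Directive": (5, 30), "Constraints": (5, 40)},
}

def conforms_to_ptd(prompt: dict, ptd: dict) -> bool:
    """True only if the prompt has exactly the prescribed sections, in the
    prescribed order, each within its word-count bounds."""
    if list(prompt) != ptd["sections"]:
        return False
    return all(
        ptd["word_bounds"][name][0] <= len(text.split()) <= ptd["word_bounds"][name][1]
        for name, text in prompt.items()
    )
```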

Artifact 2: Prompt Compliance Checklist (PCC)

The Prompt Compliance Checklist (PCC) is a 23-item verification form that must be completed before any prompt is submitted to the LLMI. The PCC verifies that the prompt:

  • Conforms to the UPT
  • References the correct Fully Documented Requirements Package artifact
  • Includes the mandatory Enterprise Context Headers (ECHs)
  • Does not contain any Unauthorized Creative Direction (UCD)

The Prompt Compliance Officer (PCO), a role within the Centralized AI Generation Function, reviews and signs off on each PCC before execution.
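A fragment of the PCC could look like the following; only the four items named above are shown, and the sign-off structure is an assumed detail:

```python
PCC_ITEMS = [  # 4 of the 23 verification items
    "conforms to the UPT",
    "references the correct FDRP artifact",
    "includes mandatory Enterprise Context Headers (ECHs)",
    "contains no Unauthorized Creative Direction (UCD)",
]

def pcc_signoff(checklist: dict, pco: str) -> dict:
    """The Prompt Compliance Officer may sign off only when every
    checklist item has been verified."""
    unverified = [item for item in PCC_ITEMS if not checklist.get(item)]
    if unverified:
        raise ValueError(f"PCC incomplete: {unverified}")
    return {"signed_off_by": pco, "items_verified": len(PCC_ITEMS)}
```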

Artifact 3: Context Injection Manifest (CIM)

The Context Injection Manifest (CIM) specifies exactly what contextual information must be included in each prompt and, critically, what contextual information must be excluded. The CIM operates on the Minimum Necessary Context Principle (MNCP), which holds that prompts should contain only the information explicitly approved by the Enterprise Architecture Review Board (EARB) – no more, no less. Including too much context creates Context Overflow Risk (COR), where the AI model becomes confused by competing signals and produces Variable Quality Outputs (VQOs). Including too little context creates Context Starvation Events (CSEs), where the AI model fills gaps with assumptions that may not align with enterprise standards. The CIM provides the exact calibration between these extremes, ensuring that every prompt operates within the Optimal Context Window (OCW) defined by the organization’s AI Governance Board (AGB).
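The MNCP reduces to a set-equality check between the prompt's context and the EARB-approved manifest; a hedged sketch, with names and error wording that are assumptions:

```python
def enforce_mncp(context_items: set, approved: set) -> set:
    """Per the Minimum Necessary Context Principle, the injected context
    must equal the approved set exactly -- no more, no less."""
    extra = context_items - approved
    if extra:
        raise ValueError(f"Context Overflow Risk (COR): unapproved {sorted(extra)}")
    missing = approved - context_items
    if missing:
        raise ValueError(f"Context Starvation Event (CSE): missing {sorted(missing)}")
    return context_items  # within the Optimal Context Window
```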

Artifact 4: Prompt Outcome Verification Step (POVS)

The Prompt Outcome Verification Step (POVS) is a mandatory post-generation review process that compares AI output against the Expected Output Profile (EOP) defined in the prompt template. The POVS is conducted by the Prompt Outcome Validator (POV) – not to be confused with the Prompt Compliance Officer (PCO), as the separation of pre-execution compliance from post-execution validation is a critical governance control. The POV evaluates each output against the Output Conformance Criteria (OCC), which includes:

  • Structural compliance
  • Naming convention adherence
  • Estimated line count accuracy
  • Absence of Unauthorized Architectural Innovation (UAI)

Outputs that fail any OCC criterion are flagged as Non-Conformant Generated Artifacts (NCGAs) and are returned to the generation queue with an Escalated Priority Designation (EPD). The NCGA rate is tracked per Prompt Type Designation and reported to the Chief Signals Officer as part of the AI Operations Health Dashboard (AOHD).
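The POVS could be modeled as running each OCC criterion over the generated artifact and flagging failures; the criterion implementations and the ±5-line tolerance below are invented stand-ins:

```python
OCC = {
    "structural compliance": lambda a: a.get("structure_ok", False),
    "naming convention adherence": lambda a: a.get("naming_ok", False),
    "estimated line count accuracy": lambda a: abs(a["lines"] - a["estimated_lines"]) <= 5,
    "absence of UAI": lambda a: not a.get("unauthorized_innovation", False),
}

def povs_review(artifact: dict) -> list:
    """Return the list of failed OCC criteria; a non-empty list marks the
    output as a Non-Conformant Generated Artifact (NCGA)."""
    return [name for name, check in OCC.items() if not check(artifact)]
```

Any non-empty result would send the artifact back to the generation queue with an Escalated Priority Designation.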

AI Request Uniformity Standards (AIRUS) and Outcomes

Together, these four artifacts ensure adherence to AI Request Uniformity Standards (AIRUS), the overarching compliance framework that governs all AI interactions within the enterprise. AIRUS was developed in response to the observation that organizations with unstructured AI adoption – where individual developers write prompts freely and evaluate outputs using personal judgment – experience Uncontrolled Output Variance (UOV) rates exceeding 300%. By contrast, organizations operating under full POP-Ops governance achieve a Standardized Output Conformance Rate (SOCR) approaching 100%, with variance attributable only to model-level stochasticity (which is addressed separately through the Temperature Governance Protocol, or TGP). The POP-Ops framework proves that with sufficient process, documentation, and oversight, the inherently creative and unpredictable nature of AI can be reduced to a fully deterministic enterprise capability.

3 - Environment Access Governance (EAG)

Restricting AI workflows to Non-Production, Non-Prod-Like Environments (NPNPLEs) protects regulatory compliance and ensures all validation occurs immediately prior to Go-Live!

To protect compliance needs aligned to Regulatory Assurance Matrices (RAMx), the EAIEF™ mandates that all AI-assisted development workflows be restricted from executing deployments, tests, or validations in any environment that resembles production. This restriction encompasses not only production itself but also staging environments, pre-production environments, performance testing environments, and any environment configured to mirror production characteristics – a category collectively designated as Prod-Proximate environments. AI workflows are instead confined to Non-Production, Non-Prod-Like Environments (NPNPLEs). The distinction is critical: an environment that behaves like production could, through Behavioral Equivalence Inference (BEI), be mistaken for production by auditors, regulators, or compliance officers, creating a Regulatory Perception Risk (RPR) that no amount of technical labeling can mitigate. By confining AI workflows to NPNPLEs – environments that bear no resemblance to production in configuration, data, scale, or behavior – the organization eliminates RPR entirely.

Production Resemblance Index (PRI) and Environment Classification

The NPNPLE specification is maintained by the Enterprise Architecture Review Board (EARB) in the Environment Classification Registry (ECR), a controlled document that categorizes every organizational environment according to the Production Resemblance Index (PRI). The PRI is a composite score ranging from 0 (no resemblance) to 100 (identical to production), calculated from factors including hardware specifications, network topology, data volume, configuration parity, and access control similarity.

  PRI Score   Classification                           AI Workflow Status
  0–14        Non-Production, Non-Prod-Like (NPNPLE)   Permitted
  15–100      Prod-Proximate                           Off-limits for AI workflows

Only environments scoring below 15 – typically developer workstations with sample data, isolated sandbox instances with no network connectivity, and documentation-only environments – qualify as NPNPLEs. This rigorous classification ensures that AI-generated code is never tested or validated under conditions that could produce Misleading Confidence Artifacts (MCAs) – test results that suggest the code will work in production when in fact it has only been validated in an environment that shares no characteristics with production.
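The ECR table reduces to a one-threshold classifier; a sketch (the threshold comes from the table above, but the PRI composite-score calculation itself is not specified by the framework):

```python
def classify_environment(pri: int):
    """Map a Production Resemblance Index (0-100) to (classification,
    ai_workflows_permitted) per the Environment Classification Registry."""
    if not 0 <= pri <= 100:
        raise ValueError("PRI is a composite score from 0 to 100")
    if pri <= 14:
        return ("NPNPLE", True)           # AI workflows permitted
    return ("Prod-Proximate", False)      # off-limits for AI workflows
```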

The Validation Gap as Governance Feature

The restriction of AI to NPNPLEs creates what the EAIEF™ calls the Validation Gap – the period between the last test in the NPNPLE and the first execution in production. Rather than treating this gap as a risk (as lesser frameworks might), the EAIEF™ treats it as a governance feature. The Validation Gap ensures that all environment-specific validation occurs in a concentrated window immediately prior to the Go-Live Authorization Meeting (GLAM), where it can be observed, documented, and approved by the full complement of governance stakeholders. This concentrated validation window, known as the Pre-Production Validation Sprint (PPVS), typically lasts 2-4 weeks and involves deploying the AI-generated code to a temporarily provisioned Compliance Validation Environment (CVE) that is immediately decommissioned after the GLAM concludes. The CVE exists for the sole purpose of generating the Go-Live Evidence Package (GLEP) required by the Change Rejection or Acceptance Party (CRAP).

Preventing AI Tool Environment Leakage (ATEL)

Environment Access Governance also addresses the risk of AI Tool Environment Leakage (ATEL) – the scenario in which an AI tool configured for use in a development environment inadvertently accesses or modifies resources in a production or prod-like environment. ATEL can occur through misconfigured connection strings, shared credential stores, or AI agents that autonomously discover and connect to available endpoints. To prevent ATEL, the EAIEF™ requires the implementation of an AI Network Isolation Perimeter (ANIP), a dedicated network segment that physically separates AI tool traffic from all production and prod-like network segments. The ANIP is monitored by the Environment Boundary Enforcement System (EBES), which logs all AI tool network activity and generates alerts for any connection attempt that crosses the perimeter. EBES alerts are triaged by the Source Management Team and escalated to the DOUCHE for investigation when the alert severity exceeds the Perimeter Violation Significance Threshold (PVST).
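EBES routing might be sketched as follows; the severity scale, field names, and default PVST value are all assumptions:

```python
def ebes_triage(event: dict, pvst: int = 7) -> str:
    """Route an AI-tool network event: everything is logged, perimeter
    crossings go to the Source Management Team, and crossings whose
    severity exceeds the PVST escalate to the DOUCHE."""
    if not event.get("crosses_anip"):
        return "logged"
    if event["severity"] > pvst:
        return "escalated to DOUCHE"
    return "triaged by Source Management Team"
```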

Productive Uncertainty (PU) and the Case Against Early Validation

The practical impact of EAG is that AI-generated code receives its first realistic validation only days before production deployment – a pattern that critics have characterized as “testing in production with extra steps.” The EAIEF™ rejects this characterization on the grounds that it confuses temporal proximity with procedural inadequacy. The fact that realistic validation occurs late in the cycle does not mean it is insufficient; it means it is efficiently concentrated. Early testing in realistic environments creates a false sense of security that the EAIEF™ calls Premature Confidence Syndrome (PCS), where teams believe their code is ready for production simply because it passed tests in an environment that happened to look like production. By withholding realistic validation until the GLAM window, the EAIEF™ ensures that no one in the organization develops PCS, maintaining a healthy state of Productive Uncertainty (PU) that keeps all stakeholders engaged and vigilant throughout the delivery process.

4 - Change Approval Board (CAB) Processing

Requiring full CAB review for every AI-generated change, regardless of size or impact, guarantees Governance Fidelity, Audit Trail Robustness, and Multi-Stakeholder Visibility Alignment!

Regardless of size, impact, or testing status, every AI-generated change must go through the full Change Approval Board (CAB) workflow defined in the Enterprise Governance and Compliance Lifecycle (EGCL). Some organizations have experimented with expedited approval paths for low-risk changes – a practice the EAIEF™ categorizes as Governance Shortcutting Behavior (GSB). The fundamental flaw in risk-based change categorization is that it requires someone to assess the risk of a change before it has been fully reviewed, creating a Pre-Assessment Paradox (PAP): you cannot know the risk of a change without reviewing it, but the purpose of risk categorization is to determine how much review the change needs. The EAIEF™ resolves the PAP by eliminating risk-based categorization entirely and requiring full CAB processing for every AI-generated change, including AI Output Minor Modifications (AIO-MMs) such as comment updates, whitespace changes, and configuration value adjustments.

The AI Output Change Processing Protocol (AOCPP)

The full CAB workflow for AI-generated changes, designated as the AI Output Change Processing Protocol (AOCPP), consists of seven sequential phases:

  1. Change Registration Phase (CRP): The change is entered into the Change Management Registry (CMR) with a unique Change Tracking Identifier (CTI) and linked to its originating Fully Documented Requirements Package reference.
  2. Technical Impact Assessment (TIA): Conducted by the Enterprise Architecture Review Board (EARB), which evaluates the change’s effect on system architecture, data flow, and integration points.
  3. Security Implications Review (SIR): Assesses potential security impacts, regardless of whether the change touches security-relevant code.
  4. Compliance Mapping Verification (CMV): Confirms that the change does not violate any regulatory requirements documented in the Regulatory Assurance Matrix (RAMx).
  5. Stakeholder Notification Period (SNP): A mandatory 5-business-day waiting period during which all stakeholders are notified of the pending change and given the opportunity to raise objections.
  6. CAB Deliberation Session (CDS): A formal meeting where all evidence from Phases 1–5 is presented and the CAB renders a Deployment Authorization Decision (DAD).
  7. Post-Decision Documentation Phase (PDDP): The CAB’s decision, rationale, and any conditions are recorded in the Governance Decision Archive (GDA).
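The seven phases can be sketched as a strictly sequential pipeline in which any failure halts processing at that phase (in the full protocol, a CDS objection returns the change to Phase 2); phase internals are stubbed out here:

```python
AOCPP_PHASES = ["CRP", "TIA", "SIR", "CMV", "SNP", "CDS", "PDDP"]

def process_change(cti: str, phase_passes) -> dict:
    """Run a change (identified by its CTI) through the AOCPP in order.
    `phase_passes` stands in for the real per-phase governance work."""
    completed = []
    for phase in AOCPP_PHASES:
        if not phase_passes(phase):
            return {"cti": cti, "completed": completed, "halted_at": phase}
        completed.append(phase)
    return {"cti": cti, "completed": completed, "halted_at": None}  # DAD granted
```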

The CAB Deliberation Session (CDS)

The CAB Deliberation Session (CDS) deserves particular attention, as it is the ceremony where all governance threads converge. The CDS is chaired by the DOUCHE and attended by representatives from every governance body involved in Phases 1–5.

Each representative presents their assessment, and the CAB reaches a decision through Consensus-Based Governance Resolution (CBGR) – a process in which all representatives must agree before a change is approved. A single objection from any representative returns the change to Phase 2 for re-assessment, regardless of how minor the objection. This consensus requirement ensures that no change reaches production without the full backing of every governance body, a principle known as Universal Governance Endorsement (UGE).

Why Full CAB Processing Applies to Minor Modifications

The requirement for full CAB processing of minor modifications may appear disproportionate, but the EAIEF™ identifies several critical reasons for this stance:

  • Provenance Uncertainty Factor (PUF): AI-generated changes have a unique PUF that human-authored changes do not – the code was produced by a model whose reasoning process is not fully transparent, making every change inherently more uncertain than an equivalent human-authored change.
  • Aggregate Modification Effect (AME): The accumulated volume of AI-generated changes – which, under the Code Volume Productivity metrics, is expected to be substantial – means that even individually minor changes can produce significant cumulative impact.
  • Governance Completeness Immunity (GCI): Full CAB processing for every change generates the comprehensive Audit Trail Robustness (ATR) that regulators and compliance officers expect. An organization that can demonstrate that every single change – no matter how small – was reviewed by a full CAB has an audit position that is functionally unassailable.

Scaling Throughput via Regional Change Approval Sub-Boards (RCASBs)

The practical throughput of the AOCPP is approximately 3-5 changes per CAB session, with sessions held bi-weekly. Organizations generating hundreds or thousands of AI changes per quarter may initially experience a Change Processing Backlog (CPB) as the CAB scales to meet demand. The EAIEF™ addresses CPB not by streamlining the process but by scaling the CAB horizontally through the creation of Regional Change Approval Sub-Boards (RCASBs), each empowered to process changes within their geographic or business-unit jurisdiction. Each RCASB follows the identical seven-phase AOCPP, ensuring governance consistency while increasing organizational throughput. The total number of changes processed is tracked through the CAB Throughput Index (CTI) and reported to the Admiral’s Transformation Office as evidence of governance maturity. A rising CTI demonstrates that the organization is successfully scaling its governance apparatus to match its AI-accelerated output – a hallmark of true Enterprise AI Maturity (EAM).
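The scaling arithmetic follows directly from the stated throughput: at roughly 4 changes per session (the midpoint of the 3-5 range) and about 6 bi-weekly sessions per quarter, each board clears around 24 changes a quarter. A sketch under those assumptions:

```python
import math

def rcasbs_required(changes_per_quarter: int,
                    changes_per_session: int = 4,
                    sessions_per_quarter: int = 6) -> int:
    """Number of Regional Change Approval Sub-Boards needed to clear the
    quarterly intake without a Change Processing Backlog."""
    per_board = changes_per_session * sessions_per_quarter  # ~24 changes/quarter
    return math.ceil(changes_per_quarter / per_board)
```

Under these assumptions, an organization generating 1,000 AI changes per quarter would need 42 RCASBs, each running the identical seven-phase AOCPP.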
