Governance · December 9, 2025 · 8 min read

Governance on by default: the enterprise AI baseline

Governance is the first thing enterprises ask about and the last thing pilots build. Here is a pragmatic baseline that scales without slowing the team down.

Pavan K
Founder, Mudish Technologies
Governance · Compliance · Security

Every enterprise AI review we join opens the same way. The CISO asks where data goes, the general counsel asks who is accountable, and the head of data asks whether the model has seen production records it should not have. These questions are reasonable. The problem is that the pilot team usually does not have answers — because governance was not part of the original build.

The good news: you do not need a fifty-page policy to ship responsibly. You need a small number of defaults that are on for every project from day one. The ones below are the baseline we recommend to customers in regulated and public-sector work. They are compatible with speed, and they make every subsequent review dramatically easier.

The eight defaults

1. Data-residency tags on every prompt

Every inbound request carries a residency tag (for example: US, EU, IN, or GOV). The router refuses to send a tagged request to a provider or region that does not match. This is cheap to build and removes an entire class of after-the-fact incidents.
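A minimal sketch of that refusal logic, assuming a hypothetical provider-to-regions table (the provider names and region codes here are illustrative, not real endpoints):

```python
# Hypothetical mapping of providers to the residency tags they may serve.
PROVIDER_REGIONS = {
    "openai-us": {"US"},
    "azure-eu": {"EU"},
    "local-gov": {"GOV"},
}

def route(request_residency: str, provider: str) -> str:
    """Return the provider only if it serves the request's residency tag."""
    allowed = PROVIDER_REGIONS.get(provider, set())
    if request_residency not in allowed:
        raise PermissionError(
            f"residency {request_residency!r} not served by {provider!r}"
        )
    return provider
```

The check is a dictionary lookup per request, which is why this control is cheap: the cost is in agreeing on the table, not in the code.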

2. Provider allow-lists per workload

Not every workload gets every model. A workload touching PII should not have a development-time fallback to a model you have not contractually approved. Allow-lists are a config file, not a project.
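To make "a config file, not a project" concrete, here is one possible shape, with made-up workload and model names:

```python
# Hypothetical per-workload allow-lists; in practice this would live in
# version-controlled config, not source code.
ALLOW_LISTS = {
    "pii-triage": ["approved-model-a"],
    "marketing-copy": ["approved-model-a", "approved-model-b"],
}

def resolve_model(workload: str, requested: str) -> str:
    """Reject any model that is not contractually approved for this workload."""
    allowed = ALLOW_LISTS.get(workload, [])
    if requested not in allowed:
        raise ValueError(f"model {requested!r} not approved for {workload!r}")
    return requested
```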

3. Output classification hooks

Before an output leaves the agent, run a lightweight classifier for PII leakage, policy violations, and jailbreaks. This does not have to be perfect — it has to be present, versioned, and logged.
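As a sketch of "present, versioned, and logged": a couple of illustrative regex checks stand in for a real classifier here, which would be far richer, but the structure of the hook is the point:

```python
import re

# Hypothetical pre-release output check. The patterns are illustrative
# placeholders for a real PII/policy classifier.
CLASSIFIER_VERSION = "v1"
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN shape
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

def classify_output(text: str) -> dict:
    """Flag suspect outputs and record which classifier version ran."""
    flags = [p.pattern for p in PII_PATTERNS if p.search(text)]
    return {"version": CLASSIFIER_VERSION, "flags": flags, "blocked": bool(flags)}
```

The returned record, including the version, is what you log alongside the trace so an auditor can see which rules were in force at the time.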

4. Traces on every step, retained for 90 days

Every agent step — prompt, tool call, model response, latency, cost — goes to a trace store. Ninety days is long enough to support most incident investigations and short enough to keep costs sane. Your auditors will ask for this; bake it in now.
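One way to shape a per-step trace record (field names are assumptions; the 90-day retention is enforced by the store's lifecycle policy, not by the writer):

```python
import json
import time
import uuid

def trace_step(store: list, *, step: str, prompt: str, response: str,
               latency_ms: float, cost_usd: float) -> dict:
    """Append one JSON-lines record per agent step to the trace store."""
    record = {
        "trace_id": str(uuid.uuid4()),
        "ts": time.time(),
        "step": step,
        "prompt": prompt,
        "response": response,
        "latency_ms": latency_ms,
        "cost_usd": cost_usd,
    }
    store.append(json.dumps(record))
    return record
```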

5. A human-in-the-loop escape hatch

Every production agent has an 'I am not sure' path that routes to a person. The agent does not get to fail silently. You will not predict every edge case, but you can guarantee that the ones you miss are caught.
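The escape hatch can be as simple as a confidence gate in front of delivery. The 0.8 threshold and the queue here are illustrative assumptions:

```python
# Hypothetical gate: answers below the threshold go to a human queue
# instead of being released.
CONFIDENCE_THRESHOLD = 0.8

def deliver(answer: str, confidence: float, human_queue: list) -> str:
    if confidence < CONFIDENCE_THRESHOLD:
        human_queue.append(answer)  # a person reviews before release
        return "escalated-to-human"
    return answer
```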

6. Documented accountable owner

A name, a team, and an escalation contact per agent. This is the single cheapest governance control and the one pilots most often skip. If nobody is accountable, the agent will drift.
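Because this control is so often skipped, it helps to make ownership a required, validated record rather than a wiki convention. A sketch, with illustrative field names:

```python
from dataclasses import dataclass

@dataclass
class AgentOwner:
    """Hypothetical ownership record: every field must be filled in."""
    agent: str
    owner: str
    team: str
    escalation_contact: str

    def __post_init__(self):
        for field_name, value in vars(self).items():
            if not value:
                raise ValueError(f"{field_name} must not be empty")
```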

7. Model-change notifications with a rollback plan

Providers deprecate and retune models. Subscribe to their change feeds, and require every agent to have a documented rollback model. The question is not whether a silent regression will land; it is whether you will notice before your customers.
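"Documented rollback model" can mean one extra field in the agent config, so reverting a bad provider change is a flag flip rather than a deploy. Agent and model names below are made up:

```python
# Hypothetical agent config: the rollback model is declared up front,
# alongside the active one.
AGENT_CONFIG = {
    "support-agent": {
        "model": "provider-model-2025-06",
        "rollback_model": "provider-model-2025-01",
    },
}

def active_model(agent: str, use_rollback: bool = False) -> str:
    cfg = AGENT_CONFIG[agent]
    return cfg["rollback_model"] if use_rollback else cfg["model"]
```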

8. A signed-off risk register

For every agent, one page: what it does, who it affects, what data it touches, what could go wrong, and what the compensating control is. Signed by the accountable owner and reviewed quarterly. The register is the artifact your board and your regulators actually want — and most teams do not produce it until they have to.
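The one-pager can also live as a structured record, which makes the quarterly review auditable. This sketch mirrors the fields listed above; the field names and the 90-day review window are assumptions:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskRegisterEntry:
    """Hypothetical one-page risk register entry for a single agent."""
    agent: str
    purpose: str
    affected_parties: str
    data_touched: str
    failure_modes: list
    compensating_controls: list
    accountable_owner: str
    last_reviewed: date

    def review_due(self, today: date) -> bool:
        # Quarterly cadence, approximated as 90 days.
        return (today - self.last_reviewed).days > 90
```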

What this does not cover

The eight defaults are a baseline, not a strategy. A mature AI governance program also covers training-data lineage, model-card management, red-team exercises, and third-party model assessments. But those are the second thousand meters. Start with the defaults — every project, from day one — and you will have something to build on instead of something to apologize for.

