
Why Every AI Agent Needs a Clear Contract with Its Environment

February 18, 2025 · 8 min read

Most agents fail not because of bad models — they fail because of poorly defined action spaces. Here’s a framework for thinking about agent contracts and why it changes everything about how you build.

What Is an Agent Contract?

When we talk about an agent contract, we’re borrowing from software engineering’s concept of design by contract. The idea is simple: before an agent can act in an environment, both sides need to agree on the terms.

The contract has three parts:

  1. Preconditions — what must be true before the agent can act
  2. Postconditions — what the agent guarantees will be true after it acts
  3. Invariants — what must remain true throughout the agent’s operation
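The three parts map naturally onto runtime checks. Here is a minimal sketch (the names `Contract` and `run_with_contract` are illustrative, not from any particular library): preconditions gate the action, postconditions verify its promise, and invariants are checked in every resulting state.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Contract:
    preconditions: list[Callable[[dict], bool]] = field(default_factory=list)
    postconditions: list[Callable[[dict], bool]] = field(default_factory=list)
    invariants: list[Callable[[dict], bool]] = field(default_factory=list)

def run_with_contract(contract: Contract, state: dict,
                      action: Callable[[dict], dict]) -> dict:
    # Preconditions: refuse to act if the environment is not ready.
    if not all(p(state) for p in contract.preconditions):
        raise RuntimeError("precondition violated")
    new_state = action(state)
    # Postconditions: verify the action delivered what it promised.
    if not all(p(new_state) for p in contract.postconditions):
        raise RuntimeError("postcondition violated")
    # Invariants: must hold after every action, not just this one.
    if not all(inv(new_state) for inv in contract.invariants):
        raise RuntimeError("invariant violated")
    return new_state
```

The point of the wrapper is that violations surface as hard errors at the boundary, instead of silently corrupting later steps.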

Most teams skip all three. They hand an LLM a list of tools and a system prompt, and hope for the best. Sometimes this works. More often, it doesn’t.

The Action Space Problem

Every agent operates in an action space — the set of all possible actions it can take. The action space is defined by the tools available, the state it can modify, and the constraints on when and how it can act.

The problem is that most action spaces are implicitly defined: scattered across tool descriptions and prompt fragments, never written down in one place, and never enforced at runtime.

The result? Agents that hallucinate tool calls, use the wrong tool for the job, or get stuck in loops because they don’t understand the constraints of their environment.

Designing the Contract

A well-designed agent contract answers these questions explicitly:

What can the agent observe? Define the agent’s perception space carefully. An agent that can see everything often sees nothing useful. Constrain what information reaches the agent.

What can the agent do? Each action should have a clear purpose, well-defined inputs, and predictable outputs. If you can’t write a unit test for a tool, the agent won’t use it reliably.
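As a toy illustration of that bar (this example is mine, not from the article's codebase), a tool with one clear purpose, typed inputs, and a deterministic output is trivially unit-testable:

```python
def count_matches(text: str, term: str) -> int:
    """Count case-insensitive occurrences of `term` in `text`."""
    if not term:
        raise ValueError("term must be non-empty")
    return text.lower().count(term.lower())

# The unit test that makes it agent-ready:
assert count_matches("Contract contract CONTRACT", "contract") == 3
```

If a tool resists this kind of test, because its output depends on hidden state or its inputs are loosely specified, the model will struggle with it for the same reasons the test does.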

What are the hard limits? Some actions should be forbidden entirely. Others should require confirmation. Make these explicit in the contract, not buried in a system prompt.

How does the agent know it succeeded? Define success criteria. Without them, the agent will either stop too early or never stop.

Practical Implementation

Here’s a concrete structure for defining an agent contract in code:

from dataclasses import dataclass
from typing import Callable

@dataclass
class ActionDefinition:
    name: str
    input_schema: dict    # well-defined inputs
    output_schema: dict   # predictable outputs

@dataclass
class AgentContract:
    # What the agent can observe
    observation_schema: dict

    # Available actions with their constraints
    actions: list[ActionDefinition]

    # Termination conditions
    success_criteria: list[Callable]
    failure_criteria: list[Callable]

    # Hard limits
    max_steps: int
    forbidden_patterns: list[str]

    # Invariants that must hold throughout
    invariants: list[Callable]

The key insight is that this contract is not just documentation — it’s a runtime constraint. Your orchestration layer should enforce it.
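A minimal enforcement loop might look like the sketch below. It is self-contained, so it re-declares a trimmed version of the contract fields, and `propose` and `execute` are hypothetical stand-ins for your model call and tool executor:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentContract:
    success_criteria: list[Callable] = field(default_factory=list)
    failure_criteria: list[Callable] = field(default_factory=list)
    max_steps: int = 10
    forbidden_patterns: list[str] = field(default_factory=list)
    invariants: list[Callable] = field(default_factory=list)

def run_agent(contract: AgentContract, propose: Callable,
              execute: Callable, state: dict) -> dict:
    """Drive the agent loop, enforcing the contract at every step."""
    for _ in range(contract.max_steps):          # hard step limit
        action = propose(state)                  # model proposes the next action
        if any(p in action for p in contract.forbidden_patterns):
            raise RuntimeError(f"forbidden action: {action}")
        state = execute(state, action)
        if not all(inv(state) for inv in contract.invariants):
            raise RuntimeError("invariant violated")
        if any(bad(state) for bad in contract.failure_criteria):
            raise RuntimeError("failure criterion met")
        if any(ok(state) for ok in contract.success_criteria):
            return state                         # done: success criterion met
    raise RuntimeError("max_steps exhausted without success")
```

Nothing here depends on the model cooperating: forbidden patterns, invariants, and step limits are checked by the orchestration layer, so a bad completion fails loudly instead of drifting.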

Why This Changes Everything

When you define contracts explicitly, something interesting happens: the agent becomes dramatically more reliable. Not because the model got better, but because the problem got better defined.

The model’s job is no longer “figure out what to do in an ambiguous environment.” It’s “execute a well-defined task within known constraints.” That’s a much easier problem — and it’s one LLMs are actually good at.

The teams shipping reliable agent systems aren’t using better models. They’re using better contracts.
