“Agent” has become the default term for a new class of AI systems.

Agentic workflows. Autonomous tools. Systems that can “think” and “act.”

The language suggests something fundamentally new.

But the more useful question is simpler:

What actually changes when a system becomes an agent?


The Boundary

A system becomes an agent when it can:

  • observe
  • infer a course of action
  • act within a permitted boundary

That is the transition.

Before that point, a system is a tool.

It takes input and produces output.
It does not initiate behavior.
It does not act beyond what it is explicitly invoked to do.

Once a system can observe its environment, infer what to do next, and act within a defined boundary, it stops being a passive component.

It becomes an actor inside a system.

That shift matters more than the term “agent.”

Because it changes the problem space entirely.
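The observe → infer → act transition can be sketched in a few lines. This is a minimal illustration, not a real framework; `tool`, `RetryActor`, and `FakeEnv` are invented names, and the "boundary" is just an action allowlist.

```python
# Minimal sketch of the tool-to-actor transition (all names illustrative).
# A tool is a pure function of its input; an actor runs an
# observe -> infer -> act loop against an environment, within a boundary.

def tool(x):
    # A tool: input in, output out. It never initiates behavior.
    return x * 2

class RetryActor:
    ALLOWED_ACTIONS = {"retry", "skip"}  # the permitted boundary

    def __init__(self, env):
        self.env = env

    def step(self):
        state = self.env.observe()                         # observe
        action = "retry" if state == "failed" else "skip"  # infer
        if action not in self.ALLOWED_ACTIONS:             # enforce boundary
            raise PermissionError(action)
        return self.env.apply(action)                      # act

class FakeEnv:
    # Stand-in environment: one failed job, records applied actions.
    def __init__(self):
        self.state, self.applied = "failed", []

    def observe(self):
        return self.state

    def apply(self, action):
        self.applied.append(action)
        return action

taken = RetryActor(FakeEnv()).step()
```

Nothing here is AI-specific. The point is structural: once `step` exists, the component initiates behavior rather than waiting to be invoked with input.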


We Already Know This System

Systems composed of interacting actors are not new.

They exist throughout production environments.

A payment service evaluates a transaction and decides whether to authorize it.

A retry worker observes failure state and determines whether to reprocess a job.

A fraud system analyzes activity and triggers downstream workflows.

Each of these systems:

  • operates on partial information
  • makes decisions under constraints
  • takes actions that affect shared state
  • interacts with other components doing the same

They encounter familiar challenges:

  • duplicate execution due to retries
  • race conditions between services
  • inconsistent state during partial failure
  • difficulty tracing why a decision occurred
  • unclear ownership of authority

This is a known problem space.

We have established patterns for it:

  • idempotency
  • message boundaries
  • retry strategies
  • coordination mechanisms
  • observability and tracing

These are the patterns used to build systems of independent components that evaluate state and act.

These components are actors.

The structure is well understood.
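As one example of those patterns, idempotency can be sketched with a result store keyed by request identity. The in-memory dict and key scheme below are stand-ins for whatever durable store a real system would use.

```python
# Sketch of idempotency for an actor acting on shared state.
# The store and key scheme are illustrative, not a specific library.

results = {}  # shared state: idempotency key -> recorded outcome

def process(key, job):
    # If this key was already handled, return the recorded outcome
    # instead of executing the side effect again.
    if key in results:
        return results[key]
    outcome = f"processed:{job}"  # the side-effecting work
    results[key] = outcome
    return outcome

first = process("job-42", "charge $10")
second = process("job-42", "charge $10")  # a retry of the same request
assert first == second  # the retry is safe: it converges on one result
```

The retry worker, the payment service, and the fraud system above all lean on some version of this: duplicate invocations collapse to a single recorded effect.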


The Actual Difference

What changes with AI agents is not that the system now has actors.

It is that the actor’s decision function is no longer stable.

Traditional actors are deterministic by design.

Given the same input and state, they produce the same outcome.

Their behavior can be reasoned about directly, tested, and relied upon in coordination with other components.

AI agents are not deterministic.

They are:

  • probabilistic
  • sensitive to context that may not be fully visible
  • capable of producing different outcomes under similar conditions
  • difficult to fully explain after the fact

In traditional actor systems, behavior is defined and repeatable.

In agentic systems, behavior is inferred and variable.

The coordination model remains, but the assumptions about consistency do not.

The structure of the system remains familiar.

But the decision-making inside the actor is no longer fixed.

And that change propagates outward.

Retries may not produce the same result.
Identical requests may diverge.
Coordination assumptions begin to weaken.

The system is now built from actors whose behavior cannot be treated as stable.

That is the difference.
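The difference can be made concrete with two decision functions. The "model" here is a stand-in random choice, not a real LLM call; the decision names are invented.

```python
import random

# Contrast sketch: a deterministic decision function vs a probabilistic one.

def deterministic_decide(state):
    # Same state -> same decision, every time.
    return "approve" if state["score"] > 0.5 else "deny"

def probabilistic_decide(state, rng):
    # Same state can yield different decisions across calls.
    return rng.choice(["approve", "deny"])

state = {"score": 0.7}
assert all(deterministic_decide(state) == "approve" for _ in range(100))

rng = random.Random(0)
outcomes = {probabilistic_decide(state, rng) for _ in range(100)}
# Repeated calls on identical input produce divergent decisions,
# so a retry of the second kind of actor need not converge.
```

Every coordination pattern that quietly assumed the first shape now has to be re-examined against the second.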


Familiar Problems, New Conditions

As these systems move into production, the same issues begin to surface.

An agent retries an action and produces a different result.

A workflow executes twice because the system cannot determine whether the previous step completed.

Two components make conflicting decisions based on slightly different views of state.

A downstream system cannot explain why a decision was made because the reasoning is embedded in model behavior rather than explicit logic.

These are not new categories of failure.

They are recognizable.

They are the same classes of problems that appear in distributed systems:

  • coordination under partial information
  • idempotency and safe retries
  • consistency across boundaries
  • observability of decision paths
  • ownership of authority

The difference is not the type of problem.

It is the behavior of the actor inside it.

Retries no longer converge.
Identical inputs do not guarantee identical outcomes.
Decision paths cannot always be reconstructed from deterministic logic.

The system is now coordinating actors whose behavior is not strictly repeatable.

That makes familiar problems harder to contain.
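One mitigation for the observability problem is to record what each actor observed at decision time, even when the reasoning itself is opaque. The log format and field names below are assumptions, not a standard.

```python
import json
import time

# Sketch: record each decision together with the exact state the actor
# saw, so the decision can be reconstructed later even when the decision
# logic itself is embedded in model behavior.

decision_log = []

def decide_and_record(actor_id, observed_state, decision):
    record = {
        "actor": actor_id,
        "observed": observed_state,  # the view the actor acted on
        "decision": decision,
        "ts": time.time(),
    }
    decision_log.append(json.dumps(record, sort_keys=True))
    return decision

decide_and_record("fraud-check", {"amount": 900}, "flag")

# Later, a downstream system can replay what was seen, if not why:
last = json.loads(decision_log[-1])
```

This does not make the actor explainable. It makes the system auditable: two components that made conflicting decisions can at least be shown to have acted on different views of state.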


Where the Work Actually Is

Many implementations focus on the agent itself:

  • prompt design
  • model selection
  • tool integration
  • chaining behaviors

These matter.

But they are not where most production failures originate.

Failures emerge at the system boundaries:

  • what the agent is permitted to do
  • how its actions are validated before execution
  • where authority actually resides
  • how decisions are recorded and reconstructed
  • how failures are detected and corrected

An agent operating without clear constraints will eventually produce actions the system cannot safely absorb.

An agent allowed to modify state without independent validation will eventually create outcomes that cannot be explained, reproduced, or defended.

The difficulty is not making the agent capable.

It is designing a system that can safely contain that capability.
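Independent validation at the boundary can be sketched as a gate between proposal and execution. The action names and limits are illustrative; a real system would enforce this outside the agent's process entirely.

```python
# Sketch of boundary validation: the agent proposes an action; the
# system decides whether it executes. Action names and limits are
# invented for illustration.

ALLOWED = {"refund", "flag"}
LIMITS = {"refund": 100}  # e.g. max refund the agent may issue

def validate(action, amount=0):
    if action not in ALLOWED:
        return False, f"action '{action}' not permitted"
    if amount > LIMITS.get(action, float("inf")):
        return False, f"{action} of {amount} exceeds limit"
    return True, "ok"

def execute(action, amount=0):
    ok, reason = validate(action, amount)
    if not ok:
        # Rejected before any state changes: the system absorbs the
        # proposal only when it fits the boundary.
        return {"executed": False, "reason": reason}
    return {"executed": True, "action": action, "amount": amount}

small = execute("refund", 50)
large = execute("refund", 500)
unknown = execute("delete_account")
```

The agent can be arbitrarily capable on the left side of `validate`; the system's guarantees come from the right side.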


Implication

These are not new concerns.

They are the same problems that exist in any system composed of interacting actors:

  • defining boundaries of action
  • controlling authority over state changes
  • validating behavior before execution
  • coordinating across components
  • observing and reconstructing decisions
  • recovering when things go wrong

The difference is the nature of the actor.

When the actor is probabilistic, the system cannot rely on behavior alone.

The boundaries have to carry more of the responsibility.


Closing

AI does not remove the need for engineering discipline.

It makes the consequences of missing it harder to ignore.

Once a system can observe, decide, and act, it becomes an actor inside a larger system.

And systems of interacting actors require structure:

  • defined authority
  • enforced constraints
  • clear coordination

The more capable the actor becomes, the less the system can rely on its behavior alone.

Responsibility shifts to the system that contains it.

The problems are not new.

What is new is the nature of the actor.

And that changes how those problems must be solved.