AWS Executive in Residence Blog

Most Organizations Can’t Use AI Agents Across Teams—Here’s Why

AI agents can’t work across teams because they lack the domain knowledge that exists only in developers’ minds (e.g., architectural patterns, business rules, design constraints). When agents make changes to another team’s code, they usually fail. Not because the agent lacks capability, but because it doesn’t know that team’s context.

You could supervise the agent more carefully, but you can’t guide an agent through a domain that you don’t understand yourself. Some teams try using another team’s agent directly. This creates a different bottleneck. Developers don’t know the other domain well enough to formulate requests correctly or evaluate responses.

The solution isn’t giving your agent more context about the other team’s code. It’s letting the other team’s agent handle their own code. Agent-to-agent collaboration means your checkout team’s agent communicates with the billing team’s agent through structured requests. Each agent operates within its own domain context—no cross-team knowledge transfer required. This requires foundations most organizations haven’t built yet.

Here’s how it works.

How Agent-to-Agent Collaboration Actually Works

Each team runs a coordinator agent that manages its specialized agents for product, development, testing, and operations. Coordinators communicate through structured requests. Your checkout coordinator sends a request to the billing coordinator. The request specifies the capability needed, acceptance criteria, and business context.
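As a sketch, such a structured request could be modeled as a simple data class. The field names and values below are illustrative assumptions, not an established agent-to-agent protocol:

```python
from dataclasses import dataclass


@dataclass
class CrossTeamRequest:
    """A structured request one coordinator agent sends to another.

    Fields mirror the three elements described above: the capability
    needed, acceptance criteria, and business context. Names are
    illustrative, not a standard schema.
    """
    requesting_team: str            # who is asking
    capability: str                 # what the receiving domain should provide
    acceptance_criteria: list[str]  # how success will be judged
    business_context: str           # why the change is needed


# Hypothetical request from the checkout coordinator to billing.
request = CrossTeamRequest(
    requesting_team="checkout",
    capability="expose-refund-status-endpoint",
    acceptance_criteria=[
        "Returns refund state for a given order ID",
        "Responds within 200 ms at p99",
    ],
    business_context="Checkout needs refund status to show order history.",
)
```

Because the request is structured rather than free-form, the receiving coordinator can check each field against its own policies before acting.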

The billing coordinator agent checks this against architectural decisions and policies. If the request fits established patterns, the agent proceeds. It directs specialized agents to write code, generate tests, run the test suite, and submit a pull request (PR).

A developer on the billing team reviews the PR. The agents generated tests and validated technical correctness, but the developer still checks the implementation. Does this fit your architectural patterns? Does it handle edge cases? Does it introduce technical debt? The review takes hours instead of days. The agent did the heavy lifting—writing code, generating tests, and ensuring it works. The developer ensures it fits the system. Once merged, the billing coordinator agent notifies the checkout agent of deployment details and updated API docs.

When requests are unclear or violate constraints, the billing coordinator agent asks questions or suggests alternatives. When requests need architectural decisions or business approval, the agent escalates immediately. Both coordinator agents clarify technical details first. Humans focus on strategy, not coordination logistics.

Both teams keep full control over their code, quality standards, and architecture.

Domain-Specific Knowledge

Agent-to-agent collaboration requires explicit documentation of business concepts, architectural patterns, design decisions, and escalation policies. This knowledge ensures each team’s agents make decisions aligned with that domain’s technical and business constraints. Without it, coordinator agents can’t evaluate whether requests make sense or how to implement them correctly.

Most teams lack this documentation. Here’s what each team needs to build.

Document Your Business Domain

Each team must document what their domain does and what business rules constrain it. Without this, their coordinator agent can’t evaluate requests from other teams or implement changes correctly.

Teams document their core concepts. The billing team documents invoices, payment terms, and dunning procedures. The checkout team documents cart management, payment processing, and order confirmation. Each team lists their business rules, workflows, and domain boundaries.
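A team’s domain document can be short. The sketch below shows one possible shape for the billing team; the concepts, rules, and amounts are invented examples, not real billing policy:

```markdown
# Billing Domain

## Core concepts
- **Invoice**: itemized charge issued after order confirmation.
- **Payment terms**: conditions governing when payment is due.
- **Dunning**: automated reminder sequence for overdue invoices.

## Business rules (examples)
- Invoices are immutable once issued; corrections use credit notes.
- Refunds above a defined threshold require finance approval.

## Boundaries
- Owns invoicing, payment terms, and dunning.
- Does not own cart management or checkout flow.
```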

If your organization practices domain-driven design (DDD), teams already have much of this foundation. The bounded contexts, ubiquitous language, and domain models provide exactly what coordinator agents need. Organizations without DDD practices need each team to build similar documentation from scratch. AI agents can accelerate this by analyzing your codebase to identify patterns, interviewing team members to extract business rules, and drafting initial domain models for human review.

Document Your Architecture

Each team’s coordinator agent needs to understand your system structure to maintain architectural consistency. Teams document their system components and relationships. They specify integration patterns and map data ownership and location across services. They define their communication protocols between services.

Without architectural context, a coordinator agent might suggest synchronous API calls when your architecture requires asynchronous events or propose changes that violate your service boundaries.

Capture Your Architectural Decisions

Architecture documentation describes your system’s structure. Architectural decisions explain why you built it that way. Coordinator agents must understand design rationale to evaluate whether new requests from other teams violate established principles.

For each decision, teams document the context that led to it, the alternatives they considered, the trade-offs they accepted, and the constraints that influenced their choice. Format these as architecture decision records (ADRs): documents that capture the why behind architectural choices.
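A minimal ADR following this structure might look like the sketch below; the decision and its details are a hypothetical example from a billing domain, not a recommendation:

```markdown
# ADR-012: Process payment events asynchronously

## Status
Accepted

## Context
Synchronous calls to the payment provider caused checkout timeouts
during peak traffic.

## Decision
Publish payment events to a queue; consumers process them asynchronously.

## Alternatives considered
Synchronous calls with retries (rejected: still blocks checkout under load).

## Consequences
Eventual consistency between checkout and billing; consumers must be
idempotent to tolerate redelivery.
```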

Each team’s coordinator agent uses its team’s ADR knowledge base when evaluating incoming requests. When a request from another team violates an existing decision, the agent escalates immediately rather than implementing something inconsistent with the team’s architecture.

Define Explicit Escalation Policies

Coordinator agents need clear boundaries between what they can handle autonomously and what requires human judgment. Each team specifies what their coordinator must escalate—business decisions requiring approval (e.g., new payment methods or pricing changes); security-sensitive changes involving authentication, authorization, or data access; or breaking changes (e.g., API modifications or schema changes).
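One way to make such policies machine-checkable is to encode them as explicit rules the coordinator evaluates before acting. The sketch below assumes requests carry tags; the categories mirror the examples above, and the tag vocabulary is an illustrative assumption:

```python
# Hypothetical escalation check a coordinator agent might run before
# implementing a request. Categories mirror the policy examples above.
ESCALATION_TRIGGERS = {
    "business": {"new-payment-method", "pricing-change"},
    "security": {"authentication", "authorization", "data-access"},
    "breaking": {"api-modification", "schema-change"},
}


def requires_escalation(request_tags: set[str]) -> list[str]:
    """Return the policy categories a request trips, if any."""
    return [
        category
        for category, triggers in ESCALATION_TRIGGERS.items()
        if request_tags & triggers  # any overlap trips the category
    ]


# A request touching authorization and the public API must escalate.
print(requires_escalation({"authorization", "api-modification"}))
# → ['security', 'breaking']

# A routine internal change proceeds autonomously.
print(requires_escalation({"add-field-to-invoice"}))
# → []
```

Keeping the triggers in data rather than code lets each team maintain its own policy file without changing the coordinator itself.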

These policies ensure coordinator agents operate safely within each team’s constraints when handling requests from other teams.

Operational Infrastructure

Agent-to-agent collaboration requires infrastructure that most organizations haven’t yet built.

Create an Agent Registry

Teams need a central registry to discover which coordinator agents exist and what capabilities they offer, just as microservices need service registries to find each other. Each coordinator agent advertises its capabilities—what this domain handles, its boundaries, and what it explicitly doesn’t handle. It publishes interface specifications showing how to make requests. It lists escalation requirements, indicating what needs human approval.

Strengthen Your Delivery Pipeline

Agent-to-agent collaboration increases the volume of cross-team pull requests. According to the 2025 DORA report, about 77% of organizations deploy once per day or less. Manual testing, integration, and deployment cannot handle this increased volume. The coordination overhead simply shifts from requesting changes to deploying them.

Organizations need automated testing, continuous integration, and continuous deployment before scaling agent-to-agent collaboration. Without these foundations, increased code volume creates coordination problems instead of acceleration. I wrote about building this foundation in my post, Your AI Coding Assistants Will Overwhelm Your Delivery Pipeline: Here’s How to Prepare. If your teams lack these capabilities, address them first.

AI Agents Reduce the Cost of Building Foundations

The same AI agents that need this documentation can help you create it.

They interview your team about domain concepts and business rules, analyze your codebase to extract architectural patterns and generate initial ADRs, and draft domain models from existing code and documentation. Agents draft; humans review and refine.

Start small with one team’s domain. As documentation improves, agents work more autonomously. Those autonomous agents then help maintain and extend the documentation they rely on.

Where to Start

Build the foundation before building the agent. Start by documenting one team’s domain knowledge. Test that documentation with real cross-team requests using AI coding assistants. Refine what breaks. Once the documentation works reliably, build your first coordinator agent. Then scale.

This sequence de-risks your investment. It validates that teams can collaborate effectively with just documentation and AI assistants while revealing what’s missing through real-world usage. By the time you build the coordinator, you know exactly what it needs to succeed.

Pick one team that causes the most cross-team delays. Have them spend one day creating four documents: (1) a domain model explaining core business concepts, (2) an architecture overview showing system structure, (3) a contribution guide with request templates, and (4) escalation policies defining what needs human approval.

Test with three to five real requests. Give requesting teams the documentation and AI coding assistants. Track time to first pull request and review cycles. Compare to your baseline.

Refine what’s missing. Apply the playbook to three to four more teams over two months.

Once you have four to five teams with proven documentation, consider building coordinator agents. Build one coordinator for your highest-volume team. Handle one request type. Measure whether agent-mediated requests reduce overhead versus developers with AI. If it proves valuable, build the registry and expand. Otherwise, improve and iterate.

References

  1. DORA 2025 Report: State of AI-assisted Software Development
  2. How AI Is Transforming Work at Anthropic
  3. Measuring the Impact of AI Assistants on Software Development
  4. Amazon Bedrock AgentCore adds quality evaluations and policy controls for deploying trusted AI agents
Matthias Patzak

Matthias joined the AWS Executive in Residence team in early 2023 after a stint as a Principal Advisor in AWS Solutions Architecture. In this role, Matthias works with executive teams on how the cloud can help increase the speed of innovation, the efficiency of their IT, and the business value their technology generates from a people, process, and technology perspective. Before joining AWS, Matthias was Vice President IT at AutoScout24 and Managing Director at Home Shopping Europe. In both companies he introduced lean-agile operational models at scale and led successful cloud transformations, resulting in shorter delivery times, increased business value, and higher company valuations.