Artificial Intelligence

Scaling content review operations with a multi-agent workflow

Enterprises are managing ever-growing volumes of content, ranging from product catalogs and support articles to knowledge bases and technical documentation. Ensuring this information remains accurate, relevant, and aligned with the latest business facts is a formidable challenge. Manual content review processes are often slow, costly, and unable to keep pace with dynamic business needs. According to a McKinsey study, organizations that use generative AI for knowledge work, including content review and quality assurance, can boost productivity by 30–50% and dramatically reduce time spent on repetitive verification tasks. Similarly, research from Deloitte highlights that AI-driven content operations not only increase efficiency but also help organizations maintain higher content accuracy and reduce operational risk.

Amazon Bedrock AgentCore, a purpose-built infrastructure for deploying and operating AI agents at scale, combined with Strands Agents, an open source SDK for building AI agents, empowers organizations to automate comprehensive content review workflows. This agent-based approach enables businesses to evaluate content for accuracy, verify information against authoritative sources, and generate actionable recommendations for improvement. By using specialized agents that work together autonomously, human experts can focus on strategic review tasks while the AI agent system handles large-scale content validation.

The agent-based approach we present is applicable to any type of enterprise content, from product documentation and knowledge bases to marketing materials and technical specifications. To demonstrate these concepts in action, we walk through a practical example of reviewing blog content for technical accuracy. These patterns and techniques can be directly adapted to various content review needs by adjusting the agent configurations, tools, and verification sources.

Solution overview

The content review solution implements a multi-agent workflow pattern, where three specialized AI agents built with Strands Agents and deployed on Amazon Bedrock AgentCore work in a coordinated pipeline. Each agent receives the output from the previous agent, processes it according to its specialized function, and passes enriched information to the next agent in the sequence. This creates a progressive refinement process (sketched in code after the list) where:

  1. Content scanner agent analyzes raw content and extracts relevant information
  2. Content verification agent takes these extracted elements and validates them against authoritative sources
  3. Recommendation agent transforms verification findings into actionable content updates
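
The following minimal sketch shows this hand-off with Strands Agents. The prompt strings and the review_content helper are condensed illustrations, not the repository's exact code:

```python
from strands import Agent

# Condensed placeholder prompts; the repository defines more detailed versions.
CONTENT_SCANNER_PROMPT = "Extract time-sensitive technical elements as structured output."
CONTENT_VERIFICATION_PROMPT = "Validate each element against authoritative sources."
RECOMMENDATION_PROMPT = "Turn verification findings into actionable content updates."

scanner_agent = Agent(system_prompt=CONTENT_SCANNER_PROMPT)
verification_agent = Agent(system_prompt=CONTENT_VERIFICATION_PROMPT)
recommendation_agent = Agent(system_prompt=RECOMMENDATION_PROMPT)

def review_content(content: str) -> str:
    # Each stage consumes the previous stage's output and enriches it.
    extracted = scanner_agent(f"Scan this content:\n{content}")
    verified = verification_agent(f"Verify these elements:\n{extracted}")
    return str(recommendation_agent(f"Recommend updates based on:\n{verified}"))
```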

Technical content maintenance requires multiple specialized agents because manually scanning, verifying, and updating documentation is inefficient and error-prone. Each agent has a focused role: the scanner identifies time-sensitive elements, the verifier checks current accuracy, and the recommendation agent crafts precise updates. The system's modular design, with clear interfaces and responsibilities, makes it straightforward to add new agents or expand capabilities as content complexity grows.

To illustrate how this agent-based content review system works in practice, we walk through an implementation that reviews technical blog posts for accuracy. Tech companies frequently publish blog posts detailing new features, updates, and best practices. However, the rapid pace of innovation means some features become deprecated or updated, making it challenging to keep information current across hundreds or thousands of published posts. While we demonstrate this pattern with blog content, the architecture is content agnostic and supports any content type when the agents are configured with appropriate prompts, tools, and data sources.

Practical example: Blog content review solution

We use three specialized agents that communicate sequentially to automatically review posts and identify outdated technical information. Users can trigger the system manually or schedule it to run periodically.
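
For a manual trigger, an invocation sketch might look like the following, assuming the workflow is already deployed to an AgentCore runtime. The ARN and the blog_url payload key are placeholders, and exact response handling depends on what your entrypoint returns:

```python
import json
import uuid
import boto3

# Data-plane client for Amazon Bedrock AgentCore
client = boto3.client("bedrock-agentcore")

response = client.invoke_agent_runtime(
    agentRuntimeArn="arn:aws:bedrock-agentcore:us-east-1:111122223333:runtime/blog_reviewer-example",  # placeholder
    runtimeSessionId=str(uuid.uuid4()),  # unique session identifier
    payload=json.dumps({"blog_url": "https://example.com/posts/my-post"}),
)
print(response["response"].read().decode())
```

A scheduled run could invoke the same API from, for example, an Amazon EventBridge Scheduler target.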

Figure 1: Blog content review architecture. Three agents (blog scanner, blog verification, and recommendation) run on Amazon Bedrock AgentCore Runtime, processing a blog URL through the HTTP tool and the AWS Documentation MCP Server to generate review results.

The workflow begins when a blog URL is provided to the blog scanner agent, which retrieves the content using the Strands http_request tool and extracts key technical claims requiring verification. The verification agent then queries the AWS Documentation MCP Server to validate each claim against the latest documentation. Finally, the recommendation agent synthesizes the findings and generates a comprehensive review report with actionable recommendations for the blog team.
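
On AgentCore Runtime, the pipeline is exposed through an entrypoint. Here is a sketch using the bedrock-agentcore Python SDK's app and entrypoint pattern; the blog_url payload key and the review_blog helper are illustrative stand-ins for the pipeline above:

```python
from bedrock_agentcore.runtime import BedrockAgentCoreApp

app = BedrockAgentCoreApp()

def review_blog(url: str) -> str:
    # Placeholder for the three-agent pipeline: scan, verify, recommend.
    ...

@app.entrypoint
def invoke(payload):
    # "blog_url" is an assumed payload key for this sketch.
    return review_blog(payload.get("blog_url"))

if __name__ == "__main__":
    app.run()
```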

The code is open source and hosted on GitHub.

Multi-agent workflow

Content scanner agent: Intelligent extraction for obsolescence detection

The content scanner agent serves as the entry point to the multi-agent workflow and is responsible for identifying potentially obsolete technical information, specifically targeting elements that are likely to become outdated over time. The agent analyzes content and produces structured output that categorizes each technical element by type, location in the blog, and time sensitivity. This structured format gives the verification agent well-organized data that it can process efficiently.
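
A scanner agent along these lines might look like the following sketch. The prompt is condensed for illustration, and http_request comes from the strands-agents-tools package:

```python
from strands import Agent
from strands_tools import http_request

# Condensed illustrative prompt; the repository's version is more detailed.
CONTENT_SCANNER_PROMPT = """Fetch the page at the given URL and extract every
time-sensitive technical element. For each element, report its type (version,
feature, syntax, prerequisite, or pricing), its location in the post, and why
it may become outdated. Return the elements as a structured JSON list."""

scanner_agent = Agent(
    model="us.anthropic.claude-3-7-sonnet-20250219-v1:0",
    system_prompt=CONTENT_SCANNER_PROMPT,
    tools=[http_request],
)

extracted = scanner_agent("Scan https://example.com/posts/my-post for technical claims")
```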

Content verification agent: Evidence-based validation

The content verification agent receives the structured technical elements from the scanner agent and performs validation against authoritative sources. The verification agent uses the AWS documentation MCP server to access current technical documentation. For each technical element received from the scanner agent, it follows a systematic verification process guided by specific prompts that focus on objective, measurable criteria.

The agent is prompted to check for the following criteria (condensed into a prompt sketch after this list):

  • Version-specific information: Does the mentioned version number, API endpoint, or configuration parameter still exist?
  • Feature availability: Is the described service feature still available in the specified regions or tiers?
  • Syntax accuracy: Do code examples, CLI commands, or configuration snippets match current documentation?
  • Prerequisite validity: Are the listed requirements, dependencies, or setup steps still accurate?
  • Pricing and limits: Do mentioned costs, quotas, or service limits align with current published information?
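
These criteria translate directly into the verification agent's system prompt. A condensed sketch (not the repository's exact wording):

```python
# Condensed illustrative prompt; the repository's version is more detailed.
CONTENT_VERIFICATION_PROMPT = """For each technical element you receive, check:
- Version-specific information: do cited versions, API endpoints, and
  configuration parameters still exist?
- Feature availability: is the feature still offered in the stated regions or tiers?
- Syntax accuracy: do code, CLI, and configuration examples match current docs?
- Prerequisite validity: are listed requirements and setup steps still accurate?
- Pricing and limits: do costs, quotas, and service limits match published values?
Classify each element as CURRENT, PARTIALLY_OBSOLETE, or FULLY_OBSOLETE and cite
the documentation evidence behind each classification."""
```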

For each technical element received from the scanner agent, the agent performs the following steps (see the wiring sketch after the list):

  1. Generates targeted search queries based on the element type and content
  2. Queries the documentation server for current information
  3. Compares the original claim against authoritative sources using the specific criteria above
  4. Classifies the verification result as CURRENT, PARTIALLY_OBSOLETE, or FULLY_OBSOLETE
  5. Documents specific discrepancies with evidence
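
Wiring this agent to the AWS Documentation MCP Server might look like the following sketch; the extracted_elements input stands in for the scanner agent's output:

```python
from mcp import stdio_client, StdioServerParameters
from strands import Agent
from strands.tools.mcp import MCPClient

CONTENT_VERIFICATION_PROMPT = "<verification criteria prompt from the sketch above>"

# Launch the public AWS Documentation MCP server over stdio via uvx.
docs_mcp_client = MCPClient(
    lambda: stdio_client(StdioServerParameters(
        command="uvx", args=["awslabs.aws-documentation-mcp-server@latest"]
    ))
)

extracted_elements = "<structured output from the scanner agent>"

with docs_mcp_client:
    verification_agent = Agent(
        model="us.anthropic.claude-3-7-sonnet-20250219-v1:0",
        system_prompt=CONTENT_VERIFICATION_PROMPT,
        tools=docs_mcp_client.list_tools_sync(),
    )
    verified = verification_agent(f"Verify these elements:\n{extracted_elements}")
```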

Example verification in action: When the scanner agent identifies the claim “Amazon Bedrock is available in us-east-1 and us-west-2 regions only,” the verification agent generates the search query “Amazon Bedrock available regions” and retrieves current regional availability from AWS documentation. Upon finding that Bedrock is now available in 8+ regions, including eu-west-1 and ap-southeast-1, it classifies the claim as PARTIALLY_OBSOLETE with the evidence: “Original claim lists 2 regions, but current documentation shows availability in us-east-1, us-west-2, eu-west-1, ap-southeast-1, and 4 additional regions as of the verification date.”

The verification agent’s output maintains the element structure from the scanner agent while adding these verification details and evidence-based classifications.
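
As an illustration, an enriched element might look like the following; the field names are hypothetical, and the repository defines its own schema:

```python
verified_element = {
    "type": "feature_availability",  # hypothetical field names throughout
    "location": "paragraph 3",
    "original_claim": "Amazon Bedrock is available in us-east-1 and us-west-2 regions only",
    "classification": "PARTIALLY_OBSOLETE",
    "evidence": "Current documentation lists us-east-1, us-west-2, eu-west-1, "
                "ap-southeast-1, and 4 additional regions.",
}
```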

Recommendation agent: Actionable update generation

The recommendation agent represents the final stage in the multi-agent workflow, transforming verification findings into ready-to-implement content updates. This agent receives the verification results and generates specific recommendations that maintain the original content’s style while correcting technical inaccuracies.
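
A sketch of the recommendation agent, again with a condensed illustrative prompt:

```python
from strands import Agent

# Condensed illustrative prompt; the repository's version is more detailed.
RECOMMENDATION_PROMPT = """For each PARTIALLY_OBSOLETE or FULLY_OBSOLETE element,
draft replacement text that corrects the inaccuracy while preserving the original
post's tone and style. Summarize the findings in a review report for the blog team."""

recommendation_agent = Agent(
    model="us.anthropic.claude-3-7-sonnet-20250219-v1:0",
    system_prompt=RECOMMENDATION_PROMPT,
)
```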

Adapting the multi-agent workflow pattern for your content review use cases

The multi-agent workflow pattern can be quickly adapted to any content review scenario without architectural changes. Whether you are reviewing product documentation, marketing materials, or regulatory compliance documents, the same three-agent sequential workflow applies. You modify each agent's system prompt to focus on domain-specific elements and, where needed, swap out the tools or knowledge sources. For instance, while our blog review example uses the http_request tool to fetch the blog content and the AWS Documentation MCP Server for verification, a product catalog review system might use a database connector tool to retrieve product information and query inventory management APIs for verification. Similarly, a compliance review system would adjust the scanner agent's prompt to identify regulatory statements instead of technical claims, connect the verification agent to legal databases rather than technical documentation, and configure the recommendation agent to generate audit-ready reports instead of content updates. The core sequential steps (extraction, verification, and recommendation) remain constant across all these scenarios, providing a proven pattern that scales from technical blogs to any enterprise content type.

We recommend the following changes to customize the solution for other content types:

1. Replace the values of the CONTENT_SCANNER_PROMPT, CONTENT_VERIFICATION_PROMPT, and RECOMMENDATION_PROMPT variables with your custom prompt instructions:

```python
CONTENT_SCANNER_PROMPT = """<replace with your prompt instructions>"""
CONTENT_VERIFICATION_PROMPT = """<replace with your prompt instructions>"""
RECOMMENDATION_PROMPT = """<replace with your prompt instructions>"""
```

2. Update the official documentation MCP server for the content verification agent:

```python
product_db_mcp_client = MCPClient(
    lambda: stdio_client(StdioServerParameters(
        command="uvx", args=["<replace with your official documentation MCP server>"]
    ))
)
```

3. Add appropriate content access tools, such as database_query_tool and cms_api_tool, for the content scanner agent when the http_request tool is insufficient:

```python
scanner_agent = Agent(
    model="us.anthropic.claude-3-7-sonnet-20250219-v1:0",
    system_prompt=CONTENT_SCANNER_PROMPT,
    tools=[database_query_tool, cms_api_tool]  # Replace http_request
)
```

These targeted modifications enable the same architectural pattern to handle any content type while maintaining the proven three-agent workflow structure, ensuring reliability and consistency across different content domains without requiring changes to the core orchestration logic.

Conclusion and next steps

In this post, we explained how to architect an AI agent powered content review system using Amazon Bedrock AgentCore and Strands Agents. We demonstrated the multi-agent workflow pattern where specialized agents work together to scan content, verify technical accuracy against authoritative sources, and generate actionable recommendations. Additionally, we discussed how to adapt this multi-agent pattern for different content types by modifying agent prompts, tools, and data sources while maintaining the same architectural framework.

We encourage you to test the sample code available on GitHub in your own account to gain first-hand experience with the solution. As next steps, consider starting with a pilot project on a subset of your content, customizing the agent prompts for your specific domain, and integrating appropriate verification sources for your use case. The modular nature of this architecture allows you to iteratively refine each agent’s capabilities as you expand the system to handle your organization’s full content review needs.


About the authors

Sarath Krishnan is a Senior Gen AI/ML Specialist Solutions Architect at Amazon Web Services, where he helps enterprise customers design and deploy generative AI and machine learning solutions that deliver measurable business outcomes. He brings deep expertise in Generative AI, Machine Learning, and MLOps to build scalable, secure, and production-ready AI systems.

Santhosh Kuriakose is an AI/ML Specialist Solutions Architect at Amazon Web Services, where he leverages his expertise in AI and ML to build technology solutions that deliver strategic business outcomes for his customers.

Ravi Vijayan is a Customer Solutions Manager with Amazon Web Services. He brings expertise as a Developer, Tech Program Manager, and Client Partner, and is currently focused on helping customers fully realize the potential and benefits of migrating to the cloud and modernizing with Generative AI.