AWS Database Blog
Accelerating data modeling accuracy with the Amazon DynamoDB Data Model Validation Tool
In Introducing the Amazon DynamoDB data modeling MCP tool, we announced the Amazon DynamoDB Model Context Protocol (MCP) server, which connects DynamoDB-specific tools to AI-powered assistants such as Amazon Q Developer, Kiro, and Cursor. The MCP server enables you to design DynamoDB data models through natural language and produces structured artifacts such as dynamodb_requirements.md and dynamodb_data_model.md.
In Raising the bar on Amazon DynamoDB data modeling, we introduced an automated evaluation framework that measures the quality of the data modeling prompt itself: how effectively it gathers requirements, reasons about access patterns, and produces scalable designs. The framework, built with Amazon Bedrock, DSPy, and Strands Agents, helps us continuously improve the guidance that the DynamoDB modeling tool provides.
Today, we’re introducing the Amazon DynamoDB Data Model Validation Tool, a new component of the MCP server that closes the loop between generation, evaluation, and execution. The validation tool automatically tests generated data models against DynamoDB local, refining them iteratively until every access pattern behaves as intended.
From static design to self-validating models
Data modeling is inherently iterative. Traditionally, developers validated data models by manually deploying schemas, writing test scripts, and analyzing results, a process that was both time-consuming and inconsistent. The Data Model Validation Tool automates this cycle from end to end.
First, the DynamoDB MCP server helps you design data models through natural language or automated database analysis, generating schemas that align with your application’s access patterns. The new Data Model Validation Tool extends this process by automatically spinning up DynamoDB local and executing the read and write operations to confirm that each access pattern behaves as expected.
This creates an iterative generation-and-validation loop that enables the agent to refine the model until it’s fully valid. If a test fails (for example, when a query returns incomplete results because the partition key is misaligned), the validator records the issue, regenerates the affected portion of the schema, and reruns the tests. The process continues until the access patterns pass successfully.
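The tool handles this cycle for you, but to make the loop concrete, the following is a minimal boto3 sketch of the kind of round trip the validator automates against a DynamoDB local endpoint (for example, one started with docker run -p 8000:8000 amazon/dynamodb-local). The table name, keys, and access pattern below are illustrative assumptions, not output from the MCP server.

```python
import boto3

# Point boto3 at a locally running DynamoDB local instance.
# DynamoDB local accepts any credentials, so dummy values are fine.
dynamodb = boto3.client(
    "dynamodb",
    endpoint_url="http://localhost:8000",
    region_name="us-east-1",
    aws_access_key_id="local",
    aws_secret_access_key="local",
)

# Create a table for a hypothetical single-table design.
dynamodb.create_table(
    TableName="AppTable",
    KeySchema=[
        {"AttributeName": "PK", "KeyType": "HASH"},
        {"AttributeName": "SK", "KeyType": "RANGE"},
    ],
    AttributeDefinitions=[
        {"AttributeName": "PK", "AttributeType": "S"},
        {"AttributeName": "SK", "AttributeType": "S"},
    ],
    BillingMode="PAY_PER_REQUEST",
)
dynamodb.get_waiter("table_exists").wait(TableName="AppTable")

# Insert sample data, then replay an access pattern and check the result.
dynamodb.put_item(
    TableName="AppTable",
    Item={"PK": {"S": "USER#123"}, "SK": {"S": "PROFILE"}, "name": {"S": "Ana"}},
)
response = dynamodb.query(
    TableName="AppTable",
    KeyConditionExpression="PK = :pk",
    ExpressionAttributeValues={":pk": {"S": "USER#123"}},
)
assert response["Count"] >= 1, "Access pattern 'get user profile' returned no items"
```

The validator runs checks like this for every access pattern in the model, so a failing assertion becomes feedback for the next refinement pass rather than a manual debugging session.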
The following diagram illustrates how you interact with the MCP server through the agentic framework of your choice.

- When you ask your agent to help design a DynamoDB data model, it calls the DynamoDB Model Context Protocol (MCP) server. The server responds by prompting you to choose between a natural language conversation and the Database Source Analyzer tool, which automatically infers your data modeling requirements.
- The tool generates a DynamoDB data model that captures your application’s access patterns and organizes them into a scalable, cost-efficient design.
- The agent then asks whether you want to validate the data model. If you choose Yes, it invokes the Data Model Validation Tool.
- The validator performs the following steps:
  - Spins up a local DynamoDB environment using DynamoDB local.
  - Generates a JSON file named dynamodb_data_model.json that lists the actions required to validate the design, including the tables to create, the access patterns to test, and the corresponding CLI commands to run (a hypothetical sketch of such a plan appears after this list).
  - Creates tables and indexes locally, and inserts sample data.
  - Executes the expected read and write operations captured in the dynamodb_data_model.json file.
  - Validates responses, checking that each access pattern behaves correctly and efficiently.
- If a validation step fails, the feedback loop automatically updates the data model and reruns the tests until the access patterns pass. Once complete, the tool outputs an updated model and a validation_result.md file summarizing the validation process.
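The exact contents of dynamodb_data_model.json are produced by the MCP server. The following is a hypothetical, simplified sketch of how such a plan could be structured and replayed against DynamoDB local with the AWS CLI; the table names, key names, and plan fields are assumptions for illustration only.

```python
import json
import subprocess

# Illustrative sketch only: the real dynamodb_data_model.json schema is defined by the
# MCP server. This simplified plan shows the idea of tables to create, sample data to
# load, and access patterns to replay as AWS CLI commands against DynamoDB local.
# Assumes the AWS CLI is installed; DynamoDB local accepts dummy credentials.
LOCAL = "--endpoint-url http://localhost:8000 --region us-east-1"

plan = {
    "tables": [
        f"aws dynamodb create-table --table-name Deals "
        f"--attribute-definitions AttributeName=PK,AttributeType=S AttributeName=SK,AttributeType=S "
        f"--key-schema AttributeName=PK,KeyType=HASH AttributeName=SK,KeyType=RANGE "
        f"--billing-mode PAY_PER_REQUEST {LOCAL}",
    ],
    "sample_data": [
        "aws dynamodb put-item --table-name Deals "
        '--item \'{"PK": {"S": "CATEGORY#electronics"}, "SK": {"S": "DEAL#100"}}\' '
        f"{LOCAL}",
    ],
    "access_patterns": [
        {
            "name": "GetDealsByCategory",
            "command": "aws dynamodb query --table-name Deals "
                       "--key-condition-expression 'PK = :pk' "
                       '--expression-attribute-values \'{":pk": {"S": "CATEGORY#electronics"}}\' '
                       f"{LOCAL}",
            "expect_min_items": 1,
        },
    ],
}

# Create tables and load sample data, then replay each access pattern and check the response.
for command in plan["tables"] + plan["sample_data"]:
    subprocess.run(command, shell=True, check=True, capture_output=True)

for pattern in plan["access_patterns"]:
    result = subprocess.run(pattern["command"], shell=True, check=True,
                            capture_output=True, text=True)
    items = json.loads(result.stdout).get("Items", [])
    verdict = "PASS" if len(items) >= pattern["expect_min_items"] else "FAIL"
    print(f"{pattern['name']}: {verdict}")
```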
Example: validating a high-traffic deals app
The following example shows validation results for a Deals App, a mobile-first system serving millions of users and handling large fan-out events during flash sales. After the MCP data modeling tool generates a schema, the validator automatically spins up DynamoDB local and runs simulated operations such as:
- Browsing deals by category and brand
- Tracking deal popularity by view count
- Writing notifications for followed categories
If the validator detects that a “GetUserFeed” query results in an error, it identifies the issue, regenerates the relevant index, and retests the access pattern. Within minutes, you have a validated data model that’s functionally tested, performance-aware, and aligned with DynamoDB best practices. The following is an example output from the validation tool using the Amazon Q CLI.
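For instance, a check for the “browse deals by category and brand” pattern might look like the following boto3 sketch. The table, index, and key attribute names here are assumptions for illustration; the actual names come from the generated data model.

```python
import boto3

# Hypothetical check for one Deals App access pattern against DynamoDB local.
dynamodb = boto3.client(
    "dynamodb",
    endpoint_url="http://localhost:8000",
    region_name="us-east-1",
    aws_access_key_id="local",
    aws_secret_access_key="local",
)

# "Browse deals by category and brand": query a GSI whose partition key holds the
# category and whose sort key is prefixed with the brand.
response = dynamodb.query(
    TableName="DealsApp",
    IndexName="GSI1-CategoryBrand",
    KeyConditionExpression="GSI1PK = :category AND begins_with(GSI1SK, :brand)",
    ExpressionAttributeValues={
        ":category": {"S": "CATEGORY#electronics"},
        ":brand": {"S": "BRAND#acme"},
    },
)

# A validator would flag this pattern if the query errors out or returns no matching deals.
print(f"GetDealsByCategoryAndBrand returned {response['Count']} item(s)")
```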
Sample validation output
When the validation completes, the tool creates a file, dynamodb_data_model.json, that highlights some of the key findings from the validation tests. In this case, the Deals App data model achieved a 100% success rate across the tested access patterns, confirming that the design is ready.
Conclusion
The Amazon DynamoDB Data Model Validation Tool closes the loop between generation, evaluation, and validation. It gives you measurable confidence that your data model not only looks right, but also works right.
With natural language modeling and executable validation, developers can now design, test, and refine DynamoDB schemas faster than ever. We’re excited to see how you use these tools to accelerate your DynamoDB design workflows and share your feedback as the MCP environment continues to evolve.
To learn more, visit the GitHub repository for the DynamoDB MCP server and explore how the validation tool integrates into your workflow.