AWS Database Blog

Query billion-scale vectors with SQL: Integrating Amazon S3 Vectors and Aurora PostgreSQL

If you already manage relational data in Amazon Aurora PostgreSQL and need to add similarity search over large embedding collections without migrating everything into your database, this post is for you. Aurora PostgreSQL with pgvector excels at low-latency similarity searches on database-resident vectors, while Amazon S3 Vectors provides economical storage for massive vector datasets that may reach hundreds of millions or billions of embeddings. By connecting these services through AWS Lambda, you get a familiar SQL interface for both vector search and relational joins. This post includes a CloudFormation template that handles the infrastructure deployment, so you can focus on writing queries rather than wiring services together.

In this post, you’ll learn how to query Amazon S3 Vectors from Amazon Aurora PostgreSQL-Compatible Edition using standard SQL, and how to combine vector similarity results with relational filters in a single query, for example, finding the most semantically similar products and then filtering by price, stock status, or tenant in one SQL statement.

Benefits of integrating S3 Vectors and Aurora PostgreSQL

S3 Vectors supports basic key-value metadata that is stored alongside embeddings and can be used for simple filtering at query time. However, when your application requires complex SQL filters, multi-table joins, access-control policies, or transactional guarantees over that metadata, Aurora PostgreSQL is the right place to manage it. Meanwhile, S3 Vectors provides highly scalable, cost-optimized storage and indexing for large embedding collections that may grow to hundreds of millions or billions of vectors and require infrequent access. This separation reduces database storage pressure and allows vector search infrastructure to scale independently from transactional workloads. In practice, applications will often first filter candidate records in Aurora using structured data such as tenant ID, document type, or timestamp, and then perform similarity search in S3 Vectors to find the most semantically relevant results.

The integration bridges Aurora PostgreSQL to S3 Vectors using the native aws_lambda extension in Aurora, with Lambda serving as the translation layer. When you call s3vl.query_vectors() with your query parameters, the function uses aws_lambda_invoke() to call the Lambda function. Lambda translates the request and calls the S3 Vectors QueryVectors API, formats the response for PostgreSQL, and returns results as a standard PostgreSQL table.

The architecture demonstrates separation of concerns where Lambda handles S3 Vectors API integration while Aurora handles relational data. The architecture provides security isolation through minimal permissions – the Lambda execution role accesses only specific S3 Vectors operations, and the Aurora role can only invoke Lambda. This design allows you to update Lambda code or S3 Vectors indexes independently, making the system maintainable over time.

Security

The sample integration implements IAM role separation (Aurora role invokes Lambda only; Lambda role calls S3 Vectors APIs only), network security (Lambda in Aurora’s VPC with security group restrictions), and database security (s3vl schema access controlled by PostgreSQL permissions). No credentials are stored in the database – all authentication uses IAM roles.

Considerations for data consistency

This integration pattern distributes data across Aurora PostgreSQL and Amazon S3 Vectors, trading ACID guarantees for billion-scale vector search capabilities. Production deployments must address data consistency and synchronization between the two systems. During the synchronization window, queries may return stale results, missing products, or orphaned vector IDs. Production deployments require explicit synchronization processes (batch updates, change data capture, or event-driven updates), embedding version tracking to detect staleness, and application logic that validates results and handles inconsistencies gracefully.
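To make "validates results and handles inconsistencies gracefully" concrete, the simplest form of that application logic is dropping hits whose vector IDs no longer have a matching Aurora row. A minimal sketch, assuming the query results arrive as (ID, score) pairs; the function name and data shapes are illustrative and not part of the sample code:

```python
def reconcile_results(vector_hits, aurora_rows):
    """Separate valid hits from orphaned vector IDs.

    vector_hits: list of (vector_id, similarity_score) pairs from S3 Vectors.
    aurora_rows: dict mapping vector_id -> row data fetched from Aurora.
    Returns (valid_hits, orphaned_ids); orphans can be logged for repair
    by the synchronization process.
    """
    valid, orphans = [], []
    for vec_id, score in vector_hits:
        if vec_id in aurora_rows:
            valid.append((vec_id, score, aurora_rows[vec_id]))
        else:
            # Vector exists in S3 Vectors but its row was deleted in Aurora
            orphans.append(vec_id)
    return valid, orphans
```

Over-fetching a few extra candidates before reconciliation keeps result counts stable when orphans are dropped.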

This architecture is appropriate when your use case tolerates eventual consistency (for example, recommendations, content discovery) and vector scale makes database-resident storage impractical. For strong consistency requirements, consider Aurora PostgreSQL with pgvector instead.

Considerations for performance and cost

The Lambda-based approach provides reasonable performance. In testing, expect Lambda invocation latency of 100-500 ms, including cold starts. For comparison, Aurora pgvector delivers single-digit millisecond response times for direct queries, while S3 Vectors delivers sub-second performance for cold queries and less than 100 ms for warm queries at billion-vector scale. This makes S3 Vectors suitable for applications that can accept slightly higher latency in exchange for scale and cost savings.

From a storage cost perspective, S3 Vectors ($0.06/GB) is more economical than Aurora ($0.10/GB), making this combined approach well-suited for high-volume vector data that needs to be archived and queryable, though not at Aurora’s low latency. Compute costs are harder to generalize as they are highly application-dependent. S3 Vectors’ pay-per-query model favors infrequent use cases, while Aurora’s provisioned clusters spread fixed costs across many queries in high-volume scenarios but could be expensive in low volume situations. Aurora Serverless further blurs this distinction by scaling to zero cost when idle, making it suitable for infrequent use cases as well. We recommend evaluating query volume patterns and latency requirements to best optimize compute costs when choosing between S3 Vectors, Aurora with pgvector, or both.
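To make the storage comparison concrete, here is a back-of-the-envelope calculation using the per-GB prices quoted above and assuming float32 embeddings (4 bytes per dimension). This is a rough sketch only; actual bills include request, query, and I/O charges, and metadata adds to the stored size:

```python
def monthly_storage_cost(num_vectors, dims, price_per_gb,
                         bytes_per_dim=4, metadata_bytes=0):
    """Rough monthly storage cost for a vector collection.

    Assumes float32 embeddings (4 bytes/dim) and treats 1 GB as 2**30
    bytes; billing granularity in practice may differ.
    """
    total_bytes = num_vectors * (dims * bytes_per_dim + metadata_bytes)
    gigabytes = total_bytes / (1024 ** 3)
    return gigabytes * price_per_gb

# 100 million 1,024-dimensional vectors (~381 GB of raw embeddings)
s3v_cost = monthly_storage_cost(100_000_000, 1024, 0.06)     # ~ $23/month
aurora_cost = monthly_storage_cost(100_000_000, 1024, 0.10)  # ~ $38/month
```

The gap widens with metadata and replicas, which is why the post recommends S3 Vectors for the bulk embedding store and Aurora for the relational slice.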

For production use, optimize Lambda memory allocation, implement caching, and monitor CloudWatch metrics. Consider AWS pricing for Lambda invocations, S3 Vectors queries, and Aurora usage when planning production deployments.

Architecture

The integration uses three components: Aurora PostgreSQL provides SQL functions that invoke AWS Lambda using the native aws_lambda extension, the AWS Lambda function translates PostgreSQL requests into S3 Vectors API calls, and S3 Vectors runs similarity search on large-scale vector indexes.

PostgreSQL Layer creates a dedicated s3vl schema with configuration tables (storing the Lambda ARN and region), an index registry (mapping friendly names to S3 Vectors ARNs), and query functions that mirror S3 Vectors API operations.

Lambda Layer receives JSON payloads from Aurora, converts them to S3 Vectors API calls using boto3, and formats responses for PostgreSQL. The function deploys in your Aurora cluster’s virtual private cloud (VPC) for secure communication.

Amazon S3 Vectors stores vector embeddings and provides three core operations: QueryVectors for similarity search, GetVectors for retrieval by ID, and ListVectors for browsing index contents.

Data Flow

A typical query flows through five key stages:

  1. SQL query – Call s3vl.query_vectors() with your index name, query vector, and top_k parameter
  2. Function processing – PostgreSQL validates the parameters, looks up the index ARN, retrieves the Lambda ARN, constructs a JSON payload, and invokes Lambda using Aurora’s aws_lambda extension with IAM role authentication
  3. API translation – Lambda parses the payload and calls S3 Vectors QueryVectors API
  4. Similarity search – S3 Vectors performs k-nearest neighbor search and returns results with similarity scores, which Lambda converts to PostgreSQL-compatible JSON
  5. Result processing – PostgreSQL parses the response and returns a standard table format
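The JSON payload constructed in stage 2 might look like the following. This is a hypothetical sketch of the wire format; the field names are assumptions for illustration, and the repository's s3vl SQL functions and Lambda code define the real contract:

```python
import json

def build_query_payload(index_arn, query_vector, top_k, return_metadata=True):
    """Build a JSON payload like the one Aurora sends to Lambda.

    Field names ("operation", "indexArn", etc.) are illustrative, not
    the sample repository's actual schema.
    """
    return json.dumps({
        "operation": "query_vectors",
        "indexArn": index_arn,
        "queryVector": query_vector,
        "topK": top_k,
        "returnMetadata": return_metadata,
    })

payload = build_query_payload(
    "arn:aws:s3vectors:us-west-2:123456789012:bucket/demo/index/test-index-5d",
    [0.23, -0.41, 0.88, 0.0, 0.03],
    top_k=5,
)
```

Stage 5 is the mirror image: Lambda returns a JSON document that PostgreSQL parses back into rows with jsonb functions.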

Prerequisites

This tutorial assumes the following experience:

  • Aurora PostgreSQL – Database administration, extensions (particularly aws_lambda), and SQL query optimization
  • AWS Lambda – Creating, deploying, and configuring functions within a VPC
  • Vector databases – Basic understanding of embeddings and similarity search concepts
  • AWS CLI – Comfortable running commands for IAM roles, Lambda deployment, and resource configuration
  • SQL – Writing complex queries including joins and JSON processing

AWS resources:

  • Aurora PostgreSQL cluster (version 16.6 or higher, including Aurora Serverless)
  • VPC with internet access via NAT Gateway or NAT Instance (required for Lambda to call AWS APIs)
  • AWS CLI configured with permissions for S3 Vectors, Lambda, IAM, and RDS operations
  • PostgreSQL client (psql) with network connectivity to your Aurora cluster
  • Permissions to create IAM roles, Lambda functions, and S3 Vectors resources
  • No existing Lambda role associated with your Aurora cluster (Aurora supports only one Lambda role per cluster)

Note: The default VPC in AWS accounts typically lacks the NAT Gateway required for Lambda integration. See the Aurora PostgreSQL Lambda Integration documentation for VPC requirements.

Estimated time to complete: 30-45 minutes

Walkthrough

The complete source code, CloudFormation templates, and step-by-step deployment instructions are available in the AWS Samples GitHub repository. The walkthrough below focuses on the key deployment decisions and the SQL queries that demonstrate the integration’s value. For detailed CLI commands, console instructions, and troubleshooting, refer to the repository README.

  1. Deploy infrastructure and configure the integration
  2. Populate with sample data
  3. Run vector queries
  4. Combine relational and vector queries

Deploy and configure the integration components

Clone the repository and follow the setup instructions in the README. The deployment covers seven sub-steps: gathering your Aurora cluster details, deploying the CloudFormation stack, associating the IAM role, updating the Lambda function code, installing the PostgreSQL schema, configuring the Lambda integration, and registering the S3 Vectors index. The repository README provides both AWS CLI and AWS Management Console instructions for each sub-step. Here, we summarize the key actions.

Gather Aurora Cluster information

Collect the following configuration values from your Aurora PostgreSQL cluster in the RDS Console before deploying the CloudFormation stack:

  • AuroraClusterArn – Configuration tab → Resource ID
  • VpcId – Connectivity & security tab
  • SubnetIds – Connectivity & security tab → Subnets (select 2–3 subnets across different Availability Zones)

Deploy the stack

Clone the repository and prepare your deployment:

git clone https://github.com/aws-samples/sample-rds-lambda-s3vector-integration.git

cd sample-rds-lambda-s3vector-integration/deployment/cloudformation

# Create parameters file from example
cp parameters-example.json parameters.json

Edit parameters.json with your Aurora cluster details. Each parameter serves a specific purpose: the Aurora cluster ARN identifies your database, and the VPC and subnet IDs determine where Lambda runs. Remove any comment parameters (prefixed with _) from the example JSON file.

Your parameters.json should look like this:

[
  {
    "ParameterKey": "AuroraClusterArn",
    "ParameterValue": "arn:aws:rds:us-west-2:123456789012:cluster:your-cluster-name"
  },
  {
    "ParameterKey": "VpcId",
    "ParameterValue": "vpc-0123456789abcdef0"
  },
  {
    "ParameterKey": "SubnetIds",
    "ParameterValue": "subnet-abc123,subnet-def456,subnet-ghi789"
  },
  {
    "ParameterKey": "ResourcePrefix",
    "ParameterValue": "s3vl"
  },
  {
    "ParameterKey": "LambdaTimeout",
    "ParameterValue": "10"
  },
  {
    "ParameterKey": "LambdaMemorySize",
    "ParameterValue": "128"
  }
]

Deploy the CloudFormation stack (typically completes in 2-5 minutes):

aws cloudformation create-stack \
    --stack-name sample-rds-lambda-s3vector-integration \
    --template-body file://template.yaml \
    --parameters file://parameters.json \
    --capabilities CAPABILITY_NAMED_IAM

# Monitor deployment progress
aws cloudformation describe-stacks \
    --stack-name sample-rds-lambda-s3vector-integration \
    --query 'Stacks[].StackStatus'

# Wait for completion
aws cloudformation wait stack-create-complete \
    --stack-name sample-rds-lambda-s3vector-integration

During deployment, CloudFormation creates resources in a specific order to satisfy dependencies. It provisions the S3 Vectors bucket and index, creates IAM roles with least-privilege permissions (Lambda execution role for S3 Vectors APIs, Aurora role for Lambda invocation), deploys the Lambda function in your VPC with a dedicated security group, sets up CloudWatch logging for debugging and monitoring, and a dead letter queue for failed invocations.

Associate Aurora Cluster with Lambda role

After CloudFormation completes, authorize Aurora to invoke Lambda functions using the IAM role. Aurora supports only one Lambda role association per cluster, so make sure you don’t have an existing Lambda role attached:

# Get the association command from CloudFormation outputs
aws cloudformation describe-stacks \
    --stack-name sample-rds-lambda-s3vector-integration \
    --query 'Stacks[].Outputs[?OutputKey==`AuroraRoleAttachmentCommand`].OutputValue' \
    --output text

# Execute the returned command (example):
aws rds add-role-to-db-cluster \
    --db-cluster-identifier your-cluster-name \
    --role-arn arn:aws:iam::123456789012:role/s3vl-aurora-lambda-role \
    --feature-name Lambda \
    --region us-west-2

The role association typically completes within seconds, allowing Aurora to invoke Lambda functions using this role’s permissions.

Update Lambda Function code

The CloudFormation template deploys a placeholder function to satisfy deployment dependencies. Now replace it with the actual implementation that handles S3 Vectors API calls:

# Package and update the Lambda function
# Navigate to the lambda directory
cd ../../source/lambda
./package.sh

aws lambda update-function-code \
    --function-name s3vl-vector-query \
    --zip-file fileb://build/sample-rds-lambda-s3vector.zip

# Verify the update
aws lambda get-function --function-name s3vl-vector-query --query 'Configuration.[FunctionName,LastModified,CodeSize,State,LastUpdateStatus]'

[
    "s3vl-vector-query",
    "2026-01-26T19:55:30.000+0000",
    4682,
    "Active",
    "Successful"
]

The packaging script bundles the Python code with dependencies (primarily boto3 for AWS API calls). The Lambda function acts as a protocol translator, receiving JSON payloads from Aurora and converting them into S3 Vectors API calls.
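The translation step can be sketched as a handler in two parts: a pure function that maps the Aurora payload onto QueryVectors keyword arguments, and a thin wrapper that calls the API. This is a hedged sketch, not the repository's actual code; the payload field names, the boto3 client name, and the exact QueryVectors parameter casing should be verified against the repository and the boto3 S3 Vectors documentation:

```python
def translate_request(event):
    """Map the JSON payload from Aurora onto QueryVectors kwargs.

    Field names on both sides are illustrative; the sample repository's
    Lambda code defines the actual wire format.
    """
    return {
        "indexArn": event["indexArn"],
        "queryVector": {"float32": event["queryVector"]},
        "topK": event["topK"],
        "returnMetadata": event.get("returnMetadata", False),
    }

def lambda_handler(event, context):
    # boto3 is bundled by package.sh; imported lazily here so the
    # translation logic above stays testable without AWS access.
    import boto3
    client = boto3.client("s3vectors")
    response = client.query_vectors(**translate_request(event))
    # Return a JSON-serializable shape the s3vl SQL functions can
    # unpack into a standard PostgreSQL result table
    return {"results": response.get("vectors", [])}
```

Keeping the translation pure makes it easy to unit test the protocol mapping without invoking AWS.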

Configure PostgreSQL

Connect to your Aurora cluster and install the s3vl schema, which provides SQL functions that wrap Lambda invocations:

# Set environment variables for the Lambda function and S3 index ARNs
export S3VL_LAMBDA_FUNCTION_ARN=$(aws cloudformation describe-stacks \
     --stack-name sample-rds-lambda-s3vector-integration \
     --query 'Stacks[].Outputs[?OutputKey==`LambdaFunctionArn`].OutputValue' \
     --output text)
echo ${S3VL_LAMBDA_FUNCTION_ARN}
export S3VL_S3_INDEX_ARN=$(aws cloudformation describe-stacks \
     --stack-name sample-rds-lambda-s3vector-integration \
     --query 'Stacks[].Outputs[?OutputKey==`S3VectorIndexArn`].OutputValue' \
     --output text)
echo ${S3VL_S3_INDEX_ARN}

# Navigate to SQL directory and install the s3vl schema
cd ../sql
psql -h your-aurora-endpoint -U postgres -d your_database
\i install.sql

-- Configure with Lambda ARN from CloudFormation outputs
\getenv S3VL_LAMBDA_FUNCTION_ARN  S3VL_LAMBDA_FUNCTION_ARN 
SELECT s3vl.configure(
    :'S3VL_LAMBDA_FUNCTION_ARN',
    'us-west-2'
);

-- Validate configuration
SELECT * FROM s3vl.validate_config();

Note: If you encounter Lambda invocation failures or timeouts during validation, this typically indicates VPC configuration issues. See the Troubleshooting section for detailed guidance on resolving connectivity problems.

-- Register the S3 Vectors index (get ARN from CloudFormation outputs)
\getenv S3VL_S3_INDEX_ARN S3VL_S3_INDEX_ARN
SELECT s3vl.register_index(
    'test-index-5d',
    :'S3VL_S3_INDEX_ARN',
    'Test 5-dimensional vectors for demonstration'
);

The configure() function stores the Lambda ARN and region in a configuration table, while validate_config() executes a test invocation to verify connectivity. The register_index() function creates a mapping between a friendly name and the full S3 Vectors index ARN, letting you reference indexes by name in queries.

Populate with sample data

Upload the provided test vectors to your S3 Vectors index:

# Get index ARN and upload vectors
INDEX_ARN=$(aws cloudformation describe-stacks \
    --stack-name sample-rds-lambda-s3vector-integration \
    --query 'Stacks[].Outputs[?OutputKey==`S3VectorIndexArn`].OutputValue' \
    --output text)

echo $INDEX_ARN

# Navigate to the top level directory
cd sample-rds-lambda-s3vector-integration

aws s3vectors put-vectors \
    --index-arn "$INDEX_ARN" \
    --vectors file://sample-data/sample-vectors.json

# Verify upload
aws s3vectors list-vectors \
    --index-arn "$INDEX_ARN" \
    --max-results 5

Testing and combined queries

Now you can perform vector operations by querying S3 Vectors indexes directly from SQL using the test data. You can find similar vectors, retrieve specific vectors by ID, and test with identical vectors to verify similarity scoring.

-- Similarity search: find 5 most similar vectors
SELECT * FROM s3vl.query_vectors(
    index_name => 'test-index-5d',
    query_vector => ARRAY[0.23, -0.41, 0.88, 0.0, 0.03],
    top_k => 5,
    return_metadata => TRUE
);

-- Retrieve specific vectors by ID
SELECT * FROM s3vl.get_vectors(
    index_name => 'test-index-5d',
    vector_ids => ARRAY['vec_001', 'vec_002', 'vec_048']
);

-- Test with identical vector (should return similarity score of 1.0)
SELECT * FROM s3vl.query_vectors(
    index_name => 'test-index-5d',
    query_vector => ARRAY[-0.116703, 0.325043, 0.578032, 0.739321, -0.003246],
    top_k => 3,
    return_metadata => TRUE
);

-- Test with metadata filter
SELECT 
    vector_id,
    similarity_score,
    metadata
FROM s3vl.query_vectors(
    query_vector => ARRAY[0.1, 0.2, 0.3, 0.4, 0.5],
    index_name => 'test-index-5d',
    top_k => 5,
    return_metadata => TRUE,
    metadata_filter => '{"category": "test"}'::jsonb
);

The query_vectors() function sends your vector to S3 Vectors through Lambda, which performs k-nearest neighbor search using cosine similarity. The top_k parameter controls how many nearest neighbors to return—the examples here use small values for readability, but you can request larger result sets depending on your use case. Results include vector IDs, similarity scores (ranging from -1 to 1, where 1.0 indicates identical vectors and lower values indicate less similarity), and metadata. The entire operation typically completes in 100-500 milliseconds, including Lambda invocation overhead.
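The scoring behavior is easy to verify locally. Cosine similarity of a vector with itself is 1.0, which is exactly what the "identical vector" test above should return; a plain-Python sketch of the metric:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# The vector from the "identical vector" test query above
v = [-0.116703, 0.325043, 0.578032, 0.739321, -0.003246]
cosine_similarity(v, v)  # 1.0 (up to floating point)
```

Orthogonal vectors score 0.0 and opposite vectors score -1.0, matching the -1 to 1 range described above.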

Combine relational and vector queries

This is the core reason to integrate these two services: the practical advantage emerges when you combine vector similarity with relational data. This pattern is common in recommendation systems, content discovery, and similarity-based analytics. Create a sample product catalog:

-- Create sample product table
CREATE TABLE s3vl_demo_products (
    vector_key TEXT PRIMARY KEY,
    product_name TEXT,
    category TEXT,
    price DECIMAL(10,2),
    in_stock BOOLEAN,
    description TEXT
);

-- Insert sample data matching test vectors
INSERT INTO s3vl_demo_products VALUES
    ('vec_001', 'Neural Network Processor', 'example', 299.99, true, 'High-performance AI chip'),
    ('vec_002', 'Machine Learning Kit', 'example', 199.99, true, 'Complete ML development kit'),
    ('vec_003', 'Demo Board v1', 'demo', 89.99, false, 'Prototype demonstration board'),
    ('vec_012', 'Demo Board v2', 'demo', 129.99, true, 'Advanced demo hardware'),
    ('vec_015', 'Demo Board v3', 'demo', 149.99, true, 'Latest demo hardware'),
    ('vec_005', 'Sample Sensor', 'sample', 49.99, true, 'Environmental monitoring sensor'),
    ('vec_007', 'Sample Module', 'sample', 79.99, true, 'Modular component system'),
    ('vec_006', 'Test Framework', 'test', 399.99, true, 'Automated testing utility'),
    ('vec_009', 'Test Suite Pro', 'test', 599.99, false, 'Professional testing tools'),
    ('vec_048', 'Identical Unit A', 'identical', 99.99, true, 'Standard reference unit'),
    ('vec_049', 'Identical Unit B', 'identical', 99.99, true, 'Standard reference unit'),
    ('vec_050', 'Identical Unit C', 'identical', 99.99, true, 'Standard reference unit');

-- Combined relational and vector query: similarity search + business filters
WITH similar_products AS (
    SELECT vector_id, similarity_score
    FROM s3vl.query_vectors(
        index_name => 'test-index-5d',
        query_vector => ARRAY[-0.392024, 0.229378, 0.582031, 0.382919, -0.555263],
        top_k => 10
    )
)
SELECT
    p.product_name,
    p.category,
    p.price,
    sp.similarity_score,
    p.description
FROM similar_products sp
JOIN s3vl_demo_products p ON p.vector_key = sp.vector_id
WHERE p.in_stock = true AND p.price < 500
ORDER BY sp.similarity_score DESC
LIMIT 5;

This combined query demonstrates the integration’s key value: you can express complex logic entirely in SQL. The CTE performs vector similarity search to retrieve the top 10 most similar products, then the main query joins these results with your product catalog and applies business filters (in stock and under $500). In production recommendation systems, you might use this pattern to find similar products based on user behavior embeddings, then filter by inventory availability, price range, or user preferences, all in native SQL, keeping your architecture clean by handling both vector similarity and relational filtering in the database layer.

When combining metadata stored in Amazon Aurora PostgreSQL with embeddings indexed in Amazon S3 Vectors, developers should be aware of a common hybrid-search trade-off. If an application first queries Aurora to filter rows by metadata (for example tenant, document type, or date) and then sends the remaining IDs to a vector similarity search, the search space becomes smaller, which improves performance and ensures strict metadata constraints are respected. However, this pre-filtering can reduce recall if relevant vectors are excluded by the metadata filter before similarity search occurs. In practice this trade-off is often desirable – especially for multi-tenant, security, or domain-restricted workloads – because it guarantees that results meet required metadata conditions while still enabling fast semantic retrieval. Architects should design filters carefully (for example avoiding overly selective predicates when possible) and, when needed, retrieve a slightly larger candidate set and re-rank results to balance recall, precision, and latency.
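The "retrieve a slightly larger candidate set and re-rank" tactic mentioned above can be sketched in a few lines. This is an illustrative helper, not part of the sample code; the caller requests top_k times some over-fetch factor (the combined query above uses top_k => 10 for a LIMIT 5 result, a 2x factor):

```python
def overfetch_and_filter(hits, allowed_ids, top_k):
    """Post-filter an over-fetched candidate list.

    hits: (vector_id, similarity_score) pairs sorted by score descending,
    fetched with a size larger than top_k from the vector index.
    allowed_ids: IDs that pass the relational filter in Aurora.
    Returns the best top_k survivors, preserving score order.
    """
    survivors = [(vid, score) for vid, score in hits if vid in allowed_ids]
    return survivors[:top_k]
```

If fewer than top_k survivors remain, the application can retry with a larger over-fetch factor, trading one extra round trip for recall.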

Troubleshooting

Lambda invocation failures or timeouts typically indicate VPC configuration issues. Run the validation function to check connectivity:

SELECT * FROM s3vl.validate_config();

If validation fails with timeout errors, the most common cause is VPC networking misconfiguration (missing NAT Gateway, incorrect security group rules, or Lambda not in the same VPC as Aurora). These are general Aurora-Lambda connectivity issues, not specific to S3 Vectors. See the Aurora PostgreSQL Lambda Integration documentation for detailed VPC requirements and debugging steps.

IAM and permission errors occur when roles are misconfigured. Confirm the Aurora cluster has the Lambda invoke role attached, the Lambda execution role includes S3 Vectors API permissions, and the Lambda function exists in the correct Region.

S3 Vectors API errors suggest permission or connectivity problems. Confirm the Lambda execution role has S3 Vectors permissions, validate the S3 Vectors index ARN is correct, check that S3 Vectors service is available in your region, and confirm Lambda can reach S3 Vectors endpoints through internet access.

Performance issues often stem from Lambda cold starts causing initial delays. Review Lambda timeout settings to accommodate your query complexity. For production deployments, optimize Lambda memory allocation and implement caching strategies.

Cleaning up

To avoid incurring future charges, delete the resources when you’re done testing.

Remove PostgreSQL schema

Connect to your Aurora PostgreSQL database and remove the s3vl schema:

-- Drop the s3vl schema and all its objects
DROP SCHEMA IF EXISTS s3vl CASCADE;
DROP TABLE IF EXISTS s3vl_demo_products;

Remove Aurora Cluster IAM Role Association

Before deleting the CloudFormation stack, remove the Lambda role association from your Aurora cluster:

# Get the role ARN and cluster identifier from CloudFormation outputs
ROLE_ARN=$(aws cloudformation describe-stacks \
  --stack-name sample-rds-lambda-s3vector-integration \
  --query 'Stacks[].Outputs[?OutputKey==`AuroraLambdaRoleArn`].OutputValue' \
  --output text)

echo $ROLE_ARN

CLUSTER_ID=$(aws cloudformation describe-stacks \
  --stack-name sample-rds-lambda-s3vector-integration \
  --query 'Stacks[].Outputs[?OutputKey==`AuroraClusterIdentifier`].OutputValue' \
    --output text)

echo $CLUSTER_ID

# Remove the role from Aurora cluster
aws rds remove-role-from-db-cluster \
    --db-cluster-identifier "$CLUSTER_ID" \
    --role-arn "$ROLE_ARN" \
    --feature-name Lambda

Note: Only perform this step if you used the CloudFormation template to create the Aurora Lambda role. If your Aurora cluster had a pre-existing Lambda role, skip this step.

Delete the CloudFormation stack

Delete the CloudFormation stack to remove all AWS resources created during deployment:

# Delete the CloudFormation stack
aws cloudformation delete-stack \
    --stack-name sample-rds-lambda-s3vector-integration

# Monitor deletion progress
aws cloudformation describe-stacks \
    --stack-name sample-rds-lambda-s3vector-integration \
    --query 'Stacks[].StackStatus'

# Wait for deletion to complete (typically 2-5 minutes)
aws cloudformation wait stack-delete-complete \
    --stack-name sample-rds-lambda-s3vector-integration

The CloudFormation stack deletion removes all resources created by the stack.

Verify resource cleanup

After cleanup, verify that all resources have been removed:

# Verify CloudFormation stack is deleted
aws cloudformation describe-stacks \
    --stack-name sample-rds-lambda-s3vector-integration 2>/dev/null || echo "Stack successfully deleted"

# Verify Lambda function is deleted
aws lambda get-function \
    --function-name s3vl-vector-query 2>/dev/null || echo "Lambda function successfully deleted"

Note: Your Aurora PostgreSQL cluster remains unchanged and will continue to incur its normal charges.

Conclusion

This post demonstrated how to integrate Aurora PostgreSQL with Amazon S3 Vectors using AWS Lambda so you can query vector similarity results alongside relational data in single SQL queries. The architecture maintains separation of concerns—Aurora handles relational data, Lambda manages API translation, and S3 Vectors performs similarity search at scale—while using IAM roles for least-privilege access and VPC networking for secure communication. Choose Aurora pgvector for single-digit millisecond response times, or S3 Vectors for cost-effective billion-vector scale with sub-second cold query performance and sub-100ms warm query latency. For production deployments, optimize Lambda memory, implement caching, and monitor CloudWatch metrics to balance performance with cost.

To learn more, visit the Amazon Aurora PostgreSQL Vector Database, Amazon S3 Vectors, and AWS Lambda documentation. The complete source code and deployment scripts are available in the AWS Samples repository.


About the authors

Mark Greenhalgh

Mark is a Senior Database Engineer at Amazon Web Services with over 20 years of experience designing, developing, and optimizing high-performance database systems. He specializes in analyzing database benchmarks and metrics to improve performance and scalability.

Shayon Sanyal

Shayon is a Principal Specialist Solutions Architect for Data and AI and a Subject Matter Expert for Amazon’s flagship relational database, Amazon Aurora. He has over 15 years of experience managing relational databases and analytics workloads. Shayon’s relentless dedication to customer success allows him to help customers design scalable, secure, and robust cloud-based architectures.

Steve Dille

Steve is a Senior Product Manager for Amazon Aurora, where he drives generative AI strategy and product innovation across Amazon Aurora databases and Amazon Bedrock. Since joining AWS in 2020, he has led Aurora performance and benchmarking efforts, and launched the Amazon RDS Data API for Amazon Aurora Serverless, Aurora pgvector 0.8.0, Aurora quick create for Amazon Bedrock Knowledge Bases, and numerous Aurora zero-ETL features.