AWS Database Blog

Amazon Aurora DSQL connections: Drivers, strings, and best practices

Setting up your first connection to Amazon Aurora DSQL? The process looks familiar if you’ve worked with PostgreSQL, but a few key things are different. Instead of long-lived passwords, you use short-lived IAM authentication tokens. Instead of static endpoints, you work with distributed cluster endpoints that route connections across Availability Zones. If you’re troubleshooting connection timeouts, managing token expiration, or configuring drivers for the first time, understanding these connection patterns helps you avoid common pitfalls.

In this post, you learn how to configure connection strings, set up drivers in Python, Java, and Node.js, and implement best practices for authentication, connection pooling, and lifecycle management with Amazon Aurora DSQL.

Connection architecture

Amazon Aurora DSQL uses a distributed connection architecture that differs fundamentally from traditional PostgreSQL deployments. Rather than connecting to a single database instance, your application connects through a routing layer that distributes traffic across multiple Availability Zones. Understanding this architecture, including how endpoints are structured and how the wire protocol works, is essential before configuring your drivers and connection strings. The following sections describe the endpoint format and wire protocol compatibility you need to know before connecting.

Endpoint format

Your Amazon Aurora DSQL cluster endpoint follows this pattern:

<cluster-id>.dsql.<region>.on.aws

For example: weaxxxxxxxxxxxxxxxxqdqqm.dsql.us-east-1.on.aws

This dual-stack format supports both IPv4 and IPv6. The endpoint connects to Amazon Aurora DSQL’s distributed routing layer, which automatically handles connection distribution across multiple Availability Zones.

Key connection parameters:

  • Host: Your cluster endpoint (preceding format).
  • Port: 5432 (standard PostgreSQL port).
  • Database: postgres (default database name).
  • SSL Mode: Required for all connections.
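
The parameters above can be assembled into a libpq-style connection string. A minimal Python sketch (the cluster ID is the placeholder from the example above, and the admin user name is an assumption; substitute your own values):

```python
# Build a libpq-style connection string from the key connection parameters.
# The cluster ID and user are placeholders; substitute your own values.
def build_dsn(cluster_id: str, region: str) -> str:
    host = f"{cluster_id}.dsql.{region}.on.aws"  # dual-stack cluster endpoint
    return (
        f"host={host} port=5432 dbname=postgres "
        f"user=admin sslmode=require"  # SSL is mandatory for Aurora DSQL
    )

dsn = build_dsn("weaxxxxxxxxxxxxxxxxqdqqm", "us-east-1")
print(dsn)
```

The same values can be passed as individual keyword arguments to most PostgreSQL drivers instead of a single DSN string.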

Wire protocol compatibility

Amazon Aurora DSQL uses the standard PostgreSQL v3 wire protocol, so popular PostgreSQL clients and drivers, including psql, pgjdbc, psycopg, and psycopg2, work with minimal configuration changes.

Authentication and security

Aurora DSQL takes a different approach to authentication and network security than traditional PostgreSQL databases. The following sections cover IAM-based token generation, network connectivity options, and credential management best practices.

IAM-based authentication

Amazon Aurora DSQL relies exclusively on short-lived IAM authentication tokens. This approach provides several security benefits:

  • Enhanced security: Reduces risks from password storage and rotation.
  • Centralized access control: Uses AWS Identity and Access Management (AWS IAM) for unified permission management.
  • Audit trail: Connection attempts are logged through AWS CloudTrail.
  • Automatic expiration: Tokens expire after 15 minutes by default (configurable up to 1 week). Extending token lifetime beyond the default is strongly discouraged — a leaked long-lived token is a significant security risk. If extended lifetimes are required, scope tokens to minimum permissions and monitor for long-lived tokens using CloudTrail.

For comprehensive access control patterns and security best practices, see Securing Amazon Aurora DSQL: Access Control Best Practices.

Generating tokens with the AWS Command Line Interface (AWS CLI):

The following command generates an authentication token for your Aurora DSQL cluster using the AWS CLI.

aws dsql generate-db-connect-admin-auth-token \
--region us-east-1 \
--hostname <your-cluster-id>.dsql.us-east-1.on.aws

Required IAM permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dsql:DbConnect",
        "dsql:DbConnectAdmin"
      ],
      "Resource": "arn:aws:dsql:region:account-id:cluster/cluster-id",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": ["10.0.0.0/8"]
        }
      }
    }
  ]
}

  • dsql:DbConnect: Grants access to connect as a regular database user.
  • dsql:DbConnectAdmin: Grants administrative privileges.

Principle of least privilege

Grant only the minimum permissions necessary for each use case:

  • Use dsql:DbConnect for standard application access.
  • Reserve dsql:DbConnectAdmin exclusively for administrative tasks.
  • Add IP-based conditions to restrict access to known network ranges.

Network security

Amazon Aurora DSQL supports both public and private connectivity:

Public endpoint access provides security through:

  • IAM-based authentication – Reduces password-based vulnerabilities.
  • IP-based access control – Restricts connections through IAM policy conditions.
  • Mandatory SSL/TLS encryption – Connections require encrypted transport.

Private endpoint access (AWS PrivateLink) keeps traffic within AWS:

  • VPC interface endpoints – Private connectivity without internet exposure.
  • VPC endpoint policies – Additional network-level access controls.
  • Security groups – Restrict traffic to specific subnets and ports.

Attach a VPC endpoint policy to restrict which principals can connect through the endpoint. Without one, any principal in the VPC can use the endpoint to reach your cluster.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::account-id:role/your-app-role"
      },
      "Action": [
        "dsql:DbConnect"
      ],
      "Resource": "arn:aws:dsql:region:account-id:cluster/cluster-id"
    }
  ]
}

Network egress controls

Controlling inbound access is only half the picture. Without egress restrictions, a compromised application could exfiltrate data to external destinations. Restrict outbound traffic from your application hosts:

  • Security group outbound rules – Allow traffic only to required destinations (for example, Aurora DSQL on port 5432, AWS service endpoints).
  • VPC Network ACLs – Add subnet-level egress restrictions as a secondary layer.
  • VPC Flow Logs – Monitor for unexpected outbound traffic patterns.
  • AWS Network Firewall – Use for fine-grained egress filtering beyond security groups.

Credential management

The following are some best practices for managing credentials securely when connecting to Aurora DSQL:

  • Never hardcode credentials in application code.
  • Use environment variables for configuration values like hostnames and Regions.
  • Generate tokens dynamically using AWS SDK calls at connection time.
  • Use AWS Secrets Manager for storing connection configuration.
  • Rotate IAM credentials regularly following AWS security best practices.
  • Monitor authentication attempts through CloudTrail for anomaly detection.
  • Never log or persist authentication tokens – Tokens are passed as database passwords and can leak into connection string logs, application logs, or error messages. Make sure logging frameworks redact password fields, and avoid including tokens in URLs or diagnostic output.

Connection monitoring

CloudTrail logs all Aurora DSQL authentication events. Set up alerts to detect anomalous connection activity:

  • Failed authentication attempts – Create Amazon CloudWatch alarms on repeated DbConnect or DbConnectAdmin failures to detect credential misuse or misconfiguration.
  • Unexpected source IPs or Regions – Filter CloudTrail events by sourceIPAddress and awsRegion to flag connections from outside expected network ranges.
  • Unusual connection patterns – Monitor for spikes in connection volume or connections outside normal operating hours using CloudWatch anomaly detection.
  • Long-lived token usage – Track GenerateDbConnectAdminAuthToken calls where the requested lifetime exceeds the default 15 minutes.

For automated response, use Amazon EventBridge rules on CloudTrail events to trigger Amazon Simple Notification Service (Amazon SNS) notifications or AWS Lambda remediation workflows.

SSL/TLS configuration

Amazon Aurora DSQL requires encrypted transport for connections:

  • sslmode=require – Minimum encryption requirement.
  • sslmode=verify-full – Enhanced security with full certificate and hostname validation.

Production recommendation: Use verify-full mode. It validates both the certificate chain and hostname, helping to protect against man-in-the-middle threats.
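
As an illustration, the following Python sketch shows connection parameters for verify-full mode (assuming psycopg2; the CA bundle path is a hypothetical placeholder for wherever you store the Amazon root certificates):

```python
# Connection parameters for verify-full mode. The sslrootcert path is an
# assumption: download the Amazon root CA bundle and point it there.
conn_params = {
    "host": "weaxxxxxxxxxxxxxxxxqdqqm.dsql.us-east-1.on.aws",  # placeholder cluster
    "port": 5432,
    "dbname": "postgres",
    "user": "admin",
    "sslmode": "verify-full",  # validates certificate chain AND hostname
    "sslrootcert": "/path/to/AmazonRootCA1.pem",  # hypothetical local CA path
}

# With psycopg2 installed and a fresh IAM token as the password:
# conn = psycopg2.connect(password=token, **conn_params)
```

With sslmode=require, the connection is encrypted but the server certificate is not validated; verify-full adds both chain and hostname checks.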

Amazon Aurora DSQL Connectors

AWS provides Amazon Aurora DSQL connectors that act as transparent authentication layers, automatically handling IAM token generation and refresh. With a connector, you write connection code, not authentication code.

Available connectors

  • JDBC Connector — Integrates IAM authentication into the standard Java database connectivity layer, enabling seamless use with existing Java-based data access frameworks.
  • Python Connector — Supports psycopg, psycopg2, and asyncpg for asynchronous workloads. Acts as an authentication plugin, handling token generation without changes to existing connection workflows.
  • Node.js Connectors — Available for both node-postgres (pg) and Postgres.js.
  • Go Connector — Wraps pgx with automatic IAM authentication, handling token generation, SSL configuration, and connection management.
  • Ruby Connector — Provides IAM-based authentication for Ruby applications.
  • .NET Connector — Wraps Npgsql with automatic IAM authentication, handling token generation, SSL configuration, and connection management.
  • Rust Connector — Wraps SQLx with automatic IAM authentication, handling token generation, SSL configuration, and connection management.

For implementation details, see the Amazon Aurora DSQL Connectors GitHub.

Benefits of using connectors

  • Automatic Token Management — Full lifecycle of IAM token generation and refresh, including region auto-discovery from the cluster hostname.
  • Seamless Integration — Works transparently with connection pooling libraries (HikariCP, psycopg ConnectionPool, psycopg2 ThreadedConnectionPool, asyncpg native pool).
  • Framework Support — Compatible with Spring Boot, Django, and other frameworks that rely on standard database driver interfaces.
  • Reduced Boilerplate — No manual token generation code to write or maintain.

Quick start example (JDBC connector)

The following example demonstrates how to connect to an Aurora DSQL cluster using the JDBC connector in Java. Before running the code, make sure the Aurora DSQL JDBC driver is added to your project dependencies and that your IAM credentials are configured, whether through environment variables, an instance profile, or the AWS credentials file. Configure the JDBC URL with the jdbc:aws-dsql:// prefix and call DriverManager.getConnection; the connector handles IAM token generation automatically, with no manual token code needed. Note that the connector generates a fresh token for each new connection or connection pool initialization rather than caching tokens long-term.

import java.sql.Connection;
import java.sql.DriverManager;

// Use the jdbc:aws-dsql:// URL prefix so the connector intercepts the connection
String url = "jdbc:aws-dsql://" + clusterEndpoint + ":5432/postgres";
// Empty password: the connector generates and supplies an IAM token automatically
Connection conn = DriverManager.getConnection(url, "admin", "");

Manual connection patterns

If you’re not using connectors (for learning, debugging, or custom authentication flows), you can generate IAM tokens manually through the AWS SDK and pass them as the database password.

Connections require sslmode=require at minimum. Tokens are time-limited credentials derived from the caller’s IAM identity and scoped to the specific cluster hostname.

SDK token generation methods:

  • Python (boto3): generate_db_connect_admin_auth_token
  • Java: DsqlClient.generateDbConnectAdminAuthToken
  • Node.js: GenerateDbConnectAdminAuthTokenCommand
  • Go: dsql.GenerateDbConnectAdminAuthToken
  • Ruby: Aws::DSQL::Client#generate_db_connect_admin_auth_token
  • .NET: AmazonDSQLClient.GenerateDBConnectAdminAuthToken
  • Rust: dsql::Client::generate_db_connect_admin_auth_token

Pass the generated token as the database password when establishing your connection.
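
A Python sketch of this manual pattern (assuming boto3 and psycopg2 are installed and AWS credentials are configured; verify the exact method signature against the current SDK documentation):

```python
def connect_to_dsql(endpoint: str, region: str):
    """Generate a short-lived IAM token and use it as the connection password."""
    # Imports are deferred so this sketch can be read and parsed even without
    # the AWS SDK or driver installed; in real code, import at module top.
    import boto3
    import psycopg2

    client = boto3.client("dsql", region_name=region)
    # Presigned token scoped to this hostname and Region (default 15-minute lifetime)
    token = client.generate_db_connect_admin_auth_token(endpoint, region)

    return psycopg2.connect(
        host=endpoint,
        port=5432,
        dbname="postgres",
        user="admin",
        password=token,      # the token is passed as the database password
        sslmode="require",
    )

# Usage (requires AWS credentials and network access to the cluster):
# conn = connect_to_dsql("weaxxxxxxxxxxxxxxxxqdqqm.dsql.us-east-1.on.aws", "us-east-1")
```

Because tokens expire, generate a fresh one for every new connection rather than caching the token itself.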

For complete code examples, see the Amazon Aurora DSQL User Guide and Amazon Aurora DSQL Code Samples.

Connection pooling

Properly configured connection pooling reduces latency and avoids hitting Aurora DSQL’s connection rate limits. This section covers pool configuration, sizing, and the key constraints you need to account for.

Client-side pooling is required

Aurora DSQL has built-in connection pooling at the service layer, but every new connection must perform a TLS handshake and be authenticated by the service. Pool your connections and you pay that cost once, not on every request.

Do NOT use server-side connection poolers like PgBouncer or pgpool-II with Amazon Aurora DSQL. These tools are designed for traditional PostgreSQL architectures and can cause availability issues with Amazon Aurora DSQL’s distributed connection handling.

Pool configuration

The most critical parameter is maximum connection age. Amazon Aurora DSQL enforces a hard 60-minute limit on connection duration. Configure your pool’s max lifetime to 45–55 minutes so it proactively recycles connections before Amazon Aurora DSQL closes them.
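
One way to apply this guidance is to give each pooled connection a randomized maximum lifetime inside the 45–55 minute band, so recycling is spread out rather than synchronized. A minimal Python sketch:

```python
import random

HARD_LIMIT_S = 60 * 60  # Aurora DSQL closes connections after 60 minutes

def connection_max_lifetime(low_min: int = 45, high_min: int = 55) -> int:
    """Pick a per-connection max lifetime (seconds) with jitter, below the hard cap."""
    lifetime = random.randint(low_min * 60, high_min * 60)
    assert lifetime < HARD_LIMIT_S  # always recycle before the service does
    return lifetime

# Each connection in a 100-connection pool gets a different recycle time
lifetimes = [connection_max_lifetime() for _ in range(100)]
```

Most pool libraries expose this directly (for example, HikariCP's maxLifetime or psycopg_pool's max_lifetime), so in practice you set the bound once and let the pool apply its own jitter.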

For Java with HikariCP, configure maximumPoolSize, maxLifetime (below 60 minutes), and use the JDBC Connector to avoid manual token management. For complete HikariCP setup, review the official guide: Using Amazon Aurora DSQL with JDBC, Hibernate, and HikariCP.

For Python, connect using psycopg2 with a manually generated IAM token (see Amazon Aurora DSQL User Guide – Using Psycopg2), or use the Amazon Aurora DSQL Python Connector on GitHub to avoid token boilerplate entirely.

Connection limits and quotas

Before finalizing your connection pool sizing, you need to understand Amazon Aurora DSQL’s connection limits. Amazon Aurora DSQL uses a token bucket algorithm to govern connection creation rates: each new connection consumes one token; the bucket refills at a steady rate, and you can burst above the steady-state rate up to the bucket capacity.

Here are the default limits per cluster:

  • Maximum established connections: 10,000 – per-cluster limit; adjustable via Service Quotas
  • New connection rate (steady state): 100 connections/second – token bucket refill rate
  • Burst capacity: 1,000 connections – tokens available at t=0 before refill
  • Maximum connection duration: 60 minutes – hard limit; connections closed after 1 hour
  • Maximum transaction duration: 5 minutes – per transaction (BEGIN to COMMIT)

Token bucket in practice: Your application starts and opens 1,000 connections. All succeed (1,000 burst tokens). But the bucket is now empty. Connection #1,001 must wait for the bucket to refill at 100 tokens/second. This is why client-side pooling matters: reusing connections avoids burning through your creation budget.
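
The scenario above can be checked with a small simulation of the token bucket (capacity 1,000, refill rate 100 tokens/second; the class below is illustrative, not the service's actual implementation):

```python
class TokenBucket:
    """Toy model of Aurora DSQL's connection-rate token bucket."""

    def __init__(self, capacity: int = 1000, refill_per_s: float = 100.0):
        self.capacity = capacity
        self.refill_per_s = refill_per_s
        self.tokens = float(capacity)  # bucket starts full

    def advance(self, seconds: float) -> None:
        # Refill at the steady-state rate, capped at bucket capacity
        self.tokens = min(self.capacity, self.tokens + seconds * self.refill_per_s)

    def try_connect(self) -> bool:
        # Each new connection consumes one token
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket()
burst = sum(bucket.try_connect() for _ in range(1000))  # all 1,000 succeed at t=0
throttled = not bucket.try_connect()  # connection #1,001 finds an empty bucket
bucket.advance(0.01)  # 10 ms later, one token (100/s * 0.01 s) has refilled
recovered = bucket.try_connect()
```

The simulation shows why pooling matters: reusing an established connection consumes no tokens at all.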

Connection lifecycle

Aurora DSQL connections have a fixed maximum lifetime and use time-limited tokens, so your application needs to handle connection recycling and token refresh gracefully.

The 1-hour connection limit

Every Amazon Aurora DSQL connection has a maximum lifetime of 60 minutes. After one hour, the service closes the connection, regardless of whether it’s idle or active. This is by design: Amazon Aurora DSQL’s distributed architecture means internal components can fail and be replaced transparently, and the 1-hour limit makes sure your application periodically establishes fresh connections, naturally picking up healthy infrastructure. Amazon Aurora DSQL applies jitter to closures, so connections don’t drop simultaneously, and it won’t close a connection mid-transaction.

Token expiration management

Tokens expire after 15 minutes by default (configurable up to 1 week). The key nuance: after a connection is established with a valid token, that connection remains valid even after the token expires. You only need a fresh token when establishing a new connection — making the 60-minute connection limit the binding constraint, not the token expiration.

Tokens are also Region scoped. A token generated with region=us-east-1 is only valid for connections to the us-east-1 endpoint. It will not work for the us-east-2 endpoint of the same multi-Region cluster. For multi-Region deployments, generate a separate token for each regional endpoint your application connects to.

Recommended approach: Use Amazon Aurora DSQL Connectors, which automatically generate a fresh token for every new connection with no token management code required.

Connection retry logic

Transient connection failures are a normal part of operating against a distributed system, not an exception. When an internal component fails, Amazon Aurora DSQL handles it automatically, but your application will see a connection error for that specific connection.

Implement retry logic with exponential backoff and jitter for both SerializationFailure (OCC conflicts) and OperationalError (transient failures). See the Amazon Aurora DSQL concurrency control documentation and the AWS Builders’ Library – Timeouts, retries, and backoff with jitter for recommended patterns.
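
A generic Python sketch of retry with exponential backoff and full jitter (the retryable exception type and delays are placeholders; map them to your driver's SerializationFailure and OperationalError classes):

```python
import random
import time

def with_retries(operation, max_attempts=5, base_delay_s=0.05, retryable=(RuntimeError,)):
    """Run operation(), retrying transient failures with exponential backoff + full jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except retryable:
            if attempt == max_attempts - 1:
                raise  # budget exhausted; surface the error
            # Full jitter: sleep a random amount up to the exponential ceiling
            time.sleep(random.uniform(0, base_delay_s * (2 ** attempt)))

# Demo: an operation that fails twice with a transient error, then succeeds
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient connection error")
    return "ok"

result = with_retries(flaky)
```

Jitter prevents many clients from retrying in lockstep, which would otherwise recreate the original load spike.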

Multi-Region connection patterns

For applications requiring high availability across geographic regions, Amazon Aurora DSQL multi-Region clusters provide active-active architecture with regional endpoints supporting both reads and writes.

Active-active multi-Region architecture

Amazon Aurora DSQL multi-Region clusters provide regional endpoints for active-active access. Applications can connect to either endpoint for both reads and writes, enabling geographic distribution and regional failover capabilities.

Endpoint selection strategies

Connect to the nearest regional endpoint for latency and implement health-based failover to the second endpoint if the primary region has issues. Test your failover logic before you need it.
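
The selection logic can be sketched as trying endpoints in preference order, where connect_fn stands in for your actual connection routine:

```python
def connect_with_failover(endpoints, connect_fn):
    """Try regional endpoints in preference order; return the first live connection."""
    last_error = None
    for endpoint in endpoints:  # ordered nearest-first
        try:
            return endpoint, connect_fn(endpoint)
        except ConnectionError as exc:
            last_error = exc  # this Region is unhealthy; try the next endpoint
    raise last_error

# Demo with stand-in connect functions: the primary endpoint is "down"
def fake_connect(endpoint):
    if endpoint.startswith("down"):
        raise ConnectionError(f"{endpoint} unreachable")
    return f"conn-to-{endpoint}"

used, conn = connect_with_failover(
    ["down.us-east-1.example", "up.us-east-2.example"], fake_connect
)
```

In production, remember that tokens are Region-scoped, so connect_fn must generate a token for whichever regional endpoint it is attempting.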

Troubleshooting common connection issues

This section covers the most common errors and connection failures you might encounter when connecting to Aurora DSQL, along with their likely causes and recommended remediation steps. Whether you’re seeing authentication failures, timeout errors, or driver compatibility issues, the guidance below will help you diagnose and resolve problems quickly.

Issue 1: “Connection Attempt Failed”

Symptoms: Unable to establish connection to Amazon Aurora DSQL endpoint

Common causes: Incorrect IAM permissions, expired authentication token, network connectivity issues, incorrect endpoint format

Resolution: To resolve a failed connection attempt, work through the following steps in order. First, verify that the IAM user or role has the appropriate dsql:DbConnect or dsql:DbConnectAdmin permission attached to their policy. Next, confirm that your authentication token has not expired — tokens are short-lived and must be regenerated for each new connection attempt. Check that your cluster endpoint is correctly formatted, and that there are no network-level restrictions (such as security groups, VPC routing rules, or firewall policies) blocking outbound traffic on port 5432. The following commands check each layer in turn (IAM identity, token generation, and network reachability), making it easier to isolate the root cause:

# Verify IAM permissions
aws iam get-user
# Test token generation
aws dsql generate-db-connect-admin-auth-token \
--region us-east-1 \
--hostname <cluster-id>.dsql.us-east-1.on.aws
# Test network connectivity
nc -zv <cluster-id>.dsql.us-east-1.on.aws 5432

Issue 2: “Access Denied” Errors

Symptoms: Connection established but authentication fails

Resolution:

  • Verify that the IAM policy includes dsql:DbConnect or dsql:DbConnectAdmin.
  • Review the IAM policy for any conditions that might restrict access, such as aws:SourceIp, aws:RequestedRegion, or aws:PrincipalTag conditions — that could silently prevent a successful connection even when the base permission is granted.
  • Make sure the token is generated for the correct region.
  • Check that your AWS credentials are not expired.

Issue 3: PrivateLink connection issues

When connecting through PrivateLink from outside the VPC, the client needs to resolve the cluster endpoint to the VPC endpoint IP. There are two approaches:

Option 1: Override the IP address with PGHOSTADDR

export PGHOSTADDR=<vpce-ip-address>
export DSQL_HOST=<cluster-id>.dsql.<region>.on.aws
psql -h $DSQL_HOST -U admin -d postgres

This makes sure the correct hostname is used for SNI while connecting to the VPC endpoint IP.

Option 2: Use the amzn-cluster-id connection option (no DNS required)

export CLUSTERID=<cluster-id>
export PGOPTIONS="-c amzn-cluster-id=$CLUSTERID"
psql -h <vpce-endpoint> -U admin -d postgres

This passes the cluster identifier directly as a connection option, avoiding the need for DNS resolution. Useful when private DNS is not configured for the VPC endpoint.
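
The same option can be supplied from a driver through the standard libpq options parameter. A hedged psycopg2-style sketch (the VPC endpoint DNS name and cluster ID are placeholders):

```python
# libpq-style parameters mirroring Option 2: connect to the VPC endpoint
# directly and pass the cluster ID as a startup option (no DNS required).
cluster_id = "weaxxxxxxxxxxxxxxxxqdqqm"  # placeholder cluster ID
conn_params = {
    "host": "vpce-xxxx.example.amazonaws.com",  # placeholder VPC endpoint DNS name
    "port": 5432,
    "dbname": "postgres",
    "user": "admin",
    "sslmode": "require",
    "options": f"-c amzn-cluster-id={cluster_id}",  # routes to the right cluster
}

# With psycopg2 installed and a fresh IAM token as the password:
# conn = psycopg2.connect(password=token, **conn_params)
```

This mirrors the PGOPTIONS environment variable used by psql: both end up passing amzn-cluster-id as a startup option on the connection.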

For full details, see Connecting to Amazon Aurora DSQL using a PrivateLink connection endpoint.

Issue 4: Connection pool health check storms

Symptoms: Mass connection drops and re-establishments during load spikes, cascading health check failures, connection rate limit errors

Cause: Aggressive connection health check intervals (such as HikariCP’s default 5-second timeout) can trigger simultaneous health checks across thousands of pooled connections. When many checks fail at once, the pool attempts to re-establish all connections simultaneously, exhausting the 100 connections/second rate limit and causing cascading failures.

Resolution:

  • Stagger health check intervals across connections rather than using a fixed interval for all.
  • Increase idle timeouts to avoid unnecessary connection recycling.
  • For HikariCP, increase connectionTimeout and validationTimeout beyond the defaults.
  • Set maxLifetime with sufficient jitter (HikariCP automatically varies it by up to 2.5%) to avoid synchronized connection expiration.

Conclusion

In this post, we showed you how to connect to Amazon Aurora DSQL using a variety of drivers and tools, from JDBC and PostgreSQL-compatible clients to the AWS CLI. We walked through the connection architecture, explained how IAM-based authentication tokens are generated and used, and covered best practices for credential management and connection pooling. We also provided quick start examples to help you get up and running, and a troubleshooting guide for diagnosing and resolving the most common connection issues.

Ready to see it in action? Try Aurora DSQL for yourself in the playground with no setup required. Experiment with connections, run queries, and explore the features covered in this post firsthand.


About the authors

Alex Pawvathil

Alex is a Senior Technical Account Manager at AWS specializing in database architecture and enterprise-scale implementations. With over 14 years of hands-on experience in cloud architecture, database strategy, and enterprise advisory, he is a go-to expert on Amazon RDS for SQL Server implementations and enterprise-scale deployments.

Sandhya Khanderia

Sandhya is a Sr. Technical Account Manager and Data Analytics Specialist at AWS. She works closely with AWS customers to provide ongoing support and technical guidance, helping them plan and build solutions using best practices while proactively keeping their AWS environments operationally healthy.

Rob Petersen

Rob is a Senior Technical Account Manager at AWS, bringing 20 years of IT industry experience to help customers accelerate their cloud adoption journey. His experience spans both leading large-scale cloud migrations and managing hybrid infrastructure operations, giving him unique insights into the challenges and opportunities organizations face during cloud adoption.