AWS Database Blog

Unlock Amazon Aurora’s Advanced Features with Standard JDBC Driver using AWS Advanced JDBC Wrapper

Modern Java applications using Amazon Aurora often struggle to take full advantage of its cloud-based capabilities. Although Aurora offers powerful features such as fast failover, AWS Identity and Access Management (IAM) authentication support, and AWS Secrets Manager integration, standard JDBC drivers weren’t designed with cloud-specific features in mind. This isn’t a limitation of open source drivers; they excel at what they were designed for and focus on database standards rather than cloud-based optimizations.

When Aurora fails over in seconds, standard JDBC drivers can take up to a minute to reconnect because of DNS propagation delays. While Aurora supports powerful features like IAM authentication and Secrets Manager integration, implementing these features with standard JDBC drivers requires complex custom code and error handling—complexity that the AWS Advanced JDBC Wrapper eliminates.

This blog post shows Java developers how to enhance an existing application that uses the open source standard JDBC driver with a HikariCP connection pooler by adding the AWS Advanced JDBC Wrapper (JDBC Wrapper), unlocking the capabilities of Aurora and the AWS Cloud with minimal code changes. This approach preserves all the benefits of your existing PostgreSQL driver while adding cloud-based features. The post also demonstrates one of the JDBC Wrapper’s powerful features: read/write splitting.

Solution overview

The JDBC Wrapper is an intelligent wrapper that enhances your existing JDBC driver with capabilities of Aurora and the AWS Cloud. The wrapper can transform your standard PostgreSQL, MySQL, or MariaDB driver into a cloud-aware, production-ready solution. Developers can adopt the JDBC Wrapper to take advantage of the following capabilities:

  • Fast failover beyond DNS limitations – The JDBC Wrapper maintains a real-time cache of your Aurora cluster topology and each database instance’s primary or replica role through direct queries to Aurora. This bypasses DNS delays entirely, enabling immediate connections to the new primary instance during failover.
  • Seamless AWS authentication – Aurora supports IAM database authentication, but implementing it traditionally requires custom code to generate tokens, handle expiration, and manage renewals. The JDBC Wrapper automatically handles the entire IAM authentication lifecycle.
  • Built-in Secrets Manager support – Secrets Manager integration retrieves database credentials automatically. Your application doesn’t need to know the actual password—the driver handles everything behind the scenes.
  • Federated authentication – Enable database access by using organizational credentials through Microsoft Active Directory Federation Services or Okta.
  • Read/write splitting using connection control – You can maximize Aurora performance by routing write operations to the primary instance and distributing reads across Aurora replicas.
    Note: The read/write splitting feature requires developers to explicitly call setReadOnly(true) on connections for read operations. The driver does not automatically parse queries to determine read versus write operations. When setReadOnly(true) is called, all subsequent statements executed on that connection are routed to replicas until setReadOnly(false) is called (see the minimal sketch following this list). This feature is explored in detail later in this post.
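
The following minimal sketch illustrates this contract. It assumes a DataSource backed by the JDBC Wrapper with the readWriteSplitting plugin enabled (the configuration is shown later in this post); the orders table and column names are placeholders for illustration only.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class ReadWriteRoutingSketch {

    private final DataSource dataSource; // assumed to be configured with the JDBC Wrapper

    public ReadWriteRoutingSketch(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Write path: a default (read-write) connection is routed to the writer instance.
    public void createOrder(String customerName) throws SQLException {
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO orders (customer_name) VALUES (?)")) {
            ps.setString(1, customerName);
            ps.executeUpdate();
        }
    }

    // Read path: marking the connection read-only routes subsequent statements to a replica.
    public int countOrders() throws SQLException {
        try (Connection conn = dataSource.getConnection()) {
            conn.setReadOnly(true);
            try (PreparedStatement ps = conn.prepareStatement("SELECT COUNT(*) FROM orders");
                 ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getInt(1) : 0;
            } finally {
                conn.setReadOnly(false); // switch back before the pool reuses this connection
            }
        }
    }
}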

This post walks through a real-world transformation of a Java application using the JDBC Wrapper. You’ll see how an existing Java application evolves through three progressive stages:

  • Stage 1: Standard JDBC driver (baseline) – The application connects directly to the Aurora writer endpoint through the standard JDBC driver, with all operations using a single database instance and relying on DNS-based failover.
  • Stage 2: JDBC Wrapper with fast failover – The application uses the JDBC Wrapper to maintain an internal topology cache of the Aurora cluster, enabling fast failover through direct instance discovery while still routing all operations through the writer endpoint.
  • Stage 3: Read/write splitting – The application uses the JDBC Wrapper read/write splitting feature to send write operations to the Aurora writer instance and distribute read operations across Aurora reader instances, optimizing performance through automatic load balancing.

Figure 1: Architecture diagram showing Stage 3 configuration with read/write splitting enabled

Prerequisites

You must have the following in place to implement this post’s solution:

  • An AWS account with permissions to create Aurora clusters
  • A Linux-based machine with the following software installed to run the demo application that can connect to the Aurora cluster:

Infrastructure setup options

Implementing the solution

Set up the development environment

In this section, you will clone the sample repository and examine the Java order management application that uses HikariCP connection pooling with a standard PostgreSQL JDBC driver.

Clone the GitHub repository by using the following code:

git clone https://github.com/aws-samples/sample-aws-advanced-jdbc-wrapper-demo.git
cd sample-aws-advanced-jdbc-wrapper-demo

The demo application simulates a real-world order management system that powers an online store where customers place orders, staff members update order statuses, and managers generate sales reports. This scenario demonstrates the challenge of mixed database workloads: some write-heavy operations, such as processing payments, need immediate consistency, while other read-heavy operations, such as generating sales reports, can tolerate slight delays and can run on read replicas.

The repository has the following structure:

Now that you have the demo application code locally and understand its structure as a typical Java order management system using HikariCP and standard PostgreSQL JDBC drivers, the next step is to create the Aurora database infrastructure that the application will connect to.

Deploy the database infrastructure

You will create an Aurora cluster with two read replicas by using an automated script that uses infrastructure as code with the AWS CDK. The two read replicas are needed for demonstrating the AWS Advanced JDBC Wrapper’s read/write splitting capabilities—they provide separate instances to route read operations to while the primary instance handles write operations. If you choose not to use the provided script, you can create the cluster manually through the AWS Management Console.

Override defaults with .env (Optional)

You can override the default settings by creating a .env file if you need to use existing AWS resources (like a specific VPC or security group) or want to customize resource names. If you don’t want to use existing AWS infrastructure, you can skip this step and use the defaults.

cp .env.example .env
# Edit the .env file with your AWS resource values, if needed
# Aurora cluster name (default: demo-app)
# AURORA_CLUSTER_ID=demo-app
# Database username (default: postgres)
# AURORA_DB_USERNAME=postgres
# Database name (default: postgres)
# AURORA_DB_NAME=postgres
# AWS Region (default: the Region configured in the AWS CLI)
# AWS_REGION=us-east-1
# Existing VPC ID (default: the CDK creates a new VPC)
# AWS_VPC_ID=vpc-xxxxxxxxx
# Existing security group (default: the CDK creates a new security group with port 5432 open)
# AWS_SECURITY_GROUP_ID=sg-xxxxxxxxx
# Existing subnet group (default: the CDK creates a new subnet group)
# AWS_DB_SUBNET_GROUP_NAME=existing-subnet-group

Create an Aurora cluster

Run the setup script to create an Aurora cluster with one writer instance and two reader instances:

# Set up the Aurora cluster with your configuration
./setup-aurora-cdk.sh

You will see the following output after successfully creating the cluster:

==================================================
📋 Connection details:
==================================================
Writer endpoint: aurora-jdbc-demo.cluster-abc123.us-east-1.rds.amazonaws.com
Reader endpoint: aurora-jdbc-demo.cluster-ro-abc123.us-east-1.rds.amazonaws.com
Username: postgres
Database: postgres
Port: 5432
Region: us-east-1
📝 Next steps:
1. ✅ application.properties has been updated automatically
2. Set up database password environment variable (see next section)
3. Run the demo: ./gradlew clean run
🧹 To clean up resources later:
   cd infrastructure/cdk && cdk destroy

Set up application properties

The application properties file contains the database connection details that your Java application uses to connect to the Aurora cluster.

If you created the cluster by using the provided AWS CDK script (option A), the script automatically created and configured src/main/resources/application.properties with your Aurora connection details. As a result, you don’t need to create or configure the application properties file because the script did this for you.

For manual setup (option B), create and configure the application properties file:

cp src/main/resources/application.properties.example src/main/resources/application.properties
# Edit application.properties with your Aurora connection details

Set up the database password

If you created infrastructure by using the AWS CDK script provided (option A): The AWS CDK script automatically generates a secure password and stores it in Secrets Manager. Set up the database password environment variable by using the following commands:

# Get your current AWS Region and the secret ARN from the AWS CDK deployment
AWS_REGION=$(aws configure get region)
SECRET_ARN=$(aws cloudformation describe-stacks --stack-name aws-jdbc-driver-stack --region $AWS_REGION --query "Stacks[0].Outputs[?OutputKey=='SecretArn'].OutputValue" --output text)
export DB_PASSWORD=$(aws secretsmanager get-secret-value --secret-id "$SECRET_ARN" --region $AWS_REGION --query SecretString --output text | jq -r .password)
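
If you prefer to fetch the secret from application code instead of an environment variable, the following is a hedged sketch using the AWS SDK for Java v2 and Jackson. The secret ARN is a placeholder, and the secret is assumed to store a JSON document with a password field, as created by the AWS CDK script (the same field the jq command above extracts).

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;

public class SecretPasswordSketch {
    // Reads the database password from Secrets Manager at application startup.
    public static String fetchPassword(String secretArn) throws Exception {
        try (SecretsManagerClient client = SecretsManagerClient.create()) {
            String secretJson = client.getSecretValue(r -> r.secretId(secretArn)).secretString();
            JsonNode secret = new ObjectMapper().readTree(secretJson);
            return secret.get("password").asText();
        }
    }
}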

If you used the manual setup (option B):

Run this command to set the password you specified when creating your Aurora cluster:

export DB_PASSWORD=<your_database_password>

Now that you have successfully deployed your Aurora cluster with read replicas and configured the application properties and database password, the next step is to test the application in three progressive stages that demonstrate the AWS Advanced JDBC Wrapper’s capabilities.

Configure the application with the JDBC Wrapper

This section covers three progressive stages of configuring your Java application with the JDBC Wrapper:

  • Stage 1: Standard JDBC driver (baseline) – Run the application with the standard PostgreSQL JDBC driver.
  • Stage 2: JDBC Wrapper with fast failover – Configure the JDBC Wrapper with fast failover capabilities.
  • Stage 3: Read/write splitting – Enable read/write splitting to distribute reads across Aurora replicas.

Stage 1: Standard JDBC driver (baseline)

You’ll run the application by using the standard PostgreSQL JDBC driver to establish a baseline before enhancing it with JDBC Wrapper capabilities. Execute the application to observe standard JDBC behavior:

./gradlew clean run

The following is the sample output:

Task :run
INFO com.zaxxer.hikari.HikariDataSource - StandardPostgresPool - Starting...
INFO com.example.config.DatabaseConfig - Standard JDBC connection pool initialized
=== PERFORMING WRITE OPERATIONS ===
INFO com.example.dao.OrderDAO - WRITE OPERATION: Creating new order for customer: John Doe
INFO com.example.dao.OrderDAO - Connection URL: 
    → WRITER: jdbc:postgresql://aurora-jdbc-demo.cluster-xxxxxxx.us-east-1.rds.amazonaws.com:5432/postgres
INFO com.example.dao.OrderDAO - Order created with ID: 1
=== PERFORMING READ OPERATIONS ===
INFO com.example.dao.OrderDAO - READ OPERATION: Getting order history
INFO com.example.dao.OrderDAO - Connection URL: 
    → WRITER: jdbc:postgresql://aurora-jdbc-demo.cluster-xxxxxxx.us-east-1.rds.amazonaws.com:5432/postgres
INFO com.example.dao.OrderDAO - Found 4 orders
INFO com.example.Application - Retrieved 4 total orders
BUILD SUCCESSFUL in 2s

Notice in the output that both write operations (creating orders) and read operations (getting order history) show the same connection URL pattern: → WRITER jdbc:postgresql://aurora-jdbc-demo.cluster-xxxxxxx. This demonstrates standard JDBC behavior where all database operations route to the Aurora writer endpoint, meaning both transactional operations and analytical queries compete for the same writer resources—the exact problem the AWS Advanced JDBC Wrapper’s read/write splitting will solve in the next steps.

Now that you have established a baseline with the standard JDBC driver and observed how all operations route to the Aurora writer endpoint, the next step is to configure the application to use the JDBC Wrapper while maintaining the same functionality but adding cloud capabilities such as fast failover.

Stage 2: JDBC Wrapper with fast failover

Now, transform this application to use the JDBC Wrapper while maintaining the same functionality but adding capabilities such as fast failover. You will use a script to automatically apply the necessary changes to upgrade your standard JDBC application with Aurora and AWS Cloud features. Before running the script, let’s examine the changes needed for the application to use the JDBC Wrapper.

The build.gradle file before it is configured to use the JDBC Wrapper:

dependencies {
    implementation 'com.zaxxer:HikariCP:5.0.1' 
    implementation 'org.postgresql:postgresql:42.6.0'
    implementation 'ch.qos.logback:logback-classic:1.4.11'
    implementation 'org.slf4j:slf4j-api:2.0.9'
    
    compileOnly 'org.projectlombok:lombok:1.18.30'
    annotationProcessor 'org.projectlombok:lombok:1.18.30'
}

The following configuration shows the required changes to use JDBC Wrapper capabilities.

The build.gradle file after it is configured to use the JDBC Wrapper:

dependencies {
    implementation 'com.zaxxer:HikariCP:5.0.1'
    implementation 'org.postgresql:postgresql:42.6.0'
    implementation 'software.amazon.jdbc:aws-advanced-jdbc-wrapper:2.5.6' // ← Add this
    implementation 'ch.qos.logback:logback-classic:1.4.11'
    implementation 'org.slf4j:slf4j-api:2.0.9'
    
    compileOnly 'org.projectlombok:lombok:1.18.30'
    annotationProcessor 'org.projectlombok:lombok:1.18.30'
}

This change adds the AWS Advanced JDBC Wrapper library (software.amazon.jdbc:aws-advanced-jdbc-wrapper:2.5.6) alongside the existing PostgreSQL driver (org.postgresql:postgresql:42.6.0). The wrapper acts as an intermediary layer that intercepts database calls, adds specific capabilities, then delegates actual SQL operations to the PostgreSQL driver.

In addition to the code changes above, you also need to update the JDBC URL in the application.properties file, which contains the database connection settings. The following configuration illustrates the current configuration with standard JDBC:

Before configuring the JDBC Wrapper:

db.url=jdbc:postgresql://aurora-jdbc-demo.cluster-abc123.us-east-1.rds.amazonaws.com:5432/postgres

The following configuration shows the required change with the JDBC Wrapper

After configuring the JDBC Wrapper:

db.url=jdbc:aws-wrapper:postgresql://aurora-jdbc-demo.cluster-abc123.us-east-1.rds.amazonaws.com:5432/postgres

The aws-wrapper: prefix tells the driver manager to use JDBC Wrapper capabilities.
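
If you are not using a connection pool, the same prefix works with a plain DriverManager connection. The following is a minimal sketch, assuming the wrapper JAR and the PostgreSQL driver are on the classpath; the endpoint and credentials are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class WrapperUrlSketch {
    public static void main(String[] args) throws SQLException {
        Properties props = new Properties();
        props.setProperty("user", "postgres");
        props.setProperty("password", System.getenv("DB_PASSWORD"));
        props.setProperty("wrapperPlugins", "failover"); // same plugin list used later in this post

        // The aws-wrapper: prefix selects the JDBC Wrapper, which delegates to the PostgreSQL driver.
        String url = "jdbc:aws-wrapper:postgresql://aurora-jdbc-demo.cluster-abc123.us-east-1.rds.amazonaws.com:5432/postgres";

        try (Connection conn = DriverManager.getConnection(url, props)) {
            System.out.println("Connected: " + conn.getMetaData().getURL());
        }
    }
}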

The DatabaseConfig.java file updates the connection configuration. The following code illustrates the current configuration with standard JDBC:

Before configuring the JDBC Wrapper:

// Standard JDBC configuration
configuredJdbcUrl = props.getProperty("db.url");
config.setJdbcUrl(configuredJdbcUrl);
config.setUsername(props.getProperty("db.username"));
config.setPassword(props.getProperty("db.password"));
config.setPoolName("StandardPostgresPool");
log.info("Standard JDBC connection pool initialized");

The following code shows the required change with the JDBC Wrapper:

After configuring the JDBC Wrapper:

// JDBC Wrapper configuration
configuredJdbcUrl = props.getProperty("db.url");
config.setDataSourceClassName("software.amazon.jdbc.ds.AwsWrapperDataSource");
config.addDataSourceProperty("jdbcUrl", configuredJdbcUrl);
config.addDataSourceProperty("targetDataSourceClassName", "org.postgresql.ds.PGSimpleDataSource");
Properties targetProps = new Properties();
targetProps.setProperty("user", props.getProperty("db.username"));
targetProps.setProperty("password", props.getProperty("db.password"));
targetProps.setProperty("wrapperPlugins", "failover");  // ← Enables fast failover
config.addDataSourceProperty("targetDataSourceProperties", targetProps);
config.setPoolName("AWSJDBCPool");
log.info("JDBC Wrapper connection pool initialized");

The preceding code switches from a direct JDBC URL configuration to using the JDBC Wrapper. This enables fast failover capabilities and supports advanced features like read/write splitting and IAM authentication. While adding these cloud capabilities, the wrapper still delegates all actual database operations to the underlying PostgreSQL driver. This gives you Aurora’s cloud features without changing your application’s business logic.
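
Because the pool still hands out standard java.sql.Connection objects, your existing data-access code is unchanged. The following small sketch, which assumes the pool built from the configuration above, verifies connectivity and shows that the returned connection is the wrapper's ConnectionWrapper, as you will also see in the Stage 2 log output.

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class PoolSmokeTest {
    // Quick check that the wrapper-backed pool hands out working connections.
    public static void verify(DataSource dataSource) throws SQLException {
        try (Connection conn = dataSource.getConnection()) {
            // conn is a software.amazon.jdbc.wrapper.ConnectionWrapper; SQL execution
            // is delegated to the underlying org.postgresql.jdbc.PgConnection.
            System.out.println("Connection class: " + conn.getClass().getName());
            System.out.println("Valid: " + conn.isValid(5));
        }
    }
}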

Run the following script to apply all the above changes and then execute the application:

./demo.sh aws-jdbc-wrapper

The preceding script makes the JDBC Wrapper changes and runs the Java application. You will see the same output as before, but now it includes JDBC Wrapper capabilities:

Running application...
> Task :run
16:22:18.954 [main] INFO  com.zaxxer.hikari.HikariDataSource - AWSJDBCPool - Starting...
16:22:19.632 [main] INFO  com.zaxxer.hikari.pool.HikariPool - AWSJDBCPool - Added connection software.amazon.jdbc.wrapper.ConnectionWrapper@770d3326 - org.postgresql.jdbc.PgConnection@4cc8eb05
16:22:19.634 [main] INFO  com.zaxxer.hikari.HikariDataSource - AWSJDBCPool - Start completed.
16:22:19.634 [main] INFO  com.example.config.DatabaseConfig - AWS JDBC Wrapper connection pool initialized
=== WRITE OPERATIONS ===
16:22:19.661 [main] INFO  com.example.dao.OrderDAO - WRITE OPERATION: Creating new order for customer: John Doe
16:22:19.665 [main] INFO  com.example.dao.OrderDAO - Connection URL: 
    → WRITER: jdbc:postgresql://aurora-jdbc-demo4.cluster-curzkcvul3uv.us-east-1.rds.amazonaws.com:5432/postgres
16:22:19.684 [main] INFO  com.example.dao.OrderDAO - Order created with ID: 13
=== READ OPERATIONS ===
16:22:19.706 [main] INFO  com.example.dao.OrderDAO - READ OPERATION: Getting order history
16:22:19.708 [main] INFO  com.example.dao.OrderDAO - Connection URL: 
    → WRITER: jdbc:postgresql://aurora-jdbc-demo4.cluster-curzkcvul3uv.us-east-1.rds.amazonaws.com:5432/postgres
16:22:19.714 [main] INFO  com.example.dao.OrderDAO - Found 16 orders

Notice that the connection pool name has changed from StandardPostgresPool to AWSJDBCPool, and the log shows AWS JDBC Wrapper connection pool initialized, confirming that the application is now using the JDBC Wrapper. The connection type shows software.amazon.jdbc.wrapper.ConnectionWrapper wrapping the underlying org.postgresql.jdbc.PgConnection, demonstrating that the wrapper is intercepting database calls while delegating to the PostgreSQL driver.

Operations still use the Aurora writer endpoint, but now your application has fast failover capabilities without you having made any business logic changes.

Now that you have successfully configured the application to use the JDBC Wrapper with fast failover capabilities while maintaining all operations on the Aurora writer endpoint, the next step is to configure read/write splitting to distribute read operations across Aurora replicas and optimize performance.

Stage 3: Enable read/write splitting

Now let’s implement the JDBC Wrapper’s read/write splitting capability by enabling connection routing. With connection routing, writes go to the primary instance and reads are distributed across Aurora replicas based on reader selection strategies such as roundRobin and fastestResponse. For detailed configuration information, see Reader Selection Strategies.
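
As a hedged illustration only, a reader selection strategy is typically set through a property on the target data source properties used in DatabaseConfig.java. The property name readerHostSelectorStrategy, the strategy values, and the random default shown below are assumptions based on the wrapper documentation; verify them against the Reader Selection Strategies page for your wrapper version.

import java.util.Properties;

public class ReaderStrategySketch {
    // Builds target data source properties with an explicit reader selection strategy.
    public static Properties targetProperties(String user, String password) {
        Properties targetProps = new Properties();
        targetProps.setProperty("user", user);
        targetProps.setProperty("password", password);
        targetProps.setProperty("wrapperPlugins", "readWriteSplitting,failover");
        // Assumed property name and value; the default strategy is assumed to be random.
        targetProps.setProperty("readerHostSelectorStrategy", "roundRobin");
        return targetProps;
    }
}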

Performance considerations with HikariCP using JDBC Wrapper

The demo application uses external HikariCP connection pooling to demonstrate multiple use cases. However, for production applications with frequent read/write operations, using the JDBC Wrapper’s internal connection pooling is recommended. The JDBC Wrapper currently uses HikariCP to create and maintain its internal connection pools.

For a comprehensive example that compares performance with internal pools, external pools, and no read/write splitting, see the ReadWriteSplittingSample.java example, which demonstrates all three approaches.

Spring Boot/Framework considerations

If you are using Spring Boot/Framework, be aware of performance implications when using the read/write splitting feature. For example, the @Transactional(readOnly = true) annotation can cause significant performance degradation because of constant switching between reader and writer connections. For detailed information about these considerations and recommended workarounds, see Limitations when using Spring Boot/Framework.
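
For example, a read-only transactional service method like the following hypothetical one triggers a setReadOnly(true)/setReadOnly(false) pair around every call, so the wrapper switches between reader and writer connections each time. The service, table, and query are illustrative only.

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReportService {

    private final JdbcTemplate jdbcTemplate;

    public ReportService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Spring marks the connection read-only before this method and resets it afterwards,
    // which causes a reader/writer switch on every invocation when the
    // readWriteSplitting plugin is enabled.
    @Transactional(readOnly = true)
    public long countOrders() {
        return jdbcTemplate.queryForObject("SELECT COUNT(*) FROM orders", Long.class);
    }
}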

Changes needed to use read/write splitting

Let’s review the changes needed to use read/write splitting. The DatabaseConfig.java file adds the readWriteSplitting plugin.

The following code shows the existing JDBC Wrapper configuration with failover:

targetProps.setProperty("wrapperPlugins", "failover");

The updated code to allow the use of read/write splitting is:

targetProps.setProperty("wrapperPlugins", "readWriteSplitting,failover");

The OrderDAO.java file marks connections as read only to enable routing to reader instances:

conn.setReadOnly(true);  // Enable read/write splitting for this connection
Note: When setReadOnly(true) is called, the connection allows read-only operations only. Write operations (INSERT, UPDATE, DELETE) will fail on this connection. To perform write operations through this connection, you must call setReadOnly(false).

Now, run the read/write splitting configuration:

./demo.sh read-write-splitting

The following is the sample output after running the configuration:

Running application...
> Task :run
16:51:18.705 [main] INFO  com.zaxxer.hikari.HikariDataSource - AWSJDBCReadWritePool - Starting...
16:51:19.405 [main] INFO  com.example.config.DatabaseConfig - AWS JDBC Wrapper with Read/Write Splitting initialized
=== PERFORMING WRITE OPERATIONS ===
16:51:19.434 [main] INFO  com.example.dao.OrderDAO - WRITE OPERATION: Creating new order for customer: John Doe
16:51:19.437 [main] INFO  com.example.dao.OrderDAO - Connection URL: 
    → WRITER: jdbc:postgresql://aurora-jdbc-demo4.cluster-curzkcvul3uv.us-east-1.rds.amazonaws.com:5432/postgres
16:51:19.456 [main] INFO  com.example.dao.OrderDAO - Order created with ID: 17
16:51:19.469 [main] INFO  com.example.dao.OrderDAO - WRITE OPERATION: Updating order 1 status to SHIPPED
16:51:19.469 [main] INFO  com.example.dao.OrderDAO - Connection URL: 
    → WRITER: jdbc:postgresql://aurora-jdbc-demo4.cluster-curzkcvul3uv.us-east-1.rds.amazonaws.com:5432/postgres
16:51:19.474 [main] INFO  com.example.dao.OrderDAO - Updated 1 order(s)
=== PERFORMING READ OPERATIONS ===
16:51:19.477 [main] INFO  com.example.dao.OrderDAO - READ OPERATION: Getting order history
16:51:20.044 [main] INFO  com.example.dao.OrderDAO - Connection URL: 
    → READER: jdbc:postgresql://aurora-jdbc-reader-2.curzkcvul3uv.us-east-1.rds.amazonaws.com:5432/postgres
16:51:20.051 [main] INFO  com.example.dao.OrderDAO - Found 20 orders
16:51:20.052 [main] INFO  com.example.dao.OrderDAO - READ OPERATION: Generating sales report
16:51:20.285 [main] INFO  com.example.dao.OrderDAO - Connection URL: 
    → READER: jdbc:postgresql://aurora-jdbc-reader-2.curzkcvul3uv.us-east-1.rds.amazonaws.com:5432/postgres
16:51:20.285 [main] INFO  com.example.dao.OrderDAO - Sales report generated: {totalOrders=20, totalRevenue=8150.0}
16:51:20.286 [main] INFO  com.example.dao.OrderDAO - READ OPERATION: Searching orders for customer: John
16:51:20.287 [main] INFO  com.example.dao.OrderDAO - Connection URL: 
    → READER: jdbc:postgresql://aurora-jdbc-reader-2.curzkcvul3uv.us-east-1.rds.amazonaws.com:5432/postgres
16:51:20.353 [main] INFO  com.example.dao.OrderDAO - Found 10 orders for customer: John
BUILD SUCCESSFUL in 3s

The JDBC Wrapper now routes write operations to the Aurora writer endpoint (the primary instance) and read operations to Aurora reader endpoints (the replica instances). The read/write splitting plugin offers the following benefits:

  • Simplified connection management – You don’t need to manage separate connection pools for read and write connections within your application. When the application calls the Connection#setReadOnly() method, the JDBC Wrapper automatically manages the connections.
  • Flexible reader selection strategies – Choose from multiple reader selection strategies, such as roundRobin, fastestResponse, or least connections, to optimize performance based on your specific application requirements and workload patterns.
  • Reduced writer load – Analytics queries no longer compete with transactions for the same writer resources.
  • Better resource utilization – Read traffic is distributed across multiple replicas, allowing each Aurora instance to serve its optimal workload without requiring application logic changes.

Cleanup

To avoid incurring future charges, delete the resources created during this walkthrough.

If you used the AWS CDK script (Option A):

Run the following commands to delete all AWS resources:

# Navigate to the CDK directory
cd infrastructure/cdk
# Destroy the stack and all resources
cdk destroy

If you created resources manually (Option B):

Delete the Aurora cluster and any associated resources (security groups, DB subnet groups) by using the same method you used to create them—either through the AWS Management Console or the AWS CLI.

Conclusion

This post showed how you can enhance your Java application with the cloud-based capabilities of Aurora by using the JDBC Wrapper. The simple code changes shared in this post can transform a standard JDBC application to use fast failover, read/write splitting, IAM authentication, Secrets Manager integration, and federated authentication.


About the authors

Ramesh Eega

Ramesh is a Global Accounts Solutions Architect based out of Atlanta, GA. He is passionate about helping customers throughout their cloud journey.

Chirag Dave

Chirag is a Principal Solutions Architect with Amazon Web Services, focusing on managed PostgreSQL. He maintains technical relationships with customers, making recommendations on security, cost, performance, reliability, operational efficiency, and best practice architectures.

Dave Cramer

Dave is a Senior Software Engineer for Amazon Web Services. He is also a major contributor to PostgreSQL as the maintainer of the PostgreSQL JDBC driver. His passion is client interfaces and working with clients.