AWS Database Blog
Build a custom HTTP client in Amazon Aurora PostgreSQL and Amazon RDS for PostgreSQL: An alternative to Oracle’s UTL_HTTP
Some customers use the Oracle UTL_HTTP package to write PL/SQL programs that communicate with web (HTTP) servers and invoke third-party APIs. The UTL_HTTP package provides a convenient way to make HTTP callouts directly from PL/SQL. When migrating to Amazon Aurora PostgreSQL-Compatible Edition or Amazon Relational Database Service (Amazon RDS) for PostgreSQL, these customers need to perform a custom conversion of their SQL code because PostgreSQL does not offer comparable built-in functionality.
In this post, we demonstrate how you can use PL/pgSQL custom wrapper functions to convert Oracle UTL_HTTP referenced custom code to the Amazon Aurora PostgreSQL and Amazon RDS for PostgreSQL equivalent.
Solution overview
This solution focuses on integrating AWS Lambda with Amazon Aurora PostgreSQL. However, the steps and approach are applicable to Amazon RDS for PostgreSQL as well.
AWS Lambda is a serverless computing service that allows you to run code without provisioning or managing servers. It automatically scales and executes your code in response to events or triggers, such as HTTP requests.
Users send HTTP requests by invoking an AWS Lambda function from the Aurora PostgreSQL database; the Lambda function is implemented using the Python Requests module.
Requests is a simple, yet elegant, HTTP library that allows you to send HTTP/1.1 requests with ease.
At a high level, the solution steps are as follows:
- Create a new schema named `utl_http_utility` in the Aurora PostgreSQL database.
- Create Request and Response objects as user-defined types in Aurora PostgreSQL to represent an HTTP request and response.
- Deploy PL/pgSQL custom wrapper functions in the Aurora PostgreSQL DB for HTTP operations like beginning a request, setting authentication, setting headers, setting parameters, getting a response, and reading and writing lines to and from an HTTP request. These wrapper functions incrementally build a JSON object with all the HTTP parameters (response content, URL, parameters, custom headers) and payload.
- Install the aws_lambda and aws_commons extensions in Aurora PostgreSQL DB. These extensions enable seamless integration with Lambda functions, offering a more versatile approach to handling API requests and responses.
- Create the `get_response` wrapper function in Aurora PostgreSQL DB. This function takes a Request JSON object from the application or database as input and invokes a Lambda function. The Lambda function, implemented using Python and the Requests module, is responsible for sending HTTP requests to the desired API endpoint.
- The Lambda function performs web service API invocations and returns responses, enabling real-time (synchronous) forwarding of the data back to the Aurora database.
The following diagram illustrates the architecture.
The architecture consists of the following components:
- An Aurora PostgreSQL database with the `aws_lambda` extension and custom wrapper functions.
- A Lambda function that takes HTTP requests from the database, invokes APIs, and sends responses back.
- An AWS Identity and Access Management (IAM) role for invoking Lambda functions from the Aurora PostgreSQL DB cluster.
Prerequisites
To implement this solution, you need the following:
- An Aurora PostgreSQL cluster (version 14 or above, with the latest available minor version) or an Amazon RDS for PostgreSQL instance (version 14 or above, with the latest available minor version) inside a VPC.
- Permissions to invoke a Lambda function from an Aurora PostgreSQL DB cluster
- Refer to the following guides to set up connectivity between your DB cluster and Lambda, and then create the `aws_lambda` extension in the DB cluster.
Package and deploy a Lambda function
A key advantage of this solution is the real-time communication between the Lambda function and Aurora PostgreSQL database, enabling the prompt forwarding of API responses.
To use the Lambda function, create a .zip deployment package with all the required dependencies, including the Requests library.
- Download the source code and deploy it by running the following commands (a sketch of the commands follows this list):
- Navigate to the project directory containing your `lambda_function.py` source code file.
- Create a new directory named `package` into which you will install your dependencies.
- Install the dependencies in the `package` directory.
- Create a .zip file with the installed libraries at the root.
- Add the `lambda_function.py` file to the root of the .zip file.
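A minimal sketch of those packaging commands, assuming the handler file is named lambda_function.py and the archive is named lambda_function.zip:

```bash
# Hedged packaging sketch; directory and archive names are assumptions.
cd my-lambda-project                        # directory containing lambda_function.py

mkdir package                               # directory for the dependencies
pip install --target ./package requests    # install the Requests library

cd package
zip -r ../lambda_function.zip .             # zip the installed libraries at the root
cd ..
zip lambda_function.zip lambda_function.py  # add the handler file to the zip root
```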
The following steps show how to deploy the Lambda function using the AWS CLI (example commands follow the list). Make sure you have configured the AWS CLI before proceeding with the next steps.
- Create an IAM role.
- Attach an IAM role policy.
- Create a Lambda function.
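The following is a hedged example of these steps; the role name, function name, runtime, and account ID are placeholders, not the values from the source code:

```bash
# Create an IAM role that Lambda can assume.
aws iam create-role \
  --role-name utl-http-lambda-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "lambda.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

# Attach the basic execution policy so the function can write logs.
aws iam attach-role-policy \
  --role-name utl-http-lambda-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

# Create the Lambda function from the .zip package built earlier.
aws lambda create-function \
  --function-name utl-http-client \
  --runtime python3.12 \
  --handler lambda_function.lambda_handler \
  --zip-file fileb://lambda_function.zip \
  --role arn:aws:iam::111122223333:role/utl-http-lambda-role
```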
The create-function command returns a JSON description of the new function, confirming that it was created successfully.
The `lambda_handler` function takes `event` and `context` parameters, processing various HTTP methods (GET, PUT, POST, DELETE, PATCH, HEAD, OPTIONS). It employs the Requests library to send custom HTTP requests with headers, credentials, and payload, handling timeouts and exceptions. Unsupported methods return 405 (Method Not Allowed), and HTTP errors result in 500 (Internal Server Error) responses with error details.
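As a rough sketch, a handler with the behavior described above might look like the following; the event field names (url, method, headers, params, payload, auth, timeout) are assumptions rather than the exact schema used by the wrapper functions:

```python
# Hedged sketch of the handler; adjust field names to the deployed code.
import requests

SUPPORTED_METHODS = {"GET", "PUT", "POST", "DELETE", "PATCH", "HEAD", "OPTIONS"}

def lambda_handler(event, context):
    method = event.get("method", "GET").upper()
    if method not in SUPPORTED_METHODS:
        return {"statusCode": 405, "body": "Method Not Allowed"}

    auth = event.get("auth")  # e.g. ["user", "password"] for basic auth
    try:
        response = requests.request(
            method,
            event["url"],
            headers=event.get("headers"),
            params=event.get("params"),
            data=event.get("payload"),
            auth=tuple(auth) if auth else None,
            timeout=event.get("timeout", 30),
        )
        response.raise_for_status()
        return {
            "statusCode": response.status_code,
            "headers": dict(response.headers),
            "body": response.text,
        }
    except requests.exceptions.RequestException as err:
        # Covers timeouts, connection errors, and HTTP error statuses.
        return {"statusCode": 500, "body": f"Internal Server Error: {err}"}
```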
Create wrapper functions
The following wrapper functions construct JSON objects with HTTP parameters and payload to invoke a Lambda function from an Aurora PostgreSQL database.
From the downloaded source code, run the following commands to deploy the database objects into the Aurora PostgreSQL database:
- Navigate to the `wrapper-for-utl-http-with-amazon-aurora` directory.
- Connect to your Aurora PostgreSQL DB instance as a user with privileges to create the schema and deploy the objects. The default `postgres` user is shown in the example after this list.
- Run the provided SQL file to create the utility schema and wrapper objects for constructing a JSON object with HTTP parameters and payload for Lambda invocation.
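As a hedged example (assuming the installation script is named install.sql; replace the host with your cluster endpoint):

```bash
# Connect as the postgres user and run the installation script.
# The endpoint and script name are assumptions; adjust to your environment.
psql -h mycluster.cluster-123456789012.us-east-1.rds.amazonaws.com \
     -U postgres -d postgres -f install.sql
```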
PostgreSQL’s user-defined composite types, `utl_http_utility.req` and `utl_http_utility.res`, encapsulate the HTTP request and response components for convenient parameter passing in database functions and procedures. The wrapper functions are as follows:
- BEGIN_REQUEST – Initializes an HTTP request object with URL, method, and version, and generates a unique handle
- SET_HEADER – Adds custom headers to the request
- WRITE_TEXT – Appends text to the request
- WRITE_LINE – Appends text as a new line
- WRITE_RAW – Appends binary data
- SET_TRANSFER_TIMEOUT – Sets transfer timeout
- SET_AUTHENTICATION – Configures authentication
- SET_PARAMS – Sets URL parameters
- GET_RESPONSE – Invokes a Lambda function, returning a response object
- READ_TEXT – Reads and outputs text data
- READ_RAW – Reads and outputs binary data
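For reference, the two composite types might be defined along these lines; the field names are illustrative assumptions, not the exact definitions from the source code:

```sql
-- Hedged sketch of the composite types; actual fields may differ.
CREATE TYPE utl_http_utility.req AS (
    url          text,  -- target endpoint
    method       text,  -- HTTP method, such as POST
    http_version text,  -- such as HTTP/1.1
    req_handle   text   -- unique handle generated by begin_request
);

CREATE TYPE utl_http_utility.res AS (
    status_code   integer,  -- HTTP status returned by the endpoint
    reason_phrase text,     -- such as OK
    req_handle    text      -- handle linking the response to its request
);
```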
You can extend these functions to accommodate additional use cases. The open-source nature of this solution provides the flexibility for customization to meet your specific requirements.
- Run the AWS CLI command to get the Lambda function ARN and Region so you can update them in the next step (an example command follows this list).
- Update the parameter table to reflect the Lambda function ARN and Region (an example update follows this list). The `utl_http_utility_params` table is a user-defined table in PostgreSQL, designed to store configuration parameters related to the `utl_http_utility` package.
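As a hedged example, the function ARN (which also embeds the Region) can be retrieved with a command like the following; the function name is a placeholder:

```bash
# Retrieve the Lambda function ARN; "utl-http-client" is a placeholder.
aws lambda get-function \
  --function-name utl-http-client \
  --query 'Configuration.FunctionArn' \
  --output text
```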
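The parameter table update might then look like the following sketch; the column names (param_name, param_value) and parameter keys are assumptions about the table layout, not the exact schema from the source code:

```sql
-- Hedged sketch; adjust column and key names to match the actual table.
UPDATE utl_http_utility.utl_http_utility_params
   SET param_value = 'arn:aws:lambda:us-east-1:111122223333:function:utl-http-client'
 WHERE param_name = 'lambda_arn';

UPDATE utl_http_utility.utl_http_utility_params
   SET param_value = 'us-east-1'
 WHERE param_name = 'region';
```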
Test wrapper functions
We can now use the following anonymous code block to test a sample PL/pgSQL procedure that utilizes various functions from the `utl_http_utility` package to perform an HTTP POST request with specific headers, payload, and authentication. Additionally, it reads the response and processes it line by line.
For the purpose of testing this module, you can use any valid HTTP endpoint (alternatively, you can use a free HTTP request testing site such as webhook.site).
Make sure to update the `l_url` variable with the valid API URL to invoke:
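A sketch of such a test block follows; the function signatures, the state tracking via the request handle, and the end-of-body behavior of `read_text` are assumptions based on the wrapper API described earlier, so adjust the calls to match the deployed source code:

```sql
-- Hedged sketch; adjust signatures to match the deployed wrapper functions.
DO $$
DECLARE
    l_url      text := 'https://webhook.site/your-unique-id';  -- replace with a valid endpoint
    l_request  utl_http_utility.req;
    l_response utl_http_utility.res;
    l_line     text;
BEGIN
    -- Initialize the request; state is assumed to be tracked by the handle
    -- generated inside begin_request.
    l_request := utl_http_utility.begin_request(l_url, 'POST', 'HTTP/1.1');

    -- Build up headers, authentication, and payload.
    PERFORM utl_http_utility.set_header(l_request, 'Content-Type', 'application/json');
    PERFORM utl_http_utility.set_authentication(l_request, 'api_user', 'api_password');
    PERFORM utl_http_utility.write_text(l_request, '{"message": "hello from Aurora"}');

    -- Invoke the Lambda function and read the response line by line.
    l_response := utl_http_utility.get_response(l_request);
    LOOP
        l_line := utl_http_utility.read_text(l_response);
        EXIT WHEN l_line IS NULL;  -- assume NULL signals the end of the body
        RAISE NOTICE '%', l_line;
    END LOOP;
END;
$$;
```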
We get the following output:
The following screenshot shows the POST request sent to the Webhook API.
Clean up
To avoid incurring future charges, clean up the resources created as part of this post:
- Navigate to the downloaded source code directory `wrapper-for-utl-http-with-amazon-aurora`.
- To clean up the database objects, connect to your Aurora PostgreSQL DB instance as a user with privileges to drop the schema, such as the default `postgres` user.
- Run the `uninstall.sql` file to drop the utility schema and wrapper objects.
- Delete the Lambda function that was created using the AWS CLI (example commands follow this list).
- Delete the IAM role and the database instance if you no longer need to use the resources.
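As a hedged example, using the placeholder names from earlier:

```bash
# Delete the Lambda function created earlier (placeholder name).
aws lambda delete-function --function-name utl-http-client

# Detach the policy and delete the IAM role if no longer needed.
aws iam detach-role-policy \
  --role-name utl-http-lambda-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
aws iam delete-role --role-name utl-http-lambda-role
```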
Conclusion
In this post, we showed how you can build a custom HTTP client in Amazon Aurora PostgreSQL to use as an equivalent to Oracle’s UTL_HTTP package. With this solution, you can invoke third-party APIs and handle HTTP requests and responses directly from the database. The open-source nature of this solution allows for customization and adaptation to your specific requirements. Dive in, explore, and unlock the full potential of your Aurora PostgreSQL database.
Leave your thoughts or questions in the comments section.
About the Authors
Bhanu Ganesh Gudivada is a Database Consultant on the AWS Professional Services team at AWS, specializing in database migrations. He helps customers build highly available, cost-effective database solutions and migrate their commercial engines to the AWS Cloud. He is curious to learn and implement new technologies around databases and generative AI, and to orchestrate migrations through automation.
Rajeshkumar Sabankar is a Database Specialty Architect with Amazon Web Services. He works with internal Amazon customers to build secure, scalable, and resilient architectures in the AWS Cloud, and helps customers migrate from on-premises databases to Amazon RDS and Aurora databases.
Vamsi Krishna Jammula is a Database Consultant with the AWS Professional Services team at AWS. He works as a database migration specialist, helping Amazon customers design and implement scalable, secure, performant, and robust database solutions in the cloud.
Sumana Yanamandra is a Database Consultant with the AWS Professional Services team at AWS. She supports and enables customers to migrate their databases from on-premises data centers to the AWS Cloud, and to move from commercial database engines to open-source databases.