Managing inactive Amazon Aurora PostgreSQL users
July 2023: This post was reviewed for accuracy.
Data is one of the most precious assets for any organization, and keeping data secure is always the top priority. One common security requirement for databases is to restrict user access—if a database user is compromised, it can cause significant damage to your data. You should follow the least privilege model for database users. This means that you only grant a user the minimal set of privileges required to do their work. In addition, if a database user is inactive for an extended period, it’s a good practice to disable them. This can limit the impact of any mistake or a security breach. For more information, see Managing PostgreSQL users and roles.
This post provides a solution that identifies inactive Amazon Aurora PostgreSQL users and locks or deletes them automatically. The post also shares example code and an AWS CloudFormation template to get you started.
In the following use case, you want to identify users that have been inactive for a given number of days (for this post, 90 days). After you identify these users, you want to lock them out. As the DBA, you can unlock these users anytime. However, you want to delete the users automatically if they are inactive for 180 days.
Some commercial databases provide this functionality with a user profile feature, which allows you to lock a user account that hasn't logged in to the database for a specified number of days. PostgreSQL, however, doesn't natively support this feature.
For PostgreSQL, this becomes more challenging because PostgreSQL doesn’t store user login timestamps in any of the catalog tables. Therefore, it’s not straightforward to determine how long a user has been inactive. In addition, because there aren’t any login triggers in PostgreSQL, a trigger-based solution is also not possible.
Solution
You can enable logging the details of each successful login in PostgreSQL by setting the parameter `log_connections` to `1` in the parameter group. After you set this parameter, the PostgreSQL engine logs a message like the following for each successful login:
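The prefix of each line depends on your `log_line_prefix` setting, so the prefix in this example is illustrative; the `connection authorized` portion is the part the solution parses:

```
2023-07-01 12:34:56 UTC:10.0.1.5(52144):testuser@mydb:[12345]:LOG: connection authorized: user=testuser database=mydb
```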
Aurora PostgreSQL allows you to publish these engine logs to Amazon CloudWatch Logs. With log publishing enabled, you can develop a scheduled AWS Lambda function that parses this login information out of the logs to get each user's last login timestamp.
The Lambda function performs the following high-level steps:
- Extracts login information from log messages
- Stores the last login timestamp for the users in a status table
- Uses the timestamp for each user to apply the lock or delete policy
Prerequisites
Before you get started, make sure to complete the following prerequisites:
- Set the parameter `log_connections` to `1` in the parameter group. For more information about creating a custom parameter group for your instance, see Working with DB Parameter Groups.
- Select the option to publish PostgreSQL logs to Amazon CloudWatch Logs. For instructions, see Publishing Aurora PostgreSQL Logs to Amazon CloudWatch Logs.
- Store the database user credentials in AWS Secrets Manager. This user should have the `rds_superuser` role assigned to it or have permission to lock and delete other users.
- Store the Lambda function code in an Amazon S3 bucket that is in the same Region as the Lambda function (the function is created in the same Region where you create the CloudFormation stack).
Solution architecture
This solution uses the following AWS services:
- Amazon Aurora PostgreSQL – Serves as a relational database for the most demanding business applications
- AWS CloudFormation – Describes and provisions all infrastructure resources in the cloud environment
- Amazon CloudWatch Events – Schedules the triggering action for Lambda functions at specific times using a rate expression
- Amazon CloudWatch Logs – Enables you to centralize the logs from various systems and services in a single, highly scalable service
- AWS Identity and Access Management (IAM) – Helps you manage access to AWS services and resources securely via users, groups, and permissions
- AWS Lambda – Allows you to run code without provisioning or managing servers
- AWS Secrets Manager – Provides easy rotation, management, and retrieval of credentials
The following diagram illustrates how the solution uses these various services.
This solution provides the following benefits:
- You can audit and optionally lock and delete inactive database users
- You can run the solution in your desired Region, VPC, and subnet
- Although you can only run it against Aurora PostgreSQL, you can modify the code to work with Amazon RDS for PostgreSQL
- The solution uses Secrets Manager to store the database credentials
- You can exclude a list of users that shouldn’t be locked or deleted regardless of inactivity
- The solution provides configurable inactivity time limits, after which a user is locked or deleted
- You can configure the Lambda function execution schedule
The following sections provide the details of how each component works together to implement this solution.
Lambda function actions
The Lambda function implements the policy as per the configuration parameters. The process includes the following steps:
- Fetch the database user credentials from AWS Secrets Manager.
- Use the AWS API to fetch the Aurora cluster details.
- Create the `user_login_status` and `user_login_job_status` tables if they don't exist (a minimal SQL sketch of these tables follows this list).
- Sync up the user names from the `pg_user` catalog view to the `user_login_status` table as follows:
  - Fetch the current users list from `pg_user`, excluding `rdsadmin`, the master user, and a given list of excluded users (if provided by the input parameter).
  - Copy new users to `user_login_status` with the last login time set to the current time.
  - Delete users from the `user_login_status` table that don't exist in the list.
  - Set the status column in the `user_login_status` table to `Active` for all new users.
- Fetch the `last_processed_timestamp` from the `user_login_job_status` table. If a row doesn't exist, create a row with the last time set to 24 hours ago.
- Read database login messages from the PostgreSQL logs in CloudWatch Logs, starting from `last_processed_timestamp`.
- If the logs contain a login message, update the last login timestamp in the `user_login_status` table.
- Store the last log timestamp in the `user_login_job_status` table.
- Get the list of users with their inactivity time from the `user_login_status` table.
- Lock all users inactive for 90 days (or your preferred number of days) and set the status column in the `user_login_status` table to `Locked`.
- Delete all users inactive for 180 days (or your preferred number of days) if they don't own any objects. If they own objects, mark them `ReadyToDelete` in the `user_login_status` table.
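For reference, the following is a minimal SQL sketch of the two status tables and the lock action, assuming a NOLOGIN-based lock. The column names and types are illustrative assumptions; the authoritative definitions live in the LockNDeleteUser.py code.

```sql
-- Illustrative schema only; the actual DDL is defined in the function code.
CREATE TABLE IF NOT EXISTS user_login_status (
    user_name  TEXT PRIMARY KEY,
    last_login TIMESTAMPTZ NOT NULL,
    status     TEXT NOT NULL DEFAULT 'Active'  -- Active, Locked, or ReadyToDelete
);

CREATE TABLE IF NOT EXISTS user_login_job_status (
    last_processed_timestamp TIMESTAMPTZ NOT NULL
);

-- One way to lock a role is to revoke its ability to log in:
ALTER USER some_inactive_user NOLOGIN;
```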
The following sections provide the details of how to implement this logic using the Lambda function `LockNDeleteUser`.
Creating the Lambda function deployment package
This utility consists of the Python Lambda function code. You can download the code for LockNDeleteUser.py from the GitHub repo. This code uses the `psycopg2` library, which you need to download. To create a Lambda function deployment package, you need to bundle the code and all dependent libraries.
The following steps walk you through creating the deployment package for the `LockNDeleteUser` Lambda function. For more information about deployment packages, see AWS Lambda Deployment Package in Python.
The following steps detail creating the deployment package from an Amazon EC2 instance with Amazon Linux:
- On your Linux host, create the project directory for your deployment package.
- Get the Python code file LockNDeleteUser.py from the GitHub repo and copy it to this directory.
- To make sure that the file is in the Unix format, install `dos2unix` and run it on the file.
- Verify that your Python 3.6 environment is activated.
- Install the `psycopg2` library that your code imports using pip.
- Set the required permissions: locate the packages you installed using pip, and make sure that they have global read permissions. You should find them in the `site-packages` directory of Python 3.6.
- Go to the directory where pip installed your Python packages and zip the directory contents (not the directory itself). Place the .zip file in your project folder. To include all hidden files, use the option `zip -r9`.
- From your project directory, add the `LockNDeleteUser.py` script to the zip file (see the consolidated command sketch following this list).
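The end-to-end command sequence looks something like the following. This is a sketch under assumptions: the project directory name and the virtualenv path (`~/v-env`) are illustrative, so adjust them for your environment.

```bash
# Create the project directory and copy in the function code
mkdir LockNDeleteUser && cd LockNDeleteUser
# (copy LockNDeleteUser.py from the GitHub repo into this directory)

# Convert the file to Unix format
sudo yum install -y dos2unix
dos2unix LockNDeleteUser.py

# With your Python 3.6 environment activated, install psycopg2
pip install psycopg2

# Make sure the installed packages are world-readable
chmod -R a+r ~/v-env/lib/python3.6/site-packages

# Zip the site-packages contents (not the directory itself), then add the function code
cd ~/v-env/lib/python3.6/site-packages
zip -r9 ~/LockNDeleteUser/LockNDeleteUser.zip .
cd ~/LockNDeleteUser
zip -g LockNDeleteUser.zip LockNDeleteUser.py
```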
Using this utility
This utility is very easy to use. To get it working, complete the following steps:
- Upload the deployment package file LockNDeleteUser.zip to your S3 bucket.
- Get ready with the values for the following parameters:
- Secrets name – The Secrets Manager secret that stores the database credentials.
- S3 bucket name – The S3 bucket location where the deployment package is uploaded.
- Subnet ID(s) – The subnet that the Lambda function uses.
- Security Group ID(s) – The security group that the Lambda function uses.
- LambdaFunctionName – The name of the Lambda function.
- LambdaIAMRole – The ARN of the Lambda IAM role, which has access to execute Lambda functions, read from Secrets Manager, and read and write to CloudWatch.
- HttpsProxy (optional) – The value of `https_proxy`, which you need to use for internet-bound requests (if required).
- Lock Time Interval in Days (optional) – If you provide the lock interval, the utility locks the database users if they have been inactive for the specified number of days. If you don't provide this value, inactive users aren't locked.
- Delete Time Interval in Days (optional) – If you provide the delete interval, the utility deletes the database users if they have been inactive for the specified number of days. If you don't provide this value, inactive users aren't deleted.
- Log Level (optional) – This value is set to `info` by default. For detailed logging, you can set it to `debug`.
- Ignore Users List (optional) – By default, `rdsadmin` and the master user are skipped from the lock and delete actions. If you need to skip any other users, you can list them in this parameter.
- Execution Rate (optional) – By default, this utility runs every 24 hours. If you need to adjust the schedule, you can change it using this parameter.
- Download the CloudFormation template JSON from the GitHub repo.
- Create the CloudFormation stack by providing the stack name and the values for parameters. For more information about creating a CloudFormation stack, see How Does AWS CloudFormation Work?
The CloudFormation template creates the Lambda function within the VPC so that it can securely access your Aurora PostgreSQL cluster. The Lambda function needs internet access to connect to the Amazon RDS API endpoints to fetch the cluster metadata information. For this, the subnet used for the Lambda function needs to have the default route set to a NAT gateway. For more information about setting up your subnet, see How do I give internet access to my Lambda function in a VPC?
After you configure the function, your solution is in place. The Lambda function is called as per the interval that the execution rate parameter specifies. Every time the function runs, it checks all the new log messages generated since the last run and updates the login information.
Deleting a user
To drop a PostgreSQL user that doesn't own any objects and doesn't have any permissions, enter a `DROP USER` command. Any user with the `rds_superuser` role can drop another user. See the following code:
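Here, `testuser` is a placeholder role name:

```sql
DROP USER testuser;
```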
This process becomes more involved if the PostgreSQL user you’re dropping has any permissions or owns any objects. You see the following error when you try to drop such a user:
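The error resembles the following; the DETAIL line varies with the objects and privileges the user actually holds:

```
ERROR: role "testuser" cannot be dropped because some objects depend on it
DETAIL: owner of table t1
```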
The Lambda function in this post only deletes a user if it doesn't own any objects, such as tables and functions. If the target user owns any objects, the code sets the status column for the user to `ReadyToDelete` in the `user_login_status` table. You can set up a process to look at the user status in this table and manually delete the user after you decide what to do with the objects they own.
To drop this user, you have to revoke all permissions granted to the user and transfer the ownership of any objects they own. To do this, you first need to have the permissions to change the user's ownership and permissions. If you don't have the required permissions, the following code to change the ownership fails:
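For example, an attempt to reassign ownership without membership in the `testuser` role fails; the role name and error text here are illustrative:

```sql
REASSIGN OWNED BY testuser TO postgres;
-- ERROR: permission denied to reassign objects
```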
The first step is to grant all permissions that `testuser` has to the user that you're using for this use case (for this post, the user `postgres`). See the following code:
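In PostgreSQL, granting one role to another gives the grantee that role's privileges:

```sql
GRANT testuser TO postgres;
```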
The preceding code fails if `postgres` has been granted to `testuser` already. In that case, you need to revoke that permission by entering the following code:
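This removes the circular role grant so that the earlier GRANT can succeed:

```sql
REVOKE postgres FROM testuser;
```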
After you grant this permission, the `postgres` user can change and drop any objects that `testuser` owns. Now you can reassign the ownership of all the objects:
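REASSIGN OWNED transfers everything the role owns in the current database:

```sql
REASSIGN OWNED BY testuser TO postgres;
```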
With the preceding command, all objects owned by `testuser` transfer to the `postgres` user. However, the drop still fails because a permission is still pending:
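At this point the remaining dependency is typically a privilege grant rather than object ownership; the DETAIL line here is illustrative:

```sql
DROP USER testuser;
-- ERROR: role "testuser" cannot be dropped because some objects depend on it
-- DETAIL: privileges for schema public
```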
Drop everything that `testuser` owns. This is now safe because you already reassigned all objects to the `postgres` user. The following code drops the privileges that are currently assigned to `testuser`:
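After the REASSIGN OWNED above, DROP OWNED only removes the role's remaining privilege grants:

```sql
DROP OWNED BY testuser;
```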
The scope of the preceding code is only for the database you’re connected to. If you have multiple databases, you have to connect to each database and repeat these steps.
After you have revoked and reassigned all permissions, you can drop the user:
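With the dependencies cleared, the drop now succeeds:

```sql
DROP USER testuser;
```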
Unlocking the user
After the utility locks out an inactive user, you can unlock it if the user wants to access the database. To unlock the user, enter the following SQL statements:
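The following is a minimal sketch, assuming the utility locked the role by revoking its login privilege and that the status table uses the illustrative column names shown earlier:

```sql
-- Restore the role's ability to log in
ALTER USER testuser LOGIN;

-- Reset the user's row so that the utility doesn't immediately re-lock it
-- (table and column names are illustrative)
UPDATE user_login_status
   SET status = 'Active', last_login = now()
 WHERE user_name = 'testuser';
```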
If you deleted the user, you have to create the user again and grant the required permissions.
Limitations
As mentioned previously, PostgreSQL doesn’t store the database user’s login timestamps. Therefore, when the utility runs for the first time, it relies on the logs that you captured and published to CloudWatch Logs. If the logs don’t contain the login information for the user, the utility assumes the current timestamp as the last login timestamp.
For this solution, the Lambda function is scheduled to run at the specified time interval. Therefore, the lock and delete action occurs when the Lambda function is running and not exactly at the 90- or 180-day boundary. For example, assume that the Lambda function is scheduled to run every 24 hours. In one of the runs, it determines that a user has been inactive for 89 days and 23 hours. Because this is below the 90-day mark, the user isn’t locked in this run. The user should be locked after 1 hour, but because the next Lambda function run is triggered after 24 hours, this user stays unlocked beyond the 90-day mark.
As stated earlier, by design, the Lambda function doesn't delete any users that own any database objects, such as tables and functions. PostgreSQL doesn't allow you to drop any users that own objects. To drop such a user, you have to either drop the objects or change their ownership. Because it's risky to drop objects or change their ownership automatically, the Lambda function only changes the status of this user to `ReadyToDelete` in the `user_login_status` table. As the DBA, you need to check the user status in this table and manually drop the user.
If your database receives a very high number of connections, each Lambda function run has to process numerous connection log messages. You should test to determine how long the Lambda function takes to process the connection messages, and consider running the function more frequently under a high connection load. This way, each run processes a smaller chunk of log messages instead of a single daily run processing all the connection log messages collected during the day.
Summary
This post explained how to implement a mechanism to identify inactive Aurora PostgreSQL users and lock or delete them as per your security policy. When you try out this solution, you should test and modify the code to meet your specific requirements. You should also consider adding alerts to make sure the job is running and inform you when a user is locked or deleted.
As always, AWS welcomes your feedback. Please share your experience and any questions in the comments.
About the Authors
Amishi Shah is a DevOps Consultant with the Professional Services team at Amazon Web Services. She works with customers to build scalable, highly available, and secure solutions in the AWS Cloud. Her focus area is leading enterprise customers through large technical, organizational, and SDLC transformations.
Yaser Raja is a Senior Consultant with the Professional Services team at Amazon Web Services. He works with customers to build scalable, highly available, and secure solutions in the AWS Cloud. His focus area is homogenous and heterogeneous migrations of on-premises databases to Amazon RDS and Aurora PostgreSQL.
David Rader is a Database Engineering Manager at Amazon Web Services.