AWS Database Blog

Managing inactive Amazon Aurora PostgreSQL users

July 2023: This post was reviewed for accuracy.

Data is one of the most precious assets for any organization, and keeping data secure is always the top priority. One common security requirement for databases is to restrict user access—if a database user is compromised, it can cause significant damage to your data. You should follow the least privilege model for database users, which means you grant a user only the minimal set of privileges required to do their work. In addition, if a database user is inactive for an extended period, it's a good practice to disable them. This can limit the impact of a mistake or security breach. For more information, see Managing PostgreSQL users and roles.

This post provides a solution that identifies inactive Amazon Aurora PostgreSQL users and locks or deletes them automatically. The post also shares example code and an AWS CloudFormation template to get you started.

In the following use case, you want to identify users that have been inactive for a given number of days (for this post, 90 days). After you identify these users, you want to lock them out. As the DBA, you can unlock these users anytime. However, you want to delete the users automatically if they are inactive for 180 days.

However, PostgreSQL doesn’t natively support this feature. Some commercial databases provide this functionality with the user profile feature, which allows you to lock a user account that hasn’t logged in to the database for a specified number of days.

For PostgreSQL, this becomes more challenging because PostgreSQL doesn’t store user login timestamps in any of the catalog tables. Therefore, it’s not straightforward to determine how long a user has been inactive. In addition, because there aren’t any login triggers in PostgreSQL, a trigger-based solution is also not possible.

Solution

You can enable logging the details of each successful login in PostgreSQL by setting the parameter log_connections to 1 in the parameter group. After you set this parameter, the PostgreSQL engine logs a message like the following for each successful login:

2020-01-07 03:51:56 UTC:10.0.0.123(52820):postgres@logtestdb:[18161]:LOG: connection authorized: user=postgres database=logtestdb SSL enabled (protocol=TLSv1.2, cipher=ECDHE-RSA-AES256-GCM-SHA384, compression=off)

Aurora PostgreSQL allows you to publish these engine logs to Amazon CloudWatch Logs. With log publishing enabled, you can develop a scheduled AWS Lambda function that parses this login information out of the logs to determine each user's last login timestamp.
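
For example, the user name and database can be parsed out of such a message with a regular expression. The following is a minimal sketch, not the actual parsing code from the utility:

import re

# Matches the "connection authorized" message that log_connections emits.
LOGIN_RE = re.compile(r'connection authorized: user=(\S+) database=(\S+)')

def parse_login(message):
    """Return (user, database) for a login message, or None otherwise."""
    match = LOGIN_RE.search(message)
    return match.groups() if match else None

# parse_login('...LOG: connection authorized: user=postgres database=logtestdb SSL enabled...')
# returns ('postgres', 'logtestdb')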

The Lambda function performs the following high-level steps:

  • Extracts login information from log messages read from CloudWatch Logs (a log-fetching sketch follows this list)
  • Stores the last login timestamp for the users in a status table
  • Uses the timestamp for each user to apply the lock or delete policy
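
For example, the function could fetch the relevant log messages with the CloudWatch Logs filter_log_events API. The following is a minimal sketch, assuming Boto3 and the standard log group naming that Aurora PostgreSQL uses when log export is enabled:

import boto3

def get_login_events(cluster_id, start_ms, region):
    """Fetch 'connection authorized' messages newer than start_ms (epoch milliseconds)."""
    logs = boto3.client('logs', region_name=region)
    # Aurora PostgreSQL publishes to this log group when log export is enabled.
    log_group = '/aws/rds/cluster/{}/postgresql'.format(cluster_id)
    paginator = logs.get_paginator('filter_log_events')
    events = []
    for page in paginator.paginate(logGroupName=log_group,
                                   startTime=start_ms,
                                   filterPattern='"connection authorized"'):
        events.extend(page['events'])
    return events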

Prerequisites

Before you get started, make sure to complete the following prerequisites:

  • Set the parameter log_connections to 1 in the parameter group. For more information about creating a custom parameter group for your instance, see Working with DB Parameter Groups.
  • Select the option to publish PostgreSQL logs to Amazon CloudWatch Logs. For instructions, see Publishing Aurora PostgreSQL Logs to Amazon CloudWatch Logs.
  • Store the database user credentials in AWS Secrets Manager. This user should have the rds_superuser role assigned to it or have permission to lock and delete other users. (A credential retrieval sketch follows this list.)
  • Store the Lambda function code in an Amazon S3 bucket that is in the same Region as the Lambda function (the function is created in the same Region where you create the CloudFormation stack).
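
The following is a minimal sketch of how the function can retrieve these credentials with Boto3, assuming the secret stores a JSON document with username and password keys (as RDS-generated secrets do):

import json
import boto3

def get_db_credentials(secret_name, region):
    """Fetch the database user name and password from Secrets Manager."""
    client = boto3.client('secretsmanager', region_name=region)
    secret = client.get_secret_value(SecretId=secret_name)
    creds = json.loads(secret['SecretString'])
    return creds['username'], creds['password']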

Solution architecture

This solution uses the following AWS services:

  • Amazon Aurora PostgreSQL – Serves as a relational database for the most demanding business applications
  • AWS CloudFormation – Describes and provisions all infrastructure resources in the cloud environment
  • Amazon CloudWatch Events – Schedules the triggering of the Lambda function at specific intervals using a rate expression
  • Amazon CloudWatch Logs – Enables you to centralize the logs from various systems and services in a single, highly scalable service
  • AWS Identity and Access Management (IAM) – Helps you manage access to AWS services and resources securely through users, groups, and permissions
  • AWS Lambda – Allows you to run code without provisioning or managing servers
  • AWS Secrets Manager – Provides easy rotation, management, and retrieval of credentials

The following diagram illustrates how the solution uses these various services.

This solution provides the following benefits:

  • You can audit and optionally lock and delete inactive database users
  • You can run the solution in your desired Region, VPC, and subnet
  • Although you can only run it against Aurora PostgreSQL, you can modify the code to work with Amazon RDS for PostgreSQL
  • The solution uses Secrets Manager to store the database credentials
  • You can exclude a list of users that shouldn’t be locked or deleted regardless of inactivity
  • The solution provides configurable inactivity time limits, after which a user is locked or deleted
  • You can configure the Lambda function execution schedule

The following sections provide the details of how each component works together to implement this solution.

Lambda function actions

The Lambda function implements the policy as per the configuration parameters. The process includes the following steps:

  1. Fetch database user credentials from AWS Secrets Manager.
  2. Use the AWS SDK for Python (Boto3) to fetch the Aurora cluster details.
  3. Create the following tables if they don't exist:
    CREATE TABLE user_login_status (
        user_name TEXT PRIMARY KEY,
        last_login_time TIMESTAMPTZ DEFAULT now(),
        status TEXT
    );
    
    CREATE TABLE user_login_job_status (
        instance_name TEXT,
        last_processed_time TIMESTAMPTZ
    );
  4. Sync up the user names from the pg_user catalog view to the user_login_status table as follows:
    • Fetch the current list of users from pg_user, excluding rdsadmin, the master user, and any users in the exclusion list (if provided by the input parameter).
    • Copy new users to user_login_status with the last login time set to the current time.
    • Delete users from the user_login_status table that don't exist in the list.
    • Set the status column of all new users in the user_login_status table to Active.
  5. Fetch the last_processed_timestamp from the table user_login_job_status.
    • If a row doesn’t exist, create a row with the last time set to 24 hours ago.
  6. Read database login messages from PostgreSQL logs in CloudWatch Logs, starting from last_processed_timestamp.
  7. If the logs contain a login message, update the last login timestamp in the user_login_status table.
  8. Store the last log timestamp in the table user_login_job_status.
  9. Get the list of users with the inactivity time from the user_login_status table.
  10. Lock all users inactive for 90 days (or your preferred number of days) and set the status column in the user_login_status table to Locked.
  11. Delete all users inactive for 180 days (or your preferred number of days) if they don't own any objects. If they own objects, mark them ReadyToDelete in the user_login_status table. (A sketch of this lock and delete logic follows the list.)
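
The following is a minimal sketch of steps 10 and 11, not the actual LockNDeleteUser code. It assumes an open psycopg2 connection (psycopg2 2.8 or later for the errors module) and uses ALTER USER ... CONNECTION LIMIT 0 as the lock mechanism, which is what the unlock statement shown later in this post reverses:

import psycopg2
from psycopg2 import errors  # requires psycopg2 2.8+

LOCK_DAYS = 90      # lock users inactive for this many days
DELETE_DAYS = 180   # delete users inactive for this many days

def apply_policy(conn):
    """Lock or delete inactive users based on user_login_status."""
    conn.autocommit = True  # let each statement take effect on its own
    with conn.cursor() as cur:
        # Step 10: lock. CONNECTION LIMIT 0 blocks new logins without
        # touching the user's objects or privileges.
        cur.execute(
            "SELECT user_name FROM user_login_status WHERE status = 'Active' "
            "AND last_login_time < now() - make_interval(days => %s)",
            (LOCK_DAYS,))
        for (user,) in cur.fetchall():
            # Identifiers can't be parameterized; in production, validate or
            # quote user names (for example, with psycopg2.sql.Identifier).
            cur.execute('ALTER USER "{}" CONNECTION LIMIT 0'.format(user))
            cur.execute("UPDATE user_login_status SET status = 'Locked' "
                        "WHERE user_name = %s", (user,))

        # Step 11: delete, but only when the user owns no objects.
        cur.execute(
            "SELECT user_name FROM user_login_status "
            "WHERE last_login_time < now() - make_interval(days => %s)",
            (DELETE_DAYS,))
        for (user,) in cur.fetchall():
            try:
                cur.execute('DROP USER "{}"'.format(user))
                cur.execute("DELETE FROM user_login_status WHERE user_name = %s",
                            (user,))
            except errors.DependentObjectsStillExist:
                # The user owns objects; flag it for manual cleanup instead.
                cur.execute("UPDATE user_login_status "
                            "SET status = 'ReadyToDelete' WHERE user_name = %s",
                            (user,))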

The following sections provide the details of how to implement this logic using the Lambda function LockNDeleteUser.

Creating the Lambda function deployment package

This utility consists of the Python Lambda function code. You can download the code for LockNDeleteUser.py from the GitHub repo. This code uses the psycopg2 library, which you need to download separately. To create a Lambda function deployment package, you need to bundle the code and all dependent libraries.

The following steps walk you through creating the deployment package for the LockNDeleteUser Lambda function. For more information about deployment packages, see AWS Lambda Deployment Package in Python.

The following steps detail creating the deployment package from an Amazon EC2 instance with Amazon Linux:

  1. On your Linux host, create the project directory for your deployment package:
    mkdir ~/lockndeleteuser
    cd ~/lockndeleteuser
  2. Get the Python code file LockNDeleteUser.py from the GitHub repo and copy it to this directory.
  3. To make sure that the file is in the Unix format, install dos2unix and convert the file:
    sudo yum install -y dos2unix
    dos2unix LockNDeleteUser.py
  4. Verify that your Python 3.6 environment is activated.
  5. Install the psycopg2 library that your code imports using pip:
    pip3 install psycopg2-binary --user
  6. Set the required permissions:
    chmod a+r ~/lockndeleteuser
    chmod 744 LockNDeleteUser.py
  7. Locate the packages you installed using pip, and make sure that they have global read permissions. You should find them in the site-packages directory of Python 3.6:
    ls /home/ec2-user/.local/lib/python3.6/site-packages
    sudo chmod a+r -R /home/ec2-user/.local/lib/python3.6/site-packages
  8. Go to the directory where pip installed your Python packages and zip the directory contents (not the directory itself). The -r9 option recurses into subdirectories and applies maximum compression. Place the .zip file in your project folder:
    cd /home/ec2-user/.local/lib/python3.6/site-packages
    zip -r9 ~/lockndeleteuser/LockNDeleteUser.zip *
  9. From your project directory, add the LockNDeleteUser.py script to the zip file:
    cd ~/lockndeleteuser
    zip -g LockNDeleteUser.zip LockNDeleteUser.py

Using this utility

This utility is very easy to use. To get it working, complete the following steps:

  1. Upload the deployment package file LockNDeleteUser.zip to your S3 bucket.
  2. Get ready with the values for the following parameters:
    • Secrets name – The Secrets Manager secret that stores the database credentials.
    • S3 bucket name – The S3 bucket location where the deployment package is uploaded.
    • Subnet ID(s) – The subnet that the Lambda function uses.
    • Security Group ID(s) – The security group that the Lambda function uses.
    • LambdaFunctionName – The name of the Lambda function.
    • LambdaIAMRole – The ARN of the Lambda IAM role, which has access to execute Lambda functions, read from Secrets Manager, and read and write to CloudWatch.
    • HttpsProxy (optional) – The value of https_proxy, which the function uses for internet-bound requests (if required).
    • Lock Time Interval in Days (optional) – If you provide the lock interval, the utility locks the database users if they have been inactive for the specified number of days. If you don’t provide this value, inactive users aren’t locked.
    • Delete Time Interval in Days (optional) – If you provide the delete interval, the utility deletes the database users if they have been inactive for the specified number of days. If you don’t provide this value, inactive users aren’t deleted.
    • Log Level (optional) – This value is set to info by default. For detailed logging, you can set it to debug.
    • Ignore Users List (optional) – By default, rdsadmin and the master user are skipped from the lock and delete actions. If you need to skip any other users, you can list them in this parameter.
    • Execution Rate (optional) – By default, this utility runs every 24 hours. If you need to adjust the schedule, you can change it using this parameter.
  3. Download the CloudFormation template JSON from the GitHub repo.
  4. Create the CloudFormation stack by providing the stack name and the values for the parameters (a scripted sketch follows this list). For more information about creating a CloudFormation stack, see How Does AWS CloudFormation Work?
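
If you prefer to script the stack creation, the following is a minimal Boto3 sketch. The stack name, template file name, and parameter keys here are illustrative; use the parameter keys defined in the template you downloaded:

import boto3

cfn = boto3.client('cloudformation', region_name='us-east-1')
with open('LockNDeleteUser-template.json') as f:  # the downloaded template
    template_body = f.read()

cfn.create_stack(
    StackName='lock-n-delete-user',
    TemplateBody=template_body,
    Parameters=[
        {'ParameterKey': 'SecretsName', 'ParameterValue': 'my-db-secret'},
        {'ParameterKey': 'S3BucketName', 'ParameterValue': 'my-code-bucket'},
        {'ParameterKey': 'LambdaFunctionName', 'ParameterValue': 'LockNDeleteUser'},
    ],
    # Add Capabilities=['CAPABILITY_IAM'] if your template creates IAM resources.
)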

The CloudFormation template creates the Lambda function within the VPC so that it can securely access your Aurora PostgreSQL cluster. The Lambda function needs internet access to connect to the Amazon RDS API endpoints to fetch the cluster metadata information. For this, the subnet used for the Lambda function needs to have the default route set to a NAT gateway. For more information about setting up your subnet, see How do I give internet access to my Lambda function in a VPC?

After you configure the function, your solution is in place. The Lambda function is called as per the interval that the execution rate parameter specifies. Every time the function runs, it checks all the new log messages generated since the last run and updates the login information.

Deleting a user

To drop a PostgreSQL user that doesn't own any objects and doesn't have any granted privileges, enter a DROP USER command. Any user with the rds_superuser role can drop another user. See the following code:

postgres=> DROP USER testuser;
DROP ROLE
postgres=>

This process becomes more involved if the PostgreSQL user you’re dropping has any permissions or owns any objects. You see the following error when you try to drop such a user:

postgres=> DROP USER testuser;
ERROR: role "testuser" cannot be dropped because some objects depend on it
DETAIL: privileges for table mytable1
owner of table perm_test_table
postgres=>

The Lambda function in this post only deletes a user if it doesn’t own any objects, such as tables and functions. If the target user owns any objects, the code sets the status column for the user to ReadyToDelete in the table user_login_status. You can set up a process to look at the user status in this table and manually delete the user after you decide what to do with the objects they own.
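
To find the users waiting for manual cleanup, you could periodically run a query like the following. This is a minimal sketch, assuming an open psycopg2 connection:

def list_ready_to_delete(conn):
    """Return users that the utility flagged for manual cleanup."""
    with conn.cursor() as cur:
        cur.execute("SELECT user_name, last_login_time FROM user_login_status "
                    "WHERE status = 'ReadyToDelete'")
        return cur.fetchall()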

To drop such a user, you have to revoke all privileges granted to it and reassign the ownership of any objects it owns. To do that, you first need sufficient permissions to change the user's objects and privileges. Without them, the following attempt to change ownership fails:

postgres=> REASSIGN OWNED BY testuser TO postgres;
ERROR: permission denied to reassign objects
postgres=>

The first step is to grant all permissions that testuser has to the user that you’re using for this use case (for this post, the user postgres). See the following code:

postgres=> GRANT testuser TO postgres;
GRANT ROLE
postgres=>

The preceding code fails if postgres has been granted to testuser already. In that case, you need to revoke that permission by entering the following code:

REVOKE postgres FROM testuser;

After you grant this permission, the postgres user can change and drop any objects that testuser owns. Now you can reassign the ownership of all the objects:

postgres=> REASSIGN OWNED BY testuser TO postgres;
REASSIGN OWNED
postgres=>

With the preceding command, all objects owned by testuser transfer to the postgres user. However, the drop still fails because privileges granted to testuser are still in place:

postgres=> DROP USER testuser;
ERROR: role "testuser" cannot be dropped because some objects depend on it
DETAIL: privileges for table mytable1
postgres=>

Next, drop everything that testuser still owns. This is safe because you already reassigned all objects to the postgres user, so at this point the command only drops the privileges that are currently assigned to testuser:

postgres=> DROP OWNED BY testuser;
DROP OWNED
postgres=>

The preceding command only affects the database you're connected to. If you have multiple databases, you have to connect to each one and repeat these steps.
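
If you want to script this cleanup across databases, a loop like the following could work. This is a minimal sketch; the host, database list, and credentials are placeholders:

import psycopg2

db_names = ['appdb1', 'appdb2']  # placeholder; you could query pg_database instead

for db in db_names:
    conn = psycopg2.connect(host='mycluster.cluster-xyz.us-east-1.rds.amazonaws.com',
                            dbname=db, user='postgres', password='...')
    conn.autocommit = True
    with conn.cursor() as cur:
        # Transfer testuser's objects in this database, then drop its privileges.
        cur.execute('REASSIGN OWNED BY testuser TO postgres')
        cur.execute('DROP OWNED BY testuser')
    conn.close()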

After you have revoked and reassigned all permissions, you can drop the user:

postgres=> DROP USER testuser;
DROP ROLE
postgres=>

Unlocking the user

After the utility locks out an inactive user, you can unlock it if the user wants to access the database. To unlock the user, enter the following SQL statements:

ALTER USER some_user CONNECTION LIMIT -1;

UPDATE user_login_status 
   SET last_login_time=now(), 
       status= 'Active' 
 WHERE user_name='some_user';

If you deleted the user, you have to create the user again and grant the required permissions.

Limitations

As mentioned previously, PostgreSQL doesn’t store the database user’s login timestamps. Therefore, when the utility runs for the first time, it relies on the logs that you captured and published to CloudWatch Logs. If the logs don’t contain the login information for the user, the utility assumes the current timestamp as the last login timestamp.

For this solution, the Lambda function is scheduled to run at the specified time interval. Therefore, the lock and delete action occurs when the Lambda function is running and not exactly at the 90- or 180-day boundary. For example, assume that the Lambda function is scheduled to run every 24 hours. In one of the runs, it determines that a user has been inactive for 89 days and 23 hours. Because this is below the 90-day mark, the user isn’t locked in this run. The user should be locked after 1 hour, but because the next Lambda function run is triggered after 24 hours, this user stays unlocked beyond the 90-day mark.

As stated earlier, by design, the Lambda function doesn’t delete any users that own any database objects, such as tables and functions. PostgreSQL doesn’t allow you to drop any users that own objects. To drop such a user, you have to either drop the objects or change their ownership. Because it’s risky to drop objects or change their ownership automatically, the Lambda function only changes the status of this user to ReadyToDelete in the user_login_status table. As the DBA, you need to check the user status in this table and manually drop the user.

If your database receives a very high number of connections, each Lambda function run has to process numerous connection log messages. You should test how long the Lambda function takes to process these messages, and consider running the function more frequently under a high connection load. This way, each run processes a smaller chunk of log messages, instead of a single daily run processing all the connection log messages collected during the day.

Summary

This post explained how to implement a mechanism to identify inactive Aurora PostgreSQL users and lock or delete them as per your security policy. When you try out this solution, you should test and modify the code to meet your specific requirements. You should also consider adding alerts to make sure the job is running and inform you when a user is locked or deleted.

As always, AWS welcomes your feedback. Please share your experience and any questions in the comments.


About the Authors

Amishi Shah is a DevOps Consultant with the Professional Services team at Amazon Web Services. She works with customers to build scalable, highly available, and secure solutions in the AWS Cloud. Her focus area is leading enterprise customers through large technical, organizational, and SDLC transformations.

Yaser Raja is a Senior Consultant with the Professional Services team at Amazon Web Services. He works with customers to build scalable, highly available, and secure solutions in the AWS Cloud. His focus area is homogeneous and heterogeneous migrations of on-premises databases to Amazon RDS and Aurora PostgreSQL.

David Rader is a Database Engineering Manager at Amazon Web Services.