Overview of security best practices for Amazon RDS for PostgreSQL and Amazon Aurora PostgreSQL-Compatible Edition
July 2023: This post was reviewed for accuracy.
Security is a key factor to consider when choosing or migrating to a database. Cloud security at AWS is the highest priority. Security and compliance is a shared responsibility between AWS and the customer. This shared model can help relieve your operational burden, because AWS allows you to securely manage your databases in the cloud by providing a variety of security features that you can use with AWS database services. For example, AWS services like AWS Identity and Access Management (IAM) and AWS Key Management Service (AWS KMS) can interact with both Amazon Relational Database Service (Amazon RDS) and Amazon Aurora. Numerous options are also available with the native PostgreSQL engine, which is also available in AWS as Amazon RDS for PostgreSQL and Amazon Aurora PostgreSQL-Compatible Edition.
In this post, we provide you with an overview of different options available with both Amazon RDS for PostgreSQL and Aurora PostgreSQL and natively with the PostgreSQL engine to run your databases securely on AWS. We discuss network security, database security, and data encryption options.
To achieve network security, let's start with the most basic networking best practices.
There are two methods of securing your network within AWS: security groups and network access control lists (NACLs). Both act as firewalls for your traffic. However, security groups and NACLs operate at separate layers in the VPC and handle response traffic in different ways:
- Always restrict traffic using security groups. A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. Configure your security groups to allow ingress traffic only from selected application security groups or selected application servers.
- Make sure you restrict network traffic using NACLs. This is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets.
Amazon RDS and Aurora, being managed services, restrict access to PostgreSQL configuration files like postgresql.conf and pg_hba.conf. Therefore, the best way to manage who connects to the database is based on specific IPs or IP ranges, and on database users. This is done using a combination of NACLs and security groups, and by setting the right user permissions from within the database. We discuss database user permissions later in this post.
After you implement the correct NACL and security group settings, you can also consider the encryption in transit options available. Amazon RDS for PostgreSQL and Aurora PostgreSQL support Secure Sockets Layer (SSL), and you have the option to force all connections to your PostgreSQL instance to use SSL.
SSL with Amazon RDS for PostgreSQL and Aurora PostgreSQL
Amazon RDS for PostgreSQL and Aurora PostgreSQL offer a parameter called rds.force_ssl. When you set rds.force_ssl to 1, all clients are required to connect over SSL, and non-SSL connection attempts are rejected. Clients can still choose any SSL-enabled sslmode, such as require, verify-ca, or verify-full; we recommend verify-full, which also verifies that the server host name matches its certificate. For more information about available SSL modes, see SSL Library Initialization.
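For example, assuming a hypothetical endpoint and database name, and a CA certificate bundle downloaded from AWS as rds-ca-bundle.pem, a verify-full connection with psql might look like the following:

```
# Hypothetical endpoint, database, and user names; replace with your own
psql "host=mydb.cluster-cxxxxxxxx.us-east-1.rds.amazonaws.com \
      port=5432 dbname=mydb user=appuser \
      sslmode=verify-full sslrootcert=rds-ca-bundle.pem"
```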
Amazon RDS for PostgreSQL and Aurora PostgreSQL support Transport Layer Security (TLS) versions 1.0, 1.1, and 1.2. Starting with PostgreSQL version 12, two new parameters control the minimum and maximum allowed TLS versions. These parameters are ssl_min_protocol_version, which defaults to TLSv1.2, and ssl_max_protocol_version, which is modifiable but doesn't have a preset value in Amazon RDS for PostgreSQL.
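To confirm whether a given session is encrypted and which TLS version it negotiated, you can query the pg_stat_ssl view from within that session:

```sql
-- Shows SSL status, TLS version, and cipher for the current connection
SELECT ssl, version, cipher
FROM pg_stat_ssl
WHERE pid = pg_backend_pid();
```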
Now that we have discussed network security, let’s dive deeper into database security.
Authentication, Authorization, and Accounting (AAA) is a widely known security framework for controlling application and network access. In this section, we discuss examples and possible solutions to achieve AAA in your RDS for PostgreSQL or Aurora PostgreSQL database.
Authentication in the database world involves deciding who is allowed to log in to the database. It could be only your DBA, or a broader group such as database developers who need access for development and testing. Before performing any task in the database, a user must authenticate. You have three different options for setting up secure access to your instance:
- IAM authentication – You can authenticate to your DB instance using IAM database authentication. With this authentication method, you don't need to use a password when you connect to a DB instance. Instead, you use an authentication token, which is a unique string of characters that Amazon Aurora generates on request. Authentication tokens are generated using AWS Signature Version 4. Each token has a lifetime of 15 minutes, so you must connect with the token within that window. You don't need to store user credentials in the database, because authentication is managed externally using IAM. You can also still use standard database authentication. The main benefit of using IAM is that you can centrally manage access to your database resources, instead of managing access individually on each DB instance. One limitation is that the maximum number of new connections per second for your DB instance might be limited, depending on its DB instance class and your workload.
- Kerberos authentication – You can use external authentication of database users using Kerberos and Microsoft Active Directory. Kerberos is a network authentication protocol that uses tickets and symmetric-key cryptography to eliminate the need to transmit passwords over the network. Kerberos has been built into Active Directory and is designed to authenticate users to network resources, such as databases. Kerberos and Active Directory provide the benefits of single sign-on and centralized authentication of database users. You can keep your user credentials in Active Directory, which provides a centralized place for storing and managing credentials for multiple DB instances.
- DB password login – In this approach, you create database users with passwords, which then have access to the database. This approach carries a management burden: the password remains the same until it is changed, and even if you set an expiration date, you must rotate the password manually each time. You can also use AWS Secrets Manager to manage and rotate your database credentials automatically. Refer to Rotate Amazon RDS database credentials automatically with AWS Secrets Manager for more details.
You can use either IAM or Kerberos authentication, but not both at the same time.
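As an illustration of the IAM authentication flow, assuming a hypothetical endpoint and an IAM-enabled database user named iam_user, you can generate a short-lived token with the AWS CLI and pass it to psql as the password:

```
# Generate an authentication token (valid for 15 minutes)
TOKEN=$(aws rds generate-db-auth-token \
    --hostname mydb.cluster-cxxxxxxxx.us-east-1.rds.amazonaws.com \
    --port 5432 \
    --username iam_user \
    --region us-east-1)

# Use the token as the password; IAM authentication requires SSL
PGPASSWORD="$TOKEN" psql \
    "host=mydb.cluster-cxxxxxxxx.us-east-1.rds.amazonaws.com \
     port=5432 dbname=mydb user=iam_user sslmode=verify-full"
```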
Now that we have discussed who can access the database, let’s shed some light on what a user can access.
Authorization is a way of controlling access to the data. It defines what a user can do, such as reading from or writing to tables, creating another database, or creating another user.
Authorization is given through assigning privileges on database objects (like tables, sequence, functions, and type) to database users, roles, and groups based on your requirements. The role’s or user’s capabilities and level of access is defined by the privileges granted to them directly, through role membership, or through object ownership. For a list of privileges, see GRANT and REVOKE. The following are best practices to control access and have proper authorization:
- Create a role without login privileges (so it can't be used to connect to the database directly) and grant it the necessary privileges on objects (such as databases, schemas, tables, and functions). You can then grant this access role to database users per your requirements. Because the users that modify the data aren't the owners of the objects, this helps prevent accidental changes to the data model and structure.
- Create and assign database users based on the application type. This allows for privilege separation by login role when necessary.
- Grant the access roles to login roles in order to provide them access to a set or group of database objects.
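The role pattern described in the preceding steps can be sketched as follows; all role, schema, and object names are hypothetical:

```sql
-- Access role: NOLOGIN, so it can't connect on its own
CREATE ROLE app_readwrite NOLOGIN;
GRANT USAGE ON SCHEMA app TO app_readwrite;
GRANT SELECT, INSERT, UPDATE, DELETE
    ON ALL TABLES IN SCHEMA app TO app_readwrite;

-- Cover tables created in the future as well
ALTER DEFAULT PRIVILEGES IN SCHEMA app
    GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO app_readwrite;

-- Login role for the application, which inherits the access role
CREATE ROLE app_user LOGIN PASSWORD 'change-me';
GRANT app_readwrite TO app_user;
```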
You can achieve better user management by segregating the users that applications use to connect (application users) from the users used for development, testing, or administration purposes.
The rds_superuser role is a predefined Amazon RDS role similar to the PostgreSQL superuser role, but with some restrictions. As with the PostgreSQL superuser role, the rds_superuser role has the most privileges for your DB instance. You should not assign this role to users unless they need the most access to the DB instance.
It is also a best practice to avoid using rds_superuser or application users to connect to the database for development purposes. Instead, create separate user logins for developers. For more information, see Managing PostgreSQL users and roles.
Another way to secure data at the table level is row-level security, which allows you to restrict access to rows in a table based on a policy you define. In other words, it provides partial access to the table. For example, consider a use case in which you have a table with manager and employee details. It may contain sensitive information such as salaries, and you might not want a manager to see the details of employees who don't report to them. In this case, you can either split the table into multiple parts, depending on how big it is, or use row-level security to restrict data access. For more information, refer to Multi-tenant data isolation with PostgreSQL Row Level Security.
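A minimal row-level security sketch for the manager/employee example above, with hypothetical table and column names:

```sql
CREATE TABLE employees (
    emp_name text NOT NULL,
    salary   numeric,
    manager  text NOT NULL   -- login role name of the employee's manager
);

-- Enable RLS and define a policy: managers see only their direct reports
ALTER TABLE employees ENABLE ROW LEVEL SECURITY;
CREATE POLICY manager_reports ON employees
    USING (manager = current_user);
```

Note that table owners and superusers bypass row-level security by default; use ALTER TABLE ... FORCE ROW LEVEL SECURITY if the policy should also apply to the owner.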
Now that we’ve discussed who can access what in the database, let’s dive deep into logging this information. In the next section, we discuss how to monitor what users are actually doing inside the database.
The final “A” in the AAA framework is accounting. Accounting is keeping track of what the database user accessed in the database and how. This includes the amount of system time or the amount of data a user has sent or received during a session.
Accounting is carried out by logging session usage information, and is used for authorization control, billing, trend analysis, resource utilization, and capacity planning activities. By default, Amazon RDS for PostgreSQL logging parameters capture all server errors, including query failures, login failures, fatal server errors, and deadlocks.
You can enable query logging in Amazon RDS for PostgreSQL or Aurora PostgreSQL based on your requirements. If you decide to enable extensive logging for auditing purposes, be aware that these logs take up database storage. A best practice is to monitor storage utilization and set up log retention (using the rds.log_retention_period parameter) accordingly for proper rotation of logs. If you need the error logs for a longer time, you can also download them and send them to your own Amazon Simple Storage Service (Amazon S3) bucket for later use. This saves database storage while still keeping the logs long enough for auditing purposes. You can also publish PostgreSQL logs to Amazon CloudWatch Logs to perform real-time analysis of log data for auditing purposes.
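For example, to keep logs for 7 days (10,080 minutes) before rotation, you can set rds.log_retention_period in your custom parameter group with the AWS CLI (the parameter group name here is hypothetical):

```
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-postgres-params \
    --parameters "ParameterName=rds.log_retention_period,ParameterValue=10080,ApplyMethod=immediate"
```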
For more information on dealing with PostgreSQL log files, see Working with RDS and Aurora PostgreSQL logs: Part 1 and Working with RDS and Aurora PostgreSQL logs: Part 2.
AWS CloudTrail can also help you audit your AWS account. CloudTrail is enabled on your AWS account when you create it, and logs all Amazon RDS actions. CloudTrail provides a record of actions taken by a user, role, or AWS service in Amazon RDS. For more information, see Working with AWS CloudTrail and Amazon RDS.
You can also use the pgAudit extension to audit RDS for PostgreSQL or Aurora PostgreSQL instances. pgAudit logs detailed information about database sessions: which table was accessed by which user and what operation was performed, in much more detail than the standard log_statement parameter in the parameter group. Because this audit information goes into the same error log, monitoring storage is very important if you have pgAudit enabled. For more details on pgAudit and its usage with Amazon RDS, see How do I use the pgaudit extension to audit my Amazon RDS DB instance that is running PostgreSQL. You can also integrate Database Activity Streams (DAS) with your monitoring tools to monitor and set alarms for audit activity in your Amazon Aurora cluster.
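Enabling pgAudit involves a parameter-group change plus an in-database step. A sketch, assuming your custom parameter group already adds pgaudit to shared_preload_libraries (which requires a reboot) and sets the audit classes to capture:

```sql
-- Prerequisite (in the DB parameter group, not in SQL):
--   shared_preload_libraries = 'pgaudit'
--   pgaudit.log = 'ddl, role'
CREATE EXTENSION pgaudit;

-- Verify the audit classes in effect for this session
SHOW pgaudit.log;
```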
Encryption is another way to achieve data security. It enables you to protect your data both when it's stored and when it's transmitted over the internet.
You can create an RDS instance or Aurora cluster with encrypted storage. The storage is encrypted using AWS KMS keys. By default, when you enable encryption, the storage is encrypted using an AWS managed key; however, you can also use your own customer managed keys for any compliance or security requirements. You can only enable storage-layer encryption when creating the instance or cluster. For more information, see Encrypting Amazon RDS resources.
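For example, creating an encrypted RDS for PostgreSQL instance with a customer managed KMS key might look like the following with the AWS CLI (the instance identifier and key ARN are hypothetical):

```
aws rds create-db-instance \
    --db-instance-identifier mydb \
    --db-instance-class db.r6g.large \
    --engine postgres \
    --master-username postgres \
    --master-user-password 'change-me' \
    --allocated-storage 100 \
    --storage-encrypted \
    --kms-key-id arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
```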
Other important recommendations and best practices
Consider the following additional recommendations and best practices:
- If you set log_statement to all or ddl, statements that create or alter users are logged with the password in plain text. Therefore, we don't recommend logging those statements. You can change the logging level before and after running such statements. For other alternatives for this particular use case, see How can I stop Amazon RDS for PostgreSQL from logging my passwords in clear-text in the log files.
- Avoid using the public schema. Regardless of how critical your functions are, anything you create in the public schema can be seen and run by any user in your database. We always recommend that application users have their own schemas, especially if you're new to PostgreSQL, because the public schema by default allows access to everyone.
- You can use event triggers to prevent accidental DDL on any objects. Event triggers are also useful for auditing schema changes, helping you avoid unwanted modifications or accidental losses.
- If you want a non-privileged user to have escalated privileges for the purpose of running a particular function, you can add the SECURITY DEFINER keyword to the function you're creating. When a user calls the function, it runs with the privileges of the function's owner. This is quite useful when you want a user to run a specific function without holding the underlying privilege all the time.
- Keep your RDS for PostgreSQL or Aurora PostgreSQL database up to date by upgrading to the most recent minor release of whatever major version is in use. When a minor release fixes security vulnerabilities, upgrade as soon as possible to maintain data confidentiality and integrity.
- Refer to Upgrading the PostgreSQL DB engine for Aurora PostgreSQL for Aurora PostgreSQL upgrades and Amazon Aurora PostgreSQL releases and engine versions for available database engine versions.
- Refer to Upgrading the PostgreSQL DB engine for Amazon RDS for Amazon RDS for PostgreSQL upgrades and Supported PostgreSQL database versions for supported versions.
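The SECURITY DEFINER pattern mentioned above can be sketched as follows; the function, sequence, and role names are hypothetical. Pinning search_path is a common hardening step for definer functions, because it prevents callers from hijacking object resolution:

```sql
-- Owned by a privileged role; runs with the owner's privileges
CREATE FUNCTION reset_app_sequence()
RETURNS void
LANGUAGE plpgsql
SECURITY DEFINER
SET search_path = app, pg_temp   -- fixed path: callers can't override it
AS $$
BEGIN
    PERFORM setval('app.order_id_seq', 1, false);
END;
$$;

-- Allow only the intended role to call it
REVOKE EXECUTE ON FUNCTION reset_app_sequence() FROM PUBLIC;
GRANT EXECUTE ON FUNCTION reset_app_sequence() TO app_user;
```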
When migrating from databases like Oracle or Microsoft SQL Server to Amazon RDS for PostgreSQL or Aurora PostgreSQL, security is one of the major areas to focus on to meet the necessary guidelines. This post covered the security options available with RDS for PostgreSQL and Aurora PostgreSQL instances at different layers: network, database, and storage. At the network layer, you can use NACLs and security groups to restrict database connections, and SSL to protect allowed connections. At the database layer, you can apply the AAA (Authentication, Authorization, and Accounting) framework to secure your data with the various tools and features available. At the storage layer, you can use the encrypted storage option with AWS managed or customer managed KMS keys. Based on the overview of the solutions discussed, you can consider the best security mechanism for your databases.
If you have any questions or suggestions about this post, feel free to leave a comment. We hope the information we shared helps!
About the Authors
Baji Shaik is a Sr. Lead Consultant with AWS ProServe, GCC India. His background spans a wide depth and breadth of expertise and experience in SQL/NoSQL database technologies. He is a Database Migration Expert and has developed many successful database solutions addressing challenging business requirements for moving databases from on-premises to Amazon RDS and Aurora PostgreSQL/MySQL. He is an eminent author, having written several books on PostgreSQL. A few of his recent works include "PostgreSQL Configuration", "Beginning PostgreSQL on the Cloud", and "PostgreSQL Development Essentials". Furthermore, he has delivered several conference and workshop sessions.
Divya Sharma is a Database Specialist Solutions architect at AWS, focusing on RDS/Aurora PostgreSQL. She has helped multiple enterprise customers move their databases to AWS, providing assistance on PostgreSQL performance and best practices.