AWS Big Data Blog

Analyze Database Audit Logs for Security and Compliance Using Amazon Redshift Spectrum

With the increased adoption of cloud services, organizations are moving their critical workloads to AWS. Some of these workloads store, process, and analyze sensitive data that must be audited to satisfy security and compliance requirements. The most common questions from auditors are who logged in to the system and when, who queried which sensitive data and when, and when a user last modified or updated his or her credentials.

By default, Amazon Redshift logs all information related to user connections, user modifications, and user activity on the database. However, to efficiently manage disk space, log tables are retained for only 2–5 days, depending on log usage and available disk space. To retain the log data for a longer period of time, enable database audit logging. After it's enabled, Amazon Redshift periodically pushes the data to a configured S3 bucket.
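While this data is still retained inside the cluster, you can inspect it directly in system tables such as STL_CONNECTION_LOG. A minimal sketch (column names follow the documented connection log layout):

```sql
-- Inspect recent connection events while they are still in the cluster.
-- System tables such as stl_connection_log keep only a few days of data,
-- which is why audit logging to S3 matters for long-term retention.
SELECT username, event, recordtime
FROM stl_connection_log
ORDER BY recordtime DESC
LIMIT 10;
```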

Amazon Redshift Spectrum is a recently released feature that enables querying and joining data stored in Amazon S3 with Amazon Redshift tables. With Redshift Spectrum, you can retrieve the audit data stored in S3 to answer all security and compliance–related questions. Redshift Spectrum can also combine datasets from tables in the database with datasets stored in S3. It supports files in Parquet, text (CSV, pipe-delimited, TSV), SequenceFile, and RCFile formats. It also supports compression types such as gzip, Snappy, and bzip2.

In this post, we’ll demonstrate querying the Amazon Redshift audit data logged in S3 to provide answers to common use cases described preceding.

Walkthrough

You set up the following resources:

  • Amazon Redshift cluster and parameter group
  • IAM role and policies to give Amazon Redshift access to S3 through Redshift Spectrum
  • Redshift Spectrum external tables

Prerequisites

  • Create an AWS account
  • Configure the AWS CLI to access your AWS account
  • Get access to a query tool compatible with Amazon Redshift
  • Create an S3 bucket

Cluster requirements

These must be true of your Amazon Redshift cluster:

  • Be in the same AWS Region as the S3 bucket storing the audit log files.
  • Be version 1.0.1294 or later.
  • Have read bucket and put object permissions on the S3 bucket to be configured for logging.
  • Have an IAM role attached that has at least the AmazonS3ReadOnlyAccess and AmazonAthenaFullAccess built-in policies attached.

Set up Amazon Redshift

Create a new parameter group to enable user activity logging:

aws redshift create-cluster-parameter-group --parameter-group-name rs10-enable-log --parameter-group-family redshift-1.0 --description "Enable Audit Logging"
aws redshift modify-cluster-parameter-group --parameter-group-name rs10-enable-log --parameters '[{"ParameterName":"enable_user_activity_logging","ParameterValue":"true"}]'

Create the Amazon Redshift cluster using the new parameter group:

aws redshift create-cluster --node-type dc1.large --cluster-type single-node --cluster-parameter-group-name rs10-enable-log --master-username <Username> --master-user-password <Password> --cluster-identifier <ClusterName>

Wait for the cluster to become available. When it is, enable audit logging:

aws redshift enable-logging --cluster-identifier <ClusterName> --bucket-name <bucketname> --s3-key-prefix <myprefix>

After this is completed, you should see Amazon Redshift writing audit log data to the path s3://bucketname/myprefix. The structure under this prefix is created by the Amazon Redshift audit logger, and follows this format:

AWSLogs/<Your Account Number>/redshift/<region name>/<year>/<month>/<day>/

Configure the AWS Glue Data Catalog

The next step is to add this audit log location into the AWS Glue Data Catalog, which allows you to easily work with this data from a variety of tools. These include AWS Glue applications, Amazon Athena, and Amazon Redshift Spectrum.

To get started, go to the AWS Glue console page, and choose Crawlers in the left navigation pane. Create a new crawler at the location you configured preceding (s3://bucketname/myprefix). At this time, you might want to create a new AWS Glue database, and add a table prefix for each table to be crawled from this location.

After the crawler has run, you should have a new table, redshift_audit_logs (assuming a table prefix of redshift_ in your crawler).

This table has only positional column names ('col0' through 'col15'), because the audit log files are in CSV format. The crawler also creates a partition column for each part of the prefix that the Amazon Redshift audit logger adds automatically, including the account number, Region, and year/month/day. These are placed into columns named 'partition_0' through 'partition_7'.
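Because these partition columns behave like ordinary columns, filtering on them prunes the S3 objects that get scanned. A sketch, using the partition layout shown in the sample output later in this post (partition_5 = year, partition_6 = month, partition_7 = day); the unqualified table name assumes you are querying inside the Glue database the crawler populated:

```sql
-- Scan only one day of audit logs by filtering on the partition columns
-- that the crawler derived from the S3 prefix structure.
SELECT col0 AS event, col1 AS recordtime, col6 AS username
FROM redshift_audit_logs
WHERE partition_5 = '2018'
  AND partition_6 = '05'
  AND partition_7 = '06'
LIMIT 10;
```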

We can now immediately query this data by going to the Athena console and previewing the contents of this table.

Set up Redshift Spectrum

We can now set up Redshift Spectrum to allow us to query this dataset on S3. We first configure Amazon Redshift with the correct IAM privileges to access S3 (remember that no AWS service has rights to access other services by default!).

IAM role and policies

Create an IAM role for Redshift Spectrum to access the S3 bucket from an Amazon Redshift database:

aws iam create-role --role-name RedshiftSpectrumRole --assume-role-policy-document file://RedshiftAssumeRole.json

RedshiftAssumeRole.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "redshift.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Attach built-in policies AmazonS3ReadOnlyAccess and AmazonAthenaFullAccess for Amazon Redshift to access S3 using Redshift Spectrum:

aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonAthenaFullAccess --role-name RedshiftSpectrumRole
aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess --role-name RedshiftSpectrumRole

Add the role that you just created to the Amazon Redshift cluster. Replace the account number and cluster identifier with your own.

aws redshift modify-cluster-iam-roles --cluster-identifier <ClusterName> --add-iam-roles arn:aws:iam::<accountNumber>:role/RedshiftSpectrumRole

At this point, you now have Redshift Spectrum completely configured to access S3 from the Amazon Redshift cluster.

External database and schema

The next step is to create a relationship between your Amazon Redshift cluster and the AWS Glue Data Catalog database that contains your Amazon Redshift audit logs. This is done with a SQL statement that creates an EXTERNAL SCHEMA, which maps a database whose metadata lives either in the Data Catalog or in a user-specified Apache Hive metastore server to a local schema inside the Amazon Redshift cluster. This mapping allows table definitions to be shared between Athena and Amazon Redshift, or even between different infrastructure setups spanning Amazon Redshift, Apache Spark, and Apache Hive.

To query audit log data, create an external database and schema in Amazon Redshift:

CREATE EXTERNAL SCHEMA auditLogSchema
FROM DATA CATALOG
DATABASE '<catalog database>'
IAM_ROLE 'arn:aws:iam::<AccountNumber>:role/<RoleName>'
REGION '<regionName>'
CREATE EXTERNAL DATABASE IF NOT EXISTS;

The catalog database parameter references the database in the AWS Glue Data Catalog in which you configured your crawler to create tables, and the region parameter specifies the AWS Region of the Athena data catalog. By default, this is the same AWS Region as the Amazon Redshift cluster.
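To confirm that the mapping was created, you can list the external schemas known to the cluster. A minimal sketch using the SVV_EXTERNAL_SCHEMAS system view:

```sql
-- Each row maps a local external schema to its backing catalog database
SELECT schemaname, databasename
FROM svv_external_schemas;
```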

You should now be able to see the audit log table from the AWS Glue Data Catalog in your external schema by querying the SVV_EXTERNAL_TABLES view:

master=# select tablename from svv_external_tables where schemaname = 'auditlogschema';
      tablename
---------------------
 redshift_audit_logs
(1 row)

master=# select * from auditlogschema.redshift_audit_logs limit 5;
          col0          |             col1             |       col2        | col3  | col4  | col5 |  col6  |   col7    | col8  | col9 | col10 | col11 | col12 | col13 | col14 | col15 | partition_0 | partition_1 | partition_2  | partition_3 | partition_4 | partition_5 | partition_6 | partition_7
------------------------+------------------------------+-------------------+-------+-------+------+--------+-----------+-------+------+-------+-------+-------+-------+-------+-------+-------------+-------------+--------------+-------------+-------------+-------------+-------------+-------------
 authenticated          | Sat, 5 May 2018 23:07:13:128 | [local]           |       | 29095 | dev  | rdsdb  | Ident     |     0 |      |       |     0 |       |       |       |       | energy-demo | AWSLogs     | 887210671223 | redshift    | eu-west-1   | 2018        | 05          | 06
 initiating session     | Sat, 5 May 2018 23:07:13:128 | [local]           |       | 29095 | dev  | rdsdb  | Ident     |     0 |      |       |     0 |       |       |       |       | energy-demo | AWSLogs     | 887210671223 | redshift    | eu-west-1   | 2018        | 05          | 06
 disconnecting session  | Sat, 5 May 2018 23:07:13:165 | [local]           |       | 29095 | dev  | rdsdb  | Ident     | 36818 |      |       |     0 |       |       |       |       | energy-demo | AWSLogs     | 887210671223 | redshift    | eu-west-1   | 2018        | 05          | 06
 authenticated          | Sat, 5 May 2018 23:33:31:241 | ::ffff:127.0.0.1  | 47722 | 30232 | dev  | rdsdb  | password  |     0 |      |       |     0 |       |       |       |       | energy-demo | AWSLogs     | 887210671223 | redshift    | eu-west-1   | 2018        | 05          | 06
 initiating session     | Sat, 5 May 2018 23:33:31:241 | ::ffff:127.0.0.1  | 47722 | 30232 | dev  | rdsdb  | password  |     0 |      |       |     0 |       |       |       |       | energy-demo | AWSLogs     | 887210671223 | redshift    | eu-west-1   | 2018        | 05          | 06
(5 rows)
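With the external table mapped, the compliance questions from the introduction can be answered in plain SQL. The following is a sketch against the connection log rows shown above, where col0 holds the event, col1 the record time, col2 the remote host, and col6 the user name; adjust the schema and table names to your own setup:

```sql
-- Who logged in to the system, and when?
SELECT col6 AS username,
       col2 AS remotehost,
       col1 AS recordtime
FROM auditlogschema.redshift_audit_logs
WHERE col0 = 'authenticated'
ORDER BY col1 DESC;

-- Which sessions did a specific user start?
-- Note: col1 is raw text in this layout; parse it to a timestamp
-- if you need true chronological ordering or date arithmetic.
SELECT col6 AS username,
       col1 AS recordtime
FROM auditlogschema.redshift_audit_logs
WHERE col0 = 'initiating session'
  AND col6 = 'rdsdb';
```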

Summary

In this post, we discussed how to configure Amazon Redshift audit logging and how to query the audit log data stored in S3 directly from Amazon Athena or Redshift Spectrum. Doing this can help you answer security and compliance–related questions with ease.

To learn more about Amazon Redshift Spectrum and other new features, see the Amazon Redshift documentation.

If you have any questions or suggestions, comment below.

Additional Reading

Learn how to improve query performance by orders of magnitude in the Amazon Redshift Engineering’s Advanced Table Design Playbook.

About the Author

Sandeep Kariro is a consultant for AWS Professional Services and a seasoned specialist in data center migrations to AWS. Sandeep also provides data-centric design solutions optimized for cloud deployments, keeping security, compliance, and operations as top design principles. He acts as a liaison between customers and AWS product teams to request new product features and communicate top customer asks. He loves traveling and has visited several countries around the globe in the last decade.

Ian Meyers looks after the Database Services Customer Advisory Team. He is based in London, UK, and has been with AWS for 6 years as a specialist in Database and Analytics. When he’s not doing cloudy stuff, he’s likely to be found with his family and the dog by the Thames riverside.