AWS Big Data Blog

Top 10 Performance Tuning Techniques for Amazon Redshift

Customers use Amazon Redshift for everything from accelerating existing database environments, to ingesting weblogs for big data analytics. Amazon Redshift is a fully managed, petabyte-scale, massively parallel data warehouse that offers simple operations and high performance. Amazon Redshift provides an open standard JDBC/ODBC driver interface, which allows customers to connect their existing business intelligence tools and reuse existing analytics queries.

Amazon Redshift can run any type of data model, from a production transaction system third-normal-form model to star and snowflake schemas, data vault, or simple flat tables. If you plan to adopt Amazon Redshift, or if you already have, consider how your data model is architected so that it deploys correctly and remains easy to maintain.

This post takes you through the most common issues that customers find as they adopt Amazon Redshift and gives you concrete guidance on how to address each. If you address each of these items, you should be able to achieve optimal query performance and scale effectively to meet customer demand.

Tip #1: Minimize the blocks scanned with sort key columns

You can identify a column in Amazon Redshift tables as a sort key. It acts like an index in other databases, but doesn’t incur the storage cost that it might on other platforms. You get the most value creating a sort key on columns commonly used in WHERE clauses. The Filters Used query helps identify columns frequently used to filter rows per table.

In most cases, select up to five columns as the COMPOUND sort key and sorting them by increasing cardinality provides the most benefit; you can also select more columns, which may be helpful for extremely large tables. AWS Support or solutions architects may recommend an INTERLEAVED sort key for a small number of edge cases, but save those solutions for after you consult with AWS.

To determine which tables don’t have sort keys, run the following query against the v_extended_table_info view from the Amazon Redshift Utils repository:

SELECT * FROM admin.v_extended_table_info WHERE sortkey IS null;

The Amazon Redshift Developer Guide also offers a tutorial that walks you through addressing unsorted tables. You can also run the following query to generate a list of recommended sort keys based on query activity:

SELECT ti.schemaname||'.'||ti.tablename AS "table", 
  ti.tbl_rows, 
  avg(r.s_rows_pre_filter) avg_s_rows_pre_filter, 
  round(1::float - avg(r.s_rows_pre_filter)::float/ti.tbl_rows::float,6) avg_prune_pct, 
  avg(r.s_rows) avg_s_rows, 
  round(1::float - avg(r.s_rows)::float/avg(r.s_rows_pre_filter)::float,6) avg_filter_pct, 
  ti.diststyle, 
  ti.sortkey_num, 
  ti.sortkey1, 
  trim(a.typname) "type", 
  count(distinct i.query) * avg(r.time) AS total_scan_secs, 
  avg(r.time) AS scan_time, 
  count(distinct i.query) AS num, 
  max(i.query) AS query, 
  trim(info) AS filter 
FROM stl_explain p 
JOIN stl_plan_info i 
ON (i.userid=p.userid AND i.query=p.query AND i.nodeid=p.nodeid) 
JOIN stl_scan s 
ON (s.userid=i.userid AND s.query=i.query AND s.segment=i.segment AND s.step=i.step) 
JOIN (
  SELECT table_id,
    "table" tablename,
    schema schemaname,
    tbl_rows,
    unsorted,
    sortkey1,
    sortkey_num,
    diststyle 
  FROM svv_table_info) ti 
ON ti.table_id=s.tbl 
JOIN (
  SELECT query, 
    segment, 
    step, 
    datediff(s,min(starttime),max(endtime)) AS time, 
    sum(rows) s_rows, 
    sum(rows_pre_filter) s_rows_pre_filter, 
    round(sum(rows)::float/sum(rows_pre_filter)::float,6) filter_pct 
  FROM stl_scan 
  WHERE userid>1 AND starttime::date = current_date-1 AND starttime < endtime 
  GROUP BY 1,2,3 HAVING sum(rows_pre_filter) > 0 ) r 
ON (r.query = i.query AND r.segment = i.segment AND r.step = i.step) 
LEFT JOIN (
  SELECT attrelid,
    t.typname 
  FROM pg_attribute a 
  JOIN pg_type t 
  ON t.oid=a.atttypid 
  WHERE attsortkeyord IN (1,-1)) a 
ON a.attrelid=s.tbl 
WHERE s.type = 2 AND ti.tbl_rows > 1000000 AND p.info LIKE 'Filter:%' AND p.nodeid > 0 
GROUP BY 1,2,7,8,9,10,15 
ORDER BY 1, 13 desc, 11 desc;

Keep in mind that queries evaluated against a sort key column must not apply a SQL function to the sort key. Apply the SQL function to the compared values and use the sort key as a reference. This is commonly found on TIMESTAMP columns that are used as sort keys.
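For example, the contrast looks like this (a sketch using a hypothetical clickstream table whose TIMESTAMP sort key is event_ts):

```sql
-- Suboptimal: the function wrapped around the sort key prevents block pruning.
SELECT count(*) FROM clickstream
WHERE trunc(event_ts) = '2019-11-11';

-- Better: leave the sort key bare and move the logic to the compared values.
SELECT count(*) FROM clickstream
WHERE event_ts >= '2019-11-11' AND event_ts < '2019-11-12';
```

Both predicates select the same rows, but only the second lets Amazon Redshift compare the sort key’s zone maps against constants and skip blocks.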

Another useful tool is the column ‘is_rrscan’ on the SVL_QUERY_SUMMARY system view. When is_rrscan = ‘t’, the sort key was leveraged to minimize the number of blocks scanned for the step in that particular query. This information is also captured on STL_SCAN.

Tip #2: Minimize network transfer time with distribution keys

Amazon Redshift is a distributed, shared-nothing database architecture. Every node in the cluster stores a subset of the data. Each node subdivides into slices, with each slice having one or more dedicated cores. The number of slices per node depends on the node type. For example, each DS2.XL compute node has two slices, and each DS2.8XL compute node has 16 slices.

When you create a table, you decide whether to spread the data evenly among slices (the default) or assign data to specific slices according to one of the columns. By choosing columns for distribution that are usually joined together, you can minimize the amount of data transferred over the network during the join. This can significantly increase performance on these queries.

Amazon Redshift Advisor includes several specific, evidence-backed recommendations to help you determine the optimal distribution strategy. Advisor automatically analyzes each table’s usage and builds a recommendation to get the best performance out of Amazon Redshift.

Many AWS topics focus on the selection of a good distribution key, including Choose the Best Distribution Style or the section on distribution styles and distribution keys portion of the Amazon Redshift Engineering’s Advanced Table Design Playbook post. In general, a good distribution key should have the following traits:

  • High cardinality: The column should include a large number of unique data values relative to the number of slices in the cluster.
  • Uniform distribution/low skew: Ideally, each unique value in the distribution key column should occur in the table about the same number of times. This allows Amazon Redshift to put the same number of records on each slice in the cluster.
  • Commonly joined: Select a distribution key column that joins to other tables. If you have many possible columns that fit this criterion, choose the column that joins the largest number of rows at runtime. This is usually but not always the column that joins to the largest table.

A skewed distribution key results in some slices working harder than others during query execution, creating an imbalance in CPU or memory use, and ultimately the query only runs as fast as the slowest slice.



If skew is a problem, you typically see uneven node performance on the cluster. Use one of the admin scripts in the amazon-redshift-utils GitHub repository, such as table_inspector.sql, to see how data blocks in a distribution key map to the slices and nodes in the cluster.
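As a quick first pass before running table_inspector.sql, the skew_rows column of SVV_TABLE_INFO reports, per table, the ratio between the slice holding the most rows and the slice holding the fewest; a sketch:

```sql
-- Tables ranked by row skew across slices; a ratio near 1 is well balanced.
SELECT "schema" || '.' || "table" AS "table",
       diststyle,
       skew_rows
FROM svv_table_info
WHERE skew_rows IS NOT NULL
ORDER BY skew_rows DESC;
```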

If you find that you have tables with skewed distribution keys, consider changing the distribution key to a column that exhibits high cardinality and uniform distribution. Evaluate a candidate column as a distribution key by creating a new table using CTAS:

CREATE TABLE my_test_table DISTKEY (<column name>) AS SELECT <column name> FROM <table name>;

Run the table_inspector.sql script against the table again to analyze data skew.

If there is no good distribution key in any of your records, you may find that moving to EVEN distribution works better. For small tables (for example, dimension tables with a couple of million rows), you can also use DISTSTYLE ALL to place table data onto the first slice of every node in the cluster.
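For instance, a small dimension table can be created with ALL distribution directly in a CTAS (the table and column names here are hypothetical):

```sql
-- ALL distribution copies the table to every node, so joins against this
-- dimension require no network transfer at query time.
CREATE TABLE dim_calendar
DISTSTYLE ALL
SORTKEY (calendar_date)
AS SELECT calendar_date, fiscal_quarter FROM src_calendar;
```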

A table’s minimum table size can lead to increased block fetches and inordinately greater storage footprint. For a discussion about minimum table size derivation, see Why does a table in an Amazon Redshift cluster consume more or less disk storage space than expected? You see this most commonly in wide tables on larger clusters with a proportionately higher number of slices. The distribution style of ALL can be an important tool in minimum table size scenarios.

Tip #3: Use Workload Management’s queues to increase throughput

Amazon Redshift runs queries using a queuing system known as workload management (WLM). You can define up to eight queues to separate workloads from each other. You can further adjust the concurrency on each queue to meet your throughput requirements.

Amazon Redshift Advisor automatically analyzes the current WLM usage and can make recommendations to get more throughput from your cluster. Periodically reviewing the suggestions from Advisor helps you get the best performance.

If you don’t have to differentiate your workloads via SQL statements, the Auto WLM option automatically and dynamically determines an effective active statement count and memory allocation.

For all workloads, you can take advantage of Amazon Redshift’s short query acceleration (SQA). SQA uses machine learning to execute short-running jobs in their own queue. This keeps small jobs processing, rather than waiting behind longer-running SQL statements. You can enable SQA via a check box on the console, or by using the AWS CLI.

If you add Concurrency Scaling, Amazon Redshift can automatically and quickly provision additional clusters should your workload begin to back up. This is an important consideration when deciding the cluster’s WLM configuration.

A common pattern is to optimize the WLM configuration to run most SQL statements without the assistance of supplemental memory, reserving additional processing power for short jobs. Some queueing is acceptable because additional clusters spin up if your needs suddenly expand. To enable Concurrency Scaling on a WLM queue, set the Concurrency Scaling mode value to AUTO. You can best inform your decisions by reviewing the Concurrency Scaling billing model.

In some cases, unless you enable Concurrency Scaling for the queue, the user or query’s assigned queue may be busy, and that user must wait for a slot to open. During this time, the system is not executing the query at all. If this becomes a frequent problem, you may have to increase concurrency.

First, determine if any queries are queuing, using the queuing_queries.sql admin script. Review the maximum concurrency that your cluster needed in the past with wlm_apex.sql, or get an hour-by-hour historical analysis with wlm_apex_hourly.sql. Keep in mind that increasing concurrency allows more queries to run, but each query gets a smaller share of the memory. You may find that by increasing concurrency, some queries must use temporary disk storage to complete, which is also suboptimal.
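You can also compare queue time against execution time directly from the STL_WLM_QUERY system table; a sketch (times in that table are microseconds, and user-defined queues typically map to service classes 6 and above):

```sql
-- Time statements spent queued vs. executing, per WLM service class, yesterday onward.
SELECT service_class,
       count(*) AS queries,
       sum(total_queue_time)/1000000 AS queue_secs,
       sum(total_exec_time)/1000000 AS exec_secs
FROM stl_wlm_query
WHERE service_class > 5
  AND queue_start_time > current_date - 1
GROUP BY service_class
ORDER BY queue_secs DESC;
```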

Tip #4: Handle surprise and anticipated bursts of activity

The legacy, on-premises model requires you to provision for the capacity you expect to need by the end of a contract. But the ability to resize a cluster allows for right-sizing your resources as you go. Amazon Redshift extends this ability with elastic resize and Concurrency Scaling.

Elastic resize provides the ability to quickly increase or decrease the number of compute nodes, doubling or halving the original cluster’s node count. You can expand the cluster to provide additional processing power to accommodate an expected increase in workload, such as Black Friday for internet shopping, or a championship game for a team’s web business. An elastic resize contrasts with a classic resize, which is slower but allows you to change the node type or expand beyond the doubling/halving size limitations of an elastic resize.

Concurrency Scaling, as described in the previous tip, allows your Amazon Redshift cluster to add capacity dynamically in response to the workload arriving at the cluster.

These options, taken together, open up new ways to right-size the platform to meet demand. Before these options, you would have needed to size your WLM queue, or even an entire Amazon Redshift cluster, beforehand in anticipation of upcoming peaks.

Tip #5: Increase performance and cluster throughput with Amazon Redshift Spectrum

Amazon Redshift Spectrum gives you the ability to query data directly from files on Amazon S3 through an independent, elastically sized compute layer. This opens the door to several new options to improve job performance and query throughput:

  • Use the Amazon Redshift Spectrum compute layer to offload workloads from the main cluster, and to apply more processing power to the specific SQL statement. Amazon Redshift Spectrum automatically assigns compute power up to approximately 10 times the processing power of the main cluster. This may be an effective way to quickly process large transform or aggregate jobs.
  • Skip the load in an ELT process and run the transform directly against data on S3. You can execute transform logic against partitioned, columnar data on S3 with an INSERT … SELECT statement. It’s easier than going through the extra work of loading a staging dataset, joining it to other tables, and then running a transform against it.
  • Use Amazon Redshift Spectrum to run queries as the data lands on S3, rather than adding a step to load the data onto the main cluster. This allows for real-time analytics.
  • Land the output of a staging/transformation cluster on S3 in a partitioned, columnar format. The main/reporting cluster can then either query from that S3 dataset directly or load it via an INSERT … SELECT statement.

You can use these patterns independently or apply them together to offload work to the Amazon Redshift Spectrum compute layer, quickly create a transformed or aggregated dataset, or eliminate entire steps in a traditional ETL process.
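The skip-the-load pattern above can be sketched as follows, assuming an external schema named spectrum has already been mapped to an AWS Glue Data Catalog database and that the table and column names are placeholders:

```sql
-- Transform partitioned, columnar data in place on S3 and land only the
-- aggregate on the main cluster; Redshift Spectrum performs the scan.
INSERT INTO daily_revenue
SELECT order_date, sum(amount)
FROM spectrum.raw_orders
WHERE order_date = current_date - 1
GROUP BY order_date;
```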

Tip #6: Address the inefficient use of temporary tables

Amazon Redshift provides temporary tables, which act like normal tables but are only visible in a single session. When you disconnect the session, Amazon Redshift automatically deletes the tables. You can create temporary tables using the CREATE TEMPORARY TABLE syntax, or by issuing a SELECT … INTO #TEMP_TABLE query. The CREATE TABLE statement gives you complete control over the definition of the temporary table. The SELECT … INTO and CTAS commands use the input data to determine column names, sizes and data types, and use default storage properties.

Consider default storage properties carefully, as they may cause problems. Amazon Redshift’s default table structure uses EVEN distribution with no column encoding. This data structure is suboptimal for many types of queries.

If you employ the SELECT…INTO syntax, you cannot set the column encoding, column distribution, or sort keys. The CREATE TABLE AS (CTAS) syntax instead lets you specify a distribution style and sort keys, and Amazon Redshift automatically applies LZO encoding for everything other than sort keys, Booleans, reals, and doubles. You can exert additional control by using the CREATE TABLE syntax rather than CTAS.

If you create temporary tables, remember to convert all SELECT…INTO syntax into the CREATE statement. This ensures that your temporary tables have column encodings and don’t cause distribution errors within your workflow. To convert a statement using this syntax:

SELECT column_a, column_b INTO #my_temp_table FROM my_table;

Analyze the temporary table for optimal column encoding:

Master=# analyze compression #my_temp_table;
     Table      |  Column  | Encoding
----------------+----------+----------
 #my_temp_table | column_a | lzo
 #my_temp_table | column_b | bytedict
(2 rows)

Now convert the SELECT…INTO statement to:

CREATE TEMPORARY TABLE my_temp_table(
column_a varchar(128) encode lzo,
column_b char(4) encode bytedict)
distkey (column_a) -- Assuming you intend to join this table on column_a
sortkey (column_b); -- Assuming you are sorting or grouping by column_b

INSERT INTO my_temp_table SELECT column_a, column_b FROM my_table;

You may also want to analyze statistics on the temporary table, especially when you use it as a join table for subsequent queries:

ANALYZE my_temp_table;

With this trick, you retain the functionality of temporary tables but control data placement on the cluster through distribution key assignment. You also take advantage of the columnar nature of Amazon Redshift by using column encoding.

Using explain plan alerts

Use diagnostic information from the cluster during query execution. Amazon Redshift stores this in a log called STL_ALERT_EVENT_LOG. Use the perf_alert.sql admin script to diagnose issues that the cluster encountered recently. Doing so helps you understand how your cluster develops over time.

Tip #7: Improve performance with Analyze and Vacuum

The Amazon Redshift Advisor automatically analyzes the current workload management (WLM) usage and makes recommendations for better performance and throughput. It also gathers statistics on your Amazon Redshift tables regularly, and warns you if table statistics become stale. Take advantage of this easy resource and check the Advisor suggestions from time to time.

Amazon Redshift, like other databases, requires statistics about tables and the composition of data blocks to make good decisions when planning a query. Without good statistics, the optimizer might make suboptimal choices about table access order, or how to join datasets.

The automatic analyze largely automates this simple maintenance task. But if you tend to create tables and then immediately query them (perhaps in an ETL process), that automated process may not have the chance to run. In this case, collect statistics on the table yourself with an explicit ANALYZE, optionally restricted to PREDICATE COLUMNS to keep the cost down. This provides Amazon Redshift with enough information to create a good plan.
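A minimal sketch, assuming a hypothetical staging table created moments earlier in an ETL job:

```sql
-- Collect statistics only for columns likely to appear in predicates,
-- which is cheaper than a full ANALYZE on a wide table.
ANALYZE my_staging_table PREDICATE COLUMNS;
```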

The ANALYZE Command History topic in the Amazon Redshift Developer Guide provides queries to help you address missing or stale statistics. You can instead run the missing_table_stats.sql admin script to determine which tables have missing stats. You can also use the statement below to determine tables that have stale statistics:

SELECT database, schema || '.' || "table" AS "table", stats_off 
FROM svv_table_info 
WHERE stats_off > 5;

In Amazon Redshift, data blocks are immutable. When rows are DELETED or UPDATED, Amazon Redshift flags them for deletion, but doesn’t physically remove them from disk. Updates result in Amazon Redshift writing a new block with new data appended.

In either case, the system still continues scanning the previous version of the row, and continues consuming disk space. As a result, table storage space increases and performance degrades due to avoidable disk I/O during scans. A VACUUM command recovers the space from deleted rows and restores the sort order.

Amazon Redshift’s Auto Vacuum eliminates the need for you to run the VACUUM DELETE command manually. However, the automated process gets the best results when you have sorted your data via the sort key columns. Periodically re-sort a table with a defined sort key with the VACUUM SORT command to keep the automated process working efficiently.
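For example, you can find candidates via SVV_TABLE_INFO and then re-sort them (the table name and the 20 percent threshold here are illustrative choices, not fixed recommendations):

```sql
-- Tables with a high unsorted percentage...
SELECT "schema" || '.' || "table" AS "table", unsorted
FROM svv_table_info
WHERE unsorted > 20
ORDER BY unsorted DESC;

-- ...then re-sort one of them without reclaiming deleted-row space.
VACUUM SORT ONLY my_big_table;
```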

The perf_alert.sql admin script identifies tables that have a large number of deleted rows.

To address issues involving tables with missing or stale statistics, or with deleted rows, run another AWSLabs utility: Analyze & Vacuum Schema. This ensures that you always keep up-to-date statistics, and only vacuum tables that need reorganization.

Tip #8: Diagnose and correct inefficient data loads

Amazon Redshift best practices suggest the use of the COPY command to perform data loads. This command uses all the compute nodes in your cluster to load data in parallel, from sources such as S3, Amazon DynamoDB, Amazon EMR HDFS file systems, or any SSH connection.

When performing data loads, compress the files whenever possible. Amazon Redshift supports both GZIP and LZO compression. It’s more efficient to load a large number of small files than one large one, and the ideal file count is a multiple of the slice count.

The number of slices per node depends on the cluster’s node size. By ensuring an equal number of files per slice, you know that COPY executions use cluster resources evenly and complete as quickly as possible.
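You can confirm your cluster’s slice count from the STV_SLICES system table, then aim for a file count that is a multiple of the total:

```sql
-- Slices per node; sum the slice counts to size your input file batches.
SELECT node, count(*) AS slices
FROM stv_slices
GROUP BY node
ORDER BY node;
```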

The following query calculates statistics for each load:

SELECT a.tbl,
  trim(c.nspname) AS "schema", 
  trim(b.relname) AS "tablename", 
  sum(a.rows_inserted) AS "rows_inserted", 
  sum(d.distinct_files) AS files_scanned,  
  sum(d.MB_scanned) AS MB_scanned, 
  (sum(d.distinct_files)::numeric(19,3)/count(distinct a.query)::numeric(19,3))::numeric(19,3) AS avg_files_per_copy, 
  (sum(d.MB_scanned)/sum(d.distinct_files)::numeric(19,3))::numeric(19,3) AS avg_file_size_mb, 
  count(distinct a.query) no_of_copy, 
  max(a.query) AS sample_query, 
  (sum(d.MB_scanned)*1024*1000000/SUM(d.load_micro)) AS scan_rate_kbps, 
  (sum(a.rows_inserted)*1000000/SUM(a.insert_micro)) AS insert_rate_rows_ps 
FROM 
  (SELECT query, 
    tbl, 
    sum(rows) AS rows_inserted, 
    max(endtime) AS endtime, 
    datediff('microsecond',min(starttime),max(endtime)) AS insert_micro 
  FROM stl_insert 
  GROUP BY query, tbl) a,      
  pg_class b, 
  pg_namespace c,                 
  (SELECT b.query, 
    count(distinct b.bucket||b.key) AS distinct_files, 
    sum(b.transfer_size)/1024/1024 AS MB_scanned, 
    sum(b.transfer_time) AS load_micro 
  FROM stl_s3client b 
  WHERE b.http_method = 'GET' 
  GROUP BY b.query) d 
WHERE a.tbl = b.oid AND b.relnamespace = c.oid AND d.query = a.query 
GROUP BY 1,2,3 
ORDER BY 4 desc;

The following query shows the time taken to load a table, and the time taken to update the table statistics, both in seconds and as a percentage of the overall load process:

SELECT a.userid, 
  a.query, 
  round(b.comp_time::float/1000::float,2) comp_sec, 
  round(a.copy_time::float/1000::float,2) load_sec, 
  round(100*b.comp_time::float/(b.comp_time + a.copy_time)::float,2) ||'%' pct_complyze, 
  substring(q.querytxt,1,50) 
FROM (
  SELECT userid, 
    query, 
    xid, 
    datediff(ms,starttime,endtime) copy_time 
  FROM stl_query q 
  WHERE (querytxt ILIKE 'copy %from%') 
  AND exists (
    SELECT 1 
    FROM stl_commit_stats cs 
    WHERE cs.xid=q.xid) 
  AND exists (
    SELECT xid 
    FROM stl_query 
    WHERE query IN (
      SELECT distinct query 
      FROM stl_load_commits))) a 
LEFT JOIN (
  SELECT xid, 
    sum(datediff(ms,starttime,endtime)) comp_time 
  FROM stl_query q 
  WHERE (querytxt LIKE 'COPY ANALYZE %' OR querytxt LIKE 'analyze compression phase %') 
  AND exists (
    SELECT 1 
    FROM stl_commit_stats cs 
    WHERE cs.xid=q.xid) 
  AND exists (
    SELECT xid 
    FROM stl_query 
    WHERE query IN (
      SELECT distinct query 
      FROM stl_load_commits)) 
  GROUP BY 1) b 
ON b.xid = a.xid 
JOIN stl_query q 
ON q.query = a.query 
WHERE (b.comp_time IS NOT null) 
ORDER BY 6,5;

An anti-pattern is to insert data directly into Amazon Redshift, either one record at a time or with a multi-value INSERT statement (which can insert at most 16 MB of data at a time). These are leader node–based operations, and can create performance bottlenecks by maxing out the leader node’s network as it distributes the data to the compute nodes.

Tip #9: Take advantage of the UNLOAD statement for large datasets

Using the UNLOAD command, Amazon Redshift can export SQL statement output to S3 in a massively parallel fashion. This technique greatly improves the export performance and lessens the impact of running the data through the leader node. You can compress the exported data on its way off the Amazon Redshift cluster. As the size of the output grows, so does the benefit of using this feature.
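A minimal sketch of the command (the S3 path, IAM role ARN, and table name are placeholders):

```sql
-- Each slice writes its own compressed part file in parallel to S3.
UNLOAD ('SELECT * FROM big_fact_table WHERE sale_date >= ''2019-01-01''')
TO 's3://my-bucket/exports/big_fact_table_'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
GZIP
PARALLEL ON;
```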

The query monitoring rule (QMR) metric return_row_count is a readily available value that identifies SQL statements that would benefit from a move to an UNLOAD methodology.

Tip #10: Use query monitoring rules and Amazon CloudWatch metrics to drive additional performance improvements

In addition to the Amazon Redshift Advisor recommendations, you can get performance insights through other channels.

The Amazon Redshift cluster continuously and automatically collects query monitoring rules metrics, whether or not you institute any rules on the cluster. This convenient mechanism lets you view attributes like the following:

  • The CPU time for an SQL statement (query_cpu_time)
  • The amount of temporary space a job might ‘spill to disk’ (query_temp_blocks_to_disk)
  • The ratio of the highest number of blocks read over the average (io_skew)

It also makes Amazon Redshift Spectrum metrics available, such as the number of Spectrum rows and MBs scanned by a query (spectrum_scan_row_count and spectrum_scan_size_mb respectively). The Amazon Redshift system view SVL_QUERY_METRICS_SUMMARY shows the maximum values of metrics for completed queries, and STL_QUERY_METRICS and STV_QUERY_METRICS carry the information at one-second intervals for the completed and executing queries respectively.
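For instance, you can pull the metrics named above for recent completed queries directly from SVL_QUERY_METRICS_SUMMARY:

```sql
-- Recent completed queries ranked by how much temporary data spilled to disk.
SELECT query,
       query_cpu_time,
       query_temp_blocks_to_disk,
       io_skew,
       return_row_count
FROM svl_query_metrics_summary
ORDER BY query_temp_blocks_to_disk DESC NULLS LAST
LIMIT 20;
```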

The Amazon Redshift CloudWatch metrics are data points for use with CloudWatch monitoring. These can be cluster-wide metrics, such as health status, or read/write IOPS, latency, and throughput. CloudWatch also offers compute node–level data, such as network transmit/receive throughput and read/write latency. At the WLM queue grain, it reports the number of queries completed per second, queue length, and other metrics.

Although the trend toward automation continues to improve ease of use, personal care and attention to your clusters still yields additional insights.


Amazon Redshift is a powerful, fully managed data warehouse that can offer increased performance and lower cost in the cloud. Although Amazon Redshift can run any data model, you can avoid possible pitfalls that might decrease performance or increase cost, by being aware of how data is stored and managed. Run a simple set of diagnostic queries for common issues and ensure that you get the best performance possible.

We hope you learned a great deal about making the most of your Amazon Redshift account with the resources here.

If you have questions or suggestions, please leave a comment.

NOTE: This blog post has been translated into Japanese.


