Amazon Redshift now updates table statistics by running ANALYZE automatically
Posted On: Jan 18, 2019
Analyze operations now run automatically on your Amazon Redshift tables in the background to deliver improved query performance and optimal use of system resources.
-
Amazon Redshift now runs VACUUM DELETE automatically
Posted On: Dec 19, 2018
Amazon Redshift now automatically runs the VACUUM DELETE operation to reclaim disk space occupied by rows that were marked for deletion by previous UPDATE and DELETE operations. It also defragments the tables to free up consumed space and improves performance for your workloads.
-
Amazon Redshift now available in the EU (Stockholm) AWS Region
Posted On: Dec 12, 2018
Amazon Redshift is now available in the EU (Stockholm) AWS Region.
-
Amazon Redshift announces Deferred Maintenance and Advance Event Notifications
Posted On: Nov 20, 2018
You can now defer maintenance of your Amazon Redshift cluster to keep your data warehouse running without interruption during critical business periods. You will now also receive advance notifications from Amazon Redshift prior to any upcoming maintenance on your cluster.
-
Amazon Redshift announces Elastic resize: Add and remove nodes in minutes
Posted On: Nov 15, 2018
You can now resize your Amazon Redshift cluster in minutes by adding nodes to get better performance and more storage for demanding workloads, or by removing nodes to save cost.
-
Amazon Redshift now available in AWS GovCloud (US-East) Region
Posted On: Nov 12, 2018
We are excited to announce that Amazon Redshift is now available in the AWS GovCloud (US-East) Region.
-
Encrypt your previously unencrypted Amazon Redshift cluster with 1-click
Posted On: Oct 16, 2018
You can now easily encrypt a previously unencrypted Amazon Redshift cluster with an AWS Key Management Service (AWS KMS) encryption key.
-
Amazon Redshift announces Query Editor to run queries directly from the AWS Management Console
Posted On: Oct 4, 2018
You can now query data in your Amazon Redshift cluster directly from the AWS Management Console, using the new Query Editor. This provides an easier way for admins and end users to run SQL queries without having to install and set up an external JDBC/ODBC client. Query results are instantly visible within the console.
-
Amazon Redshift automatically enables short query acceleration
Posted On: Aug 8, 2018
Amazon Redshift now enables short query acceleration by default to speed up execution of short-running queries such as reports, dashboards, and interactive analysis. Short query acceleration uses machine learning to provide higher performance, faster results, and better predictability of query execution times.
-
Amazon Redshift announces support for nested data with Redshift Spectrum
Posted On: Aug 8, 2018
You can now use Amazon Redshift to directly query nested data in Apache Parquet, Apache ORC, JSON and Amazon Ion file formats stored in external tables in Amazon S3. Redshift Spectrum, a feature of Amazon Redshift, enables you to use your existing Business Intelligence tools and intuitive and powerful SQL extensions to analyze both scalar and nested data stored in your Amazon S3 data lake.
-
Amazon Redshift announces support for lateral column alias reference
Posted On: Aug 7, 2018
Amazon Redshift now enables you to write queries that refer to a column alias within the same query immediately after it is declared, improving the readability of complex SQL queries.
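For example (the table and column names below are illustrative, not part of the announcement), an alias can now be referenced later in the same SELECT list:
  SELECT clicks::float / impressions AS ctr,   -- define the alias
         ROUND(ctr * 100, 2) AS ctr_percent    -- reference it immediately
  FROM ad_stats;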
-
Amazon Redshift announces new metrics to help optimize cluster performance
Posted On: Jul 30, 2018
You can now get a detailed view of your Amazon Redshift cluster performance with the Workload Execution Breakdown graph on the console Database Performance page or the QueryRuntimeBreakdown metric via Amazon CloudWatch. You can use these new metrics to optimize cluster performance to provide higher throughput and faster results.
-
Amazon Redshift now supports current and trailing tracks for release updates
Posted On: Jul 26, 2018
Amazon Redshift is now available on two different release cycles - Current Maintenance Track and Trailing Maintenance Track. With the Current Maintenance Track you will get the most up-to-date certified release version with the latest features, security updates, and performance enhancements. With the Trailing Maintenance Track you will be on the previous certified release.
-
Amazon Redshift now provides customized best practice recommendations with Advisor
Posted On: Jul 26, 2018
Amazon Redshift announces Advisor, a new feature that provides automated recommendations to help you optimize database performance and decrease operating costs. Advisor is available via the Amazon Redshift console at no charge.
-
Amazon Redshift announces free upgrade for DC1 Reserved Instances to DC2
Posted On: Jul 19, 2018
You can now upgrade your Amazon Redshift DC1 Reserved Instances to DC2 Reserved Instances for the remainder of your DC1 reserved term, and get up to twice the performance of DC1 at the same price. DC2 nodes are designed for demanding data warehousing workloads that require low latency and high throughput.
-
Amazon Redshift Can Now COPY from Parquet and ORC File Formats
Posted On: Jun 5, 2018
You can now COPY Apache Parquet and Apache ORC file formats from Amazon S3 to your Amazon Redshift cluster. Apache Parquet and ORC are columnar data formats that allow users to store their data more efficiently and cost-effectively. With this update, Redshift now supports COPY from six file formats: AVRO, CSV, JSON, Parquet, ORC and TXT.
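A minimal sketch of such a load (the bucket path, IAM role ARN, and table name are placeholders):
  COPY listing
  FROM 's3://mybucket/parquet/listing/'
  IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
  FORMAT AS PARQUET;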
-
Amazon Redshift Makes Short Query Acceleration Self-Optimizing
Posted On: May 31, 2018
Amazon Redshift improved Short Query Acceleration (SQA) by automating the maximum time-out setting for short queries. On Nov 20th, 2017, we announced SQA, which uses machine learning algorithms to predict the execution time of a query and move short-running queries to an express queue for immediate processing. In the past, SQA needed a user-defined 'maximum run time' (between 1 and 20 seconds) to identify short queries. With this update, by setting the maximum short query run time to 'dynamic,' you can let Redshift automate this setting. In addition, SQA’s improved machine learning algorithm adapts the maximum run time to the changing workload, making it easier to minimize short query queuing time and increase throughput.
-
Amazon Redshift Adds New CloudWatch Metrics for Easy Visualization of Cluster Performance
Posted On: May 23, 2018
You can now monitor the performance and health of your Amazon Redshift cluster with two new Amazon CloudWatch metrics, Query Throughput and Query Duration. These metrics monitor Redshift cluster performance and provide insights to adjust workload management settings to improve query performance. We have also improved the user interface to view the Query Performance Data on the console.
-
Stream Real-Time Data in Apache Parquet or ORC Format Using Amazon Kinesis Data Firehose
Posted On: May 10, 2018
We have added support for Apache Parquet and Apache ORC formats in Amazon Kinesis Data Firehose, so you can stream real-time data into Amazon S3 for cost-effective storage and analytics.
-
Amazon Redshift announces Dense Compute (DC2) nodes in the AWS GovCloud (US) Region with twice the performance as DC1 at the same price
Posted On: Apr 25, 2018
You can now launch Amazon Redshift clusters on our second-generation Dense Compute (DC2) nodes in the AWS GovCloud (US) Region. DC2 nodes deliver up to twice the performance of the previous-generation DC1 nodes, at the same price.
-
Amazon Redshift Spectrum Launches in Two Additional AWS Regions
Posted On: Mar 15, 2018
Amazon Redshift Spectrum is now available in two additional AWS regions: Asia Pacific (Mumbai) and South America (São Paulo).
-
Amazon Redshift Doubles the Number of Tables You Can Create in a Cluster
Posted On: Mar 15, 2018
Amazon Redshift increases the number of tables you can create to 20,000 for 8xlarge cluster node types. Doubling the number of tables gives you more control and granularity for organizing data in your cluster.
-
Amazon Redshift Spectrum Now Supports Scalar JSON and Ion Data Types
Posted On: Mar 8, 2018
You can now use Amazon Redshift Spectrum to directly query scalar JSON and Ion data types stored in external tables in Amazon S3 - without loading or transforming the data.
-
Amazon Redshift Introduces Late Materialization for Faster Query Processing
Posted On: Dec 21, 2017
Amazon Redshift now uses late materialization to reduce the amount of data scanned and improve performance for queries with predicate filters.
-
Amazon Redshift Spectrum Launches in Three Additional AWS Regions
Posted On: Dec 19, 2017
Amazon Redshift Spectrum is now available in three additional AWS regions: US West (Northern California), EU (London), and Canada (Central).
-
Announcing the AWS EU (Paris) Region
Posted On: Dec 18, 2017
AWS is excited to announce immediate availability of the new AWS EU (Paris) Region. Paris joins Ireland, Frankfurt, and London as the fourth AWS Region in Europe and as the 18th region worldwide, bringing the global total of AWS Availability Zones to 49.
-
Amazon Redshift Spectrum Now Supports DATE Data Type
Posted On: Dec 21, 2017
You can now leverage Amazon Redshift Spectrum to query the DATE data type stored in Optimized Row Columnar (ORC) and text files in Amazon S3. For example, you can use the DATE data type to query clickstream data within specific time windows to gain insights into business trends. To learn more, visit our documentation.
-
Amazon Redshift Introduces Result Caching for Sub-Second Response for Repeat Queries
Posted On: Nov 21, 2017
Amazon Redshift improves performance for repeat queries by caching the result and returning the cached result when queries are re-run.
-
Amazon Redshift Spectrum is Now Available in Four Additional AWS Regions, and Enhances Query Performance in All Available AWS Regions
Posted On: Nov 20, 2017
Amazon Redshift Spectrum is now available in four additional AWS Regions: EU (Frankfurt), Asia Pacific (Sydney), Asia Pacific (Singapore), and Asia Pacific (Seoul). Additionally, large bzip2-compressed files and ORC files are automatically split to enhance query performance in all available AWS Regions.
-
Amazon Redshift Uses Machine Learning to Accelerate Dashboards and Interactive Analysis
Posted On: Nov 20, 2017
Amazon Redshift introduces Short Query Acceleration to speed up execution of short running queries. Short Query Acceleration provides higher performance, faster results, and better predictability of query execution times.
-
Amazon Redshift Allows Regular Users to be Granted Access to All Rows in Selected System Tables
Posted On: Nov 17, 2017
Starting today, Amazon Redshift allows superusers to grant regular users access to all rows in selected system tables and views.
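As a sketch (the user name is hypothetical), a superuser can grant this with the SYSLOG ACCESS attribute:
  ALTER USER analyst SYSLOG ACCESS UNRESTRICTED;   -- analyst now sees all rows in system tables and views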
-
Amazon Redshift Improves Performance by Automatically Hopping Queries Without Restarts
Posted On: Nov 17, 2017
Starting today, Amazon Redshift improves query performance by automatically moving read and write queries to the next matching queue without restarting the moved queries. This enhancement to Workload Management enables more efficient use of resources to improve query performance.
-
Utilization Alerts for Amazon Redshift, Amazon RDS, and Amazon ElastiCache Reservations
Posted On: Nov 16, 2017
AWS Budgets now allows you to set RI utilization alerts for your Amazon Redshift, Amazon RDS, and Amazon ElastiCache reservations, in addition to setting alerts on your Amazon EC2 Reserved Instances.
RI Utilization alerts allow you to set a custom utilization target for one or more reservations pertaining to the same AWS service, and notify you when your reservation utilization drops below that threshold. Reservation utilization tracks the percentage of reserved hours that were used by matching instances, and can be monitored by AWS Budgets at a daily, monthly, quarterly, or yearly level. You can monitor your reservation utilization at an aggregate level (e.g., monthly utilization of your Amazon RDS fleet) or at a granular level of detail (e.g., daily utilization of db.m4.2xlarge instances running in a specific region). From there, you can define up to five notifications. Each notification can be sent to a maximum of ten email subscribers, and can be broadcast to an Amazon Simple Notification Service (Amazon SNS) topic.
To get started with utilization alerts, please access the AWS Budgets dashboard or refer to the Managing your Costs with Budgets user guide.
-
Monitor your Amazon Redshift, Amazon RDS, and Amazon ElastiCache reservations using AWS Cost Explorer’s RI Utilization report
Posted On: Nov 10, 2017
Starting today, you can monitor your Amazon Redshift, Amazon RDS, and Amazon ElastiCache reservations, in addition to your Amazon EC2 Reserved Instances (RI), using the RI Utilization report, available in AWS Cost Explorer.
-
New Quick Start: Build a data lake on the AWS Cloud with Talend Big Data Platform and AWS services
Posted On: Nov 7, 2017
This Quick Start automates the design, setup, and configuration of hardware and software to implement a data lake on the Amazon Web Services (AWS) Cloud. The Quick Start provisions Talend Big Data Platform components and AWS services such as Amazon EMR, Amazon Redshift, Amazon Simple Storage Service (Amazon S3), and Amazon Relational Database Service (Amazon RDS) to build a data lake. It also provides an optional sample dataset and Talend jobs developed by Cognizant Technology Solutions to illustrate big data practices for integrating Apache Spark, Apache Hadoop, Amazon EMR, Amazon Redshift, and Amazon S3 technologies into a data lake implementation.
-
Amazon Redshift Spectrum is now available in Europe (Ireland) and Asia Pacific (Tokyo)
Posted On: Oct 19, 2017
Amazon Redshift Spectrum is now available in the Europe (Ireland) and Asia Pacific (Tokyo) AWS Regions. Redshift Spectrum is a feature of Amazon Redshift that enables you to analyze all of your data in Amazon S3 using standard SQL, with no data loading or transformations needed.
-
Amazon Redshift announces Dense Compute (DC2) nodes with twice the performance as DC1 at the same price
Posted On: Oct 17, 2017
You can now launch Amazon Redshift clusters on our second-generation Dense Compute (DC2) nodes. DC2 nodes are designed for demanding data warehousing workloads that require low latency and high throughput. They feature powerful Intel E5-2686 v4 (Broadwell) CPUs, fast DDR4 memory, and NVMe-based solid state disks (SSDs). We’ve tuned Amazon Redshift to leverage the better CPU, network, and disk on DC2 nodes, providing up to twice the performance of DC1 at the same price. Our DC2.8xlarge instances now provide twice the memory per slice of data and an optimized storage layout with 30% better storage utilization. You can save up to 75% over On-Demand rates by committing to use Amazon Redshift on Reserved Instances for a 1 or 3 year term. Reserved Instance pricing is specific to the node type purchased, and remains in effect until the reservation term ends.
-
Amazon Redshift announces support for uppercase column names
Posted On: Oct 11, 2017
You can now specify whether column names returned by SELECT statements are uppercase or lowercase. With this feature, you can now set a session-based parameter to enable your case-sensitive applications to easily query Amazon Redshift. For more information, see describe_field_name_in_uppercase in the Amazon Redshift Database Developer Guide.
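A minimal sketch of the session-level setting (the table and column in the query are placeholders):
  SET describe_field_name_in_uppercase TO on;   -- column names are now described to clients in uppercase
  SELECT event_id FROM event LIMIT 1;           -- returned as EVENT_ID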
-
Amazon Redshift announces support for LISTAGG DISTINCT
Posted On: Oct 11, 2017
The LISTAGG aggregate function orders the rows for each group in a query according to the ORDER BY expression, then concatenates the values into a single string. With the new DISTINCT argument, you can now eliminate duplicate values from the specified expression before concatenating the values into a single string. For more information, see LISTAGG Function in the Amazon Redshift Database Developer Guide.
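For example (using a hypothetical sales table), DISTINCT removes duplicate seller IDs before the concatenation:
  SELECT listagg(DISTINCT sellerid, ', ') WITHIN GROUP (ORDER BY sellerid) AS sellers
  FROM sales;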
-
Amazon Redshift now supports late-binding views referencing Amazon Redshift and Redshift Spectrum external tables
Posted On: Sep 14, 2017
You can now create a view that spans Amazon Redshift and Redshift Spectrum external tables. With late-binding views, table binding takes place at runtime, providing your users and applications with seamless access to query data. Late-binding views allow you to drop and make changes to referenced tables without affecting the views. With this feature, you can query frequently accessed data in your Amazon Redshift cluster and less-frequently accessed data in Amazon S3, using a single view. Simply archive historical data to Amazon S3, create an external table referencing the relevant files, and then create a view referencing both the Amazon Redshift and the Redshift Spectrum external tables.
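A sketch of such a view (the schema and table names are placeholders); the WITH NO SCHEMA BINDING clause is what makes the view late-binding:
  CREATE VIEW sales_all AS
  SELECT * FROM public.sales_current        -- local Amazon Redshift table
  UNION ALL
  SELECT * FROM spectrum.sales_archive      -- Redshift Spectrum external table backed by Amazon S3
  WITH NO SCHEMA BINDING;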
-
New Quick Start: Build a Data Lake Foundation on the AWS Cloud with AWS Services
Posted On: Sep 8, 2017
This Quick Start deploys a data lake foundation that integrates Amazon Web Services (AWS) Cloud services such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon Kinesis, Amazon Athena, Amazon Elasticsearch Service (Amazon ES), and Amazon QuickSight.
-
Amazon Redshift Introduces SQL Scalar User-Defined Functions
Posted On: Aug 31, 2017
You can now create and run scalar user-defined functions (UDFs) using SQL in Amazon Redshift.
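A minimal sketch, following the general pattern for SQL UDFs (function and table names are illustrative):
  CREATE FUNCTION f_sql_greater (float, float)
  RETURNS float
  STABLE
  AS $$
    SELECT CASE WHEN $1 > $2 THEN $1 ELSE $2 END
  $$ LANGUAGE sql;

  SELECT f_sql_greater(commission, pricepaid * 0.20) FROM sales;   -- example usage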
-
Amazon Redshift introduces new OCTET_LENGTH Function
Posted On: Aug 23, 2017
You can now use OCTET_LENGTH to count the number of bytes (octets) in a specified string.
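For example:
  SELECT octet_length('français') AS bytes,   -- 9, because 'ç' occupies two bytes in UTF-8
         len('français')          AS chars;   -- 8 characters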
-
Amazon Redshift Spectrum now supports ORC and Grok file formats
Posted On: Aug 23, 2017
You can now leverage Amazon Redshift Spectrum to query data stored in Optimized Row Columnar (ORC) and Grok file formats. Amazon Redshift Spectrum also supports multiple other open file formats, including Avro, CSV, Parquet, RCFile, RegexSerDe, SequenceFile, TextFile, and TSV. For more information, see supported file formats in the Amazon Redshift Database Developer Guide.
-
Amazon Redshift Spectrum Now Integrates with AWS Glue
Posted On: Aug 15, 2017
You can now use the AWS Glue Data Catalog as the metadata repository for Amazon Redshift Spectrum. The AWS Glue Data Catalog provides a central metadata repository for all of your data assets regardless of where they are located.
-
Amazon Redshift announces enhanced support for viewing external Redshift Spectrum tables
Posted On: Aug 11, 2017
Using the new Amazon Redshift ODBC and JDBC drivers, you can view external Redshift Spectrum tables in your existing SQL client and BI tools. Download the new drivers from the Connect client tab on the Amazon Redshift Management Console.
-
Amazon Redshift announces Federated Authentication with Single Sign-On
Posted On: Aug 11, 2017
You can now use the new Amazon Redshift database authentication to simplify credential management for database users. You can configure Amazon Redshift to automatically generate temporary database credentials based on permissions granted through an AWS IAM policy. You can leverage your corporate directory and a third-party SAML 2.0 identity provider, such as ADFS, PingFederate, or Okta, to enable your users to easily access their Amazon Redshift clusters using their corporate user names, without managing database users and passwords. Furthermore, database users are automatically created at their first login based on their corporate privileges. The new Amazon Redshift ODBC and JDBC drivers support Windows Integrated Authentication for a simplified client experience. This feature is supported starting with Amazon Redshift ODBC driver version 1.3.6.1000 and JDBC driver version 1.2.7.1003. For more information, see Using IAM Authentication to Generate Database User Credentials in the Amazon Redshift Database Developer Guide.
-
Amazon QuickSight adds support for Amazon Redshift Spectrum
Posted On: Jun 1, 2017
Starting today, Amazon QuickSight customers can leverage Amazon Redshift Spectrum to visualize and analyze vast amounts of unstructured data in their Amazon S3 “data lake” – without having to load or transform any data. In addition, customers can now visualize combined data sets that include frequently accessed data stored in Amazon Redshift and bulk data sets stored cost-effectively in Amazon S3 using the same SQL syntax of Amazon Redshift.
-
AWS Schema Conversion Tool Exports from SQL Server to Amazon Redshift
Posted On: May 11, 2017
AWS Schema Conversion Tool (SCT) can now extract data from a Microsoft SQL Server data warehouse for direct import into Amazon Redshift. This follows the recently announced capability to convert SQL Server data warehouse schemas.
-
Amazon Redshift announces query monitoring rules (QMR), a new feature that automates workload management, and a new function to calculate percentiles
Posted On: Apr 21, 2017
You can use the new Amazon Redshift query monitoring rules feature to set metrics-based performance boundaries for workload management (WLM) queues, and specify what action to take when a query goes beyond those boundaries. For example, for a queue that’s dedicated to short running queries, you might create a rule that aborts queries that run for more than 60 seconds. To track poorly designed queries, you might have another rule that logs queries that contain nested loops. We also provide pre-defined rule templates in the Amazon Redshift management console to get you started.
-
Introducing Amazon Redshift Spectrum: Run Amazon Redshift Queries directly on Datasets as Large as an Exabyte in Amazon S3
Posted On: Apr 19, 2017
Today we announced the general availability of Amazon Redshift Spectrum, a new feature that allows you to run SQL queries against exabytes of data in Amazon Simple Storage Service (Amazon S3). With Redshift Spectrum, you can extend the analytic power of Amazon Redshift beyond data stored on local disks in your data warehouse to query vast amounts of unstructured data in your Amazon S3 “data lake” — without having to load or transform any data. Redshift Spectrum applies sophisticated query optimization, scaling processing across thousands of nodes so results are fast – even with large data sets and complex queries.
-
AWS Schema Conversion Tool Exports from Oracle and Teradata Data Warehouses to Amazon Redshift
Posted On: Feb 16, 2017
We are pleased to announce that the AWS Schema Conversion Tool (SCT) can now extract data from Teradata and Oracle data warehouses for direct import into Amazon Redshift. Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse that was designed for the cloud from the ground up. AWS SCT will run an analysis of your data warehouse, automate the schema conversion, apply the schema to the Amazon Redshift target, and extract your warehouse data, regardless of volume. You can use Amazon S3 or Amazon Snowball to move your exports to the cloud, where Amazon Redshift can natively import the data for use.
-
Amazon Redshift now supports encrypting unloaded data using Amazon S3 server-side encryption with AWS KMS keys
Posted On: Feb 10, 2017
The Amazon Redshift UNLOAD command now supports Amazon S3 server-side encryption using an AWS KMS key. The UNLOAD command unloads the results of a query to one or more files on Amazon S3. You can let Amazon Redshift automatically encrypt your data files using Amazon S3 server-side encryption, or you can specify a symmetric encryption key that you manage. With this release, you can use Amazon S3 server-side encryption with a key managed by AWS KMS. In addition, the COPY command loads Amazon S3 server-side encrypted data files without requiring you to provide the key. For more information, see COPY and UNLOAD in the Amazon Redshift Database Developer Guide.
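A sketch of an UNLOAD using a customer-managed KMS key (the bucket, role ARN, and key ID are placeholders):
  UNLOAD ('SELECT * FROM sales')
  TO 's3://mybucket/unload/sales_'
  IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
  KMS_KEY_ID '1234abcd-12ab-34cd-56ef-1234567890ab'
  ENCRYPTED;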
-
Amazon Redshift announces improved Workload Management console experience
Posted On: Jan 26, 2017
Amazon Redshift Workload Management (WLM) enables you to flexibly manage priorities within workloads so that short, fast-running queries don't get stuck in queues behind long-running queries. Today we are announcing an improved WLM experience in the Amazon Redshift console. The new features include in-line validations, simpler error messages, and more so you can easily create WLM queues and manage workloads. For more information, see Workload Management in the Amazon Redshift Database Developer Guide.
-
Amazon Redshift now supports the Zstandard high data compression encoding and two new aggregate functions
Posted On: Jan 20, 2017
Amazon Redshift now supports Zstandard (ZSTD) column compression encoding, which delivers better data compression, thereby reducing the amount of storage and I/O needed. With the addition of ZSTD, Amazon Redshift now offers seven compression encodings to choose from depending on your dataset.
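For example, ZSTD can be applied per column at table creation (the table and columns are illustrative):
  CREATE TABLE event_log (
    event_id   bigint        ENCODE zstd,
    event_type varchar(64)   ENCODE zstd,
    payload    varchar(4096) ENCODE zstd
  );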
-
Amazon Kinesis Firehose can now prepare and transform streaming data before loading it to data stores
Posted On: Dec 21, 2016
You can now configure Amazon Kinesis Firehose to prepare your streaming data before it is loaded to data stores. With this new feature, you can easily convert raw streaming data from your data sources into formats required by your destination data stores, without having to build your own data processing pipelines.
-
Amazon Redshift now supports Python UDF logging module
Posted On: Dec 9, 2016
You can now use the standard Python logging module to log error and warning messages from Amazon Redshift user-defined functions (UDFs). You can then query the SVL_UDF_LOG system view to retrieve the messages logged from your UDFs and troubleshoot your UDFs easily.
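A sketch of a UDF that logs a warning (the function and price-parsing logic are hypothetical; the final query assumes the message and created columns of SVL_UDF_LOG):
  CREATE OR REPLACE FUNCTION f_parse_price (price varchar)
  RETURNS float
  VOLATILE
  AS $$
      import logging
      logger = logging.getLogger()
      try:
          return float(price)
      except ValueError:
          logger.warning('could not parse price: %s' % price)
          return None
  $$ LANGUAGE plpythonu;

  SELECT funcname, message FROM svl_udf_log ORDER BY created DESC LIMIT 10;   -- inspect logged messages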
-
Announcing the AWS Canada (Central) Region
Posted On: Dec 8, 2016
AWS is excited to announce immediate availability of the new Canada (Central) Region. Canada joins Northern Virginia, Ohio, Oregon, Northern California, and AWS GovCloud as the sixth AWS Region in North America and as the fifteenth worldwide, bringing the total number of AWS Availability Zones to 40 globally.
-
Record and govern Amazon Redshift configurations with AWS Config
Posted On: Dec 7, 2016
You can now record configuration changes to your Amazon Redshift clusters with AWS Config. The detailed configuration recorded by AWS Config includes changes made to Amazon Redshift clusters, cluster parameter groups, cluster security groups, cluster snapshots, cluster subnet groups, and event subscriptions. In addition, you can run two new managed Config Rules to check whether your Amazon Redshift clusters have the appropriate configuration and maintenance settings. These checks include verifying that your cluster database is encrypted, logging is enabled, snapshot data retention period is set appropriately, and much more.
-
Amazon Redshift introduces multibyte (UTF-8) character support for database object names and updated ODBC/JDBC
Posted On: Nov 18, 2016
You can now use multibyte (UTF-8) characters in Amazon Redshift table, column, and other database object names. For more information, see Names and Identifiers in the Amazon Redshift Database Developer Guide. To support this new feature, we have updated the Amazon Redshift ODBC and JDBC drivers. The driver updates include support for multibyte characters and other enhancements. For details, see Amazon Redshift JDBC Release Notes and Amazon Redshift ODBC Release Notes.
-
Amazon Redshift announces new data compression, connection management, and data loading features
Posted On: Nov 11, 2016
We are excited to announce four new Amazon Redshift features that improve data compression, connection management, and data loading.
-
Amazon Redshift now available in South America (São Paulo) Region
Posted On: Oct 31, 2016
We are excited to announce that Amazon Redshift is now available in the South America (São Paulo) Region.
-
Announcing the AWS US East (Ohio) Region
Posted On: Oct 17, 2016
AWS is excited to announce immediate availability of the new US East (Ohio) Region. Ohio joins Northern Virginia, Oregon, Northern California, and AWS GovCloud as the fifth AWS Region in North America and as the fourteenth worldwide, bringing the total number of AWS Availability Zones to 38 globally.
-
Amazon Redshift introduces new data type to support time zones in time stamps
Posted On: Sep 30, 2016
You can now use time zones as part of time stamps in Amazon Redshift. The new TIMESTAMPTZ data type allows you to input timestamp values that include a time zone. Amazon Redshift automatically converts timestamps to Coordinated Universal Time (UTC) and stores the UTC values. Also, the COPY command now recognizes timestamp values with time zones in the source data and automatically converts them to UTC. You can retrieve and display timestamps in Amazon Redshift by setting your preferred time zone at the session level, user level or client connection level.
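A brief sketch (the table, values, and session time zone are illustrative):
  CREATE TABLE web_events (
    event_id    bigint,
    occurred_at timestamptz            -- stored internally as UTC
  );

  INSERT INTO web_events VALUES (1, '2016-09-30 08:15:00-07');
  SET timezone TO 'America/New_York';  -- display values in a session-level time zone
  SELECT occurred_at FROM web_events;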
-
Amazon Redshift now supports Enhanced VPC Routing
Posted On: Sep 15, 2016
You can now use Amazon Redshift’s Enhanced VPC Routing to force all of your COPY and UNLOAD traffic to go through your Amazon Virtual Private Cloud (VPC). Enhanced VPC Routing supports the use of standard VPC features such as VPC Endpoints, security groups, network ACLs, managed NAT and internet gateways, enabling you to tightly manage the flow of data between your Amazon Redshift cluster and all of your data sources. In particular, when your Amazon Redshift cluster is on a private subnet and you enable Enhanced VPC Routing, all the COPY and UNLOAD traffic between your cluster and Amazon S3 will be restricted to your VPC. You can also add a policy to your VPC endpoint to restrict unloading data only to a specific S3 bucket in your account, and monitor all COPY and UNLOAD traffic using VPC flow logs.
-
AWS Cost and Usage Report Data is Now Easy to Upload Directly into Amazon Redshift and Amazon QuickSight
Posted On: Aug 18, 2016
AWS Cost and Usage Report data is now available for easy and quick upload directly into Amazon Redshift and Amazon QuickSight.
-
Amazon Redshift improves throughput performance up to 2X
Posted On: May 25, 2016
You can now get up to 60% higher query throughput (as measured by the standard TPC-DS benchmark at 3 TB scale) in Amazon Redshift as a result of improved memory allocation, which reduces the number of queries spilled to disk. This new improvement is available in version 1.0.1056 and above. Combined with the I/O and commit logic enhancement released in version 1.0.1012, it delivers up to 2 times faster performance for complex queries that spill to disk, and for queries like SELECT INTO TEMP TABLE that create temporary tables.
-
Amazon Redshift UNION ALL queries and VACUUM commands now run up to 10x faster
Posted On: May 24, 2016
UNION ALL performance improvement: Business analytics often involves time-series data, which is data generated or aggregated daily, weekly, monthly or at other intervals. By storing time-series data in separate tables—one table for each time interval—and using a UNION ALL view over those tables, you can avoid potentially costly table updates. Amazon Redshift now runs UNION ALL queries up to 10 times faster if they involve joins, and up to 2 times faster if they don’t involve any joins. This performance improvement is automatic, requires no action on your part, and is available in version 1.0.1057 and above. For more information about UNION ALL views and time-series tables, see Using Time-Series Tables in the Amazon Redshift Database Developer Guide.
-
AWS Database Migration Service Now Supports Migrations to Amazon Redshift
Posted On: May 4, 2016
AWS Database Migration Service now supports Amazon Redshift as a migration target. This allows you to stream data to Amazon Redshift from any of the supported sources including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, and SQL Server, enabling consolidation and easy analysis of data in the petabyte-scale data warehouse.
-
Amazon Redshift announces Enhancements to Data Loading, Security, and SAS Integration
Posted On: Apr 29, 2016
BACKUP NO option when creating tables: You can now use the BACKUP NO option with the CREATE TABLE command to improve data loading and cluster performance. For tables, such as staging tables, which contain only transient and pre-processed data, specify BACKUP NO to save processing time when creating snapshots and restoring from snapshots. This option also reduces storage space used by snapshots.
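For example, a transient staging table might be excluded from snapshots like this (the table definition is illustrative):
  CREATE TABLE stage_orders (
    order_id   bigint,
    order_date date,
    amount     decimal(10,2)
  )
  BACKUP NO;   -- omit this table from automated and manual snapshots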
-
Easily Create New Amazon Redshift Datasources in Amazon ML by Copying Existing Datasource Settings
Posted On: Apr 11, 2016
You now have the ability to quickly and easily create new Amazon Redshift datasources in Amazon Machine Learning (Amazon ML) by copying settings from an existing Amazon Redshift datasource. A new option on the Amazon ML console allows you to select an existing Redshift datasource and copy its Redshift cluster name, database name, IAM role, SQL query, and staging data location to automatically populate these fields in the Create Datasource wizard. You can modify the settings before the new datasource is created, for example, to change the SQL query or to specify a different IAM role to access the cluster.
-
Amazon Redshift is now available in China (Beijing) Region
Posted On: Apr 6, 2016
We are excited to announce that Amazon Redshift is now available in the AWS China (Beijing) Region.
-
Amazon Redshift now supports using IAM roles with COPY and UNLOAD commands
Posted On: Mar 29, 2016
You can now assign one or more AWS Identity and Access Management (IAM) roles to your Amazon Redshift cluster for data loading and exporting. Amazon Redshift assumes the assigned IAM roles when you load data into your cluster using the COPY command or export data from your cluster using the UNLOAD command. It uses the resulting credentials to access other AWS services, such as Amazon S3, securely during these operations. IAM roles enhance security of your cluster and simplify data loading and exporting by eliminating the need for you to embed AWS access credentials within SQL commands. They also enable your cluster to periodically re-assume an IAM role during long-running operations. Handling of data encryption keys for COPY and UNLOAD commands remains unchanged.
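A sketch of both commands using a cluster-attached role (the role ARN, bucket paths, and table are placeholders):
  COPY sales
  FROM 's3://mybucket/sales/2016/'
  IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
  DELIMITER '|';

  UNLOAD ('SELECT * FROM sales')
  TO 's3://mybucket/export/sales_'
  IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole';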
-
Amazon Machine Learning Console Now Makes It Easier to Connect to Amazon Redshift
Posted On: Mar 21, 2016
You can now more easily set up or select your Identity and Access Management (IAM) role when connecting to an Amazon Redshift cluster from the Amazon Machine Learning (Amazon ML) console. To streamline the process of setting up your connection to Amazon Redshift, Amazon ML now pre-populates an interactive drop-down menu of existing IAM roles that have an Amazon ML managed policy for Amazon Redshift, and other IAM roles that you might prefer. From the Amazon ML console, you have the option of dynamically creating a new IAM role, enabling you to quickly connect to your Amazon Redshift cluster.
-
Amazon Redshift now supports table level restore
Posted On: Mar 10, 2016
You can now restore a single table from an Amazon Redshift snapshot instead of restoring the entire cluster. This new feature enables you to restore a table that you might have dropped accidentally, or reconcile data from a table that you might have updated or deleted unintentionally. To restore a table from a snapshot, simply navigate to the “Table Restore” tab for a cluster and click on the “Restore Table” button.
-
Amazon Redshift is now available in US West (N. California) Region
Posted On: Feb 25, 2016
We are excited to announce that Amazon Redshift is now available in the AWS US West (N. California) Region.
-
Improved Amazon Redshift Data Schema Conversion from Amazon Machine Learning Console
Posted On: Feb 9, 2016
You can now use the Amazon Machine Learning (Amazon ML) console to retrieve data from Amazon Redshift with improved data schema conversion functionality. Data types supported by Amazon ML are not equivalent to Amazon Redshift’s supported data types, requiring a schema conversion when creating an Amazon ML datasource. Using the Amazon ML console, you will now be able to take advantage of more accurate rules for this schema conversion process, based on the data type information provided by Amazon Redshift. For more information about using Amazon Redshift with Amazon ML, please reference the documentation in the Amazon ML developer guide.
-
Amazon Redshift Now Supports Appending Rows to Tables and Exporting Query Results to BZIP2-compressed Files
Posted On: Feb 8, 2016
Append rows to a target table: Using the ALTER TABLE APPEND command, you can now append rows to a target table. When you issue this command, Amazon Redshift moves the data from the source table to matching columns in the target table. ALTER TABLE APPEND is usually much faster than a similar CREATE TABLE AS or INSERT INTO operation because it moves the data instead of duplicating it. This could be particularly useful in cases where you load data into a staging table, process it, and then copy the results into a production table. For more details, refer to the ALTER TABLE APPEND command.
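Two short sketches (table names, bucket path, and role ARN are placeholders): the first moves staged rows into a production table, the second exports query results as BZIP2-compressed files.
  ALTER TABLE sales APPEND FROM stage_sales;   -- rows are moved, not copied

  UNLOAD ('SELECT * FROM sales WHERE qtysold > 4')
  TO 's3://mybucket/unload/sales_'
  IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
  BZIP2;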
-
Amazon Redshift supports automatic queue hopping for timed-out queries
Posted On: Jan 7, 2016
You can now configure Amazon Redshift Workload Management (WLM) settings to move timed-out queries automatically to the next matching queue and restart them. The matching queue has the same Query Group or User Group as the original queue. Please see the WLM Queue Hopping section of our documentation for more detail.
-
Amazon Redshift announces tag-based permissions, default access privileges, and BZIP2 compression format
Posted On: Dec 10, 2015
Tag-based, resource-level permissions and the ability to apply default access privileges to new database objects make it easier to manage access control in Amazon Redshift. In addition, you can now use the Amazon Redshift COPY command to load data in BZIP2 compression format.
-
Amazon Redshift now supports modifying cluster accessibility and specifying sort order for NULL values
Posted On: Nov 20, 2015
We are pleased to announce two new features for Amazon Redshift, making it easier for you to control access to your clusters and expanding query capabilities.
-
Amazon Redshift now supports Scalar User-Defined Functions in Python
Posted On: Sep 11, 2015
You can now create and run scalar user-defined functions (UDFs) in Amazon Redshift. With scalar UDFs, you can perform analytics that were previously impossible or too complex for plain SQL.
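A small sketch of a scalar Python UDF (the function is illustrative; Redshift UDFs run Python 2.7, so urlparse comes from the standard library):
  CREATE FUNCTION f_hostname (url varchar)
  RETURNS varchar
  IMMUTABLE
  AS $$
      from urlparse import urlparse
      return urlparse(url).hostname
  $$ LANGUAGE plpythonu;

  SELECT f_hostname('https://aws.amazon.com/redshift/');   -- returns aws.amazon.com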
-
Amazon Redshift now supports dynamic workload management and list aggregate functions
Posted On: Aug 3, 2015
We are excited to announce two new features for Amazon Redshift that make it easier to manage your clusters and expand query capabilities.
-
Amazon Redshift now supports cross-region backups for KMS-encrypted clusters
Posted On: Jul 28, 2015
You can now configure Amazon Redshift to automatically copy snapshots of your KMS-encrypted clusters to another region of your choice. By storing a copy of your snapshots in a secondary region, you have the ability to restore your cluster from recent data if anything affects the primary region. For details on how to enable automatic cross-region backups for your KMS-encrypted clusters, refer to the Snapshots section of the Amazon Redshift management guide.
Amazon Redshift makes it easy to launch a high-performance, petabyte-scale data warehouse for less than $1000/TB/year. Get started with a free 2-month trial.
-
Amazon Redshift now supports AVRO ingestion
Posted On: Jul 13, 2015
We are excited to announce that you can now ingest AVRO files directly into Amazon Redshift. Use the COPY command to ingest data in AVRO format in parallel from Amazon S3, Amazon EMR, and remote hosts (SSH clients). For details, refer to the data ingestion section of the documentation.
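A sketch of an Avro load (the bucket path and role ARN are placeholders, and authorization is shown with an IAM role for brevity; 'auto' maps Avro fields to columns by name):
  COPY mytable
  FROM 's3://mybucket/avro/'
  IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
  FORMAT AS AVRO 'auto';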
Amazon Redshift makes it easy to launch a high-performance, petabyte-scale data warehouse for less than $1000/TB/year. Get started with a free 2-month trial.
-
Amazon Redshift Adds New Dense Storage (DS2) Instances and Reserved Node Payment Options
Posted On: Jun 9, 2015
You can now launch Amazon Redshift clusters on second-generation Dense Storage (DS2) instances. DS2 has twice the memory and compute power of its Dense Storage predecessor, DS1 (formerly DW1), and the same storage capacity. DS2 also supports Enhanced Networking and provides 50% more disk throughput than DS1. On average, DS2 provides 50% better performance than DS1, but is priced the same as DS1. To move from DS1 to DS2, simply restore a DS2 cluster from a snapshot of a DS1 cluster of the same size.
-
Quickly Filter Data in Amazon Redshift using Interleaved Sorting
Posted On: May 11, 2015
You can use Interleaved Sort Keys to quickly filter data without the need for indices or projections in Amazon Redshift. A table with interleaved keys arranges your data so each sort key column has equal importance. While Compound Sort Keys are more performant if you filter on the leading sort key columns, interleaved sort keys provide fast filtering no matter which sort key columns you specify in your WHERE clause. To create an interleaved sort, simply define your sort keys as INTERLEAVED in your CREATE TABLE statement.
The performance benefit of interleaved sorting increases with table size, and is most effective with highly selective queries that filter on multiple columns. For example, assume your table contains 1,000,000 blocks (1 TB per column) with an interleaved sort key of both customer ID and product ID. You will scan 1,000 blocks when you filter on a specific customer or a specific product, a 1000x increase in query speed compared to the unsorted case. If you filter on both customer and product, you will only need to scan a single block.
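A sketch of a table definition for the customer/product example above (the column list is illustrative):
  CREATE TABLE sales_history (
    customer_id bigint,
    product_id  bigint,
    saletime    timestamp,
    amount      decimal(10,2)
  )
  INTERLEAVED SORTKEY (customer_id, product_id);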
The interleaved sorting feature will be deployed in every region over the next seven days. The new cluster version will be 1.0.921.
For more information, please see our AWS Blog Post on Interleaved Sorting and review our documentation on Best Practices for Designing Tables.
-
Introducing Amazon Mobile Analytics Auto Export To Amazon Redshift
Posted On: Mar 3, 2015
You can now use the Amazon Mobile Analytics Auto Export feature to automatically export your app event data to Amazon Redshift. With your app event data in Amazon Redshift, you can run SQL queries, build custom dashboards, and gain deep insights about your application usage. Additionally, you can use your existing business intelligence and data warehouse tools to report on your app event data.
You can turn on the Auto Export feature from your Amazon Mobile Analytics Console. To learn more, visit our webpage and check out the documentation.
-
Amazon Redshift Announces Custom ODBC/JDBC Drivers and Query Visualization in the Console
Posted On: Feb 26, 2015
Amazon Redshift’s new custom ODBC and JDBC drivers make it easier and faster to connect to and query Amazon Redshift from your Business Intelligence (BI) tool of choice. Amazon Redshift’s JDBC driver features JDBC 4.1 and 4.0 support, a 35% performance gain over open source options, and improved memory management. Amazon Redshift’s ODBC drivers feature ODBC 3.8 support, a 6% performance gain, and better Unicode data and password handling, among other benefits. Additionally, AWS partners Informatica, MicroStrategy, Pentaho, Qlik, SAS, and Tableau will be supporting these Redshift drivers with their solutions. For more information, please see Connecting to a Cluster in our documentation. If you need to distribute these drivers to your customers or other third parties, please contact us at redshift-pm@amazon.com so we can arrange an appropriate license.
-
Amazon Redshift is Now Available in the AWS GovCloud (US) Region
Posted On: Nov 18, 2014
We are delighted to announce that Amazon Redshift is now available in the AWS GovCloud (US) Region.
-
Amazon Redshift adds Four New Features and Sixteen New SQL Commands and Functions
Posted On: Nov 4, 2014
Amazon Redshift has added a number of features this week and over the past month, including the ability to tag resources and cancel queries from the console, enhancements to data load and unload, and sixteen new SQL commands and functions. Amazon Redshift is a fast, easy-to-use, petabyte-scale data warehouse service that costs as little as $1,000/TB/Year. To get started for free with Amazon Redshift and partner tools, please see our Free Trial page.
-
Amazon Redshift Free Trial and Price Reductions in Asia Pacific
Posted On: Jul 1, 2014
AWS is delighted to announce a free trial and reserved instance price reductions for Amazon Redshift, a fast, fully managed, petabyte-scale data warehouse for as little as $1,000/TB/Year. You can now try Amazon Redshift's SSD node for free for two months. What's more, a number of Business Intelligence and Data Integration partners are offering free trials of their own to help you ingest and report on your data in Amazon Redshift. Amazon Redshift has also reduced three-year Reserved Instance prices in Japan, Singapore, and Sydney by over 25%.
Two Month Free Trial
If you are new to Amazon Redshift, you may be eligible for 750 free hours per month for two months to try the dw2.large node - enough hours to continuously run one node with 160GB of compressed SSD storage. You can also build clusters with multiple dw2.large nodes to test larger data sets, which will consume your free hours more quickly. Please see the Amazon Redshift Free Trial Page for more details.
Price Reductions in Asia Pacific
You can now purchase a three-year reserved dw1.8xlarge instance in Japan for $30,000 upfront and $1.326 per hour, down 28% from $30,400 upfront and $2.288 hourly. A three-year reserved dw1.8xlarge instance in Singapore and Sydney now costs $32,000 upfront and $1.462 per hour, down 26% from $32,000 upfront and $2.40 hourly. The dw1.xlarge instance price has also decreased and continues to be one eighth the cost of dw1.8xlarge. Please see the Amazon Redshift Pricing Page for more details.
To learn more about Amazon Redshift, please visit our detail page and getting started page. To find out about recently released features, please visit the Developer Guide and the Management Guide history. To receive alerts when new features are announced, please subscribe to our feature announcements thread in the Amazon Redshift forum.
-
Amazon Redshift Announces Cross Region Ingestion and Improved Query Functionality
Posted On: Jun 29, 2014
We are delighted to announce cross-region ingestion and improved query functionality for Amazon Redshift, a fast, easy-to-use, petabyte-scale data warehouse service in the cloud that costs as little as $1,000/TB/Year. Customers can now COPY data directly into Amazon Redshift from an Amazon S3 bucket or Amazon DynamoDB table that is not in the same region as the Amazon Redshift cluster. We've also launched new numeric SQL functions, greatest and least, as well as new window functions, percentile_cont and percentile_disc, for more advanced analytics. These features will be rolling out to all new and existing Amazon Redshift customers over the next week, during maintenance windows.
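A sketch of a cross-region load using the REGION option (the bucket, region, and table are placeholders, and authorization is shown with an IAM role for brevity):
  COPY sales
  FROM 's3://my-tokyo-bucket/sales/'
  IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
  REGION 'ap-northeast-1'
  DELIMITER '|';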
To get started with Amazon Redshift, please visit our detail page. To learn more about recently released features, please visit the Developer Guide and the Management Guide history. To receive alerts when new features are announced, please subscribe to our feature announcements thread in the Amazon Redshift forum.