Amazon Web Services (AWS) What's New — Wed, 22 Nov 2017 06:03:41 +0000

Amazon Redshift Introduces Result Caching for Sub-Second Response for Repeat Queries<p><a href="/redshift/">Amazon Redshift</a> improves performance for repeat queries by caching the result and returning the cached result when queries are re-run.</p>Tue, 21 Nov 2017 21:19:08 +0000

Amazon Redshift Spectrum is Now Available in Four Additional AWS Regions, and Enhances Query Performance in All Available AWS Regions<p>Amazon Redshift Spectrum is now available in four additional AWS Regions: EU (Frankfurt), Asia Pacific (Sydney), Asia Pacific (Singapore), and Asia Pacific (Seoul). Additionally, large bzip2-compressed files and ORC files are automatically split to enhance query performance in all available AWS Regions.</p>Mon, 20 Nov 2017 19:42:11 +0000

Amazon Redshift Uses Machine Learning to Accelerate Dashboards and Interactive Analysis<p><a href="/redshift/">Amazon Redshift</a> introduces Short Query Acceleration to speed up execution of short-running queries. Short Query Acceleration provides higher performance, faster results, and better predictability of query execution times.</p>Mon, 20 Nov 2017 17:52:49 +0000

Amazon Redshift Allows Regular Users to be Granted Access to All Rows in Selected System Tables<p>Starting today, <a href="">Amazon Redshift</a> allows superusers to grant regular users access to all rows in selected system tables and views.</p>Sat, 18 Nov 2017 02:35:19 +0000

Amazon Redshift Improves Performance by Automatically Hopping Queries Without Restarts<p>Starting today, <a href="">Amazon Redshift</a> improves query performance by automatically moving read and write queries to the next matching queue without restarting the moved queries.
This enhancement to Workload Management enables more efficient use of resources to improve query performance.</p>Sat, 18 Nov 2017 02:29:22 +0000

RI Utilization Alerts for Amazon Redshift, Amazon RDS, and Amazon ElastiCache Reservations<p>AWS Budgets now allows you to set RI utilization alerts for your Amazon Redshift, Amazon RDS, and Amazon ElastiCache reservations, in addition to setting alerts on your Amazon EC2 Reserved Instances.</p> <p>RI utilization alerts let you set a custom utilization target for one or more reservations for the same AWS service, and notify you when your reservation utilization drops below that threshold. Reservation utilization tracks the percentage of reserved hours used by matching instances, and can be monitored by AWS Budgets at a daily, monthly, quarterly, or yearly level. You can monitor your reservation utilization at an aggregate level (e.g., monthly utilization of your Amazon RDS fleet) or at a granular level (e.g., daily utilization of db.m4.2xlarge instances running in a specific region). From there, you can define up to five notifications.
Each notification can be sent to a maximum of ten email subscribers, or broadcast to an Amazon Simple Notification Service (Amazon SNS) topic.</p> <p>To get started with utilization alerts, access the <a href="">AWS Budgets</a> dashboard or refer to the <a href="">Managing your Costs with Budgets</a> user guide.</p>Thu, 16 Nov 2017 20:14:41 +0000

Monitor your Amazon Redshift, Amazon RDS, and Amazon ElastiCache reservations using AWS Cost Explorer's RI Utilization report<p>Starting today, you can monitor your Amazon Redshift, Amazon RDS, and Amazon ElastiCache reservations, in addition to your Amazon EC2 Reserved Instances (RIs), using the RI Utilization report, available in AWS Cost Explorer.</p>Fri, 10 Nov 2017 19:26:18 +0000

Quick Start: Build a data lake on the AWS Cloud with Talend Big Data Platform and AWS services<p>This Quick Start automates the design, setup, and configuration of hardware and software to implement a data lake on the Amazon Web Services (AWS) Cloud. The Quick Start provisions Talend Big Data Platform components and AWS services such as Amazon EMR, Amazon Redshift, Amazon Simple Storage Service (Amazon S3), and Amazon Relational Database Service (Amazon RDS) to build a data lake.
It also provides an optional sample dataset and Talend jobs developed by Cognizant Technology Solutions to illustrate big data practices for integrating Apache Spark, Apache Hadoop, Amazon EMR, Amazon Redshift, and Amazon S3 technologies into a data lake implementation.</p>Tue, 07 Nov 2017 23:37:23 +0000

Amazon Redshift Spectrum is now available in Europe (Ireland) and Asia Pacific (Tokyo)<p>Amazon Redshift Spectrum is now available in the Europe (Ireland) and Asia Pacific (Tokyo) AWS Regions. Redshift Spectrum is a feature of Amazon Redshift that enables you to analyze all of your data in Amazon S3 using standard SQL, with no data loading or transformations needed.</p>Thu, 19 Oct 2017 21:51:25 +0000

Amazon Redshift announces Dense Compute (DC2) nodes with twice the performance of DC1 at the same price<p>You can now launch Amazon Redshift clusters on our second-generation Dense Compute (DC2) nodes. DC2 is designed for demanding data warehousing workloads that require low latency and high throughput. DC2 nodes feature powerful Intel E5-2686 v4 (Broadwell) CPUs, fast DDR4 memory, and NVMe-based solid state disks (SSDs). We've tuned Amazon Redshift to leverage the better CPU, network, and disk on DC2 nodes, providing up to twice the performance of DC1 <a href="/redshift/pricing/">at the same price</a>. Our DC2.8xlarge instances now provide twice the memory <a href="" target="_blank">per slice of data</a> and an optimized storage layout with 30% better storage utilization.</p>Tue, 17 Oct 2017 22:20:21 +0000

Amazon Redshift announces support for uppercase column names<p>You can now specify whether column names returned by SELECT statements are uppercase or lowercase.
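As a minimal sketch of the uppercase-column-names setting (the parameter name comes from the announcement's documentation link; the table and columns here are hypothetical):

```sql
-- Return column names in uppercase for this session
SET describe_field_name_in_uppercase TO on;

-- Column metadata for this query is now reported as SKU, QTY, ...
SELECT sku, qty FROM sales LIMIT 10;

-- Revert to the default lowercase behavior
SET describe_field_name_in_uppercase TO off;
```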
With this feature, you can set a session-based parameter to enable your case-sensitive applications to easily query Amazon Redshift. For more information, see <a href="" target="_blank">describe_field_name_in_uppercase</a> in the Amazon Redshift Database Developer Guide.</p>Wed, 11 Oct 2017 22:20:41 +0000

Amazon Redshift announces support for LISTAGG DISTINCT<p>The LISTAGG aggregate function orders the rows for each group in a query according to the ORDER BY expression, then concatenates the values into a single string. With the new DISTINCT argument, you can eliminate duplicate values from the specified expression before concatenating the values into a single string. For more information, see <a href="" target="_blank">LISTAGG Function</a> in the Amazon Redshift Database Developer Guide.</p>Wed, 11 Oct 2017 22:20:36 +0000

Amazon Redshift now supports late-binding views referencing Amazon Redshift and Redshift Spectrum external tables<p>You can now create a view that spans Amazon Redshift and Redshift Spectrum external tables. With late-binding views, table binding takes place at runtime, providing your users and applications with seamless access to query data. Late-binding views allow you to drop and make changes to referenced tables without affecting the views. With this feature, you can query frequently accessed data in your Amazon Redshift cluster and less frequently accessed data in Amazon S3, using a single view.
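A hedged sketch of such a view, assuming a hypothetical local table `public.sales` and a Spectrum external table `spectrum.sales_archive`:

```sql
-- Late-binding view spanning a local table and an external table;
-- WITH NO SCHEMA BINDING defers table binding until query time
CREATE VIEW sales_all AS
SELECT sale_id, sale_date, amount FROM public.sales
UNION ALL
SELECT sale_id, sale_date, amount FROM spectrum.sales_archive
WITH NO SCHEMA BINDING;
```

Because binding is deferred, the referenced tables can be dropped and recreated without invalidating the view.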
Simply archive historical data to Amazon S3, <a href="">create an external table</a> referencing the relevant files, and then <a href="">create a view</a> referencing both the Amazon Redshift and the Redshift Spectrum external tables.</p>Thu, 14 Sep 2017 17:52:31 +0000

Quick Start: Build a Data Lake Foundation on the AWS Cloud with AWS Services<p>This Quick Start deploys a data lake foundation that integrates Amazon Web Services (AWS) Cloud services such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon Kinesis, Amazon Athena, Amazon Elasticsearch Service (Amazon ES), and Amazon QuickSight.</p>Fri, 08 Sep 2017 19:27:36 +0000

Amazon Redshift Introduces SQL Scalar User-Defined Functions<p>You can now create and run scalar user-defined functions (UDFs) using SQL in <a href="/redshift/">Amazon Redshift</a>.</p>Fri, 01 Sep 2017 00:33:52 +0000

Amazon Redshift Spectrum now supports ORC and Grok file formats<p>You can now leverage Amazon Redshift Spectrum to query data stored in Optimized Row Columnar (ORC) and Grok file formats. Amazon Redshift Spectrum also supports multiple other open file formats, including Avro, CSV, Parquet, RCFile, RegexSerDe, SequenceFile, TextFile, and TSV. For more information, see <a href="">supported file formats</a> in the <i>Amazon Redshift Database Developer Guide</i>.</p>Wed, 23 Aug 2017 21:10:38 +0000

Amazon Redshift introduces new OCTET_LENGTH function<p>You can now use <a href="">OCTET_LENGTH</a> to count the number of bytes (octets) in a specified string.
</p>Wed, 23 Aug 2017 21:10:48 +0000

Amazon Redshift Spectrum Now Integrates with AWS Glue<p>You can now use the AWS Glue Data Catalog as the metadata repository for Amazon Redshift Spectrum. The AWS Glue Data Catalog provides a central metadata repository for all of your data assets, regardless of where they are located.</p>Tue, 15 Aug 2017 22:39:22 +0000

Amazon Redshift announces enhanced support for viewing external Redshift Spectrum tables<p>Using the new Amazon Redshift ODBC and JDBC drivers, you can view external <a href="/redshift/spectrum/">Redshift Spectrum</a> tables in your existing SQL client and BI tools. Download the new drivers from the Connect Client tab in the <a href="" target="_blank">Amazon Redshift Management Console</a>.</p>Fri, 11 Aug 2017 18:33:53 +0000

Amazon Redshift announces Federated Authentication with Single Sign-On<p>You can now use the new Amazon Redshift database authentication to simplify credential management for database users. You can configure Amazon Redshift to automatically generate temporary database credentials based on permissions granted through an AWS IAM policy. You can leverage your corporate directory and a third-party SAML 2.0 identity provider, such as ADFS, PingFederate, or Okta, to enable your users to easily access their Amazon Redshift clusters using their corporate user names, without managing database users and passwords. Furthermore, database users are automatically created at their first login based on their corporate privileges. The new Amazon Redshift ODBC and JDBC drivers support Windows Integrated Authentication for a simplified client experience.
This feature is supported starting with the new Amazon Redshift ODBC and JDBC driver versions. For more information, see <a href="" target="_blank">Using IAM Authentication to Generate Database User Credentials</a> in the <i>Amazon Redshift Database Developer Guide</i>.</p>Fri, 11 Aug 2017 18:27:54 +0000

Amazon QuickSight adds support for Amazon Redshift Spectrum<p>Starting today, <a href="/quicksight/">Amazon QuickSight</a> customers can leverage <a href="/redshift/">Amazon Redshift</a> Spectrum to visualize and analyze vast amounts of unstructured data in their Amazon S3 “data lake”, without having to load or transform any data. In addition, customers can now visualize combined data sets that include frequently accessed data stored in Amazon Redshift and bulk data sets stored cost-effectively in Amazon S3, using the same SQL syntax as Amazon Redshift.</p>Thu, 01 Jun 2017 22:18:45 +0000

AWS Schema Conversion Tool Exports from SQL Server to Amazon Redshift<p><a href="/dms/#sct">AWS Schema Conversion Tool</a> (SCT) can now extract data from a Microsoft SQL Server data warehouse for direct import into <a href="/redshift/">Amazon Redshift</a>. This follows the <a href="/about-aws/whats-new/2017/04/aws-schema-conversion-tool-exports-vertica-greenplum-and-netezza-data-warehouse-to-amazon-redshift/">recently announced</a> capability to convert SQL Server data warehouse schemas.
</p>Thu, 11 May 2017 18:18:39 +0000

Amazon Redshift announces query monitoring rules (QMR), a new feature that automates workload management, and a new function to calculate percentiles<p>You can use the new Amazon Redshift <a href="" target="_blank">query monitoring rules</a> feature to set metrics-based performance boundaries for workload management (WLM) queues, and specify what action to take when a query goes beyond those boundaries. For example, for a queue dedicated to short-running queries, you might create a rule that aborts queries that run for more than 60 seconds. To track poorly designed queries, you might have another rule that logs queries that contain nested loops. We also provide pre-defined rule templates in the Amazon Redshift management console to get you started.</p>Fri, 21 Apr 2017 16:21:10 +0000

Amazon Redshift Spectrum: Run Amazon Redshift Queries Directly on Datasets as Large as an Exabyte in Amazon S3<p>Today we announced the general availability of Amazon Redshift Spectrum, a new feature that allows you to run SQL queries against exabytes of data in Amazon Simple Storage Service (Amazon S3). With Redshift Spectrum, you can extend the analytic power of Amazon Redshift beyond data stored on local disks in your data warehouse to query vast amounts of unstructured data in your Amazon S3 “data lake”, without having to load or transform any data.
Redshift Spectrum applies sophisticated query optimization, scaling processing across thousands of nodes so results are fast, even with large data sets and complex queries.</p>Wed, 19 Apr 2017 19:57:33 +0000

AWS Schema Conversion Tool Exports from Oracle and Teradata Data Warehouses to Amazon Redshift<p>We are pleased to announce that the <a href="/dms/#sct">AWS Schema Conversion Tool</a> (SCT) can now extract data from Teradata and Oracle data warehouses for direct import into <a href="/redshift/">Amazon Redshift</a>. Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse that was designed for the cloud from the ground up. AWS SCT will analyze your data warehouse, automate the schema conversion, apply the schema to the Amazon Redshift target, and extract your warehouse data, regardless of volume. You can use Amazon S3 or Amazon Snowball to move your exports to the cloud, where Amazon Redshift can natively import the data for use.</p>Thu, 16 Feb 2017 23:43:50 +0000

Amazon Redshift now supports encrypting unloaded data using Amazon S3 server-side encryption with AWS KMS keys<p>The <a href="/redshift/">Amazon Redshift</a> UNLOAD command now supports Amazon S3 server-side encryption using an AWS KMS key. The UNLOAD command unloads the results of a query to one or more files on Amazon S3. You can let Amazon Redshift automatically encrypt your data files using Amazon S3 server-side encryption, or you can specify a symmetric encryption key that you manage. With this release, you can use Amazon S3 server-side encryption with a key managed by AWS KMS. In addition, the COPY command loads Amazon S3 server-side encrypted data files without requiring you to provide the key.
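As a sketch of an UNLOAD using SSE-KMS (the bucket, IAM role ARN, and key ID are placeholders):

```sql
-- Unload query results to S3, encrypted server-side with a KMS key
UNLOAD ('SELECT * FROM events WHERE event_date < \'2017-01-01\'')
TO 's3://my-bucket/archive/events_'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftUnloadRole'
KMS_KEY_ID '1234abcd-12ab-34cd-56ef-1234567890ab'
ENCRYPTED;
```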
For more information, see <a href="" target="_blank">COPY</a> and <a href="" target="_blank">UNLOAD</a> in the <i>Amazon Redshift Database Developer Guide</i>.</p>Fri, 10 Feb 2017 18:03:05 +0000

Amazon Redshift announces improved Workload Management console experience<p><a href="/redshift/">Amazon Redshift</a> Workload Management (WLM) enables you to flexibly manage priorities within workloads so that short, fast-running queries don't get stuck in queues behind long-running queries. Today we are announcing an improved WLM experience in the Amazon Redshift console. The new features include in-line validations, simpler error messages, and more, so you can easily <a target="_blank" href="">create WLM queues</a> and <a target="_blank" href="">manage workloads</a>. For more information, see <a target="_blank" href="">Workload Management</a> in the <i>Amazon Redshift Database Developer Guide</i>.</p>Thu, 26 Jan 2017 21:13:02 +0000

Amazon Redshift now supports the Zstandard high data compression encoding and two new aggregate functions<p><a href="/redshift/">Amazon Redshift</a> now supports <a href="">Zstandard (ZSTD)</a> column compression encoding, which delivers better data compression, thereby reducing the amount of storage and I/O needed. With the addition of ZSTD, Amazon Redshift now offers <a href="">seven compression encodings</a> to choose from, depending on your dataset.</p>Fri, 20 Jan 2017 19:03:08 +0000

Amazon Kinesis Firehose can now prepare and transform streaming data before loading it to data stores<p>You can now configure <a href="/kinesis/firehose/">Amazon Kinesis Firehose</a> to prepare your streaming data before it is loaded to data stores.
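Firehose performs this preparation by invoking an AWS Lambda function on batches of records. A minimal transformation handler might look like the following (the transformation itself is hypothetical, here just uppercasing each record's payload; the record contract of `recordId`, `result`, and base64 `data` is what Firehose expects back):

```python
import base64

def lambda_handler(event, context):
    """Hypothetical Firehose transformation: uppercase each record's payload."""
    output = []
    for record in event["records"]:
        # Firehose delivers record payloads base64-encoded
        payload = base64.b64decode(record["data"]).decode("utf-8")
        transformed = payload.upper()
        output.append({
            "recordId": record["recordId"],   # must echo the incoming recordId
            "result": "Ok",                   # or "Dropped" / "ProcessingFailed"
            "data": base64.b64encode(transformed.encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}
```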
With this new feature, you can easily convert raw streaming data from your data sources into the formats required by your destination data stores, without having to build your own data processing pipelines.</p>Wed, 21 Dec 2016 17:19:40 +0000

Amazon Redshift now supports the Python UDF logging module<p>You can now use the standard <a href="" target="_blank">Python logging module</a> to log error and warning messages from Amazon Redshift user-defined functions (UDFs). You can then query the <a href="" target="_blank">SVL_UDF_LOG</a> system view to retrieve the messages logged from your UDFs and troubleshoot them easily.</p>Fri, 09 Dec 2016 17:07:46 +0000

Announcing the AWS Canada (Central) Region<p>AWS is excited to announce immediate availability of the new Canada (Central) Region. Canada joins Northern Virginia, Ohio, Oregon, Northern California, and AWS GovCloud as the sixth AWS Region in North America and the fifteenth worldwide, bringing the total number of AWS Availability Zones to 40 globally.</p>Thu, 08 Dec 2016 17:44:12 +0000

Monitor and govern Amazon Redshift configurations with AWS Config<p>You can now record configuration changes to your Amazon Redshift clusters with AWS Config. The detailed configuration recorded by AWS Config includes changes made to Amazon Redshift clusters, cluster parameter groups, cluster security groups, cluster snapshots, cluster subnet groups, and event subscriptions. In addition, you can run two new managed Config Rules to check whether your Amazon Redshift clusters have the appropriate configuration and maintenance settings.
These checks include verifying that your cluster database is encrypted, logging is enabled, the snapshot data retention period is set appropriately, and much more.</p>Wed, 07 Dec 2016 21:50:35 +0000

Amazon Redshift introduces multibyte (UTF-8) character support for database object names and updated ODBC/JDBC drivers<p>You can now use multibyte (UTF-8) characters in <a href="/redshift/">Amazon Redshift</a> table, column, and other database object names. For more information, see <a href="" target="_blank">Names and Identifiers</a> in the Amazon Redshift Database Developer Guide. To support this new feature, we have updated the Amazon Redshift <a href="" target="_blank">ODBC</a> and <a href="" target="_blank">JDBC</a> drivers. The driver updates include support for multibyte characters and other enhancements. For details, see the <a href="" target="_blank">Amazon Redshift JDBC Release Notes</a> and <a href="" target="_blank">Amazon Redshift ODBC Release Notes</a>.</p>Sat, 19 Nov 2016 01:02:40 +0000

Amazon Redshift announces new data compression, connection management, and data loading features<p>We are excited to announce four new <a href="/redshift/">Amazon Redshift</a> features that improve data compression, connection management, and data loading.</p>Fri, 11 Nov 2016 20:46:19 +0000

Amazon Redshift now available in the South America (São Paulo) Region<p>We are excited to announce that <a href="/redshift/">Amazon Redshift</a> is now available in the South America (São Paulo) Region.</p>Mon, 31 Oct 2016 23:22:27 +0000

Announcing the AWS US East (Ohio) Region<p>AWS is excited to announce immediate availability of the new US East (Ohio) Region.
Ohio joins Northern Virginia, Oregon, Northern California, and AWS GovCloud as the fifth AWS Region in North America and the fourteenth worldwide, bringing the total number of AWS Availability Zones to 38 globally.</p>Thu, 20 Oct 2016 05:38:43 +0000

Amazon Redshift introduces a new data type to support time zones in time stamps<p>You can now use time zones as part of time stamps in <a href="/redshift/">Amazon Redshift</a>. The new <a href="">TIMESTAMPTZ</a> data type allows you to input timestamp values that include a time zone. Amazon Redshift automatically converts timestamps to Coordinated Universal Time (UTC) and stores the UTC values. The COPY command also recognizes timestamp values with time zones in the source data and automatically converts them to UTC. You can retrieve and display timestamps in Amazon Redshift by setting your preferred time zone at the <a href="">session level</a>, <a href="">user level</a>, or client connection level.
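A brief sketch of TIMESTAMPTZ usage (the table and time zone are illustrative):

```sql
CREATE TABLE event_log (event_id INT, occurred_at TIMESTAMPTZ);

-- The input includes a UTC offset; Redshift stores the value in UTC
INSERT INTO event_log VALUES (1, '2016-09-30 09:15:00-07');

-- Display timestamps in a preferred zone for this session
SET timezone TO 'America/New_York';
SELECT occurred_at FROM event_log;
```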
</p>Fri, 30 Sep 2016 16:13:04 +0000

Amazon Redshift now supports Enhanced VPC Routing<p>You can now use Amazon Redshift's Enhanced VPC Routing to force all of your <a href="" target="_blank">COPY</a> and <a href="" target="_blank">UNLOAD</a> traffic to go through your <a href="" target="_blank">Amazon Virtual Private Cloud (VPC)</a>. Enhanced VPC Routing supports the use of standard VPC features such as VPC endpoints, security groups, network ACLs, managed NAT, and internet gateways, enabling you to tightly manage the flow of data between your Amazon Redshift cluster and all of your data sources. In particular, when your Amazon Redshift cluster is on a private subnet and you enable Enhanced VPC Routing, all COPY and UNLOAD traffic between your cluster and Amazon S3 is restricted to your VPC. You can also add a policy to your VPC endpoint to restrict unloading data to a specific S3 bucket in your account, and monitor all COPY and UNLOAD traffic using VPC flow logs.</p>Thu, 15 Sep 2016 23:17:47 +0000

AWS Cost and Usage Report Data is Now Easy to Upload Directly into Amazon Redshift and Amazon QuickSight<p><i>AWS Cost and Usage Report</i> data is now available for easy and quick upload directly into Amazon Redshift and Amazon QuickSight.</p>Fri, 19 Aug 2016 00:58:18 +0000

Amazon Redshift improves throughput performance by up to 2x<p>You can now get up to 60% higher query throughput (as measured by the standard TPC-DS benchmark at 3 TB) in Amazon Redshift as a result of improved memory allocation, which reduces the number of queries spilled to disk. This improvement is available in version 1.0.1056 and above.
Combined with the I/O and commit logic enhancements released in version 1.0.1012, it delivers up to 2x faster performance for complex queries that spill to disk, and for queries, such as SELECT INTO TEMP TABLE, that create temporary tables.</p>Wed, 25 May 2016 18:23:34 +0000

Amazon Redshift UNION ALL queries and VACUUM commands now run up to 10x faster<p><b>UNION ALL performance improvement:</b> Business analytics often involves time-series data, which is data generated or aggregated daily, weekly, monthly, or at other intervals. By storing <a target="_blank" href="">time-series data in separate tables</a>, one table for each time interval, and using a <a target="_blank" href="">UNION ALL view</a> over those tables, you can avoid potentially costly table updates. Amazon Redshift now runs UNION ALL queries up to 10 times faster if they involve joins, and up to 2 times faster if they don't. This performance improvement is automatic, requires no action on your part, and is available in version 1.0.1057 and above. For more information about UNION ALL views and time-series tables, see <a target="_blank" href="">Using Time-Series Tables</a> in the Amazon Redshift Database Developer Guide.</p>Tue, 24 May 2016 22:36:07 +0000

AWS Database Migration Service Now Supports Migrations to Amazon Redshift<p>AWS Database Migration Service now supports Amazon Redshift as a migration target.
This allows you to stream data to Amazon Redshift from any of the supported sources, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, and SQL Server, enabling consolidation and easy analysis of data in the petabyte-scale data warehouse.</p>Wed, 04 May 2016 15:44:21 +0000

Amazon Redshift announces Enhancements to Data Loading, Security, and SAS Integration<p><b>BACKUP NO option when creating tables:</b> You can now use the <a href="" target="_blank">BACKUP NO</a> option with the <a href="" target="_blank">CREATE TABLE command</a> to improve data loading and cluster performance. For tables that contain only transient and pre-processed data, such as staging tables, specify BACKUP NO to save processing time when creating snapshots and restoring from snapshots. This option also reduces the storage space used by snapshots.</p>Fri, 29 Apr 2016 18:09:45 +0000

Create New Amazon Redshift Datasources in Amazon ML by Copying Existing Datasource Settings<p>You now have the ability to quickly and easily create new Amazon Redshift datasources in Amazon Machine Learning (Amazon ML) by copying settings from an existing Amazon Redshift datasource. A new option on the Amazon ML console allows you to select an existing Redshift datasource and copy its Redshift cluster name, database name, IAM role, SQL query, and staging data location, automatically populating these fields in the Create Datasource wizard.
You can modify the settings before the new datasource is created, for example, to change the SQL query or to specify a different IAM role to access the cluster.</p>Mon, 11 Apr 2016 22:36:09 +0000

Amazon Redshift is now available in the China (Beijing) Region<p>We are excited to announce that Amazon Redshift is now available in the AWS China (Beijing) Region.</p>Thu, 07 Apr 2016 00:06:34 +0000

Amazon Redshift now supports using IAM roles with COPY and UNLOAD commands<p>You can now assign one or more AWS Identity and Access Management (IAM) roles to your <a href="/redshift/">Amazon Redshift</a> cluster for data loading and exporting. Amazon Redshift assumes the assigned IAM roles when you load data into your cluster using the COPY command or export data from your cluster using the UNLOAD command. It uses the resulting credentials to access other AWS services, such as Amazon S3, securely during these operations. IAM roles enhance the security of your cluster and simplify data loading and exporting by eliminating the need to embed AWS access credentials within SQL commands. They also enable your cluster to periodically re-assume an IAM role during long-running operations. Handling of data encryption keys for COPY and UNLOAD commands remains unchanged.</p>Wed, 30 Mar 2016 00:20:35 +0000

Amazon Machine Learning Console Now Makes It Easier to Connect to Amazon Redshift<p>You can now more easily set up or select your <a href="">Identity and Access Management (IAM) role</a> when connecting to an Amazon Redshift cluster from the Amazon Machine Learning (Amazon ML) console.
To streamline the process of setting up your connection to Amazon Redshift, Amazon ML now pre-populates an interactive drop-down menu with existing IAM roles that have an Amazon ML managed policy for Amazon Redshift, as well as other IAM roles you might prefer. From the Amazon ML console, you also have the option of dynamically creating a new IAM role, enabling you to quickly connect to your Amazon Redshift cluster.</p>Tue, 22 Mar 2016 00:19:38 +0000

Amazon Redshift now supports table-level restore<p>You can now restore a single table from an Amazon Redshift snapshot instead of restoring the entire cluster. This new feature enables you to restore a table that you might have dropped accidentally, or to reconcile data from a table that you might have updated or deleted unintentionally. To restore a table from a snapshot, simply navigate to the “Table Restore” tab for a cluster and click the “Restore Table” button.</p>Thu, 10 Mar 2016 23:45:07 +0000

Amazon Redshift is now available in the US West (N. California) Region<p>We are excited to announce that <a href="/redshift/">Amazon Redshift</a> is now available in the AWS US West (N. California) Region.</p>Fri, 26 Feb 2016 01:37:01 +0000

Improved Amazon Redshift Data Schema Conversion from the Amazon Machine Learning Console<p>You can now use the Amazon Machine Learning (Amazon ML) console to retrieve data from Amazon Redshift with improved data schema conversion functionality. Data types supported by Amazon ML are not equivalent to Amazon Redshift's supported data types, requiring a schema conversion when creating an Amazon ML datasource. Using the Amazon ML console, you can now take advantage of more accurate rules for this schema conversion process, based on the data type information provided by Amazon Redshift.
For more information about using Amazon Redshift with Amazon ML, please see the <a href="">documentation</a> in the Amazon ML developer guide.</p>
Wed, 10 Feb 2016 00:18:11 +0000

Amazon Redshift Now Supports Appending Rows to Tables and Exporting Query Results to BZIP2-compressed Files
<p><b><u>Append rows to a target table</u></b>: Using the <a href="">ALTER TABLE APPEND</a> command, you can now append rows to a target table. When you issue this command, <a href="/redshift/">Amazon Redshift</a> moves the data from the source table to matching columns in the target table. ALTER TABLE APPEND is usually much faster than a similar <a href="">CREATE TABLE AS</a> or <a href="">INSERT INTO</a> operation because it moves the data instead of duplicating it. This is particularly useful when you load data into a staging table, process it, and then move the results into a production table. For more details, refer to the <a href="">ALTER TABLE APPEND</a> command.</p>
Tue, 09 Feb 2016 00:37:02 +0000

Amazon Redshift supports automatic queue hopping for timed-out queries
<p>You can now configure Amazon Redshift Workload Management (WLM) settings to move timed-out queries automatically to the next matching queue and restart them. The matching queue has the same Query Group or User Group as the original queue. Please see the <a href="">WLM Queue Hopping</a> section of our documentation for more detail.</p>
Fri, 08 Jan 2016 15:50:19 +0000

Amazon Redshift announces tag-based permissions, default access privileges, and BZIP2 compression format
<p>Tag-based, resource-level permissions and the ability to apply default access privileges to new database objects make it easier to manage access control in Amazon Redshift.
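The staging-to-production pattern described in the ALTER TABLE APPEND announcement above could be sketched like this (table names and the role ARN are hypothetical; both tables must be permanent, with matching columns):

```sql
-- Load and process new rows in a permanent staging table
CREATE TABLE sales_staging (LIKE sales);
COPY sales_staging
FROM 's3://example-bucket/incoming/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole';
-- ...cleanse or transform the staged rows here...

-- Move (not copy) the rows into the production table, then clean up
ALTER TABLE sales APPEND FROM sales_staging;
DROP TABLE sales_staging;
```

Because APPEND moves data blocks rather than rewriting rows, the staging table is left empty afterward, which is why the final DROP is safe.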
In addition, you can now use the Amazon Redshift COPY command to load data in BZIP2 compression format.</p>
Fri, 11 Dec 2015 01:30:45 +0000

Amazon Redshift now supports modifying cluster accessibility and specifying sort order for NULL values
<p>We are pleased to announce two new features for Amazon Redshift that make it easier for you to control access to your clusters and expand query capabilities.</p>
Fri, 20 Nov 2015 23:35:00 +0000

Amazon Redshift now supports Scalar User-Defined Functions in Python
<p>You can now create and run scalar user-defined functions (UDFs) in <a href="/redshift/">Amazon Redshift</a>. With scalar UDFs, you can perform analytics that were previously impossible or too complex to express in plain SQL.</p>
Fri, 11 Sep 2015 17:09:00 +0000

Amazon Redshift now supports dynamic workload management and list aggregate functions
<p>We are excited to announce two new features for Amazon Redshift that make it easier to manage your clusters and expand query capabilities.</p>
Tue, 04 Aug 2015 01:48:01 +0000

Amazon Redshift now supports cross-region backups for KMS-encrypted clusters
<p>You can now configure Amazon Redshift to automatically copy snapshots of your KMS-encrypted clusters to another region of your choice. By storing a copy of your snapshots in a secondary region, you can restore your cluster from recent data if anything affects the primary region. For details on how to enable automatic cross-region backups for your KMS-encrypted clusters, refer to the <a href="" target="_blank">Snapshots</a> section of the Amazon Redshift management guide.</p> <p><a href="/redshift/">Amazon Redshift</a> makes it easy to launch a high-performance, petabyte-scale data warehouse for less than $1,000/TB/year.
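A minimal example of the scalar Python UDF support announced above; the function name and logic are illustrative, and the UDF body runs under Redshift's plpythonu language:

```sql
-- A scalar Python UDF: extract the hostname from a URL
CREATE OR REPLACE FUNCTION f_hostname(url VARCHAR)
RETURNS VARCHAR IMMUTABLE AS $$
    # Redshift UDFs run Python 2.7, so urlparse is in the standard library
    from urlparse import urlparse
    return urlparse(url).hostname
$$ LANGUAGE plpythonu;

-- Use it like any built-in scalar function
SELECT f_hostname('https://aws.amazon.com/redshift/');
```

Marking the function IMMUTABLE tells the planner it always returns the same result for the same input, which enables caching and other optimizations.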
Get started with a <a target="_blank" href="">free 2-month trial</a>.</p>
Tue, 28 Jul 2015 23:26:21 +0000

Amazon Redshift now supports Avro ingestion
<p>We are excited to announce that you can now ingest Avro files directly into Amazon Redshift. Use the COPY command to ingest data in Avro format in parallel from Amazon S3, Amazon EMR, and remote hosts (SSH clients). For details, refer to the <a href="">data ingestion</a> section of the documentation.</p> <p><a href="/redshift/">Amazon Redshift</a> makes it easy to launch a high-performance, petabyte-scale data warehouse for less than $1,000/TB/year. Get started with a <a href="">free 2-month trial</a>.</p>
Mon, 13 Jul 2015 18:28:20 +0000

Amazon Redshift Adds New Dense Storage (DS2) Instances and Reserved Node Payment Options
<p>You can now launch Amazon Redshift clusters on second-generation Dense Storage (DS2) instances. DS2 has twice the memory and compute power of its Dense Storage predecessor, DS1 (formerly DW1), and the same storage capacity. DS2 also supports <a href="">Enhanced Networking</a> and provides 50% more disk throughput than DS1. On average, DS2 provides 50% better performance than DS1, but is <a href="/redshift/pricing/">priced the same as DS1</a>. To move from DS1 to DS2, simply <a href="">restore</a> a DS2 cluster from a snapshot of a DS1 cluster of the same size.</p>
Wed, 10 Jun 2015 04:26:15 +0000

Filter Data in Amazon Redshift using Interleaved Sorting
<p>You can use <a target="_blank" href="">Interleaved Sort Keys</a> to quickly filter data without the need for indexes or projections in <a href="/redshift/">Amazon Redshift</a>. A table with interleaved keys arranges your data so that each sort key column has equal importance.
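The Avro ingestion announced above is a COPY option; a hedged sketch, with a hypothetical table, bucket, and role ARN:

```sql
-- Ingest Avro files from S3 in parallel;
-- 'auto' maps Avro field names to table column names
COPY clickstream
FROM 's3://example-bucket/events/avro/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
FORMAT AS AVRO 'auto';
```

In place of 'auto', a JSONPaths file in S3 can be supplied to map Avro fields to columns explicitly when names do not line up.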
While <a target="_blank" href="">Compound Sort Keys</a> are faster if you filter on the leading sort key columns, interleaved sort keys provide fast filtering no matter which sort key columns you specify in your WHERE clause. To create an interleaved sort, simply define your sort keys as INTERLEAVED in your <a target="_blank" href="">CREATE TABLE</a> statement.</p> <p>The performance benefit of interleaved sorting increases with table size and is most effective for highly selective queries that filter on multiple columns. For example, assume your table contains 1,000,000 blocks (1 TB per column) with an interleaved sort key on both customer ID and product ID. You will scan only 1,000 blocks when you filter on a specific customer or a specific product, a 1,000x reduction in blocks scanned compared to the unsorted case. If you filter on both customer and product, you will only need to scan a single block.</p> <p>The interleaved sorting feature will be deployed in every region over the next seven days. The new cluster version will be 1.0.921.</p> <p>For more information, please see our <a target="_blank" href="">AWS Blog Post on Interleaved Sorting</a> and review our documentation on <a target="_blank" href="">Best Practices for Designing Tables</a>.</p>
Mon, 11 May 2015 22:29:45 +0000

Amazon Mobile Analytics Auto Export To Amazon Redshift
<p>You can now use the <a href="/mobileanalytics/">Amazon Mobile Analytics</a> Auto Export feature to automatically export your app event data to <a href="/redshift/">Amazon Redshift</a>. With your app event data in Amazon Redshift, you can run SQL queries, build custom dashboards, and gain deep insights about your application usage.
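The customer/product example in the interleaved sorting announcement above corresponds to a table definition along these lines (column names are illustrative):

```sql
-- Both sort key columns get equal weight, so filtering on
-- either customer_id or product_id alone is equally fast
CREATE TABLE orders (
    customer_id BIGINT,
    product_id  BIGINT,
    order_date  DATE,
    amount      DECIMAL(12,2)
)
INTERLEAVED SORTKEY (customer_id, product_id);
```

A compound sort key on (customer_id, product_id) would instead favor queries that filter on customer_id first, as the announcement notes.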
Additionally, you can use your existing business intelligence and data warehouse tools to report on your app event data.</p> <p>You can turn on the Auto Export feature from your <a href="" target="_blank">Amazon Mobile Analytics Console</a>. To learn more, visit our <a href="/mobileanalytics/">webpage</a> and check out the <a href="" target="_blank">documentation</a>.</p>
Tue, 03 Mar 2015 17:47:46 +0000

Amazon Redshift Announces Custom ODBC/JDBC Drivers and Query Visualization in the Console
<p>Amazon Redshift’s new custom <a href="">ODBC</a> and <a href="">JDBC</a> drivers make it easier and faster to connect to and query Amazon Redshift from your Business Intelligence (BI) tool of choice. The JDBC driver features JDBC 4.1 and 4.0 support, a 35% performance gain over open-source options, and improved memory management. The ODBC drivers feature ODBC 3.8 support, a 6% performance gain, and better Unicode data and password handling, among other benefits. Additionally, AWS partners <a target="_blank" href="">Informatica</a>, <a href="/redshift/partners-detail/microstrategy/">MicroStrategy</a>, <a href="/redshift/partners-detail/pentaho/">Pentaho</a>, <a href="/redshift/partners-detail/qlik/">Qlik</a>, <a target="_blank" href="">SAS</a>, and <a href="/redshift/partners-detail/tableau/">Tableau</a> will support these Redshift drivers in their solutions. For more information, please see <a href="">Connecting to a Cluster</a> in our documentation.
If you need to distribute these drivers to your customers or other third parties, please contact us at <a href=""></a> so we can arrange an appropriate license.</p>
Thu, 26 Feb 2015 21:43:07 +0000

Amazon Redshift is Now Available in the AWS GovCloud (US) Region
<p>We are delighted to announce that <a adhocenable="false" href="/redshift/">Amazon Redshift</a> is now available in the <a adhocenable="false" href="/govcloud-us/">AWS GovCloud (US) Region</a>.</p>
Wed, 19 Nov 2014 03:45:29 +0000

Amazon Redshift adds Four New Features and Sixteen New SQL Commands and Functions
<p><a href="">Amazon Redshift</a> has added a number of features this week and over the past month, including the ability to tag resources and cancel queries from the console, enhancements to data load and unload, and sixteen new SQL commands and functions. Amazon Redshift is a fast, easy-to-use, petabyte-scale data warehouse service that costs as little as $1,000/TB/year. To get started for free with Amazon Redshift and partner tools, please see our <a href="">Free Trial</a> page.</p>
Wed, 17 Dec 2014 04:32:44 +0000

Amazon Redshift Free Trial and Price Reductions in Asia Pacific
<p>AWS is delighted to announce a <a href="">free trial</a> and <a href="">reserved instance price reductions</a> for <a href="">Amazon Redshift</a>, a fast, fully managed, petabyte-scale data warehouse for as little as $1,000/TB/year. You can now try Amazon Redshift's SSD node for <a href="">free for two months</a>. What's more, a number of <a href="">Business Intelligence and Data Integration partners</a> are offering free trials of their own to help you ingest and report on your data in Amazon Redshift. Amazon Redshift has also reduced three-year Reserved Instance prices in Japan, Singapore, and Sydney by over 25%.
</p> <p><span style="text-decoration: underline;"><strong>Two Month Free Trial</strong></span><br /> If you are new to Amazon Redshift, you may be eligible for 750 free hours per month for two months to try the <strong>dw2.large</strong> node, enough hours to continuously run one node with 160 GB of compressed SSD storage. You can also build clusters with multiple <strong>dw2.large</strong> nodes to test larger data sets, which will consume your free hours more quickly. Please see the <a href="">Amazon Redshift Free Trial Page</a> for more details.</p> <p><span style="text-decoration: underline;"><strong>Price Reductions in Asia Pacific</strong></span><br /> You can now purchase a three-year reserved <strong>dw1.8xlarge</strong> instance in Japan for $30,000 upfront and $1.326 per hour, down 28% from $30,400 upfront and $2.288 per hour. A three-year reserved <strong>dw1.8xlarge</strong> instance in Singapore and Sydney now costs $32,000 upfront and $1.462 per hour, down 26% from $32,000 upfront and $2.40 per hour. The <strong>dw1.xlarge</strong> instance price has also decreased and continues to be one eighth the cost of <strong>dw1.8xlarge</strong>. Please see the <a href="">Amazon Redshift Pricing Page</a> for more details.</p> <p>To learn more about Amazon Redshift, please visit our <a href="">detail page</a> and <a href="">getting started page</a>. To find out about recently released features, please visit the <a href="">Developer Guide</a> and the <a href="">Management Guide</a> history. To receive alerts when new features are announced, please subscribe to our <a href="">feature announcements</a> thread in the Amazon Redshift <a href=";start=0">forum</a>.
</p>
Wed, 17 Dec 2014 04:53:05 +0000

Amazon Redshift Announces Cross Region Ingestion and Improved Query Functionality
<p>We are delighted to announce <a href="">cross region ingestion</a> and improved query functionality for Amazon Redshift, a fast, easy-to-use, petabyte-scale data warehouse service in the cloud that costs as little as $1,000/TB/year. Customers can now COPY data directly into Amazon Redshift from an Amazon S3 bucket or Amazon DynamoDB table that is not in the same region as the Amazon Redshift cluster. We have also launched new SQL functions, <a href="">greatest and least</a>, as well as new window functions, <a href="">percentile_cont</a> and <a href="">percentile_disc</a>, for more advanced analytics. These features will roll out to all new and existing Amazon Redshift customers over the next week, during maintenance windows.</p> <p>To get started with Amazon Redshift, please visit our <a href="">detail page</a>. To learn more about recently released features, please visit the <a href="">Developer Guide</a> and the <a href="">Management Guide</a> history. To receive alerts when new features are announced, please subscribe to our <a href="">feature announcements</a> thread in the Amazon Redshift <a href=";start=0">forum</a>.</p>
Wed, 17 Dec 2014 04:54:00 +0000
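As a sketch of the cross-region ingestion and new functions announced above (table, bucket, region, and role values are hypothetical; the role-based COPY syntax shown is the current form):

```sql
-- COPY from an S3 bucket in a different region than the cluster
COPY sales
FROM 's3://example-sydney-bucket/sales/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
REGION 'ap-southeast-2';

-- GREATEST/LEAST and the new percentile window functions
SELECT
    GREATEST(q1_sales, q2_sales) AS better_quarter,
    PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY q1_sales) OVER () AS median_q1
FROM quarterly_sales;
```

The REGION parameter tells COPY where the source bucket lives; without it, COPY assumes the bucket is in the cluster's own region.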