What’s New?

Apr 10, 2014

R3: Announcing the next generation of Amazon EC2 Memory-optimized instances

We are very excited to announce the immediate availability of the R3 instance type, the next generation of Amazon EC2 Memory-optimized instances. Our goal with R3 is to provide you with the best price per GiB of RAM along with high memory performance. R3 instances are available as On-Demand Instances, Reserved Instances, or Spot Instances with the following specifications.


Instance Size | vCPUs | Memory (GiB) | SSD Storage (GB) | Linux On-Demand Price* ($/hr) | Linux Reserved Instance Upfront* | Linux Reserved Instance Price* ($/hr)
r3.large      | 2     | 15           | 1 x 32           | $0.175                        | $1,033                           | $0.026
r3.xlarge     | 4     | 30.5         | 1 x 80           | $0.350                        | $2,066                           | $0.052
r3.2xlarge    | 8     | 61           | 1 x 160          | $0.700                        | $4,132                           | $0.104
r3.4xlarge    | 16    | 122          | 1 x 320          | $1.400                        | $8,264                           | $0.208
r3.8xlarge    | 32    | 244          | 2 x 320          | $2.800                        | $16,528                          | $0.416

* Pricing for US East (N. Virginia) and US West (Oregon).

Compared to M2 and CR1 instances, R3 instances provide:

  • The latest Intel Xeon Ivy Bridge Processors
  • Support for Enhanced Networking that provides lower latency, low jitter, and very high packet per second performance
  • Higher sustained memory bandwidth - up to 63,000 MB/s
  • Faster random I/O performance - up to 150,000 4 KB random reads per second
  • Support for EBS optimization (r3.xlarge, r3.2xlarge, and r3.4xlarge only)

R3 instances are recommended for in-memory analytics, high-performance databases (both relational databases and NoSQL databases such as MongoDB), and Memcached/Redis applications. R3 instances support Hardware Virtual Machine (HVM) Amazon Machine Images (AMIs) only. For additional information on this, or other technical information on R3, see the R3 Documentation.
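
As a rough illustration (not part of the original announcement), here is a minimal sketch of launching an R3 instance with the AWS SDK for Python (boto3); the AMI ID and key pair name are placeholders, and the image must be an HVM AMI as noted above.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single r3.large instance. Substitute an HVM AMI you own and a
# key pair that exists in your account; the values below are placeholders.
response = ec2.run_instances(
    ImageId="ami-12345678",      # hypothetical HVM AMI ID
    InstanceType="r3.large",
    KeyName="my-key-pair",       # hypothetical key pair name
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```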

R3 instances are available in all AWS Regions except GovCloud (US), China (Beijing), and South America (São Paulo). To learn more about R3 and other Amazon EC2 instances, visit the Amazon EC2 Instances page.

Apr 09, 2014

AWS Elastic Beanstalk announces VPC Public IP support

We are pleased to announce that AWS Elastic Beanstalk now supports assigning public IP addresses to EC2 instances in a VPC. This feature allows you to create Elastic Beanstalk environments in a VPC with a single public subnet, so you no longer need to create a VPC with both a public and a private subnet plus a NAT instance.
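
For illustration only, here is a hedged sketch of creating such an environment with boto3, setting the AssociatePublicIpAddress option in the aws:ec2:vpc namespace; the application name, solution stack, VPC ID, and subnet ID are all placeholders.

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Create an environment in a single public subnet and give its instances
# public IP addresses. All names and IDs below are placeholders.
eb.create_environment(
    ApplicationName="my-app",
    EnvironmentName="my-app-env",
    SolutionStackName="64bit Amazon Linux running Ruby 1.9.3",  # example stack name
    OptionSettings=[
        {"Namespace": "aws:ec2:vpc", "OptionName": "VPCId", "Value": "vpc-11111111"},
        {"Namespace": "aws:ec2:vpc", "OptionName": "Subnets", "Value": "subnet-22222222"},
        {"Namespace": "aws:ec2:vpc", "OptionName": "AssociatePublicIpAddress", "Value": "true"},
    ],
)
```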

To begin using this feature, read Launching an AWS Elastic Beanstalk Application with Public Instances in a VPC. To learn more about this feature, please refer to IP Addressing in Your VPC.

For more information about AWS Elastic Beanstalk please visit the product page, read the documentation or watch this introductory video.

Apr 08, 2014

AWS Console for iOS and Android Now Supports Amazon S3

We’re excited to announce support for Amazon S3 in the AWS Console mobile app, available on Amazon Appstore, Google Play, and iTunes. Use the mobile app to view status and health of your AWS resources on your mobile device. The mobile app also features support for EC2, Elastic Load Balancing, Amazon Relational Database Service, Auto Scaling, AWS OpsWorks, CloudWatch, and the Service Health Dashboard.

The latest update introduces new features for app users to view information about Amazon S3 buckets and objects. Amazon S3 customers can browse buckets, view bucket properties, browse objects in a bucket, and view object properties.

Let us know how you use the mobile app and tell us what features you’d like using the feedback link in the app’s menu.

Apr 08, 2014

Amazon EMR Adds New EC2 Instance Types and Lowers Prices

Amazon Elastic MapReduce (Amazon EMR) is a web service that makes it easy to process large amounts of data using Hadoop on AWS. Amazon EMR now supports 12 additional EC2 instance types and supports hs1.8xlarge, hi1.4xlarge, and cc2.8xlarge in additional AWS regions. These EC2 instance types are well suited to a variety of popular applications such as HBase, Spark, and Impala.

In addition, Amazon EMR lowered prices 27% to 61% (depending on the instance type) effective April 1. For example, the price of hs1.8xlarge was reduced from $0.69 to $0.27 per hour, making it possible to operate a large EMR cluster for less than $1000 per TB per year (including the cost of EMR and EC2, assuming 3x data replication).

For more information, see:

New to Hadoop/EMR? Follow this step-by-step tutorial to launch your first cluster in minutes.

Apr 08, 2014

Introducing Cost Explorer: View and analyze your historical AWS spend

We are excited to introduce Cost Explorer, a tool that lets you analyze your historical AWS spend data with a graphical interface. Cost Explorer provides you with interactive graphical reports, designed to make it easier for you to view and analyze your historical spend on AWS. The data behind these reports is updated daily, so that you can view the most up-to-date information about your spending.

Cost Explorer can be accessed from the AWS Management Console and provides pre-configured views that show three common spend queries: your monthly spend broken down by AWS service, your monthly spend broken down by linked account, and your total daily spend. You can further customize these preconfigured views to meet your needs, for example by changing the time range you wish to view, viewing data at a monthly or daily grain, or drilling down further into your data based on services, linked accounts, and tags. You can also bookmark views to save custom settings for use in the future.

You can start using this tool by going to the Cost Explorer tab in the billing section of the AWS Management Console. For more information about the reports, please see the Cost Explorer Documentation.

Apr 03, 2014

You can now use Oracle GoldenGate with Amazon RDS for Oracle

We are excited to announce that you can now use Oracle GoldenGate with Amazon RDS for Oracle. Starting today, you can use your RDS Oracle Database Instances as both sources and targets for Oracle GoldenGate. You can use Oracle GoldenGate with Amazon RDS for Oracle for active-active database replication, zero-downtime migration and upgrades, disaster recovery and data protection, and cross-region replication.

Oracle GoldenGate enables real-time data integration, high availability solutions, transactional change data capture, data replication, transformations, and verification between operational and analytical enterprise systems.

Oracle GoldenGate with Amazon RDS is available under the “Bring-your-own-license” model in all AWS regions. To extract data from an RDS DB Instance using Oracle GoldenGate, you need to use DB Engine Version 11.2.0.3.

To learn more about using Oracle GoldenGate with Amazon RDS for Oracle, please visit our User Documentation.

Apr 02, 2014

AWS Elastic Beanstalk announces Ruby 2.0 support

We are pleased to announce that Ruby 2.0 is now available on AWS Elastic Beanstalk. Elastic Beanstalk already makes it easier to quickly deploy, manage and scale Java, Node.js, Python, PHP, .NET and Ruby 1.9.3 applications on AWS. Now, Elastic Beanstalk offers the same functionality for Ruby 2.0. For a complete list of supported platforms, see Supported Platforms.

Elastic Beanstalk for Ruby 2.0 comes with a choice of Passenger Standalone or Puma web server for Ruby with Nginx as a reverse proxy. This highly requested combination allows you to utilize all the virtual cores available on the instance type running your environment. To get started using Elastic Beanstalk for Ruby 2.0, visit the AWS Elastic Beanstalk Developer Guide or check out the walkthroughs for how to deploy Rails and Sinatra applications on Elastic Beanstalk.

To launch a sample application, click 'Launch Now' below (this opens the AWS Elastic Beanstalk launch application wizard):

  • Sample Sinatra application - Launch Now
  • Sample Rails 4 application that creates a simple pre-launch marketing web application (Note: requires selecting the 'Create an RDS DB Instance with this environment' option on the 'Additional Resources' step of the create wizard) - Launch Now

To learn more about creating your own "Launch Now" links, see Constructing a Launch Now URL. For more information about AWS Elastic Beanstalk please visit the product page, read the documentation or watch this introductory video.

Apr 02, 2014

Amazon CloudFront Adds EDNS-Client-Subnet Support

We’re excited to let you know that Amazon CloudFront has added support for EDNS-Client-Subnet. With this enhancement, Amazon CloudFront now provides even better routing, improving performance for end users who use Google Public DNS or OpenDNS resolvers.

Amazon CloudFront automatically routes requests for your content to the optimal edge location by looking at the IP address of the resolver making the DNS query. We do this because DNS resolvers are typically a good proxy for an end user's location. However, in some cases, your end users may be using DNS resolvers that are far from their geographic location, and in those cases end user requests may be routed to an Amazon CloudFront edge location that isn’t optimal for your end user. By supporting EDNS-Client-Subnet, Amazon CloudFront can now route requests to the optimal edge location by looking at a truncated version of the end user’s IP address added into the DNS request. Today, Google Public DNS and OpenDNS are two providers that include this truncated IP address (specifically, the first three octets) of the end user in the DNS request. For more information about how EDNS-Client-Subnet works, see A Faster Internet.

You don’t need to do anything to enable this feature; Amazon CloudFront will automatically route all requests using Google Public DNS and OpenDNS to the edge location that provides the best possible performance.

You can also learn more about Amazon CloudFront by visiting the Amazon CloudFront Developer Guide or the Amazon CloudFront product detail page.

Apr 01, 2014

Latest Features Launched in the AWS GovCloud (US) Region

Amazon EBS Provisioned IOPS Volumes and EBS-Optimized Instances


We are delighted to announce that Amazon EBS Provisioned IOPS Volumes are now available in the AWS GovCloud (US) Region.

Amazon EBS Provisioned IOPS Volumes offer storage with consistent, low-latency performance and are designed for applications with I/O-intensive workloads such as databases. Backed by Solid-State Drives (SSDs), Provisioned IOPS volumes support up to 30 IOPS per GB, which enables you to provision 4,000 IOPS on a volume as small as 134 GB. You can also stripe multiple volumes together to achieve up to 48,000 IOPS when attached to larger EC2 instances.

To enable Amazon EC2 instances to fully utilize the IOPS provisioned on an EBS volume, we’re also introducing the ability to launch selected Amazon EC2 instance types as EBS-Optimized instances. Provisioned IOPS volumes can achieve single digit millisecond latencies and are designed to deliver within 10% of the provisioned IOPS performance 99.9% of the time. For more information about instance types that can be launched as EBS-optimized instances, see Amazon EC2 Instance Types.
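
As an editorial illustration of the 30:1 ratio described above (not from the original announcement), here is a minimal boto3 sketch that creates a 134 GB io1 volume provisioned with 4,000 IOPS; the region and Availability Zone are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-gov-west-1")

# Create a 134 GB Provisioned IOPS (io1) volume with 4,000 IOPS, i.e. the
# maximum 30 IOPS-per-GB ratio mentioned above. The AZ is a placeholder.
volume = ec2.create_volume(
    AvailabilityZone="us-gov-west-1a",
    Size=134,
    VolumeType="io1",
    Iops=4000,
)
print(volume["VolumeId"], volume["Iops"])
```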

Amazon EC2 M3 Medium and Large Instance Sizes and Functionality


We are excited to announce the availability of two new Amazon EC2 M3 instance sizes, m3.medium and m3.large in the AWS GovCloud (US) Region.

Amazon EC2 M3 instance sizes and features: We have introduced two new sizes for M3 instances: m3.medium and m3.large with 1 and 2 vCPUs respectively. We have also added SSD-based instance storage and support for instance store-backed AMIs (previously known as S3-backed AMIs) for all M3 instance sizes. M3 instances feature high frequency Intel Xeon E5-2670 (Sandy Bridge or Ivy Bridge) processors. When compared to previous generation M1 instances, M3 instances provide higher, more consistent compute performance at a lower price. To learn more about M3 instances, please visit Amazon EC2 Instance Types.

Learn More about AWS GovCloud (US) Region


Join us for our weekly AWS GovCloud (US) Region Office Hours on April 1st at 1pm EST to learn more about the AWS GovCloud (US) Region.

Mar 31, 2014

Amazon SWF announces new samples and recipes for Ruby

We are excited to announce that Amazon Simple Workflow (Amazon SWF) has released more samples and recipes for the AWS Flow Framework for Ruby.

Amazon Simple Workflow is a task coordination and state management service for cloud and on-premises applications. It provides features that let you build and run reliable background jobs and other applications that need to track and manage state. With Amazon SWF, you can stop writing complex glue-code and state machinery and invest more in the business logic that makes your applications unique.

The AWS Flow Framework for Ruby is a Ruby gem that makes it faster and easier to build applications with Amazon Simple Workflow. Using the AWS Flow Framework, you write your code in a straightforward programming model and let the framework handle the details of Amazon Simple Workflow APIs.

With the new samples and recipes you can use starter code to handle a range of use cases. Some of the use cases covered include: running recurring scheduled tasks, parallel processing of a large data set, file/media processing, and more.

The samples are available at the AWS Flow Framework for Ruby page on GitHub.

For more information about Amazon SWF and the AWS Flow Framework, please visit the detail page.

For more information about the samples, please visit the AWS Flow Framework for Ruby Sample Documentation.
Mar 27, 2014

AWS OpsWorks now supports Chef 11.10

We are pleased to announce that AWS OpsWorks now supports Chef 11.10. This release of Chef:
  • Improves compatibility with cookbooks written for Chef Server, including support for features such as search and data bags, making it easier for you to use community cookbooks such as those written for MongoDB (see our new blog post, Deploying MongoDB With OpsWorks, to learn more about this use case).
  • Includes support for Berkshelf so you can easily reference multiple cookbook repositories. You can now easily use community cookbooks and your own custom cookbooks in the same stack.
  • Uses Ruby 2.0 for Chef recipe execution.
To get started, create a stack. Under Advanced settings, you can choose the Chef version you would like to use and whether to use Berkshelf. To learn more and see examples, see our documentation.
Mar 26, 2014

Amazon WorkSpaces Available for All Customers

We are excited to announce that Amazon WorkSpaces is now available to all AWS customers!

Amazon WorkSpaces is a fully managed desktop computing service in the cloud that lets you easily provision cloud-based desktops, allowing end users to access the documents, applications, and resources they need from the device of their choice, including laptops, iPads, Kindle Fire tablets, or Android tablets. You can also integrate Amazon WorkSpaces securely with your corporate Active Directory so that your users can continue using their existing enterprise credentials to seamlessly access company resources.

With a few clicks in the AWS Management Console, you can provision a high-quality desktop experience for any number of users at a cost that is highly competitive with traditional desktops and half the cost of most virtual desktop infrastructure (VDI) solutions.

Amazon WorkSpaces is initially available in the US East (N. Virginia) and US West (Oregon) AWS Regions, with support for more regions coming soon.

For more information about Amazon WorkSpaces, please visit the product detail page, where you can learn more and watch a short introductory video that will explain the service. You can get started with the service using the AWS Management Console.

Mar 24, 2014

Amazon CloudSearch introduces powerful new search and admin features

Amazon CloudSearch is a fully-managed service that makes it easy to set up, manage, and scale a search solution for your website or application. Customers from various industries including media and publishing, social networking, healthcare, eCommerce, legal, and the public sector use Amazon CloudSearch to build cutting edge content discovery systems.

Traditional search solutions require significant time and resources to maintain and operate. Search query traffic is often hard to predict, making it difficult to size and configure search platforms. In addition to the complexity involved, administration of a search system is also expensive. Amazon CloudSearch not only significantly lowers the cost of a search solution, but it also makes it easy to set up a search system that can change with the needs of the business. Amazon CloudSearch already offers key benefits including easy setup and configuration, hands-off auto scaling, automatic node monitoring and recovery, and built-in data durability.

With this launch, Amazon CloudSearch now supports several popular search engine features in addition to providing an enhanced managed search service. Key new features of the service include:

  • Support for 33 languages: Arabic, Armenian, Basque, Bulgarian, Catalan, simplified Chinese, traditional Chinese, Czech, Danish, Dutch, English, Finnish, French, Galician, German, Greek, Hindi, Hungarian, Indonesian, Irish, Italian, Japanese, Korean, Latvian, Norwegian, Persian, Portuguese, Romanian, Russian, Spanish, Swedish, Thai, and Turkish
  • Support for new data types
  • Autocomplete
  • Highlighting
  • Additional text processing features
  • Native Geospatial Support
  • Multi-AZ
  • IAM integration
  • User control for initial instance type selection
  • Availability in additional AWS regions: Sydney, Tokyo, and São Paulo

Customer Success Stories

News UK is a leading news publisher. “We use Amazon CloudSearch to let our members search our archive of news articles, dating back to 1785. We love the fact that CloudSearch automatically scales for traffic and data,” said Danny Tedora, Transformation Program Manager, News UK. “Having a low cost, high throughput, low-latency search without maintenance and operational overhead lets us focus more time on delighting our members.”

Bizo helps B2B marketers reach their target prospects online and shape purchase decisions through targeted display, social advertising, retargeting and other integrated multi-channel programs. “We use Amazon CloudSearch and have been impressed with its performance. CloudSearch also eliminated some administrative headaches we experienced with the Solr implementation we had been using,” said Alex Boisvert, Director of Engineering, Bizo. “We are excited about the new capabilities now available in CloudSearch, including a broader set of search features and support for enhanced security through IAM. We look forward to implementing these in our production environment.”

SMART INSIGHT develops search applications for specialized industries. The company develops SMART/InSight, a search application that integrates and analyzes enterprise information. “We view Amazon CloudSearch as key to our strategy for our flagship product, SMART/InSight. Pay-as-you-go pricing for a managed search service removes the operational burden of maintaining and managing complex search systems, and lets us focus on what’s most important to us: our customers,” said Mack K. Machida, CEO of SMART INSIGHT. “Search capability on AWS enables integration of data without thinking of data types, size, or location and lets SMART/InSight visualize them in the ways our customers need. We will continue to collaboratively work with CloudSearch.”

SnapDish helps over one million users find and share recipes and photos for their favorite foods. “We have been using Amazon CloudSearch to deliver SnapDish's food, recipe, and user search. CloudSearch is very useful for building highly available search applications,” said Fumikazu Kiyota, CTO of Vuzz, Inc. “With the new language support from CloudSearch we were able to move Japanese language analysis functions to CloudSearch. The service provides us a cost-effective option to provide a great search experience to our users.”

Customers can get started with the new features from Amazon CloudSearch with just a few clicks on the AWS console. Learn more about the new features available with Amazon CloudSearch by reading Jeff Barr’s latest blog post.

To get started visit:

Mar 24, 2014

Announcing Support for VPC Peering within a Region

We are excited to announce VPC Peering within a region. You can now create a one-to-one networking connection between two VPCs in the same region and route traffic between them using private IP addresses. Instances in either VPC can communicate with each other as if they were within the same VPC. You can establish multiple VPC peering connections for each VPC that you own and you can create a VPC peering connection between VPCs in your own account, or with a VPC in another AWS account in the same region. VPC Peering is available in the new VPC Management Console.
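
As an editorial sketch (not from the original announcement), the flow below shows requesting and accepting a peering connection between two VPCs in the same account and adding routes so they can reach each other; all IDs and CIDR blocks are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a peering connection between two VPCs in the same account and
# region, accept it, then add a route in each VPC's route table so their
# private CIDR ranges can reach each other. IDs and CIDRs are placeholders.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-11111111",
    PeerVpcId="vpc-22222222",
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

ec2.create_route(RouteTableId="rtb-aaaaaaaa", DestinationCidrBlock="10.1.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-bbbbbbbb", DestinationCidrBlock="10.0.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
```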

We are also excited to announce design improvements to the Amazon VPC Management Console. These changes make it easier to add tags to VPC resources and view those tags across resources on other VPC pages. To access the new VPC console, including the new VPC Peering feature, just visit the VPC Management Console and click the link to try the new console design. Don't forget to let us know what you think via the console feedback button!

Mar 24, 2014

New AWS Training Course - “Big Data on AWS”

AWS Training & Certification released a new technical training course for individuals who are responsible for implementing big data environments, namely Data Scientists, Data Analysts, and Enterprise Big Data Solution Architects. This course is designed to teach technical end users how to use Amazon EMR to process data using the broad ecosystem of Hadoop tools like Pig and Hive. We also cover how to create big data environments, work with Amazon DynamoDB and Amazon Redshift, understand the benefits of Amazon Kinesis, and leverage best practices to design big data environments for security and cost-effectiveness.

Click here to review the course details and outline.

Click here to learn more about AWS Training available to help you build technical expertise with the AWS Cloud.

Mar 20, 2014

Elastic Load Balancing adds support for Connection Draining

We are pleased to announce Connection Draining, a new feature for Elastic Load Balancing. When you enable Connection Draining on a load balancer, any back-end instances that you deregister will complete requests that are in progress before deregistration. Likewise, if a back-end instance fails health checks, the load balancer will not send any new requests to the unhealthy instance but will allow existing requests to complete.

This means that you can perform maintenance such as deploying software upgrades or replacing back-end instances without impacting your customers’ experience.

Connection Draining is also integrated with Auto Scaling, making it even easier to manage the capacity behind your load balancer. When Connection Draining is enabled, Auto Scaling will wait for outstanding requests to complete before terminating instances.

You can enable Connection Draining via the AWS Management Console, API, or Command Line Interface (CLI), as well as AWS CloudFormation.
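
For example, a minimal boto3 sketch of enabling the feature via the API (the load balancer name and timeout are placeholders):

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Turn on Connection Draining and give in-flight requests up to 300 seconds
# to complete before a deregistered or unhealthy instance stops serving.
elb.modify_load_balancer_attributes(
    LoadBalancerName="my-load-balancer",   # placeholder name
    LoadBalancerAttributes={
        "ConnectionDraining": {"Enabled": True, "Timeout": 300},
    },
)
```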

To learn more, please see the blog post and documentation.

Mar 18, 2014

AWS Trusted Advisor adds five new checks on AWS CloudTrail and Amazon Route 53

AWS Support announces five new AWS Trusted Advisor checks that offer best practices for using AWS CloudTrail (for logging AWS API activity) and Amazon Route 53 (for DNS services). AWS Trusted Advisor now provides 37 AWS best practices, and the five new checks focus on security, cost optimization, and fault tolerance:

  • AWS CloudTrail Logging (Security category): Checks for your use of AWS CloudTrail. CloudTrail provides increased visibility into activity in your AWS account by recording information about AWS API calls made on the account.
  • Amazon Route 53 Latency Resource Record Sets (Cost Optimization category): Checks for Amazon Route 53 latency resource record sets that are configured inefficiently; correcting them can lead to cost savings. If you create only one latency resource record set for a domain name, all queries are routed to one region, and you may pay extra for latency-based routing without getting the benefits.
  • Amazon Route 53 MX and SPF Resource Record Sets (Security category): Checks for an SPF resource record set for each MX resource record set. An SPF (sender policy framework) record publishes a list of servers that are authorized to send email for your domain, which helps reduce spam by detecting and stopping email address spoofing.
  • Amazon Route 53 Deleted Health Checks (Fault Tolerance category): Checks for resource record sets that are associated with health checks that have been deleted. If you delete a health check without updating the associated resource record sets, the routing of DNS queries for your DNS failover configuration may be unpredictable.
  • Amazon Route 53 Failover Resource Record Sets (Fault Tolerance category): Checks for Amazon Route 53 failover resource record sets that are misconfigured. When Amazon Route 53 health checks determine that the primary resource is unhealthy, Amazon Route 53 responds to queries with a secondary, backup resource record set. You must create correctly configured primary and secondary resource record sets for failover to work.

For more information on Trusted Advisor and descriptions of the other 32 checks, visit AWS Trusted Advisor.

Mar 13, 2014

Amazon ElastiCache Announces Redis 2.8.6 Support

We are pleased to announce that ElastiCache for Redis now supports engine version 2.8.6. Customers can now launch new clusters with Redis 2.8.6, as well as upgrade existing ones to the new engine version. Among the improvements in Redis 2.8.6 are partial resynchronization that speeds master-slave sync, and improved consistency through the ability to suspend writes to master if the number of slaves is insufficient.
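
As an illustrative sketch added by the editor (cluster IDs and node type are placeholders), here is how launching a new 2.8.6 cluster or upgrading an existing one might look with boto3:

```python
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

# Launch a new single-node Redis cluster on engine version 2.8.6.
elasticache.create_cache_cluster(
    CacheClusterId="my-redis-286",        # placeholder cluster ID
    Engine="redis",
    EngineVersion="2.8.6",
    CacheNodeType="cache.m1.medium",      # placeholder node type
    NumCacheNodes=1,
)

# Or upgrade an existing cluster to engine version 2.8.6.
elasticache.modify_cache_cluster(
    CacheClusterId="my-existing-redis",   # placeholder cluster ID
    EngineVersion="2.8.6",
    ApplyImmediately=True,
)
```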

To learn more, we encourage you to read Jeff Barr’s blog. You can easily launch an ElastiCache for Redis cluster with engine version 2.8.6 via a few clicks on the AWS Management Console.

Mar 13, 2014

Announcing Amazon CloudFront Usage Charts for Web Distributions - Track Trends in Requests & Data Transfer

We are excited to let you know that you can now view your Amazon CloudFront usage with CloudFront Usage Charts, six new charts in the AWS Management Console. You can use the charts to track trends in data transfer and requests (both HTTP and HTTPS) for each of your active CloudFront Web distributions. The charts show your usage from each CloudFront region at daily or hourly granularity, going back up to 60 days, and they also include totals, average, and peak usage during the time interval selected.

Here are details on the six charts:
  • Number of HTTP Requests; Number of HTTPS Requests: These two charts (one for HTTP and one for HTTPS) show the number of HTTP or HTTPS requests served by edge locations in the selected region for the specified CloudFront distribution.
  • Data Transferred over HTTP; Data Transferred over HTTPS: These two charts (again, one each for HTTP and HTTPS) show the total amount of data transferred over HTTP or HTTPS from CloudFront.
  • Data Transferred from CloudFront Edge Locations to Your Users: This chart shows data transferred from CloudFront edge locations in the selected region to users, combining both HTTP and HTTPS usage.
  • Data Transferred from CloudFront to Your Origin: This chart shows data transferred from CloudFront edge locations in the selected region to your origin for POST, PUT, PATCH, OPTIONS, and DELETE methods, again combining both HTTP and HTTPS.

There are no additional charges for CloudFront Usage Charts. You do not need to make any changes to your CloudFront configuration to view these charts - simply navigate to the Amazon CloudFront Management Console and select the Reports and Analytics link on the left navigation panel.

You can learn more about CloudFront Usage Charts by viewing our walk-through in the Amazon CloudFront Developer Guide or by visiting the Amazon CloudFront product detail page.

Mar 12, 2014

Amazon AppStream Available to All Customers

We are excited to announce that Amazon AppStream is now available to all AWS customers!

Amazon AppStream is a flexible, low-latency service that lets you stream resource intensive applications from the cloud. It deploys and renders your application on AWS infrastructure and streams the output to mass-market devices, such as personal computers, tablets, and mobile phones. Because your application is running in the cloud, it can scale to handle vast computational and storage needs, regardless of the devices your customers are using. You can choose to stream either all or parts of your application from the cloud. Amazon AppStream enables use cases for applications that wouldn’t be possible running natively on mass-market devices. Using Amazon AppStream, your applications are no longer constrained by the hardware in your customer’s hands.

Amazon AppStream includes an SDK that currently supports streaming applications from Microsoft Windows Server 2008 R2 to devices running Fire OS, Android, iOS, Mac OS X, and Microsoft Windows.

Amazon AppStream is initially available in the US East (N. Virginia) region, with support for more regions coming soon.

For more information about Amazon AppStream, please visit the product detail page where you can learn more and watch some short introductory videos that explain more about the service. You can get started with the service using the AWS Management Console.

Mar 06, 2014

Announcing DynamoDB Cross-Region Export/Import

Amazon DynamoDB automatically replicates your data three ways within a region. Now you can also back up your data across regions in a few steps via the Management Console. The Cross-Region Export/Import console feature enables you to back up the data from your DynamoDB tables to another AWS region, or within the same region, using AWS Data Pipeline, Amazon Elastic MapReduce (EMR), and Amazon S3. This feature allows you to set up the export/import without having to manually create the pipeline or manually provision and maintain the EMR cluster.

To learn more about this new feature, please visit our blog.

To get started, visit:

Mar 06, 2014

Elastic Load Balancing Announces Access Logs

We are excited to announce a new feature for Elastic Load Balancing: Access Logs. This feature records all requests sent to your load balancer, and stores the logs in Amazon S3 for later analysis.

With Access Logs, you can obtain request-level details in addition to the existing load balancer metrics provided via Amazon CloudWatch. The logs contain information such as client IP address, request path, latencies, and server responses. You can use this data to pinpoint when application errors occurred or response times increased, and which requests were impacted.

These logs can also be used for web analytics to determine popular pages, page trends over time, or unique visitors. To make this analysis easier, we have integrated with Amazon Elastic MapReduce as well as analytics tools from our partners, Splunk and Sumo Logic.
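
As an editorial illustration (bucket, prefix, and load balancer name are placeholders), enabling access logs via the API might look like this; note the bucket policy must allow Elastic Load Balancing to write to the bucket.

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Publish access logs to S3 every 60 minutes. The bucket must already exist
# and grant Elastic Load Balancing permission to write objects into it.
elb.modify_load_balancer_attributes(
    LoadBalancerName="my-load-balancer",   # placeholder name
    LoadBalancerAttributes={
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": "my-elb-logs",   # placeholder bucket
            "EmitInterval": 60,
            "S3BucketPrefix": "production",  # placeholder prefix
        },
    },
)
```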

To learn more about Access Logs, please see the documentation.

Mar 05, 2014

Amazon CloudFront Adds SNI Custom SSL and HTTP to HTTPS Redirect Features

We are excited to announce that you can now use your own SSL certificates with Amazon CloudFront at no additional charge with Server Name Indication (SNI) Custom SSL. SNI is supported by most modern browsers, and provides an efficient way to deliver content over HTTPS using your own domain and SSL certificate. There are no additional certificate management fees to use this feature; you simply pay normal Amazon CloudFront rates for data transfer and HTTPS requests.

SNI Custom SSL relies on the SNI extension of the Transport Layer Security protocol, which allows multiple domains to serve SSL traffic over the same IP address by including the hostname viewers are trying to connect to. Amazon CloudFront delivers your content from each edge location and offers the same security as the Dedicated IP Custom SSL feature. SNI Custom SSL works with most modern browsers, including Chrome version 6 and later (running on Windows XP and later or OS X 10.5.7 and later), Safari version 3 and later (running on Windows Vista and later or Mac OS X 10.5.6 and later), Firefox 2.0 and later, and Internet Explorer 7 and later (running on Windows Vista and later). Some users may not be able to access your content because some older browsers do not support SNI and will not be able to establish a connection with CloudFront to load the HTTPS version of your content. If you need to support non-SNI-compliant browsers for HTTPS content, we recommend using our Dedicated IP Custom SSL feature.

Setup is easy: simply follow the instructions outlined in the CloudFront Developer Guide and start serving your content quickly and securely.

You can also now configure Amazon CloudFront to require viewers to interact with your content over an HTTPS connection using the HTTP to HTTPS Redirect feature. When you enable HTTP to HTTPS Redirect, CloudFront will respond to an HTTP request with a 301 redirect response requiring the viewer to resend the request over HTTPS. There are no additional charges for using HTTP to HTTPS Redirect, but standard request fees apply.
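
For illustration only, here is one way these two settings might be applied together through the API with boto3; the distribution ID and IAM certificate ID are hypothetical, and this is a sketch rather than a complete configuration workflow.

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Fetch the current distribution config, switch the default cache behavior to
# redirect HTTP to HTTPS, and serve a custom certificate via SNI.
dist_id = "E1EXAMPLE"                              # hypothetical distribution ID
resp = cloudfront.get_distribution_config(Id=dist_id)
config = resp["DistributionConfig"]

config["DefaultCacheBehavior"]["ViewerProtocolPolicy"] = "redirect-to-https"
config["ViewerCertificate"] = {
    "IAMCertificateId": "ASCAEXAMPLE",             # certificate previously uploaded to IAM
    "SSLSupportMethod": "sni-only",
    "MinimumProtocolVersion": "TLSv1",
}

cloudfront.update_distribution(Id=dist_id, IfMatch=resp["ETag"],
                               DistributionConfig=config)
```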

To learn more about the Amazon CloudFront SNI Custom SSL or HTTP to HTTPS Redirect features, please visit the Amazon CloudFront Custom SSL Page or the CloudFront Developer Guide.

Mar 05, 2014

VM Import for Windows Server 2012

VM Import now supports virtual machines running Windows Server 2012. You can import VMs running Windows Server 2012 R1 (Datacenter or Standard edition) to Amazon EC2 from VMware ESX and VMware Workstation VMDK images, Citrix Xen VHD images, and Microsoft Hyper-V VHD images. Once imported, these VMs will run as Windows instances within Amazon EC2. You can also export previously imported EC2 instances running Windows Server 2012 to VMware ESX VMDK, VMware ESX OVA, Microsoft Hyper-V VHD, or Citrix Xen VHD file formats using VM Export.

In addition to adding Windows Server 2012 R1 support, VM Import has also enhanced the import experience for Windows Server 2003 and Windows Server 2008 VMs. Amazon EC2 instances created from these VMs will now benefit from having the EC2Config service installed and from having the latest-generation Citrix PV drivers.

VM Import can help you migrate your existing workloads, enterprise applications, or VM catalog to Amazon EC2. In addition to the newly added support for Windows Server 2012 VMs, you can also import Windows Server 2003, Windows Server 2008, Red Hat Enterprise Linux (RHEL) 5.1-6.5 (using Cloud Access), CentOS 5.1-6.5, Ubuntu 12.04, 12.10, 13.04, and 13.10, and Debian 6.0.0-6.0.8 and 7.0.0-7.2.0 VMs to EC2 using VM Import. You can also use VM Export to export your previously imported Linux and Windows VMs.

To learn more about VM Import, please visit: http://aws.amazon.com/ec2/vm-import/.
Mar 04, 2014

Announcing Red Hat Enterprise Linux availability in AWS GovCloud (US)

Red Hat Enterprise Linux is now available in the AWS GovCloud (US) Region. Amazon Web Services (AWS) and Red Hat® have teamed to offer Red Hat Enterprise Linux on Amazon EC2, a complete, enterprise-class computing environment for running business-critical applications and workloads.

Red Hat maintains the base Red Hat Enterprise Linux images for Amazon EC2. AWS customers receive updates at the same time that updates are made available from Red Hat, so your computing environment remains reliable and secure and your Red Hat Enterprise Linux-certified applications maintain their supportability.

You can launch Red Hat Enterprise Linux directly from the Amazon EC2 Launch Wizard in the Management Console for the AWS GovCloud (US) Region using your AWS GovCloud credentials.


Learn more about the AWS GovCloud (US) Region or join us for our weekly AWS GovCloud (US) Office Hours every Tuesday at 1:00 – 2:00 PM EST and the Intro to AWS GovCloud (US) Region webinar on March 12th, 1:30 – 2:30 PM EST to learn more.

Mar 03, 2014

AWS CloudFormation supports AWS OpsWorks

You can now use AWS OpsWorks and AWS CloudFormation together to manage applications on AWS. AWS CloudFormation enables modeling, provisioning and version-controlling of a wide range of AWS resources. AWS OpsWorks is an application management service that simplifies software configuration, application deployment, scaling, and monitoring.

You can now model OpsWorks components (stacks, layers, instances, and applications) inside CloudFormation templates, and provision them as CloudFormation stacks. This enables you to document, version control, and share your OpsWorks configuration. You have the flexibility to provision OpsWorks components and other related AWS resources, such as Amazon VPC and Elastic Load Balancing, with a unified CloudFormation template or separate CloudFormation templates.

Here is a sample CloudFormation template provisioning an OpsWorks PHP application, and here is a sample CloudFormation template provisioning a load balanced OpsWorks application inside a VPC. Please refer to the documentation for details.

Mar 03, 2014

Announcing new features in AWS Activate

We’re excited to announce three new features for members of AWS Activate:

  • One-on-one office hours with an AWS Solutions Architect. AWS Activate members from around the world can now book one-on-one office hours with an AWS Solutions Architect. A Solutions Architect has deep technical expertise and can address issues such as security, architectural best practices, application performance, high availability, and cost optimization. Discussing your use cases and architectural requirements with a Solutions Architect can help make sure that you are properly leveraging AWS services and are deploying scalable, resilient, cost-effective solutions in the cloud.
  • Expanded Training in the Self-Starter package. Members of the Self-Starter Package now have access to AWS Essentials eLearning training (normally $600) plus eight tokens for self-paced labs (normally $30 per token). These trainings can help you learn the fundamentals of AWS products and services and give you hands-on experience working with AWS technologies. Startups eligible for the Portfolio Package will continue to have access to these trainings.
  • New exclusive offers: As a member of AWS Activate, you have access to exclusive offers from third parties that can add valuable tools to your startup. We're happy to announce seven new offers today, including offers from Bitnami, Cloudability, CopperEgg, Amazon Login and Pay, Nitrous.IO, Stackdriver, and Trend Micro.

To learn more and sign up for AWS Activate, click here.

Feb 28, 2014

AWS Command Line Interface Launches AWS Data Pipeline Commands

The AWS Command Line Interface now fully supports AWS Data Pipeline commands. You can create and manage data processing workflows in AWS Data Pipeline using the same familiar tool you use to manage other AWS services, including data sources such as Amazon S3, Amazon DynamoDB, Amazon RDS, and Amazon Redshift. The CLI lets you easily create and update pipelines with your existing pipeline definitions, query for specific objects and attributes within a definition or other command outputs using the --query option, and write scripts to automate your processes.

To start using the AWS CLI, visit http://aws.amazon.com/cli/. To learn more about AWS Data Pipeline, visit http://aws.amazon.com/datapipeline/.

Feb 27, 2014

AWS Identity and Access Management (IAM) announces MFA protection for cross-account access

We are excited to announce support for multi-factor authentication (MFA) protection for cross-account access.

MFA is a security best practice that adds an extra layer of protection to your AWS account. It requires users to present two independent credentials: what the user knows (password or secret access key) and what the user has (MFA device). IAM already supports adding MFA protection when you grant access to users within a single AWS account. With today’s announcement, you can add similar protection when granting access to users across accounts, by requiring them to authenticate with MFA before assuming an IAM role.
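
For illustration only (role ARN, MFA device ARN, and token code are placeholders, and the target role's trust policy is assumed to require MFA), assuming such a role from code might look like this:

```python
import boto3

sts = boto3.client("sts")

# Assume a role in another account while presenting an MFA token. The role's
# trust policy is assumed to require aws:MultiFactorAuthPresent.
credentials = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/CrossAccountAdmins",  # placeholder
    RoleSessionName="mfa-protected-session",
    SerialNumber="arn:aws:iam::111122223333:mfa/alice",           # placeholder MFA device
    TokenCode="123456",                                           # current code from the device
)["Credentials"]

# Use the temporary credentials for subsequent calls in the target account.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)
```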

For more information, visit the Configuring MFA-Protected API Access section in the Using IAM guide.

Feb 27, 2014

Announcing New Features in the Amazon EC2 Management Console

We are pleased to announce a number of new features and design improvements to the Amazon EC2 Management Console. These changes follow updates we made in late 2013 when we introduced the new Launch Instance Wizard, AWS Marketplace integration and a refreshed look and feel to parts of the console. Now, we have updated the console with the new design consistently throughout, including in Events, Spot Requests, Bundle Tasks, Volumes, Snapshots, Security Groups, Placement Groups, Load Balancers and Network Interfaces.

We've also added new features that make it easier to manage your EC2 resources:

  • Easily create a new security group based on an existing one using 'Copy to new'.
  • Manage inbound and outbound VPC security group rules directly from within the EC2 console.
  • Locate resources that are associated with one another, such as a snapshot that is associated with an EBS volume, using deep links.
  • Easily compare Spot pricing across AZs using the Spot Pricing History graph.
  • Manage the tags of your Spot requests in the EC2 Console.

You can find more details about these new features in Jeff Barr's blog post.

To access these features and experience the new look and feel, just visit the EC2 Management Console and navigate to one of the updated sections, and you will be presented with the option to try out the new console. Don't forget to let us know what you think via the console feedback button!

Feb 25, 2014

New Trusted Advisor check on CloudFront Content Delivery Optimization

The new CloudFront Content Delivery Optimization check in AWS Trusted Advisor can help you optimize the delivery of popular content from Amazon Simple Storage Service (Amazon S3).

This new check calculates the ratio of data transferred out to data stored in your Amazon S3 bucket to identify cases where you could improve performance and perhaps save money by delivering your content with Amazon CloudFront instead of directly from Amazon S3.

When you use Amazon CloudFront to deliver this content, requests for your content are automatically routed to the nearest edge location, and your content is cached so it can be delivered with lower latency. When the data transferred is more than 10 TB per month, pricing for Amazon CloudFront is lower than for data transferred directly from Amazon S3, meaning you can also save money.

CloudFront Content Delivery Optimization check in AWS Trusted Advisor

To date, AWS Trusted Advisor provides 32 checks of your AWS environment and makes recommendations to reduce cost, improve system performance and reliability, or help close security gaps. You can stay up-to-date with your AWS resource deployment via weekly AWS Trusted Advisor notifications. Up to 3 recipients can get weekly status updates and savings estimations in English or Japanese. Learn more about setting up notifications on Jeff Barr's Blog and FAQ.

For more information on AWS Trusted Advisor and the other 31 checks covering major AWS services, including Amazon EC2, Amazon S3, Amazon EBS, Amazon RDS, Elastic Load Balancing, Amazon Route 53, and Amazon CloudFront, visit AWS Trusted Advisor. Read more about Amazon CloudFront and Amazon Simple Storage Service.

Feb 20, 2014

AWS Data Pipeline Now Available in Four New Regions

AWS Data Pipeline is now supported in four additional regions as follows:

  • US West (Oregon) Region or us-west-2
  • EU (Ireland) Region or eu-west-1
  • Asia Pacific (Sydney) Region or ap-southeast-2
  • Asia Pacific (Tokyo) Region or ap-northeast-1

Although AWS Data Pipeline was previously available only from the US East (Northern Virginia) Region (us-east-1), it has always supported cross-region data flows. The new regions will help reduce service latency and provide greater redundancy for customers. Pricing in the new regions will be the same as in us-east-1.

AWS Data Pipeline helps customers move, integrate, and process data across AWS compute and storage resources, as well as customers' on-premises resources. To get started with AWS Data Pipeline for free, visit the AWS Data Pipeline detail page.

Feb 20, 2014

Amazon CloudFront Expands Media Streaming Capabilities by Offering Smooth Streaming Support

We’re excited to let you know that Amazon CloudFront now supports a new option for streaming on-demand video to your end users: Microsoft Smooth Streaming. You can now use CloudFront to deliver video using the Smooth Streaming format without the need to set up and operate any media servers. This adds Smooth Streaming to the set of video streaming technologies that CloudFront supports, which includes native on-demand streaming using HLS and multi-format live streaming using third-party media servers such as Wowza and Adobe.

There are no additional charges for using this feature. As with other Amazon CloudFront features, you pay only for what you use and there are no upfront fees or minimum monthly usage commitments. You can learn more about Smooth Streaming using Amazon CloudFront by visiting the Amazon CloudFront streaming page or by reading the Amazon CloudFront Developer Guide.

You can also join our webinar at 11:00 AM Pacific (UTC-7) on March 19, 2014 to learn more about video streaming using Smooth Streaming over Amazon CloudFront and other Amazon CloudFront media specific capabilities that enable you to deliver your content at scale to a global audience. Register for this webinar.

Feb 20, 2014

Amazon RDS now offers new faster and cheaper DB instances

We are pleased to announce the immediate availability of a set of new M3 database instances for Amazon RDS. These new instances provide you with a similar ratio of CPU and memory resources as our previous M1 database instances but offer 50% more computational capability per core, significantly improving overall compute capacity. These new instances are priced about 6% lower than M1 instances, providing you with significantly higher and more consistent compute performance at a lower price.

These new instances are available for all database engines and in all AWS regions, with AWS GovCloud (US) support coming in the future.

Among these new DB instances, db.m3.xlarge and db.m3.2xlarge are optimized for Provisioned IOPS storage. For a workload with 50% writes and 50% reads running on the db.m3.2xlarge instance type, it is possible to realize up to 12,500 IOPS for MySQL and 25,000 IOPS for Oracle and PostgreSQL. Refer to the Provisioned IOPS storage section of the User Guide to learn more.
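
As an editorial sketch (identifier, credentials, storage size, and IOPS figure are placeholders, not recommendations), creating a MySQL instance on one of the new classes with Provisioned IOPS storage might look like this:

```python
import boto3

rds = boto3.client("rds", region_name="us-west-2")

# Create a MySQL instance on the new db.m3.2xlarge class with Provisioned
# IOPS storage. All values below are placeholders.
rds.create_db_instance(
    DBInstanceIdentifier="my-m3-database",
    DBInstanceClass="db.m3.2xlarge",
    Engine="mysql",
    AllocatedStorage=1000,          # GB
    Iops=10000,                     # Provisioned IOPS
    MasterUsername="admin",
    MasterUserPassword="replace-with-a-strong-password",
)
```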

Pricing for M3 database instances starts at $0.053/hour (effective price) for a 3-year Heavy Utilization RI and $0.150/hour for On-Demand usage in the US West (Oregon) Region for MySQL. For more information on pricing, visit the Amazon RDS pricing page.

Feb 20, 2014

Analyze Streaming Data from Amazon Kinesis with Amazon Elastic MapReduce (EMR)

We are pleased to announce the release of the Amazon Elastic MapReduce (Amazon EMR) Connector to Amazon Kinesis. Kinesis can collect data from hundreds of thousands of sources, such as web site click-streams, marketing and financial information, manufacturing instrumentation, social media and more. This connector enables batch processing of data in Kinesis streams with familiar Hadoop ecosystem tools such as Hive, Pig, Cascading, and standard MapReduce. You can now analyze data in Kinesis streams without having to write, deploy and maintain any independent stream processing applications.

You can use this connector, for example, to write a SQL query using Hive against a Kinesis stream, or to build reports that join and process Kinesis stream data with multiple data sources such as Amazon DynamoDB, Amazon S3, and HDFS. You can build reliable and scalable ETL processes that filter and archive Kinesis data into permanent data stores including Amazon S3, Amazon DynamoDB, or Amazon Redshift.

To facilitate end-to-end log processing scenarios using Kinesis and EMR, we have created a Log4J Appender that streams log events directly into a Kinesis stream, making the log entries available for processing in EMR. You can get started today by launching a new EMR cluster and using the code samples provided in the tutorials and FAQs. If you’re new to Kinesis you can learn more by visiting the Kinesis detail page.

Feb 19, 2014

Elastic Load Balancing – Perfect Forward Secrecy and more new security features

We have made several enhancements to Elastic Load Balancing to further improve the security of your application traffic, making it easier for you to better protect end users’ confidential data and privacy.

You can now use these new security features:

  • Perfect Forward Secrecy is a feature that provides additional safeguards against the eavesdropping of encrypted data, through the use of a unique random session key. This prevents the decoding of captured data, even if the secret long-term key is compromised.
  • Server Order Preference lets you configure the load balancer to enforce cipher ordering, providing more control over the level of security used by clients to connect with your load balancer.
  • The new Predefined Security Policy simplifies the configuration of your load balancer by providing a recommended cipher suite that adheres to AWS security best practices. The policy includes the latest security protocols (TLS 1.1 and 1.2), enables Server Order Preference, and offers high security ciphers such as those used for Elliptic Curve signatures and key exchanges.

You can configure these new features with the AWS Management Console, API, or Command Line Interface (CLI).
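
For illustration only, the sketch below shows how a policy based on a predefined security policy might be created and attached to an HTTPS listener via the API; the load balancer and policy names are placeholders, and the predefined policy identifier shown (ELBSecurityPolicy-2014-01) is an assumption.

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Create an SSL negotiation policy that references a predefined security
# policy (enabling Server Order Preference and forward-secrecy ciphers),
# then attach it to the HTTPS listener on port 443.
elb.create_load_balancer_policy(
    LoadBalancerName="my-load-balancer",    # placeholder name
    PolicyName="my-ssl-policy",
    PolicyTypeName="SSLNegotiationPolicyType",
    PolicyAttributes=[
        {"AttributeName": "Reference-Security-Policy",
         "AttributeValue": "ELBSecurityPolicy-2014-01"},   # assumed identifier
    ],
)
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName="my-load-balancer",
    LoadBalancerPort=443,
    PolicyNames=["my-ssl-policy"],
)
```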

To learn more about these new features, see the documentation.

Feb 18, 2014

Amazon Route 53 Announces Fast Interval Health Checks and Configurable Failover Thresholds

We are excited to announce two new features for Route 53 health checks and DNS Failover: fast interval health checks and configurable failover thresholds.

With fast interval health checks, Route 53 performs health check observations of your endpoint (for example, a web server) every 10 seconds instead of the default interval of 30 seconds. This enables Route 53 to confirm more quickly that an endpoint is unavailable and shortens the time required for DNS Failover to redirect traffic.

Configurable failover thresholds let you specify the number of consecutive health check observations required for Route 53 to confirm that an endpoint has switched from a healthy to unhealthy state, or vice versa, from 1 to 10 observations. You can select a lower threshold in order to fail over more quickly after an endpoint becomes unavailable, or a higher threshold to prevent traffic from being redirected in response to temporary or transient events.
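
As an editorial illustration (endpoint IP address and path are placeholders), creating a fast-interval health check with a lowered failure threshold might look like this with boto3:

```python
import boto3
import uuid

route53 = boto3.client("route53")

# Create a health check that probes the endpoint every 10 seconds and marks
# it unhealthy after 2 consecutive failed observations.
route53.create_health_check(
    CallerReference=str(uuid.uuid4()),
    HealthCheckConfig={
        "IPAddress": "192.0.2.10",      # placeholder endpoint
        "Port": 80,
        "Type": "HTTP",
        "ResourcePath": "/health",      # placeholder path
        "RequestInterval": 10,          # fast interval (default is 30 seconds)
        "FailureThreshold": 2,          # configurable from 1 to 10 observations
    },
)
```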

You can use health checks along with Route 53's DNS Failover feature to help detect an outage of your website and redirect your end users to alternate locations where your application is operating properly. You can also use health checks to monitor your website's availability. Route 53 health checks are integrated with Amazon CloudWatch, so you can view the current and past status of your health checks, or configure alarms and notifications.

Getting started is easy. To learn more, visit the Route 53 product page for full details and pricing, or see our documentation.

Feb 12, 2014

Amazon EC2 G2 Instances Available in Asia Pacific (Tokyo)

G2 Instances are now available in our Asia Pacific (Tokyo) Region. G2 instances are designed for applications that require 3D graphics capabilities. The instance is backed by a high-performance NVIDIA GPU, making it ideally suited for video creation services, 3D visualizations, streaming graphics-intensive applications, and other server-side workloads requiring massive parallel processing power. With this new instance type, customers can build high-performance DirectX, OpenGL, CUDA, and OpenCL applications and services without making expensive up-front capital investments.

Customers can launch G2 instances using the AWS console, Amazon EC2 command line interface, AWS SDKs, and third-party libraries. To learn more about G2 instances, visit the Amazon EC2 details page. To get started immediately, visit the AWS Marketplace for GPU machine images from NVIDIA and other Marketplace sellers.

Feb 10, 2014

AWS CloudFormation supports Amazon Redshift and updating AWS Elastic Beanstalk

AWS CloudFormation now supports provisioning Amazon Redshift clusters and updating AWS Elastic Beanstalk applications. AWS CloudFormation is a service that simplifies provisioning and management of a wide range of AWS resources.

You can now model a Redshift cluster configuration in a CloudFormation template file and have CloudFormation launch the cluster with a few clicks or CLI commands. The template enables you to version control, replicate, or share your Redshift configuration. Here is a sample CloudFormation template that provisions a Redshift cluster.

Previously, you could provision an Elastic Beanstalk application as part of a CloudFormation stack, as shown in this sample template which provisions an Elastic Beanstalk application. Now, in addition to provisioning, you can also update an Elastic Beanstalk application by updating the associated CloudFormation template and the stack.

To learn more about AWS CloudFormation, please see our detail page, documentation or watch this introductory video. We also have a large collection of sample templates that makes it easy to get started with CloudFormation within minutes.

Jan 30, 2014

Amazon Route 53 Adds Health Checking Features: String Matching and HTTPS Support

We are excited to announce two new health-checking features for Amazon Route 53: string matching and HTTPS support.

When you enable Route 53 health checks, Route 53 regularly makes Internet requests to your application's endpoints—for example, web or application servers—from multiple locations around the world to determine whether the endpoint is available.

With string matching health checks, Route 53 searches the body of the response that your endpoint returns. If the response body (typically a web page) contains a string that you specify, Route 53 considers the endpoint healthy. Using string matching health checks, you can help ensure that your web application is serving the correct content, in addition to verifying that the web server is running and reachable over the Internet.

With HTTPS support, you can now create health checks for secure websites that are available only over SSL, to confirm that the web server is available and responding to requests over HTTPS. You can also combine string matching health checks with HTTPS health checks to verify that your secure website is returning the correct content.
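
For illustration only (domain name, path, and marker string are placeholders), a combined HTTPS and string matching health check might be created like this:

```python
import boto3
import uuid

route53 = boto3.client("route53")

# Health-check a secure site over HTTPS and require the response body to
# contain a marker string before the endpoint is considered healthy.
route53.create_health_check(
    CallerReference=str(uuid.uuid4()),
    HealthCheckConfig={
        "FullyQualifiedDomainName": "www.example.com",   # placeholder domain
        "Port": 443,
        "Type": "HTTPS_STR_MATCH",
        "ResourcePath": "/status",                       # placeholder path
        "SearchString": "service-ok",                    # placeholder marker string
    },
)
```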

You can use health checks along with Route 53's DNS failover feature. With DNS failover, Amazon Route 53 can help detect an outage of your website and redirect your end users to alternate locations where your application is operating properly. Using Route 53 DNS failover, you can run your primary application simultaneously in multiple AWS regions around the world. Route 53 automatically removes from service any region where your application is unavailable. You can also take advantage of a simple backup site hosted on Amazon Simple Storage Service (Amazon S3), with Route 53 directing users to this backup site in the event that your application becomes unavailable.

You can also use health checks to monitor your website's availability. Route 53 health checks are integrated with Amazon CloudWatch, and you can view current and past statuses of your health checks, or configure alarms and notifications.

Getting started is easy. To learn more, visit the Route 53 product page or the Amazon Route 53 Developer Guide. See the Route 53 product page for full details and pricing.

Jan 29, 2014

Amazon SES Adds Support for Additional AWS Regions

In addition to US East (Northern Virginia), Amazon Simple Email Service (Amazon SES) is now available in two further AWS regions: EU (Ireland) and US West (Oregon).

Support for these additional regions means that you can reduce the network latency of your email-sending application by choosing to use the Amazon SES endpoint in the AWS region that is closest to your application. And, if you are in Europe, you can now have an email delivery service hosted entirely in the EU Region.
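
As a minimal editorial sketch (the sender and recipient addresses are placeholders and must be verified with SES first), sending through the EU (Ireland) endpoint is a matter of choosing that region when creating the client:

```python
import boto3

# Send through the Amazon SES endpoint in EU (Ireland) to keep email-sending
# close to an application hosted in Europe.
ses = boto3.client("ses", region_name="eu-west-1")

ses.send_email(
    Source="sender@example.com",                          # placeholder, must be verified
    Destination={"ToAddresses": ["recipient@example.com"]},
    Message={
        "Subject": {"Data": "Hello from eu-west-1"},
        "Body": {"Text": {"Data": "This message was sent via the Ireland SES endpoint."}},
    },
)
```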

To learn more about Amazon SES, visit the detail page. For details about using Amazon SES in multiple regions, see the Amazon SES Developer Guide.

Jan 29, 2014

Amazon SQS announces Dead Letter Queues

We are excited to announce that Amazon Simple Queue Service (SQS) now offers Dead Letter Queues (DLQs). With DLQs, you can designate special queues to collect messages that could not be processed successfully after a specified number of attempts. DLQs make it easier for you to write applications, or assign people, to analyze and understand why messages cannot be processed, helping you troubleshoot events that may be impacting your end users or other systems.

DLQs are easy to set up. In fact, they are regular SQS queues: you simply assign a DLQ to a "source queue" you have already created. You can use either the AWS Management Console or the SQS APIs to connect a source queue to a Dead Letter Queue. Dead Letter Queues are available starting today in all AWS regions except AWS GovCloud (US).
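
A minimal sketch of that wiring with boto3: create a DLQ, then attach it to an existing source queue via a redrive policy (queue names and the receive count are placeholders).

    # Minimal sketch: designate a dead letter queue for an existing source queue.
    # Queue names and maxReceiveCount are illustrative placeholders.
    import json
    import boto3

    sqs = boto3.client("sqs")

    # The DLQ is just a regular SQS queue.
    dlq_url = sqs.create_queue(QueueName="my-app-dlq")["QueueUrl"]
    dlq_arn = sqs.get_queue_attributes(
        QueueUrl=dlq_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # Attach it to the source queue: after 5 failed receives, messages move to the DLQ.
    source_url = sqs.get_queue_url(QueueName="my-app-queue")["QueueUrl"]
    sqs.set_queue_attributes(
        QueueUrl=source_url,
        Attributes={
            "RedrivePolicy": json.dumps(
                {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
            )
        },
    )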

For more information about SQS Dead Letter Queues, please visit Jeff Barr's Blog, the SQS Documentation, and the SQS API Reference. You can learn more about SQS at the SQS Detail Page.

Jan 29, 2014

Track and manage instance use and spending with new EC2 Usage Reports

We are excited to announce the immediate availability of EC2 Usage Reports, which are designed to make it easier for you to track and better manage your EC2 usage and spending. You can use these interactive usage reports to view your historical EC2 instance usage, and help you plan for future EC2 usage. There are currently two reports available:

  • EC2 Instance Usage Report – This report shows your instance usage, in instance hours or cost, at hourly, daily, or monthly granularity. You can also filter or group the data by region, instance type, platform, tenancy, purchase option, consolidated account, or tags.
  • EC2 Reserved Instance Utilization Report – For previously purchased Reserved Instances, this report shows you the usage cost, total cost, and savings versus on-demand instance usage, as well as average and maximum utilization.

You can easily access these reports by visiting the Reports section of the billing console. You can customize the reports and bookmark them for easy access in the future. For more information about the reports, see the EC2 User Guide.

Jan 28, 2014

Announcing Amazon Kinesis Storm Spout

We are pleased to make the Amazon Kinesis Storm Spout available. The Amazon Kinesis Storm Spout helps developers use Amazon Kinesis with Storm, an open-source, distributed, real-time computation system. This version of the Amazon Kinesis Storm Spout fetches data from the Amazon Kinesis stream and emits it as tuples that Storm topologies can process. Developers can add the Spout to their existing Storm topologies and leverage Amazon Kinesis as a reliable, scalable stream capture, storage, and replay service that powers their Storm processing applications.
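
The Spout itself is a Java component for Storm, but the underlying read pattern it implements, walking a stream's shards and polling for records, can be sketched in Python with boto3 for illustration (the stream name is a placeholder; this is not the Spout's actual API).

    # Illustrative sketch only: the shard-iterator/get-records loop that a
    # Kinesis consumer (such as the Storm Spout) performs under the hood.
    import time
    import boto3

    kinesis = boto3.client("kinesis")
    stream = "my-stream"   # placeholder

    shards = kinesis.describe_stream(StreamName=stream)["StreamDescription"]["Shards"]
    iterator = kinesis.get_shard_iterator(
        StreamName=stream,
        ShardId=shards[0]["ShardId"],
        ShardIteratorType="TRIM_HORIZON",      # read from the oldest available record
    )["ShardIterator"]

    while iterator:
        batch = kinesis.get_records(ShardIterator=iterator, Limit=100)
        for record in batch["Records"]:
            print(record["Data"])              # a Storm spout would emit this as a tuple
        iterator = batch.get("NextShardIterator")
        time.sleep(1)                          # avoid polling the shard too aggressively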

To get started, see http://aws.amazon.com/kinesis/developer-resources/. You can learn more about Amazon Kinesis here.

Jan 27, 2014

CloudFormation supports Auto Scaling Scheduled Actions and DynamoDB Secondary Indexes

AWS CloudFormation now supports Auto Scaling scheduled actions and DynamoDB local and global secondary indexes. AWS CloudFormation is a service that simplifies provisioning and management of a wide range of AWS resources.

You can model your Auto Scaling architecture in a CloudFormation template. The CloudFormation service can automatically create the desired architecture from the template in a fast and consistent manner. With support for scheduled actions, you can now model Auto Scaling schedules in CloudFormation templates. If you have a predictable traffic pattern, you can scale Auto Scaling groups using scheduled actions. We have created a sample template to show you how.
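
For illustration, a scheduled-action resource in a template might look roughly like the fragment below, shown as a Python dict for consistency with the other sketches; the group reference and schedule are placeholders, not the contents of the sample template.

    # Minimal sketch: a CloudFormation Resources fragment that scales a group up
    # on weekday mornings. Group reference and cron schedule are placeholders.
    scheduled_action_fragment = {
        "ScaleUpWeekdayMornings": {
            "Type": "AWS::AutoScaling::ScheduledAction",
            "Properties": {
                "AutoScalingGroupName": {"Ref": "WebServerGroup"},
                "MinSize": 4,
                "MaxSize": 10,
                "Recurrence": "0 8 * * 1-5",   # 08:00 UTC, Monday through Friday
            },
        }
    }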

CloudFormation already provided the ability to provision DynamoDB tables. Now, you can also provision DynamoDB tables with local and global secondary indexes using CloudFormation. Local and global secondary indexes enable more flexible queries on attributes other than the table's primary key. We’ve created a sample template that creates DynamoDB tables with local and global secondary indexes.
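
Likewise, a table with a global secondary index might be declared roughly as in the fragment below (again shown as a Python dict; the table, attribute names, and throughput values are placeholders).

    # Minimal sketch: a CloudFormation Resources fragment for a DynamoDB table
    # with one global secondary index. Names and throughput are placeholders.
    table_fragment = {
        "OrdersTable": {
            "Type": "AWS::DynamoDB::Table",
            "Properties": {
                "AttributeDefinitions": [
                    {"AttributeName": "OrderId", "AttributeType": "S"},
                    {"AttributeName": "CustomerId", "AttributeType": "S"},
                ],
                "KeySchema": [{"AttributeName": "OrderId", "KeyType": "HASH"}],
                "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
                "GlobalSecondaryIndexes": [
                    {
                        "IndexName": "ByCustomer",
                        "KeySchema": [{"AttributeName": "CustomerId", "KeyType": "HASH"}],
                        "Projection": {"ProjectionType": "ALL"},
                        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
                    }
                ],
            },
        }
    }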

To learn more about CloudFormation, please visit the detail page and documentation, or watch this introductory video. We have a large collection of sample templates that make it easy to get started with CloudFormation in minutes.

Jan 23, 2014

Amazon Redshift announces new SSD-based node type

We're delighted to announce the availability of Dense Compute nodes, a new SSD-based node type for Amazon Redshift. Dense Compute nodes allow customers to create very high performance data warehouses using fast CPUs, large amounts of RAM, and SSDs. Customers can get started with a single Dense Compute node for as little as $0.25/hour with no commitments, or at an effective price of $0.10/hour when using 3-year Reserved Instances.

For customers with less than 500GB of data in their data warehouses, Dense Compute nodes are the most cost-effective and highest performance option. Above 500GB, customers whose primary focus is performance can continue with Dense Compute nodes up to hundreds of terabytes, giving them the highest ratio of CPU, Memory and I/O to storage. If performance isn’t as critical for a customer’s use case, or if customers want to prioritize reducing costs further, they can use the larger Dense Storage nodes and scale up to a petabyte or more of compressed user data for under $1,000/TB/Year (3 Year Reserved Instance pricing). Scaling clusters up and down or switching between node types requires a single API call or a few clicks in the AWS Console.
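
That resize is a single call. A rough sketch with boto3 (the cluster identifier, node type, and node count are placeholders):

    # Minimal sketch: switch an existing cluster to Dense Compute nodes and
    # change its size with one ModifyCluster call. Values are placeholders.
    import boto3

    redshift = boto3.client("redshift")
    redshift.modify_cluster(
        ClusterIdentifier="my-warehouse",
        ClusterType="multi-node",
        NodeType="dc1.large",        # Dense Compute node type (placeholder)
        NumberOfNodes=4,
    )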

On-Demand prices for a single Large Dense Compute node start at $0.25/hour in the US East (Northern Virginia) Region and drop to an effective price of $0.10/hour with a three-year Reserved Instance. Dense Compute and Dense Storage nodes for Amazon Redshift are available in the US East (N. Virginia), US West (Oregon), EU (Ireland), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions. To get started, please visit the Amazon Redshift detail page.

Jan 21, 2014

Announcing New Amazon EC2 M3 Instance Sizes and Lower Prices for Amazon S3 and Amazon EBS

We are excited to announce the availability of two new Amazon EC2 M3 instance sizes, m3.medium and m3.large. We are also lowering the prices of storage for Amazon S3 and Amazon EBS in all regions, effective February 1st, 2014.

Amazon EC2 M3 instance sizes and features: We have introduced two new sizes for M3 instances: m3.medium and m3.large, with 1 and 2 vCPUs respectively. We have also added SSD-based instance storage and support for instance store-backed AMIs (previously known as S3-backed AMIs) for all M3 instance sizes. M3 instances feature high-frequency Intel Xeon E5-2670 (Sandy Bridge or Ivy Bridge) processors. Compared to previous-generation M1 instances, M3 instances provide higher, more consistent compute performance at a lower price. These new instance sizes are available in all AWS regions, with AWS GovCloud (US) support coming soon. You can launch M3 instances as On-Demand, Reserved, or Spot Instances. To learn more about M3 instances, please visit the Amazon EC2 Instance Types page.

Amazon S3 storage prices are lowered by up to 22%: All Amazon S3 standard storage and Reduced Redundancy Storage (RRS) customers will see a reduction in their storage costs. In the US Standard region, we are lowering S3 standard storage prices by up to 22%, with similar price reductions across all other regions. The new lower prices can be found on the Amazon S3 pricing page.

Amazon EBS prices are lowered by up to 50%: EBS Standard volume prices are lowered by up to 50% for both storage and I/O requests. For example, in the US East region, the price for Standard volumes is now $0.05 per GB-month of provisioned storage and $0.05 per 1 million I/O requests. The new lower prices can be found on the Amazon EBS pricing page.

Jan 17, 2014

SUSE Linux Enterprise Server (SLES) now available in AWS GovCloud (US)

We are delighted to announce that SUSE Linux Enterprise Server (SLES) is now available in AWS GovCloud (US).

Amazon Web Services and SUSE® have teamed up to offer SUSE Linux Enterprise Server (SLES) on Amazon EC2, a complete, enterprise-class computing environment for running business-critical applications and workloads.

SUSE maintains the base SLES images for Amazon EC2. AWS customers receive updates at the same time that updates are made available from SUSE, so your computing environment remains reliable and secure and your SLES-certified apps maintain their supportability.

Launching a SUSE EC2 instance in the AWS GovCloud (US) Region is quick and easy using the EC2 Console Launch Wizard in the Management Console for the AWS GovCloud (US) Region.

AWS GovCloud (US) is an isolated AWS region designed to allow U.S. government agencies, contractors, and customers with regulatory needs to move more sensitive workloads into the cloud. To learn more, please join us for our weekly AWS GovCloud (US) Office Hours every Tuesday from 1:00 to 2:00 PM EST, or for the Intro to AWS GovCloud (US) Region webinar on February 12th from 1:30 to 2:30 PM EST.

Please contact us to get started in AWS GovCloud (US) today!

Jan 14, 2014

Launch Popular Software into Amazon VPCs with 1-Click

We are pleased to announce that AWS Marketplace now supports 1-Click launch for Amazon Virtual Private Cloud (Amazon VPC). Now, you can launch all our AMI-based AWS Marketplace products in a private, logically isolated network that you control. Once you have configured an Amazon VPC, just select the VPC and subnet, and launch with 1-Click. Additionally, for a select set of networking and security products, you will be able to configure multiple subnets and associate Elastic IPs directly from the AWS Marketplace website during the 1-Click launch process. Learn more on Jeff Barr's Blog.

Both of these new capabilities make it even easier to apply security best practices to the Marketplace software you run in the AWS cloud.

Jan 14, 2014

New “Introduction to AWS” Instructional Videos and Labs

AWS Training & Certification offers a new training series called “Introduction to AWS” designed to help you quickly get started using an AWS service in 30 minutes or less. Start with a short video to learn key concepts and terminology and watch a step-by-step console demonstration of the service. Following the video, you can get hands-on practice with that AWS service using a free self-paced training lab on run.qwiklabs.com.

Our first set of videos and labs includes the following topics:

  • Introduction to Amazon Simple Storage Service (S3)
  • Introduction to Amazon Elastic Compute Cloud (EC2)
  • Introduction to AWS Identity and Access Management (IAM)
  • Introduction to Amazon Relational Database Service (RDS)
  • Introduction to Amazon Elastic Block Store (EBS)
  • Introduction to Elastic Load Balancing

To learn more about this and other AWS Training resources that are available to you, visit http://aws.amazon.com/training/intro_series.

Jan 13, 2014

Amazon RDS for Oracle now supports time zone change

We are pleased to announce that, starting today, Amazon RDS for Oracle allows you to change the system time zone of your RDS Oracle DB instance. This lets you maintain time compatibility with your on-premises environments and legacy applications.

To modify the time zone of a new or existing RDS Oracle instance, use the "Option Group" option in the AWS Management Console. Please note that this option changes the time zone at the host level and impacts all date columns and values. We recommend that you analyze your data to determine what impact the time zone change will have, and that you test the change on a test DB instance before modifying a production DB instance.
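
Because the change is delivered through an option group, it can also be scripted. A rough sketch with boto3 (the group name, engine version, instance identifier, and time zone value are placeholders):

    # Minimal sketch: set the Timezone option in an option group and apply it
    # to a DB instance. All names and the engine version are placeholders.
    import boto3

    rds = boto3.client("rds")

    rds.create_option_group(
        OptionGroupName="oracle-tz-us-eastern",
        EngineName="oracle-ee",
        MajorEngineVersion="11.2",
        OptionGroupDescription="Oracle option group with a custom time zone",
    )
    rds.modify_option_group(
        OptionGroupName="oracle-tz-us-eastern",
        OptionsToInclude=[
            {
                "OptionName": "Timezone",
                "OptionSettings": [{"Name": "TIME_ZONE", "Value": "US/Eastern"}],
            }
        ],
        ApplyImmediately=True,
    )
    rds.modify_db_instance(
        DBInstanceIdentifier="my-oracle-db",
        OptionGroupName="oracle-tz-us-eastern",
        ApplyImmediately=True,
    )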

Learn more by visiting the Oracle section of the Amazon RDS User Guide.

Jan 13, 2014

New Certification Exams Available for Developers and SysOps Administrators

Two new AWS Certification exams, the AWS Certified Developer – Associate Level and the AWS Certified SysOps Administrator – Associate Level, are now available. AWS Certifications recognize IT professionals who have demonstrated competence working with AWS technology, helping employers identify qualified candidates and helping IT professionals validate their expertise for continued career growth. Exams are administered through Kryterion testing centers in more than 750 locations worldwide.

Two role-based training courses for Developing on AWS and Systems Operations on AWS are offered to help individuals prepare for the new certification exams.

To learn more about the AWS Certification Program, visit http://aws.amazon.com/certification.

Jan 07, 2014

New Edge Locations Added in Taipei and Rio de Janeiro for Amazon CloudFront and Amazon Route 53

We are excited to announce the launch of edge locations in Taipei, Taiwan and Rio de Janeiro, Brazil. This is our first edge location in Taiwan and our second edge location in Brazil (joining Sao Paulo). These new locations will improve performance and availability for end users of your applications being served by Amazon CloudFront and Amazon Route 53, and they bring the total number of AWS edge locations to 51 worldwide.

These new edge locations support all Amazon CloudFront functionality, including accelerating your entire website (static, dynamic and interactive content), live and on-demand streaming media, and security features like custom SSL certificates, private content and geo-restriction of content. They also support all Amazon Route 53 functionality including health checks, DNS failover, and latency-based routing.

The pricing for the edge location in Taipei is the same as that in Hong Kong, the Philippines, South Korea and Singapore, and the pricing for the edge location in Rio de Janeiro is the same as that in Sao Paulo. That means your end users in these regions will benefit from the lower latency without any additional costs.

To learn more about Amazon CloudFront, please visit the Amazon CloudFront detail page. To learn more about Amazon Route 53, please visit the Amazon Route 53 detail page.

Jan 02, 2014

Create Auto Scaling Groups from Running Amazon EC2 Instances

We are pleased to announce that you can now create Auto Scaling groups based on running instances and attach running instances to existing groups. You can also retrieve your limits for Auto Scaling groups and launch configurations.

Create Auto Scaling resources from running instances

You can now:

  • Add a running instance to an existing Auto Scaling group using the new AttachInstances action.
  • Create an Auto Scaling group based on a running instance using the CreateAutoScalingGroup action by specifying an instance ID. This also creates a launch configuration based on the instance and associates it with the group.
  • Create a launch configuration based on a running instance using the CreateLaunchConfiguration action by specifying an instance ID.

You can use these features to enable Auto Scaling for your existing applications without having to shut down your instances, or to warm up instances ahead of time before bringing them into service.
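
A minimal sketch of the two calls with boto3 (the instance IDs, group name, and sizes are placeholders):

    # Minimal sketch: create a group from a running instance, then attach another
    # running instance to an existing group. IDs and names are placeholders.
    import boto3

    autoscaling = boto3.client("autoscaling")

    # Create a group (and an implicit launch configuration) from a running instance.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-group",
        InstanceId="i-0123456789abcdef0",   # the group copies this instance's settings
        MinSize=1,
        MaxSize=4,
    )

    # Attach an already-running instance to an existing group.
    autoscaling.attach_instances(
        AutoScalingGroupName="web-group",
        InstanceIds=["i-0fedcba9876543210"],
    )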

View limits on Auto Scaling groups and launch configurations

In addition, you can also use the new DescribeAccountLimits action to view your limits for Auto Scaling groups and launch configurations. If you want to raise these limits, submit an Amazon EC2 limit increase request and specify your desired number of Auto Scaling groups or launch configurations in the Use Case Description field.
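
For example, with boto3 the call is a one-liner:

    # Minimal sketch: read the current Auto Scaling account limits.
    import boto3

    limits = boto3.client("autoscaling").describe_account_limits()
    print(limits["MaxNumberOfAutoScalingGroups"], limits["MaxNumberOfLaunchConfigurations"])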

Specify additional EBS volume and block device mapping settings

When creating launch configurations, you can also specify Provisioned IOPS EBS volumes, as well as DeleteOnTermination and NoDevice block device mapping settings.
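
A rough sketch of such a launch configuration with boto3 (the AMI, device names, and volume settings are placeholders):

    # Minimal sketch: launch configuration with a Provisioned IOPS EBS volume,
    # DeleteOnTermination, and a suppressed device mapping. Values are placeholders.
    import boto3

    autoscaling = boto3.client("autoscaling")
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="web-lc-piops",
        ImageId="ami-12345678",
        InstanceType="m3.large",
        BlockDeviceMappings=[
            {
                "DeviceName": "/dev/sdf",
                "Ebs": {
                    "VolumeSize": 100,
                    "VolumeType": "io1",        # Provisioned IOPS volume
                    "Iops": 1000,
                    "DeleteOnTermination": True,
                },
            },
            {"DeviceName": "/dev/sdb", "NoDevice": True},   # suppress this device mapping
        ],
    )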

These features are available using the AWS SDKs, Auto Scaling APIs, and command-line tools. CloudFormation also supports these new features.

©2014, Amazon Web Services, Inc. or its affiliates. All rights reserved.