AWS Storage Update – S3 & Glacier Price Reductions + Additional Retrieval Options for Glacier

by Jeff Barr | in Amazon Glacier, Amazon S3, Launch, Price Reduction

Back in 2006, we launched S3 with a revolutionary pay-as-you-go pricing model, with an initial price of 15 cents per GB per month. Over the intervening decade, we reduced the price per GB by 80%, launched S3 in every AWS Region, and enhanced the original one-size-fits-all model with user-driven features such as web site hosting, VPC integration, and IPv6 support, while adding new storage options including S3 Infrequent Access.

Because many AWS customers archive important data for legal, compliance, or other reasons and reference it only infrequently, we launched Glacier in 2012, and then gave you the ability to transition data between S3, S3 Infrequent Access, and Glacier by using lifecycle rules.

Today I have two big pieces of news for you: we are reducing the prices for S3 Standard Storage and for Glacier storage. We are also introducing additional retrieval options for Glacier.

S3 & Glacier Price Reduction
As long-time AWS customers already know, we work relentlessly to reduce our own costs, and to pass the resulting savings along in the form of a steady stream of AWS Price Reductions.

We are reducing the per-GB price for S3 Standard Storage in most AWS regions, effective December 1, 2016. The bill for your December usage will automatically reflect the new, lower prices. Here are the new prices for Standard Storage:

| Regions | 0–50 TB ($/GB/Month) | 51–500 TB ($/GB/Month) | 500+ TB ($/GB/Month) |
|---|---|---|---|
| US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (Ireland) (reductions range from 23.33% to 23.64%) | $0.0230 | $0.0220 | $0.0210 |
| US West (Northern California) (reductions range from 20.53% to 21.21%) | $0.0260 | $0.0250 | $0.0240 |
| EU (Frankfurt) (reductions range from 24.24% to 24.38%) | $0.0245 | $0.0235 | $0.0225 |
| Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Seoul), Asia Pacific (Mumbai) (reductions range from 16.36% to 28.13%) | $0.0250 | $0.0240 | $0.0230 |

As you can see from the table above, we are also simplifying the pricing model by consolidating six pricing tiers into three new tiers.

We are also reducing the price of Glacier storage in most AWS Regions. For example, you can now store 1 GB in the US East (Northern Virginia), US West (Oregon), or EU (Ireland) Regions for just $0.004 (less than half a cent) per month, a 43% decrease. For reference, this amount of storage cost $0.010 per month when we launched Glacier in 2012, and $0.007 after our last Glacier price reduction (a 30% decrease).

The lower pricing is a direct result of the scale that comes about when our customers trust us with trillions of objects, but it is just one of the benefits. Based on the feedback that I get when we add new features, the real value of a cloud storage platform is the rapid, steady evolution. Our customers often tell me that they love the fact that we anticipate their needs and respond with new features accordingly.

New Glacier Retrieval Options
Many AWS customers use Amazon Glacier as the archival component of their tiered storage architecture. Glacier allows them to meet compliance requirements (either organizational or regulatory) while allowing them to use any desired amount of cloud-based compute power to process and extract value from the data.

Today we are enhancing Glacier with two new retrieval options for your Glacier data. You can now pay a little bit more to expedite your data retrieval. Alternatively, you can indicate that speed is not of the essence and pay a lower price for retrieval.

We launched Glacier with a pricing model for data retrieval that was based on the amount of data that you had stored in Glacier and the rate at which you retrieved it. While this was an accurate reflection of our own costs to provide the service, it was somewhat difficult to explain. Today we are replacing the rate-based retrieval fees with simpler per-GB pricing.

Our customers in the Media and Entertainment industry archive their TV footage to Glacier. When an emergent situation calls for them to retrieve a specific piece of footage, minutes count and they want fast, cost-effective access to the footage. Healthcare customers are looking for rapid, “while you wait” access to archived medical imagery and genome data; photo archives and companies selling satellite data turn out to have similar requirements. On the other hand, some customers have the ability to plan their retrievals ahead of time, and are perfectly happy to get their data in 5 to 12 hours.

Taking all of this into account, you can now select one of the following options for retrieving your data from Glacier (the original rate-based retrieval model no longer applies):

Standard retrieval is the new name for what Glacier already provides, and is the default for all API-driven retrieval requests. You get your data back in a matter of hours (typically 3 to 5), and pay $0.01 per GB along with $0.05 for every 1,000 requests.

Expedited retrieval addresses the need for “while you wait” access. You can get your data back quickly, with retrieval typically taking 1 to 5 minutes. If you store (or plan to store) more than 100 TB of data in Glacier and need to make infrequent, yet urgent requests for subsets of your data, this is a great model for you (if you have less data, S3’s Infrequent Access storage class can be a better value). Retrievals cost $0.03 per GB and $0.01 per request.

Retrieval generally takes between 1 and 5 minutes, depending on overall demand. If you need to get your data back in this time frame even in rare situations where demand is exceptionally high, you can provision retrieval capacity. Once you have done this, all Expedited retrievals will automatically be served via your Provisioned capacity. Each unit of Provisioned capacity costs $100 per month and ensures that you can perform at least 3 Expedited Retrievals every 5 minutes, with up to 150 MB/second of retrieval throughput.

Bulk retrieval is a great fit for planned or non-urgent use cases, with retrieval typically taking 5 to 12 hours at a cost of $0.0025 per GB (75% less than for Standard Retrieval) along with $0.025 for every 1,000 requests. Bulk retrievals are perfect when you need to retrieve large amounts of data within a day, and are willing to wait a few extra hours in exchange for a very significant discount.
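To compare the three options for a given workload, the fees above fold into a quick back-of-the-envelope calculator. This is an illustrative sketch using the rates quoted in this post, not an official pricing tool:

```python
def glacier_retrieval_cost(gb, requests, tier="standard"):
    """Estimate Glacier retrieval cost (USD) using the per-GB and
    per-request fees quoted above (illustrative only)."""
    rates = {
        # tier: (per-GB fee, per-request fee)
        "standard": (0.01, 0.05 / 1000),   # $0.05 per 1,000 requests
        "expedited": (0.03, 0.01),         # $0.01 per request
        "bulk": (0.0025, 0.025 / 1000),    # $0.025 per 1,000 requests
    }
    per_gb, per_req = rates[tier]
    return gb * per_gb + requests * per_req
```

For example, pulling back 1 TB of archives in a single planned Bulk job costs roughly a quarter of the same retrieval at the Standard rate.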

If you do not specify a retrieval option when you call InitiateJob to retrieve an archive, a Standard Retrieval will be initiated. Your existing jobs will continue to work as expected, and will be charged at the new rate.

To learn more, read about Data Retrieval in the Glacier FAQ.

As always, I am thrilled to be able to share this news with you, and I hope that you are equally excited!

If you want to learn more, we have a webinar coming up on December 12th. Register here.

Jeff;

 

AWS Price Reduction – CloudWatch Metrics

by Jeff Barr | in Amazon CloudWatch, Price Reduction

Back in 2011 I introduced you to Custom Metrics for CloudWatch and showed you how to publish them from your applications and scripts. At that time, the first ten custom metrics were free of charge and additional metrics were $0.50 per metric per month, regardless of the number of metrics that you published.

Today, I am happy to announce a price change and a quantity discount for CloudWatch metrics. Based on the number of metrics that you publish every month, you can realize savings of up to 96%. Here is the new pricing for the US East (Northern Virginia) Region (the first ten metrics are still free of charge):

| Tier | From | To | Price Per Metric Per Month | Discount Over Current Price |
|---|---|---|---|---|
| First 10,000 Metrics | 0 | 10,000 | $0.30 | 40% |
| Next 240,000 Metrics | 10,001 | 250,000 | $0.10 | 80% |
| Next 750,000 Metrics | 250,001 | 1,000,000 | $0.05 | 90% |
| All Remaining Metrics | 1,000,001 | — | $0.02 | 96% |
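The tiered schedule composes like any graduated rate table. Here is a minimal sketch of the math, assuming (as the table suggests) that the ten free metrics come off the top before the first paid tier:

```python
def monthly_custom_metric_cost(metric_count, free_metrics=10):
    """Estimate the monthly charge for CloudWatch custom metrics using
    the US East (Northern Virginia) tiers quoted above (illustrative)."""
    tiers = [
        (10_000, 0.30),          # first 10,000 metrics
        (240_000, 0.10),         # next 240,000
        (750_000, 0.05),         # next 750,000
        (float("inf"), 0.02),    # everything beyond 1,000,000
    ]
    remaining = max(0, metric_count - free_metrics)
    cost = 0.0
    for tier_size, price in tiers:
        billed = min(remaining, tier_size)
        cost += billed * price
        remaining -= billed
        if remaining <= 0:
            break
    return cost
```

For example, 10,010 metrics fill the first paid tier exactly and cost $3,000 for the month.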

If you have EC2 Detailed Monitoring enabled, you will also see a price reduction, with charges reduced from $3.50 per instance per month to $2.10 or lower, depending on the volume tier. The new prices will take effect on December 1, 2016 with no effort on your part. At that time, the updated prices will be published on the CloudWatch Pricing page.

By the way, if you are using CloudWatch Metrics, be sure to take advantage of other recently announced features such as Extended Metrics Retention, the CloudWatch Plugin for Collectd, CloudWatch Dashboards, and the new Metrics-to-Logs navigation feature.

Jeff;

 

New for Amazon Simple Queue Service – FIFO Queues with Exactly-Once Processing & Deduplication

by Jeff Barr | in Amazon Simple Queue Service (SQS), Launch, Price Reduction

As the very first member of the AWS family of services, Amazon Simple Queue Service (SQS) has certainly withstood the test of time!  Back in 2004, we described it as a “reliable, highly scalable hosted queue for buffering messages between distributed application components.” Over the years, we have added many features including a dead letter queue, 256 KB payloads, SNS integration, long polling, batch operations, a delay queue, timers, CloudWatch metrics, and message attributes.

New FIFO Queues
Today we are making SQS even more powerful and flexible with support for FIFO (first-in, first-out) queues. We are rolling out this new type of queue in two regions now, and plan to make it available in many others in early 2017.

These queues are designed to guarantee that messages are processed exactly once, in the order that they are sent, and without duplicates. We expect that FIFO queues will be of particular value to our financial services and e-commerce customers,  and to those who use messages to update database tables. Many of these customers have systems that depend on receiving messages in the order that they were sent.

FIFO ordering means that, if you send message A, wait for a successful response, and then send message B, message B will be enqueued after message A, and then delivered accordingly. This ordering does not apply if you make multiple SendMessage calls in parallel. It does apply to the individual messages within a call to SendMessageBatch, and across multiple consecutive calls to SendMessageBatch.

Exactly-once processing applies to both single-consumer and multiple-consumer scenarios. If you use FIFO queues in a multiple-consumer environment, you can configure your queue to make messages visible to other consumers only after the current message has been deleted or the visibility timeout expires. In this scenario, at most one consumer will actively process messages; the other consumers will be waiting until the first consumer finishes or fails.

Duplicate messages can sometimes occur when a networking issue outside of SQS prevents the message sender from learning the status of an action and causes the sender to retry the call. FIFO queues use multiple strategies to detect and eliminate duplicate messages. In addition to content-based deduplication, you can include a MessageDeduplicationId when you call SendMessage for a FIFO queue. The ID can be up to 128 characters long, and, if present, takes higher precedence than content-based deduplication.

When you call SendMessage for a FIFO queue, you can now include a MessageGroupId. Messages that belong to the same group (as indicated by the ID) are processed in order, allowing you to create and process multiple, ordered streams within a single queue and to use multiple consumers while keeping data from multiple groups distinct and ordered.
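Putting the two IDs together, a FIFO publisher supplies a group ID for ordering and, when needed, an explicit deduplication ID. The helper below is a hypothetical sketch that builds boto3-style SendMessage parameters; when no explicit ID is given it derives one from a SHA-256 hash of the body, which is the same idea behind content-based deduplication:

```python
import hashlib

def fifo_send_params(queue_url, body, group_id, dedup_id=None):
    """Build SendMessage parameters for a FIFO queue (hypothetical helper).

    If no MessageDeduplicationId is supplied, derive one from a SHA-256
    hash of the body, mirroring content-based deduplication."""
    return {
        "QueueUrl": queue_url,
        "MessageBody": body,
        "MessageGroupId": group_id,  # messages in a group stay ordered
        "MessageDeduplicationId": dedup_id
            or hashlib.sha256(body.encode("utf-8")).hexdigest(),
    }
```

A client would then call something like `sqs.send_message(**fifo_send_params(...))`. Two sends of an identical body with no explicit ID yield the same deduplication ID, so the second would be dropped within the deduplication interval; an explicit ID (up to 128 characters) takes precedence.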

You can create standard queues (the original queue type) or the new FIFO queues using the CreateQueue function, the create-queue command, or the AWS Management Console. The same API functions apply to both types of queues, but you cannot convert one queue type into the other.

Although the same API calls apply to both queue types, the newest AWS SDKs and SQS clients provide some additional functionality. This includes automatic, idempotent retries of failed ReceiveMessage calls.

Individual FIFO queues can handle up to 300 send, receive, or delete requests per second.

Some SQS Resources
Here are some resources to help you to learn more about SQS and the new FIFO queues:

If you’re coming to Las Vegas for AWS re:Invent and would like to hear more about how AWS customer Capital One is making use of SQS and FIFO queues, register and plan to attend ENT-217, Migrating Enterprise Messaging to the Cloud on Wednesday, November 30 at 3:30 PM.

Available Now
FIFO queues are available now in the US East (Ohio) and US West (Oregon) regions and you can start using them today. If you are running in US East (Northern Virginia) and want to give them a try, you can create them in US East (Ohio) and take advantage of the low-cost, low-latency connectivity between the regions.

As part of today’s launch, we are also reducing the price for standard queues by 20%. For the updated pricing, take a look at the SQS Pricing page.

Jeff;

 

EC2 Price Reduction (C4, M4, and T2 Instances)

by Jeff Barr | in Amazon EC2, Price Reduction

I am happy to be able to announce that an EC2 price reduction will go into effect on December 1, 2016, just in time to make your holiday season a little bit more cheerful! Our engineering investments, coupled with our scale and our time-tested ability to manage our capacity, allow us to identify cost savings and pass them on to you.

We are reducing the On-Demand, Reserved Instance (Standard and Convertible) and Dedicated Host prices for C4, M4, and T2 instances by up to 25%, depending on region and platform (Linux, RHEL, SUSE, Windows, and so forth). Price cuts apply across all AWS Regions. For example:

  • C4 – Reductions of up to 5% in US East (Northern Virginia) and EU (Ireland) and 20% in Asia Pacific (Mumbai) and Asia Pacific (Singapore).
  • M4  – Reductions of up to 10% in US East (Northern Virginia), EU (Ireland), and EU (Frankfurt) and 25% in Asia Pacific (Singapore).
  • T2 – Reductions of up to 10% in US East (Northern Virginia) and 25% in Asia Pacific (Singapore).

As always, you do not need to take any action in order to benefit from the reduction in On-Demand prices. If you are using billing alerts or our newly revised budget feature, you may want to consider revising your thresholds downward as appropriate.

Jeff;

PS – By my count, this is our 53rd price reduction.

Amazon Elastic Block Store (EBS) Update – Snapshot Price Reduction & More PIOPS/GiB

by Jeff Barr | in Amazon Elastic Block Store, Launch, Price Reduction

Amazon Elastic Block Store (EBS) makes it easy for you to create persistent block level storage volumes and attach them to your EC2 instances. Among many other features, EBS allows you to provision SSD-backed volumes with the desired level of performance (Provisioned IOPS or PIOPS) and to create snapshot backups either manually or programmatically.

Today I would like to tell you about some improvements and changes that we are making to both of these features. We are increasing the number of IOPS that you can provision per GB of storage and we are reducing the price of snapshot storage by 47%. Together, these changes make EBS more powerful and even more economical.

More PIOPS per GiB
We introduced the concept of Provisioned IOPS back in 2012 (read my post, Fast Forward – Provisioned IOPS for EBS Volumes, to learn more). With Provisioned IOPS, you can dial in the precise level of performance that you need for each EBS volume. The number of PIOPS that you can configure is a function of the volume size; the larger the volume, the more PIOPS you can configure, up to a per-volume maximum of 20,000 PIOPS.

Until now, you could provision up to 30 IOPS per GiB of SSD-backed storage. Now, you can provision up to 50 IOPS per GiB for new EBS volumes (a 66% increase). As has always been the case, Provisioned IOPS SSD (io1) volumes are designed to deliver within 10% of the provisioned IOPS performance 99.9% of the time in a given year (read the EBS Performance FAQ to learn more).

Some AWS customers create small EBS volumes that are intended to run extremely “hot”, maxing out the available PIOPS either occasionally or continuously. With this change, such volumes can be considerably smaller while still delivering the same level of performance. For example, if you need 20,000 PIOPS you can create a 400 GiB volume instead of a 667 GiB volume.
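With the new ratio, the minimum volume size for a target PIOPS level is a one-line calculation. A sketch using the limits quoted above:

```python
import math

def min_volume_size_gib(target_piops, iops_per_gib=50, max_piops=20_000):
    """Smallest io1 volume (GiB) that can be provisioned at target_piops,
    given the 50 IOPS/GiB ratio and the 20,000 PIOPS per-volume cap."""
    if target_piops > max_piops:
        raise ValueError("exceeds the 20,000 PIOPS per-volume maximum")
    return math.ceil(target_piops / iops_per_gib)
```

At the full 20,000 PIOPS, this returns 400 GiB under the new 50 IOPS/GiB ratio, versus 667 GiB under the old 30 IOPS/GiB ratio.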

This change applies to all newly created SSD-backed volumes in all commercial AWS Regions.

Snapshot Price Reduction
We are reducing the prices for EBS snapshots by 47% for all AWS Regions.

With this change, snapshots are even more economical! As a result, you can take backups more frequently in order to reduce recovery time after human errors. If you are not making backups of your EBS volumes on a regular basis, now is a good time to start!

This price reduction is retroactive to August 1, 2016 and will be applied automatically; it also applies to snapshots of Gateway-Cached volumes that are used with the AWS Storage Gateway.

Join our Team
If you are a developer, development manager, or product manager and would like to build systems like this, please visit the EBS Jobs page.

Jeff;

 

New – Scheduled Reserved Instances

by Jeff Barr | in Amazon EC2, Price Reduction

Many AWS customers run some of their mission-critical applications on a periodic (daily, weekly, or monthly), part-time basis. Here are some of the kinds of things that they like to do:

  • A bank or mutual fund performs Value at Risk calculations every weekday afternoon.
  • A phone company does a multi-day bill calculation run at the start of each month.
  • A trucking company optimizes routes and shipments on Monday, Wednesday, and Friday mornings.
  • An animation studio performs a detailed, compute-intensive 3D rendering every night.

Our new Scheduled Reserved Instances are a great fit for use cases of this type (and many more). They allow you to reserve capacity on a recurring basis with a daily, weekly, or monthly schedule over the course of a one-year term. After you complete your purchase, the instances are available to launch during the time windows that you specified.

Purchasing Scheduled Instances
Let’s step through the purchase process using the EC2 Console. I start by selecting Scheduled Instances on the left:

Then I click on the Purchase Scheduled Instances button and find a schedule that suits my needs.

Let’s say that I am based in Seattle and want to set up a schedule for Monday, Wednesday, and Friday mornings. I convert my time (6 AM) to UTC, choose my duration (8 hours of processing for my particular use case), and set my recurrence. Then I specify a c3.4xlarge instance (I can select one or more types using the menu):

I can see the local starting time while I am setting up the schedule:

When I click on Find schedules, I can see what’s available at my desired time:

As you can see, the results include instances in several different Availability Zones because I chose Any in the previous step. Leaving the Availability Zone and/or the instance type unspecified will give me more options.

I can add the desired instance(s) to my cart, adjusting the quantity if necessary. I can see my choices in my cart:

Once I have what I need I click on Review and purchase to proceed, verify my intent, and click on Purchase:

I can then see all of my Scheduled Reserved instances in the console:

Launching Scheduled Instances
Each Scheduled Reserved instance becomes active according to the schedule that you chose when you made the purchase. You can then launch the instance by selecting it in the Console and clicking on Launch Scheduled Instances:

Then I configure the launch as usual and click on Review:

Scheduled Reserved instances can also be launched via the AWS Command Line Interface (CLI), the AWS Tools for Windows PowerShell, and the new RunScheduledInstances function. We are also working on support for Auto Scaling, AWS Lambda, and AWS CloudFormation.

Things to Know
With this launch, we now have two types of Reserved Instances. The original model (now called Standard Reserved Instances) allows you to reserve EC2 compute capacity for a one- or three-year term and use it at any time. The new Scheduled Reserved Instance model allows you to reserve instances for predefined blocks of time on a recurring basis for a one-year term, with prices that are generally 5 to 10% lower than the equivalent On-Demand rates.

This feature is available today in the US East (Northern Virginia), US West (Oregon), and EU (Ireland) regions, with support for the C3, C4, M4, and R3 instance types.

Jeff;

Happy New Year – EC2 Price Reduction (C4, M4, and R3 Instances)

by Jeff Barr | in Amazon EC2, Price Reduction

I am happy to be able to announce that we are making yet another EC2 price reduction!

We are reducing the On-Demand and Reserved instance, and Dedicated host prices for C4 and M4 instances running Linux by 5% in the US East (Northern Virginia), US West (Northern California), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Sydney) regions.

We are also reducing the On-Demand, Reserved instance, and Dedicated host prices for R3 instances running Linux by 5% in the US East (Northern Virginia), US West (Northern California), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), and South America (São Paulo) regions.

Finally, we are reducing the On-Demand and Reserved instance prices for R3 instances running Linux by 5% in the AWS GovCloud (US) regions.

Smaller reductions apply to the same instance types that run SLES and RHEL in the regions mentioned.

Changes to the On-Demand and Dedicated host pricing are retroactive to the beginning of the month (January 1, 2016); the new Reserved instance pricing is in effect today. During the month, your billing estimates may not reflect the reduced prices. They will be reflected in the statement at the end of the month.

The new AWS Price List API will be updated later in the month.

If you are keeping score, this is our 51st price reduction!

— Jeff;

AWS Storage Update – New Lower Cost S3 Storage Option & Glacier Price Reduction

by Jeff Barr | in Amazon Glacier, Amazon S3, Price Reduction

Like all AWS services, the Amazon S3 team is always listening to customers in order to better understand their needs. After studying a lot of feedback and doing some analysis on access patterns over time, the team saw an opportunity to provide a new storage option that would be well-suited to data that is accessed infrequently.

The team found that many AWS customers store backups or log files that are almost never read. Others upload shared documents or raw data for immediate analysis. These files generally see frequent activity right after upload, with a significant drop-off as they age. In most cases, this data is still very important, so durability is a requirement. Although this storage model is characterized by infrequent access, customers still need quick access to their files, so retrieval performance remains as critical as ever.

New Infrequent Access Storage Option
In order to meet the needs of this group of customers, we are adding a new storage class for data that is accessed infrequently. The new S3 Standard – Infrequent Access (Standard – IA) storage class offers the same high durability, low latency, and high throughput of S3 Standard. You now have the choice of three S3 storage classes (Standard, Standard – IA, and Glacier) that are designed to offer 99.999999999% (eleven nines) of durability.‎  Standard – IA has an availability SLA of 99%.

This new storage class inherits all of the existing S3 features that you know (and hopefully love) including security and access management, data lifecycle policies, cross-region replication, and event notifications.

Prices for Standard – IA start at $0.0125 / gigabyte / month (one and one-quarter US pennies), with a 30 day minimum storage duration for billing, and a $0.01 / gigabyte charge for retrieval (in addition to the usual data transfer and request charges). Further, for billing purposes, objects that are smaller than 128 kilobytes are charged for 128 kilobytes of storage. We believe that this pricing model will make this new storage class very economical for long-term storage, backups, and disaster recovery, while still allowing you to quickly retrieve older data if necessary.
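The 128-kilobyte minimum and the retrieval fee both affect the effective price. Here is a sketch of the monthly math for a single object, using the figures above and ignoring transfer and request charges:

```python
def standard_ia_monthly_cost(object_size_kb, retrieved_gb=0.0,
                             storage_price=0.0125, retrieval_price=0.01,
                             min_billable_kb=128):
    """Estimate the monthly Standard - IA cost (USD) for one object,
    using the prices quoted above (illustrative; excludes data transfer
    and request fees)."""
    # Objects smaller than 128 KB are billed as 128 KB of storage.
    billable_gb = max(object_size_kb, min_billable_kb) / (1024 * 1024)
    return billable_gb * storage_price + retrieved_gb * retrieval_price
```

Note how a 64 KB object costs exactly as much to store as a 128 KB one, which is why the class is aimed at larger, long-lived data.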

You can define data lifecycle policies that move data between Amazon S3 storage classes over time. For example, you could store freshly uploaded data using the Standard storage class, move it to Standard – IA 30 days after it has been uploaded, and then to Amazon Glacier after another 60 days have gone by.
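The tiering described above maps directly onto an S3 lifecycle configuration. The following is a sketch of such a rule (the rule ID and prefix are placeholders; transition days count from object creation, so Glacier at day 90 is 60 days after the Standard - IA transition):

```python
# Sketch of a lifecycle configuration implementing the tiering described
# above. The rule ID and empty prefix are placeholders.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "tier-down-aging-data",   # placeholder name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},       # apply to the whole bucket
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},  # 30 + 60 days
            ],
        }
    ],
}
```

A configuration of this shape is what the console builds for you behind the scenes, and it can also be applied programmatically through the S3 lifecycle API.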

The new Standard – IA storage class is simply one of several attributes associated with each S3 object. Because the objects stay in the same S3 bucket and are accessed from the same URLs when they transition to Standard – IA, you can start using Standard – IA immediately through lifecycle policies without changing your application code. This means that you can add a policy and reduce your S3 costs immediately, without having to make any changes to your application or affecting its performance.

You can choose this new storage class (which is available today in all AWS regions) when you upload new objects via the AWS Management Console:

You can set up lifecycle rules for each of your S3 buckets. Here’s how you would establish the policies that I described above:

These functions are also available through the AWS Command Line Interface (CLI), the AWS Tools for Windows PowerShell, the AWS SDKs, and the S3 API.

Here’s what some of our early users have to say about S3 Standard – Infrequent Access:

“For more than 13 years, SmugMug has provided unlimited storage for our customer’s priceless photos. With many petabytes of them stored on Amazon S3, it’s vital that customers have immediate, instant access to any of them at a moment’s notice – even if they haven’t been viewed in years. Amazon S3 Standard – IA offers the same high durability and performance as Amazon S3 Standard so we can continue to deliver the same amazing experience for our customers even as their cameras continue to shoot bigger, higher-quality photos and videos.”

Don MacAskill, CEO & Chief Geek

SmugMug

“We store a ton of video, and in many cases an object in Amazon S3 is the only copy of a user’s video. This means durability is absolutely critical, and so we are thrilled that Amazon S3 Standard – IA lets us significantly reduce storage costs on our older video objects without sacrificing durability. We also really appreciate how easy it is to start using Amazon S3 Standard – IA. With a few clicks we set up lifecycle policies that will transition older objects to Amazon S3 Standard – IA at regular intervals –we don’t have to worry about migrating them to new buckets, or impacting the user experience in any way.”

Brian Kaiser, CTO

Hudl

See the S3 Pricing page for complete pricing information on this new storage class.

Reduced Price for Glacier Storage
Effective September 1, 2015, we are reducing the price for data stored in Amazon Glacier from $0.01 / gigabyte / month to $0.007 / gigabyte / month. As usual, this price reduction will take effect automatically and you need not do anything in order to benefit from it. This price is for the US East (Northern Virginia), US West (Oregon), and EU (Ireland) regions; take a look at the Glacier Pricing page for full information on pricing in other regions.

Jeff;

The New M4 Instance Type (Bonus: Price Reduction on M3 & C4)

by Jeff Barr | in Amazon EC2, Price Reduction

We launched Amazon Elastic Compute Cloud (EC2) with a single instance type (m1.small) way back in 2006! Since then, we have added many new types in response to customer demand, enabled by improvements in memory and processor technology (see my recent post, EC2 Instance History, for a look back in time).

Today we are adding new M4 instances in five sizes. These are General Purpose instances, with a balance of compute, memory, and network resources.

Let’s take a closer look!

New M4 Instances
The new M4 instances feature a custom Intel Xeon E5-2676 v3 Haswell processor optimized specifically for EC2. They run at a base clock rate of 2.4 GHz and can go as high as 3.0 GHz with Intel Turbo Boost. Here are the specs:

| Instance Name | vCPU Count | RAM | Instance Storage | Network Performance | EBS-Optimized |
|---|---|---|---|---|---|
| m4.large | 2 | 8 GiB | EBS Only | Moderate | 450 Mbps |
| m4.xlarge | 4 | 16 GiB | EBS Only | High | 750 Mbps |
| m4.2xlarge | 8 | 32 GiB | EBS Only | High | 1,000 Mbps |
| m4.4xlarge | 16 | 64 GiB | EBS Only | High | 2,000 Mbps |
| m4.10xlarge | 40 | 160 GiB | EBS Only | 10 Gbps | 4,000 Mbps |

If you are running Linux on an m4.10xlarge instance, you can also control the C states and the P states (see my post on the New C4 Instances to learn more about this). The supersized core count on this instance will be great for applications that use multiple processes to achieve a high degree of concurrency.

These instances also offer Enhanced Networking, which delivers up to 4 times the packet rate of instances without it, while ensuring consistent latency even under heavy network I/O. Within placement groups, Enhanced Networking also reduces average latency between instances by 50% or more. The M4 instances are EBS-Optimized by default, with additional dedicated network capacity for I/O operations. The instances support 64-bit HVM AMIs launched within a VPC.

The M4 instances are available today in the US East (Northern Virginia), US West (Northern California), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) regions. You can launch them in On-Demand or Spot form, and you can also purchase Reserved Instances.

Price Reductions on M3 and C4 Instances
As part of today’s launch we are lowering the On-Demand and One Year Reserved Instances prices for the M3 and C4 instances by 5% in the US East (Northern Virginia), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), and Asia Pacific (Sydney) regions.

On-Demand Instance price reductions are effective June 1, 2015. Reserved Instance price reductions will apply to purchases after June 11, 2015. For more information, see the EC2 Pricing page.

Jeff;

Amazon Kinesis Update – Simplified Capture of Streaming Data

by Jeff Barr | in Amazon Kinesis, Price Reduction

Amazon Kinesis is a managed service designed to handle real-time streaming of big data. It can accept any amount of data, from any number of sources, scaling up and down as needed (see my introductory post for more information on Kinesis). Developers can use the Kinesis Client Library (KCL) to simplify the implementation of apps that consume and process streamed data.

Today we are making the capture of streaming data with Kinesis even easier, with a powerful new Kinesis Producer Library, a big step up in the maximum record size, and a price reduction that makes capture of small-sized records even more cost-effective.

Let’s take a closer look!

Increased Record Size
A Kinesis record is simply a blob of data, also known as the payload. Kinesis does not look inside the data; it simply accepts the record (via PutRecord or PutRecords) from a producer and puts it into a stream.

We launched Kinesis with support for records as large as 50 KB. With today’s update we are raising this limit by a factor of 20; individual records can now be as large as 1 MB. This gives you a lot more flexibility and opens the door to some interesting new ways to use Kinesis. For example, you can now send larger log files, semi-structured documents, email messages, and other data types without having to split them into small chunks.

Price Reduction for Put Calls
Up until now, pricing for Put operations was based on the number of records, with a charge of $0.028 for every million records.

Going forward, pricing for Put operations will be based on the size of the payload, as expressed in “payload units” of 25 KB. The charge will be $0.014 per million units. In other words, Putting small records (25 KB or less) now costs half as much as it did before. The vast majority of our customers use Kinesis in this way today and they’ll benefit from the price reduction.
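Under the new model, each record is billed in 25 KB payload units, so the cost of a Put workload is easy to estimate. A sketch using the rate above:

```python
import math

def put_cost(record_sizes_kb, price_per_million_units=0.014):
    """Estimate the cost (USD) of Putting records into Kinesis under the
    25 KB payload-unit pricing quoted above (illustrative). Each record
    is rounded up to a whole number of 25 KB units."""
    units = sum(math.ceil(size / 25) for size in record_sizes_kb)
    return units * price_per_million_units / 1_000_000
```

A 20 KB record consumes one unit, so a workload of small records pays half the old $0.028 per-million-records rate, while a 26 KB record rounds up to two units.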

For more information, take a look at the Kinesis Pricing page.

Kinesis Producer Library (KPL)
I’ve saved the biggest news for last!

You can use Kinesis to handle the data streaming needs of many different types of applications including websites (clickstream data), ad servers (publisher data), mobile apps (customer engagement data), and so forth.

In order to achieve high throughput, you should combine multiple records into a single call to PutRecords. You should also consider aggregating multiple user records into a single Kinesis record, and then de-aggregating them immediately prior to consumption. Finally, you will need code to detect and retry failed calls.
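A hand-rolled producer ends up with a retry loop along these lines. This is a simplified sketch of the pattern the KPL automates for you; `put_records` is a stand-in for the PutRecords call, and the failure-reporting shape (the call returns the subset of records that failed) is an assumption for illustration:

```python
import time

def send_with_retries(put_records, batch, max_attempts=3, backoff_s=0.1):
    """Send a batch, retrying only the records the service reports as
    failed. `put_records` is a hypothetical stand-in for the PutRecords
    API call; it must return the list of records that failed."""
    attempt = 0
    pending = list(batch)
    while pending and attempt < max_attempts:
        pending = put_records(pending)  # returns the failed subset
        attempt += 1
        if pending:
            time.sleep(backoff_s * (2 ** attempt))  # exponential backoff
    return pending  # anything still here failed permanently
```

Add batching, aggregation, and metrics on top of this and you have rebuilt a fair chunk of the KPL, which is exactly why we are shipping it as a library.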

The new Kinesis Producer Library (KPL) will help you with all of the tasks that I identified above. It will allow you to write to one or more Kinesis streams with automatic and configurable retry logic; collect multiple records and write them in batch fashion using PutRecords; aggregate user records to increase payload size and throughput, and submit Amazon CloudWatch metrics (including throughput and error rates) on your behalf.

The KPL plays well with the Kinesis Client Library (KCL). The KCL takes care of many of the more complex tasks associated with consuming and processing streaming data in a distributed fashion, including load balancing across multiple instances, responding to instance failures, checkpointing processed records, and reacting to chances in sharding.

When the KCL receives an aggregated record with multiple KPL user records inside, it will automatically de-aggregate the records before making them available to the client application (you will need to upgrade to the newest version of the KCL in order to take advantage of this feature).

The KPL presents a Java API that is asynchronous and non-blocking; you simply hand records to it and receive a Future object in return. Here’s a sample call to the addUserRecord method:

public void run() {
  ByteBuffer data = Utils.generateData(sequenceNumber.get(), DATA_SIZE);
  // TIMESTAMP is our partition key
  ListenableFuture<UserRecordResult> f =
      producer.addUserRecord(STREAM_NAME, TIMESTAMP, Utils.randomExplicitHashKey(), data);
  Futures.addCallback(f, callback);
}

The core of the KPL takes the form of a C++ module; wrappers in other languages will be available soon.

KPL runs on Linux and OSX. Self-contained binaries are available for the Amazon Linux AMI, Ubuntu, RHEL, OSX, and OSX Server. Source code and unit tests are also available (note that the KCL and the KPL are made available in separate packages).

For more information, read about Developing Producers with KPL.

Jeff;