
New – HTTP/2 Support for Amazon CloudFront

When I interview a candidate for a technical position, I often ask them to explain what happens when they see an interesting link and decide to click on it. I encourage them to go into as much detail as they would like. The answers let me know how well they understand and can explain a complex concept. Some candidates will sum up the entire process in a sentence or two. Others have filled an entire whiteboard with detailed diagrams. At a very simple level, here are the steps that I like to see (if you know much about HTTP, you know that I have failed to mention the SSL handshake, cookies, response codes, caches, content distribution networks, and all sorts of other details):

  1. The domain name is converted to an IP address by way of a DNS lookup.
  2. A TCP connection is made to the remote server.
  3. A GET request is issued.
  4. The remote server locates (or generates) the desired content and returns it to fulfill the request.
  5. The TCP connection is closed.
  6. The client processes and displays the result.

A complex web page might contain references to scripts, style sheets, images, and so forth. If this is the case, the entire sequence must be repeated for each reference. On mobile devices, each connection request wakes up the radio, adding hundreds or thousands of milliseconds of overhead (read about Application Network Latency Overhead to learn more).

Amazon CloudFront is a global content distribution network, or CDN. Several of CloudFront’s features help to make the process above more efficient. For example, it caches frequently used content in dozens of edge locations scattered across the planet. This allows CloudFront to respond to requests more quickly and over a shorter distance, thereby improving application performance. With no minimum usage commitments, CloudFront can help you to deliver web sites, videos, and other types of content to your end users in an economical and expeditious way.

New HTTP/2 Support
The retrieval process that I described above contains a lot of room for improvement. Repeated DNS lookups are avoidable, as are TCP connection requests. HTTP/2, a new version of the HTTP protocol, streamlines the process by reusing the TCP connection if possible. This core feature, combined with many other changes to the existing HTTP model, has the potential to reduce latency and to improve the performance of all types of web applications.

Today we are launching HTTP/2 support for CloudFront. You can enable it on a per-distribution basis today and your HTTP/2-aware clients and applications will start to make use of it right away. While HTTP/2 does not mandate the use of encryption, it turns out that all of the common web browsers require the use of HTTPS connections in conjunction with HTTP/2. Therefore, you may need to make some changes to your site or application in order to take full advantage of HTTP/2. Due to the (fairly clever) way in which HTTP/2 works, older clients remain compatible with web endpoints that do support it.

The connection from CloudFront back to your origin server is still made using HTTP/1. You don’t need to make any server-side changes in order to make your static or dynamic content accessible via HTTP/2.

Several AWS customers have already been testing CloudFront’s HTTP/2 support and have seen clear performance improvements. Marfeel is an ad tech platform that helps publishers to create, optimize, and monetize mobile web sites. They told us that CloudFront’s HTTP/2 support has reduced their first-render time by 17%. This allows the sites that they create to consistently load within 0.8 seconds, making them more accessible to mobile readers.

Enabling HTTP/2
To enable HTTP/2 for an existing CloudFront distribution, simply open up the CloudFront Console, locate the distribution, and click on Edit. Then change the Supported HTTP Versions to include HTTP/2:
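If you prefer to script the change, here is a rough sketch using the AWS Command Line Interface (CLI); treat the distribution ID and ETag as placeholders. The supported HTTP versions are controlled by the HttpVersion field of the distribution configuration:

$ aws cloudfront get-distribution-config --id E1A2B3C4D5E6F7 > dist.json
# Copy the DistributionConfig portion of dist.json into dist-config.json,
# set "HttpVersion" to "http2", then send it back along with the ETag
# returned by the previous call:
$ aws cloudfront update-distribution --id E1A2B3C4D5E6F7 \
    --distribution-config file://dist-config.json \
    --if-match E2QWRUHAPOMQZL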

The change will be effective within minutes and your users should start to see the benefits shortly thereafter. As I noted earlier, HTTP/2 must be used in conjunction with HTTPS. You can use your browser’s developer tools to verify that HTTP/2 is being used. Here’s what I see when I use the Network tool in Firefox:

You can also add HTTP/2 support to Curl and test from the command line:

$ curl --http2 -I https://d25c7x5dprwhn6.cloudfront.net/images/amazon_fulfilment_center_phoenix.jpg
HTTP/2.0 200
content-type:image/jpeg
content-length:650136
date:Sun, 04 Sep 2016 23:32:39 GMT
last-modified:Sat, 03 Sep 2016 15:21:01 GMT
etag:"b95a82b8df7373895a44a01a3c1e6f8d"
x-amz-version-id:fgWz_QaWo_.4GF7_VOl0gkBwnOmOALz6
accept-ranges:bytes
server:AmazonS3
age:644
x-cache:Hit from cloudfront
via:1.1 91e54ea7c5cc54f4a3500c72b19a2a23.cloudfront.net (CloudFront)
x-amz-cf-id:Dr_A3emW7OdxWfs3O0lDZfiBFL7loKMFCP9XC0_FYmCkeRuyXcu5BQ==

Learning More
Here are some resources that you can use to learn more about HTTP/2 and how it can benefit your application:

Available Now
This feature is available now and you can start using it today at no extra charge.

Jeff;

 

Amazon Aurora Update – Parallel Read Ahead, Faster Indexing, NUMA Awareness

Amazon Aurora is currently the fastest-growing AWS service!

As a relational database designed for the cloud (read Amazon Aurora – New Cost-Effective MySQL-Compatible Database Engine for Amazon RDS to learn more), Aurora offers great performance, effortless storage scaling all the way up to 64 TB, durability, and high availability. Because Aurora was designed to be compatible with MySQL, our customers have been able to move existing applications and to build new ones with ease.

With MySQL compatibility “on top” and the unique, cloud-native Aurora architecture underneath, we have a lot of room to innovate. We can continue to make Aurora more efficient while still remaining compatible with all of those applications.

Today we are making three such performance improvements to Aurora, each one aimed at making Aurora more performant on a wide range of workloads commonly run by AWS customers. Here’s an overview:

  • Parallel Read Ahead – Range selects, full table scans, table alterations, and index generation are now up to 5x faster.
  • Faster Index Build – Generation of indexes is now about 75% faster.
  • NUMA-Aware Scheduling – When run on instances with more than one CPU chip, reads from the query cache and the buffer cache are faster, improving overall throughput by up to 10%.

Let’s dive in…

Parallel Read Ahead
The InnoDB storage engine used by MySQL organizes table rows and the underlying storage (disk pages) using the index keys. This makes sequential scans over full tables fast and efficient for freshly created tables. However, as rows are updated, inserted, and deleted over time, the storage becomes fragmented, the pages are no longer physically sequential, and scans can slow down dramatically. InnoDB’s Linear Read Ahead feature attempts to deal with this fragmentation by bringing up to 64 pages into memory before they are actually needed. While well-intentioned, this feature does not provide a meaningful performance improvement on enterprise-scale workloads.

With today’s update, Aurora is now a lot smarter about handling this very common situation. When Aurora scans a table, it logically (as opposed to physically) identifies and then performs a parallel prefetch of the additional pages. The parallel prefetch takes advantage of Aurora’s  replicated storage architecture (two copies in each of three Availability Zones) and helps to ensure that the pages in the database cache are relevant to the scan operation.

As a result of this change, range selects, full table scans, the ALTER TABLE operation, and index generation are up to 5x faster than before.

You will see the improved performance as soon as you upgrade to Aurora 1.7 (see below for more information).

Faster Index Build
When you create a primary or secondary index on a table, the storage engine creates a tree structure that contains the new keys. This process entails a lot of top-down tree searching and plenty of page-splitting as the tree is restructured to accommodate more and more keys.

Aurora now builds the trees in a bottom-up fashion, building the leaves first and then adding parent pages as needed. This reduces the amount of back-and-forth to storage, and also obviates the need to split pages since each page is filled once.

With this change, adding indexes and rebuilding tables is now up to 4x faster than before, depending on the table schema. For example, the Aurora team created a table with the following schema, and added 100 million rows, resulting in a 5 GB table:

create table test01 (id int not null auto_increment primary key, i int, j int, k int);

Then they added four additional indexes:

alter table test01 add index (i), add index (j), add index (k), add index comp_idx(i, j, k);

On a db.r3.large instance, the time to run this query dropped from 67 minutes to 25 minutes. On a db.r3.8xlarge instance, the time dropped from 29 minutes to 11.5 minutes.

This is a brand new feature and we would like you to try it out on your non-production workloads.  You’ll need to upgrade to Aurora 1.7 and then set aurora_lab_mode to 1 in the DB Instance Parameter group (see DB Cluster and DB Instance Parameters to learn more):
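If you manage your parameter groups from the command line, here is a rough sketch of the same change (the parameter group name is a placeholder, and a pending-reboot apply method is used on the assumption that this is a static parameter):

# Enable lab mode in the DB parameter group attached to the instance
$ aws rds modify-db-parameter-group \
    --db-parameter-group-name my-aurora-instance-params \
    --parameters "ParameterName=aurora_lab_mode,ParameterValue=1,ApplyMethod=pending-reboot"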

The team is very interested in your feedback on this performance enhancement. Please feel free to post your observations in the Amazon RDS Forum.

NUMA-Aware Scheduling
The largest DB Instance (db.r3.8xlarge) has two CPU chips and a feature commonly known as NUMA, short for Non-Uniform Memory Access. On systems of this type, an equal fraction of main memory is directly and efficiently accessible to each CPU. The remaining memory is accessible via a somewhat less efficient cross-CPU access path.

Aurora now does a better job of scheduling threads across the CPUs in order to take advantage of this disparity in access times. The threads no longer need to fight against each other for access to the less-efficient memory attached to the other CPUs. As a result, CPU-bound operations that make heavy use of the query cache and the buffer cache now run up to 10% faster. The performance improvement will be most apparent when you are making hundreds or thousands of connections to the same database instance. As an example, performance on the Sysbench oltp.lua benchmark grew from 570,000 reads/second to 625,000 reads/second.  The test was run on a  db.r3.8xlarge DB Instance with the following parameters:

  • oltp_table_count=25
  • oltp_table_size=10000
  • num-threads=1500

You will see the improved performance as soon as you upgrade to Aurora 1.7.

Upgrading to Aurora 1.7
Newly created DB Instances will run Aurora 1.7 automatically. For existing DB Instances, you can choose to install the update immediately or during your next maintenance window.

You can confirm that you are running Aurora 1.7 by running the following query:

mysql> show global variables like "aurora_version";
+----------------+-------+
| Variable_name  | Value |
+----------------+-------+
| aurora_version | 1.7   |
+----------------+-------+ 

Jeff;

New – Auto Scaling for EC2 Spot Fleets

The EC2 Spot Fleet model (see Amazon EC2 Spot Fleet API – Manage Thousands of Spot Instances with one Request for more information) allows you to create a fleet of EC2 instances with a single request. You simply specify the fleet’s target capacity, enter a bid price per hour, and choose the instance types that you would like to have as part of your fleet.

Behind the scenes, AWS will maintain the desired target capacity (expressed in terms of instances or a vCPU count) by launching Spot instances that result in the best prices for you. Over time, as instances in the fleet are terminated due to rising prices, replacement instances will be launched using the specifications that result in the lowest price at that point in time.

New Auto Scaling
Today we are enhancing the Spot Fleet model with the addition of Auto Scaling. You can now arrange to scale your fleet up and down based on an Amazon CloudWatch metric. The metric can originate from an AWS service such as EC2, Amazon EC2 Container Service, or Amazon Simple Queue Service (SQS). Alternatively, your application can publish a custom metric and you can use it to drive the automated scaling. Either way, using these metrics to control the size of your fleet gives you very fine-grained control over application availability, performance, and cost even as conditions and loads change. Here are some ideas to get you started:

  • Containers – Scale container-based applications running on Amazon ECS using CPU or memory usage metrics.
  • Batch Jobs – Scale queue-driven batch jobs based on the number of messages in an SQS queue.
  • Spot Fleets – Scale a fleet based on Spot Fleet metrics such as MaxPercentCapacityAllocation.
  • Web Service – Scale web services based on measured response time and average requests per second.

You can set up Auto Scaling using the Spot Fleet Console, the AWS Command Line Interface (CLI), AWS CloudFormation, or by making API calls using one of the AWS SDKs.
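At the API level, Spot Fleet scaling is driven by the Application Auto Scaling service. Here is a rough sketch of the CLI calls involved (the Spot Fleet request ID, capacities, and step-scaling configuration file are placeholders, and an IAM role ARN for Application Auto Scaling may also be required):

# Register the fleet's target capacity as a scalable target
$ aws application-autoscaling register-scalable-target \
    --service-namespace ec2 \
    --resource-id spot-fleet-request/sfr-12345678-90ab-cdef-1234-567890abcdef \
    --scalable-dimension ec2:spot-fleet-request:TargetCapacity \
    --min-capacity 2 --max-capacity 10

# Create a scaling policy; attach the returned policy ARN to a CloudWatch alarm
$ aws application-autoscaling put-scaling-policy \
    --policy-name ScaleUp \
    --service-namespace ec2 \
    --resource-id spot-fleet-request/sfr-12345678-90ab-cdef-1234-567890abcdef \
    --scalable-dimension ec2:spot-fleet-request:TargetCapacity \
    --policy-type StepScaling \
    --step-scaling-policy-configuration file://scale-up.json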

I started by launching a fleet. I used the request type Request and maintain in order to be able to scale the fleet up and down:

My fleet was up and running within a minute or so:

Then (for illustrative purposes) I created an SQS queue, put some messages in it, and defined a CloudWatch alarm (AppQueueBackingUp) that would fire if there were 10 or more messages visible in the queue:

I also defined an alarm (AppQueueNearlyEmpty) that would fire if the queue was just about empty (2 or fewer messages).

Finally, I attached the alarms to the ScaleUp and ScaleDown policies for my fleet:

Before I started writing this post, I put 5 messages into the SQS queue. With the fleet launched and the scaling policies in place, I added 5 more, and then waited for the alarm to fire:

Then I checked in on my fleet, and saw that the capacity had been increased as expected. This was visible in the History tab (“New targetCapacity: 5”):

To wrap things up I purged all of the messages from my queue, watered my plants, and returned to find that my fleet had been scaled down as expected (“New targetCapacity: 2”):

Available Now
This new feature is available now and you can start using it today in all regions where Spot instances are supported.

Jeff;

 

Improvements to CloudWatch Logs & Dashboards

Amazon CloudWatch helps you to see, diagnose, react to, and resolve issues that arise in your AWS infrastructure and in the applications that you run on AWS. Today, I would like to talk about several usability and functionality improvements to CloudWatch Logs (Store and Monitor OS & Application Log Files with Amazon CloudWatch) and to CloudWatch Dashboards (CloudWatch Dashboards – Create & Use Customized Metrics Views).

Usability Improvements to CloudWatch Logs
CloudWatch Logs is a highly available, scalable, durable, and secure service to manage your operating system and application log files. It allows you to ingest, store, filter, search, and archive the logs, reducing your operational burden and allowing you to focus on your application and your business.

In order to help you to stay efficient and productive even as the number and size of your logs grows, we have made several usability improvements to the CloudWatch Logs Console:

  • Improved formatting for log data.
  • Simplified access to lengthy log files.
  • Easier searching within a log group.
  • Simplified collaboration around log files.
  • Better searching within a specific time frame.

Prior to today’s launch we also made some improvements to the CloudWatch Dashboards:

  • Full screen mode.
  • Dark theme.
  • Control over range of the Y axis on charts.
  • Simplified renaming of charts.
  • Persistent storage of chart settings.

CloudWatch Logs Console in Action
Let’s take a look at each of these improvements!

Open up the CloudWatch Logs Console, click on a Log Group, and then on a Log Stream within the group. Find the View options menu on the right:

Click on Expand all in order to see the log messages in expanded, multi-line form like this:

You can also Switch to text view in order to see the logs in their unadorned, plain-text form:

We have also improved the display of log data across all streams within a log group. Once you select a Log Group and click Search Events you can see the log data from all streams within that log group. For example, I can easily identify the Billed Duration for multiple invocations of a single Lambda function:

Even better, we have replaced the original paginated view with an infinite scroll bar. You can now scroll to your heart’s content through log files of any length:

You can now refine your search to a specific time frame or to a custom date range with a single click, like this:
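The same kind of time-scoped, filtered search can be scripted. Here is a rough sketch using the CLI (the log group name is a placeholder and the start and end times are epoch milliseconds):

$ aws logs filter-log-events \
    --log-group-name /aws/lambda/my-function \
    --filter-pattern '"Billed Duration"' \
    --start-time 1473033600000 \
    --end-time 1473120000000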

If you are working as part of a team, you can now share the URL of your log analysis session. The URL includes the search parameters and filters, and includes a fragment that looks like this:

group=<log_group_name>_log;stream=<log_stream_name>;filter=<filter_parameter>;start=PT<time_frame>

These improvements to the CloudWatch Logs Console are available now and you can start using them today. To learn more, read Getting Started with CloudWatch Logs.

Recent Improvements to the CloudWatch Dashboards
You may have already noticed the improvements that we recently made to the CloudWatch Dashboards. First, there’s a new full screen mode for Dashboards, accessible by clicking on Enter full screen in the Actions menu:

Once you are in full screen mode, you can click on Dark to switch to the new, night-owl-friendly dark theme:

Here’s a simple Redis dashboard in full screen mode using the dark theme:

Sometimes you want to have more control over how a chart is displayed on your dashboard. As an example, outliers in your data may make your chart less readable, and you may want to keep the dashboard focused on a specific Y axis range. Here’s a chart where that’s the case; the outlier masks the trend that happened after the big spike:

To edit the Y axis, click on the tool selector and select Edit:

Choose Graph Options and then edit the values for the Y axis until you are satisfied with the appearance of the chart, then click on Update widget:

Here’s what the chart looks like after that:

Many of our customers wanted to be able to rename a chart without leaving the dashboard. You can now do that with a click (hover your mouse near the name and then click on the pencil):

Finally, CloudWatch now remembers the time range, timezone preference, refresh interval, and auto-refresh setting for each chart!

Amazon CloudWatch Partner Ecosystem
I’d like to wrap things up by sharing some of the great work that our partners are doing. The following partners are building value-added solutions on top of CloudWatch:

  • Datadog provides integrations to key items in your infrastructure, and gives you the ability to collaborate with your team directly when dealing with incidents.
  • Librato provides integrations across elements of your infrastructure, and supports composite metrics and mathematical transformations to time series data.
  • SignalFx helps provide you with instant visibility into your metrics, and focuses on data analytics and on delivering alerts on service-wide patterns.
  • Splunk offers a platform for operational intelligence that enables you to collect machine data and find insights.
  • Sumo Logic is a machine data analytics service for log management and time series metrics that helps you build, run and secure your applications.
  • CloudCheckr integrates regular and custom metrics across your environment to assist in cost, security, and performance management by identifying right-sizing opportunities and alerting for potential security and performance issues.

If you are a partner and offer something that belongs on this list, let me know and I’ll update it ASAP!

Jeff;

 

 

New – Upload AWS Cost & Usage Reports to Redshift and QuickSight

Many AWS customers have been asking us for a way to programmatically analyze their Cost and Usage Reports (read New – AWS Cost and Usage Reports for Comprehensive and Customizable Reporting for more info). These customers are often using AWS to run multiple lines of business, making use of a wide variety of services, often spread out across multiple regions. Because we provide very detailed billing and cost information, this is a Big Data problem and one that can be easily addressed using AWS services!

While I was on vacation earlier this month, we launched a new feature that allows you to upload your Cost and Usage reports to Amazon Redshift and Amazon QuickSight. Now that I am caught up, I’d like to tell you about this feature.

Upload to Redshift
I started by creating a new Redshift cluster (if you already have a running cluster, you need not create another one). Here’s my cluster:

Next, I verified that I had enabled the Billing Reports feature:

Then I hopped over to the Cost and Billing Reports and clicked on Create report:

Next, I named my report (MyReportRedshift), made it Hourly, and enabled support for both Redshift and QuickSight:

I wrapped things up by selecting my delivery options:

I confirmed my desire to create a report on the next page, and then clicked on Review and Complete. The report was created and I was informed that the first report would arrive in the bucket within 24 hours:

While I was waiting I installed PostgreSQL on my EC2 instance (sudo yum install postgresql94) and verified that I was signed up for the Amazon QuickSight preview. Also, following the directions in Create an IAM Role, I made a read-only IAM role and captured its ARN:

Back in the Redshift console, I clicked on Manage IAM Roles and associated the ARN with my Redshift cluster:

The next day, I verified that the files were arriving in my bucket as expected, and then returned to the console in order to retrieve a helper file so that I could access Redshift:

I clicked on Redshift file and then copied the SQL command:

I inserted the ARN and the S3 region name into the SQL (I had to add quotes around the region name in order to make the query work as expected):

And then I connected to Redshift using psql (I can use any visual or CLI-based SQL client):

$ psql -h jbcluster.XYZ.us-east-1.redshift.amazonaws.com \
  -U root -p 5439 -d dev

Then I ran the SQL command. It created a pair of tables and imported the billing data from S3.

Querying Data in Redshift
Using some queries supplied by my colleagues as a starting point, I summed up my S3 usage for the month:
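Here is a rough sketch of such a query. The table and column names below are assumptions about how the helper SQL maps the Cost and Usage Report schema; use the names from the SQL file that the console provides:

$ psql -h jbcluster.XYZ.us-east-1.redshift.amazonaws.com -U root -p 5439 -d dev -c "
    select lineitem_usagetype,
           sum(lineitem_usageamount)   as usage,
           sum(lineitem_unblendedcost) as cost
    from   awsbilling201609
    where  lineitem_productcode = 'AmazonS3'
    group  by lineitem_usagetype
    order  by cost desc;"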

And then I looked at my costs on a per-AZ basis:

And on a per-AZ, per-service basis:

Just for fun, I spent some time examining the Redshift Console. I was able to see all of my queries:

Analyzing Data with QuickSight
I also spent some time analyzing the cost and billing data using Amazon QuickSight. I signed in and clicked on Connect to another data source or upload a file:

Then I dug in to my S3 bucket (jbarr-bcm) and captured the URL of the manifest file (MyReportRedshift-RedshiftManifest.json):

I selected S3 as my data source and entered the URL:

QuickSight imported the data in a few seconds and the new data source was available. I loaded it into SPICE (QuickSight’s in-memory calculation engine). With three or four more clicks I focused on the per-AZ data, and excluded the data that was not specific to an AZ:

Another click and I switched to a pie chart view:

I also examined the costs on a per-service basis:

As you can see, the new data and the analytical capabilities of QuickSight allow me (and you) to dive deep into your AWS costs in minutes.

Available Now
This new feature is available now and you can start using it today!

Jeff;

 

Amazon WorkSpaces Update – Hourly Usage and Expanded Root Volume

In my recent post, I Love My Amazon WorkSpace, I shared the story of how I became a full-time user and big fan of Amazon WorkSpaces. Since writing the post I have heard similar sentiments from several other AWS customers.

Today I would like to tell you about some new and recent developments that will make WorkSpaces more economical, more flexible, and more useful:

  • Hourly WorkSpaces – You can now pay for your WorkSpace by the hour.
  • Expanded Root Volume – Newly launched WorkSpaces now have an 80 GB root volume.

Let’s take a closer look at these new features.

Hourly WorkSpaces
If you only need part-time access to your WorkSpace, you (or your organization, to be more precise) will benefit from this feature. In addition to the existing monthly billing, you can now use and pay for a WorkSpace on an hourly basis, allowing you to save money on your AWS bill. If you are a part-time employee or a road warrior, share your job with another part-timer, or work on multiple short-term projects, this feature is for you. It is also a great fit for corporate training, education, and remote administration.

There are now two running modes – AlwaysOn and AutoStop:

  • AlwaysOn – This is the existing mode. You have instant access to a WorkSpace that is always running, billed by the month.
  • AutoStop – This is new. Your WorkSpace starts running and billing when you log in, and stops automatically when you remain disconnected for a specified period of time.

A WorkSpace that is running in AutoStop mode will automatically stop a predetermined amount of time after you disconnect (1 to 48 hours). Your WorkSpaces Administrator can also force a running WorkSpace to stop. When you next connect, the WorkSpace will resume, with all open documents and running programs intact. Resuming a stopped WorkSpace generally takes less than 90 seconds.

Your WorkSpaces Administrator has the ability to choose your running mode when launching your WorkSpace:

WorkSpaces Configuration

The Administrator can change the AutoStop time and the running mode at any point during the month. They can also track the number of working hours that your WorkSpace accumulates during the month using the new UserConnected CloudWatch metric, and switch from AutoStop to AlwaysOn when this becomes more economical. Switching from hourly to monthly billing takes place upon request; however, switching the other way takes place at the start of the following month.
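The running mode can also be changed from the CLI. Here is a rough sketch (the WorkSpace ID is a placeholder, and the AutoStop timeout is expressed in minutes):

# Switch a WorkSpace to hourly (AutoStop) billing with a one-hour timeout
$ aws workspaces modify-workspace-properties \
    --workspace-id ws-9x8y7z6w5 \
    --workspace-properties RunningMode=AUTO_STOP,RunningModeAutoStopTimeoutInMinutes=60

# Switch back to monthly (AlwaysOn) billing when the hours add up
$ aws workspaces modify-workspace-properties \
    --workspace-id ws-9x8y7z6w5 \
    --workspace-properties RunningMode=ALWAYS_ON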

All new Amazon WorkSpaces can take advantage of hourly billing today. If you’re using a custom image for your WorkSpaces, you’ll need to refresh your custom images from the latest Amazon WorkSpaces bundles. The ability for existing WorkSpaces to switch to hourly billing will be added in the future.

To learn more about pricing for hourly WorkSpaces, visit the WorkSpaces Pricing page.

Expanded Root Volume
By popular demand we have expanded the size of the root volume for newly launched WorkSpaces to 80 GB, allowing you to run more applications and store more data at no additional cost. Your WorkSpaces Administrator can rebuild existing WorkSpaces in order to upgrade them to the larger root volumes (read Rebuild a WorkSpace to learn more). Rebuilding a WorkSpace will restore the root volume (C:) to the most recent image of the bundle that was used to create the WorkSpace. It will also restore the data volume (D:) from the last automatic snapshot.

Some WorkSpaces Resources
While I have your attention, I would like to let you know about a couple of other important WorkSpaces resources:

Available Now
The features that I described above are available now and you can start using them today!

Jeff;

AWS Snowball Update – Job Management API & S3 Adapter

We introduced AWS Snowball last fall from the re:Invent stage. The Snowball appliance is designed for customers who need to transfer large amounts of data into or out of AWS on a one-time or recurring basis (read AWS Import/Export Snowball – Transfer 1 Petabyte Per Week Using Amazon-Owned Storage Appliances to learn more).

Today we are launching two important additions to Snowball. Here’s the scoop:

  • Snowball Job Management API – The new Snowball API lets you build applications that create and manage Snowball jobs.
  • S3 Adapter – The new Snowball S3 Adapter lets you access a Snowball appliance as if it were an S3 endpoint.

Time to dive in!

Snowball Job Management API
The original Snowball model was interactive and console-driven. You could create a job (basically “Send me a Snowball”) and then monitor its progress, tracking the shipment, transit, delivery, and return to AWS visually. This was great for one-off jobs, but did not meet the needs of customers who wanted to integrate Snowball into their existing backup or data transfer model. Based on the requests that we received from these customers and from our Storage Partners, we are introducing a Snowball Job Management API today.

The Snowball Job Management API gives our customers and partners the power to make Snowball an intrinsic, integrated part of their data management solutions. Here are the primary functions:

  • CreateJob – Create an import or export job and initiate shipment of an appliance.
  • ListJobs – Fetch a list of jobs and associated job states.
  • DescribeJob – Fetch information about a specific job.

Read the API Reference to learn more!
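To give you a feel for the shape of the API, here is a rough sketch using the corresponding CLI commands. The job ID is a placeholder, and the CreateJob request body (addresses, IAM role, S3 resources, and so forth) lives in a separate JSON file that I haven't shown:

# Create an import job from a JSON skeleton (generate one with --generate-cli-skeleton)
$ aws snowball create-job --cli-input-json file://create-job.json

# List jobs and their states, then drill in to a single job
$ aws snowball list-jobs
$ aws snowball describe-job --job-id JID123e4567-e89b-12d3-a456-426655440000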

I’m looking forward to reading about creative and innovative applications that make use of this new API! Leave me a comment and let me know what you come up with.

S3 Adapter
The new Snowball S3 Adapter allows you to access a Snowball as if it were an Amazon S3 endpoint running on-premises. This allows you to use your existing, S3-centric tools to move data to or from a Snowball.

The adapter is available for multiple Linux distributions and Windows releases, and is easy to install:

  1. Download the appropriate file from the Snowball Tools page and extract its contents to a local directory.
  2. Verify that the adapter’s configuration is appropriate for your environment (the adapter listens on port 8080 by default).
  3. Connect your Snowball to your network and get its IP address from the built-in display on the appliance.
  4. Visit the Snowball Console to obtain the unlock code and the job manifest.
  5. Launch the adapter, providing it with the IP address, unlock code, and manifest file.

With the adapter up and running, you can use your existing S3-centric tools by simply configuring them to use the local endpoint (the IP address of the on-premises host and the listener port). For example, here’s how you would run the s3 ls command on the on-premises host:

$ aws s3 ls --endpoint http://localhost:8080

After you copy your files to the Snowball, you can easily verify that the expected number of files were copied:

$ snowball validate

The initial release of the adapter supports a subset of the S3 API including GET on buckets and on the service, HEAD on a bucket and on objects, PUT and DELETE on objects, and all of the multipart upload operations. If you plan to access the adapter using your own code or third party tools, some testing is advisable.

To learn more, read about the Snowball Transfer Adapter.

Available Now
These new features are available now and you can start using them today!

Jeff;

 

New – Bring Your Own Keys with AWS Key Management Service

AWS Key Management Service (KMS) provides you with seamless, centralized control over your encryption keys. Our customers have told us that they love this fully managed service because it automatically handles all of the availability, scalability, physical security, and hardware maintenance for the underlying Key Management Infrastructure (KMI). It also centralizes key management, with one dashboard that offers creation, rotation, and lifecycle management functions. With no up-front cost and usage-based pricing that starts at $1 per Customer Master Key (CMK) per month, KMS makes it easy for you to encrypt data stored in S3, EBS, RDS, Redshift, and any other AWS service that’s integrated with KMS.

Many AWS customers use KMS to create and manage their keys. A few, however, would like to maintain local control over their keys while still taking advantage of the other features offered by KMS. Our customers tell us that local control over the generation and storage of keys would help them meet their security and compliance requirements in order to run their most sensitive workloads in the cloud.

Bring Your Own Keys
In order to support this important use case, I am happy to announce that you can now bring your own keys to KMS. This allows you to protect extremely sensitive workloads and to maintain a secure copy of the keys outside of AWS. This new feature allows you to import keys from any key management and HSM (Hardware Security Module) solution that supports the RSA PKCS #1 standard, and use them with AWS services and your own applications. It also works in concert with AWS CloudTrail to provide you with detailed auditing information. Putting it all together, you get greater control over the lifecycle and durability of your keys while you use AWS to provide high availability. Most key management solutions in use today use an HSM in the back end, but not all HSMs provide a key management solution.

The import process can be initiated from the AWS Management Console, AWS Command Line Interface (CLI), or by making calls to the KMS API. Because you never want to transmit secret keys in the open, the import process requires you to wrap the key in your KMI beforehand with a public key provided by KMS that is unique to your account. You can use the PKCS #1 scheme of your choice to wrap the key.

Following the directions (Importing Key Material in AWS Key Management Service), I started out by clicking on Create key in the KMS Console:

I entered an Alias and a Description, selected External, and checked the “I understand…” checkbox:

Then I picked the set of IAM users that have permission to use the KMS APIs to administer the key (this step applies to both KMS and External keys, as does the next one):

Then I picked the set of IAM users that can use the key to encrypt and decrypt data:

I verified the key policy, and then I downloaded my wrapping key and my import token. The wrapping key is the 2048-bit RSA public key that I’ll use to encrypt the 256-bit secret key I want to import into KMS. The import token contains metadata to ensure that my exported key can be imported into KMS correctly.

I opened up the ZIP file and put the wrapping key into a directory on my EC2 instance. Then I used the openssl command twice: once to generate my secret key and a second time to wrap the secret key with the wrapping key. Note that I used openssl as a convenient way to generate a 256-bit key and prepare it for import. For production data, you should use a more secure method (preferably a commercial key management or HSM solution) of generating and storing the local copy of your keys.

$ openssl rand -out plain_text_aes_key.bin 32
$ openssl rsautl -encrypt -in plain_text_aes_key.bin -oaep \
  -inkey wrappingKey_fcb572d3-6680-449c-91ab-ac3a5c07dc09_0804104355 \
  -pubin -keyform DER -out enc.aes.key

Finally, I brought it all together by checking “I am ready to upload…”  and clicking on Next, then specifying my key materials along with an expiration time for the key. Since the key will be unusable by AWS after the expiration date, you may want to choose the option where the key doesn’t expire until you better understand your requirements. You can always re-import the same key and reset the expiration time later.
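The same import can be scripted end-to-end. Here is a rough sketch using the KMS CLI; the key ID and file names are placeholders, and the wrapping algorithm matches the -oaep option used with openssl above:

# Fetch a fresh public wrapping key and import token for the external CMK
$ aws kms get-parameters-for-import \
    --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
    --wrapping-algorithm RSAES_OAEP_SHA_1 \
    --wrapping-key-spec RSA_2048

# Upload the wrapped key material along with the import token
$ aws kms import-key-material \
    --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
    --encrypted-key-material fileb://enc.aes.key \
    --import-token fileb://import_token.bin \
    --expiration-model KEY_MATERIAL_EXPIRES \
    --valid-to 2016-12-01T00:00:00Z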

I clicked on Finish and the key was Enabled and ready for me to use:

And that’s all I had to do!

Because I set an expiration date for the key, KMS automatically created a CloudWatch metric to track the remaining time until the key expires. I can create a CloudWatch Alarm for this metric as a reminder to re-import the key when it is about to expire. When the key expires, a CloudWatch Event will be generated; I can use this to take an action programmatically.

Available Now
This new feature is now available in AWS GovCloud (US) and all commercial AWS regions except for China (Beijing) and you can start using it today.

Jeff;

Now Available – IPv6 Support for Amazon S3

As you probably know, every server and device that is connected to the Internet must have a unique IP address. Way back in 1981, RFC 791 (“Internet Protocol”) defined an IP address as a 32-bit entity, with three distinct network and subnet sizes (Classes A, B, and C – essentially large, medium, and small) designed for organizations with requirements for different numbers of IP addresses. In time, this format came to be seen as wasteful and the more flexible CIDR (Classless Inter-Domain Routing) format was standardized and put into use. The 32-bit entity (commonly known as an IPv4 address) has served the world well, but the continued growth of the Internet means that all available IPv4 addresses will ultimately be assigned and put to use.

In order to accommodate this growth and to pave the way for future developments, networks, devices, and service providers are now in the process of moving to IPv6. With 128 bits per IP address, IPv6 has plenty of address space (according to my rough calculation, 128 bits is enough to give 3.5 billion IP addresses to every one of the 100 octillion or so stars in the universe). While the huge address space is the most obvious benefit of IPv6, there are other more subtle benefits as well. These include extensibility, better support for dynamic address allocation, and additional built-in support for security.

Today I am happy to announce that objects in Amazon S3 buckets are now accessible via IPv6 addresses via new “dual-stack” endpoints. When a DNS lookup is performed on an endpoint of this type, it returns an “A” record with an IPv4 address and an “AAAA” record with an IPv6 address. In most cases the network stack in the client environment will automatically prefer the AAAA record and make a connection using the IPv6 address.

Accessing S3 Content via IPv6
In order to start accessing your content via IPv6, you need to switch to new dual-stack endpoints that look like this:

https://BUCKET.s3.dualstack.REGION.amazonaws.com

or this:

https://s3.dualstack.REGION.amazonaws.com/BUCKET

If you are using the AWS Command Line Interface (CLI) or the AWS Tools for Windows PowerShell read Using Dual-Stack endpoints from the AWS CLI for more info. If you are using one of the AWS SDKs, read Using Dual-Stack Endpoints from the AWS SDKs.
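For example, here is a rough sketch of pointing the CLI at a dual-stack endpoint (the bucket name is a placeholder); you can also make the setting persistent in your CLI configuration:

$ aws s3 ls s3://my-bucket \
    --region us-east-1 \
    --endpoint-url https://s3.dualstack.us-east-1.amazonaws.com

# Or enable dual-stack endpoints for all s3 commands in this profile
$ aws configure set default.s3.use_dualstack_endpoint true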

Things to Know
Here are some things that you need to know in order to make a smooth transition to IPv6:

Bucket and IAM Policies – If you use policies to grant or restrict access via IP address, update them to include the desired IPv6 ranges before you switch to the new endpoints. If you don’t do this, clients may incorrectly gain or lose access to the AWS resources. Update any policies that exclude access from certain IPv4 addresses by adding the corresponding IPv6 addresses.

IPv6 Connectivity – Because the network stack will prefer an IPv6 address to an IPv4 address, an unusual situation can arise under certain circumstances. The client system can be configured for IPv6 but connected to a network that is not configured to route IPv6 packets to the Internet. Be sure to test for end-to-end connectivity before you switch to the dual-stack endpoints.

Log Entries – Log entries will include the IPv4 or IPv6 address, as appropriate. If you analyze your log files using internal or third-party applications, you should ensure that they are able to recognize and process entries that include an IPv6 address.

S3 Feature Support – IPv6 support is available for all S3 features with the exception of Website Hosting, S3 Transfer Acceleration, and access via BitTorrent.

Region Support – IPv6 support is available in all commercial AWS Regions and in AWS GovCloud (US). It is not available in the China (Beijing) Region.

Jeff;

New – Usage Plans for Amazon API Gateway

We introduced the Amazon API Gateway last year in order to allow developers to build backend web services for mobile, web, enterprise, and IoT applications (read Amazon API Gateway – Build and Run Scalable Application Backend to learn more). Since that time, AWS customers have built API implementations that run on AWS Lambda, Amazon Elastic Compute Cloud (EC2), and on servers running outside of AWS.

In many cases, our customers plan to create an ecosystem of partner developers building applications on top of their APIs. The API Gateway allows our customers to create API keys for each of their customers:

These keys identify each user of the API, and allow the API developer to control the set of services and service stages (environments such as test, beta, and production) that the key holder can access. Because the APIs often provide substantial business value, our customers have told us that they would like to build APIs, regulate access to them, and monetize them by charging based on usage.

New Usage Plans
In order to support this use case, we are introducing Usage Plans for API Gateway. This new feature allows developers to build and monetize APIs and to create ecosystems around them. You can create usage plans for different levels of access (Bronze, Silver, and Gold), different categories of users (Student, Individual, Professional, or Enterprise), and so forth. Plans are named and control the following aspects of access to an API:

  • Throttling – Overall request rate (average requests per second) and a burst capacity.
  • Quota – Number of requests that can be made per day, week, or month.
  • API / Stages – The API and API stages that can be accessed.

If you choose to make use of Usage Plans, each of your APIs must be associated with a plan. Fortunately, the API Gateway will be more than happy to create default plans and associate them with your APIs. You need only confirm that you want this to happen:

The default plans have no throttling and no quota, and will not change the behavior of the API.

Creating a Usage Plan
Let’s step through the process of creating a Usage Plan.  Open up the API Gateway Console, navigate to Usage Plans, and click on Create. Assign a name and a description, then set the Throttling and Quota options as desired:

Throttling is implemented using a Token Bucket model. The bucket is large enough to hold the number of tokens denoted by the Burst value, and gains new tokens at the specified Rate. Each API request removes one token from the bucket. Using a Token Bucket allows you to have APIs that support a steady stream of requests with the capability to accommodate the occasional burst. You can think about throttling in two different ways. From the business side, it allows you to use a Usage Plan to control how many requests each of your customers can make. From the technical side, it allows you to insulate the services that are used to implement the APIs from excessive requests. This is especially important if those services are implemented outside of AWS and cannot scale to meet demand.

Click on Next, and then select the API and API Stages that can be accessed via the Usage Plan:

Click on Next to create the plan, and then add some API Keys to it. You can add existing keys or create new ones:

If you are planning to attach the usage plan to an existing API Key, you must first remove the default plan from the key because the key cannot reference multiple plans that refer to the same stage. You can do this by opening up the API Keys in a second browser tab and clicking on the “x” to the right of the default plan:

Now (on the tab where you are adding the API Keys to the plan), select one or more API Keys (representing subscribers to the API), and click on Done:

As soon as your users (subscribers) start to make calls to the APIs using their API Keys, their usage will be throttled and limited as specified in the plan. You can view their usage at any time by clicking on Usage:

Quotas are applied and respected in real time. Usage data can be up to 30 minutes behind.

You can download usage data for the plan by clicking on Export Usage Data:

You can then process and analyze the data as desired. For example, you could bill your subscribers on a per-call basis.

If one of your subscribers is making exceptionally good use of your API and is getting close to their quota for the period, you can grant a usage extension to them without changing the Usage Plan. Simply click on Extension and enter the number of requests that they are permitted to make for the remainder of the period:

Using Usage Plans
As I mentioned earlier, you can use Usage Plans to bill for usage and to create an ecosystem around your APIs.

You can control and police access, and you can selectively grant special access to individual subscribers as needed. For example, you can create API Keys and Usage Plans that allow access to specific API stages. Most of your subscribers will need access to your production stage; a few will need access to your development or beta testing stages.
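Here is a rough sketch of how that looks from the CLI, with the names and IDs below standing in as placeholders: a plan with throttling and a monthly quota, scoped to a production stage, with an existing API Key attached.

# Create a usage plan with throttling and a monthly quota, scoped to one stage
$ aws apigateway create-usage-plan \
    --name Gold \
    --description "High-volume subscribers" \
    --throttle burstLimit=200,rateLimit=100 \
    --quota limit=1000000,period=MONTH \
    --api-stages apiId=a1b2c3d4e5,stage=prod

# Associate an existing API Key with the plan
$ aws apigateway create-usage-plan-key \
    --usage-plan-id u9v8w7 \
    --key-id k1l2m3n4o5 \
    --key-type API_KEY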

Before I wrap up, I should point out that the API Keys are for identification, not for authentication. The keys are not used to sign requests, and should not be used as a security mechanism (this is a perfect use case for Cognito Your User Pools).

Available Now
This feature is available now and you can start using it today.

Jeff;