AWS Blog

AWS Week in Review – Coming Back With Your Help!

by Jeff Barr | in Week in Review

Back in 2012 I realized that something interesting happened in AWS-land just about every day. In contrast to the periodic bursts of activity that were the norm back in the days of shrink-wrapped software, the cloud became a place where steady, continuous development took place.

In order to share all of this activity with my readers and to better illustrate the pace of innovation, I published the first AWS Week in Review in the spring of 2012. The original post took all of about 5 minutes to assemble, format, and post. I got some great feedback on it and I continued to produce a steady stream of new posts every week for over 4 years. Over the years I added more and more content generated within AWS and from the ever-growing community of fans, developers, and partners.

Unfortunately, finding, saving, and filtering links, and then generating these posts grew to take a substantial amount of time. I reluctantly stopped writing new posts early this year after spending about 4 hours on the post for the week of April 25th.

After receiving dozens of emails and tweets asking about the posts, I gave some thought to a new model that would be open and more scalable.

Going Open
The AWS Week in Review is now a GitHub project (https://github.com/aws/aws-week-in-review). I am inviting contributors (AWS fans, users, bloggers, and partners) to contribute.

Every Monday morning I will review and accept pull requests for the previous week, aiming to publish the Week in Review by 10 AM PT. In order to keep the posts focused and highly valuable, I will approve pull requests only if they meet our guidelines for style and content.

At that time I will also create a file for the week to come, so that you can populate it as you discover new and relevant content.

Content & Style Guidelines
Here are the guidelines for making contributions:

  • Relevance – All contributions must be directly related to AWS.
  • Ownership – All contributions remain the property of the contributor.
  • Validity – All links must be to publicly available content (links to free, gated content are fine).
  • Timeliness – All contributions must refer to content that was created on the associated date.
  • Neutrality – This is not the place for editorializing. Just the facts / links.

I generally stay away from generic news about the cloud business, and I post benchmarks only with the approval of my colleagues.

And now a word or two about style:

  • Content from this blog is generally prefixed with “I wrote about POST_TITLE” or “We announced that TOPIC.”
  • Content from other AWS blogs is styled as “The BLOG_NAME wrote about POST_TITLE.”
  • Content from individuals is styled as “PERSON wrote about POST_TITLE.”
  • Content from partners and ISVs is styled as “The BLOG_NAME wrote about POST_TITLE.”

There’s room for some innovation and variation to keep things interesting, but keep it clean and concise. Please feel free to review some of my older posts to get a sense for what works.

Over time we might want to create a more compelling visual design for the posts. Your ideas (and contributions) are welcome.

Sections
Over the years I created the following sections:

  • Daily Summaries – content from this blog, other AWS blogs, and everywhere else.
  • New & Notable Open Source.
  • New SlideShare Presentations.
  • New YouTube Videos including APN Success Stories.
  • New AWS Marketplace products.
  • New Customer Success Stories.
  • Upcoming Events.
  • Help Wanted.

Some of this content comes to my attention via RSS feeds. I will post the OPML file that I use in the GitHub repo and you can use it as a starting point. The New & Notable Open Source section is derived from a GitHub search for aws. I scroll through the results and pick the 10 or 15 items that catch my eye. I also watch /r/aws and Hacker News for interesting and relevant links and discussions.

Over time, it is possible that groups or individuals may become the primary contributor for a section. That’s fine, and I would be thrilled to see this happen. I am also open to the addition of new sections, as long as they are highly relevant to AWS.

Adding Content / Creating a Pull Request
It is very easy to participate in this process. You don’t need to use any shell commands or text editors. Start by creating a GitHub account and logging in. I set up two-factor authentication for my account and you might want to do the same.

Now, find a piece of relevant content. As an example, I’ll use the presentation Amazon Aurora for Enterprise Database Applications. I visit the current aws-week-in-review file and click on the Edit button (the pencil icon):

Then I insert the new content (line 81):

I could have inserted several pieces of new content if desired.

Next, I enter a simple commit message, indicate that the commit should go to a branch (this is important), and click on Propose file change.

And that’s it! In my role as owner of the file, I’ll see the pull request, review it, and then merge it into the master branch.
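If you do prefer the command line, the standard GitHub fork-and-branch workflow works just as well. Here is a minimal sketch, assuming you have already forked the repository; the branch name, file name, and commit message are only illustrative:

$ git clone https://github.com/YOUR_GITHUB_USERNAME/aws-week-in-review.git
$ cd aws-week-in-review
$ git checkout -b add-aurora-presentation
$ # Edit the current week's file in your favorite editor, then:
$ git add 2016-08-08-aws-week-in-review.md
$ git commit -m "Add Amazon Aurora presentation"
$ git push origin add-aurora-presentation
$ # Finally, open a pull request from your branch to aws/aws-week-in-review on github.com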

Automation
Earlier this year I tried to automate the process, but I did not like the results. You are welcome to give this a shot on your own. I do want to make sure that we continue to exercise human judgement in order to keep the posts as valuable as possible.

Let’s Do It
I am super excited about this project and I cannot wait to see those pull requests coming in. Please let me know (via a blog comment) if you have any suggestions or concerns.

I should note up front that I am very new to Git-based collaboration and that this is going to be a learning exercise for me. Do not hesitate to let me know if there’s a better way to do things!

Jeff;


Amazon WorkSpaces Update – Hourly Usage and Expanded Root Volume

by Jeff Barr | in Amazon WorkSpaces, Launch

In my recent post, I Love My Amazon WorkSpace, I shared the story of how I became a full-time user and big fan of Amazon WorkSpaces. Since writing the post I have heard similar sentiments from several other AWS customers.

Today I would like to tell you about some new and recent developments that will make WorkSpaces more economical, more flexible, and more useful:

  • Hourly WorkSpaces – You can now pay for your WorkSpace by the hour.
  • Expanded Root Volume – Newly launched WorkSpaces now have an 80 GB root volume.

Let’s take a closer look at these new features.

Hourly WorkSpaces
If you only need part-time access to your WorkSpace, you (or your organization, to be more precise) will benefit from this feature. In addition to the existing monthly billing, you can now use and pay for a WorkSpace on an hourly basis, allowing you to save money on your AWS bill. If you are a part-time employee, a road warrior, share your job with another part-timer, or work on multiple short-term projects, this feature is for you. It is also a great fit for corporate training, education, and remote administration.

There are now two running modes – AlwaysOn and AutoStop:

  • AlwaysOn – This is the existing mode. You have instant access to a WorkSpace that is always running, billed by the month.
  • AutoStop – This is new. Your WorkSpace starts running and billing when you log in, and stops automatically when you remain disconnected for a specified period of time.

A WorkSpace that is running in AutoStop mode will automatically stop a predetermined amount of time after you disconnect (1 to 48 hours). Your WorkSpaces Administrator can also force a running WorkSpace to stop. When you next connect, the WorkSpace will resume, with all open documents and running programs intact. Resuming a stopped WorkSpace generally takes less than 90 seconds.

Your WorkSpaces Administrator has the ability to choose your running mode when launching your WorkSpace:

WorkSpaces Configuration

The Administrator can change the AutoStop time and the running mode at any point during the month. They can also track the number of working hours that your WorkSpace accumulates during the month using the new UserConnected CloudWatch metric, and switch from AutoStop to AlwaysOn when this becomes more economical. Switching from hourly to monthly billing takes place upon request; however, switching the other way takes place at the start of the following month.
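If your WorkSpaces Administrator prefers to script these changes, here’s a rough sketch using the AWS CLI; the WorkSpace ID, timeout, and time range are placeholders, and the metric details are worth double-checking in the CloudWatch console:

$ # Switch a WorkSpace to AutoStop with a 2-hour disconnect timeout
$ aws workspaces modify-workspace-properties \
    --workspace-id ws-0123456789 \
    --workspace-properties RunningMode=AUTO_STOP,RunningModeAutoStopTimeoutInMinutes=120
$ # Review the UserConnected metric for the WorkSpace over the past day
$ aws cloudwatch get-metric-statistics \
    --namespace AWS/WorkSpaces --metric-name UserConnected \
    --dimensions Name=WorkspaceId,Value=ws-0123456789 \
    --start-time 2016-08-17T00:00:00Z --end-time 2016-08-18T00:00:00Z \
    --period 3600 --statistics Sum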

All new Amazon WorkSpaces can take advantage of hourly billing today. If you’re using a custom image for your WorkSpaces, you’ll need to refresh your custom images from the latest Amazon WorkSpaces bundles. The ability for existing WorkSpaces to switch to hourly billing will be added in the future.

To learn more about pricing for hourly WorkSpaces, visit the WorkSpaces Pricing page.

Expanded Root Volume
By popular demand we have expanded the size of the root volume for newly launched WorkSpaces to 80 GB, allowing you to run more applications and store more data at no additional cost. Your WorkSpaces Administrator can rebuild existing WorkSpaces in order to upgrade them to the larger root volumes (read Rebuild a WorkSpace to learn more). Rebuilding a WorkSpace will restore the root volume (C:) to the most recent image of the bundle that was used to create the WorkSpace. It will also restore the data volume (D:) from the last automatic snapshot.

Some WorkSpaces Resources
While I have your attention, I would like to let you know about a couple of other important WorkSpaces resources:

Available Now
The features that I described above are available now and you can start using them today!

Jeff;

AWS Webinars – August, 2016

by Jeff Barr | in Webinars

Everyone on the AWS team understands the value of educating our customers on the best ways to use our services. We work hard to create documentation, training materials, and blog posts for you! We run live events such as our Global AWS Summits and AWS re:Invent where the focus is on education. Last but not least, we put our heads together and create a fresh lineup of webinars for you each and every month.

We have a great selection of webinars on the schedule for August. As always they are free, but they do fill up and I strongly suggest that you register ahead of time. All times are PT, and each webinar runs for one hour:

August 23

August 24

August 25

August 30

August 31

Jeff;

PS – Check out the AWS Webinar Archive for more great content!


AWS Solution – Transit VPC

by Jeff Barr | in Amazon VPC, AWS Marketplace, Quick Start

Today I would like to tell you about a new AWS Solution. This one is cool because of what it does and how it works! Like the AWS Quick Starts, this one was built by AWS Solutions Architects and incorporates best practices for security and high availability.

The new Transit VPC Solution shows you how to implement a very useful networking construct that we call a transit VPC. You can use this to connect multiple Virtual Private Clouds (VPCs) that might be geographically disparate and/or running in separate AWS accounts, to a common VPC that serves as a global network transit center. This network topology simplifies network management and minimizes the number of connections that you need to set up and manage. Even better, it is implemented virtually and does not require any physical network gear or a physical presence in a colocation transit hub. Here’s what this looks like:

In this diagram, the transit VPC is central, surrounded by additional “spoke” VPCs, corporate data centers, and other networks.

The transit VPC supports several important use cases:

  • Private Networking – You can build a private network that spans two or more AWS Regions.
  • Shared Connectivity – Multiple VPCs can share connections to data centers, partner networks, and other clouds.
  • Cross-Account AWS Usage – The VPCs and the AWS resources within them can reside in multiple AWS accounts.

The solution uses an AWS CloudFormation stack to launch and configure all of the AWS resources. It provides you with three throughput options ranging from 500 Mbps to 2 Gbps, each implemented over a pair of connections for high availability. The stack makes use of the Cisco Cloud Services Router (CSR), which is now available in AWS Marketplace. You can use your existing CSR licenses (the BYOL model) or you can pay for your CSR usage on an hourly basis. The cost to run a transit VPC is based on the throughput option and licensing model that you choose, and ranges from $0.21 to $8.40 per hour, with an additional cost (for AWS resources) of $0.10 per hour for each spoke VPC. There’s an additional cost of $1 per month for an AWS Key Management Service (KMS) customer master key that is specific to the solution. All of these prices are exclusive of network transit costs.

The template installs and uses a pair of AWS Lambda functions in a creative way!

The VGW Poller function runs every minute. It scans all of the AWS Regions in the account, looking for appropriately tagged Virtual Private Gateways in spoke VPCs that do not have a VPN connection. When it finds one, it creates (if necessary) the corresponding customer gateway and the VPN connections to the CSR, and then saves the information in an S3 bucket.
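In other words, the poller’s search in each region is roughly equivalent to the following CLI query; the tag key shown here is an assumption on my part, so check the implementation guide for the exact tag that the solution expects:

$ aws ec2 describe-vpn-gateways --region us-east-1 \
    --filters "Name=tag-key,Values=transitvpc:spoke"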

The Cisco Configurator function is triggered by the Put event on the bucket. It parses the VPN connection information and generates the necessary config files, then pushes them to the CSR instances using SSH. This brings the VPN tunnels up and, via the magic of BGP, establishes neighbor relationships with the spoke VPCs.

By using Lambda in this way, new spoke VPCs can be brought online quickly without the overhead of keeping an underutilized EC2 instance up and running.

The solution’s implementation guide, as always, contains step-by-step directions and security recommendations.

Jeff;

PS – Check out additional network best practice guidance to find answers to common network questions!

AWS Snowball Update – Job Management API & S3 Adapter

by Jeff Barr | in AWS Import/Export, Launch

We introduced AWS Import/Export Snowball last fall from the re:Invent stage. The Snowball appliance is designed for customers who need to transfer large amounts of data into or out of AWS on a one-time or recurring basis (read AWS Import/Export Snowball – Transfer 1 Petabyte Per Week Using Amazon-Owned Storage Appliances to learn more).

Today we are launching two important additions to Snowball. Here’s the scoop:

  • Snowball Job Management API – The new Snowball API lets you build applications that create and manage Snowball jobs.
  • S3 Adapter – The new Snowball S3 Adapter lets you access a Snowball appliance as if it were an S3 endpoint.

Time to dive in!

Snowball Job Management API
The original Snowball model was interactive and console-driven. You could create a job (basically “Send me a Snowball”) and then monitor its progress, tracking the shipment, transit, delivery, and return to AWS visually. This was great for one-off jobs, but did not meet the needs of customers who wanted to integrate Snowball into their existing backup or data transfer model. Based on the requests that we received from these customers and from our Storage Partners, we are introducing a Snowball Job Management API today.

The Snowball Job Management API gives our customers and partners the power to make Snowball an intrinsic, integrated part of their data management solutions. Here are the primary functions:

  • CreateJob – Create an import or export job and initiate shipment of an appliance.
  • ListJobs – Fetch a list of jobs and associated job states.
  • DescribeJob – Fetch information about a specific job.

Read the API Reference to learn more!
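The same operations are also available from the AWS CLI. Here’s a hedged sketch; the bucket ARN, address ID, role ARN, and job ID are placeholders, and create-job accepts additional options (KMS key, capacity, shipping speed) that I’ve omitted:

$ # Request a Snowball for an import job
$ aws snowball create-job --job-type IMPORT \
    --resources '{"S3Resources":[{"BucketArn":"arn:aws:s3:::my-import-bucket"}]}' \
    --address-id ADID1234abcd-1234-abcd-1234-1234567890ab \
    --role-arn arn:aws:iam::123456789012:role/SnowballImportRole \
    --description "August backup import"
$ # List all jobs, then check on a specific one
$ aws snowball list-jobs
$ aws snowball describe-job --job-id JID1234abcd-1234-abcd-1234-1234567890ab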

I’m looking forward to reading about creative and innovative applications that make use of this new API! Leave me a comment and let me know what you come up with.

S3 Adapter
The new Snowball S3 Adapter allows you to access a Snowball as if it were an Amazon S3 endpoint running on-premises. This allows you to use your existing, S3-centric tools to move data to or from a Snowball.

The adapter is available for multiple Linux distributions and Windows releases, and is easy to install:

  1. Download the appropriate file from the Snowball Tools page and extract its contents to a local directory.
  2. Verify that the adapter’s configuration is appropriate for your environment (the adapter listens on port 8080 by default).
  3. Connect your Snowball to your network and get its IP address from the built-in display on the appliance.
  4. Visit the Snowball Console to obtain the unlock code and the job manifest.
  5. Launch the adapter, providing it with the IP address, unlock code, and manifest file.

With the adapter up and running, you can use your existing S3-centric tools by simply configuring them to use the local endpoint (the IP address of the on-premises host and the listener port). For example, here’s how you would run the s3 ls command on the on-premises host:

$ aws s3 ls --endpoint-url http://localhost:8080
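Copying data onto the appliance works the same way. For example, something like this would recursively copy a local directory through the adapter (the paths and bucket name are placeholders):

$ aws s3 cp /data/backups s3://my-snowball-bucket/backups \
    --recursive --endpoint-url http://localhost:8080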

After you copy your files to the Snowball, you can easily verify that the expected number of files were copied:

$ snowball validate

The initial release of the adapter supports a subset of the S3 API including GET on buckets and on the service, HEAD on a bucket and on objects, PUT and DELETE on objects, and all of the multipart upload operations. If you plan to access the adapter using your own code or third party tools, some testing is advisable.

To learn more, read about the Snowball Transfer Adapter.

Available Now
These new features are available now and you can start using them today!

Jeff;


New – Bring Your Own Keys with AWS Key Management Service

by Jeff Barr | in Key Management Service, Launch

AWS Key Management Service (KMS) provides you with seamless, centralized control over your encryption keys. Our customers have told us that they love this fully managed service because it automatically handles all of the availability, scalability, physical security, and hardware maintenance for the underlying Key Management Infrastructure (KMI). It also centralizes key management, with one dashboard that offers creation, rotation, and lifecycle management functions. With no up-front cost and usage-based pricing that starts at $1 per Customer Master Key (CMK) per month, KMS makes it easy for you to encrypt data stored in S3, EBS, RDS, Redshift, and any other AWS service that’s integrated with KMS.

Many AWS customers use KMS to create and manage their keys. A few, however, would like to maintain local control over their keys while still taking advantage of the other features offered by KMS. Our customers tell us that local control over the generation and storage of keys would help them meet their security and compliance requirements in order to run their most sensitive workloads in the cloud.

Bring Your Own Keys
In order to support this important use case, I am happy to announce that you can now bring your own keys to KMS. This allows you to protect extremely sensitive workloads and to maintain a secure copy of the keys outside of AWS. This new feature allows you to import keys from any key management and HSM (Hardware Security Module) solution that supports the RSA PKCS #1 standard, and use them with AWS services and your own applications. It also works in concert with AWS CloudTrail to provide you with detailed auditing information. Putting it all together, you get greater control over the lifecycle and durability of your keys while you use AWS to provide high availability. Most key management solutions in use today use an HSM in the back end, but not all HSMs provide a key management solution.

The import process can be initiated from the AWS Management Console, AWS Command Line Interface (CLI), or by making calls to the KMS API. Because you never want to transmit secret keys in the open, the import process requires you to wrap the key in your KMI beforehand with a public key provided by KMS that is unique to your account. You can use the PKCS #1 scheme of your choice to wrap the key.
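If you would rather script the process than click through the console, the first two steps look roughly like this; the key ID is a placeholder, and the wrapping options shown are one valid combination rather than the only one:

$ # Create a CMK whose key material will be imported later
$ aws kms create-key --origin EXTERNAL \
    --description "CMK for imported key material"
$ # Download the public wrapping key and import token for that CMK
$ aws kms get-parameters-for-import \
    --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
    --wrapping-algorithm RSAES_OAEP_SHA_1 \
    --wrapping-key-spec RSA_2048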

Following the directions (Importing Key Material in AWS Key Management Service), I started out by clicking on Create key in the KMS Console:

I entered an Alias and a Description, selected External, and checked the “I understand…” checkbox:

Then I picked the set of IAM users that have permission to use the KMS APIs to administer the key (this step applies to both KMS and External keys, as does the next one):

Then I picked the set of IAM users that can use the key to encrypt and decrypt data:

I verified the key policy, and then I downloaded my wrapping key and my import token. The wrapping key is the 2048-bit RSA public key that I’ll use to encrypt the 256-bit secret key I want to import into KMS. The import token contains metadata to ensure that my exported key can be imported into KMS correctly.

I opened up the ZIP file and put the wrapping key into a directory on my EC2 instance. Then I used the openssl command twice: once to generate my secret key and a second time to wrap the secret key with the wrapping key. Note that I used openssl as a convenient way to generate a 256-bit key and prepare it for import. For production data, you should use a more secure method (preferably a commercial key management or HSM solution) of generating and storing the local copy of your keys.

$ # Generate a 256-bit AES key (for production, generate and store this in your KMI or HSM instead)
$ openssl rand -out plain_text_aes_key.bin 32
$ # Wrap (encrypt) the secret key with the downloaded RSA public wrapping key
$ openssl rsautl -encrypt -in plain_text_aes_key.bin -oaep \
  -inkey wrappingKey_fcb572d3-6680-449c-91ab-ac3a5c07dc09_0804104355 \
  -pubin -keyform DER -out enc.aes.key

Finally, I brought it all together by checking “I am ready to upload…”  and clicking on Next, then specifying my key materials along with an expiration time for the key. Since the key will be unusable by AWS after the expiration date, you may want to choose the option where the key doesn’t expire until you better understand your requirements. You can always re-import the same key and reset the expiration time later.
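The equivalent final step from the CLI would look something like this; the key ID, file names, and expiration date are placeholders:

$ aws kms import-key-material \
    --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
    --encrypted-key-material fileb://enc.aes.key \
    --import-token fileb://importToken.bin \
    --expiration-model KEY_MATERIAL_EXPIRES \
    --valid-to 2016-12-31T00:00:00Z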

I clicked on Finish and the key was Enabled and ready for me to use:

And that’s all I had to do!

Because I set an expiration date for the key, KMS automatically created a CloudWatch metric to track the remaining time until the key expires. I can create a CloudWatch Alarm for this metric as a reminder to re-import the key when it is about to expire. When the key expires, a CloudWatch Event will be generated; I can use this to take an action programmatically.
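As a sketch, the reminder alarm could be created along these lines; I believe the metric is SecondsUntilKeyMaterialExpiration in the AWS/KMS namespace, but treat the metric name, dimension, and threshold here as assumptions to verify in the CloudWatch console:

$ # Alarm one week (604,800 seconds) before the imported key material expires
$ aws cloudwatch put-metric-alarm \
    --alarm-name kms-key-material-expiring-soon \
    --namespace AWS/KMS --metric-name SecondsUntilKeyMaterialExpiration \
    --dimensions Name=KeyId,Value=1234abcd-12ab-34cd-56ef-1234567890ab \
    --statistic Minimum --period 86400 --evaluation-periods 1 \
    --threshold 604800 --comparison-operator LessThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:key-expiry-alerts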

Available Now
This new feature is now available in AWS GovCloud (US) and all commercial AWS regions except for China (Beijing) and you can start using it today.

Jeff;

Now Available – IPv6 Support for Amazon S3

by Jeff Barr | in Amazon S3, Launch

As you probably know, every server and device that is connected to the Internet must have a unique IP address. Way back in 1981, RFC 791 (“Internet Protocol”) defined an IP address as a 32-bit entity, with three distinct network and subnet sizes (Classes A, B, and C – essentially large, medium, and small) designed for organizations with requirements for different numbers of IP addresses. In time, this format came to be seen as wasteful and the more flexible CIDR (Classless Inter-Domain Routing) format was standardized and put into use. The 32-bit entity (commonly known as an IPv4 address) has served the world well, but the continued growth of the Internet means that all available IPv4 addresses will ultimately be assigned and put to use.

In order to accommodate this growth and to pave the way for future developments, networks, devices, and service providers are now in the process of moving to IPv6. With 128 bits per IP address, IPv6 has plenty of address space (according to my rough calculation, 128 bits is enough to give 3.5 billion IP addresses to every one of the 100 octillion or so stars in the universe). While the huge address space is the most obvious benefit of IPv6, there are other more subtle benefits as well. These include extensibility, better support for dynamic address allocation, and additional built-in support for security.

Today I am happy to announce that objects in Amazon S3 buckets are now accessible via IPv6 addresses via new “dual-stack” endpoints. When a DNS lookup is performed on an endpoint of this type, it returns an “A” record with an IPv4 address and an “AAAA” record with an IPv6 address. In most cases the network stack in the client environment will automatically prefer the AAAA record and make a connection using the IPv6 address.

Accessing S3 Content via IPv6
In order to start accessing your content via IPv6, you need to switch to new dual-stack endpoints that look like this:

http://BUCKET.s3.dualstack.REGION.amazonaws.com

or this:

http://s3.dualstack.REGION.amazonaws.com/BUCKET

If you are using the AWS Command Line Interface (CLI) or the AWS Tools for Windows PowerShell you can use the --enabledualstack flag to switch to the dual-stack endpoints.

We are currently updating the AWS SDKs to support the use_dualstack_endpoint setting and expect to push them out to production by the middle of next week. Until then, refer to the developer guide for your SDK to learn how to enable this feature.
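In the meantime, you can reach the dual-stack endpoint explicitly from the CLI, or (with a sufficiently recent CLI version) enable it through the CLI’s S3 configuration; both commands below are sketches with placeholder bucket and Region values:

$ # One-off request against the dual-stack endpoint
$ aws s3 ls s3://my-bucket --region us-west-2 \
    --endpoint-url https://s3.dualstack.us-west-2.amazonaws.com
$ # Or enable dual-stack endpoints for all CLI S3 commands
$ aws configure set default.s3.use_dualstack_endpoint true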

Things to Know
Here are some things that you need to know in order to make a smooth transition to IPv6:

Bucket and IAM Policies – If you use policies to grant or restrict access via IP address, update them to include the desired IPv6 ranges before you switch to the new endpoints. If you don’t do this, clients may incorrectly gain or lose access to the AWS resources. Update any policies that exclude access from certain IPv4 addresses by adding the corresponding IPv6 addresses.
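For example, a bucket policy that restricts access by source address needs to cover both address families, roughly along these lines (the bucket name and CIDR ranges are placeholders):

$ cat > ipv6-aware-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowFromCorpNetwork",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-bucket/*",
    "Condition": {
      "IpAddress": {
        "aws:SourceIp": ["192.0.2.0/24", "2001:DB8:1234::/48"]
      }
    }
  }]
}
EOF
$ aws s3api put-bucket-policy --bucket my-bucket --policy file://ipv6-aware-policy.json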

IPv6 Connectivity – Because the network stack will prefer an IPv6 address to an IPv4 address, an unusual situation can arise under certain circumstances. The client system can be configured for IPv6 but connected to a network that is not configured to route IPv6 packets to the Internet. Be sure to test for end-to-end connectivity before you switch to the dual-stack endpoints.

Log Entries – Log entries will include the IPv4 or IPv6 address, as appropriate. If you analyze your log files using internal or third-party applications, you should ensure that they are able to recognize and process entries that include an IPv6 address.

S3 Feature Support – IPv6 support is available for all S3 features with the exception of Website Hosting, S3 Transfer Acceleration, and access via BitTorrent.

Region Support – IPv6 support is available in all commercial AWS Regions and in AWS GovCloud (US). It is not available in the China (Beijing) Region.

Jeff;

New – Usage Plans for Amazon API Gateway

by Jeff Barr | in Amazon API Gateway, Launch

We introduced the Amazon API Gateway last year in order to allow developers to build backend web services for mobile, web, enterprise, and IoT applications (read Amazon API Gateway – Build and Run Scalable Application Backend to learn more). Since that time, AWS customers have built API implementations that run on AWS Lambda, Amazon Elastic Compute Cloud (EC2), and on servers running outside of AWS.

In many cases, our customers plan to create an ecosystem of partner developers building applications on top of their APIs. The API Gateway allows our customers to create API keys for each of their customers:

These keys identify each user of the API, and allow the API developer to control the set of services and service stages (environments such as test, beta, and production) that the key holder can access. Because the APIs often provide substantial business value, our customers have told us that they would like to build APIs, regulate access to them, and monetize them by charging based on usage.

New Usage Plans
In order to support this use case, we are introducing Usage Plans for API Gateway. This new feature allows developers to build and monetize APIs and to create ecosystems around them. You can create usage plans for different levels of access (Bronze, Silver, and Gold), different categories of users (Student, Individual, Professional, or Enterprise), and so forth. Plans are named and control the following aspects of access to an API:

  • Throttling – Overall request rate (average requests per second) and a burst capacity.
  • Quota – Number of requests that can be made per day, week, or month.
  • API / Stages – The API and API stages that can be accessed.

If you choose to make use of Usage Plans, each of your APIs must be associated with a plan. Fortunately, the API Gateway will be more than happy to create default plans and associate them with your APIs. You need only confirm that you want this to happen:

The default plans have no throttling and no quota, and will not change the behavior of the API.

Creating a Usage Plan
Let’s step through the process of creating a Usage Plan.  Open up the API Gateway Console, navigate to Usage Plans, and click on Create. Assign a name and a description, then set the Throttling and Quota options as desired:

Throttling is implemented using a Token Bucket model. The bucket is large enough to hold the number of tokens denoted by the Burst value, and gains new tokens at the specified Rate. Each API request removes one token from the bucket. Using a Token Bucket allows you to have APIs that support a steady stream of requests with the capability to accommodate the occasional burst. You can think about throttling in two different ways. From the business side, it allows you to use a Usage Plan to control how many requests each of your customers can make. From the technical side, it allows you to insulate the services that are used to implement the APIs from excessive requests. This is especially important if those services are implemented outside of AWS and cannot scale to meet demand.

Click on Next, and then select the API and API Stages that can be accessed via the Usage Plan:
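If you prefer to script this step, the equivalent CLI call would look roughly like the following; the name, limits, API ID, and stage are placeholders:

$ aws apigateway create-usage-plan --name Gold \
    --description "Gold tier subscribers" \
    --throttle rateLimit=100,burstLimit=200 \
    --quota limit=500000,period=MONTH \
    --api-stages apiId=a1b2c3d4e5,stage=prod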

Click on Next to create the plan, and then add some API Keys to it. You can add existing keys or create new ones:

If you are planning to attach the usage plan to an existing API Key, you must first remove the default plan from the key because the key cannot reference multiple plans that refer to the same stage. You can do this by opening up the API Keys in a second browser tab and clicking on the “x” to the right of the default plan:

Now (on the tab where you are adding the API Keys to the plan), select one or more API Keys (representing subscribers to the API), and click on Done:
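From the CLI, attaching a key to a plan is a single call; the IDs below are placeholders:

$ aws apigateway create-usage-plan-key \
    --usage-plan-id abcde1 \
    --key-id ab12cd34ef \
    --key-type API_KEY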

As soon as your users (subscribers) start to make calls to the APIs using their API Keys, their usage will be throttled and limited as specified in the plan. You can view their usage at any time by clicking on Usage:

Quotas are applied and respected in real time. Usage data can be up to 30 minutes behind.

You can download usage data for the plan by clicking on Export Usage Data:

You can then process and analyze the data as desired. For example, you could bill your subscribers on a per-call basis.

If one of your subscribers is making exceptionally good use of your API and is getting close to their quota for the period, you can grant a usage extension to them without changing the Usage Plan. Simply click on Extension and enter the number of requests that they are permitted to make for the remainder of the period:

Using Usage Plans
As I mentioned earlier, you can use Usage Plans to bill for usage and to create an ecosystem around your APIs.

You can control and police access, and you can selectively grant special access to individual subscribers as needed. For example, you can create API Keys and Usage Plans that allow access to specific API stages. Most of your subscribers will need access to your production stage; a few will need access to your development or beta testing stages.

Before I wrap up, I should point out that the API Keys are for identification, not for authentication. The keys are not used to sign requests, and should not be used as a security mechanism (this is a perfect use case for Cognito Your User Pools).

Available Now
This feature is available now and you can start using it today.

Jeff;

Amazon Kinesis Analytics – Process Streaming Data in Real Time with SQL

by Jeff Barr | in Amazon Kinesis, Launch

As you may know, Amazon Kinesis greatly simplifies the process of working with real-time streaming data in the AWS Cloud. Instead of setting up and running your own processing and short-term storage infrastructure, you simply create a Kinesis Stream or Kinesis Firehose, arrange to pump data in to it, and then build an application to process or analyze it.

While it is relatively easy to build streaming data solutions using Kinesis Streams and Kinesis Firehose, we want to make it even easier. We want you, whether you are a procedural developer, a data scientist, or a SQL developer, to be able to process voluminous clickstreams from web applications, telemetry and sensor reports from connected devices, server logs, and more using a standard query language, all in real time!

Amazon Kinesis Analytics
Today I am happy to be able to announce the availability of Amazon Kinesis Analytics. You can now run continuous SQL queries against your streaming data, filtering, transforming, and summarizing the data as it arrives. You can focus on processing the data and extracting business value from it instead of wasting your time on infrastructure. You can build a powerful, end-to-end stream processing pipeline in 5 minutes without having to write anything more complex than a SQL query.

When I think of running a series of SQL queries against a database table, I generally think of the data as staying more or less static while the queries come and go pretty quickly. Rows are added, changed, and deleted all the time, but this does not generally matter when considering a single query that runs at a particular point in time. Running a Kinesis Analytics query against streaming data turns this model sideways.  The queries are long-running and the data changes many times per second as new records, observations, or log entries arrive. Once you wrap your head around this, you will see that the query processing model is very easy to understand: You build persistent queries that process records as they arrive.

In order to control the set of records that will be processed by a given query, you make use of a processing “window.” Kinesis Analytics supports three different types of windows:

Tumbling windows are used for periodic reports. You could use a tumbling window to summarize data over time. Perhaps you get thousands or millions of requests per second, and would like to know how many arrive each minute. When the current tumbling window closes, the next one begins after it. A new result is generated each time the window fills up.

Sliding windows are used for monitoring and other types of trend detection. For example, you could use a sliding window to compute a real-time moving average for an error rate. Records enter the window, contribute to the result as long as they are within it, and the window advances. A new result is generated each time a new record enters the window. You can adjust the size of the window to control the sensitivity of the results.

Custom windows are used when the appropriate grouping is not strictly based on time. If you are processing clickstream data or server logs, you can use a custom window to perform an action known as sessionization. In other words, you can bound each query by the first and last actions performed by each user, as identified by a session identifier within the incoming data. You can write a query that computes the number of pages visited by each user or the time that they spend on your site.

While all of this might sound somewhat complicated, it is actually pretty easy to implement. Kinesis Analytics will analyze a sample of the incoming records and then propose a suitable schema. You can use it as-is, or you can fine-tune it to better reflect your actual data model. Once the schema has been defined, you can use the built-in SQL editor (complete with syntax checking and easy testing against live data). You can configure Kinesis Analytics to route the results of the query to up to four destinations including Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, or an Amazon Kinesis Stream.

When you build your first Amazon Kinesis Analytics application you need to write a pair of cooperating SQL statements (more complex applications can use more, but all it takes is two to get up and running):

A statement to create an in-application stream to store intermediate SQL results (a stream is like a SQL table that is continuously updated, which you can select from and insert into).

Your SQL query, which selects from one in-application stream and inserts into another in-application stream.

Your SQL statements can also JOIN the records to reference data that originates in S3. This can be handy when you want to enhance or modify the records to include additional, perhaps more descriptive, information.
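To make that two-statement structure concrete, here is a minimal sketch of a continuous filter and one way to hand it to the service from the command line. The application name and the TECHNOLOGY filter are illustrative, SOURCE_SQL_STREAM_001 is the default name that Kinesis Analytics gives to the first input stream, and you would still need to attach an input stream and an output destination (the console does this for you):

$ cat > continuous-filter.sql <<'EOF'
-- In-application stream that holds the filtered results
CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (ticker_symbol VARCHAR(4), price REAL);

-- Pump: a continuous query that selects from the input stream and inserts into the stream above
CREATE OR REPLACE PUMP "STREAM_PUMP" AS
  INSERT INTO "DESTINATION_SQL_STREAM"
    SELECT STREAM ticker_symbol, price
      FROM "SOURCE_SQL_STREAM_001"
      WHERE sector = 'TECHNOLOGY';
EOF
$ aws kinesisanalytics create-application \
    --application-name continuous-filter-demo \
    --application-code file://continuous-filter.sql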

Amazon Kinesis Analytics in Action
Let’s spend a few minutes looking at Amazon Kinesis Analytics in action!

I log in to the Amazon Kinesis Analytics Console and click on Create new application. Then I enter a name and a description for my app:

Now I can manage my data source, my queries, and the destination(s):

I can select one of my existing input streams:

Or I can configure a new one (I’ll do that):

I click on Create demo stream to create a stream that will be populated with sample stock ticker data. This takes 30 to 40 seconds!

Kinesis Analytics peeks at the stream and proposes a schema. I can accept it as-is or fine tune it:

Then I hop over to the SQL editor. It offers to start my app. That seems like a good idea, so I agree  and click on Yes, start application:

Here’s the actual SQL editor:

I can write my query from scratch or I can use a template:

I picked Continuous filter; here’s the SQL:

I inspected it, nodded in agreement, and then clicked on Save and run SQL.  Within seconds, results began to flow in and were visible in the Console:

I used the SQL editor to modify the query to remove the sector and price columns and ran the query again. When I did this I learned that I needed to remove the columns from the CREATE STREAM statement (this is obvious in retrospect but it was the end of a long day).

Here’s the revised result set:

In most cases the next step would be to route the results to a new or existing stream. I can do that from the Console:

With just a couple of clicks and a little bit of typing, I have created an Amazon Kinesis Analytics app that is capable of processing a production-scale stock ticker stream. This “demo” needs no changes whatsoever before being used in production. I think that’s kind of cool.

Learn More & Try it Yourself
As usual, I have barely scratched the surface of this exciting new service!  To learn more, you should read the new post, Writing SQL on Streaming Data with Amazon Kinesis Analytics.

You should be able to replicate my steps above in 5 minutes or less and I strongly recommend that you do so. Create your application, customize the SQL query, and learn how to process streaming data at scale.

Available Now
Amazon Kinesis Analytics is available now and you can start running queries against your streaming data today!

Jeff;

Powerful AWS Platform Features, Now for Containers

by Jeff Barr | in Amazon EC2, EC2 Container Service, Launch

Containers are great but they come with their own management challenges. Our customers have been using containers on AWS for quite some time to run workloads ranging from microservices to batch jobs. They told us that managing a cluster, including the state of the EC2 instances and containers, can be tricky, especially as the environment grows. They also told us that integrating the capabilities you get with the AWS platform, such as load balancing, scaling, security, monitoring, and more, with containers is a key requirement. Amazon ECS was designed to meet all of these needs and more.

We created Amazon ECS  to make it easy for customers to run containerized applications in production. There is no container management software to install and operate because it is all provided to you as a service. You just add the EC2 capacity you need to your cluster and upload your container images. Amazon ECS takes care of the rest, deploying your containers across a cluster of EC2 instances and monitoring their health. Customers such as Expedia and Remind have built Amazon ECS into their development workflow, creating PaaS platforms on top of it. Others, such as Prezi and Shippable, are leveraging ECS to eliminate operational complexities of running containers, allowing them to spend more time delivering features for their apps.

AWS has highly reliable and scalable fully-managed services for load balancing, auto scaling, identity and access management, logging, and monitoring. Over the past year, we have continued to natively integrate the capabilities of the AWS platform with your containers through ECS, giving you the same capabilities you are used to on EC2 instances.

Amazon ECS recently delivered container support for application load balancing (Today), IAM roles (July), and Auto Scaling (May). We look forward to bringing more of the AWS platform to containers over time.

Let’s take a look at the new capabilities!

Application Load Balancing
Load balancing and service discovery are essential parts of any microservices architecture. Because Amazon ECS uses Elastic Load Balancing, you don’t need to manage and scale your own load balancing layer. You also get direct access to other AWS services that support ELB such as AWS Certificate Manager (ACM) to automatically manage your service’s certificates and Amazon API Gateway to authenticate callers, among other features.

Today, I am happy to announce that ECS supports the new application load balancer, a high-performance load balancing option that operates at the application layer and allows you to define content-based routing rules. The application load balancer includes two features that simplify running microservices on ECS: dynamic ports and the ability for multiple services to share a single load balancer.

Dynamic ports make it easier to start tasks in your cluster without having to worry about port conflicts. Previously, to use Elastic Load Balancing to route traffic to your applications, you had to define a fixed host port in the ECS task. This added operational complexity, as you had to track the ports each application used, and it reduced cluster efficiency, as only one task could be placed per instance. Now, you can specify a dynamic port in the ECS task definition, which gives the container an unused port when it is scheduled on the EC2 instance. The ECS scheduler automatically adds the task to the application load balancer’s target group using this port. To get started, you can create an application load balancer from the EC2 Console or using the AWS Command Line Interface (CLI). Create a task definition in the ECS console with a container that sets the host port to 0. This container automatically receives a port in the ephemeral port range when it is scheduled.
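Roughly, the task definition and service registration look like this from the CLI; the names, image, and ARNs are placeholders, and the container definition is trimmed to the essentials:

$ # Host port 0 tells ECS to assign an ephemeral port when the task is scheduled
$ aws ecs register-task-definition --family web \
    --container-definitions '[{"name":"web","image":"nginx","memory":256,
      "portMappings":[{"containerPort":80,"hostPort":0}]}]'
$ # The service registers each task with the application load balancer's target group
$ aws ecs create-service --cluster my-cluster --service-name web \
    --task-definition web --desired-count 2 --role ecsServiceRole \
    --load-balancers targetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef,containerName=web,containerPort=80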

Previously, there was a one-to-one mapping between ECS services and load balancers. Now, a load balancer can be shared with multiple services, using path-based routing. Each service can define its own URI, which can be used to route traffic to that service. In addition, you can create an environment variable with the service’s DNS name, supporting basic service discovery. For example, a stock service could be http://example.com/stock and a weather service could be http://example.com/weather, both served from the same load balancer. A news portal could then use the load balancer to access both the stock and weather services.
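Each path-based route is simply a rule on the application load balancer’s listener; for example, something like this (the ARNs, priority, and path are placeholders):

$ aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/main/0123456789abcdef/fedcba9876543210 \
    --priority 10 \
    --conditions Field=path-pattern,Values='/stock*' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/stock/0123456789abcdef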

IAM Roles for ECS Tasks
In Amazon ECS, you have always been able to use IAM roles for your Amazon EC2 container instances to simplify the process of making API requests from your containers. This also allows you to follow AWS best practices by not storing your AWS credentials in your code or configuration files, as well as providing benefits such as automatic key rotation.

With the introduction of the recently launched IAM roles for ECS tasks, you can secure your infrastructure by assigning an IAM role directly to the ECS task rather than to the EC2 container instance. This way, you can have one task that uses a specific IAM role for access to, let’s say, S3 and another task that uses an IAM role to access a DynamoDB table, both running on the same EC2 instance.
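Assigning the role is a one-line addition when you register the task definition; a sketch with placeholder names and ARNs:

$ aws ecs register-task-definition --family s3-reader \
    --task-role-arn arn:aws:iam::123456789012:role/S3ReadOnlyTaskRole \
    --container-definitions '[{"name":"reader","image":"my-reader-image","memory":256}]'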

Service Auto Scaling
The third feature I want to highlight is Service Auto Scaling. With Service Auto Scaling and Amazon CloudWatch alarms, you can define scaling policies to scale your ECS services in the same way that you scale your EC2 instances up and down. With Service Auto Scaling, you can achieve high availability by scaling up when demand is high, and optimize costs by scaling down your service and the cluster, when demand is lower, all automatically and in real-time.

You simply choose the desired, minimum and maximum number of tasks, create one or more scaling policies, and Service Auto Scaling handles the rest. The service scheduler is also Availability Zone–aware, so you don’t have to worry about distributing your ECS tasks across multiple zones.
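Under the covers this uses the Application Auto Scaling APIs; here is a minimal sketch of the two calls involved, with placeholder resource IDs, limits, role, and policy settings. A CloudWatch alarm that you point at the policy ARN returned by put-scaling-policy then drives the actual scaling:

$ aws application-autoscaling register-scalable-target \
    --service-namespace ecs \
    --resource-id service/my-cluster/web \
    --scalable-dimension ecs:service:DesiredCount \
    --min-capacity 2 --max-capacity 10 \
    --role-arn arn:aws:iam::123456789012:role/ecsAutoscaleRole
$ aws application-autoscaling put-scaling-policy \
    --service-namespace ecs \
    --resource-id service/my-cluster/web \
    --scalable-dimension ecs:service:DesiredCount \
    --policy-name web-scale-out --policy-type StepScaling \
    --step-scaling-policy-configuration '{"AdjustmentType":"ChangeInCapacity",
      "StepAdjustments":[{"MetricIntervalLowerBound":0,"ScalingAdjustment":1}],"Cooldown":60}'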

Available Now
These features are available now and you can start using them today!

Jeff;