

Amazon CloudWatch Update – Percentile Statistics and New Dashboard Widgets

There sure is a lot going on with Amazon CloudWatch these days! Earlier this month I showed you how to Jump From Metrics to Associated Logs and told you about Extended Metrics Retention and the User Interface Update.

Today we are improving CloudWatch yet again, adding percentile statistics and two new dashboard widgets. Time is super tight due to AWS re:Invent, so I’ll be brief!

Percentile Statistics
When you run a web site or a cloud application at scale, you need to make sure that you are delivering the expected level of performance to the vast majority of your customers. While it is always a good idea to watch the numerical averages, you may not be getting the whole picture. The average may mask some performance outliers and you might not be able to see, for example, that 1% of your customers are not having a good experience.

In order to understand and visualize performance and behavior in a way that properly conveys the customer experience, percentiles are a useful tool. For example, you can use percentiles to know that 99% of the requests to your web site are being satisfied within 1 second. At Amazon, we use percentiles extensively and now you can do the same. We prefix them with a “p” and express our goals and observed performance in terms of the p90, p99, and p100 (worst case) response times for sites and services. Over the years we have found that responses in the long tail (p99 and above) can be used to detect database hot spots and other trouble spots.

Percentiles are supported for EC2, RDS, and Kinesis as well as for newly created Elastic Load Balancers and Application Load Balancers. They are also available for custom metrics. You can display the percentiles in CloudWatch (including Custom Dashboards) and you can also set alarms.
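If you would rather set a percentile-based alarm from code than from the console, here's a minimal sketch using boto3 (the AWS SDK for Python); the namespace, metric name, and SNS topic ARN are hypothetical placeholders:

import boto3

cloudwatch = boto3.client('cloudwatch')

# Alarm when the p99 of a (hypothetical) custom latency metric exceeds 1 second
# for three consecutive one-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName='p99-request-latency-high',
    Namespace='MyApp',
    MetricName='RequestLatency',
    ExtendedStatistic='p99',        # percentiles use ExtendedStatistic instead of Statistic
    Period=60,
    EvaluationPeriods=3,
    Threshold=1.0,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:ops-alerts'],
)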

Percentiles can be displayed in conjunction with other metrics. For example, the orange and green lines indicate p90 and p95 CPU Utilization:

You can set any desired percentile in the CloudWatch Console:

Read Elastic Load Balancing: Support for CloudWatch Percentile Metrics to learn more about how to use the new percentile metrics to gain additional visibility into the performance of your applications.

New Dashboard Widgets
You can now add Stacked Area and Number widgets to your CloudWatch Custom Dashboards:

Here’s a Stacked Area widget with my network traffic:

And here’s a Number widget with some EC2 and EBS metrics:

Available Now
These new features are now available in all AWS Regions and you can start using them today!

Jeff;

 

New for Amazon Simple Queue Service – FIFO Queues with Exactly-Once Processing & Deduplication

As the very first member of the AWS family of services, Amazon Simple Queue Service (SQS) has certainly withstood the test of time!  Back in 2004, we described it as a “reliable, highly scalable hosted queue for buffering messages between distributed application components.” Over the years, we have added many features including a dead letter queue, 256 KB payloads, SNS integration, long polling, batch operations, a delay queue, timers, CloudWatch metrics, and message attributes.

New FIFO Queues
Today we are making SQS even more powerful and flexible with support for FIFO (first-in, first-out) queues. We are rolling out this new type of queue in two regions now, and plan to make it available in many others in early 2017.

These queues are designed to guarantee that messages are processed exactly once, in the order that they are sent, and without duplicates. We expect that FIFO queues will be of particular value to our financial services and e-commerce customers, and to those who use messages to update database tables. Many of these customers have systems that depend on receiving messages in the order that they were sent.

FIFO ordering means that, if you send message A, wait for a successful response, and then send message B, message B will be enqueued after message A, and then delivered accordingly. This ordering does not apply if you make multiple SendMessage calls in parallel. It does apply to the individual messages within a call to SendMessageBatch, and across multiple consecutive calls to SendMessageBatch.

Exactly-once processing applies to both single-consumer and multiple-consumer scenarios. If you use FIFO queues in a multiple-consumer environment, you can configure your queue to make messages visible to other consumers only after the current message has been deleted or the visibility timeout expires. In this scenario, at most one consumer will actively process messages; the other consumers will be waiting until the first consumer finishes or fails.

Duplicate messages can sometimes occur when a networking issue outside of SQS prevents the message sender from learning the status of an action and causes the sender to retry the call. FIFO queues use multiple strategies to detect and eliminate duplicate messages. In addition to content-based deduplication, you can include a MessageDeduplicationId when you call SendMessage for a FIFO queue. The ID can be up to 128 characters long and, if present, takes precedence over content-based deduplication.

When you call SendMessage for a FIFO queue, you can now include a MessageGroupId. Messages that belong to the same group (as indicated by the ID) are processed in order, allowing you to create and process multiple, ordered streams within a single queue and to use multiple consumers while keeping data from multiple groups distinct and ordered.

You can create standard queues (the original queue type) or the new FIFO queues using the CreateQueue function, the create-queue command, or the AWS Management Console. The same API functions apply to both types of queues, but you cannot convert one queue type into the other.
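Here's a minimal boto3 sketch of creating a FIFO queue and sending an ordered, deduplicated message (the queue name, message body, group ID, and deduplication ID below are hypothetical):

import boto3

sqs = boto3.client('sqs')

# FIFO queue names must end in ".fifo".
queue_url = sqs.create_queue(
    QueueName='orders.fifo',
    Attributes={
        'FifoQueue': 'true',
        'ContentBasedDeduplication': 'true',  # or pass MessageDeduplicationId per message
    },
)['QueueUrl']

# Messages with the same MessageGroupId are delivered in order; an explicit
# MessageDeduplicationId takes precedence over content-based deduplication.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"order_id": 12345, "action": "create"}',
    MessageGroupId='customer-42',
    MessageDeduplicationId='order-12345-create',
)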

Although the same API calls apply to both queue types, the newest AWS SDKs and SQS clients provide some additional functionality. This includes automatic, idempotent retries of failed ReceiveMessage calls.

Individual FIFO queues can handle up to 300 send, receive, or delete requests per second.

Some SQS Resources
Here are some resources to help you to learn more about SQS and the new FIFO queues:

If you’re coming to Las Vegas for AWS re:Invent and would like to hear more about how AWS customer Capital One is making use of SQS and FIFO queues, register and plan to attend ENT-217, Migrating Enterprise Messaging to the Cloud on Wednesday, November 30 at 3:30 PM.

Available Now
FIFO queues are available now in the US East (Ohio) and US West (Oregon) regions and you can start using them today. If you are running in US East (Northern Virginia) and want to give them a try, you can create them in US East (Ohio) and take advantage of the low-cost, low-latency connectivity between the regions.

As part of today’s launch, we are also reducing the price for standard queues by 20%. For the updated pricing, take a look at the SQS Pricing page.

Jeff;

 

New – SaaS Subscriptions on AWS Marketplace

You can now find, buy, and use a nice variety of SaaS (Software as a Service) solutions from AWS Marketplace Vendors.

The new SaaS solutions run on AWS infrastructure and you will pay only for the services that you consume, with no monthly fees or subscription costs. For example, you can buy security services on a per-host basis, log processing on a per-GB-ingested basis, geocoding on a per-request basis, or caching on a per-GB-cached basis. Usage charges for the services that you consume will appear on your AWS bill.

The list of vendors and products is growing every day; here’s what we have lined up so far:

Application Development and Monitoring
  • Cloudyn
  • Cloudinary
  • Datapath.io
  • Dynatrace Cloud-Native Monitoring
  • New Relic Infrastructure (Pro & Essential)
  • Solano Labs CI
  • Solodev Enterprise Cloud
Security and Log Management
  • Alert Logic Cloud Insight
  • Bitium Identity and Access Management for AWS
  • Datadog Apps Monitoring
  • Dome9 Serenity for AWS Enterprise Edition
  • Sumo Logic Log Analytics
  • Trend Micro Deep Security
Databases, BI, and Big Data
  • HERE Forward Geocoder Global Service
  • Pitney Bowes GeoCode API
  • Qubole Data Service
Media
  • Aspera Transfer Service
  • NetApp DataSync
  • Signiant Flight
Storage
  • Druva Phoenix (Enterprise & Business)
Other Business Applications and Services
  • Avalara AvaTax

The AWS Marketplace page for each of these offerings includes the relevant per-unit pricing information. Here are a couple of examples:

You can locate these applications by selecting SaaS as your delivery method when you search Marketplace:

To learn more, visit the AWS Marketplace SaaS page.

Attention ISVs
If you are an ISV and would like to offer a new SaaS solution or modify an existing offering to become a SaaS solution, visit the Sell in AWS Marketplace page.

Jeff;

Amazon QuickSight Now Generally Available – Fast & Easy to Use Business Analytics for Big Data

After a preview period that included participants from over 1,500 AWS customers ranging from startups to global enterprises, I am happy to be able to announce that Amazon QuickSight is now generally available! When I invited you to join the preview last year, I wrote:

In the past, Business Intelligence required an incredible amount of undifferentiated heavy lifting. You had to pay for, set up and run the infrastructure and the software, manage scale (while users fret), and hire consultants at exorbitant rates to model your data. After all that your users were left to struggle with complex user interfaces for data exploration while simultaneously demanding support for their mobile devices. Access to NoSQL and streaming data? Good luck with that!

Amazon QuickSight provides you with very fast, easy to use, cloud-powered business analytics at 1/10th the cost of traditional on-premises solutions. QuickSight lets you get started in minutes. You log in, point to a data source, and begin to visualize your data. Behind the scenes, the SPICE (Super-fast, Parallel, In-Memory Calculation Engine) will run your queries at lightning speed and provide you with highly polished data visualizations.

Deep Dive into Data
Every customer that I speak with wants to get more value from their stored data. They realize that the potential value locked up within the data is growing by the day, but are sometimes disappointed to learn that finding and unlocking that value can be expensive and difficult. On-premises business analytics tools are expensive to license and can place a heavy load on existing infrastructure. Licensing costs and the complexity of the tools can restrict the user base to just a handful of specialists. Taken together, all of these factors have led many organizations to conclude that they are not ready to make the investment in a true business analytics function.

QuickSight is here to change that! It runs as a service and makes business analytics available to organizations of all shapes and sizes. It is fast and easy to use, does not impose a load on your existing infrastructure, and is available for a monthly fee that starts at just $9 per user.

As you’ll see in a moment, QuickSight allows you to work on data that’s stored in many different services and locations. You can get to your Amazon Redshift data warehouse, your Amazon Relational Database Service (RDS) relational databases, or your flat files in S3. You can also use a set of connectors to access data stored in on-premises MySQL, PostgreSQL, and SQL Server databases, Microsoft Excel spreadsheets, Salesforce and other services.

QuickSight is designed to scale with you. You can add more users, more data sources, and more data without having to purchase more long-term licenses or roll more hardware into your data center.

Take the Tour
Let’s take a tour through QuickSight. The administrator for my organization has already invited me to use QuickSight, so I am ready to log in and get started. Here’s the main screen:

I’d like to start by getting some data from a Redshift cluster. I click on Manage data and review my existing data sets:

I don’t see what I am looking for, so I click on New data set and review my options:

I click on Redshift (manual connect) and enter the credentials so that I can access my data warehouse (if I had a Redshift cluster running within my AWS account it would be available as an auto-discovered source):

QuickSight queries the data warehouse and shows me the schemas (sets of tables) and the tables that are available to me. I’ll select the public schema and the all_flights table to get started:

Now I have two options. I can pull the table in to SPICE for quick analysis or I can query it directly. I’ll pull it in to SPICE:

Again, I have two options! I can click on Edit/Preview data and select the rows and columns to import, or I can click on Visualize to import all of the data and proceed to the fun part! I’ll go for Edit/Preview. I can see the fields (on the left), and I can select only those that are of interest using the checkboxes:

I can also click on New Filter, select a field from the popup menu, and then create a filter:

Both options (selecting fields and filtering on rows) allow me to control the data that I pull in to SPICE. This allows me to control the data that I want to visualize and also helps me to make more efficient use of memory. Once I am ready to proceed, I click on Prepare data & visualize. At this point the data is loaded in to SPICE and I’m ready to start visualizing it. I simply select a field to get started. For example, I can select the origin_state_abbr field and see how many flights originate in each state:

The miniaturized view on the right gives me some additional context. I can scroll up or down or select the range of values to display. I can also click on a second field to learn more. I’ll click on flights, set the sort order to descending, and scroll to the top. Now I can see how many of the flights in my data originated in each state:

QuickSight’s AutoGraph feature automatically generates an appropriate visualization based on the data selected. For example, if I add the fl_date field, I get a state-by-state line chart over time:

Based on my query, the data types, and properties of the data, QuickSight also proposes alternate visualizations:

I also have my choice of many other visual types including vertical & horizontal bar charts, line charts, pivot tables, tree maps, pie charts, and heat maps:

Once I have created some effective visualizations, I can capture them and use the resulting storyboard to tell a data-driven story:

I can also share my visualizations with my colleagues:

Finally, my visualizations are accessible from my mobile device:

Pricing & SPICE Capacity
QuickSight comes with one free user and 1 GB of SPICE capacity for free, perpetually. This allows every AWS user to analyze their data and to gain business insights at no cost. The Standard Edition of Amazon QuickSight starts at $9 per month and includes 10 GB of SPICE capacity (see the QuickSight Pricing page for more info).

It is easy to manage SPICE capacity. I simply click on Manage QuickSight in the menu (I must have the ADMIN role in order to be able to make changes):

Then I can see where I stand:

I can click on Purchase more capacity to do exactly that:

I can also click on Release unused purchased capacity in order to reduce the amount of SPICE capacity that I own:

Get Started Today
Amazon QuickSight is now available in the US East (Northern Virginia), US West (Oregon), and EU (Ireland) regions and you can start using it today.

Despite the length of this blog post I have barely scratched the surface of QuickSight. Given that you can use it at no charge, I would encourage you to sign up, load some of your data, and take QuickSight for a spin!

We have a webinar coming up on January 16th where you can learn even more! Sign up here.

Jeff;

 

New – GPU-Powered Amazon Graphics WorkSpaces

As you can probably tell from my I Love My Amazon WorkSpace post, I am kind of a fan-boy!

Since writing that post I have found out that I am not alone, and that there are many other WorkSpaces fan-boys and fan-girls out there. Many AWS customers are enjoying their fully managed, secure desktop computing environments almost as much as I am. From their perspective as users, they like to be able to access their WorkSpace from a multitude of supported devices including Windows and Mac computers, PCoIP Zero Clients, Chromebooks, iPads, Fire tablets, and Android tablets. As administrators, they appreciate the ability to deploy high-quality cloud desktops for any number of users. And, finally, as business leaders they like the ability to pay hourly or monthly for the WorkSpaces that they launch.

New Graphics Bundle
These fans already have access to several different hardware choices: the Value, Standard, and Performance bundles. With 1 or 2 vCPUs (virtual CPUs) and 2 to 7.5 GiB of memory, these bundles are a good fit for many office productivity use cases.

Today we are expanding the WorkSpaces family by adding a new GPU-powered Graphics bundle. This bundle offers a high-end virtual desktop that is a great fit for 3D application developers, 3D modelers, and engineers that use CAD, CAM, or CAE tools at the office. Here are the specs:

  • Display – NVIDIA GPU with 1,536 CUDA cores and 4 GiB of graphics memory.
  • Processing – 8 vCPUs.
  • Memory – 15 GiB.
  • System volume – 100 GB.
  • User volume – 100 GB.

This new bundle is available in all regions where WorkSpaces currently operates, and can be used with any of the devices that I mentioned above. You can run the license-included operating system (Windows Server 2008 with Windows 7 Desktop Experience), or you can bring your own licenses for Windows 7 or 10. Applications that make use of OpenGL 4.x, DirectX, CUDA, OpenCL, and the NVIDIA GRID SDK will be able to take advantage of the GPU.

As you start to think about your petabyte-scale data analysis and visualization, keep in mind that these instances are located just light-feet away from EC2, RDS, Amazon Redshift, S3, and Kinesis. You can do your compute-intensive analysis server-side, and then render it in a visually compelling way on an adjacent WorkSpace. I am highly confident that you can use this combination of AWS services to create compelling applications that would simply not be cost-effective or achievable in any other way.

There is one important difference between the Graphics Bundle and the other bundles. Due to the way that the underlying hardware operates, WorkSpaces that run this bundle do not save the local state (running applications and open documents) when used in conjunction with the AutoStop running mode that I described in my Amazon WorkSpaces Update – Hourly Usage and Expanded Root Volume post. We recommend saving open documents and closing applications before disconnecting from your WorkSpace or stepping away from it for an extended period of time.

Demo
I don’t build 3D applications or use CAD, CAM, or CAE tools. However, I do like to design and build cool things with LEGO® bricks! I fired up the latest version of LEGO Digital Designer (LDD) and spent some time enhancing a design. Although I was not equipped to do any benchmarks, the GPU-enhanced version definitely ran more quickly and produced a higher quality finished product. Here’s a little design study I’ve been working on:

With my design all set up it was time to start building. Instead of trying to re-position my monitor so that it would be visible from my building table, I simply logged in to my Graphics WorkSpace from my Fire tablet. I was able to scale and rotate my design very quickly, even though I had very modest local computing power. Here’s what I saw on my Fire:

As you can see, the two screens (desktop and Fire) look identical! I stepped over to my building table and was able to set things up so that I could see my design and find my bricks:

Pricing
Graphics WorkSpaces are available with an hourly billing option. You pay a small, fixed monthly fee to cover infrastructure costs and storage, and an hourly rate for each hour that the WorkSpace is used during the month. Prices start at $22/month + $1.75 per hour in the US East (Northern Virginia) Region; see the WorkSpaces Pricing page for more information.

Jeff;

 

New – CloudWatch Events for EBS Snapshots

Cloud computing can improve upon traditional IT operations by giving you the power to automate complex high-level operations that were formerly kept in a runbook or passed along as tribal knowledge. Far too many of these involve backup and recovery operations, especially in smaller and less mature organizations.

Many AWS customers make great use of Amazon Elastic Block Store (EBS) volumes, especially given the ease with which they can generate and manage snapshot backups. They are also copying snapshots between regions on a regular basis for disaster recovery and other operational reasons.

Today we are bringing the benefits of automation to EBS with the addition of new CloudWatch Events for EBS snapshots. You can use these events to further automate your cloud-based backup environment. Here are the new events:

  • createSnapshot – Fired after the status of a newly created EBS snapshot changes to Complete.
  • copySnapshot – Fired after the status of a snapshot copy changes to Complete.
  • shareSnapshot – Fired after a snapshot is shared with your AWS account.

A lot of AWS customers monitor the status of their snapshots by making repeated calls to the DescribeSnapshots function and then stepping through the paginated output in order to locate a specific snapshot. These new events open the door to all sorts of event-driven automation, including the cross-region copy that I mentioned earlier.
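For reference, the polling pattern that these events can replace looks something like this boto3 sketch (the snapshot ID is hypothetical):

import boto3

ec2 = boto3.client('ec2')

target = 'snap-0123456789abcdef0'   # hypothetical snapshot ID
paginator = ec2.get_paginator('describe_snapshots')

# Page through the account's snapshots until the one we care about is complete.
for page in paginator.paginate(OwnerIds=['self']):
    for snapshot in page['Snapshots']:
        if snapshot['SnapshotId'] == target and snapshot['State'] == 'completed':
            print('Snapshot is complete')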

Using Snapshot Events
In order to get a better understanding of how this feature helps to automate data backup workflows, I’ll create a workflow that copies a completed snapshot to another region. First, I’ll create an IAM policy that grants appropriate permissions. Then I will incorporate an AWS Lambda function (created by my colleagues) that takes action on the createSnapshot event. Finally, I’ll create a CloudWatch Events rule to capture the event and route it to the Lambda function.

I started out by creating an IAM role (CopySnapshotToRegion) with this policy:

Then I created a new Lambda function (you can find the code at Amazon CloudWatch Events for EBS):
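The published function is the one to use; purely as an illustration, a rough Python sketch of the idea might look like this (the event field names and destination region here are assumptions on my part, so check the actual code and event format):

import boto3

TARGET_REGION = 'us-west-2'   # hypothetical destination region

def handler(event, context):
    # Assumption: the event detail carries the snapshot ARN in a field
    # such as "snapshot_id"; verify against the real createSnapshot event.
    snapshot_arn = event['detail']['snapshot_id']
    snapshot_id = snapshot_arn.split('/')[-1]
    source_region = event['region']

    ec2 = boto3.client('ec2', region_name=TARGET_REGION)
    ec2.copy_snapshot(
        SourceRegion=source_region,
        SourceSnapshotId=snapshot_id,
        Description='Automated copy of ' + snapshot_id,
    )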

Next, I hopped over to the CloudWatch Events Console, clicked on Create rule, and set it up to handle successful createSnapshot events:

And gave it a name:

To test it out, I created a new EBS snapshot in my source region:

The function was invoked as expected and the snapshot was copied to the target region within seconds (in practice, the copy time will depend on the size of the snapshot):

You can also use these events to make copies of snapshots that are shared with you from other accounts. Many AWS customers partition their usage across multiple accounts for various organizational and security reasons; take a look at our AWS Multiple Account Security Strategy to see our in-depth recommendations in this area. Here are two of the five models included therein:

Available Now
The new events are available in the US East (Northern Virginia), US East (Ohio), US West (Northern California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), EU (Frankfurt), EU (Ireland), and South America (São Paulo) Regions and you can start using them today! Take a look and let me know what you come up with.

Jeff;

 

PS – If you are a developer, development manager, or a product manager and would like to build systems like this, check out the EBS Jobs page.

 

New – HIPAA Eligibility for AWS Snowball

Many of the tools and technologies now in use at your local doctor, dentist, hospital, or other healthcare provider generate massive amounts of sensitive digital data. Other prolific data generators include genomic sequencers and any number of activity and fitness trackers. We all want to benefit from the insights that can be produced by this “data tsunami,” but we also want to be confident that it will be stored in a protected fashion and processed in a responsible manner.

In the United States, protection of healthcare data is governed by HIPAA (the Health Insurance Portability and Accountability Act). Because many AWS customers would like to store and process sensitive health care data on the cloud, we have worked to make multiple AWS services HIPAA-eligible; this means that the services can be used to process Protected Health Information (PHI) and to build applications that are HIPAA-compliant (read HIPAA in the Cloud to learn more about what Cleveland Clinic, Orion Health, Eliza, Philips, and other AWS customers are doing).

Last year I introduced you to AWS Snowball. This is an AWS-owned storage appliance that you can use to move large amounts of data (generally 10 terabytes or more) to AWS on a one-time or recurring basis. You simply request a Snowball from the AWS Management Console, connect it to your network when it arrives, copy your data to it, and then send it back to us so that we can copy the data to the AWS storage service of your choice. Snowball encrypts your data using keys that you specify and control.

Today, I am happy to announce that we are adding Snowball to the list of HIPAA-eligible services, joining Amazon DynamoDB, Amazon Elastic Compute Cloud (EC2), Amazon Elastic Block Store (EBS), Elastic Load Balancing, Amazon EMR, Amazon Glacier, Amazon Relational Database Service (RDS) (MySQL and Oracle), Amazon Redshift, and Amazon Simple Storage Service (S3). This brings the total number of eligible services to 10 and represents our commitment to make the AWS Cloud a safe, secure, and reliable destination for PHI and many other types of sensitive data. If you already have a Business Associate Agreement (BAA) with AWS, you can begin using Snowball to transfer data into your HIPAA accounts immediately.

With Snowball now on the list of HIPAA-eligible services, AWS customers in the Healthcare and Life Sciences space can quickly move on-premises data to Snowball and then process it using any of the services that I just mentioned. For example, they can use the new HDFS Import feature to migrate data from an existing on-premises Hadoop cluster to the cloud and analyze it using a scalable EMR cluster. They can also move existing petabyte-scale data (medical images, patient records, and the like) to AWS and store it in S3 or Glacier, both already HIPAA-eligible. These services are proven, easy to use, and offer high data durability at low cost.

Jeff;

 

New – Burst Balance Metric for EC2’s General Purpose SSD (gp2) Volumes

Many AWS customers are getting great results with the General Purpose SSD (gp2) EBS volumes that we launched in mid-2014 (see New SSD-Backed Elastic Block Storage for more information).  If you’re unsure of which volume type to use for your workload, gp2 volumes are the best default choice because they offer balanced price/performance for a wide variety of database, dev and test, and boot volume workloads. One of the more interesting aspects of this volume type is the burst feature.

We designed gp2's burst feature to suit the I/O patterns of real world workloads we observed across our customer base. Our data scientists found that volume I/O is extremely bursty, spiking for short periods, with plenty of idle time between bursts. This unpredictable and bursty nature of traffic is why we designed the gp2 burst-bucket to allow even the smallest of volumes to burst up to 3000 IOPS and to replenish their burst bucket during idle times or when performing low levels of I/O. The burst-bucket design allows us to provide consistent and predictable performance for all gp2 users. In practice, very few gp2 volumes ever completely deplete their burst-bucket, and now customers can track their usage patterns and adjust accordingly.

We’ve written extensively about performance optimization across different volume types and the differences between benchmarking and real-world workloads (see I/O Characteristics for more information). As I described in my original post, burst credits accumulate at a rate of 3 per configured GB per second, and each one pays for one read or one write. Each volume can accumulate up to 5.4 million credits, and they can be spent at up to 3,000 per second per volume. To get started, you simply create gp2 volumes of the desired size, launch your application, and your I/O to the volume will proceed as rapidly and efficiently as possible.
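To put some numbers on that, here's a quick back-of-the-envelope calculation (a Python sketch) for a hypothetical 100 GB volume:

volume_size_gb = 100
baseline_iops = 3 * volume_size_gb   # credits earned per second = 300
burst_iops = 3000                    # maximum spend rate per volume
bucket_size = 5400000                # maximum accumulated credits

# Draining a full bucket while bursting at 3,000 IOPS (net drain of 2,700 per second):
drain_minutes = bucket_size / (burst_iops - baseline_iops) / 60   # roughly 33 minutes

# Refilling an empty bucket while the volume is idle:
refill_hours = bucket_size / baseline_iops / 3600                 # roughly 5 hours

print(drain_minutes, refill_hours)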

New Metric
Effective today, we are making the Burst Balance metric available for each General Purpose (SSD) volume. You can observe this metric in the CloudWatch Console and you can set up an alarm that will be triggered if the balance becomes too low. The metric is expressed as a percentage; 100% means that the volume has accumulated the maximum number of credits.

I launched a c4.8xlarge instance and attached a 100 GB volume to it:

Then I created an alarm to let me know if the volume’s burst balance went below 40% (in a real-world scenario you might want to set this considerably lower, but I was impatient and it takes a fair amount of time to drain the balance):
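If you prefer to create the alarm from code, a boto3 sketch of the equivalent alarm might look like this (the volume ID and SNS topic ARN are placeholders):

import boto3

cloudwatch = boto3.client('cloudwatch')

cloudwatch.put_metric_alarm(
    AlarmName='gp2-burst-balance-low',
    Namespace='AWS/EBS',
    MetricName='BurstBalance',
    Dimensions=[{'Name': 'VolumeId', 'Value': 'vol-0123456789abcdef0'}],
    Statistic='Average',
    Period=300,
    EvaluationPeriods=1,
    Threshold=40.0,
    ComparisonOperator='LessThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:burst-alerts'],
)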

I confirmed my SNS subscription, and then ran fio to generate a load:

$ sudo fio --filename=/dev/sdb --rw=randread --bs=16k --runtime=2400 --time_based=1 \
  --iodepth=32 --ioengine=libaio --direct=1  --name=gp2-16kb-burst-bucket-test

Then I watched as the balance declined:

As expected, I received a notification email:

In a production scenario, I could choose to increase the size of the volume, fine-tune my application’s I/O behavior, or simply note for the record that I was making good use of the burst-bucket.

After the end of the test, I had lunch and watched the burst balance increase (I used the updated CloudWatch Console this time):

If you’re one of the few customers where your burst-bucket is depleting more than you’d like, you can either increase the size of your gp2 volume for more performance or transition to a Provisioned IOPS SSD (io1) volume, which delivers consistent, provisioned performance 99.9% of the time.

Available Now
This feature is available now and you can start using it today in all AWS Commercial Regions at no charge. The usual charges for CloudWatch Alarms will apply.

Jeff;

 

PS – If you are a developer, development manager, or a product manager and would like to build systems like this, check out the EBS Jobs page.

CodePipeline Update – Build Continuous Delivery Workflows for CloudFormation Stacks

When I begin to write about a new way for you to become more productive by using two AWS services together, I think about a 1980s TV commercial for Reese’s Peanut Butter Cups! The intersection of two useful services or two delicious flavors creates a new one that is even better.

Today’s chocolate / peanut butter intersection takes place where AWS CodePipeline meets AWS CloudFormation. You can now use CodePipeline to build a continuous delivery pipeline for CloudFormation stacks. When you practice continuous delivery, each code change is automatically built, tested, and prepared for release to production. In most cases, the continuous delivery release process includes a combination of manual and automatic approval steps. For example, code that successfully passes through a series of automated tests can be routed to a development or product manager for final review and approval before it is pushed to production.

This important combination of features allows you to use the infrastructure as code model while gaining all of the advantages of continuous delivery. Each time you change a CloudFormation template, CodePipeline can initiate a workflow that will build a test stack, test it, await manual approval, and then push the changes to production. The workflow can create and manipulate stacks in many different ways:

As you will soon see, the workflow can take advantage of advanced CloudFormation features such as the ability to generate and then apply change sets (read New – Change Sets for AWS CloudFormation to learn more) to an operational stack.

The Setup
In order to learn more about this feature, I used a CloudFormation template to set up my continuous delivery pipeline (this is yet another example of infrastructure as code). This template (available here and described in detail here) sets up a full-featured pipeline. When I use the template to create my pipeline, I specify the name of an S3 bucket and the name of a source file:

The SourceS3Key points to a ZIP file that is enabled for S3 versioning. This file contains the CloudFormation template (I am using the WordPress Single Instance example) that will be deployed via the pipeline that I am about to create. It can also contain other deployment artifacts such as configuration or parameter files; here’s what mine looks like:

The entire continuous delivery pipeline is ready just a few seconds after I click on Create Stack. Here’s what it looks like:

The Action
At this point I have used CloudFormation to set up my pipeline. With the stage set (so to speak), now I can show you how this pipeline makes use of the new CloudFormation actions.

Let’s focus on the second stage, TestStage. Triggered by the first stage, this stage uses CloudFormation to create a test stack:

The stack is created using parameter values from the test-stack-configuration.json file in my ZIP. Since you can use different configuration files for each CloudFormation action, you can use the same template for testing and production.

After the stack is up and running, the ApproveTestStack step is used to await manual approval (it says “Waiting for approval above.”). Playing the role of the dev manager, I verify that the test stack behaves and performs as expected, and then approve it:

After approval, the DeleteTestStack step deletes the test stack.

Now we are just about ready to deploy to production. ProdStage creates a CloudFormation change set and then submits it for manual approval. This stage uses the parameter values from the prod-stack-configuration.json file in my ZIP. I can use the parameters to launch a modestly sized test environment on a small EC2 instance and a large production environment from the same template.

Now I’m playing the role of the big boss, responsible for keeping the production site up and running. I review the change set in order to make sure that I understand what will happen when I deploy to production. This is the first time that I am running the pipeline, so the change set indicates that an EC2 instance and a security group will be created:

And then I approve it:

With the change set approved, it is applied to the existing production stack in the ExecuteChangeSet step. Applying the change to an existing stack keeps existing resources in play where possible and avoids a wholesale restart of the application. This is generally more efficient and less disruptive than replacing the entire stack. It keeps in-memory caches warmed up and avoids possible bursts of cold-start activity.

Implementing a Change
Let’s say that I decide to support HTTPS. In order to do this, I need to add port 443 to my application’s security group. I simply edit the CloudFormation template, put it into a fresh ZIP, and upload it to S3. Here’s what I added to my template:

      - CidrIp: 0.0.0.0/0
        FromPort: '443'
        IpProtocol: tcp
        ToPort: '443'

Then I return to the Console and see that CodePipeline has already detected the change and set the pipeline into motion:

The pipeline runs again, I approve the test stack, and then inspect the change set, confirming that it will simply modify an existing security group:

One quick note before I go. The CloudFormation template for the pipeline creates an IAM role and uses it to create the test and deployment stacks (this is a new feature; read about the AWS CloudFormation Service Role to learn more). For best results, you should delete the stacks before you delete the pipeline. Otherwise, you’ll need to re-create the role in order to delete the stacks.

There’s More
I’m just about out of space and time, but I’ll briefly summarize a couple of other aspects of this new capability.

Parameter Overrides – When you define a CloudFormation action, you may need to exercise additional control over the parameter values that are defined for the template. You can do this by opening up the Advanced pane and entering any desired parameter overrides:

Artifact References – In some situations you may find that you need to reference an attribute of an artifact that was produced by an earlier stage of the pipeline. For example, suppose that an early stage of your pipeline copies a Lambda function to an S3 bucket and calls the resulting artifact LambdaFunctionSource. Here’s how you would retrieve the bucket name and the object key from the attribute using a parameter override:

{
  "BucketName" : { "Fn::GetArtifactAtt" : ["LambdaFunctionSource", "BucketName"]},
  "ObjectKey" : { "Fn::GetArtifactAtt" : ["LambdaFunctionSource", "ObjectKey"]}
}

Access to JSON Parameter – You can use the new Fn::GetParam function to retrieve a value from a JSON-formatted file that is included in an artifact.

Note that Fn::GetArtifactAtt and Fn::GetParam are designed to be used within the parameter overrides.

S3 Bucket Versioning – The first step of my pipeline (the Source action) refers to an object in an S3 bucket. By enabling S3 versioning for the object, I simply upload a new version of my template after each change:

If I am using S3 as my source, I must use versioning (uploading a new object over the existing one is not supported). I can also use AWS CodeCommit or a GitHub repo as my source.
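For reference, here's a boto3 sketch of turning on versioning and pushing a new revision of the source ZIP (the bucket, key, and file names are hypothetical):

import boto3

s3 = boto3.client('s3')

# Versioning must be enabled on the source bucket.
s3.put_bucket_versioning(
    Bucket='my-pipeline-source',
    VersioningConfiguration={'Status': 'Enabled'},
)

# Each upload creates a new object version, which starts the pipeline.
s3.upload_file('wordpress-single-instance.zip',
               'my-pipeline-source',
               'wordpress-single-instance.zip')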

Create Pipeline Wizard
I started out this blog post by using a CloudFormation template to create my pipeline. I can also click on Create pipeline in the console and build my initial pipeline (with source, build, and beta deployment stages) using a wizard. The wizard now allows me to select CloudFormation as my deployment provider. I can create or update a stack or a change set in the beta deployment stage:

Available Now
This new feature is available now and you can start using it today. To learn more, check out Continuous Delivery with AWS CodePipeline in the CodePipeline documentation.

Jeff;

 

New – Sending Metrics for Amazon Simple Email Service (SES)

Amazon Simple Email Service (SES) focuses on deliverability – getting email through to the intended recipients. In my launch blog post (Introducing the Amazon Simple Email Service), I noted that several factors influence delivery, including the level of trust that you have earned with multiple Internet Service Providers (ISPs) and the number of complaints and bounces that you generate.

Today we are launching a new set of sending metrics for SES. There are two aspects to this feature:

Event Stream – You can configure SES to publish a JSON-formatted record to Amazon Kinesis Firehose each time a significant event (sent, rejected, delivered, bounced, or complaint generated) occurs.

Metrics – You can configure SES to publish aggregate metrics to Amazon CloudWatch. You can add one or more tags to each message and use them as CloudWatch dimensions. Tagging messages gives you the power to track deliverability based on campaign, team, department, and so forth.  You can then use this information to fine-tune your messages and your email strategy.
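Here's a boto3 sketch of what tagging an outbound message might look like (the configuration set name, tag values, and addresses are hypothetical):

import boto3

ses = boto3.client('ses')

ses.send_email(
    Source='sender@example.com',
    Destination={'ToAddresses': ['recipient@example.com']},
    Message={
        'Subject': {'Data': 'Hello'},
        'Body': {'Text': {'Data': 'Hello from SES'}},
    },
    ConfigurationSetName='my-config-set',     # publishes events and metrics for this message
    Tags=[
        {'Name': 'campaign', 'Value': 'spring-sale'},
        {'Name': 'team', 'Value': 'marketing'},
    ],
)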

To learn more, read about Email Sending Metrics on the Amazon Simple Email Service Blog.

Jeff;