Category: AWS CloudFormation

Use CloudFormation StackSets to Provision Resources Across Multiple AWS Accounts and Regions

AWS CloudFormation helps AWS customers implement an Infrastructure as Code model. Instead of setting up their environments and applications by hand, they build a template and use it to create all of the necessary resources, collectively known as a CloudFormation stack. This model removes opportunities for manual error, increases efficiency, and ensures consistent configurations over time.

Today I would like to tell you about a new feature that makes CloudFormation even more useful. This feature is designed to help you to address the challenges that you face when you use Infrastructure as Code in situations that include multiple AWS accounts and/or AWS Regions. As a quick review:

Accounts – As I have told you in the past, many organizations use a multitude of AWS accounts, often using AWS Organizations to arrange the accounts into a hierarchy and to group them into Organizational Units, or OUs (read AWS Organizations – Policy-Based Management for Multiple AWS Accounts to learn more). Our customers use multiple accounts for business units, applications, and developers. They often create separate accounts for development, testing, staging, and production on a per-application basis.

Regions – Customers also make great use of the large (and ever-growing) set of AWS Regions. They build global applications that span two or more regions, implement sophisticated multi-region disaster recovery models, replicate S3, Aurora, PostgreSQL, and MySQL data in real time, and choose locations for storage and processing of sensitive data in accord with national and regional regulations.

This expansion into multiple accounts and regions comes with some new challenges with respect to governance and consistency. Our customers tell us that they want to make sure that each new account is set up in accord with their internal standards. Among other things, they want to set up IAM users and roles, VPCs and VPC subnets, security groups, Config Rules, logging, and AWS Lambda functions in a consistent and reliable way.

Introducing StackSets
In order to address these important customer needs, we are launching AWS CloudFormation StackSets today. You can now define an AWS resource configuration in a CloudFormation template and then roll it out across multiple AWS accounts and/or Regions with a couple of clicks. You can use this to set up a baseline level of AWS functionality that addresses the cross-account and cross-region scenarios that I listed above. Once you have set this up, you can easily expand coverage to additional accounts and regions.

This feature always works on a cross-account basis. The administrator account owns one or more StackSets and controls deployment to one or more target accounts. The administrator account must include an assumable IAM role and the target accounts must delegate trust to the administrator account. To learn how to do this, read Prerequisites in the StackSets Documentation.
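As a sketch of what the target-account side of this trust looks like, here is a minimal template for the execution role. The role name matches the one given in the StackSets prerequisites; the administrator-account parameter and the wide-open policy are illustrative assumptions, and you should scope the permissions down to what your templates actually create:

```yaml
# Hedged sketch of a target-account execution role for StackSets.
# The role name follows the StackSets prerequisites; the broad policy
# below is illustrative only and should be narrowed for production use.
Parameters:
  AdministratorAccountId:
    Type: String
    Description: AWS account ID of the StackSets administrator account
Resources:
  ExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: AWSCloudFormationStackSetExecutionRole
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              AWS: !Ref AdministratorAccountId
            Action: sts:AssumeRole
      Policies:
        - PolicyName: StackSetExecutionPolicy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action: '*'
                Resource: '*'
```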

Each StackSet references a CloudFormation template and contains lists of accounts and regions. All operations apply to the Cartesian product of the accounts and regions in the StackSet. If the StackSet references three accounts (A1, A2, and A3) and four regions (R1, R2, R3, and R4), there are twelve targets:

  • Region R1: Accounts A1, A2, and A3.
  • Region R2: Accounts A1, A2, and A3.
  • Region R3: Accounts A1, A2, and A3.
  • Region R4: Accounts A1, A2, and A3.

Deploying a template initiates the creation of a CloudFormation stack in each target account/region pair. Templates are deployed to regions sequentially (you control the order) and to multiple accounts within each region in parallel (you control the amount of parallelism). You can also set an error threshold that will terminate the deployment if stack creation fails.
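If you drive deployments through the API or CLI rather than the Console, these controls surface as operation preferences. Here's a hedged sketch of the structure (the field names follow the StackSets API; the values are purely illustrative):

```json
{
  "RegionOrder": ["us-east-1", "eu-west-1"],
  "MaxConcurrentCount": 2,
  "FailureToleranceCount": 0
}
```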

You can use your existing CloudFormation templates (taking care to make sure that they are ready to work across accounts and regions), create new ones, or use one of our sample templates. We are launching with support for the AWS partition (all public regions except those in China), and expect to expand it to the others before too long.

Using StackSets
You can create and deploy StackSets from the CloudFormation Console, via the CloudFormation APIs, or from the command line.

Using the Console, I start by clicking on Create StackSet. I can use my own template or one of the samples. I’ll use the last sample (Add config rule encrypted volumes):

I click on View template to learn more about the template and the rule:

I give my StackSet a name. The template that I selected accepts an optional parameter, and I can enter it at this time:

Next, I choose the accounts and regions. I can enter account numbers directly, reference an AWS organizational unit, or upload a list of account numbers:

I can set up the regions and control the deployment order:

I can also set the deployment options. Once I am done I click on Next to proceed:

I can add tags to my StackSet. They will be applied to the AWS resources created during the deployment:

The deployment begins, and I can track the status from the Console:

I can open up the Stacks section to see each stack. Initially, the status of each stack is OUTDATED, indicating that the template has yet to be deployed to the stack; this will change to CURRENT after a successful deployment. If a stack cannot be deleted, the status will change to INOPERABLE.

After my initial deployment, I can click on Manage StackSet to add additional accounts, regions, or both, to create additional stacks:

Now Available
This new feature is available now and you can start using it today at no extra charge (you pay only for the AWS resources created on your behalf).


PS – If you create some useful templates and would like to share them with other AWS users, please send a pull request to our AWS Labs GitHub repo.

CodePipeline Update – Build Continuous Delivery Workflows for CloudFormation Stacks

When I begin to write about a new way for you to become more productive by using two AWS services together, I think about a 1980s TV commercial for Reese’s Peanut Butter Cups! The intersection of two useful services or two delicious flavors creates a new one that is even better.

Today’s chocolate / peanut butter intersection takes place where AWS CodePipeline meets AWS CloudFormation. You can now use CodePipeline to build a continuous delivery pipeline for CloudFormation stacks. When you practice continuous delivery, each code change is automatically built, tested, and prepared for release to production. In most cases, the continuous delivery release process includes a combination of manual and automatic approval steps. For example, code that successfully passes through a series of automated tests can be routed to a development or product manager for final review and approval before it is pushed to production.

This important combination of features allows you to use the infrastructure as code model while gaining all of the advantages of continuous delivery. Each time you change a CloudFormation template, CodePipeline can initiate a workflow that will build a test stack, test it, await manual approval, and then push the changes to production. The workflow can create and manipulate stacks in many different ways.

As you will soon see, the workflow can take advantage of advanced CloudFormation features such as the ability to generate and then apply change sets (read New – Change Sets for AWS CloudFormation to learn more) to an operational stack.

The Setup
In order to learn more about this feature, I used a CloudFormation template to set up my continuous delivery pipeline (this is yet another example of infrastructure as code). This template (available here and described in detail here) sets up a full-featured pipeline. When I use the template to create my pipeline, I specify the name of an S3 bucket and the name of a source file:

The SourceS3Key points to a ZIP file that is enabled for S3 versioning. This file contains the CloudFormation template (I am using the WordPress Single Instance example) that will be deployed via the pipeline that I am about to create. It can also contain other deployment artifacts such as configuration or parameter files; here’s what mine looks like:

The entire continuous delivery pipeline is ready just a few seconds after I click on Create Stack. Here’s what it looks like:

The Action
At this point I have used CloudFormation to set up my pipeline. With the stage set (so to speak), now I can show you how this pipeline makes use of the new CloudFormation actions.

Let’s focus on the second stage, TestStage. Triggered by the first stage, this stage uses CloudFormation to create a test stack:

The stack is created using parameter values from the test-stack-configuration.json file in my ZIP. Since you can use different configuration files for each CloudFormation action, you can use the same template for testing and production.
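A template configuration file is a small JSON document with a Parameters section (it can also carry Tags and a StackPolicy). Here's a hypothetical test-stack-configuration.json; the parameter names and values below are illustrative, not taken from the actual WordPress sample:

```json
{
  "Parameters": {
    "InstanceType": "t2.micro",
    "DBName": "wordpressdb",
    "DBUser": "testuser"
  }
}
```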

After the stack is up and running, the ApproveTestStack step is used to await manual approval (it says “Waiting for approval above.”). Playing the role of the dev manager, I verify that the test stack behaves and performs as expected, and then approve it:

After approval, the DeleteTestStack step deletes the test stack.

Now we are just about ready to deploy to production. ProdStage creates a CloudFormation change set and then submits it for manual approval. This stage uses the parameter values from the prod-stack-configuration.json file in my ZIP. I can use the parameters to launch a modestly sized test environment on a small EC2 instance and a large production environment from the same template.

Now I’m playing the role of the big boss, responsible for keeping the production site up and running. I review the change set in order to make sure that I understand what will happen when I deploy to production. This is the first time that I am running the pipeline, so the change set indicates that an EC2 instance and a security group will be created:

And then I approve it:

With the change set approved, it is applied to the existing production stack in the ExecuteChangeSet step. Applying the change to an existing stack keeps existing resources in play where possible and avoids a wholesale restart of the application. This is generally more efficient and less disruptive than replacing the entire stack. It keeps in-memory caches warmed up and avoids possible bursts of cold-start activity.

Implementing a Change
Let’s say that I decide to support HTTPS. In order to do this, I need to add port 443 to my application’s security group. I simply edit the CloudFormation template, put it into a fresh ZIP, and upload it to S3. Here’s what I added to my template:

      - CidrIp: 0.0.0.0/0    # example: open to the world; restrict as appropriate
        FromPort: '443'
        IpProtocol: tcp
        ToPort: '443'

Then I return to the Console and see that CodePipeline has already detected the change and set the pipeline into motion:

The pipeline runs again, I approve the test stack, and then inspect the change set, confirming that it will simply modify an existing security group:

One quick note before I go. The CloudFormation template for the pipeline creates an IAM role and uses it to create the test and deployment stacks (this is a new feature; read about the AWS CloudFormation Service Role to learn more). For best results, you should delete the stacks before you delete the pipeline. Otherwise, you’ll need to re-create the role in order to delete the stacks.

There’s More
I’m just about out of space and time, but I’ll briefly summarize a couple of other aspects of this new capability.

Parameter Overrides – When you define a CloudFormation action, you may need to exercise additional control over the parameter values that are defined for the template. You can do this by opening up the Advanced pane and entering any desired parameter overrides:

Artifact References – In some situations you may find that you need to reference an attribute of an artifact that was produced by an earlier stage of the pipeline. For example, suppose that an early stage of your pipeline copies a Lambda function to an S3 bucket and calls the resulting artifact LambdaFunctionSource. Here’s how you would retrieve the bucket name and the object key from the attribute using a parameter override:

  "BucketName" : { "Fn::GetArtifactAtt" : ["LambdaFunctionSource", "BucketName"]},
  "ObjectKey" : { "Fn::GetArtifactAtt" : ["LambdaFunctionSource", "ObjectKey"]}

Access to JSON Parameter – You can use the new Fn::GetParam function to retrieve a value from a JSON-formatted file that is included in an artifact.
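Fn::GetParam takes three elements: the artifact name, the name of a JSON file within that artifact, and a key in that file. A hedged example as a parameter override (the artifact, file, and key names here are hypothetical):

```json
"DBPassword" : { "Fn::GetParam" : ["TemplateSource", "secrets.json", "DBPassword"]}
```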

Note that Fn::GetArtifactAtt and Fn::GetParam are designed to be used within the parameter overrides.

S3 Bucket Versioning – The first step of my pipeline (the Source action) refers to an object in an S3 bucket. By enabling S3 versioning for the object, I simply upload a new version of my template after each change:

If I am using S3 as my source, I must use versioning (uploading a new object over the existing one is not supported). I can also use AWS CodeCommit or a GitHub repo as my source.
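If the source bucket is itself managed by CloudFormation, versioning can be enabled directly on the bucket resource. A minimal sketch (the logical name is arbitrary):

```yaml
Resources:
  PipelineSourceBucket:
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled
```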

Create Pipeline Wizard
I started out this blog post by using a CloudFormation template to create my pipeline. I can also click on Create pipeline in the console and build my initial pipeline (with source, build, and beta deployment stages) using a wizard. The wizard now allows me to select CloudFormation as my deployment provider. I can create or update a stack or a change set in the beta deployment stage:

Available Now
This new feature is available now and you can start using it today. To learn more, check out Continuous Delivery with AWS CodePipeline in the CodePipeline Documentation.



AWS CloudFormation Update – YAML, Cross-Stack References, Simplified Substitution

AWS CloudFormation gives you the ability to express entire stacks (collections of related AWS resources) declaratively, by constructing templates. You can define a stack, specify and configure the desired resources and their relationship to each other, and then launch as many copies of the stack as desired. CloudFormation will create and set up the resources for you, while also taking care to address any ordering dependencies between the resources.

Today we are making three important additions to CloudFormation:

  • YAML Support – You can now write your CloudFormation templates in YAML.
  • Cross Stack References – You can now export values from one stack and use them in another.
  • Simplified Substitution – You can more easily perform string replacements within templates.

Let’s take a look!

YAML Support
You can now write your CloudFormation templates in YAML (short for YAML Ain’t Markup Language). Up until now, templates were written in JSON. While YAML and JSON have similar expressive powers, YAML was designed to be human-readable while JSON was (let’s be honest) not. YAML-based templates use less punctuation and should be substantially easier to write and to read. They also allow the use of comments. CloudFormation supports essentially all of YAML, with the exception of hash merges, aliases, and some tags (binary, imap, pairs, TIMESTAMP, and set).

When you write a CloudFormation template in YAML, you will use the same top-level structure (Description, Metadata, Mappings, Outputs, Parameters, Conditions, and Resources).  Here’s what a parameter definition looks like:

    Parameters:
      DBName:
        AllowedPattern: '[a-zA-Z][a-zA-Z0-9]*'
        ConstraintDescription: must begin with a letter and contain only alphanumeric characters
        Default: wordpressdb
        Description: The WordPress database name
        MaxLength: '64'
        MinLength: '1'
        Type: String

When you use YAML, you can also use a new, abbreviated syntax to refer to CloudFormation functions such as GetAtt, Base64, and FindInMap. You can now use the existing syntax ("Fn::GetAtt") or the new, tag-based syntax (!GetAtt). Note that the “!” is part of the YAML syntax for tags; it is not the “logical not” operator. Here’s the old syntax:

- Fn::FindInMap:
    - AWSInstanceType2Arch
    - Ref: InstanceType
    - Arch

And the new one:

!FindInMap [AWSInstanceType2Arch, !Ref InstanceType, Arch]

As you can see, the newer syntax is shorter and cleaner. Note, however, that you cannot put two tags next to each other. You can intermix the two forms and you can also nest them. For example, !Base64 !Sub is invalid but !Base64 Fn::Sub is fine.
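The familiar EC2 user data pattern illustrates the rule: because !Base64 !Sub is invalid, one of the two nested functions is written in long form:

```yaml
UserData:
  Fn::Base64: !Sub |
    #!/bin/bash
    echo "Bootstrapping instance in stack ${AWS::StackName}"
```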

The CloudFormation API functions (CreateChangeSet, CreateStack, UpdateStack, and so forth) now accept templates in either JSON or YAML. The GetTemplate function returns the template in the original format. The CloudFormation Designer does not support YAML templates today, but this is on our roadmap.

Cross Stack References
Many AWS customers use one “system” CloudFormation stack to set up their environment (VPCs, VPC subnets, security groups, IP addresses, and so forth) and several other “application” stacks to populate it (EC2 & RDS instances, message queues, and the like). Until now there was no easy way for the application stacks to reference resources created by the system stack.

You can now create and export values from one stack and make use of them in other stacks without going to the trouble of creating custom CloudFormation resources. The first stack exports values like this:

    Outputs:
      TroubleShootingSG:
        Value: !Ref TroubleShootingSG
        Export:
          Name: AccountSG

The other stacks then reference them using the new ImportValue function:

  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      SecurityGroups:
        - !ImportValue AccountSG

The exported names must be unique within the AWS account and region. A stack that is referenced by another stack cannot be deleted, and it cannot modify or remove the exported value.

Simplified Substitution
Many CloudFormation templates perform some intricate string manipulation in order to construct command lines, file paths, and other values that cannot be fully determined until the stack is created. Until now, this required the use of Fn::Join. In combination with the JSON syntax, this resulted in messy templates that were hard to understand and maintain. In order to simplify this important aspect of template development, we are introducing a new substitution function, Fn::Sub. This function replaces variables (denoted by the syntax ${variable_name}) with their evaluated values. For example:

      01_set_mysql_root_password:
        command: !Sub |
          mysqladmin -u root password '${DBRootPassword}'
        test: !Sub |
          $(mysql ${DBName} -u root --password='${DBRootPassword}' >/dev/null 2>&1 </dev/null); (( $? != 0 ))
      02_create_database:
        command: !Sub |
          mysql -u root --password='${DBRootPassword}' < /tmp/setup.mysql
        test: !Sub |
          $(mysql ${DBName} -u root --password='${DBRootPassword}' >/dev/null 2>&1 </dev/null); (( $? != 0 ))

If you need to generate ${} or ${variable}, simply write ${!} or ${!variable}.

Coverage Updates
As part of this release we also added additional support for AWS Key Management Service (KMS), EC2 Spot Fleet, and Amazon EC2 Container Service. See the CloudFormation Release History for more information.

Available Now
All of these features are available now and you can start using them today!

If you are interested in learning more about CloudFormation, please plan to attend our upcoming webinar, AWS Infrastructure as Code. You will learn how to take advantage of best practices for planning and provisioning your infrastructure, and you will have the opportunity to see the new features in action.



New – Change Sets for AWS CloudFormation

AWS CloudFormation lets you create, manage, and update a collection of AWS resources (a “stack”) in a controlled, predictable manner. Every day, customers use CloudFormation to perform hundreds of thousands of updates to the stacks that support their production workloads. They define an initial template and then revise it as their requirements change.

This model, commonly known as infrastructure as code, gives developers, architects, and operations teams detailed control of the provisioning and configuration of their AWS resources. This detailed level of control and accountability is one of the most visible benefits that you get when you use CloudFormation. However, there are several others that are less visible but equally important:

Consistency – The CloudFormation team works with the AWS teams to make sure that newly added resource models have consistent semantics for creating, updating, and deleting resources. They take care to account for retries, idempotency, and management of related resources such as KMS keys for encrypting EBS or RDS volumes.

Stability – In any distributed system, issues related to eventual consistency often arise and must be dealt with. CloudFormation is intimately aware of these issues and automatically waits for any necessary propagation to complete before proceeding. In many cases they work with the service teams to ensure that their APIs and success signals are properly tuned for use with CloudFormation.

Uniformity – CloudFormation will choose between in-place updates and resource replacement when you make updates to your stacks.

All of this work takes time, and some of it cannot be completely tested until the relevant services have been launched or updated.

Improved Support for Updates
As I mentioned earlier, many AWS customers use CloudFormation to manage updates to their production stacks. They edit their existing template (or create a new one) and then use CloudFormation’s Update Stack operation to activate the changes.

Many of our customers have asked us for additional insight into the changes that CloudFormation is planning to perform when it updates a stack in accord with the more recent template and/or parameter values. They want to be able to preview the changes, verify that they are in line with their expectations, and proceed with the update.

In order to support this important CloudFormation use case, we are introducing the concept of a change set. You create a change set by submitting changes against the stack you want to update. CloudFormation compares the stack to the new template and/or parameter values and produces a change set that you can review and then choose to apply (execute).

In addition to additional insight into potential changes, this new model also opens the door to additional control over updates. You can use IAM to control access to specific CloudFormation functions such as UpdateStack, CreateChangeSet, DescribeChangeSet, and ExecuteChangeSet. You could allow a large group of developers to create and preview change sets, and restrict execution to a smaller and more experienced group. With some additional automation, you could raise alerts or seek additional approvals for changes to key resources such as database servers or networks.
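As a sketch, an IAM policy for the larger group of developers might allow change sets to be created and inspected but not executed. The action names are the standard CloudFormation ones; the blanket Resource scope is an illustrative assumption that you would normally narrow:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:CreateChangeSet",
        "cloudformation:DescribeChangeSet",
        "cloudformation:DeleteChangeSet"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": "cloudformation:ExecuteChangeSet",
      "Resource": "*"
    }
  ]
}
```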

Using Change Sets
Let’s walk through the steps involved in working with change sets. As usual, you can get to the same functions using the AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, and the CloudFormation API.

I started by creating a stack that runs a LAMP application on a single EC2 instance. Here are the resources that it created:

Then I decided to step up to a more complex architecture. One of my colleagues shared a suitable template with me. Using the “trust but verify” model, I created a change set in order to see what would happen were I to use the template. I clicked on Create Change Set:

Then I uploaded the new template and assigned a name to the change set. If the template made use of parameters, I could have entered values for them at this point.

At this point I had the option to modify the existing tags and to add new ones. I also had the option to set up advanced options for the stack (none of these will apply until I actually execute the change set, of course):

After another click or two to confirm my intent, the console analyzed the template, checked the results against the stack, and displayed the list of changes:

At this point I can click on Execute to effect the changes. I can also leave the change set as-is, or create several others in order to explore some alternate paths forward. When I am ready to go, I can locate the change set and execute it:

CloudFormation springs to action and implements the changes per the change set:

A few minutes later my new stack configuration was in place and fully operational:

And there you have it! As I mentioned earlier, I can create and inspect multiple change sets before choosing the one that I would like to execute. When I do this, the other change sets are no longer meaningful and are discarded.

Managing Rollbacks
If a stack update fails, CloudFormation does its best to put things back the way they were before the update. The rollback operation can fail on occasion; in many cases this is due to a change that was made outside of CloudFormation’s purview. We recently launched a new option that gives you additional control over what happens next. To learn more about this option, read Continue Rolling Back an Update for AWS CloudFormation stacks in the UPDATE_ROLLBACK_FAILED state.

Available Now
This functionality is available now and you can start using it today!


New – AWS CloudFormation Designer + Support for More Services

AWS CloudFormation makes it easy for you to create and manage a collection of related AWS resources (which we call a stack). Starting from a template, CloudFormation creates the resources in an orderly and predictable fashion, taking into account dependencies between them, and connecting them together as defined in the template.

A CloudFormation template is nothing more than a text file. Inside the file, data in JSON format defines the AWS resources: their names, properties, relationships to other resources, and so forth. While the text-based model is very powerful, it is effectively linear and, as such, does not make relationships between the resources very obvious.

Today we are launching the new AWS CloudFormation Designer. This visual tool allows you to create and modify CloudFormation templates using a drag-and-drop interface. You can easily add, modify, or remove resources and the underlying JSON will be altered accordingly. If you modify a template that is associated with a running stack, you can update the stack so that it conforms to the template.

We are also launching CloudFormation support for four additional AWS services.

Free Tour!
Let’s take a quick tour of the CloudFormation Designer. Here’s the big picture:

The design surface is center stage, with the resource menu on the left and a JSON editor at the bottom. I simply select the desired AWS resources on the left, drag them to the design surface, and create relationships between them. Here’s an EC2 instance and three EBS volumes:

I created the relationships between the instance and the volumes by dragging the “dot” on the lower right corner of the instance (labeled AWS::EC2::Volume) to each volume in turn.

I can select an object and then edit its properties in the JSON editor:

Here’s a slightly more complex example:

The dotted blue lines denote resource-to-resource references (this is the visual equivalent of CloudFormation’s Ref attribute). For example, the DBSecurityGroup (middle row, left) refers to the EC2 SecurityGroup (top row, left); here’s the JSON:

I should point out that this tool does not perform any magic!  You will still need to have a solid understanding of the AWS resources that you include in your template, including a sense of how to put them together to form a complete system. You can right-click on any resource to display a menu; from there you can click on the ? to open the CloudFormation documentation for the resource:

Clicking on the eye icon will allow you to edit the resource’s properties. Once you have completed your design you can launch a stack from within the Designer.

You can also open up the sample CloudFormation templates and examine them in the Designer:

The layout data (positions and sizes) for the AWS resources is stored within the template.
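For the curious, the layout lands in the template's top-level Metadata section under an AWS::CloudFormation::Designer key. The sketch below is illustrative; the Designer actually keys each entry by a generated resource id, and the exact schema is internal to the tool:

```json
"Metadata": {
  "AWS::CloudFormation::Designer": {
    "WebServerInstance": {
      "size": { "width": 60, "height": 60 },
      "position": { "x": 330, "y": 90 }
    }
  }
}
```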

Support for Additional Services
We are also adding support for four additional AWS services today. Visit the complete list of supported services and resources to learn more.

Available Now
The new CloudFormation Designer is available now and you can start using it today by opening the CloudFormation Console. Like CloudFormation itself, there is no charge to use the Designer; you pay only for the AWS resources that you use when you launch a stack.


CloudWatch Logs Subscription Consumer + Elasticsearch + Kibana Dashboards

Many of the things that I blog about lately seem to involve interesting combinations of two or more AWS services and today’s post is no exception. Before I dig in, I’d like to briefly introduce all of the services that I plan to name-drop later in this post. Some of this will be review material, but I do like to make sure that every one of my posts makes sense to someone who knows little or nothing about AWS.

Three of those services in particular (VPC Flow Logs, Lambda, and CloudTrail) have an important attribute in common: they can each create voluminous streams of event data that must be efficiently stored, indexed, and visualized in order to be of value.

Visualize Event Data
Today I would like to show you how you can use Kinesis and a new CloudWatch Logs Subscription Consumer to do just that. The subscription consumer is a specialized Kinesis stream reader. It comes with built-in connectors for Elasticsearch and S3, and can be extended to support other destinations.

We have created a CloudFormation template that will launch an Elasticsearch cluster on EC2 (inside of a VPC created by the template), set up a log subscription consumer to route the event data into Elasticsearch, and provide a nice set of dashboards powered by the Kibana exploration and visualization tool. We have set up default dashboards for VPC Flow Logs, Lambda, and CloudTrail; you can customize them as needed or create other new ones for your own CloudWatch Logs log groups.

The stack takes about 10 minutes to create all of the needed resources. When it is ready, the Outputs tab in the CloudFormation Console will show you the URLs for the dashboards and administrative tools:

The stack includes versions 3 and 4 of Kibana, along with sample dashboards for the older version (if you want to use Kibana 4, you’ll need to do a little bit of manual configuration). The first sample dashboard shows the VPC Flow Logs. As you can see, it includes a considerable amount of information:

The next sample displays information about Lambda function invocations, augmented by data generated by the function itself:

The final three columns were produced by the following code in the Lambda function. The function is processing a Kinesis stream, and logs some information about each invocation:

exports.handler = function(event, context) {
    var start = new Date().getTime();
    var bytesRead = 0;

    event.Records.forEach(function(record) {
        // Kinesis data is base64 encoded so decode here
        var payload = new Buffer(record.kinesis.data, 'base64').toString('ascii');
        bytesRead += payload.length;

        // log each record
        console.log(JSON.stringify(record, null, 2));
    });

    // collect statistics on the function's activity and performance
    console.log(JSON.stringify({
        "recordsProcessed": event.Records.length,
        "processTime": new Date().getTime() - start,
        "bytesRead": bytesRead
    }, null, 2));

    context.succeed("Successfully processed " + event.Records.length + " records.");
};
There’s a little bit of magic happening behind the scenes here! The subscription consumer noticed that the log entry was a valid JSON object and instructed Elasticsearch to index each of the values. This is cool, simple, and powerful; I’d advise you to take some time to study this design pattern and see if there are ways to use it in your own systems.

For more information on configuring and using this neat template, visit the CloudWatch Logs Subscription Consumer home page.

Consume the Consumer
You can use the CloudWatch Logs Subscription Consumer in your own applications. You can extend it to add support for other destinations by adding another connector (use the Elasticsearch and S3 connectors as examples and starting points).

— Jeff;

New AWS Quick Start – SAP Business One, version for SAP HANA

We have added another AWS Quick Start Reference Deployment. The new SAP Business One, Version for SAP HANA document will show you how to get on the fast track to plan, deploy, and configure this enterprise resource planning (ERP) solution. It is powered by SAP HANA, SAP’s in-memory database.

This deployment builds on our existing SAP HANA on AWS Quick Start. It makes use of Amazon Elastic Compute Cloud (EC2) and Amazon Virtual Private Cloud, and is launched via an AWS CloudFormation template.

The CloudFormation template creates the following resources, all within a new or existing VPC:

  • A NAT instance in the public subnet to support inbound SSH access and outbound Internet access.
  • A Microsoft Windows Server instance in the public subnet for downloading SAP HANA media and to provide a remote desktop connection to the SAP Business One client instance.
  • Security groups and IAM roles.
  • An SAP HANA system installed with Amazon Elastic Block Store (EBS) volumes configured to meet HANA’s performance requirements.
  • SAP Business One, version for SAP HANA, client and server components.

The document will help you to choose the appropriate EC2 instance types for both production and non-production scenarios. It also includes a comprehensive, step-by-step walk-through of the entire setup process. During the process, you will need to log in to the Windows instance using an RDP client in order to download and stage the SAP media.

After you make your choices, you simply launch the template, fill in the blanks, and sit back while the resources are created and configured. Exclusive of the media download (a manual step), this process will take about 90 minutes.

The quick start reference guide is available now and you can read it today!


AWS CloudFormation Update – Lambda-Backed Custom Resources & More

I’m playing catch-up today in order to make sure that you know about some AWS CloudFormation releases that have gone out over the last couple of weeks.  You can now create custom CloudFormation resources by calling AWS Lambda functions. We added support for some additional Auto Scaling and RDS resources. We also updated our support for some existing CloudFront, ElastiCache, EC2, OpsWorks, RDS, and Route 53 resources.

Lambda-Backed Custom Resources
You can now write AWS Lambda functions that are invoked whenever you create, update, or delete a CloudFormation stack. This allows you to write functions that provide access to values generated in other stacks, look up AMI IDs during stack creation, access existing AWS resources, and perform utility functions such as string reversal.

The functions that you write are invoked synchronously and communicate with CloudFormation by writing results to a designated S3 object in JSON form. To learn more about how to use this powerful new addition to CloudFormation, take a look at the new documentation on AWS Lambda-backed Custom Resources.

With this release, Lambda is now the extensibility and customization mechanism for CloudFormation. I am really interested in seeing how you put this to use in your own applications and system designs; please let me know what you come up with!

Updated Resource Support
CloudFormation now supports the following additional resources:

  • Auto Scaling – LifecycleHook
  • RDS – EventSubscription

CloudFormation now includes enhanced support for the following resources:

  • Auto Scaling – AutoScalingGroup, LaunchConfiguration, ScalingPolicy
  • CloudFront – Distribution
  • ElastiCache – CacheCluster
  • EC2 – Volume
  • OpsWorks – Instance, Layer
  • RDS – DBInstance
  • Route 53 – HealthCheck, HostedZone

For more information on these updates, take a look at the CloudFormation Release Notes.


MongoDB on the AWS Cloud – New Quick Start Reference Deployment

We have added a new AWS Quick Start Reference Deployment. The new MongoDB on the AWS Cloud document will show you how to design and configure a MongoDB (an open source NoSQL database) cluster that runs on AWS.

The MongoDB cluster (version 2.6 or 3.0) makes use of Amazon Elastic Compute Cloud (EC2) and Amazon Virtual Private Cloud, and is launched via an AWS CloudFormation template. You can use the template directly or you can copy and then customize it as needed. The template creates the following resources:

  • VPC with private and public subnets (you can also launch the cluster into an existing VPC).
  • A NAT instance in the public subnet to support SSH access to the cluster and outbound Internet connectivity.
  • An IAM instance role with fine-grained permissions.
  • Security groups.
  • A fully customized MongoDB cluster with replica sets, shards, and config servers, along with customized EBS storage, all running in the private subnet.

The document examines scaling, replication, and performance tradeoffs in depth, and provides guidance to help you to choose appropriate types of EC2 instances and EBS volumes.

After you make your choices, you simply launch the template, fill in the blanks, and wait about 15 minutes while the resources are created and configured:

The document is available now and you can read it today!


AWS Quick Start Reference Deployment – Exchange Server 2013

Would you like to run Exchange Server 2013 on AWS? If so, I’ve got some good news for you! Our newest Quick Start Reference Deployment will show you how to do just that. Formally titled Microsoft Exchange Server 2013 on the AWS Cloud, this 34-page document addresses all of the architectural considerations needed to bring this deployment to life.

The reference deployment is in alignment with the AWS best practices for high availability with minimal infrastructure, and supports up to 250 mailboxes. Guidance for larger scenarios that use the Microsoft Preferred Architecture, with support for 250, 2,500, or 10,000 mailboxes, is also provided. The architecture is fault-tolerant, and eliminates the need for traditional backups. You can use it as-is, or you can modify it as needed.

Launched by means of an AWS CloudFormation template, the architecture includes a Virtual Private Cloud with two subnets in each Availability Zone and supports remote administration. Each subnet includes a public (DMZ) address space and a private address space. The DMZ includes Remote Desktop (RD) gateways and NAT gateways for outbound Internet access. The private address space in each subnet hosts a Domain Controller and Exchange 2013 in Multi Role mode. Here’s a block diagram:

The guide will walk you through the process of processor sizing, memory sizing, and storage sizing.  It will also help you to choose the appropriate type of EBS volume.

The CloudFormation template is fully parameterized. When you use it to launch the reference deployment, you will be prompted for the necessary parameters (each of which is fully described and documented in the reference deployment document):
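Under the hood, those prompts come from the template’s Parameters section. A hypothetical fragment (the parameter names here are illustrative, not taken from the actual Quick Start template) looks like this:

```json
{
  "Parameters": {
    "KeyPairName": {
      "Type": "String",
      "Description": "Name of an existing EC2 key pair for RDP access"
    },
    "DomainAdminPassword": {
      "Type": "String",
      "NoEcho": "true",
      "Description": "Password for the domain administrator account"
    },
    "ExchangeInstanceType": {
      "Type": "String",
      "Default": "r3.xlarge",
      "Description": "EC2 instance type for the Exchange servers"
    }
  }
}
```

Each entry becomes a prompt in the console (with NoEcho masking sensitive values such as passwords), which is how a single template can serve many different deployment scenarios.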

To get started, download the Quick Start Reference Deployment from the AWS Quick Start Reference Deployments catalog.