New – Product Support Connection for AWS Marketplace Customers

There are now over 2,700 software products listed in AWS Marketplace. Tens of thousands of AWS customers routinely find, buy, and start using offerings from more than 925 Independent Software Vendors (ISVs).

Today we are giving you, as a consumer of software through AWS Marketplace, the ability to selectively share your contact information (name, title, phone number, email address, location, and organization) with software vendors in order to simplify and streamline your requests for product support. The vendors can programmatically access this information and store it within their own support systems so that they can easily verify your identity and provide you with better support.

This is an opt-in program. Sellers can choose to participate, and you can choose to share your contact information.

Product Support Connection as a Buyer
In order to test out this feature I launched Barracuda Web Application Firewall (WAF). After selecting my options and clicking on the Accept Software Terms & Launch with 1-click button, I was given the option to share my contact details:

Then I entered my name and other information:

I have the option to enter up to 5 contacts for each subscription at this point. I can also add, change, or delete them later if necessary:

If products that I already own are now enabled for Product Support Connection, I can add contact details here as well.

Product Support Connection as a Seller
If I am an ISV and want to participate in this program, I contact the AWS Marketplace Seller & Catalog Operations Team (aws-marketplace-seller-ops@amazon.com). The team will enroll me in the program and provide me with access to a secure API that I can use to access contact information. Per the terms of the program, I must register each contact in my support or CRM system within one business day, and use it only for support purposes. To learn more, read AWS Marketplace Product Support Connection Helps Software Vendors Provide More Seamless Product Support on the AWS Partner Network Blog.

Getting Started
When I am searching for products in AWS Marketplace, I can select Product Support Connection as a desired product attribute:

As part of today’s launch, I would like to thank the following vendors who worked with us to shape this program and to add the Product Support Connection to their offerings:

  • Barracuda – Web Application Firewall (WAF), NextGen Firewall F-Series, Load Balancer ADC, Email Security Gateway, Message Archiver.
  • Chef – Chef Server, Chef Compliance.
  • Matillion – Matillion ETL for Redshift.
  • Rogue Wave – OpenLogic Enhanced Support for CentOS (6 & 7, Standard & Security Hardened).
  • SoftNAS – SoftNAS Cloud (Standard & Express).
  • Sophos – Sophos UTM Manager 4, Sophos UTM 9.
  • zData – Greenplum database.
  • Zend – PHP, Zend Server.
  • Zoomdata – Zoomdata.

We’re looking forward to working with other vendors who would like to follow their lead!

To get started, contact the AWS Marketplace Seller & Catalog Operations Team at aws-marketplace-seller-ops@amazon.com.

Jeff;

 

New – AWS Server Migration Service

I love to use the historical photo at right to emphasize a situation that many of our customers face. They need to move their existing IT infrastructure to the AWS Cloud without scheduling prolonged maintenance periods for data migration. Because many of these applications are mission-critical and are heavily data-driven, taking systems offline in order to move gigabytes or terabytes of stored data is simply not practical.

New Service
Today I would like to tell you about AWS Server Migration Service.

This service simplifies and streamlines the process of migrating existing virtualized applications to Amazon EC2. In order to support the IT equivalent of the use case illustrated in the photo, it allows you to incrementally replicate live Virtual Machines (VMs) to the cloud without the need for a prolonged maintenance period. You can automate, schedule, and track incremental replication of your live server volumes, simplifying the process of coordinating and implementing large-scale migrations that span tens or hundreds of volumes.

You get full control of the replication process from the AWS Management Console, the AWS Command Line Interface (CLI), or a set of migration APIs. After choosing the Windows or Linux servers to migrate, you can choose the replication frequency that best matches your application’s usage pattern and minimizes network bandwidth. Behind the scenes, AWS Server Migration Service will replicate your server’s volumes to the cloud, creating a new Amazon Machine Image (AMI) for each one. You can track the status of each replication job from the console. Each incremental sync generates a fresh AMI, allowing you to test the migrated volumes in advance of your actual cut-over.

Migration Service Tour
Before you start the actual migration process, you need to download and deploy the AWS Server Migration Service Connector. The Connector runs within your existing virtualized environment, and allows the migration itself to be done in agentless fashion, sparing you the trouble of installing an agent on each existing server. If you run a large organization and/or have multiple virtualized environments, you can deploy multiple copies of the Connector.

The Connector has a web UI that you’ll access from within your existing environment. After you click through the license agreement, you will be prompted to create a password, configure the local network settings, and finalize a couple of preferences. Next, you will need to provide the Connector with a set of AWS account or IAM User credentials so that it can access the SMS, S3, and SNS APIs. If you use an IAM User, you’ll also need to create an appropriate IAM Role (the User Guide contains a sample).

With the Connector up and running, you can log in to the AWS Management Console, navigate to Server Migration Service, and see a list of all of the Connectors that have registered with the service. From there you can import the server catalog from each Connector and inspect the server inventory:

Then you can pick some servers to replicate, select them, and click on Create replication jobs. Next, you configure the license type (AWS or Bring Your Own) for each server:

With that out of the way, you can choose to initiate replication immediately or at a date and time in the future. You can also choose the replication interval:

After you review and approve the settings, you can view all of your replication jobs in the dashboard:

You can also examine individual jobs:

And you can see the AMIs created after each incremental run:

From there you can click on Launch instance, choose an EC2 instance type, and perform acceptance testing on the migrated server.
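
If you would rather script this flow, the service is also exposed through the AWS CLI. Here’s a rough sketch of the same steps; the server ID, timestamp, and IAM role name are placeholders, and I’m assuming that the replication frequency is expressed in hours:

# List the Connectors that have registered with the service
$ aws sms get-connectors

# Pull in the server catalog and inspect the inventory
$ aws sms import-server-catalog
$ aws sms get-servers

# Replicate one of the servers every 12 hours
$ aws sms create-replication-job --server-id s-12345678 \
    --seed-replication-time 2016-10-15T00:00:00Z \
    --frequency 12 --license-type AWS --role-name sms

# Track the status of the replication jobs
$ aws sms get-replication-jobs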

Available Now
AWS Server Migration Service is now available in the US East (Northern Virginia), EU (Ireland), and Asia Pacific (Sydney) Regions and you can start using it today. There is no charge for the use of the service; you pay for S3 storage used during the replication process and for the EBS snapshots created when the migration is complete.

Jeff;

 

Run Windows Server 2016 on Amazon EC2

You can now run Windows Server 2016 on Amazon Elastic Compute Cloud (EC2). This version of Windows Server is packed with new features including support for Docker and Windows containers.  We are making it available in all AWS regions today, in four distinct forms:

  • Windows Server 2016 Datacenter with Desktop Experience – The mainstream version of Windows Server, designed with security and scalability in mind, with support for both traditional and cloud-native applications. To learn a lot more about Windows Server 2016, download The Ultimate Guide to Windows Server 2016 (registration required).
  • Windows Server 2016 Nano Server – A cloud-native, minimal install that takes up a modest amount of disk space and boots more swiftly than the Datacenter version, while leaving more system resources (memory, storage, and CPU) available to run apps and services. You can read Moving to Nano Server to learn how to migrate your code and your applications. Nano Server does not include a desktop UI so you’ll need to administer it remotely using PowerShell or WMI. To learn how to do this, read Connecting to a Windows Server 2016 Nano Server Instance.
  • Windows Server 2016 with Containers – Windows Server 2016 with Windows containers and Docker already installed.
  • Windows Server 2016 with SQL Server 2016 – Windows Server 2016 with SQL Server 2016 already installed.

Here are a couple of things to keep in mind with respect to Windows Server 2016 on EC2:

  • Memory – Microsoft recommends a minimum of 2 GiB of memory for Windows Server. Review the EC2 Instance Types to find the type that is the best fit for your application.
  • Pricing – The standard Windows EC2 Pricing applies; you can launch On-Demand and Spot Instances, and you can purchase Reserved Instances.
  • Licensing – You can (subject to your licensing terms with Microsoft) bring your own license to AWS.
  • SSM Agent – An upgraded version of our SSM Agent is now used in place of EC2Config. Read the User Guide to learn more.
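
If you like to work from the command line, you can locate the new AMIs with a quick describe-images query. The name pattern below is my assumption; check the console for the exact AMI names:

$ aws ec2 describe-images --owners amazon \
    --filters "Name=name,Values=Windows_Server-2016-English-*" \
    --query "Images[*].[ImageId,Name]" --output table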

Containers in Action
I launched the Windows Server 2016 with Containers AMI and logged in to it in the usual way:

Then I opened up PowerShell and ran the command docker run microsoft/sample-dotnet. Docker downloaded the image and launched it. Here’s what I saw:
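
If you would like to experiment a bit more, you can run a detached container and map a port. For example, assuming that the microsoft/iis image is available on Docker Hub:

PS C:\> docker run -d -p 80:80 --name web microsoft/iis
PS C:\> docker ps

After that, browsing to the instance’s public IP should show the IIS welcome page (assuming that your security group allows inbound traffic on port 80).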

We plan to add Windows container support to Amazon ECS by the end of 2016. You can register here to learn more.

Get Started Today
You can get started with Windows Server 2016 on EC2 today. Try it out and let me know what you think!

Jeff;

X1 Instance Update – X1.16xlarge + More Regions

Earlier this year we made the x1.32xlarge instance available. With nearly 2 TiB of memory, this instance type is a great fit for memory-intensive big data, caching, and analytics workloads. Our customers are running the SAP HANA in-memory database, large-scale Apache Spark and Hadoop jobs, and many types of high performance computing (HPC) workloads.

Today we are making two updates to the X1 instance type:

  • New Instance Size – The new x1.16xlarge instance size provides another option for running smaller-footprint workloads.
  • New Regions – The X1 instances are now available in three additional regions, bringing the total to ten.

New Instance Size
Here are the specifications for the new x1.16xlarge:

  • Processor: 2 x Intel Xeon E7-8880 v3 (Haswell) running at 2.3 GHz – 32 cores / 64 vCPUs.
  • Memory: 976 GiB with Single Device Data Correction (SDDC+1).
  • Instance Storage: 1,920 GB SSD.
  • Network Bandwidth: 10 Gbps.
  • Dedicated EBS Bandwidth: 5 Gbps (EBS Optimized at no additional cost).

Like the x1.32xlarge, the x1.16xlarge supports Turbo Boost 2.0 (up to 3.1 GHz), AVX 2.0, AES-NI, and TSX-NI. X1 instances are available as On-Demand Instances, Spot Instances, or Dedicated Instances; you can also purchase Reserved Instances and Dedicated Host Reservations.
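
Launching one from the command line looks like any other instance launch; here’s a minimal sketch (the AMI ID and key pair name are placeholders):

$ aws ec2 run-instances --image-id ami-12345678 \
    --instance-type x1.16xlarge --key-name my-key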

New Regions
Both sizes of X1 instances are available in the following regions:

  • US East (Northern Virginia)
  • US West (Oregon)
  • EU (Ireland)
  • EU (Frankfurt)
  • Asia Pacific (Tokyo)
  • Asia Pacific (Singapore)
  • Asia Pacific (Sydney)
  • Asia Pacific (Mumbai) – New
  • AWS GovCloud (US) – New
  • Asia Pacific (Seoul) – New

Jeff;

 

Snowball HDFS Import

If you are running MapReduce jobs on premises and storing data in HDFS (the Hadoop Distributed File System), you can now copy that data directly from HDFS to an AWS Snowball without using an intermediary staging file. Because HDFS is often used for Big Data workloads, this can greatly simplify the process of importing large amounts of data to AWS for further processing.

To use this new feature, download and configure the newest version of the Snowball Client on the on-premises host that is running the desired HDFS cluster. Then use commands like this to copy files from HDFS to S3 via Snowball:

$ snowball cp -n hdfs://HOST:PORT/PATH_TO_FILE_ON_HDFS s3://BUCKET-NAME/DESTINATION-PATH

You can use the -r option to recursively copy an entire folder:

$ snowball cp -n -r hdfs://HOST:PORT/PATH_TO_FOLDER_ON_HDFS s3://BUCKET_NAME/DESTINATION_PATH
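
Before you kick off a large copy, it can be helpful to size up the source data so that you know what to expect; the standard HDFS shell can report the totals:

$ hdfs dfs -du -s -h hdfs://HOST:PORT/PATH_TO_FOLDER_ON_HDFS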

To learn more, read Using the HDFS Client.

Jeff;

New P2 Instance Type for Amazon EC2 – Up to 16 GPUs

I like to follow long-term technology and business trends and watch as they shape the products and services that I get to use and to write about. As I was preparing to write today’s post, three such trends came to mind:

  • Moore’s Law – Coined in 1965, Moore’s Law postulates that the number of transistors on a chip doubles every year (revised in 1975 to a doubling every two years).
  • Mass Market / Mass Production – Because all of the technologies that we produce, use, and enjoy every day consume vast numbers of chips, there’s a huge market for them.
  • Specialization – Due to the previous trend, even niche markets can be large enough to be addressed by purpose-built products.

As the industry pushes forward in accord with these trends, a couple of interesting challenges have surfaced over the past decade or so. Again, here’s a quick list (yes, I do think in bullet points):

  • Speed of Light – Even as transistor density increases, the speed of light imposes scaling limits (as computer pioneer Grace Hopper liked to point out, electricity can travel slightly less than 1 foot in a nanosecond).
  • Semiconductor Physics – Fundamental limits in the switching time (on/off) of a transistor ultimately determine the minimum achievable cycle time for a CPU.
  • Memory Bottlenecks – The well-known von Neumann Bottleneck imposes limits on the value of additional CPU power.

The GPU (Graphics Processing Unit) was born of these trends, and addresses many of the challenges! Processors have reached the upper bound on clock rates, but Moore’s Law gives designers more and more transistors to work with. Those transistors can be used to add more cache and more memory to a traditional architecture, but the von Neumann Bottleneck limits the value of doing so. On the other hand, we now have large markets for specialized hardware (gaming comes to mind as one of the early drivers for GPU consumption). Putting all of this together, the GPU scales out (more processors and parallel banks of memory) instead of up (faster processors and bottlenecked memory). Net-net: the GPU is an effective way to use lots of transistors to provide massive amounts of compute power!

With all of this as background, I would like to tell you about the newest EC2 instance type, the P2. These instances were designed to chew through tough, large-scale machine learning, deep learning, computational fluid dynamics (CFD), seismic analysis, molecular modeling, genomics, and computational finance workloads.

New P2 Instance Type
This new instance type incorporates up to 8 NVIDIA Tesla K80 Accelerators, each running a pair of NVIDIA GK210 GPUs. Each GPU provides 12 GiB of memory (accessible via 240 GB/second of memory bandwidth), and 2,496 parallel processing cores. They also include ECC memory protection, allowing them to fix single-bit errors and to detect double-bit errors. The combination of ECC memory protection and double precision floating point operations makes these instances a great fit for all of the workloads that I mentioned above.

Here are the instance specs:

Instance Name   GPU Count   vCPU Count   Memory    Parallel Processing Cores   GPU Memory   Network Performance
p2.xlarge       1           4            61 GiB    2,496                       12 GiB       High
p2.8xlarge      8           32           488 GiB   19,968                      96 GiB       10 Gigabit
p2.16xlarge     16          64           732 GiB   39,936                      192 GiB      20 Gigabit

All of the instances are powered by an AWS-specific version of Intel’s Broadwell processor, running at 2.7 GHz. The p2.16xlarge gives you control over C-states and P-states, and can turbo boost up to 3.0 GHz when running on 1 or 2 cores.

The GPUs support CUDA 7.5 and above, OpenCL 1.2, and the GPU Compute APIs. The GPUs on the p2.8xlarge and the p2.16xlarge are connected via a common PCI fabric. This allows for low-latency, peer-to-peer GPU-to-GPU transfers.
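
You can inspect this fabric for yourself once the NVIDIA driver is installed; I believe that drivers of this vintage support the topo option:

$ nvidia-smi topo -m    # Print the GPU-to-GPU interconnect (topology) matrix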

All of the instances make use of our new Elastic Network Adapter (ENA – read Elastic Network Adapter – High Performance Network Interface for Amazon EC2 to learn more) and can, per the table above, support up to 20 Gbps of low-latency networking when used within a Placement Group.

Having a powerful multi-vCPU processor and multiple, well-connected GPUs on a single instance, along with low-latency access to other instances with the same features, creates a very impressive hierarchy for scale-out processing:

  • One vCPU
  • Multiple vCPUs
  • One GPU
  • Multiple GPUs in an instance
  • Multiple GPUs in multiple instances within a Placement Group

P2 instances are VPC-only and require 64-bit, HVM-style, EBS-backed AMIs. You can launch them today in the US East (Northern Virginia), US West (Oregon), and EU (Ireland) regions as On-Demand Instances, Spot Instances, Reserved Instances, or Dedicated Hosts.

Here’s how I installed the NVIDIA drivers and the CUDA toolkit on my P2 instance, after first creating, formatting, attaching, and mounting (to /ebs) an EBS volume that had enough room for the CUDA toolkit and the associated samples (10 GiB is more than enough):

$ cd /ebs
$ sudo yum update -y
$ sudo yum groupinstall -y "Development tools"
$ sudo yum install -y kernel-devel-`uname -r`
$ wget http://us.download.nvidia.com/XFree86/Linux-x86_64/352.99/NVIDIA-Linux-x86_64-352.99.run
$ wget http://developer.download.nvidia.com/compute/cuda/7.5/Prod/local_installers/cuda_7.5.18_linux.run
$ chmod +x NVIDIA-Linux-x86_64-352.99.run
$ sudo ./NVIDIA-Linux-x86_64-352.99.run
$ chmod +x cuda_7.5.18_linux.run
$ sudo ./cuda_7.5.18_linux.run   # Don't install the driver, just install CUDA and the samples
$ sudo nvidia-smi -pm 1          # Enable persistence mode
$ sudo nvidia-smi -acp 0         # Allow any user to change the application clocks
$ sudo nvidia-smi --auto-boost-permission=0   # Allow any user to change the auto boost setting
$ sudo nvidia-smi -ac 2505,875   # Set the memory and graphics clocks to their maximum values

Note that NVIDIA-Linux-x86_64-352.99.run and cuda_7.5.18_linux.run are interactive programs; you need to accept the license agreements, choose some options, and enter some paths. Here’s how I set up the CUDA toolkit and the samples when I ran cuda_7.5.18_linux.run:

P2 and OpenCL in Action
With everything set up, I took this Gist and compiled it on a p2.8xlarge instance:

[ec2-user@ip-10-0-0-242 ~]$ gcc test.c -I /usr/local/cuda/include/ -L /usr/local/cuda-7.5/lib64/ -lOpenCL -o test

Here’s what it reported:

[ec2-user@ip-10-0-0-242 ~]$ ./test
1. Device: Tesla K80
 1.1 Hardware version: OpenCL 1.2 CUDA
 1.2 Software version: 352.99
 1.3 OpenCL C version: OpenCL C 1.2
 1.4 Parallel compute units: 13
2. Device: Tesla K80
 2.1 Hardware version: OpenCL 1.2 CUDA
 2.2 Software version: 352.99
 2.3 OpenCL C version: OpenCL C 1.2
 2.4 Parallel compute units: 13
3. Device: Tesla K80
 3.1 Hardware version: OpenCL 1.2 CUDA
 3.2 Software version: 352.99
 3.3 OpenCL C version: OpenCL C 1.2
 3.4 Parallel compute units: 13
4. Device: Tesla K80
 4.1 Hardware version: OpenCL 1.2 CUDA
 4.2 Software version: 352.99
 4.3 OpenCL C version: OpenCL C 1.2
 4.4 Parallel compute units: 13
5. Device: Tesla K80
 5.1 Hardware version: OpenCL 1.2 CUDA
 5.2 Software version: 352.99
 5.3 OpenCL C version: OpenCL C 1.2
 5.4 Parallel compute units: 13
6. Device: Tesla K80
 6.1 Hardware version: OpenCL 1.2 CUDA
 6.2 Software version: 352.99
 6.3 OpenCL C version: OpenCL C 1.2
 6.4 Parallel compute units: 13
7. Device: Tesla K80
 7.1 Hardware version: OpenCL 1.2 CUDA
 7.2 Software version: 352.99
 7.3 OpenCL C version: OpenCL C 1.2
 7.4 Parallel compute units: 13
8. Device: Tesla K80
 8.1 Hardware version: OpenCL 1.2 CUDA
 8.2 Software version: 352.99
 8.3 OpenCL C version: OpenCL C 1.2
 8.4 Parallel compute units: 13

As you can see, I have a ridiculous amount of compute power available at my fingertips!

New Deep Learning AMI
As I said at the beginning, these instances are a great fit for machine learning, deep learning, computational fluid dynamics (CFD), seismic analysis, molecular modeling, genomics, and computational finance workloads.

In order to help you to make great use of one or more P2 instances, we are launching a Deep Learning AMI today. Deep learning has the potential to generate predictions (also known as scores or inferences) that are more reliable than those produced by less sophisticated machine learning, at the cost of a more complex and more computationally intensive training process. Fortunately, the newest generations of deep learning tools are able to distribute the training work across multiple GPUs on a single instance as well as across multiple instances, each containing multiple GPUs.

The new AMI contains the following frameworks, each installed, configured, and tested against the popular MNIST database:

MXNet – This is a flexible, portable, and efficient library for deep learning. It supports declarative and imperative programming models across a wide variety of programming languages including C++, Python, R, Scala, Julia, MATLAB, and JavaScript.

Caffe – This deep learning framework was designed with expression, speed, and modularity in mind. It was developed at the Berkeley Vision and Learning Center (BVLC) with assistance from many community contributors.

Theano – This Python library allows you to define, optimize, and evaluate mathematical expressions that involve multi-dimensional arrays.

TensorFlow – This is an open source library for numerical computation using data flow graphs (each node in the graph represents a mathematical operation; each edge represents multidimensional data communicated between them).

Torch – This is a GPU-oriented scientific computing framework with support for machine learning algorithms, all accessible via LuaJIT.

Consult the README file in ~ec2-user/src to learn more about these frameworks.

AMIs from NVIDIA
You may also find the following AMIs to be of interest:

Jeff;

Expanding the M4 Instance Type – New M4.16xlarge

EC2’s M4 instances offer a balance of compute, memory, and networking resources and are a good choice for many different types of applications.

We launched the M4 instances last year (read The New M4 Instance Type to learn more) and gave you a choice of five sizes, from large up to 10xlarge. Today we are expanding the range with the introduction of a new m4.16xlarge with 64 vCPUs and 256 GiB of RAM. Here’s the complete set of specs:

Instance Name   vCPU Count   RAM       Instance Storage   Network Performance   EBS-Optimized Bandwidth
m4.large        2            8 GiB     EBS Only           Moderate              450 Mbps
m4.xlarge       4            16 GiB    EBS Only           High                  750 Mbps
m4.2xlarge      8            32 GiB    EBS Only           High                  1,000 Mbps
m4.4xlarge      16           64 GiB    EBS Only           High                  2,000 Mbps
m4.10xlarge     40           160 GiB   EBS Only           10 Gbps               4,000 Mbps
m4.16xlarge     64           256 GiB   EBS Only           20 Gbps               10,000 Mbps

The new instances are based on Intel Xeon E5-2686 v4 (Broadwell) processors that are optimized specifically for EC2. When used with Elastic Network Adapter (ENA) inside of a placement group, the instances can deliver up to 20 Gbps of low-latency network bandwidth. To learn more about the ENA, read my post, Elastic Network Adapter – High Performance Network Interface for Amazon EC2.
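
To get the full 20 Gbps, launch the instances into a cluster placement group. Here’s a minimal sketch (the group name and AMI ID are placeholders):

$ aws ec2 create-placement-group --group-name m4-pg --strategy cluster
$ aws ec2 run-instances --image-id ami-12345678 \
    --instance-type m4.16xlarge --placement GroupName=m4-pg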

Like the m4.10xlarge, the m4.16xlarge allows you to control the C-states to enable higher turbo frequencies when you are using just a few cores. You can also control the P-states to lower performance variability (read my extended description in New C4 Instances to learn more about both of these features).

You can purchase On-Demand Instances, Spot Instances, and Reserved Instances; visit the EC2 Pricing page for more information.

Available Now
As part of today’s launch we are also making the M4 instances available in the China (Beijing), South America (São Paulo), and AWS GovCloud (US) regions.

Jeff;

Additional At-Rest and In-Transit Encryption Options for Amazon EMR

Our customers use Amazon EMR (including Apache Hadoop and the full range of tools that make up the Apache Spark ecosystem) to handle many types of mission-critical big data use cases. For example:

  • Yelp processes over a terabyte of log files and photos every day.
  • Expedia processes streams of clickstream, user interaction, and supply data.
  • FINRA analyzes billions of brokerage transaction records daily.
  • DataXu evaluates 30 trillion ad opportunities monthly.

Because customers like these (see our big data use cases for many others) are processing data that is mission-critical and often sensitive, they need to keep it safe and sound.

We already offer several data encryption options for EMR, including server-side and client-side encryption for Amazon S3 with EMRFS and Transparent Data Encryption for HDFS. While these solutions do a good job of protecting data at rest, they do not address data stored in temporary files or data that is in flight, moving between job steps. Each of these encryption options must be individually enabled and configured, making the process of implementing encryption more tedious than it needs to be.

It is time to change this!

New Encryption Support
Today we are launching a new, comprehensive encryption solution for EMR. You can now easily enable at-rest and in-transit encryption for Apache Spark, Apache Tez, and Hadoop MapReduce on EMR.

The at-rest encryption addresses the following types of storage:

  • Data stored in S3 via EMRFS.
  • Data stored in the local file system of each node.
  • Data stored on the cluster using HDFS.

The in-transit encryption makes use of the open-source encryption features native to the following frameworks:

  • Apache Spark
  • Apache Tez
  • Apache Hadoop MapReduce

This new feature can be configured using an Amazon EMR security configuration. You can create a configuration from the EMR Console, with the EMR CLI, or via the EMR API.
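
For example, here’s a sketch of what creating a configuration from the CLI might look like. The JSON shows the general shape of an encryption configuration; the bucket name and key ARN are placeholders, and you should consult the EMR documentation for the authoritative field names:

$ aws emr create-security-configuration --name MySecConfig \
    --security-configuration '{
      "EncryptionConfiguration": {
        "EnableAtRestEncryption": true,
        "EnableInTransitEncryption": true,
        "AtRestEncryptionConfiguration": {
          "S3EncryptionConfiguration": { "EncryptionMode": "SSE-S3" },
          "LocalDiskEncryptionConfiguration": {
            "EncryptionKeyProviderType": "AwsKms",
            "AwsKmsKey": "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE"
          }
        },
        "InTransitEncryptionConfiguration": {
          "TLSCertificateConfiguration": {
            "CertificateProviderType": "PEM",
            "S3Object": "s3://my-bucket/certs.zip"
          }
        }
      }
    }'

You can then pass the configuration name to aws emr create-cluster via the --security-configuration option.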

The EMR Console now includes a list of security configurations:

Click on Create to make a new one:

Enter a name, and then choose the desired mode and type for each aspect of this new feature. Based on the mode or the type, the console will prompt you for additional information.

S3 Encryption:

Local disk encryption:

In-transit encryption:

If you choose PEM as the certificate provider type, you will need to enter the S3 location of a ZIP file that contains the PEM file(s) that you want to use for encryption. If you choose Custom, you will need to enter the S3 location of a JAR file and the class name of the custom certificate provider.

After you make all of your choices and click on Create, your security configuration will appear in the console:

You can then specify the configuration when you create a new EMR Cluster. This feature is available for clusters that are running Amazon EMR release 4.8.0 or 5.0.0. To learn more, read about Amazon EMR Encryption with Security Configurations.

Jeff;

 

AWS CloudFormation Update – YAML, Cross-Stack References, Simplified Substitution

AWS CloudFormation gives you the ability to express entire stacks (collections of related AWS resources) declaratively, by constructing templates. You can define a stack, specify and configure the desired resources and their relationship to each other, and then launch as many copies of the stack as desired. CloudFormation will create and set up the resources for you, while also taking care to address any ordering dependencies between the resources.

Today we are making three important additions to CloudFormation:

  • YAML Support – You can now write your CloudFormation templates in YAML.
  • Cross Stack References – You can now export values from one stack and use them in another.
  • Simplified Substitution – You can more easily perform string replacements within templates.

Let’s take a look!

YAML Support
You can now write your CloudFormation templates in YAML (short for YAML Ain’t Markup Language). Up until now, templates were written in JSON. While YAML and JSON have similar expressive powers, YAML was designed to be human-readable while JSON was (let’s be honest) not. YAML-based templates use less punctuation and should be substantially easier to write and to read. They also allow the use of comments. CloudFormation supports essentially all of YAML, with the exception of hash merges, aliases, and some tags (binary, omap, pairs, timestamp, and set).

When you write a CloudFormation template in YAML, you will use the same top-level structure (Description, Metadata, Mappings, Outputs, Parameters, Conditions, and Resources).  Here’s what a parameter definition looks like:

Parameters:
  DBName:
    AllowedPattern: '[a-zA-Z][a-zA-Z0-9]*'
    ConstraintDescription: must begin with a letter and contain only alphanumeric
      characters.
    Default: wordpressdb
    Description: The WordPress database name
    MaxLength: '64'
    MinLength: '1'
    Type: String

When you use YAML, you can also use a new, abbreviated syntax to refer to CloudFormation functions such as GetAtt, Base64, and FindInMap. You can now use the existing syntax ("Fn::GetAtt") or the new, tag-based syntax (!GetAtt). Note that the “!” is part of the YAML syntax for tags; it is not the “logical not” operator. Here’s the old syntax:

- Fn::FindInMap:
    - AWSInstanceType2Arch
    - Ref: InstanceType
    - Arch

And the new one:

!FindInMap [AWSInstanceType2Arch, !Ref InstanceType, Arch]

As you can see, the newer syntax is shorter and cleaner. Note, however, that you cannot put two tags next to each other. You can intermix the two forms and you can also nest them. For example, !Base64 !Sub is invalid but !Base64 Fn::Sub is fine.

The CloudFormation API functions (CreateChangeSet, CreateStack, UpdateStack, and so forth) now accept templates in either JSON or YAML. The GetTemplate function returns the template in the original format. CloudFormation Designer does not support YAML templates today, but this is on our roadmap.
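
If you want to try the YAML support from the command line, the CLI accepts YAML templates anywhere it accepted JSON; for example:

$ aws cloudformation validate-template --template-body file://template.yaml
$ aws cloudformation create-stack --stack-name my-stack \
    --template-body file://template.yaml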

Cross Stack References
Many AWS customers use one “system” CloudFormation stack to set up their environment (VPCs, VPC subnets, security groups, IP addresses, and so forth) and several other “application” stacks to populate it (EC2 & RDS instances, message queues, and the like). Until now there was no easy way for the application stacks to reference resources created by the system stack.

You can now create and export values from one stack and make use of them in other stacks without going to the trouble of creating custom CloudFormation resources. The first stack exports values like this:

Outputs: 
  TSSG: 
    Value: !Ref TroubleShootingSG
    Export:
      Name: AccountSG

The other stacks then reference them using the new ImportValue function:

EC2Instance:
  Type: AWS::EC2::Instance
  Properties:
    SecurityGroups:
      - !ImportValue AccountSG

The exported names must be unique within the AWS account and the region. A stack that is referenced by another stack cannot be deleted, and it cannot modify or remove the exported value.
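
Keep the deployment order in mind: create (or update) the exporting stack before you create the stacks that import from it. Here’s a sketch (the stack and template names are mine):

$ aws cloudformation create-stack --stack-name system-stack \
    --template-body file://system.yaml
$ aws cloudformation create-stack --stack-name app-stack \
    --template-body file://app.yaml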

Simplified Substitution
Many CloudFormation templates perform some intricate string manipulation in order to construct command lines, file paths, and other values that cannot be fully determined until the stack is created. Until now, this required the use of Fn::Join. In combination with the JSON syntax, this resulted in some messy templates that were hard to understand and maintain. In order to simplify this important aspect of template development, we are introducing a new substitution function, Fn::Sub. This function replaces variables (denoted by the syntax ${variable_name}) with their evaluated values. For example:

configure_wordpress:
  commands:
    01_set_mysql_root_password:
      command: !Sub |
           mysqladmin -u root password '${DBRootPassword}'
      test: !Sub |
           $(mysql ${DBName} -u root --password='${DBRootPassword}' >/dev/null 2>&1 </dev/null); (( $? != 0 ))
    02_create_database:
      command: !Sub |  
           mysql -u root --password='${DBRootPassword}' < /tmp/setup.mysql
      test: !Sub |
           $(mysql ${DBName} -u root --password='${DBRootPassword}' >/dev/null 2>&1 </dev/null); (( $? != 0 ))

If you need to generate ${} or ${variable}, simply write ${!} or ${!variable}.

Coverage Updates
As part of this release we also added additional support for AWS Key Management Service (KMS), EC2 Spot Fleet, and Amazon EC2 Container Service. See the CloudFormation Release History for more information.

Available Now
All of these features are available now and you can start using them today!

If you are interested in learning more about CloudFormation, please plan to attend our upcoming webinar, AWS Infrastructure as Code. You will learn how to take advantage of best practices for planning and provisioning your infrastructure, and you will have the opportunity to see the new features in action.

Jeff;

 

New Reader Endpoint for Amazon Aurora – Load Balancing & Higher Availability

Feature-by-feature, Amazon Aurora has become more powerful and easier to use. Over the past several months we have given you the ability to create a cluster from a MySQL backup, create cross-region read replicas, share snapshots across accounts, exercise additional control over failover, and migrate from other in-cloud or on-premises databases to Aurora.

Today, as an extension of Aurora’s existing read replica model, we are introducing a new cluster-level reader endpoint. Your application can continue to direct read queries to the individual replicas as before. However, you can also update it to make use of the new endpoint. Doing so will give you two important advantages: load balancing and higher availability.

Load Balancing – Connecting to the reader endpoint allows Aurora to load-balance connections across the replicas in the DB cluster. This helps to spread the read workload around and can lead to better performance and more equitable use of the resources available to each replica. In the event of a failover, if the replica that you are connected to is promoted to the primary instance, the connection will be dropped. You can then reconnect to the reader endpoint in order to send your read queries to the other replicas in the cluster.

Higher Availability – You can place multiple Aurora replicas in distinct Availability Zones and connect to them via the new endpoint. In the unlikely event that an Availability Zone fails, applications that make use of the new endpoint will continue to send read traffic to the other replicas with minimal disruption.

Find the Endpoint
You can find the new Reader Endpoint in the Aurora Console:
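
You can also retrieve it programmatically; the DescribeDBClusters output includes a ReaderEndpoint field:

$ aws rds describe-db-clusters \
    --query "DBClusters[*].[DBClusterIdentifier,ReaderEndpoint]" --output table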

This handy new feature is available now and you can start using it today!

To learn more about this and other new Amazon Aurora features, attend our Amazon Aurora – New Features webinar on September 28th.

Jeff;