Route 53 Update – Private DNS and More

Amazon Route 53 is a highly available and scalable Domain Name System (DNS) service. As you probably know, it translates domain names into numerical IP addresses. This level of indirection allows you to refer to a computer by its name (which usually remains the same for an extended period of time) instead of by its address (which could change from minute to minute for any number of reasons).

Up until now, the primary use for Route 53 has been the lookup of global, public names. While it was sometimes possible to use it for private names within an Amazon Virtual Private Cloud, those names were still globally visible, even if the IP addresses were internal to the VPC and hence unreachable from the outside.

Today we are announcing Private DNS for Route 53. You can now easily manage authoritative DNS within your Virtual Private Clouds. This allows you to use custom DNS names for your internal resources without exposing the names or IP addresses to the public Internet.

As part of today’s launch, we are upgrading the AWS Management Console so that it provides you with additional information when a health check fails. We are also announcing support for reusable delegation sets. This will simplify management of name servers when you are using Route 53 to manage multiple domains.

Let’s take a look at each of these new features!

Private DNS
You can now use Route 53 to manage the internal DNS names for your application resources (web servers, application servers, databases, and so forth) without exposing this information to the public Internet. This adds an additional layer of security, and also allows you to fail over from a primary resource to a secondary one (often called a “flip”) by simply mapping the DNS name to a different IP address.
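A "flip" of this kind maps to a single UPSERT call against the Route 53 ChangeResourceRecordSets API. Here is a minimal sketch in Python of the change batch you would send (the record name and IP addresses are placeholders, not values from any real deployment):

```python
def build_flip_change_batch(record_name, new_ip, ttl=60):
    """Build a Route 53 change batch that repoints an A record.

    UPSERT creates the record if it is absent, or updates it in place,
    which is what makes it suitable for a primary-to-secondary flip.
    """
    return {
        "Comment": "Fail over %s to %s" % (record_name, new_ip),
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "A",
                    "TTL": ttl,  # a short TTL makes the flip take effect quickly
                    "ResourceRecords": [{"Value": new_ip}],
                },
            }
        ],
    }

# With boto3, this batch would be passed to
# route53.change_resource_record_sets(HostedZoneId=..., ChangeBatch=batch).
batch = build_flip_change_batch("db.internal.example.com", "10.0.1.25")
```
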

Route 53 also allows you to set up split-horizon DNS. Once set up, a given DNS name will map to one IP address when a lookup is initiated from within a VPC, and to a different address when the lookup originates elsewhere.

You can get started with Route 53 Private DNS by creating a Route 53 Hosted Zone, choosing the Private Hosted Zone option, and designating a VPC:

The console will display the type of each of your hosted zones:

To learn more, read the documentation for Working with Private Hosted Zones.
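The console steps above correspond to a single CreateHostedZone API call (boto3's route53.create_hosted_zone). A sketch of the parameters involved, with a placeholder VPC ID:

```python
import time

def private_zone_request(domain, vpc_region, vpc_id):
    """Parameters for CreateHostedZone that make the zone private
    and associate it with a VPC at creation time."""
    return {
        "Name": domain,
        # CallerReference must be unique per request; a timestamp is one easy way.
        "CallerReference": "private-zone-%d" % int(time.time()),
        "HostedZoneConfig": {"Comment": "internal names", "PrivateZone": True},
        "VPC": {"VPCRegion": vpc_region, "VPCId": vpc_id},  # vpc_id is a placeholder
    }

req = private_zone_request("internal.example.com", "us-east-1", "vpc-12345678")
```
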

Reusable Delegation Sets
When you use Route 53 to host DNS for a domain, it sets up four authoritative name servers collectively known as a delegation set. As part of today’s release we are simplifying domain management by allowing you to use the same delegation set for any number of your domains. This is a somewhat advanced, API-only feature that can prove to be useful in a couple of different ways:

  • If you are moving a large group of domains from another provider to Route 53, you can supply a single list of four name servers and have it applied to all of the domains that you are moving.
  • You can create generic “white label” name servers such as ns1.example.com and ns2.example.com, use them in your delegation set, and point them to your actual Route 53 name servers.

To learn more, read the API documentation for Actions on Reusable Delegation Sets.
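In boto3 terms, the flow is: create the delegation set once, then pass its ID to every CreateHostedZone call. The sketch below illustrates the call sequence with a stand-in client (the IDs and name server values are invented for illustration, not real Route 53 output):

```python
def create_domains_with_shared_ns(route53, domains):
    """Create one reusable delegation set, then host every domain on it,
    so all of the domains share the same four name servers."""
    ds = route53.create_reusable_delegation_set(
        CallerReference="shared-ns-1"
    )["DelegationSet"]
    for i, domain in enumerate(domains):
        route53.create_hosted_zone(
            Name=domain,
            CallerReference="zone-%d" % i,
            DelegationSetId=ds["Id"],  # reuse the same name servers for each zone
        )
    return ds["NameServers"]

class _FakeRoute53:
    """Stand-in for a boto3 Route 53 client, for illustration only."""
    def __init__(self):
        self.zones = []
    def create_reusable_delegation_set(self, CallerReference):
        return {"DelegationSet": {
            "Id": "/delegationset/N1EXAMPLE",
            "NameServers": ["ns-1.awsdns-01.org", "ns-2.awsdns-02.com",
                            "ns-3.awsdns-03.net", "ns-4.awsdns-04.co.uk"]}}
    def create_hosted_zone(self, **kw):
        self.zones.append(kw)
        return {"HostedZone": {"Name": kw["Name"]}}

client = _FakeRoute53()
servers = create_domains_with_shared_ns(client, ["example.com", "example.org"])
```
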

Health Check Failure Reasons
We introduced Health Checks for Route 53 last year and added editing and tagging of health checks earlier this year. We are now extending this feature again and are making the results of each health check available in the Console and the Route 53 API. Here’s how they appear in the Console:

Note that the health checks cannot connect to services that are running within a private subnet of a VPC. Similarly, Route 53 Private DNS records can't be associated with health checks.
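The same failure reasons shown in the console are returned by the GetHealthCheckStatus API, which reports one observation per health checker. A sketch of reading them (the stand-in client below only mirrors the response shape; the status strings are illustrative):

```python
def failure_reasons(route53, health_check_id):
    """Collect the per-checker status strings from GetHealthCheckStatus."""
    resp = route53.get_health_check_status(HealthCheckId=health_check_id)
    return [obs["StatusReport"]["Status"]
            for obs in resp["HealthCheckObservations"]]

class _FakeClient:
    """Stand-in for a boto3 Route 53 client; illustrative only."""
    def get_health_check_status(self, HealthCheckId):
        return {"HealthCheckObservations": [
            {"StatusReport": {"Status": "Failure: Connection timed out"}},
            {"StatusReport": {"Status": "Success: HTTP Status Code 200, OK"}},
        ]}

reasons = failure_reasons(_FakeClient(), "hc-example")
```
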

Go For It
These features are available now and you can start using them today!

Jeff;

Now Available – SUSE SLES 12 AMIs for Amazon EC2

Our friends at SUSE recently announced the release of SUSE Linux Enterprise Server 12. This release introduces a new module-based system that allows customers to stay current with the desired parts of the software stack while sticking with a stable set of core packages.

I am happy to announce that this new release is now available in Amazon Machine Image (AMI) form for use on Amazon Elastic Compute Cloud (EC2). The AMIs can receive packages from the following SUSE Linux Enterprise Server 12 modules:

  • Public Cloud – This module provides a collection of tools that enables you to create and manage cloud images from the command line on SUSE Linux Enterprise Server.
  • Web and Scripting – This module includes a suite of scripting languages, frameworks, and related tools to help developers and system administrators create stable, modern web applications using dynamic languages and frameworks such as PHP, Ruby on Rails, and Python.
  • Advanced Systems Management – This module contains three components that help system administrators automate routine tasks in the data center and the cloud: CFEngine, Puppet, and the new Machinery.
  • Legacy – This module helps you to migrate applications from SUSE Linux Enterprise 10 and 11 and other systems to SUSE Linux Enterprise 12 by providing packages that have been discontinued in the latest release.

SUSE has also included a technical preview of Docker in SUSE Linux Enterprise Server 12. Linux containers and Docker are great ways to build, deploy, and manage applications, and can be used with tools like the Open Build Service and the KIWI image system for easy and powerful image building.

The new AMIs are available now in all AWS Regions and you can start using them today! You can launch them from the Quick Start tab in the AWS Management Console:

Here are the AMI IDs (these are all x86_64):

Region                          HVM           PV
US East (Northern Virginia)     ami-48b63120  ami-aeb532c6
Asia Pacific (Tokyo)            ami-d54a79d4  ami-df4b78de
South America (São Paulo)       ami-c102b6dc  ami-c102b6dc
Asia Pacific (Singapore)        ami-84b392d6  ami-dcb3928e
Asia Pacific (Sydney)           ami-590e6263  ami-b90e6283
US West (Oregon)                ami-c5440af5  ami-d7450be7
US West (Northern California)   ami-c5440af5  ami-cd5b4f88
EU (Frankfurt)                  ami-cd5b4f88  ami-aa2610b7
EU (Ireland)                    ami-1804aa6f  ami-e801af9f

To learn more about SUSE Linux Enterprise Server 12, take a look at the Release Notes.

Jeff;

New Microsoft System Center Virtual Machine Manager Add-In

Many enterprise-scale AWS customers also have a large collection of virtualized Windows servers on their premises. These customers are now moving all sorts of workloads to the Cloud and have been looking for a unified solution to their on-premises and cloud-based system management needs. Using multiple tools to accomplish the same basic tasks (monitoring and controlling virtualized servers or instances) is inefficient and adds complexity to the development of solutions that use a combination of on-premises and cloud resources.

New Add-In
In order to allow this important customer base to manage their resources with greater efficiency, we are launching the AWS System Manager for Microsoft System Center Virtual Machine Manager (SCVMM). This add-in allows you to monitor and manage your Amazon Elastic Compute Cloud (EC2) instances (running either Windows or Linux) from within Microsoft System Center Virtual Machine Manager. You can use this add-in to perform common maintenance tasks such as restarting, stopping, and removing instances. You can also connect to the instances using the Remote Desktop Protocol (RDP).

Let’s take a quick tour of the add-in! Here’s the main screen:

You can select any public AWS Region:

After you launch an EC2 instance running Windows, you can use the add-in to retrieve, decrypt, and display the administrator password:

You can select multiple instances and operate on them as a group:

Available Now
The add-in is available for download today at no charge. After you download and install it, you simply enter your IAM credentials. The credentials will be associated with the logged-in Windows user on the host system so you’ll have to enter them just once.

As is the case with every AWS product, we would be thrilled to get your feedback (feature suggestions, bug reports, and anything else that comes to mind). Send it to scvmm-info@amazon.com.

Jeff;

Now Open – AWS Germany (Frankfurt) Region – EC2, DynamoDB, S3, and Much More

It is time to expand the AWS footprint once again, this time with a new Region in Frankfurt, Germany. AWS customers in Europe can now use the new EU (Frankfurt) Region along with the existing EU (Ireland) Region for fast, low-latency access to the suite of AWS infrastructure services. You can now build multi-Region applications with the assurance that your content will stay within the EU.

New Region
The new Frankfurt Region supports Amazon Elastic Compute Cloud (EC2) and related services including Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Auto Scaling, and Elastic Load Balancing.

It also supports AWS Elastic Beanstalk, AWS CloudFormation, Amazon CloudFront, Amazon CloudSearch, AWS CloudTrail, Amazon CloudWatch, AWS Direct Connect, Amazon DynamoDB, Amazon EMR, AWS Storage Gateway, Amazon Glacier, AWS CloudHSM, AWS Identity and Access Management (IAM), Amazon Kinesis, AWS OpsWorks, Amazon Route 53, Amazon Relational Database Service (RDS), Amazon Redshift, Amazon Simple Storage Service (S3), Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), and Amazon Simple Workflow Service (SWF).

The Region supports all sizes of T2, M3, C3, R3, and I2 instances. All EC2 instances must be launched within a Virtual Private Cloud in this Region (see my blog post, Virtual Private Clouds for Everyone for more information).

There are also three edge locations in Frankfurt for Amazon Route 53 and Amazon CloudFront.

This is our eleventh Region (see the AWS Global Infrastructure map for more information). As usual, you can see the full list in the Region menu of the AWS Management Console:

Rigorous Compliance
Every AWS Region is designed and built to meet rigorous compliance standards, including ISO 27001, SOC 1, and PCI DSS Level 1, to name a few (see the AWS Compliance page for more info). AWS is fully compliant with all applicable EU Data Protection laws. For customers who wish to use AWS to store personal data, AWS provides a data processing agreement. More information on how customers can use AWS to meet EU Data Protection requirements can be found at AWS Data Protection.

Customers
Many organizations in Europe are already making use of AWS. Here’s a very small sample:

mytaxi (Slideshare presentation) is a very popular (10 million users and 45,000 taxis) taxi booking application. They use AWS to help them to service their global customer base in real time. They plan to use the new Region to provide even better service to their customers in Germany.


Wunderlist (case study) was first attracted to AWS by, as they say, the “fantastic technology stack.” Empowered by AWS, they have developed an agile deployment model that allows them to deploy new code several times per day. They can experiment more often (with very little risk) and can launch new products more quickly. They believe that the new AWS Region will benefit their customers in Germany and will also inspire the local startup scene.

AWS Partner Network
Members of the AWS Partner Network (APN) have been preparing for the launch of the new Region. Here’s a sampling (send me email with launch day updates).

Software AG is using AWS as a global host for ARIS Cloud, a Business Process Analysis-as-a-Service (BPAaaS) product. AWS allows Software AG to focus on their core competency, the development of great software, and gives them the power to roll out new cloud products globally within days.

Trend Micro is bringing their security solutions to the new region. Trend Micro Deep Security helps customers secure their AWS deployments and instances against the latest threats, including Shellshock and Heartbleed.

Here are a few late-breaking (post-launch additions):

  1. BitNami – Support for the new Amazon Cloud Region in Germany.
  2. Appian – Appian Cloud Adds Local Hosting in Germany

Here are some of the latest and greatest third party operating system AMIs in the new Region:

  1. Canonical – Ubuntu Server 14.04 LTS
  2. SUSE – SUSE Linux Enterprise Server 11 SP3

For Developers – Signature Version 4 Support
This new Region supports only Signature Version 4. If you have built applications with the AWS SDKs or the AWS Command Line Interface (CLI) and your API calls are being rejected, you should update to the newest SDK and CLI. To learn more, visit Using the AWS SDKs and Explorers.
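The practical difference in Signature Version 4 is that requests are signed with a key derived from your secret access key through a chain of HMAC-SHA256 operations, scoping it to a date, a Region, and a service. A sketch of that derivation (the credentials below are placeholders):

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key, date_stamp, region, service):
    """Derive the Signature Version 4 signing key: each HMAC in the
    chain narrows the key's scope (date, then Region, then service)."""
    def _hmac(key, msg):
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _hmac(k_date, region)       # e.g. eu-central-1 for Frankfurt
    k_service = _hmac(k_region, service)   # e.g. ec2
    return _hmac(k_service, "aws4_request")

key = sigv4_signing_key("EXAMPLE-SECRET-KEY", "20141023", "eu-central-1", "ec2")
```

In practice the AWS SDKs and the CLI handle all of this for you; updating them is all that is needed.
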

AWS Offices in Europe
In order to support enterprises, government agencies, academic institutions, small-to-mid size companies, startups, and developers, there are AWS offices in Germany (Berlin, Munich), the UK (London), Ireland (Dublin), France (Paris), Luxembourg (Luxembourg City), Spain (Madrid), Sweden (Stockholm), and Italy (Milan).

Use it Now
This new Region is open for business now and you can start using it today!

Jeff;

PS – Like our US West (Oregon) and AWS GovCloud (US) Regions, this region uses carbon-free power!

New AWS Directory Service

Virtually every organization uses a directory service such as Active Directory to allow computers to join domains, list and authenticate users, and locate and connect to printers and other network services, including SQL Server databases. A centralized directory reduces the amount of administrative work that must be done when an employee joins the organization, changes roles, or leaves.

With the advent of cloud-based services, an interesting challenge has arisen. By design, the directory is intended to be a central source of truth with regard to user identity. Administrators should not have to maintain one directory service for on-premises users and services, and a separate, parallel one for the cloud. Ideally, on-premises and cloud-based services could share and make use of a single, unified directory service.

Perhaps you want to run Microsoft Windows on EC2 or centrally control access to AWS applications such as Amazon WorkSpaces or Amazon Zocalo. Setting up and then running a directory can be a fairly ambitious undertaking once you take into account the need to procure and run hardware, install, configure, and patch the operating system and the directory software, and so forth. This might be overkill if you have a user base of modest size and just want to use the AWS applications and exercise centralized control over users and permissions.

The New AWS Directory Service
Today we are introducing the AWS Directory Service to address these challenges! This managed service provides two types of directories. You can connect to an existing on-premises directory or you can set up and run a new, Samba-based directory in the Cloud.

If your organization already has a directory, you can now make use of it from within the cloud using the AD Connector directory type. This is a gateway technology that serves as a cloud proxy to your existing directory, without the need for complex synchronization technology or federated sign-on. All communication between the AWS Cloud and your on-premises directory takes place over AWS Direct Connect or a secure VPN connection within an Amazon Virtual Private Cloud. The AD Connector is easy to set up (just a few parameters) and needs very little in the way of operational care and feeding. Once configured, your users can use their existing credentials (user name and password, with optional RADIUS authentication) to log in to WorkSpaces, Zocalo, EC2 instances running Microsoft Windows, and the AWS Management Console. The AD Connector is available in two sizes: Small (up to 10,000 users, computers, groups, and other directory objects) and Large (up to 100,000 users, computers, groups, and other directory objects).

If you don’t currently have a directory and don’t want to be bothered with all of the care and feeding that’s traditionally been required, you can quickly and easily provision and run a Samba-based directory in the cloud using the Simple AD directory type. This directory supports most of the common Active Directory features, including joins to Windows domains, management of Group Policies, and single sign-on to directory-powered apps. EC2 instances that run Windows can join domains and can be administered en masse using Group Policies for consistency. Amazon WorkSpaces and Amazon Zocalo can make use of the directory. Developers and system administrators can use their directory credentials to sign in to the AWS Management Console in order to manage AWS resources such as EC2 instances or S3 buckets.

Getting Started
Regardless of the directory type that you choose, getting started is quick and easy. Keep in mind, of course, that you are setting up an important piece of infrastructure and choose your names and passwords accordingly. Let’s walk through the process of setting up each type of directory.

I can create an AD Connector as a cloud-based proxy to an existing Active Directory running within my organization. I’ll have to create a VPN connection from my Virtual Private Cloud to my on-premises network, making use of AWS Direct Connect if necessary. Then I will need to create an account with sufficient privileges to allow it to handle lookup, authentication, and domain join requests. I’ll also need the DNS name of the existing directory. With that information in hand, creating the AD Connector is a simple matter of filling in a form:

I also have to provide it with information about my VPC, including the subnets where I’d like the directory servers to be hosted:

The AD Connector will be up & running and ready to use within minutes!

Creating a Simple AD in the cloud is also very simple and straightforward. Again, I need to choose one of my VPCs and then pick a pair of subnets within it for my directory servers:

Again, the Simple AD will be up, running, and ready for use within minutes.
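In API terms, the two directory types correspond to two calls: ConnectDirectory for an AD Connector and CreateDirectory for a Simple AD (both exposed through boto3's ds client). A sketch of the parameter shapes, where every identifier is a placeholder:

```python
def ad_connector_params(name, password, size, vpc_id, subnet_ids,
                        dns_ips, service_account):
    """Parameters for the ConnectDirectory API (AD Connector)."""
    return {
        "Name": name,                  # DNS name of the existing directory
        "Password": password,          # password for the service account
        "Size": size,                  # "Small" or "Large"
        "ConnectSettings": {
            "VpcId": vpc_id,
            "SubnetIds": subnet_ids,   # two subnets, in different AZs
            "CustomerDnsIps": dns_ips, # DNS servers of the on-premises directory
            "CustomerUserName": service_account,
        },
    }

def simple_ad_params(name, password, size, vpc_id, subnet_ids):
    """Parameters for the CreateDirectory API (Simple AD)."""
    return {
        "Name": name,
        "Password": password,          # for the directory administrator account
        "Size": size,
        "VpcSettings": {"VpcId": vpc_id, "SubnetIds": subnet_ids},
    }

connector = ad_connector_params("corp.example.com", "placeholder", "Small",
                                "vpc-12345678",
                                ["subnet-aaaa1111", "subnet-bbbb2222"],
                                ["10.0.0.10"], "connector-svc")
simple = simple_ad_params("corp.example.com", "placeholder", "Small",
                          "vpc-12345678",
                          ["subnet-aaaa1111", "subnet-bbbb2222"])
```
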

Managing Directories
Let’s take a look at the management features that are available for the AD Connector and Simple AD. The Console shows me a list of all of my directories:

I can dive into the details with a click. As you can see at the bottom of this screen, I can also create a public endpoint for my directory. This will allow it to be used for sign-in to AWS applications such as Zocalo and WorkSpaces, and to the AWS Management Console:

I can also configure the AWS applications and the Console to use the directory:

I can also create, restore, and manage snapshot backups of my Simple AD (backups are done automatically every 24 hours; I can also initiate a manual backup at any desired time):

Get Started Today
Both types of directory are available now and you can start creating and using them today in the US East (Northern Virginia), US West (Oregon), Asia Pacific (Sydney), Asia Pacific (Tokyo), and EU (Ireland) Regions. Prices start at $0.05 per hour for Small directories of either type and $0.15 per hour for Large directories of either type in the US East (Northern Virginia) Region. See the AWS Directory Service page for pricing information in the other AWS Regions.

Jeff;

CloudWatch Update – Enhanced Support for Windows Log Files

Earlier this year, we launched a log storage and monitoring feature for Amazon CloudWatch. As a quick recap, this feature allows you to upload log files from your Amazon Elastic Compute Cloud (EC2) instances to CloudWatch, where they are stored durably and easily monitored for specific symbols or messages.

The EC2Config service runs on Microsoft Windows instances on EC2 and takes on a number of important tasks. For example, it is responsible for uploading log files to CloudWatch. Today we are enhancing this service with support for Windows Performance Counter data and ETW (Event Tracing for Windows) logs. We are also adding support for custom log files.

In order to use this feature, you must enable CloudWatch logs integration and then tell it which files to upload. You can do this from the instance by running EC2Config and checking Enable CloudWatch Logs integration:

The file %PROGRAMFILES%\Amazon\Ec2ConfigService\Settings\AWS.EC2.Windows.CloudWatch.json specifies the files to be uploaded.
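That file is JSON; the exact component names and parameters are spelled out in the documentation referenced below, but the overall shape is an EngineConfiguration holding a poll interval, a list of input and output Components, and Flows that wire inputs to outputs. A sketch of that shape (the FullName values and parameter names here are simplified placeholders, not the literal strings the service expects):

```python
import json

config = {
    "EngineConfiguration": {
        "PollInterval": "00:00:15",  # how often to poll the log sources
        "Components": [
            # An input component: which local log file to read.
            {"Id": "MyAppLog",
             "FullName": "PLACEHOLDER-custom-log-input-component",
             "Parameters": {"LogDirectoryPath": "C:\\MyApp\\Logs"}},
            # An output component: where the entries go in CloudWatch Logs.
            {"Id": "CloudWatchLogs",
             "FullName": "PLACEHOLDER-cloudwatch-logs-output-component",
             "Parameters": {"Region": "us-east-1",
                            "LogGroup": "MyAppLogGroup",
                            "LogStream": "{instance_id}"}},
        ],
        # Flows pair an input Id with an output Id.
        "Flows": {"Flows": ["MyAppLog,CloudWatchLogs"]},
    }
}

serialized = json.dumps(config, indent=2)
```
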

To learn more about how this feature works and how to configure it, head on over to the AWS DevOps Blog and read about Using CloudWatch Logs with Amazon EC2 Running Microsoft Windows Server.

Jeff;

Container Computing and AWS

Big changes in the technology world seem to come about in two ways. Sometimes there’s a big splashy announcement and a visible public leap into the future. Most of the time, however, change is a bit more subtle. Early adopters find a new technology that makes them more productive and share it amongst themselves. Over time the news spreads to others. At some point the once-new technology suddenly (for those who haven’t been paying attention) seems to have become popular overnight! This adoption model can be seen in the recent growth in the popularity of container computing, exemplified by the rising awareness of Docker. Containers are lightweight, portable, and self-sufficient. Even better, they can be run in a wide variety of environments. You can, if you’d like, build and test a container locally and then deploy it to Amazon Elastic Compute Cloud (EC2) for production.

Benefits of Container Computing
Let’s take a closer look at some of the benefits that accrue when you create your cloud-based application as a collection of containers, each specified declaratively and mapped to a single, highly specific aspect of your architecture:

  • Consistency & Fidelity – There’s nothing worse than creating something that works great in a test environment yet fails or runs inconsistently when moved to production. When you are building and releasing code in an agile fashion, wasting time debugging issues that arise from differences between environments is a huge barrier to productivity. The declarative, all-inclusive packaging model used by Docker gives you the power to enumerate your application’s dependencies. Your application will have access to the same libraries and utilities, regardless of where it is running.
  • Distributed Application Platform – If you build your application as a set of distributed services, each in a Docker container running on CoreOS, they can easily find and connect to each other, perhaps with the aid of a scheduler like Mesosphere. This will allow you to deploy and then easily scale containers across a “grid” of EC2 instances.
  • Development Efficiency – Building your application as a collection of tight, focused containers allows you to build them in parallel with strict, well-defined interfaces. With better interfaces between moving parts, you have the freedom to improve and even totally revise implementations without fear of breaking running code. Because your application’s dependencies are spelled out explicitly and declaratively, less time will be lost diagnosing, identifying, and fixing issues that arise from missing or obsolete packages.
  • Operational Efficiency – Using containers allows you to build components that run in isolated environments (limiting the ability of one container to accidentally disrupt the operation of another) while still being able to cooperatively share libraries and other common resources. This opportunistic sharing reduces memory pressure and leads to increased runtime efficiency. If you are running on EC2 (Docker is directly supported on the Amazon Linux AMI and on AWS Elastic Beanstalk, and can easily be used with AWS OpsWorks), you can achieve isolation without running each component on a separate instance. Containers are not a replacement for instances; they are destined to run on them!

Container Computing Resources
In order to prepare to write this post, I spent some time reading up on container computing and Docker. Here are the articles, blog posts, and videos that I liked the best:

Moving Forward
I am really excited by container computing and hope that you are as well. Please feel free to share additional resources and success stories with me and I’ll update this post and our new page accordingly.

Jeff;

Setup Enhancements for AWS Management Portal for vCenter

My colleague Derek Lyon sent along a great guest post to introduce some important enhancements to the AWS Management Portal for vCenter.

Jeff;


We have recently added a number of new features to the AWS Management Portal for vCenter. These enhancements make it significantly easier for VMware professionals to set up the portal and start managing their AWS resources using their vSphere Client.

New Federation Proxy Option
We recently added a new setup option that significantly reduces the complexity of setting up the portal. You now have the option to use the portal without having to set up SAML integration yourself. To do this, you can use the AWS Connector as an authentication proxy. This provides an easy way to offer end users federated access to your AWS resources via the portal. With the proxy option, your end users will access the portal using the same credentials they use to log in to vCenter, with support for both system domain users and directory users.

Previously, the portal only supported SAML-based authentication. This required you to set up Active Directory Federation Services (ADFS) or an equivalent SAML-based identity provider (IdP) for federating identity into AWS. SAML-based authentication remains a powerful tool for customers who want to manage their own single sign-on (SSO) infrastructure. However, it can also be challenging to set up if you are not familiar with these technologies, or if you do not already have a compatible IdP configured.

Now you have an alternative option. You can choose to have the AWS Connector act as an identity federation proxy. When you use this option, you eliminate the complexity that comes with configuring the single sign-on infrastructure yourself. This is significantly simpler to set up and will provide the best experience for customers who do not wish to manage their own IdP.

To set up the portal using the new federation proxy option, begin by visiting the AWS Management Portal for vCenter setup page.

After you click on Get started now, you will be asked to pick the authentication provider that you would like to use. To use the new option, select AWS Connector as the authentication provider.

Next, you will need to provide the name of an IAM user that the AWS Connector will be able to use to access your account. You will be asked to authorize the AWS Management Portal for vCenter to create a trust role and service role, which it will use to authenticate users and to grant permission for users to take actions in your account when they use the portal. Because you have selected to use the federation proxy setup, AWS will handle the complexity of setting up the underlying trust relationships for you, as opposed to the SAML-based setup process, where you need to configure these yourself. For more information on this portion of the setup process, please see the portal’s User Guide.

Next, you will add a set of users to act as Administrators for the AWS resources that you are managing through the portal. You will also create a key that will be used to pair your AWS Connector with your account. To complete the setup process, you will also need to deploy and configure the AWS Connector. You can learn more about that process from the User Guide.

Reset Configuration
We have also added a new option within the setup process to reset the portal’s configuration. If you have previously set up the portal using SAML and would like to switch to the new federation proxy option, or if you would like to start the setup process over again from a clean slate, you can use this tool to reset your configuration. After you reset the configuration, you will need to redo the setup process in order to use the portal again.

Manage Existing Instances
We have also recently added support for managing your existing Amazon Elastic Compute Cloud (EC2) instances using the AWS Management Portal for vCenter. If you are already using AWS and are looking to add the ability to manage your instances through the portal, this makes it easy to keep track of all of your instances, whether or not you created them through the portal.

Existing EC2 instances now show up under your Default Environment in the portal’s dashboard. As with other instances, you can perform basic administrative tasks on your existing instances, including starting and stopping them, terminating them, and viewing monitoring information.

You can also manage permissions for the Default Environment, just like you do today for other environments. Simply click on the environment and navigate to the Permissions tab to manage which users have access to your existing instances.

Getting Started
If you’re looking to get started with the AWS Management Portal for vCenter and want to take advantage of the new setup features, you can learn more in the User Guide.

— Derek Lyon, Principal Product Manager

EC2 Maintenance Update II

I’d like to give you an update on the EC2 Maintenance announcement that I posted last week. Late yesterday (September 30th), we completed a reboot of less than 10% of the EC2 fleet to protect you from any security risks associated with the Xen Security Advisory (XSA-108).

This Xen Security Advisory was embargoed until a few minutes ago; we were obligated to keep all information about the issue confidential until it was published. The Xen community (in which we are active participants) has designed a two-stage disclosure process that operates as follows:

  • Early disclosure to select organizations (a list maintained and regularly evaluated by the Xen Security Team based on a set of public criteria established by the Xen Project community) with a limited time to make accommodations and apply updates before it becomes widely known.
  • Full disclosure to everyone on the public disclosure date.

Because our customers’ security is our top priority and because the issue was potentially harmful to our customers, we needed to take fast action to protect them. For the reasons mentioned above, we couldn’t be as expansive as we’d have liked about why we had to take such fast action.

The zone-by-zone reboots were completed as planned, and we worked very closely with our customers to ensure that the reboots went smoothly for them.

We’ll continue to be vigilant and will do our best to protect all AWS customers from similar issues in the future. As an AWS user, you may also want to take this opportunity to re-examine your AWS architecture to look for possible ways to make it even more fault-tolerant. Here are a few suggestions to get you started:

  • Run instances in two or more Availability Zones.
  • Pay attention to your Inbox and to the alerts on the AWS Management Console. Make sure that you fill in the “Alternate Contacts” in the AWS Billing Console.
  • Review the personalized assessment of your architecture in the Trusted Advisor, then open up AWS Support Cases to get engineering assistance as you implement architectural best practices.
  • Use Chaos Monkey to induce various kinds of failures in a controlled environment.
  • Examine and consider expanding your use of Amazon Route 53 and Elastic Load Balancing checks to ensure that web traffic is routed to healthy instances.
  • Use Auto Scaling to keep a defined number of healthy instances up and running.
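The last two suggestions combine naturally: an Auto Scaling group spanning two Availability Zones, with the load balancer's health checks deciding which instances are healthy. A sketch of the CreateAutoScalingGroup parameters (all resource names below are placeholders):

```python
def resilient_asg_params(name, launch_config, zones, min_size=2):
    """Parameters for the Auto Scaling CreateAutoScalingGroup API:
    spread instances across Availability Zones and let the attached
    load balancer's health checks drive instance replacement."""
    return {
        "AutoScalingGroupName": name,
        "LaunchConfigurationName": launch_config,
        "MinSize": min_size,              # Auto Scaling replaces unhealthy instances
        "MaxSize": max(min_size, 4),
        "AvailabilityZones": zones,       # two or more AZs
        "LoadBalancerNames": ["my-elb"],  # placeholder ELB name
        "HealthCheckType": "ELB",         # use the ELB health check, not just EC2 status
        "HealthCheckGracePeriod": 300,    # seconds before checks count after launch
    }

params = resilient_asg_params("web-asg", "web-lc", ["us-east-1a", "us-east-1b"])
```
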

You should also consult our Overview of Security Practices whitepaper for more information around AWS and security.

Jeff;

New Quick Start – Windows PowerShell Desired State Configuration (DSC) on AWS

PowerShell Desired State Configuration (DSC) is a powerful tool for system administrators. Introduced as part of Windows Management Framework 4.0, it helps to automate system setup and maintenance for Windows Server 2008 R2, Windows Server 2012 R2, Windows 7, Windows 8.1, and Linux environments. It can install or remove server roles and features, and manage registry settings, environment variables, files, directories, services, and processes. It can also manage local users and groups, install and manage MSI and EXE packages, and run PowerShell scripts. DSC can discover the system configuration on a given instance, and it can also fix a configuration that has drifted away from the desired state.

We have just published a new Quick Start Reference Deployment to make it easier for you to take advantage of PowerShell Desired State Configuration in your AWS environment.

This new document will show you how to:

  • Use AWS CloudFormation and PowerShell DSC to bootstrap your servers and applications from scratch.
  • Deploy a highly available PowerShell DSC pull server environment on AWS.
  • Detect and remedy configuration drift after your application stack has been deployed.

This detailed (24 page) document contains all of the information that you will need to get started. The deployed pull server is robust and fault-tolerant; it includes a pair of web servers and Active Directory domain controllers. It can be accessed from on-premises devices and from instances running in the AWS Cloud.

Jeff;