

Deploy an Amazon ECS Cluster Running Windows Server with AWS Tools for PowerShell – Part 1

This is a guest post from Trevor Sullivan, a Seattle-based Solutions Architect at Amazon Web Services (AWS). In this blog post, Trevor shows you how to deploy a Windows Server-based container cluster using the AWS Tools for PowerShell.

Building and deploying applications on the Windows Server platform is becoming a significantly lighter-weight process. Although you might be accustomed to developing monolithic applications on Windows, and scaling vertically instead of horizontally, you might want to rethink how you design and build Windows-based apps. Now that application containers are a native feature in the Windows Server platform, you can design your applications in a similar fashion to how developers targeting the Linux platform have designed them for years.

Throughout this blog post, we explore how to automate the deployment of a Windows Server container cluster, managed by the Amazon EC2 Container Service (Amazon ECS). The architecture of the cluster can help you efficiently and cost-effectively scale your containerized application components.

Due to the large amount of information contained in this blog post, we’ve separated it into two parts. Part 1 guides you through the process of deploying an ECS cluster. Part 2 covers the creation and deployment of your own custom container images to the cluster.

Assumptions

In this blog post, we assume the following:

  • You’re using a Windows, Mac, or Linux system with PowerShell Core installed and you’ve installed the AWS Tools for PowerShell Core.
  • Or you’re using a Windows 7 or later system, with Windows PowerShell 4.0 or later installed (use $PSVersionTable to check your PowerShell version) and have installed the AWS Tools for PowerShell (aka AWSPowerShell module).
  • You already have access to an AWS account.
  • You’ve created an AWS Identity and Access Management (IAM) user account with administrative policy permissions, and generated an access key ID and secret key.
  • You’ve configured your AWS access key ID and secret key in your ~/.aws/credentials file.

This article was authored, and the code tested, on a MacBook Pro with PowerShell Core Edition and the AWSPowerShell.NetCore module.

NOTE: In the PowerShell code examples, I’m using a technique known as PowerShell Splatting to pass parameters into various PowerShell commands. Splatting syntax helps your PowerShell code look cleaner, enables auditing of parameter values prior to command invocation, and improves code maintenance.
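For example, here's a minimal sketch contrasting an inline call with the splatted equivalent, using the New-ECSCluster command that appears in the next section:

### Inline parameters
New-ECSCluster -ClusterName ECSRivendell

### The same call with splatting: collect parameters in a hashtable, then pass it with @
$Cluster = @{
    ClusterName = 'ECSRivendell'
}
New-ECSCluster @Cluster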

Create your Amazon ECS cluster

The first task is to create an empty ECS cluster. Right now, the ECS beta support for Windows Server containers doesn't enable you to provision the cluster's compute layer at cluster creation time. In a moment, you'll provision some Windows Server-based compute capacity on Amazon EC2 and associate it with your empty ECS cluster.

To create the empty ECS cluster with the AWS Tools for PowerShell, use the following command.

New-ECSCluster -ClusterName ECSRivendell
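If you want to confirm that the empty cluster exists before moving on, a quick check like this should do it (Get-ECSClusterList maps to the ECS ListClusters API):

### List the ARNs of all ECS clusters in the current region
Get-ECSClusterList

### Status should come back as ACTIVE for the new cluster
(Get-ECSClusterDetail -Cluster ECSRivendell).Clusters.Status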

Set up an IAM role for Amazon EC2 instances

Amazon EC2 instances can have IAM “roles” assigned to them. By associating one or more IAM policies with an IAM role, which is assigned to an EC2 instance by way of an “instance profile”, you can grant access to various services in AWS directly to the EC2 instance, without having to embed and manage any credentials. AWS handles that for you, via the IAM role. Whenever the EC2 instance is running, it has access to the IAM role that’s assigned to it, and can make calls to various AWS APIs that it’s authorized to call via associated IAM policies.

Before you actually deploy the Windows Server compute layer for your ECS cluster, you must first perform the following preparation steps:

  • Create an IAM policy that defines the required permissions for ECS container instances.
  • Create an IAM role.
  • Register the IAM policy with the role.
  • Create an IAM instance profile, which will be associated with your EC2 instances.
  • Associate the IAM role with the instance profile.

Create an IAM policy for your EC2 container instances

First, you use PowerShell to create the IAM policy. I simply copied and pasted the IAM policy that’s documented in the Amazon ECS documentation. This IAM policy will be associated with the IAM role that will be associated with your EC2 container instances, which actually run your containers via ECS. This policy is especially important, because it grants your container instances access to Amazon EC2 Container Registry (Amazon ECR). This is a private storage area for your container images, similar to the Docker Store.

$IAMPolicy = @{
    Path = '/ECSRivendell/'
    PolicyName = 'ECSRivendellInstanceRole'
    Description = 'This policy is used to grant EC2 instances access to ECS-related API calls. It enables EC2 instances to push and pull from ECR, and most importantly, register with our ECS Cluster.'
    PolicyDocument = @'
{
    "Version": "2012-10-17",
    "Statement": [
        {
        "Effect": "Allow",
        "Action": [
            "ecs:CreateCluster",
            "ecs:DeregisterContainerInstance",
            "ecs:DiscoverPollEndpoint",
            "ecs:Poll",
            "ecs:RegisterContainerInstance",
            "ecs:StartTelemetrySession",
            "ecs:Submit*",
            "ecr:GetAuthorizationToken",
            "ecr:BatchCheckLayerAvailability",
            "ecr:GetDownloadUrlForLayer",
            "ecr:BatchGetImage",
            "logs:CreateLogStream",
            "logs:PutLogEvents"
        ],
        "Resource": "*"
        }
    ]
    }
'@
}
$NewIAMPolicy = New-IAMPolicy @IAMPolicy

Create a container instance IAM role

The next part is easy. You just need to create an IAM role. This role has a name, optional path, optional description and, the important part, an “AssumeRolePolicyDocument”. This is also known as the IAM “trust policy”. This IAM trust policy enables the EC2 instances to use or “assume” the IAM role, and use its policies to access AWS APIs. Without this trust policy in place, your EC2 instances won’t be able to use this role and the permissions granted to it by the IAM policy.

$IAMRole = @{
    RoleName = 'ECSRivendell'
    Path = '/ECSRivendell/'
    Description = 'This IAM role grants the container instances that are part of the ECSRivendell ECS cluster access to various AWS services, required to operate the ECS Cluster.'
    AssumeRolePolicyDocument = @'
{
    "Version": "2008-10-17",
    "Statement": [
        {
        "Sid": "",
        "Effect": "Allow",
        "Principal": {
            "Service": "ec2.amazonaws.com"
        },
        "Action": "sts:AssumeRole"
        }
    ]
    }
'@
}
$NewIAMRole = New-IAMRole @IAMRole

Register the IAM policy with the IAM role

Now you simply need to associate the IAM policy that you created with the IAM role. This association is very easy to make with PowerShell, using the Register-IAMRolePolicy command. If necessary, you can associate more than one policy with your IAM roles that grant or deny additional permissions.

$RolePolicy = @{
    PolicyArn = $NewIAMPolicy.Arn
    Role = $NewIAMRole.RoleName
}
Register-IAMRolePolicy @RolePolicy
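To double-check that the policy is now attached to the role, you can list the role's attached managed policies. A small sketch, assuming the Get-IAMAttachedRolePolicyList cmdlet, which maps to the IAM ListAttachedRolePolicies API:

### Should list the ECSRivendellInstanceRole policy registered above
Get-IAMAttachedRolePolicyList -RoleName $NewIAMRole.RoleName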

Create an IAM instance profile

With the IAM role prepared, you now need to create the IAM instance profile. The IAM instance profile is the actual object that gets associated with our EC2 instances. This IAM instance profile ultimately grants the EC2 instance permission to call AWS APIs directly, without any stored credentials.

$InstanceProfile = @{
    InstanceProfileName = 'ECSRivendellInstanceRole'
    Path = '/ECSRivendell/'
}
$NewInstanceProfile = New-IAMInstanceProfile @InstanceProfile

Associate the IAM role with the instance profile

Finally, you need to associate your IAM role with the instance profile.

$RoleInstanceProfile = @{
    InstanceProfileName = $NewInstanceProfile.InstanceProfileName
    RoleName = $NewIAMRole.RoleName
}
$null = Add-IAMRoleToInstanceProfile @RoleInstanceProfile

Add EC2 compute capacity for Windows

Now that you’ve created an empty ECS cluster and have pre-staged your IAM configuration, you need to add some EC2 compute instances running Windows Server to it. These are the actual Windows Server instances that will run containers (tasks) via the ECS cluster scheduler.

To provision EC2 capacity in a frugal and scalable fashion, we create an EC2 Auto Scaling group that uses Spot instances. Spot instances are one of my favorite features in AWS, because you can provision a significant amount of compute capacity at a steep discount by bidding on unused capacity. There is some risk to design around, as Spot instances can be terminated or stopped if the Spot price exceeds your bid price. However, depending on your needs, you can run powerful applications on EC2 for a low price using Spot instances.

Create the Auto Scaling launch configuration

Before you can create the Auto Scaling group itself, you need to create what’s known as an Auto Scaling “launch configuration”. The launch configuration is essentially a blueprint for Auto Scaling groups. It stores a variety of input parameters that define how new EC2 instances will look every time the Auto Scaling group scales up, and spins up a new instance. For example, you need to specify:

  • The Amazon Machine Image (AMI) that instances will be launched from.
    • NOTE: Be sure you use the Microsoft Windows Server 2016 Base with Containers image for Windows Server container instances.
  • The EC2 instance type (size, vCPUs, memory).
  • Whether to enable or disable public IP addresses for EC2 instances.
  • The bid price for Spot instances (optional, but recommended).
  • The EC2 User Data – PowerShell script to bootstrap instance configuration.

Although the code snippet below might look a little scary, it basically contains an EC2 user data script, which I copied directly from the AWS beta documentation for Windows. This automatically installs the ECS container agent into new instances that are brought online by your Auto Scaling Group, and registers them with your ECS cluster.

$ClusterName = 'ECSRivendell'
$UserDataScript = @'

## The string 'YOURCLUSTERNAME' is replaced with your cluster name by the -replace at the end of this here-string

# Set agent env variables for the Machine context (durable)
[Environment]::SetEnvironmentVariable("ECS_CLUSTER", "YOURCLUSTERNAME", "Machine")
[Environment]::SetEnvironmentVariable("ECS_ENABLE_TASK_IAM_ROLE", "true", "Machine")
$agentVersion = 'v1.14.5'
$agentZipUri = "https://s3.amazonaws.com/amazon-ecs-agent/ecs-agent-windows-$agentVersion.zip"
$agentZipMD5Uri = "$agentZipUri.md5"


### --- Nothing user configurable after this point ---
$ecsExeDir = "$env:ProgramFiles\Amazon\ECS"
$zipFile = "$env:TEMP\ecs-agent.zip"
$md5File = "$env:TEMP\ecs-agent.zip.md5"

### Get the files from Amazon S3
Invoke-RestMethod -OutFile $zipFile -Uri $agentZipUri
Invoke-RestMethod -OutFile $md5File -Uri $agentZipMD5Uri

## MD5 Checksum
$expectedMD5 = (Get-Content $md5File)
$md5 = New-Object -TypeName System.Security.Cryptography.MD5CryptoServiceProvider
$actualMD5 = [System.BitConverter]::ToString($md5.ComputeHash([System.IO.File]::ReadAllBytes($zipFile))).replace('-', '')

if($expectedMD5 -ne $actualMD5) {
    echo "Download doesn't match hash."
    echo "Expected: $expectedMD5 - Got: $actualMD5"
    exit 1
}

## Put the executables in the executable directory
Expand-Archive -Path $zipFile -DestinationPath $ecsExeDir -Force

## Start the agent script in the background
$jobname = "ECS-Agent-Init"
$script =  "cd '$ecsExeDir'; .\amazon-ecs-agent.ps1"
$repeat = (New-TimeSpan -Minutes 1)

$jobpath = $env:LOCALAPPDATA + "\Microsoft\Windows\PowerShell\ScheduledJobs\$jobname\ScheduledJobDefinition.xml"
if($(Test-Path -Path $jobpath)) {
    echo "Job definition already present"
    exit 0

}

$scriptblock = [scriptblock]::Create("$script")
$trigger = New-JobTrigger -At (Get-Date).Date -RepeatIndefinitely -RepetitionInterval $repeat -Once
$options = New-ScheduledJobOption -RunElevated -ContinueIfGoingOnBattery -StartIfOnBattery
Register-ScheduledJob -Name $jobname -ScriptBlock $scriptblock -Trigger $trigger -ScheduledJobOption $options -RunNow
Add-JobTrigger -Name $jobname -Trigger (New-JobTrigger -AtStartup -RandomDelay 00:1:00)

true
'@ -replace 'YOURCLUSTERNAME', $ClusterName

$UserDataBase64 = [System.Convert]::ToBase64String(([Byte[]][Char[]] $UserDataScript))

# Create a block device mapping to increase the size of the root volume
$BlockDevice = [Amazon.AutoScaling.Model.BlockDeviceMapping]::new()
$BlockDevice.DeviceName = '/dev/sda1'
$BlockDevice.Ebs = [Amazon.AutoScaling.Model.Ebs]::new()
$BlockDevice.Ebs.DeleteOnTermination = $true
$BlockDevice.Ebs.VolumeSize = 200
$BlockDevice.Ebs.VolumeType = 'gp2'

$LaunchConfig = @{
    LaunchConfigurationName = 'ECSRivendell'
    AssociatePublicIpAddress = $true
    EbsOptimized = $true
    BlockDeviceMapping = $BlockDevice
    InstanceType = 'r4.large'
    SpotPrice = '0.18'
    InstanceMonitoring_Enabled = $true
    IamInstanceProfile = 'ECSRivendellInstanceRole'
    ImageId = 'ami-6a887b12'
    UserData = $UserDataBase64
}
$NewLaunchConfig = New-ASLaunchConfiguration @LaunchConfig

Be sure you give your EC2 container instances enough storage to cache container images over time. In my example, I’m instructing the launch configuration to grant a 200 GB Amazon EBS root volume to each EC2 instance that is deployed into the Auto Scaling group, once it’s set up.
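Also note that the ImageId above is hard-coded; AMI IDs differ by region and change as Amazon revises the images. Rather than hard-coding it, you can look up the current Windows Server 2016 with Containers AMI at runtime. The following is a rough sketch; the 'Windows_Server-2016-English-Full-Containers-*' name pattern is an assumption on my part, so verify it against the AMIs available in your region.

### Find the newest Windows Server 2016 with Containers AMI in the current region
$ImageFilter = [Amazon.EC2.Model.Filter]::new()
$ImageFilter.Name = 'name'
$ImageFilter.Values = @('Windows_Server-2016-English-Full-Containers-*')  # assumed name pattern

$LatestImage = Get-EC2Image -Owner amazon -Filter $ImageFilter |
    Sort-Object -Property CreationDate -Descending |
    Select-Object -First 1
$LatestImage.ImageId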

Create the Auto Scaling group

Now that you’ve set up the Auto Scaling launch configuration, you can deploy a new Auto Scaling group from it. Once created, the Auto Scaling group actually submits the EC2 Spot request for the desired numbers of EC2 instances when they’re needed. All you have to do is tell the Auto Scaling group how many instances you want, and it handles spinning them up or down.

Optionally, you can even set up Auto Scaling policies so that you don’t have to manually scale the cluster. We’ll save the topic of Auto Scaling policies for another article, however, and instead focus on setting up the ECS cluster.

The PowerShell code to create the Auto Scaling group, from the launch configuration that you built, is included below. Keep in mind that although I’ve tried to keep the code snippets fairly generic, this particular code snippet does have a hard-coded command that retrieves all of your Amazon Virtual Private Cloud (VPC) subnet IDs dynamically. It should work fine, as long as you only have a single VPC in the region you’re currently operating in. If you have more than one VPC, you’ll want to filter out the list of subnets from the Get-EC2Subnet command that are contained in the desired target VPC.

$AutoScalingGroup = @{
    AutoScalingGroupName = 'ECSRivendell'
    LaunchConfigurationName = 'ECSRivendell'
    MinSize = 2  
    MaxSize = 5
    DesiredCapacity = 3
    VPCZoneIdentifier = [String]::Join(',', (Get-EC2Subnet).SubnetId)
    Tag = $( $Tag = [Amazon.AutoScaling.Model.Tag]::new()
             $Tag.PropagateAtLaunch = $true; $Tag.Key = 'Name'; $Tag.Value = 'ECS: ECSRivendell'; $Tag )
}
$NewAutoScalingGroup = New-ASAutoScalingGroup @AutoScalingGroup

After running this code snippet, your Auto Scaling Group will take a few minutes to deploy. Additionally, it can sometimes take 15-30 minutes for the ECS container agent, running on each of the EC2 instances, to finish registering with the ECS cluster. Once you’ve waited a little while, check out the EC2 instance area of the AWS Management Console and examine your shiny new container instances!
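If you'd rather check from PowerShell than the console, a quick sketch like the following lists the instances that the Auto Scaling group has launched:

### Show the instances launched by the ECSRivendell Auto Scaling group
(Get-ASAutoScalingGroup -AutoScalingGroupName ECSRivendell).Instances |
    Select-Object -Property InstanceId, LifecycleState, HealthStatus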

You can also visit the ECS area of the AWS Management Console, and examine the number of container instances that are registered with the ECS cluster. You now have a functioning ECS cluster!

You can even use the Get-ECSClusterDetail command in PowerShell to examine the current count of container instances.

PS /Users/tsulli> (Get-ECSClusterDetail -Cluster ECSRivendell).Clusters

ActiveServicesCount               : 0
ClusterArn                        : arn:aws:ecs:us-west-2:676655494252:cluster/ECSRivendell
ClusterName                       : ECSRivendell
PendingTasksCount                 : 0
RegisteredContainerInstancesCount : 3
RunningTasksCount                 : 0
Status                            : ACTIVE

Conclusion

In this article, you set up an EC2 Container Service (ECS) cluster with several EC2 instances running Windows Server registered to it. Prior to that, you also set up the IAM policy, IAM role, and instance profile that are necessary prerequisites to ensure that the ECS cluster operates correctly. You used the AWS Tools for PowerShell to accomplish this automation, and learned about key ECS and EC2 Auto Scaling commands.

Although you have a functional ECS Cluster at this point, you haven’t yet deployed any containers (ECS tasks) to it. Keep an eye out for part 2 of this blog post, where you’ll explore building your own custom container images, pushing those container images up to Amazon EC2 Container Registry (Amazon ECR), and running ECS tasks and services.

Writing and Archiving Custom Metrics using Amazon CloudWatch and AWS Tools for PowerShell

This is a guest post from Trevor Sullivan, a Seattle-based Solutions Architect at Amazon Web Services (AWS). Since 2004, Trevor has worked intimately with Microsoft technologies, including PowerShell since its release in 2006. In this article, Trevor takes you through the process of using the AWS Tools for PowerShell to write and export metrics data from Amazon CloudWatch.

Amazon’s CloudWatch service is an umbrella that covers a few major areas: logging, metrics, charting, dashboards, alarms, and events.

I wanted to take a few minutes to cover the CloudWatch Metrics area, specifically as it relates to interacting with metrics from PowerShell. We’ll start off with a discussion and demonstration of how to write metric data into CloudWatch, then move on to how to find existing metrics in CloudWatch, and finally how to retrieve metric data points from a specific metric.

Amazon CloudWatch stores metrics data for up to 15 months. However, you can export data from Amazon CloudWatch into a long-term retention tool of your choice, depending on your requirements for metric data retention and the level of metric granularity you need. Although exported historical data can't be used inside CloudWatch after it ages out, you can use other AWS data analytics tools, such as Amazon QuickSight and Amazon Athena, to build reports against your historical data.

Assumptions

For this article, we assume that you have an AWS account. We also assume you understand PowerShell at a fundamental level, have installed PowerShell and the AWS Tools for PowerShell on your platform of choice, and have already set up your AWS credentials file and necessary IAM policies granting access to CloudWatch. We’ll discuss and demonstrate how to call various CloudWatch APIs from PowerShell, so be sure you’re prepared for this topic.

For more information, see the Getting Started guide for AWS Tools for PowerShell.

Write metric data into Amazon CloudWatch

Let’s start by talking about storing custom metrics in CloudWatch.

In the AWS Tools for PowerShell, there’s a command named Write-CWMetricData. This PowerShell command ultimately calls the PutMetricData API to write metrics to Amazon CloudWatch. It’s fairly easy to call this command, as there are only a handful of parameters. However, you should understand how CloudWatch works before attempting to use the command.

  • CloudWatch metrics are stored inside namespaces
  • Metric data points:
    • Must have a name.
    • May have zero or more dimensions.
    • May have a value, time stamp, and unit of measure (e.g., Bytes, BytesPerSecond, Count, and so on).
    • May specify a custom storage resolution (e.g., 1 second or 5 seconds; the default is 60 seconds).
  • In the AWS Tools for PowerShell, you construct one or more MetricDatum .NET objects, before passing these into Write-CWMetricData.

With that conceptual information out of the way, let’s look at the simplest way to create a custom metric. Writing metric data points into CloudWatch is how you create a metric. There isn’t a separate operation to create a metric and then write data points into it.

### First, we create one or more MetricDatum objects
$Metric = [Amazon.CloudWatch.Model.MetricDatum]::new()
$Metric.MetricName = 'UserCount'
$Metric.Value = 98

### Second, we write the metric data points to a CloudWatch metrics namespace
Write-CWMetricData -MetricData $Metric -Namespace trevortest/tsulli.loc

If you have lots of metrics to track, and you'd prefer to avoid cluttering up your top-level metric namespaces, this is where metric dimensions can come in handy. For example, let's say we want to track a "UserCount" metric for over 100 different Active Directory domains. We can store them all under a single namespace, but create a "DomainName" dimension on each metric, whose value is the name of each Active Directory domain.

Here’s a PowerShell code example that shows how to write a metric to CloudWatch, with a dimension. Although the samples we’ve looked at in this article show how to write a single metric, you should strive to reduce the number of disparate AWS API calls that you make from your application code. Try to consolidate the gathering and writing of multiple metric data points in the same PutMetricData API call, as an array of MetricDatum objects. Your application will perform better, with fewer HTTP connections being created and destroyed, and you’ll still be able to gather as many metrics as you want.

$Metric = [Amazon.CloudWatch.Model.MetricDatum]::new()
### Create a metric dimension, and set its name and value
$Dimension = [Amazon.CloudWatch.Model.Dimension]::new()
$Dimension.Name = 'DomainName'
$Dimension.Value = 'awstrevor.loc'

$Metric.MetricName = 'UserCount'
$Metric.Value = 76
### NOTE: Be sure you assign the Dimension object to the Dimensions property of the MetricDatum object
$Metric.Dimensions = $Dimension
Write-CWMetricData -MetricData $Metric -Namespace trevortest
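Following the advice above about consolidating API calls, here's a hedged sketch that builds several MetricDatum objects and writes them with a single Write-CWMetricData call. The domain names and values are placeholders for illustration.

### Build one MetricDatum per Active Directory domain, then write them all in one call
$Metrics = foreach ($Domain in @('awstrevor.loc', 'tsulli.loc')) {
    $Dimension = [Amazon.CloudWatch.Model.Dimension]::new()
    $Dimension.Name = 'DomainName'
    $Dimension.Value = $Domain

    $Metric = [Amazon.CloudWatch.Model.MetricDatum]::new()
    $Metric.MetricName = 'UserCount'
    $Metric.Value = Get-Random -Minimum 50 -Maximum 150  # placeholder value
    $Metric.Dimensions = $Dimension
    $Metric
}
Write-CWMetricData -MetricData $Metrics -Namespace trevortest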

Retrieve a list of metrics from Amazon CloudWatch

Now that we’ve written custom metrics to CloudWatch, let’s discuss how we search for metrics. Over time, you might find that you have thousands or tens of thousands of metrics in your AWS accounts, across various regions. As a result, it’s imperative that you know how to locate metrics relevant to your analysis project.

You can, of course, use the AWS Management Console to view metric namespaces, and explore the metrics and metric dimensions contained within each namespace. Although this approach will help you gain initial familiarity with the platform, you’ll most likely want to use automation to help you find relevant data within the platform. Automation is especially important when you introduce metrics across multiple AWS Regions and multiple AWS accounts, as they can be harder to find via a graphical interface.

In the AWS Tools for PowerShell, the Get-CWMetricList command maps to the AWS ListMetrics API. This returns a list of high-level information about the metrics stored in CloudWatch. If you have lots of metrics stored in your account, you might get back a very large list. Thankfully, PowerShell has some generic sorting and filtering commands that can help you find the metrics you’re seeking, with some useful filtering parameters on the Get-CWMetricList command itself.

Let’s explore a few examples of how to use this command.

Starting with the simplest example, we’ll retrieve a list of all the CloudWatch metrics from the current AWS account and region.

Get-CWMetricList

If the results of this command are a little overwhelming, that's okay. We can filter the returned metrics down to a specific metric namespace, using the -Namespace parameter.

Get-CWMetricList -Namespace AWS/Lambda

What if you don’t know which metric namespaces exist? PowerShell provides a useful command that enables you to filter for unique values.

(Get-CWMetricList).Namespace | Select-Object -Unique

If these results aren’t in alphabetical order, it might be hard to visually scan through them, so let’s sort them.

(Get-CWMetricList).Namespace | Select-Object -Unique | Sort-Object

Much better! Another option is to search for metrics based on a dimension key-value pair. It’s a bit more typing, but it’s a useful construct to search through thousands of metrics. You can even write a simple wrapper PowerShell function to make it easier to construct one of these DimensionFilter objects.

$Filter = [Amazon.CloudWatch.Model.DimensionFilter]::new()
$Filter.Name = 'DomainName'
$Filter.Value = 'tsulli.loc'
Get-CWMetricList -Dimension $Filter
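As an example of that wrapper idea, here's a small sketch of a helper function; New-CWDimensionFilter is a hypothetical name, not part of the AWS module.

function New-CWDimensionFilter {
    param (
        [Parameter(Mandatory)] [string] $Name,
        [string] $Value
    )
    # Build the DimensionFilter object the same way as above
    $Filter = [Amazon.CloudWatch.Model.DimensionFilter]::new()
    $Filter.Name = $Name
    if ($Value) { $Filter.Value = $Value }
    $Filter
}

### Usage: find metrics that have a DomainName dimension of tsulli.loc
Get-CWMetricList -Dimension (New-CWDimensionFilter -Name DomainName -Value 'tsulli.loc')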

If you know the name of a specific metric, you can query for a list of metrics that match that name. You might get back multiple results, if there are multiple metrics with the same name, but different dimensions exist in the same namespace. You can also have similarly named metrics across multiple namespaces, with or without dimensions.

Get-CWMetricList -MetricName UserCount

PowerShell’s built-in, generic Where-Object command is infinitely useful in finding metrics or namespaces, if you don’t know their exact, full name.

This example shows how to filter for any metric names that contain “User”.

Get-CWMetricList | Where-Object -FilterScript { $PSItem.MetricName -match 'User' }

Filtering metrics by namespace is just as easy. Let’s search for metrics that are stored inside any metric namespace that ends with “EBS”.

Get-CWMetricList | Where-Object -FilterScript { $PSItem.Namespace -match 'EBS$' }

That’s likely enough examples of how to find metrics in CloudWatch, using PowerShell! Let’s move on and talk about pulling actual metric data points from CloudWatch, using PowerShell.

Pull metric data from CloudWatch

Concepts

Metric data is stored in CloudWatch for a finite period of time. Before metrics age out of CloudWatch, metric data points (metric “statistics”) move through a tiered system where they are aggregated and stored as less granular metric data points. For example, metrics gathered on a per-minute period are aggregated and stored as five-minute metrics, when they reach an age of fifteen (15) days. You can find detailed information about the aggregation process and retention period in the Amazon CloudWatch metrics documentation.

Data aggregation in CloudWatch, as of this writing, starts when metric data points reach an age of three (3) hours. If you want to keep the most detailed resolution of your metric data points, you need to ensure that you export your metric data before it's aggregated. Services such as AWS Lambda, or even PowerShell applications deployed onto Amazon EC2 Container Service (ECS), can help you achieve this export process in a scalable fashion.

The longest period that metrics are stored in CloudWatch is 15 months. If you want to store metrics data beyond a 15-month period, you must query the metrics data before CloudWatch performs aggregation on your metrics, and store it in an alternate repository, such as Amazon DynamoDB, Amazon S3, or Amazon RDS.

PowerShell Deep Dive

Now that we’ve covered some of the conceptual topics around retrieving and archiving CloudWatch metrics, let’s look at the actual PowerShell command to retrieve data points.

The AWS Tools for PowerShell include a command named Get-CWMetricStatistic, which maps to the GetMetricStatistics API in AWS. You can use this command to retrieve granular data points from your CloudWatch metrics.

There are quite a few parameters that you need to specify on this command, because you are querying a potentially massive dataset. You need to be very specific about the metric namespace, name, start time, end time, period, and statistic that you want to retrieve.

Let's find the metric data points for the Active Directory UserCount metric, for the past 60 minutes, at one-minute granularity. We assign the API response to a variable, so we can dig into it further. You most likely don't have this metric, but I've been gathering this metric for a while, so I've got roughly a week's worth of data at a per-minute level. Of course, my metrics are subject to the built-in aggregation policies, so my per-minute data is only good for up to 15 days.

$Data = Get-CWMetricStatistic -Namespace ActiveDirectory/tsulli.loc -ExtendedStatistic p0.0 -MetricName UserCount -StartTime ([DateTime]::UtcNow.AddHours(-1)) -EndTime ([DateTime]::UtcNow) -Period 60

As you can see, the command is a little lengthy, but when you use PowerShell’s tab completion to help finish the command and parameter names, typing it out isn’t too bad.

Because we’re querying the most recent hour’s worth of data, we should have exactly 60 data points in our response. We can confirm this by examining PowerShell’s built-in Count property on the Datapoints property in our API response.

$Data.Datapoints.Count

The data points aren’t in chronological order when returned, so let’s use PowerShell to sort them, and grab the most recent data point.

$Data.Datapoints | Sort-Object -Property Timestamp | Select-Object -Last 1
Average            : 0
ExtendedStatistics : {[p0.0, 19]}
Maximum            : 0
Minimum            : 0
SampleCount        : 0
Sum                : 0
Timestamp          : 10/14/17 4:27:00 PM
Unit               : Count

Now we can take this data and start exporting it into our preferred external data store, for long-term retention! I’ll leave it to you to explore this API further.

Due to the detailed options available in the GetMetricStatistics API, I would strongly encourage you to read through the documentation, and more importantly, run your own experiments with the API. You need to use this API extensively, if you want to export data points from CloudWatch metrics to an alternative data source, as described earlier.
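As a starting point for that export workflow, here's a minimal, hedged sketch that serializes the data points retrieved above into JSON and archives them in Amazon S3. The bucket name is a placeholder, and a production version would need batching, pagination, and error handling.

### Archive the retrieved data points to S3 as a JSON document
$Json = $Data.Datapoints | Sort-Object -Property Timestamp | ConvertTo-Json -Depth 4
$Key = 'cloudwatch-archive/UserCount/{0:yyyy-MM-dd-HHmm}.json' -f [DateTime]::UtcNow
Write-S3Object -BucketName 'my-metrics-archive-bucket' -Key $Key -Content $Json  # placeholder bucket name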

Conclusion

In this article, we’ve explored the use of the AWS Tools for PowerShell to assist you with writing metrics to Amazon CloudWatch, searching or querying for metrics, and retrieving metric data points. I would encourage you to think about how Amazon CloudWatch can integrate with a variety of other services, to help you achieve your objectives.

Don’t forget that after you’ve stored metrics data in CloudWatch, you can then build dashboards around that data, connect CloudWatch alarms to notify you about infrastructure and application issues, and even perform automated remediation tasks. The sky is the limit, so put on your builder’s hat and start creating!

Please feel free to follow me on Twitter, and keep an eye on my YouTube channel.

New Get-ECRLoginCommand for AWS Tools for PowerShell

Today’s post is from AWS Solution Architect and Microsoft MVP for Cloud and Data Center Management, Trevor Sullivan.

The AWS Tools for PowerShell now offer a new command that makes it easier to authenticate to the Amazon EC2 Container Registry (Amazon ECR).

Amazon EC2 Container Registry (ECR) is a service that enables customers to upload and store their Windows-based and Linux-based container images. Once a developer uploads these container images, they can then be deployed to stand-alone container hosts or container clusters, such as those running under the Amazon EC2 Container Service (Amazon ECS).

To push or pull container images from ECR, you must authenticate to the registry using the Docker API. ECR provides a GetAuthorizationToken API that retrieves the credential you’ll use to authenticate to ECR. In the AWS PowerShell modules, this API is mapped to the cmdlet Get-ECRAuthorizationToken. The response you receive from this service invocation includes a username and password for the registry, encoded as base64. To retrieve the credential, you must decode the base64 response into a byte array, and then decode the byte array as a UTF-8 string. After retrieving the UTF-8 string, the username and password are provided to you in a colon-delimited format. You simply split the string on the colon character to receive the username as array index 0, and the password as array index 1.
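Spelled out in PowerShell, that manual process looks roughly like the sketch below. The AuthorizationToken and ProxyEndpoint property names on the Get-ECRAuthorizationToken output are assumptions based on the underlying API response, so verify them against the actual object.

### Retrieve the authorization data and decode the base64 token (sketch)
$AuthData = Get-ECRAuthorizationToken -Region us-west-2 | Select-Object -First 1
$DecodedToken = [System.Text.Encoding]::UTF8.GetString(
    [System.Convert]::FromBase64String($AuthData.AuthorizationToken))

### The decoded token is in "username:password" form
$UserName, $Password = $DecodedToken.Split(':')
docker login --username $UserName --password $Password $AuthData.ProxyEndpoint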

Now, with Get-ECRLoginCommand, you can retrieve a pregenerated Docker login command that authenticates your container hosts to ECR. Although you can still directly call the GetAuthorizationToken API, Get-ECRLoginCommand provides a helpful shortcut that reduces the amount of required conversion effort.

Let’s look at a short example of how you can use this new command from PowerShell:

PS> Invoke-Expression -Command (Get-ECRLoginCommand -Region us-west-2).Command
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded

As you can see, all you have to do is call Get-ECRLoginCommand, and then pass the prebuilt Command property into the built-in Invoke-Expression PowerShell cmdlet. Upon running this, you're authenticated to ECR and can proceed to create image repositories and push and pull container images.

Note: You might receive a warning about specifying the registry password on the Docker CLI. However, you can also build your own Docker login command by using the other properties on the object returned from the Get-ECRLoginCommand.

I hope you find the new cmdlet useful! If you have ideas for other cmdlets we should add, be sure to let us know in the comments.

Improvements for AWS CloudFormation and Amazon CloudWatch in the AWS Tools for PowerShell Modules

Trevor Sullivan, a Systems Development Engineer here at Amazon, recently contributed some new AWS CloudFormation helper cmdlets and improved formatting for types he works with on a daily basis. These updates were released in version 3.3.119.0 of the AWS Tools for PowerShell modules (AWSPowerShell and AWSPowerShell.NetCore), in addition to new support in Amazon CloudWatch metrics for customizable dashboards. In this guest post, Trevor takes us through the updates.

Pause a script until a CloudFormation stack status is reached

If you want to pause your PowerShell script until a CloudFormation stack reaches a certain status, you can use the Wait-CFNStack cmdlet. You use Wait-CFNStack to specify a CloudFormation stack name and the status code that you want to wait for. All of the supported CloudFormation statuses are provided with IntelliSense/tab-completion for the -Status parameter, so you don’t need to look them up! Let’s take a look at how you use this cmdlet.

$Common = @{
    ProfileName = 'default'
    Region = 'us-east-2'
}
$CloudFormation = @{
    StackName = 'AWSCloudFormation'
    TemplateBody = @'
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  myBucket:
    Type: AWS::S3::Bucket
Outputs:
  BucketName:
    Value: !Ref myBucket
'@
}
New-CFNStack @CloudFormation @Common
Wait-CFNStack -StackName $CloudFormation.StackName @Common

Test the existence of the CloudFormation stack

Have you ever wanted to simply test whether a CloudFormation stack exists in a certain AWS Region? If so, we now have a cmdlet for that. The Test-CFNStack cmdlet simply returns a Boolean $true if the specified stack exists, or $false if it doesn’t. If your stack doesn’t exist, you no longer have to worry about catching exceptions thrown by the Get-CFNStack cmdlet!

$Common = @{
    ProfileName = 'default'
    Region = 'us-east-2'
}

if (Test-CFNStack -StackName $CloudFormation.StackName @Common) {
    Remove-CFNStack -StackName $CloudFormation.StackName -Force @Common
}

Format types

Another customer-obsessed enhancement in the latest version of the modules deals with the default display of certain objects. In earlier versions, complex objects such as CloudFormation stacks were typically displayed in the vertical "list" format (see the Format-List PowerShell cmdlet). The "list" output format doesn't use horizontal screen space very effectively. As a result, you have to scroll a lot to find what you want, and the output isn't easy to consume.

Instead, we opted to improve the default output to use the PowerShell table format. This makes data easier to consume, so you don’t have to scroll as much. It also limits focus to the object properties that you care about the most.

If you prefer the “list” format, you can still use it by piping your objects into the Format-List PowerShell cmdlet. The default output has simply been changed to use a tabular format to make data easier to interact with and consume.
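For example, to see the full list-style view of the stack created earlier in this post:

Get-CFNStack -StackName AWSCloudFormation | Format-List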

The new format types work with cmdlets that emit complex objects, such as:

  • Get-CFNStackEvent
  • Get-CFNStack
  • Get-IAMRoleList
  • Get-CWERule
  • Get-LMFunctionList
  • Get-ASAutoScalingGroup
  • Get-WKSWorkspace
  • Get-CWAlarm

The changelog for version 3.3.119.0 of the module on the PowerShell Gallery lists all the types for which new formats have been specified.

Manage CloudWatch dashboards

AWS customers who use CloudWatch to store and view metrics will appreciate the new CloudWatch dashboard APIs. You can now use PowerShell cmdlets to create, list, and delete CloudWatch dashboards!

I’ve already created a CloudWatch dashboard in my account, so let’s check out how we can export it, modify it, and then update it. Let’s start by discovering which AWS cmdlets relate to CloudWatch dashboards by using Get-AWSCmdletName.

PS /Users/tsulli> Get-AWSCmdletName -MatchWithRegex dashboard

CmdletName           ServiceOperation         ServiceName       CmdletNounPrefix
----------           ----------------         -----------       ----------------
Get-CWDashboard      GetDashboard             Amazon CloudWatch CW
Get-CWDashboardList  ListDashboards           Amazon CloudWatch CW
Remove-CWDashboard   DeleteDashboards         Amazon CloudWatch CW
Write-CWDashboard    PutDashboard             Amazon CloudWatch CW

Now, let’s discover which CloudWatch dashboards already exist in the us-west-2 AWS Region by using Get-CWDashboardList.

PS /Users/tsulli> Get-CWDashboardList -Region us-west-2

DashboardArn   DashboardName   LastModified        Size
------------   -------------   ------------        ----
               MacBook-Pro     7/6/17 7:50:16 PM   1510

As you can see, I’ve got a single CloudWatch dashboard in my test account, with some interesting metrics about my MacBook Pro. Coincidentally, these hardware metrics are also being written to CloudWatch metrics using the AWSPowerShell.NETCore module.

Now let’s grab some detailed information about this specific CloudWatch dashboard. We do this using the Get-CWDashboard cmdlet, and simply passing in the region and name of the dashboard. Be sure to remember that the dashboard name is a case-sensitive input parameter.

PS /Users/tsulli> $Dashboard = Get-CWDashboard -DashboardName MacBook-Pro -Region us-west-2
PS /Users/tsulli> $Dashboard | Format-List

LoggedAt : 7/7/17 1:44:44 PM
DashboardArn : arn:aws:cloudwatch::123456789012:dashboard/MacBook-Pro
DashboardBody : {"widgets......
DashboardName :
ResponseMetadata : Amazon.Runtime.ResponseMetadata
ContentLength : 3221
HttpStatusCode : OK

For readability in this article, I’ve trimmed the DashboardBody property. However, it contains a lengthy string with the JSON that represents my CloudWatch dashboard. I can use the ConvertFrom-Json cmdlet to convert the string to a usable object in PowerShell.

PS /Users/tsulli> $DashboardObject = $Dashboard.DashboardBody | ConvertFrom-Json

Now let's update the title field of all the widgets on the CloudWatch dashboard. Let's change the beginning of each widget's title from "Trevor" to "David". Right now, the title reads "Trevor's MacBook Pro". After updating it, the widget titles will read "David's MacBook Pro". We'll use the ForEach method syntax in PowerShell to do this. Each widget has a property named "properties", which has a "title" string property. We'll do a simple string replacement operation on this property's value.

PS /Users/tsulli> $DashboardObject.widgets.ForEach({ $PSItem.properties.title = $PSItem.properties.title.Replace('Trevor', 'David') })

Now that we’ve modified the widget titles, let’s convert the dashboard back to JSON and overwrite our dashboard! We’ll use ConvertTo-Json to convert the dashboard object back into its JSON representation. Then we’ll call Write-CWDashboard to commit the updated dashboard back to the CloudWatch service.

PS /Users/tsulli> $DashboardJson = $DashboardObject | ConvertTo-Json -Depth 8
PS /Users/tsulli> Write-CWDashboard -DashboardBody $DashboardJson -DashboardName MacBook-Pro -Region us-west-2

Great! Now if you go back to the AWS Management Console and visit your CloudWatch dashboard, you’ll see that your widgets have updated titles!

Conclusion

We hope you enjoy the continued improvements to the AWS Tools for PowerShell customer experience! If you have feedback on these improvements, please let us know. You can:

* Leave comments and feedback in our AWS SDK forums.
* Tweet to us at @awscloud and @awsfornet.
* Comment on this article!

Updates to AWSPowerShell Cmdlet Names

Since we launched the first AWS module for PowerShell five years ago, we’ve been hugely encouraged by the feedback from the user community, from first-time PowerShell users to PowerShell MVPs. In that time, we’ve acted immediately on some feedback, and put more complex changes into our backlog for future consideration.

One common request from experienced community members was to rename some cmdlets to change plural nouns to singular, as is the recommended practice. (We had initially created them with plural nouns as we tried to map them closely to the underlying service operation names).

Today we’re pleased to announce we’ve completed the remapping work. We’ve released new versions of the AWSPowerShell and AWSPowerShell.NetCore modules – version 3.3.96.0 – with singular cmdlet names and other cross-service consistency changes.

All the cmdlet name changes have backward-compatible aliases, so you should not have to update any existing scripts. You can continue to use the earlier names.

As we analyzed the cmdlets to determine where we needed to make changes, we found fewer problematic cases than we feared, but more than we'd like! To list them all in this blog post would be overwhelming. Instead, we added a page to our cmdlet cross-reference on the web at https://docs.aws.amazon.com/powershell/latest/reference/items/pstoolsref-legacyaliases.html. You can also quickly inspect the changes by opening the AWSPowerShellLegacyAliases.psm1 file in the module distribution. For convenience, both these resources list the aliases by service.

Please keep your feedback and feature requests coming! We really enjoy getting feedback (good or bad) and using it to plan improvements to the tools to address the problems you handle day to day.

Using Amazon Kinesis Firehose

Amazon Kinesis Firehose, a new service announced at this year's re:Invent conference, is the easiest way to load streaming data into AWS. Firehose manages all of the resources and automatically scales to match the throughput of your data. It can capture and automatically load streaming data into Amazon S3 and Amazon Redshift.

An example use for Firehose is to keep track of traffic patterns in a web application. To do that, we want to stream a record to Firehose for each request to the web application, containing the current page and the page being requested. Let's take a look.

Creating the Delivery Stream

First, we need to create our Firehose delivery stream. Although we can do this through the Firehose console, let’s take a look at how we can automate the creation of the delivery stream with PowerShell.

In our PowerShell script, we need to set up the account ID and variables for the names of the resources we will create. The account ID is used in our IAM role to restrict access to just the account with the delivery stream.

$accountId = '<account-id>'
$roleName = '<iam-role-name>'
$s3BucketName = '<s3-bucket-name>'
$firehoseDeliveryStreamName = '<delivery-stream-name>'

Because Firehose will push our streaming data to S3, our script will need to make sure the bucket exists.

$s3Bucket = Get-S3Bucket -BucketName $s3BucketName
if($s3Bucket -eq $null)
{
    New-S3Bucket -BucketName $s3BucketName
}

We also need to set up an IAM role that gives Firehose permission to push data to S3. The role will need access to the Firehose API and the S3 destination bucket. For the Firehose access, our script will use the AmazonKinesisFirehoseFullAccess managed policy. For the S3 access, our script will use an inline policy that restricts access to the destination bucket.

$role = (Get-IAMRoles | ? { $_.RoleName -eq $roleName })

if($role -eq $null)
{
    # Assume role policy allowing Firehose to assume a role
    $assumeRolePolicy = @"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "firehose.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId":"$accountId"
        }
      }
    }
  ]
}
"@

    $role = New-IAMRole -RoleName $roleName -AssumeRolePolicyDocument $assumeRolePolicy

    # Add managed policy AmazonKinesisFirehoseFullAccess to role
    Register-IAMRolePolicy -RoleName $roleName -PolicyArn 'arn:aws:iam::aws:policy/AmazonKinesisFirehoseFullAccess'

    # Add policy giving access to S3
    $s3AccessPolicy = @"
{
"Version": "2012-10-17",  
    "Statement":
    [    
        {      
            "Sid": "",      
            "Effect": "Allow",      
            "Action":
            [        
                "s3:AbortMultipartUpload",        
                "s3:GetBucketLocation",        
                "s3:GetObject",        
                "s3:ListBucket",        
                "s3:ListBucketMultipartUploads",        
                "s3:PutObject"
            ],      
            "Resource":
            [        
                "arn:aws:s3:::$s3BucketName",
                "arn:aws:s3:::$s3BucketName/*"		    
            ]    
        } 
    ]
}
"@

    Write-IAMRolePolicy -RoleName $roleName -PolicyName "S3Access" -PolicyDocument $s3AccessPolicy

    # Sleep to wait for the eventual consistency of the role creation
    Start-Sleep -Seconds 2
}

Now that the S3 bucket and IAM role are set up, we will create the delivery stream. We just need to set up an S3DestinationConfiguration object and call the New-KINFDeliveryStream cmdlet.

$s3Destination = New-Object Amazon.KinesisFirehose.Model.S3DestinationConfiguration
$s3Destination.BucketARN = "arn:aws:s3:::" + $s3BucketName
$s3Destination.RoleARN = $role.Arn

New-KINFDeliveryStream -DeliveryStreamName $firehoseDeliveryStreamName -S3DestinationConfiguration $s3Destination 

After the New-KINFDeliveryStream cmdlet is called, it will take a few minutes to create the delivery stream. We can use the Get-KINFDeliveryStream cmdlet to check the status. As soon as it is active, we can run the following cmdlet to test our stream.

Write-KINFRecord -DeliveryStreamName $firehoseDeliveryStreamName -Record_Text "test record"

This will send one record to our stream, which will be pushed to the S3 bucket. By default, delivery streams buffer data until either 5 MB accumulates or 5 minutes elapse before pushing to S3, so check the bucket after about 5 minutes.
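To confirm that the test record arrived, you can list the most recent objects in the destination bucket once the buffer interval has elapsed; a quick sketch:

### Firehose writes objects under date-based prefixes; show the newest few
Get-S3Object -BucketName $s3BucketName |
    Sort-Object -Property LastModified -Descending |
    Select-Object -First 5 -Property Key, Size, LastModified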

Writing to the Delivery Stream

In an ASP.NET application, we can write an IHttpModule so we know about every request. With an IHttpModule, we can add an event handler to the BeginRequest event and inspect where the request is coming from and going to. Here is code for our IHttpModule. The Init method adds the event handler. The RecordRequest method grabs the current URL and the request URL and sends that to the delivery stream.

using System;
using System.IO;
using System.Text;
using System.Web;

using Amazon;
using Amazon.KinesisFirehose;
using Amazon.KinesisFirehose.Model;

namespace KinesisFirehoseDemo
{
    /// <summary>
    /// This HTTP module adds an event handler for incoming requests.
    /// For each request a record is sent to Kinesis Firehose. For this demo a
    /// single record is sent at a time with the PutRecord operation to
    /// keep the demo simple. This can be optimized by batching records and
    /// using the PutRecordBatch operation.
    /// </summary>
    public class FirehoseSiteTracker : IHttpModule
    {
        IAmazonKinesisFirehose _client;

        // The delivery stream that was created using the setup.ps1 script.
        string _deliveryStreamName = "";

        public FirehoseSiteTracker()
        {
            this._client = new AmazonKinesisFirehoseClient(RegionEndpoint.USWest2);
        }

        public void Dispose() 
        {
            this._client.Dispose(); 
        }

        public bool IsReusable
        {
            get { return true; }
        }

        /// <summary>
        /// Set up the event handler for BeginRequest events.
        /// </summary>
        /// <param name="application">The current HTTP application.</param>
        public void Init(HttpApplication application)
        {
            application.BeginRequest +=
                (new EventHandler(this.RecordRequest));
        }

        /// <summary>
        /// Write to Firehose a record with the starting page and the page being requested.
        /// </summary>
        /// <param name="source">The HttpApplication raising the event.</param>
        /// <param name="e">The event arguments.</param>
        private void RecordRequest(Object source, EventArgs e)
        {
            // Create HttpApplication and HttpContext objects to access
            // request and response properties.
            HttpApplication application = (HttpApplication)source;
            HttpContext context = application.Context;

            string startingRequest = string.Empty;
            if (context.Request.UrlReferrer != null)
                startingRequest = context.Request.UrlReferrer.PathAndQuery;

            var record = new MemoryStream(UTF8Encoding.UTF8.GetBytes(string.Format("{0}\t{1}\n",
                startingRequest, context.Request.Path)));

            var request = new PutRecordRequest
            {
                DeliveryStreamName = this._deliveryStreamName,
                Record = new Record
                {
                    Data = record
                }
            };
            this._client.PutRecordAsync(request);
        }
    }
}

To enable the module, register it in the web application's web.config file:

<system.webServer>
  <modules>
    <add name="siterecorder" type="KinesisFirehoseDemo.FirehoseSiteTracker"/>
  </modules>
</system.webServer>

Now we can navigate through our ASP.NET application and watch data flow into our S3 bucket.

What’s Next

Now that our data is flowing into S3, we have many options for what to do with that data. Firehose has built-in support for pushing our S3 data straight to Amazon Redshift, giving us lots of power for running queries and doing analytics. We could also set up event notifications to have Lambda functions or SQS pollers read the data getting pushed to Amazon S3 in real time.

AWS re:Invent 2015 Recap

Another AWS re:Invent in the bag. It was great to talk to so many of our customers about .NET and PowerShell. Steve and I gave two talks this year. The first session was about how to take advantage of ASP.NET 5 in AWS. The second session was our first-ever PowerShell talk at re:Invent. It was great to see community excitement for our PowerShell support. If you weren’t able to come to re:Invent this year, you can view our sessions online.

We published the source code and scripts used in our talks in the reInvent-2015 folder in our .NET SDK samples repository.

Hope to see you at next year’s AWS re:Invent!

Amazon EC2 ImageUtilities and Get-EC2ImageByName Updates

Versions 2.3.14 of the AWS SDK for .NET and AWS Tools for Windows PowerShell, released today (December 18, 2014), contain updates to the utilities and the Get-EC2ImageByName cmdlet used to query common Microsoft Windows 64-bit Amazon Machine Images using version-independent names. Briefly, we renamed some of the keys used to identify Microsoft Windows Server 2008 images to address confusion over what versions are actually returned, and we added the ability to retrieve some additional images. In the Get-EC2ImageByName cmdlet, we made a small behavior change to help when running the cmdlet in a pipeline when more than one image version exists (as happens when Amazon periodically revises the images) – the cmdlet by default now outputs only the very latest image. The previous behavior that output all available versions (latest + prior) can be enabled using a new switch.

Renamed and New Image Keys

This change affects both the SDK Amazon.EC2.Util.ImageUtilities class and the Get-EC2ImageByName cmdlet. For some time now, the keys prefixed with Windows_2008_* have returned Microsoft Windows Server 2008 R2 images, not the original Windows Server 2008 editions, leading to some confusion. We addressed this by adding a new set of R2-specific keys—these all have the prefix Windows_2008R2_*. To maintain backward compatibility, the SDK retains the old keys, but we have tagged them with the [Obsolete] attribute and a message detailing the corresponding R2-based key you should use. Additionally, these old keys will still return Windows Server 2008 R2 images.

Note that the Get-EC2ImageByName cmdlet will not display the obsolete keys (when run with no parameters), but you can still supply them for the -Name parameter so your existing scripts will continue to function.

We also added three new keys enabling you to retrieve 64-bit Windows Server 2008 editions (the original RTM release, not R2): the base image, plus SQL Server 2008 Standard and SQL Server 2008 Express images. The keys for these images are WINDOWS_2008RTM_BASE, WINDOWS_2008RTM_SQL_SERVER_EXPRESS_2008, and WINDOWS_2008RTM_SQL_SERVER_STANDARD_2008.

The following keys are displayed when you run the cmdlet with no parameters:

PS C:\> Get-EC2ImageByName
WINDOWS_2012R2_BASE
WINDOWS_2012R2_SQL_SERVER_EXPRESS_2014
WINDOWS_2012R2_SQL_SERVER_STANDARD_2014
WINDOWS_2012R2_SQL_SERVER_WEB_2014
WINDOWS_2012_BASE
WINDOWS_2012_SQL_SERVER_EXPRESS_2014
WINDOWS_2012_SQL_SERVER_STANDARD_2014
WINDOWS_2012_SQL_SERVER_WEB_2014
WINDOWS_2012_SQL_SERVER_EXPRESS_2012
WINDOWS_2012_SQL_SERVER_STANDARD_2012
WINDOWS_2012_SQL_SERVER_WEB_2012
WINDOWS_2012_SQL_SERVER_EXPRESS_2008
WINDOWS_2012_SQL_SERVER_STANDARD_2008
WINDOWS_2012_SQL_SERVER_WEB_2008
WINDOWS_2008R2_BASE
WINDOWS_2008R2_SQL_SERVER_EXPRESS_2012
WINDOWS_2008R2_SQL_SERVER_STANDARD_2012
WINDOWS_2008R2_SQL_SERVER_WEB_2012
WINDOWS_2008R2_SQL_SERVER_EXPRESS_2008
WINDOWS_2008R2_SQL_SERVER_STANDARD_2008
WINDOWS_2008R2_SQL_SERVER_WEB_2008
WINDOWS_2008RTM_BASE
WINDOWS_2008RTM_SQL_SERVER_EXPRESS_2008
WINDOWS_2008RTM_SQL_SERVER_STANDARD_2008
WINDOWS_2008_BEANSTALK_IIS75
WINDOWS_2012_BEANSTALK_IIS8
VPC_NAT

The following keys are deprecated but still recognized:

WINDOWS_2008_BASE
WINDOWS_2008_SQL_SERVER_EXPRESS_2012
WINDOWS_2008_SQL_SERVER_STANDARD_2012
WINDOWS_2008_SQL_SERVER_WEB_2012
WINDOWS_2008_SQL_SERVER_EXPRESS_2008
WINDOWS_2008_SQL_SERVER_STANDARD_2008
WINDOWS_2008_SQL_SERVER_WEB_2008

Get-EC2ImageByName Enhancements

Amazon periodically revises the set of Microsoft Windows images that it makes available to customers, and for a period the Get-EC2ImageByName cmdlet could return the latest image for a key, plus one or more prior versions. For example, at the time of writing this post, running the command Get-EC2ImageByName -Name windows_2012r2_base emitted two images as output. If run in a pipeline that then proceeds to invoke the New-EC2Instance cmdlet, for example, instances of multiple images could then be started—perhaps not what was expected. To obtain and start only the latest image, you would have to either index the returned collection, which could contain one or several objects, or insert a call to Select-Object in your pipeline to extract the first item before calling New-EC2Instance (the first item in the output from Get-EC2ImageByName is always the latest version).

With the new release, when a single key is supplied to the -Name parameter, the cmdlet emits only the single latest machine image that is available. This makes using the cmdlet in a ‘get | start’ pattern much safer and more convenient:

# guaranteed to only return one image to launch
PS C:\> Get-EC2ImageByName -Name windows_2012r2_base | New-EC2Instance -InstanceType t1.micro ...

If you do need to get all versions of a given image, this is supported using the new "-AllAvailable" switch. The following command outputs all available versions of the Windows Server 2012 R2 image, which may be one or several images:

PS C:\> Get-EC2ImageByName -Name windows_2012r2_base -AllAvailable

The cmdlet can also emit all available versions when either more than one value is supplied for the -Name parameter or a custom key value is supplied, as it is assumed in these scenarios you are expecting a collection to work with:

# use of multiple keys (custom or built-in) yields all versions
PS C:\> Get-EC2ImageByName -Name windows_2012r2_base,windows_2008r2_base

# use of a custom key, single or multiple, yields all versions
PS C:\> Get-EC2ImageByName -Name "Windows_Server-2003*"
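
Because these calls return a collection, you can inspect or filter the results with standard PowerShell before acting on them. For example (ImageId, Name, and CreationDate are standard properties on the EC2 image objects the cmdlet returns):

# inspect the available versions before choosing one
PS C:\> Get-EC2ImageByName -Name windows_2012r2_base -AllAvailable | Select-Object ImageId, Name, CreationDate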

These updates to the Get-EC2ImageByName cmdlet were driven in part by feedback from our users. If you have an idea or suggestion for new features that would make your scripting life easier, please get in touch with us! One way is via the AWS PowerShell Scripting forum here.

Referencing Credentials using Profiles

There are a number of ways to provide AWS credentials to your .NET applications. One approach is to embed your credentials in the appSettings section of your App.config file. While this is easy and convenient, your AWS credentials might end up checked into source control or published somewhere you didn't intend. A better approach is to use profiles, which were introduced in version 2.1 of the AWS SDK for .NET. Profiles offer an easy-to-use mechanism for safely storing credentials in a central location outside your application directory. After setting up your credential profiles once, you can refer to them by name in all of the applications you run on that machine. The App.config file will look similar to this example when using profiles.

<configuration>
   <appSettings>
      <add key="AWSProfileName" value="development"/>
      <add key="AWSRegion" value="us-west-2" />
   </appSettings>
</configuration>

The SDK supports two different profile stores. The first is what we call the SDK store, which stores the profiles encrypted in the C:\Users\<username>\AppData\Local\AWSToolkit folder. This is the same store used by the AWS Toolkit for Visual Studio and the AWS Tools for Windows PowerShell. The second store is the credentials file under C:\Users\<username>\.aws. The credentials file is used by the other AWS SDKs and the AWS Command Line Interface. The SDK always checks the SDK store first and then falls back to the credentials file.
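
If you're not sure which stores exist on a given machine, a quick check from PowerShell against the folders described above (paths assume a default installation under the current user's profile):

PS C:\> Test-Path "$env:LOCALAPPDATA\AWSToolkit"       # SDK store location
PS C:\> Test-Path "$env:USERPROFILE\.aws\credentials"  # shared credentials file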

Setting up Profiles with Visual Studio

The AWS Toolkit for Visual Studio lists all the profiles registered in the SDK store in the AWS Explorer. To add a new profile, click the New Account Profile button.

When you create a new project in Visual Studio using one of the AWS project templates, the project wizard lets you pick an existing profile or create a new one. The selected profile is referenced in the App.config of the new project.

 

Setting up Profiles with PowerShell

Profiles can also be set up using the AWS Tools for Windows PowerShell.

PS C:\> Set-AWSCredentials -AccessKey 123MYACCESSKEY -SecretKey 456SECRETKEY -StoreAs development

As with the Toolkit, credentials stored this way are accessible to both the SDK and the Toolkit after running this command. To use the profile in PowerShell, run the following command before using any AWS cmdlets.

PS C:\> Set-AWSCredentials -ProfileName development
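
The service cmdlets also generally accept a -ProfileName parameter directly, which is convenient for a one-off call that shouldn't change the credentials used by the rest of the session; for example:

PS C:\> Get-S3Bucket -ProfileName development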

Setting up Profiles with the SDK

Profiles can also be managed directly with the AWS SDK for .NET by using the Amazon.Util.ProfileManager class. Here is how you can register a profile using the ProfileManager.

Amazon.Util.ProfileManager.RegisterProfile(profileName, accessKey, secretKey)

You can also list the registered profiles and unregister profiles using the ListProfileNames and UnregisterProfile methods.
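
Because the AWS Tools for Windows PowerShell load the SDK, you can also reach these same static methods from a PowerShell session. This is only a hedged sketch: it assumes the Amazon.Util.ProfileManager type is available once the AWSPowerShell module is imported, and that UnregisterProfile takes the profile name.

PS C:\> Import-Module AWSPowerShell
PS C:\> [Amazon.Util.ProfileManager]::RegisterProfile("development", "123MYACCESSKEY", "456SECRETKEY")
PS C:\> [Amazon.Util.ProfileManager]::ListProfileNames()
PS C:\> [Amazon.Util.ProfileManager]::UnregisterProfile("development")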

Getting the SDK from NuGet

If you get the SDK from NuGet, the package's install script adds an empty AWSProfileName setting to the App.config file if that app setting doesn't already exist. You can use any of the methods already mentioned to register profiles. Alternatively, you can use the account-management.ps1 PowerShell script that comes with the NuGet package; it is placed in the /packages/AWSSDK-X.X.X.X/tools/ folder. This interactive script lets you register, list, and unregister profiles.

Credentials File Format

The previous methods for adding profiles all put credentials into the SDK store. Because the SDK store encrypts the credentials, you need one of these tools to write to it. The alternative is the credentials file, a plain-text file similar to an .ini file. Here is an example of a credentials file with two profiles.

[default]
aws_access_key_id = <access-key>
aws_secret_access_key = <secret-key>

[development]
aws_access_key_id = <access-key>
aws_secret_access_key = <secret-key>
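
If you prefer to create this file from PowerShell rather than a text editor, a minimal sketch follows; the key values are placeholders you would replace with your own.

# create the .aws folder and write a plain-text credentials file with one profile
New-Item -ItemType Directory -Path "$env:USERPROFILE\.aws" -Force
@"
[default]
aws_access_key_id = <access-key>
aws_secret_access_key = <secret-key>
"@ | Set-Content -Path "$env:USERPROFILE\.aws\credentials"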

Default Profile

When you create a service client without specifying credentials or a profile name, the SDK searches for a default profile. The default profile is named "default", and it is searched for first in the SDK store and then in the credentials file. When the AWS Tools for PowerShell were released last year, they introduced a default profile called "AWS PS Default". To give all of our tools a consistent experience, we have changed the AWS Tools for PowerShell to use "default" as well. To make sure we don't break any existing users, the AWS Tools for PowerShell will still try to load the old profile ("AWS PS Default") when "default" is not found, but will now save credentials to the "default" profile unless otherwise specified.
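
In practice, this means that if you store a profile named "default", both the SDK and the PowerShell cmdlets pick it up without any further configuration. A short example (it assumes a region is supplied or already configured):

# store credentials under the default profile name
PS C:\> Set-AWSCredentials -AccessKey 123MYACCESSKEY -SecretKey 456SECRETKEY -StoreAs default

# later sessions and SDK applications find the "default" profile automatically
PS C:\> Get-S3Bucket -Region us-west-2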

Credentials Search Path

If an application creates a service client without specifying credentials, the SDK uses the following order to find credentials.

  • Look for AWSAccessKey and AWSSecretKey in App.config.

    • Note that version 2.1 of the SDK did not break any existing applications that use the AWSAccessKey and AWSSecretKey app settings.
  • Search the SDK store.

    • If AWSProfileName is set, the SDK checks whether that profile exists. If no AWSProfileName is specified, it looks for the profile named "default" in the SDK store.
  • Search the credentials file.

    • If AWSProfileName is set, the SDK checks whether that profile exists. If no AWSProfileName is specified, it looks for the profile named "default" in the credentials file.
  • Search for instance profiles.

    • These are credentials available on EC2 instances that were launched with an IAM instance profile.

Setting Profile in Code

It is also possible to specify the profile to use in code, in addition to using App.config. This code shows how to create an Amazon S3 client for the development profile.

Amazon.Runtime.AWSCredentials credentials = new Amazon.Runtime.StoredProfileAWSCredentials("development");
Amazon.S3.IAmazonS3 s3Client = new AmazonS3Client(credentials, Amazon.RegionEndpoint.USWest2);

Alternative Credentials File

Both the SDK store and the credentials file are located under the current user's home directory. If your application runs under a different user – such as Local System – the AWSProfilesLocation app setting can be set to point to an alternative credentials file. For example, this App.config tells the SDK to look for credentials in the C:\aws_service_credentials\credentials file.

<configuration>
   <appSettings>
      <add key="AWSProfileName" value="development"/>
      <add key="AWSProfilesLocation" value="C:aws_service_credentialscredentials"/>
      <add key="AWSRegion" value="us-west-2" />
   </appSettings>
</configuration>

Steve Roberts Interviewed in Episode 255 of the PowerScripting Podcast

A few weeks ago, Steve Roberts, from the AWS SDK and Tools team for .NET, was pleased to be invited to take part in an episode of the PowerScripting Podcast, chatting with fellow developers about PowerShell here at AWS, the AWS SDK for .NET and other general topics (including his choice of superhero!). The recording of the event has now been published and can be accessed here.

As mentioned in the podcast, a new book has also just been published about using PowerShell with AWS. More details can be found on the publisher’s website at Pro PowerShell for Amazon Web Services.