Deploy an Amazon ECS Cluster Running Windows Server with AWS Tools for PowerShell – Part 1

This is a guest post from Trevor Sullivan, a Seattle-based Solutions Architect at Amazon Web Services (AWS). In this blog post, Trevor shows you how to deploy a Windows Server-based container cluster using the AWS Tools for PowerShell.

Building and deploying applications on the Windows Server platform is becoming a significantly lighter-weight process. Although you might be accustomed to developing monolithic applications on Windows, and scaling vertically instead of horizontally, you might want to rethink how you design and build Windows-based apps. Now that application containers are a native feature in the Windows Server platform, you can design your applications in a similar fashion to how developers targeting the Linux platform have designed them for years.

Throughout this blog post, we explore how to automate the deployment of a Windows Server container cluster, managed by the Amazon EC2 Container Service (Amazon ECS). The architecture of the cluster can help you efficiently and cost-effectively scale your containerized application components.

Due to the large amount of information contained in this blog post, we’ve separated it into two parts. Part 1 guides you through the process of deploying an ECS cluster. Part 2 covers the creation and deployment of your own custom container images to the cluster.

Assumptions

In this blog post, we assume the following:

  • You’re using a Windows, Mac, or Linux system with PowerShell Core and the AWS Tools for PowerShell Core (the AWSPowerShell.NetCore module) installed.
  • Or you’re using Windows 7 or later, with Windows PowerShell 4.0 or later (use $PSVersionTable to check your PowerShell version) and the AWS Tools for PowerShell (the AWSPowerShell module) installed.
  • You already have access to an AWS account.
  • You’ve created an AWS Identity and Access Management (IAM) user account with administrative policy permissions, and generated an access key ID and secret key.
  • You’ve configured your AWS access key ID and secret key in your ~/.aws/credentials file.

This article was authored, and the code tested, on a MacBook Pro with PowerShell Core Edition and the AWSPowerShell.NetCore module.

NOTE: In the PowerShell code examples, I’m using a technique known as PowerShell Splatting to pass parameters into various PowerShell commands. Splatting syntax helps your PowerShell code look cleaner, enables auditing of parameter values prior to command invocation, and improves code maintenance.
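
If you haven’t used splatting before, here’s a minimal, non-AWS illustration: you build a hash table whose keys match the command’s parameter names, then pass the whole set at once by prefixing the variable name with @ instead of $.

### Splatting: the hash table keys match Get-ChildItem's parameter names
$ChildItemParams = @{
    Path    = $HOME
    Filter  = '*.txt'
    Recurse = $true
}
Get-ChildItem @ChildItemParams

### The call above is equivalent to:
### Get-ChildItem -Path $HOME -Filter '*.txt' -Recurse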

Create your Amazon ECS cluster

The first task is to create an empty ECS cluster. Right now, the ECS beta support for Windows Server containers doesn’t enable you to provision the cluster’s compute layer at cluster creation time. In a moment, you’ll provision some Windows Server-based compute capacity on Amazon EC2 and associate it with your empty ECS cluster.

To create the empty ECS cluster with the AWS Tools for PowerShell, use the following command.

New-ECSCluster -ClusterName ECSRivendell

Set up an IAM role for Amazon EC2 instances

Amazon EC2 instances can have IAM “roles” assigned to them. By associating one or more IAM policies with an IAM role, which is assigned to an EC2 instance by way of an “instance profile”, you can grant access to various services in AWS directly to the EC2 instance, without having to embed and manage any credentials. AWS handles that for you, via the IAM role. Whenever the EC2 instance is running, it has access to the IAM role that’s assigned to it, and can make calls to various AWS APIs that it’s authorized to call via associated IAM policies.

Before you actually deploy the Windows Server compute layer for your ECS cluster, you must first perform the following preparation steps:

  • Create an IAM policy that defines the required permissions for ECS container instances.
  • Create an IAM role.
  • Register the IAM policy with the role.
  • Create an IAM instance profile, which will be associated with your EC2 instances.
  • Associate the IAM role with the instance profile.

Create an IAM policy for your EC2 container instances

First, you use PowerShell to create the IAM policy. I simply copied and pasted the IAM policy that’s documented in the Amazon ECS documentation. This IAM policy will be attached to the IAM role that’s associated with your EC2 container instances, which actually run your containers via ECS. This policy is especially important, because it grants your container instances access to Amazon EC2 Container Registry (Amazon ECR), a private storage area for your container images, similar to the Docker Store.

$IAMPolicy = @{
    Path = '/ECSRivendell/'
    PolicyName = 'ECSRivendellInstanceRole'
    Description = 'This policy is used to grant EC2 instances access to ECS-related API calls. It enables EC2 instances to push and pull from ECR, and most importantly, register with our ECS Cluster.'
    PolicyDocument = @'
{
    "Version": "2012-10-17",
    "Statement": [
        {
        "Effect": "Allow",
        "Action": [
            "ecs:CreateCluster",
            "ecs:DeregisterContainerInstance",
            "ecs:DiscoverPollEndpoint",
            "ecs:Poll",
            "ecs:RegisterContainerInstance",
            "ecs:StartTelemetrySession",
            "ecs:Submit*",
            "ecr:GetAuthorizationToken",
            "ecr:BatchCheckLayerAvailability",
            "ecr:GetDownloadUrlForLayer",
            "ecr:BatchGetImage",
            "logs:CreateLogStream",
            "logs:PutLogEvents"
        ],
        "Resource": "*"
        }
    ]
}
'@
}
### Capture the result; the policy's ARN is referenced later
$NewIAMPolicy = New-IAMPolicy @IAMPolicy

Create a container instance IAM role

The next part is easy. You just need to create an IAM role. This role has a name, an optional path, an optional description, and, most importantly, an “AssumeRolePolicyDocument”, also known as the IAM “trust policy”. This trust policy enables the EC2 instances to use, or “assume”, the IAM role, and use its policies to access AWS APIs. Without this trust policy in place, your EC2 instances won’t be able to use the role or the permissions granted to it by the IAM policy.

$IAMRole = @{
    RoleName = 'ECSRivendell'
    Path = '/ECSRivendell/'
    Description = 'This IAM role grants the container instances that are part of the ECSRivendell ECS cluster access to various AWS services, required to operate the ECS Cluster.'
    AssumeRolePolicyDocument = @'
{
    "Version": "2008-10-17",
    "Statement": [
        {
        "Sid": "",
        "Effect": "Allow",
        "Principal": {
            "Service": "ec2.amazonaws.com"
        },
        "Action": "sts:AssumeRole"
        }
    ]
}
'@
}
### Capture the result; the role name is referenced below
$NewIAMRole = New-IAMRole @IAMRole

Register the IAM policy with the IAM role

Now you simply need to associate the IAM policy that you created with the IAM role. This association is very easy to make with PowerShell, using the Register-IAMRolePolicy command. If necessary, you can associate more than one policy with an IAM role, to grant or deny additional permissions.

$RolePolicy = @{
    PolicyArn = $NewIAMPolicy.Arn
    Role = $NewIAMRole.RoleName
}
Register-IAMRolePolicy @RolePolicy
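
If you’d like to confirm the attachment, the Get-IAMAttachedRolePolicyList command (which maps to the ListAttachedRolePolicies API) lists the managed policies attached to a role.

### Optional: confirm that the policy is now attached to the IAM role
Get-IAMAttachedRolePolicyList -RoleName $NewIAMRole.RoleName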

Create an IAM instance profile

With the IAM role prepared, you now need to create the IAM instance profile. The IAM instance profile is the actual object that gets associated with your EC2 instances. It ultimately grants the EC2 instance permission to call AWS APIs directly, without any stored credentials.

$InstanceProfile = @{
    InstanceProfileName = 'ECSRivendellInstanceRole'
    Path = '/ECSRivendell/'
}
$NewInstanceProfile = New-IAMInstanceProfile @InstanceProfile

Associate the IAM role with the instance profile

Finally, you need to associate the IAM role with the instance profile.

$RoleInstanceProfile = @{
    InstanceProfileName = $NewInstanceProfile.InstanceProfileName
    RoleName = $NewIAMRole.RoleName
}
$null = Add-IAMRoleToInstanceProfile @RoleInstanceProfile
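
As a quick sanity check, you can retrieve the instance profile and confirm that the role now appears in its Roles collection.

### Optional: verify the role-to-instance-profile association
(Get-IAMInstanceProfile -InstanceProfileName 'ECSRivendellInstanceRole').Roles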

Add EC2 compute capacity for Windows

Now that you’ve created an empty ECS cluster and have pre-staged your IAM configuration, you need to add some EC2 compute instances running Windows Server to it. These are the actual Windows Server instances that will run containers (tasks) via the ECS cluster scheduler.

To provision EC2 capacity in a frugal and scalable fashion, we create an EC2 Auto Scaling group that uses Spot instances. Spot instances are one of my favorite services in AWS, because you can provision a significant amount of compute capacity at a huge discount by bidding on unused capacity. There is some risk that you might need to design around, because Spot instances can be stopped or terminated if the Spot price exceeds your bid. However, depending on your needs, you can run powerful applications on EC2, for a low price, using EC2 Spot instances.

Create the Auto Scaling launch configuration

Before you can create the Auto Scaling group itself, you need to create what’s known as an Auto Scaling “launch configuration”. The launch configuration is essentially a blueprint for Auto Scaling groups. It stores a variety of input parameters that define how new EC2 instances will look every time the Auto Scaling group scales up, and spins up a new instance. For example, you need to specify:

  • The Amazon Machine Image (AMI) that instances will be launched from.
    • NOTE: Be sure you use the Microsoft Windows Server 2016 Base with Containers image for Windows Server container instances.
  • The EC2 instance type (size, vCPUs, memory).
  • Whether to enable or disable public IP addresses for EC2 instances.
  • The bid price for Spot instances (optional, but recommended).
  • The EC2 User Data – PowerShell script to bootstrap instance configuration.

Although the code snippet below might look a little scary, it basically contains an EC2 user data script, which I copied directly from the AWS beta documentation for Windows. This automatically installs the ECS container agent into new instances that are brought online by your Auto Scaling Group, and registers them with your ECS cluster.

$ClusterName = 'ECSRivendell'
$UserDataScript = @'
<powershell>
## The placeholder 'YOURCLUSTERNAME' below is swapped for your actual cluster name
## by the -replace operation at the end of this here-string

# Set agent env variables for the Machine context (durable)
[Environment]::SetEnvironmentVariable("ECS_CLUSTER", "YOURCLUSTERNAME", "Machine")
[Environment]::SetEnvironmentVariable("ECS_ENABLE_TASK_IAM_ROLE", "true", "Machine")
$agentVersion = 'v1.14.5'
$agentZipUri = "https://s3.amazonaws.com/amazon-ecs-agent/ecs-agent-windows-$agentVersion.zip"
$agentZipMD5Uri = "$agentZipUri.md5"


### --- Nothing user configurable after this point ---
$ecsExeDir = "$env:ProgramFiles\Amazon\ECS"
$zipFile = "$env:TEMP\ecs-agent.zip"
$md5File = "$env:TEMP\ecs-agent.zip.md5"

### Get the files from Amazon S3
Invoke-RestMethod -OutFile $zipFile -Uri $agentZipUri
Invoke-RestMethod -OutFile $md5File -Uri $agentZipMD5Uri

## MD5 Checksum
$expectedMD5 = (Get-Content $md5File)
$md5 = New-Object -TypeName System.Security.Cryptography.MD5CryptoServiceProvider
$actualMD5 = [System.BitConverter]::ToString($md5.ComputeHash([System.IO.File]::ReadAllBytes($zipFile))).replace('-', '')

if($expectedMD5 -ne $actualMD5) {
    echo "Download doesn't match hash."
    echo "Expected: $expectedMD5 - Got: $actualMD5"
    exit 1
}

## Put the executables in the executable directory
Expand-Archive -Path $zipFile -DestinationPath $ecsExeDir -Force

## Start the agent script in the background
$jobname = "ECS-Agent-Init"
$script =  "cd '$ecsExeDir'; .\amazon-ecs-agent.ps1"
$repeat = (New-TimeSpan -Minutes 1)

$jobpath = $env:LOCALAPPDATA + "\Microsoft\Windows\PowerShell\ScheduledJobs\$jobname\ScheduledJobDefinition.xml"
if($(Test-Path -Path $jobpath)) {
    echo "Job definition already present"
    exit 0

}

$scriptblock = [scriptblock]::Create("$script")
$trigger = New-JobTrigger -At (Get-Date).Date -RepeatIndefinitely -RepetitionInterval $repeat -Once
$options = New-ScheduledJobOption -RunElevated -ContinueIfGoingOnBattery -StartIfOnBattery
Register-ScheduledJob -Name $jobname -ScriptBlock $scriptblock -Trigger $trigger -ScheduledJobOption $options -RunNow
Add-JobTrigger -Name $jobname -Trigger (New-JobTrigger -AtStartup -RandomDelay 00:1:00)

</powershell>
'@ -replace 'YOURCLUSTERNAME', $ClusterName

### EC2 user data must be base64-encoded; this char-to-byte cast assumes an ASCII-only script
$UserDataBase64 = [System.Convert]::ToBase64String(([Byte[]][Char[]] $UserDataScript))

# Create a block device mapping to increase the size of the root volume
$BlockDevice = [Amazon.AutoScaling.Model.BlockDeviceMapping]::new()
$BlockDevice.DeviceName = '/dev/sda1'
$BlockDevice.Ebs = [Amazon.AutoScaling.Model.Ebs]::new()
$BlockDevice.Ebs.DeleteOnTermination = $true
$BlockDevice.Ebs.VolumeSize = 200
$BlockDevice.Ebs.VolumeType = 'gp2'

$LaunchConfig = @{
    LaunchConfigurationName = 'ECSRivendell'
    AssociatePublicIpAddress = $true
    EbsOptimized = $true
    BlockDeviceMapping = $BlockDevice
    InstanceType = 'r4.large'
    SpotPrice = '0.18'
    InstanceMonitoring_Enabled = $true
    IamInstanceProfile = 'ECSRivendellInstanceRole'
    ImageId = 'ami-6a887b12'   ### Windows Server 2016 Base with Containers; AMI IDs vary by region
    UserData = $UserDataBase64
}
$NewLaunchConfig = New-ASLaunchConfiguration @LaunchConfig

Be sure you give your EC2 container instances enough storage to cache container images over time. In my example, I’m instructing the launch configuration to grant a 200 GB Amazon EBS root volume to each EC2 instance that is deployed into the Auto Scaling group, once it’s set up.

Create the Auto Scaling group

Now that you’ve set up the Auto Scaling launch configuration, you can deploy a new Auto Scaling group from it. Once created, the Auto Scaling group actually submits the EC2 Spot request for the desired numbers of EC2 instances when they’re needed. All you have to do is tell the Auto Scaling group how many instances you want, and it handles spinning them up or down.

Optionally, you can even set up Auto Scaling policies so that you don’t have to manually scale the cluster. We’ll save the topic of Auto Scaling policies for another article, however, and instead focus on setting up the ECS cluster.

The PowerShell code to create the Auto Scaling group, from the launch configuration that you built, is included below. Keep in mind that, although I’ve tried to keep the code snippets fairly generic, this particular snippet dynamically retrieves all of your Amazon Virtual Private Cloud (VPC) subnet IDs. It works fine as long as you have only a single VPC in the region you’re currently operating in. If you have more than one VPC, you’ll want to filter the list of subnets returned by the Get-EC2Subnet command down to those in the desired target VPC, as sketched after the snippet below.

$AutoScalingGroup = @{
    AutoScalingGroupName = 'ECSRivendell'
    LaunchConfigurationName = 'ECSRivendell'
    MinSize = 2  
    MaxSize = 5
    DesiredCapacity = 3
    VPCZoneIdentifier = [String]::Join(',', (Get-EC2Subnet).SubnetId)
    Tag = $( $Tag = [Amazon.AutoScaling.Model.Tag]::new()
             $Tag.PropagateAtLaunch = $true; $Tag.Key = 'Name'; $Tag.Value = 'ECS: ECSRivendell'; $Tag )
}
$NewAutoScalingGroup = New-ASAutoScalingGroup @AutoScalingGroup
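
If your account does contain multiple VPCs, a filtered lookup along these lines (the VPC ID shown is just a placeholder) could replace the Get-EC2Subnet call above.

### Restrict the subnet list to a single target VPC (substitute your own VPC ID)
$TargetVpcId = 'vpc-12345678'
$SubnetIds = (Get-EC2Subnet | Where-Object -FilterScript { $PSItem.VpcId -eq $TargetVpcId }).SubnetId
$AutoScalingGroup.VPCZoneIdentifier = [String]::Join(',', $SubnetIds)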

After you run the New-ASAutoScalingGroup snippet, your Auto Scaling group will take a few minutes to deploy. Additionally, it can sometimes take 15-30 minutes for the ECS container agent, running on each of the EC2 instances, to finish registering with the ECS cluster. Once you’ve waited a little while, check out the EC2 instance area of the AWS Management Console and examine your shiny new container instances!

You can also visit the ECS area of the AWS Management Console, and examine the number of container instances that are registered with the ECS cluster. You now have a functioning ECS cluster!

You can even use the Get-ECSClusterDetail command in PowerShell to examine the current count of container instances.

PS /Users/tsulli> (Get-ECSClusterDetail -Cluster ECSRivendell).Clusters

ActiveServicesCount               : 0
ClusterArn                        : arn:aws:ecs:us-west-2:676655494252:cluster/ECSRivendell
ClusterName                       : ECSRivendell
PendingTasksCount                 : 0
RegisteredContainerInstancesCount : 3
RunningTasksCount                 : 0
Status                            : ACTIVE

Conclusion

In this article, you set up an EC2 Container Service (ECS) cluster with several EC2 instances running Windows Server registered to it. Prior to that, you also set up the IAM policy, IAM role, and instance profile that are necessary prerequisites to ensure that the ECS cluster operates correctly. You used the AWS Tools for PowerShell to accomplish this automation, and learned about key ECS and EC2 Auto Scaling commands.

Although you have a functional ECS Cluster at this point, you haven’t yet deployed any containers (ECS tasks) to it. Keep an eye out for part 2 of this blog post, where you’ll explore building your own custom container images, pushing those container images up to Amazon EC2 Container Registry (Amazon ECR), and running ECS tasks and services.

Writing and Archiving Custom Metrics using Amazon CloudWatch and AWS Tools for PowerShell

This is a guest post from Trevor Sullivan, a Seattle-based Solutions Architect at Amazon Web Services (AWS). Since 2004, Trevor has worked intimately with Microsoft technologies, including PowerShell since its release in 2006. In this article, Trevor takes you through the process of using the AWS Tools for PowerShell to write and export metrics data from Amazon CloudWatch.

Amazon’s CloudWatch service is an umbrella that covers a few major areas: logging, metrics, charting, dashboards, alarms, and events.

I wanted to take a few minutes to cover the CloudWatch Metrics area, specifically as it relates to interacting with metrics from PowerShell. We’ll start off with a discussion and demonstration of how to write metric data into CloudWatch, then move on to how to find existing metrics in CloudWatch, and finally how to retrieve metric data points from a specific metric.

Amazon CloudWatch stores metrics data for up to 15 months. However, you can export data from Amazon CloudWatch into a long-term retention tool of your choice, depending on your requirements for metric data retention and the level of metric granularity you need. Although exported historical data isn’t usable inside CloudWatch after it ages out, you can use other AWS data analytics tools, such as Amazon QuickSight and Amazon Athena, to build reports against your historical data.

Assumptions

For this article, we assume that you have an AWS account. We also assume you understand PowerShell at a fundamental level, have installed PowerShell and the AWS Tools for PowerShell on your platform of choice, and have already set up your AWS credentials file and necessary IAM policies granting access to CloudWatch. We’ll discuss and demonstrate how to call various CloudWatch APIs from PowerShell, so be sure you’re prepared for this topic.

For more information, see the Getting Started guide for AWS Tools for PowerShell.

Write metric data into Amazon CloudWatch

Let’s start by talking about storing custom metrics in CloudWatch.

In the AWS Tools for PowerShell, there’s a command named Write-CWMetricData. This PowerShell command ultimately calls the PutMetricData API to write metrics to Amazon CloudWatch. It’s fairly easy to call this command, as there are only a handful of parameters. However, you should understand how CloudWatch works before attempting to use the command.

  • CloudWatch metrics are stored inside namespaces
  • Metric data points:
    • Must have a name.
    • May have zero or more dimensions.
    • May have a value, time stamp, and unit of measure (e.g., Bytes, BytesPerSecond, Count, and so on).
    • May specify a custom storage resolution (e.g., 1 second or 5 seconds; the default is 60 seconds).
  • In the AWS Tools for PowerShell, you construct one or more MetricDatum .NET objects, before passing these into Write-CWMetricData.

With that conceptual information out of the way, let’s look at the simplest way to create a custom metric. Writing metric data points into CloudWatch is how you create a metric. There isn’t a separate operation to create a metric and then write data points into it.

### First, we create one or more MetricDatum objects
$Metric = [Amazon.CloudWatch.Model.MetricDatum]::new()
$Metric.MetricName = 'UserCount'
$Metric.Value = 98

### Second, we write the metric data points to a CloudWatch metrics namespace
Write-CWMetricData -MetricData $Metric -Namespace trevortest/tsulli.loc

If you have lots of metrics to track, and you’d prefer to avoid cluttering up your top-level metric namespaces, this is where metric dimensions can come in handy. For example, let’s say we want to track a “UserCount” metric for over 100 different Active Directory domains. We can store them all under a single namespace, but create a “DomainName” dimension, on each metric, whose value is the name of each Active Directory domain.

Here’s a PowerShell code example that shows how to write a metric to CloudWatch, with a dimension. Although the samples we’ve looked at in this article show how to write a single metric, you should strive to reduce the number of disparate AWS API calls that you make from your application code. Try to consolidate the gathering and writing of multiple metric data points in the same PutMetricData API call, as an array of MetricDatum objects. Your application will perform better, with fewer HTTP connections being created and destroyed, and you’ll still be able to gather as many metrics as you want.

$Metric = [Amazon.CloudWatch.Model.MetricDatum]::new()
### Create a metric dimension, and set its name and value
$Dimension = [Amazon.CloudWatch.Model.Dimension]::new()
$Dimension.Name = 'DomainName'
$Dimension.Value = 'awstrevor.loc'

$Metric.MetricName = 'UserCount'
$Metric.Value = 76
### NOTE: Be sure you assign the Dimension object to the Dimensions property of the MetricDatum object
$Metric.Dimensions = $Dimension
Write-CWMetricData -MetricData $Metric -Namespace trevortest
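
To put the batching advice from earlier into practice, here’s a sketch (with illustrative domain names and random placeholder values) that gathers several MetricDatum objects and writes them all with a single PutMetricData call.

### Gather multiple data points, then write them in one API call
$Metrics = foreach ($Domain in @('tsulli.loc', 'awstrevor.loc')) {
    $Dimension = [Amazon.CloudWatch.Model.Dimension]::new()
    $Dimension.Name = 'DomainName'
    $Dimension.Value = $Domain

    $Datum = [Amazon.CloudWatch.Model.MetricDatum]::new()
    $Datum.MetricName = 'UserCount'
    $Datum.Value = Get-Random -Minimum 50 -Maximum 150   ### placeholder value for illustration
    $Datum.Dimensions = $Dimension
    $Datum
}
Write-CWMetricData -MetricData $Metrics -Namespace trevortest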

Retrieve a list of metrics from Amazon CloudWatch

Now that we’ve written custom metrics to CloudWatch, let’s discuss how we search for metrics. Over time, you might find that you have thousands or tens of thousands of metrics in your AWS accounts, across various regions. As a result, it’s imperative that you know how to locate metrics relevant to your analysis project.

You can, of course, use the AWS Management Console to view metric namespaces, and explore the metrics and metric dimensions contained within each namespace. Although this approach will help you gain initial familiarity with the platform, you’ll most likely want to use automation to help you find relevant data within the platform. Automation is especially important when you introduce metrics across multiple AWS Regions and multiple AWS accounts, as they can be harder to find via a graphical interface.

In the AWS Tools for PowerShell, the Get-CWMetricList command maps to the AWS ListMetrics API. This returns a list of high-level information about the metrics stored in CloudWatch. If you have lots of metrics stored in your account, you might get back a very large list. Thankfully, PowerShell has some generic sorting and filtering commands that can help you find the metrics you’re seeking, with some useful filtering parameters on the Get-CWMetricList command itself.

Let’s explore a few examples of how to use this command.

Starting with the simplest example, we’ll retrieve a list of all the CloudWatch metrics from the current AWS account and region.

Get-CWMetricList

If the results of this command are a little overwhelming, that’s okay. We can filter the returned metrics down to a specific metric namespace, using the -Namespace parameter.

Get-CWMetricList -Namespace AWS/Lambda

What if you don’t know which metric namespaces exist? PowerShell provides a useful command that enables you to filter for unique values.

(Get-CWMetricList).Namespace | Select-Object -Unique

If these results aren’t in alphabetical order, it might be hard to visually scan through them, so let’s sort them.

(Get-CWMetricList).Namespace | Select-Object -Unique | Sort-Object

Much better! Another option is to search for metrics based on a dimension key-value pair. It’s a bit more typing, but it’s a useful construct for searching through thousands of metrics. You can even write a simple wrapper PowerShell function to make it easier to construct these DimensionFilter objects; a sketch of one follows the next example.

$Filter = [Amazon.CloudWatch.Model.DimensionFilter]::new()
$Filter.Name = 'DomainName'
$Filter.Value = 'tsulli.loc'
Get-CWMetricList -Dimension $Filter
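
Here’s a minimal sketch of the wrapper function suggested above (the function name is my own invention, not an AWS command).

### A small convenience wrapper around DimensionFilter construction
function New-CWDimensionFilter {
    param (
        [Parameter(Mandatory = $true)]
        [string] $Name,

        [string] $Value
    )
    $Filter = [Amazon.CloudWatch.Model.DimensionFilter]::new()
    $Filter.Name = $Name
    if ($Value) { $Filter.Value = $Value }
    $Filter
}

### The dimension search now becomes a one-liner
Get-CWMetricList -Dimension (New-CWDimensionFilter -Name DomainName -Value tsulli.loc)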

If you know the name of a specific metric, you can query for a list of metrics that match that name. You might get back multiple results if multiple metrics with the same name, but different dimensions, exist in the same namespace. You can also have similarly named metrics across multiple namespaces, with or without dimensions.

Get-CWMetricList -MetricName UserCount

PowerShell’s built-in, generic Where-Object command is infinitely useful for finding metrics or namespaces when you don’t know their exact, full names.

This example shows how to filter for any metric names that contain “User”.

Get-CWMetricList | Where-Object -FilterScript { $PSItem.MetricName -match 'User' }

Filtering metrics by namespace is just as easy. Let’s search for metrics that are stored inside any metric namespace that ends with “EBS”.

Get-CWMetricList | Where-Object -FilterScript { $PSItem.Namespace -match 'EBS$' }

That’s likely enough examples of how to find metrics in CloudWatch, using PowerShell! Let’s move on and talk about pulling actual metric data points from CloudWatch, using PowerShell.

Pull metric data from CloudWatch

Concepts

Metric data is stored in CloudWatch for a finite period of time. Before metrics age out of CloudWatch, metric data points (metric “statistics”) move through a tiered system where they are aggregated and stored as less granular metric data points. For example, metrics gathered on a per-minute period are aggregated and stored as five-minute metrics, when they reach an age of fifteen (15) days. You can find detailed information about the aggregation process and retention period in the Amazon CloudWatch metrics documentation.

Data aggregation in CloudWatch, as of this writing, starts when metric data points reach an age of three (3) hours. If you want to keep the most detailed resolution of your metric data points, you need to ensure that you export them before they age into the next tier. Services such as AWS Lambda, or even PowerShell applications deployed onto Amazon EC2 Container Service (ECS), can help you achieve this export process in a scalable fashion.

The longest period that metrics are stored in CloudWatch is 15 months. If you want to store metrics data beyond a 15-month period, you must query the metric data before CloudWatch aggregates it, and store it in an alternate repository, such as Amazon DynamoDB, Amazon S3, or Amazon RDS.

PowerShell Deep Dive

Now that we’ve covered some of the conceptual topics around retrieving and archiving CloudWatch metrics, let’s look at the actual PowerShell command to retrieve data points.

The AWS Tools for PowerShell include a command named Get-CWMetricStatistic, which maps to the GetMetricStatistics API in AWS. You can use this command to retrieve granular data points from your CloudWatch metrics.

There are quite a few parameters that you need to specify on this command, because you are querying a potentially massive dataset. You need to be very specific about the metric namespace, name, start time, end time, period, and statistic that you want to retrieve.

Let’s find the metric data points for the Active Directory UserCount metric for the past 60 minutes, at one-minute granularity. We assign the API response to a variable, so we can dig into it further. You most likely don’t have this metric, but I’ve been gathering it for a while, so I’ve got roughly a week’s worth of data at a per-minute level. Of course, my metrics are subject to the built-in aggregation policies, so my per-minute data is only good for up to 15 days.

$Data = Get-CWMetricStatistic -Namespace ActiveDirectory/tsulli.loc -ExtendedStatistic p0.0 -MetricName UserCount -StartTime ([DateTime]::UtcNow.AddHours(-1)) -EndTime ([DateTime]::UtcNow) -Period 60

As you can see, the command is a little lengthy, but when you use PowerShell’s tab completion to help finish the command and parameter names, typing it out isn’t too bad.

Because we’re querying the most recent hour’s worth of data, we should have exactly 60 data points in our response. We can confirm this by examining PowerShell’s built-in Count property on the Datapoints property in our API response.

$Data.Datapoints.Count

The data points aren’t in chronological order when returned, so let’s use PowerShell to sort them, and grab the most recent data point.

$Data.Datapoints | Sort-Object -Property Timestamp | Select-Object -Last 1

Average            : 0
ExtendedStatistics : {[p0.0, 19]}
Maximum            : 0
Minimum            : 0
SampleCount        : 0
Sum                : 0
Timestamp          : 10/14/17 4:27:00 PM
Unit               : Count

Now we can take this data and start exporting it into our preferred external data store, for long-term retention! I’ll leave it to you to explore this API further.
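
As one minimal sketch of that export step, you could flatten the data points to a CSV file (a stand-in for whichever data store you prefer).

### Sort the data points chronologically and archive them to a CSV file
$Data.Datapoints |
    Sort-Object -Property Timestamp |
    Select-Object -Property Timestamp, Unit,
        @{ Name = 'p0.0'; Expression = { $PSItem.ExtendedStatistics['p0.0'] } } |
    Export-Csv -Path ~/UserCount-archive.csv -NoTypeInformation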

Due to the detailed options available in the GetMetricStatistics API, I would strongly encourage you to read through the documentation, and more importantly, run your own experiments with the API. You need to use this API extensively, if you want to export data points from CloudWatch metrics to an alternative data source, as described earlier.

Conclusion

In this article, we’ve explored the use of the AWS Tools for PowerShell to assist you with writing metrics to Amazon CloudWatch, searching or querying for metrics, and retrieving metric data points. I would encourage you to think about how Amazon CloudWatch can integrate with a variety of other services, to help you achieve your objectives.

Don’t forget that after you’ve stored metrics data in CloudWatch, you can then build dashboards around that data, connect CloudWatch alarms to notify you about infrastructure and application issues, and even perform automated remediation tasks. The sky is the limit, so put on your builder’s hat and start creating!

Please feel free to follow me on Twitter, and keep an eye on my YouTube channel.