

Deploy an Amazon ECS Cluster Running Windows Server with AWS Tools for PowerShell – Part 1

This is a guest post from Trevor Sullivan, a Seattle-based Solutions Architect at Amazon Web Services (AWS). In this blog post, Trevor shows you how to deploy a Windows Server-based container cluster using the AWS Tools for PowerShell.

Building and deploying applications on the Windows Server platform is becoming a significantly lighter-weight process. Although you might be accustomed to developing monolithic applications on Windows, and scaling vertically instead of horizontally, you might want to rethink how you design and build Windows-based apps. Now that application containers are a native feature in the Windows Server platform, you can design your applications in a similar fashion to how developers targeting the Linux platform have designed them for years.

Throughout this blog post, we explore how to automate the deployment of a Windows Server container cluster, managed by the Amazon EC2 Container Service (Amazon ECS). The architecture of the cluster can help you efficiently and cost-effectively scale your containerized application components.

Due to the large amount of information contained in this blog post, we’ve separated it into two parts. Part 1 guides you through the process of deploying an ECS cluster. Part 2 covers the creation and deployment of your own custom container images to the cluster.

Assumptions

In this blog post, we assume the following:

  • You’re using a Windows, Mac, or Linux system with PowerShell Core installed and you’ve installed the AWS Tools for PowerShell Core.
  • Or you’re using a Windows 7 or later system, with Windows PowerShell 4.0 or later installed (use $PSVersionTable to check your PowerShell version) and have installed the AWS Tools for PowerShell (aka AWSPowerShell module).
  • You already have access to an AWS account.
  • You’ve created an AWS Identity and Access Management (IAM) user account with administrative policy permissions, and generated an access key ID and secret key.
  • You’ve configured your AWS access key ID and secret key in your ~/.aws/credentials file.

This article was authored, and the code tested, on a MacBook Pro with PowerShell Core Edition and the AWSPowerShell.NetCore module.

NOTE: In the PowerShell code examples, I’m using a technique known as PowerShell Splatting to pass parameters into various PowerShell commands. Splatting syntax helps your PowerShell code look cleaner, enables auditing of parameter values prior to command invocation, and improves code maintenance.
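For example, the cluster creation command in the next section could be written either way; the hashtable below is just an illustration of the pattern.

# Parameters collected in a hashtable (the variable name is arbitrary)
$Cluster = @{
    ClusterName = 'ECSRivendell'
}
# The @ sign "splats" the hashtable onto the command's parameters,
# which is equivalent to: New-ECSCluster -ClusterName ECSRivendell
New-ECSCluster @Cluster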

Create your Amazon ECS cluster

The first task is to create an empty ECS cluster. Right now, the ECS beta support for Windows Server containers doesn't let you provision the cluster's compute layer at cluster creation time. In a moment, you'll provision some Windows Server-based compute capacity on Amazon EC2 and associate it with your empty ECS cluster.

To create the empty ECS cluster with the AWS Tools for PowerShell, use the following command.

New-ECSCluster -ClusterName ECSRivendell

Set up an IAM role for Amazon EC2 instances

Amazon EC2 instances can have IAM “roles” assigned to them. By associating one or more IAM policies with an IAM role, which is assigned to an EC2 instance by way of an “instance profile”, you can grant access to various services in AWS directly to the EC2 instance, without having to embed and manage any credentials. AWS handles that for you, via the IAM role. Whenever the EC2 instance is running, it has access to the IAM role that’s assigned to it, and can make calls to various AWS APIs that it’s authorized to call via associated IAM policies.

Before you actually deploy the Windows Server compute layer for your ECS cluster, you must first perform the following preparation steps:

  • Create an IAM policy that defines the required permissions for ECS container instances.
  • Create an IAM role.
  • Register the IAM policy with the role.
  • Create an IAM instance profile, which will be associated with your EC2 instances.
  • Associate the IAM role with the instance profile.

Create an IAM policy for your EC2 container instances

First, you use PowerShell to create the IAM policy. I simply copied and pasted the IAM policy that’s documented in the Amazon ECS documentation. This IAM policy will be associated with the IAM role that will be associated with your EC2 container instances, which actually run your containers via ECS. This policy is especially important, because it grants your container instances access to Amazon EC2 Container Registry (Amazon ECR). This is a private storage area for your container images, similar to the Docker Store.

$IAMPolicy = @{
    Path = '/ECSRivendell/'
    PolicyName = 'ECSRivendellInstanceRole'
    Description = 'This policy is used to grant EC2 instances access to ECS-related API calls. It enables EC2 instances to push and pull from ECR, and most importantly, register with our ECS Cluster.'
    PolicyDocument = @'
{
    "Version": "2012-10-17",
    "Statement": [
        {
        "Effect": "Allow",
        "Action": [
            "ecs:CreateCluster",
            "ecs:DeregisterContainerInstance",
            "ecs:DiscoverPollEndpoint",
            "ecs:Poll",
            "ecs:RegisterContainerInstance",
            "ecs:StartTelemetrySession",
            "ecs:Submit*",
            "ecr:GetAuthorizationToken",
            "ecr:BatchCheckLayerAvailability",
            "ecr:GetDownloadUrlForLayer",
            "ecr:BatchGetImage",
            "logs:CreateLogStream",
            "logs:PutLogEvents"
        ],
        "Resource": "*"
        }
    ]
    }
'@
}
$NewIAMPolicy = New-IAMPolicy @IAMPolicy

Create a container instance IAM role

The next part is easy. You just need to create an IAM role. This role has a name, optional path, optional description and, the important part, an “AssumeRolePolicyDocument”. This is also known as the IAM “trust policy”. This IAM trust policy enables the EC2 instances to use or “assume” the IAM role, and use its policies to access AWS APIs. Without this trust policy in place, your EC2 instances won’t be able to use this role and the permissions granted to it by the IAM policy.

$IAMRole = @{
    RoleName = 'ECSRivendell'
    Path = '/ECSRivendell/'
    Description = 'This IAM role grants the container instances that are part of the ECSRivendell ECS cluster access to various AWS services, required to operate the ECS Cluster.'
    AssumeRolePolicyDocument = @'
{
    "Version": "2008-10-17",
    "Statement": [
        {
        "Sid": "",
        "Effect": "Allow",
        "Principal": {
            "Service": "ec2.amazonaws.com"
        },
        "Action": "sts:AssumeRole"
        }
    ]
    }
'@
}
$NewIAMRole = New-IAMRole @IAMRole

Register the IAM policy with the IAM role

Now you simply need to associate the IAM policy that you created with the IAM role. This association is very easy to make with PowerShell, using the Register-IAMRolePolicy command. If necessary, you can attach additional policies to your IAM role to grant or deny further permissions.

$RolePolicy = @{
    PolicyArn = $NewIAMPolicy.Arn
    Role = $NewIAMRole.RoleName
}
Register-IAMRolePolicy @RolePolicy

Create an IAM instance profile

With the IAM role prepared, you now need to create the IAM instance profile. The IAM instance profile is the actual object that gets associated with your EC2 instances. This IAM instance profile ultimately grants the EC2 instance permission to call AWS APIs directly, without any stored credentials.

$InstanceProfile = @{
    InstanceProfileName = 'ECSRivendellInstanceRole'
    Path = '/ECSRivendell/'
}
$NewInstanceProfile = New-IAMInstanceProfile @InstanceProfile

Associate the IAM role with the instance profile

Finally, you need to associate the IAM role with the instance profile.

$RoleInstanceProfile = @{
    InstanceProfileName = $NewInstanceProfile.InstanceProfileName
    RoleName = $NewIAMRole.RoleName
}
$null = Add-IAMRoleToInstanceProfile @RoleInstanceProfile

Add EC2 compute capacity for Windows

Now that you’ve created an empty ECS cluster and have pre-staged your IAM configuration, you need to add some EC2 compute instances running Windows Server to it. These are the actual Windows Server instances that will run containers (tasks) via the ECS cluster scheduler.

To provision EC2 capacity in a frugal and scalable fashion, we create an EC2 Auto Scaling group that uses Spot instances. Spot instances are one of my favorite features in AWS, because you can provision a significant amount of compute capacity at a steep discount by bidding on unused capacity. There is some risk to design around, because Spot instances can be stopped or terminated if the current Spot price exceeds your bid. However, depending on your needs, you can run powerful applications on EC2, for a low price, using EC2 Spot instances.

Create the Auto Scaling launch configuration

Before you can create the Auto Scaling group itself, you need to create what’s known as an Auto Scaling “launch configuration”. The launch configuration is essentially a blueprint for Auto Scaling groups. It stores a variety of input parameters that define how new EC2 instances will look every time the Auto Scaling group scales up, and spins up a new instance. For example, you need to specify:

  • The Amazon Machine Image (AMI) that instances will be launched from.
    • NOTE: Be sure you use the Microsoft Windows Server 2016 Base with Containers image for Windows Server container instances.
  • The EC2 instance type (size, vCPUs, memory).
  • Whether to enable or disable public IP addresses for EC2 instances.
  • The bid price for Spot instances (optional, but recommended).
  • The EC2 User Data – PowerShell script to bootstrap instance configuration.

Although the code snippet below might look a little scary, it basically contains an EC2 user data script, which I copied directly from the AWS beta documentation for Windows. This automatically installs the ECS container agent into new instances that are brought online by your Auto Scaling Group, and registers them with your ECS cluster.

$ClusterName = 'ECSRivendell'
$UserDataScript = @'

## The string 'YOURCLUSTERNAME' below is replaced with your cluster name by the -replace at the end of this here-string

# Set agent env variables for the Machine context (durable)
[Environment]::SetEnvironmentVariable("ECS_CLUSTER", "YOURCLUSTERNAME", "Machine")
[Environment]::SetEnvironmentVariable("ECS_ENABLE_TASK_IAM_ROLE", "true", "Machine")
$agentVersion = 'v1.14.5'
$agentZipUri = "https://s3.amazonaws.com/amazon-ecs-agent/ecs-agent-windows-$agentVersion.zip"
$agentZipMD5Uri = "$agentZipUri.md5"


### --- Nothing user configurable after this point ---
$ecsExeDir = "$env:ProgramFiles\Amazon\ECS"
$zipFile = "$env:TEMP\ecs-agent.zip"
$md5File = "$env:TEMP\ecs-agent.zip.md5"

### Get the files from Amazon S3
Invoke-RestMethod -OutFile $zipFile -Uri $agentZipUri
Invoke-RestMethod -OutFile $md5File -Uri $agentZipMD5Uri

## MD5 Checksum
$expectedMD5 = (Get-Content $md5File)
$md5 = New-Object -TypeName System.Security.Cryptography.MD5CryptoServiceProvider
$actualMD5 = [System.BitConverter]::ToString($md5.ComputeHash([System.IO.File]::ReadAllBytes($zipFile))).replace('-', '')

if($expectedMD5 -ne $actualMD5) {
    echo "Download doesn't match hash."
    echo "Expected: $expectedMD5 - Got: $actualMD5"
    exit 1
}

## Put the executables in the executable directory
Expand-Archive -Path $zipFile -DestinationPath $ecsExeDir -Force

## Start the agent script in the background
$jobname = "ECS-Agent-Init"
$script =  "cd '$ecsExeDir'; .\amazon-ecs-agent.ps1"
$repeat = (New-TimeSpan -Minutes 1)

$jobpath = $env:LOCALAPPDATA + "\Microsoft\Windows\PowerShell\ScheduledJobs\$jobname\ScheduledJobDefinition.xml"
if($(Test-Path -Path $jobpath)) {
    echo "Job definition already present"
    exit 0

}

$scriptblock = [scriptblock]::Create("$script")
$trigger = New-JobTrigger -At (Get-Date).Date -RepeatIndefinitely -RepetitionInterval $repeat -Once
$options = New-ScheduledJobOption -RunElevated -ContinueIfGoingOnBattery -StartIfOnBattery
Register-ScheduledJob -Name $jobname -ScriptBlock $scriptblock -Trigger $trigger -ScheduledJobOption $options -RunNow
Add-JobTrigger -Name $jobname -Trigger (New-JobTrigger -AtStartup -RandomDelay 00:1:00)

true
'@ -replace 'YOURCLUSTERNAME', $ClusterName

$UserDataBase64 = [System.Convert]::ToBase64String(([Byte[]][Char[]] $UserDataScript))

# Create a block device mapping to increase the size of the root volume
$BlockDevice = [Amazon.AutoScaling.Model.BlockDeviceMapping]::new()
$BlockDevice.DeviceName = '/dev/sda1'
$BlockDevice.Ebs = [Amazon.AutoScaling.Model.Ebs]::new()
$BlockDevice.Ebs.DeleteOnTermination = $true
$BlockDevice.Ebs.VolumeSize = 200
$BlockDevice.Ebs.VolumeType = 'gp2'

$LaunchConfig = @{
    LaunchConfigurationName = 'ECSRivendell'
    AssociatePublicIpAddress = $true
    EbsOptimized = $true
    BlockDeviceMapping = $BlockDevice
    InstanceType = 'r4.large'
    SpotPrice = '0.18'
    InstanceMonitoring_Enabled = $true
    IamInstanceProfile = 'ECSRivendellInstanceRole'
    ImageId = 'ami-6a887b12'
    UserData = $UserDataBase64
}
$NewLaunchConfig = New-ASLaunchConfiguration @LaunchConfig

Be sure you give your EC2 container instances enough storage to cache container images over time. In my example, I’m instructing the launch configuration to grant a 200 GB Amazon EBS root volume to each EC2 instance that is deployed into the Auto Scaling group, once it’s set up.

Create the Auto Scaling group

Now that you’ve set up the Auto Scaling launch configuration, you can deploy a new Auto Scaling group from it. Once created, the Auto Scaling group submits the EC2 Spot request for the desired number of EC2 instances when they’re needed. All you have to do is tell the Auto Scaling group how many instances you want, and it handles spinning them up or down.

Optionally, you can even set up Auto Scaling policies so that you don’t have to manually scale the cluster. We’ll save the topic of Auto Scaling policies for another article, however, and instead focus on setting up the ECS cluster.

The PowerShell code to create the Auto Scaling group, from the launch configuration that you built, is included below. Keep in mind that although I’ve tried to keep the code snippets fairly generic, this particular snippet dynamically retrieves all of your Amazon Virtual Private Cloud (VPC) subnet IDs. It works fine as long as you only have a single VPC in the region you’re currently operating in. If you have more than one VPC, you’ll want to filter the list of subnets returned by the Get-EC2Subnet command down to those contained in the desired target VPC.
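If you do need to filter, here’s a minimal sketch that restricts the subnet list to a single VPC; the VPC ID is a placeholder you’d replace with your own.

# Placeholder: replace with the ID of your target VPC
$VpcId = 'vpc-12345678'
# Keep only the subnets that belong to that VPC, then join them for VPCZoneIdentifier
$SubnetIds = (Get-EC2Subnet | Where-Object { $PSItem.VpcId -eq $VpcId }).SubnetId
[String]::Join(',', $SubnetIds)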

$AutoScalingGroup = @{
    AutoScalingGroupName = 'ECSRivendell'
    LaunchConfigurationName = 'ECSRivendell'
    MinSize = 2  
    MaxSize = 5
    DesiredCapacity = 3
    VPCZoneIdentifier = [String]::Join(',', (Get-EC2Subnet).SubnetId)
    Tag = $( $Tag = [Amazon.AutoScaling.Model.Tag]::new()
             $Tag.PropagateAtLaunch = $true; $Tag.Key = 'Name'; $Tag.Value = 'ECS: ECSRivendell'; $Tag )
}
$NewAutoScalingGroup = New-ASAutoScalingGroup @AutoScalingGroup

After running this code snippet, your Auto Scaling Group will take a few minutes to deploy. Additionally, it can sometimes take 15-30 minutes for the ECS container agent, running on each of the EC2 instances, to finish registering with the ECS cluster. Once you’ve waited a little while, check out the EC2 instance area of the AWS Management Console and examine your shiny new container instances!

You can also visit the ECS area of the AWS Management Console, and examine the number of container instances that are registered with the ECS cluster. You now have a functioning ECS cluster!

You can even use the Get-ECSClusterDetail command in PowerShell to examine the current count of container instances.

PS /Users/tsulli> (Get-ECSClusterDetail -Cluster ECSRivendell).Clusters

ActiveServicesCount               : 0
ClusterArn                        : arn:aws:ecs:us-west-2:676655494252:cluster/ECSRivendell
ClusterName                       : ECSRivendell
PendingTasksCount                 : 0
RegisteredContainerInstancesCount : 3
RunningTasksCount                 : 0
Status                            : ACTIVE

Conclusion

In this article, you set up an EC2 Container Service (ECS) cluster with several EC2 instances running Windows Server registered to it. Prior to that, you also set up the IAM policy, IAM role, and instance profile that are necessary prerequisites to ensure that the ECS cluster operates correctly. You used the AWS Tools for PowerShell to accomplish this automation, and learned about key ECS and EC2 Auto Scaling commands.

Although you have a functional ECS Cluster at this point, you haven’t yet deployed any containers (ECS tasks) to it. Keep an eye out for part 2 of this blog post, where you’ll explore building your own custom container images, pushing those container images up to Amazon EC2 Container Registry (Amazon ECR), and running ECS tasks and services.

Writing and Archiving Custom Metrics using Amazon CloudWatch and AWS Tools for PowerShell

This is a guest post from Trevor Sullivan, a Seattle-based Solutions Architect at Amazon Web Services (AWS). Since 2004, Trevor has worked intimately with Microsoft technologies, including PowerShell since its release in 2006. In this article, Trevor takes you through the process of using the AWS Tools for PowerShell to write and export metrics data from Amazon CloudWatch.

Amazon’s CloudWatch service is an umbrella that covers a few major areas: logging, metrics, charting, dashboards, alarms, and events.

I wanted to take a few minutes to cover the CloudWatch Metrics area, specifically as it relates to interacting with metrics from PowerShell. We’ll start off with a discussion and demonstration of how to write metric data into CloudWatch, then move on to how to find existing metrics in CloudWatch, and finally how to retrieve metric data points from a specific metric.

Amazon CloudWatch stores metrics data for up to 15 months. However, you can export data from Amazon CloudWatch into a long-term retention tool of your choice, depending on your requirements for metric data retention and the required level of metric granularity. Although historical, exported data may not be usable inside CloudWatch after it ages out, you can use other AWS data analytics tools, such as Amazon QuickSight and Amazon Athena, to build reports against your historical data.

Assumptions

For this article, we assume that you have an AWS account. We also assume you understand PowerShell at a fundamental level, have installed PowerShell and the AWS Tools for PowerShell on your platform of choice, and have already set up your AWS credentials file and necessary IAM policies granting access to CloudWatch. We’ll discuss and demonstrate how to call various CloudWatch APIs from PowerShell, so be sure you’re prepared for this topic.

For more information, see the Getting Started guide for AWS Tools for PowerShell.

Write metric data into Amazon CloudWatch

Let’s start by talking about storing custom metrics in CloudWatch.

In the AWS Tools for PowerShell, there’s a command named Write-CWMetricData. This PowerShell command ultimately calls the PutMetricData API to write metrics to Amazon CloudWatch. It’s fairly easy to call this command, as there are only a handful of parameters. However, you should understand how CloudWatch works before attempting to use the command.

  • CloudWatch metrics are stored inside namespaces
  • Metric data points:
    • Must have a name.
    • May have zero or more dimensions.
    • May have a value, time stamp, and unit of measure (e.g., Bytes, BytesPerSecond, Count, etc.).
    • May specify a custom storage resolution (e.g., 1 second or 5 seconds; the default is 60 seconds).
  • In the AWS Tools for PowerShell, you construct one or more MetricDatum .NET objects, before passing these into Write-CWMetricData.

With that conceptual information out of the way, let’s look at the simplest way to create a custom metric. Writing metric data points into CloudWatch is how you create a metric. There isn’t a separate operation to create a metric and then write data points into it.

### First, we create one or more MetricDatum objects
$Metric = [Amazon.CloudWatch.Model.MetricDatum]::new()
$Metric.MetricName = 'UserCount'
$Metric.Value = 98

### Second, we write the metric data points to a CloudWatch metrics namespace
Write-CWMetricData -MetricData $Metric -Namespace trevortest/tsulli.loc

If you have lots of metrics to track, and you’d prefer to avoid cluttering up your top-level metric namespaces, this is where metric dimensions can come in handy. For example, let’s say we want to track a “UserCount” metric for over 100 different Active Directory domains. We can store them all under a single namespace, and add a “DomainName” dimension to each metric whose value is the name of the Active Directory domain.

Here’s a PowerShell code example that shows how to write a metric to CloudWatch, with a dimension. Although the samples we’ve looked at in this article show how to write a single metric, you should strive to reduce the number of disparate AWS API calls that you make from your application code. Try to consolidate the gathering and writing of multiple metric data points in the same PutMetricData API call, as an array of MetricDatum objects. Your application will perform better, with fewer HTTP connections being created and destroyed, and you’ll still be able to gather as many metrics as you want.

$Metric = [Amazon.CloudWatch.Model.MetricDatum]::new()
### Create a metric dimension, and set its name and value
$Dimension = [Amazon.CloudWatch.Model.Dimension]::new()
$Dimension.Name = 'DomainName'
$Dimension.Value = 'awstrevor.loc'

$Metric.MetricName = 'UserCount'
$Metric.Value = 76
### NOTE: Be sure you assign the Dimension object to the Dimensions property of the MetricDatum object
$Metric.Dimensions = $Dimension
Write-CWMetricData -MetricData $Metric -Namespace trevortest
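To illustrate the batching advice above, here’s a sketch that writes several data points in a single call by passing an array of MetricDatum objects to Write-CWMetricData; the domain list and metric values are made up for the example.

$MetricList = foreach ($Domain in 'tsulli.loc', 'awstrevor.loc') {
    $Datum = [Amazon.CloudWatch.Model.MetricDatum]::new()
    $Datum.MetricName = 'UserCount'
    $Datum.Value = Get-Random -Minimum 50 -Maximum 150   # placeholder metric values
    $Dimension = [Amazon.CloudWatch.Model.Dimension]::new()
    $Dimension.Name = 'DomainName'
    $Dimension.Value = $Domain
    $Datum.Dimensions = $Dimension
    $Datum
}
### One PutMetricData call for all of the data points
Write-CWMetricData -MetricData $MetricList -Namespace trevortest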

Retrieve a list of metrics from Amazon CloudWatch

Now that we’ve written custom metrics to CloudWatch, let’s discuss how we search for metrics. Over time, you might find that you have thousands or tens of thousands of metrics in your AWS accounts, across various regions. As a result, it’s imperative that you know how to locate metrics relevant to your analysis project.

You can, of course, use the AWS Management Console to view metric namespaces, and explore the metrics and metric dimensions contained within each namespace. Although this approach will help you gain initial familiarity with the platform, you’ll most likely want to use automation to help you find relevant data within the platform. Automation is especially important when you introduce metrics across multiple AWS Regions and multiple AWS accounts, as they can be harder to find via a graphical interface.

In the AWS Tools for PowerShell, the Get-CWMetricList command maps to the AWS ListMetrics API. This returns a list of high-level information about the metrics stored in CloudWatch. If you have lots of metrics stored in your account, you might get back a very large list. Thankfully, PowerShell has some generic sorting and filtering commands that can help you find the metrics you’re seeking, with some useful filtering parameters on the Get-CWMetricList command itself.

Let’s explore a few examples of how to use this command.

Starting with the simplest example, we’ll retrieve a list of all the CloudWatch metrics from the current AWS account and region.

Get-CWMetricList

If the results of this command are a little overwhelming, that’s okay. We can filter the returned metrics down to a specific metric namespace, using the -Namespace parameter.

Get-CWMetricList -Namespace AWS/Lambda

What if you don’t know which metric namespaces exist? PowerShell provides a useful command that enables you to filter for unique values.

(Get-CWMetricList).Namespace | Select-Object -Unique

If these results aren’t in alphabetical order, it might be hard to visually scan through them, so let’s sort them.

(Get-CWMetricList).Namespace | Select-Object -Unique | Sort-Object

Much better! Another option is to search for metrics based on a dimension key-value pair. It’s a bit more typing, but it’s a useful construct to search through thousands of metrics. You can even write a simple wrapper PowerShell function to make it easier to construct one of these DimensionFilter objects.

$Filter = [Amazon.CloudWatch.Model.DimensionFilter]::new()
$Filter.Name = 'DomainName'
$Filter.Value = 'tsulli.loc'
Get-CWMetricList -Dimension $Filter
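As suggested above, a small wrapper function can hide the object construction. This is only a sketch, and the function name is arbitrary.

function New-CWDimensionFilter {
    param (
        [string] $Name,
        [string] $Value
    )
    $Filter = [Amazon.CloudWatch.Model.DimensionFilter]::new()
    $Filter.Name = $Name
    $Filter.Value = $Value
    $Filter
}

### The same query as above, now a one-liner
Get-CWMetricList -Dimension (New-CWDimensionFilter -Name DomainName -Value tsulli.loc)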

If you know the name of a specific metric, you can query for a list of metrics that match that name. You might get back multiple results, if there are multiple metrics with the same name, but different dimensions exist in the same namespace. You can also have similarly named metrics across multiple namespaces, with or without dimensions.

Get-CWMetricList -MetricName UserCount

PowerShell’s built-in, generic Where-Object command is infinitely useful in finding metrics or namespaces, if you don’t know their exact, full name.

This example shows how to filter for any metric names that contain “User”.

Get-CWMetricList | Where-Object -FilterScript { $PSItem.MetricName -match 'User' }

Filtering metrics by namespace is just as easy. Let’s search for metrics that are stored inside any metric namespace that ends with “EBS”.

Get-CWMetricList | Where-Object -FilterScript { $PSItem.Namespace -match 'EBS$' }

That’s likely enough examples of how to find metrics in CloudWatch, using PowerShell! Let’s move on and talk about pulling actual metric data points from CloudWatch, using PowerShell.

Pull metric data from CloudWatch

Concepts

Metric data is stored in CloudWatch for a finite period of time. Before metrics age out of CloudWatch, metric data points (metric “statistics”) move through a tiered system where they are aggregated and stored as less granular metric data points. For example, metrics gathered on a per-minute period are aggregated and stored as five-minute metrics, when they reach an age of fifteen (15) days. You can find detailed information about the aggregation process and retention period in the Amazon CloudWatch metrics documentation.

Data aggregation in CloudWatch, as of this writing, starts when metric data points reach an age of three (3) hours. You need to ensure that you’re exporting your metric data before your data is aged, if you want to keep the most detailed resolution of your metric data points. Services such as AWS Lambda or even PowerShell applications deployed onto Amazon EC2 Container Service (ECS) can help you achieve this export process in a scalable fashion.

The longest period that metrics are stored in CloudWatch is 15 months. If you want to store metrics data beyond a 15-month period, you must query the metrics data before CloudWatch performs aggregation on your metrics, and store it in an alternate repository, such as Amazon DynamoDB, Amazon S3, or Amazon RDS.

PowerShell Deep Dive

Now that we’ve covered some of the conceptual topics around retrieving and archiving CloudWatch metrics, let’s look at the actual PowerShell command to retrieve data points.

The AWS Tools for PowerShell include a command named Get-CWMetricStatistic, which maps to the GetMetricStatistics API in AWS. You can use this command to retrieve granular data points from your CloudWatch metrics.

There are quite a few parameters that you need to specify on this command, because you are querying a potentially massive dataset. You need to be very specific about the metric namespace, name, start time, end time, period, and statistic that you want to retrieve.

Let’s find the metric data points for the Active Directory UserCount metric, for the past 60 minutes, at one data point per minute. We assign the API response to a variable, so we can dig into it further. You most likely don’t have this metric, but I’ve been gathering it for a while, so I have roughly a week’s worth of data at a per-minute level. Of course, my metrics are subject to the built-in aggregation policies, so my per-minute data is only good for up to 15 days.

$Data = Get-CWMetricStatistic -Namespace ActiveDirectory/tsulli.loc -ExtendedStatistic p0.0 -MetricName UserCount -StartTime ([DateTime]::UtcNow.AddHours(-1)) -EndTime ([DateTime]::UtcNow) -Period 60

As you can see, the command is a little lengthy, but when you use PowerShell’s tab completion to help finish the command and parameter names, typing it out isn’t too bad.

Because we’re querying the most recent hour’s worth of data, we should have exactly 60 data points in our response. We can confirm this by examining PowerShell’s built-in Count property on the Datapoints property in our API response.

$Data.Datapoints.Count

The data points aren’t in chronological order when returned, so let’s use PowerShell to sort them, and grab the most recent data point.

$Data.Datapoints | Sort-Object -Property Timestamp | Select-Object -Last 1
Average            : 0
ExtendedStatistics : {[p0.0, 19]}
Maximum            : 0
Minimum            : 0
SampleCount        : 0
Sum                : 0
Timestamp          : 10/14/17 4:27:00 PM
Unit               : Count

Now we can take this data and start exporting it into our preferred external data store, for long-term retention! I’ll leave it to you to explore this API further.
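As a minimal illustration of that export step, you could serialize the data points to JSON and write them to Amazon S3 with Write-S3Object; the bucket name and key prefix below are placeholders.

### Placeholder bucket name -- replace with a bucket you own
$BucketName = 'my-metrics-archive'
$Key = "cloudwatch/UserCount/$([DateTime]::UtcNow.ToString('yyyyMMddTHHmmss')).json"
$Json = $Data.Datapoints | Sort-Object -Property Timestamp | ConvertTo-Json -Depth 4
Write-S3Object -BucketName $BucketName -Key $Key -Content $Json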

Due to the detailed options available in the GetMetricStatistics API, I would strongly encourage you to read through the documentation, and more importantly, run your own experiments with the API. You need to use this API extensively, if you want to export data points from CloudWatch metrics to an alternative data source, as described earlier.

Conclusion

In this article, we’ve explored the use of the AWS Tools for PowerShell to assist you with writing metrics to Amazon CloudWatch, searching or querying for metrics, and retrieving metric data points. I would encourage you to think about how Amazon CloudWatch can integrate with a variety of other services, to help you achieve your objectives.

Don’t forget that after you’ve stored metrics data in CloudWatch, you can then build dashboards around that data, connect CloudWatch alarms to notify you about infrastructure and application issues, and even perform automated remediation tasks. The sky is the limit, so put on your builder’s hat and start creating!

Please feel free to follow me on Twitter, and keep an eye on my YouTube channel.

Deploying .NET Web Applications Using AWS CodeDeploy with Visual Studio Team Services

Today’s post is from AWS Solution Architect Aravind Kodandaramaiah.

We recently announced the new AWS Tools for Microsoft Visual Studio Team Services. In this post, we show you how you can use these tools to deploy your .NET web applications from Team Services to Amazon EC2 instances by using AWS CodeDeploy.

We don’t cover setting up the tools in Team Services in this post. We assume you already have a Team Services account or are using an on-premises TFS instance. We also assume you know how to push your source code to the repository used in your Team Services build. You can use the AWS tasks in build or in release definitions. For simplicity, this post shows you how to use the tasks in a build definition.

AWS CodeDeploy is a service that automates code deployments to any instance, including Amazon EC2 instances and instances running on-premises. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during deployment, and handles the complexity of updating your applications. You can use AWS CodeDeploy to automate deployments, eliminating error-prone manual operations. The service also scales with your infrastructure, so you can easily deploy to one instance or a thousand.

Setting up an AWS environment

Before we get started configuring a build definition within Team Services, we need to set up an AWS environment. Follow these steps to set up the AWS environment to enable deployments using the AWS CodeDeploy Application Deployment task with Team Services.

  1. Provision an AWS Identity and Access Management (IAM) user. See the AWS documentation for how to prepare IAM users to use AWS CodeDeploy. We’ll configure the access key ID and secret key ID of this IAM user within Team Services to initiate the deployment.
  2. Create a service role for AWS CodeDeploy. See the AWS documentation for details about how to create a service role for AWS CodeDeploy. This service role provides AWS CodeDeploy access to your AWS resources for deployment.
  3. Create an IAM instance profile for EC2 instances. See the AWS documentation for details about how to create IAM instance profiles. An IAM instance profile provides applications running within an EC2 instance access to AWS services.
  4. Launch, tag, and configure EC2 instances. Follow these instructions for launching and configuring EC2 instances to work with AWS CodeDeploy. We’ll use these EC2 Instances to run the deployed application.
  5. Create an Amazon S3 bucket to store application revisions. See the AWS documentation for details about creating S3 buckets.
  6. Create an AWS CodeDeploy application and a deployment group. After you configure instances, but before you can deploy a revision, you must create an application and deployment group in AWS CodeDeploy. AWS CodeDeploy uses a deployment group to identify EC2 instances that need the application to be deployed. Tags help group EC2 instances into a deployment group.

We’ve outlined the manual steps to create the AWS environment. However, it’s possible to completely automate creation of such AWS environments by using AWS CloudFormation, and this is the recommended approach. AWS CloudFormation enables you to represent infrastructure as code and to perform predictable, repeatable, and automated deployments. This enables you to control and track changes to your infrastructure.

Setting up an AWS CodeDeploy environment

In our Git repository in Team Services, we have an ASP.NET web application, the AWS CodeDeploy AppSpec file, and a few deployment PowerShell scripts. These files are located at the root of the repo, as shown below.

The appspec.yml file is a YAML-formatted file that AWS CodeDeploy uses to determine the artifacts to install and the lifecycle events to run. You must place it in the root of an application’s source code directory structure. Here’s the content of the sample appspec.yml file.

version: 0.0
os: windows
files:
  - source: \
    destination: C:\temp\WebApp\MusicWorld
  
hooks:
  BeforeInstall:
    - location: .\StageArtifact.ps1
    - location: .\CreateWebSite.ps1

AWS CodeDeploy executes the PowerShell scripts before copying the revision files to the final destination folder. These PowerShell scripts register the web application with IIS, and copy the application files to the physical path associated with the IIS web application.

The StageArtifact.ps1 file is a PowerShell script that unpacks the Microsoft Web Deploy (msdeploy) web artifact, and copies the application files to the physical path that is associated with the IIS web application.

$target = "C:\inetpub\wwwroot\MusicWorld\" 

function DeleteIfExistsAndCreateEmptyFolder($dir )
{
    if ( Test-Path $dir ) {    
           Get-ChildItem -Path $dir -Force -Recurse | Remove-Item -Force -Recurse
           Remove-Item $dir -Force
    }
    New-Item -ItemType Directory -Force -Path $dir
}
# Clean up target directory
DeleteIfExistsAndCreateEmptyFolder($target )

# msdeploy creates a web artifact with multiple levels of folders. We only need the content 
# of the folder that contains the web application (identified here by its Global.asax file)
function GetWebArtifactFolderPath($path)
{
    foreach ($item in Get-ChildItem $path)
    {   
        if (Test-Path $item.FullName -PathType Container)
        {   
            # return the full path for the folder which contains Global.asax
            if (Test-Path ($item.fullname + "\Global.asax"))
            {
                #$item.FullName
                return $item.FullName;
            }
            GetWebArtifactFolderPath $item.FullName
        }
    }
}

$path = GetWebArtifactFolderPath("C:\temp\WebApp\MusicWorld")
$path2 = $path + "\*"
Copy-Item $path2 $target -recurse -force

The CreateWebSite.ps1 file is a PowerShell script that creates a web application in IIS.

New-WebApplication -Site "Default Web Site" -Name MusicWorld -PhysicalPath c:\inetpub\wwwroot\MusicWorld -Force

Setting up the build definition for an ASP.NET web application

The AWS CodeDeploy Application Deployment task in the AWS Tools for Microsoft Visual Studio Team Services supports deployment of any type of application, as long as you register the deployment script in the appspec.yml file. In this post, we deploy ASP.NET applications that are packaged as a Web Deploy archive.

We use the ASP.NET build template to get an ASP.NET application built and packaged as a Web Deploy archive.

Be sure to set up the following MSBuild arguments within the Build Solution task.

/p:WebPublishMethod=Package /p:PackageAsSingleFile=false /p:SkipInvalidConfigurations=true /p:PackageLocation="$(build.artifactstagingdirectory)\\publish\\application"

Remove the Publish Artifacts task from the build pipeline, as the build artifacts will be uploaded into Amazon S3 by the AWS CodeDeploy Application Deployment task.

Now that our application has been built using MSBuild, we need to copy the appspec.yml file and the PowerShell deployment scripts to the root of the publish folder. This is so that AWS CodeDeploy can find the appspec.yml file at the root of the application folder.

Add a new Copy Files task. Choose Add Task, and then search for “Copy”. On the found task, choose Add to include the task in the build definition.

Configure this task to copy the appspec.yml file and the PowerShell scripts to the parent folder of the “packagelocation” defined within the Build Solution task. This step allows the AWS CodeDeploy Application Deployment task to zip up the contents of the revision bundle recursively before uploading the archive to Amazon S3.

Next, add the AWS CodeDeploy Application Deployment task. Choose Add Task, and then search for “CodeDeploy”. On the found task, choose Add to include the task in the build definition.

For the AWS CodeDeploy Application Deployment task, make the following configuration changes:

  • AWS Credentials – The AWS credentials used to perform the deployment. Our previous post on the Team Services tools discusses setting up AWS credentials in Team Services. We recommend that the credentials be those for an IAM user, with a policy that enables the user to perform an AWS CodeDeploy deployment.
  • AWS Region – The AWS Region that AWS CodeDeploy is running in.
  • Application Name – The name of the AWS CodeDeploy application.
  • Deployment Group Name – The name of the deployment group to deploy to.
  • Revision Bundle – The artifacts to deploy. You can supply a folder or a file name to this parameter. If you supply a folder, the task will zip the contents of the folder recursively into an archive file before uploading the archive to Amazon S3. If you supply a file name, the task uploads it, unmodified, to Amazon S3. Note that AWS CodeDeploy requires the appspec.yml file describing the application to be located at the root of the specified folder or archive file.
  • Bucket Name – The name of the bucket to which the revision bundle will be uploaded. The target Amazon S3 bucket must exist in the same AWS Region as the target instance.
  • Target Folder – Optional folder (key prefix) for the uploaded revision bundle in the bucket. If you don’t specify a target folder, the bundle will be uploaded to the root of the bucket.

Now that we’ve configured all the tasks, we’re ready to deploy to Amazon EC2 instances using AWS CodeDeploy. If you queue a build now, you should see output similar to this for the deployment.

Next, navigate to the AWS CodeDeploy console and choose Deployments. Choose the deployment that AWS CodeDeploy completed to view the deployment progress. In this example, the web application has been deployed to all the EC2 instances within the deployment group.

If your AWS CodeDeploy application was created with a load balancer, you can verify the web application deployment by navigating to the DNS name of the load balancer using a web browser.

Conclusion

We hope Visual Studio Team Services users find the AWS CodeDeploy Application Deployment task helpful and easy to use. We also appreciate hearing your feedback on our GitHub repository for these Team Services tasks.

New Get-ECRLoginCommand for AWS Tools for PowerShell

Today’s post is from AWS Solution Architect and Microsoft MVP for Cloud and Data Center Management, Trevor Sullivan.

The AWS Tools for PowerShell now offer a new command that makes it easier to authenticate to the Amazon EC2 Container Registry (Amazon ECR).

Amazon EC2 Container Registry (ECR) is a service that enables customers to upload and store their Windows-based and Linux-based container images. Once a developer uploads these container images, they can then be deployed to stand-alone container hosts or container clusters, such as those running under the Amazon EC2 Container Service (Amazon ECS).

To push or pull container images from ECR, you must authenticate to the registry using the Docker API. ECR provides a GetAuthorizationToken API that retrieves the credential you’ll use to authenticate to ECR. In the AWS PowerShell modules, this API is mapped to the cmdlet Get-ECRAuthorizationToken. The response you receive from this service invocation includes a username and password for the registry, encoded as base64. To retrieve the credential, you must decode the base64 response into a byte array, and then decode the byte array as a UTF-8 string. After retrieving the UTF-8 string, the username and password are provided to you in a colon-delimited format. You simply split the string on the colon character to receive the username as array index 0, and the password as array index 1.
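For reference, the manual decoding looks roughly like the following in PowerShell. This sketch assumes the cmdlet emits the AuthorizationData objects from the response (verify the shape of the output with Get-Member).

$AuthData  = Get-ECRAuthorizationToken | Select-Object -First 1
$TokenText = [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($AuthData.AuthorizationToken))
$UserName, $Password = $TokenText.Split(':')
# $AuthData.ProxyEndpoint holds the registry URL to pass to docker login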

Now, with Get-ECRLoginCommand, you can retrieve a pregenerated Docker login command that authenticates your container hosts to ECR. Although you can still directly call the GetAuthorizationToken API, Get-ECRLoginCommand provides a helpful shortcut that reduces the amount of required conversion effort.

Let’s look at a short example of how you can use this new command from PowerShell:

PS> Invoke-Expression -Command (Get-ECRLoginCommand -Region us-west-2).Command
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded

As you can see, all you have to do is call Get-ECRLoginCommand, and then pass the prebuilt Command property into the built-in Invoke-Expression PowerShell cmdlet. After running this, you’re authenticated to ECR, and can proceed to create image repositories and push and pull container images.

Note: You might receive a warning about specifying the registry password on the Docker CLI. However, you can also build your own Docker login command by using the other properties on the object returned from the Get-ECRLoginCommand.
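For example, here’s a rough sketch that keeps the password off the command line, assuming the returned object exposes Username, Password, and Endpoint properties (check the exact property names with Get-Member).

$Login = Get-ECRLoginCommand -Region us-west-2
$Login.Password | docker login --username $Login.Username --password-stdin $Login.Endpoint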

I hope you find the new cmdlet useful! If you have ideas for other cmdlets we should add, be sure to let us know in the comments.

Using AWS KMS Master Keys with the AmazonS3EncryptionClient in the AWS SDK for .NET

by John Vellozzi

You can now use an AWS KMS key as your master key when you use the AmazonS3EncryptionClient class for client-side encryption.

The main advantage of using an AWS KMS key as your master key is that you don’t need to store and manage your own master keys. It’s done by AWS. A second advantage of the new feature is that it makes the AWS SDK for .NET’s AmazonS3EncryptionClient class interoperable with the AWS SDK for Java’s AmazonS3EncryptionClient class.* This means you can encrypt with the AWS SDK for Java and decrypt with the AWS SDK for .NET, and vice versa.

For more information about client-side encryption with the AmazonS3EncryptionClient class, and how envelope encryption works, see our original blog post.

The following examples demonstrate how to use AWS KMS keys with the AmazonS3EncryptionClient class. Note that your project must reference the latest version of the AWSSDK.KeyManagementService NuGet package to use this feature.
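If your project doesn’t reference that package yet, the usual NuGet Package Manager Console command is shown below; this pulls the latest version, so pin a specific version if your build requires it.

Install-Package AWSSDK.KeyManagementService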

// Encryption
// ----------
var bucketName = "bucket";
var kmsKeyID = "kms-key-id";
var objectKey = "key";
var objectContent = "object content";

var kmsEncryptionMaterials = new EncryptionMaterials(kmsKeyID);
// CryptoStorageMode.ObjectMetadata is required for KMS EncryptionMaterials
var config = new AmazonS3CryptoConfiguration()
{
    StorageMode = CryptoStorageMode.ObjectMetadata
};

using (var client = new AmazonS3EncryptionClient(config, kmsEncryptionMaterials))
{
    var request = new PutObjectRequest
    {
        BucketName = bucketName,
        Key = objectKey,
        ContentBody = objectContent
    };
    client.PutObject(request);
}

// Decryption
// ----------
var bucketName = "bucket";
var kmsKeyID = "kms-key-id";
var objectKey = "key";
string objectContent = null;

var kmsEncryptionMaterials = new EncryptionMaterials(kmsKeyID);
// CryptoStorageMode.ObjectMetadata is required for KMS EncryptionMaterials
var config = new AmazonS3CryptoConfiguration()
{
    StorageMode = CryptoStorageMode.ObjectMetadata
};

using (var client = new AmazonS3EncryptionClient(config, kmsEncryptionMaterials))
{
    var request = new GetObjectRequest
    {
        BucketName = bucketName,
        Key = objectKey
    };

    using (var response = client.GetObject(request))
    using (var stream = response.ResponseStream)
    using (var reader = new StreamReader(stream))
    {
        objectContent = reader.ReadToEnd();
    }
}
// use objectContent

*The AWS SDK for .NET’s AmazonS3EncryptionClient only supports KMS master keys when run in metadata mode. The instruction file mode of the AWS SDK for .NET’s AmazonS3EncryptionClient is still incompatible with the AWS SDK for Java’s AmazonS3EncryptionClient.

Using the AWS_PROFILE Environment Variable to Choose a Profile

by John Vellozzi

In an upcoming release of the AWS SDK for .NET, the FallbackCredentialsFactory class and the FallbackRegionFactory class will allow the use of the AWS_PROFILE environment variable.

The SDK currently looks for a profile named “default” when retrieving credentials and region settings. After this change is released, users will be able to set the AWS_PROFILE environment variable to the name of a profile for the FallbackCredentialsFactory and FallbackRegionFactory classes to search for. We’re making this change to bring the AWS SDK for .NET into alignment with the way that the AWS CLI already works.

This is a significant change for users who already have the AWS_PROFILE environment variable set to something other than “default”. Currently, if you have set AWS_PROFILE, the SDK ignores it. After this change, the SDK will no longer ignore the AWS_PROFILE environment variable. This means that if the profile named by AWS_PROFILE is different from the “default” profile, your application will start using different credentials or a different region.
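For example, once this change ships, and assuming your credentials file defines a profile named “dev” (the profile name here is purely illustrative), you could select it for a PowerShell session before starting your application:

# ~/.aws/credentials (on Windows: %USERPROFILE%\.aws\credentials)
# [dev]
# aws_access_key_id     = <access key ID for the dev profile>
# aws_secret_access_key = <secret key for the dev profile>

$env:AWS_PROFILE = 'dev'
dotnet run   # the SDK's fallback logic now resolves the 'dev' profile instead of 'default'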

AWS Elastic Beanstalk Updated with .NET Core 2.0

We are happy to announce that Elastic Beanstalk’s Windows platform now supports .NET Core 2.0, which is preinstalled along with the previous 1.0 and 1.1 versions of .NET Core. The easiest way to get started with deploying ASP.NET Core, the web framework for .NET Core, is to use the AWS Toolkit for Visual Studio, which allows you to deploy right from Visual Studio. This earlier blog post contains a tutorial for deploying ASP.NET Core applications with Visual Studio.

When writing ASP.NET Core applications, check out these NuGet packages. The AWSSDK.Extensions.NETCore.Setup package integrates the AWS SDK for .NET with ASP.NET Core’s dependency injection framework. To help with logging, the AWS.Logger.AspNetCore package integrates ASP.NET Core’s logging framework with CloudWatch Logs, making it easy to view the stream of logs coming from your applications.

Working with Lambda Functions and Visual Studio Team Services

by Norm Johanson

In previous posts, we talked about our new AWS Tools for Microsoft Visual Studio Team Services (VSTS), which provides AWS tasks you can add to your build definition. We also talked about our AWS Elastic Beanstalk task that you can use to deploy .NET web applications. In our initial release, there are also two tasks to interact with AWS Lambda.

Deploying .NET Core Lambda functions

To deploy Lambda .NET Core functions, we provide the AWS Lambda .NET Core Deployment task. This task uses our dotnet CLI integration, which means your Lambda project needs a DotNetCliToolReference reference to Amazon.Lambda.Tools. If you created your Lambda project from one of the templates provided by AWS, the reference is already set up. If the reference is not set, add the following to the Lambda project’s .csproj file.

<ItemGroup>
  <DotNetCliToolReference Include="Amazon.Lambda.Tools" Version="1.7.0" />
</ItemGroup>

In addition to the usual credential and region properties, the task requires two parameters. The first is the Path to Lambda Project, which you should set to the file path of the .csproj file for the Lambda project. If you’re still using Visual Studio 2015, you can set this to the directory containing the project.json file.

The other required parameter is the Command property. When you use the Lambda integration with the dotnet CLI there are two commands for deployment. The deploy-function command will package a single function and upload it to the Lambda service. The other command, deploy-serverless, packages a project that can contain a collection of functions and uses AWS CloudFormation to deploy the Lambda functions. The Command property controls whether you use deploy-function or deploy-serverless.

All the other parameters are optional, because they could already be set in the project’s aws-lambda-tools-defaults.json file. That means if required parameters for the deploy-function or deploy-serverless commands are missing from both aws-lambda-tools-defaults.json and the VSTS build definition, the task will fail.
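Under the hood, the task drives the same Amazon.Lambda.Tools integration with the dotnet CLI that you can run locally from the project directory. A rough sketch is shown below; the project path and function name are placeholders, and any remaining settings come from aws-lambda-tools-defaults.json.

cd ./src/MyLambdaProject            # placeholder path to the Lambda project
dotnet lambda deploy-function MyFunction
# or, for a project with a serverless template:
dotnet lambda deploy-serverless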

Note: We highly recommend that you update the version of Amazon.Lambda.Tools to version 1.7.0 or later. When you use the dotnet CLI integration and there are parameters missing from the command line, the integration will prompt for them. In the context of a VSTS build, prompting would obviously be inappropriate. Version 1.7.0 added the ability to suppress prompting for missing parameters.

Invoking the Lambda function

Once you have your Lambda function deployed, you can use the AWS Lambda Invoke Function task to execute Lambda functions from your build definition (an equivalent direct PowerShell call is sketched after this list). To invoke a function, you need to set the following properties:

  • Function Name – The name of the Lambda function.
  • Payload – Optional JSON document to pass to the Lambda function.
  • Invocation Type – Selector you use to decide if the function is called synchronously, which will return the results, or asynchronously.
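For comparison, the same invocation can be made directly with the AWS Tools for PowerShell; this is only an illustrative call with a placeholder function name and payload.

Invoke-LMFunction -FunctionName MyFunction -Payload '{ "Source": "VSTSBuild" }' -InvocationType Event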

Conclusion

By using these tasks, you can automate .NET Core Lambda deployment and trigger Lambda functions as part of your build to continue your build process through AWS. Hopefully, you will find these tasks useful. As always, we greatly appreciate your feedback and you can reach out to us on our VSTS GitHub repository.

Deploying .NET Web Applications Using AWS Elastic Beanstalk with Visual Studio Team Services

by Norm Johanson

We recently announced the new AWS Tools for Microsoft Visual Studio Team Services. Today let’s take a deeper look at how you can use the new tools to support deploying your .NET web applications from Team Services to AWS Elastic Beanstalk.

Elastic Beanstalk uses environments to run .NET web applications. Before using this task, you first need to create an environment. You can do this from the Elastic Beanstalk console with a sample application, or by using the AWS Toolkit for Visual Studio with an initial version of your application.

In this post, we won’t cover setting up the tools in Team Services. However, we’re going to assume you already have a Team Services account and know how to push your source code to the repository used in your Team Services build. This post focuses on build definitions that show how to use the AWS Elastic Beanstalk Deployment task.

Setting up the build definition for an ASP.NET application

The AWS Elastic Beanstalk Deployment task in the AWS Tools for Team Services supports either ASP.NET applications that are packaged as a Web Deploy archive, or ASP.NET Core applications published with the dotnet publish command. First let’s take a look at using an ASP.NET application.

Probably the easiest way to get an ASP.NET application built and packaged as a Web Deploy archive is to use the ASP.NET build template and remove the Publish Artifacts task. This task will be replaced with the AWS Elastic Beanstalk Deployment task.

If you already have a build definition, be sure your Build Solution task has the following MSBuild arguments. These arguments ensure the web application is packaged as a Web Deploy archive and placed into the build artifacts staging directory.


/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation="$(build.artifactstagingdirectory)\\"

Now that the project is building, we need to add the AWS Elastic Beanstalk Deployment task. Choose Add Task, and then search for “Beanstalk”. On the found task, choose Add to include the task in the build definition.

For the new task, we need to make the following configuration changes:

  • AWS Credentials: The AWS credentials used to perform the deployment. Our previous post on the Team Services tools discusses setting up AWS credentials in Team Services. For this task, the credentials should be for an AWS Identity and Access Management (IAM) user, with a policy that enables the user to update an Elastic Beanstalk environment and describe an environment status and events.
  • AWS Region: The AWS Region that the Elastic Beanstalk environment is running in.
  • Application Type: Set to ASP.NET.
  • Web Deploy Archive: The path to the Web Deploy archive. If the archive was created using the arguments above, the file will have the same name as the directory that contains the web application, and will have a .zip extension. You can find it in the build artifacts staging directory, which can be referenced as $(build.artifactstagingdirectory).
  • Beanstalk Application Name: The name you used to create the Elastic Beanstalk application. An Elastic Beanstalk application is the container for a collection of environments running the .NET web application.
  • Beanstalk Environment Name: The name you used to create the Elastic Beanstalk environment. An Elastic Beanstalk environment contains the actual provisioned resources that are running the .NET web application.

Now that we have configured the task, we’re ready to deploy to Elastic Beanstalk. If you queue a build now, you should see output similar to this for the deployment.

Setting up the build definition for an ASP.NET Core application

For an ASP.NET Core deployment, we don’t use Web Deploy. Instead we need the output folder from the dotnet publish command. The easiest way to get started for this type of deployment is to use the ASP.NET Core build template, and again remove the Publish Artifacts task.

This creates a build definition using the dotnet CLI that restores NuGet dependencies, builds the projects, runs any tests, and then finally executes the dotnet publish command on any web projects. After the publish task, we need to add the AWS Elastic Beanstalk Deployment task the same way we added it for the ASP.NET application.

There are two parameters in the configurations on the AWS Elastic Beanstalk Deployment task that are different for ASP.NET Core:

  • Application Type: Set to ASP.NET Core.
  • Published Application Path: The path to the .zip file archive of the dotnet publish output. If you didn’t choose Zip Published Projects in the publish task, this parameter can point to the directory the dotnet publish command wrote to.

That’s it. Everything else is the same as the ASP.NET deployment.

Conclusion

We hope Visual Studio Team Services users will find the AWS Elastic Beanstalk Deployment task helpful and easy to use. We also appreciate hearing your feedback on our GitHub repository for these Team Services tasks.

CognitoAuthentication Extension Library Developer Preview

by Sam Mousigian | in .NET

We are pleased to announce the Developer Preview of the CognitoAuthentication extension library. This library simplifies the authentication process of Amazon Cognito User Pools for .NET 4.5, .NET Core, and Xamarin developers. Many customers reported that they directly implemented the Secure Remote Password (SRP) protocol themselves. This process requires hundreds of lines of difficult cryptography implementation. Our goal in creating the CognitoAuthentication extension library is to eliminate this hassle and allow you to use the authentication methods for Amazon Cognito User Pools with only a few short method calls.

We also want to make the process more intuitive. Instead of having to read through pages of documentation to know which parameters to send for each type of authentication, we ask for the necessary fields in the corresponding request parameter of each authentication method and then produce the proper service request on your behalf. The library is built on top of the Amazon Cognito Identity Provider API to create and send these API calls to authenticate users. We hope this library helps you use Amazon Cognito User Pools to authenticate users, and we look forward to your feedback.

Getting Started

To set up an AWS account and install the AWS SDK for .NET to take advantage of this library, see Getting Started with the AWS SDK for .NET. Create a new project in Visual Studio and add the CognitoAuthentication extension library as a reference to the project. You can find it in the NuGet gallery as AWSSDK.Extensions.CognitoAuthentication.

Using the library to make calls to the Amazon Cognito Identity Provider API from the AWS SDK for .NET is as simple as creating the necessary CognitoAuthentication objects and calling the appropriate AmazonCognitoIdentityProviderClient methods. The principal Amazon Cognito authentication objects are:

  • CognitoUserPool objects store information about a user pool, including the poolID, clientID, and other pool attributes.
  • CognitoUser objects contain a user’s username, the pool they are associated with, session information, and other user properties.
  • CognitoDevice objects include device information, such as the device key.

You can use AnonymousAWSCredentials when creating the identity provider client, which results in requests not being signed before they are sent to the service. Any service that does not accept unsigned requests returns a service exception in this case. This is appropriate for the product you release to end users, because they should not hold AWS credentials until they are authenticated.
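
For example, the client used in the snippets that follow is created with anonymous credentials like this (mirroring the later examples in this post):

using Amazon.CognitoIdentityProvider;
using Amazon.Runtime;

// Anonymous credentials: the request is sent unsigned, which the Amazon Cognito
// Identity Provider authentication APIs accept for users who aren't signed in yet.
var provider = new AmazonCognitoIdentityProviderClient(
    new AnonymousAWSCredentials(), FallbackRegionFactory.GetRegionEndpoint());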

Authenticating with Secure Remote Password (SRP)

Instead of implementing hundreds of lines of cryptographic methods yourself, you now only need to create the necessary AmazonCognitoIdentityProviderClient, CognitoUserPool, CognitoUser, and InitiateSrpAuthRequest objects and then call StartWithSrpAuthAsync. We made these structures as lightweight as possible while still providing a large amount of functionality. The InitiateSrpAuthRequest currently requires only the user’s password; all other required information is already stored in the CognitoAuthentication objects.

StartWithSrpAuthAsync creates and responds to the USER_SRP_AUTH and PASSWORD_VERIFIER challenges during the authentication flow on your behalf, and then returns an AuthFlowResponse object. The AuthenticationResult property of the AuthFlowResponse object contains the user’s session tokens if the user was successfully authenticated. If more challenge responses are required, this property is null and the ChallengeName property describes the next challenge, such as multi-factor authentication. You would then call the appropriate method to continue the authentication flow. The following code snippet shows how the library performs SRP authentication:

using System.Threading.Tasks;

using Amazon.Runtime;
using Amazon.CognitoIdentityProvider;
using Amazon.Extensions.CognitoAuthentication;

// Returning Task (rather than void) lets callers await the method and observe exceptions.
public async Task AuthenticateWithSrpAsync()
{
    var provider = new AmazonCognitoIdentityProviderClient(new AnonymousAWSCredentials(),
                                                           FallbackRegionFactory.GetRegionEndpoint());
    CognitoUserPool userPool = new CognitoUserPool("poolID", "clientID", provider);
    CognitoUser user = new CognitoUser("username", "clientID", userPool, provider);

    string password = "userPassword";

    AuthFlowResponse context = await user.StartWithSrpAuthAsync(new InitiateSrpAuthRequest()
    {
        Password = password
    }).ConfigureAwait(false);
}

Authenticating with Multiple Forms of Authentication

Continuing the authentication flow with challenges, such as NewPasswordRequired and multi-factor authentication (MFA), is simpler as well. You only need the CognitoAuthentication objects, the user’s password for SRP, and the information for the next challenge, which you acquire by prompting the user for it. The following code shows one way to check the challenge type and get the appropriate responses for the MFA and NewPasswordRequired challenges during the authentication flow:

using System;
using System.Threading.Tasks;

using Amazon.Runtime;
using Amazon.CognitoIdentityProvider;
using Amazon.Extensions.CognitoAuthentication;

public async Task AuthenticateUserAsync()
{
    var provider = new AmazonCognitoIdentityProviderClient(new AnonymousAWSCredentials(),
                                                           FallbackRegionFactory.GetRegionEndpoint());
    CognitoUserPool userPool = new CognitoUserPool("poolID", "clientID", provider);
    CognitoUser user = new CognitoUser("username", "clientID", userPool, provider);

    string password = "userPassword";
    AuthFlowResponse authResponse = null;

    authResponse = await user.StartWithSrpAuthAsync(new InitiateSrpAuthRequest()
    {
        Password = password
    }).ConfigureAwait(false);

    while (authResponse.AuthenticationResult == null)
    {
        if (authResponse.ChallengeName == ChallengeNameType.NEW_PASSWORD_REQUIRED)
        {
            Console.WriteLine("Enter your desired new password:");
            string newPassword = Console.ReadLine();

            authResponse = 
                await user.RespondToNewPasswordRequiredAsync(new RespondToNewPasswordRequiredRequest()
                {
                    SessionID = authResponse.SessionID,
                    NewPassword = newPassword
                }).ConfigureAwait(false);
        }
        else if (authResponse.ChallengeName == ChallengeNameType.SMS_MFA)
        {
            Console.WriteLine("Enter the MFA Code sent to your device:");
            string mfaCode = Console.ReadLine();

            authResponse = await user.RespondToSmsMfaAuthAsync(new RespondToSmsMfaRequest()
            {
                SessionID = authResponse.SessionID,
                MfaCode = mfaCode
            }).ConfigureAwait(false);
        }
        else
        {
            Console.WriteLine("Unrecognized authentication challenge.");
            break;
        }
    }

    if (authResponse.AuthenticationResult != null)
    {
        Console.WriteLine("User successfully authenticated.");
    }
    else
    {
        Console.WriteLine("Error in authentication process.");
    }
}

As in the SRP authentication model, if the user is authenticated after these method calls, the AuthenticationResult property of the authResponse object contains the user’s session tokens. Otherwise, continue prompting the user for the information required by the next authentication challenge, described in the ChallengeName property of authResponse. As shown above, the main values to carry between authentication flow calls are the SessionID and ChallengeName of the AuthFlowResponse.

You should also handle the case of an unrecognized challenge. This can occur when the SDK is out of date and the service has introduced a new authentication challenge, or when the application does not support a certain type of challenge. Here, we simply tell the user that there was an error in the authentication process; however, you can handle this scenario in whatever way best suits your application.
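
For reference, once the AuthenticationResult property is non-null, you can read the session tokens directly from it. The property names below come from the Amazon Cognito Identity Provider model returned by the service:

// Once authenticated, AuthenticationResult carries the user's session tokens.
var result = authResponse.AuthenticationResult;
string idToken = result.IdToken;           // identity token (JWT)
string accessToken = result.AccessToken;   // access token (JWT)
string refreshToken = result.RefreshToken; // refresh token, used to re-authenticate later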

Using AWS Resources After Authentication

Once a user is authenticated using the CognitoAuthentication library, the next step is to allow them to access the appropriate AWS resources. This requires you to create an identity pool through the Amazon Cognito Federated Identities console. By specifying your Amazon Cognito User Pool as a provider, using its poolID and clientID, you can allow your user pool users to access AWS resources connected to your account. You can also specify different roles for unauthenticated and authenticated users so that they can access different resources. You can change these roles in the IAM console, where you can add or remove permissions in the “Action” field of the role’s attached policy.

Then, using the appropriate identity pool, user pool, and Amazon Cognito user information, you can make calls to different AWS resources. The following example shows a user authenticated with SRP listing the Amazon S3 buckets that the associated identity pool’s role permits:

using System;
using System.Threading.Tasks;

using Amazon;
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.CognitoIdentity;
using Amazon.CognitoIdentityProvider;
using Amazon.Extensions.CognitoAuthentication;

public async Task GetS3BucketsAsync()
{
    var provider = new AmazonCognitoIdentityProviderClient(new AnonymousAWSCredentials(),
                                                            FallbackRegionFactory.GetRegionEndpoint());
    CognitoUserPool userPool = new CognitoUserPool("poolID", "clientID", provider);
    CognitoUser user = new CognitoUser("username", "clientID", userPool, provider);

    string password = "userPassword";

    AuthFlowResponse context = await user.StartWithSrpAuthAsync(new InitiateSrpAuthRequest()
    {
        Password = password
    }).ConfigureAwait(false);

    CognitoAWSCredentials credentials = 
        user.GetCognitoAWSCredentials("identityPoolID", RegionEndpoint.<YourIdentityPoolRegion>);

    using (var client = new AmazonS3Client(credentials))
    {
        ListBucketsResponse response = 
            await client.ListBucketsAsync(new ListBucketsRequest()).ConfigureAwait(false);

        foreach (S3Bucket bucket in response.Buckets)
        {
            Console.WriteLine(bucket.BucketName);
        }
    }
}

Other Forms of Authentication

In addition to SRP, NewPasswordRequired, and MFA, the CognitoAuthentication extension library offers an easier authentication flow for:

  • Custom – Begins with a call to StartWithCustomAuthAsync(InitiateCustomAuthRequest customRequest)
  • RefreshToken – Begins with a call to StartWithRefreshTokenAuthAsync(InitiateRefreshTokenAuthRequest refreshTokenRequest)
  • RefreshTokenSRP – Begins with a call to StartWithRefreshTokenAuthAsync(InitiateRefreshTokenAuthRequest refreshTokenRequest)
  • AdminNoSRP – Begins with a call to StartWithAdminNoSrpAuth(InitiateAdminNoSrpAuthRequest adminAuthRequest)

Call the appropriate method depending on the desired flow, and then continue prompting the user with challenges as they are presented in the AuthFlowResponse objects of each method call. Also call the appropriate response method, such as RespondToSmsMfaAuthAsync for MFA challenges and RespondToCustomAuthAsync for custom challenges. If new authentication methods are created for Amazon Cognito User Pools in the future, they will be added to the library as well.
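
As an illustration of one of these flows, here is a minimal sketch of the RefreshToken flow, continuing from the CognitoUser created in the earlier snippets. It assumes you saved the user’s refresh token from a previous sign-in (savedRefreshToken is a placeholder), and it reflects the request and session types exposed by this preview of the library, which may change in later releases:

// A minimal sketch: re-authenticate a CognitoUser with a previously saved refresh token.
// "user" is the CognitoUser from the earlier examples; "savedRefreshToken" is a placeholder.
user.SessionTokens = new CognitoUserSession(null, null, savedRefreshToken,
                                            DateTime.UtcNow, DateTime.UtcNow.AddHours(1));

AuthFlowResponse refreshResponse =
    await user.StartWithRefreshTokenAuthAsync(new InitiateRefreshTokenAuthRequest()
    {
        AuthFlowType = AuthFlowType.REFRESH_TOKEN_AUTH
    }).ConfigureAwait(false);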

Conclusion

We hope you find this preview of the CognitoAuthentication extension library useful as it simplifies the authentication process from hundreds of lines of code to only a few. We appreciate all of your feedback. You can provide public feedback on our GitHub page or our Gitter page. Those who prefer not to give public feedback can reach out to our support team by filing a support ticket through the AWS Management Console.