Serverless ASP.NET Core 2.0 Applications

by Norm Johanson

In our previous post, we announced the release of the .NET Core 2.0 AWS Lambda runtime and new versions of our .NET tooling to help you develop .NET Core 2.0-based serverless applications. Also, with the new .NET Core 2.0 Lambda runtime, we’ve released our ASP.NET Core NuGet Package, Amazon.Lambda.AspNetCoreServer, for general availability.

Version 2.0.0 of the Amazon.Lambda.AspNetCoreServer has been upgraded to target .NET Core 2.0. If you’re already using this library, you need to update your ASP.NET Core project to .NET Core 2.0 before using this latest version.

ASP.NET Core 2.0 has a lot of changes that make running a serverless ASP.NET Core Lambda function even more exciting. These include performance improvements in the ASP.NET Core and underlying .NET Core libraries.

Razor Pages

The Lambda runtime now supports Razor Pages, which means we can deploy both ASP.NET Core Web API and ASP.NET Core web applications. An important change in ASP.NET Core 2.0 is that Razor Pages are now precompiled at publish time, so when your serverless Razor Pages are first rendered, Lambda compute time isn't spent compiling them from cshtml.

Runtime package store

Starting with .NET Core 2.0 there is a new runtime package store feature, which is a cache of NuGet packages already installed on the target deployment platform. These packages have also been pre-jitted, meaning they’re already compiled from .NET’s intermediate language (IL) to machine instructions. This improves startup time when you use these packages. The store also reduces your deployment package size, further improving the cold startup time. For example, our existing ASP.NET Core Web API blueprint for .NET Core 1.0 had a minimum size of about 2.5 MB for the deployment package. For the .NET Core 2.0 version of the blueprint, the size is about 0.5 MB.

To indicate that you want to use the runtime package store for an ASP.NET Core application, you add a NuGet dependency to Microsoft.AspNetCore.All. Adding this dependency makes all of the ASP.NET Core packages and Entity Framework Core packages available to your application. However, it doesn’t include them in the deployment package because they’re already available in Lambda.

The Lambda blueprints that are available in Visual Studio are configured to use Microsoft.AspNetCore.All, just like the Microsoft-provided ASP.NET Core Web project templates inside Visual Studio. If you’re migrating a .NET Core 1.0 project to .NET Core 2.0, I highly recommend swapping out individual ASP.NET Core references to Microsoft.AspNetCore.All.

.NET Core and runtime package store version

Currently, the .NET Core 2.0 Lambda runtime runs .NET Core 2.0.4 and includes version 2.0.3 of Microsoft.AspNetCore.All in the runtime package store. While the .NET Core 2.0 Lambda runtime was rolling out to the AWS Regions, Microsoft released version 2.0.5 of the .NET Core runtime and version 2.0.5 of Microsoft.AspNetCore.All. The Lambda runtime will be updated to include the latest versions shortly. In the meantime, if you update your Microsoft.AspNetCore.All reference to version 2.0.5, the Lambda function will fail to find the dependency when it runs. If you use either the AWS Toolkit for Visual Studio or our dotnet CLI extensions to deploy, and you attempt to deploy with a newer version of Microsoft.AspNetCore.All than is available in Lambda, our packaging prevents the deployment and tells you the latest version you can use with Lambda. This is another reason we recommend using either the AWS Toolkit for Visual Studio or our dotnet CLI extensions to create the Lambda deployment package: they provide that extra verification of your project.

Getting started

The AWS Toolkit for Visual Studio provides two blueprints for ASP.NET Core applications. The first is the ASP.NET Core Web API blueprint, which we updated from the preview in .NET Core 1.0 to take advantage of the new .NET Core 2.0 features. The second is a new ASP.NET Core Web App blueprint, which demonstrates the use of the ASP.NET Core 2.0 new Razor Pages feature in a serverless environment. Let’s take a look at that blueprint now.

To access the Lambda blueprints, choose File, New Project in Visual Studio. Under Visual C#, choose AWS Lambda.

The ASP.NET Core blueprints are serverless applications, because we want to use AWS CloudFormation to configure Amazon API Gateway to expose the Lambda function running ASP.NET Core to an HTTP endpoint. To continue, choose AWS Serverless Application (.NET Core), name your project, and then click OK.

On the Select Blueprint page, you can see the two ASP.NET Core blueprints. Choose the ASP.NET Core Web App blueprint, and then click Finish.

When the project is created, it looks just like a regular ASP.NET Core project. The main difference is that Program.cs was renamed to LocalEntryPoint.cs, which enables you to run the ASP.NET Core project locally. Another difference is the file LambdaEntryPoint.cs. This file contains a class that derives from Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction and implements the Init method that’s used to configure the IWebHostBuilder, similar to LocalEntryPoint.cs. The only required element to configure is the startup class that ASP.NET Core will call to configure the web application.

The APIGatewayProxyFunction base class contains the FunctionHandlerAsync method. This method is declared as the Lambda handler in the serverless.template file that defines the AWS Lambda function and configures Amazon API Gateway. If you rename the class or namespace, be sure to update the Lambda handler in the serverless.template file to reflect the new name.


public class LambdaEntryPoint : Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction
{
    /// <summary>
    /// The builder has configuration, logging, and Amazon API Gateway already configured. The startup class
    /// needs to be configured in this method using the UseStartup<>() method.
    /// </summary>
    /// <param name="builder"></param>
    protected override void Init(IWebHostBuilder builder)
    {
        builder
            .UseStartup<Startup>();
    }
}
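For reference, the Lambda handler in serverless.template follows the assembly::namespace.type::method format. Assuming a project and namespace named AspNetCoreWebApp (a placeholder name), the Handler property would look something like this:

"Handler": "AspNetCoreWebApp::AspNetCoreWebApp.LambdaEntryPoint::FunctionHandlerAsync"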

To deploy the ASP.NET Core application to Lambda, right-click the project in Solution Explorer, and then choose Publish to AWS Lambda. This starts the deployment wizard. Because no parameters are defined in serverless.template, we only need to enter an AWS CloudFormation stack name and an Amazon S3 bucket (in the Region where the application is being deployed) to which the Lambda deployment package will be uploaded. After that, choose Publish to begin the deployment process.
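If you prefer the command line, the Amazon.Lambda.Tools dotnet CLI extension can perform the same deployment. A minimal sketch, with placeholder stack and bucket names, would be:

dotnet lambda deploy-serverless --stack-name AspNetCoreOnLambda --s3-bucket my-deployment-bucket

The remaining settings, such as the region and the template file, are picked up from the aws-lambda-tools-defaults.json file that the blueprint includes.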

Once the Lambda deployment package is created and uploaded to Amazon S3, and the creation of the AWS CloudFormation stack is initiated, the AWS CloudFormation stack view is launched. This view lists the events as the AWS resources are created. When the stack is created, a URL to the generated API Gateway endpoint is shown.

Clicking the link displays your new serverless ASP.NET Core web application.

Using images

If your web application displays images, we recommend you serve those images from Amazon S3. This is more efficient for returning static content like images, Cascading Style Sheets, etc. Also, to return images from your Lambda function to the browser, you need to do extra configuration in API Gateway for binary data.
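As a rough sketch of that approach, you could upload a folder of images once with the AWS Tools for PowerShell (the bucket name and paths are placeholders) and then reference the objects from your pages by their S3 or Amazon CloudFront URLs:

Write-S3Object -BucketName my-static-assets -Folder .\wwwroot\images -KeyPrefix images/ -CannedACLName public-read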

Migrating Existing ASP.NET Core Web API Projects

Before our new release, we already had a preview blueprint for using ASP.NET Core Web API on Lambda with .NET Core 1.0. To migrate such a project, make the following changes to your project's csproj file.

  • Make sure the Sdk attribute in the root element of your csproj file is set to Microsoft.NET.Sdk.Web. The preview blueprint had this attribute set to Microsoft.NET.Sdk.
    <Project Sdk="Microsoft.NET.Sdk.Web">
  • Update the Amazon.Lambda.AspNetCoreServer reference to version 2.0.0.
    <PackageReference Include="Amazon.Lambda.AspNetCoreServer" Version="2.0.0" />
  • Replace any references to Microsoft.AspNetCore.* and Microsoft.Extensions.* packages with a single reference to Microsoft.AspNetCore.All version 2.0.3.
    <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.3" />
  • Update the target framework to netcoreapp2.0.
  • Set the GenerateRuntimeConfigurationFiles property to true to make sure a project-name.runtimeconfig.json file is created.
    <PropertyGroup>
        <TargetFramework>netcoreapp2.0</TargetFramework>
        <GenerateRuntimeConfigurationFiles>true</GenerateRuntimeConfigurationFiles>
    </PropertyGroup>
  • If your csproj file contains the following XML, you can remove it, because appsettings.json is now included by default after you change the Sdk attribute to Microsoft.NET.Sdk.Web.
    <ItemGroup>
      <Content Include="appsettings.json">
        <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
      </Content>
    </ItemGroup> 

After that, make any changes necessary to your code for compatibility with ASP.NET Core 2.0, and you're ready to deploy.

Conclusion

With all of the improvements in .NET Core and ASP.NET Core 2.0, it’s exciting to see it running in a serverless environment. There’s a lot of potential with running ASP.NET Core on Lambda, and we’re excited to hear your thoughts about running a serverless ASP.NET Core application. Check out our GitHub repository which contains our libraries that make this possible. Feel free to open issues for any questions you have.

AWS Lambda .NET Core 2.0 Support Released

by Norm Johanson

Today we’ve released the highly anticipated .NET Core 2.0 AWS Lambda runtime that is available in all Lambda-supported regions. With .NET Core 2.0, it’s easier to move existing .NET Framework code to .NET Core with the much larger API defined in .NET Standard 2.0, which .NET Core 2.0 implements.

Using Visual Studio 2017

The easiest way to get started with .NET Core Lambda is to use Visual Studio 2017 and our AWS Toolkit for Visual Studio. We released version 1.14.0.0 of the toolkit today with updates to support using .NET Core 2.0 on AWS Lambda. The AWS Lambda project templates have been updated to .NET Core 2.0. You can easily deploy to Lambda by right-clicking your Lambda project and selecting Publish to AWS Lambda.

If you haven't used the toolkit before, our previous blog posts can help you get started.

Deploying from the command line

Although you can create a Lambda package bundle by zipping up the output of the dotnet publish command, we recommend that you use our dotnet CLI extension, Amazon.Lambda.Tools. Using this tool over dotnet publish enables our tooling to ensure the package bundle has all of the required files. These include the <my-project>.runtimeconfig.json file that the .NET Core 2.0 Lambda runtime requires, but which isn’t always produced by dotnet publish. The tooling also shrinks your package bundle by removing Windows-specific and macOS-specific dependencies that dotnet publish would put in the publish folder.

This tool is set up by default in all of our AWS Lambda project templates because we added the following section in the project file.


<ItemGroup>
  <DotNetCliToolReference Include="Amazon.Lambda.Tools" Version="2.0.0" />
</ItemGroup>

As part of our release today, version 2.0.0 of Amazon.Lambda.Tools was pushed to NuGet.org to add support for .NET Core 2.0.

Depending on the type of project you create, you can use this extension to deploy your Lambda functions from the command line by using the dotnet lambda deploy-function command or the dotnet lambda deploy-serverless command.
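For example, a deployment of a standalone function might look like the following, where MyFunction is a placeholder for your function name; any settings you don't pass on the command line are read from aws-lambda-tools-defaults.json or prompted for:

dotnet lambda deploy-function MyFunction --region us-east-1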

If you’re just building your Lambda package bundle as part of your CI system and don’t want the extension to deploy, you can use the dotnet lambda package command to produce the package bundle .zip file to pass along through your CI system.
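A sketch of that looks like the following; the output path is a placeholder:

dotnet lambda package --configuration Release --framework netcoreapp2.0 --output-package ./artifacts/deployment.zip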

This earlier blog post has more details about our Lambda CLI extension.

Creating Lambda projects without Visual Studio

If you’re not using Visual Studio, you can create any of our Lambda projects using the dotnet new command by installing our Amazon.Lambda.Templates package with the following command.

dotnet new -i Amazon.Lambda.Templates::* 

The ::* syntax at the end of the command indicates that the latest version should be installed. This is version 2.0.0, also released today, to update the project templates to support .NET Core 2.0. See this blog post for more details about these templates.
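Once the templates are installed, creating a project from one of them is a single command; for example, using the empty function template with a placeholder project name:

dotnet new lambda.EmptyFunction --name MyFunction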

Updating existing functions

Because the programming model hasn’t changed, it’s easy to migrate your existing .NET Core 1.0 Lambda functions to the new runtime. To migrate, you need to update the target framework of your project to netcoreapp2.0 and, optionally, update any of the dependencies for your project to the latest version. Your project probably has an aws-lambda-tools-defaults.json file, which is a JSON file of saved settings from your deployment. Update the framework property to netcoreapp2.0. If the file also contains the field function-runtime, update that to dotnetcore2.0. If you’re deploying a Lambda function as a serverless application using an AWS CloudFormation template (usually named serverless.template), update the Runtime property of any AWS::Serverless::Function or AWS::Lambda::Function AWS CloudFormation resources to dotnetcore2.0.
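For example, after the update, the relevant fields of aws-lambda-tools-defaults.json might look like this (all other settings omitted):

{
  "framework": "netcoreapp2.0",
  "function-runtime": "dotnetcore2.0"
}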

With these changes, you should be able to simply redeploy using the new .NET Core 2.0 runtime with our AWS Toolkit for Visual Studio or dotnet CLI extension.

Using AWS Tools for Visual Studio Team Services

The AWS Tools for VSTS support two tasks related to performing Lambda deployments from within your VSTS or TFS pipelines. The general-purpose AWS Lambda deployment task, which can deploy prepackaged functions that target any supported AWS Lambda runtime, has been updated in version 1.0.16 of the tools to support selection of the new dotnetcore2.0 runtime. The .NET Core-specific Lambda task, which uses the Lambda dotnet CLI extension, will operate without requiring changes to the task configuration. You just need to update the project files built by this task, as described earlier.

Conclusion

We’re excited to see what you build with our new .NET Core runtime and to expand our .NET Core 2.0 support across AWS. Visit our GitHub repository for our .NET Core tooling and libraries for additional help with .NET Core and Lambda.

Remote Debug an IIS .NET Application Running in AWS Elastic Beanstalk

In this guest post by AWS Partner Solution Architect Sriwantha Attanayake, we take a look at how you can set up remote debugging for ASP.NET applications deployed to AWS Elastic Beanstalk.

We love to run IIS websites on AWS Elastic Beanstalk. With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without worrying about the infrastructure that runs those applications. Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.

How can you remote debug a .NET application running on Elastic Beanstalk? This article describes a one-time setup of Elastic Beanstalk that enables you to remote debug in real time. You can use this approach in your development environments.

First, we create an Amazon EC2 instance from a base Elastic Beanstalk image. Next, we install Visual Studio remote debugger as a service and create a custom image from it. Then, we start an Elastic Beanstalk environment with this custom image. To allow communication with the Visual Studio remote debugger, we set up proper security groups. Finally, we attach the Visual Studio debugger to the remote process running inside the EC2 instance started by Elastic Beanstalk.

How to identify which Elastic Beanstalk image to customize

  1. Open the Elastic Beanstalk console and create a new application by choosing Create New Application.
  2. Create a new Web server environment.

  3. On the Create new environment page, choose .NET (Windows/IIS) as the preconfigured platform.
  4. Choose Configure more options.
  5. Under Instances, you’ll find the default AMI that Elastic Beanstalk will use. This is determined by the selected platform and the region. For example, for the Sydney region, for the 64bit Windows Server 2016 v1.2.0 running IIS 10.0 platform, the AMI ID is ami-e04aa682. Make a note of this AMI ID. This is the base image you’ll customize later.

Customize the image

  1. Now that you know the base AMI used by Elastic Beanstalk, start an EC2 instance with this base image. You can find this image under Community AMIs.
  2. Once the EC2 instance is started, remotely log in to it.
  3. Install Remote Tools as a Service. The installer depends on the Visual Studio version you use for development. See Remote Debugging for the steps to install the remote debugger.
  4. When the installation is complete, run the Visual Studio remote debugger configuration wizard.

Note: If you don't want to create a custom image, another approach you can use to install the Visual Studio remote debugger is .ebextensions. As detailed in Customizing Software on Windows Servers, an .ebextensions file can include commands that run the installation when Elastic Beanstalk deploys the application.

Whichever approach you use, be sure of the following:

  • You run the remote debugger as a service. The service account has to have permissions to run as a Windows service and must be a member of the local Administrators group.
  • You allow network connections from all types of networks.
  • The remote debugger service has started.
  • Windows firewall doesn’t block the remote debugger.

Create an image from a customized EC2 instance

  1. When the installation is complete, Sysprep the machine using EC2 launch settings. You can find the EC2 launch settings at C:\ProgramData\Amazon\EC2-Windows\Launch\Settings\Ec2LaunchSettings.exe. Choose Shutdown with Sysprep.

    For a detailed explanation, see Configuring EC2Launch.
  2. After the instance shuts down, you can create an image from it. Make a note of this AMI ID. The next time you start an Elastic Beanstalk environment, use this custom image ID.

Connecting to your Elastic Beanstalk environment

  1. When you start your Elastic Beanstalk environment, be sure you configure your security groups in a way that opens remote debugger ports to your development machine. Which ports to open depends on which Visual Studio environment you’re running. In the following example, port 4022 is for Visual Studio 2017, and port 4016 is for Visual Studio 2012.

    See Remote Debugger Port Assignments to learn about the ports used in different Visual Studio environments. In the previous example, I have opened remote debugger ports corresponding to different editions of Visual Studio to any network. This poses a security risk. Please ensure you open only the ports necessary for your edition of Visual Studio to the development networks you trust. Once you are done with debugging, you can remove these security groups.
  2. Be sure you specify a key pair for the Elastic Beanstalk EC2 instance, so that you can retrieve the autogenerated Administrator password for remote access.
  3. Make a note of the IP address (public/private) of the EC2 instance started by the Elastic Beanstalk environment.
  4. Once you open the Visual Studio project (e.g., ASP.NET application) that is being deployed to Elastic Beanstalk, select Debug, Attach to Process.
  5. For Connection Target, enter the IP address of the EC2 instance started by Elastic Beanstalk. For example, if your development machine is in a private network with network reachability to the EC2 instance, use the private IP address.  Depending on where your development machine is, you can use the public IP address. Finally, choose Show processes from all users.
  6. In the popup window that appears, you can enter your login information to the EC2 instance. Enter the Administrator user name and password of the EC2 instance that Elastic Beanstalk has started. The reason we started the Elastic Beanstalk EC2 instances with a key pair is to retrieve this password.
  7. If the login succeeds, you will see all the processes running inside the EC2 instance started by Elastic Beanstalk. If you don’t see the IIS worker process (w3wp.exe), ensure you have viewed your website at least once, and then choose Refresh. Choose Attach to attach the remote IIS worker process to Visual Studio and then confirm the attachment.
  8. You can now live debug the .NET application running inside Elastic Beanstalk. You'll hit a breakpoint when the relevant code fragment executes.

Conclusions

In this post, we showed how you can remote debug a .NET web application running on Elastic Beanstalk.  .NET remote debugging on Elastic Beanstalk is no different from .NET remote debugging you would do on a Windows server. Once you have an AMI with your custom tools installed, you can use it as your preferred Elastic Beanstalk image.

As noted earlier, another way to install the Visual Studio remote debugger is through an .ebextensions file. Using this approach, you don’t need to create a custom image. See Customizing Software on Windows Servers for details about advanced environment customization using Elastic Beanstalk configuration files.

Although you have the option of doing remote debugging on Elastic Beanstalk, don’t enable this feature on a production environment. In addition, don’t open the ports related to remote debugging on a production environment. The proper way to analyze issues on a production environment is to do proper logging. For example, in an ASP/MVC .NET application, you can catch all the unhandled exceptions in Global.asax and log them. For a large-scale complex logging solution, you can explore the best practices in Centralized Logging.

AWS Support for PowerShell Core 6.0

Announced in a Microsoft blog post yesterday, PowerShell Core 6.0 is now generally available. AWS continues to support this new cross-platform version of PowerShell with our AWS Tools for PowerShell Core module also known by its module name, AWSPowerShell.NetCore. This post recaps the modules available from AWS for PowerShell users wanting to script their AWS resources.

AWS Tools for Windows PowerShell

Released in 2012, this module, also known by the module name AWSPowerShell, supports users working with the traditional Windows-only version of PowerShell. It supports PowerShell versions 2.0 through 5.1. It can be installed from the AWS SDK and Tools for .NET Windows installer, which also contains the .NET 3.5 and 4.5 versions of the AWS SDK for .NET and the AWS Toolkit for Visual Studio 2013 and 2015. The module is also distributed on the PowerShell Gallery and is pre-installed on Amazon EC2 Windows-based images.

AWS Tools for PowerShell Core

This version of the tools was first released in August 2016 to coincide with the announcement of the first public alpha release of PowerShell Core 6.0. Since then it has continued to be updated in sync with the AWS Tools for Windows PowerShell module. This module, named AWSPowerShell.NetCore, is only distributed on the PowerShell Gallery.
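Assuming you have PowerShellGet available, installing it from the PowerShell Gallery is a one-liner:

Install-Module -Name AWSPowerShell.NetCore -Scope CurrentUser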

Module Compatibility

Both modules are highly compatible with each other. In terms of the cmdlets they expose for AWS service APIs, they match completely and both modules are updated in sync. As noted in our original launch blog post for our module running on PowerShell Core, back in August 2016, the AWSPowerShell.NetCore module is missing only a handful of cmdlets, as follows.

Proxy cmdlets:

Set-AWSProxy
Clear-AWSProxy

Logging cmdlets:

Add-AWSLoggingListener
Remove-AWSLoggingListener
Set-AWSResponseLogging
Enable-AWSMetricsLogging
Disable-AWSMetricsLogging

SAML federated credentials cmdlets:

Set-AWSSamlEndpoint
Set-AWSSamlRoleProfile

Now that PowerShell Core is generally available (GA), we’ll be taking another look at these to see if we can add them.

We hope you’re enjoying the new PowerShell Core GA release and the ability to script and access your AWS resources from PowerShell on any system!

Send Real-Time Amazon CloudWatch Alarm Notifications to Amazon Chime

This post was authored by Trevor Sullivan, a Solutions Architect for Amazon Web Services (AWS) based in Seattle, Washington. The post was also peer-reviewed by Andy Elmhorst, Senior Solutions Architect for AWS.

Overview

When you’re developing, deploying, and supporting business-critical applications, timely system notifications are crucial to keeping your services up and running reliably for your customers. If your team actively collaborates using Amazon Chime, you might want to receive critical system notifications directly within your team chat rooms. This is possible using the Amazon Chime incoming webhooks feature.

Using Amazon CloudWatch alarms, you can set up metric thresholds and send alerts to Amazon Simple Notification Service (SNS). SNS can send notifications using e-mail, HTTP(S) endpoints, and Short Message Service (SMS) messages to mobile phones, and it can even trigger a Lambda function.

Because SNS doesn’t currently support sending messages directly to Amazon Chime chat rooms, we’ll insert a Lambda function in between them. By triggering a Lambda function from SNS instead, we can consume the event data from the CloudWatch alarm and craft a human-friendly message before sending it to Amazon Chime.

Here’s a simple architectural diagram that demonstrates how the various components will work together to make this solution work. Feel free to refer back to this diagram as you make your way through the remainder of this article.

Assumptions

Throughout this article, we make the following assumptions:

  • You have created an Amazon EC2 instance running Ubuntu Linux.
  • Detailed CloudWatch monitoring is enabled for this EC2 instance.
  • Amazon Chime is already set up and accessible to you.
  • You’ve installed PowerShell Core, or can run it in a Docker container.
  • You have installed and configured IAM credentials for the AWS Tools for PowerShell.
  • Python 3.6 and pip3 are installed on your development system.

NOTE: There is an additional cost to capture detailed CloudWatch metrics for EC2 instances, detailed here.

Set up Amazon Chime

Before implementing your backend application code, there are a couple of steps you need to perform within Amazon Chime. To set up your incoming webhook, you first need to create a new Amazon Chime chat room. Webhooks are created as a resource in the context of the chat room. As of this writing, Chime webhooks must be created using the native Amazon Chime client for Microsoft Windows or Apple macOS.

Create an Amazon Chime chat room

First you create a new chat room in Amazon Chime. You’ll use this chat room for testing and, once you understand and successfully implement this solution, you can replicate it in your live chat rooms.

  1. Open Amazon Chime.
  2. Choose the Rooms button.
  3. Choose the New room button.
  4. Give the chat room a name, and then choose the Create button.

Create an Amazon Chime incoming webhook

Now that you’ve created your new Amazon Chime chat room, you need to generate a webhook URL. This webhook URL authorizes your application to send messages to the chat room. Be sure to handle the URL with the same level of security that you would handle any other secrets or passwords.

In the Amazon Chime chat room, click the gear icon, and then select the Manage Webhooks menu item. In the webhook management window, choose the New button and use the name CriticalAlerts. Click the Copy webhook URL link and paste it into a temporary notepad. We’ll need to configure this URL on our Lambda function later on.

Create an SNS topic

In this section, you create a Simple Notification Service (SNS) topic. This SNS topic will be triggered by a CloudWatch alarm when its configured metric threshold is exceeded. You can name the SNS topic whatever you prefer, but in this example, I’ll use the name chimewebhook.

It’s possible to create the SNS topic after creating your CloudWatch alarm. However, in this case, you would have to go back and reconfigure your CloudWatch alarm to point to the new SNS topic. In this example, we create the topic first to minimize the amount of context switching between services.

Use the following PowerShell command to create an SNS topic, and store the resulting topic Amazon Resource Name (ARN) in a variable named $TopicArn. We'll use this variable later on, so don't close your PowerShell session.

$TopicArn = New-SNSTopic -Name chimewebhook -Region us-west-2

Create a CloudWatch alarm

In this section, you create an Amazon CloudWatch alarm. Then you configure this alarm to trigger an alert state when the CPU usage metric of your EC2 instance exceeds 10%. Alarms can be configured with zero or more actions; you’ll configure a single action to send a notification to the SNS topic you previously created.

  • Navigate to the CloudWatch alarms feature in the AWS Management Console.
  • Choose the blue Create Alarm button.
  • Search for your EC2 instance ID.
  • Select the CPUUtilization metric for your EC2 instance.
  • On the next screen, give the CloudWatch alarm a name and useful description.
  • Configure the CPUUtilization threshold for 10%, and be sure the Period is set to 1 minute.
  • In the Actions section, select your SNS topic.
  • Save the CloudWatch alarm.

If you’d prefer to use a PowerShell script to deploy the CloudWatch alarm, use the following example script. Be sure you specify the correct parameter values for your environment:

  • EC2 instance ID that you’re monitoring
  • AWS Region that your EC2 instance resides in
  • ARN of the SNS topic that CloudWatch will publish alerts to
### Create a CloudWatch dimension object, to alarm against the correct EC2 instance ID
$MetricDimension = [Amazon.CloudWatch.Model.Dimension]::new()
$MetricDimension.Name = 'InstanceId'
$MetricDimension.Value = 'i-04043befbbfcdc51e'

### Set up the parameters to create a CloudWatch alarm in a PowerShell HashTable
$Alarm = @{
  AlarmName = 'EC2 instance exceeded 10% CPU'
  ActionsEnabled = $true
  AlarmAction = $TopicArn
  ComparisonOperator = ([Amazon.CloudWatch.ComparisonOperator]::GreaterThanOrEqualToThreshold)
  Threshold = 10
  Namespace = 'AWS/EC2'
  MetricName = 'CPUUtilization'
  Dimension = $MetricDimension
  Period = 60
  EvaluationPeriod = 1
  Statistic = [Amazon.CloudWatch.Statistic]::Maximum
  Region = 'us-west-2'
}
Write-CWMetricAlarm @Alarm

Set up AWS Lambda

In this section, you will create an AWS Lambda function, based on Python 3, that will be triggered by the SNS topic that you created earlier. This Lambda function will parse some of the fields of the message that’s forwarded from CloudWatch to SNS.

Create the Lambda Function

To successfully invoke an Amazon Chime webhook, your HTTP request must meet the following criteria (a quick way to test this from PowerShell appears after the list):

  • Webhook URL is predefined by Amazon Chime
  • Request is sent using the HTTP POST verb
  • Content-Type HTTP header must be application/json
  • HTTP body must contain a JSON object with Content property
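Before wiring this into Lambda, you can sanity-check a webhook directly from PowerShell. This is a minimal sketch; the webhook URL is a placeholder for the one you copied earlier:

$WebhookUrl = 'https://hooks.chime.aws/incomingwebhooks/your-webhook-id?token=your-token'
$Body = @{ Content = 'Test message from PowerShell' } | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri $WebhookUrl -ContentType 'application/json' -Body $Body

If the call succeeds, the message appears in your CriticalAlerts chat room.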

We’ll use the open source Python requests library to make the Amazon Chime webhook invocation, as it provides a simple development interface. Because you’re adding a dependency on an external library, you need to author your Lambda function locally, package it up into a ZIP archive, and then deploy the ZIP archive to Lambda.

Start by creating the following three files in a working directory.

index.py

'''
This file contains the AWS Lambda function that is invoked when a CloudWatch alarm is triggered.
'''

import os
import boto3
import requests
from base64 import b64decode

def get_message(event):
  '''
  This function retrieves the message that will be sent to the Amazon Chime webhook. If the Lambda
  function is triggered manually, it will return some static text. However, if the Lambda function
  is invoked by SNS from CloudWatch Alarms, it will emit the Alarm's subject line.
  '''
  try:
    return event['Records'][0]['Sns']['Subject']
  except KeyError:
    return 'test message'

def handler(event, context):
  '''
  The 'handler' Python function is the entry point for AWS Lambda function invocations.
  '''
  print('Getting ready to send message to Amazon Chime room')
  content = 'CloudWatch Alarm! {0}'.format(get_message(event))
  webhook_uri = os.environ['CHIME_WEBHOOK']
  requests.post(url=webhook_uri, json={ 'Content': content })
  print('Finished sending notification to Amazon Chime room')

requirements.txt

requests

setup.cfg

[install]
prefix=

Build and deploy the Lambda package

Now that you’ve created the previous source files, you’ll need a PowerShell script to build the ZIP archive for Lambda, create the Lambda function, and give SNS access to invoke the Lambda function. Save the following PowerShell script file into the same working directory, update the <YourChimeWebhookURL> text with your actual Amazon Chime webhook URL, and then run the script.

NOTE: This PowerShell script has a dependency on the Mac and Linux zip utility. If you’re running this code on Windows 10, you can use the Compress-Archive PowerShell command, or run the PowerShell script in the Windows Subsystem for Linux feature.

Deploy.ps1

Set-DefaultAWSRegion -Region us-west-2

$ZipFileName = 'lambda.zip'

Set-Location -Path $PSScriptRoot

Write-Host -Object 'Restoring dependencies ...'
pip3 install -r $PSScriptRoot/requirements.txt -t $PSScriptRoot/

Write-Host -Object 'Compressing files ...'
Get-ChildItem -Recurse | ForEach-Object -Process {
  $NewPath = $PSItem.FullName.Substring($PSScriptRoot.Length + 1)
  zip -u "$PSScriptRoot/$ZipFileName" $NewPath
}

Write-Host -Object 'Deploying Lambda function'

$Function = @{
  FunctionName = 'AmazonChimeAlarm'
  Runtime = 'python3.6'
  Description = 'Sends a message to an Amazon Chime room when a CloudWatch alarm is triggered.'
  ZipFilename = $ZipFileName
  Handler = 'index.handler'
  Role = 'arn:aws:iam::{0}:role/service-role/lambda' -f (Get-STSCallerIdentity).Account
  Environment_Variable = @{
    CHIME_WEBHOOK = '<YourChimeWebhookURL>'
  }
}
Remove-LMFunction -FunctionName $Function.FunctionName -Force
$NewFunction = Publish-LMFunction @Function

Write-Host -Object 'Deployment completed' -ForegroundColor Green

Once you’ve executed this deployment script, you should see an AWS Lambda function named AmazonChimeAlarm in the AWS Management Console.

Configure Lambda function policy

AWS Lambda functions have their own resource-level policies that are somewhat similar to IAM policies. These function policies are what grant other cloud resources the access that they need to invoke the function. In this scenario, you need to grant the SNS service access to trigger your Lambda function.

The following PowerShell script adds the necessary permissions to your Lambda function.

### Enables the Amazon Simple Notification Service (SNS) to invoke your Lambda function
$LMPermission = @{
  FunctionName = $Function.FunctionName
  Action = 'lambda:InvokeFunction'
  Principal = 'sns.amazonaws.com'
  StatementId = 1
}
Add-LMPermission @LMPermission

Keep in mind that this Lambda function policy broadly allows any SNS topic, in any AWS account, to trigger your Lambda function. For production applications, you should use the -SourceArn parameter to restrict access to specific event sources that will be able to trigger your Lambda function.
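As a sketch, scoping the permission to the topic you created earlier might look like the following, reusing the $TopicArn variable from before with a new statement ID:

### Restrict the invoke permission to the specific SNS topic
$LMPermission = @{
  FunctionName = $Function.FunctionName
  Action = 'lambda:InvokeFunction'
  Principal = 'sns.amazonaws.com'
  SourceArn = $TopicArn
  StatementId = 2
}
Add-LMPermission @LMPermission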

Subscribe the Lambda function to the SNS topic

Now that you've created your Lambda function and granted SNS access to trigger it, you need to subscribe the Lambda function to the topic. This subscription is what starts the flow of events from SNS to Lambda. Without the subscription, CloudWatch alarms would trigger your SNS topic successfully, but the event flow would stop there.

$Subscription = @{
  Protocol = 'lambda'
  Endpoint = $NewFunction.FunctionArn
  TopicArn = $TopicArn
}
Connect-SNSNotification @Subscription

Trigger a test event

Now that you’ve finished configuring your AWS account, you can go ahead and test the end-to-end process to ensure it’s working properly. Ensure you’ve got your Amazon Chime client running, and select your test chat room that you created earlier.

Next, invoke a process on your instance that will consume many CPU cycles. Connect to your EC2 instance using SSH and run the following shell commands.

sudo apt install docker.io --yes
sudo usermod --append --groups docker ubuntu
exit # SSH back in after this, so group memberships take effect

docker run --rm --detach trevorsullivan/cpuburn burnP5

This Ubuntu-based Docker container image contains the preinstalled CPU burn program, which will cause your EC2 instance’s CPU consumption to spike to 100%. Because you’ve enabled detailed CloudWatch metrics on your EC2 instance, after a couple of minutes, the CloudWatch alarm that you created should get triggered.

Once you’ve finished with the test, or if you want to trigger the CloudWatch alarm again, make sure that you stop the Docker container running the CPU burn program. Because you specified the --rm argument upon running the container, the container will be deleted after it has stopped.

docker ps # Find the container ID
docker rm -f <containerID> # Remove the container

Potential problems

If you run into any problems with testing the end-to-end solution, check out the following list of potential issues you could run into and ways to troubleshoot:

  • The CPU burn program might not consume enough CPU to trigger the alarm. Use the Linux top command to verify CPU usage on the instance, or simply pull up the CPUUtilization metric in CloudWatch and see what values are being recorded.
  • If your Lambda function is not correctly configured to accept invocations from SNS, your SNS topic will fail to invoke it. Be sure that you understand how Lambda function policies work, and ensure that your Lambda function has the appropriate resource-level IAM policy to enable SNS to invoke it.
  • By default, your EC2 instances include basic metrics for a five-minute period. If you don’t have detailed monitoring enabled for the EC2 instance you’ve used in this article, you might have to wait several minutes for the next metric data point to be recorded. For more immediate feedback, ensure that your EC2 instance has detailed monitoring configured, so that you get instance-level metrics on a per-minute period instead.
  • Ensure your Lambda function is subscribed to your SNS topic. If your SNS topic doesn’t have any subscribers, it won’t know how to “handle” the alarm state from CloudWatch, and will effectively discard the message.
  • If you aren’t receiving any notifications in your Amazon Chime chat room, ensure that your CloudWatch alarm is in an OK state before attempting to retrigger it. CloudWatch sends a single notification to the configured SNS topics when the alarm’s threshold is breached, and doesn’t continually send notifications while it’s in an alarm state.
  • If your HTTP POST request to the Chime Webhook URL fails with HTTP 429, then your application might be rate-limited by Amazon Chime. Please refer to the official product documentation for more information. As of this writing, Chime Webhook URLs support 1 transaction per second (TPS).

Conclusion

In this article, you configured your AWS account to send Amazon Chime notifications to a team chat room when CloudWatch alarm thresholds are breached. You can repeat this process as many times as you need to send the right notifications to your team's chat rooms. Chat rooms can get noisy quickly if you don't take care to determine which notifications are the most critical to your team. Hence, I recommend discussing with your team which alerts warrant immediate notification before spending the effort to build a solution using this technique.

Thanks for taking the time to read this article and learn about how you can integrate Amazon Chime with your critical system notifications.

Build on!
Trevor Sullivan, Solutions Architect
Amazon Web Services (AWS)

 

AWS .NET Team at AWS re:Invent 2017

by Norm Johanson

Steve and I, from the AWS .NET Team, just got back from AWS re:Invent, where we met up with a lot of .NET developers doing some really cool projects on AWS. We also presented two sessions, which are online now. If you weren't able to come to re:Invent and see us, check out the videos online.

Developing applications on AWS with .NET Core

https://www.youtube.com/watch?v=IfF1E2RJ6Do

In this session, we talked about how to add AWS services to your .NET Core applications, and then covered how to deploy your applications.

We announced new Visual Studio tooling to deploy .NET Core applications as containers to Amazon Elastic Container Service (Amazon ECS). The new tooling also supports AWS Fargate for ECS, which enables you to run container applications in a fully managed environment, removing the responsibility of managing any Amazon EC2 instances. Expect more information and examples on how to use this new tooling soon.

We also preannounced that support for .NET Core 2.0 in AWS Lambda is coming soon. We showed a demo of the upcoming runtime executing a full ASP.NET Core application with ASP.NET Razor pages. We made the demo extra exciting by also showing off upcoming .NET Core 2.0 support for AWS X-Ray.

Extending VSTS build or release pipelines to AWS

https://www.youtube.com/watch?v=8fXDOlFWmZU

This session covered our new AWS Tools for Microsoft Visual Studio Team Services (VSTS) that we launched this summer. Steve and I drilled deep into how you can use this tool to make using AWS from VSTS and TFS simple. We covered deploying to both AWS Elastic Beanstalk and AWS CodeDeploy, in addition to using AWS CloudFormation to define our infrastructure for our AWS CodeDeploy deployments.

Right before our session, we released a new version of the tool with four new tasks:

  • Amazon Elastic Container Registry (ECR) – Pushes your Docker images to Amazon’s own container registry.
  • AWS Lambda Deploy Function – A new Lambda deploy task that supports deploying code using any of the AWS Lambda-supported runtimes.
  • AWS Systems Manager Run Command – Enables you to run a command remotely on a fleet of EC2 instances or on-premises machines.
  • AWS Systems Manager Get Parameter – Reads the value of one or more parameters from Parameter Store and adds them as variables to the current build or release definition.

We used this last task, AWS Systems Manager Get Parameter, heavily in our demos to enable us to parameterize our builds based on the deployment stage (beta, gamma, prod) that we were deploying to.

Hope to see you next year!

It was great to go to re:Invent and meet so many awesome developers. Also good for us Seattleites to get some sun this time of year! Hope to see you next year.

Deploy an Amazon ECS Cluster Running Windows Server with AWS Tools for PowerShell – Part 1

This is a guest post from Trevor Sullivan, a Seattle-based Solutions Architect at Amazon Web Services (AWS). In this blog post, Trevor shows you how to deploy a Windows Server-based container cluster using the AWS Tools for PowerShell.

Building and deploying applications on the Windows Server platform is becoming a significantly lighter-weight process. Although you might be accustomed to developing monolithic applications on Windows, and scaling vertically instead of horizontally, you might want to rethink how you design and build Windows-based apps. Now that application containers are a native feature in the Windows Server platform, you can design your applications in a similar fashion to how developers targeting the Linux platform have designed them for years.

Throughout this blog post, we explore how to automate the deployment of a Windows Server container cluster, managed by the Amazon EC2 Container Service (Amazon ECS). The architecture of the cluster can help you efficiently and cost-effectively scale your containerized application components.

Due to the large amount of information contained in this blog post, we’ve separated it into two parts. Part 1 guides you through the process of deploying an ECS cluster. Part 2 covers the creation and deployment of your own custom container images to the cluster.

Assumptions

In this blog post, we assume the following:

  • You’re using a Windows, Mac, or Linux system with PowerShell Core installed and you’ve installed the AWS Tools for PowerShell Core.
  • Or you’re using a Windows 7 or later system, with Windows PowerShell 4.0 or later installed (use $PSVersionTable to check your PowerShell version) and have installed the AWS Tools for PowerShell (aka AWSPowerShell module).
  • You already have access to an AWS account.
  • You’ve created an AWS Identity and Access Management (IAM) user account with administrative policy permissions, and generated an access key ID and secret key.
  • You’ve configured your AWS access key ID and secret key in your ~/.aws/credentials file.

This article was authored, and the code tested, on a MacBook Pro with PowerShell Core Edition and the AWSPowerShell.NetCore module.

NOTE: In the PowerShell code examples, I’m using a technique known as PowerShell Splatting to pass parameters into various PowerShell commands. Splatting syntax helps your PowerShell code look cleaner, enables auditing of parameter values prior to command invocation, and improves code maintenance.
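For example, the following two calls are equivalent; the second collects the parameters in a hashtable and passes them with the @ splatting operator (the bucket name is a placeholder):

Get-S3Object -BucketName my-example-bucket -Region us-west-2

$Params = @{
    BucketName = 'my-example-bucket'
    Region     = 'us-west-2'
}
Get-S3Object @Params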

Create your Amazon ECS cluster

The first task is to create an empty ECS cluster. Right now, the ECS beta support for Windows Server containers doesn't enable you to provision the cluster's compute layer at cluster creation time. In a moment, you'll provision some Windows Server-based compute capacity on Amazon EC2 and associate it with your empty ECS cluster.

If you prefer to use the AWS PowerShell module to create the empty ECS cluster, you can use the following command.

New-ECSCluster -ClusterName ECSRivendell

Set up an IAM role for Amazon EC2 instances

Amazon EC2 instances can have IAM “roles” assigned to them. By associating one or more IAM policies with an IAM role, which is assigned to an EC2 instance by way of an “instance profile”, you can grant access to various services in AWS directly to the EC2 instance, without having to embed and manage any credentials. AWS handles that for you, via the IAM role. Whenever the EC2 instance is running, it has access to the IAM role that’s assigned to it, and can make calls to various AWS APIs that it’s authorized to call via associated IAM policies.

Before you actually deploy the Windows Server compute layer for your ECS cluster, you must first perform the following preparation steps:

  • Create an IAM policy that defines the required permissions for ECS container instances.
  • Create an IAM role.
  • Register the IAM policy with the role.
  • Create an IAM instance profile, which will be associated with your EC2 instances.
  • Associate the IAM role with the instance profile.

Create an IAM policy for your EC2 container instances

First, you use PowerShell to create the IAM policy. I simply copied and pasted the IAM policy that’s documented in the Amazon ECS documentation. This IAM policy will be associated with the IAM role that will be associated with your EC2 container instances, which actually run your containers via ECS. This policy is especially important, because it grants your container instances access to Amazon EC2 Container Registry (Amazon ECR). This is a private storage area for your container images, similar to the Docker Store.

$IAMPolicy = @{
    Path = '/ECSRivendell/'
    PolicyName = 'ECSRivendellInstanceRole'
    Description = 'This policy is used to grant EC2 instances access to ECS-related API calls. It enables EC2 instances to push and pull from ECR, and most importantly, register with our ECS Cluster.'
    PolicyDocument = @'
{
    "Version": "2012-10-17",
    "Statement": [
        {
        "Effect": "Allow",
        "Action": [
            "ecs:CreateCluster",
            "ecs:DeregisterContainerInstance",
            "ecs:DiscoverPollEndpoint",
            "ecs:Poll",
            "ecs:RegisterContainerInstance",
            "ecs:StartTelemetrySession",
            "ecs:Submit*",
            "ecr:GetAuthorizationToken",
            "ecr:BatchCheckLayerAvailability",
            "ecr:GetDownloadUrlForLayer",
            "ecr:BatchGetImage",
            "logs:CreateLogStream",
            "logs:PutLogEvents"
        ],
        "Resource": "*"
        }
    ]
    }
'@
}
$NewIAMPolicy = New-IAMPolicy @IAMPolicy

Create a container instance IAM role

The next part is easy. You just need to create an IAM role. This role has a name, optional path, optional description and, the important part, an “AssumeRolePolicyDocument”. This is also known as the IAM “trust policy”. This IAM trust policy enables the EC2 instances to use or “assume” the IAM role, and use its policies to access AWS APIs. Without this trust policy in place, your EC2 instances won’t be able to use this role and the permissions granted to it by the IAM policy.

$IAMRole = @{
    RoleName = 'ECSRivendell'
    Path = '/ECSRivendell/'
    Description = 'This IAM role grants the container instances that are part of the ECSRivendell ECS cluster access to various AWS services, required to operate the ECS Cluster.'
    AssumeRolePolicyDocument = @'
{
    "Version": "2008-10-17",
    "Statement": [
        {
        "Sid": "",
        "Effect": "Allow",
        "Principal": {
            "Service": "ec2.amazonaws.com"
        },
        "Action": "sts:AssumeRole"
        }
    ]
    }
'@
}
$NewIAMRole = New-IAMRole @IAMRole

Register the IAM policy with the IAM role

Now you simply need to associate the IAM policy that you created with the IAM role. This association is very easy to make with PowerShell, using the Register-IAMRolePolicy command. If necessary, you can associate more than one policy with your IAM roles that grant or deny additional permissions.

$RolePolicy = @{
    PolicyArn = $NewIAMPolicy.Arn
    Role = $NewIAMRole.RoleName
}
Register-IAMRolePolicy @RolePolicy

Create an IAM instance profile

With the IAM role prepared, you now need to create the IAM instance profile. The IAM instance profile is the actual object that gets associated with our EC2 instances. This IAM instance profile ultimately grants the EC2 instance permission to call AWS APIs directly, without any stored credentials.

$InstanceProfile = @{
    InstanceProfileName = 'ECSRivendellInstanceRole'
    Path = '/ECSRivendell/'
}
$NewInstanceProfile = New-IAMInstanceProfile @InstanceProfile

Associate the IAM role with the instance profile

Finally, you need to associate our IAM Role with the instance profile.

$RoleInstanceProfile = @{
    InstanceProfileName = $NewInstanceProfile.InstanceProfileName
    RoleName = $NewIAMRole.RoleName
}
$null = Add-IAMRoleToInstanceProfile @RoleInstanceProfile

Add EC2 compute capacity for Windows

Now that you’ve created an empty ECS cluster and have pre-staged your IAM configuration, you need to add some EC2 compute instances running Windows Server to it. These are the actual Windows Server instances that will run containers (tasks) via the ECS cluster scheduler.

To provision EC2 capacity in a frugal and scalable fashion, we create an EC2 Auto Scaling group using Spot instances. Spot instances are one of my favorite services in AWS, because you can provision a significant amount of compute capacity for a huge discount, by bidding on unused capacity. There is some risk that you might need to design around, as Spot instances can be terminated or stopped if your bid price is exceeded. However, depending on your needs, you can run powerful applications on EC2, for a low price, using EC2 Spot instances.

Create the Auto Scaling launch configuration

Before you can create the Auto Scaling group itself, you need to create what’s known as an Auto Scaling “launch configuration”. The launch configuration is essentially a blueprint for Auto Scaling groups. It stores a variety of input parameters that define how new EC2 instances will look every time the Auto Scaling group scales up, and spins up a new instance. For example, you need to specify:

  • The Amazon Machine Image (AMI) that instances will be launched from.
    • NOTE: Be sure you use the Microsoft Windows Server 2016 Base with Containers image for Windows Server container instances.
  • The EC2 instance type (size, vCPUs, memory).
  • Whether to enable or disable public IP addresses for EC2 instances.
  • The bid price for Spot instances (optional, but recommended).
  • The EC2 User Data – PowerShell script to bootstrap instance configuration.

Although the code snippet below might look a little scary, it basically contains an EC2 user data script, which I copied directly from the AWS beta documentation for Windows. This automatically installs the ECS container agent into new instances that are brought online by your Auto Scaling Group, and registers them with your ECS cluster.

$ClusterName = 'ECSRivendell'
$UserDataScript = @'

## The string 'YOURCLUSTERNAME' below is replaced with your cluster name when this script is built

# Set agent env variables for the Machine context (durable)
[Environment]::SetEnvironmentVariable("ECS_CLUSTER", "YOURCLUSTERNAME", "Machine")
[Environment]::SetEnvironmentVariable("ECS_ENABLE_TASK_IAM_ROLE", "true", "Machine")
$agentVersion = 'v1.14.5'
$agentZipUri = "https://s3.amazonaws.com/amazon-ecs-agent/ecs-agent-windows-$agentVersion.zip"
$agentZipMD5Uri = "$agentZipUri.md5"


### --- Nothing user configurable after this point ---
$ecsExeDir = "$env:ProgramFiles\Amazon\ECS"
$zipFile = "$env:TEMP\ecs-agent.zip"
$md5File = "$env:TEMP\ecs-agent.zip.md5"

### Get the files from Amazon S3
Invoke-RestMethod -OutFile $zipFile -Uri $agentZipUri
Invoke-RestMethod -OutFile $md5File -Uri $agentZipMD5Uri

## MD5 Checksum
$expectedMD5 = (Get-Content $md5File)
$md5 = New-Object -TypeName System.Security.Cryptography.MD5CryptoServiceProvider
$actualMD5 = [System.BitConverter]::ToString($md5.ComputeHash([System.IO.File]::ReadAllBytes($zipFile))).replace('-', '')

if($expectedMD5 -ne $actualMD5) {
    echo "Download doesn't match hash."
    echo "Expected: $expectedMD5 - Got: $actualMD5"
    exit 1
}

## Put the executables in the executable directory
Expand-Archive -Path $zipFile -DestinationPath $ecsExeDir -Force

## Start the agent script in the background
$jobname = "ECS-Agent-Init"
$script =  "cd '$ecsExeDir'; .\amazon-ecs-agent.ps1"
$repeat = (New-TimeSpan -Minutes 1)

$jobpath = $env:LOCALAPPDATA + "\Microsoft\Windows\PowerShell\ScheduledJobs\$jobname\ScheduledJobDefinition.xml"
if($(Test-Path -Path $jobpath)) {
    echo "Job definition already present"
    exit 0

}

$scriptblock = [scriptblock]::Create("$script")
$trigger = New-JobTrigger -At (Get-Date).Date -RepeatIndefinitely -RepetitionInterval $repeat -Once
$options = New-ScheduledJobOption -RunElevated -ContinueIfGoingOnBattery -StartIfOnBattery
Register-ScheduledJob -Name $jobname -ScriptBlock $scriptblock -Trigger $trigger -ScheduledJobOption $options -RunNow
Add-JobTrigger -Name $jobname -Trigger (New-JobTrigger -AtStartup -RandomDelay 00:1:00)

true
'@ -replace 'YOURCLUSTERNAME', $ClusterName

$UserDataBase64 = [System.Convert]::ToBase64String(([Byte[]][Char[]] $UserDataScript))

# Create a block device mapping to increase the size of the root volume
$BlockDevice = [Amazon.AutoScaling.Model.BlockDeviceMapping]::new()
$BlockDevice.DeviceName = '/dev/sda1'
$BlockDevice.Ebs = [Amazon.AutoScaling.Model.Ebs]::new()
$BlockDevice.Ebs.DeleteOnTermination = $true
$BlockDevice.Ebs.VolumeSize = 200
$BlockDevice.Ebs.VolumeType = 'gp2'

$LaunchConfig = @{
    LaunchConfigurationName = 'ECSRivendell'
    AssociatePublicIpAddress = $true
    EbsOptimized = $true
    BlockDeviceMapping = $BlockDevice
    InstanceType = 'r4.large'
    SpotPrice = '0.18'
    InstanceMonitoring_Enabled = $true
    IamInstanceProfile = 'ECSRivendellInstanceRole'
    ImageId = 'ami-6a887b12'
    UserData = $UserDataBase64
}
$NewLaunchConfig = New-ASLaunchConfiguration @LaunchConfig

Be sure you give your EC2 container instances enough storage to cache container images over time. In my example, I’m instructing the launch configuration to grant a 200 GB Amazon EBS root volume to each EC2 instance that is deployed into the Auto Scaling group, once it’s set up.

Create the Auto Scaling group

Now that you've set up the Auto Scaling launch configuration, you can deploy a new Auto Scaling group from it. Once created, the Auto Scaling group submits the EC2 Spot request for the desired number of EC2 instances when they're needed. All you have to do is tell the Auto Scaling group how many instances you want, and it handles spinning them up or down.

Optionally, you can even set up Auto Scaling policies so that you don’t have to manually scale the cluster. We’ll save the topic of Auto Scaling policies for another article, however, and instead focus on setting up the ECS cluster.

The PowerShell code to create the Auto Scaling group from the launch configuration that you built is included below. Keep in mind that although I've tried to keep the code snippets fairly generic, this particular snippet has a hard-coded command that dynamically retrieves all of your Amazon Virtual Private Cloud (VPC) subnet IDs. It works fine as long as you only have a single VPC in the region you're currently operating in. If you have more than one VPC, you'll want to filter the list of subnets returned by the Get-EC2Subnet command down to those contained in the desired target VPC, as in the sketch that follows.
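
If you do need to scope the lookup to one VPC, a minimal sketch like this (the VPC ID is a placeholder, not part of the original walkthrough) builds the comma-delimited subnet list instead:

### Placeholder VPC ID; substitute the ID of your target VPC
$TargetVpcId = 'vpc-12345678'
### Keep only the subnets in that VPC, joined into the comma-delimited form the Auto Scaling group expects
$FilteredSubnets = [String]::Join(',', (Get-EC2Subnet -Filter @{ Name = 'vpc-id'; Values = $TargetVpcId }).SubnetId)

You could then pass $FilteredSubnets as the VPCZoneIdentifier value in the hashtable below.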

$AutoScalingGroup = @{
    AutoScalingGroupName = 'ECSRivendell'
    LaunchConfigurationName = 'ECSRivendell'
    MinSize = 2  
    MaxSize = 5
    DesiredCapacity = 3
    VPCZoneIdentifier = [String]::Join(',', (Get-EC2Subnet).SubnetId)
    Tag = $( $Tag = [Amazon.AutoScaling.Model.Tag]::new()
             $Tag.PropagateAtLaunch = $true; $Tag.Key = 'Name'; $Tag.Value = 'ECS: ECSRivendell'; $Tag )
}
$NewAutoScalingGroup = New-ASAutoScalingGroup @AutoScalingGroup

After running this code snippet, your Auto Scaling group will take a few minutes to deploy. Additionally, it can sometimes take 15-30 minutes for the ECS container agent running on each of the EC2 instances to finish registering with the ECS cluster. Once you've waited a little while, check out the EC2 instances area of the AWS Management Console and examine your shiny new container instances!
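
If you'd rather stay in PowerShell than open the console, a quick check along these lines (a sketch that reuses the same Auto Scaling group name) lists the instances the group has launched and their lifecycle state:

### List the instances the Auto Scaling group has launched, with their state and health
(Get-ASAutoScalingGroup -AutoScalingGroupName ECSRivendell).Instances |
    Select-Object InstanceId, LifecycleState, HealthStatus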

You can also visit the ECS area of the AWS Management Console, and examine the number of container instances that are registered with the ECS cluster. You now have a functioning ECS cluster!

You can even use the Get-ECSClusterDetail command in PowerShell to examine the current count of container instances.

PS /Users/tsulli> (Get-ECSClusterDetail -Cluster ECSRivendell).Clusters

ActiveServicesCount               : 0
ClusterArn                        : arn:aws:ecs:us-west-2:676655494252:cluster/ECSRivendell
ClusterName                       : ECSRivendell
PendingTasksCount                 : 0
RegisteredContainerInstancesCount : 3
RunningTasksCount                 : 0
Status                            : ACTIVE

Conclusion

In this article, you set up an EC2 Container Service (ECS) cluster with several EC2 instances running Windows Server registered to it. Prior to that, you also set up the IAM policy, IAM role, and instance profile that are necessary prerequisites to ensure that the ECS cluster operates correctly. You used the AWS Tools for PowerShell to accomplish this automation, and learned about key ECS and EC2 Auto Scaling commands.

Although you have a functional ECS Cluster at this point, you haven’t yet deployed any containers (ECS tasks) to it. Keep an eye out for part 2 of this blog post, where you’ll explore building your own custom container images, pushing those container images up to Amazon EC2 Container Registry (Amazon ECR), and running ECS tasks and services.

Writing and Archiving Custom Metrics using Amazon CloudWatch and AWS Tools for PowerShell

This is a guest post from Trevor Sullivan, a Seattle-based Solutions Architect at Amazon Web Services (AWS). Since 2004, Trevor has worked intimately with Microsoft technologies, including PowerShell since its release in 2006. In this article, Trevor takes you through the process of using the AWS Tools for PowerShell to write and export metrics data from Amazon CloudWatch.

Amazon’s CloudWatch service is an umbrella that covers a few major areas: logging, metrics, charting, dashboards, alarms, and events.

I wanted to take a few minutes to cover the CloudWatch Metrics area, specifically as it relates to interacting with metrics from PowerShell. We’ll start off with a discussion and demonstration of how to write metric data into CloudWatch, then move on to how to find existing metrics in CloudWatch, and finally how to retrieve metric data points from a specific metric.

Amazon CloudWatch stores metrics data for up to 15 months. However, you can export data from Amazon CloudWatch into a long-term retention tool of your choice, depending on your requirements for metric data retention and the level of metric granularity you need. Although exported data can no longer be used inside CloudWatch after it ages out, you can use other AWS data analytics tools, such as Amazon QuickSight and Amazon Athena, to build reports against your historical data.

Assumptions

For this article, we assume that you have an AWS account. We also assume you understand PowerShell at a fundamental level, have installed PowerShell and the AWS Tools for PowerShell on your platform of choice, and have already set up your AWS credentials file and the IAM policies needed to grant access to CloudWatch. We'll discuss and demonstrate how to call various CloudWatch APIs from PowerShell, so make sure your environment is ready to follow along.

For more information, see the Getting Started guide for AWS Tools for PowerShell.

Write metric data into Amazon CloudWatch

Let’s start by talking about storing custom metrics in CloudWatch.

In the AWS Tools for PowerShell, there’s a command named Write-CWMetricData. This PowerShell command ultimately calls the PutMetricData API to write metrics to Amazon CloudWatch. It’s fairly easy to call this command, as there are only a handful of parameters. However, you should understand how CloudWatch works before attempting to use the command.

  • CloudWatch metrics are stored inside namespaces.
  • Metric data points:
    • Must have a name.
    • May have zero or more dimensions.
    • May have a value, time stamp, and unit of measure (e.g., Bytes, BytesPerSecond, Count).
    • May specify a custom storage resolution (e.g., 1 second or 5 seconds; the default is 60 seconds).
  • In the AWS Tools for PowerShell, you construct one or more MetricDatum .NET objects before passing them into Write-CWMetricData.

With that conceptual information out of the way, let’s look at the simplest way to create a custom metric. Writing metric data points into CloudWatch is how you create a metric. There isn’t a separate operation to create a metric and then write data points into it.

### First, we create one or more MetricDatum objects
$Metric = [Amazon.CloudWatch.Model.MetricDatum]::new()
$Metric.MetricName = 'UserCount'
$Metric.Value = 98

### Second, we write the metric data points to a CloudWatch metrics namespace
Write-CWMetricData -MetricData $Metric -Namespace trevortest/tsulli.loc

If you have lots of metrics to track, and you’d prefer to avoid cluttering up your top-level metric namespaces, this is where metric dimensions can come in handy. For example, let’s say we want to track a “UserCount” metric for over 100 different Active Directory domains. We can store them all under a single namespace, but create a “DomainName” dimension, on each metric, whose value is the name of each Active Directory domain. The following screenshot shows an example of this in action.

Here’s a PowerShell code example that shows how to write a metric to CloudWatch, with a dimension. Although the samples we’ve looked at in this article show how to write a single metric, you should strive to reduce the number of disparate AWS API calls that you make from your application code. Try to consolidate the gathering and writing of multiple metric data points in the same PutMetricData API call, as an array of MetricDatum objects. Your application will perform better, with fewer HTTP connections being created and destroyed, and you’ll still be able to gather as many metrics as you want.

$Metric = [Amazon.CloudWatch.Model.MetricDatum]::new()
### Create a metric dimension, and set its name and value
$Dimension = [Amazon.CloudWatch.Model.Dimension]::new()
$Dimension.Name = 'DomainName'
$Dimension.Value = 'awstrevor.loc'

$Metric.MetricName = 'UserCount'
$Metric.Value = 76
### NOTE: Be sure you assign the Dimension object to the Dimensions property of the MetricDatum object
$Metric.Dimensions = $Dimension
Write-CWMetricData -MetricData $Metric -Namespace trevortest
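
As a rough sketch of the batching advice mentioned earlier (the domain names and metric values here are just examples), you can pass an array of MetricDatum objects to a single Write-CWMetricData call:

### Gather several data points, one per Active Directory domain in this example
$Metrics = foreach ($Domain in 'tsulli.loc', 'awstrevor.loc') {
    $Datum = [Amazon.CloudWatch.Model.MetricDatum]::new()
    $Datum.MetricName = 'UserCount'
    $Datum.Value = Get-Random -Minimum 50 -Maximum 150   # placeholder values
    $Dimension = [Amazon.CloudWatch.Model.Dimension]::new()
    $Dimension.Name = 'DomainName'
    $Dimension.Value = $Domain
    $Datum.Dimensions = $Dimension
    $Datum
}

### A single PutMetricData call writes all of the gathered data points
Write-CWMetricData -MetricData $Metrics -Namespace trevortest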

Retrieve a list of metrics from Amazon CloudWatch

Now that we’ve written custom metrics to CloudWatch, let’s discuss how we search for metrics. Over time, you might find that you have thousands or tens of thousands of metrics in your AWS accounts, across various regions. As a result, it’s imperative that you know how to locate metrics relevant to your analysis project.

You can, of course, use the AWS Management Console to view metric namespaces, and explore the metrics and metric dimensions contained within each namespace. Although this approach will help you gain initial familiarity with the platform, you’ll most likely want to use automation to help you find relevant data within the platform. Automation is especially important when you introduce metrics across multiple AWS Regions and multiple AWS accounts, as they can be harder to find via a graphical interface.

In the AWS Tools for PowerShell, the Get-CWMetricList command maps to the AWS ListMetrics API. This returns a list of high-level information about the metrics stored in CloudWatch. If you have lots of metrics stored in your account, you might get back a very large list. Thankfully, PowerShell has some generic sorting and filtering commands that can help you find the metrics you’re seeking, with some useful filtering parameters on the Get-CWMetricList command itself.

Let’s explore a few examples of how to use this command.

Starting with the simplest example, we’ll retrieve a list of all the CloudWatch metrics from the current AWS account and region.

Get-CWMetricList

If the results of this command are a little overwhelming, that's okay. We can filter the returned metrics down to a specific metric namespace, using the -Namespace parameter.

Get-CWMetricList -Namespace AWS/Lambda

What if you don’t know which metric namespaces exist? PowerShell provides a useful command that enables you to filter for unique values.

(Get-CWMetricList).Namespace | Select-Object -Unique

If these results aren’t in alphabetical order, it might be hard to visually scan through them, so let’s sort them.

(Get-CWMetricList).Namespace | Select-Object -Unique | Sort-Object

Much better! Another option is to search for metrics based on a dimension key-value pair. It’s a bit more typing, but it’s a useful construct to search through thousands of metrics. You can even write a simple wrapper PowerShell function to make it easier to construct one of these DimensionFilter objects.

$Filter = [Amazon.CloudWatch.Model.DimensionFilter]::new()
$Filter.Name = 'DomainName'
$Filter.Value = 'tsulli.loc'
Get-CWMetricList -Dimension $Filter
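
Here's one possible shape for that wrapper function; the function name is mine, not part of the AWS tooling:

### Hypothetical helper that builds a DimensionFilter from a name and optional value
function New-CWDimensionFilter {
    param(
        [Parameter(Mandatory)] [string] $Name,
        [string] $Value
    )
    $Filter = [Amazon.CloudWatch.Model.DimensionFilter]::new()
    $Filter.Name = $Name
    if ($Value) { $Filter.Value = $Value }
    $Filter
}

### Usage
Get-CWMetricList -Dimension (New-CWDimensionFilter -Name DomainName -Value tsulli.loc)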

If you know the name of a specific metric, you can query for a list of metrics that match that name. You might get back multiple results if metrics with the same name, but different dimensions, exist in the same namespace. You can also have similarly named metrics across multiple namespaces, with or without dimensions.

Get-CWMetricList -MetricName UserCount

PowerShell’s built-in, generic Where-Object command is infinitely useful in finding metrics or namespaces, if you don’t know their exact, full name.

This example shows how to filter for any metric names that contain “User”.

Get-CWMetricList | Where-Object -FilterScript { $PSItem.MetricName -match 'User' }

Filtering metrics by namespace is just as easy. Let’s search for metrics that are stored inside any metric namespace that ends with “EBS”.

Get-CWMetricList | Where-Object -FilterScript { $PSItem.Namespace -match 'EBS$' }

That’s likely enough examples of how to find metrics in CloudWatch, using PowerShell! Let’s move on and talk about pulling actual metric data points from CloudWatch, using PowerShell.

Pull metric data from CloudWatch

Concepts

Metric data is stored in CloudWatch for a finite period of time. Before metrics age out of CloudWatch, metric data points (metric “statistics”) move through a tiered system in which they are aggregated and stored as less granular data points. For example, metrics gathered at a per-minute period are aggregated and stored as five-minute metrics when they reach an age of fifteen (15) days. You can find detailed information about the aggregation process and retention period in the Amazon CloudWatch metrics documentation.

Data aggregation in CloudWatch, as of this writing, starts when metric data points reach an age of three (3) hours. If you want to keep the most detailed resolution of your metric data points, be sure to export them before they're aggregated. Services such as AWS Lambda, or even PowerShell applications deployed onto Amazon EC2 Container Service (ECS), can help you achieve this export process in a scalable fashion.

The longest period that metrics are stored in CloudWatch is 15 months. If you want to store metrics data beyond a 15-month period, you must query the metrics data before CloudWatch performs aggregation on your metrics, and store it in an alternate repository, such as Amazon DynamoDB, Amazon S3, or Amazon RDS.

PowerShell Deep Dive

Now that we’ve covered some of the conceptual topics around retrieving and archiving CloudWatch metrics, let’s look at the actual PowerShell command to retrieve data points.

The AWS Tools for PowerShell include a command named Get-CWMetricStatistic, which maps to the GetMetricStatistics API in AWS. You can use this command to retrieve granular data points from your CloudWatch metrics.

There are quite a few parameters that you need to specify on this command, because you are querying a potentially massive dataset. You need to be very specific about the metric namespace, name, start time, end time, period, and statistic that you want to retrieve.

Let’s find the metric data points for the Active Directory UserCount metric, for the past 60 minutes, every minute. We assign the API response to a variable, so we can dig into it further. You most likely don’t have this metric, but I’ve been gathering this metric for awhile, so I’ve got roughly a week’s worth of data at a per-minute level. Of course, my metrics are subject to the built-in aggregation policies, so my per-minute data is only good for up to 15 days.

$Data = Get-CWMetricStatistic -Namespace ActiveDirectory/tsulli.loc -ExtendedStatistic p0.0 -MetricName UserCount -StartTime ([DateTime]::UtcNow.AddHours(-1)) -EndTime ([DateTime]::UtcNow) -Period 60

As you can see, the command is a little lengthy, but when you use PowerShell’s tab completion to help finish the command and parameter names, typing it out isn’t too bad.

Because we’re querying the most recent hour’s worth of data, we should have exactly 60 data points in our response. We can confirm this by examining PowerShell’s built-in Count property on the Datapoints property in our API response.

$Data.Datapoints.Count

The data points aren’t in chronological order when returned, so let’s use PowerShell to sort them, and grab the most recent data point.

$Data.Datapoints | Sort-Object -Property Timestamp | Select-Object -Last 1
Average            : 0
ExtendedStatistics : {[p0.0, 19]}
Maximum            : 0
Minimum            : 0
SampleCount        : 0
Sum                : 0
Timestamp          : 10/14/17 4:27:00 PM
Unit               : Count

Now we can take this data and start exporting it into our preferred external data store, for long-term retention! I’ll leave it to you to explore this API further.
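
As a rough illustration of that export step (the bucket name and key prefix are placeholders, not part of the original walkthrough), you could serialize the data points to JSON and archive them in Amazon S3 with the AWS Tools for PowerShell:

### Serialize the retrieved data points to JSON in a temporary file
$ExportFile = Join-Path ([System.IO.Path]::GetTempPath()) 'UserCount-export.json'
$Data.Datapoints |
    Sort-Object -Property Timestamp |
    ConvertTo-Json -Depth 5 |
    Set-Content -Path $ExportFile

### 'my-metrics-archive' is a placeholder bucket name
Write-S3Object -BucketName my-metrics-archive -Key "cloudwatch/UserCount/$(Get-Date -Format yyyy-MM-dd).json" -File $ExportFile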

Due to the detailed options available in the GetMetricStatistics API, I would strongly encourage you to read through the documentation and, more importantly, run your own experiments with the API. You'll need to use this API extensively if you want to export data points from CloudWatch metrics to an alternative data store, as described earlier.

Conclusion

In this article, we’ve explored the use of the AWS Tools for PowerShell to assist you with writing metrics to Amazon CloudWatch, searching or querying for metrics, and retrieving metric data points. I would encourage you to think about how Amazon CloudWatch can integrate with a variety of other services, to help you achieve your objectives.

Don’t forget that after you’ve stored metrics data in CloudWatch, you can then build dashboards around that data, connect CloudWatch alarms to notify you about infrastructure and application issues, and even perform automated remediation tasks. The sky is the limit, so put on your builder’s hat and start creating!

Please feel free to follow me on Twitter, and keep an eye on my YouTube channel.

Deploying .NET Web Applications Using AWS CodeDeploy with Visual Studio Team Services

Today’s post is from AWS Solution Architect Aravind Kodandaramaiah.

We recently announced the new AWS Tools for Microsoft Visual Studio Team Services. In this post, we show you how you can use these tools to deploy your .NET web applications from Team Services to Amazon EC2 instances by using AWS CodeDeploy.

We don’t cover setting up the tools in Team Services in this post. We assume you already have a Team Services account or are using an on-premises TFS instance. We also assume you know how to push your source code to the repository used in your Team Services build. You can use the AWS tasks in build or in release definitions. For simplicity, this post shows you how to use the tasks in a build definition.

AWS CodeDeploy is a service that automates code deployments to any instance, including Amazon EC2 instances and instances running on-premises. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during deployment, and handles the complexity of updating your applications. You can use AWS CodeDeploy to automate deployments, eliminating error-prone manual operations. The service also scales with your infrastructure, so you can easily deploy to one instance or a thousand.

Setting up an AWS environment

Before we get started configuring a build definition within Team Services, we need to set up an AWS environment. Follow these steps to set up the AWS environment to enable deployments using the AWS CodeDeploy Application Deployment task with Team Services.

  1. Provision an AWS Identity and Access Management (IAM) user. See the AWS documentation for how to prepare IAM users to use AWS CodeDeploy. We’ll configure the access key ID and secret key ID of this IAM user within Team Services to initiate the deployment.
  2. Create a service role for AWS CodeDeploy. See the AWS documentation for details about how to create a service role for AWS CodeDeploy. This service role provides AWS CodeDeploy access to your AWS resources for deployment.
  3. Create an IAM instance profile for EC2 instances. See the AWS documentation for details about how to create IAM instance profiles. An IAM instance profile provides applications running within an EC2 instance access to AWS services.
  4. Launch, tag, and configure EC2 instances. Follow these instructions for launching and configuring EC2 instances to work with AWS CodeDeploy. We’ll use these EC2 Instances to run the deployed application.
  5. Create an Amazon S3 bucket to store application revisions. See the AWS documentation for details about creating S3 buckets.
  6. Create an AWS CodeDeploy application and a deployment group. After you configure instances, but before you can deploy a revision, you must create an application and deployment group in AWS CodeDeploy. AWS CodeDeploy uses a deployment group to identify EC2 instances that need the application to be deployed. Tags help group EC2 instances into a deployment group (a PowerShell sketch of this step follows the list).
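
If you'd like to script step 6 as well, here's a minimal sketch using the AWS Tools for PowerShell; the application name, deployment group name, service role ARN, and tag values are placeholders, and the parameter names assume the New-CDApplication and New-CDDeploymentGroup cmdlets:

### Placeholder names; substitute your own application, role, and tag values
New-CDApplication -ApplicationName MusicWorldApp

### Identify target EC2 instances by tag
$TagFilter = [Amazon.CodeDeploy.Model.EC2TagFilter]::new()
$TagFilter.Key = 'Environment'
$TagFilter.Value = 'Production'
$TagFilter.Type = 'KEY_AND_VALUE'

New-CDDeploymentGroup -ApplicationName MusicWorldApp `
    -DeploymentGroupName MusicWorldDeploymentGroup `
    -ServiceRoleArn 'arn:aws:iam::111122223333:role/CodeDeployServiceRole' `
    -Ec2TagFilter $TagFilter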

We’ve outlined the manual steps to create the AWS environment. However, it’s possible to completely automate creation of such AWS environments by using AWS CloudFormation, and this is the recommended approach. AWS CloudFormation enables you to represent infrastructure as code and to perform predictable, repeatable, and automated deployments. This enables you to control and track changes to your infrastructure.

Setting up an AWS CodeDeploy environment

In our Git repository in Team Services, we have an ASP.NET web application, the AWS CodeDeploy AppSpec file, and a few deployment PowerShell scripts. These files are located at the root of the repo, as shown below.

The appspec.yml file is a YAML-formatted file that AWS CodeDeploy uses to determine the artifacts to install and the lifecycle events to run. You must place it in the root of an application’s source code directory structure. Here’s the content of the sample appspec.yml file.

version: 0.0
os: windows
files:
  - source: \
    destination: C:\temp\WebApp\MusicWorld
  
hooks:
  BeforeInstall:
    - location: .\StageArtifact.ps1
    - location: .\CreateWebSite.ps1

AWS CodeDeploy executes the PowerShell scripts before copying the revision files to the final destination folder. These PowerShell scripts register the web application with IIS, and copy the application files to the physical path associated with the IIS web application.

The StageArtifact.ps1 file is a PowerShell script that unpacks the Microsoft Web Deploy (msdeploy) web artifact, and copies the application files to the physical path that is associated with the IIS web application.

$target = "C:\inetpub\wwwroot\MusicWorld\" 

function DeleteIfExistsAndCreateEmptyFolder($dir )
{
    if ( Test-Path $dir ) {    
           Get-ChildItem -Path $dir -Force -Recurse | Remove-Item -Force -Recurse
           Remove-Item $dir -Force
    }
    New-Item -ItemType Directory -Force -Path $dir
}
# Clean up target directory
DeleteIfExistsAndCreateEmptyFolder($target )

# msdeploy creates a web artifact with multiple levels of folders. We only need the content
# of the folder that contains the web application (identified here by the presence of Global.asax)
function GetWebArtifactFolderPath($path)
{
    foreach ($item in Get-ChildItem $path)
    {   
        if (Test-Path $item.FullName -PathType Container)
        {   
            # return the full path for the folder which contains Global.asax
            if (Test-Path ($item.fullname + "\Global.asax"))
            {
                #$item.FullName
                return $item.FullName;
            }
            GetWebArtifactFolderPath $item.FullName
        }
    }
}

$path = GetWebArtifactFolderPath("C:\temp\WebApp\MusicWorld")
$path2 = $path + "\*"
Copy-Item $path2 $target -recurse -force

The CreateWebSite.ps1 file is a PowerShell script that creates a web application in IIS.

New-WebApplication -Site "Default Web Site" -Name MusicWorld -PhysicalPath c:\inetpub\wwwroot\MusicWorld -Force

Setting up the build definition for an ASP.NET web application

The AWS CodeDeploy Application Deployment task in the AWS Tools for Microsoft Visual Studio Team Services supports deployment of any type of application, as long as you register the deployment script in the appspec.yml file. In this post, we deploy ASP.NET applications that are packaged as a Web Deploy archive.

We use the ASP.NET build template to get an ASP.NET application built and packaged as a Web Deploy archive.

Be sure to set up the following MSBuild arguments within the Build Solution task.

/p:WebPublishMethod=Package /p:PackageAsSingleFile=false /p:SkipInvalidConfigurations=true /p:PackageLocation="$(build.artifactstagingdirectory)\\publish\\application"

Remove the Publish Artifacts task from the build pipeline, as the build artifacts will be uploaded into Amazon S3 by the AWS CodeDeploy Application Deployment task.

Now that our application has been built using MSBuild, we need to copy the appspec.yml file and the PowerShell deployment scripts to the root of the publish folder. This is so that AWS CodeDeploy can find the appspec.yml file at the root of the application folder.

Add a new Copy Files task. Choose Add Task, and then search for “Copy”. On the found task, choose Add to include the task in the build definition.

Configure this task to copy the appspec.yml file and the PowerShell scripts to the parent folder of the “packagelocation” defined within the Build Solution task. This step allows the AWS CodeDeploy Application Deployment task to zip up the contents of the revision bundle recursively before uploading the archive to Amazon S3.

Next, add the AWS CodeDeploy Application Deployment task. Choose Add Task, and then search for “CodeDeploy”. On the found task, choose Add to include the task in the build definition.

For the AWS CodeDeploy Application Deployment task, make the following configuration changes:

  • AWS Credentials – The AWS credentials used to perform the deployment. Our previous post on the Team Services tools discusses setting up AWS credentials in Team Services. We recommend that the credentials be those for an IAM user, with a policy that enables the user to perform an AWS CodeDeploy deployment.
  • AWS Region – The AWS Region that AWS CodeDeploy is running in.
  • Application Name – The name of the AWS CodeDeploy application.
  • Deployment Group Name – The name of the deployment group to deploy to.
  • Revision Bundle – The artifacts to deploy. You can supply a folder or a file name to this parameter. If you supply a folder, the task will zip the contents of the folder recursively into an archive file before uploading the archive to Amazon S3. If you supply a file name, the task uploads it, unmodified, to Amazon S3. Note that AWS CodeDeploy requires the appspec.yml file describing the application to be located at the root of the specified folder or archive file.
  • Bucket Name – The name of the bucket to which the revision bundle will be uploaded. The target Amazon S3 bucket must exist in the same AWS Region as the target instance.
  • Target Folder – Optional folder (key prefix) for the uploaded revision bundle in the bucket. If you don’t specify a target folder, the bundle will be uploaded to the root of the bucket.

Now that we’ve configured all the tasks, we’re ready to deploy to Amazon EC2 instances using AWS CodeDeploy. If you queue a build now, you should see output similar to this for the deployment.

Next, navigate to the AWS CodeDeploy console and choose Deployments. Choose the deployment that AWS CodeDeploy completed to view the deployment progress. In this example, the web application has been deployed to all the EC2 instances within the deployment group.

If your AWS CodeDeploy application was created with a load balancer, you can verify the web application deployment by navigating to the DNS name of the load balancer using a web browser.

Conclusion

We hope Visual Studio Team Services users find the AWS CodeDeploy Application Deployment task helpful and easy to use. We also appreciate hearing your feedback on our GitHub repository for these Team Services tasks.

New Get-ECRLoginCommand for AWS Tools for PowerShell

Today’s post is from AWS Solution Architect and Microsoft MVP for Cloud and Data Center Management, Trevor Sullivan.

The AWS Tools for PowerShell now offer a new command that makes it easier to authenticate to the Amazon EC2 Container Registry (Amazon ECR).

Amazon EC2 Container Registry (ECR) is a service that enables customers to upload and store their Windows-based and Linux-based container images. Once a developer uploads these container images, they can then be deployed to stand-alone container hosts or container clusters, such as those running under the Amazon EC2 Container Service (Amazon ECS).

To push or pull container images from ECR, you must authenticate to the registry using the Docker API. ECR provides a GetAuthorizationToken API that retrieves the credential you’ll use to authenticate to ECR. In the AWS PowerShell modules, this API is mapped to the cmdlet Get-ECRAuthorizationToken. The response you receive from this service invocation includes a username and password for the registry, encoded as base64. To retrieve the credential, you must decode the base64 response into a byte array, and then decode the byte array as a UTF-8 string. After retrieving the UTF-8 string, the username and password are provided to you in a colon-delimited format. You simply split the string on the colon character to receive the username as array index 0, and the password as array index 1.
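
To make that concrete, here's a short sketch of the manual decoding described above; it assumes Get-ECRAuthorizationToken emits the AuthorizationData object directly:

### Retrieve the base64-encoded authorization token and decode it
$AuthData = Get-ECRAuthorizationToken
$DecodedToken = [System.Text.Encoding]::UTF8.GetString(
    [System.Convert]::FromBase64String($AuthData.AuthorizationToken))

### The decoded string is colon-delimited: username first, then password
$Username, $Password = $DecodedToken.Split(':')
### $AuthData.ProxyEndpoint is the registry URL you pass to docker login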

Now, with Get-ECRLoginCommand, you can retrieve a pregenerated Docker login command that authenticates your container hosts to ECR. Although you can still directly call the GetAuthorizationToken API, Get-ECRLoginCommand provides a helpful shortcut that reduces the amount of required conversion effort.

Let’s look at a short example of how you can use this new command from PowerShell:

PS> Invoke-Expression -Command (Get-ECRLoginCommand -Region us-west-2).Command
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded

As you can see, all you have to do is call Get-ECRLoginCommand, and then pass the prebuilt Command property into the built-in Invoke-Expression PowerShell cmdlet. Upon running this cmdlet, you're authenticated to ECR, and can then proceed to create image repositories and push and pull container images.

Note: You might receive a warning about specifying the registry password on the Docker CLI. However, you can also build your own Docker login command by using the other properties on the object returned from the Get-ECRLoginCommand.
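
For example, a sketch along these lines avoids the warning by piping the password to docker login via stdin; it assumes the returned object exposes Username, Password, and ProxyEndpoint properties:

### Build the docker login call yourself instead of using the prebuilt Command string
$Login = Get-ECRLoginCommand -Region us-west-2
$Login.Password | docker login --username $Login.Username --password-stdin $Login.ProxyEndpoint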

I hope you find the new cmdlet useful! If you have ideas for other cmdlets we should add, be sure to let us know in the comments.