AWS Developer Blog

Writing and Archiving Custom Metrics using Amazon CloudWatch and AWS Tools for PowerShell

This is a guest post from Trevor Sullivan, a Seattle-based Solutions Architect at Amazon Web Services (AWS). Since 2004, Trevor has worked intimately with Microsoft technologies, including PowerShell since its release in 2006. In this article, Trevor takes you through the process of using the AWS Tools for PowerShell to write and export metrics data from Amazon CloudWatch.

Amazon’s CloudWatch service is an umbrella that covers a few major areas: logging, metrics, charting, dashboards, alarms, and events.

I wanted to take a few minutes to cover the CloudWatch Metrics area, specifically as it relates to interacting with metrics from PowerShell. We’ll start off with a discussion and demonstration of how to write metric data into CloudWatch, then move on to how to find existing metrics in CloudWatch, and finally how to retrieve metric data points from a specific metric.

Amazon CloudWatch stores metrics data for up to 15 months. However, you can export data from Amazon CloudWatch into a long-term retention tool of your choice, depending on your requirements for metric data retention and the level of metric granularity you need. Although exported data may no longer be usable inside CloudWatch after it ages out, you can use other AWS data analytics tools, such as Amazon QuickSight and Amazon Athena, to build reports against your historical data.

Assumptions

For this article, we assume that you have an AWS account. We also assume you understand PowerShell at a fundamental level, have installed PowerShell and the AWS Tools for PowerShell on your platform of choice, and have already set up your AWS credentials file and the IAM policies granting access to CloudWatch. We’ll discuss and demonstrate how to call various CloudWatch APIs from PowerShell, so make sure you have these prerequisites in place.

For more information, see the Getting Started guide for AWS Tools for PowerShell.

Write metric data into Amazon CloudWatch

Let’s start by talking about storing custom metrics in CloudWatch.

In the AWS Tools for PowerShell, there’s a command named Write-CWMetricData. This PowerShell command ultimately calls the PutMetricData API to write metrics to Amazon CloudWatch. It’s fairly easy to call this command, as there are only a handful of parameters. However, you should understand how CloudWatch works before attempting to use the command.

  • CloudWatch metrics are stored inside namespaces
  • Metric data points:
    • Must have a name.
    • May have zero or more dimensions.
    • May have a value, time stamp, and unit of measure (e.g., Bytes, BytesPerSecond, Count).
    • May specify a custom storage resolution (e.g., 1 second or 5 seconds; the default is 60 seconds).
  • In the AWS Tools for PowerShell, you construct one or more MetricDatum .NET objects before passing them into Write-CWMetricData.

With that conceptual information out of the way, let’s look at the simplest way to create a custom metric. Writing metric data points into CloudWatch is how you create a metric. There isn’t a separate operation to create a metric and then write data points into it.

### First, we create one or more MetricDatum objects
$Metric = [Amazon.CloudWatch.Model.MetricDatum]::new()
$Metric.MetricName = 'UserCount'
$Metric.Value = 98

### Second, we write the metric data points to a CloudWatch metrics namespace
Write-CWMetricData -MetricData $Metric -Namespace trevortest/tsulli.loc

If you have lots of metrics to track, and you’d prefer to avoid cluttering up your top-level metric namespaces, this is where metric dimensions can come in handy. For example, let’s say we want to track a “UserCount” metric for over 100 different Active Directory domains. We can store them all under a single namespace, and add a “DomainName” dimension to each metric whose value is the name of the Active Directory domain. The following screenshot shows an example of this in action.

Here’s a PowerShell code example that shows how to write a metric to CloudWatch, with a dimension. Although the samples we’ve looked at in this article show how to write a single metric, you should strive to reduce the number of disparate AWS API calls that you make from your application code. Try to consolidate the gathering and writing of multiple metric data points in the same PutMetricData API call, as an array of MetricDatum objects. Your application will perform better, with fewer HTTP connections being created and destroyed, and you’ll still be able to gather as many metrics as you want.

$Metric = [Amazon.CloudWatch.Model.MetricDatum]::new()
### Create a metric dimension, and set its name and value
$Dimension = [Amazon.CloudWatch.Model.Dimension]::new()
$Dimension.Name = 'DomainName'
$Dimension.Value = 'awstrevor.loc'

$Metric.MetricName = 'UserCount'
$Metric.Value = 76
### NOTE: Be sure you assign the Dimension object to the Dimensions property of the MetricDatum object
$Metric.Dimensions = $Dimension
Write-CWMetricData -MetricData $Metric -Namespace trevortest
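
To build on the consolidation advice above, here’s a hedged sketch that batches several MetricDatum objects, one per domain, into a single Write-CWMetricData call. The domain names and values below are placeholders; substitute your own measurements.

### Build one MetricDatum per Active Directory domain (domain names and values are placeholders)
$MetricData = foreach ($Domain in @('awstrevor.loc', 'tsulli.loc', 'example.loc')) {
    $Dimension = [Amazon.CloudWatch.Model.Dimension]::new()
    $Dimension.Name = 'DomainName'
    $Dimension.Value = $Domain

    $Metric = [Amazon.CloudWatch.Model.MetricDatum]::new()
    $Metric.MetricName = 'UserCount'
    $Metric.Value = Get-Random -Minimum 50 -Maximum 150   ### substitute your real measurement here
    $Metric.Dimensions = $Dimension
    $Metric
}

### One PutMetricData call writes all of the data points
Write-CWMetricData -MetricData $MetricData -Namespace trevortest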

Retrieve a list of metrics from Amazon CloudWatch

Now that we’ve written custom metrics to CloudWatch, let’s discuss how we search for metrics. Over time, you might find that you have thousands or tens of thousands of metrics in your AWS accounts, across various regions. As a result, it’s imperative that you know how to locate metrics relevant to your analysis project.

You can, of course, use the AWS Management Console to view metric namespaces, and explore the metrics and metric dimensions contained within each namespace. Although this approach will help you gain initial familiarity with the platform, you’ll most likely want to use automation to help you find relevant data within the platform. Automation is especially important when you introduce metrics across multiple AWS Regions and multiple AWS accounts, as they can be harder to find via a graphical interface.

In the AWS Tools for PowerShell, the Get-CWMetricList command maps to the AWS ListMetrics API. This returns a list of high-level information about the metrics stored in CloudWatch. If you have lots of metrics stored in your account, you might get back a very large list. Thankfully, PowerShell has some generic sorting and filtering commands that can help you find the metrics you’re seeking, and the Get-CWMetricList command itself has some useful filtering parameters.

Let’s explore a few examples of how to use this command.

Starting with the simplest example, we’ll retrieve a list of all the CloudWatch metrics from the current AWS account and region.

Get-CWMetricList

If the results of this command are a little overwhelming, that’s okay. We can filter down the returned metrics to a specific metric namespace, using the -Namespace parameter.

Get-CWMetricList -Namespace AWS/Lambda

What if you don’t know which metric namespaces exist? PowerShell provides a useful command that enables you to filter for unique values.

(Get-CWMetricList).Namespace | Select-Object -Unique

If these results aren’t in alphabetical order, it might be hard to visually scan through them, so let’s sort them.

(Get-CWMetricList).Namespace | Select-Object -Unique | Sort-Object

Much better! Another option is to search for metrics based on a dimension key-value pair. It’s a bit more typing, but it’s a useful construct to search through thousands of metrics. You can even write a simple wrapper PowerShell function to make it easier to construct one of these DimensionFilter objects.

$Filter = [Amazon.CloudWatch.Model.DimensionFilter]::new()
$Filter.Name = 'DomainName'
$Filter.Value = 'tsulli.loc'
Get-CWMetricList -Dimension $Filter
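
One hypothetical wrapper might look like the following. The function name is made up, but it removes the repetitive object construction:

### Hypothetical helper that wraps DimensionFilter construction
function New-CWDimensionFilter {
    param (
        [Parameter(Mandatory)] [string] $Name,
        [string] $Value
    )
    $Filter = [Amazon.CloudWatch.Model.DimensionFilter]::new()
    $Filter.Name = $Name
    if ($Value) { $Filter.Value = $Value }
    $Filter
}

Get-CWMetricList -Dimension (New-CWDimensionFilter -Name DomainName -Value tsulli.loc)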

If you know the name of a specific metric, you can query for a list of metrics that match that name. You might get back multiple results if metrics with the same name, but different dimensions, exist in the same namespace. You can also have similarly named metrics across multiple namespaces, with or without dimensions.

Get-CWMetricList -MetricName UserCount

PowerShell’s built-in, generic Where-Object command is infinitely useful in finding metrics or namespaces, if you don’t know their exact, full name.

This example shows how to filter for any metric names that contain “User”.

Get-CWMetricList | Where-Object -FilterScript { $PSItem.MetricName -match 'User' }

Filtering metrics by namespace is just as easy. Let’s search for metrics that are stored inside any metric namespace that ends with “EBS”.

Get-CWMetricList | Where-Object -FilterScript { $PSItem.Namespace -match 'EBS$' }
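
If your metrics are spread across multiple AWS Regions, you can combine Get-AWSRegion with the -Region parameter that the AWS cmdlets accept. This is a hedged sketch, and it will be slow, because it calls ListMetrics in every public region:

### Search every region for a metric named UserCount
Get-AWSRegion | ForEach-Object {
    $Region = $PSItem.Region
    Get-CWMetricList -Region $Region -MetricName UserCount |
        Select-Object -Property @{ Name = 'Region'; Expression = { $Region } }, Namespace, MetricName
}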

That’s likely enough examples of how to find metrics in CloudWatch, using PowerShell! Let’s move on and talk about pulling actual metric data points from CloudWatch, using PowerShell.

Pull metric data from CloudWatch

Concepts

Metric data is stored in CloudWatch for a finite period of time. Before metrics age out of CloudWatch, metric data points (metric “statistics”) move through a tiered system where they are aggregated and stored as less granular data points. For example, metrics gathered at a one-minute period are aggregated and stored as five-minute metrics when they reach an age of fifteen (15) days. You can find detailed information about the aggregation process and retention periods in the Amazon CloudWatch metrics documentation.

Data aggregation in CloudWatch, as of this writing, starts when metric data points reach an age of three (3) hours. If you want to keep the most detailed resolution of your metric data points, make sure you export your metric data before it ages into a less granular tier. Services such as AWS Lambda, or even PowerShell applications deployed onto Amazon EC2 Container Service (ECS), can help you achieve this export process in a scalable fashion.

The longest period that metrics are stored in CloudWatch is 15 months. If you want to store metrics data beyond a 15-month period, you must query the metrics data before CloudWatch aggregates it, and store it in an alternate repository, such as Amazon DynamoDB, Amazon S3, or Amazon RDS.

PowerShell Deep Dive

Now that we’ve covered some of the conceptual topics around retrieving and archiving CloudWatch metrics, let’s look at the actual PowerShell command to retrieve data points.

The AWS Tools for PowerShell include a command named Get-CWMetricStatistic, which maps to the GetMetricStatistics API in AWS. You can use this command to retrieve granular data points from your CloudWatch metrics.

There are quite a few parameters that you need to specify on this command, because you are querying a potentially massive dataset. You need to be very specific about the metric namespace, name, start time, end time, period, and statistic that you want to retrieve.

Let’s find the metric data points for the Active Directory UserCount metric, for the past 60 minutes, at one-minute granularity. We assign the API response to a variable, so we can dig into it further. You most likely don’t have this metric, but I’ve been gathering this metric for a while, so I’ve got roughly a week’s worth of data at a per-minute level. Of course, my metrics are subject to the built-in aggregation policies, so my per-minute data is only good for up to 15 days.

$Data = Get-CWMetricStatistic -Namespace ActiveDirectory/tsulli.loc -ExtendedStatistic p0.0 -MetricName UserCount -StartTime ([DateTime]::UtcNow.AddHours(-1)) -EndTime ([DateTime]::UtcNow) -Period 60

As you can see, the command is a little lengthy, but when you use PowerShell’s tab completion to help finish the command and parameter names, typing it out isn’t too bad.

Because we’re querying the most recent hour’s worth of data, we should have exactly 60 data points in our response. We can confirm this by examining PowerShell’s built-in Count property on the Datapoints property in our API response.

$Data.Datapoints.Count

The data points aren’t in chronological order when returned, so let’s use PowerShell to sort them, and grab the most recent data point.

$Data.Datapoints | Sort-Object -Property Timestamp | Select-Object -Last 1
Average            : 0
ExtendedStatistics : {[p0.0, 19]}
Maximum            : 0
Minimum            : 0
SampleCount        : 0
Sum                : 0
Timestamp          : 10/14/17 4:27:00 PM
Unit               : Count

Now we can take this data and start exporting it into our preferred external data store, for long-term retention! I’ll leave it to you to explore this API further.
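
As a hedged starting point, the sketch below serializes the retrieved data points to JSON and archives them to Amazon S3 with Write-S3Object. The bucket name and key prefix are placeholders.

### Archive the retrieved data points to S3 (bucket name and key are placeholders)
$Json = $Data.Datapoints | Sort-Object -Property Timestamp | ConvertTo-Json -Depth 5
$Key = "tsulli.loc/UserCount/$([DateTime]::UtcNow.ToString('yyyy-MM-ddTHH-mm')).json"
Write-S3Object -BucketName my-metric-archive -Key $Key -Content $Json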

Due to the detailed options available in the GetMetricStatistics API, I strongly encourage you to read through the documentation and, more importantly, run your own experiments with the API. You’ll need to use this API extensively if you want to export data points from CloudWatch metrics to an alternative data source, as described earlier.
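
As one hedged example of such an experiment, the following query pulls hourly averages for a built-in EC2 metric over the past day. The instance ID is a placeholder.

### Placeholder instance ID -- substitute one of your own
$Dimension = [Amazon.CloudWatch.Model.Dimension]::new()
$Dimension.Name = 'InstanceId'
$Dimension.Value = 'i-0123456789abcdef0'

Get-CWMetricStatistic -Namespace AWS/EC2 -MetricName CPUUtilization -Dimension $Dimension -Statistic Average -Period 3600 -StartTime ([DateTime]::UtcNow.AddDays(-1)) -EndTime ([DateTime]::UtcNow)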

Conclusion

In this article, we’ve explored the use of the AWS Tools for PowerShell to assist you with writing metrics to Amazon CloudWatch, searching or querying for metrics, and retrieving metric data points. I would encourage you to think about how Amazon CloudWatch can integrate with a variety of other services, to help you achieve your objectives.

Don’t forget that after you’ve stored metrics data in CloudWatch, you can then build dashboards around that data, connect CloudWatch alarms to notify you about infrastructure and application issues, and even perform automated remediation tasks. The sky is the limit, so put on your builder’s hat and start creating!

Please feel free to follow me on Twitter, and keep an eye on my YouTube channel.

Deploying .NET Web Applications Using AWS CodeDeploy with Visual Studio Team Services

Today’s post is from AWS Solution Architect Aravind Kodandaramaiah.

We recently announced the new AWS Tools for Microsoft Visual Studio Team Services. In this post, we show you how you can use these tools to deploy your .NET web applications from Team Services to Amazon EC2 instances by using AWS CodeDeploy.

We don’t cover setting up the tools in Team Services in this post. We assume you already have a Team Services account or are using an on-premises TFS instance. We also assume you know how to push your source code to the repository used in your Team Services build. You can use the AWS tasks in build or in release definitions. For simplicity, this post shows you how to use the tasks in a build definition.

AWS CodeDeploy is a service that automates code deployments to any instance, including Amazon EC2 instances and instances running on-premises. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during deployment, and handles the complexity of updating your applications. You can use AWS CodeDeploy to automate deployments, eliminating error-prone manual operations. The service also scales with your infrastructure, so you can easily deploy to one instance or a thousand.

Setting up an AWS environment

Before we get started configuring a build definition within Team Services, we need to set up an AWS environment. Follow these steps to set up the AWS environment to enable deployments using the AWS CodeDeploy Application Deployment task with Team Services.

  1. Provision an AWS Identity and Access Management (IAM) user. See the AWS documentation for how to prepare IAM users to use AWS CodeDeploy. We’ll configure the access key ID and secret key ID of this IAM user within Team Services to initiate the deployment.
  2. Create a service role for AWS CodeDeploy. See the AWS documentation for details about how to create a service role for AWS CodeDeploy. This service role provides AWS CodeDeploy access to your AWS resources for deployment.
  3. Create an IAM instance profile for EC2 instances. See the AWS documentation for details about how to create IAM instance profiles. An IAM instance profile provides applications running within an EC2 instance access to AWS services.
  4. Launch, tag, and configure EC2 instances. Follow these instructions for launching and configuring EC2 instances to work with AWS CodeDeploy. We’ll use these EC2 Instances to run the deployed application.
  5. Create an Amazon S3 bucket to store application revisions. See the AWS documentation for details about creating S3 buckets.
  6. Create an AWS CodeDeploy application and a deployment group. After you configure instances, but before you can deploy a revision, you must create an application and deployment group in AWS CodeDeploy. AWS CodeDeploy uses a deployment group to identify EC2 instances that need the application to be deployed. Tags help group EC2 instances into a deployment group.

We’ve outlined the manual steps to create the AWS environment. However, it’s possible to completely automate creation of such AWS environments by using AWS CloudFormation, and this is the recommended approach. AWS CloudFormation enables you to represent infrastructure as code and to perform predictable, repeatable, and automated deployments. This enables you to control and track changes to your infrastructure.
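
If you’d rather script step 6 than click through the console, and you already have the AWS Tools for PowerShell installed, a sketch along these lines may work. The application name, tag filter, and role ARN are placeholders, and you should verify the New-CDApplication and New-CDDeploymentGroup cmdlets and their parameters against your installed module version.

# Placeholder names, tag filter, and role ARN -- substitute your own values
New-CDApplication -ApplicationName MusicWorld

$TagFilter = [Amazon.CodeDeploy.Model.EC2TagFilter]::new()
$TagFilter.Key = 'Environment'
$TagFilter.Value = 'Production'
$TagFilter.Type = 'KEY_AND_VALUE'

New-CDDeploymentGroup -ApplicationName MusicWorld -DeploymentGroupName MusicWorld-Production -ServiceRoleArn 'arn:aws:iam::111122223333:role/CodeDeployServiceRole' -Ec2TagFilter $TagFilter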

Setting up an AWS CodeDeploy environment

In our Git repository in Team Services, we have an ASP.NET web application, the AWS CodeDeploy AppSpec file, and a few deployment PowerShell scripts. These files are located at the root of the repo, as shown below.

The appspec.yml file is a YAML-formatted file that AWS CodeDeploy uses to determine the artifacts to install and the lifecycle events to run. You must place it in the root of an application’s source code directory structure. Here’s the content of the sample appspec.yml file.

version: 0.0
os: windows
files:
  - source: \
    destination: C:\temp\WebApp\MusicWorld
  
hooks:
  BeforeInstall:
    - location: .\StageArtifact.ps1
    - location: .\CreateWebSite.ps1

AWS CodeDeploy executes the PowerShell scripts before copying the revision files to the final destination folder. These PowerShell scripts register the web application with IIS, and copy the application files to the physical path associated with the IIS web application.

The StageArtifacts.ps1 file is a PowerShell script that unpacks the Microsoft Web Deploy (msdeploy) web artifact, and copies the application files to the physical path that is associated with the IIS web application.

$target = "C:\inetpub\wwwroot\MusicWorld\" 

function DeleteIfExistsAndCreateEmptyFolder($dir )
{
    if ( Test-Path $dir ) {    
           Get-ChildItem -Path $dir -Force -Recurse | Remove-Item -Force -Recurse
           Remove-Item $dir -Force
    }
    New-Item -ItemType Directory -Force -Path $dir
}
# Clean up target directory
DeleteIfExistsAndCreateEmptyFolder($target )

# msdeploy creates a web artifact with multiple levels of folders. We only need the content 
# of the folder that has Web.config within it 
function GetWebArtifactFolderPath($path)
{
    foreach ($item in Get-ChildItem $path)
    {   
        if (Test-Path $item.FullName -PathType Container)
        {   
            # return the full path for the folder which contains Global.asax
            if (Test-Path ($item.fullname + "\Global.asax"))
            {
                #$item.FullName
                return $item.FullName;
            }
            GetWebArtifactFolderPath $item.FullName
        }
    }
}

$path = GetWebArtifactFolderPath("C:\temp\WebApp\MusicWorld")
$path2 = $path + "\*"
Copy-Item $path2 $target -recurse -force

The CreateWebSite.ps1 file is a PowerShell script that creates a web application in IIS.

New-WebApplication -Site "Default Web Site" -Name MusicWorld -PhysicalPath c:\inetpub\wwwroot\MusicWorld -Force

Setting up the build definition for an ASP.NET web application

The AWS CodeDeploy Application Deployment task in the AWS Tools for Microsoft Visual Studio Team Services supports deployment of any type of application, as long as you register the deployment script in the appspec.yml file. In this post, we deploy ASP.NET applications that are packaged as a Web Deploy archive.

We use the ASP.NET build template to get an ASP.NET application built and packaged as a Web Deploy archive.

Be sure to set up the following MSBuild arguments within the Build Solution task.

/p:WebPublishMethod=Package /p:PackageAsSingleFile=false /p:SkipInvalidConfigurations=true /p:PackageLocation="$(build.artifactstagingdirectory)\\publish\\application"

Remove the Publish Artifacts task from the build pipeline, as the build artifacts will be uploaded into Amazon S3 by the AWS CodeDeploy Application Deployment task.

Now that our application has been built using MSBuild, we need to copy the appspec.yml file and the PowerShell deployment scripts to the root of the publish folder. This is so that AWS CodeDeploy can find the appspec.yml file at the root of the application folder.

Add a new Copy Files task. Choose Add Task, and then search for “Copy”. On the found task, choose Add to include the task in the build definition.

Configure this task to copy the appspec.yml file and the PowerShell scripts to the parent folder of the “packagelocation” defined within the Build Solution task. This step allows the AWS CodeDeploy Application Deployment task to zip up the contents of the revision bundle recursively before uploading the archive to Amazon S3.

Next, add the AWS CodeDeploy Application Deployment task. Choose Add Task, and then search for “CodeDeploy”. On the found task, choose Add to include the task in the build definition.

For the AWS CodeDeploy Application Deployment task, make the following configuration changes:

  • AWS Credentials – The AWS credentials used to perform the deployment. Our previous post on the Team Services tools discusses setting up AWS credentials in Team Services. We recommend that the credentials be those for an IAM user, with a policy that enables the user to perform an AWS CodeDeploy deployment.
  • AWS Region – The AWS Region that AWS CodeDeploy is running in.
  • Application Name – The name of the AWS CodeDeploy application.
  • Deployment Group Name – The name of the deployment group to deploy to.
  • Revision Bundle – The artifacts to deploy. You can supply a folder or a file name to this parameter. If you supply a folder, the task will zip the contents of the folder recursively into an archive file before uploading the archive to Amazon S3. If you supply a file name, the task uploads it, unmodified, to Amazon S3. Note that AWS CodeDeploy requires the appspec.yml file describing the application to be located at the root of the specified folder or archive file.
  • Bucket Name – The name of the bucket to which the revision bundle will be uploaded. The target Amazon S3 bucket must exist in the same AWS Region as the target instance.
  • Target Folder – Optional folder (key prefix) for the uploaded revision bundle in the bucket. If you don’t specify a target folder, the bundle will be uploaded to the root of the bucket.

Now that we’ve configured all the tasks, we’re ready to deploy to Amazon EC2 instances using AWS CodeDeploy. If you queue a build now, you should see output similar to this for the deployment.

Next, navigate to the AWS CodeDeploy console and choose Deployments. Choose the deployment that AWS CodeDeploy completed to view the deployment progress. In this example, the web application has been deployed to all the EC2 instances within the deployment group.

If your AWS CodeDeploy application was created with a load balancer, you can verify the web application deployment by navigating to the DNS name of the load balancer using a web browser.

Conclusion

We hope Visual Studio Team Services users find the AWS CodeDeploy Application Deployment task helpful and easy to use. We also appreciate hearing your feedback on our GitHub repository for these Team Services tasks.

New Get-ECRLoginCommand for AWS Tools for PowerShell

Today’s post is from AWS Solution Architect and Microsoft MVP for Cloud and Data Center Management, Trevor Sullivan.

The AWS Tools for PowerShell now offer a new command that makes it easier to authenticate to the Amazon EC2 Container Registry (Amazon ECR).

Amazon EC2 Container Registry (ECR) is a service that enables customers to upload and store their Windows-based and Linux-based container images. Once a developer uploads these container images, they can then be deployed to stand-alone container hosts or container clusters, such as those running under the Amazon EC2 Container Service (Amazon ECS).

To push or pull container images from ECR, you must authenticate to the registry using the Docker API. ECR provides a GetAuthorizationToken API that retrieves the credential you’ll use to authenticate to ECR. In the AWS PowerShell modules, this API is mapped to the cmdlet Get-ECRAuthorizationToken. The response you receive from this service invocation includes a username and password for the registry, encoded as base64. To retrieve the credential, you must decode the base64 response into a byte array, and then decode the byte array as a UTF-8 string. After retrieving the UTF-8 string, the username and password are provided to you in a colon-delimited format. You simply split the string on the colon character to receive the username as array index 0, and the password as array index 1.
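
Here’s a hedged sketch of what that manual process looks like in PowerShell. Piping the password on the command line is shown only for illustration; the region is a placeholder.

# Retrieve and decode the registry credential manually
$AuthData = Get-ECRAuthorizationToken -Region us-west-2
$DecodedToken = [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($AuthData.AuthorizationToken))
$Username, $Password = $DecodedToken.Split(':')

# Authenticate the Docker CLI against the registry endpoint
docker login --username $Username --password $Password $AuthData.ProxyEndpoint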

Now, with Get-ECRLoginCommand, you can retrieve a pregenerated Docker login command that authenticates your container hosts to ECR. Although you can still directly call the GetAuthorizationToken API, Get-ECRLoginCommand provides a helpful shortcut that reduces the amount of required conversion effort.

Let’s look at a short example of how you can use this new command from PowerShell:

PS> Invoke-Expression -Command (Get-ECRLoginCommand -Region us-west-2).Command
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded

As you can see, all you have to do is call Get-ECRLoginCommand, and then pass the prebuilt Command property into the built-in Invoke-Expression PowerShell cmdlet. Upon running this, you’re authenticated to ECR, and you can proceed to create image repositories and push and pull container images.

Note: You might receive a warning about specifying the registry password on the Docker CLI. However, you can also build your own Docker login command by using the other properties on the object returned from the Get-ECRLoginCommand.
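
For example, you might inspect the returned object and assemble the command yourself. The Username, Password, and Endpoint property names below are assumptions; confirm them with Get-Member on your installed version.

# Discover which properties the returned object exposes
Get-ECRLoginCommand -Region us-west-2 | Get-Member

# Assemble the login call yourself (property names assumed; verify with Get-Member first)
$Login = Get-ECRLoginCommand -Region us-west-2
docker login --username $Login.Username --password $Login.Password $Login.Endpoint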

I hope you find the new cmdlet useful! If you have ideas for other cmdlets we should add, be sure to let us know in the comments.

Introducing Amazon DynamoDB Expression Builder in the AWS SDK for Go

This post was authored by Hajime Hayano.

The v1.11.0 release of the AWS SDK for Go adds a new expression package that enables you to create Amazon DynamoDB Expressions using statically typed builders. The expression package abstracts away the low-level detail of using DynamoDB Expressions and simplifies the process of using DynamoDB Expressions in DynamoDB Operations. In this blog post, we explain how to use the expression package.

In earlier versions of the AWS SDK for Go, you had to explicitly declare the member fields of the DynamoDB Operation input structs, such as QueryInput and UpdateItemInput. That meant the syntax and rules of DynamoDB Expressions were up to you to figure out. The goal of the expression package is to create the formatted DynamoDB Expression strings under the hood, thus simplifying the process of using DynamoDB Expressions. The following example shows the verbosity of writing DynamoDB Expressions by hand.

input := &dynamodb.ScanInput{
    ExpressionAttributeNames: map[string]*string{
        "#AT": aws.String("AlbumTitle"),
        "#ST": aws.String("SongTitle"),
    },
    ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{
        ":a": {
            S: aws.String("No One You Know"),
        },
    },
    FilterExpression:     aws.String("Artist = :a"),
    ProjectionExpression: aws.String("#ST, #AT"),
    TableName:            aws.String("Music"),
}

Representing DynamoDB Expressions

DynamoDB Expressions are represented by static builder types in the expression package. These builders, like ConditionBuilder and UpdateBuilder, are created in the package using a builder pattern. The static typing of the builders allows compile-time checks on the syntax of the DynamoDB Expressions that are being created. The following example shows how to create a builder that represents a FilterExpression and a ProjectionExpression.

filt := expression.Name("Artist").Equal(expression.Value("No One You Know"))
// let :a be an ExpressionAttributeValue representing the string "No One You Know"
// equivalent FilterExpression: "Artist = :a"

proj := expression.NamesList(
    expression.Name("SongTitle"),
    expression.Name("AlbumTitle"),
)
// equivalent ProjectionExpression: "SongTitle, AlbumTitle"

In this example, the variable filt represents a FilterExpression. Notice that DynamoDB item attributes are represented using the function Name() and DynamoDB item values are similarly represented using the function Value(). In this context, the string "Artist" represents the name of the item attribute that we want to evaluate, and the string "No One You Know" represents the value we want to evaluate the item attribute against. You specify the relationship between the two operands by using the method Equal().

Similarly, the variable proj represents a ProjectionExpression. The list of item attribute names comprising the ProjectionExpression is specified as arguments to the function NamesList(). The expression package takes advantage of Go’s type safety: if an item value is used as an argument to the function NamesList(), a compile-time error occurs. The pattern of representing DynamoDB Expressions by indicating relationships between operands with functions is consistent throughout the whole expression package.

Creating an Expression

The Expression type is the core of the expression package. An Expression represents a collection of DynamoDB Expressions with getter methods, such as Condition() and Projection(), used to retrieve specific formatted DynamoDB Expression strings. The following example shows how to create an Expression.

filt := expression.Name("Artist").Equal(expression.Value("No One You Know"))
proj := expression.NamesList(
    expression.Name("SongTitle"),
    expression.Name("AlbumTitle"),
)

expr, err := expression.NewBuilder().
    WithFilter(filt).
    WithProjection(proj).
    Build()
if err != nil {
  fmt.Println(err)
}

In this example, the variable expr is an instance of an Expression type. An Expression is built using a builder pattern. First, a new Builder is initialized by the NewBuilder() function. Then, types representing DynamoDB Expressions are added to the Builder by the WithFilter() and WithProjection() methods. The Build() method returns an instance of an Expression and an error. The error is either an InvalidParameterError or an UnsetParameterError.

There is no limit to the number of different kinds of DynamoDB Expressions that you can add to the Builder, but adding the same type of DynamoDB Expression will overwrite the previous DynamoDB Expression. The following example shows a specific instance of this problem.

cond1 := expression.Name("foo").Equal(expression.Value(5))
cond2 := expression.Name("bar").Equal(expression.Value(6))
expr, err := expression.NewBuilder().
    WithCondition(cond1).
    WithCondition(cond2).
    Build()
if err != nil {
  fmt.Println(err)
}

This example shows that the second call of WithCondition() overwrites the first call.

Filling in the fields of a DynamoDB Scan API

The following example shows how to use an Expression to fill in the member fields of a DynamoDB Operation API.

filt := expression.Name("Artist").Equal(expression.Value("No One You Know"))
proj := expression.NamesList(
    expression.Name("SongTitle"),
    expression.Name("AlbumTitle"),
)
expr, err := expression.NewBuilder().
    WithFilter(filt).
    WithProjection(proj).
    Build()
if err != nil {
  fmt.Println(err)
}

input := &dynamodb.ScanInput{
  ExpressionAttributeNames:  expr.Names(),
  ExpressionAttributeValues: expr.Values(),
  FilterExpression:          expr.Filter(),
  ProjectionExpression:      expr.Projection(),
  TableName:                 aws.String("Music"),
}

In this example, the getter methods of the Expression type are used to get the formatted DynamoDB Expression strings. When using Expression, you must always assign the ExpressionAttributeNames and ExpressionAttributeValues member fields of the DynamoDB API because all item attribute names and values are aliased. That means that if the ExpressionAttributeNames and ExpressionAttributeValues members are not assigned with the corresponding Names() and Values() methods, the DynamoDB operation will run into a logic error.

If you need a starting point, check out the working example in the AWS SDK for Go.

Overall, the expression package makes using the DynamoDB Expressions clean and simple. The complicated syntax and rules of DynamoDB Expressions are abstracted away so you no longer have to worry about them!

Using AWS KMS Master Keys with the AmazonS3EncryptionClient in the AWS SDK for .NET

by John Vellozzi

You can now use an AWS KMS key as your master key when you use the AmazonS3EncryptionClient class for client-side encryption.

The main advantage of using an AWS KMS key as your master key is that you don’t need to store and manage your own master keys. It’s done by AWS. A second advantage of the new feature is that it makes the AWS SDK for .NET’s AmazonS3EncryptionClient class interoperable with the AWS SDK for Java’s AmazonS3EncryptionClient class.* This means you can encrypt with the AWS SDK for Java and decrypt with the AWS SDK for .NET, and vice versa.

For more information about client-side encryption with the AmazonS3EncryptionClient class, and how envelope encryption works, see our original blog post.

The following examples demonstrate how to use AWS KMS keys with the AmazonS3EncryptionClient class. Note that your project must reference the latest version of the AWSSDK.KeyManagementService NuGet package to use this feature.

// Encryption
// ----------
var bucketName = "bucket";
var kmsKeyID = "kms-key-id";
var objectKey = "key";
var objectContent = "object content";

var kmsEncryptionMaterials = new EncryptionMaterials(kmsKeyID);
// CryptoStorageMode.ObjectMetadata is required for KMS EncryptionMaterials
var config = new AmazonS3CryptoConfiguration()
{
    StorageMode = CryptoStorageMode.ObjectMetadata
};

using (var client = new AmazonS3EncryptionClient(config, kmsEncryptionMaterials))
{
    var request = new PutObjectRequest
    {
        BucketName = bucketName,
        Key = objectKey,
        ContentBody = objectContent
    };
    client.PutObject(request);
}

// Decryption
// ----------
var bucketName = "bucket";
var kmsKeyID = "kms-key-id";
var objectKey = "key";
string objectContent = null;

var kmsEncryptionMaterials = new EncryptionMaterials(kmsKeyID);
// CryptoStorageMode.ObjectMetadata is required for KMS EncryptionMaterials
var config = new AmazonS3CryptoConfiguration()
{
    StorageMode = CryptoStorageMode.ObjectMetadata
};

using (var client = new AmazonS3EncryptionClient(config, kmsEncryptionMaterials))
{
    var request = new GetObjectRequest
    {
        BucketName = bucketName,
        Key = objectKey
    };

    using (var response = client.GetObject(request))
    using (var stream = response.ResponseStream)
    using (var reader = new StreamReader(stream))
    {
        objectContent = reader.ReadToEnd();
    }
}
// use objectContent

*The AWS SDK for .NET’s AmazonS3EncryptionClient only supports KMS master keys when run in metadata mode. The instruction file mode of the AWS SDK for .NET’s AmazonS3EncryptionClient is still incompatible with the AWS SDK for Java’s AmazonS3EncryptionClient.

Updated Composer Dependencies for AWS SDK for PHP Version 3

In version 3.34.0, the AWS SDK for PHP is updated to correctly communicate its dependencies on PHP extensions through Composer. Several extensions included in many common PHP distributions are not guaranteed to be installed or enabled. In earlier versions, the SDK silently required each of the following extensions, which would cause errors at runtime if not present:

After this update, when you install the SDK via Composer, you will be notified of the SDK’s reliance on these extensions. Updates via Composer will fail if you don’t have these extensions installed or enabled. You’ll receive a message similar to the following:

The compatibility-test.php file is also updated to reflect the correct dependencies.

Automated Changelog in AWS SDK for PHP

Starting with version 3.22.10 of the AWS SDK for PHP, released February 23, 2017, the Changelog Builder automatically processes all changelog entries. Each pull request is required to have a changelog JSON blob as part of the request. The system also calculates the next version for the SDK based on the type of the changes that are defined in the given changelog JSON blob.

The update simplifies the process of adding release notes to the CHANGELOG.md file for each pull request. Each merged pull request that was part of the release results in a new entry to the CHANGELOG.md file. The entry describes the change and provides the TAG number and release date.

The following is a sample changelog blob.

[
    {
        "type"       : "feature|enhancement|bugfix",
        "category"   : "Target of Update",
        "description": "English language simple description of your update."
    }
]

Each changelog blob is required to define the “type”, “category”, and “description” fields. The “category” explains the service that the change is associated with. For SDK changes that are not related to any service, the category field is left as an empty string. The “description” field should contain one or two sentences detailing the changes.

The “type” field describes the scope of the change being proposed. This field helps the Changelog Builder decide if a minor version bump is needed. The “type” field is assigned one of the following values:

  1. feature: A major change to the SDK that will cause a minor version bump. A feature will open a new use case or significantly improve upon an existing one. The update will result in a minor version bump. Example: a new AWS service.
  2. enhancement: A small update to the code. This should not cause any code to break and should only enhance given functionality of the SDK. The update will result in a patch version update. Example: Documentation Update.
  3. bugfix: A fix in the code that has caused some unwanted behavior. The update will result in a patch version update.

The changelog blob that will be included in the next release must be put inside the .changes/nextrelease folder. The changelog blob file name must be unique in that folder. A good practice is to give the changelog blob file a descriptive name related to the pull request.

On each release, the SDK looks inside the .changes/nextrelease folder and consolidates all JSON blobs into a single JSON document. Then the SDK calculates the next version number, based on the full JSON document.

Example:

Current SDK Version: 1.2.3
Changelog Blob File 1: update-s3-client.json

[
    {
        "type"       : "enhancement",
        "category"   : "S3",
        "description": "Adds support for new feature in s3"
    }
]

Changelog Blob File 2: ec2-documentation-update.json

[
    {
        "type"       : "enhancement",
        "category"   : "EC2",
        "description": "Update Ec2 documentation for operation foo bar"
    }
]

Changelog Blob File 3: sdk-bugfix.json

[
    {
        "type"       : "bugfix",
        "category"   : "",
        "description": "Fixes a typo in Aws Client"
    }
]

Consolidated JSON document: .changes/1.2.4.json
Next SDK version: 1.2.4

[
  {
    "type": "enhancement",
    "category": "S3",
    "description": "Adds support for new feature in s3"
  },
  {
    "type": "enhancement",
    "category": "EC2",
    "description": "Update Ec2 documentation for operation foo bar"
  },
  {
    "type": "bugfix",
    "category": "",
    "description": "Fixes a typo in Aws Client"
  }
]

A new file, with the name VERSION_NUMBER.json, will be created in the .changes folder with the contents of the JSON blob for the changelog. You can see the previous changelog JSON blobs here. If needed, we can reconstruct the CHANGELOG.md file using the JSON documents in the .changes folder.

On a successful release, the changelog entries are written to the top of the CHANGELOG.md file. Then chag is used to tag the SDK and label the added release notes with the current version number.

For more information about Changelog Builder, see Class Aws\Build\ChangelogBuilder and CONTRIBUTING.md.

Using the AWS_PROFILE Environment Variable to Choose a Profile

by John Vellozzi

In an upcoming release of the AWS SDK for .NET, the FallbackCredentialsFactory class and the FallbackRegionFactory class will allow the use of the AWS_PROFILE environment variable.

The SDK currently looks for a profile named “default” when retrieving credentials and region settings. After this change is released, users will be able to set the AWS_PROFILE environment variable to the name of a profile for the FallbackCredentialsFactory and FallbackRegionFactory classes to search for. We’re making this change to bring the AWS SDK for .NET into alignment with the way that the AWS CLI already works.

This is a significant change for users who already have the AWS_PROFILE set to something other than “default”. Currently, if you have set AWS_PROFILE, the SDK ignores it. After this change, the SDK will no longer ignore the AWS_PROFILE environment variable. This means that if the profile named by AWS_PROFILE is different from the “default” profile, your application will start using different credentials or a different region.
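
As an illustration, once the release is out, switching profiles is just a matter of setting the environment variable before your application starts. The profile and assembly names here are placeholders.

# Point the SDK at a named profile instead of "default" (placeholder names)
$env:AWS_PROFILE = 'prod-readonly'
dotnet .\MyApplication.dll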

Announcing the Modularized AWS SDK for Ruby (Version 3)

by Alex Wood

We’re excited to announce today’s stable release of version 3 of the AWS SDK for Ruby. The SDK is now available with over 100 service-specific gems (each starting with aws-sdk-, such as aws-sdk-s3) on RubyGems. A full list of available service gems can be found on our GitHub landing page.

Features

Version 3 of the AWS SDK for Ruby modularizes the monolithic SDK into service-specific gems, for example, aws-sdk-s3 and aws-sdk-dynamodb. Now each service gem uses strict semantic versioning, along with the benefits of continuous delivery of AWS API updates. With modularization, you can pick and choose which service gems your application or library requires, and update service gems independently of each other.

These new service-specific gems use statically generated code, rather than runtime-generated clients and types. This provides human-readable stack traces and code for API clients. Additionally, version 3 eliminates many thread safety issues by removing Ruby autoload statements. When you require a service gem, such as aws-sdk-ec2, all of the code is loaded upfront, avoiding sync issues with autoload.

Furthermore, the SDK provides AWS Signature Version 4 signing functionality as a separate gem aws-sigv4. This gem provides flexible signing usage for both AWS requests and customized scenarios.

Upgrading

We’ve provided a detailed upgrading guide with this release, which covers different upgrade scenarios. In short, the public-facing APIs are compatible, and so changes you need to make are focused on your Gemfile and require statements.

Most users of the SDK have a setup similar to this:

# Gemfile
gem 'aws-sdk', '~> 2'
# Code Files
require 'aws-sdk'

s3 = Aws::S3::Client.new

ddb = Aws::DynamoDB::Client.new

# etc.

If that’s you, the quickest migration path is to simply change your Gemfile like so:

# Gemfile
gem 'aws-sdk', '~> 3'

However, this will pull in many new dependencies, as each service client has its own individual gem. As a direct user, you can also change to using only the service gems actually required by your project, which is the recommended path. This would involve a change to both your Gemfile and to the code files where your require statements live, like so:

# Gemfile
gem 'aws-sdk-s3', '~> 1'
gem 'aws-sdk-dynamodb', '~> 1'
# Code Files
require 'aws-sdk-s3'
require 'aws-sdk-dynamodb'

s3 = Aws::S3::Client.new
ddb = Aws::DynamoDB::Client.new

# etc.

Other upgrade cases are covered in the guide.

Feedback

Please share your questions, comments, issues, etc. with us on GitHub. You can also catch us in our Gitter channel.

AWS Elastic Beanstalk Updated with .NET Core 2.0

We are happy to announce that Elastic Beanstalk’s Windows platform now supports .NET Core 2.0, which is preinstalled along with the previous 1.0 and 1.1 versions of .NET Core. The easiest way to get started with deploying ASP.NET Core, the web framework for .NET Core, is to use the AWS Toolkit for Visual Studio, which allows you to deploy right from Visual Studio. This earlier blog post contains a tutorial for deploying ASP.NET Core applications with Visual Studio.

When writing ASP.NET Core applications, check out these NuGet packages. The AWSSDK.Extensions.NETCore.Setup package integrates the AWS SDK for .NET into ASP.NET Core’s dependency injection framework. To help with logging, the AWS.Logger.AspNetCore package integrates ASP.NET Core’s logging framework with CloudWatch Logs, making it easy to view the stream of logs coming from your applications.