AWS Developer Blog

Introducing Amazon DynamoDB Expression Builder in the AWS SDK for Go

This post was authored by Hajime Hayano.

The v1.11.0 release of the AWS SDK for Go adds a new expression package that enables you to create Amazon DynamoDB Expressions using statically typed builders. The expression package abstracts away the low-level detail of using DynamoDB Expressions and simplifies the process of using DynamoDB Expressions in DynamoDB Operations. In this blog post, we explain how to use the expression package.

In earlier versions of the AWS SDK for Go, you had to explicitly declare the member fields of the DynamoDB Operation input structs, such as QueryInput and UpdateItemInput. That meant the syntax and rules of DynamoDB Expressions were up to you to figure out. The goal of the expression package is to create the formatted DynamoDB Expression strings under the hood, thus simplifying the process of using DynamoDB Expressions. The following example shows how verbose writing DynamoDB Expressions by hand can be.

input := &dynamodb.ScanInput{
    ExpressionAttributeNames: map[string]*string{
        "#AT": aws.String("AlbumTitle"),
        "#ST": aws.String("SongTitle"),
    },
    ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{
        ":a": {
            S: aws.String("No One You Know"),
        },
    },
    FilterExpression:     aws.String("Artist = :a"),
    ProjectionExpression: aws.String("#ST, #AT"),
    TableName:            aws.String("Music"),
}

Representing DynamoDB Expressions

DynamoDB Expressions are represented by static builder types in the expression package. These builders, like ConditionBuilder and UpdateBuilder, are created in the package using a builder pattern. The static typing of the builders allows compile-time checks on the syntax of the DynamoDB Expressions that are being created. The following example shows how to create a builder that represents a FilterExpression and a ProjectionExpression.

filt := expression.Name("Artist").Equal(expression.Value("No One You Know"))
// let :a be an ExpressionAttributeValue representing the string "No One You Know"
// equivalent FilterExpression: "Artist = :a"

proj := expression.NamesList(
    expression.Name("SongTitle"),
    expression.Name("AlbumTitle"),
)
// equivalent ProjectionExpression: "SongTitle, AlbumTitle"

In this example, the variable filt represents a FilterExpression. Notice that DynamoDB item attributes are represented using the function Name() and DynamoDB item values are similarly represented using the function Value(). In this context, the string "Artist" represents the name of the item attribute that we want to evaluate, and the string "No One You Know" represents the value we want to evaluate the item attribute against. You specify the relationship between the two operands by using the method Equal().

Similarly, the variable proj represents a ProjectionExpression. The item attribute names that make up the ProjectionExpression are specified as arguments to the function NamesList(). The expression package takes advantage of Go's type safety: if an item value is used as an argument to the function NamesList(), the code fails to compile. The pattern of representing DynamoDB Expressions by indicating relationships between operands with functions is consistent throughout the whole expression package.

Creating an Expression

The Expression type is the core of the expression package. An Expression represents a collection of DynamoDB Expressions with getter methods, such as Condition() and Projection(), used to retrieve specific formatted DynamoDB Expression strings. The following example shows how to create an Expression.

filt := expression.Name("Artist").Equal(expression.Value("No One You Know"))
proj := expression.NamesList(
    expression.Name("SongTitle"),
    expression.Name("AlbumTitle"),
)

expr, err := expression.NewBuilder().
    WithFilter(filt).
    WithProjection(proj).
    Build()
if err != nil {
  fmt.Println(err)
}

In this example, the variable expr is an instance of an Expression type. An Expression is built using a builder pattern. First, a new Builder is initialized by the NewBuilder() function. Then, types representing DynamoDB Expressions are added to the Builder by the WithFilter() and WithProjection() methods. The Build() method returns an instance of an Expression and an error. The error is either an InvalidParameterError or an UnsetParameterError.

There is no limit to the number of different kinds of DynamoDB Expressions that you can add to the Builder, but adding the same type of DynamoDB Expression will overwrite the previous DynamoDB Expression. The following example shows a specific instance of this problem.

cond1 := expression.Name("foo").Equal(expression.Value(5))
cond2 := expression.Name("bar").Equal(expression.Value(6))
expr, err := expression.NewBuilder().
    WithCondition(cond1).
    WithCondition(cond2).
    Build()
if err != nil {
  fmt.Println(err)
}

This example shows that the second call of WithCondition() overwrites the first call.
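
If you need both conditions to apply, combine them into a single ConditionBuilder before calling WithCondition(). The following is a minimal sketch using the And() method of ConditionBuilder, reusing the attribute names and values from the previous example:

cond := expression.Name("foo").Equal(expression.Value(5)).
    And(expression.Name("bar").Equal(expression.Value(6)))

expr, err := expression.NewBuilder().
    WithCondition(cond).
    Build()
if err != nil {
    fmt.Println(err)
}
// Both comparisons now end up in a single ConditionExpression.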

Filling in the fields of a DynamoDB Scan API

The following example shows how to use an Expression to fill in the member fields of a DynamoDB Operation API.

filt := expression.Name("Artist").Equal(expression.Value("No One You Know"))
proj := expression.NamesList(
    expression.Name("SongTitle"),
    expression.Name("AlbumTitle"),
)
expr, err := expression.NewBuilder().
    WithFilter(filt).
    WithProjection(proj).
    Build()
if err != nil {
  fmt.Println(err)
}

input := &dynamodb.ScanInput{
  ExpressionAttributeNames:  expr.Names(),
  ExpressionAttributeValues: expr.Values(),
  FilterExpression:          expr.Filter(),
  ProjectionExpression:      expr.Projection(),
  TableName:                 aws.String("Music"),
}

In this example, the getter methods of the Expression type are used to retrieve the formatted DynamoDB Expression strings. When using an Expression, you must always assign the ExpressionAttributeNames and ExpressionAttributeValues member fields of the DynamoDB operation input because all item attribute names and values are aliased. If those members are not assigned from the corresponding Names() and Values() methods, the DynamoDB operation returns an error at runtime.
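
The same pattern applies to the other kinds of DynamoDB Expressions. For example, the following minimal sketch (the table, key, and attribute names are illustrative) builds an UpdateExpression with expression.Set() and uses it to fill in an UpdateItemInput:

update := expression.Set(
    expression.Name("Year"),
    expression.Value(2015),
)
expr, err := expression.NewBuilder().
    WithUpdate(update).
    Build()
if err != nil {
    fmt.Println(err)
}

input := &dynamodb.UpdateItemInput{
    ExpressionAttributeNames:  expr.Names(),
    ExpressionAttributeValues: expr.Values(),
    UpdateExpression:          expr.Update(),
    Key: map[string]*dynamodb.AttributeValue{
        "Artist":    {S: aws.String("No One You Know")},
        "SongTitle": {S: aws.String("Call Me Today")},
    },
    TableName: aws.String("Music"),
}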

If you need a starting point, check out the working example in the AWS SDK for Go.

Overall, the expression package makes using DynamoDB Expressions clean and simple. The complicated syntax and rules of DynamoDB Expressions are abstracted away, so you no longer have to worry about them!

Using AWS KMS Master Keys with the AmazonS3EncryptionClient in the AWS SDK for .NET

by John Vellozzi

You can now use an AWS KMS key as your master key when you use the AmazonS3EncryptionClient class for client-side encryption.

The main advantage of using an AWS KMS key as your master key is that you don’t need to store and manage your own master keys; AWS does that for you. A second advantage of the new feature is that it makes the AWS SDK for .NET’s AmazonS3EncryptionClient class interoperable with the AWS SDK for Java’s AmazonS3EncryptionClient class.* This means you can encrypt with the AWS SDK for Java and decrypt with the AWS SDK for .NET, and vice versa.

For more information about client-side encryption with the AmazonS3EncryptionClient class, and how envelope encryption works, see our original blog post.

The following examples demonstrate how to use AWS KMS keys with the AmazonS3EncryptionClient class. Note that your project must reference the latest version of the AWSSDK.KeyManagementService NuGet package to use this feature.

// Encryption
// ----------
var bucketName = "bucket";
var kmsKeyID = "kms-key-id";
var objectKey = "key";
var objectContent = "object content";

var kmsEncryptionMaterials = new EncryptionMaterials(kmsKeyID);
// CryptoStorageMode.ObjectMetadata is required for KMS EncryptionMaterials
var config = new AmazonS3CryptoConfiguration()
{
    StorageMode = CryptoStorageMode.ObjectMetadata
};

using (var client = new AmazonS3EncryptionClient(config, kmsEncryptionMaterials))
{
    var request = new PutObjectRequest
    {
        BucketName = bucketName,
        Key = objectKey,
        ContentBody = objectContent
    };
    client.PutObject(request);
}

// Decryption
// ----------
var bucketName = "bucket";
var kmsKeyID = "kms-key-id";
var objectKey = "key";
string objectContent = null;

var kmsEncryptionMaterials = new EncryptionMaterials(kmsKeyID);
// CryptoStorageMode.ObjectMetadata is required for KMS EncryptionMaterials
var config = new AmazonS3CryptoConfiguration()
{
    StorageMode = CryptoStorageMode.ObjectMetadata
};

using (var client = new AmazonS3EncryptionClient(config, kmsEncryptionMaterials))
{
    var request = new GetObjectRequest
    {
        BucketName = bucketName,
        Key = objectKey
    };

    using (var response = client.GetObject(request))
    using (var stream = response.ResponseStream)
    using (var reader = new StreamReader(stream))
    {
        objectContent = reader.ReadToEnd();
    }
}
// use objectContent

*The AWS SDK for .NET’s AmazonS3EncryptionClient only supports KMS master keys when run in metadata mode. The instruction file mode of the AWS SDK for .NET’s AmazonS3EncryptionClient is still incompatible with the AWS SDK for Java’s AmazonS3EncryptionClient.

Updated Composer Dependencies for AWS SDK for PHP Version 3

In version 3.34.0, the AWS SDK for PHP is updated to correctly communicate its dependencies on PHP extensions through Composer. Several extensions included in many common PHP distributions are not guaranteed to be installed or enabled. In earlier versions, the SDK silently required each of the following extensions, which would cause errors at runtime if not present:

After this update, when you install the SDK via Composer, you will be notified of the SDK’s reliance on these extensions. Updates via Composer will fail if you don’t have these extensions installed or enabled. You’ll receive a message similar to the following:

The compatibility-test.php file is also updated to reflect the correct dependencies.
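
For reference, Composer communicates extension requirements through ext-* entries in a package's composer.json, which is how the SDK now declares them. A minimal sketch of what such entries look like (the extension names shown are illustrative; the authoritative list is in the SDK's own composer.json):

{
    "require": {
        "php": ">=5.5",
        "ext-json": "*",
        "ext-simplexml": "*"
    }
}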

Automated Changelog in AWS SDK for PHP

Starting with version 3.22.10 of the AWS SDK for PHP, released February 23, 2017, the Changelog Builder automatically processes all changelog entries. Each pull request is required to have a changelog JSON blob as part of the request. The system also calculates the next version for the SDK based on the type of the changes that are defined in the given changelog JSON blob.

The update simplifies the process of adding release notes to the CHANGELOG.md file for each pull request. Each merged pull request that was part of the release results in a new entry to the CHANGELOG.md file. The entry describes the change and provides the TAG number and release date.

The following is a sample changelog blob.

[
    {
        "type"       : "feature|enhancement|bugfix",
        "category"   : "Target of Update",
        "description": "English language simple description of your update."
    }
]

Each changelog blob is required to define the “type”, “category”, and “description” fields. The “category” identifies the service that the change is associated with. For SDK changes that are not related to any service, the category field is left as an empty string. The “description” field should contain one or two sentences detailing the changes.

The “type” field describes the scope of the change being proposed. This field helps the Changelog Builder decide if a minor version bump is needed. The “type” field is assigned one of the following values:

  1. feature: A major change to the SDK. A feature opens a new use case or significantly improves upon an existing one, and results in a minor version bump. Example: support for a new AWS service.
  2. enhancement: A small update to the code that doesn’t break existing behavior and only improves existing functionality of the SDK. The update results in a patch version bump. Example: a documentation update.
  3. bugfix: A fix for code that caused unwanted behavior. The update results in a patch version bump.

The changelog blob that will be included in the next release must be put inside the .changes/nextrelease folder. The changelog blob file name must be unique in that folder. A good practice is to give a descriptive name to the changelog blob file related to the request.

On each release, the SDK looks inside the .changes/nextrelease folder and consolidates all JSON blobs into a single JSON document. Then the SDK calculates the next version number, based on the full JSON document.

Example:

Current SDK Version: 1.2.3
Changelog Blob File 1: update-s3-client.json

[
    {
        "type"       : "enhancement",
        "category"   : "S3",
        "description": "Adds support for new feature in s3"
    }
]

Changelog Blob File 2: ec2-documentation-update.json

[
    {
        "type"       : "enhancement",
        "category"   : "EC2",
        "description": "Update Ec2 documentation for operation foo bar"
    }
]

Changelog Blob File 3: sdk-bugfix.json

[
    {
        "type"       : "bugfix",
        "category"   : "",
        "description": "Fixes a typo in Aws Client"
    }
]

Consolidated JSON document: .changes/1.2.4.json
Next SDK version: 1.2.4

[
  {
    "type": "enhancement",
    "category": "S3",
    "description": "Adds support for new feature in s3"
  },
  {
    "type": "enhancement",
    "category": "EC2",
    "description": "Update Ec2 documentation for operation foo bar"
  },
  {
    "type": "bugfix",
    "category": "",
    "description": "Fixes a typo in Aws Client"
  }
]

A new file, with the name VERSION_NUMBER.json, will be created in the .changes folder with the contents of the JSON blob for the changelog. You can see the previous changelog JSON blobs in the .changes folder of the SDK repository. If needed, we can reconstruct the CHANGELOG.md file using the JSON documents in the .changes folder.

On a successful release, the changelog entries are written to the top of the CHANGELOG.md file. Then chag is used to tag the SDK and label the added release notes with the current version number.

For more information about Changelog Builder, see Class Aws\Build\ChangelogBuilder and CONTRIBUTING.md.

Using the AWS_PROFILE Environment Variable to Choose a Profile

by John Vellozzi

In an upcoming release of the AWS SDK for .NET, the FallbackCredentialsFactory class and the FallbackRegionFactory class will allow the use of the AWS_PROFILE environment variable.

The SDK currently looks for a profile named “default” when retrieving credentials and region settings. After this change is released, users will be able to set the AWS_PROFILE environment variable to the name of a profile for the FallbackCredentialsFactory and FallbackRegionFactory classes to search for. We’re making this change to bring the AWS SDK for .NET into alignment with the way that the AWS CLI already works.

This is a significant change for users who already have AWS_PROFILE set to something other than “default”. Currently, if you have set AWS_PROFILE, the SDK ignores it. After this change, the SDK will no longer ignore the AWS_PROFILE environment variable. This means that if the profile named by AWS_PROFILE is different from the “default” profile, your application will start using different credentials or a different region.
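
As a sketch of what this looks like in practice (the profile name is illustrative), you define a named profile in the shared credentials file and point AWS_PROFILE at it; the fallback factories then resolve that profile instead of “default”:

# %USERPROFILE%\.aws\credentials on Windows, or ~/.aws/credentials
[build-account]
aws_access_key_id     = <your access key id>
aws_secret_access_key = <your secret access key>

# Select the profile for the SDK (and for the AWS CLI)
set AWS_PROFILE=build-account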

Announcing the Modularized AWS SDK for Ruby (Version 3)

by Alex Wood

We’re excited to announce today’s stable release of version 3 of the AWS SDK for Ruby. The SDK is now available as over 100 service-specific gems (each starting with aws-sdk-*, such as aws-sdk-s3) on RubyGems. You can find a full list of available service gems on our GitHub landing page.

Features

Version 3 of the AWS SDK for Ruby modularizes the monolithic SDK into service-specific gems, for example, aws-sdk-s3 and aws-sdk-dynamodb. Each service gem now uses strict semantic versioning and benefits from continuous delivery of AWS API updates. With modularization, you can pick and choose which service gems your application or library requires, and update service gems independently of each other.

These new service-specific gems use statically generated code, rather than runtime-generated clients and types. This provides human-readable stack traces and code for API clients. Additionally, version 3 eliminates many thread safety issues by removing Ruby autoload statements. When you require a service gem, such as aws-sdk-ec2, all of the code is loaded up front, avoiding the synchronization issues associated with autoload.

Furthermore, the SDK provides AWS Signature Version 4 signing functionality as a separate gem, aws-sigv4. This gem provides flexible signing for both AWS requests and customized scenarios.
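
For example, here is a minimal sketch of signing a request with the standalone gem (the endpoint and credentials are placeholders):

# Gemfile
gem 'aws-sigv4', '~> 1'

# Code Files
require 'aws-sigv4'

signer = Aws::Sigv4::Signer.new(
  service: 's3',
  region: 'us-east-1',
  access_key_id: 'AKID',
  secret_access_key: 'SECRET'
)

signature = signer.sign_request(
  http_method: 'PUT',
  url: 'https://my-bucket.s3.amazonaws.com/my-key'
)

signature.headers # signed headers to add to your HTTP request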

Upgrading

We’ve provided a detailed upgrading guide with this release, which covers different upgrade scenarios. In short, the public-facing APIs are compatible, so the changes you need to make are focused on your Gemfile and require statements.

Most users of the SDK have a setup similar to this:

# Gemfile
gem 'aws-sdk', '~> 2'
# Code Files
require 'aws-sdk'

s3 = Aws::S3::Client.new

ddb = Aws::DynamoDB::Client.new

# etc.

If that’s you, the quickest migration path is to simply change your Gemfile like so:

# Gemfile
gem 'aws-sdk', '~> 3'

However, this will pull in many new dependencies, as each service client has its own individual gem. As a direct user, you can also change to using only the service gems actually required by your project, which is the recommended path. This would involve a change to both your Gemfile and to the code files where your require statements live, like so:

# Gemfile
gem 'aws-sdk-s3', '~> 1'
gem 'aws-sdk-dynamodb', '~> 1'
# Code Files
require 'aws-sdk-s3'
require 'aws-sdk-dynamodb'

s3 = Aws::S3::Client.new
ddb = Aws::DynamoDB::Client.new

# etc.

Other upgrade cases are covered in the guide.

Feedback

Please share your questions, comments, issues, etc. with us on GitHub. You can also catch us in our Gitter channel.

AWS Elastic Beanstalk Updated with .NET Core 2.0

We are happy to announce that Elastic Beanstalk’s Windows platform now supports .NET Core 2.0, which is preinstalled along with the previous 1.0 and 1.1 versions of .NET Core. The easiest way to get started with deploying ASP.NET Core, the web framework for .NET Core, is to use the AWS Toolkit for Visual Studio, which allows you to deploy right from Visual Studio. This earlier blog post contains a tutorial for deploying ASP.NET Core applications with Visual Studio.

When writing ASP.NET Core applications, check out these NuGet packages. The AWSSDK.Extensions.NETCore.Setup package integrates the AWS SDK for .NET with ASP.NET Core’s dependency injection framework. To help with logging, the AWS.Logger.AspNetCore package integrates ASP.NET Core’s logging framework with CloudWatch Logs, making it easy to view the stream of logs coming from your applications.
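
As a rough sketch of how the two packages fit into an ASP.NET Core 2.0 Startup class (the Amazon S3 client is used purely as an illustration, and the snippet assumes the AWSSDK.S3, AWSSDK.Extensions.NETCore.Setup, and AWS.Logger.AspNetCore packages are referenced):

using Amazon.S3;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // AWSSDK.Extensions.NETCore.Setup: read the "AWS" section of
        // appsettings.json and register SDK clients with the DI container.
        services.AddDefaultAWSOptions(Configuration.GetAWSOptions());
        services.AddAWSService<IAmazonS3>();
    }

    public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
    {
        // AWS.Logger.AspNetCore: stream ASP.NET Core log messages to CloudWatch Logs.
        loggerFactory.AddAWSProvider(Configuration.GetAWSLoggingConfigSection());
    }
}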

Working with Lambda Functions and Visual Studio Team Services

by Norm Johanson

In previous posts, we talked about our new AWS Tools for Microsoft Visual Studio Team Services (VSTS), which provides AWS tasks you can add to your build definition. We also talked about our AWS Elastic Beanstalk task that you can use to deploy .NET web applications. In our initial release, there are also two tasks to interact with AWS Lambda.

Deploying .NET Core Lambda functions

To deploy Lambda .NET Core functions, we provide the AWS Lambda .NET Core Deployment task. This task uses our dotnet CLI integration, which means your Lambda project needs a DotNetCliToolReference reference to Amazon.Lambda.Tools. If you created your Lambda project from one of the templates provided by AWS, the reference is already set up. If the reference is not set, add the following to the Lambda project’s .csproj file.

<ItemGroup>
  <DotNetCliToolReference Include="Amazon.Lambda.Tools" Version="1.7.0" />
</ItemGroup>

In addition to the usual credential and region properties, the task requires two parameters. The first is the Path to Lambda Project, which you should set to the file path of the .csproj file for the Lambda project. If you’re still using Visual Studio 2015, you can set this to the directory containing the project.json file.

The other required parameter is the Command property. When you use the Lambda integration with the dotnet CLI there are two commands for deployment. The deploy-function command will package a single function and upload it to the Lambda service. The other command, deploy-serverless, packages a project that can contain a collection of functions and uses AWS CloudFormation to deploy the Lambda functions. The Command property controls whether you use deploy-function or deploy-serverless.
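
These correspond to the same commands you can run locally through the dotnet CLI; a quick sketch (the function name is illustrative):

# Package and upload a single function to the Lambda service
dotnet lambda deploy-function MyFunction

# Package a serverless.template-based project and deploy it with AWS CloudFormation
dotnet lambda deploy-serverless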

All the other parameters are optional because they might already be set in the project’s aws-lambda-tools-defaults.json file. If parameters required by the deploy-function or deploy-serverless commands are missing from both aws-lambda-tools-defaults.json and the VSTS build definition, the task will fail.
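
For reference, an aws-lambda-tools-defaults.json file created by the AWS project templates looks roughly like the following (all values are illustrative):

{
    "profile": "default",
    "region": "us-east-1",
    "configuration": "Release",
    "framework": "netcoreapp1.0",
    "function-runtime": "dotnetcore1.0",
    "function-memory-size": 256,
    "function-timeout": 30,
    "function-handler": "MyFunction::MyFunction.Function::FunctionHandler"
}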

Note: We highly recommend that you update the version of Amazon.Lambda.Tools to 1.7.0 or later. When you use the dotnet CLI integration and parameters are missing from the command line, the integration prompts for them. In the context of a VSTS build, prompting would obviously be inappropriate. Version 1.7.0 added the ability to suppress prompting for missing parameters.

Invoking the Lambda function

Once you have your Lambda function deployed, you can use the AWS Lambda Invoke Function task to execute Lambda functions from your build definition. To invoke a function, you need to set the following properties:

  • Function Name – The name of the Lambda function.
  • Payload – Optional JSON document to pass to the Lambda function.
  • Invocation Type – Determines whether the function is called synchronously, which returns the results, or asynchronously.

Conclusion

By using these tasks, you can automate .NET Core Lambda deployment and trigger Lambda functions as part of your build to continue your build process through AWS. Hopefully, you will find these tasks useful. As always, we greatly appreciate your feedback and you can reach out to us on our VSTS GitHub repository.

Deploying .NET Web Applications Using AWS Elastic Beanstalk with Visual Studio Team Services

by Norm Johanson

We recently announced the new AWS Tools for Microsoft Visual Studio Team Services. Today let’s take a deeper look at how you can use the new tools to support deploying your .NET web applications from Team Services to AWS Elastic Beanstalk.

Elastic Beanstalk uses environments to run .NET web applications. Before using this task, you first need to create an environment. You can do this from the Elastic Beanstalk console with a sample application, or by using the AWS Toolkit for Visual Studio with an initial version of your application.

In this post, we won’t cover setting up the tools in Team Services. However, we’re going to assume you already have a Team Services account and know how to push your source code to the repository used in your Team Services build. This post focuses on build definitions that show how to use the AWS Elastic Beanstalk Deployment task.

Setting up the build definition for an ASP.NET application

The AWS Elastic Beanstalk Deployment task in the AWS Tools for Team Services supports either ASP.NET applications that are packaged as a Web Deploy archive, or ASP.NET Core applications published with the dotnet publish command. First let’s take a look at using an ASP.NET application.

Probably the easiest way to get an ASP.NET application built and packaged as a Web Deploy archive is to use the ASP.NET Build Template and remove the Publish Artifacts task; we’ll replace that task with the AWS Elastic Beanstalk Deployment task.

If you already have a build definition, be sure your Build Solution task has the following MSBuild arguments. These arguments ensure the web application is packaged as a Web Deploy archive and placed into the build artifacts staging directory.


/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation="$(build.artifactstagingdirectory)\\"

Now that the project is building, we need to add the AWS Elastic Beanstalk Deployment task. Choose Add Task, and then search for “Beanstalk”. On the found task, choose Add to include the task in the build definition.

For the new task, we need to make the following configuration changes:

  • AWS Credentials: The AWS credentials used to perform the deployment. Our previous post on the Team Services tools discusses setting up AWS credentials in Team Services. For this task, the credentials should be for an AWS Identity and Access Management (IAM) user, with a policy that enables the user to update an Elastic Beanstalk environment and describe an environment status and events.
  • AWS Region: The AWS Region that the Elastic Beanstalk environment is running in.
  • Application Type: Set to ASP.NET.
  • Web Deploy Archive: The path to the Web Deploy archive. If the archive was created using the arguments above, the file will have the same name as the directory that contains the web application, and will have a .zip extension. You can find it in the build artifacts staging directory, which can be referenced as $(build.artifactstagingdirectory).
  • Beanstalk Application Name: The name you used to create the Elastic Beanstalk application. An Elastic Beanstalk application is the container for a collection of environments running the .NET web application.
  • Beanstalk Environment Name: The name you used to create the Elastic Beanstalk environment. An Elastic Beanstalk environment contains the actual provisioned resources that are running the .NET web application.

Now that we have configured the task, we’re ready to deploy to Elastic Beanstalk. If you queue a build now, you should see output similar to this for the deployment.

Setting up the build definition for an ASP.NET Core application

For an ASP.NET Core deployment, we don’t use Web Deploy. Instead we need the output folder from the dotnet publish command. The easiest way to get started for this type of deployment is to use the ASP.NET Core build template, and again remove the Publish Artifacts task.

This creates a build definition using the dotnet CLI that restores NuGet dependencies, builds the projects, runs any tests, and then finally executes the dotnet publish command on any web projects. After the publish task, we need to add the AWS Elastic Beanstalk Deployment task the same way we added it for the ASP.NET application.

There are two parameters in the configurations on the AWS Elastic Beanstalk Deployment task that are different for ASP.NET Core:

  • Application Type: Set to ASP.NET Core.
  • Published Application Path: The path to the .zip file archive of the dotnet publish output. If you didn’t choose Zip Published Projects in the publish task, this parameter can point to the directory the dotnet publish command wrote to.

That’s it. Everything else is the same as the ASP.NET deployment.

Conclusion

We hope Visual Studio Team Services users will find the AWS Elastic Beanstalk Deployment task helpful and easy to use. We also appreciate hearing your feedback on our GitHub repository for these Team Services tasks.

CognitoAuthentication Extension Library Developer Preview

by Sam Mousigian

We are pleased to announce the Developer Preview of the CognitoAuthentication extension library. This library simplifies the authentication process of Amazon Cognito User Pools for .NET 4.5, .NET Core, and Xamarin developers. Many customers reported that they directly implemented the Secure Remote Password (SRP) protocol themselves. This process requires hundreds of lines of difficult cryptography implementation. Our goal in creating the CognitoAuthentication extension library is to eliminate this hassle and allow you to use the authentication methods for Amazon Cognito User Pools with only a few short method calls.

We also want to make the process more intuitive. Instead of having to read through pages of documentation to know which parameters to send for each type of authentication, we ask for the necessary fields in the corresponding request parameter of each authentication method and then produce the proper service request on your behalf. The library is built on top of the Amazon Cognito Identity Provider API to create and send these API calls to authenticate users. We hope this library helps you use Amazon Cognito User Pools to authenticate users, and we look forward to your feedback.

Getting Started

To set up an AWS account and install the AWS SDK for .NET to take advantage of this library, see Getting Started with the AWS SDK for .NET. Create a new project in Visual Studio and add the CognitoAuthentication extension library as a reference to the project. You can find it in the NuGet gallery as AWSSDK.Extensions.CognitoAuthentication.

Using the library to make calls to the Amazon Cognito Identity Provider API from the AWS SDK for .NET is as simple as creating the necessary CognitoAuthentication objects and calling the appropriate AmazonCognitoIdentityProviderClient methods. The principal Amazon Cognito authentication objects are:

  • CognitoUserPool objects store information about a user pool, including the poolID, clientID, and other pool attributes.
  • CognitoUser objects contain a user’s username, the pool they are associated with, session information, and other user properties.
  • CognitoDevice objects include device information, such as the device key.

You can use AnonymousAWSCredentials when creating the identity provider client, which results in requests not being signed before they are sent to the service. Any service that does not accept unsigned requests returns a service exception in this case. This is appropriate for the product you release to end users, who should not have AWS credentials of their own until they are authenticated.

Authenticating with Secure Remote Password (SRP)

Instead of implementing hundreds of lines of cryptographic methods yourself, you now only need to create the necessary AmazonCognitoIdentityProviderClient, CognitoUserPool, CognitoUser, and InitiateSrpAuthRequest objects and then call StartWithSrpAuthAsync. We made these structures as lightweight as possible while still providing a large amount of functionality. The InitiateSrpAuthRequest currently only requires the password for the user; all other required information is already stored in the CognitoAuthentication objects.

The call handles creating and responding to the USER_SRP_AUTH and PASSWORD_VERIFIER challenges during the authentication flow on your behalf, and then returns an AuthFlowResponse object. The AuthenticationResult property of the AuthFlowResponse object contains the user’s session tokens if the user was successfully authenticated. If more challenge responses are required, this field is null and the ChallengeName property describes the next challenge, such as multi-factor authentication. You would then call the appropriate method to continue the authentication flow. The following code snippet shows how the library performs SRP authentication:

using Amazon.Runtime;
using Amazon.CognitoIdentityProvider;
using Amazon.Extensions.CognitoAuthentication;

public async void AuthenticateWithSrpAsync()
{
    var provider = new AmazonCognitoIdentityProviderClient(new AnonymousAWSCredentials(),
                                                           FallbackRegionFactory.GetRegionEndpoint());
    CognitoUserPool userPool = new CognitoUserPool("poolID", "clientID", provider);
    CognitoUser user = new CognitoUser("username", "clientID", userPool, provider);

    string password = "userPassword";

    AuthFlowResponse context = await user.StartWithSrpAuthAsync(new InitiateSrpAuthRequest()
    {
        Password = password
    }).ConfigureAwait(false);
}

Authenticating with Multiple Forms of Authentication

Continuing the authentication flow with challenges, such as NewPasswordRequired and Multi-Factor Authentication (MFA), is simpler as well. The only things required are the CognitoAuthentication objects, the user’s password for SRP, and the necessary information for the next challenge, which is acquired after prompting the user to enter it. The following code shows one way to check the challenge type and get the appropriate responses for MFA and NewPasswordRequired challenges during the authentication flow:

using System;

using Amazon.Runtime;
using Amazon.CognitoIdentityProvider;
using Amazon.Extensions.CognitoAuthentication;

public async void AuthenticateUserAsync()
{
    var provider = new AmazonCognitoIdentityProviderClient(new AnonymousAWSCredentials(),
                                                           FallbackRegionFactory.GetRegionEndpoint());
    CognitoUserPool userPool = new CognitoUserPool("poolID", "clientID", provider);
    CognitoUser user = new CognitoUser("username", "clientID", userPool, provider);

    string password = "userPassword";
    AuthFlowResponse authResponse = null;

    authResponse = await user.StartWithSrpAuthAsync(new InitiateSrpAuthRequest()
    {
        Password = password
    }).ConfigureAwait(false);

    while (authResponse.AuthenticationResult == null)
    {
        if (authResponse.ChallengeName == ChallengeNameType.NEW_PASSWORD_REQUIRED)
        {
            Console.WriteLine("Enter your desired new password:");
            string newPassword = Console.ReadLine();

            authResponse = 
                await user.RespondToNewPasswordRequiredAsync(new RespondToNewPasswordRequiredRequest()
                {
                    SessionID = authResponse.SessionID,
                    NewPassword = newPassword
                }).ConfigureAwait(false);
        }
        else if (authResponse.ChallengeName == ChallengeNameType.SMS_MFA)
        {
            Console.WriteLine("Enter the MFA Code sent to your device:");
            string mfaCode = Console.ReadLine();

            authResponse = await user.RespondToSmsMfaAuthAsync(new RespondToSmsMfaRequest()
            {
                SessionID = authResponse.SessionID,
                MfaCode = mfaCode
            }).ConfigureAwait(false);
        }
        else
        {
            Console.WriteLine("Unrecognized authentication challenge.");
            break;
        }
    }

    if (authResponse.AuthenticationResult != null)
    {
        Console.WriteLine("User successfully authenticated.");
    }
    else
    {
        Console.WriteLine("Error in authentication process.");
    }
}

Similar to the SRP authentication model, if the user is authenticated after these method calls, the AuthenticationResult property of the authResponse object contains the user’s session tokens. Otherwise, continue prompting the user for the information required by the next authentication challenge, described in the ChallengeName field of authResponse. As shown above, the main values to carry between authentication flow calls are the SessionID and ChallengeName of the AuthFlowResponse.

You should also handle the case of an unrecognized challenge. This can occur when the SDK is out of date and the service has introduced a new authentication challenge, or when the application does not support a certain type of challenge. Here, we simply tell the user that there was an error in the authentication process; however, this scenario can be handled in whatever way best suits the application.

Using AWS Resources After Authentication

Once a user is authenticated using the CognitoAuthentication library, the next step is to allow them to access the appropriate AWS resources. This requires you to create an identity pool through the Amazon Cognito Federated Identities console. By specifying the Amazon Cognito user pool you created as a provider, using its poolID and clientID, you can allow your user pool users to access AWS resources connected to your account. You can also specify different roles for unauthenticated and authenticated users so that they can access different resources. These roles can be changed in the IAM console, where you can add or remove permissions in the “Action” field of the role’s attached policy.

Then, using the appropriate identity pool, user pool, and Amazon Cognito user information, you can make calls to different AWS resources. The following example shows a user authenticated with SRP listing the S3 buckets that the role associated with the identity pool permits:

using System;

using Amazon;
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.CognitoIdentity;
using Amazon.CognitoIdentityProvider;
using Amazon.Extensions.CognitoAuthentication;

public async void GetS3BucketsAsync()
{
    var provider = new AmazonCognitoIdentityProviderClient(new AnonymousAWSCredentials(),
                                                            FallbackRegionFactory.GetRegionEndpoint());
    CognitoUserPool userPool = new CognitoUserPool("poolID", "clientID", provider);
    CognitoUser user = new CognitoUser("username", "clientID", userPool, provider);

    string password = "userPassword";

    AuthFlowResponse context = await user.StartWithSrpAuthAsync(new InitiateSrpAuthRequest()
    {
        Password = password
    }).ConfigureAwait(false);

    CognitoAWSCredentials credentials = 
        user.GetCognitoAWSCredentials("identityPoolID", RegionEndpoint.<YourIdentityPoolRegion>);

    using (var client = new AmazonS3Client(credentials))
    {
        ListBucketsResponse response = 
            await client.ListBucketsAsync(new ListBucketsRequest()).ConfigureAwait(false);

        foreach (S3Bucket bucket in response.Buckets)
        {
            Console.WriteLine(bucket.BucketName);
        }
    }
}

Other Forms of Authentication

In addition to SRP, NewPasswordRequired, and MFA, the CognitoAuthentication extension library offers an easier authentication flow for:

  • Custom – Begins with a call to StartWithCustomAuthAsync(InitiateCustomAuthRequest customRequest)
  • RefreshToken – Begins with a call to StartWithRefreshTokenAuthAsync(InitiateRefreshTokenAuthRequest refreshTokenRequest)
  • RefreshTokenSRP – Begins with a call to StartWithRefreshTokenAuthAsync(InitiateRefreshTokenAuthRequest refreshTokenRequest)
  • AdminNoSRP – Begins with a call to StartWithAdminNoSrpAuth(InitiateAdminNoSrpAuthRequest adminAuthRequest)

Call the appropriate method depending on the desired flow, and then continue prompting the user with challenges as they are presented in the AuthFlowResponse objects of each method call. Also call the appropriate response method, such as RespondToSmsMfaAuthAsync for MFA challenges and RespondToCustomAuthAsync for custom challenges. If new authentication methods are created for Amazon Cognito User Pools in the future, they will be added to the library as well.
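
For example, here is a minimal sketch of the RefreshToken flow, building on the CognitoUser object from the earlier snippets and assuming you stored the user’s refresh token from a previous authentication (the token lifetime passed to CognitoUserSession is illustrative):

// Attach the stored refresh token to the user, then start the flow.
string refreshToken = "<stored refresh token>";
user.SessionTokens = new CognitoUserSession(null, null, refreshToken,
                                            DateTime.Now, DateTime.Now.AddHours(1));

AuthFlowResponse authResponse =
    await user.StartWithRefreshTokenAuthAsync(new InitiateRefreshTokenAuthRequest()
    {
        AuthFlowType = AuthFlowType.REFRESH_TOKEN_AUTH
    }).ConfigureAwait(false);

// On success, authResponse.AuthenticationResult contains fresh session tokens.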

Conclusion

We hope you find this preview of the CognitoAuthentication extension library useful as it simplifies the authentication process from hundreds of lines of code to only a few. We appreciate all of your feedback. You can provide public feedback on our GitHub page or our Gitter page. Those who prefer not to give public feedback can reach out to our support team by filing a support ticket through the AWS Management Console.