AWS Developer Blog

Context Pattern added to the AWS SDK for Go

The AWS SDK for Go v1.8.0 release adds support for API operation request functional options and for the Context pattern. Both features were in high demand from our users. Request options allow you to easily configure and augment how the SDK makes API operation requests to AWS services. The SDK’s support for the Context pattern allows your application to take advantage of cancellation, timeouts, and Context values on requests. Together, the new request options and the Context pattern give your application even more control over the SDK’s request execution and handling.

Request Options

Request options are functional arguments that you pass in to the SDK’s API operation methods. Functional options are a pattern you can use to configure an operation via functions or closures passed in line with the method call.

For example, you can configure the Amazon S3 API operation PutObject to log debug information about the request directly, without impacting the other API operations used by your application.

// Log this API operation only. 
resp, err := svc.PutObjectWithContext(ctx, params, request.WithLogLevel(aws.LogDebug))

This pattern is also helpful when you want your application to inject request handlers into the request. This allows you to do so in line with the API operation method call.

resp, err := svc.PutObjectWithContext(ctx, params, func(r *request.Request) {
	start := time.Now()
	r.Handlers.Complete.PushBack(func(req *request.Request) {
		fmt.Printf("request %s took %s to complete\n", req.RequestID, time.Since(start))
	})
})

All of the SDK’s new service client methods that have a WithContext suffix support these request options. You can also apply request options directly to the SDK’s standard Request type with its ApplyOptions method.
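For example, here’s a minimal sketch of that pattern, reusing the params value from the earlier example.

// Create the request without sending it, apply the option, then send.
req, resp := svc.PutObjectRequest(params)
req.ApplyOptions(request.WithLogLevel(aws.LogDebug))
if err := req.Send(); err == nil {
	fmt.Println("upload response:", resp)
}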

API Operations with Context

All of the new methods of the SDK’s API operations that have a WithContext suffix take a Context value. This value must be non-nil. Context allows your application to control API operation request cancellation, which means you can now easily institute request timeouts based on the Context pattern. Go introduced the Context pattern in the experimental package golang.org/x/net/context, and it was later added to the Go standard library in Go 1.7. For backward compatibility with earlier Go versions, the SDK defines a Context interface type in the github.com/aws/aws-sdk-go/aws package that is compatible with the Context types from both golang.org/x/net/context and the Go 1.7 standard library context package.

Here is an example of how to use a Context to cancel uploading an object to Amazon S3. If the put doesn’t complete within the timeout passed in, the API operation is canceled. When a Context is canceled, the SDK returns the CanceledErrorCode error code. A working version of this example can be found in the SDK.

sess := session.Must(session.NewSession())
svc := s3.New(sess)

// Create a context with a timeout that will abort the upload if it takes 
// more than the passed in timeout.
ctx := context.Background()
var cancelFn func()
if timeout > 0 {
	ctx, cancelFn = context.WithTimeout(ctx, timeout)
}
// Ensure the context is canceled to prevent leaking.
// See context package for more information, https://golang.org/pkg/context/
if cancelFn != nil {
	defer cancelFn()
}

// Uploads the object to S3. The Context will interrupt the request if the 
// timeout expires.
_, err := svc.PutObjectWithContext(ctx, &s3.PutObjectInput{
	Bucket: aws.String(bucket),
	Key:    aws.String(key),
	Body:   body,
})
if err != nil {
	if aerr, ok := err.(awserr.Error); ok && aerr.Code() == request.CanceledErrorCode {
		// If the SDK can determine the request or retry delay was canceled
		// by a context the CanceledErrorCode error code will be returned.
		fmt.Println("request's context canceled,", err)
	}
	return err
}

API Operation Waiters

Waiters were expanded to include support for request Context and waiter options. The new WaiterOption type defines functional options that are used to configure the waiter’s functionality.

For example, the WithWaiterDelay option allows you to provide your own function that returns how long the waiter should wait before checking the waiter’s resource state again. This is helpful when you want to configure an exponential backoff (see the sketch after the following example), or longer retry delays with ConstantWaiterDelay.

The example below highlights this by configuring the WaitUntilBucketExists method to use a 30-second delay between checks to determine if the bucket exists.

svc := s3.New(sess)
ctx := context.Background()

_, err := svc.CreateBucketWithContext(ctx, &s3.CreateBucketInput{
	Bucket: aws.String("myBucket"),
})
if err != nil {
	return fmt.Errorf("failed to create bucket, %v", err)
}

err = svc.WaitUntilBucketExistsWithContext(ctx,
	&s3.HeadBucketInput{
		Bucket: aws.String("myBucket"),
	},
	request.WithWaiterDelay(request.ConstantWaiterDelay(30 * time.Second)),
)
if err != nil {
	return fmt.Errorf("failed to wait for bucket exists, %v", err)
}

fmt.Println("bucket created")

API Operation Paginators

Paginators were also expanded to add support for Context and request options. Configuring request options for pagination applies the options to each new Request that the SDK creates to retrieve the next page. By extending the Pages API methods to include Context and request options, the SDK gives you control over how each page request is made, and lets you cancel the pagination.

svc := s3.New(sess)
ctx := context.Background()

err := svc.ListObjectsPagesWithContext(ctx,
	&s3.ListObjectsInput{
		Bucket: aws.String("myBucket"),
		Prefix: aws.String("some/key/prefix"),
		MaxKeys: aws.Int64(100),
	},
	func(page *s3.ListObjectsOutput, lastPage bool) bool {
		fmt.Println("Received", len(page.Contents), "objects in page")
		for _, obj := range page.Contents {
			fmt.Println("Key:", aws.StringValue(obj.Key))
		}
		return true
	},
)
if err != nil {
	return fmt.Errorf("failed to list objects, %v", err)
}

API Operation Pagination without Callbacks

In addition to the Pages API operations, you can use the new Pagination type in the github.com/aws/aws-sdk-go/aws/request package. This type enables you to control the iteration of pages directly, which is helpful when you don’t want to use callbacks for paginating AWS operations. The new type lets you treat pagination like the Go standard library’s bufio.Scanner type, iterating through pages with a for loop. You can also combine this with the Context pattern by calling Request.SetContext on each request in the NewRequest function.

svc := s3.New(sess)

params := s3.ListObjectsInput{
	Bucket: aws.String("myBucket"),
	Prefix: aws.String("some/key/prefix"),
	MaxKeys: aws.Int64(100),
}
ctx := context.Background()

p := request.Pagination{
	NewRequest: func() (*request.Request, error) {
		req, _ := svc.ListObjectsRequest(&params)
		req.SetContext(ctx)
		return req, nil
	},
}

for p.Next() {
	page := p.Page().(*s3.ListObjectsOutput)
	
	fmt.Println("Received", len(page.Contents), "objects in page")
	for _, obj := range page.Contents {
		fmt.Println("Key:", aws.StringValue(obj.Key))
	}
}

return p.Err()

Wrap Up

The addition of Context and request options expands the capabilities of the AWS SDK for Go, giving your applications the tools needed to implement request lifecycle and configuration with the SDK. Let us know your experiences using the new Context pattern and request options features.

Using Python and Amazon SQS FIFO Queues to Preserve Message Sequencing

by Tara Van Unen | in Python

Thanks to Alexandre Pinhel, Solutions Architect from our team, for writing this post!

Amazon SQS is a managed message queuing service that makes it simple to decouple application components. We recently announced an entirely new queue type, SQS FIFO (first-in, first-out) queues, with exactly-once processing and deduplication. SQS FIFO queues are now available in the US East (Ohio) and US West (Oregon) regions, with more regions to follow. This new type of queue lets you use Amazon SQS for systems that depend on receiving messages in exact order, and exactly once, such as financial services and e-commerce applications. For example, FIFO queues help ensure that mobile banking transactions are processed in the correct sequence, and that inventory updates for online retail sites are processed in the right order. In this post, we show how to use FIFO queues to preserve message sequencing with Python.

FIFO queues complement our existing SQS standard queues, which offer higher throughput, best-effort ordering, and at-least-once delivery. The following diagram compares the features of standard queues vs. FIFO queues. The same API functions apply to both types of queues.

The following use case provides an example of how you can use SQS FIFO queues to exchange sequence-sensitive information. For more information about developing applications using Amazon SQS, see the Amazon SQS Developer Guide.

SQS FIFO Queues Example

In the capital markets industry, some of the most common patterns for exchanging messages with partners and customers are based on messaging technologies with two types of scenarios:

  1. Communication channels between two messaging managers (one sender channel and one receiver channel). Each messaging manager hosts the local queue and has an alias to the remote queue hosted on the other side (an MQ manager). The messages sent from an MQ manager are not stored locally. The receiving MQ manager stores the messages for the client applications of the named queues.
  2. A single messaging manager that hosts all the queues and that has the associated responsibility for message exchange and backup.

You can use Amazon SQS to decouple the components of an application so that these components can run independently, as expected in a messaging use case. The following diagram shows a sample architecture using an SQS queue with processing servers.


To preserve the order of messages, we use FIFO queues. These queues help ensure that trades are received in the correct order, and a book event is received before an update event or a cancel event.

Important: The name of a FIFO queue must end with the .fifo suffix.

The following diagram shows a financial use case, where Amazon SQS FIFO queues are used with different processing servers based on the type of messages being managed.

In FIFO queues, Amazon SQS also provides content-based deduplication. Content-based deduplication allows SQS to distinguish the contents of one message from the contents of another message using the message body. This helps eliminate duplicates in referential systems such as those that manage pricing.

In the following example, we simulate the two parts of a capital markets exchange. In the first part, we simulate the application that reports trade status by sending messages to the queue named Trade Status. (In Amazon SQS, the queue is named TradeStatus.fifo.) The application regularly sends the trade statuses received during the trade lifecycle to the queue (for example, trade received, trade checked, trade confirmed, and so on). In the second part, we simulate a client application that gets the trade status to update an internal website or to send status update notifications to other tools. The script stops after the message is read.

To accomplish this, you can use the following two Python code examples, which use boto3, the AWS SDK for Python.

This first script sends an XML message to a queue named TradeStatus.fifo, and the second script receives the message from the same queue. Messages can contain up to 256 KB of text in any format. Any component can later retrieve the messages programmatically using the Amazon SQS API. You can manage messages larger than 256 KB by using the SQS Extended Client Library for Java, which uses Amazon S3 to store larger payloads.

For queue creation, see the Amazon SQS Developer Guide.

Name: TradeStatus.fifo

URL: https://sqs.us-west-2.amazonaws.com/12345678/TradeStatus.fifo
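If you haven’t created the queue yet, a minimal boto3 sketch along the following lines creates it. The FifoQueue attribute is required for FIFO queues, and ContentBasedDeduplication enables the body-based deduplication described above.

import boto3

sqs = boto3.resource('sqs')

# Create the FIFO queue. FIFO queue names must end with the .fifo suffix.
queue = sqs.create_queue(
    QueueName='TradeStatus.fifo',
    Attributes={
        'FifoQueue': 'true',
        'ContentBasedDeduplication': 'true'
    }
)
print(queue.url)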

The scripts below are written in Python 2.

import boto3

# Get the service resource
sqs = boto3.resource('sqs')

# Get the queue
queue = sqs.get_queue_by_name(QueueName='TradeStatus.fifo')

try:
    userInput = raw_input("Please enter file name: ")
except NameError:
    # raw_input doesn't exist on Python 3; fall back to input
    userInput = input("Please enter file name: ")

with open(userInput, 'r') as myfile:
    data=myfile.read()

response = queue.send_message(
    MessageBody=data,
    MessageGroupId='messageGroup1'
)

# The response is NOT a resource, but gives you a message ID and MD5
print(response.get('MessageId'))
print(response.get('MD5OfMessageBody'))

The following Python code receives the message from the TradeStatus.fifo queue and deletes the message when it’s received. Afterward, the message is no longer available.

import boto3

# Get the service resource
sqs = boto3.resource('sqs')

# Get the queue
queue = sqs.get_queue_by_name(QueueName='TradeStatus.fifo')

# Process messages by printing out body
for message in queue.receive_messages():
    # Print out the body of the message
    print('Hello, {0}'.format(message.body))

    # Let the queue know that the message is processed
    message.delete()

Note: In Python, you need only the name of the queue.

More Resources

In this post, we showed how you can use Amazon SQS FIFO queues to exchange data between distributed systems that depend on receiving messages in exact order, and exactly once. You can get started with SQS FIFO queues using just three simple commands. For more information, see the Amazon SQS Developer Guide.

Creating .NET Core AWS Lambda Projects without Visual Studio

by Norm Johanson | in .NET

In the last post, we talked about AWS Lambda deployment integration with the dotnet CLI, using the Amazon.Lambda.Tools NuGet package to deploy Lambda functions and serverless applications. But what if you want to create an AWS Lambda project outside of Visual Studio? This is especially important if you’re working on platforms other than Windows.

The “dotnet new” Command

The dotnet CLI has a command named new that you can use to create .NET Core projects from the command line. By default, it includes options for creating many of the common project types.

C:\BlogContent> dotnet new -all                                                                                                             
Template Instantiation Commands for .NET Core CLI.                                                                            
                                                   
Templates                 Short Name       Language      Tags                                                                 
------------------------------------------------------------------------------------------------------                                                      
Console Application       console          [C#], F#      Common/Console                                                       
Class library             classlib         [C#], F#      Common/Library                                                       
Unit Test Project         mstest           [C#], F#      Test/MSTest                                                          
xUnit Test Project        xunit            [C#], F#      Test/xUnit                                                           
ASP.NET Core Empty        web              [C#]          Web/Empty                                                            
ASP.NET Core Web App      mvc              [C#], F#      Web/MVC                                                              
ASP.NET Core Web API      webapi           [C#]          Web/WebAPI                                                           
Nuget Config              nugetconfig                    Config                                                               
Web Config                webconfig                      Config                                                               
Solution File             sln                            Solution                                                             
                                                                                                                             
Examples:                                                                                                                     
    dotnet new mvc --auth None --framework netcoreapp1.1                                                                      
    dotnet new mvc --framework netcoreapp1.1                                                                                  
    dotnet new --help   

The new command also has the ability to add more project types via NuGet. We recently released a new NuGet package named Amazon.Lambda.Templates that wraps up all the templates we expose in Visual Studio as project types you can create from the dotnet CLI. To install this NuGet package, run the following command.

dotnet new -i Amazon.Lambda.Templates::*

The trailing ::* in the command specifies to install the latest version. Once the install is complete, the Lambda templates show up as part of dotnet new.

C:\BlogContent> dotnet new -all                                                                                                             
Template Instantiation Commands for .NET Core CLI.                                                                            
                                                                                                                             
Templates                            Short Name                    Language      Tags                                         
------------------------------------------------------------------------------------------------------                                                      
Lambda Detect Image Labels           lambda.DetectImageLabels      [C#]          AWS/Lambda/Function                          
Lambda Empty Function                lambda.EmptyFunction          [C#]          AWS/Lambda/Function                          
Lambda Simple DynamoDB Function      lambda.DynamoDB               [C#]          AWS/Lambda/Function                          
Lambda Simple Kinesis Function       lambda.Kinesis                [C#]          AWS/Lambda/Function                          
Lambda Simple S3 Function            lambda.S3                     [C#]          AWS/Lambda/Function                          
Lambda ASP.NET Core Web API          lambda.AspNetCoreWebAPI       [C#]          AWS/Lambda/Serverless                        
Lambda DynamoDB Blog API             lambda.DynamoDBBlogAPI        [C#]          AWS/Lambda/Serverless                        
Lambda Empty Serverless              lambda.EmptyServerless        [C#]          AWS/Lambda/Serverless                        
Console Application                  console                       [C#], F#      Common/Console                               
Class library                        classlib                      [C#], F#      Common/Library                               
Unit Test Project                    mstest                        [C#], F#      Test/MSTest                                  
xUnit Test Project                   xunit                         [C#], F#      Test/xUnit                                   
ASP.NET Core Empty                   web                           [C#]          Web/Empty                                    
ASP.NET Core Web App                 mvc                           [C#], F#      Web/MVC                                      
ASP.NET Core Web API                 webapi                        [C#]          Web/WebAPI                                   
Nuget Config                         nugetconfig                                 Config                                       
Web Config                           webconfig                                   Config                                       
Solution File                        sln                                         Solution                                     
                                                                                                                             
Examples:                                                                                                                     
    dotnet new mvc --auth None --framework netcoreapp1.1                                                                      
    dotnet new classlib                                                                                                       
    dotnet new --help                                                                                                         
C:\BlogContent>  

To get details about a template, you can use the help command.


dotnet new lambda.EmptyFunction --help

C:\BlogContent> dotnet new lambda.EmptyFunction --help                                                                                                    
Template Instantiation Commands for .NET Core CLI.                                                                                          
                                                                                                                                           
Lambda Empty Function (C#)                                                                                                                  
Author: AWS                                                                                                                                 
Options:                                                                                                                                    
  -p|--profile  The AWS credentials profile set in aws-lambda-tools-defaults.json and used as the default profile when interacting with AWS.
                string - Optional                                                                                                           
                                                                                                                                           
  -r|--region   The AWS region set in aws-lambda-tools-defaults.json and used as the default region when interacting with AWS.              
                string - Optional       

You can see here that the template takes two optional parameters to set the profile and the region. These values are written to the aws-lambda-tools-defaults.json file so you can get started deploying with the Lambda tooling right away.
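For example, if you pass --profile default and --region us-east-2, the generated aws-lambda-tools-defaults.json might contain values along these lines (a sketch of the relevant fields, not the complete file).

{
  "profile" : "default",
  "region"  : "us-east-2"
}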

To create a function, run the following command.

dotnet new lambda.EmptyFunction --name BlogFunction --profile default --region us-east-2

This creates a project for the Lambda function and a test project. We can now use any editor we want to build and test our .NET Core Lambda function. Once we’re ready to deploy the function, we run the following commands.

cd ./BlogFunction/src/BlogFunction
dotnet restore
dotnet lambda deploy-function BlogFunction --function-role TestRole

After deployment, we can even test the function from the command line by using the following command.

dotnet lambda invoke-function BlogFunction --payload "Hello World"
C:\BlogContent> dotnet lambda invoke-function BlogFunction --payload "Hello World"
Payload:
"HELLO WORLD"

Log Tail:
START RequestId: a54b750b-0dca-11e7-9099-27598ea7c35d Version: $LATEST
END RequestId: a54b750b-0dca-11e7-9099-27598ea7c35d
REPORT RequestId: a54b750b-0dca-11e7-9099-27598ea7c35d  Duration: 0.99 ms       Billed Duration: 100 ms         Memory Size: 256 MB     Max Memory Used: 42 MB

Summary

With our Lambda tooling provided by Amazon.Lambda.Tools and our project templates provided by Amazon.Lambda.Templates, you can develop .NET Core Lambda functions on any platform. As always, let us know what you think on our GitHub repository.

Deploying .NET Core AWS Lambda Functions from the Command Line

by Norm Johanson | on | in .NET | Permalink | Comments |  Share

In previous posts about our .NET Core support with AWS Lambda, we’ve shown how you can create Lambda functions and serverless applications with Visual Studio. But one of the most exciting things about .NET Core is its cross-platform support with the new command line interface (CLI) named dotnet. To help you develop Lambda functions outside of Visual Studio, we’ve released the Amazon.Lambda.Tools NuGet package that integrates with the dotnet CLI.

We released Amazon.Lambda.Tools as a preview with our initial release of .NET Core on Lambda. We kept it in preview while .NET Core tooling, including the dotnet CLI, was in preview. With the recent release of Visual Studio 2017, the dotnet CLI and our integration with it is now marked as generally available (GA). If you’re still using preview versions of the dotnet CLI and the pre-Visual Studio 2017 project structure, the GA release of Amazon.Lambda.Tools will still work for those projects.

.NET Core Project Structure

When .NET Core was originally released last summer, you defined a project in a JSON file named project.json. At the time, it was announced that this was temporary, that .NET Core would move in line with other .NET projects to the msbuild XML format, and that each project would contain a .csproj file. The GA release of the dotnet CLI tooling includes this switch to the msbuild format.

Amazon.Lambda.Tools Registration

If you create an AWS Lambda project in Visual Studio, the command line integration is set up automatically so that you can easily transition from Visual Studio to the command line. If you inspect a project created in Visual Studio 2017, you’ll notice a DotNetCliToolReference for the Amazon.Lambda.Tools NuGet package.


<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Amazon.Lambda.Core" Version="1.0.0" />
    <PackageReference Include="Amazon.Lambda.Serialization.Json" Version="1.0.1" />
  </ItemGroup>

  <ItemGroup>
    <DotNetCliToolReference Include="Amazon.Lambda.Tools" Version="1.4.0" />
  </ItemGroup>

</Project>

In Visual Studio 2015, which uses the older project.json format, the Amazon.Lambda.Tools package is declared as a build dependency and is also registered in the tools section.


{
  "version": "1.0.0-*",
  "buildOptions": {
  },

  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.0.0"
    },

    "Amazon.Lambda.Core": "1.0.0*",
    "Amazon.Lambda.Serialization.Json": "1.0.1",

    "Amazon.Lambda.Tools" : {
      "type" :"build",
      "version":"1.4.0 "
    }
  },

  "tools": {
    "Amazon.Lambda.Tools" : "1.4.0 "
  },

  "frameworks": {
    "netcoreapp1.0": {
      "imports": "dnxcore50"
    }
  }
}

Adding to an Existing Project

The Amazon.Lambda.Tools NuGet package is marked as a DotNetCliTool package type. Right now, Visual Studio 2017 doesn’t understand this new package type, so if you attempt to add the NuGet package through Visual Studio’s Manage NuGet Packages dialog, it won’t be able to add the reference. Until Visual Studio 2017 is updated, you need to manually add the DotNetCliToolReference to the .csproj file.
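For example, adding the following ItemGroup (the same reference shown in the project file earlier) registers the tooling:

<ItemGroup>
  <DotNetCliToolReference Include="Amazon.Lambda.Tools" Version="1.4.0" />
</ItemGroup>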

Deploying Lambda Functions

All the tooling we developed for Visual Studio to deploy Lambda functions originated in the Amazon.Lambda.Tools package. That means every deployment feature you use inside Visual Studio is also available from the command line.

To get started, in a command window navigate to a project you created in Visual Studio. To see the available commands, enter dotnet lambda help.

C:\BlogContent\BlogExample\BlogExample> dotnet lambda help                                                                                     
AWS Lambda Tools for .NET Core functions                                                                 
Project Home: https://github.com/aws/aws-lambda-dotnet                                                   
                                                                                                        
                                                                                                        
Commands to deploy and manage AWS Lambda functions:                                                      
                                                                                                        
        deploy-function         Command to deploy the project to AWS Lambda                              
        invoke-function         Command to invoke a function in Lambda with an optional input            
        list-functions          Command to list all your Lambda functions                                
        delete-function         Command to delete an AWS Lambda function                                 
        get-function-config     Command to get the current runtime configuration for a Lambda function   
        update-function-config  Command to update the runtime configuration for a Lambda function        
                                                                                                        
                                                                                                        
Commands to deploy and manage AWS Serverless applications using AWS CloudFormation:                      
                                                                                                        
        deploy-serverless       Command to deploy an AWS Serverless application                          
        list-serverless         Command to list all your AWS Serverless applications                     
        delete-serverless       Command to delete an AWS Serverless application                          
                                                                                                        
                                                                                                        
Other Commands:                                                                                          
                                                                                                        
        package                 Command to package a Lambda project into a zip file ready for deployment
                                                                                                        
                                                                                                        
To get help on individual commands execute:                                                              
        dotnet lambda help <command>  

By using the dotnet lambda command, you have access to a collection of commands to manage Lambda functions and serverless applications. There is also a package command that packages your project into a .zip file, ready for deployment. This can be useful for CI systems.
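For example, a CI build might package the project without deploying it by running a command along these lines (the output path here is just an illustration):

dotnet lambda package --configuration Release --framework netcoreapp1.0 --output-package ./artifacts/BlogExample.zip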

To see help for an individual command, type dotnet lambda help followed by the command name; for example, dotnet lambda help deploy-function.

C:\BlogContent\BlogExample\BlogExample> dotnet lambda help deploy-function
AWS Lambda Tools for .NET Core functions
Project Home: https://github.com/aws/aws-lambda-dotnet

deploy-function:
   Command to deploy the project to AWS Lambda

   dotnet lambda deploy-function [arguments] [options]
   Arguments:
      <FUNCTION-NAME> The name of the function to deploy
   Options:
     --region                                The region to connect to AWS services, if not set region will be detected from the environment (Default Value: us-east-2)
      --profile                               Profile to use to look up AWS credentials, if not set environment credentials will be used (Default Value: normj+vpc)
     --profile-location                      Optional override to the search location for Profiles, points at a shared credentials file
      -pl    | --project-location             The location of the project, if not set the current directory will be assumed
      -cfg   | --config-file                  Configuration file storing default values for command line arguments. Default is aws-lambda-tools-defaults.json
      -c     | --configuration                Configuration to build with, for example Release or Debug (Default Value: Release)
      -f     | --framework                    Target framework to compile, for example netcoreapp1.0 (Default Value: netcoreapp1.0)
      -pac   | --package                      Application package to use for deployment, skips building the project
      -fn    | --function-name                AWS Lambda function name
      -fd    | --function-description         AWS Lambda function description
      -fp    | --function-publish             Publish a new version as an atomic operation
      -fh    | --function-handler             Handler for the function <assembly>::<type>::<method> (Default Value: BlogExample::BlogExample.Function::FunctionHandler)
      -fms   | --function-memory-size         The amount of memory, in MB, your Lambda function is given (Default Value: 256)
      -frole | --function-role                The IAM role that Lambda assumes when it executes your function
      -ft    | --function-timeout             The function execution timeout in seconds (Default Value: 30)
      -frun  | --function-runtime             The runtime environment for the Lambda function (Default Value: dotnetcore1.0)
      -fsub  | --function-subnets             Comma delimited list of subnet ids if your function references resources in a VPC
      -fsec  | --function-security-groups     Comma delimited list of security group ids if your function references resources in a VPC
      -dlta  | --dead-letter-target-arn       Target ARN of an SNS topic or SQS Queue for the Dead Letter Queue
      -ev    | --environment-variables        Environment variables set for the function. Format is <key1>=<value1>;<key2>=<value2>
      -kk    | --kms-key                      KMS Key ARN of a customer key used to encrypt the function's environment variables
      -sb    | --s3-bucket                    S3 bucket to upload the build output
      -sp    | --s3-prefix                    S3 prefix for the build output
      -pcfg  | --persist-config-file          If true the arguments used for a successful deployment are persisted to a config file. Default config file is aws-lambda-tools-defaults.json
C:\BlogContent\BlogExample\BlogExample>

As you can see, you can set many options with this command. This is where the aws-lambda-tools-defaults.json file, which is created as part of your project, comes in handy. You can set the options in this file, which is read by the Lambda tooling by default. The project templates created in Visual Studio set many of these fields with default values.


{                                                                                   
  "profile":"default",                                                            
  "region" : "us-east-2",                                                           
  "configuration" : "Release",                                                      
  "framework" : "netcoreapp1.0",                                                    
  "function-runtime":"dotnetcore1.0",                                               
  "function-memory-size" : 256,                                                     
  "function-timeout" : 30,                                                          
  "function-handler" : "BlogExample::BlogExample.Function::FunctionHandler"         
}

When you use this aws-lambda-tools-defaults.json file, the only things the Lambda tooling still needs in order to deploy the function are the name of the Lambda function and the IAM role. You provide these by using the following command:

dotnet lambda deploy-function TheFunction --function-role TestRole
C:\BlogContent\BlogExample\BlogExample> dotnet lambda deploy-function TheFunction --function-role TestRole                                                                                                  
Executing publish command                                                                                                                                                           
Deleted previous publish folder                                                                                                                                                     
... invoking 'dotnet publish', working folder 'C:\BlogContent\BlogExample\BlogExample\bin\Release\netcoreapp1.0\publish'                                                            
... publish: Microsoft (R) Build Engine version 15.1.548.43366                                                                                                                      
... publish: Copyright (C) Microsoft Corporation. All rights reserved.                                                                                                              
... publish:   BlogExample -> C:\BlogContent\BlogExample\BlogExample\bin\Release\netcoreapp1.0\BlogExample.dll                                                                      
Zipping publish folder C:\BlogContent\BlogExample\BlogExample\bin\Release\netcoreapp1.0\publish to C:\BlogContent\BlogExample\BlogExample\bin\Release\netcoreapp1.0\BlogExample.zip
... zipping: Amazon.Lambda.Core.dll                                                                                                                                                 
... zipping: Amazon.Lambda.Serialization.Json.dll                                                                                                                                   
... zipping: BlogExample.deps.json                                                                                                                                                  
... zipping: BlogExample.dll                                                                                                                                                        
... zipping: BlogExample.pdb                                                                                                                                                        
... zipping: Newtonsoft.Json.dll                                                                                                                                                    
... zipping: System.Runtime.Serialization.Primitives.dll                                                                                                                            
Creating new Lambda function TheFunction                                                                                                                                            
New Lambda function created                                                                                                                                                         
C:\BlogContent\BlogExample\BlogExample>             

You can also pass an alternative file that contains option defaults by using the --config-file option. This enables you to reuse multiple Lambda configurations.
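For example, you could keep one defaults file per environment and pick one at deployment time (the file name here is hypothetical):

dotnet lambda deploy-function TheFunction --function-role TestRole --config-file aws-lambda-tools-prod.json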

“dotnet publish” vs “dotnet lambda” Commands

Using the Amazon.Lambda.Tools package is the preferred way to deploy functions to Lambda from the command line, compared to running the dotnet publish command, zipping the output folder, and sending the zip file to Lambda. The Lambda tooling looks at the publish folder and removes any duplicate native dependencies, which reduces the size of your Lambda function. For example, if you reference the SQL Server client NuGet package, System.Data.SqlClient, the Lambda tooling produces a package file that is about 1 MB smaller than the zipped publish folder from dotnet publish. It also reworks the layout of native dependencies to ensure that the Lambda service finds them.

Summary

We hope the Amazon.Lambda.Tools package helps you with the transition from working in Visual Studio to working in the command line to script and automate your deployments. Let us know what you think on our GitHub repository, and what you’d like to see us add to the tooling.

AWS SDK for .NET Supports Assume Role Profiles and the Shared Credentials File

by John Vellozzi | in .NET

The AWS SDK for .NET, AWS Tools for PowerShell, and the AWS Toolkit for Visual Studio now support the use of the AWS CLI credentials file. Some of the AWS SDKs have supported shared use of the AWS CLI credentials file for some time, and we’re happy to add the SDK for .NET to that list.

For a long time, the SDK for .NET has supported reading and writing of its own credentials file. We’ve added support for new credential profile types to facilitate feature parity with the shared credentials file. The SDK for .NET and Tools for PowerShell now support reading and writing of basic, session, and assume role credential profiles to both the .NET credentials file and the shared credentials file. The .NET credentials file maintains its support for federated credential profiles.

With the new Amazon.Runtime.CredentialManagement namespace, you now have programmatic access to read and write credential profiles to the .NET credentials file and the shared credentials file. This is a new namespace, and some older classes have been deprecated. Please see the developer guide topic Configuring AWS Credentials and the API Reference for details.
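For example, here’s a minimal sketch of writing a basic profile to the shared credentials file and reading it back with the Amazon.Runtime.CredentialManagement classes; the profile name and keys are placeholders.

var options = new CredentialProfileOptions
{
    AccessKey = "access_key_placeholder",
    SecretKey = "secret_key_placeholder"
};
var profile = new CredentialProfile("my_profile", options);

// Write the profile to (user's home directory)\.aws\credentials.
var sharedFile = new SharedCredentialsFile();
sharedFile.RegisterProfile(profile);

// Read the profile back.
CredentialProfile readProfile;
if (sharedFile.TryGetProfile("my_profile", out readProfile))
{
    Console.WriteLine("Found profile " + readProfile.Name);
}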

AWS Tools for PowerShell now enable you to read and write credential profiles to both credentials files as well. We’ve added parameters to the credentials-related cmdlets to support the new profile types and the shared credentials file. You can reference the new profiles with the -ProfileName argument in the service cmdlets. You can find more details about the changes to Tools for PowerShell in Shared Credentials in AWS Tools for PowerShell and the AWS Tools for PowerShell Cmdlet Reference.

In Visual Studio you’ll now see profiles stored in (user’s home directory)\.aws\credentials listed in the AWS Explorer. Reading is supported for all profile types and you can edit basic profiles.

What You Need to Know

In addition to the new Amazon.Runtime.CredentialManagement classes, the SDK has some internal changes. The SDK’s region resolution logic now looks for the region in the default credential profile. This is especially important for SDK for .NET applications running in Amazon EC2. The SDK for .NET determines the region for a request from:

  1. The client configuration, or what is explicitly set on the AWS service client.
  2. The AWSConfigs.RegionEndpoint property (set explicitly or in AppConfig).
  3. The AWS_REGION environment variable, if it’s non-empty.
  4. The “default” credential profile. (See “Credential Profile Resolution” below for details.)
  5. EC2 instance metadata.

Checking the “default” credential profile is a new step in the process. If your application relies on EC2 instance metadata for the region, ensure that the SDK doesn’t pick up the wrong region from one of the credentials files.

Although there aren’t any changes to the credentials resolution logic, it’s important to understand how credential profiles fit into that as well. The SDK for .NET will (continue to) determine the credentials to use for service requests from:

  1. The client configuration, or what is explicitly set on the AWS service client.
  2. BasicAWSCredentials that are created from the AWSAccessKey and AWSSecretKey AppConfig values, if they’re available.
  3. A search for a credentials profile with a name specified by a value in AWSConfigs.AWSProfileName (set explicitly or in AppConfig). (See “Credential Profile Resolution” below for details.)
  4. The “default” credentials profile. (See “Credential Profile Resolution” below for details.)
  5. SessionAWSCredentials that are created from the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN environment variables, if they’re all non-empty.
  6. BasicAWSCredentials that are created from the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, if they’re both non-empty.
  7. EC2 instance metadata.

Credential Profile Resolution

With two different credentials file types, it’s important to understand how to configure the SDK and Tools for PowerShell to use them. The AWSConfigs.AWSProfilesLocation (set explicitly or in AppConfig) controls how the SDK finds credential profiles. The -ProfileLocation command line argument controls how Tools for PowerShell find a profile. Here’s how the configuration works in both cases:

Profile location value: null (not set) or empty
Profile resolution behavior: First search the .NET credentials file for a profile with the specified name. If the profile isn’t there, search (user’s home directory)\.aws\credentials. If the profile isn’t there, search (user’s home directory)\.aws\config.*

Profile location value: the path to a file in the shared credentials file format
Profile resolution behavior: Search only the specified file for a profile with the specified name.

*The .NET credentials file is not supported on Mac and Linux platforms, and is skipped when resolving credential profiles.
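As a sketch, here’s the programmatic equivalent of that configuration (both values can also be set in AppConfig; the path and profile name are illustrative):

// Use the named profile, and point the SDK at a specific file in the
// shared credentials file format instead of the default search order.
AWSConfigs.AWSProfileName = "assume-role-profile";
AWSConfigs.AWSProfilesLocation = @"C:\Users\alice\.aws\credentials";

// Service clients created after this point resolve credentials from
// that profile.
var s3Client = new AmazonS3Client();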

Time-to-Live Support in Amazon DynamoDB

by Pavel Safronov | in .NET

Amazon DynamoDB recently added Time-to-Live (TTL) support, a way to automatically delete expired items from your DynamoDB table. This blog post discusses this feature, how it’s exposed in the AWS SDK for .NET, and how you can take advantage of it.

Using Time-to-Live

At a high level, you configure TTL by choosing a particular attribute on a table that will be treated as a timestamp. Then you simply store an expiration time in this attribute on every item that you need to expire. A periodic process in DynamoDB checks whether an item’s TTL timestamp attribute is now in the past, and then schedules the removal of that item from the table. The timestamps must be stored as epoch seconds (the number of seconds since 12:00:00 AM January 1, 1970 UTC), which you can calculate yourself or have the SDK calculate for you.

The AWS SDK for .NET has three different DynamoDB APIs, so you have three different ways to use TTL. In the following sections, we discuss these APIs and how you use the TTL feature from each of them.

Low-Level Model – Control Plane

First, the low-level model. This is a thin wrapper around the DynamoDB service operations that you use by instantiating AmazonDynamoDBClient and calling its various operations. This model provides you with the most control, but it also doesn’t have the helpful abstractions of the higher-level APIs. Using the low-level model, you can enable and disable the TTL feature and configure Time-to-Live for your data.

Here’s an example of checking the status of TTL for a table.


using (var client = new AmazonDynamoDBClient())
{
    // Retrieve TTL status
    var ttl = client.DescribeTimeToLive(new DescribeTimeToLiveRequest
    {
        TableName = "SessionData"
    }).TimeToLiveDescription;
    Console.WriteLine($"TTL status = {ttl.TimeToLiveStatus}");
    Console.WriteLine($"TTL attribute {(ttl.AttributeName == null ? "has not been set" : $"= {ttl.AttributeName}")}");

    // Enable TTL
    client.UpdateTimeToLive(new UpdateTimeToLiveRequest
    {
        TableName = "SessionData",
        TimeToLiveSpecification = new TimeToLiveSpecification
        {
            Enabled = true,
            AttributeName = "ExpirationTime"
        }
    });

    // Disable TTL
    client.UpdateTimeToLive(new UpdateTimeToLiveRequest
    {
        TableName = "SessionData",
        TimeToLiveSpecification = new TimeToLiveSpecification
        {
            Enabled = false,
            AttributeName = "ExpirationTime"
        }
    });
}

Note: There is a limit to how often you can enable or disable TTL in a given period of time. Running this sample multiple times will likely result in a ValidationException being thrown.

Low-Level Model – Data Plane

Actually writing and reading TTL data in an item is fairly straightforward, but you are required to write epoch seconds into an AttributeValue. You can calculate the epoch seconds manually or use helper methods in AWSSDKUtils, as shown below.

Here’s an example of using the low-level API to work with TTL data.


using (var client = new AmazonDynamoDBClient())
{
    // Writing TTL attribute
    DateTime expirationTime = DateTime.Now.AddDays(7);
    Console.WriteLine($"Storing expiration time = {expirationTime}");
    int epochSeconds = AWSSDKUtils.ConvertToUnixEpochSeconds(expirationTime);
    client.PutItem("SessionData", new Dictionary<string, AttributeValue>
    {
        { "UserName", new AttributeValue { S = "user1" } },
        { "ExpirationTime", new AttributeValue { N = epochSeconds.ToString() } }
    });

    // Reading TTL attribute
    var item = client.GetItem("SessionData", new Dictionary<string, AttributeValue>
    {
        { "UserName", new AttributeValue { S = "user1" } },
    }).Item;
    string epochSecondsString = item["ExpirationTime"].N;
    epochSeconds = int.Parse(epochSecondsString);
    expirationTime = AWSSDKUtils.ConvertFromUnixEpochSeconds(epochSeconds);
    Console.WriteLine($"Stored expiration time = {expirationTime}");
}

Document Model

The Document Model provides you with Table objects that represent a DynamoDB table, and Document objects that represent a single row of data in a table. You can store primitive .NET types directly in a Document, with the required conversion to DynamoDB types happening in the background. This makes the Document Model API easier to use than the low-level model.

Using the Document Model API, you can easily configure which attributes you’d like to store as epoch seconds by setting the TableConfig.AttributesToStoreAsEpoch collection. Then you can use DateTime objects without needing to convert the data to epoch seconds manually. If you don’t specify which attributes to store as epoch seconds, then instead of writing epoch seconds in that attribute you would end up storing the DateTime as an ISO-8601 string, such as “2017-03-09T05:49:38.631Z”. In that case, DynamoDB Time-to-Live would NOT automatically delete the item. So you need to be sure to specify AttributesToStoreAsEpoch correctly when you’re creating the Table object.

Here’s an example of configuring the Table object, then writing and reading TTL items.


// Set up the Table object
var tableConfig = new TableConfig("SessionData")
{
    AttributesToStoreAsEpoch = new List<string> { "ExpirationTime" }
};
var table = Table.LoadTable(client, tableConfig);

// Write TTL data
var doc = new Document();
doc["UserName"] = "user2";

DateTime expirationTime = DateTime.Now.AddDays(7);
Console.WriteLine($"Storing expiration time = {expirationTime}");
doc["ExpirationTime"] = expirationTime;

table.PutItem(doc);

// Read TTL data
doc = table.GetItem("user2");
expirationTime = doc["ExpirationTime"].AsDateTime();
Console.WriteLine($"Stored expiration time = {expirationTime}");

Object Persistence Model

The Object Persistence Model simplifies interaction with DynamoDB even more, by enabling you to use .NET classes with DynamoDB. This interaction is done by passing objects to the DynamoDBContext, which handles all the conversion logic. Using TTL with the Object Persistence Model is just as straightforward as using it with the Document model: you simply identify the attributes to store as epoch seconds and the SDK performs the required conversions for you.

Consider the following class definition.


[DynamoDBTable("SessionData")]
public class User
{
    [DynamoDBHashKey]
    public string UserName { get; set; }

    [DynamoDBProperty(StoreAsEpoch = true)]
    public DateTime ExpirationTime { get; set; }
}

Once we’ve added the [DynamoDBProperty(StoreAsEpoch = true)] attribute, we can use DateTime objects with the class just as we normally would. However, this time epoch seconds are stored in DynamoDB, and the items we create are eligible for TTL automatic deletion. And just as with the Document Model, if you omit the StoreAsEpoch attribute, the objects you write will contain ISO-8601 dates and won’t be eligible for TTL deletion.

Here’s an example of creating the DynamoDBContext object, writing a User object, and reading it out again.


using (var context = new DynamoDBContext(client))
{
    // Writing TTL data
    DateTime expirationTime = DateTime.Now.AddDays(7);
    Console.WriteLine($"Storing expiration time = {expirationTime}");

    var user = new User
    {
        UserName = "user3",
        ExpirationTime = expirationTime
    };
    context.Save(user);

    // Reading TTL data
    user = context.Load<User>("user3");
    expirationTime = user.ExpirationTime;
    Console.WriteLine($"Stored expiration time = {expirationTime}");
}

Conclusion

In this blog post, we showed how you can toggle the new Time-to-Live feature for a table, and we demonstrated multiple ways to work with TTL data. The approach you choose is up to you and, hopefully, with these examples you’ll find it quite easy to schedule your data for automatic deletion. Happy coding!

Client Constructors Now Deprecated in the AWS SDK for Java

by Kyle Thomson | in Java

A couple of weeks ago you might have noticed that the 1.11.84 version of the AWS SDK for Java included several deprecations – the most notable being the deprecation of the client constructors.

Historically, you’ve been able to create a service client as shown here.

AmazonSNS sns = new AmazonSNSClient();

This mechanism is now deprecated in favor of using one of the builders to create the client as shown here.

AmazonSNS sns = AmazonSNSClient.builder().build();

The client builders (described in detail in this post) are superior to the basic constructors in the following ways.

Immutable

Clients created via the builder are immutable. The region/endpoint (and other data) can’t be changed. Therefore, clients are safe to reuse across multiple threads.

Explicit Region

At build time, the AWS SDK for Java can validate that a client has all the required information to function correctly – namely, a region. A client created via the builders must have a region that is defined either explicitly (i.e. by calling withRegion) or as part of the DefaultAwsRegionProviderChain. If the builder can’t determine the region for a client, an SdkClientException is thrown. Region is an important concept when communicating with services in AWS. It not only determines where your request will go, but also how it is signed. Requiring a region means the SDK can behave predictably without depending on hidden defaults.

Cleaner

Using the builder allows a client to be constructed in a single statement using method chaining.

AmazonSNS sns = AmazonSNSClient.builder()
						.withRegion("us-west-1")
						.withClientConfiguration(cfg)
						.withCredentials(creds)
						.build();

The deprecated constructors are no longer created for new service clients. They will be removed from existing clients in a future major version bump (although they’ll remain in all future releases of the 1.x family of the AWS SDK for Java).

AWS Toolkit for Eclipse: Support for Creating Maven Projects for AWS, Lambda, and Serverless Applications

I’m glad to announce that you can now use the AWS Toolkit for Eclipse to create Maven projects for AWS, Lambda, and serverless applications. If you’re new to using the AWS Toolkit for Eclipse to create a Lambda application, see the Lambda plugin for more information. If you’re not familiar with serverless applications, see the Serverless Application Model for more information. If you’ve been using the AWS Toolkit for Eclipse, you’ll notice the extra Maven configuration panel in the user interface where you create a new AWS, Lambda, or serverless application (see the following screenshots).

The AWS Toolkit for Eclipse no longer automatically downloads the archived AWS SDK for Java ZIP file and puts it on the class path for your AWS application. Instead, it manages the dependencies with Maven, checking the remote Maven repository for the latest AWS SDK for Java version and downloading it automatically if it isn’t already in your local Maven repository. This means that when a new version of the AWS SDK for Java is released, it can take a while to download before you can create the new application.

Create a Maven Project for an AWS application

In the Eclipse toolbar, choose the AWS icon drop-down button, and then choose New AWS Project. You’ll see the following page, where you can configure the AWS SDK for Java samples you want to include in your application.

[Screenshot: Sample]

Here is the structure of the newly created AWS application Java project. You can edit the pom.xml file later to build, test, and deploy your application with Maven as your needs require.

[Screenshot: SampleStructure]

Create a Maven Project for a Lambda Application

As with a new AWS application project, you can create a new AWS Lambda project. In the Eclipse toolbar, choose the AWS icon drop-down button, and then choose New AWS Lambda Java Project.

[Screenshot: Lambda]

Here is the structure of the newly created AWS Lambda Java project.

[Screenshot: LambdaStructure]

Create a Maven Project for a Serverless Application

To create a new serverless application, choose the AWS icon drop-down button, and then choose New AWS Serverless Project. The following screenshot shows the project creation status while Maven downloads the application dependencies.

[Screenshot: CreatingServerless]

Here is the structure of the newly created serverless application Java project.

[Screenshot: ArticleStructure]

Build a Serverless Application Locally with Maven

You can also use Maven from the command line to build and test the project you just created, as shown in the following screenshot.

[Screenshot: MavenCommandLine]
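
For example (a sketch, assuming the standard layout and goals of the generated pom.xml), you can build the project and run its tests from the project root:

mvn clean package

The package phase compiles the code, runs the unit tests, and produces the deployable artifact under the target/ directory.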

Please let us know what you think of the new Maven support in the AWS Toolkit for Eclipse. We appreciate your comments.

Preview of the AWS Toolkit for Visual Studio 2017

by Norm Johanson

Today we released a preview of our AWS Toolkit for Visual Studio that includes support for the release candidate (RC) version of Visual Studio 2017. Because this Visual Studio release contains some significant changes for extension developers, we’re making this preview available in advance of the formal release. We highly encourage you to pass along feedback about any issues you find or whether you were successful using the preview by adding to the GitHub issue we describe below.

AWS Lambda .NET Core Support

Visual Studio 2017 also contains support for the new MSBuild project system for .NET Core projects. With this preview of the toolkit, we’ve updated the .NET Core Lambda support to use the new build system. For existing Lambda projects based on the Visual Studio 2015 project.json-based build system, Visual Studio 2017 offers to migrate them to the new build system when you open the projects.

Downloading and Installing the Preview

The AWS Toolkit for Visual Studio contains MSBuild target files for AWS CloudFormation projects. In previous releases of Visual Studio, you had to install these files and dependencies for an extension outside of the Visual Studio folder hierarchy. This required us to use a Windows Installer to install the toolkit. Starting with Visual Studio 2017, these MSBuild extensions exist within the Visual Studio folder hierarchy and can be installed from the VSIX extension package without an installer. As the installer technology we use doesn’t yet support Visual Studio 2017, we’ve decided to distribute the preview as a VSIX file only.

To install the preview

  1. Download the VSIX file from the preview link.
  2. Double-click the VSIX file to launch the VSIX Installer process.

After the installer finishes, the toolkit functions as it has in previous versions of Visual Studio.

Feedback

To track issues with installing and using the toolkit in Visual Studio 2017, we opened the following GitHub issue in our AWS SDK for .NET repository. For any issues or comments about our Visual Studio 2017 support, please add to the issue. Also, by letting us know if everything is working as you expected, you help us evaluate the readiness of our support for Visual Studio 2017.

Assume AWS IAM Roles with MFA Using the AWS SDK for Go

AWS SDK for Go v1.7.0 added support for assuming AWS Identity and Access Management (IAM) roles with Multi-Factor Authentication (MFA). This feature allows your applications to support users assuming IAM roles with MFA token codes, with minimal setup and configuration.

IAM roles enable you to manage granular permissions for a specific role or task, instead of applying those permissions directly to users and groups. Roles create a layer of separation, decoupling resource permissions from users, groups, and other accounts. With IAM roles, you can give third-party AWS accounts access to your resources without having to create additional users for them in your AWS account.

Assuming IAM roles with MFA is a pattern for roles that are assumed by applications driven directly by users, rather than by automated systems such as services. You can require that users assuming your role specify an MFA token code each time the role is assumed. The AWS SDK for Go now makes this easier to support in your Go applications.

Setting Up an IAM Role and User for MFA

To take advantage of this feature, enable MFA for your users and IAM roles. IAM supports two categories of MFA: security token and SMS text message. The SDK’s MFA support uses the security token method. Within the security token category there are two types of devices, hardware MFA devices and virtual MFA devices; the AWS SDK for Go supports both equally.

For a user to assume an IAM role with MFA, an MFA device must be linked with the user. You can do this in the IAM console on the Security credentials tab of a user’s details, using the Assigned MFA device field. Only one MFA device can be assigned per user.

You can also configure IAM roles to require that users who assume those roles do so with an MFA token. This restriction is set in the Trust Relationship section of a role’s details. Use the aws:MultiFactorAuthPresent condition key to require that any user who assumes the role provides an MFA token.

The following is an example of a Trust Relationship that enables this restriction.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<account>:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "Bool": {
          "aws:MultiFactorAuthPresent": "true"
        }
      }
    }
  ]
}
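
To see what this condition enforces, here is a minimal sketch of the underlying AWS STS AssumeRole call; the role ARN, session name, device serial number, and token code are placeholder values:

// When a role's trust policy requires MFA, the STS AssumeRole call must
// include the MFA device's serial number (or ARN for a virtual device)
// and a current token code. Without them, the aws:MultiFactorAuthPresent
// condition isn't met and the call is denied.
svc := sts.New(session.Must(session.NewSession()))
resp, err := svc.AssumeRole(&sts.AssumeRoleInput{
    RoleArn:         aws.String("arn:aws:iam::123456789012:role/example-role"),
    RoleSessionName: aws.String("example-session"),
    SerialNumber:    aws.String("arn:aws:iam::123456789012:mfa/example-user"),
    TokenCode:       aws.String("123456"),
})
if err != nil {
    log.Fatal(err)
}
fmt.Println("temporary access key:", *resp.Credentials.AccessKeyId)

The SDK’s credential providers, described below, make this call for you; you rarely need to call AssumeRole directly.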

Assuming a Role with SDK Session

A common practice when using the AWS SDK for Go is to specify credentials and configuration in files such as the shared configuration file (~/.aws/config) and the shared credentials file (~/.aws/credentials). The SDK’s session package makes using these configurations easy, and automatically configures service clients based on them. You can enable the SDK’s support for assuming a role and for the shared configuration file by setting the environment variable AWS_SDK_LOAD_CONFIG=1, or by setting the session option SharedConfigState to SharedConfigEnable.
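
For example, here is a minimal sketch that enables shared configuration support in code instead of through the environment variable:

// Equivalent to setting AWS_SDK_LOAD_CONFIG=1 in the environment: the
// session loads both ~/.aws/config and ~/.aws/credentials.
sess := session.Must(session.NewSessionWithOptions(session.Options{
    SharedConfigState: session.SharedConfigEnable,
}))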

To configure your configuration profile to assume an IAM role with MFA, you need to specify the MFA device’s serial number for a Hardware MFA device, or ARN for a Virtual MFA device (mfa_serial). This is in addition to specifying the role’s ARN (role_arn) in your SDK shared configuration file.

The SDK supports assuming an IAM role with and without MFA; to assume a role without MFA, don’t provide the mfa_serial field.

The following example profile instructs the SDK to assume a role and requires the user to provide an MFA token to assume the role. The SDK uses the source_profile field to look up another profile in the configuration file that provides the credentials and region with which to make the AWS Security Token Service (STS) AssumeRole API call.

[profile assume_role_profile]
role_arn = arn:aws:iam::<account_number>:role/<role_name>
source_profile = other_profile
mfa_serial = <hardware device serial number or virtual device arn>

See the SDK’s session package documentation for more details about configuring the shared configuration files.

After you’ve updated your shared configuration file, you can update your application’s Sessions to specify how the MFA token code is retrieved from your application’s users. If a shared configuration profile specifies a role to assume and provides the mfa_serial field, the SDK requires that the AssumeRoleTokenProvider session option also be set. There’s no harm in always setting AssumeRoleTokenProvider for applications that will always be run by a person: the option is used only if the shared configuration profile both specifies a role to assume and sets the mfa_serial field. Otherwise, it’s ignored.

The SDK doesn’t automatically set the AssumeRoleTokenProvider with a default value. This is because of the risk of halting an application unexpectedly while the token provider waits for a nonexistent user to provide a value due to a configuration change. You must set this value to use MFA roles with the SDK.

The SDK implements a simple token provider in the stscreds package, StdinTokenProvider. This function prompts on stdin for an MFA token, and blocks until one is provided. You can also implement a custom token provider by satisfying the func() (string, error) signature (see the sketch after the following example). The returned string is the MFA token, and the error is any error that occurred while retrieving the token.

// Enable SDK's Shared Config support.
sess := session.Must(session.NewSessionWithOptions(session.Options{
    AssumeRoleTokenProvider: stscreds.StdinTokenProvider,
    SharedConfigState: session.SharedConfigEnable,
}))

// Use the session to create service clients and make API operation calls.
svc := s3.New(sess)
svc.PutObject(...)
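
As a sketch of a custom token provider, the hypothetical function below reads the token code from an MFA_TOKEN environment variable instead of prompting on stdin (the variable name is an assumption for illustration); any function matching func() (string, error) works:

// tokenFromEnv is a hypothetical custom MFA token provider. It reads the
// current token code from the MFA_TOKEN environment variable instead of
// prompting on stdin. Requires the "os" and "errors" imports.
func tokenFromEnv() (string, error) {
    token := os.Getenv("MFA_TOKEN")
    if token == "" {
        return "", errors.New("MFA_TOKEN is not set")
    }
    return token, nil
}

// Use it in place of stscreds.StdinTokenProvider.
sess := session.Must(session.NewSessionWithOptions(session.Options{
    AssumeRoleTokenProvider: tokenFromEnv,
    SharedConfigState:       session.SharedConfigEnable,
}))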

Configuring the Assume Role Credentials Provider Directly

In addition to being able to create a Session configured to assume an IAM role, you can also create a credential provider to assume a role directly. This is helpful when the role’s configuration isn’t stored in the shared configuration files.

Creating the credential provider is similar to configuring a Session. However, you don’t need to enable the session’s shared configuration option. In addition, you can use this to configure individual service clients to use the assumed role directly, instead of configuring it on the shared Session. This is helpful when you want to share base configuration across multiple service clients via the Session, and use roles only for select tasks.

// Initial credentials loaded from SDK's default credential chain, such as
// the environment, shared credentials (~/.aws/credentials), or EC2 Instance
// Role. These credentials are used to make the AWS STS Assume Role API.
sess := session.Must(session.NewSession())

// Create the credentials from AssumeRoleProvider to assume the role
// referenced by the "myRoleArn" ARN. Prompt for MFA token from stdin.
creds := stscreds.NewCredentials(sess, "myRoleArn", func(p *stscreds.AssumeRoleProvider) {
    p.SerialNumber = aws.String("myTokenSerialNumber")
    p.TokenProvider = stscreds.StdinTokenProvider
})

// Create an Amazon SQS service client with the Session's default configuration.
sqsSvc := sqs.New(sess)

// Create service client configured for credentials from the assumed role.
s3Svc := s3.New(sess, &aws.Config{Credentials: creds})
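
Note that the role credentials are retrieved lazily: the STS AssumeRole call, and with it the MFA prompt, happens the first time a request needs to be signed. As a sketch, you can force the exchange up front to surface configuration errors early:

// Force the AssumeRole call (and the MFA prompt) now, rather than on the
// first S3 request.
if _, err := creds.Get(); err != nil {
    log.Fatalf("failed to assume role: %v", err)
}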

Feedback

We’re always looking for more feedback. We added this feature as a direct result of feedback and requests we received. If you have any ideas that you think would be good improvements or additions to the AWS SDK for Go, please let us know.