Introducing Amazon DynamoDB Expression Builder in the AWS SDK for Go

This post was authored by Hajime Hayano.

The v1.11.0 release of the AWS SDK for Go adds a new expression package that enables you to create Amazon DynamoDB Expressions using statically typed builders. The expression package abstracts away the low-level detail of using DynamoDB Expressions and simplifies the process of using DynamoDB Expressions in DynamoDB Operations. In this blog post, we explain how to use the expression package.

In earlier versions of the AWS SDK for Go, you had to explicitly declare the member fields of the DynamoDB Operation input structs, such as QueryInput and UpdateItemInput. That meant the syntax and rules of DynamoDB Expressions were up to you to figure out. The goal of the expression package is to create the formatted DynamoDB Expression strings under the hood, simplifying the process of using DynamoDB Expressions. The following example shows the verbosity of writing DynamoDB Expressions by hand.

input := &dynamodb.ScanInput{
    ExpressionAttributeNames: map[string]*string{
        "#AT": aws.String("AlbumTitle"),
        "#ST": aws.String("SongTitle"),
    },
    ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{
        ":a": {
            S: aws.String("No One You Know"),
        },
    },
    FilterExpression:     aws.String("Artist = :a"),
    ProjectionExpression: aws.String("#ST, #AT"),
    TableName:            aws.String("Music"),
}

Representing DynamoDB Expressions

DynamoDB Expressions are represented by static builder types in the expression package. These builders, like ConditionBuilder and UpdateBuilder, are created in the package using a builder pattern. The static typing of the builders allows compile-time checks on the syntax of the DynamoDB Expressions that are being created. The following example shows how to create a builder that represents a FilterExpression and a ProjectionExpression.

filt := expression.Name("Artist").Equal(expression.Value("No One You Know"))
// let :a be an ExpressionAttributeValue representing the string "No One You Know"
// equivalent FilterExpression: "Artist = :a"

proj := expression.NamesList(
    expression.Name("SongTitle"),
    expression.Name("AlbumTitle"),
)
// equivalent ProjectionExpression: "SongTitle, AlbumTitle"

In this example, the variable filt represents a FilterExpression. Notice that DynamoDB item attributes are represented using the function Name() and DynamoDB item values are similarly represented using the function Value(). In this context, the string "Artist" represents the name of the item attribute that we want to evaluate, and the string "No One You Know" represents the value we want to evaluate the item attribute against. You specify the relationship between the two operands by using the method Equal().

Similarly, the variable proj represents a ProjectionExpression. The list of item attribute names comprising the ProjectionExpression is specified as arguments to the function NamesList(). The expression package uses the type safety of Go: if an item value is used as an argument to the function NamesList(), the code fails to compile. The pattern of representing DynamoDB Expressions by indicating relationships between operands with functions is consistent throughout the whole expression package.

Creating an Expression

The Expression type is the core of the expression package. An Expression represents a collection of DynamoDB Expressions with getter methods, such as Condition() and Projection(), used to retrieve specific formatted DynamoDB Expression strings. The following example shows how to create an Expression.

filt := expression.Name("Artist").Equal(expression.Value("No One You Know"))
proj := expression.NamesList(
    expression.Name("SongTitle"),
    expression.Name("AlbumTitle"),
)

expr, err := expression.NewBuilder().
    WithFilter(filt).
    WithProjection(proj).
    Build()
if err != nil {
  fmt.Println(err)
}

In this example, the variable expr is an instance of an Expression type. An Expression is built using a builder pattern. First, a new Builder is initialized by the NewBuilder() function. Then, types representing DynamoDB Expressions are added to the Builder by the WithFilter() and WithProjection() methods. The Build() method returns an instance of an Expression and an error. The error is either an InvalidParameterError or an UnsetParameterError.

There is no limit to the number of different kinds of DynamoDB Expressions that you can add to the Builder, but adding the same type of DynamoDB Expression will overwrite the previous DynamoDB Expression. The following example shows a specific instance of this problem.

cond1 := expression.Name("foo").Equal(expression.Value(5))
cond2 := expression.Name("bar").Equal(expression.Value(6))
expr, err := expression.NewBuilder().
    WithCondition(cond1).
    WithCondition(cond2).
    Build()
if err != nil {
  fmt.Println(err)
}

This example shows that the second call of WithCondition() overwrites the first call.

Filling in the fields of a DynamoDB Scan API

The following example shows how to use an Expression to fill in the member fields of a DynamoDB Operation API.

filt := expression.Name("Artist").Equal(expression.Value("No One You Know"))
proj := expression.NamesList(
    expression.Name("SongTitle"),
    expression.Name("AlbumTitle"),
)
expr, err := expression.NewBuilder().
    WithFilter(filt).
    WithProjection(proj).
    Build()
if err != nil {
  fmt.Println(err)
}

input := &dynamodb.ScanInput{
  ExpressionAttributeNames:  expr.Names(),
  ExpressionAttributeValues: expr.Values(),
  FilterExpression:          expr.Filter(),
  ProjectionExpression:      expr.Projection(),
  TableName:                 aws.String("Music"),
}

In this example, the getter methods of the Expression type are used to get the formatted DynamoDB Expression strings. When using Expression, you must always assign the ExpressionAttributeNames and ExpressionAttributeValues member fields of the DynamoDB operation input, because all item attribute names and values are aliased. If those members aren't assigned from the corresponding Names() and Values() methods, the aliases in the expression strings won't resolve and the DynamoDB operation will fail with a logic error.

If you need a starting point, check out the working example in the AWS SDK for Go.

Overall, the expression package makes using the DynamoDB Expressions clean and simple. The complicated syntax and rules of DynamoDB Expressions are abstracted away so you no longer have to worry about them!

AWS SDK for Go – Batch Operations with Amazon S3

The v1.9.44 release of the AWS SDK for Go adds support for batched operations in the s3manager package. This enables you to easily upload, download, and delete Amazon S3 objects. The feature uses the iterator (also known as scanner) pattern so that users can extend the functionality of batching. This blog post shows how to use and extend the new batched operations to fit a given use case.

Deleting objects using ListObjectsIterator

  sess := session.Must(session.NewSession(&aws.Config{}))
  svc := s3.New(sess)

  input := &s3.ListObjectsInput{
    Bucket:  aws.String("bucket"),
    MaxKeys: aws.Int64(100),
  }
  // Create a delete list objects iterator
  iter := s3manager.NewDeleteListIterator(svc, input)
  // Create the BatchDelete client
  batcher := s3manager.NewBatchDeleteWithClient(svc)

  if err := batcher.Delete(aws.BackgroundContext(), iter); err != nil {
    panic(err)
  }

This example lists the objects in the named bucket, one hundred keys per page, and deletes them. The delete list iterator wraps the ListObjects operation and drives the BatchDelete client: the client's Delete method accepts any BatchDeleteIterator and removes each object the iterator yields.

Creating a custom iterator

The SDK enables you to pass custom iterators to the new batched operations. For example, if we want to upload a directory, none of the default iterators do this easily. The following example shows how to implement a custom iterator that uploads a directory to S3.

// DirectoryIterator iterates through files and directories to be uploaded
// to S3.
type DirectoryIterator struct {
  filePaths []string
  bucket    string
  next      struct {
    path string
    f    *os.File
  }
  err error
}

// NewDirectoryIterator creates and returns a new BatchUploadIterator
func NewDirectoryIterator(bucket, dir string) s3manager.BatchUploadIterator {
  paths := []string{}
  filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
    if err != nil {
      return err
    }
    // We care only about files, not directories
    if !info.IsDir() {
      paths = append(paths, path)
    }
    return nil
  })

  return &DirectoryIterator{
    filePaths: paths,
    bucket:    bucket,
  }
}

// Next opens the next file and stops iteration if it fails to open
// a file.
func (iter *DirectoryIterator) Next() bool {
  if len(iter.filePaths) == 0 {
    iter.next.f = nil
    return false
  }

  f, err := os.Open(iter.filePaths[0])
  iter.err = err

  iter.next.f = f
  iter.next.path = iter.filePaths[0]

  iter.filePaths = iter.filePaths[1:]
  return iter.Err() == nil
}

// Err returns an error that was set while opening a file
func (iter *DirectoryIterator) Err() error {
  return iter.err
}

// UploadObject returns a BatchUploadObject and sets the After field to
// close the file.
func (iter *DirectoryIterator) UploadObject() s3manager.BatchUploadObject {
  f := iter.next.f
  return s3manager.BatchUploadObject{
    Object: &s3manager.UploadInput{
      Bucket: &iter.bucket,
      Key:    &iter.next.path,
      Body:   f,
    },
    // After was introduced in version 1.10.7
    After: func() error {
      return f.Close()
    },
  }
}

We have defined a new iterator named DirectoryIterator. It satisfies the BatchUploadIterator interface by defining the three required methods: Next, Err, and UploadObject. The Next method tells the batch operation whether to continue iterating. Err returns any error that was set; in this case, the only error we record is a failure to open a file, and when that occurs Next returns false. Finally, UploadObject returns the BatchUploadObject used to upload contents to the service. Note that we create both an input object and a closure; the closure ensures that we're not leaking file handles. Now let's define our main function using what we defined above.

func main() {
  region := os.Args[1]
  bucket := os.Args[2]
  path := os.Args[3]
  iter := NewDirectoryIterator(bucket, path)
  uploader := s3manager.NewUploader(session.Must(session.NewSession(&aws.Config{
    Region: &region,
  })))

  if err := uploader.UploadWithIterator(aws.BackgroundContext(), iter); err != nil {
    panic(err)
  }
  fmt.Printf("Successfully uploaded %q to %q\n", path, bucket)
}

You can verify that the directory has been uploaded by looking in S3.

Please chat with us on Gitter and file feature requests or issues on GitHub. We look forward to your feedback and recommendations!

Using Go 1.8’s Plugin for Credentials with the AWS SDK for Go

The v1.10.0 release of the AWS SDK for Go adds a new way to configure the SDK to retrieve AWS credentials. With this release, you can configure the SDK to retrieve AWS credentials from a Go plugin that is dynamically loaded during your application’s runtime. In this post, we explain how you can build a plugin and configure the SDK to use it. The SDK also includes a runnable example for you to try out the new plugin credential provider feature.

The SDK takes advantage of the Go 1.8 plugin package and its associated build mode, which is available on Linux. The plugin package and build mode enable you to write components that can be loaded dynamically while your application runs. Plugins help you add functionality to your application while it's running, instead of only when the application is compiled.

The SDK’s plugincreds package enables you to use the plugins to retrieve AWS credentials. This package includes utilities to create a credentials Provider and Credentials loader.

Building a credential provider plugin

To use a plugin with the SDK, the SDK requires the plugin to export a function that returns two function pointers. The SDK uses these two returned function pointers to retrieve credentials and to determine if the credentials are expired. By default, the SDK expects the plugin to export the symbol named GetAWSSDKCredentialProvider for the getter function that returns the retrieve and isExpired function pointers.

The SDK requires the plugin’s getter function signature to match the following signature. If the getter function doesn’t match the signature, the SDK returns an error with the code ErrCodeInvalidSymbolError.

func() (RetrieveFn func() (key, secret, token string, err error), IsExpiredFn func() bool)

The SDK includes the NewCredentials helper function that looks up and validates the symbol, creating the SDK’s Credentials value automatically. You can use the returned Credentials value to configure a session or service client.

To use a custom symbol name, use the GetPluginProviderFnsByName function to look up the getter function from the plugin by name. This verifies that the symbol matches the expected signature. It also gets the credential provider’s retrieve and isExpired function pointers by calling the getter function. The retrieve and isExpired function pointers are returned. The SDK requires both function pointers to be valid and not nil.

Here is an example of a plugin that provides credential retrieve and expired functions to the application that loaded the plugin.

package main

func main() {}

var myCredProvider provider

// Build: go build -o plugin.so -buildmode=plugin plugin.go
func init() {
	// Initialize a mock credential provider with mock values. In real-world
	// usage the provider's Retrieve method could reach out to the source of
	// credentials and return the credentials from there, instead of this mock
	// credential provider that statically sets the credential values.
	myCredProvider = provider{"key", "secret", "token"}
}

// GetAWSSDKCredentialProvider is the symbol the SDK will look up and use to
// get the credential provider's retrieve and isExpired functions.
func GetAWSSDKCredentialProvider() (func() (key, secret, token string, err error), func() bool) {
	return myCredProvider.Retrieve, myCredProvider.IsExpired
}

// Mock implementation of a type that retrieves credentials and
// reports whether they are expired.
type provider struct {
	key, secret, token string
}

// Retrieve returns the credentials that were previously set into the provider value.
func (p provider) Retrieve() (key, secret, token string, err error) {
	return p.key, p.secret, p.token, nil
}

// IsExpired reports that the mock credentials never expire.
func (p *provider) IsExpired() bool {
	return false
}

Once you’ve written the code for your plugin, you can build it as a plugin file that can be loaded dynamically into your application with the -buildmode=plugin build flag.

go build -o myCredPlugin.so -buildmode=plugin plugin.go

You can find an example you can start from in the SDK’s plugincreds example.

Using a credential provider plugin

Once you've built your plugin, you can configure the SDK to retrieve credentials using it. The SDK makes this easy with the plugincreds package's NewCredentials function. This function takes a *plugin.Plugin value and looks up the expected credential provider getter function. See the plugincreds package for the errors that can be returned.

The following example shows you how an application can open a Go plugin dynamically at runtime, and configure the SDK to use the plugin to retrieve AWS credentials.

// In your application code, open the plugin using its file name. This loads
// the plugin into memory, executing the plugin's main package init function.
p, err := plugin.Open(pluginFilename)
if err != nil {
	return nil, errors.Wrapf(err, "failed to open plugin, %s", pluginFilename)
}

// NewCredentials looks up the symbol from the plugin and configures the Credentials
// value that can be used to configure a session or service client.
//
// The Credentials value, and its credentials, can be shared safely across
// many sessions and service clients concurrently.
creds, err := plugincreds.NewCredentials(p)
if err != nil {
	return nil, errors.Wrapf(err, "failed to load plugin credentials provider, %s", pluginFilename)
}

// Configure a session to use the credentials sourced from the plugin that is loaded.
sess := session.Must(session.NewSession(&aws.Config{
	Credentials: creds,
}))

// Return the configured session so it can be used to create service clients.
return sess, nil

You can find a usable example of this in the SDK’s plugincreds example.

Putting it all together

With this configuration, you can deploy your plugin and application independently to the platforms that your application will run on. Loading plugins dynamically allows you to separate your application from where your AWS credentials are retrieved. This practice allows your application to be more flexible when working with multiple environments. This technique is particularly useful for CLI applications where users of the CLI need to provide custom ways of retrieving credentials.

Let us know how you use the credentials plugin in your applications.

Context Pattern added to the AWS SDK for Go

The AWS SDK for Go v1.8.0 release adds support for API operation request functional options and the Context pattern. Both of these features were in high demand from our users. Request options allow you to easily configure and augment how the SDK makes API operation requests to AWS services. The SDK's support for the Context pattern allows your application to take advantage of cancellation, timeouts, and Context values on requests. The new request options and Context pattern give your application even more control over the SDK's request execution and handling.

Request Options

Request options are functional arguments that you pass to the SDK's API operation methods. Functional options are a pattern for configuring an operation via functions or closures passed in line with the method call.

For example, you can configure the Amazon S3 API operation PutObject to log debug information about the request directly, without impacting the other API operations used by your application.

// Log this API operation only. 
resp, err := svc.PutObjectWithContext(ctx, params, request.WithLogLevel(aws.LogDebug))

This pattern is also helpful when you want to inject request handlers into a request, in line with the API operation method call.

resp, err := svc.PutObjectWithContext(ctx, params, func(r *request.Request) {
	start := time.Now()
	r.Handlers.Complete.PushBack(func(req *request.Request) {
		fmt.Printf("request %s took %s to complete\n", req.RequestID, time.Since(start))
	})
})

All of the SDK’s new service client methods that have a WithContext suffix support these request options. You can also apply request options to the SDK’s standard Request directly with the ApplyOptions method.

API Operations with Context

All of the new methods of the SDK's API operations that have a WithContext suffix take a Context value, which must be non-nil. Context allows your application to control API operation request cancellation, which means you can now easily institute request timeouts based on the Context pattern. Go introduced the Context pattern in the experimental package golang.org/x/net/context, and it was later added to the Go standard library in Go 1.7. For backward compatibility with previous Go versions, the SDK defines the Context interface type in the github.com/aws/aws-sdk-go/aws package. The SDK's Context type is compatible with the Context from both golang.org/x/net/context and the Go 1.7 standard library context package.

Here is an example of how to use a Context to cancel uploading an object to Amazon S3. If the put doesn’t complete within the timeout passed in, the API operation is canceled. When a Context is canceled, the SDK returns the CanceledErrorCode error code. A working version of this example can be found in the SDK.

sess := session.Must(session.NewSession())
svc := s3.New(sess)

// Create a context with a timeout that will abort the upload if it takes 
// more than the passed in timeout.
ctx := context.Background()
var cancelFn func()
if timeout > 0 {
	ctx, cancelFn = context.WithTimeout(ctx, timeout)
}
// Ensure the context is canceled to prevent leaking.
// See context package for more information, https://golang.org/pkg/context/
if cancelFn != nil {
	defer cancelFn()
}

// Uploads the object to S3. The Context will interrupt the request if the 
// timeout expires.
_, err := svc.PutObjectWithContext(ctx, &s3.PutObjectInput{
	Bucket: aws.String(bucket),
	Key:    aws.String(key),
	Body:   body,
})
if err != nil {
	if aerr, ok := err.(awserr.Error); ok && aerr.Code() == request.CanceledErrorCode {
		// If the SDK can determine the request or retry delay was canceled
		// by a context the CanceledErrorCode error code will be returned.
		fmt.Println("request's context canceled,", err)
	}
	return err
}

API Operation Waiters

Waiters were expanded to include support for request Context and waiter options. The new WaiterOption type defines functional options that are used to configure the waiter’s functionality.

For example, the WithWaiterDelay allows you to provide your own function that returns how long the waiter will wait before checking the waiter’s resource state again. This is helpful when you want to configure an exponential backoff, or longer retry delays with ConstantWaiterDelay.

The example below highlights this by configuring the WaitUntilBucketExists method to use a 30-second delay between checks to determine if the bucket exists.

svc := s3.New(sess)
ctx := context.Background()

_, err := svc.CreateBucketWithContext(ctx, &s3.CreateBucketInput{
	Bucket: aws.String("myBucket"),
})
if err != nil {
	return fmt.Errorf("failed to create bucket, %v", err)
}

err = svc.WaitUntilBucketExistsWithContext(ctx,
	&s3.HeadBucketInput{
		Bucket: aws.String("myBucket"),
	},
	request.WithWaiterDelay(request.ConstantWaiterDelay(30 * time.Second)),
)
if err != nil {
	return fmt.Errorf("failed to wait for bucket exists, %v", err)
}

fmt.Println("bucket created")

API Operation Paginators

Paginators were also expanded to support Context and request options. Request options configured for pagination are applied to each new Request that the SDK creates to retrieve the next page. By extending the Pages API methods to include Context and request options, the SDK gives you control over how each page request is made, and over cancellation of the pagination.

svc := s3.New(sess)
ctx := context.Background()

err := svc.ListObjectsPagesWithContext(ctx,
	&s3.ListObjectsInput{
		Bucket: aws.String("myBucket"),
		Prefix: aws.String("some/key/prefix"),
		MaxKeys: aws.Int64(100),
	},
	func(page *s3.ListObjectsOutput, lastPage bool) bool {
		fmt.Println("Received", len(page.Contents), "objects in page")
		for _, obj := range page.Contents {
			fmt.Println("Key:", aws.StringValue(obj.Key))
		}
		return true
	},
)
if err != nil {
	return fmt.Errorf("failed to list objects, %v", err)
}

API Operation Pagination without Callbacks

In addition to the Pages API operations, you can use the new Pagination type in the github.com/aws/aws-sdk-go/aws/request package. This type enables you to control the iterations of pages directly. This is helpful when you do not want to use callbacks for paginating AWS operations. This new type allows you to treat pagination similar to the Go stdlib bufio package’s Scanner type to iterate through pages with a for loop. You can also use this pattern with the Context pattern by calling Request.SetContext on each request in the NewRequest function.

svc := s3.New(sess)

params := s3.ListObjectsInput{
	Bucket: aws.String("myBucket"),
	Prefix: aws.String("some/key/prefix"),
	MaxKeys: aws.Int64(100),
}
ctx := context.Background()

p := request.Pagination{
	NewRequest: func() (*request.Request, error) {
		req, _ := svc.ListObjectsRequest(&params)
		req.SetContext(ctx)
		return req, nil
	},
}

for p.Next() {
	page := p.Page().(*s3.ListObjectsOutput)

	fmt.Println("Received", len(page.Contents), "objects in page")
	for _, obj := range page.Contents {
		fmt.Println("Key:", aws.StringValue(obj.Key))
	}
}

return p.Err()

Wrap Up

The addition of Context and request options expands the capabilities of the AWS SDK for Go, giving your applications the tools needed to implement request lifecycle and configuration with the SDK. Let us know your experiences using the new Context pattern and request options features.

Assume AWS IAM Roles with MFA Using the AWS SDK for Go

AWS SDK for Go v1.7.0 added support for assuming AWS Identity and Access Management (IAM) roles with Multi-Factor Authentication (MFA). This feature allows your applications to support users assuming IAM roles with MFA token codes, with minimal setup and configuration.

IAM roles enable you to manage granular permissions for a specific role or task, instead of applying those permissions directly to users and groups. Roles create a layer of separation, decoupling resource permissions from users, groups, and other accounts. With IAM roles, you can give third-party AWS accounts access to your resources without having to create additional users for them in your AWS account.

Assuming IAM roles with MFA is a pattern used for roles that will be assumed by applications used directly by users instead of automated systems such as services. You can require that users assuming your role specify an MFA token code each time the role is assumed. The AWS SDK for Go now makes this easier to support in your Go applications.

Setting Up an IAM Role and User for MFA

To take advantage of this feature, enable MFA for your users and IAM roles. IAM supports two categories of MFA: Security Token and SMS Text Message. The SDK's support for MFA uses the Security Token method. In the security token category, there are two types of security token devices, Hardware MFA devices and Virtual MFA devices. The AWS SDK for Go supports both device types equally.

In order for a user to assume an IAM role with MFA, there must be an MFA device linked with the user. You can do this via the IAM console on the Security credentials tab of a user's details, using the Assigned MFA device field to assign an MFA device to the user. Only one MFA device can be assigned per user.

You can also configure IAM roles to require users who assume those roles to do so using an MFA token. This restriction is enabled in the Trust Relationship section of a role's details. Use the aws:MultiFactorAuthPresent condition key to require that any user who assumes the role must do so with an MFA token.

The following is an example of a Trust Relationship that enables this restriction.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<account>:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "Bool": {
          "aws:MultiFactorAuthPresent": "true"
        }
      }
    }
  ]
}

Assuming a Role with SDK Session

A common practice when using the AWS SDK for Go is to specify credentials and configuration in files such as the shared configuration file (~/.aws/config) and the shared credentials file (~/.aws/credentials). The SDK's session package makes using these configurations easy, and automatically configures service clients based on them. You can enable the SDK's support for assuming a role and the shared configuration file by setting the environment variable AWS_SDK_LOAD_CONFIG=1, or the session option SharedConfigState to SharedConfigEnable.

To configure your configuration profile to assume an IAM role with MFA, you need to specify the MFA device’s serial number for a Hardware MFA device, or ARN for a Virtual MFA device (mfa_serial). This is in addition to specifying the role’s ARN (role_arn) in your SDK shared configuration file.

The following example profile instructs the SDK to assume a role and requires the user to provide an MFA token to assume the role. The SDK uses the source_profile field to look up another profile in the configuration file that can specify the credentials, and region with which to make the AWS Security Token Service (STS) Assume Role API operation call.

The SDK supports assuming an IAM role with and without MFA. To assume a role without MFA, don’t provide the mfa_serial field.

[profile assume_role_profile]
role_arn = arn:aws:iam::<account_number>:role/<role_name>
source_profile = other_profile
mfa_serial = <hardware device serial number or virtual device arn>

See the SDK’s session package documentation for more details about configuring the shared configuration files.

After you've updated your shared configuration file, you can update your application code's Sessions to specify how the MFA token code is retrieved from your application's users. If a shared configuration profile specifies a role to assume and provides the mfa_serial field, the SDK requires that the AssumeRoleTokenProvider session option is also set. There's no harm in always setting the AssumeRoleTokenProvider session option for applications that will always be run by a person. The option is only used if the shared configuration's profile both specifies a role to assume and sets the mfa_serial field; otherwise, it's ignored.

The SDK doesn’t automatically set the AssumeRoleTokenProvider with a default value. This is because of the risk of halting an application unexpectedly while the token provider waits for a nonexistent user to provide a value due to a configuration change. You must set this value to use MFA roles with the SDK.

The SDK implements a simple token provider in the stscreds package, StdinTokenProvider. This function prompts on stdin for an MFA token, and waits forever until one is provided. You can also easily implement a custom token provider by satisfying the func() (string, error) signature. The returned string is the MFA token, and the error is any error that occurred while retrieving the token.

// Enable SDK's Shared Config support.
sess := session.Must(session.NewSessionWithOptions(session.Options{
    AssumeRoleTokenProvider: stscreds.StdinTokenProvider,
    SharedConfigState: session.SharedConfigEnable,
}))

// Use the session to create service clients and make API operation calls.
svc := s3.New(sess)
svc.PutObject(...)

Configuring the Assume Role Credentials Provider Directly

In addition to being able to create a Session configured to assume an IAM role, you can also create a credential provider to assume a role directly. This is helpful when the role’s configuration isn’t stored in the shared configuration files.

Creating the credential provider is similar to configuring a Session. However, you don't need to enable the session's shared configuration option. In addition, you can use this to configure service clients to use the assumed role directly instead of via the shared Session. This is helpful when you want to share base configuration across multiple service clients via the Session, and use roles for select tasks.

// Initial credentials loaded from SDK's default credential chain, such as
// the environment, shared credentials (~/.aws/credentials), or EC2 Instance
// Role. These credentials are used to make the AWS STS Assume Role API.
sess := session.Must(session.NewSession())

// Create the credentials from AssumeRoleProvider to assume the role
// referenced by the "myRoleArn" ARN. Prompt for MFA token from stdin.
creds := stscreds.NewCredentials(sess, "myRoleArn", func(p *stscreds.AssumeRoleProvider) {
    p.SerialNumber = aws.String("myTokenSerialNumber")
    p.TokenProvider = stscreds.StdinTokenProvider
})

// Create an Amazon SQS service client with the Session's default configuration.
sqsSvc := sqs.New(sess)

// Create service client configured for credentials from the assumed role.
s3Svc := s3.New(sess, &aws.Config{Credentials: creds})

Feedback

We’re always looking for more feedback. We added this feature as a direct result of feedback and requests we received. If you have any ideas that you think would be good improvements or additions to the AWS SDK for Go, please let us know.

Using the AWS SDK for Go Encryption Client

Overview

The AWS SDK for Go released the encryption client last year, and some of our customers have asked us how to use it. We're very excited to show you some examples in this blog post. Before we get into the examples, let's look at what client-side encryption is and why you might want to use it.

Client-side encryption is the act of encrypting or decrypting data on the client side, rather than relying on a service to do the encryption for you. This has many benefits, including letting you choose what to use to encrypt your data. It also adds security, because only those who have the master key can decrypt the data.

The crypto client has three major components: the key wrap handler, the cipher builder, and the client. We use the key wrap handler to generate and encrypt the IV and key. Then we use those keys with the cipher builder to build a new cipher. Lastly, we use all these parts to create a client. To learn more about this process, see envelope encryption.

Prerequisite

To run these examples, we need

  • An AWS KMS encryption key
  • An Amazon S3 bucket named bar

Encryption and Decryption

In our implementation, we wanted to provide interoperability across all SDKs and to give customers an easy way to extend the s3crypto package. Let’s first get into an example of putting a simple “hello world” object into S3.


arn := "arn to our key"
sess := session.New(&aws.Config{
	Region: aws.String("us-east-1"),
})
// This is our key wrap handler, used to generate cipher keys and IVs for
// our cipher builder. Using an IV allows more "spontaneous" encryption.
// The IV makes it more difficult for hackers to use dictionary attacks.
// The key wrap handler behaves as the master key. Without it, you can't
// encrypt or decrypt the data.
keywrap := s3crypto.NewKMSKeyGenerator(kms.New(sess), arn)
// This is our content cipher builder, used to instantiate new ciphers
// that enable us to encrypt or decrypt the payload.
builder := s3crypto.AESGCMContentCipherBuilder(keywrap)
// Let's create our crypto client!
client := s3crypto.NewEncryptionClient(sess, builder)

key := "foo"
bucket := "bar"
input := &s3.PutObjectInput{
	Bucket: &bucket,
	Key:    &key,
	Body:   bytes.NewReader([]byte("Hello world!")),
}

_, err := client.PutObject(input)
// What to expect as errors? You can expect any sort of S3 errors, http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html.
// The s3crypto client can also return some errors:
//  * MissingCMKIDError - when using AWS KMS, the user must specify their key's ARN
if err != nil {
	return err
}

Now that wasn’t too hard! It looks almost identical to the S3 PutObject! Let’s move on to an example of decryption.


sess := session.New(&aws.Config{
	Region: aws.String("us-east-1"),
})
client := s3crypto.NewDecryptionClient(sess)

key := "foo"
bucket := "bar"
input := &s3.GetObjectInput{
	Bucket: &bucket,
	Key:    &key,
}

result, err := client.GetObject(input)
// Aside from the S3 errors, here is a list of decryption client errors:
//   * InvalidWrapAlgorithmError - returned on an unsupported Wrap algorithm
//   * InvalidCEKAlgorithmError - returned on an unsupported CEK algorithm
//   * V1NotSupportedError - the SDK doesn't support v1 because security is an issue for AES ECB
// These errors don't necessarily mean there's something wrong. They just tell us we couldn't decrypt some data.
// Users can choose to log this and then continue decrypting the data that they can, or simply return the error.
if err != nil {
	return err
}

// Let's read the whole body from the response.
b, err := ioutil.ReadAll(result.Body)
if err != nil {
	return err
}
fmt.Println(string(b))

As the code shows, there’s no difference between using the Amazon S3 client’s GetObject versus the s3crypto.DecryptionClient.GetObject.

Lost or Deleted Master Key

If you lose or delete your master key, there’s no way to decrypt your data. The beauty of client-side encryption is that the master key is never stored with your data. This allows you to specify who can view your data.

Supported Algorithms

The AWS SDK for Go currently supports AWS KMS for key wrapping and AES GCM as a content cipher. However, some users might not want to use AES GCM or KMS for their ciphers. The SDK allows any user to specify any cipher as long as it satisfies our interfaces. With that said, the goal of this crypto client is to allow interoperability between the crypto clients of other SDKs and enable easy extensibility. Please let us know in the comments how you’re using or extending the crypto client.

You can find the documentation for the Amazon S3 encryption client in the SDK's s3crypto package documentation.

Check out other SDKs that support the Amazon S3 Encryption Client
AWS SDK for C++
AWS SDK for Java
AWS SDK for Ruby

AWS SDK for Go Adds Error Code Constants

The AWS SDK for Go v1.6.19 release adds generated constants for all modeled service response error codes. These constants improve discoverability of the error codes that a service can return, and reduce the chance of typos that can cause errors to be handled incorrectly.

You can find the new error code constants within each of the SDK's service client packages, prefixed with "ErrCode". For example, the "NoSuchBucket" error code returned by Amazon S3 API requests can be found in the s3 package as "ErrCodeNoSuchBucket".

Here is an example of how to use the error code constants with the Amazon S3 GetObject API response.

result, err := svc.GetObject(&s3.GetObjectInput{
    Bucket: aws.String("myBucket"),
    Key:    aws.String("myKey"),
})
if err != nil {
    if aerr, ok := err.(awserr.Error); ok {
        // Special handling for bucket and key errors
        switch aerr.Code() {
        case s3.ErrCodeNoSuchBucket:
            // Handle the bucket not existing
            fmt.Println("bucket does not exist.")
        case s3.ErrCodeNoSuchKey:
            // Handle the key not existing
            fmt.Println("key does not exist.")
        }
    }
    return err
}

We’re working to include error codes for all services. Let us know if you find additional error codes to include in the AWS SDK for Go.

AWS SDK for Go Update Needed for Go 1.8

The AWS SDK for Go has been updated for Go 1.8. This update fixes an issue in which some API operations failed with a connection reset by peer error or a service error. This failure prevented API operation requests from being made. If you're using Go 1.8 with a version of the SDK earlier than v1.6.3, you need to update the SDK to at least v1.6.3 to take advantage of this fix.

GitHub issue #984 revealed that the bug was caused by the way the SDK constructed its HTTP request body. The SDK relied on undocumented functionality in Go 1.7 and earlier versions, in which the Go http.Request automatically determined whether to send the request's body based on whether the body was empty.

Go addressed the issue for most use cases in 1.8rc2, but some APIs such as the Amazon Simple Storage Service (Amazon S3) CopyObject API were still affected.

The SDK’s fix for this issue takes advantage of Go 1.8’s new type, http.NoBody. The SDK uses this value to ensure the HTTP request doesn’t contain a body when none is expected. Another option for a fix was to set Request.Body to nil, but this would break backward compatibility because the Request.Body value is accessible.

See #991, #984, and golang/go#18257 for more information.

Thank you to all who discovered, reported, and helped us resolve this issue.

Using the AWS SDK for Go’s Regions and Endpoints Metadata

In release v1.6.0 of the AWS SDK for Go, we added Regions and Endpoints metadata to the SDK. This feature enables you to easily enumerate the metadata and discover Regions, Services, and Endpoints. You can find this feature in the github.com/aws/aws-sdk-go/aws/endpoints package.

The endpoints package provides a simple interface to get a service’s endpoint URL and enumerate the Region metadata. The metadata is grouped into partitions. Each partition is a group of AWS Regions such as AWS Standard, AWS China, and AWS GovCloud (US).

Resolving Endpoints

The SDK automatically uses the endpoints.DefaultResolver function when setting the SDK’s default configuration. You can resolve endpoints yourself by calling the EndpointFor methods in the endpoints package.

// Resolve endpoint for S3 in us-west-2
resolver := endpoints.DefaultResolver()
endpoint, err := resolver.EndpointFor(endpoints.S3ServiceID, endpoints.UsWest2RegionID)
if err != nil {
        fmt.Println("failed to resolve endpoint", err)
        return
}
 
fmt.Println("Resolved URL:", endpoint.URL)

If you need to add custom endpoint resolution logic to your code, you can implement the endpoints.Resolver interface, and set the value to aws.Config.EndpointResolver. This is helpful when you want to provide custom endpoint logic that the SDK will use for resolving service endpoints.

The following example creates a Session that is configured so that Amazon S3 service clients are constructed with a custom endpoint.

defaultResolver := endpoints.DefaultResolver()
s3CustResolverFn := func(service, region string, optFns ...func(*endpoints.Options)) (endpoints.ResolvedEndpoint, error) {
        if service == "s3" {
               return endpoints.ResolvedEndpoint{
                       URL:           "s3.custom.endpoint.com",
                       SigningRegion: "custom-signing-region",
               }, nil
        }
 
        return defaultResolver.EndpointFor(service, region, optFns...)
}
sess := session.Must(session.NewSessionWithOptions(session.Options{
        Config: aws.Config{
               Region:           aws.String("us-west-2"),
               EndpointResolver: endpoints.ResolverFunc(s3CustResolverFn),
        },
}))

Partitions

The return value of the endpoints.DefaultResolver function can be cast to the endpoints.EnumPartitions interface. This will give you access to the slice of partitions that the SDK will use, and can help you enumerate over partition information for each partition.

// Iterate through all partitions printing each partition's ID.
resolver := endpoints.DefaultResolver()
partitions := resolver.(endpoints.EnumPartitions).Partitions()
 
for _, p := range partitions {
        fmt.Println("Partition:", p.ID())
}

In addition to the list of partitions, the endpoints package also includes a getter function for each partition group. These utility functions enable you to enumerate a specific partition without having to cast and enumerate over all the default resolver’s partitions.

partition := endpoints.AwsPartition()
region := partition.Regions()[endpoints.UsWest2RegionID]
 
fmt.Println("Services in region:", region.ID())
for id := range region.Services() {
        fmt.Println(id)
}

Once you have a Region or Service value, you can call ResolveEndpoint on it. This provides a filtered view of the Partition when resolving endpoints.

Check out the AWS SDK for Go repo for more examples. Let us know in the comments what you think of the endpoints package.

Using AWS SDK for Go API Setters

In release v1.5.0 of the AWS SDK for Go, we added setters to all API operation parameters. Setters give you the ability to set API parameters without directly taking the value’s address. The setters wrap this functionality internally so you don’t have to. The setters are a convenient way to reduce the need to use aws.String and similar utilities.

The following code shows how you could use the Amazon S3 PutObject with the setters.

resp, err := svc.PutObject((&s3.PutObjectInput{}).
	SetBucket("myBucket").
	SetKey("myKey").
	SetBody(strings.NewReader("abc")).
	SetWebsiteRedirectLocation("https://example.com/something"),
)

The following example uses Amazon ECS and nested setters to update a service’s deployment.

resp, err := svc.UpdateService((&ecs.UpdateServiceInput{}).
	SetService("myService").
	SetDeploymentConfiguration((&ecs.DeploymentConfiguration{}).
		SetMinimumHealthyPercent(80),
	),
)

If you have additional suggestions or feedback on how to improve the SDK, send us your comments. We look forward to hearing from you.