Category: Go


AWS SDK for Go 2.0 – Generated Marshalers

The AWS SDK for Go 2.0 has released generated marshalers for the restjson and restxml protocols. Generated marshalers address the performance problems, and the related customer issues, that the SDK had been receiving.

To better understand what was causing the performance hit, we used Go’s benchmark tooling to determine the main bottleneck: reflection. The reflection package was consuming a large amount of both memory and CPU time.

Roughly 50% of the time is spent in the JSON marshaler, which relies heavily on the reflection package. To improve both memory and CPU performance, we implemented generated marshalers. The idea was to bypass the packages that were hurting performance, reflection in particular, and set values directly in the query string, headers, or body of requests.
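
To illustrate the idea (this is only a sketch, not the SDK’s actual generated code; the input shape and request path here are hypothetical), a generated marshaler can write each member straight into the request using the standard net/http and net/url packages:

// Hypothetical input shape, for illustration only.
type ListJobsByPipelineInput struct {
	PipelineId *string
	Ascending  *string
}

// A generated marshaler knows every member and its location in the request
// at compile time, so it sets the URI path and query string directly
// instead of walking the struct with the reflect package.
func marshalListJobsByPipeline(v *ListJobsByPipelineInput, r *http.Request) {
	if v.PipelineId != nil {
		r.URL.Path = "/2012-09-25/pipelines/" + url.PathEscape(*v.PipelineId) + "/jobs"
	}
	q := r.URL.Query()
	if v.Ascending != nil {
		q.Set("Ascending", *v.Ascending)
	}
	r.URL.RawQuery = q.Encode()
}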

The following benchmark data was gathered from restjson and restxml benchmark tests on GitHub.

REST JSON Benchmarks

benchmark                                                   old ns/op     new ns/op     delta
BenchmarkRESTJSONBuild_Complex_ETCCreateJob-4               91645         36968         -59.66%
BenchmarkRESTJSONBuild_Simple_ETCListJobsByPipeline-4       8323          5722          -31.25%
BenchmarkRESTJSONRequest_Complex_CFCreateJob-4              274958        221579        -19.41%
BenchmarkRESTJSONRequest_Simple_ETCListJobsByPipeline-4     147774        140943        -4.62%

benchmark                                                   old allocs     new allocs     delta
BenchmarkRESTJSONBuild_Complex_ETCCreateJob-4               334            366            +9.58%
BenchmarkRESTJSONBuild_Simple_ETCListJobsByPipeline-4       73             59             -19.18%
BenchmarkRESTJSONRequest_Complex_CFCreateJob-4              679            711            +4.71%
BenchmarkRESTJSONRequest_Simple_ETCListJobsByPipeline-4     251            237            -5.58%

The restjson protocol shows great performance gains. For the Complex_ETCCreateJob benchmark, speed improved by 59.66%. However, the overall gains in memory allocation were far smaller than expected, and some benchmarks allocated even more memory.

REST XML Benchmarks

benchmark                                                   old ns/op     new ns/op     delta
BenchmarkRESTXMLBuild_Complex_CFCreateDistro-4              212835        63765         -70.04%
BenchmarkRESTXMLBuild_Simple_CFDeleteDistro-4               8942          6893          -22.91%
BenchmarkRESTXMLBuild_REST_S3HeadObject-4                   17222         7194          -58.23%
BenchmarkRESTXMLBuild_XML_S3PutObjectAcl-4                  36723         14958         -59.27%
BenchmarkRESTXMLRequest_Complex_CFCreateDistro-4            416695        231318        -44.49%
BenchmarkRESTXMLRequest_Simple_CFDeleteDistro-4             143133        137391        -4.01%
BenchmarkRESTXMLRequest_REST_S3HeadObject-4                 182617        187526        +2.69%
BenchmarkRESTXMLRequest_XML_S3PutObjectAcl-4                212515        174650        -17.82%

benchmark                                                   old allocs     new allocs     delta
BenchmarkRESTXMLBuild_Complex_CFCreateDistro-4              1341           439            -67.26%
BenchmarkRESTXMLBuild_Simple_CFDeleteDistro-4               70             50             -28.57%
BenchmarkRESTXMLBuild_REST_S3HeadObject-4                   143            65             -54.55%
BenchmarkRESTXMLBuild_XML_S3PutObjectAcl-4                  260            122            -53.08%
BenchmarkRESTXMLRequest_Complex_CFCreateDistro-4            1627           723            -55.56%
BenchmarkRESTXMLRequest_Simple_CFDeleteDistro-4             237            217            -8.44%
BenchmarkRESTXMLRequest_REST_S3HeadObject-4                 452            374            -17.26%
BenchmarkRESTXMLRequest_XML_S3PutObjectAcl-4                476            338            -28.99%

The restxml protocol greatly improved in both memory and speed. The RESTXMLBuild_Complex_CFCreateDistro benchmark saw about a 70% improvement in both.

Overall, both protocols show large speed improvements for complex shapes, but only small improvements for simple shapes. There are some outliers in the data with minor performance or memory regressions; however, there are further optimizations we can make that may eliminate them.

Try out the developer preview of the AWS SDK for Go 2.0 here, and let us know what you think in the comments below!

AWS SDK for Go 2.0 Developer Preview

We’re pleased to announce the Developer Preview release of the AWS SDK for Go 2.0. Many aspects of the SDK have been refactored based on your feedback, with a strong focus on performance, consistency, discoverability, and ease of use. The Developer Preview is here for you to provide feedback, and influence the direction of the AWS SDK for Go 2.0 before its production-ready, general availability launch. Tell us what you like, and what you don’t like. Your feedback matters to us. Find details at the bottom of this post on how to give feedback and contribute.

You can safely use the AWS SDK for Go 2.0 in parallel with the 1.x SDK, with both SDKs coexisting in the same Go application. We won’t drop support for the 1.x SDK any time soon. We know there are a lot of customers depending on the 1.x SDK, and we will continue to support them. As we get closer to general availability for 2.0, we’ll share a more detailed plan about how we’ll support the 1.x SDK.

Getting started

Let’s walk through setting up an application with the 2.0 SDK. We’ll build a simple Go application that uses the 2.0 SDK to make a service API request. For this walkthrough, you’ll need Go 1.9 or later.

  1. Create a new Go file for the example application. We’ll name ours main.go in a new directory, awssdkgov2-example, in our Go Workspace.

    mkdir -p $(go env GOPATH)/src/awssdkgov2-example
    cd $(go env GOPATH)/src/awssdkgov2-example
    

    All of the following steps assume you will execute them from the awssdkgov2-example directory.

  2. To get the AWS SDK for Go 2.0, use a dependency management tool such as Dep, or fetch the SDK manually with go get github.com/aws/aws-sdk-go-v2.

  3. In your favorite editor, create a main.go file and add the following code. This code loads the SDK’s default configuration, and creates a new Amazon DynamoDB client. See the external package for more details on how to customize the way the SDK’s default configuration is loaded.

    package main
    
    import (
        "fmt"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/aws/endpoints"
        "github.com/aws/aws-sdk-go-v2/aws/external"
        "github.com/aws/aws-sdk-go-v2/service/dynamodb"
    )
    
    func main() {
        // Using the SDK's default configuration, loading additional config
        // and credentials values from the environment variables, shared
        // credentials, and shared configuration files
        cfg, err := external.LoadDefaultAWSConfig()
        if err != nil {
            panic("unable to load SDK config, " + err.Error())
        }
    
        // Set the AWS Region that the service clients should use
        cfg.Region = endpoints.UsWest2RegionID
    
        // Using the Config value, create the DynamoDB client
        svc := dynamodb.New(cfg)
    }
    
  4. Build the service API request, send it, and process the response. Add the following code to the end of the main function from the previous step; it uses the svc client and the fmt import declared there.

    // Build the request with its input parameters
    req := svc.DescribeTableRequest(&dynamodb.DescribeTableInput{
        TableName: aws.String("myTable"),
    })
    
    // Send the request, and get the response or error back
    resp, err := req.Send()
    if err != nil {
        panic("failed to describe table, "+err.Error())
    }
    
    fmt.Println("Response", resp)
    

What has changed?

Our focus for the 2.0 SDK is to improve the SDK’s development experience and performance, make the SDK easy to extend, and add new features. The changes made in the Developer Preview target the major pain points of configuring the SDK and using AWS service API calls. Check out our 2.0 repo for details on pending changes that are in development and designs we’re discussing.

The following are some of the larger changes included in the AWS SDK for Go 2.0 Developer Preview.

SDK configuration

The 2.0 SDK simplifies how you configure the SDK’s service clients by using a single Config type. This simplifies the Session and Config type interaction from the 1.x SDK. In addition, we’ve moved the service-specific configuration flags to the individual service client types. This reduces confusion about where service clients will use configuration members.

We added the external package to provide utilities for you to use external configuration sources to populate the SDK’s Config type. External sources include environment variables, the shared credentials file (~/.aws/credentials), and the shared config file (~/.aws/config). By default, the 2.0 SDK will now automatically load configuration values from the shared config file. The external package also provides you with the tools to customize how the external sources are loaded and used to populate the Config type.

You can even customize external configuration sources to include your own custom sources, for example, JSON files or a custom file format.

Use LoadDefaultAWSConfig in the external package to create the default Config value, and load configuration values from the external configuration sources.

cfg, err := external.LoadDefaultAWSConfig()

To specify in code the shared configuration profile to load, pass the WithSharedConfigProfile helper into LoadDefaultAWSConfig with the profile name to use.

cfg, err := external.LoadDefaultAWSConfig(
	external.WithSharedConfigProfile("gopher"),
)

Once a Config value is returned by LoadDefaultAWSConfig, you can set or override configuration values by setting the fields on the Config struct, such as Region.

cfg.Region = endpoints.UsWest2RegionID

Use the cfg value to provide the loaded configuration to new AWS service clients.

svc := dynamodb.New(cfg)

API request methods

The 2.0 SDK consolidates several ways to make an API call, providing a single request constructor for each API call. This means that your application will create an API request from input parameters, then send it. This enables you to optionally modify and configure how the request will be sent. This includes, but isn’t limited to, modifications such as setting the Context per request, adding request handlers, and enabling logging.

Each API request method is suffixed with Request and returns a typed value for the specific API request.

As an example, to use the Amazon Simple Storage Service GetObject API, the signature of the method is:

func (c *S3) GetObjectRequest(*s3.GetObjectInput) *s3.GetObjectRequest

To use the GetObject API, we pass in the input parameters to the method, just like we would with the 1.x SDK. The 2.0 SDK’s method will initialize a GetObjectRequest value that we can then use to send our request to Amazon S3.

req := svc.GetObjectRequest(params)

// Optionally, set the context or other configuration for the request to use
req.SetContext(ctx)

// Send the request and get the response
resp, err := req.Send()

API enumerations

The 2.0 SDK uses typed enumeration values for all API enumeration fields. This change provides you with the type safety that you and your IDE can leverage to discover which enumeration values are valid for particular fields. Typed enumeration values also provide a stronger type safety story for your application than using string literals directly. The 2.0 SDK uses string aliases for each enumeration type. The SDK also generates typed constants for each enumeration value. This change removes the need for enumeration API fields to be pointers, as a zero-value enumeration always means the field isn’t set.

For example, the Amazon Simple Storage Service PutObject API has a field, ACL ObjectCannedACL. An ObjectCannedACL string alias is defined within the s3 package, and each enumeration value is also defined as a typed constant. In this example, we want to use the typed enumeration values to set an uploaded object to have an ACL of public-read. The constant that the SDK provides for this enumeration value is ObjectCannedACLPublicRead.

svc.PutObjectRequest(&s3.PutObjectInput{
	Bucket: aws.String("myBucket"),
	Key:    aws.String("myKey"),
	ACL:    s3.ObjectCannedACLPublicRead,
	Body:   body,
})

API slice and map elements

The 2.0 SDK removes the need to convert slice and map elements from values to pointers for API calls. This eliminates the overhead of converting between types such as []string and []*string when making API calls. The 1.x SDK’s pattern of using pointer types for all slice and map elements was a significant pain point for users, requiring them to convert between the types. The 2.0 SDK does away with the pointer types for slices and maps, using value types instead.

The following example shows how value types for the Amazon Simple Queue Service AddPermission API’s AWSAccountIds and Actions member slices are set.

svc := sqs.New(cfg)

svc.AddPermissionRequest(&sqs.AddPermissionInput{
	AWSAccountIds: []string{
		"123456789",
	},
	Actions: []string{
		"SendMessage",
	},

	Label:    aws.String("MessageSender"),
	QueueUrl: aws.String(queueURL),
})

SDK’s use of pointers

We listened to your feedback on the SDK’s significant use of pointers across its surface. The Config type was refactored to remove the use of pointers for both the Config value and its member fields. Enumeration pointers were removed and replaced with typed constants. Slice and map elements were also updated to be value types instead of pointers.

The API parameter fields are still pointer types in the 2.0 SDK. We looked at patterns to remove these pointers, changing them to values, but chose not to change the field types. All of the patterns we investigated either didn’t solve the problem fully or increased the risk of unintended API call outcomes, such as unknowingly making an API call with required parameters left at their type’s zero value. The 2.0 SDK does include setters and utility functions for all API parameter types, as a workaround to using pointers directly.
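
To sketch the risk with value types (a hypothetical illustration, not actual SDK behavior): with pointer fields, an unset required parameter stays nil, so the SDK’s input validation can reject the request before it’s ever sent.

params := &s3.GetObjectInput{
	Bucket: aws.String("myBucket"),
	// Key is left unset. As a pointer field it is nil, so validation can
	// fail with a missing-parameter error. If Key were a plain string, the
	// SDK would have no way to tell "unset" apart from an intentional "".
}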

Please share your thoughts in GitHub issues or our Gitter channel. We want to hear your feedback.

Giving feedback and contributing

The 2.0 SDK will use GitHub issues to track feature requests and issues with the 2.0 repo. In addition, we’ll use GitHub projects to track large tasks spanning multiple pull requests, such as refactoring the SDK’s internal request lifecycle. You can provide feedback to us in several ways.

GitHub issues. Provide feedback using GitHub issues on the 2.0 repo. This is the preferred mechanism for giving feedback so that other users can engage in the conversation, +1 issues, and so on. Issues you open will be evaluated, and included in our roadmap for the GA launch.

Gitter channel. For more informal discussions or general feedback, check out our Gitter channel for the 2.0 repo. The Gitter channel is also a great place to ask general questions, and find help to get started with the 2.0 SDK Developer Preview.

Contributing. You can open pull requests for fixes or additions to the AWS SDK for Go 2.0 Developer Preview release. All pull requests must be submitted under the Apache 2.0 license and will be reviewed by an SDK team member before being merged in. Accompanying unit tests, where possible, are appreciated.

Introducing Amazon DynamoDB Expression Builder in the AWS SDK for Go

This post was authored by Hajime Hayano.

The v1.11.0 release of the AWS SDK for Go adds a new expression package that enables you to create Amazon DynamoDB Expressions using statically typed builders. The expression package abstracts away the low-level detail of using DynamoDB Expressions and simplifies the process of using DynamoDB Expressions in DynamoDB Operations. In this blog post, we explain how to use the expression package.

In earlier versions of the AWS SDK for Go, you had to explicitly declare the member fields of the DynamoDB operation input structs, such as QueryInput and UpdateItemInput. That meant the syntax and rules of DynamoDB Expressions were up to you to figure out. The goal of the expression package is to create the formatted DynamoDB Expression strings under the hood, simplifying the process of using DynamoDB Expressions. The following example shows the verbosity of writing DynamoDB Expressions by hand.

input := &dynamodb.ScanInput{
    ExpressionAttributeNames: map[string]*string{
        "#AT": aws.String("AlbumTitle"),
        "#ST": aws.String("SongTitle"),
    },
    ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{
        ":a": {
            S: aws.String("No One You Know"),
        },
    },
    FilterExpression:     aws.String("Artist = :a"),
    ProjectionExpression: aws.String("#ST, #AT"),
    TableName:            aws.String("Music"),
}

Representing DynamoDB Expressions

DynamoDB Expressions are represented by static builder types in the expression package. These builders, like ConditionBuilder and UpdateBuilder, are created in the package using a builder pattern. The static typing of the builders allows compile-time checks on the syntax of the DynamoDB Expressions that are being created. The following example shows how to create a builder that represents a FilterExpression and a ProjectionExpression.

filt := expression.Name("Artist").Equal(expression.Value("No One You Know"))
// let :a be an ExpressionAttributeValue representing the string "No One You Know"
// equivalent FilterExpression: "Artist = :a"

proj := expression.NamesList(
    expression.Name("SongTitle"),
    expression.Name("AlbumTitle"),
)
// equivalent ProjectionExpression: "SongTitle, AlbumTitle"

In this example, the variable filt represents a FilterExpression. Notice that DynamoDB item attributes are represented using the function Name() and DynamoDB item values are similarly represented using the function Value(). In this context, the string "Artist" represents the name of the item attribute that we want to evaluate, and the string "No One You Know" represents the value we want to evaluate the item attribute against. You specify the relationship between the two operands by using the method Equal().

Similarly, the variable proj represents a ProjectionExpression. The list of item attribute names comprising the ProjectionExpression are specified as arguments to the function NamesList(). The expression package uses the type safety of Go and, if an item value is to be used as an argument to the function NamesList(), a compile time error is returned. The pattern of representing DynamoDB Expressions by indicating relationships between operands with functions is consistent throughout the whole expression package.

Creating an Expression

The Expression type is the core of the expression package. An Expression represents a collection of DynamoDB Expressions with getter methods, such as Condition() and Projection(), used to retrieve specific formatted DynamoDB Expression strings. The following example shows how to create an Expression.

filt := expression.Name("Artist").Equal(expression.Value("No One You Know"))
proj := expression.NamesList(
    expression.Name("SongTitle"),
    expression.Name("AlbumTitle"),
)

expr, err := expression.NewBuilder().
    WithFilter(filt).
    WithProjection(proj).
    Build()
if err != nil {
  fmt.Println(err)
}

In this example, the variable expr is an instance of an Expression type. An Expression is built using a builder pattern. First, a new Builder is initialized by the NewBuilder() function. Then, types representing DynamoDB Expressions are added to the Builder by the WithFilter() and WithProjection() methods. The Build() method returns an instance of an Expression and an error. The error is either an InvalidParameterError or an UnsetParameterError.

There is no limit to the number of different kinds of DynamoDB Expressions that you can add to the Builder, but adding the same type of DynamoDB Expression will overwrite the previous DynamoDB Expression. The following example shows a specific instance of this problem.

cond1 := expression.Name("foo").Equal(expression.Value(5))
cond2 := expression.Name("bar").Equal(expression.Value(6))
expr, err := expression.NewBuilder().
    WithCondition(cond1).
    WithCondition(cond2).
    Build()
if err != nil {
  fmt.Println(err)
}

This example shows that the second call of WithCondition() overwrites the first call.
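
To apply both conditions, combine them into a single ConditionBuilder, for example with the And() method, before adding the result to the Builder:

cond := expression.Name("foo").Equal(expression.Value(5)).
    And(expression.Name("bar").Equal(expression.Value(6)))
expr, err := expression.NewBuilder().
    WithCondition(cond).
    Build()
if err != nil {
  fmt.Println(err)
}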

Filling in the fields of a DynamoDB Scan API

The following example shows how to use an Expression to fill in the member fields of a DynamoDB Operation API.

filt := expression.Name("Artist").Equal(expression.Value("No One You Know"))
proj := expression.NamesList(
    expression.Name("SongTitle"),
    expression.Name("AlbumTitle"),
)
expr, err := expression.NewBuilder().
    WithFilter(filt).
    WithProjection(proj).
    Build()
if err != nil {
  fmt.Println(err)
}

input := &dynamodb.ScanInput{
  ExpressionAttributeNames:  expr.Names(),
  ExpressionAttributeValues: expr.Values(),
  FilterExpression:          expr.Filter(),
  ProjectionExpression:      expr.Projection(),
  TableName:                 aws.String("Music"),
}

In this example, the getter methods of the Expression type are used to get the formatted DynamoDB Expression strings. When using an Expression, you must always assign the ExpressionAttributeNames and ExpressionAttributeValues member fields of the DynamoDB operation input, because all item attribute names and values are aliased. If those members are not assigned from the corresponding Names() and Values() methods, the DynamoDB operation will run into a logic error.
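
The same pattern applies to the other DynamoDB Expressions. For instance, here is a brief sketch (reusing the Music table from above; the attribute names and key are made up for illustration) of an UpdateBuilder built with expression.Set() filling in an UpdateItemInput:

update := expression.Set(
    expression.Name("AlbumTitle"),
    expression.Value("Louder Than Ever"),
)
expr, err := expression.NewBuilder().
    WithUpdate(update).
    Build()
if err != nil {
  fmt.Println(err)
}

input := &dynamodb.UpdateItemInput{
    ExpressionAttributeNames:  expr.Names(),
    ExpressionAttributeValues: expr.Values(),
    UpdateExpression:          expr.Update(),
    Key: map[string]*dynamodb.AttributeValue{
        "SongTitle": {S: aws.String("Call Me Today")},
    },
    TableName: aws.String("Music"),
}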

If you need a starting point, check out the working example in the AWS SDK for Go.

Overall, the expression package makes using the DynamoDB Expressions clean and simple. The complicated syntax and rules of DynamoDB Expressions are abstracted away so you no longer have to worry about them!

AWS SDK for Go – Batch Operations with Amazon S3

The v1.9.44 release of the AWS SDK for Go adds support for batched operations in the s3manager package. This enables you to easily upload, download, and delete Amazon S3 objects. The feature uses the iterator (also known as the scanner) pattern, so users can extend the functionality of batching. This blog post shows how to use and extend the new batched operations to fit a given use case.

Deleting objects using ListObjectsIterator

  sess := session.Must(session.NewSession(&aws.Config{}))
  svc := s3.New(sess)

  input := &s3.ListObjectsInput{
    Bucket:  aws.String("bucket"),
    MaxKeys: aws.Int64(100),
  }
  // Create a delete list objects iterator
  iter := s3manager.NewDeleteListIterator(svc, input)
  // Create the BatchDelete client
  batcher := s3manager.NewBatchDeleteWithClient(svc)

  if err := batcher.Delete(aws.BackgroundContext(), iter); err != nil {
    panic(err)
  }

This example lists all objects, one hundred at a time, under the given bucket. It creates a delete list iterator, which dictates how the BatchDelete client behaves: when we call Delete on the client, it requires a BatchDeleteIterator.
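
If you already know exactly which objects to delete, you can skip listing and hand the batcher a fixed set of keys using the s3manager-provided DeleteObjectsIterator. Here is a brief sketch reusing the batcher from the example above (the bucket and key names are placeholders):

objects := []s3manager.BatchDeleteObject{
  {
    Object: &s3.DeleteObjectInput{
      Bucket: aws.String("bucket"),
      Key:    aws.String("foo"),
    },
  },
  {
    Object: &s3.DeleteObjectInput{
      Bucket: aws.String("bucket"),
      Key:    aws.String("bar"),
    },
  },
}

iter := &s3manager.DeleteObjectsIterator{Objects: objects}
if err := batcher.Delete(aws.BackgroundContext(), iter); err != nil {
  panic(err)
}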

Creating a custom iterator

The SDK enables you to pass custom iterators to the new batched operations. For example, if we want to upload a directory, none of the default iterators do this easily. The following example shows how to implement a custom iterator that uploads a directory to S3.

// DirectoryIterator iterates through files and directories to be uploaded
// to S3.
type DirectoryIterator struct {
  filePaths []string
  bucket    string
  next      struct {
    path string
    f    *os.File
  }
  err error
}

// NewDirectoryIterator creates and returns a new BatchUploadIterator
func NewDirectoryIterator(bucket, dir string) s3manager.BatchUploadIterator {
  paths := []string{}
  filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
    if err != nil {
      return err
    }
    // We care only about files, not directories
    if !info.IsDir() {
      paths = append(paths, path)
    }
    return nil
  })

  return &DirectoryIterator{
    filePaths: paths,
    bucket:    bucket,
  }
}

// Next opens the next file and stops iteration if it fails to open
// a file.
func (iter *DirectoryIterator) Next() bool {
  if len(iter.filePaths) == 0 {
    iter.next.f = nil
    return false
  }

  f, err := os.Open(iter.filePaths[0])
  iter.err = err

  iter.next.f = f
  iter.next.path = iter.filePaths[0]

  iter.filePaths = iter.filePaths[1:]
  return iter.Err() == nil
}

// Err returns an error that was set during opening the file
func (iter *DirectoryIterator) Err() error {
  return iter.err
}

// UploadObject returns a BatchUploadObject and sets the After field to
// close the file.
func (iter *DirectoryIterator) UploadObject() s3manager.BatchUploadObject {
  f := iter.next.f
  return s3manager.BatchUploadObject{
    Object: &s3manager.UploadInput{
      Bucket: &iter.bucket,
      Key:    &iter.next.path,
      Body:   f,
    },
    // After was introduced in version 1.10.7
    After: func() error {
      return f.Close()
    },
  }
}

We have defined a new iterator named DirectoryIterator. It satisfies the BatchUploadIterator interface by defining the three necessary methods: Next, Err, and UploadObject. The Next method tells the batch operation whether to continue iterating. Err returns an error if one occurred; in this case, the only error we ever set is a failure to open a file, and when that happens Next returns false. Finally, UploadObject returns the BatchUploadObject that is used to upload contents to the service. Note that we create both an input object and a closure; the closure ensures that we’re not leaking open files. Now let’s define our main function using what we defined above.

func main() {
  region := os.Args[1]
  bucket := os.Args[2]
  path := os.Args[3]
  iter := NewDirectoryIterator(bucket, path)
  uploader := s3manager.NewUploader(session.New(&aws.Config{
    Region: &region,
  }))

  if err := uploader.UploadWithIterator(aws.BackgroundContext(), iter); err != nil {
    panic(err)
  }
  fmt.Printf("Successfully uploaded %q to %q\n", path, bucket)
}

You can verify that the directory has been uploaded by looking in S3.
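
Batched downloads follow the same iterator pattern. Here is a minimal sketch using the s3manager-provided DownloadObjectsIterator and the Downloader’s DownloadWithIterator method, assuming a sess session value as in the first example (the bucket, key, and file name are placeholders):

f, err := os.Create("foo")
if err != nil {
  panic(err)
}
defer f.Close()

iter := &s3manager.DownloadObjectsIterator{
  Objects: []s3manager.BatchDownloadObject{
    {
      Object: &s3.GetObjectInput{
        Bucket: aws.String("bucket"),
        Key:    aws.String("foo"),
      },
      Writer: f,
    },
  },
}

downloader := s3manager.NewDownloader(sess)
if err := downloader.DownloadWithIterator(aws.BackgroundContext(), iter); err != nil {
  panic(err)
}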

Please chat with us on Gitter and file feature requests or issues on GitHub. We look forward to your feedback and recommendations!

Using Go 1.8’s Plugin for Credentials with the AWS SDK for Go

The v1.10.0 release of the AWS SDK for Go adds a new way to configure the SDK to retrieve AWS credentials. With this release, you can configure the SDK to retrieve AWS credentials from a Go plugin that is dynamically loaded during your application’s runtime. In this post, we explain how you can build a plugin and configure the SDK to use it. The SDK also includes a runnable example for you to try out the new plugin credential provider feature.

The SDK takes advantage of the Go 1.8 plugin package and its associated build mode, available for Linux operating systems. The plugin package and associated build mode enable you to write components that can be loaded dynamically while your application runs. Plugins help you add functionality to your application while it’s running, instead of only when the application is compiled.

The SDK’s plugincreds package enables you to use the plugins to retrieve AWS credentials. This package includes utilities to create a credentials Provider and Credentials loader.

Building a credential provider plugin

To use a plugin with the SDK, the SDK requires the plugin to export a function that returns two function pointers. The SDK uses these two returned function pointers to retrieve credentials and to determine if the credentials are expired. By default, the SDK expects the plugin to export the symbol named GetAWSSDKCredentialProvider for the getter function that returns the retrieve and isExpired function pointers.

The SDK requires the plugin’s getter function signature to match the following signature. If the getter function doesn’t match the signature, the SDK returns an error with the code ErrCodeInvalidSymbolError.

func() (RetrieveFn func() (key, secret, token string, err error), IsExpiredFn func() bool)

The SDK includes the NewCredentials helper function that looks up and validates the symbol, creating the SDK’s Credentials value automatically. You can use the returned Credentials value to configure a session or service client.

To use a custom symbol name, use the GetPluginProviderFnsByName function to look up the getter function from the plugin by name. This verifies that the symbol matches the expected signature, calls the getter function, and returns the credential provider’s retrieve and isExpired function pointers. The SDK requires both function pointers to be valid and not nil.
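
As a sketch of that flow (the symbol name here is made up, and we assume GetPluginProviderFnsByName returns the two function pointers plus an error, as described above), the returned functions can be adapted to the SDK’s credentials.Provider interface, given a *plugin.Plugin value p loaded with plugin.Open:

// wrapperProvider adapts the plugin's two function pointers to the SDK's
// credentials.Provider interface.
type wrapperProvider struct {
	retrieve  func() (key, secret, token string, err error)
	isExpired func() bool
}

func (w wrapperProvider) Retrieve() (credentials.Value, error) {
	key, secret, token, err := w.retrieve()
	return credentials.Value{
		AccessKeyID:     key,
		SecretAccessKey: secret,
		SessionToken:    token,
		ProviderName:    "PluginWrapperProvider",
	}, err
}

func (w wrapperProvider) IsExpired() bool {
	return w.isExpired()
}

// Look up the getter under a custom symbol name, and build a Credentials
// value from the function pointers it returns.
retrieveFn, isExpiredFn, err := plugincreds.GetPluginProviderFnsByName(p, "MyCustomSymbolName")
if err != nil {
	return err
}
creds := credentials.NewCredentials(wrapperProvider{retrieveFn, isExpiredFn})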

Here is an example of a plugin that provides credential retrieve and expired functions to the application that loaded the plugin.

package main

func main() {}

// myCredProvider is the provider instance the exported symbol returns
// function pointers from.
var myCredProvider provider

// Build: go build -o plugin.so -buildmode=plugin plugin.go
func init() {
	// Initialize a mock credential provider with mock values. In real-world
	// usage, the provider's Retrieve method could reach out to the source of
	// credentials and return the credentials from there, instead of this mock
	// credential provider that statically sets the credential values.
	myCredProvider = provider{"key", "secret", "token"}
}

// GetAWSSDKCredentialProvider is the symbol the SDK will look up and use to
// get the credential provider's retrieve and isExpired functions.
func GetAWSSDKCredentialProvider() (func() (key, secret, token string, err error), func() bool) {
	return myCredProvider.Retrieve, myCredProvider.IsExpired
}

// Mock implementation of a type that retrieves credentials and
// reports whether they are expired.
type provider struct {
	key, secret, token string
}

// Retrieve returns the credentials that were previously set into the
// provider value.
func (p provider) Retrieve() (key, secret, token string, err error) {
	return p.key, p.secret, p.token, nil
}

// IsExpired reports whether the credentials are expired. This mock
// provider's credentials never expire.
func (p provider) IsExpired() bool {
	return false
}

Once you’ve written the code for your plugin, you can build it as a plugin file that can be loaded dynamically into your application with the -buildmode=plugin build flag.

go build -o myCredPlugin.so -buildmode=plugin plugin.go

You can find an example you can start from in the SDK’s plugincreds example.

Using a credential provider plugin

Once you’ve built your plugin, you can configure the SDK to retrieve credentials using it. The SDK makes this easy with the plugincreds package’s NewCredentials function. This function takes a Plugin pointer value and looks up the expected credential provider getter function. See the plugincreds package for errors that can be returned.

The following example shows you how an application can open a Go plugin dynamically at runtime, and configure the SDK to use the plugin to retrieve AWS credentials.

// In your application code, open the plugin using its file name. This loads
// the plugin into memory, executing the plugin's main package init function.
p, err := plugin.Open(pluginFilename)
if err != nil {
	return nil, errors.Wrapf(err, "failed to open plugin, %s", pluginFilename)
}

// NewCredentials looks up the symbol from the plugin and configures the Credentials
// value that can be used to configure a session or service client.
//
// You can share the Credentials value and credentials, across many session and service clients 
// concurrently and safely.
creds, err := plugincreds.NewCredentials(p)
if err != nil {
	return nil, errors.Wrapf(err, "failed to load plugin credentials provider, %s", pluginFilename)
}

// Configure a session to use the credentials sourced from the plugin that is loaded.
sess := session.Must(session.NewSession(&aws.Config{
	Credentials: creds,
}))

// Return the configured session so it can be used to create service clients.
return sess, nil

You can find a usable example of this in the SDK’s plugincreds example.

Putting it all together

With this configuration, you can deploy your plugin and application independently to the platforms that your application will run on. Loading plugins dynamically allows you to separate your application from where your AWS credentials are retrieved. This practice allows your application to be more flexible when working with multiple environments. This technique is particularly useful for CLI applications where users of the CLI need to provide custom ways of retrieving credentials.

Let us know how you use the credentials plugin in your applications.

Context Pattern added to the AWS SDK for Go

The AWS SDK for Go v1.8.0 release adds support for API operation request functional options and the Context pattern. Both of these features were in high demand from our users. Request options allow you to easily configure and augment how the SDK makes API operation requests to AWS services. The SDK’s support for the Context pattern allows your application to take advantage of cancellation, timeouts, and Context values on requests. The new request options and Context pattern give your application even more control over the SDK’s request execution and handling.

Request Options

Request options are functional arguments that you pass in to the SDK’s API operation methods. They enable you to configure the request inline with the method call, via passed-in functions or closures.

For example, you can configure the Amazon S3 API operation PutObject to log debug information about the request directly, without impacting the other API operations used by your application.

// Log this API operation only. 
resp, err := svc.PutObjectWithContext(ctx, params, request.WithLogLevel(aws.LogDebug))

This pattern is also helpful when you want your application to inject request handlers into the request. This allows you to do so in line with the API operation method call.

resp, err := svc.PutObjectWithContext(ctx, params, func(r *request.Request) {
	start := time.Now()
	r.Handlers.Complete.PushBack(func(req *request.Request) {
		fmt.Printf("request %s took %s to complete\n", req.RequestID, time.Since(start))
	})
})

All of the SDK’s new service client methods that have a WithContext suffix support these request options. You can also apply request options to the SDK’s standard Request directly with the ApplyOptions method.
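
For example, here is a quick sketch applying the same logging option to a standalone Request created with PutObjectRequest:

req, resp := svc.PutObjectRequest(params)

// Apply the same functional options used with the WithContext methods.
req.ApplyOptions(request.WithLogLevel(aws.LogDebug))

if err := req.Send(); err == nil {
	fmt.Println(resp)
}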

API Operations with Context

All of the new methods of the SDK’s API operations that have a WithContext suffix take a Context value. This value must be non-nil. Context allows your application to control API operation request cancellation. This means you can now easily institute request timeouts based on the Context pattern. Go introduced the Context pattern in the experimental package golang.org/x/net/context, and it was later added to the Go standard library in Go 1.7. For backward compatibility with previous Go versions, the SDK created the Context interface type in the github.com/aws/aws-sdk-go/aws package. The SDK’s Context type is compatible with Context from both golang.org/x/net/context and the Go 1.7 standard library Context package.

Here is an example of how to use a Context to cancel uploading an object to Amazon S3. If the put doesn’t complete within the timeout passed in, the API operation is canceled. When a Context is canceled, the SDK returns the CanceledErrorCode error code. A working version of this example can be found in the SDK.

sess := session.Must(session.NewSession())
svc := s3.New(sess)

// Create a context with a timeout that will abort the upload if it takes 
// more than the passed in timeout.
ctx := context.Background()
var cancelFn func()
if timeout > 0 {
	ctx, cancelFn = context.WithTimeout(ctx, timeout)
}
// Ensure the context is canceled to prevent leaking.
// See context package for more information, https://golang.org/pkg/context/
if cancelFn != nil {
	defer cancelFn()
}

// Uploads the object to S3. The Context will interrupt the request if the 
// timeout expires.
_, err := svc.PutObjectWithContext(ctx, &s3.PutObjectInput{
	Bucket: aws.String(bucket),
	Key:    aws.String(key),
	Body:   body,
})
if err != nil {
	if aerr, ok := err.(awserr.Error); ok && aerr.Code() == request.CanceledErrorCode {
		// If the SDK can determine the request or retry delay was canceled
		// by a context the CanceledErrorCode error code will be returned.
		fmt.Println("request's context canceled,", err)
	}
	return err
}

API Operation Waiters

Waiters were expanded to include support for request Context and waiter options. The new WaiterOption type defines functional options that are used to configure the waiter’s functionality.

For example, the WithWaiterDelay allows you to provide your own function that returns how long the waiter will wait before checking the waiter’s resource state again. This is helpful when you want to configure an exponential backoff, or longer retry delays with ConstantWaiterDelay.

The example below highlights this by configuring the WaitUntilBucketExists method to use a 30-second delay between checks to determine if the bucket exists.

svc := s3.New(sess)
ctx := context.Background()

_, err := svc.CreateBucketWithContext(ctx, &s3.CreateBucketInput{
	Bucket: aws.String("myBucket"),
})
if err != nil {
	return fmt.Errorf("failed to create bucket, %v", err)
}

err = svc.WaitUntilBucketExistsWithContext(ctx,
	&s3.HeadBucketInput{
		Bucket: aws.String("myBucket"),
	},
	request.WithWaiterDelay(request.ConstantWaiterDelay(30 * time.Second)),
)
if err != nil {
	return fmt.Errorf("failed to wait for bucket exists, %v", err)
}

fmt.Println("bucket created")

API Operation Paginators

Paginators were also expanded to add support for Context and request options. Configuring request options for pagination applies the options to each new Request that the SDK creates to retrieve the next page. By extending the Pages API methods to include Context and request options the SDK gives you control over how the SDK will make each page request, and cancellation of the pagination.

svc := s3.New(sess)
ctx := context.Background()

err := svc.ListObjectsPagesWithContext(ctx,
	&s3.ListObjectsInput{
		Bucket: aws.String("myBucket"),
		Prefix: aws.String("some/key/prefix"),
		MaxKeys: aws.Int64(100),
	},
	func(page *s3.ListObjectsOutput, lastPage bool) bool {
		fmt.Println("Received", len(page.Contents), "objects in page")
		for _, obj := range page.Contents {
			fmt.Println("Key:", aws.StringValue(obj.Key))
		}
		return true
	},
)
if err != nil {
	return fmt.Errorf("failed to create bucket, %v", err)
}

API Operation Pagination without Callbacks

In addition to the Pages API operations, you can use the new Pagination type in the github.com/aws/aws-sdk-go/aws/request package. This type enables you to control the iteration of pages directly, which is helpful when you don’t want to use callbacks for paginating AWS operations. The new type lets you treat pagination much like the Go standard library bufio package’s Scanner type, iterating through pages with a for loop. You can also combine this with the Context pattern by calling Request.SetContext on each request in the NewRequest function.

svc := s3.New(sess)

params := s3.ListObjectsInput{
	Bucket: aws.String("myBucket"),
	Prefix: aws.String("some/key/prefix"),
	MaxKeys: aws.Int64(100),
}
ctx := context.Background()

p := request.Pagination{
	NewRequest: func() (*request.Request, error) {
		req, _ := svc.ListObjectsRequest(&params)
		req.SetContext(ctx)
		return req, nil
	},
}

for p.Next() {
	page := p.Page().(*s3.ListObjectsOutput)

	fmt.Println("Received", len(page.Contents), "objects in page")
	for _, obj := range page.Contents {
		fmt.Println("Key:", aws.StringValue(obj.Key))
	}
}

return p.Err()

Wrap Up

The addition of Context and request options expands the capabilities of the AWS SDK for Go, giving your applications the tools needed to implement request lifecycle and configuration with the SDK. Let us know your experiences using the new Context pattern and request options features.

Assume AWS IAM Roles with MFA Using the AWS SDK for Go

AWS SDK for Go v1.7.0 added a feature that allows your code to assume AWS Identity and Access Management (IAM) roles with Multi-Factor Authentication (MFA). This feature lets your applications support users assuming IAM roles with MFA token codes, with minimal setup and configuration.

IAM roles enable you to manage granular permissions for a specific role or task, instead of applying those permissions directly to users and groups. Roles create a layer of separation, decoupling resource permissions from users, groups, and other accounts. With IAM roles, you can give third-party AWS accounts access to your resources without having to create additional users for them in your AWS account.

Assuming IAM roles with MFA is a pattern used for roles that will be assumed by applications used directly by users instead of automated systems such as services. You can require that users assuming your role specify an MFA token code each time the role is assumed. The AWS SDK for Go now makes this easier to support in your Go applications.

Setting Up an IAM Role and User for MFA

To take advantage of this feature, enable MFA for your users and IAM roles. IAM supports two categories of MFA: Security Token and SMS Text Message. The SDK’s support for MFA uses the security token method. Within that category, there are two types of security token devices, Hardware MFA device and Virtual MFA device. The AWS SDK for Go supports both device types equally.

In order for a user to assume an IAM role with MFA, there must be an MFA device linked with the user. You can do this via the IAM console on the Security credentials tab of a user’s details, using the Assigned MFA device field. Here you can assign an MFA device to a user. Only one MFA device can be assigned per user.

You can also configure IAM roles to require users who assume those roles to do so using an MFA token. This restriction is enabled in the Trust Relationship section of a role’s details. Use the aws:MultiFactorAuthPresent condition to require that any user who assumes the role must do so with an MFA token.

The following is an example of a Trust Relationship that enables this restriction.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<account>:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "Bool": {
          "aws:MultiFactorAuthPresent": "true"
        }
      }
    }
  ]
}

Assuming a Role with SDK Session

A common practice when using the AWS SDK for Go is to specify credentials and configuration in files such as the shared configuration file (~/.aws/config) and the shared credentials file (~/.aws/credentials). The SDK’s session package makes using these configurations easy, and will automatically configure service clients based on them. You can enable the SDK’s support for assuming a role and the shared configuration file by setting the environment variable AWS_SDK_LOAD_CONFIG=1, or the session option SharedConfigState to SharedConfigEnable.

To configure your configuration profile to assume an IAM role with MFA, you need to specify the MFA device’s serial number for a Hardware MFA device, or ARN for a Virtual MFA device (mfa_serial). This is in addition to specifying the role’s ARN (role_arn) in your SDK shared configuration file.

The following example profile instructs the SDK to assume a role and requires the user to provide an MFA token to assume the role. The SDK uses the source_profile field to look up another profile in the configuration file that can specify the credentials, and region with which to make the AWS Security Token Service (STS) Assume Role API operation call.

The SDK supports assuming an IAM role with and without MFA. To assume a role without MFA, don’t provide the mfa_serial field.

[profile assume_role_profile]
role_arn = arn:aws:iam::<account_number>:role/<role_name>
source_profile = other_profile
mfa_serial = <hardware device serial number or virtual device arn>

See the SDK’s session package documentation for more details about configuring the shared configuration files.

After you’ve updated your shared configuration file, you can update your application code’s Sessions to specify how the MFA token code is retrieved from your application’s users. If a shared configuration profile specifies a role to assume and provides the mfa_serial field, the SDK requires that the AssumeRoleTokenProvider session option also be set. There’s no harm in always setting the AssumeRoleTokenProvider session option for applications that will always be run by a person; it’s only used if the selected profile both specifies a role to assume and sets the mfa_serial field. Otherwise, the option is ignored.

The SDK doesn’t automatically set the AssumeRoleTokenProvider with a default value. This is because of the risk of halting an application unexpectedly while the token provider waits for a nonexistent user to provide a value due to a configuration change. You must set this value to use MFA roles with the SDK.

The SDK implements a simple token provider in the stscreds package, StdinTokenProvider. This function prompts on stdin for an MFA token, and waits forever until one is provided. You can also easily implement a custom token provider by satisfying the func() (string, error) signature. The returned string is the MFA token, and the error is any error that occurred while retrieving the token.

// Enable SDK's Shared Config support.
sess := session.Must(session.NewSessionWithOptions(session.Options{
    AssumeRoleTokenProvider: stscreds.StdinTokenProvider,
    SharedConfigState: session.SharedConfigEnable,
}))

// Use the session to create service clients and make API operation calls.
svc := s3.New(sess)
svc.PutObject(...)
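
If prompting on stdin doesn’t suit your application, a custom token provider only needs to match the func() (string, error) signature. Here is a minimal sketch that reads the token code from a hypothetical MFA_TOKEN environment variable (the variable name is made up for illustration):

// myTokenProvider returns the MFA token code from the environment instead
// of prompting on stdin. MFA_TOKEN is a made-up variable name.
func myTokenProvider() (string, error) {
	if token := os.Getenv("MFA_TOKEN"); token != "" {
		return token, nil
	}
	return "", errors.New("MFA_TOKEN environment variable not set")
}

sess := session.Must(session.NewSessionWithOptions(session.Options{
    AssumeRoleTokenProvider: myTokenProvider,
    SharedConfigState:       session.SharedConfigEnable,
}))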

Configuring the Assume Role Credentials Provider Directly

In addition to being able to create a Session configured to assume an IAM role, you can also create a credential provider to assume a role directly. This is helpful when the role’s configuration isn’t stored in the shared configuration files.

Creating the credential provider is similar to configuring a Session. However, you don’t need to enable the session’s shared configuration option. In addition, you can use this to configure service clients to use the assumed role directly, instead of via the shared Session. This is helpful when you want to share base configuration across multiple service clients via the Session, and use the role only for select tasks.

// Initial credentials loaded from SDK's default credential chain, such as
// the environment, shared credentials (~/.aws/credentials), or EC2 Instance
// Role. These credentials are used to make the AWS STS Assume Role API.
sess := session.Must(session.NewSession())

// Create the credentials from AssumeRoleProvider to assume the role
// referenced by the "myRoleARN" ARN. Prompt for MFA token from stdin.
creds := stscreds.NewCredentials(sess, "myRoleArn", func(p *stscreds.AssumeRoleProvider) {
    p.SerialNumber = aws.String("myTokenSerialNumber")
    p.TokenProvider = stscreds.StdinTokenProvider
})

// Create an Amazon SQS service client with the Session's default configuration.
sqsSvc := sqs.New(sess)

// Create service client configured for credentials from the assumed role.
s3Svc := s3.New(sess, &aws.Config{Credentials: creds})

Feedback

We’re always looking for more feedback. We added this feature as a direct result of feedback and requests we received. If you have any ideas that you think would be good improvements or additions to the AWS SDK for Go, please let us know.

Using the AWS SDK for Go Encryption Client

Overview

The AWS SDK for Go released the encryption client last year, and some of our customers have asked us how to use it. We’re very excited to show you some examples in this blog post. Before we get into the examples, let’s look at what client-side encryption is and why you might want to use it.

Client-side encryption is the act of encrypting or decrypting on the client’s side and not relying on a service to do the encryption for you. This has many added benefits, including enabling you to choose what to use to encrypt your data. It also enables extra security so that only those who have the master key can decrypt the data.

The crypto client has three major components: key wrap handler, cipher builder, and client. We use the key wrap handler to generate and encrypt the IV and key. Then we use those keys with the cipher builder to build a new cipher. Lastly, we use all these parts to create a client. To learn more about this process, see envelope encryption.

Prerequisites

To run these examples, we need:

  • An AWS KMS encryption key
  • An Amazon S3 bucket named bar

Encryption and Decryption

In our implementation, we wanted to provide interoperability across all SDKs and to give customers an easy way to extend the s3crypto package. Let’s first get into an example of putting a simple “hello world” object into S3.


	arn := "arn to our key"
	sess := session.New(&aws.Config{
        Region: aws.String("us-east-1"),
    })
	// This is our key wrap handler, used to generate cipher keys and IVs for
    // our cipher builder. Using an IV allows more “spontaneous” encryption.
    // The IV makes it more difficult for hackers to use dictionary attacks.
    // The key wrap handler behaves as the master key. Without it, you can’t
    // encrypt or decrypt the data.
	keywrap := s3crypto.NewKMSKeyGenerator(kms.New(sess), arn)
	// This is our content cipher builder, used to instantiate new ciphers
    // that enable us to encrypt or decrypt the payload.
	builder := s3crypto.AESGCMContentCipherBuilder(keywrap)
	// Let's create our crypto client!
	client := s3crypto.NewEncryptionClient(sess, builder)
	
	key := "foo"
	bucket := "bar"
	input := &s3.PutObjectInput{
    	Bucket: &bucket,
    	Key:    &key,
    	Body:   bytes.NewReader("Hello world!"),
	}

    _, err := client.PutObject(input)
	// What to expect as errors? You can expect any sort of S3 errors, http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html.
	// The s3crypto client can also return some errors:
    //  * MissingCMKIDError - when using AWS KMS, the user must specify their key's ARN
	if err != nil {
		return err
	}

Now that wasn’t too hard! It looks almost identical to the S3 PutObject! Let’s move on to an example of decryption.


	sess := session.New(&aws.Config{
        Region: aws.String("us-east-1"),
    })
	client := s3crypto.NewDecryptionClient(sess)

	key := "foo"
	bucket := "bar"
	input := &s3.GetObjectInput{
    	Bucket: &bucket,
    	Key:    &key,
	}

    result, err := client.GetObject(input)
  // Aside from the S3 errors, here is a list of decryption client errors:
  //   * InvalidWrapAlgorithmError - returned on an unsupported Wrap algorithm
  //   * InvalidCEKAlgorithmError - returned on an unsupported CEK algorithm
  //   * V1NotSupportedError - the SDK doesn’t support v1 because security is an issue for AES ECB
  // These errors don’t necessarily mean there’s something wrong. They just tell us we couldn't decrypt some data.
  // Users can choose to log this and then continue decrypting the data that they can, or simply return the error.
	if err != nil {
		return err
	}
	
	// Let's read the whole body from the response
	b, err := ioutil.ReadAll(result.Body)
	if err != nil {
		return err
	}
	fmt.Println(string(b))

As the code shows, there’s no difference between using the Amazon S3 client’s GetObject versus the s3crypto.DecryptionClient.GetObject.

Lost or Deleted Master Key

If you lose or delete your master key, there’s no way to decrypt your data. The beauty of client-side encryption is that the master key is never stored with your data. This allows you to specify who can view your data.

Supported Algorithms

The AWS SDK for Go currently supports AWS KMS for key wrapping and AES GCM as a content cipher. However, some users might not want to use AES GCM or KMS for their ciphers. The SDK allows any user to specify any cipher as long as it satisfies our interfaces. With that said, the goal of this crypto client is to allow interoperability between the crypto clients of other SDKs and enable easy extensibility. Please let us know in the comments how you’re using or extending the crypto client.

The documentation for the Amazon S3 Encryption Client can be found in the SDK’s s3crypto package.

Check out other SDKs that support the Amazon S3 Encryption Client:
AWS SDK for C++
AWS SDK for Java
AWS SDK for Ruby

AWS SDK for Go Adds Error Code Constants

The AWS SDK for Go v1.6.19 release adds generated constants for all modeled service response error codes. These constants improve discoverability of the error codes that a service can return, and reduce the chance of typos that can cause errors to be handled incorrectly.

You can find the new error code constants within each of the SDK’s service client packages that are prefixed with “ErrCode”.  For example, the “NoSuchBucket” error code returned by Amazon S3 API requests can be found in the s3 package as: “ErrCodeNoSuchBucket”.

Here is an example of how to use the error code constants with the Amazon S3 GetObject API response.

result, err := svc.GetObject(&s3.GetObjectInput{
    Bucket: aws.String("myBucket"),
    Key: aws.String("myKey"),
})
if err != nil {
    if aerr, ok := err.(awserr.Error); ok {
        // Special handling for bucket and key errors
        switch aerr.Code() {
            case s3.ErrCodeNoSuchBucket:
                 // Handling bucket not existing error
                 fmt.Println("bucket does not exist.")
            case s3.ErrCodeNoSuchKey:
                 // Handling key not existing error
                 fmt.Println("key does not exist.")
        }
    }
    return err
}

We’re working to include error codes for all services. Let us know if you find additional error codes to include in the AWS SDK for Go.

AWS SDK for Go Update Needed for Go 1.8

The AWS SDK for Go is updated for Go 1.8. This update fixes an issue in which some API operations failed with a connection reset by peer error or service error. This failure prevented API operation requests from being made. If you’re using Go 1.8 with a version of the SDK that’s earlier than v1.6.3, you need to update the SDK to at least v1.6.3 to take advantage of this fix.

GitHub issue #984 discovered that the bug was caused by the way the SDK constructed its HTTP request body. The SDK relied on undocumented functionality of Go 1.7 and earlier versions. In that functionality, the Go http.Request automatically determined whether to send the request’s body, based on whether the body was empty.

Go addressed the issue for most use cases in 1.8rc2, but some APIs such as the Amazon Simple Storage Service (Amazon S3) CopyObject API were still affected.

The SDK’s fix for this issue takes advantage of Go 1.8’s new type, http.NoBody. The SDK uses this value to ensure the HTTP request doesn’t contain a body when none is expected. Another option for a fix was to set Request.Body to nil, but this would break backward compatibility because the Request.Body value is accessible.
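
For reference, here is a small sketch (application code, not SDK internals) of how http.NoBody signals an explicitly empty body in Go 1.8:

// http.NoBody is a non-nil io.ReadCloser that always returns EOF, so the
// request is sent without a body while Request.Body itself stays non-nil.
req, err := http.NewRequest("PUT", "https://example.com/bucket/key", http.NoBody)
if err != nil {
	return err
}
resp, err := http.DefaultClient.Do(req)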

See #991, #984, and golang/go#18257 for more information.

Thank you, to all who discovered, reported, and helped us resolve this issue.