Category: Go


Context Pattern added to the AWS SDK for Go

by Jason Del Ponte

The AWS SDK for Go v1.8.0 release adds support for API operation request functional options, and for the Context pattern. Both of these features were in high demand from our users. Request options allow you to easily configure and augment how the SDK makes API operation requests to AWS services. The SDK's support for the Context pattern allows your application to take advantage of cancellation, timeouts, and Context values on requests. Together, the new request options and the Context pattern give your application even more control over the SDK's request execution and handling.

Request Options

Request options are functional arguments that you pass to the SDK's API operation methods. Functional options are a pattern you can use to configure an operation via functions or closures passed in line with the method call.
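To illustrate the functional options pattern itself, here is a small plain-Go sketch. The Config, WithRetries, and WithLogLevel names are invented for this example and are not SDK types; the SDK's request.Option follows the same shape.

```go
package main

import "fmt"

// Config holds settings a caller can adjust via functional options.
type Config struct {
	Retries  int
	LogLevel string
}

// Option mirrors the shape of the SDK's request.Option: a function
// that mutates the value being configured.
type Option func(*Config)

func WithRetries(n int) Option {
	return func(c *Config) { c.Retries = n }
}

func WithLogLevel(level string) Option {
	return func(c *Config) { c.LogLevel = level }
}

// Do applies any passed-in options over its defaults before running.
func Do(opts ...Option) Config {
	cfg := Config{Retries: 3, LogLevel: "off"} // defaults
	for _, opt := range opts {
		opt(&cfg)
	}
	return cfg
}

func main() {
	cfg := Do(WithRetries(5), WithLogLevel("debug"))
	fmt.Println(cfg.Retries, cfg.LogLevel) // Prints: 5 debug
}
```

Because options are just functions, callers pay no cost for settings they don't touch, and new options can be added without breaking existing call sites.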

For example, you can configure the Amazon S3 API operation PutObject to log debug information about the request directly, without impacting the other API operations used by your application.

// Log this API operation only. 
resp, err := svc.PutObjectWithContext(ctx, params, request.WithLogLevel(aws.LogDebug))

This pattern is also helpful when you want your application to inject request handlers into the request. This allows you to do so in line with the API operation method call.

resp, err := svc.PutObjectWithContext(ctx, params, func(r *request.Request) {
	start := time.Now()
	r.Handlers.Complete.PushBack(func(req *request.Request) {
		fmt.Printf("request %s took %s to complete\n", req.RequestID, time.Since(start))
	})
})

All of the SDK’s new service client methods that have a WithContext suffix support these request options. You can also apply request options to the SDK’s standard Request directly with the ApplyOptions method.

API Operations with Context

All of the SDK's new API operation methods that have a WithContext suffix take a Context value, which must be non-nil. Context allows your application to control API operation request cancellation, which means you can now easily institute request timeouts based on the Context pattern. Go introduced the Context pattern in the experimental package golang.org/x/net/context, and later added it to the standard library in Go 1.7. For backward compatibility with earlier Go versions, the SDK defines its own Context interface type in the github.com/aws/aws-sdk-go/aws package. The SDK's Context type is compatible with the Context type from both golang.org/x/net/context and the Go 1.7 standard library context package.

Here is an example of how to use a Context to cancel uploading an object to Amazon S3. If the put doesn’t complete within the timeout passed in, the API operation is canceled. When a Context is canceled, the SDK returns the CanceledErrorCode error code. A working version of this example can be found in the SDK.

sess := session.Must(session.NewSession())
svc := s3.New(sess)

// Create a context with a timeout that will abort the upload if it takes 
// more than the passed in timeout.
ctx := context.Background()
// Default cancelFn to a no-op so the deferred call below is always safe,
// even when no timeout is set.
cancelFn := func() {}
if timeout > 0 {
	ctx, cancelFn = context.WithTimeout(ctx, timeout)
}
// Ensure the context is canceled to prevent leaking.
// See context package for more information, https://golang.org/pkg/context/
defer cancelFn()

// Uploads the object to S3. The Context will interrupt the request if the 
// timeout expires.
_, err := svc.PutObjectWithContext(ctx, &s3.PutObjectInput{
	Bucket: aws.String(bucket),
	Key:    aws.String(key),
	Body:   body,
})
if err != nil {
	if aerr, ok := err.(awserr.Error); ok && aerr.Code() == request.CanceledErrorCode {
		// If the SDK can determine the request or retry delay was canceled
		// by a context the CanceledErrorCode error code will be returned.
		fmt.Println("request's context canceled,", err)
	}
	return err
}

API Operation Waiters

Waiters were expanded to include support for request Context and waiter options. The new WaiterOption type defines functional options that are used to configure the waiter’s functionality.

For example, the WithWaiterDelay allows you to provide your own function that returns how long the waiter will wait before checking the waiter’s resource state again. This is helpful when you want to configure an exponential backoff, or longer retry delays with ConstantWaiterDelay.

The example below highlights this by configuring the WaitUntilBucketExists method to use a 30-second delay between checks to determine if the bucket exists.

svc := s3.New(sess)
ctx := context.Background()

_, err := svc.CreateBucketWithContext(ctx, &s3.CreateBucketInput{
	Bucket: aws.String("myBucket"),
})
if err != nil {
	return fmt.Errorf("failed to create bucket, %v", err)
}

err = svc.WaitUntilBucketExistsWithContext(ctx,
	&s3.HeadBucketInput{
		Bucket: aws.String("myBucket"),
	},
	request.WithWaiterDelay(request.ConstantWaiterDelay(30 * time.Second)),
)
if err != nil {
	return fmt.Errorf("failed to wait for bucket to exist, %v", err)
}

fmt.Println("bucket created")

API Operation Paginators

Paginators were also expanded to add support for Context and request options. Configuring request options for pagination applies the options to each new Request that the SDK creates to retrieve the next page. By extending the Pages API methods to include Context and request options, the SDK gives you control over how each page request is made, and over cancellation of the pagination.

svc := s3.New(sess)
ctx := context.Background()

err := svc.ListObjectsPagesWithContext(ctx,
	&s3.ListObjectsInput{
		Bucket: aws.String("myBucket"),
		Prefix: aws.String("some/key/prefix"),
		MaxKeys: aws.Int64(100),
	},
	func(page *s3.ListObjectsOutput, lastPage bool) bool {
		fmt.Println("Received", len(page.Contents), "objects in page")
		for _, obj := range page.Contents {
			fmt.Println("Key:", aws.StringValue(obj.Key))
		}
		return true
	},
)
if err != nil {
	return fmt.Errorf("failed to list objects, %v", err)
}

API Operation Pagination without Callbacks

In addition to the Pages API operations, you can use the new Pagination type in the github.com/aws/aws-sdk-go/aws/request package. This type enables you to control the iterations of pages directly. This is helpful when you do not want to use callbacks for paginating AWS operations. This new type allows you to treat pagination similar to the Go stdlib bufio package’s Scanner type to iterate through pages with a for loop. You can also use this pattern with the Context pattern by calling Request.SetContext on each request in the NewRequest function.

svc := s3.New(sess)

params := s3.ListObjectsInput{
	Bucket: aws.String("myBucket"),
	Prefix: aws.String("some/key/prefix"),
	MaxKeys: aws.Int64(100),
}
ctx := context.Background()

p := request.Pagination{
	NewRequest: func() (*request.Request, error) {
		req, _ := svc.ListObjectsRequest(&params)
		req.SetContext(ctx)
		return req, nil
	},
}

for p.Next() {
	page := p.Page().(*s3.ListObjectsOutput)
	
	fmt.Println("Received", len(page.Contents), "objects in page")
	for _, obj := range page.Contents {
		fmt.Println("Key:", aws.StringValue(obj.Key))
	}
}

return p.Err()

Wrap Up

The addition of Context and request options expands the capabilities of the AWS SDK for Go, giving your applications the tools needed to implement request lifecycle and configuration with the SDK. Let us know your experiences using the new Context pattern and request options features.

Assume AWS IAM Roles with MFA Using the AWS SDK for Go

by Jason Del Ponte

The AWS SDK for Go v1.7.0 release added support for your code to assume AWS Identity and Access Management (IAM) roles with multi-factor authentication (MFA). This feature allows your applications to easily support users assuming IAM roles with MFA token codes, with minimal setup and configuration.

IAM roles enable you to manage granular permissions for a specific role or task, instead of applying those permissions directly to users and groups. Roles create a layer of separation, decoupling resource permissions from users, groups, and other accounts. With IAM roles you give third-party AWS accounts access to your resources, without having to create additional users for them in your AWS account.

Assuming IAM roles with MFA is a pattern used for roles that are assumed by people using applications directly, rather than by automated systems such as services. You can require that users assuming your role specify an MFA token code each time the role is assumed. The AWS SDK for Go now makes this easier to support in your Go applications.

Setting Up an IAM Role and User for MFA

To take advantage of this feature, enable MFA for your users and IAM roles. IAM supports two categories of MFA: Security Token and SMS Text Message. The SDK's support for MFA uses the security token-based method. In the security token category, there are two types of devices: Hardware MFA device and Virtual MFA device. The AWS SDK for Go supports both device types equally.

In order for a user to assume an IAM role with MFA, an MFA device must be linked with the user. You can do this in the IAM console on the Security credentials tab of a user's details, using the Assigned MFA device field. Only one MFA device can be assigned per user.

You can also configure IAM roles to require users who assume those roles to do so using an MFA token. This restriction is enabled in the Trust Relationship section of a role's details. Use the aws:MultiFactorAuthPresent condition key to require that any user who assumes the role must do so with an MFA token.

The following is an example of a Trust Relationship that enables this restriction.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<account>:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "Bool": {
          "aws:MultiFactorAuthPresent": "true"
        }
      }
    }
  ]
}

Assuming a Role with SDK Session

A common practice when using the AWS SDK for Go is to specify credentials and configuration in files such as the shared configuration file (~/.aws/config) and the shared credentials file (~/.aws/credentials). The SDK's session package makes using these configurations easy, and automatically configures service clients based on them. You can enable the SDK's support for assuming a role and the shared configuration file by setting the environment variable AWS_SDK_LOAD_CONFIG=1, or the session option SharedConfigState to SharedConfigEnable.

To configure your configuration profile to assume an IAM role with MFA, you need to specify the MFA device’s serial number for a Hardware MFA device, or ARN for a Virtual MFA device (mfa_serial). This is in addition to specifying the role’s ARN (role_arn) in your SDK shared configuration file.

The following example profile instructs the SDK to assume a role and requires the user to provide an MFA token to assume the role. The SDK uses the source_profile field to look up another profile in the configuration file that can specify the credentials, and region with which to make the AWS Security Token Service (STS) Assume Role API operation call.

The SDK supports assuming an IAM role with and without MFA. To assume a role without MFA, don’t provide the mfa_serial field.

[profile assume_role_profile]
role_arn = arn:aws:iam::<account_number>:role/<role_name>
source_profile = other_profile
mfa_serial = <hardware device serial number or virtual device arn>

See the SDK’s session package documentation for more details about configuring the shared configuration files.

After you’ve updated your shared configuration file, you can update your application code’s Sessions to specify how the MFA token code is retrieved from your application’s users. If a shared configuration profile specifies a role to assume and provides the mfa_serial field, the SDK requires that the AssumeRoleTokenProvider session option also be set. There’s no harm in always setting the AssumeRoleTokenProvider option for applications that will always be run by a person. The option is only used if the profile being loaded specifies a role to assume and sets the mfa_serial field; otherwise, it is ignored.

The SDK doesn’t automatically set the AssumeRoleTokenProvider with a default value. This is because of the risk of halting an application unexpectedly while the token provider waits for a nonexistent user to provide a value due to a configuration change. You must set this value to use MFA roles with the SDK.

The SDK implements a simple token provider in the stscreds package, StdinTokenProvider. This function prompts on stdin for an MFA token, and waits forever until one is provided. You can also easily implement a custom token provider by satisfying the func() (string, error) signature. The returned string is the MFA token, and the error is any error that occurred while retrieving the token.
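For example, a custom provider can adapt any token source to that signature. The tokenProviderFrom helper and its six-digit check below are illustrative only, not part of the SDK:

```go
package main

import (
	"errors"
	"fmt"
)

// tokenProviderFrom adapts any token source into the func() (string, error)
// shape that AssumeRoleTokenProvider expects, rejecting malformed tokens.
func tokenProviderFrom(source func() string) func() (string, error) {
	return func() (string, error) {
		token := source()
		if len(token) != 6 {
			return "", errors.New("MFA token must be 6 digits")
		}
		return token, nil
	}
}

func main() {
	// In a real application the source might read from a GUI prompt or a
	// hardware device instead of returning a fixed string.
	provider := tokenProviderFrom(func() string { return "123456" })
	token, err := provider()
	fmt.Println(token, err) // Prints: 123456 <nil>
}
```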

// Enable SDK's Shared Config support.
sess := session.Must(session.NewSessionWithOptions(session.Options{
    AssumeRoleTokenProvider: stscreds.StdinTokenProvider,
    SharedConfigState: session.SharedConfigEnable,
}))

// Use the session to create service clients and make API operation calls.
svc := s3.New(sess)
svc.PutObject(...)

Configuring the Assume Role Credentials Provider Directly

In addition to being able to create a Session configured to assume an IAM role, you can also create a credential provider to assume a role directly. This is helpful when the role’s configuration isn’t stored in the shared configuration files.

Creating the credential provider is similar to configuring a Session. However, you don’t need to enable the session’s shared configuration option. In addition, you can use this to configure service clients to use the assumed role directly instead of via the shared session. This is helpful when you want to share base configuration across multiple service clients via the Session, and use roles only for select tasks.

// Initial credentials loaded from SDK's default credential chain, such as
// the environment, shared credentials (~/.aws/credentials), or EC2 Instance
// Role. These credentials are used to make the AWS STS Assume Role API.
sess := session.Must(session.NewSession())

// Create the credentials from AssumeRoleProvider to assume the role
// referenced by the "myRoleARN" ARN. Prompt for MFA token from stdin.
creds := stscreds.NewCredentials(sess, "myRoleArn", func(p *stscreds.AssumeRoleProvider) {
    p.SerialNumber = aws.String("myTokenSerialNumber")
    p.TokenProvider = stscreds.StdinTokenProvider
})

// Create an Amazon SQS service client with the Session's default configuration.
sqsSvc := sqs.New(sess)

// Create service client configured for credentials from the assumed role.
s3Svc := s3.New(sess, &aws.Config{Credentials: creds})

Feedback

We’re always looking for more feedback. We added this feature as a direct result of feedback and requests we received. If you have any ideas that you think would be good improvements or additions to the AWS SDK for Go, please let us know.

Using the AWS SDK for Go Encryption Client

by Ben Powell

Overview

The AWS SDK for Go released the encryption client last year, and some of our customers have asked us how to use it. We’re very excited to show you some examples in this blog post. Before we get into the examples, let’s look at what client-side encryption is and why you might want to use it.

Client-side encryption is the act of encrypting or decrypting data on the client side, without relying on a service to do the encryption for you. This has many benefits, including letting you choose what to use to encrypt your data. It also adds security, because only those who hold the master key can decrypt the data.

The crypto client has three major components: the key wrap handler, the cipher builder, and the client. We use the key wrap handler to generate and encrypt the IV and key. Then we use those keys with the cipher builder to build a new cipher. Lastly, we use all these parts to create a client. To learn more about this process, see envelope encryption.
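To make the envelope idea concrete, here is a minimal standard-library sketch of the process. This is not the s3crypto implementation; it only shows the shape of envelope encryption: the payload is sealed with a one-off data key, and the data key itself is sealed ("wrapped") with the master key, so both can travel together without exposing the plaintext.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"io"
)

// sealWithKey encrypts plaintext under key with AES-GCM, prepending the
// randomly generated nonce (IV) to the ciphertext.
func sealWithKey(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// openWithKey reverses sealWithKey: it splits off the nonce and decrypts.
func openWithKey(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ciphertext := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ciphertext, nil)
}

func main() {
	masterKey := make([]byte, 32) // stands in for the KMS master key
	dataKey := make([]byte, 32)   // per-object cipher key
	io.ReadFull(rand.Reader, masterKey)
	io.ReadFull(rand.Reader, dataKey)

	// Envelope encryption: seal the payload with the data key, then
	// wrap the data key with the master key.
	sealedPayload, _ := sealWithKey(dataKey, []byte("Hello world!"))
	wrappedKey, _ := sealWithKey(masterKey, dataKey)

	// Decryption reverses the steps: unwrap the key, then open the payload.
	unwrapped, _ := openWithKey(masterKey, wrappedKey)
	plaintext, _ := openWithKey(unwrapped, sealedPayload)
	fmt.Println(string(plaintext)) // Prints: Hello world!
}
```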

Prerequisite

To run these examples, we need:

  • An AWS KMS encryption key
  • An Amazon S3 bucket (named bar in these examples)

Encryption and Decryption

In our implementation, we wanted to provide interoperability across all SDKs and to give customers an easy way to extend the s3crypto package. Let’s first get into an example of putting a simple “hello world” object into S3.


arn := "arn to our key"
sess := session.New(&aws.Config{
	Region: aws.String("us-east-1"),
})
// This is our key wrap handler, used to generate cipher keys and IVs for
// our cipher builder. Using an IV allows more "spontaneous" encryption.
// The IV makes it more difficult for hackers to use dictionary attacks.
// The key wrap handler behaves as the master key. Without it, you can't
// encrypt or decrypt the data.
keywrap := s3crypto.NewKMSKeyGenerator(kms.New(sess), arn)
// This is our content cipher builder, used to instantiate new ciphers
// that enable us to encrypt or decrypt the payload.
builder := s3crypto.AESGCMContentCipherBuilder(keywrap)
// Let's create our crypto client!
client := s3crypto.NewEncryptionClient(sess, builder)

key := "foo"
bucket := "bar"
input := &s3.PutObjectInput{
	Bucket: &bucket,
	Key:    &key,
	Body:   bytes.NewReader([]byte("Hello world!")),
}

_, err := client.PutObject(input)
// What to expect as errors? You can expect any sort of S3 errors, http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html.
// The s3crypto client can also return some errors:
//  * MissingCMKIDError - when using AWS KMS, the user must specify their key's ARN
if err != nil {
	return err
}

Now that wasn’t too hard! It looks almost identical to the S3 PutObject! Let’s move on to an example of decryption.


sess := session.New(&aws.Config{
	Region: aws.String("us-east-1"),
})
client := s3crypto.NewDecryptionClient(sess)

key := "foo"
bucket := "bar"
input := &s3.GetObjectInput{
	Bucket: &bucket,
	Key:    &key,
}

result, err := client.GetObject(input)
// Aside from the S3 errors, here is a list of decryption client errors:
//   * InvalidWrapAlgorithmError - returned on an unsupported Wrap algorithm
//   * InvalidCEKAlgorithmError - returned on an unsupported CEK algorithm
//   * V1NotSupportedError - the SDK doesn't support v1 because security is an issue for AES ECB
// These errors don't necessarily mean there's something wrong. They just tell us we couldn't decrypt some data.
// Users can choose to log this and then continue decrypting the data that they can, or simply return the error.
if err != nil {
	return err
}

// Let's read the whole body from the response.
b, err := ioutil.ReadAll(result.Body)
if err != nil {
	return err
}
fmt.Println(string(b))

As the code shows, there’s no difference between using the Amazon S3 client’s GetObject versus the s3crypto.DecryptionClient.GetObject.

Lost or Deleted Master Key

If you lose or delete your master key, there’s no way to decrypt your data. The beauty of client-side encryption is that the master key is never stored with your data. This allows you to specify who can view your data.

Supported Algorithms

The AWS SDK for Go currently supports AWS KMS for key wrapping and AES GCM as a content cipher. However, some users might not want to use AES GCM or KMS for their ciphers. The SDK allows any user to specify any cipher as long as it satisfies our interfaces. With that said, the goal of this crypto client is to allow interoperability between the crypto clients of other SDKs and enable easy extensibility. Please let us know in the comments how you’re using or extending the crypto client.

The documentation for the Amazon S3 Encryption Client can be found here.

Check out other SDKs that support the Amazon S3 Encryption Client:
AWS SDK for C++
AWS SDK for Java
AWS SDK for Ruby

AWS SDK for Go Adds Error Code Constants

by Jason Del Ponte

The AWS SDK for Go v1.6.19 release adds generated constants for all modeled service response error codes. These constants improve discoverability of the error codes that a service can return, and reduce the chance of typos that can cause errors to be handled incorrectly.

You can find the new error code constants within each of the SDK’s service client packages that are prefixed with “ErrCode”.  For example, the “NoSuchBucket” error code returned by Amazon S3 API requests can be found in the s3 package as: “ErrCodeNoSuchBucket”.

Here is an example of how to use the error code constants with the Amazon S3 GetObject API response.

result, err := svc.GetObject(&s3.GetObjectInput{
    Bucket: aws.String("myBucket"),
    Key:    aws.String("myKey"),
})
if err != nil {
    if aerr, ok := err.(awserr.Error); ok {
        // Special handling for bucket and key errors
        switch aerr.Code() {
        case s3.ErrCodeNoSuchBucket:
            // Handle the bucket not existing
            fmt.Println("bucket does not exist.")
        case s3.ErrCodeNoSuchKey:
            // Handle the key not existing
            fmt.Println("key does not exist.")
        }
    }
    return err
}

We’re working to include error codes for all services. Let us know if you find additional error codes to include in the AWS SDK for Go.

AWS SDK for Go Update Needed for Go 1.8

by Jason Del Ponte

The AWS SDK for Go has been updated for Go 1.8. This update fixes an issue in which some API operations failed with a connection reset by peer error or a service error. This failure prevented API operation requests from being made. If you’re using Go 1.8 with a version of the SDK earlier than v1.6.3, you need to update the SDK to at least v1.6.3 to take advantage of this fix.

GitHub issue #984 identified that the bug was caused by the way the SDK constructed its HTTP request body. The SDK relied on undocumented functionality of Go 1.7 and earlier, in which the Go http.Request automatically determined whether to send the request’s body based on whether the body was empty.

Go addressed the issue for most use cases in 1.8rc2, but some APIs such as the Amazon Simple Storage Service (Amazon S3) CopyObject API were still affected.

The SDK’s fix for this issue takes advantage of Go 1.8’s new type, http.NoBody. The SDK uses this value to ensure the HTTP request doesn’t contain a body when none is expected. Another option for a fix was to set Request.Body to nil, but this would break backward compatibility because the Request.Body value is accessible.

See #991, #984, and golang/go#18257 for more information.

Thank you to all who discovered, reported, and helped us resolve this issue.

Using the AWS SDK for Go’s Regions and Endpoints Metadata

by Jason Del Ponte

In release v1.6.0 of the AWS SDK for Go, we added Regions and Endpoints metadata to the SDK. This feature enables you to easily enumerate the metadata and discover Regions, Services, and Endpoints. You can find this feature in the github.com/aws/aws-sdk-go/aws/endpoints package.

The endpoints package provides a simple interface to get a service’s endpoint URL and enumerate the Region metadata. The metadata is grouped into partitions. Each partition is a group of AWS Regions such as AWS Standard, AWS China, and AWS GovCloud (US).

Resolving Endpoints

The SDK automatically uses the endpoints.DefaultResolver function when setting the SDK’s default configuration. You can resolve endpoints yourself by calling the EndpointFor methods in the endpoints package.

// Resolve endpoint for S3 in us-west-2
resolver := endpoints.DefaultResolver()
endpoint, err := resolver.EndpointFor(endpoints.S3ServiceID, endpoints.UsWest2RegionID)
if err != nil {
        fmt.Println("failed to resolve endpoint", err)
        return
}
 
fmt.Println("Resolved URL:", endpoint.URL)

If you need to add custom endpoint resolution logic to your code, you can implement the endpoints.Resolver interface, and set the value to aws.Config.EndpointResolver. This is helpful when you want to provide custom endpoint logic that the SDK will use for resolving service endpoints.

The following example creates a Session that is configured so that Amazon S3 service clients are constructed with a custom endpoint.

defaultResolver := endpoints.DefaultResolver()
s3CustResolverFn := func(service, region string, optFns ...func(*endpoints.Options)) (endpoints.ResolvedEndpoint, error) {
	if service == "s3" {
		return endpoints.ResolvedEndpoint{
			URL:           "s3.custom.endpoint.com",
			SigningRegion: "custom-signing-region",
		}, nil
	}

	return defaultResolver.EndpointFor(service, region, optFns...)
}
sess := session.Must(session.NewSessionWithOptions(session.Options{
	Config: aws.Config{
		Region:           aws.String("us-west-2"),
		EndpointResolver: endpoints.ResolverFunc(s3CustResolverFn),
	},
}))

Partitions

The return value of the endpoints.DefaultResolver function can be cast to the endpoints.EnumPartitions interface. This will give you access to the slice of partitions that the SDK will use, and can help you enumerate over partition information for each partition.

// Iterate through all partitions printing each partition's ID.
resolver := endpoints.DefaultResolver()
partitions := resolver.(endpoints.EnumPartitions).Partitions()
 
for _, p := range partitions {
        fmt.Println("Partition:", p.ID())
}

In addition to the list of partitions, the endpoints package also includes a getter function for each partition group. These utility functions enable you to enumerate a specific partition without having to cast and enumerate over all the default resolver’s partitions.

partition := endpoints.AwsPartition()
region := partition.Regions()[endpoints.UsWest2RegionID]
 
fmt.Println("Services in region:", region.ID())
for id := range region.Services() {
        fmt.Println(id)
}

Once you have a Region or Service value, you can call ResolveEndpoint on it. This provides a filtered view of the Partition when resolving endpoints.

Check out the AWS SDK for Go repo for more examples. Let us know in the comments what you think of the endpoints package.

Using AWS SDK for Go API Setters

by Jason Del Ponte

In release v1.5.0 of the AWS SDK for Go, we added setters to all API operation parameters. Setters give you the ability to set API parameters without directly taking the value’s address. The setters wrap this functionality internally so you don’t have to. The setters are a convenient way to reduce the need to use aws.String and similar utilities.

The following code shows how you could use the Amazon S3 PutObject with the setters.

resp, err := svc.PutObject((&s3.PutObjectInput{}).
	SetBucket("myBucket").
	SetKey("myKey").
	SetBody(strings.NewReader("abc")).
	SetWebsiteRedirectLocation("https://example.com/something"),
)

The following example uses Amazon ECS and nested setters to update a service’s deployment.

resp, err := svc.UpdateService((&ecs.UpdateServiceInput{}).
	SetService("myService").
	SetDeploymentConfiguration((&ecs.DeploymentConfiguration{}).
		SetMinimumHealthyPercent(80),
	),
)

If you have additional suggestions or feedback on how to improve the SDK, send us your comments. We look forward to hearing from you.

Mocking Out the AWS SDK for Go for Unit Testing

by Jason Del Ponte

In our previous post, we showed how you could use the request handler stack in the AWS SDK for Go to extend or modify how requests are sent and received. Now we’d like to expand on that idea and discuss how you can unit test code that uses the SDK. The SDK’s service clients are a common component to short-circuit with custom unit test functionality.

You can easily mock out the SDK service clients by taking advantage of Go’s interfaces. By using the methods of an interface, your code can use that interface instead of using the concrete service client directly. This enables you to mock out the implementation of the service client for your unit tests. You can define these interfaces yourself or use the interfaces that the SDK already defines for each service client’s API. The service client’s API interfaces are very simple to use, and give you the flexibility to test your code. This pattern of using the API interfaces for mocking out is simple to use for new or existing code.

You can find the API interface package nested under each service client package, named “iface“. For example, the Amazon SQS API interface package is “sqsiface“, with the import path of github.com/aws/aws-sdk-go/service/sqs/sqsiface.

The following example shows one pattern for how your code can use the SDK’s SQS API interface instead of the concrete client. We’ll then look at how you can mock out the SDK’s SQS client.

func main() {
	sess := session.Must(session.NewSession())

	q := Queue{
		Client: sqs.New(sess),
		URL:    os.Args[1],
	}

	msgs, err := q.GetMessages(20)
	// ...
}

type Queue struct {
	Client sqsiface.SQSAPI
	URL    string
}

func (q *Queue) GetMessages(waitTimeout int64) ([]Message, error) {
	params := sqs.ReceiveMessageInput{
		QueueUrl: aws.String(q.URL),
	}
	if waitTimeout > 0 {
		params.WaitTimeSeconds = aws.Int64(waitTimeout)
	}
	resp, err := q.Client.ReceiveMessage(&params)
	// ...
}

Because the previous code uses the SQS API interface, our tests can mock out the client with the responses we want so that we can verify GetMessages returns the parsed messages correctly.

type mockedReceiveMsgs struct {
	sqsiface.SQSAPI
	Resp sqs.ReceiveMessageOutput
}

func (m mockedReceiveMsgs) ReceiveMessage(in *sqs.ReceiveMessageInput) (*sqs.ReceiveMessageOutput, error) {
	// Only need to return mocked response output
	return &m.Resp, nil
}

func TestQueueGetMessage(t *testing.T) {
	cases := []struct {
		Resp     sqs.ReceiveMessageOutput
		Expected []Message
	}{
		{
			Resp: sqs.ReceiveMessageOutput{
				Messages: []*sqs.Message{
					{Body: aws.String(`{"from":"user_1","to":"room_1","msg":"Hello!"}`)},
					{Body: aws.String(`{"from":"user_2","to":"room_1","msg":"Hi user_1 :)"}`)},
				},
			},
			Expected: []Message{
				{From: "user_1", To: "room_1", Msg: "Hello!"},
				{From: "user_2", To: "room_1", Msg: "Hi user_1 :)"},
			},
		},
	}

	for i, c := range cases {
		q := Queue{
			Client: mockedReceiveMsgs{Resp: c.Resp},
			URL:    fmt.Sprintf("mockURL_%d", i),
		}
		msgs, err := q.GetMessages(20)
		if err != nil {
			t.Fatalf("%d, unexpected error, %v", i, err)
		}
		if a, e := len(msgs), len(c.Expected); a != e {
			t.Fatalf("%d, expected %d messages, got %d", i, e, a)
		}
		for j, msg := range msgs {
			if a, e := msg, c.Expected[j]; a != e {
				t.Errorf("%d, expected %v message, got %v", i, e, a)
			}
		}
	}
}

Using this pattern will help ensure your code that uses the SDK is easy to test and maintain. You can find the full code example used in this post on GitHub at https://github.com/aws/aws-sdk-go/tree/master/example/service/sqs/mockingClientsForTests.

Using Custom Request Handlers

by Ben Powell

Requests are the core of the Go SDK. They handle the network request to a specific service. When you call a service method in the AWS SDK for Go, the SDK creates a request object that contains the input parameters and the request lifecycle handlers. The logic involved in each step of a request’s lifecycle, such as Build, Validate, or Send, is handled by its corresponding list of handlers. For example, when the request’s Send method is called, it iterates through and calls every function in the request’s Send handler list. For the full list of handlers, see this API reference page. These handlers shape and mold what your request does on the way to AWS, and what happens to the response on the way back.

Customizability is an important aspect of request handlers. By customizing its handlers, you can modify a request’s lifecycle for various use cases. You can do this for an individual request object returned by DoFooRequest client methods or, more interestingly, for an entire service client object, which will affect every request processed by the client.

An example use case would be gathering additional information on retried requests. In this case, a simple addition to the Retry handler would provide information required for further debugging. Logging is another use case. For example, a logger function can be added to log every request, or a benchmark function can be added to profile requests. Let’s see how you can add a custom logger to different handler lists.


func customLogger(req *request.Request) {
	log.Printf("Op: %s\nParams: %v\nTime: %s\nRetryCount: %d\n\n",
		req.Operation.Name, req.Params, req.Time, req.RetryCount)
}


svc.Handlers.Send.PushBack(customLogger)
svc.Handlers.Unmarshal.PushBack(func(req *request.Request) {
	log.Println("Unmarshal handler")
	customLogger(req)
})

We added the customLogger request handler to the Send handler list and a closure function to the Unmarshal handler list. The only requirement for these functions is that they take a request.Request pointer. The customLogger, added to the Send handler list, will log the operation’s name, the parameters passed, the time the request was made, and the retry count. The anonymous function added to Unmarshal will log when the Unmarshal handlers were executed and then call customLogger.

These functions are being added to the end of the handler list. During the request’s Send step, it will iterate through the list and execute the functions. Having custom loggers can be useful when developing and debugging applications because they provide more visibility into each API operation’s request/response layers.

In addition to logging, benchmarking or profiling requests is sometimes important. This gives you an idea of what to expect when hitting your production environments.


dt := time.Time{}
svc.Handlers.Send.PushFront(func(req *request.Request) {
	 dt = time.Now()
})
svc.Handlers.Send.PushBack(func(req *request.Request) {
	 fmt.Println("SENDING", time.Since(dt))
})
// Output:
// SENDING 201.007364ms

Here we push one function to the beginning of the Send’s list and then compute the delta time in the handler added to the end of the Send’s list to measure how long the API request took to send and receive a response.

As we have shown in these code samples, request handlers allow you to easily customize the SDK’s behavior and add new functionality. If you have used handlers to augment the SDK’s request/response functionality, please let us know in the comments!

Welcome to the AWS Developer Blog for Go

by Jason Del Ponte

Hi, everyone! Welcome to the AWS Developer Blog for Go. I’m Jason Del Ponte, and I’m a developer on the AWS SDK for Go team. This blog will be the place to go for information about:

  • Tips and tricks for using the AWS SDK for Go
  • New feature announcements
  • Deep dives into the AWS SDK for Go
  • Guest posts from AWS service teams

In the meantime, we’ve created several resources to introduce you to the SDK:

We’re excited to see how you will use the SDK. Please share your experience, feedback, and questions. Stay tuned!