AWS Developer Blog

Using AWS SDK for Go API Setters

In release v1.5.0 of the AWS SDK for Go, we added setters for all API operation parameters. Setters let you set API parameters without taking the value’s address yourself; the setters handle that internally, which reduces the need for aws.String and similar utilities.

The following code shows how you could use the Amazon S3 PutObject with the setters.

resp, err := svc.PutObject((&s3.PutObjectInput{}).
	SetBucket("myBucket").
	SetKey("myKey").
	SetBody(strings.NewReader("abc")).
	SetWebsiteRedirectLocation("https://example.com/something"),
)

The following example uses Amazon ECS and nested setters to update a service’s deployment.

resp, err := svc.UpdateService((&ecs.UpdateServiceInput{}).
	SetService("myService").
	SetDeploymentConfiguration((&ecs.DeploymentConfiguration{}).
		SetMinimumHealthyPercent(80),
	),
)

If you have additional suggestions or feedback on how to improve the SDK, send us your comments. We look forward to hearing from you.

How the Amazon SQS FIFO API Works

by Leah Rivers, Jakub Wojciak, and Tim Bray

We have just introduced FIFO queues for Amazon SQS. These queues offer strictly ordered message delivery and exactly-once message processing. The FIFO API builds on the SQS API and adds new capabilities. This post explains the additions, how they work, and when to use them.

Customers have asked us for these features.  Although many apps perform well with SQS’s traditional super-scalable at-least-once message processing, some applications do need ordering or exactly-once processing. For example, you might have a queue used to carry commands in an interactive “shell” session, or a queue that delivers a stream of price updates.  In both cases, the messages must be processed in order and exactly once.  FIFO queues make it much easier to support these apps, while preserving the SQS ultra-simple API.

You can use FIFO queues much as you use SQS queues today: sending, receiving, and deleting messages, and retrying whenever you get send or receive failures.  What’s new with the FIFO API is that FIFO queues deliver messages in order and support exactly-once processing.  Now, on to the details.

Note: If your network connections don’t drop for minutes at a time, and your messages have unique identifiers, you should be able to get strict ordering and exactly-once processing with little extra effort; the default settings for all the FIFO-specific arguments will be appropriate.

Making Queues
There are two additions to the SQS CreateQueue API; both are Boolean-valued queue attributes.  FifoQueue turns FIFO on; this discussion applies to queues that were created with FifoQueue set to true.  The other is ContentBasedDeduplication, which we’ll discuss later in the context of sending messages.
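
To make this concrete, here’s a minimal sketch using the AWS SDK for Go, in the style of the earlier Go examples in this compilation. It assumes svc is an *sqs.SQS client and uses a placeholder queue name; note that FIFO queue names must end in “.fifo”.

resp, err := svc.CreateQueue(&sqs.CreateQueueInput{
	QueueName: aws.String("my-queue.fifo"), // FIFO queue names must end in ".fifo"
	Attributes: map[string]*string{
		"FifoQueue":                 aws.String("true"),
		"ContentBasedDeduplication": aws.String("true"),
	},
})
if err != nil {
	// handle the error
}
// resp.QueueUrl is the URL to use for SendMessage and ReceiveMessage calls.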

Ordering vs. Exactly-Once Processing
Ordering and exactly-once processing behaviors aren’t the same thing.  You always get deduplication, but you can control ordering behavior by using a new string-valued parameter named MessageGroupId, which applies to the SendMessage and SendMessageBatch APIs.

Basically, the FIFO behavior applies to messages that have the same MessageGroupId.  This means you have three options:

  • Give all the messages in the queue the same MessageGroupId (an empty string is fine) so that they are all delivered in order.
  • Mix up a few different MessageGroupIds.  This makes sense, for example, if you are tracking data from several different customers.  The idea is that if you use the customer ID as the MessageGroupId, the records for each customer will be delivered in order; there’s no particular ordering between records from different customers.  (See the sketch after this list.)
  • Give every message a different MessageGroupId, for example, by making a new UUID for each one.  Then, there isn’t any particular ordering, but you get the deduplication (we provide details in the next section).
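
To make the second option above concrete, here’s a hedged sketch of a send with a per-customer MessageGroupId, again using the AWS SDK for Go. It assumes svc is an *sqs.SQS client and queueURL is the FIFO queue’s URL, and that ContentBasedDeduplication is set on the queue so no explicit MessageDeduplicationId is needed.

_, err := svc.SendMessage(&sqs.SendMessageInput{
	QueueUrl:    aws.String(queueURL),
	MessageBody: aws.String(`{"customer":"cust-42","price":101.25}`),
	// Messages that share a MessageGroupId are delivered in order;
	// messages in different groups have no ordering relative to each other.
	MessageGroupId: aws.String("cust-42"),
})
if err != nil {
	// handle the error
}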

What Does “FIFO” Mean, Anyhow?
FIFO is only really meaningful when you have one single-threaded sender and one single-threaded receiver.  If there are a bunch of threads or processes writing into a queue, you can’t really even tell if it’s FIFO: the messages show up in the queue depending on when the senders get scheduled.

On the other hand, with MessageGroupId, you can have a bunch of independent senders throwing messages at a queue, each with their own MessageGroupId, and each sender’s messages show up at the other end in order.

At the receiving end, when you call ReceiveMessage, the messages you get may have several different MessageGroupIds (assuming there is more than one MessageGroupId in the queue); the messages for each MessageGroupId will be delivered in order.  A receiver can’t control which MessageGroupIds it’s going to get messages for.

What Does “Duplicate” Mean?
FIFO queues are designed to avoid introducing duplicate messages. Historically, standard SQS queues offered “at least once” service, with the potential of occasional duplicate message delivery.

The good news is that with the FIFO API, you get to decide what “duplicate” means.  There are two tools you can use:  the Boolean ContentBasedDeduplication queue attribute, and an optional string-valued SendMessage/SendMessageBatch parameter, MessageDeduplicationId.

Two messages are duplicates if they have the same MessageDeduplicationId.  If ContentBasedDeduplication is set, SQS calculates the MessageDeduplicationId for you as a SHA-256 hash of the message body (but not the message attributes).

One implication is that if you haven’t set ContentBasedDeduplication on the queue, you must provide a MessageDeduplicationId or SQS throws an error.  On the other hand, if ContentBasedDeduplication is set, you can still provide the MessageDeduplicationId and SQS will use yours instead of the SHA256.
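
For example, a send to a queue created without ContentBasedDeduplication might supply the ID explicitly, as in this hedged sketch (same assumptions as the earlier Go snippets: svc is an *sqs.SQS client, queueURL is the queue’s URL, and the values are placeholders).

_, err := svc.SendMessage(&sqs.SendMessageInput{
	QueueUrl:       aws.String(queueURL),
	MessageBody:    aws.String("price update for cust-42"),
	MessageGroupId: aws.String("cust-42"),
	// Required here because the queue doesn't have ContentBasedDeduplication set;
	// two sends with the same value are treated as duplicates.
	MessageDeduplicationId: aws.String("price-update-0001"),
})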

Now, in a lot of cases, application messages include some sort of unique identifier, often a UUID.  In that case, ContentBasedDeduplication reliably detects dupes.  So, why would you ever provide a MessageDeduplicationId?

  1. Maybe you have messages with duplicate bodies.  For example, I wrote an app that was pumping an HTTP server log into a FIFO queue.  There are lots of dumb bots and crawlers on the Internet that will fire a bunch of requests for the same resource at your server more often than once per second. They show up as duplicate lines in the log, and if you use ContentBasedDeduplication, SQS will think they’re dupes.  In this scenario, you might want to generate a MessageDeduplicationId for each line. (Actually, I didn’t; for what I was working on, I wanted unique entries, so I needed SQS to suppress the dupes.)
  2. Maybe you have messages that aren’t the same, but you want SQS to treat them as duplicates.  One example I’ve heard about is a mobile phone app that often gets network failures when it’s out in a coverage-free area, so when it sends messages it includes a field saying how many times it had to retry.  But the app doesn’t want more than one message getting through, so it keeps using the same MessageDeduplicationId until a message is acknowledged.
  3. Maybe you want to send messages that have identical content but different metadata attributes.  ContentBasedDeduplication only works on the content, so you’ll need to add a MessageDeduplicationId.

What Happens to Duplicates?
Suppose you send two messages with the same MessageDeduplicationId. There are a bunch of ways this could happen, the most obvious one being that you had a network breakage and SQS got the message, but because the acknowledgment didn’t get back to your app, the SendMessage call seemed to fail.

In any case, when this happens, SQS lets your call succeed, but then just tosses the duplicate on the floor, so that only one should ever get into the queue.

This means that if you’re talking to SQS and for some reason your call fails, you don’t have to worry whether it’s your fault, the fault of SQS, or the network’s fault.  Just retry with the same MessageDeduplicationId as many times as you want until the call succeeds, and only one copy should be transmitted.
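
A hedged sketch of that retry pattern, continuing the earlier Go snippets (svc and queueURL as before): keep the MessageDeduplicationId fixed across attempts, and at most one copy lands in the queue.

input := &sqs.SendMessageInput{
	QueueUrl:               aws.String(queueURL),
	MessageBody:            aws.String("command #7"),
	MessageGroupId:         aws.String("session-1"),
	MessageDeduplicationId: aws.String("command-7"), // unchanged across retries
}
for {
	_, err := svc.SendMessage(input)
	if err == nil {
		break // success; any copies SQS already received were discarded as duplicates
	}
	time.Sleep(time.Second) // simple fixed delay; a real app might back off exponentially
}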

How Long Does SQS Remember Duplicates?
It turns out that SQS implements transmit deduplication by remembering which MessageDeduplicationId values it’s seen.  To be precise, it remembers this for at least five minutes, which means that if you send a pair of duplicate messages more than five minutes apart, both might get through.

In most apps, this should be a complete nonissue, but it can happen.  Consider a mobile app that’s running on a device in a car.  You send a message and SQS gets it, but before you get the acknowledgment, you go into a coverage-free area and lose signal for ten minutes. When the app gets signal again, if you just retry the original call, it’s possible the receiver might get that message twice.

There are a variety of strategies you could adopt to work around this. The most obvious way is to send a request when you’ve lost network for a while, saying “what’s the last message received?”

A Note on Rate Limits
Anyone who’s used SQS seriously has come to appreciate its immense ability to soak up traffic.  Generally, when you try to put messages into SQS and you get throttled, the right thing to do is just keep trying and pretty soon it’ll work.  In fact, in many cases the SDK just does this for you so you don’t even notice.

FIFO queues are different.  The ordering imposes a real throughput limit – currently 300 requests per second per queue.  The queue just can’t go faster than that, and retrying won’t help.  Fortunately, our conversations with customers have told us that FIFO applications are generally lower-throughput—10 messages per second or lower.
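
If you do need to move more messages within that budget, note that the limit is counted per request: SendMessageBatch accepts up to 10 messages per request, so batching is one option. A hedged sketch, with the same assumptions as the earlier Go snippets (svc, queueURL, and ContentBasedDeduplication set on the queue):

_, err := svc.SendMessageBatch(&sqs.SendMessageBatchInput{
	QueueUrl: aws.String(queueURL),
	Entries: []*sqs.SendMessageBatchRequestEntry{
		{
			Id:             aws.String("1"), // Id only needs to be unique within the batch
			MessageBody:    aws.String("price tick 1"),
			MessageGroupId: aws.String("feed-a"),
		},
		{
			Id:             aws.String("2"),
			MessageBody:    aws.String("price tick 2"),
			MessageGroupId: aws.String("feed-a"),
		},
	},
})
if err != nil {
	// handle the error
}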

“At Least Once” vs. “Exactly Once”
SQS FIFO does not claim to do exactly-once delivery.  To start with, there is no such thing; go looking around the Internet and you’ll find lots of eminent Computer Scientists explaining why exactly-once is impossible in distributed systems.

And regardless, you don’t really want exactly-once delivery, because your reader could blow up due to a cosmic ray or network problem or whatever after it’s received the message, but before it’s had a chance to act on it.  In that case, you absolutely want SQS to deliver the message again.

What you want, and what FIFO queues are designed to offer, is exactly-once processing.  To understand how this works, let’s walk through the details of how you go about receiving SQS FIFO messages (which in most respects is exactly like receiving traditional non-FIFO SQS messages).

  1. You call ReceiveMessage and get back a batch; each message is accompanied by a ReceiptHandle.
  2. You can provide a VisibilityTimeout argument; if you don’t, there’s a default value, normally 30 seconds.
  3. For the VisibilityTimeout period, the messages you’ve received are hidden from other callers and, because this is a FIFO queue, other callers are partially blocked from reading those messages to preserve FIFO ordering.  “Partially blocked” has to do with MessageGroupId.  If all the messages have the same one, then the whole queue is blocked; if there are several different MessageGroupIds, only the MessageGroupIds in the batch you receive are blocked.
  4. You can also provide an optional ReceiveMessageDeduplicationId argument to ReceiveMessage.
  5. Normally, assuming everything goes well, you process your messages and then delete them.  To delete each message, you have to use the ReceiptHandle that came with it.  This tells SQS the message has been processed and will never be delivered again.  You can also use the ReceiptHandle to extend the VisibilityTimeout for messages you’re working on.  (A short sketch of this receive-and-delete flow follows the list.)
  6. Suppose you get a network breakage such that SQS got your ReceiveMessage request and sent you the messages, but you didn’t get them.  You can go ahead and retry.  If you provide the same ReceiveMessageDeduplicationId, SQS sends you the messages (and ReceiptHandles) right away, and resets the VisibilityTimeout timer.  If you don’t provide the same ReceiveMessageDeduplicationId, SQS has to wait for the VisibilityTimeout to expire before releasing the messages to you.  So the only real effect of the ReceiveMessageDeduplicationId is that retries of failed ReceiveMessage calls run faster.  If your environment and network are solid, you might not want to bother using them.
  7. One more detail: If you use the ReceiptHandles to delete messages or change their visibility, you can no longer retry with that same ReceiveMessageDeduplicationId.
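
Here’s a minimal, hedged sketch of that receive-and-delete flow with the AWS SDK for Go (svc and queueURL as in the earlier snippets; process is a placeholder for your own handling).

resp, err := svc.ReceiveMessage(&sqs.ReceiveMessageInput{
	QueueUrl:            aws.String(queueURL),
	MaxNumberOfMessages: aws.Int64(10),
	VisibilityTimeout:   aws.Int64(60), // hide these messages from other callers for 60 seconds
	WaitTimeSeconds:     aws.Int64(20), // long polling
})
if err != nil {
	// handle the error
}
for _, msg := range resp.Messages {
	process(msg) // placeholder for your application's work
	// Deleting with the ReceiptHandle tells SQS the message was processed
	// and must never be delivered again.
	if _, derr := svc.DeleteMessage(&sqs.DeleteMessageInput{
		QueueUrl:      aws.String(queueURL),
		ReceiptHandle: msg.ReceiptHandle,
	}); derr != nil {
		// handle the error
	}
}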

Can this ever go wrong? Yes, but it’s a pretty narrow corner case.  Suppose you receive a batch of messages, and for some reason or other you get stuck, and the VisibilityTimeout expires before you get around to deleting them.  In that case, they’d be released, and if there are other readers, they might end up reading and processing duplicates.

By the way, in this scenario where the VisibilityTimeout has expired, your DeleteMessage call will fail, so at least you can be sure of detecting the situation.

Some things you might do to prevent this from happening:

  • Have only one reader.
  • Use a nice, long VisibilityTimeout.
  • Keep track of the time, and before you process each message, double check to be sure its VisibilityTimeout hasn’t expired.

Summing up
We’ve gone through a lot of details here.  However, in the most common case, where you have reasonably reliable connections and your messages already have unique IDs, you can ignore most of the details and just take the defaults.  Set ContentBasedDeduplication on your queue, then just go ahead and use SendMessage as you always have, only with MessageGroupId arguments.

On the receiver side, if you’re worried about retry performance, supply a ReceiveMessageDeduplicationId on each call.  Other than that, go ahead and use the same SQS calls to process messages that you already have.  The results should be:

  1. All the messages with the same MessageGroupId will be processed in the order they were sent.
  2. If a message is read and deleted, no duplicate should ever be delivered to a queue reader.

Happy fully ordered, duplicate-free message processing!


Mocking Out the AWS SDK for Go for Unit Testing

In our previous post, we showed how you could use the request handler stack in the AWS SDK for Go to extend or modify how requests are sent and received. Now, we’d like to expand on the idea of extending the SDK and discuss how you can unit test code that uses the SDK. The SDK’s service clients are a common component to replace with mock implementations in unit tests.

You can easily mock out the SDK service clients by taking advantage of Go’s interfaces. If your code depends on an interface rather than on the concrete service client, you can swap in a mock implementation for your unit tests. You can define these interfaces yourself or use the interfaces that the SDK already defines for each service client’s API. The service clients’ API interfaces are simple to use and give you the flexibility to test new or existing code.

You can find the API interface package nested under each service client package, named “iface”. For example, the Amazon SQS API interface package is “sqsiface”, with the import path of github.com/aws/aws-sdk-go/service/sqs/sqsiface.

The following example shows one pattern for how your code can use the SDK’s SQS API interface instead of the concrete client. After that, we’ll look at how you could mock out the SDK’s SQS client.

func main() {
	sess := session.Must(session.NewSession())

	q := Queue{
		Client: sqs.New(sess),
		URL:    os.Args[1],
	}

	msgs, err := q.GetMessages(20)
	// ...
}

type Queue struct {
	Client sqsiface.SQSAPI
	URL    string
}

func (q *Queue) GetMessages(waitTimeout int64) ([]Message, error) {
	params := sqs.ReceiveMessageInput{
		QueueUrl: aws.String(q.URL),
	}
	if waitTimeout > 0 {
		params.WaitTimeSeconds = aws.Int64(waitTimeout)
	}
	resp, err := q.Client.ReceiveMessage(&params)
	// ...
}

Because the previous code uses the SQS API interface, our tests can mock out the client with the responses we want so that we can verify GetMessages returns the parsed messages correctly.

type mockedReceiveMsgs struct {
	sqsiface.SQSAPI
	Resp sqs.ReceiveMessageOutput
}

func (m mockedReceiveMsgs) ReceiveMessage(in *sqs.ReceiveMessageInput) (*sqs.ReceiveMessageOutput, error) {
	// Only need to return mocked response output
	return &m.Resp, nil
}

func TestQueueGetMessage(t *testing.T) {
	cases := []struct {
		Resp     sqs.ReceiveMessageOutput
		Expected []Message
	}{
		{
			Resp: sqs.ReceiveMessageOutput{
				Messages: []*sqs.Message{
					{Body: aws.String(`{"from":"user_1","to":"room_1","msg":"Hello!"}`)},
					{Body: aws.String(`{"from":"user_2","to":"room_1","msg":"Hi user_1 :)"}`)},
				},
			},
			Expected: []Message{
				{From: "user_1", To: "room_1", Msg: "Hello!"},
				{From: "user_2", To: "room_1", Msg: "Hi user_1 :)"},
			},
		},
	}

	for i, c := range cases {
		q := Queue{
			Client: mockedReceiveMsgs{Resp: c.Resp},
			URL:    fmt.Sprintf("mockURL_%d", i),
		}
		msgs, err := q.GetMessages(20)
		if err != nil {
			t.Fatalf("%d, unexpected error", err)
		}
		if a, e := len(msgs), len(c.Expected); a != e {
			t.Fatalf("%d, expected %d messages, got %d", i, e, a)
		}
		for j, msg := range msgs {
			if a, e := msg, c.Expected[j]; a != e {
				t.Errorf("%d, expected %v message, got %v", i, e, a)
			}
		}
	}
}

Using this pattern will help ensure your code that uses the SDK is easy to test and maintain. You can find the full code example used in this post on GitHub at https://github.com/aws/aws-sdk-go/tree/master/example/service/sqs/mockingClientsForTests.

Retry Throttling

by Sattwik Pati

In this blog post, we discuss the existing request retry feature and the new retry throttling feature that we’ve rolled out in the AWS SDK for .NET v3, starting with version 3.3.4.0 of the AWSSDK.Core package.

With request retry, client-side requests are retried, and often succeed, in cases involving transient network or service issues. The advantage to you as a client is that you aren’t exposed to the noise from these transient exceptions and don’t have to write the retry code yourself. The downside is that in situations such as a network outage or service unavailability, in which all retried requests fail, the retries tie up the client application thread and produce fail-slow behavior. The client eventually gets a service unavailable exception that could have been surfaced earlier, without the retry loop. This delay hurts the client’s recovery time and prolongs the client-side impact. We want to walk a middle ground: keep the retry feature, but with some limiting constraints.

Retry throttling, as its name suggests, throttles retry attempts when a large number of them are failing. Each retry attempt drains an internal retry capacity pool; when the pool is depleted, no more retries are made. Successful responses refill the pool, and retries resume. Retry throttling handles “retry storm” situations by entering a fail-fast mode, in which exceptions are surfaced immediately and the needless retry loop is skipped. Also, because retry throttling kicks in only when a large number of requests and their retry attempts fail, retries of occasional transient failures are unaffected by this feature.

The AWS SDK for Java has already introduced this feature to great effect. Their blog post contains the metrics that compare situations when throttling is enabled versus when it is not.

Disabling Retry Throttling

Retry throttling is enabled by default. You can disable it by setting the ThrottleRetries property to false on the config object, as the following example shows with an AmazonS3Config object.

AmazonS3Config config = new AmazonS3Config();
config.ThrottleRetries = false; // Default value is true

As you can see, it’s easy to opt out of this feature. Retry throttling can improve the ability of the SDK to adapt to sub-optimal situations. Feel free to leave questions or comments below!

AWS Toolkit for Eclipse: VPC Configuration for an AWS Elastic Beanstalk Environment

I’m glad to announce that the AWS Elastic Beanstalk plugin in the AWS Toolkit for Eclipse now supports configuring a VPC for your Elastic Beanstalk environments. If you’re new to the AWS Toolkit for Eclipse, see the User Guide for a basic introduction and setup guidance. If you’re new to the AWS Elastic Beanstalk plugin, see AWS Elastic Beanstalk and Eclipse Integration to learn how to manage your application and environment within Eclipse. If you’re not familiar with VPC configurations for Elastic Beanstalk environments, see Using Elastic Beanstalk with Amazon VPC.

The following screenshots show the plugin New Server wizard pages for VPC configuration. If you’ve used the Elastic Beanstalk console, these user interfaces should be familiar. On the Configure Application and Environment page, you can choose Select a VPC to use when creating your environment to open the VPC Configuration page. Otherwise, when you click Next, the Permissions page opens directly.

Configure Application and Environment Page

On the VPC Configuration page, you can set up the VPC configuration, such as subnets, security groups, and so on. You must select at least one subnet for a valid configuration. ELB visibility is disabled unless you chose Load Balanced Web Server Environment for the Environment type on the previous Configure Application and Environment page. For more information about the options and how to configure them for your VPC needs, see Using Elastic Beanstalk with Amazon VPC.

VPC Configuration Page

As you can see, it’s easy to configure a VPC for your Elastic Beanstalk environment using the AWS Toolkit for Eclipse plugin. Please let us know if there are other features you want to see in this toolkit. We appreciate your comments.

Using webpack and the AWS SDK for JavaScript to Create and Bundle an Application – Part 2

In the previous post in this series, we introduced how to use webpack and the AWS SDK for JavaScript to create and bundle an application.

In this post, we’re going to dig a little bit into other features, such as creating bundles with only the AWS services you need, and generating bundles that will also run in Node.js.

Importing Individual Services

One of the benefits of using webpack is that it can parse your dependencies and include only the code your application needs. You might have noticed that in our previous project, webpack generated a bundle that was 2.38 MB. That’s because webpack is currently importing the entire AWS SDK for JavaScript based on the following statement in s3.js.

var AWS = require('aws-sdk');

We can help webpack include only the Amazon S3 service if we update our require statement to the following:

var S3 = require('aws-sdk/clients/s3');

All the AWS SDK configuration options that are available using AWS.config can also be set when instantiating a service. We can still access the AWS namespace to set global configuration across all services with the following:

var AWS = require('aws-sdk/global');

Here’s an example of what our s3.js file would look like with these changes.

s3.js

// Import the Amazon S3 service client
var S3 = require('aws-sdk/clients/s3');

// Set credentials and region
var s3 = new S3({
    apiVersion: '2006-03-01',
    region: 'REGION', 
    credentials: {/* */}
  });

/**
 * This function retrieves a list of objects
 * in a bucket, then triggers the supplied callback
 * with the received error or data
 */
function listObjects(bucket, callback) {
  s3.listObjects({
    Bucket: bucket
  }, callback);
}

// Export the handler function
module.exports = listObjects;

Now when we run npm run build, webpack reports the following:

    Version: webpack 1.13.2
    Time: 720ms
      Asset    Size  Chunks             Chunk Names
    bundle.js  797 kB     0  [emitted]  main
      [0] multi main 28 bytes {0} [built]
      [1] ./browser.js 653 bytes {0} [built]
      [2] ./s3.js 803 bytes {0} [built]
       + 155 hidden modules

Now, our generated bundle has dropped from 2.38 MB to 797 KB.

You can configure webpack to minify the generated code to reduce the final size even more!

Generating Node.js Bundles

You can use webpack to generate bundles that run in Node.js by specifying target: 'node' in the configuration. This can be useful when you’re running a Node.js application in an environment where your disk space is limited.

Let’s update our project to build a Node.js bundle by creating a file called node.js. This file is nearly identical to browser.js; however, instead of listing S3 objects in the DOM, it outputs them to the console.

node.js

// Import the listObjects function
var listObjects = require('./s3');
var bucket = 'BUCKET';
// Call listObjects on the specified bucket
listObjects(bucket, function(err, data) {
  if (err) {
    console.log(err);
  } else {
    console.log('S3 Objects in ' + bucket + ':');
    // Print the Key for each returned Object
    data.Contents.forEach(function(metadata) {
      console.log('Key: ' + metadata.Key);
    });
  }
});

Next, we’ll update our webpack.config.js to use node.js as the entry point, and add a new field, target: "node" to let webpack know it should generate a Node.js bundle.

webpack.config.js

// Import path for resolving file paths
var path = require('path');
module.exports = {
  // Specify the entry point for our app
  entry: [
    path.join(__dirname, 'node.js')
  ],
  // Specify the output file containing our bundled code
  output: {
    path: __dirname,
    filename: 'bundle.js'
  },
  // Let webpack know to generate a Node.js bundle
  target: "node",
  module: {
    /**
      * Tell webpack how to load JSON files
      * By default, webpack only knows how to handle
      * JavaScript files
      * When webpack encounters a 'require()' statement
      * where a JSON file is being imported, it will use
      * the json-loader
      */
    loaders: [
      {
        test: /\.json$/,
        loaders: ['json']
      }
    ]
  }
}

Run npm run build to generate the new bundle. You can test this code on the command line by running node bundle.js. This should output a list of S3 objects in the console!

Give It a Try!

We look forward to hearing what you think of this new support for webpack in the AWS SDK for JavaScript v2.6.1! Try it out and leave your feedback in the comments or on GitHub!

Using webpack and the AWS SDK for JavaScript to Create and Bundle an Application – Part 1

We introduced support for webpack in version 2.6.1 of the AWS SDK for JavaScript. Using tools such as webpack with the SDK gives you a way to bundle your JavaScript modules so that you can write modularized code for the browser.

This post will walk through how to create and bundle a simple application that displays a list of Amazon S3 objects from a bucket by using webpack and the AWS SDK for JavaScript.

Why Use webpack?

Tools such as webpack parse your application code, searching for import or require statements to create bundles that contain all the assets your application needs.

Although webpack only knows how to handle JavaScript files by default, you can also configure it to handle other types, such as JSON, CSS, and even image files! This makes it great at packaging up your application’s assets so that they can be easily served through a webpage.

Using webpack, you can create bundles with only the services you need, and generate bundles that will also run in Node.js!

Prerequisites

To follow along with this post, you’ll need to have Node.js and npm installed (npm comes bundled with Node.js). When you have these tools, create a new directory and download the dependencies we’ll need for this project by running npm install x, where x is the following:

  • aws-sdk: the AWS SDK
  • webpack: the webpack CLI and JavaScript module
  • json-loader: a webpack plugin that tells webpack how to load JSON files

Setting Up the Application

Start by creating a directory to store the project. We’ll name our project aws-webpack.

Our application will contain three files that do the following:

  • s3.js exports a function that accepts a bucket as a string, and a callback function, and returns a list of objects to the callback function.
  • browser.js imports the s3.js module, calls the listObjects function, and displays the results.
  • index.html references the JavaScript bundle that webpack creates.

Create these files in the project’s root directory, as follows:

s3.js

Important: We’ve left configuring the credentials to you.

// Import the AWS SDK
var AWS = require('aws-sdk');

// Set credentials and region,
// which can also go directly on the service client
AWS.config.update({region: 'REGION', credentials: {/* */}});

var s3 = new AWS.S3({apiVersion: '2006-03-01'});

/**
 * This function retrieves a list of objects
 * in a bucket, then triggers the supplied callback
 * with the received error or data
 */
function listObjects(bucket, callback) {
  s3.listObjects({
    Bucket: bucket
  }, callback);
}

// Export the handler function
module.exports = listObjects;

browser.js

// Import the listObjects function
var listObjects = require('./s3');
var bucket = 'BUCKET';
// Call listObjects on the specified bucket
listObjects(bucket, function(err, data) {
  if (err) {
    alert(err);
  } else {
    var listElement = document.getElementById('list');
    var content = 'S3 Objects in ' + bucket + ':\n';
    // Print the Key for each returned Object
    content += data.Contents.map(function(metadata) {
      return 'Key: ' + metadata.Key;
    }).join('\n');
    listElement.innerText = content;
  }
});

index.html

<!DOCTYPE html>
<html>
    <head>
        <title>AWS SDK with webpack</title>
    </head> 
    <body>
        <div id="list"></div>
        <script src="bundle.js"></script>
    </body>
</html>

At this point, we have one JavaScript file that handles making requests to Amazon S3, one JavaScript file that appends a list of S3 object keys to our webpage, and an HTML file that contains a single div tag and script tag. In this last step before our webpage can display data, we’ll use webpack to generate the bundle.js file that the script tag references.

Configuring webpack

You specify configuration options in webpack by using a plain JavaScript file. By default, webpack looks for a file named webpack.config.js in your project’s root directory. Let’s create our webpack.config.js.

webpack.config.js

// Import path for resolving file paths
var path = require('path');
module.exports = {
  // Specify the entry point for our app.
  entry: [
    path.join(__dirname, 'browser.js')
  ],
  // Specify the output file containing our bundled code
  output: {
    path: __dirname,
    filename: 'bundle.js'
  },
  module: {
    /**
      * Tell webpack how to load 'json' files because
      * by default, webpack only knows how to handle
      * JavaScript files.
      * When webpack encounters a 'require()' statement
      * where a 'json' file is being imported, it will use
      * the json-loader.  
      */
    loaders: [
      {
        test: /\.json$/,
        loaders: ['json']
      }
    ]
  }
}

We specified our entry point as browser.js in webpack.config.js. The entry point is the file webpack uses to start searching for imported modules. We also defined the output as bundle.js. This bundle will contain all the JavaScript our application needs to run. We don’t have to specify s3.js as an entry point; webpack knows to include it because it’s imported by browser.js. Likewise, webpack knows to include the aws-sdk because it’s imported by s3.js!

Notice that we specified a loader to tell webpack how to handle importing JSON files, in this case by using the json-loader we installed earlier. By default, webpack only supports JavaScript, but uses loaders to add support for importing other file types as well. The AWS SDK makes heavy use of JSON files, so without this extra configuration, webpack will throw an error when generating the bundle.

Running webpack

We’re almost ready to build our application! In package.json, add "build": "webpack" to the scripts object.

{
  "name": "aws-webpack",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo "Error: no test specified" && exit 1",
    "build": "webpack"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "aws-sdk": "^2.6.1"
  },
  "devDependencies": {
    "json-loader": "^0.5.4",
    "webpack": "^1.13.2"
  }
}

Now run npm run build from the command line and webpack will generate a bundle.js file in your project’s root directory. The results webpack reports should look something like this:

    Version: webpack 1.13.2
    Time: 1442ms
      Asset     Size  Chunks             Chunk Names
    bundle.js  2.38 MB     0  [emitted]  main
      [0] multi main 28 bytes {0} [built]
      [1] ./browser.js 653 bytes {0} [built]
      [2] ./s3.js 760 bytes {0} [built]
       + 343 hidden modules    

At this point, you can open index.html in a browser and see the list of S3 object keys from your bucket.

Give It a Try!

In an upcoming post, we’ll explore some other features of using webpack with the AWS SDK for JavaScript.

We look forward to hearing what you think of this new support for webpack in the AWS SDK for JavaScript v2.6.1! Try it out and leave your feedback in the comments or on GitHub!

General Availability for .NET Core Support in the AWS SDK for .NET

by Norm Johanson

Today, we announce the general availability (GA) of our .NET Core support in the AWS SDK for .NET. Previously, we supported .NET Core in our 3.2.x beta NuGet packages while maintaining our 3.1.x NuGet packages on our stable master branch with the frequent AWS service updates.

With the move to GA status for .NET Core, we’ve merged .NET Core support into the stable master branch and, going forward, will release version 3.3.x NuGet packages for the AWS SDK for .NET. We’ll add AWS service updates to our .NET Core version at the same time we add them to the rest of the .NET platforms we support, like .NET Framework 3.5 and 4.5. The SDK’s move to GA also means our AWS Tools for PowerShell Core module (AWSPowerShell.NetCore) is now generally available, and its version bumps to 3.3.x to match the underlying SDK version.

This release is one more step in our continuing support for .NET Core on AWS. Other exciting .NET Core releases we’ve had this year include:

For help setting up and configuring the SDK for use with .NET Core, see our previous post on some of the extensions we added to take advantage of the new .NET Core frameworks.

We welcome your feedback. Check out our GitHub repository and let us know what you think of our .NET and .NET Core support.

Configuring AWS SDK with .NET Core

by Norm Johanson

One of the biggest changes in .NET Core is the removal of ConfigurationManager and the standard app.config and web.config files that were used ubiquitously with .NET Framework and ASP.NET applications. The AWS SDK for .NET used this configuration system to set things like AWS credentials and region so that you wouldn’t have to do this in code.

A new configuration system in .NET Core allows any type of input source from any location. Also, the configuration object isn’t a global singleton like the old ConfigurationManager was, so the AWS SDK for .NET doesn’t have access to read settings from it.

To make it easy to use the AWS SDK for .NET with .NET Core, we have released a new NuGet package called AWSSDK.Extensions.NETCore.Setup. Like many .NET Core libraries, it adds extension methods to the IConfiguration interface to make getting the AWS configuration seamless.

Using AWSSDK.Extensions.NETCore.Setup

If we create an ASP.NET Core MVC application in Visual Studio, the constructor in Startup.cs handles configuration: it reads in the various input sources using a ConfigurationBuilder and sets the Configuration property to the built IConfiguration object.

public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
        .AddEnvironmentVariables();
    Configuration = builder.Build();
}

To use the Configuration object to get the AWS options, we first add the AWSSDK.Extensions.NETCore.Setup NuGet package. Then we add our options to the configuration file. Notice that one of the files added to the ConfigurationBuilder is called $"appsettings.{env.EnvironmentName}.json". If you look at the Debug tab in the project’s properties, you can see the environment is set to Development. This works great for local testing because we can put our configuration in the appsettings.Development.json file, which is loaded only during local testing in Visual Studio. When we deploy to an Amazon EC2 instance, EnvironmentName defaults to Production, so this file is ignored and the AWS SDK for .NET falls back to the IAM credentials and region configured for the EC2 instance.

Let’s add an appsettings.Development.json file to our project and supply our AWS settings.

{
  "AWS": {
    "Profile": "local-test-profile",
    "Region": "us-west-2"
  }
}

To get the AWS options set in the file, we call the extension method that is added to IConfiguration, GetAWSOptions. To construct a service client from these options, we call CreateServiceClient. The following example code shows how to create an S3 service client.

var options = Configuration.GetAWSOptions();
IAmazonS3 client = options.CreateServiceClient<IAmazonS3>();

ASP.NET Core Dependency Injection

The AWSSDK.Extensions.NETCore.Setup NuGet package also integrates with a new dependency injection system in ASP.NET Core. The ConfigureServices method in Startup is where the MVC services are added. If the application is using Entity Framework, this is also where that is initialized.

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc();
}

The AWSSDK.Extensions.NETCore.Setup NuGet package adds new extension methods to IServiceCollection that you can use to add AWS services to the dependency injection. The following code shows how we add the AWS options read from IConfiguration and add S3 and Amazon DynamoDB to our list of services.

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc();
    services.AddDefaultAWSOptions(Configuration.GetAWSOptions());
    services.AddAWSService<IAmazonS3>();
    services.AddAWSService<IAmazonDynamoDB>();
}

Now, if our MVC controllers use either IAmazonS3 or IAmazonDynamoDB as parameters in their constructors, the dependency injection system passes those services in.

public class HomeController : Controller
{
    IAmazonS3 S3Client { get; set; }

    public HomeController(IAmazonS3 s3Client)
    {
        this.S3Client = s3Client;
    }

    ...

}

Summary

We hope this new AWSSDK.Extensions.NETCore.Setup NuGet package helps you get started with ASP.NET Core and AWS. Feel free to give us your feedback at our GitHub repository for the AWS SDK for .NET.

Custom Elastic Beanstalk Application Deployments

by Norm Johanson

In the previous post, you learned how to use the new deployment manifest for the Windows container in AWS Elastic Beanstalk to deploy a collection of ASP.NET Core and traditional ASP.NET applications. The deployment manifest supports a third deployment type, custom application deployment.

Custom application deployment is a powerful feature for advanced users who want to leverage the power of Elastic Beanstalk to create and manage their AWS resources and also have complete control over how their application is deployed. For a custom application deployment, you declare the PowerShell scripts for the three actions that Elastic Beanstalk performs: install, restart, and uninstall. Install is used when a deployment is initiated, restart is used when the RestartAppServer API is called (which can be done from either the toolkit or the web console), and uninstall is invoked on the previous deployment whenever a new deployment occurs.

For example, you might have an ASP.NET application that you want to deploy, and your documentation team has written a static website that they want to include with the deployment. You can do this by writing your deployment manifest as follows.

{
  "manifestVersion": 1,
  "deployments": {
 
    "msDeploy": [
      {
        "name": "app",
        "parameters": {
          "appBundle": "CoolApp.zip",
          "iisPath": "/"
        }
      }
    ],
    "custom": [
      {
        "name": "PowerShellDocs",
        "scripts": {
          "install": {
            "file": "install.ps1"
          },
          "restart": {
            "file": "restart.ps1"
          },
          "uninstall": {
            "file": "uninstall.ps1"
          }
        }
      }
    ]
  }
}

The scripts listed for each action are in the application bundle relative to the deployment manifest file. For this example, the application bundle will also contain a documentation.zip file that contains the static website from your documentation team.

The install.ps1 script extracts the .zip file and sets up the IIS path.

Add-Type -assembly "system.io.compression.filesystem"
[io.compression.zipfile]::ExtractToDirectory('./documentation.zip', 'c:\inetpub\wwwroot\documentation')

C:\Windows\SysNative\WindowsPowerShell\v1.0\powershell.exe -Command {New-WebApplication -Name documentation -PhysicalPath c:\inetpub\wwwroot\documentation -Force}

Because your application is running in IIS, the restart action will invoke an IIS reset.

iisreset /timeout:1

For uninstall scripts, it’s important to clean up all settings and files that were created during the install stage, so that the new version being installed doesn’t collide with the previous deployment. For this example, you need to remove the IIS application for the static website and remove the files.

C:\Windows\SysNative\WindowsPowerShell\v1.0\powershell.exe -Command {Remove-WebApplication -Name documentation}

Remove-Item -Recurse -Force 'c:\inetpub\wwwroot\documentation'

Using these script files and the documentation.zip file included in the application bundle, the deployment installs your ASP.NET application and then deploys the documentation site.

This example showed a simple deployment of a static website. By using the power of custom application deployment, you can deploy any type of application and let Elastic Beanstalk manage the AWS resources for it.