AWS Developer Blog

Amazon S3 TransferManager

by Jason Fulghum | in Java

One of the great APIs inside the AWS SDK for Java is a class called TransferManager that makes uploading data to and downloading data from Amazon S3 easy and convenient.

TransferManager provides asynchronous management for uploads and downloads between your application and Amazon S3. You can easily check on the status of your transfers, add handlers to run code when a transfer completes, cancel transfers, and more.

But perhaps the best thing about TransferManager is how it hides the complexity of transferring files behind an extremely simple API. TransferManager is essentially two operations: upload and download. From there you just work with your upload and download objects to interact with your transfers. The following example shows how easy it is to create a TransferManager instance, upload a file, and print out its progress as a percent while it’s transferring.

// Each instance of TransferManager maintains its own thread pool
// where transfers are processed, so share an instance when possible
TransferManager tx = new TransferManager(credentials);

// The upload and download methods return immediately, while
// TransferManager processes the transfer in the background thread pool
Upload upload = tx.upload(bucketName, myFile.getName(), myFile);

// While the transfer is processing, you can work with the transfer object
while (!upload.isDone()) {
    System.out.println(upload.getProgress().getPercentTransferred() + "%");
}
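
The download side of the API is just as simple. Here's a minimal sketch; the destination file path is an illustrative value, not a requirement:

// Downloads are kicked off the same way, and return a Download object
// you can poll or wait on
Download download = tx.download(bucketName, myFile.getName(), new File("/tmp/downloaded-copy"));
download.waitForCompletion();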

Behind this simple API, TransferManager is doing a lot of work for you. Depending on the size and data source for your upload, TransferManager adjusts the algorithm it uses to process your transfer, in order to get the best performance and reliability. Whenever possible, uploads are broken up into multiple pieces, so that several pieces can be sent in parallel to provide better throughput. In addition to higher throughput, this approach also enables more robust transfers, since an I/O error in any individual piece means the SDK only needs to retransmit the one affected piece, and not the entire transfer.
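
If you want to influence when those multipart uploads kick in, you can tune the TransferManager's configuration before starting transfers. A minimal sketch, with illustrative threshold and part-size values:

// Configure when TransferManager switches to multipart uploads, and how
// large each part should be (the values here are just examples)
TransferManagerConfiguration configuration = new TransferManagerConfiguration();
configuration.setMultipartUploadThreshold(16 * 1024 * 1024);  // 16 MB
configuration.setMinimumUploadPartSize(5 * 1024 * 1024);      // 5 MB
tx.setConfiguration(configuration);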

TransferManager includes several more advanced features, such as recursively downloading entire sections of S3 buckets and cleaning up the pieces of failed multipart uploads (both sketched briefly after the next example). One of the more commonly used options is the ability to attach a progress listener to your uploads and downloads, which can run custom code at different points in the transfer’s lifecycle. The following example demonstrates using a progress listener to periodically print out the transfer’s progress, and print a final message when the transfer completes.

TransferManager tx = new TransferManager(credentials);
// Declared final so the anonymous listener below can reference it
final Upload upload = tx.upload(bucketName, myFile.getName(), myFile);

// You can set a progress listener directly on a transfer, or you can pass one
// in when you start the transfer to have it attached as soon as the transfer begins
upload.setProgressListener(new ProgressListener() {
    // This method is called periodically as your transfer progresses
    public void progressChanged(ProgressEvent progressEvent) {
        System.out.println(upload.getProgress().getPercentTransferred() + "%");

        if (progressEvent.getEventCode() == ProgressEvent.COMPLETED_EVENT_CODE) {
            System.out.println("Upload complete!!!");
        }
    }
});

// waitForCompletion blocks the current thread until the transfer completes
// and will throw an AmazonClientException or AmazonServiceException if
// anything went wrong.
upload.waitForCompletion();
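
For instance, here's a minimal sketch of the directory download and multipart cleanup features mentioned above; the key prefix, destination directory, and one-day cutoff are illustrative values:

// Download every object under a key prefix into a local directory
MultipleFileDownload directoryDownload =
    tx.downloadDirectory(bucketName, "photos/2013/", new File("/tmp/photos"));
directoryDownload.waitForCompletion();

// Abort any multipart uploads to this bucket that were started more than
// a day ago and never finished, cleaning up their orphaned parts
Date oneDayAgo = new Date(System.currentTimeMillis() - 24L * 60 * 60 * 1000);
tx.abortMultipartUploads(bucketName, oneDayAgo);

// When your application is finished transferring, release TransferManager's threads
tx.shutdownNow();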

For a complete example of using Amazon S3 TransferManager and progress listeners, see the AmazonS3TransferManager sample that ships with the SDK for Java.

Are you using TransferManager in any of your projects yet? What custom code do you run in your progress listeners? Let us know in the comments!

Contributing to the AWS SDK for Ruby

by Loren Segal | in Ruby

We love getting contributions from the community to the AWS SDK for Ruby. Whether it's added features, fixed bugs, or just extra optimizations, submitting a pull request helps make the SDK better for all of our users. Since we started the project, the SDK has seen over 60 contributors providing everything from one-line typo fixes to a 500+ line high-level service abstraction. In the past month alone we have merged over 20 pull requests, and we continue to see more every day. We not only want to keep seeing these pull requests, we want to see more of them, so if you have been submitting pull requests, please keep them coming!

However, if you haven’t yet submitted a pull request, today is a great day to start. We know that contributing code to a large project can occasionally seem daunting, so let’s talk about some of the very easy things you can do to make it easier for us to evaluate your patch and possibly merge it into the SDK.

How to Contribute on GitHub

If you would like to contribute to the SDK you can do so on our GitHub project at https://github.com/aws/aws-sdk-ruby. From there you can fork the repository and submit a pull request of your changes. GitHub has tools and guides to make the technical portions of this process as easy as possible, including desktop applications for Microsoft Windows and Mac OS X to sync your code to and from the site. You should visit GitHub’s help pages to get more information about submitting pull requests.

Contributing Documentation

Contributing documentation and fixing typos is one of the easiest ways to get started as a contributor to the SDK. Fortunately, GitHub makes editing files for small text changes very easy thanks to its inline file editor. If you find a typo or missing documentation, you can navigate to the file in GitHub and click the "Edit" button at the top to quickly bring up the file editor:

Contributing documentation part 1

You can now edit the file and write a descriptive comment about the change you’ve made. You should also click "Preview" to make sure the document still renders correctly on GitHub.

Editing a file in GitHub

If you do not already have a fork of the repository, GitHub will automatically fork the project to a new repository that you can use to submit a pull request from.

Submitting a pull request

This opens a pull request against the aws/aws-sdk-ruby project and begins the process of allowing us to evaluate the changes.

Contributing Code

To contribute code, you first need to clone your own fork of the aws-sdk-ruby Git repository. Once you have cloned the repository, use the Bundler bundle install command to install all development dependencies.

You should then run tests to ensure that there are no issues with the project before you get started. To run tests, run the rake command.

After you verify that the tests pass, you can make your changes. Please make sure to include new tests if you change or add behavior to the SDK. You should then run tests and ensure that they all pass before submitting your pull request, as this makes the process of merging your commits hassle-free.

Once you have added your code and tests, and all the tests pass, commit your changes with descriptive messages and push them to your forked repository. You can then log onto GitHub and click the "Pull Request" button at the top of your fork of the SDK.

Submitting a pull request from a forked repository

Submitting the pull request follows the same process as for documentation. Once you submit the pull request, we can evaluate the code and discuss any adjustments, if necessary.

Accepting Contributions!

As I said before, we love getting contributions from the community. If you have ideas about how the AWS SDK for Ruby can be improved, we definitely appreciate the help. Feature suggestions are always encouraged, but providing working code with tests in the form of a pull request will usually mean that we can get the changes into the SDK much faster. Plus, you get credit for making the SDK a better product, not just for yourself, but for all of your fellow Rubyists!

So go ahead, check out the repository on GitHub, and start submitting patches.

Asynchronous Requests with the AWS SDK for Java

by Jason Fulghum | in Java

In addition to the standard, blocking/synchronous clients in the AWS SDK for Java that you’re probably already familiar with, the SDK also contains non-blocking/asynchronous clients that are just as easy to use, and often more convenient for certain types of applications.

When you call an operation with one of the standard, synchronous clients in the SDK, your code is blocked while the SDK sends your request, waits for the service to process it, and parses the response. This is an easy way to work with the SDK, but there are some situations where you just want to kick off the request, and let your code continue executing. The asynchronous clients in the SDK allow you to do exactly that. Kick off your requests, and check back later to see if they completed.

AmazonDynamoDBAsync dynamoDB = new AmazonDynamoDBAsyncClient(myCredentials);
dynamoDB.deleteTableAsync(new DeleteTableRequest(myTableName));
// Your code immediately continues executing, while your request runs in the background

Now that you know how to kick off your asynchronous request, how do you handle the response when it arrives? All of the asynchronous operations return a Future object that you can poll to see if your request has completed processing and if a response object is available. But sitting around polling a Future defeats the purpose of freeing up your code to continue executing after you kick off the request.
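
If you do want to poll, the pattern looks roughly like this. This is a minimal sketch; the work your code does between kicking off the request and checking on it is up to you:

// Every *Async method returns a Future you can check on later
Future<DescribeTableResult> future = dynamoDB.describeTableAsync(
        new DescribeTableRequest().withTableName(myTableName));

// ... your code continues doing other work here ...

if (future.isDone()) {
    try {
        // get() returns the parsed response, or throws if the request failed
        DescribeTableResult result = future.get();
        System.out.println(result.getTable().getTableStatus());
    } catch (ExecutionException e) {
        System.out.println("Describe table failed: " + e.getCause().getMessage());
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}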

Usually, what you really want is to run some code that processes the response as soon as the request finishes. The asynchronous operations let you pass in an AsyncHandler implementation, which the SDK runs automatically as soon as your request finishes processing.

For example, the following piece of code kicks off an asynchronous request to describe an Amazon DynamoDB table. It passes in an AsyncHandler implementation, and when the request completes, the SDK runs the onSuccess method, which updates a UI label with the table’s status. AsyncHandler also provides an onError method that allows you to handle any errors that occur while processing your request.

AmazonDynamoDBAsync dynamoDB = new AmazonDynamoDBAsyncClient(myCredentials);
dynamoDB.describeTableAsync(new DescribeTableRequest().withTableName(myTableName), 
    new AsyncHandler<DescribeTableRequest, DescribeTableResult>() {
        public void onSuccess(DescribeTableRequest request, DescribeTableResult result) {
            myLabel.setText(result.getTable().getTableStatus());
        }
             
        public void onError(Exception exception) {
            System.out.println("Error describing table: " + exception.getMessage());
            // Callers can also test if exception is an instance of 
            // AmazonServiceException or AmazonClientException and cast 
            // it to get additional information
        }
    });

Using the asynchronous clients in the SDK is easy and convenient. There are a lot of applications where processing requests in the background makes sense. UI applications are a great fit for asynchronous clients, since you don’t want to lock up your main UI thread, and consequently the entire UI, while the SDK processes a request. Network issues can lead to longer processing times, and an unresponsive UI leads to unhappy customers.

Another great use for the asynchronous clients is when you want to kick off a large batch of requests. If the requests don’t need to be executed serially, then you can gain a lot of throughput in your application by using the asynchronous clients to kick off many requests, all from a single thread.
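
For example, here is a rough sketch of fanning out a batch of describe-table requests from a single thread. The myTableNames collection is a hypothetical list of table names:

// Kick off all the requests without waiting for any of them to finish
List<Future<DescribeTableResult>> futures = new ArrayList<Future<DescribeTableResult>>();
for (String tableName : myTableNames) {
    futures.add(dynamoDB.describeTableAsync(
            new DescribeTableRequest().withTableName(tableName)));
}

// All the requests are now in flight; collect the results as they finish
for (Future<DescribeTableResult> future : futures) {
    try {
        System.out.println(future.get().getTable().getTableStatus());
    } catch (ExecutionException e) {
        System.out.println("Request failed: " + e.getCause().getMessage());
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}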

Have you tried the asynchronous clients in the AWS SDK for Java yet? What kinds of applications are you using them for? Let us know how they’re working for you in the comments below.

More information on asynchronous programming with the AWS SDK for Java

AWS SDK for Ruby Release v1.8.3

by Trevor Rowe | in Ruby

We just published version 1.8.3 of the AWS SDK for Ruby (aws-sdk gem). This release adds support for AWS OpsWorks and resolves a number of customer-reported issues.

require 'aws-sdk'

opsworks = AWS::OpsWorks.new
resp = opsworks.client.describe_stacks
resp #=> { :stacks => [] }

You can view the AWS::OpsWorks::Client API documentation here. Take it for a spin and leave some feedback!

Logging Requests

by Trevor Rowe | in Ruby

The AWS SDK for Ruby (aws-sdk gem) has some pretty cool logging features. I find them particularly helpful when I need to debug something. I generally jump into an IRB session that has a logger pre-wired for me and then start sending requests.

Configuring a Logger

To get log messages from the aws-sdk gem, you need to configure a logger. The easiest way is to create a Logger (from Ruby’s standard lib) and pass it to AWS.config.

require 'aws-sdk'
require 'logger'

AWS.config(:logger => Logger.new($stdout))

# make a request
s3 = AWS::S3.new
s3.buckets['aws-sdk'].objects['key'].head

# log output sent to standard out
I, [2013-02-14T09:49:12.856086 #31922]  INFO -- : [AWS S3 200 0.194491 0 retries] head_object(:bucket_name=>"aws-sdk",:key=>"key")

By default, requests are logged with a level of :info. You can override the default log level with AWS.config.

AWS.config(:log_level => :debug)

Log Formatters

The default log messages contain the following information:

  • The service class name (e.g. ‘S3’)
  • The HTTP response status code (e.g. 200)
  • The total time taken in seconds
  • The number of retries
  • A summary of the client method called

Similar to how you can configure :logger and :log_level, you can register a custom log formatter via :log_formatter. Log formatters accept an AWS::Core::Response object and return a formatted log message. The built-in AWS::Core::LogFormatter class has support for simple pattern replacements.

pattern = '[REQUEST :http_status_code] :service :operation :duration'
formatter = AWS::Core::LogFormatter.new(pattern)

AWS::S3.new(:log_formatter => formatter).buckets.first

# log output
I, [2013-02-14T09:49:12.856086 #31922]  INFO -- : [REQUEST 200] S3 list_buckets 0.542574

Canned Log Formatters

You can also choose from a handful of ready-to-use log formatters, including:

  • AWS::Core::LogFormatter.default
  • AWS::Core::LogFormatter.short
  • AWS::Core::LogFormatter.debug
  • AWS::Core::LogFormatter.colored

Just pass one of these to AWS.config and start making requests.

AWS.config(:log_formatter => AWS::Core::LogFormatter.colored)

Logging in Rails

If you require the aws-sdk gem inside a Rails application, then the Ruby SDK automatically wires itself up to Rails.logger. You are still free to configure a different logger or to change the log level or formatter.

Managing Multiple AWS Accounts with the AWS Toolkit for Eclipse

When you’re building the next great application with AWS services, you’ll probably end up with several different AWS accounts. You may have one account for your production application’s resources, another for your development environment, and a couple more for personal testing. It can be really helpful to switch between these various accounts during your development, either to move resources between accounts, to compare configuration values, or to debug a problem that only occurs in one environment.

The AWS Toolkit for Eclipse makes it painless to work with multiple AWS accounts. You can configure the toolkit to store as many different accounts as you like using the toolkit preferences page:

The previous screenshot illustrates configuring each account with a name to help you remember what it’s for, as well as its access credentials. If you’re importing your credentials into Eclipse for the first time, you can follow the links in the preferences dialog box to the credentials page on aws.amazon.com, where you can copy and paste them.

To configure multiple accounts, simply click the “Add account” button and fill in the account’s name and credentials. You can use the drop-down menu to edit the details of any individual account, as well as select the active account the toolkit will use.

Once you have all your accounts configured, you can quickly switch between them using the triangle drop-down menu in the upper-right-hand corner of the AWS Explorer view. It’s easy to miss this menu in Eclipse’s UI, so here’s a screenshot illustrating where to find it. The same drop-down menu also contains a shortcut to the accounts preferences page.

Switching the active account will cause the AWS Explorer view to refresh, showing you the AWS resources for whichever account you select. The active account will also be used for any actions you select from the orange AWS cube menu, such as launching a new Amazon EC2 instance.

How are you using the AWS Toolkit for Eclipse to manage your AWS accounts? Is the interface easy to understand? Does it work well for your use case? Let us know in the comments!

Fetch Object Data and Metadata from Amazon S3 (in a Single Call)

by Trevor Rowe | in Ruby

I came across an excellent question earlier this week on our support forums. The question was essentially, "How can I fetch object data and metadata from Amazon S3 in a single call?"

This is a fair question, and also one I did not have a good answer to. Amazon S3 returns both object data and metadata in a single GET Object response, while AWS::S3::S3Object#read does not. But why?

How It Used to Be

Here is an example of how to get data from an object in S3.

obj = s3.buckets['my-bucket'].objects['key']
data = obj.read

Notice the #read method is returning the object data. This leaves no good place to return the metadata (returning multiple values from a method in Ruby is generally frowned upon). In this case, the aws-sdk gem was getting the data and metadata from S3, but it was discarding the metadata.

The Best of Both Worlds

Last year we added support for streaming reads to AWS::S3::S3Object#read. If you pass a block to #read, then the data is yielded in chunks to the block.

File.open('filename', 'wb') do |file|
  obj.read do |chunk|
    file.write(chunk)
  end
end

Perfect! Since the #read method is yielding data in chunks, its return value becomes unused. This allowed me to patch the #read method to return the object metadata instead of nil.

resp = obj.read do |chunk|
  file.write(chunk)
end

resp #=> {:meta => {"foo" => "bar"}, :restore_in_progress => false, :content_type => "text/plain", :etag => "\"37b51d194a7513e45b56f6524f2d51f2\"", :last_modified => 2013-02-06 12:54:39 -0800, :content_length => 94512, :data => nil}

You can check out the new feature on our GitHub master branch now. This will be part of our next release.

If you see an issue with the AWS SDK for Ruby (aws-sdk gem), please post an issue on our GitHub issue tracker!

Working with AWS CloudFormation in Eclipse

One of the latest features we’ve added to the AWS Toolkit for Eclipse is support for working with AWS CloudFormation.

If you’re not familiar with AWS CloudFormation yet, it gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. Templates describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application. For example, your template might describe a set of Amazon EC2 instances in an Auto Scaling group, configured behind an elastic load balancer, along with an elastic IP address. You don’t need to figure out the order in which AWS services need to be provisioned or the subtleties of how to make those dependencies work; CloudFormation takes care of this for you. Once your AWS resources are deployed, you can modify and update them in a controlled and predictable way, allowing you to version control your AWS infrastructure the same way you version control your software.

AWS CloudFormation is a powerful tool for building applications on the AWS platform, and the integration in Eclipse makes it easy to harness.

When you launch an AWS CloudFormation template, you create a stack, which is all your running infrastructure, as defined by your template. You can quickly see all the AWS CloudFormation stacks running in your currently selected account and active region by opening the AWS Explorer view in Eclipse.

If you don’t have any stacks running yet, you might want to start by launching one of the many sample templates. These sample templates are a great way to get a feel for what’s possible with AWS CloudFormation, and also to learn the AWS CloudFormation template syntax. Often, you can find a sample template that’s close to your application’s architecture, and use it as a starting point for your own custom template.

To launch a new AWS CloudFormation stack from Eclipse, right-click the AWS CloudFormation node in the AWS Explorer view, and then click Create Stack. The New Stack wizard allows you to specify your own custom template, or the URL for one of the sample templates.

Once you launch your stack, you can open the stack editor by double-clicking your stack listed under the AWS CloudFormation node in the AWS Explorer view. The stack editor shows you all the information about your running stack. While your stack is launching, you can use the stack editor to view the different events for your stack as AWS CloudFormation brings up all the pieces of your infrastructure and configures them for you. You can also view the various AWS resources that are part of your stack through the stack editor, and see the parameters and outputs declared in your template.

When you’re ready to start writing your own templates, or editing existing templates, the AWS Toolkit for Eclipse has a template editor that makes it easy to work with CloudFormation templates. Just copy your template into one of your projects, and open it in the template editor. You’ll get syntax highlighting, integration with Eclipse’s outline view, content assist, and JSON syntax error reporting. There’s a lot of functionality available in the template editor, and lots more that we plan to add over time. Stay tuned to the AWS Java Blog for more updates and in-depth examples of the various features.

Are you already using AWS CloudFormation in any of your projects? Have you tried creating your own custom templates yet? Tell us how it’s going in the comments below.

Understanding Auto-Paginated Scan with DynamoDBMapper

by zachmu | in Java

The DynamoDBMapper framework is a simple way to get Java objects into Amazon DynamoDB and back out again. In a blog post a few months ago, we outlined a simple use case for saving an object to DynamoDB, loading it, and then deleting it. If you haven’t used the DynamoDBMapper framework before, you should take a few moments to read the previous post, since the use case we’re examining today is more advanced.

Reintroducing the User Class

For this example, we’ll be working with the same simple User class as the last post. The class has been properly annotated with the DynamoDBMapper annotations so that it works with the framework. The only difference is that, this time, the class has a @DynamoDBRangeKey attribute.

@DynamoDBTable(tableName = "users")
public static class User {
      
    private Integer id;
    private Date joinDate;
    private Set<String> friends;
    private String status;
      
    @DynamoDBHashKey
    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
     
    @DynamoDBRangeKey
    public Date getJoinDate() { return joinDate; }       
    public void setJoinDate(Date joinDate) { this.joinDate = joinDate; }

    @DynamoDBAttribute(attributeName = "allFriends")
    public Set<String> getFriends() { return friends; }
    public void setFriends(Set<String> friends) { this.friends = friends; }
    
    @DynamoDBAttribute
    public String getStatus() { return status; }
    public void setStatus(String status) { this.status = status; }        
}

Let’s say that we want to find all active users that are friends with someone named Jason. To do so, we can issue a scan request like so:

DynamoDBMapper mapper = new DynamoDBMapper(dynamo);

DynamoDBScanExpression scanExpression = new DynamoDBScanExpression();
Map<String, Condition> filter = new HashMap<String, Condition>();
filter.put("allFriends", new Condition().withComparisonOperator(ComparisonOperator.CONTAINS)
                .withAttributeValueList(new AttributeValue().withS("Jason")));
filter.put(
                "status",
                new Condition().withComparisonOperator(ComparisonOperator.EQ).withAttributeValueList(
                        new AttributeValue().withS("active")));

scanExpression.setScanFilter(filter);
List<User> scanResult = mapper.scan(User.class, scanExpression);

Note the "allFriends" attribute on line 5. Even though the Java object property is called "friends," the @DyamoDBAttribute annotation overrides the name of the attribute to be "allFriends." Also notice that we’re using the CONTAINS comparison operator, which will check to see if a set-typed attribute contains a given value. The scan method on DynamoDBMapper immediately returns a list of results, which we can iterate over like so:

int usersFound = 0;
for ( User user : scanResult ) {
    System.out.println("Found user with id: " + user.getId());
    usersFound++;
}
System.out.println("Found " + usersFound + " users.");

So far, so good. But if we run this code on a large table, one with thousands or millions of items, we might notice some strange behavior. For one thing, our logging statements may not come at regular intervals—the program would seem to pause unpredictably in between chunks of results. And if you have wire-level logging turned on, you might notice something even stranger.

Found user with id: 5
DEBUG com.amazonaws.request - Sending Request: POST https://dynamodb.us-east-1.amazonaws.com/ ... 
DEBUG com.amazonaws.request - Sending Request: POST https://dynamodb.us-east-1.amazonaws.com/ ...
DEBUG com.amazonaws.request - Sending Request: POST https://dynamodb.us-east-1.amazonaws.com/ ...
DEBUG com.amazonaws.request - Sending Request: POST https://dynamodb.us-east-1.amazonaws.com/ ...
Found user with id: 6

Why does it take four service calls to iterate from user 5 to user 6? To answer this question, we need to understand how the scan operation works in DynamoDB, and what the scan operation in DynamoDBMapper is doing for us behind the scenes.

The Limit Parameter and Provisioned Throughput

In DynamoDB, the scan operation takes an optional limit parameter. Many new customers of the service get confused by this parameter, assuming that it’s used to limit the number of results that are returned by the operation, as is the case with the query operation. This isn’t the case at all. The limit for a scan doesn’t apply to how many results are returned, but to how many table items are examined. Because scan works on arbitrary item attributes, not the indexed table keys like query does, DynamoDB has to scan through every item in the table to find the ones you want, and it can’t predict ahead of time how many items it will have to examine to find a match. The limit parameter is there so that you can control how much of your table’s provisioned throughput to consume with the scan before returning the results collected so far, which may be empty.

That’s why it took four service calls to find user 6 after finding user 5: DynamoDB had to scan through three full pages of the table before it found another item that matched the filters we specified. The List object returned by DynamoDBMapper.scan() hides this complexity from you and magically returns all the matching items in your table, no matter how many service calls it takes, so that you can concentrate on working with the domain objects in your search, rather than writing service calls in a loop. But it’s still helpful to understand what’s going on behind the scenes, so that you know how the scan operation can affect your table’s available provisioned throughput.
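
To make that behind-the-scenes work concrete, here is a hedged sketch of roughly what the mapper does with the low-level client: scan a page of items at a time, following LastEvaluatedKey until the table has been fully examined. The page size of 100 is an illustrative value:

// Roughly what DynamoDBMapper does for you: page through the table,
// examining up to 100 items per request, until there are no more pages
Map<String, AttributeValue> lastKey = null;
do {
    ScanRequest scanRequest = new ScanRequest("users")
            .withScanFilter(filter)
            .withLimit(100)
            .withExclusiveStartKey(lastKey);
    ScanResult result = dynamo.scan(scanRequest);
    System.out.println("Matching items in this page: " + result.getCount());
    lastKey = result.getLastEvaluatedKey();
} while (lastKey != null);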

Auto-Pagination to the Rescue

The scan method returns a PaginatedList, which lazily loads more results from DynamoDB as necessary. The list will make as many service calls as necessary to load the next item in the list. In the example above, it had to make four service calls to find the next matching user between user 5 and user 6. Importantly, not all methods from the List interface can take advantage of lazy loading. For example, if you call get(), the list will try to load as many items as the index you specified, if it hasn’t loaded that many already. If you call the size() method, the list will load every single result in order to give you an accurate count. This can result in lots of provisioned throughput being consumed without you intending to, so be careful. On a very large table, it could even exhaust all the memory in your JVM.
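
To make the cost model concrete, here is a quick illustration, reusing the scanResult list from the earlier snippet, of which calls stay lazy and which ones force more loading:

// Iterating loads pages lazily, one service call at a time, as needed
for (User user : scanResult) {
    System.out.println("Found user with id: " + user.getId());
}

// These calls force the list to keep fetching pages until it can answer:
User hundredthUser = scanResult.get(99);  // loads items until index 99 is available
int totalMatches = scanResult.size();     // loads every matching item just to count them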

We’ve had customer requests to provide manually paginated scan and query methods for DynamoDBMapper to enable more fine-tuned control of provisioned throughput consumption, and we’re working on getting those out in a future release. In the meantime, tell us how you’re using the auto-paginated scan and query functionality, and what you would like to see improved, in the comments!

Subscribing Queues to Topics

by Jason Fulghum | in Java

Amazon Simple Notification Service (Amazon SNS) is a terrific service for publishing notification messages and having them automatically delivered to all your subscribers. You simply send a message to your SNS topic, and it gets delivered to all the subscribers for that topic. Amazon SNS supports many different types of subscribers for topics:

  • HTTP/HTTPS endpoints
  • Email addresses (with text or JSON format messages)
  • SMS/text-message addresses
  • Amazon SQS queues

Each type of subscriber is useful, but one of the most versatile for building systems is connecting an Amazon SQS queue directly to your Amazon SNS topic. This is a really handy and common architecture pattern when building applications on the AWS platform, and for good reason. The Amazon SNS topic provides you with a common point for sending messages and having them published to a dynamically managed list of subscribers, and the Amazon SQS queue provides you with a scalable, robust storage location for those delivered messages, while your application pulls them off the queue to process them.

Now that we’ve convinced you about the value of this pattern, let’s take a look at how to execute it in code, using the AWS SDK for Java. The first thing we need to do is create our Amazon SQS queue, and our Amazon SNS topic.

AmazonSNS sns = new AmazonSNSClient(credentials);
AmazonSQS sqs = new AmazonSQSClient(credentials);

String myTopicArn = sns.createTopic(new CreateTopicRequest("topicName")).getTopicArn();
String myQueueUrl = sqs.createQueue(new CreateQueueRequest("queueName")).getQueueUrl();

In order for a queue to receive messages from a topic, it needs to be subscribed and also needs a custom security policy that allows the topic to deliver messages to the queue. The following one-line call to the Topics helper class in the SDK handles both of these for you automatically, without you ever having to deal with the details of building that custom policy.

Topics.subscribeQueue(sns, sqs, myTopicArn, myQueueUrl);

Now that your queue is connected to your topic, you’re ready to send messages to your topic, then pull them off of your queue. Note that it may take a few moments for the queue’s policy to be updated when the queue is initially subscribed to the topic.

sns.publish(new PublishRequest(myTopicArn, "Hello SNS World").withSubject("Subject"));

List<Message> messages = sqs.receiveMessage(new ReceiveMessageRequest(myQueueUrl)).getMessages();
if (!messages.isEmpty()) {
    // The message body is the JSON notification document that Amazon SNS
    // delivers to the queue; the text you published is in its "Message" field
    System.out.println("Message: " + messages.get(0).getBody());
}
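
After your application has processed a message, it should delete it from the queue so that it isn't delivered again once the visibility timeout expires. A minimal sketch, assuming a message was received as in the snippet above:

// Delete the message we just processed so it won't be redelivered
sqs.deleteMessage(new DeleteMessageRequest(myQueueUrl, messages.get(0).getReceiptHandle()));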

For more information on using this new method to subscribe an Amazon SQS queue to an Amazon SNS topic, including an explanation of the policy that is applied to your queue, see the AWS SDK for Java API documentation for Topics.subscribeQueue(…).