Category: Java

DevOps Meets Security: Security Testing Your AWS Application: Part I – Unit Testing

by Marcilio Mendonca | in Java

The adoption of DevOps practices allows organizations to be agile while deploying high-quality software to customers on a regular basis. The CI/CD pipeline is an important component of the DevOps model. It automates critical verification tasks, making fully automated software deployments possible. Security tests are critical to the CI/CD pipeline. These tests verify whether the code conforms to the security specifications. For example, a security unit test could be used to enforce that a given software component must use server-side encryption to upload objects to an Amazon S3 bucket. Similarly, a security integration test could be applied to verify that the same software component always enables S3 bucket versioning.

In this three-part blog post series, we’ll do a deep dive on automated security testing for AWS applications. In this first post, we’ll discuss how AWS Java developers can create security unit tests to verify the correctness of their AWS applications by testing individual units of code in isolation. In part II, we’ll go one step further and show how developers can create integration tests that, unlike unit tests, interact with real software components and AWS resources. Finally, in part III, we’ll walk through how the provided security tests can be incorporated into a CI/CD pipeline (created through AWS CodePipeline) to enforce security verification whenever new code changes are pushed into the code repository. Even though we focus on security, the tests provided can be easily generalized to other domains.

S3 Artifact Manager

We start by introducing a simple S3 wrapper component built to illustrate the security tests discussed in this series. The wrapper, represented by a Java class named S3ArtifactManager (full source code can be accessed here), uses AWS SDK for Java APIs to provide a more secure way to store objects in S3.

Here we show an excerpt of the S3ArtifactManager class that contains a method called upload(), which can be used to securely upload objects to an S3 bucket. The method uses S3 bucket versioning to make sure each new upload of the same object preserves all previous versions of that object. A versionId is returned to clients each time an object (or a new version of it) is stored in the bucket, so that specific versions can be retrieved later. Versioning is enabled by creating a SetBucketVersioningConfigurationRequest object that takes a BucketVersioningConfiguration(BucketVersioningConfiguration.ENABLED) instance as a parameter, and then calling s3.setBucketVersioningConfiguration() with the request object.

In addition, method upload() uses server-side encryption with Amazon S3-managed encryption keys (SSE-S3) to enforce that objects stored in the bucket are encrypted. We simply create a metadata object, set the encryption algorithm via objectMetadata.setSSEAlgorithm(), and attach the metadata object to the PutObjectRequest instance used to store the S3 object. Finally, the object is uploaded to S3 and its versionId is returned to the client.

public String upload(String s3Bucket, String s3Key, File file) 
   throws AmazonServiceException, AmazonClientException {
   if (!s3.doesBucketExist(s3Bucket)) {
      s3.createBucket(s3Bucket);
   }

   // enable bucket versioning
   SetBucketVersioningConfigurationRequest configRequest = 
      new SetBucketVersioningConfigurationRequest(s3Bucket, 
         new BucketVersioningConfiguration(BucketVersioningConfiguration.ENABLED));
   s3.setBucketVersioningConfiguration(configRequest);

   // enable server-side encryption (SSE-S3)
   PutObjectRequest request = new PutObjectRequest(s3Bucket, s3Key, file);
   ObjectMetadata objectMetadata = new ObjectMetadata();
   objectMetadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
   request.setMetadata(objectMetadata);

   // upload object to S3
   PutObjectResult putObjectResult = s3.putObject(request);

   return putObjectResult.getVersionId();
}

Because security is key in the cloud, security components like our S3ArtifactManager might interest individuals and organizations responsible for meeting security compliance requirements (for example, PCI). In this context, developers and other users of such components must be confident that the security functionality provided behaves as expected. A bug in the component (for example, an object that is stored unencrypted or overwrites a previous version) can be disastrous. In addition, users must remain confident as new versions of the component are released. How can confidence be achieved continuously?

It turns out that DevOps practices improve confidence. In a traditional software development approach, coding the logic of the method upload() and running a few manual tests might be enough, but in a DevOps setting, this is not acceptable. DevOps practices require that mechanisms that automatically verify code behavior are in place. In fact, these mechanisms are just as important as the code’s main logic. Which mechanisms are we talking about? Unit and integration tests!

In the next section, we’ll discuss how unit tests can be leveraged to verify the security behavior of our S3ArtifactManager wrapper. In parts II and III of this series, we’ll dive deep into integration tests and CI/CD automation, respectively.

Security Unit Tests

Next, we’ll create a suite of security unit tests to verify the behavior of our upload() method. We’ll leverage two popular test frameworks in Java named JUnit and Mockito to code the unit tests.

The primary purpose of unit tests is to test a unit of code in isolation. Here we define unit as the Java class under test (in our case, the S3ArtifactManager class). In order to isolate the class under test, we mock all other objects used in the class, such as the S3 client object. Mocking means that our unit tests will not interact with a real S3 resource and will not upload objects into an S3 bucket. Instead, we’re using a mock object with predefined behavior.
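The tests in this post use Mockito, but the idea of replacing a real dependency with a predefined stand-in can be sketched with a hand-rolled stub. The FakeS3 class below is a hypothetical illustration, not an SDK or Mockito type:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the S3 client: it records calls and returns
// canned answers instead of talking to AWS, which is essentially what
// Mockito's mock(AmazonS3.class) generates for the real interface.
class FakeS3 {
    final List<String> calls = new ArrayList<>();
    boolean bucketExists = true;   // predefined behavior, like Mockito's when(...)

    boolean doesBucketExist(String bucket) {
        calls.add("doesBucketExist:" + bucket);
        return bucketExists;
    }

    void createBucket(String bucket) {
        calls.add("createBucket:" + bucket);
    }
}

public class MockingSketch {
    public static void main(String[] args) {
        FakeS3 s3 = new FakeS3();
        // logic under test: only create the bucket if it's missing
        if (!s3.doesBucketExist("my-bucket")) {
            s3.createBucket("my-bucket");
        }
        // verification, analogous to verify(s3Client, never()).createBucket(...)
        System.out.println(s3.calls.contains("createBucket:my-bucket")); // false
    }
}
```

Because the stub’s behavior is fixed up front, the test exercises only the class under test; no network, credentials, or real buckets are involved.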

Verifying bucket versioning enablement on S3 buckets

The first security unit test is named testUploadWillEnableVersioningOnExistingS3Bucket. It verifies whether method upload() enables bucket versioning on an existing bucket upon uploading an object to that bucket. Note that we are using a Mockito mock object instead of a real object to represent an S3 client instance. For this reason, we need to specify the behavior of the mock object for the functionality used by method upload(). We use Mockito’s when statement to return true when s3Client.doesBucketExist() is called, because this is the condition we want to test. Then method upload() is called using test values for the S3 bucket, key, and file parameters.

@Test
public void testUploadWillEnableVersioningOnExistingS3Bucket() {
   // set mock behavior: the bucket already exists
   when(s3Client.doesBucketExist(s3Bucket)).thenReturn(true);

   // call object under test
   String versionId = s3ArtifactManager.upload(s3Bucket, s3Key, file);

   // assert versionId is the expected value
   assertEquals("VersionId returned is incorrect", 
      VERSION_ID, versionId);

   // assert that a new bucket has NOT been created
   verify(s3Client, never()).createBucket(s3Bucket);

   // capture the SetBucketVersioningConfigurationRequest object 
   ArgumentCaptor<SetBucketVersioningConfigurationRequest> bucketVerConfigRequestCaptor = 
      ArgumentCaptor.forClass(SetBucketVersioningConfigurationRequest.class);
   verify(s3Client).setBucketVersioningConfiguration(
      bucketVerConfigRequestCaptor.capture());

   // assert versioning is set on the bucket
   SetBucketVersioningConfigurationRequest bucketVerConfigRequest = 
      bucketVerConfigRequestCaptor.getValue();
   assertEquals("Versioning of S3 bucket could not be verified", 
      BucketVersioningConfiguration.ENABLED, 
      bucketVerConfigRequest.getVersioningConfiguration().getStatus());
}

The first verification checks that the versionId value returned matches the constant value expected in the test. Next, we verify that a call to s3Client.createBucket() has never been made, because the bucket already exists (as mocked using when). These are standard verifications, not related to security.

Then we start verifying security behavior. We use Mockito’s argument captor feature to capture the parameter passed to setBucketVersioningConfiguration(), which is a real object. We then check whether bucket versioning is enabled in that object by comparing the captured value with the constant BucketVersioningConfiguration.ENABLED. If this security verification fails, it means that versioning was not correctly configured. In this scenario, because a critical security assertion could not be verified, the CI/CD pipeline should be blocked until the code is fixed.

We also created a security unit test to verify bucket versioning enablement for newly created buckets. We’ve omitted the code for brevity, but you can download the full source here. This test is similar to the one we just discussed. The main differences are that the mocked s3Client.doesBucketExist() call now returns false, and the test verifies that the createBucket API was called exactly once (verify(s3Client, times(1)).createBucket(s3Bucket)).

Verifying server-side-encryption of uploaded S3 objects

The second security unit test verifies that uploaded S3 objects use server-side encryption with Amazon S3-managed encryption keys (SSE-S3). The verification once again uses Mockito’s argument captor to capture the request object passed to s3Client.putObject(). This object is used in two ways: first, to verify that no customer key was provided (because upload() is expected to use SSE-S3), and then to assert that the object’s metadata is not null and returns AES256 as the encryption algorithm, the value expected for SSE-S3 encryption. Once again, if this security verification fails, the CI/CD pipeline should be blocked until the SSE-S3 code implementation is fixed and verified.

@Test
public void testUploadAddsSSE_S3EncryptedObjectToBucket() {
   // call object under test
   s3ArtifactManager.upload(s3Bucket, s3Key, file);

   // capture the PutObjectRequest object
   ArgumentCaptor<PutObjectRequest> putObjectRequestCaptor = 
      ArgumentCaptor.forClass(PutObjectRequest.class);
   verify(s3Client).putObject(putObjectRequestCaptor.capture());
   PutObjectRequest putObjectRequest = 
      putObjectRequestCaptor.getValue();

   // assert that there's no customer key provided as 
   // we're expecting SSE-S3
   assertNull("A customer key was incorrectly used (SSE-C). "
      + "SSE-S3 encryption expected instead.", 
      putObjectRequest.getSSECustomerKey());

   // assert that the SSE-S3 'AES256' algorithm was set as part of 
   // the request's metadata 
   assertNotNull("PutObjectRequest's metadata object must be non-null "
      + "and enforce SSE-S3 encryption", putObjectRequest.getMetadata());
   assertEquals("Object has not been encrypted using SSE-S3 (AES256 "
      + "encryption algorithm)", AES256, putObjectRequest.getMetadata()
      .getSSEAlgorithm());
}

Running the Security Tests Locally

Setting Up

Follow these steps on your local workstation to run the unit tests:

Running the Unit Tests

You can use Maven to run the provided security unit tests locally.

  • Navigate to the root directory where you installed the source code (this is where the pom.xml file resides)
  • Type mvn verify -DskipIntegrationTests=true to run the security unit tests

Expected output:

 T E S T S

Running com.amazonaws.samples.s3.artifactmanager.unittests.S3ArtifactManagerUnitTest

Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.155 sec

You’ll see in the output that all three security unit tests passed. That is, the individual units of code tested are behaving as expected in isolation.

Final Remarks

In the first part of this series, we have discussed how AWS Java developers can create unit tests that verify the behavior of individual software components in their AWS applications. We used mocks to replace actual objects (for example, an S3 client) and used Maven to trigger test execution.

In the second part of this series, we’ll discuss integration tests that will use real AWS objects and resources.  



New Additions to Exception Handling

by Andrew Shore | in Java

Exception handling in the AWS SDK for Java just got a little easier! We just introduced base exception classes for all AWS services in the SDK. Now all modeled exceptions extend from this service-specific base exception and all unmodeled exceptions (unknown exceptions thrown by the service) will be unmarshalled into the service-specific base exception, and not into the generic AmazonServiceException. The new service base exceptions still extend AmazonServiceException, so existing exception handling code you’ve written won’t be affected. This change gives you greater control to handle exceptions differently, depending on the originating service.
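The effect of a per-service base class on catch ordering can be sketched with a toy hierarchy. The class names below are illustrative stand-ins, not the SDK’s actual exception types:

```java
// Toy hierarchy mirroring the SDK's new layout: a generic service exception,
// a service-specific base under it, and a modeled exception under that.
class ServiceException extends RuntimeException {}            // like AmazonServiceException
class DynamoException extends ServiceException {}             // like AmazonDynamoDBException
class ThroughputExceededException extends DynamoException {}  // a modeled exception

public class CatchOrderSketch {
    // Catch blocks go from most to least specific; unmodeled service errors
    // fall through to the service base instead of the generic catch-all.
    static String handle(RuntimeException e) {
        try {
            throw e;
        } catch (ThroughputExceededException te) {
            return "retry with backoff";
        } catch (DynamoException de) {
            return "any other DynamoDB error";
        } catch (ServiceException se) {
            return "error from some other service";
        }
    }

    public static void main(String[] args) {
        System.out.println(handle(new ThroughputExceededException())); // retry with backoff
        System.out.println(handle(new DynamoException()));             // any other DynamoDB error
        System.out.println(handle(new ServiceException()));            // error from some other service
    }
}
```

Because an unknown DynamoDB error still unmarshals into the service base, it lands in the service-specific catch block rather than the generic one.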

Writing robust applications is challenging, and distributed systems occasionally fail with transient (often retriable) errors. Although the SDK retries certain errors by default, sometimes these exceptions bubble up to the caller’s code and must be handled appropriately. Previously, when you used the SDK, you only had the option to catch a service-specific modeled exception (i.e., ResourceInUseException for Amazon DynamoDB) or a generic AmazonServiceException, which could be thrown by any service. When you wrote code that involved multiple service dependencies, you had to write error handling code for each service independently.

Consider the following example where you read messages from an Amazon SQS queue and write each message to a DynamoDB table. The contrived requirements are to treat SQS errors as transient and retry them after an appropriate backoff time, and to treat all DynamoDB errors (except for ProvisionedThroughputExceededException) as fatal and persist the messages to local disk for future processing. ProvisionedThroughputExceededException should be treated as retriable and the program should back off before attempting to process another message. Finally, if any other non-service exceptions are thrown, the program should terminate. For simplicity, we’ll assume message processing is idempotent and it’s acceptable to process a message multiple times.

Previously, this was difficult to do in the SDK. Your options included catching service-specific modeled exceptions (like ProvisionedThroughputExceededException) or catching the generic AmazonServiceException, which applies to all AWS services. This makes handling errors differently per service challenging. You might have written code like the following to meet the example’s requirements.

while (true) {
    ReceiveMessageResult currentResult = null;
    try {
        // Receive messages from the queue
        currentResult = sqs.receiveMessage(QUEUE_URL);
        for (Message message : currentResult.getMessages()) {
            // Save message to DynamoDB
            ddb.putItem(new PutItemRequest().withTableName(TABLE_NAME).withItem(
                   ImmutableMapParameter
                       .of("messageId", new AttributeValue(message.getMessageId()),
                           "messageBody", new AttributeValue(message.getBody()))));
            // Delete message from queue to indicate it's been processed
            sqs.deleteMessage(new DeleteMessageRequest()
                   .withQueueUrl(QUEUE_URL)
                   .withReceiptHandle(message.getReceiptHandle()));
        }
    } catch (ProvisionedThroughputExceededException e) {
        LOG.warn("Table capacity exceeded. Backing off and retrying.");
    } catch (AmazonServiceException e) {
        switch (e.getServiceName()) {
            // Have to use magic service name constants that aren't publicly exposed
            case "AmazonSQS":
                LOG.warn("SQS temporarily unavailable. Backing off and retrying.");
                break;
            case "AmazonDynamoDBv2":
                LOG.fatal("Could not save messages to DynamoDB. Saving to disk.");
                break;
        }
    } catch (AmazonClientException e) {
        // Caught unexpected error; terminate the program
        LOG.fatal("Unexpected error. Terminating.", e);
        System.exit(1);
    }
}

You can rewrite the example above much more cleanly by using the new exception base classes in the SDK, as shown here.

while (true) {
    ReceiveMessageResult currentResult = null;
    try {
        // Receive messages from the queue
        currentResult = sqs.receiveMessage(QUEUE_URL);
        for (Message message : currentResult.getMessages()) {
            // Save message to DynamoDB
            ddb.putItem(new PutItemRequest().withTableName(TABLE_NAME).withItem(
                   ImmutableMapParameter
                       .of("messageId", new AttributeValue(message.getMessageId()),
                           "messageBody", new AttributeValue(message.getBody()))));
            // Delete message from queue to indicate it's been processed
            sqs.deleteMessage(new DeleteMessageRequest()
                   .withQueueUrl(QUEUE_URL)
                   .withReceiptHandle(message.getReceiptHandle()));
        }
    } catch (AmazonSQSException e) {
        LOG.warn("SQS temporarily unavailable. Backing off and retrying.");
    } catch (ProvisionedThroughputExceededException e) {
        // You can catch modeled exceptions
        LOG.warn("Table capacity exceeded. Backing off and retrying.");
    } catch (AmazonDynamoDBException e) {
        // Or you can catch the service base exception to handle any exception that
        // can be thrown by a particular service
        LOG.fatal("Could not save messages to DynamoDB. Saving to disk.");
    } catch (AmazonClientException e) {
        // Caught unexpected error; terminate the program
        LOG.fatal("Unexpected error. Terminating.", e);
        System.exit(1);
    }
}

We hope this addition to the SDK makes writing robust applications even easier! Let us know what you think in the comments section below.

Fluent Client Builders

by Andrew Shore | in Java

We are pleased to announce a better, more intuitive way to construct and configure service clients in the AWS SDK for Java. Previously, the only way to construct a service client was through one of the many overloaded constructors in the client class. Finding the right constructor was difficult and sometimes required duplicating the default configuration to supply a custom dependency. In addition, the constructors of the client don’t expose the full set of options that can be configured through the client. You must provide a region and custom request handlers through a setter on the client after construction. This approach isn’t intuitive, and it promotes unsafe practices when using a client in a multithreaded environment.

New Client Builders

To address the shortcomings of the current approach, we’ve introduced a new builder style to create a client in a fluent and easy-to-read way. The builder approach makes the set of available configurations more discoverable. You can use fluent setters for each option instead of relying on positional parameters in an overloaded constructor. The fluent setters allow for more readable code by using method chaining. The builder pattern also allows for overriding only the options you care about. If you just want a custom RequestMetricCollector, then that’s all you have to set and you can still benefit from the defaults of all the other dependencies. After a client is created with the builder, it is immutable to enforce thread safety when using the client in a multithreaded environment.
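The properties described above — fluent setters, defaults for everything not overridden, and an immutable result — can be sketched with a toy builder. The Client and ClientBuilder classes below are illustrative, not SDK types:

```java
// Immutable product: all fields are final and set once at construction,
// which makes the built client safe to share across threads.
final class Client {
    final String region;
    final int maxConnections;
    Client(String region, int maxConnections) {
        this.region = region;
        this.maxConnections = maxConnections;
    }
}

// Fluent builder: each setter returns 'this' so calls chain, defaults
// apply to anything not overridden, and build() validates required options.
class ClientBuilder {
    private String region;            // required, like the SDK's region
    private int maxConnections = 50;  // default preserved unless overridden

    ClientBuilder withRegion(String region) { this.region = region; return this; }
    ClientBuilder withMaxConnections(int n) { this.maxConnections = n; return this; }

    Client build() {
        if (region == null) {
            throw new IllegalStateException("region must be set before building");
        }
        return new Client(region, maxConnections);
    }
}

public class BuilderSketch {
    public static void main(String[] args) {
        // Override only what we care about; maxConnections keeps its default.
        Client c = new ClientBuilder().withRegion("us-west-2").build();
        System.out.println(c.region + " " + c.maxConnections); // us-west-2 50
    }
}
```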

Let’s compare the current approach and the new builder approach.

Current Approach

/*
 * Just to provide a custom metric collector I have to supply the defaults
 * of AWSCredentialsProvider and ClientConfiguration. Also note that 
 * I've gotten the default ClientConfiguration for DynamoDB wrong; 
 * it actually has a service-specific default config.
 */
AmazonDynamoDBClient client = new AmazonDynamoDBClient(
                                    new DefaultAWSCredentialsProviderChain(), 
                                    new ClientConfiguration(),
                                    new MyCustomRequestMetricsCollector());
/*
 * I have to set the region afterward, which could cause issues in a multi-
 * threaded application if you aren't safely publishing the reference.
 */
client.setRegion(Region.getRegion(Regions.US_WEST_2));

Builder Approach

/*
 * With the new builder I only have to supply what I want to customize,
 * and all options are set before construction of the client.
 */
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
                          .withMetricsCollector(new MyCustomRequestMetricsCollector())
                          .withRegion(Regions.US_WEST_2)
                          .build();

Region Configuration Improvements

In the AWS SDK for Java, we strongly recommend setting a region for every client created. If you create a client through the constructor, it initializes the client with a default endpoint (typically, us-east-1 or a global endpoint). For this reason, many customers don’t bother setting a region because it just works with the default endpoint. When these customers attempt to deploy their application to another region, a lot of code has to be changed to configure the client with the correct region based on the environment. With the builder, we enforce setting a region to promote best practices and make it easier to write applications that can be deployed to multiple regions. When you create clients through the builder, you can provide a region explicitly with the fluent setters or implicitly through the newly introduced region provider chain.

The region provider chain is similar to the credentials provider chain. If no region is explicitly provided, the SDK consults the region provider chain to find a region to use from the environment. The SDK first checks whether the AWS_REGION environment variable is set. If not, it looks in the AWS shared config file (usually located at ~/.aws/config) and uses the region in the default profile (unless the AWS_PROFILE environment variable is set, in which case the SDK looks at that profile). Finally, if the SDK still hasn’t found a region, it attempts to find one from the EC2 metadata service for applications that run on AWS infrastructure. If a region is not explicitly provided and cannot be determined by the chain, the builder will not allow the client to be created.
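The lookup order can be sketched as a chain of providers where the first non-null answer wins. The suppliers below are simplified stand-ins for the SDK’s real providers:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.function.Supplier;

// Sketch of a region provider chain: each provider is consulted in order,
// and the first one that produces a region wins. If none do, the result is
// empty, mirroring the builder's refusal to create a client without a region.
public class RegionChainSketch {
    static Optional<String> resolve(List<Supplier<String>> providers) {
        for (Supplier<String> provider : providers) {
            String region = provider.get();
            if (region != null) {
                return Optional.of(region);
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        List<Supplier<String>> chain = Arrays.asList(
            () -> System.getenv("AWS_REGION"), // 1. environment variable
            () -> null,                        // 2. shared config file (stubbed out here)
            () -> null                         // 3. EC2 metadata service (stubbed out here)
        );
        System.out.println(resolve(chain).orElse("<no region: builder would fail>"));
    }
}
```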

// Creates a client with an explicit region that's in the Regions enum
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
        .withRegion(Regions.US_WEST_2).build();

/*
 * Creates a client with an explicit region string. Useful when a new
 * region isn't present in the enum and upgrading to a newer 
 * version of the SDK isn't feasible.
 */
AmazonDynamoDB clientWithStringRegion = AmazonDynamoDBClientBuilder.standard()
        .withRegion("us-west-2").build();

/*
 * Creates a client with the default credentials provider chain and a 
 * region found from the new region provider chain.
 */
AmazonDynamoDB defaultClient = AmazonDynamoDBClientBuilder.defaultClient();

Async Client Builder

The SDK offers async variants of most clients. Although the async client duplicates many of the constructors of the sync client, they’ve often gotten out of sync. For example, the async clients do not expose a constructor that takes a RequestMetricsCollector. The new builder remedies that. Going forward, it will allow us to keep the customizable options for both variants of the client in sync. Because the async client has additional dependencies, it is a separate builder class.

/*
 * For ExecutorService we've introduced a factory to supply new 
 * executors for each async client created, as it's completely 
 * possible to use a builder instance to create multiple clients and
 * you'll rarely want to share a common executor between them. 
 * The factory interface is a functional interface, so it integrates 
 * fully with Java 8's lambda expressions and method references.
 */
final AmazonDynamoDBAsync client = AmazonDynamoDBAsyncClientBuilder.standard()
         .withExecutorFactory(() -> Executors.newFixedThreadPool(10))
         .withCredentials(new ProfileCredentialsProvider("my-profile"))
         .withRegion(Regions.US_WEST_2)
         .build();

S3 and TransferManager

S3 is a little different from most clients in the SDK. It has special configuration options (previously in S3ClientOptions) that apply only to S3. It also doesn’t have a typical async client. Instead, it provides the TransferManager utility for asynchronous uploads and downloads. Both the S3 client and TransferManager have their own builders. Note that the options in S3ClientOptions have been collapsed into the builder interface.

// Create a client with accelerate mode enabled
final AmazonS3 client = AmazonS3ClientBuilder.standard()
        .withAccelerateModeEnabled(true)
        .withRegion(Regions.US_WEST_2)
        .build();

The TransferManager is similar to other async client builders. Options that were in the TransferManagerConfiguration class have been collapsed into the builder interface.

/*
 * The TransferManager requires the low-level client, so it must be
 * provided via AmazonS3ClientBuilder.
 */
TransferManager tf = TransferManagerBuilder.standard()
        .withExecutorFactory(() -> Executors.newFixedThreadPool(100))
        .withS3Client(AmazonS3ClientBuilder.defaultClient())
        .build();

/*
 * The following factory method is provided for convenience. Without the
 * custom executor, the above is equivalent to:
 */
TransferManager tfDefault = TransferManagerBuilder.defaultTransferManager();

Fast Facts

  • Each service client has two dedicated builder classes for its sync and async variants
  • Builder class names are based on the interface they create (for example, the builders for the AmazonDynamoDB and AmazonDynamoDBAsync interfaces are AmazonDynamoDBClientBuilder and AmazonDynamoDBAsyncClientBuilder, respectively).
  • Builder instances can be obtained through the static factory method `standard()`.
  • A convenience factory method, defaultClient, initializes the client with the default credential and region provider chains.
  • Builders can be used to create multiple clients.
  • The async builder has all of the same options as the sync client builder plus an additional executor factory option.
  • Region must be provided either explicitly or implicitly (through the region provider chain) before creating a client.
  • Clients built through the builder are immutable.
  • S3 and TransferManager have custom builders with additional configuration options.


We are excited about these new additions to the SDK! Feel free to leave your feedback in the comments. To use the new builders, declare a dependency on the 1.11.18 version of the SDK.

Encrypting Message Payloads Using the Amazon SQS Extended Client and the Amazon S3 Encryption Client

by Amin Suzani | in Java

The Amazon SQS Extended Client is an open-source Java library that lets you manage Amazon SQS message payloads with Amazon S3. This is especially useful for storing and retrieving messages with a message payload size larger than the SQS limit of 256 KB. Some customers have asked us about encryption. This blog post explains how you can use this library to take advantage of S3 client-side encryption features for your SQS messages.

Here is how the SQS Extended Client Library works: You configure and provide an SQS client, an S3 client, and an S3 bucket to the library. Then you will be able to send, receive, and delete messages exactly as you would with the standard SQS client. The library automatically stores each message payload in an S3 bucket and uses the native SQS payload to transmit a pointer to the S3 object. After the message has been received and deleted by a consumer, the payload is automatically deleted from the S3 bucket. For a code example, see Managing Amazon SQS Messages with Amazon S3 in the Amazon SQS Developer Guide.

To enable client-side encryption, simply configure an S3 encryption client (instead of a standard S3 client) and pass it to the SQS Extended Client Library. You also have the option to use AWS Key Management Service (AWS KMS) for managing your encryption keys. For examples of code that can configure the Amazon S3 encryption client in different ways, see Protecting Data Using Client-Side Encryption in the Amazon S3 Developer Guide.

By default, the SQS Extended Client Library uses S3 only for message payloads larger than the SQS limit of 256 KB. In the following example, the AlwaysThroughS3 flag is enabled so that the SQS Extended Client Library sends all messages through Amazon S3, regardless of the message payload size:

ExtendedClientConfiguration extendedClientConfiguration = new ExtendedClientConfiguration()
		.withLargePayloadSupportEnabled(s3EncryptionClient, s3BucketName)
		.withAlwaysThroughS3(true);

AmazonSQS sqsExtendedClient = new AmazonSQSExtendedClient(new AmazonSQSClient(credentials),
		extendedClientConfiguration);

That’s all! Now, all message payloads sent and received using the SQS Extended Client Library will be encrypted and stored in S3. Please let us know if you have any questions, comments, or suggestions.
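The size-based routing that the AlwaysThroughS3 flag overrides can be sketched as a simple predicate. The method below is illustrative; it is not the library’s API:

```java
// Sketch of the routing rule: payloads over the SQS limit always go to S3,
// and the alwaysThroughS3 flag forces S3 regardless of size.
public class PayloadRoutingSketch {
    static final int SQS_LIMIT_BYTES = 256 * 1024; // the 256 KB SQS limit

    static boolean goesThroughS3(int payloadSizeBytes, boolean alwaysThroughS3) {
        return alwaysThroughS3 || payloadSizeBytes > SQS_LIMIT_BYTES;
    }

    public static void main(String[] args) {
        System.out.println(goesThroughS3(1024, false));       // false: small payload stays in SQS
        System.out.println(goesThroughS3(1024, true));        // true: flag forces S3
        System.out.println(goesThroughS3(300 * 1024, false)); // true: over the 256 KB limit
    }
}
```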

Parallelizing Large Downloads for Optimal Speed

by Varun Nandimandalam | in Java

TransferManager now supports a feature that parallelizes large downloads from Amazon S3. You do not need to change your code to use this feature. You only need to upgrade the AWS SDK for Java to version 1.11.0 or later. When you download a file using TransferManager, the utility automatically determines if the object is multipart. If so, TransferManager downloads the object in parallel.

I have seen around 23% improvement in download time for multipart objects larger than 300 MB. My tests were run on a MacBook Pro with the following specifications:

  • Processor: 3.1 GHz Intel i7
  • Memory: 16 GB 1867 MHz DDR3
  • HardDrive: SSD-512G
  • Logical Cores: 4

The performance varies based on the hardware and internet speed. By default, TransferManager creates a pool of 10 threads, but you can set a custom pool size. For optimal performance, tune the executor pool size according to the hardware on which your application is running.
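The idea behind the feature — fetch ranges concurrently on a bounded pool, then reassemble them in order — can be sketched with a local byte array standing in for the remote S3 object. This illustrates the approach only; it is not TransferManager’s implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of a parallel ranged download: split the object into fixed-size
// ranges, fetch each range on a pool thread, then merge the parts in order.
public class ParallelDownloadSketch {
    static byte[] download(byte[] remote, int partSize, int poolSize)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        try {
            List<Future<byte[]>> parts = new ArrayList<>();
            for (int start = 0; start < remote.length; start += partSize) {
                final int from = start;
                final int to = Math.min(start + partSize, remote.length);
                // each "ranged GET" runs concurrently on the pool
                parts.add(pool.submit(() -> Arrays.copyOfRange(remote, from, to)));
            }
            byte[] result = new byte[remote.length];
            int offset = 0;
            for (Future<byte[]> part : parts) {   // merge parts in order
                byte[] chunk = part.get();
                System.arraycopy(chunk, 0, result, offset, chunk.length);
                offset += chunk.length;
            }
            return result;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] object = "hello, multipart world".getBytes();
        byte[] copy = download(object, 5, 4);
        System.out.println(new String(copy)); // hello, multipart world
    }
}
```

The pool size plays the same role as TransferManager’s executor pool: a larger pool allows more ranges in flight at once, up to what the hardware and network can sustain.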

Downloading a File Using TransferManager (Example):

// Initialize TransferManager.
TransferManager tx = new TransferManager();

// Download the Amazon S3 object to a file.
Download myDownload = tx.download(myBucket, myKey, new File("myFile"));

// Blocking call to wait until the download finishes.
myDownload.waitForCompletion();

// If transfer manager will not be used anymore, shut it down. 
tx.shutdownNow();

The pause and resume functionality is supported for parallel downloads. When you pause the download, TransferManager tries to capture the information required to resume the transfer after the pause. You can use that information to resume the download from where you paused. To protect your download from a JVM crash, PersistableDownload should be serialized to disk as soon as possible.  You can do this by passing an instance of S3SyncProgressListener to TransferManager#download. For more information about pause and resume, see this post.

Parallel downloads are not supported in some cases. The file is downloaded serially if the client is an instance of AmazonS3EncryptionClient, if the download request is a ranged request, or if the object was originally uploaded to Amazon S3 as a single part.

Low-Level Implementation:

An object can be uploaded to S3 in multiple parts. You can retrieve a part of an object from S3 by specifying the part number in GetObjectRequest. TransferManager uses this logic to download all parts of an object asynchronously and writes them to individual, temporary files. The temporary files are then merged into the destination file provided by the user. For more information, see the implementation here.
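The final merge step can be sketched as appending each part’s temporary file to the destination in part order. The paths and method names below are illustrative, not TransferManager’s internals:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

// Sketch of merging temporary part files into the destination file:
// each part was written to its own file, and the files are appended
// to the destination in part order.
public class PartMergeSketch {
    static void merge(List<Path> partFiles, Path destination) throws IOException {
        try (OutputStream out = Files.newOutputStream(destination,
                StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING)) {
            for (Path part : partFiles) {
                Files.copy(part, out);   // append this part's bytes
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("parts");
        Path p1 = Files.write(dir.resolve("part1"), "hello ".getBytes());
        Path p2 = Files.write(dir.resolve("part2"), "world".getBytes());
        Path dest = dir.resolve("merged");
        merge(List.of(p1, p2), dest);
        System.out.println(new String(Files.readAllBytes(dest))); // hello world
    }
}
```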

We hope you’ll try the new parallel download feature supported by TransferManager. Feel free to leave your feedback in the comments.



Creating Lambda Stream Functions Using the AWS Toolkit for Eclipse

In this blog post, I will introduce two new features in the AWS Toolkit for Eclipse: creating an AWS Lambda stream function and creating multiple AWS Lambda functions in a single project. Unlike the normal AWS Lambda functions which take in POJOs for handler Input/Output, the AWS Lambda stream functions let you use the InputStream and OutputStream as the input and output types for the handler. For more information about the AWS Lambda stream handler, see this example in the AWS Lambda Developer Guide. For information about how to use the AWS Toolkit for Eclipse to develop, deploy, and test Lambda functions, see this blog post.

  1. Create a Lambda stream handler. You can use either the Create New AWS Lambda Java Project wizard or the Create New AWS Lambda Function wizard. From Handler Type, choose Stream Request Handler. The stream request handler template will appear in the Preview field. (The syntax highlighting in the template is another recently added feature.)

    After you choose Finish, the AWS Toolkit for Eclipse will create a Lambda stream handler class and the unit test for that Lambda stream function. The stream handler template is a simple implementation of capitalizing the characters of the input stream and writing to the output stream. The following screenshot is the initial project structure:


  2. Create a Lambda function in a project. You can now create multiple Lambda functions in a single Lambda project. There are a number of ways to create a Lambda function class in your existing Lambda project:
  • On the Eclipse menu, from File, choose New, and then choose AWS Lambda Function.
  • On the Eclipse toolbar, choose New and then choose AWS Lambda Function.
  • On the Eclipse toolbar, from the AWS Toolkit for Eclipse drop-down menu (identified by the AWS icon), choose New AWS Lambda Function.
  • In Eclipse, press Ctrl + N, and from the AWS folder, choose AWS Lambda Function.
  • In the Package Explorer view of Eclipse, right-click and choose New, and then choose AWS Lambda Function. The following screenshot shows the Create a new AWS Lambda function wizard.

    After you choose Finish, the newly created Lambda handler Java file, unit test Java file, and other resource files will be appended to the existing project.
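The generated stream handler template is not reproduced in this post, so here is a minimal sketch of the same transform it performs (uppercasing the input stream and writing the result to the output stream). The class name is hypothetical, and the `com.amazonaws.services.lambda.runtime.RequestStreamHandler` interface that the real template implements is omitted so the sketch stays self-contained:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch of the stream-handler transform: read bytes from the input
// stream, uppercase them, and write them to the output stream. The real
// template implements the Lambda RequestStreamHandler interface; that
// dependency is left out here so the example compiles on its own.
public class StreamHandlerSketch {

    public static void handleRequest(InputStream input, OutputStream output) throws IOException {
        int c;
        while ((c = input.read()) != -1) {
            output.write(Character.toUpperCase(c));
        }
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("hello lambda".getBytes("UTF-8"));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        handleRequest(in, out);
        System.out.println(out.toString("UTF-8")); // HELLO LAMBDA
    }
}
```

When deployed, Lambda invokes the handler with the raw request payload as the input stream, and whatever is written to the output stream becomes the function's response.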

How do you like these new features? Please feel free to leave your feedback in the comments. And, as always, let us know what other features you would like us to provide.


AWS SDK for Java Developer Guide Is Now Open Source

We are happy to announce that the AWS SDK for Java Developer Guide and AWS Toolkit for Eclipse User Guide are now open-sourced on GitHub! You can edit content and code inline, make pull requests, file issues, and send content suggestions to the documentation and SDK teams.

The AWS SDKs have always embraced openness as a design principle. To date, we have received more than 450 issues and 200 pull requests for the AWS SDK for Java. We decided to open the developer and user guides too, so that we can produce the highest quality, most comprehensive SDK documentation possible.

Although our writers will continue to proactively create content, we will also monitor this GitHub repo to round out the content offering for the SDK. You can star the repos, watch them, and start contributing today!

Let us know what you would like to see. Feel free to contribute your own examples, if you have them. We will look at pull requests as they come in. Your content could become part of the core documentation!

Testing Lambda functions using the AWS Toolkit for Eclipse

In this blog post, I will introduce how to test AWS Lambda functions in Eclipse by using the AWS Toolkit for Eclipse. The AWS Toolkit for Eclipse Lambda Plugin provides a feature in which a JUnit test class is created automatically upon creation of a new AWS Lambda Java project. I will give you step-by-step instructions for creating an AWS Lambda Java project, creating an AWS Lambda function, unit testing the AWS Lambda function, uploading the AWS Lambda function to AWS Lambda, and testing the AWS Lambda function remotely. Currently, AWS Lambda supports five AWS service event sources for Java: S3 Event, SNS Event, Kinesis Event, Cognito Event, and DynamoDB Event. You can also define a custom event. In this post, I will give examples using the S3 Event and a custom event. The other event types can be used in the same way.


  1. Install Eclipse version 3.6 (Helios) or later on your computer.
  2. Follow the instructions on the AWS Toolkit for Eclipse page to Install the AWS Toolkit for Eclipse.
  3. After it is installed, the new AWS Toolkit for Eclipse icon will appear on the toolbar.

Steps for testing a Lambda function

  1. Create an AWS Lambda Java project.
    • Choose the AWS Toolkit for Eclipse or New icon, and then choose AWS Lambda Java Project.
    • When you create a name for the project and package, you should see the corresponding changes in the Preview text area. For Input Type, you can choose from the five AWS service event sources plus the custom event. You can also complete the Class Name, Input Type, and Output Type fields. The AWS Toolkit for Eclipse will auto-generate the Lambda function class in the src/ folder and the unit test class in the tst/ folder. In this example, we set Project Name to S3EventDemo, Package Name to com.lambda.demo.s3, and left the other settings at their defaults.
    • The AWS Toolkit for Eclipse will create the following folder structure for the S3 Event.

      The LambdaFunctionHandler class is an implementation of the RequestHandler interface that defines the Lambda function you need to implement. The LambdaFunctionHandlerTest class is where the unit tests reside. The TestContext class is an implementation of the Context interface, which acts as a parameter for the Lambda function. The TestUtils class is a supporting class for parsing JSON files. The s3-event.put.json file is the sample S3 event source configuration you can use for testing.
  2. Create an AWS Lambda function.
    You need to implement the Lambda function handleRequest in the LambdaFunctionHandler class. It takes S3Event and Context as parameters, and returns an Object. You can always define a custom output class instead of the default Object class. The following is the sample implementation of the Lambda function, which returns a string of the bucket name from the S3 Event.

    public Object handleRequest(S3Event input, Context context) {
        context.getLogger().log("Input: " + input);
        return input.getRecords().get(0).getS3().getBucket().getName();
    }
  3. Unit-test the AWS Lambda function.
    In the unit test, the S3Event parameter is loaded from the s3-event.put.json file in the tst/ folder and the Context is implemented and instantiated by the customer for testing. The default unit test in the LambdaFunctionHandlerTest class simply prints the output. You may want to change this to a validation, as shown in the following code. From the s3-event.put.json file, the bucket name returned from the Lambda function is expected to be “sourcebucket”.

    public void testLambdaFunctionHandler() {
        LambdaFunctionHandler handler = new LambdaFunctionHandler();
        Context ctx = createContext();
        Object output = handler.handleRequest(input, ctx);
        if (output != null) {
            Assert.assertEquals("sourcebucket", output);
        }
    }

    This is the simplest way to write the test case. When you run the unit test, output like that shown in the following screenshot will appear in the console.

  4. Upload and run the AWS Lambda function. You can also test the Lambda function after you upload it to AWS Lambda. To do this, right-click anywhere in the workspace of the project, choose Amazon Web Services, and choose Run function on AWS Lambda…, as shown in the following screenshot.

    You will be asked to select the JSON file as the S3Event input. Choose the default one provided by the AWS Toolkit for Eclipse, as shown in the following screenshot.

    Choose Invoke. You will see output similar to the following screenshot in the console. The function output is the bucket name returned by the Lambda function.

  5. Test the custom event Lambda function.
    The workflow for testing a custom event is very similar to testing the S3 Event. Let’s define a Lambda function that calculates the maximum value from a list of integer values.

    • First, define the custom event input class.
      public class CustomEventInput {
          private List<Integer> values;
          public List<Integer> getValues() {
              return values;
          }
          public void setValues(List<Integer> values) {
              this.values = values;
          }
      }
    • Second, define the custom event output class.
      public class CustomEventOutput {
          private Integer value;
          public CustomEventOutput(int value) {
              this.value = value;
          }
          public Integer getValue() {
              return value;
          }
          public void setValue(Integer value) {
              this.value = value;
          }
      }
    • Third, implement the Lambda function.
      public CustomEventOutput handleRequest(CustomEventInput input, Context context) {
          context.getLogger().log("Input: " + input);
          int maxValue = Integer.MIN_VALUE;
          for (Integer value : input.getValues()) {
              if (value > maxValue) {
                  maxValue = value;
              }
          }
          return new CustomEventOutput(maxValue);
      }
    • Fourth, prepare a sample JSON file as the CustomEventInput object for testing. AWS Lambda will use JSON format to represent the object you defined. Here is an example using POJOs for handler input/output.
      {
          "values" : [34, 52, 335, 32]
      }
    • Lastly, upload this Lambda function to AWS Lambda, and test remotely. You should see console output similar to the following. The output is the JSON format of the CustomEventOutput object returned by the Lambda function.

This is how a typical Lambda function is written and tested using the AWS Toolkit for Eclipse. For more advanced use cases, you can use the S3 Event and DynamoDB Event examples provided by AWS Lambda.

Announcing the AWS Encryption SDK

by Andrew Shore | on | in Java | Permalink | Comments |  Share

We’ve published several posts on client-side encryption using Java tools over the past couple of years, including ones on the S3 Encryption Client and the DynamoDB Encryption Client. Both of these clients assume a specific AWS service as the storage layer for data encrypted by the client. Today, the AWS Cryptography team released the AWS Encryption SDK for Java, a library that you can use to encrypt your data without assuming a particular storage layer. The SDK makes envelope encryption easier for developers while minimizing errors that could lower the security of your applications. The SDK doesn’t require you to use any specific AWS services, but we’ve provided ready-to-use samples for AWS customers who do use AWS CloudHSM or AWS Key Management Service (KMS).
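To illustrate the envelope-encryption pattern the SDK automates, here is a minimal sketch using only JDK crypto. This is not the AWS Encryption SDK's API, and the class and method names are hypothetical; it only shows the underlying idea: encrypt the payload with a one-time data key, then wrap that data key under a master key so only the wrapped key and the ciphertext need to be stored together.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Illustration of envelope encryption with plain JDK crypto. This is NOT
// the AWS Encryption SDK API: the SDK performs these steps for you and
// adds authenticated encryption, algorithm suites, and key providers.
public class EnvelopeEncryptionSketch {

    // Generate a fresh data key, encrypt the payload with it, then wrap
    // the data key under the master key. Returns {wrappedKey, ciphertext}.
    public static byte[][] encrypt(SecretKey masterKey, byte[] plaintext) throws Exception {
        SecretKey dataKey = KeyGenerator.getInstance("AES").generateKey();

        Cipher dataCipher = Cipher.getInstance("AES");
        dataCipher.init(Cipher.ENCRYPT_MODE, dataKey);
        byte[] ciphertext = dataCipher.doFinal(plaintext);

        Cipher wrapCipher = Cipher.getInstance("AES");
        wrapCipher.init(Cipher.WRAP_MODE, masterKey);
        byte[] wrappedKey = wrapCipher.wrap(dataKey);

        return new byte[][] { wrappedKey, ciphertext };
    }

    // Unwrap the data key with the master key, then decrypt the payload.
    public static byte[] decrypt(SecretKey masterKey, byte[] wrappedKey, byte[] ciphertext) throws Exception {
        Cipher wrapCipher = Cipher.getInstance("AES");
        wrapCipher.init(Cipher.UNWRAP_MODE, masterKey);
        SecretKey dataKey = (SecretKey) wrapCipher.unwrap(wrappedKey, "AES", Cipher.SECRET_KEY);

        Cipher dataCipher = Cipher.getInstance("AES");
        dataCipher.init(Cipher.DECRYPT_MODE, dataKey);
        return dataCipher.doFinal(ciphertext);
    }

    public static void main(String[] args) throws Exception {
        // In practice the master key lives in KMS or CloudHSM and never
        // leaves the key store; here we generate it locally for the demo.
        SecretKey masterKey = KeyGenerator.getInstance("AES").generateKey();
        byte[][] envelope = encrypt(masterKey, "secret payload".getBytes("UTF-8"));
        System.out.println(new String(decrypt(masterKey, envelope[0], envelope[1]), "UTF-8"));
    }
}
```

With a KMS-backed master key provider, the wrapping step becomes a KMS call and the plaintext master key is never exposed to the client.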

Check out the AWS Encryption SDK on AWS Labs. You should also read Greg Rubin’s post on the AWS Security Blog on how to use the SDK. Let us know what you think!

Introducing Retry Throttling

by Jonathan Breedlove | on | in Java | Permalink | Comments |  Share

Client side retries are used to avoid surfacing unnecessary exceptions back to the caller in the case of transient network or service issues.  In these situations a subsequent retry will likely succeed.  Although this process incurs a time penalty, it is often better than the noise from oversensitive client side exceptions.  Retries are less useful in cases of longer running issues where subsequent retries will almost always fail. An extended retry loop for each request ties up a client application thread that could otherwise be moving on to another task, only to return an exception.  In cases of service degradation, the explosion of retried requests from clients can often exacerbate problems for the service, which hurts recovery times, prolonging the client side impact. To address this issue, we are pleased to announce the introduction of a client retry throttling feature.

Retry throttling is designed to throttle back retry attempts when a large percentage of requests are failing and retries are unsuccessful. With retry throttling enabled, the client will drain an internal retry capacity pool and slowly roll off from retry attempts until there is no remaining capacity. At this point, subsequent retries will not be attempted until the client gets successful responses, at which time the retry capacity pool will slowly begin to refill and retries will once again be permitted.  Because retry throttling kicks in only when a large number of requests fail and retries are not successful, transient retries are still permitted and unaffected by this feature. Retries resulting from provisioned capacity exceptions are not throttled.
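The drain-and-refill behavior described above can be sketched as a simple token bucket. The class name, costs, and capacities below are illustrative, not the SDK's internal implementation: each retry attempt drains a fixed cost from the pool, each successful response restores a small amount, and when the pool is empty retries are skipped entirely.

```java
// Illustrative token-bucket sketch of retry throttling. The names and
// numbers are hypothetical; the SDK's actual internals differ, but the
// drain-on-retry / refill-on-success idea is the same.
public class RetryCapacitySketch {

    private final int maxCapacity;
    private final int retryCost;     // capacity consumed per retry attempt
    private final int refillAmount;  // capacity restored per successful response
    private int capacity;

    public RetryCapacitySketch(int maxCapacity, int retryCost, int refillAmount) {
        this.maxCapacity = maxCapacity;
        this.retryCost = retryCost;
        this.refillAmount = refillAmount;
        this.capacity = maxCapacity;
    }

    // Called before retrying: returns false (fail fast) when the pool is drained.
    public synchronized boolean acquireRetryCapacity() {
        if (capacity < retryCost) {
            return false;
        }
        capacity -= retryCost;
        return true;
    }

    // Called on every successful response: slowly refills the pool.
    public synchronized void onSuccess() {
        capacity = Math.min(maxCapacity, capacity + refillAmount);
    }

    public synchronized int remainingCapacity() {
        return capacity;
    }
}
```

Because refills only happen on successful responses, a sustained 100% error rate drains the pool and keeps it empty, which is exactly the abandon-then-slowly-recover behavior shown in the test runs below.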

Behavior compared

To test the effectiveness of this new feature we set up a controlled environment in which we could subject the AWS SDK for Java to various failure scenarios.  For this test, we drove a consistent request load through the client and placed a fault injection proxy between the client and service.  The fault proxy was set up to return 5xx responses for a certain percentage of requests.  Each test run lasted 30 minutes. The test, which initially began with no errors, slowly ramped up to a 100% error rate, and then back down to 0% by the end of the run.

No throttling

With the default retry behavior and no throttling you can clearly see the client ramping up retries proportional to the number of 5xx responses it sees.  At the middle of the test run we hit the 100% error rate and retries are pegged at their maximum level.  Even though none of these retries result in successful responses the client continues retrying at the same pace, tying up application threads and client connections and hammering the service with wasteful requests.

Throttling enabled

With retry throttling enabled you can see the client initially ramp up its retry attempts as 5xx errors are introduced but begin to tail off as errors increase.  After the 100% error rate is reached the client abandons retry attempts because there are no successful responses. As the error rate drops below 100% and the client begins to get successful responses, retries are slowly re-enabled.

In situations where retries have been throttled, this feature will result in fail-fast behavior from the client. Because retries are circumvented, an exception will be returned to the caller immediately if the initial request is unsuccessful.  Although this will result in more up-front exceptions, it will avoid tying up connections and client application threads for longer periods of time. This is particularly important in latency-sensitive applications.

Enabling retry throttling

Retry throttling can be enabled by explicitly setting it on the ClientConfiguration, as shown in this example:

ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setUseThrottledRetries(true);
AmazonSQS sqs = new AmazonSQSClient(clientConfig);

Alternatively, it can be enabled by including this system property when you start up the JVM. Retry throttling will apply to all client instances running in that VM:


As you can see, it’s easy to opt in to this feature. Retry throttling can improve the ability of the SDK to adapt to suboptimal situations.  Have you used this feature? Feel free to leave questions or comments below!