

How to Protect the Integrity of Your Encrypted Data by Using AWS Key Management Service and EncryptionContext

by Jason Fulghum

There’s a great post on the AWS Security Blog today. Greg Rubin explains How to Protect the Integrity of Your Encrypted Data by Using AWS Key Management Service and EncryptionContext.

Greg is a security expert and a developer on AWS Key Management Service. He’s helped us out with encryption and security changes in the AWS SDK for Java many times, and he also wrote the AWS DynamoDB Encryption Client project on GitHub.

Go check out Greg’s post on the AWS Security Blog to learn more about keeping your data secure by properly using EncryptionContext in the KMS API.

Building a serverless developer authentication API in Java using AWS Lambda, Amazon DynamoDB, and Amazon Cognito – Part 3

In parts 1 and 2 of this blog post, we saw how easy it is to get started on Java development for AWS Lambda, and use a microservices architecture to quickly iterate on an AuthenticateUser call that integrates with Amazon Cognito. We set up the AWS Toolkit for Eclipse, used the wizard to create a Java Lambda function, implemented logic for checking a user name/password combination against an Amazon DynamoDB table, and then used the Amazon Cognito Identity Broker to get an OpenID token.

In part 3 of this blog post, we will test our function locally as a JUnit test. Upon successful testing, we will then use the AWS Toolkit for Eclipse to configure and upload the function to Lambda, all from within the development environment. Finally, we will test the function from within the development environment on Lambda.

Expand the tst folder in Package Explorer.

You will see that the AWS Toolkit for Eclipse has already created some stubs for you to write your own unit test. Double-click AuthenticateUserTest.java. The test must be implemented in the testAuthenticateUser function, which creates a dummy Lambda context and a custom event that serve as the test data for your Java Lambda function. Open the TestContext.java file to see the stub that represents a Lambda context. The Context object in Java lets you interact with the AWS Lambda execution environment and access useful information about it. For example, you can use the context parameter to determine the CloudWatch log stream associated with the function. For a full list of available context properties in the programming model for Java, see the documentation.
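As a small, hedged illustration (this snippet is not part of the generated stub), a handler can read those execution-environment properties at run time like this:

import com.amazonaws.services.lambda.runtime.Context;

// Illustrative sketch: reading execution-environment details from the context.
public void logContextDetails(Context context) {
    // The CloudWatch log stream associated with this invocation.
    context.getLogger().log("Log stream: " + context.getLogStreamName());
    context.getLogger().log("Function: " + context.getFunctionName()
            + ", remaining time (ms): " + context.getRemainingTimeInMillis());
}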

As we mentioned in part 1 of our blog post, our custom object is passed as a LinkedHashMap into our Java Lambda function. Create a test input in the createInput function for a valid input (meaning there is a row in your DynamoDB table User that matches your input).

@BeforeClass
public static void createInput() throws IOException {
    // Set up a sample input object that matches a row in the User table.
    input = new LinkedHashMap<String, String>();
    input.put("userName", "Dhruv");
    input.put("passwordHash", "8743b52063cd84097a65d1633f5c74f5");
}

Fill in any appropriate values for building the context object and then implement the testAuthenticateUser function as follows:

@Test
public void testAuthenticateUser() {
    AuthenticateUser handler = new AuthenticateUser();
    Context ctx = createContext();

    AuthenticateUserResponse output =
            (AuthenticateUserResponse) handler.handleRequest(input, ctx);

    // Validate the output: a status of "true" means the user was authenticated.
    if (output.getStatus().equalsIgnoreCase("true")) {
        System.out.println("AuthenticateUser JUnit Test Passed");
    } else {
        Assert.fail("AuthenticateUser JUnit Test Failed");
    }
}

Save the file. To run the unit test, right-click AuthenticateUserTest, choose Run As, and then choose JUnit Test. If everything goes well, your test should pass. If not, run the test in Debug mode to see if there are any exceptions. The most common causes for test failures are not setting the right region for your DynamoDB table or not setting the AWS credentials in the AWS Toolkit for Eclipse configuration.

Now that we have successfully tested this function, let’s upload it to Lambda. The AWS Toolkit for Eclipse makes this process very simple. To start the wizard, right-click your Eclipse project, choose Amazon Web Services, and then choose Upload function to AWS Lambda.

You will now see a page that will allow you to configure your Lambda function. Give your Lambda function the name AuthenticateUser and make sure you choose the region in which you created your DynamoDB table and Amazon Cognito identity pool. Choose Next.

On the next page, provide a description for your service. The function handler should already be selected for you.

You will need to create an IAM role for Lambda execution. Choose Create and type AuthenticateUser-Lambda-Execution-Role. We will need to update this role later so your Lambda function has appropriate access to your DynamoDB table and Amazon Cognito identity pool. You will also need to create or choose an S3 bucket where you will upload your function code. In Advanced Settings, for Memory (MB), type 256. For Timeout (s), type 30. Choose Finish.

Your Lambda function should be created. When the upload is successful, go to the AWS Management Console and navigate to the Lambda dashboard to see your newly created function. Before we execute the function, we need to provide the permissions to the Lambda execution role. Navigate to IAM, choose Roles, and then choose the AuthenticateUser-Lambda-Execution-Role. Make sure the following managed policies are attached.

We need to provide two inline policies for the DynamoDB table and Amazon Cognito. Click Create Role Policy, and then add the following policy document. This will give Lambda access to your identity pool.
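A hedged sketch of that inline policy (the account ID and identity pool ID are placeholders; the only Amazon Cognito call our function makes is GetOpenIdTokenForDeveloperIdentity):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "cognito-identity:GetOpenIdTokenForDeveloperIdentity",
      "Resource": "arn:aws:cognito-identity:us-east-1:123456789012:identitypool/us-east-1:YOUR_IDENTITY_POOL_ID"
    }
  ]
}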

The policy document that gives access to the DynamoDB table should look like the following:
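Again as a hedged sketch (with a placeholder account ID; DynamoDBMapper.load needs only dynamodb:GetItem on the User table):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "dynamodb:GetItem",
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/User"
    }
  ]
}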

Finally, go back to Eclipse, right-click your project name, choose Amazon Web Services, and then choose Run Function on AWS Lambda. Provide your custom JSON input in the format we provided in part 1 of the blog and click Invoke. You should see the result of your Lambda function execution in the Eclipse console:

Building a serverless developer authentication API in Java using AWS Lambda, Amazon DynamoDB, and Amazon Cognito – Part 2

In part 1 of this blog post, we showed you how to leverage the AWS Toolkit for Eclipse to quickly develop Java functions for AWS Lambda. We then set up a skeleton project and the structure to handle custom objects sent to your Java function.

In part 2 of this blog post, we will implement the handleRequest function that will handle the logic of interacting with Amazon DynamoDB and then generate an OpenID token by using the Amazon Cognito API.

We will now implement the handleRequest function within the AuthenticateUser class. Our final handleRequest function looks like the following:

@Override
public AuthenticateUserResponse handleRequest(Object input, Context context) {

    AuthenticateUserResponse authenticateUserResponse = new AuthenticateUserResponse();
    @SuppressWarnings("unchecked")
    LinkedHashMap<String, String> inputHashMap = (LinkedHashMap<String, String>) input;
    User user = authenticateUser(inputHashMap);
    if (user != null) {
        authenticateUserResponse.setUserId(user.getUserId());
        authenticateUserResponse.setStatus("true");
        authenticateUserResponse.setOpenIdToken(user.getOpenIdToken());
    } else {
        authenticateUserResponse.setUserId(null);
        authenticateUserResponse.setStatus("false");
        authenticateUserResponse.setOpenIdToken(null);
    }

    return authenticateUserResponse;
}

We will need to implement the authenticateUser function for this Lambda Java function to compile properly. Implement the function as shown here:


public User authenticateUser(LinkedHashMap<String, String> input) {
    User user = null;

    String userName = input.get("userName");
    String passwordHash = input.get("passwordHash");

    try {
        AmazonDynamoDBClient client = new AmazonDynamoDBClient();
        client.setRegion(Region.getRegion(Regions.US_EAST_1));
        DynamoDBMapper mapper = new DynamoDBMapper(client);

        // Look up the user by hash key (username).
        user = mapper.load(User.class, userName);

        if (user != null && user.getPasswordHash().equalsIgnoreCase(passwordHash)) {
            String openIdToken = getOpenIdToken(user.getUserId());
            user.setOpenIdToken(openIdToken);
            return user;
        }

        // Unknown user or wrong password: report authentication failure.
        user = null;
    } catch (Exception e) {
        System.out.println(e.toString());
    }
    return user;
}

In this function, we use the DynamoDB Mapper to check if a row with the provided username attribute exists in the table User. Make sure you set the region in your code. If a row with the username exists, the code makes a simple check against the provided password hash value. If the hashes match, we authenticate the user and then follow the developer authentication flow to get an OpenID token from the Amazon Cognito identity broker. The token is passed back to the client as an attribute of the AuthenticateUserResponse object. For more information about the authentication flow for developer authenticated identities, see the Amazon Cognito documentation here. For this Java Lambda function, we will be using the enhanced authflow.
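For context, in the enhanced authflow the client exchanges the OpenID token our API returns (GetOpenIdTokenForDeveloperIdentity also returns the identity ID the token is bound to) for temporary AWS credentials in a single call. A hedged, client-side sketch of that exchange:

// Hypothetical client-side sketch (enhanced authflow): exchange the OpenID
// token returned by AuthenticateUser for temporary AWS credentials.
private Credentials getTemporaryCredentials(String identityId, String openIdToken) {

    AmazonCognitoIdentityClient client = new AmazonCognitoIdentityClient();

    HashMap<String, String> logins = new HashMap<String, String>();
    // For developer authenticated identities, the token is supplied under
    // the cognito-identity.amazonaws.com provider key.
    logins.put("cognito-identity.amazonaws.com", openIdToken);

    GetCredentialsForIdentityRequest request = new GetCredentialsForIdentityRequest()
            .withIdentityId(identityId)
            .withLogins(logins);

    GetCredentialsForIdentityResult result = client.getCredentialsForIdentity(request);
    return result.getCredentials(); // access key ID, secret key, session token
}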

Before we can get an OpenID token, we need to create an identity pool in Amazon Cognito and then register our developer authentication provider with this identity pool. When you create the identity pool, you can keep the default roles provided by the console. In the Authentication Providers field, in the Custom section, type login.yourname.services.
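If you would rather script this step than use the console, a hedged sketch with the Java SDK might look like the following (the pool name and provider name are placeholders):

// Hypothetical sketch: creating an identity pool with a developer provider,
// equivalent to the console steps above.
AmazonCognitoIdentityClient cognitoClient = new AmazonCognitoIdentityClient();

CreateIdentityPoolRequest poolRequest = new CreateIdentityPoolRequest()
        .withIdentityPoolName("AuthenticateUserPool")
        .withAllowUnauthenticatedIdentities(false)
        .withDeveloperProviderName("login.yourname.services");

CreateIdentityPoolResult poolResult = cognitoClient.createIdentityPool(poolRequest);
System.out.println("Identity pool ID: " + poolResult.getIdentityPoolId());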

After the pool is created, implement the getOpenIdToken function as shown:


private String getOpenIdToken(Integer userId) {

    AmazonCognitoIdentityClient client = new AmazonCognitoIdentityClient();
    GetOpenIdTokenForDeveloperIdentityRequest tokenRequest =
            new GetOpenIdTokenForDeveloperIdentityRequest();
    tokenRequest.setIdentityPoolId("us-east-1:6dbccdfd-9444-4d4c-9e1b-5d1139cbe863");

    // Map our developer provider name to the user's unique ID.
    HashMap<String, String> map = new HashMap<String, String>();
    map.put("login.dhruv.services", userId.toString());

    tokenRequest.setLogins(map);
    tokenRequest.setTokenDuration(Long.valueOf(10001));

    GetOpenIdTokenForDeveloperIdentityResult result =
            client.getOpenIdTokenForDeveloperIdentity(tokenRequest);

    return result.getToken();
}

This code calls the GetOpenIdTokenForDeveloperIdentity function in the Amazon Cognito API. You need to pass in your Amazon Cognito identity pool ID along with the unique identity provider string you entered in the Custom field earlier. You also have to provide a unique identifier for the user so Amazon Cognito can map that to its Cognito ID. This unique ID is usually the user ID you use internally, but it can be any other unique attribute that allows both your authentication back end and Amazon Cognito to identify a user.

In part 3 of this blog, we will test the Java Lambda function locally using JUnit. Then we will upload and test the function on Lambda.

Building a serverless developer authentication API in Java using AWS Lambda, Amazon DynamoDB, and Amazon Cognito – Part 1

Most of us are aware of the support for a developer authentication backend in Amazon Cognito and how one can use a custom backend service to authenticate and authorize users to access AWS resources using temporary credentials. In this blog, we will create a quick serverless backend authentication API written in Java and deployed on Lambda. You can mirror this workflow in your current backend authentication service, or you can use this service as it is.

The blog will cover the following topics in a four-part series.

  1. Part 1: How to get started with Java development on Lambda using the AWS Toolkit for Eclipse.
  2. Part 1: How to use Java Lambda functions for custom events.
  3. Part 2: How to create a simple authentication microservice that checks users against an Amazon DynamoDB table.
  4. Part 2: How to integrate with the Amazon Cognito Identity Broker to get an OpenID token.
  5. Part 3: How to locally test your Java Lambda functions through JUnit before uploading to Lambda.
  6. Part 4: How to hook up your Lambda function to Amazon API Gateway.

The Lambda workflow support in the latest version of the AWS Toolkit for Eclipse makes it really simple to create Java functions for Lambda. If you haven’t already downloaded Eclipse, you can get it here. We assume you have an AWS account with at least one IAM user with an Administrator role (that is, the user should belong to an IAM group with administrative permissions).

Important: We strongly recommend you do not use your root account credentials to create this microservice.

After you have downloaded Eclipse and set up your AWS account and IAM user, install the AWS Toolkit for Eclipse. When prompted, restart Eclipse.

We will now create an AWS Lambda project. In the Eclipse toolbar, click the yellow AWS icon, and choose New AWS Lambda Java Project.

On the wizard page, for Project name, type AuthenticateUser. For Package Name, type aws.java.lambda.demo (or any package name you want). For Class Name, type AuthenticateUser. For Input Type, choose Custom Object. If you would like to try other predefined events that Lambda supports in Java, such as an S3Event or DynamoDBEvent, see these samples in our documentation here. For Output Type, choose a custom object, which we will define in the code later. The output type should be a Java class, not a primitive type such as int or float.

Choose Finish.

In Package Explorer, you will now see a Readme file in the project structure. You can close the Readme file for now. The structure below shows the main class, AuthenticateUser, which is your Lambda handler class. It’s where you will be implementing the handleRequest function. Later on, we will implement the unit tests in JUnit by modifying the AuthenticateUserTest class to allow local testing of your Lambda function before uploading.

Make sure you have added the AWS SDK for Java library to the build path for your project. Before we implement the handleRequest function, let's create a data class for the User object that maps to a DynamoDB table called User. You will need to create that table with some test data in it; to do so, follow the tutorial here. We will choose the username attribute as the hash key. We do not need to create any indexes for this table. Create a new User class in the package aws.java.lambda.demo, and then copy and paste the following code:

Note: For this exercise, we will create all our resources in the us-east-1 region. This region, along with the ap-northeast-1 (Tokyo) and eu-west-1 (Ireland) regions, supports Amazon Cognito, AWS Lambda, and API Gateway.

package aws.java.lambda.demo;

import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAttribute;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;

@DynamoDBTable(tableName="User")
public class User {

    private String userName;
    private Integer userId;
    private String passwordHash;
    private String openIdToken;

    @DynamoDBHashKey(attributeName="username")
    public String getUserName() { return userName; }
    public void setUserName(String userName) { this.userName = userName; }

    @DynamoDBAttribute(attributeName="userid")
    public Integer getUserId() { return userId; }
    public void setUserId(Integer userId) { this.userId = userId; }

    @DynamoDBAttribute(attributeName="passwordhash")
    public String getPasswordHash() { return passwordHash; }
    public void setPasswordHash(String passwordHash) { this.passwordHash = passwordHash; }

    @DynamoDBAttribute(attributeName="openidtoken")
    public String getOpenIdToken() { return openIdToken; }
    public void setOpenIdToken(String openIdToken) { this.openIdToken = openIdToken; }

    public User(String userName, Integer userId, String passwordHash, String openIdToken) {
        this.userName = userName;
        this.userId = userId;
        this.passwordHash = passwordHash;
        this.openIdToken = openIdToken;
    }

    // DynamoDBMapper requires a no-argument constructor.
    public User() { }
}

You will see we are leveraging annotations so we can use the advanced features provided by the DynamoDB Mapper. The AWS SDK for Java provides DynamoDBMapper, a high-level interface that automates the process of getting your objects into Amazon DynamoDB and back out again. For more information about annotating your Java classes for use in DynamoDB, see the developer guide here.
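As a quick, hedged illustration of what the mapper gives us (the region choice follows the note above):

// Illustrative sketch: a basic DynamoDBMapper round trip with the User class.
AmazonDynamoDBClient client = new AmazonDynamoDBClient();
client.setRegion(Region.getRegion(Regions.US_EAST_1));
DynamoDBMapper mapper = new DynamoDBMapper(client);

// Persist a test row, then load it back by hash key (username).
mapper.save(new User("Dhruv", 123, "8743b52063cd84097a65d1633f5c74f5", null));
User loaded = mapper.load(User.class, "Dhruv");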

Our Java function will ingest a custom object from API Gateway and, after execution, return a custom response object. Our custom input is a JSON POST body submitted through an API Gateway endpoint. A sample request will look like the following:

        {
          "userName": "Dhruv",
          "passwordHash": "8743b52063cd84097a65d1633f5c74f5"
        } 

The data is passed in as a LinkedHashMap of key-value pairs to your handleRequest function, so you will need to cast your input properly to extract the values of the POST body.
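As a minimal sketch of that cast (the complete handler is implemented in part 2 of this series):

// Lambda deserializes the JSON POST body into a LinkedHashMap for a
// Custom Object input type; cast it to pull out individual fields.
@SuppressWarnings("unchecked")
LinkedHashMap<String, String> request = (LinkedHashMap<String, String>) input;
String userName = request.get("userName");
String passwordHash = request.get("passwordHash");

Your custom response object looks like the following: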

        {
          "userId": 123,
          "status": "true",
          "openIdToken": "eyJraWQiOiJ1cy1lYXN0LTExIiwidHlwIjoiSldTIiwiYWxnIjoiUl"
        }

We need to define this response class inside our AuthenticateUser class as follows.

public static class AuthenticateUserResponse {

    protected Integer userId;
    protected String openIdToken;
    protected String status;

    public Integer getUserId() { return userId; }
    public void setUserId(Integer userId) { this.userId = userId; }

    public String getOpenIdToken() { return openIdToken; }
    public void setOpenIdToken(String openIdToken) { this.openIdToken = openIdToken; }

    public String getStatus() { return status; }
    public void setStatus(String status) { this.status = status; }
}

Now that we have the structure in place to handle a custom event, in part 2 of this blog post, we will finish the implementation of the handleRequest function that will do user validation and interact with Amazon Cognito.

S3 workflows simplified with Java 8 streams

by Jonathan Breedlove

Of the many changes brought about with Java 8, the Stream API is perhaps one of the most exciting.  Java 8 streams, which are unrelated to Java's I/O streams, allow you to perform a series of mutations and transformations against a collection of items.  You can think of a stream as a form of data pipeline, where a collection of data is passed as input and a series of defined steps are performed against that data.  Streams can produce a result in the form of a new collection, or they can directly perform actions against each element of the stream.  Streams can be created from several sources: directly specified values, a collection, or a Spliterator (via a utility method).
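As a quick illustration of those creation paths (plain Java 8, nothing AWS-specific):

import java.util.Arrays;
import java.util.List;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

// Three ways to create a stream: from values, from a collection,
// and from a Spliterator obtained from any Iterable.
Stream<String> fromValues = Stream.of("a", "b", "c");

List<String> letters = Arrays.asList("a", "b", "c");
Stream<String> fromCollection = letters.stream();

Iterable<String> iterable = letters;
Stream<String> fromSpliterator = StreamSupport.stream(iterable.spliterator(), false);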

The following are some very simple examples of how streams can be used with the Amazon S3 Java client.

Creating a Stream from results

Iterable<S3ObjectSummary> objectSummaries = S3Objects.inBucket(s3Client, "myBucket");
Stream<S3ObjectSummary> objectStream = StreamSupport.stream(objectSummaries.spliterator(), false);

We first make a call through the S3 client to grab a paginated Iterable of result object summaries from the objects in a bucket.  This transparently handles iteration across multiple pages by making additional calls to the service, as needed, to retrieve subsequent result pages.  Now it's time to create a stream to process our results.  Although Java 8 does not provide a direct way to generate a stream from an Iterable, it does provide a utility class (StreamSupport) with methods to help you do this.  We're able to use this to pass in a Spliterator (also new to Java 8; it helps facilitate parallelized iteration) grabbed off the Iterable to generate a stream.

Finding the total size of all objects in a bucket

This is a simple example of how using Java 8 streams can reduce the verbosity of an operation.  It's not uncommon to want to compute the total size of all objects in a bucket, and historically one might iterate through the results and keep a running tally of the cumulative size of each object.

long totalBucketSize = 0L;
for (S3ObjectSummary summary : objectSummaries) {
    totalBucketSize += summary.getSize();
}

Using a stream gives you a neat alternative that does the same thing.

long totalBucketSize = objectStream.mapToLong(obj -> obj.getSize()).sum();

Calling mapToLong on our stream produces a LongStream generated from the results of applying a function (in this case, one that simply grabs the object size from each summary), which allows us to perform subsequent stream operations.  Calling sum (a stream terminal reduction operation) returns the sum of all elements of the stream.

Delete all bucket objects older than a specified date

You might regularly run a job that goes through the objects in a bucket and deletes those that were last modified before some date.  Again, streams allow us to perform this operation concisely.  Here we’ll say that we want to delete any objects that were last modified over 30 days ago.

Calendar c = Calendar.getInstance();
c.add(Calendar.DAY_OF_MONTH, -30);
Date cutoffDate = c.getTime();

objectStream.filter(obj -> obj.getLastModified().before(cutoffDate))
    .forEach(obj -> s3Client.deleteObject("myBucket", obj.getKey()));

First we generate our target cutoff date.  In this example we call filter on our stream to filter the stream elements down to those matching our condition.  At that point, calling forEach (which is itself a stream terminal operation) executes a function against the remaining stream elements.  In this case it makes a call to the S3 client to delete each object.  Note that a stream can be consumed only once; to run more than one of these examples, re-create objectStream from the Iterable each time.

This could also be easily modified to simply return a List of these old objects to pass around.

List<S3ObjectSummary> oldObjects = objectStream
        .filter(obj -> obj.getLastModified().before(cutoffDate))
        .collect(Collectors.toList());

Conclusion

I hope these simple examples give you some ideas for using streams in your application.  Are you using Java 8 streams with the AWS SDK for Java?  Let us know in the comments!

AWS re:Invent 2015

by Jason Fulghum

AWS re:Invent 2015 kicks off next week! We couldn’t be more excited to hear how you’re using our SDKs and tools to build your applications.

You can find several sessions covering the AWS SDKs and tools in the Developer Tools track. We’ll also be working at the AWS booth in the Expo Hall, so be sure to come by and see us.

I’ll be co-presenting DEV303: Practical DynamoDB Programming with Java on Thursday morning. Come by to see how we use the AWS SDK for Java along with AWS Lambda and the AWS Toolkit for Eclipse to efficiently work with data in DynamoDB.

As always, the re:Invent 2015 technical sessions will be available to watch online, for free, after the event. Here are a few sessions on the AWS SDK for Java from years past:

Will you be at AWS re:Invent this year? What are you most excited about? Let us know in the comments below.

Managing Dependencies with AWS SDK for Java – Bill of Materials module (BOM)

by Manikandan Subramanian

Every Maven project specifies its required dependencies in the pom.xml file. The AWS SDK for Java provides a Maven module for every service it supports. To use the Java client for a service, all you need to do is specify the group ID, artifact ID, and version of the Maven module in the dependencies section of pom.xml.

The AWS SDK for Java introduces a new Maven bill of materials (BOM) module, aws-java-sdk-bom, to manage all your dependencies on the SDK and to make sure Maven picks the compatible versions when depending on multiple SDK modules. You may wonder why this BOM module is required when the dependencies are specified in the pom.xml file. Let me take you through an example. Here is the dependencies section from a pom.xml file:

  <dependencies>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-ec2</artifactId>
      <version>1.10.2</version>
    </dependency>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-s3</artifactId>
      <version>1.10.5</version>
    </dependency>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-dynamodb</artifactId>
      <version>1.10.10</version>
    </dependency>
  </dependencies>

Here is the Maven’s dependency resolution for the above pom.xml file:

As you can see, the aws-java-sdk-ec2 module is pulling in an older version of aws-java-sdk-core. This intermixing of different versions of SDK modules can create unexpected issues. To ensure that Maven pulls in the correct version of the dependencies, import the aws-java-sdk-bom into your dependency management section and specify your project's dependencies, as shown below.

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk-bom</artifactId>
        <version>1.10.10</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>
  
  <dependencies>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-ec2</artifactId>
    </dependency>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-s3</artifactId>
    </dependency>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-dynamodb</artifactId>
    </dependency>
  </dependencies>

The Maven version for each dependency is resolved to the version specified in the BOM. Notice that when you import a BOM, you need to set the dependency type to pom and the scope to import.

Here is the Maven’s dependency resolution for the above pom.xml file:

As you can see, all the AWS SDK for Java modules are resolved to a single Maven version. And upgrading to a newer version of the AWS SDK for Java requires you to change only the version of aws-java-sdk-bom module being imported.

Have you been using the modularized Maven artifacts of the AWS SDK for Java in your project? Please leave your feedback in the comments.

Using AWS CodeCommit from Eclipse

Earlier this month, we launched AWS CodeCommit — a managed revision control service that hosts Git repositories and works with existing Git-based tools.

If you’re an Eclipse user, it’s easy to use the EGit tools in Eclipse to work with AWS CodeCommit. This post shows how to publish a project to AWS CodeCommit so you can start trying out the new service.

Configure SSH Authentication

To use AWS CodeCommit with Eclipse’s Git tooling, you’ll need to configure SSH credentials for accessing CodeCommit. This is an easy process you’ll only need to do once. The AWS CodeCommit User Guide has a great walkthrough describing the exact steps to create a keypair and register it with AWS. Make sure you take the time to test your SSH credentials and configuration as described in the walkthrough.

Create a Repository

Next, we’ll create a new Git repository using AWS CodeCommit. The AWS CodeCommit User Guide has instructions for creating repositories through the AWS CLI or the AWS CodeCommit console.

Here’s how I used the AWS CLI:

% aws --region us-east-1 codecommit create-repository \
      --repository-name MyFirstRepo \
      --repository-description "My first CodeCommit repository"
{
  "repositoryMetadata": {
    "creationDate": 1437760512.195,
    "cloneUrlHttp": 
       "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyFirstRepo",
    "cloneUrlSsh": 
       "ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyFirstRepo",
    "repositoryName": "MyFirstRepo",
    "Arn": "arn:aws:codecommit:us-east-1:963699449919:MyFirstRepo",
    "repositoryId": "c4ed6846-5000-44ce-a808-b1862766d8bc",
    "repositoryDescription": "My first CodeCommit repository",
    "accountId": "963699449919",
    "lastModifiedDate": 1437760512.195
  }
}

Whether you use the CLI or the console to create your CodeCommit repository, make sure to copy the cloneUrlSsh property that’s returned. We’ll use that in the next step when we clone the CodeCommit repository to our local machine.

Create a Clone

Now we’re ready to use our repository locally and push one of our projects into it. The first thing we need to do is clone our repository so that we have a local version. In Eclipse, open the Git Repositories view (Window -> Show View -> Other…) and select the option to clone a Git repository.

In the first page of the Clone Git Repository wizard, paste the Git SSH URL from your CodeCommit repository into the URI field. Eclipse will parse out the connection protocol, host, and repository path.

Click Next. The CodeCommit repository we created is an empty, or bare, repository, so there aren’t any branches to configure yet.

Click Next. On the final page of the wizard, select where on your local machine you'd like to store the cloned repository.

Push to Your Repository

Now that we’ve got a local clone of our repository, we’re ready to start pushing a project into it. Select a project and use Team -> Share to connect that project with the repository we just cloned. In my example, I simply created a new project.

Next use Team -> Commit… to make the initial check-in to your cloned repo.

Finally, use Team -> Push Branch… to push the master branch in your local repository up to your CodeCommit repository. This will create the master branch on the CodeCommit repository and configure your local repo for upstream pushes and pulls.
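If you prefer the command line, a hedged equivalent of this clone-commit-push sequence looks like the following (using the cloneUrlSsh value returned when the repository was created):

% git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyFirstRepo
% cd MyFirstRepo
% # add your project files, then:
% git add .
% git commit -m "Initial commit"
% git push origin master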

Conclusion

Your project is now configured with the EGit tools in Eclipse and set up to push and pull from a remote AWS CodeCommit repository. From here you can take advantage of all the EGit tooling in Eclipse to work with your repository. Have you tried using AWS CodeCommit yet?

Invoking AWS Lambda Functions from Java

by David Murray

AWS Lambda makes it incredibly easy and cost-effective to run your code at arbitrary scale in the cloud. Simply write the handler code for your function and upload it to Lambda. The service takes care of hosting and scaling the function for you. And in case you somehow missed it, it now supports writing function handlers in Java!

Although many use cases for Lambda involve running code in response to triggers from other AWS services like Amazon S3 or Amazon Cognito, you can also invoke Lambda functions directly, making them an easy and elastically scalable way to decompose an application into reusable microservices. In this post, we’ll assume we’ve got a Lambda function named “CountCats” that accepts an S3 bucket and key for an image, analyzes the image to count the number of cats the image contains, and returns that count to the caller. An example request to this service might look like:

{
  "bucket": "pictures-of-cats",
  "key": "three-cool-cats.jpg"
}

And an example response might look like:

{
  "count": 3
}

To invoke this function from Java code, we’ll first define POJOs representing the input and output JSON:

public class CountCatsInput {

  private String bucketName;
  private String key;

  public String getBucketName() { return bucketName; }
  public void setBucketName(String value) { bucketName = value; }

  public String getKey() { return key; }
  public void setKey(String value) { key = value; }
}

public class CountCatsOutput {

  private int count;

  public int getCount() { return count; }
  public void setCount(int value) { count = value; }
}

Next we’ll define an interface representing our microservice, and annotate it with the name of the Lambda function to invoke when it’s called:

import com.amazonaws.services.lambda.invoke.LambdaFunction;

public interface CatService {
  @LambdaFunction(functionName="CountCats")
  CountCatsOutput countCats(CountCatsInput input);
}

We can then use the LambdaInvokerFactory to create an implementation of this interface that will make calls to our service running on Lambda. (Providing a lambdaClient is optional; if one is not provided, a default client will be used.)

import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.invoke.LambdaInvokerFactory;

final CatService catService = LambdaInvokerFactory.builder()
        .lambdaClient(AWSLambdaClientBuilder.defaultClient())
        .build(CatService.class);

Finally, we invoke our service using this proxy object:

CountCatsInput input = new CountCatsInput();
input.setBucketName("pictures-of-cats");
input.setKey("three-cool-cats.jpg");

int cats = catService.countCats(input).getCount();

When called, the input POJO is serialized to JSON and sent to your Lambda function; the function’s result is transparently deserialized back into your output POJO. Details like authentication, timeouts, and retries in case of transient network issues are handled by the underlying AWSLambdaClient.
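If you need more control over those details, a hedged sketch of supplying your own configured client to the factory (the region here is an assumption) looks like this:

import com.amazonaws.regions.Regions;
import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.invoke.LambdaInvokerFactory;

// Hypothetical sketch: a client pinned to an explicit region, handed to the
// invoker factory in place of the default client.
AWSLambda lambda = AWSLambdaClientBuilder.standard()
        .withRegion(Regions.US_EAST_1)
        .build();

final CatService catService = LambdaInvokerFactory.builder()
        .lambdaClient(lambda)
        .build(CatService.class);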

Are you using Lambda to host a microservice and calling it from Java code? Let us know how it’s going in the comments or over on our GitHub repository!

AWS SDK for Java Office Hour

by David Murray

The AWS SDKs and Tools team invites you to the first-ever online office hour hosted by the maintainers of the AWS SDK for Java and AWS Toolkit for Eclipse. It will be held via Google Hangouts from 10:30-11:30am PDT (UTC-7:00) on Thursday, June 18. If you don't already have a Google account, you will need to create one to join the video chat.

This first office hour will be entirely driven by customer questions. We expect to focus on questions about the two developer tools we manage, but any questions related to Java development on AWS are welcome. We’re excited to meet you and help Java developers be successful on AWS!

The event details can be easily added to your calendar using this link. Alternatively, you can directly join the video call at the scheduled time via this link.