Tag: Eclipse


Testing Lambda functions using the AWS Toolkit for Eclipse

In this blog post, I will introduce how to test AWS Lambda functions in Eclipse by using the AWS Toolkit for Eclipse. The AWS Toolkit for Eclipse Lambda Plugin provides a feature in which a JUnit test class is created automatically upon creation of a new AWS Lambda Java project. I will give you step-by-step instructions for creating an AWS Lambda Java project, creating an AWS Lambda function, unit testing the AWS Lambda function, uploading the AWS Lambda function to AWS Lambda, and testing the AWS Lambda function remotely. Currently, AWS Lambda supports five AWS service event sources for Java: S3 Event, SNS Event, Kinesis Event, Cognito Event, and DynamoDB Event. You can also define a custom event. In this post, I will give examples using the S3 Event and a custom event. The other event types can be used in the same way.

Prerequisites

  1. Install Eclipse version 3.6 (Helios) or later on your computer.
  2. Follow the instructions on the AWS Toolkit for Eclipse page to Install the AWS Toolkit for Eclipse.
  3. After it is installed, the new AWS Toolkit for Eclipse icon will appear on the toolbar.

Steps for testing a Lambda function

  1. Create an AWS Lambda Java project.
    • Choose the AWS Toolkit for Eclipse or New icon, and then choose AWS Lambda Java Project.
    • When you create a name for the project and package, you should see the corresponding changes in the Preview text area. For Input Type, you can choose from the five AWS service event sources plus the custom event. You can also complete the Class Name, Input Type, and Output Type fields. The AWS Toolkit for Eclipse will auto-generate the Lambda function class in the src/ folder and the unit test class in the tst/ folder. In this example, we set Project Name to S3EventDemo, Package Name to com.lambda.demo.s3, and left the other settings at their defaults.
    • The AWS Toolkit for Eclipse will create the following folder structure for the S3 Event.

      The LambdaFunctionHandler class is an implementation of the RequestHandler interface that defines the Lambda function you need to implement. The LambdaFunctionHandlerTest class is where the unit tests reside. The TestContext class is an implementation of the Context interface, which acts as a parameter for the Lambda function. The TestUtils class is a supporting class for parsing JSON files. The s3-event.put.json file is the sample S3 event source configuration you can use for testing.
  2. Create an AWS Lambda function.
    You need to implement the Lambda function handleRequest in the LambdaFunctionHandler class. It takes S3Event and Context as parameters, and returns an Object. You can always define a custom output class instead of the default Object class. The following is the sample implementation of the Lambda function, which returns a string of the bucket name from the S3 Event.

    @Override
    public Object handleRequest(S3Event input, Context context) {
        context.getLogger().log("Input: " + input);
        return input.getRecords().get(0).getS3().getBucket().getName();
    }
    
  3. Unit-test the AWS Lambda function.
    In the unit test, the S3Event parameter is loaded from the s3-event.put.json file in the tst/ folder, and the Context is implemented and instantiated by the developer for testing. The default unit test in the LambdaFunctionHandlerTest class simply prints the output. You may want to change this to a validation, as shown in the following code. From the s3-event.put.json file, the bucket name returned from the Lambda function is expected to be “sourcebucket”.

    @Test
    public void testLambdaFunctionHandler() {
        LambdaFunctionHandler handler = new LambdaFunctionHandler();
        Context ctx = createContext();
        Object output = handler.handleRequest(input, ctx);
        if (output != null) {
            System.out.println(output.toString());
        }
        Assert.assertEquals("sourcebucket", output);
    }

    This is the simplest way to write the test case. When you run the unit test, output like that shown in the following screenshot will appear in the console.

  4. Upload and run the AWS Lambda function. You can also test the Lambda function after you upload it to AWS Lambda. To do this, right-click anywhere in the workspace of the project, choose Amazon Web Services, and choose Run function on AWS Lambda…, as shown in the following screenshot.

    You will be asked to select the JSON file as the S3Event input. Choose the default one provided by the AWS Toolkit for Eclipse, as shown in the following screenshot.

    Choose Invoke. You will see output similar to the following screenshot in the console. The function output is the bucket name returned by the Lambda function.

  5. Test the custom event Lambda function.
    The workflow for testing a custom event is very similar to testing the S3 Event. Let’s define a Lambda function that calculates the maximum value from a list of integer values.

    • First, define the custom event input class.
      public class CustomEventInput {
          private List<Integer> values;
          public List<Integer> getValues() {
              return values;
          }
          public void setValues(List<Integer> values) {
              this.values = values;
          }
      }
      
    • Second, define the custom event output class.
      public class CustomEventOutput {
          private Integer value;
          public CustomEventOutput(int value) {
              setValue(value);
          }
          public Integer getValue() {
              return value;
          }
          public void setValue(Integer value) {
              this.value = value;
          }
      }
      
    • Third, implement the Lambda function.
      @Override
      public CustomEventOutput handleRequest(CustomEventInput input, Context context) {
          context.getLogger().log("Input: " + input);
      
          int maxValue = Integer.MIN_VALUE;
          for (Integer value : input.getValues()) {
              if (value > maxValue) {
                  maxValue = value;
              }
          }
          return new CustomEventOutput(maxValue);
      }
      
    • Fourth, prepare a sample JSON file as the CustomEventInput object for testing. AWS Lambda will use JSON format to represent the object you defined. Here is an example using POJOs for handler input/output.
      {
          "values" : [34, 52, 335, 32]
      }
      
    • Lastly, upload this Lambda function to AWS Lambda, and test remotely. You should see console output similar to the following. The output is the JSON format of the CustomEventOutput object returned by the Lambda function.

This is how a typical Lambda function is written and tested using the AWS Toolkit for Eclipse. For more advanced use cases, you can use the S3 Event and DynamoDB Event examples provided by AWS Lambda.

Building a serverless developer authentication API in Java using AWS Lambda, Amazon DynamoDB, and Amazon Cognito – Part 3

In parts 1 and 2 of this blog post, we saw how easy it is to get started on Java development for AWS Lambda, and use a microservices architecture to quickly iterate on an AuthenticateUser call that integrates with Amazon Cognito. We set up the AWS Toolkit for Eclipse, used the wizard to create a Java Lambda function, implemented logic for checking a user name/password combination against an Amazon DynamoDB table, and then used the Amazon Cognito Identity Broker to get an OpenID token.

In part 3 of this blog post, we will test our function locally as a JUnit test. Upon successful testing, we will then use the AWS Toolkit for Eclipse to configure and upload the function to Lambda, all from within the development environment. Finally, we will test the function from within the development environment on Lambda.

Expand the tst folder in Package Explorer:

You will see the AWS Toolkit for Eclipse has already created some stubs for you to write your own unit test. Double-click AuthenticateUserTest.java. The test must be implemented in the testAuthenticateUser function. The function creates a dummy Lambda context and a custom event that will serve as your test data for testing your Java Lambda function. Open the TestContext.java file to see a stub that is created to represent a Lambda context. The Context object in Java lets you interact with the AWS Lambda execution environment and access useful information about it through the context parameter. For example, you can use the context parameter to determine the CloudWatch log stream associated with the function. For a full list of available context properties in the programming model for Java, see the documentation.
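For example, here is a minimal sketch (the helper class and method names are ours, not part of the generated stubs) that reads a few of these properties through the context parameter:

import com.amazonaws.services.lambda.runtime.Context;

public class ContextInfoLogger {
    // Log a few properties of the Lambda execution environment.
    // Call this from a handler, passing along its Context parameter.
    public static void logEnvironment(Context context) {
        context.getLogger().log("Function name: " + context.getFunctionName());
        context.getLogger().log("CloudWatch log stream: " + context.getLogStreamName());
        context.getLogger().log("Remaining time (ms): " + context.getRemainingTimeInMillis());
    }
}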

As we mentioned in part 1 of our blog post, our custom object is passed as a LinkedHashMap into our Java Lambda function. Create a test input in the createInput function for a valid input (meaning there is a row in your DynamoDB table User that matches your input).

@BeforeClass
    public static void createInput() throws IOException {
        // TODO: set up your sample input object here.
        input = new LinkedHashMap();
        input.put("userName", "Dhruv");
        input.put("passwordHash","8743b52063cd84097a65d1633f5c74f5");
    } 

Fill in any appropriate values for building the context object and then implement the testAuthenticateUser function as follows:

@Test	
    public void testAuthenticateUser() {
        AuthenticateUser handler = new AuthenticateUser();
        Context ctx = createContext();

        AuthenticateUserResponse output = (AuthenticateUserResponse)handler.handleRequest(input, ctx);

        // TODO: validate output here if needed.
        if (output.getStatus().equalsIgnoreCase("true")) {
            System.out.println("AuthenticateUser JUnit Test Passed");
        } else {
            Assert.fail("AuthenticateUser JUnit Test Failed");
        }
    }

Save the file. To run the unit test, right-click AuthenticateUserTest, choose Run As, and then choose JUnit Test. If everything goes well, your test should pass. If not, run the test in Debug mode to see if there are any exceptions. The most common causes for test failures are not setting the right region for your DynamoDB table or not setting the AWS credentials in the AWS Toolkit for Eclipse configuration.

Now that we have successfully tested this function, let’s upload it to Lambda. The AWS Toolkit for Eclipse makes this process very simple. To start the wizard, right-click your Eclipse project, choose Amazon Web Services, and then choose Upload function to AWS Lambda.

 
          

You will now see a page that will allow you to configure your Lambda function. Give your Lambda function the name AuthenticateUser and make sure you choose the region in which you created your DynamoDB table and Amazon Cognito identity pool. Choose Next.

On this page, you will configure your Lambda function. Provide a description for your service. The function handler should already have been selected for you.

You will need to create an IAM role for Lambda execution. Choose Create and type AuthenticateUser-Lambda-Execution-Role. We will need to update this role later so your Lambda function has appropriate access to your DynamoDB table and Amazon Cognito identity pool. You will also need to create or choose an S3 bucket where you will upload your function code. In Advanced Settings, for Memory (MB), type 256. For Timeout(s), type 30. Choose Finish.

Your Lambda function should be created. When the upload is successful, go to the AWS Management Console and navigate to the Lambda dashboard to see your newly created function. Before we execute the function, we need to provide the permissions to the Lambda execution role. Navigate to IAM, choose Roles, and then choose the AuthenticateUser-Lambda-Execution-Role. Make sure the following managed policies are attached.

We need to provide two inline policies for the DynamoDB table and Amazon Cognito. Click Create Role Policy, and then add the following policy document. This will give Lambda access to your identity pool.

The policy document that gives access to the DynamoDB table should look like the following:

Finally, go back to Eclipse, right-click your project name, choose Amazon Web Services, and then choose Run Function on AWS Lambda. Provide your custom JSON input in the format we provided in part 1 of the blog and click Invoke. You should see the result of your Lambda function execution in the Eclipse console:

 
         

Building a serverless developer authentication API in Java using AWS Lambda, Amazon DynamoDB, and Amazon Cognito – Part 2

In part 1 of this blog post, we showed you how to leverage the AWS Toolkit for Eclipse to quickly develop Java functions for AWS Lambda. We then set up a skeleton project and the structure to handle custom objects sent to your Java function.

In part 2 of this blog post, we will implement the handleRequest function that will handle the logic of interacting with Amazon DynamoDB and then generate an OpenID token by using the Amazon Cognito API.

We will now implement the handleRequest function within the AuthenticateUser class. Our final handleRequest function looks like the following:

@Override
public AuthenticateUserResponse handleRequest(Object input, Context context) {
      
    AuthenticateUserResponse authenticateUserResponse = new AuthenticateUserResponse();
    @SuppressWarnings("unchecked")
    LinkedHashMap inputHashMap = (LinkedHashMap)input;
    User user = authenticateUser(inputHashMap);
    if(user!=null){
        authenticateUserResponse.setUserId(user.getUserId());
        authenticateUserResponse.setStatus("true");
        authenticateUserResponse.setOpenIdToken(user.getOpenIdToken());
    }else{
        authenticateUserResponse.setUserId(null);
        authenticateUserResponse.setStatus("false");
        authenticateUserResponse.setOpenIdToken(null);
    }
        
    return authenticateUserResponse;
}

We will need to implement the authenticateUser function for this Lambda Java function to compile properly. Implement the function as shown here:


public User authenticateUser(LinkedHashMap input){
    User user=null;
    	
    String userName = (String) input.get("userName");
    String passwordHash = (String) input.get("passwordHash");
    	
    try{
        AmazonDynamoDBClient client = new AmazonDynamoDBClient();
        client.setRegion(Region.getRegion(Regions.US_EAST_1));
        DynamoDBMapper mapper = new DynamoDBMapper(client);
	    	
        user = mapper.load(User.class, userName);
	    	
        if(user!=null){
            if(user.getPasswordHash().equalsIgnoreCase(passwordHash)){
                String openIdToken = getOpenIdToken(user.getUserId());
                user.setOpenIdToken(openIdToken);
                return user;
            }
        }
    }catch(Exception e){
        System.out.println(e.toString());
    }
    return user;
}

In this function, we use the DynamoDB Mapper to check if a row with the provided username attribute exists in the table User. Make sure you set the region in your code. If a row with the username exists, the code makes a simple check against the provided password hash value. If the passwords match, we will authenticate this user and then follow the developer authentication flow to get an OpenID token from the Amazon Cognito Identity Broker. The token will be passed to the client as an attribute in the AuthenticateUserResponse object. For more information about the authentication flow for developer authenticated identities, see the Amazon Cognito documentation here. For this Java Lambda function, we will be using the enhanced authflow.

Before we can get an OpenID token, we need to create an identity pool in Amazon Cognito and then register our developer authentication provider with this identity pool. When you create the identity pool, you can keep the default roles provided by the console. In the Authentication Providers field, in the Custom section, type login.yourname.services.
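If you prefer to script this step instead of using the console, a rough sketch using the Amazon Cognito Identity client from the AWS SDK for Java might look like the following (the pool name is a placeholder; substitute your own provider string):

import com.amazonaws.services.cognitoidentity.AmazonCognitoIdentityClient;
import com.amazonaws.services.cognitoidentity.model.CreateIdentityPoolRequest;
import com.amazonaws.services.cognitoidentity.model.CreateIdentityPoolResult;

public class CreateIdentityPoolExample {
    public static void main(String[] args) {
        AmazonCognitoIdentityClient client = new AmazonCognitoIdentityClient();

        // Register the developer authentication provider while creating the pool.
        CreateIdentityPoolRequest request = new CreateIdentityPoolRequest()
            .withIdentityPoolName("AuthenticateUserPool")           // placeholder pool name
            .withAllowUnauthenticatedIdentities(false)
            .withDeveloperProviderName("login.yourname.services");  // your Custom provider string

        CreateIdentityPoolResult result = client.createIdentityPool(request);
        System.out.println("Identity pool ID: " + result.getIdentityPoolId());
    }
}

The identity pool ID printed here is the value that getOpenIdToken passes to setIdentityPoolId.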

After the pool is created, implement the getOpenIdToken function as shown:


private String getOpenIdToken(Integer userId){
    	
    AmazonCognitoIdentityClient client = new AmazonCognitoIdentityClient();
    GetOpenIdTokenForDeveloperIdentityRequest tokenRequest = new GetOpenIdTokenForDeveloperIdentityRequest();
    tokenRequest.setIdentityPoolId("us-east-1:6dbccdfd-9444-4d4c-9e1b-5d1139cbe863");
    	
    HashMap<String, String> map = new HashMap<String, String>();
    map.put("login.dhruv.services", userId.toString());

    tokenRequest.setLogins(map);
    tokenRequest.setTokenDuration(10001L);
    	
    GetOpenIdTokenForDeveloperIdentityResult result = client.getOpenIdTokenForDeveloperIdentity(tokenRequest);
    	
    String token = result.getToken();
    	
    return token;
}

This code calls the GetOpenIdTokenForDeveloperIdentity function in the Amazon Cognito API. You need to pass in your Amazon Cognito identity pool ID along with the unique identity provider string you entered in the Custom field earlier. You also have to provide a unique identifier for the user so Amazon Cognito can map that to its Cognito ID. This unique ID is usually the user ID you use internally, but it can be any other unique attribute that allows both your authentication back end and Amazon Cognito to identify a user.

In part 3 of this blog, we will test the Java Lambda function locally using JUnit. Then we will upload and test the function on Lambda.

Building a serverless developer authentication API in Java using AWS Lambda, Amazon DynamoDB, and Amazon Cognito – Part 1

Most of us are aware of the support for a developer authentication backend in Amazon Cognito and how one can use a custom backend service to authenticate and authorize users to access AWS resources using temporary credentials. In this blog, we will create a quick serverless backend authentication API written in Java and deployed on Lambda. You can mirror this workflow in your current backend authentication service, or you can use this service as it is.

The blog will cover the following topics in a four-part series.

  1. Part 1: How to get started with Java development on Lambda using the AWS Toolkit for Eclipse.
  2. Part 1: How to use Java Lambda functions for custom events.
  3. Part 2: How to create a simple authentication microservice that checks users against an Amazon DynamoDB table.
  4. Part 2: How to integrate with the Amazon Cognito Identity Broker to get an OpenID token.
  5. Part 3: How to locally test your Java Lambda functions through JUnit before uploading to Lambda.
  6. Part 4: How to hook up your Lambda function to Amazon API Gateway.

The Lambda workflow support in the latest version of the AWS Toolkit for Eclipse makes it really simple to create Java functions for Lambda. If you haven’t already downloaded Eclipse, you can get it here. We assume you have an AWS account with at least one IAM user with an Administrator role (that is, the user should belong to an IAM group with administrative permissions).

Important: We strongly recommend you do not use your root account credentials to create this microservice.

After you have downloaded Eclipse and set up your AWS account and IAM user, install the AWS Toolkit for Eclipse. When prompted, restart Eclipse.

We will now create an AWS Lambda project. In the Eclipse toolbar, click the yellow AWS icon, and choose New AWS Lambda Java Project.

On the wizard page, for Project name, type AuthenticateUser. For Package Name, type aws.java.lambda.demo (or any package name you want). For Class Name, type AuthenticateUser. For Input Type, choose Custom Object. If you would like to try other predefined events that Lambda supports in Java, such as an S3Event or DynamoDBEvent, see these samples in our documentation here. For Output Type, choose a custom object, which we will define in the code later. The output type should be a Java class, not a primitive type such as an int or float.

Choose Finish.

In Package Explorer, you will now see a Readme file in the project structure. You can close the Readme file for now. The structure below shows the main class, AuthenticateUser, which is your Lambda handler class. It’s where you will be implementing the handleRequest function. Later on, we will implement the unit tests in JUnit by modifying the AuthenticateUserTest class to allow local testing of your Lambda function before uploading.

Make sure you have added the AWS SDK for Java Library in your build path for the project. Before we implement the handleRequest function, let’s create a Data class for the User object that will hold our user data stored in a DynamoDB table called User. You will also need to create a DynamoDB table called User with some test data in it. To create a DynamoDB table, follow the tutorial here. We will choose the username attribute as the hash key. We do not need to create any indexes for this table. Create a new User class in the package aws.java.lambda.demo and then copy and paste the following code:

Note: For this exercise, we will create all our resources in the us-east-1 region. This region, along with the ap-northeast-1 (Tokyo) and eu-west-1 (Ireland) regions, supports Amazon Cognito, AWS Lambda, and API Gateway.

package aws.java.lambda.demo;

import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAttribute;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;

@DynamoDBTable(tableName="User")
public class User {
	
    private String userName;
    private Integer userId;
    private String passwordHash;
    private String openIdToken;
	
    @DynamoDBHashKey(attributeName="username")
    public String getUserName() { return userName; }
    public void setUserName(String userName) { this.userName = userName; }
	
    @DynamoDBAttribute(attributeName="userid")
    public Integer getUserId() { return userId; }
    public void setUserId(Integer userId) { this.userId = userId; }
	
    @DynamoDBAttribute(attributeName="passwordhash")
    public String getPasswordHash() { return passwordHash; }
    public void setPasswordHash(String passwordHash) { this.passwordHash = passwordHash; }
	
    @DynamoDBAttribute(attributeName="openidtoken")
    public String getOpenIdToken() { return openIdToken; }
    public void setOpenIdToken(String openIdToken) { this.openIdToken = openIdToken; }
	
    public User(String userName, Integer userId, String passwordHash, String openIdToken) {
        this.userName = userName;
        this.userId = userId;
        this.passwordHash = passwordHash;
        this.openIdToken = openIdToken;
    }
	
    public User(){ }	
}

You will see we are leveraging annotations so we can use the advanced features provided by the DynamoDB Mapper. The AWS SDK for Java provides DynamoDBMapper, a high-level interface that automates the process of getting your objects into Amazon DynamoDB and back out again. For more information about annotating your Java classes for use in DynamoDB, see the developer guide here.
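As a quick illustration of the mapper (and a convenient way to seed the User table with a test row for the JUnit test in part 3), a sketch like the following should work; the userId value 123 and the password hash are just the sample values used elsewhere in this post:

import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;

public class SeedUserTable {
    public static void main(String[] args) {
        AmazonDynamoDBClient client = new AmazonDynamoDBClient();
        client.setRegion(Region.getRegion(Regions.US_EAST_1));
        DynamoDBMapper mapper = new DynamoDBMapper(client);

        // Write one sample row; the openIdToken attribute is generated at runtime, so leave it null.
        User user = new User("Dhruv", 123, "8743b52063cd84097a65d1633f5c74f5", null);
        mapper.save(user);
    }
}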

Our Java function will ingest a custom object from API Gateway and, after execution, return a custom response object. Our custom input is a JSON POST body that will be invoked through an API Gateway endpoint. A sample request will look like the following:

        {
          "userName": "Dhruv",
          "passwordHash": "8743b52063cd84097a65d1633f5c74f5"
        } 

The data is passed in as a LinkedHashMap of key-value pairs to your handleRequest function. As you will see later, you will need to cast your input properly to extract the values of the POST body. Your custom response object looks like the following:

        {
          "userId": "123",
          "status": "true",
          "openIdToken": "eyJraWQiOiJ1cy1lYXN0LTExIiwidHlwIjoiSldTIiwiYWxnIjoiUl"	 
        }

We need to define the AuthenticateUserResponse class as a static nested class in our AuthenticateUser class, as follows.

public static class AuthenticateUserResponse{
		
    protected Integer userId;
    protected String openIdToken;
    protected String status;
		
    public Integer getUserId() { return userId; }
    public void setUserId(Integer userId) { this.userId = userId; }

    public String getOpenIdToken() { return openIdToken; }
    public void setOpenIdToken(String openIdToken) { this.openIdToken = openIdToken; }
		
    public String getStatus() {	return status; }
    public void setStatus(String status) { this.status = status; }			
}

Now that we have the structure in place to handle a custom event, in part 2 of this blog post, we will finish the implementation of the handleRequest function that will do user validation and interact with Amazon Cognito.

Using AWS CodeCommit from Eclipse

Earlier this month, we launched AWS CodeCommit — a managed revision control service that hosts Git repositories and works with existing Git-based tools.

If you’re an Eclipse user, it’s easy to use the EGit tools in Eclipse to work with AWS CodeCommit. This post shows how to publish a project to AWS CodeCommit so you can start trying out the new service.

Configure SSH Authentication

To use AWS CodeCommit with Eclipse’s Git tooling, you’ll need to configure SSH credentials for accessing CodeCommit. This is an easy process you’ll only need to do once. The AWS CodeCommit User Guide has a great walkthrough describing the exact steps to create a keypair and register it with AWS. Make sure you take the time to test your SSH credentials and configuration as described in the walkthrough.

Create a Repository

Next, we’ll create a new Git repository using AWS CodeCommit. The AWS CodeCommit User Guide has instructions for creating repositories through the AWS CLI or the AWS CodeCommit console.

Here’s how I used the AWS CLI:

% aws --region us-east-1 codecommit create-repository \
      --repository-name MyFirstRepo \
      --repository-description "My first CodeCommit repository"
{
  "repositoryMetadata": {
    "creationDate": 1437760512.195,
    "cloneUrlHttp": 
       "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyFirstRepo",
    "cloneUrlSsh": 
       "ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyFirstRepo",
    "repositoryName": "MyFirstRepo",
    "Arn": "arn:aws:codecommit:us-east-1:963699449919:MyFirstRepo",
    "repositoryId": "c4ed6846-5000-44ce-a808-b1862766d8bc",
    "repositoryDescription": "My first CodeCommit repository",
    "accountId": "963699449919",
    "lastModifiedDate": 1437760512.195
  }
}

Whether you use the CLI or the console to create your CodeCommit repository, make sure to copy the cloneUrlSsh property that’s returned. We’ll use that in the next step when we clone the CodeCommit repository to our local machine.

Create a Clone

Now we’re ready to use our repository locally and push one of our projects into it. The first thing we need to do is clone our repository so that we have a local version. In Eclipse, open the Git Repositories view (Window -> Show View -> Other…) and select the option to clone a Git repository.

In the first page of the Clone Git Repository wizard, paste the Git SSH URL from your CodeCommit repository into the URI field. Eclipse will parse out the connection protocol, host, and repository path.

Click Next. The CodeCommit repository we created is an empty, or bare, repository, so there aren’t any branches to configure yet.

Click Next. On the final page of the wizard, select where on your local machine you’d like to store the cloned repository.

Push to Your Repository

Now that we’ve got a local clone of our repository, we’re ready to start pushing a project into it. Select a project and use Team -> Share to connect that project with the repository we just cloned. In my example, I simply created a new project.

Next use Team -> Commit… to make the initial check-in to your cloned repo.

Finally, use Team -> Push Branch… to push the master branch in your local repository up to your CodeCommit repository. This will create the master branch on the CodeCommit repository and configure your local repo for upstream pushes and pulls.

Conclusion

Your project is now configured with the EGit tools in Eclipse and set up to push and pull from a remote AWS CodeCommit repository. You can take advantage of all the EGit tooling in Eclipse to work with your repository and easily push and pull changes from your AWS CodeCommit repository. Have you tried using AWS CodeCommit yet?

AWS Toolkit for Eclipse Integration with AWS OpsWorks

Today, we are introducing a new addition to the AWS Toolkit for Eclipse — the AWS OpsWorks plugin. This new plugin allows you to easily deploy your Java web applications from your development environment directly to AWS infrastructure.

So you might remember the AWS CodeDeploy plugin that we introduced recently, and some of you have probably used the AWS Elastic Beanstalk plugin before — they both seem to provide the same functionality of deploying a Java web app. Then why do we need yet another option for accomplishing the very same thing?

AWS Elastic Beanstalk, AWS CodeDeploy and AWS OpsWorks indeed share a lot in common as they are all considered part of the AWS deployment services family. However, they differ from each other in aspects like deployment execution and infrastructure resource management, and these differences make each of them suitable for a specific category of use cases.

  • AWS Elastic Beanstalk is a fully-managed application container service. It’s based on a PaaS (Platform as a Service) model where your application is provisioned by infrastructures that are automatically managed by AWS — there is no need for you to manually build and maintain them. As a container service, it also provides built-in deployment features for a variety of web app frameworks. All of these allow you to focus on your application development, while the deployments and provisioning of the application are handled by the service powered by cloud ninjas. The downside of this whole black box-ish model is of course its limited flexibility and extensibility. For example, you might not have fine-grained control over the underlying infrastructure resource, and it sometimes could be difficult to extend the built-in deployment commands to execute your custom tasks.
  • In contrast, AWS CodeDeploy focuses on only one thing — managing deployments to your existing instances in EC2 or elsewhere. It is not an application container service, so there are no built-in deployment features. You need to write your own deployment logic, which gives you the freedom to perform any kind of custom tasks for your deployments. Another difference compared to Elastic Beanstalk is that the service does not create or maintain the EC2 instances for you; you need to manage them yourself, which also means you have full control over your infrastructure. At a higher-level, you can think of AWS CodeDeploy as a fully-programmable robot that delivers your application artifacts to your EC2 instances fleet and then runs your custom commands on each of the instances for you.
  • Within the range between fully-managed (Elastic Beanstalk) and fully-customizable (CodeDeploy), AWS OpsWorks sits somewhere in the middle. It is an application management service that provides built-in deployment features for instances running a specific web framework (a.k.a., an application server layer). The reason that makes it really stand out compared to Elastic Beanstalk is that it uses Chef to perform the deployment actions. The deployment logic for built-in layers is essentially a default Chef cookbook that is open to all levels of customization. Using Chef allows you to achieve the necessary customization for your specific task while at the same time enjoy all the built-in features that are useful for the majority of use cases.

So generally speaking, AWS Elastic Beanstalk is the easiest option if you need to quickly deploy your application and don’t want to be concerned about infrastructure maintenance. AWS CodeDeploy gives you maximum flexibility but lacks built-in deployment features. AWS OpsWorks has a good tradeoff between flexibility and ease of use, but you need to learn Chef in order to fully utilize it.

Ok, now I hope I have answered your doubts about why you should care about this blog post if you are already familiar with the other deployment services. Then let’s get back to Eclipse and see how the plugin works.

After you install the new AWS OpsWorks Plugin component, you should see the “AWS OpsWorks” node under the AWS Explorer View. (Make sure you select “US East (Virginia)” as the current region since OpsWorks is available only in this region.)

The top-level elements under the service node are your stacks, each of which includes all the resources serving the same high-level purpose (e.g., hosting a Tomcat application). Each stack consists of one or more layers, where each layer represents a system component made up of a set of instances that are functionally the same. Each layer that acts as an application server is associated with one app, which is where the revision of the application code for that layer is deployed.

For this demo, I have created a sample stack that has only one Java App Server layer in it and I have started two EC2 instances for this layer. Creating all of these can be done in a couple of minutes using the AWS OpsWorks console. We will create the Java app for this layer inside Eclipse.

To start with, let’s switch to “Java” or “Java EE” perspective and then create a sample web project. File -> New -> AWS Java Web Project. Then in the Project Explorer, right-click on the sample project that we just created, and select Amazon Web Services -> Deploy to AWS OpsWorks.

In the first page of the deployment wizard, choose our target region (US East) and target stack (MyStack) and then let’s create a new Java app called “My App”.

In the App Configuration page, you are required to specify an S3 location where the application artifact will be uploaded. You can optionally pass in additional environment variables, specify a custom domain, and enable SSL for your application.

Click Next, and you will see the Deployment Action Configuration page. Here you can optionally add a comment for your deployment and provide custom Chef JSON input.

Now click Finish, and the deployment will be initiated immediately. Wait for a couple of minutes until all the instances in the layer are successfully deployed.

After it finishes, you will see a confirmation message that shows you the expected endpoint where your application will be hosted on the instances. You can access the endpoint via web browser to make sure the deployment succeeds (make sure you include the trailing slash character in the URL).

As you can see, because of the built-in support for Tomcat applications, it’s really easy to deploy and host your Java web app using AWS OpsWorks. We want to focus on the deployment experience of this plugin, but we are also interested in what other features you are specifically looking for. More support on service resource creation and configuration? Or integration with Chef cookbook and recipe? Let us know in the comments!

AWS Toolkit for Eclipse Integration with AWS CodeDeploy (Part 3)

In this part of the series, we will show you how easy it is to run deployment commands on your EC2 fleet with the help of the AWS CodeDeploy plugin for Eclipse.

Create AppSpec Template

  • First, let’s create a shell script that executes the command we need to run on our instances:

/home/hanshuo/stop-httpd-server-appspec/template/stop-httpd.sh

#!/bin/bash

service ##HTTPD_SERVICE_NAME## stop

To make it a little fancier, instead of hardcoding httpd as the service name, we use a placeholder, ##HTTPD_SERVICE_NAME##. Later, you will learn how this helps you create a configurable deployment task in Eclipse.

  • Next, inside the same directory, let’s create a simple AppSpec file that specifies our shell script as the command for the ApplicationStart lifecycle event.

/home/hanshuo/stop-httpd-server-appspec/template/appspec.yml

version: 0.0
os: linux
hooks:
  ApplicationStart:
    - location: stop-httpd.sh
      timeout: 300
      runas: root

This AppSpec file asks CodeDeploy to run stop-httpd.sh as the root user during the ApplicationStart phase of the deployment. Since this is the only phase mentioned, it basically tells the service to run this single script as the whole deployment process – that’s all we need! You can find more information about the AppSpec file in the AWS CodeDeploy Developer Guide.

  • Now that we have created our template, which consists of all the necessary AppSpec and command script files, the final step is to create a metadata file for it, in a specific JSON format understood by the Eclipse plugin.

/home/hanshuo/stop-httpd-server-appspec/template.md

{
  "metadataVersion" : "1.0",
  "templateName" : "Stop Apache HTTP Server",
  "templateDescription" : "Stop Apache HTTP Server",
  "templateBasedir" : "/home/hanshuo/stop-httpd-server-appspec/template",
  "isCustomTemplate" : true,
  "warFileExportLocationWithinDeploymentArchive" : "/application.war",
  "parameters" : [
    {
      "name" : "Apache HTTP service name",
      "type" : "STRING",
      "defaultValueAsString" : "httpd",
      "substitutionAnchorText" : "##HTTPD_SERVICE_NAME##",
      "constraints" : {
        "validationRegex" : "[\S]+"
      }
    }
  ]
}
  • templateName, templateDescription – Specifies the name and description for this template
  • templateBasedir – Specifies the base directory where your AppSpec file and the command scripts are located
  • isCustomTemplate – True if it is a custom template created by the user; this tells the plugin to treat templateBasedir as an absolute path.
  • warFileExportLocationWithinDeploymentArchive – Since this deployment task doesn’t actually consume any WAR file, we can specify any value for this attribute.
  • parameters – Specifies a list of all the configurable parameters in our template. In this case, we have only one parameter ##HTTPD_SERVICE_NAME##
    • name – The user-friendly name for the parameter
    • type – Either STRING or INTEGER
    • defaultValueAsString – The default value for this parameter
    • substitutionAnchorText – The placeholder text that represents this parameter in the template files; when copying the template files, the plugin will replace these placeholders with the user’s actual input value
    • constraints – The constraints that will be used to validate user input; supported constraints are validationRegex (for STRING), minValue and maxValue (for INTEGER).

Ok, now that we have everything ready, let’s go back to Eclipse and import the AppSpec template we just created.

In the last page of the deployment wizard, click the Import Template button at the top-right corner of the page:

Then find the location of our template metadata file, and click Import.

The plugin will parse the metadata and create a simple UI view for the user input of the template parameter ##HTTPD_SERVICE_NAME##. Let’s just use the default value httpd, and click Finish.

After the deployment completes, all the httpd services running on your EC2 instances will be stopped. If you are interested in how your commands were executed on your hosts, or if you need to debug your deployment, the log output of your scripts can be found at /opt/codedeploy-agent/deployment-root/{deployment-group-ID}/{deployment-ID}/logs/scripts.log

In this example, you are expected to get the following log output. You can see that the httpd service was successfully stopped during the ApplicationStart event:

	2015-01-07 00:28:04 LifecycleEvent - ApplicationStart
	2015-01-07 00:28:04 Script - stop-httpd.sh
	2015-01-07 00:28:04 [stdout]Stopping httpd: [  OK  ]

In the future, if you ever want to repeat this operation on your EC2 instances, just kick off another deployment in Eclipse using the same Stop Apache HTTP Server template, and you are done!

You can use the AppSpec template system to create more complicated deployment tasks. For example, you can define your own template that deploys your Java web app to other servlet containers such as Jetty and JBoss. If you are interested in the Tomcat 7 running on Linux template we used in the walkthrough, you can find the source code in our GitHub repo.

Feel free to customize the source for your specific need, and keep in mind that we are always open to pull-requests if you want to contribute your own templates that might be useful for other Java developers.

Conclusion

The AWS CodeDeploy plugin for Eclipse allows you to easily initiate a deployment directly from your source development environment. It eliminates the need to repeat the manual operations of building, packaging, and preparing the revision. It also allows you to quickly set up an AppSpec template that represents a repeatable and configurable deployment task.

Give it a try and see whether it can improve how you deploy your Java web project to your EC2 instances. If you have any feedback or feature requests, tell us about them in the comments. We’d love to hear them!

AWS Toolkit for Eclipse Integration with AWS CodeDeploy (Part 2)

In this part of the blog series, we will show you how to deploy a Java web project to your EC2 instances using the AWS CodeDeploy plugin.

Prerequisites

If you want to follow the walkthrough, you will need to create a CodeDeploy deployment group to begin with. The easiest way to do so is to follow the first-run walkthrough in the AWS CodeDeploy Console.

In this example, we have created a CodeDeploy application called DemoApplication and a deployment group called DemoFleet, which includes three EC2 instances running the Amazon Linux AMI.

Deploy an AWS Java Web Project to CodeDeploy

First, let’s open Eclipse and create a new AWS Java Web project in the workspace (File -> New -> Project -> AWS Java Web Project). Select the Basic Java Web Application option to start with. Note that this step is the same as how you would start a project for AWS Elastic Beanstalk.

In Project Explorer, right-click on the new project, and select Amazon Web Services -> Deploy to AWS CodeDeploy….

In the first page of the deployment wizard, you will be asked to select the target CodeDeploy application and deployment group. In this example, we select “DemoApplication” and “DemoFleet” which we just created in the console.

In the next page, you can specify the following options for this deployment.

  • CodeDeploy deployment config – Specifies how many instances the deployment runs on in parallel. In this example, we select “CodeDeployDefault.OneAtATime”, which is the safest approach to reduce application downtime.
  • Ignore ApplicationStop step failures – Indicates whether or not the deployment should stop if it encounters an error when executing the ApplicationStop lifecycle event command.
  • S3 bucket name – Specifies the S3 bucket where your revision will be uploaded.

Click Next, and you will be asked to select the AppSpec template and parameter values for this deployment. At this moment, you should see only one predefined template, Tomcat 7 running on Linux. This AppSpec template includes the lifecycle event commands that spin up a Tomcat 7 server on your EC2 instances and deploy your application to it. The template accepts parameters including the context-path of the application and the port number that the Tomcat server will listen to.

We will explain later how the AppSpec template is defined and how you can add your custom templates. Here we select deploying to server root and using the default HTTP port 80. Then just click Finish to initiate the deployment.

After the deployment starts, you will be prompted by a dialog that tracks the progress of all the deployments on individual EC2 instances.

You can double-click on any of the instances to open the detailed view of each lifecycle event. If the deployment fails during any of the events, you can click View Diagnostics to see the error code and the log output from your command script.

After the deployment completes, your application will be available at http://{ec2-public-endpoint}

To view the full deployment history of a deployment group, we can visit the deployment group detail page via AWS Explorer View -> AWS CodeDeploy -> DemoApplication -> Double-click DemoFleet.

For some of you who have been following the walkthrough, it’s possible that you might not see the sample JSP page when accessing the EC2 endpoint. Instead it shows the “Amazon Linux AMI Test Page”. This happens because the Amazon Linux AMI pre-bundles a running Apache HTTP server that occupies port 80, which our Tomcat server also attempts to bind to.

To solve this problem, you will need to run `sudo service httpd stop` on every EC2 instance before the Java web app is deployed. Without the help of CodeDeploy, you would need to ssh into each of the instances and manually run the command, which is a tedious and time-consuming process. So how can we leverage the CodeDeploy service to ease this process? What would be even better is to have the ability to save this specific deployment task into some configurable format, and make it easily repeatable in the future.

In the next part of our blog series, we will take a look at how we can accomplish this by using the AWS CodeDeploy plugin for Eclipse.

AWS Toolkit for Eclipse Integration with AWS CodeDeploy (Part 1)

We are excited to announce that the AWS Toolkit for Eclipse now includes integration with AWS CodeDeploy and AWS OpsWorks. In addition to the support of AWS Elastic Beanstalk deployment, these two new plugins provide more options for Java developers to deploy their web application to AWS directly from their Eclipse development environment.

In this blog post series, we will take a look at the CodeDeploy plugin and walk you through its features to show you how it can improve your deployment automation.

How to Install?

The AWS CodeDeploy and OpsWorks plugins are available at the official AWS Eclipse Toolkit update site (http://aws.amazon.com/eclipse). Just follow the same steps you took when you installed and updated the AWS plugins, and you will see the two new additions in the plugin list of our update site.

For more information about the installation and basic usage of the AWS Toolkit for Eclipse, go to our official documentation site.

AWS CodeDeploy

If you haven’t heard of it yet, AWS CodeDeploy is a new AWS service that was just launched last year during re:Invent 2014. The service allows you to fully automate the process of deploying your code to a fleet of EC2 instances. It eliminates the need for manual operations by providing a centralized solution that allows you to initiate, control and monitor your deployments.

If you want to learn more about CodeDeploy, here are some useful links:

One of the major design goals of AWS CodeDeploy is to be platform and language agnostic. With a command-based install model, CodeDeploy allows you to specify the commands you want to run during each deployment phase (a.k.a. lifecycle event), and these commands can be written in any kind of code.

The language-agnostic characteristic of CodeDeploy brings maximum flexibility and makes it usable for all kinds of deployment purposes. But meanwhile, because of its generality, the service may not natively support some common use cases that are specific to a particular development language. For example, when working with Java web applications, we would likely not want to deploy the Java source code directly to our hosts – the deployment process always involves some necessary building and packaging phases before publishing the content to the hosts. This is in contrast to many scripting languages where the source code itself can be used directly as the deployment artifact. In its deployment workflow model, CodeDeploy also requires the developer to prepare a revision every time he/she wants to initiate a deployment. This could be either in the form of a snapshot of the GitHub repo or an archive bundle uploaded to Amazon S3. This revision should include an Application Specification (AppSpec) file, where the developer’s custom deployment commands are specified.

To summarize, as the following diagram shows, deploying a Java web application via CodeDeploy would require non-trivial manual operations in the development environment, before the deployment actually happens.

Ideally, we would want a tool that is able to:

  • automate the building, packaging, and revision preparation phases for a Java web app
  • support creating configurable and repeatable deployment tasks

In the next part of our blog series, we will walk through a simple use case to demonstrate how the AWS CodeDeploy Eclipse plugin solves these problems and makes the deployment as easy as a few mouse clicks. Stay tuned!

Secure Local Development with the ProfileCredentialsProvider

We’ve talked in the past about the importance of secure credentials management. When your application is running in production, IAM roles for Amazon EC2 are a great way to securely deliver AWS credentials to your application. However, they’re by definition available only when your application is running on EC2 instances.

If you’re a developer making changes to an application, it’s often convenient to be able to fire up a local instance of the application to see your changes in action without having to spin up a full test environment in the cloud. If your application uses IAM roles for EC2 to pick up credentials when running in the cloud, this means you’ll need an additional way of injecting credentials when running locally on a developer’s box. It’s tempting to simply hardcode a set of credentials into the application for testing purposes, but this makes it distressingly easy to accidentally check those credentials in to source control.

The AWS SDK for Java includes a number of different credential providers that you can use as alternatives to hardcoded credentials. You can easily inject credentials into your application from system properties, environment variables, properties files, and more. All of these choices allow you to keep your credentials separate from your source code and reduce the risk of accidentally checking them in.

We’ve recently added a new credentials provider that loads credentials from a credentials profile file stored in your home directory. This option is particularly exciting because other tools like the AWS CLI and the AWS Toolkit for Eclipse also support reading credentials from and writing credentials to this file. You can configure your credentials in one place, and reuse them whether you’re running one-off CLI commands to check on the state of your resources, browsing around using the Toolkit, or running a local instance of one of your applications.

The default credentials profile file is located at System.getProperty("user.home") + "/.aws/credentials". The format allows you to define multiple “profiles,” which makes it easy to maintain different sets of credentials for different projects with appropriately-scoped permissions; this way you don’t have to worry about a bug in the local version of your application accidentally wiping out your production system. Here’s a simple example:

  # Credentials for App-1's production stack (allowing only read-only
  # access for debugging production issues).
  [app-1-production]
  aws_access_key_id={access key id}
  aws_secret_access_key={secret access key}
  aws_session_token={optional session token}

  # Credentials for App-1's development stack, allowing full read-write
  # access.
  [app-1-development]
  aws_access_key_id={another access key id}
  aws_secret_access_key={another secret access key}

  # Default credentials to be used if no profile is specified.
  [default]
  aws_access_key_id=...
  aws_secret_access_key=...

If you’re running a recent version of the AWS CLI, you can set up a file in the correct format by running the aws configure command; you’ll be prompted to enter a set of credentials, which will be stored in the file. Similarly, if you’re running a recent version of the AWS Toolkit for Eclipse, any credentials you configure through its Preferences page will be written into the credentials profile file.

The AWS Toolkit for Eclipse Preferences Page

To use the ProfileCredentialsProvider when running local integration tests, simply add it to your credentials provider chain:

AmazonDynamoDBClient client = new AmazonDynamoDBClient(
      new AWSCredentialsProviderChain(

          // First we'll check for EC2 instance profile credentials.
          new InstanceProfileCredentialsProvider(),

          // If we're not on an EC2 instance, fall back to checking for
          // credentials in the local credentials profile file.
          new ProfileCredentialsProvider("app-1-development")));

The constructor parameter is the name of the profile to use; if you call the parameterless constructor, it will load the “default” profile. Another constructor overload allows you to override the location of the profiles file to load credentials from (or you can change this by setting the AWS_CREDENTIALS_PROFILES_FILE environment variable).
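For example, here is a quick sketch of both variants (the file path below is only an illustration):

import com.amazonaws.auth.profile.ProfileCredentialsProvider;

public class ProfileProviderExamples {
    public static void main(String[] args) {
        // Loads the [default] profile from the default credentials file location.
        ProfileCredentialsProvider defaultProvider = new ProfileCredentialsProvider();

        // Loads a named profile from an explicit credentials file path.
        ProfileCredentialsProvider devProvider =
                new ProfileCredentialsProvider("/path/to/credentials", "app-1-development");

        System.out.println(defaultProvider.getCredentials().getAWSAccessKeyId());
        System.out.println(devProvider.getCredentials().getAWSAccessKeyId());
    }
}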

Have you already started using the new ProfileCredentialsProvider? Let us know what you think in the comments below!