AWS Developer Blog

Announcing Support for the PowerShell Gallery

by Steve Roberts | in .NET

The AWS Tools for Windows PowerShell have until now been made available in a single MSI installer that also contains the AWS SDK for .NET and AWS Toolkit for Visual Studio. MSIs have historically been the primary method of installing software on Windows. On the Linux and OS X platforms, package managers have become the primary mechanism for distributing and acquiring software, with package managers like apt-get, yum, npm, and pip providing simple experiences for installing software from large repositories of packages. The Windows ecosystem has several package managers as well: NuGet, targeted at developers, and Chocolatey, for general-purpose software.

We’re pleased to announce that you can now obtain the AWSPowerShell module from the new PowerShell Gallery (https://www.powershellgallery.com/packages/AWSPowerShell/). The PowerShell Gallery is a Microsoft repository for the new PowerShell package management system. This post explains how to get started using the Gallery to install and update the tools.

Gallery Requirements

As noted on the PowerShell Gallery homepage, to use the Gallery you need the Microsoft Windows Management Framework (WMF) v5 preview. Follow the link to obtain and install the preview if needed for your system.

Installing the AWSPowerShell Module

Once you have installed the WMF version 5 preview, you can run the Get-PSRepository cmdlet to see details of the Gallery setup:

PS C:\> Get-PSRepository

Name       OneGetProvider  InstallationPolicy  SourceLocation
----       --------------  ------------------  --------------
PSGallery  NuGet           Untrusted           https://...

You install the module by using the Install-Module cmdlet. To install the AWSPowerShell tools, run the following command (consider adding the -Verbose switch to see additional progress information from the install cmdlet):

PS C:\> Install-Module -Name AWSPowerShell

If you already have the AWSPowerShell module installed from earlier use of our MSI installer, Install-Module will exit with a message similar to this:

PS C:\> Install-Module -Name AWSPowerShell
WARNING: Version '2.3.31.0' of module 'AWSPowerShell' is already installed at 'C:\Program Files (x86)\AWS Tools\PowerShell\AWSPowerShell'.
To delete version '2.3.31.0' and install version '2.3.35.0', run Install-Module, and add the -Force parameter.
PS C:\>

To force the install to occur, re-run the command as suggested in the message:

PS C:\> Install-Module -Name AWSPowerShell -Force -Verbose
VERBOSE: The -Repository parameter was not specified.  PowerShellGet will use all of the registered repositories.
VERBOSE: Getting the provider object for the PackageManagement Provider 'NuGet'.
VERBOSE: The specified Location is 'https://www.powershellgallery.com/api/v2/' and PackageManagementProvider is
'NuGet'.
VERBOSE: The specified module will be installed in 'C:\Program Files\WindowsPowerShell\Modules'.
VERBOSE: The specified Location is 'NuGet' and PackageManagementProvider is 'NuGet'.
VERBOSE: Downloading module 'AWSPowerShell' with version '2.3.35.0' from the repository
'https://www.powershellgallery.com/api/v2/'.
VERBOSE: NuGet: Installing 'AWSPowerShell 2.3.35.0'.
VERBOSE: NuGet: Successfully installed 'AWSPowerShell 2.3.35.0'.
VERBOSE: Module 'AWSPowerShell' was installed successfully.
PS C:\>

Note that in this case Install-Module does not actually uninstall the previous version. As explained later in this post, the default ordering of the module search paths means that the newer version installed from the Gallery takes precedence. This is useful if you are running the tools on an EC2 instance (although remember that to use the Gallery, you need to install the WMF version 5 preview first). Once you close the console and open a new shell, the new version will be running, which you can verify by running Get-AWSPowerShellVersion.

Where is the Module Installed?

The default install location used by Install-Module is "C:\Program Files\WindowsPowerShell\Modules". It's also possible to have Install-Module install the tools to your local profile folder ("C:\Users\userid\Documents\WindowsPowerShell\Modules") by using the -Scope parameter:

PS C:\> Install-Module -Name AWSPowerShell -Scope CurrentUser

The default value for -Scope, if the parameter is not specified, is AllUsers.

Installing Updates

To install new versions of the module, you use the Update-Module cmdlet:

PS C:\> Update-Module -Name AWSPowerShell

Which Should I Choose – MSI or Gallery?

That depends! If you only need the AWS Tools for Windows PowerShell, then you may want to consider uninstalling the version you currently have (which is located in "C:\Program Files (x86)\AWS Tools\PowerShell\AWSPowerShell") and moving over to the PowerShell Gallery instead (or use the -Force switch with Install-Module, as noted earlier). Just as we do with the MSI installer, we'll keep the Gallery updated each time we ship a new version, so you won't miss anything whichever approach you use.

If you need the AWS Toolkit for Visual Studio, or want to reference the AWS SDK for .NET assemblies from disk rather than via Nuget, then you may want to consider continuing to use the MSI installer. Note that in the installer, the tools are preselected for installation.

If you are running on an EC2 instance and you want to update the AWSPowerShell module, perhaps to take advantage of new features released between the periodic updating of the Amazon EC2 Windows images, then (provided you have installed the WMF version 5 preview) just run Install-Module with the -Force switch as shown earlier.

What Happens If I Use Both?

Install-Module will report if the requested module is already installed (either from the Gallery or our MSI). In this scenario, you'll need to use the -Force switch to cause the Gallery version to be downloaded and installed. Which version wins when you open new shell windows depends on your system's %PSModulePath% environment variable, but with the default values the Gallery version takes precedence, as follows.

The default value of %PSModulePath% causes PowerShell to first look in your user profile location for modules, so an AWSPowerShell module installed using Install-Module with -Scope CurrentUser will be found first. If the module is not located there, PowerShell then checks the system location at "C:\Program Files\WindowsPowerShell\Modules", where AWSPowerShell installed with the default -Scope setting of Install-Module will be found. If neither of the locations used by Install-Module yields the AWSPowerShell module, any custom paths added to the environment variable are searched. If you used the MSI installer, your module path will have "C:\Program Files (x86)\AWS Tools\PowerShell" after the defaults, and the version installed by our MSI will then be found.

Wrap

We’re very excited to offer access to the AWSPowerShell module from the PowerShell Gallery. Let us know in the comments if there are any other mechanisms you would like us to consider for distributing the tools.

DynamoDB XSpec API

by Hanson Char | in Java

One of the most powerful tools for accessing Amazon DynamoDB is the use of a DynamoDB domain-specific language (DSL) called expressions. If you look closely, you will find support for DynamoDB expressions everywhere. For instance, you can access the attributes of an item using projection expressions. You can query or scan items using filter expressions and key condition expressions. Likewise, you can specify the details of updating an item using update expressions and condition expressions.

Why the need for expressions? Not only can you use DynamoDB expressions to perform typical operations such as PutItem, GetItem, Query, etc., you can also use expressions to specify arbitrarily complex operations and conditions that are otherwise not possible with the regular APIs. This can best be illustrated with examples.

But first, with the latest release, 1.9.34, of the AWS SDK for Java, we are excited to announce the beta release of the Expression Specification (XSpec) API, which makes it easy to build and make use of expressions.

Let’s take a look at the code snippet copied from the earlier blog, Introducing DynamoDB Document API (Part 1). This is what the code looks like to perform a conditional update without the use of expressions:

UpdateItemOutcome outcome = table.updateItem(new UpdateItemSpec()
    .withReturnValues(ReturnValue.ALL_NEW)
    .withPrimaryKey("GameId", "abc")
    .withAttributeUpdate(
        new AttributeUpdate("Player1-Position").addNumeric(1))
    .withExpected(
        new Expected("Player1-Position").lt(20),
        new Expected("Player2-Position").lt(20),
        new Expected("Status").eq("IN_PROGRESS"))); 

This is all well and good. But what if you need to specify a more complex condition such as the use of disjunction or nested conditions? This is where you will find the DynamoDB XSpec API handy. For example, suppose you want to specify a nested or-condition, together with a function that checks if a specific attribute exists. Here is how you can do that using the DynamoDB XSpec API:

import static com.amazonaws.services.dynamodbv2.xspec.ExpressionSpecBuilder.*;
...
UpdateItemOutcome outcome = table.updateItem(new UpdateItemSpec()
    .withReturnValues(ReturnValue.ALL_NEW)
    .withPrimaryKey("GameId", "abc")
    .withExpressionSpec(new ExpressionSpecBuilder()
        .addUpdate(N("Player1-Position").add(1))
        .withCondition(
                  N("Player1-Position").lt(20)
            .and( N("Player2-Position").lt(20) )
            .and( S("Status").eq("IN_PROGRESS")
                .or( attribute_not_exists("Status") )))
        .buildForUpdate()));

Or perhaps you want to specify an arbitrarily complex condition in a Scan operation. Here is an example:

import static com.amazonaws.services.dynamodbv2.xspec.ExpressionSpecBuilder.*;
...
ScanExpressionSpec xspec = new ExpressionSpecBuilder()
    .withCondition(N("Player1-Position").between(10, 20)
        .and( S("Status").in("IN_PROGRESS", "IDLE")
              .or( attribute_not_exists("Status") )))
    .buildForScan();

for (Item item: table.scan(xspec))
    System.out.println(item.toJSONPretty());

It’s worth pointing out that the only entry point to the DynamoDB XSpec API is the ExpressionSpecBuilder. We also recommend always using static imports of its methods, as demonstrated above.

In summary, the DynamoDB expression language allows arbitrarily complex conditions and operations to be specified, whereas the DynamoDB XSpec API makes it easy to harness the full power of this language.

Hope you find this useful. Don’t forget to download the latest AWS SDK for Java and give it a spin. Let us know what you think!

Generating Amazon S3 Pre-signed URLs with SSE-KMS (Part 2)

by Hanson Char | in Java

To continue from the previous blog, I will provide specific code examples that show how you can generate and consume pre-signed URLs using server-side encryption with AWS Key Management Service (SSE-KMS). A prerequisite to this option is that you must be using Signature Version 4 (SigV4). You can enable SigV4 in the AWS SDK for Java in various ways, including using S3-specific system properties. Here, I will provide a lesser-known but programmatic way to achieve that by explicitly configuring the signer. The code samples assume version 1.9.31 or later of the AWS SDK for Java.

Configure AmazonS3Client to use SigV4

AmazonS3Client s3 = new AmazonS3Client(
    new ClientConfiguration().withSignerOverride("AWSS3V4SignerType"));

Once this is in place, you are good to go.
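
If you would rather use the S3-specific system property mentioned above instead of the signer override, here is a minimal sketch of that alternative. It assumes the ENABLE_S3_SIGV4_SYSTEM_PROPERTY constant exposed by the SDK's SDKGlobalConfiguration class; the signer override shown above is what the rest of this post uses.

import com.amazonaws.SDKGlobalConfiguration;
import com.amazonaws.services.s3.AmazonS3Client;
...
// Enable SigV4 for Amazon S3 globally; set this before constructing the client.
System.setProperty(SDKGlobalConfiguration.ENABLE_S3_SIGV4_SYSTEM_PROPERTY, "true");

// Clients created after the property is set will sign S3 requests with SigV4.
AmazonS3Client s3 = new AmazonS3Client();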

Server-Side Encryption with AWS Key Management Service (SSE-KMS)

Example A. Here’s how to generate a pre-signed PUT URL using SSE-KMS:

String myExistingBucket = ... // an existing bucket
String myKey = ...    // target S3 key
// Generate a pre-signed PUT URL for use with SSE-KMS
GeneratePresignedUrlRequest genreq = new GeneratePresignedUrlRequest(
    myExistingBucket, myKey, HttpMethod.PUT)
    .withSSEAlgorithm(SSEAlgorithm.KMS.getAlgorithm())
    ;
// s3 is assumed to have been configured to use SigV4
URL puturl = s3.generatePresignedUrl(genreq);
System.out.println("Presigned PUT URL with SSE-KMS: " + puturl);

In the above example, Amazon S3 will make use of the default KMS master key for S3 that is automatically created for you. (See Creating Keys in AWS Key Management Service for more information on how you can set up your AWS KMS customer master keys.)

However, you can also choose to explicitly specify your KMS customer master key id as part of the pre-signed URLs.

Example B. Here’s how to generate a pre-signed PUT URL using SSE-KMS with an explicit KMS customer master key id:


// Generate a pre-signed PUT URL for use with SSE-KMS with an
// explicit KMS Customer Master Key ID
String myKmsCmkId = ...;
GeneratePresignedUrlRequest genreq = new GeneratePresignedUrlRequest(
    myExistingBucket, myKey, HttpMethod.PUT)
    .withSSEAlgorithm(SSEAlgorithm.KMS.getAlgorithm())
    // Explicitly specifying your KMS customer master key id
    .withKmsCmkId(myKmsCmkId)
    ;
URL puturl = s3.generatePresignedUrl(genreq);
System.out.println("Presigned PUT URL using SSE-KMS with explicit CMK ID: "
    + puturl);

Here’s how to make use of the generated pre-signed PUT URL (from Example A) via the Apache HttpClient (4.3):


File fileToUpload = ...;
HttpPut putreq = new HttpPut(URI.create(puturl.toExternalForm()));
putreq.addHeader(new BasicHeader(Headers.SERVER_SIDE_ENCRYPTION,
    SSEAlgorithm.KMS.getAlgorithm()));
putreq.setEntity(new FileEntity(fileToUpload));
CloseableHttpClient httpclient = HttpClients.createDefault();
httpclient.execute(putreq);

Here’s how to make use of the generated pre-signed PUT URL (from Example B) via the Apache HttpClient (4.3):


File fileToUpload = ...;
HttpPut putreq = new HttpPut(URI.create(puturl.toExternalForm()));
putreq.addHeader(new BasicHeader(Headers.SERVER_SIDE_ENCRYPTION,
    SSEAlgorithm.KMS.getAlgorithm()));
putreq.addHeader(new BasicHeader(Headers.SERVER_SIDE_ENCRYPTION_AWS_KMS_KEYID,
    myKmsCmkId)); // Explicitly specifying your KMS customer master key id
putreq.setEntity(new FileEntity(fileToUpload));
CloseableHttpClient httpclient = HttpClients.createDefault();
httpclient.execute(putreq);

Here’s how to generate a pre-signed GET URL for use with SSE-KMS:


GeneratePresignedUrlRequest genreq = new GeneratePresignedUrlRequest(
    BUCKET, KEY, HttpMethod.GET);
// s3 configured to use SigV4
URL geturl = s3.generatePresignedUrl(genreq);
System.out.println("Presigned GET URL for SSE-KMS: " + geturl);

(Note in particular that generating a pre-signed GET URL for an S3 object encrypted using SSE-KMS is as simple as generating a regular pre-signed URL!)

Here’s how to make use of the generated pre-signed GET URL via the Apache HttpClient (4.3):


HttpGet getreq = new HttpGet(URI.create(geturl.toExternalForm()));
CloseableHttpClient httpclient = HttpClients.createDefault();
CloseableHttpResponse res = httpclient.execute(getreq);
InputStream is = res.getEntity().getContent();
String actual = IOUtils.toString(is);

In the next blog (Part 3), I will provide specific code examples that show how you can generate and consume pre-signed URLs using server-side encryption with Amazon S3-managed keys (SSE-S3).

Stay tuned!

Modularization Released to NuGet in Preview

by Norm Johanson | in .NET

Today, we pushed our new modularized version of the AWS SDK for .NET to NuGet in preview. This means there are separate NuGet packages for each AWS service. For example, if your application uses Amazon S3 and Amazon DynamoDB, then instead of including the existing AWSSDK package that includes all the AWS services, you can add the AWSSDK.S3 and AWSSDK.DynamoDBv2 packages. This allows your application to include much smaller assemblies, and you’ll need to update these packages only when the services you use are updated.

Why Preview?

The modularized version of the SDK is production ready, so we encourage developers to start using it now. We have marked it as a preview while we tweak our release process and documentation. When adding preview packages, be sure to select Include Prerelease.

Check our previous blog post to learn about the differences. You can also follow our development on the modularization branch in GitHub.

NuGet Packages

Service Name NuGet Package
Auto Scaling AWSSDK.AutoScaling
AWS Support API AWSSDK.AWSSupport
AWS CloudFormation AWSSDK.CloudFormation
Amazon CloudFront AWSSDK.CloudFront
AWS CloudHSM AWSSDK.CloudHSM
Amazon CloudSearch AWSSDK.CloudSearch
Amazon CloudSearch Domain AWSSDK.CloudSearchDomain
AWS CloudTrail AWSSDK.CloudTrail
Amazon CloudWatch AWSSDK.CloudWatch
Amazon CloudWatch Logs AWSSDK.CloudWatchLogs
AWS CodeDeploy AWSSDK.CodeDeploy
Amazon Cognito Identity AWSSDK.CognitoIdentity
Amazon Cognito Sync AWSSDK.CognitoSync
AWS Config AWSSDK.ConfigService
AWS Data Pipeline AWSSDK.DataPipeline
AWS Direct Connect AWSSDK.DirectConnect
Amazon DynamoDB (v2) AWSSDK.DynamoDBv2
Amazon Elastic Compute Cloud (EC2) AWSSDK.EC2
Amazon EC2 Container Service AWSSDK.ECS
Amazon ElastiCache AWSSDK.ElastiCache
AWS Elastic Beanstalk AWSSDK.ElasticBeanstalk
Elastic Load Balancing AWSSDK.ElasticLoadBalancing
Amazon Elastic MapReduce AWSSDK.ElasticMapReduce
Amazon Elastic Transcoder AWSSDK.ElasticTranscoder
Amazon Glacier AWSSDK.Glacier
AWS Identity and Access Management (IAM) AWSSDK.IdentityManagement
AWS Import/Export AWSSDK.ImportExport
AWS Key Management Service AWSSDK.KeyManagementService
Amazon Kinesis AWSSDK.Kinesis
AWS Lambda AWSSDK.Lambda
Amazon Machine Learning AWSSDK.MachineLearning
AWS OpsWorks AWSSDK.OpsWorks
Amazon Relational Database Service (RDS) AWSSDK.RDS
Amazon Redshift AWSSDK.Redshift
Amazon Route 53 AWSSDK.Route53
Amazon Route 53 Domains AWSSDK.Route53Domains
Amazon Simple Storage Service (S3) AWSSDK.S3
AWS Security Token Service (STS) AWSSDK.SecurityToken
Amazon SimpleDB AWSSDK.SimpleDB
Amazon Simple Email Service (SES) AWSSDK.SimpleEmail
Amazon Simple Notification Service (SNS) AWSSDK.SimpleNotificationService
Amazon EC2 Simple Systems Manager (SSM) AWSSDK.SimpleSystemsManagement
Amazon Simple Workflow Service AWSSDK.SimpleWorkflow
Amazon Simple Queue Service (SQS) AWSSDK.SQS
AWS Storage Gateway AWSSDK.StorageGateway
Amazon WorkSpaces AWSSDK.WorkSpaces

 

Generating Amazon S3 Pre-signed URLs with SSE (Part 1)

by Hanson Char | in Java

By default, all objects and buckets are private in Amazon S3. Pre-signed URLs are a popular way to let your users or customers upload or download specific objects to/from your bucket without requiring them to have AWS security credentials or permissions.

In part 1 of this blog, we will take a look at all the different types of pre-signed URLs that can be generated to work with Amazon S3 server-side encryption (SSE). In part 2 of this blog, I will provide concrete sample code that shows how you can generate and consume pre-signed URLs for one of AWS’s most recommended security best practices: server-side encryption with AWS Key Management Service (SSE-KMS). To find out more about the considerable benefits that AWS Key Management Service provides, see the official blog post New AWS Key Management Service (KMS).

To begin with, the generation and use of pre-signed URLs requires a request to be signed for authentication purposes. Amazon S3 supports the latest Signature Version 4 (SigV4), which requires the request body to be signed for added security, and the previous Signature Version 2 (SigV2). However, while pre-signed URLs using the different SSE options are fully supported with SigV4, this is not the case with SigV2.
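
For reference, enabling SigV4 in the AWS SDK for Java can be as simple as overriding the signer when constructing the client; here is a minimal sketch of that configuration (the full details are covered in Part 2):

import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.s3.AmazonS3Client;
...
// Configure the Amazon S3 client to sign requests with SigV4,
// which all of the SSE pre-signed URL options support.
AmazonS3Client s3 = new AmazonS3Client(
    new ClientConfiguration().withSignerOverride("AWSS3V4SignerType"));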

Here is a summary of all the valid combinations for generating pre-signed URLs using server-side encryption.

Pre-signed URL generation                                          SigV2  SigV4
Using SSE with AWS KMS-managed keys (SSE-KMS)                      No     Yes
Using SSE with Amazon S3-managed keys (SSE-S3)                     Yes    Yes
Using SSE with customer-provided encryption keys (SSE-C)           Yes    Yes
Using SSE with specific customer-provided encryption keys (SSE-C)  No     Yes

What is the difference between generating a pre-signed URL using SSE-C versus SSE-C with specific customer-provided encryption keys? In the first case, when you generate the pre-signed URL, the customer-provided encryption key does not need to be specified. Instead, the key only needs to be specified in the request later when the generated pre-signed URL is used (to actually upload or download objects to/from Amazon S3).

On the other hand, you may want to impose further restrictions on a pre-signed URL by requiring that it can be used only with a specific customer-provided encryption key. In such cases, you can do so by specifying the customer-provided encryption key during the generation of the pre-signed URL and enabling the use of SigV4. I will provide specific examples covering these two cases in Parts 4 and 5 of this blog series.

In the next blog (Part 2), I will provide specific code examples that show how you can generate and consume pre-signed URLs using server-side encryption with AWS KMS-managed keys (SSE-KMS).

Stay tuned!

AWS Toolkit for Eclipse Integration with AWS OpsWorks

Today, we are introducing a new addition to the AWS Toolkit for Eclipse — the AWS OpsWorks plugin. This new plugin allows you to easily deploy your Java web applications from your development environment directly to AWS infrastructure.

So you might remember the AWS CodeDeploy plugin that we introduced recently, and some of you have probably used the AWS Elastic Beanstalk plugin before — they both seem to provide the same functionality of deploying a Java web app. Then why do we need yet another option for accomplishing the very same thing?

AWS Elastic Beanstalk, AWS CodeDeploy and AWS OpsWorks indeed share a lot in common as they are all considered part of the AWS deployment services family. However, they differ from each other in aspects like deployment execution and infrastructure resource management, and these differences make each of them suitable for a specific category of use cases.

  • AWS Elastic Beanstalk is a fully managed application container service. It’s based on a PaaS (Platform as a Service) model in which your application runs on infrastructure that is automatically provisioned and managed by AWS — there is no need for you to manually build and maintain it. As a container service, it also provides built-in deployment features for a variety of web app frameworks. All of this allows you to focus on your application development, while the deployments and provisioning of the application are handled by the service powered by cloud ninjas. The downside of this black box-ish model is, of course, its limited flexibility and extensibility. For example, you might not have fine-grained control over the underlying infrastructure resources, and it can sometimes be difficult to extend the built-in deployment commands to execute your custom tasks.
  • In contrast, AWS CodeDeploy focuses on only one thing — managing deployments to your existing instances in EC2 or elsewhere. It is not an application container service, so there are no built-in deployment features; you need to write your own deployment logic, which gives you the freedom to perform any kind of custom task during your deployments. Another difference compared to Elastic Beanstalk is that the service does not create or maintain the EC2 instances for you; you need to manage them yourself, which also means you have full control over your infrastructure. At a higher level, you can think of AWS CodeDeploy as a fully programmable robot that delivers your application artifacts to your fleet of EC2 instances and then runs your custom commands on each of the instances for you.
  • Within the range between fully managed (Elastic Beanstalk) and fully customizable (CodeDeploy), AWS OpsWorks sits somewhere in the middle. It is an application management service that provides built-in deployment features for instances running a specific web framework (a.k.a. an application server layer). What really makes it stand out compared to Elastic Beanstalk is that it uses Chef to perform the deployment actions. The deployment logic for built-in layers is essentially a default Chef cookbook that is open to all levels of customization. Using Chef allows you to achieve the necessary customization for your specific task while still enjoying all the built-in features that are useful for the majority of use cases.

So generally speaking, AWS Elastic Beanstalk is the easiest option if you need to quickly deploy your application and don’t want to be concerned about infrastructure maintenance. AWS CodeDeploy gives you maximum flexibility but lacks built-in deployment features. AWS OpsWorks has a good tradeoff between flexibility and ease of use, but you need to learn Chef in order to fully utilize it.

OK, now I hope I have addressed any doubts about why you should care about this blog post even if you are already familiar with the other deployment services. Let’s get back to Eclipse and see how the plugin works.

After you install the new AWS OpsWorks Plugin component, you should see the “AWS OpsWorks” node under the AWS Explorer View. (Make sure you select “US East (Virginia)” as the current region since OpsWorks is available only in this region.)

The top-level elements under the service node are your stacks, each of which includes all the resources serving the same high-level purpose (e.g., hosting a Tomcat application). Each stack consists of one or more layers, where each layer represents a system component made up of a set of instances that are functionally the same. Each layer that acts as an application server is associated with one app, which is the revision of application code that should be deployed to that layer.

For this demo, I have created a sample stack that has only one Java App Server layer in it and I have started two EC2 instances for this layer. Creating all of these can be done in a couple of minutes using the AWS OpsWorks console. We will create the Java app for this layer inside Eclipse.

To start with, let’s switch to the “Java” or “Java EE” perspective and create a sample web project via File -> New -> AWS Java Web Project. Then, in the Project Explorer, right-click the sample project that we just created, and select Amazon Web Services -> Deploy to AWS OpsWorks.

In the first page of the deployment wizard, choose our target region (US East) and target stack (MyStack), and then create a new Java app called “My App”.

In the App Configuration page, you are required to specify an S3 location where the application artifact will be uploaded. You can optionally pass in additional environment variables, specify a custom domain, and enable SSL for your application.

Click Next, and you will see the Deployment Action Configuration page. Here you can optionally add a comment for your deployment and provide custom Chef JSON input.

Now click Finish, and the deployment will be initiated immediately. Wait for a couple of minutes until all the instances in the layer are successfully deployed.

After it finishes, you will see a confirmation message that shows you the expected endpoint where your application will be hosted on the instances. You can access the endpoint via web browser to make sure the deployment succeeds (make sure you include the trailing slash character in the URL).

As you can see, because of the built-in support for Tomcat applications, it’s really easy to deploy and host your Java web app using AWS OpsWorks. We want to focus on the deployment experience in this plugin, but we are also interested in what other features you are specifically looking for. More support for service resource creation and configuration? Or integration with Chef cookbooks and recipes? Let us know in the comments!

Authentication in the Browser with Amazon Cognito and Public Identity Providers

Our earlier blog post introduced authentication with Amazon Cognito in the browser.

Amazon Cognito has since simplified the authentication workflow. This article describes authenticating the SDK in the browser using Amazon Cognito and supported public identity providers like Google, Facebook, and Amazon.

Step 1 and Step 2 outline registering your application with a public identity provider, and creating a Cognito identity pool. These steps typically need to be performed only once.

One-time Setup

Step 3 and Step 4 describe the authentication workflow of a client application using a public identity provider with Amazon Cognito.

Client Application Workflow

Step 1: Set up a public identity provider

Amazon Cognito supports Facebook, Google, Amazon, and any other OpenID Connect-compliant provider. As a first step, you will have to register your application with a public identity provider. Here is a list of popular providers:

  1. Facebook
  2. Google
  3. Login with Amazon

You can then use the corresponding provider’s SDK in your web application to allow users to authenticate with the provider. Listed below are the developer guides for the providers listed above:

  1. Facebook login for the Web with the Facebook JavaScript SDK
  2. Google+ Sign-In
  3. Login with Amazon – Getting Started for Web

Step 2: Create a Cognito Identity Pool

To begin using Amazon Cognito you will need to set up an identity pool. An identity pool is a store of user identity data specific to your account. The easiest way to set up an identity pool is to use the Amazon Cognito console.

The New Identity Pool wizard will guide you through the configuration process. When creating your identity pool, make sure that you enable access to unauthenticated identities. At this time, you can also configure any public identity providers that you set up in Step 1.

The wizard will then create authenticated and unauthenticated roles for you with very limited permissions. You can edit these roles later using the IAM console. Note that Amazon Cognito will use these roles to grant authenticated and unauthenticated access to your resources, so scope them accordingly.

Step 3: Starting with Unauthenticated Access to Resources

You may want to grant unauthenticated users read-only access to some resources. These permissions should be configured in the IAM role for unauthenticated access (the role created in Step 2).

Configuring the SDK

To configure the SDK to work with unauthenticated roles simply omit the Logins property of the AWS.CognitoIdentityCredentials provider.

Because your identity pool is already configured to use authenticated and unauthenticated IAM roles, you need not set the RoleArn parameter when constructing your provider.

// Identity pool already configured to use roles
var creds = new AWS.CognitoIdentityCredentials({
    IdentityPoolId: 'us-east-1:1699ebc0-7900-4099-b910-2df94f52a030'
});

AWS.config.update({
    region: 'us-east-1',
    credentials: creds
});

Making requests

Having configured a credential provider, you can now make requests with the SDK.

var s3 = new AWS.S3({region: 'us-west-2'});
s3.listObjects({Bucket: 'bucket'}, function(err, data) {
    if (err) console.log(err);
    else console.log(data);
});

Step 4: Switching to Authenticated Access

You can also use the public identity provider configured in Step 1 to provide authenticated access to your resources.

When a user of your application authenticates with a public identity provider, the response contains a login token that must be supplied to Amazon Cognito in exchange for temporary credentials.

For Facebook and Amazon, this token is available at the access_token property of the response data. For Google and any other OpenID provider, this token is available at the id_token property of the response.

Refreshing credentials

After a user has authenticated with a public identity provider, you will need to update your credential provider with the login token from the authentication response.

// access_token received in the authentication response
// from Facebook
creds.params.Logins = {};
creds.params.Logins['graph.facebook.com'] = access_token;

// Explicitly expire credentials so they are refreshed
// on the next request.
creds.expired = true;

The credential provider will refresh credentials when the next request is made.

Persisting authentication tokens

In most cases, the SDKs of public identity providers have built-in mechanisms for caching the login tokens for the duration of the session. For example, the Login with Amazon SDK for JavaScript will cache the access_token and subsequent amazon.Login.authorize() calls will return the cached token as long as the session is valid.

The Facebook SDK for JavaScript exposes a FB.getLoginStatus() method which allows you to check the status of the login session.

The Google APIs Client Library for JavaScript automatically sets the OAuth 2.0 token for your application with the gapi.auth.setToken() method. This token can be retrieved using the gapi.auth.getToken() method.

You can also implement your own caching mechanism for login tokens, if these default mechanisms are insufficient for your use case.

Wrapping up

This article describes how to grant access to your AWS resources by using Amazon Cognito with public identity providers. We hope this article helps you easily authenticate users in your web applications with Amazon Cognito. We’d love to hear more about how you’re using the SDK in your browser applications, so leave us a comment or tweet about it @awsforjs.

Uploading Files to Amazon S3

by Trevor Rowe | in Ruby

I blogged previously about downloading objects from Amazon S3 using the version 2 AWS SDK for Ruby. It was requested that I write about uploading objects as well.

Managed File Uploads

The simplest and most common task is to upload a file from disk to a bucket in Amazon S3. This is very straightforward when using the resource interface for Amazon S3:

s3 = Aws::S3::Resource.new

s3.bucket('bucket-name').object('key').upload_file('/source/file/path')

You can pass additional options to the Resource constructor and to #upload_file. This expanded example demonstrates configuring the resource client, uploading a public object and then generating a URL that can be used to download the object from a browser.

s3 = Aws::S3::Resource.new(
  credentials: Aws::Credentials.new('akid', 'secret'),
  region: 'us-west-1'
)

obj = s3.bucket('bucket-name').object('key')
obj.upload_file('/source/file/path', acl:'public-read')
obj.public_url
#=> "https://bucket-name.s3-us-west-1.amazonaws.com/key"

This is the recommended method of using the SDK to upload files to a bucket. Using this approach has the following benefits:

  • Manages multipart uploads for objects larger than 15MB.
  • Correctly opens files in binary mode to avoid encoding issues.
  • Uses multiple threads for uploading parts of large objects in parallel.

Other Methods

In addition to Aws::S3::Object#upload_file, you can upload an object using #put or using the multipart upload APIs.

PUT Object

For smaller objects, you may choose to use #put instead. The #put method accepts an optional body, which can be a string or any IO object.

obj = s3.bucket('bucket-name').object('key')

# from a string
obj.put(body:'Hello World!')

# from an IO object
File.open('/source/file', 'rb') do |file|
  obj.put(body:file)
end

Multipart APIs

I recommend you use #upload_file whenever possible. If you need to manage large object copies, then you will need to use the multipart interfaces. There are restrictions on the minimum file and part sizes that you should be aware of. Typically, these interfaces are reserved for advanced use cases.

Feedback

I’d love to hear feedback. If you find the AWS SDK for Ruby lacks a utility for working with Amazon S3, I’d love to hear about it. Please feel free to open a GitHub issue or drop into our Gitter channel.

Storing JSON documents in Amazon DynamoDB tables

by Manikandan Subramanian | in Java

DynamoDBMapper is a high-level abstraction layer in the AWS SDK for Java that allows you to transform Java objects into items in Amazon DynamoDB tables and vice versa. All you need to do is annotate your Java class in a few places, and the mapper takes care of getting the objects in and out of the database.

DynamoDBMapper has a new feature that allows you to save an object as a JSON document in a DynamoDB attribute. To do this, simply annotate the class with @DynamoDBDocument, and the mapper does the heavy lifting of converting the object into a JSON document and storing it in DynamoDB. DynamoDBMapper also takes care of loading the Java object from the JSON document when requested by the user.

Let’s say your application maintains the inventory of a car dealership in Amazon DynamoDB and uses DynamoDBMapper to save and retrieve data. One of the tables is Car, which holds information about a car and has name as its primary key. Here is how the Java class for the table looks:

@DynamoDBTable(tableName = "Car")
public class Car {
		
    private String name;
    private int year;
    private String make;
    private List<String> colors;
    private Spec spec;

    @DynamoDBHashKey
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getYear() { return year; }
    public void setYear(int year) { this.year = year; }

    public String getMake() { return make; }
    public void setMake(String make) { this.make = make; }

    public List<String> getColors() { return colors; }
    public void setColors(List<String> colors) { this.colors = colors; }

    public Spec getSpec() { return spec; }
    public void setSpec(Spec spec) { this.spec = spec; }
}

@DynamoDBDocument
public class Spec {

    private String engine;
    private String wheelbase;
    private String length;
    private String width;
    private String height;

    public String getEngine() { return engine; }
    public void setEngine(String engine) { this.engine = engine; }

    public String getWheelbase() { return wheelbase; }
    public void setWheelbase(String wheelbase) { this.wheelbase = wheelbase; }

    public String getLength() { return length; }
    public void setLength(String length) { this.length = length; }

    public String getWidth() { return width; }
    public void setWidth(String width) { this.width = width; }

    public String getHeight() { return height; }
    public void setHeight(String height) { this.height = height; }

}

As you can see, the class Spec is modeled with a @DynamoDBDocument annotation. DynamoDBMapper converts an instance of Spec into a JSON document before storing it in DynamoDB. When stored in DynamoDB, an instance of the class Car will look like this:

{
   "name" : "IS 350",
   "year" : "2015",
   "make" : "Lexus",
   "colors" : ["black","white","grey"],
   "spec" : {
      "engine" : "V6",
      "wheelbase" : "110.2 in",
      "length" : "183.7 in",
      "width" : "71.3 in",
      "height" : "56.3 in"
   }
}
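
Saving and loading such an item requires no extra work beyond the annotations. Here is a minimal sketch of doing both with DynamoDBMapper (it assumes a default-configured DynamoDB client and an existing Car table, which are not part of the original example):

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;
...
DynamoDBMapper mapper = new DynamoDBMapper(new AmazonDynamoDBClient());

// Persist the Car object; the nested Spec instance is stored as a JSON document.
Car car = new Car();
car.setName("IS 350");
car.setYear(2015);
car.setMake("Lexus");
mapper.save(car);

// Load it back; the JSON document is converted back into a Spec object.
Car loaded = mapper.load(Car.class, "IS 350");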

You can also apply other DynamoDBMapper annotations, like @DynamoDBIgnore and @DynamoDBAttribute, to the JSON document. For instance, you can model the height attribute of the Spec class with @DynamoDBIgnore:

@DynamoDBIgnore
public String getHeight() { return height; }
public void setHeight(String height) { this.height = height; }

The updated item in DynamoDB will look like this:

{
   "name" : "IS 350",
   "year" : "2015",
   "make" : "Lexus",
   "colors" : ["black","white","grey"],
   "spec" : {
      "engine" : "V6",
      "wheelbase" : "110.2 in",
      "length" : "183.7 in",
      "width" : "71.3 in"
   }
}

To learn more, check out our other blog posts and the developer guide.

Do you want to see new features in DynamoDBMapper? Let us know what you think!

Verifying Amazon SNS Message Authenticity

by Trevor Rowe | in Ruby

You can now use version 2 of the AWS SDK for Ruby to verify the signatures of Amazon SNS messages. To help prevent spoofing attacks, you should verify that messages were actually sent by Amazon SNS.

The new verifier follows the documented best practices for verification, including:

  • Always use HTTPS when getting the certificate from Amazon SNS.
  • Validate the authenticity of the certificate.
  • Verify the certificate was received from Amazon SNS.

Basic Usage

Usage is straightforward. Construct a message verifier and call one of two methods. The given message body should be the JSON document string of the message.

verifier = Aws::SNS::MessageVerifier.new

verifier.authentic?(message_body)
#=> returns true or false

verifier.authenticate!(message_body)
#=> returns true or raises a VerificationError

You can use one instance of Aws::SNS::MessageVerifier.new to verify multiple messages.

Feedback

As always, we love to hear your feedback. It helps us prioritize our development efforts. In fact, this feature was added by customer request. Feel free to join our Gitter channel or open a GitHub issue.