AWS Developer Blog

AWS Toolkit for Eclipse: Improved Support for Serverless Applications (Part 3 of 3)

by Zhaoxi Zhang | in Eclipse, Java

In the first part of the blog series, we created a new application named rekognition-service from the rekognition blueprint. In the second part, we deployed this serverless application to AWS CloudFormation. In this last part of the blog series, we describe how to test and check the result of the newly deployed rekognition-service application.

Test the rekognition-service application by using the Amazon S3 bucket editor

You can drag and drop a group of files, including folders, onto the Amazon S3 bucket editor to upload them to an Amazon S3 bucket. Uploading .jpg files triggers the underlying Lambda function, which tags them with the label names and confidence values returned by Amazon Rekognition. You can also manually update and delete these tags by using the tag dialog box.

Check the Lambda logs by using the AWS Lambda function editor

You can also check the Lambda function logs by using the Lambda function editor. All the Amazon CloudWatch log streams for the Lambda function are listed on the Logs tab in the editor. You can double-click an item to open the underlying stream in Eclipse, or select multiple items, right-click, and then select Show Log Events to open the underlying streams in one batch.

This concludes our three-part series. What do you think of the rekognition serverless blueprint and the working flow in the AWS Toolkit for Eclipse? If you have any requests for new blueprints and features in the AWS Toolkit for Eclipse, please let us know. We appreciate your feedback.

AWS Toolkit for Eclipse: Improved Support for Serverless Applications (Part 2 of 3)

by Zhaoxi Zhang | in Eclipse, Java

In the first part of this blog post, we talked about how to create a new AWS SAM application from the rekognition serverless blueprint. In this second part, we describe how to deploy the application to AWS CloudFormation.

Deploy the rekognition-service application to a new CloudFormation stack

This .gif animation shows the steps to deploy an AWS SAM application to AWS CloudFormation.

What the AWS Toolkit for Eclipse does for you during deployment

  • Creates a .zip file that contains the project and all its dependencies, and uploads it to the specified Amazon S3 bucket.
  • Updates the serverless.template (as shown in the following snippet) to fill in the complete properties for the AWS::Serverless::Function resource type:
    • Replaces the Handler with the FQCN of the AWS Lambda function handler.
    • Generates the actual code URI for CodeUri so that AWS CloudFormation can reference the Lambda function artifact in the S3 bucket.
    • Adds the missing configuration (Runtime, Description, MemorySize, Timeout, Role), using the default values.
  • Creates a new AWS CloudFormation stack using the updated serverless.template file.

Here is the updated snippet for TagImage in the CloudFormation template.

"TagImage" : {
      "Type" : "AWS::Serverless::Function",
      "Properties" : {
        "Handler" : "com.serverless.demo.function.TagImage",
        "Runtime" : "java8", "CodeUri" : "s3://zhaoxiz-us-west-1/rekognition-service-stack-1497642692569-1497643074359.zip", "Description" : null, "MemorySize" : 512, "Timeout" : 300, "Role" : null,
        "Policies" : [ "AmazonS3FullAccess", "AmazonRekognitionFullAccess" ],
        "Events" : {
          "ProcessNewImage" : {
            "Type" : "S3",
            "Properties" : {
              "Bucket" : {"Ref" : "ImageBucket"},
              "Events" : "s3:ObjectCreated:*",
              "Filter" : {
                "S3Key" : {
                  "Rules" : [{"Name": "suffix", "Value": ".jpg"}]
                }
              }
            }
          }
        }
      }
    }

Deploy the rekognition-service application to an existing CloudFormation stack

We want to update the recognition confidence threshold to 80 in the Lambda function code and redeploy it to the CloudFormation stack. The following .gif animation shows how you can achieve that. When doing a second deployment for the same project, the AWS Toolkit for Eclipse remembers the parameters used in the last deployment, so if you want to keep them the same, you don't have to retype them.

Notice that we need to change the parameter value of ImageBucketExists to true in the parameter page (Fill in stack template parameters) because the bucket was already created during the first deployment. The underlying CloudFormation stack is updated with the new version of the Lambda function whether or not you update the parameters.

Update the Lambda event source by using the parameters page

Now, we want to configure the trigger event for the Lambda function to use another new S3 bucket. This removes the bucket we created in the first deployment and creates a new bucket for this deployment. We only need to redeploy the application, updating the ImageBucketExists parameter to false and the ImageBucketName parameter to the new bucket name. After deployment, you see that the image bucket name in the stack outputs is updated to the new name.

In the third part of this blog post, we’ll talk about how to use the AWS Toolkit for Eclipse to check the result of the rekognition-service application.

AWS Toolkit for Eclipse: Improved Support for Serverless Applications (Part 1 of 3)

by Zhaoxi Zhang | in Eclipse, Java

I am happy to announce that the latest release of the AWS Toolkit for Eclipse includes a couple of new enhancements for developing AWS Serverless Application Model (AWS SAM) applications. In this release, we added a new blueprint: rekognition.

In part 1 of this blog post, we describe and show with an animation what this blueprint does, and how to use the AWS Toolkit for Eclipse to create an application from it. In part 2, we’ll deploy the AWS SAM application to AWS CloudFormation. In part 3, we’ll check the result of the application and test the AWS Lambda function by using the AWS Explorer in the AWS Toolkit for Eclipse.

About the rekognition blueprint

The rekognition blueprint includes a Lambda function, TagImage. This Lambda function automatically tags .jpg files uploaded to a specified Amazon S3 bucket by using the Amazon Rekognition service: it applies the top five labels recognized by Amazon Rekognition as tag keys on the Amazon S3 object, with the corresponding confidence values as tag values.

Create an application named rekognition-service from the rekognition blueprint

This .gif animation shows the steps to create an application from the rekognition blueprint.

About the AWS SAM template

Here is the template snippet from the serverless.template in the project we just created for defining the Lambda function TagImage. Notice that this is a simplified configuration for this Lambda function, because during the deployment phase, the AWS Toolkit for Eclipse fills in all the other properties needed. For a complete configuration set, see Deploying Lambda-based Applications in the AWS Lambda Developer Guide.

In this snippet, we grant the Lambda function permissions to access Amazon S3 and Amazon Rekognition, and we define an event that triggers the Lambda function when .jpg files are uploaded to the specified Amazon S3 bucket.

"TagImage": {
  "Type": "AWS::Serverless::Function",
  "Properties": {
    "Handler": "TagImage",
    "Policies": [
      "AmazonS3FullAccess",
      "AmazonRekognitionFullAccess"
    ],
    "Events": {
      "ProcessNewImage": {
        "Type": "S3",
        "Properties": {
          "Bucket": {"Ref" : "ImageBucket"},
          "Events": "s3:ObjectCreated:*",
          "Filter": {
            "S3Key": {
              "Rules": [{"Name": "suffix", "Value": ".jpg"}]
            }
          }
        }
      }
    }
  }
}

How the Lambda function works

Here is a snippet from the Lambda function TagImage. This Lambda function retrieves the Amazon S3 object from the S3Event, and then calls the Amazon Rekognition service to detect labels with a confidence value of at least 77.

Image imageToTag = new Image().withS3Object(new S3Object().withName(objectKey).withBucket(bucketName));
// Call Rekognition to identify image labels
DetectLabelsRequest request = new DetectLabelsRequest()
                    .withImage(imageToTag)
                    .withMaxLabels(5)
                    .withMinConfidence(77F);
List<Label> labels = rekognitionClient.detectLabels(request).getLabels();

In part 2 of this blog post, we’ll deploy the newly created AWS SAM application to AWS. Then we’ll configure the parameters in the template during the deployment phase. Stay tuned!

Creating and Deploying a Serverless Web Application with CloudFormation and Ember.js

by Michael Labieniec | in CLI, JavaScript

Serverless computing enables you to build scalable and cost-effective applications that scale up or down automatically without provisioning, scaling, and managing servers. You can use AWS Lambda to execute your back-end application code, Amazon API Gateway for a fully managed service to create, publish, maintain, monitor, and secure your REST API, and Amazon S3 to host and serve your static web files.

Ember.js is a popular, long-standing front-end web application framework for developing rich HTML5 web applications. Ember.js has an intuitive command line interface. You can write dramatically less code by using its integrated Handlebars templates. Ember.js also contains an abstracted data access pattern that enables you to write application adapters that communicate with REST APIs.

In this tutorial, we build a simple Ember.js application that initializes the AWS SDK for JavaScript using native Ember.js initializers. The application communicates with API Gateway, which runs backend Lambda code that reads and writes to Amazon DynamoDB. In addition, we use the Ember.js command line interface to package and deploy the web application to Amazon S3 for hosting and for delivery through Amazon CloudFront. We also use AWS CloudFormation with the AWS Serverless Application Model (AWS SAM) to automate the creation and management of our serverless cloud architecture.

Configuring your local development environment

Requirements:

  • Node.js with npm
  • The AWS CLI, configured with credentials
  • Git

Clone or fork the aws-serverless-ember repository and install the dependencies. Depending on your setup, you might need to prefix the global npm installs with sudo. If you get an EACCES error, rerun these commands with sudo:

npm install -g ember-cli
npm install -g bower
git clone https://github.com/awslabs/aws-serverless-ember

You should now have two folders in the project’s root directory, client and cloud. The cloud folder contains our CloudFormation templates and Lambda function code. The client folder contains the client-side Ember.js web application that deploys to S3. Next, install the dependencies:

cd aws-serverless-ember
cd client
npm install && bower install

When this is complete, navigate to the cloud directory in your project’s root directory:

cd ../cloud

Create and deploy the AWS Cloud environment using CloudFormation

The infrastructure for this application uses API Gateway, Lambda, S3, and DynamoDB.

Note: You will incur charges after running these templates. See the appropriate pricing page for details.

The cloud folder of the repository contains the following files:

  • api.yaml – An AWS SAM CloudFormation template for your serverless backend
  • deploy.sh – A bash helper script for deploying with CloudFormation
  • hosting.yaml – A CloudFormation template for your S3 bucket and website
  • swagger.yaml – The OpenAPI (Swagger) definition for your serverless REST API
  • index.js – The Lambda function code

First, create a website bucket with static website hosting. You also create a bucket for deploying your serverless code.

Note: For this section, be sure you've installed and configured the AWS CLI with appropriate permissions (at minimum, read/write access to CloudFormation and S3).

Run the following command to create the S3 hosting portion of the back end, wait for the stack creation to complete, and then describe the stack:

aws cloudformation create-stack --stack-name ember-serverless-hosting --template-body file://$PWD/hosting.yaml && \
aws cloudformation wait stack-create-complete --stack-name ember-serverless-hosting && \ 
aws cloudformation describe-stacks --stack-name ember-serverless-hosting

This creates a CloudFormation stack named ember-serverless-hosting, waits for the stack to complete, and then displays the output results. While you wait, you can monitor the events or view the resource creation in the CloudFormation console, or run the following command in another terminal:

aws cloudformation describe-stack-events --stack-name ember-serverless-hosting

When the stack is created, the console returns JSON that includes the stack outputs specified in your template. If the JSON output doesn't appear for some reason, rerun the last portion of the command: aws cloudformation describe-stacks --stack-name ember-serverless-hosting. This output includes the bucket name you'll use to deploy your serverless code, as well as the bucket URL you'll use to deploy your Ember application. For example:

{
    "Stacks": [
        {
            "StackId": "arn:aws:cloudformation:<aws-region>:<account-id>:stack/ember-serverless-hosting/<unique-id>",
            "Description": "Ember Serverless Hosting",
            "Tags": [],
            "Outputs": [
                {
                    "Description": "The bucket used to deploy Ember serverless code",
                    "OutputKey": "CodeBucketName",
                    "OutputValue": "ember-serverless-codebucket-<unique-id>"
                },
                {
                    "Description": "The bucket name of our website bucket",
                    "OutputKey": "WebsiteBucketName",
                    "OutputValue": "ember-serverless-websitebucket-<unique-id>"
                },
                {
                    "Description": "Name of S3 bucket to hold website content",
                    "OutputKey": "S3BucketSecureURL",
                    "OutputValue": "https://ember-serverless-websitebucket-<unique-id>.s3.amazonaws.com"
                },
                {
                    "Description": "URL for website hosted on S3",
                    "OutputKey": "WebsiteURL",
                    "OutputValue": "http://ember-serverless-websitebucket-<unique-id>.s3-website-<aws-region>.amazonaws.com"
                }
            ],
            "CreationTime": "2017-02-22T14:40:46.797Z",
            "StackName": "ember-serverless",
            "NotificationARNs": [],
            "StackStatus": "CREATE_COMPLETE",
            "DisableRollback": false
        }
    ]
}

Next, you create your REST API with CloudFormation and AWS SAM. For this, you need the CodeBucketName output parameter from the previous stack JSON output when running the following command to package the template:

aws cloudformation package --template-file api.yaml --output-template-file api-deploy.yaml --s3-bucket <<CodeBucketName>>

This command creates a packaged template file named api-deploy.yaml. This file contains the S3 URI to your Lambda code, which was uploaded by the previous command. To deploy your serverless REST API, run the following:

aws cloudformation deploy --template-file api-deploy.yaml --stack-name ember-serverless-api --capabilities CAPABILITY_IAM

Next, run the following command to retrieve the outputs:

aws cloudformation describe-stacks --stack-name ember-serverless-api

Note the API Gateway ID OutputValue:

...
"Outputs": [
                {
                    "Description": "URL of your API endpoint", 
                    "OutputKey": "ApiUrl", 
                    "OutputValue": "https://xxxxxxxx.execute-api.us-east-1.amazonaws.com/Prod"
                }, 
                {
                    "Description": "API Gateway ID", 
                    "OutputKey": "Api", 
                    "OutputValue": "xxxxxxxx" //<- This Value is needed in the below command
                }
            ],
...

Now, run the following command to download the JavaScript SDK for your API using the ID noted above:

aws apigateway get-sdk --rest-api-id <<api-id>> --stage-name Prod --sdk-type javascript ./apiGateway-js-sdk.zip

This downloads a .zip file that contains the JavaScript interface generated by API Gateway. You use this interface to interact with API Gateway from your Ember.js application. Extract the contents of the .zip file, which produces a folder named apiGateway-js-sdk, directly into your client/vendor/ folder. You should now have the following client/vendor folder structure:

  • client
    • vendor
      • apiGateway-js-sdk
      • amazon-cognito

The additional files are libraries that the SDK generated by API Gateway uses to properly sign your API Gateway requests. For details about how the libraries are used, see the Amazon API Gateway Developer Guide.

To use the SDK in your Ember.js application, you need to ensure Ember loads these files properly. We do this by using app.import() statements within the ember-cli-build.js file.
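For example, a sketch of those imports might look like the following. The file names inside the generated SDK, such as apigClient.js, are assumptions that can vary by SDK version, so match them to the contents of your extracted folder:

// ember-cli-build.js (illustrative sketch)
const EmberApp = require('ember-cli/lib/broccoli/ember-app');

module.exports = function (defaults) {
  let app = new EmberApp(defaults, {});

  // Load the signing libraries first, then the generated client
  app.import('vendor/apiGateway-js-sdk/lib/axios/dist/axios.standalone.js');
  app.import('vendor/apiGateway-js-sdk/lib/apiGatewayCore/sigV4Client.js');
  app.import('vendor/apiGateway-js-sdk/lib/apiGatewayCore/apiGatewayClient.js');
  // (also import any remaining lib/ files your generated SDK ships with)
  app.import('vendor/apiGateway-js-sdk/apigClient.js');

  return app.toTree();
};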

Initialize the AWS SDK for JavaScript with Ember

You'll set up and initialize the AWS SDK for JavaScript within your application by using Ember initializers. Ember initializers let you run code before your application loads, so you can ensure the AWS SDK and your API SDK are properly initialized before your application is presented to the user.
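As a sketch, an initializer along these lines wires up the SDK's Amazon Cognito credentials (the file name and details are illustrative assumptions, not the repository's exact code):

// client/app/initializers/aws.js (illustrative sketch)
import ENV from '../config/environment';

export function initialize(/* application */) {
  // The AWS global is provided by the SDK script loaded via ember-cli-build.js
  const AWS = window.AWS;
  AWS.config.region = ENV.AWS_REGION;
  AWS.config.credentials = new AWS.CognitoIdentityCredentials({
    IdentityPoolId: ENV.AWS_POOL_ID
  });
}

export default {
  name: 'aws',
  initialize
};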

Open your client/config/environment.js file and add your AWS configuration to the development section: the region in which you are running, plus the Amazon Cognito identity pool ID, user pool ID, and user pool app client ID created in the previous section.

You can retrieve these values by running:

aws cloudformation describe-stacks --stack-name ember-serverless-api

Use the following values returned in the Output within the client/config/environment.js file:

  • ENV.AWS_POOL_ID -> CognitoIdentityPoolId
  • ENV.AWS_USER_POOL_ID -> CognitoUserPoolsId
  • ENV.AWS_CLIENT_ID -> CognitoUserPoolsClientId

// client/config/environment.js (around line 26)
if (environment === 'development') {
  ENV.AWS_REGION = 'aws-region-1';
  ENV.AWS_POOL_ID = 'aws-region:unique-hash-id';
  ENV.AWS_USER_POOL_ID = 'aws-region_unique-id';
  ENV.AWS_CLIENT_ID = 'unique-user-pool-app-id';
}

Now, run your client and ensure the SDK loads properly. From the client directory, run:

ember s

Then visit http://localhost:4200/ in your web browser and open the developer tools. (If you're using Chrome on a macOS machine, press cmd+option+i. Otherwise, press Ctrl+Shift+i.) You should see the following messages:

AWS SDK Initialized, Registering API Gateway Client: 
API Gateway client initialized: 
Object {docsGet: function, docsPost: function, docsDelete: function, docsOptions: function}

This confirms your generated SDK and the AWS SDK are properly loaded and ready for use.

Now try the following:

1. Log in with any user/credentials, and observe the error. (onFailure: Error: User does not exist.)
2. Register with a weak password, and observe the error. (Password did not conform with policy. Password not long enough.)
3. Register a new user, and use an email address you have access to so you receive a confirmation code.
4. Try re-sending the confirmation code.
5. Enter the confirmation code.
6. Log in with your newly created user.

After you log in to the client application, the JWT access token retrieved from Amazon Cognito user pools is decoded and displayed for reference. The "Dynamo Items" section of the application creates and deletes items in your DynamoDB table (created earlier with CloudFormation) via API Gateway and Lambda, authenticated with Amazon Cognito user pools.

Deploying the web application to Amazon S3

In the following commands, you use the S3 WebsiteBucketName output from the ember-serverless-hosting CloudFormation stack as the sync target.

Note: you can retrieve the output values again by running:
aws cloudformation describe-stacks --stack-name ember-serverless-hosting

cd client
ember build
aws s3 sync dist/ s3://WebsiteBucketName/ --acl public-read

Your website should now be available at the WebsiteURL output value from the ember-serverless-hosting CloudFormation stack's outputs.

Questions or comments? Reach out to us on the AWS Development Forum.

Source code and issues on GitHub: https://github.com/awslabs/aws-serverless-ember.

General Availability of the AWS Toolkit for Visual Studio 2017

by Steve Roberts | in .NET, Visual Studio

We’re pleased to announce that the AWS Toolkit for Visual Studio is now generally available (GA) for Visual Studio 2017. You can install it from the Visual Studio Gallery. The GA version is 1.12.1.0.

Unlike with previous versions of the toolkit, we are using the Visual Studio Gallery to distribute versions that target Visual Studio 2017 and later editions. New features in the Visual Studio installation experience enable setup scenarios that previously required a Windows Installer. No more! Now you can use the familiar Extensions and Updates dialog box in the IDE to obtain updates.

For users who require the toolkit in earlier versions of Visual Studio (2013 and 2015), or the awsdeploy.exe standalone deployment tool, we’ll continue to maintain our current Windows Installer. In addition to the toolkit and deployment tool, this installer contains the .NET 3.5 and .NET 4.5 AWS SDK for .NET assemblies. It also contains the AWS Tools for Windows PowerShell module. If you have PowerShell version 5, you can also install the AWSPowerShell module from the PowerShell Gallery.
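For example, from a PowerShell 5 (or later) prompt:

Install-Module -Name AWSPowerShell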

Thanks to everyone who tried out the preview versions of the AWS Toolkit for Visual Studio 2017 and for providing feedback!

Chalice Version 0.9.0 is Now Available

by James Saryerwinnie | in Python

The latest preview version of Chalice, our microframework for Python serverless application development, is now available. This release includes a couple of commonly requested features: support for binary request and response bodies, and the ability to configure the amount of memory allocated to your Lambda function.

To demonstrate these features, let’s walk through a sample Chalice app.

Thumbnail Generator

In this sample app, we create a view function that accepts a .png file as input, and generates a thumbnail of the image as output. This requires accepting binary content in the request body and returning binary content in the response body.

First, we install Chalice and create our initial project structure.

$ virtualenv /tmp/venv
$ source /tmp/venv/bin/activate
$ pip install chalice
$ chalice new-project thumbnail-generator
$ cd thumbnail-generator

Next, we write a view function that expects an image as the request body. It then produces a thumbnail of that image and sends it as a response. Here’s the code to do this.

from io import BytesIO
from chalice import Chalice, Response

from PIL import Image

app = Chalice(app_name='thumbnail-generator')


@app.route('/thumbnail', content_types=['image/png'], methods=['POST'])
def thumbnail():
    im = Image.open(BytesIO(app.current_request.raw_body))
    im.thumbnail((150, 150))
    out = BytesIO()
    im.save(out, 'PNG')
    return Response(out.getvalue(),
                    status_code=200,
                    headers={'Content-Type': 'image/png'})

To properly handle incoming binary content, we need two elements. First, we must declare the content type we're expecting as input by using the content_types argument in the route() call. By specifying content_types=['image/png'], we state that requests must set the Content-Type header to image/png. Second, to access the raw binary request body, we use the app.current_request.raw_body attribute. This is the raw binary data that we load with Pillow, the Python imaging library we'll use to generate thumbnails.

Once we've used Pillow to generate a thumbnail, we must return the generated image as binary content. To do so, we use the Response class. Returning binary content again requires two elements.

First, the response body we pass to the Response object must be of type bytes(). Second, we must specify a Content-Type header that is configured as a binary content type. By default, Chalice automatically configures a common set of content types as binary, so in most cases you only have to set the appropriate content type. You can see the list of default binary content types in the Chalice API documentation at http://chalice.readthedocs.io/en/latest/api.html#default_binary_types. If a binary content type you want to use isn't in this list by default, you can change the list by modifying app.api.binary_types.

Before deploying, let’s test this locally. First, we install the Pillow library:

$ pip install Pillow

Next, we run chalice local and test our app using a local .png file we saved in /tmp.

$ chalice local
Serving on localhost:8000

$  curl -H 'Accept: image/png' -H 'Content-Type: image/png' \
    --data-binary @/tmp/lion.png \
    http://localhost:8000/thumbnail > /tmp/thumbnail.png

You can open /tmp/thumbnail.png and verify that a thumbnail was created.

In this example, we set the Accept header to indicate that we expect binary content of type image/png in the response.

Deploying Our App

Now that we've tested the app locally, let's deploy it to API Gateway and Lambda.

We use a couple of additional Chalice features to deploy our application. First, we configure the amount of memory to allocate to our Lambda function. This feature was just released in Chalice version 0.9.0. We want to increase the memory size of our Lambda function to 1GB. To do this, we update our .chalice/config.json file with the following lambda_memory_size entry:

$ cat .chalice/config.json
{
  "lambda_memory_size": 1024,
  "stages": {
    "dev": {
      "api_gateway_stage": "dev"
    }
  },
  "version": "2.0",
  "app_name": "thumbnail-generator"
}

We increase the amount of memory so that we have more CPU power available for our thumbnail generator. In the Lambda resource model, you choose the amount of memory you want for your function, and are allocated proportional CPU power and other resources. For example, choosing 256MB of memory allocates approximately twice as much CPU power to your Lambda function as requesting 128MB of memory.

Next, we need to install Pillow. Pillow contains C extensions that must be compatible with the platform where you run the Lambda function.

Create a vendor/ directory at the root of your application, as follows.

$ mkdir vendor

Next, download the Pillow wheel file for Linux:

$ cd vendor
$ pip download --platform manylinux1_x86_64 \
    --python-version 27 --implementation cp \
    --abi cp27mu --only-binary=:all: Pillow
$ unzip Pillow*
$ rm *.whl
$ cd ..

If you get an error about being unable to find the olefile package, you can safely ignore it. You can read more about how the vendor/ directory works in our documentation.

Now we can deploy the app using the chalice deploy command.

$ chalice deploy
Initial creation of lambda function.
Creating role
Creating deployment package.
Initiating first time deployment...
Deploying to: dev
https://abcd.execute-api.us-west-2.amazonaws.com/dev/

Let's test it. We can use the same curl command we used previously, replacing the local endpoint with our newly created remote endpoint.

$ curl -H 'Accept: image/png' -H 'Content-Type: image/png' \
    --data-binary @/tmp/lion.png \
    https://abcd.execute-api.us-west-2.amazonaws.com/dev/thumbnail \
    > /tmp/thumbnail2.png

You can open /tmp/thumbnail2.png and verify that Chalice correctly generated a thumbnail.

Try out the latest version of Chalice today and let us know what you think. You can chat with us on our Gitter channel and file feature requests on our GitHub repo. We look forward to your feedback and suggestions.

Updates to AWSPowerShell Cmdlet Names

by Steve Roberts | in .NET, PowerShell

Since we launched the first AWS module for PowerShell five years ago, we’ve been hugely encouraged by the feedback from the user community, from first-time PowerShell users to PowerShell MVPs. In that time, we’ve acted immediately on some feedback, and put more complex changes into our backlog for future consideration.

One common request from experienced community members was to rename some cmdlets to change plural nouns to singular, as is the recommended practice. (We had initially created them with plural nouns as we tried to map them closely to the underlying service operation names).

Today we’re pleased to announce we’ve completed the remapping work. We’ve released new versions of the AWSPowerShell and AWSPowerShell.NetCore modules – version 3.3.96.0 – with singular cmdlet names and other cross-service consistency changes.

All the cmdlet name changes have backward-compatible aliases, so you should not have to update any existing scripts. You can continue to use the earlier names.

As we analyzed the cmdlets to determine where we needed to make changes, we found fewer problematic cases than we feared, but more than we'd like! Listing them all in this blog post would be overwhelming. Instead, we added a page to our cmdlet cross-reference on the web at https://docs.aws.amazon.com/powershell/latest/reference/items/pstoolsref-legacyaliases.html. You can also quickly inspect the changes by opening the AWSPowerShellLegacyAliases.psm1 file in the module distribution. For convenience, both of these resources list the aliases by service.

Please keep your feedback and feature requests coming! We really enjoy getting feedback (good or bad) and using it to plan improvements to the tools to address the problems you handle day to day.

Security update to AWS SDK for .NET’s Amazon CloudFront Cookie Signer

by Milind Gokarn | in .NET

The AWS SDK for .NET has a utility class, Amazon.CloudFront.AmazonCloudFrontCookieSigner, for creating signed cookies to access private content served using Amazon CloudFront. This blog contains details on usage of this utility class along with sample code.

Specifying AmazonCloudFrontCookieSigner.Protocols.Https as the protocol parameter creates a cookie with an incorrect policy: the policy contains a resource restriction of "http*://" instead of "https://".

Potential Impact

CloudFront distributions configured to serve both HTTP and HTTPS requests are affected by this issue, unless "Viewer Protocol Policy" is configured as HTTPS only, in which case CloudFront blocks attempts to access content over HTTP.

Impacted SDK versions

  • Versions 2.3.36 to 2.3.55 for version 2 of the AWS SDK for .NET
  • Versions 3.0.1-preview to 3.3.3.6 for package AWSSDK.CloudFront of the AWS SDK for .NET
  • Versions 3.2.0-beta to 3.2.3.7-beta, and 3.2.8-rc for package AWSSDK.CloudFront in the preview version 3.2 of the AWS SDK for .NET, that targets .NET Core

Mitigation

Update your dependency to the latest version of the SDK. The fix contains a change to the AmazonCloudFrontCookieSigner.Protocols enum’s underlying values (a breaking change) and requires a recompilation of the consuming application. The assembly version of the SDK package has been updated for this fix. There are no other breaking API changes in this version.

  • Version 2.3.55.2 and above for package AWSSDK in version 2 of the AWS SDK for .NET
  • Version 3.3.4.0 and above for package AWSSDK.CloudFront in version 3 of the AWS SDK for .NET

React Native Support in the AWS SDK for JavaScript

by Christopher Radek | in JavaScript

We’re excited to announce React Native support in the AWS SDK for JavaScript.

You can now access all services that are currently supported in the AWS SDK for JavaScript from within a React Native application. You can configure Amazon Cognito Identity as the authentication provider by using the same Amazon Cognito Identity credentials you might already be familiar with from the browser SDK.
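For example, a minimal sketch of that configuration (the region and identity pool ID are placeholders):

import AWS from 'aws-sdk/dist/aws-sdk-react-native';

// Placeholders: substitute your own region and Amazon Cognito identity pool ID
AWS.config.region = 'us-east-1';
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
  IdentityPoolId: 'us-east-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
});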

Getting Started with the SDK in React Native

Save the AWS SDK for JavaScript as a project dependency by running the following:

npm install --save aws-sdk

Within your project, import the SDK with the following example:

import AWS from 'aws-sdk/dist/aws-sdk-react-native';

Once imported, you can use the SDK the same way you use it in the browser. For example, the following shows how to get a list of folders from an S3 bucket, given a prefix:

import AWS from 'aws-sdk/dist/aws-sdk-react-native';

const s3 = new AWS.S3({
  region: 'REGION',
  credentials: {/* */}
});

export async function getFoldersByPrefix(bucket, prefix) {
  let nextMarker = null;
  let isTruncated = false;
  const folders = [];

  // Returns all objects in current 'directory'
  do {
    const s3Objects = await s3.listObjects({
      Bucket: bucket,
      Prefix: prefix || '',
      Delimiter: '/',
      Marker: nextMarker
    }).promise();
    // Store the folder paths
    const prefixes = s3Objects.CommonPrefixes.map(common => common.Prefix);
    folders.push.apply(folders, prefixes);
    // Check if there are more objects in this directory
    isTruncated = s3Objects.IsTruncated;
    nextMarker = isTruncated ? s3Objects.NextMarker : null;
  } while (isTruncated);
  
  return folders;
}

Take a look at the AWS SDK for JavaScript Developer Guide for examples of how to make service calls with the SDK.

Give It a Try!

We look forward to hearing what you think of this new support for React Native in the AWS SDK for JavaScript! Try it out and leave your feedback in the comments or on GitHub!

Build and Deploy a Serverless REST API in Minutes Using Chalice

by Leah Rivers | in Python

Chalice is a serverless microframework that makes it simple for you to use AWS Lambda and Amazon API Gateway to build serverless apps. We’ve improved Chalice based on community feedback from GitHub, and we’re eager for you to take our latest version for a spin. Hopefully, you’ll find Chalice a fast and effective way to build serverless apps.

To help you get started with Chalice, here’s a quick five-step review:

   Step 1: Install Chalice
   Step 2: Configure credentials
   Step 3: Create a project
   Step 4: Deploy your API
   Step 5: You’re done launching a simple API. Consider adding something to your app!

Let’s dig in.

Step 1: Install Chalice.
To install Chalice, you need Python 2.7 or 3.6, the versions that Lambda supports. We recommend using a virtual environment, as follows, and then installing Chalice with pip install chalice.

 $ pip install virtualenv
 $ virtualenv ~/.virtualenvs/chalice-demo
 $ source ~/.virtualenvs/chalice-demo/bin/activate
 $ pip install chalice

Step 2: Add credentials if you haven’t previously configured boto3 or the AWS CLI.
(If you’re already running boto3 or the AWS CLI, you’re all good. Move on to Step 3.)

If this is your first time configuring credentials for AWS, use the following.

 $ mkdir ~/.aws
 $ cat >> ~/.aws/config
 [default]
 aws_access_key_id=YOUR_ACCESS_KEY_HERE
 aws_secret_access_key=YOUR_SECRET_ACCESS_KEY
 region=YOUR_REGION (such as us-west-2, us-west-1, etc)

For more information on all the supported methods for configuring credentials, see the boto3 docs.

Step 3: Create a project using the chalice command line.
Use the new-project command to create a sample app that defines a single view.

 $ chalice new-project helloworld
 $ cd helloworld

Take a moment to check out what you've created. In app.py, the sample app defines a single view, /, that when called returns the JSON body {"hello": "world"}.
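The generated app.py looks similar to this sketch (based on the chalice new-project template):

from chalice import Chalice

app = Chalice(app_name='helloworld')


@app.route('/')
def index():
    return {'hello': 'world'}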

Step 4: Deploy your App.
Alright, double-check that you're still in your project directory. You're ready to deploy!
From the command line, run chalice deploy.
From the command line, run chalice deploy.

 $ chalice deploy
 ...
 Initiating first time deployment...
 https://qxea58oupc.execute-api.us-west-2.amazonaws.com/dev/

You now have an API up and running using API Gateway and Lambda.

 $ curl https://qxea58oupc.execute-api.us-west-2.amazonaws.com/dev/
 {"hello": "world"}

Step 5: Add something to your app!
From this point, there’s a bunch of stuff you can do, including adding URL parameters, adding routing, or customizing the HTTP response. Find tutorials and examples here.
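For example, capturing a URL parameter takes just a few lines (a sketch; the route and function names are arbitrary):

@app.route('/hello/{name}')
def hello_name(name):
    # GET /hello/james -> {"hello": "james"}
    return {'hello': name}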

Have fun!