Category: CLI


Creating and Deploying a Serverless Web Application with CloudFormation and Ember.js

Serverless computing enables you to build cost-effective applications that scale up or down automatically, without provisioning, scaling, or managing servers. You can use AWS Lambda to run your back-end application code, Amazon API Gateway as a fully managed service to create, publish, maintain, monitor, and secure your REST API, and Amazon S3 to host and serve your static web files.

Ember.js is a popular, long-standing front-end web application framework for developing rich HTML5 web applications. Ember.js has an intuitive command line interface. You can write dramatically less code by using its integrated Handlebars templates. Ember.js also contains an abstracted data access pattern that enables you to write application adapters that communicate with REST APIs.

In this tutorial, we build a simple Ember.js application that initializes the AWS SDK for JavaScript using native Ember.js initializers. The application communicates with API Gateway, which runs backend Lambda code that reads and writes to Amazon DynamoDB. In addition, we use the Ember.js command line interface to package and deploy the web application to Amazon S3 for hosting and for delivery through Amazon CloudFront. We also use AWS CloudFormation with the AWS Serverless Application Model (AWS SAM) to automate the creation and management of our serverless cloud architecture.

Configuring your local development environment

Requirements:

Clone or fork the aws-serverless-ember repository and install the dependencies. Depending on your setup, you might need to use sudo when installing global npm packages. If you get an EACCES error, re-run the npm commands with sudo (for example, sudo npm install -g ember-cli):

npm install -g ember-cli
npm install -g bower
git clone https://github.com/awslabs/aws-serverless-ember

You should now have two folders in the project’s root directory, client and cloud. The cloud folder contains our CloudFormation templates and Lambda function code. The client folder contains the client-side Ember.js web application that deploys to S3. Next, install the dependencies:

cd aws-serverless-ember
cd client
npm install && bower install

When this is complete, navigate to the cloud directory in your project’s root directory:

cd ../cloud

Create and deploy the AWS Cloud environment using CloudFormation

The infrastructure for this application uses API Gateway, Lambda, S3, and DynamoDB.

Note: You will incur charges for the resources created by running these templates. See the appropriate pricing pages for details.

The cloud folder of the repository contains the following files:

  • api.yaml – An AWS SAM CloudFormation template for your serverless backend
  • deploy.sh – A bash helper script for deploying with CloudFormation
  • hosting.yaml – A CloudFormation template for your S3 bucket and website
  • index.js – The Lambda function code
  • swagger.yaml – The Swagger (OpenAPI) definition of your serverless REST API for API Gateway

First, create a website bucket with static website hosting. You also create a bucket for deploying your serverless code.

Note: For this section, be sure you’ve installed and configured the AWS CLI with appropriate permissions for this tutorial (at minimum, read/write access to CloudFormation and S3).
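A quick way to confirm that the CLI is installed and that your credentials work before you start (a sketch; any configured profile with the permissions above will do):

# Show which credentials, profile, and region the CLI will use
$ aws configure list
# Confirm the credentials are valid by asking AWS who you are
$ aws sts get-caller-identity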

Run the following command to create the S3 hosting portion of the back end, wait for the stack creation to finish, and then describe the completed stack:

aws cloudformation create-stack --stack-name ember-serverless-hosting --template-body file://$PWD/hosting.yaml && \
aws cloudformation wait stack-create-complete --stack-name ember-serverless-hosting && \
aws cloudformation describe-stacks --stack-name ember-serverless-hosting

This creates a CloudFormation stack named ember-serverless-hosting, waits for the stack creation to complete, and then displays the output results. While you wait, you can monitor the events or view the resource creation in the CloudFormation console, or you can run the following command in another terminal:

aws cloudformation describe-stack-events --stack-name ember-serverless-hosting

When the stack is created, the console returns JSON that includes the stack outputs specified in your template. If the JSON isn’t displayed for some reason, re-run the last portion of the command: aws cloudformation describe-stacks --stack-name ember-serverless-hosting. This output includes the bucket name you’ll use to deploy your serverless code, as well as the bucket URL you’ll use to deploy your Ember application. For example:

{
    "Stacks": [
        {
            "StackId": "arn:aws:cloudformation:<aws-region>:<account-id>:stack/ember-serverless-hosting/<unique-id>",
            "Description": "Ember Serverless Hosting",
            "Tags": [],
            "Outputs": [
                {
                    "Description": "The bucket used to deploy Ember serverless code",
                    "OutputKey": "CodeBucketName",
                    "OutputValue": "ember-serverless-codebucket-<unique-id>"
                },
                {
                    "Description": "The bucket name of our website bucket",
                    "OutputKey": "WebsiteBucketName",
                    "OutputValue": "ember-serverless-websitebucket-<unique-id>"
                },
                {
                    "Description": "Name of S3 bucket to hold website content",
                    "OutputKey": "S3BucketSecureURL",
                    "OutputValue": "https://ember-serverless-websitebucket-<unique-id>.s3.amazonaws.com"
                },
                {
                    "Description": "URL for website hosted on S3",
                    "OutputKey": "WebsiteURL",
                    "OutputValue": "http://ember-serverless-websitebucket-<unique-id>.s3-website-<aws-region>.amazonaws.com"
                }
            ],
            "CreationTime": "2017-02-22T14:40:46.797Z",
            "StackName": "ember-serverless",
            "NotificationARNs": [],
            "StackStatus": "CREATE_COMPLETE",
            "DisableRollback": false
        }
    ]
}
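
The CodeBucketName output is needed in the next step. If you’d rather not copy it out of the JSON by hand, you can capture it in a shell variable with a --query expression (a sketch; it assumes the output key names shown above):

$ CODE_BUCKET=$(aws cloudformation describe-stacks \
    --stack-name ember-serverless-hosting \
    --query "Stacks[0].Outputs[?OutputKey=='CodeBucketName'].OutputValue" \
    --output text)
$ echo "$CODE_BUCKET"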

Next, you create your REST API with CloudFormation and AWS SAM. For this, you need the CodeBucketName output parameter from the previous stack JSON output when running the following command to package the template:

aws cloudformation package --template-file api.yaml --output-template-file api-deploy.yaml --s3-bucket <<CodeBucketName>>

This command creates a packaged template file named api-deploy.yaml. This file contains the S3 URI to your Lambda code, which was uploaded by the previous command. To deploy your serverless REST API, run the following:

aws cloudformation deploy --template-file api-deploy.yaml --stack-name ember-serverless-api --capabilities CAPABILITY_IAM

Next, run the following command to retrieve the outputs:

aws cloudformation describe-stacks --stack-name ember-serverless-api

Note the API Gateway ID OutputValue:

...
"Outputs": [
                {
                    "Description": "URL of your API endpoint", 
                    "OutputKey": "ApiUrl", 
                    "OutputValue": "https://xxxxxxxx.execute-api.us-east-1.amazonaws.com/Prod"
                }, 
                {
                    "Description": "API Gateway ID", 
                    "OutputKey": "Api", 
                    "OutputValue": "xxxxxxxx" //<- This Value is needed in the below command
                }
            ],
...

Now, run the following command to download the JavaScript SDK for your API using the ID noted above:

aws apigateway get-sdk --rest-api-id <<api-id>> --stage-name Prod --sdk-type javascript ./apiGateway-js-sdk.zip

This downloads a .zip file that contains the JavaScript interface generated by API Gateway. You use this interface to interact with API Gateway from your Ember.js application. Extract the contents of the .zip file, which produces a folder named apiGateway-js-sdk, directly into your client/vendor/ folder. You should now have the following client/vendor folder structure:

  • client
    • vendor
      • apiGateway-js-sdk
      • amazon-cognito

The additional files are libraries that the generated SDK uses to properly sign your API Gateway requests. For details about how these libraries are used, see the Amazon API Gateway Developer Guide.
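If you prefer to do the extraction from the command line, something like the following should produce that structure (a sketch; it assumes the .zip was downloaded into the cloud directory and that the archive’s top-level folder is apiGateway-js-sdk, as described above):

# From the cloud/ directory where the .zip was downloaded
$ unzip apiGateway-js-sdk.zip -d ../client/vendor/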

To use the SDK in your Ember.js application, you need to ensure that Ember loads these libraries properly. We do this with app.import() statements in the ember-cli-build.js file.

Initialize the AWS SDK for JavaScript with Ember

You’ll set up and initialize the AWS SDK for JavaScript within your application by using Ember initializers, which allow you to run code before your application loads. By using them, you can ensure the AWS SDK and your API SDK are properly initialized before your application is presented to the user.

Open your client/config/environment.js file and add your AWS configuration to the development section: the region in which you are running, the Amazon Cognito identity pool ID, the Amazon Cognito user pool ID, and the user pool app client ID, all of which were created in the previous section.

You can retrieve these values by running:

aws cloudformation describe-stacks --stack-name ember-serverless-api

Use the following values returned in the Output within the client/config/environment.js file:

  • ENV.AWS_POOL_ID -> CognitoIdentityPoolId
  • ENV.AWS_USER_POOL_ID -> CognitoUserPoolsId
  • ENV.AWS_CLIENT_ID -> CognitoUserPoolsClientId

// client/config/environment.js L26
if (environment === 'development') {
    ENV.AWS_REGION = 'aws-region-1'
    ENV.AWS_POOL_ID = 'aws-region:unique-hash-id'
    ENV.AWS_USER_POOL_ID = 'aws-region_unique-id'
    ENV.AWS_CLIENT_ID = 'unique-user-pool-app-id'
}

Now, run your client and ensure the SDK loads properly. From the client directory, run:

ember s

Then visit http://localhost:4200/ in your web browser and open the developer tools (if you’re using Chrome on macOS, press cmd+option+i; otherwise, press Ctrl+Shift+i). You should see the following messages:

AWS SDK Initialized, Registering API Gateway Client: 
API Gateway client initialized: 
Object {docsGet: function, docsPost: function, docsDelete: function, docsOptions: function}

This confirms your generated SDK and the AWS SDK are properly loaded and ready for use.

Now try the following:

1. Log in with any user/credentials, and observe the error. (onFailure: Error: User does not exist.)
2. Register with a weak password, and observe the error. (Password did not conform with policy. Password not long enough.)
3. Register a new user, and use an email address you have access to so you receive a confirmation code.
4. Try re-sending the confirmation code.
5. Enter the confirmation code.
6. Log in with your newly created user.

The JWT access token retrieved from Amazon Cognito user pools is decoded and displayed for reference after you log in to the client application. The section entitled “Dynamo Items” within the application creates and deletes items in your DynamoDB table (created earlier with CloudFormation) via API Gateway and Lambda, authenticated with Amazon Cognito user pools.

Deploying the web application to Amazon S3

In the following commands, you use the S3 WebsiteBucketName output value from the ember-serverless-hosting CloudFormation stack as the sync target.

Note: you can retrieve the output values again by running:
aws cloudformation describe-stacks --stack-name ember-serverless-hosting

cd client
ember build
aws s3 sync dist/ s3://<WebsiteBucketName>/ --acl public-read

Your website should now be available at the WebsiteURL output value from the ember-serverless-hosting CloudFormation stack’s outputs.

Questions or comments? Reach out to us on the AWS Development Forum.

Source code and issues on GitHub: https://github.com/awslabs/aws-serverless-ember.

Super-Charge Your AWS Command-Line Experience with aws-shell

by Peter Moon
When we first started developing the AWS Command Line Interface (CLI) nearly three years ago, we had to figure out how to deliver a consistent command-line experience to the ever-expanding surface area of AWS APIs. We decided to auto-generate commands and options from the underlying models that describe AWS APIs. This strategy has enabled us to deliver timely support for new AWS services and API updates with a consistent style of commands.

We believe consistency and timeliness help our customers be productive on AWS. We also understand it is difficult to get familiar with and effectively use the thousands of commands and options available in the CLI. We are always looking for solutions to make the CLI as easy as possible to learn and use.

This search for better usability led us to create aws-shell, which we’re making public today at https://github.com/awslabs/aws-shell. The project, along with our plans to collaborate with Donne Martin, the author of SAWS, was announced during this year’s re:Invent talk, Automating AWS with the AWS CLI. Donne has made amazing progress in providing a great command line UI for AWS CLI users. It’s been a pleasant surprise and a validation of our feature ideas to see a community project take off and gain so much popularity in a short amount of time. We are excited and grateful to join forces with Donne and bring him on board as one of the maintainers of the aws-shell project.

Now let’s take a look at some of the key features available in aws-shell:

  • Interactive, fuzzy auto-completion of commands and options
  • Dynamic in-line documentation of commands and options
  • Auto-completion of resource identifiers (for example, Amazon EC2 instance IDs, Amazon SQS queue URLs, Amazon SNS topic names, and more)
  • Execution of regular shell commands by piping or prefixing them with ‘!’
  • Export of all commands executed in the current session to your text editor (the .edit special command)

Running the .edit command after executing some commands opens them all in your default text editor.
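Getting started takes only a couple of commands; aws-shell is distributed on PyPI (a quick sketch, assuming Python and pip are already installed):

# Install aws-shell from PyPI and launch it
$ pip install aws-shell
$ aws-shell
# Inside the shell, type CLI commands without the leading "aws",
# prefix regular shell commands with "!", and use the .edit special
# command to open everything you've run in your default text editor.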

We’re excited to get aws-shell into your hands, and we look forward to your feedback. As always, you can find us in GitHub. Please share any questions and issues you have!

Contributing Topics and Examples to the AWS CLI

by Kyle Knapp

Whether it’s a quickstart for using a service, a tricky gotcha, or a neat application, have you ever wanted to share information with other users of the AWS CLI? The good news is, you can.

In a post that I wrote a few months ago, I introduced the AWS CLI Topic Guide and described how to search for and access the contents of topics. In this post, I will discuss how you can contribute your own topics to the AWS CLI Topic Guide, as well as examples for CLI commands.

How do I write a topic?

If you want to contribute to the CLI, submit a pull request to our GitHub repository.

The topics in the AWS CLI Topic Guide are maintained in the awscli/topics directory of the CLI code base. They are written in reStructuredText. Let’s walk through the steps for adding a topic to the AWS CLI Topic Guide.

Setting up a development environment

If you have not cloned the CLI repository and set up a development environment, follow these steps.

First, clone the AWS CLI git repository:

~$ git clone git@github.com:aws/aws-cli.git

Then use pip to install the CLI:

~$ cd aws-cli
~/aws-cli$ pip install -e .

You are now ready to start contributing to the CLI.

Step 1: Create a file in the topics directory

Navigate from the root directory of the CLI’s cloned git repository to the topics directory:

~/aws-cli$ cd awscli/topics

Use a text editor to create a new file:

~/aws-cli/topics$ vim my-sample-topic.rst

The reStructuredText file, my-sample-topic.rst, will show up in the output of the aws help topics command as my-sample-topic. To access the topic’s contents, a user should run the command aws help my-sample-topic.

Step 2: Add the appropriate fields to the topic

You will need to add some metadata, in the form of reStructuredText fields, to the beginning of the file. These fields play an important role in the display and organization of each topic.

The currently supported fields are:

  • title: Specifies a title for the topic. Its value will be displayed as the title whenever the content of the topic is displayed through the command aws help my-sample-topic.
  • description: Specifies a sentence description for the topic. Its value will be displayed when listing all of the available topics through the command aws help topics.
  • category: Specifies the category to which a topic belongs. A topic can belong to only one category. The topic will be listed under the specified category when viewing the available topics through the command aws help topics.

Here is an example of what the list of fields would look like:

:title: My Sample Topic
:description: This describes my sample topic
:category: General

Step 3: Add the content

After these fields have been added to the top of the file, you can now add some reStructuredText content.

:title: My Sample Topic
:description: This describes my sample topic
:category: General

Here is a summary of my topic.


My Section
==========
Here is some more content.


My Subsection
-------------
Here is even more content.

The content and the structure of the content I added are arbitrary. As long as it is valid reStructuredText, decide for yourself what you want to add and how you want to structure it.

Step 4: Regenerate the topic index

After you have written the content, regenerate the topic index so that your topic will be visible in the aws help topics command and available through the aws help my-sample-topic command. This step is straightforward: all you need to do is run the make-topic-index script:

~/aws-cli/topics$ cd ~/aws-cli/scripts
~/aws-cli/scripts$ ./make-topic-index

This script will run through all of the topic files in awscli/topics and regenerate an index that allows for fast lookups of topics in the CLI. You can use the aws help topics command to check that your topic is available. (The following shows only the AVAILABLE TOPICS section of the output.)

$ aws help topics

AVAILABLE TOPICS
   General
       o config-vars: Configuration Variables for the AWS CLI

       o my-sample-topic: This describes my sample topic

       o return-codes: Describes the various return codes of the AWS CLI

   S3
       o s3-config: Advanced configuration for AWS S3 Commands

And the content of the topic is available through the aws help my-sample-topic command:

$ aws help my-sample-topic


NAME
       My Sample Topic -

       Here is a summary of my topic.

MY SECTION
       Here is some more content.

   My Subsection
       Here is even more content.

How do I write an example?

By example, I mean the EXAMPLES section that appears in the output when you run the help command for a CLI command. For example, take the help output for aws ec2 describe-instances:

$ aws ec2 describe-instances help

...continued...
EXAMPLES
       To describe an Amazon EC2 instance

       Command:

          aws ec2 describe-instances --instance-ids i-5203422c

       To describe all instances with the instance type m1.small

       Command:

          aws ec2 describe-instances --filters "Name=instance-type,Values=m1.small"

       To describe all instances with an Owner tag

       Command:

          aws ec2 describe-instances --filters "Name=tag-key,Values=Owner"

       To describe all instances with a Purpose=test tag

       Command:

          aws ec2 describe-instances --filters "Name=tag:Purpose,Values=test"
...continued...

All of these examples are written in reStructuredText and are located in the examples directory of the CLI codebase.

Writing an example for a command requires even fewer steps than writing a topic. In this walkthrough, we will add an example for aws iot list-things.

If you have not already done so, make sure you have set up your development environment as described earlier in this post.

Step 1: Create a file in the examples directory

Navigate from the root directory of the CLI’s cloned git repository to the examples directory:

~/aws-cli$ cd awscli/examples

This directory contains a subdirectory for each service’s commands in the CLI. For this walkthrough, you need to navigate to the iot directory (or create it if it does not exist):

~/aws-cli/examples$ cd iot

Use a text editor to create a new file:

~/aws-cli/examples/iot$ vim list-things.rst

In order for the example to be picked up in the aws iot list-things help command, the name of the file must match the name of the command.

Step 2: Add the content

Now just add reStructuredText content to this newly created file:

The following command lists all AWS IoT things::

    $ aws iot list-things

Output::

    {
        "things": []
    }

To confirm the example was added to the help command:

$ aws iot list-things help

...continued...
EXAMPLES
       The following command lists all AWS IoT things:

          $ aws iot list-things

       Output:

          {
              "things": []
          }
...continued...

Conclusion

After you have created your content, submit a pull request to our GitHub repository so that your knowledge can be shared with other CLI users.

Follow us on Twitter @AWSCLI and let us know what you’d like to read about next! Stay tuned for our next post, and contribute some topics and examples to the CLI today.

 

AWS re:Invent 2015 and more

by James Saryerwinnie

re:Invent has come and gone, and once again, we all had a blast. It’s always
great to meet with customers and discuss how they’re using the AWS SDKs and
CLIs. Keep the feedback coming.

At this year’s AWS CLI session at re:Invent, I had the opportunity to
address one of the topics that we previously hadn’t talked about much,
which is using the AWS CLI as a toolkit to create
shell scripts. As a CLI user, you may initially start off running a few
commands interactively in your terminal, but eventually you may want to combine
several AWS CLI commands to create some higher level of abstraction that’s
meaningful to you. This could include:

  • Combining a sequence of commands together: perform operation A, then B, then C.
  • Taking the output of one command and using it as input to another command.
  • Modifying the output to work well with other text processing tools such as
    grep, sed, and awk.

This talk discussed tips, techniques, and tools you can use to help you write shell scripts with the AWS CLI.

The video for the session is now available online. The slides are also
available.

During the talk, I also mentioned that all the code I was showing would be
available on GitHub. You can check out those code samples, along with some I
did not have time to go over, at the awscli-reinvent2015-samples repo.

In this post, and the next couple of posts, I’ll be digging into this
topic of shell scripting in more depth and covering things that I did
not have time to discuss in my re:Invent session.

Resource exists

I’d like to go over one of the examples that I didn’t have time to demo during
the re:Invent talk. During the breakout session, I showed how you can launch an
Amazon EC2 instance, wait until it’s running, and then automatically SSH to
the instance.

To do this, I made some assumptions that simplified the script,
particularly:

  • You have imported your id_rsa SSH key
  • You have a security group tagged with “dev-ec2-instance=linux”
  • You have an instance profile called “dev-ec2-instance”

What if these assumptions aren’t true?

Let’s work on a script called setup-dev-ec2-instance that checks for these
resources, and creates them for you, if necessary.

We’re going to use the resource exists pattern I talked about in this
part of the session. In pseudocode, here’s what this script will do:

if security group tagged with dev-ec2-instance=linux does not exist:
  show a list of security groups and ask which one to tag

if keypair does not exist in ec2
  offer to import the local ~/.ssh/id_rsa key for the user

if instance profile named "dev-ec2-instance" does not exist
  offer to create instance profile for user

Here’s what this script looks like the first time it’s run:

$ ./setup-dev-ec2-instance
Checking for required resources...

Security group not found.

default  sg-87190abc  None
ssh      sg-616a6abc  None
sshNAT   sg-91befabc  vpc-9ecd7abc
default  sg-f987fabc  vpc-17a20abc
default  sg-16befabc  vpc-9ecd7abc

Enter the security group ID to tag: sg-616a6abc
Tagging security group

~/.ssh/id_rsa key pair does not appear to be imported.
Would you like to import ~/.ssh/id_rsa.pub? [y/N]: y
{
    "KeyName": "id_rsa",
    "KeyFingerprint": "bb:3e:a9:82:50:32:6f:45:8a:f8:d4:24:0e:aa:aa:aa"
}

Missing IAM instance profile 'dev-ec2-instance'
Would you like to create an IAM instance profile? [y/N]: y
{
    "Role": {
        "AssumeRolePolicyDocument": {
             ...
        },
        "RoleId": "...",
        "CreateDate": "2015-10-23T21:24:57.160Z",
        "RoleName": "dev-ec2-instance",
        "Path": "/",
        "Arn": "arn:aws:iam::12345:role/dev-ec2-instance"
    }
}
{
    "InstanceProfile": {
        "InstanceProfileId": "...",
        "Roles": [],
        "CreateDate": "2015-10-23T21:25:02.077Z",
        "InstanceProfileName": "dev-ec2-instance",
        "Path": "/",
        "Arn": "arn:aws:iam::12345:instance-profile/dev-ec2-instance"
    }
}

Here’s what this script looks like if all of your resources are configured:

$ ./setup-dev-ec2-instance
Checking for required resources...

Security groups exists.
Key pair exists.
Instance profile exists.

The full script is available on GitHub, but I’d like to highlight the use of
the resource_exists function, which is used for each of the three
resources:

echo "Checking for required resources..."
echo ""
# 1. Check if a security group is found.
if resource_exists "aws ec2 describe-security-groups 
  --filter Name=tag:dev-ec2-instance,Values=linux"; then
  echo "Security groups exists."
else
  echo "Security group not found."
  tag_security_group
fi

# 2. Make sure the keypair is imported.
if [ ! -f ~/.ssh/id_rsa ]; then
  echo "Missing ~/.ssh/id_rsa key pair."
elif has_new_enough_openssl; then
  fingerprint=$(compute_key_fingerprint ~/.ssh/id_rsa)
  if resource_exists "aws ec2 describe-key-pairs 
    --filter Name=fingerprint,Values=$fingerprint"; then
    echo "Key pair exists."
  else
    echo "~/.ssh/id_rsa key pair does not appear to be imported."
    import_key_pair
  fi
else
  echo "Can't check if SSH key has been imported."
  echo "You need at least openssl 1.0.0 that has a "pkey" command."
  echo "Please upgrade your version of openssl."
fi

# 3. Check that they have an IAM role called dev-ec2-instance.
#    We're using a local --query expression here.
if resource_exists "aws iam list-instance-profiles" 
  "InstanceProfiles[?InstanceProfileName=='dev-ec2-instance']"; then
  echo "Instance profile exists."
else
  echo "Missing IAM instance profile 'dev-ec2-instance'"
  create_instance_profile
fi

By using the resource_exists function, the bash script is fairly close to the
original pseudocode. In the third step, we’re using a local --query expression
to check if we have an instance profile with the name dev-ec2-instance.
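
The repo defines resource_exists for you; a minimal sketch of the idea, not the exact implementation, might look like this:

# Sketch of a resource_exists helper (simplified; the real one lives in the
# awscli-reinvent2015-samples repo). $1 is an AWS CLI command and $2 is an
# optional JMESPath expression; the function succeeds when the command
# returns at least one matching result.
resource_exists() {
  local command="$1"
  local query="${2:-*[0]}"
  local count
  count=$($command --query "length($query)" --output text 2>/dev/null)
  [[ -n "$count" && "$count" != "0" && "$count" != "None" ]]
}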

Creating an instance profile

In the preceding script, each of the three conditionals has a clause where we
offer to create the resource that does not exist. They all follow a similar
pattern, so we’ll take a look at just one for creating an instance profile.

The create_instance_profile function has this high-level logic:

  • Ask the user if they’d like us to create an instance profile.
  • If they don’t say yes, then exit.
  • First, create an IAM role with a trust policy that allows the
    ec2.amazonaws.com service to assume the role.
  • Next, find the ARN for the “AdministratorAccess” managed policy.
  • Attach the role policy to the role we’ve created.
  • Create an instance profile with the same name as the role.
  • Finally, add the role to the instance profile we’ve created.

Here’s the code for that function:

create_instance_profile() {
  echo -n "Would you like to create an IAM instance profile? [y/N]: "
  read confirmation
  if [[ "$confirmation" != "y" ]]; then
    return
  fi
  aws iam create-role --role-name dev-ec2-instance \
    --assume-role-policy-document "$TRUST_POLICY" || errexit "Could not create Role"

  # Use a managed policy
  admin_policy_arn=$(aws iam list-policies --scope AWS \
      --query "Policies[?PolicyName=='AdministratorAccess'].Arn | [0]" \
      --output text | head -n 1)
  aws iam attach-role-policy \
    --role-name dev-ec2-instance \
    --policy-arn "$admin_policy_arn" || errexit "Could not attach role policy"

  # Then we need to create an instance profile from the role.
  aws iam create-instance-profile \
    --instance-profile-name dev-ec2-instance || \
    errexit "Could not create instance profile."
  # And add it to the role
  aws iam add-role-to-instance-profile \
    --role-name dev-ec2-instance \
    --instance-profile-name dev-ec2-instance || \
    errexit "Could not add role to instance profile."
}

In this function, we’re using --output text combined with --query
to retrieve the admin_policy_arn as a string that we can pass to
the subsequent attach-role-policy command. Using --output text
along with --query is one of the most powerful patterns you can use
when shell scripting with the AWS CLI.

See you at re:Invent 2016

I hope everyone enjoyed re:Invent 2015. We look forward to seeing you again in
2016!

The AWS CLI Topic Guide

by Kyle Knapp

Hi everyone! This blog post is about the AWS CLI Topic Guide, a feature that was added in version 1.7.24 of the CLI. The AWS CLI Topic Guide allows users to discover and read information about a CLI feature or its behavior at a level of detail not found in the Help page of a single command.

Discovering Topics

Run the following command to discover the topics available:

$ aws help topics

A Help page with a list of available topics will be displayed. Here is an example list:

AVAILABLE TOPICS
   General
       o config-vars: Configuration Variables for the AWS CLI

       o return-codes: Describes the various return codes of the AWS CLI

   S3
       o s3-config: Advanced configuration for AWS S3 Commands

In this case, the returned topics (config-vars, return-codes, and s3-config) fall into two categories: General and S3. Each topic belongs to a single category only, so you will never see repeated topics in the list.

Accessing Topics

Run the following command to access a topic’s contents:

$ aws help topicname

where topicname is the name of a topic listed in the output of the aws help topics command. For example, if you wanted to access the return-codes topic to learn more about the various return codes in the CLI, all you would have to type is:

$ aws help return-codes

This displays a Help page that describes the various return codes you might receive when running a CLI command, and the scenarios that produce particular return codes.

The AWS CLI Topic Guide is also available online.

Conclusion

The AWS CLI Topic Guide is a great source of information about the CLI. If you have topics you would like us to add, submit a request through our GitHub repository.

Follow us on Twitter @AWSCLI and let us know what you’d like to read about next! Stay tuned for our next post.

 

Best Practices for Local File Parameters

by Kyle Knapp

If you have ever passed the contents of a file to a parameter of the AWS CLI, you most likely did so using the file:// notation. By setting a parameter’s value as the file’s path prepended by file://, you can explicitly pass in the contents of a local file as input to a command:

aws service command --parameter file://path_to_file

The value passed to --parameter is the contents of the file, read as text. This means that as the contents of the file are read, the file’s bytes are decoded using the system’s configured encoding. Then, as the request is serialized, the contents are encoded and sent over the wire to the service.

You may be wondering why the CLI does not just send the straight bytes of the file to the service without decoding and encoding the contents. The bytes of the file must be decoded and then encoded because your system’s encoding may differ from the encoding the service expects. Ultimately, the use of file:// grants you the convenience of using files written in your preferred encoding when using the CLI.

In versions 1.6.3 and higher of the CLI, you have access to another way to pass the contents of a file to the CLI: fileb://. It works similarly to file://, but instead of reading the contents of the file as text, it reads them as binary:

aws service command --parameter fileb://path_to_file

When the file is read as binary, the file’s bytes are not decoded as they are read in. This allows you to pass binary files, which have no encoding, as input to a command.

In this post, I am going to go into detail about some cases of when to use file:// over fileb:// and vice versa.

Use Cases Involving Text Files

Here are a couple of the more popular cases for using file:// to read a file as text.

Parameter value is a long text body

One of the most common use cases for file:// is when the input is a long text body. For example, if I had a shell script named myshellscript that I wanted to run when I launch an Amazon EC2 instance, I could pass the shell script in when I launch my instance from the CLI:

$ aws ec2 run-instances --image-id ami-b66ed3de \
    --instance-type m3.medium \
    --key-name mykey \
    --security-groups my-security-group \
    --user-data file://myshellscript

This command will take the contents of myshellscript and pass it to the instance as user data such that once the instance starts running, it will run my shell script. You can read more about the different ways to provide user data in the Amazon EC2 User Guide.

Parameter requires JSON input

Oftentimes parameters require a JSON structure as input, and sometimes this JSON structure can be large. For example, let’s look at launching an EC2 instance with an additional Amazon EBS volume attached using the CLI:

$ aws ec2 run-instances --image-id ami-b66ed3de \
    --instance-type m3.medium \
    --key-name mykey \
    --security-groups my-security-group \
    --block-device-mappings '[{"DeviceName":"/dev/sdf","Ebs":{"VolumeSize":20,"DeleteOnTermination":false,"VolumeType":"standard"}}]'

Notice that the --block-device-mappings parameter requires JSON input, which can be somewhat lengthy on the command line. So, it would be convenient if you could specify the JSON input in a format that is easier to read and edit, such as in the form of a text file:

[
  {
    "DeviceName": "/dev/sdf",
    "Ebs": {
      "VolumeSize": 20,
      "DeleteOnTermination": false,
      "VolumeType": "standard"
    }
  }
]

By writing the JSON to a text file, it becomes easier to determine if the JSON is formatted correctly, and you can work with it in your favorite text editor. If the JSON above is written to some local file named myinput.json, you can run the same command as before using the myinput.json file as input to the --block-device-mappings parameter:

$ aws ec2 run-instances --image-id ami-b66ed3de \
    --instance-type m3.medium \
    --key-name mykey \
    --security-groups my-security-group \
    --block-device-mappings file://myinput.json

This becomes especially useful if you plan to reuse the myinput.json file for future ec2 run-instances commands, since you will not have to retype the entire JSON input.

Use Cases Involving Binary Files

For most cases, file:// will satisfy your use case for passing the contents of a file as input. However, there are some cases where fileb:// must be used to pass the contents of the file in as binary as opposed to as text. Here are a couple of examples.

AWS Key Management Service (KMS) decryption

KMS is an AWS service that makes it easy for you to create and control the encryption keys used to encrypt your data. You can read more about KMS in the AWS Key Management Service Developer Guide. One service that KMS provides is the ability to encrypt and decrypt data using your KMS keys. This is really useful if you want to encrypt arbitrary data such as a password or RSA key. Here is how you can use KMS to encrypt data using the CLI:

$ aws kms encrypt --key-id my-key-id --plaintext mypassword \
   --query CiphertextBlob --output text

CiAxWxaLB2LyTobc/ppFeNcSLW/abxdFuvBdD3IBtHBTYBKRAQEBAgB4MVsWiwdi8k6G3P6aRX
jXEi1v2m8XRbrwXQ9yAbRwU2AAAABoMGYGCSqGSIb3DQEHBqBZMFcCAQAwUgYJKoZIhvcNAQcBM
B4GCWCGSAFlAwQBLjARBAyE/taUnrxXzSqa1+8CARCAJSi8/E819toVhfxm2A+T9mFdOfnjGuJI
zGynaCB3FsPXnrwl7vQ=

This command uses the KMS key my-key-id to encrypt the data mypassword. However, in order for the CLI to properly display content, the encrypted data output from this command is base64 encoded. So by base64-decoding the output, you can store the data as a binary file:

$ aws kms encrypt --key-id my-key-id --plaintext mypassword \
   --query CiphertextBlob \
   --output text | base64 --decode > my-encrypted-password

Then if I want to decrypt the data in my file, I can use KMS to decrypt my encrypted binary:

$ echo "$(aws kms decrypt  --ciphertext-blob fileb://my-encrypted-file 
   --query Plaintext --output text | base64 --decode)"
mypassword

Since the file is binary, I use fileb:// as opposed to file:// to read in the contents of the file. If I were to read the file in as text via file://, the CLI would try to decode the binary file using my system’s configured encoding. However, since the binary file has no encoding, decoding errors would be thrown:

$ echo "$(aws kms decrypt  --ciphertext-blob file://my-encrypted-file 
   --query Plaintext --output text | base64 --decode)"

'utf8' codec can't decode byte 0x8b in position 5: invalid start byte

EC2 User Data

Looking back at the EC2 user data example from the “Parameter value is a long text body” section, file:// was used to pass the shell script as text to --user-data. However, in some cases, the value passed to --user-data is a binary file.

One limitation of passing user data when launching an EC2 instance is that the user data is limited to 16 KB. Fortunately, there is a way to help avoid reaching this limit. By utilizing the cloud-init package on EC2 instances, you can gzip-compress your cloud-init directives because the cloud-init package will decompress the user data for you when the instance is being launched:

$ aws ec2 run-instances --image-id ami-b66ed3de \
    --instance-type m3.medium \
    --key-name mykey \
    --security-groups my-security-group \
    --user-data fileb://mycloudinit.gz
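
For reference, the compressed file used above could be produced from a plain-text cloud-init directive with something like the following (a sketch; mycloudinit.txt is a hypothetical source file):

# gzip-compress the cloud-init directives; cloud-init decompresses them at launch
$ gzip -c mycloudinit.txt > mycloudinit.gz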

By gzip-compressing the file, the cloud-init directive becomes a binary file. Consequently, the gzip-compressed file must be passed to --user-data using fileb:// so that the contents of the file are read as binary.

Conclusion

I hope that my examples and explanations helped you better understand the various use cases for file:// and fileb://. Here’s a quick way to remember which file parameter to use: when the content of the file is human-readable text, use file://; when the content is binary and not human readable, use fileb://.

You can follow us on Twitter @AWSCLI and let us know what you’d like to read about next! If you have any questions about the CLI, please get in contact with us at the Amazon Web Services Discussion Forums. If you have any feature requests or run into any issues using the CLI, don’t be afraid to communicate with us via our GitHub repository.

Stay tuned for our next blog post, and have a Happy New Year!

 

AWS re:Invent 2014 Recap

by James Saryerwinnie

This year at re:Invent we had a great time meeting customers and discussing their usage of the AWS CLI. We hope everyone had a blast!

I had the opportunity to present a talk titled “Advanced Usage of the AWS CLI.” In this talk, I discussed some advanced features of the AWS CLI and how you can leverage these features to become more proficient at using the CLI. Some of these features were brand new.

In the talk, I presented six topics:

  • aws configure subcommands
  • Using JMESPath via the --query command line argument
  • Waiters
  • Input JSON templates
  • The new AssumeRole credential provider, with and without MFA
  • Amazon S3 stdout/stdin streaming

Both the slides as well as the video of the talk are online, and you can check them out if you weren’t able to attend.

In the next few posts, we’ll explore some of these six topics in more depth, and in this post, we’ll explore waiters.

Waiters

One of the examples I showed in the talk was how to use the new waiters feature of the CLI to block until an AWS resource reaches a specific state. I gave an example of how you can use the aws ec2 wait command to block until an Amazon EC2 instance reaches a running state. I’d like to explore this topic and give you an additional example of how you can leverage waiters in the CLI when creating an Amazon DynamoDB table.

When you first create a DynamoDB table, the table enters the CREATING state. You can use the aws dynamodb wait table-exists command to block until the table is available.

The first thing we need to do is create a table:

$ aws dynamodb create-table \
  --table-name waiter-demo \
  --attribute-definitions AttributeName=foo,AttributeType=S \
  --key-schema AttributeName=foo,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5

Now if we immediately try to put an item into this DynamoDB table, we will get a ResourceNotFoundException error:

$ aws dynamodb put-item --table-name waiter-demo \
  --item '{"foo": {"S": "bar"}}'
A client error (ResourceNotFoundException) occurred when calling the PutItem operation: Requested resource not found

In order to avoid this issue, we can use the aws dynamodb wait table-exists command, which will not exit until the table is in the ACTIVE state:

$ aws dynamodb wait table-exists --table-name waiter-demo

Once this command finishes, we can put an item into the DynamoDB table and then verify that this item is now available:

$ aws dynamodb put-item --table-name waiter-demo \
  --item '{"foo": {"S": "bar"}}'
$ aws dynamodb scan --table-name waiter-demo
{
    "Count": 1,
    "Items": [
        {
            "foo": {
                "S": "bar"
            }
        }
    ],
    "ScannedCount": 1,
    "ConsumedCapacity": null
}

If you’re following along, you can cleanup the resource we’ve created by running:

$ aws dynamodb delete-table --table-name waiter-demo

If an AWS service provides wait commands, you’ll see them in the output of aws help. You can also view the docs online. For DynamoDB, you can see all the available waiters, as well as the documentation for the aws dynamodb wait table-exists command.
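
For example, you can list the waiters that the DynamoDB command supports right from the terminal:

$ aws dynamodb wait help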

re:Invent 2015

We hope everyone enjoyed re:Invent 2014, and we look forward to seeing everyone again next year!

 

Leveraging the s3 and s3api Commands

by Kyle Knapp

Have you ever run aws help on the command line or browsed the AWS CLI Reference Documentation and noticed that there are two sets of Amazon S3 commands to choose from: s3 and s3api? If you are completely unfamiliar with either the s3 or s3api commands, you can read about the two commands in the AWS CLI User Guide. In this post, I am going to go into detail about the two different commands and provide a few examples on how to leverage the two sets of commands to your advantage.

s3api

Most of the commands in the AWS CLI are generated from JSON models, which directly model the APIs of the various AWS services. This allows the CLI to generate commands that are a near one-to-one mapping of the service’s API. The s3api commands fall into this category. These commands are entirely driven by the JSON models and closely mirror the API of S3, hence the name s3api. Each command operation, e.g. s3api list-objects or s3api create-bucket, shares a similar operation name, a similar input, and a similar output with the corresponding operation in S3’s API. As a result, this gives you a very granular level of control over the requests you make to S3 using the CLI.

s3

The s3 commands are a custom set of commands specifically designed to make it even easier for you to manage your S3 files using the CLI. The main difference between the s3 and s3api commands is that the s3 commands are not solely driven by the JSON models. Rather, the s3 commands are built on top of the operations found in the s3api commands. As a result, these commands allow for higher-level features that are not provided by the s3api commands. This includes, but is not limited to, the ability to synchronize local directories and S3 buckets, transfer multiple files in parallel, stream files, and automatically handle multipart transfers. In short, these commands further simplify and speed up transferring files to, within, and from S3.

s3 and s3api Examples

Both sets of S3 commands have a lot to offer. With this wide array of commands to choose from, it is important to be able to identify what commands you need for your specific use case. For example, if you want to upload a set of files on your local machine to your S3 bucket, you would probably want to use the s3 commands via the cp or sync command operations. On the other hand, if you wanted to set a bucket policy, you would use the s3api commands via the put-bucket-policy command operation.

However, your choice of S3 commands should not be limited to strictly deciding whether you need to use the s3 commands or the s3api commands. Sometimes you can use both sets of commands in conjunction to satisfy your use case. Often this proves to be even more powerful, because you are able to leverage the low-level granular control of the s3api commands together with the higher-level simplicity and speed of the s3 commands. Here are a few examples of how you can work with both sets of S3 commands for your specific use case.

Bucket Regions

When you create an S3 bucket, the bucket is created in a specific region. Knowing the region that your bucket is in is essential for a variety of use cases, such as transferring files across buckets located in different regions and making requests that require Signature Version 4 signing. However, you may not know or remember where your bucket is located. Fortunately, by using the s3api commands, you can determine your bucket’s region.

For example, if I make a bucket located in the Frankfurt region using the s3 commands:

$ aws s3 mb s3://myeucentral1bucket --region eu-central-1
make_bucket: s3://myeucentral1bucket/

I can then use s3api get-bucket-location to determine the region of my newly created bucket:

$ aws s3api get-bucket-location --bucket myeucentral1bucket
{
    "LocationConstraint": "eu-central-1"
}

As shown above, the value of the LocationConstraint member in the output JSON is the expected region of the bucket, eu-central-1. Note that for buckets created in the US Standard region, us-east-1, the value of LocationConstraint will be null. As a quick reference to how location constraints correspond to regions, refer to the AWS Regions and Endpoints Guide.

Once you have learned the region of your bucket, you can pass the region in using the --region parameter, set it in your config file, set it in a profile, or set it using the AWS_DEFAULT_REGION environment variable. You can read more about how to set a region in the AWS CLI User Guide. This allows you to select your region when you are making subsequent requests to your bucket via the s3 and s3api commands.
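
For example, any of the following would point subsequent commands at the Frankfurt region (a quick sketch; choose whichever mechanism fits your workflow):

# Per command
$ aws s3 ls s3://myeucentral1bucket --region eu-central-1
# For the current shell session
$ export AWS_DEFAULT_REGION=eu-central-1
# Persisted in your config file for the default profile
$ aws configure set region eu-central-1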

Deleting a Set of Buckets

For this example, suppose that I have a lot of buckets that I was using for testing and they are no longer needed. But, I have other buckets, too, and they need to stick around:

$ aws s3 ls
2014-12-02 13:36:17 awsclitest-123
2014-12-02 13:36:24 awsclitest-234
2014-12-02 13:36:51 awsclitest-345
2014-11-21 16:47:14 mybucketfoo

The buckets beginning with awsclitest- are test buckets that I want to get rid of. An obvious way would be to just delete each bucket using aws s3 rb one at a time. This becomes tedious, though, if I have a lot of these test buckets or if the test bucket names are longer and more complicated. I am going to go step by step through how you can build a single command that deletes all of the buckets that begin with awsclitest-.

Instead of using the s3 ls command to list my buckets, I am going to use the s3api list-buckets command to list them:

$ aws s3api list-buckets
{
    "Owner": {
        "DisplayName": "mydisplayname",
        "ID": "myid"
    },
    "Buckets": [
        {
            "CreationDate": "2014-12-02T21:36:17.000Z",
            "Name": "awsclitest-123"
        },
        {
            "CreationDate": "2014-12-02T21:36:24.000Z",
            "Name": "awsclitest-234"
        },
        {
            "CreationDate": "2014-12-02T21:36:51.000Z",
            "Name": "awsclitest-345"
        },
        {
            "CreationDate": "2014-11-22T00:47:14.000Z",
            "Name": "mybucketfoo"
        }
    ]
}

At first glance, it does not make much sense to use the s3api list-buckets over the s3 ls because all of the bucket names are embedded in the JSON output of the command. However, we can take advantage of the command’s --query parameter to perform JMESPath queries for specific members and values in the JSON output:

$ aws s3api list-buckets \
   --query 'Buckets[?starts_with(Name, `awsclitest-`) == `true`].Name'
[
    "awsclitest-123",
    "awsclitest-234",
    "awsclitest-345"
]

If you are unfamiliar with the --query parameter, you can read about it in the AWS CLI User Guide. For this specific query, I am asking for the names of all of the buckets that begin with awsclitest-. However, the output is still a little difficult to parse if we hope to use that as input to the s3 rb command. To make the names easier to parse out, we can modify our query slightly and specify text for the --output parameter:

$ aws s3api list-buckets \
   --query 'Buckets[?starts_with(Name, `awsclitest-`) == `true`].[Name]' \
   --output text
awsclitest-123
awsclitest-234
awsclitest-345

With this output, we can now use it as input to perform a forced bucket delete on all of the buckets whose name starts with awsclitest-:

$ aws s3api list-buckets \
   --query 'Buckets[?starts_with(Name, `awsclitest-`) == `true`].[Name]' \
   --output text | xargs -I {} aws s3 rb s3://{} --force
delete: s3://awsclitest-123/test
remove_bucket: s3://awsclitest-123/
delete: s3://awsclitest-234/test
remove_bucket: s3://awsclitest-234/
delete: s3://awsclitest-345/test
remove_bucket: s3://awsclitest-345/

As shown in the output, all of the desired buckets, along with any files inside of them, were deleted. To ensure that it worked, I can then list all of my buckets:

$ aws s3 ls
2014-11-21 16:47:14 mybucketfoo

Aggregating S3 Server Access Logs

In this final example, I will show you how you can use the s3 and s3api commands together to aggregate your S3 server access logs. These logs are used to track requests for access to your S3 bucket. If you are unfamiliar with server access logs, you can read about them in the Amazon S3 Developer Guide.

Server access logs follow the naming convention TargetPrefixYYYY-mm-DD-HH-MM-SS-UniqueString, where YYYY, mm, DD, HH, MM, and SS are the digits of the year, month, day, hour, minute, and second, respectively, of when the log file was delivered. However, the number of logs delivered for a specific period of time, and what ends up inside a specific log file, is somewhat unpredictable. As a result, it would be convenient to aggregate all of the logs for a specific period of time into one file in an S3 bucket.

For this example, I am going to aggregate all of the logs that were delivered on October 31, 2014 from 11 a.m. to 12 p.m. to the file 2014-10-31-11.log in my bucket. To begin, I will use s3api list-objects to list all of the objects in my bucket beginning with logs/2014-10-31-11:

$ aws s3api list-objects --bucket myclilogs --output text \
   --prefix logs/2014-10-31-11 --query Contents[].[Key]
logs/2014-10-31-11-19-03-D7E3D44429C236C9
logs/2014-10-31-11-19-05-9FCEDD1393C9319F
logs/2014-10-31-11-19-26-01DE8498F22E8EB6
logs/2014-10-31-11-20-03-1B26CD31AE5BFEEF
logs/2014-10-31-11-21-34-757D6904963C22A6
logs/2014-10-31-11-21-35-27B909408B88017B
logs/2014-10-31-11-21-50-1967E793B8865384

.......  Continuing to the end ...........

logs/2014-10-31-11-42-44-F8AD38626A24E288
logs/2014-10-31-11-43-47-160D794F4D713F24

Using both the --query and --output parameters, I was able to list the logs in a format that could easily be used as input for the s3 commands. Now that I have identified all of the logs that I want to aggregate, I am going to take advantage of the s3 cp streaming capability to actually aggregate the logs.

When using s3 cp to stream, you have two options: upload a stream from standard input to an S3 object, or download an S3 object as a stream to standard output. You do so by specifying - as the first path parameter to the cp command if you want to upload a stream, or by specifying - as the second path parameter if you want to download an object as a stream. For my use case, I am going to stream in both directions:

$ aws s3api list-objects --bucket myclilogs \
   --output text --prefix logs/2014-10-31-11 \
   --query Contents[].[Key] |
   xargs -I {} aws s3 cp s3://myclilogs/{} - |
   aws s3 cp - s3://myclilogs/aggregatedlogs/2014-10-31-11.log

The workflow for this command is as follows. First, I stream each desired log one by one to standard output. Then I pipe the stream from standard output to standard input and upload the stream to the desired location in my bucket.

If you want to speed up this process, you can use the GNU parallel shell tool to run each of the s3 cp commands that download a log as a stream in parallel with one another:

$ aws s3api list-objects --bucket myclilogs \
   --output text --prefix logs/2014-10-31-11 \
   --query Contents[].[Key] |
   parallel -j5 aws s3 cp s3://myclilogs/{} - |
   aws s3 cp - s3://myclilogs/aggregatedlogs/2014-10-31-11.log
   aws s3 cp - s3://myclilogs/aggregatedlogs/2014-10-31-11.log

By specifying the -j5 parameter in the command above, I am assigning each s3 cp streaming download command to one of five jobs that run those commands in parallel. Also, note that the GNU parallel tool may not be installed on your machine by default; you can install it with a package manager such as Homebrew or apt-get.
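
For example (a sketch, depending on your platform):

# macOS with Homebrew
$ brew install parallel
# Debian/Ubuntu
$ sudo apt-get install parallel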

Once the command finishes, I can then verify that my aggregated log exists:

$ aws s3 ls s3://myclilogs/aggregatedlogs/
2014-12-03 10:43:49     269956 2014-10-31-11.log

Conclusion

I hope that the description and examples that I provided will help you further leverage both the s3 and s3api commands to your advantage. However, do not limit yourself to just the examples I provided. Go ahead and try to figure out other ways to utilize the s3 and s3api commands together today!

You can follow us on Twitter @AWSCLI and let us know what you’d like to read about next! If you have any questions about the CLI or any feature requests, do not be afraid to get in touch with us via our GitHub repository.

Stay tuned for our next blog post!

 

Welcome to the AWS CLI Blog

by James Saryerwinnie

Hi everyone! Welcome to the AWS Command Line Interface blog. I’m James Saryerwinnie, and I work on the AWS CLI. This blog will be the place to go for information about the AWS CLI including:

  • Tips and tricks for using the AWS CLI
  • New feature announcements
  • Deep dives into various AWS CLI features
  • Guest posts from various AWS service teams

In the meantime, here are a few links to get you started:

We’re excited to get this blog started, and we hope to see you again real soon. Stay tuned!