AWS Developer Blog

Querying the Public IP Address Ranges for AWS

by Steve Roberts | in .NET

A post on the AWS Official Blog last November noted that the authoritative public IP address ranges used by AWS could now be obtained from a JSON-format file. The same information can now be accessed easily from AWS Tools for Windows PowerShell with a new cmdlet, Get-AWSPublicIpAddressRange, without the need to parse JSON. This cmdlet was added in version 2.3.15.0.

When run with no parameters, the cmdlet outputs all of the address ranges to the pipeline:

PS C:> Get-AWSPublicIpAddressRange

IpPrefix                    Region             Service
--------                    ------             -------
50.19.0.0/16                us-east-1          AMAZON
54.239.98.0/24              us-east-1          AMAZON
...
50.19.0.0/16                us-east-1          EC2
75.101.128.0/17             us-east-1          EC2
...
205.251.192.0/21            GLOBAL             ROUTE53
54.232.40.64/26             sa-east-1          ROUTE53_HEALTHCHECKS
...
54.239.192.0/19             GLOBAL             CLOUDFRONT
204.246.176.0/20            GLOBAL             CLOUDFRONT
...

If you’re comfortable using the pipeline to filter output, this may be all you need, but the cmdlet is also able to filter output using the -ServiceKey and -Region parameters. For example, you can get the address ranges for EC2 across all regions like this (the parameter value is case insensitive):

PS C:> Get-AWSPublicIpAddressRange -ServiceKey ec2

Similarly, you can get the address ranges used by AWS in a given region:

PS C:> Get-AWSPublicIpAddressRange -Region us-west-2

Both of these parameters accept string arrays and can be supplied together. This example shows how to get the address ranges for Amazon EC2 and Amazon Route53 health checks in both US West regions:

PS C:> Get-AWSPublicIpAddressRange -ServiceKey ec2,route53_healthchecks -Region us-west-1,us-west-2

IpPrefix                    Region              Service
--------                    ------              -------
184.72.0.0/18               us-west-1           EC2
54.215.0.0/16               us-west-1           EC2
...
54.214.0.0/16               us-west-2           EC2
54.245.0.0/16               us-west-2           EC2
...
54.241.32.64/26             us-west-1           ROUTE53_HEALTHCHECKS
54.245.168.0/26             us-west-2           ROUTE53_HEALTHCHECKS
54.244.52.192/26            us-west-2           ROUTE53_HEALTHCHECKS
54.183.255.128/26           us-west-1           ROUTE53_HEALTHCHECKS
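
Because the cmdlet writes objects to the pipeline, you can also mix its built-in filters with standard PowerShell cmdlets. Here is a quick sketch (the output file name is just a placeholder) that saves only the us-west-2 EC2 prefixes to a text file:

PS C:> Get-AWSPublicIpAddressRange -ServiceKey ec2 -Region us-west-2 |
    Select-Object -ExpandProperty IpPrefix |
    Set-Content ec2-us-west-2-prefixes.txt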

As noted in the original post, this information can change several times per week. You can find the publication date and time of the current information using the -OutputPublicationDate switch. The returned value here is a DateTime object:

PS C:> Get-AWSPublicIpAddressRange -OutputPublicationDate

Monday, December 15, 2014 4:41:01 PM
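
Because the value is a DateTime, you can compare it directly in a script. For example, this sketch checks whether the current ranges were published within the last seven days:

PS C:> (Get-AWSPublicIpAddressRange -OutputPublicationDate) -gt (Get-Date).AddDays(-7)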

The set of service keys may change over time (see AWS IP Address Ranges for current documentation on this information). The current set of keys in use in the file can be obtained using the -OutputServiceKeys switch:

PS C:> Get-AWSPublicIpAddressRange -OutputServiceKeys

AMAZON
EC2
ROUTE53
ROUTE53_HEALTHCHECKS
CLOUDFRONT

If you’ve read this far and are thinking that this would also be useful for your C#/.NET applications, then you’ll be glad to know it’s also exposed in the AWS SDK for .NET. See the AWSPublicIpAddressRanges class in the Amazon.Util namespace for more details.
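
As a rough sketch of what that usage might look like (the Load method and the property names below reflect my reading of the SDK and are worth checking against the current API reference):

using System;
using System.Linq;
using Amazon.Util;

class ListEc2Ranges
{
    static void Main()
    {
        // Load() is assumed to download and parse the published ip-ranges.json file.
        var ranges = AWSPublicIpAddressRanges.Load();

        // Property names (IpPrefix, Region, Service) are assumed to mirror
        // the fields shown in the PowerShell output above.
        foreach (var range in ranges.AllAddressRanges
            .Where(r => r.Service == "EC2" && r.Region == "us-west-2"))
        {
            Console.WriteLine(range.IpPrefix);
        }
    }
}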

We hope you find this new capability useful in your scripts. If you have ideas for other cmdlets that you would find useful, be sure to leave a comment!

Caching Amazon Cognito Identity IDs

by Norm Johanson | in .NET

Amazon Cognito is a service that you can use to deliver AWS credentials to your mobile and desktop applications without embedding them in your code. A few months ago, we added a credentials provider for Cognito. In version 2.3.14 of the AWS SDK for .NET, we updated the credentials provider to support caching the identity ID that Cognito creates.

Caching IDs is really useful for mobile and desktop applications where you don’t want to require users to authenticate but need to remember the user for each run of the application. For example, if you have a game whose scores you want to store in Amazon S3, you can use the identity ID as the object key in S3. Then, in future runs of the game, you can use the identity ID to get the scores back from S3. To get the current identity ID, call the GetIdentityId method on the credentials provider. You can also use the identity ID in the AWS Identity and Access Management (IAM) role that Cognito is using to restrict access to only the current user’s score. Below is a policy that shows how to use the Cognito identity ID. In the policy, the variable ${cognito-identity.amazonaws.com:sub} is used. When the policy is evaluated, ${cognito-identity.amazonaws.com:sub} is replaced with the current user’s identity ID.

{
    "Version" : "2012-10-17",
    "Statement" : [
        {
            "Sid" : "1",
            "Effect" : "Allow",
            "Action" : [
                "mobileanalytics:PutEvents",
                "cognito-sync:*"
            ],
            "Resource" : "*"
        },
        {
            "Sid" : "2",
            "Effect" : "Allow",
            "Action" : ["s3:PutObject", "s3:GetObject"]
            "Resource" : "arn:aws:s3:::my-game-scores-bucket/scores/${cognito-identity.amazonaws.com:sub}.json"
        }
    ]
}

In the Windows Phone and Windows Store version of the SDK, caching is controlled by the IdentityIdCacheMode property on Amazon.CognitoIdentity.CognitoAWSCredentials. By default, this property is set to LocalSettings, which means the identity ID will be cached local to just the device. Windows.Storage.ApplicationData.Current.LocalSettings is used to cache the identity ID. It can also be set to RoamingSettings, which means the identity ID will be stored in Windows.Storage.ApplicationData.Current.RoamingSettings, and the Windows Runtime will sync data stored in this collection to other devices where the user is logged in. To turn off caching, set IdentityIdCacheMode to None.

To enable caching for the .NET 3.5 and 4.5 versions of the SDK, you need to extend the Amazon.CognitoIdentity.CognitoAWSCredentials class and implement the GetCachedIdentityId, CacheIdentityId, and ClearIdentityCache methods.
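
As a rough sketch of what such a subclass might look like (this assumes the three methods are overridable and uses a plain local file as the cache; the file path and constructor signature are placeholders to adapt to your application):

public class FileCachingCognitoAWSCredentials : Amazon.CognitoIdentity.CognitoAWSCredentials
{
    // Placeholder cache location; a real application would choose a proper per-user path.
    private const string CacheFile = "cognito-identity-id.txt";

    public FileCachingCognitoAWSCredentials(string identityPoolId, Amazon.RegionEndpoint region)
        : base(identityPoolId, region) { }

    public override string GetCachedIdentityId()
    {
        // Return null when nothing has been cached yet so a new identity ID is created.
        return System.IO.File.Exists(CacheFile) ? System.IO.File.ReadAllText(CacheFile) : null;
    }

    public override void CacheIdentityId(string identityId)
    {
        System.IO.File.WriteAllText(CacheFile, identityId);
    }

    public override void ClearIdentityCache()
    {
        if (System.IO.File.Exists(CacheFile))
            System.IO.File.Delete(CacheFile);
    }
}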

Best Practices for Local File Parameters

by Kyle Knapp | in AWS CLI

If you have ever passed the contents of a file to a parameter of the AWS CLI, you most likely did so using the file:// notation. By setting a parameter’s value to a file path prefixed with file://, you can explicitly pass the contents of a local file as input to a command:

aws service command --parameter file://path_to_file

The value passed to --parameter is the contents of the file, read as text. This means that as the contents of the file are read, the file’s bytes are decoded using the system’s set encoding. Then as the request is serialized, the contents are encoded and sent over the wire to the service.

You may be wondering why the CLI does not just send the straight bytes of the file to the service without decoding and encoding the contents. The bytes of the file must be decoded and then encoded because your system’s encoding may differ from the encoding the service expects. Ultimately, the use of file:// grants you the convenience of using files written in your preferred encoding when using the CLI.

In versions 1.6.3 and higher of the CLI, you have access to another way to pass the contents of a file to the CLI: fileb://. It works similarly to file://, but instead of reading the contents of the file as text, it reads them as binary:

aws service command --parameter fileb://path_to_file

When the file is read as binary, the file’s bytes are not decoded as they are read in. This allows you to pass binary files, which have no encoding, as input to a command.

In this post, I am going to go into detail about when to use file:// versus fileb://.

Use Cases Involving Text Files

Here are a couple of the more popular cases for using file:// to read a file as text.

Parameter value is a long text body

One of the most common use cases for file:// is when the input is a long text body. For example, if I had a shell script named myshellscript that I wanted to run when I launch an Amazon EC2 instance, I could pass the shell script in when I launch my instance from the CLI:

$ aws ec2 run-instances --image-id ami-b66ed3de \
    --instance-type m3.medium \
    --key-name mykey \
    --security-groups my-security-group \
    --user-data file://myshellscript

This command will take the contents of myshellscript and pass it to the instance as user data such that once the instance starts running, it will run my shell script. You can read more about the different ways to provide user data in the Amazon EC2 User Guide.

Parameter requires JSON input

Oftentimes parameters require a JSON structure as input, and sometimes this JSON structure can be large. For example, let’s look at launching an EC2 instance with an additional Amazon EBS volume attached using the CLI:

$ aws ec2 run-instances --image-id ami-b66ed3de \
    --instance-type m3.medium \
    --key-name mykey \
    --security-groups my-security-group \
    --block-device-mappings '[{"DeviceName":"/dev/sdf","Ebs":{"VolumeSize":20,"DeleteOnTermination":false,"VolumeType":"standard"}}]'

Notice that the --block-device-mappings parameter requires JSON input, which can be somewhat lengthy on the command line. So, it would be convenient if you could specify the JSON input in a format that is easier to read and edit, such as in the form of a text file:

[
  {
    "DeviceName": "/dev/sdf",
    "Ebs": {
      "VolumeSize": 20,
      "DeleteOnTermination": false,
      "VolumeType": "standard"
    }
  }
]

By writing the JSON to a text file, it becomes easier to determine if the JSON is formatted correctly, and you can work with it in your favorite text editor. If the JSON above is written to some local file named myinput.json, you can run the same command as before using the myinput.json file as input to the --block-device-mappings parameter:

$ aws ec2 run-instances --image-id ami-b66ed3de \
    --instance-type m3.medium \
    --key-name mykey \
    --security-groups my-security-group \
    --block-device-mappings file://myinput.json

This becomes especially useful if you plan to reuse the myinput.json file for future ec2 run-instances commands, since you will not have to retype the entire JSON input.

Use Cases Involving Binary Files

For most cases, file:// will satisfy your use case for passing the contents of a file as input. However, there are some cases where fileb:// must be used to pass the contents of the file in as binary as opposed to as text. Here are a couple of examples.

AWS Key Management Service (KMS) decryption

KMS is an AWS service that makes it easy for you to create and control the encryption keys used to encrypt your data. You can read more about KMS in the AWS Key Management Service Developer Guide. One service that KMS provides is the ability to encrypt and decrypt data using your KMS keys. This is really useful if you want to encrypt arbitrary data such as a password or RSA key. Here is how you can use KMS to encrypt data using the CLI:

$ aws kms encrypt --key-id my-key-id --plaintext mypassword \
    --query CiphertextBlob --output text

CiAxWxaLB2LyTobc/ppFeNcSLW/abxdFuvBdD3IBtHBTYBKRAQEBAgB4MVsWiwdi8k6G3P6aRX
jXEi1v2m8XRbrwXQ9yAbRwU2AAAABoMGYGCSqGSIb3DQEHBqBZMFcCAQAwUgYJKoZIhvcNAQcBM
B4GCWCGSAFlAwQBLjARBAyE/taUnrxXzSqa1+8CARCAJSi8/E819toVhfxm2A+T9mFdOfnjGuJI
zGynaCB3FsPXnrwl7vQ=

This command uses the KMS key my-key-id to encrypt the data mypassword. However, in order for the CLI to properly display content, the encrypted data output from this command is base64 encoded. So by base64-decoding the output, you can store the data as a binary file:

$ aws kms encrypt --key-id my-key-id --plaintext mypassword \
    --query CiphertextBlob \
    --output text | base64 --decode > my-encrypted-password

Then if I want to decrypt the data in my file, I can use KMS to decrypt my encrypted binary:

$ echo "$(aws kms decrypt --ciphertext-blob fileb://my-encrypted-password \
    --query Plaintext --output text | base64 --decode)"
mypassword

Since the file is binary, I use fileb:// as opposed to file:// to read in the contents of the file. If I were to read the file in as text via file://, the CLI would try to decode the binary file using my system’s configured encoding. However, since the binary file has no encoding, decoding errors would be thrown:

$ echo "$(aws kms decrypt --ciphertext-blob file://my-encrypted-password \
    --query Plaintext --output text | base64 --decode)"

'utf8' codec can't decode byte 0x8b in position 5: invalid start byte

EC2 User Data

Looking back at the EC2 user data example from the "Parameter value is a long text body" section, file:// was used to pass the shell script as text to --user-data. However, in some cases, the value passed to --user-data is a binary file.

One limitation of passing user data when launching an EC2 instance is that the user data is limited to 16 KB. Fortunately, there is a way to help avoid reaching this limit. By utilizing the cloud-init package on EC2 instances, you can gzip-compress your cloud-init directives because the cloud-init package will decompress the user data for you when the instance is being launched:

$ aws ec2 run-instances --image-id ami-b66ed3de \
    --instance-type m3.medium \
    --key-name mykey \
    --security-groups my-security-group \
    --user-data fileb://mycloudinit.gz

By gzip-compressing the file, the cloud-init directive becomes a binary file. Consequently, the gzip-compressed file must be passed to --user-data using fileb:// so that the contents of the file are read in as binary.
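
For reference, producing such a file is straightforward; assuming your cloud-init directives live in a text file named mycloudinit, you might run:

$ gzip -c mycloudinit > mycloudinit.gz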

Conclusion

I hope that my examples and explanations helped you better understand the various use cases for file:// and fileb://. Here’s a quick way to remember which file parameter to use: when the content of the file is human-readable text, use file://; when the content is binary and not human readable, use fileb://.

You can follow us on Twitter @AWSCLI and let us know what you’d like to read about next! If you have any questions about the CLI, please get in contact with us at the Amazon Web Services Discussion Forums. If you have any feature requests or run into any issues using the CLI, don’t be afraid to communicate with us via our GitHub repository.

Stay tuned for our next blog post, and have a Happy New Year!

 

Preview the AWS Resource APIs for PHP

by Jeremy Lindblom | in PHP

This year is just about over, but we are too excited to wait until the new year to share with you a feature we are developing for the AWS SDK for PHP. We are calling it the AWS Resource APIs for PHP. This feature is maintained as a separate package, but it acts as an extension to Version 3 of the AWS SDK for PHP.

As you know, the core SDK is composed of service client objects that have methods corresponding 1-to-1 with operations in the service’s API (e.g., Ec2Client::runInstances() method maps to the EC2 service’s RunInstances operation). The resource APIs build upon the SDK to add new types of objects that allow you to interact with the AWS service APIs in a more resource-oriented way. This allows you to use a more expressive syntax when working with AWS services, because you are acting on objects that understand their relationships with other resources and that encapsulate their identifying information.

Resource Objects

Resource objects each represent a single, identifiable AWS resource (e.g., an Amazon S3 bucket or an Amazon SQS queue). They contain information about how to identify the resource and load its data, the actions that can be performed on it, and the other resources to which it is related. Let’s take a look at a few examples showing how to interact with these resource objects.

First, let’s set up the Aws object, which acts as the starting point into the resource APIs.

<?php

require 'vendor/autoload.php';

use Aws\Resource\Aws;

$aws = new Aws([
    'region'  => 'us-west-2',
    'version' => 'latest',
    'profile' => 'your-credential-profile',
]);

(Note: The array of configuration options provided in the preceding example is the same as what you would provide when instantiating the Aws\Sdk object in the core SDK.)

You can access related resources by calling the related resource’s name as a method and passing in its identity.

$bucket = $aws->s3->bucket('your-bucket');
$object = $bucket->object('image/bird.jpg');

Accessing resources this way is evaluated lazily, so the preceding example does not actually make any API calls.

Once you access the data of a resource, an API call will be triggered to "load" the resource and fetch its data. To access a resource object’s data, you can access it like an array.

echo $object['LastModified'];

Performing Actions

You can perform actions on a resource by calling verb-like methods on the object.

// Create a bucket and object.
$bucket = $aws->s3->createBucket([
    'Bucket' => 'my-new-bucket'
]);
$object = $bucket->putObject([
    'Key'  => 'images/image001.jpg',
    'Body' => fopen('/path/to/image.jpg', 'r'),
]);

// Delete the bucket and object.
$object->delete();
$bucket->delete();

Because the resource’s identity is encapsulated within the resource object, you never have to specify it again once the object is created. This way, actions like $object->delete() do not require arguments.

Collections

Some resources have a "has many" type relationship with other resources. For example, an S3 bucket has many S3 objects. The AWS Resource APIs also allow you to work with resource collections.

foreach ($bucket->objects() as $object) {
    echo $object->delete();
}

Using the Resource APIs

We are currently working on providing API documentation for the AWS Resource APIs. Even without documentation, you can programmatically determine what methods are available on a resource object by calling the respondsTo method.

print_r($bucket->respondsTo());
// Array
// (
//     [0] => create
//     [1] => delete
//     [2] => deleteObjects
//     [3] => putObject
//     [4] => multipartUploads
//     [5] => objectVersions
//     [6] => objects
//     [7] => bucketAcl
//     [8] => bucketCors
//     [9] => bucketLifecycle
//     [10] => bucketLogging
//     [11] => bucketPolicy
//     [12] => bucketNotification
//     [13] => bucketRequestPayment
//     [14] => bucketTagging
//     [15] => bucketVersioning
//     [16] => bucketWebsite
//     [17] => object
// )

var_dump($bucket->respondsTo('putObject'));
// bool(true)

Check it Out!

To get started, you can install the AWS Resource APIs for PHP using Composer by requiring the aws/aws-sdk-php-resources package in your project. The source code and README are located in the awslabs/aws-sdk-php-resources repo on GitHub.

The initial preview release of the AWS Resource APIs supports the following services: Amazon EC2, Amazon Glacier, Amazon S3, Amazon SNS, Amazon SQS, AWS CloudFormation, and AWS Identity and Access Management (IAM). We will continue to add support for more APIs over this next year.

We’re eager to hear your feedback about this new feature! Please use the issue tracker to ask questions, provide feedback, or submit any issues or feature requests.

AWS re:Invent 2014 Recap

by James Saryerwinnie | in AWS CLI

This year at re:Invent we had a great time meeting customers and discussing their usage of the AWS CLI. We hope everyone had a blast!

I had the opportunity to present a talk titled “Advanced Usage of the AWS CLI.” In this talk, I discussed some advanced features of the AWS CLI, and how you can leverage these features to make you more proficient at using the CLI. Some of these features were brand new.

In the talk, I presented six topics:

  • aws configure subcommands
  • Using JMESPath via the --query command line argument
  • Waiters
  • Input JSON templates
  • The new AssumeRole credential provider, with and without MFA
  • Amazon S3 stdout/stdin streaming

Both the slides as well as the video of the talk are online, and you can check them out if you weren’t able to attend.

In the next few posts, we’ll explore some of these six topics in more depth, and in this post, we’ll explore waiters.

Waiters

One of the examples I showed in the talk was how to use the new waiters feature of the CLI to block until an AWS resource reaches a specific state. I gave an example of how you can use the aws ec2 wait command to block until an Amazon EC2 instance reaches a running state. I’d like to explore this topic and give you an additional example of how you can leverage waiters in the CLI when creating an Amazon DynamoDB table.
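
For reference, the EC2 wait mentioned above boils down to a single command (the instance ID here is just a placeholder):

$ aws ec2 wait instance-running --instance-ids i-12345678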

When you first create a DynamoDB table, the table enters the CREATING state. You can use the aws dynamodb wait table-exists command to block until the table is available.

The first thing we need to do is create a table:

$ aws dynamodb create-table \
  --table-name waiter-demo \
  --attribute-definitions AttributeName=foo,AttributeType=S \
  --key-schema AttributeName=foo,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5

Now if we immediately try to put an item into this DynamoDB table, we will get a ResourceNotFoundException error:

$ aws dynamodb put-item --table-name waiter-demo \
  --item '{"foo": {"S": "bar"}}'
A client error (ResourceNotFoundException) occurred when calling the PutItem operation: Requested resource not found

In order to avoid this issue, we can use the aws dynamodb wait table-exists command, which will not exit until the table is in the ACTIVE state:

$ aws dynamodb wait table-exists --table-name waiter-demo

Once this command finishes, we can put an item into the DynamoDB table and then verify that this item is now available:

$ aws dynamodb put-item --table-name waiter-demo \
  --item '{"foo": {"S": "bar"}}'
$ aws dynamodb scan  --table-name waiter-demo
{
    "Count": 1,
    "Items": [
        {
            "foo": {
                "S": "bar"
            }
        }
    ],
    "ScannedCount": 1,
    "ConsumedCapacity": null
}

If you’re following along, you can clean up the resource we’ve created by running:

$ aws dynamodb delete-table --table-name waiter-demo

If an AWS service provides wait commands, you’ll see them in the output of aws help. You can also view the docs online. For DynamoDB, you can see all the available waiters, as well as the documentation for the aws dynamodb wait table-exists command.

re:Invent 2015

We hope everyone enjoyed re:Invent 2014, and we look forward to seeing everyone again next year!

 

Leveraging the s3 and s3api Commands

by Kyle Knapp | in AWS CLI

Have you ever run aws help on the command line or browsed the AWS CLI Reference Documentation and noticed that there are two sets of Amazon S3 commands to choose from: s3 and s3api? If you are completely unfamiliar with either the s3 or s3api commands, you can read about the two commands in the AWS CLI User Guide. In this post, I am going to go into detail about the two different commands and provide a few examples on how to leverage the two sets of commands to your advantage.

s3api

Most of the commands in the AWS CLI are generated from JSON models, which directly model the APIs of the various AWS services. This allows the CLI to generate commands that are a near one-to-one mapping of the service’s API. The s3api commands fall into this category. They are entirely driven by these JSON models and closely mirror the API of S3, hence the name s3api. Each command operation, e.g., s3api list-objects or s3api create-bucket, shares a similar operation name, a similar input, and a similar output with the corresponding operation in S3’s API. As a result, this gives you a significant amount of granular control over the requests you make to S3 using the CLI.
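
For example, a low-level call that maps directly to S3’s GetBucketAcl API operation looks like this (the bucket name is a placeholder):

$ aws s3api get-bucket-acl --bucket mybucket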

s3

The s3 commands are a custom set of commands specifically designed to make it even easier for you to manage your S3 files using the CLI. The main difference between the s3 and s3api commands is that the s3 commands are not solely driven by the JSON models. Rather, the s3 commands are built on top of the operations found in the s3api commands. As a result, these commands allow for higher-level features that are not provided by the s3api commands. This includes, but is not limited to, the ability to synchronize local directories and S3 buckets, transfer multiple files in parallel, stream files, and automatically handle multipart transfers. In short, these commands further simplify and speed up transferring files to, within, and from S3.
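
For example, synchronizing a local directory up to a bucket (the names here are placeholders) is a single command, with parallel and multipart transfers handled for you:

$ aws s3 sync ./my-local-dir s3://mybucket/my-prefix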

s3 and s3api Examples

Both sets of S3 commands have a lot to offer. With this wide array of commands to choose from, it is important to be able to identify what commands you need for your specific use case. For example, if you want to upload a set of files on your local machine to your S3 bucket, you would probably want to use the s3 commands via the cp or sync command operations. On the other hand, if you wanted to set a bucket policy, you would use the s3api commands via the put-bucket-policy command operation.

However, your choice of S3 commands should not be limited to strictly deciding whether you need to use the s3 commands or the s3api commands. Sometimes you can use both sets of commands in conjunction to satisfy your use case. Oftentimes this proves to be even more powerful, as you are able to combine the low-level granular control of the s3api commands with the higher-level simplicity and speed of the s3 commands. Here are a few examples of how you can work with both sets of S3 commands for your specific use case.

Bucket Regions

When you create an S3 bucket, the bucket is created in a specific region. Knowing the region that your bucket is in is essential for a variety of use cases, such as transferring files across buckets located in different regions and making requests that require Signature Version 4 signing. However, you may not know or remember where your bucket is located. Fortunately, by using the s3api commands, you can determine your bucket’s region.

For example, if I make a bucket located in the Frankfurt region using the s3 commands:

$ aws s3 mb s3://myeucentral1bucket --region eu-central-1
make_bucket: s3://myeucentral1bucket/

I can then use s3api get-bucket-location to determine the region of my newly created bucket:

$ aws s3api get-bucket-location --bucket myeucentral1bucket
{
    "LocationConstraint": "eu-central-1"
}

As shown above, the value of the LocationConstraint member in the output JSON is the expected region of the bucket, eu-central-1. Note that for buckets created in the US Standard region, us-east-1, the value of LocationConstraint will be null. As a quick reference to how location constraints correspond to regions, refer to the AWS Regions and Endpoints Guide.

Once you have learned the region of your bucket, you can pass the region in using the --region parameter, set it in your config file, set it in a profile, or set it using the AWS_DEFAULT_REGION environment variable. You can read more about how to set a region in the AWS CLI User Guide. This allows you to select the correct region when you are making subsequent requests to your bucket via the s3 and s3api commands.
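
For example, having learned the bucket’s region above, a follow-up request might look like this (the local file name is a placeholder):

$ aws s3 cp myfile.txt s3://myeucentral1bucket/myfile.txt --region eu-central-1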

Deleting a Set of Buckets

For this example, suppose that I have a lot of buckets that I was using for testing and they are no longer needed. But, I have other buckets, too, and they need to stick around:

$ aws s3 ls
2014-12-02 13:36:17 awsclitest-123
2014-12-02 13:36:24 awsclitest-234
2014-12-02 13:36:51 awsclitest-345
2014-11-21 16:47:14 mybucketfoo

The buckets beginning with awsclitest- are test buckets that I want to get rid of. An obvious way would be to just delete each bucket using aws s3 rb, one at a time. This becomes tedious, though, if I have a lot of these test buckets or if the test bucket names are longer and more complicated. I am going to go step by step through how you can build a single command that will delete all of the buckets that begin with awsclitest-.

Instead of using the s3 ls command to list my buckets, I am going to use the s3api list-buckets command to list them:

$ aws s3api list-buckets
{
    "Owner": {
        "DisplayName": "mydisplayname",
        "ID": "myid"
    },
    "Buckets": [
        {
            "CreationDate": "2014-12-02T21:36:17.000Z",
            "Name": "awsclitest-123"
        },
        {
            "CreationDate": "2014-12-02T21:36:24.000Z",
            "Name": "awsclitest-234"
        },
        {
            "CreationDate": "2014-12-02T21:36:51.000Z",
            "Name": "awsclitest-345"
        },
        {
            "CreationDate": "2014-11-22T00:47:14.000Z",
            "Name": "mybucketfoo"
        }
    ]
}

At first glance, it does not make much sense to use the s3api list-buckets over the s3 ls because all of the bucket names are embedded in the JSON output of the command. However, we can take advantage of the command’s --query parameter to perform JMESPath queries for specific members and values in the JSON output:

$ aws s3api list-buckets \
    --query 'Buckets[?starts_with(Name, `awsclitest-`) == `true`].Name'
[
    "awsclitest-123",
    "awsclitest-234",
    "awsclitest-345"
]

If you are unfamiliar with the --query parameter, you can read about it in the AWS CLI User Guide. For this specific query, I am asking for the names of all of the buckets that begin with awsclitest-. However, the output is still a little difficult to parse if we hope to use that as input to the s3 rb command. To make the names easier to parse out, we can modify our query slightly and specify text for the --output parameter:

$ aws s3api list-buckets \
    --query 'Buckets[?starts_with(Name, `awsclitest-`) == `true`].[Name]' \
    --output text
awsclitest-123
awsclitest-234
awsclitest-345

With this output, we can now use it as input to perform a forced bucket delete on all of the buckets whose name starts with awsclitest-:

$ aws s3api list-buckets \
    --query 'Buckets[?starts_with(Name, `awsclitest-`) == `true`].[Name]' \
    --output text | xargs -I {} aws s3 rb s3://{} --force
delete: s3://awsclitest-123/test
remove_bucket: s3://awsclitest-123/
delete: s3://awsclitest-234/test
remove_bucket: s3://awsclitest-234/
delete: s3://awsclitest-345/test
remove_bucket: s3://awsclitest-345/

As shown in the output, all of the desired buckets, along with any files inside of them, were deleted. To ensure that it worked, I can then list all of my buckets:

$ aws s3 ls
2014-11-21 16:47:14 mybucketfoo

Aggregating S3 Server Access Logs

In this final example, I will show you how you can use the s3 and s3api commands together in order to aggregate your S3 server access logs. These logs are used to track the requests for access to your S3 bucket. If you are unfamiliar with server access logs, you can read about them in the Amazon S3 Developer Guide.

Server access logs follow the naming convention TargetPrefixYYYY-mm-DD-HH-MM-SS-UniqueString, where YYYY, mm, DD, HH, MM, and SS are the digits of the year, month, day, hour, minute, and seconds, respectively, of when the log file was delivered. However, the number of logs delivered for a specific period of time, and what ends up inside a specific log file, is somewhat unpredictable. As a result, it is convenient to aggregate all of the logs for a specific period of time into one file in an S3 bucket.

For this example, I am going to aggregate all of the logs that were delivered on October 31, 2014 from 11 a.m. to 12 p.m. to the file 2014-10-31-11.log in my bucket. To begin, I will use s3api list-objects to list all of the objects in my bucket beginning with logs/2014-10-31-11:

$ aws s3api list-objects --bucket myclilogs --output text \
    --prefix logs/2014-10-31-11 --query Contents[].[Key]
logs/2014-10-31-11-19-03-D7E3D44429C236C9
logs/2014-10-31-11-19-05-9FCEDD1393C9319F
logs/2014-10-31-11-19-26-01DE8498F22E8EB6
logs/2014-10-31-11-20-03-1B26CD31AE5BFEEF
logs/2014-10-31-11-21-34-757D6904963C22A6
logs/2014-10-31-11-21-35-27B909408B88017B
logs/2014-10-31-11-21-50-1967E793B8865384

.......  Continuing to the end ...........

logs/2014-10-31-11-42-44-F8AD38626A24E288
logs/2014-10-31-11-43-47-160D794F4D713F24

Using both the --query and --output parameters, I was able to list the logs in a format that could easily be used as input for the s3 commands. Now that I have identified all of the logs that I want to aggregate, I am going to take advantage of the s3 cp command’s streaming capability to actually aggregate the logs.

When using s3 cp to stream, you have two options: upload a stream from standard input to an S3 object or download an S3 object as a stream to standard output. You can do so by specifying - as the first path parameter to the cp command if you want to upload a stream or by specifying - as the second path parameter to the cp if you want to download an object as a stream. For my use case, I am going to stream in both directions:

$ aws s3api list-objects --bucket myclilogs \
    --output text --prefix logs/2014-10-31-11 \
    --query Contents[].[Key] |
    xargs -I {} aws s3 cp s3://myclilogs/{} - |
    aws s3 cp - s3://myclilogs/aggregatedlogs/2014-10-31-11.log

The workflow for this command is as follows. First, I stream each desired log one by one to standard output. Then I pipe the stream from standard output to standard input and upload the stream to the desired location in my bucket.

If you want to speed up this process, you can use the GNU parallel shell tool to run the s3 cp commands that download each log as a stream in parallel with each other:

$ aws s3api list-objects --bucket myclilogs \
    --output text --prefix logs/2014-10-31-11 \
    --query Contents[].[Key] |
    parallel -j5 aws s3 cp s3://myclilogs/{} - |
    aws s3 cp - s3://myclilogs/aggregatedlogs/2014-10-31-11.log

By specifying the -j5 parameter in the command above, I am assigning each s3 cp streaming download to one of five jobs that run those commands in parallel. Also, note that the GNU parallel shell tool may not be installed on your machine by default; it can be installed with tools such as brew and apt-get.
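
For example, on OS X with Homebrew or on a Debian-based system:

$ brew install parallel
$ sudo apt-get install parallel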

Once the command finishes, I can then verify that my aggregated log exists:

$ aws s3 ls s3://myclilogs/aggregatedlogs/
2014-12-03 10:43:49     269956 2014-10-31-11.log

Conclusion

I hope that the description and examples that I provided will help you further leverage both the s3 and s3api commands to your advantage. However, do not limit yourself to just the examples I provided. Go ahead and try to figure out other ways to utilize the s3 and s3api commands together today!

You can follow us on Twitter @AWSCLI and let us know what you’d like to read about next! If you have any questions about the CLI or any feature requests, do not be afraid to get in touch with us via our GitHub repository.

Stay tuned for our next blog post!

 

AWS Resource APIs for SNS and SQS

by David Murray | in Java

Last week we released version 0.0.3 of the AWS Resource APIs for Java, adding support for the Amazon Simple Queue Service (SQS) and Amazon Simple Notification Service (SNS). SNS and SQS are similar services that together provide a fully-managed cloud messaging platform. These services expose two powerful primitives — Topics and Queues — which let you decouple message producers from message consumers. The Resource APIs for SNS and SQS make it easier than ever to use these two services. Enough chit-chat, let’s see some code!

Amazon SNS — Topics

SNS is used for multicast messaging. Consumers subscribe to a "Topic," and messages published to the Topic are pushed to all current subscribers. The resource API for SNS exposes a resource object representing a Topic, giving you convenient methods for managing subscriptions to the topic and publishing messages to the topic. It also exposes resource objects for PlatformApplications and PlatformEndpoints, which are used to integrate with various mobile push services. This example demonstrates creating a new Topic, adding a couple of subscribers, and publishing a message to the topic.

SNS sns = ServiceBuilder.forService(SNS.class).build();

// Create a new topic.
Topic topic = sns.createTopic("MyTestTopic");
try {

    // Subscribe an email address.
    topic.subscribe("david@example.com", "email");

    // Subscribe an HTTPS endpoint.
    topic.subscribe("https://api.example.com/notify?user=david", "https");

    // Subscribe all of the endpoints from a previously-created
    // mobile platform application.
    PlatformApplication myMobileApp =
            sns.getPlatformApplication("arn:aws:...");
    for (PlatformEndpoint endpoint : myMobileApp.getEndpoints()) {
        topic.subscribe(endpoint.getArn(), "application");
    }

    // Publish a message to all of the subscribers.
    topic.publish("Hello from Amazon SNS!");

} finally {
    // Clean up after ourselves.
    topic.delete();
}

Amazon SQS — Queues

SQS is used for reliable anycast messaging. Producers write messages to a "Queue," and consumers pull messages from the queue; each message is delivered to a single consumer[1]. The resource API for SQS exposes a Queue resource object, giving you convenient methods for sending messages to a queue and receiving messages from the queue. This example demonstrates creating a queue, sending a couple of messages to it, and then reading those messages back out.

SQS sqs = ServiceBuilder.forService(SQS.class).build();

// Create a new queue.
Queue queue = sqs.createQueue("MyTestQueue");
try {

    // Configure the queue for more efficient long-polling.
    queue.setAttributes(Collections.singletonMap(
            "ReceiveMessageWaitTimeSeconds",
            "20"));

    // Send it a couple messages.
    for (int i = 0; i < 10; ++i) {
        queue.sendMessage("Hello from Amazon SQS: " + i);
    }

    while (true) {
        // Pull a batch of messages from the queue for processing.
        List<Message> messages = queue.receiveMessages();
        for (Message message : messages) {
            System.out.println(message.getBody());

            // Delete the message from the queue to acknowledge that
            // we've successfully processed it.
            message.delete();
        }
    }

} finally {
    // Clean up after ourselves.
    queue.delete();
}

Conclusion

Using SNS or SQS, or interested in getting started with them? Give these new resource APIs a try and let us know what you think, either here or via GitHub issues!

[1] To be precise, it’s delivered to at least one consumer; if the first consumer who reads it does not delete the message from the queue in time (whether due to failure or just being slow), it’ll eventually be delivered to another consumer.

New AWS Elastic Beanstalk Deployment Wizard

Today, we released version 1.8 of the AWS Toolkit for Visual Studio. For this release, we revamped our wizard to deploy your ASP.NET Applications. Our goal was to make deployment easier as well as take advantage of some of the new features AWS Elastic Beanstalk has added.

What happened to the AWS CloudFormation deployments?

Unlike the new deployment wizard, the previous wizard had the option to deploy using the Load Balanced and Single Instance Templates, which would deploy using AWS CloudFormation templates. This deployment option was added before we had Elastic Beanstalk, which has since added features that make these deployment templates obsolete. If you still need access to this deployment mechanism, on the first page of the new wizard you can choose to relaunch the legacy wizard.

So what’s new?

Rolling deployments

If you are deploying your applications to a load balanced environment, you can configure how new versions of your applications are deployed to the instances in your environment. You can also configure how changes to your environment are made. For example, if you have 4 instances in your environment and you want to change the instance type, you can configure the environment to change 2 instances at a time, keeping your application up and running while the change is being made.

AWS Identity and Access Management roles

AWS Identity and Access Management roles are an important way of getting AWS credentials to your deployed application. With the new wizard, you can select an existing role or choose to create a new role based on a number of role templates. It is easy in the new wizard to set up a new role that gives access to Amazon S3 and DynamoDB. After deployment, you can refine the role from the AWS Explorer.

Application options

The application options page has several new features. You can now choose which build configuration to use. You can also set any application settings you want to be pushed into the web.config appSettings section when the application is being deployed.

In the previous deployment wizard, applications were deployed to a sub-folder in IIS based on the project name with the suffix "_deploy". It appeared as if the application was deployed at the root because URL rewrite rules were added to the root. This worked for most cases, but there are some edge cases where this caused problems. With the new wizard, applications can be configured to deploy to any folder, and by default they are deployed at the root folder of IIS. If the application is deployed anywhere other than the root, the URL rewrite rules are added to the root.

Feedback

We hope that you like the new wizard and that it makes things easier for you. For a full walkthrough of the new wizard, check out the user guide for the AWS Toolkit for Visual Studio. We would love to hear your feedback on the new wizard. We would also love to hear about any interesting deployment issues you have and where you would like help from AWS .NET tooling.

 

Amazon EC2 ImageUtilities and Get-EC2ImageByName Updates

Versions 2.3.14 of the AWS SDK for .NET and AWS Tools for Windows PowerShell, released today (December 18, 2014), contain updates to the utilities and the Get-EC2ImageByName cmdlet used to query common Microsoft Windows 64-bit Amazon Machine Images using version-independent names. Briefly, we renamed some of the keys used to identify Microsoft Windows Server 2008 images to address confusion over what versions are actually returned, and we added the ability to retrieve some additional images. In the Get-EC2ImageByName cmdlet, we made a small behavior change to help when running the cmdlet in a pipeline when more than one image version exists (as happens when Amazon periodically revises the images) – the cmdlet by default now outputs only the very latest image. The previous behavior that output all available versions (latest + prior) can be enabled using a new switch.

Renamed and New Image Keys

This change affects both the SDK Amazon.EC2.Util.ImageUtilities class and the Get-EC2ImageByName cmdlet. For some time now, the keys prefixed with Windows_2008_* have returned Microsoft Windows Server 2008 R2 images, not the original Windows Server 2008 editions, leading to some confusion. We addressed this by adding a new set of R2-specific keys—these all have the prefix Windows_2008R2_*. To maintain backward compatibility, the SDK retains the old keys, but we have tagged them with the [Obsolete] attribute and a message detailing the corresponding R2-based key you should use. Additionally, these old keys will still return Windows Server 2008 R2 images.

Note that the Get-EC2ImageByName cmdlet will not display the obsolete keys (when run with no parameters), but you can still supply them for the -Name parameter so your existing scripts will continue to function.

We also added three new keys enabling you to retrieve 64-bit editions of the original (RTM) Windows Server 2008 release (base image, plus SQL Server 2008 Standard and SQL Server 2008 Express images). The keys for these images are WINDOWS_2008RTM_BASE, WINDOWS_2008RTM_SQL_SERVER_EXPRESS_2008, and WINDOWS_2008RTM_SQL_SERVER_STANDARD_2008.

The following keys are displayed when you run the cmdlet with no parameters:

PS C:> Get-EC2ImageByName
WINDOWS_2012R2_BASE
WINDOWS_2012R2_SQL_SERVER_EXPRESS_2014
WINDOWS_2012R2_SQL_SERVER_STANDARD_2014
WINDOWS_2012R2_SQL_SERVER_WEB_2014
WINDOWS_2012_BASE
WINDOWS_2012_SQL_SERVER_EXPRESS_2014
WINDOWS_2012_SQL_SERVER_STANDARD_2014
WINDOWS_2012_SQL_SERVER_WEB_2014
WINDOWS_2012_SQL_SERVER_EXPRESS_2012
WINDOWS_2012_SQL_SERVER_STANDARD_2012
WINDOWS_2012_SQL_SERVER_WEB_2012
WINDOWS_2012_SQL_SERVER_EXPRESS_2008
WINDOWS_2012_SQL_SERVER_STANDARD_2008
WINDOWS_2012_SQL_SERVER_WEB_2008
WINDOWS_2008R2_BASE
WINDOWS_2008R2_SQL_SERVER_EXPRESS_2012
WINDOWS_2008R2_SQL_SERVER_STANDARD_2012
WINDOWS_2008R2_SQL_SERVER_WEB_2012
WINDOWS_2008R2_SQL_SERVER_EXPRESS_2008
WINDOWS_2008R2_SQL_SERVER_STANDARD_2008
WINDOWS_2008R2_SQL_SERVER_WEB_2008
WINDOWS_2008RTM_BASE
WINDOWS_2008RTM_SQL_SERVER_EXPRESS_2008
WINDOWS_2008RTM_SQL_SERVER_STANDARD_2008
WINDOWS_2008_BEANSTALK_IIS75
WINDOWS_2012_BEANSTALK_IIS8
VPC_NAT

The following keys are deprecated but still recognized:

WINDOWS_2008_BASE
WINDOWS_2008_SQL_SERVER_EXPRESS_2012
WINDOWS_2008_SQL_SERVER_STANDARD_2012
WINDOWS_2008_SQL_SERVER_WEB_2012
WINDOWS_2008_SQL_SERVER_EXPRESS_2008
WINDOWS_2008_SQL_SERVER_STANDARD_2008
WINDOWS_2008_SQL_SERVER_WEB_2008

Get-EC2ImageByName Enhancements

Amazon periodically revises the set of Microsoft Windows images that it makes available to customers, and for a period the Get-EC2ImageByName cmdlet could return the latest image for a key, plus one or more prior versions. For example, at the time of writing this post, running the command Get-EC2ImageByName -Name windows_2012r2_base emitted two images as output. If run in a pipeline that then proceeds to invoke the New-EC2Instance cmdlet, for example, instances of multiple images could then be started—perhaps not what was expected. To obtain and start the latest image only, you would have to either index the returned collection, which could contain one or several objects, or insert a call to Select-Object in your pipeline to extract the first item before then calling New-EC2Instance (the first item in the output from Get-EC2ImageByName is always the latest version).

With the new release, when a single key is supplied to the -Name parameter, the cmdlet emits only the single latest machine image that is available. This makes using the cmdlet in a ‘get | start’ pattern much safer and more convenient:

# guaranteed to only return one image to launch
PS C:> Get-EC2ImageByName -Name windows_2012r2_base | New-EC2Instance -InstanceType t1.micro ...

If you do need to get all versions of a given image, this is supported using the new -AllAvailable switch. The following command outputs all available versions of the Windows Server 2012 R2 image, which may be one or several images:

PS C:> Get-EC2ImageByName -Name windows_2012r2_base -AllAvailable

The cmdlet can also emit all available versions when either more than one value is supplied for the -Name parameter or a custom key value is supplied, as it is assumed in these scenarios you are expecting a collection to work with:

# use of multiple keys (custom or built-in) yields all versions
PS C:> Get-EC2ImageByName -Name windows_2012r2_base,windows_2008r2_base

# use of a custom key, single or multiple, yields all versions
PS C:> Get-EC2ImageByName -Name "Windows_Server-2003*"

These updates to the Get-EC2ImageByName cmdlet were driven in part by feedback from our users. If you have an idea or suggestion for new features that would make your scripting life easier, please get in touch with us! One way is via the AWS PowerShell Scripting forum here.

Preview release of AWS Resource APIs for .NET

by Milind Gokarn | in .NET

We have released a preview of the AWS Resource APIs for .NET, which is a brand-new high-level API. The latest version of the preview ships with resource APIs for the following AWS services; support for other services will be added in the near future.

  • Amazon Glacier
  • Amazon Simple Notification Service (SNS)
  • Amazon Simple Queue Service (SQS)
  • AWS CloudFormation
  • AWS Identity and Access Management (IAM)

The goal of this preview is to provide early access to the new API and to get feedback from you that we can incorporate in the GA release. The source code for the preview is available as a new branch of the aws-sdk-net GitHub repository, and the binaries are available here.

The resource APIs allow you to work more directly with the resources that are managed by AWS services. A resource is a logical object exposed by an AWS service’s API. For example, User, Group, and Role are some of the resources exposed by the IAM service. Here are the benefits of using the resource APIs:

Easy to understand

The low-level APIs are request-response style APIs that correspond to the actions exposed by an AWS service. The resource APIs are higher-level, object-oriented APIs that represent the logical relationships between the resources within a service. When you work with a resource object, only the operations and relationships applicable to it are visible, in contrast to the low-level API, where you can see all the operations for a service on the service client object. This makes it easier to understand and explore the features of a service.

Write less code

The resource APIs reduce the amount of code you need to write to achieve the same results.

  • Operations on resource objects infer identifier parameters from their current context. This allows you to write code where you don’t have to specify identifiers repeatedly.

    // No need to specify ResyncMFADeviceRequest.UserName 
    // as it is inferred from the user object
    user.Resync(new ResyncMFADeviceRequest
    {
        SerialNumber = "",
        AuthenticationCode1 ="",
        AuthenticationCode2 = ""
    });
    
  • Simplified method overloads eliminate the need to create request objects for commonly used and mandatory request parameters. For more complex usages, you can still use the overload that accepts a request object.

    // Use this simplified overload...
    group.AddUser(user.Name);
    // ...instead of the equivalent request-object form:
    // group.AddUser(new AddUserToGroupRequest { UserName = user.Name });
    
  • Auto pagination for operations that support paging – The resource APIs will make multiple service calls for APIs that support paging as you enumerate through the results. You do not have to write additional code to make multiple service calls and to capture/resend pagination tokens.

Using the API

The entry point for using the resource APIs is the service object. It represents an AWS service itself, in this case IAM. Using the service object, you can access top-level resources and operations on a service. Once you get the resource objects, further operations can be performed on them. The following code demonstrates various API usages with IAM and resource objects.

using Amazon.IdentityManagement.Model;
using Amazon.IdentityManagement.Resources; // Namespace for IAM resource APIs

...

// AWS credentials or profile is picked up from app.config 
var iam = new IdentityManagementService();            

// Get a group by its name
var adminGroup = iam.GetGroupByName("admins");

// List all users in the admins group.          
// GetUsers() calls an API that supports paging and 
// automatically makes multiple service calls if
// more results are available as we enumerate
// through the results.
foreach (var user in adminGroup.GetUsers())
{
    Console.WriteLine(user.Name);
}

// Create a new user and add the user to the admins group
var userA = iam.CreateUser("Alice");
adminGroup.AddUser(userA.Name);

// Create a new access key for a user
var userB = iam.GetUserByName("Bob");
var accessKey = userB.CreateAccessKey();

// Deactivate all MFA devices for a user
var userC = iam.GetUserByName("Charlie");
foreach (var mfaDevice in userC.GetMfaDevices())
{
    mfaDevice.Deactivate();
}

// Update an existing policy for a user
var policy = userC.GetUserPolicyByName("S3AccessPolicy");            
policy.Put(POLICY_DOCUMENT);

The AWS SDK for .NET Developer Guide has code examples and more information about the resource APIs. We would really like to hear your feedback and suggestions about this new API. You can provide your feedback through GitHub and the AWS forums.