AWS Developer Blog

Using Elastic IP Addresses

by Norm Johanson | in .NET

Elastic IP addresses are great for keeping a consistent public IP address. They can also be transferred to other EC2 instances, which is useful if you need to replace an instance but don’t want your public IP address to change. The Amazon EC2 User Guide has information on IP addresses for EC2 instances that can give you a better understanding of how and when they are assigned. You can use the AWS Toolkit for Visual Studio or the AWS Management Console to manage your Elastic IP addresses, but what if you want to assign them from code?

Allocating Elastic IP addresses and associating them with instances using the AWS SDK for .NET is simple, but the process differs slightly between EC2-Classic instances and instances launched into a VPC. This snippet shows how to allocate and associate an Elastic IP address for an instance launched into EC2-Classic.

// Create a new Elastic IP
var allocateRequest = new AllocateAddressRequest() { Domain = DomainType.Standard };
var allocateResponse = ec2Client.AllocateAddress(allocateRequest);

// Assign the IP to an EC2 instance
var associateRequest = new AssociateAddressRequest
{
    PublicIp = allocateResponse.PublicIp,
    InstanceId = "i-XXXXXXXX"
};
ec2Client.AssociateAddress(associateRequest);

And the following snippet is for an EC2 instance launched into a VPC.

// Create a new Elastic IP
var allocateRequest = new AllocateAddressRequest() { Domain = DomainType.Vpc };
var allocateResponse = ec2Client.AllocateAddress(allocateRequest);

// Assign the IP to an EC2 instance
var associateRequest = new AssociateAddressRequest
{
    AllocationId = allocateResponse.AllocationId,
    InstanceId = "i-XXXXXXXX"
};
ec2Client.AssociateAddress(associateRequest);

The first difference between the two pieces of code is that the Domain property on AllocateAddressRequest changes from DomainType.Standard to DomainType.Vpc. The other difference is that the address is identified by the PublicIp property for EC2-Classic, whereas the AllocationId property is used for EC2-VPC.

Later, if the Elastic IP address needs to be moved to a different instance, the DisassociateAddress API can be called to detach it from its current instance, and then AssociateAddress can be called again for the new instance. (ReleaseAddress, in contrast, returns the address to AWS entirely.)
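
For example, with an EC2-Classic address like the one allocated above, moving it to a replacement instance might look like the following sketch (the instance ID is a placeholder; for an EC2-VPC address, you would identify the association by its association ID and the address by its AllocationId instead of PublicIp):

// Detach the Elastic IP from the instance it is currently associated with
var disassociateRequest = new DisassociateAddressRequest
{
    PublicIp = allocateResponse.PublicIp
};
ec2Client.DisassociateAddress(disassociateRequest);

// Associate the same address with the replacement instance
var reassociateRequest = new AssociateAddressRequest
{
    PublicIp = allocateResponse.PublicIp,
    InstanceId = "i-YYYYYYYY"
};
ec2Client.AssociateAddress(reassociateRequest);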

Note: I was using version 2 of the SDK for this post. If you are using version 1 of the SDK, the enumerations DomainType.Standard and DomainType.Vpc are replaced with the string literals "standard" and "vpc".

Using Windows PowerShell

This is also a great use case for the AWS Tools for Windows PowerShell. Here’s how you can do the same as above for EC2-Classic in PowerShell.

$address = New-EC2Address -Domain "standard"
Register-EC2Address -InstanceId "i-XXXXXXXX" -PublicIp $address.PublicIp
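
For an instance launched into a VPC, the pattern is similar, but the allocation ID is used when associating the address (a sketch; the instance ID is a placeholder):

$address = New-EC2Address -Domain "vpc"
Register-EC2Address -InstanceId "i-XXXXXXXX" -AllocationId $address.AllocationId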

Using the SaveBehavior Configuration for the DynamoDBMapper

by Wade Matveyenko | in Java

The high-level save API of DynamoDBMapper provides a convenient way of persisting items in an Amazon DynamoDB table. The underlying implementation uses either a PutItem request to create a new item or an UpdateItem request to edit the existing item. In order to exercise finer control over the low-level service requests, you can use a SaveBehavior configuration to specify the expected behavior when saving an item. First, let’s look at how to set a SaveBehavior configuration. There are two ways of doing it:

  • You can specify the default SaveBehavior when constructing a new mapper instance, which will affect all save operations from this mapper:

    // All save operations will use the UPDATE behavior by default
    DynamoDBMapper mapper = new DynamoDBMapper(dynamoDBClient, 
                            new DynamoDBMapperConfig(SaveBehavior.UPDATE));
  • You can also force a SaveBehavior for a particular save operation:

    // Force this save operation to use CLOBBER, instead of the default behavior of this mapper
    mapper.save(obj, new DynamoDBMapperConfig(SaveBehavior.CLOBBER));

The next step is to understand the different SaveBehavior configurations. There are four configurations you can choose from: UPDATE (the default), UPDATE_SKIP_NULL_ATTRIBUTES, CLOBBER, and APPEND_SET. When you add a new item to the table, any of the four configurations has the same effect: the item is put as specified in the POJO (though this might be achieved by different service request calls). However, when it comes to updating an existing item, these SaveBehavior configurations produce different results, and you need to choose the appropriate one according to how you want to control your data. To explain this, let’s walk through an example of using each SaveBehavior configuration to update an item specified by the same POJO:

  • Table schema:

    AttributeName    key       modeled_scalar    modeled_set    unmodeled
    KeyType          Hash      Non-key           Non-key        Non-key
    AttributeType    Number    String            String set     String
  • POJO class definition:

    @DynamoDBTable(tableName="TestTable")
    public class TestTableItem {
    
       private int key;
       private String modeledScalar;
       private Set<String> modeledSet;
    
       @DynamoDBHashKey(attributeName="key")
       public int getKey() { return key; }
       public void setKey(int key) { this.key = key; }
    
       @DynamoDBAttribute(attributeName="modeled_scalar")
       public String getModeledScalar() { return modeledScalar; }
       public void setModeledScalar(String modeledScalar) { this.modeledScalar = modeledScalar; }
    	
       @DynamoDBAttribute(attributeName="modeled_set")
       public Set<String> getModeledSet() { return modeledSet; }
       public void setModeledSet(Set<String> modeledSet) { this.modeledSet = modeledSet; }
    
    }
      
  • Existing item:

    {
         "key" : "99",
         "modeled_scalar" : "foo", 
         "modeled_set" : [
              "foo0", 
              "foo1"
         ], 
         "unmodeled" : "bar" 
    }
  • POJO object:

    TestTableItem obj = new TestTableItem();
    obj.setKey(99);
    obj.setModeledScalar(null);
    obj.setModeledSet(Collections.singleton("foo2"));

Then let’s look at the effect of using each SaveBehavior configuration:

  • UPDATE (default)

    UPDATE will not affect unmodeled attributes on a save operation, and a null value for a modeled attribute will remove that attribute from the item in DynamoDB.

    Updated item:

    {
         "key" : "99",
         "modeled_set" : [
              "foo2"
         ],
         "unmodeled" : "bar" 
    }
  • UPDATE_SKIP_NULL_ATTRIBUTES

    UPDATE_SKIP_NULL_ATTRIBUTES is similar to UPDATE, except that it ignores any null-valued attributes and will NOT remove them from the item in DynamoDB.

    Updated item:

    {
         "key" : "99",
         "modeled_scalar" : "foo",
         "modeled_set" : [
              "foo2"
         ], 
         "unmodeled" : "bar" 
    }
  • CLOBBER

    CLOBBER will clear and replace all attributes, including unmodeled ones, by deleting and recreating the item on save.

    Updated item:

    {
         "key" : "99", 
         "modeled_set" : [
              "foo2"
         ]
    }
  • APPEND_SET

    APPEND_SET treats scalar attributes (String, Number, Binary) the same as UPDATE_SKIP_NULL_ATTRIBUTES does. However, for set attributes, it will append to the existing attribute value, instead of overriding it.

    Updated item:

    {
         "key" : "99",
         "modeled_scalar" : "foo",
         "modeled_set" : [
              "foo0", 
              "foo1", 
              "foo2"
         ], 
         "unmodeled" : "bar" 
    }

Here is a summary of the differences between these SaveBehavior configurations:

SaveBehavior                   On unmodeled attribute   On null-value attribute   On set attribute
UPDATE                         keep                     remove                    override
UPDATE_SKIP_NULL_ATTRIBUTES    keep                     keep                      override
CLOBBER                        remove                   remove                    override
APPEND_SET                     keep                     keep                      append
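
For example, to produce the APPEND_SET result from the walkthrough above, you could force that behavior for a single save operation. The following is a sketch that reuses the TestTableItem POJO and the DynamoDB client from the earlier examples:

// Construct a mapper (reusing the client from the earlier examples)
DynamoDBMapper mapper = new DynamoDBMapper(dynamoDBClient);

TestTableItem obj = new TestTableItem();
obj.setKey(99);
obj.setModeledScalar(null);                        // null values are ignored by APPEND_SET
obj.setModeledSet(Collections.singleton("foo2"));  // appended to the existing set

// Force APPEND_SET for this save operation only
mapper.save(obj, new DynamoDBMapperConfig(SaveBehavior.APPEND_SET));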

As you can see, SaveBehavior provides great flexibility on how to update your data in Amazon DynamoDB. Do you find these SaveBehavior configurations easy to use? Are there any other save behaviors that you need? Leave your comment here and help us improve our SDK!

Release: AWS SDK for PHP – Version 2.4.7

by Jeremy Lindblom | in PHP

We would like to announce the release of version 2.4.7 of the AWS SDK for PHP. This release adds support for audio transcoding features to the Amazon Elastic Transcoder client and includes updates to the Amazon CloudFront, Amazon EC2, Amazon RDS, Auto Scaling, and AWS OpsWorks clients.

Changelog

  • Added support for audio transcoding features to the Amazon Elastic Transcoder client
  • Added support for modifying Reserved Instances in a region to the Amazon EC2 client
  • Added support for new resource management features to the AWS OpsWorks client
  • Added support for additional HTTP methods to the Amazon CloudFront client
  • Added support for custom error page configuration to the Amazon CloudFront client
  • Added support for associating public IP addresses with instances in Auto Scaling groups via the Auto Scaling client
  • Added support for tags and filters to various operations in the Amazon RDS client
  • Added the ability to easily specify event listeners on waiters
  • Added support for using the ap-southeast-2 region to the Amazon Glacier client
  • Added support for using the ap-southeast-1 and ap-southeast-2 regions to the Amazon Redshift client
  • Updated the Amazon EC2 client to use the 2013-09-11 API version
  • Updated the Amazon CloudFront client to use the 2013-09-27 API version
  • Updated the AWS OpsWorks client to use the 2013-07-15 API version
  • Updated the Amazon CloudSearch client to use Signature Version 4
  • Fixed an issue with the Amazon S3 Client so that the top-level XML element of the CompleteMultipartUpload operation is correctly sent as CompleteMultipartUpload
  • Fixed an issue with the Amazon S3 Client so that you can now disable bucket logging using the PutBucketLogging operation
  • Fixed an issue with the Amazon CloudFront client so that query string parameters in pre-signed URLs are correctly URL-encoded
  • Fixed an issue with the Signature Version 4 implementation where headers with multiple values were sometimes sorted and signed incorrectly

Install/Download the Latest SDK

Drag and Drop in the AWS Toolkit for Visual Studio

by Norm Johanson | in .NET

Using drag and drop can be a great time saver when using your favorite tool, but it is not always obvious what drag and drop features are available. The AWS Toolkit for Visual Studio has many drag and drop features that you might not have discovered yet.

AWS Explorer to the Code Window

When you drag a resource from AWS Explorer into your code, the name used to look up the resource is inserted into your code. For example, dragging an Amazon S3 bucket inserts the bucket name. This is especially useful for Amazon SQS queues, where the full queue URL is inserted, and for Amazon SNS topics, where the topic ARN is inserted.

Amazon S3 Bucket Browser

Files and folders in Windows Explorer can be dragged into the S3 bucket browser. This uploads the local files and folders to the selected bucket. S3 objects can also be dragged out of the S3 bucket browser into Windows Explorer. If you drag a "folder" from the S3 bucket browser, a folder is created on your local system, and all of the objects with that folder prefix are downloaded into it.

Subscribing Amazon SQS Queues to Amazon SNS Topics

In order to have an SQS queue receive messages from an SNS topic, the queue must be subscribed and the permissions on the SQS queue must give the SNS topic access to the SendMessage action. In the toolkit, this is easy to do by opening up the SNS topic view and then dragging the target SQS queue into the view.

This displays a confirmation dialog box with a check box to add permissions on the SQS queue for the SNS topic. Afterwards, you can confirm the permissions by right-clicking the SQS queue and selecting Edit Policy. You can also confirm the subscription by using the "Publish to Topic" feature in the topic view and seeing the message arrive in the queue view.
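
The permission involved is the standard SQS queue policy statement that allows an SNS topic to send messages to the queue, roughly like the following sketch (the ARNs are placeholders, and the exact statement the toolkit generates may differ):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSnsTopicSendMessage",
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:111122223333:MyQueue",
      "Condition": {
        "ArnEquals": { "aws:SourceArn": "arn:aws:sns:us-east-1:111122223333:MyTopic" }
      }
    }
  ]
}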

AWS Identity and Access Management (IAM) Policy Editor

Using IAM to restrict access to your resources is very important in keeping your account secure. In the policies that you create for IAM groups, roles, and users, you identify the resources you want to give or deny access to by their Amazon Resource Name (ARN). To make this step easier, you can drag your target resources or services from AWS Explorer to the policy editor, which automatically fills in the required ARNs.

AWS CloudFormation Stack to a Template File

When using the CloudFormation Editor, you can drag stacks from AWS Explorer to an open template file. This replaces all the contents of the template with the template from the stack. (A confirmation box appears to make sure that this is what you want to do.)

AWS at Web & PHP Con 2013

by Jeremy Lindblom | in PHP

In September, I was able to attend and speak at Web & PHP Con in San Jose, CA. It was great to be around a good group of PHP developers, talk about web development and AWS, and meet new friends.

Getting Good with the AWS SDK for PHP

On September 17th, I gave a talk called Getting Good with the AWS SDK for PHP. In my session, I gave a brief introduction to AWS and its services, taught how to use the AWS SDK for PHP, and walked through some code examples from a small PHP application built with the SDK using Amazon S3, Amazon DynamoDB, and AWS Elastic Beanstalk. Here are the slide deck, joind.in page, and Lanyrd page for the talk.

Git Educated About Git

On September 18th, I gave a talk called Git Educated About Git – 20 Essential Commands (slide deck). This talk was not related to AWS or the AWS SDK for PHP, but I used the development of the SDK as a use case during the presentation. Since we work on a combination of both publicly available and unannounced features, we don’t have a single canonical repository. Instead we have two remotes, our public GitHub repository and another private, internal repository. For fun, I also wrote and performed a song called You’re Doing Git! during my session, and you can watch the performance on YouTube.

Attending PHP Conferences

I’ve enjoyed my opportunities to attend various PHP and developer conferences and user group meetings throughout this year. I’ve found them to be a great opportunity to connect with PHP developers and help them learn more about developing on AWS and with the AWS SDK for PHP. I hope to see you at future conferences.

Getting Ready for AWS re:Invent 2013

by Norm Johanson | in .NET

AWS re:Invent is coming up again this November 12-15 in Las Vegas. Last year, Steve Roberts and I had a great time meeting with developers and discussing how they use AWS. We also gave a talk about deploying your apps from Visual Studio. To watch the screencast, see Deploying to the AWS Cloud with Visual Studio.

This year, Jim Flanagan and I are coming to re:Invent. We’ll be hanging out in the developer lounge so we can meet and chat with fellow attendees. We’ll also be giving another talk this year, in which we plan to show off the new version 2 of the AWS SDK for .NET and the new enhancements we’ve made for deploying your apps. For more information, check out TLS302 – Building Scalable Windows and .NET Apps on AWS.

Hope to see you there!

AWS re:Invent 2013

We’re all getting very excited about AWS re:Invent 2013. In just over a month, we’ll be down in Las Vegas talking to developers and customers from all over the world.

There’s a huge amount of great technical content this year, and attendees will be taking home lots of knowledge on the latest and greatest features of the AWS platform, and learning best practices for building bigger, more robust applications faster. Our team will be giving a few presentations, including TLS301 – Accelerate Your Java Development on AWS.

I hope we’ll get to meet you at the conference this year. If you weren’t able to make it last year, you can find lots of great videos of the sessions online. One of my favorites is Andy Jassy’s re:Invent Day 1 Keynote. Some of you might remember Zach Musgrave’s session last year on Developing, Deploying, and Debugging AWS Applications with Eclipse, and a few of you might have been there for my session on Being Productive with the AWS SDK for Java.

See you in Las Vegas!

AWS Regions and Windows PowerShell

by Steve Roberts | in .NET

The majority of the cmdlets in the AWS Tools for Windows PowerShell require that you specify an AWS region. Specifying a region defines the service endpoint that is used for the request, in addition to scoping the resources you want to operate on. There are, however, a couple of exceptions to this rule:

  • Some services are considered region-less. This means that the service exposes an endpoint that does not contain any region information; for example, AWS Identity and Access Management (IAM) and Amazon Route 53 fall into this category.
  • Some services expose only a single regional endpoint, usually in the US East (Northern Virginia) region. Examples in this category are Amazon Simple Email Service (SES) and AWS OpsWorks.

Cmdlets for services in these categories do not require that you specify a region and are designed, in the case of the second category, to automatically select the single regional endpoint for you. Note that although Amazon Simple Storage Service (S3) has multiple regional endpoints, its cmdlets can also operate without an explicit region, falling back to the US East (Northern Virginia) region in this scenario. Depending on the location constraints of your buckets, this may or may not work, so you might want to consider always specifying a region anyway (this also safeguards against assuming a single endpoint for services that may expand to other regions in the future).

This blog post describes how to specify the region for a cmdlet and how to specify a default region. A useful summary guide to endpoints and regions for services can be found at Regions and Endpoints in the Amazon Web Services General Reference.

Specifying the Region for a Cmdlet

All cmdlets that require region information to operate expose a -Region parameter. This parameter accepts a string value, which is the system name of the AWS region. For example, we can obtain a list of all running Amazon EC2 instances in the US West (Oregon) region as follows:

PS C:\> Get-EC2Instance -Region us-west-2

Note: For simplicity, the cmdlet examples shown here assume that your AWS credential information is being obtained automatically, as described in Handling Credentials with AWS Tools for Windows PowerShell.

Similarly, we can obtain the set of Amazon Machine Images (AMIs) for Microsoft Windows Server 2012, this time in the EU (Ireland) region:

PS C:\> Get-EC2ImageByName -Region eu-west-1 -Name "windows_2012_base"

Given these examples, you might write the following command to start an instance:

PS C:\> Get-EC2ImageByName -Region eu-west-1 -Name "windows_2012_base" | New-EC2Instance -InstanceType m1.small -MinCount 1 -MaxCount 1
New-EC2Instance: The image id '[ami-a63edbd1]' does not exist
At line:1 char:66
+ Get-EC2ImageByName -Region eu-west-1 -Name "windows_2012_base" 
        | New-EC2Instance ...
+         ~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (Amazon.PowerShe...2InstanceCmdlet:NewEC2InstanceCmdlet) [New-EC2Instance], InvalidOperationException
    + FullyQualifiedErrorId : Amazon.EC2.AmazonEC2Exception,Amazon.PowerShell.Cmdlets.EC2.NewEC2InstanceCmdlet

Oops! As you can see, the -Region parameter is scoped to the individual cmdlet, so the AMI that is returned is specific to the EU (Ireland) region. The New-EC2Instance cmdlet also needs to use the EU (Ireland) region, otherwise the AMI will not be found, so we must supply a matching -Region parameter (or, as shown later, have this region be our shell default):

PS C:\> Get-EC2ImageByName -Region eu-west-1 -Name "windows_2012_base" | New-EC2Instance -InstanceType m1.small -MinCount 1 -MaxCount 1 -Region eu-west-1
ReservationId   : r-12345678
OwnerId         : ############
RequesterId     :
GroupId         : {sg-abc12345}
GroupName       : {default}
RunningInstance : {}

Specifying a Default Region

Adding an explicit -Region parameter to each cmdlet can become awkward for anything more than one or two commands, so I set a default region for my shell. To manage this, I use the region cmdlets in the toolset:

  • Set-DefaultAWSRegion
  • Get-DefaultAWSRegion
  • Get-AWSRegion
  • Clear-DefaultAWSRegion

Set-DefaultAWSRegion accepts the (string) system name of an AWS region (similar to the -Region parameter on cmdlets) or an AWSRegion object, which can be obtained from Get-AWSRegion:

# set a default region of EU West (Ireland) for all subsequent cmdlets
PS C:\> Set-DefaultAWSRegion eu-west-1

# query the set of AWS regions (to include AWS GovCloud, add the -IncludeGovCloud switch)
PS C:\> Get-AWSRegion
Region              Name                                  IsShellDefault
------              ----                                  --------------
us-east-1           US East (Virginia)                             False
us-west-1           US West (N. California)                        False
us-west-2           US West (Oregon)                               False
eu-west-1           EU West (Ireland)                               True
ap-northeast-1      Asia Pacific (Tokyo)                           False
ap-southeast-1      Asia Pacific (Singapore)                       False
ap-southeast-2      Asia Pacific (Sydney)                          False
sa-east-1           South America (Sao Paulo)                      False

# use the region list to set another default by selection:
PS C:\> Get-AWSRegion |? { $_.Name.Contains("Tokyo") } | Set-DefaultAWSRegion

# test it!
PS C:\> Get-DefaultAWSRegion

Region              Name                                  IsShellDefault
------              ----                                  --------------
ap-northeast-1      Asia Pacific (Tokyo)                            True

Clear-DefaultAWSRegion can be used to clear the default region. After you use this cmdlet, you need to start using the -Region parameter with the service cmdlets again. In scripts that run a lot of service cmdlets, you may find it useful to use the Get-DefaultAWSRegion and Set-DefaultAWSRegion cmdlets at the start and end of the script, perhaps in conjunction with a region script parameter, to temporarily switch away from your regular shell default and restore the original default on exit.
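
A minimal sketch of that pattern, assuming a script that takes the target region as a parameter, might look like this:

param([string]$Region = "us-west-2")

# Remember the current shell default, then switch to the region passed to the script
$previousDefault = Get-DefaultAWSRegion
Set-DefaultAWSRegion $Region

# ... run service cmdlets here without repeating -Region ...

# Restore the original default, or clear it if none was set
if ($previousDefault) { Set-DefaultAWSRegion $previousDefault.Region } else { Clear-DefaultAWSRegion }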

By the way, setting a default region doesn’t preclude overriding this subsequently on a per-cmdlet basis. Simply add the -Region parameter as needed for the particular cmdlet invocation.

Using S3Link with Amazon DynamoDB

by Jason Fulghum | in Java

Today we’re excited to talk about the new S3Link class. S3Link allows you to easily link to an Amazon S3 resource in your Amazon DynamoDB data. You can use S3Link when storing Java objects in Amazon DynamoDB tables with the DynamoDBMapper class.

To use the new S3Link class, just add a member of type S3Link to your annotated class. The following User class has an S3Link member named avatar:

@DynamoDBTable(tableName = "user-table")
public class User {
    private String username;
    private S3Link avatar;

    @DynamoDBHashKey
    public String getUsername() {
        return username;
    }

    public void setUsername(String username) {
        this.username = username;
    }

    public S3Link getAvatar() {
        return avatar;
    }

    public void setAvatar(S3Link avatar) {
        this.avatar = avatar;
    }
}

Now that we have our POJO annotated, we’re ready to use DynamoDBMapper to work with our data. The following example shows three ways to use S3Link:

  • Upload a file to Amazon S3
  • Download a file from Amazon S3
  • Get an Amazon S3 client to perform more advanced operations

// Construct a mapper and pass in credentials to use when sending requests to Amazon S3
DynamoDBMapper mapper = new DynamoDBMapper(myDynamoClient, myCredentialsProvider);

// Create your objects
User user = new User();
user.setUsername("jamestkirk");

// Create a link to your data in Amazon S3
user.setAvatar(mapper.createS3Link(myBucketName, "avatars/jamestkirk.jpg"));
  
// Save the Amazon DynamoDB data for your object (does not write to Amazon S3)
mapper.save(user);
  
// Use S3Link to easily upload to Amazon S3
user.getAvatar().uploadFrom(new File("/path/to/all/those/user/avatars/jamestkirk.jpg"));

// Or use S3Link to easily download from Amazon S3
user = mapper.load(User.class, "spock");
user.getAvatar().downloadTo(new File("/path/to/downloads/spock.jpg"));

// Or grab a full Amazon S3 client to perform more advanced operations
user.getAvatar().getAmazonS3Client();

That’s all there is to using the new S3Link class. Just point it at your data in Amazon S3, and then use the link to upload and download your data.

For more information about using DynamoDBMapper, see the Using the Object Persistence Model with Amazon DynamoDB section in the Amazon DynamoDB Developer Guide.

Introducing AWS SDK Core

by Trevor Rowe | in Ruby

We’ve been working hard on version 2 of the AWS SDK for Ruby. Loren blogged about some of our upcoming plans for version 2. I’m excited to pull back the curtains and show off the work we’ve done on version 2 of the Ruby SDK.

AWS SDK Core

The AWS SDK Core library will provide a single client for each of the Amazon Web Services we support. Our initial goal is to reach feature parity between AWS SDK Core clients and Ruby SDK version 1 clients. We have made good progress, but there are still some missing features, e.g., retry logic, logging, and API reference docs.

We are also evaluating how to provide higher-level abstractions like those found in version 1, but this is not our current focus.

You can learn more about AWS SDK Core from the project README.

A New Namespace

If you dig around the AWS SDK Core source code on GitHub, you may notice we have changed namespaces from AWS:: to Aws::. We want to make it possible for users to install versions 1 and 2 of the Ruby SDK in the same project. This will make it much easier to try out the new code and to upgrade at will.
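
For example, both SDKs can be loaded in the same process without their constants colliding. The following is a minimal sketch; it assumes both gems are installed, and the version 2 client constructor shown follows the preview README and may change:

# Version 1 of the Ruby SDK defines the AWS:: namespace
require 'aws-sdk'
# AWS SDK Core (version 2) defines the separate Aws:: namespace
require 'aws-sdk-core'

s3_v1 = AWS::S3.new                        # version 1 interface
s3_v2 = Aws::S3.new(region: 'us-east-1')   # version 2 core client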

What’s Next?

After client parity, there are a lot of things on our todo list. We are releasing the code now so that we can solicit your feedback. Your feedback helps us pick our priorities. Check it out and drop us a note!