AWS Developer Blog

Performance improvements to AWS SDK for .NET

by Milind Gokarn | in .NET

We recently fixed a performance issue which affects versions of the SDK that target .NET Framework 4.5. This issue is present in SDK versions 2.0.0.0 – 2.2.4.0 and only affects synchronous API calls.

This issue causes the SDK to consume additional threads when you use the synchronous APIs. In some specific environments—for example, ASP.NET applications running on single-core machines—we have seen it intermittently introduce additional latency for SDK calls. SDK versions 2.3.x.x and later contain the fix for this issue.

 

Introducing the AWS Resource APIs for Java

by David Murray | in Java

Today we’re excited to announce the first developer preview release of the AWS Resource APIs for Java!

The AWS SDK for Java provides a set of Amazon[Service]Client classes, one for each AWS service, each exposing a direct mapping of that service’s API. These client objects have a method for each operation that the service supports, with corresponding POJOs representing the request parameters and the response data. Using this "low-level" client API gives you full control over the requests you make to the service, so you can tune the behavior and performance of your calls very precisely, but it can also be a bit intimidating to a new user.

With the resource APIs, we’re hoping to improve this experience by providing a higher-level, object-oriented abstraction on top of the low-level SDK clients. Instead of a single client object exposing a service’s entire API, with the resource APIs we’ve defined a class representing each of the conceptual "resources" that you interact with while using a service. These classes expose getters for data about the resource, actions that can be taken on the resource, and links to other related resources. For example, using the EC2 API:

Instance instance = ec2.getInstance("i-xxxxxxxx");
System.out.println(instance.getDnsName());
instance.terminate();

Trying the API

First you’ll need to get your hands on the library. The easiest way to do this is via Maven:

    <dependency>
      <groupId>com.amazonaws.resources</groupId>
      <artifactId>aws-resources</artifactId>
      <version>0.0.1</version>
      <type>pom</type>
    </dependency>

Or alternatively, you can download the preview release here.

Creating a Service

A service object is the entry point for creating and interacting with resource objects. You create service instances by using the ServiceBuilder:

EC2 ec2 = ServiceBuilder.forService(EC2.class)
    .withCredentials(new ProfileCredentialsProvider("test-app"))
    .withRegion(Region.getRegion(Regions.US_WEST_2))
    .build();

Moving on to Resource objects

From the Service instance, you can follow references to resources exposed by the service:

// Get an instance reference by id
Instance instance = ec2.getInstance("i-xxxxxxxx");
System.out.println(instance.getDnsName());

// Enumerate all current instances for this account/region
for (Instance inst : ec2.getInstances()) {
    System.out.println(inst.getDnsName());
}

You can also follow references from one resource to another:

Instance instance = ...;
Subnet subnet = instance.getSubnet();
System.out.println(subnet.getCidrBlock());

for (Volume volume : instance.getVolumes()) {
    System.out.println(volume.getVolumeType() + " : " + volume.getSize());
}

Conclusion

This release is a developer preview, with support for Amazon EC2, AWS Identity and Access Management, and Amazon Glacier. You can browse through the API or go straight to the code. We’ll be adding support for more services and tweaking the API over the next couple months as we move toward GA. We’re very interested to hear your thoughts on the APIs. Please give it a try and let us know what you think on GitHub, Twitter, or here in the comments!

AWS Ant Tasks

by Jesse Duarte | in Java

Introducing a new AWS Labs project: AWS Ant Tasks. These are custom tasks to use within your Ant builds that allow easy access to AWS services. Ant is a commonly used tool for building Java projects, and now you can use it to deploy your project to AWS within the same build. To use these tasks, simply reference “taskdefs.xml” in the project’s jar. The services currently available are Amazon S3 and Amazon Elastic Beanstalk, with AWS OpsWorks on its way soon. If you currently develop a Java application and deploy new versions to an AWS service often, consider trying these tasks out!

Here’s an example of an Ant build that updates an Elastic Beanstalk environment with a new war file:

<project basedir="." default="deploy" name="mybeanstalkproject">
    <taskdef resource="taskdefs.xml" 
             classpath="lib/aws-java-sdk-ant-tasks-1.0.0.jar" />
	
    <target name="compile">
        <mkdir dir="build/classes"/> 
        <javac srcdir="src" destdir="build/classes" />
    </target>

    <target name="war" depends="compile">
	    <war destfile="dist/MyProject.war" webxml="WebContent/WEB-INF/web.xml">
	        <fileset dir="WebContent" />
	        <lib dir="WebContent/WEB-INF/lib"/>
	        <classes dir="build/classes" />
	    </war>
    </target>

    <target name="deploy" depends="war">
         <deploy-beanstalk-app bucketName="mys3bucket" 
             versionLabel="version 0.2"
             versionDescription="Version 0.2 of my app" 
             applicationName="myapp" 
             environmentName="myenv" 
             file="dist/MyProject.war" />
    </target>
</project>

Now if you run ant deploy, this build file will compile, war, and deploy your project to your Elastic Beanstalk environment. You can immediately view the results by heading to your environment. If you use Ant to build your project, this task has the potential to be very helpful for deploying in one step as soon as your code is ready. For more in-depth documentation and example code, check out the README documentation on GitHub.

What’s the next AWS service you’d like to see with Ant integration? Is there an Ant task you’d like to contribute to the GitHub project?

Determining an Application’s Current Region

by Jason Fulghum | in Java

AWS regions allow you to run your applications in multiple geographically distributed locations all over the world. This allows you to position your applications and data near your customers for the best possible experience. There are ten AWS regions available today:

  • 4 regions in North America
  • 4 regions in Asia Pacific
  • 1 region in South America
  • 1 region in Europe

When you host an application in one region, you typically want to use the AWS services available in that region, since they’ll give you lower latency and higher throughput. If your application is running on Amazon EC2 instances, the latest version of the AWS SDK for Java enables you to easily detect which AWS region those instances are in. Previously, if you wanted to run your application in multiple regions, you had to give it a region-specific configuration so it knew which regional endpoints to use. The new Regions.getCurrentRegion() method makes this a lot easier. For example, if you launch an Amazon EC2 instance in us-west-1 and run your application on that instance, the application can detect that it’s running in us-west-1 and use that information to configure itself to talk to other services in us-west-1.

// When running on an Amazon EC2 instance, this method
// will tell you what region your application is in
Region region = Regions.getCurrentRegion();

// If you aren’t running in Amazon EC2, then region will be null
// and you can set whatever default you want to use for development
if (region == null) region = Region.getRegion(Regions.US_WEST_1);

// Then just use that same region to construct clients 
// for all the services you want to work with
AmazonDynamoDBClient dynamo = new AmazonDynamoDBClient();
AmazonSQSClient sqs = new AmazonSQSClient();
dynamo.setRegion(region);
sqs.setRegion(region);

How are you using AWS regions? Are you running any applications in multiple regions?

End of Life of PEAR Channel

by Michael Dowling | in PHP

There’s been a noticeable wave of popular PHP projects recently announcing that they will no longer support PEAR as an installation method. Because the AWS SDK for PHP provides a PEAR channel, we’ve been very interested in the discussion in the community on PEAR channel support.

PEAR has been one of the many ways to install the AWS SDK for PHP since 2010. While it’s served us well, better alternatives for installing PHP packages are now available (namely, Composer), and all of the PEAR dependencies of the AWS SDK for PHP have stopped providing updates to their PEAR channels.

Symfony and Pirum

Fabien Potencier recently blogged about the "Rise of Composer and the fall of PEAR", stating that he would soon no longer update the PEAR channels for the packages he maintains (e.g., Symfony, Twig, Swiftmailer, etc.):

I've been using PEAR as a package manager since my first PHP project back
in 2004. I even wrote a popular PEAR channel server,
Pirum (http://pirum.sensiolabs.org/). But today, it's time for me to move
on and announce my plan about the PEAR channels I'm managing.

One of the projects that we rely on to build the PEAR channel for the AWS SDK for PHP is Pirum, and it has made building a PEAR channel slightly less cumbersome. That said, we’ve had to make small modifications to Pirum over the years to suit our needs. With the announcement that Pirum is no longer maintained, we now have much less confidence in relying on it as a tool that powers one of our installation methods.

One of the Symfony components Fabien published to a PEAR channel was the Symfony EventDispatcher. The AWS SDK for PHP has a PEAR dependency on the EventDispatcher PEAR channel. Because the channel is no longer updated, users of the SDK via PEAR will not receive any bugfix updates to the EventDispatcher.

PHPUnit

PHPUnit, the most popular unit testing framework for PHP applications, recently stopped updating their PEAR channels:

We are taking the next step in retiring the PEAR installation method with
today's release of PHPUnit 3.7.35 and PHPUnit 4.0.17. These two releases
are the last versions of PHPUnit released as PEAR packages. Installing
them using the PEAR Installer will trigger a deprecation message on every
execution of the commandline test runner.

PHPUnit is the testing framework used to test the AWS SDK for PHP.

Guzzle

Guzzle, another dependency of the AWS SDK for PHP’s PEAR channel, has not provided updates to its PEAR channel since the 3.9.0 release.

AWS SDK for PHP PEAR Channel

With all of these contributing factors, we will no longer be providing updates to the AWS SDK for PHP PEAR channel starting on Monday, September 15th, 2014. Our PEAR channel will still be available to download older versions of the SDK, but it will not receive any further updates after this date.

If you are currently using the PEAR channel to install the SDK or build downstream packages (e.g., RPMs), please begin to update your installation mechanism to one of the following alternatives:

  1. Composer (the recommended method).
  2. Our zip package that contains all of the dependencies and autoloader. Available at https://github.com/aws/aws-sdk-php/releases/latest.
  3. Our phar file that contains all of the dependencies and sets up an autoloader. Available at https://github.com/aws/aws-sdk-php/releases/latest.

To stay up to date with important fixes and updates, we strongly recommend migrating to one of the installation methods listed above.
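
For example, after adding the SDK to your project with Composer (composer require aws/aws-sdk-php), your application only needs Composer’s generated autoloader to start using the SDK. The following is a minimal sketch using the version 2 style S3Client factory; the region is a placeholder you would replace with your own.

<?php
// Composer's autoloader makes the SDK and its dependencies available
require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Create an Amazon S3 client (version 2 of the SDK)
$s3 = S3Client::factory(array('region' => 'us-west-2'));

// List your buckets to verify that the SDK is installed and loadable
$result = $s3->listBuckets();
foreach ($result['Buckets'] as $bucket) {
    echo $bucket['Name'] . PHP_EOL;
}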

More instructions on installing the AWS SDK for PHP can be found in the user guide.

S3 Server Side Encryption with Windows PowerShell

by Steve Roberts | in .NET

Today we have a guest post by AWS Solutions Architect David Veith on making use of Amazon S3’s server-side encryption with customer-provided keys.

The release of version 2.1.4 of the AWS Tools for Windows PowerShell introduced support for a new server-side encryption method for Amazon S3. You now have three primary options for encrypting data at rest in S3:

  • You can secure your sensitive data before you ever send it to AWS by using client-side encryption.
  • You can use S3’s built-in server-side encryption (SSE), so your data is encrypted with AWS keys and processes while AWS manages the keys.
  • And now you can get all of the benefits and ease of use of server-side encryption, but with your own customer-provided keys (SSE-C).

This blog post describes how you can use the AWS Tools for Windows PowerShell to secure your data at rest in S3 with the two methods of S3 server-side encryption (SSE-C and SSE).

Server-Side Encryption with Customer-Provided Keys (SSE-C)

With SSE-C, S3 encrypts your data on your behalf using keys that you provide and manage. Because S3 performs the encryption for you, you get the benefits of using your encryption keys without the burden or cost of writing or executing your own encryption code. This method of encryption is not available via the AWS console.

Protecting Your Keys

S3 discards your key immediately upon encrypting/decrypting your object; because the key is never retained, you lose your object if you lose your key. For this reason, it is very important to take special precautions to store your keys safely and securely. If you use multiple keys, you are responsible for tracking which encryption key you provided for each object. You should also consider implementing an envelope encryption process when storing encrypted objects in S3, as described in the article Client-Side Data Encryption with the AWS SDK for Java and Amazon S3.

Creating Your Key

The following commands use the .NET AES class in System.Security.Cryptography to create a base64 encoded key.

$Aes = New-Object System.Security.Cryptography.AesManaged
$Aes.KeySize = 256
$Aes.GenerateKey()
$Base64key = [System.Convert]::ToBase64String($Aes.Key)

Writing an Object (SSE-C)

The Write-S3Object cmdlet is used to store an object in S3, encrypting it at rest with a customer-provided key. The key is base64-encoded and the encryption method is specified as AES256. After your object is encrypted, your key is discarded.

$initialfile  = "YourFile"
$bucket       = "YourBucketName"
$objectkey    = "YourKeyName" 

try 
{
	Write-S3Object -Region us-west-2 -File $initialfile -BucketName $bucket -Key $objectkey -ServerSideEncryptionCustomerProvidedKey $Base64key -ServerSideEncryptionCustomerMethod AES256
}
catch [system.exception] 
{
	Write-Host  "Error: " $_.Exception.Message
}

Reading an Object (SSE-C)

The Read-S3Object cmdlet is used to retrieve an encrypted object from S3 using the same customer-provided key that was used to encrypt it when it was originally stored in S3. The key is base64-encoded and the encryption method is specified as AES256. After your object is decrypted, your key is discarded.

$ssecfileout  = "YourOutputFile" 
$bucket       = "YourBucketName"
$objectkey    = "YourKeyName" 

try 
{
	Read-S3Object -Region us-west-2 -BucketName $bucket -Key $objectkey -File $ssecfileout  -ServerSideEncryptionCustomerProvidedKey $Base64key -ServerSideEncryptionCustomerMethod AES256
}
catch [system.exception]
{
	Write-Host  "Error: " $_.Exception.Message
}

Copying an Object (SSE-C)

The Copy-S3Object cmdlet is used to copy an encrypted object in S3 to a new key. Two keys are required for this scenario. The first key is required to decrypt the original object (because S3 never stores the key). The second key is used to encrypt the new copy. In this case, we used the same encryption key and the same bucket, but that is not a requirement. As always, your keys are discarded after use.

$bucket         = "YourBucketName"
$objectkey      = "YourKeyName" 
$copyobjectkey  = "YourDestinationKeyName"

try 
{
	Copy-S3Object -Region us-west-2 -BucketName $bucket -Key $objectkey -DestinationBucket $bucket  -DestinationKey $copyobjectkey -CopySourceServerSideEncryptionCustomerMethod AES256 -CopySourceServerSideEncryptionCustomerProvidedKey $Base64key -ServerSideEncryptionCustomerProvidedKey $Base64key -ServerSideEncryptionCustomerMethod AES256
}
catch [system.exception] 
{
	Write-Host  "Error: " $_.Exception.Message
}

S3 Server-Side Encryption (SSE) with AWS Keys

This is the simplest method of encrypting your data at rest in S3. With SSE, S3 encrypts your data on your behalf using AWS keys and processes. You don’t need to track, store or provide any encryption keys. This method of encryption is also available via the AWS console.

Writing an Object (SSE)

As mentioned earlier, the Write-S3Object cmdlet is used to store an object in S3, in this case encrypting the object at rest using AWS keys and processes.

$initialfile  = "YourFile"
$bucket       = "YourBucketName"
$objectkey    = "YourKeyName" 

try 
{
	Write-S3Object -Region us-west-2 -File $initialfile -BucketName $bucket -Key $objectkey -ServerSideEncryption AES256
}
catch [system.exception] 
{
	Write-Host  "Error: " $_.Exception.Message
}

Reading an Object (SSE)

The Read-S3Object cmdlet is used to retrieve an object from S3. If the object is encrypted in S3, a decrypted object is returned.

$ssefileout   = "YourOutputFile" 
$bucket       = "YourBucketName"
$objectkey    = "YourKeyName" 

try 
{
	Read-S3Object -Region us-west-2 -BucketName $bucket -Key $objectkey -File $ssefileout  
}
catch [system.exception]
{
	Write-Host  "Error: " $_.Exception.Message
}

Copying an Object (SSE)

The Copy-S3Object cmdlet can be used to make a copy of a server-side encrypted object. When copying an object, encryption must be specified explicitly; otherwise, the copy will not be encrypted on the server side. The sample below specifies server-side encryption for the copy.

$bucket         = "YourBucketName"
$objectkey      = "YourKeyName" 
$copyobjectkey  = "YourDestinationKeyName"

try 
{
	Copy-S3Object -Region us-west-2 -BucketName $bucket -Key $objectkey -DestinationBucket $bucket  -DestinationKey $copyobjectkey -ServerSideEncryption AES256
}
catch [system.exception] 
{
	Write-Host  "Error: " $_.Exception.Message
}

Summary

This post showed the options available for encrypting data at rest in S3 when using the Read-S3Object, Write-S3Object, and Copy-S3Object cmdlets to move data around.

For more information, see the Client Side Data Encryption with AWS SDK for .NET and Amazon S3 blog post.

Accessing Private Content in Amazon CloudFront

by Jason Fulghum | in Java

Amazon CloudFront is an easy-to-use, high-performance, and cost-efficient content delivery service. With over 50 worldwide edge locations, CloudFront is able to deliver your content to your customers with low latency in any part of the world.

In addition to serving public content for anyone on the Internet to access, you can also use Amazon CloudFront to distribute private content. For example, if your application requires a subscription, you can use Amazon CloudFront’s private content feature to ensure that only authenticated users can access your content and prevent users from accessing your content outside of your application.

Accessing private content in Amazon CloudFront is now even easier with the AWS SDK for Java. You can now easily generate authenticated links to your private content. You can distribute these links or use them in your application to enable customers to access your private content. You can also set expiration times on these links, so even if your application gives a link to a customer, they’ll only have a limited time to access the content.

To use private content with Amazon CloudFront, you’ll need an Amazon CloudFront distribution with private content enabled and a list of authorized accounts you trust to access your private content. From the Create Distribution Wizard in the Amazon CloudFront console, start creating a web distribution. In the Origin Settings section, select an Amazon S3 bucket that you’ve created for private content only, and make sure you select the options that restrict bucket access and grant CloudFront read permissions on the bucket.

This will set the permissions on your Amazon S3 bucket to protect your content from being accessed publicly, but still allow CloudFront to access your content.

Continue creating your distribution, and at the bottom of the Default Cache Behavior Settings section, make sure you enable the Restrict Viewer Access option and select self as the trusted signer. These are called trusted signers because you’re trusting URLs that are signed by them and allowing them to access your private content. In our example, we’re using self as the only trusted signer, which means that only your account can sign URLs to access your CloudFront private content.

The last thing you need to set up in your account is a CloudFront key pair. This is the public/private key pair that you’ll use to sign requests for your CloudFront private content. Any trusted signer that you configure for your CloudFront distribution will need to set up their own CloudFront key pair for their account in order to sign requests for your CloudFront private content. You can configure your CloudFront key pair through the Security Credentials page in the IAM console. Make sure you download your private key, and make a note of the key pair ID listed in the AWS Management Console.

Now that your account and distribution are configured, you’re ready to use the SDK to generate signed URLs for accessing your CloudFront private content. The CloudFrontUrlSigner class in the AWS SDK for Java makes it easy to create signed URLs that you and your customers can use to access your private content. In the following example, we create a signed URL that expires in 60 seconds and allows us to access the private foo/bar.html content in our CloudFront distribution.

// the DNS name of your CloudFront distribution, or a registered alias
String distributionDomainName;   
// the private key you created in the AWS Management Console
File cloudFrontPrivateKeyFile;
// the unique ID assigned to your CloudFront key pair in the console    
String cloudFrontKeyPairId;   
Date expirationDate = new Date(System.currentTimeMillis() + 60 * 1000);

String signedUrl = CloudFrontUrlSigner.getSignedURLWithCannedPolicy(
           Protocol.https, 
           distributionDomainName, 
           cloudFrontPrivateKeyFile,   
           "foo/bar.html", // the resource path to our content
           cloudFrontKeyPairId, 
           expirationDate);

You can also attach additional policy restrictions to the presigned URLs you create with CloudFrontUrlSigner. The following example shows how to create a policy to restrict access to a CIDR IP range, which can be useful to limit access to your private content to users on a specific network:

// the DNS name of your CloudFront distribution, or a registered alias
String distributionDomainName;   
// the private key you created in the AWS Management Console,
// loaded as a java.security.PrivateKey object
PrivateKey cloudFrontPrivateKey;
// the unique ID assigned to your CloudFront key pair in the console   
String cloudFrontKeyPairId;   
// the CIDR range limiting which IP addresses are allowed to access your content
String cidrRange; 
// the resource path to our content
String resourcePath  = "foo/bar.html";  
Date expirationDate = new Date(System.currentTimeMillis() + 60 * 1000);

String policy = CloudFrontUrlSigner.buildCustomPolicyForSignedUrl(
                    resourcePath,
                    expirationDate,
                    cidrRange,
                    null);

String signedUrl = CloudFrontUrlSigner.getSignedURLWithCustomPolicy(
                    resourcePath,
                    cloudFrontKeyPairId,
                    cloudFrontPrivateKey,
                    policy);

Are you already an Amazon CloudFront customer? Have you tried out Amazon CloudFront private content yet?

Introducing S3Link to DynamoDBContext

by Mason Schneider | in .NET

S3Link has been in the AWS SDK for Java for a while now, and we have decided to introduce it to the AWS SDK for .NET as well. This feature allows you to easily access your Amazon S3 resources through a link stored in your Amazon DynamoDB data. S3Link requires minimal configuration and is used with the .NET DynamoDB Object Persistence Model. To use S3Link, simply add it as a property to your DynamoDB annotated class and create a bucket in S3. The following Book class has an S3Link property named CoverImage.

// Create a class for DynamoDBContext
[DynamoDBTable("Library")]
public class Book
{
	[DynamoDBHashKey]   
	public int Id { get; set; }

	public S3Link CoverImage { get; set; }

	public string Title { get; set; }
	public int ISBN { get; set; }

	[DynamoDBProperty("Authors")]    
	public List<string> BookAuthors { get; set; }
}

Now that we have an S3Link in our annotated class, we are ready to manage an S3 object. The following code does four things:

  1. Creates and saves a book to DynamoDB
  2. Uploads the cover of the book to S3
  3. Gets a pre-signed URL to the uploaded object
  4. Loads the book back in using the Context object and downloads the cover of the book to a local file

// Create a DynamoDBContext
var context = new DynamoDBContext();

// Create a book with an S3Link
Book myBook = new Book
{
	Id = 501,
	CoverImage = S3Link.Create(context, "myBucketName", "covers/AWSSDK.jpg", Amazon.RegionEndpoint.USWest2),
	Title = "AWS SDK for .NET Object Persistence Model Handling Arbitrary Data",
	ISBN = 999,
	BookAuthors = new List<string> { "Jim", "Steve", "Pavel", "Norm", "Milind" }
};

// Save book to DynamoDB
context.Save(myBook);

// Use S3Link to upload the content to S3
myBook.CoverImage.UploadFrom("path/to/covers/AWSSDK.jpg");

// Get a pre-signed URL for the image
string coverURL = myBook.CoverImage.GetPreSignedURL(DateTime.Now.AddHours(5));

// Load book from DynamoDB
myBook = context.Load<Book>(501);

// Download file linked from S3Link
myBook.CoverImage.DownloadTo("path/to/save/cover/otherbook.jpg");

And that’s the general use for S3Link. Simply provide it a bucket and a key, and then you can upload and download your data.

Importing VM Images and Volumes with PowerShell and C#

by Steve Roberts | in .NET

Version 2.2.0 of the AWS Tools for Windows PowerShell and AWS SDK for .NET contained updates to make it easy to import virtual machine images and disk volumes into Amazon EC2. In the case of PowerShell, there are revised Import-EC2Instance and Import-EC2Volume cmdlets, while in the SDK there is a new helper class, DiskImageImporter, in the Amazon.EC2.Import namespace. This post examines the updates and shows you how to use PowerShell or C# to upload your images.

Importing via PowerShell

You can import virtual machine images into Amazon EC2 by using Import-EC2Instance, or you can use Import-EC2Volume to import disk images as EBS volumes. In both cases, Amazon EC2 must convert the uploaded content before it can be used. To track progress of the conversion, the cmdlets return a ConversionTask object containing the task ID plus other supporting information.

Both cmdlets can also be used to upload the content but defer the conversion to a later time. In this mode, the cmdlets return the Amazon S3 object key of an import manifest that has been created on your behalf, based upon the parameters you supply and the image file being uploaded. To start the conversion, you run the same cmdlet that was used to upload the artifacts, but this time you pass the object key to the -ManifestFileKey parameter and remove the -ImageFile parameter. The cmdlets will then request that Amazon EC2 begin the conversion and return a ConversionTask object to you.

Monitoring Conversion Progress

Starting a conversion is great, but then what? To track the conversion progress, you can use the Get-EC2ConversionTask cmdlet. This cmdlet accepts a ConversionTask object (strictly speaking, it wants the conversion task ID contained within the object) and outputs the current status of the conversion. The ConversionTask object can be from either an instance conversion or a volume conversion.

You can also cancel a conversion using the Stop-EC2ConversionTask cmdlet. Just like Get-EC2ConversionTask, this cmdlet takes a ConversionTask object, or conversion task ID, and it instructs Amazon EC2 to abandon the in-progress conversion.
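
For example, assuming $task holds the ConversionTask object returned by one of the import cmdlets, checking on or canceling the conversion looks something like the sketch below. (This is a sketch: it assumes the cmdlets accept the task ID through a -ConversionTaskId parameter and that the returned object exposes it as a ConversionTaskId property.)

# $task holds the ConversionTask object returned by Import-EC2Instance or Import-EC2Volume
$taskId = $task.ConversionTaskId

# Check on the progress of the conversion
Get-EC2ConversionTask -ConversionTaskId $taskId

# Abandon the conversion if it is no longer needed
Stop-EC2ConversionTask -ConversionTaskId $taskId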

Now that we know the basics of uploading and converting, let’s take a look at some examples of the cmdlets and the options available.

Upload and Convert

Uploading with immediate conversion is the default operating mode for the cmdlets. For this mode, the cmdlet requires the name of the image file to upload and the destination bucket for the artifacts (the bucket will be created if required). When importing a VM image, you add parameters to specify the instance type, architecture, and platform (the default platform is ‘Windows’ if you don’t specify it). For example:

Import-EC2Instance -ImageFile C:\Windows.2012-disk1.vmdk `
                   -InstanceType m3.xlarge `
                   -Architecture x86_64 `
                   -BucketName myvmimages ` 
                   -KeyPrefix windows.2012 

The -KeyPrefix parameter is optional and can be used to collect related image files together in Amazon S3. Each image file will be uploaded to an S3 key path of keyprefix/guid, with a new guid used for each upload.

Here’s an example of importing a disk volume:

Import-EC2Volume -ImageFile C:\Windows.2012-disk2.vmdk `
                 -BucketName myvmimages `
                 -KeyPrefix windows.2012.volume2 `
                 -AvailabilityZone us-west-2a

For a volume, you must specify the Availability Zone for the region it is being imported to. You can retrieve the set of Availability Zones for a region by using the Get-EC2AvailabilityZone cmdlet:

PS C:\> Get-EC2AvailabilityZone -Region us-west-2 | select ZoneName

ZoneName
--------
us-west-2a
us-west-2b
us-west-2c

Upload Only, Defer Conversion

To upload an image file but defer conversion, you add the -UploadOnly switch to the cmdlets. Parameters related to Amazon EC2 instances or Availability Zones are omitted:

Import-EC2Instance -UploadOnly `
                   -ImageFile C:\Windows.2012-disk1.vmdk `
                   -BucketName myvmimages `
                   -KeyPrefix windows.2012


Import-EC2Volume -UploadOnly `
                 -ImageFile C:\Windows.2012-disk2.vmdk `
                 -BucketName myvmimages `
                 -KeyPrefix windows.2012

Notice how both cmdlets take the same parameters for this mode. When run with the -UploadOnly switch, both cmdlets return to the pipeline the Amazon S3 object key of the import manifest. You’ll need this to start the conversion at a later date, as explained below.

Regarding Volume Sizes

None of the examples in this post use the -VolumeSize parameter. In this case, the import uses the size of the image file, rounded up to the nearest GB, with a minimum volume size of 8 GB. To use a different volume size, simply add the -VolumeSize parameter. EBS volumes can be as small as 1 GB, but EC2 instance boot volumes must be 8 GB or greater.
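
For instance, the volume import shown earlier could request an explicit 20 GB volume, as in the following sketch, which simply adds -VolumeSize to the previous example:

Import-EC2Volume -ImageFile C:\Windows.2012-disk2.vmdk `
                 -BucketName myvmimages `
                 -KeyPrefix windows.2012.volume2 `
                 -AvailabilityZone us-west-2a `
                 -VolumeSize 20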

Starting a Deferred Conversion

To start conversion of a disk image that has been uploaded previously, you need the S3 object key of the import manifest, which you pass to the -ManifestFileKey parameter, in addition to parameters describing the instance or volume (but omitting the -ImageFile parameter, since the image has been uploaded already). The -ManifestFileKey parameter takes an array of manifest key names, so you can upload several images and then request conversion as a batch operation. Note that when dealing with VM images, the EC2 instance parameter values apply to all of the images so you can’t mix Linux and Windows, or 32-bit/64-bit images in a batch:

PS C:\> $manifest1 = Import-EC2Instance -ImageFile ...
windows.2012/f21dcaea...
PS C:\> $manifest2 = Import-EC2Instance -ImageFile ...
windows.2012/4e1dbaea...
PS C:\> $manifest3 = Import-EC2Instance -ImageFile ...
windows.2012/d32dcfed...

PS C:\> Import-EC2Instance `
               -ManifestFileKey @($manifest1, $manifest2, $manifest3) `
               -InstanceType m3.xlarge `
               -Architecture ... 

Handling Upload Errors

If the upload of the image file fails, the cmdlets will leave what has been successfully uploaded in the Amazon S3 bucket and return to you the S3 object key of the manifest, together with instructions on how to resume the operation or cancel and delete the uploaded content. Here’s an example of the output message when an upload fails:

PS C:\> Import-EC2Instance -ImageFile C:\Windows.2012-disk1.vmdk ...

Import-EC2Instance : The import operation failed to upload one or more image file parts.
To resume: re-run the cmdlet and add the -Resume parameter.
To cancel and remove uploaded artifacts: inspect the S3 bucket and delete all objects with key prefix
'windows.2012/f21dcaea-4cff-472d-bb0f-7aed0dcf7cf9'.
.....

To resume the upload, the command to run would therefore be:

Import-EC2Instance -ImageFile C:\Windows.2012-disk1.vmdk `
                   -Resume `
                   -KeyPrefix windows.2012 `
                   -InstanceType ...

Note that the -Resume parameter currently applies only to the upload phase of the import process. If the subsequent EC2 conversion fails with an error, you need to submit a new import request (with new upload) to correct whatever error EC2 has reported.

Importing via C#

The PowerShell cmdlets perform the upload and conversion work using a new helper class in the AWS SDK for .NET, so the same capabilities are available to SDK-based applications. The new class is in the Amazon.EC2.Import namespace and is called DiskImageImporter. The class handles uploads and conversion of VM images and disk volumes and can also be used to perform upload-only or upload-and-convert usage scenarios, just like the cmdlets.

The following code snippet shows how you can upload and request conversion of an image file from C#:

using Amazon.EC2.Import;

...
const string imagesBucket = "mybucketname";

var importer = new DiskImageImporter(RegionEndpoint.USWest2, imagesBucket);

var launchConfiguration = new ImportLaunchConfiguration
{
    InstanceType = "m3.xlarge",
    Architecture = "x86_64",
    Platform = "Windows",
};

importer.ImportInstance
  (@"C:Windows.2012-disk1.vmdk",
   null,                   // file format -- inferred from image extension
   null,                   // volume size -- infer from image size
   "windows.2012",         // key prefix of artifacts in S3
   launchConfiguration,    // EC2 instance settings
   (message, percentComplete) =>
       Console.WriteLine(message +
           (percentComplete.HasValue
               ? string.Format("{0}% complete", percentComplete.Value)
               : string.Empty))
 );
...

More information on importing images and volumes to Amazon EC2 can be found at Importing a VM into Amazon EC2.

As you can see, importing virtual machine artifacts to Amazon EC2 is now easy and convenient for Windows and .NET users. Be sure to let us know via our comments section what other operations you’d find convenient to have available!

Version 2 Resource Interfaces

by Trevor Rowe | in Ruby

Version 1 of the AWS SDK for Ruby provides a 1-to-1 client class for each AWS service. For many services it also provides a resource-oriented interface. These resource objects use the client to provide a more natural object-oriented experience when working with AWS APIs.

We are busy working on resource interfaces for the v2 Ruby SDK.

Resource Interfaces

The following examples use version 1 of the aws-sdk gem. This first example uses the 1-to-1 client to terminate running instances:

ec2 = AWS::EC2::Client.new
resp = ec2.describe_instances
resp[:reservations].each do |reservation|
  reservation[:instances].each do |instance|
    if instance[:state][:name] == 'running'
      ec2.terminate_instances(instance_ids:[instance[:instance_id]])
    end
  end
end

This example uses the resource abstraction to start instances in the stopped state:

ec2 = AWS::EC2.new
ec2.instances.each do |instance|
  instance.start if instance.status == :stopped
end

Resources for Version 2

We have learned a lot of lessons from our v1 resource interfaces and are busy working on the v2 abstraction. Here are some of the major changes from v1 to v2.

Memoization Interfaces Removed

The version 1 resource abstraction was very chatty by default. It did not memoize any resource attributes and a user could unknowingly trigger a large number of API requests. As a workaround, users could use memoization blocks around sections of their code.

In version 2, all resource objects will hold onto their data/state until you explicitly call a method to reload the resource. We are working hard to make it very obvious when calling a method on a resource object will generate an API request over the network.

Less Hand-Written Code and More API Coverage

The version 1 SDK has hand-coded resource and collection classes. In version 2, our goal is to extend the service API descriptions that power our clients with resource definitions. These definitions will be consumed to generate our resource classes.

Using resource definitions helps eliminate a significant amount of hand-written code, ensures interfaces are consistent, and makes it easier for users to contribute resource abstractions.

We also plan to provide extension points to resources to allow for custom logic and more powerful helpers.

Resource Waiters

It is a common pattern to operate on a resource and then wait for the change to take effect. Waiting typically requires making an API request, asserting some value has changed and optionally waiting and trying again. Waiting for a resource to enter a certain state can be tricky. You need to deal with terminal cases, failures, transient errors, etc.

Our goal is to provide waiter definitions and attach them to our resource interfaces. For example:

# create a new table in Amazon DynamoDB
table = dynamodb.table('my-table')
table.update(provisioned_throughput: { 
  read_capacity_units: 1000
})

# wait for the table to be ready
table.wait_for(:status, 'ACTIVE')

In a follow-up blog post, I will introduce the resources branch of the SDK that is available today on GitHub. Please take a look; feedback is always welcome!