AWS Developer Blog

Waiters

by Trevor Rowe | in Ruby

We’ve added a feature called Waiters to the v2 AWS SDK for Ruby, and I am pretty excited about it. A waiter is a simple abstraction around the pattern of polling an AWS API until a desired state is reached.

Basic Usage

This simple example shows how to use waiters to block until a particular EC2 instance is running:

ec2 = Aws::EC2::Client.new
ec2.wait_until(:instance_running, instance_ids:['i-12345678'])

Waiters will not wait indefinitely and can fail. Each waiter has a default polling interval and a maximum number of attempts. If a waiter encounters an unexpected error or fails to reach the desired state in time, it raises an error:

begin
  ec2.wait_until(:instance_running, instance_ids:['i-12345678'])
rescue Aws::Waiters::Errors::WaiterFailed
  # oops
end

Configuration

You can modify the default polling interval and the maximum number of attempts by passing a block.

# this will wait up to ~ one hour
ec2.wait_until(:instance_running, instance_ids:['i-12345678']) do |w|

  # seconds between each attempt
  w.interval = 15

  # maximum number of polling attempts before giving up
  w.max_attempts = 240

end

Callbacks

In addition to the interval and maximum attempts, you can configure callbacks that trigger before each polling attempt and before sleeping between attempts.

ec2.wait_until(:instance_running, instance_ids:['i-12345678']) do |w|

  w.before_attempt do |n|
    # n - the number of attempts made
  end

  w.before_wait do |n, resp|
    # n - the number of attempts made
    # resp - the client response from the previous attempt
  end

end

You can throw :success or :failure from these callbacks to stop the waiter immediately. You can use this to write your own delay and back-off logic, as the examples below show.

Here I am using a callback to perform exponential back-off between polling attempts:

ec2.wait_until(:instance_running, instance_ids:['i-12345678']) do |w|
  w.interval = 0 # disable normal sleep
  w.before_wait do |n, resp|
    sleep(n ** 2)
  end
end

This example gives up after one hour:

ec2.wait_until(:instance_running, instance_ids:['i-12345678']) do |w|
  one_hour_later = Time.now + 3600
  w.before_wait do |n, resp|
    throw :failure, 'waited too long' if Time.now > one_hour_later
  end
end
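And here is a rough sketch of the opposite case: ending the wait early by throwing :success from a callback. The instance_usable? helper is hypothetical; substitute whatever application-level check makes sense for you.

ec2.wait_until(:instance_running, instance_ids:['i-12345678']) do |w|
  w.before_attempt do |n|
    # instance_usable? is a hypothetical application-level check;
    # throwing :success stops the waiter immediately
    throw :success if n > 0 && instance_usable?('i-12345678')
  end
end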

Waiters and Resources, Looking Ahead

You may have noticed that some waiters are already exposed through the resource classes:

ec2 = Aws::EC2::Resource.new
instance = ec2.instance('i-12345678')
instance.stop
instance.wait_until_stopped
puts instance.id + ' is stopped'

In addition to connecting more waiters and resources, I’m excited to look into batch waiters. Imagine the following use case:

instances = ec2.create_instances(min_count: 5, ...)
instances.wait_until_running
puts "the following new instances are now running:n"
puts instances.map(&:id)

Documentation

Waiters are documented in the Ruby SDK API reference. Each service client documents its #wait_until method and provides a list of the available waiter names, including the waiter methods on Aws::EC2::Client.
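You can also discover the supported waiter names at runtime via the client's #waiter_names method; here is a quick sketch, with the output abbreviated:

ec2 = Aws::EC2::Client.new

# returns the symbols accepted by #wait_until
ec2.waiter_names
#=> [:instance_running, :instance_stopped, ...]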

Give waiters a try and let us know what you think!

AWS re:Invent 2014

by Jeremy Lindblom | in PHP

We spent the past week at AWS re:Invent! The PHP SDK team was there with many of our co-workers and customers. It was a great conference, and we had a lot of fun.

If you did not attend re:Invent or follow our @awsforphp Twitter feed during the event, then you have a lot to catch up on.

New AWS Services and Features

Several new services were announced during the keynotes, on both the first day and second day, and during other parts of the event.

During the first keynote, three new AWS services for code management and deployment were announced: AWS CodeDeploy, AWS CodeCommit, and AWS CodePipeline. CodeDeploy is available today, and can help you automate code deployments to Amazon EC2 instances.

Additionally, three other new services were revealed that are related to enterprise security and compliance: AWS Key Management Service (AWS KMS), AWS Config, and AWS Service Catalog.

Amazon RDS for Aurora was also announced during the first keynote. Amazon Aurora is a new, MySQL-compatible, relational database engine built for high performance and availability.

The keynote on the second day boasted even more announcements, including the new Amazon EC2 Container Service, which is a highly scalable, high performance container management service that supports Docker containers.

Also, new compute-optimized (C4) Amazon EC2 Instances were announced, as well as new larger and faster Elastic Block Store (EBS) volumes backed with SSDs.

AWS Lambda was introduced during the second keynote as well. It is a new compute service that runs your code in response to events and automatically manages the compute resources for you. To learn about AWS Lambda in more detail, check out the AWS Lambda session from re:Invent, which shows how you can implement image thumbnail generation in your applications using AWS Lambda and the new Amazon S3 Event Notifications feature. That presentation also briefly mentions the upcoming DynamoDB Streams feature, which was announced just prior to the conference.

The APIs for AWS CodeDeploy, AWS KMS, AWS Config, and AWS Lambda are currently available, and all are supported in the AWS SDK for PHP as of version 2.7.5.

PHP Presentations

I had the honor of presenting a session about the PHP SDK called Building Apps with the AWS SDK for PHP, where I explained how to use many of the new features from Version 3 of the SDK in the context of building an application I called "SelPHPies with ElePHPants". You should definitely check it out whether you are new to or experienced with the SDK.

Here are the links to my presentation as well as two other PHP-specific sessions that you might be interested in.

  • Building Apps with the AWS SDK for PHP (slides, video)
  • Best Practices for Running WordPress on AWS (slides, video)
  • Running and Scaling Magento on AWS (video)

There were so many other great presentations at re:Invent. The slides, videos, and podcasts for all of the presentations are (or will be) posted online.

PHPeople

Announcements and presentations are exciting and informative, but my favorite part about any conference is the people. Re:Invent was no exception.

It was great to run into familiar faces from my Twitter stream like Juozas Kaziukėnas, Ben Ramsey, Brian DeShong, and Boaz Ziniman. I also had the pleasure of meeting some new friends from companies that had sent their PHP developers to the conference.

See You Next Year

We hope you take the time to check out some of the presentations from this year’s event, and consider attending next year. Get notified about registration for next year’s event by signing up for the re:Invent mailing list on the AWS re:Invent website.

Using Resources

by Trevor Rowe | in Ruby

With the recent 2.0 stable release of the aws-sdk-core gem, we started publishing preview releases of aws-sdk-resources. Until the gem leaves its preview status, you will need to use the --pre flag to install it:

gem install aws-sdk-resources --pre

In your Gemfile, you must specify the full version:

# update the version as needed
gem 'aws-sdk-resources', '2.0.1.pre'

Usage

Each service module has a Client class that provides a 1-to-1 mapping of the service API. Each service module now also has a Resource class that provides an object-oriented interface to work with.

Each resource object wraps a service client.

s3 = Aws::S3::Resource.new
s3.client
#=> #<Aws::S3::Client>
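You can also hand the resource a client you have already configured. A small sketch, assuming the resource constructor accepts a :client option:

client = Aws::S3::Client.new(region: 'us-west-2')
s3 = Aws::S3::Resource.new(client: client)

# the resource reuses the client you passed in
s3.client
#=> #<Aws::S3::Client>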

Given a service resource object, you can start exploring related resources. Let's start with buckets in Amazon S3:

# enumerate all of my buckets
s3.buckets.map(&:name)
#=> ['aws-sdk', ...]

# get one bucket
bucket = s3.buckets.first
#=> #<Aws::S3::Bucket name="aws-sdk">

If you know the name of a bucket, you can construct a bucket resource without making an API request.

bucket = s3.bucket('aws-sdk')

# constructors are also available
bucket = Aws::S3::Bucket.new('aws-sdk')
bucket = Aws::S3::Bucket.new(name: 'aws-sdk')

In each of the three previous examples, an instance of Aws::S3::Bucket is returned. This is a lightweight reference to an actual bucket that might exist in Amazon S3. When you reference a resource, no API calls are made until you operate on the resource.

Here I will use the bucket reference to delete the bucket:

bucket.delete

You can use a resource to reference other resources. In the next example, I use the bucket object to reference an object in the bucket by its key. Again, no API calls are made until I invoke an operation such as #put or #delete.

obj = bucket.object('hello.txt')
obj.put(body:'Hello World!')
obj.delete

Resource Data

Resources have one or more identifiers, and data. To construct a resource, you only need the identifiers. A resource can load itself using its identifiers.

Constructing a resource object from its identifiers will never make an API call.

obj = s3.bucket('aws-sdk').object('key') # no API call made

# calling #data loads an object, returning a structure
obj.data.etag
#=> "ed076287532e86365e841e92bfc50d8c"

# same as obj.data.etag
obj.etag
#=> "ed076287532e86365e841e92bfc50d8c"

Resources will never update internal data until you call #reload. Use #reload if you need to poll a resource attribute for a change.

# force the resource to refresh data, returning self
obj.reload.last_modified
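For example, here is a minimal polling sketch built on #reload. It assumes Aws::EC2::Instance#state returns a structure with a name attribute:

instance = Aws::EC2::Instance.new('i-12345678')

# re-fetch the instance data on every pass until it reports stopped
sleep(5) until instance.reload.state.name == 'stopped'

Resource waiters, shown below, wrap this polling pattern up for you.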

Resource Associations

Most resource types are associated with one or more other resources. For example, an Aws::S3::Bucket has many objects, a website configuration, an ACL, and so on.

Each association is documented on the resource class. The API documentation will specify what API call is being made. If the association is plural, it will document when multiple calls are made.

When working with plural associations, such as a bucket that has many objects, resources are automatically paginated. This makes it simple to lazily enumerate all objects.

bucket = s3.bucket('aws-sdk')

# enumerate **all** objects in a bucket, objects are fetched
# in batches of 1K until every object has been yielded
bucket.objects.each do |obj|
  puts "#{obj.key} => #{obj.etag}"
end

# filter objects with a prefix
bucket.objects(prefix:'/tmp/').map(&:key)

Some APIs support operating on resources in batches. When possible, the SDK provides batch actions.

# lists and deletes objects in batches of 1K, sweet!
bucket.objects(prefix:'/tmp/').delete

Resource Waiters

Some resources have associated waiters. These allow you to poll until the resource enters a desired state.

instance = Aws::EC2::Instance.new('i-12345678')
instance.stop
instance.wait_until_stopped
puts instance.id + ' is stopped'

What's Next?

The resource interface has a lot of unfinished features. Some of the things we are working on include:

  • Adding #exists? methods to all resource objects
  • Consistent tagging interfaces
  • Batch waiters
  • More service coverage with resource definitions

We would love to hear your feedback. Resources are available now in the preview release of the aws-sdk-resources gem and in the master branch on GitHub.

Happy coding!

AWS Toolkit support for Visual Studio Community 2013

We often hear from our customers that they would like our AWS Toolkit for Visual Studio to work with the Express editions of Visual Studio. We understand how desirable this is, but due to restrictions built into the Express editions of Visual Studio, it hasn’t been possible…until now.

With the recent announcement of the new Visual Studio Community 2013 edition, it is now possible to get the full functionality of our AWS Toolkit for Visual Studio inside a free edition of Visual Studio. This includes the AWS Explorer for managing resources, Web Application deployment from the Solution Explorer, and the AWS CloudFormation editor for authoring and deploying your CloudFormation templates.

So if you haven’t tried the AWS Toolkit for Visual Studio, now is a great time to check it out.

Amazon S3 Encryption with AWS Key Management Service

by Hanson Char | in Java

With version 1.9.5 of the AWS SDK for Java, we are excited to announce full support for S3 object encryption using AWS Key Management Service (KMS). Why KMS, you may ask? In a nutshell, AWS Key Management Service provides many security and administrative benefits, including centralized key management, better protection of your master keys, and simpler code!

In this post, we provide two quick examples of how you can use AWS KMS for client-side encryption via the Amazon S3 Encryption Client, and compare that with using AWS KMS for server-side encryption via the Amazon S3 Client.

The first example demonstrates how you can use KMS for client-side encryption in the Amazon S3 Encryption Client. As you can see, it can be as simple as configuring a KMSEncryptionMaterialsProvider with a KMS Customer Master Key ID (generated a priori, for example, via the AWS Management Console). Every object put to Amazon S3 then results in a data key generated by AWS KMS for use in client-side encryption before the data is sent (along with other metadata, such as the KMS "wrapped" data key) to S3 for storage. During retrieval, KMS automatically "unwraps" the encrypted data key, and the Amazon S3 Encryption Client then uses it to decrypt the ciphertext locally on the client side.

S3 client-side encryption using AWS KMS

String customerMasterKeyId = ...;
AmazonS3EncryptionClient s3 = new AmazonS3EncryptionClient(
            new ProfileCredentialsProvider(),
            new KMSEncryptionMaterialsProvider(customerMasterKeyId))
        .withRegion(Region.getRegion(Regions.US_EAST_1));

String bucket = ...;
byte[] plaintext = "Hello S3/KMS Client-side Encryption!"
            .getBytes(Charset.forName("UTF-8"));
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(plaintext.length);

PutObjectResult putResult = s3.putObject(bucket, "hello_s3_kms.txt",
        new ByteArrayInputStream(plaintext), metadata);
System.out.println(putResult);

S3Object s3object = s3.getObject(bucket, "hello_s3_kms.txt");
System.out.println(IOUtils.toString(s3object.getObjectContent()));
s3.shutdown();

The second example demonstrates how you can delegate the crypto operations entirely to the Amazon S3 server side, while still using fully managed data keys generated by AWS KMS (instead of having the data key generated locally on the client side). This has the obvious benefit of offloading the computationally expensive operations to the server side, potentially improving client-side performance. Similar to the first example, all you need to do is specify your KMS Customer Master Key ID (generated a priori, for example, via the AWS Management Console) in the S3 put request.

S3 server-side encryption using AWS KMS

String customerMasterKeyId = ...;
AmazonS3Client s3 = new AmazonS3Client(new ProfileCredentialsProvider())
        .withRegion(Region.getRegion(Regions.US_EAST_1));

String bucket = ...;
byte[] plaintext = "Hello S3/KMS SSE Encryption!"
            .getBytes(Charset.forName("UTF-8"));
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(plaintext.length);

PutObjectRequest req = new PutObjectRequest(bucket, "hello_s3_sse_kms.txt",
        new ByteArrayInputStream(plaintext), metadata)
        .withSSEAwsKeyManagementParams(
            new SSEAwsKeyManagementParams(customerMasterKeyId));
PutObjectResult putResult = s3.putObject(req);
System.out.println(putResult);

S3Object s3object = s3.getObject(bucket, "hello_s3_sse_kms.txt");
System.out.println(IOUtils.toString(s3object.getObjectContent()));
s3.shutdown();

For more information about AWS KMS, check out the AWS Key Management Service whitepaper, or the blog New AWS Key Management Service (KMS). Don’t forget to download the latest AWS SDK for Java and give it a spin!

Come see us at re:Invent 2014!

AWS re:Invent is just around the corner, and we are excited to meet you.

I will be presenting DEV 306 – Building cross platform applications using the AWS SDK for JavaScript on November 13, 2014. This talk will introduce you to building portable applications using the SDK and outline some differences in porting your application to multiple platforms. You can learn more about the talk here. Come check it out!

We will also be at the AWS Booth in the Expo Hall (map). Come talk to us about how you’re using AWS services, ask us a question, and learn about how to use our many AWS SDKs and tools.

Hope to see you there!

Announcing the AWS CloudTrail Processing Library

by Jason Fulghum | in Java

We’re excited to announce a new extension to the AWS SDK for Java: The AWS CloudTrail Processing Library.

AWS CloudTrail delivers log files containing AWS API activity to a customer’s Amazon S3 bucket. The AWS CloudTrail Processing Library makes it easy to build applications that read and process those CloudTrail logs and incorporate their own business logic. For example, developers can filter events by event source or event type, or persist events into a database such as Amazon RDS or Amazon Redshift, or into any third-party data store.

The AWS CloudTrail Processing Library, or CPL, eliminates the need to write code that polls Amazon SQS queues, reads and parses queue messages, downloads CloudTrail log files, and parses and serializes events in the log file. Using CPL, developers can read and process CloudTrail log files in as few as 10 lines of code. CPL handles transient and enduring failures related to network timeouts and inaccessible resources in a resilient and fault tolerant manner. CPL is built to scale easily and can process an unlimited number of log files in parallel. If needed, any number of hosts can each run CPL, processing the same S3 bucket and same SQS queue in parallel.

Getting started with CPL is easy. After configuring your AWS credentials and SQS queue, you simply implement a callback method to be called for every event, and start the AWSCloudTrailProcessingExecutor.

// This file contains your AWS security credentials and the name
// of an Amazon SQS queue to poll for updates
String myPropertiesFileName = "myCPL.properties";

// An EventsProcessor is what processes each event from AWS CloudTrail
final AmazonSNSClient sns = new AmazonSNSClient();
final String myTopicArn = ...; // ARN of the SNS topic to notify
EventsProcessor eventsProcessor = new EventsProcessor() {
    public void process(List<CloudTrailEvent> events) {
        for (CloudTrailEvent event : events) {
            CloudTrailEventData data = event.getEventData();
            if (data.getEventSource().equals("ec2.amazonaws.com") &&
                data.getEventName().equals("ModifyVpcAttribute")) {
                System.out.println("Processing event: " + data.getRequestId());
                sns.publish(myTopicArn, "{ " + 
                    "'requestId'= '" + data.getRequestId() + "'," + 
                    "'request'  = '" + data.getRequestParameters() + "'," + 
                    "'response' = '" + data.getResponseElements() + "'," +
                    "'source'   = '" + data.getEventSource() + "'," +
                    "'eventName'= '" + data.getEventName() + "'" +
                    "}");
            }
        }
    }
};

// Create AWSCloudTrailProcessingExecutor and start it
final AWSCloudTrailProcessingExecutor executor = 
            new AWSCloudTrailProcessingExecutor
                .Builder(eventsProcessor, myPropertiesFileName)
                .build();
executor.start();

The preceding example creates an implementation of EventsProcessor that processes each of our events. If an event records a user modifying an Amazon EC2 VPC through the ModifyVpcAttribute operation, this code publishes a message to an Amazon SNS topic so that an operator can review this potentially significant change to the account’s VPC configuration.

This example shows how easy it is to use the CPL to process your AWS CloudTrail events. You’ve seen how to create your own implementation of EventsProcessor to specify your own custom logic for acting on CloudTrail events. In addition to EventsProcessor, you can also control the behavior of AWSCloudTrailProcessingExecutor with these interfaces:

  • EventFilter allows you to easily filter specific events that you want to process. For example, if you only want to process CloudTrail events in a specific region, or from a specific service, you can use an EventFilter to easily select those events.
  • SourceFilters allow you to perform filtering using data specific to the source of the events. In this case, the SQSBasedSource contains additional information you can use for filtering, such as how many times a message has been delivered.
  • ProgressReporters allow you to report progress through your application so you can tell your users how far along the processing is.
  • ExceptionHandlers allow you to add custom error handling for any errors encountered during event processing.

You can find the full source for the AWS CloudTrail Processing Library in the aws-cloudtrail-processing-library project on GitHub, and you can easily pick up the CPL as a dependency in your Maven-based projects:

<dependency>
	<groupId>com.amazonaws</groupId>
	<artifactId>aws-cloudtrail-processing-library</artifactId>
	<version>1.0.0</version>
</dependency>

For more information, go to the CloudTrail FAQ and documentation.

How are you using AWS CloudTrail to track your AWS usage?

Welcome to the AWS CLI Blog

by James Saryerwinnie | in AWS CLI

Hi everyone! Welcome to the AWS Command Line Interface blog. I’m James Saryerwinnie, and I work on the AWS CLI. This blog will be the place to go for information about the AWS CLI including:

  • Tips and tricks for using the AWS CLI
  • New feature announcements
  • Deep dives into various AWS CLI features
  • Guest posts from various AWS service teams

In the meantime, here are a few links to get you started:

We’re excited to get this blog started, and we hope to see you again real soon. Stay tuned!

 

Introducing the AWS SDK for JavaScript Blog

Today we’re announcing a new blog for the AWS SDK for JavaScript. On this blog, we will be sharing the latest tips, tricks, and best practices when using the SDK. We will also keep you up to date on new developments in the SDK and share information on upcoming features. Ultimately, this blog is a place for us to reach out and get feedback from you, our developers, in order to make our SDK even better.

We’re excited to finally start writing about the work we’ve done and will be doing in the future; there’s a lot of content to share. In the meantime, here’s a little primer on the AWS SDK for JavaScript, if you haven’t had the chance to kick its tires.

Works in Node.js and Modern Browsers

The SDK is designed to work seamlessly across Node.js and browser environments. With the exception of a few environment-specific integration points (like streams and file access in Node.js, and Blob support in the browser), we attempt to make all SDK API calls work the same way across all of your different applications. One of my favorite features is the ability to take snippets of SDK code and move them from Node.js to the browser and back with at most a few changes in code.

In Node.js you can install the SDK as the aws-sdk npm package:

$ npm install aws-sdk --save

In the browser, you can load the SDK from a hosted script tag or build your own version. More details on this can be found in our guide.

Full Service Coverage

The SDK has support for all the AWS services you want to use, and we keep it up to date with new API updates as they are released. Note that although using some services from the browser requires CORS support, we are continually working to expand the list of CORS-supported services. You can also take advantage of JavaScript in various local environments (Chrome and Firefox extensions, iOS, Android, WinRT, and other mobile applications) that do not enforce CORS, and develop your AWS-backed applications there today with a custom build of the SDK.

Open Source

Finally, the thing about the SDK that excites me the most is the fact that the entire SDK is openly developed and shared on GitHub. Feel free to check out the SDK code, post issue reports, and even submit pull requests with fixes or new features. Our SDK depends on feedback from our developers, so we love to get reports and pull requests. Send away!

More to Come

We will be posting much more information about the SDK on this blog. We have plenty of exciting things to share with you about new features and improvements to the library. Bookmark this blog and check back soon as we publish more information!

Stripe Windows Ephemeral Disks at Launch

by Steve Roberts | in .NET

Today we have another guest post by AWS Solutions Architect David Veith.

Amazon EC2 currently offers more than 20 current-generation instance types for your Windows operating system workloads. The root volume for Windows instances will always be a volume provided by the Amazon EBS service. Additional EBS drives can easily be added as desired.

Depending on the EC2 instance type selected, there will also be from zero to 24 instance-store volumes automatically available to the instance. Instance-store volumes provide temporary block-level storage to the instance. The data in an instance store persists only during the lifetime of its associated instance. Because of the temporary nature of instance-store volumes, they are often referred to as "ephemeral": not lasting, enduring, or permanent.

Many workloads can benefit from this type of temporary block-level storage, and it’s important to mention that ephemeral volumes also come with no extra cost.

This blog post describes how the ephemeral volumes of any Windows EC2 instance can be detected at launch, and then automatically striped into one large OS volume. This is a common use case for many AWS customers.

Detecting Ephemeral vs. EBS Volumes

In order to build a striped volume consisting only of instance-store volumes, we first need a mechanism to distinguish the volume types (EBS or ephemeral) associated with the instance. The EC2 metadata service provides a mechanism to determine this.

The following PowerShell statement retrieves all the block device mappings of the running Windows EC2 instance on which it is executed:

$alldrives = (Invoke-WebRequest -Uri http://169.254.169.254/latest/meta-data/block-device-mapping/).Content

Here’s an example of the data returned from the metadata service for an M3.Xlarge instance (launched from the AWS Management Console) with one root EBS volume and two instance-store volumes:


Using the same instance type (M3.Xlarge), but this time launching the instance from an AWS CloudFormation script (or AWS command-line tools), the same code produces this output:

Why the difference?

When an instance is launched from the AWS Management Console, the console performs some additional steps so that the instance metadata reflects only the ephemeral drives that are actually present. In order for our code to handle both cases, we can query WMI to see whether the OS actually sees the volume:

$disknumber = (Get-WmiObject -Class Win32_DiskDrive | where-object {$_.SCSITargetId -eq $scsiid}).Index
if ($disknumber -ne $null)

How EC2 Windows Maps Drives

Hopefully, you noticed in the code directly above that we queried WMI with the SCSI ID of each volume. Where did we get the SCSI ID?

To answer that question, we need to explain how EC2 Windows instances map block device names to SCSI IDs in the operating system. The following table shows this:

For example, we can see that ‘xvdcb’ will always map to SCSI ID ’79’. We could build a lookup table that contains all the potential mount points and their corresponding SCSI IDs, but a more elegant approach is to use a simple algorithm based on ASCII arithmetic.

We know that all device mappings for Windows instances begin with the ‘xvd’ prefix. If we remove this prefix, we can use the remaining portion (‘cb’ in this example) to derive the correct SCSI ID.

'c' = ASCII 99
'b' = ASCII 98

(('c' - 96) * 26) + ('b' - 97) = 79

In the final PowerShell script below, this pseudo-code is implemented as the GetSCSI function.

The Complete PowerShell Script

#################################################
#  Detect the Ephemeral drives and stripe them  
#################################################

# Be sure to choose a drive letter that will not already be assigned
$DriveLetterToAssign = "K:"

#################################################
#  Given a device (e.g. xvda), strip off 
# "xvd" and convert the remainder to the 
# appropriate SCSI ID   
#################################################
function GetSCSI {
	Param([string]$device)
	
        # remove xvd prefix
	$deviceSuffix = $device.substring(3)      

        if ($deviceSuffix.length -eq 1) {
		$scsi = (([int][char] $deviceSuffix[0]) - 97)
	}
	else {
		$scsi = (([int][char] $deviceSuffix[0]) - 96) *  26 
                            +  (([int][char] $deviceSuffix[1]) - 97)
	}

	return $scsi
}

#################################################
#  Main   
#################################################

# From metadata read the device list and grab only 
# the ephemeral volumes

$alldrives = (Invoke-WebRequest -Uri http://169.254.169.254/latest/meta-data/block-device-mapping/).Content
$ephemerals = $alldrives.Split(10) | where-object {$_ -like 'ephemeral*'} 

# Build a list of scsi ID's for the ephemeral volumes

$scsiarray = @()
foreach ($ephemeral in $ephemerals) {
	$device = (Invoke-WebRequest -Uri http://169.254.169.254/latest/meta-data/block-device-mapping/$ephemeral).Content
	$scsi = GetSCSI $device
	$scsiarray = $scsiarray + $scsi
}

# Convert the scsi ID's to OS drive numbers and set them up with diskpart

$diskarray = @()
foreach ($scsiid in $scsiarray) {
	$disknumber = (Get-WmiObject -Class Win32_DiskDrive | where-object {$_.SCSITargetId -eq $scsiid}).Index
	if ($disknumber -ne $null)
	{
		$diskarray += $disknumber
		$dpcommand = "select disk $disknumber
	                        select partition 1
	                        delete partition
	                        convert dynamic
	                        exit"
	    $dpcommand | diskpart
	}
}

# Build the stripe from the diskarray

$diskseries = $diskarray -join ','

if ($diskarray.count -gt 0) 
{
	if ($diskarray.count -eq 1) {
		$type = "simple"
	}
	else {
		$type = "stripe"
	}
		
	$dpcommand = "create volume $type disk=$diskseries
		         format fs=ntfs quick
                         assign letter=$DriveLetterToAssign
	                 exit"
	$dpcommand | diskpart
}

Extra Credit

In Windows Server 2012 R2, Microsoft introduced new PowerShell storage-management cmdlets that replace the need to use the diskpart utility in many cases. If you know your servers will be running only Windows Server 2012 R2, or later, you might want to use these newer Microsoft cmdlets. You can find more information on these cmdlets at http://technet.microsoft.com/en-us/library/hh848705.aspx.