AWS Developer Blog

Version 3 of the AWS SDK for .NET Out of Preview

by Norm Johanson, in .NET

Back in February, we announced our intention to release a new major version of the AWS SDK for .NET. In April, we released a preview on NuGet. After receiving great feedback from users, today we are taking version 3 of the AWS SDK for .NET out of preview. This means the preview flag has been removed from the NuGet packages, and the SDK is now included in the MSI installer available from our website.

Version 3 is a new, modularized SDK. Every service is a separate assembly and distributed as a separate NuGet package. Each service has a dependency on a common runtime, AWSSDK.Core. This has been a major request from our users, especially now that AWS has grown to over 50 services. This design also gives SDK users better control over when to upgrade to the newest service updates.

We wanted to make the transition to version 3 as easy as possible, so there are very few breaking changes to the public API. For the full list of changes, see our API Reference, which contains a migration guide.

Our hope is that most users will just need to replace the old version 2 reference and add references to the services they use. If you are using NuGet to get the SDK, the reference to our core runtime package will be added automatically. If you are getting the SDK from the installer on our website, you will need to add a reference to AWSSDK.Core.
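For example, a project that uses only Amazon S3 and Amazon DynamoDB might install just those service packages from the NuGet Package Manager Console; AWSSDK.Core is pulled in automatically as a dependency (the services shown here are just an illustration):

PM> Install-Package AWSSDK.S3
PM> Install-Package AWSSDK.DynamoDBv2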

Xamarin Preview

We recently announced a public preview of Xamarin support, which is part of version 3. Even though the SDK is now generally available, Xamarin support and the Portable Class Library version of the SDK are still in preview. We encourage you to try the new Xamarin support and give us feedback, but we are not ready for users to publish production applications just yet. Users with an immediate need for Windows Phone and Windows Store support should continue using version 2 until the version 3 PCL is production-ready.

PowerShell

With our move to version 3, we have also switched our AWS Tools for Windows PowerShell to the new SDK. The version numbers for AWS SDK for .NET and our AWS Tools for Windows PowerShell are kept in sync, so AWS Tools for Windows PowerShell is getting a major version bump to 3. There are otherwise no major changes to AWS Tools for Windows PowerShell.

Changes to Our Installer

The installer has been updated to contain version 3 of the SDK, but it also contains version 2 for users who are not ready to move to version 3. The Portable Class Library version of the SDK (which includes Xamarin support) is distributed only through NuGet and is not available through the installer. The PCL version uses platform-specific dependencies that NuGet resolves automatically when references are added; doing this manually would be a complex and error-prone process.

Packages on NuGet

For an up-to-date list of the version 3 NuGet packages, check out the NuGet section of the SDK’s GitHub README.md.

Invoking AWS Lambda Functions from Java

by David Murray, in Java

AWS Lambda makes it incredibly easy and cost-effective to run your code at arbitrary scale in the cloud. Simply write the handler code for your function and upload it to Lambda. The service takes care of hosting and scaling the function for you. And in case you somehow missed it, it now supports writing function handlers in Java!

Although many use cases for Lambda involve running code in response to triggers from other AWS services like Amazon S3 or Amazon Cognito, you can also invoke Lambda functions directly, making them an easy and elastically scalable way to decompose an application into reusable microservices. In this post, we’ll assume we’ve got a Lambda function named “CountCats” that accepts an S3 bucket and key for an image, analyzes the image to count the number of cats the image contains, and returns that count to the caller. An example request to this service might look like:

{
  "bucket": "pictures-of-cats",
  "key": "three-cool-cats.jpg"
}

And an example response might look like:

{
  "count": 3
}

To invoke this function from Java code, we’ll first define POJOs representing the input and output JSON:

public class CountCatsInput {

  private String bucketName;
  private String key;

  public String getBucketName() { return bucketName; }
  public void setBucketName(String value) { bucketName = value; }

  public String getKey() { return key; }
  public void setKey(String value) { key = value; }
}

public class CountCatsOutput {

  private int count;

  public int getCount() { return count; }
  public void setCount(int value) { count = value; }
}

Next we’ll define an interface representing our microservice, and annotate it with the name of the Lambda function to invoke when it’s called:

import com.amazonaws.services.lambda.invoke.LambdaFunction;

public interface CatService {
  @LambdaFunction(functionName="CountCats")
  CountCatsOutput countCats(CountCatsInput input);
}

We can then use the LambdaInvokerFactory to create an implementation of this interface that will make calls to our service running on Lambda. (Providing a lambdaClient is optional; if one is not supplied, a default client is used.)

import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.invoke.LambdaInvokerFactory;

final CatService catService = LambdaInvokerFactory.builder()
 .lambdaClient(AWSLambdaClientBuilder.defaultClient())
 .build(CatService.class);

Finally, we invoke our service using this proxy object:

CountCatsInput input = new CountCatsInput();
input.setBucketName("pictures-of-cats");
input.setKey("three-cool-cats.jpg");

int cats = catService.countCats(input).getCount();

When called, the input POJO is serialized to JSON and sent to your Lambda function; the function’s result is transparently deserialized back into your output POJO. Details like authentication, timeouts, and retries in case of transient network issues are handled by the underlying AWSLambdaClient.
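If you need to tune any of those settings (for example, to give a long-running function more time before the socket times out), you can pass a customized client to the invoker factory. A minimal sketch, with arbitrary timeout and retry values:

import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.invoke.LambdaInvokerFactory;

// Allow up to 60 seconds for the function to run, and retry transient failures twice.
ClientConfiguration config = new ClientConfiguration()
    .withSocketTimeout(60 * 1000)
    .withMaxErrorRetry(2);

final CatService catService = LambdaInvokerFactory.builder()
    .lambdaClient(AWSLambdaClientBuilder.standard()
        .withClientConfiguration(config)
        .build())
    .build(CatService.class);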

Are you using Lambda to host a microservice and calling it from Java code? Let us know how it’s going in the comments or over on our GitHub repository!

Using the New Import Cmdlets for Amazon EC2

by Steve Roberts, in .NET

Amazon EC2 recently released an updated set of APIs for importing virtual machine images and disks. These new APIs, ImportImage and ImportSnapshot, are faster and more flexible than the original import APIs and are now available in the AWS Tools for Windows PowerShell (from version 2.3.43.0) through two new cmdlets – Import-EC2Image and Import-EC2Snapshot. Let’s take a look at how we use the new cmdlets to perform imports.

Importing a VM Image

Importing an image to EC2 can be done in just a couple of steps. First, we have to upload the disk image to Amazon S3, and then we run the import cmdlet that will yield an Amazon Machine Image (AMI) we can launch. We also need to set up an Identity and Access Management role, plus associated role policy, that gives EC2 access to the S3 artifacts. This is a one-time operation.

Import Prerequisites

As detailed in the EC2 user guide topic, the new import service APIs use an Identity and Access Management role, with associated role policy, to access the image file(s) that you upload to Amazon S3 during import. Setting these up is a one-time operation (assuming you use the same bucket to hold the image file for each import) and can be done from PowerShell very easily, as follows.

First, we create the role. The EC2 import API defaults to a role name of "vmimport" if a custom role name is not supplied when we run the import command. For the sake of simplicity, that's the name we'll use in this blog example:

PS C:\> $importPolicyDocument = @"
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"",
         "Effect":"Allow",
         "Principal":{
            "Service":"vmie.amazonaws.com"
         },
         "Action":"sts:AssumeRole",
         "Condition":{
            "StringEquals":{
               "sts:ExternalId":"vmimport"
            }
         }
      }
   ]
}
"@

PS C:\> New-IAMRole -RoleName vmimport -AssumeRolePolicyDocument $importPolicyDocument

Now that we have created the role, we add a policy allowing EC2 access to the bucket containing our image:

PS C:\> $bucketName = "myvmimportimages"
PS C:\> $rolePolicyDocument = @"
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:ListBucket",
            "s3:GetBucketLocation"
         ],
         "Resource":[
            "arn:aws:s3:::$bucketName"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetObject"
         ],
         "Resource":[
            "arn:aws:s3:::$bucketName/*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "ec2:ModifySnapshotAttribute",
            "ec2:CopySnapshot",
            "ec2:RegisterImage",
            "ec2:Describe*"
         ],
         "Resource":"*"
      }
   ]
}
"@

PS C:\> Write-IAMRolePolicy -RoleName vmimport -PolicyName vmimport -PolicyDocument $rolePolicyDocument

That completes the prerequisites. If we want to use a different bucket (or additional buckets) in the future, we simply reconstruct the policy here-string shown above with the name(s) of the new or additional buckets and re-run the Write-IAMRolePolicy cmdlet.
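For example, to grant the role access to a different bucket later (a sketch; the bucket name is illustrative):

PS C:\> $bucketName = "my-other-import-bucket"
PS C:\> # Re-evaluate the $rolePolicyDocument here-string shown above, then:
PS C:\> Write-IAMRolePolicy -RoleName vmimport -PolicyName vmimport -PolicyDocument $rolePolicyDocument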

Uploading the Image

The VM or disk image must be uploaded to S3. To do this, we use the Write-S3Object cmdlet. Assume we have a Windows Server 2012 R2 image consisting of a single disk that we want to import. This image is located on disk in the file C:\CustomWindows2012R2.vhd. We're also using the same bucket declared in the prerequisites above, "myvmimportimages", which we captured in a variable:

PS C:\> Write-S3Object -BucketName $bucketName -File .\CustomWindows2012R2.vhd

Because we did not supply a -Key parameter to the cmdlet to identify the object in the bucket, the file name is used by default. If the VM image to be imported consists of multiple disk images, simply repeat the use of Write-S3Object to upload all the images.
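For example, a quick sketch that uploads every VHD file in the current directory (assuming all of the VM's disk images are staged there):

PS C:\> Get-ChildItem .\*.vhd | ForEach-Object { Write-S3Object -BucketName $bucketName -File $_.FullName }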

We’re now ready to import the image.

Importing the Image

The cmdlet to import VM images, Import-EC2Image, accepts a number of parameters that allow you to describe the import for future reference and detail which object in S3 contains the image EC2 should operate on. You can also specify a custom role name (with the -RoleName parameter) granting EC2 access to the S3 object. Earlier in this post we showed how to set up the role and policy using the default name EC2 assumes if a custom role is not specified, so this parameter will not be used here.

First, we must construct one or more ImageDiskContainer instances. If we were importing a VM that consists of multiple disk images (and therefore multiple S3 objects), we would create multiple container instances and pass them to the cmdlet as an array. Our sample image for this post contains just a single image file:

PS C:\> $windowsContainer = New-Object Amazon.EC2.Model.ImageDiskContainer
PS C:\> $windowsContainer.Format = "VHD"

Details of the S3 location of the image file are specified in a nested object:

PS C:\> $userBucket = New-Object Amazon.EC2.Model.UserBucket
PS C:\> $userBucket.S3Bucket = $bucketName
PS C:\> $userBucket.S3Key = "CustomWindows2012R2.vhd"
PS C:\> $windowsContainer.UserBucket = $userBucket

Having constructed the disk container object(s), we can set up the parameters to the import cmdlet. One of the parameters, ClientToken, allows us to pass an idempotency token – this ensures that if a problem arises and we need to re-run the command, EC2 does not start a new import:

PS C:\> $params = @{
    "ClientToken"="CustomWindows2012R2_" + (Get-Date)
    "Description"="My custom Windows 2012R2 image import"
    "Platform"="Windows"
    "LicenseType"="AWS"
}

We’re now ready to run the import cmdlet:

PS C:\> Import-EC2Image -DiskContainer $windowsContainer @params

Architecture    : 
Description     : My custom Windows 2012R2 image import
Hypervisor      : 
ImageId         : 
ImportTaskId    : import-ami-abcdefgh
LicenseType     : AWS
Platform        : Windows
Progress        : 2
SnapshotDetails : {}
Status          : active
StatusMessage   : pending

We can check progress on an import (or set of imports) using the Get-EC2ImportImageTask cmdlet, which outputs the same information as above for each import task. Optionally, we can query a specific import by supplying a value for the -ImportTaskId parameter, or narrow the results with a set of filters if we don't want to slice and dice the output through the PowerShell pipeline.

To abandon an import, we use the Stop-EC2ImportTask cmdlet. This cmdlet is used for both VM image and disk snapshot imports. It accepts the import task id of the import to be stopped.
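For example, using the illustrative task id from the sample output above, we could poll the import and then, if necessary, abandon it:

PS C:\> Get-EC2ImportImageTask -ImportTaskId import-ami-abcdefgh
PS C:\> Stop-EC2ImportTask -ImportTaskId import-ami-abcdefgh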

Importing a Disk Snapshot

Importing disk snapshots to be used as additional EBS volumes to attach to EC2 instances is very similar to importing a VM image except that we’re always importing a single image:

PS C:\> Write-S3Object -BucketName $bucketName -File .\DataDisk.vhd
PS C:\> $params = @{
    "ClientToken"="MySnapshotImport_" + (Get-Date)
    "Description"="My Data Disk Image"
    "DiskContainer_Description" = "Data disk import"
    "DiskContainer_Format" = "VHD"
    "DiskContainer_S3Bucket" = $bucketName
    "DiskContainer_S3Key" = "DataDisk.vhd"
}

PS C:\> Import-EC2Snapshot @params | fl

Description         : My Data Disk Image
ImportTaskId        : import-snap-abcdefg
SnapshotTaskDetail  : Amazon.EC2.Model.SnapshotTaskDetail

To check progress of a snapshot import, we use the Get-EC2ImportSnapshotTask cmdlet, which is very similar to Get-EC2ImportImageTask. As mentioned earlier, a snapshot import can be stopped using Stop-EC2ImportTask.
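For example, to inspect the progress fields of the snapshot import started above (task id taken from the sample output):

PS C:\> (Get-EC2ImportSnapshotTask -ImportTaskId import-snap-abcdefg).SnapshotTaskDetail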

Deprecated: Original Import Cmdlets

The original import cmdlets (Import-EC2Instance, Import-EC2Volume, Get-EC2ConversionTask and Stop-EC2ConversionTask) have now been marked as deprecated. They will be removed in a future release.

More Information

We hope you find the new cmdlets easier to use! For more information about importing VM images and disk snapshots to Amazon EC2, see this post on the official AWS Blog. You can also access the EC2 documentation for the feature.

Reduce Composer Issues on Elastic Beanstalk

by Jeremy Lindblom, in PHP

During the past couple of months, we've received a few reports from customers who have experienced PHP application deployment failures on AWS Elastic Beanstalk, caused by parsing exceptions thrown by Composer. In case you have recently run into the issue yourself, we would like to briefly describe why it happens and how you can work around it.

The issue

The issue occurs when a project or its dependencies express their requirements using newer Composer syntax features like the caret (^) operator. For users of the AWS SDK for PHP, the error looks something like this:

[RuntimeException] Could not load package aws/aws-sdk-php in http://packagist.org:
[UnexpectedValueException] Could not parse version constraint ^5.3: Invalid version string "^5.3"

We have also observed the issue with some versions of the Laravel framework and a few other libraries. The issue comes up when you use older Elastic Beanstalk solution stacks with your applications. The older stacks include an old version of Composer on the underlying Amazon Machine Image (AMI) that does not support some of the newer Composer features like the caret (^) and OR (||) operators.

The solution

There are three ways to solve this issue.

  1. Upgrade your application to use the latest Elastic Beanstalk solution stack. The latest solution stacks for PHP have a more recent version of Composer that supports the new syntax features.
  2. Use Elastic Beanstalk configuration files (.ebextensions). You can create a file ending in .config inside your .ebextensions directory that runs a Composer self-update command before installing your dependencies. For example, name the file 01composer.config and add the following configuration:

    commands:
      01updateComposer:
        command: export COMPOSER_HOME=/root && /usr/bin/composer.phar self-update
    
    option_settings:
      - namespace: aws:elasticbeanstalk:application:environment
        option_name: COMPOSER_HOME
        value: /root
  3. Install your dependencies locally. One way to avoid issues with Composer during deployment is to bypass the Composer workflow entirely by creating deployments of your application with the dependencies pre-installed (see the sketch after this list).
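A minimal sketch of the third option, run on your development machine before you create the application archive (the archive name and exclusions are illustrative):

# Install production dependencies into vendor/ before packaging
composer install --no-dev --optimize-autoloader
# Bundle the application, including vendor/, for upload to Elastic Beanstalk
zip -r my-application.zip . -x "*.git*"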

The conclusion

We hope that this short blog post will be helpful if you happen to run into this issue. If this article does not solve your problem or you are running into other issues, please contact AWS Support or ask for help on the Elastic Beanstalk forum.

Introducing the SDK Builder for the AWS SDK for JavaScript in the Browser

We are pleased to introduce the SDK builder for the AWS SDK for JavaScript in the Browser.

The SDK builder allows you to customize and download the AWS SDK for JavaScript in the Browser. You can now include support for just the service clients you are using, which reduces the size of the browser distributable of the AWS SDK for JavaScript and improves the efficiency of your web applications.

You’ll find the SDK builder here: https://sdk.amazonaws.com/builder/js.

This article gives you an overview of the SDK builder, and shows you how to build a custom version of the SDK.

Step 1: Select an SDK Version

First, select the SDK version you want to customize. We recommend selecting the latest version.

[Screenshot: Select a core version]

Step 2: Select Services

You can then choose which service clients you want to include in your customized build. There are two preconfigured selections available:

  • Default services: This includes service clients for services that support the CORS standard.
  • All services: If you are using the AWS SDK for JavaScript in an environment that doesn’t enforce the CORS standard (for example, Google Chrome extensions and Windows Store Applications), then you can include support for all AWS services.

[Screenshot: Select services]

The globe icon next to the name of the service client indicates that the service supports CORS.

[Screenshot: Services that support CORS]

You can also customize the API version of the service clients you select. We recommend selecting the latest API version.

[Screenshot: Select API versions]

Step 3: Select a Bundle Type and Build

You are now ready to build your custom version of the SDK. Verify your build configuration, select a bundle type, and then click Build!
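Once the bundle is downloaded, include it with a script tag and use the clients you selected as usual. A quick sketch (the file name depends on the version you built, and the region is illustrative):

<script src="aws-sdk-2.1.35.min.js"></script>
<script>
  // Only the service clients you selected in the builder are available in the bundle.
  var s3 = new AWS.S3({region: 'us-west-2'});
</script>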

Supported Browsers

The SDK builder can be used on all modern browsers.

Browser                       Minimum Version
Google Chrome                 31.0+
Microsoft Internet Explorer   10.0+
Mozilla Firefox               31.0+
Apple Safari                  7.0+
Opera                         29.0+

Wrapping Up

We hope the SDK builder makes it easier for you to optimize your application’s JavaScript footprint. We’re eager to hear what you think. Leave us a comment, tweet about it @awsforjs, or report an issue at github.com/aws/aws-sdk-js/issues.

AWS SDK for JavaScript Office Hour

The AWS SDKs and Tools team invites you to the first-ever online office hour hosted by the maintainers of the AWS SDK for JavaScript. It will be held via Google Hangouts at 10:00-11:00am PDT (UTC -7:00) on Tuesday 6/30. If you don’t have one already, you will need to create an account with Google to join the video chat.
 
This first office hour will be driven by customer questions. We expect to focus on questions about the SDK, but any questions related to JavaScript development on AWS are welcome. We’re excited to meet you and help you be successful in developing JavaScript applications on AWS!
 
Please register for the event, add it to your calendar, and join the office hour next Tuesday.

 

AWS SDK for PHP Office Hour

by Jeremy Lindblom, in PHP

The AWS SDKs and Tools team invites you to the first-ever online office hour hosted by the maintainers of the AWS SDK for PHP. It will be held via Google Hangouts at 10:30-11:30am PDT (UTC -7:00) on Monday 6/29. If you don’t have one already, you will need to create an account with Google to join the video chat.

This first office hour will be driven by customer questions. We expect to focus on questions about the SDK, but any questions related to PHP development on AWS are welcome. We’re excited to meet you and help you be successful in developing PHP applications on AWS!

Please register for the event, add it to your calendar, and join the office hour next Monday.

AWS SDK for Ruby Office Hour

by Trevor Rowe, in Ruby

The AWS SDKs and Tools team invites you to the first-ever online office hour hosted by the maintainers of the AWS SDK for Ruby. It will be held via Google Hangouts at 11:00am-12:00pm PDT (UTC -7:00) on Tuesday 6/30. If you don’t have one already, you will need to create an account with Google to join the video chat.

This first office hour will be driven by customer questions. We expect to focus on questions about the SDK, but any questions related to Ruby development on AWS are welcome. We’re excited to meet you and help you be successful in developing Ruby applications on AWS!

Please register for the event, add it to your calendar, and join the office hour next Tuesday.

Updated Framework Modules for V3

by Jeremy Lindblom, in PHP

Last month, we announced that Version 3 of the AWS SDK for PHP was generally available. We’ve now updated all of our framework-specific modules with releases that support Version 3 (V3) of the SDK. Take a look!

We’ve also updated our AWS Resource APIs for PHP library, which we previewed in December. Now that V3 of the SDK is stable, we will be adding features and documentation to this library over the coming weeks.

As always, we appreciate your feedback on any of our open source packages. Check out these updates and let us know what you think. :-)

P.S. We’d like to give a special thanks to Graham Campbell and Michaël Gallego for their contributions to the Laravel and ZF2 packages, respectively.

RegisterProfile

by Pavel Safronov, in .NET

The .NET SDK team is aware that some customers are having issues using the Amazon.Util.ProfileManager.RegisterProfile method, so this post explains what the method does, when it should be used, and, more importantly, why it should never be called inside your main application.

We discussed RegisterProfile in an earlier blog post about storing and loading AWS credentials. Take a look at this post for more information about profiles and how they can be used to simplify local credentials management.

Let’s start with what Amazon.Util.ProfileManager.RegisterProfile is and how it should be used. The RegisterProfile method creates a new profile or updates an existing profile with a given set of credentials. After this is done, the profile can be used in the SDK, PowerShell, or the Visual Studio Toolkit to make AWS calls with a set of credentials, without having to constantly include the credentials in your code.
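For example, a one-time setup call might look like the following sketch (the profile name and credential values are placeholders; run this from a small configuration utility, not from your application):

// One-time setup: stores the credentials under the given profile name.
Amazon.Util.ProfileManager.RegisterProfile("profile-name", "ACCESS-KEY-ID", "SECRET-ACCESS-KEY");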

When using the SDK, we can access our profile by specifying it in our app.config/web.config file:

<configuration>
   <appSettings>
      <add key="AWSProfileName" value="profile-name"/>
   </appSettings>
</configuration>

Or explicitly with the following code:

var credentials = new Amazon.Runtime.StoredProfileAWSCredentials("profile-name");
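The credentials object can then be passed to any service client constructor. A sketch, assuming the AWSSDK.S3 package is referenced and the region is illustrative:

var credentials = new Amazon.Runtime.StoredProfileAWSCredentials("profile-name");
// Create an S3 client that signs requests with the profile's credentials.
var s3Client = new Amazon.S3.AmazonS3Client(credentials, Amazon.RegionEndpoint.USWest2);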

In PowerShell, the profile can be accessed like this:

Set-AWSCredentials -ProfileName profile-name

Finally, when using the Visual Studio Toolkit, you simply choose the desired profile from the Profile drop-down menu.

In this sense, RegisterProfile is a utility method and should be called only once: when you want to configure or update your current environment. After a profile is configured, you should not be making calls to RegisterProfile.

You should not be calling this method in your main AWS application. After you've configured your environment with the credentials you want to use, calls to RegisterProfile will have no effect and, as illustrated in a recent forum post, in some cases can actually cause your application to crash. (Unfortunately, if you are running your application under IIS, the SDK credential store will not work. The credentials are encrypted for the currently logged-on user, and the system account running IIS will not be able to decrypt them. In this case, you could use a shared credentials file with the AWSProfilesLocation app setting, alongside the AWSProfileName setting shown earlier.)
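For example, a minimal sketch that points the SDK at a shared credentials file (the path is illustrative):

<configuration>
   <appSettings>
      <add key="AWSProfileName" value="profile-name"/>
      <add key="AWSProfilesLocation" value="C:\aws\credentials"/>
   </appSettings>
</configuration>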

We hope this clears up the confusion about Amazon.Util.ProfileManager.RegisterProfile. Happy coding!