AWS Developer Blog

New Support for ASP.NET 5 in AWS SDK for .NET

by Norm Johanson | in .NET

Today we have released beta support for ASP.NET 5 in the AWS SDK for .NET. ASP.NET 5 is an exciting development for .NET developers with modularization and cross-platform support being major goals for the new platform.

Currently, ASP.NET 5 is on beta 7. There may be more changes before its 1.0 release. For this reason, we have released a separate 3.2 version of the SDK (marked beta) to NuGet. We will continue to maintain the 3.1 version as the current, stable version of the SDK. When ASP.NET 5 goes out of beta, we will take version 3.2 of the SDK out of beta.

CoreCLR

ASP.NET 5 applications can run on .NET 4.5.2, Mono 4.0.1, or the new CoreCLR runtime. If you are targeting the new CoreCLR runtime, be aware of these coding differences:

  • Service calls must be made asynchronously, because the HTTP client used for CoreCLR supports only asynchronous calls. Coding your application to use asynchronous operations can also improve its performance, because fewer threads are blocked waiting for a response from the server.
  • The CoreCLR version of the AWS SDK for .NET currently does not support our encrypted SDK credentials store, which is available in the .NET 3.5 and 4.5 versions of the AWS SDK for .NET. This is because the encrypted store uses P/Invoke to make system calls into Windows to handle the encryption; because CoreCLR is cross-platform, that option is not available. For local development with CoreCLR, we recommend you use the shared credentials file. When running on EC2 instances, Identity and Access Management (IAM) roles are the preferred mechanism for delivering credentials to your application. A short sketch follows this list.
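
For example, here is a minimal sketch of an asynchronous Amazon S3 call that runs on CoreCLR; it assumes credentials are resolved from the shared credentials file (or from an IAM role on EC2), and the region and names are illustrative only.

using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;

public static class CoreClrExample
{
    public static async Task ListBucketsAsync()
    {
        // Credentials are resolved from the shared credentials file or an IAM role;
        // the encrypted credentials store is not available on CoreCLR.
        using (var s3Client = new AmazonS3Client(RegionEndpoint.USWest2))
        {
            // Only the asynchronous operations are available on CoreCLR.
            var response = await s3Client.ListBucketsAsync();
            foreach (var bucket in response.Buckets)
            {
                Console.WriteLine(bucket.BucketName);
            }
        }
    }
}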

AWS re:Invent

If you are attending AWS re:Invent next month, I'll be presenting a breakout session about ASP.NET 5 development with AWS and options for deploying ASP.NET 5 applications to AWS.

Feedback

To give us feedback on ASP.NET 5 support or to suggest AWS features to better support ASP.NET 5, open a GitHub issue on the repository for the AWS SDK for .NET. Check out the dnxcore-development branch to see where the ASP.NET 5 work is being done.

DynamoDB DataModel Enum Support

by Pavel Safronov | in .NET

In version 3.1.1 of the DynamoDB .NET SDK package, we added enum support to the Object Persistence Model. This feature allows you to use enums in .NET objects you store and load in DynamoDB. Before this change, the only way to support enums in your objects was to use a custom converter to serialize and deserialize the enums, storing them either as string or numeric representations. With this change, you can use enums directly, without having to implement a custom converter. The following two code samples show an example of this:

Definitions:

[DynamoDBTable("Books")]
public class Book
{
    [DynamoDBHashKey]
    public string Title { get; set; }
    public List<string> Authors { get; set; }
    public EditionTypes Editions { get; set; }
}
[Flags]
public enum EditionTypes
{
    None      = 0,
    Paperback = 1,
    Hardcover = 2,
    Digital   = 4,
}

Using enums:

var client = new AmazonDynamoDBClient();
DynamoDBContext context = new DynamoDBContext(client);

// Store item
Book book = new Book
{
    Title = "Cryptonomicon",
    Authors = new List<string> { "Neal Stephenson" },
    Editions = EditionTypes.Paperback | EditionTypes.Digital
};
context.Save(book);

// Get item
book = context.Load<Book>("Cryptonomicon");
Console.WriteLine("Title = {0}", book.Title);
Console.WriteLine("Authors = {0}", string.Join(", ", book.Authors));
Console.WriteLine("Editions = {0}", book.Editions);

Custom Converters

With OPM enum support, enums are stored as their numeric representations in DynamoDB. (The default underlying type is int, but you can change it, as described in this MSDN article.) If you were previously working with enums by using a custom converter, you may now be able to remove it and use this new support, depending on how your converter was implemented:

  • If your converter stored the enum into its corresponding numeric value, this is the same logic we use, so you can remove it.
  • If your converter turned the enum into a string (for example, by using ToString and Parse), you can discontinue the use of a custom converter as long as you do this for all of the clients. This feature is able to convert strings to enums when reading data from DynamoDB, but will always save an enum as its numeric representation. This means that if you load an item with a "string" enum and then save it to DynamoDB, the enum will now be "numeric." As long as all clients are updated to use the latest SDK, the transition should be seamless.
  • If your converter worked with strings and you depend on them elsewhere (for example, queries or scans that depend on the string representation), continue to use your current converter; a sketch of one appears after this list.
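
For reference, here is a minimal sketch of what such a string-based converter might look like; the class name is hypothetical, and it reuses the EditionTypes enum defined earlier.

using System;
using Amazon.DynamoDBv2.DataModel;
using Amazon.DynamoDBv2.DocumentModel;

// Hypothetical string-based converter; keep something like this only if other
// clients still depend on the string representation stored in DynamoDB.
public class EditionTypesStringConverter : IPropertyConverter
{
    public DynamoDBEntry ToEntry(object value)
    {
        // Store the enum as its string form, for example "Paperback, Digital".
        return new Primitive(((EditionTypes)value).ToString());
    }

    public object FromEntry(DynamoDBEntry entry)
    {
        // Enum.Parse accepts both the string form and the numeric form.
        return (EditionTypes)Enum.Parse(typeof(EditionTypes), entry.AsString());
    }
}

// Attach the converter to the property instead of using the built-in numeric handling:
// [DynamoDBProperty(Converter = typeof(EditionTypesStringConverter))]
// public EditionTypes Editions { get; set; }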

Enum changes

Finally, it's important to keep in mind that because enums are stored as their numeric representations, changes to the enum definition can create problems with existing data and code. If you modify an enum in version B of an application, but still have version A data or clients, some of your clients may not be able to properly handle the newer enum values. Even something as simple as reorganizing the enum values can lead to some very hard-to-identify bugs, as illustrated below. This MSDN blog post provides some very good advice to keep in mind when designing an enum.
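
As a hypothetical illustration, suppose version A of an application stored a value with the original enum and version B later reorganized it:

// Version A stored Editions = Hardcover, which is persisted as the number 2.
[Flags]
public enum EditionTypesV1
{
    None      = 0,
    Paperback = 1,
    Hardcover = 2,
    Digital   = 4,
}

// Version B inserts a new member and shifts the later values. The stored
// number 2 now deserializes as Audiobook, silently changing its meaning.
[Flags]
public enum EditionTypesV2
{
    None      = 0,
    Paperback = 1,
    Audiobook = 2,
    Hardcover = 4,
    Digital   = 8,
}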

Announcing the Amazon DynamoDB Document Client in the AWS SDK for JavaScript

Version 2.2.0 of the AWS SDK for JavaScript introduces support for the document client abstraction in the AWS.DynamoDB namespace. The document client abstraction makes it easier to read and write data to Amazon DynamoDB with the AWS SDK for JavaScript. Now you can use native JavaScript objects without annotating them as AttributeValue types.

This article describes how to use the document client abstraction to make requests to Amazon DynamoDB.

Making Requests with the Document Client

The following example shows a PutItem request to Amazon DynamoDB with the document client. Note that you can use native JavaScript objects without annotating them as AttributeValue types. The document client annotates the JavaScript object that you provide as input with AttributeValue types before making a request to DynamoDB.

For a list of supported API operations, you can check out the API documentation.

Example

var docClient = new AWS.DynamoDB.DocumentClient({region: 'us-west-2'});

var params = {
    Item: {
        hashkey: 'key',
        boolAttr: true,
        listAttr: [1, 'baz', true],
        mapAttr: {
            foo: 'bar'
        }
    },
    TableName: 'table'
};

docClient.put(params, function(err, data){
    if (err) console.log(err);
    else console.log(data);
});

Support for Sets

AWS.DynamoDB.DocumentClient.createSet() is a convenience method for creating a set. This method accepts a JavaScript array and a map of options. The type of set is inferred from the type of the first element in the list. Amazon DynamoDB currently supports three types of sets: string sets, number sets, and binary sets.

Example

var docClient = new AWS.DynamoDB.DocumentClient({region: 'us-west-2'});

var params = {
    Item: {
        hashkey: 'key',
        stringSet: docClient.createSet(['a', 'b']),
        numberSet: docClient.createSet([1, 2]),
        binarySet: docClient.createSet([new Buffer(5), new Uint8Array(5)])
    },
    TableName: 'table'
};

docClient.put(params, function(err, data){
    if (err) console.log(err);
    else console.log(data);
});

You can also validate the uniformity of the supplied list by setting validate: true in the options passed in to the createSet() method.

// This is a valid string set
var validSet = docClient.createSet(['a', 'b'], {validate: true});

// This is an invalid number set
var invalidSet = docClient.createSet([1, 'b'], {validate: true});

Using Response Data from the Document Client

The document client also unmarshalls response data annotated with AttributeValue types from DynamoDB to native JavaScript objects that can be easily used with other JavaScript code.

Example

var docClient = new AWS.DynamoDB.DocumentClient({region: 'us-west-2'});

var params = {
    Key: {
        hashkey: 'key',
    },
    TableName: 'table'
};

docClient.get(params, function(err, data){
    if (err) console.log(err);
    else console.log(data); 
    /**
     *  {
     *      Item: {
     *          hashkey: 'key',
     *          boolAttr: true,
     *          listAttr: [1, 'baz', true],
     *          mapAttr: {
     *              foo: 'bar'
     *          }
     *      }
     *  }
     **/
});

For more information about the document client and its supported operations, see the API documentation.

We hope this simplifies the development of applications with the AWS SDK for JavaScript and Amazon DynamoDB. We’d love to hear what you think about the document client abstraction, so leave us a comment here, or on GitHub, or tweet about it @awsforjs.

Xamarin Support Out of Preview

by Pavel Safronov | in .NET

Last month, with the release of version 3 of the AWS SDK for .NET, Xamarin and Portable Class Library (PCL) support was announced as an in-preview feature. We’ve worked hard to stabilize this feature and with today’s release, we are labeling Xamarin and PCL support production-ready. This applies to Windows Phone and Windows Store support, too. If you’ve been waiting for the production-ready version of the SDK for these platforms, you can now upgrade from version 2 to this release of the SDK.

The immediate impact of this push is that the AWSSDK.CognitoSync, AWSSDK.SyncManager, and AWSSDK.MobileAnalytics NuGet packages are no longer marked as preview. The versions of other AWS SDK NuGet packages have been incremented.

Happy coding!

S3 Transfer Utility Upgrade

by Tyler Moore | in .NET

Version 3 of the AWS SDK for .NET includes an update to the S3 transfer utility. Before this update, if an S3 download of a large file failed, the entire download would be retried. Now the retry logic has been updated so that a retry resumes from the data that has already been downloaded. This means better performance for customers: because a retry no longer requests the entire file, there is less data to stream from S3 when a download is interrupted.

If you are already using the S3 transfer utility, no code changes are required to take advantage of this update. It's available in the AWSSDK.S3 package in version 3.1.2 and later. For more information about the S3 transfer utility, see Amazon S3 Transfer Utility for Windows Store and Windows Phone.
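
As a quick reminder of the API involved, here is a minimal sketch of a download through the transfer utility; the bucket name, key, and local path are placeholders.

using Amazon.S3;
using Amazon.S3.Transfer;

var s3Client = new AmazonS3Client();
var transferUtility = new TransferUtility(s3Client);

// If this download is interrupted, the retry resumes from the bytes already
// written locally instead of re-requesting the entire file.
transferUtility.Download(@"C:\temp\large-file.zip", "my-bucket", "large-file.zip");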

The AWS CLI Topic Guide

by Kyle Knapp | in AWS CLI

Hi everyone! This blog post is about the AWS CLI Topic Guide, a feature that was added in version 1.7.24 of the CLI. The AWS CLI Topic Guide lets you discover and read information about a CLI feature or behavior at a level of detail that the Help page of any single command does not provide.

Discovering Topics

Run the following command to discover the topics available:

$ aws help topics

A Help page with a list of available topics will be displayed. Here is an example list:

AVAILABLE TOPICS
   General
       o config-vars: Configuration Variables for the AWS CLI

       o return-codes: Describes the various return codes of the AWS CLI

   S3
       o s3-config: Advanced configuration for AWS S3 Commands

In this case, the returned topics (config-vars, return-codes, and s3-config) fall into two categories: General and S3. Each topic belongs to a single category only, so you will never see repeated topics in the list.

Accessing Topics

Run the following command to access a topic’s contents:

$ aws help topicname

where topicname is the name of a topic listed in the output of the aws help topics command. For example, if you wanted to access the return-codes topic to learn more about the various return codes in the CLI, all you would have to type is:

$ aws help return-codes

This will display a Help page that describes the various return codes you might receive when running a CLI command and the scenarios in which particular status codes occur.

The AWS CLI Topic Guide is also available online.

Conclusion

The AWS CLI Topic Guide is a great source of information about the CLI. If you have topics you would like us to add, submit a request through our GitHub repository.

Follow us on Twitter @AWSCLI and let us know what you’d like to read about next! Stay tuned for our next post.

 

Managing Dependencies with AWS SDK for Java – Bill of Materials module (BOM)

by Manikandan Subramanian | in Java

Every Maven project specifies its required dependencies in the pom.xml file. The AWS SDK for Java provides a Maven module for every service it supports. To use the Java client for a service, all you need to do is specify the group ID, artifact ID, and version of the Maven module in the dependencies section of pom.xml.

The AWS SDK for Java introduces a new Maven bill of materials (BOM) module, aws-java-sdk-bom, to manage all your dependencies on the SDK and to make sure Maven picks the compatible versions when depending on multiple SDK modules. You may wonder why this BOM module is required when the dependencies are specified in the pom.xml file. Let me take you through an example. Here is the dependencies section from a pom.xml file:

  <dependencies>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-ec2</artifactId>
      <version>1.10.2</version>
    </dependency>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-s3</artifactId>
      <version>1.10.5</version>
    </dependency>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-dynamodb</artifactId>
      <version>1.10.10</version>
    </dependency>
  </dependencies>

Here is Maven's dependency resolution for the preceding pom.xml file:

As you see, the aws-java-sdk-ec2 module is pulling in an older version of aws-java-sdk-core. This intermixing of different versions of SDK modules can create unexpected issues. To ensure that Maven pulls in the correct version of the dependencies, import the aws-java-sdk-bom into your dependency management section and specify your project’s dependencies, as shown below.

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk-bom</artifactId>
        <version>1.10.10</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>
  
  <dependencies>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-ec2</artifactId>
    </dependency>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-s3</artifactId>
    </dependency>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-dynamodb</artifactId>
    </dependency>
  </dependencies>

The Maven version for each dependency will be resolved to the version specified in the BOM. Notice that when you are importing a BOM, you must specify the type as pom and the scope as import.

Here is Maven's dependency resolution for the updated pom.xml file:

As you can see, all the AWS SDK for Java modules are resolved to a single Maven version. Upgrading to a newer version of the AWS SDK for Java requires you to change only the version of the aws-java-sdk-bom module being imported.

Have you been using modularized Maven modules in your project? Please leave your feedback in the comments.

AWS Workshop and Hackathon at PNWPHP

by Jeremy Lindblom | in PHP

In September, the Pacific Northwest PHP Conference (PNWPHP) is happening in Seattle. It's just down the street from us, so we decided to partner with them to host an AWS Workshop and Hackathon on September 10th, 2015.

The workshop portion will serve as a kind of AWS boot camp for PHP developers, and will include a few presentations about AWS services and architecture, the AWS SDK for PHP, and running PHP applications on AWS. You can see a full list of the presentations and speakers on the PNWPHP website.

The hackathon portion will allow people to team up and create something using AWS services and the SDK. Like most hackathons, this one will include food and prizes. Hackathon participants will also receive AWS credits through the AWS Activate program to cover the costs of the services they will be using during the hackathon.

Tickets for AWS Workshop and Hackathon are sold separately from the main PNWPHP conference, so whether you end up attending the main conference or not, you still have the opportunity to join us at our workshop/hackathon. In fact, you can use the discount code "AWSHACK" to get your AWS Workshop and Hackathon ticket for a 50% discount. Head to the PNWPHP registration page to get your ticket.

Whether you are a Seattle native, or you are in town for PNWPHP, we hope to see you at our special AWS Workshop and Hackathon.

Using AWS CodeCommit from Eclipse

Earlier this month, we launched AWS CodeCommit — a managed revision control service that hosts Git repositories and works with existing Git-based tools.

If you’re an Eclipse user, it’s easy to use the EGit tools in Eclipse to work with AWS CodeCommit. This post shows how to publish a project to AWS CodeCommit so you can start trying out the new service.

Configure SSH Authentication

To use AWS CodeCommit with Eclipse’s Git tooling, you’ll need to configure SSH credentials for accessing CodeCommit. This is an easy process you’ll only need to do once. The AWS CodeCommit User Guide has a great walkthrough describing the exact steps to create a keypair and register it with AWS. Make sure you take the time to test your SSH credentials and configuration as described in the walkthrough.
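
For reference, the result of that walkthrough is an entry in your ~/.ssh/config file similar to the following; the key ID and private key file name below are placeholders for the values you create in the walkthrough.

Host git-codecommit.*.amazonaws.com
  User APKAEIBAERJR2EXAMPLE
  IdentityFile ~/.ssh/codecommit_rsa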

Create a Repository

Next, we’ll create a new Git repository using AWS CodeCommit. The AWS CodeCommit User Guide has instructions for creating repositories through the AWS CLI or the AWS CodeCommit console.

Here’s how I used the AWS CLI:

% aws --region us-east-1 codecommit create-repository \
      --repository-name MyFirstRepo \
      --repository-description "My first CodeCommit repository"
{
  "repositoryMetadata": {
    "creationDate": 1437760512.195,
    "cloneUrlHttp": 
       "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyFirstRepo",
    "cloneUrlSsh": 
       "ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyFirstRepo",
    "repositoryName": "MyFirstRepo",
    "Arn": "arn:aws:codecommit:us-east-1:963699449919:MyFirstRepo",
    "repositoryId": "c4ed6846-5000-44ce-a808-b1862766d8bc",
    "repositoryDescription": "My first CodeCommit repository",
    "accountId": "963699449919",
    "lastModifiedDate": 1437760512.195
  }
}

Whether you use the CLI or the console to create your CodeCommit repository, make sure to copy the cloneUrlSsh property that’s returned. We’ll use that in the next step when we clone the CodeCommit repository to our local machine.

Create a Clone

Now we’re ready to use our repository locally and push one of our projects into it. The first thing we need to do is clone our repository so that we have a local version. In Eclipse, open the Git Repositories view (Window -> Show View -> Other…) and select the option to clone a Git repository.

In the first page of the Clone Git Repository wizard, paste the Git SSH URL from your CodeCommit repository into the URI field. Eclipse will parse out the connection protocol, host, and repository path.

Click Next. The CodeCommit repository we created is an empty, or bare, repository, so there aren’t any branches to configure yet.

Click Next. On the final page of the wizard, select where on your local machine you'd like to store the cloned repository.

Push to Your Repository

Now that we’ve got a local clone of our repository, we’re ready to start pushing a project into it. Select a project and use Team -> Share to connect that project with the repository we just cloned. In my example, I simply created a new project.

Next use Team -> Commit… to make the initial check-in to your cloned repo.

Finally, use Team -> Push Branch… to push the master branch in your local repository up to your CodeCommit repository. This will create the master branch on the CodeCommit repository and configure your local repo for upstream pushes and pulls.

Conclusion

Your project is now configured with the EGit tools in Eclipse and set up to push and pull from a remote AWS CodeCommit repository. You can take advantage of all the EGit tooling in Eclipse to work with your repository and easily push and pull changes from your AWS CodeCommit repository. Have you tried using AWS CodeCommit yet?

DynamoDB Table Cache

by Pavel Safronov | in .NET

Version 3 of the AWS SDK for .NET includes a new feature, the SDK Cache. This is an in-memory cache used by the SDK to store information like DynamoDB table descriptions. Before version 3, the SDK retrieved table information every time you constructed a Table or DynamoDBContext object. For example, the following code constructs a Table object and performs several operations with it. The LoadTable method makes a DescribeTable call to DynamoDB, so this sample makes three service calls: DescribeTable, GetItem, and UpdateItem.

var table = Table.LoadTable(ddbClient, "TestTable");
var item = table.GetItem(42);
item["Updated"] = DateTime.Now;
table.UpdateItem(item);

In most cases, your application will use tables that do not change, so constantly retrieving the same table information is wasteful and unnecessary. In fact, to keep the number of service calls to a minimum, the best option is to create a single copy of the Table or DynamoDBContext object and keep it around for the lifetime of your application. This, of course, requires a change to the way your application uses the AWS SDK for .NET.
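
For example, a minimal sketch of that pattern (the class name is illustrative) keeps one shared DynamoDBContext for the life of the process:

using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DataModel;

public static class DynamoDb
{
    private static readonly AmazonDynamoDBClient Client = new AmazonDynamoDBClient();

    // One context for the lifetime of the application, so table metadata
    // is retrieved only once regardless of the SDK Cache.
    public static readonly DynamoDBContext Context = new DynamoDBContext(Client);
}

// Usage elsewhere in the application:
// var book = DynamoDb.Context.Load<Book>("Cryptonomicon");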

With the SDK Cache, the SDK will now attempt to retrieve table information from the cache first. Even if your code constructs a new Table or DynamoDBContext object for each call, the SDK will make only a single DescribeTable call per table, and will keep this data around for the lifetime of the process. So if you ran the preceding code twice, only the first invocation of LoadTable would result in a DescribeTable call.

This change will reduce the number of DescribeTable calls your application makes, but in some cases you may need to get the most up-to-date table information from the service (for example, if you are developing a generic DynamoDB table scanner utility). You have two options: periodically clear the table metadata cache or disable the SDK Cache.

The first approach is to call Table.ClearTableCache(), a static method on the Table class. This operation will clear out the entire table metadata cache, so any Table or DynamoDBContext objects you create after this point will result in one new DescribeTable call per table. (Of course, after the data is retrieved once, it will again be stored in the cache. This approach will work only if you know when your table metadata changes and clear the cache intermittently.)
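
For example, a sketch of the first approach, assuming you know the table metadata has just changed:

// Clear all cached table metadata; the next Table or DynamoDBContext operation
// per table will issue a fresh DescribeTable call.
Table.ClearTableCache();

var table = Table.LoadTable(ddbClient, "TestTable");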

The second approach is to disable the SDK Cache, forcing the SDK to always retrieve the current table configuration. This can be accomplished through code or the app.config/web.config file, as illustrated below. (Disabling the SDK Cache will revert to version 2 behavior, so unless you hold on to the Table or DynamoDBContext objects as you create them, your application will end up making DescribeTable service calls.)

Disabling the cache through code:

// Disable SDK Cache for the entire application
AWSConfigs.UseSdkCache = false;

Disabling the cache through app.config:

<configuration>
  <appSettings>
    <!-- Disables SDK Cache for the entire application -->
    <add key="AWSCache" value="false" />
  </appSettings>
</configuration>