AWS re:Invent 2015 Recap

Another AWS re:Invent in the bag. It was great to talk to so many of our customers about .NET and PowerShell. Steve and I gave two talks this year. The first session was about how to take advantage of ASP.NET 5 in AWS. The second session was our first-ever PowerShell talk at re:Invent. It was great to see community excitement for our PowerShell support. If you weren’t able to come to re:Invent this year, you can view our sessions online.

We published the source code and scripts used in our talks in the reInvent-2015 folder in our .NET SDK samples repository.

Hope to see you at next year’s AWS re:Invent!

AWS re:Invent 2015

by Steve Roberts | in .NET

This year’s AWS re:Invent conference is just a few days away. Norm, Milind, and I from the .NET SDK and Tools team at AWS will be attending. We are looking forward to meeting with as many of you as we can.

This year we have two .NET-related breakout sessions:

On Wednesday afternoon, we will show you how to develop and host ASP.NET 5 applications on AWS. Check out DEV302: Hosting ASP.NET 5 Applications in AWS with Docker and AWS CodeDeploy in the session catalog.

On Thursday afternoon, we will hold our first-ever re:Invent session on the AWS Tools for Windows PowerShell! Check out DEV202: Under the Desk to the AWS Cloud with Windows PowerShell in the session catalog. The session will walk through how a few easy-to-use scripts can handle the workflow of moving a virtualized server into the cloud.

If you’re attending the conference this year, be sure to stop by the SDKs and Tools booth in the Exhibit Hall and say hello. We’d love to get feedback on what we can do to help with your day-to-day work with the AWS SDK for .NET, the AWS Tools for Windows PowerShell, and the AWS Toolkit for Visual Studio. See you in Las Vegas!

New Support for ASP.NET 5 in AWS SDK for .NET

by Norm Johanson | in .NET

Today we have released beta support for ASP.NET 5 in the AWS SDK for .NET. ASP.NET 5 is an exciting development for .NET developers, with modularization and cross-platform support being major goals for the new platform.

Currently, ASP.NET 5 is on beta 7. There may be more changes before its 1.0 release. For this reason, we have released a separate 3.2 version of the SDK (marked beta) to NuGet. We will continue to maintain the 3.1 version as the current, stable version of the SDK. When ASP.NET 5 goes out of beta, we will take version 3.2 of the SDK out of beta.

CoreCLR

ASP.NET 5 applications can run on .NET 4.5.2, Mono 4.0.1, or the new CoreCLR runtime. If you are targeting the new CoreCLR runtime, be aware of these coding differences:

  • Service calls must be made asynchronously, because the HTTP client used for CoreCLR supports asynchronous calls only. Coding your application to use asynchronous operations can improve your application's performance, because fewer tasks are blocked waiting for a response from the server. (See the sketch after this list.)
  • The CoreCLR version of the AWS SDK for .NET currently does not support our encrypted SDK credentials store, which is available in the .NET 3.5 and 4.5 versions of the AWS SDK for .NET. This is because the encrypted store uses P/Invoke to make system calls into Windows to handle the encryption. Because CoreCLR is cross-platform, that option is not available. For local development with CoreCLR, we recommend you use the shared credentials file. When running in EC2 instances, Identity and Access Management (IAM) roles are the preferred mechanism for delivering credentials to your application.
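
Here is a minimal sketch of what an asynchronous service call looks like on CoreCLR. The region is a placeholder, and the snippet assumes the AWSSDK.S3 package with credentials coming from the shared credentials file or an IAM role:

using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;

class Program
{
    public static async Task MainAsync()
    {
        using (var s3Client = new AmazonS3Client(RegionEndpoint.USWest2))
        {
            // On CoreCLR, only the *Async versions of the service operations are available.
            var response = await s3Client.ListBucketsAsync();
            foreach (var bucket in response.Buckets)
                Console.WriteLine(bucket.BucketName);
        }
    }

    public static void Main()
    {
        MainAsync().Wait();
    }
}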

AWS re:Invent

If you are attending AWS re:Invent next month, I will be presenting a breakout session about ASP.NET 5 development with AWS and options for deploying ASP.NET 5 applications to AWS.

Feedback

To give us feedback on ASP.NET 5 support or to suggest AWS features to better support ASP.NET 5, open a GitHub issue on the repository for the AWS SDK for .NET. Check out the dnxcore-development branch to see where the ASP.NET 5 work is being done.

DynamoDB DataModel Enum Support

by Pavel Safronov | in .NET

In version 3.1.1 of the DynamoDB .NET SDK package, we added enum support to the Object Persistence Model. This feature allows you to use enums in .NET objects you store and load in DynamoDB. Before this change, the only way to support enums in your objects was to use a custom converter to serialize and deserialize the enums, storing them either as string or numeric representations. With this change, you can use enums directly, without having to implement a custom converter. The following two code samples show an example of this:

Definitions:

[DynamoDBTable("Books")]
public class Book
{
    [DynamoDBHashKey]
    public string Title { get; set; }
    public List<string> Authors { get; set; }
    public EditionTypes Editions { get; set; }
}
[Flags]
public enum EditionTypes
{
    None      = 0,
    Paperback = 1,
    Hardcover = 2,
    Digital   = 4,
}

Using enums:

var client = new AmazonDynamoDBClient();
DynamoDBContext context = new DynamoDBContext(client);

// Store item
Book book = new Book
{
    Title = "Cryptonomicon",
    Authors = new List<string> { "Neal Stephenson" },
    Editions = EditionTypes.Paperback | EditionTypes.Digital
};
context.Save(book);

// Get item
book = context.Load<Book>("Cryptonomicon");
Console.WriteLine("Title = {0}", book.Title);
Console.WriteLine("Authors = {0}", string.Join(", ", book.Authors));
Console.WriteLine("Editions = {0}", book.Editions);

Custom Converters

With OPM enum support, enums are stored as their numeric representations in DynamoDB. (The default underlying type is int, but you can change it, as described in this MSDN article.) If you were previously working with enums by using a custom converter, you may now be able to remove it and use this new support, depending on how your converter was implemented:

  • If your converter stored the enum as its corresponding numeric value, this is the same logic we use, so you can simply remove the converter.
  • If your converter turned the enum into a string (for example, using ToString and Parse), you can discontinue the use of a custom converter as long as you do so for all of your clients. This feature is able to convert strings to enums when reading data from DynamoDB, but it will always save an enum as its numeric representation. This means that if you load an item with a "string" enum and then save it to DynamoDB, the enum will now be "numeric." As long as all clients are updated to use the latest SDK, the transition should be seamless.
  • If your converter worked with strings and you depend on them elsewhere (for example, queries or scans that depend on the string representation), continue to use your current converter. A sketch of such a converter follows this list.
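
In case it is useful, here is a minimal sketch of the kind of string-based converter described in the last bullet. The class name is illustrative, and it assumes the Book and EditionTypes definitions shown earlier:

using System;
using Amazon.DynamoDBv2.DataModel;
using Amazon.DynamoDBv2.DocumentModel;

// Stores EditionTypes as its string name (e.g., "Paperback, Digital") instead of a number.
public class EditionTypesStringConverter : IPropertyConverter
{
    public DynamoDBEntry ToEntry(object value)
    {
        // Serialize the enum to its string representation
        return new Primitive(((EditionTypes)value).ToString());
    }

    public object FromEntry(DynamoDBEntry entry)
    {
        // Parse the stored string back into the enum
        return (EditionTypes)Enum.Parse(typeof(EditionTypes), entry.AsString());
    }
}

// Applied to the property with the DynamoDBProperty attribute:
// [DynamoDBProperty(typeof(EditionTypesStringConverter))]
// public EditionTypes Editions { get; set; }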

Enum changes

Finally, it’s important to keep in mind that because enums are stored as their numeric representations, updates to the enum definition can create problems with existing data and code. If you modify an enum in version B of an application, but have version A data or clients, it’s possible some of your clients may not be able to properly handle the newer version of the enum values. Even something as simple as reorganizing the enum values can lead to some very hard-to-identify bugs. This MSDN blog post provides some very good advice to keep in mind when designing an enum.

Xamarin Support Out of Preview

by Pavel Safronov | in .NET

Last month, with the release of version 3 of the AWS SDK for .NET, Xamarin and Portable Class Library (PCL) support was announced as an in-preview feature. We’ve worked hard to stabilize this feature, and with today’s release, we are labeling Xamarin and PCL support production-ready. This applies to Windows Phone and Windows Store support, too. If you’ve been waiting for the production-ready version of the SDK for these platforms, you can now upgrade from version 2 to this release of the SDK.

The immediate impact of this release is that the AWSSDK.CognitoSync, AWSSDK.SyncManager, and AWSSDK.MobileAnalytics NuGet packages are no longer marked as preview. The versions of the other AWS SDK NuGet packages have also been incremented.

Happy coding!

S3 Transfer Utility Upgrade

by Tyler Moore | in .NET

Version 3 of the AWS SDK for .NET includes an update to the S3 transfer utility. Before this update, if an S3 download of a large file failed, the entire download would be retried. Now the retry logic has been updated so that a retry resumes from the data that has already been downloaded. This means better performance for customers. Because the retry attempt no longer requests the entire file, there is less data to stream from S3 when a download is interrupted.

If you are already using the S3 transfer utility, no code changes are required to take advantage of this update. It’s available in the AWSSDK.S3 package in version 3.1.2 and later. For more information about the S3 transfer utility, see Amazon S3 Transfer Utility for Windows Store and Windows Phone.
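
For reference, here is a minimal sketch of a download through the transfer utility; the bucket name, key, and local path are placeholders:

using Amazon.S3;
using Amazon.S3.Transfer;

// Downloads a large object through the transfer utility. If the download is
// interrupted, a retry now resumes from the bytes already written to disk.
var s3Client = new AmazonS3Client();
var transferUtility = new TransferUtility(s3Client);
transferUtility.Download(@"C:\temp\large-file.bin", "my-bucket", "large-file.bin");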

DynamoDB Table Cache

by Pavel Safronov | in .NET

Version 3 of the AWS SDK for .NET includes a new feature, the SDK Cache. This is an in-memory cache used by the SDK to store information like DynamoDB table descriptions. Before version 3, the SDK retrieved table information when you constructed a Table or DynamoDBContext object. For example, the following code creates a table and performs several operations on it. The LoadTable method makes a DescribeTable call to DynamoDB, so this sample will make three service calls: DescribeTable, GetItem, and UpdateItem.

var table = Table.LoadTable(ddbClient, "TestTable"); // DescribeTable service call
var item = table.GetItem(42);                        // GetItem service call
item["Updated"] = DateTime.Now;
table.UpdateItem(item);                              // UpdateItem service call

In most cases, your application will use tables that do not change, so constantly retrieving the same table information is wasteful and unnecessary. In fact, to keep the number of service calls to a minimum, the best option is to create a single copy of the Table or DynamoDBContext object and keep it around for the lifetime of your application. This, of course, requires a change to the way your application uses the AWS SDK for .NET.
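
For example, a common way to do this (a minimal sketch; the class name is illustrative) is to create the context once and reuse it everywhere:

using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DataModel;

// One shared client and context, reused for the lifetime of the application,
// so table information is retrieved only once per table.
public static class DynamoDBAccess
{
    private static readonly AmazonDynamoDBClient client = new AmazonDynamoDBClient();
    public static readonly DynamoDBContext Context = new DynamoDBContext(client);
}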

In version 3, the SDK will first attempt to retrieve table information from the SDK Cache. Even if your code is constructing a new Table or DynamoDBContext object for each call, the SDK will only make a single DescribeTable call per table, and will keep this data around for the lifetime of the process. So if you ran the preceding code twice, only the first invocation of LoadTable would result in a DescribeTable call.

This change will reduce the number of DescribeTable calls your application makes, but in some cases you may need to get the most up-to-date table information from the service (for example, if you are developing a generic DynamoDB table scanner utility). You have two options: periodically clear the table metadata cache or disable the SDK Cache.

The first approach is to call Table.ClearTableCache(), a static method on the Table class. This operation will clear out the entire table metadata cache, so any Table or DynamoDBContext objects you create after this point will result in one new DescribeTable call per table. (Of course, after the data is retrieved once, it will again be stored in the cache. This approach will work only if you know when your table metadata changes and clear the cache intermittently.)
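
For example (a minimal sketch, reusing the ddbClient and table from the earlier snippet):

// Clear all cached table metadata; the next LoadTable call for each table
// makes a fresh DescribeTable request.
Table.ClearTableCache();
var table = Table.LoadTable(ddbClient, "TestTable"); // DescribeTable is called again here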

The second approach is to disable the SDK Cache, forcing the SDK to always retrieve the current table configuration. This can be accomplished through code or the app.config/web.config file, as illustrated below. (Disabling the SDK Cache will revert to version 2 behavior, so unless you hold on to the Table or DynamoDBContext objects as you create them, your application will end up making DescribeTable service calls.)

Disabling the cache through code:

// Disable SDK Cache for the entire application
AWSConfigs.UseSdkCache = false;

Disabling the cache through app.config:

<configuration>
  <appSettings>
    <!-- Disables SDK Cache for the entire application -->
    <add key="AWSCache" value="false" />
  </appSettings>
</configuration>

Version 3 of the AWS SDK for .NET Out of Preview

by Norm Johanson | in .NET

Back in February, we announced our intention to release a new major version of the AWS SDK for .NET. In April, we released a preview on NuGet. After receiving great feedback from users, today we are taking version 3 of the AWS SDK for .NET out of preview. This means the preview flag has been removed from the NuGet packages. The SDK is now included in our MSI installer from our website.

Version 3 is a new, modularized SDK. Every service is a separate assembly and distributed as a separate NuGet package. Each service has a dependency on a common runtime, AWSSDK.Core. This has been a major request from our users, especially now that AWS has grown to over 50 services. This design also gives SDK users better control over when to upgrade to the newest service updates.

We wanted to make the transition to version 3 as easy as possible, so there are very few breaking changes to the public API. For the full list of changes, see our API Reference, which contains a migration guide.

Our hope is that most users will just need to replace the old reference to version 2 and add the reference to the services they are using. If you are using NuGet to get the SDK, the reference to our core runtime package will be added automatically. If you are getting the SDK from the installer on our website, then you will need to add a reference to AWSSDK.Core.

Xamarin Preview

We recently announced a public preview of Xamarin support, which is part of version 3. Even though the SDK is now widely available, Xamarin and the Portable Class Library version of the SDK are still in preview. We encourage you to try the new Xamarin support and give us feedback, but we are not ready for users to publish production applications just yet. Users with an immediate need for Windows Phone and Windows Store support should continue using version 2 until the PCL version of the SDK version 3 is production-ready.

PowerShell

With our move to version 3, we have also switched our AWS Tools for Windows PowerShell to the new SDK. The version numbers for AWS SDK for .NET and our AWS Tools for Windows PowerShell are kept in sync, so AWS Tools for Windows PowerShell is getting a major version bump to 3. There are otherwise no major changes to AWS Tools for Windows PowerShell.

Changes to Our Installer

The installer has been updated to contain version 3 of the SDK, but it also contains version 2 for users who are not ready to move to version 3. The Portable Class Library version of the SDK (which includes Xamarin support) is only distributed through NuGet and will not be available through the installer. The Portable Class Library uses platform-specific dependencies which are automatically resolved when references are added through NuGet. This would be a complex process if done manually or without NuGet.

Packages on NuGet

For an up-to-date list of the version 3 NuGet packages, check out the NuGet section in the SDK’s GitHub README.md.

Using the New Import Cmdlets for Amazon EC2

by Steve Roberts | in .NET

Amazon EC2 recently released an updated set of APIs for importing virtual machine images and disks. These new APIs, ImportImage and ImportSnapshot, are faster and more flexible than the original import APIs and are now available in the AWS Tools for Windows PowerShell (from version 2.3.43.0) through two new cmdlets – Import-EC2Image and Import-EC2Snapshot. Let’s take a look at how we use the new cmdlets to perform imports.

Importing a VM Image

Importing an image to EC2 can be done in just a couple of steps. First, we have to upload the disk image to Amazon S3, and then we run the import cmdlet that will yield an Amazon Machine Image (AMI) we can launch. We also need to set up an Identity and Access Management role, plus associated role policy, that gives EC2 access to the S3 artifacts. This is a one-time operation.

Import Prerequisites

As detailed in the EC2 user guide topic, the new import service APIs use an Identity and Access Management role, with associated role policy, to access the image file(s) that you upload to Amazon S3 during import. Setting these up is a one-time operation (assuming you use the same bucket to hold the image file for each import) and can be done from PowerShell very easily, as follows.

First, we create the role. The EC2 import API defaults to a role name of "vmimport" if a custom role name is not supplied when we run the import command. For the sake of simplicity, that’s the name we’ll use in this blog example:

PS C:> $importPolicyDocument = @"
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"",
         "Effect":"Allow",
         "Principal":{
            "Service":"vmie.amazonaws.com"
         },
         "Action":"sts:AssumeRole",
         "Condition":{
            "StringEquals":{
               "sts:ExternalId":"vmimport"
            }
         }
      }
   ]
}
"@

PS C:> New-IAMRole -RoleName vmimport -AssumeRolePolicyDocument $importPolicyDocument

Now that we have created the role, we add a policy allowing EC2 access to the bucket containing our image:

PS C:> $bucketName = "myvmimportimages"
PS C:> $rolePolicyDocument = @"
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:ListBucket",
            "s3:GetBucketLocation"
         ],
         "Resource":[
            "arn:aws:s3:::$bucketName"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetObject"
         ],
         "Resource":[
            "arn:aws:s3:::$bucketName/*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "ec2:ModifySnapshotAttribute",
            "ec2:CopySnapshot",
            "ec2:RegisterImage",
            "ec2:Describe*"
         ],
         "Resource":"*"
      }
   ]
}
"@

PS C:> Write-IAMRolePolicy -RoleName vmimport -PolicyName vmimport -PolicyDocument $rolePolicyDocument

That completes the prerequisites. If we want to use a different bucket (or additional buckets) in the future, we simply reconstruct the policy here-string shown above with the name(s) of the new or additional buckets and re-run the Write-IAMRolePolicy cmdlet.

Uploading the Image

The VM or disk image must be uploaded to S3. To do this, we use the Write-S3Object cmdlet. Assume we have a Windows Server 2012 R2 image consisting of a single disk that we want to import. This image is located on disk in the file C:\CustomWindows2012R2.vhd. We’re also using the same bucket declared in the prerequisites above, "myvmimportimages", which we captured in a variable:

PS C:> Write-S3Object -BucketName $bucketName -File .\CustomWindows2012R2.vhd

Because we did not supply a -Key parameter to the cmdlet to identify the object in the bucket, the file name is used by default. If the VM image to be imported consists of multiple disk images, simply repeat the use of Write-S3Object to upload all the images.

We’re now ready to import the image.

Importing the Image

The cmdlet to import VM images, Import-EC2Image, accepts a number of parameters that allow you to describe the import for future reference and detail which object in S3 contains the image EC2 should operate on. You can also specify a custom role name (with the -RoleName parameter) granting EC2 access to the S3 object. Earlier in this post we showed how to set up the role and policy using the default name EC2 assumes if a custom role is not specified, so this parameter will not be used here.

First, we must construct one or more ImageDiskContainer instances. If we were importing a VM that consists of multiple disk images (and therefore multiple S3 objects), we would create multiple container instances and pass them as an array to the cmdlet. Our sample image for this post contains just a single image file:

PS C:> $windowsContainer = New-Object Amazon.EC2.Model.ImageDiskContainer
PS C:> $windowsContainer.Format="VHD"

Details of the S3 location of the image file are specified in a nested object:

PS C:> $userBucket = New-Object Amazon.EC2.Model.UserBucket
PS C:> $userBucket.S3Bucket = $bucketName
PS C:> $userBucket.S3Key = "CustomWindows2012R2.vhd"
PS C:> $windowsContainer.UserBucket = $userBucket

Having constructed the disk container object(s), we can set up the parameters to the import cmdlet. One of the parameters, ClientToken, allows us to pass an idempotency token – this ensures that if a problem arises and we need to re-run the command, EC2 does not start a new import:

PS C:> $params = @{
    "ClientToken"="CustomWindows2012R2_" + (Get-Date)
    "Description"="My custom Windows 2012R2 image import"
    "Platform"="Windows"
    "LicenseType"="AWS"
}

We’re now ready to run the import cmdlet:

PS C:> Import-EC2Image -DiskContainer $windowsContainer @params 

Architecture    : 
Description     : My custom Windows 2012R2 image import
Hypervisor      : 
ImageId         : 
ImportTaskId    : import-ami-abcdefgh
LicenseType     : AWS
Platform        : Windows
Progress        : 2
SnapshotDetails : {}
Status          : active
StatusMessage   : pending

We can check progress on an import (or set of imports) using the Get-EC2ImportImageTask cmdlet, which outputs the same information as above for each import task. Optionally, we can query a specific import by supplying a value to the ImportTaskId parameter. We can also supply a set of filters if we don’t want to slice-n-dice the output through the PowerShell pipeline.

To abandon an import, we use the Stop-EC2ImportTask cmdlet. This cmdlet is used for both VM image and disk snapshot imports. It accepts the import task id of the import to be stopped.

Importing a Disk Snapshot

Importing disk snapshots to be used as additional EBS volumes to attach to EC2 instances is very similar to importing a VM image except that we’re always importing a single image:

PS C:> Write-S3Object -BucketName $bucketName -File .\DataDisk.vhd
PS C:> $params = @{
    "ClientToken"="MySnapshotImport_" + (Get-Date)
    "Description"="My Data Disk Image"
    "DiskContainer_Description" = "Data disk import"
    "DiskContainer_Format" = "VHD"
    "DiskContainer_S3Bucket" = $bucketName
    "DiskContainer_S3Key" = "DataDisk.vhd"
}

PS C:> Import-EC2Snapshot @params | fl

Description         : My Data Disk Image
ImportTaskId        : import-snap-abcdefg
SnapshotTaskDetail  : Amazon.EC2.Model.SnapshotTaskDetail

To check progress of a snapshot import, we use the Get-EC2ImportSnapshotTask cmdlet, which is very similar to Get-EC2ImportImageTask. As mentioned earlier, a snapshot import can be stopped using Stop-EC2ImportTask.

Deprecated: Original Import Cmdlets

The original import cmdlets (Import-EC2Instance, Import-EC2Volume, Get-EC2ConversionTask and Stop-EC2ConversionTask) have now been marked as deprecated. They will be removed in a future release.

More Information

We hope you find the new cmdlets easier to use! For more information about importing VM images and disk snapshots to Amazon EC2, see this post on the official AWS Blog. You can also access the EC2 documentation for the feature.

RegisterProfile

by Pavel Safronov | in .NET

The .NET SDK team is aware that some customers are having issues using the Amazon.Util.ProfileManager.RegisterProfile method, so this blog post will explain what this method does, when it should be used, and, more importantly, why it should never be called inside your main AWS application.

We discussed RegisterProfile in an earlier blog post about storing and loading AWS credentials. Take a look at this post for more information about profiles and how they can be used to simplify local credentials management.

Let’s start with what Amazon.Util.ProfileManager.RegisterProfile is and how it should be used. The RegisterProfile method creates a new profile or updates an existing profile with a given set of credentials. After this is done, the profile can be used in the SDK, PowerShell, or the Visual Studio Toolkit to make AWS calls with a set of credentials, without having to constantly include the credentials in your code.
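
For example, a one-time setup call looks like the following; the profile name and keys are placeholders:

// Run once (for example, from a small setup utility), not from the application
// that will consume the profile.
Amazon.Util.ProfileManager.RegisterProfile("profile-name", "ACCESS-KEY-ID", "SECRET-ACCESS-KEY");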

When using the SDK, we can access our profile by specifying it in our app.config/web.config file:

<configuration>
   <appSettings>
      <add key="AWSProfileName" value="profile-name"/>
   </appSettings>
</configuration>

Or explicitly with the following code:

var credentials = new Amazon.Runtime.StoredProfileAWSCredentials("profile-name");

In PowerShell, the profile can be accessed like this:

Set-AWSCredentials -ProfileName development

Finally, when using the Visual Studio Toolkit, you simply choose the desired profile from the Profile drop-down menu.

In this sense, RegisterProfile is a utility method and should be called only once: when you want to configure or update your current environment. After a profile is configured, you should not be making calls to RegisterProfile.

You should not be calling this method in your main AWS application. After you’ve configured your environment with the credentials you want to use, calls to RegisterProfile will not have any effect and, as illustrated in a recent forum post, in some cases can actually cause your application to crash. (Unfortunately, if you are running your application under IIS, the SDK credential store will not work. The credentials are encrypted for the currently logged-on user, and the system account running IIS will not be able to decrypt them. In this case, you could use the shared credentials file with the AWSProfilesLocation app setting, as shown below.)
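
For example, assuming the shared credentials file is stored at a path the IIS application pool account can read (the path below is a placeholder):

<configuration>
   <appSettings>
      <add key="AWSProfileName" value="profile-name"/>
      <add key="AWSProfilesLocation" value="C:\aws\credentials"/>
   </appSettings>
</configuration>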

We hope this clears up the confusion about Amazon.Util.ProfileManager.RegisterProfile. Happy coding!