

Performance improvements to AWS SDK for .NET

by Milind Gokarn | in .NET

We recently fixed a performance issue that affects versions of the SDK targeting .NET Framework 4.5. The issue is present in SDK versions 2.0.0.0 through 2.2.4.0 and affects only synchronous API calls.

The issue causes additional threads to be used when you call the synchronous APIs. In some environments, such as ASP.NET applications running on single-core machines, we have seen it intermittently introduce additional latency for SDK calls. SDK versions 2.3.x.x and later contain the fix.

 

S3 Server Side Encryption with Windows PowerShell

by Steve Roberts | in .NET

Today we have a guest post by AWS Solutions Architect David Veith on making use of Amazon S3’s server-side encryption with customer-provided keys.

The release of version 2.1.4 of the AWS Tools for Windows PowerShell introduced support for a new server-side encryption method for Amazon S3. You now have three primary options for encrypting data at rest in S3:

  • You can secure your sensitive data before you ever send it to AWS by using client-side encryption.
  • You can use S3’s built-in server-side encryption (SSE), so your data is encrypted with AWS keys and processes while AWS manages the keys.
  • And now you can get all of the benefits and ease of use of server-side encryption, but with your own customer-provided keys (SSE-C).

This blog post describes how you can use the AWS Tools for Windows PowerShell to secure your data at rest in S3 with the two methods of S3 server-side encryption (SSE-C and SSE).

Server-Side Encryption with Customer-Provided Keys (SSE-C)

With SSE-C, S3 encrypts your data on your behalf using keys that you provide and manage. Because S3 performs the encryption for you, you get the benefits of using your encryption keys without the burden or cost of writing or executing your own encryption code. This method of encryption is not available via the AWS console.

Protecting Your Keys

S3 discards your key immediately upon encrypting/decrypting your object; because the key is never retained, you lose your object if you lose your key. For this reason, it is very important to take special precautions to store your keys safely and securely. If you use multiple keys, you are responsible for tracking which encryption key you provided for each object. You should also consider implementing an envelope encryption process when storing encrypted objects in S3, as described in the article Client-Side Data Encryption with the AWS SDK for Java and Amazon S3.

Creating Your Key

The following commands use the .NET AesManaged class in System.Security.Cryptography to create a base64-encoded 256-bit key.

# Create a 256-bit AES key and encode it as a base64 string
$Aes = New-Object System.Security.Cryptography.AesManaged
$Aes.KeySize = 256
$Aes.GenerateKey()
$Base64key = [System.Convert]::ToBase64String($Aes.Key)

Writing an Object (SSE-C)

The Write-S3Object cmdlet is used to store an object in S3, encrypting it at rest using a customer-provided key. The key is base64-encoded and the encryption method is specified as AES256. After your object is encrypted, your key is discarded.

$initialfile  = "YourFile"
$bucket       = "YourBucketName"
$objectkey    = "YourKeyName" 

try 
{
	Write-S3Object -Region us-west-2 -File $initialfile -BucketName $bucket -Key $objectkey -ServerSideEncryptionCustomerProvidedKey $Base64key -ServerSideEncryptionCustomerMethod AES256
}
catch [system.exception] 
{
	Write-Host  "Error: " $_.Exception.Message
}

Reading an Object (SSE-C)

The Read-S3Object cmdlet is used to retrieve an encrypted object from S3 using the same customer-provided key that was used to encrypt it when it was originally stored in S3. The key is base64-encoded and the encryption method is specified as AES256. After your object is decrypted, your key is discarded.

$ssecfileout  = "YourOutputFile" 
$bucket       = "YourBucketName"
$objectkey    = "YourKeyName" 

try 
{
	Read-S3Object -Region us-west-2 -BucketName $bucket -Key $objectkey -File $ssecfileout  -ServerSideEncryptionCustomerProvidedKey $Base64key -ServerSideEncryptionCustomerMethod AES256
}
catch [system.exception]
{
	Write-Host  "Error: " $_.Exception.Message
}

Copying an Object (SSE-C)

The Copy-S3Object cmdlet is used to copy an encrypted object in S3 to a new key. Two keys are required for this scenario. The first key is required to decrypt the original object (because S3 never stores the key). The second key is used to encrypt the new copy. In this case, we used the same encryption key and the same bucket but that is not a requirement. As always, your keys are discarded after use.

$bucket         = "YourBucketName"
$objectkey      = "YourKeyName" 
$copyobjectkey  = "YourDestinationKeyName"

try 
{
	Copy-S3Object -Region us-west-2 -BucketName $bucket -Key $objectkey -DestinationBucket $bucket  -DestinationKey $copyobjectkey -CopySourceServerSideEncryptionCustomerMethod AES256 -CopySourceServerSideEncryptionCustomerProvidedKey $Base64key -ServerSideEncryptionCustomerProvidedKey $Base64key -ServerSideEncryptionCustomerMethod AES256
}
catch [system.exception] 
{
	Write-Host  "Error: " $_.Exception.Message
}

S3 Server-Side Encryption (SSE) with AWS Keys

This is the simplest method of encrypting your data at rest in S3. With SSE, S3 encrypts your data on your behalf using AWS keys and processes. You don’t need to track, store or provide any encryption keys. This method of encryption is also available via the AWS console.

Writing an Object (SSE)

As mentioned earlier, the Write-S3Object cmdlet is used to store an object in S3, in this case encrypting the object at rest using AWS-managed encryption and keys.

$initialfile  = "YourFile"
$bucket       = "YourBucketName"
$objectkey    = "YourKeyName" 

try 
{
	Write-S3Object -Region us-west-2 -File $initialfile -BucketName $bucket -Key $objectkey -ServerSideEncryption AES256
}
catch [system.exception] 
{
	Write-Host  "Error: " $_.Exception.Message
}

Reading an Object (SSE)

The Read-S3Object cmdlet is used to retrieve an object from S3. If the object is encrypted in S3, a decrypted object is returned.

$ssefileout   = "YourOutputFile" 
$bucket       = "YourBucketName"
$objectkey    = "YourKeyName" 

try 
{
	Read-S3Object -Region us-west-2 -BucketName $bucket -Key $objectkey -File $ssefileout  
}
catch [system.exception]
{
	Write-Host  "Error: " $_.Exception.Message
}

Copying an Object (SSE)

The Copy-S3Object cmdlet can be used to make a copy of a server-side encrypted object. When copying an object, encryption must be specified explicitly; otherwise, the copy will not be encrypted on the server side. The sample below specifies server-side encryption for the copy.

$bucket         = "YourBucketName"
$objectkey      = "YourKeyName" 
$copyobjectkey  = "YourDestinationKeyName"

try 
{
	Copy-S3Object -Region us-west-2 -BucketName $bucket -Key $objectkey -DestinationBucket $bucket  -DestinationKey $copyobjectkey -ServerSideEncryption AES256
}
catch [system.exception] 
{
	Write-Host  "Error: " $_.Exception.Message
}

Summary

This post showed the options available for encrypting data at rest in S3 when using the Read-S3Object, Write-S3Object, and Copy-S3Object cmdlets to move data around.

For more information, see the Client Side Data Encryption with AWS SDK for .NET and Amazon S3 blog post.

Introducing S3Link to DynamoDBContext

by Mason Schneider | in .NET

S3Link has been in the AWS SDK for Java for a while now, and we have decided to introduce it to the AWS SDK for .NET as well. This feature allows you to access your Amazon S3 resources easily through a link stored in your Amazon DynamoDB data. S3Link can be used with minimal configuration with the .NET DynamoDB Object Persistence Model. To use S3Link, simply add it as a property to your DynamoDB annotated class and create a bucket in S3. The following Book class has an S3Link property named CoverImage.

// Create a class for DynamoDBContext
[DynamoDBTable("Library")]
public class Book
{
	[DynamoDBHashKey]   
	public int Id { get; set; }

	public S3Link CoverImage { get; set; }

	public string Title { get; set; }
	public int ISBN { get; set; }

	[DynamoDBProperty("Authors")]    
	public List<string> BookAuthors { get; set; }
}

Now that we have an S3Link in our annotated class, we are ready to manage an S3 object. The following code does four things:

  1. Creates and saves a book to DynamoDB
  2. Uploads the cover of the book to S3
  3. Gets a pre-signed URL to the uploaded object
  4. Loads the book back in using the Context object and downloads the cover of the book to a local file

// Create a DynamoDBContext
var context = new DynamoDBContext();

// Create a book with an S3Link
Book myBook = new Book
{
	Id = 501,
	CoverImage = S3Link.Create(context, "myBucketName", "covers/AWSSDK.jpg", Amazon.RegionEndpoint.USWest2),
	Title = "AWS SDK for .NET Object Persistence Model Handling Arbitrary Data",
	ISBN = 999,
	BookAuthors = new List<string> { "Jim", "Steve", "Pavel", "Norm", "Milind" }
};

// Save book to DynamoDB
context.Save(myBook);

// Use S3Link to upload the content to S3
myBook.CoverImage.UploadFrom("path/to/covers/AWSSDK.jpg");

// Get a pre-signed URL for the image
string coverURL = myBook.CoverImage.GetPreSignedURL(DateTime.Now.AddHours(5));

// Load book from DynamoDB
myBook = context.Load<Book>(501);

// Download file linked from S3Link
myBook.CoverImage.DownloadTo("path/to/save/cover/otherbook.jpg");

And that’s the general use for S3Link. Simply provide it a bucket and a key, and then you can upload and download your data.

Importing VM Images and Volumes with PowerShell and C#

by Steve Roberts | in .NET

Version 2.2.0 of the AWS Tools for Windows PowerShell and AWS SDK for .NET contained updates to make it easy to import virtual machine images and disk volumes into Amazon EC2. In the case of PowerShell, there are revised Import-EC2Instance and Import-EC2Volume cmdlets, while in the SDK there is a new helper class, DiskImageImporter, in the Amazon.EC2.Import namespace. This post examines the updates and shows you how to use PowerShell or C# to upload your images.

Importing via PowerShell

You can import virtual machine images into Amazon EC2 by using Import-EC2Instance, or you can use Import-EC2Volume to import disk images as EBS volumes. In both cases, Amazon EC2 must convert the uploaded content before it can be used. To track progress of the conversion, the cmdlets return a ConversionTask object containing the task ID plus other supporting information.

Both cmdlets can also be used to upload the content but defer the conversion to a later time. In this mode, the cmdlets return the Amazon S3 object key of an import manifest that has been created on your behalf, based upon the parameters you supply and the image file being uploaded. To start the conversion, you run the same cmdlet that was used to upload the artifacts, but this time you pass the object key to the -ManifestFileKey parameter and remove the -ImageFile parameter. The cmdlets then request that Amazon EC2 begin the conversion and return a ConversionTask object.

Monitoring Conversion Progress

Starting a conversion is great, but then what? To track the conversion progress, you can use the Get-EC2ConversionTask cmdlet. This cmdlet accepts a ConversionTask object (strictly speaking, it wants the conversion task ID contained within the object) and outputs the current status of the conversion. The ConversionTask object can be from either an instance conversion or a volume conversion.

You can also cancel a conversion using the Stop-EC2ConversionTask cmdlet. Just like Get-EC2ConversionTask, this cmdlet takes a ConversionTask object, or conversion task ID, and it instructs Amazon EC2 to abandon the in-progress conversion.
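
For example, here is a minimal sketch of checking on, and if necessary abandoning, a conversion. It assumes $task holds the ConversionTask object returned by one of the import cmdlets and that the cmdlets accept the task ID through a -ConversionTaskId parameter:

# $task holds the ConversionTask object returned by Import-EC2Instance or Import-EC2Volume

# Check the current status of the conversion
Get-EC2ConversionTask -ConversionTaskId $task.ConversionTaskId

# Abandon the conversion if it is no longer needed
Stop-EC2ConversionTask -ConversionTaskId $task.ConversionTaskId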

Now that we know the basics of uploading and converting, let’s take a look at some examples of the cmdlets and the options available.

Upload and Convert

Uploading with immediate conversion is the default operating mode for the cmdlets. For this mode, the cmdlet requires the name of the image file to upload and the destination bucket for the artifacts (the bucket will be created if required). When importing a VM image, you add parameters to specify the instance type, architecture, and platform (the default platform is ‘Windows’ if you don’t specify it). For example:

Import-EC2Instance -ImageFile C:\Windows.2012-disk1.vmdk `
                   -InstanceType m3.xlarge `
                   -Architecture x86_64 `
                   -BucketName myvmimages `
                   -KeyPrefix windows.2012

The -KeyPrefix parameter is optional and can be used to collect related image files together in Amazon S3. Each image file will be uploaded to an S3 key path of keyprefix/guid, with a new guid used for each upload.

Here’s an example of importing a disk volume:

Import-EC2Volume -ImageFile C:\Windows.2012-disk2.vmdk `
                 -BucketName myvmimages `
                 -KeyPrefix windows.2012.volume2 `
                 -AvailabilityZone us-west-2a

For a volume, you must specify the Availability Zone for the region it is being imported to. You can retrieve the set of Availability Zones for a region by using the Get-EC2AvailabilityZone cmdlet:

PS C:\> Get-EC2AvailabilityZone -Region us-west-2 | select ZoneName

ZoneName
--------
us-west-2a
us-west-2b
us-west-2c

Upload Only, Defer Conversion

To upload an image file but defer conversion, you add the -UploadOnly switch to the cmdlets. Parameters related to Amazon EC2 instances or Availability Zones are omitted:

Import-EC2Instance -UploadOnly `
                   -ImageFile C:\Windows.2012-disk1.vmdk `
                   -BucketName myvmimages `
                   -KeyPrefix windows.2012


Import-EC2Volume -UploadOnly `
                 -ImageFile C:\Windows.2012-disk2.vmdk `
                 -BucketName myvmimages `
                 -KeyPrefix windows.2012

Notice how both cmdlets take the same parameters for this mode. When run with the -UploadOnly switch, both cmdlets return to the pipeline the Amazon S3 object key of the import manifest. You’ll need this to start the conversion at a later date, as explained below.

Regarding Volume Sizes

None of the examples in this post use the -VolumeSize parameter. When it is omitted, the import uses the size of the image file, rounded up to the nearest GB, with a minimum volume size of 8 GB. To use a different volume size, simply add the -VolumeSize parameter, as shown in the sketch below. EBS volumes can be as small as 1 GB, but EC2 instance boot volumes must be 8 GB or greater.
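
For example, reusing the values from the earlier volume import, a sketch that imports onto a 25 GB volume would look like this:

Import-EC2Volume -ImageFile C:\Windows.2012-disk2.vmdk `
                 -BucketName myvmimages `
                 -KeyPrefix windows.2012.volume2 `
                 -AvailabilityZone us-west-2a `
                 -VolumeSize 25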

Starting a Deferred Conversion

To start conversion of a disk image that has been uploaded previously, you need the S3 object key of the import manifest, which you pass to the -ManifestFileKey parameter, in addition to parameters describing the instance or volume (but omitting the -ImageFile parameter, since the image has been uploaded already). The -ManifestFileKey parameter takes an array of manifest key names, so you can upload several images and then request conversion as a batch operation. Note that when dealing with VM images, the EC2 instance parameter values apply to all of the images so you can’t mix Linux and Windows, or 32-bit/64-bit images in a batch:

PS C:\> $manifest1 = Import-EC2Instance -ImageFile ...
windows.2012/f21dcaea...
PS C:\> $manifest2 = Import-EC2Instance -ImageFile ...
windows.2012/4e1dbaea...
PS C:\> $manifest3 = Import-EC2Instance -ImageFile ...
windows.2012/d32dcfed...

PS C:\> Import-EC2Instance `
               -ManifestFileKey @($manifest1, $manifest2, $manifest3) `
               -InstanceType m3.xlarge `
               -Architecture ... 

Handling Upload Errors

If the upload of the image file fails, the cmdlets will leave what has been successfully uploaded in the Amazon S3 bucket and return to you the S3 object key to the manifest, together with instructions on how to resume the operation or cancel and delete the uploaded content. Here’s an example of the output message when an upload fails:

PS C:\> Import-EC2Instance -ImageFile C:\Windows.2012-disk1.vmdk ...

Import-EC2Instance : The import operation failed to upload one or more image file parts.
To resume: re-run the cmdlet and add the -Resume parameter.
To cancel and remove uploaded artifacts: inspect the S3 bucket and delete all objects with key prefix
'windows.2012/f21dcaea-4cff-472d-bb0f-7aed0dcf7cf9'.
.....

To resume the upload, the command to run would therefore be:

Import-EC2Instance -ImageFile C:\Windows.2012-disk1.vmdk `
                   -Resume `
                   -KeyPrefix windows.2012 `
                   -InstanceType ...

Note that the -Resume parameter currently applies only to the upload phase of the import process. If the subsequent EC2 conversion fails with an error, you need to submit a new import request (with new upload) to correct whatever error EC2 has reported.

Importing via C#

The PowerShell cmdlets perform the upload and conversion work using a new helper class in the AWS SDK for .NET, so the same capabilities are available to SDK-based applications. The new class is in the Amazon.EC2.Import namespace and is called DiskImageImporter. The class handles uploads and conversion of VM images and disk volumes and can also be used to perform upload-only or upload-and-convert usage scenarios, just like the cmdlets.

The following code snippet shows how you can upload and request conversion of an image file from C#:

using Amazon;
using Amazon.EC2.Import;

...
const string imagesBucket = "mybucketname";

var importer = new DiskImageImporter(RegionEndpoint.USWest2, imagesBucket);

var launchConfiguration = new ImportLaunchConfiguration
{
    InstanceType = "m3.xlarge",
    Architecture = "x86_64",
    Platform = "Windows",
};

importer.ImportInstance
  (@"C:\Windows.2012-disk1.vmdk",
   null,                   // file format -- inferred from image extension
   null,                   // volume size -- infer from image size
   "windows.2012",         // key prefix of artifacts in S3
   launchConfiguration,    // EC2 instance settings
   (message, percentComplete) =>
       Console.WriteLine(message +
           (percentComplete.HasValue
               ? string.Format("{0}% complete", percentComplete.Value)
               : string.Empty))
 );
...

More information on importing images and volumes to Amazon EC2 can be found at Importing a VM into Amazon EC2.

As you can see, importing virtual machine artifacts to Amazon EC2 is now easy and convenient for Windows and .NET users. Be sure to let us know via our comments section what other operations you’d find convenient to have available!

Supporting Windows Phone 8.1

by Norm Johanson | in .NET

When we introduced version 2 of the AWS SDK for .NET, it included support for Windows Store 8 and Windows Phone 8. With the release of Windows Phone 8.1, the runtime environment has changed to make it similar to Windows Store apps and to support Universal Apps. This means that when you create a new Windows Phone 8.1 project, you have two options: the older Silverlight runtime and the newer Universal runtime. Our current Windows Phone 8 version of the SDK works with the older Microsoft Silverlight runtime but is incompatible with the new Universal runtime.

To address the incompatibility with Universal Apps, we have created a new version of the SDK, released as version 2.2.0.0. This new version is a portable class library that targets both the Windows Store 8.1 and Windows Phone 8.1 platforms. It is included in the NuGet package for the SDK as well as in the installer. The following services are supported in the new version of the SDK:

  • Amazon Cognito
  • Amazon CloudWatch
  • Amazon DynamoDB
  • Amazon EC2
  • Amazon Elastic Transcoder
  • Amazon Glacier
  • Amazon Kinesis
  • Amazon RDS
  • Amazon S3
  • Amazon SimpleDB
  • Amazon SES
  • Amazon SNS
  • Amazon SQS
  • Auto Scaling
  • AWS CloudFormation
  • AWS Elastic Beanstalk
  • AWS Identity and Access Management
  • AWS Security Token Service
  • Elastic Load Balancing

Give it a try and let us know what you think.

Subscribing Websites to Amazon SNS Topics

by Norm Johanson | in .NET

Amazon SNS allows you to create topics that can have many different subscribers to receive the messages sent from the topic. Amazon SQS queues and email addresses are probably the most common types of consumers for a topic, but it is also possible to subscribe a website.

Setting Up the Website

The sample application creates a generic handler called SNSReceiver.ashx to handle requests coming from SNS. We’ll discuss each part of the SNSReceiver.ashx individually, but you can download a full copy of SNSReceiver.ashx here.

Each SNS message is sent to the website as an HTTP POST request, which the ProcessRequest method uses to determine whether it is an SNS message that should be processed. For HTTP GET requests, we'll write back status information about the messages received from SNS.

public void ProcessRequest(HttpContext context)
{
    if(context.Request.HttpMethod == "POST")
    {
        ProcessPost(context);
    }
    else if (context.Request.HttpMethod == "GET")
    {
        ProcessGet(context);
    }
}

SNS messages are sent as JSON documents. In version 2.1.8.0 of the AWS SDK for .NET, we added the utility class Amazon.SimpleNotificationService.Util.Message to parse the JSON. This class also has the ability to verify authenticity of the message coming from SNS. This is done by calling IsMessageSignatureValid. When a subscription is made to a website, the website must confirm the subscription. The confirmation comes into our website like other messages. To detect a confirmation request, we need to check the Type property from the Message object. If the type is SubscriptionConfirmation, then we need to confirm the request; if the type is Notification, then it is a message that needs to be processed.

private void ProcessPost(HttpContext context)
{
    string contentBody;
    using (StreamReader reader = new StreamReader(context.Request.InputStream))
        contentBody = reader.ReadToEnd();

    Message message = Message.ParseMessage(contentBody);

    // Make sure message is authentic
    if (!message.IsMessageSignatureValid())
        throw new Exception("Amazon SNS Message signature is invalid");


    if (message.IsSubscriptionType)
    {
        ConfirmSubscription(context, message);
    }
    else if (message.IsNotificationType)
    {
        ProcessNotification(context, message);
    }
}

To confirm the subscription, we need to call SubscribeToTopic, which uses the SubscribeURL property and makes an HTTP GET request. In a real production situation, you would check the TopicArn property to make sure this is a topic that you should subscribe to.

private void ConfirmSubscription(HttpContext context, Message message)
{
    if (!IsValidTopic(message.TopicArn))
        return;

    try
    {
        message.SubscribeToTopic();
        Trace.WriteLine(string.Format("Subscription to {0} confirmed.", message.TopicArn));
    }
    catch(Exception e)
    {
        Trace.WriteLine(string.Format("Error confirming subscription to {0}: {1}", message.TopicArn, e.Message));
    }
}
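
The IsValidTopic helper called above is not part of the SDK; it is simply a place to perform that topic check. A minimal sketch might compare the incoming topic ARN against a value stored in the application's configuration (the ExpectedTopicArn app setting below is a hypothetical name):

private bool IsValidTopic(string topicArn)
{
    // Hypothetical app setting holding the ARN of the topic this site expects to be subscribed to
    var expectedTopicArn = System.Configuration.ConfigurationManager.AppSettings["ExpectedTopicArn"];
    return string.Equals(topicArn, expectedTopicArn, StringComparison.Ordinal);
}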

To process messages, we grab the MessageText property from the Message object. For demonstration purposes, we add each message to a list of messages that we attach to the Application object. This list of messages is then returned by the GET request handler to display the list of messages received.

private void ProcessNotification(HttpContext context, Message message)
{
    var log = context.Application["log"] as IList<string>;
    if (log == null)
    {
        log = new List<string>();
        context.Application["log"] = log;
    }

    log.Add(string.Format("{0}: Received notification from {1} with message {2}", DateTime.Now, message.TopicArn, message.MessageText));
}

Here is the ProcessGet method that is called from ProcessRequest for HTTP GET requests. It shows the list of messages received from SNS.

private void ProcessGet(HttpContext context)
{
    context.Response.ContentType = "text/plain";
    var log = context.Application["log"] as IList<string>;
    if (log == null)
    {
        context.Response.Write("No log messages");
    }
    else
    {
        foreach (var message in log.Reverse())
        {
            context.Response.Write(message + "\n");
        }
    }
}

Setting Up a Subscription

Remember that our website must be publicly accessible for SNS to send messages to it. We tested this by first deploying the application to AWS using AWS Elastic Beanstalk. We can use either the AWS Management Console or the AWS Toolkit for Visual Studio. Let’s use the Toolkit to test this.

First, we need to create the topic. In AWS Explorer, right-click Amazon SNS and select Create Topic, give the topic a name, and click OK.

Double-click on the new topic in the explorer to bring up its view. Click Create New Subscription, select HTTP or HTTPS for the protocol depending on how you deployed your application, and specify the URL to the SNSReceiver.ashx.

Depending on how fast the site responds to the confirmation, you might see a subscription status of "Pending Confirmation". If that’s the case, then just click the refresh button.

Once the subscription is confirmed, we can send a test message by clicking the Publish to Topic button, adding some sample text, and clicking OK. Since our website will respond to GET requests by writing out the messages it receives, we can navigate to the website to see if the test message made it.

Now that we have confirmed messages are being sent and received by our website, we can use the AWS SDK for .NET or any of the other AWS SDKs to send messages to our website. Here is a snippet of how to use the .NET SDK to send messages.

var snsClient = new AmazonSimpleNotificationServiceClient(RegionEndpoint.USWest2);
snsClient.Publish(new PublishRequest
{
    TopicArn = topicArn,
    Message = "Test message"
});

Enjoy sending messages, and let us know what you think.

Enhancements to the DynamoDB SDK

by Pavel Safronov | in .NET

The release of AWS SDK for .NET version 2.1.0 introduced a number of changes to the high-level Amazon DynamoDB classes. Less markup is now required to use classes with DynamoDBContext, as the SDK infers reasonable default behavior. You can customize this behavior through app.config/web.config files and at run time through the SDK. In this blog post, we discuss the impact of this change and the new ways you can customize the behavior of DynamoDBContext.

Attributes

With previous versions of the .NET SDK, classes that were used with DynamoDBContext had to have attributes on them specifying the target table, the hash/range keys, and other data. The classes looked like this:

[DynamoDBTable("Movies")]
public class Movie
{
    [DynamoDBHashKey]
    public string Title { get; set; }

    [DynamoDBRangeKey(AttributeName = "Released")]
    public DateTime ReleaseDate { get; set; }

    public List<string> Genres { get; set; }

    [DynamoDBProperty(Converter = typeof(RatingConverter))]
    public Rating Rating { get; set; }

    [DynamoDBIgnore]
    public string Comment { get; set; }

    [DynamoDBVersion]
    public int Version { get; set; }
}

As of version 2.1.0 of the SDK, some of the information that the attributes provided is now being inferred from the target table and the class. You can also provide this information in the app.config/web.config files. In the following section, we show how it’s possible to remove all markup from our Movie class, either by removing the now-optional attributes or by moving the configuration to app.config files.

First, however, let’s look at the various types of attributes that are available and what it means to remove them.

Table attribute

Removing the DynamoDBTable attribute now forces DynamoDBContext to use the class name as the target table name. So for the class SampleApp.Models.Movie, the target table would be "Movie".

Key attributes

Some attributes, such as DynamoDBHashKey, DynamoDBRangeKey, and various SecondaryIndex attributes, are now inferred from the DynamoDB table. So unless you were using those attributes to specify an alternate property name or a converter, it is now safe to omit those attributes from your class definition.

Client-side attributes

There are also attributes that are "client-side", in that no information about them is stored in DynamoDB, so DynamoDBContext can make no inferences about them. These are DynamoDBIgnore, DynamoDBVersion, DynamoDBProperty, as well as any other attributes that were used to specify an attribute name or a converter. Removing these attributes alters the behavior of your application unless you've added the corresponding configuration to your app.config/web.config file.

App.config

The new release of the SDK adds a way to configure how DynamoDBContext operates through app.config/web.config files.

To better illustrate this new functionality, here is a modified class definition for the class Movie where all DynamoDB attributes have been removed, and a corresponding app.config which provides functionality identical to what we first started with.

public class Movie
{
    public string Title { get; set; }
    public DateTime ReleaseDate { get; set; }
    public List<string> Genres { get; set; }
    public Rating Rating { get; set; }
    public string Comment { get; set; }
    public int Version { get; set; }
}

<configuration>
  <configSections>
    <section name="aws" type="Amazon.AWSSection, AWSSDK"/>
  </configSections>
  
  <aws>
    <dynamoDB>
      <dynamoDBContext>
        <mappings>
          <map type="SampleApp.Models.Movie, SampleDLL" targetTable="Movies">
            <property name="ReleaseDate" attribute="Released" />
            <property name="Rating" converter="SampleApp.Models.RatingConverter, SampleDLL" />
            <property name="Comment" ignore="true" />
            <property name="Version" version="true" />
          </map>
        </mappings>
      </dynamoDBContext>
    </dynamoDB>
  </aws>

</configuration>

Table aliases and prefixes

With this release, we have also added the ability to specify table aliases. You can now reconfigure the target table for a class without updating its DynamoDBTable attribute, or even for a class that is missing this attribute. This new feature is in addition to the already-existing prefix support, which allows simple separation of tables based on a common prefix.

Below is a simple .NET class named "Studio" that has no attributes. The configuration maps this class to the "Studios" table. Additionally, we have configured a prefix through the config, so the actual table where the class Studio is stored will be "Test-Studios".

// No DynamoDBTable attribute, so DynamoDBContext assumes the
// target table is "Studio"
public class Studio
{
    public string StudioName { get; set; }
    public string Address { get; set; }
    // other properties
}

<configuration>
  <configSections>
    <section name="aws" type="Amazon.AWSSection, AWSSDK"/>
  </configSections>
  
  <aws>
    <dynamoDB>
      <dynamoDBContext tableNamePrefix="Test-">
        <tableAliases>
          <alias fromTable="Studio" toTable="Studios" />
        </tableAliases>
      </dynamoDBContext>
    </dynamoDB>
  </aws>

</configuration>

You can use aliases for both attributed and non-attributed classes. Note that the SDK first applies the configured aliases, then applies the prefix.

For more information on the updated configuration section, see the .NET developer guide.

AWSConfigs

All of the preceding configuration settings are also accessible through code, so you can modify the mappings, aliases, and prefixes during application run time. This is done using the Amazon.AWSConfigs.DynamoDBConfig.Context property. In the following code sample, we show how to modify the current prefix, configure a new alias, update an existing alias, and update a converter for the Movie.Rating property.

var contextConfig = Amazon.AWSConfigs.DynamoDBConfig.Context;

// set the prefix to "Prod-"
contextConfig.TableNamePrefix = "Prod-";

// add and update aliases
contextConfig.AddAlias(new TableAlias("Actor", "Actors"));
contextConfig.TableAliases["Studio"] = "NewStudiosTable";

// replace converter on "Rating" property
var typeMapping = contextConfig.TypeMappings[typeof(Movie)];
var propertyConfig = typeMapping.PropertyConfigs["Rating"];
propertyConfig.Converter = typeof(RatingConverter2);

Note: changes to these settings will take effect only for new instances of DynamoDBContext.

For more information on setting these configurations, see the .NET developer guide.

Monitoring Your Estimated Costs with Windows PowerShell

by Steve Roberts | in .NET

The documentation for Amazon CloudWatch contains this sample scenario for setting up alarms to monitor your estimated charges. Apart from a one-time operation to enable billing alerts for your account, the same capability can be set up and maintained using the AWS Tools for Windows PowerShell.

Enabling Alerts

The first step is to enable billing alerts for your account. To do this one-time operation, you need to use the AWS Billing console.

Important Note: This is a one-way step! Once you enable alerts for an account, you cannot turn them off.

  1. Once you are logged into the console, click Preferences and then select the Receive Billing Alerts check box.
  2. Click the Save preferences button and then log out of the console.

It can take around 15 minutes after enabling this option before you can view billing data and set alarms—plenty of time to read the rest of this post!

The remainder of this post assumes you are working in a PowerShell console prompt (or environment like the PowerShell ISE), have the AWSPowerShell module loaded, and your environment is configured to default to the account that you just enabled billing alerts for. If you’re not sure how to do this, check out this post on configuring accounts for PowerShell. In addition to setting the account, we’ll also need to use the US East (Virginia) region for the cmdlets we need to run, since this is where all metric data related to billing is held. We could add a -Region us-east-1 parameter to each cmdlet, but it’s simpler in this case to set a default for the current shell or script:

PS C:\> Set-DefaultAWSRegion us-east-1

Now all cmdlets that we run in the current shell or script will operate by default against this region.

Setting Up the Billing Alarm and Notification

Once we’ve enabled billing alerts, we can start to construct alarm notifications. Just as in the Amazon CloudWatch sample, we’ll create an alarm that will trigger an Amazon SNS topic to send an email notification when our total estimated charges for the period exceed $200.

We’ll first set up the email notification topic, and then use the topic as the alarm action later when we create the alarm.

Creating the Notification Topic

To create a new topic and subscribe an email endpoint to it, we can run this pipeline (indentation used for clarity):

PS C:\> ($topicARN = New-SNSTopic -Name BillingAlarmNotifications) |
                 Connect-SNSNotification -Protocol email `
                                         -Endpoint email@address.com
pending confirmation

The output from the pipeline, pending confirmation, signals that we need to go to our email and confirm the subscription. Once we do this, our topic is all set up to send notifications to the specified email. Notice that we capture the Amazon Resource Name (ARN) of the new topic into the variable $topicARN. We’ll need this when creating the subsequent alarm.

Creating the Alarm

Now that we have the notification topic in place, we can perform the final step to create the alarm.

To do this, we’ll use the Write-CWMetricAlarm cmdlet. For readers who know the underlying Amazon CloudWatch API, this cmdlet maps to the PutMetricAlarm operation and is used to both create and update alarms. Before creating an alarm, we need to know the namespace and the name of the metric it should be associated with. We can get a list of available metrics by using the Get-CWMetrics cmdlet:

PS C:\> Get-CWMetrics

Namespace           MetricName                  Dimensions
---------           ----------                  ----------
AWS/Billing         EstimatedCharges            {ServiceName, Currency}
AWS/Billing         EstimatedCharges            {ServiceName, Currency}
AWS/Billing         EstimatedCharges            {ServiceName, Currency}
AWS/Billing         EstimatedCharges            {ServiceName, Currency}
AWS/Billing         EstimatedCharges            {ServiceName, Currency}
AWS/Billing         EstimatedCharges            {ServiceName, Currency}
AWS/Billing         EstimatedCharges            {Currency}

At first glance, this looks like a set of duplicated metrics, but by examining the Dimensions for each object we see the following:

PS C:\> (Get-CWMetrics).Dimensions

Name                    Value
----                    -----
ServiceName             AmazonEC2
Currency                USD
ServiceName             AmazonSimpleDB
Currency                USD
ServiceName             AWSQueueService
Currency                USD
ServiceName             AWSDataTransfer
Currency                USD
ServiceName             AmazonSNS
Currency                USD
ServiceName             AmazonS3
Currency                USD
Currency                USD

Now we can see that what initially looked like duplicate metrics are in fact separate metrics for 6 services (in this example) plus one extra that only has a Dimension of Currency—this is the Total Estimated Charge metric we’re interested in for this post. If you wanted to set up billing alerts for, say, Amazon EC2 usage only, then you would simply use that specific dimension when creating the alarm.
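
For example, if you only wanted to alarm on EC2 charges, a sketch of the dimensions you would pass to the -Dimensions parameter (in place of the single Currency dimension used below) might look like this:

# Dimensions for an alarm on Amazon EC2 charges only
$ec2Dimensions = @(
    @{ Name = "ServiceName"; Value = "AmazonEC2" },
    @{ Name = "Currency";    Value = "USD" }
)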

Alarms need to have a name that is unique to your account. This, plus the namespace, metric name, and dimension is all we need to create the alarm for the metric, which will be measured periodically. In this example, our alarm threshold (-Threshold parameter) is $200. We want to check every six hours, which we specify using the -Period parameter (the value is in seconds, where 21600 seconds is 6 hours). We want the alarm to fire the first time that the metric breaches, so the value for our -EvaluationPeriods parameter will be 1.

Write-CWMetricAlarm -AlarmName "My Estimated Charges" `
                    -AlarmDescription "Estimated Monthly Charges" `
                    -Namespace "AWS/Billing" `
                    -MetricName EstimatedCharges `
                    -Dimensions @{ Name="Currency"; Value="USD" } `
                    -AlarmActions $topicARN `
                    -ComparisonOperator GreaterThanOrEqualToThreshold `
                    -EvaluationPeriods 1 `
                    -Period 21600 `
                    -Statistic Maximum `
                    -Threshold 200

Note that Amazon CloudWatch returns no response output from the call. If we want to look at the alarm we just created, we can use the Get-CWAlarm cmdlet:

PS C:\> Get-CWAlarm "My Estimated Charges"
AlarmName                          : My Estimated Charges
AlarmArn                           : arn:aws:cloudwatch:us-east-1:123412341234:alarm:My Estimated Charges
AlarmDescription                   : Estimated Monthly Charges
AlarmConfigurationUpdatedTimestamp : 3/27/2014 9:41:57 AM
ActionsEnabled                     : True
OKActions                          : {}
AlarmActions                       : {arn:aws:sns:us-east-1:123412341234:BillingNotification}
InsufficientDataActions            : {}
StateValue                         : OK
StateReason                        : Threshold Crossed: 1 datapoint (1.38) was not greater than or equal to the threshold (200.0).
StateReasonData                    : {"version":"1.0","queryDate":"2014-03-27T16:41:58.550+0000","startDate":"2014-03-27T10:41:00.0
                                     00+0000","statistic":"Maximum","period":21600,"recentDatapoints":[1.38],"threshold":20.0}
StateUpdatedTimestamp              : 3/27/2014 9:41:58 AM
MetricName                         : EstimatedCharges
Namespace                          : AWS/Billing
Statistic                          : Maximum
Dimensions                         : {Currency}
Period                             : 21600
Unit                               :
EvaluationPeriods                  : 1
Threshold                          : 200
ComparisonOperator                 : GreaterThanOrEqualToThreshold

All that remains is to wait for the alarm to fire (or, depending on your reasons for wanting to set up the alarm, to not fire!).

Referencing Credentials using Profiles

There are a number of ways to provide AWS credentials to your .NET applications. One approach is to embed your credentials in the appSettings section of your App.config file. While this is easy and convenient, your AWS credentials might end up getting checked into source control or published to places you didn’t intend. A better approach is to use profiles, which were introduced in version 2.1 of the AWS SDK for .NET. Profiles offer an easy-to-use mechanism to safely store credentials in a central location outside your application directory. After setting up your credential profiles once, you can refer to them by name in all of the applications you run on that machine. The App.config file will look similar to this example when using profiles.

<configuration>
   <appSettings>
      <add key="AWSProfileName" value="development"/>
      <add key="AWSRegion" value="us-west-2" />
   </appSettings>
</configuration>

The SDK supports two different profile stores. The first is what we call the SDK store, which stores the profiles encrypted in the C:\Users\<username>\AppData\Local\AWSToolkit folder. This is the same store used by the AWS Toolkit for Visual Studio and the AWS Tools for Windows PowerShell. The second store is the credentials file under C:\Users\<username>\.aws. The credentials file is used by the other AWS SDKs and the AWS Command Line Interface. The SDK always checks the SDK store first and then falls back to the credentials file.

Setting up Profiles with Visual Studio

The AWS Toolkit for Visual Studio lists all the profiles registered in the SDK store in the AWS Explorer. To add new profiles, click the New Account Profile button.

When you create a new project in Visual Studio using one of the AWS project templates, the project wizard allows you to pick an existing profile or create a new one. The selected profile will be referenced in the App.config of the new project.

 

Setting up Profiles with PowerShell

Profiles can also be set up using the AWS Tools for Windows PowerShell.

PS C:\> Set-AWSCredentials -AccessKey 123MYACCESSKEY -SecretKey 456SECRETKEY -StoreAs development

As with the Toolkit, these credentials will be accessible to the SDK and the Toolkit after running this command. To use the profile in PowerShell, run the following command before using AWS cmdlets.

PS C:\> Set-AWSCredentials -ProfileName development

Setting up Profiles with the SDK

Profiles can also be managed directly from the AWS SDK for .NET using the Amazon.Util.ProfileManager class. Here is how you can register a profile using the ProfileManager.

Amazon.Util.ProfileManager.RegisterProfile(profileName, accessKey, secretKey);

You can also list the registered profiles and unregister profiles using the ListProfileNames and UnregisterProfile methods.
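
For example, a quick sketch of listing the registered profiles and removing one that is no longer needed:

// List the profiles currently registered in the SDK store
foreach (var name in Amazon.Util.ProfileManager.ListProfileNames())
{
    Console.WriteLine(name);
}

// Remove a profile that is no longer needed
Amazon.Util.ProfileManager.UnregisterProfile("development");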

Getting the SDK from NuGet

If you get the SDK from NuGet, the package’s install script adds an empty AWSProfileName tag to the App.config file if the app setting doesn’t already exist. You can use any of the previously mentioned methods for registering profiles. Alternatively, you can use the PowerShell script account-management.ps1 that comes with the NuGet package and is placed in the /packages/AWSSDK-X.X.X.X/tools/ folder. This is an interactive script that lets you register, list, and unregister profiles.

Credentials File Format

The previous methods for adding profiles have all been about adding credentials to the SDK store. Putting credentials in the SDK store requires using one of these tools because the credentials are encrypted. The alternative is to use the credentials file, which is a plain-text file similar to an .ini file. Here is an example of a credentials file with two profiles.

[default]
aws_access_key_id = <access-key>
aws_secret_access_key = <secret-key>

[development]
aws_access_key_id = <access-key>
aws_secret_access_key = <secret-key>

Default Profile

When you create a service client without specifying credentials or a profile name, the SDK searches for a default profile. The default profile’s name is "default", and it is searched for first in the SDK store and then in the credentials file. When the AWS Tools for Windows PowerShell was released last year, it introduced a default profile called "AWS PS Default". To give all of our tools a consistent experience, we have changed the AWS Tools for Windows PowerShell to use "default" as well. To make sure we didn’t break any existing users, the AWS Tools for Windows PowerShell will still try to load the old "AWS PS Default" profile when "default" is not found, but it will now save credentials to the "default" profile unless otherwise specified.

Credentials Search Path

If an application creates a service client without specifying credentials, the SDK uses the following order to find credentials.

  • Look for AWSAccessKey and AWSSecretKey in App.config.

    • Note that version 2.1 of the SDK did not break existing applications that use the AWSAccessKey and AWSSecretKey app settings.
  • Search the SDK store.

    • If AWSProfileName is set, the SDK looks for that profile; otherwise, it looks for the profile named "default" in the SDK store.
  • Search the credentials file.

    • If AWSProfileName is set, the SDK looks for that profile; otherwise, it looks for the profile named "default" in the credentials file.
  • Search for instance profiles.

    • These are credentials available on EC2 instances that were launched with an instance profile.

Setting Profile in Code

It is also possible to specify the profile to use in code, in addition to using App.config. This code shows how to create an Amazon S3 client for the development profile.

Amazon.Runtime.AWSCredentials credentials = new Amazon.Runtime.StoredProfileAWSCredentials("development");
Amazon.S3.IAmazonS3 s3Client = new AmazonS3Client(credentials, Amazon.RegionEndpoint.USWest2);

Alternative Credentials File

Both the SDK store and the credentials file are located under the current user’s home directory. If your application is running under a different user – such as Local System – then the AWSProfilesLocation app setting can be set to use an alternative credentials file. For example, this App.config tells the SDK to look for credentials in the C:\aws_service_credentials\credentials file.

<configuration>
   <appSettings>
      <add key="AWSProfileName" value="development"/>
      <add key="AWSProfilesLocation" value="C:\aws_service_credentials\credentials"/>
      <add key="AWSRegion" value="us-west-2" />
   </appSettings>
</configuration>

IAM Roles for Amazon EC2 instances (Access Key Management for .NET Applications – Part 4)

by Milind Gokarn | in .NET

In this post, we’ll see how to use AWS Identity and Access Management (IAM) roles for Amazon EC2 instances. With IAM roles for EC2 instances, you don’t need to manage or distribute the credentials that your application needs. Instead, credentials are automatically distributed to EC2 instances and picked up by the AWS SDK for .NET. Here are the advantages of using this approach.

  • No need to distribute and manage credentials for your application
  • Credentials are periodically auto rotated and distributed to EC2 instances
  • The credentials are transparently available to your application through the SDK

Before we go further and look at code snippets, let’s talk about IAM roles and related concepts in a little more detail. A role lets you define a set of permissions to access resources that your application needs. This is specified using an access policy. A role also contains information about who can assume the role. This is specified using a trust policy. To use roles with EC2 instances, we need an instance profile. An instance profile is a container for roles and is used to pass role information to EC2 instances when they are launched. When you launch an EC2 instance with an instance profile, your application can make requests to AWS resources using the role credentials for the role associated with the instance profile.

In the rest of this post, we will perform the steps required to use IAM roles using the AWS SDK for .NET. Please note that all of these steps can be performed using the AWS Management Console as well.

Create an IAM Role

We start by creating an IAM role. As I mentioned before, you need to provide two pieces of information here: the access policy that will contain the permissions your application needs, and the trust policy that will specify that EC2 can assume this role. The trust policy is required so that EC2 can assume the role and fetch the temporary role credentials.

This is the trust policy that allows EC2 to assume the role.

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal":{"Service":["ec2.amazonaws.com"]},
    "Action": "sts:AssumeRole"
  }]
}

This is a sample access policy that gives restricted access to a bucket by allowing the ListBucket, PutObject and GetObject actions.

{
  "Version" : "2012-10-17",
  "Statement" : [
    {
      "Effect":"Allow",
      "Action":[
        "s3:ListBucket"
      ],
      "Resource":"arn:aws:s3:::MyApplicationBucket"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::MyApplicationBucket/*"
    }
  ]
}

The following code creates a role with the given trust and access policy.

var roleName = "S3Access";
var profileName = "S3Access";
var iamClient = new AmazonIdentityManagementServiceClient();

// trustPolicy and accessPolicy are strings containing the JSON policy
// documents shown above.

// Create a role with the trust policy
var role = iamClient.CreateRole(new CreateRoleRequest
{
   RoleName = roleName,
   AssumeRolePolicyDocument = trustPolicy
});

// Add the access policy to the role
iamClient.PutRolePolicy(new PutRolePolicyRequest
{
    RoleName = roleName,
    PolicyName = "S3Policy",
    PolicyDocument = accessPolicy                
});

Create an instance profile

Now we create an instance profile for the role.

// Create an instance profile
iamClient.CreateInstanceProfile(new CreateInstanceProfileRequest
{
    InstanceProfileName = profileName                
});

// Add the role to the instance profile
iamClient.AddRoleToInstanceProfile(new AddRoleToInstanceProfileRequest
{
    InstanceProfileName = profileName,
    RoleName = roleName
});

Launch EC2 instance(s) with the instance profile

We can now launch EC2 instances with the instance profile that we created. Notice that we use the Amazon.EC2.Util.ImageUtilities helper class to retrieve the image identifier.

var ec2Client = new AmazonEC2Client();
            
// Find an image using ImageUtilities helper class
var image = Amazon.EC2.Util.ImageUtilities.FindImage(
    ec2Client,
    Amazon.EC2.Util.ImageUtilities.WINDOWS_2012_BASE);

//Launch an EC2 instance with the instance profile
var instance = ec2Client.RunInstances(new RunInstancesRequest
{
    ImageId = image.ImageId,
    IamInstanceProfile = new IamInstanceProfileSpecification
    {
        Name = profileName
    },
    MinCount = 1, MaxCount = 1
});

Access AWS Resources from your application code deployed on EC2

You don’t need to make any changes to your application code to use IAM roles. Your application code should construct service clients without specifying any explicit credentials, as in the code below (and without any credentials in the application configuration file). Behind the scenes, the Amazon.Runtime.InstanceProfileAWSCredentials class fetches the credentials from the EC2 instance metadata service and automatically refreshes them when a new set of credentials is available.

// Create an S3 client with the default constructor,
// this will use the role credentials to access resources.
var s3Client = new AmazonS3Client();
var s3Objects = s3Client.ListObjects(new ListObjectsRequest 
{
    BucketName = "MyApplicationBucket" 
}).S3Objects;

In this post, we saw how IAM roles can greatly simplify and secure access key management for applications on Amazon EC2. We highly recommend that you use this approach for all applications that are run on Amazon EC2.