Resource Condition Support in the AWS CloudFormation Editor

AWS CloudFormation recently added support for conditions that control whether resources are created and what values are set for resource properties. The CloudFormation editor included with the AWS Toolkit for Visual Studio was updated to support conditions in version 1.6.1. If you have never used the CloudFormation editor, we have a screencast that gives a quick introduction to it.

Defining Conditions

To get started with conditions, you first need to define them.

In this example, two conditions are defined. The first checks whether the deployment is a production deployment; the second checks whether a new security group should be created.
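A template snippet along these lines would define the two conditions (a sketch; the DeploymentType parameter name and defaults are illustrative, while ExistingSecurityGroup matches the parameter used later in this post):

"Parameters" : {
  "DeploymentType"        : { "Type" : "String", "Default" : "test" },
  "ExistingSecurityGroup" : { "Type" : "String", "Default" : "" }
},

"Conditions" : {
  "IsProduction"        : { "Fn::Equals" : [ { "Ref" : "DeploymentType" }, "production" ] },
  "CreateSecurityGroup" : { "Fn::Equals" : [ { "Ref" : "ExistingSecurityGroup" }, "" ] }
}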

Using Conditions to Control Resource Creation

For any resource defined in a template, you can set the Condition property. If the condition evaluates to true, the resource is created in the stack launched from the template; if it evaluates to false, the resource is skipped.

For example, the security group below is created only if the CreateSecurityGroup condition evaluates to true, which occurs when no security group is passed in to the ExistingSecurityGroup parameter.
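A sketch of such a resource (the group description and ingress rule are illustrative):

"NewSecurityGroup" : {
  "Type" : "AWS::EC2::SecurityGroup",
  "Condition" : "CreateSecurityGroup",
  "Properties" : {
    "GroupDescription" : "Security group created when none is supplied",
    "SecurityGroupIngress" : [
      { "IpProtocol" : "tcp", "FromPort" : "80", "ToPort" : "80", "CidrIp" : "0.0.0.0/0" }
    ]
  }
}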

Using Conditions to Control Resource Properties

You can also use conditions to determine what value to set for a resource property.

Since the security group is either created by the template or supplied through the ExistingSecurityGroup parameter, the SecurityGroups property needs its value set conditionally, depending on where the security group came from. In this example, we also use a condition to control the size of the EC2 instance, depending on whether the deployment is a production deployment.
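A sketch of the relevant instance properties, using the Fn::If intrinsic function with the conditions defined above (the instance types are illustrative):

"WebServer" : {
  "Type" : "AWS::EC2::Instance",
  "Properties" : {
    "InstanceType"   : { "Fn::If" : [ "IsProduction", "m1.large", "t1.micro" ] },
    "SecurityGroups" : [ { "Fn::If" : [ "CreateSecurityGroup",
                                        { "Ref" : "NewSecurityGroup" },
                                        { "Ref" : "ExistingSecurityGroup" } ] } ]
  }
}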

For more information about using conditions with CloudFormation, check out the AWS CloudFormation User Guide.

Creating Amazon DynamoDB Tables with PowerShell

by Steve Roberts

Version 2.0 of the AWS Tools for Windows PowerShell contains new cmdlets that allow you to manage tables in Amazon DynamoDB. The cmdlets all share the same noun prefix, DDB, and can be discovered using Get-Command:

PS C:\> Get-Command -Module AWSPowerShell -Noun DDB*

CommandType     Name                                               ModuleName
-----------     ----                                               ----------
Cmdlet          Add-DDBIndexSchema                                 AWSPowerShell
Cmdlet          Add-DDBKeySchema                                   AWSPowerShell
Cmdlet          Get-DDBTable                                       AWSPowerShell
Cmdlet          Get-DDBTables                                      AWSPowerShell
Cmdlet          New-DDBTable                                       AWSPowerShell
Cmdlet          New-DDBTableSchema                                 AWSPowerShell
Cmdlet          Remove-DDBTable                                    AWSPowerShell
Cmdlet          Update-DDBTable                                    AWSPowerShell

This post looks at the New-DDBTable cmdlet and the schema builder cmdlets — New-DDBTableSchema, Add-DDBKeySchema, and Add-DDBIndexSchema — that you can use in a pipeline to make table definition and creation simple and fluent.

Defining Schema

The schema builder cmdlets allow you to define the schema for your table and can be used in a PowerShell pipeline to incrementally refine and extend the schema you require. The schema object is then passed to New-DDBTable (either in the pipeline or as the value for the -Schema parameter) to create the table you need. Behind the scenes, these cmdlets and New-DDBTable infer and wire up the correct settings for your table with respect to hash keys (on the table itself or in the indexes) without you needing to manually add this information.

Let’s take a look at the syntax for the schema builder cmdlets (parameters inside [] are optional; for parameters that accept a range of values, the allowable values are shown in {} separated by |):

# takes no parameters, returns a new Amazon.PowerShell.Cmdlets.DDB.Model.TableSchema object
New-DDBTableSchema

# The schema definition object may be piped to the cmdlet or passed as the value for -Schema
Add-DDBKeySchema -KeyName "keyname" 
                 -KeyDataType { "N" | "S" | "B" }
                 [ -KeyType { "hash" | "range" } ]
                 -Schema Amazon.PowerShell.Cmdlets.DDB.Model.TableSchema

# The schema definition object may be piped to the cmdlet or passed as the value for -Schema
Add-DDBIndexSchema -IndexName "indexName"
                   -RangeKeyName "keyName"
                   -RangeKeyDataType { "N" | "S" | "B" }
                   [ -ProjectionType { "keys_only" | "include" | "all" } ]
                   [ -NonKeyAttribute @( "attrib1", "attrib2", ... ) ]
                   -Schema Amazon.PowerShell.Cmdlets.DDB.Model.TableSchema 

Not all of the parameters for each cmdlet are required as the cmdlets accept certain defaults. For example, the default key type for Add-DDBKeySchema is "hash". For Add-DDBIndexSchema, -ProjectionType is optional (and -NonKeyAttribute is needed only if -ProjectionType is set to "include"). If you’re familiar with the Amazon DynamoDB API, you’ll probably recognize the type codes used with -KeyDataType and -RangeKeyDataType. You can find the API reference for the CreateTable operation here.

Using the Create a Table example shown on the CreateTable API reference page, here’s how we can easily define the schema using these cmdlets in a pipeline:

PS C:\> New-DDBTableSchema `
            | Add-DDBKeySchema -KeyName "ForumName" -KeyDataType "S" `
            | Add-DDBKeySchema -KeyName "Subject" -KeyType "range" -KeyDataType "S" `
            | Add-DDBIndexSchema -IndexName "LastPostIndex" `
                                 -RangeKeyName "LastPostDateTime" `
                                 -RangeKeyDataType "S" `
                                 -ProjectionType "keys_only"

AttributeSchema                  KeySchema                        LocalSecondaryIndexSchema        GlobalSecondaryIndexSchema
---------------                  ---------                        -------------------------        --------------------------
{ForumName, Subject, LastPost... {ForumName, Subject}             {LastPostIndex}                  {}

PS C:\>

As you can see from the output, the cmdlets took the empty schema object created by New-DDBTableSchema and extended it with the data that New-DDBTable will need. One thing to note is that, apart from New-DDBTableSchema, the cmdlets can be run in any order, any number of times. This gives you complete freedom to experiment at the console without needing to define all the keys up front and then define the index schema and so on. You can also clone the schema object and stash away a basic template that you can then further refine for multiple different tables (the Clone() method on the schema object makes a deep copy of the data it contains).
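For example, here is a sketch of the cloning approach (the variable names are illustrative; each cmdlet emits the schema object, so the refined copy is captured by assignment):

$baseSchema = New-DDBTableSchema `
                | Add-DDBKeySchema -KeyName "ForumName" -KeyDataType "S" `
                | Add-DDBKeySchema -KeyName "Subject" -KeyType "range" -KeyDataType "S"

# Clone() returns a deep copy, so refining the copy leaves $baseSchema untouched
$threadsSchema = $baseSchema.Clone()
$threadsSchema = $threadsSchema `
                | Add-DDBIndexSchema -IndexName "LastPostIndex" `
                                     -RangeKeyName "LastPostDateTime" `
                                     -RangeKeyDataType "S" `
                                     -ProjectionType "keys_only"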

Creating the Table

Once the schema is defined, it can be passed to New-DDBTable to request that the table be created. The schema can be passed into New-DDBTable using a pipeline or by passing the schema object to the -Schema parameter. Here is the syntax for New-DDBTable:

# The schema definition object may be piped to the cmdlet or passed as the value for -Schema
New-DDBTable -TableName "tableName"
             -Schema Amazon.PowerShell.Cmdlets.DDB.Model.TableSchema 
             -ReadCapacity  value
             -WriteCapacity value

As you can see, it’s pretty simple. To use the previous example schema definition—but this time actually create the table—we can extend our pipeline like this:

PS C:\> New-DDBTableSchema `
            | Add-DDBKeySchema -KeyName "ForumName" -KeyDataType "S" `
            | Add-DDBKeySchema -KeyName "Subject" -KeyType "range" -KeyDataType "S" `
            | Add-DDBIndexSchema -IndexName "LastPostIndex" `
                                 -RangeKeyName "LastPostDateTime" `
                                 -RangeKeyDataType "S" `
                                 -ProjectionType "keys_only" `
            | New-DDBTable "Threads" -ReadCapacity 10 -WriteCapacity 5

AttributeDefinitions : {ForumName, LastPostDateTime, Subject}
TableName            : Threads
KeySchema            : {ForumName, Subject}
TableStatus          : CREATING
CreationDateTime     : 11/29/2013 5:47:31 PM
ProvisionedThroughput: Amazon.DynamoDBv2.Model.ProvisionedThroughputDescription
TableSizeBytes       : 0
ItemCount            : 0
LocalSecondaryIndexes: {LastPostIndex}
GlobalSecondaryIndexes: {}

PS C:\>

By default, Add-DDBIndexSchema constructs local secondary index entries. To have the cmdlet construct a global secondary index entry instead, add the -Global switch plus the -ReadCapacity and -WriteCapacity values needed to provision the index. You can also optionally specify -HashKeyName and -HashKeyDataType instead of, or in addition to, the range key parameters:

    ...
    | Add-DDBIndexSchema -Global `
                         -IndexName "myGlobalIndex" `
                         -HashKeyName "hashKeyName" `
                         -HashKeyDataType "N" `
                         -RangeKeyName "rangeKeyName" `
                         -RangeKeyDataType "S" `
                         -ProjectionType "keys_only" `
                         -ReadCapacity 5 `
                         -WriteCapacity 5 `
                         ...

Let us know in the comments what you think about the fluent-style cmdlet piping, or how well these DynamoDB cmdlets fit your scripting needs.

Using IAM Users (Access Key Management for .NET Applications – Part 2)

by Milind Gokarn

In the previous post about access key management, we covered the different methods to provide AWS access keys to your .NET applications. We also talked about a few best practices, one of which is to use IAM users to access AWS instead of the root access keys of your AWS account. In this post, we’ll see how to create IAM users and set up different options for them, using the AWS SDK for .NET.

The root access keys associated with your AWS account should be safely guarded, as they have full privileges over AWS resources belonging to your account and access to your billing information. Therefore, instead of using the root access keys in applications or providing them to your team/organization, you should create IAM users for individuals or applications. IAM users can make API calls, use the AWS Management Console, and have their access limited by IAM policies. Let’s see the steps involved to start using IAM users.

Create an IAM user

For this example, we are going to use the following policy, which gives access to a specific bucket. You’ll need to replace BUCKET_NAME with the name of the bucket you want to use.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets"],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket","s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::BUCKET_NAME"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject","s3:GetObject","s3:DeleteObject"],
      "Resource": "arn:aws:s3:::BUCKET_NAME/*"
    }
  ]
}

In cases where you are creating a policy on the fly or you want a strongly typed mechanism to create policies, you can use the Policy class found in the Amazon.Auth.AccessControlPolicy namespace to construct a policy. For more details, check Creating Access Policies in Code.
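A minimal sketch of building a similar policy with those classes (assuming the Policy, Statement, ActionIdentifier, and Resource types from that namespace; the bucket name is a placeholder):

using Amazon.Auth.AccessControlPolicy;

// Allow listing the bucket and getting its location
var bucketStatement = new Statement(Statement.StatementEffect.Allow);
bucketStatement.Actions.Add(new ActionIdentifier("s3:ListBucket"));
bucketStatement.Actions.Add(new ActionIdentifier("s3:GetBucketLocation"));
bucketStatement.Resources.Add(new Resource("arn:aws:s3:::BUCKET_NAME"));

var policy = new Policy();
policy.Statements.Add(bucketStatement);

// Serialize to the JSON form expected by PutUserPolicy
string s3AccessPolicy = policy.ToJson();

With a policy document in hand (the JSON shown earlier or one generated in code), the following code creates the user and attaches the policy.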

var iamClient = new AmazonIdentityManagementServiceClient(ACCESS_KEY, SECRET_KEY, RegionEndpoint.USWest2);

// Create an IAM user
var userName = "Alice";
iamClient.CreateUser(new CreateUserRequest
{
  UserName = userName,
  Path = "/developers/"
});

// Add an inline policy to the user. Here, allowS3BucketAccess is the name to
// give the policy, and s3AccessPolicy is the JSON policy document shown above.
iamClient.PutUserPolicy(new PutUserPolicyRequest
{
  UserName = userName,
  PolicyName = allowS3BucketAccess,
  PolicyDocument = s3AccessPolicy
});

The Path parameter in the CreateUser call is optional and can be used to give the user a path. The path becomes part of the user's Amazon Resource Name (ARN); the ARN for the user created above will be arn:aws:iam::account-number-without-hyphens:user/developers/Alice. Paths are a simple but powerful mechanism for organizing users and writing policies that apply to a subset of your users.
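For example, a policy statement along these lines (a sketch; the account ID is a placeholder) would apply to every user under the /developers/ path:

{
  "Effect": "Allow",
  "Action": ["iam:GetUser"],
  "Resource": "arn:aws:iam::123456789012:user/developers/*"
}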

Use IAM groups

Instead of assigning permissions to an IAM user, we can create an IAM group with the relevant permissions and then add the user to the group. The group’s permissions are then applicable to all users belonging to it. With this approach, we don’t have to manage permissions for each user.

// Create an IAM group
var groupName = "DevGroup";
iamClient.CreateGroup(new CreateGroupRequest
{
  GroupName = groupName
});

// Add a policy to the group
iamClient.PutGroupPolicy(new PutGroupPolicyRequest
{
  GroupName = groupName,
  PolicyName = allowS3BucketAccess,
  PolicyDocument = s3AccessPolicy
});

// Add the user to the group
iamClient.AddUserToGroup(new AddUserToGroupRequest
{
  UserName = userName,
  GroupName = groupName
});

The preceding code creates an IAM group, assigns a policy, and then adds a user to the group. If you are wondering how the permissions are evaluated when a group has multiple policies or a user belongs to multiple groups, IAM Policy Evaluation Logic explains this in detail.

Generate an access key for an IAM user

To access AWS using the API or command line interface (CLI), the IAM user needs an access key that consists of the access key ID and secret access key.

// Create an access key for the IAM user
AccessKey accessKey = iamClient.CreateAccessKey(new CreateAccessKeyRequest
{
  UserName = userName
}).AccessKey;

The CreateAccessKey method returns an instance of the AccessKey class that contains the access key ID (AccessKey.AccessKeyId) and secret access key (AccessKey.SecretAccessKey). You will need to save the secret key or securely distribute it to the user, since you will not be able to retrieve it again. If you do lose it, you can always create a new access key and delete the old one using the DeleteAccessKey method.
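For instance, deleting the old key might look like this sketch, reusing the user name and the AccessKey instance from the example above:

// Delete the old access key once a replacement is in place
iamClient.DeleteAccessKey(new DeleteAccessKeyRequest
{
  UserName = userName,
  AccessKeyId = accessKey.AccessKeyId
});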

Enable access to the AWS Management Console

IAM users can access the AWS Management Console to administer the resources to which they have permissions. To enable access to the AWS Management Console, you need to create a login profile for the user and then provide them with the URL of your account’s sign-in page.

// Allow the IAM user to access AWS Console
iamClient.CreateLoginProfile(new CreateLoginProfileRequest
{
  UserName = userName,
  Password = "" // Put the user's console password here.
});

In this post we saw how to use IAM users for accessing AWS instead of the root access keys of your AWS account. In the next post in this series, we’ll talk about rotating credentials.

Configuring DynamoDB Tables for Development and Production

by Norm Johanson

The Object Persistence Model API in the SDK uses annotated classes to tell the SDK which table to store objects in. For example, the DynamoDBTable attribute on the Users class below tells the SDK to store instances of the Users class in the "Users" table.

[DynamoDBTable("Users")]
public class Users
{
    [DynamoDBHashKey]
    public string Id { get; set; }

    public string FirstName { get; set; }

    public string LastName { get; set; }
	
    ...
}

A common scenario is to have a different set of tables for production and development. To handle this scenario, the SDK supports setting a table name prefix in the application’s app.config file with the AWS.DynamoDBContext.TableNamePrefix app setting. The following app.config file indicates that all the tables used by the Object Persistence Model should have the "Dev_" prefix.

<appSettings>
  ...
  <add key="AWSRegion" value="us-west-2" />
  <add key="AWS.DynamoDBContext.TableNamePrefix" value="Dev_"/>
  ...
</appSettings>

The prefix can also be modified at run time by setting either the global property AWSConfigs.DynamoDBContextTableNamePrefix or the TableNamePrefix property for the DynamoDBContextConfig used to store the objects.
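For example (a sketch; the client variable and prefix value are illustrative):

// Set the prefix globally for all contexts created afterwards
AWSConfigs.DynamoDBContextTableNamePrefix = "Dev_";

// Or set it on the configuration used by a specific context
var contextConfig = new DynamoDBContextConfig { TableNamePrefix = "Dev_" };
var context = new DynamoDBContext(dynamoDbClient, contextConfig);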

 

AWS re:Invent .NET Recap

by Norm Johanson

Jim and I had a great time at re:Invent this year talking to all the AWS users. It was really interesting to hear all the different ways our SDK and tools are being used. We got some great feature requests and now we are excited to be back in the office to start working on them.

The video and slides of our talk on building scalable .NET apps on AWS are now online.

The topics we cover in our talk are:

  • Amazon DynamoDB Object Persistence Model
  • Getting and Putting objects into Amazon S3
  • Using Amazon SQS to manage background processing
  • Using AWS Elastic Beanstalk customization to install a Windows service
  • Using Web Identity Federation to get credentials securely to our Windows Store App

If you weren’t able to come to re:Invent but have ideas for our SDK and tools you want to share, you can always reach us through comments here, through our forums, and through GitHub.

Subscribing an SQS Queue to an SNS Topic

by Norm Johanson

In version 2.0.2.3 of the SDK, we added an enhancement that makes it easier to subscribe an Amazon SQS queue to an Amazon SNS topic. You have always been able to subscribe queues to topics using the Subscribe method on the SNS client, but after subscribing your queue to the topic, you also had to set a policy on the queue using the SetQueueAttributes method from the SQS client. The policy gives the topic permission to send messages to the queue.

With this new feature, you can call SubscribeQueue from the SNS client, and it will take care of both the subscription and setting up the policy. This code snippet shows how to create a queue and topic, subscribe the queue, and then send a message.

string queueURL = sqsClient.CreateQueue(new CreateQueueRequest
{
    QueueName = "theQueue"
}).QueueUrl;


string topicArn = snsClient.CreateTopic(new CreateTopicRequest
{
    Name = "theTopic"
}).TopicArn;

snsClient.SubscribeQueue(topicArn, sqsClient, queueURL);

// Sleep to wait for the subscribe to complete.
Thread.Sleep(TimeSpan.FromSeconds(5));

// Publish the message to the topic
snsClient.Publish(new PublishRequest
{
    TopicArn = topicArn,
    Message = "Test Message"
});

// Get the message from the queue.
var messages = sqsClient.ReceiveMessage(new ReceiveMessageRequest
{
    QueueUrl = queueURL,
    WaitTimeSeconds = 20
}).Messages;

Amazon S3 Lifecycle Management

by Pavel Safronov

Amazon Simple Storage Service (S3) provides a simple method to control the lifecycle of your S3 objects. In this post, we examine how you can easily set up rules to delete or archive old data in S3 using the AWS SDK for .NET.

Lifecycle Rules

Lifecycle configurations are associated with a bucket. A lifecycle configuration consists of a number of rules, with each rule specifying the objects it acts on and the actions to take. Rules select the objects they act on by prefix. A rule can archive an object to Amazon Glacier, delete an object, or both. Each action carries a time constraint: it applies either to objects older than a specific number of days or to all matching objects after a particular date. A rule also has a Status, which can be set to Enabled or Disabled; if you don’t set this field, the rule is disabled by default.

For instance, it’s possible to configure a rule that all objects with the prefix "logs/" must be archived to Glacier after one month. Here is a rule that does just that:

var rule1 = new LifecycleRule
{
    Prefix = "logs/", 
    Transition = new LifecycleTransition
    {
        Days = 30,
        StorageClass = S3StorageClass.Glacier
    },
    Status = LifecycleRuleStatus.Enabled
};

Rules can also be configured for a specific date. The following rule is configured to delete all objects with the prefix "june/" on the 1st of August, 2014.

var rule2 = new LifecycleRule
{
    Prefix = "june/", 
    Expiration = new LifecycleRuleExpiration
    {
        Date = new DateTime(2014, 08, 01)
    },
    Status = LifecycleRuleStatus.Enabled
};

Finally, a rule can contain both a transition and an expiration action. The following rule transitions objects to Glacier after 2 months and deletes them after 1 year. This rule is also created in the disabled state.

var rule3 = new LifecycleRule
{
    Prefix = "user-data/",
    Transition = new LifecycleTransition
    {
        Days = 60,
        StorageClass = S3StorageClass.Glacier
    },
    Expiration = new LifecycleRuleExpiration
    {
        Days = 365
    },
    Status = LifecycleRuleStatus.Disabled
};

Lifecycle Configuration

A lifecycle configuration is simply a list of rules. In the following example, we construct a lifecycle configuration that consists of the rules we created earlier, and then this configuration is applied to our test bucket.

S3Client.PutLifecycleConfiguration(new PutLifecycleConfigurationRequest
{
    BucketName = "sample-bucket",
    Configuration = new LifecycleConfiguration
    {
        Rules = new List<LifecycleRule> { rule1, rule2, rule3 }
    }
});

A lifecycle configuration always replaces the full set of rules on a bucket. This means that if you wish to modify or add rules, you must first retrieve the current configuration, modify it, and then apply it back to the bucket. The following sample shows how we can enable all disabled rules and remove a specific rule.

// Retrieve current configuration
var configuration = S3Client.GetLifecycleConfiguration(
new GetLifecycleConfigurationRequest
{
    BucketName = "sample-bucket"
}).Configuration;

// Remove rule with prefix 'june/'
configuration.Rules.Remove(configuration.Rules.Find(r => r.Prefix == "june/"));
// Enable all disabled rules
foreach (var rule in configuration.Rules)
    if (rule.Status == LifecycleRuleStatus.Disabled)
        rule.Status = LifecycleRuleStatus.Enabled;

// Save the updated configuration
S3Client.PutLifecycleConfiguration(new PutLifecycleConfigurationRequest
{
    BucketName = "sample-bucket",
    Configuration = configuration
});

Finally, if you want to turn off all lifecycle rules for a bucket, you must either disable every rule (by setting Status = LifecycleRuleStatus.Disabled) or remove the configuration entirely by calling the DeleteLifecycleConfiguration method, as follows.

// Remove a bucket's lifecycle configuration
S3Client.DeleteLifecycleConfiguration(new DeleteLifecycleConfigurationRequest
{
    BucketName = "sample-bucket"
});

Summary

In this blog post, we’ve shown how simple it is to configure the lifecycle of your S3 objects. For more information on this topic, see S3 Object Lifecycle Management.

The Three Different APIs for Amazon S3

by Norm Johanson

The AWS SDK for .NET has three different APIs for working with Amazon S3. The low-level API, found in the Amazon.S3 and Amazon.S3.Model namespaces, provides complete coverage of the S3 operations. For easy uploads and downloads, there is TransferUtility, found in the Amazon.S3.Transfer namespace. Finally, the File I/O API in the Amazon.S3.IO namespace gives you the ability to use filesystem semantics with S3.

Low-level API

The low-level API uses the same pattern used for other service low-level APIs in the SDK. There is a client object called AmazonS3Client that implements the IAmazonS3 interface. It contains methods for each of the service operations exposed by S3. Here are examples of performing the basic operations of putting a file in S3 and getting the file back out.

s3Client.PutObject(new PutObjectRequest
{
    BucketName = bucketName,
    FilePath = @"c:datalog.txt"
});

var getResponse = s3Client.GetObject(new GetObjectRequest
{
    BucketName = bucketName,
    Key = "log.txt"
});

getResponse.WriteResponseStreamToFile(@"c:\data\log-low-level.txt");

TransferUtility

The TransferUtility runs on top of the low-level API. For putting and getting objects into S3, I would recommend using this API. It is a simple interface for handling the most common uses of S3. The biggest benefit comes with putting objects. For example, TransferUtility detects if a file is large and switches into multipart upload mode. The multipart upload gives better performance because the parts can be uploaded in parallel, and if a part fails, only that part has to be retried. Here are examples showing the same operations performed above with the low-level API, this time using TransferUtility.

var transferUtility = new TransferUtility(s3Client);

transferUtility.Upload(@"c:\data\log.txt", bucketName);

transferUtility.Download(@"c:\data\log-transfer.txt", bucketName, "log.txt");

File I/O

The third API is the File I/O API, found in the Amazon.S3.IO namespace. This API is useful for applications that want to treat S3 as a file system. It does this by mimicking the .NET classes FileInfo and DirectoryInfo with the new classes S3FileInfo and S3DirectoryInfo. For example, this code shows how similar creating a directory structure in an S3 bucket is to doing so in the local filesystem.

// Create a directory called code at c:\code
DirectoryInfo localRoot = new DirectoryInfo(@"C:\");
DirectoryInfo localCode = localRoot.CreateSubdirectory("code");
	
// Create a directory called code in the bucket
S3DirectoryInfo s3Root = new S3DirectoryInfo(s3Client, "bucketofcode");
S3DirectoryInfo codeS3Dir = s3Root.CreateSubdirectory("code");

The following code shows how to get a list of directories and files from the root of the bucket. While enumerating the directories and files, all the paging for the Amazon S3 calls is handled behind the scenes, so there is no need to keep track of a next token.

// Print out the names of the subdirectories under the root directory
foreach (S3DirectoryInfo subDirectory in s3Root.GetDirectories())
{
    Console.WriteLine(subDirectory.Name);
}

// Print the names of the files in the root directory
foreach (S3FileInfo file in s3Root.GetFiles())
{
    Console.WriteLine(file.Name);
}

To write to a file in Amazon S3, you simply open a stream for write from S3FileInfo and write to it. Once the stream is closed, the in-memory data for the stream will be committed to Amazon S3. To read the data back from Amazon S3, just open the stream for read from the S3FileInfo object.

// Write file to Amazon S3
S3DirectoryInfo artDir = s3Root.CreateSubdirectory("asciiart");
S3FileInfo artFile = artDir.GetFile("aws.txt");
using (StreamWriter writer = new StreamWriter(artFile.OpenWrite()))
{
    writer.WriteLine("   _____  __      __  _________");
    writer.WriteLine("  /  _  /      /  /   _____/");
    writer.WriteLine(" /  /_     //   /_____   ");
    writer.WriteLine("/    |            / /        ");
    writer.WriteLine("____|____/__/__/ /_________/");
}	

// Read file back from Amazon S3
using (StreamReader reader = artFile.OpenText())
{
    Console.WriteLine(reader.ReadToEnd());
}

Client Side Data Encryption with AWS SDK for .NET and Amazon S3

by Steve Roberts

What is client-side encryption, and when would you want to use it?

Version 2 of AWS SDK for .NET provides an easy-to-use Amazon S3 encryption client that allows you to secure your sensitive data before you send it to Amazon S3. Using the AmazonS3EncryptionClient class, the SDK automatically encrypts data on the client when uploading to Amazon S3, and automatically decrypts it when data is retrieved.

EncryptionMaterials encryptionMaterials = new EncryptionMaterials(RSA.Create());
AmazonS3EncryptionClient client = new AmazonS3EncryptionClient(encryptionMaterials);
PutObjectResponse putObjectResponse = client.PutObject(putObjectRequest);
GetObjectResponse getObjectResponse = client.GetObject(getObjectRequest);

The entire process of encryption and decryption is called "envelope encryption". AmazonS3EncryptionClient generates a one-time-use AES 256-bit symmetric key (the envelope symmetric key) to encrypt your data, then that key is encrypted by a master encryption key you supply and stored alongside your data in Amazon S3. When accessing your data with the Amazon S3 encryption client, the encrypted symmetric key is retrieved and decrypted with a master encryption key you supply, and then the data is decrypted. Your master encryption key can be a symmetric or asymmetric key.

You can also store your data in Amazon S3 with server-side encryption, but using client-side encryption has some added benefits. First, with server-side encryption, your data is encrypted and decrypted after reaching S3, whereas client-side encryption is performed locally and your data never leaves the execution environment unencrypted.

Another benefit is that client-side encryption allows you to use your own master encryption keys. This ensures that no one can decrypt your data without having access to your master encryption keys.

Encryption metadata storage location

You can choose to store the encrypted envelope symmetric key either in the object metadata or in an instruction file. The instruction file is stored in the same location as the object itself. The following code snippet shows how you can set the storage location.

AmazonS3CryptoConfiguration config = new AmazonS3CryptoConfiguration()
{
    StorageMode = CryptoStorageMode.InstructionFile
};
AmazonS3EncryptionClient client = new AmazonS3EncryptionClient(config, encryptionMaterials);

How simple is it to use the AmazonS3EncryptionClient?

The AmazonS3EncryptionClient class implements the same interface as the standard AmazonS3Client, which means it is easy to switch to the AmazonS3EncryptionClient class. In fact, your application code will not be aware of the encryption and decryption happening automatically in the client. All you have to do is create an EncryptionMaterials object that holds an instance of either an asymmetric algorithm (preferably RSA) or a symmetric algorithm. You then simply pass the EncryptionMaterials object to the constructor of AmazonS3EncryptionClient.

The following example shows how you can use AmazonS3EncryptionClient.

EncryptionMaterials encryptionMaterials = new EncryptionMaterials(RSA.Create());

AmazonS3EncryptionClient client = new AmazonS3EncryptionClient(encryptionMaterials);

string bucketName = "YourBucketName";
string keyName = "YourKeyName";
client.PutBucket(new PutBucketRequest { BucketName = bucketName });
PutObjectRequest putObjectRequest = new PutObjectRequest
{
    BucketName = bucketName,
    Key = keyName,
    ContentBody = "Secret Message"
};
client.PutObject(putObjectRequest);
GetObjectRequest getObjectRequest = new GetObjectRequest
{
    BucketName = bucketName,
    Key = keyName
};
GetObjectResponse getObjectResponse = client.GetObject(getObjectRequest);
using (Stream decryptedStream = getObjectResponse.ResponseStream)
{
    using (StreamReader reader = new StreamReader(decryptedStream))
    {
        string decryptedContent = reader.ReadToEnd();
        Console.WriteLine("Decrypted data: {0}", decryptedContent);
    }
}

The AWS SDK for .NET supports client-side encryption for MultiPartUpload and TransferUtility as well, but since we use Cipher Block Chaining mode, TransferUtility uploads the parts sequentially rather than in parallel. Note that this means encrypted multi-part uploads cannot take advantage of multi-threading.
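For example, an upload through the encryption client might look like the following sketch (the file path is a placeholder; client and bucketName are from the earlier example):

var transferUtility = new TransferUtility(client);
transferUtility.Upload(@"c:\data\largefile.dat", bucketName);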

What happens if your master encryption keys are lost?

If your master encryption keys are lost, you will not be able to decrypt your data. Your master encryption keys are never sent to AWS; hence, it is important that you safely store them (e.g., as a file or using a separate key management system) and load them when needed for uploading or downloading objects.

The following example shows how you can use a master encryption key with an asymmetric algorithm.

Create an instance of an RSA algorithm and save the private key in a file.

RSA rsaAlgorithm = RSA.Create();
string privateKey = rsaAlgorithm.ToXmlString(true);
string filePath = @"c:tempPrivateKey.txt";
File.WriteAllText(filePath, privateKey);
EncryptionMaterials materials = new EncryptionMaterials(rsaAlgorithm);
AmazonS3EncryptionClient client = new AmazonS3EncryptionClient(materials);
// Perform your operations, such as PutObject, GetObject, etc.

Create an instance of an RSA algorithm and load it with the saved private key.

string filePath = @"c:tempPrivateKey.txt";
string privateKey = File.ReadAllText(filePath);
RSA rsaAlgorithm = RSA.Create();
rsaAlgorithm.FromXmlString(privateKey);
EncryptionMaterials materials = new EncryptionMaterials(rsaAlgorithm);
AmazonS3EncryptionClient client = new AmazonS3EncryptionClient(materials);
// Perform your operations, such as PutObject, GetObject, etc.

The following example shows how you can use a master encryption key with a symmetric algorithm.

Create an instance of an AES algorithm and save the symmetric key in a file.

Aes aesAlgorithm = Aes.Create();
File.WriteAllBytes(@"c:tempSymmetricKey.txt", aesAlgorithm.Key);
EncryptionMaterials materials = new EncryptionMaterials(aesAlgorithm);
AmazonS3EncryptionClient client = new AmazonS3EncryptionClient(materials);
//Perform your operations, such as PutObject, GetObject, etc.

Create an instance of an AES algorithm and load it with the saved symmetric key.

Aes aesAlgorithm = Aes.Create();
aesAlgorithm.Key = File.ReadAllBytes(@"c:\temp\SymmetricKey.txt");
EncryptionMaterials materials = new EncryptionMaterials(aesAlgorithm);
AmazonS3EncryptionClient client = new AmazonS3EncryptionClient(materials);
//Perform your operations, such as PutObject, GetObject, etc.

The AmazonS3EncryptionClient class in the AWS SDK for .NET is fully compatible with the AmazonS3EncryptionClient classes in the AWS SDK for Java and the AWS SDK for Ruby. All you have to do is store your master encryption keys in a commonly accessible location (for example, a .pem file) using one SDK, and then load them with the other SDK.

GA Release of AWS SDK for .NET Version 2

by Wade Matveyenko

We are excited to announce the General Availability (GA) release of AWS SDK for .NET version 2! This is the next major release of the SDK, which adds support for Windows Store, Windows Phone, and .NET Framework 4.5 platforms. You can download it here.

Improvements

  • One of the most exciting new features of version 2 is the ability to have Windows Store and Windows Phone 8 Apps use our SDK. Like other SDKs for these new platforms, all method calls that make requests to AWS are asynchronous methods.
  • Another big improvement for asynchronous programming is that when you target Windows Store, Windows Phone 8, or .NET Framework 4.5, the SDK uses the Task-based asynchronous pattern instead of the IAsyncResult pattern with its pairs of Begin and End methods. Version 2 of the SDK also includes a version compiled for .NET Framework 3.5 that retains the Begin and End methods for applications that aren’t yet ready to move to .NET 4.5.
  • The AWS SDK for .NET provides four distinct assemblies for developers to target different platforms. However, not all SDK functionality is available on each of these platforms. This guide describes the differences in what is supported across these platforms. We have also put together a migration guide that describes how version 2 of AWS SDK for .NET differs from the first version of the SDK and how to migrate your code to use the new SDK.
  • We have also added a new Amazon S3 encryption client in this SDK. This client allows you to secure your sensitive data before you send it to Amazon S3. Using the AmazonS3EncryptionClient class, the SDK automatically encrypts data on the client when uploading to Amazon S3, and automatically decrypts it when data is retrieved.

Breaking Changes

Below are the breaking changes in version 2 of the AWS SDK for .NET that you need to be aware of if you are migrating from version 1 of the SDK.

The region parameter is now mandatory

The SDK now requires the region to be explicitly specified through the client constructor or by using the AWSRegion setting in the application’s app or web config file. Prior versions of the SDK implicitly defaulted to us-east-1 if the region was not set. Here is an example of setting a region in the app config file so applications that are not explicitly setting a region can take this update without making any code changes.

<configuration>
  <appSettings>
    <add key="AWSRegion" value="us-east-1"/>
  </appSettings>
</configuration>

Here is an example of instantiating an Amazon S3 client using the new method on AWSClientFactory that accepts a RegionEndpoint parameter.

var s3Client = AWSClientFactory.CreateAmazonS3Client(accessKey, secretKey, RegionEndpoint.USWest2);

Fluent programming methods are no longer supported

The "With" methods on model classes that are present in version 1 of the SDK are not supported in version 2. You can use constructor initializers when creating new instances.

Here is an example that demonstrates this change. Calling the "With" methods using version 1 of the SDK to set up a TransferUtilityUploadRequest object looks like this:

TransferUtilityUploadRequest uploadRequest = new TransferUtilityUploadRequest()
    .WithBucketName("my-bucket")
    .WithKey("test")
    .WithFilePath(@"c:\test.txt");

In version 2 of the SDK, you can instead use object initializers like this:

TransferUtilityUploadRequest uploadRequest = new TransferUtilityUploadRequest
{
    BucketName = "my-bucket",
    Key = "test",
    FilePath = "c:test.txt"
};

Resources

Here are a few resources that you will find handy while working with the new SDK.