Configuring Advanced Logging on AWS Elastic Beanstalk

by Jim Flanagan

Sometimes developers want more flexibility in logging for their IIS environments. For example, in the IIS log files on instances in a load-balanced AWS Elastic Beanstalk environment, the client IP for requests always appears to be the load balancer. Elastic Load Balancing adds an X-Forwarded-For header to each request that contains the actual client IP address, but there’s no way to log that with IIS’s default logging.

Microsoft has created Advanced Logging to provide more flexibility in logging. You can add Advanced Logging to your Elastic Beanstalk instances by making the MSI available (for example, in an Amazon S3 bucket), then scripting the configuration with Windows PowerShell.

By default, though, Advanced Logging puts its log files in a different location than the default IIS log files, so if you want to see them in Snapshot Logs, or have them published to S3, you need to tell Elastic Beanstalk about it.

We’ll build up an .ebextensions config file that addresses each of these points, then show the completed configuration at the end.

Download and Install Advanced Logging

First, we need to make the Advanced Logging installer available at a well-known location that will persist over the lifetime of our environment, because instances that get autoscaled into the environment will need to download it as well as any instances created when the environment is first brought up. This can be any URL-addressable location. For this example, we will upload the installer to an S3 bucket we control.

After uploading the AdvancedLogging64.msi to an S3 bucket and making it publicly readable, add the following to the config file (for example: .ebextensions\advancedlogging.config).

files:
  "c:/software/AdvancedLogging64.msi":
    source: https://my-bucket.s3.amazonaws.com/AdvancedLogging64.msi
commands:
  00-install-advanced-logging:
    command: msiexec /i AdvancedLogging64.msi
    test: cmd /c "if exist c:\software\configured (exit 1) else (exit 0)"
    cwd: c:/software/
    waitAfterCompletion: 0
  02-set-configured:
    command: date /t > c:/software/configured
    waitAfterCompletion: 0

The files: key gets the MSI onto the instance, and the commands: key runs msiexec and then creates a file to signal that the install has been done. The test: subkey makes the command contingent on the non-existence of the signal file, so that the install happens only on the initial deployment, and not on every redeployment.

Configuring Advanced Logging

Advanced Logging is usually configured through the IIS Manager UI, but we can use PowerShell to accomplish this. The steps are:

  • disable IIS logging
  • add the X-Forwarded-For header to the list of possible fields
  • add X-Forwarded-For to the selected fields
  • enable Advanced Logging
  • iisreset

The PowerShell script to do this uses the WebAdministration module:

import-module WebAdministration

Set-WebConfigurationProperty `
  -Filter system.webServer/httpLogging `
  -PSPath machine/webroot/apphost `
  -Name dontlog `
  -Value true

Add-WebConfiguration "system.webServer/advancedLogging/server/fields" `
  -value @{id="X-Forwarded-For";sourceName="X-Forwarded-For";sourceType="RequestHeader";logHeaderName="X-Forwarded-For";category="Default";loggingDataType="TypeLPCSTR"}

$logDefinitions = Get-WebConfiguration "system.webServer/advancedLogging/server/logDefinitions"
foreach ($item in $logDefinitions.Collection) {
    Add-WebConfiguration `
      "system.webServer/advancedLogging/server/logDefinition/logDefinition[@baseFileName='$($item.baseFileName)']/selectedFields" `
      -value @{elementTagName="logField";id="X-Forwarded-For";logHeaderName="";required="false";defaultValue=""}
}

Set-WebConfigurationProperty `
  -Filter system.webServer/advancedLogging/server `
  -PSPath machine/webroot/apphost `
  -Name enabled `
  -Value true

iisreset

We can either put this script in a file and download it like the MSI, or just inline it in the config file, like this (this time without the line breaks that were added above for readability):

files:
  "c:/software/configureLogging.ps1":
    content: |
      import-module WebAdministration
      Set-WebConfigurationProperty -Filter system.webServer/httpLogging -PSPath machine/webroot/apphost -Name dontlog -Value true
      Add-WebConfiguration "system.webServer/advancedLogging/server/fields" -value @{id="X-Forwarded-For";sourceName="X-Forwarded-For";sourceType="RequestHeader";logHeaderName="X-Forwarded-For";category="Default";loggingDataType="TypeLPCSTR"}
      $logDefinitions = Get-WebConfiguration "system.webServer/advancedLogging/server/logDefinitions"
      foreach ($item in $logDefinitions.Collection) {
        Add-WebConfiguration "system.webServer/advancedLogging/server/logDefinitions/logDefinition[@baseFileName='$($item.baseFileName)']/selectedFields" -value @{elementTagName="logField";id="X-Forwarded-For";logHeaderName="";required="false";defaultValue=""}
      }
      Set-WebConfigurationProperty -Filter system.webServer/advancedLogging/server -PSPath machine/webroot/apphost -Name enabled -Value true
      iisreset
commands:
  01-add-forwarded-header:
    command: Powershell.exe -ExecutionPolicy Bypass -File c:\software\configureLogging.ps1
    test: cmd /c "if exist c:\software\configured (exit 1) else (exit 0)"
    waitAfterCompletion: 0

This snippet creates the script file and executes it.

Configure Elastic Beanstalk Logging

As we mentioned before, the default location for Advanced Logging log files is different from where the IIS logs usually go. In order to get the Advanced Logging log files to show up for Snapshot Logs and log publication, we need to add some configuration files that tell the Snapshot Logs and log publication features where to look for log files. In this case, these files say that all files in C:\inetpub\logs\AdvancedLogs are eligible for snapshotting or log publication.

files:
  "c:/Program Files/Amazon/ElasticBeanstalk/config/publogs.d/adv-logging.conf":
    content: |
      C:\inetpub\logs\AdvancedLogs
  "c:/Program Files/Amazon/ElasticBeanstalk/config/taillogs.d/adv-logging.conf":
    content: |
      C:\inetpub\logs\AdvancedLogs

Combining all of the above snippets into a single configuration file looks like this:

files:
  "c:/software/AdvancedLogging64.msi": 
    source: https://my-bucket.s3.amazonaws.com/AdvancedLogging64.msi
  "c:/Program Files/Amazon/ElasticBeanstalk/config/publogs.d/adv-logging.conf":
    content: |
      C:\inetpub\logs\AdvancedLogs
  "c:/Program Files/Amazon/ElasticBeanstalk/config/taillogs.d/adv-logging.conf":
    content: |
      C:\inetpub\logs\AdvancedLogs
  "c:/software/configureLogging.ps1":
    content: |
      import-module WebAdministration
      Set-WebConfigurationProperty -Filter system.webServer/httpLogging -PSPath machine/webroot/apphost -Name dontlog -Value true
      Add-WebConfiguration "system.webServer/advancedLogging/server/fields" -value @{id="X-Forwarded-For";sourceName="X-Forwarded-For";sourceType="RequestHeader";logHeaderName="X-Forwarded-For";category="Default";loggingDataType="TypeLPCSTR"}
      $logDefinitions = Get-WebConfiguration "system.webServer/advancedLogging/server/logDefinitions"
      foreach ($item in $logDefinitions.Collection) {
        Add-WebConfiguration "system.webServer/advancedLogging/server/logDefinitions/logDefinition[@baseFileName='$($item.baseFileName)']/selectedFields" -value @{elementTagName="logField";id="X-Forwarded-For";logHeaderName="";required="false";defaultValue=""}
      }
      Set-WebConfigurationProperty -Filter system.webServer/advancedLogging/server -PSPath machine/webroot/apphost -Name enabled -Value true
      iisreset
commands:
  00-install-advanced-logging:
    command: msiexec /i AdvancedLogging64.msi
    test: cmd /c "if exist c:\software\configured (exit 1) else (exit 0)"
    cwd: c:/software/
    waitAfterCompletion: 0
  01-add-forwarded-header:
    command: Powershell.exe -ExecutionPolicy Bypass -File c:\software\configureLogging.ps1
    test: cmd /c "if exist c:\software\configured (exit 1) else (exit 0)"
    waitAfterCompletion: 0
  02-set-configured:
    command: date /t > c:/software/configured
    waitAfterCompletion: 0

For more information about how to customize Elastic Beanstalk environments, see the AWS Elastic Beanstalk Developer Guide.

Working with Amazon S3 Object Versions and the AWS SDK for .NET

by Norm Johanson

Amazon S3 allows you to enable versioning for a bucket. You can enable or disable versioning with the SDK by calling the PutBucketVersioning method. Note: all code samples in this post were written for version 2 of the SDK; users of version 1 will notice some slight name changes.

s3Client.PutBucketVersioning(new PutBucketVersioningRequest
{
    BucketName = versionBucket,
    VersioningConfig = new S3BucketVersioningConfig() { Status = VersionStatus.Enabled }
});

Once versioning is enabled, every PutObject call with the same key will add a new version of the object with a different version ID instead of overwriting the object. For example, running the code below will create three versions of the "sample.txt" object. The sleeps are added to give a more obvious difference in the timestamps.

var putRequest = new PutObjectRequest
{
    BucketName = versionBucket,
    Key = "sample.txt",
    ContentBody = "Content For Version 1"
};

s3Client.PutObject(putRequest);

Thread.Sleep(TimeSpan.FromSeconds(10));

s3Client.PutObject(new PutObjectRequest
{
    BucketName = versionBucket,
    Key = "sample.txt",
    ContentBody = "Content For Version 2"
});

Thread.Sleep(TimeSpan.FromSeconds(10));

s3Client.PutObject(new PutObjectRequest
{
    BucketName = versionBucket,
    Key = "sample.txt",
    ContentBody = "Content For Version 3"
});

Now, if you call the GetObject method without specifying a version ID like this:

var getRequest = new GetObjectRequest
{
    BucketName = versionBucket,
    Key = "sample.txt"
};

using (GetObjectResponse getResponse = s3Client.GetObject(getRequest))
using (StreamReader reader = new StreamReader(getResponse.ResponseStream))
{
    Console.WriteLine(reader.ReadToEnd());
}

// Outputs:
Content For Version 3

It will print out the contents of the last object that was put into the bucket.

Use the ListVersions method to get the list of versions.

var listResponse = s3Client.ListVersions(new ListVersionsRequest
{
    BucketName = versionBucket,
    Prefix = "sample.txt"                    
});

foreach(var version in listResponse.Versions)
{
    Console.WriteLine("Key: {0}, Version ID: {1}, IsLatest: {2}, Modified: {3}", 
        version.Key, version.VersionId, version.IsLatest, version.LastModified);
}

// Output:
Key: sample.txt, Version ID: nx5sVCpUSdpHzPBpOICF.eELc2nUsm3c, IsLatest: True, Modified: 10/29/2013 4:45:07 PM
Key: sample.txt, Version ID: LOgcIIrvtM0ZqYfkvfRz3UMdgdmRXNWE, IsLatest: False, Modified: 10/29/2013 4:44:56 PM
Key: sample.txt, Version ID: XxnZRKXHZ7cHYiogeCHXXxccojj9DLK5, IsLatest: False, Modified: 10/29/2013 4:44:46 PM

To get a specific version of an object, you simply need to specify the VersionId property when performing a GetObject.

var earliestVersion = listResponse.Versions.OrderBy(x => x.LastModified).First();

var getRequest = new GetObjectRequest
{
    BucketName = versionBucket,
    Key = "sample.txt",
    VersionId = earliestVersion.VersionId
};

using(GetObjectResponse getResponse = s3Client.GetObject(getRequest))
using(StreamReader reader = new StreamReader(getResponse.ResponseStream))
{
    Console.WriteLine(reader.ReadToEnd());
}

// Outputs:
Content For Version 1

Deleting a versioned object works differently than deleting a non-versioned object. If you call delete like this:

s3Client.DeleteObject(new DeleteObjectRequest
{
    BucketName = versionBucket,
    Key = "sample.txt"
});

and then try to do a GetObject for the "sample.txt" object, S3 will return an error that the object doesn’t exist. What S3 actually does when you call delete for a versioned object is insert a delete marker. You can see this if you list the versions again.

var listResponse = s3Client.ListVersions(new ListVersionsRequest
{
    BucketName = versionBucket,
    Prefix = "sample.txt"                    
});

foreach (var version in listResponse.Versions)
{
    Console.WriteLine("Key: {0}, Version ID: {1}, IsLatest: {2}, IsDeleteMarker: {3}", 
        version.Key, version.VersionId, version.IsLatest, version.IsDeleteMarker);
}

// Outputs:
Key: sample.txt, Version ID: YRsryuUODxDujL4Y4iJjRLKweHrV0t2U, IsLatest: True, IsDeleteMarker: True
Key: sample.txt, Version ID: nx5sVCpUSdpHzPBpOICF.eELc2nUsm3c, IsLatest: False, IsDeleteMarker: False
Key: sample.txt, Version ID: LOgcIIrvtM0ZqYfkvfRz3UMdgdmRXNWE, IsLatest: False, IsDeleteMarker: False
Key: sample.txt, Version ID: XxnZRKXHZ7cHYiogeCHXXxccojj9DLK5, IsLatest: False, IsDeleteMarker: False

To delete a specific version of an object, set the VersionId property when calling DeleteObject. This is also how you can restore an object: delete the delete marker.

var deleteMarkerVersion = listResponse.Versions.FirstOrDefault(x => x.IsDeleteMarker && x.IsLatest);
if (deleteMarkerVersion != null)
{
    s3Client.DeleteObject(new DeleteObjectRequest
    {
        BucketName = versionBucket,
        Key = "sample.txt",
        VersionId = deleteMarkerVersion.VersionId
    });
}

Now, calls to GetObject for the "sample.txt" object will succeed again.

Access Key Management for .NET Applications – Part 1

by Milind Gokarn

In this post, we talk about the three methods for providing AWS access keys to your .NET applications and look at a few best practices. We look at the following questions while discussing the different options for managing access keys.

  • Security: How do you securely store credentials and supply them to your application?
  • Ease of management: What do you do if the credentials are compromised? How easy is it to rotate the credentials? How do you supply the new credentials to your application? (We’ll talk about this in detail in a future post.)

Setting credentials while constructing the service client

You can pass the IAM user credentials directly as parameters while constructing the service client. All service clients and the AWSClientFactory class have overloads that allow you to pass the credentials. The following snippet demonstrates creating an Amazon S3 client using AWSClientFactory.

var accessKey = "";  // Get access key from a secure store
var secretKey = "";  // Get secret key from a secure store
var s3Client = AWSClientFactory.CreateAmazonS3Client(accessKey, secretKey, RegionEndpoint.USWest2);

The preceding snippet assumes you have a way to manually retrieve IAM user access keys from a secure location. You should never store access keys in plain text in the code, as anyone with access to your code can misuse them.
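
For illustration only, one minimal way to keep keys out of source code is to read them from environment variables or some other store external to the code base; this is a sketch, and the variable names below are hypothetical placeholders for whatever secure store you already use.

// Minimal sketch: read keys from outside the code base instead of hard-coding them.
// The variable names MYAPP_AWS_ACCESS_KEY / MYAPP_AWS_SECRET_KEY are hypothetical.
var accessKey = Environment.GetEnvironmentVariable("MYAPP_AWS_ACCESS_KEY");
var secretKey = Environment.GetEnvironmentVariable("MYAPP_AWS_SECRET_KEY");

var s3Client = AWSClientFactory.CreateAmazonS3Client(accessKey, secretKey, RegionEndpoint.USWest2);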

Application configuration file

Instead of passing credentials to the service client in code, you can define access keys in an application configuration file (e.g., app.config or web.config). The SDK looks in the application configuration for both components of the access keys (AWSAccessKey and AWSSecretKey) and uses their values. We strongly recommend using IAM user access keys instead of root account access keys, as root account access keys always have full permissions to your entire AWS environment. If your application configuration file is ever compromised, you will want the access keys it contains to be as limited in scope as possible; defining an IAM user lets you do this.

The following snippet shows a sample app.config file and the creation of an S3 client using the AWSClientFactory class.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="AWSAccessKey" value=""/>
    <add key="AWSSecretKey" value=""/>
    <add key="AWSRegion" value="us-west-2"/>
  </appSettings>
</configuration>
var s3Client = AWSClientFactory.CreateAmazonS3Client();

If you store credentials in the configuration file, be careful not to commit that file to source control, where the credentials would be available to anyone with access to the repository.

We have looked at two methods of passing credentials; both are easy to use, but they support only trivial scenarios. For production-grade usage, you will additionally need to do the following:

  • Securely store and retrieve credentials.
  • Implement a mechanism to rotate credentials.

Next, we’ll see a method that provides both these features out of the box.

Preferred method: IAM roles for EC2

If your application runs on an Amazon EC2 instance, you can use AWS Identity and Access Management (IAM) roles for EC2 to secure and simplify access key management. IAM roles for EC2 automatically distribute temporary security credentials to your EC2 instances, and the AWS SDK for .NET in your application can use them as if they were long-term access keys. These temporary security credentials are rotated automatically, and the SDK transparently picks up the new credentials before the existing ones expire.

To use this method, you first need to create an IAM role that grants only the permissions to AWS resources that your application requires. When you launch an EC2 instance with that role, temporary security credentials for the role are made available to your application through the EC2 instance metadata service. The SDK supports loading these credentials from the instance metadata service, so you don't need to take any special steps to use this feature: if you construct a service client without specifying credentials, the client picks them up from the metadata service. (We'll cover IAM roles in detail in a subsequent post.)
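
As a minimal sketch (assuming the instance was launched with an IAM role that allows listing buckets), simply construct the client without credentials:

// No access keys supplied: on an EC2 instance launched with an IAM role,
// the SDK resolves temporary credentials from the instance metadata service
// and refreshes them automatically before they expire.
var s3Client = new AmazonS3Client(RegionEndpoint.USWest2);

var listResponse = s3Client.ListBuckets();
Console.WriteLine("Buckets visible to this role: {0}", listResponse.Buckets.Count);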

This is a good time to understand how the SDK loads credentials when they are made available to an application in more than one way. Knowing this can save you time debugging or trying to figure out which credentials are actually being used when they are specified in multiple places. Credentials are loaded or resolved in the following order (a brief sketch follows this list):

  • Credentials explicitly passed to the service client.
  • Credentials in application configuration file.
  • Credentials from EC2 instance metadata service (applicable only when your code is running in an EC2 instance).
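
Here is a brief sketch of that precedence, assuming accessKey and secretKey hold IAM user keys retrieved from a secure store:

// Explicitly supplied credentials always win for this client,
// even if app.config settings or an instance profile are also present.
var explicitClient = new AmazonS3Client(accessKey, secretKey, RegionEndpoint.USWest2);

// With no credentials supplied, the SDK falls back to the application
// configuration file, and then to the EC2 instance metadata service.
var resolvedClient = new AmazonS3Client(RegionEndpoint.USWest2);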

A few best practices

  • Do not hard-code access keys in code or embed them in your application. Doing so makes it difficult to rotate the keys, and you risk disclosure.
  • Do not use root access keys associated with your AWS account for development or in production. These keys have unlimited privileges. Instead, create IAM users and use their associated access keys.
  • For applications that run on Amazon EC2, only use IAM roles to distribute access keys. The security and management properties of IAM roles for EC2 are superior to any alternative where you manually manage access keys.

You will find more best practices, with in-depth information, in the IAM Best Practices documentation.

Getting your Amazon EC2 Windows Password with the AWS SDK for .NET

by Norm Johanson

When you launch a Windows instance in EC2, a password will be generated for the Windows administrator user. You can retrieve this administrator’s password by using the AWS SDK for .NET.

In order to be able to get the administrator password, you need to launch the EC2 instance with a key pair. To create a key pair, call the CreateKeyPair method.

string keyPairName = "get-my-password";
var createKeyPairResponse = ec2Client.CreateKeyPair(new CreateKeyPairRequest()
{
    KeyName = keyPairName
});

// The private key for the key pair used to decrypt the password.
string privateKey = createKeyPairResponse.KeyPair.KeyMaterial;

It is important to save the private key when you create the key pair; it is required to decrypt the password later.
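
Because the key material is returned only by this CreateKeyPair call, it is worth persisting immediately. A minimal sketch; the file name and location below are only an example.

// Save the private key right away; it cannot be retrieved again later.
// The file name here is just an example -- store it wherever your security policy allows.
File.WriteAllText(keyPairName + ".pem", privateKey);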

Now, when launching the EC2 instance, you need to set the key pair.

// Use the ImageUtilities from the Amazon.EC2.Util namespace to look up the latest Windows 2012 AMI
var image = ImageUtilities.FindImage(ec2Client, ImageUtilities.WINDOWS_2012_BASE);
var runInstanceResponse = ec2Client.RunInstances(new RunInstancesRequest()
{
    ImageId = image.ImageId,
    KeyName = keyPairName,
    InstanceType = InstanceType.T1Micro,
    MaxCount = 1,
    MinCount = 1
});

// Capture the instance ID
string instanceId = runInstanceResponse.Reservation.Instances[0].InstanceId;

Once you’ve launched the instance, it will take a few minutes for the password to become available. To get the password, call the GetPasswordData method. If the PasswordData property on the response from GetPasswordData is null, then the password is not available yet.

var getPasswordResponse = ec2Client.GetPasswordData(new GetPasswordDataRequest()
{
    InstanceId = instanceId
});

if (string.IsNullOrEmpty(getPasswordResponse.PasswordData))
{
    Console.WriteLine("Password not available yet.");
}
else
{
    string decryptedPassword = getPasswordResponse.GetDecryptedPassword(privateKey);
    Console.WriteLine("Decrypted Windows Password: {0}", decryptedPassword);
}

If the PasswordData property is not null, then it contains the encrypted administrator password. The utility method GetDecryptedPassword on GetPasswordDataResponse takes the private key from the key pair and decrypts the password.
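
If you would rather wait for the password than check once, a simple polling loop along the following lines works; this is a sketch, and the retry count and interval are arbitrary choices.

// Poll until the password becomes available, or give up after roughly 10 minutes.
GetPasswordDataResponse passwordResponse = null;
for (int attempt = 0; attempt < 40; attempt++)
{
    passwordResponse = ec2Client.GetPasswordData(new GetPasswordDataRequest()
    {
        InstanceId = instanceId
    });

    if (!string.IsNullOrEmpty(passwordResponse.PasswordData))
        break;

    Thread.Sleep(TimeSpan.FromSeconds(15));
}

if (passwordResponse != null && !string.IsNullOrEmpty(passwordResponse.PasswordData))
{
    Console.WriteLine("Decrypted Windows Password: {0}",
        passwordResponse.GetDecryptedPassword(privateKey));
}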

Archiving and Backing-up Data with the AWS SDK for .NET

by Norm Johanson

Jason Fulghum recently posted a blog entry about using Glacier with the AWS SDK for Java that I thought would be interesting for .NET developers. Here is Jason’s post with the code replaced with the C# equivalent.

Do you or your company have important data that you need to archive? Have you explored Amazon Glacier yet? Amazon Glacier is an extremely low-cost storage service that provides secure and durable storage for data archiving and backup. Just like with other AWS offerings, you pay only for what you use. You don’t have to pay large upfront infrastructure costs or predict capacity requirements like you do with on-premises solutions. Simply use what you need, when you need it, and pay only for what you use.

There are two easy ways to leverage Amazon Glacier for data archives and backups using the AWS SDK for .NET. The first option is to interact with the Amazon Glacier service directly. The AWS SDK for .NET includes a high-level API called ArchiveTransferManager for easily working with transfers into and out of Amazon Glacier.

IAmazonGlacier glacierClient = new AmazonGlacierClient();
ArchiveTransferManager atm = new ArchiveTransferManager(glacierClient);
UploadResult uploadResult = atm.Upload("myVaultName", "old logs", @"C:\logs\oldLogs.zip");

// later, when you need to retrieve your data
atm.Download("myVaultName", uploadResult.ArchiveId, @"C:\download\logs.zip");

The second easy way of getting your data into Amazon Glacier using the SDK is to get your data into Amazon S3 first, and use a bucket lifecycle to automatically archive your objects into Amazon Glacier after a certain period.

It’s easy to configure an Amazon S3 bucket’s lifecycle using the AWS Management Console or the AWS SDKs. Here’s how to create a bucket lifecycle rule that will archive objects under the "logs/" key prefix to Amazon Glacier after 365 days, and remove them completely from your Amazon S3 bucket a few days later, at the 370-day mark.

IAmazonS3 s3 = new AmazonS3Client();

var configuration = new LifecycleConfiguration()
{
    Rules = new List<LifecycleRule>
    {
        new LifecycleRule
        {
            Id = "log-archival-rule",
            Prefix = "logs/",
            Transition = new LifecycleTransition()
            {
                Days = 365,
                StorageClass = S3StorageClass.Glacier
            },
            Status = LifecycleRuleStatus.Enabled,
            Expiration = new LifecycleRuleExpiration()
            {
                Days = 370
            }
        }
    }
};

s3.PutLifecycleConfiguration(new PutLifecycleConfigurationRequest()
{
    BucketName = myBucket,
    Configuration = configuration
});

Are you using Amazon Glacier yet? Let us know how you’re using it and how it’s working for you!

DynamoDB APIs

by Pavel Safronov

Amazon DynamoDB is a fast NoSQL database service offered by AWS. DynamoDB can be invoked from .NET applications by using the AWS SDK for .NET. The SDK provides three different models for communicating with DynamoDB. This blog post is the first of a series that describes the various APIs, their respective tradeoffs, best practices, and little-known features.

The Models

The SDK provides three ways of communicating with DynamoDB. Each one offers a different tradeoff between control and ease of use.

  • Low-level : Amazon.DynamoDBv2 namespace—This is a thin wrapper over the DynamoDB service calls. It matches all the service features. You can reference the service documentation to learn more about each individual operation.
  • Document Model : Amazon.DynamoDBv2.DocumentModel namespace—This is a model that provides a simpler interface for dealing with data. DynamoDB tables are represented by Table objects, while individual rows of data are represented by Document objects. Conversion of .NET objects to DynamoDB data is automatic for basic types.
  • Object Persistence Model : Amazon.DynamoDBv2.DataModel namespace—This set of APIs allows you to store and load .NET objects in DynamoDB. Objects must be marked up to configure the target table and the hash/range keys. DynamoDBContext acts on marked-up objects. It is used to store and load DynamoDB data, or to retrieve .NET objects from a query or scan operation. Basic data types are automatically converted to DynamoDB data, and converters allow arbitrary types to be stored in DynamoDB.

The three models provide different approaches to working with the service. While the low-level approach requires more client-side code—the user must convert .NET types such as numbers and dates to DynamoDB-supported strings—it provides access to all service features. By comparison, the Object Persistence Model approach makes it easier to use the service—since the user is for the most part working with familiar .NET objects—but does not provide all the functionality. For example, it is not possible to make conditional Put calls with the Object Persistence Model.
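
For instance, a conditional Put is available only through the low-level API. The sketch below uses the Expected map to write an item only if no item with that Id already exists; the table and attribute names match the samples that follow.

var client = new AmazonDynamoDBClient();
try
{
    // Conditional Put: succeeds only if no item with this Id exists yet.
    client.PutItem(new PutItemRequest
    {
        TableName = "Books",
        Item = new Dictionary<string, AttributeValue>
        {
            { "Id", new AttributeValue { N = "42" } },
            { "Title", new AttributeValue { S = "Cryptonomicon" } }
        },
        Expected = new Dictionary<string, ExpectedAttributeValue>
        {
            { "Id", new ExpectedAttributeValue { Exists = false } }
        }
    });
}
catch (ConditionalCheckFailedException)
{
    Console.WriteLine("An item with Id 42 already exists.");
}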

Sample code

The best way to gain an understanding of the different models is with a code sample. Below are three examples of storing and retrieving data from DynamoDB, each using a different model.

Low-level

var client = new AmazonDynamoDBClient();

// Store item
client.PutItem(new PutItemRequest
{
    TableName = "Books",
    Item = new Dictionary<string, AttributeValue>
    {
        { "Title", new AttributeValue { S = "Cryptonomicon" } },
        { "Id", new AttributeValue { N = "42" } },
        { "Authors", new AttributeValue {
            SS = new List<string> { "Neal Stephenson" } } },
        { "Price", new AttributeValue { N = "12.95" } }
    }
});

// Get item
Dictionary<string, AttributeValue> book = client.GetItem(new GetItemRequest
{
    TableName = "Books",
    Key = new Dictionary<string, AttributeValue>
    {
        { "Id", new AttributeValue { N = "42" } }
    }
}).Item;

Console.WriteLine("Id = {0}", book["Id"].S);
Console.WriteLine("Title = {0}", book["Title"].S);
Console.WriteLine("Authors = {0}",
    string.Join(", ", book["Authors"].SS));

Document Model

var client = new AmazonDynamoDBClient();
Table booksTable = Table.LoadTable(client, "Books");

// Store item
Document book = new Document();
book["Title"] = "Cryptonomicon";
book["Id"] = 42;
book["Authors"] = new List<string> { "Neal Stephenson" };
book["Price"] = 12.95;
booksTable.PutItem(book);

// Get item
book = booksTable.GetItem(42);
Console.WriteLine("Id = {0}", book["Id"]);
Console.WriteLine("Title = {0}", book["Title"]);
Console.WriteLine("Authors = {0}",
    string.Join(", ", book["Authors"].AsListOfString()));

Object Persistence Model

This example consists of two parts: first, we must define our Book type; second, we use it with DynamoDBContext.

[DynamoDBTable("Books")]
class Book
{
    [DynamoDBHashKey]
    public int Id { get; set; }
    public string Title { get; set; }
    public List<string> Authors { get; set; }
    public double Price { get; set; }
}

var client = new AmazonDynamoDBClient();
DynamoDBContext context = new DynamoDBContext(client);

// Store item
Book book = new Book
{
    Title = "Cryptonomicon",
    Id = 42,
    Authors = new List<string> { "Neal Stephenson" },
    Price = 12.95
};
context.Save(book);

// Get item
book = context.Load<Book>(42);
Console.WriteLine("Id = {0}", book.Id);
Console.WriteLine("Title = {0}", book.Title);
Console.WriteLine("Authors = {0}", string.Join(", ", book.Authors));

Summary

As you can see, the three models differ considerably. The low-level approach is quite verbose, but it does expose service capabilities that are not present in the other models. The tradeoffs, specific features unique to each model, and how to use all models together will be the focus of this series on the .NET SDK and DynamoDB.

Release 2.0.0.6 of the AWS SDK V2.0 for .NET

by Norm Johanson

Today, we updated our version 2 preview of the AWS SDK for .NET. You can download version 2.0.0.6 of the SDK here. This preview contains the following updates.

  • The SDK now requires the region to be explicitly specified, either through the client constructor or by using the AWSRegion setting in the application’s app or web config file (a brief constructor sketch follows this list). Prior versions of the SDK implicitly defaulted to us-east-1 if the region was not set. Here is an example of setting the region in the app config file so that applications that are not explicitly setting a region can take this update without making any code changes.

    <configuration>
      <appSettings>
        <add key="AWSRegion" value="us-east-1"/>
      </appSettings>
    </configuration>
    
  • The Amazon DynamoDB high-level APIs Document Model and Object Persistence Model were added to the Windows Store and Windows Phone 8 version of the AWS SDK for .NET. For those of you coming to the AWS re:Invent conference next month, you can see our session where we’ll discuss using these APIs with version 2 of the SDK.
  • All the service clients have been updated to match the latest changes for each service.
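
For reference, the constructor alternative mentioned above is a one-liner; this sketch uses the EC2 client, but any service client works the same way.

// Passing the region explicitly to the client constructor is the
// alternative to the AWSRegion application setting shown above.
var ec2Client = new AmazonEC2Client(RegionEndpoint.USEast1);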

 

We’ve received some great feedback on our preview and have made changes based on that feedback. We are hoping to GA version 2 of the SDK soon, but it is not too late to send us your thoughts.

Using Elastic IP Addresses

by Norm Johanson

Elastic IP addresses are great for keeping a consistent public IP address. They can also be transferred to other EC2 instances, which is useful if you need to replace an instance but don’t want your public IP address to change. The Amazon EC2 User Guide has information on IP addresses for EC2 instances that can give you a better understanding of how and when they are assigned. You can use the AWS Toolkit for Visual Studio or the AWS Management Console to manage your Elastic IP addresses, but what if you want to assign them from code?

Allocating Elastic IP addresses and associating them with instances using the AWS SDK for .NET is very simple, but the process differs slightly between EC2-Classic instances and instances launched into a VPC. This snippet shows how to allocate and associate an Elastic IP address for an instance launched into EC2-Classic.

// Create a new Elastic IP
var allocateRequest = new AllocateAddressRequest() { Domain = DomainType.Standard };
var allocateResponse = ec2Client.AllocateAddress(allocateRequest);

// Assign the IP to an EC2 instance
var associateRequest = new AssociateAddressRequest
{
    PublicIp = allocateResponse.PublicIp,
    InstanceId = "i-XXXXXXXX"
};
ec2Client.AssociateAddress(associateRequest);

And the following snippet is for an EC2 instance launched into a VPC.

// Create a new Elastic IP
var allocateRequest = new AllocateAddressRequest() { Domain = DomainType.Vpc };
var allocateResponse = ec2Client.AllocateAddress(allocateRequest);

// Assign the IP to an EC2 instance
var associateRequest = new AssociateAddressRequest
{
    AllocationId = allocateResponse.AllocationId,
    InstanceId = "i-XXXXXXXX"
};
ec2Client.AssociateAddress(associateRequest);

The first difference between the two snippets is that the Domain property on AllocateAddressRequest changes from DomainType.Standard to DomainType.Vpc. The other difference is that the address is identified by its PublicIp for EC2-Classic, whereas the AllocationId is used for EC2-VPC.

Later, if the Elastic IP address needs to be moved to a different instance, you can call the DisassociateAddress API and then call AssociateAddress again for the new instance, as in the sketch below.
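
A minimal sketch of that sequence for EC2-Classic looks like this; the replacement instance ID is a placeholder, and for EC2-VPC you would identify the address by its AssociationId and AllocationId instead of the public IP.

// Detach the Elastic IP from the instance it is currently associated with.
ec2Client.DisassociateAddress(new DisassociateAddressRequest
{
    PublicIp = allocateResponse.PublicIp
});

// Attach the same address to the replacement instance.
ec2Client.AssociateAddress(new AssociateAddressRequest
{
    PublicIp = allocateResponse.PublicIp,
    InstanceId = "i-YYYYYYYY"
});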

Note: I was using version 2 of the SDK for this post. If you are using version 1 of the SDK, the enumeration values DomainType.Standard and DomainType.Vpc would be replaced with the string literals "standard" and "vpc".

Using Windows PowerShell

This is also a great use for the AWS Tools for Windows PowerShell. Here’s how you can do the same as above for EC2-Classic in PowerShell.

$address = New-EC2Address -Domain "standard"
Register-EC2Address -InstanceId "i-XXXXXXXX" -PublicIp $address.PublicIp

Drag and Drop in the AWS Toolkit for Visual Studio

by Norm Johanson

Using drag and drop can be a great time saver when using your favorite tool, but it is not always obvious what drag and drop features are available. The AWS Toolkit for Visual Studio has many drag and drop features that you might not have discovered yet.

AWS Explorer to the Code Window

When dragging resources from AWS Explorer to your code, the name used to look up the resource is inserted into your code. For example, dragging an Amazon S3 bucket inserts the bucket name into your code. This is especially useful for Amazon SQS queues where the full queue URL is inserted, and for Amazon SNS topics where the topic ARN is inserted.

Amazon S3 Bucket Browser

Files and folders in Windows Explorer can be dragged into the S3 bucket browser. This uploads the local files and folders to the specific bucket. S3 objects can also be dragged out of the S3 bucket browser into Windows Explorer. If you drag a "folder" from the S3 bucket browser, a folder is created on your local system, and all of the objects with the folder prefix are downloaded into the folder.

Subscribing Amazon SQS Queues to Amazon SNS Topics

In order to have an SQS queue receive messages from an SNS topic, the queue must be subscribed and the permissions on the SQS queue must give the SNS topic access to the SendMessage action. In the toolkit, this is easy to do by opening up the SNS topic view and then dragging the target SQS queue into the view.

This displays the confirmation dialog box with the check box to add permissions on the SQS queue for the SNS topic. Afterwards, you can confirm the permissions by right-clicking the SQS queue and selecting Edit Policy. You can also confirm the subscription by using the "Publish to Topic" feature in the topic view and seeing the message in the queue view.
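
If you want to do the same thing in code rather than through the toolkit, a rough sketch with the SDK looks like the following; topicArn and queueUrl are placeholders for your own resources, and the queue access policy change that the dialog box offers to make for you is not shown.

var sqsClient = new AmazonSQSClient();
var snsClient = new AmazonSimpleNotificationServiceClient();

// Look up the queue's ARN, which SNS needs as the subscription endpoint.
var attributesResponse = sqsClient.GetQueueAttributes(new GetQueueAttributesRequest
{
    QueueUrl = queueUrl,
    AttributeNames = new List<string> { "QueueArn" }
});
string queueArn = attributesResponse.Attributes["QueueArn"];

// Subscribe the queue to the topic. The queue's access policy must still
// allow the topic to call SendMessage for messages to be delivered.
snsClient.Subscribe(new SubscribeRequest
{
    TopicArn = topicArn,
    Protocol = "sqs",
    Endpoint = queueArn
});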

AWS Identity and Access Management (IAM) Policy Editor

Using IAM to restrict access to your resources is very important in keeping your account secure. In the policies that you create for IAM groups, roles, and users, you identify the resources you want to give or deny access to by their Amazon Resource Name (ARN). To make this step easier, you can drag your target resources or services from AWS Explorer to the policy editor, which automatically fills in the required ARNs.

AWS CloudFormation Stack to a Template File

When using the CloudFormation Editor, you can drag stacks from AWS Explorer to an open template file. This replaces all the contents of the template with the template from the stack. (A confirmation box appears to make sure that this is what you want to do.)

Getting Ready for AWS re:Invent 2013

by Norm Johanson

AWS re:Invent is coming up again this November 12-15 in Las Vegas. Last year, Steve Roberts and I had a great time meeting with developers and discussing how they use AWS. We also gave a talk about deploying your apps from Visual Studio. To watch the screencast, see Deploying to the AWS Cloud with Visual Studio.

This year, Jim Flanagan and I are coming to re:Invent. We’ll be hanging out in the developer lounge so we can meet and chat with fellow attendees. We’ll also be giving another talk this year, in which we plan to show off the new version 2 of the AWS SDK for .NET and the new enhancements we’ve made for deploying your apps. For more information, check out TLS302 – Building Scalable Windows and .NET Apps on AWS.

Hope to see you there!