Category: .NET


Requesting feedback on the AWS Toolkit for Visual Studio

by Andrew Fitz Gibbon

The AWS Toolkit for Visual Studio provides extensions for Microsoft Visual Studio that make it easier to develop, debug, and deploy .NET applications using Amazon Web Services. We’re constantly working to improve these extensions and provide developers what they need to develop and manage their applications.

To better guide the future of the AWS Toolkit for Visual Studio, we’re reaching out to you for direct feedback. Below is a link to a short survey. It shouldn’t take more than 15 minutes to fill out, and your responses will help us bring you a better development experience. Thank you!

Survey: Feedback on the AWS Toolkit for Visual Studio

Using Amazon SQS Dead Letter Queues

by Norm Johanson

Jason Fulghum recently posted a blog entry about using Amazon SQS dead letter queues with the AWS SDK for Java. I thought his post would be interesting for .NET developers as well, so here is Jason’s post with the code replaced with the C# equivalent.

Amazon SQS recently introduced support for dead letter queues. This feature is an important tool to help your applications consume messages from SQS queues in a more resilient way.

Dead letter queues allow you to set a limit on the number of times a message in a queue is processed. Consider an application that consumes messages from a queue and does some sort of processing based on the message. A bug in your application may only be triggered by certain types of messages or when working with certain data in your application. If your application receives one of these messages, it won’t be able to successfully process it and remove it from the queue. Instead, your application will continue to try to process the message again and again. While this message is being continually retried, your queue is likely filling up with other messages, which your application is unable to process because it’s stuck repeatedly processing the bad message.

Amazon SQS dead letter queues enable you to configure your application so that if it can’t successfully process a problematic message and remove it from the queue, that message will be automatically removed from your queue and delivered to a different SQS queue that you’ve designated as a dead letter queue. Another part of your application can then periodically monitor the dead letter queue and alert you if it contains any messages, which you can debug separately.

Using Amazon SQS dead letter queues is easy. You just need to configure a RedrivePolicy on your queue to specify when messages are delivered to a dead letter queue and to which dead letter queue they should be delivered. You can use the AWS Management Console, or you can access the Amazon SQS API directly with the AWS SDK for .NET.

// First, we'll need an Amazon SQS client object.
IAmazonSQS sqs = new AmazonSQSClient(RegionEndpoint.USWest2);

// Create two new queues:
//     one main queue for our application messages
//     and another to use as our dead letter queue
string qUrl = sqs.CreateQueue(new CreateQueueRequest()
{
    QueueName = "MyApplicationQueue"
}).QueueUrl;

string dlqUrl = sqs.CreateQueue(new CreateQueueRequest()
{
    QueueName = "MyDeadLetterQueue"
}).QueueUrl;

// Next, we need to get the ARN (Amazon Resource Name) of our dead
// letter queue so we can configure our main queue to deliver messages to it.
IDictionary<string, string> attributes = sqs.GetQueueAttributes(new GetQueueAttributesRequest()
{
    QueueUrl = dlqUrl,
    AttributeNames = new List<string>() { "QueueArn" }
}).Attributes;

string dlqArn = attributes["QueueArn"];

// The last step is setting a RedrivePolicy on our main queue to configure
// it to deliver messages to our dead letter queue if they haven't been
// successfully processed after five attempts.
string redrivePolicy = string.Format(
    "{{\"maxReceiveCount\":\"{0}\", \"deadLetterTargetArn\":\"{1}\"}}",
    5, dlqArn);

sqs.SetQueueAttributes(new SetQueueAttributesRequest()
{
    QueueUrl = qUrl,
    Attributes = new Dictionary<string, string>()
    {
        {"RedrivePolicy", redrivePolicy}
    }
});
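
With the redrive policy in place, the part of your application that watches the dead letter queue can simply poll it for messages and raise an alert when anything shows up. Below is a minimal sketch of that idea, reusing the sqs client and dlqUrl from above; the polling schedule and alerting mechanism are up to you.

// Check the dead letter queue for messages that could not be processed,
// so they can be inspected and debugged separately.
var receiveResponse = sqs.ReceiveMessage(new ReceiveMessageRequest()
{
    QueueUrl = dlqUrl,
    MaxNumberOfMessages = 10
});

foreach (var message in receiveResponse.Messages)
{
    Console.WriteLine("Dead-lettered message: " + message.Body);
}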

There’s also a new operation in the Amazon SQS API to help you identify which of your queues are set up to deliver messages to a specific dead letter queue. If you want to know what queues are sending messages to a dead letter queue, just use the IAmazonSQS.ListDeadLetterSourceQueues operation.

IList<string> sourceQueues = sqs.ListDeadLetterSourceQueues(
    new ListDeadLetterSourceQueuesRequest()
    {
        QueueUrl = dlqUrl
    }).QueueUrls;

Console.WriteLine("Source Queues Delivering to " + qUrl);
foreach (string queueUrl in sourceQueues)
{
    Console.WriteLine(" * " + queueUrl);
}

Dead letter queues are a great way to add more resiliency to your queue-based applications. Have you set up any dead letter queues in Amazon SQS yet?

Steve Roberts Interviewed in Episode 255 of the PowerScripting Podcast

A few weeks ago, Steve Roberts, from the AWS SDK and Tools team for .NET, was pleased to be invited to take part in an episode of the PowerScripting Podcast, chatting with fellow developers about PowerShell here at AWS, the AWS SDK for .NET and other general topics (including his choice of superhero!). The recording of the event has now been published and can be accessed here.

As mentioned in the podcast, a new book has also just been published about using PowerShell with AWS. More details can be found on the publisher’s website at Pro PowerShell for Amazon Web Services.

Amazon DynamoDB Local Integration with AWS Toolkit for Visual Studio

by Norm Johanson

Recently, the Amazon DynamoDB team released DynamoDB Local, a great tool for local testing and for working disconnected from the Internet. With version 1.6.3 of the AWS Toolkit for Visual Studio, DynamoDB Local is integrated into the toolkit, making it easy to manage your locally running DynamoDB instance.

In order to run DynamoDB Local, you need at least a JavaSE-1.6-compatible JRE installed, but we recommend 1.7.

Getting Started

To get started with DynamoDB Local:

  1. In AWS Explorer, select Local (localhost).

  2. Now right-click on the DynamoDB node and select Connect to DynamoDB Local.

    • If you already have DynamoDB Local running, you can clear the Start new DynamoDB Local process check box. In this case, the toolkit attempts to connect to a currently running DynamoDB Local at the configured port.
    • If you haven’t installed DynamoDB Local yet, you can do that here by selecting the version you want (most likely the latest) and clicking Install. This downloads DynamoDB Local to the folder "dynamodb-local" under your home directory.
  3. Ensure that you have a proper path to Java set for the Java Executable Path and click OK to start a new instance of DynamoDB Local. AWS Explorer refreshes and shows any tables that you might have set up previously.

 

Connecting to DynamoDB Local

To connect to DynamoDB Local using the AWS SDK for .NET, you need to set the ServiceURL property on the AmazonDynamoDBConfig object for the client. Here is an example of setting up the DynamoDB client, assuming DynamoDB Local is running on port 8000.

var config = new AmazonDynamoDBConfig
{
   ServiceURL = "http://localhost:8000/"
};

// Access key and secret key are not required
// when connecting to DynamoDB Local and
// are left empty in this sample.
var client = new AmazonDynamoDBClient("", "", config);
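
Once the client is pointed at the local endpoint, it behaves just like a client talking to the service. As a quick check (not part of the original sample), you can list the tables held by your local instance:

// List the tables stored in the locally running DynamoDB instance.
var tableNames = client.ListTables().TableNames;
foreach (var tableName in tableNames)
{
    Console.WriteLine(tableName);
}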

 

IAM Credential Rotation (Access Key Management for .NET Applications – Part 3)

by Milind Gokarn

In the previous post in this series, we talked about using IAM users instead of using the root access keys of your AWS account. In this post, we’ll talk about another security best practice, regularly rotating your credentials.

Instead of rotating credentials only when keys are compromised, you should rotate them regularly. If you follow this approach, you’ll already have a process in place that takes care of rotating keys when they are compromised, instead of having to figure one out when the event takes place. You’ll also have some degree of protection against keys that are compromised without your knowledge, because those keys will only be valid for a certain period before they are rotated.

We use the following steps for access key rotation to minimize any disruption to running applications:

  • Generate new access key
  • Securely distribute the access key to your applications
  • Disable the old access key
  • Make sure that your applications work with the new key
  • Delete the old access key

Here is the code that performs some of these steps. How you implement distributing the key to your applications and testing the applications is specific to your solution.

var iamClient = new AmazonIdentityManagementServiceClient(ACCESS_KEY, SECRET_KEY, RegionEndpoint.USWest2);
            
// Generate new access key for the current account
var accessKey = iamClient.CreateAccessKey().AccessKey;
	
//
// Store the access key ID (accessKey.AccessKeyId) and 
// secret access key (accessKey.SecretAccessKey)
// securely and distribute it to your applications.
//

// Disable the old access key
iamClient.UpdateAccessKey(new UpdateAccessKeyRequest
{
  AccessKeyId = OLD_ACCESS_KEY_ID,
  Status = StatusType.Inactive
});

// 
// Confirm that your applications pick the new access key
// and work properly using the new key.
//

// Delete the old access key.
iamClient.DeleteAccessKey(new DeleteAccessKeyRequest
{
  AccessKeyId = OLD_ACCESS_KEY_ID
});

If your applications don’t work properly after switching to the new access key, you can always reactivate the old access key (from its inactive state) and switch back to it. Only delete the old access key after you have confirmed that your applications work with the new one, because access keys cannot be restored once deleted.
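
Reactivating the old key uses the same UpdateAccessKey call shown above, with the status switched back to active. A minimal sketch:

// Switch the old access key back to the active state if the
// applications don't work properly with the new key.
iamClient.UpdateAccessKey(new UpdateAccessKeyRequest
{
  AccessKeyId = OLD_ACCESS_KEY_ID,
  Status = StatusType.Active
});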

 

New Sample Simple Workflow

When you install the SDK from our website, many samples are installed and made available inside Visual Studio, including the Express editions. Look in the New Project Wizard, where you’ll find samples showing off many of the AWS services.

 

We recently added a new sample that shows off using Amazon Simple Workflow Service (SWF) with the .NET SDK. The sample is under the AWS -> App Services section and is called AWS Simple Workflow Image Processing Sample. It shows how to use SWF to monitor images coming from S3 and to generate thumbnails of various sizes. In a real-world scenario, this would most likely be done with multiple processes monitoring SWF for decision and activity tasks. To make the sample easier to run, it is set up as a WPF app hosting virtual consoles, each representing an individual process.

 

The virtual console on the top is the process that chooses an image to generate thumbnails for and starts the workflow execution.

// Snippet from StartWorkflowExecutionProcessor.cs that starts the workflow execution

swfClient.StartWorkflowExecution(new StartWorkflowExecutionRequest
{
    // Serialize input to a string
    Input = Utils.SerializeToJSON(input),
    //Unique identifier for the execution
    WorkflowId = DateTime.Now.Ticks.ToString(),
    Domain = Constants.ImageProcessingDomain,
    WorkflowType = new WorkflowType
    {
        Name = Constants.ImageProcessingWorkflow,
        Version = Constants.ImageProcessingWorkflowVersion
    }
});

 

The virtual console in the bottom left monitors SWF for decision tasks. When it gets a decision task, it looks at the workflow’s history to see which activities have been completed and figure out which thumbnails haven’t been created yet. If one of the thumbnail sizes hasn’t been created yet, it schedules an activity to create the next thumbnail size. If all the thumbnails have been created, it completes the workflow.

// Snippet from ImageProcessWorkflow.cs that polls for decision tasks and decides what decisions to make.

void PollAndDecide()
{
    this._console.WriteLine("Image Process Workflow Started");
    while (!_cancellationToken.IsCancellationRequested)
    {
        DecisionTask task = Poll();
        if (!string.IsNullOrEmpty(task.TaskToken))
        {
            // Create the next set of decisions based on the current state and
            // the execution history
            List<Decision> decisions = Decide(task);

            // Complete the task with the new set of decisions
            CompleteTask(task.TaskToken, decisions);
        }
    }
}

 

The virtual console in the bottom right monitors SWF for activity tasks to perform. The activity task carries input from the decider process that tells the worker which image to create a thumbnail for and at what size.

// Snippet from ImageActivityWorker.cs showing the main loop for the worker that polls for tasks and processes them.

void PollAndProcessTasks()
{
    this._console.WriteLine("Image Activity Worker Started");
    while (!_cancellationToken.IsCancellationRequested)
    {
        ActivityTask task = Poll();
        if (!string.IsNullOrEmpty(task.TaskToken))
        {
            ActivityState activityState = ProcessTask(task.Input);
            CompleteTask(task.TaskToken, activityState);
        }
    }
}

 

Resource Condition Support in the AWS CloudFormation Editor

AWS CloudFormation recently added support for conditions that control whether resources are created or what value to set for properties on resources. The CloudFormation editor included with the AWS Toolkit for Visual Studio was updated to support conditions in version 1.6.1. If you have never used the CloudFormation editor, we have a screencast that gives a quick introduction to the editor.

Defining Conditions

To get started with conditions, you first need to define them.

For example, a template might define two conditions: one that checks whether the deployment is a production deployment, and another that checks whether a new security group should be created.

Using Conditions to Control Resource Creation

For any resource defined in a template, you can set the Condition property. If the condition evaluates to true, the resource is created as part of the CloudFormation stack launched from the template; if it evaluates to false, the resource is skipped.

For instance, a security group resource guarded by the CreateSecurityGroup condition is created only if that condition evaluates to true, which occurs when no security group is passed in through the ExistingSecurityGroup parameter.

Using Conditions to Control Resource Properties

You can also use conditions to determine what value to set for a resource property.

Since the security group is either created by the template or supplied through the ExistingSecurityGroup parameter, the SecurityGroups property of the EC2 instance needs its value set conditionally, depending on where the security group came from. A condition can also be used to control the size of the EC2 instance, depending on whether the deployment is a production deployment.

For more information about using conditions with CloudFormation, check out the AWS CloudFormation User Guide.

Creating Amazon DynamoDB Tables with PowerShell

by Steve Roberts

Version 2.0 of the AWS Tools for Windows PowerShell contains new cmdlets that allow you to manage tables in Amazon DynamoDB. The cmdlets all share the same noun prefix, DDB, and can be discovered using Get-Command:

PS C:\> Get-Command -Module AWSPowerShell -Noun DDB*

CommandType     Name                                               ModuleName
-----------     ----                                               ----------
Cmdlet          Add-DDBIndexSchema                                 AWSPowerShell
Cmdlet          Add-DDBKeySchema                                   AWSPowerShell
Cmdlet          Get-DDBTable                                       AWSPowerShell
Cmdlet          Get-DDBTables                                      AWSPowerShell
Cmdlet          New-DDBTable                                       AWSPowerShell
Cmdlet          New-DDBTableSchema                                 AWSPowerShell
Cmdlet          Remove-DDBTable                                    AWSPowerShell
Cmdlet          Update-DDBTable                                    AWSPowerShell

This post looks at the New-DDBTable cmdlet and the schema builder cmdlets — New-DDBTableSchema, Add-DDBKeySchema, and Add-DDBIndexSchema — that you can use in a pipeline to make table definition and creation simple and fluent.

Defining Schema

The schema builder cmdlets allow you to define the schema for your table and can be used in a PowerShell pipeline to incrementally refine and extend the schema you require. The schema object is then passed to New-DDBTable (either in the pipeline or as the value for the -Schema parameter) to create the table you need. Behind the scenes, these cmdlets and New-DDBTable infer and wire up the correct settings for your table with respect to hash keys (on the table itself or in the indexes) without you needing to manually add this information.

Let’s take a look at the syntax for the schema builder cmdlets (parameters inside [] are optional; for parameters that accept a range of values, the allowable values are shown in {} separated by |):

# takes no parameters, returns a new Amazon.PowerShell.Cmdlets.DDB.Model.TableSchema object
New-DDBTableSchema

# The schema definition object may be piped to the cmdlet or passed as the value for -Schema
Add-DDBKeySchema -KeyName "keyname" 
                 -KeyDataType { "N" | "S" | "B" }
                 [ -KeyType { "hash" | "range" } ]
                 -Schema Amazon.PowerShell.Cmdlets.DDB.Model.TableSchema

# The schema definition object may be piped to the cmdlet or passed as the value for -Schema
Add-DDBIndexSchema -IndexName "indexName"
                   -RangeKeyName "keyName"
                   -RangeKeyDataType { "N" | "S" | "B" }
                   [ -ProjectionType { "keys_only" | "include" | "all" } ]
                   [ -NonKeyAttribute @( "attrib1", "attrib2", ... ) ]
                   -Schema Amazon.PowerShell.Cmdlets.DDB.Model.TableSchema 

Not all of the parameters for each cmdlet are required as the cmdlets accept certain defaults. For example, the default key type for Add-DDBKeySchema is "hash". For Add-DDBIndexSchema, -ProjectionType is optional (and -NonKeyAttribute is needed only if -ProjectionType is set to "include"). If you’re familiar with the Amazon DynamoDB API, you’ll probably recognize the type codes used with -KeyDataType and -RangeKeyDataType. You can find the API reference for the CreateTable operation here.

Using the Create a Table example shown on the CreateTable API reference page, here’s how we can easily define the schema using these cmdlets in a pipeline:

PS C:\> New-DDBTableSchema `
            | Add-DDBKeySchema -KeyName "ForumName" -KeyDataType "S" `
            | Add-DDBKeySchema -KeyName "Subject" -KeyType "range" -KeyDataType "S" `
            | Add-DDBIndexSchema -IndexName "LastPostIndex" `
                                 -RangeKeyName "LastPostDateTime" `
                                 -RangeKeyDataType "S" `
                                 -ProjectionType "keys_only"

AttributeSchema                  KeySchema                        LocalSecondaryIndexSchema        GlobalSecondaryIndexSchema
---------------                  ---------                        -------------------------        --------------------------
{ForumName, Subject, LastPost... {ForumName, Subject}             {LastPostIndex}                  {}

PS C:\>

As you can see from the output, the cmdlets took the empty schema object created by New-DDBTableSchema and extended it with the data that New-DDBTable will need. One thing to note is that, apart from New-DDBTableSchema, the cmdlets can be run in any order, any number of times. This gives you complete freedom to experiment at the console without needing to define all the keys up front and then define the index schema and so on. You can also clone the schema object and stash away a basic template that you can then further refine for multiple different tables (the Clone() method on the schema object makes a deep copy of the data it contains).

Creating the Table

Once the schema is defined, it can be passed to New-DDBTable to request that the table be created. The schema can be passed into New-DDBTable using a pipeline or by passing the schema object to the -Schema parameter. Here is the syntax for New-DDBTable:

# The schema definition object may be piped to the cmdlet or passed as the value for -Schema
New-DDBTable -TableName "tableName"
             -Schema Amazon.PowerShell.Cmdlets.DDB.Model.TableSchema 
             -ReadCapacity  value
             -WriteCapacity value

As you can see, it’s pretty simple. To use the previous example schema definition—but this time actually create the table—we can extend our pipeline like this:

PS C:\> New-DDBTableSchema `
            | Add-DDBKeySchema -KeyName "ForumName" -KeyDataType "S" `
            | Add-DDBKeySchema -KeyName "Subject" -KeyType "range" -KeyDataType "S" `
            | Add-DDBIndexSchema -IndexName "LastPostIndex" `
                                 -RangeKeyName "LastPostDateTime" `
                                 -RangeKeyDataType "S" `
                                 -ProjectionType "keys_only" `
            | New-DDBTable "Threads" -ReadCapacity 10 -WriteCapacity 5

AttributeDefinitions   : {ForumName, LastPostDateTime, Subject}
TableName              : Threads
KeySchema              : {ForumName, Subject}
TableStatus            : CREATING
CreationDateTime       : 11/29/2013 5:47:31 PM
ProvisionedThroughput  : Amazon.DynamoDBv2.Model.ProvisionedThroughputDescription
TableSizeBytes         : 0
ItemCount              : 0
LocalSecondaryIndexes  : {LastPostIndex}
GlobalSecondaryIndexes : {}

PS C:\>

By default, Add-DDBIndexSchema constructs local secondary indexes. To have the cmdlet construct a global secondary index schema entry instead, you simply add the -Global switch plus the -ReadCapacity and -WriteCapacity values needed to provision the index. You can also optionally specify -HashKeyName and -HashKeyDataType instead of, or in addition to, the range key parameters:

    ...
    | Add-DDBIndexSchema -Global `
                         -IndexName "myGlobalIndex" `
                         -HashKeyName "hashKeyName" `
                         -HashKeyDataType "N" `
                         -RangeKeyName "rangeKeyName" `
                         -RangeKeyDataType "S" `
                         -ProjectionType "keys_only" `
                         -ReadCapacity 5 `
                         -WriteCapacity 5 `
                         ...

Let us know in the comments what you think about the fluent-style cmdlet piping, or how well these DynamoDB cmdlets fit your scripting needs.

Using IAM Users (Access Key Management for .NET Applications – Part 2)

by Milind Gokarn

In the previous post about access key management, we covered the different methods to provide AWS access keys to your .NET applications. We also talked about a few best practices, one of which is to use IAM users to access AWS instead of the root access keys of your AWS account. In this post, we’ll see how to create IAM users and set up different options for them, using the AWS SDK for .NET.

The root access keys associated with your AWS account should be safely guarded, as they have full privileges over AWS resources belonging to your account and access to your billing information. Therefore, instead of using the root access keys in applications or providing them to your team/organization, you should create IAM users for individuals or applications. IAM users can make API calls, use the AWS Management Console, and have their access limited by IAM policies. Let’s see the steps involved to start using IAM users.

Create an IAM user

For this example, we are going to use the following policy, which gives access to a specific bucket. You’ll need to replace BUCKET_NAME with the name of the bucket you want to use.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets"],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket","s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::BUCKET_NAME"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject","s3:GetObject","s3:DeleteObject"],
      "Resource": "arn:aws:s3:::BUCKET_NAME/*"
    }
  ]
}

In cases where you are creating a policy on the fly or you want a strongly typed mechanism to create policies, you can use the Policy class found in the Amazon.Auth.AccessControlPolicy namespace to construct a policy. For more details, check Creating Access Policies in Code.
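
For example, here is a rough sketch of building an abbreviated version of the policy above in code. The exact members available on the Policy and Statement classes vary a little between SDK versions, so treat this as a sketch rather than a drop-in replacement for the JSON document.

// Build (part of) the policy in code rather than as a raw JSON string.
// Requires: using Amazon.Auth.AccessControlPolicy;
var statement = new Statement(Statement.StatementEffect.Allow);
statement.Actions.Add(new ActionIdentifier("s3:ListBucket"));
statement.Resources.Add(new Resource("arn:aws:s3:::BUCKET_NAME"));

var policy = new Policy();
policy.Statements.Add(statement);

// Serialize the policy to JSON; the resulting string can be used as the
// PolicyDocument value in the PutUserPolicy call below.
string s3AccessPolicy = policy.ToJson();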

var iamClient = new AmazonIdentityManagementServiceClient(ACCESS_KEY, SECRET_KEY, RegionEndpoint.USWest2);

// Create an IAM user
var userName = "Alice";
iamClient.CreateUser(new CreateUserRequest
{
  UserName = userName,
  Path = "/developers/"
});

// Add a policy to the user. Here, allowS3BucketAccess holds a policy name
// of your choosing, and s3AccessPolicy holds the JSON policy document
// shown above.
iamClient.PutUserPolicy(new PutUserPolicyRequest
{
  UserName = userName,
  PolicyName = allowS3BucketAccess,
  PolicyDocument = s3AccessPolicy
});

The Path parameter in the CreateUser call is optional and can be used to assign a path to the user. In this example, the Amazon Resource Name (ARN) for the user will be arn:aws:iam::account-number-without-hyphens:user/developers/Alice. The path is part of the user’s ARN and is a simple but powerful mechanism for organizing users and for creating policies that apply to a subset of your users.

Use IAM groups

Instead of assigning permissions to an IAM user, we can create an IAM group with the relevant permissions and then add the user to the group. The group’s permissions are then applicable to all users belonging to it. With this approach, we don’t have to manage permissions for each user.

// Create an IAM group
var groupName = "DevGroup";
iamClient.CreateGroup(new CreateGroupRequest
{
  GroupName = groupName
});

// Add a policy to the group
iamClient.PutGroupPolicy(new PutGroupPolicyRequest
{
  GroupName = groupName,
  PolicyName = allowS3BucketAccess,
  PolicyDocument = s3AccessPolicy
});

// Add the user to the group
iamClient.AddUserToGroup(new AddUserToGroupRequest
{
  UserName = userName,
  GroupName = groupName
});

The preceding code creates an IAM group, assigns a policy, and then adds a user to the group. If you are wondering how the permissions are evaluated when a group has multiple policies or a user belongs to multiple groups, IAM Policy Evaluation Logic explains this in detail.

Generate access key for an IAM user

To access AWS using the API or command line interface (CLI), the IAM user needs an access key that consists of the access key ID and secret access key.

// Create an access key for the IAM user
AccessKey accessKey = iamClient.CreateAccessKey(new CreateAccessKeyRequest
{
  UserName = userName
}).AccessKey;

The CreateAccessKey method returns an instance of the AccessKey class that contains the access key ID (AccessKey.AccessKeyId) and secret access key (AccessKey.SecretAccessKey). You will need to save the secret access key or securely distribute it to the user, since you will not be able to retrieve it again. If you lose it, you can always create a new access key and delete the old one (using the DeleteAccessKey method).
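
If you do need to replace a lost key, you can create a new key and delete the old one in a couple of calls. In this sketch, oldAccessKeyId is just a placeholder for the ID of the key being removed.

// Create a replacement access key for the IAM user...
var newKey = iamClient.CreateAccessKey(new CreateAccessKeyRequest
{
  UserName = userName
}).AccessKey;

// ...and delete the old key, identified by its access key ID.
iamClient.DeleteAccessKey(new DeleteAccessKeyRequest
{
  UserName = userName,
  AccessKeyId = oldAccessKeyId
});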

Enable access to the AWS Management Console

IAM users can access the AWS Management Console to administer the resources to which they have permissions. To enable access to the AWS Management Console, you need to create a login profile for the user and then provide them with the URL of your account’s sign-in page.

// Allow the IAM user to access AWS Console
iamClient.CreateLoginProfile(new CreateLoginProfileRequest
{
  UserName = userName,
  Password = "" // Put the user's console password here.
});

In this post we saw how to use IAM users for accessing AWS instead of the root access keys of your AWS account. In the next post in this series, we’ll talk about rotating credentials.

Configuring DynamoDB Tables for Development and Production

by Norm Johanson

The Object Persistence Model API in the SDK uses annotated classes to tell the SDK which table to store objects in. For example, the DynamoDBTable attribute on the Users class below tells the SDK to store instances of the Users class in the "Users" table.

[DynamoDBTable("Users")]
public class Users
{
    [DynamoDBHashKey]
    public string Id { get; set; }

    public string FirstName { get; set; }

    public string LastName { get; set; }
	
    ...
}

A common scenario is to have a different set of tables for production and development. To handle this scenario, the SDK supports setting a prefix in the application’s app.config file with the AWS.DynamoDBContext.TableNamePrefix app setting. This app.config file indicates that all the tables used by the Object Persistence Model should have the "Dev_" prefix.

<appSettings>
  ...
  <add key="AWSRegion" value="us-west-2" />
  <add key="AWS.DynamoDBContext.TableNamePrefix" value="Dev_"/>
  ...
</appSettings>

The prefix can also be modified at run time by setting either the global property AWSConfigs.DynamoDBContextTableNamePrefix or the TableNamePrefix property for the DynamoDBContextConfig used to store the objects.
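
For example, a minimal sketch of both run-time options might look like this (the "Dev_" value is just an illustration):

// Option 1: set the prefix globally for every DynamoDBContext created afterwards.
AWSConfigs.DynamoDBContextTableNamePrefix = "Dev_";

// Option 2: set the prefix only on the config object used for a particular context.
var contextConfig = new DynamoDBContextConfig
{
    TableNamePrefix = "Dev_"
};
var context = new DynamoDBContext(new AmazonDynamoDBClient(), contextConfig);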