
New AWS Elastic Beanstalk Deployment Wizard

Today, we released version 1.8 of the AWS Toolkit for Visual Studio. For this release, we revamped the wizard for deploying your ASP.NET applications. Our goal was to make deployment easier and to take advantage of some of the new features AWS Elastic Beanstalk has added.

What happened to the AWS CloudFormation deployments?

Unlike the new deployment wizard, the previous wizard had the option to deploy using the Load Balanced and Single Instance Templates, which would deploy using AWS CloudFormation templates. This deployment option was added before we had Elastic Beanstalk, which has since added features that make these deployment templates obsolete. If you still need access to this deployment mechanism, on the first page of the new wizard you can choose to relaunch the legacy wizard.

So what’s new?

Rolling deployments

If you are deploying your applications to a load balanced environment, you can configure how new versions of your applications are deployed to the instances in your environment. You can also configure how changes to your environment are made. For example, if you have 4 instances in your environment and you want to change the instance type, you can configure the environment to change 2 instances at a time keeping your application up and running while the change is being made.
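The wizard configures this for you, but as a sketch of what is involved, the same rolling-update behavior can also be expressed through Elastic Beanstalk option settings in an .ebextensions config file (the file name and values below are illustrative):

```yaml
# .ebextensions/rolling-updates.config (illustrative values)
option_settings:
  aws:autoscaling:updatepolicy:rollingupdate:
    RollingUpdateEnabled: true
    RollingUpdateType: Health
    MaxBatchSize: 2            # change two instances at a time
    MinInstancesInService: 2   # keep two instances serving traffic
```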

AWS Identity and Access Management roles

AWS Identity and Access Management roles are an important way of getting AWS credentials to your deployed application. With the new wizard, you can select an existing role or choose to create a new role based on a number of role templates. It is easy in the new wizard to set up a new role that gives access to Amazon S3 and DynamoDB. After deployment, you can refine the role from the AWS Explorer.

Application options

The application options page has several new features. You can now choose which build configuration to use. You can also set any application settings you want to be pushed into the web.config appSettings section when the application is being deployed.
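For example, a key/value pair entered on this page ends up in the deployed application's web.config in the familiar appSettings form (the key names below are hypothetical):

```xml
<appSettings>
  <add key="AppEnvironment" value="Production" />
  <add key="LogLevel" value="Warning" />
</appSettings>
```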

In the previous deployment wizard, applications were deployed to a sub-folder in IIS based on the project name with the suffix "_deploy". It appeared as if the application was deployed at the root because URL rewrite rules were added to the root. This worked in most cases, but there were some edge cases where it caused problems. With the new wizard, applications can be configured to deploy to any folder, and by default they are deployed to the root folder of IIS. If an application is deployed anywhere other than the root, the URL rewrite rules are added to the root.


We hope that you like the new wizard and that it makes things easier for you. For a full walkthrough of the new wizard, check out the user guide for the AWS Toolkit for Visual Studio. We would love to hear your feedback on the new wizard. We would also love to hear about any interesting deployment issues you have and where you would like help from AWS .NET tooling.


Amazon EC2 ImageUtilities and Get-EC2ImageByName Updates

Versions 2.3.14 of the AWS SDK for .NET and AWS Tools for Windows PowerShell, released today (December 18, 2014), contain updates to the utilities and the Get-EC2ImageByName cmdlet used to query common Microsoft Windows 64-bit Amazon Machine Images using version-independent names. Briefly, we renamed some of the keys used to identify Microsoft Windows Server 2008 images to address confusion over what versions are actually returned, and we added the ability to retrieve some additional images. In the Get-EC2ImageByName cmdlet, we made a small behavior change to help when running the cmdlet in a pipeline when more than one image version exists (as happens when Amazon periodically revises the images) – the cmdlet by default now outputs only the very latest image. The previous behavior that output all available versions (latest + prior) can be enabled using a new switch.

Renamed and New Image Keys

This change affects both the SDK Amazon.EC2.Util.ImageUtilities class and the Get-EC2ImageByName cmdlet. For some time now, the keys prefixed with Windows_2008_* have returned Microsoft Windows Server 2008 R2 images, not the original Windows Server 2008 editions, leading to some confusion. We addressed this by adding a new set of R2-specific keys—these all have the prefix Windows_2008R2_*. To maintain backward compatibility, the SDK retains the old keys, but we have tagged them with the [Obsolete] attribute and a message detailing the corresponding R2-based key you should use. Additionally, these old keys will still return Windows Server 2008 R2 images.

Note that the Get-EC2ImageByName cmdlet will not display the obsolete keys (when run with no parameters), but you can still supply them for the -Name parameter so your existing scripts will continue to function.

We also added three new keys enabling you to retrieve the original (RTM) 64-bit Windows Server 2008 editions (base image, plus SQL Server 2008 Standard and SQL Server 2008 Express images). The keys for these images are WINDOWS_2008RTM_BASE, WINDOWS_2008RTM_SQL_SERVER_EXPRESS_2008, and WINDOWS_2008RTM_SQL_SERVER_STANDARD_2008.

The following keys are displayed when you run the cmdlet with no parameters:

PS C:\> Get-EC2ImageByName

The following keys are deprecated but still recognized:


Get-EC2ImageByName Enhancements

Amazon periodically revises the set of Microsoft Windows images that it makes available to customers, and for a period the Get-EC2ImageByName cmdlet could return the latest image for a key plus one or more prior versions. For example, at the time of writing this post, running the command Get-EC2ImageByName -Name windows_2012r2_base emitted two images as output. If run in a pipeline that then proceeds to invoke the New-EC2Instance cmdlet, for example, instances of multiple images could then be started, which is perhaps not what was expected. To obtain and start only the latest image, you had to either index the returned collection, which could contain one or several objects, or insert a call to Select-Object in your pipeline to extract the first item before calling New-EC2Instance (the first item in the output from Get-EC2ImageByName is always the latest version).

With the new release, when a single key is supplied to the -Name parameter, the cmdlet emits only the single latest machine image that is available. This makes using the cmdlet in a ‘get | start’ pattern much safer and more convenient:

# guaranteed to only return one image to launch
PS C:\> Get-EC2ImageByName -Name windows_2012r2_base | New-EC2Instance -InstanceType t1.micro ...

If you do need to get all versions of a given image, this is supported using the new -AllAvailable switch. The following command outputs all available versions of the Windows Server 2012 R2 image, which may be one or several images:

PS C:\> Get-EC2ImageByName -Name windows_2012r2_base -AllAvailable

The cmdlet can also emit all available versions when either more than one value is supplied for the -Name parameter or a custom key value is supplied, as it is assumed in these scenarios you are expecting a collection to work with:

# use of multiple keys (custom or built-in) yields all versions
PS C:\> Get-EC2ImageByName -Name windows_2012r2_base,windows_2008r2_base

# use of a custom key, single or multiple, yields all versions
PS C:\> Get-EC2ImageByName -Name "Windows_Server-2003*"

These updates to the Get-EC2ImageByName cmdlet were driven in part by feedback from our users. If you have an idea or suggestion for new features that would make your scripting life easier, please get in touch with us! One way is via the AWS PowerShell Scripting forum.

Preview release of AWS Resource APIs for .NET

by Milind Gokarn

We have released a preview of the AWS Resource APIs for .NET, a brand new high-level API. The latest version of the preview ships with resource APIs for the following AWS services; support for other services will be added in the near future.

  • Amazon Glacier
  • Amazon Simple Notification Service (SNS)
  • Amazon Simple Queue Service (SQS)
  • AWS CloudFormation
  • AWS Identity and Access Management (IAM)

The goal of this preview is to provide early access to the new API and to get feedback from you that we can incorporate in the GA release. The source code for the preview is available as a new branch of the aws-sdk-net GitHub repository, and the binaries are also available for download.

The resource APIs allow you to work more directly with the resources that are managed by AWS services. A resource is a logical object exposed by an AWS service’s API. For example, User, Group, and Role are some of the resources exposed by the IAM service. Here are the benefits of using the resource APIs:

Easy to understand

The low-level APIs are request-response style APIs that correspond to the actions exposed by an AWS service. The resource APIs are higher-level object-oriented APIs that represent the logical relationships between the resources within a service. When you work with a resource object, only the operations and relationships applicable to it are visible, in contrast to the low-level API, where all the operations for a service are visible on the service client object. This makes it easier to understand and explore the features of a service.

Write less code

The resource APIs reduce the amount of code you need to write to achieve the same results.

  • Operations on resource objects infer identifier parameters from their current context. This allows you to write code in which you don’t have to specify identifiers repeatedly.

    // No need to specify ResyncMFADeviceRequest.UserName
    // as it is inferred from the user object
    user.Resync(new ResyncMFADeviceRequest
    {
        SerialNumber = "",
        AuthenticationCode1 = "",
        AuthenticationCode2 = ""
    });
  • Simplified method overloads eliminate creating request objects for commonly used and mandatory request parameters. You can also use the overload that accepts a request object for more complex usages.

    group.AddUser(user.Name); // Use this simplified overload instead of
    group.AddUser(new AddUserToGroupRequest { UserName = user.Name });
  • Auto pagination for operations that support paging – The resource APIs will make multiple service calls for APIs that support paging as you enumerate through the results. You do not have to write additional code to make multiple service calls and to capture/resend pagination tokens.

Using the API

The entry point for using the resource APIs is the service object. It represents an AWS service itself, in this case IAM. Using the service object, you can access top-level resources and operations on a service. Once you get the resource objects, further operations can be performed on them. The following code demonstrates various API usages with IAM and resource objects.

using System;
using Amazon.IdentityManagement.Model;
using Amazon.IdentityManagement.Resources; // Namespace for IAM resource APIs

// AWS credentials or profile is picked up from app.config
var iam = new IdentityManagementService();

// Get a group by its name
var adminGroup = iam.GetGroupByName("admins");

// List all users in the admins group.
// GetUsers() calls an API that supports paging and
// automatically makes multiple service calls if
// more results are available as we enumerate
// through the results.
foreach (var user in adminGroup.GetUsers())
{
    Console.WriteLine(user.Name);
}

// Create a new user and add the user to the admins group
var userA = iam.CreateUser("Alice");
adminGroup.AddUser(userA.Name);

// Create a new access key for a user
var userB = iam.GetUserByName("Bob");
var accessKey = userB.CreateAccessKey();

// Deactivate all MFA devices for a user
var userC = iam.GetUserByName("Charlie");
foreach (var mfaDevice in userC.GetMfaDevices())
{
    mfaDevice.Deactivate();
}

// Update an existing policy for a user
var policy = userC.GetUserPolicyByName("S3AccessPolicy");

The AWS SDK for .NET Developer Guide has code examples and more information about the resource APIs. We would really like to hear your feedback and suggestions about this new API. You can provide your feedback through GitHub and the AWS forums.

DynamoDB JSON Support

by Pavel Safronov

The latest Amazon DynamoDB update added support for JSON data, making it easy to store JSON documents in a DynamoDB table while preserving their complex and possibly nested shape. Now, the AWS SDK for .NET has added native JSON support, so you can use raw JSON data when working with DynamoDB. This is especially helpful if your application needs to consume or produce JSON—for instance, if your application is talking to a client-side component that uses JSON to send and receive data—as you no longer need to manually parse or compose this data.

Using the new features

The new JSON functionality is exposed in the AWS SDK for .NET through the Document class:

  • ToJson – This method converts a given Document to its JSON representation
  • FromJson – This method creates a Document for a given JSON string

Here’s a quick example of this feature in action.

// Create a Document from JSON data
var jsonDoc = Document.FromJson(json);

// Use the Document as an attribute
var doc = new Document();
doc["Id"] = 123;
doc["NestedDocument"] = jsonDoc;

// Put the item
table.PutItem(doc);

// Load the item
doc = table.GetItem(123);

// Convert the Document to JSON
var jsonText = doc.ToJson();
var jsonPrettyText = doc["NestedDocument"].AsDocument().ToJsonPretty();

This example shows how a JSON-based Document can be used as an attribute, but you can also use the converted Document directly, provided that it has the necessary key attributes. Also note that we have introduced the methods ToJson and ToJsonPretty. The difference between the two is that the latter produces indented JSON that is easier to read.

JSON types

DynamoDB data types are a superset of JSON data types. This means that all JSON data can be represented as DynamoDB data, while the opposite isn’t true.

So if you perform the conversion JSON -> Document -> JSON, the starting and final JSON will be identical (except for formatting). However, since not all DynamoDB data types can be converted to JSON, the conversion Document -> JSON -> Document may result in a different representation of your data.

The differences between DynamoDB and JSON are:

  • JSON has no sets, just arrays, so DynamoDB sets (SS, NS, and BS types) will be converted to JSON arrays.
  • JSON has no binary representation, so DynamoDB binary scalars and sets (B and BS types) will be converted to base64-encoded JSON strings or lists of strings.
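As a self-contained illustration (plain .NET, no SDK types), this is the same base64 round trip the Document class performs for binary attributes:

```csharp
using System;
using System.Text;

class Base64RoundTrip
{
    static void Main()
    {
        // A binary DynamoDB attribute value (B type)
        byte[] binary = Encoding.UTF8.GetBytes("raw bytes");

        // ToJson: binary data becomes a base64-encoded JSON string
        string base64 = Convert.ToBase64String(binary);

        // DecodeBase64Attributes: the string is decoded back to binary
        byte[] restored = Convert.FromBase64String(base64);

        Console.WriteLine(Encoding.UTF8.GetString(restored)); // raw bytes
    }
}
```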

If you do end up with a Document instance that has base64-encoded data, we have provided a method on the Document object to decode this data and replace it with the correct binary representation. Here is a simple example:

doc.DecodeBase64Attributes("Data", "DataSet");

After executing the above code, the "Data" attribute will contain binary data, while the "DataSet" attribute will contain a list of binary data.

I hope you find this feature a useful addition to the AWS SDK for .NET. Please give it a try and let us know what you think on GitHub or here in the comments!

AWS re:Invent 2014 Recap

by Norm Johanson

Another AWS re:Invent has come and gone. Steve and I were lucky enough to be there and meet many developers using AWS in such interesting ways. We also gave a talk showing off some of the new features the team added to the SDK this year. The talk has been made available online.

In our talk, we showed demos for:


We hope to hear from more .NET developers at next year’s re:Invent. Until then, feel free to contact us either in our forums or on GitHub.

AWS Toolkit support for Visual Studio Community 2013

We often hear from our customers that they would like our AWS Toolkit for Visual Studio to work with the Express editions of Visual Studio. We understand how desirable this is, but due to restrictions built into the Express editions of Visual Studio, it hasn’t been possible…until now.

With the recent announcement of the new Visual Studio Community 2013 edition, it is now possible to get the full functionality of our AWS Toolkit for Visual Studio inside a free edition of Visual Studio. This includes the AWS Explorer for managing resources, Web Application deployment from the Solution Explorer, and the AWS CloudFormation editor for authoring and deploying your CloudFormation templates.

So if you haven’t tried the AWS Toolkit for Visual Studio, now is a great time to check it out.

Stripe Windows Ephemeral Disks at Launch

by Steve Roberts

Today we have another guest post by AWS Solutions Architect David Veith.

Amazon EC2 currently offers more than 20 current-generation instance types for your Windows workloads. The root volume for a Windows instance is always provided by the Amazon EBS service. Additional EBS volumes can easily be added as desired.

Depending on the EC2 instance type selected, there will also be from zero to 24 instance-store volumes automatically available to the instance. Instance-store volumes provide temporary block-level storage to the instance. The data in an instance store persists only during the lifetime of its associated instance. Because of the temporary nature of instance-store volumes, they are often referred to as "ephemeral": not lasting, enduring, or permanent.

Many workloads can benefit from this type of temporary block-level storage, and it’s important to mention that ephemeral volumes also come with no extra cost.

This blog post describes how the ephemeral volumes of any Windows EC2 instance can be detected at launch, and then automatically striped into one large OS volume. This is a common use case for many AWS customers.

Detecting Ephemeral vs. EBS Volumes

In order to build a striped volume consisting only of instance-store volumes, we first need a mechanism to distinguish the volume types (EBS or ephemeral) associated with the instance. The EC2 metadata service provides a mechanism to determine this.

The following PowerShell statement retrieves all the block device mappings of the running Windows EC2 instance it is executed on:

$alldrives = (Invoke-WebRequest -Uri http://169.254.169.254/latest/meta-data/block-device-mapping/).Content

Here’s an example of the data returned from the metadata service for an M3.Xlarge instance (launched from the AWS Management Console) with one root EBS volume and two instance-store volumes:

Using the same instance type (M3.Xlarge), but this time launching the instance from an AWS CloudFormation script (or AWS command-line tools), the same code produces this output:

Why the difference?

When an instance is launched from the AWS Management Console, the console performs some additional steps to have the instance metadata reflect only the ephemeral drives that are actually present. In order for our code to handle both cases, we can query WMI to see if the OS actually sees the volume.

$disknumber = (Get-WmiObject -Class Win32_DiskDrive | where-object {$_.SCSITargetId -eq $scsiid}).Index
if ($disknumber -ne $null) {
    # the OS can actually see this volume
}

How EC2 Windows Maps Drives

Hopefully, you noticed in the code directly above that we queried WMI with the SCSI ID of each volume. Where did we get the SCSI ID?

To answer that question, we need to explain how EC2 Windows instances map block devices’ SCSI IDs in the operating system. The following table shows this:

For example, we can see that ‘xvdcb’ will always map to SCSI ID ’79’. We could build a lookup table that contains all the potential mount points and their corresponding SCSI IDs, but a more elegant approach is to use a simple algorithm based on ASCII arithmetic.

We know that all device mappings for Windows instances begin with the ‘xvd’ prefix. If we remove this prefix, we can use the remaining portion (‘cb’ in this example) to derive the correct SCSI ID.

'c' = ASCII 99
'b' = ASCII 98

(('c' - 96) * 26) + ('b' - 97) = (3 * 26) + 1 = 79
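For readers who prefer to see the arithmetic as code, here is a self-contained C# sketch of the same mapping (illustrative only; the deployment script implements it in PowerShell):

```csharp
using System;

class ScsiMapper
{
    // Maps an EC2 Windows device name (e.g. "xvdcb") to its SCSI target ID.
    public static int GetScsiId(string device)
    {
        string suffix = device.Substring(3); // strip the "xvd" prefix

        if (suffix.Length == 1)
            return suffix[0] - 'a';          // "xvda" -> 0, "xvdb" -> 1, ...

        // Two-letter suffixes continue after "xvdz" (25): "xvdaa" -> 26
        return (suffix[0] - 'a' + 1) * 26 + (suffix[1] - 'a');
    }

    static void Main()
    {
        Console.WriteLine(GetScsiId("xvda"));   // 0
        Console.WriteLine(GetScsiId("xvdaa"));  // 26
        Console.WriteLine(GetScsiId("xvdcb"));  // 79
    }
}
```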

In the final PowerShell script below, this pseudo-code is implemented as the GetSCSI function.

The Complete PowerShell Script

# Detect the ephemeral drives and stripe them

# Be sure to choose a drive letter that will not already be assigned
$DriveLetterToAssign = "K:"

# Given a device (e.g. xvda), strip off
# "xvd" and convert the remainder to the
# appropriate SCSI ID
function GetSCSI {
    Param($device)

    # remove xvd prefix
    $deviceSuffix = $device.substring(3)

    if ($deviceSuffix.length -eq 1) {
        $scsi = (([int][char] $deviceSuffix[0]) - 97)
    }
    else {
        $scsi = ((([int][char] $deviceSuffix[0]) - 96) * 26) + (([int][char] $deviceSuffix[1]) - 97)
    }
    return $scsi
}

# Main

# From metadata, read the device list and grab only
# the ephemeral volumes
$metadataBase = "http://169.254.169.254/latest/meta-data/block-device-mapping/"
$alldrives = (Invoke-WebRequest -Uri $metadataBase).Content
$ephemerals = $alldrives.Split(10) | where-object {$_ -like 'ephemeral*'}

# Build a list of SCSI IDs for the ephemeral volumes
$scsiarray = @()
foreach ($ephemeral in $ephemerals) {
    $device = (Invoke-WebRequest -Uri ($metadataBase + $ephemeral)).Content
    $scsi = GetSCSI $device
    $scsiarray = $scsiarray + $scsi
}

# Convert the SCSI IDs to OS drive numbers and set them up with diskpart
$diskarray = @()
foreach ($scsiid in $scsiarray) {
    $disknumber = (Get-WmiObject -Class Win32_DiskDrive | where-object {$_.SCSITargetId -eq $scsiid}).Index
    if ($disknumber -ne $null) {
        $diskarray += $disknumber
        $dpcommand = "select disk $disknumber
                      select partition 1
                      delete partition
                      convert dynamic"
        $dpcommand | diskpart
    }
}

# Build the stripe from the diskarray
$diskseries = $diskarray -join ','

if ($diskarray.count -gt 0) {
    if ($diskarray.count -eq 1) {
        $type = "simple"
    }
    else {
        $type = "stripe"
    }
    $dpcommand = "create volume $type disk=$diskseries
                  format fs=ntfs quick
                  assign letter=$DriveLetterToAssign"
    $dpcommand | diskpart
}

Extra Credit

In Windows Server 2012 R2, Microsoft introduced new PowerShell storage-management cmdlets that replace the need to use the diskpart utility in many cases. If you know your servers will be running only Windows Server 2012 R2 or later, you might want to use these newer Microsoft cmdlets. You can find more information on these cmdlets in Microsoft's documentation.

Utilizing Amazon ElastiCache Auto Discovery in .NET Through Enyim

by Mason Schneider

Today, we released a new library, Amazon ElastiCache Cluster Configuration, that allows .NET applications to easily leverage ElastiCache features. This post explains why a programmer would want to use this library and offers a quick and easy way to try it yourself.

What is Memcached?

Memcached provides a way to easily avoid some of the latency that comes with using a database, and it can also help applications at scale by removing some of the strain that can be placed on databases. This is accomplished by having Memcached servers act as an intermediary, in-memory cache that can return results much faster than a normal database. In a typical program flow, the application requests a key from a group of cache servers and, if a value is retrieved, no database query is needed.
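That cache-aside flow can be sketched in a few lines of C#; the dictionary here is just a stand-in for a Memcached cluster, and the names are hypothetical:

```csharp
using System;
using System.Collections.Generic;

class CacheAsideDemo
{
    static readonly Dictionary<string, string> Cache = new Dictionary<string, string>();
    public static int DatabaseQueries = 0;

    // Slow path: stands in for a real database query
    static string QueryDatabase(string key)
    {
        DatabaseQueries++;
        return "value-for-" + key;
    }

    public static string Get(string key)
    {
        string value;
        if (Cache.TryGetValue(key, out value))
            return value;              // cache hit: no database query needed

        value = QueryDatabase(key);    // cache miss: fall back to the database
        Cache[key] = value;            // populate the cache for next time
        return value;
    }

    static void Main()
    {
        Get("user:42");                          // miss: queries the "database"
        Get("user:42");                          // hit: served from cache
        Console.WriteLine(DatabaseQueries);      // 1
    }
}
```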

Why ElastiCache?

ElastiCache provides a way to dynamically add and remove Memcached servers inside of a cache cluster. All servers are completely managed, which means that when servers are added they are automatically configured for Memcached. They are also added to the cluster and, when they are deleted, the cluster is updated. This means you spend less time configuring Memcached servers and more time working on things that matter. Being able to add and remove these nodes dynamically also means your application can easily scale whenever necessary through the AWS Management Console or through one of the many AWS APIs.

Using ElastiCache in .NET

Many .NET developers leverage ElastiCache through the Enyim framework. Enyim provides a client that manages server connections, as well as which server your cache data should be stored on. To be aware of the Memcached servers, the Enyim client is configured on instantiation with the IPs and ports of all the servers. When the server information changes, the client must be disposed and re-instantiated with the new server information. This re-instantiation tends to be tedious, and it can also cause issues if you update your configuration incorrectly when nodes change. One ElastiCache feature that helps avoid this issue is Auto Discovery. This feature allows clients to find out the cluster configuration through an endpoint URL. The endpoint URL is an alias that points to one of the servers in the cluster. Each server holds information about the configuration, such as how many times the configuration has changed, and the hostname, IP address, and port of each server in the cluster. For more information on how Auto Discovery works, see the ElastiCache documentation.

ElastiCache Cluster Configuration

Although Auto Discovery is useful, it is not accessible through Enyim’s client because it is not found in standard Memcached clusters. Amazon has already released clients for Java and PHP that extend Memcached clients to take advantage of Auto Discovery. With today’s release of the ElastiCache Cluster Configuration library, any .NET application targeting .NET Framework 3.5 or higher can now take full advantage of this great feature. All that is required is to add the clusterclient section to your App.config, or to instantiate the configuration object through parameters. After that, pass it as the configuration for the Enyim MemcachedClient, and you now have a MemcachedClient functioning through ElastiCache Auto Discovery.

Here is a sample App.config that shows how to specify the Amazon ElastiCache cluster.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="clusterclient" type="Amazon.ElastiCacheCluster.ClusterConfigSettings, Amazon.ElastiCacheCluster" />
  </configSections>

  <clusterclient>
    <endpoint hostname="" port="11211" />
    <poller intervalDelay="60000" />
  </clusterclient>
</configuration>



Try it now!

For this walkthrough, we assume you have an AWS account with the ability to control Amazon EC2 and ElastiCache services, internet access, and the AWS Tools for Windows PowerShell installed and configured with an API key and secret key.

Create AWS Services using PowerShell

Start a PowerShell session as Administrator. Edit the variable values below if you want different resource names, then copy and paste the rest.

$ec2SecurityGroupName = "myElastiCacheGroup"
$cacheGroupName = "myElastiCacheSecGroup"
$keyPairName = "myConfigKeyPair"
$cacheClusterName = "demoCluster"

To use ElastiCache, you must create an ElastiCache cluster and an EC2 instance to access it. First, we create a key pair and an EC2 security group, and then we create an EC2 instance based on those values.

$myConfigKeyPair = New-EC2KeyPair -KeyName $keyPairName
$myConfigKeyPair.KeyMaterial | Out-File -Encoding ascii C:\$keyPairName.pem

New-EC2SecurityGroup -GroupName $ec2SecurityGroupName -GroupDescription "ElastiCache Config Demo"

$cidrBlocks = @("")
$ipPermissions = New-Object Amazon.EC2.Model.IpPermission -Property @{IpProtocol = "tcp"; FromPort = 11211; ToPort = 11211; IpRanges = $cidrBlocks}
Grant-EC2SecurityGroupIngress -GroupName $ec2SecurityGroupName -IpPermissions $ipPermissions
$ipPermissions = New-Object Amazon.EC2.Model.IpPermission -Property @{IpProtocol = "tcp"; FromPort = 3389; ToPort = 3389; IpRanges = $cidrBlocks}
Grant-EC2SecurityGroupIngress -GroupName $ec2SecurityGroupName -IpPermissions $ipPermissions

$image = Get-EC2ImageByName -Names WINDOWS_2012R2_BASE
if($image -is [system.array]) {$image = $image[0]}

$reservation = New-EC2Instance -ImageId $image.ImageId -KeyName $keyPairName -SecurityGroups $ec2SecurityGroupName -InstanceType t1.micro

After that is complete, we create a new ElastiCache cluster with three nodes and add the EC2 security group to its policy.

New-ECCacheSecurityGroup -CacheSecurityGroupName $cacheGroupName -Description "Demo for ElastiCache Config"
$secGroup = Get-EC2SecurityGroup -GroupNames $ec2SecurityGroupName
Approve-ECCacheSecurityGroupIngress -CacheSecurityGroupName $cacheGroupName -EC2SecurityGroupName $ec2SecurityGroupName -EC2SecurityGroupOwnerId $secGroup.OwnerId

New-ECCacheCluster -CacheNodeType cache.t1.micro -CacheClusterId $cacheClusterName -CacheSecurityGroupNames $cacheGroupName -Engine memcached -EngineVersion 1.4.14 -NumCacheNodes 3 -Port 11211

Create the Application

To demonstrate how to use ElastiCache Cluster Configuration, we’ll make a quick console application. From the start page of Visual Studio 2010 or higher, click "New Project…", and in the new project dialog, create a new Visual C# Console Application named "Cluster Config Demo". After the project is created, in Solution Explorer, right-click the "References" node and, in the drop-down menu, click "Manage NuGet Packages…". Search for "ElastiCacheClusterConfig" and install it into the current project; when you install this package, Enyim is installed as well. Now that we have the project configured, let’s write the code. First, add the packages to the code by pasting this code into the top of the file "Program.cs".

using Enyim.Caching;
using Amazon.ElastiCacheCluster;

Next, copy the code below and paste it into the Main function in the "Program.cs" file. This snippet creates an ElastiCacheClusterConfig object using the hostname and port specified in the parameters, and then defaults the rest of the settings. It then creates a MemcachedClient through the Enyim framework by passing in the ElastiCacheClusterConfig as an IMemcachedClientConfiguration. The program then attempts to store a value to the cache followed by trying to retrieve a value in the cache.

Console.WriteLine("Creating config...");
ElastiCacheClusterConfig config = new ElastiCacheClusterConfig("YOUR-URL-HERE", 11211);
Console.WriteLine("Creating client...");
MemcachedClient client = new MemcachedClient(config);

if (client.Store(Enyim.Caching.Memcached.StoreMode.Set, "Demo", "Hello World"))
    Console.WriteLine("Stored to cache successfully");
else
    Console.WriteLine("Did not store to cache successfully");

Object value;
if (client.TryGet("Demo", out value))
    Console.WriteLine("Got the value: " + (value as string));
else
    // Search the database if the get fails
    Console.WriteLine("Checking database because get failed");

Be sure to replace "YOUR-URL-HERE" with the endpoint URL of your cluster. You can find this URL by running the following (re-run the variable definitions from earlier if you closed PowerShell):

(Get-ECCacheCluster -CacheCluster $cacheClusterName).ConfigurationEndpoint.Address

Now, go ahead and build your project by right-clicking the project in Solution Explorer and clicking "Build". If you run this code on your local machine, it will throw an error, because you can only connect to ElastiCache from inside an EC2 instance. That’s why we need to transfer it to the EC2 instance we created earlier.

Upload it to the EC2 instance and test it

There are many ways to access and upload your ElastiCache application to EC2, such as the Visual Studio Toolkit, opening remote PowerShell access on the instance and downloading it from a URL, or using Remote Desktop to connect to the instance. Today, we’ll use Remote Desktop for simplicity, even though there are better ways to do this in a development stack. Run the following cmdlets to open a Remote Desktop connection to the instance. If you closed PowerShell earlier, be sure to copy in the predefined variables. If login fails (which can be caused by a changed registry value), simply copy the value of $pass, paste it in as the password, and then log in.

$secGroup = Get-EC2SecurityGroup -GroupNames $ec2SecurityGroupName
$groupArray = @($secGroup.GroupId)
$filter_groupId= New-Object Amazon.EC2.Model.Filter -Property @{Name = "group-id"; Values = $groupArray}
$instances = (Get-EC2Instance -Filter $filter_groupId).Instances
$pass = Get-EC2PasswordData -InstanceId $instances.InstanceId -PemFile C:\$keyPairName.pem
$dns = $instances.PublicDnsName
cmdkey /generic:$dns /user:administrator /pass:$pass
mstsc /v:$dns

If you would rather use the console, you can find the .pem file for the password in the root of the C: drive. Now that we have our connection open, copy the executable and .dlls we built earlier and paste them onto the instance. Run the program, and you should see the following output:

Creating config...
Creating client...
Stored to cache successfully
Got the value: Hello World

Delete Demo Services

Once you’ve successfully run your application, you can delete the resources we created from AWS using the cmdlets below. Note: If you’ve closed the PowerShell window, be sure to copy in the variables from earlier.

$secGroup = Get-EC2SecurityGroup -GroupNames $ec2SecurityGroupName
$groupArray = @($secGroup.GroupId)
$filter_groupId= New-Object Amazon.EC2.Model.Filter -Property @{Name = "group-id"; Values = $groupArray}
$instances = (Get-EC2Instance -Filter $filter_groupId).Instances
Stop-EC2Instance -Instance $instances -Terminate -Force

Remove-EC2KeyPair -KeyName $keyPairName -Force

Remove-ECCacheCluster -CacheClusterId $cacheClusterName -Force

Then, when both services have finished terminating their resources, delete the security groups.

$secGroup = Get-EC2SecurityGroup -GroupNames $ec2SecurityGroupName
Remove-EC2SecurityGroup -GroupId $secGroup.GroupId -Force
Remove-ECCacheSecurityGroup -CacheSecurityGroupName $cacheGroupName -Force

And that’s it for using the new ElastiCache Cluster Configuration library for .NET. If you’d like to find out more, visit the wiki or fork the code in our GitHub repository.

DynamoDB Series – Expressions

by Norm Johanson

For the final installment of our Amazon DynamoDB series, we are going to look at the new expression support. DynamoDB uses two types of expressions. First, you can use update expressions to modify specific attributes in an item. Second, you can attach condition expressions to put, update, or delete operations to prevent the operation from succeeding if the item in DynamoDB doesn’t satisfy the expression.

Update Expressions

Update expressions are great for atomic updates to attributes in an item in DynamoDB. For example, let’s say we add a player item to a DynamoDB table that records the number of games won or lost and the last time a game was played.

PutItemRequest putRequest = new PutItemRequest
{
    TableName = tableName,
    Item = new Dictionary<string, AttributeValue>
    {
        {"id", new AttributeValue{S = "1"}},
        {"name", new AttributeValue{S = "Norm"}},
        {"wins", new AttributeValue{N = "0"}},
        {"loses", new AttributeValue{N = "0"}}
    }
};


When a player wins the game, we need to update the wins attribute and set the time the last game was played and who the opponent was. To do that, we could get the item and look up how many wins the player currently has and then update the wins with the current wins + 1. The tricky thing is what happens if there is an update to the item in between the get and the update. We can handle that by putting an ExpectedAttribute value on the update, which will cause the update to fail, and then we could retry the whole process.
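For illustration, a sketch of that read-then-guarded-write approach might look like the following, using the low-level client's Expected property (the ddbClient and tableName variables are assumed from the surrounding examples):

```csharp
// Read the item to learn the current number of wins.
var getResponse = ddbClient.GetItem(new GetItemRequest
{
    TableName = tableName,
    Key = new Dictionary<string, AttributeValue> { { "id", new AttributeValue { S = "1" } } }
});
int currentWins = int.Parse(getResponse.Item["wins"].N);

// Write back wins + 1, but only if no one else changed "wins" in the meantime.
ddbClient.UpdateItem(new UpdateItemRequest
{
    TableName = tableName,
    Key = new Dictionary<string, AttributeValue> { { "id", new AttributeValue { S = "1" } } },
    AttributeUpdates = new Dictionary<string, AttributeValueUpdate>
    {
        { "wins", new AttributeValueUpdate
          {
              Action = AttributeAction.PUT,
              Value = new AttributeValue { N = (currentWins + 1).ToString() }
          } }
    },
    // Guard: fail the update if "wins" no longer equals the value we read.
    Expected = new Dictionary<string, ExpectedAttributeValue>
    {
        { "wins", new ExpectedAttributeValue { Value = new AttributeValue { N = currentWins.ToString() } } }
    }
});
```

If the guard fails, a ConditionalCheckFailedException is thrown and the whole get-and-update sequence has to be retried, which is exactly the bookkeeping that update expressions let us avoid.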

Now, using expressions, we can increment the wins attribute without having to first read the value. Let’s look at the update call to see how that works.

UpdateItemRequest updateRequest = new UpdateItemRequest
{
    TableName = tableName,
    Key = new Dictionary<string, AttributeValue>
    {
        {"id", new AttributeValue{S = "1"}}
    },
    UpdateExpression = "ADD #a :increment SET #b = :date, #c = :opponent",
    ExpressionAttributeNames = new Dictionary<string, string>
    {
        {"#a", "wins"},
        {"#b", "last-played"},
        {"#c", "last-opponent"}
    },
    ExpressionAttributeValues = new Dictionary<string, AttributeValue>
    {
        {":increment", new AttributeValue{N = "1"}},
        {":date", new AttributeValue{S = DateTime.UtcNow.ToString("O")}},
        {":opponent", new AttributeValue{S = "Celeste"}}
    }
};


The TableName and Key properties are used to identify the item we want to update. The UpdateExpression property is the interesting property where we can see the expression that is run on the item. Let’s break this statement down by each token.

The ADD token is the command token; for a numeric attribute, it adds the specified value to the attribute. Next is the #a token, which is a variable: the ‘#’ means the variable will be replaced with an attribute name. :increment is another variable, holding the value to be added to the attribute #a. All tokens that start with ‘:’ are variables whose values are supplied in the update request.

SET is another command token. It means all the attributes that follow will have their values set. The #b variable will get its value from the :date variable, and #c will get its value from the :opponent variable.

It is also possible to remove an attribute using the REMOVE command token.
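For example (a minimal sketch reusing the request shape above), clearing the last-opponent attribute from the same item would look like this:

```csharp
UpdateItemRequest removeRequest = new UpdateItemRequest
{
    TableName = tableName,
    Key = new Dictionary<string, AttributeValue>
    {
        {"id", new AttributeValue{S = "1"}}
    },
    // REMOVE deletes the named attribute from the item entirely.
    UpdateExpression = "REMOVE #a",
    ExpressionAttributeNames = new Dictionary<string, string>
    {
        {"#a", "last-opponent"}
    }
};
ddbClient.UpdateItem(removeRequest);
```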

The ExpressionAttributeNames property maps the attribute variables in the expression to the actual attribute names we want to use, and the ExpressionAttributeValues property maps the value variables to the values we want to use in the expression.

Once we invoke the update, DynamoDB guarantees that all the attributes in the expression will be updated at the same time, without the worry of some other thread updating the item in the middle of the process. This also saves us from using up any of our read capacity to do the update.

Check out the DynamoDB Developer Guide for more information on how to use update expressions.

Conditional Expressions

For puts, updates, and deletes, a conditional expression can be set. If the expression evaluates to false, a ConditionalCheckFailedException is thrown. On the low-level service client, this can be done using the ConditionExpression property. Conditional expressions can also be used with the Document Model API. To see how this is done, let’s first create a game document in our game table.

DateTime lastUpdated = DateTime.Now;
Table gameTable = Table.LoadTable(ddbClient, tableName, DynamoDBEntryConversion.V2);

Document game = new Document();
game["id"] = gameId;
game["players"] = new List<string> { "Norm", "Celeste" };
game["last-updated"] = lastUpdated;
gameTable.PutItem(game);

For the game’s logic, every time the game document is updated the last-updated attribute is checked to make sure it hasn’t changed since the document was retrieved and then updated to a new date. So first let’s get the document and update the winner.

Document game = gameTable.GetItem(gameId);

game["winner"] = "Norm";
game["last-updated"] = DateTime.Now;

To declare the conditional expression I need to create an Expression object.

var expr = new Expression();
expr.ExpressionStatement = "attribute_not_exists(#timestamp) or #timestamp = :timestamp";
expr.ExpressionAttributeNames["#timestamp"] = "last-updated";
expr.ExpressionAttributeValues[":timestamp"] = lastUpdated;

This expression evaluates to true if the last-updated attribute does not exist or is equal to the last retrieved timestamp. Then, to use the expression, assign it to the UpdateItemOperationConfig and pass it to the UpdateItem operation.

UpdateItemOperationConfig updateConfig = new UpdateItemOperationConfig
{
    ConditionalExpression = expr
};

try
{
    gameTable.UpdateItem(game, updateConfig);
}
catch (ConditionalCheckFailedException e)
{
    // Retry logic
}
To handle the expression evaluating to false, we need to catch the ConditionalCheckFailedException and call our retry logic. To avoid having to catch exceptions in our code, we can use the new "Try" methods added to the SDK, which return true or false depending on whether the write was successful. So the above code could be rewritten like this:

if (!gameTable.TryUpdateItem(game, updateConfig))
{
    // Retry logic
}

This same pattern can be used for puts and deletes. For more information about using conditional expressions, check out the Amazon DynamoDB Developer Guide.
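For instance, assuming the matching "Try" method for deletes behaves like TryUpdateItem, a conditional delete of the finished game might be sketched as:

```csharp
// Reuse the same expression to guard the delete against concurrent updates.
DeleteItemOperationConfig deleteConfig = new DeleteItemOperationConfig
{
    ConditionalExpression = expr
};

if (!gameTable.TryDeleteItem(game, deleteConfig))
{
    // Retry logic
}
```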


We hope you have enjoyed our series on Amazon DynamoDB this week. Hopefully, you have learned some new tricks that you can use in your application. Let us know what you think either in the comments below or through our forums.

DynamoDB Series – Object Persistence Model

by Pavel Safronov

This week, we are running a series of five daily blog posts that will explain new DynamoDB changes and how they relate to the AWS SDK for .NET. This is the fourth blog post, and today we will be discussing the Object Persistence Model.

Object Persistence Model

The Object Persistence Model API provides a simple way to work with Plain Old CLR Objects (POCO), as the following examples illustrate.

First, let’s look at the POCO class definition. (Notice that the class is marked up with multiple Amazon DynamoDB attributes. These are included for clarity, even though they are now optional, and they will be removed in the next sample.)

[DynamoDBTable("Products")]
public class Product
{
    [DynamoDBHashKey]
    public int Id { get; set; }
    [DynamoDBRangeKey]
    public string Name { get; set; }

    [DynamoDBProperty]
    public List<string> Aliases { get; set; }
    public bool IsPublic { get; set; }
}

Next, we can create, store, load, and query DynamoDB, all while using our POCO.

// Context is a DynamoDBContext instance created earlier
var product = new Product
{
    Id = 1,
    Name = "CloudSpotter",
    Aliases = new List<string> { "Prod", "1.0" },
    IsPublic = true
};
Context.Save(product);
var retrieved = Context.Load<Product>(2);
var products = Context.Query<Product>(1, QueryOperator.BeginsWith, "Cloud");

The addition of the DynamoDB data type M (a string-key map of arbitrary data) allows the Object Persistence Model API to store complex data types as attributes of a single DynamoDB item. (We covered the new DynamoDB types earlier this week. It might be a good idea for you to review this again.) To illustrate this, let’s consider the following example where our Product class may reference another class.

Here are the new class definitions we will be working with.

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<string> Aliases { get; set; }
    public bool IsPublic { get; set; }
    public Dictionary<string, string> Map { get; set; }
    public Metadata Meta { get; set; }
}

public class Metadata
{
    public double InternalVersion { get; set; }
    public HashSet<string> Developers { get; set; }
}

Notice that we are going to use Dictionary objects, which will also be stored as M data types. (The only limitations are that the key must be of type string, and the value must be a supported primitive type or a complex structure.)

Now we can instantiate and work with our objects as we normally would.

Product product = new Product
{
    Id = 1,
    Name = "CloudSpotter",
    Aliases = new List<string> { "Prod", "1.0" },
    IsPublic = true,
    Meta = new Metadata
    {
        InternalVersion = 1.2,
        Developers = new HashSet<string> { "Alan", "Franco" }
    },
    Map = new Dictionary<string, string>
    {
        { "a", "1" },
        { "b", "2" }
    }
};
Context.Save(product);
var retrieved = Context.Load<Product>(2);
var products = Context.Query<Product>(1, QueryOperator.BeginsWith, "Cloud");

As you can see, the new DynamoDB data types really expand the range of data that you can maintain and work with. You do, however, have to be careful that the objects you create do not contain circular references, because the API will throw an exception when serializing such objects.
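As a hypothetical illustration of the pitfall, classes like these would cause a save to fail because the object graph loops back on itself (the class and property names here are invented for the example):

```csharp
public class Author
{
    public int Id { get; set; }
    public Book LatestBook { get; set; }
}

public class Book
{
    public string Title { get; set; }
    public Author Writer { get; set; }  // cycle: Author -> Book -> Author
}

// Saving an Author whose LatestBook refers back to that same Author
// sends the serializer around the Author -> Book -> Author loop,
// so the Object Persistence Model throws rather than recurse forever.
```

Flattening the reference, for example storing just the book's title or id instead of the Book object, keeps the graph acyclic and safe to save.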