AWS Developer Blog

See You at ZendCon 2013

by Jeremy Lindblom | in PHP

Are you attending ZendCon this year? You are? Great! The AWS SDK for PHP team will be there too, and we hope to see you there.

We will have a booth in the expo area, so make sure to come and see us. We will have goodies to hand out, and will be ready to answer questions or help you get started with AWS, the AWS SDK for PHP, and AWS Elastic Beanstalk.

Also, make sure to check out the UnCon schedule, because we will be doing presentations about the AWS SDK for PHP and Guzzle.

See you there!

Release: AWS SDK for Java 1.6.0

by Jason Fulghum | in Java

We released version 1.6.0 of the AWS SDK for Java last Friday. This version has some exciting features!

  • A new type of POJO attribute named S3Link for the DynamoDBMapper class. This new attribute allows you to easily work with binary data in Amazon S3 and store links to that data in Amazon DynamoDB (see the sketch after this list).
  • The Amazon CloudSearch client now supports setting the text processor for TextOptions.
  • The Amazon CloudFront client now allows you to display custom error pages for origin errors and control how long error responses are cached.
  • Many new enums for common string values in the Amazon EC2 client. You now have these string values at your fingertips in the SDK and in the SDK documentation, instead of having to go to the Amazon EC2 API Reference to search for them.
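
Here's a minimal sketch of what working with an S3Link attribute might look like. The Customer class, bucket, key, and file names are illustrative, and dynamoClient and credentialsProvider are assumed to be an existing DynamoDB client and credentials provider:

@DynamoDBTable(tableName = "Customers")
public class Customer {
    private String id;
    private S3Link avatar;

    @DynamoDBHashKey
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    public S3Link getAvatar() { return avatar; }
    public void setAvatar(S3Link avatar) { this.avatar = avatar; }
}

// Store the binary data in Amazon S3 and save the link in DynamoDB
DynamoDBMapper mapper = new DynamoDBMapper(dynamoClient, credentialsProvider);
Customer customer = new Customer();
customer.setId("customer-1");
customer.setAvatar(mapper.createS3Link("my-bucket", "avatars/customer-1.png"));
customer.getAvatar().uploadFrom(new File("avatar.png"));
mapper.save(customer);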

Download the latest release of the AWS SDK for Java now!

Handling Credentials with AWS Tools for Windows PowerShell

by Steve Roberts | in .NET

The cmdlets in the AWS Tools for Windows PowerShell support three ways of expressing credential information. Some approaches are more secure than others. Let’s take a look.

Using Credential Parameters

All cmdlets in the toolset accept -AccessKey, -SecretKey, and -SessionToken parameters (-SessionToken is used when the access key and secret key are part of time-limited temporary credentials obtained from the AWS Security Token Service). The intent of these parameters is to allow you to specify credentials for AWS Identity and Access Management (IAM) user accounts that you have created, optionally with restricted access to services and/or service operations. Using these parameters to pass your root account credentials is the least secure (and therefore least desirable) way of passing credential information to cmdlets, and we strongly recommend that you investigate and use IAM user accounts instead.

IAM user accounts can be created using the AWS Management Console or the Visual Studio toolkit. In Visual Studio, open the AWS Explorer window and expand the AWS Identity and Access Management node. Right-click the Users node and select Create User…. In the resulting dialog box, give the user account a name and select OK. The new user account is added to the tree in AWS Explorer; double-click it to open a window in the IDE where you can configure what the account has access to.

The first step in configuring the IAM user is to obtain AWS access and secret keys. Select the Access Keys tab, and then click the Create button. A dialog box appears with an option to also save the generated keys locally (and securely) in the toolkit – be sure to check this option so that you can subsequently view the secret key. Click OK and the window updates to show the generated keys, which you can now copy and paste for use with PowerShell.

The second step in configuring the IAM user is to add a policy that grants access to AWS. By default, the new policy gives access to all AWS services, service operations, and resources, but you can restrict this using the editor in Visual Studio (or the AWS Management Console). To create the policy in Visual Studio, select the Policies tab and click Add Policy. Give the policy a name in the dialog box that appears and click OK. You can then see the policy and edit it to suit; when you’ve finished, click Save on the window’s toolbar.

Now that you have obtained the keys and set up a policy governing access, you can use the IAM user account with PowerShell using the previously mentioned -AccessKey and -SecretKey parameters:
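
# A representative example – any cmdlet accepts these parameters;
# the keys shown are placeholders for the IAM user keys generated above
PS C:\> Get-EC2Instance -AccessKey 123MYIAMUSERACCESSKEY -SecretKey 456MYIAMUSERSECRETKEY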


If you want to be even more secure, we recommend that you further configure the new user account with a policy that restricts access to just the service(s), service operations, and AWS resources that you want to use with PowerShell. How to do that is beyond the scope of this blog, but see IAM Users and Groups for more information on how to set up and configure IAM user accounts.

To use session tokens, first get the token with Get-STSSessionToken and then pass it with the temporary credentials on subsequent commands:

# This example shows how to get and use temporary session credentials
PS C:\> $tempcreds = Get-STSSessionToken -AccessKey 123MYIAMUSERACCESSKEY -SecretKey 456MYIAMUSERSECRETKEY -DurationSeconds 900
PS C:\> Get-EC2Instance -AccessKey $tempcreds.AccessKeyId -SecretKey $tempcreds.SecretAccessKey -SessionToken $tempcreds.SessionToken
... call other cmdlets until session token expires...

The disadvantage to using the credential parameters is that you need to repeat them for every cmdlet in your script or at the shell prompt. A secure key value is a lot to type accurately even once, so you might therefore be tempted to place this information into variables at the head of your script and simply reference those; however, we’d very much like you to think twice before doing this! It’s easy to forget the values are there and share the script and…well, the credentials have then leaked. If these are your root credentials then this is, in a classic piece of understatement, a very bad idea! With IAM user accounts you can at least rotate the credentials, but it’s best not to get into this situation at all. So given that using raw credentials is inconvenient and less secure, what better ways exist?

Using the Initialize-AWSDefaults Cmdlet

After installation of the toolset, you might have noticed a new Start menu entry called Windows PowerShell for AWS. This launches a PowerShell shell with the AWSPowerShell module loaded (useful for machines that have PowerShell version 2 installed, where modules do not auto-import). This shell then runs a cmdlet named Initialize-AWSDefaults, which performs a number of checks:

  1. Have you set up a default set of credentials (which have the fixed name AWS PS Default) on the machine/EC2 instance? If so, the cmdlet reads the credentials securely into the current shell. They are then automatically available to future cmdlets that you run in that shell without needing to specify any credential data on a per-cmdlet basis (unless you want to of course).
  2. If the cmdlet is running on an EC2 instance and the default set of credentials does not exist, can we obtain credentials from the role that the instance was launched with by inspecting instance metadata? If so, the cmdlet retrieves the credential data and stores it locally on the instance (again with the name AWS PS Default) before loading the credentials into the shell ready for use.
  3. If credential data cannot be satisfied from the local encrypted store or role information in the instance metadata, the cmdlet prompts you to supply the credentials. This is where you would get to type an access key and secret key – which could be those of an IAM user account.

This example shows the shell after using the Windows PowerShell for AWS shortcut on the Start menu for the first time on an EC2 instance that was launched using a role:

Initialize-AWSDefaults: Credentials for this shell were set using instance profile

Specify region
Please choose one of the following regions to use or specify one by system name
[] us-east-1  [] us-west-1  [] us-west-2  [] eu-west-1  [] ap-northeast-1
[] ap-southeast-1  [] ap-southeast-2  [] sa-east-1  [] us-gov-west-1  [?] Help
(default is ""):

Note the text following the cmdlet name – this confirms that credential data was successfully obtained, securely, from the role that the EC2 instance was launched with. As this is the first run, the cmdlet then asks you to select a default region (it won’t ask for this on subsequent runs). The credential data you supply via the role, or enter manually, is stored and will be used in future shells that run Initialize-AWSDefaults, unless you override them.

The Initialize-AWSDefaults cmdlet is therefore very useful in a couple of situations. Its main job is in setting up credentials in EC2 instances launched using a role without ever needing the user to explicitly enter access and secret keys. It can also be used on your own machine, either via the Start menu shortcut or by running it when you start a new shell.

Note though that Initialize-AWSDefaults works best if you have only one AWS account. As a developer, I tend to use multiple accounts, so I prefer the third and final method, the Set-AWSCredentials cmdlet that gives me ultimate control.

Using the Set-AWSCredentials Cmdlet

As I mentioned above, Set-AWSCredentials is my preferred go-to cmdlet for both loading and saving credential data on my machine as it has the most flexibility when I need to manage multiple sets of accounts, including IAM user accounts I have created that have restricted access to services, service operations, and AWS resources.

Credential data is stored in a per-user encrypted file and is shared between PowerShell cmdlets and the AWS Toolkit for Visual Studio. If you have already registered AWS accounts in AWS Explorer inside Visual Studio, then these credentials are available right away in PowerShell. Any accounts that you register through PowerShell will also show up in Visual Studio (including the AWS PS Default account you may have set up with Initialize-AWSDefaults).

Usage of the Set-AWSCredentials cmdlet falls into two areas: storing credential data, and loading it for use. To store credentials, you use the -StoreAs parameter to assign a name to the credentials, along with the credential information. The cmdlet then saves the data into the local encrypted credential file:

PS C:\> Set-AWSCredentials -AccessKey 123MYACCESSKEY -SecretKey 456SECRETKEY -StoreAs myAWScredentials

Having saved the credentials you can discard the current shell and start a new one. To load the credentials into the new shell, you run the same cmdlet, but this time pass the name you specified as the -StoredCredentials parameter:

PS C:\> Set-AWSCredentials -StoredCredentials myAWScredentials

Once the credentials are loaded, the cmdlets you run in that shell do not need to have credential data supplied – it will be retrieved from the current shell instance automatically. If you need to change credentials temporarily, all cmdlets accept a -StoredCredentials parameter that looks up the credentials for the name specified and uses them for that particular cmdlet’s invocation:

PS C:\> Set-AWSCredentials -StoredCredentials myAWScredentials

# These two examples yield the same data
PS C:\> Get-EC2Instance -StoredCredentials myAWScredentials
PS C:\> Get-EC2Instance

# This invocation returns different data, as alternate credentials are specified
PS C:\> Get-EC2Instance -StoredCredentials myOtherAWScredentials

By the way, the -StoredCredentials parameter can also be used with Get-STSSessionToken (shown earlier) to avoid having to expose your keys when obtaining temporary session credentials.
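
For example, this sketch (reusing the stored profile name from above) obtains temporary session credentials without ever typing the keys:

PS C:\> $tempcreds = Get-STSSessionToken -StoredCredentials myAWScredentials -DurationSeconds 900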

Loading Credentials from a PowerShell Profile

Remembering to run Set-AWSCredentials (or Initialize-AWSDefaults) in each shell or PowerShell host that you launch can be tiresome, so I make use of my user profile to do this for me and to also set a default region for my shells. Your user profile is simply a script file named Microsoft.PowerShell_profile.ps1 that exists in a folder named WindowsPowerShell in your user documents location. See How to Create a Windows PowerShell Profile for more details on the preferred way to create this file.

Once the file is created, load it into a text editor and add the call to Set-AWSCredentials (and Set-DefaultAWSRegion if you like) to initialize all shells you load, however they are launched. For example, my profile contains these lines. The first loads my personal AWS credentials stored with the name ‘steve’.

Set-AWSCredentials -StoredCredentials steve
Set-DefaultAWSRegion us-west-2

Note that if you are using PowerShell version 2, you will need to import the AWSPowerShell module before running those cmdlets. Under PowerShell version 3, the module auto-imports whenever any of the cmdlets it contains is run.
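
For example, at the top of the profile (this assumes the installer added the module to your PSModulePath):

# Needed on PowerShell version 2 only; version 3 auto-imports the module
Import-Module AWSPowerShell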

As I routinely switch credentials and regions for AWS SDK testing, I also have a custom prompt function in my profile that shows me the current user and region for the shell – you may find this useful too:

function prompt
{
    $prompt = "PS "
    if ($StoredAWSCredentials -ne $null)
    {
        $prompt += "$StoredAWSCredentials"
    }
    if ($StoredAWSRegion -ne $null)
    {
        $prompt += "@"
        $prompt += "$StoredAWSRegion"
    }
    $prompt += " "
    $prompt += $pwd.ProviderPath
    $prompt += "> "

    # Output the prompt string so PowerShell displays it
    $prompt
}


This function (which is called automatically by PowerShell) displays a custom shell prompt:

PS steve@us-west-2 C:\Dev> 

In addition, as I change region/credentials, it updates automatically. Cool!


This post has shown you a number of ways in which credential data can be supplied to AWS cmdlets. Hopefully, you can now see how to pass credential data without compromising your root AWS keys: by making use of AWS Identity and Access Management (IAM) user accounts and the encrypted credentials file shared with the Visual Studio toolkit, or by using roles with EC2 instances so that credentials are passed without ever appearing in plain view.

Amazon S3 TransferManager – Batched File Uploads

by Jason Fulghum | in Java

In addition to all the cool features in TransferManager around asynchronous upload and download management, there are some other great features around batched uploads and downloads of multiple files.

The uploadDirectory and uploadFileList methods in TransferManager make it easy to upload a complete directory, or a list of specific files to Amazon S3, as one background, asynchronous task.
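
For example, here's a minimal sketch of a directory upload; the bucket name and paths are placeholders:

TransferManager tm = new TransferManager(myCredentials);

// Recursively upload the directory contents as one background, asynchronous task
MultipleFileUpload upload = tm.uploadDirectory(
        "my-bucket", "backups/photos", new File("C:/photos"), true);

// Optionally block until the entire batch completes (throws InterruptedException)
upload.waitForCompletion();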

In some cases though, you might want more control over how that data is uploaded, particularly around additional metadata you want to provide for the data you’re uploading. A second form of uploadFileList allows you to pass in an implementation of an ObjectMetadataProvider interface that will let you do just that. For each of the files being uploaded, this ObjectMetadataProvider will receive a callback via the provideObjectMetadata method, allowing it to fill in any additional metadata you’d like to store alongside your object data in Amazon S3.

The following code demonstrates how easy it is to use the ObjectMetadataProvider interface to pass along additional metadata to your uploaded files.

TransferManager tm = new TransferManager(myCredentials);

ObjectMetadataProvider metadataProvider = new ObjectMetadataProvider() {
    public void provideObjectMetadata(File file, ObjectMetadata metadata) {
        // If this file is a JPEG, then parse some additional info
        // from the EXIF metadata to store in the object metadata
        // (isJPEG and parseExifImageDate are illustrative helpers)
        if (isJPEG(file)) {
            metadata.addUserMetadata("image-date", parseExifImageDate(file));
        }
    }
};

MultipleFileUpload upload = tm.uploadFileList(
        myBucket, myKeyPrefix, rootDirectory, fileList, metadataProvider);

Using Non-.NET Languages for Windows Store Apps

by Norm Johanson | in .NET

In Version 2 of our AWS SDK for .NET, we added support for Windows Store apps by creating a .NET Windows Store app class library. This approach works great if you are writing your Windows Store app in a .NET language like C# or VB. It means most code written for the AWS SDK for .NET 3.5 and 4.5 will also work for Windows Store apps (with the biggest difference being that all service operations must instead be called asynchronously). But what if you’re using C++ or JavaScript instead of .NET and want to access AWS in a Windows Store app? This is still possible by creating a Windows Runtime component that wraps the AWS calls you want to make.

What Is a Windows Runtime Component?

A Windows Runtime component is like a class library except that it can be called by any language supported by the Windows Runtime. An important distinction from class libraries is that all parameters and return types must be compatible Windows Runtime types. Windows Runtime components can be written in any supported Windows Runtime language. In our case, it needs to be done in C# or Visual Basic because we want to access the AWS SDK, which is a .NET class library.

Creating the Wrapper

In my example, I want my C++ Windows Store app to be able to put and get objects from Amazon S3. To get started, I’m going to create a C# Windows Runtime Component project called AWSWrapper. Then I’ll add a class called S3Wrapper with the following code.

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

using Windows.Foundation;
using Windows.Storage;
using Windows.Storage.Streams;

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

namespace AWSWrapper
{
    public sealed class S3Wrapper
    {
        // For demo purposes, I'll embed the credentials. To get
        // credentials securely to your application, developers
        // should look into strategies like the token vending
        // machine or IAM Web Identity Federation.
        const string ACCESSKEY = "";
        const string SECRETKEY = "";

        IAmazonS3 s3Client;

        private IAmazonS3 S3Client
        {
            get
            {
                if (this.s3Client == null)
                {
                    this.s3Client = new AmazonS3Client(
                        ACCESSKEY, SECRETKEY, RegionEndpoint.USWest2);
                }

                return this.s3Client;
            }
        }

        public IAsyncAction PutObjectAsync(string bucketName,
              string key, IStorageFile storageFile)
        {
            PutObjectRequest request = new PutObjectRequest()
            {
                BucketName = bucketName,
                Key = key,
                StorageFile = storageFile
            };

            return this.S3Client.PutObjectAsync(request).AsAsyncAction();
        }

        public IAsyncOperation<IInputStream> GetObjectAsync(string bucketName, string key)
        {
            GetObjectRequest request = new GetObjectRequest()
            {
                BucketName = bucketName,
                Key = key
            };

            return Task.Run<IInputStream>(() =>
            {
                var task = this.S3Client.GetObjectAsync(request);
                var response = task.Result;
                return response.ResponseStream.AsInputStream();
            }).AsAsyncOperation();
        }
    }
}

This class wraps both the put and get operations to Amazon S3. Since this is a Windows Runtime component, I need to make sure the return types are valid Windows Runtime types. This is why, instead of returning tasks, the methods convert them to an IAsyncAction for put and an IAsyncOperation of IInputStream for get: neither Task nor Stream is a valid Windows Runtime type. Note that error handling is being ignored to keep the sample simple.

Consuming the Wrapper

Now, in my C++ Windows Store app, I can add my newly created Windows Runtime component as a reference. Here is a sample showing the wrapper being used from a file picked using the FileOpenPicker.

void CppS3Browser::MainPage::Button_Click(
     Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e)
{
    FileOpenPicker^ openPicker = ref new FileOpenPicker();
    openPicker->ViewMode = PickerViewMode::Thumbnail;
    openPicker->SuggestedStartLocation = PickerLocationId::PicturesLibrary;
    // The picker requires at least one file type filter
    openPicker->FileTypeFilter->Append(".jpg");

    // create_task comes from the concurrency namespace (ppltasks.h)
    create_task(openPicker->PickSingleFileAsync())
        .then([this](StorageFile^ file)
    {
        if (file)
        {
            AWSWrapper::S3Wrapper^ s3wrapper =
                ref new AWSWrapper::S3Wrapper();
            s3wrapper->PutObjectAsync(
                this->bucketName, file->Name, file);
        }
    });
}

You can extend this pattern for any operations in the AWS SDK for .NET. Just make sure to convert the parameters and return types to Windows Runtime types.

Saving Money with Amazon EC2 Reserved Instances

by Jason Fulghum | in Java

Are you or your company using Amazon EC2 instances? Are you using Amazon EC2 Reserved Instances yet? Reserved Instances are often one of the easiest and most effective ways to save money on your Amazon EC2 bill. They can allow you to significantly reduce the price you pay for Amazon EC2 instance hours over a one or three year period compared to on-demand rates. Many customers using Amazon EC2 Reserved Instances are saving lots of money, and we’d love to see more customers using them!

Reserved Instances got even more attractive recently, with an API update that allows you to modify the details of your Reserved Instances. Until this release, the Availability Zone you specified at the time of purchase remained fixed for the duration of the term of the Reserved Instances. This release gives you the ability to migrate your Reserved Instances to a different Availability Zone within the same region, making Reserved Instances even more flexible.

You can modify your Reserved Instances through the AWS Management Console, or you can use one of the AWS SDKs to programmatically modify them:

AmazonEC2Client ec2 = new AmazonEC2Client(...);

// Target configuration: migrate the Reserved Instances to another
// Availability Zone in the same region (values are illustrative)
ReservedInstancesConfiguration configuration = new ReservedInstancesConfiguration()
        .withAvailabilityZone("us-east-1b")
        .withInstanceCount(1);

ModifyReservedInstancesRequest request = new ModifyReservedInstancesRequest()
        .withReservedInstancesIds("my-reserved-instances-id")
        .withTargetConfigurations(configuration);

ec2.modifyReservedInstances(request);

.NET Application Deployment Roundup

by Jim Flanagan | in .NET

In this post, we talk about several customer questions that have come up in the AWS Forums.

Deploying MVC4 applications on AWS Elastic Beanstalk

Deploying MVC4 applications to an AWS Elastic Beanstalk environment is just as easy as deploying other types of .NET web applications, and does not require pre-installing any software on instances in order to work. All you need to do is to make sure that the necessary project references have Copy Local set to True. For example, setting Copy Local to False for System.Web.Mvc and System.Web.Razor will cause your application to work on your development system, but fail when it gets deployed to the instance.
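
In the project file, Copy Local shows up as the <Private> metadata on the reference. A sketch of what the fixed reference might look like in the .csproj (version and hint-path details omitted):

<Reference Include="System.Web.Mvc">
  <Private>True</Private>
</Reference>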

New MVC4 projects created from the "ASP.NET MVC 4 Web Application" template in Visual Studio should have the references set up correctly for deploying to Elastic Beanstalk.

Deploying applications to the root

By default, Visual Studio configures web applications to be deployed to a virtual directory. For applications in virtual directories, the .NET Elastic Beanstalk container deploys the application into the virtual directory, then creates a URL rewrite rule to direct requests from the root of the site to that virtual directory. For some applications, this rewrite rule can cause issues, or you might have other reasons to deploy your application at the root level.

In Visual Studio 2010, you can change the deployment location with the following steps:

  • Open the Properties pane for the web application.
  • Navigate to the Package/Publish Web tab.
  • Edit the value of IIS Web site/application name to use on the destination server to be Default Web Site.

If your version of Visual Studio 2012 has Update 2 or later, this option will not be present in the Properties pane, but you can add a <DeployIisAppPath> element to the appropriate <PropertyGroup> element in your .csproj file. If you want it to apply to all configurations and platforms and deploy at the root, include it in a <PropertyGroup> element that has no Condition attribute:

  <DeployIisAppPath>Default Web Site/</DeployIisAppPath>

Or, for the Release|AnyCPU build target:

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
  <DeployIisAppPath>Default Web Site/</DeployIisAppPath>
</PropertyGroup>

Avoid maintaining state on instances

The nature of Elastic Beanstalk environments is that instances can come and go over time. For that reason, you should design your application so that instances added to your environment by scaling events or instance replacement have everything they need to correctly serve your application, and so that no important state or data is lost when instances are removed.

Maintaining application state across servers is a great use of AWS services, such as Amazon S3 for file content, or Amazon DynamoDB for key/value storage.

Using Visual Studio 2013 Preview

With the latest release of the AWS Toolkit for Visual Studio, the installer now supports installation of the toolkit and project templates into the preview editions of Visual Studio 2013. As with prior releases of the toolkit, Professional and higher editions of the IDE get the AWS Explorer tool window, the AWS CloudFormation template editor, and a set of project templates for a variety of AWS services. For users with Express editions of the IDE, only the project templates are installed, due to licensing restrictions for these editions.

Snippet: Creating Amazon DynamoDB Tables

by Jason Fulghum | in Java

In many applications, it’s important to make sure your code handles creating any resources that it needs in order to run. Otherwise, you’ll have to manually create those resources whenever you want to run your application with a new AWS account.

For example, if you have an application that needs to store data in an Amazon DynamoDB table, then you’ll probably want your application to check if that table exists at startup, create it if necessary, and only let your application logic start running once that table is ready to use.
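
For instance, a startup check might be sketched like this, where createCustomersTable is a hypothetical helper wrapping the creation code shown below:

try {
    // An inexpensive way to test for existence: describe the table
    dynamo.describeTable(new DescribeTableRequest().withTableName("customers"));
} catch (ResourceNotFoundException e) {
    // Table doesn't exist yet, so create it
    createCustomersTable();
}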

The following code demonstrates how to create a simple Amazon DynamoDB table using the SDK:

AmazonDynamoDB dynamo = new AmazonDynamoDBClient(myCredentials);

CreateTableRequest request = new CreateTableRequest().withTableName("customers");

request.withKeySchema(new KeySchemaElement()
        .withAttributeName("id")
        .withKeyType(KeyType.HASH));

request.withAttributeDefinitions(new AttributeDefinition()
        .withAttributeName("id")
        .withAttributeType(ScalarAttributeType.S));

request.setProvisionedThroughput(new ProvisionedThroughput()
        .withReadCapacityUnits(5L)
        .withWriteCapacityUnits(5L));

dynamo.createTable(request);

This code creates a simple table called customers, specifies low values for provisioned throughput, and declares the hash key (think: primary key) to be an attribute named id with type String.

Once you’ve created your table, you’ll want to make sure it’s ready for use before you let your application logic start executing; otherwise, you’ll get errors from Amazon DynamoDB when you try to use it.

The following function, taken from some of our SDK test code for DynamoDB, demonstrates how to poll the status of a table and detect when the table is ready for use.

protected static void waitForTableToBecomeAvailable(String tableName) throws InterruptedException {
    System.out.println("Waiting for " + tableName + " to become ACTIVE...");

    long startTime = System.currentTimeMillis();
    long endTime = startTime + (10 * 60 * 1000);
    while ( System.currentTimeMillis() < endTime ) {
        Thread.sleep(1000 * 20);
        try {
            DescribeTableRequest request = new DescribeTableRequest().withTableName(tableName);
            TableDescription table = dynamo.describeTable(request).getTable();
            if ( table == null ) continue;

            String tableStatus = table.getTableStatus();
            System.out.println("  - current state: " + tableStatus);
            if ( tableStatus.equals(TableStatus.ACTIVE.toString()) ) return;
        } catch ( AmazonServiceException ase ) {
            if (!ase.getErrorCode().equalsIgnoreCase("ResourceNotFoundException"))
                throw ase;
        }
    }

    throw new RuntimeException("Table " + tableName + " never went active");
}

You can use this same logic to wait for your new table to become active. Then it’s ready for your data!

How are you managing your AWS resources? Do your applications automatically create all the AWS resources they need? Are you using AWS CloudFormation to handle resource creation?

VPC and AWS Elastic Beanstalk

by Norm Johanson | in .NET

We recently released a new version of our AWS Elastic Beanstalk .NET container which, like the other Beanstalk containers, is based on AWS CloudFormation and lets you take advantage of all the latest features that have been added to Beanstalk. One of the exciting new features is the ability to deploy into Amazon VPC. The AWS Toolkit for Visual Studio has also been updated to support creation of VPCs and launching instances into VPCs. The Beanstalk deployment wizard was also updated so you can create Beanstalk environments in a VPC.


The first step to deploying into a VPC is to create the VPC. To do this in the toolkit, open the VPC view via AWS Explorer and click Create VPC.

To get this VPC ready for Beanstalk, check the With Public Subnet check box, which specifies where the load balancer will be created. You also need to check the With Private Subnet check box, which specifies where the EC2 instances will be launched. You can leave the rest of the fields at their defaults. Once everything is created, deploy your application by right-clicking on your project and selecting Publish to AWS… just as you would for non-VPC deployments. The AWS Options page has changed to contain an option to deploy into a VPC.

Check the Launch into VPC check box and click Next. The subsequent page allows you to configure the VPC settings for the deployment.

Another helpful feature we’ve implemented in the toolkit’s VPC create dialog box is that it puts name tags on the subnets and security groups. The launch wizard looks for these tags when you select a VPC, and if it finds them, it auto-selects the appropriate values. In this case, all you need to do is select your new VPC and then continue with your deployment.

That’s all there is to deploying into VPC with Beanstalk. For more information, see Using AWS Elastic Beanstalk with Amazon VPC.

Working with Regions in the AWS SDK for .NET

by Norm Johanson | in .NET

In earlier versions of the AWS SDK for .NET, using services in regions other than us-east-1 required you to:

  • create a config object for the client
  • set the ServiceURL property on the config
  • construct a client using the config object

Here’s an example of what that looks like for Amazon DynamoDB:

var config = new AmazonDynamoDBConfig
{
    // Region-specific endpoint, e.g. "https://dynamodb.us-west-2.amazonaws.com"
    ServiceURL = ""
};
var dynamoDBClient = new AmazonDynamoDBClient(accessKey, secretKey, config);

In a later version of the SDK, this was simplified so that you can set the region in the client constructor using a region constant, removing the burden of knowing the URL for the region. For example, the preceding code can now be replaced with this:

var dynamoDBClient = new AmazonDynamoDBClient(accessKey, secretKey, RegionEndpoint.USWest2);

The previous way of using config objects still works with the SDK. The region constant also works with the config object. For example, if you still need to use the config object to set up a proxy, you can take advantage of the new regions support like this:

var config = new AmazonDynamoDBConfig()
{
    RegionEndpoint = RegionEndpoint.USWest2,
    ProxyHost = "webproxy",
    ProxyPort = 80
};
var dynamoDBClient = new AmazonDynamoDBClient(accessKey, secretKey, config);

In the recently released version 2.0 of the SDK, the region can be set in the app.config file along with the access and secret key. For example, here is an app.config file that instructs the application to use region us-west-2:

    <add key="AWSAccessKey" value="YOUR_ACCESS_KEY"/>
    <add key="AWSSecretKey" value="YOUR_SECRET_KEY"/>
    <add key="AWSRegion" value="us-west-2"/>

And by running this code, which uses the empty constructor of the Amazon EC2 client, we can see it print out all the Availability Zones in us-west-2.

var ec2Client = new AmazonEC2Client();

var response = ec2Client.DescribeAvailabilityZones();

foreach (var zone in response.AvailabilityZones)
{
    Console.WriteLine(zone.ZoneName);
}

For a list of region constants, you can check the API documentation.