AWS Regions and Windows PowerShell

by Steve Roberts

The majority of the cmdlets in the AWS Tools for Windows PowerShell require that you specify an AWS region. Specifying a region defines the service endpoint that is used for the request, in addition to scoping the resources you want to operate on. There are, however, a couple of exceptions to this rule:

  • Some services are region-less: they expose an endpoint that does not contain any region information. AWS Identity and Access Management (IAM) and Amazon Route 53 fall into this category.
  • Some services expose only a single regional endpoint, usually in the US East (Northern Virginia) region. Amazon Simple Email Service (SES) and AWS OpsWorks are examples in this category.

Cmdlets for services in these categories do not require that you specify a region and, in the case of the second category, automatically select the single regional endpoint for you. Note that although Amazon Simple Storage Service (S3) has multiple regional endpoints, its cmdlets can also operate without an explicit region, falling back to the US East (Northern Virginia) region. Whether this works depends on the location constraints of your buckets, so you might want to consider always specifying a region anyway (this also safeguards against assumptions for services that may expand to other regions in the future).
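
For example, to keep an S3 cmdlet pinned to a bucket's home region rather than relying on the fallback, specify the region explicitly (a quick sketch; the bucket name is hypothetical):

PS C:\> Get-S3Object -BucketName my-example-bucket -Region us-west-2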

This blog post describes how to specify the region for a cmdlet and how to specify a default region. A useful summary guide to endpoints and regions for services can be found at Regions and Endpoints in the Amazon Web Services General Reference.

Specifying the Region for a Cmdlet

All cmdlets that require region information to operate expose a -Region parameter. This parameter accepts a string value, which is the system name of the AWS region. For example, we can obtain a list of all running Amazon EC2 instances in the US West (Oregon) region as follows:

PS C:\> Get-EC2Instance -Region us-west-2

Note: For simplicity, the cmdlet examples shown here assume that your AWS credential information is being obtained automatically, as described in Handling Credentials with AWS Tools for Windows PowerShell.

Similarly, we can obtain the set of Amazon Machine Images (AMIs) for Microsoft Windows Server 2012, this time in the EU (Ireland) region:

PS C:\> Get-EC2ImageByName -Region eu-west-1 -Name "windows_2012_base"

Given these examples, you might write the following command to start an instance:

PS C:\> Get-EC2ImageByName -Region eu-west-1 -Name "windows_2012_base" | New-EC2Instance -InstanceType m1.small -MinCount 1 -MaxCount 1
New-EC2Instance : The image id '[ami-a63edbd1]' does not exist
At line:1 char:66
+ Get-EC2ImageByName -Region eu-west-1 -Name "windows_2012_base" | New-EC2Instance ...
+                                                                  ~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (Amazon.PowerShe...2InstanceCmdlet:NewEC2InstanceCmdlet) [New-EC2Instance], InvalidOperationException
    + FullyQualifiedErrorId : Amazon.EC2.AmazonEC2Exception,Amazon.PowerShell.Cmdlets.EC2.NewEC2InstanceCmdlet

Oops! As you can see, the -Region parameter is scoped to the individual cmdlet, so the AMI that is returned is specific to the EU (Ireland) region. The New-EC2Instance cmdlet also needs to use the EU (Ireland) region; otherwise, the AMI will not be found, so we must supply a matching -Region parameter (or, as shown later, make this region our shell default):

PS C:\> Get-EC2ImageByName -Region eu-west-1 -Name "windows_2012_base" | New-EC2Instance -InstanceType m1.small -MinCount 1 -MaxCount 1 -Region eu-west-1
ReservationId   : r-12345678
OwnerId         : ############
RequesterId     :
GroupId         : {sg-abc12345}
GroupName       : {default}
RunningInstance : {}

Specifying a Default Region

Adding an explicit -Region parameter to each cmdlet becomes awkward for anything more than one or two commands, so I set a default region in my shell. To manage this, I use the region cmdlets in the toolset:

  • Set-DefaultAWSRegion
  • Get-DefaultAWSRegion
  • Get-AWSRegion
  • Clear-DefaultAWSRegion

Set-DefaultAWSRegion accepts the (string) system name of an AWS region (similar to the -Region parameter on cmdlets) or an AWSRegion object, which can be obtained from Get-AWSRegion:

# set a default region of EU West (Ireland) for all subsequent cmdlets
PS C:\> Set-DefaultAWSRegion eu-west-1

# query the set of AWS regions (to include AWS GovCloud, add the -IncludeGovCloud switch)
PS C:\> Get-AWSRegion
Region              Name                                  IsShellDefault
------              ----                                  --------------
us-east-1           US East (Virginia)                             False
us-west-1           US West (N. California)                        False
us-west-2           US West (Oregon)                               False
eu-west-1           EU West (Ireland)                               True
ap-northeast-1      Asia Pacific (Tokyo)                           False
ap-southeast-1      Asia Pacific (Singapore)                       False
ap-southeast-2      Asia Pacific (Sydney)                          False
sa-east-1           South America (Sao Paulo)                      False

# use the region list to set another default by selection:
PS C:\> Get-AWSRegion |? { $_.Name.Contains("Tokyo") } | Set-DefaultAWSRegion

# test it!
PS C:\> Get-DefaultAWSRegion

Region              Name                                  IsShellDefault
------              ----                                  --------------
ap-northeast-1      Asia Pacific (Tokyo)                            True

Clear-DefaultAWSRegion can be used to clear a default region; after you run it, you need to start supplying the -Region parameter to service cmdlets again. In scripts that run a lot of service cmdlets, you may find it useful to call Get-DefaultAWSRegion and Set-DefaultAWSRegion at the start and end of the script, perhaps in conjunction with a region script parameter, to temporarily switch away from your regular shell default and restore the original default on exit, as sketched below.
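
Here is a minimal sketch of that save-and-restore pattern (the script parameter and its default are illustrative):

param([string]$Region = "us-west-2")

# remember the caller's current default region (may be null if none was set)
$originalRegion = Get-DefaultAWSRegion

Set-DefaultAWSRegion $Region
try
{
    # ... run service cmdlets that now default to $Region ...
    Get-EC2Instance
}
finally
{
    # restore the original shell default on exit, or clear it if there wasn't one
    if ($originalRegion) { Set-DefaultAWSRegion $originalRegion }
    else { Clear-DefaultAWSRegion }
}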

By the way, setting a default region doesn’t preclude overriding this subsequently on a per-cmdlet basis. Simply add the -Region parameter as needed for the particular cmdlet invocation.

Handling Credentials with AWS Tools for Windows PowerShell

by Steve Roberts

The cmdlets in the AWS Tools for Windows PowerShell support three ways of expressing credential information. Some approaches are more secure than others. Let's take a look.

Using Credential Parameters

All cmdlets in the toolset accept -AccessKey, -SecretKey, and -SessionToken parameters (-SessionToken is used when the access key and secret key are part of time-limited temporary credentials obtained from the AWS Security Token Service). The intent of these parameters is to let you specify credentials for AWS Identity and Access Management (IAM) user accounts that you have created, optionally with restricted access to services and/or service operations. Passing your root account credentials through these parameters is the least secure (and therefore least desirable) way of supplying credential information to cmdlets, and we strongly recommend you investigate and use IAM user accounts instead.

IAM user accounts can be created using the AWS Management Console or the Visual Studio toolkit. In Visual Studio, open the AWS Explorer window and expand the AWS Identity and Access Management node. Right-click the Users node and select Create User…. In the resulting dialog box, give the user account a name and click OK. The new user account is added to the tree in AWS Explorer; double-click it to open a window in the IDE where you can configure what the account has access to.

The first step in configuring the IAM user is to obtain AWS access and secret keys. Select the Access Keys tab, and then click the Create button. A dialog box appears with an option to also save the generated keys locally (and securely) in the toolkit. Be sure to check this option so that you can subsequently view the secret key. Click OK, and the window updates to show the generated keys, which you can copy and paste for use with PowerShell.

The second step in configuring the IAM user is to add a policy that grants access to AWS. By default, the new policy gives access to all AWS services, service operations, and resources, but using the editor in Visual Studio (or the AWS Management Console) you can restrict this. To create the policy in Visual Studio, select the Policies tab and click Add Policy. Give the policy a name in the dialog box that appears and click OK. You can then see and edit the policy to suit; when you've finished, click Save on the window's toolbar.

Now that you have obtained the keys and set up a policy governing access, you can use the IAM user account with PowerShell using the previously mentioned -AccessKey and -SecretKey parameters:

PS C:\> Get-EC2Instance -AccessKey 123MYIAMUSERACCESSKEY -SecretKey 456MYIAMUSERSECRETKEY

If you want to be even more secure, we recommend that you further configure the new user account with a policy that restricts access to just the service(s), service operations, and AWS resources that you want to use with PowerShell. How to do that is beyond the scope of this blog, but see IAM Users and Groups for more information on how to set up and configure IAM user accounts.

To use session tokens, first get the token with Get-STSSessionToken and then pass it with the temporary credentials on subsequent commands:

# This example shows how to get and use temporary session credentials
PS C:\> $tempcreds = Get-STSSessionToken -AccessKey 123MYIAMUSERACCESSKEY -SecretKey 456MYIAMUSERSECRETKEY -DurationSeconds 900
PS C:\> Get-EC2Instance -AccessKey $tempcreds.AccessKeyId -SecretKey $tempcreds.SecretAccessKey -SessionToken $tempcreds.SessionToken
... call other cmdlets until session token expires...

The disadvantage to using the credential parameters is that you need to repeat them for every cmdlet in your script or at the shell prompt. A secure key value is a lot to type accurately even once, so you might be tempted to place this information into variables at the head of your script and simply reference those; however, we'd very much like you to think twice before doing this! It's easy to forget the values are there, share the script, and…well, the credentials have then leaked. If these are your root credentials, then this is, in a classic piece of understatement, a very bad idea! With IAM user accounts you can at least rotate the credentials, but it's best not to get into this situation at all. So, given that using raw credentials is inconvenient and less secure, what better ways exist?

Using the Initialize-AWSDefaults Cmdlet

After installation of the toolset, you might have noticed a new Start menu entry called Windows PowerShell for AWS. This launches a PowerShell shell with the AWSPowerShell module loaded (useful for machines that have PowerShell version 2 installed, where modules do not auto-import). This shell then runs a cmdlet named Initialize-AWSDefaults, which performs a number of checks:

  1. Have you set up a default set of credentials (which have the fixed name AWS PS Default) on the machine/EC2 instance? If so, the cmdlet reads the credentials securely into the current shell. They are then automatically available to future cmdlets that you run in that shell without needing to specify any credential data on a per-cmdlet basis (unless you want to of course).
  2. If the cmdlet is running on an EC2 instance and the default set of credentials does not exist, can we obtain credentials from the role that the instance was launched with by inspecting instance metadata? If so, the cmdlet retrieves the credential data and stores it locally on the instance (again with the name AWS PS Default) before loading the credentials into the shell ready for use.
  3. If credential data cannot be satisfied from the local encrypted store or role information in the instance metadata, the cmdlet prompts you to supply the credentials. This is where you would get to type an access key and secret key – which could be an IAM user account.

This example shows the shell after using the Windows PowerShell for AWS shortcut on the Start menu for the first time on an EC2 instance that was launched using a role:

Initialize-AWSDefaults: Credentials for this shell were set using instance profile

Specify region
Please choose one of the following regions to use or specify one by system name
[] us-east-1  [] us-west-1  [] us-west-2  [] eu-west-1  [] ap-northeast-1
[] ap-southeast-1  [] ap-southeast-2  [] sa-east-1  [] us-gov-west-1  [?] Help
(default is ""):

Note the text following the cmdlet name, which confirms that credential data was successfully obtained, securely, from the role that the EC2 instance was launched with. Because this is the first run, the cmdlet then asks you to select a default region (it won't ask for this on subsequent runs). The credential data you supply via the role, or enter manually, is stored and will be used in future shells that run Initialize-AWSDefaults, unless you override it.

The Initialize-AWSDefaults cmdlet is therefore very useful in a couple of situations. Its main job is in setting up credentials in EC2 instances launched using a role without ever needing the user to explicitly enter access and secret keys. It can also be used on your own machine, either via the Start menu shortcut or by running it when you start a new shell.

Note though that Initialize-AWSDefaults works best if you have only one AWS account. As a developer, I tend to use multiple accounts, so I prefer the third and final method, the Set-AWSCredentials cmdlet, which gives me ultimate control.

Using the Set-AWSCredentials Cmdlet

As I mentioned above, Set-AWSCredentials is my go-to cmdlet for both loading and saving credential data on my machine, as it has the most flexibility when I need to manage multiple sets of accounts, including IAM user accounts I have created with restricted access to services, service operations, and AWS resources.

Credential data is stored in a per-user encrypted file and is shared between PowerShell cmdlets and the AWS Toolkit for Visual Studio. If you have already registered AWS accounts in AWS Explorer inside Visual Studio, then these credentials are available right away in PowerShell. Any accounts that you register through PowerShell will also show up in Visual Studio (including the AWS PS Default account you may have set up with Initialize-AWSDefaults).

Usage of the Set-AWSCredentials cmdlet falls into two areas: storing credential data, and loading it for use. To store credentials, you use the -StoreAs parameter to assign a name to the credentials, along with the credential information. The cmdlet then saves the data into the local encrypted credential file:

PS C:\> Set-AWSCredentials -AccessKey 123MYACCESSKEY -SecretKey 456SECRETKEY -StoreAs myAWScredentials

Having saved the credentials, you can discard the current shell and start a new one. To load the credentials into the new shell, run the same cmdlet, but this time pass the name you specified to the -StoredCredentials parameter:

PS C:\> Set-AWSCredentials -StoredCredentials myAWScredentials

Once the credentials are loaded, the cmdlets you run in that shell do not need to have credential data supplied – it will be retrieved from the current shell instance automatically. If you need to change credentials temporarily, all cmdlets accept a -StoredCredentials parameter that looks up the credentials for the name specified and uses them for that particular cmdlet’s invocation:

PS C:\> Set-AWSCredentials -StoredCredentials myAWScredentials

# These two examples yield the same data
PS C:\> Get-EC2Instance -StoredCredentials myAWScredentials
PS C:\> Get-EC2Instance

# This invocation returns different data, as alternate credentials are specified
PS C:\> Get-EC2Instance -StoredCredentials myOtherAWScredentials

By the way, the -StoredCredentials parameter can also be used with Get-STSSessionToken (shown earlier) to avoid having to expose your keys when obtaining temporary session credentials.
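
For example, this sketch obtains temporary session credentials using the stored credentials from earlier, so the keys never appear on the command line:

PS C:\> $tempcreds = Get-STSSessionToken -StoredCredentials myAWScredentials -DurationSeconds 900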

Loading Credentials from a PowerShell Profile

Remembering to run Set-AWSCredentials (or Initialize-AWSDefaults) in each shell or PowerShell host that you launch can be tiresome, so I make use of my user profile to do this for me and also to set a default region for my shells. Your user profile is simply a script file named Microsoft.PowerShell_profile.ps1 that exists in a folder named WindowsPowerShell in your user documents location. See How to Create a Windows PowerShell Profile for more details on the preferred way to create this file.

Once the file is created, load it into a text editor and add the call to Set-AWSCredentials (and Set-DefaultAWSRegion if you like) so that it initializes every shell you load, however it is launched. For example, my profile contains these lines; the first loads my personal AWS credentials, stored under the name 'steve':

Set-AWSCredentials -StoredCredentials steve
Set-DefaultAWSRegion us-west-2

Note that if you are using PowerShell version 2, you will need to import the AWSPowerShell module before running those cmdlets. Under PowerShell version 3, the module auto-imports whenever any of the cmdlets it contains is run.
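
For example, a PowerShell version 2 profile might begin as follows (a sketch that assumes the AWSPowerShell module is discoverable via PSModulePath; otherwise, pass the full path of AWSPowerShell.psd1 to Import-Module):

Import-Module AWSPowerShell
Set-AWSCredentials -StoredCredentials steve
Set-DefaultAWSRegion us-west-2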

As I routinely switch credentials and regions for AWS SDK testing, I also have a custom prompt function in my profile that shows me the current user and region for the shell – you may find this useful too:

function prompt
{
    # preserve the exit code of the last command the user ran
    $realLASTEXITCODE = $LASTEXITCODE

    $prompt = "PS "

    # $StoredAWSCredentials and $StoredAWSRegion are shell variables maintained
    # by the toolset as you switch credentials and regions
    if ($StoredAWSCredentials -ne $null)
    {
        $prompt += "$StoredAWSCredentials"
    }

    if ($StoredAWSRegion -ne $null)
    {
        $prompt += "@"
        $prompt += "$StoredAWSRegion"
    }

    $prompt += " "
    $prompt += $pwd.ProviderPath
    $prompt += "> "

    $global:LASTEXITCODE = $realLASTEXITCODE
    $prompt
}

This function (which is called automatically by PowerShell) displays a custom shell prompt:

PS steve@us-west-2 C:\Dev>

In addition, as I change region/credentials, it updates automatically. Cool!

Summary

This post has shown you a number of ways in which credential data can be supplied to AWS cmdlets. Hopefully, you can now see how to pass credential data without compromising your root AWS keys: by making use of AWS Identity and Access Management (IAM) user accounts and the encrypted credentials file shared with the Visual Studio toolkit, or by using roles with EC2 instances so that credentials never appear in plain view.

Using Non-.NET Languages for Windows Store Apps

by Norm Johanson

In Version 2 of our AWS SDK for .NET, we added support for Windows Store apps by creating a .NET Windows Store app class library. This approach works great if you are writing your Windows Store app in a .NET language like C# or VB. It means most code written for the .NET 3.5 and 4.5 versions of the AWS SDK for .NET will also work for Windows Store apps (the biggest difference being that all service operations must instead be called asynchronously). But what if you're using C++ or JavaScript instead of .NET and want to access AWS in a Windows Store app? This is still possible, by creating a Windows Runtime component that wraps the AWS calls you want to make.

What Is a Windows Runtime Component?

A Windows Runtime component is like a class library, except that it can be called by any language supported by the Windows Runtime. An important distinction from class libraries is that all parameters and return types must be compatible Windows Runtime types. Windows Runtime components can be written in any supported Windows Runtime language; in our case, that needs to be C# or Visual Basic, because we want to access the AWS SDK, which is a .NET class library.

Creating the Wrapper

In my example, I want my C++ Windows Store app to be able to put and get objects from Amazon S3. To get started, I’m going to create a C# Windows Runtime Component project called AWSWrapper. Then I’ll add a class called S3Wrapper with the following code.

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

using Windows.Foundation;
using Windows.Storage;
using Windows.Storage.Streams;

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

namespace AWSWrapper
{
    public sealed class S3Wrapper
    {
        // For demo purposes, I'll embed the credentials. To get
        // credentials securely to your application, developers should
        // look into strategies like the token vending machine,
        // http://aws.amazon.com/articles/4611615499399490, or
        // IAM Web Identity,
        // http://aws.typepad.com/aws/2013/05/aws-iam-now-supports-amazon-facebook-and-google-identity-federation.html/
        const string ACCESSKEY = "";
        const string SECRETKEY = "";

        IAmazonS3 s3Client;

        private IAmazonS3 S3Client
        {
            get
            {
                if (this.s3Client == null)
                {
                    this.s3Client = new AmazonS3Client(
                        ACCESSKEY, SECRETKEY, RegionEndpoint.USWest2);
                }

                return this.s3Client;
            }
        }

        public IAsyncAction PutObjectAsync(string bucketName, 
              string key, IStorageFile storageFile)
        {
            PutObjectRequest request = new PutObjectRequest()
            {
                BucketName = bucketName,
                Key = key,
                StorageFile = storageFile
            };

            return this.S3Client.PutObjectAsync(request).AsAsyncAction();
        }

        public IAsyncOperation<IInputStream> GetObjectAsync(string bucketName, string key)
        {
            GetObjectRequest request = new GetObjectRequest()
            {
                BucketName = bucketName,
                Key = key
            };

            return Task.Run(() =>
            {
                var task = this.S3Client.GetObjectAsync(request);
                var response = task.Result;
                return response.ResponseStream.AsInputStream();
            }).AsAsyncOperation();
        }
    }
}

This class wraps both the put and get operations on Amazon S3. Since this is a Windows Runtime component, I need to make sure the return types are valid Windows Runtime types. This is why, instead of returning Task objects, the methods convert them to IAsyncAction for the put and IAsyncOperation<IInputStream> for the get: neither Task nor Stream is a valid Windows Runtime type. Note that error handling is omitted to keep the sample simple.

Consuming the Wrapper

Now, in my C++ Windows Store app, I can add my newly created Windows Runtime component as a reference. Here is a sample showing the wrapper being used to upload a file picked with the FileOpenPicker.

void CppS3Browser::MainPage::Button_Click(
     Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e)
{
	FileOpenPicker^ openPicker = ref new FileOpenPicker();
	openPicker->ViewMode = PickerViewMode::Thumbnail; 
	openPicker->SuggestedStartLocation = PickerLocationId::PicturesLibrary; 
	openPicker->FileTypeFilter->Append("*"); 

	create_task(openPicker->PickSingleFileAsync())
            .then([this](StorageFile^ file) 
	{
		if (file) 
		{ 
			AWSWrapper::S3Wrapper^ s3wrapper = 
                            ref new AWSWrapper::S3Wrapper();
			s3wrapper->PutObjectAsync(
                            this->bucketName, file->Name, file);
		}
	});
}

You can extend this pattern for any operations in the AWS SDK for .NET. Just make sure to convert the parameters and return types to Windows Runtime types.
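
For instance, here is a hypothetical additional method for the S3Wrapper class above, applying the same pattern to deletes:

// A sketch: DeleteObjectAsync returns a Task, which is not a valid
// Windows Runtime type, so it is converted to an IAsyncAction
public IAsyncAction DeleteObjectAsync(string bucketName, string key)
{
    DeleteObjectRequest request = new DeleteObjectRequest()
    {
        BucketName = bucketName,
        Key = key
    };

    return this.S3Client.DeleteObjectAsync(request).AsAsyncAction();
}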

.NET Application Deployment Roundup

by Jim Flanagan

In this post, we talk about several customer questions that have come up in the AWS Forums.

Deploying MVC4 applications on AWS Elastic Beanstalk

Deploying MVC4 applications to an AWS Elastic Beanstalk environment is just as easy as deploying other types of .NET web applications, and does not require pre-installing any software on instances in order to work. All you need to do is make sure that the necessary project references have Copy Local set to True. For example, setting Copy Local to False for System.Web.Mvc and System.Web.Razor will cause your application to work on your development system but fail when it gets deployed to the instance.
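
In the project file, the Copy Local setting corresponds to the <Private> metadata on the reference; a sketch of what a correctly configured reference looks like in the .csproj:

<Reference Include="System.Web.Mvc">
  <Private>True</Private>
</Reference>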

New MVC4 projects created from the "ASP.NET MVC 4 Web Application" template in Visual Studio should have the references set up correctly for deploying to Elastic Beanstalk.

Deploying applications to the root

By default, Visual Studio configures web applications to be deployed to a virtual directory. For applications in virtual directories, the .NET Elastic Beanstalk container deploys the application into the virtual directory and then creates a URL rewrite rule to direct requests from http://my-app.elasticbeanstalk.com/ to http://my-app.elasticbeanstalk.com/MyApp. For some applications this rewrite rule can cause issues, or you might have other reasons to want your application deployed at the root level.

In Visual Studio 2010, you can change the deployment location with the following steps:

  • Open the Properties pane for the web application.
  • Navigate to the Package/Publish Web tab.
  • Edit the value of IIS Web site/application name to use on the destination server to be Default Web Site.

If your version of Visual Studio 2012 has Update 2 or later, this option is not present in the Properties pane, but you can add a <DeployIisAppPath> element to the appropriate <PropertyGroup> in your .csproj file. If you want it to apply to all configurations and platforms and deploy at the root, include it in the unconditional <PropertyGroup>:

<PropertyGroup>
  <!-- ... -->
  <DeployIisAppPath>Default Web Site/</DeployIisAppPath>
</PropertyGroup>

Or, for the Release|AnyCPU build target only:

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
  <DeployIisAppPath>Default Web Site/</DeployIisAppPath>
</PropertyGroup>

Avoid maintaining state on instances

The nature of Elastic Beanstalk environments is that instances can come and go over time. For that reason, you should design your application so that as instances are added or removed from your environment due to failures or scaling events, they have everything they need to correctly serve your application. Similarly, when instances are removed from the environment, you shouldn’t lose important state or data.

Maintaining application state across servers is a great use of AWS services, such as Amazon S3 for file content, or Amazon DynamoDB for key/value storage.
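
As a sketch of the idea, instead of writing uploaded files to an instance's local disk, you might push them straight to S3 (the bucket name and variables here are hypothetical):

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

// store user content in S3 so it survives instance replacement
var s3Client = new AmazonS3Client(RegionEndpoint.USWest2);
s3Client.PutObject(new PutObjectRequest
{
    BucketName = "my-app-user-content",
    Key = "uploads/" + fileName,
    FilePath = localFilePath
});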

Using Visual Studio 2013 Preview

With the latest release of the AWS Toolkit for Visual Studio, the installer now supports installation of the toolkit and project templates into the preview editions of Visual Studio 2013. As with prior releases of the toolkit, Professional or higher editions of the IDE support the AWS Explorer tool window, the AWS CloudFormation template editor, and a set of project templates for a variety of AWS services. For users with Express editions of the IDE, only the project templates are installed, due to licensing restrictions for these editions.

VPC and AWS Elastic Beanstalk

by Norm Johanson

We recently released a new version of our AWS Elastic Beanstalk .NET container, which, like the other Beanstalk containers, is based on AWS CloudFormation and lets you take advantage of all the latest features that have been added to Beanstalk. One of the exciting new features is the ability to deploy into Amazon VPC. The AWS Toolkit for Visual Studio has also been updated to support creating VPCs and launching instances into them, and the Beanstalk deployment wizard now lets you create Beanstalk environments in a VPC.

 

The first step to deploying into a VPC is to create the VPC. To do this in the toolkit, open the VPC view via AWS Explorer and click Create VPC.

To get this VPC ready for Beanstalk, check the With Public Subnet check box, which specifies where the load balancer will be created. You also need to check the With Private Subnet check box, which specifies where the EC2 instances will be launched. You can leave the rest of the fields at their defaults. Once everything is created, deploy your application by right-clicking your project and selecting Publish to AWS…, just as you would for non-VPC deployments. The AWS Options page now contains an option to deploy into a VPC.

Check the Launch into VPC check box and click Next. The subsequent page allows you to configure the VPC settings for the deployment.

Another helpful feature in the toolkit's VPC creation dialog box is that it puts name tags on the subnets and security groups. The launch wizard looks for these tags when you select a VPC and, if it finds them, auto-selects the appropriate values. In this case, all you need to do is select your new VPC and then continue with your deployment.

That’s all there is to deploying into a VPC with Beanstalk. For more information, see Using AWS Elastic Beanstalk with Amazon VPC.

Working with Regions in the AWS SDK for .NET

by Norm Johanson

In earlier versions of the AWS SDK for .NET, using services in regions other than us-east-1 required you to

  • create a config object for the client
  • set the ServiceURL property on the config
  • construct a client using the config object

Here’s an example of what that looks like for Amazon DynamoDB:

var config = new AmazonDynamoDBConfig
{
    ServiceURL = "https://dynamodb.eu-west-1.amazonaws.com/"
};
var dynamoDBClient = new AmazonDynamoDBClient(accessKey, secretKey, config);

In version 1.5.0.0 of the SDK, this was simplified: you can set the region in the client constructor using a region constant, removing the burden of knowing the URL for the region. For example, the preceding code can now be replaced with this:

var dynamoDBClient = new AmazonDynamoDBClient(accessKey, secretKey, RegionEndpoint.EUWest1);

The previous way of using config objects still works with the SDK. The region constant also works with the config object. For example, if you still need to use the config object to set up a proxy, you can take advantage of the new regions support like this:

var config = new AmazonDynamoDBConfig()
{
    RegionEndpoint = RegionEndpoint.USWest2,
    ProxyHost = "webproxy",
    ProxyPort = 80
};
var dynamoDBClient = new AmazonDynamoDBClient(accessKey, secretKey, config);

In the recently released version 2.0 of the SDK, the region can be set in the app.config file along with the access and secret key. For example, here is an app.config file that instructs the application to use region us-west-2:

<configuration>
  <appSettings>
    <add key="AWSAccessKey" value="YOUR_ACCESS_KEY"/>
    <add key="AWSSecretKey" value="YOUR_SECRET_KEY"/>
    <add key="AWSRegion" value="us-west-2"/>
  </appSettings>
</configuration>

And by running this code, which uses the parameterless constructor of the Amazon EC2 client, we can see it print out all the Availability Zones in us-west-2.

var ec2Client = new AmazonEC2Client();

var response = ec2Client.DescribeAvailabilityZones();

foreach (var zone in response.AvailabilityZones)
{
    Console.WriteLine(zone.ZoneName);
}

For a list of region constants, you can check the API documentation.

EC2Metadata

by Pavel Safronov

A few months ago, we added a helper utility to the SDK called EC2Metadata, a class that provides convenient access to EC2 instance metadata. The utility surfaces most instance data as static strings and some complex data as .NET structures. For instance, the following code sample illustrates how you can retrieve the current EC2 instance's Id and network interfaces:

string instanceId = EC2Metadata.InstanceId;
Console.WriteLine("Current instance: {0}", instanceId);

var networkInstances = EC2Metadata.NetworkInterfaces;
foreach(var netInst in networkInstances)
{
    Console.WriteLine("Network Interface: Owner = {0}, MacAddress = {1}", netInst.OwnerId, netInst.MacAddress);
}

The utility also exposes methods to retrieve data that may not yet have been modeled in EC2Metadata: EC2Metadata.GetItems(string path) and EC2Metadata.GetData(string path). GetItems returns a collection of items for a given path, while GetData returns the metadata for that path (if the path is invalid or the item doesn't exist, GetData returns null). For example, to retrieve the current instance Id, you can use the InstanceId property or, equivalently, GetData:

string instanceId = EC2Metadata.GetData("/instance-id");
Console.WriteLine("Current instance: {0}", instanceId);

Similarly, you can use GetItems to retrieve the available nodes for a specific path:

// Retrieve nodes from the root, http://169.254.169.254/latest/meta-data/
var rootNodes = EC2Metadata.GetItems(string.Empty);
foreach(var item in rootNodes)
{
    Console.WriteLine(item);
}

Note: Since instance metadata is accessible only from an EC2 instance, the SDK throws an exception if you attempt to use this utility anywhere outside of an EC2 instance (for example, on your desktop).
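
If you have code that might run both on and off EC2, one defensive sketch is to catch the failure and fall back; the exact exception type can vary, so this catches broadly:

string instanceId;
try
{
    instanceId = EC2Metadata.InstanceId;
}
catch (Exception)
{
    // not running on an EC2 instance (or the metadata endpoint is unreachable)
    instanceId = null;
}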

Uploading to Amazon S3 with HTTP POST using the AWS SDK for .NET

by Norm Johanson

Generally speaking, access to your Amazon S3 resources requires your AWS credentials, though there are situations where you would like to grant certain forms of limited access to other users. For example, to allow users temporary access to download a non-public object, you can generate a pre-signed URL.
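
For example, a pre-signed URL takes only a few lines of SDK code to generate (a sketch; the bucket, key, and region are hypothetical):

using System;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

var s3Client = new AmazonS3Client(RegionEndpoint.USWest2);

// anyone holding this URL can download the object until it expires
string url = s3Client.GetPreSignedURL(new GetPreSignedUrlRequest
{
    BucketName = "the-s3-bucket-in-question",
    Key = "donny/uploads/throwing_rocks.txt",
    Expires = DateTime.UtcNow.AddHours(1)
});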

Another common situation is where you want to give users the ability to upload multiple files over time to an S3 bucket, but you don’t want to make the bucket public. You might also want to set some limits on what type and/or size of files users can upload. For this case, S3 allows you to create an upload policy that describes what a third-party user is allowed to upload, sign that policy with your AWS credentials, then give the user the signed policy so that they can use it in combination with HTTP POST uploads to S3.

The AWS SDK for .NET comes with some utilities that make this easy.

Writing an Upload Policy

First, you need to create the upload policy, which is a JSON document that describes the limitations Amazon S3 will enforce on uploads. This policy is different from an Identity and Access Management policy.

Here is a sample upload policy that specifies the following:

  • The S3 bucket must be the-s3-bucket-in-question
  • Object keys must begin with donny/uploads/
  • The S3 canned ACL must be private
  • Only text files can be uploaded
  • The POST must have an x-amz-meta-yourelement element specified, but it can contain anything.
  • Uploaded files cannot be larger than one megabyte.

{"expiration": "2013-04-01T00:00:00Z",
  "conditions": [ 
    {"bucket": "the-s3-bucket-in-question"}, 
    ["starts-with", "$key", "donny/uploads/"],
    {"acl": "private"},
    ["eq", "$Content-Type", "text/plain"],
    ["starts-with", "x-amz-meta-yourelement", ""],
    ["content-length-range", 0, 1048576]
  ]
}

It’s a good idea to place as many limitations as you can on these policies. For example, make the expiration as short as reasonable, restrict separate users to separate key prefixes if using the same bucket, and constrain file sizes and types. For more information about policy construction, see the Amazon Simple Storage Service Developer Guide.

 

Signing a Policy

Once you have a policy, you can sign it with your credentials using the SDK.

using Amazon.S3.Util;
using Amazon.Runtime;

var myCredentials = new BasicAWSCredentials(ACCESS_KEY_ID, SECRET_ACCESS_KEY);
var signedPolicy = S3PostUploadSignedPolicy.GetSignedPolicy(policyString, myCredentials);

Ideally, the credentials used to sign the request would belong to an IAM user created for this purpose, and not your root account credentials. This allows you to further constrain access with IAM policies, and it also gives you an avenue to revoke the signed policy (by rotating the credentials of the IAM user).

In order to successfully sign POST upload policies, the IAM user permissions must allow the actions s3:PutObject and s3:PutObjectAcl.
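
A sketch of such an IAM user policy, scoped to the bucket and key prefix used in the sample upload policy above:

{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::the-s3-bucket-in-question/donny/uploads/*"
    }
  ]
}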

Uploading an Object Using the Signed Policy

You can add this signed policy object to an S3PostUploadRequest.

var postRequest = new S3PostUploadRequest 
{
    Key = "donny/uploads/throwing_rocks.txt",
    Bucket = "the-s3-bucket-in-question",
    CannedACL = S3CannedACL.Private,
    InputStream = File.OpenRead("c:\throwing_rocks.txt"),
    SignedPolicy = signedPolicy
};

postRequest.Metadata.Add("yourelement", myelement);

var response = AmazonS3Util.PostUpload(postRequest);

Keys added to the S3PostUploadRequest.Metadata dictionary will have the x-amz-meta- prefix added to them if it isn’t present. Also, you don’t always have to explicitly set the Content-Type if it can be inferred from the extension of the file or key.

Any errors returned by the service will result in an S3PostUploadException, which will contain an explanation of why the upload failed.
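
A natural pattern, then, is to wrap the upload call (a sketch):

try
{
    var response = AmazonS3Util.PostUpload(postRequest);
}
catch (S3PostUploadException e)
{
    // the exception message carries the service's explanation of the failure
    Console.WriteLine("Upload failed: {0}", e.Message);
}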

 

Exporting and Importing a Signed Policy

You can export the S3PostUploadSignedPolicy object to JSON or XML to be transferred to other users.

var policyJson = signedPolicy.ToJson();
var policyXml = signedPolicy.ToXml();

And the receiving user can re-create an S3PostUploadSignedPolicy object from the serialized data.

var signedPolicy = S3PostUploadSignedPolicy.GetSignedPolicyFromJson(policyJson);
var signedPolicy2 = S3PostUploadSignedPolicy.GetSignedPolicyFromXml(policyXml);

For more information about uploading objects to Amazon S3 with HTTP POST, including how to upload objects with a web browser, see the Amazon Simple Storage Service Developer Guide.

 

Scripting your EC2 Windows fleet using Windows PowerShell and Windows Remote Management

by Steve Roberts

Today we have a guest post by one of our AWS Solutions Architects, James Saull, discussing how to take advantage of Windows PowerShell and Windows Remote Management (WinRM) to script your Windows fleet.

One of the advantages of using AWS is on-demand access to an elastic fleet of machines, continuously adjusting in response to demand and ranging, potentially, from zero machines to thousands. This presents a couple of challenges: within your infrastructure, how do you identify the machines to target, and how do you run your script against a large and varying number of them at the same time? In this post, we take a look at how to use EC2 tags for targeting and Windows Remote Management to run PowerShell scripts simultaneously.

Launching an AWS EC2 Windows instance from the console and connecting via RDP is a simple matter. You can even do it directly from within Visual Studio, as recently documented here. From the RDP session, you might perform tasks such as updating the assets of an ASP.NET web application. If you had a second machine, you could open a second RDP session and repeat those tasks. Alternatively, if you are running in an Amazon VPC, you could avoid opening additional RDP sessions and just use PowerShell's Enter-PSSession to connect to the second machine. This does require that all instances are members of security groups that allow Windows Remote Management traffic.

Below is an example of connecting to another host in a VPC and issuing a simple command; notice that the date/time stamps are different on the second host.
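
A sketch of such a session (the private IP address and output shown here are hypothetical):

PS C:\> Get-Date
Tuesday, 10 September 2013 10:15:02

PS C:\> Enter-PSSession -ComputerName 10.0.1.25
[10.0.1.25]: PS C:\Users\Administrator\Documents> Get-Date
Tuesday, 10 September 2013 10:17:41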

However, as the number of machines grows, you will quickly want the ability to issue a command once and have it run against the whole fleet simultaneously. To do this, we can use PowerShell’s Invoke-Command. Let’s take a look at how we might instruct a fleet of Windows EC2 hosts to all download the latest version of my web application assets from Amazon S3.

First, using EC2 tags, we will identify which machines are web servers, as only they should be downloading these files. The example below uses the cmdlets Get-EC2Instance and Read-S3Object, which are part of the AWS Tools for Windows PowerShell and are installed by default on AWS Windows Machine Images:

$privateIp = ((Get-EC2Instance -Region eu-west-1).RunningInstance `
            | Where-Object {
                $_.Tag.Count -gt 0 `
                -and $_.Tag.Key -eq "Role" `
                -and $_.Tag.Value -match "WebServer"}).PrivateIpAddress

Establish a session with each of the web servers:

$s = New-PSSession -ComputerName $privateIp 

Invoke the command that will now simultaneously run on each of the web servers:

Invoke-Command -Session $s -ScriptBlock {
    Read-S3Object   -BucketName mysourcebucket `
                    -KeyPrefix /path/towebassets/ `
                    -Directory z:\webassets `
                    -Region eu-west-1 } 

This works well, but what if I want to run something that is individualized to the instance? There are many possible ways, but here is one example:

$scriptBlock = {
    param (
        [int] $clusterPosition, [int] $numberOfWebServers
    )
    "I am Web Server $clusterPosition out of $numberOfWebServers" | Out-File z:\afile.txt
}

$position = 1
foreach($machine in $privateIp)
{
    Invoke-Command  -ComputerName $machine `
                    -ScriptBlock $scriptBlock `
                    -ArgumentList $position , ($PrivateIp.Length) `
                    -AsJob -JobName DoSomethingDifferent
    $position++
} 

Summary

This post showed how using EC2 tags can make scripting a fleet of instances via Windows Remote Management very convenient. We hope you find these tips helpful, and as always, let us know what other .NET or PowerShell information would be most valuable to you.

Release 2.0.0.3 of the AWS SDK V2.0 for .NET

by Norm Johanson

We have just released a new version of the AWS SDK V2.0 for .NET. You can download version 2.0.0.3 of the SDK here.

This release adds support for Amazon SNS mobile push notifications and fixes an issue with uploading large objects to Amazon S3 using the .NET 4.5 Framework version of the SDK.

Please let us know what you think of this latest version of the AWS SDK V2.0 for .NET. You can contact us through our GitHub repository or our forums.