VPC and AWS Elastic Beanstalk

by Norm Johanson

We recently released a new version of our AWS Elastic Beanstalk .NET container which, like the other Beanstalk containers, is based on AWS CloudFormation and lets you take advantage of all the latest features that have been added to Beanstalk. One of the exciting new features is the ability to deploy into Amazon VPC. The AWS Toolkit for Visual Studio has also been updated to support creation of VPCs and launching instances into VPCs. The Beanstalk deployment wizard was also updated so you can create Beanstalk environments in a VPC.

 

The first step to deploying into a VPC is to create the VPC. To do this in the toolkit, open the VPC view via AWS Explorer and click Create VPC.

To get this VPC ready for Beanstalk, check the With Public Subnet check box, which specifies where the load balancer will be created. You also need to check the With Private Subnet check box, which specifies where the EC2 instances will be launched. You can leave the rest of the fields at their defaults. Once everything is created, deploy your application by right-clicking on your project and selecting Publish to AWS… just as you would for non-VPC deployments. The AWS Options page has changed to contain an option to deploy into a VPC:

Check the Launch into VPC check box and click Next. The subsequent page allows you to configure the VPC settings for the deployment:

Another helpful feature of the toolkit's VPC creation dialog box is that it adds name tags to the subnets and security groups it creates. The launch wizard looks for these tags when you select a VPC, and if it finds them, it auto-selects the appropriate values. In this case, all you need to do is select your new VPC and then continue with your deployment.
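If you create VPC resources outside the toolkit, you can apply the same convention yourself. Below is a minimal sketch using the SDK's EC2 client; the subnet ID and tag value are placeholders, and the request shape assumes version 2 of the SDK:

using System.Collections.Generic;
using Amazon;
using Amazon.EC2;
using Amazon.EC2.Model;

var ec2Client = new AmazonEC2Client(RegionEndpoint.USWest2);

// Give the subnet a Name tag so tools (and people) can identify its purpose.
ec2Client.CreateTags(new CreateTagsRequest
{
    Resources = new List<string> { "subnet-12345678" },
    Tags = new List<Tag> { new Tag { Key = "Name", Value = "Public subnet" } }
});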

That’s all there is to deploying into VPC with Beanstalk. For more information, see Using AWS Elastic Beanstalk with Amazon VPC.

Working with Regions in the AWS SDK for .NET

by Norm Johanson

In earlier versions of the AWS SDK for .NET, using services in regions other than us-east-1 required you to

  • create a config object for the client
  • set the ServiceURL property on the config
  • construct a client using the config object

Here’s an example of what that looks like for Amazon DynamoDB:

var config = new AmazonDynamoDBConfig
{
    ServiceURL = "https://dynamodb.eu-west-1.amazonaws.com/"
};
var dynamoDBClient = new AmazonDynamoDBClient(accessKey, secretKey, config);

In version 1.5.0.0 of the SDK, this was simplified: you can set the region in the client constructor using a region constant, which removes the burden of knowing the region's URL. For example, the preceding code can now be replaced with this:

var dynamoDBClient = new AmazonDynamoDBClient(accessKey, secretKey, RegionEndpoint.EUWest1);

The previous way of using config objects still works with the SDK. The region constant also works with the config object. For example, if you still need to use the config object to set up a proxy, you can take advantage of the new regions support like this:

var config = new AmazonDynamoDBConfig()
{
    RegionEndpoint = RegionEndpoint.USWest2,
    ProxyHost = "webproxy",
    ProxyPort = 80
};
var dynamoDBClient = new AmazonDynamoDBClient(accessKey, secretKey, config);

In the recently released version 2.0 of the SDK, the region can be set in the app.config file along with the access and secret key. For example, here is an app.config file that instructs the application to use region us-west-2:

<configuration>
  <appSettings>
    <add key="AWSAccessKey" value="YOUR_ACCESS_KEY"/>
    <add key="AWSSecretKey" value="YOUR_SECRET_KEY"/>
    <add key="AWSRegion" value="us-west-2"/>
  </appSettings>
</configuration>

And by running this code, which uses the parameterless constructor of the Amazon EC2 client, we can see it print out all the Availability Zones in us-west-2.

var ec2Client = new AmazonEC2Client();

var response = ec2Client.DescribeAvailabilityZones();

foreach (var zone in response.AvailabilityZones)
{
    Console.WriteLine(zone.ZoneName);
}

For a list of region constants, you can check the API documentation.
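If you prefer to discover the constants at runtime, you can also enumerate them. Here is a small sketch, assuming the RegionEndpoint.EnumerableAllRegions property available in recent versions of the SDK:

using System;
using Amazon;

// Print every region constant the SDK knows about.
foreach (var region in RegionEndpoint.EnumerableAllRegions)
{
    Console.WriteLine("{0}: {1}", region.SystemName, region.DisplayName);
}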

EC2Metadata

by Pavel Safronov

A few months ago we added a helper utility to the SDK called EC2Metadata. This class provides convenient access to EC2 instance metadata. The utility surfaces most instance data as static strings and some complex data as .NET structures. For instance, the following code sample illustrates how you can retrieve the current EC2 instance's ID and network interfaces:

string instanceId = EC2Metadata.InstanceId;
Console.WriteLine("Current instance: {0}", instanceId);

var networkInstances = EC2Metadata.NetworkInterfaces;
foreach(var netInst in networkInstances)
{
    Console.WriteLine("Network Interface: Owner = {0}, MacAddress = {1}", netInst.OwnerId, netInst.MacAddress);
}

The utility also exposes methods to retrieve data that may not yet be modeled in EC2Metadata: EC2Metadata.GetItems(string path) and EC2Metadata.GetData(string path). GetItems returns a collection of items for that path, while GetData returns the metadata at that path (if the path is invalid or the item doesn't exist, GetData returns null). For example, to retrieve the current instance ID, you can use the InstanceId property or, equivalently, GetData:

string instanceId = EC2Metadata.GetData("/instance-id");
Console.WriteLine("Current instance: {0}", instanceId);

Similarly, you can use GetItems to retrieve the available nodes for a specific path:

// Retrieve nodes from the root, http://169.254.169.254/latest/meta-data/
var rootNodes = EC2Metadata.GetItems(string.Empty);
foreach(var item in rootNodes)
{
    Console.WriteLine(item);
}

Note: Since instance metadata is accessible only from an EC2 instance, the SDK will throw an exception if you attempt to use this utility anywhere outside of an EC2 instance (for example, on your desktop).
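If your code can run both on and off EC2, you may want to guard these calls. The exact exception type isn't documented here, so this sketch catches broadly; treat it as illustrative:

using System;
using Amazon.Util;

string instanceId;
try
{
    // This call succeeds only when running on an EC2 instance.
    instanceId = EC2Metadata.InstanceId;
}
catch (Exception)
{
    // Not on EC2 (or the instance metadata endpoint is unreachable).
    instanceId = null;
}
Console.WriteLine(instanceId ?? "Not running on EC2");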

Uploading to Amazon S3 with HTTP POST using the AWS SDK for .NET

by Norm Johanson

Generally speaking, access to your Amazon S3 resources requires your AWS credentials, though there are situations where you would like to grant certain forms of limited access to other users. For example, to allow users temporary access to download a non-public object, you can generate a pre-signed URL.

Another common situation is where you want to give users the ability to upload multiple files over time to an S3 bucket, but you don’t want to make the bucket public. You might also want to set some limits on what type and/or size of files users can upload. For this case, S3 allows you to create an upload policy that describes what a third-party user is allowed to upload, sign that policy with your AWS credentials, then give the user the signed policy so that they can use it in combination with HTTP POST uploads to S3.

The AWS SDK for .NET comes with some utilities that make this easy.

Writing an Upload Policy

First, you need to create the upload policy, which is a JSON document that describes the limitations Amazon S3 will enforce on uploads. This policy is different from an Identity and Access Management policy.

Here is a sample upload policy that specifies

  • The S3 bucket must be the-s3-bucket-in-question
  • Object keys must begin with donny/uploads/
  • The S3 canned ACL must be private
  • Only text files can be uploaded
  • The POST must include an x-amz-meta-yourelement element, which can contain anything.
  • Uploaded files cannot be larger than one megabyte.

{"expiration": "2013-04-01T00:00:00Z",
  "conditions": [ 
    {"bucket": "the-s3-bucket-in-question"}, 
    ["starts-with", "$key", "donny/uploads/"],
    {"acl": "private"},
    ["eq", "$Content-Type", "text/plain"],
    ["starts-with", "x-amz-meta-yourelement", ""],
    ["content-length-range", 0, 1048576]
  ]
}

It’s a good idea to place as many limitations as you can on these policies. For example, make the expiration as short as reasonable, restrict separate users to separate key prefixes if using the same bucket, and constrain file sizes and types. For more information about policy construction, see the Amazon Simple Storage Service Developer Guide.

 

Signing a Policy

Once you have a policy, you can sign it with your credentials using the SDK.

using Amazon.S3.Util;
using Amazon.Runtime;

// policyString holds the JSON policy document shown above.
var myCredentials = new BasicAWSCredentials(ACCESS_KEY_ID, SECRET_ACCESS_KEY);
var signedPolicy = S3PostUploadSignedPolicy.GetSignedPolicy(policyString, myCredentials);

Ideally, the credentials used to sign the request would belong to an IAM user created for this purpose, and not your root account credentials. This allows you to further constrain access with IAM policies, and it also gives you an avenue to revoke the signed policy (by rotating the credentials of the IAM user).

In order to sign POST upload policies successfully, the IAM user's permissions must allow the s3:PutObject and s3:PutObjectAcl actions, as in the sample access policy below.
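Here is a minimal sketch of such an access policy; the bucket name is the placeholder used earlier in this post:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:PutObject", "s3:PutObjectAcl"],
    "Resource": "arn:aws:s3:::the-s3-bucket-in-question/*"
  }]
}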

Uploading an Object Using the Signed Policy

You can add this signed policy object to an S3PostUploadRequest.

var postRequest = new S3PostUploadRequest 
{
    Key = "donny/uploads/throwing_rocks.txt",
    Bucket = "the-s3-bucket-in-question",
    CannedACL = S3CannedACL.Private,
    InputStream = File.OpenRead(@"c:\throwing_rocks.txt"),
    SignedPolicy = signedPolicy
};

// myelement holds the value for the x-amz-meta-yourelement element required by the policy.
postRequest.Metadata.Add("yourelement", myelement);

var response = AmazonS3Util.PostUpload(postRequest);

Keys added to the S3PostUploadRequest.Metadata dictionary will have the x-amz-meta- prefix added to them if it isn’t present. Also, you don’t always have to explicitly set the Content-Type if it can be inferred from the extension of the file or key.

Any errors returned by the service will result in an S3PostUploadException, which will contain an explanation of why the upload failed.
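A minimal sketch of handling that exception (assuming only the message is needed for diagnostics):

try
{
    AmazonS3Util.PostUpload(postRequest);
    Console.WriteLine("Upload succeeded.");
}
catch (S3PostUploadException e)
{
    // The message explains why S3 rejected the upload, e.g., an unmet policy condition.
    Console.WriteLine("Upload failed: {0}", e.Message);
}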

 

Exporting and Importing a Signed Policy

You can export the S3PostUploadSignedPolicy object to JSON or XML to be transferred to other users.

var policyJson = signedPolicy.ToJson();
var policyXml = signedPolicy.ToXml();

And the receiving user can re-create S3PostUploadSignedPolicy objects with serialized data.

var signedPolicy = S3PostUploadSignedPolicy.GetSignedPolicyFromJson(policyJson);
var signedPolicy2 = S3PostUploadSignedPolicy.GetSignedPolicyFromXml(policyXml);

For more information about uploading objects to Amazon S3 with HTTP POST, including how to upload objects with a web browser, see the Amazon Simple Storage Service Developer Guide.

 

Scripting your EC2 Windows fleet using Windows PowerShell and Windows Remote Management

by Steve Roberts

Today we have a guest post by one of our AWS Solutions Architects, James Saull, discussing how to take advantage of Windows PowerShell and Windows Remote Management (WinRM) to script your Windows fleet.

One of the advantages of using AWS is on-demand access to an elastic fleet of machines—continuously adjusting in response to demand and ranging, potentially, from zero machines to thousands. This presents a couple of challenges: within your infrastructure, how might you identify and run your script against a large and varying number of machines at the same time? In this post, we take a look at how to use EC2 tags for targeting and Windows Remote Management to simultaneously run PowerShell scripts.

Launching an Amazon EC2 Windows instance from the console and connecting via RDP is a simple matter. You can even do it directly from within Visual Studio, as recently documented here. From the RDP session, you might perform tasks such as updating the assets of an ASP.NET web application. If you had a second machine, you could open a second RDP session and repeat those tasks. Alternatively, if you are running in an Amazon VPC, you could avoid opening additional RDP sessions and just use PowerShell's Enter-PSSession to connect to the second machine. This does require that all instances are members of security groups that allow Windows Remote Management traffic.

For example, you can connect to another host in the VPC with Enter-PSSession and issue a simple command such as Get-Date; the timestamp returned by the second host differs from the local one.

However, as the number of machines grows, you will quickly want the ability to issue a command once and have it run against the whole fleet simultaneously. To do this, we can use PowerShell’s Invoke-Command. Let’s take a look at how we might instruct a fleet of Windows EC2 hosts to all download the latest version of my web application assets from Amazon S3.

First, using EC2 tags, we will identify which machines are web servers, as only they should be downloading these files. The example below uses the cmdlets Get-EC2Instance and Read-S3Object, which are part of the AWS Tools for Windows PowerShell and are installed by default on AWS Windows Machine Images:

$privateIp = ((Get-EC2Instance -Region eu-west-1).RunningInstance `
            | Where-Object {
                $_.Tag.Count -gt 0 `
                -and $_.Tag.Key -eq "Role" `
                -and $_.Tag.Value -match "WebServer"}).PrivateIpAddress

Establish a session with each of the web servers:

$s = New-PSSession -ComputerName $privateIp 

Invoke the command that will now simultaneously run on each of the web servers:

Invoke-Command -Session $s -ScriptBlock {
    Read-S3Object   -BucketName mysourcebucket `
                    -KeyPrefix /path/towebassets/ `
                    -Directory z:\webassets `
                    -Region eu-west-1 }

This works well, but what if I want to run something that is individualized to the instance? There are many possible ways, but here is one example:

$scriptBlock = {
    param (
        [int] $clusterPosition, [int] $numberOfWebServers
    )
    "I am Web Server $clusterPosition out of $numberOfWebServers" | Out-File z:\afile.txt
}

$position = 1
foreach($machine in $privateIp)
{
    Invoke-Command  -ComputerName $machine `
                    -ScriptBlock $scriptBlock `
                    -ArgumentList $position, ($privateIp.Length) `
                    -AsJob -JobName DoSomethingDifferent
    $position++
} 

Summary

This post showed how using EC2 tags can make scripting a fleet of instances via Windows Remote Management very convenient. We hope you find these tips helpful, and as always, let us know what other .NET or PowerShell information would be most valuable to you.

Release 2.0.0.3 of the AWS SDK V2.0 for .NET

by Norm Johanson

We have just released a new version of the AWS SDK V2.0 for .NET. You can download version 2.0.0.3 of the SDK here.

This release adds support for Amazon SNS mobile push notifications and fixes an issue with uploading large objects to Amazon S3 using the .NET 4.5 Framework version of the SDK.

Please let us know what you think of this latest version of the AWS SDK V2.0 for .NET. You can contact us through our GitHub repository or our forums.

Using Amazon CloudFront with ASP.NET Apps

by Steve Roberts

Today, I’m going to talk about using Amazon CloudFront to boost the performance of ASP.NET web apps that are deployed to Amazon Web Services. CloudFront is a content delivery service that can cache content in edge locations across the world to give users low-latency access to static content and relieve some of the pressure from web servers.

The main entity that is created in CloudFront is a distribution. A CloudFront distribution contains all the configuration for how content will be cached, along with the domain name that users will use to access the content. You can create distributions using many different tools, like the AWS Management Console or the AWS Explorer in the AWS Toolkit for Visual Studio. I'm going to show how to script the creation of a CloudFront distribution with AWS CloudFormation so it can be easily reproduced in other web applications. Then I'll show how to use the AWS Toolkit to deploy it.

Deploying an App

First, I’m going to deploy an application to AWS using AWS Elastic Beanstalk. To keep things simple, I’m going to create a new project in Visual Studio, select ASP.NET MVC 4 Web Application, and then select Internet Application. To keep the focus on CloudFront, I’m going to only lightly cover Elastic Beanstalk deployments. For more in-depth information on deployments, please review our Elastic Beanstalk user guide.

The first step in deploying is to right-click on our project and select Publish to AWS.

Then walk through the wizard using the following instructions.

Template Page

  • Select Account and Region to deploy to
  • Select Deploy new application with template
  • Select AWS Elastic Beanstalk
  • Click Next

Application Page

  • Leave values at the default and click Next

Environment Page

  • Enter a name for the environment
  • Verify the environment URL is unique
  • Click Next

AWS Options Page

  • Select a key pair or create a new one
  • Click Next

Application Options Page

  • Click Next

Amazon RDS Database Security Group Page

  • Click Next

Review Page

  • Click Deploy

After you click Deploy, the application will be built and deployed to Elastic Beanstalk. The AWS Explorer will be refreshed to show the new environment, and the Environment view will be displayed as well.

Creating the AWS CloudFormation Template

CloudFormation uses templates—which are JSON text files—to script the creation of AWS resources. I’m going to create a template that will create my CloudFront distribution using the CloudFormation editor that is part of the Visual Studio Toolkit. To get started, I’m going to right-click on the solution, select Add New Project, and then select the AWS CloudFormation project.

In the project wizard, I’m going to select Create with empty template and then click Finish.

Once the project is created, I can use the following template to create the distribution. In the CloudFormation editor, you can hover over any of the keys to get a description of what they mean.

{
    "AWSTemplateFormatVersion" : "2010-09-09",

    "Description" : "",

The only parameter needed is the domain name of our application. In this case, it will be the URL of the Elastic Beanstalk environment. In other examples, this could be the DNS name of an Elastic Load Balancer or EC2 instance.

    "Parameters" : {
        "CloudFrontDomain" : {
            "Type" : "String",
            "Description" : "The domain of the website"
        }
    },

    "Resources" : {

Define the CloudFront distribution.

"Distribution" : {
            "Type" : "AWS::CloudFront::Distribution",
            "Properties" : {
                "DistributionConfig" : {
                    "DefaultRootObject" : "/",

An origin is the source of content for a distribution. In this case, there is only one origin, the Elastic Beanstalk environment. In advanced situations, there could be multiple origins; one use case would be an Elastic Beanstalk environment serving the dynamic content while the static content comes from an Amazon S3 bucket. For that advanced case, refer to the CloudFront documentation on setting up multiple cache behaviors.

"Origins" : [
                        {
                            "DomainName" : { "Ref" : "CloudFrontDomain" },
                            "Id" : "webapp-dns",
                            "CustomOriginConfig" : {
                                "HTTPPort" : "80",
                                "HTTPSPort" : "443",
                                "OriginProtocolPolicy" : "match-viewer"
                            }
                        }
                    ],

All distributions have a default cache behavior that specifies which origin to use. The query string needs to be forwarded because the application serves dynamic content based on it.

"DefaultCacheBehavior" : {
                        "ForwardedValues" : {
                            "QueryString" : true
                        },
                        "TargetOriginId"  : "webapp-dns",
                        "ViewerProtocolPolicy" : "allow-all"
                    },
                    "Enabled" : true,

This section enables CloudFront access logging. The logs are similar to IIS logs and are great for understanding the requests coming into your site.

"Logging" : {
                        "Bucket" : {"Fn::GetAtt" : [ "LoggingBucket", "DomainName"]},
                        "Prefix" : "cflogs/"
                    }
                }
            }
        },

Create an Amazon S3 bucket for the CloudFront logs to be delivered to.

"LoggingBucket" : {
            "Type" : "AWS::S3::Bucket",
            "Properties" : {
            }
        }
    },

    "Outputs" : {

Output the URL to access our web application through CloudFront.

"CloudFrontDomainName" : {
            "Value" : {"Fn::Join" : [ "", ["http://", {"Fn::GetAtt" : [ "Distribution", "DomainName"]}, "/" ] ]},
            "Description" : "Use this URL to access your website through CloudFront"
        },

Output the name of the Amazon S3 bucket created for the CloudFront logs to be delivered to.

"LoggingBucket" : {
            "Value" : { "Ref" : "LoggingBucket" },
            "Description" : "Bucket where CloudFront logs will be written to"
        }
    }
}

Deploying the AWS CloudFormation Template

With the template done, the next step is to deploy to CloudFormation, which will create a stack that represents all the actual AWS resources defined in the template. To deploy this template, I right-click on the template in Solution Explorer and then click Deploy to AWS CloudFormation.

On the first page of the wizard, I’ll enter cloudfrontdemo for the name of the stack that is going to be created. Then I click Next.

On the second page, which is for filling out any parameters defined in the template, I enter the Elastic Beanstalk environment DNS name, then click Next for the review page, and then click Finish.

Now CloudFormation is creating my stack with my distribution. Once the status of the stack transitions to CREATE_COMPLETE, I can check the Outputs tab to get the URL of the CloudFront distribution.

Summary

When users hit my application using the distribution's URL, they are directed to the edge location nearest to them. The edge location either returns its cached value for the request or, if it doesn't have one, reaches back to my Elastic Beanstalk environment to fetch the latest value.

How long CloudFront caches values is controlled by the Cache-Control headers returned by the origin, which in this case is my Elastic Beanstalk environment. If there is no Cache-Control header, CloudFront defaults to 24 hours, which will be the case for all my static content such as images and JavaScript. By default, ASP.NET returns the Cache-Control header with a value of private for dynamic content, which indicates that all or part of the response message is intended for a single user and must not be cached by a shared cache. This way the content coming from my controllers and views will always be fresh, whereas my static content can be cached. If the dynamic content can be cached for periods of time, I can indicate that to CloudFront using the HttpResponse object. For example, the code snippet below lets CloudFront know that this content can be cached for 30 minutes.

protected void Page_Load(object sender, EventArgs e)
{
    Response.Cache.SetCacheability(HttpCacheability.Public);
    Response.Cache.SetMaxAge(TimeSpan.FromMinutes(30));
}

For more information on controlling the length of caches, review the CloudFront documentation on expiration.
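For static files served by IIS, you can likewise set an explicit max-age instead of relying on the 24-hour default. Here is a sketch using the standard IIS clientCache setting in web.config (this is general IIS configuration, not something specific to CloudFront):

<configuration>
  <system.webServer>
    <staticContent>
      <!-- Send a Cache-Control max-age of one day for static files. -->
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="1.00:00:00" />
    </staticContent>
  </system.webServer>
</configuration>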

Web Identity Federation using the AWS SDK for .NET

by Steve Roberts

Today's post is about web identity federation. AWS Security Token Service (STS) has introduced this new feature, which allows customers to give constrained, time-limited access to their AWS resources to users who identify themselves via popular third-party identity providers (IdPs). AWS currently supports Amazon, Facebook, and Google as IdPs whose tokens can be used to gain access to AWS resources. This feature enables scenarios where app developers can give their customers controlled access to AWS resources under the developers' own accounts, using the customers' existing accounts with any of the IdPs. By taking this approach, developers don't need to distribute their AWS credentials with their applications or manage accounts for their customers. If you are interested in using this feature in your Windows Phone or Windows Store apps, check out the Developer Preview of the next version of the AWS SDK for .NET, which adds support for .NET Framework 4.5 and the Windows Phone and WinRT platforms.

We’ll now look at the steps required for you to use web identity federation and a few C# code snippets that will show you how to get temporary access tokens and access AWS resources after authenticating with an IdP. We are using Facebook as the IdP in the sample below. For details on using other IdPs, check this link.

Setting up an IAM role

We start off by creating an IAM role (this is a one-time activity). This is the role that your users will assume when they successfully authenticate through an IdP. When you create this role, you need to specify two policies: the trust policy, which specifies who can assume the role (the trusted entity, or principal), and the access policy, which describes the privileges associated with the role. Below is an example of a trust policy using Facebook as the IdP.

{
  "Version":"2012-10-17",
  "Id":"RoleForFacebook",
  "Statement":[{
      "Principal":{"Federated":"graph.facebook.com"},
      "Effect":"Allow",
      "Action":"sts:AssumeRoleWithWebIdentity",
      "Condition": {
          "StringEquals":
              {"graph.facebook.com:app_id":"MY_APP_ID"}
       }
   }]
}

You'll need to replace the string MY_APP_ID with your Facebook app ID. This policy allows users authenticated through the Facebook IdP to call the web identity federation API (the AssumeRoleWithWebIdentity operation), which grants those users temporary AWS credentials. We also have a condition in the policy that the Facebook app ID must match the specified one. This policy also makes use of policy variables, which are discussed in more detail here.

When creating your IAM role via the AWS Management Console, the Role Creation wizard will walk you through the process of creating the trust policy, but you will need to supply the access policy by hand. Below is the access policy that specifies the privileges associated with this role. In this sample, we will provide access to S3 operations on a bucket designated for the Facebook app. You’ll need to replace MY_APPS_BUCKET_NAME with the bucket name for your app.

{
 "Version":"2012-10-17",
 "Statement":[{
   "Effect":"Allow",
   "Action":["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
   "Resource": "arn:aws:s3:::MY_APPS_BUCKET_NAME/${graph.facebook.com:id}/*"
  },
  {
   "Effect":"Allow",
   "Action":"s3:ListBucket",
   "Resource":["arn:aws:s3:::MY_APPS_BUCKET_NAME"],
   "Condition":
     {"StringLike":
       {"s3:prefix":"${graph.facebook.com:id}/"}
     }
  }
 ]
}

The first statement in this policy allows each user of the app to Get, Put, or Delete objects in the specified S3 bucket under a prefix containing their Facebook user ID. This has the effect of creating "folders" for each user under which their objects reside. The second statement allows users to list only their own objects by enforcing the prefix condition on the specified bucket.

Connecting with the identity provider

Now that the IAM role is in place, you can call the STS AssumeRoleWithWebIdentity API, specifying the ARN of the IAM role that you created and a token provided by the IdP. In this example using Facebook, the token would be the access token that Facebook login provides in response to an authentication request (details on how to get the Facebook access token are not covered in this post; this link is a good starting point for understanding the Facebook login process). Here is the C# snippet for calling AssumeRoleWithWebIdentity. Notice that we pass in an AnonymousAWSCredentials object for the credentials parameter when constructing the STS client, as we do not need AWS credentials to make this call.

var stsClient = new AmazonSecurityTokenServiceClient(new AnonymousAWSCredentials());

// Assume the role using the token provided by Facebook.
var assumeRoleResult = stsClient.AssumeRoleWithWebIdentity(new AssumeRoleWithWebIdentityRequest
{
    WebIdentityToken = "FACEBOOK_ACCESS_TOKEN",
    ProviderId = "graph.facebook.com",
    RoleArn = "ROLE_ARN",
    RoleSessionName = "MySession",
    DurationSeconds = 3600
}).AssumeRoleWithWebIdentityResult;

Here are the parameters we pass to the API.

  • WebIdentityToken – the token received from the IdP after a user authenticates with it.
  • ProviderId – the name of the IdP. The supported values are graph.facebook.com, www.amazon.com, and googleapis.com.
  • RoleArn – the Amazon Resource Name of the role the user will assume. The ARN is of the format arn:aws:iam::123456789012:role/RoleForFacebook.
  • RoleSessionName – the name to give to this specific session. This name is used to identify the session.
  • DurationSeconds – the duration for which the security token that is returned will be valid, in seconds. The default value is 1 hour (3600 seconds).

Accessing AWS resources

The AssumeRoleWithWebIdentity API returns a session token that your application can use to access any resource mapped to the role. This is done by constructing a SessionAWSCredentials object and using it for subsequent calls to access resources and perform actions permitted by the assumed role. The sample code below accesses the objects in the app's S3 bucket and performs operations on them. Remember that the access policy you provided when creating the role restricts the user to S3 objects whose paths are prefixed with their Facebook user ID. Here, assumeRoleResult.SubjectFromWebIdentityToken is the Facebook-provided user ID of the customer, and objectName is the name of the S3 object being created.

// Create an S3 client using session credentials returned by STS.
// (s3Config is an AmazonS3Config instance set up elsewhere, e.g., with the region.)
var credentials = assumeRoleResult.Credentials;
SessionAWSCredentials sessionCreds = new SessionAWSCredentials(credentials.AccessKeyId, credentials.SecretAccessKey, credentials.SessionToken);
var s3Client = new AmazonS3Client(sessionCreds, s3Config);

var key = string.Format("{0}/{1}", assumeRoleResult.SubjectFromWebIdentityToken, objectName);

// Put an object in the user's "folder".
s3Client.PutObject(new PutObjectRequest
{
    BucketName = "MY_APPS_BUCKET_NAME",
    Key = key,
    ContentBody = content
});

// List objects in the user's "folder".
var listObjectResponse = s3Client.ListObjects(new ListObjectsRequest
{
    BucketName = "MY_APPS_BUCKET_NAME",
    Prefix = assumeRoleResult.SubjectFromWebIdentityToken + "/"
});

// Get the object with the specified key.
var getObjectResponse = s3Client.GetObject(new GetObjectRequest
{
    BucketName = "MY_APPS_BUCKET_NAME",
    Key = key
});

Summary

In this post, we saw how web identity federation can be used to give access to AWS resources to customers who authenticate through one of the supported IdPs. We also walked through the steps and code snippets to use this feature.

AWS SDK for .NET Version 2.0 Preview

by Norm Johanson

Today, we are excited to announce a preview of our upcoming version 2 of the AWS SDK for .NET, which you can download here.

One of the most exciting new features of version 2 is the ability to use the SDK in Windows Store and Windows Phone 8 apps. Like other SDKs for these new platforms, all method calls that make requests to AWS are asynchronous methods.

Another big improvement for asynchronous programming is that when you target Windows Store, Windows Phone 8, or .NET 4.5, the SDK uses the new Task-based pattern instead of the old style that used pairs of Begin and End methods. Version 2 of the SDK also includes a version compiled for the .NET 3.5 Framework that retains the Begin and End methods for applications that aren't yet ready to move to .NET 4.5.
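As an illustration of the Task-based pattern, here is a sketch of an awaitable call on .NET 4.5; the bucket name is a placeholder, and the method name assumes the SDK's Async-suffix convention on these platforms:

using System;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

public static async Task ListMyObjectsAsync()
{
    var s3Client = new AmazonS3Client();

    // On .NET 4.5, Windows Store, and Windows Phone 8, service calls are
    // exposed as awaitable Task-based methods.
    var response = await s3Client.ListObjectsAsync(new ListObjectsRequest
    {
        BucketName = "my-example-bucket"
    });

    foreach (var s3Object in response.S3Objects)
    {
        Console.WriteLine(s3Object.Key);
    }
}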

For a deeper dive into the differences in version 2, check out our migration guide.

We would really like to hear your feedback on version 2. If you have any suggestions for what you would like to see in our new SDK or have any issues trying out the preview, please let us know. You can let us know using our forums.

AWS.Extensions renaming

by Pavel Safronov

Earlier this week, you may have noticed that the assembly AWS.Extensions—which contained DynamoDBSessionStateStore—has been renamed to AWS.SessionProvider. Our original intent with AWS.Extensions was to create a place for SDK extensions, which aren’t strictly part of the AWS SDK for .NET. We have since developed another extension, DynamoDBTraceListener, a TraceListener that allows the logging of trace and debug output to Amazon DynamoDB.

Unfortunately, the two extensions have distinct requirements: DynamoDBSessionStateStore references System.Web, and thus cannot be used in a Client Profile, while DynamoDBTraceListener does not reference System.Web. So to avoid requiring customers to reference unnecessary assemblies, we've decided to separate the AWS.Extensions project into multiple task-oriented solutions. Thus, customers who only require DynamoDB logging will not have to import the server assemblies required by the session provider.
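For context, a TraceListener like this is typically wired up in app.config under system.diagnostics; the type and assembly names below are illustrative assumptions, so check the extension's documentation for the exact values:

<configuration>
  <system.diagnostics>
    <trace>
      <listeners>
        <!-- The type and assembly names here are illustrative assumptions. -->
        <add name="DynamoDB"
             type="Amazon.TraceListener.DynamoDBTraceListener, AWS.TraceListener" />
      </listeners>
    </trace>
  </system.diagnostics>
</configuration>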

Migration

If you are referencing AWS.Extensions.dll in your project, simply change the reference to AWS.SessionProvider.dll. There are no code changes to be made for this.

NuGet users should remove the reference to AWS.Extensions and instead use the new AWS.SessionProvider package. The existing AWS.Extensions package is now marked OBSOLETE.