Using Amazon CloudFront with ASP.NET Apps

by Steve Roberts

Today, I’m going to talk about using Amazon CloudFront to boost the performance of ASP.NET web apps that are deployed to Amazon Web Services. CloudFront is a content delivery service that can cache content in edge locations across the world to give users low-latency access to static content and relieve some of the pressure from web servers.

The main entity that is created in CloudFront is a distribution. A CloudFront distribution contains all the configuration for how content will be cached, as well as the domain name users will use to access the content. You can create distributions with many different tools, such as the AWS Management Console or the AWS Explorer in the AWS Toolkit for Visual Studio. I’m going to show how to create a distribution using AWS CloudFormation, scripting its creation so it can be easily reproduced in other web applications. Then I’ll show how to use the AWS Toolkit to deploy it.

Deploying an App

First, I’m going to deploy an application to AWS using AWS Elastic Beanstalk. To keep things simple, I’m going to create a new project in Visual Studio, select ASP.NET MVC 4 Web Application, and then select Internet Application. To keep the focus on CloudFront, I’m going to only lightly cover Elastic Beanstalk deployments. For more in-depth information on deployments, please review our Elastic Beanstalk user guide.

The first step in deploying is to right-click on our project and select Publish to AWS.

Then walk through the wizard using the following settings.

Template Page

  • Select Account and Region to deploy to
  • Select Deploy new application with template
  • Select AWS Elastic Beanstalk
  • Click Next

Application Page

  • Leave values at the default and click Next

Environment Page

  • Enter a name for the environment
  • Verify the environment URL is unique
  • Click Next

AWS Options Page

  • Select a key pair or create a new one
  • Click Next

Application Options Page

  • Click Next

Amazon RDS Database Security Group Page

  • Click Next

Review Page

  • Click Deploy

After you click Deploy, the application will be built and deployed to Elastic Beanstalk. The AWS Explorer will be refreshed to show the new environment, and the Environment view will be displayed as well.

Creating the AWS CloudFormation Template

CloudFormation uses templates—which are JSON text files—to script the creation of AWS resources. I’m going to create a template that will create my CloudFront distribution using the CloudFormation editor that is part of the AWS Toolkit for Visual Studio. To get started, I’m going to right-click on the solution, select Add New Project, and then select the AWS CloudFormation project.

In the project wizard, I’m going to select Create with empty template and then click Finish.

Once the project is created, I can use the following template to create the distribution. In the CloudFormation editor, you can hover over any of the keys to get a description of what they mean.

{
    "AWSTemplateFormatVersion" : "2010-09-09",

    "Description" : "",

The only parameter needed is the domain name of our application. In this case, it will be the URL of the Elastic Beanstalk environment. In other examples, this could be the DNS name of an Elastic Load Balancer or EC2 instance.

    "Parameters" : {
        "CloudFrontDomain" : {
            "Type" : "String",
            "Description" : "The domain of the website"
        }
    },

    "Resources" : {

Define the CloudFront distribution.

"Distribution" : {
            "Type" : "AWS::CloudFront::Distribution",
            "Properties" : {
                "DistributionConfig" : {
                    "DefaultRootObject" : "/",

An origin is the source of content for a distribution. In this case, there is only one origin: the Elastic Beanstalk environment. In advanced situations, there could be multiple origins; one use case would be an Elastic Beanstalk environment serving the dynamic content, with the static content coming from an Amazon S3 bucket. For that advanced case, refer to the CloudFront documentation on setting up multiple cache behaviors.

"Origins" : [
                        {
                            "DomainName" : { "Ref" : "CloudFrontDomain" },
                            "Id" : "webapp-dns",
                            "CustomOriginConfig" : {
                                "HTTPPort" : "80",
                                "HTTPSPort" : "443",
                                "OriginProtocolPolicy" : "match-viewer"
                            }
                        }
                    ],

All distributions have a default cache behavior that specifies which origin to use. The query string needs to be forwarded because the application serves up dynamic content based on the query string.

"DefaultCacheBehavior" : {
                        "ForwardedValues" : {
                            "QueryString" : true
                        },
                        "TargetOriginId"  : "webapp-dns",
                        "ViewerProtocolPolicy" : "allow-all"
                    },
                    "Enabled" : true,

This section enables CloudFront access logging. The logs are similar to IIS logs and are great for understanding the requests coming into your site.

"Logging" : {
                        "Bucket" : {"Fn::GetAtt" : [ "LoggingBucket", "DomainName"]},
                        "Prefix" : "cflogs/"
                    }
                }
            }
        },

Create an Amazon S3 bucket for the CloudFront logs to be delivered to.

"LoggingBucket" : {
            "Type" : "AWS::S3::Bucket",
            "Properties" : {
            }
        }
    },

    "Outputs" : {

Output the URL to access our web application through CloudFront.

"CloudFrontDomainName" : {
            "Value" : {"Fn::Join" : [ "", ["http://", {"Fn::GetAtt" : [ "Distribution", "DomainName"]}, "/" ] ]},
            "Description" : "Use this URL to access your website through CloudFront"
        },

Output the name of the Amazon S3 bucket created for the CloudFront logs to be delivered to.

"LoggingBucket" : {
            "Value" : { "Ref" : "LoggingBucket" },
            "Description" : "Bucket where CloudFront logs will be written to"
        }
    }
}

Deploying the AWS CloudFormation Template

With the template done, the next step is to deploy to CloudFormation, which will create a stack that represents all the actual AWS resources defined in the template. To deploy this template, I right-click on the template in Solution Explorer and then click Deploy to AWS CloudFormation.

On the first page of the wizard, I’ll enter cloudfrontdemo for the name of the stack that is going to be created. Then I click Next.

On the second page, which is for filling out any parameters defined in the template, I enter the Elastic Beanstalk environment DNS name, then click Next for the review page, and then click Finish.

Now CloudFormation is creating my stack with my distribution. Once the status of the stack transitions to CREATE_COMPLETE, I can check the Outputs tab to get the URL of the CloudFront distribution.
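If you’d rather fetch the stack outputs programmatically than through the Outputs tab, here is a minimal sketch using the SDK’s CloudFormation client (the region is an assumption; use the region you deployed the stack to):

// Fetch the outputs of the cloudfrontdemo stack.
var cfnClient = new AmazonCloudFormationClient(RegionEndpoint.USEast1);
var describeResponse = cfnClient.DescribeStacks(new DescribeStacksRequest
{
    StackName = "cloudfrontdemo"
});
foreach (var output in describeResponse.DescribeStacksResult.Stacks[0].Outputs)
{
    Console.WriteLine(output.OutputKey + " = " + output.OutputValue);
}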

Summary

When users hit my application using the distribution’s URL, they are directed to the nearest edge location. The edge location returns its cached value for the request; if it doesn’t have one, it reaches back to my Elastic Beanstalk environment to fetch the latest value. How long CloudFront caches values is controlled by the Cache-Control headers returned by the origin, which in this case is my Elastic Beanstalk environment. If there is no Cache-Control header, CloudFront defaults to 24 hours, which will be the case for all my static content such as images and JavaScript. By default, ASP.NET returns a Cache-Control header with a value of private for dynamic content, which indicates that all or part of the response message is intended for a single user and must not be cached by a shared cache. This way, the content coming from my controllers will always be fresh, whereas my static content can be cached. If the dynamic content can be cached for periods of time, I can indicate that to CloudFront using the HttpResponse object. For example, the code snippet below tells CloudFront that this content can be cached for 30 minutes.

protected void Page_Load(object sender, EventArgs e)
{
    // Mark the response as publicly cacheable with a 30-minute max age,
    // which CloudFront honors when deciding how long to cache the response.
    Response.Cache.SetCacheability(HttpCacheability.Public);
    Response.Cache.SetMaxAge(TimeSpan.FromMinutes(30));
}
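The snippet above is from a Web Forms code-behind. In an ASP.NET MVC controller action, the same headers can be set through the Response property; here is a minimal sketch (the controller and action names are hypothetical):

using System;
using System.Web;
using System.Web.Mvc;

public class ProductsController : Controller
{
    public ActionResult Index()
    {
        // Allow shared caches such as CloudFront to cache this response for 30 minutes.
        Response.Cache.SetCacheability(HttpCacheability.Public);
        Response.Cache.SetMaxAge(TimeSpan.FromMinutes(30));
        return View();
    }
}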

For more information on controlling the length of caches, review the CloudFront documentation on expiration.

Web Identity Federation using the AWS SDK for .NET

by Steve Roberts

Today’s post is about web identity federation. AWS Security Token Service (STS) has introduced this new feature, which allows customers to give constrained, time-limited access to their AWS resources to users who identify themselves via popular third-party identity providers (IdPs). AWS currently supports Amazon, Facebook, and Google as IdPs whose tokens can be used to gain access to AWS resources. This feature enables scenarios where app developers can give their customers controlled access to AWS resources under the developers’ own accounts, using the customers’ existing accounts with any of the IdPs. With this approach, developers don’t need to distribute their AWS credentials with their applications or manage accounts for their customers. If you are interested in using this feature in your Windows Phone or Windows Store apps, check out the Developer Preview of the next version of the AWS SDK for .NET. The Developer Preview adds support for .NET Framework 4.5 and the Windows Phone and WinRT platforms.

We’ll now look at the steps required for you to use web identity federation and a few C# code snippets that will show you how to get temporary access tokens and access AWS resources after authenticating with an IdP. We are using Facebook as the IdP in the sample below. For details on using other IdPs, check this link.

Setting up an IAM role

We start off by creating an IAM role (this is a one-time activity). This is the role that your users will assume when they successfully authenticate through an IdP. When you create this role, you need to specify two policies: the trust policy, which specifies who can assume the role (the trusted entity, or principal), and the access policy, which describes the privileges associated with the role. Below is an example of a trust policy using Facebook as the IdP.

{
  "Version":"2012-10-17",
  "Id":"RoleForFacebook",
  "Statement":[{
      "Principal":{"Federated":"graph.facebook.com"},
      "Effect":"Allow",
      "Action":"sts:AssumeRoleWithWebIdentity",
      "Condition": {
          "StringEquals":
              {"graph.facebook.com:app_id":"MY_APP_ID"}
       }
   }]
}

You’ll need to replace the string MY_APP_ID with your Facebook app ID. This policy allows users authenticated through the Facebook IdP to use the web identity federation API (the AssumeRoleWithWebIdentity operation), which grants the users temporary AWS credentials. It also includes a condition requiring that the Facebook app ID match the one specified. This policy also makes use of policy variables, which are discussed in more detail here.

When creating your IAM role via the AWS Management Console, the Role Creation wizard will walk you through the process of creating the trust policy, but you will need to supply the access policy by hand. Below is the access policy that specifies the privileges associated with this role. In this sample, we will provide access to S3 operations on a bucket designated for the Facebook app. You’ll need to replace MY_APPS_BUCKET_NAME with the bucket name for your app.

{
 "Version":"2012-10-17",
 "Statement":[{
   "Effect":"Allow",
   "Action":["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
   "Resource": "arn:aws:s3:::MY_APPS_BUCKET_NAME/${graph.facebook.com:id}/*"
  },
  {
   "Effect":"Allow",
   "Action":"s3:ListBucket",
   "Resource":["arn:aws:s3:::MY_APPS_BUCKET_NAME"],
   "Condition":
     {"StringLike":
       {"s3:prefix":"${graph.facebook.com:id}/"}
     }
  }
 ]
}

The first statement in this policy allows each user of the app to Get, Put, or Delete objects in the specified S3 bucket under a prefix containing their Facebook user ID. This has the effect of creating "folders" for each user, under which their objects reside. The second statement allows users to list only their own objects, by enforcing the prefix condition on the specified bucket.

Connecting with the identity provider

Now that the IAM role is in place, you can call the STS AssumeRoleWithWebIdentity API, specifying the ARN of the IAM role that you created and a token provided by the IdP. In this example using Facebook, the token would be the access token that Facebook Login provides in response to an authentication request (the details of getting the Facebook access token are not covered in this post; this link is a good starting point for understanding the Facebook login process). Here is the C# snippet for calling AssumeRoleWithWebIdentity. Notice that we pass an AnonymousAWSCredentials object for the credentials parameter when constructing the STS client, as we do not need AWS credentials to make this call.

var stsClient = new AmazonSecurityTokenServiceClient(new AnonymousAWSCredentials());

// Assume the role using the token provided by Facebook.
var assumeRoleResult = stsClient.AssumeRoleWithWebIdentity(new AssumeRoleWithWebIdentityRequest
{
    WebIdentityToken = "FACEBOOK_ACCESS_TOKEN",
    ProviderId = "graph.facebook.com",
    RoleArn = "ROLE_ARN",
    RoleSessionName = "MySession",
    DurationSeconds = 3600
}).AssumeRoleWithWebIdentityResult;

Here are the parameters we pass to the API.

  • WebIdentityToken – the token received from the IdP after a user authenticates with it.
  • ProviderId – the name of the IdP. The supported values are graph.facebook.com, www.amazon.com, and googleapis.com.
  • RoleArn – the Amazon Resource Name of the role the user will assume. The ARN is of the format arn:aws:iam::123456789012:role/RoleForFacebook.
  • RoleSessionName – the name to give to this specific session. This name is used to identify the session.
  • DurationSeconds – the duration for which the security token that is returned will be valid, in seconds. The default value is 1 hour (3600 seconds).

Accessing AWS resources

The AssumeRoleWithWebIdentity API returns a session token that your application can use to access any resource mapped to the role. This is done by constructing a SessionAWSCredentials object and using it for subsequent calls to access resources and perform actions permitted by the assumed role. The sample code below accesses the objects in the app’s S3 bucket and performs operations on them. Remember that in the access policy you provided when creating the role, users were restricted to S3 objects with their Facebook user ID prefixed to the path. Here, assumeRoleResult.SubjectFromWebIdentityToken is the Facebook-provided user ID of the customer, and objectName is the name of the S3 object being created.

// Create an S3 client using the session credentials returned by STS.
var credentials = assumeRoleResult.Credentials;
var sessionCreds = new SessionAWSCredentials(credentials.AccessKeyId, credentials.SecretAccessKey, credentials.SessionToken);
var s3Client = new AmazonS3Client(sessionCreds, RegionEndpoint.USEast1);

var key = string.Format("{0}/{1}", assumeRoleResult.SubjectFromWebIdentityToken, objectName);

// Put an object in the user's "folder".
s3Client.PutObject(new PutObjectRequest
{
    BucketName = "MY_APPS_BUCKET_NAME",
    Key = key,
    ContentBody = content
});

// List objects in the user's "folder".
var listObjectResponse = s3Client.ListObjects(new ListObjectsRequest
{
    BucketName = "MY_APPS_BUCKET_NAME",
    Prefix = assumeRoleResult.SubjectFromWebIdentityToken + "/"
});

// Get the object with the specified key.
var getObjectResponse = s3Client.GetObject(new GetObjectRequest
{
    BucketName = "MY_APPS_BUCKET_NAME",
    Key = key
});
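As a quick usage sketch, you can enumerate the list response to see the keys under the user’s prefix:

// Print the keys returned for the user's "folder".
foreach (var s3Object in listObjectResponse.S3Objects)
{
    Console.WriteLine(s3Object.Key);
}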

Summary

In this post, we saw how web identity federation can be used to give access to AWS resources to customers who authenticate through one of the supported IdPs. We also walked through the steps and code snippets to use this feature.

AWS SDK for .NET Version 2.0 Preview

by Norm Johanson

Today, we are excited to announce a preview of our upcoming version 2 of the AWS SDK for .NET, which you can download here.

One of the most exciting new features of version 2 is the ability to use the SDK in Windows Store and Windows Phone 8 apps. As with other SDKs for these new platforms, all methods that make requests to AWS are asynchronous.

Another big improvement we made for asynchronous programming is that when you target Windows Store, Windows Phone 8, or .NET Framework 4.5, the SDK uses the new Task-based pattern for asynchronous programming instead of the old style of paired Begin and End methods. Version 2 of the SDK also includes a version compiled for .NET Framework 3.5 that retains the Begin and End methods, for applications that aren’t yet ready to move to .NET 4.5.
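As a rough sketch of what the Task-based pattern looks like (assuming the .NET 4.5 build of the preview and its Amazon S3 client, whose async methods follow the Async suffix convention):

// A minimal sketch using async/await against the version 2 preview.
// Assumes the .NET 4.5 build exposes a Task-based ListBucketsAsync on the S3 client.
public async Task ListMyBucketsAsync(AmazonS3Client s3Client)
{
    var response = await s3Client.ListBucketsAsync(new ListBucketsRequest());
    foreach (var bucket in response.Buckets)
    {
        Console.WriteLine(bucket.BucketName);
    }
}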

For a deeper dive into the differences in version 2, check out our migration guide.

We would really like to hear your feedback on version 2. If you have any suggestions for what you would like to see in the new SDK, or if you run into any issues trying out the preview, please let us know through our forums.

AWS.Extensions renaming

by Pavel Safronov

Earlier this week, you may have noticed that the assembly AWS.Extensions—which contained DynamoDBSessionStateStore—has been renamed to AWS.SessionProvider. Our original intent with AWS.Extensions was to create a place for SDK extensions, which aren’t strictly part of the AWS SDK for .NET. We have since developed another extension, DynamoDBTraceListener, a TraceListener that allows the logging of trace and debug output to Amazon DynamoDB.

Unfortunately, the two extensions have distinct requirements: DynamoDBSessionStateStore references System.Web, and thus cannot be used in a Client Profile, while DynamoDBTraceListener does not. So to avoid requiring customers to reference unnecessary assemblies, we’ve decided to separate the AWS.Extensions project into multiple task-oriented solutions. Thus, customers who only require DynamoDB logging will not have to import the server assemblies required by the session provider.

Migration

If you are referencing AWS.Extensions.dll in your project, simply change the reference to AWS.SessionProvider.dll. No code changes are required.

NuGet users should remove the reference to AWS.Extensions and instead use the new AWS.SessionProvider package. The existing AWS.Extensions package is now marked OBSOLETE.
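For reference, here is a sketch of how the session provider is typically wired up in web.config; the provider type name and attributes shown are assumptions, so check the AWS.SessionProvider documentation for the exact names:

<sessionState mode="Custom" customProvider="DynamoDBSessionStoreProvider">
  <providers>
    <!-- The type name and attributes below are assumptions; verify them against the AWS.SessionProvider assembly. -->
    <add name="DynamoDBSessionStoreProvider"
         type="Amazon.SessionProvider.DynamoDBSessionStateStore, AWS.SessionProvider"
         AWSAccessKey="YOUR_ACCESS_KEY"
         AWSSecretKey="YOUR_SECRET_KEY"
         Region="us-east-1" />
  </providers>
</sessionState>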

DynamoDBTraceListener

by Pavel Safronov

We recently introduced the DynamoDBTraceListener, a System.Diagnostics TraceListener that can be used to log events straight to Amazon DynamoDB. In this post, we show how simple it is to configure the listener and how to customize the data that is being logged.

Configuration

You can configure the listener either through code or by using a config file. (For console applications, this will be app.config, while IIS projects will use web.config.) Here is a sample configuration that lists a few of the possible configuration parameters:

<system.diagnostics>
  <trace autoflush="true">
    <listeners>
      <add name="dynamo" type="Amazon.TraceListener.DynamoDBTraceListener, AWS.TraceListener"
                      Region="us-west-2"
                      ExcludeAttributes="Callstack"
                      HashKeyFormat="%ComputerName%-{EventType}-{ProcessId}"
                      RangeKeyFormat="{Time}"
        />
    </listeners>
  </trace>    
</system.diagnostics>  

Web.config parameters

Here are all the possible parameters you can define in the config file, their meanings and defaults:

  • AWSAccessKey : Access key to use.
  • AWSSecretKey : Secret key to use. The access and secret keys can be set either in the listener definition or in the appSettings section. If running on an EC2 instance with a role, the listener can use the instance credentials. When specifying these, consider using an IAM user with a restricted policy like the example at the bottom of this post.
  • Region : Region to use DynamoDB in. The default is "us-west-2".
  • Table : Table to log to. The default is "Logs".
  • CreateIfNotExist : Controls whether the table will be auto created if it doesn’t exist. The default is true. If this flag is set to false and the table doesn’t exist, an exception is thrown.
  • ReadCapacityUnits : Read capacity units if the table is not yet created. The default is 1.
  • WriteCapacityUnits : Write capacity units if the table is not yet created. The default is 10.
  • HashKey : Name of the hash key if the table is not yet created. The default is "Origin".
  • RangeKey : Name of the range key if the table is not yet created. The default is "Timestamp".
  • MaxLength : Maximum length of any single attribute. The default is 10,000 characters ("10000").
  • ExcludeAttributes : Comma-separated list of attributes that should not be logged. The default is null – all possible attributes are logged.
  • HashKeyFormat : Format of the hash-key for each logged item. Default format is "{Host}". See format description below.
  • RangeKeyFormat : Format of the range-key for each logged item. Default format is "{Time}". See format description below.
  • WritePeriodMs : Frequency of writes to DynamoDB, in milliseconds. The listener will accumulate logs in a local file until this time has elapsed. The default is one minute ("60000").
  • LogFilesDir : Directory to write temporary logs to. If you don’t specify a directory, the listener attempts to use the current directory, then the temporary directory. If neither is available for writing, the listener will be disabled.

Hash/range key formats

As you’ve noticed from our example, the hash and range keys can be compounded. The format can consist of strings, existing attribute names (e.g., {Host}), environment variables (e.g., %ComputerName%), or any combination of these. Here is an example that combines all possible approaches:

Prod-%ComputerName%-{EventType}

When constructing the format, you can use the following attributes: Callstack, EventId, EventType, Host, Message, ProcessId, Source, ThreadId, Time. These are also the attributes that can be excluded from being logged with the ExcludeAttributes configuration.

Using DynamoDBTraceListener programmatically

Should you need to create and use the listener in code, it is a simple and straightforward operation. The next sample shows how to create and invoke a listener.

DynamoDBTraceListener listener = new DynamoDBTraceListener
{
    Configuration = new DynamoDBTraceListener.Configs
    {
        AWSCredentials = new BasicAWSCredentials(accessKey, secretKey),
        Region = RegionEndpoint.USEast1,
        HashKeyFormat = "%ComputerName%-{EventType}"
    }
};
listener.WriteLine("This is a test", "Test Category");
listener.Flush();
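You can also register the listener with the standard System.Diagnostics trace infrastructure so that existing Trace calls flow through it; a quick sketch:

// Register the listener globally so that Trace.* calls are logged to DynamoDB.
Trace.Listeners.Add(listener);
Trace.TraceInformation("Application started");
Trace.Flush();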

Background logging

DynamoDBTraceListener logs events in two separate stages. First, we write the event data to a file on disk. Then, at periodic intervals, these files are pushed to DynamoDB. We use this approach for a number of reasons, including asynchronous logging and the batching of writes, but most importantly to prevent loss of data if the hosting application terminates unexpectedly. If that happens, any existing log files are pushed to DynamoDB the next time the application runs and the listener flushes its logs.

Even though the listener writes data to DynamoDB on a periodic basis, it is important to remember to flush the listener or to properly dispose of whatever resources you have that log, such as the client objects in the AWS SDK for .NET. Otherwise, you may find some of your logs are not being uploaded to DynamoDB.

When the listener first starts, we attempt to find a directory for the log files. Three different locations are considered: LogFilesDir, if one is configured by the user; the directory containing the current assembly; the current user’s temporary folder (as resolved by the Path.GetTempPath method). Once a location is determined, an information event is written to the Event Log specifying the current logging location. If none of these locations are available, however, an error event is written to the Event Log and the listener is disabled.

IAM user

For safety, you may not want to put your root account credentials in the application config. A much better approach is to create an IAM user with specific permissions. Below is an example of a policy that limits a user’s permissions to just DynamoDB and only for those operations that the listener actually uses. Furthermore, we’re limiting access to just the log table.

{
  "Statement" : [
    {
      "Effect" : "Allow",
      "Action" : [
        "dynamodb:DescribeTable",
        "dynamodb:CreateTable",
        "dynamodb:BatchWriteItem"
      ],
      "Resource" : "arn:aws:dynamodb:us-west-2:YOUR-ACCOUNT-ID:table/Logs"
    }
  ]
}

IAM Roles

If you are using DynamoDBTraceListener in an environment that is configured with an IAM Role, you can omit the AWSAccessKey and AWSSecretKey parameters from the config file. In this case, DynamoDBTraceListener will access DynamoDB with permissions configured for the IAM Role.

Customizing Windows Elastic Beanstalk Environments – Part 2

by Jim Flanagan

In the previous post in this series, we introduced the .ebextensions/*.config file and showed how you can use it to install packages, download files, run commands, and start services.

In this post, we’re going to dig a little bit into managing settings through this mechanism.

Writing configuration files

A common way to configure software systems is through text-based configuration files. Elastic Beanstalk gives us a couple of ways to write files in the filesystem that are not necessarily part of our web application, and which may live outside the web application directory. Files can be downloaded from a web-accessible location (such as an S3 bucket) or inlined directly in the .ebextensions/*.config file.

files:
   c:/MyApplicationSupport/main.conf:
     content: |
       <configuration>
         <environment>production</environment>
         <maxConnections>500</maxConnections>
         <defaultUser>guest</defaultUser>
       </configuration>
   c:/MyApplicationSupport/auxilliary.conf:
     source: http://my-application-support/auxilliary.conf

The first file in the files: array shows an example of inlining.

Inlining can be handy for situations where you are adjusting content frequently—for example, during development when you are deploying your application more often. Using the source: key requires uploading any changes before deployment, so it’s a better method for more complex or less volatile files.

Variable interpolation

Another benefit to inlining is that you can use the AWS CloudFormation intrinsic functions to interpolate information from the CloudFormation template associated with your Elastic Beanstalk environment into your configuration files.

Here are some examples of interpolation:

files:
   c:/cfn/environmentInfo.txt:
     content : |
       Environment Name: `{"Ref": "AWSEBEnvironmentName" }`
       Environment Id:   `{"Ref": "AWSEBEnvironmentId" }`
       Instance Type:    `{"Ref": "InstanceType" }`
       Stack Id:         `{"Ref": "AWS::StackId" }`
       Region:           `{"Ref": "AWS::Region" }`
       AMI Id:           `{"Fn::FindInMap": [ "AWSEBAWSRegionArch2AMI", { "Ref": "AWS::Region" }, { "Fn::FindInMap": [ "AWSEBAWSInstanceType2Arch", { "Ref": "InstanceType" }, "Arch" ]}]}`

For more ideas about what can be extracted from the CloudFormation template, you can inspect the template for your environment using the AWS Toolkit for Visual Studio. To do that, simply add a project of type AWS CloudFormation to your solution. When you create the project, you will be prompted with a dialog to choose a template source. Choose Create from existing AWS CloudFormation Stack, choose the correct account, region, and the stack associated with your environment, and then click Finish.

Customizing Windows Elastic Beanstalk Environments – Part 1

by Jim Flanagan

AWS Elastic Beanstalk recently announced support for customizing Windows environments with configuration files. Before this, the only way to customize a .NET container was to create a custom AMI in each region you wanted to run your application in.

Adding a configuration file to your application allows you to

  • install packages
  • write files in locations other than the application folder
  • execute commands, and run scripts
  • start services
  • set configuration options

    • for your application
    • for the Elastic Beanstalk environment.

Let’s walk through a simple example to show how it’s done.

Installing packages and starting custom services

Our hypothetical application MyApp relies on a third-party package called WebMagicThingy, which is packaged as an MSI. In addition, we have written a Windows service that periodically performs maintenance and cleanup operations on the host for our application. We want to install and start that service on each instance in our Elastic Beanstalk environment.

The initial step is to make our service code and the third-party package available on the web. We’ll put them in an S3 bucket called my-app-support.

my-app-support/WebMagicThingy.msi
my-app-support/MyAppJanitor.zip

Next, we’ll create a folder in our application called .ebextensions, and in that folder create a file called MyAppSupport.config. The .ebextensions folder can contain more than one file with a .config extension. You can include these files in your project, or you can select All Files in the Project Folder for the Items to deploy option on the Package/Publish Web tab of the project properties pane to ensure that they are included in the deployment bundle.

The format of the configuration files is YAML. Visual Studio expects files with a .config extension to be XML files that conform to a specific schema, so it may be easier to create these files in an external editor, then include them in the project. Ours will look like this:

 packages:
   msi:
     WebMagicThingy: http://s3.amazonaws.com/my-app-support/WebMagicThingy.msi
 sources:
   c:/AppSupport/MyAppJanitor: http://s3.amazonaws.com/my-app-support/MyAppJanitor.zip
 commands:
   install-janitor:
     command: C:\Windows\Microsoft.NET\Framework\v4.0.30319\installutil MyAppJanitor.exe
     cwd: c:/AppSupport/MyAppJanitor
     waitForCompletion: 0
 services:
   windows:
     MyAppJanitor:
       enabled: true
       ensureRunning: true
       commands: install-janitor

Each level of indentation is two spaces. Take care that your editor doesn’t replace consecutive spaces with tabs, or the file will not be interpreted correctly, and Elastic Beanstalk will stop the deployment with an error.

This configuration does four things on each instance in the Elastic Beanstalk environment:

  • Installs the MSI that we put in our S3 bucket earlier
  • Expands the MyAppJanitor.zip from the S3 bucket to the location c:/AppSupport/MyAppJanitor
  • Runs installutil.exe to install the MyAppJanitor service

    • The cwd: directory allows you to specify what folder the command is run in
    • If waitForCompletion is not specified for a command, the container will wait for 60 seconds by default.
  • Makes sure that the service is started

You may notice inconsistent use of path separators and escaping in various places in the file. Most directives can use forward-slash as the path separator, but invoking commands that are not in the PATH requires escaped backward-slash path separators.

In upcoming posts, we’ll explore other ways to customize Windows Elastic Beanstalk environments with the .ebextensions mechanism. In the meantime, you can explore the Elastic Beanstalk documentation on the topic and see what things you come up with.

Output Pagination with AWS Tools for PowerShell

by Pavel Safronov

Among the changes to the cmdlets in version 1.1 of the AWS Tools for Windows PowerShell is added support for both automatic and manual pagination of the output from services to the pipeline. Most of the time, you’ll probably want to use automatic paging to get all the data from a cmdlet, but on occasion you may prefer to control this yourself.

Automatic Pagination

Making use of automatic output pagination is simple—run the cmdlet with no paging parameters:

"mybucket" | Get-S3Object

The output from this to the pipeline will be zero or more S3Object instances. By default, Amazon S3 returns a maximum of 1000 items per call, so the cmdlet keeps calling on your behalf and emitting each batch of results to the pipeline until S3 signals that there is no more content.

Here’s another example of using automatic pagination in conjunction with the improved pipeline support, in this case to get the total size (in gigabytes) of a bucket:

((Get-S3Object -BucketName mybucket).Size | Measure-Object -Sum).Sum / 1024 / 1024 / 1024

Automatic pagination doesn’t just work with Amazon S3:

Get-CFNStack | Get-CFNStackResources

In this example, Get-CFNStack enumerates all of your AWS CloudFormation stacks (in the current region set for the shell) and emits each to the downstream Get-CFNStackResources cmdlet to get additional resource information on the stack.

If you’re curious, or need to diagnose a problem, you can see the repeated calls that the cmdlets make to obtain a full set of results by inspecting the entries in the $AWSHistory shell variable (also new in version 1.1). You can also use it to see how the cmdlets track the ‘next page’ marker for each service call on your behalf.

Controlling Pagination Yourself

Most of the time, automatic pagination can be used without further thought. However, there may be occasions when you are dealing with huge data sets in a memory-constrained environment. In these cases, you may elect to control the pagination yourself. The names of the parameters to use to control paging do vary by service but are usually called NextMarker, Marker, NextToken, and so on.

The cmdlets for S3 use parameters called MaxKeys and Marker to control the iteration (and NextMarker and IsTruncated fields in the service response to indicate where the next page of results can be fetched, if any). The first example in this post can be rewritten like this to fetch all the S3Object instances in batches of 50 at a time:

$nextMarker = $null
$keysPerPage = 50
do
{
    $objects = Get-S3Object "mybucket" -KeyPrefix / -MaxKeys $keysPerPage -Marker $nextMarker
    $nextMarker = $AWSHistory.LastServiceResponse.NextMarker
    # do something with the batch of results in $objects here
} while ($nextMarker)

When you handle paging yourself, be sure to capture the ‘next marker’ token from the response straight away and before you make another call to AWS; otherwise, the response that $AWSHistory.LastServiceResponse points to may not be what you think it is.

Similar patterns can be followed for other services that support paginated result sets.

Parameter Aliases

As mentioned above, Amazon S3 cmdlets use MaxKeys and Marker parameters to expose pagination, mirroring the underlying service API. Other services use different names (NextToken for example, or MaxRecords). This can be difficult to remember, especially if you’re unfamiliar with the service. To help, we add aliases to the parameters so that you can expect a consistent naming scheme, regardless of the service. These aliases are NextToken and MaxItems. MaxItems, by the way, is the maximum number of items to emit to the pipeline, which may be more or less than the underlying service’s maximum per call.

Using aliases, these two commands are the same:

Get-S3Object mybucket -KeyPrefix / -Marker "token2" -MaxKeys 200
Get-S3Object mybucket -KeyPrefix / -NextToken "token2" -MaxItems 200

Summary

In this post, we covered how to make use of the automatic results pagination now available in the AWS PowerShell cmdlets, and how to take control and perform the pagination yourself using parameters and aliases for a uniform experience across services. We also saw how to access additional information from the service responses logged by the new $AWSHistory shell variable.

Instance Status Checks with the AWS SDK for .NET

by Wade Matveyenko

A question that we have heard from our customers is, "How do we get access to the Amazon EC2 instance status checks?" If you go to the AWS Management Console, you can easily see those status checks displayed. The Amazon EC2 DescribeInstanceStatus API action returns the instance status for one or more EC2 instances. This code shows you how to make that call with the AWS SDK for .NET:

var instanceId = "yourInstanceIdHere";
var ec2Client = new AmazonEC2Client(RegionEndpoint.USWest2);
var statusRequest = new DescribeInstanceStatusRequest
{
    InstanceId = { instanceId }
};
var result = ec2Client.DescribeInstanceStatus(statusRequest).DescribeInstanceStatusResult;

This block of code returns an InstanceStatusResult. Inside this object is an array of InstanceStatus objects, one for each instance you asked about. In this example, since we asked for only one instance, there is only one element in the array, which you access by retrieving result.InstanceStatus[0]. Status checks are separated into two types: system status checks and instance status checks. For more information about the types of status checks, see the EC2 documentation—Monitoring Instances with Status Checks. This code shows how to access the EC2 status checks and writes the output to the console:

// Get the status object for the single instance we asked about.
var status = result.InstanceStatus[0];

//Get the instance status checks
Console.WriteLine("Instance Status = " +
    status.InstanceStatusDetail.Status);
Console.WriteLine("Instance Status Detail Name = " +
    status.InstanceStatusDetail.Detail[0].Name);
Console.WriteLine("Instance Status Detail Status = " +
    status.InstanceStatusDetail.Detail[0].Status);

//Get the system status checks
Console.WriteLine("System Status = " +
    status.SystemStatusDetail.Status);
Console.WriteLine("System Status Detail Name = " +
    status.SystemStatusDetail.Detail[0].Name);
Console.WriteLine("System Status Detail Status = " +
    status.SystemStatusDetail.Detail[0].Status);

The output for this looks like:

Instance Status = ok
Instance Status Detail Name = reachability
Instance Status Detail Status = passed
System Status = ok
System Status Detail Name = reachability
System Status Detail Status = passed

The current status check API has only one check, so the Detail property contains a single element. The Name and Status properties contain the name of the check, currently only reachability, and the status of the check, which can be passed, failed, or insufficient-data. There is another property, ImpairedSince, which contains a String indicating when the status check failed; if the status check did not fail, this property is empty.
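For example, here is a minimal sketch of reacting to a failed check using the properties described above:

// Inspect the system status check and report when it began failing.
var detail = status.SystemStatusDetail.Detail[0];
if (detail.Status == "failed")
{
    Console.WriteLine("Check " + detail.Name +
        " has been failing since " + detail.ImpairedSince);
}

Now that you know how to use instance status checks, how will you use them? Let us know in the comments!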

Getting the Latest Windows AMIs

by Steve Roberts

More and more developers are launching the AWS base Windows AMIs and configuring them during startup, either by adding a PowerShell script to the user data field or by using an AWS CloudFormation template. We are constantly updating these base AMIs to include the latest patches. The SDK contains the ImageUtilities class, which you can find in the Amazon.EC2.Util namespace. This class is useful for finding the latest AMIs using named constants that are independent of service pack or RTM versions. For example, the following code will find the latest Windows Server 2012 with SQL Server Express AMI:

Image image = ImageUtilities.FindImage(ec2Client, ImageUtilities.WINDOWS_2012_SQL_SERVER_EXPRESS_2012);

Using the version-independent constants means that you do not need to rebuild your code when the Amazon EC2 team revises the published AMIs. The new EC2 sample that was recently added to Visual Studio under "Compute and Networking" demonstrates how to use the ImageUtilities class and execute a PowerShell script at startup. Using the AWS Tools for PowerShell, you can also use the Get-EC2ImageByName cmdlet:

"WINDOWS_2012_SQL_SERVER_EXPRESS_2012" | Get-EC2ImageByName | New-EC2Instance ...

The cmdlet accepts either the logical, service pack/RTM-independent names or specific name patterns. The current names can be seen by invoking the cmdlet with no parameters. Just like using the SDK, if you script using logical names to address the AMIs, your script does not need to be updated when Amazon EC2 revises the current AMIs as new service packs are released!