Updates to Credential and Region Handling

by Steve Roberts | in .NET

Version 3.1.73.0 of the AWS Tools for Windows PowerShell and the AWS SDK for .NET (AWSSDK.Core version 3.1.6.0), released today, contains enhancements to the way credentials and region data can be supplied to your SDK applications and PowerShell scripts, including the ability to use SAML federated credentials in your SDK applications. We’ve also refactored support for querying Amazon EC2 instance metadata and made it available to your code. Let’s take a look at the changes.

SDK Updates

Credential Handling

In 2015, we launched support for using SAML federated credentials with the AWS PowerShell cmdlets. (See New Support for Federated Users in the AWS Tools for Windows PowerShell.) We’ve now extended the SDK so that applications written against it can also use the SAML role profiles described in the blog post. To use this support in your application, you must first set up the role profile. The details for using the PowerShell cmdlets to do this appear in the blog post. Then, in the same way you do with other credential profiles, you simply reference the profile in your application’s app.config/web.config files with the AWSProfileName appsetting key. The SAML support to obtain AWS credentials is contained in the SDK’s Security Token Service assembly (AWSSDK.SecurityToken.dll), which is loaded at runtime. Be sure this assembly is available to your application at runtime.
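
For example, if you set up a SAML role profile named MySAMLRoleProfile (a placeholder name here), a minimal app.config sketch that points the SDK at it would look like this:

<configuration>
  <appSettings>
    <!-- The named profile can hold regular AWS credentials or be a SAML role profile. -->
    <add key="AWSProfileName" value="MySAMLRoleProfile"/>
  </appSettings>
</configuration>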

The SDK has also been updated to support reading credentials from the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables (the same variables used with other AWS SDKs). For legacy support, the AWS_SECRET_KEY variable is still supported.

If no credentials are supplied to the constructors of the service clients, the SDK will probe to find a set to use. As of this release, the probing tests are performed in the following order:

  1. If an explicit access key/secret access key or profile name is found in the application’s app.config/web.config files, use it.
  2. If a credential profile named "default" exists, use it. (This profile can contain AWS credentials or it can be a SAML role profile.)
  3. If credentials are found in environment variables, use them.
  4. Finally, check EC2 instance metadata for instance profile credentials.

Specifying Region

To set the region when instantiating AWS service clients in your SDK applications, you used to have two options: hard-code the region in the application code, using either the system name (for example, ‘us-east-1’) or the RegionEndpoint helper properties (for example, RegionEndpoint.USEast1), or supply the region system name in the application’s app.config/web.config files using the AWSRegion appsetting key. The SDK has now been updated to enable region detection through an environment variable or, if your code is running on an EC2 instance, from instance metadata.

To use an environment variable to set the AWS region, you simply set the AWS_REGION variable (the same variable used by the other AWS SDKs) to the system name of the region you want service clients to use. If you need to override this for a specific client, simply pass the required region in the service client constructor.

When running on an EC2 instance, your SDK-based applications will auto-detect the region in which the instance is running from EC2 instance metadata if no explicit setting is found. This means you can now deploy code without needing to hard-code any region in your app.config/web.config files. You can instead rely on the SDK to auto-detect the region when your application instantiates clients for AWS services.

Just as with credentials, if no region information is supplied to a service client constructor, the SDK probes to see whether the region can be determined automatically. As of this release, these tests are performed, in order:

  1. Is an AWSRegion appsetting key present in the application’s app.config/web.config files? If so, use it.
  2. Is the AWS_REGION environment variable set? If so, use it.
  3. Attempt to read EC2 instance metadata and obtain the region in which the instance is running.
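
The following minimal sketch illustrates the combined behavior (it assumes the AWSSDK.S3 NuGet package is referenced, and the region name is an example only); any service client behaves the same way:

using Amazon;
using Amazon.S3;

class Program
{
    static void Main()
    {
        // No credentials or region supplied: the SDK walks the credential
        // and region chains described above (config file, environment
        // variables, then EC2 instance metadata where applicable).
        var defaultClient = new AmazonS3Client();

        // An explicit region passed to the constructor always overrides the chain.
        var euClient = new AmazonS3Client(RegionEndpoint.EUWest1);
    }
}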

PowerShell Updates

Credential Handling

You can now supply credentials to the cmdlets by using the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. (You might find this helpful when you attempt to run the cmdlets in a user identity where credential profiles are inconvenient to set up, for example, the local system account.)
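
As a quick sketch (the key values below are the standard placeholder keys from the AWS documentation, not real credentials), you can set the variables for the current session and then call a cmdlet without any explicit credential parameters:

# set credentials for the current PowerShell session only
$env:AWS_ACCESS_KEY_ID     = "AKIAIOSFODNN7EXAMPLE"
$env:AWS_SECRET_ACCESS_KEY = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

# no -ProfileName or -AccessKey/-SecretKey needed; the cmdlet picks up the
# environment variables when no higher-priority credential source is found
Get-S3Bucket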

If you have enabled SAML federated authentication for use with the cmdlets, they now support the use of proxy data configured using the Set-AWSProxy cmdlet when making authentication requests against the ADFS endpoint. Previously, a proxy had to be set at the machine-wide level.
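
For example, a shell-scoped proxy can be configured along these lines before authenticating (the host name and port are placeholders for your own proxy):

# configure a proxy for the current shell; SAML authentication requests
# to the ADFS endpoint will now be routed through it
Set-AWSProxy -Hostname proxy.example.corp -Port 8080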

When the AWS cmdlets run, they follow this series of tests to obtain credentials:

  1. If explicit credential parameters (-AccessKey, -SecretKey, -SessionToken, for example) have been supplied to the cmdlet or if a profile has been specified using the -ProfileName parameter, use those credentials. The profile supplied to -ProfileName can contain regular AWS credentials or it can be a SAML role profile.
  2. If the current shell has been configured with default credentials (held in the $StoredAWSCredentials variable), use them.
  3. If a credential profile with the name "default" exists, use it. (This profile can contain regular AWS credentials or it can be a SAML role profile.)
  4. If the new AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY environment variables are set, use the credentials they contain.
  5. If EC2 instance metadata is available, look for instance profile credentials.

Specifying Region

In addition to existing support for specifying region using the -Region parameter on cmdlets (or setting a shell-wide default using Set-DefaultAWSRegion), the cmdlets in the AWSPowerShell module can now detect region from the AWS_REGION environment variable or from EC2 instance metadata.

Some users run the Initialize-AWSDefaults cmdlet when opening a shell on an EC2 instance. Now that the region can be detected from instance metadata, the first time you run this cmdlet on an EC2 instance you are no longer prompted to select a region from the menu. If you want your scripts to call AWS services in a region other than the one in which the instance is running, you can override the detected default by using the Set-DefaultAWSRegion cmdlet in your shell or scripts, or by supplying the -Region parameter, with an appropriate value, to individual cmdlets.
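
A short sketch of the options (the region names are examples only):

# shell-wide default used by subsequent cmdlet calls
Set-DefaultAWSRegion -Region us-west-2

# per-call override
Get-EC2Instance -Region eu-west-1

# or rely on the environment variable fallback
$env:AWS_REGION = "ap-southeast-2"
Get-S3Bucket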

Just as with credentials, the cmdlets will search for the appropriate region to use when invoked:

  1. If a -Region parameter was supplied to the cmdlet, use it.
  2. If the current shell contains a default region ($StoredAWSRegion variable), use it.
  3. If the AWS_REGION environment variable is set, use it.
  4. If the credential profile ‘default’ exists and it contains a default region value (set by previous use of Initialize-AWSDefaults), use it.
  5. If EC2 instance metadata is available, inspect it to determine the region.

Reading EC2 Instance Metadata

As part of extending the SDK and PowerShell tools to read region information from EC2 instance metadata, we have refactored the metadata reader class (Amazon.EC2.Util.EC2Metadata) from the AWSSDK.EC2.dll assembly into the core runtime (AWSSDK.Core.dll) assembly. The original class has been marked obsolete.

The replacement class is Amazon.Util.EC2InstanceMetadata. It contains additional helper methods to read more of the EC2 instance metadata than the original class. For example, you can now read from the dynamic data associated with the instance. For more information, see http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-instance-metadata.html. Region information is held in what is known as the identity document for the instance. The document is in JSON format. The class contains a helper property, Region, which extracts the relevant data for you and returns it as a RegionEndpoint instance, making it super easy to query this in your own applications. You can also easily read the instance monitoring, signature, and PKCS7 data from convenient properties.
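
Here is a minimal sketch of reading the new class from SDK code (it only returns data when run on an EC2 instance, and it assumes the static helper properties exposed in this release of AWSSDK.Core):

using System;
using Amazon.Util;

class MetadataSample
{
    static void Main()
    {
        // Region is extracted from the instance identity document and
        // returned as a RegionEndpoint instance
        Console.WriteLine("Region: {0}", EC2InstanceMetadata.Region.SystemName);
        Console.WriteLine("Instance id: {0}", EC2InstanceMetadata.InstanceId);
        Console.WriteLine("Availability zone: {0}", EC2InstanceMetadata.AvailabilityZone);
    }
}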

We’ve not forgotten scripters either! Previously, to read instance metadata from PowerShell, you had to run the Invoke-WebRequest cmdlet against the metadata endpoint and parse the data yourself. The AWSPowerShell module now contains a cmdlet dedicated to the task: Get-EC2InstanceMetadata. Some examples:

PS C:\Users\Administrator> Get-EC2InstanceMetadata -Category LocalIpv4
10.232.46.188

PS C:\Users\Administrator> Get-EC2InstanceMetadata -Category AvailabilityZone
us-west-2a

PS C:\Users\Administrator> Get-EC2InstanceMetadata -ListCategory
AmiId
LaunchIndex
ManifestPath
AncestorAmiId
BlockDeviceMapping
InstanceId
InstanceType
LocalHostname
LocalIpv4
KernelId
AvailabilityZone
ProductCode
PublicHostname
PublicIpv4
PublicKey
RamdiskId
Region
ReservationId
SecurityGroup
UserData
InstanceMonitoring
IdentityDocument
IdentitySignature
IdentityPkcs7

PS C:\Users\Administrator> Get-EC2InstanceMetadata -Path /public-keys/0/openssh-key
ssh-rsa AAAAB3N...na27jfTV keypairname

We hope you find these new capabilities helpful. Be sure to let us know in the comments if there are other scenarios we should look at!

AWS SDK for .NET Version 2 Status

by Norm Johanson | in .NET

Version 3 of the AWS SDK for .NET has been generally available since 7/28/2015. Although the legacy version (v2) of the SDK will continue to work, we strongly recommend that all customers migrate to the latest version 3 to take advantage of various improvements including modularized architecture, portable class library support, and .NET Core support. There are only a few backward-incompatible changes between version 2 and version 3 (see the migration guide for details). Additionally, the last few legacy releases of the version 2 SDK (versions 2.3.50 and later) have all the classes and methods that are changed in version 3 marked obsolete, so compile-time warnings can help you make forward-compatible updates before upgrading to the version 3 SDK.

To help customers plan their migration, our current maintenance timeline for the version 2 SDK is provided below.

Security issues and critical bugs

Critical bugs with no reasonable workaround as well as any security-related issues will be addressed with the highest priority. We will continue to support fixing such issues indefinitely.

Non-critical bugs

We will continue to address non-critical bugs in the version 2 SDK until the end of 2016. These bugs will be fixed in relative priority order. Factors considered in prioritization will include

  • Number of affected customers
  • Severity of the problem (broken feature vs. typo fix in documentation)
  • Whether the issue is already fixed in version 3
  • Risk of the fix causing unintended side effects

Service API updates

We will continue to add API updates to existing service clients based on customer requests (GitHub issues) until 8/1/2016.

New service clients

New service clients will not be added to the version 2 SDK. They will only be added to version 3.

As always, please find us in the Issues section of the SDK repository on GitHub if you would like to report bugs, request service API updates, or ask general questions.

Retrieving Request Metrics from the AWS SDK for .NET

by Pavel Safronov | in .NET

In an earlier post, we discussed how you can turn on AWS SDK for .NET logging and how to log the SDK request metrics in JSON. In this post, we will discuss how you can use log4net or System.Diagnostics logging to gain access to the real RequestMetrics objects and work with raw metrics.

This approach may be useful as part of your testing to keep track of service calls made by the SDK or to log metrics data using your own loggers without the added overhead of trying to parse logs and reconstruct the metrics information.

To demonstrate, let’s implement a custom appender (MetricsAppender) for log4net that will extract and display metrics data to the console.

public class MetricsAppender : AppenderSkeleton
{
    // override AppenderSkeleton.Append to intercept all messages
    protected override void Append(LoggingEvent loggingEvent)
    {
        // extract the message data
        var logMessage = loggingEvent.MessageObject as ILogMessage;
        if (logMessage != null)
        {
            // find IRequestMetrics in logMessage.Args, if it is present
            var metrics = logMessage.Args == null ? null :
                logMessage.Args.Where(a => a is IRequestMetrics).FirstOrDefault() as IRequestMetrics;
            if (metrics != null)
            {
                // write MethodName and ClientExecuteTime to console
                Console.WriteLine("{0} took {1}ms to complete",
                    metrics.Properties[Metric.MethodName].First(),
                    metrics.Timings[Metric.ClientExecuteTime].First().ElapsedTime.TotalMilliseconds);
            }
        }
    }
}

Here is a simple example of the use of MetricsAppender with the SDK:

// create and configure metrics appender
var appender = new MetricsAppender();
BasicConfigurator.Configure(appender);

// configure the SDK to use log4net for metrics logging
AWSConfigs.LoggingConfig.LogMetrics = true;
AWSConfigs.LoggingConfig.LogTo = LoggingOptions.Log4Net;

// make service call and log the resultant metrics
using (var ddb = new AmazonDynamoDBClient())
{
    ddb.ListTables();
}

After you run this code, a message like this will be written to the console:

ListTablesRequest took 415.5368ms to complete

So what’s going on here? During logging, the AWS SDK for .NET passes an ILogMessage object to the loggers. ILogMessage holds the information required to execute String.Format to create a string that can be logged. String.Format isn’t called until the logger needs to convert the data to text (for example, in a logger that writes output to a file), but it can also be used to work with logged objects in memory. MetricsAppender simply analyzes all of the logged ILogMessage instances and looks for an ILogMessage that contains the IRequestMetrics object. It then extracts the required data from that object and writes this data to the console.
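
If you prefer configuration over code, a minimal app.config sketch using the SDK’s appSettings logging keys should be equivalent to the two AWSConfigs.LoggingConfig lines shown earlier (treat the key names and values as a sketch to verify against your SDK version):

<configuration>
  <appSettings>
    <!-- equivalent of AWSConfigs.LoggingConfig.LogTo = LoggingOptions.Log4Net -->
    <add key="AWSLogging" value="log4net"/>
    <!-- equivalent of AWSConfigs.LoggingConfig.LogMetrics = true -->
    <add key="AWSLogMetrics" value="true"/>
  </appSettings>
</configuration>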

For this post we chose to use log4net because it has the handy AppenderSkeleton, which allowed us to only override one method to create a fully functional appender. This is not present in System.Diagnostics, though it is very easy to implement one. In fact, you can grab the TestTraceListener from this GitHub discussion and use it. As you can see, the inner code for both MetricsAppender and TestTraceListener is virtually identical.

Conclusion

We hope this look at logging in the AWS SDK for .NET will help you take full advantage of metrics data in your own application. If you have questions or feedback about the SDK, feel free to post them to our GitHub repo. In fact, this very post is the result of a user question about SDK metrics logging.

Exploring ASP.NET Core Part 2: Continuous Delivery

by Norm Johanson | in .NET

The first post in this series discussed how to use an Amazon EC2 instance and AWS CodeDeploy to deploy ASP.NET Core applications from GitHub. The setup assumed all git pushes to GitHub were deployed to the running environment without validation. In this post, let’s examine how we can create an AWS environment for our ASP.NET Core application that gives us quality control and takes advantage of AWS cloud scale.

You’ll find the code and setup scripts for this post in the part2 branch of the aws-blog-net-exploring-aspnet-core repository.

Validating the Deployment

In the first post, the appspec.yml file called the InstallApp.ps1 script during the ApplicationStart hook to extract the application and set up IIS. To verify the application is up and running, let’s update the appspec.yml file to call ValidateInstall.ps1 during the ValidateService hook.

version: 0.0
os: windows
files:
  - source: 
    destination: C:\ExploringAspNetCore
hooks:
  ApplicationStop:
    - location: .\RemoveApp.ps1
      timeout: 30
  ApplicationStart:
    - location: .\InstallApp.ps1
      timeout: 300
  ValidateService:
    - location: .\ValidateInstall.ps1
      timeout: 300

This script allows us to call tests to make sure our application is running correctly. In the GitHub repository, I added some xunit tests under .\SampleApp\src\SmokeTests. My sample application is no more than a simple "Hello World" web application, so I just need to test that I can make a web call to the application and get a valid response. In a real-world application, you would have a much more exhaustive suite of tests to run during this validation step.
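
One of those smoke tests looks roughly like the following sketch (the URL assumes the site is bound to the default IIS site on the instance, and the expected string matches the sample application’s response; adjust both for your own application):

using System.Net.Http;
using System.Threading.Tasks;
using Xunit;

public class WebsiteTests
{
    [Fact]
    public async Task HomePageRespondsWithExpectedContent()
    {
        using (var client = new HttpClient())
        {
            // the deployed application is served locally by IIS on the instance
            var response = await client.GetStringAsync("http://localhost/");

            Assert.Equal("Exploring ASP.NET Core with AWS.", response);
        }
    }
}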

Let’s take a look at ValidateInstall.ps1 to see how the tests are run.

sl C:\ExploringAspNetCore\SampleApp\src\SmokeTests

# Restore the nuget references
& "C:\Program Files\dotnet\dotnet.exe" restore

# Run the smoke tests
& "C:\Program Files\dotnet\dotnet.exe" test

exit $LastExitCode

To run the tests, switch to the directory where the tests are stored. Restore the dependencies, and then run the dotnet test command. The failure of any test causes dotnet test to return a nonzero exit code, which we return from the PowerShell script. AWS CodeDeploy will see the failed exit code and mark the deployment as a failure. We can get logs from the test run from the AWS CodeDeploy console to see what failed.

Using AWS CodePipeline

Now that deployments are running smoke tests, we can detect bad deployments. In a similar way, we want to make sure users aren’t affected by bad deployments. The best practice is to use a pipeline with a beta stage during which we run smoke tests. We promote to production only if beta was successful. This gives us continuous delivery with safety checks. Again, as we discussed in part 1, we benefit from the ability of ASP.NET Core to run from source. It means we do not have to bother configuring a build step in our pipeline. Pipelines can pull source from Amazon S3 or GitHub. To provide a complete sample that can be set up with just a PowerShell script, we’ll use S3 as the source for our pipeline. AWS CodePipeline will monitor S3 for new versions of an object to push them through the pipeline. For information about configuring a GitHub repository, see the AWS CodePipeline User Guide.

Setup Script

The PowerShell script .\EnvironmentSetup\EnvironmentSetup.ps1 in the repository will create the AWS resources required to deploy the application through a pipeline.

Note: To avoid charges for unused resources, be sure to run .\EnvironmentSetup\EnvironmentTearDown.ps1 when you are done testing.

The setup script sets up the following resources:

  • An S3 bucket with a zip of the archive as the initial deployment source.
  • A t2.small EC2 instance for beta.
  • An Auto Scaling group with a load balancer using t2.medium instances.
  • An AWS CodeDeploy application for beta using the t2.small EC2 instance.
  • An AWS CodeDeploy application for production using the Auto Scaling group.
  • AWS CodePipeline with the S3 bucket as the source and the beta and production stages configured to use the AWS CodeDeploy applications.

When the script is complete, it will print out the public DNS for the beta EC2 instance and production load balancer. We can monitor pipeline progress in the AWS CodePipeline console to see if the deployment was successful for both stages.

The application was deployed to both environments because the smoke tests were successful during the AWS CodeDeploy deployments.

Failed Deployments

Let’s see what happens when a deployment fails. We’ll force a test failure by opening the .\SampleApp\src\SmokeTests\WebsiteTests.cs test file and making a change that will cause the test to fail.

[Fact]
public async Task PassingTest()
{
    using (var client = new HttpClient())
    {
        var response = await client.GetStringAsync("http://localhost-not-a-real-host/");

        Assert.Equal("Exploring ASP.NET Core with AWS.", response);
    }
}

In the repository, we can run the .\DeployToPipeline.ps1 script, which will zip the archive and upload it to the S3 location used by the pipeline. This will kick off a deployment to beta. (The deployment will fail because of the bad test.)

A deployment will not be attempted during the production stage because of the failure at beta. This keeps production in a healthy state. To see what went wrong, we can view the deployment logs in the AWS CodeDeploy console.

Conclusion

With AWS CodeDeploy and AWS CodePipeline, we can build out a full continuous delivery system for deploying ASP.NET Core applications. Be sure to check out the GitHub repository for the sample and setup scripts. In the next post in this series, we’ll explore ASP.NET Core cross-platform support.

Exploring ASP.NET Core Part 1: Deploying from GitHub

by Norm Johanson | in .NET

ASP.NET Core, formerly ASP.NET 5, is a platform that offers lots of possibilities for deploying .NET applications. This series of posts will explore options for deploying ASP.NET Core applications on AWS.

What Is ASP.NET Core?

ASP.NET Core is the new open-source, cross-platform, and modularized implementation of ASP.NET. It is currently under development, so expect future posts to cover updates and changes (for example, the new CLI).

Deploying from GitHub

The AWS CodeDeploy deployment service can be configured to trigger deployments from GitHub. Before ASP.NET Core, .NET applications had to be built before they were deployed. ASP.NET Core applications can be deployed and run from the source.

Sample Code and Setup Scripts

The code and setup scripts for this blog can be found in the aws-blog-net-exploring-aspnet-core repository in the part1 branch.

Setting Up AWS CodeDeploy

AWS CodeDeploy automates deployments to Amazon EC2 instances that you set up and configure as a deployment group. For more information, see the AWS CodeDeploy User Guide.

Although ASP.NET Core offers cross-platform support, in this post we are using instances running Microsoft Windows Server 2012 R2. The Windows EC2 instances must have IIS, the .NET Core SDK, and the Windows Server Hosting bundle installed. The Windows Server Hosting bundle, also called the ASP.NET Core Module, is required to enable IIS to communicate with the ASP.NET Core web server, Kestrel.

To set up the AWS CodeDeploy environment, you can run the .\EnvironmentSetup\EnvironmentSetup.ps1 PowerShell script in the GitHub repository. This script will create an AWS CloudFormation stack that will set up an EC2 instance and configure AWS CodeDeploy and IIS with the .NET Core SDK and Windows Server Hosting. It will then set up an AWS CodeDeploy application called ExploringAspNetCorePart1.

To avoid ongoing charges for AWS resources, after you are done with your testing, be sure to run the .\EnvironmentSetup\EnvironmentTearDown.ps1 PowerShell script.

GitHub and AWS CodeDeploy

You can use the AWS CodeDeploy console to connect your AWS CodeDeploy application to a GitHub repository. Then you can initiate deployments to the AWS CodeDeploy application by specifying the GitHub repository and commit ID. The AWS CodeDeploy team has written a blog post that describes how to configure the repository to automatically push a deployment to the AWS CodeDeploy application.

Deploying from Source

When you deploy from GitHub, the deployment bundle is a zip archive of the repository. In the root of the repository is an appspec.yml file that tells AWS CodeDeploy how to deploy our application. For our application, the appspec.yml is very simple:

version: 0.0
os: windows
files:
  - source: 
    destination: C:\ExploringAspNetCore
hooks:
  ApplicationStop:
    - location: .\RemoveApp.ps1
      timeout: 30
  ApplicationStart:
    - location: .\InstallApp.ps1
      timeout: 300

The file tells AWS CodeDeploy to extract the files from our repository to C:\ExploringAspNetCore and then run the PowerShell script, InstallApp.ps1, to start the application. The script has three parts. The first part restores all the dependencies for the application.

# Restore the nuget references
& "C:\Program Files\dotnet\dotnet.exe" restore

The second part packages the application for publishing.

# Publish application with all of its dependencies and runtime for IIS to use
& "C:\Program Files\dotnet\dotnet.exe" publish --configuration release -o c:\ExploringAspNetCore\publish --runtime active

The third part updates IIS to point to the publishing folder. The AWS CodeDeploy agent is a 32-bit application and runs PowerShell scripts with the 32-bit version of PowerShell. To access IIS with PowerShell, we need to use the 64-bit version. That’s why this section passes the script into the 64-bit version of powershell.exe.

C:\Windows\SysNative\WindowsPowerShell\v1.0\powershell.exe -Command {
             Import-Module WebAdministration
             Set-ItemProperty 'IIS:\Sites\Default Web Site' `
                 -Name physicalPath -Value c:\ExploringAspNetCore\publish
}

Note: This line was formatted for readability. For the correct syntax, view the script in the repository.

If we have configured the GitHub repository to push deployments to AWS CodeDeploy, then after every push, the code change will be zipped up and sent to AWS CodeDeploy. Then AWS CodeDeploy will execute the appspec.yml and InstallApp.ps1 and the EC2 instance will be up-to-date with the latest code — no build step required.

Share Your Feedback

Check out the aws-blog-net-exploring-aspnet-core repository and let us know what you think. We’ll keep adding ideas to the repository. Feel free to open an issue to share your own ideas for deploying ASP.NET Core applications.

Contributing to the AWS SDK for .NET

by Jim Flanagan | in .NET

The AWS SDK for .NET is an open source project available on GitHub. This post is to help community developers navigate the SDK code base with an eye toward contributing features and fixes to the SDK.

Code Generation

The first gotcha for contributors is that major portions of the code are generated from models of the service APIs. In version 3 of the SDK, we reorganized the code base to make it obvious which code is generated. You’ll now find it under each service client folder in folders named "Generated." Similarly, handwritten code is now found in folders named "Custom." Most of the generated code is in partial classes to facilitate extending it without having changes get clobbered by the code generator.

The convention we use when adding extensions to generated partial classes is to place a file called Class.Extensions.cs under the Custom folder with the same hierarchy as the generated file.
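
As an illustration (the class and member names below are hypothetical), the generated and handwritten halves of a client line up like this:

// Generated/AmazonExampleClient.cs -- produced by the code generator
public partial class AmazonExampleClient
{
    // generated request/response operations live here
}

// Custom/AmazonExampleClient.Extensions.cs -- handwritten
public partial class AmazonExampleClient
{
    // A handwritten convenience member. Because it lives in a separate
    // partial class file under Custom, regenerating the client never
    // overwrites it.
    public string FriendlyName
    {
        get { return "Example"; }
    }
}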

The code generator can be found here. The models are under the ServiceModels folder. To add a client, add a model to the ServiceModels folder, update the _manifest.json file in the same folder, and run the generator. The customization files in the folder handle ways in which we can override the behavior of the generator, mostly to keep some consistency across older and newer services, as well as make adjustments to make the API more C#-friendly.

It is sometimes necessary to update the code generator to add a feature or fix an issue. Because changes to the generator may impact all existing services and require a lot of testing, these changes should not be undertaken lightly.

Platform Support

Another thing you may notice about the code base is that some files are under folders like _bcl, _bcl35, _bcl45, _mobile, or _async. This is how the SDK controls which files are included in platform-specific project files.

As an example, if you look at the AutoScaling client folder you will see the folders

Model
_bcl35
_bcl45
_mobile

The _bcl45 folder contains the Auto Scaling client and interface for version 4.5 of the AWS SDK for .NET. It differs from the 3.5 version in that it exposes Async/Await versions of the service APIs, where the 3.5 version exposes Begin/End for asynchrony. The Model folder contains code common to all platforms. For this reason, don’t use Visual Studio to add files to an SDK project. Instead, add the file to the appropriate file system location, and then reload the project. We try to use this subdirectory mechanism rather than #if directives in the code where possible.

Testing

It will be much easier to evaluate and provide feedback on contributions if they are accompanied by unit tests, which can be added to the UnitTests folder.

Sometimes it is good to include some integration tests that hit the service endpoint, too. Integration tests will, by necessity, create AWS resources under your account, so it’s possible you will incur some costs. Try to design integration tests that favor APIs that don’t create resources, are fast to run, and account for the eventual consistency of the APIs you’re testing.

Community Feature Requests

Many contributions are driven by the specific needs of a community member, but sometimes they’re driven simply by a desire to get involved. If you would like to get involved, we collect community feature requests in the FEATURE_REQUESTS.md at the top level of the repository.

DynamoDB Document Model Manual Pagination

by Pavel Safronov | in .NET

In version 3.1.1.2 of the DynamoDB .NET SDK package, we added pagination support to the Document Model. This feature allows you to use a pagination token returned by the API to paginate a set of Query or Scan results across sessions. Until now, it was not possible to resume pagination of Query or Scan results without retrieving the already encountered items. This post includes two simple examples of this new functionality.

The first example makes the initial Query call to retrieve all movies from the year 2012, and then saves the pagination token returned by the Search.PaginationToken property. The second example retrieves the PaginationToken and continues the same query. For these examples, we will assume we have functions (void SaveToken(string token) and string LoadToken()) to persist the pagination token across sessions. (If this were an ASP.NET application, the functions would use the session store to store the token, but this can be any similar environment where manual pagination is used.)

Initial query:

var client = new AmazonDynamoDBClient();
Table moviesTable = Table.LoadTable(client, "MoviesByYear");

// start initial query
var search = moviesTable.Query(new QueryOperationConfig
{
    Filter = new QueryFilter("Year", QueryOperator.Equal, 2012),
});

// retrieve one page of items
List<Document> items = search.GetNextSet();

// get pagination token
string token = search.PaginationToken;

// persist the token in session data or something similar
SaveToken(token);

Resumed query:

var client = new AmazonDynamoDBClient();
Table moviesTable = Table.LoadTable(client, "MoviesByYear");

// load persisted token
string token = LoadToken();

// use token to resume query from last position
var search = moviesTable.Query(new QueryOperationConfig
{
    Filter = new QueryFilter("Year", QueryOperator.Equal, 2012),
    PaginationToken = token,
});
List<Document> items = search.GetNextSet();

// pagination token changed, persist new value
SaveToken(search.PaginationToken);

DataModel support

Although this functionality has not yet been added to the Object Persistence Model, it is possible to work around this limitation. In the following code sample, we can use the DocumentModel API to manually paginate our data, and then use DynamoDBContext to convert the retrieved Documents into .NET objects. Because we are using DynamoDBContext and don’t want to stray too far into the Document Model API, we’re going to use DynamoDBContext.GetTargetTable to avoid the manual construction of our Table instance.

// create DynamoDBContext object
var context = new DynamoDBContext(client);

// get the target table from the context
var moviesTable = context.GetTargetTable<Movie>();

// use token to resume query from last position
var search = moviesTable.Query(new QueryOperationConfig
{
    Filter = new QueryFilter("Year", QueryOperator.Equal, 2012),
    PaginationToken = token,
});
List<Document> items = search.GetNextSet();

// pagination token changed, persist new value
SaveToken(search.PaginationToken);

// convert page of Documents in .NET objects and enumerate over them
IEnumerable<Movie> movies = context.FromDocuments<Movie>(items);
foreach (var movie in movies)
    Log("{0} ({1})", movie.Title, movie.Year);

As you can see, even though we executed our Query using a Table object, we can continue working with familiar .NET classes while controlling the pagination of our data.

Installing Scheduled Tasks on EC2 Windows Instances Using EC2 Run

by Steve Roberts | in .NET

Today’s guest post is the second part of a two-part series by AWS Solutions Architect Russell Day. Part one can be found here.

In the previous post, we showed how to use the User data field to install Windows scheduled tasks automatically when Windows EC2 instances are launched. In this post, we will demonstrate how to do the same thing using the new EC2 Run Command, which provides a simple way to remotely execute PowerShell commands against EC2 instances.

Use the EC2 Run Command to Install Scheduled Tasks Automatically

Just as we did in the previous post, we will demonstrate two methods for using the EC2 Run Command: the Amazon EC2 console and AWS Tools for PowerShell.

Use the EC2 Console to Execute the EC2 Run Command

  1. Complete steps 1 through 4 in the previous post.
  2. In the EC2 console, choose Commands.

  3. Choose the Run a command button.
  4. Under Command document, choose AWS-RunPowerShellScript.
  5. Select your target instances.
  6. Paste the PowerShell script from step 5 of the previous post into the Commands text box as shown.

  7. Leave all other fields at their defaults, and choose Run to invoke the PowerShell script on the target instances.
  8. You can monitor progress and view the output of the invoked scripts as shown.

Use PowerShell to Execute the EC2 Run Command

Alternatively, you can use PowerShell to invoke the EC2 Run Command as shown.

  1. If you have not already configured your PowerShell environment, follow these instructions to configure your PowerShell console to use the AWS Tools for Windows PowerShell.
  2. Save the PowerShell script from step 5 in the previous post as InstallWindowsTasks.ps1.

From a PowerShell session, simply replace ‘Instance-ID’ with the instance IDs of your target instances and provide the path to InstallWindowsTasks.ps1 as shown.

$runPSCommand = Send-SSMCommand `
    -InstanceId @('Instance-ID', 'Instance-ID') `
    -DocumentName AWS-RunPowerShellScript `
    -Parameter @{'commands' = @([System.IO.File]::ReadAllText("C:\...\InstallWindowsTasks.ps1"))}

You can use the following commands to monitor the status.

Retrieve the command execution status:


Get-SSMCommand -CommandId $runPSCommand.CommandId

Retrieve the status of the command execution on a per-instance basis:


Get-SSMCommandInvocation -CommandId $runPSCommand.CommandId

Retrieve the command information with response data for an instance. (Be sure to replace Instance-ID.)

Get-SSMCommandInvocation -CommandId $runPSCommand.CommandId `
      -Details $true -InstanceId Instance-ID | 
   select -ExpandProperty CommandPlugins

Summary

The EC2 Run Command simplifies remote management and customization of your EC2 instances. For more information, see the following resources:

http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/execute-remote-commands.html
https://aws.amazon.com/ec2/run-command/.

AWS SDK for .NET Refresh for ASP.NET 5

by Norm Johanson | in .NET

Today we refreshed our ASP.NET 5 and CoreCLR support for the AWS SDK for .NET. This means we have pulled in all of the latest service updates, new services like AWS IoT, and enhancements from our stable 3.1 line of NuGet packages into new 3.2 beta versions of the SDK. Because a few dependencies of our AWSSDK.Core package are still in beta, we need to keep this support in beta as well.

SDK Credentials Store

As part of CoreCLR support in the SDK, we have also enabled the SDK credentials store. The SDK credentials store is the encrypted storage for AWS credentials that you can manage using the AWS Explorer in Visual Studio. This means when you use the SDK on Windows and target the new CoreCLR runtime, the credential search pattern will be the same as the regular AWS SDK for .NET. On non-Windows platforms, we recommend using the shared credentials file.
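
The shared credentials file lives at ~/.aws/credentials (%USERPROFILE%\.aws\credentials on Windows) and uses the same profile format as the other AWS SDKs and the AWS CLI. A minimal sketch, using the standard placeholder keys from the AWS documentation:

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY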

Installing Scheduled Tasks on EC2 Windows Instances

by Steve Roberts | in .NET

Today’s guest post is part one of a two part series by AWS Solutions Architect Russell Day.

Windows administrators and developers often use scheduled tasks to run programs or scripts on a recurring basis. In this post, we will demonstrate how to use the Amazon EC2 User data option to install scheduled tasks on Windows EC2 instances automatically at launch.

Using the user data field to specify scripts that will automatically configure instances is commonly referred to as bootstrapping. In this post, we will specify a PowerShell script in the user data field to install scheduled tasks when EC2 instances are launched. We will demonstrate two methods for launching EC2 instances: the EC2 console and AWS Tools for PowerShell.

Before we can get started, we need to export the scheduled tasks and store them in a location accessible to our EC2 instances. We will use Task Scheduler to export the scheduled tasks to XML and store them in an Amazon S3 bucket.

Export scheduled tasks.

In Task Scheduler, right-click each scheduled task you want to install on your instances and export it as an XML file.

Create an S3 bucket to store the XML scheduled task definitions.

Use the S3 Console, CLI, or AWS Tools for Windows PowerShell to create an S3 bucket that will store the XML task definition files created in step 1.

Create manifest file(s).

A manifest file contains the scheduled tasks you want to install on the target instances. Consider using a separate manifest file for each unique set of tasks (for example, ProductionServerTasks.xml, DevelopmentServerTasks.xml).

Modify and save the following XML to create your manifest file(s).

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <scheduledTasks>
    <task name="Daily Scheduled Task 1" source="ScheduledTask1.xml" />
    <task name="Daily Scheduled Task 2" source="ScheduledTask2.xml" />
    <task name="Daily Scheduled Task 3" source="ScheduledTask3.xml" />
    <task name="Daily Scheduled Task 4" source="ScheduledTask4.xml" />
  </scheduledTasks>
</configuration>

Upload the exported scheduled task definitions and manifest file(s) to S3.

Upload the scheduled tasks definitions created in step 1 and the manifest file(s) created in step 3 to the S3 bucket created in step 2.

Create a PowerShell script to download and install the scheduled tasks.

The following PowerShell script contains functions to download and install the scheduled tasks stored in our S3 bucket. Replace the $S3Bucket and $TaskManifest parameters with your S3 bucket name and manifest file name.

$VerbosePreference = "Continue";
$WorkingDirectory = "c:\tasks";
$TaskManifest = "TaskManifest.xml";
$S3Bucket = "YourS3BucketName";
function Invoke-Functions
{
    Download-ScheduledTasks
    Install-ScheduledTasks
}
function Download-ScheduledTasks
{
    Read-S3Object `
        -BucketName $S3Bucket `
        -Key $TaskManifest `
        -File "$WorkingDirectory\$TaskManifest"

    [xml]$cfg = gc "$WorkingDirectory\$TaskManifest";
    $cfg.configuration.scheduledtasks.task | 
        %{ 
           $task = $_;
           [string] $TaskFile = $task.source
           Read-S3Object `
                -BucketName $S3Bucket `
                -Key $task.source `
                -File "$WorkingDirectory\$TaskFile" 
        }
}

function Install-ScheduledTasks
{
    [xml]$cfg = gc "$WorkingDirectory\$TaskManifest";
    $cfg.configuration.scheduledtasks.task | 
        %{
           $task = $_;
           [string] $TaskFile = $task.source
            Register-ScheduledTask `
                -Xml (get-content "$WorkingDirectory\$TaskFile" | out-string) `
                -TaskName $task.name
        }
}

Invoke-Functions | Out-File "c:\InstallTasksLog.txt" -Verbose;

Create an EC2 role to allow GetObject permissions to the S3 bucket.

Our PowerShell script uses the Read-S3Object PowerShell cmdlet to download the scheduled task definitions from S3. Therefore, we need to create an EC2 role that allows our EC2 instances to access our S3 bucket objects on our behalf.

Follow these steps to create the EC2 role.

  1. Open the IAM console.
  2. In the navigation pane, choose Policies.
  3. Choose Create Policy.
  4. Choose Create Your Own Policy, and use a policy that grants s3:GetObject access to the objects in your bucket (a minimal policy sketch appears after this list). Replace [YourS3BucketName] with the name of your bucket.

  5. In the navigation pane, choose Roles.
  6. Choose Create Role.
  7. In the Role Name field, type a name for your role.
  8. Under AWS Service Roles, choose Amazon EC2, and then choose Select.
  9. On the Attach Policy page, choose the policy you created, and then choose Next Step.
  10. On the Review page, choose Create Role.
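
For step 4, a minimal policy along the following lines grants the required read access (substitute your bucket name for [YourS3BucketName]):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::[YourS3BucketName]/*"
    }
  ]
}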

Use the EC2 Console to Launch EC2 Instance(s).

  1. Open the EC2 console, and choose Launch Instance.
  2. Choose your version of Microsoft Windows Server.
  3. Continue to Step 3: Configure Instance Details.

    • For IAM Role, choose the EC2 role you just created.
    • In Advanced Details, paste the PowerShell script into the text box. Be sure to enclose it in <powershell> tags, as shown in the sketch after this list.

  4. Complete the wizard steps and launch the Windows EC2 instance(s).
  5. After your instance(s) have been launched, you can verify the installation of your scheduled tasks.
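
For step 3, the user data text box contents take the following shape; the body is the PowerShell script you created earlier (the comment below is a placeholder for that script):

<powershell>
# paste the scheduled-task download and install script here
</powershell>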

Use AWS Tools for Windows PowerShell to Launch EC2 Instances.

In keeping with our theme of automation, you can use PowerShell to create the instances programmatically.

  1. If you have not already configured your PowerShell environment, follow these instructions to configure your PowerShell console to use the AWS Tools for Windows PowerShell.
  2. Save the PowerShell script that will download and install the scheduled tasks as InstallWindowsTasks.ps1.
  3. Save the following PowerShell script as a module named AWSHelper.psm1. This allows you to reuse it when you launch Windows EC2 instances in the future. Modify the following parameters with your environment resource values:

    # the key pair to associate with the instance(s)
    $KeyPairName
    # the EC2 instance(s) security group ID
    $SecurityGroupId
    # the subnet ID for the instance(s) after launch
    $SubnetId
    # the ARN of the EC2 role we created to allow access to our S3 bucket
    $InstanceProfile
    

     

    $VerbosePreference = "Continue";
    $scriptpath = $MyInvocation.MyCommand.Path;
    $moduledirectory = Split-Path $scriptpath;
    
    function ConvertTo-Base64($string) {
       $bytes = [System.Text.Encoding]::UTF8.GetBytes($string);
       $encoded = [System.Convert]::ToBase64String($bytes); 
       return $encoded;
    }
    
    function New-WindowsEC2Instance
    {
      [CmdletBinding()]
      Param
      (                    
        [Parameter(Mandatory=$false)]
        [string] $InstanceType = "t2.micro",
        [Parameter(Mandatory=$false)]
        [string] $KeyPairName = "YourKeyPair", 
        [Parameter(Mandatory=$false)]
        [string] $SecurityGroupId = "sg-5xxxxxxx", 
        [Parameter(Mandatory=$false)]
        [string] $SubnetId = "subnet-1xxxxxxx",	
        [Parameter(Mandatory=$true)]
        [int32] $Count, 
        [Parameter(Mandatory=$false)]
        [string] $InstanceProfile ="EC2RoleARN",
        [Parameter(Mandatory=$false)]
        [string] $UserScript 
            = (Join-Path $script:moduledirectory "InstallWindowsTasks.ps1")
      )
      Process
      {
        $ami = Get-EC2ImageByName -Names 'WINDOWS_2012R2_BASE'
        $ImageID =  $ami[0].ImageId
        $UserData = "";
        if ($userScript -and (Test-Path $userScript))
        {
          $contents = "<powershell>" + [System.IO.File]::ReadAllText($UserScript) + "</powershell>";
    	  $filePath = gi $UserScript;
          $UserData = ConvertTo-Base64($contents);
        }
    
        $params = @{};
        $params.Add("ImageID", $ImageID);
        $params.Add("InstanceType", $InstanceType);
        $params.Add("KeyName", $KeyPairName); 
        $params.Add("MaxCount", $Count);
        $params.Add("MinCount", $Count);
        $params.Add("InstanceProfile_Arn", $InstanceProfile);
        $params.Add("SecurityGroupId", $SecurityGroupId); 
        $params.Add("SubnetId", $SubnetId);
        $params.Add("UserData", $UserData); 	
    
        $reservation = New-EC2Instance @params;
      }
    }
    
  4. To invoke the PowerShell code, import the AWSHelper.psm1 module, and then call the New-WindowsEC2Instance cmdlet as shown. Type the number of instances at the prompt.

Summary

The User data option provides a convenient way to automate the customization of your EC2 instances. For more information, see the following resources:

http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/UsingConfig_WinAMI.html#user-data-execution
http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/walkthrough-powershell.html