Argument Completion Support in AWS Tools for Windows PowerShell

by Steve Roberts | in .NET

Version 3.1.93.0 of the AWS Tools for Windows PowerShell now includes support for tab completion of parameters that map to enumeration types in service APIs. Let’s look at how these types are implemented in the underlying AWS SDK for .NET and then see how this new support helps you at the Windows PowerShell command line or in script editors (like the PowerShell ISE) that support parameter IntelliSense.

Enumerations in the AWS SDK for .NET

You might expect the SDK to implement enumeration types used in service APIs as enum types but this isn’t the case. The SDK contains a ConstantClass base class from which it derives classes for service-specific enumeration types. These derived classes implement the permitted values for a service enumeration as a set of read-only static strings. For example, here’s a snippet of the InstanceType enumeration for Amazon Elastic Compute Cloud (Amazon EC2) instances (comments removed for brevity):

public class InstanceType : ConstantClass
{
    public static readonly InstanceType C1Medium = new InstanceType("c1.medium");
    public static readonly InstanceType C1Xlarge = new InstanceType("c1.xlarge");
    public static readonly InstanceType C32xlarge = new InstanceType("c3.2xlarge");
	...

    public InstanceType(string value)
           : base(value)
    {
    }
	...
}

In a typical SDK application, you would use the defined types like this (the example uses Amazon EC2’s RunInstances API):

var request = new RunInstancesRequest
{
    InstanceType = InstanceType.C1Xlarge,
	...
};
var response = EC2Client.RunInstances(request);
...

In this way, the SDK’s enumerations are not very different from regular enum types, but they offer one very powerful capability that regular enums lack: when a service updates its enumeration values (for example, when EC2 adds a new instance type), you do not need to update the version of the SDK your application is built against in order to use the new value. The new value won’t appear as a member of the enumeration class until you update your SDK, but you can simply write code that uses the value with whatever version you have. It just works:

var request = new RunInstancesRequest
{
    InstanceType = "new-instance-type-code",
	...
};
var response = EC2Client.RunInstances(request);
...

This ability to adopt new values also applies to the response data from the service. The SDK will simply unmarshal the response data and accept the new value. This is unlike real enum types, which would throw an error when they encountered an unknown value. You are therefore insulated on both sides from services adding new enumeration values when you want or need to remain at a particular SDK version.
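To illustrate (a short sketch, not from the original post, reusing the EC2Client from the snippets above), you can read an InstanceType back from a response and work with its underlying string value even if the value is newer than your SDK build:

var describeResponse = EC2Client.DescribeInstances(new DescribeInstancesRequest());
foreach (var reservation in describeResponse.Reservations)
{
    foreach (var instance in reservation.Instances)
    {
        // ToString() returns the raw service value (for example "c1.medium"),
        // even for instance types introduced after this SDK build shipped.
        Console.WriteLine("{0} is a {1}", instance.InstanceId, instance.InstanceType);
    }
}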

Using Service Enumerations from PowerShell

Let’s say we are working at a console and want to use New-EC2Instance (which maps to the RunInstances API):

PS C:\> New-EC2Instance -InstanceType ???

As we noted, the underlying SDK does not use regular .NET enum types for the allowed values so there’s no data for the shell to run against in order to offer a suggestion list. Obviously this is a problem when you don’t know the permitted values but it’s also an issue when you know the value but not the casing. Windows PowerShell may be case-insensitive but some services require the casing shown in their enumerations for their API call to succeed.

Why Not Use ValidateSet?

One way to tackle this problem would be to use PowerShell’s ValidateSet attribution on parameters that map to service enumerations, but this has a shortcoming: validation! Using the example of the -InstanceType parameter again, should EC2 add a new type you would need to update your AWSPowerShell module in order to use the new value. Otherwise, the shell would reject your use of values not included in the ValidateSet attribution at the time we shipped the module. The ability to make use of new enumeration values without being forced to recompile the application with an updated SDK is very useful. It’s certainly a capability we wanted to extend to our Windows PowerShell module users.
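For contrast, here is a minimal, hypothetical sketch of a ValidateSet-based parameter; the permitted values are frozen into the attribute when the module ships, so a newly launched instance type would be rejected by the shell before the command even runs:

function Start-MyInstance
{
    param(
        # the permitted values are baked in at publish time
        [ValidateSet("c1.medium", "c1.xlarge", "c3.2xlarge")]
        [string]$InstanceType
    )

    # call the service here
}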

What we want is behavior similar to these screenshots of Invoke-WebRequest but without locking users into requiring updates for new values. In the ISE, we get a pop-up menu of completions:

At the console when we press Ctrl+Space we get a list of completions to select from:

We can also use the Tab key to iterate through the completions one by one or we can enter a partial match and the resulting completions are filtered accordingly.

The Solution: Argument Completers

Support for custom argument completers was added in Windows PowerShell version 3. Custom argument completers allow cmdlet authors and users to register a script block to be called by the shell when a parameter is specified. This script block is responsible for returning a set of valid completions given the data at hand. The set of completions (if any) is displayed to the user in the same way as if the data were specified using ValidateSet attribution or through a regular .NET enum type.

Third-party modules like TabExpansionPlusPlus (formerly TabExpansion++) also contributed to this mechanism to give authors and users a convenient way to register the completers. Beginning in Windows PowerShell version 5, a new native cmdlet can perform the registration.
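For reference, a minimal, hypothetical registration using that native cmdlet (Windows PowerShell 5 or later) looks like this; the module’s own completers follow the same pattern, but cover many cmdlets and parameters at once:

Register-ArgumentCompleter -CommandName New-EC2Instance -ParameterName InstanceType -ScriptBlock {
    param($commandName, $parameterName, $wordToComplete, $commandAst, $fakeBoundParameter)

    # a frozen sample of values; the shipped completers draw these from the SDK's data
    "c1.medium", "c1.xlarge", "c3.2xlarge" |
        Where-Object { $_ -like "$wordToComplete*" } |
        ForEach-Object { New-Object System.Management.Automation.CompletionResult $_, $_, 'ParameterValue', $_ }
}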

In version 3.1.93.0 of the AWSPowerShell module we have added a nested script module that implements argument completers across the supported AWS services. The data used by these completers to offer suggestion lists for parameters comes from the SDK enumeration classes at the time we build the module. The SDK’s data is created based on the service models when we build the SDK. The permitted values are therefore correctly cased for those services that are case-sensitive; no more guessing how a value should be expressed when at the command line.

Here’s an example of the InstanceType completer (shortened) for EC2:

$EC2_Completers = {
    param($commandName, $parameterName, $wordToComplete, $commandAst, $fakeBoundParameter)
    
    # to allow for same-name parameters of different ConstantClass-derived
    # types, check on command name concatenated with parameter name.
    switch ($("$commandName/$parameterName"))
	{	...
        # Amazon.EC2.InstanceType
        {
            ($_ -eq "Get-EC2ReservedInstancesOffering/InstanceType") -Or
            ($_ -eq "New-EC2Instance/InstanceType") -Or
            ($_ -eq "Request-EC2SpotInstance/LaunchSpecification_InstanceType")
        }
        {
            $v = "c1.medium","c1.xlarge",...,"t2.nano","t2.small",...
            break
        }
	...
	}
	
    # the standard code pattern for completers is to pipe through sort-object
    # after filtering against $wordToComplete, but our members are already sorted.
    $v |
        Where-Object { $_ -like "$wordToComplete*" } |
        ForEach-Object { New-Object System.Management.Automation.CompletionResult $_, $_, 'ParameterValue', $_ }
}        

When the AWSPowerShell module is loaded, the nested module is automatically imported and executed, registering all of the completer script blocks it contains. Completion support works with Windows PowerShell versions 3 and later. For Windows PowerShell version 5 or later, the module uses the native Register-ArgumentCompleter cmdlet. For earlier versions it determines if this cmdlet is available in your installed modules (this will be the case if you have TabExpansionPlusPlus installed). If the cmdlet cannot be found the shell’s completer table is updated directly (you’ll find several blog posts on how this is done if you search for Windows PowerShell argument completion).

The net effect of this is that when you are constructing a command at the console or writing a script you get a suggestion list for the values accepted by these parameters. No more hunting through documentation to determine the allowed values and their casing! As we required, the ISE displays the list immediately after you enter a space following the parameter name, and typing partial content filters the list:

In a console the Tab key will cycle through the available options. Pressing Ctrl+Space displays a pop-up selection list that you can cursor around. In both cases you can filter the display by typing in the partial content:

A Note About Script Security

To accompany the new completion script module we made one other significant change in the 3.1.93.0 release: we added an Authenticode signature to all script and module artifacts (in effect, all .psd1, .psm1, .ps1 and .ps1xml files contained in the module). A side benefit is that the module is now compatible with environments where the execution policy for Windows PowerShell scripts is set to "AllSigned". More information on execution policies can be found in this TechNet article.

Wrap

We hope you enjoy the new support for parameter value completion and the ability to now use the module in environments that require the execution policy to be ‘AllSigned’. Happy scripting!

AWS SDK for .NET Status Update for .NET Core Support

by Norm Johanson | in .NET

The AWS SDK for .NET has included support for the .NET Core platform in our NuGet 3.2 beta packages since last September. With our recent push of version 3.2.6-beta to NuGet, we’ve switched from netstandard 1.5 to 1.3 to increase the SDK’s compatibility with other libraries. This version also includes many of the high-level abstractions and utility methods that were previously only available on the 3.5 and 4.5 .NET Framework versions of the SDK.

After some final testing and performance optimization, we will move our .NET Core support out of beta and into our master branch. (We’re currently investigating differences in the way HTTP connections are handled between .NET Framework and .NET Core.) At that point, we will bump the version number of all NuGet packages for the SDK to 3.3. Then .NET Core support for future service updates will be provided with the regular .NET Framework version of the SDK.

Now is a great time to give us feedback on our .NET Core support. Feel free to use the 3.2 beta versions of the SDK from NuGet and leave your feedback on our GitHub repository.

Updates to Credential and Region Handling

by Steve Roberts | in .NET

Version 3.1.73.0 of the AWS Tools for Windows PowerShell and the AWS SDK for .NET (AWSSDK.Core version 3.1.6.0), released today, contains enhancements to the way credentials and region data can be supplied to your SDK applications and PowerShell scripts, including the ability to use SAML federated credentials in your SDK applications. We’ve also refactored support for querying Amazon EC2 instance metadata and made it available to your code. Let’s take a look at the changes.

SDK Updates

Credential Handling

In 2015, we launched support for using SAML federated credentials with the AWS PowerShell cmdlets. (See New Support for Federated Users in the AWS Tools for Windows PowerShell.) We’ve now extended the SDK so that applications written against it can also use the SAML role profiles described in the blog post. To use this support in your application, you must first set up the role profile. The details for using the PowerShell cmdlets to do this appear in the blog post. Then, in the same way you do with other credential profiles, you simply reference the profile in your application’s app.config/web.config files with the AWSProfileName appsetting key. The SAML support to obtain AWS credentials is contained in the SDK’s Security Token Service assembly (AWSSDK.SecurityToken.dll), which is loaded at runtime. Be sure this assembly is available to your application at runtime.
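For example, if you had created a SAML role profile named ADFS-Dev (a placeholder name for this sketch), the app.config entry might look like this:

<configuration>
  <appSettings>
    <!-- ADFS-Dev is a placeholder; use the name of your SAML role profile -->
    <add key="AWSProfileName" value="ADFS-Dev"/>
  </appSettings>
</configuration>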

The SDK has also been updated to support reading credentials from the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables (the same variables used with other AWS SDKs). For legacy support, the AWS_SECRET_KEY variable is still supported.

If no credentials are supplied to the constructors of the service clients, the SDK probes to find a set to use. As of this release, the following probing tests are performed, in order:

  1. If an explicit access key/secret access key or profile name is found in the application’s app.config/web.config files, use it.
  2. If a credential profile named "default" exists, use it. (This profile can contain AWS credentials or it can be a SAML role profile.)
  3. If credentials are found in environment variables, use them.
  4. Finally, check EC2 instance metadata for instance profile credentials.

Specifying Region

To set the region when you instantiate AWS service clients in your SDK applications, you previously had two options: hard-code the region in the application code, using either the system name (for example, ‘us-east-1’) or the RegionEndpoint helper properties (for example, RegionEndpoint.USEast1), or supply the region system name in the application’s app.config/web.config files using the AWSRegion appsetting key. The SDK has now been updated to enable region detection through an environment variable or, if your code is running on an EC2 instance, from instance metadata.

To use an environment variable to set the AWS region, you simply set the variable AWS_REGION to the system name of the region you want service clients to use. If you need to override this for a specific client, simply pass the required region in the service client constructor. The AWS_REGION variable is used by the other AWS SDKs.
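As a quick sketch (the service clients and region chosen here are only illustrative):

// region resolved from the AWSRegion appsetting, AWS_REGION, or instance metadata
var s3Client = new AmazonS3Client();

// region overridden explicitly for this client only
var ec2Client = new AmazonEC2Client(RegionEndpoint.EUWest1);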

When running on an EC2 instance, your SDK-based applications will auto-detect the region in which the instance is running from EC2 instance metadata if no explicit setting is found. This means you can now deploy code without needing to hard-code any region in your app.config/web.config files. You can instead rely on the SDK to auto-detect the region when your application instantiates clients for AWS services.

Just as with credentials, if no region information is supplied to a service client constructor, the SDK probes to see if the region can be determined automatically. As of right now, these are the tests performed:

  1. Is an AWSRegion appsetting key present in the application’s app.config/web.config files? If so, use it.
  2. Is the AWS_REGION environment variable set? If so, use it.
  3. Attempt to read EC2 instance metadata and obtain the region in which the instance is running.

PowerShell Updates

Credential Handling

You can now supply credentials to the cmdlets by using the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. (You might find this helpful when you attempt to run the cmdlets in a user identity where credential profiles are inconvenient to set up, for example, the local system account.)
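For example (the key values below are the standard placeholder credentials used in AWS documentation, not real keys):

# supply credentials for this session through the environment
$env:AWS_ACCESS_KEY_ID     = "AKIAIOSFODNN7EXAMPLE"
$env:AWS_SECRET_ACCESS_KEY = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

# with no other credentials configured, the cmdlets pick these up automatically
Get-S3Bucket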

If you have enabled SAML federated authentication for use with the cmdlets, they now support the use of proxy data configured using the Set-AWSProxy cmdlet when making authentication requests against the ADFS endpoint. Previously, a proxy had to be set at the machine-wide level.

When the AWS cmdlets run, they follow this series of tests to obtain credentials:

  1. If explicit credential parameters (-AccessKey, -SecretKey, -SessionToken, for example) have been supplied to the cmdlet or if a profile has been specified using the -ProfileName parameter, use those credentials. The profile supplied to -ProfileName can contain regular AWS credentials or it can be a SAML role profile.
  2. If the current shell has been configured with default credentials (held in the $StoredAWSCredentials variable), use them.
  3. If a credential profile with the name "default" exists, use it. (This profile can contain regular AWS credentials or it can be a SAML role profile.)
  4. If the new AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY environment variables are set, use the credentials they contain.
  5. If EC2 instance metadata is available, look for instance profile credentials.

Specifying Region

In addition to existing support for specifying region using the -Region parameter on cmdlets (or setting a shell-wide default using Set-DefaultAWSRegion), the cmdlets in the AWSPowerShell module can now detect region from the AWS_REGION environment variable or from EC2 instance metadata.

Some users may run the Initialize-AWSDefaults cmdlet when opening a shell on an EC2 instance. Now that you can detect region from instance metadata, the first time you run this cmdlet on an EC2 instance, you are no longer prompted to select a region from the menu. If you want to run PowerShell scripts using a region for the AWS services different from the region in which the instance is running, you can override the default detection by supplying the -Region parameter, with appropriate value, to the cmdlet. You can also continue to use the Set-DefaultAWSRegion cmdlet in your shell or scripts, or add the -Region parameter to any cmdlet to direct calls to a region that differs from the region hosting the instance.

Just as with credentials, the cmdlets will search for the appropriate region to use when invoked:

  1. If a -Region parameter was supplied to the cmdlet, use it.
  2. If the current shell contains a default region ($StoredAWSRegion variable), use it.
  3. If the AWS_REGION environment variable is set, use it.
  4. If the credential profile ‘default’ exists and it contains a default region value (set by previous use of Initialize-AWSDefaults), use it.
  5. If EC2 instance metadata is available, inspect it to determine the region.

Reading EC2 Instance Metadata

As part of extending the SDK and PowerShell tools to read region information from EC2 instance metadata, we have refactored the metadata reader class (Amazon.EC2.Util.EC2Metadata) from the AWSSDK.EC2.dll assembly into the core runtime (AWSSDK.Core.dll) assembly. The original class has been marked obsolete.

The replacement class is Amazon.Util.EC2InstanceMetadata. It contains additional helper methods to read more of the EC2 instance metadata than the original class. For example, you can now read from the dynamic data associated with the instance. For more information, see http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-instance-metadata.html. Region information is held in what is known as the identity document for the instance. The document is in JSON format. The class contains a helper property, Region, which extracts the relevant data for you and returns it as a RegionEndpoint instance, making it super easy to query this in your own applications. You can also easily read the instance monitoring, signature, and PKCS7 data from convenient properties.
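A short sketch based on the members described above (this only works when the code runs on an EC2 instance, where the metadata endpoint is reachable):

// Amazon.Util.EC2InstanceMetadata lives in AWSSDK.Core.dll
var region = Amazon.Util.EC2InstanceMetadata.Region;   // returns a RegionEndpoint
Console.WriteLine("This instance is running in {0}", region.SystemName);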

We’ve not forgotten scripters either! Previously, to read instance metadata from PowerShell, you had to have run the Invoke-WebRequest cmdlet against the metadata endpoint and parsed the data yourself. The AWSPowerShell module now contains a cmdlet dedicated to the task: Get-EC2InstanceMetadata. Some examples:

PS C:\Users\Administrator> Get-EC2InstanceMetadata -Category LocalIpv4
10.232.46.188

PS C:\Users\Administrator> Get-EC2InstanceMetadata -Category AvailabilityZone
us-west-2a

PS C:\Users\Administrator> Get-EC2InstanceMetadata -ListCategory
AmiId
LaunchIndex
ManifestPath
AncestorAmiId
BlockDeviceMapping
InstanceId
InstanceType
LocalHostname
LocalIpv4
KernelId
AvailabilityZone
ProductCode
PublicHostname
PublicIpv4
PublicKey
RamdiskId
Region
ReservationId
SecurityGroup
UserData
InstanceMonitoring
IdentityDocument
IdentitySignature
IdentityPkcs7

PS C:\Users\Administrator> Get-EC2InstanceMetadata -Path /public-keys/0/openssh-key
ssh-rsa AAAAB3N...na27jfTV keypairname

We hope you find these new capabilities helpful. Be sure to let us know in the comments if there are other scenarios we should look at!

AWS SDK for .NET Version 2 Status

by Norm Johanson | in .NET

Version 3 of the AWS SDK for .NET has been generally available since 7/28/2015. Although the legacy version (v2) of the SDK will continue to work, we strongly recommend that all customers migrate to the latest version 3 to take advantage of various improvements including modularized architecture, portable class library support, and .NET Core support. There are only a few backward-incompatible changes between version 2 and version 3 (see the migration guide for details). Additionally, the last few legacy releases of the version 2 SDK (versions 2.3.50 and later) have all the classes and methods that are changed in version 3 marked obsolete, so compile-time warnings can help you make forward-compatible updates before upgrading to the version 3 SDK.

To help customers plan their migration, our current maintenance timeline for the version 2 SDK is provided below.

Security issues and critical bugs

Critical bugs with no reasonable workaround as well as any security-related issues will be addressed with the highest priority. We will continue to support fixing such issues indefinitely.

Non-critical bugs

We will continue to address non-critical bugs in the version 2 SDK until the end of 2016. These bugs will be fixed in relative priority order. Factors considered in prioritization will include:

  • Number of affected customers
  • Severity of the problem (broken feature vs. typo fix in documentation)
  • Whether the issue is already fixed in version 3
  • Risk of the fix causing unintended side effects

Service API updates

We will continue to add API updates to existing service clients based on customer request (GitHub Issue) until 8/1/2016.

New service clients

New service clients will not be added to the version 2 SDK. They will only be added to version 3.

As always, please find us in the Issues section of the SDK repository on GitHub, if you would like to report bugs, request service API updates, or ask general questions.

Retrieving Request Metrics from the AWS SDK for .NET

by Pavel Safronov | in .NET

In an earlier post, we discussed how you can turn on AWS SDK for .NET logging and how to log the SDK request metrics in JSON. In this post, we will discuss how you can use log4net or System.Diagnostics logging to gain access to the real RequestMetrics objects and work with raw metrics.

This approach may be useful as part of your testing to keep track of service calls made by the SDK or to log metrics data using your own loggers without the added overhead of trying to parse logs and reconstruct the metrics information.

To demonstrate, let’s implement a custom appender (MetricsAppender) for log4net that will extract and display metrics data to the console.

public class MetricsAppender : AppenderSkeleton
{
    // override AppenderSkeleton.Append to intercept all messages
    protected override void Append(LoggingEvent loggingEvent)
    {
        // extract the message data
        var logMessage = loggingEvent.MessageObject as ILogMessage;
        if (logMessage != null)
        {
            // find IRequestMetrics in logMessage.Args, if it is present
            var metrics = logMessage.Args == null ? null :
                logMessage.Args.Where(a => a is IRequestMetrics).FirstOrDefault() as IRequestMetrics;
            if (metrics != null)
            {
                // write MethodName and ClientExecuteTime to console
                Console.WriteLine("{0} took {1}ms to complete",
                    metrics.Properties[Metric.MethodName].First(),
                    metrics.Timings[Metric.ClientExecuteTime].First().ElapsedTime.TotalMilliseconds);
            }
        }
    }
}

Here is a simple example of the use of MetricsAppender with the SDK:

// create and configure metrics appender
var appender = new MetricsAppender();
BasicConfigurator.Configure(appender);

// configure the SDK to use log4net for metrics logging
AWSConfigs.LoggingConfig.LogMetrics = true;
AWSConfigs.LoggingConfig.LogTo = LoggingOptions.Log4Net;

// make service call and log the resultant metrics
using (var ddb = new AmazonDynamoDBClient())
{
    ddb.ListTables();
}

After you run this code, a message like this will be written to the console:

ListTablesRequest took 415.5368ms to complete

So what’s going on here? During logging, the AWS SDK for .NET passes an ILogMessage object to the loggers. ILogMessage holds the information required to execute String.Format to create a string that can be logged. String.Format isn’t called until the logger needs to convert the data to text (for example, in a logger that writes output to a file), but it can also be used to work with logged objects in memory. MetricsAppender simply analyzes all of the logged ILogMessage instances and looks for an ILogMessage that contains the IRequestMetrics object. It then extracts the required data from that object and writes this data to the console.

For this post we chose to use log4net because it has the handy AppenderSkeleton base class, which allowed us to override just one method to create a fully functional appender. System.Diagnostics has no direct equivalent, though it is easy to implement a similar trace listener yourself. In fact, you can grab the TestTraceListener from this GitHub discussion and use it. As you can see, the inner code for both MetricsAppender and TestTraceListener is virtually identical.
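If you prefer System.Diagnostics, a rough sketch of such a listener might look like the following (it assumes, as with the appender above, that the SDK hands the ILogMessage to the listener through TraceData; you would register it through the usual <system.diagnostics> trace listener configuration):

public class MetricsTraceListener : TraceListener
{
    public override void TraceData(TraceEventCache eventCache, string source,
        TraceEventType eventType, int id, object data)
    {
        // same extraction logic as MetricsAppender, but driven from TraceData
        var logMessage = data as ILogMessage;
        if (logMessage == null || logMessage.Args == null)
            return;

        var metrics = logMessage.Args.Where(a => a is IRequestMetrics).FirstOrDefault() as IRequestMetrics;
        if (metrics != null)
        {
            Console.WriteLine("{0} took {1}ms to complete",
                metrics.Properties[Metric.MethodName].First(),
                metrics.Timings[Metric.ClientExecuteTime].First().ElapsedTime.TotalMilliseconds);
        }
    }

    // TraceListener's abstract members; nothing to do for this scenario
    public override void Write(string message) { }
    public override void WriteLine(string message) { }
}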

Conclusion

We hope this look at logging in the AWS SDK for .NET will help you take full advantage of metrics data in your own application. If you have questions or feedback about the SDK, feel free to post them to our GitHub repo. In fact, this very post is the result of a user question about SDK metrics logging.

Exploring ASP.NET Core Part 2: Continuous Delivery

by Norm Johanson | in .NET

The first post in this series discussed how to use an Amazon EC2 instance and AWS CodeDeploy to deploy ASP.NET Core applications from GitHub. The setup assumed all git pushes to GitHub were deployed to the running environment without validation. In this post, let’s examine how we can create an AWS environment for our ASP.NET Core application that gives us quality control and takes advantage of AWS cloud scale.

You’ll find the code and setup scripts for this post in the part2 branch of the aws-blog-net-exploring-aspnet-core repository.

Validating the Deployment

In the first post, the appspec.yml file called the InstallApp.ps1 script during the ApplicationStart hook to extract the application and set up IIS. To verify the application is up and running, let’s update the appspec.yml file to call ValidateInstall.ps1 during the ValidateService hook.

version: 0.0
os: windows
files:
  - source: 
    destination: C:\ExploringAspNetCore
hooks:
  ApplicationStop:
    - location: .\RemoveApp.ps1
      timeout: 30
  ApplicationStart:
    - location: .\InstallApp.ps1
      timeout: 300
  ValidateService:
    - location: .\ValidateInstall.ps1
      timeout: 300

This script allows us to call tests to make sure our application is running correctly. In the GitHub repository, I added some xunit tests under .\SampleApp\src\SmokeTests. My sample application is no more than a simple "Hello World" web application so I just need to test that I can make a web call to the application and get a valid response. In a real-world application you would have a much more exhaustive suite of tests to run during this validation step.

Let’s take a look at ValidateInstall.ps1 to see how the tests are run.

sl C:\ExploringAspNetCore\SampleApp\src\SmokeTests

# Restore the nuget references
& "C:\Program Files\dotnet\dotnet.exe" restore

# Run the smoke tests
& "C:\Program Files\dotnet\dotnet.exe" test

exit $LastExitCode

To run the tests, switch to the directory where the tests are stored. Restore the dependencies, and then run the dotnet test command. The failure of any test will cause dotnet to return a nonzero exit code, which we return from the PowerShell script. AWS CodeDeploy will see the failed exit code and mark the deployment as a failure. We can get logs from the test run from the AWS CodeDeploy console to see what failed.

Using AWS CodePipeline

Now that deployments are running smoke tests, we can detect bad deployments. In a similar way, we want to make sure users aren’t affected by bad deployments. The best practice is to use a pipeline with a beta stage during which we run smoke tests. We promote to production only if beta was successful. This gives us continuous delivery with safety checks. Again, as we discussed in part 1, we benefit from the ability of ASP.NET Core to run from source. It means we do not have to bother configuring a build step in our pipeline. Pipelines can pull source from Amazon S3 or GitHub. To provide a complete sample that can be set up with just a PowerShell script, we’ll use S3 as the source for our pipeline. AWS CodePipeline will monitor S3 for new versions of an object to push them through the pipeline. For information about configuring a GitHub repository, see the AWS CodePipeline User Guide.

Setup Script

The PowerShell script .\EnvironmentSetup\EnvironmentSetup.ps1 in the repository will create the AWS resources required to deploy the application through a pipeline.

Note: To avoid charges for unused resources, be sure to run .\EnvironmentSetup\EnvironmentTearDown.ps1 when you are done testing.

The setup script sets up the following resources:

  • An S3 bucket with a zip of the archive as the initial deployment source.
  • A t2.small EC2 instance for beta.
  • An Auto Scaling group with a load balancer using t2.medium instances.
  • An AWS CodeDeploy application for beta using the t2.small EC2 instance.
  • An AWS CodeDeploy application for production using the Auto Scaling group.
  • AWS CodePipeline with the S3 bucket as the source and the beta and production stages configured to use the AWS CodeDeploy applications.

When the script is complete, it will print out the public DNS for the beta EC2 instance and production load balancer. We can monitor pipeline progress in the AWS CodePipeline console to see if the deployment was successful for both stages.

The application was deployed to both environments because the smoke tests were successful during the AWS CodeDeploy deployments.

Failed Deployments

Let’s see what happens when a deployment fails. We’ll force a test failure by opening the .\SampleApp\src\SmokeTests\WebsiteTests.cs test file and making a change that will cause the test to fail.

[Fact]
public async Task PassingTest()
{
    using (var client = new HttpClient())
    {
        var response = await client.GetStringAsync("http://localhost-not-a-real-host/");

        Assert.Equal("Exploring ASP.NET Core with AWS.", response);
    }
}

In the repository, we can run the .\DeployToPipeline.ps1 script, which will zip the archive and upload it to the S3 location used by the pipeline. This will kick off a deployment to beta. (The deployment will fail because of the bad test.)

A deployment will not be attempted during the production stage because of the failure at beta. This keeps production in a healthy state. To see what went wrong, we can view the deployment logs in the AWS CodeDeploy console.

Conclusion

With AWS CodeDeploy and AWS CodePipeline, we can build out a full continuous delivery system for deploying ASP.NET Core applications. Be sure to check out the GitHub repository for the sample and setup scripts. In the next post in this series, we’ll explore ASP.NET Core cross-platform support.

Exploring ASP.NET Core Part 1: Deploying from GitHub

by Norm Johanson | in .NET

ASP.NET Core, formerly ASP.NET 5, is a platform that offers lots of possibilities for deploying .NET applications. This series of posts will explore options for deploying ASP.NET applications on AWS.

What Is ASP.NET Core?

ASP.NET Core is the new open-source, cross-platform, and modularized implementation of ASP.NET. It is currently under development, so expect future posts to cover updates and changes (for example, the new CLI).

Deploying from GitHub

The AWS CodeDeploy deployment service can be configured to trigger deployments from GitHub. Before ASP.NET Core, .NET applications had to be built before they were deployed. ASP.NET Core applications can be deployed and run from the source.

Sample Code and Setup Scripts

The code and setup scripts for this blog can be found in the aws-blog-net-exploring-aspnet-core repository in the part1 branch.

Setting Up AWS CodeDeploy

AWS CodeDeploy automates deployments to Amazon EC2 instances that you set up and configure as a deployment group. For more information, see the AWS CodeDeploy User Guide.

Although ASP.NET Core offers cross-platform support, in this post we are using instances running Microsoft Windows Server 2012 R2. The Windows EC2 instances must have IIS, the .NET Core SDK, and Windows Server Hosting installed. Windows Server Hosting, also called the ASP.NET Core Module, is required to enable IIS to communicate with the ASP.NET Core web server, Kestrel.

To set up the AWS CodeDeploy environment, you can run the .\EnvironmentSetup\EnvironmentSetup.ps1 PowerShell script in the GitHub repository. This script will create an AWS CloudFormation stack that will set up an EC2 instance and configure AWS CodeDeploy and IIS with the .NET Core SDK and Windows Server Hosting. It will then set up an AWS CodeDeploy application called ExploringAspNetCorePart1.

To avoid ongoing charges for AWS resources, after you are done with your testing, be sure to run the .\EnvironmentSetup\EnvironmentTearDown.ps1 PowerShell script.

GitHub and AWS CodeDeploy

You can use the AWS CodeDeploy console to connect your AWS CodeDeploy application to a GitHub repository. Then you can initiate deployments to the AWS CodeDeploy application by specifying the GitHub repository and commit ID. The AWS CodeDeploy team has written a blog post that describes how to configure the repository to automatically push a deployment to the AWS CodeDeploy application.

Deploying from Source

When you deploy from GitHub, the deployment bundle is a zip archive of the repository. In the root of the repository is an appspec.yml file that tells AWS CodeDeploy how to deploy our application. For our application, the appspec.yml is very simple:

version: 0.0
os: windows
files:
  - source: 
    destination: C:\ExploringAspNetCore
hooks:
  ApplicationStop:
    - location: .\RemoveApp.ps1
      timeout: 30
  ApplicationStart:
    - location: .\InstallApp.ps1
      timeout: 300

The file tells AWS CodeDeploy to extract the files from our repository to C:\ExploringAspNetCore and then run the PowerShell script, InstallApp.ps1, to start the application. The script has three parts. The first part restores all the dependencies for the application.

# Restore the nuget references
& "C:\Program Files\dotnet\dotnet.exe" restore

The second part packages the application for publishing.

# Publish application with all of its dependencies and runtime for IIS to use
& "C:\Program Files\dotnet\dotnet.exe" publish --configuration release -o c:\ExploringAspNetCore\publish --runtime active

The third part updates IIS to point to the publishing folder. The AWS CodeDeploy agent is a 32-bit application and runs PowerShell scripts with the 32-bit version of PowerShell. To access IIS with PowerShell, we need to use the 64-bit version. That’s why this section passes the script into the 64-bit version of powershell.exe.

C:\Windows\SysNative\WindowsPowerShell\v1.0\powershell.exe -Command {
             Import-Module WebAdministration
             Set-ItemProperty 'IIS:\sites\Default Web Site' 
                 -Name physicalPath -Value c:\ExploringAspNetCore\publish
}

Note: This line was formatted for readability. For the correct syntax, view the script in the repository.

If we have configured the GitHub repository to push deployments to AWS CodeDeploy, then after every push, the code change will be zipped up and sent to AWS CodeDeploy. Then AWS CodeDeploy will execute the appspec.yml and InstallApp.ps1 and the EC2 instance will be up-to-date with the latest code — no build step required.

Share Your Feedback

Check out the aws-blog-net-exploring-aspnet-core repository and let us know what you think. We’ll keep adding ideas to the repository. Feel free to open an issue to share your own ideas for deploying ASP.NET Core applications.

Contributing to the AWS SDK for .NET

by Jim Flanagan | in .NET

The AWS SDK for .NET is an open source project available on GitHub. This post is to help community developers navigate the SDK code base with an eye toward contributing features and fixes to the SDK.

Code Generation

The first gotcha for contributors is that major portions of the code are generated from models of the service APIs. In version 3 of the SDK, we reorganized the code base to make it obvious which code is generated. You’ll now find it under each service client folder in folders named "Generated." Similarly, handwritten code is now found in folders named "Custom." Most of the generated code is in partial classes to facilitate extending it without having changes get clobbered by the code generator.

The convention we use when adding extensions to generated partial classes is to place a file called Class.Extensions.cs under the Custom folder with the same hierarchy as the generated file.
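As a purely hypothetical illustration of that convention (the helper member below is not part of the real SDK surface), a hand-written extension to a generated request class might look like this, with the file placed under Custom in the same relative location as its generated counterpart:

// e.g. Custom/Model/PutObjectRequest.Extensions.cs alongside Generated/Model/PutObjectRequest.cs
namespace Amazon.S3.Model
{
    public partial class PutObjectRequest
    {
        // illustrative convenience member added to the generated partial class
        public bool IsKeySet()
        {
            return !string.IsNullOrEmpty(this.Key);
        }
    }
}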

The code generator can be found here. The models are under the ServiceModels folder. To add a client, add a model to the ServiceModels folder, update the _manifest.json file in the same folder, and run the generator. The customization files in the folder handle ways in which we can override the behavior of the generator, mostly to keep some consistency across older and newer services, as well as make adjustments to make the API more C#-friendly.

It is sometimes necessary to update the code generator to add a feature or fix an issue. Because changes to the generator may impact all existing services and require a lot of testing, these changes should not be undertaken lightly.

Platform Support

Another thing you may notice about the code base is that some files are under folders like _bcl, _bcl35, _bcl45, _mobile, or _async. This is how the SDK controls which files are included in platform-specific project files.

As an example, if you look at the AutoScaling client folder you will see the folders

Model
_bcl35
_bcl45
_mobile

The _bcl45 folder contains the Auto Scaling client and interface for the .NET Framework 4.5 version of the AWS SDK for .NET. It differs from the 3.5 version in that it exposes Async/Await versions of the service APIs, whereas the 3.5 version exposes Begin/End methods for asynchrony. The Model folder contains code common to all platforms. Because the platform-specific project files include or exclude these folders, don’t use Visual Studio to add files to an SDK project. Instead, add the file to the appropriate file system location, and then reload the project. We try to use this subdirectory mechanism rather than #if directives in the code where possible.
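To make the difference concrete, here is a rough sketch of the two asynchrony patterns; the method names follow the SDK’s usual conventions, but treat the exact signatures as illustrative:

// _bcl35 pattern: Begin/End (APM) asynchrony
IAsyncResult asyncResult = client.BeginDescribeAutoScalingGroups(request, null, null);
DescribeAutoScalingGroupsResponse response35 = client.EndDescribeAutoScalingGroups(asyncResult);

// _bcl45 pattern: Task-based asynchrony usable with async/await
DescribeAutoScalingGroupsResponse response45 = await client.DescribeAutoScalingGroupsAsync(request);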

Testing

It will be much easier to evaluate and provide feedback on contributions if they are accompanied by unit tests, which can be added to the UnitTests folder.

Sometimes it is good to include some integration tests that hit the service endpoint, too. Integration tests will, by necessity, create AWS resources under your account, so it’s possible you will incur some costs. Try to design integration tests that favor APIs that don’t create resources, are fast to run, and account for the eventual consistency of the APIs you’re testing.

Community Feature Requests

Many contributions are driven by the specific needs of a community member, but sometimes they’re driven simply by a desire to get involved. If you would like to get involved, we collect community feature requests in the FEATURE_REQUESTS.md at the top level of the repository.

DynamoDB Document Model Manual Pagination

by Pavel Safronov | in .NET

In version 3.1.1.2 of the DynamoDB .NET SDK package, we added pagination support to the Document Model. This feature allows you to use a pagination token returned by the API to paginate a set of Query or Scan results across sessions. Until now, it was not possible to resume pagination of Query or Scan results without retrieving the already encountered items. This post includes two simple examples of this new functionality.

The first example makes the initial Query call to retrieve all movies from the year 2012, and then saves the pagination token returned by the Search.PaginationToken property. The second example retrieves the PaginationToken and continues the same query. For these examples, we will assume we have functions (void SaveToken(string token) and string LoadToken()) to persist the pagination token across sessions. (If this were an ASP.NET application, the functions would use the session store to store the token, but this can be any similar environment where manual pagination is used.)

Initial query:

var client = new AmazonDynamoDBClient();
Table moviesTable = Table.LoadTable(client, "MoviesByYear");

// start initial query
var search = moviesTable.Query(new QueryOperationConfig
{
    Filter = new QueryFilter("Year", QueryOperator.Equal, 2012),
});

// retrieve one page of items
List<Document> items = search.GetNextSet();

// get pagination token
string token = search.PaginationToken;

// persist the token in session data or something similar
SaveToken(token);

Resumed query:

var client = new AmazonDynamoDBClient();
Table moviesTable = Table.LoadTable(client, "MoviesByYear");

// load persisted token
string token = LoadToken();

// use token to resume query from last position
var search = moviesTable.Query(new QueryOperationConfig
{
    Filter = new QueryFilter("Year", QueryOperator.Equal, 2012),
    PaginationToken = token,
});
List<Document> items = search.GetNextSet();

// pagination token changed, persist new value
SaveToken(search.PaginationToken);

DataModel support

Although this functionality has not yet been added to the Object Persistence Model, it is possible to work around this limitation. In the following code sample, we can use the DocumentModel API to manually paginate our data, and then use DynamoDBContext to convert the retrieved Documents into .NET objects. Because we are using DynamoDBContext and don’t want to stray too far into the Document Model API, we’re going to use DynamoDBContext.GetTargetTable to avoid the manual construction of our Table instance.

// create DynamoDBContext object
var context = new DynamoDBContext(client);

// get the target table from the context
var moviesTable = context.GetTargetTable<Movie>();

// use token to resume query from last position
var search = moviesTable.Query(new QueryOperationConfig
{
    Filter = new QueryFilter("Year", QueryOperator.Equal, 2012),
    PaginationToken = token,
});
List<Document> items = search.GetNextSet();

// pagination token changed, persist new value
SaveToken(search.PaginationToken);

// convert page of Documents in .NET objects and enumerate over them
IEnumerable<Movie> movies = context.FromDocuments<Movie>(items);
foreach (var movie in movies)
    Log("{0} ({1})", movie.Title, movie.Year);

As you can see, even though we executed our Query using a Table object, we can continue working with familiar .NET classes while controlling the pagination of our data.

Installing Scheduled Tasks on EC2 Windows Instances Using EC2 Run Command

by Steve Roberts | in .NET

Today’s guest post is the second part of a two-part series by AWS Solutions Architect Russell Day. Part one can be found here.

In the previous post, we showed how to use the User data field to install Windows scheduled tasks automatically when Windows EC2 instances are launched. In this post, we will demonstrate how to do the same thing using the new EC2 Run Command, which provides a simple way to remotely execute PowerShell commands against EC2 instances.

Use the EC2 Run Command to Install Scheduled Tasks Automatically

Just as we did in the previous post, we will demonstrate two methods for using the EC2 Run Command: the Amazon EC2 console and AWS Tools for PowerShell.

Use the EC2 Console to Execute the EC2 Run Command

  1. Complete steps 1 through 4 in the previous post.
  2. In the EC2 console, choose Commands.

  3. Choose the Run a command button.
  4. Under Command document, choose AWS-RunPowerShellScript.
  5. Select your target instances.
  6. Paste the PowerShell script from step 5 of the previous post into the Commands text box as shown.

  7. Leave all other fields at their defaults, and choose Run to invoke the PowerShell script on the target instances.
  8. You can monitor progress and view the output of the invoked scripts as shown.

Use PowerShell to Execute the EC2 Run Command

Alternatively, you can use PowerShell to invoke the EC2 Run Command as shown.

  1. If you have not already configured your PowerShell environment, follow these instructions to configure your PowerShell console to use the AWS Tools for Windows PowerShell.
  2. Save the PowerShell script from step 5 in the previous post as InstallWindowsTasks.ps1.

From a PowerShell session, simply replace ‘Instance-ID’ with the instance IDs of your target instances and provide the path to InstallWindowsTasks.ps1 as shown.

$runPSCommand = Send-SSMCommand `
    -InstanceId @('Instance-ID', 'Instance-ID') `
    -DocumentName AWS-RunPowerShellScript `
    -Parameter @{'commands' = @([System.IO.File]::ReadAllText("C:...InstallWindowsTasks.ps1"))}

You can use the following commands to monitor the status.

Retrieve the command execution status:


Get-SSMCommand -CommandId $runPSCommand.CommandId

Retrieve the status of the command execution on a per-instance basis:


Get-SSMCommandInvocation -CommandId $runPSCommand.CommandId

Retrieve the command information with response data for an instance. (Be sure to replace Instance-ID)

Get-SSMCommandInvocation -CommandId $runPSCommand.CommandId `
      -Details $true -InstanceId Instance-ID | 
   select -ExpandProperty CommandPlugins

Summary

The EC2 Run Command simplifies remote management and customization of your EC2 instances. For more information, see the following resources:

http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/execute-remote-commands.html
https://aws.amazon.com/ec2/run-command/.