Exploring ASP.NET Core Part 1: Deploying from GitHub

by Norm Johanson

ASP.NET Core, formerly ASP.NET 5, is a platform that offers lots of possibilities for deploying .NET applications. This series of posts will explore options for deploying ASP.NET applications on AWS.

What Is ASP.NET Core?

ASP.NET Core is the new open-source, cross-platform, and modularized implementation of ASP.NET. It is currently under development, so expect future posts to cover updates and changes (for example, the new CLI).

Deploying from GitHub

The AWS CodeDeploy deployment service can be configured to trigger deployments from GitHub. Before ASP.NET Core, .NET applications had to be built before they were deployed. ASP.NET Core applications can be deployed and run from the source.

Sample Code and Setup Scripts

The code and setup scripts for this blog can be found in the aws-blog-net-exploring-aspnet-core repository in the part1 branch.

Setting Up AWS CodeDeploy

AWS CodeDeploy automates deployments to Amazon EC2 instances that you set up and configure as a deployment group. For more information, see the AWS CodeDeploy User Guide.

Although ASP.NET Core offers cross-platform support, in this post we are using instances running Microsoft Windows Server 2012 R2. The Windows EC2 instances must have IIS, the .NET Core SDK, and Windows Server Hosting installed. Windows Server Hosting, also called the ASP.NET Core Module, is required to enable IIS to communicate with the ASP.NET Core web server, Kestrel.

To set up the AWS CodeDeploy environment, you can run the .\EnvironmentSetup\EnvironmentSetup.ps1 PowerShell script in the GitHub repository. This script creates an AWS CloudFormation stack that sets up an EC2 instance with the AWS CodeDeploy agent, IIS, the .NET Core SDK, and Windows Server Hosting installed and configured. It then sets up an AWS CodeDeploy application called ExploringAspNetCorePart1.

To avoid ongoing charges for AWS resources, after you are done with your testing, be sure to run the .\EnvironmentSetup\EnvironmentTearDown.ps1 PowerShell script.
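For reference, both scripts are run from a PowerShell prompt in the root of the cloned repository:

# create the CloudFormation stack: EC2 instance, IIS, .NET Core SDK,
# Windows Server Hosting, and the ExploringAspNetCorePart1 application
.\EnvironmentSetup\EnvironmentSetup.ps1

# when testing is finished, delete the stack and its resources
.\EnvironmentSetup\EnvironmentTearDown.ps1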

GitHub and AWS CodeDeploy

You can use the AWS CodeDeploy console to connect your AWS CodeDeploy application to a GitHub repository. Then you can initiate deployments to the AWS CodeDeploy application by specifying the GitHub repository and commit ID. The AWS CodeDeploy team has written a blog post that describes how to configure the repository to automatically push a deployment to the AWS CodeDeploy application.

Deploying from Source

When you deploy from GitHub, the deployment bundle is a zip archive of the repository. In the root of the repository is an appspec.yml file that tells AWS CodeDeploy how to deploy our application. For our application, the appspec.yml is very simple:

version: 0.0
os: windows
files:
  - source: 
    destination: C:\ExploringAspNetCore
hooks:
  ApplicationStop:
    - location: .\RemoveApp.ps1
      timeout: 30
  ApplicationStart:
    - location: .\InstallApp.ps1
      timeout: 300

The file tells AWS CodeDeploy to extract the files from our repository to C:\ExploringAspNetCore and then run the PowerShell script, InstallApp.ps1, to start the application. The script has three parts. The first part restores all the dependencies for the application.

# Restore the NuGet references
"C:\Program Files\dotnet\dotnet.exe" restore

The second part packages the application for publishing.

# Publish application with all of its dependencies and runtime for IIS to use
"C:Program Files\dotnet\dotnet.exe" publish --configuration release -o c:\ExploringAspNetCore\publish --runtime active

The third part updates IIS to point to the publishing folder. The AWS CodeDeploy agent is a 32-bit application and runs PowerShell scripts with the 32-bit version of PowerShell. To access IIS with PowerShell, we need to use the 64-bit version. That’s why this section passes the script into the 64-bit version of powershell.exe.

C:\Windows\SysNative\WindowsPowerShell\v1.0\powershell.exe -Command {
             Import-Module WebAdministration
             Set-ItemProperty 'IIS:\Sites\Default Web Site' 
                 -Name physicalPath -Value c:\ExploringAspNetCore\publish
}

Note: This command was formatted for readability. For the correct syntax, view the script in the repository.

If we have configured the GitHub repository to push deployments to AWS CodeDeploy, then after every push, the code change will be zipped up and sent to AWS CodeDeploy. Then AWS CodeDeploy will execute the appspec.yml and InstallApp.ps1 and the EC2 instance will be up-to-date with the latest code — no build step required.

Share Your Feedback

Check out the aws-blog-net-exploring-aspnet-core repository and let us know what you think. We’ll keep adding ideas to the repository. Feel free to open an issue to share your own ideas for deploying ASP.NET Core applications.

Contributing to the AWS SDK for .NET

by Jim Flanagan

The AWS SDK for .NET is an open source project available on GitHub. This post is intended to help community developers navigate the SDK code base with an eye toward contributing features and fixes to the SDK.

Code Generation

The first gotcha for contributors is that major portions of the code are generated from models of the service APIs. In version 3 of the SDK, we reorganized the code base to make it obvious which code is generated. You’ll now find it under each service client folder in folders named "Generated." Similarly, handwritten code is now found in folders named "Custom." Most of the generated code is in partial classes to facilitate extending it without having changes get clobbered by the code generator.

The convention we use when adding extensions to generated partial classes is to place a file named <ClassName>.Extensions.cs under the Custom folder, using the same hierarchy as the generated file.

The code generator can be found here. The models are under the ServiceModels folder. To add a client, add a model to the ServiceModels folder, update the _manifest.json file in the same folder, and run the generator. The customization files in the folder handle ways in which we can override the behavior of the generator, mostly to keep some consistency across older and newer services, as well as make adjustments to make the API more C#-friendly.

It is sometimes necessary to update the code generator to add a feature or fix an issue. Because changes to the generator may impact all existing services and require a lot of testing, these changes should not be undertaken lightly.

Platform Support

Another thing you may notice about the code base is that some files are under folders like _bcl, _bcl35, _bcl45, _mobile, or _async. This is how the SDK controls which files are included in platform-specific project files.

As an example, if you look at the AutoScaling client folder you will see the folders

Model
_bcl35
_bcl45
_mobile

The _bcl45 folder contains the Auto Scaling client and interface for version 4.5 of the AWS SDK for .NET. It differs from the 3.5 version in that it exposes Async/Await versions of the service APIs, where the 3.5 version exposes Begin/End for asynchrony. The Model folder contains code common to all platforms. Because the platform-specific project files include source based on these folder locations rather than on what you add through the IDE, don’t use Visual Studio to add files to an SDK project. Instead, add the file to the appropriate file system location, and then reload the project. We try to use this subdirectory mechanism rather than #if directives in the code where possible.

Testing

It will be much easier to evaluate and provide feedback on contributions if they are accompanied by unit tests, which can be added to the UnitTests folder.

Sometimes it is good to include some integration tests that hit the service endpoint, too. Integration tests will, by necessity, create AWS resources under your account, so it’s possible you will incur some costs. Try to design integration tests that favor APIs that don’t create resources, are fast to run, and account for the eventual consistency of the APIs you’re testing.

Community Feature Requests

Many contributions are driven by the specific needs of a community member, but sometimes they’re driven simply by a desire to get involved. If you would like to get involved, we collect community feature requests in the FEATURE_REQUESTS.md at the top level of the repository.

DynamoDB Document Model Manual Pagination

by Pavel Safronov

In version 3.1.1.2 of the DynamoDB .NET SDK package, we added pagination support to the Document Model. This feature allows you to use a pagination token returned by the API to paginate a set of Query or Scan results across sessions. Until now, it was not possible to resume pagination of Query or Scan results without retrieving the already encountered items. This post includes two simple examples of this new functionality.

The first example makes the initial Query call to retrieve all movies from the year 2012, and then saves the pagination token returned by the Search.PaginationToken property. The second example retrieves the PaginationToken and continues the same query. For these examples, we will assume we have functions (void SaveToken(string token) and string LoadToken()) to persist the pagination token across sessions. (If this were an ASP.NET application, the functions would use the session store to store the token, but this can be any similar environment where manual pagination is used.)

Initial query:

var client = new AmazonDynamoDBClient();
Table moviesTable = Table.LoadTable(client, "MoviesByYear");

// start initial query
var search = moviesTable.Query(new QueryOperationConfig
{
    Filter = new QueryFilter("Year", QueryOperator.Equal, 2012),
});

// retrieve one page of items
List<Document> items = search.GetNextSet();

// get pagination token
string token = search.PaginationToken;

// persist the token in session data or something similar
SaveToken(token);

Resumed query:

var client = new AmazonDynamoDBClient();
Table moviesTable = Table.LoadTable(client, "MoviesByYear");

// load persisted token
string token = LoadToken();

// use token to resume query from last position
var search = moviesTable.Query(new QueryOperationConfig
{
    Filter = new QueryFilter("Year", QueryOperator.Equal, 2012),
    PaginationToken = token,
});
List<Document> items = search.GetNextSet();

// pagination token changed, persist new value
SaveToken(search.PaginationToken);

DataModel support

Although this functionality has not yet been added to the Object Persistence Model, it is possible to work around this limitation. In the following code sample, we can use the DocumentModel API to manually paginate our data, and then use DynamoDBContext to convert the retrieved Documents into .NET objects. Because we are using DynamoDBContext and don’t want to stray too far into the Document Model API, we’re going to use DynamoDBContext.GetTargetTable to avoid the manual construction of our Table instance.

// create DynamoDBContext object
var context = new DynamoDBContext(client);

// get the target table from the context
var moviesTable = context.GetTargetTable<Movie>();

// use token to resume query from last position
var search = moviesTable.Query(new QueryOperationConfig
{
    Filter = new QueryFilter("Year", QueryOperator.Equal, 2012),
    PaginationToken = token,
});
List<Document> items = search.GetNextSet();

// pagination token changed, persist new value
SaveToken(search.PaginationToken);

// convert the page of Documents into .NET objects and enumerate over them
IEnumerable<Movie> movies = context.FromDocuments<Movie>(items);
foreach (var movie in movies)
    Log("{0} ({1})", movie.Title, movie.Year);

As you can see, even though we executed our Query using a Table object, we can continue working with familiar .NET classes while controlling the pagination of our data.

Installing Scheduled Tasks on EC2 Windows Instances Using EC2 Run Command

by Steve Roberts

Today’s guest post is the second part of a two-part series by AWS Solutions Architect Russell Day. Part one can be found here.

In the previous post, we showed how to use the User data field to install Windows scheduled tasks automatically when Windows EC2 instances are launched. In this post, we will demonstrate how to do the same thing using the new EC2 Run Command, which provides a simple way to remotely execute PowerShell commands against EC2 instances.

Use the EC2 Run Command to Install Scheduled Tasks Automatically

Just as we did in the previous post, we will demonstrate two methods for using the EC2 Run Command: the Amazon EC2 console and AWS Tools for PowerShell.

Use the EC2 Console to Execute the EC2 Run Command

  1. Complete steps 1 through 4 in the previous post.
  2. In the EC2 console, choose Commands.

  3. Choose the Run a command button.
  4. Under Command document, choose AWS-RunPowerShellScript.
  5. Select your target instances.
  6. Paste the PowerShell script from step 5 of the previous post into the Commands text box as shown.

  7. Leave all other fields at their defaults, and choose Run to invoke the PowerShell script on the target instances.
  8. You can monitor progress and view the output of the invoked scripts as shown.

Use PowerShell to Execute the EC2 Run Command

Alternatively, you can use PowerShell to invoke the EC2 Run Command as shown.

  1. If you have not already configured your PowerShell environment, follow these instructions to configure your PowerShell console to use the AWS Tools for Windows PowerShell.
  2. Save the PowerShell script from step 5 in the previous post as InstallWindowsTasks.ps1.

From a PowerShell session, simply replace ‘Instance-ID’ with the instance IDs of your target instances and provide the path to InstallWindowsTasks.ps1 as shown.

$runPSCommand = Send-SSMCommand `
    -InstanceId @('Instance-ID', 'Instance-ID') `
    -DocumentName AWS-RunPowerShellScript `
    -Parameter @{'commands' = @([System.IO.File]::ReadAllText("C:...InstallWindowsTasks.ps1"))}

You can use the following commands to monitor the status.

Retrieve the command execution status:


Get-SSMCommand -CommandId $runPSCommand.CommandId

Retrieve the status of the command execution on a per-instance basis:


Get-SSMCommandInvocation -CommandId $runPSCommand.CommandId

Retrieve the command information with response data for an instance (be sure to replace Instance-ID):

Get-SSMCommandInvocation -CommandId $runPSCommand.CommandId `
      -Details $true -InstanceId Instance-ID | 
   select -ExpandProperty CommandPlugins

Summary

The EC2 Run Command simplifies remote management and customization of your EC2 instances. For more information, see the following resources:

http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/execute-remote-commands.html
https://aws.amazon.com/ec2/run-command/

AWS SDK for .NET Refresh for ASP.NET 5

by Norm Johanson

Today we refreshed our ASP.NET 5 and CoreCLR support for the AWS SDK for .NET. This means we have pulled in all of the latest service updates, new services like AWS IoT, and enhancements from our stable 3.1 line of NuGet packages into new 3.2 beta versions of the SDK. Because a few of the dependencies of our AWSSDK.Core package are still in beta, our support must remain in beta as well.

SDK Credentials Store

As part of CoreCLR support in the SDK, we have also enabled the SDK credentials store. The SDK credentials store is the encrypted storage for AWS credentials that you can manage using the AWS Explorer in Visual Studio. This means when you use the SDK on Windows and target the new CoreCLR runtime, the credential search pattern will be the same as the regular AWS SDK for .NET. On non-Windows platforms, we recommend using the shared credentials file.
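If you haven’t used it before, the shared credentials file is a plain-text file, located at ~/.aws/credentials by default, that stores named profiles. A minimal example with placeholder values looks like this:

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY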

Installing Scheduled Tasks on EC2 Windows Instances

by Steve Roberts

Today’s guest post is part one of a two-part series by AWS Solutions Architect Russell Day.

Windows administrators and developers often use scheduled tasks to run programs or scripts on a recurring basis. In this post, we will demonstrate how to use the Amazon EC2 User data option to install scheduled tasks on Windows EC2 instances automatically at launch.

Using the user data field to specify scripts that will automatically configure instances is commonly referred to as bootstrapping. In this post, we will specify a PowerShell script in the user data field to install scheduled tasks when EC2 instances are launched. We will demonstrate two methods for launching EC2 instances: the EC2 console and AWS Tools for PowerShell.

Before we can get started, we need to export the scheduled tasks and store them in a location accessible to our EC2 instances. We will use Task Scheduler to export the scheduled tasks to XML and store them in an Amazon S3 bucket.

Step 1: Export scheduled tasks.

In Task Scheduler, right-click each scheduled task you want to install on your EC2 instances, and export it as an XML file.

Step 2: Create an S3 bucket to store the XML scheduled task definitions.

Use the S3 Console, CLI, or AWS Tools for Windows PowerShell to create an S3 bucket that will store the XML task definition files created in step 1.
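For example, using the AWS Tools for Windows PowerShell (the bucket name and region below are placeholders; bucket names must be globally unique):

New-S3Bucket -BucketName your-scheduled-tasks-bucket -Region us-west-2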

Step 3: Create manifest file(s).

The manifest file(s) contains the scheduled tasks you want to install on the target instances. Consider using a separate manifest file for each unique set of tasks (for example, ProductionServerTasks.xml, DevelopmentServerTasks.xml).

Modify and save the following XML to create your manifest file(s).

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <scheduledTasks>
    <task name="Daily Scheduled Task 1" source="ScheduledTask1.xml" />
    <task name="Daily Scheduled Task 2" source="ScheduledTask2.xml" />
    <task name="Daily Scheduled Task 3" source="ScheduledTask3.xml" />
    <task name="Daily Scheduled Task 4" source="ScheduledTask4.xml" />
  </scheduledTasks>
</configuration>

Step 4: Upload the exported scheduled task definitions and manifest file(s) to S3.

Upload the scheduled tasks definitions created in step 1 and the manifest file(s) created in step 3 to the S3 bucket created in step 2.
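This can also be scripted. A sketch using Write-S3Object, with illustrative bucket, key, and file names:

Write-S3Object -BucketName your-scheduled-tasks-bucket -Key TaskManifest.xml -File .\TaskManifest.xml
Write-S3Object -BucketName your-scheduled-tasks-bucket -Key ScheduledTask1.xml -File .\ScheduledTask1.xml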

Step 5: Create a PowerShell script to download and install the scheduled tasks.

The following PowerShell script contains functions to download and install the scheduled tasks stored in our S3 bucket. Replace the $S3Bucket and $TaskManifest parameters with your S3 bucket name and manifest file name.

$VerbosePreference = "Continue";
$WorkingDirectory = "c:\tasks";
$TaskManifest = "TaskManifest.xml";
$S3Bucket = "YourS3BucketName";
function Invoke-Functions
{
    Download-ScheduledTasks
    Install-ScheduledTasks
}
function Download-ScheduledTasks
{
    Read-S3Object `
        -BucketName $S3Bucket `
        -Key $TaskManifest `
        -File "$WorkingDirectory\$TaskManifest"

    [xml]$cfg = gc "$WorkingDirectory\$TaskManifest";
    $cfg.configuration.scheduledtasks.task | 
        %{ 
           $task = $_;
           [string] $TaskFile = $task.source
           Read-S3Object `
                -BucketName $S3Bucket `
                -Key $task.source `
                -File "$WorkingDirectory$TaskFile" 
        }	
}

function Install-ScheduledTasks
{		
    [xml]$cfg = gc "$WorkingDirectory\$TaskManifest";
    $cfg.configuration.scheduledtasks.task | 
        %{
           $task = $_;
           [string] $TaskFile = $task.source
            Register-ScheduledTask `
                -Xml (get-content "$WorkingDirectory\$TaskFile" | out-string) `
                -TaskName $task.name
        }
}

Invoke-Functions | Out-File "c:\InstallTasksLog.txt" -Verbose;

Step 6: Create an EC2 role to allow GetObject permissions to the S3 bucket.

Our PowerShell script uses the Read-S3Object PowerShell cmdlet to download the scheduled task definitions from S3. Therefore, we need to create an EC2 role that allows our EC2 instances to access our S3 bucket objects on our behalf.

Follow these steps to create the EC2 role.

  1. Open the IAM console.
  2. In the navigation pane, choose Policies.
  3. Choose Create Policy.
  4. Choose Create Your Own Policy, and use a policy like the example shown after these steps. Replace [YourS3BucketName] with the name of your bucket.

  5. In the navigation pane, choose Roles.
  6. Choose Create Role.
  7. In the Role Name field, type a name for your role.
  8. Under AWS Service Roles, choose Amazon EC2, and then choose Select.
  9. On the Attach Policy page, choose the policy you created, and then choose Next Step.
  10. On the Review page, choose Create Role.
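The policy referenced in step 4 only needs to grant read access to the objects in your bucket. A minimal sketch looks like the following; replace [YourS3BucketName] with the name of your bucket, and adjust if your tasks need additional S3 permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::[YourS3BucketName]/*"]
    }
  ]
}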

Use the EC2 Console to Launch EC2 Instance(s).

  1. Open the EC2 console, and choose Launch Instance.
  2. Choose your version of Microsoft Windows Server.
  3. Continue to Step 3: Configure Instance Details.

    • For IAM Role, choose the EC2 role you just created.
    • In Advanced Details, paste the PowerShell script into the text box. Be sure to enclose it in <powershell> and </powershell> tags, as shown in the example after these steps.

  4. Complete the wizard steps and launch the Windows EC2 instance(s).
  5. After your instance(s) have been launched, you can verify the installation of your scheduled tasks.
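For reference, the user data pasted into the Advanced Details text box should look something like this, with the comment standing in for the full contents of InstallWindowsTasks.ps1:

<powershell>
# contents of InstallWindowsTasks.ps1: download the task definitions and
# manifest from S3 and register them with Task Scheduler
</powershell>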

Use AWS Tools for Windows PowerShell to Launch EC2 Instances.

In keeping with our theme of automation, you can use PowerShell to create the instances programmatically.

  1. If you have not already configured your PowerShell environment, follow these instructions to configure your PowerShell console to use the AWS Tools for Windows PowerShell.
  2. Save the PowerShell script that will download and install the scheduled tasks as InstallWindowsTasks.ps1.
  3. Save the following PowerShell script as a module named AWSHelper.psm1. This allows you to reuse it when you launch Windows EC2 instances in the future. Modify the following parameters with your environment resource values:

    # the key pair to associate with the instance(s)
    $KeyPairName
    # the EC2 instance(s) security group ID
    $SecurityGroupId
    # the subnet ID for the instance(s) after launch
    $SubnetId
    # the ARN of the EC2 role we created to allow access to our S3 bucket
    $InstanceProfile
    

     

    $VerbosePreference = "Continue";
    $scriptpath = $MyInvocation.MyCommand.Path;
    $moduledirectory = Split-Path $scriptpath;
    
    function ConvertTo-Base64($string) {
       $bytes = [System.Text.Encoding]::UTF8.GetBytes($string);
       $encoded = [System.Convert]::ToBase64String($bytes); 
       return $encoded;
    }
    
    function New-WindowsEC2Instance
    {
      [CmdletBinding()]
      Param
      (                    
        [Parameter(Mandatory=$false)]
        [string] $InstanceType = "t2.micro",
        [Parameter(Mandatory=$false)]
        [string] $KeyPairName = "YourKeyPair", 
        [Parameter(Mandatory=$false)]
        [string] $SecurityGroupId = "sg-5xxxxxxx", 
        [Parameter(Mandatory=$false)]
        [string] $SubnetId = "subnet-1xxxxxxx",	
        [Parameter(Mandatory=$true)]
        [int32] $Count, 
        [Parameter(Mandatory=$false)]
        [string] $InstanceProfile ="EC2RoleARN",
        [Parameter(Mandatory=$false)]
        [string] $UserScript 
            = (Join-Path $script:moduledirectory "InstallWindowsTasks.ps1")
      )
      Process
      {
        $ami = Get-EC2ImageByName -Names 'WINDOWS_2012R2_BASE'
        $ImageID =  $ami[0].ImageId
        $UserData = "";
        if ($userScript -and (Test-Path $userScript))
        {
          $contents = "<powershell>" + [System.IO.File]::ReadAllText($UserScript) + "</powershell>";
    	  $filePath = gi $UserScript;
          $UserData = ConvertTo-Base64($contents);
        }
    
        $params = @{};
        $params.Add("ImageID", $ImageID);
        $params.Add("InstanceType", $InstanceType);
        $params.Add("KeyName", $KeyPairName); 
        $params.Add("MaxCount", $Count);
        $params.Add("MinCount", $Count);
        $params.Add("InstanceProfile_Arn", $InstanceProfile);
        $params.Add("SecurityGroupId", $SecurityGroupId); 
        $params.Add("SubnetId", $SubnetId);
        $params.Add("UserData", $UserData); 	
    
        $reservation = New-EC2Instance @params;
      }
    }
    
  4. To invoke the PowerShell code, import the AWSHelper.psm1 module, and then call the New-WindowsEC2Instance cmdlet as shown in the following example. Type the number of instances at the prompt.
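A minimal invocation, run from the folder that contains AWSHelper.psm1 and InstallWindowsTasks.ps1, looks something like this:

    Import-Module .\AWSHelper.psm1

    # -Count is mandatory, so PowerShell prompts for the number of instances to launch
    New-WindowsEC2Instance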

Summary

The User data option provides a convenient way to automate the customization of your EC2 instances. For more information, see the following resources:

http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/UsingConfig_WinAMI.html#user-data-execution
http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/walkthrough-powershell.html

Code Analyzers Added to AWS SDK for .NET

One of the most exciting Microsoft Visual Studio 2015 features is the ability to have static analysis run on your code as you write it. This makes it possible to flag code that is syntactically correct but will cause errors when run.

We have added static analyzers to the latest AWS SDK NuGet packages for each of the version 3 service packages. The analyzers will check the values set on the SDK classes to make sure they are valid. For example, for a property that takes in a string, the analyzer will verify the string meets the minimum and maximum length. An analyzer will also run a regular expression to make sure it meets the right pattern.

Let’s say I wanted to create an Amazon DynamoDB table. Table names must be at least three characters and cannot contain characters like @ or #. So if I tried to create a table with the name of "@work", the service would fail the request. The analyzers will detect the issue, display an alert in the code editor, and put warnings in the error list before I even attempt to call the service.

Setup

The analyzers are set up in your project when you add the NuGet reference. To see the installed analyzers, go to the project properties, choose Code Analysis, and then choose the Open button.

The code analyzers can also be disabled here.

Feedback

We hope this is just the start of what we can do with the code analysis features in Visual Studio. If you can suggest other common pitfalls that can be avoided through the use of these analyzers, let us know. If you have other ideas or feedback, open an issue in our GitHub repository.

New Support for Federated Users in the AWS Tools for Windows PowerShell

by Steve Roberts

Starting with version 3.1.31.0, the AWS Tools for Windows PowerShell support the use of federated user accounts through Active Directory Federation Services (AD FS) for accessing AWS services, using Security Assertion Markup Language (SAML).

In earlier versions, all cmdlets that called AWS services required you to specify AWS access and secret keys through either cmdlet parameters or data stored in credential profiles that were shared with the AWS SDK for .NET and the AWS Toolkit for Visual Studio. Managing groups of users required you to create an AWS Identity and Access Management (IAM) user instance for each user account in order to generate individual access and secret keys.

Support for federated access means your users can now authenticate using your Active Directory directory; temporary credentials will be granted to the user automatically. These temporary credentials, which are valid for one hour, are then used when invoking AWS services. Management of the temporary credentials is handled by the tools. For domain-joined user accounts, if a cmdlet is invoked but the credentials have expired, the user is reauthenticated automatically and fresh credentials are granted. (For non-domain-joined accounts, the user is prompted to enter credentials prior to reauthentication.)

The tools support two new cmdlets, Set-AWSSamlEndpoint and Set-AWSSamlRoleProfile, for setting up federated access:

# first configure the endpoint that one or more role profiles will reference by name
$endpoint = "https://adfs.example.com/adfs/ls/IdpInitiatedSignOn.aspx?loginToRp=urn:amazon:webservices"
Set-AWSSamlEndpoint -Endpoint $endpoint -StoreAs "endpointname"

# if the user can assume more than one role, this will prompt the user to select a role
Set-AWSSamlRoleProfile -EndpointName "endpointname" -StoreAs "profilename"

# if the principal and role ARN data of a role is known, it can be specified directly
$params = @{
 "PrincipalARN"="arn:aws:iam::012345678912:saml-provider/ADFS"
 "RoleARN"="arn:aws:iam::012345678912:role/ADFS-Dev"
}
Set-AWSSamlRoleProfile -EndpointName "endpointname" -StoreAs "ADFS-Dev" @params

# if the user can assume multiple roles, this creates one profile per role using the role name for the profile name
Set-AWSSamlRoleProfile -EndpointName "endpointname" -StoreAllRoles

Role profiles are what users will employ to obtain temporary credentials for a role they have been authorized to assume. When a user needs to authenticate after selecting a role profile, the data configured through Set-AWSSamlEndpoint is used to obtain the HTTPS endpoint that should be accessed. Authentication occurs when you first run a cmdlet that requires AWS credentials. The examples here assume a domain-joined user account is in use. If the user needs to supply network credentials to authenticate, the credentials can be passed with the -NetworkCredential parameter. By default, authentication is performed through Kerberos, but you can override this by passing the -AuthenticationType parameter to Set-AWSSamlEndpoint. (Currently supported values for this parameter are Kerberos, NTLM, Digest, Basic, or Negotiate.)

After role profiles are configured, you use them in the same way you have used AWS credential profiles. Simply pass the profile name to Set-AWSCredentials or to the -ProfileName parameter on individual service cmdlets. That’s all there is to it!
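For example, assuming the ADFS-Dev role profile stored earlier, either of these approaches works (a sketch; any service cmdlet accepts -ProfileName the same way):

# make the role profile the default for the current session
Set-AWSCredentials -ProfileName ADFS-Dev
Get-S3Bucket

# or pass the profile to an individual cmdlet
Get-EC2Instance -ProfileName ADFS-Dev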

The new support for federated access reduces the burden of creating IAM user accounts for your team members. Currently, the tools support federated users for AD FS and SAML. If you want to use federation with other systems that support SAML, be sure to let us know in the comments. For more information about this feature, and examples that show how SAML based authentication works, see this post on the AWS Security blog.

Using Amazon Kinesis Firehose

Amazon Kinesis Firehose, a new service announced at this year’s re:Invent conference, is the easiest way to load streaming data into AWS. Firehose manages all of the resources and automatically scales to match the throughput of your data. It can capture and automatically load streaming data into Amazon S3 and Amazon Redshift.

An example use for Firehose is keeping track of traffic patterns in a web application. To do that, we want to stream a record for each request to the web application, containing the current page and the page being requested. Let’s take a look.

Creating the Delivery Stream

First, we need to create our Firehose delivery stream. Although we can do this through the Firehose console, let’s take a look at how we can automate the creation of the delivery stream with PowerShell.

In our PowerShell script, we need to set up the account ID and variables for the names of the resources we will create. The account ID is used in our IAM role to restrict access to just the account with the delivery stream.

$accountId = '<account-id>'
$roleName = '<iam-role-name>'
$s3BucketName = '<s3-bucket-name>'
$firehoseDeliveryStreamName = '<delivery-stream-name>'

Because Firehose will push our streaming data to S3, our script will need to make sure the bucket exists.

$s3Bucket = Get-S3Bucket -BucketName $s3BucketName
if($s3Bucket -eq $null)
{
    New-S3Bucket -BucketName $s3BucketName
}

We also need to set up an IAM role that gives Firehose permission to push data to S3. The role will need access to the Firehose API and the S3 destination bucket. For the Firehose access, our script will use the AmazonKinesisFirehoseFullAccess managed policy. For the S3 access, our script will use an inline policy that restricts access to the destination bucket.

$role = (Get-IAMRoles | ? { $_.RoleName -eq $roleName })

if($role -eq $null)
{
    # Assume role policy allowing Firehose to assume a role
    $assumeRolePolicy = @"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "firehose.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId":"$accountId"
        }
      }
    }
  ]
}
"@

    $role = New-IAMRole -RoleName $roleName -AssumeRolePolicyDocument $assumeRolePolicy

    # Add managed policy AmazonKinesisFirehoseFullAccess to role
    Register-IAMRolePolicy -RoleName $roleName -PolicyArn 'arn:aws:iam::aws:policy/AmazonKinesisFirehoseFullAccess'

    # Add policy giving access to S3
    $s3AccessPolicy = @"
{
"Version": "2012-10-17",  
    "Statement":
    [    
        {      
            "Sid": "",      
            "Effect": "Allow",      
            "Action":
            [        
                "s3:AbortMultipartUpload",        
                "s3:GetBucketLocation",        
                "s3:GetObject",        
                "s3:ListBucket",        
                "s3:ListBucketMultipartUploads",        
                "s3:PutObject"
            ],      
            "Resource":
            [        
                "arn:aws:s3:::$s3BucketName",
                "arn:aws:s3:::$s3BucketName/*"		    
            ]    
        } 
    ]
}
"@

    Write-IAMRolePolicy -RoleName $roleName -PolicyName "S3Access" -PolicyDocument $s3AccessPolicy

    # Sleep to wait for the eventual consistency of the role creation
    Start-Sleep -Seconds 2
}

Now that the S3 bucket and IAM role are set up, we will create the delivery stream. We just need to set up an S3DestinationConfiguration object and call the New-KINFDeliveryStream cmdlet.

$s3Destination = New-Object Amazon.KinesisFirehose.Model.S3DestinationConfiguration
$s3Destination.BucketARN = "arn:aws:s3:::" + $s3Bucket.BucketName
$s3Destination.RoleARN = $role.Arn

New-KINFDeliveryStream -DeliveryStreamName $firehoseDeliveryStreamName -S3DestinationConfiguration $s3Destination 

After the New-KINFDeliveryStream cmdlet is called, it will take a few minutes to create the delivery stream. We can use the Get-KINFDeliveryStream cmdlet to check the status. As soon as it is active, we can run the following cmdlet to test our stream.

Write-KINFRecord -DeliveryStreamName $firehoseDeliveryStreamName -Record_Text "test record"

This will send one record to our stream, which will be pushed to the S3 bucket. By default, delivery streams buffer data until either 5 MB have accumulated or 5 minutes have elapsed, whichever comes first, before pushing to S3, so check the bucket in about 5 minutes.
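The buffering thresholds are set when the delivery stream is created. If the defaults don’t suit your workload, you can attach buffering hints to the S3 destination configuration before calling New-KINFDeliveryStream. A sketch, assuming the Amazon.KinesisFirehose.Model.BufferingHints class mirrors the service API’s SizeInMBs and IntervalInSeconds settings:

$bufferingHints = New-Object Amazon.KinesisFirehose.Model.BufferingHints
$bufferingHints.SizeInMBs = 1           # flush after 1 MB ...
$bufferingHints.IntervalInSeconds = 60  # ... or 60 seconds, whichever comes first
$s3Destination.BufferingHints = $bufferingHints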

Writing to the Delivery Stream

In an ASP.NET application, we can write an IHttpModule so we know about every request. With an IHttpModule, we can add an event handler to the BeginRequest event and inspect where the request is coming from and going to. Here is code for our IHttpModule. The Init method adds the event handler. The RecordRequest method grabs the current URL and the request URL and sends that to the delivery stream.

using System;
using System.IO;
using System.Text;
using System.Web;

using Amazon;
using Amazon.KinesisFirehose;
using Amazon.KinesisFirehose.Model;

namespace KinesisFirehoseDemo
{
    /// <summary>
    /// This HTTP module adds an event handler for incoming requests.
    /// For each request a record is sent to Kinesis Firehose. For this demo a
    /// single record is sent at a time with the PutRecord operation to
    /// keep the demo simple. This can be optimized by batching records and
    /// using the PutRecordBatch operation.
    /// </summary>
    public class FirehoseSiteTracker : IHttpModule
    {
        IAmazonKinesisFirehose _client;

        // The delivery stream that was created using the setup.ps1 script.
        string _deliveryStreamName = "";

        public FirehoseSiteTracker()
        {
            this._client = new AmazonKinesisFirehoseClient(RegionEndpoint.USWest2);
        }

        public void Dispose() 
        {
            this._client.Dispose(); 
        }

        public bool IsReusable
        {
            get { return true; }
        }

        /// <summary>
        /// Sets up the event handler for BeginRequest events.
        /// </summary>
        /// <param name="application">The current HttpApplication.</param>
        public void Init(HttpApplication application)
        {
            application.BeginRequest +=
                (new EventHandler(this.RecordRequest));
        }

        /// <summary>
        /// Write to Firehose a record with the starting page and the page being requested.
        /// </summary>
        /// <param name="source">The HttpApplication that raised the event.</param>
        /// <param name="e">The event arguments.</param>
        private void RecordRequest(Object source, EventArgs e)
        {
            // Create HttpApplication and HttpContext objects to access
            // request and response properties.
            HttpApplication application = (HttpApplication)source;
            HttpContext context = application.Context;

            string startingRequest = string.Empty;
            if (context.Request.UrlReferrer != null)
                startingRequest = context.Request.UrlReferrer.PathAndQuery;

            var record = new MemoryStream(UTF8Encoding.UTF8.GetBytes(string.Format("{0}\t{1}\n",
                startingRequest, context.Request.Path)));

            var request = new PutRecordRequest
            {
                DeliveryStreamName = this._deliveryStreamName,
                Record = new Record
                {
                    Data = record
                }
            };
            this._client.PutRecordAsync(request);
        }
    }
}

To register the module, add it to the modules section of the application’s web.config:

<system.webServer>
  <modules>
    <add name="siterecorder" type="KinesisFirehoseDemo.FirehoseSiteTracker"/>
  </modules>
</system.webServer>

Now we can navigate through our ASP.NET application and watch data flow into our S3 bucket.

What’s Next

Now that our data is flowing into S3, we have many options for what to do with that data. Firehose has built-in support for pushing our S3 data straight to Amazon Redshift, giving us lots of power for running queries and doing analytics. We could also set up event notifications to have Lambda functions or SQS pollers read the data getting pushed to Amazon S3 in real time.

Listing Cmdlets by Service

by Steve Roberts

In a previous post, we discussed a new cmdlet you can use to navigate your way around the cmdlets in the AWS Tools for Windows PowerShell module. This cmdlet enables you to search for cmdlets that implement a service API by answering questions like, "Which cmdlet implements the Amazon EC2 ‘RunInstances’ API?" It can also do a simple translation of AWS CLI commands you might have found in example documentation to give you the equivalent cmdlet.
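As a quick refresher, those earlier capabilities look something like the following (assuming the -ApiOperation and -AwsCliCommand parameter names; run Get-Help Get-AWSCmdletName to confirm them for your version):

# which cmdlet implements the Amazon EC2 'RunInstances' API?
Get-AWSCmdletName -ApiOperation RunInstances

# translate an AWS CLI command into the equivalent cmdlet
Get-AWSCmdletName -AwsCliCommand "aws ec2 run-instances"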

In the 3.1.21.0 version of the tools, we extended this cmdlet. Now you can use it to get a list of all cmdlets belonging to a service based on words that appear in the service name or the prefix code we use to namespace the cmdlets by service. You could, of course, do something similar by using the PowerShell Get-Command cmdlet and supplying the -Noun parameter with a value that is the prefix with a wildcard (for example, Get-Command -Module AWSPowerShell -Noun EC2*). The problem here is that you need to know the prefix before you can run the command. Although we try to choose memorable and easily guessable prefixes, sometimes the association can be subtle and searching based on one or more words in the service name is more useful.

To list cmdlets by service you supply the -Service parameter. The value for this parameter is always treated as a case-insensitive regular expression and is used to match against cmdlets using both their ‘namespace’ prefix and full name. For example:

PS C:\> Get-AWSCmdletName -Service compute

CmdletName                      ServiceOperation             ServiceName
----------                      ----------------             -----------
Add-EC2ClassicLinkVpc           AttachClassicLinkVpc         Amazon Elastic Compute Cloud
Add-EC2InternetGateway          AttachInternetGateway        Amazon Elastic Compute Cloud
Add-EC2NetworkInterface         AttachNetworkInterface       Amazon Elastic Compute Cloud
Add-EC2Volume                   AttachVolume                 Amazon Elastic Compute Cloud
...
Stop-EC2SpotInstanceRequest     CancelSpotInstanceRequests   Amazon Elastic Compute Cloud
Unregister-EC2Address           DisassociateAddress          Amazon Elastic Compute Cloud
Unregister-EC2Image             DeregisterImage              Amazon Elastic Compute Cloud
Unregister-EC2PrivateIpAddress  UnassignPrivateIpAddresses   Amazon Elastic Compute Cloud
Unregister-EC2RouteTable        DisassociateRouteTable       Amazon Elastic Compute Cloud

When a match for the parameter value is found, the output contains a collection of PSObject instances. Each instance has members detailing the cmdlet name, service operation, and service name. You can see the assigned prefix code in the cmdlet name. If the search term fails to match any supported service, you’ll see an error message in the output.

You might be asking yourself why we output the service name. We do this because the parameter value is treated as a regular expression and it attempts to match against two pieces of metadata in the module’s cmdlets (service and prefix). It is therefore possible that a term can match more than one service. For example:

PS C:\> Get-AWSCmdletName -Service EC2

CmdletName                      ServiceOperation           ServiceName
----------                      ----------------           -----------
Add-EC2ClassicLinkVpc           AttachClassicLinkVpc       Amazon Elastic Compute Cloud
Add-EC2InternetGateway          AttachInternetGateway      Amazon Elastic Compute Cloud
...
Unregister-EC2RouteTable        DisassociateRouteTable     Amazon Elastic Compute Cloud
Get-ECSClusterDetail            DescribeClusters           Amazon EC2 Container Service
Get-ECSClusters                 ListClusters               Amazon EC2 Container Service
Get-ECSClusterService           ListServices               Amazon EC2 Container Service
...
Unregister-ECSTaskDefinition    DeregisterTaskDefinition   Amazon EC2 Container Service
Update-ECSContainerAgent        UpdateContainerAgent       Amazon EC2 Container Service
Update-ECSService               UpdateService              Amazon EC2 Container Service

You’ll see the result set contains cmdlets for both Amazon EC2 and Amazon EC2 Container Service. This is because the search term ‘EC2’ matched the noun prefix for the cmdlets exposing the Amazon EC2 service API as well as the service name for the container service.

We hope you find this new capability useful. You can get the new version as a Windows Installer package here or through the PowerShell Gallery.

If you have suggestions for other features, let us know in the comments!