

Code Analyzers Added to AWS SDK for .NET

One of the most exciting Microsoft Visual Studio 2015 features is the ability to have static analysis run on your code as you write it. This allows you to flag code that is syntactically correct but will cause errors when run.

We have added static analyzers to the latest AWS SDK NuGet packages for each of the version 3 service packages. The analyzers check the values you set on SDK classes to make sure they are valid. For example, for a property that takes a string, an analyzer will verify that the string meets the minimum and maximum length constraints and matches the required regular expression pattern.

Let’s say I wanted to create an Amazon DynamoDB table. Table names must be at least three characters and cannot contain characters like @ or #. So if I tried to create a table with the name of "@work", the service would fail the request. The analyzers will detect the issue, display an alert in the code editor, and put warnings in the error list before I even attempt to call the service.
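For illustration, here is a minimal sketch of that scenario (the class and method names are just placeholders of mine, not from the SDK):

using Amazon.DynamoDBv2.Model;

public class CreateTableExample
{
    public CreateTableRequest BuildRequest()
    {
        // "@work" violates the DynamoDB table name pattern ([a-zA-Z0-9_.-]+),
        // so the analyzer flags this assignment in the editor and adds a
        // warning to the error list before the request is ever sent.
        return new CreateTableRequest
        {
            TableName = "@work"
        };
    }
}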

Setup

The analyzers are set up in your project when you add the NuGet reference. To see the installed analyzers, go to the project properties, choose Code Analysis, and then choose the Open button.

The code analyzers can also be disabled here.

Feedback

We hope this is just the start of what we can do with the code analysis features in Visual Studio. If you can suggest other common pitfalls that could be avoided through the use of these analyzers, let us know. If you have other ideas or feedback, open an issue in our GitHub repository.

New Support for Federated Users in the AWS Tools for Windows PowerShell

by Steve Roberts

Starting with version 3.1.31.0, the AWS Tools for Windows PowerShell support the use of federated user accounts through Active Directory Federation Services (AD FS) for accessing AWS services, using Security Assertion Markup Language (SAML).

In earlier versions, all cmdlets that called AWS services required you to specify AWS access and secret keys through either cmdlet parameters or data stored in credential profiles that were shared with the AWS SDK for .NET and the AWS Toolkit for Visual Studio. Managing groups of users required you to create an AWS Identity and Access Management (IAM) user instance for each user account in order to generate individual access and secret keys.

Support for federated access means your users can now authenticate using your Active Directory; temporary credentials will be granted to the user automatically. These temporary credentials, which are valid for one hour, are then used when invoking AWS services. Management of the temporary credentials is handled by the tools. For domain-joined user accounts, if a cmdlet is invoked but the credentials have expired, the user is reauthenticated automatically and fresh credentials are granted. (For non-domain-joined accounts, the user is prompted to enter credentials prior to reauthentication.)

The tools support two new cmdlets, Set-AWSSamlEndpoint and Set-AWSSamlRoleProfile, for setting up federated access:

# first configure the endpoint that one or more role profiles will reference by name
$endpoint = "https://adfs.example.com/adfs/ls/IdpInitiatedSignOn.aspx?loginToRp=urn:amazon:webservices"
Set-AWSSamlEndpoint -Endpoint $endpoint -StoreAs "endpointname"

# if the user can assume more than one role, this will prompt the user to select a role
Set-AWSSamlRoleProfile -EndpointName "endpointname" -StoreAs "profilename"

# if the principal and role ARN data of a role is known, it can be specified directly
$params = @{
 "PrincipalARN"="arn:aws:iam::012345678912:saml-provider/ADFS"
 "RoleARN"="arn:aws:iam::012345678912:role/ADFS-Dev"
}
Set-AWSSamlRoleProfile -EndpointName "endpointname" -StoreAs "ADFS-Dev" @params

# if the user can assume multiple roles, this creates one profile per role using the role name for the profile name
Set-AWSSamlRoleProfile -EndpointName "endpointname" -StoreAllRoles

Role profiles are what users will employ to obtain temporary credentials for a role they have been authorized to assume. When a user needs to authenticate after selecting a role profile, the data configured through Set-AWSSamlEndpoint is used to obtain the HTTPS endpoint that should be accessed. Authentication occurs when you first run a cmdlet that requires AWS credentials. The examples here assume a domain-joined user account is in use. If the user needs to supply network credentials to authenticate, the credentials can be passed with the -NetworkCredential parameter. By default, authentication is performed through Kerberos, but you can override this by passing the -AuthenticationType parameter to Set-AWSSamlEndpoint. (Currently supported values for this parameter are Kerberos, NTLM, Digest, Basic, or Negotiate.)

After role profiles are configured, you use them in the same way you have used AWS credential profiles. Simply pass the profile name to Set-AWSCredentials or to the -ProfileName parameter on individual service cmdlets. That’s all there is to it!

The new support for federated access reduces the burden of creating IAM user accounts for your team members. Currently, the tools support federated access through AD FS using SAML. If you want to use federation with other systems that support SAML, be sure to let us know in the comments. For more information about this feature, and examples that show how SAML-based authentication works, see this post on the AWS Security blog.

Using Amazon Kinesis Firehose

Amazon Kinesis Firehose, a new service announced at this year’s re:Invent conference, is the easiest way to load streaming data into AWS. Firehose manages all of the underlying resources and automatically scales to match the throughput of your data. It can capture and automatically load streaming data into Amazon S3 and Amazon Redshift.

An example use for Firehose is keeping track of traffic patterns in a web application. To do that, we will stream a record for each request made to the web application, containing the current page and the page being requested. Let’s take a look.

Creating the Delivery Stream

First, we need to create our Firehose delivery stream. Although we can do this through the Firehose console, let’s take a look at how we can automate the creation of the delivery stream with PowerShell.

In our PowerShell script, we need to set up the account ID and variables for the names of the resources we will create. The account ID is used in our IAM role to restrict access to just the account with the delivery stream.

$accountId = '<account-id>'
$roleName = '<iam-role-name>'
$s3BucketName = '<s3-bucket-name>'
$firehoseDeliveryStreamName = '<delivery-stream-name>'

Because Firehose will push our streaming data to S3, our script will need to make sure the bucket exists.

$s3Bucket = Get-S3Bucket -BucketName $s3BucketName
if($s3Bucket -eq $null)
{
    New-S3Bucket -BucketName $s3BucketName
}

We also need to set up an IAM role that gives Firehose permission to push data to S3. The role will need access to the Firehose API and the S3 destination bucket. For the Firehose access, our script will use the AmazonKinesisFirehoseFullAccess managed policy. For the S3 access, our script will use an inline policy that restricts access to the destination bucket.

$role = (Get-IAMRoles | ? { $_.RoleName -eq $roleName })

if($role -eq $null)
{
    # Assume role policy allowing Firehose to assume a role
    $assumeRolePolicy = @"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "firehose.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId":"$accountId"
        }
      }
    }
  ]
}
"@

    $role = New-IAMRole -RoleName $roleName -AssumeRolePolicyDocument $assumeRolePolicy

    # Add managed policy AmazonKinesisFirehoseFullAccess to role
    Register-IAMRolePolicy -RoleName $roleName -PolicyArn 'arn:aws:iam::aws:policy/AmazonKinesisFirehoseFullAccess'

    # Add policy giving access to S3
    $s3AccessPolicy = @"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::$s3BucketName",
        "arn:aws:s3:::$s3BucketName/*"
      ]
    }
  ]
}
"@

    Write-IAMRolePolicy -RoleName $roleName -PolicyName "S3Access" -PolicyDocument $s3AccessPolicy

    # Sleep to wait for the eventual consistency of the role creation
    Start-Sleep -Seconds 2
}

Now that the S3 bucket and IAM role are set up, we will create the delivery stream. We just need to set up an S3DestinationConfiguration object and call the New-KINFDeliveryStream cmdlet.

$s3Destination = New-Object Amazon.KinesisFirehose.Model.S3DestinationConfiguration
$s3Destination.BucketARN = "arn:aws:s3:::" + $s3Bucket.BucketName
$s3Destination.RoleARN = $role.Arn

New-KINFDeliveryStream -DeliveryStreamName $firehoseDeliveryStreamName -S3DestinationConfiguration $s3Destination 

After the New-KINFDeliveryStream cmdlet is called, it will take a few minutes to create the delivery stream. We can use the Get-KINFDeliveryStream cmdlet to check the status. As soon as it is active, we can run the following cmdlet to test our stream.

Write-KINFRecord -DeliveryStreamName $firehoseDeliveryStreamName -Record_Text "test record"

This will send one record to our stream, which will be pushed to the S3 bucket. By default, a delivery stream buffers data until either 5 MB have accumulated or 5 minutes have elapsed before pushing to S3, so check the bucket after about 5 minutes.

Writing to the Delivery Stream

In an ASP.NET application, we can write an IHttpModule so we know about every request. With an IHttpModule, we can add an event handler to the BeginRequest event and inspect where the request is coming from and going to. Here is code for our IHttpModule. The Init method adds the event handler. The RecordRequest method grabs the current URL and the request URL and sends that to the delivery stream.

using System;
using System.IO;
using System.Text;
using System.Web;

using Amazon;
using Amazon.KinesisFirehose;
using Amazon.KinesisFirehose.Model;

namespace KinesisFirehoseDemo
{
    /// <summary>
    /// This HTTP module adds an event handler for incoming requests.
    /// For each request a record is sent to Kinesis Firehose. For this demo a
    /// single record is sent at a time with the PutRecord operation to
    /// keep the demo simple. This can be optimized by batching records and
    /// using the PutRecordBatch operation.
    /// </summary>
    public class FirehoseSiteTracker : IHttpModule
    {
        IAmazonKinesisFirehose _client;

        // The delivery stream that was created using the setup.ps1 script.
        string _deliveryStreamName = "";

        public FirehoseSiteTracker()
        {
            this._client = new AmazonKinesisFirehoseClient(RegionEndpoint.USWest2);
        }

        public void Dispose() 
        {
            this._client.Dispose(); 
        }

        public bool IsReusable
        {
            get { return true; }
        }

        /// <summary>
        /// Set up the event handler for BeginRequest events.
        /// </summary>
        /// <param name="application">The current HttpApplication.</param>
        public void Init(HttpApplication application)
        {
            application.BeginRequest +=
                (new EventHandler(this.RecordRequest));
        }

        /// <summary>
        /// Write to Firehose a record with the starting page and the page being requested.
        /// </summary>
        /// <param name="source">The event source, the current HttpApplication.</param>
        /// <param name="e">The event arguments.</param>
        private void RecordRequest(Object source, EventArgs e)
        {
            // Create HttpApplication and HttpContext objects to access
            // request and response properties.
            HttpApplication application = (HttpApplication)source;
            HttpContext context = application.Context;

            string startingRequest = string.Empty;
            if (context.Request.UrlReferrer != null)
                startingRequest = context.Request.UrlReferrer.PathAndQuery;

            var record = new MemoryStream(UTF8Encoding.UTF8.GetBytes(string.Format("{0}\t{1}\n",
                startingRequest, context.Request.Path)));

            var request = new PutRecordRequest
            {
                DeliveryStreamName = this._deliveryStreamName,
                Record = new Record
                {
                    Data = record
                }
            };
            this._client.PutRecordAsync(request);
        }
    }
}
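As the comment in the module notes, sending one record per request keeps the demo simple. Below is a rough sketch of the batching alternative it mentions, using PutRecordBatch; the class name, flush threshold, and flushing strategy are illustrative assumptions, and a real module would also need synchronization around the buffer:

using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.KinesisFirehose;
using Amazon.KinesisFirehose.Model;

public class BatchingSender
{
    private readonly IAmazonKinesisFirehose _client;
    private readonly string _deliveryStreamName;
    private readonly List<Record> _buffer = new List<Record>();

    // Illustrative threshold; PutRecordBatch accepts up to 500 records per call.
    private const int FlushThreshold = 100;

    public BatchingSender(IAmazonKinesisFirehose client, string deliveryStreamName)
    {
        _client = client;
        _deliveryStreamName = deliveryStreamName;
    }

    public async Task AddAsync(Record record)
    {
        _buffer.Add(record);
        if (_buffer.Count >= FlushThreshold)
        {
            var batch = new List<Record>(_buffer);
            _buffer.Clear();

            // Send the accumulated records in a single PutRecordBatch call.
            await _client.PutRecordBatchAsync(new PutRecordBatchRequest
            {
                DeliveryStreamName = _deliveryStreamName,
                Records = batch
            });
        }
    }
}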

To enable the module, we register it in the application's web.config file:

<system.webServer>
  <modules>
    <add name="siterecorder" type="KinesisFirehoseDemo.FirehoseSiteTracker"/>
  </modules>
</system.webServer>

Now we can navigate through our ASP.NET application and watch data flow into our S3 bucket.

What’s Next

Now that our data is flowing into S3, we have many options for what to do with that data. Firehose has built-in support for pushing our S3 data straight to Amazon Redshift, giving us lots of power for running queries and doing analytics. We could also set up event notifications to have Lambda functions or SQS pollers read the data getting pushed to Amazon S3 in real time.

Listing Cmdlets by Service

by Steve Roberts

In an earlier post, we discussed a new cmdlet you can use to navigate your way around the cmdlets in the AWS Tools for Windows PowerShell module. This cmdlet enables you to search for the cmdlet that implements a service API, answering questions like, "Which cmdlet implements the Amazon EC2 ‘RunInstances’ API?" It can also do a simple translation of AWS CLI commands you might have found in example documentation to give you the equivalent cmdlet.

In the 3.1.21.0 version of the tools, we extended this cmdlet. Now you can use it to get a list of all cmdlets belonging to a service based on words that appear in the service name or the prefix code we use to namespace the cmdlets by service. You could, of course, do something similar by using the PowerShell Get-Command cmdlet and supplying the -Noun parameter with a value that is the prefix with a wildcard (for example, Get-Command -Module AWSPowerShell -Noun EC2*). The problem here is that you need to know the prefix before you can run the command. Although we try to choose memorable and easily guessable prefixes, sometimes the association can be subtle and searching based on one or more words in the service name is more useful.

To list cmdlets by service, you supply the -Service parameter. The value of this parameter is always treated as a case-insensitive regular expression and is matched against both the noun (‘namespace’) prefix of each cmdlet and the full name of the service it belongs to. For example:

PS C:\> Get-AWSCmdletName -Service compute

CmdletName                      ServiceOperation             ServiceName
----------                      ----------------             -----------
Add-EC2ClassicLinkVpc           AttachClassicLinkVpc         Amazon Elastic Compute Cloud
Add-EC2InternetGateway          AttachInternetGateway        Amazon Elastic Compute Cloud
Add-EC2NetworkInterface         AttachNetworkInterface       Amazon Elastic Compute Cloud
Add-EC2Volume                   AttachVolume                 Amazon Elastic Compute Cloud
...
Stop-EC2SpotInstanceRequest     CancelSpotInstanceRequests   Amazon Elastic Compute Cloud
Unregister-EC2Address           DisassociateAddress          Amazon Elastic Compute Cloud
Unregister-EC2Image             DeregisterImage              Amazon Elastic Compute Cloud
Unregister-EC2PrivateIpAddress  UnassignPrivateIpAddresses   Amazon Elastic Compute Cloud
Unregister-EC2RouteTable        DisassociateRouteTable       Amazon Elastic Compute Cloud

When a match for the parameter value is found, the output contains a collection of PSObject instances. Each instance has members detailing the cmdlet name, service operation, and service name. You can see the assigned prefix code in the cmdlet name. If the search term fails to match any supported service, you’ll see an error message in the output.

You might be asking yourself why we output the service name. We do this because the parameter value is treated as a regular expression and it attempts to match against two pieces of metadata in the module’s cmdlets (service and prefix). It is therefore possible that a term can match more than one service. For example:

PS C:\> Get-AWSCmdletName -Service EC2

CmdletName                      ServiceOperation           ServiceName
----------                      ----------------           -----------
Add-EC2ClassicLinkVpc           AttachClassicLinkVpc       Amazon Elastic Compute Cloud
Add-EC2InternetGateway          AttachInternetGateway      Amazon Elastic Compute Cloud
...
Unregister-EC2RouteTable        DisassociateRouteTable     Amazon Elastic Compute Cloud
Get-ECSClusterDetail            DescribeClusters           Amazon EC2 Container Service
Get-ECSClusters                 ListClusters               Amazon EC2 Container Service
Get-ECSClusterService           ListServices               Amazon EC2 Container Service
...
Unregister-ECSTaskDefinition    DeregisterTaskDefinition   Amazon EC2 Container Service
Update-ECSContainerAgent        UpdateContainerAgent       Amazon EC2 Container Service
Update-ECSService               UpdateService              Amazon EC2 Container Service

You’ll see the result set contains cmdlets for both Amazon EC2 and Amazon EC2 Container Service. This is because the search term ‘EC2’ matched the noun prefix for the cmdlets exposing the Amazon EC2 service API as well as the service name for the container service.

We hope you find this new capability useful. You can get the new version as a Windows Installer package here or through the PowerShell Gallery.

If you have suggestions for other features, let us know in the comments!

AWS re:Invent 2015 Recap

Another AWS re:Invent in the bag. It was great to talk to so many of our customers about .NET and PowerShell. Steve and I gave two talks this year. The first session was about how to take advantage of ASP.NET 5 in AWS. The second session was our first-ever PowerShell talk at re:Invent. It was great to see community excitement for our PowerShell support. If you weren’t able to come to re:Invent this year, you can view our sessions online.

We published the source code and scripts used in our talks in the reInvent-2015 folder in our .NET SDK samples repository.

Hope to see you at next year’s AWS re:Invent!

AWS re:Invent 2015

by Steve Roberts

This year’s AWS re:Invent conference is just a few days away. Norm, Milind, and I from the .NET SDK and Tools team at AWS will be attending. We are looking forward to meeting with as many of you as we can.

This year we have two .NET-related breakout sessions:

On Wednesday afternoon, we will show you how to develop and host ASP.NET 5 applications on AWS. Check out DEV302: Hosting ASP.NET 5 Applications in AWS with Docker and AWS CodeDeploy in the session catalog.

On Thursday afternoon, we will hold our first-ever re:Invent session on the AWS Tools for Windows PowerShell! Check out DEV202: Under the Desk to the AWS Cloud with Windows PowerShell in the session catalog. The session will walk through how some easy-to-use scripts can be used to handle the workflow of moving a virtualized server into the cloud.

If you’re attending the conference this year, be sure to stop by the SDKs and Tools booth in the Exhibit Hall and say hello. We’d love to get feedback on what we can do to help with your day-to-day work with the AWS SDK for .NET, the AWS Tools for Windows PowerShell, and the AWS Toolkit for Visual Studio. See you in Las Vegas!

New Support for ASP.NET 5 in AWS SDK for .NET

by Norm Johanson

Today we have released beta support for ASP.NET 5 in the AWS SDK for .NET. ASP.NET 5 is an exciting development for .NET developers with modularization and cross-platform support being major goals for the new platform.

Currently, ASP.NET 5 is on beta 7. There may be more changes before its 1.0 release. For this reason, we have released a separate 3.2 version of the SDK (marked beta) to NuGet. We will continue to maintain the 3.1 version as the current, stable version of the SDK. When ASP.NET 5 goes out of beta, we will take version 3.2 of the SDK out of beta.

CoreCLR

ASP.NET 5 applications can run on .NET 4.5.2, mono 4.0.1, or the new CoreCLR runtime. If you are targeting the new CoreCLR runtime, be aware of these coding differences:

  • Service calls must be made asynchronously. This is because the HTTP client used for CoreCLR supports asynchronous calls only. Coding your application to use asynchronous operations can improve your application's performance, because fewer tasks are blocked waiting for a response from the server. (A short sketch follows this list.)
  • The CoreCLR version of the AWS SDK for .NET currently does not support our encrypted SDK credentials store, which is available in the .NET 3.5 and 4.5 versions of the AWS SDK for .NET. This is because the encrypted store uses P/Invoke to make system calls into Windows to handle the encryption. Because CoreCLR is cross-platform, that option is not available. For local development with CoreCLR, we recommend you use the shared credentials file. When running in EC2 instances, Identity and Access Management (IAM) roles are the preferred mechanism for delivering credentials to your application.
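As a rough sketch of the first point (the bucket listing and region here are arbitrary choices of mine), a service call on CoreCLR uses the Async methods and is awaited, and credentials are resolved from the shared credentials file or, on EC2, from an IAM role:

using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

public class CoreClrExample
{
    public static async Task ListBucketsAsync()
    {
        // On CoreCLR only the asynchronous operations are available, so the
        // call is awaited instead of blocking on a synchronous method.
        // Credentials come from the shared credentials file (or an IAM role
        // on EC2), because the encrypted SDK credentials store is not
        // supported on CoreCLR.
        using (var client = new AmazonS3Client(RegionEndpoint.USWest2))
        {
            ListBucketsResponse response = await client.ListBucketsAsync();
            foreach (S3Bucket bucket in response.Buckets)
            {
                Console.WriteLine(bucket.BucketName);
            }
        }
    }
}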

AWS re:Invent

If you are attending AWS re:Invent next month, I’ll be presenting a breakout session about ASP.NET 5 development with AWS and options for deploying ASP.NET 5 applications to AWS.

Feedback

To give us feedback on ASP.NET 5 support or to suggest AWS features to better support ASP.NET 5, open a GitHub issue on the repository for the AWS SDK for .NET. Check out the dnxcore-development branch to see where the ASP.NET 5 work is being done.

DynamoDB DataModel Enum Support

by Pavel Safronov

In version 3.1.1 of the DynamoDB .NET SDK package, we added enum support to the Object Persistence Model. This feature allows you to use enums in .NET objects you store and load in DynamoDB. Before this change, the only way to support enums in your objects was to use a custom converter to serialize and deserialize the enums, storing them either as string or numeric representations. With this change, you can use enums directly, without having to implement a custom converter. The following two code samples show an example of this:

Definitions:

[DynamoDBTable("Books")]
public class Book
{
    [DynamoDBHashKey]
    public string Title { get; set; }
    public List<string> Authors { get; set; }
    public EditionTypes Editions { get; set; }
}
[Flags]
public enum EditionTypes
{
    None      = 0,
    Paperback = 1,
    Hardcover = 2,
    Digital   = 4,
}

Using enums:

var client = new AmazonDynamoDBClient();
DynamoDBContext context = new DynamoDBContext(client);

// Store item
Book book = new Book
{
    Title = "Cryptonomicon",
    Authors = new List<string> { "Neal Stephenson" },
    Editions = EditionTypes.Paperback | EditionTypes.Digital
};
context.Save(book);

// Get item
book = context.Load<Book>("Cryptonomicon");
Console.WriteLine("Title = {0}", book.Title);
Console.WriteLine("Authors = {0}", string.Join(", ", book.Authors));
Console.WriteLine("Editions = {0}", book.Editions);

Custom Converters

With OPM enum support, enums are stored as their numeric representations in DynamoDB. (The default underlying type is int, but you can change it, as described in this MSDN article.) If you were previously working with enums by using a custom converter, you may now be able to remove it and use this new support, depending on how your converter was implemented (a sketch of a typical string-based converter follows this list):

  • If your converter stored the enum into its corresponding numeric value, this is the same logic we use, so you can remove it.
  • If your converter turned the enum into a string (if you use ToString and Parse), you can discontinue the use of a custom converter as long as you do this for all of the clients. This feature is able to convert strings to enums when reading data from DynamoDB, but will always save an enum as its numeric representation. This means that if you load an item with a "string" enum, and then save it to DynamoDB, the enum will now be "numeric." As long as all clients are updated to use the latest SDK, the transition should be seamless.
  • If your converter worked with strings and you depend on them elsewhere (for example, queries or scans that depend on the string representation), continue to use your current converter.
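For reference, here is a sketch of the kind of string-based converter the second bullet describes; the class name is hypothetical, and it would have been attached to the property with [DynamoDBProperty(Converter = typeof(EditionTypesStringConverter))]:

using System;
using Amazon.DynamoDBv2.DataModel;
using Amazon.DynamoDBv2.DocumentModel;

// A string-based converter of the kind described in the second bullet.
public class EditionTypesStringConverter : IPropertyConverter
{
    // Stores the enum as its string representation, for example "Paperback, Digital".
    public DynamoDBEntry ToEntry(object value)
    {
        return new Primitive(((EditionTypes)value).ToString());
    }

    // Converts the stored string back into the enum value.
    public object FromEntry(DynamoDBEntry entry)
    {
        return (EditionTypes)Enum.Parse(typeof(EditionTypes), entry.AsString());
    }
}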

Enum Changes

Finally, it’s important to keep in mind that enums are stored as their numeric representations, because updates to the enum can create problems with existing data and code. If you modify an enum in version B of an application but still have version A data or clients, it’s possible some of your clients will not be able to handle the newer enum values properly. Even something as simple as reorganizing the enum values can lead to some very hard-to-identify bugs. This MSDN blog post provides some very good advice to keep in mind when designing an enum.
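As a quick illustration of the kind of bug this paragraph warns about, suppose a later version of the application inserts a new member into the EditionTypes enum shown earlier without assigning explicit values (this version B is hypothetical):

// Version B of the enum from the earlier example, with a new member inserted
// and the explicit values dropped. The compiler renumbers the members, so an
// item saved by version A with Hardcover (2) now deserializes as Audiobook,
// and the values are no longer powers of two, which silently breaks the
// [Flags] usage.
[Flags]
public enum EditionTypes
{
    None,       // 0
    Paperback,  // 1
    Audiobook,  // 2 (new) - collides with the value version A stored for Hardcover
    Hardcover,  // 3
    Digital     // 4
}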

Xamarin Support Out of Preview

by Pavel Safronov

Last month, with the release of version 3 of the AWS SDK for .NET, Xamarin and Portable Class Library (PCL) support was announced as an in-preview feature. We’ve worked hard to stabilize this feature and with today’s release, we are labeling Xamarin and PCL support production-ready. This applies to Windows Phone and Windows Store support, too. If you’ve been waiting for the production-ready version of the SDK for these platforms, you can now upgrade from version 2 to this release of the SDK.

The immediate impact of this push is that the AWSSDK.CognitoSync, AWSSDK.SyncManager, and AWSSDK.MobileAnalytics NuGet packages are no longer marked as preview. The versions of other AWS SDK NuGet packages have been incremented.

Happy coding!

S3 Transfer Utility Upgrade

by Tyler Moore

Version 3 of the AWS SDK for .NET includes an update to the S3 transfer utility. Before this update, if an S3 download of a large file failed, the entire download would be retried. Now the retry logic has been updated so that retry attempts resume from the data that has already been downloaded. This means better performance for customers: because a retry no longer requests the entire file, there is less data to stream from S3 when a download is interrupted.

If you are already using the S3 transfer utility, no code changes are required to take advantage of this update. It’s available in the AWSSDK.S3 package in version 3.1.2 and later. For more information about the S3 transfer utility, see Amazon S3 Transfer Utility for Windows Store and Windows Phone.
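For context, a typical download through the transfer utility looks like the sketch below (the bucket, key, and file path are placeholders); the resumable retry behavior happens inside the Download call, with no extra code:

using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;

public class DownloadExample
{
    public static void DownloadLargeFile()
    {
        // If the download is interrupted, the transfer utility's retry
        // resumes from the data already written to disk instead of
        // re-downloading the entire object.
        using (var transferUtility = new TransferUtility(new AmazonS3Client(RegionEndpoint.USWest2)))
        {
            transferUtility.Download(new TransferUtilityDownloadRequest
            {
                BucketName = "my-bucket",
                Key = "backups/large-archive.zip",
                FilePath = @"C:\temp\large-archive.zip"
            });
        }
    }
}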