Removal of Nullable Parameter Types in AWS Tools for Windows PowerShell

by Steve Roberts

We wanted to let you know of a change to the Tools for Windows PowerShell, in which ‘nullable’ parameters used by some cmdlets will be removed. This affects some boolean, int, and DateTime parameters in a small number of cmdlets that up until now have been surfaced as Nullable<bool>, Nullable<int> or Nullable<DateTime> parameter types.

We’ve decided to make this change based on community feedback: nullable parameter types are not standard PowerShell practice and tend to confuse users who are new to the tools. In addition, specifying $null as the value for one of these parameters actually has no effect; the value is never passed on to the underlying service API call that the cmdlet makes, so surfacing nullable parameter types serves no useful purpose.

After the change, the parameters will become simple boolean, int, and DateTime types. Note that this is a breaking change for your scripts only if you have passed the value $null to one of these parameters. For example, you may have written:

PS C:\> New-ASLaunchConfiguration -AssociatePublicIpAddress $null ...

If you look at the help for this cmdlet, it currently shows the AssociatePublicIpAddress parameter as being of type System.Boolean? (in other words, Nullable<bool>). Passing $null has no effect within the cmdlet; the parameter value is never passed on to the underlying service API call. After the update is released, your script will trigger an error if you use $null:

PS C:\> New-ASLaunchConfiguration -AssociatePublicIpAddress $null ...

New-ASLaunchConfiguration : Cannot bind argument to parameter 'AssociatePublicIpAddress' because it is null.
At line:1 char:53
+ New-ASLaunchConfiguration -AssociatePublicIpAddress $null
+                                                     ~~~~~
    + CategoryInfo          : InvalidArgument: (:) [New-ASLaunchConfiguration], ParameterBindingException
    + FullyQualifiedErrorId : ParameterArgumentValidationErrorNullNotAllowed,Amazon.PowerShell.Cmdlets.AS.NewASLaunchConfigurationCmdlet

The fix is to simply remove the parameter; this is safe since, as noted above, the value was never used anyway. If your scripts pass actual values to these parameters (in this example, $true or $false), you won’t see any difference in behavior after the change is released.
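If it helps, here is a minimal before-and-after sketch; the launch configuration name, AMI id, and instance type are illustrative placeholders:

# Before: explicitly passing $null (will fail after the update)
New-ASLaunchConfiguration -LaunchConfigurationName 'my-config' -ImageId 'ami-12345678' -InstanceType 't2.micro' -AssociatePublicIpAddress $null

# After: simply omit the parameter; behavior is unchanged because the
# $null value was never sent to the service anyway
New-ASLaunchConfiguration -LaunchConfigurationName 'my-config' -ImageId 'ami-12345678' -InstanceType 't2.micro'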

The team would like to take this opportunity to give a shout-out to PowerShell MVP Jeff Wouters. Jeff has been taking the time to provide useful and actionable suggestions on the Tools for Windows PowerShell cmdlets since late last year, which we really appreciate.

If you have ideas about other changes we could make to the tools to enhance your scripting experience with AWS, then be sure to let us know!

AWS Lambda Support in Visual Studio

Today we released version 1.9.0 of the AWS Toolkit for Visual Studio with support for AWS Lambda. AWS Lambda is a new compute service in preview that runs your code in response to events and automatically manages the compute resources for you, making it easy to build applications that respond quickly to new information.

Lambda functions are written in Node.js. To help Visual Studio developers, we have integrated with the Node.js Tools for Visual Studio plugin, which you can download here. Once the Node.js plugin and the latest AWS Toolkit are installed, it is easy to develop and debug locally and then deploy to AWS Lambda when you are ready. Let’s walk through the process of developing and deploying a Lambda function.

Setting up the project

To get started, we need to create a new project. There is a new AWS Lambda project template in the Visual Studio New Project dialog.

The Lambda project wizard offers three ways to get started. The first option creates a simple project that contains just the bare necessities to begin developing and testing. The second option pulls down the source of a function that has already been deployed. The last option creates a project from a sample. For this walkthrough, select the "Thumbnail Creator" sample and choose Finish.

Once this function is deployed, it will be called whenever images are uploaded to an S3 bucket. The function resizes the image into a thumbnail and uploads the thumbnail to another bucket. The destination bucket for the thumbnail has the same name as the bucket containing the original image, plus a "-thumbnails" suffix.

The project will be set up containing three files and the dependent Node.js packages. This sample also has a dependency on the ImageMagick CLI, which you can download from http://www.imagemagick.org/. Lambda has ImageMagick pre-configured on the compute instances that will be running the Lambda function.

Let’s take a look at the files added to the project.

app.js – Defines the function that Lambda will invoke when it receives events.
_sampleEvent.json – An example of what an event coming from S3 looks like.
_testdriver.js – Utility code for executing the Lambda function locally. It reads in the _sampleEvent.json file and passes it into the Lambda function defined in app.js.

Credentials

To access AWS resources from Lambda, functions use the AWS SDK for Node.js, which has a different path for finding credentials than the AWS SDK for .NET. The AWS SDK for Node.js looks for credentials in the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables or in the shared credentials file. For further information about configuring the AWS SDK for Node.js, refer to the AWS SDK for Node.js documentation.
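If you go the environment-variable route, one option is to set the variables from PowerShell before launching Visual Studio so the locally executing function can find them; the key values below are placeholders:

# Store placeholder AWS credentials at the user level so the AWS SDK for
# Node.js can find them when the Lambda function runs locally.
# Substitute your own access and secret keys.
[Environment]::SetEnvironmentVariable('AWS_ACCESS_KEY_ID', 'AKIA-EXAMPLE-KEY', 'User')
[Environment]::SetEnvironmentVariable('AWS_SECRET_ACCESS_KEY', 'example-secret-key', 'User')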

Running locally

To run this sample, you will need to create the source and target S3 buckets. Pick a name for the source bucket, and then create the bucket using AWS Explorer. Create a second bucket with the same name plus the "-thumbnails" suffix. For example, you could have a pair of buckets called foobar and foobar-thumbnails. Note: _testdriver.js defaults the region to us-west-2, so be sure to update this to whatever region you create the buckets in. Once the buckets are created, upload an image to the source bucket so that you have an image to test with.
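If you'd rather script this setup than click through AWS Explorer, the same steps can be done with the AWS PowerShell cmdlets; a quick sketch using the example bucket names above (the image path is hypothetical):

# Create the source and thumbnail buckets, then upload a test image
New-S3Bucket -BucketName 'foobar' -Region 'us-west-2'
New-S3Bucket -BucketName 'foobar-thumbnails' -Region 'us-west-2'
Write-S3Object -BucketName 'foobar' -File 'C:\images\test.jpg' -Key 'test.jpg' -Region 'us-west-2'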

Open the _sampleEvent.json file and update the bucket name property to the source bucket and the object key property to the image that you uploaded.

Now you can run and debug this like any other Visual Studio project. Open _testdriver.js, set a breakpoint, and press F5 to launch the debugger.

Deploying the function to AWS Lambda

Once we have verified the function works correctly locally, it is time to deploy it. To do that, right-click on the project and select Upload to AWS Lambda….

This opens the Upload Lambda Function dialog.

You need to enter a Function Name to identify the function. You can leave the File Name and Handler fields at their defaults; they indicate which file and function Lambda will call when handling an event. You then need to configure an IAM role that Lambda can use to invoke your function. For this walkthrough, create a new role by selecting Amazon S3 access and Amazon CloudWatch access. Giving access to CloudWatch is very useful: it lets Lambda write debugging information to Amazon CloudWatch Logs and gives you monitoring of the function's usage. You can always refine these permissions after the function is uploaded. Once all that is set, choose OK.

Once the upload is complete, the Lambda Function status view is displayed. The last step is to tell Amazon S3 to send events to your Lambda function. To do that, click the Add button for adding an event source.

Leave the Source Type set to Amazon S3 and select the source bucket. S3 needs permission to send events to Lambda; this is done by assigning a role to the event source, and by default the dialog creates a role that grants S3 that permission. S3 event sources are unique in that the configuration is actually stored in the S3 bucket's notification configuration, so when you choose OK on this dialog, the event source will not show up here. You can view it by right-clicking the bucket and selecting Properties.

 

Now that the function is deployed and S3 is configured to send events to it, you can test the setup by uploading an image to the source bucket. Shortly afterward, the thumbnail will show up in the thumbnails bucket.
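You can also check the result from PowerShell; for example, listing the thumbnails bucket created earlier:

# List the objects in the thumbnails bucket to confirm the function ran
# (allow a few seconds for processing)
Get-S3Object -BucketName 'foobar-thumbnails' | Select-Object Key, Size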

 

Calling from S3 Browser

Your function is now set up to create thumbnails for any newly uploaded images. But what if you want to run your Lambda function on images that have already been uploaded? You can do that by opening the S3 bucket from AWS Explorer, navigating to the image you want the Lambda function to process, and choosing Invoke Lambda Function.

Next, select the function you want to invoke and choose OK. The toolkit then creates the event object that S3 would have sent to Lambda and calls Invoke on the function.

This can be done for an individual file or by selecting multiple files or folders in the S3 Browser. This is helpful when you make a code change to your Lambda function and you want to reprocess all the objects in your bucket with the new code.

Conclusion

Creating thumbnails is just one example of what you can use AWS Lambda for; I'm sure you can imagine many ways to put Lambda's event-based compute model to work. Currently, you can create event sources for Amazon S3, Amazon Kinesis, and Amazon DynamoDB Streams (itself in preview). It is also possible to invoke Lambda functions for your own custom events using any of the AWS SDKs.

Try out the new Lambda features in the toolkit and let us know what you think. Given that AWS Lambda is in preview, we would love to get your feedback about these new features and what else we can add to make you successful using Lambda.

ElastiCache as an ASP.NET Session Store

by Brian Beach

Are you hosting an ASP.NET application on AWS? Do you want the benefits of Elastic Load Balancing (ELB) and Auto Scaling, but feel limited by a dependency on ASP.NET session state? Rather than rely on sticky sessions, you can use an out-of-process session state provider to share session state between multiple web servers. In this post, I will show you how to configure ElastiCache and the RedisSessionStateProvider from Microsoft to eliminate the dependency on sticky sessions.

Background

An ASP.NET session state provider maintains a user’s session between requests to an ASP.NET application. For example, you might store the contents of a shopping cart in session state. The default provider stores the user’s session in memory on the web server that received the request.

Using the default provider, your ELB must send every request from a given user to the same web server. This is known as sticky sessions, and it greatly limits your elasticity. First, the ELB cannot distribute traffic evenly, often sending a disproportionate amount of traffic to one server. Second, Auto Scaling cannot terminate web servers without losing some users' session state.

By moving the session state to a central location, all the web servers can share a single copy of session state. This allows the ELB to send requests to any web server, better distributing load across all the web servers. In addition, Auto Scaling can terminate individual web servers without losing session state information.

There are numerous providers available that allow multiple web servers to share session state. One option is to use the DynamoDB Session State Provider that ships with the AWS SDK for .NET. This post introduces another option: storing session state in an ElastiCache cluster.

ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud. ElastiCache supports both Memcached and Redis cache clusters. While either technology can store ASP.NET session state, Microsoft offers a provider for Redis, and I will focus on Redis here.

Launch an ElastiCache for Redis Cluster

Let us begin by launching a new ElastiCache for Redis cluster in the default VPC using PowerShell. Note that you can use the ElastiCache console if you prefer.

First, get a reference to the default VPC and create a new security group for the cluster. The security group must allow inbound requests to Redis, which uses TCP port 6379.

$VPC = Get-EC2Vpc -Filter @{name='isDefault'; value='true'}
$Group = New-EC2SecurityGroup -GroupName 'ElastiCacheRedis' -Description 'Allows TCP Port 6379'
Grant-EC2SecurityGroupIngress -GroupId $Group -IpPermission  @{ IpProtocol="tcp"; FromPort="6379"; ToPort="6379"; IpRanges=$VPC.CidrBlock }

Second, launch a new Redis cluster. In the example below, I launch a single node cluster named “aspnet” running on a t2.micro. Make sure you specify the security group you created above.

New-ECCacheCluster -CacheClusterId 'aspnet' -Engine 'redis' -CacheNodeType 'cache.t2.micro' -NumCacheNode 1 -SecurityGroupId $Group

Finally, get the endpoint address of the cache node you just created. Note that you must wait a few minutes for the cluster to launch before the address is available.

(Get-ECCacheCluster -CacheClusterId 'aspnet' -ShowCacheNodeInfo $true).CacheNodes[0].Endpoint.Address

The endpoint address is a fully qualified domain name that ends in cache.amazonaws.com and resolves to a private IP address in the VPC. For example, ElastiCache assigned my cluster the address below.

aspnet.k30h8n.0001.use1.cache.amazonaws.com

Configuring the Redis Session State Provider

With the Redis cluster running, you are ready to add the RedisSessionStateProvider to your ASP.NET application. Open your project in Visual Studio. First, right-click on the project in Solution Explorer and select Manage NuGet Packages. Then, search for "RedisSessionStateProvider" and click the Install button as shown below.

Manage NuGet Packages

NuGet will add a custom session state provider to your project’s web.config file. Open the web.config file and locate the Microsoft.Web.Redis.RedisSessionStateProvider shown below.

<sessionState mode="Custom" customProvider="MySessionStateStore">
  <providers>
    <add name="MySessionStateStore" type="Microsoft.Web.Redis.RedisSessionStateProvider" host="127.0.0.1" accessKey="" ssl="false" />
  </providers>
</sessionState>

Now replace the host attribute with the endpoint address you received from Get-ECCacheCluster. For example, my configuration looks like this.

<sessionState mode="Custom" customProvider="MySessionStateStore">
  <providers>
    <add name="MySessionStateStore" type="Microsoft.Web.Redis.RedisSessionStateProvider" host="aspnet.k30h8n.0001.use1.cache.amazonaws.com" accessKey="" ssl="false" />
  </providers>
</sessionState>

You are now ready to deploy and test your application. Wasn’t that easy?

Summary

You can use ElastiCache to share ASP.NET session information across multiple web servers and eliminate the dependency on ELB sticky sessions. ElastiCache is simple to use and integrates with ASP.NET through the RedisSessionStateProvider available as a NuGet package. For more information about ElastiCache, see the ElastiCache documentation.

Clock-skew correction

by Pavel Safronov

Clock skew is the difference in time between two computers. In the context of this blog post, it's the difference between the time on a computer running your .NET application (the client) and the time at the AWS service endpoint (the server). If the client time differs from server time by more than about 15 minutes, the requests your application makes will be signed with the incorrect time, and the server will reject them with an InvalidSignatureException or similar error.

The solution to this problem is to correct your system clock, but unfortunately that isn’t always an option. The application may not have permissions to update the time, or the user may have set an incorrect time on purpose. The latest release of the AWS SDK for .NET includes a new feature to help out in this case: the SDK will now identify and correct for clock skew. This feature is enabled by default, so you don’t have to make any changes to your application.

For the most part, this process is transparent: the SDK makes a request, and if the server responds with a clock skew error, the SDK calculates a clock offset (how much the client time differs from server time) and then retries the original request with the corrected time. If you are interested in the clock offset that the SDK calculated, the SDK stores this value in AWSConfigs.ClockOffset. You can also turn this feature on or off with the AWSConfigs.CorrectForClockSkew property or with the configuration below, though disabling clock-skew correction will of course result in the SDK throwing signature errors if there is clock skew on your system.

<configuration>
  <configSections>
    <section name="aws" type="Amazon.AWSSection, AWSSDK" />
  </configSections>
  <aws correctForClockSkew="true OR false" />
</configuration>
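Because the AWS Tools for Windows PowerShell load this same SDK assembly, you can also inspect or toggle these settings from a PowerShell session; a small sketch, assuming the AWSPowerShell module you have imported is built on an SDK version that includes this feature:

# Read the offset the SDK calculated after correcting for skew
[Amazon.AWSConfigs]::ClockOffset

# Disable automatic clock-skew correction; signature errors will surface
# again if the system clock is wrong
[Amazon.AWSConfigs]::CorrectForClockSkew = $false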

Using NuGet and Chocolatey package managers in AWS CloudFormation and AWS Elastic Beanstalk

by Jim Flanagan

In this guest post, AWS Solutions Architect Lee Atkinson describes how you can take advantage of the NuGet and Chocolatey package managers inside your CloudFormation templates and Elastic Beanstalk applications.

AWS CloudFormation and AWS Elastic Beanstalk support the Microsoft Windows Installer for installing .msi files onto Microsoft Windows instances managed by those services. For details on how to do this for CloudFormation, see http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html#aws-resource-init-packages, and for Elastic Beanstalk, see http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-windows-ec2.html#customize-containers-windows-format-packages.

NuGet (pronounced ‘New Get’) is a package manager for installing .NET development packages onto your development machine. It is available as a Microsoft Windows Visual Studio plugin as well as a standalone command line tool. Users can install packages from, and publish packages to, a central repository for packages located at http://www.nuget.org/.

Chocolatey NuGet builds on top of NuGet to provide a package manager for Microsoft Windows applications and describes itself as "a Machine Package Manager, somewhat like apt-get, but built with Windows in mind." It has a command line tool and a central repository located at http://chocolatey.org/.

AWS CloudFormation supports the downloading of files and execution of commands on EC2 instance creation using an application called ‘cfn-init.exe’ installed on instances running Microsoft Windows. We can leverage this functionality to install and execute both NuGet and Chocolatey. For more information on bootstrapping Microsoft Windows instances in CloudFormation, see http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-windows-stacks-bootstrapping.html.

Similarly, AWS Elastic Beanstalk supports the downloading of files and execution of commands on instance creation using container customization. We can use this functionality to install and execute both NuGet and Chocolatey. For more information on customizing Microsoft Windows containers in Elastic Beanstalk, see http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-windows-ec2.html.

Using NuGet in AWS CloudFormation

For installing NuGet packages on an EC2 instance using AWS CloudFormation, we can use the NuGet command line tool. First, we need to download the tool to the Microsoft Windows instance, which we can do with a CloudFormation 'files' declaration. Then, to install NuGet packages, we can use a CloudFormation 'commands' declaration.

Here’s an excerpt from an example AWS CloudFormation template to:

  1. Download NuGet.exe
  2. Install the JSON.NET NuGet package
  3. Install the Entity Framework NuGet package
"AWS::CloudFormation::Init": {
  "config": {
    "files" : {
      "c:/tools/nuget.exe" : {
        "source" : "https://nuget.org/nuget.exe"
      }
    },
    "commands" : {
      "1-create-myapp-folder" : {
        "command" : "if not exist c:\myapp mkdir c:\myapp",
        "waitAfterCompletion" : "0"
      },
      "2-install-json-net" : {
        "command" : "c:\tools\nuget install Newtonsoft.Json -NonInteractive -OutputDirectory c:\myapp",
        "waitAfterCompletion" : "0"
      },
      "3-install-entityframework" : {
        "command" : "c:\tools\nuget install EntityFramework -NonInteractive -OutputDirectory c:\myapp",
        "waitAfterCompletion" : "0"
      }
    }
  }
}

Using Chocolatey in AWS CloudFormation

Installing and using Chocolatey is similar to NuGet above, though the recommended way of installing Chocolatey is to execute a Microsoft Windows PowerShell script. As CloudFormation 'commands' declarations are executed by cmd.exe, we need to execute PowerShell.exe and provide the install command to it.

The Chocolatey installer and the packages it installs may modify the machine's PATH environment variable. This adds complexity, because subsequent commands run in the same session, which does not have the updated PATH. To overcome this, we use a command file that sets the session's PATH to that of the machine before executing our command.

Here’s an excerpt from an example AWS CloudFormation template to:

  1. Create a command file ‘ewmp.cmd’ to execute a command with the machine’s PATH
  2. Install Chocolatey
  3. Install Sublime Text 3
  4. Install Firefox
"AWS::CloudFormation::Init": {
  "config": {
    "files" : {
      "c:/tools/ewmp.cmd" : {
        "content": "@ECHO OFFnFOR /F "tokens=3,*" %%a IN ('REG QUERY "HKLM\System\CurrentControlSet\Control\Session Manager\Environment" /v PATH') DO PATH %%a%%bn%*"
      }
    },
    "commands" : {
      "1-install-chocolatey" : {
        "command" : "powershell -NoProfile -ExecutionPolicy unrestricted -Command "Invoke-Expression ((New-Object Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))""
      },
      "2-install-sublimetext" : {
        "command" : "c:\tools\ewmp choco install sublimetext3"
      },
      "3-install-firefox" : {
        "command" : "c:\tools\ewmp choco install firefox"
      }
    }
  }
}

Using NuGet and Chocolatey together in AWS CloudFormation

Another example for NuGet is when you are cloning a repository from a version control system that does not have the NuGet packages checked in, which means those packages are missing from the clone. In this case, you can perform a NuGet restore, which instructs NuGet to download the packages specified within the repository.

But we need to install git before we can clone—so we use Chocolatey!

Here’s an excerpt from an example AWS CloudFormation template to:

  1. Download NuGet.exe
  2. Create a command file ‘ewmp.cmd’ to execute a command with the machine’s PATH
  3. Install Chocolatey
  4. Install Git
  5. Clone a Git repository
  6. Restore NuGet packages defined in the repository’s solution file
"AWS::CloudFormation::Init": {
  "config": {
    "files" : {
      "c:/tools/nuget.exe" : {
        "source" : "https://nuget.org/nuget.exe"
      },
      "c:/tools/ewmp.cmd" : {
        "content": "@ECHO OFFnFOR /F "tokens=3,*" %%a IN ('REG QUERY "HKLM\System\CurrentControlSet\Control\Session Manager\Environment" /v PATH') DO PATH %%a%%bn%*"
      }
    },
    "commands" : {
      "1-install-chocolatey" : {
        "command" : "powershell -NoProfile -ExecutionPolicy unrestricted -Command "Invoke-Expression ((New-Object Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"",
        "waitAfterCompletion" : "0"
      },
      "2-install-git" : {
        "command" : "c:\tools\ewmp choco install git",
        "waitAfterCompletion" : "0"
      },
      "3-create-myapp-folder" : {
        "command" : "if not exist c:\myapp mkdir c:\myapp",
        "waitAfterCompletion" : "0"
      },
      "4-clone-repo" : {
        "command" : "c:\tools\ewmp git clone git://github.com/aws/aws-sdk-net c:\myapp",
        "waitAfterCompletion" : "0"
      },
      "5-nuget-restore" : {
        "command" : "c:\tools\nuget restore c:\myapp\AWSSDK_DotNet.Mobile.sln",
        "waitAfterCompletion" : "0"
      }
    }
  }
}

Using NuGet and Chocolatey in AWS Elastic Beanstalk

The above examples can be translated into AWS Elastic Beanstalk config files to enable use of both NuGet and Chocolatey in Elastic Beanstalk. For Elastic Beanstalk, we create YAML .config files inside the .ebextensions folder of our source bundle.

Here’s an example .ebextensions config file to:

  1. Download NuGet.exe
  2. Install the JSON.NET NuGet package
  3. Install the Entity Framework NuGet package
files:
  c:/tools/nuget.exe:
    source: https://nuget.org/nuget.exe
commands:
  1-create-myapp-folder:
    command: if not exist c:\myapp mkdir c:\myapp
    waitAfterCompletion: 0
  2-install-json-net:
    command: c:\tools\nuget install Newtonsoft.Json -NonInteractive -OutputDirectory c:\myapp
    waitAfterCompletion: 0
  3-install-entityframework:
    command: c:\tools\nuget install EntityFramework -NonInteractive -OutputDirectory c:\myapp
    waitAfterCompletion: 0

Here’s an example .ebextensions config file to:

  1. Create a command file ‘ewmp.cmd’ to execute a command with the machine’s PATH
  2. Install Chocolatey
  3. Install Sublime Text 3
  4. Install Firefox
files:
  c:/tools/ewmp.cmd:
    content: |
      @ECHO OFF
      FOR /F "tokens=3,*" %%a IN ('REG QUERY "HKLMSystemCurrentControlSetControlSession ManagerEnvironment" /v PATH') DO PATH %%a%%b
      %*
commands:
  1-install-chocolatey:
    command: powershell -NoProfile -ExecutionPolicy unrestricted -Command "Invoke-Expression ((New-Object Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"
  2-install-sublimetext:
    command: c:\tools\ewmp choco install sublimetext3
  3-install-firefox:
    command: c:\tools\ewmp choco install firefox

Here’s an example .ebextensions config file to:

  1. Download NuGet.exe
  2. Create a command file ‘ewmp.cmd’ to execute a command with the machine’s PATH
  3. Install Chocolatey
  4. Install Git
  5. Clone a Git repository
  6. Restore NuGet packages defined in the repository’s solution file
files:
  c:/tools/nuget.exe:
    source: https://nuget.org/nuget.exe
  c:/tools/ewmp.cmd:
    content: |
      @ECHO OFF
      FOR /F "tokens=3,*" %%a IN ('REG QUERY "HKLMSystemCurrentControlSetControlSession ManagerEnvironment" /v PATH') DO PATH %%a%%b
      %*
commands:
  1-install-chocolatey:
    command: powershell -NoProfile -ExecutionPolicy unrestricted -Command "Invoke-Expression ((New-Object Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"
    waitAfterCompletion: 0
  2-install-git:
    command: c:\tools\ewmp choco install git
    waitAfterCompletion: 0
  3-create-myapp-folder:
    command: if not exist c:\myapp mkdir c:\myapp
    waitAfterCompletion: 0
  4-clone-repo:
    command: c:\tools\ewmp git clone git://github.com/aws/aws-sdk-net c:\myapp
    waitAfterCompletion: 0
  5-nuget-restore:
    command: c:\tools\nuget restore c:\myapp\AWSSDK_DotNet.Mobile.sln
    waitAfterCompletion: 0

Summary

I hope this provides inspiration on how you can leverage both NuGet and Chocolatey to configure your Microsoft Windows instances managed by either AWS CloudFormation or AWS Elastic Beanstalk.

Mapping Cmdlets to AWS Service APIs

by Steve Roberts

The consistency of the standardized verb and naming scheme used by Windows PowerShell makes learning the basics of the shell relatively easy, but translating knowledge of an existing API to the standard names can be difficult at first. Starting with version 2.3.19, the AWS Tools for Windows PowerShell include a new cmdlet to help with discovery: Get-AWSCmdletName. This cmdlet accepts the name of an AWS service API and emits the names of cmdlets that invoke an API matching that name pattern. It can also accept an AWS CLI command line and give you back the corresponding PowerShell cmdlet—handy if you are converting an AWS CLI sample.

Discovering Service APIs

Running the PowerShell Get-Command cmdlet with verb and/or noun filtering only gets you so far in discovering the cmdlets that are available in a module. You as a user still need to make the mental leap to associate the verb and noun combination to a known service API. Sometimes this is obvious, sometimes not so much. To get the name of a cmdlet that invokes a known AWS service API is now as easy as:

PS C:\> Get-AWSCmdletName -ApiOperation describeinstances

CmdletName              ServiceOperation
----------              ----------------
Get-EC2Instance         DescribeInstances
Get-OPSInstances        DescribeInstances

Note that the full name of the service and the noun prefix are displayed in additional columns, which are omitted from these examples for brevity.

The parameter name -ApiOperation can be omitted to save typing. You can see from the output that the cmdlet scanned all cmdlets contained in the AWS PowerShell module and output those that invoke a DescribeInstances service API, regardless of the service.

If you know the service of interest, you can restrict the search using the optional -Service parameter:

PS C:\> Get-AWSCmdletName describeinstances -Service ec2

CmdletName              ServiceOperation
----------              ----------------
Get-EC2Instance         DescribeInstances

The value supplied to the -Service parameter can be either the prefix code that is applied to the noun part of the name of cmdlets belonging to a service, or one or more words from the service name. For example, these two commands return the same output as the example above:

PS C:\> Get-AWSCmdletName describeinstances -Service compute
PS C:\> Get-AWSCmdletName describeinstances -Service "compute cloud"

Note that all searches are case insensitive.

If you know the exact name of the service API you are interested in, then you are good to go. But what if you want to find all cmdlets that have something to do with, say, security groups (based on the premise that the term ‘securitygroup’ forms part of the API name)? You might try this:

PS C:\> Get-AWSCmdletName securitygroup

As you’ll see if you run the example, the cmdlet displays no output because there is no service API matching that name. What we need is a more flexible way to specify the pattern to match. You can do this by adding the -MatchWithRegex switch:

PS C:\> Get-AWSCmdletName securitygroup -MatchWithRegex

CmdletName                              ServiceOperation
----------                              ----------------
Approve-ECCacheSecurityGroupIngress     AuthorizeCacheSecurityGroupIngress
Get-ECCacheSecurityGroup                DescribeCacheSecurityGroups
New-ECCacheSecurityGroup                CreateCacheSecurityGroup
Remove-ECCacheSecurityGroup             DeleteCacheSecurityGroup
Revoke-ECCacheSecurityGroupIngress      RevokeCacheSecurityGroupIngress
Get-EC2SecurityGroup                    DescribeSecurityGroups
Grant-EC2SecurityGroupEgress            AuthorizeSecurityGroupEgress
Grant-EC2SecurityGroupIngress           AuthorizeSecurityGroupIngress
New-EC2SecurityGroup                    CreateSecurityGroup
Remove-EC2SecurityGroup                 DeleteSecurityGroup
Revoke-EC2SecurityGroupEgress           RevokeSecurityGroupEgress
Revoke-EC2SecurityGroupIngress          RevokeSecurityGroupIngress
Join-ELBSecurityGroupToLoadBalancer     ApplySecurityGroupsToLoadBalancer
Enable-RDSDBSecurityGroupIngress        AuthorizeDBSecurityGroupIngress
Get-RDSDBSecurityGroup                  DescribeDBSecurityGroups
New-RDSDBSecurityGroup                  CreateDBSecurityGroup
Remove-RDSDBSecurityGroup               DeleteDBSecurityGroup
Revoke-RDSDBSecurityGroupIngress        RevokeDBSecurityGroupIngress
Approve-RSClusterSecurityGroupIngress   AuthorizeClusterSecurityGroupIngress
Get-RSClusterSecurityGroups             DescribeClusterSecurityGroups
New-RSClusterSecurityGroup              CreateClusterSecurityGroup
Remove-RSClusterSecurityGroup           DeleteClusterSecurityGroup
Revoke-RSClusterSecurityGroupIngress    RevokeClusterSecurityGroupIngress

As you can see, it's now easy to find all cmdlets that relate to a particular term or object across all services. When the -MatchWithRegex switch is used, the value of the -ApiOperation parameter is interpreted as a regular expression.

If you want to restrict the search to a specific service, just add the -Service parameter too, as shown earlier. The -Service parameter value is always treated as a regular expression and is not affected by the -MatchWithRegex switch. When matching the owning service for a cmdlet, Get-AWSCmdletName first uses the -Service value as a regular expression against the service name, and if that does not yield a match, it falls back to a simple text comparison against the service prefix that is used in cmdlet names to effectively namespace them.
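For example, combining a regular expression with a service filter to find just the EC2 cmdlets that authorize or revoke security group traffic should return something like this (the pattern is illustrative):

PS C:\> Get-AWSCmdletName "securitygroup(ingress|egress)" -MatchWithRegex -Service ec2

CmdletName                       ServiceOperation
----------                       ----------------
Grant-EC2SecurityGroupEgress     AuthorizeSecurityGroupEgress
Grant-EC2SecurityGroupIngress    AuthorizeSecurityGroupIngress
Revoke-EC2SecurityGroupEgress    RevokeSecurityGroupEgress
Revoke-EC2SecurityGroupIngress   RevokeSecurityGroupIngress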

Translating from AWS CLI

The verb-noun naming standard of PowerShell is considered one of its strengths, and one we are pleased to support to give users a consistent experience. The AWS CLI follows the AWS API naming conventions more closely. Get-AWSCmdletName has one further ability: it can make a "best effort" attempt at translating an AWS CLI command line to yield the corresponding AWS PowerShell cmdlet. This can be useful when translating a sample:

PS C:\> Get-AWSCmdletName -AwsCliCommand "aws ec2 authorize-security-group-ingress"

CmdletName                           ServiceOperation
----------                           ----------------
Grant-EC2SecurityGroupIngress        AuthorizeSecurityGroupIngress

The supplied AWS CLI command is parsed to recover the service identifier and the operation name (which is stripped of any hyphens). You only need to specify enough of the command to allow the service and operation to be identified; the "aws" prefix in the parameter value can be omitted. Also, if you've pasted the parameter value from a sample and it contains any CLI options (identified by a "--" prefix), they are skipped.
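So a pasted CLI sample that still carries options resolves just the same; for example:

PS C:\> Get-AWSCmdletName -AwsCliCommand "aws ec2 describe-instances --region us-west-2"

CmdletName          ServiceOperation
----------          ----------------
Get-EC2Instance     DescribeInstances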

Hopefully, you’ll find this new cmdlet useful in discovering and navigating the cmdlets available for working with AWS. Do you have an idea for something that would be useful for you and potentially others? Let us know in the comments!

Automatic Pagination of Responses in the AWS SDK for .NET (Preview)

by Jim Flanagan

As part of our recent preview release of the resource APIs for .NET, we have also exposed one of the underlying features in the low-level .NET SDK.

Many of the AWS APIs that return collections of items have a pagination interface. Rather than return all results at once, the service returns a certain number of results per request, and provides a token in the response to get the next "page" of results. In this way, you can chain requests together using the token to get as many results as you need.

Here’s what that looks like using the SDK for .NET to get all the IAM users for an account, 20 at a time:

ListUsersResponse response;
ListUsersRequest request = new ListUsersRequest { MaxItems = 20 };

do
{
    response = iam.ListUsers(request);
    ProcessUsers(response.Users);
    request.Marker = response.Marker;
}
while (response.IsTruncated);

In order to make the resource APIs feel more natural, we built in a mechanism that does something like the above code behind the scenes through an IEnumerable interface. Using the resource APIs, you can get the users like this:

var users = iam.GetUsers();
foreach (var user in users)
{
    Console.WriteLine("User: {0}", user.Name);
}

The first line does not result in a service call. No service calls are made until your code starts iterating over the IEnumerable, and subsequent calls are then made as needed under the covers.

This seemed useful to expose through the low-level API as well, so as part of the SDK preview we added enumerator methods to the following clients:

  • Amazon.GlacierClient
  • Amazon.IdentityManagementServiceClient
  • Amazon.OpsWorksClient
  • Amazon.SimpleNotificationServiceClient

Using the paginators from the low-level request interface looks like this:

var users = client.ListUsersEnumerator(new ListUsersRequest { MaxItems = 20 });
foreach(var user in users)
{
    Console.WriteLine("User: {0}", user.Name);
}

As usual with IEnumerable, you will need to pay special attention when using LINQ and the System.Linq.Enumerable extension methods. Calling extensions like .Count(), .Where(), or .Last() on one of the IEnumerables returned by these methods could result in multiple, unintended calls to the service. In those instances where you do need those methods, it can be a good idea to materialize the results once (for example, with .ToList()) and cache them for as long as possible.

Let us know if you find this facility useful. We look forward to hearing from you on GitHub and the AWS forums.

Upcoming Modularization of the AWS SDK for .NET

by Norm Johanson

Today, I would like to announce our plans to modularize the AWS SDK for .NET into individual assemblies and NuGet packages. This work will take us a few months to complete, but we recognize this will be a pretty big change to how developers see the SDK and want to give as much of a heads-up as we can. It is our intention to have as few breaking changes as possible, but a few will be unavoidable. For those changes, we plan to mark the affected APIs obsolete in the current version of the SDK as soon as possible. The most notable breaking change will be the removal of Amazon.AWSClientFactory, because that class requires a reference to every service.

Why are we doing this?

When we first released the AWS SDK for .NET, there were 10 services and the total size of the SDK was about 600 KB. Today, the SDK supports over 40 services and has grown to over 6 MB. We've heard from many of our users that they want a smaller SDK containing just the services they need. This is especially important for developers who are using our SDK for Windows Phone and Windows Store apps.

Another reason we are doing this is the frequency of releases from AWS. If you take a look at our NuGet package, you can see that we release the SDK nearly weekly, sometimes even more frequently. Our hope is that this change will allow developers to update their SDK only when the services they use are updated.
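To illustrate what this could look like day to day, after modularization you would install only the packages for the services you use from the Package Manager Console; the package ids below are purely hypothetical, since the final naming has not been announced:

PM> Install-Package AWSSDK.Core
PM> Install-Package AWSSDK.S3
PM> Install-Package AWSSDK.DynamoDBv2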

What happens next?

It will take us a few months to update our build and release process. We'll keep you updated as more information becomes available. Watch for methods being marked as obsolete, and move away from them as soon as possible. Since we are doing all this refactoring, this is a perfect time for feedback from users of the SDK. If there are problems the SDK is not solving for you, or things that are hard to discover, let us know. You can give feedback here, in our forums, or on GitHub.

Updated Amazon Cognito Credentials Provider

by Pavel Safronov

Amazon Cognito allows you to get temporary AWS credentials, so that you don’t have to distribute your own credentials with your application. Last year we added a Cognito credentials provider to the AWS SDK for .NET to simplify this process.

With the latest update to Cognito, we are now making it even easier to use Cognito with your application. Using the latest version of the SDK, you no longer need to specify IAM roles in your application if you have already associated the correct roles with your identity pool.

Below is an example of how you can construct and use the new credentials provider:

CognitoAWSCredentials credentials = new CognitoAWSCredentials(
    identityPoolId,   // identity pool id
    region);          // identity pool region

using (var s3Client = new AmazonS3Client(credentials))
{
    s3Client.ListBuckets();
}

Something to note is that even if you have associated roles with an identity pool, you can still specify IAM roles—even ones that are different from the roles configured on the identity pool—when creating these credentials. This gives you finer control over what resources and operations these credentials have access to.
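Since the AWS Tools for Windows PowerShell are built on the same SDK, the simplified constructor can be used from a script as well; a minimal sketch, with a placeholder identity pool id and assuming a module version that includes the updated provider:

# Create the Cognito credentials provider (the pool id is a placeholder)
# and use it with an S3 cmdlet
$credentials = New-Object -TypeName Amazon.CognitoIdentity.CognitoAWSCredentials -ArgumentList 'us-east-1:00000000-0000-0000-0000-000000000000', ([Amazon.RegionEndpoint]::USEast1)
Get-S3Bucket -Credential $credentials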

Cross-Account IAM Roles in Windows PowerShell

by Brian Beach

As a company’s adoption of Amazon Web Services (AWS) grows, most customers adopt a multi-account strategy. Some customers choose to create an account for each application, while others create an account for each business unit or environment (development, testing, production). Whatever the strategy, there is often a use case that requires access to multiple accounts at once. This post examines cross-account access and the AssumeRole API, known as Use-STSRole in Windows PowerShell.

A role consists of a set of permissions that grant access to actions or resources in AWS. An application uses a role by calling the AssumeRole API function. The function returns a set of temporary credentials that the application can use in subsequent function calls. Cross-account roles allow an application in one account to assume a role (and act on resources) in another account.

One common example of cross-account access is maintaining a configuration management database (CMDB). Most large enterprise customers have a requirement that all servers, including EC2 instances, must be tracked in the CMDB. Example Corp., shown in Figure 1, has a Payer account and three linked accounts: Development, Testing, and Production.

Figure 1: Multiple Accounts Owned by a Single Customer

Note that linked accounts are not required to use cross-account roles, but they are often used together. You can use cross-account roles to access accounts that are not part of a consolidated billing relationship or between accounts owned by different companies. See the user guide to learn more about linked accounts and consolidated billing.

Scenario

Bob, a Windows administrator at Example Corp., is tasked with maintaining an inventory of all the instances in each account. Specifically, he needs to send a list of all EC2 instances in all accounts to the CMDB team each night. He plans to create a Windows PowerShell script to do this.

Bob could create an IAM user in each account and hard-code the credentials in the script. Though this would be simple, hard-coding credentials is not the most secure solution. The AWS best practice is to use IAM roles. Bob is familiar with IAM roles for Amazon EC2 and wants to learn more about cross-account roles.

Bob plans to script the process shown in Figure 2. The CMDB script will run on an EC2 instance using the CMDBApplication role. For each account, the script will call Use-STSRole to retrieve a set of temporary credentials for the CMDBDiscovery role. The script will then iterate over each region and call Get-EC2Instance using the CMDBDiscovery credentials to access the appropriate account and list all of its instances.

Figure 2: CMDB Application and Supporting IAM Roles

Creating IAM Roles

Bob begins to build his solution by creating the IAM roles shown in Figure 3. The Windows PowerShell script will run on a new EC2 instance in the Payer account. Bob creates a CMDBApplication role in the Payer account; this role is used by the EC2 instance, allowing the script to run without requiring IAM user credentials. In addition, Bob creates a CMDBDiscovery role in every account. The CMDBDiscovery role has permission to list (or "discover") the instances in that account.

Figure 3: CMDB Application and Supporting IAM Roles

Notice that Bob has created two roles in the Payer account: CMDBApplication and CMDBDiscovery. You may be asking why he needs a cross-account role in the same account as the application. Creating the CMDBDiscovery role in every account makes the code easier to write because all accounts are treated the same. Bob can treat the Payer account just like any of the other accounts without a special code branch.

Bob first creates the Amazon EC2 role, CMDBApplication, in the Payer account. This role will be used by the EC2 instance that runs the Windows PowerShell script. Bob signs in to the AWS Management Console for the Payer account and follows the instructions to create a new IAM Role for Amazon EC2 with the following custom policy.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["sts:AssumeRole"],
      "Resource": ["*"]
    }
  ]
}

Policy 1: Policy Definition for the CMDBApplication IAM Role

The CMDBApplication role grants a single permission, sts:AssumeRole, which allows the application to call the AssumeRole API to get temporary credentials for another account. Notice that Bob is following the best practice of least privilege and has assigned only one permission to the application.

Next, Bob creates a cross-account role called CMDBDiscovery in each of the accounts, including the Payer account. This role will be used to list the EC2 instances in that account. Bob signs in to the console for each account and follows the instructions to create a new IAM role for cross-account access. In the wizard, Bob supplies the account ID of the Payer account (111111111111 in our example) and specifies the following custom policy.

Note that when creating the role, there are two options. One provides access between accounts you own, and the other provides access from a third-party account. Third-party account roles include an external ID, which is outside the scope of this article. Bob chooses the first option because his company owns both accounts.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:DescribeInstances"],
      "Resource": ["*"]
    }
  ]
}

Policy 2: Policy Definition for the CMDBDiscovery IAM Role

Again, this policy follows the best practice of least privilege and assigns a single permission, ec2:DescribeInstances, which allows the caller to list the EC2 instances in the account.

Creating the CMDB Script

With the IAM roles created, Bob next launches a new EC2 instance in the Payer account. This instance will use the CMDBApplication role. When the instance is ready, Bob signs in and creates a Windows PowerShell script that will list the instances in each account and region.

The first part of the script, shown in Listing 1, lists the instances in a given region and account. Notice that in addition to the account number and region, the function expects a set of credentials. These credentials represent the CMDBDiscovery role and will be retrieved from the AssumeRole API in the second part of the script.

Function ListInstances {
    Param($Credentials, $Account, $Region)
          
    #List all instances in the region
    (Get-EC2Instance -Credential $Credentials -Region $Region).Instances | % {
        If($Instance = $_) {
  
            #If there are instances in this region return the desired attributes
            New-Object PSObject -Property @{
                Account = $Account
                Region = $Region
                InstanceId = $Instance.InstanceId
                Name = ($Instance.Tags | Where-Object {$_.Key -eq 'Name'}).Value
            }
        }
    }
}

Listing 1: Windows PowerShell Function to List EC2 Instances

The magic happens in the second part of the script, shown in Listing 2. We know that the script is running on the new EC2 instance using the CMDBApplication role. Remember that the only thing this role can do is call the AssumeRole API. Therefore, we should expect to see a call to AssumeRole. The Windows PowerShell cmdlet that implements AssumeRole is Use-STSRole.

#List of accounts to check
$Accounts = @(111111111111, 222222222222, 333333333333, 444444444444)
  
#Iterate over each account
$Accounts | % {
    $Account = $_
    $RoleArn = "arn:aws:iam::${Account}:role/CMDBDiscovery"
  
    #Request temporary credentials for each account and create a credential object
    #(the STS call needs an explicit region for its endpoint)
    $Response = (Use-STSRole -Region 'us-east-1' -RoleArn $RoleArn -RoleSessionName 'CMDB').Credentials
    $Credentials = New-AWSCredentials -AccessKey $Response.AccessKeyId -SecretKey $Response.SecretAccessKey -SessionToken $Response.SessionToken
  
    #Iterate over all regions and list instances
    Get-AWSRegion | % {
        ListInstances -Credential $Credentials -Account $Account -Region $_.Region
    }
  
} | ConvertTo-Csv

Listing 2: Windows PowerShell Script That Calls AssumeRole

Use-STSRole retrieves temporary credentials for the IAM role specified in the ARN parameter. The ARN uses the following format, where ROLE_NAME is the role you created (e.g., CMDBDiscovery) in TARGET_ACCOUNT_NUMBER.

arn:aws:iam::TARGET_ACCOUNT_NUMBER:role/ROLE_NAME

Use-STSRole will return an AccessKey, a SecretKey, and a SessionToken that can be used to access the account specified in the role ARN. The script uses this information to create a new credential object, which it passes to ListInstances. ListInstances uses the credential object to list EC2 instances in the account specified in the role ARN.

That’s all there is to it. Bob creates a scheduled task that executes this script each night and sends the results to the CMDB team. When the company adds additional accounts, Bob simply adds the CMDBDiscovery role to the new account and updates the account list in his script.
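The nightly run itself can be registered from PowerShell using the ScheduledTasks module (available on Windows Server 2012 and later); a sketch, with a hypothetical script path:

# Run the CMDB discovery script every night at 1 AM
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -File C:\Scripts\Get-CmdbInventory.ps1'
$trigger = New-ScheduledTaskTrigger -Daily -At 1am
Register-ScheduledTask -TaskName 'CMDB Discovery' -Action $action -Trigger $trigger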

Summary

Cross-account roles are a valuable tool for large customers with multiple accounts. Roles provide temporary credentials that a user or application in one account can use to access resources in another account. These temporary credentials do not need to be stored or rotated, resulting in a secure and maintainable architecture.