DynamoDB Table Cache

by Pavel Safronov

Version 3 of the AWS SDK for .NET includes a new feature, the SDK Cache. This is an in-memory cache used by the SDK to store information like DynamoDB table descriptions. Before version 3, the SDK retrieved table information when you constructed a Table or DynamoDBContext object. For example, the following code creates a table and performs several operations on it. The LoadTable method makes a DescribeTable call to DynamoDB, so this sample will make three service calls: DescribeTable, GetItem, and UpdateItem.

// DescribeTable call to retrieve the table schema
var table = Table.LoadTable(ddbClient, "TestTable");
// GetItem call to retrieve the item with hash key 42
var item = table.GetItem(42);
item["Updated"] = DateTime.Now;
// UpdateItem call to write the modified item back
table.UpdateItem(item);

In most cases, your application will use tables that do not change, so constantly retrieving the same table information is wasteful and unnecessary. In fact, to keep the number of service calls to a minimum, the best option is to create a single copy of the Table or DynamoDBContext object and keep it around for the lifetime of your application. This, of course, requires a change to the way your application uses the AWS SDK for .NET.
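
For example, a minimal sketch of that pattern, assuming a long-lived IAmazonDynamoDB client named ddbClient:

// One Table instance for the whole process; the DescribeTable call
// happens only once, when the first caller touches Value.
// (ddbClient is assumed to be a long-lived, static client.)
private static readonly Lazy<Table> TestTable =
    new Lazy<Table>(() => Table.LoadTable(ddbClient, "TestTable"));

// Elsewhere in the application:
var item = TestTable.Value.GetItem(42);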

With version 3, the SDK first attempts to retrieve table information from the SDK Cache. Even if your code constructs a new Table or DynamoDBContext object for each call, the SDK will make only a single DescribeTable call per table and will keep this data around for the lifetime of the process. So if you ran the preceding code twice, only the first invocation of LoadTable would result in a DescribeTable call.

This change will reduce the number of DescribeTable calls your application makes, but in some cases you may need to get the most up-to-date table information from the service (for example, if you are developing a generic DynamoDB table scanner utility). You have two options: periodically clear the table metadata cache or disable the SDK Cache.

The first approach is to call Table.ClearTableCache(), a static method on the Table class. This operation will clear out the entire table metadata cache, so any Table or DynamoDBContext objects you create after this point will result in one new DescribeTable call per table. (Of course, after the data is retrieved once, it will again be stored in the cache. This approach will work only if you know when your table metadata changes and clear the cache intermittently.)
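
For example, a sketch of the first approach, refreshing the cache when you know the table has changed (using the same ddbClient as above):

// Wipe every cached table description...
Table.ClearTableCache();
// ...so the next LoadTable triggers a fresh DescribeTable call.
var table = Table.LoadTable(ddbClient, "TestTable");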

The second approach is to disable the SDK Cache, forcing the SDK to always retrieve the current table configuration. This can be accomplished through code or the app.config/web.config file, as illustrated below. (Disabling the SDK Cache will revert to version 2 behavior, so unless you hold on to the Table or DynamoDBContext objects as you create them, your application will end up making DescribeTable service calls.)

Disabling the cache through code:

// Disable SDK Cache for the entire application
AWSConfigs.UseSdkCache = false;

Disabling the cache through app.config:

<configuration>
  <appSettings>
    <!-- Disables SDK Cache for the entire application -->
    <add key="AWSCache" value="false" />
  </appSettings>
</configuration>

Version 3 of the AWS SDK for .NET Out of Preview

by Norm Johanson

Back in February, we announced our intention to release a new major version of the AWS SDK for .NET. In April, we released a preview on NuGet. After receiving great feedback from users, today we are taking version 3 of the AWS SDK for .NET out of preview. This means the preview flag has been removed from the NuGet packages. The SDK is now included in our MSI installer from our website.

Version 3 is a new, modularized SDK. Every service is a separate assembly and distributed as a separate NuGet package. Each service has a dependency on a common runtime, AWSSDK.Core. This has been a major request from our users, especially now that AWS has grown to over 50 services. This design also gives SDK users better control over when to upgrade to the newest service updates.

We wanted to make the transition to version 3 as easy as possible, so there are very few breaking changes to the public API. For the full list of changes, see our API Reference, which contains a migration guide.

Our hope is that most users will just need to replace the old reference to version 2 and add the reference to the services they are using. If you are using NuGet to get the SDK, the reference to our core runtime package will be added automatically. If you are getting the SDK from the installer on our website, then you will need to add a reference to AWSSDK.Core.

Xamarin Preview

We recently announced a public preview of Xamarin support, which is part of version 3. Even though the SDK is now widely available, Xamarin and the Portable Class Library version of the SDK are still in preview. We encourage you to try the new Xamarin support and give us feedback, but we are not ready for users to publish production applications just yet. Users with an immediate need for Windows Phone and Windows Store support should continue using version 2 until the PCL version of the SDK version 3 is production-ready.

PowerShell

With our move to version 3, we have also switched the AWS Tools for Windows PowerShell to the new SDK. The version numbers for the AWS SDK for .NET and the AWS Tools for Windows PowerShell are kept in sync, so the tools are getting a major version bump to 3. There are otherwise no major changes to the AWS Tools for Windows PowerShell.

Changes to Our Installer

The installer has been updated to contain version 3 of the SDK, but it also contains version 2 for users who are not ready to move to version 3. The Portable Class Library version of the SDK (which includes Xamarin support) is distributed only through NuGet and will not be available through the installer. The Portable Class Library uses platform-specific dependencies that are resolved automatically when references are added through NuGet; resolving them manually would be a complex process.

Packages on NuGet

For an up-to-date list of the version 3 NuGet packages, check out the NuGet section in the SDK’s GitHub README.md.

Using the New Import Cmdlets for Amazon EC2

by Steve Roberts

Amazon EC2 recently released an updated set of APIs for importing virtual machine images and disks. These new APIs, ImportImage and ImportSnapshot, are faster and more flexible than the original import APIs and are now available in the AWS Tools for Windows PowerShell (from version 2.3.43.0) through two new cmdlets – Import-EC2Image and Import-EC2Snapshot. Let’s take a look at how we use the new cmdlets to perform imports.

Importing a VM Image

Importing an image to EC2 can be done in just a couple of steps. First, we have to upload the disk image to Amazon S3, and then we run the import cmdlet that will yield an Amazon Machine Image (AMI) we can launch. We also need to set up an Identity and Access Management role, plus associated role policy, that gives EC2 access to the S3 artifacts. This is a one-time operation.

Import Prerequisites

As detailed in the EC2 user guide topic, the new import service APIs use an Identity and Access Management role, with associated role policy, to access the image file(s) that you upload to Amazon S3 during import. Setting these up is a one-time operation (assuming you use the same bucket to hold the image file for each import) and can be done from PowerShell very easily, as follows.

First, we create the role. The EC2 import API defaults to a role name of "vmimport" if a custom role name is not supplied when we run the import command. For the sake of simplicity, that’s the name we’ll use in this blog example:

PS C:\> $importPolicyDocument = @"
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"",
         "Effect":"Allow",
         "Principal":{
            "Service":"vmie.amazonaws.com"
         },
         "Action":"sts:AssumeRole",
         "Condition":{
            "StringEquals":{
               "sts:ExternalId":"vmimport"
            }
         }
      }
   ]
}
"@

PS C:\> New-IAMRole -RoleName vmimport -AssumeRolePolicyDocument $importPolicyDocument

Now that we have created the role, we add a policy allowing EC2 access to the bucket containing our image:

PS C:\> $bucketName = "myvmimportimages"
PS C:\> $rolePolicyDocument = @"
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:ListBucket",
            "s3:GetBucketLocation"
         ],
         "Resource":[
            "arn:aws:s3:::$bucketName"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetObject"
         ],
         "Resource":[
            "arn:aws:s3:::$bucketName/*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "ec2:ModifySnapshotAttribute",
            "ec2:CopySnapshot",
            "ec2:RegisterImage",
            "ec2:Describe*"
         ],
         "Resource":"*"
      }
   ]
}
"@

PS C:\> Write-IAMRolePolicy -RoleName vmimport -PolicyName vmimport -PolicyDocument $rolePolicyDocument

That completes the prerequisites. If we want to use a different bucket (or additional buckets) in the future, we simply reconstruct the policy here-string shown above with the name(s) of the new or additional buckets and re-run the Write-IAMRolePolicy cmdlet.

Uploading the Image

The VM or disk image must be uploaded to S3. To do this, we use the Write-S3Object cmdlet. Assume we have a Windows Server 2012 R2 image consisting of a single disk that we want to import. This image is located on disk in the file C:\CustomWindows2012R2.vhd. We’re also using the same bucket declared in the prerequisites above, "myvmimportimages", which we captured in a variable:

PS C:\> Write-S3Object -BucketName $bucketName -File .\CustomWindows2012R2.vhd

Because we did not supply a -Key parameter to the cmdlet to identify the object in the bucket, the file name is used by default. If the VM image to be imported consists of multiple disk images, simply repeat the use of Write-S3Object to upload all the images.

We’re now ready to import the image.

Importing the Image

The cmdlet to import VM images, Import-EC2Image, accepts a number of parameters that allow you to describe the import for future reference and detail which object in S3 contains the image EC2 should operate on. You can also specify a custom role name (with the -RoleName parameter) granting EC2 access to the S3 object. Earlier in this post we showed how to set up the role and policy using the default name EC2 assumes if a custom role is not specified, so this parameter will not be used here.

First, we must construct one or more ImageDiskContainer instances. If we were importing a VM that consists of multiple disk images (and therefore multiple S3 objects), we would create multiple container instances and pass them as an array to the cmdlet. Our sample image for this post contains just a single image file:

PS C:\> $windowsContainer = New-Object Amazon.EC2.Model.ImageDiskContainer
PS C:\> $windowsContainer.Format="VHD"

Details of the S3 location of the image file are specified in a nested object:

PS C:\> $userBucket = New-Object Amazon.EC2.Model.UserBucket
PS C:\> $userBucket.S3Bucket = $bucketName
PS C:\> $userBucket.S3Key = "CustomWindows2012R2.vhd"
PS C:\> $windowsContainer.UserBucket = $userBucket

Having constructed the disk container object(s), we can set up the parameters to the import cmdlet. One of the parameters, ClientToken, allows us to pass an idempotency token – this ensures that if a problem arises and we need to re-run the command, EC2 does not start a new import:

PS C:\> $params = @{
    "ClientToken"="CustomWindows2012R2_" + (Get-Date)
    "Description"="My custom Windows 2012R2 image import"
    "Platform"="Windows"
    "LicenseType"="AWS"
}

We’re now ready to run the import cmdlet:

PS C:\> Import-EC2Image -DiskContainer $windowsContainer @params

Architecture    : 
Description     : My custom Windows 2012R2 image import
Hypervisor      : 
ImageId         : 
ImportTaskId    : import-ami-abcdefgh
LicenseType     : AWS
Platform        : Windows
Progress        : 2
SnapshotDetails : {}
Status          : active
StatusMessage   : pending

We can check progress on an import (or set of imports) using the Get-EC2ImportImageTask cmdlet, which outputs the same information as above for each import task. Optionally, we can query a specific import by supplying a value to the ImportTaskId parameter. We can also supply a set of filters if we don’t want to slice-n-dice the output through the PowerShell pipeline.
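
For example, one way to wait for our image import to finish might be to poll the task’s Status in a loop (import-ami-abcdefgh is the task id from the sample output above):

PS C:\> while ((Get-EC2ImportImageTask -ImportTaskId import-ami-abcdefgh).Status -eq "active") { Start-Sleep -Seconds 30 }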

To abandon an import, we use the Stop-EC2ImportTask cmdlet. This cmdlet is used for both VM image and disk snapshot imports. It accepts the import task id of the import to be stopped.
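
For example, to abandon the image import started above:

PS C:\> Stop-EC2ImportTask -ImportTaskId import-ami-abcdefgh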

Importing a Disk Snapshot

Importing disk snapshots to be used as additional EBS volumes to attach to EC2 instances is very similar to importing a VM image except that we’re always importing a single image:

PS C:\> Write-S3Object -BucketName $bucketName -File .\DataDisk.vhd
PS C:\> $params = @{
    "ClientToken"="MySnapshotImport_" + (Get-Date)
    "Description"="My Data Disk Image"
    "DiskContainer_Description" = "Data disk import"
    "DiskContainer_Format" = "VHD"
    "DiskContainer_S3Bucket" = $bucketName
    "DiskContainer_S3Key" = "DataDisk.vhd"
}

PS C:\> Import-EC2Snapshot @params | fl

Description         : My Data Disk Image
ImportTaskId        : import-snap-abcdefg
SnapshotTaskDetail  : Amazon.EC2.Model.SnapshotTaskDetail

To check progress of a snapshot import, we use the Get-EC2ImportSnapshotTask cmdlet, which is very similar to Get-EC2ImportImageTask. As mentioned earlier, a snapshot import can be stopped using Stop-EC2ImportTask.

Deprecated: Original Import Cmdlets

The original import cmdlets (Import-EC2Instance, Import-EC2Volume, Get-EC2ConversionTask and Stop-EC2ConversionTask) have now been marked as deprecated. They will be removed in a future release.

More Information

We hope you find the new cmdlets easier to use! For more information about importing VM images and disk snapshots to Amazon EC2, see this post on the official AWS Blog. You can also access the EC2 documentation for the feature.

RegisterProfile

by Pavel Safronov

The .NET SDK team is aware that some customers are having issues using the Amazon.Util.ProfileManager.RegisterProfile method, so this blog post will attempt to explain what this method does, when it should be used, and, more importantly, why it should never be called inside your main application.

We discussed RegisterProfile in an earlier blog post about storing and loading AWS credentials. Take a look at this post for more information about profiles and how they can be used to simplify local credentials management.

Let’s start with what Amazon.Util.ProfileManager.RegisterProfile is and how it should be used. The RegisterProfile method creates a new profile or updates an existing profile with a given set of credentials. After this is done, the profile can be used in the SDK, PowerShell, or the Visual Studio Toolkit to make AWS calls with a set of credentials, without having to constantly include the credentials in your code.
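
For example, a one-time setup utility (never your main application) might call it like this; the profile name and keys below are placeholders:

// Run once from a setup/configuration tool, not at application startup.
// "profile-name" and both keys are placeholders.
Amazon.Util.ProfileManager.RegisterProfile(
    "profile-name", "ACCESS-KEY-PLACEHOLDER", "SECRET-KEY-PLACEHOLDER");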

When using the SDK, we can access our profile by specifying it in our app.config/web.config file:

<configuration>
   <appSettings>
      <add key="AWSProfileName" value="profile-name"/>
   </appSettings>
</configuration>

Or explicitly with the following code:

var credentials = new Amazon.Runtime.StoredProfileAWSCredentials("profile-name");

In PowerShell, the profile can be accessed like this:

Set-AWSCredentials -ProfileName development

Finally, when using the Visual Studio Toolkit, you simply choose the desired profile from the Profile drop-down menu.

In this sense, RegisterProfile is a utility method and should be called only once: when you want to configure or update your current environment. After a profile is configured, you should not be making calls to RegisterProfile.

You should not be calling this method in your main AWS application. After you’ve configured your environment with the credentials you want to use, calls to RegisterProfile will not have any effect and, as illustrated in a recent forum post, in some cases can actually cause your application to crash. (Unfortunately, if you are running your application under IIS, the SDK credential store will not work. The credentials are encrypted for the currently logged-on user, and the system account running IIS will not be able to decrypt them. In this case, you can use the shared credentials file with the AWSProfileLocation app setting, as shown below.)
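
A minimal sketch of that configuration, assuming a hypothetical shared credentials file at C:\aws_credentials (any path the IIS process account can read):

<configuration>
   <appSettings>
      <add key="AWSProfileName" value="profile-name"/>
      <!-- Hypothetical path; point this at your shared credentials file -->
      <add key="AWSProfileLocation" value="C:\aws_credentials"/>
   </appSettings>
</configuration>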

We hope this clears up the confusion about Amazon.Util.ProfileManager.RegisterProfile. Happy coding!

AWS SDK for .NET Office Hour

by Steve Roberts

The AWS SDKs and Tools team invites you to the first-ever online office hour hosted by the maintainers of the AWS SDK for .NET, the AWS Toolkit for Visual Studio, and the AWS Tools for Windows PowerShell. It will be held via Google Hangouts from 9:30 to 10:30 a.m. PDT (UTC-7:00) on Thursday, 6/18. If you don’t already have a Google account, you will need to create one to join the video chat.

This first office hour will be entirely driven by customer questions. We expect to focus on questions about the three developer tools we manage, but any questions related to Windows and .NET development on AWS are welcome. We’re excited to meet you and help you be successful in developing .NET applications on AWS!

The event details can be easily added to your calendar using this link. Alternatively, you can directly join the video call at the scheduled time via this link.

SDK Extensions Moved to Modularization

by Norm Johanson

We are currently finalizing the move of the AWS Tools for Windows PowerShell and the AWS Toolkit for Visual Studio to the new modularized SDK. In addition, we have released new versions of the ASP.NET session provider and the .NET System.Diagnostics trace listener. These two extensions have moved from the SDK GitHub repository into their own separate repositories for better discoverability and to make it easier to track progress.

Session Provider

The Amazon DynamoDB session state provider allows ASP.NET applications to store their sessions inside Amazon DynamoDB. This helps applications scale across multiple application servers while maintaining session state across the system. To get started, check out the NuGet package or view the source on GitHub.
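
As a rough sketch, registering the provider in web.config looks something like the following; treat the type string and the attribute names here as illustrative and check the project README for the exact settings:

<system.web>
  <sessionState mode="Custom" customProvider="DynamoDBSessionStoreProvider">
    <providers>
      <!-- Type string and attributes are illustrative; see the README -->
      <add name="DynamoDBSessionStoreProvider"
           type="Amazon.SessionProvider.DynamoDBSessionStateStore"
           Table="ASP.NET_SessionState"
           Region="us-west-2"/>
    </providers>
  </sessionState>
</system.web>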

Trace Listener

The AWS DynamoDB trace listener allows System.Diagnostics.Trace calls to be written to Amazon DynamoDB. It is really useful when running an application over several hosts to have all the log messages in one location where the data can be searched through. To get started, check out the NuGet package or view the source on GitHub.
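
As a sketch, the listener is wired up through standard System.Diagnostics configuration; the type string below is illustrative, so confirm it against the GitHub project:

<system.diagnostics>
  <trace autoflush="true">
    <listeners>
      <!-- Type string is illustrative; confirm against the project -->
      <add name="dynamoDbListener"
           type="Amazon.TraceListener.DynamoDBTraceListener, AWS.TraceListener"/>
    </listeners>
  </trace>
</system.diagnostics>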

Serving Private Content Through Amazon CloudFront Using Signed Cookies

by Milind Gokarn

Private content can be served through Amazon CloudFront in two ways: through signed URLs or signed cookies. For information about which approach to choose, see Choosing Between Signed URLs and Signed Cookies.

The AWS SDK for .NET includes an Amazon.CloudFront.AmazonCloudFrontUrlSigner utility class that can be used to generate signed URLs. Based on a customer request, we recently added the Amazon.CloudFront.AmazonCloudFrontCookieSigner utility class to make it easier to generate the cookies required to access private content through Amazon CloudFront.

To start serving private content through Amazon CloudFront:

  • Creating CloudFront Key Pairs for Your Trusted Signers. You can either create a new key pair using the AWS Management Console or, if you have your own RSA key pair, you can upload the public key to create a key pair. Each key pair has a key pair ID, which will be used to create the signed cookies.
  • The RSA key pair file (.pem file) must be available when creating signed cookies. If you created the key pair using the AWS Management Console, you can download the key pair file and store it locally.
  • Adding Trusted Signers to Your Distribution. You can do this through the AWS Management Console or programmatically, through the Amazon.CloudFront.IAmazonCloudFront.CreateDistribution or Amazon.CloudFront.IAmazonCloudFront.UpdateDistribution APIs.
Creating Signed Cookies for Canned Policies

Canned policies allow you to specify an expiration date only. Custom policies allow more complex restrictions. For a comparison between the two types of policies, see Choosing Between Canned and Custom Policies for Signed Cookies.

The following code snippet shows the use of the Amazon.CloudFront.AmazonCloudFrontCookieSigner.GetCookiesForCannedPolicy method to create signed cookies for canned policies.

// The key pair Id for the CloudFront key pair
var cloudFrontKeyPairId = "key_pair_id";

// The RSA key pair file (.pem file) that contains the private key    
var privateKeyFile = new FileInfo(@"rsa_file_path"); 

// Path to resource served through a CloudFront distribution
var resourceUri = "http://xyz.cloudfront.net/image1.jpeg";

var cookies = AmazonCloudFrontCookieSigner.GetCookiesForCannedPolicy(
    resourceUri,
    cloudFrontKeyPairId,
    privateKeyFile,
    DateTime.Today.AddYears(1)); // Date until which the signed cookies are valid

Creating Signed Cookies for Custom Policies

You can use custom policies to apply more complex restrictions to private content. In addition to an expiration date, custom policies allow you to set resource paths with wildcards, an activation time, and an IP address or address range.

The following code snippet shows how to generate signed cookies for custom policies.

// The key pair Id for the CloudFront key pair
var cloudFrontKeyPairId = "key_pair_id"; 

// The RSA key pair file (.pem file) that contains the private key    
var privateKeyFile = new FileInfo(@"rsa_file_path"); 

// Path to resource served through a CloudFront distribution
var resourceUri = "http://xyz.cloudfront.net/image1.jpeg";

var cookies = AmazonCloudFrontCookieSigner.GetCookiesForCustomPolicy(
    AmazonCloudFrontCookieSigner.Protocols.Http | 
    AmazonCloudFrontCookieSigner.Protocols.Https, // Allow either http or https

    "xyz.cloudfront.net",      // CloudFront distribution domain
    privateKeyFile,
    "content/*.jpeg",          // Allows use of wildcards
    cloudFrontKeyPairId, 
    DateTime.Today.AddDays(1), // Date until which the signed cookies are valid
    DateTime.MinValue,         // Date from which the signed cookies are valid;
                               // a value of DateTime.MinValue is ignored
    "192.0.2.0/24");           // Source IP or range of IP addresses;
                               // a value of string.Empty or null is ignored

Send Cookies to a User’s Browser

Typically, you would create signed cookies when a user visits your website and signs in (or meets some other criteria). At that point, the cookies are generated on the web server and included in the response. The user’s browser caches these cookies and includes them in subsequent requests for the private content hosted on Amazon CloudFront.

The following snippet sends the generated cookies back in the HTTP response to the browser in an ASP.NET web application.

using System.Web;
...
// Set signed cookies for canned policies
Response.Cookies.Add(new HttpCookie(cookies.Expires.Key, cookies.Expires.Value));
Response.Cookies.Add(new HttpCookie(cookies.Signature.Key, cookies.Signature.Value));
Response.Cookies.Add(new HttpCookie(cookies.KeyPairId.Key, cookies.KeyPairId.Value));

// Or set signed cookies for custom policies
Response.Cookies.Add(new HttpCookie(cookies.Policy.Key, cookies.Policy.Value));
Response.Cookies.Add(new HttpCookie(cookies.Signature.Key, cookies.Signature.Value));
Response.Cookies.Add(new HttpCookie(cookies.KeyPairId.Key, cookies.KeyPairId.Value));

In this blog post, we showed how to use the customer-suggested Amazon.CloudFront.AmazonCloudFrontCookieSigner utility class to generate signed cookies to access private content from Amazon CloudFront. If you have ideas for new utilities or high-level APIs to add to the SDK, please provide your feedback here.

Announcing Support for the PowerShell Gallery

by Steve Roberts

The AWS Tools for Windows PowerShell have until now been made available in a single MSI installer that also contains the AWS SDK for .NET and the AWS Toolkit for Visual Studio. MSIs have historically been the primary method of installing software on Windows. On the Linux and OS X platforms, package managers have become the primary mechanism for distributing and acquiring software, with package managers like apt-get, yum, npm, and pip providing simple experiences for installing software from large repositories of packages. The Windows ecosystem has several package managers: NuGet, targeted at developers, and Chocolatey, for general-purpose software.

We’re pleased to announce that you can now obtain the AWSPowerShell module from the new PowerShell Gallery (https://www.powershellgallery.com/packages/AWSPowerShell/). The PowerShell Gallery is a Microsoft repository for the new PowerShell package management system. This post explains how to get started using the Gallery to install and update the tools.

Gallery Requirements

As noted on the PowerShell Gallery homepage, to use the Gallery the Microsoft Windows Management Framework (WMF) v5 preview is required. Follow the link to obtain and install the preview if needed for your system.

Installing the AWSPowerShell Module

Once you have installed the WMF version 5 preview, you can run the Get-PSRepository cmdlet to see details of the Gallery setup:

PS C:\> Get-PSRepository

Name       OneGetProvider  InstallationPolicy  SourceLocation
----       --------------  ------------------  --------------
PSGallery  NuGet           Untrusted           https://...

You install the module by using the Install-Module cmdlet. To install the AWSPowerShell tools, run the command below (you may want to add the -Verbose switch to see additional progress information from the install cmdlet):

PS C:\> Install-Module -Name AWSPowerShell

If you already have the AWSPowerShell module installed from earlier use of our MSI installer, Install-Module will exit with a message similar to this:

PS C:\> Install-Module -Name AWSPowerShell
WARNING: Version '2.3.31.0' of module 'AWSPowerShell' is already installed at 'C:\Program Files (x86)\AWS Tools\PowerShell\AWSPowerShell'.
To delete version '2.3.31.0' and install version '2.3.35.0', run Install-Module, and add the -Force parameter.
PS C:\>

To force the install to occur, re-run the command as suggested in the message:

PS C:\> Install-Module -Name AWSPowerShell -Force -Verbose
VERBOSE: The -Repository parameter was not specified.  PowerShellGet will use all of the registered repositories.
VERBOSE: Getting the provider object for the PackageManagement Provider 'NuGet'.
VERBOSE: The specified Location is 'https://www.powershellgallery.com/api/v2/' and PackageManagementProvider is
'NuGet'.
VERBOSE: The specified module will be installed in 'C:\Program Files\WindowsPowerShell\Modules'.
VERBOSE: The specified Location is 'NuGet' and PackageManagementProvider is 'NuGet'.
VERBOSE: Downloading module 'AWSPowerShell' with version '2.3.35.0' from the repository
'https://www.powershellgallery.com/api/v2/'.
VERBOSE: NuGet: Installing 'AWSPowerShell 2.3.35.0'.
VERBOSE: NuGet: Successfully installed 'AWSPowerShell 2.3.35.0'.
VERBOSE: Module 'AWSPowerShell' was installed successfully.
PS C:\>

In this case, Install-Module will not actually uninstall the previous version. As explained later in this post, the default ordering of module search paths means that the newer version installed from the Gallery will take precedence. This is useful if you are running the tools on an EC2 instance (although remember that to use the Gallery, you need to install the WMF version 5 preview first). Once you close your current console and open a new one, the new version will be running, which you can verify by running Get-AWSPowerShellVersion.

Where is the Module Installed?

The default install location used by Install-Module is "C:\Program Files\WindowsPowerShell\Modules". It’s also possible to have Install-Module install the tools to your local profile folder ("C:\Users\userid\Documents\WindowsPowerShell\Modules") using the -Scope parameter:

PS C:\> Install-Module -Name AWSPowerShell -Scope CurrentUser

If the -Scope parameter is not specified, the default value is AllUsers.

Installing Updates

To install new versions of the module, you use the Update-Module cmdlet:

PS C:\> Update-Module -Name AWSPowerShell

Which Should I Choose – MSI or Gallery?

That depends! If you only need the AWS Tools for Windows PowerShell, you may want to consider uninstalling the version you currently have (which is located in "C:\Program Files (x86)\AWS Tools\PowerShell\AWSPowerShell") and moving over to the PowerShell Gallery instead (or use the -Force switch with Install-Module, as noted earlier). Just as we do with the MSI installer, we’ll keep the Gallery updated each time we ship a new version, so you won’t miss anything whichever approach you use.

If you need the AWS Toolkit for Visual Studio, or want to reference the AWS SDK for .NET assemblies from disk rather than via NuGet, then you may want to continue using the MSI installer. Note that in the installer, the tools are preselected for installation.

If you are running on an EC2 instance and you want to update the AWSPowerShell module, perhaps to take advantage of new features released between the periodic updating of the Amazon EC2 Windows images, then (provided you have installed the WMF version 5 preview) just run Install-Module with the -Force switch as shown earlier.

What Happens If I Use Both?

Install-Module will report that the requested module is already installed (whether it came from the Gallery or from our MSI). In this scenario, you’ll need to use the -Force switch to cause the Gallery version to be downloaded and installed. When you open new shell windows, which version wins depends on your system’s %PSModulePath% environment variable, but the default values mean that the Gallery versions take precedence, as follows.

The default value of %PSModulePath% causes PowerShell to first look in your user profile location for modules, so an AWSPowerShell module installed using Install-Module with the -Scope CurrentUser parameter value will be found first. If the module is not located there, PowerShell then checks the system location at "C:\Program Files\WindowsPowerShell\Modules", where an AWSPowerShell module installed with the default -Scope setting will be found. If neither of the locations used by Install-Module yields the AWSPowerShell module, any custom paths added to the environment variable are searched. If you used the MSI installer, your module path will have "C:\Program Files (x86)\AWS Tools\PowerShell" after the defaults, and the version installed by our MSI will then be found. As a quick check, you can inspect the search order directly, as shown below.
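
A one-liner to print the search order on your machine (entries are searched in order, and the first AWSPowerShell found wins):

PS C:\> $env:PSModulePath -split ';'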

Wrap

We’re very excited to offer access to the AWSPowerShell module from the PowerShell Gallery. Let us know in the comments if there are any other mechanisms you would like us to consider for distributing the tools.

Modularization Released to NuGet in Preview

by Norm Johanson

Today, we pushed our new modularized version of the AWS SDK for .NET to NuGet in preview. This means there are separate NuGet packages for each AWS service. For example, if your application uses Amazon S3 and Amazon DynamoDB, then instead of including the existing AWSSDK package that includes all the AWS services, you can add the AWSSDK.S3 and AWSSDK.DynamoDBv2 packages. This allows your application to include much smaller assemblies, and you’ll need to update these packages only when the services you use are updated.
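
For example, a minimal sketch, assuming the AWSSDK.S3 and AWSSDK.DynamoDBv2 packages (each of which pulls in AWSSDK.Core) have been added:

using Amazon;                 // RegionEndpoint, from AWSSDK.Core
using Amazon.S3;              // from the AWSSDK.S3 package
using Amazon.DynamoDBv2;      // from the AWSSDK.DynamoDBv2 package

// Each client lives in its own service assembly; only the
// common runtime (AWSSDK.Core) is shared between them.
var s3Client = new AmazonS3Client(RegionEndpoint.USWest2);
var dynamoDbClient = new AmazonDynamoDBClient(RegionEndpoint.USWest2);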

Why Preview?

The modularized version of the SDK is production ready, so we encourage developers to start using it now. We marked the modularized SDK as a preview while we tweak our release process and documentation. When adding preview packages, be sure to select Include Prerelease.

Check our previous blog post to learn about the differences. You can also follow our development on the modularization branch in GitHub.

NuGet Packages

Service Name                               NuGet Package
------------                               -------------
Auto Scaling                               AWSSDK.AutoScaling
AWS Support API                            AWSSDK.AWSSupport
AWS CloudFormation                         AWSSDK.CloudFormation
Amazon CloudFront                          AWSSDK.CloudFront
AWS CloudHSM                               AWSSDK.CloudHSM
Amazon CloudSearch                         AWSSDK.CloudSearch
Amazon CloudSearch Domain                  AWSSDK.CloudSearchDomain
AWS CloudTrail                             AWSSDK.CloudTrail
Amazon CloudWatch                          AWSSDK.CloudWatch
Amazon CloudWatch Logs                     AWSSDK.CloudWatchLogs
AWS CodeDeploy                             AWSSDK.CodeDeploy
Amazon Cognito Identity                    AWSSDK.CognitoIdentity
Amazon Cognito Sync                        AWSSDK.CognitoSync
AWS Config                                 AWSSDK.ConfigService
AWS Data Pipeline                          AWSSDK.DataPipeline
AWS Direct Connect                         AWSSDK.DirectConnect
Amazon DynamoDB (v2)                       AWSSDK.DynamoDBv2
Amazon Elastic Compute Cloud (EC2)         AWSSDK.EC2
Amazon EC2 Container Service               AWSSDK.ECS
Amazon ElastiCache                         AWSSDK.ElastiCache
AWS Elastic Beanstalk                      AWSSDK.ElasticBeanstalk
Elastic Load Balancing                     AWSSDK.ElasticLoadBalancing
Amazon Elastic MapReduce                   AWSSDK.ElasticMapReduce
Amazon Elastic Transcoder                  AWSSDK.ElasticTranscoder
Amazon Glacier                             AWSSDK.Glacier
AWS Identity and Access Management (IAM)   AWSSDK.IdentityManagement
AWS Import/Export                          AWSSDK.ImportExport
AWS Key Management Service                 AWSSDK.KeyManagementService
Amazon Kinesis                             AWSSDK.Kinesis
AWS Lambda                                 AWSSDK.Lambda
Amazon Machine Learning                    AWSSDK.MachineLearning
AWS OpsWorks                               AWSSDK.OpsWorks
Amazon Relational Database Service (RDS)   AWSSDK.RDS
Amazon Redshift                            AWSSDK.Redshift
Amazon Route 53                            AWSSDK.Route53
Amazon Route 53 Domains                    AWSSDK.Route53Domains
Amazon Simple Storage Service (S3)         AWSSDK.S3
AWS Security Token Service (STS)           AWSSDK.SecurityToken
Amazon SimpleDB                            AWSSDK.SimpleDB
Amazon Simple Email Service (SES)          AWSSDK.SimpleEmail
Amazon Simple Notification Service (SNS)   AWSSDK.SimpleNotificationService
Amazon EC2 Simple Systems Manager (SSM)    AWSSDK.SimpleSystemsManagement
Amazon Simple Workflow Service             AWSSDK.SimpleWorkflow
Amazon Simple Queue Service (SQS)          AWSSDK.SQS
AWS Storage Gateway                        AWSSDK.StorageGateway
Amazon WorkSpaces                          AWSSDK.WorkSpaces
Update on Modularization of the SDK

by Norm Johanson

As mentioned earlier, we are currently working on modularizing the AWS SDK for .NET into individual packages for each service. We have pushed the changes to the modularization branch in GitHub. Building the solution file AWSSDK.sln produces a core assembly and individual service assemblies for each supported platform. Because this solution builds Windows Store and Windows Phone versions of the SDK, we recommend that you use Windows 8.1 as your build platform. We still have more testing, cleanup, and work to do on our build/release process before we release modularization for general availability.

Breaking changes

We have tried to keep the list of breaking changes to a minimum to make it as easy as possible to adopt the new modularized SDK. Here are the breaking changes in the SDK.

Amazon.AWSClientFactory Removed

This class was removed because, in the modularized SDK, it didn’t make sense to have a class with a dependency on every service. Instead, the preferred way to construct a service client is to use its constructor, as shown below.
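
For example, a minimal before-and-after sketch for Amazon S3 (the version 2 factory call is shown in the comment for comparison):

// Version 2 (removed):
// var s3Client = Amazon.AWSClientFactory.CreateAmazonS3Client(RegionEndpoint.USWest2);

// Version 3: construct the client directly from the service package.
var s3Client = new Amazon.S3.AmazonS3Client(Amazon.RegionEndpoint.USWest2);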

Amazon.Runtime.AssumeRoleAWSCredentials Removed

This class was removed because it was in a core namespace but had a dependency on the AWS Security Token Service. It has been obsolete in the SDK for quite some time and is removed in the new structure. Use Amazon.SecurityToken.AssumeRoleAWSCredentials instead, as sketched below.
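
A sketch of the replacement; the three-argument constructor shown here (source credentials, role ARN, session name) is an assumption to verify against the SDK docs, and the ARN and names are placeholders:

using Amazon.Runtime;
using Amazon.SecurityToken;

// Placeholder profile name, role ARN, and session name.
AWSCredentials source = new StoredProfileAWSCredentials("profile-name");
var roleCredentials = new AssumeRoleAWSCredentials(
    source, "arn:aws:iam::123456789012:role/my-role", "my-session");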

SetACL Removed from S3Link

S3Link is part of the Amazon DynamoDB package and is used for storing objects in Amazon S3 that are referenced from a DynamoDB item. This is a useful feature, but we didn’t want to create a compile-time dependency on the S3 package from DynamoDB. Consequently, we needed to simplify the S3 methods exposed from S3Link, so we replaced SetACL with MakeS3ObjectPublic. For more control over the ACL on the object, you’ll need to use the S3 package directly.

Removal of Obsolete Result Classes

For almost all services in the SDK, operations return a response object that contains metadata for the operation, such as the request ID, and a result object. We found that having separate response and result classes was redundant and mostly just caused extra typing for developers. About a year and a half ago, when version 2 of the SDK was released, we put all the information that was on the result class onto the response class. We also marked the result classes obsolete to discourage their use. In the new modularized SDK currently in development, we removed these obsolete result classes, which helps us reduce the size of the SDK. The sketch below shows the pattern that remains.
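
For example, a sketch using Amazon S3, assuming an AmazonS3Client named s3Client:

// Metadata and result data both live on the response object;
// the obsolete nested Result classes are gone.
var response = s3Client.ListBuckets();
Console.WriteLine(response.ResponseMetadata.RequestId);   // operation metadata
foreach (var bucket in response.Buckets)                  // result data
    Console.WriteLine(bucket.BucketName);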

AWS Config Section Changes

It is possible to do advanced configuration of the SDK through the app.config or web.config file. This is done through an aws config section like the following that references the SDK assembly name.

<configuration>
  <configSections>
    <section name="aws" type="Amazon.AWSSection, AWSSDK"/>
  </configSections>
  <aws region="us-west-2">
    <logging logTo="Log4Net"/>  
  </aws>
</configuration>

In the modularized SDK, there is no longer an assembly called AWSSDK. Instead, we need to reference the new core assembly like this.

<configuration>
  <configSections>
    <section name="aws" type="Amazon.AWSSection, AWSSDK.Core"/>
  </configSections>
  <aws region="us-west-2">
    <logging logTo="Log4Net"/>  
  </aws>
</configuration>

You can also manipulate the config settings through an Amazon.AWSConfigs object. In the modularized SDK, we moved config settings for DynamoDB from the Amazon.AWSConfigs object to Amazon.AWSConfigsDynamoDB.
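
For example, a sketch of the DynamoDB settings move; the version 2 property in the comment is shown for comparison, and both names should be verified against your SDK version:

// Before (v2): AWSConfigs.DynamoDBContextTableNamePrefix = "Test-";
// After (modularized SDK): DynamoDB settings hang off AWSConfigsDynamoDB.
Amazon.AWSConfigsDynamoDB.Context.TableNamePrefix = "Test-";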

What’s next

We are making good progress switching our process and development over to the new modularized approach. We still have a bit to go, but in the meantime, we would love to hear any feedback on our upcoming changes. Until we’ve completed our switchover, you can use the current version of the SDK to make all of these changes except the configuration change. This means you can make those updates now to ready your code for the modularized SDK.