AWS Developer Blog

Multiple Application Support for .NET and Elastic Beanstalk

by Norm Johanson | in .NET

In the previous post we talked about the new deployment manifest you can use to deploy applications to AWS Elastic Beanstalk. You can now use the deployment manifest to deploy multiple applications to the same Elastic Beanstalk environment.

The deployment manifest supports ASP.NET Core web applications and msdeploy archives for traditional ASP.NET applications. Imagine a scenario in which you’ve written a new amazing application using ASP.NET Core for the front end and a Web API project for an extensions API. You also have an admin app that you wrote using traditional ASP.NET.

The toolkit’s deployment wizard focuses on deploying a single project. To take advantage of the multiple application deployment, you have to construct the application bundle by hand. To start, you need to write the manifest. For this example, write the manifest at the root of your solution.

The deployment section in the manifest has two children: an array of ASP.NET Core web applications to deploy, and an array of msdeploy archives to deploy. For each application, you set the IIS path and the location of the application’s bits relative to the manifest.

{
  "manifestVersion": 1,
  "deployments": {
 
    "aspNetCoreWeb": [
      {
        "name": "frontend",
        "parameters": {
          "appBundle": "./frontend",
          "iisPath": "/frontend"
        }
      },
      {
        "name": "ext-api",
        "parameters": {
          "appBundle": "./ext-api",
          "iisPath": "/ext-api"
        }
      }
    ],
    "msDeploy": [
      {
        "name": "admin",
        "parameters": {
          "appBundle": "AmazingAdmin.zip",
          "iisPath": "/admin"
        }
      }
    ]
  }
}

With the manifest written, you’ll use Windows PowerShell to create the application bundle and update an existing Elastic Beanstalk environment to run it. To get the full version of the Windows PowerShell script used in this example, right-click here. The script is written with the assumption that it will be run from the folder that contains your Visual Studio solution.

The first thing you need to do in the script is set up a workspace folder to create the application bundle.

$publishFolder = "c:\temp\publish"

$publishWorkspace = [System.IO.Path]::Combine($publishFolder, "workspace")
$appBundle = [System.IO.Path]::Combine($publishFolder, "app-bundle.zip")

If (Test-Path $publishWorkspace){
	Remove-Item $publishWorkspace -Confirm:$false -Force
}
If (Test-Path $appBundle){
	Remove-Item $appBundle -Confirm:$false -Force
}

Once the workspace is set up, you can get the front end ready. To do that, use the dotnet CLI to publish the application.

Write-Host 'Publish the ASP.NET Core frontend'  
$publishFrontendFolder = [System.IO.Path]::Combine($publishWorkspace, "frontend")
dotnet publish .\src\AmazingFrontend\project.json -o $publishFrontendFolder -c Release -f netcoreapp1.0

Notice that the subfolder "frontend" was used for the output folder matching the folder set in the manifest. Now let’s do the same for the Web API project.

Write-Host 'Publish the ASP.NET Core extensibility API'
$publishExtAPIFolder = [System.IO.Path]::Combine($publishWorkspace, "ext-api") 

dotnet publish .\src\AmazingExtensibleAPI\project.json -o $publishExtAPIFolder -c Release -f netcoreapp1.0

The admin site is a traditional ASP.NET application, so you can’t use the dotnet CLI. For this project, use msbuild, passing in the package build target to create the msdeploy archive. By default, the package target creates the msdeploy archive under the obj\Release\Package folder, so you need to copy the archive to the publish workspace.

Write-Host 'Create msdeploy archive for admin site'

msbuild .\src\AmazingAdmin\AmazingAdmin.csproj /t:package /p:Configuration=Release

Copy-Item .\src\AmazingAdmin\obj\Release\Package\AmazingAdmin.zip $publishWorkspace

To tell the Elastic Beanstalk environment what to do with all these applications, you copy the manifest from your solution to the publish workspace and then zip up the folder.

Write-Host 'Copy deployment manifest'
Copy-Item .\aws-windows-deployment-manifest.json $publishWorkspace

Write-Host 'Zipping up publish workspace to create app bundle'
Add-Type -assembly "system.io.compression.filesystem"
[io.compression.zipfile]::CreateFromDirectory( $publishWorkspace, $appBundle)

Now that you have the application bundle, you can go to the web console and upload your archive to an Elastic Beanstalk environment. Or you can keep using Windows PowerShell and use the AWS PowerShell cmdlets to update the Elastic Beanstalk environment to the application bundle. Be sure you’ve set the current profile and region to the profile and region that has your Elastic Beanstalk environment by using the Set-AWSCredentials and Set-DefaultAWSRegion cmdlets.

Write-Host 'Write application bundle to S3'
# Determine S3 bucket to store application bundle
$s3Bucket = New-EBStorageLocation
Write-S3Object -BucketName $s3Bucket -File $appBundle


$applicationName = "ASPNETCoreOnAWS"
$environmentName = "ASPNETCoreOnAWS-dev"
$versionLabel = [System.DateTime]::Now.Ticks.ToString()

Write-Host 'Update Beanstalk environment for new application bundle'

New-EBApplicationVersion -ApplicationName $applicationName -VersionLabel $versionLabel -SourceBundle_S3Bucket $s3Bucket -SourceBundle_S3Key app-bundle.zip

Update-EBEnvironment -ApplicationName $applicationName -EnvironmentName $environmentName -VersionLabel $versionLabel

Now check the update status in either the Elastic Beanstalk environment status page or in the web console. Once complete, you can navigate to each of the applications you deployed at the IIS path set in the deployment manifest.

We hope you’re excited about the features we added to AWS Elastic Beanstalk for Windows and AWS Toolkit for Visual Studio. Visit our forums and let us know what you think of the new tooling and what else you would like to see us add.

Customizing ASP.NET Core Deployments

by Norm Johanson | in .NET

In our previous post we announced support for deploying ASP.NET Core applications with AWS Elastic Beanstalk and the AWS Toolkit for Visual Studio. Today, we’ll talk about how deployment works and how you can customize it.

After you go through the deployment wizard in the AWS Toolkit for Visual Studio, the toolkit bundles the application and sends it to Elastic Beanstalk. When the toolkit creates the bundle, the first step is to use the new dotnet CLI and the publish command to prepare the application for publishing. The settings in the wizard pass the framework and configuration to the publish command. So if you selected Release for configuration and netcoreapp1.0 for the framework, the toolkit will execute the following command.

dotnet publish --configuration Release --framework netcoreapp1.0

When the publish command finishes, the toolkit writes the new deployment manifest into the publishing folder. The deployment manifest is a JSON file named aws-windows-deployment-manifest.json, which is read by the new tooling added to the 1.2 version of the Elastic Beanstalk Windows container to figure out how to deploy the application. For example, for an ASP.NET Core application that you want to deploy at the root of IIS, the toolkit generates a manifest file that looks like this.

{
  "manifestVersion": 1,
  "deployments": {
 
    "aspNetCoreWeb": [
      {
        "name": "app",
        "parameters": {
          "appBundle": ".",
          "iisPath": "/",
          "iisWebSite": "Default Web Site"
        }
      }
    ]
  }
}

The appBundle property indicates where the application bits are in relation to the manifest file. This property can point to either a directory or a ZIP archive. The iisPath and iisWebSite properties indicate where in IIS to host the application.

Declaring the Manifest

The toolkit only writes the manifest file if it doesn’t already exist in the publishing folder. If the file does exist, the toolkit updates the appBundle, iisPath, and iisWebSite properties in the first application listed in the aspNetCoreWeb section of the manifest. This allows you to add the aws-windows-deployment-manifest.json to your project and customize the manifest. To do this for an ASP.NET Core Web application in Visual Studio, add a new JSON file to the root of the project and name it aws-windows-deployment-manifest.json.

The manifest must be named aws-windows-deployment-manifest.json and it must be at the root of the project. The Elastic Beanstalk container looks for the manifest at the root and, if it finds it, will invoke the new deployment tooling. If the file doesn’t exist, the Elastic Beanstalk container falls back to the older deployment tooling, which assumes the archive is an msdeploy archive.

To ensure the dotnet CLI publish command includes the manifest, you must update the project.json file to include the manifest file in the include section that’s under publishOptions.
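
For example, a publishOptions section in project.json that includes the manifest might look like this (the other entries shown are typical ASP.NET Core template defaults and may differ in your project):

"publishOptions": {
  "include": [
    "wwwroot",
    "web.config",
    "aws-windows-deployment-manifest.json"
  ]
}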

Customizing the Manifest File for Application Deployment

Now that you’ve declared the manifest so that it’s included in the app bundle, you can go beyond what the wizard supports to customize your application’s deployment. AWS has defined a JSON schema for aws-windows-deployment-manifest.json, and when you install the AWS Toolkit for Visual Studio, the setup registers the deployment manifest schema with Visual Studio.

If you open the manifest in Visual Studio and look at the Schema box in the JSON editor, you’ll see the schema is set to aws-windows-deployment-manifest-schema.json. When the schema is selected, Visual Studio provides IntelliSense while you’re editing the manifest.

If you don’t use Visual Studio, the deployment manifest schema can also be accessed online.

One customization you can do is to configure the IIS application pool that runs the application. The following example shows how you can define an IIS application pool that recycles the process every hour and assigns it to the application.
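
For illustration, a manifest that defines such an application pool and assigns it to the application might look like the following sketch. Treat the property names here (iisConfig, appPools, recycling, appPool) as approximations; confirm the exact names against the published schema.

{
  "manifestVersion": 1,
  "iisConfig": {
    "appPools": [
      {
        "name": "MyAppPool",
        "recycling": {
          "regularTimeInterval": 60
        }
      }
    ]
  },
  "deployments": {
    "aspNetCoreWeb": [
      {
        "name": "app",
        "parameters": {
          "appPool": "MyAppPool"
        }
      }
    ]
  }
}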

Additionally, the manifest can declare Windows PowerShell scripts to run before and after the install, restart, and uninstall actions. For example, the following manifest runs the Windows PowerShell script PostInstallSetup.ps1 to do more setup work after the ASP.NET Core application is deployed to IIS. Remember, just like the aws-windows-deployment-manifest.json file, be sure to add your scripts to the include section under publishOptions in the project.json file. If you don’t, the scripts won’t be included in the dotnet CLI publish command.

{
  "manifestVersion": 1,
  "deployments": {
    "aspNetCoreWeb": [
      {
        "name": "app",
        "scripts": {
          "postInstall": {
            "file": "SetupScripts/PostInstallSetup.ps1"
          }  
        }  
      }   
    ]
  }
}

What about ebextensions?

The Elastic Beanstalk ebextensions configuration files are still supported like all the other Elastic Beanstalk containers. To include them in an ASP.NET Core application, add the .ebextensions directory to the include section under publishOptions in the project.json file. For more information about ebextensions, check out the Elastic Beanstalk Developer Guide.

We hope this post gives you a better understanding of how deploying ASP.NET Core applications to Elastic Beanstalk works. In our next post, we’ll show you how to use the new deployment manifest file to deploy multiple applications to the same Elastic Beanstalk environment.

ASP.NET Core Support for Elastic Beanstalk

by Norm Johanson | in .NET

Today, we release support for deploying ASP.NET Core applications to AWS by using AWS Elastic Beanstalk (Elastic Beanstalk) and the AWS Toolkit for Visual Studio. This Elastic Beanstalk release expands the support the service already offers for deploying applications, including traditional ASP.NET applications, in a variety of languages to AWS.

Let’s walk through the deployment experience. The AWS Toolkit for Visual Studio is the easiest way to get started deploying ASP.NET Core applications to Elastic Beanstalk. If you have used the toolkit before to deploy traditional ASP.NET applications, you’ll find the experience for ASP.NET Core to be very similar.

If you’re new to the toolkit, after you install it, the first thing you need to do is register your AWS credentials with it.

In Visual Studio, from the Views menu, choose AWS Explorer.

Click the Add button to add your AWS credentials.

To deploy an ASP.NET Core web application, right-click the project in the Solution Explorer, and choose Publish to AWS.

On the Publish to AWS Elastic Beanstalk page, you’ll choose to create an Elastic Beanstalk application. This is a logical representation of your application that will contain a collection of application versions and environments. The environments contain the actual AWS resources that run an application version. Every time you deploy, a new application version is created and the environment is pointed to that version.

Next, set names for the application and its first environment. Each environment will have a unique CNAME associated with it that you can use to access your application when the deployment is complete.

On the AWS Options page, you configure the type of AWS resources to use for your application. For this example, we can leave the default values except for the Key pair section. Key pairs allow you to retrieve the Windows Administrator password so you can log into the machine. If you haven’t already created a key pair, you can select Create new key pair.

On the Permissions page, you assign AWS credentials to the Amazon EC2 instances running your application. This is important if your application is using the AWS SDK for .NET to access other AWS services. If you’re not using any other services from your application, you can leave this page at its default.

The Application Options page is the one that differs from deploying traditional ASP.NET applications. Here, you specify the build configuration and framework used to package the application, as well as the IIS resource path to configure for the application.

Once that’s finished, click Next to review the settings. Then, click Deploy to begin the deployment process. After the application is packaged and uploaded to AWS, you can check the status of the Elastic Beanstalk environment by opening the environment status view from AWS Explorer.

Events are displayed in Status as the environment comes online. When everything is complete, the environment status changes to Environment is healthy. You can click the URL to view the site. Also from this page, you can pull the logs from the environment or remote desktop into the EC2 instances that are part of your Elastic Beanstalk environment.

The first deployment will take a little more time as it creates new AWS resources. As you iterate over your application, you can quickly redeploy by going back through the wizard or selecting Republish when you right-click the project.

Republish will package your application using the settings from the last time through the deployment wizard, and then upload the application bundle to the existing Elastic Beanstalk environment.

We hope you’re excited to try out our expanded ASP.NET Core support in AWS. In our next post, we’ll dive deeper into how the deployment works and how you can customize it.

Introducing AWS Tools for PowerShell Core Edition

by Steve Roberts | in .NET

Today Microsoft announced PowerShell support on Windows, Linux and OS X platforms (read the blog announcement here). We’re pleased to also announce support for PowerShell on these new platforms with a new module, the AWS Tools for PowerShell Core Edition or "AWSPowerShell.NetCore" to give it its module name. Just like PowerShell itself, this new module can be used on Windows, Linux, and OS X platforms.

The AWSPowerShell.NetCore module is built on top of the .NET Core version of the AWS SDK for .NET, which is in beta while we finish up testing and port a few more features from the .NET Framework version of the AWS SDK for .NET. Note that updates to the new module for new service features may lag a little behind the sister AWS Tools for Windows PowerShell ("AWSPowerShell") module while the .NET Core version of the AWS SDK for .NET is in beta.

We hope to publish the new module to the PowerShell Gallery in a few days, after we finish final testing against the PowerShell libraries that Microsoft released today. We will update this blog post, together with a link to the new module on the gallery, once we have done so.

Update August 23rd 2016

The first beta release of the new module is now available on the PowerShell gallery. The version number is currently 3.2.7.0 to match the underlying AWS SDK for .NET product version. The services and service APIs supported in this release correspond to the 3.1.94.0 release of the AWSPowerShell module.
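
For most users, installing from the gallery is a one-liner:

Install-Module -Name AWSPowerShell.NetCore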

Some users are reporting issues with the Install-Module cmdlet built into PowerShell Core with errors related to semantic versioning (see GitHub Issue 202). Using the NuGet provider appears to resolve the issue currently. To install using this provider run this command, setting an appropriate destination folder (on Linux for example try -Destination ~/.local/share/powershell/Modules):

Install-Package -Name AWSPowerShell.NetCore -Source https://www.powershellgallery.com/api/v2/ -ProviderName NuGet -ExcludeVersion -Destination destfolder 

What Happens to AWS Tools for Windows PowerShell?

Nothing! We will continue to update this module just as we have been doing as new service features and services are launched. The version number of this module will remain at 3.1.* for the time being (AWSPowerShell.NetCore is v3.2.* to match the .NET Core version of the AWS SDK for .NET, which is also versioned 3.2.* while in beta).

Once the .NET Core version of the AWS SDK for .NET exits beta status, so will the AWSPowerShell.NetCore module. At that point, both the AWSPowerShell and AWSPowerShell.NetCore modules will version bump to 3.3.*. Again, this corresponds to the underlying SDK product version. From then on, both modules will update in lockstep with the underlying SDK, just as we have been doing with the AWSPowerShell module, with updates being pushed to the gallery.

One small difference is that we currently plan to distribute the AWSPowerShell.NetCore module only through the gallery. The AWSPowerShell module will continue to be distributed to the gallery and through a downloadable Windows Installer that also contains the SDK and AWS Toolkit for Visual Studio. If you would like the new module to also be included in this installer, please let us know in the comments.

Module Compatibility

There is a high degree of compatibility between the two modules.

All cmdlets that map to service APIs are present and function in the same way between the two modules, with the exception that cmdlets for the deprecated Amazon Elastic Compute Cloud (Amazon EC2) import APIs, Import-EC2Instance and Import-EC2Volume, are not present in AWSPowerShell.NetCore. If you currently use these cmdlets in the AWSPowerShell module, we encourage you to investigate their replacements, Import-EC2Image and Import-EC2Snapshot, because the new Amazon EC2 import APIs are faster and more convenient (see blog post for more detail).

A small number of cmdlets are currently still being ported to the new module and have been removed from the first release to the gallery. We hope to add these back as soon as we can, but for now, the following are the cmdlets that are currently unavailable in the AWSPowerShell.NetCore module.

Proxy cmdlets:

  • Set-AWSProxy
  • Clear-AWSProxy

Logging cmdlets:

  • Add-AWSLoggingListener
  • Remove-AWSLoggingListener
  • Set-AWSResponseLogging
  • Enable-AWSMetricsLogging
  • Disable-AWSMetricsLogging

SAML federated credentials cmdlets:

  • Set-AWSSamlEndpoint
  • Set-AWSSamlRoleProfile

Note that credential profiles holding regular AWS Access and Secret keys can be used, and you can also obtain credentials from Instance Profiles when running the new module on Amazon EC2 instances, even on Linux instances.

Credential Handling

All cmdlets accept AWS Access and Secret keys or the names of credential profiles when they run, the same as you use them today with the current AWSPowerShell module. When running on Windows, both modules have access to the AWS SDK for .NET credential store file (the per-user AppData\Local\AWSToolkit\RegisteredAccounts.json file). This file stores your keys in encrypted form and cannot be used on a different machine. It is the first file the AWSPowerShell module inspects when looking for a credential profile, and it is also where the module stores new credential profiles. (The AWSPowerShell module does not currently support writing credentials to other files or locations.)

Both modules can also read profiles from the ini-format shared credentials file that is used by other AWS SDKs and the AWS CLI. On Windows, the default location for this file is C:\Users\<userid>\.aws\credentials. On non-Windows platforms, it is at ~/.aws/credentials. The -ProfilesLocation parameter can be used to point to a non-default file name or file location.

The SDK credential store holds your credentials in encrypted form by using Windows crypto APIs. These APIs are not available on other platforms, so the AWSPowerShell.NetCore module uses the ini-format shared credentials file exclusively and also supports writing new credential profiles to the shared credential file. This support will be extended to the AWSPowerShell module in a future release.

These example commands that use the Set-AWSCredentials cmdlet show the options for handling credential profiles on Windows with either the AWSPowerShell or AWSPowerShell.NetCore modules:

# Writes a new (or updates existing) profile with name "myProfileName"
# in the encrypted SDK store file
Set-AWSCredentials -AccessKey akey -SecretKey skey -StoreAs myProfileName

# Checks the encrypted SDK credential store for the profile and then
# falls back to the shared credentials file in the default location
Set-AWSCredentials -ProfileName myProfileName

# Bypasses the encrypted SDK credential store and attempts to load the
# profile from the ini-format credentials file "mycredentials" in the
# folder C:\MyCustomPath
Set-AWSCredentials -ProfileName myProfileName -ProfilesLocation C:\MyCustomPath\mycredentials

These examples show the behavior of the AWSPowerShell.NetCore module on Linux or OS X:

# Writes a new (or updates existing) profile with name "myProfileName"
# in the default shared credentials file ~/.aws/credentials
Set-AWSCredentials -AccessKey akey -SecretKey skey -StoreAs myProfileName

# Writes a new (or updates existing) profile with name "myProfileName"
# into an ini-format credentials file "~/mycustompath/mycredentials"
Set-AWSCredentials -AccessKey akey -SecretKey skey -StoreAs myProfileName -ProfilesLocation ~/mycustompath/mycredentials

# Reads the default shared credential file looking for the profile "myProfileName"
Set-AWSCredentials -ProfileName myProfileName

# Reads the specified credential file looking for the profile "myProfileName"
Set-AWSCredentials -ProfileName myProfileName -ProfilesLocation ~/mycustompath/mycredentials

Watch this space!

We’ll continue to update this post as new information becomes available and we finalize the publishing of the new module to the PowerShell Gallery. We hope you’re as excited about the ability to use PowerShell on platforms other than Windows as we are!

Argument Completion Support in AWS Tools for Windows PowerShell

by Steve Roberts | in .NET

Version 3.1.93.0 of the AWS Tools for Windows PowerShell now includes support for tab completion of parameters that map to enumeration types in service APIs. Let’s look at how these types are implemented in the underlying AWS SDK for .NET and then see how this new support helps you at the Windows PowerShell command line or in script editors (like the PowerShell ISE) that support parameter IntelliSense.

Enumerations in the AWS SDK for .NET

You might expect the SDK to implement enumeration types used in service APIs as enum types but this isn’t the case. The SDK contains a ConstantClass base class from which it derives classes for service-specific enumeration types. These derived classes implement the permitted values for a service enumeration as a set of read-only static strings. For example, here’s a snippet of the InstanceType enumeration for Amazon Elastic Compute Cloud (Amazon EC2) instances (comments removed for brevity):

public class InstanceType : ConstantClass
{
    public static readonly InstanceType C1Medium = new InstanceType("c1.medium");
    public static readonly InstanceType C1Xlarge = new InstanceType("c1.xlarge");
    public static readonly InstanceType C32xlarge = new InstanceType("c3.2xlarge");
	...

    public InstanceType(string value)
           : base(value)
    {
    }
	...
}

In a typical SDK application, you would use the defined types (from Amazon EC2’s RunInstances API), like this:

var request = new RunInstancesRequest
{
    InstanceType = InstanceType.C1Xlarge,
	...
};
var response = EC2Client.RunInstances(request);
...

In this way, the SDK’s enumerations are not very different from regular enum types but offer one very powerful capability over regular enums: When services update their enumeration values (for example when EC2 adds a new instance type) you do not need to update the version of the SDK your application is built against to use the new value! The new value won’t appear as a member of the enumeration class until you update your SDK but you can simply write code to use the value with whatever version you have. It just works:

var request = new RunInstancesRequest
{
    InstanceType = "new-instance-type-code"
	...
};
var response = EC2Client.RunInstances(request);
...

This ability to adopt new values also applies to the response data from the service. The SDK will simply unmarshal the response data and accept the new value. This is unlike what would happen with the use of real enum types that would throw an error. You are therefore insulated on both sides from services adding new enumeration values when you want or need to remain at a particular SDK version.

Using Service Enumerations from PowerShell

Let’s say we are working at a console and want to use New-EC2Instance (which maps to the RunInstances API):

PS C:\> New-EC2Instance -InstanceType ???

As we noted, the underlying SDK does not use regular .NET enum types for the allowed values so there’s no data for the shell to run against in order to offer a suggestion list. Obviously this is a problem when you don’t know the permitted values but it’s also an issue when you know the value but not the casing. Windows PowerShell may be case-insensitive but some services require the casing shown in their enumerations for their API call to succeed.

Why Not Use ValidateSet?

One way to tackle this problem would be to use PowerShell’s ValidateSet attribution on parameters that map to service enumerations, but this has a shortcoming: validation! Using the example of the -InstanceType parameter again, should EC2 add a new type you would need to update your AWSPowerShell module in order to use the new value. Otherwise, the shell would reject your use of values not included in the ValidateSet attribution at the time we shipped the module. The ability to make use of new enumeration values without being forced to recompile the application with an updated SDK is very useful. It’s certainly a capability we wanted to extend to our Windows PowerShell module users.
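
As a purely hypothetical illustration (this is not code from the module), a parameter constrained with ValidateSet looks like the following; the shell rejects any value outside the hard-coded list:

function New-ExampleInstance {
    param(
        # Only the values baked into the attribute at release time are accepted.
        [ValidateSet("t2.micro", "t2.small", "m4.large")]
        [string]$InstanceType
    )
    "Launching a $InstanceType instance..."
}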

What we want is behavior similar to these screenshots of Invoke-WebRequest but without locking users into requiring updates for new values. In the ISE, we get a pop-up menu of completions:

At the console when we press Ctrl+Space we get a list of completions to select from:

We can also use the Tab key to iterate through the completions one by one or we can enter a partial match and the resulting completions are filtered accordingly.

The Solution: Argument Completers

Support for custom argument completers was added in Windows PowerShell version 3. Custom argument completers allow cmdlet authors and users to register a script block to be called by the shell when a parameter is specified. This script block is responsible for returning a set of valid completions given the data at hand. The set of completions (if any) is displayed to the user in the same way as if the data were specified using ValidateSet attribution or through a regular .NET enum type.

Third-party modules like TabExpansionPlusPlus (formerly TabExpansion++) also contributed to this mechanism to give authors and users a convenient way to register the completers. Beginning in Windows PowerShell version 5, a new native cmdlet can perform the registration.
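
As a minimal sketch (not the module’s actual registration code), registering a completer for a single cmdlet parameter with that native cmdlet looks like this:

# Hypothetical completer that suggests a few instance types for New-EC2Instance.
$instanceTypeCompleter = {
    param($commandName, $parameterName, $wordToComplete, $commandAst, $fakeBoundParameters)
    "t2.micro", "t2.small", "m4.large" |
        Where-Object { $_ -like "$wordToComplete*" } |
        ForEach-Object { New-Object System.Management.Automation.CompletionResult $_, $_, 'ParameterValue', $_ }
}

Register-ArgumentCompleter -CommandName New-EC2Instance -ParameterName InstanceType -ScriptBlock $instanceTypeCompleter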

In version 3.1.93.0 of the AWSPowerShell module we have added a nested script module that implements argument completers across the supported AWS services. The data used by these completers to offer suggestion lists for parameters comes from the SDK enumeration classes at the time we build the module. The SDK’s data is created based on the service models when we build the SDK. The permitted values are therefore correctly cased for those services that are case-sensitive; no more guessing how a value should be expressed when at the command line.

Here’s an example of the InstanceType completer (shortened) for EC2:

$EC2_Completers = {
    param($commandName, $parameterName, $wordToComplete, $commandAst, $fakeBoundParameter)
    
    # to allow for same-name parameters of different ConstantClass-derived
    # types, check on command name concatenated with parameter name.
    switch ($("$commandName/$parameterName"))
	{	...
        # Amazon.EC2.InstanceType
        {
            ($_ -eq "Get-EC2ReservedInstancesOffering/InstanceType") -Or
            ($_ -eq "New-EC2Instance/InstanceType") -Or
            ($_ -eq "Request-EC2SpotInstance/LaunchSpecification_InstanceType")
        }
        {
            $v = "c1.medium","c1.xlarge",...,"t2.nano","t2.small",...
            break
        }
	...
	}
	
    # the standard code pattern for completers is to pipe through sort-object
    # after filtering against $wordToComplete, but our members are already sorted.
    $v |
        Where-Object { $_ -like "$wordToComplete*" } |
        ForEach-Object { New-Object System.Management.Automation.CompletionResult $_, $_, 'ParameterValue', $_ }
}        

When the AWSPowerShell module is loaded, the nested module is automatically imported and executed, registering all of the completer script blocks it contains. Completion support works with Windows PowerShell versions 3 and later. For Windows PowerShell version 5 or later, the module uses the native Register-ArgumentCompleter cmdlet. For earlier versions it determines if this cmdlet is available in your installed modules (this will be the case if you have TabExpansionPlusPlus installed). If the cmdlet cannot be found the shell’s completer table is updated directly (you’ll find several blog posts on how this is done if you search for Windows PowerShell argument completion).

The net effect of this is that when you are constructing a command at the console or writing a script you get a suggestion list for the values accepted by these parameters. No more hunting through documentation to determine the allowed values and their casing! As we required, the ISE displays the list immediately after you enter a space after the parameter name and partial content will filter the list:

In a console the Tab key will cycle through the available options. Pressing Ctrl+Space displays a pop-up selection list that you can cursor around. In both cases you can filter the display by typing in the partial content:

A Note About Script Security

To accompany the new completion script module, we made one other significant change in the 3.1.93.0 release: we added an Authenticode signature to all script and module artifacts (in effect, all .psd1, .psm1, .ps1, and .ps1xml files contained in the module). A side benefit of this is that the module is now compatible with environments where the execution policy for Windows PowerShell scripts is set to "AllSigned". More information on execution policies can be found in this TechNet article.

Wrap

We hope you enjoy the new support for parameter value completion and the ability to now use the module in environments that require the execution policy to be ‘AllSigned’. Happy scripting!

General Availability Release of the aws-record Gem

by Alex Wood | in Ruby

Today, we’re pleased to announce the GA release of version 1.0.0 of the aws-record gem.

What Is aws-record?

In version 1 of the AWS SDK for Ruby, the AWS::Record class provided a data mapping abstraction over Amazon DynamoDB operations. Earlier this year, we released the aws-record developer preview as a separately packaged library to provide a similar data mapping abstraction for DynamoDB, built on top of the AWS SDK for Ruby version 2. After customer feedback and some more development work, we’re pleased to move the library out of developer preview to general availability.

How to Include the aws-record Gem in Your Project

The aws-record gem is available now from RubyGems:

 

gem install aws-record

 

You can also include it in your project’s Gemfile:

 

# Gemfile
gem 'aws-record', '~> 1.0'

 

This automatically includes a dependency on the aws-sdk-resources gem, major version 2. Be sure to include the aws-sdk or aws-sdk-resources gem in your Gemfile if you need to lock to a specific version, like so:

 

 # Gemfile
gem 'aws-record', '~> 1.0'
gem 'aws-sdk-resources', '~> 2.5'

 

Working with DynamoDB Tables Using the aws-record Gem

Defining an Aws::Record Model

The aws-record gem provides the Aws::Record module, which you can include in a class definition. This decorates your class with a variety of helper methods that can simplify interactions with Amazon DynamoDB. For example, the following model uses a variety of preset attribute definition helper methods and attribute options:

 

require 'aws-record'

class Forum
  include Aws::Record  

  string_attr     :forum_uuid, hash_key: true
  integer_attr    :post_id,    range_key: true
  string_attr     :author_username
  string_attr     :post_title
  string_attr     :post_body
  string_set_attr :tags,       default_value: Set.new 
  datetime_attr   :created_at, database_attribute_name: "PostCreatedAtTime"
  boolean_attr    :moderation, default_value: false
end

 

Using Validation Libraries with an Aws::Record Model

The aws-record gem does not come with a built-in validation process. Rather, it is designed to be a persistence layer, and to allow you to bring your own validation library. For example, the following model includes the popular ActiveModel::Validations module, and has defined a set of validations that will be run when we attempt to save an item:

 

require 'aws-record'
require 'active_model'

class Forum
  include Aws::Record
  include ActiveModel::Validations

  string_attr     :forum_uuid, hash_key: true
  integer_attr    :post_id,    range_key: true
  string_attr     :author_username
  string_attr     :post_title
  string_attr     :post_body
  string_set_attr :tags,       default_value: Set.new 
  datetime_attr   :created_at, database_attribute_name: "PostCreatedAtTime"
  boolean_attr    :moderation, default_value: false 


  validates_presence_of :forum_uuid, :post_id, :author_username
  validates_length_of :post_title, within: 4..30
  validates_length_of :post_body,  within: 2..5000
end

 

Creating a DynamoDB Table for a Model with Aws::Record::TableMigration

The aws-record gem provides a helper class for table operations, such as migrations. If we wanted to create a table for our Forum model in DynamoDB, we would run the following migration:

 

migration = Aws::Record::TableMigration.new(Forum)
migration.create!(
  provisioned_throughput: {
    read_capacity_units: 5,
    write_capacity_units: 2
  }
)
migration.wait_until_available

 

You can write these migrations in your Rakefile or as standalone helper scripts for your application. Because you don’t need to update your table definition for additions of non-key attributes, you may find that you’re not running migrations as often for your Aws::Record models.
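
As a sketch, wrapping the migration in a Rake task (assuming the Forum model above is defined in forum.rb next to the Rakefile) could look like this:

# Rakefile
require_relative 'forum'

namespace :dynamodb do
  desc 'Create the DynamoDB table for the Forum model'
  task :migrate do
    migration = Aws::Record::TableMigration.new(Forum)
    migration.create!(
      provisioned_throughput: {
        read_capacity_units: 5,
        write_capacity_units: 2
      }
    )
    migration.wait_until_available
  end
end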

Working with DynamoDB Items Using the aws-record Gem

Creating and Persisting a New Item

Using the example model above, once the table has been created in DynamoDB with Aws::Record::TableMigration (or if it already exists), it is simple to create and save a new item:

 

post = Forum.new(
  forum_uuid: FORUM_UUID,
  post_id: 1,
  author_username: "Author One",
  post_title: "Hello!",
  post_body: "Hello, world!"
)
post.created_at = Time.now
post.save # Performs a put_item call.

 

You can set attributes when you initialize a new item and with setter methods that are defined for you automatically.

Finding and Modifying an Item

A class-level method, #find, is provided to look up items from DynamoDB using your model’s key attributes. After you set a few new attribute values, calling #save makes an update call to DynamoDB that reflects only the changes you’ve made. This matters if you fetch items with projections (which may not include all attributes) or use single-table inheritance patterns (where not every attribute present in the remote item is modeled), because it avoids clobbering attribute values that were never loaded or modeled.

 

post = Forum.find(forum_uuid: FORUM_UUID, post_id: 1)
post.post_title = "(Removed)"
post.post_body = "(Removed)"
post.moderation = true
post.save # Performs an update_item call on dirty attributes only.

 

There is also a class-level method to directly build and make an update call to DynamoDB, using key attributes to identify the item and non-key attributes to form the update expression:

 

Forum.update(
  forum_uuid: FORUM_UUID,
  post_id: 1,
  post_title: "(Removed)",
  post_body: "(Removed)",
  moderation: true
)

 

The preceding two code examples are functionally equivalent. You’ll have the same database state after running either snippet.

A Note on Dirty Tracking

In our last example, we talked about how item updates only reflect changes to modified attributes. Users of ActiveRecord or similar libraries will be familiar with the concept of tracking dirty attribute values, but aws-record is a bit different. That is because DynamoDB supports collection attribute types, and in Ruby, collection types are often modified through object mutation. To properly track changes to an item when objects can be changed through mutable state, Aws::Record items will, by default, keep deep copies of your attribute values when loading from DynamoDB. Attribute changes through mutation, like this example, will work the way you expect:

 

post = Forum.find(forum_uuid: FORUM_UUID, post_id: 1)
post.tags.add("First")
post.dirty? # => true
post.save # Will call update_item with the new tags collection.

 

Tracking deep copies of attribute values has implications for performance and memory. You can turn off mutation tracking at the model level. If you do so, dirty tracking will still work for new object references, but will not work for mutated objects:

 

class NoMTModel
  include Aws::Record
  disable_mutation_tracking
  string_attr :key, hash_key: true
  string_attr :body
  map_attr    :map
end

item = NoMTModel.new(key: "key", body: "body", map: {})
item.save # Will call put_item
item.map[:key] = "value"
item.dirty? # => false, because we won't track mutations to objects
item.body = "New Body"
item.dirty? # => true, because we will still notice reassignment
# Will call update_item, but only update :body unless we mark map as dirty explicitly.
item.save

 

Try the aws-record Gem Today!

We’re excited to hear about what you’re building with aws-record. Feel free to leave your feedback in the comments, or open an issue in our GitHub repo. Read through the documentation and get started!

Hello, Perl Developers!

by Richard Elberger | in Perl

Did you know that Perl developers can leverage the many powerful services of AWS?  For years, customers have been running Perl workloads on AWS, but if you want to use services like Amazon Simple Queue Service (Amazon SQS) or Amazon Simple Notification Service (Amazon SNS), where do you start?

Whether you’re a Perl beginner or veteran, look no further than Paws, an AWS Community SDK for the Perl programming language.  Whether you’re building a new application or migrating a long-standing workload, Paws can help you leverage all of the services AWS has to offer, so you don’t need to spend vital resources on undifferentiated heavy lifting. And when you move an existing Perl application to AWS, you now have an avenue to incrementally refactor your application to leverage AWS managed services.

About Paws

Paws is the brainchild of José Luis Martínez, CTO at CAPSiDE, which is based in Barcelona, Spain. In his talk, Writing Paws: a Perl AWS SDK, at YAPC::Europe 2015, Martínez outlined his method for getting Paws off the ground. He asked himself, “How can I possibly cover all the available services?” His answer was to auto-generate the Perl code from the boto3 code base.

The first version of Paws was released on April 1, 2015.  Since then, there have been more than twenty-two releases, with each improving on the previous while including new services and functionality.  Last November, Martínez presented an update on its progress, Paws – A Perl AWS SDK, at Barcelona.pm.

With the help of many contributors, Paws has become very robust. The possibilities are endless.  And if you’re a Perl coder, you will be pleasantly surprised to see how easy it is to get started.
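
As a quick taste (a minimal sketch, assuming Paws is installed from CPAN and your AWS credentials are configured in one of the usual ways), listing your Amazon SQS queues takes only a few lines:

#!/usr/bin/env perl
use strict;
use warnings;
use Paws;

# Create an SQS client for a Region; credentials are resolved from the
# environment or the shared credentials file.
my $sqs = Paws->service('SQS', region => 'us-east-1');

# List existing queues and print their URLs.
my $result = $sqs->ListQueues;
print "$_\n" for @{ $result->QueueUrls || [] };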

What’s Next?

Watch for more blog posts about Paws and running Perl workloads on AWS.

Until then, happy Perl programming!

Waiters in the AWS SDK for Java

by Meghana Lokesh Byaramadu | in Java

We’re pleased to announce the addition of the waiters feature in the AWS SDK for Java (take a look at the release notes). Waiters make it easier to wait for a resource to transition into a desired state, which is a very common task when you’re working with services that are eventually consistent (such as Amazon DynamoDB) or have a lead time for creating resources (such as Amazon EC2). Before waiters, it was difficult to come up with the polling logic to determine whether a particular resource had transitioned into a desired state. Now with waiters, you can more simply and easily abstract out the polling logic into a simple API call.

Polling without Waiters

For example, let’s say you wanted to create a DynamoDB table and access it soon after it’s created to add an item into it. There’s a chance that if the table isn’t created already, a ResourceNotFoundException error will be thrown. In this scenario, you have to poll until the table becomes active and ready for use.

//Create an AmazonDynamoDb client 
AmazonDynamoDB client = AmazonDynamoDBClientBuilder
                	.standard()
                	.withRegion(Regions.US_WEST_2)
                	.build();

//Create a table
 client.createTable(new CreateTableRequest().withTableName(tableName)
            .withKeySchema(new KeySchemaElement().withKeyType(KeyType.HASH)
                                                 .withAttributeName("hashKey"))
            .withAttributeDefinitions(new AttributeDefinition()
                                                 .withAttributeType(ScalarAttributeType.S)
                                                 .withAttributeName("hashKey"))
            .withProvisionedThroughput(new ProvisionedThroughput(5L, 5L)));

Without waiters, polling would look like this.

//Poll up to 5 times for the table to become active
int attempts = 0;
while (attempts < 5) {
    try {
        DescribeTableRequest request = new DescribeTableRequest(tableName);
        DescribeTableResult result = client.describeTable(request);
        String status = result.getTable().getTableStatus();
        if (status.equals("ACTIVE")) {
            break;
        }
        Thread.sleep(5000);
    } catch (ResourceNotFoundException e) {
        // Table does not exist yet; keep polling.
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
    attempts++;
}

Polling with Waiters

Waiters make it easier to abstract out the polling logic into a simple API call. Let’s take a look at how you can create and use waiters to more easily determine whether a DynamoDB table is successfully created and ready to use for further transactions.

//Create waiter to wait on successful creation of table.
Waiter<DescribeTableRequest> waiter = client.waiters().tableExists();
try {
    waiter.run(new WaiterParameters<>(new DescribeTableRequest(tableName)));
}
catch (WaiterUnrecoverableException e) {
    //Explicit short circuit when the resource transitions into
    //an undesired state.
}
catch (WaiterTimedOutException e) {
    //Failed to transition into the desired state even after polling.
}
catch (AmazonDynamoDBException e) {
    //Unexpected service exception.
}

For more details, see AmazonDynamoDBWaiters.

Async Waiters

We also offer an async variant of waiters that returns a Future object, which promises to hold the result of the computation after it’s done. An async waiter requires a callback interface that is invoked after the Future object is fulfilled. The callback provides a way to carry out other tasks, depending on whether the resource entered the desired state (onWaitSuccess) or not (onWaitFailure).

To use an async waiter, you must call an async variant of run.

Future future = client.waiters()
                   .tableExists()
                   .runAsync(new WaiterParameters<DescribeTableRequest>()
                      .withRequest(new DescribeTableRequest(tableName)),
                      new WaiterHandler<DescribeTableRequest>() {
                      @Override
                      public void onWaitSuccess(DescribeTableRequest request) {
                          System.out.println("Table creation success!!!!!");
                      }

                      @Override
                      public void onWaitFailure(Exception e) {
                          e.printStackTrace();
                      }
                 });

future.get(5, TimeUnit.MINUTES);

To learn more, see Waiters.

We are excited about this new addition to the SDK! Let us know what you think in the comments section below.

Throttled Retries Now Enabled by Default

by Kyle Thomson | in Java

Back in March (1.10.59), the AWS SDK for Java introduced throttled retries, an opt-in feature that could be enabled in the SDK ClientConfiguration to retry failed service requests. Typically, client-side retries are used to avoid unnecessarily surfacing exceptions caused by transient network or service issues. However, when there are longer-running issues (for example, a network or service outage) these retries are less useful. With throttled retries enabled, service calls can fail fast rather than retrying pointlessly.

After testing this code for the past five months, we’ve turned on throttled retries by default.

If you use Amazon CloudWatch to collect AWS SDK metrics (see this post for details), you’ll be pleased to know that there’s a new metric that tracks when retries are throttled. Look for the ThrottledRetryCount metric in the CloudWatch console.

Of course, this remains an option you can configure. If you don’t want to use throttled retries, you can disable the feature through the ClientConfiguration option like so:

ClientConfiguration config = new ClientConfiguration().withThrottledRetries(false);

Feel free to leave questions or feedback in the comments.

DevOps Meets Security: Security Testing Your AWS Application: Part III – Continuous Testing

by Marcilio Mendonca | in Java

This is part III of a blog post series in which we do a deep dive on automated security testing for AWS applications. In part I, we discussed how AWS Java developers can create security unit tests to verify the correctness of their AWS applications by testing individual units of code in isolation. In part II, we went one step further and showed how developers can create integration tests that, unlike unit tests, interact with real software components and AWS resources. In this last post in the series, we’ll walk you through how to incorporate the provided security tests into a CI/CD pipeline (created in AWS CodePipeline) to automate security verification when new changes are pushed into the code repository.

Security Tests

In part I and part II of this post, we created a suite of unit and integration tests for a simple S3 wrapper Java class. Unit tests focused on testing the class in isolation by using mock objects instead of real Amazon S3 objects and resources. In addition, integration tests were created to complement unit tests and provide an additional layer of verification that uses real objects and resources like S3 buckets, objects, and versions. In this last post in the series (part III), we’ll show how the unit and integration security tests can be incorporated into a CI/CD pipeline to automatically verify the security behavior of code being pushed through the pipeline.

Incorporating Security Tests into a CI/CD Pipeline

Setting Up

Git and CodeCommit 

Follow the steps in the Integrating AWS CodeCommit with Jenkins blog post to install Git and create an AWS CodeCommit repo. Download the source code and push it to the AWS CodeCommit repo you created.

Jenkins and plugins on EC2

Follow the steps in the Building Continuous Deployment on AWS with AWS CodePipeline, Jenkins and AWS Elastic Beanstalk blog post to install and configure Jenkins on Amazon EC2. Make sure you install the AWS CodePipeline Jenkins plugin to enable AWS CodePipeline and Jenkins integration. In addition, create three Jenkins Maven jobs by following the steps described in the “Create a Jenkins Build Job” section of that blog post, but use the job parameters listed below for each project instead.

SecTestsOnAWS (Maven project)

  • AWS Region: choose an AWS region
  • Source Code Mngt: Category: Build
  • Source Code Mngt: Provider: SecTestsBuildProvider
  • Build: Goals and options: package -DskipUnitTests=true -DskipIntegrationTests=true
  • Post-build Actions: AWS CodePipeline Publisher: Output Locations: Location: target/

SecUnitTestsOnAWS (Maven project)

  • AWS Region: choose an AWS region
  • Source Code Mngt: Category: Test
  • Source Code Mngt: Provider: SecUnitTestsProvider
  • Build: Goals and options: verify -DskipIntegrationTests=true
  • Post-build Actions: AWS CodePipeline Publisher: Output Locations: Location: target/

SecIntegTestsOnAWS (Maven project)

  • AWS Region: choose an AWS region
  • Source Code Mngt: Category: Test
  • Source Code Mngt: Provider: SecIntegTestsProvider
  • Build: Goals and options: verify -DskipUnitTests=true
  • Post-build Actions: AWS CodePipeline Publisher: Output Locations: Location: target/

Make sure you pick an AWS region where AWS CodePipeline is available.

Here’s an example of the configuration options in the Jenkins UI for project SecTestsOnAWS:

Setting up Jenkins to build S3ArtifactManager using Maven

AWS CodePipeline

In the AWS CodePipeline console, create a pipeline with three stages, as shown here.

AWS CodePipeline CI/CD pipeline with security unit/integration tests actions

Stage #1: Source

  • Choose AWS CodeCommit as your source provider and enter your repo and branch names where indicated.

Stage #2: Build

Create a build action with the following parameters:

  • Action category: Build
  • Action name: Build
  • Build provider: SecTestsBuildProvider (must match the corresponding Jenkins entry in project SecTestsOnAWS)
  • Project name: SecTestsOnAWS
  • Input Artifact #1: MyApp
  • Output Artifact #1: MyAppBuild

Stage #3: Security-Tests

Create two pipeline actions as follows:

Action #1: Unit-Tests

  • Action category: Test
  • Action name: Unit-Tests
  • Build provider: SecUnitTestsProvider (must match the corresponding Jenkins entry in project SecUnitTestsOnAWS)
  • Project name: SecUnitTestsOnAWS
  • Input Artifact #1: MyApp
  • Output Artifact #1: MyUnitTestedBuild

Action #2: Integration-Tests

  • Action category: Test
  • Action name: Integration-Tests
  • Build provider: SecIntegTestsProvider (must match the corresponding Jenkins entry in project SecIntegTestsOnAWS)
  • Project name: SecIntegTestsOnAWS
  • Input Artifact #1: MyApp
  • Output Artifact #1: MyIntegTestedBuild

We are not adding a pipeline stage/action for application deployment because we have built a software component (S3ArtifactManager), not a full-fledged application. However, we encourage the reader to create a simple web or standalone application that uses the S3ArtifactManager class and then add a deployment action to the pipeline targeting an AWS Elastic Beanstalk environment, as described in this blog post.

Triggering the Pipeline

After the pipeline has been created, choose the Release Change button and watch the pipeline build the S3ArtifactManager component.

If you are looking for a more hands-on experience in writing security tests, we suggest that you extend the S3ArtifactManager API to allow clients to retrieve versioned objects from an S3 bucket (for example, getObject(String bucketName, String key, String versionId)) and write security tests for the new API. 
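
As a starting point for that exercise, a minimal sketch of such a method might look like the following. The s3Client field and the surrounding class structure are assumptions about the S3ArtifactManager internals from parts I and II, not the actual implementation.

// Hypothetical addition to S3ArtifactManager: fetch a specific version of an object.
public S3Object getObject(String bucketName, String key, String versionId) {
    // GetObjectRequest accepts an optional version ID for versioned buckets.
    GetObjectRequest request = new GetObjectRequest(bucketName, key, versionId);
    return s3Client.getObject(request);
}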

Final Remarks

In this last post of the series, we showed how to automate the building and testing of our S3ArtifactManager component by creating a pipeline using AWS CodePipeline, AWS CodeCommit, and Jenkins. As a result, any code changes pushed to the repo are now automatically verified by the pipeline and rejected if security tests fail.

We hope you found this series helpful. Feel free to leave your feedback in the comments.

Happy security testing!