Announcing the AWS Tools for Microsoft Visual Studio Team Services

by Steve Roberts | in .NET

Today Amazon Web Services announced the AWS Tools for Microsoft Visual Studio Team Services (VSTS). The tools are free to use and are distributed in the Visual Studio Marketplace. You can use these tasks in build and release pipelines hosted within VSTS and Team Foundation Server to interact with AWS services. For example, you can use tasks to copy content to and from Amazon S3 buckets, or add tasks into your pipelines to deploy build outputs to AWS Elastic Beanstalk, AWS CodeDeploy and AWS Lambda. The tools are also open source and can be found on GitHub.

In this post, we are going to take a look at how to install the tools, provide an overview of the tasks they contain, and then walk through a simple scenario to validate setup and show how easy they are to use. In subsequent posts we will dive deeper into the tasks and how you might use them in your VSTS pipelines.

Installation

Installing the AWS Tools for Microsoft Visual Studio Team Services is quick and easy! First visit the Visual Studio Marketplace. As shown below, you have two options for installing the tools. You can install them into your online VSTS account, or download the tools and install them into an on-premises Team Foundation Server instance.

That’s all there is to it! The tasks in the extension are now available for use in your account or on-premises instance, so let’s do a quick review of the tasks provided in this initial release. As mentioned earlier, subsequent posts will take a deeper dive into some of these tasks.

  • AWS CloudFormation Create/Update Stack. This task enables you to create or update a stack in AWS CloudFormation by using a template file and an optional parameters file. The task switches automatically between updating an existing stack or creating a new stack, depending on whether the stack already exists. You don’t need to select a “mode”, which makes this task convenient to use in pipelines. In addition to choosing the template and parameters file, you can elect to use a change set to create or update the stack, with the added option to automatically execute the change set (if it validates successfully). Or you can use the Execute Change Set task to execute the validated change set at a later time.
  • AWS CloudFormation Delete Stack. This task deletes a stack identified by name or ID. You might use it to clean up development or test environment stacks after a new, fresh deployment in a tear-down-and-rebuild scenario.
  • AWS CloudFormation Execute Change Set. As we said earlier, the Create/Update Stack task gives you the option to perform changes using a change set and, if the set validates, to execute the changes immediately or by using this task at a later time. You provide the name of the change set and the associated stack and the task does the rest, waiting for the stack to reach create or update complete status.
  • AWS Elastic Beanstalk Deployment. With this task you can deploy traditional ASP.NET applications using WebDeploy archives or deploy ASP.NET Core applications.
  • AWS Lambda .NET Core Deployment. This task enables deployment of standalone functions or serverless applications to AWS Lambda. The task uses the same dotnet CLI extensions as the AWS Visual Studio Toolkit, so you have the full customization capabilities of the command line tool switches available within the task.
  • AWS Lambda Invoke Function. In addition to deploying to AWS Lambda, you use this task to trigger Lambda functions to run from within your pipeline. The results of the function can be emitted into a variable for subsequent tasks in your pipeline to consume.
  • AWS S3 Download. Using a combination of bucket name and optional key prefix, this task uses a set of one or more globbing patterns to enable the download of content from an Amazon S3 bucket into your pipeline’s working folders. For example, you can use this to inject custom static content into a build.
  • AWS S3 Upload. Similar to the S3 Download task, this task takes a bucket name and a set of globbing patterns to run in a source folder, and uploads the matching content from the pipeline’s working folders to the bucket.
  • AWS Tools for Windows PowerShell Script. This task enables you to run scripts that use cmdlets from the Tools for Windows PowerShell (AWSPowerShell) module, optionally installing the module before the script runs.
  • AWS CLI. This task enables you to run individual AWS CLI commands. However, you must have already installed the AWS CLI into the build host.

Configuring and Using a Task

Now that you know a little about the tasks contained in the release, let’s quickly walk through how you might use the AWS S3 Upload task in a pipeline. This also lets you validate the setup of the tools and see how credentials are handled for the tasks.

For this walkthrough, note that we assume you have an existing build or release definition that fetches artifacts to build and/or deploy. We’re simply adding the new task to the pipeline and configuring it to upload the built or deployable artifacts to an S3 bucket. Go ahead and select the build definition you want to use, or create a new one. When you’ve chosen or created the definition, select the option to edit it.

In the following example screenshot, we’ve chosen to create a new build definition for an ASP.NET Core project. The tasks listed are the assigned defaults.

1. Add the S3 Upload Task to the pipeline

For this walkthrough, we want to capture the build output produced by the Publish task and upload it to Amazon S3. Therefore, we insert our new task between the existing Publish task and Publish Artifacts task. To do this, choose Add Task. In the panel on the right, scroll through the available tasks until you see the AWS tasks, specifically AWS S3 Upload. Choose Add to add it to our build definition.

If the new task isn’t added immediately after the Publish task, drag it into position. Then we can start to configure it.

2. Configure Task Credentials

Tasks that make requests of AWS services such as Amazon S3 need to have credentials configured. In Team Services terminology, these are known as service endpoints. The AWS tasks provide a service endpoint type named AWS to enable you to provide credentials. To quickly add credentials for this task, click the “+” icon to the right of the AWS Credentials box.

Clicking the gear icon instead opens a new browser tab in which you can manage all your service endpoints (including the new AWS type). You might do this if you want to set up multiple sets of AWS credentials for your tasks to use.

After you click the “+” icon, a pop-up window appears in which we can enter our AWS keys.

If you’re accustomed to using any of the AWS SDKs or tools, such as the AWS CLI or AWS modules for PowerShell, the options here might look familiar. Just as in those SDKs and tools, we are essentially constructing an AWS credential profile. Profiles have names, in this case the value entered for Connection name, which we use to refer to this set of credentials in our task configuration. Go ahead and enter the access and secret keys for the credentials you want to use, assign a name that you will remember, and then click OK to save them. The pop-up closes and returns us to the S3 Upload task configuration with our new credentials preselected.

You can reuse the credentials you entered in other tasks. Simply select the name you used to identify the credentials in the AWS Credentials list for the task you are configuring.

Note
We do not recommend that you use your account’s root credentials. Instead, create one or more IAM users, and then use those credentials. For more information, see Best Practices for Managing AWS Access Keys.

3. Configure Task Options

With credentials configured and selected, we can now complete the task configuration.

  • Set the region in which the bucket exists (or will be created), for example, us-east-1 or us-west-2.
  • Enter the name of the bucket (bucket names must be globally unique).
  • The Source Folder points to a folder in your build area that contains the content to upload. Team Services provides several predefined variables that you can use to avoid hard-coded paths. For this walkthrough, we choose to use the variable Build.ArtifactStagingDirectory, which is defined as “the local path on the agent where artifacts are copied to before being pushed to their destination.” Perfect!
  • Filename Patterns can contain one or more globbing patterns used to select files under the Source Folder for upload. The default value selects all files recursively. You can specify multiple patterns, one per line. For this walkthrough, the preceding Publish task emits a zip file containing the build; this is the file that will be uploaded.
  • Target Folder is the key prefix in the bucket that will be applied to all of the uploaded files. You can think of this like a folder path. If no value is given, the files are uploaded to the root of the bucket. By default, the relative folder hierarchy is preserved.
  • Finally, there are additional options you can set:
    • Create S3 bucket if it does not exist. The task will fail if the bucket cannot be created.
    • Overwrite (in the Advanced section). This is selected by default.
    • Flatten folders (in the Advanced section). This removes the path of each file relative to the Source Folder and places all files directly into the Target Folder.

4. Run the Build

With the new task configured, we’re ready to run our build. Choose Save & queue.

During the build, the task outputs messages to the log.

Wrap

As you can see, using the new tasks is simple. In future posts, we’ll give more details about some of the deployment tasks and how you can use them. We hope you’re as excited as we are by the launch of the new tools, and that you find them useful in your VSTS environments. Be sure to provide feedback in the GitHub repo to guide future development!

Acknowledgements

We’d like to acknowledge the assistance of the Visual Studio ALM Rangers for their help and support in bringing these new tools to the Visual Studio Marketplace.

AWS and .NET Core 2.0

by Norm Johanson | in .NET

Yesterday, .NET Core 2.0 was released, and at AWS we’re very excited about the new features and maturity added to the .NET Core platform. In the coming months, we’ll be updating AWS services to have first-class support for .NET Core 2.0. You can get started using .NET Core 2.0 on AWS right away in two easy ways.

Using AWS Elastic Beanstalk

Elastic Beanstalk lets you easily deploy web applications, and currently supports the .NET Framework and .NET Core 1.1. The Elastic Beanstalk platform will be updated to include .NET Core 2.0 soon. Until then, you can customize the deployment package to instruct Beanstalk to install .NET Core 2.0 on the instance during deployment.

When an ASP.NET Core application is deployed to Beanstalk, a JSON manifest called aws-windows-deployment-manifest.json is created by the toolkit to instruct Beanstalk how to deploy the application. In a previous blog post, we talked about how you can customize this manifest. We can use that capability to run a PowerShell script that installs .NET Core 2.0 before the application is deployed.

The first step is to add a file named aws-windows-deployment-manifest.json to our ASP.NET Core 2.0 project. In the properties window for aws-windows-deployment-manifest.json, be sure to set the Copy to Output Directory field to Copy Always. This file is normally generated by the toolkit, but when the toolkit finds the file already exists, it instead modifies the existing file with the settings made in the deployment wizard.

Next, copy and paste the content below into aws-windows-deployment-manifest.json. It says we want to deploy one ASP.NET Core application and, before the application is deployed, execute the ./Scripts/installnetcore20.ps1 PowerShell script.


{
  "manifestVersion": 1,
  "deployments": {

    "aspNetCoreWeb": [
      {
        "name": "app",
        "parameters": {
          "appBundle": ".",
          "iisPath": "/",
          "iisWebSite": "Default Web Site"
        },
        "scripts": {
          "preInstall": {
            "file": "./Scripts/installnetcore20.ps1"
          }
        }
      }
    ]
  }
}

Now that we have added the manifest, we need to add the PowerShell script. In the ASP.NET Core project, add a file named ./Scripts/installnetcore20.ps1. Again, be sure to set the Copy to Output Directory field to Copy Always to make sure it is added to the deployment package. The script below downloads the .NET Core 2.0 installer and runs it.

# Download the .NET Core 2.0 SDK installer if it isn't already on the instance,
# then run it silently, logging to a file for troubleshooting.
$localPath = 'C:\dotnet-sdk-2.0.0-win-x64.exe'

if(!(Test-Path $localPath))
{
    Invoke-WebRequest -Uri 'https://download.microsoft.com/download/0/F/D/0FD852A4-7EA1-4E2A-983A-0484AC19B92C/dotnet-sdk-2.0.0-win-x64.exe' -OutFile $localPath
    & $localPath /quiet /log c:\InstallNetCore20.log
}

In this script, I’m downloading .NET Core 2.0 from Microsoft’s official link. For a faster download during deployment, and to protect yourself from the link being changed, I recommend copying the .NET Core 2.0 installer into an Amazon S3 bucket that is in the same region as your Elastic Beanstalk environment.

Now, with this customization to the deployment manifest, you can easily deploy ASP.NET Core 2.0 applications to Elastic Beanstalk today.

Using Docker-based services

For Docker-based services that execute Docker containers, such as Amazon EC2 Container Service and AWS CodeBuild, you can get started immediately by using the published Docker images from Docker Hub. For example, when setting up an AWS CodeBuild project in the console, you can specify a custom Docker image from Docker Hub.

For more information about the .NET Core Docker images, see the GitHub repository. For more information about running Docker containers on AWS, see Getting Started with Amazon ECS.

AWS SDK for .NET

.NET Core 2.0 supports .NET Standard 2.0, which means any NuGet package that targets .NET Standard 2.0 or below is supported on .NET Core 2.0. The AWS SDK for .NET targets .NET Standard 1.3, which means you can use it with either .NET Core 1.x or .NET Core 2.0.
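
For example, the following minimal console app sketch uses the AWSSDK.S3 NuGet package on .NET Core 2.0 to list your S3 buckets (it assumes credentials and a default region are available from your environment or a credential profile):

using System;
using Amazon.S3;

class Program
{
    static void Main()
    {
        // AWSSDK.S3 targets .NET Standard 1.3, so the same package runs
        // unchanged on .NET Core 1.x and .NET Core 2.0.
        using (var s3 = new AmazonS3Client())
        {
            var response = s3.ListBucketsAsync().GetAwaiter().GetResult();
            foreach (var bucket in response.Buckets)
            {
                Console.WriteLine(bucket.BucketName);
            }
        }
    }
}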

Conclusion

To stay up to date about .NET Core 2.0 and AWS, keep following the AWS .NET Development blog and find us on Twitter.

ASP.NET Core and AWS CodeStar Deep Dive

by Steven Kang | in .NET

The AWS CodeStar team recently announced the addition of two ASP.NET Core project templates. As you might know, AWS CodeStar creates a code integration and code deployment (CI/CD) pipeline on behalf of developers, so they can spend their valuable time building applications instead of building infrastructure. With the new ASP.NET Core project templates, .NET developers can build and deploy their AWS applications on day one. Tara Walker’s excellent blog post covers how to create ASP.NET Core applications on AWS CodeStar. In this blog post, we take a deeper look into what goes on behind the scenes, as we learn how to add tests to your ASP.NET Core project for AWS CodeStar.

Adding a unit test project

Our goal is to add a simple test case that exercises HelloController’s functionality. I’m assuming that you have a brand new ASP.NET Core web service project. If you don’t, you can follow Tara’s blog post (mentioned above) to create one. Be sure to choose the ASP.NET Core Web service template. After you create the ASP.NET Core for AWS CodeStar project, clone the project repository through Team Explorer, and load the AspNetCoreWebService solution, you should be able to follow along with the rest of the blog post. If you need some guidance setting up your repo through Team Explorer, check out Steve Roberts’s Visual Studio and AWS CodeCommit integration announcement from May.

First, add a new xUnit project named AspNetCoreWebServiceTest to the AspNetCoreWebService solution. Our new test project will reference the HelloController class and JsonResult, so we should add AspNetCoreWebService as a project reference and Microsoft.AspNetCore.Mvc as a NuGet reference. Once you add them to the test project, you should see the following addition in AspNetCoreWebServiceTest.csproj.

<ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.3" />
    ...
</ItemGroup>
...
<ItemGroup>
    <ProjectReference Include="..\AspNetCoreWebService\AspNetCoreWebService.csproj" />
</ItemGroup>

This should allow you to make direct references to the HelloController class and unpack JsonResult. Let’s add a simple test case, as follows.

using System;
using Xunit;
using Microsoft.AspNetCore.Mvc;
using AspNetCoreWebService.Controllers;

namespace AspNetCoreWebServiceTest
{
    public class HelloControllerTest
    {
        [Fact]
        public void SimpleTest()
        {
            HelloController controller = new HelloController();
            var response = controller.Get("AWS").Value as Response;
            Assert.Equal("Hello AWS!", response.output);
        }
    }
}

Notice that we have renamed the file name, namespace, class name, and the method name. Run the test and verify that it passes. You should see the following in Solution Explorer.

Now that we have a working test project, we should update our pipeline to build and run the test before deploying the application.

Updating the AWS CodeBuild job

Let’s first look at how the project is built. When you or your team member pushes a change to the repo, your pipeline automatically begins the build process against the latest change. During this step, AWS CodeBuild uses the buildspec.yml file in the root of the repository to drive the build process.

version: 0.2
phases:
  pre_build:
    commands:
      - echo Restore started on `date`
      - dotnet restore AspNetCoreWebService/AspNetCoreWebService.csproj
  build:
    commands:
      - echo Build started on `date`
      - dotnet publish -c release -o ./build_output AspNetCoreWebService/AspNetCoreWebService.csproj
artifacts:
  files:
    - AspNetCoreWebService/build_output/**/*
    - scripts/**/*
    - appspec.yml

The AWS CodeBuild job uses the .NET Core image for AWS CodeBuild, which contains the .NET Core SDK and CLI you will invoke in buildspec.yml.  Since this project consists of one web service, a single buildspec.yml file should be sufficient. As your project grows and the complexity of the build process increases, you may want to drive the build process externally via a shell script or an MSBuild .proj file and simply invoke the script/build file in buildspec.yml.

I would like to bring your attention to the dotnet publish command. This publishing step is crucial, because it packages all dependencies together so that they are immediately available on the host machine. As defined in the artifacts section of the buildspec.yml file shown above, the list of files will be stored in an Amazon S3 bucket for AWS CodeDeploy to use when deploying your application onto the host. scripts/**/* contains all the scripts that appspec.yml depends on. If you’re not familiar with appspec.yml or want to know more about it, we’ll go over it in the next section.

In the previous section, we added a test project to our AWS CodeCommit repository. Now we should update buildspec.yml to build our new test project. We could simply run dotnet vstest as part of the build stage. However, in this exercise, let’s follow best practices by building separate stages for build and test. Let’s modify the buildspec.yml to build the test binaries and publish the bits into the AspNetCoreWebServiceTest/test_output directory.

pre_build:
    commands:
        ...
        - dotnet restore AspNetCoreWebServiceTest/AspNetCoreWebServiceTest.csproj
post_build:
    commands:
        ...
        - dotnet publish -c release -o ./test_output AspNetCoreWebServiceTest/AspNetCoreWebServiceTest.csproj  
artifacts:
    files:
        ...
        - AspNetCoreWebServiceTest/test_output/**/*

Notice that we added AspNetCoreWebServiceTest/test_output/**/* as an artifact. In effect, this directs the AWS CodeBuild service to upload the published test binaries to Amazon S3, so that we can reference them in the test job we will create next.

Updating AWS CodePipeline

In the previous sections, we added a new test project and modified buildspec.yml to build and save the binaries we need to run the tests. Now we’ll go over how to add a test stage in our pipeline. Let’s begin by adding a Test stage and a UnitTest action to the pipeline in the console.

Follow the rest of the UI and fill in these parameters:

  • Action category: Test
  • Action name: UnitTest
  • Test provider: AWS CodeBuild
  • Select Create a new build project
  • Project name: <your project name>-test
  • Operating system: Ubuntu
  • Runtime: .NET Core
  • Version: aws/codebuild/dot-net:core-1
  • For Build specification, select Insert build Commands
  • Build command: dotnet vstest AspNetCoreWebServiceTest/test_output/AspNetCoreWebServiceTest.dll
  • For Role name, select CodeStarWorker-<your project name>-CodeBuild from the list
  • For Input artifacts #1, select <your project name>-BuildArtifact from the list

The key piece of information here is the build command you provide. Our test job will run dotnet vstest against the test .dll built in the previous stage. Your pipeline should now look like this.

We’re almost done! If you run this pipeline by pressing Release change, the pipeline will fail on the Test stage with the message Error Code: AccessDeniedException. This is because the AWS CodeStar service doesn’t have permission to run our new Test stage. Let’s figure out how to grant appropriate access to our AWS CodeStar project.

Updating the role policy

Your AWS CodeStar project created policies granting minimum permissions for the various services and workers to sync, build, and deploy your application. Because we added a new AWS CodeBuild job, we need to grant access to the new resource in CodeStarWorkerCodePipelinePolicy. Let’s navigate to the IAM console to make this change. On the Roles tab, search using the “codepipeline” keyword. The role should be in the format CodeStarWorker-<project name>-CodePipeline. Then, edit the policy attached to the role. This is shown below.

The change we want to make is to add our new codebuild resource arn:aws:codebuild:us-east-1:532345249509:project/<your project name>-test that is associated with AWS CodeBuild actions in the policy.

{
    "Action": [
        "codebuild:StartBuild",
        "codebuild:BatchGetBuilds",
        "codebuild:StopBuild"
    ],
    "Resource": [
        "arn:aws:codebuild:us-east-1:532345249509:project/<your project name>"
        "arn:aws:codebuild:us-east-1:532345249509:project/<your project name>-test"
    ],
    "Effect": "Allow"
}

That’s it. Your AWS CodeStar project should now have appropriate permission to build the new job. Give it a try by pressing Release change.

ASP.NET Core application deployment

So far we’ve seen how AWS CodeStar builds and tests your project. In this section, we look closer at the deployment process. As part of the AWS CodeStar project creation, the AWS CodeStar service creates an Amazon EC2 instance to host your application. It also installs code-deploy-agent, which runs the deployment process on that instance following the instructions in appspec.yml. Let’s take a look at appspec.yml.

version: 0.0
os: linux
files:
  - source: AspNetCoreWebService/build_output
    destination: /home/ubuntu/aspnetcoreservice
  - source: scripts/virtualhost.conf
    destination: /home/ubuntu/aspnetcoreservice 
hooks:
  ApplicationStop:
    - location: scripts/stop_service
      timeout: 300
      runas: root

  BeforeInstall:
    - location: scripts/remove_application
      timeout: 300
      runas: root

  AfterInstall:
    - location: scripts/install_dotnetcore
      timeout: 500
      runas: root

    - location: scripts/install_httpd
      timeout: 300
      runas: root

  ApplicationStart:
    - location: scripts/start_service
      timeout: 300
      runas: root

Each script runs at a specific stage of the deployment process:

  • install_dotnetcore – Installs dotnet core if it isn’t already installed, and updates the package cache on the first run. This is Microsoft’s recommended way of installing .NET Core on Ubuntu.
  • install_httpd – Installs HTTPD daemon and mods, and overwrites the HTTPD configuration file to enable reverse-proxy.
  • start_service – Restarts the HTTPD service and restarts the existing ASP.NET application/service process.
  • stop_service – Stops the HTTPD service and stops the ASP.NET application/service if it is already running.
  • remove_application – Removes the deployed application from the instance.

The code-deploy-agent on the instance runs these hooks during the application deployment to install and start the service. You can monitor the event activities on the AWS CodeDeploy console, and you can grab a detailed log from the EC2 instance. After opening an SSH connection to the instance, navigate to /var/log/aws/codedeploy-agent to find the deployment logs.

Conclusion

In this blog post, you learned how your ASP.NET Core project for AWS CodeStar is built and deployed through the example of adding a test stage to your application’s pipeline. I hope this post helped you understand how various components and AWS services interact to provide you with a complete CI/CD system under AWS CodeStar. To learn more, visit the AWS CodeStar user guide. If you run into issues that are specific to AWS CodeStar, see the AWS CodeStar troubleshooting guide.

Screencast using .NET Core with AWS Serverless from NDC Oslo

by Norm Johanson | in .NET

Last month I had the pleasure of speaking at the NDC conference in Oslo about .NET Core and AWS Serverless technologies.

The talk focused on a new reference application I have been working on called Pollster. Two years ago, at the 2015 AWS re:Invent conference, we demoed a version of Pollster built with .NET Core (then called ASP.NET 5) and Docker. It was great to revisit this app and think about how to solve its technology challenges using serverless technology.

Thanks to the NDC team, a screencast of my talk has been uploaded. Check out the screencast to see how I used AWS serverless services such as AWS Lambda, Amazon API Gateway, and AWS Step Functions. The application isn’t feature complete yet, but you can find the source on GitHub.

Improvements for AWS CloudFormation and Amazon CloudWatch in the AWS Tools for PowerShell Modules

Trevor Sullivan, a Systems Development Engineer here at Amazon, recently contributed some new AWS CloudFormation helper cmdlets and improved formatting for types he works with on a daily basis. These updates were released in version 3.3.119.0 of the AWS Tools for PowerShell modules (AWSPowerShell and AWSPowerShell.NetCore), along with new cmdlets for the customizable dashboards feature of Amazon CloudWatch. In this guest post, Trevor takes us through the updates.

Pause a script until a CloudFormation stack status is reached

If you want to pause your PowerShell script until a CloudFormation stack reaches a certain status, you can use the Wait-CFNStack cmdlet. You use Wait-CFNStack to specify a CloudFormation stack name and the status code that you want to wait for. All of the supported CloudFormation statuses are provided with IntelliSense/tab-completion for the -Status parameter, so you don’t need to look them up! Let’s take a look at how you use this cmdlet.

$Common = @{
    ProfileName = 'default'
    Region = 'us-east-2'
}
$CloudFormation = @{
    StackName = 'AWSCloudFormation'
    TemplateBody = @'
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  myBucket:
    Type: AWS::S3::Bucket
Outputs:
  BucketName:
    Value: !Ref myBucket
'@
}
New-CFNStack @CloudFormation @Common
Wait-CFNStack -StackName $CloudFormation.StackName -Status CREATE_COMPLETE @Common

Test the existence of the CloudFormation stack

Have you ever wanted to simply test whether a CloudFormation stack exists in a certain AWS Region? If so, we now have a cmdlet for that. The Test-CFNStack cmdlet simply returns a Boolean $true if the specified stack exists, or $false if it doesn’t. If your stack doesn’t exist, you no longer have to worry about catching exceptions thrown by the Get-CFNStack cmdlet!

$Common = @{
    ProfileName = 'default'
    Region = 'us-east-2'
}

if (Test-CFNStack -StackName $CloudFormation.StackName @Common) {
    Remove-CFNStack -StackName $CloudFormation.StackName -Force @Common
}

Format types

Another customer-obsessed enhancement in the latest version of the modules deals with the default display of certain objects. In earlier versions, complex objects such as CloudFormation stacks were typically displayed in the vertical “list” format (see the Format-List PowerShell cmdlet). The “list” output format doesn’t use horizontal screen space very effectively, so you have to scroll a lot to find what you want, and the output isn’t easy to consume.

Instead, we opted to improve the default output to use the PowerShell table format. This makes data easier to consume, so you don’t have to scroll as much. It also limits focus to the object properties that you care about the most.

If you prefer the “list” format, you can still use it by piping your objects into the Format-List PowerShell cmdlet. The default output has simply been changed to use a tabular format to make data easier to interact with and consume.

The new format types work with cmdlets that emit complex objects, such as:

  • Get-CFNStackEvent
  • Get-CFNStack
  • Get-IAMRoleList
  • Get-CWERule
  • Get-LMFunctionList
  • Get-ASAutoScalingGroup
  • Get-WKSWorkspace
  • Get-CWAlarm

The changelog for version 3.3.119.0 of the module, available on the PowerShell Gallery, lists all the types for which new formats have been specified.

Manage CloudWatch dashboards

AWS customers who use CloudWatch to store and view metrics will appreciate the new CloudWatch dashboard APIs. You can now use PowerShell cmdlets to create, list, and delete CloudWatch dashboards!

I’ve already created a CloudWatch dashboard in my account, so let’s check out how we can export it, modify it, and then update it. Let’s start by discovering which AWS cmdlets relate to CloudWatch dashboards by using Get-AWSCmdletName.

PS /Users/tsulli> Get-AWSCmdletName -MatchWithRegex dashboard

CmdletName           ServiceOperation         ServiceName       CmdletNounPrefix
----------           ----------------         -----------       ----------------
Get-CWDashboard      GetDashboard             Amazon CloudWatch CW
Get-CWDashboardList  ListDashboards           Amazon CloudWatch CW
Remove-CWDashboard   DeleteDashboards         Amazon CloudWatch CW
Write-CWDashboard    PutDashboard             Amazon CloudWatch CW

Now, let’s discover which CloudWatch dashboards already exist in the us-west-2 AWS Region by using Get-CWDashboardList.

PS /Users/tsulli> Get-CWDashboardList -Region us-west-2

DashboardArn   DashboardName   LastModified        Size
------------   -------------   ------------        ----
               MacBook-Pro     7/6/17 7:50:16 PM   1510

As you can see, I’ve got a single CloudWatch dashboard in my test account, with some interesting metrics about my MacBook Pro. Coincidentally, these hardware metrics are also being written to CloudWatch metrics using the AWSPowerShell.NETCore module.

Now let’s grab some detailed information about this specific CloudWatch dashboard. We do this using the Get-CWDashboard cmdlet, simply passing in the region and the name of the dashboard. Remember that the dashboard name is a case-sensitive input parameter.

PS /Users/tsulli> $Dashboard = Get-CWDashboard -DashboardName MacBook-Pro -Region us-west-2
PS /Users/tsulli> $Dashboard | Format-List

LoggedAt : 7/7/17 1:44:44 PM
DashboardArn : arn:aws:cloudwatch::123456789012:dashboard/MacBook-Pro
DashboardBody : {"widgets......
DashboardName :
ResponseMetadata : Amazon.Runtime.ResponseMetadata
ContentLength : 3221
HttpStatusCode : OK

For readability in this article, I’ve trimmed the DashboardBody property. However, it contains a lengthy string with the JSON that represents my CloudWatch dashboard. I can use the ConvertFrom-Json cmdlet to convert the string to a usable object in PowerShell.

PS /Users/tsulli> $DashboardObject = $Dashboard.DashboardBody | ConvertFrom-Json

Now let’s update the title field of all the widgets on the CloudWatch dashboard. Let’s change the beginning of each widget’s title from “Trevor” to “David”. Right now, each title reads “Trevor’s MacBook Pro”; after the update, the widget titles will read “David’s MacBook Pro”. We’ll use the ForEach method syntax in PowerShell to do this. Each widget has a property named properties, which in turn has a title string property. We’ll do a simple string replacement operation on this property’s value.

PS /Users/tsulli> $DashboardObject.widgets.ForEach({ $PSItem.properties.title = $PSItem.properties.title.Replace('Trevor', 'David') })

Now that we’ve modified the widget titles, let’s convert the dashboard back to JSON and overwrite our dashboard! We’ll use ConvertTo-Json to convert the dashboard object back into its JSON representation. Then we’ll call Write-CWDashboard to commit the updated dashboard back to the CloudWatch service.

PS /Users/tsulli> $DashboardJson = $DashboardObject | ConvertTo-Json -Depth 8
PS /Users/tsulli> Write-CWDashboard -DashboardBody $DashboardJson -DashboardName MacBook-Pro -Region us-west-2

Great! Now if you go back to the AWS Management Console and visit your CloudWatch dashboard, you’ll see that your widgets have updated titles!

Conclusion

We hope you enjoy the continued improvements to the AWS Tools for PowerShell customer experience! If you have feedback on these improvements, please let us know. You can:

  • Leave comments and feedback in our AWS SDK forums.
  • Tweet to us at @awscloud and @awsfornet.
  • Comment on this article!

General Availability of the AWS Toolkit for Visual Studio 2017

We’re pleased to announce that the AWS Toolkit for Visual Studio is now generally available (GA) for Visual Studio 2017. You can install it from the Visual Studio Gallery. The GA version is 1.12.1.0.

Unlike with previous versions of the toolkit, we are using the Visual Studio Gallery to distribute versions that target Visual Studio 2017 editions and later. New features in the Visual Studio installation experience now provide what previously required a Windows Installer. No more! Now you’re able to use the familiar Extensions and Updates dialog box in the IDE to obtain updates.

For users who require the toolkit in earlier versions of Visual Studio (2013 and 2015), or the awsdeploy.exe standalone deployment tool, we’ll continue to maintain our current Windows Installer. In addition to the toolkit and deployment tool, this installer contains the .NET 3.5 and .NET 4.5 AWS SDK for .NET assemblies. It also contains the AWS Tools for Windows PowerShell module. If you have PowerShell version 5, you can also install the AWSPowerShell module from the PowerShell Gallery.

Thanks to everyone who tried out the preview versions of the AWS Toolkit for Visual Studio 2017 and for providing feedback!

Updates to AWSPowerShell Cmdlet Names

Since we launched the first AWS module for PowerShell five years ago, we’ve been hugely encouraged by the feedback from the user community, from first-time PowerShell users to PowerShell MVPs. In that time, we’ve acted immediately on some feedback, and put more complex changes into our backlog for future consideration.

One common request from experienced community members was to rename some cmdlets to change plural nouns to singular, as is the recommended practice. (We had initially created them with plural nouns as we tried to map them closely to the underlying service operation names).

Today we’re pleased to announce we’ve completed the remapping work. We’ve released new versions of the AWSPowerShell and AWSPowerShell.NetCore modules – version 3.3.96.0 – with singular cmdlet names and other cross-service consistency changes.

All the cmdlet name changes have backward-compatible aliases, so you should not have to update any existing scripts. You can continue to use the earlier names.

As we analyzed the cmdlets to determine where we needed to make changes, we found fewer problematic cases than we feared, but more than we’d like! Listing them all in this blog post would be overwhelming, so instead we added a page to our cmdlet cross-reference on the web at https://docs.aws.amazon.com/powershell/latest/reference/items/pstoolsref-legacyaliases.html. You can also quickly inspect the changes by opening the AWSPowerShellLegacyAliases.psm1 file in the module distribution. For convenience, both of these resources list the aliases by service.

Please keep your feedback and feature requests coming! We really enjoy getting feedback (good or bad) and using it to plan improvements to the tools to address the problems you handle day to day.

Security update to AWS SDK for .NET’s Amazon CloudFront Cookie Signer

by Milind Gokarn | in .NET

The AWS SDK for .NET has a utility class, Amazon.CloudFront.AmazonCloudFrontCookieSigner, for creating signed cookies to access private content served using Amazon CloudFront. An earlier blog post contains details on the usage of this utility class, along with sample code.

Specifying AmazonCloudFrontCookieSigner.Protocols.Https as the protocol parameter creates a cookie with an incorrect policy; the policy contains a resource restriction of “http*://” instead of “https://”.

Potential Impact

CloudFront distributions configured to serve both HTTP and HTTPS requests are affected by this issue, unless “Viewer Protocol Policy” is configured as HTTPS. In that case, CloudFront blocks attempts to access content over HTTP.

Impacted SDK versions

  • Versions 2.3.36 to 2.3.55 for version 2 of the AWS SDK for .NET
  • Versions 3.0.1-preview to 3.3.3.6 for package AWSSDK.CloudFront of the AWS SDK for .NET
  • Versions 3.2.0-beta to 3.2.3.7-beta, and 3.2.8-rc for package AWSSDK.CloudFront in the preview version 3.2 of the AWS SDK for .NET, that targets .NET Core

Mitigation

Update your dependency to the latest version of the SDK. The fix contains a change to the AmazonCloudFrontCookieSigner.Protocols enum’s underlying values (a breaking change) and requires a recompilation of the consuming application. The assembly version of the SDK package has been updated for this fix. There are no other breaking API changes in this version.

  • Version 2.3.55.2 and above for package AWSSDK in version 2 of the AWS SDK for .NET
  • Version 3.3.4.0 and above for package AWSSDK.CloudFront in version 3 of the AWS SDK for .NET

Using AWS CodeCommit with Visual Studio Team Explorer

We recently announced support for new features in the AWS Toolkit for Visual Studio that make working with AWS CodeCommit repositories easy and convenient from within Visual Studio Team Explorer. In this post, we take a look at getting started with setting up credentials, and then how to create and clone repositories from within Team Explorer.

Credential types for AWS CodeCommit

If you’re an existing user of the AWS Toolkit for Visual Studio, you’re familiar with setting up AWS credential profiles that contain your access and secret keys. The Toolkit for Visual Studio uses these credential profiles to call service APIs on your behalf, for example, to list your Amazon S3 buckets in AWS Explorer or to launch an Amazon EC2 instance. The integration of AWS CodeCommit with Team Explorer also uses these credential profiles. However, to work with Git itself we need additional credentials, specifically, Git credentials for HTTPS connections. You can read about these kinds of credentials (a user name and password) at Setup for HTTPS Users Using Git Credentials in the AWS CodeCommit user guide.

You can create the Git credentials for AWS CodeCommit only for Identity and Access Management (IAM) user accounts. You cannot create them for a root account. You can create up to two sets of these credentials for the service and, although you can mark a set of credentials as inactive, inactive sets still count toward your limit of two sets. Note that you can delete and recreate credentials at any time. When you use AWS CodeCommit from within Visual Studio, your traditional AWS credentials are used for working with the service itself, for example, when you’re creating and listing repositories. When working with the actual Git repositories hosted in AWS CodeCommit, you use the Git credentials.

As part of the support for AWS CodeCommit, we’ve extended the Toolkit for Visual Studio to automatically create and manage these Git credentials for you and associate them with your AWS credential profile. That way, you don’t need to worry about having the right set of credentials at hand to perform Git operations within Team Explorer. Once you connect to Team Explorer with your AWS credential profile, the associated Git credentials are used automatically whenever you work with a Git remote.

Later in this post we’ll go over how and when to set up the Git credentials that you need. Just remember that you have to use an IAM user account (which we strongly recommend you do anyway).

Connecting to AWS CodeCommit

When you open the Team Explorer window in Visual Studio 2015 or later, you’ll see a new entry in the Hosted Service Providers section of Manage Connections, as shown.

Choosing Sign up opens the AWS home page in a browser window. What happens when you choose Connect depends on whether the Toolkit for Visual Studio can find a credential profile with AWS access and secret keys to enable it to make calls to AWS on your behalf. You might have set up a credential profile by using the new Getting Started page that displays in the IDE when the Toolkit cannot find any locally stored credentials. Or you might have been using our Toolkit, the AWS Tools for PowerShell, or the AWS CLI and already have AWS credential profiles available for the Toolkit to use.

When you choose Connect, the toolkit starts the process to find a credential profile to use in the connection. If the Toolkit can’t find a credential profile, it opens a dialog box that invites you to enter the access and secret keys for your AWS account. We strongly recommend that you use an IAM user account, and not your root credentials. In addition, as noted earlier, the Git credentials you will eventually need can only be created for IAM users. Once the access and secret keys are provided and the credential profile is created, the connection between Team Explorer and AWS CodeCommit is ready for use.

If the Toolkit finds more than one AWS credential profile, you’re prompted to select the account you want to use within Team Explorer, as shown.

If you have only one credential profile, the toolkit bypasses the profile selection dialog box and you’re connected immediately.

When a connection is established between Team Explorer and AWS CodeCommit via your credential profiles, the invitation dialog box closes and the connection panel is displayed, as shown below.

Because we have no repositories cloned locally, the panel shows just the operations we can perform: Clone, Create, and Sign out. Like other providers, AWS CodeCommit in Team Explorer can be bound to only a single AWS credential profile at any given time. To switch accounts, you use Sign out to remove the connection so you can start a new connection using a different account. We’ll see how this panel expands to display our local AWS CodeCommit repositories later in the post.

Now that we have established a connection, we can create a repository by clicking the Create link.

Creating a repository

When we click the Create link, the Create a New AWS CodeCommit Repository dialog box opens.

AWS CodeCommit repositories are organized by region, so in Region we can select the region in which to host the repository. The list has all the regions in which AWS CodeCommit is supported. We provide the Name (required) and Description (optional) for our new repository.

The default behavior of the dialog box is to suffix the folder location for the new repository with the repository name (as you enter the name, the folder location also updates). To use a different folder name, edit the Clone into folder path after you finish entering the repository name.

You can also elect to automatically create an initial .gitignore file for the repository. The AWS Toolkit for Visual Studio provides a built-in default for Visual Studio file types. Or you can choose to have no file or to use a custom existing file that you would like to reuse across repositories. Simply select Use custom in the list and navigate to the custom file to use.

Once we have a repository name and location, we’re ready to click OK and start creating the repository. The Toolkit requests that the service create the repository and then clone the new repository locally, adding an initial commit for the .gitignore file, if we’re using one. It’s at this point that we start working with the Git remote, so the Toolkit now needs access to the Git credentials we described earlier.

Setting up Git credentials

Until now we’ve been using AWS access and secret keys to request that the service create our repository. Now we need to work with Git itself to do the actual clone operation, and Git doesn’t understand AWS access and secret keys. Instead, we need to supply the user name and password credentials to Git to use on an HTTPS connection with the remote.

As we said earlier, the Git credentials we’re going to use must be associated with an IAM user. You cannot generate them for root AWS credentials (this is another reason why we recommend you set up your AWS credential profiles to contain IAM user access and secret keys, and not root keys). The Toolkit can attempt to set up Git credentials for AWS CodeCommit for you, and associate them with the AWS credential profile that we used to connect in Team Explorer earlier. Let’s take a look at the process.

When you choose OK in the Create a New AWS CodeCommit Repository dialog box and successfully create the repository, the Toolkit checks the AWS credential profile that is connected in Team Explorer to determine if Git credentials for AWS CodeCommit exist and are associated locally with the profile. If so, the Toolkit instructs Team Explorer to commence the clone operation on the new repository. If Git credentials are not available locally, the Toolkit checks the type of account credentials that were used in the connection in Team Explorer. If the credentials are for an IAM user, as we recommend, the following message is shown.

If the credentials are root credentials, the following message is shown instead.

In both cases, the Toolkit offers to attempt to create the necessary Git credentials for you. In the first scenario, all it needs to create is a set of Git credentials for the IAM user. When a root account is in use, the Toolkit first attempts to create an IAM user and then proceeds to create Git credentials for that new user. If the Toolkit has to create a new user, it applies the AWS CodeCommit Power User managed policy to that new user account. This policy allows access to AWS CodeCommit (and nothing else), and enables all operations to be performed with AWS CodeCommit except for repository deletion.

When you’re creating credentials, you can view them only once. Therefore, the Toolkit prompts you to save the newly created credentials (as a .csv file) before continuing.

You won’t be surprised to learn that this is something we also strongly recommend (and be sure to save them to a secure location)!

There might be cases where the Toolkit can’t automatically create credentials. For example, you may already have created the maximum number of sets of Git credentials for AWS CodeCommit (two), or you might not have sufficient programmatic rights for the Toolkit to do the work for you (if you’re signed in as an IAM user). In these cases, you can log into the AWS Management Console to manage the credentials or obtain them from your administrator. You can then enter them in the Git Credentials for AWS CodeCommit dialog box, which the Toolkit displays.

Now that the credentials for Git are available, the clone operation for the new repository proceeds (see progress indication for the operation inside Team Explorer). If you elected to have a default .gitignore file applied, it is committed to the repository with a comment of ‘Initial Commit’.

That’s all there is to setting up credentials and creating a repository within Team Explorer. Once the required credentials are in place, all you see when creating new repositories in the future is the Create a New AWS CodeCommit Repository dialog itself. Now let’s look at cloning an existing repository.

Cloning a repository

To clone a repository, we return to the connection panel for AWS CodeCommit in Team Explorer. We click the Clone link to open the Clone AWS CodeCommit Repository dialog box, and then select the repository to clone and the location on disk where we want to place it.

Once we choose the region, the Toolkit queries the service to discover the repositories that are available in that region and displays them in the central list portion of the dialog box. The name and optional description of each repository are also displayed. You can reorder the list to sort it by either repository name or the last modified date, and to sort each in ascending or descending order.

Once we select our repository we can choose the location to clone to. This defaults to the same repository location used in other plugins to Team Explorer, but you can browse to or enter any other location. By default, the repository name is suffixed onto the selected path. However, if you want a specific path, simply edit the text box after you select the folder. Whatever text is in the box when you click OK will be the folder in which the cloned repository will be found.

Having selected the repository and a folder location, we then click OK to proceed with the clone operation. Just as with creating a repository, you can see the progress of the clone operation reported in Team Explorer.

Working with repositories

When you clone and/or create repositories, notice that the set of local repositories for the connection are listed in the connection panel in Team Explorer under the operation links. These entries give you a convenient way to access the repository to browse content. Simply right-click the repository and choose Browse in Console.

You can also use Update Git Credentials to update the stored Git credentials associated with the credential profile. This is useful if you’ve rotated the credentials. The command will display the Git Credentials for AWS CodeCommit dialog box we noted earlier for you to enter or import the new credentials.

Git operations on the repositories work as you’d expect. You can make local commits and, when you are ready to share, you use the Sync option in Team Explorer. Because the Git credentials are already stored locally and associated with our connected AWS credential profile, we won’t be prompted to supply them again for operations against the AWS CodeCommit remote.

Wrap

We hope you found this post useful in detailing how to manage credentials for AWS CodeCommit inside Team Explorer and using them to create and clone repositories within the IDE!

Updates for .NET Core Lambda Libraries

by Norm Johanson | in .NET

With our release of .NET Core support in AWS Lambda, we also released many NuGet packages to help you develop Lambda functions. We’ve been constantly updating them on our GitHub repository as well. Let’s look at some of the recent updates.

Amazon.Lambda.Tools

This package contains the integration with the .NET Core CLI, which you can use to deploy your functions. The AWS Toolkit for Visual Studio also uses this package to perform the deployment. For information about this package, see this previous post.

Lambda supports .NET Core 1.0. If you add a dependency to your .NET Core project that requires .NET Core 1.1, the .NET Core publishing tooling used by Amazon.Lambda.Tools will run without errors. However, when you run the function you’ll get errors because of the incompatibility. In version 1.5.0 of Amazon.Lambda.Tools we added validation on top of the .NET Core publishing tool to ensure that none of the dependencies for the project require a later runtime than Lambda supports.

New Events Packages

We have many NuGet packages that contain typed classes modeling the Lambda event types for the services. We recently added two more packages: Amazon.Lambda.LexEvents and Amazon.Lambda.KinesisFirehoseEvents.

Amazon.Lambda.LexEvents

Amazon Lex is a service for creating bots. You can use Lambda functions to process the incoming requests to the bot. The Amazon.Lambda.LexEvents package contains the LexEvent and LexResponse classes that you can use as parameter and return for your Lambda functions.
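
To illustrate, here is a minimal sketch of a fulfillment function that uses these classes as its parameter and return types. The intent handling and message text are placeholders, and the nested class names follow the package at the time of writing; treat this as a sketch rather than production code.

using Amazon.Lambda.Core;
using Amazon.Lambda.LexEvents;

public class BookTripProcessor
{
    // Receives the event from the Lex bot and closes the dialog
    // with a plain-text fulfillment message.
    public LexResponse Handler(LexEvent lexEvent, ILambdaContext context)
    {
        context.Logger.LogLine($"Processing intent {lexEvent.CurrentIntent?.Name}");

        return new LexResponse
        {
            SessionAttributes = lexEvent.SessionAttributes,
            DialogAction = new LexResponse.LexDialogAction
            {
                Type = "Close",
                FulfillmentState = "Fulfilled",
                Message = new LexResponse.LexMessage
                {
                    ContentType = "PlainText",
                    Content = "Your trip is booked."
                }
            }
        };
    }
}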

In the Amazon Lex console, you can create several getting-started bots. Book Trips is one of these samples; you can use it to simulate booking a hotel or a car. We added a blueprint in Visual Studio that you can use to create the Lambda processor for the Book Trips bot.

Amazon.Lambda.KinesisFirehoseEvents

Amazon Kinesis Firehose recently added support for using Lambda functions to transform the data being streamed to Amazon S3. The Amazon.Lambda.KinesisFirehoseEvents package contains the KinesisFirehoseEvent and KinesisFirehoseResponse classes. We also added a new getting started blueprint to Visual Studio for Firehose.
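
As a sketch of the transformation pattern (the upper-casing transform is only an example, and the class and constant names follow the package at the time of writing):

using System;
using System.Collections.Generic;
using System.Text;
using Amazon.Lambda.Core;
using Amazon.Lambda.KinesisFirehoseEvents;

public class FirehoseTransformer
{
    // Decodes each record, applies a trivial transform, and marks the record Ok.
    public KinesisFirehoseResponse Handler(KinesisFirehoseEvent evnt, ILambdaContext context)
    {
        var response = new KinesisFirehoseResponse
        {
            Records = new List<KinesisFirehoseResponse.FirehoseRecord>()
        };

        foreach (var record in evnt.Records)
        {
            var data = Encoding.UTF8.GetString(Convert.FromBase64String(record.Base64EncodedData));

            response.Records.Add(new KinesisFirehoseResponse.FirehoseRecord
            {
                RecordId = record.RecordId,
                Result = KinesisFirehoseResponse.TRANSFORMED_STATE_OK,
                Base64EncodedData = Convert.ToBase64String(Encoding.UTF8.GetBytes(data.ToUpperInvariant()))
            });
        }

        return response;
    }
}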

Serialization Debugging

As we mentioned, we have many packages providing typed classes that you can use for Lambda functions. You can also define your own classes, and the Amazon.Lambda.Serialization.Json package, which is registered in all of the blueprints we provide, will automatically handle all serializing and deserializing into JSON. In version 1.1.0 of the Amazon.Lambda.Serialization.Json package, we added a new debugging feature to help diagnose serialization issues you might have with your custom types. If you add the environment variable LAMBDA_NET_SERIALIZER_DEBUG with the value of true, the Amazon.Lambda.Serialization.Json package writes the incoming and outgoing JSON to the Amazon CloudWatch log stream. This can be very useful to verify that typed classes are being sent back as you expect.
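
As a minimal sketch of how the pieces fit together (the Order type is hypothetical; the assembly attribute is the serializer registration that the blueprints generate for you):

using Amazon.Lambda.Core;

// Register the JSON serializer from Amazon.Lambda.Serialization.Json for this assembly.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace SerializationDemo
{
    // A hypothetical custom type used as both input and output.
    public class Order
    {
        public string Id { get; set; }
        public decimal Total { get; set; }
    }

    public class Function
    {
        // With the LAMBDA_NET_SERIALIZER_DEBUG environment variable set to true
        // on the function, the raw incoming and outgoing JSON for Order is
        // written to the function's CloudWatch log stream.
        public Order Handler(Order order, ILambdaContext context)
        {
            return order;
        }
    }
}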

ASP.NET Core Web API Support

We continue to add features to our ASP.NET Core Web API support on top of Lambda. We’re also getting great support from our community on this project in the form of pull requests and feature requests. Please keep the feedback coming. In version 0.10.1-preview1 of Amazon.Lambda.AspNetCoreServer, we added:

  • Binary support – see the README.md file for details on how to set this up.
  • Filling in the RemoteIpAddress and RemotePort on HttpContext.Connection from the Amazon API Gateway request.
  • Adding the APIGatewayProxyRequest and ILambdaContext objects for the Lambda invocation to the HttpContext.Items collection, under the collection keys APIGatewayRequest and LambdaContext (see the sketch below).
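
For example, a controller action can read both objects back when the application is running in Lambda. A minimal sketch (the controller and route are made up for illustration; both entries are null when running outside Lambda, such as locally):

using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;
using Microsoft.AspNetCore.Mvc;

public class MetaController : Controller
{
    [HttpGet("meta")]
    public IActionResult Get()
    {
        // Both items are populated by Amazon.Lambda.AspNetCoreServer per request.
        var apiGatewayRequest = HttpContext.Items["APIGatewayRequest"] as APIGatewayProxyRequest;
        var lambdaContext = HttpContext.Items["LambdaContext"] as ILambdaContext;

        return Ok(new
        {
            RequestId = lambdaContext?.AwsRequestId,
            Stage = apiGatewayRequest?.RequestContext?.Stage
        });
    }
}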

Amazon.Lambda.Templates (1.2.1)

The NuGet package Amazon.Lambda.Templates makes all the blueprints offered in Visual Studio available to the dotnet new command. We recently released version 1.2.1 with the new Amazon Lex and Firehose blueprints, and we updated all the dependencies for the other blueprints. See this earlier blog post on how to install and use the blueprints from the dotnet new command.

Summary

We are continually improving our Lambda packages to enhance the experience of developing Lambda functions. Check out the GitHub repo, which is also a great place to give us your feedback. You can also track the releases of the packages in the RELEASE.CHANGELOG.md file.