

Deploying .NET Web Applications Using AWS Elastic Beanstalk with Visual Studio Team Services

by Norm Johanson

We recently announced the new AWS Tools for Microsoft Visual Studio Team Services. Today let’s take a deeper look at how you can use the new tools to support deploying your .NET web applications from Team Services to AWS Elastic Beanstalk.

Elastic Beanstalk uses environments to run .NET web applications. Before using the deployment task, you first need to create an environment. You can do this from the Elastic Beanstalk console with a sample application, or by using the AWS Toolkit for Visual Studio with an initial version of your application.

In this post, we won't cover setting up the tools in Team Services; we assume you already have a Team Services account and know how to push your source code to the repository used in your Team Services build. Instead, this post focuses on build definitions and shows how to use the AWS Elastic Beanstalk Deployment task.

Setting up the build definition for an ASP.NET application

The AWS Elastic Beanstalk Deployment task in the AWS Tools for Team Services supports either ASP.NET applications that are packaged as a Web Deploy archive, or ASP.NET Core applications published with the dotnet publish command. First let’s take a look at using an ASP.NET application.

Probably the easiest way to get an ASP.NET application built and packaged as a Web Deploy archive is to use the ASP.NET Build template and remove the Publish Artifacts task. We'll replace that task with the AWS Elastic Beanstalk Deployment task.

If you already have a build definition, be sure your Build Solution task has the following MSBuild arguments. These arguments ensure the web application is packaged as a Web Deploy archive and placed into the build artifacts staging directory.


/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation="$(build.artifactstagingdirectory)\\"

Now that the project is building, we need to add the AWS Elastic Beanstalk Deployment task. Choose Add Task, and then search for "Beanstalk". When the task appears in the results, choose Add to include it in the build definition.

For the new task, we need to make the following configuration changes:

  • AWS Credentials: The AWS credentials used to perform the deployment. Our previous post on the Team Services tools discusses setting up AWS credentials in Team Services. For this task, the credentials should be for an AWS Identity and Access Management (IAM) user, with a policy that enables the user to update an Elastic Beanstalk environment and describe an environment status and events.
  • AWS Region: The AWS Region that the Elastic Beanstalk environment is running in.
  • Application Type: Set to ASP.NET.
  • Web Deploy Archive: The path to the Web Deploy archive. If the archive was created using the arguments above, the file will have the same name as the directory that contains the web application, and will have a .zip extension. You can find it in the build artifacts staging directory, which can be referenced as $(build.artifactstagingdirectory).
  • Beanstalk Application Name: The name you used to create the Elastic Beanstalk application. An Elastic Beanstalk application is the container for a collection of environments running the .NET web application.
  • Beanstalk Environment Name: The name you used to create the Elastic Beanstalk environment. An Elastic Beanstalk environment contains the actual provisioned resources that are running the .NET web application.

Now that we have configured the task, we're ready to deploy to Elastic Beanstalk. If you queue a build now, you should see the deployment output in the build log.

Setting up the build definition for an ASP.NET Core application

For an ASP.NET Core deployment, we don’t use Web Deploy. Instead we need the output folder from the dotnet publish command. The easiest way to get started for this type of deployment is to use the ASP.NET Core build template, and again remove the Publish Artifacts task.

This creates a build definition using the dotnet CLI that restores NuGet dependencies, builds the projects, runs any tests, and then finally executes the dotnet publish command on any web projects. After the publish task, we need to add the AWS Elastic Beanstalk Deployment task the same way we added it for the ASP.NET application.

There are two parameters in the configurations on the AWS Elastic Beanstalk Deployment task that are different for ASP.NET Core:

  • Application Type: Set to ASP.NET Core.
  • Published Application Path: The path to the .zip file archive of the dotnet publish output. If you didn’t choose Zip Published Projects in the publish task, this parameter can point to the directory the dotnet publish command wrote to.

That’s it. Everything else is the same as the ASP.NET deployment.

Conclusion

We hope Visual Studio Team Services users will find the AWS Elastic Beanstalk Deployment task helpful and easy to use. We also appreciate hearing your feedback on our GitHub repository for these Team Services tasks.

CognitoAuthentication Extension Library Developer Preview

by Sam Mousigian

We are pleased to announce the Developer Preview of the CognitoAuthentication extension library. This library simplifies the authentication process of Amazon Cognito User Pools for .NET 4.5, .NET Core, and Xamarin developers. Many customers reported that they directly implemented the Secure Remote Password (SRP) protocol themselves. This process requires hundreds of lines of difficult cryptography implementation. Our goal in creating the CognitoAuthentication extension library is to eliminate this hassle and allow you to use the authentication methods for Amazon Cognito User Pools with only a few short method calls.

We also want to make the process more intuitive. Instead of having to read through pages of documentation to know which parameters to send for each type of authentication, we ask for the necessary fields in the corresponding request parameter of each authentication method and then produce the proper service request on your behalf. The library is built on top of the Amazon Cognito Identity Provider API to create and send these API calls to authenticate users. We hope this library helps you use Amazon Cognito User Pools to authenticate users, and we look forward to your feedback.

Getting Started

To set up an AWS account and install the AWS SDK for .NET to take advantage of this library, see Getting Started with the AWS SDK for .NET. Create a new project in Visual Studio and add the CognitoAuthentication extension library as a reference to the project. You can find it in the NuGet gallery as AWSSDK.Extensions.CognitoAuthentication. Using the library to make calls to the Amazon Cognito Identity Provider API from the AWS SDK for .NET is as simple as creating the necessary CognitoAuthentication objects and calling the appropriate AmazonCognitoIdentityProviderClient methods. The principal Amazon Cognito authentication objects are:

  • CognitoUserPool objects store information about a user pool, including the poolID, clientID, and other pool attributes.
  • CognitoUser objects contain a user’s username, the pool they are associated with, session information, and other user properties.
  • CognitoDevice objects include device information, such as the device key.

You can use AnonymousAWSCredentials when creating the identity provider client, which results in requests not being signed before they are sent to the service. Any service that does not accept unsigned requests returns a service exception in this case. This is appropriate for code you ship to end users, who should not have AWS credentials of their own until they are authenticated.
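
As a minimal illustration of this pattern (the region shown here is just an example; use your user pool's region), creating the unsigned client looks like this:

using Amazon;
using Amazon.Runtime;
using Amazon.CognitoIdentityProvider;

// Requests made with AnonymousAWSCredentials are not signed, which is what we want for
// client-side code that runs before the end user has authenticated.
var provider = new AmazonCognitoIdentityProviderClient(new AnonymousAWSCredentials(),
                                                       RegionEndpoint.USEast1);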

Authenticating with Secure Remote Password (SRP)

Instead of implementing hundreds of lines of cryptographic methods yourself, you now only need to create the necessary AmazonCognitoIdentityProviderClient, CognitoUserPool, CognitoUser, and InitiateSrpAuthRequest objects and then call StartWithSrpAuthAsync. We made these structures as lightweight as possible while still providing a large amount of functionality. The InitiateSrpAuthRequest currently only requires the password for the user; all other required information is already stored in the CognitoAuthentication objects.

The call handles creating and responding to the USER_SRP_AUTH and PASSWORD_VERIFIER challenges during the authentication flow on your behalf, and then returns an AuthFlowResponse object. The AuthenticationResult property of the AuthFlowResponse object contains the user's session tokens if the user was successfully authenticated. If more challenge responses are required, this field is null and the ChallengeName property describes the next challenge, such as multi-factor authentication. You would then call the appropriate method to continue the authentication flow. The following code snippet shows how the library performs SRP authentication:

using Amazon.Runtime;
using Amazon.CognitoIdentityProvider;
using Amazon.Extensions.CognitoAuthentication;

public async void AuthenticateWithSrpAsync()
{
    var provider = new AmazonCognitoIdentityProviderClient(new AnonymousAWSCredentials(),
                                                           FallbackRegionFactory.GetRegionEndpoint());
    CognitoUserPool userPool = new CognitoUserPool("poolID", "clientID", provider);
    CognitoUser user = new CognitoUser("username", "clientID", userPool, provider);

    string password = "userPassword";

    AuthFlowResponse context = await user.StartWithSrpAuthAsync(new InitiateSrpAuthRequest()
    {
        Password = password
    }).ConfigureAwait(false);
}

Authenticating with Multiple Forms of Authentication

Continuing the authentication flow with challenges, such as NewPasswordRequired and multi-factor authentication (MFA), is simpler as well. The only things required are the CognitoAuthentication objects, the user's password for SRP, and the information for the next challenge, which you acquire by prompting the user for it. The following code shows one way to check the challenge type and get the appropriate responses for MFA and NewPasswordRequired challenges during the authentication flow:

using System;

using Amazon.Runtime;
using Amazon.CognitoIdentityProvider;
using Amazon.Extensions.CognitoAuthentication;

public async void AuthenticateUserAsync()
{
    var provider = new AmazonCognitoIdentityProviderClient(new AnonymousAWSCredentials(),
                                                           FallbackRegionFactory.GetRegionEndpoint());
    CognitoUserPool userPool = new CognitoUserPool("poolID", "clientID", provider);
    CognitoUser user = new CognitoUser("username", "clientID", userPool, provider);

    string password = "userPassword";
    AuthFlowResponse authResponse = null;

    authResponse = await user.StartWithSrpAuthAsync(new InitiateSrpAuthRequest()
    {
        Password = password
    }).ConfigureAwait(false);

    while (authResponse.AuthenticationResult == null)
    {
        if (authResponse.ChallengeName == ChallengeNameType.NEW_PASSWORD_REQUIRED)
        {
            Console.WriteLine("Enter your desired new password:");
            string newPassword = Console.ReadLine();

            authResponse = 
                await user.RespondToNewPasswordRequiredAsync(new RespondToNewPasswordRequiredRequest()
                {
                    SessionID = authResponse.SessionID,
                    NewPassword = newPassword
                }).ConfigureAwait(false);
        }
        else if (authResponse.ChallengeName == ChallengeNameType.SMS_MFA)
        {
            Console.WriteLine("Enter the MFA Code sent to your device:");
            string mfaCode = Console.ReadLine();

            authResponse = await user.RespondToSmsMfaAuthAsync(new RespondToSmsMfaRequest()
            {
                 SessionID = authResponse.SessionID,
                 MfaCode = mfaCode
            }).ConfigureAwait(false);
        }
        else
        {
            Console.WriteLine("Unrecognized authentication challenge.");
            break;
        }
    }

    if (authResponse.AuthenticationResult != null)
    {
        Console.WriteLine("User successfully authenticated.");
    }
    else
    {
        Console.WriteLine("Error in authentication process.");
    }
}

Similar to the SRP authentication model, if the user is authenticated after these method calls, the AuthenticationResult property of the authResponse object contains the user's session tokens. Otherwise, continue prompting the user for the information required for the next authentication challenge, described in the authResponse ChallengeName field. As shown above, the main variables to maintain between authentication flow calls are the SessionID and ChallengeName of the AuthFlowResponse.

You should also handle the case of an unrecognized challenge. This can occur if the SDK is out of date and the service has introduced a new authentication challenge, or if the application does not support a certain type of challenge. Here, we simply tell the user that there was an error in the authentication process; however, you can handle this scenario in whatever way best suits your application.
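
To make the token handling concrete, here is a small snippet (not part of the original example) that continues from the authResponse variable above and reads the returned tokens. AuthenticationResultType comes from the Amazon.CognitoIdentityProvider.Model namespace:

using Amazon.CognitoIdentityProvider.Model;

AuthenticationResultType tokens = authResponse.AuthenticationResult;
string idToken = tokens.IdToken;           // identity token to present to your application
string accessToken = tokens.AccessToken;   // access token for user pool operations
string refreshToken = tokens.RefreshToken; // keep this to run the RefreshToken flow later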

Using AWS Resources After Authentication

Once a user is authenticated using the CognitoAuthentication library, the next step is to allow them to access the appropriate AWS resources. This requires you to create an identity pool through the Amazon Cognito Federated Identities console. By specifying your Amazon Cognito user pool as a provider, using its poolID and clientID, you can allow your user pool users to access AWS resources connected to your account. You can also specify different roles for unauthenticated and authenticated users so that they can access different resources. You can change these roles in the IAM console, where you can add or remove permissions in the "Action" field of the role's attached policy.

Then, using the appropriate identity pool, user pool, and Amazon Cognito user information, you can make calls to different AWS resources. The following example shows a user authenticated with SRP listing the S3 buckets permitted by the associated identity pool's role:

using System;

using Amazon;
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.CognitoIdentity;
using Amazon.CognitoIdentityProvider;
using Amazon.Extensions.CognitoAuthentication;

public async void GetS3BucketsAsync()
{
    var provider = new AmazonCognitoIdentityProviderClient(new AnonymousAWSCredentials(),
                                                            FallbackRegionFactory.GetRegionEndpoint());
    CognitoUserPool userPool = new CognitoUserPool("poolID", "clientID", provider);
    CognitoUser user = new CognitoUser("username", "clientID", userPool, provider);

    string password = "userPassword";

    AuthFlowResponse context = await user.StartWithSrpAuthAsync(new InitiateSrpAuthRequest()
    {
        Password = password
    }).ConfigureAwait(false);

    CognitoAWSCredentials credentials = 
        user.GetCognitoAWSCredentials("identityPoolID", RegionEndpoint.<YourIdentityPoolRegion>);

    using (var client = new AmazonS3Client(credentials))
    {
        ListBucketsResponse response = 
            await client.ListBucketsAsync(new ListBucketsRequest()).ConfigureAwait(false);

        foreach (S3Bucket bucket in response.Buckets)
        {
            Console.WriteLine(bucket.BucketName);
        }
    }
}

Other Forms of Authentication

In addition to SRP, NewPasswordRequired, and MFA, the CognitoAuthentication extension library offers an easier authentication flow for:

  • Custom – Begins with a call to StartWithCustomAuthAsync(InitiateCustomAuthRequest customRequest)
  • RefreshToken – Begins with a call to StartWithRefreshTokenAuthAsync(InitiateRefreshTokenAuthRequest refreshTokenRequest)
  • RefreshTokenSRP – Begins with a call to StartWithRefreshTokenAuthAsync(InitiateRefreshTokenAuthRequest refreshTokenRequest)
  • AdminNoSRP – Begins with a call to StartWithAdminNoSrpAuth(InitiateAdminNoSrpAuthRequest adminAuthRequest)

Call the appropriate method depending on the desired flow, and then continue prompting the user with challenges as they are presented in the AuthFlowResponse objects of each method call. Also call the appropriate response method, such as RespondToSmsMfaAuthAsync for MFA challenges and RespondToCustomAuthAsync for custom challenges. If new authentication methods are created for Amazon Cognito User Pools in the future, they will be added to the library as well.
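
To give a feel for one of these flows in code, here is a hedged sketch of the RefreshToken flow (not taken from the original post). It assumes the CognitoUser still holds tokens from an earlier sign-in; the CognitoUserSession constructor and the AuthFlowType property shown are our assumptions about the preview API, so check the library source if they differ:

using System;
using System.Threading.Tasks;
using Amazon.CognitoIdentityProvider;
using Amazon.Extensions.CognitoAuthentication;

public async Task<AuthFlowResponse> RefreshSessionAsync(CognitoUser user,
    string idToken, string accessToken, string refreshToken)
{
    // Re-attach the tokens from the earlier sign-in before starting the refresh flow.
    // The issued/expiration times below are placeholders for illustration only.
    user.SessionTokens = new CognitoUserSession(idToken, accessToken, refreshToken,
                                                DateTime.UtcNow, DateTime.UtcNow.AddHours(1));

    return await user.StartWithRefreshTokenAuthAsync(new InitiateRefreshTokenAuthRequest()
    {
        AuthFlowType = AuthFlowType.REFRESH_TOKEN_AUTH // assumed flow type for this request
    }).ConfigureAwait(false);
}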

Conclusion

We hope you find this preview of the CognitoAuthentication extension library useful as it simplifies the authentication process from hundreds of lines of code to only a few. We appreciate all of your feedback. You can provide public feedback on our GitHub page or our Gitter page. Those who prefer not to give public feedback can reach out to our support team by filing a support ticket through the AWS Management Console.

Announcing the AWS Tools for Microsoft Visual Studio Team Services

by Steve Roberts

Today Amazon Web Services announced the AWS Tools for Microsoft Visual Studio Team Services (VSTS). The tools are free to use and are distributed in the Visual Studio Marketplace. You can use these tasks in build and release pipelines hosted within VSTS and Team Foundation Server to interact with AWS services. For example, you can use tasks to copy content to and from Amazon S3 buckets, or add tasks into your pipelines to deploy build outputs to AWS Elastic Beanstalk, AWS CodeDeploy and AWS Lambda. The tools are also open source and can be found on GitHub.

In this post, we are going to take a look at how to install the tools, provide an overview of the tasks they contain, and then walk through a simple scenario to validate setup and show how easy they are to use. In subsequent posts we will dive deeper into the tasks and how you might use them in your VSTS pipelines.

Installation

Installing the AWS Tools for Microsoft Visual Studio Team Services is quick and easy! First visit the Visual Studio Marketplace. As shown below, you have two options for installing the tools. You can install them into your online VSTS account, or download the tools and install them into an on-premises Team Foundation Server instance.

That’s all there is to it! The tasks in the extension are now available for use in your account or on-premises instance, so let’s do a quick review of the tasks provided in this initial release. As mentioned earlier, subsequent posts will take a deeper dive into some of these tasks.

  • AWS CloudFormation Create/Update Stack. This task enables you to create or update a stack in AWS CloudFormation by using a template file and an optional parameters file. The task switches automatically between updating an existing stack or creating a new stack, depending on whether the stack already exists. You don’t need to select a “mode”, which makes this task convenient to use in pipelines. In addition to choosing the template and parameters file, you can elect to use a change set to create or update the stack, with the added option to automatically execute the change set (if it validates successfully). Or you can use the Execute Change Set task to execute the validated change set at a later time.
  • AWS CloudFormation Delete Stack. This task deletes a stack identified by name or ID. You might use it to clean up development or test environment stacks after a new, fresh deployment in a tear-down-and-rebuild scenario.
  • AWS CloudFormation Execute Change Set. As we said earlier, the Create/Update Stack task gives you the option to perform changes using a change set and, if the set validates, to execute the changes immediately or by using this task at a later time. You provide the name of the change set and the associated stack and the task does the rest, waiting for the stack to reach create or update complete status.
  • AWS Elastic Beanstalk Deployment. With this task you can deploy traditional ASP.NET applications using WebDeploy archives or deploy ASP.NET Core applications.
  • AWS Lambda .NET Core Deployment. This task enables deployment of standalone functions or serverless applications to AWS Lambda. The task uses the same dotnet CLI extensions as the AWS Visual Studio Toolkit, so you have the full customization capabilities of the command line tool switches available within the task.
  • AWS Lambda Invoke Function. In addition to deploying to AWS Lambda, you use this task to trigger Lambda functions to run from within your pipeline. The results of the function can be emitted into a variable for subsequent tasks in your pipeline to consume.
  • AWS S3 Download. Using a combination of bucket name and optional key prefix, this task uses a set of one or more globbing patterns to enable the download of content from an Amazon S3 bucket into your pipeline’s working folders. For example, you can use this to inject custom static content into a build.
  • AWS S3 Upload. Similarly to the S3 download task, this task takes a bucket name and set of globbing patterns to be run in a source folder to upload content from the pipeline’s working folders to a bucket.
  • AWS Tools for Windows PowerShell Script. This task enables you to run scripts that use cmdlets from the Tools for Windows PowerShell (AWSPowerShell) module, optionally installing the module before the script runs.
  • AWS CLI. This task enables you to run individual AWS CLI commands. However, you must have already installed the AWS CLI into the build host.

Configuring and Using a Task

Now that you know a little about the tasks contained in the release, let's quickly walk through how you might use the AWS S3 Upload task in a pipeline. This also lets you validate the setup of the tools and shows how credentials are handled for the tasks.

For this walkthrough, note that we assume you have an existing build or release definition that fetches artifacts to build and/or deploy. We’re simply adding the new task to the end of the pipeline, and configuring it to upload the built or deployable artifacts to an S3 bucket. Go ahead and select the build definition you want to use, or create a new one. When you’ve chosen the definition or created one, select the option to edit the definition.

In the following example screenshot, we’ve chosen to create a new build definition for an ASP.NET Core project. The tasks listed are the assigned defaults.

1. Add the S3 Upload Task to the pipeline

For this walkthrough, we want to capture the build output produced by the Publish task and upload it to Amazon S3. Therefore, we insert our new task between the existing Publish task and Publish Artifacts task. To do this, choose Add Task. In the panel on the right, scroll through the available tasks until you see the AWS tasks, specifically AWS S3 Upload. Choose Add to add it to our build definition.

If the new task isn’t added immediately after the Publish task, drag it into position. Then we can start to configure it.

2. Configure Task Credentials

Tasks that make requests of AWS services such as Amazon S3 need to have credentials configured. In Team Services terminology, these are known as service endpoints. The AWS tasks provide a service endpoint type named AWS to enable you to provide credentials. To quickly add credentials for this task, click the "+" icon to the right of the AWS Credentials box.

Alternatively, clicking the gear icon opens a new browser tab where you can manage all your service endpoints (including the new AWS type). You might do this if you want to set up multiple sets of AWS credentials for your tasks to use.

After we click the "+" icon, a pop-up window appears in which we can enter our AWS keys.

If you’re accustomed to using any of the AWS SDKs or tools, such as the AWS CLI or AWS modules for PowerShell, the options here might look familiar. Just as in those SDKs and tools, we are essentially constructing an AWS credential profile. Profiles have names, in this case the value entered for Connection name, which we use to refer to this set of credentials in our task configuration. Go ahead and enter the access key and secret keys for the credentials you want to use, assign a name that you will remember, and then click OK to save them. The pop-up will close and return us to the S3 Upload task configuration with our new credentials preselected.

You can reuse the credentials you entered in other tasks. Simply select the name you used to identify the credentials in the AWS Credentials list for the task you are configuring.

Note
We do not recommend that you use your account’s root credentials. Instead, create one or more IAM users, and then use those credentials. For more information, see Best Practices for Managing AWS Access Keys.

3. Configure Task Options

With credentials configured and selected, we can now complete the task configuration.

  • Set the region in which the bucket exists (or will be created in), for example, us-east-1, us-west-2, etc.
  • Enter the name of the bucket (bucket names must be globally unique).
  • The Source Folder points to a folder in your build area that contains the content to upload. Team Services provides several variables, detailed here, that you can use to avoid hard-coded paths. For this walkthrough, we choose to use the variable Build.ArtifactStagingDirectory, which is defined as …the local path on the agent where artifacts are copied to before being pushed to their destination. Perfect!
  • Filename Patterns can contain one or more globbing patterns used to select files under the Source Folder for upload. The default value shown here selects all files recursively. You can specify multiple patterns, one per line. For this walkthrough, the preceding task (Publish) emits a zip file containing the build. This is the file that will be uploaded.
  • Target Folder is the key prefix in the bucket that will be applied to all of the uploaded files. You can think of this like a folder path. If no value is given, the files are uploaded to the root of the bucket. By default, the relative folder hierarchy is preserved.
  • Finally, there are additional options you can set:
    • Create S3 bucket if it does not exist. The task will fail if the bucket cannot be created.
    • Overwrite (in the Advanced section). This is selected by default.
    • Flatten folders (in the Advanced section). This removes the path of each file relative to the Source Folder and places all files directly into the Target Folder.

4. Run the Build

With the new task configured, we’re ready to run our build. Choose Save & queue.

During the build, the task outputs messages to the log.

Wrap-up

As you can see, using the new tasks is simple. In future posts, we’ll give more details about some of the deployment tasks and how you can use them. We hope you’re as excited as we are by the launch of the new tools, and that you find them useful in your VSTS environments. Be sure to provide feedback in the GitHub repo to guide future development!

Acknowledgements

We’d like to acknowledge the assistance of the Visual Studio ALM Rangers for their help and support in bringing these new tools to the Visual Studio Marketplace.

AWS and .NET Core 2.0

by Norm Johanson

Yesterday, .NET Core 2.0 was released, and at AWS we’re very excited about the new features and maturity added to the .NET Core platform. In the coming months, we’ll be updating AWS services to have first-class support for .NET Core 2.0. You can get started using .NET Core 2.0 on AWS right away in two easy ways.

Using AWS Elastic Beanstalk

Elastic Beanstalk lets you easily deploy web applications, and currently supports the .NET Framework and .NET Core 1.1. The Elastic Beanstalk platform will be updated to have .NET Core 2.0 soon. Until the platform is updated you can customize the deployment package to instruct Beanstalk to install .NET Core 2.0 on the instance during deployment.

When an ASP.NET Core application is deployed to Beanstalk, the toolkit creates a JSON manifest called aws-windows-deployment-manifest.json that instructs Beanstalk how to deploy the application. In a previous blog post we talked about how you can customize this manifest. We can use that ability to customize the manifest to run a PowerShell script that installs .NET Core 2.0 before deployment.

The first step is to add a file named aws-windows-deployment-manifest.json to our ASP.NET Core 2.0 project. In the properties window for aws-windows-deployment-manifest.json, be sure to set the Copy to Output Directory field to Copy Always. The toolkit normally generates this file, but if it finds the file already exists, it instead modifies the existing file with the settings made in the deployment wizard.

Next, copy and paste the following content into aws-windows-deployment-manifest.json. It says we want to deploy one ASP.NET Core application and, before it is deployed, execute the ./Scripts/installnetcore20.ps1 PowerShell script.


{
  "manifestVersion": 1,
  "deployments": {

    "aspNetCoreWeb": [
      {
        "name": "app",
        "parameters": {
          "appBundle": ".",
          "iisPath": "/",
          "iisWebSite": "Default Web Site"
        },
        "scripts": {
          "preInstall": {
            "file": "./Scripts/installnetcore20.ps1"
          }
        }
      }
    ]
  }
}

Now that we have added the manifest, we need to add the PowerShell script. In the ASP.NET Core project, add a ./Scripts/installnetcore20.ps1 file. Again, be sure to set the Copy to Output Directory field to Copy Always so that the script is included in the deployment package. The following script downloads the .NET Core 2.0 installer and runs it.

$localPath = 'C:\dotnet-sdk-2.0.0-win-x64.exe'

if(!(Test-Path $localPath))
{
    Invoke-WebRequest -Uri 'https://download.microsoft.com/download/0/F/D/0FD852A4-7EA1-4E2A-983A-0484AC19B92C/dotnet-sdk-2.0.0-win-x64.exe' -OutFile $localPath
    & $localPath /quiet /log c:\InstallNetCore20.log
}

In this script, I’m downloading .NET Core 2.0 from Microsoft’s official link. For a faster download during deployment and to protect yourself from the link being changed, I recommend copying the .NET Core 2.0 installation into an Amazon S3 bucket that is in the same region as your Elastic Beanstalk environment.

Now, with this customization to the deployment manifest, you can easily deploy ASP.NET Core 2.0 applications to Elastic Beanstalk today.

Using Docker-based services

For Docker-based services that execute Docker containers, such as Amazon EC2 Container Service and AWS CodeBuild, you can get started immediately by using the published Docker images from Docker Hub. For example, when setting up an AWS CodeBuild project in the console, you can specify a custom Docker image from Docker Hub.

For more information about the .NET Core Docker images, see the GitHub repository. For more information about running Docker containers on AWS, see Getting Started with Amazon ECS.

AWS SDK for .NET

.NET Core 2.0 supports .NET Standard 2.0, which means any NuGet package that targets .NET Standard 2.0 or below is supported on .NET Core 2.0. The AWS SDK for .NET targets .NET Standard 1.3, which means you can use it with either .NET Core 1.x or .NET Core 2.0.
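
For example, the same SDK code runs unchanged whether the project targets .NET Framework, .NET Core 1.x, or .NET Core 2.0. Here's a minimal sketch (not from the original post) that assumes the AWSSDK.S3 NuGet package is referenced and that credentials and region come from the default SDK configuration:

using System;
using Amazon.S3;

class Program
{
    static void Main()
    {
        // The SDK targets .NET Standard 1.3, so this same code compiles and runs on
        // .NET Framework, .NET Core 1.x, and .NET Core 2.0 without changes.
        using (var s3 = new AmazonS3Client())
        {
            var response = s3.ListBucketsAsync().GetAwaiter().GetResult();
            Console.WriteLine($"You own {response.Buckets.Count} bucket(s).");
        }
    }
}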

Conclusion

To stay up to date about .NET Core 2.0 and AWS, keep following the AWS .NET Development blog and find us on Twitter.

ASP.NET Core and AWS CodeStar Deep Dive

by Steven Kang

The AWS CodeStar team recently announced the addition of two ASP.NET Core project templates. As you might know, AWS CodeStar creates a continuous integration and continuous deployment (CI/CD) pipeline on behalf of developers, so they can spend their valuable time building applications instead of building infrastructure. With the new ASP.NET Core project templates, .NET developers can build and deploy their AWS applications on day one. Tara Walker's excellent blog post covers how to create ASP.NET Core applications on AWS CodeStar. In this blog post, we take a deeper look at what goes on behind the scenes as we learn how to add tests to your ASP.NET Core project for AWS CodeStar.

Adding a unit test project

Our goal is to add a simple test case that exercises HelloController's functionality. I'm assuming that you have a brand new ASP.NET Core web service project. If you don't, you can follow Tara's blog post (mentioned above) to create one. Be sure to choose the ASP.NET Core Web service template. After you create the ASP.NET Core for AWS CodeStar project, clone the project repository through Team Explorer, and load the AspNetCoreWebService solution, you should be able to follow along with the rest of the blog post. If you need some guidance setting up your repo through Team Explorer, check out Steve Roberts's Visual Studio and AWS CodeCommit integration announcement from May.

First, add a new xUnit project named AspNetCoreWebServiceTest to the AspNetCoreWebService solution. Our new test project will reference the HelloController class and JsonResult, so we should add AspNetCoreWebService as a project reference and Microsoft.AspNetCore.Mvc as a NuGet reference. Once you add them to the test project, you should see the following addition in AspNetCoreWebServiceTest.csproj.

<ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.3" />
    ...
</ItemGroup>
...
<ItemGroup>
    <ProjectReference Include="..\AspNetCoreWebService\AspNetCoreWebService.csproj" />
</ItemGroup>

This should allow you to make direct references to the HelloController class and unpack JsonResult. Let’s add a simple test case, as follows.

using System;
using Xunit;
using Microsoft.AspNetCore.Mvc;
using AspNetCoreWebService.Controllers;

namespace AspNetCoreWebServiceTest
{
    public class HelloControllerTest
    {
        [Fact]
        public void SimpleTest()
        {
            HelloController controller = new HelloController();
            var response = controller.Get("AWS").Value as Response;
            Assert.Equal(response.output, "Hello AWS!");
        }
    }
}

Notice that we have renamed the file name, namespace, class name, and the method name. Run the test and verify that it passes. You should see the following in Solution Explorer.
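
For reference, the controller exercised by this test looks roughly like the following sketch. The Response type and its output field are inferred from the test above rather than copied from the template, so treat the details as an approximation:

using Microsoft.AspNetCore.Mvc;

namespace AspNetCoreWebService.Controllers
{
    public class Response
    {
        public string output;
        public Response(string output) { this.output = output; }
    }

    [Route("api/[controller]")]
    public class HelloController : Controller
    {
        // GET api/hello/AWS returns a JSON payload whose output field is "Hello AWS!"
        [HttpGet("{name}")]
        public JsonResult Get(string name)
        {
            return new JsonResult(new Response($"Hello {name}!"));
        }
    }
}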

Now that we have a working test project, we should update our pipeline to build and run the test before deploying the application.

Updating the AWS CodeBuild job

Let’s first look at how the project is built. When you or your team member pushes a change to the repo, your pipeline automatically begins the build process against the latest change. During this step, AWS CodeBuild uses the buildspec.yml file in the root of the repository to drive the build process.

version: 0.2
phases:
  pre_build:
    commands:
      - echo Restore started on `date`
      - dotnet restore AspNetCoreWebService/AspNetCoreWebService.csproj
  build:
    commands:
      - echo Build started on `date`
      - dotnet publish -c release -o ./build_output AspNetCoreWebService/AspNetCoreWebService.csproj
artifacts:
  files:
    - AspNetCoreWebService/build_output/**/*
    - scripts/**/*
    - appspec.yml

The AWS CodeBuild job uses the .NET Core image for AWS CodeBuild, which contains the .NET Core SDK and CLI you will invoke in buildspec.yml.  Since this project consists of one web service, a single buildspec.yml file should be sufficient. As your project grows and the complexity of the build process increases, you may want to drive the build process externally via a shell script or an MSBuild .proj file and simply invoke the script/build file in buildspec.yml.

I would like to bring your attention to the dotnet publish command. This publishing step is crucial here, because it packages all dependencies together so that they are immediately available on the host machine. As defined in the artifacts section of the buildspec.yml file shown above, the list of files will be stored in an Amazon S3 bucket for AWS CodeDeploy to use to deploy your application onto the host. scripts/**/* contains all the scripts that appspec.yml depends on. If you're not familiar with appspec.yml or want to know more about it, we'll go over it in the next section.

In the previous section, we added a test project to our AWS CodeCommit repository. Now we should update buildspec.yml to build our new test project. We could simply run dotnet vstest as part of the build stage. However, in this exercise, let’s follow best practices by building separate stages for build and test. Let’s modify the buildspec.yml to build the test binaries and publish the bits into the AspNetCoreWebServiceTest/test_output directory.

pre_build:
    commands:
        ...
        - dotnet restore AspNetCoreWebServiceTest/AspNetCoreWebServiceTest.csproj
post_build:
    commands:
        ...
        - dotnet publish -c release -o ./test_output AspNetCoreWebServiceTest/AspNetCoreWebServiceTest.csproj  
artifacts:
    files:
        ...
        - AspNetCoreWebServiceTest/test_output/**/*

Notice that we added AspNetCoreWebServiceTest/test_output/**/* as an artifact. In effect, this directs the AWS CodeBuild service to upload the published test binaries to Amazon S3, so that we can reference them in the test job we will create next.

Updating AWS CodePipeline

In the previous sections, we added a new test project and modified buildspec.yml to build and save the binaries we need to run the tests. Now we’ll go over how to add a test stage in our pipeline. Let’s begin by adding a Test stage and a UnitTest action to the pipeline in the console.

Follow the rest of the UI and fill in these parameters:

  • Action category: Test
  • Action name: UnitTest
  • Test provider: AWS CodeBuild
  • Select Create a new build project
  • Project name: <your project name>-test
  • Operating system: Ubuntu
  • Runtime: .NET Core
  • Version: aws/codebuild/dot-net:core-1
  • For Build specification, select Insert build Commands
  • Build command: dotnet vstest AspNetCoreWebServiceTest/test_output/AspNetCoreWebServiceTest.dll
  • For Role name, select CodeStarWorker-<your project name>-CodeBuild from the list
  • For Input artifacts #1, select <your project name>-BuildArtifact from the list

The key piece of information here is the build command you provide. Our test job will run dotnet vstest against the test .dll built in the previous stage. Your pipeline should now look like this.

We’re almost done! If you run this pipeline by pressing Release change, the pipeline will fail on the Test stage with the message Error Code: AccessDeniedException. This is because the AWS CodeStar service doesn’t have permission to run our new Test stage. Let’s figure out how to grant appropriate access to our AWS CodeStar project.

Updating the role policy

Your AWS CodeStar project created least-privilege policies for the various services and workers that sync, build, and deploy your application. Because we added a new AWS CodeBuild job, we need to grant access to our new resource in CodeStarWorkerCodePipelinePolicy. Let's navigate to the IAM console to make this change. On the Roles tab, search using the "codebuild" keyword. The role should be in the format CodeStarWorker-<project name>-CodePipeline. Then, edit the policy attached to the role. This is shown below.

The change we want to make is to add our new AWS CodeBuild resource, arn:aws:codebuild:us-east-1:532345249509:project/<your project name>-test, to the statement associated with the AWS CodeBuild actions in the policy.

{
    "Action": [
        "codebuild:StartBuild",
        "codebuild:BatchGetBuilds",
        "codebuild:StopBuild"
    ],
    "Resource": [
        "arn:aws:codebuild:us-east-1:532345249509:project/<your project name>"
        "arn:aws:codebuild:us-east-1:532345249509:project/<your project name>-test"
    ],
    "Effect": "Allow"
}

That’s it. Your AWS CodeStar project should now have appropriate permission to build the new job. Give it a try by pressing Release change.

ASP.NET Core application deployment

So far we’ve seen how AWS CodeStar builds and tests your project. In this section, we look closer at the deployment process. As part of the AWS CodeStar project creation, the AWS CodeStar service creates an Amazon EC2 instance to host your application. It also installs code-deploy-agent, which runs the deployment process on that instance following the instructions in appspec.yml. Let’s take a look at appspec.yml.

version: 0.0
os: linux
files:
  - source: AspNetCoreWebService/build_output
    destination: /home/ubuntu/aspnetcoreservice
  - source: scripts/virtualhost.conf
    destination: /home/ubuntu/aspnetcoreservice 
hooks:
  ApplicationStop:
    - location: scripts/stop_service
      timeout: 300
      runas: root

  BeforeInstall:
    - location: scripts/remove_application
      timeout: 300
      runas: root

  AfterInstall:
    - location: scripts/install_dotnetcore
      timeout: 500
      runas: root

    - location: scripts/install_httpd
      timeout: 300
      runas: root

  ApplicationStart:
    - location: scripts/start_service
      timeout: 300
      runas: root

Each script is run at various stages of the deployment process:

  • install_dotnetcore – Installs dotnet core if it isn’t already installed, and updates the package cache on the first run. This is Microsoft’s recommended way of installing .NET Core on Ubuntu.
  • install_httpd – Installs HTTPD daemon and mods, and overwrites the HTTPD configuration file to enable reverse-proxy.
  • start_service – Restarts the HTTPD service and restarts the existing ASP.NET application/service process.
  • stop_service – Stops the HTTPD service and stops the ASP.NET application/service if it is already running.
  • remove_application – Removes the deployed application from the instance.

The code-deploy-agent on the instance runs these hooks during the application deployment to install and start the service. You can monitor the deployment events in the AWS CodeDeploy console and grab a detailed log from the EC2 instance. After opening an SSH connection to the instance, navigate to /var/log/aws/codedeploy-agent to find the deployment logs.

Conclusion

In this blog post, you learned how your ASP.NET Core project for AWS CodeStar is built and deployed through the example of adding a test stage to your application’s pipeline. I hope this post helped you understand how various components and AWS services interact to provide you with a complete CI/CD system under AWS CodeStar. To learn more, visit the AWS CodeStar user guide. If you run into issues that are specific to AWS CodeStar, see the AWS CodeStar troubleshooting guide.

Screencast using .NET Core with AWS Serverless from NDC Oslo

by Norm Johanson

Last month I had the pleasure of speaking at the NDC conference in Oslo about .NET Core and AWS serverless technologies.

The talk focused on a new reference application I have been working on called Pollster. Two years ago, at the 2015 AWS re:Invent conference, we demoed a version of Pollster using .NET Core (known back then as ASP.NET 5) and Docker. It was great to revisit this app and think about how to solve its technology challenges using serverless technology.

Thanks to the NDC team, a screencast of my talk has been uploaded. Check out the screencast to see how I used AWS serverless services such as AWS Lambda, Amazon API Gateway, and AWS Step Functions. The application isn't feature-complete yet, but you can find the source on GitHub.

Improvements for AWS CloudFormation and Amazon CloudWatch in the AWS Tools for PowerShell Modules

Trevor Sullivan, a Systems Development Engineer here at Amazon, recently contributed some new AWS CloudFormation helper cmdlets and improved formatting for types he works with on a daily basis. These updates were released in version 3.3.119.0 of the AWS Tools for PowerShell modules (AWSPowerShell and AWSPowerShell.NetCore), along with new support for customizable Amazon CloudWatch dashboards. In this guest post, Trevor takes us through the updates.

Pause a script until a CloudFormation stack status is reached

If you want to pause your PowerShell script until a CloudFormation stack reaches a certain status, you can use the Wait-CFNStack cmdlet. You use Wait-CFNStack to specify a CloudFormation stack name and the status code that you want to wait for. All of the supported CloudFormation statuses are provided with IntelliSense/tab-completion for the -Status parameter, so you don’t need to look them up! Let’s take a look at how you use this cmdlet.

$Common = @{
    ProfileName = 'default'
    Region = 'us-east-2'
}
$CloudFormation = @{
    StackName = 'AWSCloudFormation'
    TemplateBody = @'
    AWSTemplateFormatVersion: '2010-09-09'
    Resources:
      myBucket:
        Type: AWS::S3::Bucket
    Outputs:
      BucketName:
        Value: !Ref myBucket
'@
}
New-CFNStack @CloudFormation @Common
Wait-CFNStack -StackName $CloudFormation.StackName -Status CREATE_COMPLETE @Common

Test the existence of the CloudFormation stack

Have you ever wanted to simply test whether a CloudFormation stack exists in a certain AWS Region? If so, we now have a cmdlet for that. The Test-CFNStack cmdlet simply returns a Boolean $true if the specified stack exists, or $false if it doesn’t. If your stack doesn’t exist, you no longer have to worry about catching exceptions thrown by the Get-CFNStack cmdlet!

$Common = @{
    ProfileName = 'default'
    Region = 'us-east-2'
}

if (Test-CFNStack -StackName $CloudFormation.StackName @Common) {
    Remove-CFNStack -StackName $CloudFormation.StackName -Force @Common
}

Format types

Another customer-obsessed enhancement in the latest version of the modules deals with the default display of certain objects. In earlier versions complex objects such as CloudFormation stacks were typically displayed in the vertical “list” format (see the Format-List PowerShell cmdlet). The “list” output format doesn’t use horizontal screen space very effectively. As a result, you have to scroll a lot to find what you want and the output isn’t easy to consume.

Instead, we opted to improve the default output to use the PowerShell table format. This makes data easier to consume, so you don’t have to scroll as much. It also limits focus to the object properties that you care about the most.

If you prefer the “list” format, you can still use it by piping your objects into the Format-List PowerShell cmdlet. The default output has simply been changed to use a tabular format to make data easier to interact with and consume.

The new format types work with cmdlets that emit complex objects, such as:

  • Get-CFNStackEvent
  • Get-CFNStack
  • Get-IAMRoleList
  • Get-CWERule
  • Get-LMFunctionList
  • Get-ASAutoScalingGroup
  • Get-WKSWorkspace
  • Get-CWAlarm

The changelog for version 3.3.119.0 of the module lists all the types for which new formats have been specified. You can view the changelog for the release on the PowerShell Gallery.

Manage CloudWatch dashboards

AWS customers who use CloudWatch to store and view metrics will appreciate the new CloudWatch dashboard APIs. You can now use PowerShell cmdlets to create, list, and delete CloudWatch dashboards!

I’ve already created a CloudWatch dashboard in my account, so let’s check out how we can export it, modify it, and then update it. Let’s start by discovering which AWS cmdlets relate to CloudWatch dashboards by using Get-AWSCmdletName.

PS /Users/tsulli> Get-AWSCmdletName -MatchWithRegex dashboard

CmdletName           ServiceOperation         ServiceName       CmdletNounPrefix
----------           ----------------         -----------       ----------------
Get-CWDashboard      GetDashboard             Amazon CloudWatch CW
Get-CWDashboardList  ListDashboards           Amazon CloudWatch CW
Remove-CWDashboard   DeleteDashboards         Amazon CloudWatch CW
Write-CWDashboard    PutDashboard             Amazon CloudWatch CW

Now, let’s discover which CloudWatch dashboards already exist in the us-west-2 AWS Region by using Get-CWDashboardList.

PS /Users/tsulli> Get-CWDashboardList -Region us-west-2

DashboardArn   DashboardName   LastModified        Size
------------   -------------   ------------        ----
               MacBook-Pro     7/6/17 7:50:16 PM   1510

As you can see, I’ve got a single CloudWatch dashboard in my test account, with some interesting metrics about my MacBook Pro. Coincidentally, these hardware metrics are also being written to CloudWatch metrics using the AWSPowerShell.NETCore module.

Now let’s grab some detailed information about this specific CloudWatch dashboard. We do this using the Get-CWDashboard cmdlet, and simply passing in the region and name of the dashboard. Be sure to remember that the dashboard name is a case-sensitive input parameter.

PS /Users/tsulli> $Dashboard = Get-CWDashboard -DashboardName MacBook-Pro -Region us-west-2
PS /Users/tsulli> $Dashboard | Format-List

LoggedAt : 7/7/17 1:44:44 PM
DashboardArn : arn:aws:cloudwatch::123456789012:dashboard/MacBook-Pro
DashboardBody : {"widgets......
DashboardName :
ResponseMetadata : Amazon.Runtime.ResponseMetadata
ContentLength : 3221
HttpStatusCode : OK

For readability in this article, I’ve trimmed the DashboardBody property. However, it contains a lengthy string with the JSON that represents my CloudWatch dashboard. I can use the ConvertFrom-Json cmdlet to convert the string to a usable object in PowerShell.

PS /Users/tsulli> $DashboardObject = $Dashboard.DashboardBody | ConvertFrom-Json

Now let's update the title field of all the widgets on the CloudWatch dashboard. Let's change the beginning of each widget's title from "Trevor" to "David". Right now, the title reads "Trevor's MacBook Pro". After updating it, the widget titles will read "David's MacBook Pro". We'll use the ForEach method syntax in PowerShell to do this. Each widget has a property named properties, which has a title string property. We'll do a simple string replacement operation on this property's value.

PS /Users/tsulli> $DashboardObject.widgets.ForEach({ $PSItem.properties.title = $PSItem.properties.title.Replace('Trevor', 'David') })

Now that we’ve modified the widget titles, let’s convert the dashboard back to JSON and overwrite our dashboard! We’ll use ConvertTo-Json to convert the dashboard object back into its JSON representation. Then we’ll call Write-CWDashboard to commit the updated dashboard back to the CloudWatch service.

PS /Users/tsulli> $DashboardJson = $DashboardObject | ConvertTo-Json -Depth 8
PS /Users/tsulli> Write-CWDashboard -DashboardBody $DashboardJson -DashboardName MacBook-Pro -Region us-west-2

Great! Now if you go back to the AWS Management Console and visit your CloudWatch dashboard, you’ll see that your widgets have updated titles!

Conclusion

We hope you enjoy the continued improvements to the AWS Tools for PowerShell customer experience! If you have feedback on these improvements, please let us know. You can:

  • Leave comments and feedback in our AWS SDK forums.
  • Tweet to us at @awscloud and @awsfornet.
  • Comment on this article!

General Availability of the AWS Toolkit for Visual Studio 2017

We’re pleased to announce that the AWS Toolkit for Visual Studio is now generally available (GA) for Visual Studio 2017. You can install it from the Visual Studio Gallery. The GA version is 1.12.1.0.

Unlike previous versions of the toolkit, new versions that target Visual Studio 2017 and later editions are distributed through the Visual Studio Gallery. New features in the Visual Studio installation experience provide the setup capabilities that previously required a Windows Installer. No more! Now you can use the familiar Extensions and Updates dialog box in the IDE to obtain updates.

For users who require the toolkit in earlier versions of Visual Studio (2013 and 2015), or the awsdeploy.exe standalone deployment tool, we’ll continue to maintain our current Windows Installer. In addition to the toolkit and deployment tool, this installer contains the .NET 3.5 and .NET 4.5 AWS SDK for .NET assemblies. It also contains the AWS Tools for Windows PowerShell module. If you have PowerShell version 5, you can also install the AWSPowerShell module from the PowerShell Gallery.

Thanks to everyone who tried out the preview versions of the AWS Toolkit for Visual Studio 2017 and for providing feedback!

Updates to AWSPowerShell Cmdlet Names

Since we launched the first AWS module for PowerShell five years ago, we’ve been hugely encouraged by the feedback from the user community, from first-time PowerShell users to PowerShell MVPs. In that time, we’ve acted immediately on some feedback, and put more complex changes into our backlog for future consideration.

One common request from experienced community members was to rename some cmdlets to change plural nouns to singular, as is the recommended practice. (We had initially created them with plural nouns as we tried to map them closely to the underlying service operation names).

Today we’re pleased to announce we’ve completed the remapping work. We’ve released new versions of the AWSPowerShell and AWSPowerShell.NetCore modules – version 3.3.96.0 – with singular cmdlet names and other cross-service consistency changes.

All the cmdlet name changes have backward-compatible aliases, so you should not have to update any existing scripts. You can continue to use the earlier names.

As we analyzed the cmdlets to determine where we needed to make changes, we found fewer problematic cases than we feared, but more than we'd like! To list them all in this blog post would be overwhelming. Instead, we added a page to our cmdlet cross-reference on the web at https://docs.aws.amazon.com/powershell/latest/reference/items/pstoolsref-legacyaliases.html. You can also quickly inspect the changes by opening the AWSPowerShellLegacyAliases.psm1 file in the module distribution. For convenience, both of these resources list the aliases by service.

Please keep your feedback and feature requests coming! We really enjoy getting feedback (good or bad) and using it to plan improvements to the tools to address the problems you handle day to day.

Security update to AWS SDK for .NET’s Amazon CloudFront Cookie Signer

by Milind Gokarn

The AWS SDK for .NET has a utility class, Amazon.CloudFront.AmazonCloudFrontCookieSigner, for creating signed cookies to access private content served using Amazon CloudFront. An earlier blog post contains details on using this utility class, along with sample code.

We identified an issue in this utility class: specifying AmazonCloudFrontCookieSigner.Protocols.Https as the protocol parameter creates a cookie with an incorrect policy; the policy contains a resource restriction of "http*://" instead of "https://".

Potential Impact

CloudFront distributions configured to serve both HTTP and HTTPS requests are affected by this issue, unless the Viewer Protocol Policy is configured as HTTPS, in which case CloudFront blocks attempts to access content over HTTP.

Impacted SDK versions

  • Versions 2.3.36 to 2.3.55 for version 2 of the AWS SDK for .NET
  • Versions 3.0.1-preview to 3.3.3.6 for package AWSSDK.CloudFront of the AWS SDK for .NET
  • Versions 3.2.0-beta to 3.2.3.7-beta, and 3.2.8-rc for package AWSSDK.CloudFront in the preview version 3.2 of the AWS SDK for .NET, which targets .NET Core

Mitigation

Update your dependency to the latest version of the SDK. The fix contains a change to the AmazonCloudFrontCookieSigner.Protocols enum’s underlying values (a breaking change) and requires a recompilation of the consuming application. The assembly version of the SDK package has been updated for this fix. There are no other breaking API changes in this version.

  • Version 2.3.55.2 and above for package AWSSDK in version 2 of the AWS SDK for .NET
  • Version 3.3.4.0 and above for package AWSSDK.CloudFront in version 3 of the AWS SDK for .NET