

Deploy an Existing ASP.NET Core Web API to AWS Lambda

by Norm Johanson | in .NET

In the previous post, we talked about the new ASP.NET Core Web API blueprint for AWS Lambda, and the Amazon.Lambda.AspNetCoreServer NuGet package that made it possible to run the ASP.NET Core Web API through Lambda. But what if you already have an existing ASP.NET Core Web API that you want to try as a serverless application? You can do this by following these steps:

  • Add the Amazon.Lambda.AspNetCoreServer NuGet package.
  • Add a Lambda function and bootstrap the ASP.NET Core framework.
  • Add the Amazon.Lambda.Tools NuGet package to enable the toolkit’s deployment features.
  • Add a serverless.template file to define Amazon API Gateway.
  • Deploy the project.

Let’s take a deeper look at each step.

Setting Up the Lambda Function

The first step is to add the Amazon.Lambda.AspNetCoreServer NuGet package that bridges the communication between Amazon API Gateway and the ASP.NET Core framework.

After you add the package, add a new class named LambdaFunction and have it extend from Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction. You have to implement the abstract method Init to bootstrap the ASP.NET Core framework.


public class LambdaFunction : Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction
{
    protected override void Init(IWebHostBuilder builder)
    {
        builder
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseStartup<Startup>()
            .UseApiGateway();
    }
}

Enable Tool Support in the AWS Toolkit for Visual Studio

In order for the AWS Toolkit for Visual Studio to recognize the project as a Lambda project, you have to add the Amazon.Lambda.Tools NuGet package. This package isn’t used as part of the runtime of the function and is added as a build tool.


{  
  "dependencies": {
    ...

    "Amazon.Lambda.AspNetCoreServer": "0.8.4-preview1",
    "Amazon.Lambda.Tools": {
      "type": "build",
      "version": "1.1.0-preview1"
    }
  },

  ...
}

To also enable the integration with the .NET Core CLI, list the Amazon.Lambda.Tools NuGet package in the tools section in the project.json file.


{
  ...

  "tools": {
    "Microsoft.AspNetCore.Server.IISIntegration.Tools": "1.0.0-preview2-final",
    "Amazon.Lambda.Tools": "1.1.0-preview1"
  },

  ...
}

Configuring Amazon API Gateway

At this point, you could right-click the project and deploy it to Lambda, but it wouldn’t be fronted by API Gateway exposing the function as an HTTP REST API. The easiest way to do that is to add a serverless.template file to the project and deploy the project as an AWS Serverless project.

Add a serverless.template file to the project by right-clicking the project and choosing Add, AWS Serverless Template.

add-serverless

The default serverless.template file contains one function definition configured to be exposed by API Gateway using proxy integration, so all requests will go to that function. This is exactly what you need for an ASP.NET Core Web API project. The only thing that needs to be updated is the handler field. The format for the handler field is <assembly-name>::<namespace>.LambdaFunction::FunctionHandlerAsync. The FunctionHandlerAsync method is inherited from the base class of our LambdaFunction class.


{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Transform" : "AWS::Serverless-2016-10-31",
  "Description" : "Starting template for an AWS Serverless Application.",
  "Parameters" : {
  },
  "Resources" : {
    "DefaultFunction" : {
      "Type" : "AWS::Serverless::Function",
      "Properties": {
        "Handler": "ExistingWebAPI::ExistingWebAPI.LambdaFunction::FunctionHandlerAsync",
        "Runtime": "dotnetcore1.0",
        "CodeUri": "",
        "Description": "Default function",
        "MemorySize": 256,
        "Timeout": 30,
        "Role": null,
        "Policies": [ "AWSLambdaFullAccess" ],
        "Events": {
          "PutResource": {
            "Type": "Api",
            "Properties": {
              "Path": "/{proxy+}",
              "Method": "ANY"
            }
          }
        }
      }
    }
  },
  "Outputs" : {
  }
}

Deploy

Now you can deploy the ASP.NET Core Web API to either AWS Elastic Beanstalk or Lambda. The deployment process works in the same way that we’ve shown in previous blog posts about AWS Serverless projects.

deploy-selector

And that’s all you have to do to deploy an existing ASP.NET Core Web API project to Lambda.

Visit our .NET Core Lambda GitHub repository to let us know what you think of running ASP.NET Core applications as AWS serverless functions and to report any issues you might have. This will help us take the Amazon.Lambda.AspNetCoreServer NuGet package out of preview status.

Running Serverless ASP.NET Core Web APIs with AWS Lambda

by Norm Johanson | in .NET

One of the coolest things we demoed at our recent AWS re:Invent talk about .NET Core support for AWS Lambda was how to run an ASP.NET Core Web API with Lambda. We did this with the NuGet package Amazon.Lambda.AspNetCoreServer (which is currently in preview) and Amazon API Gateway. Today we’ve released a new AWS Serverless blueprint that you’ll see in Visual Studio or with our Yeoman generator that makes it easy to set up an ASP.NET Core Web API project as a Lambda project.

Blueprint Picker

How Does It Work?

Depending on your platform, a typically deployed ASP.NET Core application is fronted by either IIS or NGINX, which forwards requests to the ASP.NET Core web server named Kestrel. Kestrel marshals the request into the ASP.NET Core hosting framework.

Normal Flow

When running an ASP.NET Core application as an AWS Serverless application, IIS is replaced with API Gateway, and Kestrel is replaced with a Lambda function contained in the Amazon.Lambda.AspNetCoreServer package, which marshals the request into the ASP.NET Core hosting framework.

Serverless Flow

The Blueprint

The blueprint creates a project that's very similar to the one you would get if you selected the ASP.NET Core Web Application project type and chose the Web API template. The key difference is that instead of a Program.cs file containing a Main function that bootstraps the ASP.NET Core framework, the blueprint has a LambdaEntryPoint.cs file that bootstraps the ASP.NET Core framework.


public class LambdaEntryPoint : Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction
{
    protected override void Init(IWebHostBuilder builder)
    {
        builder
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseStartup<Startup>()
            .UseApiGateway();
    }
}

The actual Lambda function comes from the base class. The function handler for the Lambda function is set in the AWS CloudFormation template named serverless.template, which will be in the format <assembly-name>::<namespace>.LambdaEntryPoint::FunctionHandlerAsync.

The blueprint also has LocalEntryPoint.cs that works in the same way as the original Program.cs file, enabling you to run and develop your application locally and then deploy it to Lambda.
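
A local entry point in the ASP.NET Core 1.0 era is essentially the familiar Kestrel bootstrap; here is a minimal sketch of what it typically looks like (the blueprint's exact code may differ):

public class LocalEntryPoint
{
    public static void Main(string[] args)
    {
        // Standard ASP.NET Core hosting code: run the same Startup class locally under Kestrel.
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}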

The remainder of the project’s files are the usual ones you would find in an ASP.NET Core application. The blueprint contains two Web API controllers. The first is the example ValuesController, which is found in the starter ASP.NET Core Web API project. The other controller is S3ProxyController, which demonstrates how to use HTTP GET, PUT, and DELETE requests to a controller and uses the AWS SDK for .NET to make the calls to an Amazon S3 bucket. The name of the S3 bucket to use is obtained from the Configuration object, which means you can set the bucket in the appsettings.json file for local development.


{
  ...

  "AppS3Bucket": "ExampleBucketName"
}
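
To give a concrete feel for the S3ProxyController described above, here is a hedged sketch of just the GET path (the constructor injection shown and the exact member names are assumptions; the real blueprint controller also handles PUT and DELETE):

[Route("api/[controller]")]
public class S3ProxyController : Controller
{
    private readonly IAmazonS3 s3Client;
    private readonly string bucketName;

    public S3ProxyController(IConfiguration configuration, IAmazonS3 s3Client)
    {
        this.s3Client = s3Client;

        // The bucket name comes from appsettings.json locally, or from the
        // environment variable set by serverless.template when deployed.
        this.bucketName = configuration["AppS3Bucket"];
    }

    [HttpGet("{key}")]
    public async Task<IActionResult> Get(string key)
    {
        // Stream the S3 object's content back to the HTTP caller.
        var response = await this.s3Client.GetObjectAsync(this.bucketName, key);
        return File(response.ResponseStream, response.Headers.ContentType);
    }
}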

The Configuration object is built by using environment variables.


public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);

    builder.AddEnvironmentVariables();
    Configuration = builder.Build();
}

When the application is deployed, serverless.template is used to create the bucket and then pass the bucket’s name to the Lambda function as an environment variable.


...

"Get" : {
  "Type" : "AWS::Serverless::Function",
  "Properties": {
    "Handler": "AspNetCoreWithLambda::AspNetCoreWithLambda.LambdaEntryPoint::FunctionHandlerAsync",
    "Runtime": "dotnetcore1.0",
    "CodeUri": "",
    "MemorySize": 256,
    "Timeout": 30,
    "Role": null,
    "Policies": [ "AWSLambdaFullAccess" ],
    "Environment" : {
      "Variables" : {
        "AppS3Bucket" : { "Fn::If" : ["CreateS3Bucket", {"Ref":"Bucket"}, { "Ref" : "BucketName" } ] }
      }
    },
    "Events": {
      "PutResource": {
        "Type": "Api",
        "Properties": {
          "Path": "/{proxy+}",
          "Method": "ANY"
        }
      }
    }
  }
},

...

Logging

ASP.NET Core introduced a new logging framework. To help integrate with the logging framework, we’ve also released the NuGet package Amazon.Lambda.Logging.AspNetCore. This logging provider allows any code that uses the ILogger interface to record log messages to the associated Amazon CloudWatch log group for the Lambda function. When used outside of a Lambda function, the log messages are written to the console.

The blueprint enables the provider in Startup.cs, where other services are configured.


public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddLambdaLogger(Configuration.GetLambdaLoggerOptions());
    app.UseMvc();
}

The preceding snippet calls GetLambdaLoggerOptions on the Configuration object, which grabs the configuration of which messages to write to CloudWatch Logs. The appsettings.json file in the blueprint configures logging so that messages coming from classes under the Microsoft namespace are written only if they're at informational level or above. All other log messages are written at debug level and above.


{
  "Lambda.Logging": {
    "LogLevel": {
      "Default": "Debug",
      "Microsoft": "Information"
    }
  },

  ...
}
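
As a usage illustration, any class that takes an ILogger<T> dependency picks up the provider automatically, so its messages end up in the function's CloudWatch log group. Here is a hedged sketch (the blueprint's own controllers may differ):

public class ValuesController : Controller
{
    private readonly ILogger<ValuesController> logger;

    public ValuesController(ILogger<ValuesController> logger)
    {
        this.logger = logger;
    }

    [HttpGet]
    public IEnumerable<string> Get()
    {
        // Written at Information level, so it passes the "Default": "Debug" filter above.
        this.logger.LogInformation("Returning the list of values");
        return new[] { "value1", "value2" };
    }
}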

For more information about this package, see the GitHub repository.

Deployment

Deploying the ASP.NET Core Web API works exactly as we showed you in the previous post about the AWS Serverless projects.

Deploy from Solution Explorer

Once deployed, a single Lambda function and an API Gateway REST API are configured to send all requests to the Lambda function. Then the Lambda function uses the ASP.NET Core framework to route to the correct Web API controller. You can test the deployment by accessing the two controllers using the AWS Serverless URL found in the CloudFormation stack view.

  • <aws-serverless-url>/api/values – Example controller
  • <aws-serverless-url>/api/s3proxy – S3 Proxy controller.

Feedback

We’re very excited about running ASP.NET Core applications on AWS Lambda. As you can imagine, the option of running the ASP.NET Core framework on top of Lambda opens lots of possibilities. The Amazon.Lambda.AspNetCoreServer package is in preview while we explore those possibilities. I highly encourage .NET developers to check out this blueprint and the Amazon.Lambda.AspNetCoreServer package and let us know on our GitHub repository or our new Gitter channel what you think and how we can continue to improve the library.

Amazon CloudWatch Logs and .NET Logging Frameworks

by Norm Johanson | in .NET

You can use Amazon CloudWatch Logs to monitor, store, and access your application’s logs. To get log data into CloudWatch Logs, you can use an AWS SDK or install the CloudWatch Log agent to monitor certain log folders. Today, we’ve made it even easier to use CloudWatch Logs with .NET applications by integrating CloudWatch Logs with several popular .NET logging frameworks.

The supported .NET logging frameworks are NLog, Log4net, and the new built-in ASP.NET Core logging framework. For each framework, all you need to do is add the appropriate NuGet package, add CloudWatch Logs as an output source, and then use your logging library as you normally would.

For example, to use CloudWatch Logs from a .NET application that uses NLog, add the AWS.Logger.NLog NuGet package, and then add the AWS target to your NLog.config file. Here is an example of an NLog.config file that enables both CloudWatch Logs and the console as output targets for the log messages.


<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      throwExceptions="true">
  <targets>
    <target name="aws" type="AWSTarget" logGroup="NLog.ConfigExample" region="us-east-1"/>
    <target name="logfile" xsi:type="Console" layout="${callsite} ${message}" />
  </targets>
  <rules>
    <logger name="*" minlevel="Info" writeTo="logfile,aws" />
  </rules>
</nlog>

After performing these steps, when you run your application the log messages written with NLog will be sent to CloudWatch Logs. Then you can view your application’s log messages in near real time from the CloudWatch Logs console. You can also set up metrics and alarms from the CloudWatch Logs console, based on your application’s log messages.
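
As a quick usage sketch (assuming a console application; nothing AWS-specific is needed in the calling code):

using NLog;

class Program
{
    private static readonly Logger Logger = LogManager.GetCurrentClassLogger();

    static void Main(string[] args)
    {
        // With the AWSTarget configured above, this message is written to both the
        // console and the NLog.ConfigExample log group in CloudWatch Logs.
        Logger.Info("Application started");
    }
}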

These logging plugins are all built on top of the AWS SDK for .NET, and use the same behavior used by the SDK to find AWS credentials. The credentials used by the logging plugins must have the following permissions to access CloudWatch Logs.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogGroups"
      ],
      "Resource": [
        "arn:aws:logs:*:*:*"
      ]
    }
  ]
}

The AWS .NET logging plugins are a new open source project on GitHub. All of the plugins are there, including samples and instructions on how to configure CloudWatch Logs for each of the supported .NET logging frameworks.

For any comments or issues for the new libraries, open an issue in the GitHub repository.

AWS Serverless Applications in Visual Studio

by Norm Johanson | in .NET

In the last post, I talked about the AWS Lambda Project template. The other new project template we added to Visual Studio is the AWS Serverless Application. This is our AWS Toolkit for Visual Studio implementation of the new AWS Serverless Application Model. Using this project type, you can develop a collection of AWS Lambda functions and deploy them with any necessary AWS resources as a whole application, using AWS CloudFormation to orchestrate the deployment.

To demonstrate this, let’s create a new AWS Serverless Application and name it Blogger.

serverless-new-project

As in the AWS Lambda Project, we can then choose a blueprint to help get started. For this post, we’re going to use the Blog API using DynamoDB blueprint.

serverless-blueprints

The Project Files

Now let’s take a look at the files in our serverless application.

Blog.cs

This is a simple class used to represent the blog items that are stored in Amazon DynamoDB.

Functions.cs

This file defines all the C# functions we want to expose as Lambda functions. There are four functions defined to manage a blog platform:

  • GetBlogsAsync – gets a list of all the blogs.
  • GetBlogAsync – gets a single blog identified by either the query parameter Id or the ID added to the URL resource path.
  • AddBlogAsync – adds a blog to the DynamoDB table.
  • RemoveBlogAsync – removes a blog from the DynamoDB table.

Each of these functions accepts an APIGatewayProxyRequest object and returns an APIGatewayProxyResponse. That is because these Lambda functions will be exposed as an HTTP API using Amazon API Gateway. The APIGatewayProxyRequest contains all the information representing the HTTP request. In the GetBlogAsync operation, you can see how we can find the ID of the blog from the resource path or query string.


public async Task<APIGatewayProxyResponse> GetBlogAsync(APIGatewayProxyRequest request, ILambdaContext context)
{
    string blogId = null;
    if (request.PathParameters != null && request.PathParameters.ContainsKey(ID_QUERY_STRING_NAME))
        blogId = request.PathParameters[ID_QUERY_STRING_NAME];
    else if (request.QueryStringParameters != null && request.QueryStringParameters.ContainsKey(ID_QUERY_STRING_NAME))
        blogId = request?.QueryStringParameters[ID_QUERY_STRING_NAME];

    ...
}
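
The elided portion loads the blog from DynamoDB and returns an APIGatewayProxyResponse; here is a hedged sketch of what the rest of the method might look like (the actual blueprint code may differ slightly):

    // Load the blog from the DynamoDB table and return 404 if it isn't found.
    var blog = await DDBContext.LoadAsync<Blog>(blogId);
    if (blog == null)
    {
        return new APIGatewayProxyResponse
        {
            StatusCode = (int)HttpStatusCode.NotFound
        };
    }

    // Serialize the blog as JSON into the proxy response body.
    return new APIGatewayProxyResponse
    {
        StatusCode = (int)HttpStatusCode.OK,
        Body = JsonConvert.SerializeObject(blog),
        Headers = new Dictionary<string, string> { { "Content-Type", "application/json" } }
    };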

In the default constructor for this class, we can also see how the name of the DynamoDB table storing our blogs is passed in as an environment variable. This environment variable is set when Lambda deploys our function.


public Functions()
{
    // Check to see if a table name was passed in through environment variables and, if so,
    // add the table mapping
    var tableName = System.Environment.GetEnvironmentVariable(TABLENAME_ENVIRONMENT_VARIABLE_LOOKUP);
    if(!string.IsNullOrEmpty(tableName))
    {
        AWSConfigsDynamoDB.Context.TypeMappings[typeof(Blog)] = new Amazon.Util.TypeMapping(typeof(Blog), tableName);
    }

    var config = new DynamoDBContextConfig { Conversion = DynamoDBEntryConversion.V2 };
    this.DDBContext = new DynamoDBContext(new AmazonDynamoDBClient(), config);
}

serverless.template

This file is the AWS CloudFormation template used to deploy the four functions. The parameters for the template enable us to set the name of the DynamoDB table, and choose whether we want CloudFormation to create the table or to assume the table is already created.

The template defines four resources of type AWS::Serverless::Function. This is a special meta resource defined as part of the AWS Serverless Application Model specification. The specification is a transform that is applied to the template as part of the CloudFormation deployment. The transform expands the meta resource type into the more concrete resources, like AWS::Lambda::Function and AWS::IAM::Role. The transform is declared at the top of the template file, as follows.


{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Transform" : "AWS::Serverless-2016-10-31",

  ...
  
 }

Now let’s take a look at the GetBlogs declaration in the template, which is very similar to the other function declarations.


"GetBlogs" : {
  "Type" : "AWS::Serverless::Function",
  "Properties": {
    "Handler": "Blogger::Blogger.Functions::GetBlogsAsync",
    "Runtime": "dotnetcore1.0",
    "CodeUri": "",
    "Description": "Function to get a list of blogs",
    "MemorySize": 256,
    "Timeout": 30,
    "Role": null,
    "Policies": [ "AWSLambdaFullAccess" ],
    "Environment" : {
      "Variables" : {
        "BlogTable" : { "Fn::If" : ["CreateBlogTable", {"Ref":"BlogTable"}, { "Ref" : "BlogTableName" } ] }
      }
    },
    "Events": {
      "PutResource": {
        "Type": "Api",
        "Properties": {
          "Path": "/",
          "Method": "GET"
        }
      }
    }
  }
}

You can see a lot of the fields here are very similar to what we saw when we did a Lambda project deployment. In the Environment property, notice how the name of the DynamoDB table is being passed in as an environment variable. The CodeUri property tells CloudFormation where in Amazon S3 your application bundle is stored. Leave this property blank because the toolkit will fill it in during deployment, after it uploads the application bundle to S3 (it won’t change the template file on disk when it does so).

The Events section is where we can define the HTTP bindings for our Lambda function. This takes care of all the API Gateway setup we need to do for our function. You can also set up other types of event sources in this section.

template-addeventsource

One of the great benefits of using CloudFormation to manage the deployment is we can also add and configure any other AWS resources necessary for our application in the template, and let CloudFormation take care of creating and deleting the resources.

template-addresources

Deploying

We deploy our serverless application in the same way we deployed the Lambda project previously: right-click the project and choose Publish to AWS Lambda.

serverless-publishmenu

This launches the deployment wizard, but this time it’s quite a bit simpler. Because all the Lambda configuration was done in the serverless.template file, all we need to supply are the following:

  • The name of our CloudFormation stack, which will be the container for all the resources declared in the template.
  • The S3 bucket to upload our application bundle to.

These should exist in the same AWS Region.

serverless-first-page

Because the serverless template has parameters, an additional page is displayed in the wizard where we specify the values for the parameters. We can leave the BlogTableName property blank and let CloudFormation generate a unique name for the table. We do need to set ShouldCreateTable to true so that CloudFormation will create the table. To use an existing table, enter the table name and set the ShouldCreateTable parameter to false. We can leave the other fields at their default values and choose Publish.

serverless-second-page

Once the publish step is complete, the CloudFormation stack view is displayed in AWS Explorer. This view shows the progress of the creation of all the resources declared in our serverless template.

serverless-stack-view

When the stack creation is complete, the root URL for the API Gateway is displayed on the page. If we click that link, it returns an empty JSON array because we haven't added any blogs to our table. To add blogs to our table, we need to make an HTTP PUT request to this URL, passing in a JSON document that represents the blog. We can do that in code or in any number of tools. I'll use the Postman tool, which is a Chrome browser extension, but you can use any tool you like. In this tool, I'll set the URL, change the method to PUT, and put some sample content in the Body tab. When we make the HTTP call, you can see that we get back the blog ID.

procman-addpost
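
If you'd rather make the call in code, here is a hedged sketch of the same PUT request using HttpClient (the URL placeholder and JSON property names are assumptions):

// Assumes this snippet runs inside an async method.
using (var client = new HttpClient())
{
    var blogJson = "{ \"Name\": \"My first serverless blog\", \"Content\": \"Hello from Lambda!\" }";
    var content = new StringContent(blogJson, Encoding.UTF8, "application/json");

    // PUT the blog document to the AWS Serverless URL from the CloudFormation stack view.
    var response = await client.PutAsync("<aws-serverless-url>/", content);

    // The response body contains the generated blog ID.
    Console.WriteLine(await response.Content.ReadAsStringAsync());
}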

Now if we go back to the browser with the link to our AWS Serverless URL, you can see we are getting back the blog we just posted.

get-post

Conclusion

Using the AWS Serverless Application template, you can manage a collection of Lambda functions and the application’s other AWS resources. Also, with the new AWS Serverless Application Model specification, we can use a simplified syntax to declare our serverless application in the CloudFormation template. If you have any questions or suggestions for blueprints, feel free to reach out to us on our .NET Lambda GitHub repository.

Using the AWS Lambda Project in Visual Studio

by Norm Johanson | in .NET

Last week we launched C# and .NET Core support for AWS Lambda. That release provided updated tooling for Visual Studio to help you get started writing your AWS Lambda functions and deploy them right from Visual Studio. In this post, we describe how to create, deploy, and test an AWS Lambda project.

Creating a Lambda Project

To get started writing Lambda functions in Visual Studio, you first need to create an AWS Lambda project. You can do this by using the Visual Studio 2015 New Project wizard. Under the Visual C# templates, there is a new category called AWS Lambda. You can choose between two types of project, AWS Lambda Project and AWS Serverless Application, and you also have the option to add a test project. In this post, we’ll focus on the AWS Lambda project and save AWS Serverless Application for a separate post. To begin, choose AWS Lambda Project with Tests (.NET Core), name the project ImageRekognition, and then choose OK.

lambda-new-project

On the next page, you choose the blueprint you want to get started with. Blueprints provide starting code to help you write your Lambda functions. For this example, choose the Detect Image Labels blueprint. This blueprint provides code for listening to Amazon S3 events and uses the newly released Amazon Rekognition service to detect labels and then add them to the S3 object as tags.

lambda-blueprints

When the project is complete, you will have a solution with two projects, as shown: the source project that contains your Lambda function code that will be deployed to AWS Lambda, and a test project using xUnit for testing your function locally.

lambda-solution-explorer

You might notice when you first create your projects that Visual Studio does not find all the NuGet references. This happens because these blueprints require dependencies that must be retrieved from NuGet. When new projects are created, Visual Studio only pulls in local references and not remote references from NuGet. You can fix this easily by right-clicking your references and choosing Restore Packages.

Lambda Function Source

Now let’s open the Function.cs file and look at the code that came with the blueprint. The first bit of code is the assembly attribute that is added to the top of the file.

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializerAttribute(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

By default, Lambda accepts only input parameters and return types of type System.IO.Stream. To use typed classes for input parameters and return types, we have to register a serializer. This assembly attribute is registering the Lambda JSON serializer, which uses Newtonsoft.Json to convert the streams to typed classes. The serializer can be set at the assembly or method level.
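
For reference, here is a hedged sketch of what the method-level form looks like:

// Alternative to the assembly-level attribute: register the serializer on the handler itself.
[LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]
public async Task FunctionHandler(S3Event input, ILambdaContext context)
{
    // Handler body, as shown later in this post.
    await Task.CompletedTask;
}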

The class has two constructors. The first is a default constructor that is used when Lambda invokes your function. This constructor creates the S3 and Rekognition service clients, and will get the AWS credentials for these clients from the IAM role we’ll assign to the function when we deploy it. The AWS Region for the clients will be set to the region your Lambda function is running in. In this blueprint, we only want to add tags to our S3 object if the Rekognition service has a minimum level of confidence about the label. This constructor will check the environment variable MinConfidence to determine the acceptable confidence level. We can set this environment variable when we deploy the Lambda function.

public Function()
{
    this.S3Client = new AmazonS3Client();
    this.RekognitionClient = new AmazonRekognitionClient();

    var environmentMinConfidence = System.Environment.GetEnvironmentVariable(MIN_CONFIDENCE_ENVIRONMENT_VARIABLE_NAME);
    if(!string.IsNullOrWhiteSpace(environmentMinConfidence))
    {
        float value;
        if(float.TryParse(environmentMinConfidence, out value))
        {
            this.MinConfidence = value;
            Console.WriteLine($"Setting minimum confidence to {this.MinConfidence}");
        }
        else
        {
            Console.WriteLine($"Failed to parse value {environmentMinConfidence} for minimum confidence. Reverting back to default of {this.MinConfidence}");
        }
    }
    else
    {
        Console.WriteLine($"Using default minimum confidence of {this.MinConfidence}");
    }
}

We can use the second constructor for testing. Our test project configures its own S3 and Rekognition clients and passes them in.

public Function(IAmazonS3 s3Client, IAmazonRekognition rekognitionClient, float minConfidence)
{
    this.S3Client = s3Client;
    this.RekognitionClient = rekognitionClient;
    this.MinConfidence = minConfidence;
}
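
For illustration, here's a hedged sketch of what such an xUnit test might look like, using the TestLambdaContext class from the Amazon.Lambda.TestUtilities package (the region, bucket, and key are placeholders for test resources you'd supply yourself; the blueprint's actual test may differ):

[Fact]
public async Task TestDetectLabels()
{
    var s3Client = new AmazonS3Client(RegionEndpoint.USWest2);
    var rekognitionClient = new AmazonRekognitionClient(RegionEndpoint.USWest2);

    // Build an S3 event that points at an image already uploaded to a test bucket.
    var s3Event = new S3Event
    {
        Records = new List<S3EventNotification.S3EventNotificationRecord>
        {
            new S3EventNotification.S3EventNotificationRecord
            {
                S3 = new S3EventNotification.S3Entity
                {
                    Bucket = new S3EventNotification.S3BucketEntity { Name = "my-test-bucket" },
                    Object = new S3EventNotification.S3ObjectEntity { Key = "sample.jpg" }
                }
            }
        }
    };

    // Construct the function with our own clients and a 60% minimum confidence.
    var function = new Function(s3Client, rekognitionClient, 60f);
    await function.FunctionHandler(s3Event, new TestLambdaContext());
}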

FunctionHandler is the method Lambda will call after it constructs the instance. Notice that the input parameter is of type S3Event and not a Stream. We can do this because of our registered serializer. The S3Event contains all the information about the event triggered in S3. The function loops through all the S3 objects that were part of the event and tells Rekognition to detect labels. After the labels are detected, they are added as tags to the S3 object.

public async Task FunctionHandler(S3Event input, ILambdaContext context)
{
    foreach(var record in input.Records)
    {
        if(!SupportedImageTypes.Contains(Path.GetExtension(record.S3.Object.Key)))
        {
            Console.WriteLine($"Object {record.S3.Bucket.Name}:{record.S3.Object.Key} is not a supported image type");
            continue;
        }

        Console.WriteLine($"Looking for labels in image {record.S3.Bucket.Name}:{record.S3.Object.Key}");
        var detectResponses = await this.RekognitionClient.DetectLabelsAsync(new DetectLabelsRequest
        {
            MinConfidence = MinConfidence,
            Image = new Image
            {
                S3Object = new Amazon.Rekognition.Model.S3Object
                {
                    Bucket = record.S3.Bucket.Name,
                    Name = record.S3.Object.Key
                }
            }
        });

        var tags = new List<Tag>();
        foreach(var label in detectResponses.Labels)
        {
            if(tags.Count < 10)
            {
                Console.WriteLine($"\tFound Label {label.Name} with confidence {label.Confidence}");
                tags.Add(new Tag { Key = label.Name, Value = label.Confidence.ToString() });
            }
            else
            {
                Console.WriteLine($"\tSkipped label {label.Name} with confidence {label.Confidence} because maximum number of tags reached");
            }
        }

        await this.S3Client.PutObjectTaggingAsync(new PutObjectTaggingRequest
        {
            BucketName = record.S3.Bucket.Name,
            Key = record.S3.Object.Key,
            Tagging = new Tagging
            {
                TagSet = tags
            }
        });
    }
    return;
}

Notice that the code contains calls to Console.WriteLine(). When the function is being run in AWS Lambda, all calls to Console.WriteLine() will redirect to Amazon CloudWatch Logs.

Default Settings File

Another file created with the blueprint is aws-lambda-tools-defaults.json. This file contains default values that the blueprint has set to help prepopulate some of the fields in the deployment wizard, and it also supplies default values for the command line options in our integration with the new .NET Core CLI. We'll dive deeper into the CLI integration in a later post, but to get started using it, navigate to the function's project directory and type dotnet lambda help.

{
  "Information" : [
    "This file provides default values for the deployment wizard inside Visual Studio and the AWS Lambda commands added to the .NET Core CLI.",
    "To learn more about the Lambda commands with the .NET Core CLI execute the following command at the command line in the project root directory.",

    "dotnet lambda help",

    "All the command line options for the Lambda command can be specified in this file."
  ],

  "profile":"",
  "region" : "",
  "configuration" : "Release",
  "framework" : "netcoreapp1.0",
  "function-runtime":"dotnetcore1.0",
  "function-memory-size" : 256,
  "function-timeout" : 30,
  "function-handler" : "ImageRekognition::ImageRekognition.Function::FunctionHandler"
}

An important field to understand is the function-handler. This indicates to Lambda the method to call in our code in response to our function being invoked. The format of this field is <assembly-name>::<full-type-name>::<method-name>. Be sure to include the namespace with the type name.

Deploying the Function

To get started deploying the function, right-click the Lambda project and then choose Publish to AWS Lambda. This starts the deployment wizard. Notice that many of the fields are already set. These values came from the aws-lambda-tools-defaults.json file described earlier. We do need to enter a function name. For this example, let's name it ImageRekognition, and then choose Next.

lambda-deployment-wizard-page1

On the next page, we need to select an IAM role that gives our code permission to access S3 and Rekognition. To keep this post short, let's select the Power User managed policy; the tools create a role for us based on this policy. Note that the ability to create a role from the Power User managed policy was added in version 1.11.1.0 of the toolkit.

Finally, we set the environment variable MinConfidence to 60, and then choose Publish.

lambda-deployment-wizard-page2

This launches the deployment process, which builds and packages the Lambda project and then creates the Lambda function. Once publishing is complete, the Function view in the AWS Explorer window is displayed. From here, we can invoke a test function, view CloudWatch Logs for the function, and configure event sources.

lambda-function-view

With our function deployed, we need to configure S3 to send its events to our new function. We do this by going to the event source tab and choosing Add. Then, we choose Amazon S3 and choose the bucket we want to connect to our Lambda function. The bucket must be in the same region as the region where the Lambda function was deployed.

Testing the Function

Now that the function is deployed and an S3 bucket is configured as an event source for it, open the S3 bucket browser from the AWS Explorer for the bucket we selected and upload some images.

When the upload is complete, we can confirm that our function ran by looking at the logs from our function view. Or, we can right-click the images in the bucket browser and select Properties. In the Properties dialog box on the Tags tab, we can view the tags that were applied to our object.

lambda-object-properties

Conclusion

We hope this post gives you a good understanding of how our tooling inside Visual Studio works for developing and creating Lambda functions. We’ll be adding more blueprints over time to help you get started using other AWS services with Lambda. The blueprints are hosted in our new Lambda .NET GitHub repository. If you have any suggestions for new blueprints, open an issue and let us know.

Retry Throttling

by Sattwik Pati | in .NET

In this blog post, we discuss the existing request retry feature, and the new retry throttling feature that we have rolled out in the AWS SDK for .NET V3 from version 3.3.4.0 of the AWSSDK.Core package.

In request retry, client-side requests are retried, and often succeed, in cases involving transient network or service issues. The advantage to you as a client is that you don't have to experience noise resulting from these exceptions, and you're saved the trouble of writing code that would retry these requests. The downside of this retry feature is that in situations such as a network outage or service unavailability, where all retried requests fail, the retries tie up the client application thread and produce fail-slow behavior. The client eventually gets a service unavailable exception that could have been surfaced earlier, without the retry loop. This delay in surfacing the exception hurts the client's recovery time and prolongs the client-side impact. We want to walk a middle ground where we still provide the request retry feature, but with some limiting constraints.

Retry throttling, as its name suggests, throttles retry attempts when a large number of retry requests are failing. Each time a retry request is made, an internal retry capacity pool is drained. Retry requests are no longer made once the capacity pool is depleted, and they are attempted again only after the client starts getting successful responses, which refill the capacity pool. Retry throttling takes care of “retry storm” situations by entering a fail-fast mode, in which the exceptions are surfaced and the needless retry loop is skipped. Also, because retry throttling kicks in only when a large number of requests and their retry attempts fail, transient retries are unaffected by this feature.
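
Conceptually, the mechanism works like a small token bucket shared by the client: retries withdraw from it, successful responses pay a little back, and when it's empty the SDK fails fast instead of retrying. Here is a hedged illustration of the idea only; this is not the SDK's actual implementation:

class RetryCapacityPool
{
    private const int MaxCapacity = 500;  // illustrative starting capacity
    private const int RetryCost = 5;      // capacity withdrawn per retry attempt
    private const int SuccessRefund = 1;  // capacity returned per successful response

    private int capacity = MaxCapacity;

    // Returns false when the pool is exhausted, signaling fail-fast behavior.
    public bool TryAcquireRetry()
    {
        if (capacity < RetryCost)
            return false;
        capacity -= RetryCost;
        return true;
    }

    // Successful responses slowly refill the pool, re-enabling retries.
    public void RecordSuccess()
    {
        if (capacity < MaxCapacity)
            capacity += SuccessRefund;
    }
}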

The AWS SDK for Java has already introduced this feature to great effect. Their blog post contains the metrics that compare situations when throttling is enabled versus when it is not.

Disabling Retry Throttling

Retry throttling is enabled by default and can be disabled easily by changing the ThrottleRetries property to false on the config object. We demonstrate this below by using an AmazonS3Config object.

AmazonS3Config config = new AmazonS3Config();
config.ThrottleRetries = false; // Default value is true
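
The config object is then passed to the service client constructor as usual; a minimal sketch:

// Retries still happen for this client, just without the throttling safeguard.
var s3Client = new AmazonS3Client(config);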

As you can see, it’s easy to opt out of this feature. Retry throttling can improve the ability of the SDK to adapt to sub-optimal situations. Feel free to leave questions or comments below!

General Availability for .NET Core Support in the AWS SDK for .NET

by Norm Johanson | in .NET

Today, we announce the general availability (GA) of our .NET Core support in the AWS SDK for .NET. Previously, we’ve supported .NET Core in our 3.2.x beta NuGet packages while maintaining our 3.1.x NuGet packages on our stable master branch with the frequent AWS service updates.

With the move to GA status for .NET Core, we’ve merged .NET Core support into the stable master branch and, going forward, will release version 3.3.x NuGet packages for the AWS SDK for .NET. We’ll add AWS service updates to our .NET Core version at the same time we add them to the rest of the .NET platforms we support, like .NET Framework 3.5 and 4.5. The SDK’s change of status also means our AWS Tools for PowerShell Core module (AWSPowerShell.NetCore) is at GA status, and its version bumps to 3.3.x to match the underlying SDK version.

This release is one more step in our continuing support for .NET Core on AWS. Other exciting .NET Core releases we’ve had this year include:

For help setting up and configuring the SDK for use with .NET Core, see our previous post on some of the extensions we added to take advantage of the new .NET Core frameworks.

We welcome your feedback. Check out our GitHub repository and let us know what you think of our .NET and .NET Core support.

Configuring AWS SDK with .NET Core

by Norm Johanson | in .NET

One of the biggest changes in .NET Core is the removal of ConfigurationManager and the standard app.config and web.config files that were used ubiquitously with .NET Framework and ASP.NET applications. The AWS SDK for .NET used this configuration system to set things like AWS credentials and region so that you wouldn’t have to do this in code.

A new configuration system in .NET Core allows any type of input source from any location. Also, the configuration object isn’t a global singleton like the old ConfigurationManager was, so the AWS SDK for .NET doesn’t have access to read settings from it.

To make it easy to use the AWS SDK for .NET with .NET Core, we have released a new NuGet package called AWSSDK.Extensions.NETCore.Setup. Like many .NET Core libraries, it adds extension methods to the IConfiguration interface to make getting the AWS configuration seamless.

Using AWSSDK.Extensions.NETCore.Setup

If we create an ASP.NET Core MVC application in Visual Studio, the constructor for Startup.cs handles configuration by reading in various input sources, using the ConfigurationBuilder and setting the Configuration property to the built IConfiguration object.

public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
        .AddEnvironmentVariables();
    Configuration = builder.Build();
}

To use the Configuration object to get the AWS options, we first add the AWSSDK.Extensions.NETCore.Setup NuGet package. Then, we add our options to the configuration file. Notice that one of the files added to the ConfigurationBuilder is called $"appsettings.{env.EnvironmentName}.json". If you look at the Debug tab in the project’s properties, you can see the hosting environment is set to Development. This works great for local testing because we can put our configuration in the appsettings.Development.json file, which is loaded only during local testing in Visual Studio. When we deploy to an Amazon EC2 instance, the EnvironmentName defaults to Production, so this file is ignored and the AWS SDK for .NET falls back to the IAM credentials and region configured for the EC2 instance.

Let’s add an appsettings.Development.json file to our project and supply our AWS settings.

{
  "AWS": {
    "Profile": "local-test-profile",
    "Region": "us-west-2"
  }
}

To get the AWS options set in the file, we call the extension method that is added to IConfiguration, GetAWSOptions. To construct a service client from these options, we call CreateServiceClient. The following example code shows how to create an S3 service client.

var options = Configuration.GetAWSOptions();
IAmazonS3 client = options.CreateServiceClient<IAmazonS3>();

ASP.NET Core Dependency Injection

The AWSSDK.Extensions.NETCore.Setup NuGet package also integrates with a new dependency injection system in ASP.NET Core. The ConfigureServices method in Startup is where the MVC services are added. If the application is using Entity Framework, this is also where that is initialized.

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc();
}

The AWSSDK.Extensions.NETCore.Setup NuGet package adds new extension methods to IServiceCollection that you can use to add AWS services to the dependency injection. The following code shows how we add the AWS options read from IConfiguration and add S3 and Amazon DynamoDB to our list of services.

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc();
    services.AddDefaultAWSOptions(Configuration.GetAWSOptions());
    services.AddAWSService<IAmazonS3>();
    services.AddAWSService<IAmazonDynamoDB>();
}

Now, if our MVC controllers use either IAmazonS3 or IAmazonDynamoDB as parameters in their constructors, the dependency injection system passes those services in.

public class HomeController : Controller
{
    IAmazonS3 S3Client { get; set; }

    public HomeController(IAmazonS3 s3Client)
    {
        this.S3Client = s3Client;
    }

    ...

}

Summary

We hope this new AWSSDK.Extensions.NETCore.Setup NuGet package helps you get started with ASP.NET Core and AWS. Feel free to give us your feedback at our GitHub repository for the AWS SDK for .NET.

Custom Elastic Beanstalk Application Deployments

by Norm Johanson | in .NET

In the previous post, you learned how to use the new deployment manifest for the Windows container in AWS Elastic Beanstalk to deploy a collection of ASP.NET Core and traditional ASP.NET applications. The deployment manifest supports a third deployment type: custom application deployment.

Custom application deployment is a powerful feature for advanced users who want to leverage the power of Elastic Beanstalk to create and manage their AWS resources and also have complete control over how their application is deployed. For a custom application deployment, you declare the PowerShell scripts for the three actions that Elastic Beanstalk performs: install, restart, and uninstall. Install is used when a deployment is initiated, restart is used when the RestartAppServer API is called (which can be done from either the toolkit or the web console), and uninstall is invoked on the previous deployment whenever a new deployment occurs.

For example, you might have an ASP.NET application that you want to deploy, and your documentation team has written a static website that they want to include with the deployment. You can do this by writing your deployment manifest as follows.

{
  "manifestVersion": 1,
  "deployments": {
 
    "msDeploy": [
      {
        "name": "app",
        "parameters": {
          "appBundle": "CoolApp.zip",
          "iisPath": "/"
        }
      }
    ],
    "custom": [
      {
        "name": "PowerShellDocs",
        "scripts": {
          "install": {
            "file": "install.ps1"
          },
          "restart": {
            "file": "restart.ps1"
          },
          "uninstall": {
            "file": "uninstall.ps1"
          }
        }
      }
    ]
  }
}

The scripts listed for each action are in the application bundle relative to the deployment manifest file. For this example, the application bundle will also contain a documentation.zip file that contains the static website from your documentation team.

The install.ps1 script extracts the .zip file and sets up the IIS path.

Add-Type -assembly "system.io.compression.filesystem"
[io.compression.zipfile]::ExtractToDirectory('./documentation.zip', 'c:\inetpub\wwwroot\documentation')

C:\Windows\SysNative\WindowsPowerShell\v1.0\powershell.exe -Command {New-WebApplication -Name documentation -PhysicalPath c:\inetpub\wwwroot\documentation -Force}

Because your application is running in IIS, the restart action will invoke an IIS reset.

iisreset /timeout:1

For uninstall scripts, it’s important to clean up all the settings and files that were created during the install stage so that the new version being installed doesn’t collide with the previous deployment. For this example, you need to remove the IIS application for the static website and remove the files.

C:\Windows\SysNative\WindowsPowerShell\v1.0\powershell.exe -Command {Remove-WebApplication -Name documentation}

Remove-Item -Recurse -Force 'c:\inetpub\wwwroot\documentation'

Using these script files and the documentation.zip file included in the application bundle, the deployment will deploy your ASP.NET application and then deploy the documentation site.

This example showed a simple deployment of a static website. By using the power of custom application deployment, you can deploy any type of application and let Elastic Beanstalk manage the AWS resources for the application.

Multiple Application Support for .NET and Elastic Beanstalk

by Norm Johanson | in .NET

In the previous post we talked about the new deployment manifest you can use to deploy applications to AWS Elastic Beanstalk. You can now use the deployment manifest to deploy multiple applications to the same Elastic Beanstalk environment.

The deployment manifest supports ASP.NET Core web applications and msdeploy archives for traditional ASP.NET applications. Imagine a scenario in which you’ve written a new amazing application using ASP.NET Core for the front end and a Web API project for an extensions API. You also have an admin app that you wrote using traditional ASP.NET.

The toolkit’s deployment wizard focuses on deploying a single project. To take advantage of the multiple application deployment, you have to construct the application bundle by hand. To start, you need to write the manifest. For this example, write the manifest at the root of your solution.

The deployment section in the manifest has two children: an array of ASP.NET Core web applications to deploy, and an array of msdeploy archives to deploy. For each application, you set the IIS path and the location of the application’s bits relative to the manifest.

{
  "manifestVersion": 1,
  "deployments": {
 
    "aspNetCoreWeb": [
      {
        "name": "frontend",
        "parameters": {
          "appBundle": "./frontend",
          "iisPath": "/frontend"
        }
      },
      {
        "name": "ext-api",
        "parameters": {
          "appBundle": "./ext-api",
          "iisPath": "/ext-api"
        }
      }
    ],
    "msDeploy": [
      {
        "name": "admin",
        "parameters": {
          "appBundle": "AmazingAdmin.zip",
          "iisPath": "/admin"
        }
      }
    ]
  }
}

With the manifest written, you’ll use Windows PowerShell to create the application bundle and update an existing Elastic Beanstalk environment to run it. To get the full version of the Windows PowerShell script used in this example, right-click here. The script is written with the assumption that it will be run from the folder that contains your Visual Studio solution.

The first thing you need to do in the script is set up a workspace folder to create the application bundle.

$publishFolder = "c:\temp\publish"

$publishWorkspace = [System.IO.Path]::Combine($publishFolder, "workspace")
$appBundle = [System.IO.Path]::Combine($publishFolder, "app-bundle.zip")

If (Test-Path $publishWorkspace){
	Remove-Item $publishWorkspace -Confirm:$false -Force
}
If (Test-Path $appBundle){
	Remove-Item $appBundle -Confirm:$false -Force
}

Once the workspace is set up, you can get the front end ready. To do that use the dotnet CLI to publish the application.

Write-Host 'Publish the ASP.NET Core frontend'  
$publishFrontendFolder = [System.IO.Path]::Combine($publishWorkspace, "frontend")
dotnet publish .\src\AmazingFrontend\project.json -o $publishFrontendFolder -c Release -f netcoreapp1.0

Notice that the subfolder "frontend" was used for the output folder matching the folder set in the manifest. Now let’s do the same for the Web API project.

Write-Host 'Publish the ASP.NET Core extensiblity API' 
$publishExtAPIFolder = [System.IO.Path]::Combine($publishWorkspace, "ext-api") 

dotnet publish .\src\AmazingExtensibleAPI\project.json -o $publishExtAPIFolder -c Release -f netcoreapp1.0

The admin site is a traditional ASP.NET application, so you can’t use the dotnet CLI. For this project, use msbuild, passing in the package build target to create the msdeploy archive. By default, the package target creates the msdeploy archive under the obj\Release\Package folder, so you need to copy the archive to the publish workspace.

Write-Host 'Create msdeploy archive for admin site'

msbuild .\src\AmazingAdmin\AmazingAdmin.csproj /t:package /p:Configuration=Release

Copy-Item .\src\AmazingAdmin\obj\Release\Package\AmazingAdmin.zip $publishWorkspace

To tell the Elastic Beanstalk environment what to do with all these applications, you copy the manifest from your solution to the publish workspace and then zip up the folder.

Write-Host 'Copy deployment manifest'
Copy-Item .\aws-windows-deployment-manifest.json $publishWorkspace

Write-Host 'Zipping up publish workspace to create app bundle'
Add-Type -assembly "system.io.compression.filesystem"
[io.compression.zipfile]::CreateFromDirectory( $publishWorkspace, $appBundle)

Now that you have the application bundle, you can go to the web console and upload your archive to an Elastic Beanstalk environment. Or you can keep using Windows PowerShell and use the AWS PowerShell cmdlets to update the Elastic Beanstalk environment to the application bundle. Be sure you’ve set the current profile and region to the profile and region that has your Elastic Beanstalk environment by using the Set-AWSCredentials and Set-DefaultAWSRegion cmdlets.

Write-Host 'Write application bundle to S3'
# Determine S3 bucket to store application bundle
$s3Bucket = New-EBStorageLocation
Write-S3Object -BucketName $s3Bucket -File $appBundle


$applicationName = "ASPNETCoreOnAWS"
$environmentName = "ASPNETCoreOnAWS-dev"
$versionLabel = [System.DateTime]::Now.Ticks.ToString()

Write-Host 'Update Beanstalk environment for new application bundle'

New-EBApplicationVersion -ApplicationName $applicationName -VersionLabel $versionLabel -SourceBundle_S3Bucket $s3Bucket -SourceBundle_S3Key app-bundle.zip

Update-EBEnvironment -ApplicationName $applicationName -EnvironmentName $environmentName -VersionLabel $versionLabel

Now check the update status in either the Elastic Beanstalk environment status page or in the web console. Once complete, you can navigate to each of the applications you deployed at the IIS path set in the deployment manifest.

We hope you’re excited about the features we added to AWS Elastic Beanstalk for Windows and AWS Toolkit for Visual Studio. Visit our forums and let us know what you think of the new tooling and what else you would like to see us add.