AWS Developer Blog

AWS SDK for Go 2.0 – Generated Marshalers

The AWS SDK for Go 2.0 has released generated marshalers for the restjson and restxml protocols. Generated marshalers address the performance and memory issues customers had reported with the SDK.

To better understand what was causing the performance hit, we used Go’s benchmark tooling, which pointed to the main bottleneck: reflection. The reflection package was consuming a large share of both CPU time and memory.

Roughly 50% of the time was spent in the JSON marshaler, which makes heavy use of the reflection package. To improve both memory and CPU performance, we implemented generated marshalers. The idea was to bypass reflection entirely and set values directly in the query, header, or body of requests.

The following benchmark data was gathered from restjson and restxml benchmark tests on GitHub.

REST JSON Benchmarks

benchmark                                                   old ns/op     new ns/op     delta
BenchmarkRESTJSONBuild_Complex_ETCCreateJob-4               91645         36968         -59.66%
BenchmarkRESTJSONBuild_Simple_ETCListJobsByPipeline-4       8323          5722          -31.25%
BenchmarkRESTJSONRequest_Complex_CFCreateJob-4              274958        221579        -19.41%
BenchmarkRESTJSONRequest_Simple_ETCListJobsByPipeline-4     147774        140943        -4.62%

benchmark                                                   old allocs     new allocs     delta
BenchmarkRESTJSONBuild_Complex_ETCCreateJob-4               334            366            +9.58%
BenchmarkRESTJSONBuild_Simple_ETCListJobsByPipeline-4       73             59             -19.18%
BenchmarkRESTJSONRequest_Complex_CFCreateJob-4              679            711            +4.71%
BenchmarkRESTJSONRequest_Simple_ETCListJobsByPipeline-4     251            237            -5.58%

The restjson protocol shows great performance gains. For the Complex_ETCCreateJob benchmark, build speed improved by 59.66%. However, the gains in memory allocation were far smaller than expected, and some benchmarks even allocated more memory.

REST XML Benchmarks

benchmark                                                   old ns/op     new ns/op     delta
BenchmarkRESTXMLBuild_Complex_CFCreateDistro-4              212835        63765         -70.04%
BenchmarkRESTXMLBuild_Simple_CFDeleteDistro-4               8942          6893          -22.91%
BenchmarkRESTXMLBuild_REST_S3HeadObject-4                   17222         7194          -58.23%
BenchmarkRESTXMLBuild_XML_S3PutObjectAcl-4                  36723         14958         -59.27%
BenchmarkRESTXMLRequest_Complex_CFCreateDistro-4            416695        231318        -44.49%
BenchmarkRESTXMLRequest_Simple_CFDeleteDistro-4             143133        137391        -4.01%
BenchmarkRESTXMLRequest_REST_S3HeadObject-4                 182617        187526        +2.69%
BenchmarkRESTXMLRequest_XML_S3PutObjectAcl-4                212515        174650        -17.82%

benchmark                                                   old allocs     new allocs     delta
BenchmarkRESTXMLBuild_Complex_CFCreateDistro-4              1341           439            -67.26%
BenchmarkRESTXMLBuild_Simple_CFDeleteDistro-4               70             50             -28.57%
BenchmarkRESTXMLBuild_REST_S3HeadObject-4                   143            65             -54.55%
BenchmarkRESTXMLBuild_XML_S3PutObjectAcl-4                  260            122            -53.08%
BenchmarkRESTXMLRequest_Complex_CFCreateDistro-4            1627           723            -55.56%
BenchmarkRESTXMLRequest_Simple_CFDeleteDistro-4             237            217            -8.44%
BenchmarkRESTXMLRequest_REST_S3HeadObject-4                 452            374            -17.26%
BenchmarkRESTXMLRequest_XML_S3PutObjectAcl-4                476            338            -28.99%

The restxml protocol greatly improved in both memory and speed. The RESTXMLBuild_Complex_CFCreateDistro benchmark improved by about 70% in both.

Overall, for both protocols, there were massive speed improvements in the more complex shapes, but only small improvements in the simple shapes. There are some outliers in the data that take minor performance or memory hits, but there are more optimizations we can do that could eliminate them.

Try out the developer preview of the AWS SDK for Go 2.0 on GitHub, and let us know what you think in the comments below!

Publishing to HTTP/HTTPs Endpoints Using SNS and the AWS SDK for Java

We’re pleased to announce new additions to the AWS SDK for Java (version 1.11.274 or later) that make it easy to securely process Amazon SNS messages via an HTTP/HTTPS endpoint. Before this update, customers had to unmarshal Amazon SNS messages sent to HTTP endpoints and validate their authenticity themselves. Not only was this tedious, it was also easy to miss security best practices when validating the authenticity of the messages. This new addition to the SDK takes care of all that, and lets you focus on just writing the code to process the message. In this blog post, we walk through deploying an example SNS message processor application using Spring Boot and AWS Elastic Beanstalk. We also look at the various ways you can use this new utility to build your own message processing application.


To follow this tutorial, you need the following software. This post assumes some familiarity with Java programming and the Maven build system.

Java 8 JDK

Apache Maven

Example message processing application

In this tutorial we take an example message processing application that’s built using Spring Boot and deploy it to AWS using AWS Elastic Beanstalk.

Create an IAM role

Before we can create the Elastic Beanstalk application we need to create a new AWS Identity and Access Management (IAM) role that has permissions to SNS.

1. Visit the IAM console.

2. In the navigation pane, choose Roles.


3. Choose Create Role.

4. Choose EC2 as the trusted service, and then select the first use case, EC2. This creates an instance profile you can use as you create the Elastic Beanstalk application.

5. Attach the AmazonSNSFullAccess managed policy in the Permissions page.

6. Name the role and then choose Create. This name is referenced later when you create the Elastic Beanstalk application.

Create the Elastic Beanstalk application

Next, we create the Elastic Beanstalk application that will host our message processor application.

1. Navigate to the AWS Elastic Beanstalk console.

2. Choose Create New Application.

3. Give the application a name.

4. Choose a Web Server Environment.

5. For Environment Type, choose Java and Single instance. Most production applications should use a load-balanced environment type for availability.

6. For now, just choose Sample application as the source. We’ll upload the actual source later in this tutorial.

7. Choose an appropriate URL for the environment. Make a note of this endpoint, because we’ll reference it in code later.

8. Accept the defaults until you reach the Permissions pane. Here we must select the IAM role we created earlier as the Instance profile. You can leave the Service role as the default; it will be created if it doesn’t already exist.

9. Review the environment configuration, and then create it.

Build the application

Next we will download the source code for the application and modify it to use the correct endpoint.

1. Download and extract the source code.

2. Open src/main/java/com/example/. Then modify the ENDPOINT constant to point to the endpoint of the Elastic Beanstalk environment you created previously. Be sure to include the protocol (i.e., http:// or https://) in the endpoint.

3. Build the project using Apache Maven. At the root of the source, run the following command.

mvn clean install

This command compiles the application and creates target/sns-message-processor-1.0-SNAPSHOT.war. We’ll upload this WAR file to Elastic Beanstalk to deploy the application.

Deploy the application

We’ll deploy the compiled application to our Elastic Beanstalk environment.

1. Navigate to the Elastic Beanstalk console.

2. Open the dashboard for the Elastic Beanstalk environment.

3. Choose Upload and Deploy.

4. Choose target/sns-message-processor-1.0-SNAPSHOT.war as the Upload application source.

5. Choose Deploy.

6. Wait until the application is fully deployed. This can take several minutes.

7. Visit the endpoint to verify that the application is running.

Send SNS messages

Now we’ll send some sample messages via SNS to demonstrate the functionality of this example application.

1. Navigate to the SNS console and choose the topic. The topic will be named “SnsMessageProcessorExampleTopic”, unless you changed the code to use a different topic name.

2. Select Publish to topic.


3. Enter a value for Subject and Message, and then click Publish message.

4. Refresh the application’s endpoint and verify that the message was received. This can take several seconds.

5. Next, we’ll send a special message to unsubscribe the endpoint from the topic. Publish a new message with the subject “unsubscribe”.

6. Refresh the application’s endpoint. You should see two more messages: the message with the “unsubscribe” subject, and an unsubscribe confirmation message that notifies the endpoint that it was successfully unsubscribed.

Code deep dive

Let’s take a closer look at what it takes to build a message processing application using the new additions to the SDK. For a complete example, see the SnsServletProcessor class in the example code.

When creating your own message processing application, the first thing you need is an SnsMessageManager. This is the entry point for parsing and validating messages received from SNS. Here we create it via the default constructor.

SnsMessageManager snsManager = new SnsMessageManager();

When using the default constructor, the manager is pinned to the region that your application is deployed in. This means that the manager can only process messages that are sent by an SNS topic in that same region. If you must handle messages from another region, you can do so via the overloaded constructor.

SnsMessageManager snsManager = new SnsMessageManager("us-east-2");

After you have an SnsMessageManager, you’re ready to start processing messages. SnsMessageManager exposes two methods, parseMessage and handleMessage. We’ll discuss handleMessage first; it uses the visitor pattern.


The handleMessage method takes two parameters. The first is the InputStream of the HTTP request. How you obtain this differs depending on which Java frameworks you’re using; in this example, we obtain it from the HttpServletRequest. The second parameter is an implementation of SnsMessageHandler. Here we extend DefaultSnsMessageHandler, which implements all message types except for SnsNotification. Additionally, DefaultSnsMessageHandler automatically confirms subscriptions when an SnsSubscriptionConfirmation message is received. In this example, we have to implement only the SnsNotification overload to do our message processing.

    public void process(HttpServletRequest httpRequest, HttpServletResponse httpResponse) throws IOException {
        snsManager.handleMessage(httpRequest.getInputStream(), new DefaultSnsMessageHandler() {
            public void handle(SnsNotification snsNotification) {
                System.out.printf("Received message %n"
                                   + "Subject=%s %n"
                                   + "Message = %s %n",
                                   snsNotification.getSubject(), snsNotification.getMessage());
            }
        });
    }

Typically, you only need to override the handle method for the SnsNotification message type. However, you can override the other methods if required. Note that if you’re extending DefaultSnsMessageHandler and you override the handle method for the SnsSubscriptionConfirmation message type, you must call super.handle if you want to automatically confirm the subscription. Here is an example with additional methods overridden.

    public void process(HttpServletRequest httpRequest, HttpServletResponse httpResponse) throws IOException {
        snsManager.handleMessage(httpRequest.getInputStream(), new DefaultSnsMessageHandler() {
            public void handle(SnsNotification snsNotification) {
                System.out.printf("Received message %n"
                                   + "Subject=%s %n"
                                   + "Message = %s %n",
                                   snsNotification.getSubject(), snsNotification.getMessage());
            }

            public void handle(SnsUnsubscribeConfirmation message) {
                System.out.println("Received unsubscribe confirmation.");
            }

            public void handle(SnsSubscriptionConfirmation message) {
                // Call super.handle to automatically confirm the subscription.
                super.handle(message);
                System.out.println("Received subscription confirmation.");
            }
        });
    }


The other method on SnsMessageManager is parseMessage. This method is a bit lower level and generally isn’t recommended for most use cases; the visitor pattern is an easier and more intuitive way to build a message processing application. The parseMessage method takes only one parameter (the InputStream of the HTTP request) and returns an unmarshalled message object of type SnsMessage, which is the base class for all SNS message types. An application that uses parseMessage looks something like the following. Notice the need for instanceof checks and casting, which isn’t required when using the visitor pattern. Also notice that there is no automatic confirmation of subscriptions, as there is when you extend the DefaultSnsMessageHandler.

        SnsMessage snsMessage = snsManager.parseMessage(httpRequest.getInputStream());
        if (snsMessage instanceof SnsNotification) {
            SnsNotification snsNotification = (SnsNotification) snsMessage;
            System.out.printf("Received message %n"
                              + "Subject=%s %n"
                              + "Message = %s %n",
                              snsNotification.getSubject(), snsNotification.getMessage());
        } else if (snsMessage instanceof SnsSubscriptionConfirmation) {
            ((SnsSubscriptionConfirmation) snsMessage).confirmSubscription();
        }

Signature validation

The most important part of writing an application that receives SNS messages is ensuring those messages were actually sent by SNS. Before SnsMessageManager, customers had to use SignatureChecker to validate the authenticity of an SNS message. Although SignatureChecker correctly implements the signature verification algorithm, it doesn’t check that the SigningCertURL is vended securely by SNS. As a result, it was up to the application writer to add these additional security checks. With the addition of SnsMessageManager, all of these concerns are handled behind the scenes. You only have to provide the InputStream of the HTTP request, and the SDK validates that the message is authentic, throwing an exception if it isn’t. It also does this efficiently, caching SNS certificates in memory. Because this is an official part of the SDK, you can trust that all best practices for verifying the signatures of SNS messages are followed, and focus solely on writing your business functionality.


In this blog post we showed you how to deploy a sample Amazon SNS message processor application using Spring Boot and AWS Elastic Beanstalk. We showcased the functionality of this application using the Amazon SNS console to publish messages to the application. Finally, we took a closer look at the new additions to the SDK, and showed how you can build your own message processing application using these new features. Please give this new utility a try and let us know what you think on our Gitter channel!

New AWS X-Ray .NET Core Support

In our AWS re:Invent talk this year, we preannounced .NET Core 2.0 support for both AWS Lambda and AWS X-Ray. Last month we released the AWS Lambda support for .NET Core 2.0. This week we released the AWS X-Ray support for .NET Core 2.0, with new 2.0 beta versions of the AWS X-Ray SDK for .NET NuGet packages.

AWS X-Ray is a service that collects data about requests that your application serves. X-Ray provides tools you can use to view, filter, and gain insights into that data to identify issues and opportunities for optimization. For any traced request to your application, you can see detailed information in the AWS X-Ray console. This includes information not only about the request and response, but also about calls that your application makes to downstream AWS resources, microservices, databases, and HTTP web APIs.

In our .NET re:Invent talk, we showed a demo using X-Ray with Lambda, ASP.NET Core, and a couple of AWS services to detect a performance problem. To better understand how you can use X-Ray to detect and diagnose performance problems with your application, check out the AWS X-Ray developer guide.

The biggest feature with the 2.0 beta release is support for .NET Core 2.0, which means you can use X-Ray with all of your AWS .NET Core applications. The 2.0 beta release also has new features such as improved integration with the AWS SDK for .NET, asynchronous support, and improved ASP.NET tracing support.

Because 2.0 is a major release for the AWS X-Ray SDK for .NET, our initial release is a beta. We encourage you to try it out now and give us feedback. You can do that through the new GitHub repository for the AWS X-Ray SDK for .NET. To add the 2.0 beta release to your project in Visual Studio, be sure to select the Include prerelease check box.

Adding X-Ray to your application

For X-Ray to provide the details of how your application is performing, your application must be instrumented to send tracing data to X-Ray. The AWS X-Ray SDK for .NET provides NuGet packages to make it easy to instrument your application for common scenarios.


If your application is using the AWS SDK for .NET to access other AWS services, use the AWSXRayRecorder.Handlers.AwsSdk NuGet package to enable the collection of tracing data for your AWS requests.

For the 2.0 beta release, we simplified how X-Ray integrates with the AWS SDK for .NET. In the previous version of the X-Ray SDK, you had to register every AWS service client instance you created with X-Ray. This meant adding X-Ray code to every part of your application that used AWS, which can be challenging when service clients are created outside your application code, for example, in a third-party library or by a dependency injection framework like the one provided in ASP.NET Core applications.

With the new release, you add a small amount of code at the start of your application, and any AWS service clients created afterward, including those created by third-party libraries or dependency injection frameworks, are enabled for X-Ray.

To register all AWS service clients, add the following line at the start of your application.

AWSSDKHandler.RegisterXRayForAllServices();

To register only certain AWS service clients with X-Ray, pass the client’s service interface to the RegisterXRay method instead, as in the following example for Amazon DynamoDB.

AWSSDKHandler.RegisterXRay&lt;IAmazonDynamoDB&gt;();

After these lines of code run, all AWS service clients created from that point on are enabled to collect tracing data for X-Ray.

ASP.NET Applications

We’ve also improved support for collecting tracing data in ASP.NET applications. In the previous release, tracing data was collected only for web API controllers. For the 2.0 beta release, we’ve deprecated the AWSXRayRecorder.Handlers.AspNet.WebApi NuGet package in favor of the new AWSXRayRecorder.Handlers.AspNet package. AWSXRayRecorder.Handlers.AspNet works for both web API controllers and MVC controllers. To use this new support, you must remove the deprecated AWSXRayRecorder.Handlers.AspNet.WebApi package.

To enable AWS X-Ray tracing in your ASP.NET application, override the Init method in your Global.asax.cs file, and then call the RegisterXRay method.

public override void Init()
{
    base.Init();
    AWSXRayASPNET.RegisterXRay(this, "XRayAspNetSample"); // default name of the web app
}

ASP.NET Core Applications

.NET Core 2.0 support is the major feature of the 2.0 beta release, so of course we wanted to make getting tracing data from ASP.NET Core HTTP requests just as simple. You do this using the AWSXRayRecorder.Handlers.AspNetCore NuGet package. After you add the package, add the line app.UseXRay("XRayAspNetCoreSample"); to the Configure method of the Startup class. Here is a full example.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
        app.UseDeveloperExceptionPage();

    app.UseXRay("XRayAspNetCoreSample");

    app.UseMvc();
}

The order in which features are registered with the IApplicationBuilder controls the order in which those features, like X-Ray, are called. Because we want X-Ray to capture as much request data as possible, we highly recommend that you add the UseXRay call early in the Configure method, but after UseExceptionHandler. If the exception handler is registered after X-Ray, some exception data won’t be available to the trace.

Deploying Applications Enabled for X-Ray

To send tracing data collected by the AWS X-Ray SDK for .NET to the X-Ray service, the X-Ray daemon must be running in the same environment as the application. For AWS Elastic Beanstalk and AWS Lambda, this is taken care of for you when you enable X-Ray during the deployment. With the latest release of the AWS Toolkit for Visual Studio, the Elastic Beanstalk and Lambda deployment wizards are updated to enable X-Ray.

In the Elastic Beanstalk deployment wizard, enable X-Ray on the Application Options page.

In the Lambda deployment wizard, enable X-Ray on the Advanced Function Details page.

If you’re deploying Lambda functions as a serverless application, you can enable X-Ray in the serverless.template file by setting the Tracing property of your AWS::Serverless::Function resource to Active.

{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Transform" : "AWS::Serverless-2016-10-31",
  "Resources" : {
    "Get" : {
      "Type" : "AWS::Serverless::Function",
      "Properties": {
        "Handler": "XRayServerlessSample::XRayServerlessSample.Functions::Get",
        "Runtime": "dotnetcore2.0",
        "CodeUri": "",
        "MemorySize": 256,
        "Timeout": 30,
        "Policies": [ "AWSLambdaBasicExecutionRole" ],

        "Tracing" : "Active",

        "Events": {
          "PutResource": {
            "Type": "Api",
            "Properties": {
              "Path": "/",
              "Method": "GET"
            }
          }
        }
      }
    }
  },
  "Outputs" : {
  }
}



In addition, the Amazon.Lambda.Tools and Amazon.ElasticBeanstalk.Tools extensions to the dotnet CLI were updated to support enabling X-Ray tracing in the target deployment environments.


I think .NET developers will be very excited to see the type of data AWS X-Ray can show about their applications, so we hope you check out this new update. Also, you’ll find a lot more information on the advanced configuration you can do with X-Ray on the AWS X-Ray SDK for .NET GitHub repository. If you have any issues with the 2.0 beta release, let us know by opening an issue in our GitHub repository.

AWS Service Provider for Symfony v2 with Support for Symfony v4

Version 2.0.0 of the AWS Service Provider for Symfony has been released with support for Symfony v4. You can upgrade through Composer using the following command:

composer require aws/aws-sdk-php-symfony ~2.0

This AWS Service Provider for Symfony release is compatible with version 3 of the AWS SDK for PHP and versions 2, 3, and 4 of Symfony. You can find more details on using the AWS Service Provider for Symfony on GitHub.

Serverless ASP.NET Core 2.0 Applications

by Norm Johanson | in .NET

In our previous post, we announced the release of the .NET Core 2.0 AWS Lambda runtime and new versions of our .NET tooling to help you develop .NET Core 2.0-based serverless applications. Also, with the new .NET Core 2.0 Lambda runtime, we’ve released our ASP.NET Core NuGet Package, Amazon.Lambda.AspNetCoreServer, for general availability.

Version 2.0.0 of the Amazon.Lambda.AspNetCoreServer has been upgraded to target .NET Core 2.0. If you’re already using this library, you need to update your ASP.NET Core project to .NET Core 2.0 before using this latest version.

ASP.NET Core 2.0 has a lot of changes that make running a serverless ASP.NET Core Lambda function even more exciting. These include performance improvements in the ASP.NET Core and underlying .NET Core libraries.

Razor Pages

The Lambda runtime now supports Razor Pages. This means we can deploy both ASP.NET Core Web API and ASP.NET Core web applications. An important change with ASP.NET Core 2.0 is that Razor Pages are now precompiled at publish time. This means when our serverless Razor Pages are first rendered, Lambda compute time isn’t spent compiling the Razor Pages from cshtml to machine instructions.

Runtime package store

Starting with .NET Core 2.0 there is a new runtime package store feature, which is a cache of NuGet packages already installed on the target deployment platform. These packages have also been pre-jitted, meaning they’re already compiled from .NET’s intermediate language (IL) to machine instructions. This improves startup time when you use these packages. The store also reduces your deployment package size, further improving the cold startup time. For example, our existing ASP.NET Core Web API blueprint for .NET Core 1.0 had a minimum size of about 2.5 MB for the deployment package. For the .NET Core 2.0 version of the blueprint, the size is about 0.5 MB.

To indicate that you want to use the runtime package store for an ASP.NET Core application, you add a NuGet dependency to Microsoft.AspNetCore.All. Adding this dependency makes all of the ASP.NET Core packages and Entity Framework Core packages available to your application. However, it doesn’t include them in the deployment package because they’re already available in Lambda.

The Lambda blueprints that are available in Visual Studio are configured to use Microsoft.AspNetCore.All, just like the Microsoft-provided ASP.NET Core Web project templates inside Visual Studio. If you’re migrating a .NET Core 1.0 project to .NET Core 2.0, I highly recommend swapping out individual ASP.NET Core references for Microsoft.AspNetCore.All.

.NET Core and runtime package store version

Currently, the .NET Core 2.0 Lambda runtime is running .NET Core 2.0.4 and includes version 2.0.3 of Microsoft.AspNetCore.All. As the .NET Core 2.0 Lambda runtime was rolling out to the AWS Regions, Microsoft released version 2.0.5 of the .NET Core runtime and 2.0.5 of Microsoft.AspNetCore.All in the runtime package store. The Lambda runtime will be updated to include the latest versions shortly. However, in the meantime, if you update your Microsoft.AspNetCore.All reference to version 2.0.5, the Lambda function will fail to find the dependency when it runs. If you use either the AWS Toolkit for Visual Studio or our dotnet CLI extensions to perform the deployment, and attempt to deploy with a newer version of Microsoft.AspNetCore.All than is available in Lambda, our packaging will prevent the deployment and inform you of the latest version you can use with Lambda. This is another reason we recommend you use either the AWS Toolkit for Visual Studio or our dotnet CLI extensions to create the Lambda deployment package, so that we can provide that extra verification of your project.

Getting started

The AWS Toolkit for Visual Studio provides two blueprints for ASP.NET Core applications. The first is the ASP.NET Core Web API blueprint, which we updated from the preview in .NET Core 1.0 to take advantage of the new .NET Core 2.0 features. The second is a new ASP.NET Core Web App blueprint, which demonstrates the use of the ASP.NET Core 2.0 new Razor Pages feature in a serverless environment. Let’s take a look at that blueprint now.

To access the Lambda blueprints, choose File, New Project in Visual Studio. Under Visual C#, choose AWS Lambda.

The ASP.NET Core blueprints are serverless applications, because we want to use AWS CloudFormation to configure Amazon API Gateway to expose the Lambda function running ASP.NET Core to an HTTP endpoint. To continue, choose AWS Serverless Application (.NET Core), name your project, and then click OK.

On the Select Blueprint page, you can see the two ASP.NET Core blueprints. Choose the ASP.NET Core Web App blueprint, and then click Finish.

When the project is created, it looks just like a regular ASP.NET Core project. The main difference is that Program.cs was renamed to LocalEntryPoint.cs, which enables you to run the ASP.NET Core project locally. Another difference is the file LambdaEntryPoint.cs. This file contains a class that derives from Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction and implements the Init method that’s used to configure the IWebHostBuilder, similar to LocalEntryPoint.cs. The only required element to configure is the startup class that ASP.NET Core will call to configure the web application.

The APIGatewayProxyFunction base class contains the FunctionHandlerAsync method. This method is declared as the Lambda handler in the serverless.template file that defines the AWS Lambda function and configures Amazon API Gateway. If you rename the class or namespace, be sure to update the Lambda handler in the serverless.template file to reflect the new name.

public class LambdaEntryPoint : Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction
{
    /// <summary>
    /// The builder has configuration, logging, and Amazon API Gateway already configured. The startup class
    /// needs to be configured in this method using the UseStartup<>() method.
    /// </summary>
    /// <param name="builder"></param>
    protected override void Init(IWebHostBuilder builder)
    {
        builder.UseStartup<Startup>();
    }
}

To deploy the ASP.NET Core application to Lambda, right-click the project in Solution Explorer, and then choose Publish to AWS Lambda. This starts the deployment wizard. Because no parameters are defined in the serverless.template, we just need to enter an AWS CloudFormation stack name and an Amazon S3 bucket, in the region where the application is being deployed, to which the Lambda deployment package will be uploaded. After that, choose Publish to begin the deployment process.

Once the Lambda deployment package is created and uploaded to Amazon S3, and the creation of the AWS CloudFormation stack is initiated, the AWS CloudFormation stack view is launched. This view lists the events as the AWS resources are created. When the stack is created, a URL to the generated API Gateway endpoint is shown.

Clicking the link displays your new serverless ASP.NET Core web application.

Using images

If your web application displays images, we recommend you serve those images from Amazon S3. This is more efficient for returning static content like images, Cascading Style Sheets, etc. Also, to return images from your Lambda function to the browser, you need to do extra configuration in API Gateway for binary data.

Migrating Existing ASP.NET Core Web API Projects

Before this release, we already had a preview blueprint for using ASP.NET Core Web API on Lambda with .NET Core 1.0. To migrate such a project, make the following changes to your project’s csproj file.

  • Make sure the Sdk attribute in the root element of your csproj file is set to Microsoft.NET.Sdk.Web. The preview blueprint had this attribute set to Microsoft.NET.Sdk.
    <Project Sdk="Microsoft.NET.Sdk.Web">
  • Update Amazon.Lambda.AspNetCoreServer reference to 2.0.0
    <PackageReference Include="Amazon.Lambda.AspNetCoreServer" Version="2.0.0" />
  • Replace any references to Microsoft.AspNetCore.* and Microsoft.Extensions.* with Microsoft.AspNetCore.All version 2.0.3
    <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.3" />
  • Update target framework to netcoreapp2.0
  • Set the property GenerateRuntimeConfigurationFiles to true to make sure a project-name.runtimeconfig.json is created.
  • If your csproj has the following XML, you can remove it, because appsettings.json is now included by default since you changed the Sdk attribute to Microsoft.NET.Sdk.Web.
      <Content Include="appsettings.json">
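Taken together, a migrated project file might look like the following minimal sketch (only the elements discussed above are shown; your project will contain additional items and properties):

```xml
<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
    <GenerateRuntimeConfigurationFiles>true</GenerateRuntimeConfigurationFiles>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.3" />
    <PackageReference Include="Amazon.Lambda.AspNetCoreServer" Version="2.0.0" />
  </ItemGroup>

</Project>
```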

After that, make any changes necessary to your code to be compatible with ASP.NET Core 2.0, and you are ready to deploy.


With all of the improvements in .NET Core and ASP.NET Core 2.0, it’s exciting to see it running in a serverless environment. There’s a lot of potential with running ASP.NET Core on Lambda, and we’re excited to hear your thoughts about running a serverless ASP.NET Core application. Check out our GitHub repository which contains our libraries that make this possible. Feel free to open issues for any questions you have.

AWS Lambda .NET Core 2.0 Support Released

by Norm Johanson | in .NET

Today we’ve released the highly anticipated .NET Core 2.0 AWS Lambda runtime that is available in all Lambda-supported regions. With .NET Core 2.0, it’s easier to move existing .NET Framework code to .NET Core with the much larger API defined in .NET Standard 2.0, which .NET Core 2.0 implements.

Using Visual Studio 2017

The easiest way to get started with .NET Core Lambda is to use Visual Studio 2017 and our AWS Toolkit for Visual Studio. We released a new version of the toolkit today with updates to support using .NET Core 2.0 on AWS Lambda. The AWS Lambda project templates have been updated to .NET Core 2.0. You can easily deploy to Lambda by right-clicking your Lambda project and selecting Publish to AWS Lambda.

If you haven’t used the toolkit before, our previous blog posts can help you get started.

Deploying from the command line

Although you can create a Lambda package bundle by zipping up the output of the dotnet publish command, we recommend that you use our dotnet CLI extension, Amazon.Lambda.Tools. Using this tool over dotnet publish enables our tooling to ensure the package bundle has all of the required files. These include the <my-project>.runtimeconfig.json file that the .NET Core 2.0 Lambda runtime requires, but which isn’t always produced by dotnet publish. The tooling also shrinks your package bundle by removing Windows-specific and macOS-specific dependencies that dotnet publish would put in the publish folder.

This tool is set up by default in all of our AWS Lambda project templates because we added the following section in the project file.

  <DotNetCliToolReference Include="Amazon.Lambda.Tools" Version="2.0.0" />

As part of our release today, version 2.0.0 of Amazon.Lambda.Tools was pushed to NuGet to add support for .NET Core 2.0.

Depending on the type of project you create, you can use this extension to deploy your Lambda functions from the command line by using the dotnet lambda deploy-function command or the dotnet lambda deploy-serverless command.

If you’re just building your Lambda package bundle as part of your CI system and don’t want the extension to deploy, you can use the dotnet lambda package command to produce the package bundle .zip file to pass along through your CI system.

This earlier blog post has more details about our Lambda CLI extension.

Creating Lambda projects without Visual Studio

If you’re not using Visual Studio, you can create any of our Lambda projects using the dotnet new command by installing our Amazon.Lambda.Templates package with the following command.

dotnet new -i Amazon.Lambda.Templates::* 

The ::* syntax at the end of the command indicates that the latest version should be installed. This is version 2.0.0, also released today, to update the project templates to support .NET Core 2.0. See this blog post for more details about these templates.

Updating existing functions

Because the programming model hasn’t changed, it’s easy to migrate your existing .NET Core 1.0 Lambda functions to the new runtime. To migrate, you need to update the target framework of your project to netcoreapp2.0 and, optionally, update any of the dependencies for your project to the latest version. Your project probably has an aws-lambda-tools-defaults.json file, which is a JSON file of saved settings from your deployment. Update the framework property to netcoreapp2.0. If the file also contains the field function-runtime, update that to dotnetcore2.0. If you’re deploying a Lambda function as a serverless application using an AWS CloudFormation template (usually named serverless.template), update the Runtime property of any AWS::Serverless::Function or AWS::Lambda::Function AWS CloudFormation resources to dotnetcore2.0.
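For example, after migration the relevant entries in aws-lambda-tools-defaults.json might look like the following sketch (the profile, region, and handler values here are placeholders; only the framework and function-runtime values are the required changes):

```json
{
  "profile": "default",
  "region": "us-west-2",
  "configuration": "Release",
  "framework": "netcoreapp2.0",
  "function-runtime": "dotnetcore2.0",
  "function-handler": "MyProject::MyProject.Function::FunctionHandler"
}
```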

With these changes, you should be able to simply redeploy using the new .NET Core 2.0 runtime with our AWS Toolkit for Visual Studio or dotnet CLI extension.

Using AWS Tools for Visual Studio Team Services

The AWS Tools for VSTS support two tasks related to performing Lambda deployments from within your VSTS or TFS pipelines. The general-purpose AWS Lambda deployment task, which can deploy prepackaged functions that target any supported AWS Lambda runtime, has been updated in version 1.0.16 of the tools to support selection of the new dotnetcore2.0 runtime. The .NET Core-specific Lambda task, which uses the Lambda dotnet CLI extension, will operate without requiring changes to the task configuration. You just need to update the project files built by this task, as described earlier.


We’re excited to see what you build with our new .NET Core runtime and to expand our .NET Core 2.0 support across AWS. Visit our GitHub repository for our .NET Core tooling and libraries for additional help with .NET Core and Lambda.

Remote Debug an IIS .NET Application Running in AWS Elastic Beanstalk

In this guest post by AWS Partner Solution Architect Sriwantha Attanayake, we take a look at how you can set up remote debugging for ASP.NET applications deployed to AWS Elastic Beanstalk.

We love to run IIS websites on AWS Elastic Beanstalk. With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without worrying about the infrastructure that runs those applications. Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.

How can you remote debug a .NET application running on Elastic Beanstalk? This article describes a one-time setup of Elastic Beanstalk that enables you to remote debug in real time. You can use this approach in your development environments.

First, we create an Amazon EC2 instance from a base Elastic Beanstalk image. Next, we install Visual Studio remote debugger as a service and create a custom image from it. Then, we start an Elastic Beanstalk environment with this custom image. To allow communication with the Visual Studio remote debugger, we set up proper security groups. Finally, we attach the Visual Studio debugger to the remote process running inside the EC2 instance started by Elastic Beanstalk.

How to identify which Elastic Beanstalk image to customize

  1. Open the Elastic Beanstalk console and create a new application by choosing Create New Application.
  2. Create a new Web server environment.

  3. On the Create new environment page, choose .NET (Windows/IIS) as the preconfigured platform.
  4. Choose Configure more options.
  5. Under Instances, you’ll find the default AMI that Elastic Beanstalk will use. This is determined by the selected platform and the region. For example, for the Sydney region, for the 64bit Windows Server 2016 v1.2.0 running IIS 10.0 platform, the AMI ID is ami-e04aa682. Make a note of this AMI ID. This is the base image you’ll customize later.

Customize the image

  1. Now that you know the base AMI used by Elastic Beanstalk, start an EC2 instance with this base image. You can find this image under Community AMIs.
  2. Once the EC2 instance is started, remotely log in to it.
  3. Install Remote Tools as a Service. The installer depends on the Visual Studio version you use for development. See Remote Debugging for the steps to install the remote debugger.
  4. When the installation is complete, run the Visual Studio remote debugger configuration wizard.

Note: If you do not want to create a custom image, another approach to installing the Visual Studio remote debugger is to use .ebextensions. As detailed in Customizing Software on Windows Servers, an .ebextensions file can include commands that run the installation when Elastic Beanstalk deploys the application.

Whichever approach you use, be sure of the following:

  • You run the remote debugger as a service. The service account must have permission to run as a Windows service and must be a member of the local Administrators group.
  • You allow network connections from all types of networks.
  • The remote debugger service has started.
  • Windows firewall doesn’t block the remote debugger.

Create an image from a customized EC2 instance

  1. When the installation is complete, Sysprep the machine using EC2 launch settings. You can find the EC2 launch settings at C:\ProgramData\Amazon\EC2-Windows\Launch\Settings\Ec2LaunchSettings.exe. Choose Shutdown with Sysprep.

    For a detailed explanation, see Configuring EC2Launch.
  2. After the instance shuts down, you can create an image from it. Make a note of this AMI ID. The next time you start an Elastic Beanstalk environment, use this custom image ID.

Connecting to your Elastic Beanstalk environment

  1. When you start your Elastic Beanstalk environment, be sure you configure your security groups in a way that opens remote debugger ports to your development machine. Which ports to open depends on which Visual Studio environment you’re running. In the following example, port 4022 is for Visual Studio 2017, and port 4016 is for Visual Studio 2012.

    See Remote Debugger Port Assignments to learn about the ports used in different Visual Studio environments. In the previous example, I have opened remote debugger ports corresponding to different editions of Visual Studio to any network. This poses a security risk. Please ensure you open only the ports necessary for your edition of Visual Studio to the development networks you trust. Once you are done with debugging, you can remove these security groups.
  2. Be sure you specify a key pair for the Elastic Beanstalk EC2 instance, so that you can retrieve the autogenerated Administrator password for remote access.
  3. Make a note of the IP address (public/private) of the EC2 instance started by the Elastic Beanstalk environment.
  4. Once you open the Visual Studio project (e.g., ASP.NET application) that is being deployed to Elastic Beanstalk, select Debug, Attach to Process.
  5. For Connection Target, enter the IP address of the EC2 instance started by Elastic Beanstalk. For example, if your development machine is in a private network with network reachability to the EC2 instance, use the private IP address.  Depending on where your development machine is, you can use the public IP address. Finally, choose Show processes from all users.
  6. In the popup window that appears, you can enter your login information to the EC2 instance. Enter the Administrator user name and password of the EC2 instance that Elastic Beanstalk has started. The reason we started the Elastic Beanstalk EC2 instances with a key pair is to retrieve this password.
  7. If the login succeeds, you will see all the processes running inside the EC2 instance started by Elastic Beanstalk. If you don’t see the IIS worker process (w3wp.exe), ensure you have viewed your website at least once, and then choose Refresh. Choose Attach to attach the remote IIS worker process to Visual Studio and then confirm the attachment.
  8. You can now live debug the .NET application running inside Elastic Beanstalk. You will get a hit on a  debug point when you execute the relevant code fragment.


In this post, we showed how you can remote debug a .NET web application running on Elastic Beanstalk.  .NET remote debugging on Elastic Beanstalk is no different from .NET remote debugging you would do on a Windows server. Once you have an AMI with your custom tools installed, you can use it as your preferred Elastic Beanstalk image.

As noted earlier, another way to install the Visual Studio remote debugger is through an .ebextensions file. Using this approach, you don’t need to create a custom image. See Customizing Software on Windows Servers for details about advanced environment customization using Elastic Beanstalk configuration files.

Although you have the option of doing remote debugging on Elastic Beanstalk, don’t enable this feature on a production environment. In addition, don’t open the ports related to remote debugging on a production environment. The proper way to analyze issues on a production environment is to do proper logging. For example, in an ASP/MVC .NET application, you can catch all the unhandled exceptions in Global.asax and log them. For a large-scale complex logging solution, you can explore the best practices in Centralized Logging.

AWS Support for PowerShell Core 6.0

Announced in a Microsoft blog post yesterday, PowerShell Core 6.0 is now generally available. AWS continues to support this new cross-platform version of PowerShell with our AWS Tools for PowerShell Core module, also known by its module name, AWSPowerShell.NetCore. This post recaps the modules available from AWS for PowerShell users who want to script their AWS resources.

AWS Tools for Windows PowerShell

Released in 2012, this module, also known by the module name AWSPowerShell, supports users working with the traditional Windows-only version of PowerShell. It supports PowerShell versions 2.0 through 5.1. It can be installed from the AWS SDK and Tools for .NET Windows installer, which also contains the .NET 3.5 and 4.5 versions of the AWS SDK for .NET and the AWS Toolkit for Visual Studio 2013 and 2015. The module is also distributed on the PowerShell Gallery and is preinstalled on Amazon EC2 Windows-based images.

AWS Tools for PowerShell Core

This version of the tools was first released in August 2016 to coincide with the announcement of the first public alpha release of PowerShell Core 6.0. Since then it has continued to be updated in sync with the AWS Tools for Windows PowerShell module. This module, named AWSPowerShell.NetCore, is only distributed on the PowerShell Gallery.

Module Compatibility

Both modules are highly compatible with each other. In terms of the cmdlets they expose for AWS service APIs, they match completely and both modules are updated in sync. As noted in our original launch blog post for our module running on PowerShell Core, back in August 2016, the AWSPowerShell.NetCore module is missing only a handful of cmdlets, as follows.

Proxy cmdlets:

  • Set-AWSProxy
  • Clear-AWSProxy

Logging cmdlets:

  • Add-AWSLoggingListener
  • Remove-AWSLoggingListener
  • Set-AWSResponseLogging
  • Enable-AWSMetricsLogging
  • Disable-AWSMetricsLogging

SAML federated credentials cmdlets:

  • Set-AWSSamlEndpoint
  • Set-AWSSamlRoleProfile

Now that PowerShell Core is generally available (GA), we’ll be taking another look at these to see if we can add them.

We hope you’re enjoying the new PowerShell Core GA release and the ability to script and access your AWS resources from PowerShell on any system!

Send Real-Time Amazon CloudWatch Alarm Notifications to Amazon Chime

This post was authored by Trevor Sullivan, a Solutions Architect for Amazon Web Services (AWS) based in Seattle, Washington. The post was also peer-reviewed by Andy Elmhorst, Senior Solutions Architect for AWS.


When you’re developing, deploying, and supporting business-critical applications, timely system notifications are crucial to keeping your services up and running reliably for your customers. If your team actively collaborates using Amazon Chime, you might want to receive critical system notifications directly within your team chat rooms. This is possible using the Amazon Chime incoming webhooks feature.

Using Amazon CloudWatch alarms, you can set up metric thresholds and send alerts to Amazon Simple Notification Service (SNS). SNS can send notifications using e-mail, HTTP(S) endpoints, and Short Message Service (SMS) messages to mobile phones, and it can even trigger a Lambda function.

Because SNS doesn’t currently support sending messages directly to Amazon Chime chat rooms, we’ll insert a Lambda function in between them. By triggering a Lambda function from SNS instead, we can consume the event data from the CloudWatch alarm and craft a human-friendly message before sending it to Amazon Chime.
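SNS hands the alarm to the function wrapped in a Records envelope. The sketch below shows a trimmed-down version of that event shape (the field values are illustrative, not an exact CloudWatch payload) and how a function can pull out the subject line:

```python
# Trimmed-down shape of the event that Lambda receives from SNS
# (values are illustrative, not an exact CloudWatch alarm payload).
sample_event = {
    'Records': [{
        'Sns': {
            'Subject': 'ALARM: "HighCPU" in US West (Oregon)',
            'Message': '{"AlarmName": "HighCPU", "NewStateValue": "ALARM"}',
        }
    }]
}

def subject_of(event):
    """Pull the human-readable subject line out of an SNS-wrapped event."""
    return event['Records'][0]['Sns']['Subject']

print(subject_of(sample_event))
```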

Here’s a simple architectural diagram that demonstrates how the various components will work together to make this solution work. Feel free to refer back to this diagram as you make your way through the remainder of this article.


Throughout this article, we make the following assumptions:

  • You have created an Amazon EC2 instance running Ubuntu Linux.
  • Detailed CloudWatch monitoring is enabled for this EC2 instance.
  • Amazon Chime is already set up and accessible to you.
  • You’ve installed PowerShell Core, or can run it in a Docker container.
  • You have installed and configured IAM credentials for the AWS Tools for PowerShell.
  • Python 3.6 and pip3 are installed on your development system.

NOTE: There is an additional cost to capture detailed CloudWatch metrics for EC2 instances, detailed here.

Set up Amazon Chime

Before implementing your backend application code, there are a couple of steps you need to perform within Amazon Chime. To set up your incoming webhook, you first need to create a new Amazon Chime chat room; webhooks are created as a resource in the context of a chat room. As of this writing, Chime webhooks must be created using the native Amazon Chime client for Microsoft Windows or Apple macOS.

Create an Amazon Chime chat room

First you create a new chat room in Amazon Chime. You’ll use this chat room for testing and, once you understand and successfully implement this solution, you can replicate it in your live chat rooms.

  1. Open Amazon Chime.
  2. Choose the Rooms button.
  3. Choose the New room button.
  4. Give the chat room a name, and then choose the Create button.

Create an Amazon Chime incoming webhook

Now that you’ve created your new Amazon Chime chat room, you need to generate a webhook URL. This webhook URL authorizes your application to send messages to the chat room. Be sure to handle the URL with the same level of security that you would handle any other secrets or passwords.

In the Amazon Chime chat room, click the gear icon, and then select the Manage Webhooks menu item. In the webhook management window, choose the New button and use the name CriticalAlerts. Click the Copy webhook URL link and paste it into a temporary notepad. We’ll need to configure this URL on our Lambda function later on.

Create an SNS topic

In this section, you create a Simple Notification Service (SNS) topic. This SNS topic will be triggered by a CloudWatch alarm when its configured metric threshold is exceeded. You can name the SNS topic whatever you prefer, but in this example, I’ll use the name chimewebhook.

It’s possible to create the SNS topic after creating your CloudWatch alarm. However, in this case, you would have to go back and reconfigure your CloudWatch alarm to point to the new SNS topic. In this example, we create the topic first to minimize the amount of context switching between services.

Use the following PowerShell command to create an SNS topic, and store the resulting topic Amazon Resource Name (ARN) in a variable named $TopicArn. We’ll use this variable later on, so don’t close your PowerShell session.

$TopicArn = New-SNSTopic -Name chimewebhook -Region us-west-2

Create a CloudWatch alarm

In this section, you create an Amazon CloudWatch alarm. Then you configure this alarm to trigger an alert state when the CPU usage metric of your EC2 instance exceeds 10%. Alarms can be configured with zero or more actions; you’ll configure a single action to send a notification to the SNS topic you previously created.

  • Navigate to the CloudWatch alarms feature in the AWS Management Console.
  • Choose the blue Create Alarm button.
  • Search for your EC2 instance ID.
  • Select the CPUUtilization metric for your EC2 instance.
  • On the next screen, give the CloudWatch alarm a name and useful description.
  • Configure the CPUUtilization threshold for 10%, and be sure the Period is set to 1 minute.
  • In the Actions section, select your SNS topic.
  • Save the CloudWatch alarm.

If you’d prefer to use a PowerShell script to deploy the CloudWatch alarm, use the following example script. Be sure you specify the correct parameter values for your environment:

  • EC2 instance ID that you’re monitoring
  • AWS Region that your EC2 instance resides in
  • ARN of the SNS topic that CloudWatch will publish alerts to
### Create a CloudWatch dimension object, to alarm against the correct EC2 instance ID
$MetricDimension = [Amazon.CloudWatch.Model.Dimension]::new()
$MetricDimension.Name = 'InstanceId'
$MetricDimension.Value = 'i-04043befbbfcdc51e'

### Set up the parameters to create a CloudWatch alarm in a PowerShell HashTable
$Alarm = @{
  AlarmName = 'EC2 instance exceeded 10% CPU'
  ActionsEnabled = $true
  AlarmAction = $TopicArn
  ComparisonOperator = ([Amazon.CloudWatch.ComparisonOperator]::GreaterThanOrEqualToThreshold)
  Threshold = 10
  Namespace = 'AWS/EC2'
  MetricName = 'CPUUtilization'
  Dimension = $MetricDimension
  Period = 60
  EvaluationPeriod = 1
  Statistic = [Amazon.CloudWatch.Statistic]::Maximum
  Region = 'us-west-2'
}
Write-CWMetricAlarm @Alarm

Set up AWS Lambda

In this section, you will create an AWS Lambda function, based on Python 3, that will be triggered by the SNS topic that you created earlier. This Lambda function will parse some of the fields of the message that’s forwarded from CloudWatch to SNS.

Create the Lambda Function

To successfully invoke an Amazon Chime webhook, your HTTP invocation must follow these criteria:

  • Webhook URL is predefined by Amazon Chime
  • Request is sent using the HTTP POST verb
  • Content-Type HTTP header must be application/json
  • HTTP body must contain a JSON object with Content property
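As a concrete illustration of these criteria, the sketch below assembles the pieces of a webhook call into a plain dictionary (build_chime_request and the URL are hypothetical, for illustration only; they are not part of the Chime API):

```python
import json

def build_chime_request(webhook_url, text):
    """Assemble an Amazon Chime webhook call that satisfies all four criteria."""
    return {
        'method': 'POST',                                 # sent with the HTTP POST verb
        'url': webhook_url,                               # URL predefined by Amazon Chime
        'headers': {'Content-Type': 'application/json'},  # required content type
        'body': json.dumps({'Content': text}),            # JSON object with a Content property
    }

req = build_chime_request('https://hooks.chime.aws/incomingwebhooks/example', 'Hello!')
print(req['body'])  # → {"Content": "Hello!"}
```

Passing these fields to any HTTP client satisfies all four requirements; the requests library used below does this for us when given the json= argument.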

We’ll use the open source Python requests library to make the Amazon Chime webhook invocation, as it provides a simple development interface. Because you’re adding a dependency on an external library, you need to author your Lambda function locally, package it up into a ZIP archive, and then deploy the ZIP archive to Lambda.

Start by creating the following three files in a working directory.

The first of these, index.py (the name matches the index.handler value used by the deployment script), contains the AWS Lambda function that is invoked when a CloudWatch alarm is triggered.

import os

import requests


def get_message(event):
    """Retrieve the message that will be sent to the Amazon Chime webhook.

    If the Lambda function is triggered manually, return some static text.
    However, if the Lambda function is invoked by SNS from CloudWatch Alarms,
    return the alarm's subject line.
    """
    try:
        return event['Records'][0]['Sns']['Subject']
    except KeyError:
        return 'test message'


def handler(event, context):
    """Entry point for AWS Lambda function invocations."""
    print('Getting ready to send message to Amazon Chime room')
    content = 'CloudWatch Alarm! {0}'.format(get_message(event))
    webhook_uri = os.environ['CHIME_WEBHOOK']
    requests.post(webhook_uri, json={'Content': content})
    print('Finished sending notification to Amazon Chime room')

The second file, requirements.txt, declares the function's dependency on the requests library; the pip3 step in the build script below installs it into the package directory:

requests

The third file is the PowerShell build-and-deploy script described in the next section.





Build and deploy the Lambda package

Now that you’ve created the previous source files, you’ll need a PowerShell script to build the ZIP archive for Lambda, create the Lambda function, and give SNS access to invoke the Lambda function. Save the following PowerShell script file into the same working directory, update the <YourChimeWebhookURL> text with your actual Amazon Chime webhook URL, and then run the script.

NOTE: This PowerShell script has a dependency on the Mac and Linux zip utility. If you’re running this code on Windows 10, you can use the Compress-Archive PowerShell command, or run the PowerShell script in the Windows Subsystem for Linux feature.


Set-DefaultAWSRegion -Region us-west-2

$ZipFileName = 'AmazonChimeAlarm.zip'

Set-Location -Path $PSScriptRoot

Write-Host -Object 'Restoring dependencies ...'
pip3 install -r $PSScriptRoot/requirements.txt -t $PSScriptRoot/

Write-Host -Object 'Compressing files ...'
Get-ChildItem -Recurse | ForEach-Object -Process {
  $NewPath = $PSItem.FullName.Substring($PSScriptRoot.Length + 1)
  zip -u "$PSScriptRoot/$ZipFileName" $NewPath
}

Write-Host -Object 'Deploying Lambda function'

$Function = @{
  FunctionName = 'AmazonChimeAlarm'
  Runtime = 'python3.6'
  Description = 'Sends a message to an Amazon Chime room when a CloudWatch alarm is triggered.'
  ZipFilename = $ZipFileName
  Handler = 'index.handler'
  Role = 'arn:aws:iam::{0}:role/service-role/lambda' -f (Get-STSCallerIdentity).Account
  Environment_Variable = @{
    CHIME_WEBHOOK = '<YourChimeWebhookURL>'
  }
}
Remove-LMFunction -FunctionName $Function.FunctionName -Force
$NewFunction = Publish-LMFunction @Function

Write-Host -Object 'Deployment completed' -ForegroundColor Green

Once you’ve executed this deployment script, you should see an AWS Lambda function named AmazonChimeAlarm in the AWS Management Console.

Configure Lambda function policy

AWS Lambda functions have their own resource-level policies that are somewhat similar to IAM policies. These function policies are what grant other cloud resources the access that they need to invoke the function. In this scenario, you need to grant the SNS service access to trigger your Lambda function.

The following PowerShell script adds the necessary permissions to your Lambda function.

### Enables the Amazon Simple Notification Service (SNS) to invoke your Lambda function
$LMPermission = @{
  FunctionName = $Function.FunctionName
  Action = 'lambda:InvokeFunction'
  Principal = 'sns.amazonaws.com'
  StatementId = 1
}
Add-LMPermission @LMPermission

Keep in mind that this Lambda function policy broadly allows any SNS topic, in any AWS account, to trigger your Lambda function. For production applications, you should use the -SourceArn parameter to restrict access to specific event sources that will be able to trigger your Lambda function.

Subscribe the Lambda function to the SNS topic

Now that you’ve created your Lambda function and granted SNS access to trigger it, you need to subscribe the Lambda function to the topic. This subscription is what starts the flow of events from SNS to Lambda. Without it, CloudWatch alarms would trigger your SNS topic successfully, but the event flow would stop there.

$Subscription = @{
  Protocol = 'lambda'
  Endpoint = $NewFunction.FunctionArn
  TopicArn = $TopicArn
}
Connect-SNSNotification @Subscription

Trigger a test event

Now that you’ve finished configuring your AWS account, you can go ahead and test the end-to-end process to ensure it’s working properly. Ensure you’ve got your Amazon Chime client running, and select your test chat room that you created earlier.

Next, invoke a process on your instance that will consume many CPU cycles. Connect to your EC2 instance using SSH and run the following shell commands.

sudo apt install --yes docker.io
sudo usermod --append --groups docker ubuntu
exit # SSH back in after this, so group memberships take effect

docker run --rm --detach trevorsullivan/cpuburn burnP5

This Ubuntu-based Docker container image contains the preinstalled CPU burn program, which will cause your EC2 instance’s CPU consumption to spike to 100%. Because you’ve enabled detailed CloudWatch metrics on your EC2 instance, after a couple of minutes, the CloudWatch alarm that you created should get triggered.

Once you’ve finished with the test, or if you want to trigger the CloudWatch alarm again, make sure that you stop the Docker container running the CPU burn program. Because you specified the --rm argument upon running the container, the container will be deleted after it has stopped.

docker ps # Find the container ID
docker rm -f <containerID> # Remove the container

Potential problems

If you run into any problems with testing the end-to-end solution, check out the following list of potential issues you could run into and ways to troubleshoot:

  • The CPU burn program might not result in adequate CPU consumption, which wouldn’t trigger the test event correctly. Use the Linux top command to ensure that you trigger the test event. Or simply pull up the CPUUtilization metric in CloudWatch and see what values are being recorded.
  • If your Lambda function is not correctly configured to accept invocations from SNS, your SNS topic will fail to invoke it. Be sure that you understand how Lambda function policies work, and ensure that your Lambda function has the appropriate resource-level IAM policy to enable SNS to invoke it.
  • By default, your EC2 instances include basic metrics for a five-minute period. If you don’t have detailed monitoring enabled for the EC2 instance you’ve used in this article, you might have to wait several minutes for the next metric data point to be recorded. For more immediate feedback, ensure that your EC2 instance has detailed monitoring configured, so that you get instance-level metrics on a per-minute period instead.
  • Ensure your Lambda function is subscribed to your SNS topic. If your SNS topic doesn’t have any subscribers, it won’t know how to “handle” the alarm state from CloudWatch, and will effectively discard the message.
  • If you aren’t receiving any notifications in your Amazon Chime chat room, ensure that your CloudWatch alarm is in an OK state before attempting to retrigger it. CloudWatch sends a single notification to the configured SNS topics when the alarm’s threshold is breached, and doesn’t continually send notifications while it’s in an alarm state.
  • If your HTTP POST request to the Chime Webhook URL fails with HTTP 429, then your application might be rate-limited by Amazon Chime. Please refer to the official product documentation for more information. As of this writing, Chime Webhook URLs support 1 transaction per second (TPS).
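Because a webhook URL allows only 1 TPS, bursts of alarms can trip that limit. A common mitigation is exponential backoff on HTTP 429; the helper below is a minimal sketch of that pattern (the send callable stands in for whatever HTTP client you use, for example a wrapper around requests.post that returns the status code):

```python
import time

def post_with_backoff(send, payload, max_attempts=4, base_delay=0.5):
    """Call send(payload), retrying with exponential backoff whenever it
    reports HTTP 429 (rate limited). Returns the last status code seen."""
    status = None
    for attempt in range(max_attempts):
        status = send(payload)
        if status != 429:
            break
        time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1.0s, 2.0s, ...
    return status
```

Wrapping your webhook call this way lets transient rate limiting resolve itself rather than dropping the notification.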


In this article, you configured your AWS account to send Amazon Chime notifications to a team chat room when CloudWatch alarm thresholds are breached. You can repeat this process as many times as you need to send the right notifications to your team’s chat rooms. Chat rooms can get noisy quickly if you don’t take care to determine which notifications are the most critical to your team, so I recommend discussing with your team which events warrant immediate notification before spending the effort to build a solution using this technique.

Thanks for taking the time to read this article and learn about how you can integrate Amazon Chime with your critical system notifications.

Build on!
Trevor Sullivan, Solutions Architect
Amazon Web Services (AWS)


AWS SDK for Go 2.0 Developer Preview

We’re pleased to announce the Developer Preview release of the AWS SDK for Go 2.0. Many aspects of the SDK have been refactored based on your feedback, with a strong focus on performance, consistency, discoverability, and ease of use. The Developer Preview is here for you to provide feedback, and influence the direction of the AWS SDK for Go 2.0 before its production-ready, general availability launch. Tell us what you like, and what you don’t like. Your feedback matters to us. Find details at the bottom of this post on how to give feedback and contribute.

You can safely use the AWS SDK for Go 2.0 in parallel with the 1.x SDK, with both SDKs coexisting in the same Go application. We won’t drop support for the 1.0 SDK any time soon. We know there are a lot of customers depending on the 1.x SDK, and we will continue to support them. As we get closer to general availability for 2.0, we’ll share a more detailed plan about how we’ll support the 1.x SDK.

Getting started

Let’s walk through setting up an application with the 2.0 SDK. We’ll build a simple Go application using the 2.0 SDK to make a service API request. For this walkthrough, you’ll need to use a minimum of Go 1.9.

  1. Create a new Go file for the example application. We’ll name ours main.go in a new directory, awssdkgov2-example, in our Go Workspace.

    mkdir -p $(go env GOPATH)/src/awssdkgov2-example
    cd $(go env GOPATH)/src/awssdkgov2-example

    All of the following steps assume you will execute them from the awssdkgov2-example directory.

  2. To get the AWS SDK for Go 2.0, you can go get the SDK manually or use a dependency management tool such as Dep. You can manually get the 2.0 SDK with go get:

    go get github.com/aws/aws-sdk-go-v2

  3. In your favorite editor create a main.go file, and add the following code. This code loads the SDK’s default configuration, and creates a new Amazon DynamoDB client. See the external package for more details on how to customize the way the SDK’s default configuration is loaded.

    package main

    import (
        "fmt"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/aws/endpoints"
        "github.com/aws/aws-sdk-go-v2/aws/external"
        "github.com/aws/aws-sdk-go-v2/service/dynamodb"
    )

    func main() {
        // Using the SDK's default configuration, loading additional config
        // and credentials values from the environment variables, shared
        // credentials, and shared configuration files
        cfg, err := external.LoadDefaultAWSConfig()
        if err != nil {
            panic("unable to load SDK config, " + err.Error())
        }

        // Set the AWS Region that the service clients should use
        cfg.Region = endpoints.UsWest2RegionID

        // Using the Config value, create the DynamoDB client
        svc := dynamodb.New(cfg)
  4. Build the service API request, send it, and process the response.

        // Build the request with its input parameters
        req := svc.DescribeTableRequest(&dynamodb.DescribeTableInput{
            TableName: aws.String("myTable"),
        })

        // Send the request, and get the response or error back
        resp, err := req.Send()
        if err != nil {
            panic("failed to describe table, " + err.Error())
        }

        fmt.Println("Response", resp)
    }

What has changed?

Our focus for the 2.0 SDK is to improve the SDK’s development experience and performance, make the SDK easy to extend, and add new features. The changes made in the Developer Preview target the major pain points of configuring the SDK and using AWS service API calls. Check out our 2.0 repo for details on pending changes that are in development and designs we’re discussing.

The following are some of the larger changes included in the AWS SDK for Go 2.0 Developer Preview.

SDK configuration

The 2.0 SDK simplifies how you configure the SDK’s service clients by using a single Config type. This simplifies the Session and Config type interaction from the 1.x SDK. In addition, we’ve moved the service-specific configuration flags to the individual service client types. This reduces confusion about where service clients will use configuration members.

We added the external package to provide the utilities for you to use external configuration sources to populate the SDK’s Config type. External sources include environment variables, the shared credentials file (~/.aws/credentials), and the shared config file (~/.aws/config). By default, the 2.0 SDK will now automatically load configuration values from the shared config file. The external package also provides you with the tools to customize how the external sources are loaded and used to populate the Config type.

You can even customize external configuration sources to include your own custom sources, for example, JSON files or a custom file format.

Use LoadDefaultAWSConfig in the external package to create the default Config value, and load configuration values from the external configuration sources.

cfg, err := external.LoadDefaultAWSConfig()

To specify in code which shared configuration profile to load, pass the WithSharedConfigProfile helper into LoadDefaultAWSConfig with the name of the profile to use.

cfg, err := external.LoadDefaultAWSConfig(
	external.WithSharedConfigProfile("exampleProfile"),
)

Once a Config value is returned by LoadDefaultAWSConfig, you can set or override configuration values by setting the fields on the Config struct, such as Region.

cfg.Region = endpoints.UsWest2RegionID

Use the cfg value to provide the loaded configuration to new AWS service clients.

svc := dynamodb.New(cfg)

API request methods

The 2.0 SDK consolidates several ways to make an API call, providing a single request constructor for each API call. This means that your application will create an API request from input parameters, then send it. This enables you to optionally modify and configure how the request will be sent. This includes, but isn’t limited to, modifications such as setting the Context per request, adding request handlers, and enabling logging.

Each API request method is suffixed with Request and returns a typed value for the specific API request.

As an example, to use the Amazon Simple Storage Service GetObject API, the signature of the method is:

func (c *S3) GetObjectRequest(*s3.GetObjectInput) *s3.GetObjectRequest

To use the GetObject API, we pass in the input parameters to the method, just like we would with the 1.x SDK. The 2.0 SDK’s method will initialize a GetObjectRequest value that we can then use to send our request to Amazon S3.

req := svc.GetObjectRequest(params)

// Optionally, set the context or other configuration for the request to use

// Send the request and get the response
resp, err := req.Send()

API enumerations

The 2.0 SDK uses typed enumeration values for all API enumeration fields. This change provides you with the type safety that you and your IDE can leverage to discover which enumeration values are valid for particular fields. Typed enumeration values also provide a stronger type safety story for your application than using string literals directly. The 2.0 SDK uses string aliases for each enumeration type. The SDK also generates typed constants for each enumeration value. This change removes the need for enumeration API fields to be pointers, as a zero-value enumeration always means the field isn’t set.

For example, the Amazon Simple Storage Service PutObject API has a field, ACL ObjectCannedACL. An ObjectCannedACL string alias is defined within the s3 package, and each enumeration value is also defined as a typed constant. In this example, we want to use the typed enumeration values to set an uploaded object to have an ACL of public-read. The constant that the SDK provides for this enumeration value is ObjectCannedACLPublicRead.

req := svc.PutObjectRequest(&s3.PutObjectInput{
	Bucket: aws.String("myBucket"),
	Key:    aws.String("myKey"),
	ACL:    s3.ObjectCannedACLPublicRead,
	Body:   body,
})

API slice and map elements

The 2.0 SDK removes the need to convert slice and map elements from values to pointers for API calls. The 1.x SDK’s pattern of using pointer types for all slice and map elements, such as []*string, was a significant pain point for users, requiring them to convert between types like []string and []*string. The 2.0 SDK does away with the pointer types for slices and maps, using value types such as []string instead.

The following example shows how value types for the Amazon Simple Queue Service AddPermission API’s AWSAccountIds and Actions member slices are set.

svc := sqs.New(cfg)

req := svc.AddPermissionRequest(&sqs.AddPermissionInput{
	// Example values; substitute your own account IDs and actions.
	AWSAccountIds: []string{
		"123456789012",
	},
	Actions: []string{
		"SendMessage",
	},
	Label:    aws.String("MessageSender"),
	QueueUrl: aws.String(queueURL),
})

SDK’s use of pointers

We listened to your feedback on the SDK’s significant use of pointers across the surface of the SDK. The Config type was refactored to remove the use of pointers for both Config value and its member fields. Enumeration pointers were removed, and replaced with typed constants. Slice and map elements were also updated to be value types instead of pointers.

The API parameter fields are still pointer types in the 2.0 SDK. We looked at patterns to remove these pointers by changing the fields to values, but chose not to change the field types. All of the patterns we investigated either didn’t solve the problem fully or increased the risk of unintended API call outcomes, such as unknowingly making an API call with a required parameter left at its type’s zero value. The 2.0 SDK does include setters and utility functions for all API parameter types as an alternative to using pointers directly.

Please share your thoughts in GitHub issues or our Gitter channel. We want to hear your feedback.

Giving feedback and contributing

The 2.0 SDK will use GitHub issues to track feature requests and issues with the 2.0 repo. In addition, we’ll use GitHub projects to track large tasks spanning multiple pull requests, such as refactoring the SDK’s internal request lifecycle. You can provide feedback to us in several ways.

GitHub issues. Provide feedback using GitHub issues on the 2.0 repo. This is the preferred mechanism to give feedback so that other users can engage in the conversation, +1 issues, etc. Issues you open will be evaluated, and included in our roadmap for the GA launch.

Gitter channel. For more informal discussions or general feedback, check out our Gitter channel for the 2.0 repo. The Gitter channel is also a great place to ask general questions, and find help to get started with the 2.0 SDK Developer Preview.

Contributing. You can open pull requests for fixes or additions to the AWS SDK for Go 2.0 Developer Preview release. All pull requests must be submitted under the Apache 2.0 license and will be reviewed by an SDK team member before being merged in. Accompanying unit tests, where possible, are appreciated.