AWS Developer Blog

ASP.NET Core and AWS CodeStar Deep Dive

by Steven Kang | in .NET

The AWS CodeStar team recently announced the addition of two ASP.NET Core project templates. As you might know, AWS CodeStar creates a continuous integration and continuous deployment (CI/CD) pipeline on behalf of developers, so they can spend their valuable time building applications instead of building infrastructure. With the new ASP.NET Core project templates, .NET developers can build and deploy their AWS applications on day one. Tara Walker’s excellent blog post covers how to create ASP.NET Core applications on AWS CodeStar. In this blog post, we take a deeper look at what goes on behind the scenes as we learn how to add tests to your ASP.NET Core project for AWS CodeStar.

Adding a unit test project

Our goal is to add a simple test case that exercises HelloController’s functionality. I’m assuming that you have a brand new ASP.NET Core web service project. If you don’t, you can follow Tara’s blog post (mentioned above) to create one; be sure to choose the ASP.NET Core Web service template. After you create the ASP.NET Core for AWS CodeStar project, clone the project repository through Team Explorer, and load the AspNetCoreWebService solution, you should be able to follow along with the rest of this post. If you need some guidance setting up your repo through Team Explorer, check out Steve Roberts’s Visual Studio and AWS CodeCommit integration announcement from May.

First, add a new xUnit project named AspNetCoreWebServiceTest to the AspNetCoreWebService solution. Our new test project will reference the HelloController class and JsonResult, so we should add AspNetCoreWebService as a project reference and Microsoft.AspNetCore.Mvc as a NuGet reference. Once you add them to the test project, you should see the following addition in AspNetCoreWebServiceTest.csproj.

<ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.3" />
    ...
</ItemGroup>
...
<ItemGroup>
    <ProjectReference Include="..\AspNetCoreWebService\AspNetCoreWebService.csproj" />
</ItemGroup>

This should allow you to make direct references to the HelloController class and unpack JsonResult. Let’s add a simple test case, as follows.

using System;
using Xunit;
using Microsoft.AspNetCore.Mvc;
using AspNetCoreWebService.Controllers;

namespace AspNetCoreWebServiceTest
{
    public class HelloControllerTest
    {
        [Fact]
        public void SimpleTest()
        {
            HelloController controller = new HelloController();
            var response = controller.Get("AWS").Value as Response;
            Assert.Equal("Hello AWS!", response.output);
        }
    }
}

Notice that we have renamed the file, namespace, class, and method. Run the test and verify that it passes. You should see the following in Solution Explorer.

Now that we have a working test project, we should update our pipeline to build and run the test before deploying the application.

Updating the AWS CodeBuild job

Let’s first look at how the project is built. When you or your team member pushes a change to the repo, your pipeline automatically begins the build process against the latest change. During this step, AWS CodeBuild uses the buildspec.yml file in the root of the repository to drive the build process.

version: 0.2
phases:
  pre_build:
    commands:
      - echo Restore started on `date`
      - dotnet restore AspNetCoreWebService/AspNetCoreWebService.csproj
  build:
    commands:
      - echo Build started on `date`
      - dotnet publish -c release -o ./build_output AspNetCoreWebService/AspNetCoreWebService.csproj
artifacts:
  files:
    - AspNetCoreWebService/build_output/**/*
    - scripts/**/*
    - appspec.yml

The AWS CodeBuild job uses the .NET Core image for AWS CodeBuild, which contains the .NET Core SDK and CLI you will invoke in buildspec.yml.  Since this project consists of one web service, a single buildspec.yml file should be sufficient. As your project grows and the complexity of the build process increases, you may want to drive the build process externally via a shell script or an MSBuild .proj file and simply invoke the script/build file in buildspec.yml.
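
For example, if you drove the build from a shell script checked into the repo (the build.sh name here is only a placeholder), the build phase of buildspec.yml might shrink to something like this:

version: 0.2
phases:
  build:
    commands:
      - chmod +x ./build.sh
      - ./build.sh
artifacts:
  files:
    - AspNetCoreWebService/build_output/**/*
    - scripts/**/*
    - appspec.yml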

I would like to bring your attention to the dotnet publish command. This publishing step is crucial here, because it packages all dependencies together so that they are immediately available on the host machine. As defined in the artifacts section of the buildspec.yml file shown above, the list of files will be stored in an Amazon S3 bucket for AWS CodeDeploy to use to deploy your application onto the host. scripts/**/* contains all scripts that appspec.yml depends on. If you’re not familiar with appspec.yml or want to know more about it, we’ll go over it in the next section.

In the previous section, we added a test project to our AWS CodeCommit repository. Now we should update buildspec.yml to build our new test project. We could simply run dotnet vstest as part of the build stage. However, in this exercise, let’s follow best practices by building separate stages for build and test. Let’s modify the buildspec.yml to build the test binaries and publish the bits into the AspNetCoreWebServiceTest/test_output directory.

pre_build:
    commands:
        ...
        - dotnet restore AspNetCoreWebServiceTest/AspNetCoreWebServiceTest.csproj
post_build:
    commands:
        ...
        - dotnet publish -c release -o ./test_output AspNetCoreWebServiceTest/AspNetCoreWebServiceTest.csproj  
artifacts:
    files:
        ...
        - AspNetCoreWebServiceTest/test_output/**/*

Notice that we added AspNetCoreWebServiceTest/test_output/**/* as an artifact. In effect, this directs the AWS CodeBuild service to upload the published test binaries to Amazon S3, so that we can reference them in the test job we will create next.

Updating AWS CodePipeline

In the previous sections, we added a new test project and modified buildspec.yml to build and save the binaries we need to run the tests. Now we’ll go over how to add a test stage in our pipeline. Let’s begin by adding a Test stage and a UnitTest action to the pipeline in the console.

Follow the rest of the UI and fill in these parameters:

  • Action category: Test
  • Action name: UnitTest
  • Test provider: AWS CodeBuild
  • Select Create a new build project
  • Project name: <your project name>-test
  • Operating system: Ubuntu
  • Runtime: .NET Core
  • Version: aws/codebuild/dot-net:core-1
  • For Build specification, select Insert build commands
  • Build command: dotnet vstest AspNetCoreWebServiceTest/test_output/AspNetCoreWebServiceTest.dll
  • For Role name, select CodeStarWorker-<your project name>-CodeBuild from the list
  • For Input artifacts #1, select <your project name>-BuildArtifact from the list

The key piece of information here is the build command you provide. Our test job will run dotnet vstest against the test .dll built in the previous stage. Your pipeline should now look like this.

We’re almost done! If you run this pipeline by pressing Release change, the pipeline will fail on the Test stage with the message Error Code: AccessDeniedException. This is because the AWS CodeStar service doesn’t have permission to run our new Test stage. Let’s figure out how to grant appropriate access to our AWS CodeStar project.

Updating the role policy

Your AWS CodeStar project created policies that grant the minimum permissions needed for the various services and workers to sync, build, and deploy your application. Because we added a new AWS CodeBuild job, we need to grant access to the new resource in CodeStarWorkerCodePipelinePolicy. Navigate to the IAM console to make this change. On the Roles tab, search for “codepipeline”. The role should be named CodeStarWorker-<project name>-CodePipeline. Then, edit the policy attached to the role. This is shown below.

We want to add the ARN of our new AWS CodeBuild project, arn:aws:codebuild:us-east-1:532345249509:project/<your project name>-test, to the resources associated with the AWS CodeBuild actions in the policy.

{
    "Action": [
        "codebuild:StartBuild",
        "codebuild:BatchGetBuilds",
        "codebuild:StopBuild"
    ],
    "Resource": [
        "arn:aws:codebuild:us-east-1:532345249509:project/<your project name>"
        "arn:aws:codebuild:us-east-1:532345249509:project/<your project name>-test"
    ],
    "Effect": "Allow"
}

That’s it. Your AWS CodeStar project should now have appropriate permission to build the new job. Give it a try by pressing Release change.

ASP.NET Core application deployment

So far we’ve seen how AWS CodeStar builds and tests your project. In this section, we look closer at the deployment process. As part of the AWS CodeStar project creation, the AWS CodeStar service creates an Amazon EC2 instance to host your application. It also installs code-deploy-agent, which runs the deployment process on that instance following the instructions in appspec.yml. Let’s take a look at appspec.yml.

version: 0.0
os: linux
files:
  - source: AspNetCoreWebService/build_output
    destination: /home/ubuntu/aspnetcoreservice
  - source: scripts/virtualhost.conf
    destination: /home/ubuntu/aspnetcoreservice 
hooks:
  ApplicationStop:
    - location: scripts/stop_service
      timeout: 300
      runas: root

  BeforeInstall:
    - location: scripts/remove_application
      timeout: 300
      runas: root

  AfterInstall:
    - location: scripts/install_dotnetcore
      timeout: 500
      runas: root

    - location: scripts/install_httpd
      timeout: 300
      runas: root

  ApplicationStart:
    - location: scripts/start_service
      timeout: 300
      runas: root

Each script is run at various stages of the deployment process:

  • install_dotnetcore – Installs dotnet core if it isn’t already installed, and updates the package cache on the first run. This is Microsoft’s recommended way of installing .NET Core on Ubuntu.
  • install_httpd – Installs HTTPD daemon and mods, and overwrites the HTTPD configuration file to enable reverse-proxy.
  • start_service – Restarts the HTTPD service and restarts the existing ASP.NET application/service process.
  • stop_service – Stops the HTTPD service and stops the ASP.NET application/service if it is already running.
  • remove_application – Removes the deployed application from the instance.

The code-deploy-agent on the instance runs these hooks during the application deployment to install and start the service. You can monitor deployment events in the AWS CodeDeploy console and grab a detailed log from the EC2 instance. After opening an SSH connection to the instance, navigate to /var/log/aws/codedeploy-agent to find the deployment logs.
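
For example (the key pair and host name below are placeholders for your own values), you can tail the agent log like this:

$ ssh -i <your-key-pair>.pem ubuntu@<instance-public-dns>
$ sudo tail -f /var/log/aws/codedeploy-agent/codedeploy-agent.log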

Conclusion

In this blog post, you learned how your ASP.NET Core project for AWS CodeStar is built and deployed through the example of adding a test stage to your application’s pipeline. I hope this post helped you understand how various components and AWS services interact to provide you with a complete CI/CD system under AWS CodeStar. To learn more, visit the AWS CodeStar user guide. If you run into issues that are specific to AWS CodeStar, see the AWS CodeStar troubleshooting guide.

We want to hear from you!

We’re always looking to make your life easier, and want to hear your ideas on how to make the AWS CLI even better for you!

Today, we’ve opened up a site on UserVoice that allows you to post your suggestions and ideas about the AWS CLI. After an idea is posted, other users can vote on it, and the product team will respond directly to the most popular suggestions.

This will help us get the most important features to you by making it easier to search for and show support for the features you care most about, without diluting the conversation with bug reports.

We’ve imported existing feature requests from GitHub, with each request starting with one vote. Because it’s a text-only import between the two systems, we’ll still keep in mind the comments and discussions that already exist. GitHub will remain the channel for reporting bugs.

Go and vote for your favorite feature requests at https://aws.uservoice.com!

We’d love to hear from you!

-The AWS SDK & Tools Team

 

Chalice – 1.0.0 GA Release

by John Carlyle | in Python

We’re excited to announce the 1.0.0 GA (Generally Available) release of Chalice!

Chalice is an open source serverless microframework that allows you to create and maintain application backends using a variety of AWS resources.

Chalice 1.0.0 is now generally available and ready for production use. If you want to give it a try, you can download it from PyPI and install it with pip as follows.

pip install --upgrade chalice

We follow Semantic Versioning, and are dedicated to maintaining backwards compatibility for each major version.

Getting started with Chalice

You can get started with Chalice and deploy a fully functional API in just a few minutes by following our getting started guide.

You can find the complete documentation at readthedocs.

Notable Chalice features

Chalice provides many features to help build serverless applications on AWS. Here we provide an overview of a select few.

Building an API backend

The core of Chalice is the ability to annotate Python functions with a simple decorator that allows Chalice to deploy that function to AWS Lambda and link it to a route in Amazon API Gateway. The following is a fully functional Chalice application with a single linked route.

from chalice import Chalice

app = Chalice(app_name="helloworld")

@app.route("/")
def hello_world():
    return {"hello": "world"}

This application can be deployed easily by running the command chalice deploy. Chalice takes care of all the machinery around bundling up the application code, deploying that code to Lambda, setting up an API Gateway Rest API with all the routes specified in your code, and linking up the Rest API to the Lambda function. Chalice will print out something like the following while it deploys the application.

Initial creation of lambda function.
Creating role
Creating deployment package.
Initiating first time deployment...
Deploying to: dev
https://.execute-api.us-west-2.amazonaws.com/api/

Once the deployment is complete, you can send requests to the endpoint it printed out at the end.

$ curl https://.execute-api.us-west-2.amazonaws.com/api/
{"hello": "world"}

Dependency packaging

App packaging can be difficult in the Python world. Chalice will try to download or build all of your project requirements that are specified in a special requirements.txt file and add them to the code bundle that is uploaded to Lambda. Chalice will also try to build and deploy dependencies that have C extensions.
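
As a quick illustration (the package names and versions here are just examples), a requirements.txt at the root of your Chalice project might look like the following; chalice deploy picks it up and bundles the packages automatically.

# requirements.txt (lives next to app.py in your project root)
requests==2.18.4
Pillow==4.2.1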

Pure Lambda functions

Pure Lambda functions enable you to deploy functions that are not tied to API Gateway. This is useful if you want to take advantage of the Chalice deployment and packaging features, but don’t need to call it over a REST API.

@app.lambda_function()
def custom_lambda_function(event, context):
    # Anything you want here.
    return {}

Scheduled events

Scheduled events let you mark a handler function to be called on some time interval using Amazon CloudWatch Events. It’s easy to add a scheduled job to be run using a Chalice scheduled event handler, as follows:

# Rate is imported from the chalice package: from chalice import Rate
@app.schedule(Rate(12, unit=Rate.HOURS))
def handler(event):
    backup_logs()  # call whatever work needs to run on this schedule

Automatic policy generation

Automatic policy generation means that Chalice can scan your code for AWS calls, and generate an IAM policy with the minimal set of permissions your Lambda function needs to run. You can disable this feature and provide your own IAM policy if you need more control over exactly what permissions the Lambda function should have.
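
As a sketch of what that looks like (the stage and file names are examples), you can turn off policy generation for a stage in .chalice/config.json and point Chalice at your own policy file:

{
  "version": "2.0",
  "app_name": "helloworld",
  "stages": {
    "dev": {
      "autogen_policy": false,
      "iam_policy_file": "policy-dev.json"
    }
  }
}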

Authorizers

Chalice can handle a lot of common authorization workflows for you by providing hooks into both IAM authorization and Amazon Cognito user pools. If you want more control over your authorization methods, you can use a custom authorizer. This lets you call a Lambda function that runs custom code from your Chalice application to determine whether a request is authorized.
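
Here’s a minimal sketch of a built-in custom authorizer; the token check is purely illustrative, and you would replace it with your own validation logic.

from chalice import Chalice, AuthResponse

app = Chalice(app_name="helloworld")

@app.authorizer()
def demo_auth(auth_request):
    # Inspect the incoming token and decide which routes the caller may access.
    if auth_request.token == 'allow':
        return AuthResponse(routes=['/'], principal_id='user')
    return AuthResponse(routes=[], principal_id='user')

@app.route('/', authorizer=demo_auth)
def index():
    return {"hello": "world"}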

Continuous integration

You can use Chalice to build a continuous integration pipeline. This works by creating an AWS CodeCommit repository for your project code to live in, and an AWS CodePipeline that watches the repository for changes and kicks off a build in AWS CodeBuild whenever there are changes. You can configure the AWS CodeBuild stage to run tests on your code, and automatically deploy code to production if the tests all pass. The pipeline resources are all created and managed with an AWS CloudFormation template that Chalice will generate for you.
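
If you want to try it, the workflow looks roughly like the following (the stack name is an example): Chalice generates the CloudFormation template, and you create the pipeline stack with the AWS CLI.

$ chalice generate-pipeline pipeline.json
$ aws cloudformation deploy --template-file pipeline.json \
    --stack-name helloworld-pipeline --capabilities CAPABILITY_IAM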

Upgrade notes

If you’re already a Chalice user, there are a few changes to be aware of when upgrading to version 1.0.0.

Parameters to route handling functions are now keyword arguments instead of positional arguments. In the following code, captured parts of the URI will be assigned to the argument with a matching name, rather than in the order they appear.

@app.route('/user/{first_name}/{last_name}')
def name_builder(last_name, first_name):
    return '%s %s' % (first_name, last_name)

This means that code in which the variable names don’t match the URI will now be broken. For example, the following code will not work because the parameter name and the URI capture group don’t match.

@app.route('/user/{user_id}')
def get_user(user_number):
    # Broken: 'user_number' doesn't match the '{user_id}' capture group
    return {'user': user_number}

Support for policy.json has been removed. The policy file must now be suffixed with the stage name, for example, policy-dev.json.
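
In other words, a project that previously shipped a .chalice/policy.json would now look roughly like this for a dev stage:

.chalice/
├── config.json
└── policy-dev.json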

Let us know what you think

We would love to hear your feedback. Feel free to leave comments or suggestions on our GitHub page.

AWS SDK for Java 2.0 – Feedback Needed

by Matthew Miller | in Java

This is the first in a series of blog posts that outline changes coming in the AWS SDK for Java 2.0. Read our developer preview announcement for more information about why we’re so excited for this new version of the SDK.

We want your help to shape the future of the AWS SDK for Java 2.0. We have lots of features we want to add in 2.0 and need your input on which ones are most important to you. We’ve started tracking reimagined 1.11.x features and features completely new to 2.0 in our issue backlog on GitHub. Please let us know what you think by giving the ones that excite you a “+1” or a comment. Is there something you’d like us to add or change, but you don’t see it in our issues backlog? Create a new issue to tell us about it.

Feel free to chat with us about 2.0 in our Gitter channel, as well. If you have feedback to share, but you don’t want to create a GitHub account, you can send us an email at aws-java-sdk-v2-feedback@amazon.com.

Stay tuned to this blog for more AWS SDK for Java 2.0 development updates.

Screencast using .NET Core with AWS Serverless from NDC Oslo

by Norm Johanson | in .NET

Last month I had the pleasure of speaking at the NDC conference in Oslo talking about .NET Core and AWS Serverless technologies.

The talk focused on a new reference application I have been working on called Pollster. Two years ago, at the 2015 AWS re:Invent conference, we demoed a version of Pollster using .NET Core (then called ASP.NET 5) and Docker. It was great revisiting this app and thinking about how to solve its technology challenges using serverless technology.

Thanks to the NDC team, a screencast of my talk has been uploaded. Check it out to see how I used AWS serverless services such as AWS Lambda, Amazon API Gateway, and AWS Step Functions. The application isn’t feature complete yet, but you can find the source on GitHub.

Developer Experience of the AWS SDK for C++ Now Simplified by CMake

by Andrew Tang | in C++

Building a cross-platform C or C++ project is tedious and time consuming. You often have to manage build files for each platform’s build system. On Unix-like systems, you might use Make, while on Windows you would have to use MSBuild. To make matters worse, in each of these build systems you have to manually maintain and configure compiler flags and linker flags.

We’re very pleased to announce that starting with version 1.0.109 of the AWS SDK for C++, you can more easily use CMake to build your project against the SDK. In addition, it’s easier to uninstall the SDK.

Here’s a simple example script that uses CMake to build a project against the SDK.

cmake_minimum_required(VERSION 2.8)
project(s3Encryption)
find_package(AWSSDK REQUIRED)
set(SERVICE s3-encryption)
AWSSDK_DETERMINE_LIBS_TO_LINK(SERVICE OUTPUT)
link_directories("${AWSSDK_LIB_DIR}")
add_executable(s3Encryption s3Encryption.cpp)
target_link_libraries(s3Encryption ${OUTPUT})
target_include_directories(s3Encryption PRIVATE ${AWSSDK_INCLUDE_DIR})

To uninstall the SDK, just run make uninstall inside your build directory.

In earlier versions, each SDK had its own CMake script, but those scripts could only tell you whether the SDK existed. Now, when you run sudo make install, this latest version installs a new directory named AWSSDK.

On a Unix-like system, this is the default installation path:
“/usr/local/lib/cmake/AWSSDK”

On Windows, this is the default installation path:
“C:/Program Files/aws-cpp-sdk-all/lib/cmake/AWSSDK”

Several CMake scripts are created in this directory. The most important one is AWSSDKConfig.cmake, which CMake uses to find the AWSSDK module and load the script. For information about how this file is named, see CMake Find Package Config Mode.

Calling find_package(AWSSDK) makes several useful variables and macros available to you, as follows.

Variables:

AWSSDK_LIB_DIR
AWSSDK_BIN_DIR
AWSSDK_INCLUDE_DIR

Macros:

AWSSDK_CPY_DYN_LIBS(SERVICE_LIST CONFIG DESTDIR)
AWSSDK_LIB_DEPS(SERVICE DEPS)
AWSSDK_DETERMINE_LIBS_TO_LINK(SERVICE_LIST OUTPUT)

You can use the AWSSDK_CPY_DYN_LIBS macro to copy all the SDKs that are specified in SERVICE_LIST. In addition, it copies all their dependent libraries (including recursive dependencies) and the core library to DESTDIR. You use CONFIG to specify the compile-time binary configuration of the SDKs. You can leave it unset, or set it to Debug, Release, or another configuration.

For example, S3 encryption depends on Core, Amazon S3, and AWS KMS. Both S3 and KMS depend on Core. So, the following script copies libaws-cpp-sdk-core.so, libaws-cpp-sdk-s3.so, libaws-cpp-sdk-kms.so and libaws-cpp-sdk-s3-encryption.so to the current directory.

set(SERVICE_LIST s3 s3-encryption)
AWSSDK_CPY_DYN_LIBS(SERVICE_LIST "" "./")

You can use the AWSSDK_LIB_DEPS macro to output the dependent libraries of SERVICE to DEPS. Remember that SERVICE is a single SDK name rather than a list of SDK names, and DEPS is a list of simplified library names, such as "core;s3;kms;s3-encryption".
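
For example, following the same calling convention as the earlier script, you could print the dependency list of the S3 encryption SDK like this:

set(SERVICE s3-encryption)
AWSSDK_LIB_DEPS(SERVICE DEPS)
message(STATUS "s3-encryption depends on: ${DEPS}")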

The AWSSDK_DETERMINE_LIBS_TO_LINK macro is similar to AWSSDK_CPY_DYN_LIBS; however, instead of copying the libraries, it outputs their names to OUTPUT. Notice that OUTPUT is a complete list of library names, such as "aws-cpp-sdk-core;aws-cpp-sdk-s3", which you can use as arguments to find_library().

The PkgConfig metadata file of each SDK is installed on all platforms under the same directory as CMake scripts. On Unix-like systems, we can use the PkgConfig module in CMake to simplify this, as we did in the previous example script. But if you want to try a command line build or a simple Makefile build, you can use a command like the following to generate all the flags, libs, and paths you want.

pkg-config --libs --cflags aws-cpp-sdk-s3-encryption

Try this sample project on your own platform. Before you begin, be sure to do the following:

  • Install the latest version of the AWS SDK for C++.
  • Create and set up AWS credentials on your test machine.
  • Create an Amazon S3 bucket under your account. The region must be the same as the region used in your AWS client configuration.
  • Create an AWS KMS master key.
  • Apply changes to main.cpp in this project, such as the master key ID, the bucket name, the object key you want to use, and so on.

Please reach out to us with questions and improvements. As always, pull requests are welcome!

Chalice Version 1.0.0b1 Is Now Available

by James Saryerwinnie | in Python

We’ve just released Chalice version 1.0.0b1, the first preview release of Chalice 1.0.0. Since our last post that showcased the 0.9.0 release we’ve added numerous features we’re excited to share with you.

  • Support for built in authorizers. In earlier versions of Chalice, you could integrate a custom authorizer with your Chalice application. However, you had to manage the AWS Lambda function separately from your Chalice app. You can now use Chalice to manage the Lambda function used for your custom authorizer. When you define a built-in authorizer in your Chalice application, the chalice deploy command will manage both your Lambda function used for your API handler and the Lambda function used for your authorizer. You register an authorizer function with Chalice by using the @app.authorizer() decorator. Our user guide walks through an example of using built-in authorizers in Chalice.
  • Support for binary Python packages. When it’s possible, Chalice now automatically tries to download binary packages. This allows you to use Python packages that require C extensions, provided they have a manylinux1 wheel available. As a result, Python packages such as numpy, psycopg2, and Pillow will automatically work with Chalice. See 3rd Party Packages in our user guide for more information.
  • Support for scheduled events. Scheduled events has been one of the most requested features of Chalice. In version 1.0.0b1 of Chalice, you can now register a function to be called on a regular schedule. This is powered by Amazon CloudWatch Events. To create a scheduled event, you use the @app.schedule() decorator on any function in your application. Chalice takes care of creating the additional Lambda function, creating the necessary CloudWatch Events rules and targets, and adding the appropriate permissions to the Lambda function policy. See Event Sources in our user guide for more information on using scheduled events in Chalice.
  • Support for pure AWS Lambda functions. The @app.route(), @app.authorizer(), and @app.schedule() decorators not only create Lambda functions for you, they also offer a higher level of abstraction over a standard Lambda function. However, there are times when you just need a pure Lambda function with no additional levels of abstraction. Chalice now supports this with the @app.lambda_function() decorator. By using this decorator, you can still leverage all of Chalice’s deployment capabilities including automatic policy generation, deployment packaging for your requirements.txt file, stage support, etc. See pure Lambda functions in our user guide for more details.

If you’d like to try out this preview version of Chalice 1.0.0, you have two options when using pip:

  • You can specify the --pre flag: pip install --upgrade --pre chalice.
  • You can specify a version range that references this preview release: pip install "chalice>=1.0.0b1,<2.0.0" (the quotes keep your shell from interpreting the < character). This also installs any future 1.0.0 preview releases of Chalice.

We’d love to hear any feedback you have about Chalice. Try out these new features today and let us know what you think. You can chat with us on our Gitter channel and file feature requests and issues on our GitHub repo. We look forward to hearing from you.

Improvements for AWS CloudFormation and Amazon CloudWatch in the AWS Tools for PowerShell Modules

Trevor Sullivan, a Systems Development Engineer here at Amazon, recently contributed some new AWS CloudFormation helper cmdlets and improved formatting for types he works with on a daily basis. These updates were released in version 3.3.119.0 of the AWS Tools for PowerShell modules (AWSPowerShell and AWSPowerShell.NetCore), in addition to new support in Amazon CloudWatch metrics for customizable dashboards. In this guest post, Trevor takes us through the updates.

Pause a script until a CloudFormation stack status is reached

If you want to pause your PowerShell script until a CloudFormation stack reaches a certain status, you can use the Wait-CFNStack cmdlet. You use Wait-CFNStack to specify a CloudFormation stack name and the status code that you want to wait for. All of the supported CloudFormation statuses are provided with IntelliSense/tab-completion for the -Status parameter, so you don’t need to look them up! Let’s take a look at how you use this cmdlet.

$Common = @{
    ProfileName = 'default'
    Region = 'us-east-2'
}
$CloudFormation = @{
    StackName = 'AWSCloudFormation'
    TemplateBody = @'
AWSTemplateFormatVersion: '2010-09-09'
Resources:
    myBucket:
        Type: AWS::S3::Bucket
Outputs:
    BucketName:
        Value: !Ref myBucket
'@
}
New-CFNStack @CloudFormation @Common
Wait-CFNStack -StackName $CloudFormation.StackName -Status CREATE_COMPLETE @Common

Test the existence of the CloudFormation stack

Have you ever wanted to simply test whether a CloudFormation stack exists in a certain AWS Region? If so, we now have a cmdlet for that. The Test-CFNStack cmdlet simply returns a Boolean $true if the specified stack exists, or $false if it doesn’t. If your stack doesn’t exist, you no longer have to worry about catching exceptions thrown by the Get-CFNStack cmdlet!

$Common = @{
    ProfileName = 'default'
    Region = 'us-east-2'
}

if (Test-CFNStack -StackName $CloudFormation.StackName @Common) {
    Remove-CFNStack -StackName $CloudFormation.StackName -Force @Common
}

Format types

Another customer-obsessed enhancement in the latest version of the modules deals with the default display of certain objects. In earlier versions, complex objects such as CloudFormation stacks were typically displayed in the vertical “list” format (see the Format-List PowerShell cmdlet). The “list” output format doesn’t use horizontal screen space very effectively, so you have to scroll a lot to find what you want, and the output isn’t easy to consume.

Instead, we opted to improve the default output to use the PowerShell table format. This makes data easier to consume, so you don’t have to scroll as much. It also limits focus to the object properties that you care about the most.

If you prefer the “list” format, you can still use it by piping your objects into the Format-List PowerShell cmdlet. The default output has simply been changed to use a tabular format to make data easier to interact with and consume.
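
For example, reusing the $Common splat from earlier, the following pipes the stack objects back into the list layout:

Get-CFNStack @Common | Format-List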

The new format types work with cmdlets that emit complex objects, such as:

  • Get-CFNStackEvent
  • Get-CFNStack
  • Get-IAMRoleList
  • Get-CWERule
  • Get-LMFunctionList
  • Get-ASAutoScalingGroup
  • Get-WKSWorkspace
  • Get-CWAlarm

The changelog for version 3.3.119.0 of the module on the PowerShell Gallery lists all the types for which new formats have been specified.

Manage CloudWatch dashboards

AWS customers who use CloudWatch to store and view metrics will appreciate the new CloudWatch dashboard APIs. You can now use PowerShell cmdlets to create, list, and delete CloudWatch dashboards!

I’ve already created a CloudWatch dashboard in my account, so let’s check out how we can export it, modify it, and then update it. Let’s start by discovering which AWS cmdlets relate to CloudWatch dashboards by using Get-AWSCmdletName.

PS /Users/tsulli> Get-AWSCmdletName -MatchWithRegex dashboard

CmdletName           ServiceOperation         ServiceName       CmdletNounPrefix
----------           ----------------         -----------       ----------------
Get-CWDashboard      GetDashboard             Amazon CloudWatch CW
Get-CWDashboardList  ListDashboards           Amazon CloudWatch CW
Remove-CWDashboard   DeleteDashboards         Amazon CloudWatch CW
Write-CWDashboard    PutDashboard             Amazon CloudWatch CW

Now, let’s discover which CloudWatch dashboards already exist in the us-west-2 AWS Region by using Get-CWDashboardList.

PS /Users/tsulli> Get-CWDashboardList -Region us-west-2

DashboardArn   DashboardName   LastModified        Size
------------   -------------   ------------        ----
               MacBook-Pro     7/6/17 7:50:16 PM   1510

As you can see, I’ve got a single CloudWatch dashboard in my test account, with some interesting metrics about my MacBook Pro. Coincidentally, these hardware metrics are also being written to CloudWatch metrics using the AWSPowerShell.NETCore module.

Now let’s grab some detailed information about this specific CloudWatch dashboard. We do this using the Get-CWDashboard cmdlet, and simply passing in the region and name of the dashboard. Be sure to remember that the dashboard name is a case-sensitive input parameter.

PS /Users/tsulli> $Dashboard = Get-CWDashboard -DashboardName MacBook-Pro -Region us-west-2
PS /Users/tsulli> $Dashboard | Format-List

LoggedAt : 7/7/17 1:44:44 PM
DashboardArn : arn:aws:cloudwatch::123456789012:dashboard/MacBook-Pro
DashboardBody : {"widgets......
DashboardName :
ResponseMetadata : Amazon.Runtime.ResponseMetadata
ContentLength : 3221
HttpStatusCode : OK

For readability in this article, I’ve trimmed the DashboardBody property. However, it contains a lengthy string with the JSON that represents my CloudWatch dashboard. I can use the ConvertFrom-Json cmdlet to convert the string to a usable object in PowerShell.

PS /Users/tsulli> $DashboardObject = $Dashboard.DashboardBody | ConvertFrom-Json

Now let’s update the title field of all the widgets on the CloudWatch dashboard. Let’s change the beginning of each widget’s title from “Trevor” to “David”. Right now, the title reads “Trevor’s MacBook Pro”. After updating it, the widget titles will read “David’s MacBook Pro”. We’ll use the ForEach method syntax in PowerShell to do this. Each widget has a property named properties, which has a title string property. We’ll do a simple string replacement operation on this property’s value.

PS /Users/tsulli> $DashboardObject.widgets.ForEach({ $PSItem.properties.title = $PSItem.properties.title.Replace('Trevor', 'David') })

Now that we’ve modified the widget titles, let’s convert the dashboard back to JSON and overwrite our dashboard! We’ll use ConvertTo-Json to convert the dashboard object back into its JSON representation. Then we’ll call Write-CWDashboard to commit the updated dashboard back to the CloudWatch service.

PS /Users/tsulli> $DashboardJson = $DashboardObject | ConvertTo-Json -Depth 8
PS /Users/tsulli> Write-CWDashboard -DashboardBody $DashboardJson -DashboardName MacBook-Pro -Region us-west-2

Great! Now if you go back to the AWS Management Console and visit your CloudWatch dashboard, you’ll see that your widgets have updated titles!

Conclusion

We hope you enjoy the continued improvements to the AWS Tools for PowerShell customer experience! If you have feedback on these improvements, please let us know. You can:

  • Leave comments and feedback in our AWS SDK forums.
  • Tweet to us at @awscloud and @awsfornet.
  • Comment on this article!

AWS SDK for Go – Batch Operations with Amazon S3

The v1.9.44 release of the AWS SDK for Go adds support for batched operations in the s3manager package. This enables you to easily upload, download, and delete Amazon S3 objects. The feature uses the iterator (also known as the scanner) pattern to enable users to extend the functionality of batching. This blog post shows how to use and extend the new batched operations to fit a given use case.

Deleting objects using ListObjectsIterator

  sess := session.Must(session.NewSession(&aws.Config{}))
  svc := s3.New(sess)

  input := &s3.ListObjectsInput{
    Bucket:  aws.String("bucket"),
    MaxKeys: aws.Int64(100),
  }
  // Create a delete list objects iterator
  iter := s3manager.NewDeleteListIterator(svc, input)
  // Create the BatchDelete client
  batcher := s3manager.NewBatchDeleteWithClient(svc)

  if err := batcher.Delete(aws.BackgroundContext(), iter); err != nil {
    panic(err)
  }

This example lists objects under the specified bucket, one hundred keys at a time. The call to NewDeleteListIterator creates a delete list iterator, which dictates how the BatchDelete client behaves: when we call Delete on the client, it consumes a BatchDeleteIterator.
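
If you already know exactly which objects you want to delete, a minimal sketch (bucket and key names are placeholders) can reuse the svc client from above with the SDK-provided DeleteObjectsIterator instead of listing the bucket first:

  objects := []s3manager.BatchDeleteObject{
    {Object: &s3.DeleteObjectInput{Bucket: aws.String("bucket"), Key: aws.String("logs/2017-07-01.log")}},
    {Object: &s3.DeleteObjectInput{Bucket: aws.String("bucket"), Key: aws.String("logs/2017-07-02.log")}},
  }
  // Delete the explicit list of objects in batches.
  iter := &s3manager.DeleteObjectsIterator{Objects: objects}
  if err := s3manager.NewBatchDeleteWithClient(svc).Delete(aws.BackgroundContext(), iter); err != nil {
    panic(err)
  }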

Creating a custom iterator

The SDK enables you to pass custom iterators to the new batched operations. For example, if we want to upload a directory, none of the default iterators do this easily. The following example shows how to implement a custom iterator that uploads a directory to S3.

// Imports used by the DirectoryIterator type and the main function below.
package main

import (
  "fmt"
  "os"
  "path/filepath"

  "github.com/aws/aws-sdk-go/aws"
  "github.com/aws/aws-sdk-go/aws/session"
  "github.com/aws/aws-sdk-go/service/s3/s3manager"
)

// DirectoryIterator iterates through files and directories to be uploaded
// to S3.
type DirectoryIterator struct {
  filePaths []string
  bucket    string
  next      struct {
    path string
    f    *os.File
  }
  err error
}

// NewDirectoryIterator creates and returns a new BatchUploadIterator
func NewDirectoryIterator(bucket, dir string) s3manager.BatchUploadIterator {
  paths := []string{}
  filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
    // We care only about files, not directories
    if !info.IsDir() {
      paths = append(paths, path)
    }
    return nil
  })

  return &DirectoryIterator{
    filePaths: paths,
    bucket:    bucket,
  }
}

// Next opens the next file and stops iteration if it fails to open
// a file.
func (iter *DirectoryIterator) Next() bool {
  if len(iter.filePaths) == 0 {
    iter.next.f = nil
    return false
  }

  f, err := os.Open(iter.filePaths[0])
  iter.err = err

  iter.next.f = f
  iter.next.path = iter.filePaths[0]

  iter.filePaths = iter.filePaths[1:]
  return iter.Err() == nil
}

// Err returns an error that was set while opening a file
func (iter *DirectoryIterator) Err() error {
  return iter.err
}

// UploadObject returns a BatchUploadObject and sets the After field to
// close the file.
func (iter *DirectoryIterator) UploadObject() s3manager.BatchUploadObject {
  f := iter.next.f
  return s3manager.BatchUploadObject{
    Object: &s3manager.UploadInput{
      Bucket: &iter.bucket,
      Key:    &iter.next.path,
      Body:   f,
    },
    // After was introduced in version 1.10.7
    After: func() error {
      return f.Close()
    },
  }
}

We have defined a new iterator named DirectoryIterator. It satisfies the BatchUploadIterator interface by defining the three required methods: Next, Err, and UploadObject. The Next method tells the batch operation whether to continue iterating. Err returns an error if one was set; in this case, the only time we return an error is when we fail to open a file, and when that happens Next returns false. Finally, UploadObject returns the BatchUploadObject that is used to upload contents to the service. In this example, we create an input object and a closure; the closure ensures that we don’t leak open file handles. Now let’s define our main function using what we defined above.

func main() {
  region := os.Args[1]
  bucket := os.Args[2]
  path := os.Args[3]
  iter := NewDirectoryIterator(bucket, path)
  uploader := s3manager.NewUploader(session.New(&aws.Config{
    Region: &region,
  }))

  if err := uploader.UploadWithIterator(aws.BackgroundContext(), iter); err != nil {
    panic(err)
  }
  fmt.Printf("Successfully uploaded %q to %q", path, bucket)
}

You can verify that the directory has been uploaded by looking in S3.
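
For example, with the AWS CLI (the bucket name is a placeholder):

$ aws s3 ls s3://your-bucket-name --recursive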

Please chat with us on Gitter and file feature requests or issues on GitHub. We look forward to your feedback and recommendations!

AWS SDK for Java 2.0 – Developer Preview

by Andrew Shore | in Java

We’re pleased to announce the Developer Preview of the AWS SDK for Java 2.0. The 2.0 version of the SDK is a major rewrite of the 1.11.x code base. It’s built on top of Java 8 and adds several frequently requested features, like support for non-blocking I/O and the ability to use a different HTTP implementation at runtime. In addition to these new features, many aspects of the SDK have been refactored and cleaned up with a strong focus on consistency, immutability, and ease of use. The Developer Preview is your chance to influence the direction of the AWS SDK for Java 2.0. Tell us what you like, tell us what you don’t like. Your feedback matters to us. Find details on various ways to give feedback at the bottom of this post.

Although we’re excited about the AWS SDK for Java 2.0 Developer Preview, we also want to reassure customers that we’re not dropping support for the 1.x line of the SDK any time soon. We know there are lots of customers who depend on 1.x versions of the SDK, and we will continue to support them. Version 2.0 is also able to run alongside version 1.x in the same JVM to allow partial migration to the new product. As we get closer to general availability for version 2.0, we’ll share a more detailed plan on how we’ll support the 1.x line.

Getting started

Let’s walk through setting up a project that depends on the SDK and makes a simple service call. The following steps use Maven as an example, but you can use any build system that supports Maven Central as an artifact source (Gradle, sbt, etc.). These steps assume you have Maven and a Java 8 JDK already installed. See the developer guide for a more detailed tutorial on getting started.

    1. Create a new Java 8 Maven project.
    2. Open the pom.xml file, and add a dependency on the Amazon DynamoDB module (see services/pom.xml for a full list of supported services).
      <dependency>
          <groupId>software.amazon.awssdk</groupId>
          <artifactId>dynamodb</artifactId>
          <version>2.0.0-preview-1</version>
      </dependency>
    3. Create a new class with a main method, and create a DynamoDB service client using the client builder.
      package com.example;
      
      import software.amazon.awssdk.auth.ProfileCredentialsProvider;
      import software.amazon.awssdk.regions.Region;
      import software.amazon.awssdk.services.dynamodb.DynamoDBClient;
      import software.amazon.awssdk.services.dynamodb.model.ListTablesRequest;
      
      public class Main {
      
          public static void main(String[] args) {
              // The region and credentials provider are for demonstration purposes. Feel free to use whatever region and credentials
              // are appropriate for you, or load them from the environment (See http://docs.aws.amazon.com/sdk-for-java/v2/developer-guide/setup-credentials.html)
              DynamoDBClient client = DynamoDBClient.builder()
                  .region(Region.US_EAST_1)
                  .credentialsProvider(ProfileCredentialsProvider.builder()
                                                                 .profileName("my-profile")
                                                                 .build())
                  .build();
          }
      }
    4. Make a service request and do something with the response.
      ListTablesResponse response = client.listTables(ListTablesRequest.builder()
                                                                       .limit(5)
                                                                       .build());
      response.tableNames().forEach(System.out::println);
      

New features

Non-blocking I/O

The SDK now supports truly non-blocking I/O. The 1.11.x version of the SDK already has async variants of service clients. However, they are just a wrapper around a thread pool and the blocking sync client, so they don’t provide the benefits of non-blocking I/O (high concurrency with very few threads). Due to the limitations and poor resource use of the thread-per-connection model, many customers requested support for non-blocking I/O, so we are pleased to announce first class support for non-blocking I/O in our async clients. Under the hood, we use an HTTP client built on top of Netty to make the non-blocking HTTP call.

For non-streaming operations, the interfaces are nearly identical to the sync client. The only difference is that a CompletableFuture containing the response is returned immediately instead of blocking the thread until the response is available. Exceptions are delivered by completing the future exceptionally and can be accessed using the appropriate callbacks on the future (see https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html). Here’s an example of a simple service call using the async/non-blocking client.

// Creates a default async client with credentials and regions loaded from the environment
DynamoDBAsyncClient client = DynamoDBAsyncClient.create();
CompletableFuture<ListTablesResponse> response = client.listTables(ListTablesRequest.builder()
                                                                                    .limit(5)
                                                                                    .build());
// Map the response to another CompletableFuture containing just the table names
CompletableFuture<List<String>> tableNames = response.thenApply(ListTablesResponse::tableNames);
// When future is complete (either successfully or in error) handle the response
tableNames.whenComplete((tables, err) -> {
    if (tables != null) {
        tables.forEach(System.out::println);
    } else {
        // Handle error
        err.printStackTrace();
    }
});

Streaming operations are a bit different to allow for full non-blocking I/O. For streaming inputs (like the Amazon S3 PutObject operation), you must supply an AsyncRequestProvider that can produce content incrementally. To support asynchronous backpressure (to prevent out of memory errors if the SDK can’t send data as fast as it’s being produced) the SDK uses the reactive pull model. This is based on the well-known reactive streams interfaces. In fact, the request provider is simply a Publisher of ByteBuffer chunks. The SDK will call subscribe on that Publisher and request chunks of data as its buffer allows.

Here we upload a file asynchronously using the PutObject operation in Amazon S3. We’re using an implementation of AsyncRequestProvider that produces data from a file. It handles backpressure and retries automatically, reducing the burden on the developer. We want to support common implementations and sources of data out of the box, so if you have any suggestions or requests, please let us know.

public static void main(String[] args) {
    S3AsyncClient client = S3AsyncClient.create();
    CompletableFuture<PutObjectResponse> future = client.putObject(
            PutObjectRequest.builder()
                            .bucket(BUCKET)
                            .key(KEY)
                            .build(),
            AsyncRequestProvider.fromFile(Paths.get("myfile.in"))
    );
    future.whenComplete((resp, err) -> {
        try {
            if (resp != null) {
                System.out.println(resp);
            } else {
                // Handle error
                err.printStackTrace();
            }
        } finally {
            // Lets the application shut down. Only close the client when you are completely done with it.
            FunctionalUtils.invokeSafely(client::close);
        }
    });
}

For operations that have a streaming response (such as Amazon S3 GetObject), you must provide an AsyncResponseHandler that processes and transforms the response. This response handler has callback methods for various events in the response lifecycle. It follows the same reactive streams model for handling the data. (In this case, however, it’s the reverse. The SDK is the data publisher and the response handler implementation must subscribe to the publisher and request data from it.) Consult the Java documentation for a more detailed explanation of how to implement AsyncResponseHandler. In the following example we will use a pre-canned implementation that just emits the data to a file.

public static void main(String[] args) {
    S3AsyncClient client = S3AsyncClient.create();
    final CompletableFuture<Void> future = client.getObject(
            GetObjectRequest.builder()
                            .bucket(BUCKET)
                            .key(KEY)
                            .build(),
            AsyncResponseHandler.toFile(Paths.get("myfile.out")));
    future.whenComplete((resp, err) -> {
        try {
            if (resp != null) {
                System.out.println(resp);
            } else {
                // Handle error
                err.printStackTrace();
            }
        } finally {
            // Lets the application shut down. Only close the client when you are completely done with it
            FunctionalUtils.invokeSafely(client::close);
        }
    });
}

Pluggable HTTP layer

All earlier 1.x.x versions of the SDK have had a hard dependency on the Apache HTTP client to make HTTP calls. Although this is fine for most customers, some advanced customers wanted to swap out the default HTTP implementation to be able to use a more optimized one that’s better suited for their runtime environment. The AWS SDK for Java 2.0 now fully supports a pluggable HTTP layer. The SDK continues to ship Apache as the default, but you can remove it and replace it with another implementation that conforms to the appropriate SPI.

The SDK attempts to load an HTTP implementation from the classpath using the ServiceLoader utility. This enables end users to create their own distributions of the SDK with a different default HTTP implementation (by removing the dependency on Apache’s implementation and replacing it with their own). Customers who want to avoid potentially expensive classpath scanning can set the system property software.amazon.awssdk.http.service.impl to explicitly identify the implementation to use. Finally, for customers wanting precise control over how the HTTP client is created and configured, the SDK accepts either an SdkHttpClient instance or SdkHttpClientFactory instance in each service client builder. Passing in an SdkHttpClient enables customers to share a connection pool across multiple service clients for better resource utilization.
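
As a sketch, you could set the property programmatically before creating any clients; the fully qualified class name below is only an assumption for illustration, so substitute the factory class from the HTTP module you actually depend on.

// The property names the HTTP implementation the SDK should load.
// The class name here is illustrative, not a confirmed constant.
System.setProperty("software.amazon.awssdk.http.service.impl",
        "software.amazon.awssdk.http.apache.ApacheSdkHttpClientFactory");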

Configuring HTTP settings

Due to the pluggable nature of the HTTP layer, customers who want to configure HTTP-specific settings, such as socket timeouts or proxy settings, must declare a dependency on the underlying implementation and configure the client through the interfaces that implementation provides. The following examples show how to configure the default Apache implementation.

  1. Declare a dependency on the Apache implementation in your project.
    <dependency>
        <artifactId>aws-http-client-apache</artifactId>
        <groupId>software.amazon.awssdk</groupId>
        <version>2.0.0-preview-1</version>
    </dependency>
  2. Create and configure the Apache client factory.
    ApacheSdkHttpClientFactory apacheClientFactory = 
        ApacheSdkHttpClientFactory.builder()
                                  .socketTimeout(Duration.ofSeconds(10))
                                  .connectionTimeout(Duration.ofMillis(750))
                                  .build();
  3. Use the Apache client factory to create an SDK service client.
    DynamoDBClient client =
            DynamoDBClient.builder()
                          .httpConfiguration(ClientHttpConfiguration.builder()
                                                                    .httpClientFactory(apacheClientFactory)
                                                                    .build())
                          .build();

Sharing HTTP clients

The SDK now supports sharing HTTP client instances across multiple service clients. This allows you to reuse the same connection pool for better resource utilization. To share a client across multiple SDK service clients, you must depend on a specific implementation and create an HTTP client factory for that implementation, as shown above.

  1. Create an SdkHttpClient instance using the HTTP client factory we created earlier (following only steps 1 and 2 above).
    SdkHttpClient sharedClient = apacheClientFactory.createHttpClient();
  2. Register that HTTP client instance with multiple SDK service clients. (You can even share clients across multiple services.)
    DynamoDBClient clientOne =
            DynamoDBClient.builder()
                          .httpConfiguration(ClientHttpConfiguration.builder()
                                                                    .httpClient(sharedClient)
                                                                    .build())
                          .build();
    DynamoDBClient clientTwo =
            DynamoDBClient.builder()
                          .httpConfiguration(ClientHttpConfiguration.builder()
                                                                    .httpClient(sharedClient)
                                                                    .build())
                          .build();
  3. Because the client is shared, the SDK will not close it when the service client is closed. Be sure to explicitly close it when it’s no longer needed.
    sharedClient.close();

Pluggable Async HTTP

The non-blocking async HTTP client is also pluggable, and you can configure or share it in exactly the same way as the sync client. The client and factory interfaces are SdkAsyncHttpClient and SdkAsyncHttpClientFactory, respectively. An implementation built on top of Netty is the default. To use and configure the default Netty implementation, add the following to your pom.xml file.

<dependency>
    <artifactId>aws-http-nio-client-netty</artifactId>
    <groupId>software.amazon.awssdk</groupId>
    <version>2.0.0-preview-1</version>
</dependency>

API changes

We’ve made several public API changes to improve consistency, make the SDK easier to use, strongly enforce immutability for safer concurrent programming, and remove deprecated or confusing APIs. The following are some of the bigger changes included in the AWS SDK for Java 2.0 Developer Preview.

Client Builders

In 1.11.x versions, we recently deprecated all client constructors and all mutable methods on the client in favor of the client builders. In version 2.0, the client builders are now the only way to create a service client. In addition, clients are 100 percent immutable after creation. For a cleaner programming experience, all interaction with the clients is done through interfaces.

To obtain an instance of the builder, you can use a static factory method on the client interface like this.

DynamoDBClient client = DynamoDBClient.builder().build();

If you just want a quick default client that loads the region and credentials from the environment, you can use the following. This will fail if the region or credentials are not properly set up.

DynamoDBClient client = DynamoDBClient.create();

All builders and POJOs in version 2.0 follow a new naming convention for setter methods: there is no set/with prefix, and the setter is simply the property name. Setter methods return the builder to allow method chaining.

DynamoDBClient client = DynamoDBClient.builder()
                                      .region(Region.US_EAST_1)
                                      .build();

Most advanced configuration in 1.11.x versions was HTTP-related. Due to the pluggable nature of the HTTP layer, you must now configure these settings via the HTTP implementation directly (see "New features", earlier in this post). You can change non-HTTP advanced configuration via the overrideConfiguration method.

DynamoDBClient client =
        DynamoDBClient.builder()
                      .overrideConfiguration(
                              ClientOverrideConfiguration.builder()
                                                         .retryPolicy(PredefinedRetryPolicies.NO_RETRY_POLICY)
                                                         .build())
                      .build();

Immutable POJOs

Previously, all request/response POJOs were mutable, which violated the thread safety guarantees of the client. In version 2.0, all POJOs are immutable and must be created through a builder.

ListTablesRequest request = ListTablesRequest.builder()
                                             .limit(5)
                                             .build();

You can modify POJOs only by converting the object into a builder, making the modifications, and rebuilding the object. In the example below, originalRequest is unchanged and a new instance of ListTablesRequest is created and returned.

public static ListTablesRequest updatePaginationToken(ListTablesRequest originalRequest, ListTablesResponse response) {
    return originalRequest.toBuilder()
                          .exclusiveStartTableName(response.lastEvaluatedTableName())
                          .build();
}

Due to the immutability of POJOs and the fluent setters, serialization requires some special care. Here’s an example of serializing a request object to JSON using the Jackson library, and deserializing it back into a request object.

ObjectMapper mapper = new ObjectMapper();
ListTablesRequest request = ListTablesRequest.builder()
                                             .limit(5)
                                             .build();
String serialized = mapper.writeValueAsString(request.toBuilder());

ListTablesRequest deserialized = mapper.readValue(serialized, ListTablesRequest.serializableBuilderClass())
                                       .build();

Regions

In 1.11.x versions of the SDK, there were many different classes for configuring regions or accessing region metadata (Region, Regions, s3.Region, RegionUtils, etc.). In version 2.0, these are all collapsed into a single Region class for simplicity and ease of use.

The new Region class looks similar to an enum and has constants for each region.

DynamoDBClient client = DynamoDBClient.builder()
                                      .region(Region.US_EAST_1)
                                      .build();

You can safely create a new region by using the static factory method Region.of. This is useful when the region comes from an external source, such as a configuration file, or when you need a region that the SDK doesn't know about yet.

Region newRegion = Region.of("us-east-42");

You can access metadata about the region (name, partition, or domain) via the RegionMetadata interface.

String domain = RegionMetadata.of(Region.US_EAST_1).getDomain();

You can access region metadata for a service (such as the regions in which that service is available) via the ServiceMetadata interface.

DynamoDBClient.serviceMetadata().regions().forEach(System.out::println);

Streaming

There are substantial changes in the APIs for streaming operations (such as Amazon S3 GetObject and PutObject) due to the newly added support for non-blocking I/O. Because the programming models for blocking and non-blocking I/O are so radically different, we've removed the InputStream from the request/response POJOs. Now, the sync and async clients have additional parameters on streaming operations for providing streamed content (PutObject) and for processing a streamed response (GetObject). We explained the async streaming APIs earlier in this post, so let's take a look at the sync versions.

In the following example, we’re uploading a file to S3 via the PutObject operation. Notice that we don’t set the content in the PutObjectRequest, but instead provide it as a second parameter to the putObject method. This content is provided using a new class, RequestBody, which has overloads for many common sources of data (File, String, byte array, ByteBuffer, InputStream).

S3Client client = S3Client.create();
client.putObject(PutObjectRequest.builder()
                                 .bucket(BUCKET)
                                 .key(KEY)
                                 .build(),
                 RequestBody.of(Paths.get("myfile.in")));

Next, we download the same object to a file using the GetObject operation. Again, instead of accessing the InputStream from the GetObjectResponse object, you now provide a StreamingResponseHandler implementation to process the response contents. This is a functional interface that receives the unmarshalled GetObjectResponse and the input stream as parameters and returns some transformed value (or Void). This transformed value becomes the return value of the getObject method. The interface also has a couple of convenience static factory methods that create handlers for common situations, such as writing the data to a file or to an OutputStream. We use the file variant below.

S3Client client = S3Client.create();
client.getObject(GetObjectRequest.builder()
                                 .bucket(BUCKET)
                                 .key(KEY)
                                 .build(),
                 StreamingResponseHandler.toFile(Paths.get("myfile.out")));
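
If you need a custom transformation rather than one of the pre-canned handlers, a minimal sketch looks like the following. It assumes, per the description above, that the handler's functional method receives the unmarshalled GetObjectResponse and an InputStream and is allowed to throw checked exceptions; BUCKET and KEY are placeholders, as in the earlier examples.

S3Client client = S3Client.create();
// The lambda acts as the StreamingResponseHandler: it receives the unmarshalled
// response and the content stream, and whatever it returns becomes the return
// value of getObject. Here we read only the first line of the object.
String firstLine = client.getObject(
        GetObjectRequest.builder()
                        .bucket(BUCKET)
                        .key(KEY)
                        .build(),
        (response, inputStream) -> {
            try (BufferedReader reader =
                         new BufferedReader(new InputStreamReader(inputStream, StandardCharsets.UTF_8))) {
                return reader.readLine();
            }
        });
System.out.println(firstLine);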

S3 client changes

In 1.11.x, the S3 service client is not generated like the rest of the SDK. Because of this, it's somewhat inconsistent with the other service clients in the AWS SDK for Java. It also doesn't exactly match the service's API, so it can be confusing to use another SDK's S3 client after getting used to the Java client. In version 2.0, we generate the S3 client like every other service. Play around with it and let us know what you think.
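
As a rough sketch of what that looks like in practice (assuming the generated client exposes ListBuckets like any other generated operation, with an immutable request POJO and property-name accessors as described earlier), basic usage is indistinguishable from any other version 2.0 client.

S3Client s3 = S3Client.create();
// Each operation takes an immutable request POJO built through a builder and
// returns an immutable response POJO, just like the other generated clients.
ListBucketsResponse response = s3.listBuckets(ListBucketsRequest.builder().build());
response.buckets().forEach(bucket -> System.out.println(bucket.name()));
s3.close();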

Giving feedback and contributing

You can provide feedback to us in several ways. We appreciate both positive and negative feedback.

Public feedback

GitHub issues. Customers who are comfortable giving public feedback can open a GitHub issue in the V2 repo. This is the preferred mechanism for feedback, because other customers can engage in the conversation, +1 issues, and so on. Issues you open will be evaluated and included in our roadmap for the GA launch.

Gitter channel. For informal discussion or general feedback, you can join the Gitter chat for the V2 repo. The Gitter channel is also a great place to get help with the Developer Preview, but feel free to open an issue as well.

Private feedback

Those who prefer not to give public feedback can instead email the aws-java-sdk-v2-feedback@amazon.com mailing list. This list is monitored by the AWS SDK for Java team and will not be shared with anyone outside AWS. An SDK team member may respond to ask for clarification, or to acknowledge that the feedback was received and is being evaluated.

Contributing

You can open pull requests for fixes or additions to the AWS SDK for Java 2.0 Developer Preview. All pull requests must be submitted under the Apache 2.0 license and will be reviewed by an SDK team member prior to merging. Accompanying unit tests are appreciated.