AWS Developer Blog

Automating the Deployment of Encrypted Web Services with the AWS SDK for PHP (Part 2)

by Joseph Fontes | in PHP

In the first post of this series, we focused on using Amazon Route 53 for domain registration and AWS Certificate Manager (ACM) to create SSL certificates. With our newly registered domain available, we can proceed to deploy and configure the services we need to host the www.dev-null.link website over an encrypted connection. Once complete, the infrastructure configuration will reflect the diagrams below.

Diagram 1

Diagram 2

The first diagram shows the use of Route 53 to route traffic between AWS Elastic Beanstalk environments across multiple regions. The second example adds Amazon CloudFront support to the design.

AWS Elastic Beanstalk

Our first step is to create the Elastic Beanstalk application, which will provide the necessary infrastructure to host our website. The following is the order of the methods used for the AWS Elastic Beanstalk deployment:

  • createApplication
  • createApplicationVersion
  • createConfigurationTemplate
  • createEnvironment

We start by creating the Elastic Beanstalk application.

$ebCreateApplicationData = [ 'ApplicationName' => "DevNullDemo",
                             'Description' => "Demo application for ACM deployment" ];

$ebCreateApplicationResult = $ebClient->createApplication($ebCreateApplicationData);

print_r($ebCreateApplicationResult);

Result

…
[Application] => Array
	(
	[ApplicationName] => DevNullDemo
	[Description] => Demo application for ACM deployment
…

Now we’ll create the initial version of the application and name it DevNullDemo. You can use the application archive of your choice, although a simple PHP demo site is available here.

$ebCreateAppVersionData = [ 'ApplicationName' => "DevNullDemo",
                            'VersionLabel' => 'v1',
                            'Description' => 'Initial Create',
                            'SourceBundle' => [ 'S3Bucket' => 'pub-materials',
                                                'S3Key' => 'Sample-App.zip' ] ];

$ebCreateAppVersionResult = $ebClient->createApplicationVersion($ebCreateAppVersionData);

print_r($ebCreateAppVersionResult);

Result

…
            [ApplicationVersion] => Array
                (
                    [ApplicationName] => DevNullDemo
                    [Description] => Initial Create
                    [VersionLabel] => v1
                    [SourceBundle] => Array
                        (
                            [S3Bucket] => pub-materials
                            [S3Key] => Sample-App.zip
                        )
                    [Status] => UNPROCESSED
                )
…

Next, we need to create an Elastic Beanstalk configuration template. This requires selecting an Elastic Beanstalk solution stack before calling the createConfigurationTemplate method. The solution stack is the platform on which your application runs within Elastic Beanstalk. You can list the available solution stacks by using the listAvailableSolutionStacks method.

$ebSolutionStacks = $ebClient->listAvailableSolutionStacks();
print_r($ebSolutionStacks);

Result

…
            [SolutionStacks] => Array
                (
                    [0] => 64bit Windows Server Core 2012 R2 v1.2.0 running IIS 8.5
…
                    [5] => 64bit Amazon Linux 2016.03 v2.1.6 running Java 7
…
                    [9] => 64bit Amazon Linux 2014.03 v1.1.0 running Node.js
                    [14] => 64bit Amazon Linux 2016.03 v2.1.7 running PHP 7.0
                    [15] => 64bit Amazon Linux 2015.09 v2.0.6 running PHP 5.6
                    [16] => 64bit Amazon Linux 2015.09 v2.0.4 running PHP 5.6
…
                    [14] => Array
                        (
                            [SolutionStackName] => 64bit Amazon Linux 2016.03 v2.1.7 running PHP 7.0
                            [PermittedFileTypes] => Array
                                (
                                    [0] => zip
                                )

                        )
…

For our demonstration, we’ll use the 64bit Amazon Linux 2016.03 v2.1.7 running PHP 7.0 solution stack.

$ebConfigTemplateData = [ 'ApplicationName' => "DevNullDemo",
                          'TemplateName' => 'DevNullDemoTemplate',
                          'SolutionStackName' => '64bit Amazon Linux 2016.03 v2.1.7 running PHP 7.0',
                          'Description' => 'EB Environment template for blog deployment.' ];

$ebConfigTemplateCreateResult = $ebClient->createConfigurationTemplate($ebConfigTemplateData);

print_r($ebConfigTemplateCreateResult);

Result

…
        (
            [SolutionStackName] => 64bit Amazon Linux 2016.03 v2.1.7 running PHP 7.0
            [ApplicationName] => DevNullDemo
            [TemplateName] => DevNullDemoTemplate
            [Description] => EB Environment template for blog deployment.
…

Now we can create and start the infrastructure by using the createEnvironment method. The following example sets additional options such as instance type and ACM SSL certificate. You need to replace the [CERTIFICATEARN] value with the AWS ACM certificate ARN created in part 1 of this series. You can also find this value by using the AWS ACM listCertificates method. For these examples, we’ve created a certificate across multiple regions for the host name eb.dev-null.link, in addition to the previously created www.dev-null.link certificate.

$ebCreateEnvData = [ 'ApplicationName' => "DevNullDemo",
                        'EnvironmentName' => "DevNullEnv",
                        'Description' => "Demo environment for ACM EB deployment.",
                        'TemplateName' => "DevNullDemoTemplate",
                        'VersionLabel' => 'v1',
                        'OptionSettings' => [

                                [ 'Namespace' => 'aws:elb:listener:443',
                                  'OptionName' => 'ListenerProtocol',
                                  'Value' => 'HTTPS' ],

                                [ 'Namespace' => 'aws:elb:listener:443',
                                  'OptionName' => 'SSLCertificateId',
                                  'Value' => '[CERTIFICATEARN]' ],

                                [ 'Namespace' => 'aws:elb:listener:443',
                                  'OptionName' => 'InstancePort',
                                  'Value' => '80' ],

                                [ 'Namespace' => 'aws:elb:listener:443',
                                  'OptionName' => 'InstanceProtocol',
                                  'Value' => 'HTTP' ],

                                [ 'Namespace' => 'aws:autoscaling:launchconfiguration',
                                  'OptionName' => 'InstanceType',
                                  'Value' => 't2.nano' ],

                                ],
                        'Tier' => [ 'Name' => 'WebServer',
                                    'Type' => 'Standard',
                                    'Version' => ' ' ],
                        ];

$ebCreateEnvData = $ebClient->createEnvironment($ebCreateEnvData);

print_r($ebCreateEnvData);

Result

…
            [EnvironmentName] => DevNullEnv
            [EnvironmentId] => e-fnvhjptdjd
            [ApplicationName] => DevNullDemo
            [VersionLabel] => v1
            [SolutionStackName] => 64bit Amazon Linux 2016.03 v2.1.7 running PHP 7.0
            [Description] => Demo environment for ACM EB deployment.

            [Status] => Launching
            [Health] => Grey
            [Tier] => Array
                (
                    [Name] => WebServer
                    [Type] => Standard
                    [Version] =>
                )
…

As the results show, the current status of our environment is Launching. We can periodically check the status with the describeEnvironments method.

$ebDescEnvResult = $ebClient->describeEnvironments();

foreach($ebDescEnvResult['Environments'] as $ebEnvList) {
        print "Name:\t".$ebEnvList['EnvironmentName']."\n";
        print "ID:\t".$ebEnvList['EnvironmentId']."\n";
        print "CNAME:\t".$ebEnvList['CNAME']."\n";
        print "Status:\t".$ebEnvList['Status']."\n\n";
}

Result

Name:  	DevNullEnv
ID:    	[ID]
CNAME: 	DevNullEnv.[ID].[Region].elasticbeanstalk.com
Status:	Ready

When the environment has a status of Ready, we can proceed to create the necessary DNS records. You can also check that the site is functional by pasting the CNAME value into a web browser. Be sure to record this CNAME value so you can use it later.

Demo App

You will want to repeat this process across additional AWS Regions to demonstrate latency-based DNS resolution.

Amazon Route 53

Our next step is to create a Route 53 hosted zone. This hosted zone will define a domain name (or subdomain) for which we are authoritative and, thus, allowed to create DNS records. We’ll start with the createHostedZone method.

$route53Client = $sdk->createRoute53();

$route53Data = [ 'Name' => "dev-null.link",
            'CallerReference' => "BLOGPOSTREF001",
            'HostedZoneConfig' => [ 'Comment' => "AWS SDK sample dev-null.link" ] ];

$route53Result = $route53Client->createHostedZone($route53Data);

Result

            [HostedZone] => Array
                (
                    [Id] => /hostedzone/[Amazon Route 53 Zone ID]
                    [Name] => dev-null.link.
                    [CallerReference] => BLOGPOSTREF001
                    [Config] => Array
                        (
                            [Comment] => AWS SDK sample dev-null.link
                            [PrivateZone] =>
                        )

                    [ResourceRecordSetCount] => 2
                )
…
            [DelegationSet] => Array
                (
                    [NameServers] => Array
                        (
                            [0] => ns-999.awsdns-60.net
		…
)
                )

You should copy the ID of the hosted zone from this result. You can also find a list of all hosted zone ID values by using the listHostedZones method.

$route53ListResult = $route53Client->listHostedZones();

foreach($route53ListResult['HostedZones'] as $zoneItem) {
        print "Name:\t".substr($zoneItem['Name'],0,-1)."\n";
        print "ID:\t".$zoneItem['Id']."\n\n";
}

Result

Name:  	dev-null.link
ID:    	/hostedzone/[Amazon Route 53 Zone ID]

We’ll now create a new DNS entry so that our website is visible via web browser with the eb.dev-null.link host name. For this, we need to use the CNAME value from our Elastic Beanstalk application.

$currentDate = date("r");
$hostedZoneId = "/hostedzone/[Amazon Route 53 Zone ID]";
$subDomain = "eb.dev-null.link";
$ebCname = "DevNullEnv.[EB ID].[Region].elasticbeanstalk.com";

$recordComment = "Created $subDomain record on $currentDate";

$route53RecordData = [ 'HostedZoneId' => $hostedZoneId,
                    'ChangeBatch' => [ 'Comment' => $recordComment,
                    'Changes' => [
                                       [ 'Action' => 'CREATE',
                                         'ResourceRecordSet' => [ 'Name' => $subDomain,
                                                                  'Type' => 'CNAME',
                                                                  'TTL' => 60,
                                                                  'ResourceRecords' => [ [ 'Value' => $ebCname ] ] ]
] ] ] ];

$route53ChangeResult = $route53Client->changeResourceRecordSets($route53RecordData);

print_r($route53ChangeResult);

Result

…
            [ChangeInfo] => Array
                (
                    [Id] => /change/[ChangeInfo ID]
                    [Status] => PENDING
…
                    [Comment] => Created eb.dev-null.link record on Thu, 15 Sep 2016 12:25:07 -0700
                )
…

As we can see from the result, the status of the change is PENDING. We can check the status with the getChange method using the value of the ChangeInfo ID.

$route53ChangeData = [ 'Id' => "/change/[ChangeInfo ID]" ];
$route53ChangeResult = $route53Client->getChange($route53ChangeData);
print_r($route53ChangeResult);

Result

…
            [ChangeInfo] => Array
                (
                    [Id] => /change/<ChangeInfo ID>
                    [Status] => INSYNC
…
                    [Comment] => Created eb.dev-null.link record on Thu, 15 Sep 2016 12:25:07 -0700
                )
…

Now that our change has the status of INSYNC, we can view the secure URL in our browser window with the URL https://eb.dev-null.link/.
Demo App

Deploying Across Multiple Regions

This deployment is now serving our website across an encrypted connection in a single AWS Region. For those who would like to use multiple regions, we can expand our current configuration. An AWS ACM certificate is needed in each AWS Region used for the deployment. Because we’ll be using CloudFront, we have to ensure that a certificate is created in the us-east-1 region because CloudFront will source the available AWS ACM certificates from there. You can reference the previous blog post for instructions on creating an AWS ACM certificate in additional regions. Next, run the Elastic Beanstalk creation methods shown earlier in each additional region where you want to deploy the application. Be sure to record the CNAME value for each environment.

After we have all of the necessary Elastic Beanstalk environments running, we need to delete the Route 53 resource record for eb.dev-null.link so that we can replace it with a latency-based record set.

$recordComment = "Deleted $subDomain record on $currentDate";

$route53RecordData = [ 'HostedZoneId' => $hostedZoneId,
                    'ChangeBatch' => [ 'Comment' => $recordComment,
                    'Changes' => [
                                       [ 'Action' => 'DELETE',
                                         'ResourceRecordSet' => [ 'Name' => $subDomain,
                                                                  'Type' => 'CNAME',
                                                                  'TTL' => 60,
                                                                  'ResourceRecords' => [ [ 'Value' => $ebCname ] ] ]
] ] ] ];

$route53ChangeResult = $route53Client->changeResourceRecordSets($route53RecordData);

You might notice that the instructions to delete the record are almost identical to those used to create it. They even use the same method, changeResourceRecordSets; only the Action value changes from CREATE to DELETE.

Result

…
            [ChangeInfo] => Array
                (
                    [Id] => /change/[ChangeInfo ID]
                    [Status] => PENDING

                    [Comment] => Deleted eb.dev-null.link record on Thu, 15 Sep 2016 14:58:51 -0700
                )
…

Our next step is to add the latency-based routing rules. This example provides the CNAME of the Elastic Beanstalk environment via the describeEnvironments method.

$currentDate = date("r");
$hostedZoneId = "HOSTED ZONE ID";
$region = "[Region]"; // the AWS Region of this environment, e.g., us-east-1

$ebDescEnvData = [ 'EnvironmentNames' => [ 'DevNullEnvProd' ] ];
$ebDescEnvResult = $ebClient->describeEnvironments($ebDescEnvData);

$ebCname = $ebDescEnvResult['Environments'][0]['CNAME'];

$recordComment = "Created www record on $currentDate";

$route53RecordData = [ 'HostedZoneId' => $hostedZoneId,
                    'ChangeBatch' => [ 'Comment' => $recordComment,
                    'Changes' => [
                                       [ 'Action' => 'CREATE',
                                         'ResourceRecordSet' => [
                                                'Name' => "eb.dev-null.link",
                                                'Type' => 'CNAME',
                                                'TTL' => 60,
                                                'Region' => $region,
                                                'SetIdentifier' => str_replace("-","",$region),
                                                'ResourceRecords' => [ [ 'Value' => $ebCname ], ],
                                                ],
                                        ],
                                ],
                        ],
                ];

$route53ChangeResult = $route53Client->changeResourceRecordSets($route53RecordData);

Result

…
            [ChangeInfo] => Array
                (
                    [Id] => /change/[ChangeInfo ID]
                    [Status] => PENDING
…

Demo App

In the Route 53 console, we can see two CNAME records that now resolve the host name eb.dev-null.link. When a user visits the website, Route 53 returns the record whose Region has the lowest latency from the user's location.

Amazon CloudFront

To enhance the user experience, we’ll now configure and deploy Amazon CloudFront, a content delivery network service. This AWS service accelerates delivery of the media served with our web application. Each CloudFront deployment is composed of a distribution, and each distribution has one or more origins. An origin defines the mapping of a URL to a particular destination. The host name www.dev-null.link will resolve to our CloudFront distribution, which will then serve pages from the backend eb.dev-null.link load-balanced site. We first create our distribution with the createDistribution method. The value of $cfCreateDistData can be found in this GitHub Gist.

$cfCreateDistResult = $cfClient->createDistribution($cfCreateDistData);
print_r($cfCreateDistResult);

Result

Aws\Result Object
(
    [data:Aws\Result:private] => Array
        (
            [Distribution] => Array
                (
                    [Id] => [CF ID Value]
                    [DomainName] => [CloudFront ID].cloudfront.net
…

Once complete, we need to save the value of DomainName from the result returned. Next, we’ll create a new Route 53 CNAME record that points www.dev-null.link to our CloudFront distribution.

$subDomain = "www.dev-null.link";
$cfCname = "[CloudFront ID].cloudfront.net";

$recordComment = "Created $subDomain record on $currentDate";

$route53RecordData = [ 'HostedZoneId' => $hostedZoneId,
                    'ChangeBatch' => [ 'Comment' => $recordComment,
                    'Changes' => [
                                       [ 'Action' => 'CREATE',
                                         'ResourceRecordSet' => [ 'Name' => $subDomain,
                                                                  'Type' => 'CNAME',
                                                                  'TTL' => 60,
                                                                  'ResourceRecords' => [ [ 'Value' => $cfCname ] ] ]
] ] ] ];

$route53ChangeResult = $route53Client->changeResourceRecordSets($route53RecordData);

Result

Aws\Result Object
(
    [data:Aws\Result:private] => Array
        (
            [ChangeInfo] => Array
                (
                    [Status] => PENDING
…

Once complete, we can test the new deployment by navigating to the URL (https://www.dev-null.link/) in a web browser.

Conclusion

With our infrastructure configuration completed, we now have a globally load balanced web application that’s accessible via encrypted communications. We’ve shown we can use the AWS SDK for PHP to automate these deployments, which provides the agility to reproduce these environments for customers on demand. Next, we’ll continue this series by reviewing deployments that use Amazon S3 for static content hosting, as well as deployments that use Elastic Load Balancing (including the Application Load Balancer) with Amazon EC2 instances.

Using the AWS SDK for Go Encryption Client

by Ben Powell | in Go

Overview

The AWS SDK for Go released the encryption client last year, and some of our customers have asked us how to use it. We’re very excited to show you some examples in this blog post. Before we get into the examples, let’s look at what client-side encryption is and why you might want to use it.

Client-side encryption is the act of encrypting or decrypting on the client’s side and not relying on a service to do the encryption for you. This has many added benefits, including enabling you to choose what to use to encrypt your data. It also enables extra security so that only those who have the master key can decrypt the data.

The crypto client has three major components: the key wrap handler, the cipher builder, and the client. We use the key wrap handler to generate and encrypt the IV and content encryption key. We then use those keys with the cipher builder to build a new cipher. Lastly, we use all of these parts to create a client. To learn more about this process, see envelope encryption.

Prerequisite

To run these examples, we need

  • An AWS KMS encryption key
  • An Amazon S3 bucket named bar

Encryption and Decryption

In our implementation, we wanted to provide interoperability across all SDKs and to give customers an easy way to extend the s3crypto package. Let’s first get into an example of putting a simple “hello world” object into S3.


import (
	"bytes"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/kms"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/s3/s3crypto"
)

arn := "arn to our key"
sess := session.New(&aws.Config{
	Region: aws.String("us-east-1"),
})
// This is our key wrap handler, used to generate cipher keys and IVs for
// our cipher builder. Using an IV allows more “spontaneous” encryption.
// The IV makes it more difficult for hackers to use dictionary attacks.
// The key wrap handler behaves as the master key. Without it, you can’t
// encrypt or decrypt the data.
keywrap := s3crypto.NewKMSKeyGenerator(kms.New(sess), arn)
// This is our content cipher builder, used to instantiate new ciphers
// that enable us to encrypt or decrypt the payload.
builder := s3crypto.AESGCMContentCipherBuilder(keywrap)
// Let's create our crypto client!
client := s3crypto.NewEncryptionClient(sess, builder)

key := "foo"
bucket := "bar"
input := &s3.PutObjectInput{
	Bucket: &bucket,
	Key:    &key,
	Body:   bytes.NewReader([]byte("Hello world!")),
}

_, err := client.PutObject(input)
// What to expect as errors? You can expect any sort of S3 errors, http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html.
// The s3crypto client can also return some errors:
//  * MissingCMKIDError - when using AWS KMS, the user must specify their key's ARN
if err != nil {
	return err
}

Now that wasn’t too hard! It looks almost identical to the S3 PutObject! Let’s move on to an example of decryption.


sess := session.New(&aws.Config{
	Region: aws.String("us-east-1"),
})
client := s3crypto.NewDecryptionClient(sess)

key := "foo"
bucket := "bar"
input := &s3.GetObjectInput{
	Bucket: &bucket,
	Key:    &key,
}

result, err := client.GetObject(input)
// Aside from the S3 errors, here is a list of decryption client errors:
//   * InvalidWrapAlgorithmError - returned on an unsupported Wrap algorithm
//   * InvalidCEKAlgorithmError - returned on an unsupported CEK algorithm
//   * V1NotSupportedError - the SDK doesn’t support v1 because security is an issue for AES ECB
// These errors don’t necessarily mean there’s something wrong. They just tell us we couldn’t decrypt some data.
// Users can choose to log this and then continue decrypting the data that they can, or simply return the error.
if err != nil {
	return err
}

// Let's read the whole body from the response
b, err := ioutil.ReadAll(result.Body)
if err != nil {
	return err
}
fmt.Println(string(b))

As the code shows, there’s no difference between using the Amazon S3 client’s GetObject versus the s3crypto.DecryptionClient.GetObject.

Lost or Deleted Master Key

If you lose or delete your master key, there’s no way to decrypt your data. The beauty of client-side encryption is that the master key is never stored with your data. This allows you to specify who can view your data.

Supported Algorithms

The AWS SDK for Go currently supports AWS KMS for key wrapping and AES GCM as a content cipher. However, some users might not want to use AES GCM or KMS for their ciphers. The SDK allows any user to specify any cipher as long as it satisfies our interfaces. With that said, the goal of this crypto client is to allow interoperability between the crypto clients of other SDKs and enable easy extensibility. Please let us know in the comments how you’re using or extending the crypto client.

The documentation for Amazon S3 Encryption Client can be found here.

Check out other SDKs that support the Amazon S3 Encryption Client
AWS SDK for C++
AWS SDK for Java
AWS SDK for Ruby

AWS SDK for Go Adds Error Code Constants

by Jason Del Ponte | in Go

The AWS SDK for Go v1.6.19 release adds generated constants for all modeled service response error codes. These constants improve discoverability of the error codes that a service can return, and reduce the chance of typos that can cause errors to be handled incorrectly.

You can find the new error code constants within each of the SDK’s service client packages that are prefixed with “ErrCode”.  For example, the “NoSuchBucket” error code returned by Amazon S3 API requests can be found in the s3 package as: “ErrCodeNoSuchBucket”.

Here is an example of how to use the error code constants with the Amazon S3 GetObject API response.

result, err := svc.GetObject(&s3.GetObjectInput{
    Bucket: aws.String("myBucket"),
    Key:    aws.String("myKey"),
})
if err != nil {
    if aerr, ok := err.(awserr.Error); ok {
        // Special handling for bucket and key errors
        switch aerr.Code() {
        case s3.ErrCodeNoSuchBucket:
            // Handle the bucket not existing
            fmt.Println("bucket does not exist.")
        case s3.ErrCodeNoSuchKey:
            // Handle the key not existing
            fmt.Println("key does not exist.")
        }
    }
    return err
}

We’re working to include error codes for all services. Let us know if you find additional error codes to include in the AWS SDK for Go.

Java SDK Bundled Dependency

by Kyle Thomson | in Java

The AWS SDK for Java depends on a handful of third-party libraries, most notably Jackson for JSON processing and the Apache HttpClient for communication over the wire. For most customers, resolving these as part of their standard Maven dependency resolution is perfectly fine; Maven automatically pulls in the required versions or uses existing versions if they’re already specified in the project.

However, the AWS SDK for Java requires certain minimum versions to function properly and some customers are unable to change the version of the third-party libraries they use. Maybe it’s because another dependency requires a specific version, or there are breaking changes between third-party versions that large portions of the code base relies on. Whatever the case may be, these version conflicts can create problems when you try to use the AWS SDK for Java.

We’re pleased to introduce the AWS SDK for Java Bundle dependency. This new module, which you can include in your Maven project, contains all of the SDK clients for all services and all of the third-party libraries in a single JAR. The third-party libraries are “relocated” to new package names to avoid class conflicts with a different version of the same third-party library on a project’s classpath. To use this version of the SDK, simply include the following Maven dependency in your project.

<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk-bundle</artifactId>
  <version>${aws.sdk.version}</version>
</dependency>

Of course, because we relocated the third-party libraries, they’re no longer available to use under their original import names – unless the project explicitly adds those libraries as dependencies. For example, if a project relied on the AWS SDK for Java bringing in the Joda Time library, when the project switches to use the bundle dependency it also needs to add a specific dependency for Joda Time.

The relocated classes are intended for internal use only by the AWS SDK. It is strongly recommended that you do not refer to classes under com.amazonaws.thirdparty.* in your own code. The third-party libraries included in the bundled dependency are moved to the com.amazonaws.thirdparty.* package.

Because the bundle dependency includes all of the dependent libraries, it’s going to be a larger binary to pull down when dependencies get resolved (about 50 MB at the time of this writing, but this will increase with the introduction of each new service and each new third-party library). In addition, if a project explicitly imports one of the third-party libraries that the SDK includes then classes will be duplicated (albeit in different packages). This increases the memory requirement of an application. For these reasons, we recommend that you only use the bundled dependency if you have a need to.

If a project has the combination of a version clash and a limited total project size (e.g., AWS Lambda limits package size to 50MB), the bundled dependency might not be the right solution. Instead, you can build your own version of the AWS SDK for Java from the open sourced code on GitHub. For example, if you needed to resolve a conflict only for the Joda Time library, you can include a build configuration like the following in your maven project:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.4.3</version>
  <configuration>
    <artifactSet>
      <includes>
        <include>joda-time:joda-time</include>
        <include>com.amazonaws:*</include>
      </includes>
    </artifactSet>
    <relocations>
      <relocation>
        <pattern>org.joda</pattern>
        <shadedPattern>com.amazonaws.thirdparty.joda</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>

Although this means you need to build your own version of the SDK and install it into your own repository, it gives you great flexibility for the third-party libraries and/or services you want to include. Check out the Maven Shade Plugin for more details about how it works.

We hope this new module is useful for projects where there’s a dependency clash. As always, please leave your comments or feedback below!

CHANGELOG for the AWS SDK for Java

by Dongie Agnir | in Java

We are happy to announce that beginning with version 1.11.82, the source and ZIP distributions of the AWS SDK for Java now include a CHANGELOG.md file that lists the most notable changes for each release.

In the past, changes for each release of the AWS SDK for Java were published to the AWS Release Notes website, but this approach had some drawbacks. Customers wishing to view the set of changes for multiple versions on the website needed to run a search for each version they were interested in. Many customers acquire the source code through our GitHub repository, so viewing the release notes meant potentially opening a browser and navigating away from the code itself. Finally, although rare, sometimes there’s a delay between the release of a new version of the SDK and the availability of the release notes.

By implementing a changelog file, we hope to address these problems in a way that is simple and consistent with many other open source software projects, including other AWS SDKs like JavaScript and .NET. New changes are always prepended to the changelog file in a consistent format, so viewing the changes for multiple versions is now a breeze. The changelog is made available with the source and ZIP distributions, enabling customers to quickly access changes without opening a browser. As an added bonus, because it’s a simple text file, the changes up to the current version can easily be made available for viewing offline. Finally, the file is always updated along with the SDK source, so the list of changes is available as soon as the source code is available.

We hope that with this change, customers will find it easier than ever to keep up to date with the exciting changes being introduced in the AWS SDK for Java. As always, please let us know what you think in the comments below.

AWS Step Functions Fluent Java API

by Andrew Shore | on | in Java | | Comments

AWS Step Functions, a new service that launched at re:Invent 2016, makes it easier to build complex, distributed applications in the cloud. Using this service, you can create state machines that can connect microservices and activities into a visual workflow. State machines support branching, parallel execution, retry/error handling, synchronization (via Wait states), and task execution (via AWS Lambda or an AWS Step Functions Activity).

The Step Functions console provides excellent support for visualizing and debugging a workflow and for creating state machine descriptions. State machines are described in a JSON document, as described in detail here. Although the console has a great editor for building these documents visually, you might want to write state machines in your IDE via a native Java API. Today, we’re launching a fluent builder API to create state machines in a readable, compact way. This new API is included in the AWS SDK for Java.

 

To get started, create a new Maven project and declare a dependency on the aws-java-sdk-stepfunctions client.

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-stepfunctions</artifactId>
    <version>1.11.86</version>
</dependency>

Let’s take a look at some examples. We’ll go through each blueprint available in the console and translate that to the Java API.

Hello World

The following is a JSON representation of a simple state machine that consists of a single task state. The task calls out to a Lambda function (identified by ARN), passing the input of the state machine to the function. When the function completes successfully, the state machine terminates with the same output as the function.
JSON

{
  "Comment" : "A Hello World example of the Amazon States Language using an AWS Lambda Function",
  "StartAt" : "Hello World",
  "States" : {
    "Hello World" : {
      "End" : true,
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Type" : "Task"
    }
  }
}

Java API
Let’s rewrite this simple state machine using the new Java API and transform it to JSON. Be sure you include the static import for the fluent API methods.


package com.example;

import static com.amazonaws.services.stepfunctions.builder.StepFunctionBuilder.*;
import com.amazonaws.services.stepfunctions.builder.StateMachine;

public class StepFunctionsDemo {

    public static void main(String[] args) {
        final StateMachine stateMachine = stateMachine()
                .comment("A Hello World example of the Amazon States Language using an AWS Lambda Function")
                .startAt("Hello World")
                .state("Hello World", taskState()
                        .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                        .transition(end()))
                .build();
        System.out.println(stateMachine.toPrettyJson());
    }
}

Let’s take a closer look at the previous example. The first method you call when constructing a state machine is stateMachine(). This returns a mutable StateMachine.Builder that you can use to configure all properties of the state machine. Here, we’re adding a comment describing the purpose of the state machine, indicating the initial state via the startAt() method, and defining that state via the state() method. Each state machine must have at least one state and must have a valid path to a terminal state (that is, a state that causes the state machine to end). In this example, we have a single TaskState (configured via the taskState() method) that also serves as the terminal state via the End transition (configured via transition(end())).

Once you’ve configured the state machine to your liking, call the build() method on the StateMachine.Builder to produce an immutable StateMachine object. This object can then be transformed into JSON (see toJson() and toPrettyJson()) or passed directly to the CreateStateMachine API in the Java SDK (see below).

The following code creates the state machine defined previously via the service client. The withDefinition() method can take either the raw JSON string or a StateMachine object. For more information about getting started with the Java SDK, see our AWS Java Developer Guide.

final AWSStepFunctions client = AWSStepFunctionsClientBuilder.defaultClient();
client.createStateMachine(new CreateStateMachineRequest()
                                          .withName("Hello World State Machine")
                                          .withRoleArn("arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME")
                                          .withDefinition(stateMachine));

 

Wait State

The following state machine demonstrates various uses of the Wait state type, which can be used to wait for a given amount of time or until a specific time. Wait states can dynamically wait based on input using the TimestampPath and SecondsPath properties, which are JSON reference paths to a timestamp or an integer, respectively. The Next property identifies the state to transition to after the wait is complete.
JSON

{
  "Comment" : "An example of the Amazon States Language using wait states",
  "StartAt" : "First State",
  "States" : {
    "First State" : {
      "Next" : "Wait Using Seconds",
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Type" : "Task"
    },
    "Wait Using Seconds" : {
      "Seconds" : 10,
      "Next" : "Wait Using Timestamp",
      "Type" : "Wait"
    },
    "Wait Using Timestamp" : {
      "Timestamp" : "2017-01-16T19:18:55.103Z",
      "Next" : "Wait Using Timestamp Path",
      "Type" : "Wait"
    },
    "Wait Using Timestamp Path" : {
      "TimestampPath" : "$.expirydate",
      "Next" : "Wait Using Seconds Path",
      "Type" : "Wait"
    },
    "Wait Using Seconds Path" : {
      "SecondsPath" : "$.expiryseconds",
      "Next" : "Final State",
      "Type" : "Wait"
    },
    "Final State" : {
      "End" : true,
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Type" : "Task"
    }
  }
}

Java API
Again, we call the stateMachine() method to begin constructing the state machine. Our start-at state is a Task state that transitions to the Wait Using Seconds state. The Wait Using Seconds state is configured to wait for 10 seconds before proceeding to the Wait Using Timestamp state. Notice that we use the waitState() method to obtain an instance of WaitState.Builder, which we then use to configure the state. The waitFor() method can accept different types of wait strategies (Seconds, SecondsPath, Timestamp, TimestampPath). Each strategy has a corresponding method in the fluent API (seconds(), secondsPath(), timestamp(), and timestampPath(), respectively). Both the SecondsPath and TimestampPath strategies require a valid JsonPath that references data in the input to the state; that input is then used to determine how long to wait.

final Date waitUsingTimestamp =
        Date.from(LocalDateTime.now(ZoneOffset.UTC).plusMinutes(15).toInstant(ZoneOffset.UTC));
final StateMachine stateMachine = stateMachine()
        .comment("An example of the Amazon States Language using wait states")
        .startAt("First State")
        .state("First State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(next("Wait Using Seconds")))
        .state("Wait Using Seconds", waitState()
                .waitFor(seconds(10))
                .transition(next("Wait Using Timestamp")))
        .state("Wait Using Timestamp", waitState()
                .waitFor(timestamp(waitUsingTimestamp))
                .transition(next("Wait Using Timestamp Path")))
        .state("Wait Using Timestamp Path", waitState()
                .waitFor(timestampPath("$.expirydate"))
                .transition(next("Wait Using Seconds Path")))
        .state("Wait Using Seconds Path", waitState()
                .waitFor(secondsPath("$.expiryseconds"))
                .transition(next("Final State")))
        .state("Final State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(end()))
        .build();
System.out.println(stateMachine.toPrettyJson());
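To make the reference-path idea concrete, here is a toy sketch in plain Java (a Map stands in for the state’s JSON input; this only illustrates how a path like "$.expiryseconds" selects a field, and is not the service’s actual resolver):

```java
import java.util.Map;

public class ReferencePathDemo {
    public static void main(String[] args) {
        // Toy stand-in for the state input document {"expiryseconds": 10}
        Map<String, Object> input = Map.of("expiryseconds", 10);

        // "$.expiryseconds" references the top-level field named "expiryseconds"
        String referencePath = "$.expiryseconds";
        String field = referencePath.substring(2); // strip the "$." prefix

        int waitSeconds = (int) input.get(field);
        System.out.println("Wait for " + waitSeconds + " seconds");
    }
}
```

With SecondsPath set to "$.expiryseconds", the service performs an analogous lookup against the real input document at execution time.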

Retry Failure

Retriers are a mechanism to retry certain types of states on a given set of error codes. They define both the condition on which to retry (via ErrorEquals) and the backoff behavior and maximum number of retry attempts. At the time of this post, they can be used only with Task states and Parallel states. In the following state machine, the Task state has three retriers. The first retrier retries a custom error code named HandledError that might be thrown from the Lambda function. The initial delay of the first retry attempt is one second (as defined by IntervalSeconds), and the maximum number of retry attempts is five. The BackoffRate multiplies the previous delay to determine the next one; for example, the delays for the first retrier would be 1, 2, 4, 8, and 16 seconds. The second retrier uses a predefined error code, States.TaskFailed, which matches whenever the task fails for any reason. A full list of predefined error codes can be found here. Finally, the last retrier uses the special error code States.ALL to retry on everything else. If you use the States.ALL error code, it must appear in the last retrier and must be the only code present in ErrorEquals.
JSON

{
  "Comment" : "A Retry example of the Amazon States Language using an AWS Lambda Function",
  "StartAt" : "Hello World",
  "States" : {
    "Hello World" : {
      "End" : true,
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Retry" : [ {
        "ErrorEquals" : [ "HandledError" ],
        "IntervalSeconds" : 1,
        "MaxAttempts" : 5,
        "BackoffRate" : 2.0
      }, {
        "ErrorEquals" : [ "States.TaskFailed" ],
        "IntervalSeconds" : 30,
        "MaxAttempts" : 2,
        "BackoffRate" : 2.0
      }, {
        "ErrorEquals" : [ "States.ALL" ],
        "IntervalSeconds" : 5,
        "MaxAttempts" : 5,
        "BackoffRate" : 2.0
      } ],
      "Type" : "Task"
    }
  }
}

Java API

Let’s see what the previous example looks like in the Java API. Here we use the retrier() method to configure a Retrier.Builder. The errorEquals() method can take one or more error codes that indicate what this retrier handles. The second retrier uses a constant defined in the ErrorCodes class, which contains all predefined error codes supported by the States language. The last retrier uses a special method, retryOnAllErrors(), to indicate that it handles any other error. This is equivalent to errorEquals("States.ALL") but easier to read and remember. Again, the “retry all” retrier must come last, or a validation exception is thrown.

final StateMachine stateMachine = stateMachine()
        .comment("A Retry example of the Amazon States Language using an AWS Lambda Function")
        .startAt("Hello World")
        .state("Hello World", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(end())
                .retrier(retrier()
                                 .errorEquals("HandledError")
                                 .intervalSeconds(1)
                                 .maxAttempts(5)
                                 .backoffRate(2.0))
                .retrier(retrier()
                                 .errorEquals(ErrorCodes.TASK_FAILED)
                                 .intervalSeconds(30)
                                 .maxAttempts(2)
                                 .backoffRate(2.0))
                .retrier(retrier()
                                 .retryOnAllErrors()
                                 .intervalSeconds(5)
                                 .maxAttempts(5)
                                 .backoffRate(2.0))
        )
        .build();

System.out.println(stateMachine.toPrettyJson());
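The backoff arithmetic described earlier is easy to verify: the delay before retry attempt n is IntervalSeconds × BackoffRate^(n − 1). A small stand-alone sketch (not part of the SDK) that prints the schedule for the first retrier:

```java
public class BackoffScheduleDemo {
    public static void main(String[] args) {
        int intervalSeconds = 1;   // IntervalSeconds of the first retrier
        double backoffRate = 2.0;  // BackoffRate
        int maxAttempts = 5;       // MaxAttempts

        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            // Delay before attempt n = IntervalSeconds * BackoffRate^(n - 1)
            long delay = (long) (intervalSeconds * Math.pow(backoffRate, attempt - 1));
            System.out.println("Attempt " + attempt + ": wait " + delay + "s");
        }
    }
}
```

Running this prints waits of 1, 2, 4, 8, and 16 seconds, matching the first retrier in the JSON above.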

Catch Failure

Catchers are a similar error-handling mechanism. Like retriers, they can be defined to handle certain error codes that can be thrown from a state. Catchers define a state transition that occurs when an error code matches the ErrorEquals list; the transition state can handle the recovery steps needed for that particular failure scenario. Much like retriers, ErrorEquals can contain one or more error codes (either custom or predefined), and States.ALL is a special catch-all code that, if present, must appear in the last catcher.
JSON

{
  "Comment" : "A Catch example of the Amazon States Language using an AWS Lambda Function",
  "StartAt" : "Hello World",
  "States" : {
    "Hello World" : {
      "End" : true,
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Catch" : [ {
        "Next" : "Custom Error Fallback",
        "ErrorEquals" : [ "HandledError" ]
      }, {
        "Next" : "Reserved Type Fallback",
        "ErrorEquals" : [ "States.TaskFailed" ]
      }, {
        "Next" : "Catch All Fallback",
        "ErrorEquals" : [ "States.ALL" ]
      } ],
      "Type" : "Task"
    },
    "Custom Error Fallback" : {
      "End" : true,
      "Result" : "This is a fallback from a custom lambda function exception",
      "Type" : "Pass"
    },
    "Reserved Type Fallback" : {
      "End" : true,
      "Result" : "This is a fallback from a reserved error code",
      "Type" : "Pass"
    },
    "Catch All Fallback" : {
      "End" : true,
      "Result" : "This is a fallback from a reserved error code",
      "Type" : "Pass"
    }
  }
}

Java API

To configure a catcher, first call the catcher() method to obtain a Catcher.Builder. The first catcher handles the custom error code HandledError and transitions to the Custom Error Fallback state. The second handles the predefined States.TaskFailed error code and transitions to the Reserved Type Fallback state. Finally, the last catcher handles all remaining errors and transitions to the Catch All Fallback state. As with retriers, there is a special method, catchAll(), that configures the catcher to handle all error codes; it is preferred over errorEquals("States.ALL").

final StateMachine stateMachine = stateMachine()
        .comment("A Catch example of the Amazon States Language using an AWS Lambda Function")
        .startAt("Hello World")
        .state("Hello World", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(end())
                .catcher(catcher()
                                 .errorEquals("HandledError")
                                 .transition(next("Custom Error Fallback")))
                .catcher(catcher()
                                 .errorEquals(ErrorCodes.TASK_FAILED)
                                 .transition(next("Reserved Type Fallback")))
                .catcher(catcher()
                                 .catchAll()
                                 .transition(next("Catch All Fallback"))))
        .state("Custom Error Fallback", passState()
                .result("\"This is a fallback from a custom lambda function exception\"")
                .transition(end()))
        .state("Reserved Type Fallback", passState()
                .result("\"This is a fallback from a reserved error code\"")
                .transition(end()))
        .state("Catch All Fallback", passState()
                .result("\"This is a fallback from a reserved error code\"")
                .transition(end()))
        .build();

System.out.println(stateMachine.toPrettyJson());

Parallel State

You can use a Parallel state to concurrently execute multiple branches. Branches are themselves pseudo state machines and can contain multiple states (and even nested Parallel states). The Parallel state waits until all branches have terminated successfully before transitioning to the next state. Parallel states support retriers and catchers in the event that execution of a branch fails.
JSON

{
  "Comment": "An example of the Amazon States Language using a parallel state to execute two branches at the same time.",
  "StartAt": "Parallel",
  "States": {
    "Parallel": {
      "Type": "Parallel",
      "Next": "Final State",
      "Branches": [
        {
          "StartAt": "Wait 20s",
          "States": {
            "Wait 20s": {
              "Type": "Wait",
              "Seconds": 20,
              "End": true
            }
          }
        },
        {
          "StartAt": "Pass",
          "States": {
            "Pass": {
              "Type": "Pass",
              "Next": "Wait 10s"
            },
            "Wait 10s": {
              "Type": "Wait",
              "Seconds": 10,
              "End": true
            }
          }
        }
      ]
    },
    "Final State": {
      "Type": "Pass",
      "End": true
    }
  }
}

Java API

To create a Parallel state in the Java API, call the parallelState() method to obtain an instance of ParallelState.Builder. Next, add branches of execution via the branch() method. Each branch must specify StartAt (the name of the branch’s initial state) and contain at least one state.

final StateMachine stateMachine = stateMachine()
        .comment(
                "An example of the Amazon States Language using a parallel state to execute two branches at the same time.")
        .startAt("Parallel")
        .state("Parallel", parallelState()
                .transition(next("Final State"))
                .branch(branch()
                                .startAt("Wait 20s")
                                .state("Wait 20s", waitState()
                                        .waitFor(seconds(20))
                                        .transition(end())))
                .branch(branch()
                                .startAt("Pass")
                                .state("Pass", passState()
                                        .transition(next("Wait 10s")))
                                .state("Wait 10s", waitState()
                                        .waitFor(seconds(10))
                                        .transition(end()))))
        .state("Final State", passState()
                .transition(end()))
        .build();

System.out.println(stateMachine.toPrettyJson());

Choice State

A Choice state adds branching logic to a state machine. It consists of one or more choice rules and, optionally, a default state transition if no choice matches. Each choice rule represents a condition and a transition to enact if that condition evaluates to true. Conditions can be simple (StringEquals, NumericLessThan, etc.) or composite, combining conditions with And, Or, and Not.
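For reference, a composite choice rule nests simple conditions under the boolean operator, with Next at the rule level. A sketch of what one such rule looks like in the States Language JSON (the state and field names here are illustrative):

```json
{
  "And" : [ {
    "Variable" : "$.foo",
    "NumericGreaterThanEquals" : 1
  }, {
    "Variable" : "$.foo",
    "NumericLessThan" : 10
  } ],
  "Next" : "In Range State"
}
```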

In the following example, we have a choice state with two choices, both using the NumericEquals condition, and a default transition if neither choice rule matches.
JSON

{
  "Comment" : "An example of the Amazon States Language using a choice state.",
  "StartAt" : "First State",
  "States" : {
    "First State" : {
      "Next" : "Choice State",
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Type" : "Task"
    },
    "Choice State" : {
      "Default" : "Default State",
      "Choices" : [ {
        "Variable" : "$.foo",
        "NumericEquals" : 1,
        "Next" : "First Match State"
      }, {
        "Variable" : "$.foo",
        "NumericEquals" : 2,
        "Next" : "Second Match State"
      } ],
      "Type" : "Choice"
    },
    "First Match State" : {
      "Next" : "Next State",
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:OnFirstMatch",
      "Type" : "Task"
    },
    "Second Match State" : {
      "Next" : "Next State",
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:OnSecondMatch",
      "Type" : "Task"
    },
    "Default State" : {
      "Cause" : "No Matches!",
      "Type" : "Fail"
    },
    "Next State" : {
      "End" : true,
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Type" : "Task"
    }
  }
}

Java API

To add a Choice state to your state machine, use the choiceState() method to obtain an instance of ChoiceState.Builder. You can add choice rules via the choice() method on the builder. For simple conditions, there are several overloads for each comparison operator (LTE, LT, EQ, GT, GTE) and data type (String, Numeric, Timestamp, Boolean). In this example, we’re using the eq() method, whose first argument is the JsonPath expression referencing the input data to apply the condition to; the second argument differs depending on the type of data you’re comparing against. Here we’re using an integer for numeric comparison. Each choice rule must have a transition that occurs if the condition evaluates to true.

final StateMachine stateMachine = stateMachine()
        .comment("An example of the Amazon States Language using a choice state.")
        .startAt("First State")
        .state("First State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(next("Choice State")))
        .state("Choice State", choiceState()
                .choice(choice()
                                .transition(next("First Match State"))
                                .condition(eq("$.foo", 1)))
                .choice(choice()
                                .transition(next("Second Match State"))
                                .condition(eq("$.foo", 2)))
                .defaultStateName("Default State"))
        .state("First Match State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:OnFirstMatch")
                .transition(next("Next State")))
        .state("Second Match State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:OnSecondMatch")
                .transition(next("Next State")))
        .state("Default State", failState()
                .cause("No Matches!"))
        .state("Next State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(end()))
        .build();

System.out.println(stateMachine.toPrettyJson());

You can find more references and tools for building state machines in the Step Functions documentation, and post your questions and feedback to the Step Functions Developers Forum.

AWS SDK for Go Update Needed for Go 1.8

by Jason Del Ponte | on | in Go | | Comments

The AWS SDK for Go has been updated for Go 1.8. This update fixes an issue in which some API operations failed with a connection reset by peer error or a service error, preventing the requests from being made. If you’re using Go 1.8 with a version of the SDK earlier than v1.6.3, you need to update to at least v1.6.3 to take advantage of this fix.

GitHub issue #984 revealed that the bug was caused by the way the SDK constructed its HTTP request body. The SDK relied on undocumented behavior of Go 1.7 and earlier, in which http.Request automatically determined whether to send the request’s body based on whether the body was empty.

Go addressed the issue for most use cases in 1.8rc2, but some APIs such as the Amazon Simple Storage Service (Amazon S3) CopyObject API were still affected.

The SDK’s fix for this issue takes advantage of Go 1.8’s new type, http.NoBody. The SDK uses this value to ensure the HTTP request doesn’t contain a body when none is expected. Another option for a fix was to set Request.Body to nil, but this would break backward compatibility because the Request.Body value is accessible.

See #991, #984, and golang/go#18257 for more information.

Thank you to all who discovered, reported, and helped us resolve this issue.

Deploy an Existing ASP.NET Core Web API to AWS Lambda

by Norm Johanson | on | in .NET | | Comments

In the previous post, we talked about the new ASP.NET Core Web API blueprint for AWS Lambda, and the Amazon.Lambda.AspNetCoreServer NuGet package that made it possible to run the ASP.NET Core Web API through Lambda. But what if you already have an existing ASP.NET Core Web API that you want to try as a serverless application? You can do this by following these steps:

  • Add the Amazon.Lambda.AspNetCoreServer NuGet package.
  • Add a Lambda function and bootstrap the ASP.NET Core framework.
  • Add the Amazon.Lambda.Tools NuGet package to enable the toolkit’s deployment features.
  • Add a serverless.template file to define Amazon API Gateway.
  • Deploy the project.

Let’s take a deeper look at each step.

Setting Up the Lambda Function

The first step is to add the Amazon.Lambda.AspNetCoreServer NuGet package that bridges the communication between Amazon API Gateway and the ASP.NET Core framework.

After you add the package, add a new class named LambdaFunction and have it extend from Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction. You have to implement the abstract method Init to bootstrap the ASP.NET Core framework.


public class LambdaFunction : Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction
{
    protected override void Init(IWebHostBuilder builder)
    {
        builder
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseStartup<Startup>()
            .UseApiGateway();
    }
}

Enable Tool Support in the AWS Toolkit for Visual Studio

In order for the AWS Toolkit for Visual Studio to recognize the project as a Lambda project, you have to add the Amazon.Lambda.Tools NuGet package. This package isn’t used as part of the runtime of the function and is added as a build tool.


{  
  "dependencies": {
    ...

    "Amazon.Lambda.AspNetCoreServer": "0.8.4-preview1",
    "Amazon.Lambda.Tools": {
      "type": "build",
      "version": "1.1.0-preview1"
    }
  },

  ...
}

To also enable the integration with the .NET Core CLI, list the Amazon.Lambda.Tools NuGet package in the tools section in the project.json file.


{
  ...

  "tools": {
    "Microsoft.AspNetCore.Server.IISIntegration.Tools": "1.0.0-preview2-final",
    "Amazon.Lambda.Tools": "1.1.0-preview1"
  },

  ...
}

Configuring Amazon API Gateway

At this point, you could right-click the project and deploy it to Lambda, but it wouldn’t be fronted by API Gateway exposing the function as an HTTP REST API. The easiest way to do that is to add a serverless.template file to the project and deploy the project as an AWS Serverless project.

Add a serverless.template file to the project by right-clicking the project and choosing Add, AWS Serverless Template.

add-serverless

The default serverless.template file contains one function definition configured to be exposed by API Gateway using proxy integration, so all requests will go to that function. This is exactly what you need for an ASP.NET Core Web API project. The only thing that needs to be updated is the handler field. The format for the handler field is <assembly-name>::<namespace>.LambdaFunction::FunctionHandlerAsync. The FunctionHandlerAsync method is inherited from the base class of our LambdaFunction class.


{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Transform" : "AWS::Serverless-2016-10-31",
  "Description" : "Starting template for an AWS Serverless Application.",
  "Parameters" : {
  },
  "Resources" : {
    "DefaultFunction" : {
      "Type" : "AWS::Serverless::Function",
      "Properties": {
        "Handler": "ExistingWebAPI::ExistingWebAPI.LambdaFunction::FunctionHandlerAsync",
        "Runtime": "dotnetcore1.0",
        "CodeUri": "",
        "Description": "Default function",
        "MemorySize": 256,
        "Timeout": 30,
        "Role": null,
        "Policies": [ "AWSLambdaFullAccess" ],
        "Events": {
          "PutResource": {
            "Type": "Api",
            "Properties": {
              "Path": "/{proxy+}",
              "Method": "ANY"
            }
          }
        }
      }
    }
  },
  "Outputs" : {
  }
}

Deploy

Now you can deploy the ASP.NET Core Web API to either AWS Elastic Beanstalk or Lambda. The deployment process works in the same way that we’ve shown in previous blog posts about AWS Serverless projects.

deploy-selector

And that’s all you have to do to deploy an existing ASP.NET Core Web API project to Lambda.

Visit our .NET Core Lambda GitHub repository to let us know what you think of running ASP.NET Core applications as AWS Serverless applications, and to report any issues you might have. This will help us take the Amazon.Lambda.AspNetCoreServer NuGet package out of preview status.

Running Serverless ASP.NET Core Web APIs with AWS Lambda

by Norm Johanson | on | in .NET | | Comments

One of the coolest things we demoed at our recent AWS re:Invent talk about .NET Core support for AWS Lambda was how to run an ASP.NET Core Web API with Lambda. We did this with the NuGet package Amazon.Lambda.AspNetCoreServer (which is currently in preview) and Amazon API Gateway. Today we’ve released a new AWS Serverless blueprint that you’ll see in Visual Studio or with our Yeoman generator that makes it easy to set up an ASP.NET Core Web API project as a Lambda project.

Blueprint Picker

How Does It Work?

Depending on your platform, a typically deployed ASP.NET Core application is fronted by either IIS or NGINX, which forwards requests to the ASP.NET Core web server named Kestrel. Kestrel marshals the request into the ASP.NET Core hosting framework.

Normal Flow

When running an ASP.NET Core application as an AWS Serverless application, IIS is replaced with API Gateway, and Kestrel is replaced with a Lambda function contained in the Amazon.Lambda.AspNetCoreServer package, which marshals the request into the ASP.NET Core hosting framework.

Serverless Flow

The Blueprint

The blueprint creates a project that’s very similar to the one you would get if you selected the .NET Core ASP.NET Core Web Application and chose the Web API template. The key difference is instead of having a Program.cs file that contains a Main function bootstrapping the ASP.NET Core framework, the blueprint has LambdaEntryPoint.cs that bootstraps the ASP.NET Core framework.


public class LambdaEntryPoint : Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction
{
    protected override void Init(IWebHostBuilder builder)
    {
        builder
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseStartup<Startup>()
            .UseApiGateway();
    }
}

The actual Lambda function comes from the base class. The function handler for the Lambda function is set in the AWS CloudFormation template named serverless.template, which will be in the format <assembly-name>::<namespace>.LambdaEntryPoint::FunctionHandlerAsync.

The blueprint also has LocalEntryPoint.cs that works in the same way as the original Program.cs file, enabling you to run and develop your application locally and then deploy it to Lambda.

The remainder of the project’s files are the usual ones you would find in an ASP.NET Core application. The blueprint contains two Web API controllers. The first is the example ValuesController, which is found in the starter ASP.NET Core Web API project. The other controller is S3ProxyController, which demonstrates how to use HTTP GET, PUT, and DELETE requests to a controller and uses the AWS SDK for .NET to make the calls to an Amazon S3 bucket. The name of the S3 bucket to use is obtained from the Configuration object, which means you can set the bucket in the appsettings.json file for local development.


{
  ...

  "AppS3Bucket": "ExampleBucketName"
}

The Configuration object is built from the appsettings.json files and from environment variables.


public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);

    builder.AddEnvironmentVariables();
    Configuration = builder.Build();
}

When the application is deployed, serverless.template is used to create the bucket and then pass the bucket’s name to the Lambda function as an environment variable.


...

"Get" : {
  "Type" : "AWS::Serverless::Function",
  "Properties": {
    "Handler": "AspNetCoreWithLambda::AspNetCoreWithLambda.LambdaEntryPoint::FunctionHandlerAsync",
    "Runtime": "dotnetcore1.0",
    "CodeUri": "",
    "MemorySize": 256,
    "Timeout": 30,
    "Role": null,
    "Policies": [ "AWSLambdaFullAccess" ],
    "Environment" : {
      "Variables" : {
        "AppS3Bucket" : { "Fn::If" : ["CreateS3Bucket", {"Ref":"Bucket"}, { "Ref" : "BucketName" } ] }
      }
    },
    "Events": {
      "PutResource": {
        "Type": "Api",
        "Properties": {
          "Path": "/{proxy+}",
          "Method": "ANY"
        }
      }
    }
  }
},

...

Logging

ASP.NET Core introduced a new logging framework. To help integrate with the logging framework, we’ve also released the NuGet package Amazon.Lambda.Logging.AspNetCore. This logging provider allows any code that uses the ILogger interface to record log messages to the associated Amazon CloudWatch log group for the Lambda function. When used outside of a Lambda function, the log messages are written to the console.

The blueprint enables the provider in Startup.cs, where other services are configured.


public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddLambdaLogger(Configuration.GetLambdaLoggerOptions());
    app.UseMvc();
}

The following snippet shows the GetLambdaLoggerOptions call, which reads from the Configuration object the settings for which messages to write to CloudWatch Logs. The appsettings.json file in the blueprint configures logging so that messages coming from classes under the Microsoft namespace are written only if they’re informational level or above. For all other log messages, debug level messages and above are written.


{
  "Lambda.Logging": {
    "LogLevel": {
      "Default": "Debug",
      "Microsoft": "Information"
    }
  },

  ...
}

For more information about this package, see the GitHub repository.

Deployment

Deploying the ASP.NET Core Web API works exactly as we showed you in the previous post about the AWS Serverless projects.

Deploy from Solution Explorer

Once deployed, a single Lambda function and an API Gateway REST API are configured to send all requests to the Lambda function. Then the Lambda function uses the ASP.NET Core framework to route to the correct Web API controller. You can test the deployment by accessing the two controllers using the AWS Serverless URL found in the CloudFormation stack view.

  • &lt;aws-serverless-url&gt;/api/values – Example controller
  • &lt;aws-serverless-url&gt;/api/s3proxy – S3 proxy controller

Feedback

We’re very excited about running ASP.NET Core applications on AWS Lambda. As you can imagine, the option of running the ASP.NET Core framework on top of Lambda opens lots of possibilities. The Amazon.Lambda.AspNetCoreServer package is in preview while we explore those possibilities. I highly encourage .NET developers to check out this blueprint and the Amazon.Lambda.AspNetCoreServer package and let us know on our GitHub repository or our new Gitter channel what you think and how we can continue to improve the library.

Using the AWS SDK for Go’s Regions and Endpoints Metadata

by Jason Del Ponte | on | in Go | | Comments

In release v1.6.0 of the AWS SDK for Go, we added Regions and Endpoints metadata to the SDK. This feature enables you to easily enumerate the metadata and discover Regions, Services, and Endpoints. You can find this feature in the github.com/aws/aws-sdk-go/aws/endpoints package.

The endpoints package provides a simple interface to get a service’s endpoint URL and enumerate the Region metadata. The metadata is grouped into partitions. Each partition is a group of AWS Regions such as AWS Standard, AWS China, and AWS GovCloud (US).

Resolving Endpoints

The SDK automatically uses the endpoints.DefaultResolver function when setting the SDK’s default configuration. You can resolve endpoints yourself by calling the EndpointFor methods in the endpoints package.

// Resolve endpoint for S3 in us-west-2
resolver := endpoints.DefaultResolver()
endpoint, err := resolver.EndpointFor(endpoints.S3ServiceID, endpoints.UsWest2RegionID)
if err != nil {
        fmt.Println("failed to resolve endpoint", err)
        return
}
 
fmt.Println("Resolved URL:", endpoint.URL)

If you need custom endpoint resolution logic, implement the endpoints.Resolver interface and set the value on aws.Config.EndpointResolver. The SDK then uses your resolver whenever it resolves a service endpoint.

The following example creates a Session that is configured so that Amazon S3 service clients are constructed with a custom endpoint.

// Fall back to the SDK's default resolver for every service other than S3.
defaultResolver := endpoints.DefaultResolver()
s3CustResolverFn := func(service, region string, optFns ...func(*endpoints.Options)) (endpoints.ResolvedEndpoint, error) {
        if service == "s3" {
                return endpoints.ResolvedEndpoint{
                        URL:           "s3.custom.endpoint.com",
                        SigningRegion: "custom-signing-region",
                }, nil
        }

        return defaultResolver.EndpointFor(service, region, optFns...)
}
sess := session.Must(session.NewSessionWithOptions(session.Options{
        Config: aws.Config{
               Region:           aws.String("us-west-2"),
               EndpointResolver: endpoints.ResolverFunc(s3CustResolverFn),
        },
}))

Partitions

The return value of the endpoints.DefaultResolver function can be cast to the endpoints.EnumPartitions interface. This gives you access to the slice of partitions the SDK uses, so you can enumerate the metadata for each partition.

// Iterate through all partitions printing each partition's ID.
resolver := endpoints.DefaultResolver()
partitions := resolver.(endpoints.EnumPartitions).Partitions()
 
for _, p := range partitions {
        fmt.Println("Partition:", p.ID())
}

In addition to the list of partitions, the endpoints package also includes a getter function for each partition group. These utility functions enable you to enumerate a specific partition without having to cast and enumerate over all the default resolver’s partitions.

partition := endpoints.AwsPartition()
region := partition.Regions()[endpoints.UsWest2RegionID]
 
fmt.Println("Services in region:", region.ID())
for id := range region.Services() {
        fmt.Println(id)
}

Once you have a Region or Service value, you can call ResolveEndpoint on it. This provides a filtered view of the Partition when resolving endpoints.

Check out the AWS SDK for Go repo for more examples. Let us know in the comments what you think of the endpoints package.