AWS Developer Blog

Receiving Amazon SNS Messages in PHP

by Jonathan Eskew | in PHP

A little over a year ago, we announced a new feature in the AWS SDK for PHP that allowed customers to validate inbound SNS messages. In the latest version of the SDK, this functionality is now available in its own package, one that does not depend on the SDK, so it’s simpler to listen to SNS topics with lightweight web services. In this blog post, I’ll show you how to use the Amazon SNS message validator for PHP to parse messages in a single-file web application.

About SNS

Amazon Simple Notification Service (Amazon SNS) is a fast and fully managed push messaging service that can deliver messages over email, SMS, mobile push, and HTTP/HTTPS endpoints.

With Amazon SNS, you can set up topics to publish custom messages to subscribed endpoints. SNS messages are used by many of the other AWS services to communicate information asynchronously about your AWS resources. Some examples include:

  • Configuring Amazon Glacier to notify you when a retrieval job is complete.
  • Configuring AWS CloudTrail to notify you when a new log file has been written.
  • Configuring Amazon Elastic Transcoder to notify you when a transcoding job changes status (e.g., from “Progressing” to “Complete”).

Receiving SNS Messages and Verifying Their Signature

Using the SNS Message Validator’s Message class, you can easily parse raw POST data from SNS:

<?php

require 'path/to/vendor/autoload.php';

$message = \Aws\Sns\Message::fromRawPostData();
echo $message->get('Message');

Amazon SNS sends different types of messages, including SubscriptionConfirmation, Notification, and UnsubscribeConfirmation. The formats of these messages are described in the Appendix: Message and JSON Formats section of the Amazon SNS Developer Guide.

Messages from Amazon SNS are signed. As a best practice, you should use the MessageValidator class to verify the signature and ensure a message was sent from Amazon SNS.

<?php

use Aws\Sns\Message;
use Aws\Sns\MessageValidator;

$message = Message::fromRawPostData();

// Validate the message
$validator = new MessageValidator();
$validator->validate($message);

Instances of Aws\Sns\MessageValidator have two methods for validating messages, both of which take an instance of Aws\Sns\Message as their only argument. validate (shown above) throws an Aws\Sns\Exception\InvalidSnsMessageException if the message cannot be verified. isValid returns a Boolean: true for valid messages and false for invalid ones.
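For example, here is a minimal sketch of an endpoint that relies on validate and rejects anything that fails verification (assuming the package is installed through Composer and autoloaded, as in the first snippet):

<?php

require 'path/to/vendor/autoload.php';

use Aws\Sns\Message;
use Aws\Sns\MessageValidator;
use Aws\Sns\Exception\InvalidSnsMessageException;

$message = Message::fromRawPostData();
$validator = new MessageValidator();

try {
    // validate() throws when the signature cannot be verified
    $validator->validate($message);
} catch (InvalidSnsMessageException $e) {
    // isValid($message) would simply have returned false here
    http_response_code(404);
    die;
}

// Signature verified; it is now safe to act on the message contents
echo $message->get('Message');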

Confirming a Subscription to a Topic

In order for an HTTP(S) endpoint to receive messages, it must first be subscribed to an SNS topic. Subscriptions are confirmed over HTTP(S), so you’ll need to create and deploy an endpoint before you set up a subscription. SubscriptionConfirmation messages provide a URL you can use to confirm the subscription.

$message = \Aws\Sns\Message::fromRawPostData();

// Validate the message
$validator = new MessageValidator();
if ($validator->isValid($message)) {
    file_get_contents($message->get('SubscribeURL'));
}

Handling Notifications

Let’s put it all together and add some extra code for handling both notifications and subscription control messages.

<?php

if ('POST' !== $_SERVER['REQUEST_METHOD']) {
    http_response_code(405);
    die;
}

require 'path/to/vendor/autoload.php';

try {
    $message = \Aws\Sns\Message::fromRawPostData();
    $validator = new \Aws\Sns\MessageValidator();
    $validator->validate($message);

    if (in_array($message['Type'], ['SubscriptionConfirmation', 'UnsubscribeConfirmation'])) {
        file_get_contents($message['SubscribeURL']);
    }

    $log = new SplFileObject('../messages.log', 'a');
    $log->fwrite($message['Message'] . "\n");
} catch (Exception $e) {
    http_response_code(404);
    die;
}

Conclusion

As you can see, receiving, verifying, and handling Amazon SNS messages is simple. Setting up your application to receive SNS messages will allow you to create applications that can handle asynchronous communication from AWS services and other parts of your application.

Building a serverless developer authentication API in Java using AWS Lambda, Amazon DynamoDB, and Amazon Cognito – Part 2

In part 1 of this blog post, we showed you how to leverage the AWS Toolkit for Eclipse to quickly develop Java functions for AWS Lambda. We then set up a skeleton project and the structure to handle custom objects sent to your Java function.

In part 2 of this blog post, we will implement the handleRequest function that will handle the logic of interacting with Amazon DynamoDB and then generate an OpenID token by using the Amazon Cognito API.

We will now implement the handleRequest function within the AuthenticateUser class. Our final handleRequest function looks like the following:

@Override
public AuthenticateUserResponse handleRequest(Object input, Context context) {
      
    AuthenticateUserResponse authenticateUserResponse = new AuthenticateUserResponse();
    @SuppressWarnings("unchecked")
    LinkedHashMap<String, String> inputHashMap = (LinkedHashMap<String, String>) input;
    User user = authenticateUser(inputHashMap);
    if(user!=null){
        authenticateUserResponse.setUserId(user.getUserId());
        authenticateUserResponse.setStatus("true");
        authenticateUserResponse.setOpenIdToken(user.getOpenIdToken());
    }else{
        authenticateUserResponse.setUserId(null);
        authenticateUserResponse.setStatus("false");
        authenticateUserResponse.setOpenIdToken(null);
    }
        
    return authenticateUserResponse;
}

We will need to implement the authenticateUser function for this Lambda Java function to compile properly. Implement the function as shown here:


public User authenticateUser(LinkedHashMap<String, String> input){
    User user=null;
    	
    String userName = input.get("userName");
    String passwordHash = input.get("passwordHash");
    	
    try{
        AmazonDynamoDBClient client = new AmazonDynamoDBClient();
        client.setRegion(Region.getRegion(Regions.US_EAST_1));
        DynamoDBMapper mapper = new DynamoDBMapper(client);
	    	
        user = mapper.load(User.class, userName);
	    	
        if(user!=null){
            if(user.getPasswordHash().equalsIgnoreCase(passwordHash)){
                String openIdToken = getOpenIdToken(user.getUserId());
                user.setOpenIdToken(openIdToken);
                return user;
            }
        }
    }catch(Exception e){
        System.out.println(e.toString());
    }
    return user;
}

In this function, we use the DynamoDB Mapper to check whether a row with the provided username attribute exists in the User table. Make sure you set the region in your code. If a row with that username exists, the code makes a simple check against the provided password hash value. If the hashes match, we authenticate the user and then follow the developer authentication flow to get an OpenID token from the Amazon Cognito Identity Broker. The token is passed back to the client as an attribute of the AuthenticateUserResponse object. For more information about the authentication flow for developer authenticated identities, see the Amazon Cognito documentation here. For this Java Lambda function, we will be using the enhanced authflow.

Before we can get an OpenID token, we need to create an identity pool in Amazon Cognito and then register our developer authentication provider with this identity pool. When you create the identity pool, you can keep the default roles provided by the console. In the Authentication Providers field, in the Custom section, type login.yourname.services.

After the pool is created, implement the getOpenIdToken function as shown:


private String getOpenIdToken(Integer userId){
    	
    AmazonCognitoIdentityClient client = new AmazonCognitoIdentityClient();
    GetOpenIdTokenForDeveloperIdentityRequest tokenRequest = new GetOpenIdTokenForDeveloperIdentityRequest();
    tokenRequest.setIdentityPoolId("us-east-1:6dbccdfd-9444-4d4c-9e1b-5d1139cbe863");
    	
    HashMap<String, String> map = new HashMap<String, String>();
    map.put("login.dhruv.services", userId.toString());
    	
    tokenRequest.setLogins(map);
    tokenRequest.setTokenDuration(new Long(10001));
    	
    GetOpenIdTokenForDeveloperIdentityResult result = client.getOpenIdTokenForDeveloperIdentity(tokenRequest);
    	
    String token = result.getToken();
    	
    return token;
}

This code calls the GetOpenIdTokenForDeveloperIdentity function in the Amazon Cognito API. You need to pass in your Amazon Cognito identity pool ID along with the unique identity provider string you entered in the Custom field earlier. You also have to provide a unique identifier for the user so Amazon Cognito can map that to its Cognito ID. This unique ID is usually the user ID you use internally, but it can be any other unique attribute that allows both your authentication back end and Amazon Cognito to identify a user.

In part 3 of this blog, we will test the Java Lambda function locally using JUnit. Then we will upload and test the function on Lambda.

Building a serverless developer authentication API in Java using AWS Lambda, Amazon DynamoDB, and Amazon Cognito – Part 1

Most of us are aware of the support for a developer authentication backend in Amazon Cognito and how one can use a custom backend service to authenticate and authorize users to access AWS resources using temporary credentials. In this blog, we will create a quick serverless backend authentication API written in Java and deployed on Lambda. You can mirror this workflow in your current backend authentication service, or you can use this service as it is.

The blog will cover the following topics in a four-part series.

  1. Part 1: How to get started with Java development on Lambda using the AWS Toolkit for Eclipse.
  2. Part 1: How to use Java Lambda functions for custom events.
  3. Part 2: How to create a simple authentication microservice that checks users against an Amazon DynamoDB table.
  4. Part 2: How to integrate with the Amazon Cognito Identity Broker to get an OpenID token.
  5. Part 3: How to locally test your Java Lambda functions through JUnit before uploading to Lambda.
  6. Part 4: How to hook up your Lambda function to Amazon API Gateway.

The Lambda workflow support in the latest version of the AWS Toolkit for Eclipse makes it really simple to create Java functions for Lambda. If you haven’t already downloaded Eclipse, you can get it here. We assume you have an AWS account with at least one IAM user with an Administrator role (that is, the user should belong to an IAM group with administrative permissions).

Important: We strongly recommend you do not use your root account credentials to create this microservice.

After you have downloaded Eclipse and set up your AWS account and IAM user, install the AWS Toolkit for Eclipse. When prompted, restart Eclipse.

We will now create an AWS Lambda project. In the Eclipse toolbar, click the yellow AWS icon, and choose New AWS Lambda Java Project.

On the wizard page, for Project name, type AuthenticateUser. For Package Name, type aws.java.lambda.demo (or any package name you want). For Class Name, type AuthenticateUser. For Input Type, choose Custom Object. If you would like to try other predefined events that Lambda supports in Java, such as an S3Event or DynamoDBEvent, see these samples in our documentation here. For Output Type, choose a custom object, which we will define in the code later. The output type should be a Java class, not a primitive type such as int or float.

Choose Finish.

In Package Explorer, you will now see a Readme file in the project structure. You can close the Readme file for now. The structure below shows the main class, AuthenticateUser, which is your Lambda handler class. It’s where you will be implementing the handleRequest function. Later on, we will implement the unit tests in JUnit by modifying the AuthenticateUserTest class to allow local testing of your Lambda function before uploading.

Make sure you have added the AWS SDK for Java Library in your build path for the project. Before we implement the handleRequest function, let’s create a Data class for the User object that will hold our user data stored in a DynamoDB table called User. You will also need to create a DynamoDB table called User with some test data in it. To create a DynamoDB table, follow the tutorial here. We will choose the username attribute as the hash key. We do not need to create any indexes for this table. Create a new User class in the package aws.java.lambda.demo and then copy and paste the following code:

Note: For this exercise, we will create all our resources in the us-east-1 region. This region, along with the ap-northeast-1 (Tokyo) and eu-west-1 (Ireland) regions, supports Amazon Cognito, AWS Lambda, and API Gateway.

package aws.dhruv.lambda.services;

import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAttribute;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;

@DynamoDBTable(tableName="User")
public class User {
	
    private String userName;
    private Integer userId;
    private String passwordHash;
    private String openIdToken;
	
    @DynamoDBHashKey(attributeName="username")
    public String getUserName() { return userName; }
    public void setUserName(String userName) { this.userName = userName; }
	
    @DynamoDBAttribute(attributeName="userid")
    public Integer getUserId() { return userId; }
    public void setUserId(Integer userId) { this.userId = userId; }
	
    @DynamoDBAttribute(attributeName="passwordhash")
    public String getPasswordHash() { return passwordHash; }
    public void setPasswordHash(String passwordHash) { this.passwordHash = passwordHash; }
	
    @DynamoDBAttribute(attributeName="openidtoken")
    public String getOpenIdToken() { return openIdToken; }
    public void setOpenIdToken(String openIdToken) { this.openIdToken = openIdToken; }
	
    public User(String userName, Integer userId, String passwordHash, String openIdToken) {
        this.userName = userName;
        this.userId = userId;
        this.passwordHash = passwordHash;
        this.openIdToken = openIdToken;
    }
	
    public User(){ }	
}

You will see we are leveraging annotations so we can use the advanced features provided by the DynamoDB Mapper. The AWS SDK for Java provides DynamoDBMapper, a high-level interface that automates the process of getting your objects into Amazon DynamoDB and back out again. For more information about annotating your Java classes for use in DynamoDB, see the developer guide here.
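For instance, here is a minimal sketch of saving and loading an item with the mapper. It reuses the User table, region, and sample values from this post, and assumes the same DynamoDB imports used in the authenticateUser function:

AmazonDynamoDBClient client = new AmazonDynamoDBClient();
client.setRegion(Region.getRegion(Regions.US_EAST_1));
DynamoDBMapper mapper = new DynamoDBMapper(client);

// Persist a test user; the annotations on the User class map its fields to table attributes
User testUser = new User("Dhruv", 123, "8743b52063cd84097a65d1633f5c74f5", null);
mapper.save(testUser);

// Load the same item back by its hash key (the "username" attribute)
User loaded = mapper.load(User.class, "Dhruv");
System.out.println(loaded.getUserId());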

Our Java function will ingest a custom object from API Gateway and, after execution, return a custom response object. Our custom input is a JSON POST body that will be invoked through an API Gateway endpoint. A sample request will look like the following:

        {
          "userName": "Dhruv",
          "passwordHash": "8743b52063cd84097a65d1633f5c74f5"
        } 

The data is passed in as a LinkedHashMap of key-value pairs to your handleRequest function. As you will see later, you will need to cast your input properly to extract the values of the POST body. Your custom response object looks like the following:

        {
          "userId": "123",
          "status": "true",
          "openIdToken": "eyJraWQiOiJ1cy1lYXN0LTExIiwidHlwIjoiSldTIiwiYWxnIjoiUl"	 
        }

We need to create an implementation of the Response class in our AuthenticateUser class as follows.

public static class AuthenticateUserResponse{
		
    protected Integer userId;
    protected String openIdToken;
    protected String status;
		
    public Integer getUserId() { return userId; }
    public void setUserId(Integer userId) { this.userId = userId; }

    public String getOpenIdToken() { return openIdToken; }
    public void setOpenIdToken(String openIdToken) { this.openIdToken = openIdToken; }
		
    public String getStatus() {	return status; }
    public void setStatus(String status) { this.status = status; }			
}

Now that we have the structure in place to handle a custom event, in part 2 of this blog post, we will finish the implementation of the handleRequest function that will do user validation and interact with Amazon Cognito.

S3 workflows simplified with Java 8 streams

by Jonathan Breedlove | in Java

Of the many changes brought about with Java 8, the Stream API is perhaps one of the most exciting.  Java 8 streams, which are unrelated to Java’s I/O streams, allow you to perform a series of mutations and transformations against a collection of items.  You can think of a stream as a form of data pipeline, where a collection of data is passed as input and a series of defined steps are performed against that data.  Streams can produce a result in the form of a new collection or directly perform actions against each element of the stream.  Streams can be created from several sources: directly specified values, an existing collection, or a Spliterator (via a utility method).
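For example, each of those creation routes is a one-liner (a small sketch using only core Java 8 classes from java.util and java.util.stream):

// From directly specified values
Stream<String> fromValues = Stream.of("alpha", "beta", "gamma");

// From an existing collection
List<String> names = Arrays.asList("alpha", "beta", "gamma");
Stream<String> fromCollection = names.stream();

// From a Spliterator, using the StreamSupport utility class
Stream<String> fromSpliterator = StreamSupport.stream(names.spliterator(), false);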

The following are some very simple examples of how streams can be used with the Amazon S3 Java client.

Creating a Stream from results

Iterable<S3ObjectSummary> objectSummaries = S3Objects.inBucket(s3Client, "myBucket");
Stream<S3ObjectSummary> objectStream = StreamSupport.stream(objectSummaries.spliterator(), false);

We first make a call through the S3 client to grab a paginated Iterable of result object summaries from the objects in a bucket.  This transparently handles iteration across multiple pages by making additional calls to the service, as needed, to retrieve subsequent result pages.  Now it’s time to create a stream to process our results.  Although Java 8 does not provide a direct way to generate a stream from an Iterable, it does provide a utility class (StreamSupport) with methods to help you do this.  We’re able to use this to pass in a Spliterator (also new to Java 8, it helps facilitate parallelized iteration) grabbed off the Iterable to generate a stream.

Finding the total size of all objects in a bucket

This is a simple example of how using Java 8 streams can reduce the verbosity of an operation.  It’s not uncommon to want to compute the total size of all objects in a bucket, and historically one might iterate through the results and keep a running total of the object sizes.

long totalBucketSize = 0L;
for (S3ObjectSummary summary : objectSummaries) {
    totalBucketSize += summary.getSize();
}

Using a stream gives you a neat alternative that does the same thing.

long totalBucketSize = objectStream.mapToLong(obj -> obj.getSize()).sum();

Calling mapToLong on our stream produces a LongStream generated from the results of applying a function (in this case, one that simply grabs the object size from each summary) which allows us to perform subsequent stream operations.  Calling sum (which is a stream terminal reduction operation) returns the sum of all elements of the stream.

Delete all bucket objects older than a specified date

You might regularly run a job that goes through the objects in a bucket and deletes those that were last modified before some date.  Again, streams allow us to perform this operation concisely.  Here we’ll say that we want to delete any objects that were last modified over 30 days ago.

Calendar c = Calendar.getInstance();
c.add(Calendar.DAY_OF_MONTH, -30);
Date cutoffDate = c.getTime();

objectStream.filter(obj -> obj.getLastModified().before(cutoffDate))
    .forEach(obj -> s3Client.deleteObject("myBucket", obj.getKey()));

First we generate our target cutoff date.  In this example we call filter on our stream to filter the stream elements down to those matching our condition.  At that point, calling forEach (which is itself a stream terminal operation) executes a function against the remaining stream elements.  In this case it makes a call to the S3 client to delete each object.

This could also be easily modified to simply return a List of these old objects to pass around.

List<S3ObjectSummary> oldObjects = objectStream
			.filter(obj -> obj.getLastModified().before(cutoffDate))
			.collect(Collectors.toList());

Conclusion

I hope these simple examples give you some ideas for using streams in your application.  Are you using Java 8 streams with the AWS SDK for Java?  Let us know in the comments!

Listing Cmdlets by Service

by Steve Roberts | in .NET

In an earlier post, we discussed a new cmdlet (Get-AWSCmdletName) you can use to navigate your way around the cmdlets in the AWS Tools for Windows PowerShell module. This cmdlet enables you to search for cmdlets that implement a service API by answering questions like, "Which cmdlet implements the Amazon EC2 ‘RunInstances’ API?" It can also do a simple translation of AWS CLI commands you might have found in example documentation to give you the equivalent cmdlet.
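As a quick refresher, those two lookups look something like the following sketch. The parameter names -ApiOperation and -AwsCliCommand are taken from that earlier post; check Get-Help Get-AWSCmdletName if your module version differs:

# Which cmdlet implements the Amazon EC2 'RunInstances' API?
PS C:\> Get-AWSCmdletName -ApiOperation RunInstances

# Translate an AWS CLI command from example documentation to the equivalent cmdlet
PS C:\> Get-AWSCmdletName -AwsCliCommand "aws ec2 describe-instances"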

In the 3.1.21.0 version of the tools, we extended this cmdlet. Now you can use it to get a list of all cmdlets belonging to a service based on words that appear in the service name or the prefix code we use to namespace the cmdlets by service. You could, of course, do something similar by using the PowerShell Get-Command cmdlet and supplying the -Noun parameter with a value that is the prefix with a wildcard (for example, Get-Command -Module AWSPowerShell -Noun EC2*). The problem here is that you need to know the prefix before you can run the command. Although we try to choose memorable and easily guessable prefixes, sometimes the association can be subtle and searching based on one or more words in the service name is more useful.

To list cmdlets by service you supply the -Service parameter. The value for this parameter is always treated as a case-insensitive regular expression and is used to match against cmdlets using both their ‘namespace’ prefix and full name. For example:

PS C:\> Get-AWSCmdletName -Service compute

CmdletName                      ServiceOperation             ServiceName
----------                      ----------------             -----------
Add-EC2ClassicLinkVpc           AttachClassicLinkVpc         Amazon Elastic Compute Cloud
Add-EC2InternetGateway          AttachInternetGateway        Amazon Elastic Compute Cloud
Add-EC2NetworkInterface         AttachNetworkInterface       Amazon Elastic Compute Cloud
Add-EC2Volume                   AttachVolume                 Amazon Elastic Compute Cloud
...
Stop-EC2SpotInstanceRequest     CancelSpotInstanceRequests   Amazon Elastic Compute Cloud
Unregister-EC2Address           DisassociateAddress          Amazon Elastic Compute Cloud
Unregister-EC2Image             DeregisterImage              Amazon Elastic Compute Cloud
Unregister-EC2PrivateIpAddress  UnassignPrivateIpAddresses   Amazon Elastic Compute Cloud
Unregister-EC2RouteTable        DisassociateRouteTable       Amazon Elastic Compute Cloud

When a match for the parameter value is found, the output contains a collection of PSObject instances. Each instance has members detailing the cmdlet name, service operation, and service name. You can see the assigned prefix code in the cmdlet name. If the search term fails to match any supported service, you’ll see an error message in the output.
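Because these are ordinary PSObject instances, you can pipe them to other cmdlets as usual. For example, a small sketch that narrows the compute results down to the Attach* operations, using the property names shown in the tables above:

PS C:\> Get-AWSCmdletName -Service compute |
            Where-Object { $_.ServiceOperation -like "Attach*" } |
            Select-Object CmdletName, ServiceOperation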

You might be asking yourself why we output the service name. We do this because the parameter value is treated as a regular expression and it attempts to match against two pieces of metadata in the module’s cmdlets (service and prefix). It is therefore possible that a term can match more than one service. For example:

PS C:\> Get-AWSCmdletName -Service EC2

CmdletName                      ServiceOperation           ServiceName
----------                      ----------------           -----------
Add-EC2ClassicLinkVpc           AttachClassicLinkVpc       Amazon Elastic Compute Cloud
Add-EC2InternetGateway          AttachInternetGateway      Amazon Elastic Compute Cloud
...
Unregister-EC2RouteTable        DisassociateRouteTable     Amazon Elastic Compute Cloud
Get-ECSClusterDetail            DescribeClusters           Amazon EC2 Container Service
Get-ECSClusters                 ListClusters               Amazon EC2 Container Service
Get-ECSClusterService           ListServices               Amazon EC2 Container Service
...
Unregister-ECSTaskDefinition    DeregisterTaskDefinition   Amazon EC2 Container Service
Update-ECSContainerAgent        UpdateContainerAgent       Amazon EC2 Container Service
Update-ECSService               UpdateService              Amazon EC2 Container Service

You’ll see the result set contains cmdlets for both Amazon EC2 and Amazon EC2 Container Service. This is because the search term ‘EC2’ matched the noun prefix for the cmdlets exposing the Amazon EC2 service API as well as the service name for the container service.

We hope you find this new capability useful. You can get the new version as a Windows Installer package here or through the PowerShell Gallery.

If you have suggestions for other features, let us know in the comments!

AWS re:Invent 2015 Recap

Another AWS re:Invent in the bag. It was great to talk to so many of our customers about .NET and PowerShell. Steve and I gave two talks this year. The first session was about how to take advantage of ASP.NET 5 in AWS. The second session was our first-ever PowerShell talk at re:Invent. It was great to see community excitement for our PowerShell support. If you weren’t able to come to re:Invent this year, you can view our sessions online.

We published the source code and scripts used in our talks in the reInvent-2015 folder in our .NET SDK samples repository.

Hope to see you at next year’s AWS re:Invent!

AWS re:Invent 2015

by Steve Roberts | in .NET

This year’s AWS re:Invent conference is just a few days away. Norm, Milind, and I from the .NET SDK and Tools team at AWS will be attending. We are looking forward to meeting with as many of you as we can.

This year we have two .NET-related breakout sessions:

On Wednesday afternoon, we will show you how to develop and host ASP.NET 5 applications on AWS. Check out DEV302: Hosting ASP.NET 5 Applications in AWS with Docker and AWS CodeDeploy in the session catalog.

On Thursday afternoon, we will hold our first-ever re:Invent session on the AWS Tools for Windows PowerShell! Check out DEV202: Under the Desk to the AWS Cloud with Windows PowerShell in the session catalog. The session will walk through how some easy-to-use scripts can be used to handle the workflow of moving a virtualized server into the cloud.

If you’re attending the conference this year, be sure to stop by the SDKs and Tools booth in the Exhibit Hall and say hello. We’d love to get feedback on what we can do to help with your day-to-day work with the AWS SDK for .NET, the AWS Tools for Windows PowerShell, and the AWS Toolkit for Visual Studio. See you in Las Vegas!

AWS re:Invent 2015

by Jason Fulghum | in Java

AWS re:Invent 2015 kicks off next week! We couldn’t be more excited to hear how you’re using our SDKs and tools to build your applications.

You can find several sessions covering the AWS SDKs and tools in the Developer Tools track. We’ll also be working at the AWS booth in the Expo Hall, so be sure to come by and see us.

I’ll be co-presenting DEV303: Practical DynamoDB Programming with Java on Thursday morning. Come by to see how we use the AWS SDK for Java along with AWS Lambda and the AWS Toolkit for Eclipse to efficiently work with data in DynamoDB.

As always, the re:Invent 2015 technical sessions will be available to watch online, for free, after the event. Here are a few sessions on the AWS SDK for Java from years past:

Will you be at AWS re:Invent this year? What are you most excited about? Let us know in the comments below.

New Support for ASP.NET 5 in AWS SDK for .NET

by Norm Johanson | in .NET

Today we have released beta support for ASP.NET 5 in the AWS SDK for .NET. ASP.NET 5 is an exciting development for .NET developers with modularization and cross-platform support being major goals for the new platform.

Currently, ASP.NET 5 is on beta 7. There may be more changes before its 1.0 release. For this reason, we have released a separate 3.2 version of the SDK (marked beta) to NuGet. We will continue to maintain the 3.1 version as the current, stable version of the SDK. When ASP.NET 5 goes out of beta, we will take version 3.2 of the SDK out of beta.

CoreCLR

ASP.NET 5 applications can run on .NET 4.5.2, mono 4.0.1, or the new CoreCLR runtime. If you are targeting the new CoreCLR runtime, be aware of these coding differences:

  • Service calls must be made asynchronously. This is because the HTTP client used for CoreCLR supports asynchronous calls only. Coding your application to use asynchronous operations can improve your application performance because fewer tasks are blocked waiting for a response from the server. (See the short sketch after this list.)
  • The CoreCLR version of the AWS SDK for .NET currently does not support our encrypted SDK credentials store, which is available in the .NET 3.5 and 4.5 versions of the AWS SDK for .NET. This is because the encrypted store uses P/Invoke to make system calls into Windows to handle the encryption. Because CoreCLR is cross-platform, that option is not available. For local development with CoreCLR, we recommend you use the shared credentials file. When running in EC2 instances, Identity and Access Management (IAM) roles are the preferred mechanism for delivering credentials to your application.
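For example, a listing call that might have been synchronous on .NET 4.5 is awaited on CoreCLR. The sketch below is illustrative only: the bucket name and enclosing class are placeholders, and ListObjectsAsync is the asynchronous counterpart the SDK exposes for this operation.

using System;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

public class BucketLister
{
    // Hypothetical helper: lists the keys in a bucket using the async-only CoreCLR surface
    public static async Task ListKeysAsync(string bucketName)
    {
        var s3Client = new AmazonS3Client();

        var response = await s3Client.ListObjectsAsync(new ListObjectsRequest
        {
            BucketName = bucketName
        });

        foreach (S3Object entry in response.S3Objects)
        {
            Console.WriteLine(entry.Key);
        }
    }
}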

AWS re:Invent

If you are attending AWS re:Invent next month, I’ll be presenting a breakout session about ASP.NET 5 development with AWS and options for deploying ASP.NET 5 applications to AWS.

Feedback

To give us feedback on ASP.NET 5 support or to suggest AWS features to better support ASP.NET 5, open a GitHub issue on the repository for the AWS SDK for .NET. Check out the dnxcore-development branch to see where the ASP.NET 5 work is being done.

DynamoDB DataModel Enum Support

by Pavel Safronov | in .NET

In version 3.1.1 of the DynamoDB .NET SDK package, we added enum support to the Object Persistence Model. This feature allows you to use enums in .NET objects you store and load in DynamoDB. Before this change, the only way to support enums in your objects was to use a custom converter to serialize and deserialize the enums, storing them either as string or numeric representations. With this change, you can use enums directly, without having to implement a custom converter. The following two code samples show an example of this:

Definitions:

[DynamoDBTable("Books")]
public class Book
{
    [DynamoDBHashKey]
    public string Title { get; set; }
    public List<string> Authors { get; set; }
    public EditionTypes Editions { get; set; }
}
[Flags]
public enum EditionTypes
{
    None      = 0,
    Paperback = 1,
    Hardcover = 2,
    Digital   = 4,
}

Using enums:

var client = new AmazonDynamoDBClient();
DynamoDBContext context = new DynamoDBContext(client);

// Store item
Book book = new Book
{
    Title = "Cryptonomicon",
    Authors = new List<string> { "Neal Stephenson" },
    Editions = EditionTypes.Paperback | EditionTypes.Digital
};
context.Save(book);

// Get item
book = context.Load<Book>("Cryptonomicon");
Console.WriteLine("Title = {0}", book.Title);
Console.WriteLine("Authors = {0}", string.Join(", ", book.Authors));
Console.WriteLine("Editions = {0}", book.Editions);

Custom Converters

With OPM enum support, enums are stored as their numeric representations in DynamoDB. (The default underlying type is int, but you can change it, as described in this MSDN article.) If you were previously working with enums by using a custom converter, you may now be able to remove it and use this new support, depending on how your converter was implemented:

  • If your converter stored the enum into its corresponding numeric value, this is the same logic we use, so you can remove it.
  • If your converter turned the enum into a string (if you use ToString and Parse), you can discontinue the use of a custom converter as long as you do this for all of the clients. This feature is able to convert strings to enums when reading data from DynamoDB, but will always save an enum as its numeric representation. This means that if you load an item with a "string" enum, and then save it to DynamoDB, the enum will now be "numeric." As long as all clients are updated to use the latest SDK, the transition should be seamless.
  • If your converter worked with strings and you depend on them elsewhere (for example, queries or scans that depend on the string representation), continue to use your current converter.

Enum changes

Finally, it’s important to keep in mind that because enums are stored as their numeric representations, updates to an enum can create problems with existing data and code. If you modify an enum in version B of an application, but have version A data or clients, it’s possible some of your clients may not be able to properly handle the newer version of the enum values. Even something as simple as reorganizing the enum values can lead to some very hard-to-identify bugs. This MSDN blog post provides some very good advice to keep in mind when designing an enum.
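To make that concrete, here is a hypothetical sketch of how a reorganized enum breaks existing items. A Book saved with EditionTypes.Paperback | EditionTypes.Digital is stored as the number 5; if a later version of the application reassigns those values, the stored 5 silently acquires a different meaning:

// Version A (what existing items in DynamoDB were written with):
// Paperback = 1, Hardcover = 2, Digital = 4, so Paperback | Digital is stored as 5.

// Hypothetical version B inserts a new member and shifts Digital:
[Flags]
public enum EditionTypes
{
    None      = 0,
    Paperback = 1,
    Hardcover = 2,
    Audiobook = 4,   // takes the numeric value Digital used to have
    Digital   = 8,
}

// Loading the old item (numeric value 5) with version B now deserializes to
// EditionTypes.Paperback | EditionTypes.Audiobook instead of Paperback | Digital.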