AWS Developer Blog

PHP application logging with Amazon CloudWatch Logs and Monolog

by Joseph Fontes | in PHP

Logging and information debugging can be approached from a multitude of different angles. Whether you use an application framework or code from scratch, it's always comforting to have familiar components and tools across different projects. In our examples today, I am going to enable Amazon CloudWatch Logs logging with a PHP application. To accomplish this, I wanted to use an existing solution that is already popular, well used, and standards compliant. For these reasons, we are going to use the open source log library, PHP Monolog (https://github.com/Seldaek/monolog).

PHP Monolog

For those who work with a new PHP application, framework, or service, one of the technology choices that appears frequently across solutions is the use of Monolog for application logging. PHP Monolog is a standards-compliant PHP library that enables developers to send logs to various destination types, including databases, files, sockets, and different services. Although PHP Monolog predates the standards for PHP logging defined in PSR-3, it does implement the PSR-3 interface and standards. This makes Monolog compliant with the common interface for logging libraries. Using Monolog with CloudWatch Logs creates a PSR-3 compatible logging solution. Monolog is available for use with a number of different applications and frameworks such as Laravel, Symfony, CakePHP, and many others. Our example today uses PHP Monolog to send information to CloudWatch Logs for application logging and to build a structure and process that enables the use of our application data with CloudWatch alarms and notifications. This lets us use logs from our application for cross-service actions, such as Amazon EC2 Auto Scaling decisions.

Amazon CloudWatch Logs

As a customer-driven organization, AWS is constantly building and releasing significant features and services requested by AWS customers and partners. One of those services that we highlight today is Amazon CloudWatch Logs. CloudWatch Logs enables you to store log file information from applications, operating systems and instances, AWS services, and various other sources. An earlier blog post highlighted the use of CloudWatch Logs with various programming examples.

Notice that the blog post includes a PHP example that uses CloudWatch Logs to store an entry from an application. You can use that example and extend it as a standalone solution to provide logging to CloudWatch Logs from within your application. In our examples, we'll build on this approach by using PHP Monolog.

Implementing Monolog

To begin using Monolog, we install the necessary libraries with the use of Composer (https://getcomposer.org/). The instructions below install the AWS SDK for PHP, PHP Monolog, and an add-on to Monolog that enables logging to CloudWatch Logs.

curl -sS https://getcomposer.org/installer | php
php composer.phar require aws/aws-sdk-php
php composer.phar require monolog/monolog
php composer.phar require maxbanton/cwh:^1.0

Alternatively, you can copy the following entry to the composer.json file and install it via the php composer.phar install command.

{
    "minimum-stability": "stable",
    "require": {
        "aws/aws-sdk-php": "^3.24",
        "aws/aws-php-sns-message-validator": "^1.1",
        "monolog/monolog": "^1.21",
        "maxbanton/cwh": "^1.0"
    }
}

Local logging

Now that PHP Monolog is available for use, we can test the implementation. We start with an example of logging to a single file.

require "vendor/autoload.php";

use Monolog\Logger;
use Monolog\Formatter\LineFormatter;
use Monolog\Handler\StreamHandler;

$logFile = "testapp_local.log";

$logger = new Logger('TestApp01');
$formatter = new LineFormatter(null, null, false, true);
$infoHandler = new StreamHandler(__DIR__."/".$logFile, Logger::INFO);
$infoHandler->setFormatter($formatter);
$logger->pushHandler($infoHandler);
$logger->info('Initial test of application logging.');

In the previous example, we start by requiring the Composer autoloader, which loads the libraries we installed earlier. The new Logger line sets the channel name to "TestApp01". The next line creates a new LineFormatter that removes brackets around unused log items. The next line establishes the destination as the file we identified, testapp_local.log, and associates it with the INFO log level. Next, we apply the format to our stream handler. Then we add the stream handler, with the updated format, to the handler list. Finally, a new message is logged with the log level of INFO. For information about log levels and different handlers, see the Monolog GitHub page, IETF RFC 5424, and PSR-3.

We can now view the contents of the log file to ensure functionality:
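If the script ran once, the log file contains a single entry similar to the following (the timestamp will differ):

[2017-03-20 12:00:00] TestApp01.INFO: Initial test of application logging.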

Syslog logging

Now that we are able to write a simple log entry to a local file, our next example uses the system Syslog to log events.

require "vendor/autoload.php";

use Monolog\Logger;
use Monolog\Formatter\LineFormatter;
use Monolog\Handler\StreamHandler;
use Monolog\Handler\SyslogHandler;

$logFile = "testapp_local.log";
$appName = "TestApp01";
$facility = "local0";

$logger = new Logger($appName);

$localFormatter = new LineFormatter(null, null, false, true);
$syslogFormatter = new LineFormatter("%channel%: %level_name%: %message% %context% %extra%", null, false, true);

$infoHandler = new StreamHandler(__DIR__."/".$logFile, Logger::INFO);
$infoHandler->setFormatter($localFormatter);

$warnHandler = new SyslogHandler($appName, $facility, Logger::WARNING);
$warnHandler->setFormatter($syslogFormatter);

$logger->pushHandler($warnHandler);
$logger->pushHandler($infoHandler);

$logger->info('Test of PHP application logging.');
$logger->warn('Test of the warning system logging.');

Here you can see that the format of the syslog messages is controlled by the $syslogFormatter value. Because syslog provides a date/time with each log entry, we don't need to include these values in our log text. The syslog facility is set to local0, with all WARNING messages sent to syslog and both the INFO and WARNING level messages logged to our local file. You can find additional information about syslog facilities and log levels on the Syslog Wikipedia page.
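On a typical Linux host, you can confirm that the warning reached syslog by tailing the system log (for example, /var/log/syslog or /var/log/messages, depending on the distribution). The entry will look similar to the following, with the host name and exact layout determined by your syslog daemon:

Mar 20 12:00:00 ip-10-0-0-10 TestApp01: TestApp01: WARNING: Test of the warning system logging.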

Logging to CloudWatch Logs

Now that you’ve seen the basic use of Monolog, let’s send some logs over to CloudWatch Logs. We can use the Amazon Web Services CloudWatch Logs Handler for Monolog library to integrate Monolog with CloudWatch Logs. In our example, an authentication application produces log information.

require "vendor/autoload.php";

use Aws\CloudWatchLogs\CloudWatchLogsClient;
use Maxbanton\Cwh\Handler\CloudWatch;
use Monolog\Logger;
use Monolog\Formatter\LineFormatter;
use Monolog\Handler\StreamHandler;
use Monolog\Handler\SyslogHandler;

$logFile = "testapp_local.log";
$appName = "TestApp01";
$facility = "local0";

// Get the instance ID from the EC2 instance metadata service:
$url = "http://169.254.169.254/latest/meta-data/instance-id";
$instanceId = file_get_contents($url);

// Client options: the region below is a placeholder; we assume the
// instance profile supplies credentials, so none are set explicitly.
$awsCredentials = [
    'region'  => 'us-east-1',
    'version' => 'latest'
];

$cwClient = new CloudWatchLogsClient($awsCredentials);

// Log group name, will be created if it doesn't exist
$cwGroupName = 'php-app-logs';
// Log stream keyed by the instance ID, created if it doesn't exist
$cwStreamNameInstance = $instanceId;
// Second log stream keyed by the application name
$cwStreamNameApp = "TestAuthenticationApp";
// Days to keep logs (the handler defaults to 14)
$cwRetentionDays = 90;

$cwHandlerInstanceNotice = new CloudWatch($cwClient, $cwGroupName, $cwStreamNameInstance, $cwRetentionDays, 10000, [ 'application' => 'php-testapp01' ],Logger::NOTICE);
$cwHandlerInstanceError = new CloudWatch($cwClient, $cwGroupName, $cwStreamNameInstance, $cwRetentionDays, 10000, [ 'application' => 'php-testapp01' ],Logger::ERROR);
$cwHandlerAppNotice = new CloudWatch($cwClient, $cwGroupName, $cwStreamNameApp, $cwRetentionDays, 10000, [ 'application' => 'php-testapp01' ],Logger::NOTICE);

$logger = new Logger('PHP Logging');

$formatter = new LineFormatter(null, null, false, true);
$syslogFormatter = new LineFormatter("%channel%: %level_name%: %message% %context% %extra%",null,false,true);
$infoHandler = new StreamHandler(__DIR__."/".$logFile, Logger::INFO);
$infoHandler->setFormatter($formatter);

$warnHandler = new SyslogHandler($appName, $facility, Logger::WARNING);
$warnHandler->setFormatter($syslogFormatter);

$cwHandlerInstanceNotice->setFormatter($formatter);
$cwHandlerInstanceError->setFormatter($formatter);
$cwHandlerAppNotice->setFormatter($formatter);

$logger->pushHandler($warnHandler);
$logger->pushHandler($infoHandler);
$logger->pushHandler($cwHandlerInstanceNotice);
$logger->pushHandler($cwHandlerInstanceError);
$logger->pushHandler($cwHandlerAppNotice);

$logger->info('Initial test of application logging.');
$logger->warn('Test of the warning system logging.');
$logger->notice('Application Auth Event: ',[ 'function'=>'login-action','result'=>'login-success' ]);
$logger->notice('Application Auth Event: ',[ 'function'=>'login-action','result'=>'login-failure' ]);
$logger->error('Application ERROR: System Error');

In this example, application authentication events are passed as a PHP array and presented in CloudWatch Logs as JSON. The events with a result of login-success and login-failure are sent to both the log stream associated with the instance ID and to the log stream associated with the application name.
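With the formatter used above, the message stored in CloudWatch Logs for a successful login looks similar to the following (the timestamp will vary):

[2017-03-20 12:00:00] PHP Logging.NOTICE: Application Auth Event: {"function":"login-action","result":"login-success"}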

 

Using these different stream locations, we can create metrics and alarms at either a per-instance or per-application level. Let's assume that we want to create a metric for the total number of users who logged in to our application over the past five minutes. Select your log group and then choose Create Metric Filter.

On the next page, we can create our filter and test it in the same window. For the filter data, we use the JSON string from the log entry. Enter the following string to extract all the successful logins.

{ $.result = login-success }

Below, we can see the filter details. I updated the Filter Name to a value that’s easy to identify. The Metric Namespace now has a value associated with the application name and the metric name reflects the number of login-success values.
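Similarly, if you also want to track failed logins, a second metric filter can use the login-failure value written by the same application code:

{ $.result = login-failure }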

 

We could now create an alarm to send a notification or perform some action (such as an Amazon EC2 scaling decision), based on this information being received via CloudWatch Logs.

With these values, we would receive an alert each time there were more than 50 successful logins within a five-minute period.
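As a sketch, a comparable alarm could also be created from the AWS CLI. The metric name, namespace, and SNS topic ARN below are placeholders that would need to match the metric filter and notification topic you created:

aws cloudwatch put-metric-alarm \
    --alarm-name php-testapp01-login-success-high \
    --namespace TestAuthenticationApp \
    --metric-name SuccessfulLogins \
    --statistic Sum \
    --period 300 \
    --evaluation-periods 1 \
    --threshold 50 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:login-alerts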

Laravel logging

Monolog is used as the logging solution for a number of PHP applications and frameworks, including the popular Laravel PHP framework. In this example, we'll show how to use Monolog with CloudWatch Logs within Laravel. Our first step is to find out the current log settings for our Laravel application. If you open config/app.php within your application root, you see various log settings. By default, Laravel is set to log to a single log file using the baseline log level of debug.

Next, we add the AWS SDK for PHP as a service provider within Laravel, using the instructions and examples from the AWS Service Provider for Laravel (aws/aws-sdk-php-laravel).

You also want to add the Monolog library for CloudWatch Logs to the composer.json file for inclusion in the application, as shown.
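For example, the additional composer.json entries might look like the following (the version constraints are illustrative):

    "require": {
        "aws/aws-sdk-php-laravel": "~3.0",
        "maxbanton/cwh": "^1.0"
    }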

You now need to extend the current Laravel Monolog configuration with your custom configuration. You can find additional information about this step on the Laravel Error and Logging page. The following is an example of this addition to the bootstrap/app.php file.

use Maxbanton\Cwh\Handler\CloudWatch;

$app->configureMonologUsing( function($monolog) {

    $cwClient = App::make('aws')->createClient('CloudWatchLogs');
    $cwGroupName = env('AWS_CWL_GROUP', 'laravel-app-logs');
    $cwStreamNameApp = env('AWS_CWL_APP', 'laravel-app-name');
    $cwTagName = env('AWS_CWL_TAG_NAME', 'application');
    $cwTagValue = env('AWS_CWL_TAG_VALUE', 'laravel-testapp01');
    $cwRetentionDays = 90;
    $cwHandlerApp = new CloudWatch($cwClient, $cwGroupName, $cwStreamNameApp, $cwRetentionDays, 10000, [ $cwTagName => $cwTagValue ] );

    $monolog->pushHandler($cwHandlerApp);
});

For testing purposes, we add a logging call to a test route in routes/web.php.

Route::get('/test', function () {
    Log::warning('Clicking on test link!!!');
    return view('test');
});

When the test route is invoked, the logs now show in CloudWatch Logs.

Conclusion

In our examples, we've shown how to use PHP Monolog to log to a local file, syslog, and CloudWatch Logs. We have also demonstrated the integration of Monolog with CloudWatch Logs within a popular PHP application framework. Finally, we've shown how to create CloudWatch Logs metric filters and apply them to CloudWatch alarms that make the data from the logs actionable through notifications and scaling decisions. CloudWatch Logs provides a central logging capability for your PHP applications, and, combined with Monolog, it is available to both established projects and custom applications.

AWS Toolkit for Eclipse: Support for AWS CodeCommit and AWS CodeStar

by Zhaoxi Zhang | in Java

I am pleased to announce that the AWS Toolkit for Eclipse now supports AWS CodeCommit and AWS CodeStar. This means you can create, view, clone, and delete your AWS CodeCommit repositories in the AWS Toolkit for Eclipse. You can also import existing projects under your AWS CodeStar account directly into the Eclipse IDE.

Git Credentials Configuration

We recommend that you use Git credentials with HTTPS to connect to your AWS CodeCommit repositories. For more information, see Use Git Credentials and HTTPS with AWS CodeCommit.

In the new version of the AWS Toolkit for Eclipse, you will see an entry for AWS CodeCommit on the Eclipse Preferences page, shown here. To install the AWS Toolkit for Eclipse, follow the instructions on the AWS Toolkit for Eclipse page. You can configure the Git credentials for your AWS accounts on this page. For more information, see Create Git Credentials for HTTPS Connections to AWS CodeCommit. You can type the newly generated user name and password into the text fields, or import the CSV file generated from the IAM console directly into Eclipse.

AWS CodeCommit Explorer

An entry for AWS CodeCommit also appears in AWS Explorer, as shown here. To open this view, click the drop-down box next to the AWS icon in the toolbar, and select Show AWS Explorer View. You can create, view, clone, and delete repositories in this view.

  • Create a Repository
    To create a repository, right-click AWS CodeCommit and then select Create Repository, as shown here. Type the repository name and an optional description in the Create Repository dialog box. The newly created repository will appear under AWS CodeCommit.

    Figure: AWS CodeCommit Explorer View

    Figure: Create Repository Dialog Box

  • View a Repository
    To view a repository, double-click the repository name in AWS Explorer. This will open the repository editor where you can see the metadata for the repository, as shown here. The repository editor also shows the latest 10 commits for the selected branch. To refresh the repository editor, click the refresh icon on the top-right corner of the page.
  • Clone a Repository
    To clone a repository, click the Check out button in the repository editor, or right-click the repository name in AWS Explorer and select Clone Repository. If you haven’t configured Git credentials for your current AWS account in your Eclipse, a dialog box will prompt you to configure them.


    After you have configured your Git credentials, you will see the following pages for selecting a branch and a local destination. These pages have the same look and feel as EGit. For information about EGit, see the EGit Tutorial. You can use the Eclipse EGit plugin to manage your projects with Git.

    Figure: Branch Selection Page

    Figure: Destination Page

  • Delete a Repository
    To delete a repository from AWS CodeCommit, right-click the repository name and select Delete Repository. When the following dialog box is displayed, type the repository name.

AWS CodeStar Project Checkout

You can use the AWS Toolkit for Eclipse to check out AWS CodeStar projects and edit them in the Eclipse IDE. To import your AWS CodeStar projects to Eclipse, click the drop-down box next to the AWS icon in the toolbar, and select Import AWS CodeStar Project. You will see all your AWS CodeStar projects under the selected account and region.

The plugin for AWS CodeStar finds all the AWS CodeCommit repositories that are linked to the selected project. From the Select repository drop-down list, choose the repository, and then click Next. You can also configure the Git credentials on this page if they have not been configured on the selected account.

Resources

For information about AWS CodeCommit, see the AWS CodeCommit documentation. For information about AWS CodeStar, see the AWS CodeStar documentation.

Conclusion

We hope you find these new features useful. If you have questions or other feedback about using the AWS Toolkit for Eclipse, feel free to leave it in the comments.

Make the Most of Community Resources for AWS SDKs and Tools

by Leah Rivers

As the new year gets well underway, we want to be sure you know the best ways to get help, keep up to date, and join the conversation about tools you use to build on AWS. We’ve recently refreshed our SDK and CLI README files on GitHub with links to Stack Overflow for getting help. It’s a great time to give you an overview of community resources that we hope make it easier for you to develop using AWS. Let us know what you think!

Get Help
We use GitHub for tracking bugs and feature requests: .NET | Java | JavaScript | PHP | Ruby | Python | Go | C++ | CLI

We use Stack Overflow for general help questions. Use these tags for our SDKs and CLI:

Chat with the Community
We ❤ our gitter channels for the CLI and SDKs. We regularly participate in conversations with developers building on AWS to share ideas, get feedback, and answer questions in the context of a community chat. Join the community by checking out our gitter channels: .NET | Java | JavaScript | PHP | Ruby | Python | Go | CLI

Follow us on Twitter
@awscloud – We’ll share blog posts and announcements for all our SDKs and developer tools.
@awsfornet – Follow us here for updates to the AWS SDK for .NET and AWS Toolkit for Visual Studio.
@awsforjava – Follow us here for updates to the AWS SDK for Java and AWS Toolkit for Eclipse.

AWS Toolkit for Eclipse: VPC Configuration Enhancement for AWS Elastic Beanstalk Environments

by Zhaoxi Zhang | in Java

From the blog post VPC Configuration for an AWS Elastic Beanstalk Environment, you learned how to deploy your web application to AWS Elastic Beanstalk by using the AWS Toolkit for Eclipse. In this blog post, I'm happy to announce that you can now configure Elastic Load Balancing (ELB) subnets and Amazon EC2 subnets separately. The following screenshots show that the experience in the AWS Toolkit for Eclipse is consistent with that in the Elastic Beanstalk console.

 

VPC Configuration in AWS Elastic Beanstalk Console

VPC Configuration in AWS Toolkit for Eclipse

Notice that the ELB subnet configuration is enabled only when the environment type is Load Balanced Web Server Environment (see the following screenshot for the type selection). Please read through Using Elastic Beanstalk with Amazon VPC to be sure you understand all the VPC parameters. Inappropriate parameter combinations can cause deployment failures. Follow the rules below when you create an AWS Elastic Beanstalk environment:

  • You must select at least one subnet for EC2 and for ELB.
  • You must select at least one ELB subnet in each Availability Zone where there is an EC2 subnet, and vice versa.
  • You may only select one EC2 subnet per Availability Zone.
  • When one subnet is used for both EC2 and ELB, select the Associate Public IP Address check box unless you have set up a NAT instance to route traffic from the Internet to your ELB subnet.

Application and Environment Configuration

Context Pattern added to the AWS SDK for Go

by Jason Del Ponte | in Go

The AWS SDK for Go v1.8.0 release adds support for API operation request functional options and the Context pattern. Both of these features were highly requested by our users. Request options allow you to easily configure and augment how the SDK makes API operation requests to AWS services. The SDK's support for the Context pattern allows your application to take advantage of cancellation, timeouts, and Context values on requests. Together, the new request options and Context pattern give your application even more control over the SDK's request execution and handling.

Request Options

Request options are functional arguments that you pass to the SDK's API operation methods. Functional options are a pattern you can use to configure an operation via functions or closures passed in line with the method call.

For example, you can configure the Amazon S3 API operation PutObject to log debug information about the request directly, without impacting the other API operations used by your application.

// Log this API operation only. 
resp, err := svc.PutObjectWithContext(ctx, params, request.WithLogLevel(aws.LogDebug))

This pattern is also helpful when you want your application to inject request handlers into the request. This allows you to do so in line with the API operation method call.

resp, err := svc.PutObjectWithContext(ctx, params, func(r *request.Request) {
	start := time.Now()
	r.Handlers.Complete.PushBack(func(req *request.Request) {
		fmt.Printf("request %s took %s to complete\n", req.RequestID, time.Since(start))
	})
})

All of the SDK’s new service client methods that have a WithContext suffix support these request options. You can also apply request options to the SDK’s standard Request directly with the ApplyOptions method.
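For example, here is a minimal sketch of applying the same logging option to a standalone request with ApplyOptions (assuming params is a populated *s3.PutObjectInput):

req, _ := svc.PutObjectRequest(params)
req.ApplyOptions(request.WithLogLevel(aws.LogDebug))
err := req.Send()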

API Operations with Context

All of the new methods of the SDK's API operations that have a WithContext suffix take a Context value, which must be non-nil. Context allows your application to control API operation request cancellation. This means you can now easily institute request timeouts based on the Context pattern. Go introduced the Context pattern in the experimental package golang.org/x/net/context, and it was later added to the Go standard library in Go 1.7. For backward compatibility with previous Go versions, the SDK created the Context interface type in the github.com/aws/aws-sdk-go/aws package. The SDK's Context type is compatible with Context from both golang.org/x/net/context and the Go 1.7 standard library context package.

Here is an example of how to use a Context to cancel uploading an object to Amazon S3. If the put doesn’t complete within the timeout passed in, the API operation is canceled. When a Context is canceled, the SDK returns the CanceledErrorCode error code. A working version of this example can be found in the SDK.

sess := session.Must(session.NewSession())
svc := s3.New(sess)

// Create a context with a timeout that will abort the upload if it takes 
// more than the passed in timeout.
ctx := context.Background()
var cancelFn func()
if timeout > 0 {
	ctx, cancelFn = context.WithTimeout(ctx, timeout)
}
// Ensure the context is canceled to prevent leaking.
// See context package for more information, https://golang.org/pkg/context/
if cancelFn != nil {
	defer cancelFn()
}

// Uploads the object to S3. The Context will interrupt the request if the 
// timeout expires.
_, err := svc.PutObjectWithContext(ctx, &s3.PutObjectInput{
	Bucket: aws.String(bucket),
	Key:    aws.String(key),
	Body:   body,
})
if err != nil {
	if aerr, ok := err.(awserr.Error); ok && aerr.Code() == request.CanceledErrorCode {
		// If the SDK can determine the request or retry delay was canceled
		// by a context the CanceledErrorCode error code will be returned.
		fmt.Println("request's context canceled,", err)
	}
	return err
}

API Operation Waiters

Waiters were expanded to include support for request Context and waiter options. The new WaiterOption type defines functional options that are used to configure the waiter’s functionality.

For example, the WithWaiterDelay allows you to provide your own function that returns how long the waiter will wait before checking the waiter’s resource state again. This is helpful when you want to configure an exponential backoff, or longer retry delays with ConstantWaiterDelay.

The example below highlights this by configuring the WaitUntilBucketExists method to use a 30-second delay between checks to determine if the bucket exists.

svc := s3.New(sess)
ctx := context.Background()

_, err := svc.CreateBucketWithContext(ctx, &s3.CreateBucketInput{
	Bucket: aws.String("myBucket"),
})
if err != nil {
	return fmt.Errorf("failed to create bucket, %v", err)
}

err = svc.WaitUntilBucketExistsWithContext(ctx,
	&s3.HeadBucketInput{
		Bucket: aws.String("myBucket"),
	},
	request.WithWaiterDelay(request.ConstantWaiterDelay(30*time.Second)),
)
if err != nil {
	return fmt.Errorf("failed to wait for bucket to exist, %v", err)
}

fmt.Println("bucket created")

API Operation Paginators

Paginators were also expanded to add support for Context and request options. Configuring request options for pagination applies the options to each new Request that the SDK creates to retrieve the next page. By extending the Pages API methods to include Context and request options, the SDK gives you control over how each page request is made and lets you cancel the pagination.

svc := s3.New(sess)
ctx := context.Background()

err := svc.ListObjectsPagesWithContext(ctx,
	&s3.ListObjectsInput{
		Bucket: aws.String("myBucket"),
		Prefix: aws.String("some/key/prefix"),
		MaxKeys: aws.Int64(100),
	},
	func(page *s3.ListObjectsOutput, lastPage bool) bool {
		fmt.Println("Received", len(page.Contents), "objects in page")
		for _, obj := range page.Contents {
			fmt.Println("Key:", aws.StringValue(obj.Key))
		}
		return true
	},
)
if err != nil {
	return fmt.Errorf("failed to list objects, %v", err)
}

API Operation Pagination without Callbacks

In addition to the Pages API operations, you can use the new Pagination type in the github.com/aws/aws-sdk-go/aws/request package. This type enables you to control the iteration of pages directly, which is helpful when you do not want to use callbacks for paginating AWS operations. It lets you treat pagination much like the Go standard library bufio package's Scanner type, iterating through pages with a for loop. You can also combine it with the Context pattern by calling Request.SetContext on each request in the NewRequest function.

svc := s3.New(sess)

params := s3.ListObjectsInput{
	Bucket: aws.String("myBucket"),
	Prefix: aws.String("some/key/prefix"),
	MaxKeys: aws.Int64(100),
}
ctx := context.Background()

p := request.Pagination{
	NewRequest: func() (*request.Request, error) {
		req, _ := svc.ListObjectsRequest(&params)
		req.SetContext(ctx)
		return req, nil
	},
}

for p.Next() {
	page := p.Page().(*s3.ListObjectsOutput)
	
	fmt.Println("Received", len(page.Contents), "objects in page")
	for _, obj := range page.Contents {
		fmt.Println("Key:", aws.StringValue(obj.Key))
	}
}

return p.Err()

Wrap Up

The addition of Context and request options expands the capabilities of the AWS SDK for Go, giving your applications the tools they need to control request lifecycle and configuration with the SDK. Let us know about your experiences using the new Context pattern and request options features.

Using Python and Amazon SQS FIFO Queues to Preserve Message Sequencing

by Tara Van Unen | in Python

Thanks to Alexandre Pinhel, Solutions Architect from our team for writing this post!

Amazon SQS is a managed message queuing service that makes it simple to decouple application components. We recently announced an entirely new queue type, SQS FIFO (first-in, first-out) queues with exactly-once processing and deduplication. SQS FIFO queues are now available in the US East (Ohio) and US West (Oregon) regions, with more regions to follow. This new type of queue lets you use Amazon SQS for systems that depend on receiving messages in exact order, and exactly once, such as financial services and e-commerce applications. For example, FIFO queues help ensure that mobile banking transactions are processed in the correct sequence, and that inventory updates for online retail sites are processed in the right order. In this post, we show how to use FIFO queues to preserve message sequencing with Python.

FIFO queues complement our existing SQS standard queues, which offer higher throughput, best-effort ordering, and at-least-once delivery. The following diagram compares the features of standard queues vs. FIFO queues. The same API functions apply to both types of queues.

The following use case provides an example of how you can use SQS FIFO queues to exchange sequence-sensitive information. For more information about developing applications using Amazon SQS, see the Amazon SQS Developer Guide.

SQS FIFO Queues Example

In the capital markets industry, some of the most common patterns for exchanging messages with partners and customers are based on messaging technologies with two types of scenarios:

  1. Communication channels between two messaging managers (one sender channel and one receiver channel). Each messaging manager hosts the local queue and has an alias to the remote queue hosted on the other side (an MQ manager). The messages sent from an MQ manager are not stored locally. The receiving MQ manager stores the messages for the client applications of the named queues.
  2. A single messaging manager that hosts all the queues and that has the associated responsibility for message exchange and backup.

You can use Amazon SQS to decouple the components of an application so that these components can run independently, as expected in a messaging use case. The following diagram shows a sample architecture using an SQS queue with processing servers.


To preserve the order of messages, we use FIFO queues. These queues help ensure that trades are received in the correct order, and a book event is received before an update event or a cancel event.

Important: The name of a FIFO queue must end with the .fifo suffix.

The following diagram shows a financial use case, where Amazon SQS FIFO queues are used with different processing servers based on the type of messages being managed.

 

 

In FIFO queues, Amazon SQS also provides content-based deduplication. Content-based deduplication allows SQS to distinguish the contents of one message from the contents of another message using the message body. This helps eliminate duplicates in referential systems such as those that manage pricing.

In the following example, we simulate the two parts of a capital market exchange. In the first part, we simulate the application sending the trade status and sending messages to the queue named Trade Status. (In Amazon SQS, the queue will be named TradeStatus.fifo.) The application regularly sends trade status received during the trade lifecycle in the queue (for example, trade received, trade checked, trade confirmed, and so on). In the second part, we simulate a client application that gets the trade status to update an internal website or to send status update notifications to other tools. The script stops after the message is read.

To accomplish this, you can use the following two Python code examples. This example is using boto3, the AWS SDK for Python.

This first script sends an XML message to a queue named TradeStatus.fifo, and the second script receives the message from the same queue. Messages can contain up to 256 KB of text in any format. Any component can later retrieve the messages programmatically using the Amazon SQS API. You can manage messages larger than 256 KB by using the SQS Extended Client Library for Java, which uses Amazon S3 to store larger payloads.

For queue creation, please see the Amazon SQS Developer guide.
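As a minimal sketch, you could also create the queue with boto3. FIFO queues require the FifoQueue attribute, and here we assume content-based deduplication is wanted:

import boto3

sqs = boto3.resource('sqs')

# FIFO queue names must end with the ".fifo" suffix
queue = sqs.create_queue(
    QueueName='TradeStatus.fifo',
    Attributes={
        'FifoQueue': 'true',
        'ContentBasedDeduplication': 'true'
    }
)
print(queue.url)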

Name: TradeStatus.fifo

URL: https://sqs.us-west-2.amazonaws.com/12345678/TradeStatus.fifo

The scripts below are written in Python 2.

import boto3

# Get the service resource
sqs = boto3.resource('sqs')

# Get the queue
queue = sqs.get_queue_by_name(QueueName='TradeStatus.fifo')

try:
    userInput = raw_input("Please enter file name: ")
except NameError:
    # raw_input doesn't exist in Python 3; fall back to input()
    userInput = input("Please enter file name: ")

with open(userInput, 'r') as myfile:
    data = myfile.read()

# The queue is assumed to have content-based deduplication enabled;
# otherwise an explicit MessageDeduplicationId must also be provided.
response = queue.send_message(
    MessageBody=data,
    MessageGroupId='messageGroup1'
)

# The response is NOT a resource, but gives you a message ID and MD5
print(response.get('MessageId'))
print(response.get('MD5OfMessageBody'))

The following Python code receives the message from the TradeStatus.fifo queue and deletes the message when it’s received. Afterward, the message is no longer available.

import boto3

# Get the service resource
sqs = boto3.resource('sqs')

# Get the queue
queue = sqs.get_queue_by_name(QueueName='TradeStatus.fifo')

# Process messages by printing out body
for message in queue.receive_messages():
    # Print out the body of the message
    print('Hello, {0}'.format(message.body))

    # Let the queue know that the message is processed
    message.delete()

Note: In Python, you need only the name of the queue.

More Resources

In this post, we showed how you can use Amazon SQS FIFO queues to exchange data between distributed systems that depend on receiving messages in exact order, and exactly once. You can get started with SQS FIFO queues using just three simple commands. For more information, see the following resources:

Creating .NET Core AWS Lambda Projects without Visual Studio

by Norm Johanson | in .NET

In the last post, we talked about AWS Lambda deployment integration with the dotnet CLI, using the Amazon.Lambda.Tools NuGet package to deploy Lambda functions and serverless applications. But what if you want to create an AWS Lambda project outside of Visual Studio? This is especially important if you’re working on platforms other than Windows.

The “dotnet new” Command

The dotnet CLI has a command named new that you can use to create .NET Core projects from the command line. For example, by default there are options for creating many of the common project types.

C:\BlogContent> dotnet new -all                                                                                                             
Template Instantiation Commands for .NET Core CLI.                                                                            
                                                   
Templates                 Short Name       Language      Tags                                                                 
------------------------------------------------------------------------------------------------------                                                      
Console Application       console          [C#], F#      Common/Console                                                       
Class library             classlib         [C#], F#      Common/Library                                                       
Unit Test Project         mstest           [C#], F#      Test/MSTest                                                          
xUnit Test Project        xunit            [C#], F#      Test/xUnit                                                           
ASP.NET Core Empty        web              [C#]          Web/Empty                                                            
ASP.NET Core Web App      mvc              [C#], F#      Web/MVC                                                              
ASP.NET Core Web API      webapi           [C#]          Web/WebAPI                                                           
Nuget Config              nugetconfig                    Config                                                               
Web Config                webconfig                      Config                                                               
Solution File             sln                            Solution                                                             
                                                                                                                             
Examples:                                                                                                                     
    dotnet new mvc --auth None --framework netcoreapp1.1                                                                      
    dotnet new mvc --framework netcoreapp1.1                                                                                  
    dotnet new --help   

The new command also has the ability to add more project types via NuGet. We recently released a new NuGet package named Amazon.Lambda.Templates that wraps up all the templates we expose in Visual Studio as project types you can create from the dotnet CLI. To install this NuGet package, run the following command.

dotnet new -i Amazon.Lambda.Templates::*

The trailing ::* in the command specifies to install the latest version. Once the install is complete, the Lambda templates show up as part of dotnet new.

C:\BlogContent> dotnet new -all                                                                                                             
Template Instantiation Commands for .NET Core CLI.                                                                            
                                                                                                                             
Templates                            Short Name                    Language      Tags                                         
------------------------------------------------------------------------------------------------------                                                      
Lambda Detect Image Labels           lambda.DetectImageLabels      [C#]          AWS/Lambda/Function                          
Lambda Empty Function                lambda.EmptyFunction          [C#]          AWS/Lambda/Function                          
Lambda Simple DynamoDB Function      lambda.DynamoDB               [C#]          AWS/Lambda/Function                          
Lambda Simple Kinesis Function       lambda.Kinesis                [C#]          AWS/Lambda/Function                          
Lambda Simple S3 Function            lambda.S3                     [C#]          AWS/Lambda/Function                          
Lambda ASP.NET Core Web API          lambda.AspNetCoreWebAPI       [C#]          AWS/Lambda/Serverless                        
Lambda DynamoDB Blog API             lambda.DynamoDBBlogAPI        [C#]          AWS/Lambda/Serverless                        
Lambda Empty Serverless              lambda.EmptyServerless        [C#]          AWS/Lambda/Serverless                        
Console Application                  console                       [C#], F#      Common/Console                               
Class library                        classlib                      [C#], F#      Common/Library                               
Unit Test Project                    mstest                        [C#], F#      Test/MSTest                                  
xUnit Test Project                   xunit                         [C#], F#      Test/xUnit                                   
ASP.NET Core Empty                   web                           [C#]          Web/Empty                                    
ASP.NET Core Web App                 mvc                           [C#], F#      Web/MVC                                      
ASP.NET Core Web API                 webapi                        [C#]          Web/WebAPI                                   
Nuget Config                         nugetconfig                                 Config                                       
Web Config                           webconfig                                   Config                                       
Solution File                        sln                                         Solution                                     
                                                                                                                             
Examples:                                                                                                                     
    dotnet new mvc --auth None --framework netcoreapp1.1                                                                      
    dotnet new classlib                                                                                                       
    dotnet new --help                                                                                                         
C:\BlogContent>  

To get details about a template, you can use the help command.


dotnet new lambda.EmptyFunction --help

C:\BlogContent> dotnet new lambda.EmptyFunction --help                                                                                                    
Template Instantiation Commands for .NET Core CLI.                                                                                          
                                                                                                                                           
Lambda Empty Function (C#)                                                                                                                  
Author: AWS                                                                                                                                 
Options:                                                                                                                                    
  -p|--profile  The AWS credentials profile set in aws-lambda-tools-defaults.json and used as the default profile when interacting with AWS.
                string - Optional                                                                                                           
                                                                                                                                           
  -r|--region   The AWS region set in aws-lambda-tools-defaults.json and used as the default region when interacting with AWS.              
                string - Optional       

You can see here that the template takes two optional parameters to set the profile and region. These values are written to the aws-lambda-tools-defaults.json file so you can get started deploying with the Lambda tooling right away.

To create a function, run the following command.

dotnet new lambda.EmptyFunction --name BlogFunction --profile default --region us-east-2

This creates a project for the Lambda function and a test project. We can now use any editor we want to build and test our .NET Core Lambda function. Once we’re ready to deploy the function, we run the following commands.

cd ./BlogFunction/src/BlogFunction
dotnet restore
dotnet lambda deploy-function BlogFunction --function-role TestRole

After deployment, we can even test the function from the command line by using the following command.

dotnet lambda invoke-function BlogFunction --payload "Hello World"
C:\BlogContent> dotnet lambda invoke-function BlogFunction --payload "Hello World"
Payload:
"HELLO WORLD"

Log Tail:
START RequestId: a54b750b-0dca-11e7-9099-27598ea7c35d Version: $LATEST
END RequestId: a54b750b-0dca-11e7-9099-27598ea7c35d
REPORT RequestId: a54b750b-0dca-11e7-9099-27598ea7c35d  Duration: 0.99 ms       Billed Duration: 100 ms         Memory Size: 256 MB     Max Memory Used: 42 MB

Summary

With our Lambda tooling provided by Amazon.Lambda.Tools and our project templates provided by Amazon.Lambda.Templates, you can develop .NET Core Lambda functions on any platform. As always, let us know what you think on our GitHub repository.

Deploying .NET Core AWS Lambda Functions from the Command Line

by Norm Johanson | in .NET

In previous posts about our .NET Core support with AWS Lambda, we’ve shown how you can create Lambda functions and serverless applications with Visual Studio. But one of the most exciting things about .NET Core is its cross-platform support with the new command line interface (CLI) named dotnet. To help you develop Lambda functions outside of Visual Studio, we’ve released the Amazon.Lambda.Tools NuGet package that integrates with the dotnet CLI.

We released Amazon.Lambda.Tools as a preview with our initial release of .NET Core on Lambda. We kept it in preview while .NET Core tooling, including the dotnet CLI, was in preview. With the recent release of Visual Studio 2017, the dotnet CLI and our integration with it is now marked as generally available (GA). If you’re still using preview versions of the dotnet CLI and the pre-Visual Studio 2017 project structure, the GA release of Amazon.Lambda.Tools will still work for those projects.

.NET Core Project Structure

When .NET Core was originally released last summer, you would define a project in a JSON file named project.json. At that time, it was announced that this was temporary, that .NET Core was moving to be in line with other .NET projects and would be based on the msbuild XML format, and that each project would contain a .csproj file. The GA release of the dotnet CLI tooling includes the switch to the msbuild format.

Amazon.Lambda.Tools Registration

If you create an AWS Lambda project in Visual Studio, the command line integration is set up automatically so that you can easily transition from Visual Studio to the command line. If you inspect a project created in Visual Studio 2017, you’ll notice a DotNetCliToolReference for the Amazon.Lambda.Tools NuGet package.


<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Amazon.Lambda.Core" Version="1.0.0" />
    <PackageReference Include="Amazon.Lambda.Serialization.Json" Version="1.0.1" />
  </ItemGroup>

  <ItemGroup>
    <DotNetCliToolReference Include="Amazon.Lambda.Tools" Version="1.4.0" />
  </ItemGroup>

</Project>

In Visual Studio 2015, which uses the older project.json format, the Amazon.Lambda.Tools package is declared as a build dependency and is also registered in the tools section.


{
  "version": "1.0.0-*",
  "buildOptions": {
  },

  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.0.0"
    },

    "Amazon.Lambda.Core": "1.0.0*",
    "Amazon.Lambda.Serialization.Json": "1.0.1",

    "Amazon.Lambda.Tools" : {
      "type" :"build",
      "version":"1.4.0 "
    }
  },

  "tools": {
    "Amazon.Lambda.Tools" : "1.4.0 "
  },

  "frameworks": {
    "netcoreapp1.0": {
      "imports": "dnxcore50"
    }
  }
}

Adding to an Existing Project

The Amazon.Lambda.Tools NuGet package is marked as a DotNetCliTool package type. Right now, Visual Studio 2017 doesn't understand this new package type. If you attempt to add the NuGet package through Visual Studio's Manage NuGet Packages dialog, it won't be able to add the reference. Until Visual Studio 2017 is updated, you will need to manually add the DotNetCliToolReference to the .csproj file, as shown below.
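For example, this adds the same reference used by the Visual Studio-generated project shown earlier (adjust the version as needed):

  <ItemGroup>
    <DotNetCliToolReference Include="Amazon.Lambda.Tools" Version="1.4.0" />
  </ItemGroup>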

Deploying Lambda Functions

All the tooling we developed for Visual Studio to deploy Lambda functions originated in the Amazon.Lambda.Tools package. That means all the deployment features you use inside Visual Studio you can also do from the command line.

To get started, in a command window navigate to a project you created in Visual Studio. To see the available commands, enter dotnet lambda help.

C:\BlogContent\BlogExample\BlogExample> dotnet lambda help                                                                                     
AWS Lambda Tools for .NET Core functions                                                                 
Project Home: https://github.com/aws/aws-lambda-dotnet                                                   
                                                                                                        
                                                                                                        
Commands to deploy and manage AWS Lambda functions:                                                      
                                                                                                        
        deploy-function         Command to deploy the project to AWS Lambda                              
        invoke-function         Command to invoke a function in Lambda with an optional input            
        list-functions          Command to list all your Lambda functions                                
        delete-function         Command to delete an AWS Lambda function                                 
        get-function-config     Command to get the current runtime configuration for a Lambda function   
        update-function-config  Command to update the runtime configuration for a Lambda function        
                                                                                                        
                                                                                                        
Commands to deploy and manage AWS Serverless applications using AWS CloudFormation:                      
                                                                                                        
        deploy-serverless       Command to deploy an AWS Serverless application                          
        list-serverless         Command to list all your AWS Serverless applications                     
        delete-serverless       Command to delete an AWS Serverless application                          
                                                                                                        
                                                                                                        
Other Commands:                                                                                          
                                                                                                        
        package                 Command to package a Lambda project into a zip file ready for deployment
                                                                                                        
                                                                                                        
To get help on individual commands execute:                                                              
        dotnet lambda help <command>  

By using the dotnet lambda command you have access to a collection of commands to manage Lambda functions and serverless applications. There is also a package command that packages your project into a .zip file, ready for deployment. This can be useful for CI systems.
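For example, a sketch of producing a deployment package for a CI pipeline (the output path is illustrative):

dotnet lambda package --configuration Release --framework netcoreapp1.0 --output-package ./artifacts/BlogExample.zip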

To see help for an individual command, type dotnet lambda help followed by the command name; for example, dotnet lambda help deploy-function.

C:\BlogContent\BlogExample\BlogExample> dotnet lambda help deploy-function
AWS Lambda Tools for .NET Core functions
Project Home: https://github.com/aws/aws-lambda-dotnet

deploy-function:
   Command to deploy the project to AWS Lambda

   dotnet lambda deploy-function [arguments] [options]
   Arguments:
      <FUNCTION-NAME> The name of the function to deploy
   Options:
     --region                                The region to connect to AWS services, if not set region will be detected from the environment (Default Value: us-east-2)
      --profile                               Profile to use to look up AWS credentials, if not set environment credentials will be used (Default Value: normj+vpc)
     --profile-location                      Optional override to the search location for Profiles, points at a shared credentials file
      -pl    | --project-location             The location of the project, if not set the current directory will be assumed
      -cfg   | --config-file                  Configuration file storing default values for command line arguments. Default is aws-lambda-tools-defaults.json
      -c     | --configuration                Configuration to build with, for example Release or Debug (Default Value: Release)
      -f     | --framework                    Target framework to compile, for example netcoreapp1.0 (Default Value: netcoreapp1.0)
      -pac   | --package                      Application package to use for deployment, skips building the project
      -fn    | --function-name                AWS Lambda function name
      -fd    | --function-description         AWS Lambda function description
      -fp    | --function-publish             Publish a new version as an atomic operation
      -fh    | --function-handler             Handler for the function <assembly>::<type>::<method> (Default Value: BlogExample::BlogExample.Function::FunctionHandler)
      -fms   | --function-memory-size         The amount of memory, in MB, your Lambda function is given (Default Value: 256)
      -frole | --function-role                The IAM role that Lambda assumes when it executes your function
      -ft    | --function-timeout             The function execution timeout in seconds (Default Value: 30)
      -frun  | --function-runtime             The runtime environment for the Lambda function (Default Value: dotnetcore1.0)
      -fsub  | --function-subnets             Comma delimited list of subnet ids if your function references resources in a VPC
      -fsec  | --function-security-groups     Comma delimited list of security group ids if your function references resources in a VPC
      -dlta  | --dead-letter-target-arn       Target ARN of an SNS topic or SQS Queue for the Dead Letter Queue
      -ev    | --environment-variables        Environment variables set for the function. Format is <key1>=<value1>;<key2>=<value2>
      -kk    | --kms-key                      KMS Key ARN of a customer key used to encrypt the function's environment variables
      -sb    | --s3-bucket                    S3 bucket to upload the build output
      -sp    | --s3-prefix                    S3 prefix for for the build output
      -pcfg  | --persist-config-file          If true the arguments used for a successful deployment are persisted to a config file. Default config file is aws-lambda-tools-defaults.json
C:\BlogContent\BlogExample\BlogExample>

As you can see, you can set many options with this command. This is where the aws-lambda-tools-defaults.json file, which is created as part of your project, comes in handy. You can set the options in this file, which is read by the Lambda tooling by default. The project templates created in Visual Studio set many of these fields with default values.


{                                                                                   
  "profile":"default",                                                            
  "region" : "us-east-2",                                                           
  "configuration" : "Release",                                                      
  "framework" : "netcoreapp1.0",                                                    
  "function-runtime":"dotnetcore1.0",                                               
  "function-memory-size" : 256,                                                     
  "function-timeout" : 30,                                                          
  "function-handler" : "BlogExample::BlogExample.Function::FunctionHandler"         
}

When you use this aws-lambda-tools-defaults.json file, the only things the Lambda tooling still needs to deploy the function are the name of the Lambda function and the IAM role. You provide these by using the following command:

dotnet lambda deploy-function TheFunction --function-role TestRole
C:\BlogContent\BlogExample\BlogExample> dotnet lambda deploy-function TheFunction --function-role TestRole                                                                                                  
Executing publish command                                                                                                                                                           
Deleted previous publish folder                                                                                                                                                     
... invoking 'dotnet publish', working folder 'C:\BlogContent\BlogExample\BlogExample\bin\Release\netcoreapp1.0\publish'                                                            
... publish: Microsoft (R) Build Engine version 15.1.548.43366                                                                                                                      
... publish: Copyright (C) Microsoft Corporation. All rights reserved.                                                                                                              
... publish:   BlogExample -> C:\BlogContent\BlogExample\BlogExample\bin\Release\netcoreapp1.0\BlogExample.dll                                                                      
Zipping publish folder C:\BlogContent\BlogExample\BlogExample\bin\Release\netcoreapp1.0\publish to C:\BlogContent\BlogExample\BlogExample\bin\Release\netcoreapp1.0\BlogExample.zip
... zipping: Amazon.Lambda.Core.dll                                                                                                                                                 
... zipping: Amazon.Lambda.Serialization.Json.dll                                                                                                                                   
... zipping: BlogExample.deps.json                                                                                                                                                  
... zipping: BlogExample.dll                                                                                                                                                        
... zipping: BlogExample.pdb                                                                                                                                                        
... zipping: Newtonsoft.Json.dll                                                                                                                                                    
... zipping: System.Runtime.Serialization.Primitives.dll                                                                                                                            
Creating new Lambda function TheFunction                                                                                                                                            
New Lambda function created                                                                                                                                                         
C:\BlogContent\BlogExample\BlogExample>             

You can also pass an alternative file that contains option defaults by using the --config-file option. This enables you to reuse multiple Lambda configurations.
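
For example, if you keep a separate defaults file for another environment (the file name below is just an illustration), you can point the tooling at it like this:

dotnet lambda deploy-function TheFunction --function-role TestRole --config-file aws-lambda-tools-prod.json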

“dotnet publish” vs “dotnet lambda” Commands

Using the Amazon.Lambda.Tools package is the preferred way to deploy functions to Lambda from the command line, rather than running the dotnet publish command, zipping the output folder, and sending the zip file to Lambda yourself. The Lambda tooling inspects the publish folder and removes any duplicate native dependencies, which reduces the size of your Lambda function. For example, if you reference the SQL Server client NuGet package, System.Data.SqlClient, the Lambda tooling produces a package file that is about 1 MB smaller than the zipped publish folder from dotnet publish. It also reworks the layout of native dependencies to ensure that the Lambda service finds them.

Summary

We hope the Amazon.Lambda.Tools package helps you with the transition from working in Visual Studio to working in the command line to script and automate your deployments. Let us know what you think on our GitHub repository, and what you’d like to see us add to the tooling.

AWS SDK for .NET Supports Assume Role Profiles and the Shared Credentials File

by John Vellozzi | on | in .NET | | Comments

The AWS SDK for .NET, AWS Tools for PowerShell, and the AWS Toolkit for Visual Studio now support the use of the AWS CLI credentials file. Some of the AWS SDKs have supported shared use of the AWS CLI credentials file for some time, and we’re happy to add the SDK for .NET to that list.

For a long time, the SDK for .NET has supported reading and writing of its own credentials file. We’ve added support for new credential profile types to facilitate feature parity with the shared credentials file. The SDK for .NET and Tools for PowerShell now support reading and writing of basic, session, and assume role credential profiles to both the .NET credentials file and the shared credentials file. The .NET credentials file maintains its support for federated credential profiles.

With the new Amazon.Runtime.CredentialManagement namespace, you now have programmatic access to read and write credential profiles to the .NET credentials file and the shared credentials file. This is a new namespace, and some older classes have been deprecated. Please see the developer guide topic Configuring AWS Credentials and the API Reference for details.
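
Here’s a minimal sketch of writing and then reading back a profile with the new namespace. The profile name and key values are placeholders, and error handling is omitted.

using System;
using Amazon;
using Amazon.Runtime.CredentialManagement;

// Register a basic profile in the shared credentials file ((user's home directory)\.aws\credentials).
var options = new CredentialProfileOptions
{
    AccessKey = "access_key_placeholder",
    SecretKey = "secret_key_placeholder"
};
var profile = new CredentialProfile("blog_example", options)
{
    Region = RegionEndpoint.USEast2
};
var sharedFile = new SharedCredentialsFile();
sharedFile.RegisterProfile(profile);

// Read the profile back from the same file.
CredentialProfile readProfile;
if (sharedFile.TryGetProfile("blog_example", out readProfile))
{
    Console.WriteLine($"Loaded profile {readProfile.Name}");
}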

AWS Tools for PowerShell now enable you to read and write credential profiles to both credentials files as well. We’ve added parameters to the credentials-related cmdlets to support the new profile types and the shared credentials file. You can reference the new profiles with the -ProfileName argument in the service cmdlets. You can find more details about the changes to Tools for PowerShell in Shared Credentials in AWS Tools for PowerShell and the AWS Tools for PowerShell Cmdlet Reference.

In Visual Studio, you’ll now see profiles stored in (user’s home directory)\.aws\credentials listed in the AWS Explorer. Reading is supported for all profile types, and you can edit basic profiles.

What You Need to Know

In addition to the new Amazon.Runtime.CredentialManagement classes, the SDK has some internal changes. The SDK’s region resolution logic now looks for the region in the default credential profile. This is especially important for SDK for .NET applications running in Amazon EC2. The SDK for .NET determines the region for a request from:

  1. The client configuration, or what is explicitly set on the AWS service client.
  2. The AWSConfigs.RegionEndpoint property (set explicitly or in AppConfig).
  3. The AWS_REGION environment variable, if it’s non-empty.
  4. The “default” credential profile. (See “Credential Profile Resolution” below for details.)
  5. EC2 instance metadata.

Checking the “default” credential profile is a new step in the process. If your application relies on EC2 instance metadata for the region, ensure that the SDK doesn’t pick up the wrong region from one of the credentials files.
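
If your deployment can’t rely on the right region being picked up from the environment, the safest option is to set it explicitly on the client, which is the first step in the list above. A minimal sketch (the service client and region here are arbitrary examples):

using Amazon;
using Amazon.S3;

// An explicit region on the client configuration wins over profiles, environment
// variables, and instance metadata, so the rest of the resolution order never applies.
var s3Client = new AmazonS3Client(RegionEndpoint.USWest2);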

Although there aren’t any changes to the credentials resolution logic, it’s important to understand how credential profiles fit into it. The SDK for .NET will (continue to) determine the credentials to use for service requests from:

  1. The client configuration, or what is explicitly set on the AWS service client.
  2. BasicAWSCredentials that are created from the AWSAccessKey and AWSSecretKey AppConfig values, if they’re available.
  3. A search for a credentials profile with a name specified by a value in AWSConfigs.AWSProfileName (set explicitly or in AppConfig). (See “Credential Profile Resolution” below for details.)
  4. The “default” credentials profile. (See “Credential Profile Resolution” below for details.)
  5. SessionAWSCredentials that are created from the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN environment variables, if they’re all non-empty.
  6. BasicAWSCredentials that are created from the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, if they’re both non-empty.
  7. EC2 instance metadata.
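
As an illustration of step 3 in the list above, you can point the SDK at a named profile from code by setting AWSConfigs.AWSProfileName before you create your first service client. A minimal sketch with a placeholder profile name:

using Amazon;
using Amazon.S3;

// Equivalent to setting AWSProfileName in AppConfig; anything earlier in the list
// (explicit client credentials or AppConfig access keys) still takes precedence.
AWSConfigs.AWSProfileName = "blog_example";

// Clients created after this point resolve their credentials from the "blog_example" profile.
var client = new AmazonS3Client(RegionEndpoint.USEast2);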

Credential Profile Resolution

With two different credentials file types, it’s important to understand how to configure the SDK and Tools for PowerShell to use them. The AWSConfigs.AWSProfilesLocation (set explicitly or in AppConfig) controls how the SDK finds credential profiles. The -ProfileLocation command line argument controls how Tools for PowerShell find a profile. Here’s how the configuration works in both cases:

Profile Location Value: null (not set) or empty
Profile Resolution Behavior: First search the .NET credentials file* for a profile with the specified name. If the profile isn’t there, search (user’s home directory)\.aws\credentials. If the profile isn’t there, search (user’s home directory)\.aws\config.

Profile Location Value: The path to a file in the shared credentials file format
Profile Resolution Behavior: Search only the specified file for a profile with the specified name.

*The .NET credentials file is not supported on Mac and Linux platforms, and is skipped when resolving credential profiles.
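
If you want the same resolution behavior programmatically, the CredentialProfileStoreChain class in Amazon.Runtime.CredentialManagement performs this search for you. A minimal sketch; the profile name and alternate file path are placeholders:

using System;
using Amazon.Runtime;
using Amazon.Runtime.CredentialManagement;

// With no argument, the chain follows the default search order described above.
var chain = new CredentialProfileStoreChain();

// Passing a path restricts the search to that file, like the second row in the table:
// var chain = new CredentialProfileStoreChain(@"C:\custom\credentials");

AWSCredentials credentials;
if (chain.TryGetAWSCredentials("blog_example", out credentials))
{
    Console.WriteLine("Resolved credentials for profile blog_example");
}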

Time-to-Live Support in Amazon DynamoDB

by Pavel Safronov | on | in .NET | | Comments

Amazon DynamoDB recently added Time-to-Live (TTL) support, a way to automatically delete expired items from your DynamoDB table. This blog post discusses this feature, how it’s exposed in the AWS SDK for .NET, and how you can take advantage of it.

Using Time-to-Live

At a high level, you configure TTL by choosing a particular attribute on a table that will be treated as a timestamp. Then you simply store an expiration time in this attribute on every item that you need to expire. A periodic process in DynamoDB checks whether an item’s TTL timestamp attribute is now in the past, and then schedules the removal of that item from the table. The timestamps must be stored as epoch seconds (the number of seconds since 12:00:00 AM January 1, 1970 UTC), which you can calculate yourself or have the SDK calculate for you.
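
For example, computing the epoch-seconds value by hand for an expiration of seven days from now looks like the following sketch (later sections show the SDK helpers that do this conversion for you):

using System;

// Epoch seconds: whole seconds elapsed since 1970-01-01T00:00:00 UTC.
DateTime epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
DateTime expirationTime = DateTime.UtcNow.AddDays(7);
long expirationEpochSeconds = (long)(expirationTime - epoch).TotalSeconds;
Console.WriteLine($"Store {expirationEpochSeconds} in the table's TTL attribute");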

The AWS SDK for .NET has three different DynamoDB APIs, so you have three different ways to use TTL. In the following sections, we discuss these APIs and how you use the TTL feature from each of them.

Low-Level Model – Control Plane

First, the low-level model. This is a thin wrapper around the DynamoDB service operations: you instantiate AmazonDynamoDBClient and call its operations directly. This model gives you the most control, but it lacks the helpful abstractions of the higher-level APIs. Using the low-level model, you can enable and disable the TTL feature and configure Time-to-Live for your data.

Here’s an example of checking the status of TTL for a table.


using (var client = new AmazonDynamoDBClient())
{
    // Retrieve TTL status
    var ttl = client.DescribeTimeToLive(new DescribeTimeToLiveRequest
    {
        TableName = "SessionData"
    }).TimeToLiveDescription;
    Console.WriteLine($"TTL status = {ttl.TimeToLiveStatus}");
    Console.WriteLine($"TTL attribute {(ttl.AttributeName == null ? "has not been set" : $"= {ttl.AttributeName}")}");

    // Enable TTL
    client.UpdateTimeToLive(new UpdateTimeToLiveRequest
    {
        TableName = "SessionData",
        TimeToLiveSpecification = new TimeToLiveSpecification
        {
            Enabled = true,
            AttributeName = "ExpirationTime"
        }
    });

    // Disable TTL
    client.UpdateTimeToLive(new UpdateTimeToLiveRequest
    {
        TableName = "SessionData",
        TimeToLiveSpecification = new TimeToLiveSpecification
        {
            Enabled = false,
            AttributeName = "ExpirationTime"
        }
    });
}

Note: There is a limit to how often you can enable or disable TTL in a given period of time. Running this sample multiple times will likely result in a ValidationException being thrown.

Low Level – Data Plane

Actually writing and reading TTL data in an item is fairly straightforward, but you are required to write epoch seconds into an AttributeValue. You can calculate the epoch seconds manually or use helper methods in AWSSDKUtils, as shown below.

Here’s an example of using the low-level API to work with TTL data.


using (var client = new AmazonDynamoDBClient())
{
    // Writing TTL attribute
    DateTime expirationTime = DateTime.Now.AddDays(7);
    Console.WriteLine($"Storing expiration time = {expirationTime}");
    int epochSeconds = AWSSDKUtils.ConvertToUnixEpochSeconds(expirationTime);
    client.PutItem("SessionData", new Dictionary<string, AttributeValue>
    {
        { "UserName", new AttributeValue { S = "user1" } },
        { "ExpirationTime", new AttributeValue { N = epochSeconds.ToString() } }
    });

    // Reading TTL attribute
    var item = client.GetItem("SessionData", new Dictionary<string, AttributeValue>
    {
        { "UserName", new AttributeValue { S = "user1" } },
    }).Item;
    string epochSecondsString = item["ExpirationTime"].N;
    epochSeconds = int.Parse(epochSecondsString);
    expirationTime = AWSSDKUtils.ConvertFromUnixEpochSeconds(epochSeconds);
    Console.WriteLine($"Stored expiration time = {expirationTime}");
}

Document Model

The Document Model provides you with Table objects that represent a DynamoDB table, and Document objects that represent a single row of data in a table. You can store primitive .NET types directly in a Document, with the required conversion to DynamoDB types happening in the background. This makes the Document Model API easier to use than the low-level model.

Using the Document Model API, you can easily configure which attributes you’d like to store as epoch seconds by setting the TableConfig.AttributesToStoreAsEpoch collection. Then you can use DateTime objects without needing to convert the data to epoch seconds manually. If you don’t specify which attributes to store as epoch seconds, then instead of writing epoch seconds in that attribute you would end up storing the DateTime as an ISO-8601 string, such as “2017-03-09T05:49:38.631Z”. In that case, DynamoDB Time-to-Live would NOT automatically delete the item. So you need to be sure to specify AttributesToStoreAsEpoch correctly when you’re creating the Table object.

Here’s an example of configuring the Table object, then writing and reading TTL items.


// Set up the Table object (assumes "client" is an AmazonDynamoDBClient, as in the earlier examples)
var tableConfig = new TableConfig("SessionData")
{
    AttributesToStoreAsEpoch = new List<string> { "ExpirationTime" }
};
var table = Table.LoadTable(client, tableConfig);

// Write TTL data
var doc = new Document();
doc["UserName"] = "user2";

DateTime expirationTime = DateTime.Now.AddDays(7);
Console.WriteLine($"Storing expiration time = {expirationTime}");
doc["ExpirationTime"] = expirationTime;

table.PutItem(doc);

// Read TTL data
doc = table.GetItem("user2");
expirationTime = doc["ExpirationTime"].AsDateTime();
Console.WriteLine($"Stored expiration time = {expirationTime}");

Object Persistence Model

The Object Persistence Model simplifies interaction with DynamoDB even more, by enabling you to use .NET classes with DynamoDB. This interaction is done by passing objects to the DynamoDBContext, which handles all the conversion logic. Using TTL with the Object Persistence Model is just as straightforward as using it with the Document model: you simply identify the attributes to store as epoch seconds and the SDK performs the required conversions for you.

Consider the following class definition.


[DynamoDBTable("SessionData")]
public class User
{
    [DynamoDBHashKey]
    public string UserName { get; set; }

    [DynamoDBProperty(StoreAsEpoch = true)]
    public DateTime ExpirationTime { get; set; }
}

Once we’ve added the [DynamoDBProperty(StoreAsEpoch = true)] attribute, we can use DateTime objects with the class just as we normally would. However, this time we store epoch seconds, and the items we create are eligible for TTL automatic deletion. And just as with the Document Model, if you omit StoreAsEpoch = true, the objects you write will contain ISO-8601 dates and won’t be eligible for TTL deletion.

Here’s an example of creating the DynamoDBContext object, writing a User object, and reading it out again.


using (var context = new DynamoDBContext(client))
{
    // Writing TTL data
    DateTime expirationTime = DateTime.Now.AddDays(7);
    Console.WriteLine($"Storing expiration time = {expirationTime}");

    var user = new User
    {
        UserName = "user3",
        ExpirationTime = expirationTime
    };
    context.Save(user);

    // Reading TTL data
    user = context.Load<User>("user3");
    expirationTime = user.ExpirationTime;
    Console.WriteLine($"Stored expiration time = {expirationTime}");
}

Conclusion

In this blog post, we showed how you can toggle the new Time-to-Live feature for a table. We also showed multiple ways to work with TTL data. The approach you choose is up to you, and hopefully these examples make it easy to schedule your data for automatic deletion. Happy coding!