AWS Developer Blog

Polling Messages from an Amazon SQS Queue

by Trevor Rowe | in Ruby

We’ve recently added a utility class to the AWS SDK for Ruby that makes it easy to poll an Amazon SQS queue for messages.

poller = Aws::SQS::QueuePoller.new(queue_url)

poller.poll do |msg|
  puts msg.body
end

Messages are automatically deleted from the queue at the end of the block. This tool supports receiving and deleting messages in batches, long-polling, client-side tracking of stats, and more.

Long Polling

By default, messages are received using long polling, which forces a default :wait_time_seconds of 20 seconds. If you prefer to use the queue’s default wait time, pass a nil value for :wait_time_seconds.

# disables 20 second default, use queue ReceiveMessageWaitTimeSeconds attribute
poller.poll(wait_time_seconds:nil) do |msg|
  # ...
end

When you disable :wait_time_seconds by passing nil, you must ensure the queue’s ReceiveMessageWaitTimeSeconds attribute is set to a non-zero value, or you will be short polling, which triggers significantly more API calls.
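
If you rely on the queue attribute, you can set it once using the SQS client. Here is a quick sketch; the 20-second value is just an example:

# one-time setup: enable long polling on the queue itself
sqs = Aws::SQS::Client.new
sqs.set_queue_attributes(
  queue_url: queue_url,
  attributes: { 'ReceiveMessageWaitTimeSeconds' => '20' } # attribute values are strings
)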

Batch Receiving Messages

You can specify a maximum number of messages to receive with each polling attempt via :max_number_of_messages. When this is set to a value greater than 1, the block receives an array of messages instead of a single message.

# receives and yields up to 10 messages at a time
poller.poll(max_number_of_messages:10) do |messages|
  messages.each do |msg|
    # ...
  end
end

The maximum value for :max_number_of_messages (currently 10) is enforced by Amazon SQS.

Visibility Timeouts

When receiving messages, you have a fixed amount of time to process and delete each message before it becomes visible in the queue again. This is the visibility timeout. By default, the queue’s VisibilityTimeout attribute is used. You can provide an alternative visibility timeout when polling.

# override queue visibility timeout
poller.poll(visibility_timeout:10) do |msg|
  # do work ...
end

You can reset the visibility timeout of a single message by calling #change_message_visibility. This is useful when you need more time to finish processing the message.

poller.poll do |msg|

  # do work ...

  # need more time for processing
  poller.change_message_visibility(msg, 60)

  # finish work ...

end

If you change the visibility timeout of a message to zero, it will return to the queue immediately.

Deleting Messages

Messages are deleted from the queue when the block returns normally.

poller.poll do |msg|
  # do work
end # messages deleted here

You can skip message deletion by passing skip_delete: true. This allows you to manually delete the messages using {#delete_message} or {#delete_messages}.

# single message
poller.poll(skip_delete: true) do |msg|
  poller.delete_message(msg) # if successful
end

# message batch
poller.poll(skip_delete: true, max_number_of_messages:10) do |messages|
  poller.delete_messages(messages)
end

Another way to manage message deletion is to throw :skip_delete from the poll block. You can use this to decide, on a per-message or per-batch basis, whether each one is deleted:

poller.poll do |msg|
  begin
    # do work
  rescue
    # unexpected error occurred while processing messages,
    # log it, and skip delete so it can be re-processed later
    throw :skip_delete
  end
end

Terminating the Polling Loop

By default, polling will continue indefinitely. You can stop the poller by providing an idle timeout or by throwing :stop_polling from the {#before_request} callback.

:idle_timeout

This is a configurable maximum number of seconds to wait for a new message before the polling loop exits. By default, there is no idle timeout.

# stops polling after a minute of no received messages
poller.poll(idle_timeout: 60) do |msg|
  # ...
end

:stop_polling

If you want more fine-grained control, you can configure a before request callback to trigger before each long poll. Throwing :stop_polling from this callback will cause the poller to exit normally without making the next request.

# stop after processing 100 messages
poller.before_request do |stats|
  throw :stop_polling if stats.received_message_count >= 100
end

poller.poll do |msg|
  # do work ...
end

Tracking Progress

The poller will automatically track a few statistics client-side in a PollerStats object. You can access the poller stats three ways:

  • The first block argument of {#before_request}
  • The second block argument of {#poll}
  • The return value from {#poll}

Here are examples of accessing the statistics.

  • Configure a {#before_request} callback.

    poller.before_request do |stats|
      logger.info("requests: #{stats.request_count}")
      logger.info("messages: #{stats.received_message_count}")
      logger.info("last-timestamp: #{stats.last_message_received_at}")
    end
  • Accept a second argument in the poll block, for example:

    poller.poll do |msg, stats|
      logger.info("requests: #{stats.request_count}")
      logger.info("messages: #{stats.received_message_count}")
      logger.info("last-timestamp: #{stats.last_message_received_at}")
    end
  • Return value:

    stats = poller.poll(idle_timeout:10) do |msg|
      # do work ...
    end
    logger.info("requests: #{stats.request_count}")
    logger.info("messages: #{stats.received_message_count}")
    logger.info("last-timestamp: #{stats.last_message_received_at}")

Feedback

Let us know what you think about the new queue poller. Join the conversation in our Gitter channel or open a GitHub issue.

The AWS SDK for JavaScript now supports Amazon S3 Requester Pays buckets

The AWS SDK for JavaScript now has support for Amazon S3 Requester Pays buckets.

With Requester Pays buckets, the requester, instead of the bucket owner, pays the cost of the request and the data download from the bucket. The bucket owner always pays the cost of storing data. This allows bucket owners to share the operational cost of their buckets.

Support for Requester Pays buckets was recently added in version 2.1.19 of the SDK. This article describes how to use Requester Pays buckets with the AWS SDK for JavaScript.

Setting up a Requester Pays bucket

The easiest way to set up a Requester Pays bucket is to use the Amazon S3 console.

You can also configure a Requester Pays bucket programmatically using the SDK.

var AWS = require('aws-sdk');
var s3 = new AWS.S3({region: 'us-west-2'});

var callback = function(err, data) {
    if (err) console.log(err);
    else console.log(data);
};

s3.putBucketRequestPayment({
    Bucket: 'bucket',
    RequestPaymentConfiguration: {
        Payer: 'Requester'
    }
}, callback);

Accessing objects in Requester Pays buckets

To access objects in Requester Pays buckets, requests made using the SDK must include the RequestPayer parameter. This parameter is translated by the SDK to the x-amz-request-payer header.

Setting this parameter confirms that the requester knows that they will be charged for the request.

Example

var s3 = new AWS.S3({region: 'us-west-2'});

s3.getObject({
    Bucket: 'bucket',
    Key: 'key',
    RequestPayer: 'requester'   
}, callback);
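
The same parameter applies to other object operations as well. For example, an upload to a Requester Pays bucket might look like this (the bucket, key, and body are placeholders):

s3.putObject({
    Bucket: 'bucket',
    Key: 'key',
    Body: 'file contents',
    RequestPayer: 'requester'
}, callback);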

The only valid value for the RequestPayer parameter is requester. If the RequestPayer parameter is not set for a request made on a Requester Pays bucket, then Amazon S3 returns a 403 error and the bucket owner is charged for the request.

Conclusion

The AWS SDK for JavaScript supports Requester Pays for all operations on objects. We hope this feature in the SDK allows you to more easily manage your bucket permissions and cost. We’re eager to know what you think, so leave us a comment or tweet about it @awsforjs.

Update on Modularization of the SDK

by Norm Johanson | in .NET

As mentioned earlier, we are currently working on modularizing the AWS SDK for .NET into individual packages for each service. We have pushed the changes to the modularization branch in GitHub. If you use the solution file AWSSDK.sln, it will produce a core assembly and individual service assemblies for each supported platform. Since this solution builds Windows Store and Windows Phone versions of the SDK for .NET, we recommend that you use Windows 8.1 as your development platform. We still have more testing, cleanup, and work to do on our build/release process before we release modularization for general availability.

Breaking changes

We have tried to keep the list of breaking changes to a minimum to make it as easy as possible to adopt the new modularized SDK. Here are the breaking changes in the SDK.

Amazon.AWSClientFactory Removed

This class was removed because, in the modularized SDK, it didn’t make sense to have a class with a dependency on every service. Instead, the preferred way to construct a service client is to just use its constructor.
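
For example, where you might previously have called AWSClientFactory.CreateAmazonS3Client, you now construct the client directly. A minimal sketch:

// before (removed in the modularized SDK):
// var s3Client = Amazon.AWSClientFactory.CreateAmazonS3Client(Amazon.RegionEndpoint.USWest2);

// after: construct the client from the S3 package directly
var s3Client = new Amazon.S3.AmazonS3Client(Amazon.RegionEndpoint.USWest2);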

Amazon.Runtime.AssumeRoleAWSCredentials Removed

This class was removed because it was in a core namespace but had a dependency on the AWS Security Token Service. It has been obsolete in the SDK for quite some time and will be removed with the new structure. Use Amazon.SecurityToken.AssumeRoleAWSCredentials instead.

SetACL from S3Link

S3Link is part of the Amazon DynamoDB package and is used for storing objects in Amazon S3 that are referenced from a DynamoDB item. This is a useful feature, but we didn’t want to cause a compile dependency on the S3 package for DynamoDB. Consequently, we needed to simplify the exposed S3 methods from S3Link, so we replaced SetACL with MakeS3ObjectPublic. For more control over the ACL on the object, you’ll need to use the S3 package directly.

Removal of Obsolete Result Classes

For almost all services in the SDK, operations return a response object that contains metadata for the operation, such as the request ID, and a result object. We found that having separate response and result classes was redundant and mostly just caused extra typing for developers. About a year and a half ago, when version 2 of the SDK was released, we put all the information that was on the result class onto the response class. We also marked the result classes obsolete to discourage their use. In the new modularized SDK currently in development, we removed these obsolete result classes. This helps us reduce the size of the SDK.

AWS Config Section Changes

It is possible to do advanced configuration of the SDK through the app.config or web.config file. This is done through an aws config section like the following that references the SDK assembly name.

<configuration>
  <configSections>
    <section name="aws" type="Amazon.AWSSection, AWSSDK"/>
  </configSections>
  <aws region="us-west-2">
    <logging logTo="Log4Net"/>  
  </aws>
</configuration>

In the modularized SDK, there is no longer an assembly called AWSSDK. Instead, we need to reference the new core assembly like this.

<configuration>
  <configSections>
    <section name="aws" type="Amazon.AWSSection, AWSSDK.Core"/>
  </configSections>
  <aws region="us-west-2">
    <logging logTo="Log4Net"/>  
  </aws>
</configuration>

You can also manipulate the config settings through an Amazon.AWSConfigs object. In the modularized SDK, we moved config settings for DynamoDB from the Amazon.AWSConfigs object to Amazon.AWSConfigsDynamoDB.
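
For example, a DynamoDB-specific setting such as the context table name prefix moves as sketched below; the exact property names shown here are from the modularization branch and may still change:

// before (SDK v2):
// Amazon.AWSConfigs.DynamoDBContextTableNamePrefix = "Test-";

// after, in the modularized SDK:
Amazon.AWSConfigsDynamoDB.Context.TableNamePrefix = "Test-";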

What’s next

We are making good progress getting our process and development switched over to the new modularized approach. We still have a bit to go, but in the meantime, we would love to hear any feedback on our upcoming changes. Until we’ve completed the switchover, you can make all of these changes except the configuration change with the current version of the SDK. This means you can make those updates now to ready your code for the modularized SDK.

Using the AWS SDK for JavaScript from Behind a Proxy

The AWS SDK for JavaScript can be configured to work from behind a network proxy. In browsers, proxy connections are transparently managed, and the SDK works out of the box without any additional configuration. This article focuses on using the SDK in Node.js from behind a proxy.

Node.js itself has no low-level support for proxies, so in order to configure the SDK to work with a proxy, you will need to override the default http.Agent.

This article shows you how to override the default http.Agent with the proxy-agent npm module. Note that some http.Agent modules may not support all proxies. You can visit npmjs.com for a list of available http.Agent libraries that support proxies.

Installation

npm install proxy-agent --save

In your code

The proxy-agent module automatically maps proxy protocols to agent instances. Currently, the supported protocols are HTTP(S), SOCKS, and Proxy Auto-Config (PAC).

var AWS = require('aws-sdk');
var proxy = require('proxy-agent');

AWS.config.update({
  httpOptions: { 
    agent: proxy('http://user:password@internal.proxy.com') 
  }
});

var s3 = new AWS.S3({region: 'us-west-2'});
s3.getObject({Bucket: 'bucket', Key: 'key'}, function (err, data) {
  console.log(err, data);
});
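
If you only want to proxy requests for particular clients rather than globally, you can pass the same httpOptions to an individual service client’s constructor instead (this reuses the placeholder proxy URL from above):

var s3 = new AWS.S3({
  region: 'us-west-2',
  httpOptions: {
    agent: proxy('http://user:password@internal.proxy.com')
  }
});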

Overriding the default http.Agent is simple, and allows you to configure proxy settings that the SDK can use. We hope you find this information useful and are able to easily use the SDK in Node.js from behind a proxy!

Removal of Nullable Parameter Types in AWS Tools for Windows PowerShell

by Steve Roberts | in .NET

We wanted to let you know of a change to the Tools for Windows PowerShell, in which ‘nullable’ parameters used by some cmdlets will be removed. This affects some boolean, int, and DateTime parameters in a small number of cmdlets that up until now have been surfaced as Nullable<bool>, Nullable<int> or Nullable<DateTime> parameter types.

We’ve decided to make this change based on community feedback that nullable parameter types are not standard PowerShell practice and tend to confuse some beginners with the tools. In addition, specifying $null as the value for one of these parameters actually has no effect; the value is never passed on to the underlying service API call that the cmdlet makes, so surfacing nullable parameter types serves no useful purpose.

After the change, the parameters will become simple boolean, int and DateTime types. Note that this will only be a breaking change in your scripts if you have passed the value $null to one of these parameters. For example, you may have written:

PS C:\> New-ASLaunchConfiguration -AssociatePublicIpAddress $null ...

If you look at the help for this cmdlet, it currently shows the AssociatePublicIpAddress parameter as being of type System.Boolean? (or, in other words, Nullable<bool>). Passing $null actually had no effect within the cmdlet; the parameter value was never passed on to the underlying service API call. After the update is released, your script will trigger an error if you use $null:

PS C:\> New-ASLaunchConfiguration -AssociatePublicIpAddress $null ...

New-ASLaunchConfiguration : Cannot bind argument to parameter 'AssociatePublicIpAddress' because it is null.
At line:1 char:53
+ New-ASLaunchConfiguration -AssociatePublicIpAddress $null
+                                                     ~~~~~
    + CategoryInfo          : InvalidArgument: (:) [New-ASLaunchConfiguration], ParameterBindingException
    + FullyQualifiedErrorId : ParameterArgumentValidationErrorNullNotAllowed,Amazon.PowerShell.Cmdlets.AS.NewASLaunchConfigurationCmdlet

The fix is to simply remove the parameter; this is safe since, as noted above, the value was never used anyway. If your scripts pass actual values to these parameters (in this example, $true or $false), you won’t see any difference in behavior after the change is released.
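
In other words, both of the following forms will continue to work after the change:

PS C:\> New-ASLaunchConfiguration ...
PS C:\> New-ASLaunchConfiguration -AssociatePublicIpAddress $true ...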

The team would like to take this opportunity to give a shout-out to PowerShell MVP Jeff Wouters. Jeff has been taking the time to provide useful and actionable suggestions on the Tools for Windows PowerShell cmdlets since late last year, which we really appreciate.

If you have ideas about other changes we could make to the tools to enhance your scripting experience with AWS, then be sure to let us know!

Amazon S3 Client-side Crypto Meta Information

by Hanson Char | in Java

Are you curious about how the Amazon S3 Encryption Java client makes use of meta information to support client-side encryption?  Have you ever wondered how you can write code in other languages that can encrypt/decrypt S3 objects in a format that is compatible with the AWS SDK for Java, or an AWS SDK for another language?

If so, look no further. We have just published an Appendix to provide a summary of the S3 client-side crypto meta information. Enjoy!

Announcing the aws-sdk-rails Gem

by Alex Wood | in Ruby

With the release of V2 of the AWS SDK for Ruby, we’ve received customer feedback asking for support for the Ruby on Rails integration features provided by V1 of the SDK.

Today, we’re excited to announce the release of the aws-sdk-rails gem, available now via RubyGems and, of course, on GitHub.

To get started, add the aws-sdk-rails gem to your Gemfile:

gem 'aws-sdk-rails', '~> 1.0'

ActionMailer and Amazon Simple Email Service (SES)

The gem will automatically configure Rails to include an :aws_sdk delivery method for ActionMailer that uses Amazon SES as a backend. It is simple to configure Rails to use this delivery method:

# config/application.rb
config.action_mailer.delivery_method = :aws_sdk

The aws-sdk-rails gem will use the AWS SDK for Ruby V2’s SES client automatically for any mail delivery event.
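
Your mailers need no SES-specific code. A quick sketch with a hypothetical WelcomeMailer; the sender address must be one you have verified with SES:

# app/mailers/welcome_mailer.rb (illustrative example)
class WelcomeMailer < ActionMailer::Base
  default from: 'no-reply@example.com' # an SES-verified sender

  def welcome_email(user)
    mail(to: user.email, subject: 'Welcome!', body: 'Thanks for signing up!')
  end
end

# Delivery goes through Amazon SES via the :aws_sdk delivery method:
WelcomeMailer.welcome_email(user).deliver_now # use #deliver on older Rails versions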

Logging

The gem will automatically wire the AWS SDK for Ruby’s logger to use Rails.logger by default.

You can customize the SDK log level and an optional log formatter in a config initializer:

# config/initializers/aws-sdk.rb
# log level defaults to :info
Aws.config[:log_level] = :debug

It is important to understand that all SDK log messages are logged at the same log level. Why is this important? When you set the Rails log level, you mute all log messages below that level. So if you want, for example, to see SDK log messages only in development, you might set the SDK log level to :debug as shown above and configure the Rails logger to show debug messages in development.
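
For example, to surface those :debug-level SDK messages in development only, you might bump the Rails log level there:

# config/environments/development.rb
Rails.application.configure do
  config.log_level = :debug
end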

Credentials

The AWS SDK for Ruby will attempt to locate credentials by searching the following locations:

  • ENV['AWS_ACCESS_KEY_ID'] and ENV['AWS_SECRET_ACCESS_KEY']
  • The shared credentials ini file at ~/.aws/credentials
  • From an instance profile when running on Amazon EC2

If you need to manually configure credentials, you should add them to your initializer:

# config/initializers/aws-sdk.rb
Aws.config[:credentials] = Aws::Credentials.new(access_key, secret)

Learn more about credentials in the AWS SDK for Ruby V2.

Never commit your credentials to source control. Besides being a security risk, it makes it very difficult to rotate your credentials.

Available Now

The aws-sdk-rails gem is available now.

As always, we’d love to hear your feedback, and welcome any Issues or Pull Requests at the aws-sdk-rails GitHub repo.

AWS Lambda Support in Visual Studio

Today we released version 1.9.0 of the AWS Toolkit for Visual Studio with support for AWS Lambda. AWS Lambda is a new compute service in preview that runs your code in response to events and automatically manages the compute resources for you, making it easy to build applications that respond quickly to new information.

Lambda functions are written in Node.js. To help Visual Studio developers, we have integrated with the Node.js Tools for Visual Studio plugin, which you can download here. Once the Node.js plugin and the latest AWS Toolkit are installed, it is easy to develop and debug locally and then deploy to AWS Lambda when you are ready. Let’s walk through the process of developing and deploying a Lambda function.

Setting up the project

To get started, we need to create a new project. There is a new AWS Lambda project template in the Visual Studio New Project dialog.

The Lambda project wizard has three ways to get started. The first option is to create a simple project that just contains the bare necessities to get started developing and testing. The second option allows you to pull down the source of a function that was already deployed. The last option allows you to create a project from a sample. For this walkthrough, select the "Thumbnail Creator" sample and choose Finish.

Once this function is deployed, it will get called when images are uploaded to an S3 bucket. The function will then resize the image into a thumbnail and upload the thumbnail to another bucket. The destination bucket for the thumbnail will have the same name as the bucket containing the original image, plus a "-thumbnails" suffix.

The project will be set up containing three files and the dependent Node.js packages. This sample also has a dependency on the ImageMagick CLI, which you can download from http://www.imagemagick.org/. Lambda has ImageMagick pre-configured on the compute instances that will be running the Lambda function.

Let’s take a look at the files added to the project.

  • app.js: Defines the function that Lambda will invoke when it receives events.
  • _sampleEvent.json: An example of what an event coming from S3 looks like.
  • _testdriver.js: Utility code for executing the Lambda function locally. It reads in the _sampleEvent.json file and passes it into the Lambda function defined in app.js; a minimal sketch of such a driver is shown below.
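
The driver generated by the toolkit may differ, but a minimal version along these lines would work; the handler name and context callbacks here are assumptions:

// _testdriver.js (sketch): run the Lambda function locally against the sample event
var event = require('./_sampleEvent.json');
var app = require('./app.js');

// Minimal stand-in for the context object Lambda normally provides.
var context = {
  succeed: function (result) { console.log('Succeeded:', result); },
  fail: function (err) { console.error('Failed:', err); }
};

app.handler(event, context);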

Credentials

To access AWS resources from Lambda, functions use the AWS SDK for Node.js, which has a different path for finding credentials than the AWS SDK for .NET. The AWS SDK for Node.js looks for credentials in the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY or in the shared credentials file. For more information about configuring the AWS SDK for Node.js, refer to the AWS SDK for Node.js documentation.

Running locally

To run this sample, you will need to create the source and target S3 buckets. Pick a bucket name for the source bucket, and then create the bucket using AWS Explorer. Create a second bucket with the same name as the source bucket but with the "-thumbnails" suffix. For example, you could have a pair of buckets called foobar and foobar-thumbnails. Note: the _testdriver.js defaults the region to us-west-2, so be sure to update this to whatever region you create the buckets in. Once the buckets are created, upload an image to the source bucket so that you have an image to test with.

Open the _sampleEvent.json file and update the bucket name property to the source bucket and the object key property to the image that you uploaded.

Now you can run and debug this like any other Visual Studio project. Go ahead and open _testdriver.js, set a breakpoint, and press F5 to launch the debugger.

Deploying the function to AWS Lambda

Once we have verified the function works correctly locally, it is time to deploy it. To do that, right-click on the project and select Upload to AWS Lambda….

This opens the Upload Lambda Function dialog.

You need to enter a Function Name to identify the function. You can leave the File Name and Handler fields at their defaults; together they indicate which function will be called for each event. You then need to configure an IAM role that Lambda can use to invoke your function. For this walkthrough, create a new role by selecting Amazon S3 access and Amazon CloudWatch access. Giving access to CloudWatch is very useful because it lets Lambda write debugging information to Amazon CloudWatch Logs and gives you monitoring of the function's usage. You can always refine these permissions after the function is uploaded. Once all that is set, choose OK.

Once the upload is complete, the Lambda Function status view will be displayed. The last step is to tell Amazon S3 to send events to your Lambda function. To do that, click the Add button for adding an event source.

Leave the Source Type set to Amazon S3 and select the source bucket. S3 will need permission to send events to Lambda; this is done by assigning a role to the event source, and by default the dialog will create a role that grants S3 that permission. S3 event sources are unique in that the configuration is actually stored in the S3 bucket's notification configuration, so when you choose OK on this dialog, the event source will not show up here. You can view it by right-clicking the bucket and selecting Properties.

 

Now that the function is deployed and S3 is configured to send events to it, you can test it by uploading an image to the source bucket. Very shortly afterwards, your thumbnail will show up in the thumbnails bucket.

 

Calling from S3 Browser

Your function is set up to create thumbnails for any newly uploaded images. But what if you want to run your Lambda function on images that have already been uploaded? You can do that by opening the S3 bucket from AWS Explorer, navigating to the image you want the Lambda function to run against, and choosing Invoke Lambda Function.

Next, select the function you want to invoke and choose OK. The toolkit then creates the event object that S3 would have sent to Lambda and calls Invoke on the function.

This can be done for an individual file or by selecting multiple files or folders in the S3 Browser. This is helpful when you make a code change to your Lambda function and you want to reprocess all the objects in your bucket with the new code.

Conclusion

Creating thumbnails is just one example of what you can use AWS Lambda for, but I'm sure you can imagine many other ways to take advantage of Lambda's event-driven compute model. Currently, you can create event sources for Amazon S3, Amazon Kinesis, and Amazon DynamoDB Streams, which is itself in preview. It is also possible to invoke Lambda functions for your own custom events using any of the AWS SDKs.
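
For instance, with the AWS SDK for JavaScript, invoking a deployed function with your own event payload is a single call; the function name and payload below are placeholders:

var AWS = require('aws-sdk');
var lambda = new AWS.Lambda({region: 'us-west-2'});

lambda.invoke({
    FunctionName: 'CreateThumbnail',
    Payload: JSON.stringify({ hello: 'world' })
}, function (err, data) {
    console.log(err, data);
});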

Try out the new Lambda features in the toolkit and let us know what you think. Given that AWS Lambda is in preview, we would love to get your feedback about these new features and what else we can add to make you successful using Lambda.

ElastiCache as an ASP.NET Session Store

by Brian Beach | in .NET

Are you hosting an ASP.NET application on AWS? Do you want the benefits of Elastic Load Balancing (ELB) and Auto Scaling, but feel limited by a dependency on ASP.NET session state? Rather than rely on sticky sessions, you can use an out-of-process session state provider to share session state between multiple web servers. In this post, I will show you how to configure ElastiCache and the RedisSessionStateProvider from Microsoft to eliminate the dependency on sticky sessions.

Background

An ASP.NET session state provider maintains a user’s session between requests to an ASP.NET application. For example, you might store the contents of a shopping cart in session state. The default provider stores the user’s session in memory on the web server that received the request.
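
For example, an ASP.NET MVC controller might read and write session state like this; the controller and key names here are only illustrative:

using System.Web.Mvc;

public class CartController : Controller
{
    public ActionResult Add(string item)
    {
        // With the default provider, this value lives in the web server's memory.
        Session["LastItem"] = item;
        return RedirectToAction("Show");
    }

    public ActionResult Show()
    {
        // Read back on a later request, which may be handled by a different
        // server once session state is stored out of process.
        ViewBag.LastItem = Session["LastItem"] as string;
        return View();
    }
}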

Using the default provider, your ELB must send every request from a specific user to the same web server. This is known as sticky sessions, and it greatly limits your elasticity. First, the ELB cannot distribute traffic evenly, often sending a disproportionate amount of traffic to one server. Second, Auto Scaling cannot terminate web servers without losing some users’ session state.

By moving the session state to a central location, all the web servers can share a single copy of session state. This allows the ELB to send requests to any web server, better distributing load across all the web servers. In addition, Auto Scaling can terminate individual web servers without losing session state information.

There are numerous providers available that allow multiple web servers to share session state. One option is to use the DynamoDB Session State Provider that ships with the AWS SDK for .NET. This post introduces another option, storing session state in an ElastiCache cluster.

ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud. ElastiCache supports both Memcached and Redis cache clusters. While either technology can store ASP.NET session state, Microsoft offers a provider for Redis, and I will focus on Redis here.

Launch an ElastiCache for Redis Cluster

Let us begin by launching a new ElastiCache for Redis cluster in the default VPC using PowerShell. Note that you can use the ElastiCache console if you prefer.

First, get a reference to the default VPC and create a new security group for the cluster. The security group must allow inbound requests to Redis, which uses TCP port 6379.

$VPC = Get-EC2Vpc -Filter @{name='isDefault'; value='true'}
$Group = New-EC2SecurityGroup -GroupName 'ElastiCacheRedis' -Description 'Allows TCP Port 6379'
Grant-EC2SecurityGroupIngress -GroupId $Group -IpPermission  @{ IpProtocol="tcp"; FromPort="6379"; ToPort="6379"; IpRanges=$VPC.CidrBlock }

Second, launch a new Redis cluster. In the example below, I launch a single node cluster named “aspnet” running on a t2.micro. Make sure you specify the security group you created above.

New-ECCacheCluster -CacheClusterId 'aspnet' -Engine 'redis' -CacheNodeType 'cache.t2.micro' -NumCacheNode 1 -SecurityGroupId $Group

Finally, get the endpoint address of the instance you just created. Note that you must wait a few minutes for the cluster to launch before the address is available.

(Get-ECCacheCluster -CacheClusterId 'aspnet' -ShowCacheNodeInfo $true).CacheNodes[0].Endpoint.Address

The endpoint address is a fully qualified domain name that ends in cache.amazonaws.com and resolves to a private IP address in the VPC. For example, ElastiCache assigned my cluster the address below.

aspnet.k30h8n.0001.use1.cache.amazonaws.com

Configuring the Redis Session State Provider

With the Redis cluster running, you are ready to add the RedisSessionStateProvider to your ASP.NET application. Open your project in Visual Studio. First, right-click the project in Solution Explorer and select Manage NuGet Packages. Then, search for “RedisSessionStateProvider” and click the Install button as shown below.

Manage NuGet Packages

NuGet will add a custom session state provider to your project’s web.config file. Open the web.config file and locate the Microsoft.Web.Redis.RedisSessionStateProvider shown below.

<sessionState mode="Custom" customProvider="MySessionStateStore">
  <providers>
    <add name="MySessionStateStore" type="Microsoft.Web.Redis.RedisSessionStateProvider" host="127.0.0.1" accessKey="" ssl="false" />
  </providers>
</sessionState>

Now replace the host attribute with the endpoint address you received from Get-ECCacheCluster. For example, my configuration looks like this.

<sessionState mode="Custom" customProvider="MySessionStateStore">
  <providers>
    <add name="MySessionStateStore" type="Microsoft.Web.Redis.RedisSessionStateProvider" host="aspnet.k30h8n.0001.use1.cache.amazonaws.com" accessKey="" ssl="false" />
  </providers>
</sessionState>

You are now ready to deploy and test your application. Wasn’t that easy?

Summary

You can use ElastiCache to share ASP.NET session information among multiple web servers and eliminate the dependency on ELB sticky sessions. ElastiCache is simple to use and integrates with ASP.NET using the RedisSessionStateProvider available as a NuGet package. For more information about ElastiCache, see the ElastiCache documentation.

Create, Update, and Delete Global Secondary Indexes Using the Amazon DynamoDB Document API

by Manikandan Subramanian | in Java

Amazon DynamoDB recently announced a new feature, online indexing, which lets you create and modify global secondary indexes (GSIs) after table creation. You can also delete a global secondary index associated with a table at any time. This blog post shows how easy it is to use the Amazon DynamoDB Document API of the AWS SDK for Java to perform these operations.

Let’s say your application has a Customer table, with CustomerId as the primary key, that holds the personal details of a customer.

{
   "CustomerId" : 1000,
   "FirstName" : "John",
   "LastName" : "Myers",
   "Gender" : "M",
   "AddressLine1" : "156th Avenue",
   "City" : "Redmond",
   "State" : "WA",
   "Zip" : "98052"
}

You want to create a new global secondary index on the State attribute that helps you in search operations. You can do this with the following code:

// Initialize the DynamoDB object.
DynamoDB dynamo = new DynamoDB(Regions.US_EAST_1);

// Retrieve the reference to an existing Amazon DynamoDB table.
Table table = dynamo.getTable("Customer");

// Create a new Global Secondary Index.
Index index = table.createGSI(
                    new CreateGlobalSecondaryIndexAction()
                        .withIndexName("state-index")
                        .withKeySchema(
                          new KeySchemaElement("State", KeyType.HASH))
                        .withProvisionedThroughput(
                          new ProvisionedThroughput(25L, 25L))
                        .withProjection(
                          new Projection()
                             .withProjectionType(ProjectionType.ALL)),
                    new AttributeDefinition("State", 
                             ScalarAttributeType.S));

// Wait until the index is active.
index.waitForActive();
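
Once the index is active, you can query it through the same Document API. Here is a quick sketch, assuming the table already contains items with a State attribute:

// Query the new index for all customers in the state of Washington.
ItemCollection<QueryOutcome> items = index.query(new QuerySpec()
        .withHashKey("State", "WA"));

for (Item item : items) {
    System.out.println(item.toJSONPretty());
}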

Amazon DynamoDB allows you to modify the provisioned throughput of a global secondary index at any time after index creation. You can do this with the following code:

// Update the provisioned throughput of the Global Secondary Index.
index.updateGSI(new ProvisionedThroughput(5L, 5L));

// Wait until the index is active.
index.waitForActive();

You can also delete a global secondary index using the following code:

// Delete the Global Secondary Index.
index.deleteGSI();

// Wait until the index is deleted.
index.waitForDelete();

Do you use the Amazon DynamoDB Document API to access Amazon DynamoDB? Let us know what you think!