AWS Developer Blog

DynamoDB JSON Support

by Pavel Safronov | in .NET

The latest Amazon DynamoDB update added support for JSON data, making it easy to store JSON documents in a DynamoDB table while preserving their complex and possibly nested shape. Now, the AWS SDK for .NET has added native JSON support, so you can use raw JSON data when working with DynamoDB. This is especially helpful if your application needs to consume or produce JSON—for instance, if your application is talking to a client-side component that uses JSON to send and receive data—as you no longer need to manually parse or compose this data.

Using the new features

The new JSON functionality is exposed in the AWS SDK for .NET through the Document class:

  • ToJson – This method converts a given Document to its JSON representation
  • FromJson – This method creates a Document for a given JSON string

Here’s a quick example of this feature in action.

// Create a Document from JSON data
var jsonDoc = Document.FromJson(json);

// Use the Document as an attribute
var doc = new Document();
doc["Id"] = 123;
doc["NestedDocument"] = jsonDoc;

// Put the item
table.PutItem(doc);

// Load the item
doc = table.GetItem(123);

// Convert the Document to JSON
var jsonText = doc.ToJson();
var jsonPrettyText = doc["NestedDocument"].AsDocument().ToJsonPretty();

This example shows how a JSON-based Document can be used as an attribute, but you can also use the converted Document directly, provided that it has the necessary key attributes.
Also note that we have introduced the methods ToJson and ToJsonPretty. The difference between the two is that the latter will produce indented JSON that is easier to read.
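The compact-versus-indented distinction is the same one most JSON libraries draw. As a quick illustration in plain Ruby (not SDK code, just the general idea):

```ruby
require 'json'

doc = { 'Id' => 123, 'Nested' => { 'a' => 1 } }

puts JSON.generate(doc)        # compact output, analogous to ToJson
puts JSON.pretty_generate(doc) # indented output, analogous to ToJsonPretty
```

The compact form is what you would send over the wire; the pretty form is what you would print for a human.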

JSON types

DynamoDB data types are a superset of JSON data types. This means that all JSON data can be represented as DynamoDB data, while the opposite isn’t true.

So if you perform the conversion JSON -> Document -> JSON, the starting and final JSON will be identical (except for formatting). However, since not all DynamoDB data types can be converted to JSON, the conversion Document -> JSON -> Document may result in a different representation of your data.

The differences between DynamoDB and JSON are:

  • JSON has no sets, just arrays, so DynamoDB sets (SS, NS, and BS types) will be converted to JSON arrays.
  • JSON has no binary representation, so DynamoDB binary scalars and sets (B and BS types) will be converted to base64-encoded JSON strings or lists of strings.

If you do end up with a Document instance that has base64-encoded data, we have provided a method on the Document object to decode this data and replace it with the correct binary representation. Here is a simple example:

doc.DecodeBase64Attributes("Data", "DataSet");

After executing the above code, the "Data" attribute will contain binary data, while the "DataSet" attribute will contain a list of binary data.

I hope you find this feature a useful addition to the AWS SDK for .NET. Please give it a try and let us know what you think on GitHub or here in the comments!

AWS re:Invent 2014 Recap

by Norm Johanson | in .NET

Another AWS re:Invent has come and gone. Steve and I were lucky enough to be there and meet many developers using AWS in such interesting ways. We also gave a talk showing off some of the new features the team added to the SDK this year. The talk has been made available online.

In our talk, we showed demos for:


We hope to hear from more .NET developers at next year’s re:Invent. Until then, feel free to contact us either in our forums or on GitHub.


Waiters in the AWS SDK for Ruby

by Trevor Rowe | in Ruby

We’ve added a feature called Waiters to the v2 AWS SDK for Ruby, and I am pretty excited about it. A waiter is a simple abstraction around the pattern of polling an AWS API until a desired state is reached.
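The pattern a waiter encapsulates can be hand-rolled in a few lines (illustrative only; the `wait_until` below is a local sketch, not the SDK's method):

```ruby
# A minimal sketch of the polling pattern: try a condition up to
# max_attempts times, sleeping between attempts, and fail loudly
# if the desired state is never reached.
def wait_until(interval: 1, max_attempts: 3)
  max_attempts.times do |n|
    return true if yield(n)  # desired state reached
    sleep(interval)
  end
  raise 'waiter failed: too many attempts'
end

attempts = 0
wait_until(interval: 0) do |n|
  attempts = n + 1
  n >= 2  # pretend the third poll finds the instance running
end
puts "reached desired state after #{attempts} attempts"
```

The SDK's waiters wrap exactly this loop around the appropriate describe/get API call for each waiter name.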

Basic Usage

This simple example shows how to use waiters to block until a particular EC2 instance is running:

ec2 = Aws::EC2::Client.new
ec2.wait_until(:instance_running, instance_ids: ['i-12345678'])

Waiters will not wait indefinitely and can fail. Each waiter has a default polling interval and a maximum number of attempts to make. If a waiter encounters an unexpected error or fails to reach the desired condition in time, it will raise an error:

begin
  ec2.wait_until(:instance_running, instance_ids: ['i-12345678'])
rescue Aws::Waiters::Errors::WaiterFailed
  # oops
end


You can modify the default interval and the maximum number of attempts by passing a block.

# this will wait up to ~ one hour
ec2.wait_until(:instance_running, instance_ids: ['i-12345678']) do |w|

  # seconds between each attempt
  w.interval = 15

  # maximum number of polling attempts before giving up
  w.max_attempts = 240
end



In addition to the interval and maximum attempts, you can configure callbacks to trigger before each polling attempt and before sleeping between attempts.

ec2.wait_until(:instance_running, instance_ids: ['i-12345678']) do |w|

  w.before_attempt do |n|
    # n - the number of attempts made
  end

  w.before_wait do |n, resp|
    # n - the number of attempts made
    # resp - the client response from the previous attempt
  end
end

You can throw :success or :failure from these callbacks to stop the waiter immediately. You can use this to write your own delay and back-off logic.

Here I am using a callback to perform exponential back-off between polling attempts:

ec2.wait_until(:instance_running, instance_ids: ['i-12345678']) do |w|
  w.interval = 0 # disable normal sleep
  w.before_wait do |n, resp|
    sleep(n ** 2)
  end
end

This example gives up after one hour.

ec2.wait_until(:instance_running, instance_ids: ['i-12345678']) do |w|
  one_hour_later = Time.now + 3600
  w.before_wait do |n, resp|
    throw :failure, 'waited too long' if Time.now > one_hour_later
  end
end

Waiters and Resources, Looking Ahead

You may have noticed that some waiters are already exposed on the resource classes.

ec2 = Aws::EC2::Resource.new
instance = ec2.instance('i-12345678')
instance.stop
instance.wait_until_stopped
puts instance.id + ' is stopped'

In addition to connecting more waiters and resources, I’m excited to look into batch waiters. Imagine the following use case:

instances = ec2.create_instances(min_count: 5, ...)
instances.wait_until_running # hypothetical batch waiter
puts "the following new instances are now running:\n"
puts instances.map(&:id)

Waiters are documented in the Ruby SDK API reference. Each service client documents the #wait_until method and provides a list of available waiter names. Here are links to the Aws::EC2::Client waiter methods:

Give waiters a try and let us know what you think!

AWS re:Invent 2014

by Jeremy Lindblom | in PHP

We spent the past week at AWS re:Invent! The PHP SDK team was there with many of our co-workers and customers. It was a great conference, and we had a lot of fun.

If you did not attend re:Invent or follow our @awsforphp Twitter feed during the event, then you have a lot to catch up on.

New AWS Services and Features

Several new services were announced during the keynotes, on both the first day and second day, and during other parts of the event.

During the first keynote, three new AWS services for code management and deployment were announced: AWS CodeDeploy, AWS CodeCommit, and AWS CodePipeline. CodeDeploy is available today, and can help you automate code deployments to Amazon EC2 instances.

Additionally, three other new services were revealed that are related to enterprise security and compliance: AWS Key Management Service (AWS KMS), AWS Config, and AWS Service Catalog.

Amazon RDS for Aurora was also announced during the first keynote. Amazon Aurora is a new, MySQL-compatible, relational database engine built for high performance and availability.

The keynote on the second day boasted even more announcements, including the new Amazon EC2 Container Service, which is a highly scalable, high performance container management service that supports Docker containers.

Also, new compute-optimized (C4) Amazon EC2 Instances were announced, as well as new larger and faster Elastic Block Store (EBS) volumes backed with SSDs.

AWS Lambda was introduced during the second keynote, as well. It is a new compute service that runs your code in response to events and automatically manages the compute resources for you. To learn about AWS Lambda in more detail, you should check out their session at re:Invent, which shows how you can implement image thumbnail generation in your applications using AWS Lambda and the new Amazon S3 Event Notifications feature. They also briefly mention the upcoming DynamoDB streams feature in that presentation, which was announced just prior to the conference.

The APIs for AWS CodeDeploy, AWS KMS, AWS Config, and AWS Lambda are currently available, and all are supported in the AWS SDK for PHP as of version 2.7.5.

PHP Presentations

I had the honor of presenting a session about the PHP SDK called Building Apps with the AWS SDK for PHP, where I explained how to use many of the new features from Version 3 of the SDK in the context of building an application I called "SelPHPies with ElePHPants". You should definitely check it out whether you are new to or experienced with the SDK.

Here are the links to my presentation as well as two other PHP-specific sessions that you might be interested in.

  • Building Apps with the AWS SDK for PHP (slides, video)
  • Best Practices for Running WordPress on AWS (slides, video)
  • Running and Scaling Magento on AWS (video)

There were so many other great presentations at re:Invent. The slides, videos, and podcasts for all of the presentations are (or will be) posted online.


Announcements and presentations are exciting and informative, but my favorite part about any conference is the people. Re:Invent was no exception.

It was great to run into familiar faces from my Twitter stream like Juozas Kaziukėnas, Ben Ramsey, Brian DeShong, and Boaz Ziniman. I also had the pleasure of meeting some new friends from companies that had sent their PHP developers to the conference.

See You Next Year

We hope you take the time to check out some of the presentations from this year’s event, and consider attending next year. Get notified about registration for next year’s event by signing up for the re:Invent mailing list on the AWS re:Invent website.

Using Resources

by Trevor Rowe | in Ruby

With the recent 2.0 stable release of the aws-sdk-core gem, we started publishing preview releases of aws-sdk-resources. While the gem is in preview, you will need to use the --pre flag to install it:

gem install aws-sdk-resources --pre

If you use Bundler, you should specify the full version:

# update the version as needed
gem 'aws-sdk-resources', '2.0.1.pre'


Each service module has a Client class that provides a 1-to-1 mapping of the service API. Each service module now also has a Resource class that provides an object-oriented interface to work with.

Each resource object wraps a service client.

s3 = Aws::S3::Resource.new
s3.client
#=> #<Aws::S3::Client>

Given a service resource object, you can start exploring related resources. Let's start with buckets in Amazon S3:

# enumerate all of my buckets ...
#=> ['aws-sdk', ...]

# get one bucket
bucket = s3.buckets.first
#=> #<Aws::S3::Bucket name="aws-sdk">

If you know the name of a bucket, you can construct a bucket resource without making an API request.

bucket = s3.bucket('aws-sdk')

# constructors are also available
bucket = Aws::S3::Bucket.new('aws-sdk')
bucket = Aws::S3::Bucket.new(name: 'aws-sdk')

In each of the three previous examples, an instance of Aws::S3::Bucket is returned. This is a lightweight reference to an actual bucket that might exist in Amazon S3. When you reference a resource, no API calls are made until you operate on the resource.

Here I will use the bucket reference to delete the bucket.

bucket.delete
You can use a resource to reference other resources. In the next example, I use the bucket object to reference an object in the bucket by its key. Again, no API calls are made until I invoke an operation such as #put or #delete.

obj = bucket.object('hello.txt')
obj.put(body:'Hello World!')

Resource Data

Resources have one or more identifiers, and data. To construct a resource, you only need the identifiers. A resource can load itself using its identifiers.

Constructing a resource object from its identifiers will never make an API call.

obj = s3.bucket('aws-sdk').object('key') # no API call made

# calling #data loads an object, returning a structure
#=> "ed076287532e86365e841e92bfc50d8c"

# same as
#=> "ed076287532e86365e841e92bfc50d8c"

Resources will never update internal data until you call #reload. Use #reload if you need to poll a resource attribute for a change.

# force the resource to refresh data, returning self
obj.reload
Resource Associations

Most resource types are associated with one or more other resources. For example, an Aws::S3::Bucket has many objects, a website configuration, an ACL, etc.

Each association is documented on the resource class. The API documentation will specify what API call is being made. If the association is plural, it will document when multiple calls are made.

When working with plural associations, such as a bucket that has many objects, resources are automatically paginated. This makes it simple to lazily enumerate all objects.

bucket = s3.bucket('aws-sdk')

# enumerate **all** objects in a bucket, objects are fetched
# in batches of 1K until every object has been yielded
bucket.objects.each do |obj|
  puts "#{obj.key} => #{obj.etag}"
end

# filter objects with a prefix
bucket.objects(prefix: 'tmp/').each do |obj|
  puts obj.key
end
Some APIs support operating on resources in batches. When possible,
the SDK will provide batch actions.

# gets and deletes objects in batches of 1K, sweet!
bucket.objects.delete
Resource Waiters

Some resources have associated waiters. These allow you to poll until the resource enters a desired state.

instance = Aws::EC2::Instance.new('i-12345678')
instance.wait_until_stopped
puts instance.id + ' is stopped'

What's Next?

The resource interface has a lot of unfinished features. Some of the things we are working on include:

  • Adding #exists? methods to all resource objects
  • Consistent tagging interfaces
  • Batch waiters
  • More service coverage with resource definitions

We would love to hear your feedback. Resources are available now in the preview release of the aws-sdk-resources gem and in the master branch on GitHub.

Happy coding!

AWS Toolkit support for Visual Studio Community 2013

We often hear from our customers that they would like our AWS Toolkit for Visual Studio to work with the Express editions of Visual Studio. We understand how desirable this is, but due to restrictions built into the Express editions of Visual Studio, it hasn’t been possible…until now.

With the recent announcement of the new Visual Studio Community 2013 edition, it is now possible to get the full functionality of our AWS Toolkit for Visual Studio inside a free edition of Visual Studio. This includes the AWS Explorer for managing resources, Web Application deployment from the Solution Explorer, and the AWS CloudFormation editor for authoring and deploying your CloudFormation templates.

So if you haven’t tried the AWS Toolkit for Visual Studio, now is a great time to check it out.

Amazon S3 Encryption with AWS Key Management Service

by Hanson Char | in Java

With version 1.9.5 of the AWS SDK for Java, we are excited to announce the full support of S3 object encryption using AWS Key Management Service (KMS). Why KMS, you may ask? In a nutshell, AWS Key Management Service provides many security and administrative benefits, including centralized key management, better security in protecting your master keys, and it leads to simpler code!

In this blog, we will provide two quick examples of how you can make use of AWS KMS for client-side encryption via Amazon S3 Encryption Client, and compare it with the use of AWS KMS for server-side encryption via Amazon S3 Client.

The first example demonstrates how you can make use of KMS for client-side encryption in the Amazon S3 Encryption Client. As you see, it can be as simple as configuring a KMSEncryptionMaterialsProvider with a KMS Customer Master Key ID (generated a-priori, for example, via the AWS management console). Every object put to Amazon S3 would then result in a data key generated by AWS KMS for use in client-side encryption before sending the data (along with other metadata such as the KMS "wrapped" data key) to S3 for storage. During retrieval, KMS would automatically "unwrap" the encrypted data key, and the Amazon S3 Encryption Client would then use it to decrypt the ciphertext locally on the client side.

S3 client-side encryption using AWS KMS

String customerMasterKeyId = ...;
AmazonS3EncryptionClient s3 = new AmazonS3EncryptionClient(
            new ProfileCredentialsProvider(),
            new KMSEncryptionMaterialsProvider(customerMasterKeyId));

String bucket = ...;
byte[] plaintext = "Hello S3/KMS Client-side Encryption!".getBytes();
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(plaintext.length);

PutObjectResult putResult = s3.putObject(bucket, "hello_s3_kms.txt",
        new ByteArrayInputStream(plaintext), metadata);

S3Object s3object = s3.getObject(bucket, "hello_s3_kms.txt");

The second example demonstrates how you can delegate the crypto operations entirely to the Amazon S3 server side, yet using fully managed data keys generated by AWS KMS (instead of having the data key locally generated on the client side). This has the obvious benefit of offloading the computationally expensive operations to the server side, and potentially improving the client-side performance. Similar to what you did in the first example, all you need to do is to specify your KMS Customer Master Key ID (generated a-priori, for example, via the AWS management console) in the S3 put request.

S3 server-side encryption using AWS KMS

String customerMasterKeyId = ...;
AmazonS3Client s3 = new AmazonS3Client(new ProfileCredentialsProvider());

String bucket = ...;
byte[] plaintext = "Hello S3/KMS SSE Encryption!".getBytes();
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(plaintext.length);

PutObjectRequest req = new PutObjectRequest(bucket, "hello_s3_sse_kms.txt",
        new ByteArrayInputStream(plaintext), metadata)
            .withSSEAwsKeyManagementParams(
                new SSEAwsKeyManagementParams(customerMasterKeyId));
PutObjectResult putResult = s3.putObject(req);

S3Object s3object = s3.getObject(bucket, "hello_s3_sse_kms.txt");

For more information about AWS KMS, check out the AWS Key Management Service whitepaper, or the blog New AWS Key Management Service (KMS). Don’t forget to download the latest AWS SDK for Java and give it a spin!

Come see us at re:Invent 2014!

AWS re:Invent is just around the corner, and we are excited to meet you.

I will be presenting DEV 306 – Building cross platform applications using the AWS SDK for JavaScript on November 13, 2014. This talk will introduce you to building portable applications using the SDK and outline some differences in porting your application to multiple platforms. You can learn more about the talk here. Come check it out!

We will also be at the AWS Booth in the Expo Hall (map). Come talk to us about how you’re using AWS services, ask us a question, and learn about how to use our many AWS SDKs and tools.

Hope to see you there!

Announcing the AWS CloudTrail Processing Library

by Jason Fulghum | in Java

We’re excited to announce a new extension to the AWS SDK for Java: The AWS CloudTrail Processing Library.

AWS CloudTrail delivers log files containing AWS API activity to a customer’s Amazon S3 bucket. The AWS CloudTrail Processing Library makes it easy to build applications that read and process those CloudTrail logs and incorporate their own business logic. For example, developers can filter events by event source or event type, or persist events into a database such as Amazon RDS or Amazon Redshift or any third-party data store.

The AWS CloudTrail Processing Library, or CPL, eliminates the need to write code that polls Amazon SQS queues, reads and parses queue messages, downloads CloudTrail log files, and parses and serializes events in the log file. Using CPL, developers can read and process CloudTrail log files in as few as 10 lines of code. CPL handles transient and enduring failures related to network timeouts and inaccessible resources in a resilient and fault tolerant manner. CPL is built to scale easily and can process an unlimited number of log files in parallel. If needed, any number of hosts can each run CPL, processing the same S3 bucket and same SQS queue in parallel.

Getting started with CPL is easy. After configuring your AWS credentials and SQS queue, you simply implement a callback method to be called for every event, and start the AWSCloudTrailProcessingExecutor.

// This file contains your AWS security credentials and the name
// of an Amazon SQS queue to poll for updates
String myPropertiesFileName = "";

// An EventsProcessor is what processes each event from AWS CloudTrail
final AmazonSNSClient sns = new AmazonSNSClient();
EventsProcessor eventsProcessor = new EventsProcessor() {
    public void process(List<CloudTrailEvent> events) {
        for (CloudTrailEvent event : events) {
            CloudTrailEventData data = event.getEventData();
            if (data.getEventSource().equals("") &&
                data.getEventName().equals("ModifyVpcAttribute")) {
                System.out.println("Processing event: " + data.getRequestId());
                sns.publish(myQueueArn, "{ " +
                    "'requestId'= '" + data.getRequestId() + "'," +
                    "'request'  = '" + data.getRequestParameters() + "'," +
                    "'response' = '" + data.getResponseElements() + "'," +
                    "'source'   = '" + data.getEventSource() + "'," +
                    "'eventName'= '" + data.getEventName() + "'" +
                    " }");
            }
        }
    }
};
// Create AWSCloudTrailProcessingExecutor and start it
final AWSCloudTrailProcessingExecutor executor =
            new AWSCloudTrailProcessingExecutor
                .Builder(eventsProcessor, myPropertiesFileName)
                .build();
executor.start();

The preceding example creates an implementation of EventsProcessor that processes each of our events. If the event was from a user modifying an Amazon EC2 VPC through the ModifyVPCAttribute operation, then this code publishes a message to an Amazon SNS topic, so that an operator can review this potentially large change to the account’s VPC configuration.

This example shows how easy it is to use the CPL to process your AWS CloudTrail events. You’ve seen how to create your own implementation of EventsProcessor to specify your own custom logic for acting on CloudTrail events. In addition to EventsProcessor, you can also control the behavior of AWSCloudTrailProcessingExecutor with these interfaces:

  • EventFilter allows you to easily filter specific events that you want to process. For example, if you only want to process CloudTrail events in a specific region, or from a specific service, you can use an EventFilter to easily select those events.
  • SourceFilters allow you to perform filtering using data specific to the source of the events. In this case, the SQSBasedSource contains additional information you can use for filtering, such as how many times a message has been delivered.
  • ProgressReporters allow you to report back progress through your application so you can tell your users how far along in the processing your application is.
  • ExceptionHandlers allow you to add custom error handling for any errors encountered during event processing.

You can find the full source for the AWS CloudTrail Processing Library in the aws-cloudtrail-processing-library project on GitHub, and you can easily pick up the CPL as a dependency in your Maven-based projects:


For more information, go to the CloudTrail FAQ and documentation.

How are you using AWS CloudTrail to track your AWS usage?

Welcome to the AWS CLI Blog

by James Saryerwinnie | in AWS CLI

Hi everyone! Welcome to the AWS Command Line Interface blog. I’m James Saryerwinnie, and I work on the AWS CLI. This blog will be the place to go for information about the AWS CLI including:

  • Tips and tricks for using the AWS CLI
  • New feature announcements
  • Deep dives into various AWS CLI features
  • Guest posts from various AWS service teams

In the meantime, here are a few links to get you started:

We’re excited to get this blog started, and we hope to see you again real soon. Stay tuned!