AWS Developer Blog

Version 2 Resource Interfaces

by Trevor Rowe | in Ruby

Version 1 of the AWS SDK for Ruby provides a 1-to-1 client class for each AWS service. For many services, it also provides a resource-oriented interface. These resource objects use the client to provide a more natural object-oriented experience when working with AWS APIs.

We are busy working on resource interfaces for the v2 Ruby SDK.

Resource Interfaces

The following examples use version 1 of the aws-sdk gem. This first example uses the 1-to-1 client to terminate running instances:

ec2 = AWS::EC2::Client.new
resp = ec2.describe_instances
resp[:reservations].each do |reservation|
  reservation[:instances].each do |instance|
    if instance[:state][:name] == 'running'
      ec2.terminate_instances(instance_ids: [instance[:instance_id]])
    end
  end
end

This example uses the resource abstraction to start instances in the stopped state:

ec2 = AWS::EC2.new
ec2.instances.each do |instance|
  instance.start if instance.status == :stopped
end

Resources for Version 2

We learned a lot of lessons from our v1 resource interfaces and are busy applying them to the v2 abstraction. Here are some of the major changes from v1 to v2.

Memoization Interfaces Removed

The version 1 resource abstraction was very chatty by default. It did not memoize any resource attributes and a user could unknowingly trigger a large number of API requests. As a workaround, users could use memoization blocks around sections of their code.

In version 2, all resource objects will hold onto their data/state until you explicitly call a method to reload the resource. We are working hard to make it very obvious when calling a method on a resource object will generate an API request over the network.

Less Hand-Written Code and More API Coverage

The version 1 SDK has hand-coded resource and collection classes. In version 2, our goal is to extend the service API descriptions that power our clients with resource definitions. These definitions will be consumed to generate our resource classes.

Using resource definitions helps eliminate a significant amount of hand written code, ensures interfaces are consistent, and makes it easier for users to contribute resource abstractions.

We also plan to provide extension points to resources to allow for custom logic and more powerful helpers.
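For example, because resource classes are plain Ruby classes, an extension point can be as simple as reopening a generated class; the names below are hypothetical:

```ruby
# Hypothetical generated resource class; names are illustrative, not SDK code.
module Resources
  class Table
    attr_reader :status

    def initialize(status)
      @status = status
    end
  end
end

# An extension point: reopen the generated class to add custom logic.
module Resources
  class Table
    def active?
      status == 'ACTIVE'
    end
  end
end

puts Resources::Table.new('ACTIVE').active?  # => true
```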

Resource Waiters

It is a common pattern to operate on a resource and then wait for the change to take effect. Waiting typically requires making an API request, asserting that some value has changed, and optionally waiting and trying again. Waiting for a resource to enter a certain state can be tricky: you need to deal with terminal states, failures, transient errors, etc.
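In its simplest form, a waiter is just a poll-and-retry loop. A generic sketch (not the SDK's actual waiter API) might look like:

```ruby
# Generic poll-and-retry waiter; the method name and options are
# illustrative, not the SDK's waiter API.
def wait_until(max_attempts: 10, delay: 1)
  max_attempts.times do
    return true if yield       # e.g., re-describe the resource and check state
    sleep delay                # back off before polling again
  end
  raise "resource did not reach the expected state"
end

states = ['CREATING', 'CREATING', 'ACTIVE']  # canned API responses
wait_until(delay: 0) { states.shift == 'ACTIVE' }  # returns true on the third poll
```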

Our goal is to provide waiter definitions and attach them to our resource interfaces. For example:

# update a table in Amazon DynamoDB
table = dynamodb.table('my-table')
table.update(provisioned_throughput: {
  read_capacity_units: 1000
})

# wait for the table to be ready
table.wait_for(:status, 'ACTIVE')

In a follow-up blog post, I will introduce the resources branch of the SDK, which is available today on GitHub. Please take a look; feedback is always welcome!

Supporting Windows Phone 8.1

by Norm Johanson | in .NET

When we introduced version 2 of the AWS SDK for .NET, it included support for Windows Store 8 and Windows Phone 8. With the release of Windows Phone 8.1, the runtime environment has changed to make it similar to Windows Store apps and to support Universal Apps. This means that when you create a new Windows Phone 8.1 project, you have two options: the older Silverlight runtime and the newer Universal runtime. Our current Windows Phone 8 version of the SDK works with the older Microsoft Silverlight runtime but is incompatible with the new Universal runtime.

To address the incompatibility with Universal Apps, we have created a new version of the SDK. This new version is a portable class library that targets both the Windows Store 8.1 and Windows Phone 8.1 platforms. It is included in the NuGet package for the SDK as well as in the installer. The following services are supported in the new version of the SDK:

  • Amazon Cognito
  • Amazon CloudWatch
  • Amazon DynamoDB
  • Amazon EC2
  • Amazon Elastic Transcoder
  • Amazon Glacier
  • Amazon Kinesis
  • Amazon RDS
  • Amazon S3
  • Amazon SimpleDB
  • Amazon SES
  • Amazon SNS
  • Amazon SQS
  • Auto Scaling
  • AWS CloudFormation
  • AWS Elastic Beanstalk
  • AWS Identity and Access Management
  • AWS Security Token Service
  • Elastic Load Balancing

Give it a try and let us know what you think.

Response Paging

by Trevor Rowe | in Ruby

We’ve been busy working on version 2 of the AWS SDK for Ruby. One of the features we added recently was response paging.

Paging in the Version 1 Ruby SDK

Version 1 of the Ruby SDK provides collection classes for many AWS resources. These collections are enumerable objects that yield resource objects.

iam = AWS::IAM.new
user_collection = iam.users
user_collection.each do |user|
  puts
end

A collection in version 1 sends a request to enumerate resources. If the response indicates that more data is available, then the collection will continue sending requests to enumerate all resources.

If you want to enumerate a resource that is not modeled in the version 1 SDK, then you need to drop down to the client abstraction and deal with paging on your own.

iam = AWS::IAM.new
options = { max_items: 2 }
begin
  response = iam.client.list_users(options)
  response[:users].each do |user|
    puts user[:user_name]
  end
  options[:marker] = response[:marker]
end while options[:marker]

Response Paging in Version 2 Ruby SDK

One of our main goals of the version 2 Ruby SDK is to improve the experience of users accessing AWS from the client abstractions. Version 2 does not provide resource abstractions yet, but it does provide full response paging from the client interface.

Here is the example above re-written using the version 2 Ruby SDK:

iam = Aws::IAM::Client.new
iam.list_users.each do |response|
  puts response.users.map(&:user_name)
end

Each AWS operation now returns a pageable response object. This object is enumerable: calling #each on an Aws::PageableResponse object yields the initial response and any follow-up responses.

There are a few other helper methods that make it easy to control response paging:

resp.last_page? #=> false
resp.next_page? #=> true

# get the next page, raises an error if this is the last page
resp = resp.next_page

# gets each response in a loop
resp = resp.next_page until resp.last_page?
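The behavior of these helpers can be mimicked in plain Ruby. Here is a toy stand-in (not the SDK class) that pages through canned responses:

```ruby
# Toy stand-in for a pageable response; not the SDK's implementation.
class FakePageableResponse
  include Enumerable

  attr_reader :data

  def initialize(pages, index = 0)
    @pages = pages
    @index = index
    @data  = pages[index]
  end

  def last_page?
    @index == @pages.size - 1
  end

  def next_page?
    !last_page?
  end

  def next_page
    raise 'no next page' if last_page?
    self.class.new(@pages, @index + 1)
  end

  # Yields this response page and every follow-up page.
  def each
    page = self
    loop do
      yield page
      break if page.last_page?
      page = page.next_page
    end
  end
end

resp = FakePageableResponse.new([[1, 2], [3, 4], [5]])
puts resp.last_page?                # => false
puts resp.next_page?                # => true
puts resp.flat_map(&:data).inspect  # => [1, 2, 3, 4, 5]
```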

Resource Enumeration in the Version 2 Ruby SDK

You will notice that the response paging examples don’t address enumerating individual resource objects. We are busy implementing a resource abstraction for the version 2 Ruby SDK. The v2 resources will be enumerable in a manner similar to v1; however, they will be built on top of client response paging.

Watch the GitHub repository and this blog for more information on resource abstractions in the version 2 Ruby SDK.

Subscribing Websites to Amazon SNS Topics

by Norm Johanson | in .NET

Amazon SNS allows you to create topics that can have many different subscribers receiving the messages sent to the topic. Amazon SQS queues and email addresses are probably the most common types of subscribers for a topic, but it is also possible to subscribe a website.

Setting Up the Website

The sample application creates a generic handler called SNSReceiver.ashx to handle requests coming from SNS. We’ll discuss each part of the SNSReceiver.ashx individually, but you can download a full copy of SNSReceiver.ashx here.

Each SNS message is sent to the website as an HTTP POST request, which the ProcessRequest method uses to determine whether it is an SNS message that should be processed. For HTTP GET requests, we write back status information about the messages received from SNS.

public void ProcessRequest(HttpContext context)
{
    if (context.Request.HttpMethod == "POST")
    {
        ProcessPost(context);
    }
    else if (context.Request.HttpMethod == "GET")
    {
        ProcessGet(context);
    }
}

SNS messages are sent as JSON documents. The AWS SDK for .NET includes the utility class Amazon.SimpleNotificationService.Util.Message for parsing this JSON. This class can also verify the authenticity of a message coming from SNS, which is done by calling IsMessageSignatureValid. When a subscription is made to a website, the website must confirm the subscription. The confirmation request comes into our website like any other message. To detect a confirmation request, we check the Type property of the Message object. If the type is SubscriptionConfirmation, we need to confirm the request; if the type is Notification, it is a message that needs to be processed.

private void ProcessPost(HttpContext context)
{
    string contentBody;
    using (StreamReader reader = new StreamReader(context.Request.InputStream))
    {
        contentBody = reader.ReadToEnd();
    }

    Message message = Message.ParseMessage(contentBody);

    // Make sure the message is authentic.
    if (!message.IsMessageSignatureValid())
        throw new Exception("Amazon SNS Message signature is invalid");

    if (message.IsSubscriptionType)
        ConfirmSubscription(context, message);
    else if (message.IsNotificationType)
        ProcessNotification(context, message);
}

To confirm the subscription, we need to call SubscribeToTopic, which uses the SubscribeURL property and makes an HTTP GET request. In a real production situation, you would check the TopicArn property to make sure this is a topic that you should subscribe to.

private void ConfirmSubscription(HttpContext context, Message message)
{
    if (!IsValidTopic(message.TopicArn))
        return;

    try
    {
        // Makes an HTTP GET request to the message's SubscribeURL.
        SubscribeToTopic(message.SubscribeURL);
        Trace.WriteLine(string.Format("Subscription to {0} confirmed.", message.TopicArn));
    }
    catch (Exception e)
    {
        Trace.WriteLine(string.Format("Error confirming subscription to {0}: {1}", message.TopicArn, e.Message));
    }
}
To process messages, we grab the MessageText property from the Message object. For demonstration purposes, we add each message to a list of messages that we attach to the Application object. This list of messages is then returned by the GET request handler to display the list of messages received.

private void ProcessNotification(HttpContext context, Message message)
{
    var log = context.Application["log"] as IList<string>;
    if (log == null)
    {
        log = new List<string>();
        context.Application["log"] = log;
    }

    log.Add(string.Format("{0}: Received notification from {1} with message {2}",
        DateTime.Now, message.TopicArn, message.MessageText));
}

Here is the ProcessGet method that is called from ProcessRequest for HTTP GET requests. It shows the list of messages received from SNS.

private void ProcessGet(HttpContext context)
{
    context.Response.ContentType = "text/plain";
    var log = context.Application["log"] as IList<string>;
    if (log == null)
    {
        context.Response.Write("No log messages");
    }
    else
    {
        foreach (var message in log.Reverse())
            context.Response.Write(message + "\n");
    }
}

Setting Up a Subscription

Remember that our website must be publicly accessible for SNS to send messages to it. We tested this by first deploying the application to AWS using AWS Elastic Beanstalk. We can use either the AWS Management Console or the AWS Toolkit for Visual Studio. Let’s use the Toolkit to test this.

First, we need to create the topic. In AWS Explorer, right-click Amazon SNS and select Create Topic, give the topic a name, and click OK.

Double-click on the new topic in the explorer to bring up its view. Click Create New Subscription, select HTTP or HTTPS for the protocol depending on how you deployed your application, and specify the URL to the SNSReceiver.ashx.

Depending on how fast the site responds to the confirmation, you might see a subscription status of "Pending Confirmation". If that’s the case, then just click the refresh button.

Once the subscription is confirmed, we can send a test message by clicking the Publish to Topic button, adding some sample text, and clicking OK. Since our website will respond to GET requests by writing out the messages it receives, we can navigate to the website to see if the test message made it.

Now that we have confirmed messages are being sent and received by our website, we can use the AWS SDK for .NET or any of the other AWS SDKs to send messages to our website. Here is a snippet of how to use the .NET SDK to send messages.

var snsClient = new AmazonSimpleNotificationServiceClient(RegionEndpoint.USWest2);
snsClient.Publish(new PublishRequest
{
    TopicArn = topicArn,
    Message = "Test message"
});

Enjoy sending messages, and let us know what you think.

Release: AWS SDK for PHP – Version 2.6.12

by Jeremy Lindblom | in PHP

We would like to announce the release of version 2.6.12 of the AWS SDK for PHP. This release adds support for new regions to the Kinesis client and new features to the AWS Support and AWS IAM clients.

Install the SDK

Pausing and Resuming transfers using Transfer Manager

by Manikandan Subramanian | in Java

One of the really cool features that TransferManager now supports is pausing and resuming file uploads and downloads. You can now pause a very large file upload and resume it at a later time without having to re-upload the bytes that were already uploaded. This also helps you survive JVM crashes, because the operation can be resumed from the point at which it stopped.

On an upload/download operation, TransferManager tries to capture the information that is required to resume the transfer after the pause. This information is returned as a result of executing the pause operation.

Here is an example of how to pause an upload:

// Initialize TransferManager.
TransferManager tm = new TransferManager();

// Upload a file to Amazon S3.
Upload myUpload = tm.upload(myBucket, myKey, myFile);

// Sleep until at least 20 MB has been transferred to Amazon S3.
long MB = 1024 * 1024;
TransferProgress progress = myUpload.getProgress();
while( progress.getBytesTransferred() < 20*MB ) Thread.sleep(2000);

// Initiate a pause with forceCancelTransfer as true. 
// This cancels the upload if the upload cannot be paused.
boolean forceCancel = true;
PauseResult<PersistableUpload> pauseResult = myUpload.tryPause(forceCancel);

In some cases, it is not possible to pause the upload. For example, if the upload involves client-side encryption using AmazonS3EncryptionClient, TransferManager doesn’t capture the encryption context, for security reasons, and will not be able to resume the upload. In such cases, you can choose to cancel the upload instead by passing true for the forceCancelTransfers parameter of Upload#tryPause(boolean). The status of the pause operation can be retrieved using PauseResult#getPauseStatus() and can be one of the following.

  • SUCCESS – Upload is successfully paused.
  • CANCELLED – User requested to cancel the upload if the pause has no effect on the upload.
  • CANCELLED_BEFORE_START – User tried to pause the upload before it started, so a cancel was requested.
  • NO_EFFECT – Pause operation has no effect on the upload. Upload continues to transfer data to Amazon S3.
  • NOT_STARTED – Pause operation has no effect on the upload because it has not yet started.

On a successful upload pause, PauseResult#getInfoToResume() returns an instance of PersistableUpload that can be used to resume the upload operation at a later time. To persist this information to a file, use the following code:

// Retrieve the persistable upload from the pause result.
PersistableUpload persistableUpload = pauseResult.getInfoToResume();

// Create a new file to store the information.
File f = new File("resume-upload");
if( !f.exists() ) f.createNewFile();
FileOutputStream fos = new FileOutputStream(f);

// Serialize the persistable upload to the file.
persistableUpload.serialize(fos);

While Upload#tryPause(boolean) returns a PauseResult whether the pause operation succeeds or fails, there is also an Upload#pause() method that throws a PauseException if the upload cannot be paused.

Here is an example of how to resume an upload.

// Initialize TransferManager.
TransferManager tm = new TransferManager();

FileInputStream fis = new FileInputStream(new File("resume-upload"));

// Deserialize PersistableUpload information from disk.
PersistableUpload persistableUpload = PersistableTransfer.deserializeFrom(fis);

// Call resumeUpload with PersistableUpload.
Upload myUpload = tm.resumeUpload(persistableUpload);


TransferManager skips the parts of the file that were uploaded previously and uploads the rest to Amazon S3.

Similar to the upload example, the following example pauses an Amazon S3 object download and persists the PersistableDownload to a file.

// Initialize TransferManager.
TransferManager tm = new TransferManager();

//Download the Amazon S3 object to a file.
Download myDownload =, myKey, new File("myFile"));

// Sleep until at least 20 MB has been transferred.
long MB = 1024 * 1024;
TransferProgress progress = myDownload.getProgress();
while( progress.getBytesTransferred() < 20*MB ) Thread.sleep(2000);

// Pause the download.
PersistableDownload persistableDownload = myDownload.pause();

// Create a new file to store the information.
File f = new File("resume-download");
if( !f.exists() ) f.createNewFile();
FileOutputStream fos = new FileOutputStream(f);

// Serialize the persistable download to the file.
persistableDownload.serialize(fos);

To resume a download, use the following code:

// Initialize TransferManager.
TransferManager tm = new TransferManager();

FileInputStream fis = new FileInputStream(new File("resume-download"));

// Deserialize PersistableDownload from disk.
PersistableDownload persistDownload = PersistableTransfer.deserializeFrom(fis);

// Call resumeDownload with PersistableDownload.
Download myDownload = tm.resumeDownload(persistDownload);


TransferManager performs a range GET operation during resumeDownload to download the remaining Amazon S3 object contents. ETags are returned only when downloading whole Amazon S3 objects, so ETag validation is skipped during the resumeDownload operation. Also, resuming a download for an object encrypted using CryptoMode.StrictAuthenticatedEncryption results in an AmazonClientException, because authenticity cannot be guaranteed for a range GET operation.

In order to support resuming uploads/downloads during JVM crashes, PersistableUpload or PersistableDownload must be serialized to disk as soon as it is available. You can achieve this by passing an instance of S3SyncProgressListener to TransferManager#upload or TransferManager#download that serializes the data to disk. The following example shows how to serialize the data to a file without calling a pause operation.

// Initialize TransferManager.
TransferManager tm = new TransferManager();

PutObjectRequest putRequest = new PutObjectRequest(myBucket,myKey,file);

// Upload a file to Amazon S3.
tm.upload(putRequest, new S3SyncProgressListener() {

    ExecutorService executor = Executors.newFixedThreadPool(1);

    @Override
    public void onPersistableTransfer(final PersistableTransfer persistableTransfer) {

        // Hand the disk write off to another thread so the listener returns quickly.
        executor.submit(new Runnable() {
            public void run() {
                try {
                    File f = new File("resume-upload");
                    if (!f.exists()) {
                        f.createNewFile();
                    }
                    FileOutputStream fos = new FileOutputStream(f);
                    persistableTransfer.serialize(fos);
                    fos.close();
                } catch (IOException e) {
                    throw new RuntimeException("Unable to persist transfer to disk.", e);
                }
            }
        });
    }
});
As the name indicates, S3SyncProgressListener is executed in the same thread as the upload/download operation, so it should return control to TransferManager quickly; a slow listener will affect the performance of the transfer. The example above is for illustrative purposes only: in your progress listener implementation, avoid performing blocking operations such as writing to disk directly in the listener, which is why the example hands the write off to an executor.

Do you like the new Pause and Resume functionality supported by TransferManager? Let us know your feedback in the comments.

AWS SDK Core v2.0.0.rc12 Updates

by Trevor Rowe | in Ruby

We recently published v2.0.0.rc12 of the aws-sdk-core gem. This release merges the long-running normalized branch onto master.

Upgrading Notes

Please note, when updating to rc12, you may need to make some minor code changes. These are summarized below:

  • Service modules now have a Client class; these should be used to construct API clients:

    # deprecated, will be removed for 2.0.0 final
    s3 = Aws::S3.new
    # preferred
    s3 = Aws::S3::Client.new

    Looking forward to the resources update, this change ensures we have a suitable namespace for the new resource classes. Look for more information in a follow-up blog post.

  • The Amazon SimpleDB client class has been renamed from Aws::SDB to Aws::SimpleDB. This also affects the short name used in configuration:

    # old configuration key
    Aws.config[:sdb] = { ... }
    # new key
    Aws.config[:simpledb] = { ... }
  • The :raw_json configuration option has been renamed to :simple_json. This is used for services that use the JSON protocol.

Less Visible Changes

If you have written plugins for the aws-sdk-core gem, there are a few other changes to the internals you need to be aware of.

  • Seahorse::Model has received significant updates, especially the API model format. This new format is much more flexible than the denormalized format used previously. Additionally, the AWS API models are now consumed as-is without translation. See the API reference for more information.

  • Seahorse::Client::Http::Request#endpoint is now a URI::HTTPS or URI::HTTP object. The custom Endpoint class has been removed in favor of these classes from the Ruby standard library.

  • Seahorse::Client::HandlerList#add no longer accepts instance objects and requires a handler class that can be constructed.

Follow up on Base64 Codec Performance

by Hanson Char | in Java

After we posted the previous blog, A Fast and Correct Base64 Codec, some readers expressed interest in more details about the performance comparison. So this blog post is a quick follow-up with a side-by-side decode/encode performance comparison of various Base64 codecs: the AWS SDK for Java, DataTypeConverter, Jakarta Commons Codec, and Java 8.

To generate the performance data, we used the same test suite as in the previous blog, except this time we ran the tests in a Java 8 VM (instead of Java 7 as in the original blog) so as to include the Java 8-specific Base 64 codec in the comparison. (The particular JVM is Java HotSpot(TM) 64-Bit Server VM, build 25.5-b02, mixed mode.)

As you can see from the graph and statistics below, in terms of raw performance, DataTypeConverter is the fastest at Base64 decoding, whereas Java 8 is the clear winner at Base64 encoding. However, as explained in the previous blog, decoding speed is not the only factor in selecting the Base64 decoder for the AWS SDK for Java; security and correctness matter as well.

Hope you find this information useful.

Decoder Performance

Base 64 decoder speed density plots

                (Decoder performance statistics associated with the graph generated via R above.)
                  vars    n  mean   sd median trimmed  mad min max range skew kurtosis   se
Commons              1 1000 24.28 4.36     24   23.88 4.45  18  50    32 1.06     2.12 0.14
DataTypeConverter    2 1000  8.51 2.55      8    8.14 1.48   6  51    45 5.83    79.84 0.08
Java8                3 1000 11.43 2.96     11   11.11 2.97   8  60    52 4.85    70.63 0.09
SDK                  4 1000 13.77 2.80     13   13.47 2.97  10  39    29 2.27    13.50 0.09

    Commons       DataTypeConverter   Java8             SDK
 Min.   :18.00    Min.   : 6.000    Min.   : 8.00    Min.   :10.00
 1st Qu.:21.00    1st Qu.: 7.000    1st Qu.: 9.00    1st Qu.:12.00
 Median :24.00    Median : 8.000    Median :11.00    Median :13.00
 Mean   :24.28    Mean   : 8.506    Mean   :11.43    Mean   :13.77
 3rd Qu.:27.00    3rd Qu.: 9.000    3rd Qu.:13.00    3rd Qu.:16.00
 Max.   :50.00    Max.   :51.000    Max.   :60.00    Max.   :39.00

Encoder Performance

Base 64 encoder speed density plots

                  (Encoder performance statistics associated with the graph generated via R above.)
                  vars    n  mean   sd median trimmed  mad min max range  skew kurtosis   se
Commons              1 1000 21.94 3.68     21   21.46 2.97  17  69    52  2.99    27.04 0.12
DataTypeConverter    2 1000 11.88 9.70     10   10.91 1.48   7 148   141 11.06   127.96 0.31
Java8                3 1000  7.22 3.56      6    6.91 1.48   5  99    94 18.08   447.36 0.11
SDK                  4 1000 11.84 9.66     10   10.84 1.48   5 139   134 10.52   114.39 0.31

    Commons       DataTypeConverter   Java8              SDK
 Min.   :17.00    Min.   :  7.00    Min.   : 5.000    Min.   :  5.00
 1st Qu.:19.00    1st Qu.: 10.00    1st Qu.: 6.000    1st Qu.: 10.00
 Median :21.00    Median : 10.00    Median : 6.000    Median : 10.00
 Mean   :21.94    Mean   : 11.88    Mean   : 7.225    Mean   : 11.84
 3rd Qu.:24.00    3rd Qu.: 12.00    3rd Qu.: 8.000    3rd Qu.: 12.00
 Max.   :69.00    Max.   :148.00    Max.   :99.000    Max.   :139.00


Release: AWS SDK for PHP – Version 2.6.11

by Michael Dowling | in PHP

We would like to announce the release of version 2.6.11 of the AWS SDK for PHP.

  • Added support for Amazon Cognito Identity
  • Added support for Amazon Cognito Sync
  • Added support for Amazon CloudWatch Logs
  • Added support for editing existing health checks and associating health checks with tags to the Amazon Route 53 client
  • Added the ModifySubnetAttribute operation to the Amazon EC2 client

Install the SDK

A Fast and Correct Base 64 Codec

by Hanson Char | in Java

In AWS, we always strive to make our tools and services better for our customers. One example is the recent improvement we made to the AWS Java SDK’s Base 64 encoding and decoding. In essence, we’ve replaced the use of Jakarta Commons Codec 1.x with a different implementation throughout the entire SDK. Why, you may wonder? There are two reasons.


The first is about performance. Here is a graph that summarizes the situation:

Base64 Performance Comparison

This graph is the frequency distribution of a thousand data points captured for each of the two codecs, Jakarta Commons 1.x vs. the AWS SDK for Java. Each data point represents the total number of milliseconds it took in each iteration to Base 64 encode and then decode 2 MB of random binary data. The test was conducted in a Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode). On average, the Java SDK’s Base 64 codec is about 2.47x faster, with a reduction in time variance of about 42.81%. (For readers who are statistically inclined, details are provided in the Appendix below.)


The second reason is correctness. Here is a quick quiz:

What is the correct result of Base 64 decoding the string "ZE==" ?

(Stop reading further in case the answer spoils the fun.)

The answer is: the decoding should fail. Why? Even though "ZE==" may look like a valid Base 64 encoded string, it is technically impossible to produce such a string via Base 64 encoding from any binary data in the first place! (Don’t take my word for it. Try it yourself!)
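You can demonstrate the problem with any lenient decoder; for example, using Ruby's built-in pack/unpack directives:

```ruby
# "ZE==" is structurally valid, so a lenient decoder accepts it silently...
decoded = "ZE==".unpack1("m")     # => "d"

# ...but no input can ever encode to "ZE==": re-encoding yields "ZA==".
reencoded = [decoded].pack("m0")
puts reencoded  # => "ZA=="
```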

If such an invalid string is passed to the latest Java SDK’s Base 64 codec, the decoding routine correctly fails fast with an IllegalArgumentException. As far as I know, no other existing Base 64 codec handles such "illegal" input correctly. Most Base 64 decoders (including the latest in Java 8) simply and silently return an implementation-specific, arbitrary value that could never be Base 64 re-encoded back to the original input string. You can probably imagine how such "random" behavior makes security engineers quite uncomfortable. :)
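As a rough illustration of the correctness criterion (the SDK itself fails fast inside the decoder rather than re-encoding), a decoder can be made strict by checking that re-encoding round-trips:

```ruby
# Rough illustration of the correctness criterion; not the SDK's actual
# algorithm. Decode leniently, then require that re-encoding reproduces
# the input exactly.
def strict_base64_decode(str)
  decoded = str.unpack1("m")
  raise ArgumentError, "invalid Base64: #{str}" unless [decoded].pack("m0") == str
  decoded
end

strict_base64_decode("ZA==")    # => "d"
# strict_base64_decode("ZE==")  # raises ArgumentError
```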


Under the hood, the latest Base 64 codec in the AWS SDK for Java is a hybrid implementation. For encoding from bytes to string, we directly use javax.xml.bind.DataTypeConverter available from the JDK (1.6+). For decoding, we use our own implementation for reasons of both speed and correctness as discussed above.


This fast and correct Base 64 codec is now available in the AWS SDK for Java 1.8.3 or later. You can of course directly and independently make use of it. For example:

import com.amazonaws.util.Base64;
byte[] bytes = ...
// Base 64 encode
String encoded = Base64.encodeAsString(bytes);
// Base 64 decode
byte[] decoded = Base64.decode(encoded);

For more details, check out the SDK source. Enjoy!


(Performance statistics associated with the graph generated via R above.)

           vars    n  mean   sd median trimmed  mad min max range skew kurtosis   se
Commons       1 1000 47.69 6.75     46    46.9 5.93  38  84    46 1.43     3.62 0.21
SDK           2 1000 19.51 2.89     19    19.1 2.97  16  46    30 1.84     7.90 0.09

    Commons           SDK
 Min.   :38.00    Min.   :16.00
 1st Qu.:42.00    1st Qu.:17.00
 Median :46.00    Median :19.00
 Mean   :47.69    Mean   :19.51
 3rd Qu.:51.00    3rd Qu.:21.00
 Max.   :84.00    Max.   :46.00