AWS Developer Blog

Announcing CORS Support for Amazon EC2

We are pleased to announce that Amazon EC2 now supports cross-origin resource sharing (CORS) requests, which means you can now use the AWS SDK for JavaScript in the Browser to access your Amazon EC2 resources.

The following example code snippet shows how to make requests to Amazon EC2:

In your HTML file:

<script src="https://sdk.amazonaws.com/js/aws-sdk-2.1.34.min.js"></script>

In your JavaScript file:

var ec2 = new AWS.EC2({region: 'us-west-2'});

ec2.describeInstances(function(err, data) {
  if (err) {
    console.log(err);
  } else {
    data.Reservations.forEach(function(reservation) {
      reservation.Instances.forEach(function(instance) {
        console.log(instance.InstanceId);
      });
    });
  }
});

With Amazon EC2 support for CORS requests, you can now build rich two-tier web applications to manage your instances, VPCs, and more using the AWS SDK for JavaScript in the Browser. Check out our API documentation for details about how to use the API.

We hope you are excited to use the Amazon EC2 API directly from the browser. We’re eager to know what you think, so leave us a comment or tweet about it @awsforjs.

SDK Extensions Moved to Modularization

by Norm Johanson | in .NET

We are currently finalizing the move of the AWS Tools for Windows PowerShell and the AWS Toolkit for Visual Studio to the new modularized SDK. In addition, we have released new versions of the ASP.NET session provider and the .NET System.Diagnostics trace listener. These two extensions have moved from the SDK GitHub repository into their own separate repositories for better discoverability and to make it easier to track progress.

Session Provider

The Amazon DynamoDB session state provider allows ASP.NET applications to store their sessions inside Amazon DynamoDB. This helps applications scale across multiple application servers while maintaining session state across the system. To get started, check out the NuGet package or view the source on GitHub.
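
The provider is wired up in the application’s Web.config. As a sketch, the configuration looks roughly like the following; the type and attribute names here follow the provider’s documentation at the time of writing, so treat them as assumptions and confirm them against the README in the repository:

```xml
<configuration>
  <system.web>
    <!-- Route ASP.NET session state to the DynamoDB-backed provider -->
    <sessionState mode="Custom" customProvider="DynamoDBSessionStoreProvider">
      <providers>
        <add name="DynamoDBSessionStoreProvider"
             type="Amazon.SessionProvider.DynamoDBSessionStateStore, AWS.SessionProvider"
             Table="ASP.NET_SessionState"
             Region="us-east-1" />
      </providers>
    </sessionState>
  </system.web>
</configuration>
```

With this in place, the familiar `Session["key"]` API works unchanged; only the storage backend moves to DynamoDB.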

Trace Listener

The Amazon DynamoDB trace listener allows System.Diagnostics.Trace calls to be written to Amazon DynamoDB. When you run an application across several hosts, it is useful to have all of the log messages in one location where the data can be searched. To get started, check out the NuGet package or view the source on GitHub.
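
As a sketch of the configuration side, the listener is registered under system.diagnostics in the application’s config file. The type and attribute names below are assumptions based on the listener’s documentation, so confirm them against the README in its repository:

```xml
<configuration>
  <system.diagnostics>
    <trace>
      <listeners>
        <!-- Send Trace.Write/TraceInformation calls to DynamoDB -->
        <add name="dynamo"
             type="Amazon.TraceListener.DynamoDBTraceListener, AWS.TraceListener"
             Region="us-west-2" />
      </listeners>
    </trace>
  </system.diagnostics>
</configuration>
```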

HaPHPy 20th Birthday to PHP

by Jeremy Lindblom | in PHP

Twenty years ago, Rasmus Lerdorf announced version 1.0 of PHP. It’s now two decades later, and PHP has evolved so much and is still going strong. The AWS SDK for PHP team would like to say thank you to everyone who has contributed to the PHP language and community over these past twenty years, and wish PHP a very HaPHPy birthday.

Join in the celebration today by reflecting on the history of PHP, following the #20yearsofphp hashtag, and checking out other blog posts from people in the PHP community.

Generating Amazon S3 Pre-signed URLs with SSE-C (Part 5 Finale)

by Hanson Char | in Java

In the previous post (Part 4), we demonstrated how to generate and consume pre-signed URLs using SSE-C. In this final post of the series, I will provide code examples that show how to generate and consume pre-signed URLs using SSE-C while restricting the URLs to be used only with specific customer-provided encryption keys.

As indicated in Part 1 of this blog, a prerequisite to this option is that you must use Signature Version 4 (SigV4). You can enable SigV4 in the AWS SDK for Java in various ways, including using S3-specific system properties, or programmatically as demonstrated previously. Here, the code examples will assume you have enabled SigV4.
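
For reference, the system-property route is a one-liner. The property name below is the one defined by SDKGlobalConfiguration in version 1.x of the AWS SDK for Java (verify it against your SDK version), and it must be set before the Amazon S3 client is constructed:

```java
public class EnableSigV4 {
    public static void main(String[] args) {
        // Opt the S3 client into Signature Version 4.
        // This must run before the AmazonS3Client is constructed.
        System.setProperty("com.amazonaws.services.s3.enableV4", "true");
        System.out.println(System.getProperty("com.amazonaws.services.s3.enableV4"));
    }
}
```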

SSE-C with specific Customer-Provided Encryption Keys

Here’s how to generate a pre-signed PUT URL using SSE-C (with specific customer-provided encryption keys):

AmazonS3Client s3 = ...;       // an S3 client configured to use SigV4
String myExistingBucket = ...; // an existing bucket
String myKey = ...;            // target S3 key
SecretKey customerKey = ...;   // the customer-provided encryption key
GeneratePresignedUrlRequest genreq = new GeneratePresignedUrlRequest(
    myExistingBucket, myKey, HttpMethod.PUT);
// Restrict the pre-signed PUT URL to be used only against
// a specific customer-provided encryption key
genreq.setSSECustomerKey(new SSECustomerKey(customerKey));
// Note s3 must have been configured to use SigV4
URL puturl = s3.generatePresignedUrl(genreq);
System.out.println("Presigned PUT URL with SSE-C: " + puturl);

Here’s how to make use of the generated pre-signed PUT URL via the Apache HttpClient (4.3):

File fileToUpload = ...;
SecretKey customerKey = ...;
HttpPut putreq = new HttpPut(URI.create(puturl.toExternalForm()));
// Specify the customer-provided encryption key 
// when consuming the pre-signed URL
putreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_ALGORITHM,
    SSEAlgorithm.AES256.getAlgorithm()));
putreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY, 
    Base64.encodeAsString(customerKey.getEncoded())));
putreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY_MD5, 
    Md5Utils.md5AsBase64(customerKey.getEncoded())));
putreq.setEntity(new FileEntity(fileToUpload));
CloseableHttpClient httpclient = HttpClients.createDefault();
httpclient.execute(putreq);
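
The two customer-key headers set above are nothing more than the Base64-encoded key bytes and the Base64-encoded MD5 digest of those bytes (the snippet above uses the SDK’s Base64 and Md5Utils helpers for this). A self-contained sketch of computing the same header values with only the JDK:

```java
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class SseCHeaderValues {

    // Value for the x-amz-server-side-encryption-customer-key header:
    // the raw key bytes, Base64-encoded.
    static String keyHeaderValue(SecretKey key) {
        return Base64.getEncoder().encodeToString(key.getEncoded());
    }

    // Value for the x-amz-server-side-encryption-customer-key-MD5 header:
    // the MD5 digest of the raw key bytes, Base64-encoded.
    static String keyMd5HeaderValue(SecretKey key) {
        try {
            byte[] md5 = MessageDigest.getInstance("MD5").digest(key.getEncoded());
            return Base64.getEncoder().encodeToString(md5);
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256); // SSE-C uses 256-bit AES keys
        SecretKey customerKey = keyGen.generateKey();
        System.out.println("customer-key:     " + keyHeaderValue(customerKey));
        System.out.println("customer-key-MD5: " + keyMd5HeaderValue(customerKey));
    }
}
```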

Here’s how to generate a pre-signed GET URL for use with SSE-C (with specific customer-provided encryption keys):

GeneratePresignedUrlRequest genreq = new GeneratePresignedUrlRequest(
    BUCKET, KEY, HttpMethod.GET);
// Restrict the pre-signed GET URL to be used only against
// a specific customer-provided encryption key
genreq.setSSECustomerKey(new SSECustomerKey(customerKey));
// Note s3 must have been configured to use SigV4
URL geturl = s3.generatePresignedUrl(genreq);
System.out.println("Presigned GET URL for SSE-C: " + geturl);

Here’s how to make use of the generated pre-signed GET URL via the Apache HttpClient (4.3):

HttpGet getreq = new HttpGet(URI.create(geturl.toExternalForm()));
// Specify the customer-provided encryption key 
// when consuming the pre-signed URL
getreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_ALGORITHM,
    SSEAlgorithm.AES256.getAlgorithm()));
getreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY,
    Base64.encodeAsString(customerKey.getEncoded())));
getreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY_MD5,
    Md5Utils.md5AsBase64(customerKey.getEncoded())));
CloseableHttpClient httpclient = HttpClients.createDefault();
CloseableHttpResponse res = httpclient.execute(getreq);
InputStream is = res.getEntity().getContent();
String actual = IOUtils.toString(is);

In summary, we have shown how you can generate and consume pre-signed URLs using SSE-C with specific customer-provided encryption keys.

We hope you find this blog series on generating pre-signed URLs with SSE useful. We would be very interested to hear about how you make use of this feature in your applications. Please feel free to drop us some comments.

Ciao for now!

Version 3 of the AWS SDK for PHP

by Jeremy Lindblom | in PHP

Last October, we announced the Developer Preview of Version 3 of the AWS SDK for PHP. We even presented about it at AWS re:Invent last November. We are grateful for your early feedback and support. Since last fall, we’ve been hard at work on improving, testing, and documenting Version 3 to get it ready for a stable release. We’re excited to announce that Version 3 of the AWS SDK for PHP is now generally available via Composer and on GitHub.

Version 3 of the SDK (V3) represents a significant effort to improve the capabilities of the SDK, incorporate over two years of customer feedback, upgrade our dependencies, improve performance, and adopt the latest PHP standards.

What we’re excited about

We’ve made many improvements to V3, even since our blog post about the Developer Preview (check out that post if you haven’t already). There are also some things that have changed or have been removed since Version 2 of the SDK (V2). We encourage you to take a look at our V3 Migration Guide for all the details about what has changed.

V3 has less code and better performance than V2, and it uses the latest version of the Guzzle HTTP library. It also has some exciting new features and improvements.

Asynchronous requests and promises

V3 allows you to execute operations asynchronously. This not only makes it easier to issue concurrent requests, it also makes it easier to create asynchronous and cooperative workflows. We use Promises, the basic building block of our asynchronous features, throughout the SDK’s core. We also use them to implement the SDK’s higher-level abstractions, including Command Pools, Paginators, Waiters, and service-specific features like the S3 MultipartUploader. That means that almost every feature of the SDK can be used in an asynchronous way.

To execute an operation asynchronously, you simply add "Async" as a suffix to your method call.

// The SYNCHRONOUS (normal) way:

// Executing an operation returns a Result object.
$result = $s3Client->putObject([
    'Bucket' => 'your-bucket',
    'Key'    => 'docs/file.pdf',
    'Body'   => fopen('/path/to/file.pdf', 'r'),
]);

// You can access the result data from the Result object.
echo $result['ObjectURL'];

// The ASYNCHRONOUS way:

// Executing an operation asynchronously returns a Promise object.
$promise = $s3Client->putObjectAsync([
    'Bucket' => 'your-bucket',
    'Key'    => 'docs/file.pdf',
    'Body'   => fopen('/path/to/file.pdf', 'r'),
]);

// Wait for the operation to complete to get the Result object.
$result = $promise->wait();

// Then you can access the result data like normal.
echo $result['ObjectURL'];

The true power of using asynchronous requests is being able to create asynchronous workflows. For example, if you wanted to create a DynamoDB table, wait until it is ACTIVE (using Waiters), and then write some data to it, you can use the then() method of the Promise object to chain those actions together.

$client->createTableAsync([
    'TableName' => $table,
    // Other params...
])->then(function () use ($client, $table) {
    return $client->getWaiter('TableExists', [
        'TableName' => $table,
    ])->promise();
})->then(function () use ($client, $table) {
    return $client->putItemAsync([
        'TableName' => $table,
        'Item' => [
            // Item attributes...
        ]
    ]);
})->wait();

Please take a look at our detailed guide on promises for more information.

PSR-7 compliance and decoupling of the HTTP layer

The PHP-FIG has recently announced the acceptance of PSR-7, a "PHP Standard Recommendation" that defines interfaces for HTTP messages (e.g., Request and Response objects). We have adopted these interfaces for how we represent HTTP requests within the SDK, and it has allowed us to decouple the SDK from Guzzle such that V3 will work with both Guzzle 5 and Guzzle 6. It’s also possible to write your own HTTP handler for the SDK that does not use Guzzle.

The SDK defaults to using Guzzle 6 to perform HTTP requests. Guzzle 6 comes with a number of improvements, including support for asynchronous requests, PSR-7 compliance, and swappable HTTP adapters (including a PHP stream wrapper implementation that can be used on systems where cURL is not available).

JMESPath querying of results and paginators

In V3, the Result object has a new method: search(). With this method you can query data in Result objects using JMESPath expressions. JMESPath is a query language for JSON, or, in our case, PHP arrays.

$result = $ec2Client->describeInstances();
print_r($result->search('Reservations[].Instances[].InstanceId'));

JMESPath expressions can also be applied to Paginators in the same way. This will return a new Iterator that yields the result of the expression on every page of data.

$results = $s3->getPaginator('ListObjects', [
    'Bucket' => 'my-bucket',
]);
foreach ($results->search('Contents[].Key') as $key) {
    echo $key . "\n";
}

Time to code

We hope you will enjoy using Version 3 of the AWS SDK for PHP. To get started, install it via Composer or grab the source on GitHub.

Creating Amazon CloudFront Signed URLs in Node.js

Amazon CloudFront allows you to use signed URLs to restrict access to content, so you can securely serve private content, or content intended only for selected users, through CloudFront. Read more about how CloudFront signed URLs work.

This article describes how to generate Amazon CloudFront signed URLs in Node.js.

To generate signed URLs, you can use the aws-cloudfront-sign npm module.

Installing the module

npm install aws-cloudfront-sign

Using the module in your code

We recommend that you restrict direct access to your bucket, and require that users access content only through CloudFront. Read more about using an origin access identity to restrict access to Amazon S3 content.

To create a signed URL, you first need to configure your distribution to specify which AWS accounts can create signed URLs (trusted signers). You then need to create a CloudFront key pair for your trusted signer. Once you’ve downloaded your private key for the key-pair ID (Access Key ID), you can use it in your code to generate signed URLs.

The following code shows how to generate signed URLs for web distributions:

var cfsign = require('aws-cloudfront-sign');

var signingParams = {
  keypairId: process.env.PUBLIC_KEY,
  privateKeyString: process.env.PRIVATE_KEY,
  // Optional - this can be used as an alternative to privateKeyString
  privateKeyPath: '/path/to/private/key',
  expireTime: 1426625464599 // expiry time, in milliseconds since epoch
};

// Generating a signed URL
var signedUrl = cfsign.getSignedUrl(
  'http://example.cloudfront.net/path/to/s3/object', 
  signingParams
);

This module can also be used to generate signed URLs for RTMP distributions:

var signedRTMPUrlObj = cfsign.getSignedRTMPUrl(
  'example.cloudfront.net', 
  // Must not contain prefixes like mp3: or mp4:
  'path/to/s3/object', 
  signingParams
);

This generated URL can now be served to users who are entitled to access the content. We hope this simplifies creating signed URLs for Amazon CloudFront in Node.js.

Generating Amazon S3 Pre-signed URLs with SSE-C (Part 4)

by Hanson Char | in Java

In Part 3 of this series, we demonstrated how to generate and consume pre-signed URLs using SSE-S3. In this post, I will provide code examples that show how to generate and consume pre-signed URLs using one of the more advanced options, namely SSE-C (server-side encryption with customer-provided encryption keys). The code samples assume version 1.9.31 or later of the AWS SDK for Java.

Server-Side Encryption with Customer-Provided Encryption Keys (SSE-C)

Here’s how to generate a pre-signed PUT URL using SSE-C:

AmazonS3Client s3 = ...;
String myExistingBucket = ...; // an existing bucket
String myKey = ...;            // target S3 key
// Generate a pre-signed PUT URL for use with SSE-C
GeneratePresignedUrlRequest genreq = new GeneratePresignedUrlRequest(
    myExistingBucket, myKey, HttpMethod.PUT);
genreq.setSSECustomerKeyAlgorithm(SSEAlgorithm.getDefault());
URL puturl = s3.generatePresignedUrl(genreq);
System.out.println("Presigned PUT URL with SSE-C: " + puturl);

Here’s how to make use of the generated pre-signed PUT URL via the Apache HttpClient (4.3):

File fileToUpload = ...;
SecretKey customerKey = ...;
HttpPut putreq = new HttpPut(URI.create(puturl.toExternalForm()));
// Note it's necessary to specify the customer-provided encryption key 
// when consuming the pre-signed URL
putreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_ALGORITHM,
    SSEAlgorithm.AES256.getAlgorithm()));
putreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY, 
    Base64.encodeAsString(customerKey.getEncoded())));
putreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY_MD5, 
    Md5Utils.md5AsBase64(customerKey.getEncoded())));
putreq.setEntity(new FileEntity(fileToUpload));
CloseableHttpClient httpclient = HttpClients.createDefault();
httpclient.execute(putreq);

Here’s how to generate a pre-signed GET URL for use with SSE-C:

GeneratePresignedUrlRequest genreq = new GeneratePresignedUrlRequest(
    BUCKET, KEY, HttpMethod.GET);
genreq.setSSECustomerKeyAlgorithm(SSEAlgorithm.getDefault());
URL geturl = s3.generatePresignedUrl(genreq);
System.out.println("Presigned GET URL for SSE-C: " + geturl);

Here’s how to make use of the generated pre-signed GET URL via the Apache HttpClient (4.3):

HttpGet getreq = new HttpGet(URI.create(geturl.toExternalForm()));
// Note it's necessary to specify the customer-provided encryption key
// when consuming the pre-signed URL
getreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_ALGORITHM,
    SSEAlgorithm.AES256.getAlgorithm()));
getreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY,
    Base64.encodeAsString(customerKey.getEncoded())));
getreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY_MD5,
    Md5Utils.md5AsBase64(customerKey.getEncoded())));
CloseableHttpClient httpclient = HttpClients.createDefault();
CloseableHttpResponse res = httpclient.execute(getreq);
InputStream is = res.getEntity().getContent();
String actual = IOUtils.toString(is);

In Part 5, the last blog of this series, I will provide code examples that show how to generate and consume pre-signed URLs using SSE-C, but restricting the URLs to be used only with specific customer-provided encryption keys. 

Stay tuned!

Serving Private Content Through Amazon CloudFront Using Signed Cookies

by Milind Gokarn | in .NET

Private content can be served through Amazon CloudFront in two ways: through signed URLs or signed cookies. For information about which approach to choose, see Choosing Between Signed URLs and Signed Cookies.

The AWS SDK for .NET includes an Amazon.CloudFront.AmazonCloudFrontUrlSigner utility class that can be used to generate signed URLs. Based on a customer request, we recently added the Amazon.CloudFront.AmazonCloudFrontCookieSigner utility class to make it easier to generate the cookies required to access private content through Amazon CloudFront.

To start serving private content through Amazon CloudFront:

  • Creating CloudFront Key Pairs for Your Trusted Signers. You can either create a new key pair using the AWS Management Console or, if you have your own RSA key pair, you can upload the public key to create a key pair. Each key pair has a key pair ID, which will be used to create the signed cookies.
  • The RSA key pair file (.pem file) must be available when creating signed cookies. If you created the key pair using the AWS Management Console, you can download the key pair file and store it locally.
  • Adding Trusted Signers to Your Distribution. You can do this through the AWS Management Console or programmatically, through the Amazon.CloudFront.IAmazonCloudFront.CreateDistribution or Amazon.CloudFront.IAmazonCloudFront.UpdateDistribution APIs.


Creating Signed Cookies for Canned Policies

Canned policies allow you to specify an expiration date only. Custom policies allow more complex restrictions. For a comparison between the two types of policies, see Choosing Between Canned and Custom Policies for Signed Cookies.

The following code snippet shows the use of the Amazon.CloudFront.AmazonCloudFrontCookieSigner.GetCookiesForCannedPolicy method to create signed cookies for canned policies.

// The key pair Id for the CloudFront key pair
var cloudFrontKeyPairId = "key_pair_id";

// The RSA key pair file (.pem file) that contains the private key    
var privateKeyFile = new FileInfo(@"rsa_file_path"); 

// Path to resource served through a CloudFront distribution
var resourceUri = "http://xyz.cloudfront.net/image1.jpeg";

var cookies = AmazonCloudFrontCookieSigner.GetCookiesForCannedPolicy(
    resourceUri,
    cloudFrontKeyPairId,
    privateKeyFile,
    DateTime.Today.AddYears(1)); // Date until which the signed cookies are valid
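
Under the hood, the three canned-policy cookies are the expiration time, an RSA-SHA1 signature of an implicit policy document, and the key pair ID; the signature is encoded with CloudFront’s URL-safe Base64 variant ('+' becomes '-', '=' becomes '_', '/' becomes '~'). Purely to illustrate those mechanics (this is not the SDK’s implementation, and it generates a throwaway RSA key pair instead of loading a real CloudFront .pem file), a plain-JDK sketch of the signing step:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;
import java.util.Base64;

public class CannedPolicySketch {

    // CloudFront's URL-safe Base64 variant: '+' -> '-', '=' -> '_', '/' -> '~'
    static String cfBase64(byte[] data) {
        return Base64.getEncoder().encodeToString(data)
                .replace('+', '-').replace('=', '_').replace('/', '~');
    }

    public static void main(String[] args) throws Exception {
        long expires = System.currentTimeMillis() / 1000 + 86400; // one day, epoch seconds
        // The implicit canned policy that gets signed
        String policy = "{\"Statement\":[{\"Resource\":"
                + "\"http://xyz.cloudfront.net/image1.jpeg\","
                + "\"Condition\":{\"DateLessThan\":{\"AWS:EpochTime\":" + expires + "}}}]}";

        // A real application loads the private key from the CloudFront key
        // pair's .pem file; a throwaway key pair keeps this self-contained.
        KeyPair keyPair = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        Signature signer = Signature.getInstance("SHA1withRSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(policy.getBytes(StandardCharsets.UTF_8));

        System.out.println("CloudFront-Expires=" + expires);
        System.out.println("CloudFront-Signature=" + cfBase64(signer.sign()));
        System.out.println("CloudFront-Key-Pair-Id=key_pair_id");
    }
}
```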

Creating Signed Cookies for Custom Policies

Use custom policies to apply more complex restrictions on access to private content. In addition to an expiration date, custom policies allow you to set resource paths with wildcards, an activation time, and an IP address or address range.

The following code snippet shows how to generate signed cookies for custom policies.

// The key pair Id for the CloudFront key pair
var cloudFrontKeyPairId = "key_pair_id"; 

// The RSA key pair file (.pem file) that contains the private key    
var privateKeyFile = new FileInfo(@"rsa_file_path"); 

// Path to resource served through a CloudFront distribution
var resourceUri = "http://xyz.cloudfront.net/image1.jpeg";

var cookies = AmazonCloudFrontCookieSigner.GetCookiesForCustomPolicy(
    AmazonCloudFrontCookieSigner.Protocols.Http | 
    AmazonCloudFrontCookieSigner.Protocols.Https, // Allow either http or https

    "xyz.cloudfront.net",      // CloudFront distribution domain
    privateKeyFile,
    "content/*.jpeg",          // Allows use of wildcards
    cloudFrontKeyPairId, 
    DateTime.Today.AddDays(1), // Date until which the signed cookies are valid
    DateTime.MinValue,         // Date from which the signed cookies are valid;
                               // a value of DateTime.MinValue is ignored
    "192.0.2.0/24");           // Source IP or range of IP addresses;
                               // a value of string.Empty or null is ignored

Send Cookies to a User’s Browser

Typically, you would create signed cookies when a user visits your website and signs in (or meets some other criteria). At that point, the cookies are generated on the web server and included in the response. The user’s browser caches these cookies and includes them in subsequent requests to Amazon CloudFront when the user accesses the URL for private content hosted on Amazon CloudFront.

The following snippet sends the generated cookies back in the HTTP response to the browser in an ASP.NET web application.

using System.Web;
...
// Set signed cookies for canned policies
Response.Cookies.Add(new HttpCookie(cookies.Expires.Key, cookies.Expires.Value));
Response.Cookies.Add(new HttpCookie(cookies.Signature.Key, cookies.Signature.Value));
Response.Cookies.Add(new HttpCookie(cookies.KeyPairId.Key, cookies.KeyPairId.Value));

// Or set signed cookies for custom policies
Response.Cookies.Add(new HttpCookie(cookies.Policy.Key, cookies.Policy.Value));
Response.Cookies.Add(new HttpCookie(cookies.Signature.Key, cookies.Signature.Value));
Response.Cookies.Add(new HttpCookie(cookies.KeyPairId.Key, cookies.KeyPairId.Value));

In this blog post, we showed how to use the customer-suggested Amazon.CloudFront.AmazonCloudFrontCookieSigner utility class to generate signed cookies to access private content from Amazon CloudFront. If you have ideas for new utilities or high-level APIs to add to the SDK, please provide your feedback here.

RailsConf 2015 Recap

by Alex Wood | in Ruby

Recently, Trevor, Loren, and I from the AWS SDK team attended RailsConf in Atlanta. We had a great time at the conference and enjoyed connecting with many of you there.

Our Rails on Amazon Web Services Workshop

At RailsConf, we ran a workshop called Deploy and Manage Ruby on Rails Apps on AWS. It was an amazing experience for us, with attendees of all experience levels getting hands-on practice not only deploying to AWS, but also learning about the tools we’ve made to help make integration easier.

For those of you who could not make it, you can still give this workshop a try!

  • Detailed step-by-step instructions, the same as we provided to attendees, are available here.
  • You can also follow along with the presentation recording on YouTube.
  • Code for the sample app is available on GitHub.
  • If you’d like to try using Amazon Relational Database Service instead of using AWS OpsWorks managed MySQL, you can reference our blog post on that topic as well.

Continuing the Conversation

We hope to see more of you on the conference trail again soon! On that note, AWS re:Invent registration is open as of this writing, and we will be there. We hope to see you there!

Generating Amazon S3 Pre-signed URLs with SSE-S3 (Part 3)

by Hanson Char | in Java

As mentioned in Part 1 and Part 2 of this series, there are fundamentally four ways you can generate Amazon S3 pre-signed URLs using server-side encryption (SSE). There, we demonstrated how to do so with SSE-KMS (server-side encryption with AWS Key Management Service).

In this post, I will provide further sample code that shows how to generate and consume pre-signed URLs for SSE-S3 (server-side encryption with Amazon S3-managed keys). The code samples assume version 1.9.31 or later of the AWS SDK for Java.

Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)

Here’s how to generate a pre-signed PUT URL using SSE-S3:

AmazonS3Client s3 = ...;
String myExistingBucket = ...; // an existing bucket
String myKey = ...;            // target S3 key
// Generate a pre-signed PUT URL for use with SSE-S3
GeneratePresignedUrlRequest genreq = new GeneratePresignedUrlRequest(
    myExistingBucket, myKey, HttpMethod.PUT);
genreq.setSSEAlgorithm(SSEAlgorithm.getDefault());
URL puturl = s3.generatePresignedUrl(genreq);
System.out.println("Pre-signed PUT URL with SSE-S3: " + puturl);

Here’s how to make use of the generated pre-signed PUT URL via the Apache HttpClient (4.3):

File fileToUpload = ...;
HttpPut putreq = new HttpPut(URI.create(puturl.toExternalForm()));
// AES256 is currently the only supported algorithm for SSE-S3
putreq.addHeader(new BasicHeader(Headers.SERVER_SIDE_ENCRYPTION,
    SSEAlgorithm.AES256.getAlgorithm()));
putreq.setEntity(new FileEntity(fileToUpload));
CloseableHttpClient httpclient = HttpClients.createDefault();
httpclient.execute(putreq);

Here’s how to generate a pre-signed GET URL for use with SSE-S3:

GeneratePresignedUrlRequest genreq = new GeneratePresignedUrlRequest(
    BUCKET, KEY, HttpMethod.GET);
URL geturl = s3.generatePresignedUrl(genreq);
System.out.println("Pre-signed GET URL for SSE-S3: " + geturl);

(Note in particular that generating a pre-signed GET URL for an S3 object encrypted using SSE-S3 is as simple as generating a regular pre-signed URL!)

Here’s how to make use of the generated pre-signed GET URL via the Apache HttpClient (4.3):

HttpGet getreq = new HttpGet(URI.create(geturl.toExternalForm()));
CloseableHttpClient httpclient = HttpClients.createDefault();
CloseableHttpResponse res = httpclient.execute(getreq);
InputStream is = res.getEntity().getContent();
String actual = IOUtils.toString(is);

In Parts 4 and 5, I will provide code examples that show how you can generate and consume pre-signed URLs using server-side encryption with customer-provided encryption keys (SSE-C).

Enjoy!