AWS Developer Blog

Receiving Amazon SNS Messages in PHP

by Jeremy Lindblom | in PHP

The following post details how to use version 2 of the AWS SDK for PHP to receive and validate HTTP(S) messages from Amazon SNS. For a guide on how to do so with version 3 of the SDK, please see our updated post.

Handling inbound Amazon SNS notification messages with PHP is simple. In this post, I’ll show you how to retrieve data from incoming messages, and verify that the messages are coming from Amazon SNS.

A Little About Amazon SNS

Amazon Simple Notification Service (Amazon SNS) is a fast, fully managed push messaging service. Amazon SNS can deliver messages to email, mobile devices (SMS, as well as iOS, Android, and Fire OS push notifications), Amazon SQS queues, and HTTP/HTTPS endpoints.

With Amazon SNS, you can set up topics to publish custom messages to subscribed endpoints. SNS messages are also used by many other AWS services to communicate information about your AWS resources asynchronously. Some examples include:

  • Configuring Amazon Glacier to notify you when a retrieval job is complete.
  • Configuring AWS CloudTrail to notify you when a new log file has been written.
  • Configuring Amazon Elastic Transcoder to notify you when a transcoding job changes status (e.g., from "Progressing" to "Complete").

Though you can certainly subscribe your email address to receive SNS messages from service events like these, your inbox would fill up rather quickly. There is great power, however, in being able to subscribe an HTTP/HTTPS endpoint to receive the messages. This allows you to program webhooks for your applications to easily respond to various events.

Receiving a Message

In order for an HTTP/HTTPS endpoint to receive messages, you must subscribe the endpoint to an SNS topic. Before you do that, you need to create and deploy a script to the endpoint to process the messages.

Here is a naïvely simple PHP script that can read a posted SNS message.


// Fetch the raw POST body containing the message
$postBody = file_get_contents('php://input');

// JSON decode the body to an array of message data
$message = json_decode($postBody, true);
if ($message) {
    // Do something with the data
    echo $message['Message'];
}

The AWS SDK for PHP has an SNS Message class for representing an SNS message. It encapsulates the preceding code, and also validates the structure of the message data.


// Include Composer autoloader
require 'path/to/vendor/autoload.php';

// Create a message object from the POST body
$message = Aws\Sns\MessageValidator\Message::fromRawPostData();
echo $message->get('Message');

Amazon SNS sends different types of messages, including SubscriptionConfirmation, Notification, and UnsubscribeConfirmation. The formats of these messages are described in the Appendix: Message and JSON Formats section of the Amazon SNS Developer Guide.

Confirming a Subscription to a Topic

In order to handle a SubscriptionConfirmation message, we need to add some code that actually does something with the message. SubscriptionConfirmation messages provide a URL that you can use to confirm the subscription. We’ll use a Guzzle HTTP client to send a GET request to the URL.

$message = Aws\Sns\MessageValidator\Message::fromRawPostData();

// Create a Guzzle client and send a request to the SubscribeURL
$client = new Guzzle\Http\Client();
$client->get($message->get('SubscribeURL'))->send();

Verifying an SNS Message’s Signature

Messages from Amazon SNS are signed. It’s a good practice to verify the signature and ensure that a message was actually sent from Amazon SNS before performing actions as a result of the message. The SDK includes a MessageValidator class for validating the message, but you must have the OpenSSL PHP extension installed to use it.

use Aws\Sns\MessageValidator\Message;
use Aws\Sns\MessageValidator\MessageValidator;

$message = Message::fromRawPostData();

// Validate the message (throws an exception if validation fails)
$validator = new MessageValidator();
$validator->validate($message);
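To see what the validator is actually checking, it helps to know that an SNS signature covers a canonical string built from the message's fields ("name\nvalue\n" pairs in a documented order), signed with the private key whose certificate is published at the message's SigningCertURL. The following self-contained Java sketch illustrates only the sign/verify mechanics, using a locally generated RSA key pair and made-up sample values rather than a real SNS certificate:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SnsSignatureSketch {

    // Signs a canonical string and verifies it with the matching public key.
    // In real SNS validation, the public key comes from the certificate at
    // the message's SigningCertURL rather than a locally generated pair.
    static boolean signAndVerify(String canonical) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair kp = kpg.generateKeyPair();

        Signature signer = Signature.getInstance("SHA1withRSA");
        signer.initSign(kp.getPrivate());
        signer.update(canonical.getBytes(StandardCharsets.UTF_8));
        byte[] signature = signer.sign();

        Signature verifier = Signature.getInstance("SHA1withRSA");
        verifier.initVerify(kp.getPublic());
        verifier.update(canonical.getBytes(StandardCharsets.UTF_8));
        return verifier.verify(signature);
    }

    public static void main(String[] args) throws Exception {
        // For a Notification, the canonical string is "name\nvalue\n" pairs
        // for Message, MessageId, Subject (if present), Timestamp, TopicArn,
        // and Type, in that order (sample values only).
        String canonical = "Message\nhello\n"
                + "MessageId\nabc-123\n"
                + "Timestamp\n2014-01-01T00:00:00.000Z\n"
                + "TopicArn\narn:aws:sns:us-east-1:123456789012:my-topic\n"
                + "Type\nNotification\n";
        System.out.println(signAndVerify(canonical));
    }
}
```

The MessageValidator class performs this work for you (plus fetching and checking the certificate), which is why it requires the OpenSSL extension.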

Handling Notifications

Let’s put it all together and add some extra code for handling both SubscriptionConfirmation and Notification messages.


require 'path/to/vendor/autoload.php';

use Aws\Sns\MessageValidator\Message;
use Aws\Sns\MessageValidator\MessageValidator;
use Guzzle\Http\Client;

// Make sure the request is POST
if ($_SERVER['REQUEST_METHOD'] !== 'POST') {
    http_response_code(405);
    die;
}

try {
    // Create a message from the post data and validate its signature
    $message = Message::fromRawPostData();
    $validator = new MessageValidator();
    $validator->validate($message);
} catch (Exception $e) {
    // Pretend we're not here if the message is invalid
    http_response_code(404);
    die;
}

if ($message->get('Type') === 'SubscriptionConfirmation') {
    // Send a request to the SubscribeURL to complete subscription
    (new Client)->get($message->get('SubscribeURL'))->send();
} elseif ($message->get('Type') === 'Notification') {
    // Do something with the notification
}


As you can see, receiving, verifying and handling Amazon SNS messages is simple. Setting up your application to receive SNS messages will allow you to create applications that can handle asynchronous communication from AWS services and other parts of your application.

EDIT: My next blog post is a follow-up to this one, and describes how you can test your Amazon SNS webhooks locally.

Response Logging in AWS Tools for Windows PowerShell

by Jim Flanagan | in .NET

As described in an earlier post, the AWS SDK for .NET has support for logging service responses, error responses, and metrics for AWS API calls. For the SDK, this is enabled through the App.config or Web.config file.

The AWS Tools for Windows PowerShell supports a shell variable, named $AWSHistory, that records which cmdlets have been run and the corresponding service response (and, optionally, request) data. Until recently, however, developers who wanted the more detailed, configurable diagnostic logging available in the underlying SDK could enable it only by editing the configuration file for PowerShell itself (powershell.exe.config), which affects logging for all PowerShell scripts.

We recently added a cmdlet that makes it possible to configure logging with System.Diagnostics within a script. This cmdlet affects only the currently running script. It will either create simple TextWriterTraceListener instances, or allow you to add custom listeners for the trace sources associated with AWS requests.

First, let’s add a simple text listener:

Add-AWSLoggingListener MyAWSLog c:\logs\aws.txt

This listener creates a TextWriterTraceListener that logs error responses from AWS requests to the file c:\logs\aws.txt. The listener is attached to the source Amazon, which matches all service requests.

If we want to send Amazon S3 errors to a separate log, we could add a second listener:

Add-AWSLoggingListener MyS3Logs c:\logs\s3.txt -Source Amazon.S3

Trace data will go only to the most-specific trace source configured for a listener. In this example, the S3 logs go to s3.txt and all other service logs go to aws.txt.
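The routing rule can be pictured as picking the longest configured source name that matches the request's source. This is a hypothetical Java sketch of that matching logic, purely for illustration; it is not the actual System.Diagnostics implementation:

```java
import java.util.Arrays;
import java.util.List;

public class TraceSourceRouting {

    // Returns the most specific configured source for a request: the longest
    // configured name that equals the request's source or is a dotted prefix
    // of it. Mimics the routing described above for illustration only.
    static String mostSpecific(String requestSource, List<String> configured) {
        String best = null;
        for (String source : configured) {
            boolean matches = requestSource.equals(source)
                    || requestSource.startsWith(source + ".");
            if (matches && (best == null || source.length() > best.length())) {
                best = source;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<String> sources = Arrays.asList("Amazon", "Amazon.S3");
        System.out.println(mostSpecific("Amazon.S3", sources));       // Amazon.S3
        System.out.println(mostSpecific("Amazon.DynamoDB", sources)); // Amazon
    }
}
```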

By default, listeners added in this way will log only error responses. Enabling logging of all responses and/or metrics can be done with a couple of other cmdlets:

Set-AWSResponseLogging Always
Enable-AWSMetricsLogging

These cmdlets affect all listeners added with Add-AWSLoggingListener. Similarly, we can turn those logging levels back down or off:

Set-AWSResponseLogging OnError
Set-AWSResponseLogging Never
Disable-AWSMetricsLogging

Also, we can remove specific listeners from a trace source by name:

Remove-AWSLoggingListener Amazon MyAWSLog

Now, only the S3 logger is active. One way you could use these cmdlets is to enable logging only around a particular section of script.

The Add-AWSLoggingListener cmdlet can also add instances of trace listeners created by other means, such as custom listeners. These statements do the same thing:

Add-AWSLoggingListener -Name MyAWSLog -LogFilePath c:\logs\aws.txt

$listener = New-Object System.Diagnostics.TextWriterTraceListener c:\logs\aws.txt
$listener.Name = "MyAWSLog"
Add-AWSLoggingListener -TraceListener $listener

Exposing this facility through the PowerShell cmdlets required adding the ability to programmatically add or remove listeners via the existing AWSConfigs class in the AWS SDK for .NET, in addition to the logging-related configuration items already on that class.

  AWSConfigs.AddTraceListener("Amazon.DynamoDB",
      new TextWriterTraceListener(@"c:\logs\dynamo.txt", "myDynamoLog"));

Now PowerShell developers have the same access to performance and diagnostic information about AWS API calls as other AWS SDK for .NET users. For more information, refer to the Shell Configuration section of the AWS Tools for Windows PowerShell Cmdlet Reference.

Amazon S3 Transfer Utility for Windows Store and Windows Phone

by Milind Gokarn | in .NET

We recently made the Amazon S3 Transfer Utility API in the AWS SDK for .NET available for the Windows Store and Windows Phone platforms. TransferUtility is an API that runs on top of the low-level Amazon S3 API and provides utility methods for uploading and downloading files and directories. It includes support for automatic switching to multipart upload for large files, multi-threaded uploads, cancellation of in-progress operations, and notifications for transfer progress. The set of TransferUtility APIs available on the Windows Store and Windows Phone platforms includes all of the methods available on the .NET 3.5 and .NET 4.5 platforms except the upload/download directory functionality. Another point to note is that these platforms support only the asynchronous APIs.

The code snippets in the following sections show how to upload and download a file using TransferUtility. Notice that we use the IStorageFile type available on Windows Store and Windows Phone platforms. You can use File Pickers available on these platforms to get an instance of IStorageFile. This article provides information on working with File Pickers for the Windows Store platform.

Upload using Transfer Utility

The following code snippet shows the TransferUtility.UploadAsync method being used to upload a file. We use an instance of the TransferUtilityConfig class to change the default values for ConcurrentServiceRequests and MinSizeBeforePartUpload. We have changed the part size to 10 MB for multipart upload using the TransferUtilityUploadRequest.PartSize property. You can also see that we subscribe to TransferUtilityUploadRequest.UploadProgressEvent to receive upload progress notification events.

private const int MB_SIZE = (int)Math.Pow(2, 20);

public async Task UploadFile(IStorageFile storageFile, string bucket, string key,
    AWSCredentials credentials, CancellationToken cancellationToken)
{
    var s3Client = new AmazonS3Client(credentials, RegionEndpoint.USWest2);
    var transferUtilityConfig = new TransferUtilityConfig
    {
        // Use 5 concurrent requests.
        ConcurrentServiceRequests = 5,

        // Use multipart upload for files larger than 20 MB.
        MinSizeBeforePartUpload = 20 * MB_SIZE
    };
    using (var transferUtility = new TransferUtility(s3Client, transferUtilityConfig))
    {
        var uploadRequest = new TransferUtilityUploadRequest
        {
            BucketName = bucket,
            Key = key,
            StorageFile = storageFile,

            // Set the size of each part for multipart upload to 10 MB.
            PartSize = 10 * MB_SIZE
        };
        uploadRequest.UploadProgressEvent += OnUploadProgressEvent;
        await transferUtility.UploadAsync(uploadRequest, cancellationToken);
    }
}

void OnUploadProgressEvent(object sender, UploadProgressArgs e)
{
    // Process progress update events.
}

Download using Transfer Utility

Following is a snippet that downloads an object from S3 using the TransferUtility.DownloadAsync method. We subscribe to the TransferUtilityDownloadRequest.WriteObjectProgressEvent event to receive notifications about the download progress.

public async Task DownloadFile(IStorageFile storageFile, string bucket, string key,
    AWSCredentials credentials, CancellationToken cancellationToken)
{
    var s3Client = new AmazonS3Client(credentials, RegionEndpoint.USWest2);
    using (var transferUtility = new TransferUtility(s3Client))
    {
        var downloadRequest = new TransferUtilityDownloadRequest
        {
            BucketName = bucket,
            Key = key,
            StorageFile = storageFile
        };
        downloadRequest.WriteObjectProgressEvent += OnWriteObjectProgressEvent;
        await transferUtility.DownloadAsync(downloadRequest, cancellationToken);
    }
}

void OnWriteObjectProgressEvent(object sender, WriteObjectProgressArgs e)
{
    // Process progress update events.
}

In this post, we saw how to use the Amazon S3 Transfer Utility API to upload and download files on the Windows Store and Windows Phone 8 platforms. Try it out, and let us know what you think.

Using AmazonS3EncryptionClient to Send Secure Data Between Two Parties

by Hanson Char | in Java

Suppose you have a partner who would like to encrypt and upload some confidential data to you via Amazon S3, but doesn’t want anyone other than you to be able to decrypt the data. Is this possible?

Yes! That’s a classical use case of Public-key Cryptography, and AmazonS3EncryptionClient makes it easy to do.

First of all, since you are the only party that can decrypt the data, your partner will need to have a copy of your public key. Your private key is, of course, kept secret, and therefore is not shared with your partner. Armed with your public key, your partner can then construct an AmazonS3EncryptionClient to encrypt and upload data to you via S3. Notice, however, the relevant API of AmazonS3EncryptionClient requires the use of a KeyPair. How can one construct a KeyPair with only the public but not the private key? Can the private key be null? The short answer is yes. This may not seem obvious, so here is a sample code snippet on how this can be done:

// Create an S3 client using only a public key
AmazonS3 s3 = new AmazonS3EncryptionClient(
        new EncryptionMaterials(getPublicKeyPair()));
PutObjectResult result = s3.putObject("your_bucket", "your_key",
        new File("confidential_data.txt"));
// ...

public static KeyPair getPublicKeyPair()
        throws NoSuchAlgorithmException, InvalidKeySpecException {
    byte[] public_key = … // public key in binary
    KeyFactory kf = KeyFactory.getInstance("RSA");
    PublicKey publicKey = kf.generatePublic(new X509EncodedKeySpec(public_key));
    return new KeyPair(publicKey, null);
}
For obvious reasons, any such key pair with a null private key can only, by definition, be used to encrypt and upload data to S3, but cannot decrypt any data retrieved from S3 using the Amazon S3 encryption client. (Indeed, any such attempt would lead to an exception like "AmazonClientException: Unable to decrypt symmetric key".) On the receiving side, to retrieve and decrypt the Amazon S3 object, you can simply make use of AmazonS3EncryptionClient, but this time instantiated with a KeyPair in the usual way (i.e. by specifying both the public and private keys).

Note that, for performance and security reasons, the encryption material provided to the S3 encryption client is used only as a key-encrypting-key material, and not for content encryption. AmazonS3EncryptionClient always encrypts the content of every S3 object with a randomly generated one-time symmetric key, also known as the "envelope key". The envelope key is therefore globally unique per S3 object. As with most block cipher modes of operation, the security assurance degrades as more data is processed with a single key. The unique envelope key per S3 object therefore enables a maximum level of "key freshness" in terms of security.
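The envelope scheme itself can be sketched with nothing but the JDK's crypto APIs. The following is a simplified illustration of the idea only; the real client uses specific cipher modes and stores the wrapped envelope key alongside the object, details that are omitted here:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;

public class EnvelopeSketch {

    // Encrypts content with a fresh one-time AES "envelope key", wraps that
    // key with the RSA public key, then unwraps and decrypts with the RSA
    // private key. Simplified sketch; not the client's actual cipher setup.
    static String roundTrip(String plaintext) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair kek = kpg.generateKeyPair(); // key-encrypting key pair

        // Fresh envelope key per object.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey envelopeKey = kg.generateKey();

        // Content is encrypted with the envelope key...
        Cipher aes = Cipher.getInstance("AES");
        aes.init(Cipher.ENCRYPT_MODE, envelopeKey);
        byte[] ciphertext = aes.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));

        // ...and the envelope key is wrapped with the public half of the KEK.
        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.ENCRYPT_MODE, kek.getPublic());
        byte[] wrappedKey = rsa.doFinal(envelopeKey.getEncoded());

        // Receiver: unwrap with the private key, then decrypt the content.
        rsa.init(Cipher.DECRYPT_MODE, kek.getPrivate());
        SecretKey unwrapped = new SecretKeySpec(rsa.doFinal(wrappedKey), "AES");
        aes.init(Cipher.DECRYPT_MODE, unwrapped);
        return new String(aes.doFinal(ciphertext), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("confidential data"));
    }
}
```

Note how only the receiver's private key can unwrap the envelope key, which is exactly why a sender holding just the public half can encrypt but never decrypt.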

For more background information, see Client-Side Data Encryption with the AWS SDK for Java and Amazon S3, and Specifying Client-Side Encryption Using the AWS SDK for Java. Let us know what you think!

Using Transfer Manager to Copy Amazon S3 Objects

by Manikandan Subramanian | in Java

The latest addition to the list of Transfer Manager features is the ability to easily make copies of your data in Amazon S3.

The new TransferManager.copy method allows you to easily copy an existing Amazon S3 object from one location to another.

Under the hood, TransferManager selects which copy algorithm is best for your data: either single-part copy or multipart copy. When possible, TransferManager initiates multipart copy requests in parallel, each copying a small part of the Amazon S3 object, resulting in better performance, throughput, and resilience to errors. You don’t have to worry about the details of copying your data; just rely on TransferManager's easy-to-use, asynchronous API for working with Amazon S3.
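To make the multipart idea concrete, here is a small Java sketch of how an object might be split into byte ranges for parallel part copies. The splitting logic and part size are illustrative assumptions, not the SDK's actual algorithm or thresholds:

```java
import java.util.ArrayList;
import java.util.List;

public class CopyParts {

    // Splits an object into inclusive byte ranges, one per part copy.
    // Each range could then be copied in parallel and the parts combined.
    static List<long[]> partRanges(long objectSize, long partSize) {
        List<long[]> ranges = new ArrayList<>();
        for (long start = 0; start < objectSize; start += partSize) {
            long end = Math.min(start + partSize, objectSize) - 1;
            ranges.add(new long[]{start, end});
        }
        return ranges;
    }

    public static void main(String[] args) {
        // A 10-byte object with a 4-byte part size yields three parts.
        for (long[] r : partRanges(10, 4)) {
            System.out.println(r[0] + "-" + r[1]);
        }
    }
}
```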

The following example shows how easy it is to copy data using TransferManager.

// Create a new transfer manager object with your credentials.
TransferManager tm = new TransferManager(new DefaultAWSCredentialsProviderChain());

// The copy method returns immediately as your data copies in the background.
// Use the returned transfer object to track the progress of the copy operation.
Copy copy = tm.copy(sourceBucket, sourceKey,
                    destinationBucket, destinationKey);

// Perform any work while the copy processes

if (copy.isDone()) {
    System.out.println("Copy operation completed.");
}
There’s lots more great functionality in TransferManager. Check out some of our other blog posts on TransferManager.

Any new functionality that you’d like us to add to TransferManager? Let us know your ideas.

Release: AWS SDK for PHP – Version 2.6.0

by Jeremy Lindblom | in PHP

We would like to announce the release of version 2.6.0 of the AWS SDK for PHP. This release updates the Amazon CloudSearch, Amazon EC2, and Amazon Redshift clients to support their newest APIs and features. See the CHANGELOG for a full list of changes.

Version 2.6.0 is a major release of the SDK and contains some backwards-incompatible changes. These changes affect only the usage of the Amazon CloudSearch client. See the document for more details.

Install the SDK

Tagging Amazon EC2 Instances at Launch

by Steve Roberts | in .NET

In this guest post (by James Saull from the AWS Solutions Architects team), we will show how to launch EC2 instances, retrieve the new instances’ IDs, and apply tags to them.

Tagging EC2 instances allows you to assign metadata to instances to facilitate management – especially at scale. Canonical examples include tagging instances to identify which individual or department they belong to or which application they are part of. They are also a useful way to help with cost allocation of resources when it comes time to analyze or apportion the bill. Some organizations consider tagging so important to the management of their infrastructure that they terminate anything that is not appropriately tagged!

It makes good sense to apply the minimum set of tags at the time of launch. More can be applied later if required. In this short post, we will show how to launch and tag EC2 instances.

For simplicity, we will launch a pair of instances using the latest Windows 2012 Base image:

Set-DefaultAWSRegion eu-west-1
$NewInstanceResponse = "WINDOWS_2012_BASE" | Get-EC2ImageByName |
    New-EC2Instance -InstanceType t1.micro -MinCount 2 -MaxCount 2

Now to retrieve the Instance Ids:

$Instances = ($NewInstanceResponse.Instances).InstanceId 

Next we need to compose the collection of tags we wish to apply to these instances. When writing Windows PowerShell scripts, I prefer to avoid using New-Object where reasonable, but I will eschew my personal preferences and demonstrate with and without:

$Tags = @()
$CreatedByTag = New-Object Amazon.EC2.Model.Tag
$CreatedByTag.Key = "CreatedBy"
$CreatedByTag.Value = "James"
$Tags += $CreatedByTag
$DepartmentTag = New-Object Amazon.EC2.Model.Tag
$DepartmentTag.Key = "Department"
$DepartmentTag.Value = "Solutions Architecture"
$Tags += $DepartmentTag 

We can rewrite the above as an array of key-value pairs:

$Tags = @( @{key="CreatedBy";value="James"}, `
           @{key="Department";value="Solutions Architecture"} )

The final step is to apply the tags to the instances we launched:

New-EC2Tag -ResourceId $Instances -Tags $Tags

This can be rewritten using pipes instead:

$Instances | New-EC2Tag -Tags $Tags 

We can now look at our newly tagged instances:

((Get-EC2Instance -Instance $Instances).RunningInstance).Tags

It is tempting to condense this script into a single line, but it might be more robust to code more defensively and check at each stage that failures have not been encountered (e.g., zero instances launched due to reaching an account limit):

(("WINDOWS_2012_BASE" | Get-EC2ImageByName | New-EC2Instance -InstanceType t1.micro -MinCount 2 -MaxCount 2).Instances).InstanceId | New-EC2Tag -Tags $Tags

If you have been following along in your test account and you’ve launched some instances, be sure to terminate them when you no longer need them:

$Instances | Stop-EC2Instance -Terminate -Force 

Release: AWS SDK for PHP – Version 2.5.4

by Michael Dowling | in PHP

We would like to announce the release of version 2.5.4 of the AWS SDK for PHP. This release updates the Amazon CloudFront client, AWS OpsWorks client, and Elastic Load Balancing client; adds support for the AWS_SECRET_ACCESS_KEY environment variable; updates the Amazon S3 stream wrapper; addresses an issue with dot-segments in the Amazon S3 directory sync; and addresses an issue with Amazon S3 pre-signed URLs. Please refer to the CHANGELOG for a complete list of changes.

Install the SDK

Requesting feedback on the AWS Toolkit for Visual Studio

by Andrew Fitz Gibbon | in .NET

The AWS Toolkit for Visual Studio provides extensions for Microsoft Visual Studio that make it easier to develop, debug, and deploy .NET applications using Amazon Web Services. We’re constantly working to improve these extensions and provide developers what they need to develop and manage their applications.

To better guide the future of the AWS Toolkit for Visual Studio, we’re reaching out to you for direct feedback. Below is a link to a short survey. It shouldn’t take more than 15 minutes to fill out, and your responses will help us bring you a better development experience. Thank you!

Survey: Feedback on the AWS Toolkit for Visual Studio

AWS SDK for Java Maven Archetype

by Jason Fulghum | in Java

If you’re a Maven user, there’s a brand new way to get started building Java applications that use the AWS SDK for Java.

With the new Maven archetype, you can easily create a new Java project configured with the AWS SDK for Java and some sample code to help you find your way around the SDK.

Starting a project from the new archetype is easy:

mvn archetype:generate \
    -DarchetypeGroupId=com.amazonaws \
    -DarchetypeArtifactId=aws-java-sdk-archetype

When you run the Maven archetype:generate goal, you’ll be prompted for some basic Maven values for your new project (groupId, artifactId, version).

[INFO] Generating project in Interactive mode
[INFO] Archetype [com.amazonaws:aws-java-sdk-archetype:1.0.0] 
Define value for property 'groupId': :   
Define value for property 'artifactId': : my-aws-java-project
Define value for property 'version':  1.0-SNAPSHOT: : 
Define value for property 'package': : 

When the archetype:generate goal completes, you’ll have a new Maven Java project, already configured with a dependency on the AWS SDK for Java and some sample code in the project to help you get started with the SDK.

The POM file in your new project will be configured with the values you just gave Maven:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
  ...
  <artifactId>my-aws-java-project</artifactId>
  <version>1.0-SNAPSHOT</version>
  <name>AWS SDK for Java Sample</name>
  ...
</project>
Before you can run the sample code, you’ll need to fill in your AWS security credentials. The README.html file details where to put your credentials for this sample. Once your credentials are configured, you’re ready to compile and run your new project. The sample project’s POM file is configured so that you can easily compile, jar, and run the project by executing mvn package exec:java. The package goal compiles the code and creates a jar for it, and the exec:java goal runs the main method in the sample class.

Depending on what’s in your AWS account, you’ll see something like this:


[INFO] >>> exec-maven-plugin:1.2.1:java (default-cli) @ my-aws-java-project >>>
[INFO] <<< exec-maven-plugin:1.2.1:java (default-cli) @ my-aws-java-project <<<
[INFO] --- exec-maven-plugin:1.2.1:java (default-cli) @ my-aws-java-project ---
Welcome to the AWS Java SDK!
You have access to 3 availability zones:
 - us-east-1a (us-east-1)
 - us-east-1b (us-east-1)
 - us-east-1c (us-east-1)
You have 1 Amazon EC2 instance(s) running.
You have 3 Amazon S3 bucket(s).
The bucket 'aws-demos-265490781088' contains 48 objects with a total size of 376257032 bytes.
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 10.928s
[INFO] Finished at: Wed Feb 19 15:40:24 PST 2014
[INFO] Final Memory: 24M/222M
[INFO] ------------------------------------------------------------------------

Are you already using Maven for your AWS Java projects? What are your favorite features of Maven? Let us know in the comments below.