Release: AWS SDK for PHP 2.4.3

by Michael Dowling | in PHP

We would like to announce the release of version 2.4.3 of the AWS SDK for PHP. This release adds support for the Amazon Simple Notification Service mobile push API, adds progress reporting on snapshot restore operations for Amazon Redshift, and addresses an issue with directories and the Amazon S3 stream wrapper.

Changelog

  • Updated the Amazon SNS client API to support mobile push
  • Updated the Amazon Redshift client API to support progress reporting on snapshot restore operations
  • Updated the Amazon Elastic MapReduce client to now use JSON serialization and AWS Signature V4 to securely sign requests
  • AWS SDK for PHP clients now throw Aws\Common\Exception\TransferException exceptions when a network error occurs instead of a Guzzle\Http\Exception\CurlException. The TransferException class, however, extends from Guzzle\Http\Exception\CurlException, so you can continue to catch the Guzzle CurlException, or catch Aws\Common\Exception\AwsException to handle any exception that can be thrown by an AWS client (see the sketch after this list)
  • Fixed an issue with the Amazon S3 stream wrapper where trailing slashes were being added when listing directories
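
For example, here is a minimal sketch of catching a network failure (assuming a configured Aws\S3\S3Client in $client; the bucket and key names are placeholders):

use Aws\Common\Exception\TransferException;

try {
    $client->headObject(array(
        'Bucket' => 'my-bucket',
        'Key'    => 'my-key',
    ));
} catch (TransferException $e) {
    // Thrown when a network error occurs. Code that catches the Guzzle
    // CurlException continues to work, since TransferException extends it.
    echo 'Network error: ' . $e->getMessage() . "\n";
}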

Install/Download the Latest SDK

Iterating through Amazon DynamoDB Results

by Jeremy Lindblom | in PHP

The AWS SDK for PHP has a feature called "iterators" that allows you to retrieve an entire result set without manually handling pagination tokens or markers. The iterators in the SDK implement PHP’s Iterator interface, which allows you to easily enumerate or iterate through resources from a result set with foreach.

The Amazon DynamoDB client has iterators available for all of the operations that return sets of resources, including Query, Scan, BatchGetItem, and ListTables. Let’s take a look at how we can use the iterators feature with the DynamoDB client in order to iterate through items in a result.

Specifically, let’s look at an example of how to create and use a Scan iterator. First, let’s create a client object to use throughout the rest of the example code.

<?php

require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;

$client = DynamoDbClient::factory(array(
    'key'    => '[aws access key]',
    'secret' => '[aws secret key]',
    'region' => '[aws region]' // (e.g., us-west-2)
));

Next, we’ll create a normal Scan operation without an iterator. A DynamoDB Scan operation is used to do a full table scan on a DynamoDB table. We want to iterate through all the items in the table, so we will just provide the TableName as a parameter to the operation without a ScanFilter.

$result = $client->scan(array(
    'TableName' => 'TheNameOfYourTable',
));

foreach ($result['Items'] as $item) {
    // Do something with the $item
}

The $result variable will contain a Guzzle\Service\Resource\Model object, which is an array-like object structured according to the description in the API documentation for the scan method. However, DynamoDB will only return up to 1 MB of results per Scan operation, so if your table is larger than 1 MB and you want to retrieve the entire result set, you will need to perform subsequent Scan operations that include the ExclusiveStartKey parameter. The following example shows how to do this:

$startKey = array();

do {
    $args = array('TableName' => 'TheNameOfYourTable') + $startKey;
    $result = $client->scan($args);

    foreach ($result['Items'] as $item) {
        // Do something with the $item
    }

    $startKey['ExclusiveStartKey'] = $result['LastEvaluatedKey'];
} while ($startKey['ExclusiveStartKey']);

Using an iterator to perform the Scan operation makes this much simpler.

$iterator = $client->getScanIterator(array(
    'TableName' => 'TheNameOfYourTable'
));

foreach ($iterator as $item) {
    // Do something with the $item
}

Using the iterator allows you to get the full result set, regardless of how many MB of data there are, and still be able to use a simple syntax to iterate through the results. The actual object returned by getScanIterator(), or any get*Iterator() method, is an instance of the Aws\Common\Iterator\AwsResourceIterator class.

Warning: Doing a full table scan on a large table may consume a lot of provisioned throughput and, depending on the table’s size and throughput settings, can take time to complete. Please be cautious before running the examples from this post on your own tables.

Iterators also allow you to put a limit on the maximum number of items you want to iterate through.

$iterator = $client->getScanIterator(array(
    'TableName' => 'TheNameOfYourTable'
), array(
    'limit' => 20
));

$count = 0;
foreach ($iterator as $item) {
    $count++;
}
echo $count;
#> 20

Now that you know how iterators work, let’s work through another example. Let’s say you have a DynamoDB table named "Contacts" with the following simple schema:

  • Id (Number)
  • FirstName (String)
  • LastName (String)

You can display the full name of each contact with the following code:

$contacts = $client->getScanIterator(array(
    'TableName' => 'Contacts'
));

foreach ($contacts as $contact) {
    $firstName = $contact['FirstName']['S'];
    $lastName = $contact['LastName']['S'];
    echo "{$firstName} {$lastName}n";
}

Item attribute values in your DynamoDB result are keyed by both the attribute name and attribute type. In many cases, especially when using a loosely typed language like PHP, the type of the item attribute may not be important, and a simple associative array might be more convenient. The SDK (as of version 2.4.1) includes the Aws\DynamoDb\Iterator\ItemIterator class, which you can use to decorate a Scan, Query, or BatchGetItem iterator object in order to enumerate the items without the type information.

use Aws\DynamoDb\Iterator\ItemIterator;

$contacts = new ItemIterator($client->getScanIterator(array(
    'TableName' => 'Contacts'
)));

foreach ($contacts as $contact) {
    echo "{$contact['FirstName']} {$contact['LastName']}n";
}

The ItemIterator also has two more features that can be useful for certain schemas.

  1. If you have attributes of the binary (B) or binary set (BS) type, the ItemIterator will automatically apply base64_decode() to the values for you.
  2. The item will actually be enumerated as a Guzzle\Common\Collection object. A Collection behaves like an array (i.e., it implements the ArrayAccess interface) and has some additional convenience methods. Additionally, it returns null instead of triggering notices for undefined indices. This is useful for working with items, since the NoSQL nature of DynamoDB does not restrict you to following a fixed schema with all of your items (see the sketch below).
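
For instance, here is a minimal sketch (reusing the $contacts ItemIterator from above and a hypothetical optional Nickname attribute) of how that null behavior helps when items do not all share the same attributes:

foreach ($contacts as $contact) {
    // Reading an attribute that a particular item may not have simply
    // yields null instead of triggering an "undefined index" notice
    $nickname = $contact['Nickname'];

    echo $contact['FirstName'] . ($nickname ? " ({$nickname})" : '') . "\n";
}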

We hope that using iterators makes working with the AWS SDK for PHP easier and reduces the amount of code you have to write. You can use the ItemIterator class to get even easier access to the data in your Amazon DynamoDB tables.

AWS SDK ZF2 Module 1.1.0

by Jeremy Lindblom | in PHP

We would like to announce the availability of version 1.1.0 of the AWS SDK ZF2 Module. This version includes a session save handler for Amazon DynamoDB, so that you can use DynamoDB as a session store for your Zend Framework 2 applications.

Release: AWS SDK for PHP 2.4.2

by Jeremy Lindblom | in PHP

We would like to announce the release of version 2.4.2 of the AWS SDK for PHP. This release adds support for custom Amazon Machine Images (AMIs) and Chef 11 to the AWS OpsWorks client, adds the latest snapshot permission features to the Amazon Redshift client, and updates the Amazon EC2 and AWS Security Token Service clients.

Changelog

  • Added support for cross-account snapshot access control to the Amazon Redshift client
  • Added support for decoding authorization messages to the AWS STS client
  • Added support for checking for required permissions via the DryRun parameter to the Amazon EC2 client
  • Added support for custom Amazon Machine Images (AMIs) and Chef 11 to the AWS OpsWorks client
  • Added an SDK compatibility test to allow users to quickly determine if their system meets the requirements of the SDK
  • Updated the Amazon EC2 client to use the 2013-06-15 API version
  • Fixed an unmarshalling error with the Amazon EC2 CreateKeyPair operation
  • Fixed an unmarshalling error with the Amazon S3 ListMultipartUploads operation
  • Fixed an issue with the Amazon S3 stream wrapper "x" fopen mode
  • Fixed an issue with Aws\S3\S3Client::downloadBucket by removing leading slashes from the passed $keyPrefix argument

Install/Download the Latest SDK

Amazon S3 PHP Stream Wrapper

by Michael Dowling | in PHP

As of the 2.3.0 release, the AWS SDK for PHP now provides an official Amazon S3 PHP stream wrapper. The stream wrapper allows you to treat Amazon S3 like a filesystem using functions like fopen(), file_get_contents(), and filesize() through a custom stream wrapper protocol. The Amazon S3 stream wrapper opens up some interesting possibilities that were either previously impossible or difficult to implement.

Registering the stream wrapper

Before you can use the Amazon S3 stream wrapper, you must register it with PHP:

use Aws\S3\S3Client;

// Create an Amazon S3 client object
$client = S3Client::factory(array(
    'key'    => '[aws access key]',
    'secret' => '[aws secret key]'
));

// Register the stream wrapper from a client object
$client->registerStreamWrapper();

After registering the stream wrapper, you can use various PHP filesystem functions that support custom stream wrapper protocols.

$bucket = 'my_bucket';
$key = 'object_key';

// Get the contents of an object as a string
$contents = file_get_contents("s3://{$bucket}/{$key}");

// Get the size of an object
$size = filesize("s3://{$bucket}/{$key}");

Stream wrappers in PHP are identified by a unique protocol; the Amazon S3 stream wrapper uses the "s3://" protocol. Amazon S3 stream wrapper URIs always start with the "s3://" protocol followed by an optional bucket name, forward slash, and optional object key: s3://bucket/key.

Streaming downloads

The Amazon S3 stream wrapper allows you to truly stream downloads from Amazon S3 using functions like fopen(), fread(), and fclose(). This allows you to read bytes off of a stream as needed rather than downloading an entire stream upfront and then working with the data.

The following example opens a read-only stream, reads up to 1024 bytes at a time from the stream, and closes the stream when no more data can be read from it.

// Open a stream in read-only mode
if (!($stream = fopen("s3://{$bucket}/{$key}", 'r'))) {
    die('Could not open stream for reading');
}

// Check if the stream has more data to read
while (!feof($stream)) {
    // Read 1024 bytes from the stream
    echo fread($stream, 1024);
}
// Be sure to close the stream resource when you're done with it
fclose($stream);

Seekable streams

Because no data is buffered in memory, read-only streams with the Amazon S3 stream wrapper are by default not seekable. You can force the stream to allow seeking using the seekable stream context option.

// Create a stream context to allow seeking
$context = stream_context_create(array(
    's3' => array(
        'seekable' => true
    )
));

if ($stream = fopen('s3://bucket/key', 'r', false, $context)) {
    // Read bytes from the stream
    fread($stream, 1024);
    // Seek back to the beginning of the stream
    fseek($stream, 0);
    // Read the same bytes that were previously read
    fread($stream, 1024);
    fclose($stream);
}

Opening seekable streams allows you to seek only to bytes that have been previously read. You cannot skip ahead to bytes that have not yet been read from the remote server. In order to allow previously read data to be recalled, data is buffered in a PHP temp stream using Guzzle’s CachingEntityBody decorator.

Streaming uploads from downloads

You can use an Amazon S3 stream resource with other AWS SDK for PHP operations. For example, you could stream the contents of one Amazon S3 object to a new Amazon S3 object.

$stream = fopen("s3://{$bucket}/{$key}", 'r');

if (!$stream) {
    die('Unable to open stream for reading');
}

$client->putObject(array(
    'Bucket' => 'other_bucket',
    'Key'    => $key,
    'Body'   => $stream
));

fclose($stream);

Uploading data

In addition to downloading data with the stream wrapper, you can use the stream wrapper to upload data as well.

$stream = fopen("s3://{$bucket}/{$key}", 'w');
fwrite($stream, 'Hello!');
fclose($stream);

Note: Because Amazon S3 requires a Content-Length for all entity-enclosing HTTP requests, the contents of an upload must be buffered using a PHP temp stream before it is sent over the wire.
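
As an illustration (the local path, bucket, and key here are placeholders), you could copy a local file into Amazon S3 through the wrapper; the data is buffered locally and sent to Amazon S3 when the stream is closed:

$local  = fopen('/path/to/local/file.txt', 'r');
$remote = fopen("s3://{$bucket}/{$key}", 'w');

// Copy the local stream into the Amazon S3 stream
stream_copy_to_stream($local, $remote);

// Closing the S3 stream triggers the actual upload
fclose($remote);
fclose($local);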

Traversing buckets

You can modify and browse Amazon S3 buckets similar to how PHP allows the modification and traversal of directories on your filesystem.

Here’s an example of creating a bucket:

mkdir('s3://bucket');

You can delete empty buckets using the rmdir() function.

rmdir('s3://bucket');

The opendir(), readdir(), rewinddir(), and closedir() PHP functions can be used with the Amazon S3 stream wrapper to traverse the contents of a bucket.

$dir = "s3://bucket/";

if (is_dir($dir) && ($dh = opendir($dir))) {
    while (($file = readdir($dh)) !== false) {
        echo "filename: {$file} : filetype: " . filetype($dir . $file) . "n";
    }
    closedir($dh);
}

You can recursively list each object and prefix in a bucket using PHP’s RecursiveDirectoryIterator.

$dir = 's3://bucket';
$iterator = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($dir));

foreach ($iterator as $file) {
    echo $file->getType() . ': ' . $file . "\n";
}

Using the Symfony2 Finder component

The easiest way to traverse an Amazon S3 bucket using the Amazon S3 stream wrapper is through the Symfony2 Finder component. The Finder component allows you to more easily filter the files that the stream wrapper returns.

require 'vendor/autoload.php';

use Symfony\Component\Finder\Finder;

$finder = new Finder();

// Get all files and folders (key prefixes) from "bucket" that are less than
// 100K and have been updated in the last year
$finder->in('s3://bucket')
    ->size('< 100K')
    ->date('since 1 year ago');

foreach ($finder as $file) {
    echo $file->getType() . ": {$file}n";
}

You will need to install the Symfony2 Finder component and add it to your project’s autoloader in order to use it with the AWS SDK for PHP. The most common way to do this is to add the Finder component to your project’s composer.json file. You can find out more about this process in the Composer documentation.
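
For example, a minimal composer.json entry (the version constraints shown here are only illustrative) might look like this:

{
    "require": {
        "aws/aws-sdk-php": "~2.4",
        "symfony/finder": "~2.3"
    }
}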

More information

We hope you find the new Amazon S3 stream wrapper useful. You can find more information and documentation on the Amazon S3 stream wrapper in the AWS SDK for PHP User Guide.

Uploading Archives to Amazon Glacier from PHP

by Jeremy Lindblom | in PHP

You can easily upload your data archives to Amazon Glacier by using the Glacier client included in the AWS SDK for PHP. Similar to the Amazon S3 service, Amazon Glacier has an API for both single and multipart uploads. You can upload archives of up to 40,000 GB through the multipart operations. With the UploadArchive operation, you can upload archives of up to 4 GB in a single request; however, we recommend using the multipart operations for archives larger than 100 MB.

Before we look at how to use the specific operations, let’s create a client object to work with Amazon Glacier.

use Aws\Glacier\GlacierClient;

$client = GlacierClient::factory(array(
    'key'    => '[aws access key]',
    'secret' => '[aws secret key]',
    'region' => '[aws region]', // (e.g., us-west-2)
));

Uploading an archive in a single request

Now let’s upload some data to your Amazon Glacier vault. For the sake of this and other code samples in this blog post, I will assume that you have already created a vault and have stored the vault name in a variable called $vaultName. I’ll also assume that the archive data you are uploading is stored in a file and that the path to that file is stored in a variable called $filename. The following code demonstrates how to use the UploadArchive operation to upload an archive in a single request.

$result = $client->uploadArchive(array(
    'vaultName' => $vaultName,
    'body'      => fopen($filename, 'r'),
));
$archiveId = $result->get('archiveId');

In this case, the SDK does some additional work for you behind the scenes. In addition to the vault name and upload body, Amazon Glacier requires that you provide the account ID of the vault owner, a SHA-256 tree hash of the upload body, and a SHA-256 content hash of the entire payload. You can manually specify these parameters if needed, but the SDK will calculate them for you if you do not explicitly provide them.

For more details about the SHA-256 tree hash and SHA-256 content hash, see the Computing Checksums section in the Amazon Glacier Developer Guide. See the GlacierClient::uploadArchive API documentation for a list of all the parameters to the UploadArchive operation.

Uploading an archive in parts

Amazon Glacier also allows you to upload archives in parts, which you can do using the multipart operations: InitiateMultipartUpload, UploadMultipartPart, CompleteMultipartUpload, and AbortMultipartUpload. The multipart operations allow you to upload parts of your archive in any order and in parallel. Also, if one part of your archive fails to upload, you only need to reupload that one part, not the entire archive.

The AWS SDK for PHP provides two different techniques for doing multipart uploads with Amazon Glacier. First, you can use the multipart operations manually, which provides the most flexibility. Second, you can use the multipart upload abstraction which allows you to configure and create a transfer object that encapsulates the multipart operations. Let’s look at the multipart abstraction first.

Using the multipart upload abstraction

The easiest way to perform a multipart upload is to use the classes provided in the Aws\Glacier\Model\MultipartUpload namespace. The classes provide an abstraction of the multipart uploading process. The main class you interact with is UploadBuilder. The following code uses the UploadBuilder to configure a multipart upload using a part size of 4 MB. The upload() method executes the uploads and returns the result of the CompleteMultipartUpload operation at the end of the upload process.

use Aws\Glacier\Model\MultipartUpload\UploadBuilder;

$uploader = UploadBuilder::newInstance()
    ->setClient($client)
    ->setSource($filename)
    ->setVaultName($vaultName)
    ->setPartSize(4 * 1024 * 1024)
    ->build();

$result = $uploader->upload();

$archiveId = $result->get('archiveId');

Using the UploadBuilder class, you can also configure the parts to be uploaded in parallel by using the setConcurrency() method.

$uploader = UploadBuilder::newInstance()
    ->setClient($client)
    ->setSource($filename)
    ->setVaultName($vaultName)
    ->setPartSize(4 * 1024 * 1024)
    ->setConcurrency(3) // Upload 3 at a time in parallel
    ->build();

If a problem occurs during the upload process, an Aws\Common\Exception\MultipartUploadException is thrown, which has access to a TransferState object that represents the state of the upload.

try {
    $result = $uploader->upload();
    $archiveId = $result->get('archiveId');
} catch (Aws\Common\Exception\MultipartUploadException $e) {
    // If the upload fails, get the state of the upload
    $state = $e->getState();
}

The TransferState object can be serialized so that the upload can be completed in a separate request if needed. To resume an upload using a TransferState object, you must use the resumeFrom() method of the UploadBuilder.

$resumedUploader = UploadBuilder::newInstance()
    ->setClient($client)
    ->setSource($filename)
    ->setVaultName($vaultName)
    ->resumeFrom($state)
    ->build();

$result = $resumedUploader->upload();
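
If the resume needs to happen in a separate process or request, a minimal sketch (the storage path used here is only illustrative) is to serialize the state, persist it, and restore it later before calling resumeFrom():

// In the request where the upload failed: persist the serialized state
file_put_contents('/tmp/glacier-upload-state', serialize($state));

// In a later request: restore the state and pass it to resumeFrom()
$state = unserialize(file_get_contents('/tmp/glacier-upload-state'));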

Using the multipart operations

For the most flexibility, you can manage all of the upload process yourself using the individual multipart operations. The following code sample shows how to initialize an upload, upload each of the parts one by one, and then complete the upload. It also uses the UploadPartGenerator class to help calculate the information about each part. UploadPartGenerator is not required to work with the multipart operations, but it does make it much easier, especially for calculating the checksums for each of the parts and the archive as a whole.

use Aws\Glacier\Model\MultipartUpload\UploadPartGenerator;

// Use helpers in the SDK to get information about each of the parts
$archiveData = fopen($filename, 'r');
$partSize = 4 * 1024 * 1024; // (i.e., 4 MB)
$parts = UploadPartGenerator::factory($archiveData, $partSize);

// Initiate the upload and get the upload ID
$result = $client->initiateMultipartUpload(array(
    'vaultName' => $vaultName,
    'partSize'  => $partSize,
));
$uploadId = $result->get('uploadId');

// Upload each part individually using data from the part generator
foreach ($parts as $part) {
    fseek($archiveData, $part->getOffset());
    $client->uploadMultipartPart(array(
        'vaultName'     => $vaultName,
        'uploadId'      => $uploadId,
        'body'          => fread($archiveData, $part->getSize()),
        'range'         => $part->getFormattedRange(),
        'checksum'      => $part->getChecksum(),
        'ContentSHA256' => $part->getContentHash(),
    ));
}

// Complete the upload by using data aggregated by the part generator
$result = $client->completeMultipartUpload(array(
    'vaultName'   => $vaultName,
    'uploadId'    => $uploadId,
    'archiveSize' => $parts->getArchiveSize(),
    'checksum'    => $parts->getRootChecksum(),
));
$archiveId = $result->get('archiveId');

fclose($archiveData);

For more information about the various multipart operations, see the API documentation for GlacierClient. You should also take a look at the API docs for the classes in the MultipartUpload namespace to become more familiar with the multipart abstraction. We hope that this post helps you work better with Amazon Glacier and take advantage of the low-cost, long-term storage it provides.

Release: AWS SDK for PHP 2.4.1

by Michael Dowling | in PHP

We would like to announce the release of version 2.4.1 of the AWS SDK for PHP. This release adds support for setting watermark and max frame rate parameters in the Amazon Elastic Transcoder client and resolves issues with the Amazon S3, Amazon EC2, Amazon ElastiCache, AWS Elastic Beanstalk, Amazon EMR, and Amazon RDS clients.

Changelog

  • Added support for setting watermarks and max frame rates to the Amazon Elastic Transcoder client
  • Added MD5 validation to Amazon SQS ReceiveMessage operations
  • Added the Aws\DynamoDb\Iterator\ItemIterator class to make it easier to get items from the results of DynamoDB operations in a simpler form
  • Added support for the cr1.8xlarge EC2 instance type. Use Aws\Ec2\Enum\InstanceType::CR1_8XLARGE
  • Added support for the suppression list SES mailbox simulator. Use Aws\Ses\Enum\MailboxSimulator::SUPPRESSION_LIST
  • Fixed an issue with data formats throughout the SDK due to a regression. Dates are now sent over the wire with the correct format. This issue affected the Amazon EC2, Amazon ElastiCache, AWS Elastic Beanstalk, Amazon EMR, and Amazon RDS clients
  • Fixed an issue with the parameter serialization of the ImportInstance operation in the Amazon EC2 client
  • Fixed an issue with the Amazon S3 client where the RoutingRules.Redirect.HostName parameter of the PutBucketWebsite operation was erroneously marked as required
  • Fixed an issue with the Amazon S3 client where the DeleteObject operation was missing parameters
  • Fixed an issue with the Amazon S3 client where the Status parameter of the PutBucketVersioning operation did not properly support the "Suspended" value
  • Fixed an issue with the Amazon Glacier UploadPartGenerator class so that an exception is thrown if the provided body to upload is less than 1 byte

Install/Download the SDK

You can get the latest version of the SDK via:

AWS Service Provider for Laravel 1.0.4

by Jeremy Lindblom | in PHP

We would like to announce the availability of version 1.0.4 of the AWS Service Provider for Laravel. This version includes the Aws\Laravel\AwsFacade class, which allows you to register an AWS facade in your Laravel 4 project, so you can retrieve clients in an easy and idiomatic way (e.g., $s3 = AWS::get('s3');).

We would also like to remind you about the other framework-specific packages that we currently support: the AWS SDK ZF2 Module and the AWS Service Provider for Silex. In the future, we will also post announcements on the blog when we update these packages.

Syncing Data with Amazon S3

by Michael Dowling | in PHP

Warning: This blog post provides instructions for AWS SDK for PHP V2. If you are looking for AWS SDK for PHP V3 instructions, please see our SDK guide.

Have you ever needed to upload an entire directory of files to Amazon S3 or download an Amazon S3 bucket to a local directory? With a recent release of the AWS SDK for PHP, this is now not only possible, but really simple.

Uploading a directory to a bucket

First, let’s create a client object that we will use in each example.

use Aws\S3\S3Client;

$client = S3Client::factory(array(
    'key'    => 'your-aws-access-key-id',
    'secret' => 'your-aws-secret-access-key'
));

After creating a client, you can upload a local directory to an Amazon S3 bucket using the uploadDirectory() method of a client:

$client->uploadDirectory('/local/directory', 'my-bucket');

This small bit of code compares the contents of the local directory to the contents in the Amazon S3 bucket and transfers only the files that have changed. While iterating over the keys in the bucket and comparing them against the names of local files, the uploader transfers the changed files in parallel using batches of requests. When the size of a file exceeds a customizable multipart_upload_size option, the uploader automatically uploads the file using a multipart upload.
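
For example, a minimal sketch (the 16 MB threshold is only illustrative) of raising that threshold looks like this:

$client->uploadDirectory('/local/directory', 'my-bucket', null, array(
    'multipart_upload_size' => 16 * 1024 * 1024 // use multipart uploads for files over 16 MB
));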

Customizing the upload sync

Plenty of options and customizations exist to make the uploadDirectory() method flexible so that it can fit many different use cases and requirements.

The following example uploads a local directory where each object is stored in the bucket using a public-read ACL, 20 requests are sent in parallel, and debug information is printed to standard output as each request is transferred.

$dir = '/local/directory';
$bucket = 'my-bucket';
$keyPrefix = '';
$options = array(
    'params'      => array('ACL' => 'public-read'),
    'concurrency' => 20,
    'debug'       => true
);

$client->uploadDirectory($dir, $bucket, $keyPrefix, $options);

By specifying $keyPrefix, you can cause the uploaded objects to be placed under a virtual folder in the Amazon S3 bucket. For example, if the $bucket name is “my-bucket” and the $keyPrefix is “testing/”, then your files will be uploaded to “my-bucket” under the “testing/” virtual folder: https://my-bucket.s3.amazonaws.com/testing/filename.txt.
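
Continuing the example above, such an upload might look like this:

$client->uploadDirectory($dir, $bucket, 'testing/', $options);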

You can find more documentation about uploading a directory to a bucket in the AWS SDK for PHP User Guide.

Downloading a bucket

Downloading an Amazon S3 bucket to a local directory is just as easy. We’ll again use a simple function available on an Aws\S3\S3Client object to easily download objects: downloadBucket().

The following example downloads all of the objects from my-bucket and stores them in /local/directory. Object keys that are under virtual subfolders are converted into a nested directory structure when the objects are downloaded.

$client->downloadBucket('/local/directory', 'my-bucket');

Customizing the download sync

Similar to the uploadDirectory() method, the downloadBucket() method has several options that can customize how files are downloaded.

The following example downloads a bucket to a local directory by downloading 20 objects in parallel and prints debug information to standard output as each transfer takes place.

$dir = '/local/directory';
$bucket = 'my-bucket';
$keyPrefix = '';

$client->downloadBucket($dir, $bucket, $keyPrefix, array(
    'concurrency' => 20,
    'debug'       => true
));

By specifying $keyPrefix, you can limit the downloaded objects to only keys that begin with the specified $keyPrefix. This can be useful for downloading objects under a virtual directory.

The downloadBucket() method also accepts an optional associative array of $options that can be used to further control the transfer. One option of note is the allow_resumable option, which allows the transfer to resume any previously interrupted downloads. This can be useful for resuming the download of a very large object so that you only need to download any remaining bytes.
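
For example, a minimal sketch (continuing the variables from above) of enabling resumable downloads might look like this:

$client->downloadBucket($dir, $bucket, $keyPrefix, array(
    'concurrency'     => 20,
    'allow_resumable' => true // resume any previously interrupted downloads
));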

You can find more documentation on syncing buckets and directories and other great Amazon S3 abstraction layers in the AWS SDK for PHP User Guide.

Static Service Client Facades

by Jeremy Lindblom | in PHP

Version 2.4 of the AWS SDK for PHP adds the ability to enable and use static client facades. These "facades" provide an easy, static interface to service clients available in the service builder. For example, when working with a normal client instance, you might have code that looks like the following:

// Get the configured S3 client from the service builder
$s3 = $aws->get('s3');

// Execute the CreateBucket command using the S3 client
$s3->createBucket(array('Bucket' => 'your-new-bucket-name'));

With client facades enabled, you can also accomplish this with the following code:

// Execute the CreateBucket command using the S3 client
S3::createBucket(array('Bucket' => 'your-new-bucket-name'));

Enabling and using client facades

To enable static client facades to be used in your application, you must use the Aws\Common\Aws::enableFacades method when you set up the service builder.

// Include the Composer autoloader
require 'vendor/autoload.php';

// Instantiate the SDK service builder with my config and enable facades
$aws = Aws::factory('/path/to/my_config.php')->enableFacades();

This will set up the client facades and alias them into the global namespace. After that, you can use them anywhere to write simpler, more expressive code for interacting with AWS services.

// List current buckets
echo "Current Buckets:n";
foreach (S3::getListBucketsIterator() as $bucket) {
    echo "{$bucket['Name']}n";
}

$args = array('Bucket' => 'your-new-bucket-name');
$file = '/path/to/the/file/to/upload.jpg';

// Create a new bucket and wait until it is available for uploads
S3::createBucket($args) and S3::waitUntilBucketExists($args);
echo "nCreated a new bucket: {$args['Bucket']}.n";

// Upload a file to the new bucket
$result = S3::putObject($args + array(
    'Key'  => basename($file),
    'Body' => fopen($file, 'r'),
));
echo "nCreated a new object: {$result['ObjectURL']}n";

You can also mount the facades into a namespace other than the global namespace. For example, if you want to make the client facades available in the "Services" namespace, you can do the following:

Aws::factory('/path/to/my_config.php')->enableFacades('Services');

$result = Services\DynamoDb::listTables();

Why use client facades?

The use of static client facades is completely optional. We included this feature in the SDK in order to appeal to PHP developers who prefer static notation or who are familiar with PHP frameworks like CodeIgniter, Laravel, or Kohana, where this style of method invocation is common.

Though using static client facades has little real benefit over using client instances, it can make your code more concise and prevent you from having to inject the service builder or client instance into the context where you need the client object. This can make your code easier to write and understand. Whether or not you should use the client facades is purely a matter of preference.

How client facades work in the AWS SDK for PHP is similar to how facades work in the Laravel 4 Framework. Even though you are calling static classes, all of the method calls are proxied to method calls on actual client instances—the ones stored in the service builder. This means that the usage of the clients via the client facades can still be mocked in your unit tests, which removes one of the general disadvantages of using static classes in object-oriented programming. For information about how to test code that uses client facades, please see the Testing Code that Uses Client Facades section of the AWS SDK for PHP User Guide.

Though we are happy to offer this new feature, we don’t expect you to change all of your code to use the static client facades. We are simply offering it as an alternative that may be more convenient or familiar to you. We still recommend using client instances as you have in the past and support the use of dependency injection. Be sure to let us know in the comments if you like this new feature and if you plan on using it.