Reduce Composer Issues on Elastic Beanstalk

by Jeremy Lindblom

During the past couple of months, we’ve received a few reports from customers who have experienced PHP application deployment failures on AWS Elastic Beanstalk caused by parsing exceptions thrown by Composer. In case you have recently run into the issue yourself, we’d like to briefly describe why it happens and how you can work around it.

The issue

The issue occurs when a project or its dependencies express version requirements using newer Composer syntax features like the caret (^) operator. For users of the AWS SDK for PHP, the error looks something like this:

[RuntimeException] Could not load package aws/aws-sdk-php in http://packagist.org:
[UnexpectedValueException] Could not parse version constraint ^5.3: Invalid version string "^5.3"

We also observed the issue with some versions of the Laravel framework and a few other libraries. It comes up when you deploy applications to older Elastic Beanstalk solution stacks. These older stacks include an outdated version of Composer on the underlying Amazon Machine Image (AMI) that does not support some of the newer Composer features, such as the caret (^) and OR (||) operators.

The solution

There are three ways to solve this issue.

  1. Upgrade your application to use the latest Elastic Beanstalk solution stack. The latest solution stacks for PHP have a more recent version of Composer that supports the new syntax features.
  2. Use Elastic Beanstalk configuration files (.ebextensions). You can create a file ending in .config inside your .ebextensions directory that runs a Composer self-update command before your dependencies are installed. For example, name the file 01composer.config and add the following configuration:

    commands:
      01updateComposer:
        command: export COMPOSER_HOME=/root && /usr/bin/composer.phar self-update
    
    option_settings:
      - namespace: aws:elasticbeanstalk:application:environment
        option_name: COMPOSER_HOME
        value: /root
  3. Install your dependencies locally. One way to avoid issues with Composer during deployment is to bypass the Composer workflow entirely by creating deployments of your application with the dependencies pre-installed.

The conclusion

We hope that this short blog post will be helpful if you happen to run into this issue. If this article does not solve your problem or you are running into other issues, please contact AWS Support or ask for help on the Elastic Beanstalk forum.

AWS SDK for PHP Office Hour

by Jeremy Lindblom

The AWS SDKs and Tools team invites you to the first-ever online office hour hosted by the maintainers of the AWS SDK for PHP. It will be held via Google Hangouts at 10:30-11:30am PDT (UTC -7:00) on Monday, 6/29. If you don’t already have a Google account, you will need to create one to join the video chat.

This first office hour will be driven by customer questions. We expect to focus on questions about the SDK, but any questions related to PHP development on AWS are welcome. We’re excited to meet you and help you be successful in developing PHP applications on AWS!

Please register for the event, add it to your calendar, and join the office hour next Monday.

Updated Framework Modules for V3

by Jeremy Lindblom

Last month, we announced that Version 3 of the AWS SDK for PHP was generally available. We’ve now updated all of our framework-specific modules with releases that support Version 3 (V3) of the SDK. Take a look!

We’ve also updated our AWS Resource APIs for PHP library, which we previewed in December. Now that V3 of the SDK is stable, we will be working on adding features and documentation to this library over the coming weeks.

As always, we appreciate your feedback on any of our open source packages. Check out these updates and let us know what you think. :-)

P.S. We’d like to give a special thanks to Graham Campbell and Michaël Gallego for their contributions to the Laravel and ZF2 packages, respectively.

HaPHPy 20th Birthday to PHP

by Jeremy Lindblom

Twenty years ago, Rasmus Lerdorf announced version 1.0 of PHP. Two decades later, PHP has evolved tremendously and is still going strong. The AWS SDK for PHP team would like to thank everyone who has contributed to the PHP language and community over these past twenty years, and to wish PHP a very HaPHPy birthday.

Join in the celebration today by reflecting on the history of PHP, following the #20yearsofphp hashtag, and checking out some of these other blog posts from people in the PHP community:

Version 3 of the AWS SDK for PHP

by Jeremy Lindblom

Last October, we announced the Developer Preview of Version 3 of the AWS SDK for PHP. We even presented about it at AWS re:Invent last November. We are grateful for your early feedback and support. Since last fall, we’ve been hard at work on improving, testing, and documenting Version 3 to get it ready for a stable release. We’re excited to announce that Version 3 of the AWS SDK for PHP is now generally available via Composer and on GitHub.

Version 3 of the SDK (V3) represents a significant effort to improve the capabilities of the SDK, incorporate over two years of customer feedback, upgrade our dependencies, improve performance, and adopt the latest PHP standards.

What we’re excited about

We’ve made many improvements to V3, even since our blog post about the Developer Preview (check out that post if you haven’t already). There are also some things that have changed or have been removed since Version 2 of the SDK (V2). We encourage you to take a look at our V3 Migration Guide for all the details about what has changed.

V3 has less code and better performance than V2, and it uses the latest version of the Guzzle HTTP library. It also has some exciting new features and improvements.

Asynchronous requests and promises

V3 allows you to execute operations asynchronously. This not only makes it easier to send concurrent requests, it also makes it easier to create asynchronous and cooperative workflows. We use promises, the basic building block of our asynchronous features, throughout the SDK’s core. We also use them to implement the SDK’s higher-level abstractions, including Command Pools, Paginators, Waiters, and service-specific features like the S3 MultipartUploader. That means almost every feature of the SDK can be used in an asynchronous way.

To execute an operation asynchronously, you simply add "Async" as a suffix to your method call.

// The SYNCHRONOUS (normal) way:

// Executing an operation returns a Result object.
$result = $s3Client->putObject([
    'Bucket' => 'your-bucket',
    'Key'    => 'docs/file.pdf',
    'Body'   => fopen('/path/to/file.pdf', 'r'),
]);

// You can access the result data from the Result object.
echo $result['ObjectURL'];

// The ASYNCHRONOUS way:

// Executing an operation asynchronously returns a Promise object.
$promise = $s3Client->putObjectAsync([
    'Bucket' => 'your-bucket',
    'Key'    => 'docs/file.pdf',
    'Body'   => fopen('/path/to/file.pdf', 'r'),
]);

// Wait for the operation to complete to get the Result object.
$result = $promise->wait();

// Then you can access the result data like normal.
echo $result['ObjectURL'];

The true power of asynchronous requests is the ability to create asynchronous workflows. For example, if you want to create a DynamoDB table, wait until it is ACTIVE (using Waiters), and then write some data to it, you can use the then() method of the Promise object to chain those actions together.

$client->createTableAsync([
    'TableName' => $table,
    // Other params...
])->then(function () use ($client, $table) {
    return $client->getWaiter('TableExists', [
        'TableName' => $table,
    ])->promise();
})->then(function () use ($client, $table) {
    return $client->putItemAsync([
        'TableName' => $table,
        'Item' => [
            // Item attributes...
        ]
    ]);
})->wait();

Please take a look at our detailed guide on promises for more information.
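
The Command Pools mentioned above are also built on promises and let you send a batch of commands concurrently. The following is a minimal sketch, assuming the Aws\CommandPool::batch() helper and the client’s getCommand() method; the bucket name and keys are placeholders.

use Aws\CommandPool;

// Build several PutObject commands without executing them yet.
$commands = [];
foreach (['a.txt', 'b.txt', 'c.txt'] as $key) {
    $commands[] = $s3Client->getCommand('PutObject', [
        'Bucket' => 'your-bucket',
        'Key'    => $key,
        'Body'   => 'Contents of ' . $key,
    ]);
}

// Execute the commands concurrently and collect the results (or exceptions).
$results = CommandPool::batch($s3Client, $commands);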

PSR-7 compliance and decoupling of the HTTP layer

The PHP-FIG has recently announced the acceptance of PSR-7, a "PHP Standard Recommendation" that defines interfaces for HTTP messages (e.g., Request and Response objects). We have adopted these interfaces for representing HTTP requests within the SDK, which has allowed us to decouple the SDK from Guzzle such that V3 works with both Guzzle 5 and Guzzle 6. It’s also possible to write your own HTTP handler for the SDK that does not use Guzzle.

The SDK defaults to using Guzzle 6 to perform HTTP requests. Guzzle 6 comes with a number of improvements, including support for asynchronous requests, PSR-7 compliance, and swappable HTTP adapters (including a PHP stream wrapper implementation that can be used on systems where cURL is not available).
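
For example, if you want the SDK to use a Guzzle 6 client that you have configured yourself, you can wrap it in a handler and pass it to the service client. This is a minimal sketch, assuming the 'http_handler' client option and the Aws\Handler\GuzzleV6\GuzzleHandler wrapper class; check the SDK documentation for the exact names in your version.

<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\Handler\GuzzleV6\GuzzleHandler;
use GuzzleHttp\Client as GuzzleClient;

// Configure your own Guzzle 6 client (e.g., with a custom timeout),
// then hand it to the SDK wrapped in a handler.
$handler = new GuzzleHandler(new GuzzleClient(['timeout' => 10]));

$s3Client = new S3Client([
    'region'       => 'us-west-2',
    'version'      => 'latest',
    'http_handler' => $handler,
]);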

JMESPath querying of results and paginators

In V3, the Result object has a new method: search(). With this method you can query data in Result objects using JMESPath expressions. JMESPath is a query language for JSON, or, in our case, PHP arrays.

$result = $ec2Client->describeInstances();
print_r($result->search('Reservations[].Instances[].InstanceId'));

JMESPath expressions can also be applied to Paginators in the same way. This will return a new Iterator that yields the result of the expression on every page of data.

$results = $s3->getPaginator('ListObjects', [
    'Bucket' => 'my-bucket',
]);
foreach ($results->search('Contents[].Key') as $key) {
    echo $key . "\n";
}

Time to code

We hope you will enjoy using Version 3 of the AWS SDK for PHP. Here are the links you need to get started:

DynamoDB JSON and Array Marshaling for PHP

by Jeremy Lindblom

Back in October of 2014, Amazon DynamoDB added support for new data types, including the map (M) and list (L) types. These new types, along with some API updates, make it possible to store more complex, multilevel data, and use DynamoDB for document storage.

The DynamoDB Marshaler

To make these new types even easier for our PHP SDK users, we added a new class, called the DynamoDB Marshaler, in Version 2.7.7 of the AWS SDK for PHP. The Marshaler object has methods for marshaling JSON documents and PHP arrays to the DynamoDB item format and unmarshaling them back.

Marshaling a JSON Document

Let’s say you have a JSON document describing a contact in the following format:

{
  "id": "5432c69300594",
  "name": {
    "first": "Jeremy",
    "middle": "C",
    "last": "Lindblom"
  },
  "age": 30,
  "phone_numbers": [
    {
      "type": "mobile",
      "number": "5555555555",
      "preferred": true
    },
    {
      "type": "home",
      "number": "5555555556",
      "preferred": false
    }
  ]
}

You can use the DynamoDB Marshaler to convert this JSON document into the format required by DynamoDB.

use Aws\DynamoDb\DynamoDbClient;
use Aws\DynamoDb\Marshaler;

$client = DynamoDbClient::factory(/* your config */);
$marshaler = new Marshaler();
$json = file_get_contents('/path/to/your/document.json');

$client->putItem([
    'TableName' => 'YourTable',
    'Item'      => $marshaler->marshalJson($json)
]);

The output of marshalJson() is an associative array that includes all the type information required for the DynamoDB 'Item' parameter.

[
    'id' => ['S' => '5432c69300594'],
    'name' => ['M' => [
        'first' => ['S' => 'Jeremy'],
        'middle' => ['S' => 'C'],
        'last' => ['S' => 'Lindblom'],
    ]],
    'age' => ['N' => '30'],
    'phone_numbers' => ['L' => [
        ['M' => [
            'type' => ['S' => 'mobile'],
            'number' => ['S' => '5555555555'],
            'preferred' => ['BOOL' => true]
        ]],
        ['M' => [
            'type' => ['S' => 'home'],
            'number' => ['S' => '5555555556'],
            'preferred' => ['BOOL' => false]
        ]],
    ]],
];

To retrieve an item and get the JSON document back, you need to use the unmarshalJson() method.

$result = $client->getItem([
    'TableName' => 'YourTable',
    'Key'       => ['id' => ['S' => '5432c69300594']]
]);
$json = $marshaler->unmarshalJson($result['Item']);

Marshaling a Native PHP Array

The Marshaler also provides the marshalItem() and unmarshalItem() methods that do the same type of thing, but for arrays. This is essentially an upgraded version of the existing DynamoDbClient::formatAttributes() method.

$data = [
    'id' => '5432c69300594',
    'name' => [
        'first'  => 'Jeremy',
        'middle' => 'C',
        'last'   => 'Lindblom',
    ],
    'age' => 30,
    'phone_numbers' => [
        [
            'type'      => 'mobile',
            'number'    => '5555555555',
            'preferred' => true
        ],
        [
            'type'      => 'home',
            'number'    => '5555555556',
            'preferred' => false
        ],
    ],
];

// Marshaling the data and putting an item.
$client->putItem([
    'TableName' => 'YourTable',
    'Item'      => $marshaler->marshalItem($data)
]);

// Getting an item and unmarshaling the data.
$result = $client->getItem([
    'TableName' => 'YourTable',
    'Key'       => ['id' => ['S' => '5432c69300594']]
]);
$data = $marshaler->unmarshalItem($result['Item']);

Be aware that marshalItem() does not support binary (B) and set (SS, NS, and BS) types. This is because they are ambiguous with the string (S) and list (L) types and have no equivalent type in JSON. We are working on some ideas that will provide more help with these types in Version 3 of the SDK.

Deprecations in the SDK

The new data types are a great addition to the Amazon DynamoDB service, but one consequence of adding support for these types is that we had to deprecate the following classes and methods in the Aws\DynamoDb namespace of the PHP SDK:

These classes and methods made assumptions about how certain native PHP types convert to DynamoDB types. The addition of the new types to DynamoDB invalidated those assumptions, and we could not update the code in a backward-compatible way to support the new types. They still work fine, but just not with the new types. These classes and methods are removed in Version 3 of the SDK, and the DynamoDB Marshaler object is meant to be the replacement for their functionality.

Feedback

We hope that this addition to the SDK makes working with DynamoDB really easy. If you have any feedback about the Marshaler or any ideas on how we can improve it, please let us know on GitHub. Better yet, send us a pull request. :-)

WordPress on AWS Whitepapers

by Jeremy Lindblom

This is a guest post by Andreas Chatzakis (@achatzakis), one of our AWS Solutions Architects.


WordPress is a very popular open source blogging tool and content management system (CMS) based on PHP and MySQL. AWS customers have been deploying WordPress to power anything from small blogs up to high traffic web sites.

We have recently published two new whitepapers about running WordPress on AWS:

  1. WordPress: Best Practices on AWS – This whitepaper helps system administrators get started with WordPress on AWS and shows them how to improve the cost efficiency of the deployment as well as the end-user experience. It also provides a reference architecture that addresses common scalability and high availability requirements.

  2. Deploying WordPress with AWS Elastic Beanstalk – This whitepaper demonstrates how to use AWS Elastic Beanstalk to implement a highly available and scalable deployment of WordPress. It includes the use of additional services such as Amazon Simple Storage Service (S3), Amazon CloudFront, and Amazon ElastiCache to improve the efficiency and performance of the installation.

These whitepapers complement the slides presented at the AWS re:Invent 2014 conference: Best Practices for Running WordPress on AWS (slides, video).

We hope you find the above material useful, and we are always eager to hear your stories and experience with WordPress on AWS!

Preview the AWS Resource APIs for PHP

by Jeremy Lindblom

This year is just about over, but we are too excited to wait until the new year to share with you a feature we are developing for the AWS SDK for PHP. We are calling it the AWS Resource APIs for PHP. This feature is maintained as a separate package, but it acts as an extension to Version 3 of the AWS SDK for PHP.

As you know, the core SDK is composed of service client objects that have methods corresponding 1-to-1 with operations in a service’s API (e.g., the Ec2Client::runInstances() method maps to the EC2 service’s RunInstances operation). The resource APIs build upon the SDK to add new types of objects that allow you to interact with the AWS service APIs in a more resource-oriented way. This allows you to use a more expressive syntax when working with AWS services, because you are acting on objects that understand their relationships with other resources and that encapsulate their identifying information.

Resource Objects

Resource objects each represent a single, identifiable AWS resource (e.g., an Amazon S3 bucket or an Amazon SQS queue). They contain information about how to identify the resource and load its data, the actions that can be performed on it, and the other resources to which it is related. Let’s take a look at a few examples showing how to interact with these resource objects.

First, let’s set up the Aws object, which acts as the starting point into the resource APIs.

<?php

require 'vendor/autoload.php';

use Aws\Resource\Aws;

$aws = new Aws([
    'region'  => 'us-west-2',
    'version' => 'latest',
    'profile' => 'your-credential-profile',
]);

(Note: The array of configuration options provided in the preceding example is the same as what you would provide when instantiating the Aws\Sdk object in the core SDK.)

You can access related resources by calling the related resource’s name as a method and passing in its identity.

$bucket = $aws->s3->bucket('your-bucket');
$object = $bucket->object('image/bird.jpg');

Accessing resources this way is evaluated lazily, so the preceding example does not actually make any API calls.

Once you access the data of a resource, an API call will be triggered to "load" the resource and fetch its data. To access a resource object’s data, you can access it like an array.

echo $object['LastModified'];

Performing Actions

You can perform actions on a resource by calling verb-like methods on the object.

// Create a bucket and object.
$bucket = $aws->s3->createBucket([
    'Bucket' => 'my-new-bucket'
]);
$object = $bucket->putObject([
    'Key'  => 'images/image001.jpg',
    'Body' => fopen('/path/to/image.jpg', 'r'),
]);

// Delete the bucket and object.
$object->delete();
$bucket->delete();

Because the resource’s identity is encapsulated within the resource object, you never have to specify it again once the object is created. This way, actions like $object->delete() do not require any arguments.

Collections

Some resources have a "has many" type relationship with other resources. For example, an S3 bucket has many S3 objects. The AWS Resource APIs also allow you to work with resource collections.

// Delete every object in the bucket.
foreach ($bucket->objects() as $object) {
    $object->delete();
}

Using the Resource APIs

We are currently working on providing API documentation for the AWS Resource APIs. Even without documentation, you can programmatically determine what methods are available on a resource object by calling the respondsTo() method.

print_r($bucket->respondsTo());
// Array
// (
//     [0] => create
//     [1] => delete
//     [2] => deleteObjects
//     [3] => putObject
//     [4] => multipartUploads
//     [5] => objectVersions
//     [6] => objects
//     [7] => bucketAcl
//     [8] => bucketCors
//     [9] => bucketLifecycle
//     [10] => bucketLogging
//     [11] => bucketPolicy
//     [12] => bucketNotification
//     [13] => bucketRequestPayment
//     [14] => bucketTagging
//     [15] => bucketVersioning
//     [16] => bucketWebsite
//     [17] => object
// )

var_dump($bucket->respondsTo('putObject'));
// bool(true)

Check it Out!

To get started, you can install the AWS Resource APIs for PHP using Composer by requiring the aws/aws-sdk-php-resources package in your project. The source code and README are located in the awslabs/aws-sdk-php-resources repo on GitHub.

The initial preview release of the AWS Resource APIs supports the following services: Amazon EC2, Amazon Glacier, Amazon S3, Amazon SNS, Amazon SQS, AWS CloudFormation, and AWS Identity and Access Management (IAM). We will continue to add support for more APIs over the next year.

We’re eager to hear your feedback about this new feature! Please use the issue tracker to ask questions, provide feedback, or submit any issues or feature requests.

AWS re:Invent 2014

by Jeremy Lindblom

We spent the past week at AWS re:Invent! The PHP SDK team was there with many of our co-workers and customers. It was a great conference, and we had a lot of fun.

If you did not attend re:Invent or follow our @awsforphp Twitter feed during the event, then you have a lot to catch up on.

New AWS Services and Features

Several new services were announced during the keynotes, on both the first day and second day, and during other parts of the event.

During the first keynote, three new AWS services for code management and deployment were announced: AWS CodeDeploy, AWS CodeCommit, and AWS CodePipeline. CodeDeploy is available today, and can help you automate code deployments to Amazon EC2 instances.

Additionally, three other new services were revealed that are related to enterprise security and compliance: AWS Key Management Service (AWS KMS), AWS Config, and AWS Service Catalog.

Amazon RDS for Aurora was also announced during the first keynote. Amazon Aurora is a new, MySQL-compatible, relational database engine built for high performance and availability.

The keynote on the second day boasted even more announcements, including the new Amazon EC2 Container Service, which is a highly scalable, high performance container management service that supports Docker containers.

Also, new compute-optimized (C4) Amazon EC2 Instances were announced, as well as new larger and faster Elastic Block Store (EBS) volumes backed with SSDs.

AWS Lambda was introduced during the second keynote as well. It is a new compute service that runs your code in response to events and automatically manages the compute resources for you. To learn about AWS Lambda in more detail, check out the AWS Lambda session from re:Invent, which shows how you can implement image thumbnail generation in your applications using AWS Lambda and the new Amazon S3 Event Notifications feature. That presentation also briefly mentions the upcoming DynamoDB Streams feature, which was announced just prior to the conference.

The APIs for AWS CodeDeploy, AWS KMS, AWS Config, and AWS Lambda are currently available, and all are supported in the AWS SDK for PHP as of version 2.7.5.

PHP Presentations

I had the honor of presenting a session about the PHP SDK called Building Apps with the AWS SDK for PHP, where I explained how to use many of the new features from Version 3 of the SDK in the context of building an application I called "SelPHPies with ElePHPants". You should definitely check it out whether you are new to or experienced with the SDK.

Here are the links to my presentation as well as two other PHP-specific sessions that you might be interested in.

  • Building Apps with the AWS SDK for PHP (slides, video)
  • Best Practices for Running WordPress on AWS (slides, video)
  • Running and Scaling Magento on AWS (video)

There were so many other great presentations at re:Invent. The slides, videos, and podcasts for all of the presentations are (or will be) posted online.

PHPeople

Announcements and presentations are exciting and informative, but my favorite part about any conference is the people. Re:Invent was no exception.

It was great to run into familiar faces from my Twitter stream like Juozas Kaziukėnas, Ben Ramsey, Brian DeShong, and Boaz Ziniman. I also had the pleasure of meeting some new friends from companies that had sent their PHP developers to the conference.

See You Next Year

We hope you take the time to check out some of the presentations from this year’s event, and consider attending next year. Get notified about registration for next year’s event by signing up for the re:Invent mailing list on the AWS re:Invent website.

Version 3 Preview of the AWS SDK

by Jeremy Lindblom

We’re excited to introduce you to the preview release of Version 3 of the AWS SDK for PHP! As of today, the preview release of Version 3 (V3) is available on GitHub and via Composer.

Two years ago, we released Version 2 (V2) of the SDK. Since then, thousands of developers and companies have adopted it. We are sincerely grateful to all of our users and contributors. We have been constantly collecting your feedback and ideas, and continually watching the evolution of PHP, AWS, and the Guzzle library.

Earlier this year, we felt we could make significant improvements to the SDK, but only if we could break a few things. Since receiving a unanimously positive response to our blog post about updating to the latest version of Guzzle a few months ago, we’ve been working hard on V3, and we’re ready to share it with you.

What’s new?

The new version of the SDK provides a number of important benefits to AWS customers. It is smaller and faster, with improved performance for both serial and concurrent requests. It has several new features based on its use of the new Guzzle 5 library (which also includes the new features from Guzzle 4). The SDK will also, starting from V3, follow the official SemVer spec, so you can have complete confidence when setting version constraints in your projects’ composer.json files.

Let’s take a quick look at some of the new features.

Asynchronous requests

With V3, you can perform asynchronous operations, which makes it easier to send requests concurrently. To achieve this, the SDK returns future result objects when you specify the @future parameter; these futures block only when their data is accessed. For more robust asynchronous workflows, you can retrieve a promise from the future result to perform logic once the result becomes available or an exception is thrown.

<?php

// Upload a file to your bucket in Amazon S3.
// Use '@future' to make the operation complete asynchronously.
$result = $s3Client->putObject([
    'Bucket' => 'your-bucket',
    'Key'    => 'docs/file.pdf',
    'Body'   => fopen('/path/to/file.pdf', 'r'),
    '@future' => true,
]);

After creating a result using the @future attribute, you now have a future result object. You can use the data stored in the future in a blocking (or synchronous) manner by just using the result as normal (i.e., like a PHP array).

// Wait until the response has been received before accessing its data.
echo $result['ObjectURL'];

If you want to allow your requests to complete asynchronously, you should use the promise API of the future result object. To retrieve the promise, use the then() method of the future result and provide a callback to be executed when the promise is fulfilled. Promises make it easier to compose pipelines when dealing with asynchronous results. For example, we could use promises to save the Amazon S3 object’s URL to an item in an Amazon DynamoDB table once the upload is complete.

// Note: $result is the result of the preceding example's PutObject operation.
$result->then(
    function ($s3Result) use ($ddbClient) {
        $ddbResult = $ddbClient->putItem([
            'TableName' => 'your-table',
            'Item' => [
                'topic' => ['S' => 'docs'],
                'time'  => ['N' => (string) time()],
                'url'   => ['S' => $s3Result['ObjectURL']],
            ],
            '@future' => true,
        ]);

        // Don't break promise chains; return a value. In this case, we are returning
        // another promise, so the PutItem operation can complete asynchronously too.
        return $ddbResult->promise();
    }
)->then(
    function ($result) {
        echo "SUCCESS!n";
        return $result;
    },
    function ($error) {
        echo "FAILED. " . $error->getMessage() . "n";
        // Forward the rejection by re-throwing it.
        throw $error;
    }
);

The SDK uses the React/Promise library to provide the promise functionality, allowing for additional features such as joining and mapping promises.
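
For example, joining promises lets you wait on several asynchronous results at once. The following is a hedged sketch that combines the promise() method shown above with the React\Promise\all() helper from the React/Promise library; the bucket and keys are placeholders, and the underlying requests still complete when the future results are dereferenced.

// Start two uploads asynchronously.
$future1 = $s3Client->putObject([
    'Bucket'  => 'your-bucket',
    'Key'     => 'docs/file1.pdf',
    'Body'    => fopen('/path/to/file1.pdf', 'r'),
    '@future' => true,
]);
$future2 = $s3Client->putObject([
    'Bucket'  => 'your-bucket',
    'Key'     => 'docs/file2.pdf',
    'Body'    => fopen('/path/to/file2.pdf', 'r'),
    '@future' => true,
]);

// Join the promises so the callback runs only after both uploads succeed.
\React\Promise\all([$future1->promise(), $future2->promise()])
    ->then(function (array $results) {
        foreach ($results as $result) {
            echo $result['ObjectURL'] . "\n";
        }
    });

// Accessing the future results blocks until the requests have completed,
// which also fulfills the joined promise above.
$future1['ObjectURL'];
$future2['ObjectURL'];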

JMESPath querying of results

The result object also has a new search() method that allows you to query the result data using JMESPath, a query language for JSON (or PHP arrays, in our case).

<?php

$result = $ec2Client->describeInstances();

print_r($result->search('Reservations[].Instances[].InstanceId'));

Example output:

Array
(
    [0] => i-xxxxxxxx
    [1] => i-yyyyyyyy
    [2] => i-zzzzzzzz
)

Swappable and custom HTTP adapters

In V3, cURL is no longer required, but is still used by the default HTTP adapter. However, you can use other HTTP adapters, like the one shipped with Guzzle that uses PHP’s HTTP stream wrapper. You can also write custom adapters, which opens up the possibility of creating an adapter that integrates with a non-blocking event loop like ReactPHP.
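
As a purely illustrative sketch (not the documented preview API), here is the general idea using Guzzle 5’s stream-based RingPHP handler. The GuzzleHttp\Client and GuzzleHttp\Ring\Client\StreamHandler classes are real, but the 'client' option used to hand the custom Guzzle client to the SDK is hypothetical; consult the V3 documentation for the actual wiring.

use GuzzleHttp\Client;
use GuzzleHttp\Ring\Client\StreamHandler;

// Build a Guzzle 5 client that uses PHP's HTTP stream wrapper instead of cURL.
$httpClient = new Client(['handler' => new StreamHandler()]);

// Hypothetical: supply the custom HTTP client when creating the SDK client.
$s3 = (new Aws\Sdk)->getS3([
    'region'  => 'us-east-1',
    'version' => 'latest',
    'client'  => $httpClient, // assumed option name, for illustration only
]);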

Paginators

Paginators are a new feature in V3 that complement the Iterators from V2. Paginators are similar to Iterators, except that they yield Result objects instead of items within a result. This is nice because the paginator handles the pagination tokens and markers for you while fetching multiple pages of results, but it still gives you the flexibility to extract whatever data you want.

// List all "directories" and "files" in the bucket.
$paginator = $s3->getPaginator('ListObjects', [
    'Bucket' => 'my-bucket',
    'Delimiter' => '/'
]);
foreach ($paginator as $result) {
    $jmespathExpr = '[CommonPrefixes[].Prefix, Contents[].Key][]';
    foreach ($result->search($jmespathExpr) as $item) {
        echo $item . "\n";
    }
}

Example output:

dir1/
dir2/
file1
file2
...

New event system

Version 3 features a new and improved event system. Command objects now have their own event emitter that is decoupled from the HTTP request events. There is also a new request "progress" event that can be used for tracking upload and download progress.

use GuzzleHttp\Event\ProgressEvent;

$s3->getHttpClient()->getEmitter()->on('progress', function (ProgressEvent $e) {
    echo 'Uploaded ' . $e->uploaded . ' of ' . $e->uploadSize . "\n";
});

$s3->putObject([
    'Bucket' => $bucket,
    'Key'    => 'docs/file.pdf',
    'Body'   => fopen('/path/to/file.pdf', 'r'),
]);

Example output:

Uploaded 0 of 5299866
Uploaded 16384 of 5299866
Uploaded 32768 of 5299866
...
Uploaded 5275648 of 5299866
Uploaded 5292032 of 5299866
Uploaded 5299866 of 5299866

New client options

For V3, we changed some of the options you provide when instantiating a client, and we added a few new options that may help you work with services more easily.

  • "debug" – Set to true to print out debug information as requests are being made. You’ll see how the Command and Request objects are affected during each event, and an adapter-specific wire log of the request.
  • "retries" – Set the maximum number of retries the client will perform on failed and throttled requests. The default has always been 3, but now it is easy to configure.

These options can be set when instantiating a client.

<?php

$s3 = (new Aws\Sdk)->getS3([
    // Options that exist in both Version 2 and Version 3
    'profile'  => 'my-credential-profile',
    'region'   => 'us-east-1',
    'version'  => 'latest',

    // New in Version 3
    'debug'    => true,
    'retries'  => 5,
]);

What has changed?

To make all of these improvements for V3, we needed to make some backward-incompatible changes. However, the changes from Version 2 to Version 3 are much fewer than the changes from Version 1 to Version 2. In fact, much of the way you use the SDK will remain the same. For example, the following code for writing an item to an Amazon DynamoDB table looks exactly the same in both V2 and V3 of the SDK.

$result = $dynamoDbClient->putItem([
    'TableName' => 'Contacts',
    'Item'      => [
        'FirstName' => ['S' => 'Jeremy'],
        'LastName'  => ['S' => 'Lindblom'],
        'Birthday'  => ['M' => [
            'Month' => ['N' => '11'],
            'Date'  => ['N' => '24'],
        ]],
    ],
]);

There are two important changes, though, that you should be aware of up front:

  1. V3 requires PHP 5.5 or higher and requires the use of Guzzle 5.
  2. You must now specify the API version (via the "version" client option) when you instantiate a client. This is important, because it allows you to lock in to the API versions of the services you are using. This helps you and us maintain backward compatibility between future SDK releases, because you are in charge of the API versions you use. Your code will never be impacted by new service API versions until you update your version setting. If this is not a concern for you, you can default to the latest API version by setting 'version' to 'latest' (this is essentially the default behavior of V2). A short example of pinning a specific API version is shown below.
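
For example, reusing the client construction from the earlier example, you could pin the Amazon S3 client to a specific API version (the date-based version strings come from each service’s API documentation); this is a short sketch, not a complete program.

<?php

// Lock the S3 client to a specific API version instead of 'latest'.
$s3 = (new Aws\Sdk)->getS3([
    'profile' => 'my-credential-profile',
    'region'  => 'us-east-1',
    'version' => '2006-03-01', // the Amazon S3 API version
]);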

What next?

We hope you are excited for Version 3 of the SDK!

We look forward to your feedback as we continue to work towards a stable release. Please reach out to us in the comments, on GitHub, or via Twitter (@awsforphp). We plan to publish more blog posts in the near future to explain some of the new features in more detail. We have already published the API docs for V3, but we’ll be working on improving all the documentation for V3, including creating detailed migration and user guides. We’ll also be speaking about V3 in our session at AWS re:Invent.

We will continue updating and making regular releases for V2 on the "master" branch of the SDK’s GitHub repository. Our work on V3 will happen on a separate "v3" branch until we are ready for a stable release.

Version 3 can be installed via Composer using version 3.0.0-beta.1, or you can download the aws.phar or aws.zip on GitHub.