Tag: SDK

Automated Changelog in AWS SDK for PHP

Starting with version 3.22.10 of the AWS SDK for PHP, released February 23, 2017, the Changelog Builder automatically processes all changelog entries. Each pull request is required to have a changelog JSON blob as part of the request. The system also calculates the next version for the SDK based on the type of the changes that are defined in the given changelog JSON blob.

The update simplifies the process of adding release notes to the CHANGELOG.md file for each pull request. Each merged pull request that was part of the release adds a new entry to the CHANGELOG.md file. The entry describes the change and provides the tag number and release date.

The following is a sample changelog blob.

        "type"       : "feature|enhancement|bugfix",
        "category"   : "Target of Update",
        "description": "English language simple description of your update."

Each changelog blob is required to define the “type”, “category”, and “description” fields. The “category” identifies the service that the change is associated with. For SDK changes that are not related to any service, the category field is left as an empty string. The “description” field should contain one or two sentences detailing the changes.

The “type” field describes the scope of the change being proposed. This field helps the Changelog Builder decide if a minor version bump is needed. The “type” field is assigned one of the following values:

  1. feature: A major change to the SDK that opens a new use case or significantly improves upon an existing one. The update will result in a minor version bump. Example: a new AWS service.
  2. enhancement: A small update to the code. This should not cause any code to break and should only enhance existing functionality of the SDK. The update will result in a patch version update. Example: a documentation update.
  3. bugfix: A fix in the code that has caused some unwanted behavior. The update will result in a patch version update.
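A minimal validation sketch for these rules might look like the following. This is a hypothetical helper for illustration only; it is not part of the SDK or the Changelog Builder.

```php
<?php
// Hypothetical validator for a changelog blob. The required fields and
// allowed "type" values follow the rules described above, not SDK source.
function isValidChangelogBlob(array $blob): bool
{
    $allowedTypes = ['feature', 'enhancement', 'bugfix'];
    foreach (['type', 'category', 'description'] as $field) {
        if (!array_key_exists($field, $blob)) {
            return false; // all three fields are required
        }
    }
    // "category" may be an empty string for non-service changes.
    return in_array($blob['type'], $allowedTypes, true)
        && is_string($blob['category'])
        && is_string($blob['description'])
        && $blob['description'] !== '';
}
```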

The changelog blob that will be included in the next release must be put inside the .changes/nextrelease folder. The changelog blob file name must be unique in that folder. A good practice is to give the changelog blob file a descriptive name related to the pull request.

On each release, the SDK looks inside the .changes/nextrelease folder and consolidates all JSON blobs into a single JSON document. Then the SDK calculates the next version number, based on the full JSON document.
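The version calculation can be sketched as follows. This is a simplified illustration of the rule described above, not the Changelog Builder's actual code: any "feature" entry bumps the minor version, otherwise the patch version is bumped.

```php
<?php
// Simplified sketch of the version-bump rule: a "feature" entry causes a
// minor version bump; "enhancement" and "bugfix" entries cause a patch
// bump. This mirrors the rules described above, not the SDK's own code.
function nextSdkVersion(string $current, array $entries): string
{
    [$major, $minor, $patch] = array_map('intval', explode('.', $current));
    $types = array_column($entries, 'type');
    if (in_array('feature', $types, true)) {
        return sprintf('%d.%d.0', $major, $minor + 1);
    }
    return sprintf('%d.%d.%d', $major, $minor, $patch + 1);
}
```

With the example below (two enhancements and a bugfix against 1.2.3), this rule yields 1.2.4.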


Current SDK Version: 1.2.3
Changelog Blob File 1: update-s3-client.json

        "type"       : "enhancement",
        "category"   : "S3",
        "description": "Adds support for new feature in s3"

Changelog Blob File 2: ec2-documentation-update.json

        "type"       : "enhancement",
        "category"   : "EC2",
        "description": "Update Ec2 documentation for operation foo bar"

Changelog Blob File 3: sdk-bugfix.json

        "type"       : "bugfix",
        "category"   : "",
        "description": "Fixes a typo in Aws Client"

Consolidated JSON document: .changes/1.2.4.json
Next SDK version: 1.2.4

    "type": "enhancement",
    "category": "S3",
    "description": "Adds support for new feature in s3"
    "type": "enhancement",
    "category": "EC2",
    "description": "Update Ec2 documentation for operation foo bar"
    "type": "bugfix",
    "category": "",
    "description": "Fixes a typo in Aws Client"

A new file, named VERSION_NUMBER.json, will be created in the .changes folder with the contents of the consolidated changelog JSON document. The previous changelog JSON documents are also kept in the .changes folder. If needed, we can reconstruct the CHANGELOG.md file using the JSON documents in the .changes folder.

On a successful release, the changelog entries are written to the top of the CHANGELOG.md file. Then chag is used to tag the SDK and label the added release notes with the current version number.

For more information about Changelog Builder, see Class Aws\Build\ChangelogBuilder and CONTRIBUTING.md.

PHP application logging with Amazon CloudWatch Logs and Monolog

by Joseph Fontes | in PHP

Logging and debugging can be approached from a multitude of angles. Whether you use an application framework or code from scratch, it’s always comforting to have familiar components and tools across different projects. In our examples today, I am going to enable Amazon CloudWatch Logs logging from a PHP application. To accomplish this, I wanted to use an existing solution that is popular, well used, and standards compliant. For these reasons, we are going to use the open source logging library, PHP Monolog (https://github.com/Seldaek/monolog).

PHP Monolog

For those who work with a new PHP application, framework, or service, one of the technology choices that appears most frequently across solutions is the use of Monolog for application logging. PHP Monolog is a standards-compliant PHP library that enables developers to send logs to various destination types, including databases, files, sockets, and different services. Although PHP Monolog predates the standards for PHP logging defined in PSR-3, it implements the PSR-3 interface and standards. This makes Monolog compliant with the common interface for logging libraries, so using Monolog with CloudWatch Logs creates a PSR-3 compatible logging solution. Monolog is available for use with a number of different applications and frameworks such as Laravel, Symfony, CakePHP, and many others. Our example today uses PHP Monolog to send information to CloudWatch Logs for application logging and to build a structure and process that enables the use of our application data with CloudWatch alarms and notifications. This lets us use logs from our application for cross-service actions, such as Amazon EC2 Auto Scaling decisions.

Amazon CloudWatch Logs

As a customer-driven organization, AWS is constantly building and releasing significant features and services requested by AWS customers and partners. One of those services that we highlight today is Amazon CloudWatch Logs. CloudWatch Logs enables you to store log file information from applications, operating systems and instances, AWS services, and various other sources. An earlier blog post highlighted the use of CloudWatch Logs with various programming examples.

Notice in the blog post that there is a PHP example that uses CloudWatch Logs to store an entry from an application. You can use this example and extend it as a standalone solution for logging to CloudWatch Logs from within your application. In our examples, we’ll build on that approach by using PHP Monolog.

Implementing Monolog

To begin using Monolog, we install the necessary libraries with the use of Composer (https://getcomposer.org/). The instructions below install the AWS SDK for PHP, PHP Monolog, and an add-on to Monolog that enables logging to CloudWatch Logs.

curl -sS https://getcomposer.org/installer | php
php composer.phar require aws/aws-sdk-php
php composer.phar require monolog/monolog
php composer.phar require maxbanton/cwh:^1.0

Alternatively, you can copy the following entry to the composer.json file and install it via the php composer.phar install command.

    "minimum-stability": "stable",
    "require": {
        "aws/aws-sdk-php": "^3.24",
        "aws/aws-php-sns-message-validator": "^1.1",
        "monolog/monolog": "^1.21",
        "maxbanton/cwh": "^1.0"

Local logging

Now that PHP Monolog is available for use, we can test the implementation. We start with an example of logging to a single file.

require "vendor/autoload.php";

use Monolog\Logger;
use Monolog\Formatter\LineFormatter;
use Monolog\Handler\StreamHandler;

$logFile = "testapp_local.log";

$logger = new Logger('TestApp01');
$formatter = new LineFormatter(null, null, false, true);
$infoHandler = new StreamHandler(__DIR__."/".$logFile, Logger::INFO);
$logger->info('Initial test of application logging.');

In the previous example, we start by requiring the Composer autoloader we installed earlier. The new Logger line sets the channel name to “TestApp01”. The next line creates a new LineFormatter that removes brackets around unused log items. The next line establishes the destination as the file we identified, testapp_local.log, and associates it with the INFO log level. Next, we apply the formatter to our stream handler. Then we add the stream handler with the updated format to the handler list. Finally, a new message is logged with the log level of INFO. For information about log levels and different handlers, see the Monolog GitHub page, IETF RFC 5424, and PSR-3.

We can now view the contents of the log file to ensure functionality:

Syslog logging

Now that we are able to write a simple log entry to a local file, our next example uses the system Syslog to log events.

$logger = new Logger($appName);

$localFormatter = new LineFormatter(null, null, false, true);
$syslogFormatter = new LineFormatter("%channel%: %level_name%: %message% %context% %extra%", null, false, true);

$infoHandler = new StreamHandler(__DIR__."/".$logFile, Logger::INFO);
$infoHandler->setFormatter($localFormatter);

$warnHandler = new SyslogHandler($appName, $facility, Logger::WARNING);
$warnHandler->setFormatter($syslogFormatter);

$logger->pushHandler($warnHandler);
$logger->pushHandler($infoHandler);

$logger->info('Test of PHP application logging.');
$logger->warn('Test of the warning system logging.');

Here we can see that the format of the syslog messages has been changed with the value $syslogFormatter. Because syslog provides a date/time with each log entry, we don’t need to include these values in our log text. The syslog facility is set to local0, with WARNING messages sent to syslog and both INFO and WARNING level messages logged to our local file. You can find additional information about syslog facilities and log levels on the Syslog Wikipedia page.

Logging to CloudWatch Logs

Now that you’ve seen the basic use of Monolog, let’s send some logs over to CloudWatch Logs. We can use the Amazon Web Services CloudWatch Logs Handler for Monolog library to integrate Monolog with CloudWatch Logs. In our example, an authentication application produces log information.

use Aws\CloudWatchLogs\CloudWatchLogsClient;
use Maxbanton\Cwh\Handler\CloudWatch;
use Monolog\Logger;
use Monolog\Formatter\LineFormatter;
use Monolog\Handler\StreamHandler;
use Monolog\Handler\SyslogHandler;

$logFile = "testapp_local.log";
$appName = "TestApp01";
$facility = "local0";

// Get the instance ID from the EC2 instance metadata service:
$url = "";
$instanceId = file_get_contents($url);

$cwClient = new CloudWatchLogsClient($awsCredentials);
// Log group name, will be created if none exists
$cwGroupName = 'php-app-logs';
// Log stream name (the instance ID), will be created if none exists
$cwStreamNameInstance = $instanceId;
// Application name as a second log stream name
$cwStreamNameApp = "TestAuthenticationApp";
// Days to keep logs (14 by default)
$cwRetentionDays = 90;

$cwHandlerInstanceNotice = new CloudWatch($cwClient, $cwGroupName, $cwStreamNameInstance, $cwRetentionDays, 10000, ['application' => 'php-testapp01'], Logger::NOTICE);
$cwHandlerInstanceError = new CloudWatch($cwClient, $cwGroupName, $cwStreamNameInstance, $cwRetentionDays, 10000, ['application' => 'php-testapp01'], Logger::ERROR);
$cwHandlerAppNotice = new CloudWatch($cwClient, $cwGroupName, $cwStreamNameApp, $cwRetentionDays, 10000, ['application' => 'php-testapp01'], Logger::NOTICE);

$logger = new Logger('PHP Logging');

$formatter = new LineFormatter(null, null, false, true);
$syslogFormatter = new LineFormatter("%channel%: %level_name%: %message% %context% %extra%",null,false,true);
$infoHandler = new StreamHandler(__DIR__."/".$logFile, Logger::INFO);
$infoHandler->setFormatter($formatter);

$warnHandler = new SyslogHandler($appName, $facility, Logger::WARNING);
$warnHandler->setFormatter($syslogFormatter);

$logger->pushHandler($warnHandler);
$logger->pushHandler($infoHandler);
$logger->pushHandler($cwHandlerInstanceNotice);
$logger->pushHandler($cwHandlerInstanceError);
$logger->pushHandler($cwHandlerAppNotice);

$logger->info('Initial test of application logging.');
$logger->warn('Test of the warning system logging.');
$logger->notice('Application Auth Event: ',[ 'function'=>'login-action','result'=>'login-success' ]);
$logger->notice('Application Auth Event: ',[ 'function'=>'login-action','result'=>'login-failure' ]);
$logger->error('Application ERROR: System Error');

In this example, application authentication events are passed as a PHP array and presented in CloudWatch Logs as JSON. The events with a result of login-success and login-failure are sent to both the log stream associated with the instance ID and to the log stream associated with the application name.


Using these different stream locations, we can create metrics and alarms at either a per-instance or per-application level. Let’s assume that we want to create a metric for the total number of users who logged into our application over the past five minutes. Select your log group and then choose Create Metric Filter.

On the next page, we can create our filter and test in the same window. For the filter data, we use the JSON string from the log entry. Enter the following string to extract all the successful logins.

{ $.result = "login-success" }

Below, we can see the filter details. I updated the Filter Name to a value that’s easy to identify. The Metric Namespace now has a value associated with the application name and the metric name reflects the number of login-success values.
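To make the pattern's behavior concrete, here is an illustrative client-side equivalent of that predicate in plain PHP. CloudWatch evaluates the filter server-side; this sketch only shows which JSON events the pattern selects.

```php
<?php
// Illustrative client-side equivalent of the metric filter
// { $.result = "login-success" }: decode each JSON log event and count
// those whose "result" field matches. This is for illustration only;
// CloudWatch Logs performs this matching server-side.
function countSuccessfulLogins(array $jsonEvents): int
{
    $count = 0;
    foreach ($jsonEvents as $line) {
        $event = json_decode($line, true);
        if (is_array($event) && ($event['result'] ?? null) === 'login-success') {
            $count++;
        }
    }
    return $count;
}
```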


We could now create an alarm to send a notification or perform some action (such as an Amazon EC2 scaling decision), based on this information being received via CloudWatch Logs.

With these values, we would receive an alert each time there were more than 50 successful logins within a five-minute period.

Laravel logging

Monolog is used as the logging solution for a number of PHP applications and frameworks, including the popular Laravel PHP framework. In this example, we’ll show how to use Monolog with CloudWatch Logs within Laravel. Our first step is to find the current log settings for our Laravel application. If you open config/app.php within your application root, you will see various log settings. By default, Laravel is set to log to a single log file using the baseline log level of debug.

Next, we add the AWS SDK for PHP as a service provider within Laravel, using the instructions and examples from the aws/aws-sdk-php-laravel package.

You also want to add the Monolog library for CloudWatch Logs to the composer.json file for inclusion in the application, as shown.

You now need to extend the current Laravel Monolog configuration with your custom configuration. You can find additional information about this step on the Laravel Error and Logging page. The following is an example of this addition to the bootstrap/app.php file.

use Maxbanton\Cwh\Handler\CloudWatch;

$app->configureMonologUsing(function ($monolog) {

    $cwClient = App::make('aws')->createClient('CloudWatchLogs');
    $cwGroupName = env('AWS_CWL_GROUP', 'laravel-app-logs');
    $cwStreamNameApp = env('AWS_CWL_APP', 'laravel-app-name');
    $cwTagName = env('AWS_CWL_TAG_NAME', 'application');
    $cwTagValue = env('AWS_CWL_TAG_VALUE', 'laravel-testapp01');
    $cwRetentionDays = 90;
    $cwHandlerApp = new CloudWatch($cwClient, $cwGroupName, $cwStreamNameApp, $cwRetentionDays, 10000, [$cwTagName => $cwTagValue]);

    $monolog->pushHandler($cwHandlerApp);
});


For testing purposes, we add a logging call to a test route in routes/web.php.

Route::get('/test', function () {
    Log::warning('Clicking on test link!!!');
    return view('test');
});
When the test route is invoked, the logs now show in CloudWatch Logs.


In our examples, we’ve shown how to use PHP Monolog to log to a local file, syslog, and CloudWatch Logs. We have also demonstrated the integration of Monolog with CloudWatch Logs within a popular PHP application framework. Finally, we’ve shown how to create CloudWatch Logs metric filters and apply them to CloudWatch alarms that make the log data actionable through notifications and scaling decisions. CloudWatch Logs provides a central logging capability for your PHP applications and, combined with Monolog, fits naturally into both established projects and new applications.

AWS SDK for .NET Version 2 Status

by Norm Johanson | in .NET

Version 3 of the AWS SDK for .NET has been generally available since 7/28/2015. Although the legacy version (v2) of the SDK will continue to work, we strongly recommend that all customers migrate to the latest version 3 to take advantage of various improvements including modularized architecture, portable class library support, and .NET Core support. There are only a few backward-incompatible changes between version 2 and version 3 (see the migration guide for details). Additionally, the last few legacy releases of the version 2 SDK (versions 2.3.50 and later) have all the classes and methods that are changed in version 3 marked obsolete, so compile-time warnings can help you make forward-compatible updates before upgrading to the version 3 SDK.

To help customers plan their migration, our current maintenance timeline for the version 2 SDK is provided below.

Security issues and critical bugs

Critical bugs with no reasonable workaround as well as any security-related issues will be addressed with the highest priority. We will continue to support fixing such issues indefinitely.

Non-critical bugs

We will continue to address non-critical bugs in the version 2 SDK until the end of 2016. These bugs will be fixed in relative priority order. Factors considered in prioritization include:

  • Number of affected customers
  • Severity of the problem (broken feature vs. typo fix in documentation)
  • Whether the issue is already fixed in version 3
  • Risk of the fix causing unintended side effects

Service API updates

We will continue to add API updates to existing service clients based on customer request (GitHub Issue) until 8/1/2016.

New service clients

New service clients will not be added to the version 2 SDK. They will only be added to version 3.

As always, please find us in the Issues section of the SDK repository on GitHub, if you would like to report bugs, request service API updates, or ask general questions.

SDK Extensions Moved to Modularization

by Norm Johanson | in .NET

We are currently finalizing the move of the AWS Tools for Windows PowerShell and the AWS Toolkit for Visual Studio to the new modularized SDK. In addition, we have released new versions of the ASP.NET session provider and the .NET System.Diagnostics trace listener. These two extensions have moved from the SDK GitHub repository into their own separate repositories for better discoverability and to make it easier to track progress.

Session Provider

The Amazon DynamoDB session state provider allows ASP.NET applications to store their sessions inside Amazon DynamoDB. This helps applications scale across multiple application servers while maintaining session state across the system. To get started, check out the NuGet package or view the source on GitHub.

Trace Listener

The AWS DynamoDB trace listener allows System.Diagnostics.Trace calls to be written to Amazon DynamoDB. When running an application across several hosts, it is useful to have all the log messages in one location where the data can be searched. To get started, check out the NuGet package or view the source on GitHub.

Version 3 of the AWS SDK for PHP

by Jeremy Lindblom | in PHP

Last October, we announced the Developer Preview of Version 3 of the AWS SDK for PHP. We even presented about it at AWS re:Invent last November. We are grateful for your early feedback and support. Since last fall, we’ve been hard at work on improving, testing, and documenting Version 3 to get it ready for a stable release. We’re excited to announce that Version 3 of the AWS SDK for PHP is now generally available via Composer and on GitHub.

Version 3 of the SDK (V3) represents a significant effort to improve the capabilities of the SDK, incorporate over two years of customer feedback, upgrade our dependencies, improve performance, and adopt the latest PHP standards.

What we’re excited about

We’ve made many improvements to V3, even since our blog post about the Developer Preview (check out that post if you haven’t already). There are also some things that have changed or have been removed since Version 2 of the SDK (V2). We encourage you to take a look at our V3 Migration Guide for all the details about what has changed.

V3 has less code and better performance than V2 and is using the latest version of the Guzzle HTTP library. It also has some exciting new features and improvements.

Asynchronous requests and promises

V3 allows you to execute operations asynchronously. This not only makes it easier to do concurrent requests, it also makes it easier to create asynchronous and cooperative workflows. We use promises, the basic building block of our asynchronous features, throughout the SDK’s core. We also use them to implement the SDK’s higher-level abstractions, including Command Pools, Paginators, Waiters, and service-specific features like the S3 MultipartUploader. That means that almost every feature of the SDK can be used in an asynchronous way.

To execute an operation asynchronously, you simply add "Async" as a suffix to your method call.

// The SYNCHRONOUS (normal) way:

// Executing an operation returns a Result object.
$result = $s3Client->putObject([
    'Bucket' => 'your-bucket',
    'Key'    => 'docs/file.pdf',
    'Body'   => fopen('/path/to/file.pdf', 'r'),
]);

// You can access the result data from the Result object.
echo $result['ObjectURL'];

// The ASYNCHRONOUS way:

// Executing an operation asynchronously returns a Promise object.
$promise = $s3Client->putObjectAsync([
    'Bucket' => 'your-bucket',
    'Key'    => 'docs/file.pdf',
    'Body'   => fopen('/path/to/file.pdf', 'r'),
]);

// Wait for the operation to complete to get the Result object.
$result = $promise->wait();

// Then you can access the result data like normal.
echo $result['ObjectURL'];

The true power of using asynchronous requests is being able to create asynchronous workflows. For example, if you wanted to create a DynamoDB table, wait until it is ACTIVE (using Waiters), and then write some data to it, you can use the then() method of the Promise object to chain those actions together.

$client->createTableAsync([
    'TableName' => $table,
    // Other params...
])->then(function () use ($client, $table) {
    return $client->getWaiter('TableExists', [
        'TableName' => $table,
    ])->promise();
})->then(function () use ($client, $table) {
    return $client->putItemAsync([
        'TableName' => $table,
        'Item' => [
            // Item attributes...
        ],
    ]);
});
Please take a look at our detailed guide on promises for more information.
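The chaining semantics above can be illustrated with a tiny stand-in. This toy "thenable" is a hypothetical sketch for illustration only: the SDK's real promises (from guzzlehttp/promises) are asynchronous, whereas this one resolves immediately.

```php
<?php
// Minimal synchronous "thenable" to illustrate then() chaining. A
// callback may return a plain value or another promise; chaining
// continues off whichever value results. Illustration only; the SDK
// uses the asynchronous guzzlehttp/promises library.
class ToyPromise
{
    private $value;

    public function __construct($value)
    {
        $this->value = $value;
    }

    public function then(callable $onFulfilled): ToyPromise
    {
        $result = $onFulfilled($this->value);
        // If the callback returned another promise, chain off its value.
        return $result instanceof ToyPromise ? $result : new ToyPromise($result);
    }

    public function wait()
    {
        return $this->value;
    }
}
```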

PSR-7 compliance and decoupling of the HTTP layer

The PHP-FIG has recently announced the acceptance of PSR-7, a "PHP Standard Recommendation" that defines interfaces for HTTP messages (e.g., Request and Response objects). We have adopted these interfaces for how we represent HTTP requests within the SDK, and it has allowed us to decouple the SDK from Guzzle such that V3 will work with both Guzzle 5 and Guzzle 6. It’s also possible to write your own HTTP handler for the SDK that does not use Guzzle.

The SDK defaults to using Guzzle 6 to perform HTTP requests. Guzzle 6 comes with a number of improvements, including support for asynchronous requests, PSR-7 compliance, and swappable HTTP adapters (including a PHP stream wrapper implementation that can be used on systems where cURL is not available).

JMESPath querying of results and paginators

In V3, the Result object has a new method: search(). With this method you can query data in Result objects using JMESPath expressions. JMESPath is a query language for JSON, or, in our case, PHP arrays.

$result = $ec2Client->describeInstances();
echo $result->search('Reservations[0].Instances[0].InstanceId');

JMESPath expressions can also be applied to Paginators in the same way. This will return a new Iterator that yields the result of the expression on every page of data.

$results = $s3->getPaginator('ListObjects', [
    'Bucket' => 'my-bucket',
]);
foreach ($results->search('Contents[].Key') as $key) {
    echo $key . "\n";
}
Time to code

We hope you will enjoy using Version 3 of the AWS SDK for PHP. Here are the links you need to get started:

Modularization Released to NuGet in Preview

by Norm Johanson | in .NET

Today, we pushed our new modularized version of the AWS SDK for .NET to NuGet in preview. This means there are separate NuGet packages for each AWS service. For example, if your application uses Amazon S3 and Amazon DynamoDB, then instead of including the existing AWSSDK package that includes all the AWS services, you can add the AWSSDK.S3 and AWSSDK.DynamoDBv2 packages. This allows your application to include much smaller assemblies, and you’ll need to update these packages only when the services you use are updated.

Why Preview?

The modularized version of the SDK is production ready, so we encourage developers to start using the modularized version now. We marked the modularized SDK as a preview while we are tweaking our release process and documentation. When adding preview packages, be sure to select Include Prerelease.

Check our previous blog post to learn about the differences. You can also follow our development on the modularization branch in GitHub.

NuGet Packages

Service Name NuGet Package
Auto Scaling AWSSDK.AutoScaling
AWS CloudFormation AWSSDK.CloudFormation
Amazon CloudFront AWSSDK.CloudFront
Amazon CloudSearch AWSSDK.CloudSearch
Amazon CloudSearch Domain AWSSDK.CloudSearchDomain
AWS CloudTrail AWSSDK.CloudTrail
Amazon CloudWatch AWSSDK.CloudWatch
Amazon CloudWatch Logs AWSSDK.CloudWatchLogs
AWS CodeDeploy AWSSDK.CodeDeploy
Amazon Cognito Identity AWSSDK.CognitoIdentity
Amazon Cognito Sync AWSSDK.CognitoSync
AWS Config AWSSDK.ConfigService
AWS Data Pipeline AWSSDK.DataPipeline
AWS Direct Connect AWSSDK.DirectConnect
Amazon DynamoDB (v2) AWSSDK.DynamoDBv2
Amazon Elastic Compute Cloud (EC2) AWSSDK.EC2
Amazon EC2 Container Service AWSSDK.ECS
Amazon ElastiCache AWSSDK.ElastiCache
AWS Elastic Beanstalk AWSSDK.ElasticBeanstalk
Elastic Load Balancing AWSSDK.ElasticLoadBalancing
Amazon Elastic MapReduce AWSSDK.ElasticMapReduce
Amazon Elastic Transcoder AWSSDK.ElasticTranscoder
Amazon Glacier AWSSDK.Glacier
AWS Identity and Access Management (IAM) AWSSDK.IdentityManagement
AWS Import/Export AWSSDK.ImportExport
AWS Key Management Service AWSSDK.KeyManagementService
Amazon Kinesis AWSSDK.Kinesis
AWS Lambda AWSSDK.Lambda
Amazon Machine Learning AWSSDK.MachineLearning
AWS OpsWorks AWSSDK.OpsWorks
Amazon Relational Database Service (RDS) AWSSDK.RDS
Amazon Redshift AWSSDK.Redshift
Amazon Route 53 AWSSDK.Route53
Amazon Route 53 Domains AWSSDK.Route53Domains
Amazon Simple Storage Service (S3) AWSSDK.S3
AWS Security Token Service (STS) AWSSDK.SecurityToken
Amazon SimpleDB AWSSDK.SimpleDB
Amazon Simple Email Service (SES) AWSSDK.SimpleEmail
Amazon Simple Notification Service (SNS) AWSSDK.SimpleNotificationService
Amazon EC2 Simple Systems Manager (SSM) AWSSDK.SimpleSystemsManagement
Amazon Simple Workflow Service AWSSDK.SimpleWorkflow
Amazon Simple Queue Service (SQS) AWSSDK.SQS
AWS Storage Gateway AWSSDK.StorageGateway
Amazon WorkSpaces AWSSDK.WorkSpaces


Update on Modularization of the SDK

by Norm Johanson | in .NET

As mentioned earlier, we are currently working on modularizing the AWS SDK for .NET into individual packages for each service. We have pushed the changes to the modularization branch in GitHub. If you use the solution file AWSSDK.sln, it will produce a core assembly for each supported platform and individual service assemblies for each supported platform. Since this solution builds Windows Store and Windows Phone versions of the SDK for .NET, we recommend that you use Windows 8.1 as your platform. We still have more testing, clean up, and work to do on our build/release process before we release modularization for general availability.

Breaking changes

We have tried to keep the list of breaking changes to a minimum to make it as easy as possible to adopt the new modularized SDK. Here are the breaking changes in the SDK.

Amazon.AWSClientFactory Removed

This class was removed because in the modularized SDK it didn’t make sense to have a class that had a dependency on every service. Instead, the preferred way to construct a service client is to use its constructor.

Amazon.Runtime.AssumeRoleAWSCredentials Removed

This class was removed because it was in a core namespace but had a dependency on the AWS Security Token Service. It has been obsolete in the SDK for quite some time and has been removed in the new structure. Use Amazon.SecurityToken.AssumeRoleAWSCredentials instead.

SetACL from S3Link

S3Link is part of the Amazon DynamoDB package and is used for storing objects in Amazon S3 that are referenced in a DynamoDB item. This is a useful feature, but we didn’t want to cause a compile dependency on the S3 package for DynamoDB. Consequently, we needed to simplify the exposed S3 methods from S3Link, so we replaced SetACL with MakeS3ObjectPublic. For more control over the ACL on the object, you’ll need to use the S3 package directly.

Removal of Obsolete Result Classes

For almost all services in the SDK, operations return a response object that contains metadata for the operation, such as the request ID, and a result object. We found that having separate response and result classes was redundant and mostly just caused extra typing for developers. About a year and a half ago, when version 2 of the SDK was released, we put all the information that was on the result class onto the response class. We also marked the result classes obsolete to discourage their use. In the new modularized SDK currently in development, we removed these obsolete result classes. This helps us reduce the size of the SDK.

AWS Config Section Changes

It is possible to do advanced configuration of the SDK through the app.config or web.config file. This is done through an aws config section like the following that references the SDK assembly name.

<configuration>
  <configSections>
    <section name="aws" type="Amazon.AWSSection, AWSSDK"/>
  </configSections>
  <aws region="us-west-2">
    <logging logTo="Log4Net"/>
  </aws>
</configuration>

In the modularized SDK, there is no longer an assembly called AWSSDK. Instead, we need to reference the new core assembly like this.

<configuration>
  <configSections>
    <section name="aws" type="Amazon.AWSSection, AWSSDK.Core"/>
  </configSections>
  <aws region="us-west-2">
    <logging logTo="Log4Net"/>
  </aws>
</configuration>

You can also manipulate the config settings through an Amazon.AWSConfigs object. In the modularized SDK, we moved config settings for DynamoDB from the Amazon.AWSConfigs object to Amazon.AWSConfigsDynamoDB.

What’s next

We are making good progress getting our process and development switched over to the new modularized approach. We still have a bit to go, but in the meantime, we would love to hear any feedback on our upcoming changes. Until we’ve completed our switchover, you can still use the current version of the SDK to make all the changes except for the configuration. This means you can make those updates now to ready your code for the modularized SDK.

DynamoDB JSON and Array Marshaling for PHP

by Jeremy Lindblom | in PHP

Back in October of 2014, Amazon DynamoDB added support for new data types, including the map (M) and list (L) types. These new types, along with some API updates, make it possible to store more complex, multilevel data, and use DynamoDB for document storage.

The DynamoDB Marshaler

To make these new types even easier for our PHP SDK users, we added a new class, called the DynamoDB Marshaler, in Version 2.7.7 of the AWS SDK for PHP. The Marshaler object has methods for marshaling JSON documents and PHP arrays to the DynamoDB item format and unmarshaling them back.

Marshaling a JSON Document

Let’s say you have JSON document describing a contact in the following format:

  "id": "5432c69300594",
  "name": {
    "first": "Jeremy",
    "middle": "C",
    "last": "Lindblom"
  "age": 30,
  "phone_numbers": [
      "type": "mobile",
      "number": "5555555555",
      "preferred": true
      "type": "home",
      "number": "5555555556",
      "preferred": false

You can use the DynamoDB Marshaler to convert this JSON document into the format required by DynamoDB.

use Aws\DynamoDb\DynamoDbClient;
use Aws\DynamoDb\Marshaler;

$client = DynamoDbClient::factory(/* your config */);
$marshaler = new Marshaler();
$json = file_get_contents('/path/to/your/document.json');

$client->putItem([
    'TableName' => 'YourTable',
    'Item'      => $marshaler->marshalJson($json)
]);

The output of marshalJson() is an associative array that includes all the type information required for the DynamoDB 'Item' parameter.

[
    'id' => ['S' => '5432c69300594'],
    'name' => ['M' => [
        'first' => ['S' => 'Jeremy'],
        'middle' => ['S' => 'C'],
        'last' => ['S' => 'Lindblom'],
    ]],
    'age' => ['N' => '30'],
    'phone_numbers' => ['L' => [
        ['M' => [
            'type' => ['S' => 'mobile'],
            'number' => ['S' => '5555555555'],
            'preferred' => ['BOOL' => true],
        ]],
        ['M' => [
            'type' => ['S' => 'home'],
            'number' => ['S' => '5555555556'],
            'preferred' => ['BOOL' => false],
        ]],
    ]],
]
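The type tagging the Marshaler performs can be sketched in plain PHP. The following is a simplified illustration of the idea only, not the SDK’s actual implementation; it covers just the S, N, BOOL, NULL, M, and L types (the function name is made up for this sketch):

```php
<?php
// Simplified sketch of DynamoDB type tagging (illustration only, not the
// SDK's actual code). Maps a native PHP value to a DynamoDB
// attribute-value descriptor.
function toAttributeValue($value)
{
    if (is_string($value)) {
        return ['S' => $value];
    }
    if (is_int($value) || is_float($value)) {
        return ['N' => (string) $value]; // DynamoDB transmits numbers as strings
    }
    if (is_bool($value)) {
        return ['BOOL' => $value];
    }
    if ($value === null) {
        return ['NULL' => true];
    }
    if (is_array($value)) {
        // Sequential arrays become lists (L); associative arrays become maps (M).
        $isList = $value === [] || array_keys($value) === range(0, count($value) - 1);
        $tagged = array_map('toAttributeValue', $value);
        return $isList ? ['L' => array_values($tagged)] : ['M' => $tagged];
    }
    throw new InvalidArgumentException('Unsupported PHP type');
}
```

For example, `toAttributeValue(['age' => 30])` returns `['M' => ['age' => ['N' => '30']]]`, the same shape as the `marshalJson()` output above.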

To retrieve an item and get the JSON document back, you need to use the unmarshalJson() method.

$result = $client->getItem([
    'TableName' => 'YourTable',
    'Key'       => ['id' => ['S' => '5432c69300594']]
]);
$json = $marshaler->unmarshalJson($result['Item']);

Marshaling a Native PHP Array

The Marshaler also provides the marshalItem() and unmarshalItem() methods that do the same type of thing, but for arrays. This is essentially an upgraded version of the existing DynamoDbClient::formatAttributes() method.

$data = [
    'id' => '5432c69300594',
    'name' => [
        'first'  => 'Jeremy',
        'middle' => 'C',
        'last'   => 'Lindblom',
    ],
    'age' => 30,
    'phone_numbers' => [
        [
            'type'      => 'mobile',
            'number'    => '5555555555',
            'preferred' => true
        ],
        [
            'type'      => 'home',
            'number'    => '5555555556',
            'preferred' => false
        ],
    ],
];

// Marshaling the data and putting an item.
$client->putItem([
    'TableName' => 'YourTable',
    'Item'      => $marshaler->marshalItem($data)
]);

// Getting an item and unmarshaling the data.
$result = $client->getItem([
    'TableName' => 'YourTable',
    'Key'       => ['id' => ['S' => '5432c69300594']]
]);
$data = $marshaler->unmarshalItem($result['Item']);

Be aware that marshalItem() does not support binary (B) and set (SS, NS, and BS) types. This is because they are ambiguous with the string (S) and list (L) types and have no equivalent type in JSON. We are working on some ideas that will provide more help with these types in Version 3 of the SDK.

Deprecations in the SDK

The new data types are a great addition to the Amazon DynamoDB service, but one consequence of adding support for these types is that we had to deprecate the following classes and methods in the Aws\DynamoDb namespace of the PHP SDK:

These classes and methods made assumptions about how certain native PHP types convert to DynamoDB types. The addition of the new types to DynamoDB invalidated those assumptions, and we could not update the code in a backward-compatible way to support the new types. They still work fine, but just not with the new types. These classes and methods are removed in Version 3 of the SDK, and the DynamoDB Marshaler object is meant to be the replacement for their functionality.


We hope that this addition to the SDK makes working with DynamoDB really easy. If you have any feedback about the Marshaler or any ideas on how we can improve it, please let us know on GitHub. Better yet, send us a pull request. :-)

Preview the AWS Resource APIs for PHP

by Jeremy Lindblom | in PHP

This year is just about over, but we are too excited to wait until the new year to share with you a feature we are developing for the AWS SDK for PHP. We are calling it the AWS Resource APIs for PHP. This feature is maintained as a separate package, but it acts as an extension to Version 3 of the AWS SDK for PHP.

As you know, the core SDK is composed of service client objects that have methods corresponding 1-to-1 with operations in the service’s API (e.g., Ec2Client::runInstances() method maps to the EC2 service’s RunInstances operation). The resource APIs build upon the SDK to add new types of objects that allow you to interact with the AWS service APIs in a more resource-oriented way. This allows you to use a more expressive syntax when working with AWS services, because you are acting on objects that understand their relationships with other resources and that encapsulate their identifying information.

Resource Objects

Resource objects each represent a single, identifiable AWS resource (e.g., an Amazon S3 bucket or an Amazon SQS queue). They contain information about how to identify the resource and load its data, the actions that can be performed on it, and the other resources to which it is related. Let’s take a look at a few examples showing how to interact with these resource objects.

First, let’s set up the Aws object, which acts as the starting point into the resource APIs.


require 'vendor/autoload.php';

use Aws\Resource\Aws;

$aws = new Aws([
    'region'  => 'us-west-2',
    'version' => 'latest',
    'profile' => 'your-credential-profile',
]);
(Note: The array of configuration options provided in the preceding example is the same as what you would provide when instantiating the Aws\Sdk object in the core SDK.)

You can access related resources by calling the related resource’s name as a method and passing in its identity.

$bucket = $aws->s3->bucket('your-bucket');
$object = $bucket->object('image/bird.jpg');

Resources accessed this way are evaluated lazily, so the preceding example does not actually make any API calls.

Once you access the data of a resource, an API call will be triggered to "load" the resource and fetch its data. To access a resource object’s data, you can access it like an array.

echo $object['LastModified'];
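The lazy-loading behavior described here can be sketched with a small plain-PHP class. This is an illustration of the pattern only; the real resource classes in the package are more involved, and the class and method names below are made up:

```php
<?php
// Illustrative sketch of lazy resource loading (not the actual
// AWS Resource APIs classes). The loader runs only on first data access.
class LazyResource
{
    private $identity;
    private $loader;
    private $data = null;

    public function __construct(string $identity, callable $loader)
    {
        $this->identity = $identity;
        $this->loader = $loader;
    }

    public function get(string $key)
    {
        if ($this->data === null) {
            // The "API call" happens here, on first access only.
            $this->data = ($this->loader)($this->identity);
        }
        return isset($this->data[$key]) ? $this->data[$key] : null;
    }
}
```

Constructing the object makes no calls; the first get() triggers the loader, and subsequent calls reuse the cached data.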

Performing Actions

You can perform actions on a resource by calling verb-like methods on the object.

// Create a bucket and object.
$bucket = $aws->s3->createBucket([
    'Bucket' => 'my-new-bucket'
]);
$object = $bucket->putObject([
    'Key'  => 'images/image001.jpg',
    'Body' => fopen('/path/to/image.jpg', 'r'),
]);

// Delete the bucket and object.
$object->delete();
$bucket->delete();

Because the resource’s identity is encapsulated within the resource object, you never have to specify it again once the object is created. This is why actions like $object->delete() require no arguments.


Resource Collections

Some resources have a "has many" type relationship with other resources. For example, an S3 bucket has many S3 objects. The AWS Resource APIs also allow you to work with resource collections.

foreach ($bucket->objects() as $object) {
    $object->delete();
}

Using the Resource APIs

We are currently working on providing API documentation for the AWS Resource APIs. Even without documentation, you can programmatically determine what methods are available on a resource object by calling the respondsTo method.

print_r($bucket->respondsTo());
// Array
// (
//     [0] => create
//     [1] => delete
//     [2] => deleteObjects
//     [3] => putObject
//     [4] => multipartUploads
//     [5] => objectVersions
//     [6] => objects
//     [7] => bucketAcl
//     [8] => bucketCors
//     [9] => bucketLifecycle
//     [10] => bucketLogging
//     [11] => bucketPolicy
//     [12] => bucketNotification
//     [13] => bucketRequestPayment
//     [14] => bucketTagging
//     [15] => bucketVersioning
//     [16] => bucketWebsite
//     [17] => object
// )

var_dump($bucket->respondsTo('putObject'));
// bool(true)

Check it Out!

To get started, install the AWS Resource APIs for PHP using Composer by requiring the aws/aws-sdk-php-resources package in your project. The source code and README are located in the awslabs/aws-sdk-php-resources repo on GitHub.

The initial preview release of the AWS Resource APIs supports the following services: Amazon EC2, Amazon Glacier, Amazon S3, Amazon SNS, Amazon SQS, AWS CloudFormation, and AWS Identity and Access Management (IAM). We will continue to add support for more APIs over this next year.

We’re eager to hear your feedback about this new feature! Please use the issue tracker to ask questions, provide feedback, or submit any issues or feature requests.

AWS re:Invent 2014

by Jeremy Lindblom | in PHP

We spent the past week at AWS re:Invent! The PHP SDK team was there with many of our co-workers and customers. It was a great conference, and we had a lot of fun.

If you did not attend re:Invent or follow our @awsforphp Twitter feed during the event, then you have a lot to catch up on.

New AWS Services and Features

Several new services were announced during the keynotes, on both the first day and second day, and during other parts of the event.

During the first keynote, three new AWS services for code management and deployment were announced: AWS CodeDeploy, AWS CodeCommit, and AWS CodePipeline. CodeDeploy is available today, and can help you automate code deployments to Amazon EC2 instances.

Additionally, three other new services were revealed that are related to enterprise security and compliance: AWS Key Management Service (AWS KMS), AWS Config, and AWS Service Catalog.

Amazon RDS for Aurora was also announced during the first keynote. Amazon Aurora is a new, MySQL-compatible, relational database engine built for high performance and availability.

The keynote on the second day boasted even more announcements, including the new Amazon EC2 Container Service, which is a highly scalable, high performance container management service that supports Docker containers.

Also, new compute-optimized (C4) Amazon EC2 Instances were announced, as well as new larger and faster Elastic Block Store (EBS) volumes backed with SSDs.

AWS Lambda was introduced during the second keynote, as well. It is a new compute service that runs your code in response to events and automatically manages the compute resources for you. To learn about AWS Lambda in more detail, you should check out the AWS Lambda session at re:Invent, which shows how you can implement image thumbnail generation in your applications using AWS Lambda and the new Amazon S3 Event Notifications feature. That presentation also briefly mentions the upcoming DynamoDB Streams feature, which was announced just prior to the conference.

The APIs for AWS CodeDeploy, AWS KMS, AWS Config, and AWS Lambda are currently available, and all are supported in the AWS SDK for PHP as of version 2.7.5.

PHP Presentations

I had the honor of presenting a session about the PHP SDK called Building Apps with the AWS SDK for PHP, where I explained how to use many of the new features from Version 3 of the SDK in the context of building an application I called "SelPHPies with ElePHPants". You should definitely check it out whether you are new to or experienced with the SDK.

Here are the links to my presentation as well as two other PHP-specific sessions that you might be interested in.

  • Building Apps with the AWS SDK for PHP (slides, video)
  • Best Practices for Running WordPress on AWS (slides, video)
  • Running and Scaling Magento on AWS (video)

There were so many other great presentations at re:Invent. The slides, videos, and podcasts for all of the presentations are (or will be) posted online.


Announcements and presentations are exciting and informative, but my favorite part about any conference is the people. Re:Invent was no exception.

It was great to run into familiar faces from my Twitter stream like Juozas Kaziukėnas, Ben Ramsey, Brian DeShong, and Boaz Ziniman. I also had the pleasure of meeting some new friends from companies that had sent their PHP developers to the conference.

See You Next Year

We hope you take the time to check out some of the presentations from this year’s event, and consider attending next year. Get notified about registration for next year’s event by signing up for the re:Invent mailing list on the AWS re:Invent website.