AWS Developer Blog

Security update to AWS SDK for .NET’s Amazon CloudFront Cookie Signer

by Milind Gokarn | in .NET

The AWS SDK for .NET has a utility class, Amazon.CloudFront.AmazonCloudFrontCookieSigner, for creating signed cookies to access private content served through Amazon CloudFront. This post describes an issue with this utility class, the SDK versions affected, and how to mitigate it.

Specifying AmazonCloudFrontCookieSigner.Protocols.Https as the protocol parameter creates a cookie with an incorrect policy; the policy contains a resource restriction of "http*://" instead of "https://".
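
For reference, here’s a hedged sketch of how signed cookies are typically created with this class; the overload and all parameter values shown are illustrative, so check the SDK reference for the exact signatures. With an affected SDK version, passing Protocols.Https here produced the overly broad http*:// restriction described above.

using System;
using System.IO;
using Amazon.CloudFront;

class SignedCookieExample
{
    static void Main()
    {
        // All parameter values below are placeholders.
        var cookies = AmazonCloudFrontCookieSigner.GetCookiesForCannedPolicy(
            AmazonCloudFrontCookieSigner.Protocols.Https,    // the protocol option affected by this issue
            "d111111abcdef8.cloudfront.net",                 // CloudFront distribution domain
            new FileInfo("cloudfront-private-key.pem"),      // private key for the CloudFront key pair
            "private/content.jpg",                           // resource path
            "APKAEXAMPLE",                                   // key pair ID
            DateTime.UtcNow.AddHours(1));                    // expiration

        // Set the returned cookie name/value pairs (expires, signature, and
        // key pair ID) on the response that grants access to the private content.
    }
}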

Potential Impact

CloudFront distributions configured to serve both HTTP and HTTPS requests are affected by this issue, unless the Viewer Protocol Policy is set to HTTPS only, in which case CloudFront blocks attempts to access content over HTTP.

Impacted SDK versions

  • Versions 2.3.36 to 2.3.55 for version 2 of the AWS SDK for .NET
  • Versions 3.0.1-preview to 3.3.3.6 for package AWSSDK.CloudFront of the AWS SDK for .NET
  • Versions 3.2.0-beta to 3.2.3.7-beta, and 3.2.8-rc for package AWSSDK.CloudFront in the preview version 3.2 of the AWS SDK for .NET, which targets .NET Core

Mitigation

Update your dependency to the latest version of the SDK. The fix contains a change to the AmazonCloudFrontCookieSigner.Protocols enum’s underlying values (a breaking change) and requires a recompilation of the consuming application. The assembly version of the SDK package has been updated for this fix. There are no other breaking API changes in this version.

  • Version 2.3.55.2 and above for package AWSSDK in version 2 of the AWS SDK for .NET
  • Version 3.3.4.0 and above for package AWSSDK.CloudFront in version 3 of the AWS SDK for .NET

React Native Support in the AWS SDK for JavaScript

by Christopher Radek | in JavaScript

We’re excited to announce React Native support in the AWS SDK for JavaScript.

You can now access all services that are currently supported in the AWS SDK for JavaScript from within a React Native application. You can configure Amazon Cognito Identity as the authentication provider by using the same Amazon Cognito Identity credentials you might already be familiar with from the browser SDK.

Getting Started with the SDK in React Native

Save the AWS SDK for JavaScript as a project dependency by running the following:

npm install --save aws-sdk

Within your project, import the SDK with the following example:

import AWS from 'aws-sdk/dist/aws-sdk-react-native';

Once imported, you can use the SDK the same way you use it in the browser. For example, the following shows how to get a list of folders from an S3 bucket, given a prefix:

import AWS from 'aws-sdk/dist/aws-sdk-react-native';

const s3 = new AWS.S3({
  region: 'REGION',
  credentials: {/* */}
});

export async function getFoldersByPrefix(bucket, prefix) {
  let nextMarker = null;
  let isTruncated = false;
  const folders = [];

  // Returns all objects in current 'directory'
  do {
    const s3Objects = await s3.listObjects({
      Bucket: bucket,
      Prefix: prefix || '',
      Delimiter: '/',
      Marker: nextMarker
    }).promise();
    // Store the folder paths
    const prefixes = s3Objects.CommonPrefixes.map(common => common.Prefix);
    folders.push.apply(folders, prefixes)
    // Check if there are more objects in this directory
    isTruncated = s3Objects.IsTruncated;
    nextMarker = isTruncated ? s3Objects.NextMarker : null;
  } while (isTruncated);
  
  return folders;
}
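
If you’re using Amazon Cognito Identity to provide credentials, as mentioned earlier, configuration mirrors the browser SDK. Here’s a minimal sketch; the region and identity pool ID are placeholders.

import AWS from 'aws-sdk/dist/aws-sdk-react-native';

// Placeholders: replace with your Region and Amazon Cognito identity pool ID.
AWS.config.region = 'us-east-1';
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
  IdentityPoolId: 'us-east-1:EXAMPLE-IDENTITY-POOL-ID'
});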

Take a look at the AWS SDK for JavaScript Developer Guide for examples of how to make service calls with the SDK.

Give It a Try!

We look forward to hearing what you think of this new support for React Native in the AWS SDK for JavaScript! Try it out and leave your feedback in the comments or on GitHub!

Build and Deploy a Serverless REST API in Minutes Using Chalice

by Leah Rivers | in Python

Chalice is a serverless microframework that makes it simple for you to use AWS Lambda and Amazon API Gateway to build serverless apps. We’ve improved Chalice based on community feedback from GitHub, and we’re eager for you to take our latest version for a spin. Hopefully, you’ll find Chalice a fast and effective way to build serverless apps.

To help you get started with Chalice, here’s a quick five-step review:

   Step 1: Install Chalice
   Step 2: Configure credentials
   Step 3: Create a project
   Step 4: Deploy your API
   Step 5: You’re done launching a simple API. Consider adding something to your app!

Let’s dig in.

Step 1: Install Chalice.
To install Chalice, you need Python 2.7 or 3.6, the versions Lambda supports. We recommend using a virtual environment, as follows.

 $ pip install virtualenv
 $ virtualenv ~/.virtualenvs/chalice-demo
 $ source ~/.virtualenvs/chalice-demo/bin/activate
 $ pip install chalice

Step 2: Add credentials if you haven’t previously configured boto3 or the AWS CLI.
(If you’re already running boto3 or the AWS CLI, you’re all good. Move on to Step 3.)

If this is your first time configuring credentials for AWS, use the following.

 $ mkdir ~/.aws
 $ cat >> ~/.aws/config
 [default]
 aws_access_key_id=YOUR_ACCESS_KEY_HERE
 aws_secret_access_key=YOUR_SECRET_ACCESS_KEY
 region=YOUR_REGION (such as us-west-2, us-west-1, etc)

For more information on all the supported methods for configuring credentials, see the boto3 docs.

Step 3: Create a project using the chalice command line.
Use the new-project command to create a sample app that defines a single view.

 $ chalice new-project helloworld
 $ cd helloworld

Take a moment to check out what you’ve created. In app.py, you’ve created a sample app that defines a single view, /, that returns the JSON body {"hello": "world"} when called.
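
The generated app.py looks roughly like this:

from chalice import Chalice

app = Chalice(app_name='helloworld')

@app.route('/')
def index():
    return {'hello': 'world'}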

Step 4: Deploy your App.
Alright, double-check that you’re still in your project directory – you’re ready to deploy!
From the command line, run chalice deploy.

 $ chalice deploy
 ...
 Initiating first time deployment...
 https://qxea58oupc.execute-api.us-west-2.amazonaws.com/dev/

You now have an API up and running using API Gateway and Lambda.

 $ curl https://qxea58oupc.execute-api.us-west-2.amazonaws.com/dev/
 {"hello": "world"}

Step 5: Add something to your app!
From this point, there’s a bunch of stuff you can do, including adding URL parameters, adding routing, or customizing the HTTP response. Find tutorials and examples here.
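
For example, capturing a URL parameter is just a matter of adding a parameterized route to app.py; here’s a quick sketch:

@app.route('/hello/{name}')
def hello_name(name):
    # GET /hello/james returns {"hello": "james"}
    return {'hello': name}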

Have fun!

Using AWS CodeCommit with Visual Studio Team Explorer

by Steve Roberts | in .NET

We recently announced support for new features in the AWS Toolkit for Visual Studio that make working with AWS CodeCommit repositories easy and convenient from within Visual Studio Team Explorer. In this post, we take a look at getting started with setting up credentials, and then how to create and clone repositories from within Team Explorer.

Credential types for AWS CodeCommit

If you’re an existing user of the AWS Toolkit for Visual Studio, you’re probably familiar with setting up AWS credential profiles that contain your access and secret keys. The Toolkit for Visual Studio uses these credential profiles to call service APIs on your behalf, for example, to list your Amazon S3 buckets in AWS Explorer or to launch an Amazon EC2 instance. The integration of AWS CodeCommit with Team Explorer also uses these credential profiles. However, to work with Git itself we need additional credentials, specifically Git credentials for HTTPS connections. You can read about these credentials (a user name and password) at Setup for HTTPS Users Using Git Credentials in the AWS CodeCommit User Guide.

You can create the Git credentials for AWS CodeCommit only for Identity and Access Management (IAM) user accounts. You cannot create them for a root account. You can create up to two sets of these credentials for the service and, although you can mark a set of credentials as inactive, inactive sets still count toward your limit of two sets. Note that you can delete and recreate credentials at any time. When you use AWS CodeCommit from within Visual Studio, your traditional AWS credentials are used for working with the service itself, for example, when you’re creating and listing repositories. When working with the actual Git repositories hosted in AWS CodeCommit, you use the Git credentials.

As part of the support for AWS CodeCommit, we’ve extended the Toolkit for Visual Studio to automatically create and manage these Git credentials for you and associate them with your AWS credential profile. That way, you don’t need to worry about having the right set of credentials at hand to perform Git operations within Team Explorer. Once you connect to Team Explorer with your AWS credential profile, the associated Git credentials are used automatically whenever you work with a Git remote.

Later in this post we’ll go over how and when to set up the Git credentials that you need. Just remember that you have to use an IAM user account (which we strongly recommend you do anyway).

Connecting to AWS CodeCommit

When you open the Team Explorer window in Visual Studio 2015 or later, you’ll see a new entry in the Hosted Service Providers section of Manage Connections, as shown.

Choosing Sign up opens the AWS home page in a browser window. What happens when you choose Connect depends on whether the Toolkit for Visual Studio can find a credential profile with AWS access and secret keys to enable it to make calls to AWS on your behalf. You might have set up a credential profile by using the new Getting Started page that displays in the IDE when the Toolkit cannot find any locally stored credentials. Or you might have been using our Toolkit, the AWS Tools for PowerShell, or the AWS CLI and already have AWS credential profiles available for the Toolkit to use.

When you choose Connect, the toolkit starts the process to find a credential profile to use in the connection. If the Toolkit can’t find a credential profile, it opens a dialog box that invites you to enter the access and secret keys for your AWS account. We strongly recommend that you use an IAM user account, and not your root credentials. In addition, as noted earlier, the Git credentials you will eventually need can only be created for IAM users. Once the access and secret keys are provided and the credential profile is created, the connection between Team Explorer and AWS CodeCommit is ready for use.

If the Toolkit finds more than one AWS credential profile, you’re prompted to select the account you want to use within Team Explorer, as shown.

If you have only one credential profile, the toolkit bypasses the profile selection dialog box and you’re connected immediately.

When a connection is established between Team Explorer and AWS CodeCommit via your credential profiles, the invitation dialog box closes and the connection panel is displayed, as shown below.

Because we have no repositories cloned locally, the panel shows just the operations we can perform: Clone, Create, and Sign out. Like other providers, AWS CodeCommit in Team Explorer can be bound to only a single AWS credential profile at any given time. To switch accounts, you use Sign out to remove the connection so you can start a new connection using a different account. We’ll see how this panel expands to display our local AWS CodeCommit repositories later in the post.

Now that we have established a connection, we can create a repository by clicking the Create link.

Creating a repository

When we click the Create link, the Create a New AWS CodeCommit Repository dialog box opens.

AWS CodeCommit repositories are organized by region, so in Region we can select the region in which to host the repository. The list has all the regions in which AWS CodeCommit is supported. We provide the Name (required) and Description (optional) for our new repository.

The default behavior of the dialog box is to suffix the folder location for the new repository with the repository name (as you enter the name, the folder location also updates). To use a different folder name, edit the Clone into folder path after you finish entering the repository name.

You can also elect to automatically create an initial .gitignore file for the repository. The AWS Toolkit for Visual Studio provides a built-in default for Visual Studio file types. Or you can choose to have no file or to use a custom existing file that you would like to reuse across repositories. Simply select Use custom in the list and navigate to the custom file to use.

Once we have a repository name and location, we’re ready to click OK and start creating the repository. The Toolkit requests that the service create the repository and then clone the new repository locally, adding an initial commit for the .gitignore file, if we’re using one. It’s at this point that we start working with the Git remote, so the Toolkit now needs access to the Git credentials we described earlier.

Setting up Git credentials

Until now we’ve been using AWS access and secret keys to request that the service create our repository. Now we need to work with Git itself to do the actual clone operation, and Git doesn’t understand AWS access and secret keys. Instead, we need to supply the user name and password credentials to Git to use on an HTTPS connection with the remote.

As we said earlier, the Git credentials we’re going to use must be associated with an IAM user. You cannot generate them for root AWS credentials (this is another reason why we recommend you set up your AWS credential profiles to contain IAM user access and secret keys, and not root keys). The Toolkit can attempt to set up Git credentials for AWS CodeCommit for you, and associate them with the AWS credential profile that we used to connect in Team Explorer earlier. Let’s take a look at the process.

When you choose OK in the Create a New AWS CodeCommit Repository dialog box and successfully create the repository, the Toolkit checks the AWS credential profile that is connected in Team Explorer to determine if Git credentials for AWS CodeCommit exist and are associated locally with the profile. If so, the Toolkit instructs Team Explorer to commence the clone operation on the new repository. If Git credentials are not available locally, the Toolkit checks the type of account credentials that were used in the connection in Team Explorer. If the credentials are for an IAM user, as we recommend, the following message is shown.

If the credentials are root credentials, the following message is shown instead.

In both cases, the Toolkit offers to attempt to do the work to create the necessary Git credentials for you. In the first scenario, all it needs to create is a set of Git credentials for the IAM user. When a root account is in use, the Toolkit first attempts to create an IAM user and then proceeds to create Git credentials for that new user. If the Toolkit has to create a new user, it applies the AWS CodeCommit Power User managed policy to that new user account. This policy allows access to AWS CodeCommit (and nothing else) and enables all operations to be performed with AWS CodeCommit except for repository deletion.

When you’re creating credentials, you can only view them once. Therefore, the toolkit prompts you to save the newly created credentials (as a .csv file) before continuing.

You won’t be surprised to learn that this is something we also strongly recommend (and be sure to save them to a secure location)!

There might be cases where the Toolkit can’t automatically create credentials. For example, you may already have created the maximum number of sets of Git credentials for AWS CodeCommit (two), or you might not have sufficient programmatic rights for the Toolkit to do the work for you (if you’re signed in as an IAM user). In these cases, you can log into the AWS Management Console to manage the credentials or obtain them from your administrator. You can then enter them in the Git Credentials for AWS CodeCommit dialog box, which the Toolkit displays.

Now that the credentials for Git are available, the clone operation for the new repository proceeds (Team Explorer shows progress for the operation). If you elected to have a default .gitignore file applied, it is committed to the repository with the comment ‘Initial Commit’.

That’s all there is to setting up credentials and creating a repository within Team Explorer. Once the required credentials are in place, all you see when creating new repositories in the future is the Create a New AWS CodeCommit Repository dialog itself. Now let’s look at cloning an existing repository.

Cloning a repository

To clone a repository, we return to the connection panel for AWS CodeCommit in Team Explorer. We click the Clone link to open the Clone AWS CodeCommit Repository dialog box, and then select the repository to clone and the location on disk where we want to place it.

Once we choose the region, the Toolkit queries the service to discover the repositories available in that region and displays them in the central list portion of the dialog box. The name and optional description of each repository are also displayed. You can sort the list by repository name or last modified date, in either ascending or descending order.

Once we select our repository we can choose the location to clone to. This defaults to the same repository location used in other plugins to Team Explorer, but you can browse to or enter any other location. By default, the repository name is suffixed onto the selected path. However, if you want a specific path, simply edit the text box after you select the folder. Whatever text is in the box when you click OK will be the folder in which the cloned repository will be found.

Having selected the repository and a folder location, we then click OK to proceed with the clone operation. Just as with creating a repository, you can see the progress of the clone operation reported in Team Explorer.

Working with repositories

When you clone or create repositories, notice that the set of local repositories for the connection is listed in the connection panel in Team Explorer under the operation links. These entries give you a convenient way to access a repository to browse its content: simply right-click the repository and choose Browse in Console.

You can also use Update Git Credentials to update the stored Git credentials associated with the credential profile. This is useful if you’ve rotated the credentials. The command will display the Git Credentials for AWS CodeCommit dialog box we noted earlier for you to enter or import the new credentials.

Git operations on the repositories work as you’d expect. You can make local commits and, when you are ready to share, you use the Sync option in Team Explorer. Because the Git credentials are already stored locally and associated with our connected AWS credential profile, we won’t be prompted to supply them again for operations against the AWS CodeCommit remote.

Wrap

We hope you found this post useful in detailing how to manage credentials for AWS CodeCommit inside Team Explorer and using them to create and clone repositories within the IDE!

Updates for .NET Core Lambda Libraries

by Norm Johanson | in .NET

With our release of .NET Core support in AWS Lambda, we also released many NuGet packages to help you develop Lambda functions. We’ve been constantly updating them on our GitHub repository as well. Let’s look at some of the recent updates.

Amazon.Lambda.Tools

This package contains the integration with the .NET Core CLI, which you can use to deploy your functions. The AWS Toolkit for Visual Studio also uses this package to perform the deployment. For information about this package, see this previous post.

Lambda supports .NET Core 1.0. If you add a dependency to your .NET Core project that requires .NET Core 1.1, the .NET Core publishing tooling used by Amazon.Lambda.Tools will run without errors. However, when you run the function you’ll get errors because of the incompatibility. In version 1.5.0 of Amazon.Lambda.Tools we added validation on top of the .NET Core publishing tool to ensure that none of the dependencies for the project require a later runtime than Lambda supports.

New Events Packages

We have many NuGet packages that contain typed classes modeling the Lambda event types for the services. We recently added two more packages: Amazon.Lambda.LexEvents and Amazon.Lambda.KinesisFirehoseEvents.

Amazon.Lambda.LexEvents

Amazon Lex is a service for creating bots. You can use Lambda functions to process the incoming requests to the bot. The Amazon.Lambda.LexEvents package contains the LexEvent and LexResponse classes that you can use as parameter and return for your Lambda functions.
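
As a rough sketch (the class name and any property names beyond LexEvent, LexResponse, and ILambdaContext are illustrative), a handler using these types might look like this:

using Amazon.Lambda.Core;
using Amazon.Lambda.LexEvents;

public class BookTripsFunction
{
    public LexResponse FunctionHandler(LexEvent lexEvent, ILambdaContext context)
    {
        // Log which intent Amazon Lex is asking us to fulfill.
        context.Logger.LogLine($"Processing intent {lexEvent.CurrentIntent?.Name}");

        // Populate the response's dialog action to tell Amazon Lex how to
        // continue the conversation (omitted in this sketch).
        return new LexResponse();
    }
}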

In the Amazon Lex console you can create several getting-started bots. Book Trips is one of these samples; you can use it to simulate booking a hotel or a car. We added a blueprint in Visual Studio that you can use to create the Lambda processor for the Book Trips bot.

Amazon.Lambda.KinesisFirehoseEvents

Amazon Kinesis Firehose recently added support for using Lambda functions to transform the data being streamed to Amazon S3. The Amazon.Lambda.KinesisFirehoseEvents package contains the KinesisFirehoseEvent and KinesisFirehoseResponse classes. We also added a new getting started blueprint to Visual Studio for Firehose.

Serialization Debugging

As we mentioned, we have many packages providing typed classes that you can use for Lambda functions. You can also define your own classes, and the Amazon.Lambda.Serialization.Json package, which is registered in all of the blueprints we provide, will automatically handle all serializing and deserializing into JSON. In version 1.1.0 of the Amazon.Lambda.Serialization.Json package, we added a new debugging feature to help diagnose serialization issues you might have with your custom types. If you add the environment variable LAMBDA_NET_SERIALIZER_DEBUG with the value of true, the Amazon.Lambda.Serialization.Json package writes the incoming and outgoing JSON to the Amazon CloudWatch log stream. This can be very useful to verify that typed classes are being sent back as you expect.
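
For example, one way to set this variable on a deployed function is with the AWS CLI; the function name below is a placeholder.

aws lambda update-function-configuration --function-name MyDotNetFunction \
    --environment "Variables={LAMBDA_NET_SERIALIZER_DEBUG=true}"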

ASP.NET Core Web API Support

We continue to add features to our ASP.NET Core Web API support on top of Lambda. We’re also getting great support from our community on this project in the form of pull requests and feature requests. Please keep the feedback coming. In version 0.10.1-preview1 of Amazon.Lambda.AspNetCoreServer we added:

  • Binary support – see the README.md file for details on how to set this up.
  • Filling in the RemoteIpAddress and RemotePort on HttpContext.Connection from the Amazon API Gateway request.
  • The APIGatewayProxyRequest and ILambdaContext objects for the Lambda invocation are now added to the HttpContext.Items collection, under the collection keys APIGatewayRequest and LambdaContext (see the sketch after this list).
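
For example, you can retrieve these objects from HttpContext.Items inside a controller action. Here’s a hedged sketch; the controller and route names are made up, the item keys follow the names listed above, and the casts assume the Amazon.Lambda.APIGatewayEvents and Amazon.Lambda.Core types.

using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;
using Microsoft.AspNetCore.Mvc;

public class RequestInfoController : Controller
{
    [HttpGet("request-info")]
    public IActionResult Get()
    {
        // Pull the raw API Gateway request and Lambda context out of HttpContext.Items.
        var apiGatewayRequest = HttpContext.Items["APIGatewayRequest"] as APIGatewayProxyRequest;
        var lambdaContext = HttpContext.Items["LambdaContext"] as ILambdaContext;

        return Ok(new
        {
            Path = apiGatewayRequest?.Path,
            RemainingTimeMs = lambdaContext?.RemainingTime.TotalMilliseconds
        });
    }
}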

Amazon.Lambda.Templates (1.2.1)

The NuGet package Amazon.Lambda.Templates makes all the blueprints offered in Visual Studio available to the dotnet new command. We recently released version 1.2.1 with the new Amazon Lex and Firehose blueprints, and we updated all the dependencies for the other blueprints. See this earlier blog post on how to install and use the blueprints from the dotnet new command.
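
As a quick refresher, installing the templates and creating a project follow the standard dotnet new flow; the template short name and project name below are examples.

dotnet new -i "Amazon.Lambda.Templates::*"
dotnet new lambda.EmptyFunction --name MyFunction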

Summary

We are continually improving our Lambda packages to enhance the experience of developing Lambda functions. Check out the GitHub repo, which is also a great place to give us your feedback. You can also track the releases of the packages in the RELEASE.CHANGELOG.md file.

PHP application logging with Amazon CloudWatch Logs and Monolog

by Joseph Fontes | in PHP

Logging and debugging can be approached from many different angles. Whether you use an application framework or code from scratch, it’s always comforting to have familiar components and tools across different projects. In today’s examples, I’m going to enable Amazon CloudWatch Logs logging from a PHP application. To accomplish this, I wanted an existing solution that is already popular and well used, and that is standards compliant. For these reasons, we’re going to use the open source logging library PHP Monolog (https://github.com/Seldaek/monolog).

PHP Monolog

For anyone working with a new PHP application, framework, or service, one of the technology choices that appears frequently across solutions is Monolog for application logging. PHP Monolog is a standards-compliant PHP library that enables developers to send logs to various destinations, including databases, files, sockets, and different services. Although PHP Monolog predates the PHP logging standard defined in PSR-3, it implements the PSR-3 interface, making it compliant with the common interface for logging libraries; using Monolog with CloudWatch Logs therefore gives us a PSR-3 compatible logging solution. Monolog can be used with a number of applications and frameworks, such as Laravel, Symfony, CakePHP, and many others. Today’s example uses PHP Monolog to send information to CloudWatch Logs for application logging, and to build a structure and process that lets us use our application data with CloudWatch alarms and notifications. This enables us to use logs from our application for cross-service actions, such as Amazon EC2 Auto Scaling decisions.

Amazon CloudWatch Logs

As a customer-driven organization, AWS is constantly building and releasing significant features and services requested by AWS customers and partners. One of those services that we highlight today is Amazon CloudWatch Logs. CloudWatch Logs enables you to store log file information from applications, operating systems and instances, AWS services, and various other sources. An earlier blog post highlighted the use of CloudWatch Logs with various programming examples.

Notice that the blog post includes a PHP example that uses CloudWatch Logs to store an entry from an application. You can extend that example into a standalone solution for logging to CloudWatch Logs from within your application. In the examples that follow, we’ll build on it by using PHP Monolog.

Implementing Monolog

To begin using Monolog, we install the necessary libraries with the use of Composer (https://getcomposer.org/). The instructions below install the AWS SDK for PHP, PHP Monolog, and an add-on to Monolog that enables logging to CloudWatch Logs.

curl -sS https://getcomposer.org/installer | php
php composer.phar require aws/aws-sdk-php
php composer.phar require monolog/monolog
php composer.phar require maxbanton/cwh:^1.0

Alternatively, you can copy the following entry to the composer.json file and install it via the php composer.phar install command.

{
    "minimum-stability": "stable",
    "require": {
        "aws/aws-sdk-php": "^3.24",
        "aws/aws-php-sns-message-validator": "^1.1",
        "monolog/monolog": "^1.21",
        "maxbanton/cwh": "^1.0"
    }
}

Local logging

Now that PHP Monolog is available for use, we can test the implementation. We start with an example of logging to a single file.

require "vendor/autoload.php";

use Monolog\Logger;
use Monolog\Formatter\LineFormatter;
use Monolog\Handler\StreamHandler;

$logFile = "testapp_local.log";

$logger = new Logger('TestApp01');
$formatter = new LineFormatter(null, null, false, true);
$infoHandler = new StreamHandler(__DIR__."/".$logFile, Logger::INFO);
$infoHandler->setFormatter($formatter);
$logger->pushHandler($infoHandler);
$logger->info('Initial test of application logging.');

In the previous example, we start by requiring the composer libraries we installed earlier. The new Logger line sets the channel name as “TestApp01”. The next line creates a new LineFormatter that removes brackets around unused log items. The next line establishes the destination as the file name we identified, testapp_local.log, and associates that with the INFO log level. Next, we apply the format to our stream handler. Then we add the stream handler with the updated format to the handler list. Finally, a new message is logged with the log level of INFO. For information about log levels and different handlers, see the Monolog GitHub page and IETF RFC 5424 and PSR-3.

We can now view the contents of the log file to verify that the entry was written.
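
With the LineFormatter configured above, the new entry in testapp_local.log should look something like the following (the timestamp will differ):

[2017-03-20 15:14:10] TestApp01.INFO: Initial test of application logging.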

Syslog logging

Now that we are able to write a simple log entry to a local file, our next example uses the system Syslog to log events.

$logger = new Logger($appName);

$localFormatter = new LineFormatter(null, null, false, true);
$syslogFormatter = new LineFormatter("%channel%: %level_name%: %message% %context% %extra%",null,false,true);

$infoHandler = new StreamHandler(__DIR__."/".$logFile, Logger::INFO);
$infoHandler->setFormatter($localFormatter);

$warnHandler = new SyslogHandler($appName, $facility, Logger::WARNING);
$warnHandler->setFormatter($syslogFormatter);

$logger->pushHandler($warnHandler);
$logger->pushHandler($infoHandler);

$logger->info('Test of PHP application logging.');
$logger->warn('Test of the warning system logging.');

Here we can see that the format of the syslog messages has been changed by the $syslogFormatter value. Because syslog provides a date/time with each log entry, we don’t need to include these values in our log text. The syslog facility is set to local0, WARNING messages are sent to syslog, and both INFO and WARNING level messages are logged to our local file. You can find additional information about syslog facilities and log levels on the Syslog Wikipedia page.

Logging to CloudWatch Logs

Now that you’ve seen the basic use of Monolog, let’s send some logs over to CloudWatch Logs. We can use the Amazon Web Services CloudWatch Logs Handler for Monolog library to integrate Monolog with CloudWatch Logs. In our example, an authentication application produces log information.

use Aws\CloudWatchLogs\CloudWatchLogsClient;
use Maxbanton\Cwh\Handler\CloudWatch;
use Monolog\Logger;
use Monolog\Formatter\LineFormatter;
use Monolog\Handler\StreamHandler;
use Monolog\Handler\SyslogHandler;

$logFile = "testapp_local.log";
$appName = "TestApp01";
$facility = "local0";

// Get instance ID:
$url = "http://169.254.169.254/latest/meta-data/instance-id";
$instanceId = file_get_contents($url);

$cwClient = new CloudWatchLogsClient($awsCredentials);
// Log group name, created if it doesn't exist
$cwGroupName = 'php-app-logs';
// Log stream name (created if it doesn't exist); here we use the instance ID
$cwStreamNameInstance = $instanceId;
// Second log stream, named for the application
$cwStreamNameApp = "TestAuthenticationApp";
// Days to keep logs (14 by default)
$cwRetentionDays = 90;

$cwHandlerInstanceNotice = new CloudWatch($cwClient, $cwGroupName, $cwStreamNameInstance, $cwRetentionDays, 10000, [ 'application' => 'php-testapp01' ],Logger::NOTICE);
$cwHandlerInstanceError = new CloudWatch($cwClient, $cwGroupName, $cwStreamNameInstance, $cwRetentionDays, 10000, [ 'application' => 'php-testapp01' ],Logger::ERROR);
$cwHandlerAppNotice = new CloudWatch($cwClient, $cwGroupName, $cwStreamNameApp, $cwRetentionDays, 10000, [ 'application' => 'php-testapp01' ],Logger::NOTICE);

$logger = new Logger('PHP Logging');

$formatter = new LineFormatter(null, null, false, true);
$syslogFormatter = new LineFormatter("%channel%: %level_name%: %message% %context% %extra%",null,false,true);
$infoHandler = new StreamHandler(__DIR__."/".$logFile, Logger::INFO);
$infoHandler->setFormatter($formatter);

$warnHandler = new SyslogHandler($appName, $facility, Logger::WARNING);
$warnHandler->setFormatter($syslogFormatter);

$cwHandlerInstanceNotice->setFormatter($formatter);
$cwHandlerInstanceError->setFormatter($formatter);
$cwHandlerAppNotice->setFormatter($formatter);

$logger->pushHandler($warnHandler);
$logger->pushHandler($infoHandler);
$logger->pushHandler($cwHandlerInstanceNotice);
$logger->pushHandler($cwHandlerInstanceError);
$logger->pushHandler($cwHandlerAppNotice);

$logger->info('Initial test of application logging.');
$logger->warn('Test of the warning system logging.');
$logger->notice('Application Auth Event: ',[ 'function'=>'login-action','result'=>'login-success' ]);
$logger->notice('Application Auth Event: ',[ 'function'=>'login-action','result'=>'login-failure' ]);
$logger->error('Application ERROR: System Error');

In this example, application authentication events are passed as a PHP array and presented in CloudWatch Logs as JSON. The events with a result of login-success and login-failure are sent to both the log stream associated with the instance ID and to the log stream associated with the application name.

 

Using these different stream locations, we can create metrics and alarms at either a per-instance or per-application level. Let’s assume that we want to create a metric for the total number of users who logged in to our application over the past five minutes. Select your log group and then choose Create Metric Filter.

On the next page, we can create our filter and test in the same window. For the filter data, we use the JSON string from the log entry. Enter the following string to extract all the successful logins.

{ $.result = login-success }

Below, we can see the filter details. I updated the Filter Name to a value that’s easy to identify. The Metric Namespace now has a value associated with the application name and the metric name reflects the number of login-success values.

 

We could now create an alarm to send a notification or perform some action (such as an Amazon EC2 scaling decision), based on this information being received via CloudWatch Logs.

With these values, we would receive an alert each time there were more than 50 successful logins within a five-minute period.
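
As a sketch, an equivalent alarm could also be created from the AWS CLI; the alarm name, namespace, metric name, and SNS topic ARN below are placeholders that should match the metric filter and notification target you created.

aws cloudwatch put-metric-alarm --alarm-name php-testapp01-login-success \
    --namespace TestAuthenticationApp --metric-name login-success-count \
    --statistic Sum --period 300 --threshold 50 \
    --comparison-operator GreaterThanThreshold --evaluation-periods 1 \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts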

Laravel logging

Monolog is used as the logging solution for a number of PHP applications and frameworks, including the popular Laravel PHP framework. In this example, we’ll show how to use Monolog with CloudWatch Logs within Laravel. Our first step is to find the current log settings for our Laravel application. If you open config/app.php within your application root, you’ll see various log settings. By default, Laravel is set to log to a single log file using the baseline log level of debug.

Next, we add the AWS SDK for PHP as a service provider within Laravel using instructions and examples from here.

You also want to add the Monolog library for CloudWatch Logs to the composer.json file for inclusion in the application, as shown below.
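
For example, the relevant composer.json addition mirrors the require entry shown earlier:

{
    "require": {
        "maxbanton/cwh": "^1.0"
    }
}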

You now need to extend the current Laravel Monolog configuration with your custom configuration. You can find additional information about this step on the Laravel Error and Logging page. The following is an example of this addition to the bootstrap/app.php file.

use Maxbanton\Cwh\Handler\CloudWatch;

$app->configureMonologUsing( function($monolog) {

    $cwClient = App::make('aws')->createClient('CloudWatchLogs');
    $cwGroupName = env('AWS_CWL_GROUP', 'laravel-app-logs');
    $cwStreamNameApp = env('AWS_CWL_APP', 'laravel-app-name');
    $cwTagName = env('AWS_CWL_TAG_NAME', 'application');
    $cwTagValue = env('AWS_CWL_TAG_VALUE', 'laravel-testapp01');
    $cwRetentionDays = 90;
    $cwHandlerApp = new CloudWatch($cwClient, $cwGroupName, $cwStreamNameApp, $cwRetentionDays, 10000, [ $cwTagName => $cwTagValue ] );

    $monolog->pushHandler($cwHandlerApp);
});

For testing purposes, we add a logging call to a test route in routes/web.php.

Route::get('/test', function () {
    Log::warning('Clicking on test link!!!');
    return view('test');
});

When the test route is invoked, the logs now show in CloudWatch Logs.

Conclusion

In our examples, we’ve shown how to use PHP Monolog to log to a local file, syslog, and CloudWatch Logs. We have also demonstrated the integration of Monolog with CloudWatch Logs within a popular PHP application framework. Finally, we’ve shown how to create CloudWatch Logs metric filters and apply those to CloudWatch Alarms that make the data from the logs actionable with notifications, as well as scaling decisions. CloudWatch Logs provides a central logging capability for your PHP applications and, combined with Monolog, ensures the availability of the library for use within established projects and custom engagements.

AWS Toolkit for Eclipse: Support for AWS CodeCommit and AWS CodeStar

by Zhaoxi Zhang | in Java

I am pleased to announce that the AWS Toolkit for Eclipse now supports AWS CodeCommit and AWS CodeStar. This means you can create, view, clone, and delete your AWS CodeCommit repositories in the AWS Toolkit for Eclipse. You can also import existing projects under your AWS CodeStar account directly into the Eclipse IDE.

Git Credentials Configuration

We recommend that you use Git credentials with HTTPS to connect to your AWS CodeCommit repositories. For more information, see Use Git Credentials and HTTPS with AWS CodeCommit.

In the new version of the AWS Toolkit for Eclipse, you will see an entry for AWS CodeCommit on the Eclipse Preferences page, shown here. (To install the AWS Toolkit for Eclipse, follow the instructions on the AWS Toolkit for Eclipse page.) You can configure the Git credentials for your AWS accounts on this page; for details, see Create Git Credentials for HTTPS Connections to AWS CodeCommit. You can type the newly generated user name and password into the text fields, or import the CSV file generated by the IAM console directly into Eclipse.

AWS CodeCommit Explorer

An entry for AWS CodeCommit also appears in AWS Explorer, as shown here. To open this view, click the drop-down box next to the AWS icon in the toolbar, and select Show AWS Explorer View. You can create, view, clone, and delete repositories in this view.

  • Create a Repository
    To create a repository, right-click AWS CodeCommit and then select Create Repository, as shown here. Type the repository name and an optional description in the Create Repository dialog box. The newly created repository will appear under AWS CodeCommit.

    Figure: AWS CodeCommit Explorer View

    Figure: Create Repository Dialog Box

  • View a Repository
    To view a repository, double-click the repository name in AWS Explorer. This will open the repository editor where you can see the metadata for the repository, as shown here. The repository editor also shows the latest 10 commits for the selected branch. To refresh the repository editor, click the refresh icon on the top-right corner of the page.
  • Clone a Repository
    To clone a repository, click the Check out button in the repository editor, or right-click the repository name in AWS Explorer and select Clone Repository. If you haven’t configured Git credentials for your current AWS account in your Eclipse, a dialog box will prompt you to configure them.


    After you have configured your Git credentials, you will see the following pages for selecting a branch and local destination. You’ll see these pages have the same look and feel as EGit. For information about EGit, see the EGit Tutorial. You can use the Eclipse EGit plugin for managing your projects with Git. 

    Figure: Branch Selection Page

    Figure: Destination Page

  • Delete a Repository
    To delete a repository from AWS CodeCommit, right-click the repository name and select Delete Repository. When the following dialog box is displayed, type the repository name.

AWS CodeStar Project Checkout

You can use the AWS Toolkit for Eclipse to check out AWS CodeStar projects and edit them in the Eclipse IDE. To import your AWS CodeStar projects to Eclipse, click the drop-down box next to the AWS icon in the toolbar, and select Import AWS CodeStar Project. You will see all your AWS CodeStar projects under the selected account and region.

The plugin for AWS CodeStar finds all the AWS CodeCommit repositories that are linked to the selected project. From the Select repository drop-down list, choose the repository, and then click Next. You can also configure the Git credentials on this page if they have not been configured on the selected account.

Resources

For information about AWS CodeCommit, see the AWS CodeCommit documentation. For information about AWS CodeStar, see the AWS CodeStar documentation.

Conclusion

We hope you find these new features useful. If you have questions or other feedback about using the AWS Toolkit for Eclipse, feel free to leave it in the comments.

Make the Most of Community Resources for AWS SDKs and Tools

by Leah Rivers

As the new year gets well underway, we want to be sure you know the best ways to get help, keep up to date, and join the conversation about tools you use to build on AWS. We’ve recently refreshed our SDK and CLI README files on GitHub with links to Stack Overflow for getting help. It’s a great time to give you an overview of community resources that we hope make it easier for you to develop using AWS. Let us know what you think!

Get Help
We use GitHub for tracking bugs and feature requests: .NET | Java | JavaScript | PHP | Ruby | Python | Go | C++ | CLI

We use Stack Overflow for general help questions. Use these tags for our SDKs and CLI:

Chat with the Community
We ❤ our gitter channels for the CLI and SDKs. We regularly participate in conversations with developers building on AWS to share ideas, get feedback, and answer questions in the context of a community chat. Join the community by checking out our gitter channels: .NET | Java | JavaScript | PHP | Ruby | Python | Go | CLI

Follow us on Twitter
@awscloud – We’ll share blog posts and announcements for all our SDKs and developer tools.
@awsfornet – Follow us here for updates to the AWS SDK for .NET and AWS Toolkit for Visual Studio.
@awsforjava – Follow us here for updates to the AWS SDK for Java and AWS Toolkit for Eclipse.

AWS Toolkit for Eclipse: VPC Configuration Enhancement for AWS Elastic Beanstalk Environments

by Zhaoxi Zhang | in Java

From the blog post VPC Configuration for an AWS Elastic Beanstalk Environment, you learned how to deploy your web application to AWS Elastic Beanstalk by using the AWS Toolkit for Eclipse. In this blog post, I’m happy to announce that you can now configure Elastic Load Balancing (ELB) subnets and Amazon EC2 subnets separately. As the following screenshots show, the experience in the AWS Toolkit for Eclipse is consistent with that in the Elastic Beanstalk console.

 

VPC Configuration in AWS Elastic Beanstalk Console

VPC Configuration in AWS Toolkit for Eclipse

Notice that the ELB subnet configuration is enabled only when the environment type is Load Balanced Web Server Environment (see the following screenshot for the type selection). Please read through Using Elastic Beanstalk with Amazon VPC to be sure you understand all the VPC parameters. Inappropriate parameter combinations can cause deployment failures. Follow the rules below when you create an AWS Elastic Beanstalk environment:

  • You must select at least one subnet for EC2 and for ELB.
  • You must select at least one ELB subnet in each Availability Zone where there is an EC2 subnet, and vice versa.
  • You may only select one EC2 subnet per Availability Zone.
  • When one subnet is used for both EC2 and ELB, select the Associate Public IP Address check box unless you have set up a NAT instance to route traffic from the Internet to your ELB subnet.

Application and Environment Configuration

Context Pattern added to the AWS SDK for Go

by Jason Del Ponte | in Go

The AWS SDK for Go v1.8.0 release adds support for API operation request functional options and the Context pattern. Both of these features were in high demand from our users. Request options allow you to easily configure and augment how the SDK makes API operation requests to AWS services. The SDK’s support for the Context pattern allows your application to take advantage of cancellation, timeouts, and Context values on requests. Together, the new request options and Context pattern give your application even more control over the SDK’s request execution and handling.

Request Options

Request options are functional arguments that you pass in to the SDK’s API operation methods. They let you configure the request inline with the method call. Functional options are a pattern for configuring an operation via functions or closures passed in alongside the method call.

For example, you can configure the Amazon S3 API operation PutObject to log debug information about the request directly, without impacting the other API operations used by your application.

// Log this API operation only. 
resp, err := svc.PutObjectWithContext(ctx, params, request.WithLogLevel(aws.LogDebug))

This pattern is also helpful when you want your application to inject request handlers into the request, inline with the API operation method call.

resp, err := svc.PutObjectWithContext(ctx, params, func(r *request.Request) {
	start := time.Now()
	r.Handlers.Complete.PushBack(func(req *request.Request) {
		fmt.Println("request %s took %s to complete", req.RequestID, time.Since(start))
	})
})

All of the SDK’s new service client methods that have a WithContext suffix support these request options. You can also apply request options to the SDK’s standard Request directly with the ApplyOptions method.
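
For example, here’s a brief sketch (reusing svc and params from the snippets above) that builds a standard Request, applies the same logging option, and then sends it:

// Build the request without sending it, apply options, then send it.
req, _ := svc.PutObjectRequest(params)
req.ApplyOptions(request.WithLogLevel(aws.LogDebug))
if err := req.Send(); err != nil {
	return err
}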

API Operations with Context

All of the SDK’s new API operation methods that have a WithContext suffix take a Context value, which must be non-nil. Context allows your application to control API operation request cancellation, which means you can now easily implement request timeouts based on the Context pattern. Go introduced the Context pattern in the experimental package golang.org/x/net/context, and it was later added to the Go standard library in Go 1.7. For backward compatibility with earlier Go versions, the SDK defines the Context interface type in the github.com/aws/aws-sdk-go/aws package. The SDK’s Context type is compatible with Context from both golang.org/x/net/context and the Go 1.7 standard library context package.

Here is an example of how to use a Context to cancel uploading an object to Amazon S3. If the put doesn’t complete within the timeout passed in, the API operation is canceled. When a Context is canceled, the SDK returns the CanceledErrorCode error code. A working version of this example can be found in the SDK.

sess := session.Must(session.NewSession())
svc := s3.New(sess)

// Create a context with a timeout that will abort the upload if it takes 
// more than the passed in timeout.
ctx := context.Background()
var cancelFn func()
if timeout > 0 {
	ctx, cancelFn = context.WithTimeout(ctx, timeout)
}
// Ensure the context is canceled to prevent leaking.
// See context package for more information, https://golang.org/pkg/context/
if cancelFn != nil {
	defer cancelFn()
}

// Uploads the object to S3. The Context will interrupt the request if the 
// timeout expires.
_, err := svc.PutObjectWithContext(ctx, &s3.PutObjectInput{
	Bucket: aws.String(bucket),
	Key:    aws.String(key),
	Body:   body,
})
if err != nil {
	if aerr, ok := err.(awserr.Error); ok && aerr.Code() == request.CanceledErrorCode {
		// If the SDK can determine the request or retry delay was canceled
		// by a context the CanceledErrorCode error code will be returned.
		fmt.Println("request's context canceled,", err)
	}
	return err
}

API Operation Waiters

Waiters were expanded to include support for request Context and waiter options. The new WaiterOption type defines functional options that are used to configure the waiter’s functionality.

For example, the WithWaiterDelay allows you to provide your own function that returns how long the waiter will wait before checking the waiter’s resource state again. This is helpful when you want to configure an exponential backoff, or longer retry delays with ConstantWaiterDelay.

The example below highlights this by configuring the WaitUntilBucketExists method to use a 30-second delay between checks to determine if the bucket exists.

svc := s3.New(sess)
ctx := context.Background()

_, err := svc.CreateBucketWithContext(ctx, &s3.CreateBucketInput{
	Bucket: aws.String("myBucket"),
})
if err != nil {
	return fmt.Errorf("failed to create bucket, %v", err)
}

err = svc.WaitUntilBucketExistsWithContext(ctx,
	&s3.HeadBucketInput{
		Bucket: aws.String("myBucket"),
	},
	request.WithWaiterDelay(request.ConstantWaiterDelay(30 * time.Second)),
)
if err != nil {
	return fmt.Errorf("failed to wait for bucket exists, %v", err)
}

fmt.Println("bucket created")

API Operation Paginators

Paginators were also expanded to support Context and request options. Configuring request options for pagination applies the options to each new Request that the SDK creates to retrieve the next page. By extending the Pages API methods to include Context and request options, the SDK gives you control over how each page request is made, and lets you cancel the pagination.

svc := s3.New(sess)
ctx := context.Background()

err := svc.ListObjectsPagesWithContext(ctx,
	&s3.ListObjectsInput{
		Bucket: aws.String("myBucket"),
		Prefix: aws.String("some/key/prefix"),
		MaxKeys: aws.Int64(100),
	},
	func(page *s3.ListObjectsOutput, lastPage bool) bool {
		fmt.Println("Received", len(page.Contents), "objects in page")
		for _, obj := range page.Contents {
			fmt.Println("Key:", aws.StringValue(obj.Key))
		}
		return true
	},
)
if err != nil {
	return fmt.Errorf("failed to create bucket, %v", err)
}

API Operation Pagination without Callbacks

In addition to the Pages API operations, you can use the new Pagination type in the github.com/aws/aws-sdk-go/aws/request package. This type enables you to control the iteration of pages directly, which is helpful when you don’t want to use callbacks for paginating AWS operations. It lets you treat pagination much like the Go standard library’s bufio.Scanner type, iterating through pages with a for loop. You can also combine this with the Context pattern by calling Request.SetContext on each request in the NewRequest function.

svc := s3.New(sess)

params := s3.ListObjectsInput{
	Bucket: aws.String("myBucket"),
	Prefix: aws.String("some/key/prefix"),
	MaxKeys: aws.Int64(100),
}
ctx := context.Background()

p := request.Pagination{
	NewRequest: func() (*request.Request, error) {
		req, _ := svc.ListObjectsRequest(&params)
		req.SetContext(ctx)
		return req, nil
	},
}

for p.Next() {
	page := p.Page().(*s3.ListObjectsOutput)
	
	fmt.Println("Received", len(page.Contents), "objects in page")
	for _, obj := range page.Contents {
		fmt.Println("Key:", aws.StringValue(obj.Key))
	}
}

return p.Err()

Wrap Up

The addition of Context and request options expands the capabilities of the AWS SDK for Go, giving your applications the tools they need to control the SDK’s request lifecycle and configuration. Let us know about your experiences using the new Context pattern and request options features.