AWS Developer Blog

AWS Toolkit for Eclipse Integration with AWS CodeDeploy (Part 3)

In this part of the series, we will show you how easy it is to run deployment commands on your EC2 fleet with the help of the AWS CodeDeploy plugin for Eclipse.

Create AppSpec Template

  • First, let’s create a shell script that executes the command we need to run on our instances:

/home/hanshuo/stop-httpd-server-appspec/template/stop-httpd.sh

#!/bin/bash

service ##HTTPD_SERVICE_NAME## stop

To make it a little fancier, instead of hardcoding httpd as the service name, we use a placeholder, ##HTTPD_SERVICE_NAME##. Later, you will learn how this helps you create a configurable deployment task in Eclipse.
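
For example, with the default value httpd (which we will use later in this walkthrough), the plugin would render the script as:

#!/bin/bash

service httpd stop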

  • Next, inside the same directory, let’s create a simple AppSpec file that specifies our shell script as the command for the ApplicationStart lifecycle event.

/home/hanshuo/stop-httpd-server-appspec/template/appspec.yml

version: 0.0
os: linux
hooks:
  ApplicationStart:
    - location: stop-httpd.sh
      timeout: 300
      runas: root

This AppSpec file asks CodeDeploy to run stop-httpd.sh as the root user during the ApplicationStart phase of the deployment. Since this is the only phase mentioned, it effectively tells the service to run this single script as the whole deployment process – that’s all we need! You can find more information about the AppSpec file in the AWS CodeDeploy Developer Guide.

  • Now that we have created our template, which consists of all the necessary AppSpec and command script files, the final step is to create a metadata file for it, written in a specific JSON format understood by the Eclipse plugin.

/home/hanshuo/stop-httpd-server-appspec/template.md

{
  "metadataVersion" : "1.0",
  "templateName" : "Stop Apache HTTP Server",
  "templateDescription" : "Stop Apache HTTP Server",
  "templateBasedir" : "/home/hanshuo/stop-httpd-server-appspec/template",
  "isCustomTemplate" : true,
  "warFileExportLocationWithinDeploymentArchive" : "/application.war",
  "parameters" : [
    {
      "name" : "Apache HTTP service name",
      "type" : "STRING",
      "defaultValueAsString" : "httpd",
      "substitutionAnchorText" : "##HTTPD_SERVICE_NAME##",
      "constraints" : {
        "validationRegex" : "[\S]+"
      }
    }
  ]
}
  • templateName, templateDescription – Specifies the name and description for this template.
  • templateBasedir – Specifies the base directory where your AppSpec file and the command scripts are located.
  • isCustomTemplate – True if it is a custom template created by the user; this tells the plugin to treat templateBasedir as an absolute path.
  • warFileExportLocationWithinDeploymentArchive – Since this deployment task doesn’t actually consume any WAR file, we can specify any value for this attribute.
  • parameters – Specifies a list of all the configurable parameters in our template. In this case, we have only one parameter, ##HTTPD_SERVICE_NAME##.
    • name – The user-friendly name for the parameter
    • type – Either STRING or INTEGER
    • defaultValueAsString – The default value for this parameter
    • substitutionAnchorText – The placeholder text that represents this parameter in the template files; when copying the template files, the plugin replaces these placeholders with the user’s actual input value
    • constraints – The constraints used to validate user input; supported constraints are validationRegex (for STRING), and minValue and maxValue (for INTEGER). A sketch of an INTEGER parameter follows this list.
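
For example, a hypothetical INTEGER parameter that uses the range constraints might be declared like this (the field names follow the metadata format above; the exact value types for minValue and maxValue are an assumption):

{
  "name" : "Tomcat port number",
  "type" : "INTEGER",
  "defaultValueAsString" : "8080",
  "substitutionAnchorText" : "##PORT_NUMBER##",
  "constraints" : {
    "minValue" : 1,
    "maxValue" : 65535
  }
}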

Ok, now that we have everything ready, let’s go back to Eclipse and import the AppSpec template we just created.

In the last page of the deployment wizard, click the Import Template button at the top-right corner of the page:

Then find the location of our template metadata file, and click Import.

The plugin will parse the metadata and create a simple UI view for the user input of the template parameter ##HTTPD_SERVICE_NAME##. Let’s just use the default value httpd, and click Finish.

After the deployment completes, all the httpd services running on your EC2 instances will be stopped. If you are interested in how your commands were executed on your hosts, or if you need to debug your deployment, the log output of your scripts can be found at /opt/codedeploy-agent/deployment-root/{deployment-group-ID}/{deployment-ID}/logs/scripts.log.
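
For example, while a deployment is in progress, you can watch the script output on an instance with a command like the following (substituting the actual deployment group and deployment IDs):

	tail -f /opt/codedeploy-agent/deployment-root/{deployment-group-ID}/{deployment-ID}/logs/scripts.log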

In this example, you should see the following log output, which shows that the httpd service was successfully stopped during the ApplicationStart event:

	2015-01-07 00:28:04 LifecycleEvent - ApplicationStart
	2015-01-07 00:28:04 Script - stop-httpd.sh
	2015-01-07 00:28:04 [stdout]Stopping httpd: [  OK  ]

In the future, if you ever want to repeat this operation on your EC2 instances, just kick off another deployment in Eclipse using the same Stop Apache HTTP Server template, and you are done!

You can use the AppSpec template system to create more complicated deployment tasks. For example, you can define your own template that deploys your Java web app to other servlet containers such as Jetty and JBoss. If you are interested in the Tomcat 7 running on Linux template we used in the walkthrough, you can find the source code in our GitHub repo.

Feel free to customize the source for your specific need, and keep in mind that we are always open to pull-requests if you want to contribute your own templates that might be useful for other Java developers.

Conclusion

The AWS CodeDeploy plugin for Eclipse allows you to easily initiate a deployment directly from your source development environment. It eliminates the need to repeat the manual operations of building, packaging, and preparing a revision. It also allows you to quickly set up an AppSpec template that represents a repeatable and configurable deployment task.

Give it a try and see whether it can improve how you deploy your Java web project to your EC2 instances. If you have any feedback or feature requests, tell us about them in the comments. We’d love to hear them!

AWS Toolkit for Eclipse Integration with AWS CodeDeploy (Part 2)

In this part of the blog series, we will show you how to deploy a Java web project to your EC2 instances using the AWS CodeDeploy plugin.

Prerequisites

If you want to follow the walkthrough, you will need to create a CodeDeploy deployment group to begin with. The easiest way to do so is to follow the first-run walkthrough in the AWS CodeDeploy Console.

In this example, we have created a CodeDeploy application called DemoApplication and a deployment group called DemoFleet, which includes three EC2 instances running the Amazon Linux AMI.

Deploy an AWS Java Web Project to CodeDeploy

First, let’s open Eclipse and create a new AWS Java Web project in the workspace (File -> New -> Project -> AWS Java Web Project). Select the Basic Java Web Application option to start with. Note that this step is the same as how you would start a project for AWS Elastic Beanstalk.

In Project Explorer, right-click on the new project, and select Amazon Web Services -> Deploy to AWS CodeDeploy….

In the first page of the deployment wizard, you will be asked to select the target CodeDeploy application and deployment group. In this example, we select “DemoApplication” and “DemoFleet” which we just created in the console.

In the next page, you can specify the following options for this deployment.

  • CodeDeploy deployment config – Specifies how many instances the deployment runs on in parallel. In this example, we select “CodeDeployDefault.OneAtATime”, which is the safest approach for reducing application downtime.
  • Ignore ApplicationStop step failures – Indicates whether failures during the ApplicationStop lifecycle event command should be ignored rather than stopping the deployment.
  • S3 bucket name – Specifies the S3 bucket where your revision will be uploaded.

Click Next, and you will be asked to select the AppSpec template and parameter values for this deployment. At this moment, you should see only one predefined template, Tomcat 7 running on Linux. This AppSpec template includes the lifecycle event commands that spin up a Tomcat 7 server on your EC2 instances and deploy your application to it. The template accepts parameters including the context path of the application and the port number that the Tomcat server will listen on.

We will explain later how the AppSpec template is defined and how you can add your custom templates. Here we select deploying to server root and using the default HTTP port 80. Then just click Finish to initiate the deployment.

After the deployment starts, a dialog will open that tracks the progress of the deployment on each individual EC2 instance.

You can double-click on any of the instances to open the detailed view of each lifecycle event. If the deployment fails during any of the events, you can click View Diagnostics to see the error code and the log output from your command script.

After the deployment completes, your application will be available at http://{ec2-public-endpoint}.

To view the full deployment history of a deployment group, we can visit the deployment group detail page via AWS Explorer View -> AWS CodeDeploy -> DemoApplication -> Double-click DemoFleet.

If you have been following the walkthrough, you might not see the sample JSP page when accessing the EC2 endpoint; instead, you see the “Amazon Linux AMI Test Page”. This happens because the Amazon Linux AMI comes with an Apache HTTP server that is already running and occupying port 80, the same port our Tomcat server attempts to bind to.

To solve this problem, you will need to run `sudo service httpd stop` on every EC2 instance before the Java web app is deployed. Without the help of CodeDeploy, you would need to ssh into each of the instances and manually run the command, which is a tedious and time-consuming process. So how can we leverage the CodeDeploy service to ease this process? What would be even better is to have the ability to save this specific deployment task into some configurable format, and make it easily repeatable in the future.

In the next part of our blog series, we will take a look at how we can accomplish this by using the AWS CodeDeploy plugin for Eclipse.

AWS Toolkit for Eclipse Integration with AWS CodeDeploy (Part 1)

We are excited to announce that the AWS Toolkit for Eclipse now includes integration with AWS CodeDeploy and AWS OpsWorks. In addition to the support of AWS Elastic Beanstalk deployment, these two new plugins provide more options for Java developers to deploy their web application to AWS directly from their Eclipse development environment.

In this blog post series, we will take a look at the CodeDeploy plugin and walk you through its features to show you how it can improve your deployment automation.

How to Install?

The AWS CodeDeploy and OpsWorks plugins are available at the official AWS Eclipse Toolkit update site (http://aws.amazon.com/eclipse). Just follow the same steps you took when you installed and updated the AWS plugins, and you will see the two new additions in the plugin list of our update site.

For more information about the installation and basic usage of the AWS Toolkit for Eclipse, go to our official documentation site.

AWS CodeDeploy

If you haven’t heard of it yet, AWS CodeDeploy is a new AWS service that was just launched last year during re:Invent 2014. The service allows you to fully automate the process of deploying your code to a fleet of EC2 instances. It eliminates the need for manual operations by providing a centralized solution that allows you to initiate, control and monitor your deployments.

If you want to learn more about CodeDeploy, here are some useful links:

One of the major design goals of AWS CodeDeploy is to be platform and language agnostic. With a command-based install model, CodeDeploy allows you to specify the commands you want to run during each deployment phase (a.k.a. lifecycle event), and these commands can be written in whatever language you choose.

The language-agnostic nature of CodeDeploy brings maximum flexibility and makes it usable for all kinds of deployment purposes. But because of this generality, the service may not natively support some common use cases that are specific to a particular development language. For example, when working with Java web applications, we would likely not want to deploy the Java source code directly to our hosts – the deployment process always involves some necessary building and packaging phases before publishing the content to the hosts. This is in contrast to many scripting languages, where the source code itself can be used directly as the deployment artifact. In its deployment workflow model, CodeDeploy also requires the developer to prepare a revision every time they want to initiate a deployment. This revision can be either a snapshot of the GitHub repo or an archive bundle uploaded to Amazon S3, and it must include an Application Specification (AppSpec) file, where the developer’s custom deployment commands are specified.

To summarize, as the following diagram shows, deploying a Java web application via CodeDeploy would require non-trivial manual operations in the development environment, before the deployment actually happens.

Ideally, we would want a tool that is able to:

  • automate the building, packaging, and revision preparation phases for a Java web app
  • support creating configurable and repeatable deployment tasks

In the next part of our blog series, we will walk through a simple use case to demonstrate how the AWS CodeDeploy Eclipse plugin solves these problems and makes deployment as easy as a few mouse clicks. Stay tuned!

DynamoDB JSON and Array Marshaling for PHP

by Jeremy Lindblom | on | in PHP

Back in October of 2014, Amazon DynamoDB added support for new data types, including the map (M) and list (L) types. These new types, along with some API updates, make it possible to store more complex, multilevel data, and use DynamoDB for document storage.

The DynamoDB Marshaler

To make these new types even easier for our PHP SDK users, we added a new class, called the DynamoDB Marshaler, in Version 2.7.7 of the AWS SDK for PHP. The Marshaler object has methods for marshaling JSON documents and PHP arrays to the DynamoDB item format and unmarshaling them back.

Marshaling a JSON Document

Let’s say you have a JSON document describing a contact in the following format:

{
  "id": "5432c69300594",
  "name": {
    "first": "Jeremy",
    "middle": "C",
    "last": "Lindblom"
  },
  "age": 30,
  "phone_numbers": [
    {
      "type": "mobile",
      "number": "5555555555",
      "preferred": true
    },
    {
      "type": "home",
      "number": "5555555556",
      "preferred": false
    }
  ]
}

You can use the DynamoDB Marshaler to convert this JSON document into the format required by DynamoDB.

use Aws\DynamoDb\DynamoDbClient;
use Aws\DynamoDb\Marshaler;

$client = DynamoDbClient::factory(/* your config */);
$marshaler = new Marshaler();
$json = file_get_contents('/path/to/your/document.json');

$client->putItem([
    'TableName' => 'YourTable',
    'Item'      => $marshaler->marshalJson($json)
]);

The output of marshalJson() is an associative array that includes all the type information required for the DynamoDB 'Item' parameter.

[
    'id' => ['S' => '5432c69300594'],
    'name' => ['M' => [
        'first' => ['S' => 'Jeremy'],
        'middle' => ['S' => 'C'],
        'last' => ['S' => 'Lindblom'],
    ]],
    'age' => ['N' => '30'],
    'phone_numbers' => ['L' => [
        ['M' => [
            'type' => ['S' => 'mobile'],
            'number' => ['S' => '5555555555'],
            'preferred' => ['BOOL' => true]
        ]],
        ['M' => [
            'type' => ['S' => 'home'],
            'number' => ['S' => '5555555556'],
            'preferred' => ['BOOL' => false]
        ]],
    ]],
];

To retrieve an item and get the JSON document back, you need to use the unmarshalJson() method.

$result = $client->getItem([
    'TableName' => 'YourTable',
    'Key'       => ['id' => ['S' => '5432c69300594']]
]);
$json = $marshaler->unmarshalJson($result['Item']);

Marshaling a Native PHP Array

The Marshaler also provides the marshalItem() and unmarshalItem() methods, which do the same kind of conversion for native PHP arrays. This is essentially an upgraded version of the existing DynamoDbClient::formatAttributes() method.

$data = [
    'id' => '5432c69300594',
    'name' => [
        'first'  => 'Jeremy',
        'middle' => 'C',
        'last'   => 'Lindblom',
    ],
    'age' => 30,
    'phone_numbers' => [
        [
            'type'      => 'mobile',
            'number'    => '5555555555',
            'preferred' => true
        ],
        [
            'type'      => 'home',
            'number'    => '5555555556',
            'preferred' => false
        ],
    ],
];

// Marshaling the data and putting an item.
$client->putItem([
    'TableName' => 'YourTable',
    'Item'      => $marshaler->marshalItem($data)
]);

// Getting an item and unmarshaling the data.
$result = $client->getItem([
    'TableName' => 'YourTable',
    'Key'       => ['id' => ['S' => '5432c69300594']]
]);
$data = $marshaler->unmarshalItem($result['Item']);

Be aware that marshalItem() does not support binary (B) and set (SS, NS, and BS) types. This is because they are ambiguous with the string (S) and list (L) types and have no equivalent type in JSON. We are working on some ideas that will provide more help with these types in Version 3 of the SDK.
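
If you do need one of those types, one workaround is to build that attribute yourself in the raw DynamoDB item format and merge it into the marshaled data before calling putItem(). Here is a minimal sketch; the nicknames attribute is hypothetical:

// Marshal the data, then attach a hand-built string set (SS) attribute.
$item = $marshaler->marshalItem($data);
$item['nicknames'] = ['SS' => ['Jer', 'JCL']]; // hypothetical attribute

$client->putItem([
    'TableName' => 'YourTable',
    'Item'      => $item
]);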

Deprecations in the SDK

The new data types are a great addition to the Amazon DynamoDB service, but one consequence of adding support for these types is that we had to deprecate the following classes and methods in the Aws\DynamoDb namespace of the PHP SDK:

These classes and methods made assumptions about how certain native PHP types convert to DynamoDB types. The addition of the new types to DynamoDB invalidated those assumptions, and we could not update the code in a backward-compatible way to support them. The classes and methods still work fine, just not with the new types. They will be removed in Version 3 of the SDK, and the DynamoDB Marshaler is meant to be the replacement for their functionality.

Feedback

We hope that this addition to the SDK makes working with DynamoDB really easy. If you have any feedback about the Marshaler or any ideas on how we can improve it, please let us know on GitHub. Better yet, send us a pull request. :-)

Updated Amazon Cognito Credentials Provider

by Pavel Safronov | on | in .NET

Amazon Cognito allows you to get temporary AWS credentials, so that you don’t have to distribute your own credentials with your application. Last year we added a Cognito credentials provider to the AWS SDK for .NET to simplify this process.

With the latest update to Cognito, we are now making it even easier to use Cognito with your application. Using the latest version of the SDK, you no longer need to specify IAM roles in your application if you have already associated the correct roles with your identity pool.

Below is an example of how you can construct and use the new credentials provider:

CognitoAWSCredentials credentials = new CognitoAWSCredentials(
    identityPoolId,   // identity pool id
    region);          // identity pool region

using (var s3Client = new AmazonS3Client(credentials))
{
    s3Client.ListBuckets();
}

Something to note is that even if you have associated roles with an identity pool, you can still specify IAM roles—even ones that are different from the roles configured on the identity pool—when creating these credentials. This gives you finer control over what resources and operations these credentials have access to.
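
For reference, here is a sketch of the constructor overload that takes explicit role ARNs; the account ID and role ARN values are placeholders you would supply:

CognitoAWSCredentials credentials = new CognitoAWSCredentials(
    accountId,        // AWS account ID
    identityPoolId,   // identity pool id
    unAuthRoleArn,    // ARN of the IAM role for unauthenticated users
    authRoleArn,      // ARN of the IAM role for authenticated users
    region);          // identity pool region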

Upcoming Stable Release of AWS SDK for Ruby – Version 2

by Trevor Rowe | on | in Ruby

We plan to release version 2 of the AWS SDK for Ruby next week. We will remove the preview flag from the 2.0 version of aws-sdk.

Specify Your Version Dependencies

The AWS SDK for Ruby uses semantic versioning. Updates within version 1 are backwards compatible.

Version 2 of the aws-sdk gem is not backwards compatible.

If you depend on the aws-sdk gem today and do not specify the major version, please add one now; otherwise, you may run into issues when you run bundle update.

# Gemfile
gem 'aws-sdk', '< 2.0'

# gemspec
spec.add_dependency('aws-sdk', '< 2.0')

NameError: uninitialized constant AWS

If you receive this error, you likely have a dependency on aws-sdk and have updated so that you now have version 2 installed. Version 2 uses a different module name, so it does not define AWS.

To resolve this issue, specify your version dependency as instructed above.
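
To illustrate the module name change, here is a quick sketch using Amazon S3 client classes from each version:

# Version 1 defines the AWS module
s3 = AWS::S3.new

# Version 2 defines the Aws module instead
s3 = Aws::S3::Client.new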

Using Both Versions

The following diagram shows how the version 1 and version 2 gems are organized.

The aws-sdk gem is empty, and simply requires version 1 or version 2 specific gems. This allows you to install version 1 and version 2 in the same application.

Option A, for existing users

# Gemfile
gem 'aws-sdk', '~> 1'
gem 'aws-sdk-resources', '~> 2'

# in code
require 'aws-sdk'
require 'aws-sdk-resources'

Option B, for new users

# Gemfile
gem 'aws-sdk-v1'
gem 'aws-sdk', '~> 2'

# in code
require 'aws-sdk-v1'
require 'aws-sdk'

Attention Library Authors

If you maintain a gem that has a dependency on version 1 of aws-sdk, I strongly recommend that you replace it with a dependency on aws-sdk-v1. This allows end users to require version 2 of aws-sdk.
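
For example, a minimal gemspec change might look like this:

# before
spec.add_dependency('aws-sdk', '~> 1')

# after
spec.add_dependency('aws-sdk-v1', '~> 1')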

Please report any issues you have on our GitHub repository.

Cross-Account IAM Roles in Windows PowerShell

by Brian Beach | on | in .NET

As a company’s adoption of Amazon Web Services (AWS) grows, most customers adopt a multi-account strategy. Some customers choose to create an account for each application, while others create an account for each business unit or environment (development, testing, production). Whatever the strategy, there is often a use case that requires access to multiple accounts at once. This post examines cross-account access and the AssumeRole API, exposed as the Use-STSRole cmdlet in Windows PowerShell.

A role consists of a set of permissions that grant access to actions or resources in AWS. An application uses a role by calling the AssumeRole API function. The function returns a set of temporary credentials that the application can use in subsequent function calls. Cross-account roles allow an application in one account to assume a role (and act on resources) in another account.

One common example of cross-account access is maintaining a configuration management database (CMDB). Most large enterprise customers have a requirement that all servers, including EC2 instances, must be tracked in the CMDB. Example Corp., shown in Figure 1, has a Payer account and three linked accounts: Development, Testing, and Production.

Figure 1: Multiple Accounts Owned by a Single Customer

Note that linked accounts are not required to use cross-account roles, but they are often used together. You can use cross-account roles to access accounts that are not part of a consolidated billing relationship or between accounts owned by different companies. See the user guide to learn more about linked accounts and consolidated billing.

Scenario

Bob, a Windows administrator at Example Corp., is tasked with maintaining an inventory of all the instances in each account. Specifically, he needs to send a list of all EC2 instances in all accounts to the CMDB team each night. He plans to create a Windows PowerShell script to do this.

Bob could create an IAM user in each account and hard-code the credentials in the script. Though this would be simple, hard-coding credentials is not the most secure solution. The AWS best practice is to use IAM roles. Bob is familiar with IAM roles for Amazon EC2 and wants to learn more about cross-account roles.

Bob plans to script the process shown in Figure 2. The CMDB script will run on an EC2 instance using the CMDBApplication role. For each account, the script will call Use-STSRole to retrieve a set of temporary credentials for the CMDBDiscovery role. The script will then iterate over each region and call Get-EC2Instance using the CMDBDiscovery credentials to access the appropriate account and list all of its instances.

Figure 2: CMDB Application and Supporting IAM Roles

Creating IAM Roles

Bob begins to build his solution by creating the IAM roles shown in Figure 3. The Windows PowerShell script will run on a new EC2 instance in the Payer account. Bob creates a CMDBApplication role in the Payer account. This role is used by the EC2 instance, allowing the script to run without requiring IAM user credentials. In addition, Bob will create a CMDBDiscovery role in every account. The CMDBDiscovery role has permission to list (or “discover”) the instances in that account.

Figure 3: CMDB Application and Supporting IAM Roles

Notice that Bob has created two roles in the Payer account: CMDBApplication and CMDBDiscovery. You may be asking why he needs a cross-account role in the same account as the application. Creating the CMDBDiscovery role in every account makes the code easier to write because all accounts are treated the same. Bob can treat the Payer account just like any of the other accounts without a special code branch.

Bob first creates the Amazon EC2 role, CMDBApplication, in the Payer account. This role will be used by the EC2 instance that runs the Windows PowerShell script. Bob signs in to the AWS Management Console for the Payer account and follows the instructions to create a new IAM Role for Amazon EC2 with the following custom policy.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["sts:AssumeRole"],
      "Resource": ["*"]
    }
  ]
}

Policy 1: Policy Definition for the CMDBApplication IAM Role

The CMDBApplication role grants a single permission, sts:AssumeRole, which allows the application to call the AssumeRole API to get temporary credentials for another account. Notice that Bob is following the best practice of Least Privilege and has assigned only one permission to the application.

Next, Bob creates a cross-account role called CMDBDiscovery in each of the accounts, including the Payer account. This role will be used to list the EC2 instances in that account. Bob signs in to the console for each account and follows the instructions to create a new IAM role for cross-account access. In the wizard, Bob supplies the account ID of the Payer account (111111111111 in our example) and specifies the following custom policy.

Note that when creating the role, there are two options. One provides access between accounts you own, and the other provides access from a third-party account. Third-party account roles include an external ID, which is outside the scope of this article. Bob chooses the first option because his company owns both accounts.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:DescribeInstances"],
      "Resource": ["*"]
    }
  ]
}

Policy 2: Policy Definition for the CMDBDiscovery IAM Role

Again, this policy follows the best practice of least privilege and assigns a single permission, ec2:DescribeInstances, which allows the caller to list the EC2 instances in the account.

Creating the CMDB Script

With the IAM roles created, Bob next launches a new EC2 instance in the Payer account. This instance will use the CMDBApplication role. When the instance is ready, Bob signs in and creates a Windows PowerShell script that will list the instances in each account and region.

The first part of the script, shown in Listing 1, lists the instances in a given region and account. Notice that in addition to the account number and region, the function expects a set of credentials. These credentials represent the CMDBDiscovery role and will be retrieved from the AssumeRole API in the second part of the script.

Function ListInstances {
    Param($Credentials, $Account, $Region)
          
    #List all instances in the region
    (Get-EC2Instance -Credential $Credentials -Region $Region).Instances | % {
        If($Instance = $_) {
  
            #If there are instances in this region return the desired attributes
            New-Object PSObject -Property @{
                Account = $Account
                Region = $Region
                InstanceId = $Instance.InstanceId
                Name = ($Instance.Tags | Where-Object {$_.Key -eq 'Name'}).Value
            }
        }
    }
}

Listing 1: Windows PowerShell Function to List EC2 Instances

The magic happens in the second part of the script, shown in Listing 2. We know that the script is running on the new EC2 instance using the CMDBApplication role. Remember that the only thing this role can do is call the AssumeRole API. Therefore, we should expect to see a call to AssumeRole. The Windows PowerShell cmdlet that implements AssumeRole is Use-STSRole.

#Region used when calling Use-STSRole below
$Region = 'us-east-1'

#List of accounts to check
$Accounts = @(111111111111, 222222222222, 333333333333, 444444444444)
  
#Iterate over each account
$Accounts | % {
    $Account = $_
    $RoleArn = "arn:aws:iam::${Account}:role/CMDBDiscovery"
  
    #Request temporary credentials for each account and create a credential object
    $Response = (Use-STSRole -Region $Region -RoleArn $RoleArn -RoleSessionName 'CMDB').Credentials
    $Credentials = New-AWSCredentials -AccessKey $Response.AccessKeyId -SecretKey $Response.SecretAccessKey -SessionToken $Response.SessionToken
  
    #Iterate over all regions and list instances
    Get-AWSRegion | % {
        ListInstances -Credential $Credentials -Account $Account -Region $_.Region
    }
  
} | ConvertTo-Csv

Listing 2: Windows PowerShell Script That Calls AssumeRole

Use-STSRole will retrieve temporary credentials for the IAM role specified in the ARN parameter. The ARN uses the following format, where “ROLE_NAME” is the role you created in “TARGET_ACCOUNT_NUMBER” (e.g. CMDBDiscovery).

arn:aws:iam::TARGET_ACCOUNT_NUMBER:role/ROLE_NAME
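
For example, using one of the placeholder account IDs from Listing 2, the ARN of the discovery role would be:

arn:aws:iam::222222222222:role/CMDBDiscovery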

Use-STSRole will return an AccessKey, a SecretKey, and a SessionToken that can be used to access the account specified in the role ARN. The script uses this information to create a new credential object, which it passes to ListInstances. ListInstances uses the credential object to list EC2 instances in the account specified in the role ARN.

That’s all there is to it. Bob creates a scheduled task that executes this script each night and sends the results to the CMDB team. When the company adds additional accounts, Bob simply adds the CMDBDiscovery role to the new account and updates the account list in his script.
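
As a sketch of that last scheduling step, Bob could register a nightly scheduled job with the PSScheduledJob module; the script path and schedule here are assumptions:

#Run the inventory script every night at 1:00 AM
#(assumes the script is saved as C:\CMDB\Get-CMDBInventory.ps1)
Register-ScheduledJob -Name 'CMDBInventory' `
    -FilePath 'C:\CMDB\Get-CMDBInventory.ps1' `
    -Trigger (New-JobTrigger -Daily -At '1:00 AM')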

Summary

Cross-account roles are a valuable tool for large customers with multiple accounts. Roles provide temporary credentials that a user or application in one account can use to access resources in another account. These temporary credentials do not need to be stored or rotated, resulting in a secure and maintainable architecture.

WordPress on AWS Whitepapers

by Jeremy Lindblom | on | in PHP

This is a guest post by Andreas Chatzakis (@achatzakis), one of our AWS Solutions Architects.


WordPress is a very popular open source blogging tool and content management system (CMS) based on PHP and MySQL. AWS customers have been deploying WordPress to power anything from small blogs up to high traffic web sites.

We have recently published two new whitepapers about running WordPress on AWS:

  1. WordPress: Best Practices on AWS – This whitepaper helps system administrators get started with WordPress on AWS and shows them how to improve the cost efficiency of the deployment as well as the end-user experience. It also provides a reference architecture that addresses common scalability and high availability requirements.

  2. Deploying WordPress with AWS Elastic Beanstalk – This whitepaper demonstrates how to use AWS Elastic Beanstalk to implement a highly available and scalable deployment of WordPress. It includes the use of additional services such as Amazon Simple Storage Service (S3), Amazon CloudFront, and Amazon ElastiCache to improve the efficiency and performance of the installation.

These whitepapers complement the slides presented at the AWS re:Invent 2014 conference: Best Practices for Running WordPress on AWS (slides, video).

We hope you find the above material useful, and we are always eager to hear your stories and experience with WordPress on AWS!

Alternative Formatting for Metrics Data in .NET SDK Logs

by Jim Flanagan | on | in .NET

The AWS SDK for .NET has had response logging and performance metrics logging since before version 2.0. We introduced SDK logging and metrics output in an earlier post. You might want to skim that as a refresher.

The metrics data is included in the logs in a human-readable format, but SDK users who aggregate, analyze, and report on this data have had to implement their own parsers to extract it from the logs, which takes time and can be error prone. So we’ve added an alternate format that emits the metrics data into the log as JSON.

If you need a format other than JSON, or if you only need to log a subset of the metrics, the SDK now also has a mechanism to add a custom formatter.

Switching to JSON is done through the App.config or Web.config file. The SDK’s application configuration has changed a little since the aforementioned post, though for the sake of backward compatibility, the original configuration settings still work. To use the JSON setting, however, you’ll have to adopt the new style of configuration, at least for the logging section.

The old style of configuration was a set of flat key-value pairs in the <appSettings> section, like this:

<appSettings>
    <add key="AWSLogging" value="SystemDiagnostics" />
    <add key="AWSLogMetrics" value="true" />
    <add key="AWSResponseLogging" value="OnError" />
</appSettings>

The new configuration uses a custom section for the SDK with a structured format, like this:

<configuration>
    <configSections>
        <section name="aws" type="Amazon.AWSSection, AWSSDK" />
    </configSections>
    ...
    <aws region="us-west-2">
        <logging logTo="SystemDiagnostics"
             logMetrics="true"
             logResponses="OnError"
             logMetricsFormat="JSON" />
    </aws>
    ...
</configuration>

You can see that this configuration selects the JSON formatting. The rest of the logging configuration, including selection of System.Diagnostics or Log4Net, is the same as specified in the introductory logging post.

Creating a custom formatter is easy, too. First, implement the Amazon.Runtime.IMetricsFormatter interface, which specifies a single method that takes an Amazon.Runtime.IRequestMetrics and returns a string. Here’s a trivial example that prints out a single metric for a request:

using System.Linq;
using Amazon.Runtime;
namespace MyLib.Util
{
    public class MyMetricsFormatter : IMetricsFormatter
    {
        public string FormatMetrics(IRequestMetrics metrics)
        {
            var fmt = string.Empty;
            if (metrics.Timings.ContainsKey(Metric.ResponseProcessingTime))
            {
                var timing = metrics.Timings[Metric.ResponseProcessingTime]
                    .FirstOrDefault();

                if (timing != null)
                    fmt = string.Format("ResponseProcessingTime (ms): {0}", 
                            timing.ElapsedTime.Milliseconds);
            }
            return fmt;
        }
    }
}

The IRequestMetrics interface exposes three dictionaries of metrics: Properties, Timings, and Counters. The keys for these dictionaries are defined in the Amazon.Runtime.Metric enum. The Properties and Timings dictionaries have lists as values, and the Counters dictionary has longs as values.

To use a custom formatter, use the logMetricsCustomFormatter configuration, specifying the type and assembly:

<aws region="us-west-2">
    <logging logTo="SystemDiagnostics"
         logMetrics="true"
         logResponses="OnError"
         logMetricsCustomFormatter="MyLib.Util.MyMetricsFormatter, MyLib" />
</aws>

If you want to collect metrics for a subset of services or method calls, your custom formatter can inspect the Metric.ServiceName and Metric.MethodName items in the Properties dictionary. The default behavior can be accessed by calling ToString() on the passed-in IRequestMetrics. Similarly, you can get the JSON by calling metrics.ToJSON().
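
For example, a custom formatter that emits the built-in JSON output only for Amazon S3 calls might look like the following sketch (the "AmazonS3" service name string is an assumption):

using System.Linq;
using Amazon.Runtime;

namespace MyLib.Util
{
    public class S3OnlyMetricsFormatter : IMetricsFormatter
    {
        public string FormatMetrics(IRequestMetrics metrics)
        {
            // Only emit metrics for requests made to Amazon S3
            if (metrics.Properties.ContainsKey(Metric.ServiceName))
            {
                var service = metrics.Properties[Metric.ServiceName].FirstOrDefault();
                if (service != null && service.ToString() == "AmazonS3")
                    return metrics.ToJSON(); // reuse the built-in JSON formatting
            }
            return string.Empty;
        }
    }
}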

Keep in mind that if you have metrics logging enabled and have specified a custom formatter, your formatter will be called for every request, so keep it as simple as possible.

Support for Amazon SNS in the Preview Release of AWS Resource APIs for .NET

by Milind Gokarn | on | in .NET

The latest addition to the AWS Resource APIs for .NET is Amazon Simple Notification Service (SNS). Amazon SNS is a web service that enables applications, end-users, and devices to instantly send and receive notifications. In this post, we’ll see how we can use the resource APIs to work with SNS and to publish messages.

Topics

The key concept in SNS is a topic. A topic is something that publishers send messages to and subscribers receive messages from. Let’s take a look at how we can create and use a topic.

using Amazon.SimpleNotificationService.Model;
using Amazon.SimpleNotificationService.Resources; // Namespace for SNS resource APIs

// Create an instance of the SNS service 
// You can also use the overload that accepts an instance of the service client.
var sns = new SimpleNotificationService();

// Create a new topic
var topic = sns.CreateTopic("testTopic");

// Check that the topic is now in the list of all topics
// To do this, we can retrieve a list of all topics and check that.
var exists = sns.GetTopics()
    .Any(t => t.Arn.Equals(topic.Arn));
Console.WriteLine("Topic exists = {0}", exists);

// Modify topic attributes
topic.SetAttributes("DisplayName", "Test Topic");

// Subscribe an email endpoint to the topic
topic.Subscribe("test@example.com", "email");

// Wait until the subscription has been confirmed by the endpoint
// (for an email endpoint, the recipient must click the link in the confirmation email)

// Publish a message to the topic
topic.Publish("Test message");

// Delete the topic
topic.Delete();

// Check that the topic is no longer in the list of all topics
exists = sns.GetTopics()
    .Any(t => t.Arn.Equals(topic.Arn));
Console.WriteLine("Topic exists = {0}", exists);

As you can see, it’s easy to get started with and use the new Amazon SNS Resource APIs to work with the service.