AWS Developer Blog

Release: AWS SDK for PHP – Version 2.4.8

by Michael Dowling | in PHP

We would like to announce the release of version 2.4.8 of the AWS SDK for PHP. This release updates the AWS Direct Connect client and the Amazon Elastic MapReduce client, adding support for new EMR APIs, termination of specific cluster instances, and unlimited EMR steps.

Changelog

  • Updated the AWS Direct Connect client
  • Updated the Amazon Elastic MapReduce client to add support for new EMR APIs, termination of specific cluster instances, and unlimited EMR steps.

Install/Download the Latest SDK

Sending requests through a proxy

by Michael Dowling | in PHP

Some network configurations require that outbound connections be sent through a proxy server. Requiring a proxy for outbound HTTP requests is a common practice in many companies, and is often something that must be configured in a client.

You can send requests with the AWS SDK for PHP through a proxy by using the "request options" of a client. These request options are applied to each HTTP request sent from the client. One of the settings you can specify is the proxy option, which controls how the SDK utilizes a proxy.

Request options are passed to a client through the client’s factory method. Here’s an example of how you can specify a proxy for an Amazon S3 client:

use Aws\S3\S3Client;

$s3 = S3Client::factory(array(
    'request.options' => array(
        'proxy' => '127.0.0.1:123'
    )
));

The above example tells the client that all requests should be proxied through an HTTP proxy located at the 127.0.0.1 IP address using port 123.

Username and password

You can supply a username and password when specifying your proxy setting if needed:

$s3 = S3Client::factory(array(
    'request.options' => array(
        'proxy' => 'username:password@127.0.0.1:123'
    )
));

Proxy protocols

Because proxy support is handled through cURL, you can specify various protocols when specifying the proxy (e.g., socks5://127.0.0.1). More information on the proxy protocols supported by cURL can be found in the online cURL documentation.
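For example, here is a minimal sketch that routes requests through a hypothetical local SOCKS5 proxy (the address and port are placeholders):

use Aws\S3\S3Client;

// Route all requests through a SOCKS5 proxy listening on localhost.
$s3 = S3Client::factory(array(
    'request.options' => array(
        'proxy' => 'socks5://127.0.0.1:1080'
    )
));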

AWS SDK Core Response Structures

by Trevor Rowe | in Ruby

I blogged recently about how the code is now available for AWS SDK Core. This new repository is the basis for what will become version 2 of the AWS SDK for Ruby.

We have not cut a public gem for AWS SDK Core yet. Instead, we have published the work-in-progress code to GitHub for the community to see. We hope to get a lot of feedback on features as they are developed. In an effort to engage the community, I will be blogging about some of these new features and soliciting feedback. Our hope is to improve the overall quality of version 2 of the Ruby SDK through this process.

Today, I will be talking about the new response structures.

V1 Response Structures

In version 1 of the Ruby SDK, the low-level clients accepted a hash of request parameters, and then returned response data as a hash. Here is a quick example:

# hash in, hash out
response = AWS::S3::Client.new.list_buckets(limit: 2)
pp response.data

{:buckets=>[
  {:name=>"aws-sdk", :creation_date=>"2012-03-19T16:37:04.000Z"},
  {:name=>"aws-sdk-2", :creation_date=>"2013-09-27T16:17:02.000Z"}],
 :owner=>
  {:id=>"...",
   :display_name=>"..."}}

This approach is simple and flexible. However, it gives little guidance when exploring a response. Here are some issues that arise from using hashes:

  • Attempts to access unset response keys return a nil value. There is no way to tell if the service omitted the value or if the hash key contains a typo.

  • Operating on nested values is a bit awkward. To collect bucket names, a user would need to use blocks to access attributes:

    data[:buckets].map{ |b| b[:name] }
  • The response structure gives no information about what other attributes the described resource might have, only what is currently present.

V2 Response Structures

In AWS SDK Core, we take a different approach. We use descriptions of the complete response structure to construct Ruby Struct objects. Here is the sample from above, using version 2:

Aws::S3.new.list_buckets.data
#=> #<struct 
 buckets=
  [#<struct name="aws-sdk", creation_date=2012-03-19 16:37:04 UTC>,
   #<struct name="aws-sdk-2", creation_date=2013-09-27 16:17:02 UTC>],
 owner=
  #<struct 
   id="...",
   display_name="...">>

Struct objects provide the following benefits:

  • Indifferent access with strings, symbols, and methods:

    data.buckets.first.name
    data[:buckets].first[:name]
    data['buckets'].first['name']
    
  • Operating on nested values is possible using Symbol-to-Proc semantics:

    data.buckets.map(&:name)
    
  • Accessing an invalid property raises an error:

    data.buckets.first.color
    #=> raises NoMethodError: undefined method `color' for #<struct ...>
    

Feedback

What do you think about the new response structures? Take a moment to check out the new code and give it a spin. We would love to hear your feedback. Issues and feature requests are welcome. Come join us on GitHub.

Access Key Management for .NET Applications – Part 1

by Milind Gokarn | in .NET

In this post, we talk about three methods for providing AWS access keys to your .NET applications and look at a few best practices. While discussing the different options for managing access keys, we consider the following questions:

  • Security: How do you securely store credentials and supply them to your application?
  • Ease of management: What do you do if the credentials are compromised? How easy is it to rotate the credentials? How do you supply the new credentials to your application? (We’ll talk about this in detail in a future post.)

Setting credentials while constructing the service client

You can pass the IAM user credentials directly as parameters while constructing the service client. All service clients and the AWSClientFactory class have overloads that allow you to pass the credentials. The following snippet demonstrates creating an Amazon S3 client using AWSClientFactory.

var accessKey = ""; // Get access key from a secure store
var secretKey = ""; // Get secret key from a secure store
var s3Client = AWSClientFactory.CreateAmazonS3Client(accessKey, secretKey, RegionEndpoint.USWest2);

The preceding snippet assumes you have a way to manually retrieve IAM user access keys from a secure location. You should never store access keys in plain text in the code, as anyone with access to your code can misuse them.

Application configuration file

Instead of passing credentials to the service client in code, you can define access keys in an application configuration file (e.g., app.config or web.config). The SDK looks up both components of the access keys (AWSAccessKey and AWSSecretKey) in the application configuration and uses their values. We strongly recommend using IAM user access keys instead of root account access keys, as root account access keys always have full permissions to your entire AWS environment. If your application configuration file is ever compromised, you will want the access keys it contains to be as limited in scope as possible; defining an IAM user lets you do this.

The following snippet shows a sample app.config file and the creation of an S3 client using the AWSClientFactory class.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="AWSAccessKey" value=""/>
    <add key="AWSSecretKey" value=""/>
    <add key="AWSRegion" value="us-west-2"/>
  </appSettings>
</configuration>
var s3Client = AWSClientFactory.CreateAmazonS3Client();

If you store credentials in the configuration file, be careful not to commit the file to source control; doing so would make the credentials available to anyone with access to the repository.

We have looked at two methods of passing credentials. Both are easy to use, but they only support trivial scenarios. For production-grade usage, you will additionally need to implement the following:

  • Securely store and retrieve credentials.
  • Implement a mechanism to rotate credentials.

Next, we'll look at a method that provides both of these features out of the box.

Preferred method: IAM roles for EC2

If your application runs on an Amazon EC2 instance, you can use AWS Identity and Access Management (IAM) roles for EC2 to secure and simplify access key management. IAM roles for EC2 automatically distribute temporary security credentials to your EC2 instances, and the AWS SDK for .NET in your application can use them as if they were long-term access keys. These temporary security credentials are rotated automatically, and the SDK transparently picks up the new credentials before the existing ones expire.

To use this method, you first need to create an IAM role that has only the permissions your application requires to access AWS resources. When you launch an EC2 instance with that IAM role, the instance assumes the role and your application can obtain the role's temporary security credentials from the EC2 instance metadata service. The SDK supports loading these credentials from the instance metadata service, so you don't need to take any special steps to use this feature: if you construct a service client without specifying credentials, the client picks up the credentials from the metadata service. (We'll cover IAM roles in detail in a subsequent post.)
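As a minimal sketch (assuming the code runs on an EC2 instance launched with an IAM role), constructing a client without credentials might look like this:

// No credentials are supplied here; on an EC2 instance launched with an
// IAM role, the SDK resolves temporary credentials from the instance
// metadata service automatically.
var s3Client = AWSClientFactory.CreateAmazonS3Client(RegionEndpoint.USWest2);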

This is a good time to understand how the SDK loads credentials when they are made available to an application in multiple ways. Knowing this can save you time when debugging or trying to figure out which credentials are actually being used. Credentials are loaded or resolved in the following order (the sketch after this list illustrates the precedence):

  • Credentials explicitly passed to the service client.
  • Credentials in application configuration file.
  • Credentials from EC2 instance metadata service (applicable only when your code is running in an EC2 instance).
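As a hypothetical illustration of this precedence (reusing the accessKey and secretKey placeholders from the earlier snippet):

// Explicitly passed credentials always win, even if app.config
// also defines AWSAccessKey and AWSSecretKey.
var explicitClient = AWSClientFactory.CreateAmazonS3Client(
    accessKey, secretKey, RegionEndpoint.USWest2);

// With no credentials in code, the SDK falls back to the application
// configuration file, and finally to the EC2 instance metadata service.
var fallbackClient = AWSClientFactory.CreateAmazonS3Client(RegionEndpoint.USWest2);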

A few best practices

  • Do not hard-code access keys in code or embed them in your application. Doing so makes it difficult to rotate the keys, and you risk disclosure.
  • Do not use root access keys associated with your AWS account for development or in production. These keys have unlimited privileges. Instead, create IAM users and use their associated access keys.
  • For applications that run on Amazon EC2, only use IAM roles to distribute access keys. The security and management properties of IAM roles for EC2 are superior to any alternative where you manually manage access keys.

You will find more best practices with in-depth information here.

Getting your Amazon EC2 Windows Password with the AWS SDK for .NET

by Norm Johanson | in .NET

When you launch a Windows instance in EC2, a password will be generated for the Windows administrator user. You can retrieve this administrator’s password by using the AWS SDK for .NET.

In order to be able to get the administrator password, you need to launch the EC2 instance with a key pair. To create a key pair, call the CreateKeyPair method.

string keyPairName = "get-my-password";
var createKeyPairResponse = ec2Client.CreateKeyPair(new CreateKeyPairRequest()
{
    KeyName = keyPairName
});

// The private key for the key pair used to decrypt the password.
string privateKey = createKeyPairResponse.KeyPair.KeyMaterial;

It is important to save the private key when you create a key pair; it is required later to decrypt the password.
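For example, you might write the key material to a file immediately after creating the key pair (the path below is a placeholder; this requires System.IO):

// Save the private key so the Windows password can be decrypted later.
// Store this file somewhere secure.
File.WriteAllText(@"C:\keys\get-my-password.pem", privateKey);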

Now, when launching the EC2 instance, you need to set the key pair.

// Use the ImageUtilities from the Amazon.EC2.Util namespace to look up the latest Windows 2012 AMI
var image = ImageUtilities.FindImage(ec2Client, ImageUtilities.WINDOWS_2012_BASE);
var runInstanceResponse = ec2Client.RunInstances(new RunInstancesRequest()
{
    ImageId = image.ImageId,
    KeyName = keyPairName,
    InstanceType = InstanceType.T1Micro,
    MaxCount = 1,
    MinCount = 1
});

// Capture the instance ID
string instanceId = runInstanceResponse.Reservation.Instances[0].InstanceId;

Once you’ve launched the instance, it will take a few minutes for the password to become available. To get the password, call the GetPasswordData method. If the PasswordData property on the response from GetPasswordData is null, then the password is not available yet.

var getPasswordResponse = ec2Client.GetPasswordData(new GetPasswordDataRequest()
{
    InstanceId = instanceId
});

if (string.IsNullOrEmpty(getPasswordResponse.PasswordData))
{
    Console.WriteLine("Password not available yet.");
}
else
{
    string decryptedPassword = getPasswordResponse.GetDecryptedPassword(privateKey);
    Console.WriteLine("Decrypted Windows Password: {0}", decryptedPassword);
}

If the PasswordData property is not null, it contains the encrypted administrator password. The utility method GetDecryptedPassword on GetPasswordDataResponse takes the private key from the key pair and decrypts the password.
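Putting it together, a simple polling sketch might look like the following (the 30-second interval is an arbitrary choice; this requires System.Threading):

GetPasswordDataResponse passwordResponse = null;
do
{
    // Wait between attempts; the password can take a few minutes to appear.
    Thread.Sleep(TimeSpan.FromSeconds(30));
    passwordResponse = ec2Client.GetPasswordData(new GetPasswordDataRequest()
    {
        InstanceId = instanceId
    });
} while (string.IsNullOrEmpty(passwordResponse.PasswordData));

Console.WriteLine("Decrypted Windows Password: {0}",
    passwordResponse.GetDecryptedPassword(privateKey));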

Archiving and Backing-up Data with the AWS SDK for .NET

by Norm Johanson | in .NET

Jason Fulghum recently posted a blog entry about using Glacier with the AWS SDK for Java that I thought would be interesting for .NET developers. Here is Jason’s post with the code replaced with the C# equivalent.

Do you or your company have important data that you need to archive? Have you explored Amazon Glacier yet? Amazon Glacier is an extremely low-cost storage service that provides secure and durable storage for data archiving and backup. Just like with other AWS offerings, you pay only for what you use. You don’t have to pay large upfront infrastructure costs or predict capacity requirements like you do with on-premise solutions. Simply use what you need, when you need it, and pay only for what you use.

There are two easy ways to leverage Amazon Glacier for data archives and backups using the AWS SDK for .NET. The first option is to interact with the Amazon Glacier service directly. The AWS SDK for .NET includes a high-level API called ArchiveTransferManager for easily working with transfers into and out of Amazon Glacier.

IAmazonGlacier glacierClient = new AmazonGlacierClient();
ArchiveTransferManager atm = new ArchiveTransferManager(glacierClient);
UploadResult uploadResult = atm.Upload("myVaultName", "old logs", @"C:\logs\oldLogs.zip");

// later, when you need to retrieve your data
atm.Download("myVaultName", uploadResult.ArchiveId, @"C:\download\logs.zip");

The second easy way of getting your data into Amazon Glacier using the SDK is to get your data into Amazon S3 first, and use a bucket lifecycle to automatically archive your objects into Amazon Glacier after a certain period.

It’s easy to configure an Amazon S3 bucket’s lifecycle using the AWS Management Console or the AWS SDKs. Here’s how to create a bucket lifecycle that will copy objects under the "logs/" key prefix to Amazon Glacier after 365 days, and will remove them completely from your Amazon S3 bucket a few days later at the 370 day mark.

IAmazonS3 s3 = new AmazonS3Client();

var configuration = new LifecycleConfiguration()
{
    Rules = new List<LifecycleRule>
    {
        new LifecycleRule
        {
            Id = "log-archival-rule",
            Prefix = "logs/",
            Transition = new LifecycleTransition()
            {
                Days = 365,
                StorageClass = S3StorageClass.Glacier
            },
            Status = LifecycleRuleStatus.Enabled,
            Expiration = new LifecycleRuleExpiration()
            {
                Days = 370
            }
        }
    }
};

s3.PutLifecycleConfiguration(new PutLifecycleConfigurationRequest()
{
    BucketName = myBucket,
    Configuration = configuration
});

Are you using Amazon Glacier yet? Let us know how you're using it and how it's working for you!

DynamoDB APIs

by Pavel Safronov | in .NET

Amazon DynamoDB is a fast NoSQL database service offered by AWS. DynamoDB can be invoked from .NET applications by using the AWS SDK for .NET. The SDK provides three different models for communicating with DynamoDB. This blog post is the first of a series that describes the various APIs, their respective tradeoffs, best practices, and little-known features.

The Models

The SDK provides three ways of communicating with DynamoDB. Each one offers a different tradeoff between control and ease of use.

  • Low-level: Amazon.DynamoDBv2 namespace—This is a thin wrapper over the DynamoDB service calls. It matches all the service features. You can reference the service documentation to learn more about each individual operation.
  • Document Model: Amazon.DynamoDBv2.DocumentModel namespace—This model provides a simpler interface for dealing with data. DynamoDB tables are represented by Table objects, while individual rows of data are represented by Document objects. Conversion of .NET objects to DynamoDB data is automatic for basic types.
  • Object Persistence Model: Amazon.DynamoDBv2.DataModel namespace—This set of APIs allows you to store and load .NET objects in DynamoDB. Objects must be marked up to configure the target table and the hash/range keys. DynamoDBContext acts on marked-up objects. It is used to store and load DynamoDB data, or to retrieve .NET objects from a query or scan operation. Basic data types are converted to DynamoDB data automatically, and converters allow arbitrary types to be stored in DynamoDB.

The three models provide different approaches to working with the service. While the low-level approach requires more client-side code—the user must convert .NET types such as numbers and dates to DynamoDB-supported strings—it provides access to all service features. By comparison, the Object Persistence Model approach makes it easier to use the service—since the user is for the most part working with familiar .NET objects—but does not provide all the functionality. For example, it is not possible to make conditional Put calls with the Object Persistence Model.
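To make that difference concrete, here is a brief sketch of a conditional Put with the low-level client, reusing the Books table from the samples below; it writes the item only when no item with that Id already exists:

var client = new AmazonDynamoDBClient();
try
{
    client.PutItem(new PutItemRequest
    {
        TableName = "Books",
        Item = new Dictionary<string, AttributeValue>
        {
            { "Id", new AttributeValue { N = "42" } },
            { "Title", new AttributeValue { S = "Cryptonomicon" } }
        },
        // Condition: succeed only if no "Id" attribute exists for this key,
        // i.e., the item is not already present.
        Expected = new Dictionary<string, ExpectedAttributeValue>
        {
            { "Id", new ExpectedAttributeValue { Exists = false } }
        }
    });
}
catch (ConditionalCheckFailedException)
{
    Console.WriteLine("A book with Id 42 already exists.");
}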

Sample code

The best way to gain an understanding of the different models is with a code sample. Below are three examples of storing and retrieving data from DynamoDB, each using a different model.

Low-level

var client = new AmazonDynamoDBClient();

// Store item
client.PutItem(new PutItemRequest
{
    TableName = "Books",
    Item = new Dictionary<string, AttributeValue>
    {
        { "Title", new AttributeValue { S = "Cryptonomicon" } },
        { "Id", new AttributeValue { N = "42" } },
        { "Authors", new AttributeValue {
            SS = new List<string> { "Neal Stephenson" } } },
        { "Price", new AttributeValue { N = "12.95" } }
    }
});

// Get item
Dictionary<string, AttributeValue> book = client.GetItem(new GetItemRequest
{
    TableName = "Books",
    Key = new Dictionary<string, AttributeValue>
    {
        { "Id", new AttributeValue { N = "42" } }
    }
}).Item;

Console.WriteLine("Id = {0}", book["Id"].S);
Console.WriteLine("Title = {0}", book["Title"].S);
Console.WriteLine("Authors = {0}",
    string.Join(", ", book["Authors"].SS));

Document Model

var client = new AmazonDynamoDBClient();
Table booksTable = Table.LoadTable(client, "Books");

// Store item
Document book = new Document();
book["Title"] = "Cryptonomicon";
book["Id"] = 42;
book["Authors"] = new List<string> { "Neal Stephenson" };
book["Price"] = 12.95;
booksTable.PutItem(book);

// Get item
book = booksTable.GetItem(42);
Console.WriteLine("Id = {0}", book["Id"]);
Console.WriteLine("Title = {0}", book["Title"]);
Console.WriteLine("Authors = {0}",
    string.Join(", ", book["Authors"].AsListOfString()));

Object Persistence Model

This example consists of two parts: first, we must define our Book type; second, we use it with DynamoDBContext.

[DynamoDBTable("Books")]
class Book
{
    [DynamoDBHashKey]
    public int Id { get; set; }
    public string Title { get; set; }
    public List<string> Authors { get; set; }
    public double Price { get; set; }
}
var client = new AmazonDynamoDBClient();
DynamoDBContext context = new DynamoDBContext(client);

// Store item
Book book = new Book
{
    Title = "Cryptonomicon",
    Id = 42,
    Authors = new List<string> { "Neal Stephenson" },
    Price = 12.95
};
context.Save(book);

// Get item
book = context.Load<Book>(42);
Console.WriteLine("Id = {0}", book.Id);
Console.WriteLine("Title = {0}", book.Title);
Console.WriteLine("Authors = {0}", string.Join(", ", book.Authors));

Summary

As you can see, the three models differ considerably. The low-level approach is quite verbose, but it does expose service capabilities that are not present in the other models. The tradeoffs, specific features unique to each model, and how to use all models together will be the focus of this series on the .NET SDK and DynamoDB.

Archiving and Backing-up Data with the AWS SDK for Java

by Jason Fulghum | in Java

Do you or your company have important data that you need to archive? Have you explored Amazon Glacier yet? Amazon Glacier is an extremely low-cost storage service that provides secure and durable storage for data archiving and backup. Just like with other AWS offerings, you pay only for what you use. You don’t have to pay large upfront infrastructure costs or predict capacity requirements like you do with on-premise solutions. Simply use what you need, when you need it, and pay only for what you use.

There are two easy ways to leverage Amazon Glacier for data archives and backups using the AWS SDK for Java. The first option is to interact with the Amazon Glacier service directly. The AWS SDK for Java includes a high-level API called ArchiveTransferManager for easily working with transfers into and out of Amazon Glacier.

ArchiveTransferManager atm = new ArchiveTransferManager(myCredentials);
UploadResult uploadResult = atm.upload("myVaultName", "old logs",
                                       new File("/logs/oldLogs.zip"));

// later, when you need to retrieve your data
atm.download("myVaultName", uploadResult.getArchiveId(), 
             new File("/download/logs.zip"));

The second easy way of getting your data into Amazon Glacier using the SDK is to get your data into Amazon S3 first, and use a bucket lifecycle to automatically archive your objects into Amazon Glacier after a certain period.

It’s easy to configure an Amazon S3 bucket’s lifecycle using the AWS Management Console or the AWS SDKs. Here’s how to create a bucket lifecycle that will copy objects under the "logs/" key prefix to Amazon Glacier after 365 days, and will remove them completely from your Amazon S3 bucket a few days later at the 370 day mark.

AmazonS3 s3 = new AmazonS3Client(myCredentials);
Transition transition = new Transition()
    .withDays(365).withStorageClass(StorageClass.Glacier);
BucketLifecycleConfiguration config = new BucketLifecycleConfiguration()
    .withRules(new Rule()
        .withId("log-archival-rule")
        .withKeyPrefix("logs/")
        .withExpirationInDays(370)
        .withStatus(ENABLED)
        .withTransition(transition));

s3.setBucketLifecycleConfiguration(myBucketName, config);

Are you using Amazon Glacier yet? Let us know how you're using it and how it's working for you!

Wire Logging in the AWS SDK for PHP

by Jeremy Lindblom | in PHP

One of the features of the AWS SDK for PHP that I often recommend to customers is the LogPlugin, which can be used to do wire logging. It is one of the many plugins included with Guzzle, the underlying HTTP library used by the SDK. Guzzle's LogPlugin includes a default configuration that outputs the content of the requests and responses sent over the wire to AWS. You can use it to help debug requests or just learn more about how the AWS APIs work.

Adding the LogPlugin to any client in the SDK is simple. The following shows how to set it up.

$logPlugin = Guzzle\Plugin\Log\LogPlugin::getDebugPlugin();
$client->addSubscriber($logPlugin);

The output generated by LogPlugin for a single request looks similar to the following text (this request was for executing an Amazon S3 ListBuckets operation).

# Request:
GET / HTTP/1.1
Host: s3.amazonaws.com
User-Agent: aws-sdk-php2/2.4.6 Guzzle/3.7.3 curl/7.25.0 PHP/5.3.27
Date: Fri, 27 Sep 2013 15:53:10 +0000
Authorization: AWS AKIAEXAMPLEEXAMPLE:eEXAMPLEEsREXAMPLEWEFo=

# Response:
HTTP/1.1 200 OK
x-amz-id-2: EXAMPLE4j/v8onDxyeuFaQFsNvN66EXAMPLE30KQLfq0T6sVcLxj
x-amz-request-id: 4F3EXAMPLEE14
Date: Fri, 27 Sep 2013 15:53:09 GMT
Content-Type: application/xml
Transfer-Encoding: chunked
Server: AmazonS3

<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">[...]</ListAllMyBucketsResult>

This is the output generated using the default configuration. You can configure the LogPlugin to customize the behavior, format, and location of what is logged. It’s also possible to integrate with third-party logging libraries like Monolog. For more information, see the section about the wire logger in the AWS SDK for PHP User Guide.
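As a sketch of the Monolog integration (assuming Guzzle 3's MonologLogAdapter and MessageFormatter; the channel name and log path are placeholders), you might wire it up like this:

use Guzzle\Plugin\Log\LogPlugin;
use Guzzle\Log\MonologLogAdapter;
use Guzzle\Log\MessageFormatter;
use Monolog\Logger;
use Monolog\Handler\StreamHandler;

// Send the wire log to a file via Monolog instead of echoing to STDOUT.
$monolog = new Logger('aws');
$monolog->pushHandler(new StreamHandler('/tmp/aws-wire.log'));

$logPlugin = new LogPlugin(
    new MonologLogAdapter($monolog),
    MessageFormatter::DEBUG_FORMAT // includes full request and response bodies
);
$client->addSubscriber($logPlugin);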

Release 2.0.0.6 of the AWS SDK V2.0 for .NET

by Norm Johanson | in .NET

Today, we updated our version 2 preview of the AWS SDK for .NET. You can download version 2.0.0.6 of the SDK here. This preview contains the following updates.

  • The SDK now requires the region to be explicitly specified through the client constructor or by using the AWSRegion setting in the application's app.config or web.config file. Prior versions of the SDK implicitly defaulted to us-east-1 if the region was not set. Here is an example of setting a region in the app.config file so that applications that do not explicitly set a region can take this update without making any code changes.

    <configuration>
      <appSettings>
        <add key="AWSRegion" value="us-east-1"/>
      </appSettings>
    </configuration>
    
  • The Amazon DynamoDB high-level APIs, the Document Model and the Object Persistence Model, were added to the Windows Store and Windows Phone 8 versions of the AWS SDK for .NET. For those of you coming to the AWS re:Invent conference next month, you can see our session where we'll discuss using these APIs with version 2 of the SDK.
  • All the service clients have been updated to match the latest changes for each service.


We’ve received some great feedback on our preview and have made changes based on that feedback. We are hoping to GA version 2 of the SDK soon, but it is not too late to send us your thoughts.