AWS Developer Blog

Automating the Deployment of AWS Config with the AWS SDK for PHP

by Jonathan Eskew | in PHP

My colleague Joseph Fontes, an AWS Solutions Architect, wrote the guest post below to discuss automation strategies for AWS Config.


There are times when you need to automate the deployment of services, either in your own account or in external accounts.  When I recently had to enable AWS Config support in remote accounts, I approached this task the way many others do: by opening the AWS SDK for PHP Reference Guide!

To complete this task, we will need three AWS Config methods: putConfigurationRecorder(), putDeliveryChannel(), and startConfigurationRecorder().  Before making the call to putDeliveryChannel(), we need to create our Amazon S3 bucket destination and identify an Amazon SNS topic.

Let’s instantiate the clients we will need for this exercise.  Client creation will look slightly different to those familiar with version 2 of the AWS SDK for PHP; in version 3, clients are created from a shared Aws\Sdk object.

$s3Client = $sdk->createS3();
$iamClient = $sdk->createIam();
$confClient = $sdk->createConfigService();
$snsClient = $sdk->createSns();
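Those factory methods hang off the SDK’s Aws\Sdk class, so before the preceding calls run, the $sdk object needs to be constructed.  A minimal sketch might look like the following (the region is only an example; it is reused later as the bucket’s location constraint):

require 'vendor/autoload.php';

use Aws\Sdk;

// Shared configuration for every client created from this Sdk instance.
$region = 'us-east-1';
$sdk = new Sdk([
    'region'  => $region,
    'version' => 'latest'
]);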

Next, let’s create an S3 bucket as the destination for our AWS Config logs.  Remember that S3 bucket names must be globally unique, so we cannot simply use a name like “logs.”  For this reason, our naming convention will use our account number:

$accountID = "XXXXXXXXXXXX";
$s3Bucket = $accountID."-config";
$role = "AWS-Config-Role-IAM";

$s3Res = $s3Client->listBuckets([]);

$s3ResA = $s3Res->toArray();

if(bucketExists($sdk,$s3Bucket,$s3ResA) == 0) {
    $s3Data = [
        'ACL' => 'private',
        'Bucket' => $s3Bucket,
        'CreateBucketConfiguration' => [
            'LocationConstraint' => $region
        ]
    ];
    $s3Res = $s3Client->createBucket($s3Data);
    $s3ResA = $s3Res->toArray();
    print_r($s3ResA);
    print "Waiting for bucket to become available...";
    $s3Client->waitUntil('BucketExists', [
        'Bucket' => $s3Bucket
    ]);
}

In the preceding example, bucketExists() is a helper function I wrote to test whether the bucket already exists.  This check is completely optional.  The full code will be made available for download.

The call to createBucket() is followed by the waitUntil() method, which delays script execution until the S3 bucket is available for use.
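For reference, a minimal sketch of such a helper, matching the bucketExists($sdk, $s3Bucket, $s3ResA) call above, might look like this (the downloadable version may differ):

function bucketExists($sdk, $bucketName, $listBucketsResult)
{
    // Return 1 if the bucket name appears in the ListBuckets response, 0 otherwise.
    // The $sdk argument is unused here; it is kept only to match the call above.
    foreach ($listBucketsResult['Buckets'] as $bucket) {
        if ($bucket['Name'] === $bucketName) {
            return 1;
        }
    }
    return 0;
}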

We now need to create the IAM role AWS Config uses to access the S3 bucket.  We need an assume role policy to reference during the call.  I have created a text file, policy.json, with the following contents:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "config.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Create a local copy of the policy.json file.

Next, we need to create an additional policy file that gives the AWS Config service permissions to perform required tasks in your account. Download the following file and place it in your running directory:

config-policy.json

We can now create the IAM role for the AWS Config service:

$config_topic = "ConfigTopicSNS";
$policy_info = file_get_contents('config-policy.json');
$replace_sns = ":".$accountID.":".$config_topic;
$replS3Bucket = "XXXXXXXXXXXXXXXXXXXXXXXXXXX";
$replSNS = "YYYYYYYYYYYYYYYYYYYYYYYYYYYYY";
$replS3Bucket2 = "QQQQQQQQQQQQQQQQQQQQQQQQQQQQQQ";
$policy_info = str_replace($replS3Bucket,$s3Bucket,$policy_info);
$policy_info = str_replace($replSNS,$replace_sns,$policy_info);
$policy_info = str_replace($replS3Bucket2,$s3Bucket,$policy_info);

$iamData = [
    'RoleName' => $role,
    'AssumeRolePolicyDocument' => file_get_contents('policy.json')
];

$iamRes = $iamClient->createRole($iamData);
$iamResA = $iamRes->toArray();

$roleARN = $iamResA['Role']['Arn'];
$iamData = [
    'PolicyDocument' => $policy_info,
    'PolicyName' => 'NEWConfigPolicy',
    'Description' => "config policy via sdk"
];
$iamRes = $iamClient->createPolicy($iamData);
$iamResA = $iamRes->toArray();

$confPolicyArn = $iamResA['Policy']['Arn'];

$iamData = [
    'PolicyArn' => $confPolicyArn,
    'RoleName' => $role
];
$iamRes = $iamClient->attachRolePolicy($iamData);
$iamResA = $iamRes->toArray();

This portion imports the trust policy defined in the local file policy.json, along with the permissions policy in the local file config-policy.json.  The permissions policy is modified as it is read so that it references the S3 bucket and SNS topic identifiers used in this script.

Let’s create the SNS topic.

$snsData = ['Name' => $config_topic];
$snsRes = $snsClient->createTopic($snsData);
$snsResA = $snsRes->toArray();

$snsTopicARN = $snsResA['TopicArn'];

We now have to call the putConfigurationRecorder() method.  This creates a new configuration recorder that identifies the changes we want to record.  In this example, we want to record all changes.  Readers can be more prescriptive by identifying specific resource types.  You’ll find a list of supported resource types here:

http://docs.aws.amazon.com/config/latest/developerguide/resource-config-reference.html#supported-resources

$confData = [
    'ConfigurationRecorder' => [
        'name' => 'default',
        'recordingGroup' => [
            'allSupported' => true,
        ],
        'roleARN' => $roleARN,
    ],
];

$confClient->putConfigurationRecorder($confData);

Now that we know what we are going to record, we have to identify where we will send the recordings.  The following shows the putDeliveryChannel() method, which needs the S3 bucket (created earlier) to store the recordings and the SNS topic, which will send notifications about configuration changes.

$confData = [
    'DeliveryChannel' => [
        'name' => 'default',
        's3BucketName' => $s3Bucket,
        'snsTopicARN' => $snsTopicARN,
    ],
];

$confClient->putDeliveryChannel($confData);

Finally, now that we have our recording configuration and methods for delivery defined, we have to start recording the changes:

$confData = ['ConfigurationRecorderName' => 'default' ];

$confClient->startConfigurationRecorder($confData);

Changes to infrastructure within this region of our account are now being recorded, and notifications are sent for each change.  We can use additional SNS subscriptions to process the list of changed resources, review changes for root cause analysis in the event of service issues, feed centralized logging and event correlation to look for system anomalies, and so on.

You can review the processing of Amazon SNS notifications here:

http://blogs.aws.amazon.com/php/post/Tx15276Q7B4NUO0/Receiving-Amazon-SNS-Messages-in-PHP

Code Analyzers Added to AWS SDK for .NET

One of the most exciting Microsoft Visual Studio 2015 features is the ability to have static analysis run on your code as you write it. This makes it possible to flag code that is syntactically correct but will cause errors at run time.

We have added static analyzers to the latest NuGet packages for each of the version 3 service packages in the AWS SDK for .NET. The analyzers check the values set on SDK classes to make sure they are valid. For example, for a property that takes a string, an analyzer will verify that the string meets the minimum and maximum length constraints and matches any required regular-expression pattern.

Let’s say I wanted to create an Amazon DynamoDB table. Table names must be at least three characters and cannot contain characters like @ or #. So if I tried to create a table with the name of "@work", the service would fail the request. The analyzers will detect the issue, display an alert in the code editor, and put warnings in the error list before I even attempt to call the service.
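For illustration, a hypothetical snippet that would trigger the analyzer might look like this (the class and method names are placeholders, and the key schema and throughput settings needed for a real table are omitted):

using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public class TableSetup
{
    public CreateTableRequest BuildRequest()
    {
        var request = new CreateTableRequest
        {
            // "@work" contains an invalid character, so the analyzer flags this
            // assignment in the editor before the request is ever sent to DynamoDB.
            TableName = "@work"
        };
        return request;
    }
}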

Setup

The analyzers are set up in your project when you add the NuGet reference. To see the installed analyzers, go to the project properties, choose Code Analysis, and then choose the Open button.

The code analyzers can also be disabled here.

Feedback

We hope this is just the start of what we can do with the code analysis features in Visual Studio. If you can suggest other common pitfalls that could be avoided through the use of these analyzers, let us know. If you have other ideas or feedback, open an issue in our GitHub repository.

Contributing Topics and Examples to the AWS CLI

by Kyle Knapp | in AWS CLI

Whether it’s a quickstart for using a service, a tricky gotcha, or a neat application, have you ever wanted to share information with other users of the AWS CLI? The good news is, you can.

In a post I wrote a few months ago, I introduced the AWS CLI Topic Guide and described how to search for and access the contents of topics. In this post, I will discuss how you can contribute your own topics to the AWS CLI Topic Guide and your own examples for CLI commands.

How do I write a topic?

If you want to contribute to the CLI, submit a pull request to our GitHub repository.

The topics in the AWS CLI Topic Guide are maintained in this section of the CLI code base and are written in reStructuredText. Let’s walk through the steps for adding a topic to the AWS CLI Topic Guide.

Setting up a development environment

If you have not cloned the CLI repository and set up a development environment, follow these steps.

First, clone the AWS CLI git repository:

~$ git clone git@github.com:aws/aws-cli.git

Then use pip to install the CLI:

~$ cd aws-cli
~/aws-cli$ pip install -e .

You are now ready to start contributing to the CLI.

Step 1: Create a file in the topics directory

Navigate from the root directory of the CLI’s cloned git repository to the topics directory:

~/aws-cli$ cd awscli/topics

Use a text editor to create a new file:

~/aws-cli/topics$ vim my-sample-topic.rst

The reStructuredText file, my-sample-topic.rst, will show up in the output of the aws help topics command as my-sample-topic. To access the topic’s contents, a user should run the command aws help my-sample-topic.

Step 2: Add the appropriate fields to the topic

You will need to add some metadata, in the form of reStructuredText fields, to the beginning of the file. These fields play an important role in the display and organization of each topic.

The currently supported fields are:

  • title: Specifies a title for the topic. Its value will be displayed as the title whenever the content of the topic is displayed through the command aws help my-sample-topic.
  • description: Specifies a sentence description for the topic. Its value will be displayed when listing all of the available topics through the command aws help topics.
  • category: Specifies the category to which a topic belongs. A topic can belong to only one category. The topic will be listed under the specified category when viewing the available topics through the command aws help topics.

Here is an example of what the list of fields would look like:

:title: My Sample Topic
:description: This describes my sample topic
:category: General

Step 3: Add the content

After these fields have been added to the top of the file, you can now add some reStructuredText content.

:title: My Sample Topic
:description: This describes my sample topic
:category: General

Here is a summary of my topic.


My Section
==========
Here is some more content.


My Subsection
-------------
Here is even more content.

The content I added, and the way it is structured, are arbitrary. As long as the file is valid reStructuredText, you can decide what to add and how to structure it.

Step 4: Regenerate the topic index

After you have written the content, you need to regenerate the topic index so that the topic is visible in the aws help topics command and available through the aws help my-sample-topic command. This step is straightforward: all you need to do is run the script make-topic-index:

~/aws-cli/topics$ cd ~/aws-cli/scripts
~/aws-cli/scripts$ ./make-topic-index

This script will run through all of the topic files in awscli/topics and regenerate an index that allows for fast lookups of topics in the CLI. You can use the aws help topics command to check that your topic is available. (The following shows only the AVAILABLE TOPICS section of the output.)

$ aws help topics

AVAILABLE TOPICS
   General
       o config-vars: Configuration Variables for the AWS CLI

       o my-sample-topic: This describes my sample topic

       o return-codes: Describes the various return codes of the AWS CLI

   S3
       o s3-config: Advanced configuration for AWS S3 Commands

And the content of the topic is available through the aws help my-sample-topic command:

$ aws help my-sample-topic


NAME
       My Sample Topic -

       Here is a summary of my topic.

MY SECTION
       Here is some more content.

   My Subsection
       Here is even more content.

How do I write an example?

By example, I mean the EXAMPLES section that appears in the output when you run the help command for a CLI command. For example, take the help output for aws ec2 describe-instances:

$ aws ec2 describe-instances help

...continued...
EXAMPLES
       To describe an Amazon EC2 instance

       Command:

          aws ec2 describe-instances --instance-ids i-5203422c

       To describe all instances with the instance type m1.small

       Command:

          aws ec2 describe-instances --filters "Name=instance-type,Values=m1.small"

       To describe all instances with an Owner tag

       Command:

          aws ec2 describe-instances --filters "Name=tag-key,Values=Owner"

       To describe all instances with a Purpose=test tag

       Command:

          aws ec2 describe-instances --filters "Name=tag:Purpose,Values=test"
...continued...

All of these examples are written in reStructuredText and are located in the examples section of the CLI codebase.

Writing an example for a command requires even fewer steps than writing a topic. In this walkthrough, we will add an example for aws iot list-things.

If you have not already done so, make sure you have set up your development environment as described earlier in this post.

Step 1: Create a file in the examples directory

Navigate from the root directory of the CLI’s cloned git repository to the examples directory:

~/aws-cli$ cd awscli/examples

This directory contains a subdirectory for each service in the CLI. For this walkthrough, you need to navigate to the iot directory (or create it if it does not exist):

~/aws-cli/examples$ cd iot

Use a text editor to create a new file:

~/aws-cli/examples/iot$ vim list-things.rst

In order for the example to be picked up in the aws iot list-things help command, the name of the file must match the name of the command.

Step 2: Add the content

Now just add reStructuredText content to this newly created file:

The following command lists all AWS IoT things::

    $ aws iot list-things

Output::

    {
        "things": []
    }

To confirm the example was added to the help command:

$ aws iot list-things help

...continued...
EXAMPLES
       The following command lists all AWS IoT things:

          $ aws iot list-things

       Output:

          {
              "things": []
          }
...continued...

Conclusion

After you have created your content, submit a pull request to our GitHub repository so that your knowledge can be shared with other CLI users.

Follow us on Twitter @AWSCLI and let us know what you’d like to read about next! Stay tuned for our next post, and contribute some topics and examples to the CLI today.

New Support for Federated Users in the AWS Tools for Windows PowerShell

by Steve Roberts | in .NET

Starting with version 3.1.31.0, the AWS Tools for Windows PowerShell support the use of federated user accounts through Active Directory Federation Services (AD FS) for accessing AWS services, using Security Assertion Markup Language (SAML).

In earlier versions, all cmdlets that called AWS services required you to specify AWS access and secret keys through either cmdlet parameters or data stored in credential profiles that were shared with the AWS SDK for .NET and the AWS Toolkit for Visual Studio. Managing groups of users required you to create an AWS Identity and Access Management (IAM) user instance for each user account in order to generate individual access and secret keys.

Support for federated access means your users can now authenticate using your Active Directory; temporary credentials are granted to the user automatically. These temporary credentials, which are valid for one hour, are then used when invoking AWS services. Management of the temporary credentials is handled by the tools. For domain-joined user accounts, if a cmdlet is invoked but the credentials have expired, the user is reauthenticated automatically and fresh credentials are granted. (For non-domain-joined accounts, the user is prompted to enter credentials prior to reauthentication.)

The tools support two new cmdlets, Set-AWSSamlEndpoint and Set-AWSSamlRoleProfile, for setting up federated access:

# first configure the endpoint that one or more role profiles will reference by name
$endpoint = "https://adfs.example.com/adfs/ls/IdpInitiatedSignOn.aspx?loginToRp=urn:amazon:webservices"
Set-AWSSamlEndpoint -Endpoint $endpoint -StoreAs "endpointname"

# if the user can assume more than one role, this will prompt the user to select a role
Set-AWSSamlRoleProfile -EndpointName "endpointname" -StoreAs "profilename"

# if the principal and role ARN data of a role is known, it can be specified directly
$params = @{
 "PrincipalARN"="arn:aws:iam::012345678912:saml-provider/ADFS"
 "RoleARN"="arn:aws:iam::012345678912:role/ADFS-Dev"
}
Set-AWSSamlRoleProfile -EndpointName "endpointname" -StoreAs "ADFS-Dev" @params

# if the user can assume multiple roles, this creates one profile per role using the role name for the profile name
Set-AWSSamlRoleProfile -EndpointName "endpointname" -StoreAllRoles

Role profiles are what users will employ to obtain temporary credentials for a role they have been authorized to assume. When a user needs to authenticate after selecting a role profile, the data configured through Set-AWSSamlEndpoint is used to obtain the HTTPS endpoint that should be accessed. Authentication occurs when you first run a cmdlet that requires AWS credentials. The examples here assume a domain-joined user account is in use. If the user needs to supply network credentials to authenticate, the credentials can be passed with the -NetworkCredential parameter. By default, authentication is performed through Kerberos, but you can override this by passing the -AuthenticationType parameter to Set-AWSSamlEndpoint. (Currently supported values for this parameter are Kerberos, NTLM, Digest, Basic, or Negotiate.)

After role profiles are configured, you use them in the same way you have used AWS credential profiles. Simply pass the profile name to Set-AWSCredentials or to the -ProfileName parameter on individual service cmdlets. That’s all there is to it!
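For example, assuming the "ADFS-Dev" profile created above (the service cmdlets shown are just stand-ins for whatever you normally run):

# Use the role profile for the rest of the shell session
Set-AWSCredentials -ProfileName "ADFS-Dev"
Get-S3Bucket

# ...or pass the profile on an individual service cmdlet call
Get-EC2Instance -ProfileName "ADFS-Dev" -Region us-west-2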

The new support for federated access reduces the burden of creating IAM user accounts for your team members. Currently, the tools support federated users through AD FS and SAML. If you want to use federation with other systems that support SAML, be sure to let us know in the comments. For more information about this feature, and examples that show how SAML-based authentication works, see this post on the AWS Security blog.

Building a serverless developer authentication API in Java using AWS Lambda, Amazon DynamoDB, and Amazon Cognito – Part 4

by Jason Fulghum | in Java

In parts 1, 2, and 3 of this series, we used the AWS Toolkit for Eclipse to create a Java Lambda function. This function authenticated a user against an Amazon DynamoDB table (representing a directory of users) and then connected to Amazon Cognito to obtain an OpenID token. This token could then be used to obtain temporary AWS credentials for your mobile app. We tested the function locally in our development environment and used the AWS Toolkit for Eclipse to upload it to AWS Lambda. We will now integrate this Lambda function with Amazon API Gateway so it can be accessed through a RESTful interface from your applications.

First, let’s create a new API. Open the AWS Management Console. From the Application Services drop-down list, choose API Gateway. If this is your first API, choose Get Started. Otherwise, choose the blue Create API button. Type a name for this API (for example, AuthenticationAPI), and then choose the Create API button.

Now that your API has been created, you will see the following page, which shows, as expected, that there are no resources and no methods defined for this API.

We will create methods for our authentication API in a moment, but first, we will create models that define the data exchange for our API. The models are also used by API Gateway to create SDK classes for use in Android, iOS, and JavaScript clients.

To access the navigation menu, choose Resources. From the navigation menu, choose Models. By default, there are two models included with every API: Error and Empty. We will leave these models as they are and create two models that will define objects and attributes for use in our mobile app to send requests and interpret responses to and from our API.

To define the request model, choose the Create button. Set up the model as shown here:

Model Name: AuthenticationRequestModel
Content type: application/json 
Model Schema: 
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "AuthenticationRequestModel",
  "type": "object", 
  "properties": { 
    "userName": { 
      "type": "string" 
    }, 
    "passwordHash": { 
      "type": "string" 
    }
  } 
}

Choose the Create Model button. You may need to scroll down to see it.

Next, create the response model.

Model Name: AuthenticationResponseModel
Content type: application/json
Model Schema: 
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "AuthenticationResponseModel",
  "type": "object",
  "properties": 
  {
    "userId": { "type": "integer" },
    "openIdToken": { "type": "string" },
    "status": { "type": "string" }
 }
}

We need to create methods for our API. In true REST fashion, the URI path is composed of resource names. Each resource can have one or many HTTP methods. For our authentication example, we will add one resource, and then attach one method to it. Let’s start by creating a resource called “login.”

From the Models drop-down list, choose Resources.

Choose the Create Resource button. For the name of your resource, type login, and then choose the Create Resource button. This will refresh the dashboard. The new resource will appear in the navigation pane under the root of the URI path.

Choose the Create Method button. In the navigation pane, a drop-down list will appear under the login resource. From the drop-down list, choose POST, and then choose the check icon to confirm. The page will be updated, and the POST action will appear in the navigation pane. On the Setup page for your method, choose the Lambda Function option:

The page will be updated to display Lambda-specific options. From the Lambda Region drop-down list, choose the region (us-east-1) where you created the Lambda function. In the Lambda Function text box, type AuthenticateUser. This field will autocomplete. When a dialog box appears to show the API method has been given permission to invoke the Lambda function, choose OK.

The page shows the flow of the API method.

We will now set the request and response models for the login method. Choose Method Request. The panel will scroll and be updated. In the last row of the panel, choose the triangle next to Request Models. This will display a new panel. Choose Add Model. Choose the AuthenticationRequestModel you created earlier. Make sure you manually add “application/json” as the content type before you apply the changes, and then choose the check icon to confirm.

The procedure to add a model to our method response is slightly different. API Gateway lets you configure responses based on the HTTP response code of the REST invocation. We will restrict ourselves to HTTP 200 (the “OK” response code that signifies successful completion). Choose Method Response. In the response list that appears, choose the triangle next to the 200 response. The list of response models will appear on the right. The Empty model has been applied by default. Because multiple response models can be applied to a method, just as you did with the request model, add the AuthenticationResponseModel to the 200 response. Again, make sure to enter the content type (“application/json”) before you apply the change.

We now have an API resource method tied to a Lambda function that has request and response models defined. Let’s test this setup. In the navigation pane, choose the POST child entry of the /login resource. The flow screen will appear. Choose the Test icon.

Choose the triangle next to Request Body and type a sample JSON body (the same text used to test the Lambda function earlier).

{
   "userName":"Dhruv",
   "passwordHash":"8743b52063cd84097a65d1633f5c74f5"
}

Choose the Test button. You may have to scroll down to see it. The function will be executed. The output should be the same as your response model.

The results show the HTTP response headers as well as a log snippet. The flow screen is a powerful tool for fine-tuning your API and testing use cases without having to deploy code.

At this point, you would continue to test your API, and when finished, use the Deploy API button on the Resources page to deploy it to production. Each deployment creates a stage, which is used to track the versions of your API. From the Stage page (accessible from the navigation menu), you can select a stage and then use the Stage Editor page to create a client SDK for your mobile app.

This wraps up our sample implementation of developer authenticated identities. By using the AWS Toolkit for Eclipse, you can do server-side coding and testing in Eclipse, and even deploy your server code, without leaving your IDE. You can test your Lambda code both in the IDE and again after deployment by using the console. In short, we’ve covered a lot of ground in the last four blog posts. We hope we’ve given you plenty of ideas for creating your own serverless applications in AWS. Feel free to share your ideas in the comments below!

AWS re:Invent 2015: Practical DynamoDB Working Together with AWS Lambda

by Zhaoxi Zhang | in Java

Today, I’m excited to announce that the Practical DynamoDB Programming in Java demo from AWS re:Invent 2015 is available on github. This project demonstrates how Amazon DynamoDB can be used together with AWS Lambda to perform real-time and batch analysis of domain-specific data. Real-time analysis is performed by using DynamoDB Streams as an event source for a Lambda function. Batch processing uses the parallel scan operation in DynamoDB to distribute work to Lambda.

To download the project from github, use:
git clone https://github.com/awslabs/reinvent2015-practicaldynamodb.git .

Follow the instructions in the README file and play with the demo code. You’ll see how simple it is to use the AWS Toolkit for Eclipse to upload AWS Lambda functions and invoke them with the AWS SDK for Java.

Using Amazon Kinesis Firehose

Amazon Kinesis Firehose, a new service announced at this year’s re:Invent conference, is the easiest way to load streaming data into AWS. Firehose manages all of the resources and automatically scales to match the throughput of your data. It can capture and automatically load streaming data into Amazon S3 and Amazon Redshift.

An example use for Firehose is to keep track of traffic patterns in a web application. To do that, we want to stream a record for each request made to the web application, containing the current page and the page being requested. Let’s take a look.

Creating the Delivery Stream

First, we need to create our Firehose delivery stream. Although we can do this through the Firehose console, let’s take a look at how we can automate the creation of the delivery stream with PowerShell.

In our PowerShell script, we need to set up the account ID and variables for the names of the resources we will create. The account ID is used in our IAM role to restrict access to just the account with the delivery stream.

$accountId = '<account-id>'
$roleName = '<iam-role-name>'
$s3BucketName = '<s3-bucket-name>'
$firehoseDeliveryStreamName = '<delivery-stream-name>'

Because Firehose will push our streaming data to S3, our script will need to make sure the bucket exists.

$s3Bucket = Get-S3Bucket -BucketName $s3BucketName
if($s3Bucket -eq $null)
{
    New-S3Bucket -BucketName $s3BucketName
}

We also need to set up an IAM role that gives Firehose permission to push data to S3. The role will need access to the Firehose API and the S3 destination bucket. For the Firehose access, our script will use the AmazonKinesisFirehoseFullAccess managed policy. For the S3 access, our script will use an inline policy that restricts access to the destination bucket.

$role = (Get-IAMRoles | ? { $_.RoleName -eq $roleName })

if($role -eq $null)
{
    # Assume role policy allowing Firehose to assume a role
    $assumeRolePolicy = @"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "firehose.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId":"$accountId"
        }
      }
    }
  ]
}
"@

    $role = New-IAMRole -RoleName $roleName -AssumeRolePolicyDocument $assumeRolePolicy

    # Add managed policy AmazonKinesisFirehoseFullAccess to role
    Register-IAMRolePolicy -RoleName $roleName -PolicyArn 'arn:aws:iam::aws:policy/AmazonKinesisFirehoseFullAccess'

    # Add policy giving access to S3
    $s3AccessPolicy = @"
{
"Version": "2012-10-17",  
    "Statement":
    [    
        {      
            "Sid": "",      
            "Effect": "Allow",      
            "Action":
            [        
                "s3:AbortMultipartUpload",        
                "s3:GetBucketLocation",        
                "s3:GetObject",        
                "s3:ListBucket",        
                "s3:ListBucketMultipartUploads",        
                "s3:PutObject"
            ],      
            "Resource":
            [        
                "arn:aws:s3:::$s3BucketName",
                "arn:aws:s3:::$s3BucketName/*"		    
            ]    
        } 
    ]
}
"@

    Write-IAMRolePolicy -RoleName $roleName -PolicyName "S3Access" -PolicyDocument $s3AccessPolicy

    # Sleep to wait for the eventual consistency of the role creation
    Start-Sleep -Seconds 2
}

Now that the S3 bucket and IAM role are set up, we will create the delivery stream. We just need to set up an S3DestinationConfiguration object and call the New-KINFDeliveryStream cmdlet.

$s3Destination = New-Object Amazon.KinesisFirehose.Model.S3DestinationConfiguration
$s3Destination.BucketARN = "arn:aws:s3:::" + $s3Bucket.BucketName
$s3Destination.RoleARN = $role.Arn

New-KINFDeliveryStream -DeliveryStreamName $firehoseDeliveryStreamName -S3DestinationConfiguration $s3Destination 

After the New-KINFDeliveryStream cmdlet is called, it will take a few minutes to create the delivery stream. We can use the Get-KINFDeliveryStream cmdlet to check the status. As soon as it is active, we can run the following cmdlet to test our stream.

Write-KINFRecord -DeliveryStreamName $firehoseDeliveryStreamName -Record_Text "test record"

This will send one record to our stream, which will be pushed to the S3 bucket. By default, delivery streams buffer data until either 5 MB has accumulated or 5 minutes have elapsed before pushing to S3, so check the bucket after about 5 minutes.
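If you would rather script the wait than check manually, a small polling loop like this works (a sketch; Get-KINFDeliveryStream returns the delivery stream description, whose DeliveryStreamStatus property becomes ACTIVE when the stream is ready):

# Poll until the delivery stream is ready to accept records
while ((Get-KINFDeliveryStream -DeliveryStreamName $firehoseDeliveryStreamName).DeliveryStreamStatus -ne 'ACTIVE')
{
    Write-Host "Waiting for delivery stream to become active..."
    Start-Sleep -Seconds 15
}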

Writing to the Delivery Stream

In an ASP.NET application, we can write an IHttpModule so we know about every request. With an IHttpModule, we can add an event handler to the BeginRequest event and inspect where the request is coming from and going to. Here is code for our IHttpModule. The Init method adds the event handler. The RecordRequest method grabs the current URL and the request URL and sends that to the delivery stream.

using System;
using System.IO;
using System.Text;
using System.Web;

using Amazon;
using Amazon.KinesisFirehose;
using Amazon.KinesisFirehose.Model;

namespace KinesisFirehoseDemo
{
    /// <summary>
    /// This HTTP module adds an event handler for incoming requests.
    /// For each request, a record is sent to Kinesis Firehose. To keep the demo
    /// simple, a single record is sent at a time with the PutRecord operation.
    /// This can be optimized by batching records and using the
    /// PutRecordBatch operation.
    /// </summary>
    public class FirehoseSiteTracker : IHttpModule
    {
        IAmazonKinesisFirehose _client;

        // The delivery stream that was created using the setup.ps1 script.
        string _deliveryStreamName = "";

        public FirehoseSiteTracker()
        {
            this._client = new AmazonKinesisFirehoseClient(RegionEndpoint.USWest2);
        }

        public void Dispose() 
        {
            this._client.Dispose(); 
        }

        public bool IsReusable
        {
            get { return true; }
        }

        /// <summary>
        /// Setup the event handler for BeginRequest events.
        /// </summary>
        /// <param name="application">The current HTTP application.</param>
        public void Init(HttpApplication application)
        {
            application.BeginRequest +=
                (new EventHandler(this.RecordRequest));
        }

        /// <summary>
        /// Write to Firehose a record with the starting page and the page being requested.
        /// </summary>
        /// <param name="source">The HttpApplication that raised the event.</param>
        /// <param name="e">The event data.</param>
        private void RecordRequest(Object source, EventArgs e)
        {
            // Create HttpApplication and HttpContext objects to access
            // request and response properties.
            HttpApplication application = (HttpApplication)source;
            HttpContext context = application.Context;

            string startingRequest = string.Empty;
            if (context.Request.UrlReferrer != null)
                startingRequest = context.Request.UrlReferrer.PathAndQuery;

            var record = new MemoryStream(UTF8Encoding.UTF8.GetBytes(string.Format("{0}\t{1}\n",
                startingRequest, context.Request.Path)));

            var request = new PutRecordRequest
            {
                DeliveryStreamName = this._deliveryStreamName,
                Record = new Record
                {
                    Data = record
                }
            };
            this._client.PutRecordAsync(request);
        }
    }
}

To enable the module, register it in the application’s web.config file:

<system.webServer>
  <modules>
    <add name="siterecorder" type="KinesisFirehoseDemo.FirehoseSiteTracker"/>
  </modules>
</system.webServer>

Now we can navigate through our ASP.NET application and watch data flow into our S3 bucket.

What’s Next

Now that our data is flowing into S3, we have many options for what to do with that data. Firehose has built-in support for pushing our S3 data straight to Amazon Redshift, giving us lots of power for running queries and doing analytics. We could also set up event notifications to have Lambda functions or SQS pollers read the data getting pushed to Amazon S3 in real time.

How to Protect the Integrity of Your Encrypted Data by Using AWS Key Management Service and EncryptionContext

by Jason Fulghum | in Java

There’s a great post on the AWS Security Blog today. Greg Rubin explains How to Protect the Integrity of Your Encrypted Data by Using AWS Key Management Service and EncryptionContext.

Greg is a security expert and a developer on AWS Key Management Service. He’s helped us out with encryption and security changes in the AWS SDK for Java many times, and he also wrote the AWS DynamoDB Encryption Client project on GitHub.

Go check out Greg’s post on the AWS Security Blog to learn more about keeping your data secure by properly using EncryptionContext in the KMS API.

Building a serverless developer authentication API in Java using AWS Lambda, Amazon DynamoDB, and Amazon Cognito – Part 3

In parts 1 and 2 of this blog post, we saw how easy it is to get started on Java development for AWS Lambda, and use a microservices architecture to quickly iterate on an AuthenticateUser call that integrates with Amazon Cognito. We set up the AWS Toolkit for Eclipse, used the wizard to create a Java Lambda function, implemented logic for checking a user name/password combination against an Amazon DynamoDB table, and then used the Amazon Cognito Identity Broker to get an OpenID token.

In part 3 of this blog post, we will test our function locally as a JUnit test. Upon successful testing, we will then use the AWS Toolkit for Eclipse to configure and upload the function to Lambda, all from within the development environment. Finally, we will test the function from within the development environment on Lambda.

Expand the tst folder in Package Explorer:

You will see that the AWS Toolkit for Eclipse has already created some stubs for you to write your own unit test. Double-click AuthenticateUserTest.java. The test must be implemented in the testAuthenticateUser function, which creates a dummy Lambda context and a custom event that serve as the test data for your Java Lambda function. Open the TestContext.java file to see the stub created to represent a Lambda context. The Context object in Java lets you interact with the AWS Lambda execution environment and access useful information about it; for example, you can use the context parameter to determine the CloudWatch log stream associated with the function. For a full list of available context properties in the programming model for Java, see the documentation.

As we mentioned in part 1 of our blog post, our custom object is passed as a LinkedHashMap into our Java Lambda function. Create a test input in the createInput function for a valid input (meaning there is a row in your DynamoDB table User that matches your input).

@BeforeClass
    public static void createInput() throws IOException {
        // TODO: set up your sample input object here.
        input = new LinkedHashMap();
        input.put("userName", "Dhruv");
        input.put("passwordHash","8743b52063cd84097a65d1633f5c74f5");
    } 

Fill in any appropriate values for building the context object and then implement the testAuthenticateUser function as follows:

@Test
    public void testAuthenticateUser() {
        AuthenticateUser handler = new AuthenticateUser();
        Context ctx = createContext();

        AuthenticateUserResponse output = (AuthenticateUserResponse) handler.handleRequest(input, ctx);

        // TODO: validate output here if needed.
        if (output.getStatus().equalsIgnoreCase("true")) {
            System.out.println("AuthenticateUser JUnit Test Passed");
        } else {
            Assert.fail("AuthenticateUser JUnit Test Failed");
        }
    }

Save the file. To run the unit test, right-click AuthenticateUserTest, choose Run As, and then choose JUnit Test. If everything goes well, your test should pass. If not, run the test in Debug mode to see if there are any exceptions. The most common causes for test failures are not setting the right region for your DynamoDB table or not setting the AWS credentials in the AWS Toolkit for Eclipse configuration.

Now that we have successfully tested this function, let’s upload it to Lambda. The AWS Toolkit for Eclipse makes this process very simple. To start the wizard, right-click your Eclipse project, choose Amazon Web Services, and then choose Upload function to AWS Lambda.

You will now see a page that will allow you to configure your Lambda function. Give your Lambda function the name AuthenticateUser and make sure you choose the region in which you created your DynamoDB table and Amazon Cognito identity pool. Choose Next.

On this page, you will configure your Lambda function. Provide a description for your service. The function handler should already have been selected for you.

You will need to create an IAM role for Lambda execution. Choose Create and type AuthenticateUser-Lambda-Execution-Role. We will need to update this role later so your Lambda function has appropriate access to your DynamoDB table and Amazon Cognito identity pool. You will also need to create or choose an S3 bucket where you will upload your function code. In Advanced Settings, for Memory (MB), type 256. For Timeout(s), type 30. Choose Finish.

Your Lambda function should be created. When the upload is successful, go to the AWS Management Console and navigate to the Lambda dashboard to see your newly created function. Before we execute the function, we need to provide the permissions to the Lambda execution role. Navigate to IAM, choose Roles, and then choose the AuthenticateUser-Lambda-Execution-Role. Make sure the following managed policies are attached.

We need to provide two inline policies for the DynamoDB table and Amazon Cognito. Click Create Role Policy, and then add the following policy document. This will give Lambda access to your identity pool.
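A policy along these lines will do (a sketch; substitute your own region, account ID, and identity pool ID; the function only needs to call GetOpenIdTokenForDeveloperIdentity):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "cognito-identity:GetOpenIdTokenForDeveloperIdentity",
      "Resource": "arn:aws:cognito-identity:us-east-1:<account-id>:identitypool/<identity-pool-id>"
    }
  ]
}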

The policy document that gives access to the DynamoDB table should look like the following:
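For example (again a sketch; adjust the region, account ID, and table name to match your setup; the function only reads from the User table):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:Query"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:<account-id>:table/User"
    }
  ]
}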

Finally, go back to Eclipse, right-click your project name, choose Amazon Web Services, and then choose Run Function on AWS Lambda. Provide your custom JSON input in the format we provided in part 1 of the blog and click Invoke. You should see the result of your Lambda function execution in the Eclipse console:

AWS re:Invent 2015 and more

by James Saryerwinnie | in AWS CLI

re:Invent has come and gone, and once again, we all had a blast. It’s always
great to meet with customers and discuss how they’re using the AWS SDKs and
CLIs. Keep the feedback coming.

At this year’s AWS CLI session at re:Invent, I had the opportunity to
address one of the topics that we previously hadn’t talked about much,
which is using the AWS CLI as a toolkit to create
shell scripts. As a CLI user, you may initially start off running a few
commands interactively in your terminal, but eventually you may want to combine
several AWS CLI commands to create some higher level of abstraction that’s
meaningful to you. This could include:

  • Combining a sequence of commands together: perform operation A, then B, then C.
  • Taking the output of one command and using it as input to another command.
  • Modifying the output to work well with other text processing tools such as
    grep, sed, and awk.

This talk discussed tips, techniques, and tools you can use to help you write shell scripts with the AWS CLI.

The video for the session is now available online. The slides are also
available.

During the talk, I also mentioned that all the code I was showing would be
available on github. You can check out those code samples, along with some I
did not have time to go over, at the awscli-reinvent2015-samples repo.

In this post, and the next couple of posts, I’ll be digging into this
topic of shell scripting in more depth and covering things that I did
not have time to discuss in my re:Invent session.

Resource exists

I’d like to go over one of the examples that I didn’t have time to demo during
the re:Invent talk. During the breakout session, I showed how you can launch an
Amazon EC2 instance, wait until it’s running, and then automatically SSH to
the instance.
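The idea, stripped to its essentials, looks something like the following (a
rough sketch; the AMI ID is a placeholder, and the real script in the samples
repo adds the security group, instance profile, and error handling):

# Launch the instance and capture its ID
instance_id=$(aws ec2 run-instances \
  --image-id ami-XXXXXXXX \
  --instance-type t2.micro \
  --key-name id_rsa \
  --query 'Instances[0].InstanceId' \
  --output text)

# Wait until the instance is running, then look up its public IP
aws ec2 wait instance-running --instance-ids "$instance_id"
public_ip=$(aws ec2 describe-instances \
  --instance-ids "$instance_id" \
  --query 'Reservations[0].Instances[0].PublicIpAddress' \
  --output text)

ssh "ec2-user@$public_ip"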

To do this, I made some assumptions that simplified the script,
particularly:

  • You have imported your id_rsa SSH key
  • You have a security group tagged with “dev-ec2-instance=linux”
  • You have an instance profile called “dev-ec2-instance”

What if these assumptions aren’t true?

Let’s work on a script called setup-dev-ec2-instance that checks for these
resources, and creates them for you, if necessary.

We’re going to use the resource exists pattern I talked about in this
part of the session. In pseudocode, here’s what this script will do:

if security group tagged with dev-ec2-instance=linux does not exist:
  show a list of security groups and ask which one to tag

if keypair does not exist in ec2
  offer to import the local ~/.ssh/id_rsa key for the user

if instance profile named "dev-ec2-instance" does not exist
  offer to create instance profile for user

Here’s what this script looks like the first time it’s run:

$ ./setup-dev-ec2-instance
Checking for required resources...

Security group not found.

default  sg-87190abc  None
ssh      sg-616a6abc  None
sshNAT   sg-91befabc  vpc-9ecd7abc
default  sg-f987fabc  vpc-17a20abc
default  sg-16befabc  vpc-9ecd7abc

Enter the security group ID to tag: sg-616a6abc
Tagging security group

~/.ssh/id_rsa key pair does not appear to be imported.
Would you like to import ~/.ssh/id_rsa.pub? [y/N]: y
{
    "KeyName": "id_rsa",
    "KeyFingerprint": "bb:3e:a9:82:50:32:6f:45:8a:f8:d4:24:0e:aa:aa:aa"
}

Missing IAM instance profile 'dev-ec2-instance'
Would you like to create an IAM instance profile? [y/N]: y
{
    "Role": {
        "AssumeRolePolicyDocument": {
             ...
        },
        "RoleId": "...",
        "CreateDate": "2015-10-23T21:24:57.160Z",
        "RoleName": "dev-ec2-instance",
        "Path": "/",
        "Arn": "arn:aws:iam::12345:role/dev-ec2-instance"
    }
}
{
    "InstanceProfile": {
        "InstanceProfileId": "...",
        "Roles": [],
        "CreateDate": "2015-10-23T21:25:02.077Z",
        "InstanceProfileName": "dev-ec2-instance",
        "Path": "/",
        "Arn": "arn:aws:iam::12345:instance-profile/dev-ec2-instance"
    }
}

Here’s what this script looks like if all of your resources are configured:

$ ./setup-dev-ec2-instance
Checking for required resources...

Security groups exists.
Key pair exists.
Instance profile exists.

The full script is available on github, but I’d like to highlight the use of
the resource_exists function, which is used for each of the three
resources:

echo "Checking for required resources..."
echo ""
# 1. Check if a security group is found.
if resource_exists "aws ec2 describe-security-groups \
  --filter Name=tag:dev-ec2-instance,Values=linux"; then
  echo "Security groups exists."
else
  echo "Security group not found."
  tag_security_group
fi

# 2. Make sure the keypair is imported.
if [ ! -f ~/.ssh/id_rsa ]; then
  echo "Missing ~/.ssh/id_rsa key pair."
elif has_new_enough_openssl; then
  fingerprint=$(compute_key_fingerprint ~/.ssh/id_rsa)
  if resource_exists "aws ec2 describe-key-pairs \
    --filter Name=fingerprint,Values=$fingerprint"; then
    echo "Key pair exists."
  else
    echo "~/.ssh/id_rsa key pair does not appear to be imported."
    import_key_pair
  fi
else
  echo "Can't check if SSH key has been imported."
  echo "You need at least openssl 1.0.0 that has a \"pkey\" command."
  echo "Please upgrade your version of openssl."
fi

# 3. Check that they have an IAM role called dev-ec2-instance.
#    We're using a local --query expression here.
if resource_exists "aws iam list-instance-profiles" \
  "InstanceProfiles[?InstanceProfileName=='dev-ec2-instance']"; then
  echo "Instance profile exists."
else
  echo "Missing IAM instance profile 'dev-ec2-instance'"
  create_instance_profile
fi
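The resource_exists helper itself ships with the sample repo; one possible
implementation looks roughly like this (a sketch; the version on github
handles a few more edge cases):

resource_exists() {
  # $1 - an AWS CLI list/describe command, passed as a single string
  # $2 - optional JMESPath expression selecting the matching resources;
  #      defaults to the first top-level list in the response
  local command="$1"
  local query="length(*[0])"
  if [[ -n "${2:-}" ]]; then
    query="length($2)"
  fi
  local count
  count=$($command --query "$query" --output text) || return 1
  [[ "$count" -gt 0 ]]
}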

By using the resource_exists function, the bash script is fairly close to the
original pseudocode. In the third step, we’re using a local --query expression
to check if we have an instance profile with the name dev-ec2-instance.

Creating an instance profile

In the preceding script, each of the three conditionals has a clause where we
offer to create the resource that does not exist. They all follow a similar
pattern, so we’ll take a look at just one for creating an instance profile.

The create_instance_profile has this high level logic:

  • Ask the user if they’d like us to create an instance profile.
  • If they don’t say yes, then exit.
  • First create an IAM role with a trust policy that allows the
    ec2.amazonaws.com service to assume the role
  • Next find the ARN for the “AdministratorAccess” managed policy.
  • Attach the role policy to the role we’ve created
  • Create an instance profile with the same name as the role
  • Finally, add the role to the instance profile we’ve created.

Here’s the code for that function:

create_instance_profile() {
  echo -n "Would you like to create an IAM instance profile? [y/N]: "
  read confirmation
  if [[ "$confirmation" != "y" ]]; then
    return
  fi
  aws iam create-role --role-name dev-ec2-instance \
    --assume-role-policy-document "$TRUST_POLICY" || errexit "Could not create Role"

  # Use a managed policy
  admin_policy_arn=$(aws iam list-policies --scope AWS \
      --query "Policies[?PolicyName=='AdministratorAccess'].Arn | [0]" \
      --output text | head -n 1)
  aws iam attach-role-policy \
    --role-name dev-ec2-instance \
    --policy-arn "$admin_policy_arn" || errexit "Could not attach role policy"

  # Then we need to create an instance profile from the role.
  aws iam create-instance-profile \
    --instance-profile-name dev-ec2-instance ||
    errexit "Could not create instance profile."
  # And add it to the role
  aws iam add-role-to-instance-profile \
    --role-name dev-ec2-instance \
    --instance-profile-name dev-ec2-instance ||
    errexit "Could not add role to instance profile."
}

In this function, we’re using --output text combined with --query
to retrieve the admin_policy_arn as a string that we can pass to
the subsequent attach-role-policy command. Using --output text
along with --query is one of the most powerful patterns you can use
when shell scripting with the AWS CLI.

See you at re:Invent 2016

I hope everyone enjoyed re:Invent 2015. We look forward to seeing you again in
2016!