AWS Field Notes

Launch Amazon Elasticsearch Service with Amazon Cognito User Pools

To get started with Amazon Elasticsearch Service (Amazon ES), you must have a concept for authentication and authorization for your search cluster. In addition to an IP-based access policy or a proxy server to protect your search cluster, you can leverage AWS Identity and Access Management (IAM) with Amazon Cognito User Pools to authenticate and authorize users. You can configure this using the AWS Management Console or the AWS Command Line Interface (AWS CLI). More information on configuration can be found in the Amazon Cognito Authentication for Kibana documentation.

An Infrastructure-as-Code approach allows you to complete deployment and configuration in a safe, repeatable manner, so you can build and rebuild your search cluster automatically.

This post describes the structure of an AWS Cloud Development Kit (AWS CDK) template for the fully automated provisioning of the Amazon ES and Amazon Cognito resources, as well as your first search index and Kibana dashboard. The template structure is also applicable if you prefer to use AWS CloudFormation. This post also includes instructions for the transformation from AWS CDK to AWS CloudFormation.

Get Started

Deploy the sample template from the AWS Serverless Application Repository:


The source code is available on GitHub.

Template Requirements

Let’s dive deeper into the resources you’ll need to include in your template for a fully automated deployment. The resources are listed in the order of their deployment:

  • An Amazon Cognito user pool, which is a collection of users. You can compare this to an LDAP directory. In this setup, users sign in with their email address and password to access Kibana.
  • An Amazon Cognito user pool domain, which is used to host the sign-in webpages for the authentication experience.
  • An Amazon Cognito identity pool, which stores unique identities for your users and federates them with identity providers like the Amazon Cognito user pool (#1). For each identity, you can obtain temporary, limited-privilege AWS credentials that grant access to your search cluster. You don’t need to configure any identity providers in this case; this configuration is done automatically during the Amazon Elasticsearch Service domain (#6) deployment.
  • An authenticated user IAM role for your identity pool. When a user signs in, Amazon Cognito generates temporary AWS credentials for the user. These temporary credentials are associated with this role.
  • An Amazon Elasticsearch Service IAM role that grants permissions to configure the identity provider and user pool for authentication of users for Kibana and your search cluster.
  • An Amazon Elasticsearch Service domain, which is your search cluster. During deployment, Amazon Elasticsearch Service assumes its IAM role (#5) to configure the authentication with the Amazon Cognito user pool (#1) and the Amazon Cognito identity pool (#3). The search cluster has a resource-based IAM policy that grants GET, PUT, POST, and DELETE permissions to the authenticated user IAM role (#4). You should size the search cluster according to your specific workload. If you just want to experiment with Amazon Elasticsearch Service, start with the free tier configuration of a single t2.small.elasticsearch instance and 10GB of EBS storage (refer to the Amazon Elasticsearch Service pricing for details). At the time of writing, CDK does not support configuration of Amazon Cognito for the search domain, but AWS CloudFormation does. In this case, use raw overrides as an escape hatch: specify the properties for your CDK resource in dot notation, just as you would include them in an AWS CloudFormation template. Here’s an example of the Amazon Cognito user pool, identity pool, and IAM role configuration for the search domain:
    esDomain.addPropertyOverride('CognitoOptions.Enabled', true);
    esDomain.addPropertyOverride('CognitoOptions.IdentityPoolId', idPool.ref);
    esDomain.addPropertyOverride('CognitoOptions.RoleArn', esRole.roleArn);
    esDomain.addPropertyOverride('CognitoOptions.UserPoolId', userPool.ref);
    
  • An identity pool role attachment links the authenticated user IAM role (#4) to the Amazon Cognito identity pool (#3).
  • An AWS Lambda function that is used by an AWS CloudFormation custom resource (#9) to send requests to the Amazon Elasticsearch Service domain (#6) endpoint.
  • An AWS CloudFormation custom resource which defines the requests that should be sent by the AWS Lambda function (#8) to create a search index template and a Kibana dashboard.

Permissions

There are two ways to grant permissions to your search cluster via IAM policies: a resource-based policy associated with your search domain, or identity-based policies associated with your IAM users and roles.

The sample template uses identity-based policies to grant write permissions to the AWS Lambda function that sends requests to the search cluster. The function needs to sign every request in order for the permissions to take effect.
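The signing follows AWS Signature Version 4. As a rough sketch of what happens under the hood (in practice you would use a library such as aws4 or the AWS SDK rather than implementing this yourself), the signing key is derived by chaining HMAC-SHA256 over the secret key, date, region, and service name; the function names below are illustrative, not from the sample code:

```typescript
import * as crypto from "crypto";

// HMAC-SHA256 helper used at every step of the key derivation.
function hmac(key: crypto.BinaryLike, data: string): Buffer {
  return crypto.createHmac("sha256", key).update(data).digest();
}

// Derives the SigV4 signing key for a given date (YYYYMMDD), region,
// and service (for Amazon ES the service name is "es").
function signingKey(secretKey: string, date: string, region: string, service: string): Buffer {
  const kDate = hmac("AWS4" + secretKey, date);
  const kRegion = hmac(kDate, region);
  const kService = hmac(kRegion, service);
  return hmac(kService, "aws4_request");
}

// The final signature is the hex-encoded HMAC of the string to sign.
function sign(key: Buffer, stringToSign: string): string {
  return hmac(key, stringToSign).toString("hex");
}
```

The derived signature is then sent in the Authorization header of each request to the search cluster.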

The resource-based policy of the domain grants access to the authenticated user role:

const esDomain = new CfnDomain(this, "searchDomain", {
    [...]
    accessPolicies: {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
              "AWS": authRole.roleArn
            },
            "Action": [
              "es:ESHttpGet",
              "es:ESHttpPut",
              "es:ESHttpPost",
              "es:ESHttpDelete"
            ],
            "Resource": "arn:aws:es:" + this.region + ":"
            + this.account + ":domain/" + applicationPrefix + "/*"
          }
        ]
    }
});

Lock down access to specific IP ranges by adding a condition based on the aws:SourceIp context key. If your search cluster runs inside an Amazon Virtual Private Cloud (VPC), use a security group instead.
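As an illustration, such a condition could be added to the policy statement shown above. The role ARN, domain ARN, and CIDR range below are placeholders, not values from the sample template:

```typescript
// Sketch: a resource-based policy statement restricted to an IP range.
// <AUTH_ROLE_ARN>, <DOMAIN_ARN>, and the CIDR block are placeholders.
const statement = {
  Effect: "Allow",
  Principal: { AWS: "<AUTH_ROLE_ARN>" },
  Action: ["es:ESHttpGet", "es:ESHttpPut", "es:ESHttpPost", "es:ESHttpDelete"],
  Resource: "<DOMAIN_ARN>/*",
  Condition: {
    // Only requests originating from this range are allowed.
    IpAddress: { "aws:SourceIp": ["203.0.113.0/24"] }
  }
};
```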

Deploy your Search Index and the Dashboard

After your search cluster deployment is completed, you can configure resources in the Kibana user interface (the dashboard editor). You can also send HTTP requests to the search cluster’s endpoint from the Kibana dev tools or external systems. The sample template includes an AWS CloudFormation custom resource that sends two requests to create a search index template and import a dashboard from a previous export:

import path = require('path');
import fs = require('fs');
[...]
new CustomResource(this, 'esRequestsResource', {
  provider: new SamFunctionCustomResourceProvider(esRequestsFn),
  properties: {
    requests: [
      {
        "method": "PUT", // (1)
        "path": "_template/example-index-template",
        "body": fs.readFileSync(path.join(__dirname, "index-template.json")).toString()
      },
      {
        "method": "POST", // (2)
        "path": "api/kibana/dashboards/import",
        "body": fs.readFileSync(path.join(__dirname, "dashboard.json")).toString()
      }
    ]
  }
});

Both of the requests must be signed, and the Kibana API request needs special handling: the AWS Lambda function must prepend _plugin/kibana/ to the path to route the request to Kibana.
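A hypothetical helper inside the function could express this routing rule as follows (the function and constant names are illustrative, not from the sample code):

```typescript
// Kibana API paths (e.g. "api/kibana/dashboards/import") must be routed
// through the Kibana plugin endpoint; Elasticsearch paths are used as-is.
const KIBANA_PREFIX = "_plugin/kibana/";

function resolvePath(path: string): string {
  return path.startsWith("api/") ? KIBANA_PREFIX + path : path;
}
```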

Leveraging the JavaScript libraries fs and path allows you to store the request bodies in separate files so that the JSON body does not bloat the template and you can easily validate the JSON syntax. To create a file similar to the dashboard.json example, retrieve an export of your dashboard by id from Kibana.
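Assuming the Kibana 6.x dashboard API used by this setup, the export counterpart can be expressed as another request entry in the same shape; the dashboard id below is a placeholder:

```typescript
// Hypothetical request entry to export a dashboard by id.
// "my-dashboard-id" is a placeholder; use the id of your own dashboard.
const dashboardId = "my-dashboard-id";
const exportRequest = {
  method: "GET",
  path: `api/kibana/dashboards/export?dashboard=${dashboardId}`,
};
```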

Deploy the template from code

The sample template’s source code is available on GitHub. The template provisions an Amazon Elasticsearch Service cluster in a fully automated way. The search cluster consists of a single t2.small.elasticsearch instance with 10GB of EBS storage. It is integrated with Amazon Cognito User Pools so you only need to add your user(s). The template also configures an example Kibana dashboard and an Amazon ES index template.

The template prefixes the search domain and the Amazon Cognito Hosted UI with a string that you can define with the applicationPrefix template parameter.

You can either deploy the template with AWS CloudFormation or CDK. Both options require you to install and configure the AWS CLI and the CDK.

The CDK template is written in TypeScript, so the sources must be compiled to JavaScript initially and after each modification. Open a new terminal and keep it open in the background if you would like to change the source files. Change to the directory that contains cdk.json and execute:

npm install
npm run watch

Read the CDK developer guide for more information.

Option 1: Deployment using AWS CloudFormation

Synthesize the CDK template to an AWS CloudFormation template:

cdk synth --version-reporting false > synth.yaml

Package the template for deployment. AWS CloudFormation transforms the AWS Serverless Application Model (AWS SAM) syntax to AWS CloudFormation code and uploads the package to a bucket of your choice. The bucket must be in the region in which you want to deploy the sample application:

aws cloudformation package \
    --template-file synth.yaml \
    --output-template-file packaged.yaml \
    --s3-bucket <BUCKET> \
    --region <REGION>

Deploy the packaged application to your account:

aws cloudformation deploy \
    --template-file packaged.yaml \
    --stack-name <STACKNAME> \
    --parameter-overrides applicationPrefix=<PREFIX> \
    --capabilities CAPABILITY_IAM \
    --region <REGION>

Option 2: Deployment using CDK

CDK needs the AWS Lambda function code already packaged in S3. Run the synthesize and package steps to package the code:

cdk synth > synth.yaml
aws cloudformation package \
    --template-file synth.yaml \
    --output-template-file packaged.yaml \
    --s3-bucket <BUCKET> \
    --region <REGION>

Create or update the application with cdk deploy. This CDK template can retrieve the S3 URL to your AWS Lambda function code package from the previously packaged template – provided in the SAM_PACKAGED_TEMPLATE environment variable:

SAM_PACKAGED_TEMPLATE=$(cat packaged.yaml) \
    AWS_DEFAULT_REGION=<REGION> \
    cdk deploy -c applicationPrefix=<PREFIX>

Access the Example Dashboard

As soon as the application is completely deployed, the outputs of the AWS CloudFormation stack provide the links for the next steps. You will find two URLs, called createUserUrl and kibanaUrl, in the AWS CloudFormation console.

  • Use the createUserUrl link from the outputs, or navigate to the Amazon Cognito user pool in the console to create a new user in the pool. Enter an email address as both username and email. Enter a temporary password of your choice with at least 8 characters. Leave the phone number empty and uncheck the checkbox that marks the phone number as verified. If you like, check the checkboxes to send an invitation to the new user or to require the user to verify the email address. Then choose Create user.
  • Access the Kibana dashboard with the kibanaUrl link from the outputs, or navigate to the Kibana link displayed in the Amazon Elasticsearch Service console. In Kibana, choose the Dashboard icon in the left menu bar and open the Example Dashboard. The dashboard contains instructions to add new documents to the search index and to visualize the documents with the graph in the dashboard.

Cleaning Up

To avoid incurring charges, delete the AWS CloudFormation stack when you are finished experimenting:

  • Sign in to the AWS CloudFormation console and choose your stack.
  • Choose Delete to delete all resources, including the search cluster and the Amazon Cognito user pool.

Conclusion

In this post, we identified the resources necessary to bootstrap an Amazon Elasticsearch Service search cluster including a search index template and a Kibana dashboard protected by Amazon Cognito User Pools.

To get started, launch the sample template from the AWS Serverless Application Repository. As a next step you can make the template your own by customizing it according to your needs and deploy it with CDK. You can submit enhancements to the sample template in the source repository.

Steffen Grunwald

Steffen Grunwald is a Principal Solutions Architect at Amazon Web Services. He supports German enterprise customers on their journey to the cloud. He loves to dive deep into application architectures and development processes to drive performance, scale operational efficiency, and increase the speed of innovation.