Monitoring resources in an AWS Control Tower environment using Splunk from AWS Marketplace
As a customer, you deploy workloads on the AWS Cloud, and you often build multi-account environments based on AWS Multi Account Strategy best practices. Many of my customers use AWS Control Tower as an orchestration framework to provision new accounts. AWS Control Tower provides a way to set up and govern a new, secure, multi-account AWS environment based on best practices. AWS established those practices through experience, working with thousands of enterprises as they move to the cloud. A multi-account environment requires you to purchase or develop solutions that provide insights into the resources and security policies built into your accounts.
In this blog post, Igor, Kam, and I show you how to integrate Splunk’s Grand Central software to automate logging configuration for log analysis and resource monitoring. Grand Central is available as a free application when you purchase Splunk in AWS Marketplace.
Splunk brings data to every question, decision, and action. With Splunk and Splunk Cloud, you can search, monitor, analyze, and visualize data coming from websites, applications, servers, networks, sensors, and mobile devices.
About AWS Control Tower
When you deploy AWS Control Tower in your AWS account, here is what happens:
- In the master account, an AWS Organizations organization is created, if one isn’t already established.
- Two organizational units (OUs) are created. One (Core OU) is for shared accounts and the other (Custom OU) is for accounts that can be provisioned.
- A cloud native directory is established with preconfigured groups and AWS Single Sign-On (AWS SSO) access.
- Baseline and guardrails are enabled across all accounts using AWS CloudFormation StackSets.
- Account Factory, an AWS Service Catalog product to provision new accounts, is provided.
Refer to the following diagram.
This solution requires the following AWS services:
- AWS Identity and Access Management (IAM)
- AWS CloudFormation
- AWS Lambda
- Amazon Simple Storage Service (Amazon S3)
- Amazon CloudWatch
Splunk’s GrandCentral interfaces with the AWS Organizations API to gather all the linked accounts within your organization. It uses AWS CloudFormation to deploy StackSets on OUs. It does this by assuming a role within the primary account where AWS Control Tower is deployed. The StackSets enable the linked accounts to send logs back to Splunk for centralized logging and analysis using AWS services.
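The account discovery that GrandCentral performs can be sketched with boto3. This is an illustrative sketch, not GrandCentral's actual implementation; the role name `GrandCentralRole` is a hypothetical placeholder for whatever cross-account role your setup grants.

```python
def role_arn(account_id: str, role_name: str) -> str:
    """ARN of the cross-account role to assume in the primary account."""
    return f"arn:aws:iam::{account_id}:role/{role_name}"

def list_linked_accounts(primary_account_id: str, role_name: str) -> list:
    """Assume the role, then page through every account in the organization."""
    import boto3  # deferred so role_arn stays dependency-free
    creds = boto3.client("sts").assume_role(
        RoleArn=role_arn(primary_account_id, role_name),
        RoleSessionName="grand-central-discovery",
    )["Credentials"]
    org = boto3.client(
        "organizations",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    accounts = []
    for page in org.get_paginator("list_accounts").paginate():
        accounts.extend(page["Accounts"])
    return accounts
```

Calling `list_linked_accounts("111122223333", "GrandCentralRole")` with valid credentials would return the same linked-account list GrandCentral displays.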
To follow this walkthrough, you must have the following:
- A new AWS account with an AWS Control Tower landing zone already set up.
- Awareness of scaling requirements for your Splunk Enterprise deployments. Consult your Splunk representative with any questions.
- Access to Splunk Enterprise licenses and a valid login on the Splunk.com portal to download Splunk applications. Customers with a Splunk Cloud stack have this capability built into the service.
- Familiarity with deploying Splunk instances in AWS Marketplace.
Step 1: Create an account for log gathering
- In the AWS Management Console, log in to the primary account as an administrator. In the top right-hand corner, select the Region where AWS Control Tower is deployed.
- Create a new Organizational Unit (OU). To do this, in the left sidebar, select Organizational Units and choose Add an OU. In the Add an OU under Root popup box, enter Operations as the OU name.
- In the left sidebar, choose Account Factory and choose Enroll Account. Then enter all your details, including unique email addresses for Account email and AWS SSO email. Enter a Display name for your new account. I named mine ct-operations. Select the newly created Operations OU.
Step 2: Procure and launch Splunk Enterprise from AWS Marketplace
For the purposes of this blog post, I chose to deploy using the AMI in AWS Marketplace. To do this:
- In the AWS Management Console, log in as an admin to the ct-operations account you created in step 1.
- Enter marketplace in the search bar and select AWS Marketplace Subscriptions.
- Choose Manage Subscriptions and then Discover Products.
- Enter Splunk Enterprise in the search box. You should see the Splunk Enterprise product detail page. Refer to the following screenshot.
- Choose Continue to subscribe. Select your desired delivery method, software version, and Region. Choose Continue to launch.
- Choose Launch from website. Choose your EC2 Instance Type, VPC Settings, and Subnet Settings. Choose Launch.
Step 3: Create an IAM user using the CloudFormation template
In the AWS Management Console, log in to the primary account, also known as the master payer account, as an administrator.
- In the AWS Management Console, search for CloudFormation and choose AWS CloudFormation.
- Create an IAM user for Splunk GrandCentral. To do this, download this template. In the console, select Create stack, choose With new resources, and then select Upload a template file. In the file dialog box that opens, select the template file you just downloaded. Enter a name for your stack and choose Next. Review and deploy the stack. This gives GrandCentral access to certain operations in the AWS Organizations and AWS CloudFormation APIs.
- Once the CloudFormation template succeeds, navigate to the deployment Outputs tab, the fourth from the left. Copy the AccessKey ID and SecretKey ID. Refer to the following screenshot.
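You can also read the stack's outputs programmatically instead of copying them from the console. The sketch below assumes the template exports the two values under the output keys `AccessKey` and `SecretKey`; substitute whatever keys your downloaded template actually uses.

```python
def get_output(outputs: list, key: str):
    """Return the OutputValue for a matching OutputKey, or None."""
    for output in outputs:
        if output["OutputKey"] == key:
            return output["OutputValue"]
    return None

def stack_credentials(stack_name: str, region: str):
    """Fetch the stack's outputs and pull out the two credential values."""
    import boto3  # deferred so get_output stays dependency-free
    cfn = boto3.client("cloudformation", region_name=region)
    outputs = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]["Outputs"]
    return get_output(outputs, "AccessKey"), get_output(outputs, "SecretKey")
```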
Step 4: Configure the HTTP Event Collector and create a token
- Log in to your Splunk Console. You can find login details under usage instructions.
- Configure the HTTP Event Collector (HEC). HEC uses token-based authentication. The token must be created and saved for later use. Because the exact steps vary based on the Splunk deployment you chose, refer to Splunk’s documentation for the specifics of your deployment. I named mine hec1, enabled indexer acknowledgement, entered my input settings, and reviewed them. The following screenshot shows my successful token creation.
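A quick way to confirm the token works is to send a test event to HEC's event endpoint, `/services/collector/event`, authenticating with a `Splunk <token>` header. The host name and the `manual` sourcetype below are placeholders for illustration.

```python
import json

# Splunk's HEC event endpoint path (fixed by Splunk):
HEC_PATH = "/services/collector/event"

def hec_request(token: str, event: str,
                host: str = "splunk.example.com", port: int = 8088):
    """Build the URL, headers, and JSON body for a test event."""
    url = f"https://{host}:{port}{HEC_PATH}"
    headers = {"Authorization": f"Splunk {token}"}
    body = json.dumps({"event": event, "sourcetype": "manual"})
    return url, headers, body

# To actually send it (requires the 'requests' package and a reachable HEC):
# import requests
# url, headers, body = hec_request("YOUR-TOKEN", "hello from control tower")
# print(requests.post(url, headers=headers, data=body, verify=False).status_code)
```

A 200 response with `{"text":"Success","code":0}` in the body indicates the token is valid.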
Step 5: Create a certificate and Classic Load Balancer
- Create a certificate using AWS Certificate Manager (ACM).
- Create a Classic Load Balancer. To do this, follow this guide to Create a Classic Load Balancer with an HTTPS listener. HEC listens on HTTPS port 8088, so the public certificate generated above needs to be used for the listener.
When you have completed these steps, return to your AWS Management Console. In the Listeners tab, you should see the listener configured for HTTPS port 8088. Refer to the following screenshot.
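The listener configuration above can also be created with boto3's Classic Load Balancer client. This is a sketch under assumptions: the load balancer name, subnets, and security groups are placeholders, and it assumes your Splunk instances serve HEC over HTTPS on 8088.

```python
def https_listener(certificate_arn: str, port: int = 8088) -> dict:
    """HTTPS listener on the HEC port, terminating TLS with the ACM cert."""
    return {
        "Protocol": "HTTPS",
        "LoadBalancerPort": port,
        "InstanceProtocol": "HTTPS",
        "InstancePort": port,
        "SSLCertificateId": certificate_arn,
    }

def create_hec_load_balancer(name, subnets, security_groups, certificate_arn):
    """Create the Classic Load Balancer fronting the HEC endpoint."""
    import boto3  # deferred so https_listener stays dependency-free
    elb = boto3.client("elb")  # "elb" is the Classic Load Balancer client
    return elb.create_load_balancer(
        LoadBalancerName=name,
        Listeners=[https_listener(certificate_arn)],
        Subnets=subnets,
        SecurityGroups=security_groups,
    )
```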
Step 6: Deploy Splunk add-ons and applications
Deploy the following applications and add-ons to your Splunk installation.
- Deploy Splunk Add-on for Amazon Web Services by following the installation guide.
- Deploy the Splunk Add-on for Amazon Kinesis Firehose by following the installation guide.
- Install the GrandCentral for AWS app from Splunkbase.
Step 7: Add a new master account and new Splunk account
- Log in to the Splunk Console and navigate to the GrandCentral application. You can find login details under usage instructions.
- In the GrandCentral top navigation, select Configure Data Sources. In AWS Organization Master Accounts, select New Organization Master Account. On the top right of the page, enter your AWS Master Account ID and enter a name for it such as MasterAccount. Then enter the AccessKey ID and SecretKey ID from Step 3 and choose Save.
- To see a list of all the accounts in your organization, under the newly added Master Account, choose List Accounts in Organization. You should see all linked accounts in the console. This indicates that the configured credentials for the accounts are working.
- In the top navigation, select Configure Data Sources. Under Splunk Endpoints, select New Splunk Account. Enter an account name and the HTTP Event Collector Endpoint. Make sure to include the port, 8088 (or 443 for Splunk Cloud deployments), in your HTTP Event Collector Endpoint. Enter the HEC Token ID that you saved in step 4 in both token fields.
- Select Save.
Step 8: Deploy the StackSet on an OU
- To deploy the StackSet on an OU, within the GrandCentral application select Configure Data Sources, and then select New Stackset. Select the AWS Master Account and choose the OU where you want this deployed. Enter a name for your deployment, select one or multiple Regions, and select your data sources. These data sources are the source of log and event collection for Splunk for analysis and event correlation.
- Verify the deployment by checking the primary account in the respective Regions. To do this, in the AWS Management Console, navigate to CloudFormation and select the Region where you ran the deployment. A StackSet Status of ACTIVE indicates the StackSet is successfully deployed.
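The same check can be scripted with boto3, assuming you run it with credentials for the primary account; the StackSet name is whatever you entered in GrandCentral.

```python
def stack_set_is_active(describe_response: dict) -> bool:
    """True when the DescribeStackSet response reports Status ACTIVE."""
    return describe_response["StackSet"]["Status"] == "ACTIVE"

def check_stack_set(stack_set_name: str, region: str) -> bool:
    """Query CloudFormation in the given Region and check the StackSet status."""
    import boto3  # deferred so stack_set_is_active stays dependency-free
    cfn = boto3.client("cloudformation", region_name=region)
    return stack_set_is_active(
        cfn.describe_stack_set(StackSetName=stack_set_name)
    )
```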
As the deployment has been done using StackSets, any new accounts added under the OU automatically get the stack instance deployed. The data source configuration will be identical. StackSets takes care of repetitive configuration across all new accounts governed by AWS Control Tower and ensures all new accounts meet your governance policies.
After you complete these steps, any newly created accounts automatically get configured to export data and logs to Splunk. Accounts governed by AWS Control Tower will now meet your logging, alerting, and governance policies.
Search logs and explore dashboards on Splunk
You have now integrated Splunk’s Grand Central software into your AWS Control Tower environment. Any newly created accounts automatically get configured to export data and logs to Splunk. You can now automate logging configuration for log analysis and monitor your resources. Here’s how to do that:
- Log in to the Splunk console. In the left panel, select Search & Reporting. In the search box, enter a search string, such as vpc or a user name. This should give you the raw logs to analyze.
- Splunk provides dashboards to visualize your collected logs and metrics. One of my favorite dashboards is the Config Rules dashboard, which provides a consolidated view of AWS Config rules and violations across all the AWS accounts managed via AWS Control Tower.
In the following screenshot of that dashboard, you can see three columns summarizing active AWS Config rules, non-compliant AWS Config rules, and non-compliant resources using pie charts. Beneath those are two tables: an active AWS Config rules summary table and a non-compliant resources details table.
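Ad hoc searches like the one above can also be run over Splunk's REST API, whose search endpoint expects queries to begin with the `search` command. The host and credentials below are placeholders; port 8089 is Splunk's default management port.

```python
def spl_query(terms: str) -> str:
    """Prefix the leading 'search' command that the REST API expects."""
    terms = terms.strip()
    return terms if terms.startswith("search ") else f"search {terms}"

# Running the search over the REST API (requires the 'requests' package):
# import requests
# resp = requests.post(
#     "https://splunk.example.com:8089/services/search/jobs/export",
#     auth=("admin", "changeme"),
#     data={"search": spl_query("vpc"), "output_mode": "json"},
#     verify=False,
# )
# print(resp.text)
```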
In this blog post, Igor, Kam, and I showed you how to purchase Splunk in AWS Marketplace. We showed you how to integrate Splunk’s Grand Central application so that newly created accounts are automatically configured to export data and logs to Splunk. And we showed you how to use Splunk to visualize your collected logs and metrics.
Find out more about Splunk products available in AWS Marketplace.
If you’re getting started on AWS or are in the process of building your landing zone, visit Getting Started with AWS Control Tower. This page offers guidance on building a well-architected AWS environment. You can integrate Splunk with AWS Control Tower in AWS Marketplace by visiting the solution page and using the implementation guide that accompanies the solution.
About the authors
Vijay Shekhar Rao is a Senior Technical Account Manager working with large AWS enterprise customers. Before joining AWS, Vijay spent several years architecting, building, managing, and troubleshooting complex infrastructure for critical systems. Over his career, he has learned what it takes to design and deploy systems the right way for operational simplicity, effectiveness, and cost optimization. Vijay has been involved with various industry verticals, including finance, communications, the public and government sector, and startups.
Igor Alekseev is a Data Solution Architect at AWS, working with strategic partners to help them build complex, AWS-optimized architectures. Prior to that, he implemented many big data projects, including several data lakes in the Hadoop ecosystem. As a Data Engineer, he was involved in applying machine learning to fraud detection and office automation. Igor’s projects were in a variety of industries including communications, finance, public safety, manufacturing, and healthcare.
Kamilo “Kam” Amir is the Director of Cloud Interlock specializing in the Splunk Cloud service and is based in the Washington, D.C. area. He’s been with Splunk since version 4.3 and started as an SE covering major accounts. If you need to find him, just look for him hiking in Rock Creek Park with his family and his husky.