This Guidance helps you set up a Cloud Financial Management (CFM) capability to manage and optimize your expenses for cloud services. This capability includes near real-time visibility and cost and usage analysis to support decision-making for topics such as spend dashboards, optimization, spend limits, chargeback, and anomaly detection and response. It also includes budgeting and forecasting, giving you a defined, cost-optimized architecture for your workloads so you can select the right pricing model and attribute resource costs to relevant teams. This enables you to track, notify, and apply cost optimization techniques across your environment and resources. You can centrally manage expense information and give critical stakeholders access as needed for targeted visibility and to support decision-making.

Architecture Diagram

Download the architecture diagram PDF 

Implementation Resources

It is critical to understand and have visibility into the spend of your cloud environment. Setting up the right measures to monitor your resources will allow you to create reports, dashboards, and processes for anomaly detection. By tracking cloud spend and budget, you can plan for cost optimization and reduce unnecessary costs.

These mechanisms and tools enable you to support business decisions with data and establish cloud financial operations in your environment. You can socialize cost awareness across different business units, application teams, and other stakeholders without affecting the pace of innovation for your teams.

Before implementing a CFM function, you will need to implement a tagging strategy for your environment. Reference the Tagging capability section in the Establishing Your Cloud Foundation on AWS whitepaper, where you will find recommended tags for your environment. Some of these tags can be used to track spend through your cloud environment, allowing you to create dashboards and reports for individual business units, workloads, and environment types.

Your tagging capability requires a tagging dictionary, and CFM tags and their defined values must be incorporated into that dictionary. Make these tags widely available across your different stakeholders and teams to enable them to trace their spend back to its source. This will allow your cloud team or your financial operations (FinOps) team to analyze usage and build a cost allocation strategy.

    • Scenario
    • Allocate cost for your cloud environment

      • Implement a showback-based cost allocation mechanism for cloud costs
      • Implement a chargeback-based cost allocation mechanism for cloud costs
      • Drive business value by quantifying the total value of migrating a workload to the cloud
      • Ensure cost is a design trade-off during new workload migrations and born-in-cloud workload designs
    • Overview
    • Leveraging metadata across your environment helps you to accurately allocate cost for workloads and applications. Using a showback approach, you can identify the costs incurred by a business unit, product, or team. Showback refers to reporting that breaks down cloud costs into attributable categories such as consumers, business units, general ledger accounts, or other responsible entities. The goal of showback is to show teams, business units, or individuals the cost of their consumed cloud resources.

      Material spend, however, may not be accounted for due to the lack of enforcement of tagging mechanisms. To remediate this issue, group your different resources and establish boundaries between them. This will help identify how those groups of resources are using their assigned budget across different stages of the software development lifecycle. 

      Create categories to consolidate groups of resources based on your business needs. Examples of categories include Business Unit, Product Line, or Environment. Once these categories are set up, you can use them to monitor category-based cost and usage information. Additionally, you can retrieve meaningful information from these groups of resources, such as who created the resource or the line of business to which a resource belongs.

      Leveraging infrastructure as code (IaC) to deploy infrastructure and resources for your workloads will allow you to consistently apply tags from your tagging dictionary, avoid untagged resources, and enforce your tagging policy. These actions will help you allocate costs for your infrastructure when needed.
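
      An IaC pipeline can run a pre-deployment check against the tagging dictionary to block untagged or mistagged resources. The following is a minimal sketch of such a check; the dictionary keys and allowed values are hypothetical examples, so substitute the tags defined for your own environment.

```python
# Minimal sketch of a pre-deployment tag check that an IaC pipeline can run
# to enforce a tagging policy. Keys and allowed values below are
# hypothetical examples, not AWS-defined tags.

TAG_DICTIONARY = {
    "CostCenter": {"1001", "1002", "1003"},
    "Environment": {"dev", "test", "prod"},
    "BusinessUnit": {"retail", "payments"},
}

def validate_tags(resource_tags):
    """Return a list of tagging violations for a resource (empty = compliant)."""
    violations = []
    for key, allowed in TAG_DICTIONARY.items():
        if key not in resource_tags:
            violations.append(f"missing required tag: {key}")
        elif resource_tags[key] not in allowed:
            violations.append(f"invalid value for {key}: {resource_tags[key]}")
    return violations
```

      A pipeline would fail the deployment when the returned list is non-empty, which keeps every deployed resource attributable for cost allocation.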

      Cost allocation methods allow you to carve out material cloud charges, including commitment purchases, standalone resources, and shared resources. Shared resources include charges such as networking, log retention and archival, security tooling, and operational tooling. Establishing chargeback mechanisms will allow you to report costs incurred by different business units, products, and teams.

    • Implementation
    • Implement a showback-based cost allocation mechanism for cloud costs

      As you develop reporting capabilities, you should also learn how to create a showback mechanism. 

      The first step in creating a showback mechanism is to define the allocation dimensions you want to use for reporting; these will vary from customer to customer. Here are three mechanisms you can use:

      1. Use Cost Allocation Tags based on the existing tagging dictionary
      2. Use Cost Explorer filters
      3. Use Cost Categories to aggregate combinations of dimensions to filter in AWS Cost Explorer

      For example, if you want to showback on a per project basis, you would use the dimension(s) that identify the project. You may have a tag named Project, so you can use the group-by setting in Cost Explorer to group the values of the project tag. Reference Guidance for Tagging on AWS for additional information about tagging resources in your AWS environment.

      To showback a specific project’s costs, you can implement a filter in Cost Explorer on the Tag dimension to only display the costs related to a specific value of the Project tag.
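
      The group-by and filter behavior described above can be sketched locally. In this illustrative example, raw cost line items are summed per value of a hypothetical Project tag, with an optional filter down to a single project:

```python
from collections import defaultdict

# Showback sketch: sum cost line items per value of a cost allocation tag
# (here a hypothetical "Project" tag), mirroring Cost Explorer's group-by
# and filter behavior. Line-item structure is illustrative.

def showback_by_tag(line_items, tag_key, tag_value=None):
    """Sum costs per tag value; pass tag_value to filter to one project."""
    totals = defaultdict(float)
    for item in line_items:
        value = item.get("tags", {}).get(tag_key, "(untagged)")
        if tag_value is not None and value != tag_value:
            continue
        totals[value] += item["cost"]
    return dict(totals)
```

      Items with no value for the tag land in an "(untagged)" bucket, which is also how you surface spend that your tagging policy has not yet captured.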

      If you want to show costs for a specific team in a specific environment (such as a Cloud Platform team in a development environment), you can combine a Team tag with a value of Cloud Platform and the AWS account used for your development environment.

      As tag combinations in Cost Explorer get more complex, creating and maintaining various combinations of tags and filters can become difficult to manage. AWS Cost Categories allow you to use rules to dynamically organize one or many dimensions into meaningful categories. You can create Cost Category rules based on a combination of Accounts, Cost Allocation Tags, Services, or Charge Types. You can also nest other Cost Categories inside a rule and create multiple rules per each Cost Category. Cost Categories can be used as Cost Explorer filters or groupings, similar to tags in the previous examples. Once created or changed, Cost Categories take up to 24 hours to process. Review the Creating cost categories section in the AWS Billing User Guide for additional information. 
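
      Conceptually, a Cost Category is an ordered list of rules that maps combinations of dimensions to a category value. The following local sketch illustrates that rule evaluation; the rule contents (account IDs, tag values, category names) are hypothetical:

```python
# Sketch of Cost Category rule evaluation: each rule maps a combination of
# account and tag dimensions to a category value. Rules here are
# hypothetical; first matching rule wins, like an ordered rule list.

RULES = [
    {"value": "Platform", "accounts": {"111111111111"}, "tags": {}},
    {"value": "Retail-Prod", "accounts": set(),
     "tags": {"BusinessUnit": "retail", "Environment": "prod"}},
]

def categorize(account, tags, default="Uncategorized"):
    """Return the category value for a cost record's account and tags."""
    for rule in RULES:
        if rule["accounts"] and account not in rule["accounts"]:
            continue  # rule restricted to accounts this record is not in
        if any(tags.get(k) != v for k, v in rule["tags"].items()):
            continue  # a required tag value does not match
        return rule["value"]
    return default
```

      Records matching no rule fall into a default bucket, analogous to the uncategorized costs that Cost Categories report.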

      Some customers may require a way to showback costs on a per-account basis and a way to provide individual bills to entities like business units. In these cases, use AWS Billing Conductor. This service allows you to create Billing Groups that divide the charges for an account or set of accounts and produces an invoice for the charges of the Billing Groups. You can also create custom line items to add in charges that may not be split in the invoice or are not taggable, like support charges or data transfer. 

      Implement a chargeback-based cost allocation mechanism for cloud costs

      Showbacks are a representation of charges incurred. Chargebacks are the actual charge to a profit and loss (P&L) through accounting processes, such as financial systems and journal vouchers, based on defined categories. Chargebacks are not an invoicing process but a way to allocate expenses to various stakeholders. 

      To implement chargeback, a showback mechanism must already be in place. We recommend using a combination of Cost Allocation tags, Cost Categories, and Cost Explorer reports. Building on the showback mechanism, you must decide on the cost allocation of un-taggable or shared resources. You can tag or categorize shared resources accordingly and quickly surface the cost of untagged resources using Cost Explorer: under Advanced Options in the console, select Show only untagged resources, which allows you to group these resources by Service, Linked Account, or other options.

      You must decide on the most appropriate allocation model for un-taggable or shared costs. The most common models are an even split across business groups, charging proportionally to spend, or a fixed cost. To help calculate these costs, regardless of allocation method, we recommend leveraging split charges within cost categories. By defining the source category (such as infrastructure services or networking) and targets (such as business units or workloads), you can calculate the correct chargeback based on your allocation method each month. However, this feature only does the calculation and reporting. The split charges would not appear in any other AWS Cost Management tools. 
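
      The three allocation models can be sketched as simple arithmetic. In this illustrative example (business-unit names and amounts are hypothetical), a shared charge is split evenly, proportionally to each unit's spend, or by fixed fractions:

```python
# Sketch of the three common allocation models for shared costs: even
# split, proportional to spend, and fixed shares. Unit names and amounts
# are hypothetical.

def allocate_shared_cost(shared_cost, unit_spend, method="proportional",
                         fixed_shares=None):
    """Return each business unit's share of a shared charge."""
    units = list(unit_spend)
    if method == "even":
        return {u: shared_cost / len(units) for u in units}
    if method == "proportional":
        total = sum(unit_spend.values())
        return {u: shared_cost * unit_spend[u] / total for u in units}
    if method == "fixed":  # fixed_shares: fraction per unit, summing to 1
        return {u: shared_cost * fixed_shares[u] for u in units}
    raise ValueError(f"unknown method: {method}")
```

      Split charges inside Cost Categories perform an equivalent calculation each month for the source and target categories you define.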

      To provide a single source of truth for showback and chargeback that is integrated with AWS Cost Management tools, use AWS Billing Conductor. You can use billing groups to create mutually exclusive billing domains within a consolidated billing family, and you can define each billing group with the rates you want applied to its usage. Split charges can be added to the bill using custom line items, charging or crediting based on either a percentage of spend or a fixed cost. Once configured, each billing group can generate its own pro-forma AWS Cost and Usage Report that can be analyzed using Amazon Athena and visualized using Amazon QuickSight dashboards, such as Cloud Intelligence Dashboards (CIDs).

      Drive business value by quantifying the total value of migrating a workload to the cloud

      A business case for moving to the cloud should include value benefits beyond the cost savings that exist today for an organization. To quantify the total value of a cloud migration, you will need to build a comprehensive business case by measuring and tracking progress against four key dimensions of value:

      • Cost Savings is the total cost of ownership (TCO)-based benefit of moving to the cloud
      • Staff Productivity is the full-time equivalent (FTE) productivity gained from reducing or eliminating tasks no longer needed with the cloud
      • Operational Resilience is the benefit from improved availability and security
      • Business Agility is the ability to respond faster and increase experimentation

      To begin building a cost-focused migration business case, you can use AWS Migration Evaluator—at no cost—to collect data from your existing environment. Following data collection, this service will create an assessment that includes a projected cost estimate and savings of running your on-premises workloads in the AWS Cloud.

      Using the data collected from AWS Migration Evaluator, you can work with your AWS account team to engage the AWS Cloud Economics team. To help customers better understand the impact of a cloud transformation, the AWS Cloud Economics team established the Cloud Value Framework (CVF). The CVF helps you build a more holistic business case to present to decision makers.

      Ensure cost is a design trade-off during new workload migrations and born-in-cloud workload designs

      It is important to consider cost at every stage of a workload’s lifecycle. A cost-optimized workload fully uses all resources, achieves an outcome at the lowest possible price point, and meets your functional requirements. There are trade-offs to consider for cost optimization. For example, you might consider whether to optimize for speed-to-market or for cost. In some cases, it’s best to optimize for speed—going to market quickly, shipping new features, or meeting a deadline—rather than investing in upfront cost optimization. 

      Investing the right amount of effort upfront in a cost optimization strategy allows you to realize the economic benefits of the cloud more readily by encouraging consistent adherence to best practices and avoiding unnecessary technical debt.

      The AWS Well-Architected Framework provides architectural best practices, around six pillars, for designing and operating reliable, secure, efficient, and cost-effective workloads in the cloud. As you consider trade-offs, it will help to review the design principles in the Cost Optimization Pillar.

      With the help of an AWS Solutions Architect or an AWS Technical Account Manager, you can conduct Well-Architected Reviews throughout the lifecycle of the workload. The AWS Well-Architected Tool tracks recommendations and remediations from the findings of the review.

    • Scenario
    • Plan and forecast your spend across your environment 

      • Forecast existing cloud usage and spend
      • Purchase commitments for already deployed workloads
      • Purchase commitments as part of a new cloud workload deployment
    • Overview
    • Whether the workload you bring to your cloud environment is a new, cloud-native workload or a migration from your on-premises data center, you need to model and plan costs. Using cloud-native tools, you can extract data that will allow you to determine a TCO to quantify expected cloud costs. Additionally, for your overall environment, there are other non-cost cloud values that you need to consider, including staff productivity, operational resiliency, and business agility, which will showcase the business value of moving to the cloud. 

      We recommend regularly reviewing cloud budgets to understand variances in cost, so you can plan ahead for spending. We also recommend performing forecasting exercises to define future IT and workload-based budgets. You can perform cloud forecasting by using a combination of trend-based and driver-based methods to align to future cloud usage (such as new products, new launches, or changes in cloud deployments) in addition to future cloud demand (such as forecasted demand for cloud-hosted products). Cloud spend planning is part of the organization’s overall IT financial planning process, which may include on-premises or other hybrid spend planning.

    • Implementation
    • Forecast existing cloud usage and spend

      Forecasting is an essential part of staying on top of your cloud costs and usage, and it is especially important in helping you improve budgeting and cost predictability as your business scales.

      AWS Cost Explorer uses a machine learning (ML) algorithm that learns your usage trends and gives you the ability to create custom usage forecasts. These forecasts predict your expected future costs over a forecast period you select and are based on your past usage of AWS services. You can use a forecast to estimate your AWS bill and set alarms and budgets based on predictions. Cost Explorer forecasts have a prediction interval of 80%. If AWS doesn't have enough data to forecast an 80% prediction interval, Cost Explorer doesn't provide a forecast. This is common for accounts that have less than one full billing cycle of usage.

      To get started with forecasting your spend in AWS Cost Explorer, follow these steps:

      1. Select either Daily or Monthly time granularity.
      2. Select a custom or Auto-select date range:
        • Select a custom date range that includes only future dates or a range that also includes historical data.
        • Auto-select +3M to display forecast data for the next 3 months or +12M to display forecast data for the next 12 months.
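
      The same forecast can be requested programmatically through the Cost Explorer API. The following is a minimal sketch, assuming the boto3 ce client's get_cost_forecast operation; the helper and its rough month arithmetic are illustrative, and the actual API call is left commented out because it requires AWS credentials:

```python
from datetime import date, timedelta

# Sketch of building a Cost Explorer forecast request (boto3 "ce" client,
# get_cost_forecast). The request is built separately so it can be
# inspected; month arithmetic is a rough 30-day approximation for the sketch.

def build_forecast_request(months=3, today=None):
    start = today or date.today()
    end = start + timedelta(days=months * 30)
    return {
        "TimePeriod": {"Start": start.isoformat(), "End": end.isoformat()},
        "Metric": "UNBLENDED_COST",
        "Granularity": "MONTHLY",
        "PredictionIntervalLevel": 80,  # matches Cost Explorer's 80% interval
    }

# import boto3
# ce = boto3.client("ce")
# forecast = ce.get_cost_forecast(**build_forecast_request())
# print(forecast["Total"]["Amount"])
```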

      AWS Pricing Calculator helps estimate costs with driver-based forecasting and pre-defined architectures of a workload. AWS Pricing Calculator is a web-based planning tool used to model your solutions and create cost estimates before you start building. It allows you to explore AWS service price points and review the calculations behind your estimates so you can make informed decisions when using AWS services.

      To create an estimate with AWS Pricing Calculator, follow these steps:

      1. In AWS Pricing Calculator, choose Create estimate.
      2. On the Add service page, find the service that you want to choose, and then select Configure. For more information, see Configure a service.
      3. Add a Description for the estimated service.
      4. Select a Region.
      5. Enter your settings in the Service settings section.
      6. Choose Add to my estimate.

      With your estimate, AWS Pricing Calculator gives you the ability to see the calculations behind the estimated prices for your service configurations. You can also share and export your estimates. If AWS Pricing Calculator does not have prices available for a particular service, you can find the price on the public AWS service page.

      Building a small-scale proof of concept (POC) is an effective method for forecasting the cost of workloads that are built on the cloud and billed by usage. You can derive an accurate cost estimate for the POC through AWS Cost Explorer by using Cost Allocation tags and running distributed load tests that simulate future usage against the workload.

      Purchase commitments for already deployed workloads

      One of the biggest advantages of cloud computing is the ability to instantly provision additional resources to meet demand. If you maintain steady-state cloud workloads that are always on, such as a database, you can take advantage of commitment-based discounts that can greatly reduce your spend. Workloads with variable usage can also receive discounts on their baseline resources while using on-demand resources for peak times and unexpected spikes. Choosing the right pricing model(s) is key to successfully optimizing the cost of your cloud workloads. The two pricing models applicable to this CFM capability are Savings Plans and Reserved Instances (RIs).

      Savings Plans is a flexible pricing model you can use to reduce Amazon Elastic Compute Cloud (Amazon EC2), AWS Fargate, and AWS Lambda costs. This pricing model offers lower prices on Amazon EC2 instance usage, regardless of instance family, size, OS, tenancy, or AWS Region, and also applies to AWS Fargate and AWS Lambda usage. AWS also offers Amazon SageMaker Savings Plans to help reduce the cost of running ML workloads on AWS.

      There are three distinct types of Savings Plans:

      1. Compute Savings Plans, which apply to Amazon EC2, AWS Fargate, and AWS Lambda usage
      2. EC2 Instance Savings Plans, which apply to Amazon EC2 usage within a specific instance family and Region
      3. Amazon SageMaker Savings Plans, which apply to ML workloads on Amazon SageMaker

      Before you purchase a Savings Plan, AWS Cost Explorer provides purchase recommendations based on your historical usage; review these recommendations to select the right compute Savings Plan. Once you sign up for Savings Plans, your compute usage is automatically charged at the discounted Savings Plans prices.

      To access your Savings Plans recommendations in any account in your organization, follow these steps:

      • Sign in to the AWS Management Console and open the AWS Cost Management console.
      • In the navigation pane, under Savings Plans, choose Recommendations.
      • In the Recommendation options section, choose your preferred Savings Plans type, Savings Plans term, Payment option, and look-back period.
      • In the Recommended Savings Plan table, select the check boxes next to the Savings Plans that you want to purchase.

      For additional information, reference the Purchasing Savings Plans section in the Savings Plans User Guide.

      You can use Reserved Instances (RIs) to reduce Amazon Relational Database Service (Amazon RDS), Amazon Redshift, Amazon ElastiCache, and Amazon OpenSearch Service costs by up to 75% compared to equivalent on-demand capacity. RIs are available in 1- or 3-year terms with three payment options: all up-front (AURI), partial up-front (PURI), or no up-front (NURI). You can use AWS Cost Explorer RI purchase recommendations to help determine the terms that best suit your business needs.

      To view your RI recommendations, follow these steps:

      • Sign in to the AWS Management Console and open the AWS Cost Management console.
      • In the navigation pane, under Reservations, choose Recommendations.
      • For Select recommendation type, choose the service for which you want recommendations.
      • To purchase the recommended reservations, go to the purchase page on the respective service’s console.

      For additional information, reference the Using Your RI Recommendations section in the AWS Cost Management User Guide.
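
      The trade-off between the three payment options can be illustrated with simple arithmetic. All rates below are made-up numbers for the sketch, not AWS pricing; real rates come from the service pricing pages or the recommendations above:

```python
# Hypothetical comparison of RI payment options with made-up rates, to
# illustrate how paying more up-front lowers the effective total cost.

def total_ri_cost(upfront, hourly_rate, term_years):
    hours = term_years * 8760  # 365-day years for the sketch
    return upfront + hourly_rate * hours

# Example: 1-year term, on-demand equivalent at a hypothetical $0.10/hour
on_demand = total_ri_cost(0.0, 0.10, 1)
auri = total_ri_cost(500.0, 0.0, 1)      # all up-front
puri = total_ri_cost(260.0, 0.03, 1)     # partial up-front
nuri = total_ri_cost(0.0, 0.065, 1)      # no up-front

savings_vs_on_demand = {name: round(1 - cost / on_demand, 3)
                        for name, cost in
                        [("AURI", auri), ("PURI", puri), ("NURI", nuri)]}
```

      Under these illustrative rates, AURI yields the largest savings and NURI the smallest, at the cost of larger cash outlay up-front.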

      If you have consistent Amazon CloudFront usage, you can take advantage of the CloudFront Security Savings Bundle. This is a flexible, self-service pricing plan that helps you save on your CloudFront bill in exchange for making a commitment to a consistent amount of monthly CloudFront usage (measured in $/month) for a 1-year term.

      Purchase commitments as part of a new cloud workload deployment

      As cloud costs increase—either due to new workload deployments or resource growth of existing workloads—you should make further commitments to reduce the cost of steady state usage. You can regularly review your cloud spend either on a monthly or quarterly basis to make commitment decisions. To automate the review process, create AWS Budget Alerts for Savings Plans and reservation coverage. 

      To begin, access AWS Budgets through the billing console to create a budget for Savings Plans coverage or reservation coverage. While you will only need a minimum of one Savings Plans coverage budget, you will need one reservation coverage budget for each applicable service (such as Amazon EC2, Amazon RDS, or Amazon Redshift). Based on your current coverage, set the threshold you’d like to stay above. Typically, customers commit to 50-70% of their usage to maximize savings while leaving room for variability. When coverage dips below the desired threshold, you will receive an alert through email or AWS Chatbot.
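
      The check a coverage budget performs can be sketched as follows. The 60% threshold is a hypothetical value within the 50-70% range mentioned above:

```python
# Sketch of a Savings Plans coverage check: compare the share of spend
# covered by commitments against the threshold you commit to stay above.
# The 60% default is a hypothetical threshold.

def coverage_alert(covered_spend, on_demand_spend, threshold_pct=60.0):
    """Return (coverage_pct, should_alert) for a billing period."""
    total = covered_spend + on_demand_spend
    coverage = 100.0 * covered_spend / total if total else 0.0
    return round(coverage, 1), coverage < threshold_pct
```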

      Savings Plans and reservation recommendations should then be reviewed within the AWS Cost Management console using a look-back period of 7, 30, or 60 days, as appropriate to the new cloud spend. You can then purchase commitments to remediate the coverage gap through the AWS Management Console, AWS Command Line Interface (AWS CLI), or SDKs.

    • Scenario
    • Monitor cost and usage in your environment 

      • Implement a mechanism to proactively monitor detailed cloud costs
      • Implement a mechanism to report on aggregate cloud cost and usage
      • Implement a mechanism to proactively monitor aggregate cloud costs
      • Perform audit-based cost optimization assessments for deployed workloads
    • Overview
    • Visibility across your cloud environment is critical. Your cloud and finance teams will require visibility into cloud spend, and stakeholders owning business units and workloads need to generate and save custom reports. Providing visibility into cloud spend will allow you to create a cost-aware culture within your organization.

      Custom reports will allow different stakeholders to create different views of the cloud environment, based on groups of resources or stages of development. These reports will also enable them to forecast spend or detect cost anomalies during a specified period. 

      Granularity for dashboards and access can be enabled at different levels, and access is granted based on roles and groups for each workload through federation. The following are examples of varying levels of access:

      • To analyze and monitor your entire environment, your cloud team or your FinOps team will need access to environment-level billing. From there, they can set up budgets, analyze spend trends, and detect anomalies across the entire environment.
      • For a business owner who needs visibility across different workloads and applications, you can grant access to a specific group of resources or custom dashboards. The business owner can then visualize spend for each of the workloads.
      • For a builder, a developer, or an individual user, you can grant access to visualize the cost associated with the resources that the individual is using or within their sandbox environment.

      Setting up the right monitoring tools for your spend will allow you to evaluate different spend patterns. By evaluating your patterns weekly or monthly, you can determine if any anomalies in your spend patterns exist and quickly identify and remediate the root cause.

      You should define metrics that allow you to identify whether your cost strategy is successful. For example, you can define metrics for unit cost. This will feed into your overall finance strategy, allowing your technology and business stakeholders to identify cost-saving opportunities and reallocate savings to new initiatives or projects to enhance your cloud environment. 

    • Implementation
    • Implement a mechanism to proactively monitor detailed cloud costs

      To identify unexpected or anomalous cloud spend before receiving a monthly bill, you will need to proactively monitor costs. Alerting based on budgets, thresholds, and anomalies allows you to quickly remediate unexpected and anomalous cloud spend. You should set up centralized budgeting and cost monitors in the payer account of a consolidated billing family to alert on spend as a whole. After completing that step, we recommend creating budgets and monitors at a more granular level based on your needs.

      When creating a new budget using AWS Budgets, you can apply filters including usage type, usage type group, service, or tag. For example, you can create a budget specifically for Amazon EC2: Data Transfer - Internet (Out) to be alerted when varying levels of Amazon EC2 spend are forecasted to be exceeded. If you don’t have a fixed or planned budget, you can create an auto-adjusting budget that is based on a baseline time range you define. 
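
      Budgets can also be created programmatically. The following is a sketch of building the request body for the AWS Budgets API (boto3 budgets client, create_budget); the budget name, amount, filter value, and account ID are hypothetical, and the call itself is left commented out:

```python
# Sketch of an AWS Budgets create_budget request body. Name, amount, and
# filter values are hypothetical; the Budget dict is built separately so it
# can be inspected before the API call.

def build_budget(name, limit_usd, service=None):
    budget = {
        "BudgetName": name,
        "BudgetLimit": {"Amount": str(limit_usd), "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    }
    if service:
        budget["CostFilters"] = {"Service": [service]}
    return budget

# import boto3
# boto3.client("budgets").create_budget(
#     AccountId="111111111111",  # placeholder account ID
#     Budget=build_budget("ec2-monthly", 200,
#                         "Amazon Elastic Compute Cloud - Compute"),
# )
```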

      Once you have set your budget, you can configure alerts to be sent at pre-defined current or forecasted thresholds. These alerts can be sent to an email address, an Amazon Simple Notification Service (Amazon SNS) topic, or AWS Chatbot. By sending a targeted alert, the person responsible for an individual account, workload, or usage metric is notified directly rather than alerts being sent to an entire centralized cloud team. For information on setting up budget alerts in Amazon Chime or Slack, review the Receiving budget alerts in Amazon Chime and Slack section in the AWS Cost Management User Guide.

      Alternatively, if end users don’t have access to the billing console, you can monitor costs using Amazon CloudWatch billing alarms. You must first enable this feature in the payer account by selecting Receive Billing Alerts under the billing console preferences. Once complete, you can create alarms for total estimated charges. Otherwise, we recommend using AWS Budgets, because forecasted or auto-adjusting thresholds are not available through CloudWatch alarms.

      We recommend creating an AWS Cost Anomaly Detection monitor for AWS services. In addition to this service-level monitor, you can create monitors scoped by linked account, Cost Category, or Cost Allocation tag. These monitors detect and alert on anomalous spend associated with a specific account, workload, or business unit.

      Implement a mechanism to report on aggregate cloud cost and usage

      Reporting cloud costs is a fundamental competency for your CFM capability. It gives stakeholders transparency into the overall cloud spend for the organization. 

      To begin, you need to ensure the right individuals have access to AWS billing and cost management tools. The correct AWS Identity and Access Management (IAM) resources and IAM policies should be created and assigned to stakeholders. Note that IAM access is not enabled by default, so you must first activate IAM access from the management account before IAM resources can be granted access to the AWS billing console.

      Once access has been granted, the next step is to enable AWS Cost Explorer. Note that Cost Explorer will take approximately 24 hours to generate the current month’s costs. The trailing 12-month costs and 12-month forecast will be available a few days later. AWS Cost Explorer can only be enabled through the console. 

      After enabling AWS Cost Explorer, you can generate reports for your stakeholders. By default, there are nine reports already created including monthly costs, daily costs, AWS Marketplace costs, RI costs, and Savings Plans costs. The default reports are a good starting point for managing overall costs and are a quick way to begin reporting on aggregate costs. 

      In the console, you can customize your reporting with various group-by settings. You can only use one grouping setting at a time. Costs can be grouped by categories like service, linked account, region, instance type, usage type, and many other categories. 

      Costs can also be filtered by these categories. Multiple filters and multiple values inside each filter category are allowed. In the Filters window, you can create a filter under Service to only show costs related to Amazon EC2 or show all costs except for Premium Support. Multiple values in a filter use the logical “or” operator, and multiple filters use the logical “and” operator.
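
      The filter semantics described above can be sketched locally: multiple values inside one filter combine with OR, while multiple filters combine with AND. The line items and filter contents in this example are hypothetical:

```python
# Sketch of Cost Explorer filter semantics: values within a filter are OR'd
# together; separate filters are AND'ed. Line items are hypothetical.

def matches(item, filters):
    """filters: dict mapping a dimension to the set of accepted values."""
    return all(item.get(dim) in values for dim, values in filters.items())

items = [
    {"service": "AmazonEC2", "account": "111111111111", "cost": 40.0},
    {"service": "AmazonRDS", "account": "111111111111", "cost": 25.0},
    {"service": "AmazonEC2", "account": "222222222222", "cost": 10.0},
]

def filtered_total(items, filters):
    return sum(i["cost"] for i in items if matches(i, filters))
```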

      You can also change the way you account for monthly costs. Under Advanced Options, you have the ability to report unblended, amortized, and net amortized costs after discounts. 

      Once you set up your report to display the right aggregated data, you can save the report for future use using the Save feature. Once saved, your report will be listed in the Reports section of AWS Cost Explorer.

      For additional granularity, AWS Cost and Usage Reports (CUR) provide the most detailed cost and usage data available, at hourly, daily, or monthly granularity, and allow you to break this data down by product, resource, tags, and many other dimensions. You may also create multiple CUR files with differing configurations.

      To create a CUR file, first you need to create an Amazon Simple Storage Service (Amazon S3) bucket to store the CUR files. CUR data is cumulative and can be updated up to three times a day in the S3 bucket. We recommend using a new bucket for each CUR file you create.

      After you create the S3 bucket, you can create a new CUR file in the AWS billing console. There are a number of options for creating a CUR file, including the following:

      • You can choose hourly, daily, or monthly granularity.
      • You can choose to overwrite or not overwrite the report with each update. Overwriting can save on Amazon S3 storage costs; however, you will lose access to the data that is overwritten.
      • You can choose to enable integration with Athena, Amazon Redshift, or QuickSight, which will bring up the applicable report format. For example, an integration with Athena can help you create a reporting dashboard.
      • If you do not choose to enable integration, you can select Parquet format, or CSV format compressed with GZIP or ZIP.

      Once created, you will see the CUR data populate within 24 hours. To begin querying the CUR data with Athena, locate the YAML AWS CloudFormation template file in the root path prefix you chose when creating the CUR file. Download and deploy this template with CloudFormation to create an AWS Glue crawler, an AWS Glue database, and an AWS Lambda function that runs the crawler when the S3 bucket is updated.

      Once the CloudFormation stack is complete, you can test the integration by running Athena queries on the AWS Glue database.

      After CUR is configured, you can begin querying the CUR data with Athena, which is the easiest way to get started with querying the data. This is also a foundational step for using the Cloud Intelligence Dashboards (CID), our recommended visualization tool.

      If you want to ingest the CUR data directly into your own business intelligence (BI) tools, Athena supports open database connectivity (ODBC) and Java database connectivity (JDBC) connections, and it also has a native Power BI connector. Many third-party CFM applications can also import CUR data directly in CSV format.

      Implement a mechanism to proactively monitor aggregate cloud costs

      AWS Budgets lets you set custom cost and usage budgets that alert you through an Amazon SNS topic when your budget thresholds are exceeded or are forecasted to be exceeded. You can also create budgets to track your aggregate Reservation and Savings Plans utilization and coverage metrics.

      Access AWS Budgets using the AWS billing console to create a budget based on cost, usage, Savings Plans utilization, or Reservation utilization. The first budget we recommend you set up is a cost budget. A cost budget allows you to set a specific dollar amount over a specified period, such as weekly or monthly. The scope can encompass all services or be filtered based on a type of cost dimension, such as tags. Alerts can then be triggered on an actual or forecasted threshold using an SNS alert.
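      The same cost budget can be created programmatically through the AWS Budgets API (the boto3 `budgets` client's `create_budget` call). The sketch below builds a monthly cost budget with both actual and forecasted alerts delivered to an SNS topic; the budget name, dollar amount, account ID, and topic ARN are hypothetical placeholders.

```python
# Sketch: request payloads for a monthly cost budget with ACTUAL and
# FORECASTED alerts at 80% of the budgeted amount. Names and ARNs are
# hypothetical placeholders.

def build_monthly_cost_budget(name, limit_usd, sns_topic_arn):
    budget = {
        "BudgetName": name,
        "BudgetLimit": {"Amount": str(limit_usd), "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    }
    notifications = [
        {
            "Notification": {
                "NotificationType": notification_type,  # ACTUAL or FORECASTED
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,                      # percent of budget
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "SNS", "Address": sns_topic_arn}
            ],
        }
        for notification_type in ("ACTUAL", "FORECASTED")
    ]
    return budget, notifications

# Usage sketch (requires AWS credentials):
# import boto3
# budget, notifications = build_monthly_cost_budget(
#     "monthly-cost-budget", 1000,
#     "arn:aws:sns:us-east-1:111122223333:budget-alerts")
# boto3.client("budgets").create_budget(
#     AccountId="111122223333", Budget=budget,
#     NotificationsWithSubscribers=notifications)
```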

      AWS Budgets reports allow you to configure a report to monitor the performance of up to 50 budgets on a daily, weekly, or monthly cadence and deliver that report to up to 50 email addresses.

      AWS Cost Anomaly Detection is a service that uses machine learning (ML) to identify anomalous spend and its root cause so you can prevent unwanted spend in the future. Alerts are sent when the cost monitor detects an anomaly that is above your defined threshold. For example, if the cost monitor establishes your normal spend pattern as $500 and you set a $100 threshold, then alert recipients will get anomaly notifications when your daily spend exceeds $600.

      AWS Cost Anomaly Detection helps you evaluate your spend patterns using ML methods to minimize false positive alerts. For example, you can evaluate weekly or monthly seasonality and organic growth. It will also help determine the root cause of the anomaly, such as the account, service, Region, or usage type that is driving the cost increase. 

      AWS Cost Anomaly Detection example: Assume that you are running a single Amazon RDS instance at a cost of $200/month in your AWS account. You decide to set up a cost monitor with a $50 threshold and choose to monitor your spend by AWS service. Consistently running this RDS instance would establish a $200 baseline for RDS within the account. If you were to set up a second, identical RDS instance in the account, this would be detected as an anomaly, and you would be alerted because spend has exceeded the $250 threshold (the $200 baseline plus the $50 threshold). Running this second RDS instance consistently in the account would then increase the RDS baseline to $400, establishing a new RDS threshold of $450.
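      The alerting arithmetic in the example above can be sketched in a few lines: an alert fires when observed spend exceeds the established baseline plus your configured threshold.

```python
# Sketch of the example's alerting arithmetic. The baseline is learned from
# consistent spend; the threshold is the value you configure on the monitor.

def should_alert(baseline, threshold, observed_spend):
    """Return True when spend exceeds baseline + threshold."""
    return observed_spend > baseline + threshold

# First RDS instance establishes a $200 baseline with a $50 threshold.
print(should_alert(200, 50, 200))  # normal spend -> False
# A second identical instance doubles spend to $400, exceeding $250.
print(should_alert(200, 50, 400))  # anomaly -> True
# Once the new spend is consistent, the baseline rises to $400.
print(should_alert(400, 50, 400))  # new normal -> False
```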

      As a best practice, we recommend going beyond monitoring by AWS service and also segmenting your cost monitors so that they detect anomalies at a finer granularity. For example, you can segment cost monitors by Cost Allocation tags, Cost Categories, or member accounts using AWS Organizations.

      To get started with AWS Cost Anomaly Detection, follow these steps:

      1. AWS Cost Anomaly Detection is a feature within Cost Explorer. To access AWS Cost Anomaly Detection, you must first enable Cost Explorer.
      2. Open Cost Explorer and on the navigation pane, choose Cost Anomaly Detection.
      3. Create a Cost Monitor to begin the detection of anomalies with your accounts. 
      4. As you create your cost monitors, you can configure your alert subscriptions specific to each monitor.
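      The steps above can also be performed through the Cost Explorer API (the boto3 `ce` client's `create_anomaly_monitor` and `create_anomaly_subscription` calls). The sketch below builds a per-service monitor and a daily email subscription; the names, email address, and threshold are hypothetical placeholders.

```python
# Sketch: payloads for a DIMENSIONAL cost monitor segmented by SERVICE and
# a daily alert subscription. Names, emails, and thresholds are hypothetical.

def build_anomaly_monitor(name):
    # A DIMENSIONAL monitor segmented by SERVICE tracks each AWS service.
    return {
        "MonitorName": name,
        "MonitorType": "DIMENSIONAL",
        "MonitorDimension": "SERVICE",
    }

def build_anomaly_subscription(name, monitor_arns, email, threshold_usd):
    return {
        "SubscriptionName": name,
        "MonitorArnList": monitor_arns,
        "Subscribers": [{"Type": "EMAIL", "Address": email}],
        "Threshold": float(threshold_usd),  # alert on impact above this amount
        "Frequency": "DAILY",               # or IMMEDIATE (SNS) / WEEKLY
    }

# Usage sketch (requires AWS credentials):
# import boto3
# ce = boto3.client("ce")
# arn = ce.create_anomaly_monitor(
#     AnomalyMonitor=build_anomaly_monitor("service-monitor"))["MonitorArn"]
# ce.create_anomaly_subscription(
#     AnomalySubscription=build_anomaly_subscription(
#         "daily-summary", [arn], "finops@example.com", 100))
```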

      Perform audit-based cost optimization assessments for deployed workloads

      Cost optimization is a continuous process that should be practiced during the lifecycle of all cloud workloads. Opportunities to optimize cost develop over time due to changes in demand, lapses in oversight and governance, or deviations from architectural best practices.

      With an AWS Business Support or Enterprise Support plan, you will have access to AWS Trusted Advisor cost optimization checks. Trusted Advisor cost optimization checks recommend potential monthly savings, identify underutilized resources, and provide Reserved Instance (RI) and Savings Plans recommendations.

      To view the Cost Optimization results and potential monthly savings with Trusted Advisor, open the Trusted Advisor console. Then, choose Cost Optimization in the navigation pane to view the potential monthly savings, cost optimization checks, and related recommendations. You will see 14 checks in the Cost Optimization pillar. You can weigh the status and recommendations provided against your requirements and then act on or disregard the recommendations based on your business needs.
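      The same review can be scripted against the Support API (the boto3 `support` client, available with a Business or Enterprise Support plan). The sketch below filters checks down to the cost optimization pillar and surfaces those whose status needs attention; the sample records in the test are hypothetical.

```python
# Sketch: filtering Trusted Advisor checks to the cost optimization pillar
# and listing those flagged as "warning" or "error". The check data would
# come from the Support API; this function is pure filtering logic.

def cost_checks_needing_action(checks, statuses):
    """Return (check name, status) pairs for flagged cost checks."""
    status_by_id = {s["checkId"]: s["status"] for s in statuses}
    return [
        (c["name"], status_by_id.get(c["id"], "unknown"))
        for c in checks
        if c["category"] == "cost_optimizing"
        and status_by_id.get(c["id"]) in ("warning", "error")
    ]

# Usage sketch (requires AWS credentials and Business/Enterprise Support):
# import boto3
# support = boto3.client("support", region_name="us-east-1")
# checks = support.describe_trusted_advisor_checks(language="en")["checks"]
# statuses = support.describe_trusted_advisor_check_summaries(
#     checkIds=[c["id"] for c in checks])["summaries"]
# print(cost_checks_needing_action(checks, statuses))
```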

      AWS provides tools and service features to help you optimize your existing environment. AWS Cost Explorer provides Amazon EC2 rightsizing recommendations based on utilization data from the previous 14 days by default, or the previous 3 months with enhanced metrics. Recommendations based on memory utilization are also possible with the installation of the CloudWatch agent. AWS Compute Optimizer goes beyond the Cost Explorer recommendations by providing not only Amazon EC2 rightsizing recommendations, but also Auto Scaling group, Amazon Elastic Block Store (Amazon EBS), and Lambda rightsizing recommendations.

      Get started by opening the AWS Compute Optimizer service. Select View recommendations for EC2 instances. The EC2 instances recommendations page lists each of your current instances, their finding classifications, finding reasons, platform differences, current instance type, and current hourly price for the selected purchasing option. The top recommendation from Compute Optimizer is listed next to each of your instances, and it includes the recommended instance type, the hourly price for the selected purchasing option, and the price difference between your current instance and the recommendation.
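      The comparison Compute Optimizer surfaces can be sketched as plain logic: pick the top-ranked recommendation option and compute the price difference against the current instance. The data shape below loosely mirrors a recommendations response; the instance types and prices are hypothetical.

```python
# Sketch: selecting the top-ranked recommendation option for an instance and
# computing the hourly price difference. Prices and types are hypothetical.

def top_recommendation(current_type, current_hourly, options):
    best = min(options, key=lambda o: o["rank"])  # rank 1 is the top option
    return {
        "current": current_type,
        "recommended": best["instanceType"],
        "hourly_difference": round(best["hourlyPrice"] - current_hourly, 4),
    }

result = top_recommendation(
    "m5.2xlarge", 0.384,
    [
        {"instanceType": "m5.xlarge", "hourlyPrice": 0.192, "rank": 1},
        {"instanceType": "m6i.xlarge", "hourlyPrice": 0.192, "rank": 2},
    ],
)
print(result["recommended"], result["hourly_difference"])
```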

      Amazon S3 and Amazon Elastic File System (Amazon EFS) provide the ability to use different data tiers to optimize costs based on access patterns. In addition, with S3 Intelligent-Tiering and EFS Intelligent-Tiering, Amazon S3 and Amazon EFS automatically move data between storage tiers based on past and predicted future access patterns.
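      For S3, one way to adopt Intelligent-Tiering is a lifecycle rule that transitions objects into the INTELLIGENT_TIERING storage class. The sketch below builds such a rule for the S3 API (the boto3 `s3` client's `put_bucket_lifecycle_configuration` call); the bucket name and prefix are hypothetical placeholders.

```python
# Sketch: an S3 lifecycle configuration that transitions objects under a
# prefix to the INTELLIGENT_TIERING storage class. Names are hypothetical.

def build_intelligent_tiering_lifecycle(prefix="", days=0):
    return {
        "Rules": [
            {
                "ID": "move-to-intelligent-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": prefix},
                "Transitions": [
                    {"Days": days, "StorageClass": "INTELLIGENT_TIERING"}
                ],
            }
        ]
    }

# Usage sketch (requires AWS credentials):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-data-bucket",
#     LifecycleConfiguration=build_intelligent_tiering_lifecycle())
```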

      Implement a mechanism to report on more detailed, custom views of cloud cost and usage

      To further improve transparency and visibility, you should implement mechanisms to report detailed, custom views of cost and usage. Cost Explorer and CUR are good first steps in reporting and showback. To create a dashboard visualization of your cloud costs and usage, we recommend deploying the Cloud Intelligence Dashboards (CIDs). CIDs visualize data for both single- and multi-payer customers, are built using Athena and QuickSight, and are fully customizable, allowing filtering by tags.

      Additional dashboards include:

      • Cost and Usage Dashboards Operations Solution (CUDOS): An in-depth, granular, and recommendation-driven dashboard to help you dive deep into cost and usage and to fine-tune efficiency
      • Key Performance Indicator (KPI) Dashboard: A dashboard that helps your organization combine DevOps and IT infrastructure with Finance and C-Suite to grow more efficiently and effectively on AWS
      Scenario: Measure efficiency and optimize cost

      • Implement a unit metrics approach for measuring cost efficiency
      • Implement a process to analyze your spend patterns and detect anomalies
      Overview

      Cost is treated as an essential component when performing tradeoffs between various designs and architectures during the early stages of product ideation. Cost optimization can happen across your entire environment and workloads, from the design and architectural stage up until the point when resources have been launched in the cloud. Cost optimization also includes proof-of-concept (POC) designs, so you can estimate cost before moving any resources to non-production or production environments.

      Your environment can benefit from using a centralized model to acquire and manage billing and costs, because you can reserve resources or purchase Savings Plans that can be shared across all teams. Each team will get information, recommendations, and different tools from a centralized team to optimize usage and workloads. These teams should implement resource modifications before commitments are in place to benefit from discounted prices. By grouping resources and managing them centrally throughout your environment, you may also benefit from volume-based discounts based on consolidated billing across teams. Additionally, managing network and licensing centrally will reduce the overall cost and overhead for individual teams.

      Each team still needs to analyze the resources they own to identify cloud waste associated with each workload. Teams can use recommendations from a centralized finance team or cloud-based tools built for this purpose. Recommendations may include compute size, different tiers of storage available, and any committed pricing models. Additionally, each team should analyze and identify opportunities to modernize their environments so they can use the newest tools and technologies. Newer tools and technologies often include better performance ratios, leading to reduced cost for workloads.

      Implementation

      Implement a unit metrics approach for measuring cost efficiency

      Understanding how well you are performing is another important aspect of CFM. This is accomplished by measuring KPIs that define what metric is being measured and the target for that metric. 

      A good KPI measures efficiency rather than raw consumption or cost. This means that the KPI metric doesn't just measure how much of something is consumed or how much it costs; efficient KPIs measure how much output you're getting per unit of cost or usage. KPIs that strictly measure cost only have an outcome of reducing cost. For growing customers that are adopting new services, these types of metrics may not add value as costs increase. Measuring efficiency is a better approach, as it shows whether the consumed cloud services result in more or less output per dollar spent for your organization.

      KPIs can be broken down into two major categories: IT and business metrics. An IT metric measures the efficiency of deployed cloud resources. This means measuring cost per resource used. A more efficient organization will see increasing usage with flatter or decreasing spend. Examples of IT metrics are Amazon S3 cost per GB-month, Amazon EC2 cost per hour, or Amazon EBS gp3 volume cost per GB-month.

      Business metrics measure efficiency related to business outcomes. This means measuring the cost of cloud spend against business output. A more efficient organization will see their business output increasing against flatter or decreasing cloud spend. Examples of business metrics are cost per ride, cost per transaction, or cost per shipped product.
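      A unit metric like cost per ride boils down to simple arithmetic: divide cloud spend by business output for each period, then watch the trend. The figures below are hypothetical.

```python
# Sketch: computing a unit-cost KPI per period and its trend. A flat or
# falling cost per unit alongside growing output indicates improving
# efficiency. The figures are hypothetical.

def unit_cost(cloud_spend, business_output):
    """Cost per unit of output (e.g. dollars per ride or per transaction)."""
    return round(cloud_spend / business_output, 4)

# Monthly (spend, rides served): spend grows, but efficiency improves.
history = [(10_000, 50_000), (11_000, 62_000), (12_000, 80_000)]
kpis = [unit_cost(spend, rides) for spend, rides in history]
print(kpis)                # cost per ride, month over month
print(kpis[-1] < kpis[0])  # True: each ride now costs less than before
```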

      You should work with business, finance, and IT stakeholders to define the IT and business metrics that will be used to measure efficiency. To create a good KPI metric, stakeholders should find metrics that meet the following criteria:

      • Material: An efficient metric should be significant to the core competencies of CFM. You should determine if the metric helps you achieve your business goals.
      • Volatile: An efficient metric has output that varies frequently. Metrics that do not change convey little information, while volatile metrics reveal seasonal patterns or one-off spikes. 
      • Understandable: An efficient metric should be easy to understand and not require a deep analysis or explanation of what the metric is measuring. Metrics that are not easy to understand may be misused or ignored.
      • Actionable: An efficient metric identifies a proactive business action. An example would be a rightsizing recommendation or terminating idle resources. These may also be longer-term metrics such as changing S3 storage classes for infrequently accessed data. 
      • Unique: The number of metrics used should be inversely proportional to the size of the group consuming the metrics. As a group gets larger, the group’s focus will be more strategic, so a smaller number of metrics will be easier for that group to consume.  Smaller groups will be focused on more tactical activities and require a larger number of metrics. 

      Once defined, metrics should be tracked and reported. Data used in business metrics will need to be combined with data from CURs. You can use QuickSight to build dashboards for tracking these metrics. The CIDs also include an optional KPI dashboard that you can use for many IT metrics.

      Implement a process to analyze your spend patterns and detect anomalies

      Once mechanisms are in place to proactively monitor and alert on cloud spend variances and anomalies, you will need a standard operating procedure to act on those alerts. To help automate this, AWS Budgets can run an action when you exceed a cost or usage threshold. These actions can be configured to run automatically or after manual approval. AWS Budgets actions can apply IAM policies or service control policies (SCPs), or target specific EC2 or RDS instances in the account.

      For example, from the management account, a budget action could be configured to remove EC2 provisioning access across the organization through SCPs when EC2 costs reach 90% of the allocated budget. At the linked account level, when costs reach 100% of the allocated budget, development and quality assurance (QA) environments could be turned off without intervention or approval. Once executed, all actions can be reversed, and AWS Budgets will no longer evaluate the action for the remainder of the budgeted period. You can also reset the budget so that the reversed action continues to be evaluated against cost and usage during the same period. A budget reset would be applicable if you increased the budgeted amount after the alert was triggered for a specific reason, such as receiving manager approval to increase your budget.
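      The escalation policy in that example can be expressed as plain decision logic. In practice each tier would map to a configured AWS Budgets action (attaching an SCP, or stopping EC2/RDS instances); the action names below are hypothetical labels, and the percentages mirror the example.

```python
# Sketch of the example's escalation tiers: deny provisioning at 90% of
# budget, stop dev/QA environments at 100%. Action names are hypothetical.

def budget_action(actual_spend, budgeted_amount):
    percent_used = 100 * actual_spend / budgeted_amount
    if percent_used >= 100:
        return "stop-dev-and-qa-instances"      # automatic, no approval
    if percent_used >= 90:
        return "attach-deny-ec2-provisioning-scp"
    return "no-action"

print(budget_action(850, 1000))   # no-action
print(budget_action(920, 1000))   # attach-deny-ec2-provisioning-scp
print(budget_action(1050, 1000))  # stop-dev-and-qa-instances
```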

      When an anomaly is detected, an alert is sent through your preferred method (email, SNS topic, or AWS Chatbot). The alert includes a link to the anomaly details within the console. From that page, you can view the root cause analysis of what caused the anomaly in addition to the cost impact. You can also submit an assessment of whether each anomaly was accurately detected, which helps improve the detection model. If the anomaly is accurate, you will then have to determine how to act on it. Standard operating procedures should be in place to contact appropriate stakeholders to either remediate the spend or provide business justification for the anomaly.

      Scenario: Establishing your cloud financial operations function

      • Identify a CFM program owner, secure executive sponsorship, determine roles/responsibilities, and rules of engagement with other stakeholders
      • Implement a standard operating procedure to investigate and fix cloud spend variances and anomalies
      • Drive organizational cost-awareness
      Overview

      Cloud financial operations focus on functions that allow you to evolve your organization, processes, automation, and tools so you can establish a self-sustaining culture of cost-awareness. Establishing new working partnerships between finance and technology teams helps ensure that stakeholders across your organization have a common understanding of cloud costs. These partnerships allow for more accurate budgeting and cloud spend monitoring. Reporting, education, gamification trainings, and recognition of efficiency wins can drive organizational cost awareness and help foster a cost-aware culture.

      We recommend establishing a centralized function (either an individual or team) that owns CFM activities across your cloud environment. This centralized function provides and controls access to billing and costs tools, co-owns a cross-organizational tagging dictionary, defines cost categories, manages commitment purchases, and is a primary business partner for the finance organization. Additionally, this centralized function is responsible for driving awareness about how teams across the organization are using the cloud, performing budget reviews, and helping with forecasting exercises. 

      The centralized function is also responsible for defining and implementing a cost management tooling strategy. This strategy may require procuring and curating external tools for internal consumption or identifying internal resources to build in-house cost management tools. Tools should automate as many cost management activities as possible to reduce undifferentiated work and to enable scale. 

      In addition to implementing a centralized approach for commitment purchases (such as Savings Plans and RIs), the centralized function:

      • Augments existing business and technical processes to instill cost-awareness (for example, introducing cost into change management, incident management, and service operationalization and readiness processes).
      • Maintains direct relationships with technical, finance, and business stakeholders to raise cross-organizational cost awareness (for example, through hackathons, all-hands meetings, or frugality awards).
      • Ensures cost transparency through KPI development and cost reporting as it pertains to business that the cloud supports.
      • Drives cloud spend concerns (such as budget variances, spend anomalies, root cause identification, and remediation) to closure.

      Additionally, each team that operates their own environment needs to confirm that the products and tools they build support cost awareness. These teams can use recommendations that the centralized function provides to create tools that perform automated audit-based optimization assessments, helping them to be aware of the cost of their workloads. Executive sponsorship of the CFM program ensures that builders are able to design and build cost-aware applications.

      Implementation

      Identify a CFM program owner, secure executive sponsorship, determine roles/responsibilities, and rules of engagement with other stakeholders

      To build momentum and ensure your CFM initiative continues to mature, it is important to identify a single-threaded leader to be the CFM program owner. A single-threaded leader is a leader who is dedicated and accountable to a specific product or program, such as a CFM initiative. The single-threaded leader is responsible for turning strategy into real results, and they are empowered to do so. This individual will be responsible for driving organizational change and taking overall responsibility for CFM at the organization level. As a first step, the CFM program owner may consider taking the AWS Cloud for Finance Professionals course.

      Next, you should secure executive sponsorship, determine roles and responsibilities, and define rules of engagement with other stakeholders. This can be implemented through a Cloud Business Office (CBO), as detailed in the Building a Cloud Operating Model whitepaper. A CBO is one of the functional domains of a Cloud Enablement Engine (CEE), a small group of cloud experts drawn from cross-functional teams. The CEE focuses on the business, governance, and people enablement aspects of cloud adoption and aligns products with end customer needs. The CEE is responsible for:

      • Establishing the overall cloud change strategy to be delivered and enabled by the CEE to drive successful implementation across the organization.
      • Providing alignment between Enterprise Architecture and the CEE.
      • Establishing processes to evaluate and develop new cloud patterns to support teams looking to adopt the cloud.
      • Understanding end customer requirements and demand for cloud products and translating requirements and demand into a prioritized backlog of work.
      • Managing the delivery of items within the cloud platform engineering and CBO backlogs.
      • Providing mechanisms to accurately allocate, forecast, and optimize spending by cloud consumers.
      • Enabling self-service capabilities for consumers and executives to manage current and forecasted spend.
      • Guiding consumer teams through the process of migrating to the cloud including training, deployments, migration, and the transition to steady-state operation.

      With the implementation of a CBO, the right stakeholders, leadership, and business owners can align on CFM activities.

      Drive organizational cost awareness

      You can help drive cost awareness throughout your organization by focusing on reporting, education, gamification, and public recognition of cost optimization wins.

      For reporting, we recommend using AWS Budgets, Cost Anomaly Detection, and the CIDs to share useful data with your organization through email, chat, or a dashboard. This will keep the organization up to date on the latest information on cloud costs. 

      For education, we recommend starting with either the AWS Cloud for Finance Professionals or the AWS Cloud Financial Management for Builders courses. These courses align well with both the finance persona (who needs to better understand cloud economics) and the builder persona (who needs to keep cost in mind when architecting). 

      For gamification, we recommend AWS FinHack, a dynamic CFM event designed to provide a hands-on learning experience for technical and finance audiences with the goal of teaching them how to sustainably manage, predict, and optimize AWS spend. The event helps customers increase the value they get out of their existing AWS spend by exploring the current technical and financial state of their AWS environments. AWS subject matter experts (SMEs) provide guidance throughout the event. 

      Finally, we recommend celebrating cost optimization wins by informing management and program leads of commitments, cost hygiene, or architecture improvements that have resulted in savings for the business. Management should highlight these accomplishments and give additional incentives to other teams responsible for managing spend and costs. 

Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
