Desktop and Application Streaming

Use AWS Lambda to adjust scaling steps and thresholds for Amazon AppStream 2.0

This blog post walks you through creating an event-driven solution with AWS Lambda to change your AppStream 2.0 auto scaling policy based on the time of day.

With Amazon AppStream 2.0’s Fleet Auto Scaling capabilities, you can automatically adjust the size of your AppStream 2.0 fleet to match user demand. However, with certain usage patterns, using the same application auto scaling step sizes all the time can launch unnecessary resources. For example, off-hours usage may not require the same number of instances to be added per scaling action as your peak-hours policy specifies.

By using AWS Lambda to adjust scaling based on the time of day, you can scale your environment to match demand and optimize costs.

Overview

By using Amazon EventBridge to launch an AWS Lambda function, you can set the step sizes of your fleet’s step scaling policies to reflect your usage patterns based on time of day. This solution runs at specific times of day to ensure you are scaling the AppStream 2.0 fleet to meet the demands of your users.

The solution diagram depicts Amazon EventBridge triggering AWS Lambda, which in turn modifies AWS Application Auto Scaling and Amazon CloudWatch. Amazon CloudWatch then triggers AWS Application Auto Scaling to scale Amazon AppStream 2.0 resources.

In this walkthrough you complete the following tasks:

  1. Create an Identity and Access Management (IAM) policy and role for the AWS Lambda function.
  2. Create an AWS Lambda function.
  3. Create two EventBridge rules to run the Lambda function on a schedule.

Prerequisites

For this walkthrough, you need the following prerequisites:

  • An AWS account.
  • An existing Amazon AppStream 2.0 environment.

Read more about setting up Amazon AppStream 2.0 in the getting started guide.
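If you are not sure of the exact name of your fleet, you can list the fleets in your account before you begin. The following is a minimal boto3 sketch, assuming your AWS credentials and default Region are already configured.

boto3 example:

#List the AppStream 2.0 fleets in the current Region (minimal sketch)
from boto3 import client

appstream = client('appstream')
for fleet in appstream.describe_fleets()['Fleets']:
    print(fleet['Name'], fleet['State'])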

Step 1. Create an IAM policy and IAM role for the AWS Lambda function

In this step, you create an IAM policy and attach it to an IAM role that the Lambda function assumes.

  1. Navigate to the IAM console.
  2. In the navigation pane, choose Policies.
  3. Choose Create policy.
  4. Choose the JSON tab.
  5. Copy and paste the JSON policy below.
  6. When you’re done, choose Review policy.
  7. For Name, enter AppStream2LambdaPolicy.
  8. Choose Create policy.

IAM Policy document example:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cloudwatch:PutMetricAlarm",
                "logs:CreateLogStream",
                "application-autoscaling:DescribeScalingPolicies",
                "application-autoscaling:PutScalingPolicy",
                "logs:CreateLogGroup",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}

Now that the IAM policy has been created, create the IAM role for Lambda to assume.

  1. Open the IAM console.
  2. In the navigation pane, choose Roles.
  3. Choose Create role.
  4. Select Lambda, and then choose Next.
  5. In the filter policies search box, enter the name of the policy created in the previous step. When the policy appears in the list, select the box next to the policy name.
  6. Choose Next.
  7. For Role name, enter AppStream2LambdaRole.
  8. Choose Create role.
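If you prefer to script this step rather than use the console, the sketch below creates the same policy and role with boto3. It is a minimal, illustrative example: the file name appstream2-lambda-policy.json is an assumption for where you saved the policy document above, and error handling is omitted.

boto3 example:

#Create the IAM policy and role for the Lambda function (minimal sketch)
from json import dumps
from boto3 import client

iam = client('iam')

#Trust policy that allows the Lambda service to assume the role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole"
    }]
}

#Read the policy document saved earlier (assumed file name)
with open('appstream2-lambda-policy.json') as f:
    policy_document = f.read()

policy = iam.create_policy(
    PolicyName='AppStream2LambdaPolicy',
    PolicyDocument=policy_document)

iam.create_role(
    RoleName='AppStream2LambdaRole',
    AssumeRolePolicyDocument=dumps(trust_policy))

iam.attach_role_policy(
    RoleName='AppStream2LambdaRole',
    PolicyArn=policy['Policy']['Arn'])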

Step 2. Create an AWS Lambda function

In this step, you create a Lambda function, and attach the IAM role created in Step 1.

  1. Open the Lambda console.
  2. Choose Create function.
  3. For Function name, enter AppStream2ScalingFunction.
  4. Select Python 3.9 as the Runtime.
  5. Expand the permissions section, select Use an existing role, and choose AppStream2LambdaRole.
  6. Choose Create function.
  7. Under the Function code section, replace the placeholder text with the code below.
  8. Choose Deploy.

Lambda function example:

#Import required modules
from logging import getLogger,INFO
from boto3 import client
from botocore import exceptions

#Setup logger
logger = getLogger()
logger.setLevel(INFO)

CW = client('cloudwatch') #Configure CloudWatch boto3 client
AA = client('application-autoscaling') #Configure Application Auto Scaling boto3 client

def lambda_handler(event, context): #Lambda handler definition
    try:
        Fleet = event.get('Fleet')
        logger.info('Processing Fleet: ' + Fleet)
        #Describing existing fleet scaling policies.
        ActivePolicies = AA.describe_scaling_policies(
            ServiceNamespace='appstream',
            ResourceId='fleet/' + Fleet,
            ScalableDimension='appstream:fleet:DesiredCapacity')
        #Checking if fleet has existing scaling policies.
        if ActivePolicies.get('ScalingPolicies'):
            logger.info('Found scaling policies.')
            #Assigning variables for existing fleet values.
            for Policy in ActivePolicies['ScalingPolicies']:
                #Assigning scale out variables
                if Policy['StepScalingPolicyConfiguration']['StepAdjustments'][0]['ScalingAdjustment'] > 0:
                    OutPolicy = Policy['PolicyName']
                    OutAlarm = Policy['Alarms'][0]['AlarmName']
                #Assigning scale in variables
                if Policy['StepScalingPolicyConfiguration']['StepAdjustments'][0]['ScalingAdjustment'] < 0:
                    InPolicy = Policy['PolicyName']
                    InAlarm = Policy['Alarms'][0]['AlarmName']
        else:
            #Log and return failure if fleet does not have policies set.
            logger.error('No scaling policy set for Fleet.  Please configure initial scaling policies on this AppStream 2.0 Fleet.')
            return 0
        #Set new scaling out policy using event details
        NewScalingOutPolicy = AA.put_scaling_policy(
            PolicyName=OutPolicy,
            ServiceNamespace='appstream',
            ResourceId='fleet/' + Fleet,
            ScalableDimension='appstream:fleet:DesiredCapacity',
            PolicyType='StepScaling',
            StepScalingPolicyConfiguration={
                'AdjustmentType': 'ChangeInCapacity',
                'StepAdjustments': [{'MetricIntervalUpperBound': 0.0,'ScalingAdjustment': event.get('ScaleOutValue')},],
                'Cooldown': 120,
                'MetricAggregationType': 'Average'})
        logger.info('Set scale out step adjustment to: ' + str(event.get('ScaleOutValue')))
        #Set new scaling in policy using event details
        NewScalingInPolicy = AA.put_scaling_policy(
            PolicyName=InPolicy,
            ServiceNamespace='appstream',
            ResourceId='fleet/' + Fleet,
            ScalableDimension='appstream:fleet:DesiredCapacity',
            PolicyType='StepScaling',
            StepScalingPolicyConfiguration={
                'AdjustmentType': 'ChangeInCapacity',
                'StepAdjustments': [{'MetricIntervalLowerBound': 0.0,'ScalingAdjustment': event.get('ScaleInValue')},],
                'Cooldown': 360,
                'MetricAggregationType': 'Average'})
        logger.info('Set scale in step adjustment to: ' + str(event.get('ScaleInValue')))
        #Set new scaling out CloudWatch Alarm using event details
        NewOutAlarm = CW.put_metric_alarm(
            AlarmName=OutAlarm,
            ActionsEnabled=True,
            AlarmActions=[NewScalingOutPolicy['PolicyARN']],
            MetricName=event.get('ScaleOutMetric'),
            Namespace='AWS/AppStream',
            Statistic='Average',
            Dimensions=[{'Name': 'Fleet','Value': Fleet},],
            Period=60,
            EvaluationPeriods=10,
            Threshold=event.get('ScaleOutThreshold'),
            ComparisonOperator=event.get('ScaleOutOperator'))
        logger.info('Set scale out threshold to: ' + event.get('ScaleOutOperator') + ' ' + str(event.get('ScaleOutThreshold')))
        #Set new scaling in CloudWatch Alarm using event details
        NewInAlarm = CW.put_metric_alarm(
            AlarmName=InAlarm,
            ActionsEnabled=True,
            AlarmActions=[NewScalingInPolicy['PolicyARN']],
            MetricName=event.get('ScaleInMetric'),
            Namespace='AWS/AppStream',
            Statistic='Average',
            Dimensions=[{'Name': 'Fleet','Value': Fleet}],
            Period=120,
            EvaluationPeriods=10,
            Threshold=event.get('ScaleInThreshold'),
            ComparisonOperator=event.get('ScaleInOperator'))
        logger.info('Set scale in threshold to: ' + event.get('ScaleInOperator') + ' ' + str(event.get('ScaleInThreshold')))
    except exceptions.ClientError as err:
        #Log Client exception error
        logger.error(err)

Step 3. Create two EventBridge rules to run the Lambda function on a schedule

In this step, you create two EventBridge rules to run the Lambda function created in Step 2.

The first rule triggers at 9 AM UTC. The example JSON specifies that two instances be added when four or fewer AppStream 2.0 instances are available. It also removes two instances when eight or more instances are available.

  1. Open the EventBridge console.
  2. Choose Create rule.
  3. For Name, enter AppStream2Scale9am. Optionally, add a Description.
  4. Select Schedule and choose Next.
  5. For Schedule pattern, leave the default fine-grained schedule selected.
  6. For Cron expression, enter 0 9 * * ? *
  7. Choose Next.
  8. Under Select targets, leave AWS service as the default. For Select a target, choose Lambda function, then select the function created in Step 2 (AppStream2ScalingFunction).
  9. Expand Additional settings. Under Configure input, choose Constant (JSON text).
  10. Enter the JSON example below and replace the following value:
    • <Fleet> with the name of your AppStream 2.0 fleet.
  11. Choose Next.
  12. Add any optional tags, and then choose Next.
  13. Choose Create rule.

JSON text example:

{
   "Fleet": "<Fleet>",
   "ScaleOutValue": 2,
   "ScaleInValue": -2,
   "ScaleOutThreshold": 4,
   "ScaleInThreshold": 8,
   "ScaleOutOperator": "LessThanOrEqualToThreshold",
   "ScaleInOperator": "GreaterThanOrEqualToThreshold",
   "ScaleOutMetric": "AvailableCapacity",
   "ScaleInMetric": "AvailableCapacity" 
}
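After the rule fires, you can confirm that the step sizes and alarms were updated by describing the fleet’s scaling policies. The following is a minimal boto3 sketch; replace <Fleet> with your fleet name.

boto3 example:

#Confirm the current step adjustments for a fleet (minimal sketch)
from boto3 import client

autoscaling = client('application-autoscaling')

policies = autoscaling.describe_scaling_policies(
    ServiceNamespace='appstream',
    ResourceId='fleet/<Fleet>',
    ScalableDimension='appstream:fleet:DesiredCapacity')

for policy in policies['ScalingPolicies']:
    steps = policy['StepScalingPolicyConfiguration']['StepAdjustments']
    print(policy['PolicyName'], steps)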

The second rule triggers at 5 PM UTC. The example JSON specifies that one instance be added when one or fewer AppStream 2.0 instances are available. It also removes two instances when three or more instances are available.

  1. Open the EventBridge console.
  2. Choose Create rule.
  3. For Name, enter AppStream2Scale5pm. Optionally, add a Description.
  4. Select Schedule and choose Next.
  5. For Schedule pattern, leave the default fine-grained schedule selected.
  6. For Cron expression, enter 0 17 * * ? *
  7. Choose Next.
  8. Under Select targets, leave AWS service as the default. For Select a target, choose Lambda function, then select the function created in Step 2 (AppStream2ScalingFunction).
  9. Expand Additional settings. Under Configure input, choose Constant (JSON text).
  10. Enter the JSON example below and replace the following value:
    • <Fleet> with the name of your AppStream 2.0 fleet.
  11. Choose Next.
  12. Add any optional tags, and then choose Next.
  13. Choose Create rule.

JSON text example:

{
   "Fleet": "<Fleet>",
   "ScaleOutValue": 1,
   "ScaleInValue": -2,
   "ScaleOutThreshold": 1,
   "ScaleInThreshold": 3,
   "ScaleOutOperator": "LessThanOrEqualToThreshold",
   "ScaleInOperator": "GreaterThanOrEqualToThreshold",
   "ScaleOutMetric": "AvailableCapacity",
   "ScaleInMetric": "AvailableCapacity"
}
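If you would rather create the scheduled rules programmatically, the sketch below creates the 5 PM rule with boto3 and points it at the Lambda function. It is a minimal example under a few assumptions: the function ARN shown is a placeholder for your Region and account ID, and the statement ID used for the Lambda permission is arbitrary.

boto3 example:

#Create the 5 PM scheduled rule and target the Lambda function (minimal sketch)
from json import dumps
from boto3 import client

events = client('events')
lambda_client = client('lambda')

#Placeholder ARN - replace the Region and account ID with your own
function_arn = 'arn:aws:lambda:<region>:<account-id>:function:AppStream2ScalingFunction'

rule = events.put_rule(
    Name='AppStream2Scale5pm',
    ScheduleExpression='cron(0 17 * * ? *)',
    State='ENABLED')

#Allow EventBridge to invoke the Lambda function
lambda_client.add_permission(
    FunctionName='AppStream2ScalingFunction',
    StatementId='AppStream2Scale5pmInvoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn'])

#Constant JSON input passed to the function (same values as the example above)
payload = {
    "Fleet": "<Fleet>",
    "ScaleOutValue": 1,
    "ScaleInValue": -2,
    "ScaleOutThreshold": 1,
    "ScaleInThreshold": 3,
    "ScaleOutOperator": "LessThanOrEqualToThreshold",
    "ScaleInOperator": "GreaterThanOrEqualToThreshold",
    "ScaleOutMetric": "AvailableCapacity",
    "ScaleInMetric": "AvailableCapacity"
}

events.put_targets(
    Rule='AppStream2Scale5pm',
    Targets=[{'Id': 'AppStream2ScalingFunction',
              'Arn': function_arn,
              'Input': dumps(payload)}])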

Clean up

To avoid incurring future charges, remove the resources created in this walkthrough. Delete the EventBridge rules, the Lambda function, the IAM policy and role, and any AppStream 2.0 resources created by the automation.

Conclusion

The EventBridge rules trigger the Lambda function at scheduled times. The function changes your AppStream 2.0 Fleet Auto Scaling policies to match your usage patterns throughout the day. With this approach, you can optimize costs by using appropriately sized steps for your environment at any time of day. If your usage pattern has more than two phases in a day, configure additional EventBridge rules to adjust your AppStream 2.0 scaling policy step sizes throughout the day.

Kellie Cottingame is a Senior Partner Solutions Architect at Amazon Web Services. In his 4+ years of experience working with AWS, he has had the opportunity to work for the Amazon AppStream 2.0 service team and the AWS Professional Services End User Computing (EUC) global specialty practice group. He is passionate about helping businesses leverage AWS EUC to achieve their business goals.
I am an AWS Cloud Infrastructure Architect and work on delivering AWS solutions to customers across the globe.