Desktop and Application Streaming
Creating custom logging and Amazon CloudWatch alerting in Amazon AppStream 2.0
Amazon AppStream 2.0 fleet instances are ephemeral. Because of this, application event logs are lost with the instance once the streaming session ends. If your AppStream 2.0 users have an issue with an application during their streaming session, it can be difficult to troubleshoot without detailed log data.
Customers often ask how to configure alarms that notify them when a desktop application has an issue, or how to store the application logs from the fleet instances when they use AppStream 2.0 to stream their desktop applications. This post shows one way you can use Amazon Athena, Amazon CloudWatch, and Amazon SNS to store and search application logs and to be notified when an issue arises on your AppStream 2.0 fleet instances.
Walkthrough
This post walks you through the following steps:
- Configure automatic alerting with CloudWatch to notify you in real time when your AppStream 2.0 users are experiencing errors.
- Configure your AppStream 2.0 fleets to automatically upload log data to Amazon S3 as application events occur.
- Query the log data uploaded to Amazon S3 with an Amazon Athena database.
Prerequisites
- An SNS topic with an email address subscription
- An S3 bucket
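If you don't already have these prerequisites, the following sketch shows one way to create them with the AWS Tools for PowerShell. The topic name, email address, bucket name, and Region are placeholders; replace them with your own values. The email subscription must be confirmed from the recipient's inbox before notifications are delivered.
Import-Module AWSPowershell
# Create the SNS topic and subscribe an email address to it (placeholder values).
$topicArn = New-SNSTopic -Name "as2-eventlog-alerts" -Region us-east-2
Connect-SNSNotification -TopicArn $topicArn -Protocol email -Endpoint "admin@example.com" -Region us-east-2
# Create the S3 bucket that the event trigger script uploads log files to.
New-S3Bucket -BucketName "examplecorp-eventlog-bucket-us-east-2" -Region us-east-2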
Create a test Windows Event Log and Windows scheduled task
- In the AppStream 2.0 console, choose Images and launch an AppStream 2.0 image builder.
- When the image builder is ready, log in to the instance as the administrator.
- In an elevated PowerShell prompt, run the following command to create the event source for the test event log entry.
New-EventLog –LogName "Application" –Source "CustomApplication"
- Run the following PowerShell command to generate a test event in the newly created event log source.
Write-EventLog -LogName "Application" -Source "CustomApplication" -EventId 1000 -Message "This is a test event."
- With the test event created, open the Windows Event Viewer, navigate to Windows Logs, Application, and locate the newly created event.
- Open the context (right-click) menu for the event, and choose Attach a Task to this Event.
- Optionally, you can use this task template XML to skip creating the task XML and modifying it with the Value Queries branch. Update the XML file with the arguments for your script as well as the Start in value. After making those changes, import the XML as detailed in the import step later in this section.
- Configure the task to execute a program, using the following settings:
- Program:
PowerShell.exe
- Arguments:
-windowstyle hidden .\event_trigger.ps1 -eventRecordID $(eventRecordID) -eventChannel $(eventChannel) -eventSeverity $(eventSeverity) -NoProfile -NonInteractive
- Start in:
C:\Scripts
- After the task has been created, modify the task to run as the built-in users group. Also modify the event trigger and remove the Event ID value. The event trigger script catches and processes any event created in this log channel and source.
- On the Settings tab, for If the task is already running, then the following rule applies, select Queue a new instance. For Stop the task if it runs longer than, select 1 hour.
- After you change the event, open the context (right-click) menu for the task. Export the task to an XML file, and save it to the desktop.
- Open the exported scheduled task’s XML file in Windows Notepad (or a Unicode-aware text editor of your choice), and add the following Value Queries branch to the EventTrigger branch.
<ValueQueries>
  <Value name="eventChannel">Event/System/Channel</Value>
  <Value name="eventRecordID">Event/System/EventRecordID</Value>
  <Value name="eventSeverity">Event/System/Level</Value>
</ValueQueries>
- After editing, save and close the XML file.
- Open Windows Task Scheduler, and delete the previously created task that you exported the XML from. After deleting the old task, create a new task by importing the XML you just modified.
- Verify that the task's settings are correct, and then close the Task Scheduler. If you prefer to script these task modifications, see the sketch that follows this list.
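The following sketch shows one way to set the task principal and re-import the edited XML with the ScheduledTasks module. The task name and XML path are placeholders based on this walkthrough; replace them with your own values.
# Placeholder task name and exported XML path; replace with your own values.
$taskName = "CustomApplication_event_trigger"
$xmlPath = "$env:USERPROFILE\Desktop\event_trigger_task.xml"
# Run the task as the built-in Users group so it fires for any signed-in user.
Set-ScheduledTask -TaskName $taskName -Principal (New-ScheduledTaskPrincipal -GroupId "Users")
# Replace the task with the version defined in the edited XML.
Unregister-ScheduledTask -TaskName $taskName -Confirm:$false
Register-ScheduledTask -TaskName $taskName -Xml (Get-Content $xmlPath -Raw)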
Configure the event trigger PowerShell script
In the folder C:\Scripts, create a new PowerShell script named event_trigger.ps1 with the following content.
event_trigger.ps1
Param($eventRecordID, $eventChannel, $eventSeverity)

Import-Module AWSPowershell

# Local log file for troubleshooting this trigger invocation.
$logFile = "$Env:TEMP\AS2\Logging\$eventRecordID-event.log"
New-Item -Path $logFile -ItemType File -Force | Out-Null

Function Write-Log {
    Param ([string]$message)
    $stamp = Get-Date -Format "yyyy/MM/dd HH:mm:ss"
    $logoutput = "$stamp $message"
    Add-Content $logFile -Value $logoutput
}

# Skip processing on image builder instances.
$AS2InstanceType = (Get-Item Env:AppStream_Resource_Type).Value
if ($AS2InstanceType -eq "image-builder") {
    Write-Log "Image Builder, exiting..."
    exit
}

# Retrieve the triggering event by its record ID and extract its details.
$event = Get-WinEvent -LogName $eventChannel -FilterXPath "<QueryList><Query Id='0' Path='$eventChannel'><Select Path='$eventChannel'>*[System[(EventRecordID=$eventRecordID)]]</Select></Query></QueryList>"
$eventdetails = ([xml]$event.ToXml())
$eventtime = Get-Date ($eventdetails.Event.System.TimeCreated.SystemTime) -Format "yyyy/MM/dd HH:mm:ss"
$eventsource = $eventdetails.Event.System.Provider.Name
$eventmessage = $eventdetails.Event.EventData.Data

# Gather session details from the AppStream 2.0 instance environment variables.
$as2SessionID = (Get-Item Env:AppStream_Session_ID).Value
$AS2UserName = (Get-Item Env:AppStream_UserName).Value
$AS2StackName = (Get-Item Env:AppStream_Stack_Name).Value
$AS2FleetName = (Get-Item Env:AppStream_Resource_Name).Value
$InstanceAWSRegion = (Get-Item Env:AWS_Region).Value
$BucketName = "examplecorp-eventlog-bucket-$InstanceAWSRegion" #Update with your S3 bucket name
$errorlogfile = "$as2SessionID-error-log.csv"
$warninfologfile = "$as2SessionID-warninfo-log.csv"
$errorlogfilepath = "$home\Documents\AS2\Logging\$errorlogfile"
$warninfologfilepath = "$home\Documents\AS2\Logging\$warninfologfile"

if ($eventSeverity -eq 2) {
    Write-Log "Event is an Error."
    $eventSeverityName = "Error"
    if (!(Test-Path -Path $errorlogfilepath)) {
        New-Item -Path $errorlogfilepath -ItemType File -Force | Out-Null
    }
    $count = (Get-Content $errorlogfilepath | Measure-Object).Count
    if ($count -eq 0) {
        # First error for this session: record it, publish the metric, and create the alarm.
        $Row = "$($eventtime),$($eventRecordID),$($eventSeverityName),$($eventsource),$($as2SessionID),$($eventmessage),$($AS2StackName),$($AS2FleetName),$($AS2UserName)"
        Add-Content -Path $errorlogfilepath -Value $Row
        $count = (Get-Content $errorlogfilepath | Measure-Object).Count
        Write-Log "Error count is $count."
        $Metric = [Amazon.CloudWatch.Model.MetricDatum]::new()
        $Dimension = [Amazon.CloudWatch.Model.Dimension]::new()
        $Dimension.Name = 'AS2Session'
        $Dimension.Value = $as2SessionID
        $Metric.MetricName = 'ErrorCount'
        $Metric.Value = $count
        $Metric.Unit = "Count"
        $Metric.TimestampUtc = (Get-Date ($eventtime)).ToUniversalTime()
        $Metric.Dimensions = $Dimension
        Write-Log "Writing to CloudWatch metric"
        try {
            Write-CWMetricData -MetricData $Metric -Namespace $AS2FleetName -ProfileName appstream_machine_role -ErrorAction Stop
        }
        catch {
            Write-Log $_
        }
        $alertparams = @{
            "AlarmName"          = "Alarm-AS2-Session-$as2SessionID";
            "AlarmDescription"   = "Evaluates custom application errors for AS2 session $as2SessionID";
            "ActionsEnabled"     = $true;
            "AlarmAction"        = "arn:aws:sns:us-east-2:123456789012:MyTopic" #Update with your SNS topic ARN
            "ComparisonOperator" = "GreaterThanOrEqualToThreshold";
            "Dimensions"         = $Dimension;
            "EvaluationPeriod"   = 2;
            "MetricName"         = "ErrorCount";
            "Namespace"          = "$AS2FleetName";
            "Period"             = 120;
            "Statistic"          = "SampleCount";
            "Threshold"          = 5;
            "TreatMissingData"   = "ignore";
            "Unit"               = "Count"
        }
        Write-Log "Creating CloudWatch Metric Alert"
        try {
            Write-CWMetricAlarm @alertparams -ProfileName appstream_machine_role -ErrorAction Stop
        }
        catch {
            Write-Log $_
        }
    }
    elseif ($count -gt 0) {
        # Subsequent errors: append the row and publish the updated count; the alarm already exists.
        $Row = "$($eventtime),$($eventRecordID),$($eventSeverityName),$($eventsource),$($as2SessionID),$($eventmessage),$($AS2StackName),$($AS2FleetName),$($AS2UserName)"
        Add-Content -Path $errorlogfilepath -Value $Row
        $count = (Get-Content $errorlogfilepath | Measure-Object).Count
        Write-Log "Error count is $count."
        $Metric = [Amazon.CloudWatch.Model.MetricDatum]::new()
        $Dimension = [Amazon.CloudWatch.Model.Dimension]::new()
        $Dimension.Name = 'AS2Session'
        $Dimension.Value = $as2SessionID
        $Metric.MetricName = 'ErrorCount'
        $Metric.Value = $count
        $Metric.Unit = "Count"
        $Metric.TimestampUtc = (Get-Date ($eventtime)).ToUniversalTime()
        $Metric.Dimensions = $Dimension
        Write-Log "Writing to CloudWatch metric"
        try {
            Write-CWMetricData -MetricData $Metric -Namespace $AS2FleetName -ProfileName appstream_machine_role -ErrorAction Stop
        }
        catch {
            Write-Log $_
        }
    }
    # Upload the session's error log to S3.
    $S3ObjectPath = "$($AS2FleetName)/$($as2SessionID)/ErrorLogs/"
    Write-Log "Uploading event data to S3."
    try {
        Write-S3Object -BucketName $BucketName -File $errorlogfilepath -Key "$($S3ObjectPath)$($errorlogfile)" -ProfileName appstream_machine_role -ErrorAction Stop
    }
    catch {
        Write-Log $_
    }
}
elseif ($eventSeverity -gt 2) {
    # Warnings (level 3) and informational events (level 4) are logged to S3 only.
    if ($eventSeverity -eq 3) {
        Write-Log "Event is a Warning."
        $eventSeverityName = "Warning"
    }
    elseif ($eventSeverity -eq 4) {
        Write-Log "Event is Informational."
        $eventSeverityName = "Information"
    }
    if (!(Test-Path -Path $warninfologfilepath)) {
        New-Item -Path $warninfologfilepath -ItemType File -Force | Out-Null
    }
    $S3ObjectPath = "$($AS2FleetName)/$($as2SessionID)/WarnInfoLogs/"
    $Row = "$($eventtime),$($eventRecordID),$($eventSeverityName),$($eventsource),$($as2SessionID),$($eventmessage),$($AS2StackName),$($AS2FleetName),$($AS2UserName)"
    Add-Content -Path $warninfologfilepath -Value $Row
    Write-Log "Uploading event data to S3."
    try {
        Write-S3Object -BucketName $BucketName -File $warninfologfilepath -Key "$($S3ObjectPath)$($warninfologfile)" -ProfileName appstream_machine_role -ErrorAction Stop
    }
    catch {
        Write-Log $_
    }
}
else {
    Write-Log "No event data available. Exiting..."
    exit
}
The script is triggered on event creation and processes the event based on its severity. Warning and informational events have their details sent only to S3. Errors have their details sent to S3 and also published as a CloudWatch metric. The ErrorCount metric is created in a namespace defined by the AppStream 2.0 instance's fleet name.
If enough errors occur in the defined evaluation period, the CloudWatch alarm, created after the first error occurs, alerts you through your SNS topic. Each metric and alarm is unique; the dimension for each is defined by the user's AppStream 2.0 session ID.
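To confirm that the per-session alarms exist and to check their state, you can list them by their name prefix. This is a minimal sketch, assuming your credentials are allowed to call the CloudWatch DescribeAlarms API:
# List the per-session alarms created by the event trigger script.
Get-CWAlarm -AlarmNamePrefix "Alarm-AS2-Session-" |
    Select-Object AlarmName, StateValue, StateUpdatedTimestamp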
Edit the following variables within the script:
- The variable $BucketName should be the name of the S3 bucket that the log details upload to.
- The AlarmAction parameter in the $alertparams hashtable should be updated with the ARN of your SNS topic.
After the variables have been edited, save and close the script.
Create your AppStream 2.0 image
- From the image builder instance’s desktop, launch Image Assistant.
- When Image Assistant is open, add PowerShell as an application. Add any other applications that you might require for your image.
- Proceed with the normal image creation process. For more information, see Create a Custom AppStream 2.0 Image by Using the AppStream 2.0 Console.
Create an Athena database
While the image is being created, create an Athena database. This allows you to parse the log files the event trigger script uploads to S3.
- In the Athena console, in the query editor, run the following SQL query to create a new database:
CREATE DATABASE IF NOT EXISTS <new_db_name>
- When the database has been created, run the following SQL query. This query creates a table in the database along with the required column names. It also defines your S3 bucket as the source for the data.
CREATE EXTERNAL TABLE IF NOT EXISTS <db_name>.<new_table_name> (
  `EventTime` string,
  `EventID` string,
  `EventSeverity` string,
  `EventSource` string,
  `SessionID` string,
  `EventMessage` string,
  `StackName` string,
  `FleetName` string,
  `UserName` string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
  'serialization.format' = ',',
  'field.delim' = ','
)
LOCATION 's3://<S3_Location>/'
TBLPROPERTIES ('has_encrypted_data'='false');
Create and configure an IAM role
With your Athena database created and configured, now create the IAM role for your fleet instances to use in interacting with S3 and CloudWatch.
- In the IAM console, choose Policies, Create policy.
- Choose the JSON tab. Copy and paste the following JSON policy into the policy document field:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "cloudwatch:PutMetricAlarm", "s3:PutObject", "cloudwatch:PutMetricData" ], "Resource": "*" } ] }
- Choose Review policy. Name your policy, and choose Create policy.
- In the navigation pane, choose Roles, Create Role, and configure the following fields:
- For Select type of trusted entity, choose AWS Service.
- For the service that will use this role, choose AppStream 2.0.
- Choose Next: Permissions.
- In the Filter policies search box, search for the policy that you previously created. When the policy appears in the list, select the check box next to the policy name.
- Choose Next: Tags. Although you can specify a tag for the policy, a tag is not required.
- Choose Next: Review. Name your role, and choose Create role.
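If you prefer to script these IAM steps, here is one way to do it with the AWS Tools for PowerShell. The policy and role names are placeholders, the policy JSON shown earlier is assumed to be saved locally as as2-eventlog-policy.json, and the trust policy uses the appstream.amazonaws.com service principal that the console configures for you.
# Trust policy that allows AppStream 2.0 to assume the role.
$trustPolicy = @'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "appstream.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
'@
# Create the managed policy from the JSON document shown earlier.
$policy = New-IAMPolicy -PolicyName "AS2-EventLog-Policy" -PolicyDocument (Get-Content ".\as2-eventlog-policy.json" -Raw)
# Create the role and attach the policy to it.
New-IAMRole -RoleName "AS2-EventLog-Role" -AssumeRolePolicyDocument $trustPolicy
Register-IAMRolePolicy -RoleName "AS2-EventLog-Role" -PolicyArn $policy.Arn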
Configure your AppStream 2.0 fleet
With both your IAM role and Athena database created, the image creation process should be finished and your image ready for use.
- From the AppStream 2.0 console, choose Fleets, Create Fleet.
- Provide a name for your new fleet, and configure the fleet to use the image that you just created. On Step 3, make sure to configure the IAM role setting to use the IAM role that you created. Proceed with the fleet creation process.
- After the fleet has been created, make sure that it’s in the Starting state, and then assign it to a stack.
Testing your setup
After your fleet has started, launch an AppStream 2.0 session. When you get to the catalog page, launch PowerShell.
Because there is no application on the image that triggers the script automatically, use the following PowerShell commands to create test events of varying severity.
Write-EventLog -LogName "Application" -Source "CustomApplication" -EventId 1000 -Message "This is a test event." -EntryType Information
Write-EventLog -LogName "Application" -Source "CustomApplication" -EventId 1001 -Message "This is a test warning." -EntryType Warning
Write-EventLog -LogName "Application" -Source "CustomApplication" -EventId 1002 -Message "This is a test error." -EntryType Error
Create enough error events (based on the threshold defined in the event_trigger.ps1 script) to trigger a CloudWatch alarm; a sample loop follows. Verify that you receive an email detailing the CloudWatch alarm.
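For example, with the alarm threshold of 5 defined in the script, a short loop like the following (run inside your streaming session) generates enough error events:
# Write six test errors to exceed the alarm threshold of 5.
1..6 | ForEach-Object {
    Write-EventLog -LogName "Application" -Source "CustomApplication" -EventId 1002 -Message "This is test error $_ of 6." -EntryType Error
    Start-Sleep -Seconds 5 # Give each triggered task time to run.
}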
When you have generated enough test data and CloudWatch alarms, open the Athena console.
Verify that your event data is available to Athena by running the following SQL query. This returns the first 10 rows of event data from your database.
SELECT * FROM "db_name"."table_name" limit 10;
For more information about Athena SQL queries, see SQL Reference for Amazon Athena.
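For example, the following query (using the same placeholder database and table names) counts error events per user and fleet:
SELECT UserName, FleetName, COUNT(*) AS ErrorEvents
FROM "db_name"."table_name"
WHERE EventSeverity = 'Error'
GROUP BY UserName, FleetName
ORDER BY ErrorEvents DESC;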
Optional: Cleaning up old alarms
You can use the following PowerShell script to clean up any CloudWatch alarms that are no longer required. This script deletes any custom alarms created by your AppStream 2.0 instances that are older than the defined age (three days in this example). You can also use this to create a PowerShell AWS Lambda function that runs automatically on a set schedule.
$region = "<region>"
$alarms = Get-CWAlarm -Region $region | Where-Object { ($_.AlarmName -like "Alarm-AS2-Session-*") -and ($_.StateUpdatedTimestamp -le $((Get-Date).AddDays(-3))) } | Select-Object -ExpandProperty AlarmName
if ($null -eq $alarms) {
    Write-Host "There are no AppStream 2.0 session alarms that meet the date requirement."
}
else {
    $total = $alarms.Count
    Write-Host "Deleting $total alarm(s) in region $region"
    foreach ($alarm in $alarms) {
        try {
            Remove-CWAlarm -Region $region -AlarmName $alarm -Force -ErrorAction Stop
            Write-Host "Alarm $alarm has been deleted."
        }
        catch {
            Write-Error $_
        }
    }
}
Conclusion
And that’s it! You now have a custom AppStream 2.0 image configured with an event trigger script that processes your application events and forwards the data on to Amazon S3 and CloudWatch. You have an AppStream 2.0 fleet configured with your custom image along with an IAM role allowing for seamless interaction between the fleet instances and Amazon S3 and CloudWatch. And finally, you have an Athena database that allows you to query for specific event data.