Desktop and Application Streaming
Using Kinesis Agent for Linux to stream application logs in AppStream 2.0
When an Amazon AppStream 2.0 session ends, the streaming instance is terminated, and any application logs written during the session are lost. But what if you need to persist your application logs? This post shows you how to persist your application logs by streaming them in real time to Amazon Kinesis Data Firehose using Kinesis Agent for Linux.
This blog provides a solution for Linux-based images. A similar setup for Windows-based images is described in the blog post Using Kinesis Agent for Microsoft Windows to store AppStream 2.0 Windows Event Logs.
Time to read: 7 minutes
Time to complete: 30 minutes
Learning level: Advanced (300)
Services used: Amazon AppStream 2.0, Amazon Kinesis Data Firehose, Amazon S3
Overview of solution
The solution uses a custom Linux-based AppStream 2.0 image that has the Kinesis agent installed. The agent uses a custom credentials provider to assume the streaming instance's AWS Identity and Access Management (IAM) role. The agent then sends application logs to a Kinesis Data Firehose delivery stream that loads the logs into Amazon Simple Storage Service (Amazon S3).
To learn more about IAM roles for your fleet, review using an IAM role to grant permissions to applications and scripts running on AppStream 2.0 streaming instances.
Walkthrough
This post walks you through the following steps:
- Update AppStream 2.0 fleet IAM role to grant stream permissions.
- Install Kinesis agent on a custom Linux-based image.
- Create custom credentials provider artifact.
- Upload custom credentials provider JAR to Kinesis agent library folder.
- Configure Kinesis agent to use custom credentials provider.
- Test log delivery.
Prerequisites
For this walkthrough, you should have the following prerequisites:
- An AWS account with permissions to create AWS resources that are used in this blog.
- An environment with Java 8 and Apache Maven installed to compile Java classes.
- An Amazon Kinesis Data Firehose delivery stream for the Kinesis agent to send data to. For the source, choose Direct PUT, and for the destination, choose Amazon S3 (an example CLI command follows this list). For instructions on creating delivery streams, refer to creating an Amazon Kinesis Data Firehose delivery stream. The delivery stream name and ARN are required later in the walkthrough.
- A Linux-based AppStream 2.0 Image Builder.
- An IAM role for the Linux-based AppStream 2.0 Image Builder and streaming instances. The role name is used later in the walkthrough. If you don’t have an Image Builder yet, follow the tutorial create a custom Linux-based AppStream 2.0 image. If your Image Builder does not have internet access, create an Amazon S3 gateway VPC endpoint so that the required packages can be installed. To create gateway VPC endpoints, review gateway endpoints for Amazon S3.
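If you prefer to script the delivery stream prerequisite, the following is a minimal sketch of creating a Direct PUT delivery stream with an Amazon S3 destination using the AWS CLI. The stream name, bucket ARN, and Firehose delivery role ARN are placeholders, not values from this walkthrough; substitute your own.

# Sketch: create a Direct PUT delivery stream that writes to an S3 bucket.
# appstream-app-logs, the role ARN, and my-log-bucket are example placeholders.
aws firehose create-delivery-stream \
  --delivery-stream-name appstream-app-logs \
  --delivery-stream-type DirectPut \
  --s3-destination-configuration RoleARN=arn:aws:iam::111122223333:role/firehose-s3-delivery-role,BucketARN=arn:aws:s3:::my-log-bucket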
Update instance role to grant stream permissions
To create the IAM policy that provides the instance role permission to send data to the delivery stream:
- In the navigation pane of the IAM console, choose Roles.
- Choose the name of the AppStream 2.0 instance role that you want to modify (see prerequisites).
- Choose the Permissions tab, and then choose Add permissions. Choose Create inline policy.
- In the policy editor, choose the JSON option.
- Paste the following JSON policy document into the editor. Replace delivery-stream-arn with your own delivery stream ARN.
{
    "Statement": [
        {
            "Action": [
                "cloudwatch:PutMetricStream",
                "cloudwatch:PutMetricData"
            ],
            "Effect": "Allow",
            "Resource": "*"
        },
        {
            "Action": [
                "firehose:PutRecord",
                "firehose:PutRecordBatch"
            ],
            "Effect": "Allow",
            "Resource": "delivery-stream-arn"
        }
    ],
    "Version": "2012-10-17"
}
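If you prefer the AWS CLI over the console, you can attach the same statement as an inline policy. The role name, policy name, and file name below are illustrative placeholders, not names used elsewhere in this post.

# Sketch: attach the policy above (saved locally as firehose-policy.json) as an inline policy.
# AppStreamInstanceRole and FirehoseAgentAccess are example names; use your own.
aws iam put-role-policy \
  --role-name AppStreamInstanceRole \
  --policy-name FirehoseAgentAccess \
  --policy-document file://firehose-policy.json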
Install Kinesis agent on a custom Linux-based image
If you don’t have an Image Builder, follow the tutorial to create a custom Linux-based AppStream 2.0 Image. If your Image Builder does not have internet access, create an Amazon S3 gateway VPC endpoint so required packages can be installed. To create gateway VPC endpoints, review gateway endpoints for Amazon S3.
- Connect to your AppStream 2.0 Image Builder.
- Open the terminal.
- Run sudo yum install -y aws-kinesis-agent to install the agent.
- Run sudo service aws-kinesis-agent start to start the agent.
- Configure the agent to start on system startup with the command sudo chkconfig aws-kinesis-agent on
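Before moving on, you can optionally confirm that the agent service is running and enabled at boot; this sanity check is an addition to the original steps.

# Optional check: confirm the agent is running and registered to start at boot.
sudo service aws-kinesis-agent status
chkconfig --list aws-kinesis-agent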
For more information about the Kinesis agent configuration, review writing to Amazon Kinesis Data Streams using Kinesis Agent.
Create custom credentials provider artifact
Create the following CustomProcessCredentialsProvider Java source code file, and save it under the name CustomProcessCredentialsProvider.java:
package com.amazon.demo.appstream.kinesis;

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.ProcessCredentialsProvider;

/**
 * Credentials provider that obtains credentials for the AppStream 2.0
 * machine role by invoking the credential provider binary on the instance.
 */
public class CustomProcessCredentialsProvider implements AWSCredentialsProvider {

    private final ProcessCredentialsProvider processCredentialsProvider;

    public CustomProcessCredentialsProvider() {
        // Delegate to the AppStream 2.0 credential provider process shipped on the streaming instance.
        this.processCredentialsProvider = ProcessCredentialsProvider.builder()
                .withCommand("/usr/local/appstream/credentials-provider/AppStreamRoleCredentialProvider --role=Machine")
                .build();
    }

    @Override
    public AWSCredentials getCredentials() {
        return processCredentialsProvider.getCredentials();
    }

    @Override
    public void refresh() {
        processCredentialsProvider.refresh();
    }
}
To compile the custom credentials provider Java class, you need the AWS SDK for Java in your classpath. Download the AWS SDK for Java core JAR from the Maven repository, and save it next to the Java source code file. Then run the following commands in your terminal:
mkdir out
javac -d out -cp aws-java-sdk-core-1.12.376.jar CustomProcessCredentialsProvider.java
cd out
jar -cf custom-credentials-provider.jar com
Make sure that you compile your code with the same Java version that is installed on your Image Builder (check with java -version). This post uses the base image AppStream-AmazonLinux2-06-20-2022 and Java 1.8.
This creates a JAR named custom-credentials-provider.jar that contains the CustomProcessCredentialsProvider class.
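To double-check that the class made it into the archive, you can list the JAR's contents; this verification step is an optional addition to the original walkthrough.

# Optional: confirm the compiled class is packaged in the JAR.
jar -tf custom-credentials-provider.jar
# Expected entry: com/amazon/demo/appstream/kinesis/CustomProcessCredentialsProvider.class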
Upload custom credentials provider JAR to Kinesis agent library folder
Next, upload the JAR from your local computer to the Kinesis library folder in your Image Builder.
- In the AppStream 2.0 toolbar, choose My Files.
- Navigate to the TemporaryFiles folder, and choose Upload Files.
- Select and upload custom-credentials-provider.jar.
- In the terminal, move the uploaded JAR to the Kinesis agent library location with the command sudo mv MyFiles/TemporaryFiles/custom-credentials-provider.jar /usr/share/aws-kinesis-agent/lib/
The Kinesis agent uses this lib folder as its classpath, so it can load the custom credentials provider once it is configured.
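You can confirm that the JAR landed in the agent's library folder before configuring the agent; this listing is an optional check added here for convenience.

# Optional: verify the custom credentials provider JAR is on the agent classpath.
ls -l /usr/share/aws-kinesis-agent/lib/custom-credentials-provider.jar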
Configure Kinesis agent to use custom credentials provider
- Edit the Kinesis agent configuration file /etc/aws-kinesis/agent.json. Replace its content with the configuration JSON shown after this list, and update the placeholders as described in the following steps.
- Look up your Kinesis Data Firehose endpoints. Replace firehose-endpoint with your own information.
- Look up your Amazon CloudWatch endpoints. Replace cloudwatch-endpoint with your own information.
- Replace delivery-stream-name with your own information.
- Save the changes to the agent.json file.
- Restart the agent with the command sudo service aws-kinesis-agent restart
- Verify that the agent started correctly by checking the log: cat /var/log/aws-kinesis-agent/aws-kinesis-agent.log
{
    "cloudwatch.emitMetrics": true,
    "cloudwatch.endpoint": "cloudwatch-endpoint",
    "firehose.endpoint": "firehose-endpoint",
    "flows": [
        {
            "deliveryStream": "delivery-stream-name",
            "filePattern": "/tmp/app.log*"
        }
    ],
    "kinesis.endpoint": "",
    "userDefinedCredentialsProvider.classname": "com.amazon.demo.appstream.kinesis.CustomProcessCredentialsProvider"
}
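After editing agent.json, it can help to confirm the file is still valid JSON and that the restarted agent logs no errors. The commands below are an optional check and assume Python is available on the Image Builder.

# Optional: confirm agent.json is valid JSON (assumes Python is installed on the image).
python -m json.tool /etc/aws-kinesis/agent.json
# After restarting the agent, scan its log for errors.
grep -i error /var/log/aws-kinesis-agent/aws-kinesis-agent.log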
Make sure that your instance either has internet access or has VPC endpoints configured for the services com.amazonaws.<region>.monitoring and com.amazonaws.<region>.kinesis-firehose. To create VPC endpoints, review access an AWS service using an interface VPC endpoint.
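As a rough sketch, an interface endpoint for each service can also be created with the AWS CLI; the VPC, subnet, and security group IDs and the Region below are placeholders for illustration.

# Sketch: create an interface VPC endpoint for Kinesis Data Firehose (placeholder IDs and Region).
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.kinesis-firehose \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0
# Repeat with --service-name com.amazonaws.us-east-1.monitoring for CloudWatch metrics.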
For more information about the flows configuration, refer to writing to Kinesis Data Firehose using Kinesis Agent.
Test application log delivery
Create an example log file that matches the file pattern defined in your agent.json for your delivery stream. This post uses /tmp/app.log* as an example.
- Create a file named app.log under the /tmp folder.
- Insert a test string, for example 'Hello World' (a command sketch for this follows the list).
- Save your file.
- In your Kinesis agent logs (/var/log/aws-kinesis-agent/aws-kinesis-agent.log) you see entries similar to the following:

2023-01-11 22:14:44.032+0000 (FileTailer[fh:PUT-S3-PHKn7:/tmp/app.log*].MetricsEmitter RUNNING) com.amazon.kinesis.streaming.agent.tailing.FileTailer [INFO] FileTailer[fh:PUT-S3-PHKn7:/tmp/app.log*]: Tailer Progress: Tailer has parsed 1 records (24 bytes), transformed 0 records, skipped 0 records, and has successfully sent 1 records to destination.
2023-01-11 22:14:44.033+0000 (Agent.MetricsEmitter RUNNING) com.amazon.kinesis.streaming.agent.Agent [INFO] Agent: Progress: 1 records parsed (24 bytes), and 1 records sent successfully to destinations. Uptime: 120023ms
- Navigate to the Amazon S3 console, select the bucket you specified as the destination of your delivery stream, and verify that the log entries were created. Depending on your delivery stream buffering configuration, it may take several minutes until the file content appears in the destination.
- Navigate to the CloudWatch console and choose Metrics. Verify that the metric namespace AWSKinesisAgent exists.
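The following commands show one way to generate the test entry and watch the agent pick it up; they are a convenience sketch rather than part of the original steps.

# Append a test entry that matches the /tmp/app.log* file pattern.
echo "Hello World" >> /tmp/app.log
# Follow the agent log to watch the record being parsed and sent.
tail -f /var/log/aws-kinesis-agent/aws-kinesis-agent.log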
Cleaning up
In this post, you tested your Kinesis agent configuration within an AppStream 2.0 environment. To avoid incurring future charges, remove the Kinesis agent and stop your AppStream 2.0 fleet. Note that Kinesis Data Firehose only generates costs while you are actively streaming data to the service.
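If you want to script the cleanup, the commands below remove the agent from the Image Builder and stop a fleet; the fleet name is a placeholder, not a name used elsewhere in this post.

# Remove the Kinesis agent from the Image Builder.
sudo yum remove -y aws-kinesis-agent
# Stop the AppStream 2.0 fleet (replace my-linux-fleet with your fleet name).
aws appstream stop-fleet --name my-linux-fleet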
Conclusion
In this post, you walked through a solution to push application logs from AppStream 2.0 streaming instances to a Kinesis Data Firehose delivery stream. You can query the logs stored in Amazon S3 with Amazon Athena.
About the author
Modood Alvi is a Senior Solutions Architect at Amazon Web Services (AWS). Modood is passionate about digital transformation and is committed to helping large enterprise customers across the globe accelerate their adoption of and migration to the cloud. He brings more than a decade of experience in software development, having held a variety of technical roles at companies like SAP and Porsche Digital. Modood earned his diploma in computer science from the University of Stuttgart.