AWS Contact Center

Analyzing Amazon Connect usage with agent desktop and streaming data

As customer engagement expectations become increasingly advanced, contact center managers find themselves in need of granular visibility into their contact center usage. For example, customers seek to understand usage patterns by agent, business division, caller intent, routing queue, or customer segment. Amazon Connect provides a real-time performance dashboard so that you can monitor the overall health of your contact center. For customers looking for deeper insights, there are additional data sources that can be enabled in Amazon Connect, as demonstrated in this post.

This post provides a walk-through of the features, functionality, and steps to deploy an AWS CloudFormation template to log, capture, and store detailed metrics from a comprehensive set of data sources. These data sources include contact trace records, agent events, chat transcripts, and the agent-facing Amazon Connect Contact Control Panel (CCP). With these datasets, you can create customized detailed reports to better understand and forecast the usage of Amazon Connect.

The reporting use cases covered by this solution include the following:

  1. Customers who want to understand call length, agent status changes, and customer routing across multiple call flows.
  2. Customers who require detailed reports based on custom attributes defined in the contact flow. For example, you can create reports about customer intent, request fulfillment, or how far a customer progressed in a contact flow.
  3. Agents who utilize the transfer voice call functionality and require the ability to trace the call from the original two-way communication through multiple connections. This includes forwarding calls to desk phones, transferring calls to other agents, merging two calls, or temporary consultative calls while the customer is on hold.
  4. Customers who require a breakdown of the exact number of messages sent by the agent and customer throughout chat conversations for analytics purposes. This includes messages sent by an individual or a chatbot.

Solution overview

The artifacts deployed through this blog contain a set of serverless components deployed through CloudFormation. There are four nested CloudFormation stacks that can be independently deployed, enabling three different mechanisms to track metrics across agents, contacts, and connections to provide comprehensive usage analysis.

Front-end stack: This stack deploys a custom browser-based Amazon Connect Contact Control Panel (CCP) to capture usage telemetry data from the agent’s client side. This CCP collects granular metrics from the Connect service using the Amazon Connect Streams JavaScript library.  It’s worth noting that some of these generated metrics are new and not included in the Amazon Connect libraries.

  • The stack code can be downloaded from our GitHub repository and integrated with existing custom CCPs.
  • This component provides a best effort attempt to capture client-side data to supplement the data available through Amazon Connect’s data streaming options.
  • The front-end component uses a custom CCP wrapper, which runs locally on the agent’s machine and retrieves metrics on agent state, current contacts, and each connection within the contact.
  • Data is submitted in 5-minute intervals by default. It is processed by Amazon API Gateway and Amazon Kinesis Data Streams before being made available for reports. This submission frequency can be modified in the provided JavaScript files.
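To make the wrapper’s behavior concrete, here is a minimal sketch of the buffer-and-flush pattern, assuming a hypothetical API Gateway endpoint and a simplified metric shape (the repository’s actual implementation differs):

```javascript
// Sketch of the client-side telemetry pattern used by the front-end stack:
// buffer agent-state snapshots locally and flush them to an API Gateway
// endpoint on a timer. The endpoint URL and record shape below are
// illustrative assumptions, not the repository's actual implementation.

class MetricsBuffer {
  constructor() {
    this.records = [];
  }
  add(record) {
    // Timestamp each record as it is captured on the agent's machine.
    this.records.push({ capturedAt: new Date().toISOString(), ...record });
  }
  drain() {
    // Return the buffered records and reset the buffer for the next interval.
    const batch = this.records;
    this.records = [];
    return batch;
  }
}

const buffer = new MetricsBuffer();
const FLUSH_INTERVAL_MS = 5 * 60 * 1000; // matches the default 5-minute interval

// Only wire up the Streams API when running inside the custom CCP page,
// where the amazon-connect-streams library defines the global `connect`.
if (typeof connect !== 'undefined') {
  connect.agent((agent) => {
    agent.onStateChange((change) => {
      buffer.add({ type: 'AGENT_STATE', oldState: change.oldState, newState: change.newState });
    });
  });

  setInterval(() => {
    const batch = buffer.drain();
    if (batch.length > 0) {
      // Hypothetical API Gateway endpoint backed by Kinesis.
      fetch('https://example.execute-api.us-east-1.amazonaws.com/prod/telemetry', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(batch),
      });
    }
  }, FLUSH_INTERVAL_MS);
}
```

Buffering locally and flushing on an interval is what makes the capture best-effort: metrics accumulated since the last flush are lost if the browser tab closes before the next submission.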

Backend stack: This stack deploys Amazon Kinesis Data Streams and Amazon Kinesis Data Firehose. It uses the native integration of Amazon Connect and Amazon Kinesis to upload the Contact Trace Records (CTRs) to an Amazon Simple Storage Service (Amazon S3) bucket in near-real time. Note that data processed in this fashion is intended for analytics use cases rather than real-time or in-call streaming usage.
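To illustrate what the streamed CTRs look like to a downstream consumer, here is a small sketch that summarizes one record. The field names (ContactId, InitiationTimestamp, Agent.Username) follow the documented CTR data model, but treat the exact shape as an assumption to verify against your own records:

```javascript
// Sketch of a consumer-side transformation for CTRs delivered to S3 by
// Kinesis Data Firehose: reduce each JSON record to a compact summary row.
// Field names follow the CTR data model; verify them against your records.

function summarizeCtr(ctr) {
  const start = new Date(ctr.InitiationTimestamp);
  const end = new Date(ctr.DisconnectTimestamp);
  return {
    contactId: ctr.ContactId,
    channel: ctr.Channel,
    agent: ctr.Agent ? ctr.Agent.Username : null, // null for contacts that never reached an agent
    durationSeconds: Math.round((end - start) / 1000),
  };
}
```

A contact that disconnected in a queue carries no Agent block, which is why the summary tolerates a missing agent rather than assuming one is always present.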

Chat stack: This stack deploys serverless components to track the usage of Amazon Connect chat by counting the number of messages in the transcripts. In order to use this feature, chat transcript logging to Amazon S3 must be enabled.
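The counting itself can be sketched as follows. The transcript layout (a Transcript array of items with Type and ParticipantRole) reflects the Amazon Connect chat transcript format, but treat the exact field names as an assumption to verify against your own transcript files:

```javascript
// Sketch of the per-participant message counting that the chat usage stack
// performs on transcripts stored in S3. Counts only actual messages,
// skipping join/leave and other non-message events.

function countMessagesByRole(transcriptDoc) {
  const counts = {};
  for (const item of transcriptDoc.Transcript || []) {
    if (item.Type === 'MESSAGE') {
      const role = item.ParticipantRole || 'UNKNOWN';
      counts[role] = (counts[role] || 0) + 1;
    }
  }
  return counts;
}
```

Because the count is keyed by participant role, agent, customer, and system (chatbot) messages fall out as separate totals, which matches the reporting use case above.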

In addition, the solution deploys Apache Hive schema queries to enable Amazon Athena to scan, query, and aggregate data in the S3 bucket. We have also provided an optional set-up guide in our GitHub project to visualize your data with Amazon QuickSight. Other business intelligence tools may also be used with Athena through the ODBC/JDBC drivers. We use Athena to combine the various data sources to correlate information across contacts, chats, and voice connections, and to allow customers to add their own logic for use-case-specific reporting.


To get started, launch the CloudFormation template below in the AWS Region where your Amazon Connect instance is deployed.


For this walkthrough, you should have the following prerequisites:

  • An AWS account
  • Access to the following services:
    • Amazon Connect
    • Amazon CloudFront
    • Amazon Kinesis Data Streams
    • Amazon Kinesis Data Firehose
    • Amazon Simple Storage Service (Amazon S3)
    • Amazon Athena / AWS Glue Data Catalog
    • AWS Lambda
  • Knowledge of SQL
  • Amazon Athena is based on HiveQL for DDL and on Presto 0.172 or 0.217 for DML, depending on the Athena engine version

Deployment steps

The deployment requires the following parameters:

Front-end stack parameters:

  • EnableFrontendStack: <required>
    Set this to true (default value) to deploy front-end related artifacts.
  • ConnectionDataBucket: <required>
    Name of the Amazon S3 bucket, which will be the destination of the call connection records delivered by Kinesis Data Firehose. You can create a new bucket manually or use an existing bucket.

    • For example, connect-57c490306eec
  • CcpUrl: <required>
    HTTPS URL of your Amazon Connect CCP.

    • For example,
  • SamlUrl: <optional>
    SAML login URL for your instance. Leave empty if you aren’t using SAML.

    • For example,

Backend stack parameters

  • storageDestinationS3: <required>
    Name of the storage bucket for saving agent events, contact trace records, and logs. This can be the same bucket as you used in the FrontEndStack parameters or any other existing bucket.

    • For example, connect-57c490306eec

Chat usage stack parameters

  • EnableChatUsageStack: <required>
    Set this to true (default value) to deploy chat usage-related artifacts.
  • ConnectChatTranscriptBucket: <required>
    Amazon S3 bucket that stores the Amazon Connect chat transcripts. This can be the same bucket as used in the prior two steps.

    • For example, connect-57c490306eec
  • TranscriptBucketPrefix
    Prefix for the Amazon S3 bucket that contains the transcripts. You can find this bucket and prefix in your Data storage configuration within the AWS Management Console, under the Amazon Connect service and the instance you are looking to monitor.

    • For example, connect/connect-demos-sandbox-fra-test/ChatTranscripts

  1. Once you have completed the deployment options, deploy the stack. It should complete within 20 minutes.
  2. Once deployment has completed, check the output of the main stack.
  3. In the Outputs section, check the value of the parameter called “NeedManualConfigNotification”. If the value is “True”, you need to manually configure the notification event for the chat usage stack.
    1. Navigate to the Amazon S3 bucket that contains the chat transcripts and set up an S3 event notification for all object create (PUT) events on the prefix where the chat transcripts are stored. In the event notification, configure the ProcessTranscriptLambda function to be invoked by this Amazon S3 notification.

To add an event notification:

  1. Select the Amazon S3 bucket containing your chat transcripts.
  2. Choose the Properties tab.
  3. Scroll down to Event notifications.
  4. Select All object create events.
  5. Select the existing AWS Lambda function named “ProcessTranscriptLambda” from the list.

NOTE: The chat usage stack relies on Amazon S3 PUT event notifications on the chat transcripts bucket to trigger a Lambda function. If you have an existing notification set for this bucket, you will need to configure an Amazon SNS fan-out pattern to trigger multiple destinations from one notification. In this case, manually configure a notification to an SNS topic, and subscribe both the chat usage stack’s Lambda function and any other existing functionality to that topic. Refer to the Amazon SNS documentation on how to set up this architectural pattern.
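In the fan-out case, each subscribed Lambda receives the S3 event wrapped in an SNS envelope and must unwrap it before processing. A sketch of that unwrapping (the processing step is intentionally left as a placeholder):

```javascript
// Sketch of the SNS fan-out pattern for the chat usage stack: S3 sends one
// notification to an SNS topic, and each subscribed Lambda unwraps the
// original S3 event from the SNS message envelope.

function extractS3Objects(snsEvent) {
  const objects = [];
  for (const record of snsEvent.Records || []) {
    // Each SNS record carries the original S3 event notification as a JSON string.
    const s3Event = JSON.parse(record.Sns.Message);
    for (const s3Record of s3Event.Records || []) {
      objects.push({
        bucket: s3Record.s3.bucket.name,
        key: s3Record.s3.object.key,
      });
    }
  }
  return objects;
}

// A Lambda subscribed to the topic would call this from its handler, e.g.:
// exports.handler = async (event) => {
//   for (const obj of extractS3Objects(event)) { /* process transcript at obj */ }
// };
```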

Skip this step if you have not deployed the front-end stack. We will add the CloudFront URL deployed by the CloudFormation stack to the Application integration section in order to allow the CCP to be initiated from a custom domain. This is required because the front-end stack relies on a JavaScript wrapper around the standard CCP.

  1. In the AWS Management Console, navigate to Amazon Connect, <Your Connect Instance>, Application Integration, Add Origin

  1. Navigate back to Amazon Connect console, Instance Alias, Data Streaming
    1. Check the Data Streaming check box.
    2. Under Contact Trace Records, select the CTR Kinesis Data Firehose delivery stream that was deployed by the CloudFormation template.
    3. Under Agent Events, select the Agent Events Kinesis data stream that was deployed by the CloudFormation template. (This step is optional.)
    4. You may use your existing Kinesis Data Firehose delivery streams or Kinesis data streams. The following steps reference the S3 bucket that was entered as a CloudFormation stack parameter. You may manually reconnect to your data if you choose to deviate from this setup.

  1. Next, navigate to Amazon Athena. If this is the first time you are using Athena, you will need to configure Athena to store query results in an S3 bucket by choosing the set up a query result location in Amazon S3 link.

  1. Execute the following stored queries to create schemas in Athena for your reporting. (You should be able to see direct links to these queries in the Outputs section of the front-end, backend, and chat usage stacks in CloudFormation.)
    1. CallConnectionsDataNamedQuery
    2. ContactTraceRecordsAthenaNamedQuery
    3. Note: If you have consistently utilized custom attributes captured as part of your contact flows, you may edit the ContactTraceRecordsAthenaNamedQuery to specify schema attributes for use in future queries or visualization tools.
    4. ChatDetailsAthenaNamedQuery
    5. Note: If your data is saved to a different S3 bucket, you can modify the data source queries before running them to include the bucket and prefix where your data resides. The data source can be found on the final line of each query and should look like this: ROW FORMAT SERDE '' LOCATION 's3://${storageDestinationS3}/CTR'
  2. At this point, you may make a few phone calls and/or chat contacts using the CloudFront Distribution URL found in the CloudFormation template output parameter with the key CloudfrontDistributionURL.
    1. NOTE: The front end caches data locally on the agent’s desktop and sends it to an API every 5 minutes. Kinesis Data Firehose has similar behavior: it buffers new data before saving it to the destination in either 5 MB or 5-minute intervals. Therefore, there could be up to a 10-minute delay before front-end data appears in Athena queries. You can configure these intervals to meet your needs.
  3. At this point, you are ready to query your data for insights. In Athena, run the following queries to produce a report on the usage for the voice or chat channel:
    1. UsageReportAthenaNamedQuery
    2. ChatUsageReportAthenaNamedQuery

Front-end stack:

Once the solution is deployed, data will automatically be generated in your specified Amazon S3 bucket for the backend stack and chat usage stack. For the front-end stack to generate data, agents must use the CCP from the custom CCP page rather than the standard CCP. This URL is listed in the CloudFormation output under the parameter “CloudfrontDistributionURL”.

Once logged in, the agent experience will look like a standard Amazon Connect CCP. To verify the front-end stack works as expected, you can look at the browser console log for the “Successfully opened db” statement from the worker thread.

Athena queries:

The queries used to create the schema for Athena in the deployment guide provide a view of the schema and specific attributes available for your queries.

Cleaning up

To avoid incurring future charges, you can delete the deployed resources through CloudFormation, which will automatically clean up the components that it created. Note that assets created in Amazon S3, such as CTRs, CCP logs, or custom CCP front-end pages, will not be removed by CloudFormation. To avoid storage costs in Amazon S3, you can manually delete the data that has been streamed to S3 as the next step in your clean-up process.


Once you can successfully run the example queries deployed by the solution, your deployment is complete. You may continue to use the example queries to analyze your usage, include your own modifications, or write your own queries with the attributes included in the defined Athena tables. You can also follow the optional set-up guide in our GitHub project to visualize your data with Amazon QuickSight.

For some ideas on next steps, you can build on the data populated by this solution in Amazon S3. This data can serve as a foundation for a number of data-driven automation opportunities for your contact center, such as:

  1. Triggering events in your CRM or ticketing system based on records meeting specific criteria
  2. Generating advanced analytics dashboards using Amazon QuickSight
  3. Running automated auditing procedures, e.g., looking for missed calls by agents
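As a sketch of the third idea, an audit script could flag queued voice contacts whose CTRs show no connected agent. The criterion used here (Queue present, Agent absent) is a simplified heuristic for illustration, not an authoritative definition of a missed call:

```javascript
// Illustrative sketch of an automated audit over CTR records: flag queued
// voice contacts that disconnected without ever connecting to an agent.
// The missed-call criterion is a simplified heuristic, not the solution's
// definition; adapt it to your own contact flows.

function findMissedCalls(ctrs) {
  return ctrs
    .filter((ctr) => ctr.Channel === 'VOICE' && ctr.Queue && !ctr.Agent)
    .map((ctr) => ctr.ContactId);
}
```

Running a query like this on a schedule against the Athena tables, or over the raw JSON in S3 as shown here, turns the stored usage data into a recurring auditing procedure.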