AWS Partner Network (APN) Blog
Knowlarity’s Cloud Communications Platform Provides Customer Conversation Analytics Built on AWS
By Saurabh Srivastava, Sr. Principal Engineer – Knowlarity
By Bharath S, Sr. Partner Solutions Architect – AWS
By Smiti Guru, Sr. Solutions Architect – AWS
Knowlarity, a Gupshup company, provides automated business communication with artificial intelligence (AI)-enabled cloud telephony solutions.
Gupshup enables better customer engagement through conversational messaging and is a leading conversational messaging platform, powering over 10 billion messages per month.
Across verticals, thousands of large and small businesses in emerging markets use Gupshup to build conversational experiences across the pre-purchase (marketing and promotion), purchase (commerce), and post-purchase (sales and support) stages of the customer journey. Gupshup provides a single messaging API for 30+ channels.
Knowlarity is an AWS Specialization Partner and Gupshup is an AWS Marketplace Seller.
Knowlarity’s Cloud Call Center is a software-as-a-service (SaaS) solution that enhances customer experience by enabling real-time conversations and personalized interactions. It reduces communication costs, increases efficiency and productivity, and provides analytics and real-time performance insights.
Knowlarity built its analytics platform on Amazon Web Services (AWS) for a highly scalable, reliable, and resilient infrastructure that processes terabytes of data. Retail customers achieve cost and operational efficiencies through the insights the platform provides. In this post, we discuss the strategies and best practices Knowlarity adopted to ingest, store, and query customer data.
Knowlarity Analytics Platform Architecture
Design requirements for a multi-tenant customer analytics platform like Knowlarity’s include:
- Scale to handle a high volume of events and a growing number of customer tenants.
- Support thousands of tenants with a robust data ingestion pipeline that handles hundreds of millions of events per day, along with the corresponding data processing.
- Store terabytes of calling and campaign data on reliable storage.
- Deliver real-time analytics that query hundreds of gigabytes of data per day, scan billions of events within a few seconds, and enable reporting that provides timely insights to tenants.
- Provide APIs for programmatic access to tenant data.
- Ensure high availability and fault tolerance of the analytics platform to minimize downtime.
- Implement robust monitoring and alerting mechanisms to proactively identify and resolve issues.
- Back up tenant data robustly.
- Optimize overall platform cost.
Now, let’s understand the solution by walking through the architecture below:
Figure 1 – Knowlarity call insights platform architecture.
Consider a tenant that wants to understand how a cloud-based contact center is used to engage end customers:
- The tenant’s users or field agents use the product or an integrated customer relationship management (CRM) system to call and connect with clients.
- At the end of every conversation, a call detail record (CDR) is generated containing details such as a unique call identifier, call type, call start time, and call duration in JSON format, as illustrated below.
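The exact schema is Knowlarity's own; the following Python snippet sketches what such a CDR payload might look like, with illustrative field names:

```python
import json
from datetime import datetime, timezone

# Illustrative CDR payload; field names and values are assumptions,
# not Knowlarity's actual schema.
cdr = {
    "call_id": "9f1c2a7e-0000-4000-8000-000000000001",  # unique call identifier
    "tenant_id": "tenant-42",
    "call_type": "outbound",
    "start_time": datetime.now(timezone.utc).isoformat(),
    "duration_seconds": 183,
    "agent_id": "agent-007",
    "status": "answered",
}
print(json.dumps(cdr, indent=2))
```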
Data Ingestion Layer
- The JSON event payload, including tenant information, is ingested into Amazon Simple Queue Service (SQS). Multiple SQS queues are used depending on the payload and data processing requirements; for example, real-time calling takes precedence over calling data generated by large running campaigns. Multiple workers running on Amazon EC2 Spot Instances across AWS Availability Zones (AZs) consume messages from these queues, validate the event payload, and perform real-time processing such as pushing call details to a tenant API or CRM endpoint.
- Next, these workers push events to Amazon Kinesis Data Streams, with shards provisioned based on the incoming data volume and traffic patterns. The data is partitioned on unique identifiers to distribute it uniformly across the shards. The worker nodes batch these messages into multiple records for better throughput and efficiency, and the data is retained in Kinesis for a specific period for increased reliability (see the worker sketch after this list).
- Amazon CloudWatch metrics and alarms track key metrics such as incoming and outgoing records, throughput, consumer lag, and iterator age (GetRecords.IteratorAgeMilliseconds) to give insight into the health and performance of the stream; a sample alarm follows this list.
- AWS Lambda functions provide the compute and are configured to process multiple records in a single invocation by setting the batch size parameter. This reduces the number of Lambda function invocations and improves efficiency.
- For reliable processing of all data, Knowlarity uses asynchronous Lambda invocations with a fixed number of retry attempts. An SQS queue is configured to capture and store records that couldn't be processed after the retry attempts are exhausted, with real-time notification. This allows the solution to analyze and debug failed records separately while maintaining the flow of successful records (see the configuration sketch below).
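A minimal sketch of such a worker, assuming boto3 and hypothetical queue and stream names; production code would also check `FailedRecordCount` in the `PutRecords` response:

```python
import json
import boto3

sqs = boto3.client("sqs")
kinesis = boto3.client("kinesis")

QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/cdr-events"  # hypothetical
STREAM_NAME = "cdr-stream"                                                  # hypothetical

def poll_and_forward():
    # Long-poll SQS for up to 10 CDR events at a time.
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    messages = resp.get("Messages", [])
    if not messages:
        return

    # Batch records into one PutRecords call for throughput; partitioning on
    # the call identifier spreads data evenly across shards.
    records = [
        {
            "Data": msg["Body"].encode("utf-8"),
            "PartitionKey": json.loads(msg["Body"])["call_id"],
        }
        for msg in messages
    ]
    kinesis.put_records(StreamName=STREAM_NAME, Records=records)

    # Delete from SQS only after a successful hand-off to Kinesis.
    sqs.delete_message_batch(
        QueueUrl=QUEUE_URL,
        Entries=[
            {"Id": m["MessageId"], "ReceiptHandle": m["ReceiptHandle"]}
            for m in messages
        ],
    )
```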
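Likewise, an iterator-age alarm of the kind described above might be created as follows; the alarm name, threshold, and SNS topic are assumptions:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when consumers fall behind: GetRecords.IteratorAgeMilliseconds is the
# standard Kinesis metric for consumer lag.
cloudwatch.put_metric_alarm(
    AlarmName="cdr-stream-consumer-lag",   # hypothetical
    Namespace="AWS/Kinesis",
    MetricName="GetRecords.IteratorAgeMilliseconds",
    Dimensions=[{"Name": "StreamName", "Value": "cdr-stream"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=5,
    Threshold=60_000,  # alert if records sit unread for over a minute
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:oncall-alerts"],  # hypothetical
)
```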
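For the batch size, retry, and failed-record capture described above, one way to express the settings is sketched below (for a Kinesis event source, these live on the event source mapping); all ARNs and names are hypothetical:

```python
import boto3

lambda_client = boto3.client("lambda")

# Wire the stream to the processing function. Batching many records per
# invocation cuts invocation count; failed batches retry a fixed number of
# times, then land in an SQS queue for offline analysis.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:ap-south-1:123456789012:stream/cdr-stream",
    FunctionName="process-cdr-events",
    StartingPosition="LATEST",
    BatchSize=200,                  # records per invocation
    MaximumRetryAttempts=3,         # fixed retry budget
    BisectBatchOnFunctionError=True,
    DestinationConfig={
        "OnFailure": {
            "Destination": "arn:aws:sqs:ap-south-1:123456789012:cdr-dlq"
        }
    },
)
```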
Data Storage Layer
- AWS Lambda functions pick up these records and bulk-insert the call logs into an Amazon Aurora table. All call log analysis in the user interface (UI) is powered by this data store (a bulk-insert sketch follows this list).
- The records are pre-processed and formatted into compressed CSV files, which are then uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. The data files in S3 are split into multiple files to enable parallel loading into an Amazon Redshift cluster.
- Redshift COPY commands efficiently load the data files staged in S3 into Redshift tables (see the staging and COPY sketch below).
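A minimal sketch of the bulk insert, assuming Aurora MySQL, PyMySQL, and an illustrative `call_logs` schema:

```python
import os
import pymysql

# Connection details and the call_logs schema are illustrative assumptions.
conn = pymysql.connect(
    host="analytics-cluster.cluster-abc123.ap-south-1.rds.amazonaws.com",
    user="writer",
    password=os.environ["DB_PASSWORD"],
    database="calls",
)

def bulk_insert_call_logs(batch):
    # executemany sends the whole batch in one round trip instead of
    # one INSERT per record.
    sql = (
        "INSERT INTO call_logs "
        "(call_id, tenant_id, call_type, start_time, duration_seconds) "
        "VALUES (%s, %s, %s, %s, %s)"
    )
    with conn.cursor() as cur:
        cur.executemany(sql, [
            (r["call_id"], r["tenant_id"], r["call_type"],
             r["start_time"], r["duration_seconds"])
            for r in batch
        ])
    conn.commit()
```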
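And a sketch of the staging-and-COPY step, using the Amazon Redshift Data API; the bucket, prefix, cluster, and IAM role are hypothetical:

```python
import csv
import gzip
import io
import boto3

s3 = boto3.client("s3")
redshift_data = boto3.client("redshift-data")

BUCKET = "cdr-staging-bucket"   # hypothetical
PREFIX = "cdr/2024/06/01/"      # hypothetical partition layout

def stage_part(rows, part):
    # Write one gzip-compressed CSV part; multiple parts under the same
    # prefix let Redshift load them in parallel across slices.
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    body = gzip.compress(buf.getvalue().encode("utf-8"))
    s3.put_object(Bucket=BUCKET, Key=f"{PREFIX}part-{part:04d}.csv.gz", Body=body)

def copy_into_redshift():
    # COPY loads every part file under the prefix into the target table.
    redshift_data.execute_statement(
        ClusterIdentifier="analytics-cluster",  # hypothetical
        Database="analytics",
        DbUser="loader",
        Sql=(
            f"COPY call_logs FROM 's3://{BUCKET}/{PREFIX}' "
            "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy' "
            "CSV GZIP;"
        ),
    )
```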
Data Query Layer
- Users log in to the Knowlarity product UI to access detailed insights such as overall call volume, connectivity across various solution offerings, agent performance and efficiency, and various other reports. These insights and reports are returned via backend APIs after successful authentication and authorization, along the lines of the handler sketched below.
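A minimal sketch of such a backend API handler, assuming API Gateway with a Lambda authorizer that resolves the tenant, and a hypothetical `mv_daily_call_stats` materialized view (defined in the next section):

```python
import json
import boto3

redshift_data = boto3.client("redshift-data")

def handler(event, context):
    # API Gateway has already authenticated the caller; the authorizer
    # context carries the resolved tenant (an assumed claim name).
    tenant_id = event["requestContext"]["authorizer"]["tenant_id"]

    resp = redshift_data.execute_statement(
        ClusterIdentifier="analytics-cluster",  # hypothetical
        Database="analytics",
        DbUser="api_reader",
        Sql=(
            "SELECT day, total_calls, answered_calls "
            "FROM mv_daily_call_stats "
            "WHERE tenant_id = :tenant ORDER BY day DESC LIMIT 30"
        ),
        Parameters=[{"name": "tenant", "value": tenant_id}],
    )
    # The Data API is asynchronous; production code would poll
    # describe_statement / get_statement_result before responding.
    return {"statusCode": 202, "body": json.dumps({"queryId": resp["Id"]})}
```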
Data Processing and Aggregation
- Amazon Redshift materialized views are created to maintain hourly, daily, weekly, and monthly aggregations. Scripts scheduled on EC2 instances run at fixed intervals to refresh these materialized views, so the aggregated results reflect any changes in the underlying data (see the sketch after this list).
- Queries against the materialized views retrieve the pre-computed aggregated results, giving faster query response times. This improves query performance, bringing API response times to under six seconds.
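A sketch of what one such materialized view and its scheduled refresh might look like; the view definition and schema are assumptions:

```python
import boto3

redshift_data = boto3.client("redshift-data")

# Illustrative daily aggregation; the actual view definitions are Knowlarity's own.
CREATE_DAILY_MV = """
CREATE MATERIALIZED VIEW mv_daily_call_stats AS
SELECT tenant_id,
       DATE_TRUNC('day', start_time) AS day,
       COUNT(*) AS total_calls,
       SUM(CASE WHEN status = 'answered' THEN 1 ELSE 0 END) AS answered_calls,
       AVG(duration_seconds) AS avg_duration_seconds
FROM call_logs
GROUP BY tenant_id, DATE_TRUNC('day', start_time);
"""

def refresh():
    # Run from a cron job at a fixed interval so the pre-computed
    # aggregates track the underlying call_logs table.
    redshift_data.execute_statement(
        ClusterIdentifier="analytics-cluster",  # hypothetical
        Database="analytics",
        DbUser="loader",
        Sql="REFRESH MATERIALIZED VIEW mv_daily_call_stats;",
    )
```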
Below are snapshots of some of the analytics and insights in the Knowlarity solution UI.
Figure 2 – Knowlarity call analytics dashboard.
Results and Benefits
By building the platform on AWS, Knowlarity was able to achieve the following benefits:
- Scalable architecture: Leverages AWS services like Amazon EC2 and AWS Lambda for scalable compute resources, and uses Auto Scaling groups to adjust resources automatically based on demand.
- Data ingestion and processing: Utilizes services like Amazon SQS and Amazon Kinesis Data Streams for real-time and batch data ingestion. Designs efficient data processing workflows using AWS Lambda and scripts for data transformation, enrichment, and aggregation.
- Data storage and management: Uses Amazon S3 for cost-effective and scalable object storage and Amazon Redshift for analytics, enabling calling insights, various performance reports, and contact center agent efficiency and performance metrics. Kinesis and Redshift allow the solution to handle large volumes of event data and process it in a scalable way, and Redshift materialized views let it serve near real-time analytical queries in sub-second time.
- Fault tolerance and high availability: Leverages AWS services like Amazon Aurora for database replication and failover and S3 for data durability. Implements multi-AZ deployments for critical components to ensure redundancy and fault tolerance.
- Monitoring and alerting: Utilizes Amazon CloudWatch and AWS CloudTrail to monitor key metrics, log files, and system health.
- Cost optimization: Leverages On-Demand and Spot instances as well as AWS Lambda functions for compute resources based on workload characteristics. Uses AWS Cost Explorer and AWS Budgets to track and manage costs effectively.
- Analytics platform access: Implements authentication and authorization mechanisms in Amazon API Gateway to ensure secure and controlled access.
- Reliable backups: Data gets written to S3 for long-term storage and backup, ensuring the data is safe and secure.
Overall, Knowlarity is running scalable analytics on the AWS cloud and processing terabytes of data for business insights. Its contact center solution accelerates the time to insights and continues to deliver performance while keeping costs low.
Conclusion
Knowlarity is a conversational engagement platform empowering businesses to engage meaningfully with customers across commerce, marketing, and support use cases on 30+ communication channels.
The solution highlighted in this post enables better customer engagement through conversational messaging and powers more than 10 billion messages per month. Across verticals, thousands of large and small businesses use Knowlarity’s contact center solution across marketing, sales, and support.
Visit the Knowlarity (Gupshup) listing in AWS Marketplace to get started.
Knowlarity – AWS Partner Spotlight
Knowlarity is an AWS Partner and Gupshup company that’s a one-stop conversational engagement platform for marketing (pre-purchase), support (post-purchase), and commerce (purchase) journeys.