This Guidance helps you create a single source of truth for customer touch points by automatically understanding and extracting customer-linked information from siloed, raw, and disparate data.
Architecture Diagram
Step 1
Data is ingested from multiple data sources across the telco’s engagement channels and systems of record through batch, streaming, and/or API-based mechanisms.
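For illustration, the sketch below shows what the streaming path of this ingestion could look like, assuming engagement events are pushed to a hypothetical Amazon Kinesis data stream named ccj-engagement-events; the actual ingestion targets and event schema depend on your deployment.

```python
import json

import boto3

# Hypothetical stream name; the real ingestion targets depend on your deployment.
STREAM_NAME = "ccj-engagement-events"

kinesis = boto3.client("kinesis")


def ingest_event(event: dict) -> None:
    """Push one engagement event (for example, a call-center interaction) onto the stream."""
    kinesis.put_record(
        StreamName=STREAM_NAME,
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=event["customer_id"],  # keeps each customer's events ordered within a shard
    )


if __name__ == "__main__":
    ingest_event(
        {
            "customer_id": "C-1001",
            "channel": "mobile_app",
            "event_type": "bill_viewed",
            "timestamp": "2024-05-01T10:15:00Z",
        }
    )
```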
Step 2
Data is loaded in parallel to Amazon Neptune and Amazon Simple Storage Service (Amazon S3). Based on Neptune Streams data, AWS Lambda calls either deterministic or probabilistic identity resolver machine learning (ML) models deployed on Amazon SageMaker real-time inference endpoints. The identity-resolved data is upserted into Amazon S3. The journey KPI store is a set of queries written on Amazon Redshift to visualize key insights from journey data on Amazon QuickSight dashboards.
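As a rough illustration of the Lambda step, the sketch below calls a SageMaker real-time inference endpoint for each change record and upserts the result to Amazon S3. The endpoint name, bucket name, record fields, and S3 key layout are all assumptions, and the Neptune Streams records are assumed to have already been fetched and passed to the handler.

```python
import json

import boto3

sm_runtime = boto3.client("sagemaker-runtime")
s3 = boto3.client("s3")

# Hypothetical names; replace with the endpoint and bucket created by your deployment.
ENDPOINT_NAME = "identity-resolver-endpoint"
BUCKET = "ccj-identity-resolved-data"


def handler(event, context):
    """Resolve identities for a batch of Neptune Streams change records."""
    records = event.get("records", [])
    for record in records:
        # Ask the deployed resolver model which canonical customer this touch point belongs to.
        response = sm_runtime.invoke_endpoint(
            EndpointName=ENDPOINT_NAME,
            ContentType="application/json",
            Body=json.dumps(record),
        )
        resolved = json.loads(response["Body"].read())

        # Upsert the resolved record into S3, keyed by the canonical customer id.
        key = f"resolved/{resolved['customer_id']}/{record['record_id']}.json"
        s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(resolved).encode("utf-8"))

    return {"processed": len(records)}
```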
Step 3
The Amazon Athena Neptune connector is used to query journey milestone data on Neptune. This data is then converted into a Gremlin query and stored in Amazon DynamoDB along with trigger criteria. The query is triggered as new vertices and edges are created or updated on Neptune, and based on the result, journey milestone vertices and edges are created or updated.
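The sketch below illustrates one way this could work: a milestone's Gremlin query and trigger criteria are stored in a hypothetical DynamoDB table (journey-milestone-queries), and the query is later submitted to Neptune with gremlin_python when a matching change arrives. The table name, key schema, and Neptune endpoint are assumptions.

```python
import boto3
from gremlin_python.driver import client

# Hypothetical table and cluster endpoint.
MILESTONE_TABLE = "journey-milestone-queries"
NEPTUNE_ENDPOINT = "wss://my-neptune-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/gremlin"

table = boto3.resource("dynamodb").Table(MILESTONE_TABLE)


def store_milestone(milestone_id: str, gremlin_query: str, trigger_label: str) -> None:
    """Persist a milestone's Gremlin query along with the vertex/edge label that triggers it."""
    table.put_item(
        Item={
            "milestone_id": milestone_id,
            "gremlin_query": gremlin_query,
            "trigger_label": trigger_label,
        }
    )


def run_milestone(milestone_id: str) -> list:
    """Fetch the stored query and run it against Neptune when a matching change is detected."""
    item = table.get_item(Key={"milestone_id": milestone_id})["Item"]
    gremlin = client.Client(NEPTUNE_ENDPOINT, "g")
    try:
        return gremlin.submit(item["gremlin_query"]).all().result()
    finally:
        gremlin.close()
```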
Step 4
The journey frequency analyzer returns the top journeys along with the number of times each journey step happened based on the event of interest chosen by the customer experience (CX) strategist as the start or end of the journey.
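As a toy illustration of the aggregation the frequency analyzer performs (not the Guidance's graph-based implementation on Neptune), the sketch below counts how often each ordered sequence of steps occurs once the chosen start event is reached, assuming per-customer event lists have already been extracted.

```python
from collections import Counter
from typing import Dict, List, Tuple


def top_journeys(
    events_by_customer: Dict[str, List[str]],
    start_event: str,
    top_n: int = 5,
) -> List[Tuple[Tuple[str, ...], int]]:
    """Count how often each ordered sequence of steps occurs, starting from the chosen event."""
    counts: Counter = Counter()
    for steps in events_by_customer.values():
        if start_event in steps:
            journey = tuple(steps[steps.index(start_event):])
            counts[journey] += 1
    return counts.most_common(top_n)


# Example: journeys that begin when a customer reports a network issue.
sample = {
    "C-1": ["network_issue_reported", "agent_call", "engineer_visit", "issue_resolved"],
    "C-2": ["network_issue_reported", "agent_call", "issue_resolved"],
    "C-3": ["network_issue_reported", "agent_call", "issue_resolved"],
}
print(top_journeys(sample, "network_issue_reported"))
```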
Step 5
The journey similarity analyzer returns the count and details of other customers who have gone through very similar journeys.
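The sketch below shows one simple way to score journey similarity (a sequence-ratio comparison using Python's difflib); it is an illustration only, not the Guidance's similarity algorithm, and the journey data and threshold are assumptions.

```python
from difflib import SequenceMatcher
from typing import Dict, List


def similar_journeys(
    reference: List[str],
    journeys_by_customer: Dict[str, List[str]],
    threshold: float = 0.8,
) -> Dict[str, float]:
    """Return customers whose ordered journey steps closely match the reference journey."""
    matches = {}
    for customer_id, steps in journeys_by_customer.items():
        score = SequenceMatcher(None, reference, steps).ratio()
        if score >= threshold:
            matches[customer_id] = round(score, 2)
    return matches


reference = ["plan_upgrade_enquiry", "agent_call", "plan_upgraded"]
others = {
    "C-7": ["plan_upgrade_enquiry", "agent_call", "plan_upgraded"],
    "C-8": ["plan_upgrade_enquiry", "agent_call", "callback", "plan_upgraded"],
    "C-9": ["sim_swap", "agent_call"],
}
matches = similar_journeys(reference, others)
print(len(matches), matches)
```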
Step 6
A number of journey propensity models are trained and deployed on SageMaker using the data on Neptune. These models are invoked using Neptune ML queries. Example propensity models include churn prediction, personalization, fraud detection, and customer satisfaction score prediction.
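For example, a node-classification style Neptune ML inference query could look roughly like the sketch below, which asks a churn model for the predicted churn property of a customer vertex. The cluster endpoint, SageMaker endpoint name, vertex ID, and property name are all hypothetical; check the Neptune ML documentation for the exact query forms supported by your model type.

```python
from gremlin_python.driver import client

# Hypothetical identifiers.
NEPTUNE_ENDPOINT = "wss://my-neptune-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/gremlin"
CHURN_ML_ENDPOINT = "churn-propensity-endpoint"  # SageMaker endpoint created by Neptune ML

# Ask the churn model to classify a single customer vertex.
query = (
    f'g.with("Neptune#ml.endpoint","{CHURN_ML_ENDPOINT}")'
    '.V("customer-1001").properties("churn")'
    '.with("Neptune#ml.classification").value()'
)

gremlin = client.Client(NEPTUNE_ENDPOINT, "g")
try:
    print(gremlin.submit(query).all().result())
finally:
    gremlin.close()
```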
Step 7
The website static files are deployed on Amazon S3 and distributed globally using Amazon CloudFront. The website is secured using AWS WAF.
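A minimal AWS CDK (Python) sketch of this hosting layer is shown below: a private, encrypted S3 bucket fronted by a CloudFront distribution with a WAF web ACL attached. The construct names and web ACL ARN are hypothetical, and the actual Guidance stacks will differ.

```python
from aws_cdk import App, Stack, aws_cloudfront as cloudfront, aws_cloudfront_origins as origins, aws_s3 as s3
from constructs import Construct


class StaticSiteStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, *, web_acl_arn: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Private, encrypted bucket holding the dashboard's static files.
        site_bucket = s3.Bucket(
            self,
            "SiteBucket",
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            encryption=s3.BucketEncryption.S3_MANAGED,
        )

        # Global distribution in front of the bucket; the WAF web ACL is attached here.
        cloudfront.Distribution(
            self,
            "SiteDistribution",
            default_behavior=cloudfront.BehaviorOptions(origin=origins.S3Origin(site_bucket)),
            default_root_object="index.html",
            web_acl_id=web_acl_arn,
        )


app = App()
StaticSiteStack(
    app,
    "CcjStaticSite",
    web_acl_arn="arn:aws:wafv2:us-east-1:111122223333:global/webacl/ccj-web-acl/12345678-1234-1234-1234-123456789012",
)
app.synth()
```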
Step 8
The journey key performance indicator (KPI) visualizer exposes an API, which allows the QuickSight dashboards to be embedded into the CCJ insights dashboard.
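Under the hood, such an API typically wraps the QuickSight embedding call shown in the sketch below; the account ID, user ARN, and dashboard ID are hypothetical.

```python
import boto3

quicksight = boto3.client("quicksight")

# Hypothetical identifiers; use the values for your account and registered reader.
ACCOUNT_ID = "111122223333"
USER_ARN = "arn:aws:quicksight:us-east-1:111122223333:user/default/ccj-dashboard-reader"
DASHBOARD_ID = "journey-kpi-dashboard"


def get_embed_url() -> str:
    """Return a URL that the CCJ insights dashboard can load in an embedded frame."""
    response = quicksight.generate_embed_url_for_registered_user(
        AwsAccountId=ACCOUNT_ID,
        UserArn=USER_ARN,
        ExperienceConfiguration={"Dashboard": {"InitialDashboardId": DASHBOARD_ID}},
        SessionLifetimeInMinutes=60,
    )
    return response["EmbedUrl"]


if __name__ == "__main__":
    print(get_embed_url())
```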
Step 9
All extracted journey data is presented through the insights dashboards, or the APIs and widgets can be integrated into the communication service provider’s (CSP) chosen system of engagement.
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
Telecom data is ingested using data pipelines built with the AWS Cloud Development Kit (AWS CDK), AWS CloudFormation, and the AWS Serverless Application Model (AWS SAM). Continuous integration and continuous delivery (CI/CD) toolsets such as AWS CodePipeline orchestrate deployment and promote code through environments. AWS Glue Studio, AWS Step Functions, and AWS Glue DataBrew orchestrate the data operations lifecycle. SageMaker Pipelines is used to orchestrate the ML lifecycle.
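As a small example of that orchestration, the sketch below starts one run of a hypothetical Step Functions state machine (ccj-data-ops) that chains the data-operations jobs for a new batch; the state machine ARN and input shape are assumptions.

```python
import json

import boto3

sfn = boto3.client("stepfunctions")

# Hypothetical state machine that chains the crawl/transform jobs for one ingest batch.
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:111122223333:stateMachine:ccj-data-ops"


def start_data_ops(batch_id: str) -> str:
    """Kick off one run of the data operations lifecycle for a new batch of telecom events."""
    response = sfn.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        name=f"batch-{batch_id}",
        input=json.dumps({"batch_id": batch_id}),
    )
    return response["executionArn"]


if __name__ == "__main__":
    print(start_data_ops("2024-05-01"))
```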
Security
All data is encrypted both in transit and at rest. Encrypted Amazon S3 buckets store the data, and the Neptune database is also encrypted and secured in a private subnet within the VPC. SageMaker can access that data only through the VPC, not over the internet. Training is done in secure containers, and the results are stored in encrypted S3 buckets.
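The sketch below shows how a training job could be locked down along these lines: encrypted output and volumes, private subnets via a VPC configuration, and network isolation enabled. All identifiers (role, image, key, subnet, security group) are hypothetical placeholders.

```python
import boto3

sagemaker = boto3.client("sagemaker")

# All identifiers below are hypothetical; supply the ones created by your deployment.
sagemaker.create_training_job(
    TrainingJobName="ccj-identity-resolver-training",
    RoleArn="arn:aws:iam::111122223333:role/ccj-sagemaker-role",
    AlgorithmSpecification={
        "TrainingImage": "111122223333.dkr.ecr.us-east-1.amazonaws.com/identity-resolver:latest",
        "TrainingInputMode": "File",
    },
    OutputDataConfig={
        "S3OutputPath": "s3://ccj-model-artifacts/",
        "KmsKeyId": "alias/ccj-data-key",  # model artifacts written encrypted
    },
    ResourceConfig={
        "InstanceType": "ml.m5.xlarge",
        "InstanceCount": 1,
        "VolumeSizeInGB": 50,
        "VolumeKmsKeyId": "alias/ccj-data-key",  # training volume encrypted at rest
    },
    VpcConfig={  # training runs inside the private subnets, with no internet path
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
        "Subnets": ["subnet-0123456789abcdef0"],
    },
    EnableNetworkIsolation=True,  # the training container cannot make outbound network calls
    StoppingCondition={"MaxRuntimeInSeconds": 3600},
)
```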
Reliability
Neptune is deployed across multiple Availability Zones (AZs). SageMaker hosting is used to serve the trained model and takes advantage of multiple AZs and automatic scaling. All other services are serverless, which means they are inherently highly available across multiple AZs in a Region.
Performance Efficiency
Serverless technology is used where possible. For Neptune, autoscaling is configured to handle unpredictable read patterns. SageMaker endpoints scale up and down as needed so that only the minimum number of instances required is running.
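A sketch of that endpoint scaling configuration using Application Auto Scaling is shown below; the endpoint name, variant name, capacity bounds, and target value are assumptions to be tuned per workload.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Hypothetical endpoint and production variant names.
RESOURCE_ID = "endpoint/identity-resolver-endpoint/variant/AllTraffic"

# Allow the endpoint to scale between one and four instances.
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=RESOURCE_ID,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=4,
)

# Target tracking: keep each instance at roughly 100 invocations per minute.
autoscaling.put_scaling_policy(
    PolicyName="ccj-endpoint-target-tracking",
    ServiceNamespace="sagemaker",
    ResourceId=RESOURCE_ID,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 100.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 60,
    },
)
```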
Cost Optimization
Serverless services are used where possible, so customers pay only for the resources they consume. AWS Lambda Power Tuning is used to optimize cost while maintaining performance. Neptune autoscaling automatically removes read replicas when they are not in use. SageMaker endpoints scale up and down as needed so that only the minimum number of instances required is running. Instance sizes are selected using SageMaker Inference Recommender to keep costs to a minimum.
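For example, an Inference Recommender sizing job could be started as sketched below; the job name, role, and model package ARN are hypothetical, and the completed job's description lists candidate instance types with latency, throughput, and cost metrics.

```python
import boto3

sagemaker = boto3.client("sagemaker")

JOB_NAME = "ccj-identity-resolver-sizing"  # hypothetical job name

# Benchmark a versioned model package across the default candidate instance types.
sagemaker.create_inference_recommendations_job(
    JobName=JOB_NAME,
    JobType="Default",
    RoleArn="arn:aws:iam::111122223333:role/ccj-sagemaker-role",
    InputConfig={
        "ModelPackageVersionArn": "arn:aws:sagemaker:us-east-1:111122223333:model-package/identity-resolver/1"
    },
)

# Once the job completes, its description includes per-instance-type cost and performance metrics.
print(sagemaker.describe_inference_recommendations_job(JobName=JOB_NAME))
```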
Sustainability
By extensively using managed services and dynamic scaling, the backend minimizes its environmental impact. All compute instances are right-sized to maximize utilization.
Implementation Resources
A detailed guide is provided for you to experiment with and use within your AWS account. It covers each stage of building the Guidance, including deployment, usage, and cleanup, to prepare it for deployment.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.