AWS for Industries

Limiting Subscriber Churn by leveraging real-time subscribers’ feedback – part 1 of 2

When resolving network and service incidents, communication service providers (CSPs) have little visibility into how those incidents are perceived by their subscribers. As a result, incident resolution is not optimized towards upholding customer satisfaction, which reduces its effectiveness in limiting churn and increases the cost of first-line support.

This blog series demonstrates a fully serverless approach to building a sentiment analytics and customer engagement solution for CSPs. Given the size of a CSP’s subscriber base, the need to capture subscribers’ sentiment in real time, and the unpredictability of incident patterns, a cloud-based solution is the best way to limit subscriber churn while containing costs. A combination of natively integrated AWS services, spanning end-to-end from real-time analytics, object storage, customer engagement, and serverless compute to a serverless NoSQL database, enables CSPs to:

  1. Identify the subscribers experiencing the worst service with advanced analytics in real time
  2. Engage subscribers based on their historical network and service performance
  3. Capture their feedback to validate and weight their incidents
  4. Monitor the sentiment of the entire subscriber base in real-time and associate it to other dimensions (cell, device, network vendor, and so on)
  5. Prioritize incident intervention based on subscriber sentiment

This first blog post focuses on step 1 of the above list, covering the real-time ingestion and processing of incidents for the CSP’s entire subscriber base.

Business case

The UK telecommunications industry faces significant market challenges: competitive forces place downward pressure on prices while customers demand ever more bandwidth and speed and expect a high standard of customer service. CSP revenues have, in fact, fallen by a fifth in the UK over the last ten years, while globally they fell 5.4 percent from 2019 (1).

Customer churn remains the single greatest challenge to revenue stabilization, ranging between 5 percent and 32 percent per year (2). In a saturated market such as this one, service providers should focus on retaining existing customers rather than trying to attract new ones. The probability of selling to an existing customer is 60–70 percent, while the probability of selling to a potential new customer is 5–20 percent. This issue is compounded by the cost of customer acquisition (COA), which, on average, is five times greater than the cost of retention (3).

In an environment where average revenue per user (ARPU) dipped by 28 percent between 2010 and 2019, and where CSPs saw a 5.3 percent decline in share price while paying out 4.1 percent in dividends (2015–2020), it is clear this situation is not sustainable (4).

A CSP’s major asset is its network, and the cost of running it, from both an opex and a capex perspective, is substantial.

As new generations of network technology come online, their ROI needs to be justified. The industry faces one of its greatest capex demands: upgrading to 5G and FTTP and building AI diagnostics and planning capabilities into networks (intelligent network planning).

On top of these capex challenges, CSPs face significant opex due to high network maintenance costs.

The average data use per fixed broadband connection increased by 75 GB per month (31 percent) to 315 GB in 2019, with continued growth in the use of video streaming services a key contributing factor (5). As the need for connectivity continues to increase, customer expectations rise with it: many customers expect close to 100 percent availability, good speed and stability, and minimal downtime, even while overall prices are falling.

mobile arpu and data usage

What can operators do to intelligently combat churn and improve their financial position? How can they recoup the costs of network optimization and upgrades in a market where both prices and revenues are being compressed?

To respond to these fast-changing customer needs, CSPs need to enhance their dynamic understanding of customers’ expectations in near real time.

One of the main problems CSPs face is that their customer service teams cannot identify in advance the customers who are most frustrated with their network service, and so cannot differentiate the customer service experience effectively, which likely contributes to churn and revenue loss.

There are two main types of customers: customers who complain if the service they receive does not meet their expectations, and customers who do not complain but are unhappy with their service (the “silent sufferers”). If CSPs cannot identify these customers in real time, there is a growing risk that their marketing team cannot effectively target sweeteners to the customers most likely to churn because of poor network service or simply mismatched expectations.

Understanding real-time customer feedback has therefore become imperative to reduce churn. The real challenge then becomes how to use it in the most cost-effective way. Fixing network issues carries a high cost of intervention but not every fault in the network impacts service. So how can we identify the most service-impacting faults?

The answer is linking real-time customer feedback with real-time network data.

The use of real-time customer feedback and proactive care solutions will have a transformative effect on customer churn. Integrating proactive care based on real-time customer sentiment ensures the costs of network optimization are justified by a substantial reduction in customer churn and an optimized spend on network assets.

Solution

By identifying unhappy customers early, whether or not they report an issue, telco companies would be able to:

  1. Optimize spend to minimize churn by using budget effectively on specific, targeted outbound comms that provide “sweeteners” to customers who are frustrated with their service due to network issues
  2. Create a framework of service offerings that appropriately target and address actual customer frustration levels—both proactively and reactively
  3. Optimize costs through the prioritization of resolution of the highest impact faults (that is, those faults creating the highest number of frustrated customers or the greatest intensity of frustration)

The proposed solution aims to help CSPs prioritize incident resolution based on how incidents negatively impact subscribers’ sentiment. By prioritizing the resolution of incidents on cells that see the largest drop in overall sentiment, CSPs increase the effectiveness of their operations department in upholding customer satisfaction, contributing to limiting churn and reducing first-line support costs.

The main characteristics of the solution are:

  • At scale—the solution scales to support the entire subscriber base
  • Real-time—the whole sentiment-gathering process and spawned actions are performed in real time
  • Pattern sensitive—incidents are not assessed individually; patterns and correlations are evaluated
  • Serverless—entirely serverless, truly pay for what you consume, with no upfront commitment
  • In-context engagement—engagement with subscribers is established based on their performance history

This first blog post focuses on the first stage of the solution: how network and service incidents are ingested and analyzed (data ingestion), stored (data lake), and processed to track every subscriber’s incident performance in real time (event handler and subscriber DB). The section covered is highlighted in the following diagram.

Here are the functional steps of the solution covered in this first blog post:

  • Service records (CDR, xDR) are streamed in real time by OSS probe vendors into AWS
  • Service incidents are tracked for each unique subscriber (phone number) and their frequency is quantified in real time over variable time windows
  • A data lake stores all incident records and associated frequency data
  • Every subscriber’s incident profile is kept up to date in an Amazon DynamoDB table

Technical description

The solution demonstrated in this post is built in the Europe (London) Region. You can choose any other AWS Region where the services used in this post are available.

For more information about AWS Regions and where AWS services are available, visit the Region Table.

The following prerequisites must be in place to build this solution:

  • An AWS account
  • The AdministratorAccess policy granted to the IAM identity you use (for production, you should restrict access as needed)
  • Event records about CSP service performance fed by a third-party OSS probe-based monitoring solution

Data Ingestion

The following diagram illustrates the data ingestion block of the architecture. It is tasked with capturing event records from the probe-based monitoring solution, isolating service incidents, computing incident rates for every subscriber in real time, and delivering enriched incident records to the data lake.

data integration

Data Sources

This post uses the Kinesis Data Generator tool to simulate event records. There are four types of event records, whose details are provided in the following table:

Event Record name | Network protocol of origin | Record Structure | Numerosity
Call—Signaling | Call Control | Timestamp, MSISDN, type, cell, status | One record per call
Call—Media | RTP | Timestamp, MSISDN, type, cell, status | At least one record per call
Video—User Plane | HTTP, streaming protocols | Timestamp, MSISDN, type, cell, status | One record per streamed video
Web Browsing—User Plane | HTTP | Timestamp, MSISDN, type, cell, status | One record per browsed webpage*
  • Event Record name—the name of the record
  • Network protocol of origin—Layer 7 network protocol containing the information reported in the event record. The list of options provided is not exhaustive.
  • Record Structure—the structure of the event record as seen in this post
    • Timestamp—time when the event happened on the CSP network
    • MSISDN—subscriber identifier (phone number)
    • Type—the type of record (one of the four event record names)
    • Cell—cell ID uniquely identifying a 2G/3G/4G/5G cell. This is assumed to be extracted from the network’s control plane protocol at either access or core interfaces, correlated with the service event, and enriched into the event record. This logic is implemented by the OSS probe vendor.
    • Status—termination status of the service event. For example: failed, success, dropped, etc.
  • Numerosity—number of records per service event
    * Definition of a webpage is subject to the OSS probe vendor’s interpretation

Copy the code found at GitHub (KDG_CALL_DROP, KDG_CALL_QUALITY, KDG_LOW_BITRATE, KDG_VIDEO_STALLING) for the Kinesis Data Generator to replicate the incident patterns I have used for this blog post.
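
If you prefer to script the record generation instead of using the Kinesis Data Generator, the following minimal Python sketch writes simulated Call Media records with the same structure directly into a Kinesis data stream with boto3. The stream name, number ranges, and failure mix are hypothetical; create the data stream (next section) before running it.

import json
import random
import time
from datetime import datetime, timezone

import boto3

kinesis = boto3.client("kinesis", region_name="eu-west-2")
STREAM_NAME = "call_media_stream"  # hypothetical; use the name of the stream you create below


def simulated_call_media_record() -> dict:
    """Build one simulated Call Media event record matching the table above."""
    return {
        "Timestamp": datetime.now(timezone.utc).isoformat(),
        "MSISDN": "4477" + str(random.randint(10000000, 99999999)),
        "type": "CALL_MEDIA",
        "cell": "CELL_" + str(random.randint(1, 50)),
        "status": random.choice(["success", "success", "success", "dropped"]),
    }


while True:
    record = simulated_call_media_record()
    kinesis.put_record(
        StreamName=STREAM_NAME,
        Data=json.dumps(record),
        PartitionKey=record["MSISDN"],  # distribute records across shards by subscriber
    )
    time.sleep(0.1)  # roughly ten records per second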

Amazon Kinesis Data Streams

Four data streams are created, one per event record type. This arrangement spawns four parallel chains of Kinesis services, whose output is written into the data lake. The end-to-end creation process detailed below must be implemented for all four branches independently.

aws management console

To create a Kinesis data stream, complete the steps listed in Creating a Stream via the AWS Management Console. Enter the number of shards according to your requirements; for this post, Provisioned capacity mode with one shard is selected. Enable server-side encryption.
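
If you prefer scripting over the console, a hedged boto3 equivalent of the steps above is shown below; the stream name is a placeholder, and the procedure is repeated for each of the four event record types.

import boto3

kinesis = boto3.client("kinesis", region_name="eu-west-2")
STREAM_NAME = "call_media_stream"  # placeholder name for one of the four streams

# One shard in provisioned capacity mode, matching the console choice above
kinesis.create_stream(
    StreamName=STREAM_NAME,
    ShardCount=1,
    StreamModeDetails={"StreamMode": "PROVISIONED"},
)
kinesis.get_waiter("stream_exists").wait(StreamName=STREAM_NAME)

# Server-side encryption with the AWS managed key for Kinesis
kinesis.start_stream_encryption(
    StreamName=STREAM_NAME,
    EncryptionType="KMS",
    KeyId="alias/aws/kinesis",
)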

The below screenshot shows one of the four data streams, capturing the Call Media event record. The call media stream will be used as an example throughout the data ingestion section. Repeat the same procedure for the other streams, namely the Call Signaling, Video User Plane, and Web Browsing event records.

input from KDG CALL QUALITY

Amazon Kinesis Data Analytics

Create a Kinesis Data Analytics application
To create an Amazon Kinesis Data Analytics application, complete the steps listed in its SQL Developer Guide. Select the SQL runtime option. Enter tags as desired.

Configure source input
To configure a streaming source as input, complete the following steps from the same console as the previous section:

  1. Select Choose source
  2. Select Kinesis data stream
  3. Select the relevant choice from the dropdown menu. Following the flow of the call media event record started in the previous section, the data stream previously considered is set as Streaming data source in the configuration page of the newly created analytics function.
  4. Select Disabled in the Record preprocessing with Lambda section
  5. Select the Create IAM role option in the Access permission section
  6. Choose Discover schema in the Schema section. Schema is automatically discovered provided event records are being ingested while the discovery takes place.
  7. Inspect the schema to verify it matches the one shown in the screenshot below.

real time sql

Add real-time SQL application
To add a real-time analytics application, complete the following steps:

  1. On the application hub page, choose Go to SQL editor.
  2. When asked whether you would like to start your application, choose Yes, start application.
  3. Copy the SQL code found at GitHub (call drop, call quality, low bitrate, video stalling) and follow the instructions contained in the README file. The SQL application processes the input data stream in real time and reports statistics and other relevant information to an output stream. The SQL application performs the following steps (a plain-Python illustration of the windowed calculation follows the screenshot below):
    1. It isolates incident records from all event records.
    2. On an individual subscriber basis (MSISDN), it calculates the incident ratio over a 15-minute window and the incident ratio over a 60-minute window.
    3. It includes record attributes and calculated incident metrics in an output in-application stream.
  4. Any other record structure can be supported by easily modifying the Amazon Kinesis Data Analytics SQL application code.
  5. Choose Save and run SQL. Verify that the Output_Stream in-application stream resembles the record structure seen in the screenshot below.

sql results
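
The authoritative SQL lives in the GitHub repository. Purely to illustrate the windowed metric it computes, here is a plain-Python sketch of a per-subscriber incident ratio over a sliding 15-minute window (the 60-minute ratio works the same way); the status value used to isolate incidents is an assumption, and this is not the Kinesis Data Analytics code.

from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=15)
events = defaultdict(deque)  # MSISDN -> deque of (timestamp, is_incident)


def incident_ratio(msisdn: str, timestamp: datetime, status: str) -> float:
    """Record one event and return the subscriber's incident ratio over the last 15 minutes."""
    window = events[msisdn]
    window.append((timestamp, status != "success"))  # isolate incidents from all event records
    while window and window[0][0] < timestamp - WINDOW:
        window.popleft()  # drop events that have fallen out of the window
    incidents = sum(1 for _, is_incident in window if is_incident)
    return incidents / len(window)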

Add a destination
To connect an in-application stream to a Kinesis data stream in order to continuously deliver SQL results, complete the following steps in the Destination—optional section:

  1. Choose Connect new destination.
  2. On the subsequent page, choose Kinesis data stream.
  3. Choose Create new button.

connect to destination

  4. To create an Amazon Kinesis Data Stream, complete the steps listed in the Developer Guide under the heading Creating a Stream: To create a data stream using the console. Enter the number of shards according to your requirements; for this post, Provisioned Capacity mode with one shard is selected. Enable server-side encryption.

  5. In the In-application stream section, select Choose an existing in-application stream
  6. Scroll down and select the Output_Stream option
  7. Choose CSV in the Output format selection

in application stream

  8. Select the Create IAM role option in the Access permission section
  9. Choose Save and continue

Encrypt data at rest
Follow the instruction at How Do I Get Started with Server-Side Encryption? to set encryption at rest for your data on Amazon Kinesis Data Analytics.

Amazon Kinesis Data Firehose

Create an Amazon Kinesis Data Firehose Delivery Stream
To create an Amazon Kinesis Data Firehose delivery stream, complete the steps listed in its Developer Guide. More specifically:

  1. Type a Delivery stream name
  2. Choose Kinesis Data Stream as Source
  3. Select the Kinesis data stream previously created from the dropdown menu
  4. Choose Disabled in the Transform source records with AWS Lambda section
  5. Choose Disabled in the Convert record format section
  6. Choose Amazon S3 as Destination
  7. Choose Create new to create a new bucket
  8. Enter a Bucket name, in this post called “event_bucket”
  9. Leave 5 MiB as Buffer size
  10. Adjust the Buffer interval to suit your real-time requirements
  11. Enter the following prefix based on the stream: CALL_DROP/, CALL_QUALITY/, LOW_BITRATE/, VIDEO_STALLING/
  12. Enable S3 compression at will
  13. Enable S3 encryption
  14. Enable Error logging
  15. Select the Create IAM role option in the Access permission section
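
The same delivery stream can be created with boto3. The sketch below mirrors the console choices for the CALL_QUALITY branch; the delivery stream name, role ARNs, and source stream ARN are placeholders to replace with your own values.

import boto3

firehose = boto3.client("firehose", region_name="eu-west-2")

firehose.create_delivery_stream(
    DeliveryStreamName="call_quality_to_s3",  # placeholder name
    DeliveryStreamType="KinesisStreamAsSource",
    KinesisStreamSourceConfiguration={
        "KinesisStreamARN": "arn:aws:kinesis:eu-west-2:AWSaccountnumber:stream/call_quality_output",
        "RoleARN": "arn:aws:iam::AWSaccountnumber:role/firehose_delivery_role",
    },
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::AWSaccountnumber:role/firehose_delivery_role",
        "BucketARN": "arn:aws:s3:::event_bucket",
        "Prefix": "CALL_QUALITY/",
        "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 60},
        "CompressionFormat": "GZIP",
        "CloudWatchLoggingOptions": {
            "Enabled": True,
            "LogGroupName": "/aws/kinesisfirehose/call_quality_to_s3",
            "LogStreamName": "S3Delivery",
        },
    },
)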

Data lake

The following diagram illustrates the data lake block of the architecture. It receives and stores all the event types (incidents, engagement, feedback, sentiment) over time, constituting the data source queried by the offline analytics section. For every event written to the data lake, one S3 notification is spawned, triggering the event handler section.

offline analytics

Object Storage

The Amazon S3 bucket was already created during the Amazon Kinesis Data Firehose setup in the data ingestion phase.

Amazon S3 Data Structure
In this section, we are defining the structure of the Amazon S3 bucket to store all incident, engagement, feedback, and sentiment records.

  1. Open the Amazon S3 bucket previously created, in this post called “event_bucket”
  2. Choose Create folder to create three distinct folders
  3. The three folders are named
    1. Engagement
    2. Feedbacks
    3. Sentiment
  4. Choose Enable in the Server-side encryption section
    The structure of the Amazon S3 bucket will appear as illustrated below:

objects

Folders CALL_DROP, CALL_QUALITY, LOW_BITRATE, and VIDEO_STALLING contain incident records. These folders map to the four Amazon Kinesis Data Firehose delivery streams and are created automatically as soon as the first record is written.
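
If you want to script this preparation step, a hedged boto3 sketch is shown below; it creates the three folders as zero-byte prefix objects and enables default server-side encryption for the bucket used in this post.

import boto3

s3 = boto3.client("s3")
BUCKET = "event_bucket"

# The three folders are zero-byte prefix objects; the four incident prefixes
# are created automatically by Amazon Kinesis Data Firehose.
for prefix in ("Engagement/", "Feedbacks/", "Sentiment/"):
    s3.put_object(Bucket=BUCKET, Key=prefix)

# Default server-side encryption with Amazon S3 managed keys (SSE-S3)
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)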

Access and Permission
Navigate to the Permissions section of the S3 bucket and verify that public access is disabled.

block public access

Managing storage lifecycle
Manage your storage lifecycle to retain only the required data and only long enough to satisfy business functionality.

Bucket versioning
Consider enabling bucket versioning. With versioning you can recover more easily from both unintended user actions and application failures.

Bucket access logging
Consider logging requests using server access logging in order to enable auditing.

Encryption in transit
To enforce encryption in transit, add the following bucket policy by following Adding a bucket policy using the Amazon S3 console (replace event_bucket with your bucket name).

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::event_bucket",
        "arn:aws:s3:::event_bucket/*"
      ],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}

Event Handler

The following diagram illustrates the event handler block of the architecture. It is a serverless event manager pipeline whose objective is to parse events being written to the data lake, extract relevant information, and update the subscriber DB accordingly. The block is triggered by the Amazon S3 event notification.

For illustrative purposes, service instance names are shortened in the diagram above:
  • Lambda1 is called S3-get-object
  • Lambda2 is called SQS-Poller
  • The Amazon SQS FIFO queue is called incident_queue.fifo

Lambda1: S3-get-object

Create Lambda function
Complete the steps contained in Create a Lambda function with the following amendments:

  • Type S3-get-object-NEW in the Function name field
  • Select Python 3.9 as the runtime version
  • Keep the default execution role selection

Upload the code
Complete the following steps:

  1. Select the newly created Lambda function from the main dashboard where all Lambda functions are displayed
  2. Beneath the Function overview panel, select the Code tab
  3. In the Code Source panel, copy and paste the code found at GitHub and follow the instructions contained in the README file.
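
The authoritative function code is on GitHub. For orientation only, here is a minimal, hedged sketch of the kind of logic S3-get-object implements; the CSV row layout and the message group ID are assumptions.

import csv
import io
import os
import urllib.parse

import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")
QUEUE_URL = os.environ["queue_url"]  # set in the Environment Variables section below


def lambda_handler(event, context):
    """Triggered by an S3 notification: read the new incident object and
    forward each CSV row to the FIFO queue, preserving arrival order."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        for row in csv.reader(io.StringIO(body)):
            if not row:
                continue
            # One SQS message per incident row; a single message group keeps ordering
            sqs.send_message(
                QueueUrl=QUEUE_URL,
                MessageBody=",".join(row),
                MessageGroupId="incidents",  # assumed grouping strategy
            )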

Set up Trigger
Complete the following steps:

  1. In the Function overview panel, choose Add trigger
  2. Select S3 in the Trigger configuration panel
  3. Select the S3 bucket name previously created in the data lake block from the dropdown menu of the Bucket field (in this post called “event_bucket”)
  4. Select All object create events in the Event type dropdown menu
  5. Tick the Enable trigger box
  6. Choose Add

Set up Destination
No destination is configured for this Lambda function.

Permissions
Following the principle of least privilege, complete the following steps:

  1. Beneath the Function overview panel, select the Configuration tab, then navigate to the Permissions section (as illustrated below)

s3-get-object-new

  2. Select the role name within the Execution Role panel to open the IAM console page
  3. Three permissions policies must be enabled, whose JSON representations are reported here; replace AWSaccountnumber with your AWS account number:
    1. Get and List Objects from Amazon S3 bucket previously created, in this post called “event_bucket”
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": [
                      "s3:GetObject",
                      "s3:ListBucket"
                  ],
                  "Resource": [
                      "arn:aws:s3:::event_bucket",
                      "arn:aws:s3:::event_bucket/*"
                  ]
              }
          ]
      }
    2. Send Message to incident_queue.fifo
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "VisualEditor0",
                  "Effect": "Allow",
                  "Action": [
                      "sqs:SendMessageBatch",
                      "sqs:SendMessage"
                  ],
                  "Resource": ["arn:aws:sqs:eu-west-2:AWSaccountnumber:incident_queue.fifo"
      ]
              }
          ]
      }
    3. Lambda Execution Role to log events
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": "logs:CreateLogGroup",
                  "Resource": "arn:aws:logs:eu-west-2: AWSaccountnumber:*"
              },
              {
                  "Effect": "Allow",
                  "Action": [
                      "logs:CreateLogStream",
                      "logs:PutLogEvents"
                  ],
                  "Resource": [
                      "arn:aws:logs:eu-west-2: AWSaccountnumber:log-group:/aws/lambda/S3-get-object-NEW:*"
                  ]
              }
          ]
      }

Environment Variables
Still in the Configuration tab, navigate to the Environment Variables section. Configure the following environment variables, replacing AWSaccountnumber with your account number:

  1. Key = queue_url, Value = https://sqs.eu-west-2.amazonaws.com/AWSaccountnumber/incident_queue.fifo

FIFO SQS: incident_queue.fifo

Create an Amazon SQS queue
Complete the steps contained in Creating an Amazon SQS queue (console), with the following specifics:

  • At point 3, choose FIFO
  • At point 4, type the name “incident_queue.fifo”
  • At points 5a to 5e, keep the default choices
  • At point 5f, Enable content-based deduplication
  • At point 6, choose Basic method
    • At Define who can send messages to the queue selection, choose Only the queue owner
    • At Define who can receive messages from the queue selection, choose Only the queue owner
  • At point 7, enable encryption
  • At point 8, enable dead-letter queue
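
A hedged boto3 equivalent of these console choices is shown below; the dead-letter queue ARN and maxReceiveCount are placeholders.

import boto3

sqs = boto3.client("sqs", region_name="eu-west-2")

sqs.create_queue(
    QueueName="incident_queue.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
        "SqsManagedSseEnabled": "true",  # encryption at rest
        "RedrivePolicy": (
            '{"deadLetterTargetArn":'
            ' "arn:aws:sqs:eu-west-2:AWSaccountnumber:incident_dlq.fifo",'
            ' "maxReceiveCount": "5"}'
        ),
    },
)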

Why Amazon SQS FIFO Queues
An Amazon SQS FIFO queue is required to ensure that messages are processed in the order in which they were added to the queue. Lambda1 is triggered by an Amazon S3 notification generated when an object is put onto Amazon S3. Each object contains multiple rows, one row per incident that happened in a 5-minute window. Lambda1 parses the content of the object and writes one message to the Amazon SQS queue per row. For example, in this screenshot taken from Amazon CloudWatch, one Lambda1 invocation wrote 687 SQS messages.

lambda - get s3 object

Among these 687 incidents that happened in a 5-minute window, two incidents might have happened for the same subscriber. It is important that this specific subscriber’s record in DynamoDB is updated with the two incidents in the right order.

Lambda2: SQS-Poller

Create Lambda function
Complete the steps contained in Create a Lambda Function with the following amendments:

  • Type “SQS-Poller1-NEW” in the Function name field
  • Select Python 3.9 as runtime version
  • Keep the default execution role selection

Upload the code
Complete the following steps:

  1. Select the newly created Lambda function from the main dashboard where all Lambda functions are displayed
  2. Beneath the Function overview panel, select the Code tab
  3. In the Code Source panel, copy and paste the code found at GitHub and follow the instructions contained in the README file.
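
Again, the authoritative code is on GitHub. The hedged sketch below shows the kind of logic SQS-Poller implements: it appends each incident to the subscriber’s item in the Subscriber_table created later in this post; the field positions in the message body and the map attribute names are assumptions.

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Subscriber_table")  # created in the Subscriber DB section below


def lambda_handler(event, context):
    """Triggered by the FIFO queue (batch size 1): append the incident to the
    subscriber's record, creating the item on first sight of the MSISDN."""
    for record in event["Records"]:
        fields = record["body"].split(",")
        msisdn = fields[1]  # assumed position of the MSISDN in the row
        incident = {
            "timestamp": fields[0],
            "type": fields[2],
            "cell": fields[3],
            "ratio_15min": fields[-2],
            "ratio_60min": fields[-1],
        }
        table.update_item(
            Key={"PhoneNumber": msisdn},
            UpdateExpression="SET #inc = list_append(if_not_exists(#inc, :empty), :new)",
            ExpressionAttributeNames={"#inc": "incident"},
            ExpressionAttributeValues={":empty": [], ":new": [incident]},
        )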

Set up Trigger
Complete the following steps:

  1. In the Function overview panel, choose Add trigger
  2. Select SQS in the Trigger configuration panel
  3. Select the incident_queue.fifo from the dropdown menu of the SQS Queue field
  4. Type “1” in the Batch size panel
  5. Tick the Enable trigger box
  6. Choose Add

Set up Destination
No destination is configured for this Lambda function.

Permissions
Following the principle of least privilege, complete the following steps:

  1. Beneath the Function overview panel, select the Configuration tab, then navigate to the Permissions section (as illustrated below)

sqs-poller

  2. Select the role name within the Execution Role panel to open the IAM console page
  3. Three permissions policies must be enabled, whose JSON representations are reported here; replace AWSaccountnumber with your AWS account number:
    1. Get and Update Items on Amazon DynamoDB Subscriber_table
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "VisualEditor0",
                  "Effect": "Allow",
                  "Action": [
                      "dynamodb:GetItem",
                      "dynamodb:UpdateItem"
                  ],
                  "Resource": [
      "arn:aws:dynamodb:eu-west-2:AWSaccountnumber:table/Subscriber_table"
                  ]
              }
          ]
      }
    2. Receive from and Delete message on Amazon SQS queue incident_queue.fifo
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "VisualEditor0",
                  "Effect": "Allow",
                  "Action": [
                      "sqs:DeleteMessage",
                      "sqs:GetQueueAttributes",
                      "sqs:ReceiveMessage"
                  ],
                  "Resource": [
                      "arn:aws:sqs:eu-west-2:AWSaccountnumber:incident_queue.fifo"
                  ]
              }
          ]
      }
    3. Lambda Execution Role to log events
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": "logs:CreateLogGroup",
                  "Resource": "arn:aws:logs:eu-west-2: AWSaccountnumber:*"
              },
              {
                  "Effect": "Allow",
                  "Action": [
                      "logs:CreateLogStream",
                      "logs:PutLogEvents"
                  ],
                  "Resource": [
                      "arn:aws:logs:eu-west-2: AWSaccountnumber:log-group:/aws/lambda/SQS-Poller1-NEW:*"
                  ]
              }
          ]
      }

Environment Variables
Still in the Configuration tab, navigate to the Environment Variables section. Configure the following environment variables, replacing AWSaccountnumber with your account number:

  1. Key = queue_url, Value = https://sqs.eu-west-2.amazonaws.com/AWSaccountnumber/incident_queue.fifo

Subscriber DB

The following diagram illustrates the subscriber DB block of the architecture. It is based on an Amazon DynamoDB table, whose objective is to store incidents, engagement, feedback, and sentiment per subscriber. The table is updated following new events and triggers the engagement handler through the DynamoDB stream event.

subscriber db

Amazon DynamoDB

Create an Amazon DynamoDB table
Complete the following steps to create an Amazon DynamoDB table:

  1. Open the DynamoDB console.
  2. Choose Create Table.
  3. In the Create DynamoDB table screen, do the following:
    1. On the Table name box, enter “Subscriber_table.”
    2. In the Partition key box, for the Primary key, enter “PhoneNumber.” Set the data type to String.
  4. In the Table setting, select Use default settings.
  5. Choose Create.

Set Capacity
Complete the following steps to set the Capacity to meet your traffic requirements:

  1. Navigate to the Capacity tab of the Subscriber_table just created
  2. Choose the on-demand option. This choice will remove any performance bottlenecks, albeit with cost implications.
  3. Choose Save.

Enable DynamoDB stream
Complete the following steps to enable DynamoDB stream functionality:

  1. Navigate to the Overview tab of the Subscriber_table just created.
  2. Choose Manage DynamoDB stream.
  3. Choose New Image in the Manage Stream panel.

manage stream

  4. Choose Enable
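
The three short procedures above (table creation, on-demand capacity, and the NEW_IMAGE stream) can also be scripted; a hedged boto3 sketch follows.

import boto3

dynamodb = boto3.client("dynamodb", region_name="eu-west-2")

dynamodb.create_table(
    TableName="Subscriber_table",
    AttributeDefinitions=[{"AttributeName": "PhoneNumber", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "PhoneNumber", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # on-demand capacity
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_IMAGE"},
)
dynamodb.get_waiter("table_exists").wait(TableName="Subscriber_table")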

Database structure
The Subscriber_table auto-populates as soon as the SQS-Poller1-NEW Lambda function starts writing to it. The resulting table structure will appear as in the following extract.

code

Every event pertaining to a single PhoneNumber updates the same record. Events are captured and grouped into lists of map objects named after the nature of the event (engagement, feedback, incident, sentiment).
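
As an illustration of that structure, a single item might look like the Python dictionary below; attribute names follow this post, while the values are hypothetical and the lists grow as new events arrive.

subscriber_item = {
    "PhoneNumber": "447712345678",
    "incident": [
        {
            "timestamp": "2021-06-01T10:02:11Z",
            "type": "CALL_DROP",
            "cell": "CELL_17",
            "ratio_15min": "0.25",
            "ratio_60min": "0.08",
        }
    ],
    "engagement": [],  # populated by the engagement handler (part 2 of this series)
    "feedback": [],    # populated when the subscriber replies (part 2)
    "sentiment": [],   # populated by sentiment analysis (part 2)
}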

Data retention strategy
The DynamoDB TTL feature lets you set a per-item expiration timestamp on the data written to a given DynamoDB table, so you retain only the required data and only for as long as it is needed to satisfy business functionality.
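
TTL can be enabled from the console or, as a hedged boto3 sketch, as shown below; the attribute name ttl is a choice you make, and your application must write an epoch-seconds expiry value into that attribute for each item.

import boto3

dynamodb = boto3.client("dynamodb", region_name="eu-west-2")

dynamodb.update_time_to_live(
    TableName="Subscriber_table",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "ttl"},
)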

Conclusion

AWS services provide CSPs with the ability to build a customer in-context engagement solution that meets the needs of their evolving operations.

In this first post, we have explored the first section of an end-to-end solution, which tracks every subscriber’s incidents in real time. This section constitutes the foundation for collecting and analyzing incident occurrence profiles and provides a single pane of glass for tracking subscribers’ quality of service.

The next blog post in this series will explore how CSPs can utilize this foundational layer to validate network and service incidents directly with subscribers in real time. By directly capturing their sentiment following recurring incident patterns, CSPs can prioritize operations with the objective of reducing churn and minimizing the strain on first-line support.

References

  1. https://www.globenewswire.com/news-release/2020/09/28/2099863/0/en/Global-Telecommunications-Network-Operators-Market-Review-Q2-2020-Capex-Drops-to-10-Year-Low-Revenues-Sink-Amidst-Spread-of-COVID-19-Pandemic.html
  2. https://www.computerweekly.com/blog/The-Full-Spectrum/How-churn-is-breaking-the-telecoms-market-and-what-service-providers-can-do-about-it
  3. https://www.invespcro.com/blog/customer-acquisition-retention/
  4. https://datahub.analysysmason.com/dh/
  5. https://www.ofcom.org.uk/research-and-data/multi-sector-research/cmr/cmr-2020/interactive

Contribution

  • Business Case – Ludovica Chiacchierini and Tom Edwards
  • Technical description – Christian Finelli (AWS) and Angelo Sampietro (AWS)
  • Intro, Solution, Conclusion – Christian Finelli (AWS), Angelo Sampietro (AWS), Ludovica Chiacchierini, Tom Edwards
Christian Finelli

I am a Solution Architect in AWS with a strong telecom background. Since I joined AWS, I have been struck by how serverless makes it simple to innovate. I work with AWS customers in the Telco IBU, and I literally learn new things every day. When not at work, I love reading and swimming.

Angelo Sampietro

Angelo Sampietro is a Senior Manager leading the Telecom IBU Solution Architecture team in EMEA at Amazon Web Services. Angelo has a strong background in cloud computing, with over 20 years’ experience in the telecom and global industry, working in the United States, Luxembourg, and Italy. He helps CSPs adopt AWS technology, pulling together and leading diverse teams into new business areas and rapidly analyzing and understanding client situations and needs. Angelo also works on strategic initiatives and future AWS solutions for telecom operators.

Ludovica Chiacchierini

Ludovica Chiacchierini is a Strategy & Management Consulting Manager within the Communications & Media Consulting Practice at Accenture UK&I. She has specialist network experience advising clients on how to optimise network processes by harnessing the power of data and automation to deliver tangible benefits for them and their customers.

Tom Edwards

Tom Edwards is a Management Consultant within the Communications & Media Consulting Practice at Accenture UK&I. He has specialist experience advising on large scale business process improvement projects and Lead to Cash programmes, designing and delivering solutions to transform the client’s end to end customer experience.