
Improving customer experience for the public sector using AWS services


Citizens are increasingly engaging with public sector entities through digital touchpoints, and every touchpoint is an opportunity for the public sector to deliver an experience tailored to each person's preferences and needs. In fact, citizens increasingly expect government to provide modern digital experiences for conducting online transactions: market research tells us that 63 percent of consumers see personalization as the standard level of service.

This post offers various architectural patterns for improving customer experience for the public sector for a wide range of use cases. The aim of the post is to help public sector organizations create customer experience solutions on the Amazon Web Services (AWS) Cloud using AWS artificial intelligence (AI) services and AWS purpose-built data analytics services.

Why customer experience (CX) is so important

Customer experience spans the entire customer journey, not just a single point in time. Beyond improving the user experience itself, CX can have a significant impact on your overall business and should be viewed as a growth engine, not a cost center.

Here are some key business outcomes of customer experience for the public sector:

  • A personalized user experience can help captivate an audience while creating brand allegiance in a crowded digital environment.
  • It can increase engagement with products, content, and the overall time spent on a website or app.
  • CX drives efficiencies in marketing spend (as measured by business KPIs like click-through rates, content views, or email open rates).
  • It increases conversion rates and customer lifetime value because customers are more likely to consume content or items that are tailored to their needs.
  • It can also improve discoverability by helping users quickly and easily discover content or items they are most interested in. This is particularly important for organizations with large catalogs where new products are frequently being added.

Improving public sector business outcomes with accurate time series forecasts 

As the amount of data increases dramatically every year, organizations face significant challenges: managing inventory levels to avoid poor customer experiences, predicting service demand, allocating resources effectively, and conducting financial and workforce planning. Amazon Forecast is a fully managed service that uses statistical and machine learning (ML) algorithms to deliver highly accurate time series forecasts.

Some common use cases and solutions that fit under the architecture pattern for the public sector include:

  • Predicting service demand
  • Allocating resources to optimize impact and outcomes for citizens
  • Financial planning and revenue and cost forecasts
  • Workforce planning

The following figure illustrates the architecture of the solution.

Figure 1. Architecture diagram for the Forecast solution.

The workflow consists of the following steps:

  1. Data preparation – You start by processing and cleaning the relevant datasets with AWS Glue or a custom notebook to ensure high-quality input for forecasting.
  2. Data ingestion – Once the data is prepared, you can ingest it. The data can include demand data; related data, such as seasonal factors or marketing data; and metadata, such as the source, format, or timestamps, which helps manage the actual data. A configuration file contains settings and rules for data processing: it might define how to interpret different data formats, how to handle errors, or how to map source data to the destination format. Finally, the processed data is stored in Amazon Simple Storage Service (Amazon S3) for ML.
  3. Machine learning – Once the data is imported into Amazon S3, an AWS Lambda function triggers a workflow in AWS Step Functions, which orchestrates a series of automated tasks and manages the flow of operations. The central component here is Forecast, which trains a variety of base models and fine-tunes them through hyperparameter optimization to deliver the most effective model. The optimal model is determined by your criterion of choice, whether that is the highest accuracy or the lowest loss measure (the core Forecast calls are sketched after this list).
  4. Evaluate and share
    1. Data storage – Subsequently, you can store the generated forecast results back into the S3 bucket.
    2. Data query – You can use Amazon Athena to query, extract, and manipulate the forecast results for further analysis (see the query sketch at the end of this section).
    3. Business intelligence – You can use Amazon QuickSight to create intuitive reports and dashboards that effectively communicate insights from the forecast data.
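
To make step 3 concrete, here is a minimal boto3 sketch of the core Forecast calls that the Step Functions workflow would orchestrate. All names and ARNs are hypothetical placeholders, and the dataset group and dataset import from steps 1 and 2 are assumed to already exist:

```python
import boto3

forecast = boto3.client("forecast")
forecast_query = boto3.client("forecastquery")

# Train a predictor. AutoPredictor trains multiple base models and applies
# hyperparameter optimization to pick the most effective one.
predictor = forecast.create_auto_predictor(
    PredictorName="demand-predictor",
    ForecastHorizon=14,        # predict 14 future time steps
    ForecastFrequency="D",     # at daily granularity
    DataConfig={
        "DatasetGroupArn": "arn:aws:forecast:us-east-1:123456789012:dataset-group/demand"
    },
)

# In practice, poll describe_auto_predictor until Status is ACTIVE here.
result = forecast.create_forecast(
    ForecastName="demand-forecast",
    PredictorArn=predictor["PredictorArn"],
)

# Likewise, wait for the forecast to become ACTIVE, then query a single item
# (for example, one service location) for its predicted demand.
response = forecast_query.query_forecast(
    ForecastArn=result["ForecastArn"],
    Filters={"item_id": "service_center_42"},
)
print(response["Forecast"]["Predictions"])
```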

For details on Forecast, visit Automating with AWS CloudFormation and Solcast: Solar irradiance forecasting for the solar powered future.
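
As a companion to step 4b, the following is a minimal sketch of querying exported forecast results with Athena. The database, table, and bucket names are hypothetical and assume the export in Amazon S3 has already been catalogued (for example, by an AWS Glue crawler):

```python
import time

import boto3

athena = boto3.client("athena")

# Run a hypothetical query against the exported forecast results.
query = athena.start_query_execution(
    QueryString=(
        "SELECT item_id, date, p50 "
        "FROM forecast_results "
        "ORDER BY item_id, date"
    ),
    QueryExecutionContext={"Database": "forecast_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
query_id = query["QueryExecutionId"]

# Poll until the query finishes, then print the result rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)
    status = state["QueryExecution"]["Status"]["State"]
    if status in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if status == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    for row in results["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```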

Improve citizens’ customer experience with personalized recommendations

For many organizations, customer satisfaction is the metric that matters most, serving as the cornerstone of success. Harnessing the power of personalized experiences has emerged as a critical strategy for engaging citizens and improving satisfaction. Amazon Personalize is a fully managed ML service that goes beyond rigid rule-based recommendation systems to deliver highly personalized recommendations.

Some common use cases and solutions that fit under the architecture pattern for the public sector include:

  • User personalization for educational content
  • Personalized ranking
  • Related items
  • User segmentation

The following figure illustrates the architecture for near real-time personalized recommendations:

Figure 2. Architecture diagram for near real-time personalized recommendations using Amazon Personalize.

The workflow consists of the following steps:

  1. Prepare the data – Create a dataset group, schemas, and datasets that represent your items, interactions, and user data.
  2. Train the model with Amazon Personalize – After importing your datasets into a dataset group from Amazon S3, select the recipe that best matches your use case, and then create a solution to train a model by creating a solution version. When your solution version is complete, you can create a campaign for it.
  3. Get near real-time recommendations – After a campaign has been created, you can integrate calls to it in your application. This is where calls to the GetRecommendations or GetPersonalizedRanking APIs request near real-time recommendations from Amazon Personalize. Your website or mobile application calls a Lambda function through Amazon API Gateway to receive recommendations for your business apps (see the sketch after this list).
  4. Use an event tracker for near real-time recommendations – An event tracker provides an endpoint that lets you stream interactions from your application back to Amazon Personalize in near real time using the PutEvents API. You can build an event collection pipeline with API Gateway, Amazon Kinesis Data Streams, and Lambda to receive and forward interactions to Amazon Personalize. The event tracker performs two primary functions: it persists all streamed interactions so they are incorporated into future retrainings of your model, and it immediately shapes the user's live recommendations. The latter is also how Amazon Personalize cold starts new users: when a new user visits your site, Amazon Personalize recommends popular items, and after you stream in an event or two, it immediately starts adjusting recommendations.
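
To make steps 3 and 4 concrete, here is a minimal boto3 sketch that calls Amazon Personalize directly; in the architecture above, these calls would live inside the Lambda functions behind API Gateway. The campaign ARN, tracking ID, and user and item IDs are hypothetical placeholders:

```python
import uuid
from datetime import datetime

import boto3

runtime = boto3.client("personalize-runtime")
events = boto3.client("personalize-events")

# Step 3: request near real-time recommendations for a user.
recommendations = runtime.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/edu-content",
    userId="user-123",
    numResults=10,
)
for item in recommendations["itemList"]:
    print(item["itemId"], item.get("score"))

# Step 4: stream an interaction back through the event tracker so it shapes
# both the user's live recommendations and future model retrainings.
events.put_events(
    trackingId="tracking-id-from-your-event-tracker",
    userId="user-123",
    sessionId=str(uuid.uuid4()),
    eventList=[
        {
            "eventType": "click",
            "itemId": "course-456",
            "sentAt": datetime.now(),
        }
    ],
)
```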

For more details on Amazon Personalize, visit Architecting near real-time personalized recommendations with Amazon Personalize and How Skillshare increased their click-through rate by 63% with Amazon Personalize. 

Localize content through translation

In an increasingly globalized world, organizations face the obstacle of connecting with a broad, multilingual audience. The ability to present localized content and bridge linguistic divides is crucial. A solution that targets these issues not only simplifies content localization and language translation but also enhances the analysis of multilingual content. This is essential for academic and news publishers aiming to disseminate their work globally, ensuring they can efficiently reach and engage with international communities.

Amazon Translate makes localization more efficient with broad language support, including for real-time use cases like chat.

Some common use cases and solutions that fit under the architecture pattern for the public sector include:

  • Reaching a wider audience with localized content
  • Bridging communication gaps caused by language
  • Analyzing content efficiently across multiple languages
  • Academic research and news articles for global publishing

The following figure depicts the architecture for an ML translation workflow.

Figure 3. Architecture for an ML translation workflow using Amazon Translate.

The workflow consists of the following steps:

  1. Data ingestion – You start by uploading initial content in a centralized repository such as Amazon S3, ready for processing. Once uploaded, an automated Lambda function is activated to initiate the translation process, pulling content from the repository.
  2. Machine translation – The content is processed through Amazon Translate, which offers a variety of features that make localization more efficient and organizations more global. For document translation, you can use the batch API to translate a set of documents; for real-time text translation, you can use the real-time API to translate short texts of conversational length into the target language (both paths are sketched after this list). The machine-translated content is temporarily held in an S3 bucket.
  3. Human augmentation – Once the translated documents are ready, you can opt to conduct a human review of the ML results to verify accuracy. Amazon Augmented AI (Amazon A2I) provides a managed experience where you can set up an entire human review workflow in a few steps to evaluate and enhance the machine-translated output.
  4. Content reconstruction – Another automated function takes the edited content and reformats it, ensuring it retains the original document structure. The human-reviewed, post-edited translations are saved in dedicated storage to build a translation memory for future reference, which can help improve translation quality over time.
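
To make the real-time path in step 2 concrete, here is a minimal boto3 sketch. The text and language codes are illustrative; setting SourceLanguageCode to "auto" lets Amazon Translate detect the source language for you:

```python
import boto3

translate = boto3.client("translate")

# Translate a short, conversational-length text in real time.
result = translate.translate_text(
    Text="Your benefits application has been received.",
    SourceLanguageCode="auto",
    TargetLanguageCode="es",
)
print(result["TranslatedText"])        # the Spanish translation
print(result["SourceLanguageCode"])    # the detected source language
```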

For more details on Amazon Translate, visit Build a multi-lingual document translation workflow with domain-specific and language-specific customization and How one council used machine learning to cut translation costs by 99.96%.
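
The batch path from step 2 can be sketched along the same lines: translating a whole set of documents in Amazon S3 asynchronously. The bucket paths and IAM role ARN below are hypothetical, and the role must grant Amazon Translate access to both buckets:

```python
import boto3

translate = boto3.client("translate")

# Kick off an asynchronous batch translation job over a prefix of documents.
job = translate.start_text_translation_job(
    JobName="localize-articles",
    InputDataConfig={
        "S3Uri": "s3://my-source-bucket/articles/",
        "ContentType": "text/html",
    },
    OutputDataConfig={"S3Uri": "s3://my-translated-bucket/articles/"},
    DataAccessRoleArn="arn:aws:iam::123456789012:role/TranslateBatchRole",
    SourceLanguageCode="en",
    TargetLanguageCodes=["es", "fr", "ar"],
)

# The job runs asynchronously; translated files land in the output bucket.
status = translate.describe_text_translation_job(JobId=job["JobId"])
print(status["TextTranslationJobProperties"]["JobStatus"])
```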

Automatically extract printed text, handwriting, and data from documents and forms

Organizations across the public sector often deal with large volumes of documents and forms containing critical information in various formats, including printed text, handwritten text, and structured data. Automating this data extraction process using advanced ML and computer vision technologies can significantly streamline operations, improve accuracy, and enable organizations to unlock valuable insights from their data more efficiently. Amazon Textract automates data extraction from diverse document types using ML models for accurate identification and extraction.

Some common use cases and solutions that fit under the architecture pattern for the public sector include:

  • Automating user data capture to improve customer experience
  • Improving productivity by automating repeatable workflows
  • Improving decision-making
  • Automating business processes

The following figure illustrates the architecture for a text extraction workflow.

Figure 4. Architecture for a text extraction workflow using Amazon Textract.

The workflow consists of the following steps:

  1. Data ingestion and extraction – You start by uploading the source documents to a data storage service such as Amazon S3. Once uploaded, an automated Lambda function is triggered, which initiates the data extraction workflow. Amazon Textract extracts the relevant data and information from the source files (see the sketch after this list).
  2. Data storage (extracted) – After the data extraction process completes, Amazon Simple Notification Service (Amazon SNS) notifies another Lambda function, which retrieves the results (see the retrieval sketch at the end of this section). You can also store document metadata in Amazon DynamoDB. The extracted data is then stored in a separate S3 bucket for further processing.
  3. Data analysis and visualization – Data processing services such as AWS Glue process and transform the data, making it ready for analysis. Using Athena and QuickSight, you can query and visualize the data to gain insights from it.
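
To make step 1 concrete for multi-page documents, here is a minimal boto3 sketch of an asynchronous Amazon Textract job that reads from Amazon S3 and publishes its completion message to the SNS topic from step 2. All bucket names and ARNs are hypothetical placeholders:

```python
import boto3

textract = boto3.client("textract")

# Start an asynchronous analysis job over a multi-page PDF in Amazon S3.
job = textract.start_document_analysis(
    DocumentLocation={
        "S3Object": {"Bucket": "my-intake-bucket", "Name": "forms/application-001.pdf"}
    },
    FeatureTypes=["FORMS", "TABLES"],  # extract key-value pairs and tables
    NotificationChannel={
        "SNSTopicArn": "arn:aws:sns:us-east-1:123456789012:textract-complete",
        "RoleArn": "arn:aws:iam::123456789012:role/TextractSNSRole",
    },
)
print(job["JobId"])  # the same JobId arrives in the SNS completion message
```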

For more details on Amazon Textract, visit Extracting, analyzing, and interpreting data from Medicaid forms with Amazon Textract and Build a receipt and invoice processing pipeline with Amazon Textract.
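
Continuing that sketch, step 2's SNS-triggered Lambda might retrieve the finished job's output as follows. The job ID normally arrives in the SNS message payload, and results are paginated, so the code follows NextToken:

```python
import boto3

textract = boto3.client("textract")

def get_lines(job_id):
    """Collect the plain-text LINE blocks from a completed analysis job."""
    lines, token = [], None
    while True:
        kwargs = {"JobId": job_id}
        if token:
            kwargs["NextToken"] = token
        page = textract.get_document_analysis(**kwargs)
        lines.extend(
            block["Text"] for block in page["Blocks"] if block["BlockType"] == "LINE"
        )
        token = page.get("NextToken")
        if not token:
            return lines

for line in get_lines("example-job-id"):
    print(line)
```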

Key considerations and best practices

  1. AI and ML are fast becoming widely deployed technologies across the public sector; they are the new normal.
  2. Define the business value first, and then identify the AI/ML solutions that achieve those desired business outcomes.
  3. Choose AWS managed AI services to improve the citizen experience: Amazon Personalize for personalized recommendations, Amazon Forecast for predicting service demand, Amazon Textract for automating repeatable document workflows, and Amazon Translate for reaching a wider audience with localized content.
  4. With the AI Use Case Explorer on AWS, you can find the most relevant AI use cases, with related content and guidance to make them real, for building better customer experience solutions for the public sector.
  5. Amazon SageMaker Autopilot is a productivity game changer: it lets you run more than 250 training experiments in a single day. Pair it with a multidisciplinary team of data scientists, cybersecurity experts, and developers to make it successful.

Conclusion

This post demonstrated various architectural patterns for building customer experience solutions for the public sector. You can build your own customer experience applications with AWS AI/ML services using the information in this post. You can also attend or watch our “Improving customer experience for public sector” session (ID# SLG201) at the AWS DC Summit on June 26, 2024.


Raghavarao Sodabathina

Raghavarao is a principal solutions architect at Amazon Web Services (AWS), focusing on data analytics, artificial intelligence/machine learning (AI/ML), and cloud security. He engages with customers to create innovative solutions that address their business problems and accelerate the adoption of AWS services. In his spare time, Raghavarao enjoys being with his family, reading books, and watching movies.

FNU Zubair

FNU is a solutions architect at Amazon Web Services (AWS), specializing in data analytics. He helps public sector customers with their cloud adoption and modernization efforts. Outside of work, he enjoys painting, creating music, and traveling.

Shwetha Radhakrishnan

Shwetha is a solutions architect for Amazon Web Services (AWS) with a focus in data analytics. She builds solutions that drive cloud adoption and help organizations make data-driven decisions within the public sector. Outside of work, she loves dancing, spending time with friends and family, and traveling.

Srujana Alajangi

Srujana is a solutions architect for Amazon Web Services (AWS) with a focus on data analytics. As a solutions architect, she plays a crucial role in guiding public sector customers through their cloud journey by designing scalable and secure cloud solutions. Outside of work, she loves spending time with friends and family, watching movies, and traveling.