AWS Architecture Blog

Simplify and automate bill processing with Amazon Bedrock

This post was co-written with Shyam Narayan, a leader in the Accenture AWS Business Group, and Hui Yee Leong, a DevOps and platform engineer, both based in Australia. Hui and Shyam specialize in designing and implementing complex AWS transformation programs across a wide range of industries.

Enterprises that operate across many locations, such as those in the retail and telecom industries, often deal with the complexity of processing large numbers of utility bills. These bills need to be verified for discrepancies before payments are made, and the work is often handled by teams of people manually processing invoices in various formats.

Additionally, enterprises often need to meet Environmental, Social, and Governance (ESG) regulatory requirements, and utility bills are an important source for reporting electricity, water, and gas usage, data that largely goes untapped.

Invoices are generated by utility providers in various formats, such as PDF, XLS, and EML, have different layouts, and are often delivered by email. This makes it difficult to standardize ingestion, analyze the invoices for anomalies such as unusual seasonal usage, compare contracted vs. billed rates, and finally process payments.

Due to this lack of usage data standardization, ingesting this data into a central ESG data lake becomes challenging.

In this post, we present a solution using Amazon Bedrock to address these challenges. The solution offers the following capabilities:

  • Provides flexibility to ingest utility bills in various formats and layouts
  • Standardizes bills into a single format and applies data quality controls
  • Integrates with existing systems through events
  • Automates repetitive tasks, which reduces human error and enhances efficiency
  • Enables predictive analysis, supporting informed decision-making with generative artificial intelligence (AI)
  • Integrates with existing data lakes, data warehouses, payment systems, and ESG reporting systems

Solution overview

The solution uses Amazon Bedrock to automate invoice processing, tariff extraction, validation, and reporting, as shown in Figure 1.

Figure 1. Diagram showing the Amazon Bedrock solution to simplify and automate billing

The workflow includes the following steps:

  1. Using SFTP connectors with AWS Transfer Family, invoices are uploaded to an Amazon Simple Storage Service (Amazon S3) bucket.
  2. Some utility providers send invoices directly to an email address managed by Amazon Simple Email Service (Amazon SES); the PDF attachment is extracted and uploaded to an Amazon S3 bucket (a sketch of this extraction follows the list).
  3. The upload generates an S3 event on an Amazon EventBridge bus, and an EventBridge rule invokes an AWS Step Functions workflow for invoice extraction and validation (see the EventBridge rule sketch after this list).
  4. The Step Functions workflow validates the invoices. It uses Amazon Textract for text extraction (for a tutorial, see Detecting text with an AWS Lambda function) and invokes an Amazon Titan text embeddings model to generate embeddings, which are stored in Amazon Aurora PostgreSQL-Compatible Edition with pgvector. It also stores the extracted invoices in an Amazon DynamoDB table (see the extraction and embedding sketch after this list).
  5. Failed validations are flagged for manual processing by agents through Amazon Simple Notification Service (Amazon SNS).
  6. An AWS Lambda function invoked by an Amazon EventBridge scheduled rule fetches tariff data from an external SFTP repository and stores it in an S3 bucket.
  7. A utility data extraction Step Functions workflow is invoked by an S3 event. It extracts data from various providers, which may arrive in different formats and units, so the data can be used seamlessly by the business logic.
  8. The tariff data is then stored in an Amazon DynamoDB table, which is used by the business logic Step Functions workflow.
  9. The main business logic of checking invoices for usage anomalies and verifying billed rates against the approved tariff runs in the business logic Step Functions workflow. This workflow uses Amazon Bedrock, the embeddings, the extracted invoices, and the tariff data to check for anomalies and invoice accuracy, and it updates the reporting database (see the anomaly check sketch after this list).
  10. Reporting data is stored in an Amazon Aurora database and visualized using Amazon QuickSight for payment validation reports.
  11. Amazon Q in QuickSight is used to support faster decision-making through generative BI capabilities.
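
A minimal sketch of step 2, assuming an Amazon SES receipt rule that stores the raw email in a mailbox bucket and then invokes an AWS Lambda function. The bucket names and key prefix are hypothetical placeholders, not part of the deployed solution.

```python
import email

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket names, used for illustration only.
MAIL_BUCKET = "utility-bill-mailbox"
INVOICE_BUCKET = "utility-bill-invoices"


def handler(event, context):
    """Parse a raw email stored by an SES receipt rule and copy any PDF
    attachments into the invoice bucket for downstream processing."""
    message_id = event["Records"][0]["ses"]["mail"]["messageId"]

    # The SES S3 action stores the raw message under its message ID by default.
    raw_email = s3.get_object(Bucket=MAIL_BUCKET, Key=message_id)["Body"].read()
    message = email.message_from_bytes(raw_email)

    for part in message.walk():
        if part.get_content_type() == "application/pdf":
            filename = part.get_filename() or f"{message_id}.pdf"
            s3.put_object(
                Bucket=INVOICE_BUCKET,
                Key=f"incoming/{filename}",
                Body=part.get_payload(decode=True),
            )
```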
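
For step 3, an EventBridge rule can match Object Created events from the invoice bucket and start the extraction workflow. The boto3 sketch below is one way to wire this up, assuming EventBridge notifications are enabled on the bucket; the bucket name, state machine ARN, and role ARN are placeholders.

```python
import json

import boto3

events = boto3.client("events")

# Placeholder identifiers; real values come from your deployment.
INVOICE_BUCKET = "utility-bill-invoices"
STATE_MACHINE_ARN = "arn:aws:states:ap-southeast-2:111122223333:stateMachine:InvoiceExtraction"
EVENTS_ROLE_ARN = "arn:aws:iam::111122223333:role/eventbridge-start-execution"

# Match Object Created events emitted by the invoice bucket.
event_pattern = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {"bucket": {"name": [INVOICE_BUCKET]}},
}

events.put_rule(
    Name="invoice-uploaded",
    EventPattern=json.dumps(event_pattern),
    State="ENABLED",
)

# The role must allow states:StartExecution on the target state machine.
events.put_targets(
    Rule="invoice-uploaded",
    Targets=[
        {
            "Id": "invoice-extraction-workflow",
            "Arn": STATE_MACHINE_ARN,
            "RoleArn": EVENTS_ROLE_ARN,
        }
    ],
)
```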
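
Step 4 pairs Amazon Textract with an embeddings model on Amazon Bedrock. The following Lambda-style sketch shows the general shape of that task, assuming single-page invoices and the Amazon Titan Text Embeddings model; the table name is hypothetical, and writing the embedding to Aurora PostgreSQL with pgvector is handled by a separate task.

```python
import json

import boto3

textract = boto3.client("textract")
bedrock = boto3.client("bedrock-runtime")
dynamodb = boto3.resource("dynamodb")

# Hypothetical table name; the real name comes from the deployed stack.
INVOICE_TABLE = "utility-invoices"


def handler(event, context):
    """Extract invoice text, generate an embedding, and persist the invoice."""
    bucket = event["bucket"]
    key = event["key"]

    # Synchronous text detection is enough for single-page invoices;
    # multi-page PDFs would use start_document_text_detection instead.
    textract_response = textract.detect_document_text(
        Document={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    invoice_text = "\n".join(
        block["Text"]
        for block in textract_response["Blocks"]
        if block["BlockType"] == "LINE"
    )

    # Generate an embedding with the Amazon Titan Text Embeddings model.
    embedding_response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": invoice_text[:8000]}),
    )
    embedding = json.loads(embedding_response["body"].read())["embedding"]

    # Store the extracted invoice text; the embedding is written to
    # Aurora PostgreSQL with pgvector in a later state of the workflow.
    dynamodb.Table(INVOICE_TABLE).put_item(
        Item={"invoice_key": key, "invoice_text": invoice_text}
    )

    return {"invoice_key": key, "embedding_dimensions": len(embedding)}
```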
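
The anomaly and tariff checks in step 9 can be implemented by prompting a model on Amazon Bedrock with the extracted invoice, the approved tariff record, and recent usage history. This sketch uses the Anthropic Claude 3 Sonnet model mentioned later in this post; the prompt wording, field names, and the expectation that the model answers in JSON are illustrative assumptions.

```python
import json

import boto3

bedrock = boto3.client("bedrock-runtime")


def check_invoice(invoice_text: str, approved_tariff: dict, usage_history: list) -> dict:
    """Ask the model whether the billed rate matches the approved tariff and
    whether the usage is anomalous compared with prior months."""
    prompt = (
        "You are validating a utility invoice.\n\n"
        f"Invoice text:\n{invoice_text}\n\n"
        f"Approved tariff: {json.dumps(approved_tariff)}\n"
        f"Usage for previous months: {json.dumps(usage_history)}\n\n"
        "Respond with JSON containing the keys 'rate_matches_tariff', "
        "'usage_anomaly', and 'explanation'."
    )

    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    result = json.loads(response["body"].read())

    # Claude returns a list of content blocks; the first block holds the text,
    # which is expected to be the JSON described in the prompt.
    return json.loads(result["content"][0]["text"])
```

Results that indicate a rate mismatch or a usage anomaly can then be written to the reporting database and, where needed, routed to the manual review path described in step 5.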

The following screenshots show examples of the Amazon QuickSight visualizations.

Figure 2. QuickSight visualization showing the physical locations of invoiced sites, monthly combined usage, and billed amount.

Figure 3. QuickSight Q animation demonstrating AI-driven answers to questions about the data beyond what is presented in the dashboards

Benefits of the solution

This solution offers the following benefits:

  • Contextual understanding – With the Anthropic Claude 3 Sonnet model on Amazon Bedrock, the solution can understand, analyze, and interpret the context of your data beyond simple text recognition.
  • Flexibility and adaptability – The solution can learn and adapt to new invoice formats because Amazon Bedrock understands the data contained within the invoices rather than relying on a fixed layout.
  • Event-driven architecture – This is an event-driven, serverless architecture, which enables modularity and integration with external workflows specific to your organization.
  • Automated workflow – The solution reduces the need for manual intervention in data quality processes, such as data profiling, cleansing, and validation. This allows for faster processing and reduced human error.
  • Cost savings – Automation reduces the reliance on teams of people, resulting in cost savings for organizations.
  • Compliance and risk mitigation – Automated data quality processes help organizations maintain compliance with ESG regulatory requirements and industry standards.
  • Data governance – Automation facilitates the implementation of data governance policies and procedures. By automating data quality monitoring and reporting, organizations can enforce data governance standards more effectively and adhere to data quality guidelines.

Conclusion

In this post, we saw how automation paves the way for organizations to optimize utility bill processing and gain additional ESG insights. We demonstrated how applying generative AI on Amazon Bedrock can simplify data extraction when the data isn’t presented in a standard format. Finally, we presented a serverless, event-driven solution that scales automatically based on your business needs.

For more in-depth guides, check out our workshops on Amazon QuickSight Q and Amazon Bedrock.

Vijay Shekhar Rao

Vijay Shekhar Rao is a Partner Solutions Architect working with global system integrators. Before joining AWS, Vijay spent several years architecting, building, managing, and troubleshooting complex infrastructure for critical systems. When not working, he enjoys time with his family and tries to stay healthy.

Mike Black

Mike Black works as a Partner Solutions Architect with global system integrators. He has spent most of his career working in consulting, cloud and network engineering. When not working, Mike enjoys traveling with family and playing football (not soccer).