Artificial Intelligence

Build a generative AI-powered business reporting solution with Amazon Bedrock

Traditional business reporting processes are often time-consuming and inefficient. Associates typically spend about two hours per month preparing their reports, while managers dedicate up to 10 hours per month aggregating, reviewing, and formatting submissions. This manual approach often leads to inconsistencies in both format and quality, requiring multiple cycles of review. Additionally, reports are fragmented across various systems, making consolidation and analysis more challenging.

Generative artificial intelligence (AI) presents a compelling solution to these reporting challenges. According to a Gartner survey, generative AI has become the most widely adopted AI technology in organizations, with 29% already putting it into active use.

This post introduces generative AI guided business reporting, with a focus on writing achievements and challenges about your business. It provides a smart, practical solution that helps simplify and accelerate internal communication and reporting. Built following Amazon Web Services (AWS) best practices, this solution helps you spend less time writing reports and more time driving business results. It tackles three real-world challenges:

  • Uncover valuable insights from vast amounts of data
  • Manage risks associated with AI implementation
  • Drive growth through improved efficiency and decision-making

The full solution code is available in our GitHub repo, allowing you to deploy and test this solution in your own AWS environment.

The generative AI solution enhances the reporting process through automation. By utilizing large language model (LLM) processing, the reporting system can generate human-readable reports, answer follow-up questions, and make insights more accessible to non-technical stakeholders. This automation reduces costs and the need for extensive human resources while minimizing human error and bias. The result is a level of accuracy and objectivity that’s difficult to achieve with manual processes, ultimately leading to more efficient and effective business reporting.

Solution overview

This generative AI-powered Enterprise Writing Assistant demonstrates a modern, serverless architecture that leverages AWS’s powerful suite of services to deliver an intelligent writing solution. Built with scalability and security in mind, this system combines AWS Lambda functions, Amazon Bedrock for AI capabilities, and various AWS services to create a robust, enterprise-grade writing assistant that can help organizations streamline content creation processes while maintaining high standards of quality and consistency.

This solution uses a serverless, scalable design built on AWS services. Let’s explore how the components work together:

User interaction layer

  • Users access the solution through a browser that connects to a frontend web application hosted on Amazon S3 and distributed globally via Amazon CloudFront for optimal performance
  • Amazon Cognito user pools handle authentication and secure user management

API layer

  • Two API types in Amazon API Gateway manage communication between frontend and backend:
    • WebSocket API enables real-time, bidirectional communication for report writing and editing
    • REST API handles transactional operations like submitting and retrieving reports
  • Amazon CloudWatch monitors both APIs for operational visibility
  • Dedicated AWS Lambda authorizers secure both APIs by validating user credentials
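The authorizer pattern above can be sketched in a few lines. The following is a minimal illustration of the response shape API Gateway expects from a Lambda authorizer; the function names, the simplistic token check, and the principal ID are illustrative assumptions, not code from the repository (a real authorizer would validate the Cognito-issued JWT against the user pool's JSON Web Key Set).

```python
# Minimal sketch of a Lambda authorizer response (names are illustrative).

def build_auth_response(principal_id: str, effect: str, method_arn: str) -> dict:
    """Build the allow/deny IAM policy document API Gateway expects back."""
    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": method_arn,
            }],
        },
    }

def handler(event, context):
    # Placeholder check: the real authorizer validates a Cognito JWT here.
    token = event.get("authorizationToken", "")
    effect = "Allow" if token.startswith("Bearer ") else "Deny"
    return build_auth_response("user", effect, event["methodArn"])
```

The same response format secures both the REST and WebSocket APIs; only the token source differs.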

Orchestration layer

  • Specialized AWS Lambda functions orchestrate the core business logic:
    • Business Report Writing Lambda handles report drafting and user assistance
    • Rephrase Lambda improves report clarity and professionalism
    • Submission Lambda processes final report submissions
    • View Submission Lambda retrieves previously submitted reports

AI and storage layer

  • Amazon Bedrock provides the LLM capabilities for report writing and rephrasing
  • Two Amazon DynamoDB tables store different types of data:
    • Session Management table maintains conversation context during active sessions
    • Business Report Store table permanently archives completed reports
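To make the interaction between these two layers concrete, the sketch below assembles the kind of request body the Amazon Bedrock runtime expects for an Anthropic Claude model (the Messages API format), folding stored session history into the prompt. This is an illustration under stated assumptions: the history shape mirrors what a Session Management table might hold, but the attribute names and token limit are not taken from the repository.

```python
import json

# Illustrative sketch: build a Claude Messages API payload from session history.

def build_bedrock_payload(history: list, user_input: str) -> str:
    """Return the JSON body for a bedrock-runtime invoke_model call."""
    messages = history + [{"role": "user", "content": user_input}]
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "messages": messages,
    }
    return json.dumps(body)

# With boto3, this body would be sent roughly as:
#   bedrock = boto3.client("bedrock-runtime")
#   bedrock.invoke_model(modelId=..., body=build_bedrock_payload(history, text))

history = [
    {"role": "user", "content": "Draft my achievement report."},
    {"role": "assistant", "content": "Sure, what did you accomplish?"},
]
payload = json.loads(build_bedrock_payload(history, "We migrated 12 services."))
print(len(payload["messages"]))  # prior turns plus the new user turn
```

Keeping the conversation history in DynamoDB rather than in the Lambda function itself is what lets the serverless functions stay stateless while the session remains coherent.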

This architecture facilitates high availability, automatic scaling, and cost optimization by using serverless components that only incur charges when in use. Communications between components are secured following AWS best practices.

You can deploy this architecture in your own AWS account by following the step-by-step instructions in the GitHub repository.

Real-world workflow: Report generation and rephrasing

The system’s workflow begins by analyzing and categorizing each user input through a classification process. This classification determines how the system processes and responds to the input. The system uses specific processing paths based on three distinct classifications:

  1. Question or command: When the system classifies the input as a question or command, it activates the LLM with appropriate prompting to generate a relevant response. The system stores these interactions in the conversation memory, allowing it to maintain context for future related queries. This contextual awareness provides coherent and consistent responses that build upon previous interactions.
  2. Verify submission: For inputs requiring verification, the system engages its evaluation protocols to provide detailed feedback on your submission. While the system stores these interactions in the conversation memory, it deliberately bypasses memory retrieval during the verification process. This design choice enables the verification process based solely on the current submission’s merits, without influence from previous conversations. This approach reduces system latency and facilitates more accurate and unbiased verification results.
  3. Outside of scope: When the input falls outside the system’s defined parameters, it responds with the standardized message: “Sorry, I can only answer writing-related questions.” This maintains clear boundaries for the system’s capabilities and helps prevent confusion or inappropriate responses.

These classifications support efficient processing by retrieving conversation context only where it is needed, optimizing both performance and accuracy across the different interaction scenarios.
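The three-way routing described above can be sketched as a simple dispatcher. In the real system the classifier is an LLM call; here it is stubbed with keyword rules so the control flow is visible. All function and label names are illustrative, not from the repository.

```python
OUT_OF_SCOPE_REPLY = "Sorry, I can only answer writing-related questions."

def classify(user_input: str) -> str:
    """Stand-in for the LLM-based classifier; keyword rules for illustration."""
    text = user_input.lower()
    if "submit" in text or "verify" in text:
        return "verify_submission"
    if any(w in text for w in ("report", "write", "achievement", "challenge")):
        return "question_or_command"
    return "outside_of_scope"

def route(user_input: str, memory: list) -> str:
    label = classify(user_input)
    if label == "question_or_command":
        memory.append(user_input)   # stored AND used: context for follow-ups
        return f"LLM answer using {len(memory)} remembered turns"
    if label == "verify_submission":
        memory.append(user_input)   # stored but deliberately NOT retrieved here
        return "Feedback based only on the current submission"
    return OUT_OF_SCOPE_REPLY
```

Note how the verification branch appends to memory without reading from it, mirroring the design choice described in point 2: the submission is judged on its own merits, and skipping memory retrieval also reduces latency.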

User experience walkthrough

Now that we have explored the architecture, let’s dive into the user experience of our generative AI-powered Enterprise Writing Assistant. The following walkthrough demonstrates the solution in action, showcasing how AWS services come together to deliver a seamless, intelligent writing experience for enterprise users.

Home page

The home page offers two views: Associate view and Manager view.

Associate view

Within the Associate view, you have three options: Write Achievement, Write Challenge, or View Your Submissions. For this post, we walk through the Achievement view. The Challenge view follows the same process but with different guidelines.

In the Achievement view, the system prompts you to either ask questions or make a submission. Inputs go through the generative AI workflow.

The following example demonstrates an incomplete submission, along with the system’s feedback. This feedback includes a visual summary that highlights the missing or completed components. The system evaluates the submission based on a predefined guideline. Users can adapt this approach in their solutions. At this stage, the focus should not be on grammar or formatting, but rather on the overall concept.

If the system is prompted with an irrelevant question, it declines to answer to avoid misuse.

Throughout the conversation, you can ask questions related to writing a business report (achievement, or challenge about the business).

Once all criteria are met, the system can automatically rephrase the input text to fix grammatical and formatting issues. If you need to make changes to the input text, you can choose the Previous button, which takes you back to the stage where you can modify your submission.

After rephrasing, the system shows both the original version and the rephrased version with highlighted differences.
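A side-by-side highlight like this can be produced with Python's standard difflib module. The sketch below marks word-level deletions and insertions with bracket markers; it illustrates the general technique, not the repository's actual implementation or markup.

```python
import difflib

def highlight_diff(original: str, rephrased: str) -> str:
    """Mark word-level changes: deletions as [-word-], insertions as {+word+}."""
    sm = difflib.SequenceMatcher(a=original.split(), b=rephrased.split())
    out = []
    for op, a1, a2, b1, b2 in sm.get_opcodes():
        if op in ("delete", "replace"):
            out.extend(f"[-{w}-]" for w in sm.a[a1:a2])
        if op in ("insert", "replace"):
            out.extend("{+" + w + "+}" for w in sm.b[b1:b2])
        if op == "equal":
            out.extend(sm.a[a1:a2])
    return " ".join(out)

print(highlight_diff("we done the migration", "we completed the migration"))
# prints: we [-done-] {+completed+} the migration
```

In the web frontend the same opcodes would typically drive styled HTML spans rather than text markers.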

The system also automatically extracts customer name metadata.
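Metadata extraction like this is typically done by prompting the model to return a small JSON object, which the backend then parses defensively. The sketch below shows only the parsing side; the `customer_name` field name and fallback behavior are assumptions for illustration, not details from the repository.

```python
import json

def parse_metadata(model_output: str) -> dict:
    """Parse the JSON metadata block an extraction prompt asks the model for.

    Falls back to an empty dict if the model returned something unparseable,
    so a malformed LLM response never breaks the submission flow.
    """
    try:
        data = json.loads(model_output)
    except json.JSONDecodeError:
        return {}
    # Keep only the fields the report store expects (names are illustrative).
    return {k: data[k] for k in ("customer_name",) if k in data}

print(parse_metadata('{"customer_name": "Example Corp", "extra": 1}'))
# prints: {'customer_name': 'Example Corp'}
```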

When complete, you can save or continue editing the output.

Manager view

In the Manager view, you can aggregate multiple submissions from direct reports into a consolidated roll-up report. The following shows how this interface appears.

Prerequisites

To deploy this solution in your AWS account, you need the following:

  • An AWS account with administrative access
  • AWS CLI (2.22.8) installed and configured
  • Access to Anthropic Claude models in Amazon Bedrock
  • Node.js (20.12.7) for building the frontend components
  • Git for cloning the repository

Deploy the solution

The generative AI Enterprise Report Writing Assistant uses AWS CDK for infrastructure deployment, making it straightforward to set up in your AWS environment:

  1. Clone the GitHub repository:
git clone https://github.com/aws-samples/sample-generative-ai-enterprise-report-writing-assistant.git && cd sample-generative-ai-enterprise-report-writing-assistant
  2. Install dependencies:
npm install
  3. Deploy the application to AWS:
cdk deploy
  4. After deployment completes, wait 1-2 minutes for the AWS CodeBuild process to finish.
  5. Access the application using the VueAppUrl value from the CDK/CloudFormation outputs.

The deployment creates the necessary resources including Lambda functions, API Gateways, DynamoDB tables, and the frontend application hosted on S3 and CloudFront.

For detailed configuration options and customizations, refer to the README in the GitHub repository.

Clean up resources

To avoid incurring future charges, delete the resources created by this solution when they are no longer needed:

cdk destroy

This command removes the AWS resources provisioned by the CDK stack, including:

  • Lambda functions
  • API Gateway endpoints
  • DynamoDB tables
  • S3 buckets
  • CloudFront distributions
  • Cognito user pools

Be aware that some resources, like S3 buckets containing deployment artifacts, might need to be emptied before they can be deleted.

Conclusion

Traditional business reporting is time-consuming and manual, leading to inefficiencies across the board. The generative AI Enterprise Report Writing Assistant represents a significant leap forward in how organizations approach their internal reporting processes. By leveraging generative AI technology, this solution addresses the traditional pain points of business reporting while introducing capabilities that were previously unattainable.

Through intelligent report writing assistance with real-time feedback, automated rephrasing for clarity and professionalism, streamlined submission and review processes, and robust verification systems, the solution delivers comprehensive support for modern business reporting needs. The architecture facilitates secure, efficient processing, striking the crucial balance between automation and human oversight.

As organizations continue to navigate increasingly complex business problems, the ability to generate clear, accurate, and insightful reports quickly becomes not just an advantage but a necessity. The generative AI Enterprise Report Writing Assistant provides a framework that can scale with your organization's needs while maintaining consistency and quality across all levels of reporting.

We encourage you to explore the GitHub repository to deploy and customize this solution for your specific needs. You can also contribute to the project by submitting pull requests or opening issues for enhancements and bug fixes.

For more information about generative AI on AWS, refer to the AWS Generative AI resource center.

About the authors

Nick Biso is a Machine Learning Engineer at AWS Professional Services. He solves complex organizational and technical challenges using data science and engineering. In addition, he builds and deploys AI/ML models on the AWS Cloud. His passion extends to his proclivity for travel and diverse cultural experiences.

Michael Massey is a Cloud Application Architect at Amazon Web Services, where he specializes in building frontend and backend cloud-native applications. He designs and implements scalable and highly-available solutions and architectures that help customers achieve their business goals.

Jeff Chen is a Principal Consultant at AWS Professional Services, specializing in guiding customers through application modernization and migration projects powered by generative AI. Beyond GenAI, he delivers business value across a range of domains including DevOps, data analytics, infrastructure provisioning, and security, helping organizations achieve their strategic cloud objectives.

Jundong Qiao is a Sr. Machine Learning Engineer at AWS Professional Service, where he specializes in implementing and enhancing AI/ML capabilities across various sectors. His expertise encompasses building next-generation AI solutions, including chatbots and predictive models that drive efficiency and innovation.