AWS for Industries
Hexagon Automates Manufacturing Standard Operating Procedures with AWS Generative AI
Customers in the energy and manufacturing industries rely on accurate, up-to-date operating procedures to ensure safety and efficiency. AcceleratorKMS®, Hexagon’s industry-leading knowledge management system, transforms these critical operational procedures into digital, mobile-ready content. The solution ensures that personnel in high-stress, mission-critical roles always work from the latest procedures, reducing information-caused incidents and enhancing human performance in the field. A single refinery or manufacturing facility, however, may need to convert thousands of technical documents into a standardized digital format, a costly, labor-intensive bottleneck to realizing these benefits.
Hexagon recently partnered with the AWS Partner and Prototyping & Cloud Engineering (PACE) teams to transform its Standard Operating Procedure (SOP) document workflow with an AI-powered solution that automates document conversion at above 90% accuracy while delivering significant cost reductions compared with the existing baseline. This blog post explores how Hexagon and Amazon Web Services (AWS) built a scalable, cost-efficient document processing solution that reduces manual effort and improves the experience for industrial customers in manufacturing and energy.
Partner Overview: Hexagon and AWS Collaboration
Hexagon AB is a global leader in sensor, software, and autonomous solutions, putting data to work to boost efficiency, productivity, and quality across industrial and manufacturing applications. AcceleratorKMS®, part of the Operations and Maintenance product group within Hexagon’s Asset Lifecycle Intelligence division, is cloud-based procedure lifecycle management software that helps customers build and operate safe and sustainable industrial facilities. As an AWS Partner, Hexagon works with AWS to deliver cloud solutions to its customers, combining Hexagon’s deep domain expertise in manufacturing and industrial procedure management with AWS’s breadth of managed services and AI innovation. This collaboration between the AWS and Hexagon AcceleratorKMS® teams is an example of how AWS and its partners co-innovate to solve real-world business challenges in the manufacturing and industrial sectors.
Hexagon AcceleratorKMS® SOP Document Digitization Challenge
Hexagon’s AcceleratorKMS® procedure lifecycle management software breaks operating procedures down into structured content and manages them as a structured content library. By updating that content through a workflow-managed process, AcceleratorKMS® ensures that front-line workers access the latest correct information in context. Traditionally, onboarding new procedures into AcceleratorKMS® required a manual “digitization” process: service teams used a rule-based “Digitizer” tool to ingest SOPs into AcceleratorKMS®. This was an improvement over purely manual or basic digital solutions, but it scaled poorly as the volume of documents grew. The process also had to verify that the output matched the original document, a problem compounded by non-standard formats and multiple languages. The high manual cost hurt operational efficiency, because customers had to pay for these services.
Hexagon and its customers had experimented with automating this task by converting documents to images and using a large language model (LLM) to extract the content. While promising, that approach lost important elements such as images and hyperlinks and struggled with documents that have complex formatting. Hexagon sought a solution that improves accuracy, handles diverse formats such as PDFs and Word documents with tables, images, and links, supports batch processing, and provides a user-friendly interface for quality control.
Solution Architecture
To address these challenges, Hexagon worked with AWS to transform the manual process into an automated serverless workflow powered by generative AI. The resulting solution combines multiple functions including document ingestion, AI-powered processing, storage, and a user interface for management.
Figure 1: Solution Architecture
Amazon Bedrock for Generative AI: At the core is Amazon Bedrock, which provides access to foundation models that understand and transform documents.
- Anthropic’s Claude 3.7 Sonnet in Amazon Bedrock uses its multi-modal capabilities to extract structured data in JSON format from each document page. This approach preserves complex formatting, such as tables and multi-column layouts, that simple text extraction misses. For instance, the model accurately understands and recreates table structures, including rows, columns, and merged cells, in JSON format. The foundation model’s capabilities enable processing of diverse document types without requiring fine-tuning, thereby avoiding the potential legal implications of fine-tuning.
- The solution maintains context across pages by passing the model’s output from the previous page as an additional prompt input for the next page.
- Prompt engineering techniques are applied: a system prompt defines the JSON schema and extraction instructions, while dynamic user prompts include the page content as well as any user-specified tags.
- Portions of the prompt that remain constant, such as the schema definition, are cached with prompt caching so they do not need to be reprocessed on every page, improving throughput and reducing the effective token cost of Amazon Bedrock API calls (see the sketch after this list).
The above optimizations strike a balance between accuracy and performance while ensuring that costs remain low.
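To make this call pattern concrete, the following is a minimal sketch of a page-by-page request to Amazon Bedrock using the Converse API with prompt caching and previous-page context. The model ID, JSON schema, tags, and file names are illustrative assumptions, not Hexagon’s production code.

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Illustrative model ID; an inference profile for Claude 3.7 Sonnet is assumed.
MODEL_ID = "us.anthropic.claude-3-7-sonnet-20250219-v1:0"

# Constant instructions and JSON schema, followed by a cache checkpoint so
# Bedrock prompt caching can reuse these tokens across page-level calls.
SYSTEM_PROMPT = [
    {"text": "You convert procedure pages into JSON. Return only JSON matching "
             "this schema: {\"sections\": [{\"title\": str, \"statements\": [str]}]}"},
    {"cachePoint": {"type": "default"}},
]

def extract_page(page_png: bytes, previous_page_json: str | None, tags: list[str]) -> str:
    """Extract one page as JSON, passing the previous page's output as context."""
    user_content = [
        {"text": f"User-specified tags: {', '.join(tags) or 'none'}."},
        {"image": {"format": "png", "source": {"bytes": page_png}}},
    ]
    if previous_page_json:
        user_content.insert(0, {"text": f"Context from previous page: {previous_page_json}"})

    response = bedrock.converse(
        modelId=MODEL_ID,
        system=SYSTEM_PROMPT,
        messages=[{"role": "user", "content": user_content}],
        inferenceConfig={"maxTokens": 4096, "temperature": 0},
    )
    return response["output"]["message"]["content"][0]["text"]

# Process pages sequentially so each call carries the prior page's output forward.
previous = None
results = []
for page in ["page1.png", "page2.png"]:
    with open(page, "rb") as f:
        previous = extract_page(f.read(), previous, tags=["lockout-tagout"])
    results.append(json.loads(previous))
```

Processing pages sequentially trades some parallelism for continuity: the previous page’s JSON gives the model context for sections that span page breaks.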
AWS Amplify and Amazon Cognito for the Front-End: Amazon Cognito authenticates users via a secure login, using Cognito user pools to grant access to authorized users and to integrate with third-party single sign-on (SSO) identity providers. The front end, built with AWS Amplify, provides an intuitive web interface where users upload procedure documents and later review the AI-extracted output.
AWS AppSync (GraphQL API) and Real-Time Updates: The web app communicates with backend APIs through AWS AppSync, a managed GraphQL service that enables real-time updates, for example, notifying the user when processing is complete. AWS AppSync subscriptions push real-time status updates to the user interface (UI). As each document moves through the AWS Step Functions workflow, AWS AppSync broadcasts updates to the client without any polling. This event-driven design keeps the UI responsive and informed: a user uploads multiple SOP files, kicks off batch processing, and watches in the UI as each file’s status changes from pending to processed in real time. The result is a smooth user experience, even for long-running AI tasks.
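For illustration, a backend step might publish a status change by calling a GraphQL mutation on the AppSync endpoint, which AppSync then fans out to subscribed browser clients. The endpoint, mutation name, and API-key authentication shown here are assumptions for the sketch, not the project’s actual schema.

```python
import json
import os
import urllib.request

# Hypothetical AppSync GraphQL endpoint and API key, supplied via environment.
APPSYNC_URL = os.environ["APPSYNC_URL"]
APPSYNC_API_KEY = os.environ["APPSYNC_API_KEY"]

# Hypothetical mutation; clients subscribed to its result receive the update.
MUTATION = """
mutation UpdateStatus($id: ID!, $status: String!) {
  updateDocumentStatus(id: $id, status: $status) { id status }
}
"""

def publish_status(document_id: str, status: str) -> dict:
    """Publish a document status update so subscribed UIs refresh without polling."""
    payload = json.dumps({
        "query": MUTATION,
        "variables": {"id": document_id, "status": status},
    }).encode("utf-8")
    request = urllib.request.Request(
        APPSYNC_URL,
        data=payload,
        headers={"Content-Type": "application/json", "x-api-key": APPSYNC_API_KEY},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

# Example: mark a document as processed after the workflow finishes.
# publish_status("doc-123", "PROCESSED")
```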
Amazon S3 and Amazon DynamoDB for Storage: All source documents and results are stored securely in Amazon Simple Storage Service (Amazon S3), which serves as the data lake for the pipeline. Amazon S3 event notifications initiate the Step Functions workflow when a new file is uploaded. Processed JSON outputs and evaluation metrics are saved in Amazon DynamoDB for fast retrieval. For example, the team built a comparison feature that stores both the raw text and the AI-extracted text in DynamoDB, so accuracy metrics, such as section and statement match rates, can be computed and tracked over time.
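As a rough illustration of that comparison feature, the sketch below computes a statement match rate with Python’s difflib and writes it to a DynamoDB table; the table name, key schema, and matching threshold are assumptions, not the team’s actual metric definition.

```python
from decimal import Decimal
from difflib import SequenceMatcher

import boto3

# Hypothetical table holding per-document extraction results and metrics.
table = boto3.resource("dynamodb").Table("ProcedureExtractionResults")

def statement_match_rate(original: list[str], extracted: list[str], threshold: float = 0.9) -> float:
    """Fraction of original statements with a close match in the extracted output."""
    if not original:
        return 1.0
    matched = 0
    for statement in original:
        best = max(
            (SequenceMatcher(None, statement, candidate).ratio() for candidate in extracted),
            default=0.0,
        )
        if best >= threshold:
            matched += 1
    return matched / len(original)

def store_result(document_id: str, raw_statements: list[str], extracted_statements: list[str]) -> None:
    """Persist raw text, extracted text, and the computed accuracy metric."""
    rate = statement_match_rate(raw_statements, extracted_statements)
    table.put_item(Item={
        "documentId": document_id,
        "rawStatements": raw_statements,
        "extractedStatements": extracted_statements,
        "statementMatchRate": Decimal(str(round(rate, 4))),  # DynamoDB requires Decimal, not float
    })
```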
AWS Step Functions for Orchestration: An AWS Step Functions state machine orchestrates the end-to-end workflow, coordinating a series of AWS Lambda functions that perform specialized tasks. The pipeline is broken into multiple steps (pre-processing, page-by-page AI processing, and post-processing), each executed by a separate Lambda function. This modular approach prevents any single function from exceeding the Lambda time limit and provides checkpoints to recover gracefully from errors. For example, the pre-processing step extracts text and images from the document, the processing step handles AI inference on each page, and post-processing assembles the final JSON output and stores the results. Step Functions also provides built-in error handling and retry logic, which increases reliability when processing large batches of documents. The Step Functions workflow runs asynchronously, so large jobs, such as ingesting 100 documents in one batch, process in the background without blocking the user interface.
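A minimal sketch of the asynchronous kickoff under these assumptions: an S3 event notification invokes a Lambda function that starts one state machine execution per uploaded file. The state machine ARN and payload shape are illustrative, not the production configuration.

```python
import json
import os
import urllib.parse

import boto3

sfn = boto3.client("stepfunctions")

# Hypothetical state machine ARN for the document-processing workflow.
STATE_MACHINE_ARN = os.environ["STATE_MACHINE_ARN"]

def handler(event, context):
    """Start one asynchronous Step Functions execution per uploaded document."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 event keys are URL-encoded; decode before passing downstream.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps({"bucket": bucket, "key": key}),
        )
    return {"statusCode": 200}
```

Because start_execution returns immediately, the upload path stays fast while pre-processing, page-level inference, and post-processing continue in the background.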
Human-in-the-Loop Validation
The user interface is specifically designed to allow users to review and edit the LLM-generated output before finalizing it. Users verify and correct any discrepancies by comparing the original text side-by-side with the extracted JSON output, all within the browser. This ability to easily validate and adjust the output means Hexagon achieves high accuracy without over-engineering the AI prompts for every conceivable document variation. In practice, this approach – automating the bulk of the work and allowing targeted human review – proved more effective than attempting to handle all edge cases purely through prompt tuning. By capturing these edits, Hexagon continually improves the model and prompts in future iterations. Since the system retains the exact source text and no content is altered by the AI step, there is full traceability from original document to JSON output – a key requirement in regulated industries.
Figure 2: AI Procedure Converter Front End – User Updates
Figure 3: AI Procedure Converter Front End – Result Viewer
Business Outcomes and Benefits
The solution achieved accuracy rates above 90% in extracting procedure content, both at the section level and statement level, giving end-users confidence that no critical details are lost.
“This AI-powered approach represents a transformative leap in how we help customers digitize their operational content,” said Neil Singh, Principal Strategy & Enablement Consulting Lead, PSE: Operations, who leads strategy for AcceleratorKMS® at Hexagon. “Hexagon wanted to redefine the balance of automation, speed, cost, and accuracy of content, to give customers a cost efficient approach with only upsides. By automating the most labor-intensive aspects of SOP ingestion, we’re encouraging organizations to accelerate their digital transformation journey and realize the business benefits of efficiency while maintaining the high standards of accuracy and reliability that safety-critical industries demand.”
This level of quality is essential in industrial environments, where a single omission or error in an operating procedure can have serious safety, performance, and compliance implications on the factory floor. Replacing the old manual process, the new approach delivers significant cost reductions per document. Moreover, automation accelerates the onboarding of new procedures from days to minutes. Hexagon’s customers now deploy updates and new procedures to their workforce faster, leading to less downtime and more agility in responding to changes.
Another benefit is scalability and cost efficiency. Because the solution uses fully managed services, Hexagon can scale it to handle more documents or expanded use cases without major redesign. Costs remain proportional to usage: when no documents are being processed, the AWS services incur little to no cost.
Conclusion
Hexagon’s journey with AcceleratorKMS® showcases how manufacturers can embrace cloud and AI to solve operational challenges. By leveraging AWS generative AI services, Hexagon transformed a costly, manual process into an automated, intelligent pipeline, improving the speed and accuracy of procedure documentation. This solution serves as a blueprint for modernizing industrial knowledge management, and it underscores the broader trend of smart manufacturing, where AI and automation simplify workflows that were once painfully manual.
Visit the Hexagon partner page to learn more about how AWS and Hexagon work together.
Visit the Amazon Innovation Ambassador Podcast to learn more about how AWS innovates with customers and partners to co-develop art-of-the-possible use cases in generative AI.