Guidance for Automating Non-Conformance Reviews on AWS

Overview

This Guidance shows how to automate non-conformance review (NCR) disposition recommendations using generative AI and image analysis to reduce manufacturing delays. It demonstrates a multimodal recommender system that integrates with existing quality ticketing systems to accelerate quality engineering decisions. The Guidance processes natural language descriptions and images of non-conformances, matching them with similar historical cases to suggest appropriate dispositions. By leveraging past NCR data to provide rapid, consistent recommendations, this Guidance helps quality engineers make faster decisions, reducing the time spent waiting for manual research and getting production back on track more quickly.
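To illustrate the retrieval step described above, the sketch below embeds a new NCR problem description with an Amazon Bedrock embedding model and looks up similar historical cases in an OpenSearch Service k-NN index. The model ID, index name, and field names are assumptions for illustration only; the sample code on GitHub defines the actual implementation.

```python
import json

import boto3
from opensearchpy import OpenSearch

# Hypothetical names for illustration; the sample code defines the real ones.
EMBED_MODEL_ID = "amazon.titan-embed-text-v2:0"
INDEX_NAME = "ncr-history"

bedrock = boto3.client("bedrock-runtime")


def embed_text(text: str) -> list[float]:
    """Embed an NCR problem description with a Bedrock embedding model."""
    response = bedrock.invoke_model(
        modelId=EMBED_MODEL_ID,
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]


def find_similar_ncrs(opensearch_client: OpenSearch, description: str, k: int = 5):
    """Return the k historical NCRs most similar to the new description."""
    query = {
        "size": k,
        "query": {"knn": {"embedding": {"vector": embed_text(description), "k": k}}},
        "_source": ["ncr_id", "problem_description", "disposition"],
    }
    return opensearch_client.search(index=INDEX_NAME, body=query)["hits"]["hits"]
```

The retrieved dispositions can then be supplied as context to an LLM on Amazon Bedrock when generating the disposition recommendation.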

How it works

The architecture diagram below illustrates how to use this solution effectively. It shows the key components and their interactions, providing a step-by-step overview of the architecture's structure and functionality.

Deploy with confidence

Ready to deploy? Review the sample code on GitHub for detailed deployment instructions, then deploy as-is or customize it to fit your needs.

Go to sample code

Well-Architected Pillars

The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.

Amazon S3, DynamoDB, and OpenSearch Service provide comprehensive monitoring features that offer early failure detection, while their versioning and recovery features enable rapid recovery of critical NCR and application state data. Amazon S3 delivers document versioning and automated lifecycle management, and DynamoDB offers snapshots, event monitoring, alerting, point-in-time recovery (PITR), and automated replication. OpenSearch Service contributes snapshots, event monitoring, and automatic recovery. These capabilities support seamless operations management, minimize downtime from disruptive failures, and help ensure smooth version upgrades.
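For example, S3 versioning and DynamoDB point-in-time recovery can each be enabled with a single API call; the bucket and table names below are placeholders.

```python
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.client("dynamodb")

# Keep prior versions of NCR documents so they can be recovered after bad writes.
s3.put_bucket_versioning(
    Bucket="ncr-documents-bucket",  # placeholder name
    VersioningConfiguration={"Status": "Enabled"},
)

# Enable continuous backups with point-in-time recovery for NCR state data.
dynamodb.update_continuous_backups(
    TableName="ncr-records",  # placeholder name
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)
```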

Read the Operational Excellence whitepaper 

Amazon Bedrock isolates training and inference data for LLMs within private accounts. Amazon S3, DynamoDB, and OpenSearch Service implement encryption of all data at rest, including search indices and log files, using service-managed keys by default. You can override this to use your own AWS Key Management Service (AWS KMS)-managed keys. This Guidance helps ensure that application data, NCR databases, problem descriptions, and recommended dispositions remain accessible only to authorized users.
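If you choose to override the service-managed defaults with your own AWS KMS key, the sketch below shows one way to do so for the S3 bucket and DynamoDB table; the key ARN and resource names are placeholders.

```python
import boto3

KMS_KEY_ARN = "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE"  # placeholder

s3 = boto3.client("s3")
dynamodb = boto3.client("dynamodb")

# Encrypt all new NCR documents at rest with a customer-managed KMS key.
s3.put_bucket_encryption(
    Bucket="ncr-documents-bucket",  # placeholder name
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": KMS_KEY_ARN,
            },
            "BucketKeyEnabled": True,
        }]
    },
)

# Switch the NCR table from the default key to the customer-managed key.
dynamodb.update_table(
    TableName="ncr-records",  # placeholder name
    SSESpecification={"Enabled": True, "SSEType": "KMS", "KMSMasterKeyId": KMS_KEY_ARN},
)
```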

Read the Security whitepaper 

Amazon S3 provides 99.999999999% (11 9's) of data durability through multi-Availability Zone (AZ) distribution, while DynamoDB and OpenSearch Service automatically manage replica deployment for changing capacity demands. Amazon Bedrock, Lambda, Step Functions, API Gateway, and Amazon Rekognition deliver fully managed, auto-scaling capabilities to handle varying NCR processing workloads. Built-in version management features in Amazon Bedrock and Lambda provide controlled updates with automatic rollback capabilities.
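As one example of controlled updates, a Lambda function can be published as an immutable version and rolled out gradually behind an alias; rolling back is simply a matter of removing the weight. The function and alias names here are placeholders, and the alias is assumed to already point at the stable version.

```python
import boto3

lambda_client = boto3.client("lambda")
FUNCTION_NAME = "ncr-recommendation-handler"  # placeholder name

# Publish the current code as an immutable, numbered version.
new_version = lambda_client.publish_version(FunctionName=FUNCTION_NAME)["Version"]

# Send 10% of traffic to the new version while the "live" alias keeps
# pointing at the stable version; rollback removes the additional weight.
lambda_client.update_alias(
    FunctionName=FUNCTION_NAME,
    Name="live",
    RoutingConfig={"AdditionalVersionWeights": {new_version: 0.1}},
)
```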

Read the Reliability whitepaper 

The DynamoDB NoSQL architecture delivers single-digit millisecond access times, while Amazon Bedrock, Lambda, and Amazon Rekognition automatically adjust resource allocation to match customer workload demands. Rather than over-provisioning for peak production times, you can use these services to dynamically scale computing capacity. The Guidance eliminates the need to manage dedicated ML infrastructure for LLM processing or computer vision systems for defect classification.
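For instance, labels describing an NCR photo can be extracted with a single managed API call, with no computer vision infrastructure to provision. The bucket and key names are placeholders, and whether the Guidance uses standard label detection or a custom Rekognition model is defined by the sample code.

```python
import boto3

rekognition = boto3.client("rekognition")


def describe_ncr_image(bucket: str, key: str) -> list[str]:
    """Return label names detected in an NCR photo stored in S3."""
    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MaxLabels=10,
        MinConfidence=70.0,
    )
    return [label["Name"] for label in response["Labels"]]


# Example usage with placeholder names:
# labels = describe_ncr_image("ncr-images-bucket", "ncr-1234/photo.jpg")
```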

Read the Performance Efficiency whitepaper 

Amazon S3 provides low-cost storage with auto-tiering options for archiving infrequently-accessed data or automated deletion. DynamoDB delivers cost-effective database storage with auto-snapshots and time-to-live (TTL) features. Lambda and Step Functions implement pay-as-you-go serverless computing, while Amazon Bedrock uses a pay-per-token model that charges only for actual LLM usage. The common API framework allows easy experimentation with different models to optimize price-performance ratios, helping you match computing and storage costs to actual NCR demand without upfront investment.
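The Amazon Bedrock Converse API illustrates this common framework: the same request shape works across models, so comparing price-performance is largely a matter of swapping the model ID. The model IDs and prompt below are examples only.

```python
import boto3

bedrock = boto3.client("bedrock-runtime")

# Example model IDs to compare; substitute whichever models you want to evaluate.
CANDIDATE_MODELS = [
    "anthropic.claude-3-haiku-20240307-v1:0",
    "anthropic.claude-3-5-sonnet-20240620-v1:0",
]

prompt = "Summarize this non-conformance and suggest a disposition: ..."

for model_id in CANDIDATE_MODELS:
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    usage = response["usage"]  # token counts drive pay-per-token cost
    print(model_id, usage["inputTokens"], usage["outputTokens"])
```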

Read the Cost Optimization whitepaper 

Amazon S3 auto-tiering and auto-deletion capabilities move infrequently-used data to lower-impact storage tiers or remove it entirely. The DynamoDB TTL feature and automated storage snapshots enable efficient data lifecycle management. Lambda provides a serverless compute model that allocates resources strictly based on demand. Together, these services automatically reduce storage and compute resource usage during idle periods or when reference dispositions age out, minimizing your environmental footprint through efficient resource utilization.
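A minimal sketch of that lifecycle management follows; the bucket, table, prefix, attribute names, and retention periods are illustrative only.

```python
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.client("dynamodb")

# Tier infrequently accessed NCR attachments down automatically and expire
# them after two years; prefix and retention periods are illustrative.
s3.put_bucket_lifecycle_configuration(
    Bucket="ncr-documents-bucket",  # placeholder name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire-ncr-attachments",
            "Status": "Enabled",
            "Filter": {"Prefix": "ncr-attachments/"},
            "Transitions": [{"Days": 90, "StorageClass": "INTELLIGENT_TIERING"}],
            "Expiration": {"Days": 730},
        }]
    },
)

# Let DynamoDB delete aged-out reference dispositions via a TTL attribute.
dynamodb.update_time_to_live(
    TableName="ncr-records",  # placeholder name
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)
```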

Read the Sustainability whitepaper 

Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.