The Internet of Things on AWS – Official Blog
How to digitize and automate vehicle assembly inspection process with voice-enabled AWS services
Introduction
Today, most automotive manufacturers depend on workers to manually inspect defects during their vehicle assembly process. Quality inspectors record the defects and corrective actions through a paper checklist, which moves with the vehicle. This checklist is digitized only at the end of the day through a bulk scanning and upload process. The current inspection and recording systems hinder the Original Equipment Manufacturer’s (OEM) ability to correlate field defects with production issues. This can lead to increased warranty costs and quality risks. By implementing an artificial intelligence (AI) powered digital solution deployed at an edge gateway, the OEM can automate the inspection workflow, improve quality control, and proactively address quality concerns in their manufacturing processes.
In this blog, we present an Internet of Things (IoT) solution that you can use to automate and digitize the quality inspection process for an assembly line. With this guidance, you can deploy a Machine Learning (ML) model, trained on voice samples, on a gateway device running AWS IoT Greengrass. We will also discuss how to deploy an AWS Lambda function for inference “at the edge,” enrich the model output with data from on-premises servers, and transmit the defect and correction data recorded at the assembly line to the cloud.
AWS IoT Greengrass is an open-source edge runtime and cloud service that helps you build, deploy, and manage software on edge gateway devices. AWS IoT Greengrass provides pre-built software modules, called components, that help you run ML inference on your local edge devices, execute Lambda functions, read data from on-premises servers hosting REST APIs, and connect and publish payloads to AWS IoT Core. To train your ML models in the cloud, you can use Amazon SageMaker, a fully managed service that offers a broad set of tools for high-performance, low-cost ML and helps you build and train high-quality ML models. Amazon SageMaker Ground Truth helps you create the high-quality datasets needed to train ML models by labelling raw data, such as audio files, or by generating labelled synthetic data.
Solution Overview
The following diagram illustrates the proposed architecture to automate the quality inspection process. It covers ML model training and deployment, defect data capture, data enrichment, data transmission and processing, and data visualization.
Figure 1. Automated quality inspection architecture diagram
- Machine Learning (ML) model training
In this solution, we use whisper-tiny, an open-source pre-trained model that converts audio into text and supports only the English language. For improved accuracy, you can further train the model with your own audio input files. Use any of the prebuilt or custom tools in SageMaker Ground Truth to assign the labeling tasks for your audio samples.
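As a rough illustration of the speech-to-text step, the following sketch transcribes a local audio clip with the whisper-tiny checkpoint from the Hugging Face Hub. It assumes the transformers, torch, and ffmpeg dependencies are installed; the file name sample_defect.wav is a hypothetical recording of an inspector’s spoken observation.

from transformers import pipeline

# Load the English-only whisper-tiny checkpoint (swap in your fine-tuned model if you have one).
asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny.en")

# "sample_defect.wav" is a hypothetical recording of a spoken defect observation.
result = asr("sample_defect.wav")
print(result["text"])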
- ML model edge deployment
We use SageMaker to create an IoT edge-compatible inference model out of the whisper model. The model is stored in an Amazon Simple Storage Service (Amazon S3) bucket. We then create an AWS IoT Greengrass ML component using this model as an artifact and deploy the component to the IoT edge device.
- Voice-based defect capture
The AWS IoT Greengrass gateway captures the voice input either through a wired or wireless audio input device. The quality inspection personnel record their verbal defect observations using headphones connected to the AWS IoT Greengrass device (in this blog, we use pre-recorded samples). A Lambda function, deployed on the edge gateway, uses the ML model inference to convert the audio input into relevant textual data and maps it to an OEM-specified defect type.
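As a simplified sketch of the mapping step, the following function matches keywords in the transcribed text against defect codes. The keyword-to-code table is hypothetical; an actual deployment would use the OEM’s own defect taxonomy.

# Hypothetical mapping from spoken keywords to OEM-specified defect codes.
DEFECT_KEYWORDS = {
    "scratch": "PAINT_SCRATCH",
    "dent": "BODY_DENT",
    "gap": "PANEL_GAP",
}

def map_transcript_to_defect(transcript: str) -> str:
    # Return the first defect code whose keyword appears in the transcript.
    text = transcript.lower()
    for keyword, defect_code in DEFECT_KEYWORDS.items():
        if keyword in text:
            return defect_code
    return "UNCLASSIFIED"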
- Add defect context
Defect and correction data captured at the inspection stations needs contextual information, such as the vehicle VIN and the process ID, before it is transmitted to the cloud. (Typically, an on-premises server exposes vehicle metadata as a REST API.) The Lambda function invokes the on-premises REST API to retrieve the metadata of the vehicle currently being inspected, and enriches the defect and correction data with that metadata before transmitting it to the cloud.
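A minimal sketch of this enrichment step might look like the following, assuming a hypothetical on-premises REST endpoint that returns the VIN and process ID of the vehicle currently at a given inspection station.

import requests

def add_vehicle_context(defect_record: dict, station_id: str) -> dict:
    # Hypothetical on-premises endpoint; replace with your plant's metadata API.
    response = requests.get(
        f"http://mes.example.local/api/stations/{station_id}/current-vehicle",
        timeout=5,
    )
    response.raise_for_status()
    vehicle = response.json()
    # Merge the vehicle metadata into the defect record before publishing to the cloud.
    defect_record["vin"] = vehicle["vin"]
    defect_record["processId"] = vehicle["processId"]
    return defect_record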
- Defect data transmission
AWS IoT Core is a managed cloud service that lets you use Message Queuing Telemetry Transport (MQTT) to securely connect, manage, and interact with AWS IoT Greengrass-powered devices. The Lambda function publishes the defect data to specific topics in AWS IoT Core, such as a “Quality Data” topic. Because we configured the Lambda function to subscribe to messages from different event sources, the Lambda component can act on either local publish/subscribe messages or AWS IoT Core MQTT messages. In this solution, we publish a payload to an AWS IoT Core topic as a trigger to invoke the Lambda function.
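For example, you could publish the trigger payload from a script instead of the MQTT test client used later in this post. The following boto3 sketch publishes a hypothetical payload to the defectlogger/trigger topic, which matches the event source configured in Step 3.

import json
import boto3

# Publish a trigger message to the MQTT topic that the edge Lambda component subscribes to.
iot_data = boto3.client("iot-data")
iot_data.publish(
    topic="defectlogger/trigger",
    qos=1,
    payload=json.dumps({"stationId": "STATION-01"}),  # hypothetical trigger payload
)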
- Defect data processing
The AWS IoT Rules Engine processes incoming messages and enables connected devices to interact seamlessly with other AWS services. To persist the payload in a datastore, we configure an AWS IoT rule that routes the payloads to an Amazon DynamoDB table, where the device and defect data is stored as key-value items.
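As a sketch of this routing, the following boto3 call creates an IoT rule that forwards every message on the defect data topic to a DynamoDB table. The rule name, topic, table name, and role ARN are assumptions; adjust them to your environment.

import boto3

iot = boto3.client("iot")

iot.create_topic_rule(
    ruleName="VehicleDefectsToDynamoDB",            # hypothetical rule name
    topicRulePayload={
        "sql": "SELECT * FROM 'audioDevice/data'",  # assumed data topic, as used in Step 4
        "awsIotSqlVersion": "2016-03-23",
        "ruleDisabled": False,
        "actions": [
            {
                "dynamoDBv2": {
                    "roleArn": "arn:aws:iam::123456789012:role/iot-dynamodb-role",  # placeholder
                    "putItem": {"tableName": "VehicleDefects"},                     # placeholder
                }
            }
        ],
    },
)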
- Visualize vehicle defects
Data can be exposed as REST APIs for end clients that want to search and visualize defects or build defect reports using a web portal or a mobile app.
You can use Amazon API Gateway to publish the REST APIs, which lets client devices consume the defect and correction data through an API. You can control access to the APIs by defining user and application identities in an Amazon Cognito user pool and using it as an authorizer.
The backend services that power the visualization REST APIs use Lambda. You can use a Lambda function to search for data relevant to a single vehicle, a group of vehicles, or a particular vehicle batch. The functions can also help identify field issues related to the defects recorded during assembly line inspection.
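A minimal sketch of such a backend function is shown below, assuming an API Gateway proxy integration that passes the VIN as a path parameter and a hypothetical DynamoDB table keyed by vin.

import json
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical table keyed by "vin"; align with the table used by your IoT rule.
table = boto3.resource("dynamodb").Table("VehicleDefects")

def lambda_handler(event, context):
    # API Gateway proxy integration, for example GET /defects/{vin}.
    vin = event["pathParameters"]["vin"]
    result = table.query(KeyConditionExpression=Key("vin").eq(vin))
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result["Items"], default=str),
    }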
Prerequisites
- An AWS account.
- Basic Python knowledge.
Steps to set up the inspection process automation
Now that we have talked about the solution and its components, let’s go through the steps to set up and test the solution.
Step 1: Set up the AWS IoT Greengrass device
This blog uses an Amazon Elastic Compute Cloud (Amazon EC2) instance running Ubuntu as the AWS IoT Greengrass device. Complete the following steps to set up this instance.
Create an Ubuntu instance
- Sign in to the AWS Management Console and open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
- Select a Region that supports AWS IoT Greengrass.
- Choose Launch Instance.
- Complete the following fields on the page:
- Name: Enter a name for the instance.
- Application and OS Images (Amazon Machine Image): Ubuntu, Ubuntu Server 20.04 LTS (HVM)
- Instance type: t2.large
- Key pair login: Create a new key pair.
- Configure storage: 256 GiB.
- Launch the instance and SSH into it. For more information, see Connect to Linux Instance.
Install AWS SDK for Python (Boto3) in the instance
Complete the steps in How to Install AWS Python SDK in Ubuntu to set up the AWS SDK for Python on the Amazon EC2 instance.
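To confirm that the SDK is installed and that the instance can reach AWS with valid credentials, you can run a quick check such as the following (a minimal sketch; it assumes credentials or an instance role are already configured).

import boto3

# Print the installed SDK version and the identity the instance is using.
print("boto3 version:", boto3.__version__)
print("caller identity:", boto3.client("sts").get_caller_identity()["Arn"])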
Set up the AWS IoT Greengrass V2 core device
Sign in to the AWS Management Console and verify that you’re using the same Region that you chose earlier.
Complete the following steps to create the AWS IoT Greengrass core device.
- In the navigation bar, select Greengrass devices and then Core devices.
- Choose Set up one core device.
- In the Step 1 section, specify a suitable name, such as GreengrassQuickStartCore-audiototext, for the Core device name, or retain the default name provided on the console.
- In the Step 2 section, select Enter a new group name for the Thing group field.
- Specify a suitable name, such as GreengrassQuickStartGrp, for the Thing group name field, or retain the default name provided on the console.
- On the Step 3 page, select Linux as the Operating System.
- Complete all the steps specified in steps 3.1 to 3.3 (further down the page).
Step 2: Deploy ML Model to AWS IoT Greengrass device
The codebase can either be cloned to a local system or set up on Amazon SageMaker.
Set up Amazon SageMaker Studio
Detailed overview of deployment steps
- Navigate to SageMaker Studio and open a new terminal.
- Clone the GitHub repo to the SageMaker terminal, or to your local computer, using the GitHub link: AutoInspect-AI-Powered-vehicle-quality-inspection.
- The repository contains the following folders:
- Artifacts – This folder contains all model-related files that will be executed.
- Audio – Contains a sample audio that is used for testing.
- Model – Contains whisper-converted models in ONNX format. This is an open-source pre-trained model for speech-to-text conversion.
- Tokens – Contains tokens used by models.
- Results – The folder for storing results.
- Compress the folder to create greengrass-onnx.zip and upload it to an Amazon S3 bucket.
- Run the following command to perform this task:
aws s3 cp greengrass-onnx.zip s3://your-bucket-name/greengrass-onnx-asr.zip
- Go to the recipe folder. Run the following commands to create component versions for the ONNX model and ONNX runtime:
aws greengrassv2 create-component-version --inline-recipe fileb://onnx-asr.json
aws greengrassv2 create-component-version --inline-recipe fileb://onnxruntime.json
- Navigate to the AWS IoT Greengrass console to review the recipe.
- You can review it under Greengrass devices and then Components.
- Create a new deployment, select the target device and recipe, and start the deployment.
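If you prefer to script this step, the following boto3 sketch creates an equivalent deployment. The thing group ARN and the component names and versions are assumptions; use the values of the components you created from onnx-asr.json and onnxruntime.json.

import boto3

greengrass = boto3.client("greengrassv2")

greengrass.create_deployment(
    targetArn="arn:aws:iot:us-east-1:123456789012:thinggroup/GreengrassQuickStartGrp",  # placeholder ARN
    deploymentName="onnx-asr-deployment",
    components={
        "onnx-asr": {"componentVersion": "1.0.0"},     # assumed component name/version
        "onnxruntime": {"componentVersion": "1.0.0"},  # assumed component name/version
    },
)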
Step 3: Set up the AWS Lambda function to transmit validation data to the AWS Cloud
Define the Lambda function
- In the Lambda navigation menu, choose Functions.
- Choose Create function.
- Choose Author from scratch.
- Provide a suitable function name, such as GreengrassLambda.
- Select Python 3.11 as the Runtime.
- Choose Create function, keeping all other values as default.
- Open the Lambda function you just created.
- In the Code tab, replace the default code with your defect-logger script and save the changes (an illustrative sketch follows this procedure).
- From the Actions menu at the top, select Publish new version.
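The exact script depends on your repository, but a minimal sketch of the defect-logger function could look like the following. It assumes the AWS IoT Greengrass Core SDK (greengrasssdk) is bundled with the function package, that messages on defectlogger/trigger invoke the handler, and that results are published to the audioDevice/data topic used in Step 4; the defect values shown are hypothetical placeholders for the transcription and enrichment logic.

import json
import greengrasssdk  # AWS IoT Greengrass Core SDK; bundle it with the function package

# Client for publishing MQTT messages through the Greengrass core.
iot_client = greengrasssdk.client("iot-data")

def lambda_handler(event, context):
    # Invoked for each message on the defectlogger/trigger event source.
    # In the full solution, this is where the audio clip is transcribed with the ONNX
    # whisper model and enriched with vehicle metadata from the on-premises API.
    defect_record = {
        "vin": "SAMPLEVIN000000001",      # hypothetical value
        "defect": "PAINT_SCRATCH",        # hypothetical value
        "correction": "panel repainted",  # hypothetical value
    }
    iot_client.publish(topic="audioDevice/data", payload=json.dumps(defect_record))
    return {"status": "published"}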
Import Lambda function as Component
Prerequisite: Verify that the Amazon EC2 instance set up as the Greengrass device in Step 1 meets the Lambda function requirements.
- In the AWS IoT Greengrass console, choose Components.
- On the Components page, choose Create component.
- On the Create component page, under Component information, choose Enter recipe as JSON.
- In the Recipe section, replace the default content with the recipe for the lambda_function_depedencies component, and choose Create component.
- On the Components page, choose Create component.
- Under Component information, choose Import Lambda function.
- For Lambda function, search for and choose the Lambda function that you defined earlier in Step 3.
- For Lambda function version, select the version to import.
- Under Lambda function configuration:
- Choose Add event source.
- Specify the Topic as defectlogger/trigger and choose AWS IoT Core MQTT as the Type.
- Under Component dependencies, choose Additional parameters, then Add dependency, and specify the component details as follows:
- Component name: lambda_function_depedencies
- Version Requirement: 1.0.0
- Type: SOFT
- Keep all other options as default and choose Create Component.
Deploy Lambda component to AWS IoT Greengrass device
- In the AWS IoT Greengrass console navigation menu, choose Deployments.
- On the Deployments page, choose Create deployment.
- Provide a suitable name, such as GreengrassLambda, select the Thing group defined earlier, and choose Next.
- In My Components, select the Lambda component you created.
- Keep all other options as default.
- In the last step, choose Deploy.
When the deployment succeeds, it appears with a Completed status on the Deployments page.
Step 4: Validate with a sample audio
- Navigate to the AWS IoT Core home page.
- Select MQTT test client.
- In the Subscribe to a Topic tab, specify audioDevice/data in the Topic Filter.
- In the Publish to a topic tab, specify defectlogger/trigger under the topic name.
- Press the Publish button a couple of times.
- Messages published to defectlogger/trigger invoke the Edge Lambda component.
- In the Subscribe to a topic section, you should see the messages published by the Lambda component deployed on the AWS IoT Greengrass device.
- If you would like to store the published data in a data store like DynamoDB, complete the steps outlined in Tutorial: Storing device data in a DynamoDB table.
Conclusion
In this blog, we demonstrated a solution in which you deploy an ML model, developed using SageMaker, on factory-floor devices that run AWS IoT Greengrass software. We used whisper-tiny, an open-source speech-to-text model, made it compatible with IoT edge devices, and deployed it on a gateway device running AWS IoT Greengrass. This solution helps your assembly line users record vehicle defects and corrections using voice input. The ML model running on the AWS IoT Greengrass edge device translates the audio input to textual data and adds context to the captured data. Data captured on the AWS IoT Greengrass edge device is transmitted to AWS IoT Core, where it is persisted in DynamoDB. The persisted data can then be visualized using a web portal or a mobile application.
The architecture outlined in this blog demonstrates how you can reduce the time assembly line users spend manually recording defects and corrections. Using a voice-enabled solution enhances the system’s capabilities, helps you reduce manual errors and prevent data leakage, and increases the overall quality of your factory’s output. The same architecture can be used in other industries that need to digitize their quality data and automate quality processes.
———————————————————————————————————————————————
About the Authors
Pramod Kumar P is a Solutions Architect at Amazon Web Services. He has over 20 years of technology experience, including close to a decade designing and architecting connectivity (IoT) solutions on AWS. Pramod guides customers to build solutions with the right architectural practices to meet their business outcomes.
Raju Joshi is a Data Scientist at Amazon Web Services with more than six years of experience with distributed systems. He has expertise in implementing and delivering successful IT transformation projects by leveraging AWS big data, machine learning, and artificial intelligence solutions.