AWS for Industries

Accelerate your visual quality inspection at the edge with AWS and SoftServe

In the high-stakes arena of manufacturing, where product quality can make or break market reputation, using visual intelligence effectively remains the linchpin of excellence. Visual intelligence, in this context, refers to the application of artificial intelligence (AI) precision to enhance visual quality inspection (VQI), seamlessly integrating visual data with operational metrics like overall equipment effectiveness (OEE) and supervisory control and data acquisition (SCADA) systems to provide a comprehensive, data-driven view of manufacturing efficiency and quality.

Traditional methods that rely on the human eye grapple with the sheer scale and complexity of modern production lines. This is where the transformative potential of AI-based visual inspection solutions comes into sharp focus, heralding a new era when technology and human expertise converge to elevate quality control to unprecedented heights.

AI-based visual inspection solutions are gaining significant traction in the manufacturing sector. By 2028, the global market for AI-based visual inspection in manufacturing is expected to reach $21.3 billion, with a compound annual growth rate of 43.4 percent. These solutions are not merely tools but collaborators, enhancing the precision and speed of defect detection.

Now, finding the right AI-based visual inspection solution for manufacturing isn’t straightforward. Manufacturers face a common dilemma: go for an off-the-shelf system that’s easy to deploy but might not fit all their needs or build a custom solution from various tools and components that could offer a perfect fit but with the headache of integration and compatibility issues.

Off-the-shelf solutions are attractive for their quick deployment, but they often lack the flexibility to address the unique challenges of different production lines. Many require costly alterations or even a redesign of the manufacturing facility to accommodate the vision systems they come packaged with. Even with this additional expense, there’s always a nuance or customer use case that even the most well-engineered solution might have overlooked. An automotive parts manufacturer might struggle to accurately identify defects because the system cannot adapt to the specific reflective properties and varied shades of its parts. Perhaps the predefined algorithm couldn’t distinguish between acceptable variations in surface finish, leading to false positives.

On the other hand, with a custom, à la carte approach, manufacturers get to pick and choose components and fine-tuned algorithms that meet their specific requirements. However, this approach can lead to complex integration challenges, requiring significant time and resources so that all parts of the system work seamlessly together. A widget manufacturer might discover that proprietary interfaces between the high-resolution cameras and downstream processing tools require custom coding and middleware development to support data compatibility and real-time processing capabilities, leading to extended project budgets, maintenance costs, and time to market.

In this blog, we present an effective solution that balances the convenience of off-the-shelf solutions with the customization offered by an à la carte approach, without the complexities of integration. EdgeInsight from SoftServe Inc., a Premier AWS Partner, is an edge computer vision solution accelerator built on the advanced capabilities of NVIDIA GPU hardware integrated with NVIDIA DeepStream and Amazon Web Services (AWS).

A typical use case

Integrating an AI-based visual intelligence system into a factory’s existing production line must be approached with precision so that the new technology complements the established processes rather than causing costly disruptions. Let’s dig into a detailed use case, outlining typical challenges that businesses might encounter on the path to a fully integrated, AI-enhanced visual intelligence solution.

Inspecting fasteners on a high-speed conveyor

Consider a use case where we wish to perform visual quality inspection along a shop conveyor carrying an assortment of fasteners (screws, nuts, bolts, washers, and so on), in addition to calculating a real-time OEE metric to measure how the line is doing.

An illustrative dashboard of what this could look like on the factory floor or in a control room is shown below.

Figure 1: An illustrative OEE dashboard for the factory floor or a control room

Solution implementation

Implementing such a system is a complex, multidisciplinary task requiring software engineering, machine learning (ML), computer vision, hardware engineering, and domain knowledge of the specific manufacturing process being monitored.

Some of the heavy-lifting activities involved could include:

  • ML: Build a software platform capable of training a model with labeled 3D scans of the objects (both real and synthetically generated) to form sufficient, annotated datasets so that the desired accuracy can be demonstrated well before the accompanying hardware is even prototyped; validate the model against a separate dataset as part of testing.
  • Software update pipeline: Build the necessary software development life cycle (SDLC) pipeline to the edge device, updating the actual application (detection) software, including model updates and device provisioning, as well as making it resilient and capable of integrating well with an existing operational technology (OT) software investment—for example, notifications or alerts.
  • Vision ingestion pipeline: Capture the data streams from hardware (cameras), standardize the format to streamline processing, and perform any necessary pre-processing (normalizing, resizing, or adjusting contrast) to make the input consistent for the ML model to consume; determine data storage options or data streaming services to move video from point to point, with accompanying queues, error handling, multi-device synchronization, and security measures.
  • Integration with existing systems: Integrate the edge system’s outputs with existing OT systems (SCADA, historians, edge-based analytics, and so on) as required, so that the solution becomes an extension of the factory floor routine and not a separate tool that the staff needs to learn.
  • Hardware selection: The selection of camera and lighting is a critical component in a vision system. Precision in choosing the right cameras—considering factors such as resolution, frame rate, sensitivity, and the ability to capture fine details under varying lighting conditions—can make or break the system’s effectiveness. High-quality cameras can capture clear, detailed images necessary for accurate defect detection and real-time monitoring. This selection must also be informed by the environmental conditions of the factory floor, where dust, vibration, and temperature extremes might be present. Moreover, the cameras should be compatible with the processing hardware and software, providing seamless integration and data flow within the Internet of Things (IoT) environment.
  • OEE + dashboard development: Choose a suitable dashboard platform and establish real-time data connections to visualize key performance indicators such as availability, performance, and quality; the dashboard should offer interactivity, be responsive across panels and display devices, and allow user customization (a minimal sketch of the underlying OEE calculation follows this list).
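
The OEE calculation itself is standard: availability × performance × quality. Here is a minimal sketch in Python of that formula as a dashboard backend might compute it; the field names and sample numbers are illustrative, not taken from a real line:

```python
from dataclasses import dataclass

@dataclass
class ShiftStats:
    planned_time_min: float   # planned production time for the shift
    downtime_min: float       # unplanned stops
    ideal_cycle_s: float      # ideal seconds per part
    total_count: int          # all parts produced
    defect_count: int         # parts flagged by the VQI model

def oee(s: ShiftStats) -> dict:
    run_time_min = s.planned_time_min - s.downtime_min
    availability = run_time_min / s.planned_time_min
    performance = (s.ideal_cycle_s * s.total_count) / (run_time_min * 60)
    quality = (s.total_count - s.defect_count) / s.total_count
    return {
        "availability": availability,
        "performance": performance,
        "quality": quality,
        "oee": availability * performance * quality,
    }

# Example: an 8-hour shift with 47 minutes of downtime
print(oee(ShiftStats(480, 47, 1.2, 19000, 230)))
```

In a deployed system, the defect count would come from the inference results at the edge, and the run/stop signals would come from the line’s OT systems.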

Build or buy?

Deciding between an off-the-shelf AI-based VQI system and a custom-built solution is filled with complexity. Prepackaged systems can fall short in meeting the nuanced ML and integration needs specific to a manufacturing environment, such as compatibility with 3D scanning and existing OT systems. Conversely, custom systems, though tailored to precise technical requirements like vision ingestion and camera selection, bring the burdens of lengthy development and integration. This trade-off underscores the difficulty in finding a middle ground that can efficiently marry the detailed technical requisites with the ease of a ready-made solution.

Meet EdgeInsight!

SoftServe’s EdgeInsight is a solution accelerator that strikes a balance between the convenience of off-the-shelf solutions and the customization offered by an à la carte approach, without the complexities of integration. This middle ground provides a more tailored fit for manufacturing needs, offering both ease of use and adaptability.

Harnessing the advanced capabilities of NVIDIA GPU hardware integrated with NVIDIA DeepStream and AWS, EdgeInsight is an edge computer vision solution accelerator designed to expedite your VQI and visual intelligence journey from concept to market. It shifts the focus from the complexities of edge software infrastructure to what truly matters, creating groundbreaking algorithms and deriving meaningful insights. With EdgeInsight, visionaries can now innovate freely, unburdened by the technical minutiae, and pave the way for the future of intelligent edge computing. Say goodbye to stitching together foundational layers of your software!

EdgeInsight supports multiple video streams to the same edge device to facilitate parallel runs of different ML models so that your VQI solution can simultaneously count objects on a conveyor, detect damage, read labels, and provide specification conformance in real time.

In keeping with the philosophy of unburdening users from the time-consuming and less interesting work, EdgeInsight also offers over-the-air (OTA) update management of both the host operating system (OS) and the applications to run on the edge (inclusive of ML models), and reduces the integration friction when trying to reuse existing integrations with edge and cloud systems.

Reference architecture

Figure 2: AI-based visual intelligence solution architecture with the EdgeInsight accelerator

Edge-level (custom AWS IoT Greengrass components)

  1. Video interface manager (VIM) serves as an intermediary layer between the EdgeInsight internal components and the low-level hardware controls of video equipment. Its primary purpose is to abstract and encapsulate the complexity of interacting with each piece of video equipment (for example, ONVIF, RTSP, USB3 Vision, or GigE Vision), providing a standardized interface for other system components to interact with video devices without needing to understand the intricacies of the underlying hardware specifics.
  2. Real-time streaming protocol (RTSP) server mediates between clients and the underlying media streams, providing a standardized protocol for managing (establishing, modifying, and tearing down) streaming sessions. It acts as a video stream broker for the internal EdgeInsight vision components, supporting multiple video input streams and multiple video consumers for complex analysis applications.
  3. DeepStream computer vision (DSCV) offers a high-throughput, low-latency platform for real-time video analytics and processing, capable of running ML inference for quality inspection tasks. It produces metadata-rich, JSON-like objects ready to stream northbound to the fog or cloud, or to be fed back into an object-tracking module, also deployed at the edge, for low-latency, real-time corrective actions.
  4. Video stream environment manager (VSEM) exports video streams to local storage or to the AWS Cloud, transporting video data in real time or in batches depending on the use case. It can also be dynamically initiated by local logic or remote commands to start or stop streaming or recording.
  5. Data integration module (DINT) is an application component responsible for inference data preprocessing (tag reduction, aggregation, filtering, and the like) on the edge, implementing simple business logic (for example, analyzing inference data and initiating an action when a property value breaches a threshold). It blends inference data with OT process data for complex logic, or simply lets the data pass through to AWS IoT SiteWise, a managed service that makes it easy to collect, store, organize, and monitor data from industrial equipment, through Greengrass Stream Manager on AWS IoT Greengrass, an open-source edge runtime and cloud service (a sketch of this pass-through path follows this list).
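
As a concrete illustration of that pass-through path, the following minimal sketch uses the Greengrass Stream Manager SDK for Python to push an inference-derived value into a stream that exports to AWS IoT SiteWise. The stream name and property alias are hypothetical, and error handling is elided:

```python
import time
import uuid

from stream_manager import (
    AssetPropertyValue, ExportDefinition, IoTSiteWiseConfig,
    MessageStreamDefinition, PutAssetPropertyValueEntry, Quality,
    StrategyOnFull, StreamManagerClient, TimeInNanos, Variant,
)
from stream_manager.util import Util

client = StreamManagerClient()

# One-time setup: a stream whose messages Stream Manager exports to SiteWise
client.create_message_stream(MessageStreamDefinition(
    name="EdgeInsightVQI",                       # hypothetical stream name
    strategy_on_full=StrategyOnFull.OverwriteOldestData,
    export_definition=ExportDefinition(
        iot_sitewise=[IoTSiteWiseConfig(identifier="SiteWiseExport", batch_size=10)]
    ),
))

# Per inference batch: publish a defect-rate value under a property alias
entry = PutAssetPropertyValueEntry(
    entry_id=str(uuid.uuid4()),
    property_alias="/factory1/conveyor1/defect_rate",   # hypothetical alias
    property_values=[AssetPropertyValue(
        value=Variant(double_value=0.012),
        quality=Quality.GOOD,
        timestamp=TimeInNanos(time_in_seconds=int(time.time()), offset_in_nanos=0),
    )],
)
client.append_message("EdgeInsightVQI", Util.validate_and_serialize_to_json_bytes(entry))
```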

Solution overview: What is happening at the edge?

Features of EdgeInsight:

Figure 3: DeepStream inference pipeline, Source: NVIDIA

NVIDIA Jetson-based industrial modules deliver up to 248 TOPS in a compact system on module (SOM) to support multiple concurrent AI application pipelines. They are powered by NVIDIA’s AI software stack, with tools and SDKs spanning platforms such as NVIDIA Omniverse Replicator (for synthetic data generation) and frameworks such as DeepStream (see the pipeline above) for intelligent video analytics.
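
To make the pipeline concrete, here is a minimal sketch of launching a DeepStream-style inference pipeline from Python with GStreamer. The elements (nvstreammux, nvinfer, nvdsosd) are standard DeepStream plugins, but the RTSP URL and the nvinfer config path are placeholders, not EdgeInsight’s actual configuration:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Decode an RTSP feed, batch it, run inference, draw overlays, and render.
# nveglglessink assumes a local display; a file or RTSP sink works headless.
pipeline = Gst.parse_launch(
    "uridecodebin uri=rtsp://127.0.0.1:8554/conveyor ! m.sink_0 "
    "nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=/opt/edgeinsight/fastener_detector.txt ! "
    "nvvideoconvert ! nvdsosd ! nveglglessink"
)
pipeline.set_state(Gst.State.PLAYING)

loop = GLib.MainLoop()
try:
    loop.run()          # process frames until interrupted
finally:
    pipeline.set_state(Gst.State.NULL)
```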

Semantic segmentation is a computer vision task that involves labeling each pixel in an image with a corresponding class label. This allows identification and classification of different objects within an image, providing a comprehensive understanding of the entire picture. In contrast with bounding boxes, which only specify the coordinates of an object within an image, semantic segmentation outlines the exact shape of the object. This is crucial when making inferences about objects that interact at high speeds. Using EdgeInsight, advanced ML models can perform semantic segmentation in real time, making it useful for applications that require immediate analysis and response, such as video surveillance or measuring quality on a high-speed conveyor belt.
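
The sketch below (an off-the-shelf torchvision model, not EdgeInsight’s production network) shows what per-pixel output looks like: every pixel receives a class index, so the exact silhouette of each object is recovered rather than a rectangle around it.

```python
import torch
from torchvision.io import read_image
from torchvision.models.segmentation import (
    deeplabv3_resnet50, DeepLabV3_ResNet50_Weights,
)

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("frame.jpg")            # hypothetical conveyor frame
batch = preprocess(img).unsqueeze(0)     # shape [1, 3, H, W]

with torch.no_grad():
    logits = model(batch)["out"]         # shape [1, num_classes, H, W]

mask = logits.argmax(dim=1).squeeze(0)   # [H, W]: one class index per pixel
print(mask.shape, mask.unique())         # distinct classes seen in the frame
```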

Synthetic data generation using NVIDIA Omniverse Replicator helps make the model robust, fault tolerant, and easier to train. This includes the ability to ingest CAD files directly and perform domain randomization, such as inserting unwanted or surprise objects that the model might encounter on the conveyor but should ignore. Omniverse Replicator also lets us account for lighting variability, orientation changes, and the factory dust or fog that a camera lens might encounter in such a setting. This helps us readily generate millions of automatically labeled images for training sets.
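
A Replicator domain-randomization script typically looks like the minimal sketch below. Exact APIs vary by Omniverse release, and the USD asset path, camera placement, frame count, and output directory are all placeholders:

```python
import omni.replicator.core as rep

with rep.new_layer():
    camera = rep.create.camera(position=(0, 250, 0), look_at=(0, 0, 0))
    render_product = rep.create.render_product(camera, (1280, 720))

    # CAD-derived fastener model, tagged with a semantic class label
    fasteners = rep.create.from_usd(
        "omniverse://server/fasteners/bolt_m8.usd",   # placeholder path
        semantics=[("class", "bolt")],
    )

    with rep.trigger.on_frame(num_frames=1000):
        with fasteners:
            # Randomize pose on every generated frame
            rep.modify.pose(
                position=rep.distribution.uniform((-50, 0, -50), (50, 0, 50)),
                rotation=rep.distribution.uniform((0, 0, 0), (0, 360, 0)),
            )

    # Write RGB frames plus automatically generated segmentation labels
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="/tmp/fastener_dataset",
                      rgb=True, semantic_segmentation=True)
    writer.attach([render_product])
```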

For maintenance and software (including model) updates, EdgeInsight blends monolithic, image-based OTA upgrades with dynamically configurable containerized applications that sit on top of a prepackaged, stable base OTA (bootloader, drivers, kernel, and root file system) to run your containers on your edge device.

Integrating with AWS at the edge and in the cloud

At an enterprise scale, process data collected from the edge (which can include multiple conveyors from across many factory sites) can be centrally organized and contextualized as a data lake using asset models built in AWS IoT SiteWise. These can be visualized using AWS IoT SiteWise Monitor by the factory operations team.
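
As a minimal illustration of what asset modeling can look like, the boto3 sketch below creates a conveyor-line model with two measurement properties; the model and property names are assumptions for this example, not taken from the reference architecture:

```python
import boto3

sitewise = boto3.client("iotsitewise")

response = sitewise.create_asset_model(
    assetModelName="ConveyorLine",
    assetModelProperties=[
        {
            "name": "DefectRate",              # streamed from the edge
            "dataType": "DOUBLE",
            "unit": "percent",
            "type": {"measurement": {}},
        },
        {
            "name": "OEE",                     # computed line effectiveness
            "dataType": "DOUBLE",
            "type": {"measurement": {}},
        },
    ],
)
print(response["assetModelId"])
```

Assets instantiated from this model (one per conveyor, across sites) give AWS IoT SiteWise Monitor a consistent hierarchy to visualize.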

Video data passed into Amazon Kinesis Video Streams, which makes it easy to securely stream video from connected devices to AWS, is linked to an ML model in Amazon SageMaker, where developers can build, train, and deploy ML models for nearly any use case. Amazon SageMaker scales automatically to accommodate the data and processing needs of customers’ EdgeInsight use cases.

Functions in AWS Lambda, a compute service that runs your code in response to events, are initiated when new model artifacts (from Amazon SageMaker) arrive in a bucket in Amazon Simple Storage Service (Amazon S3), an object storage service, as shown in the diagram. AWS IoT Greengrass App Manager is then used to deploy the ML model to the edge device, managing and orchestrating deployment of updated EdgeInsight images (Docker containers) pulled from Amazon Elastic Container Registry (Amazon ECR), a fully managed Docker container registry.
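
A minimal sketch of that trigger might look like the following Lambda handler; the component name, version, and thing-group ARN are illustrative placeholders:

```python
import boto3

greengrass = boto3.client("greengrassv2")

def handler(event, context):
    # The S3 put event carries the new model artifact's location
    record = event["Records"][0]["s3"]
    artifact = f's3://{record["bucket"]["name"]}/{record["object"]["key"]}'
    print(f"New model artifact: {artifact}")

    # Revise the deployment so targeted edge devices pull the new component
    return greengrass.create_deployment(
        targetArn="arn:aws:iot:us-east-1:123456789012:thinggroup/edgeinsight",
        deploymentName="edgeinsight-model-update",
        components={
            "com.example.FastenerModel": {"componentVersion": "1.0.1"},
        },
    )
```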

Benefits to your business

An EdgeInsight-based VQI solution can bring transformational value to manufacturers on the factory floor.

  • Cost savings: By detecting defects (corrosion, wear and tear), maintenance and repair costs can be minimized.
  • Time savings: ML models process data faster than humans, and the considerable time savings are even more beneficial in large-scale projects with a large amount of data being produced and requiring analysis.
  • Consistency: No longer reliant on an inspector’s experience and training, inspection processes are free from human error and bias, providing more reliable and repeatable inspection results.
  • Enhanced safety: The need for manual inspections in hard-to-reach areas is reduced, particularly in industries such as oil and gas, where hazardous material exposure risks are higher.
  • Adaptability: The model can be adapted and refined over time based on the data it is processing, improving the accuracy and efficiency of the inspection process.

Though there is market competition to provide the industry’s leading building blocks for solving these challenges at the edge, EdgeInsight’s biggest differentiator is that a client solution does not have to give up desired functionality to shoehorn itself into a shrink-wrapped vendor product. Developers can control and customize all aspects of the functionality to conform to even the most challenging customer requirements.

Ready to improve your factory floor efficiency with an AI-powered visual intelligence system in your production line? Contact SoftServe’s EdgeInsight team to get started.

Krishna Doddapaneni

Krishna is an AWS Worldwide Technical Lead for Industry Partners in IoT, essentially helping partners and customers build crazy and innovative IoT products and solutions on AWS. Krishna has a Ph.D. in wireless sensor networks and completed a postdoc in robotic sensor networks. He is passionate about ‘connected’ solutions, technologies, security, and services.

Maysara Hamdan

Maysara Hamdan is a Partner Solutions Architect based in Atlanta, Georgia. Maysara has over 15 years of experience in building and architecting software applications and IoT connected products in the telecom and automotive industries. At AWS, Maysara helps partners build their cloud practices and grow their businesses. Maysara is passionate about new technologies and is always looking for ways to help partners innovate and grow.

Shaun Greene

Shaun Greene serves as Director of Industry Solutions at SoftServe, where he works with customers and technology experts to accelerate value creation using advanced technologies such as AI/ML, computer vision, IoT, robotics, XR, and digital twins. Always grounded in the physical world, he started life as an electrical hardware engineer and has slowly moved up the stack through embedded, networking and connectivity, cloud, frontend, and business applications, across industries such as medical devices, consumer products, security, and manufacturing. Shaun is currently focused on digital twins and strategy, and works with clients to identify and deploy valuable technology solutions, especially for things that beep, buzz, and whirr.

Shiv Sankaranarayanan

Shiv Sankaranarayanan serves as an IoT Solutions Architect at SoftServe, where he leverages over 20 years of expertise in instrumentation, networking, and cloud engineering to facilitate seamless communication between devices, applications, services, and occasionally, people. With a rich background navigating diverse roles in architecture, product development, and engineering leadership, Shiv has contributed to industries such as electrical relay testing, telecommunications, and life sciences. Shiv is currently focused on Industry 4.0 and collaborates with clients to develop integrated IoT solutions and deepen discussions around their digital transformation journeys.

Yibo Liang

Yibo Liang is an Industry Specialist Solutions Architect supporting the engineering, construction, and real estate industry on AWS. Yibo has a keen interest in IoT, data analytics, and digital twins.