AWS for Industries

Closing the manufacturing skills gap with generative AI

Manufacturers often tell us how workforce challenges affect their productivity. If this sounds familiar, you are not alone. Every quarter, the U.S. National Association of Manufacturers, the industry’s trade association, polls CEOs on their primary business challenges. In the latest poll (2023 Fourth Quarter Manufacturers’ Outlook Survey), more than 71% of respondents cited the “inability to attract and retain a quality workforce” as the challenge most affecting their business, more than any other issue. A top concern for many years running, it even has its own name: “the manufacturing skills gap”. Simply put, 30-year manufacturing industry veterans are retiring, the next generation never arrived in sufficient numbers, and the current generation has different views and expectations, resulting in a net 20% attrition on the shop floor, a situation only exacerbated by the pandemic.

Manufacturers must constantly train new employees, and do so quickly, to maintain productivity in an era of high attrition; they must upskill or reskill existing employees to keep up with the pace of innovation and shifting skillset needs; and they must capture and institutionalize existing knowledge soon, or risk losing it through the departure and retirement of domain experts (sometimes referred to as “the silver tsunami”). For multi-national enterprises, the challenge is compounded: institutional knowledge may be captured in one language but need to be consumed in another.

Gathering, consolidating, and unlocking trapped institutional knowledge—thereby making it readily accessible—has been at the forefront of natural language processing technologies for some time. In this blog, we will explore how generative AI technology, with its ability to contextualize unstructured, disparate information and distill insights, is helping manufacturers make progress toward closing the manufacturing skills gap.

What is generative AI?

Generative AI refers to a domain of machine learning (ML) models that have been trained on vast amounts of broad-spectrum, unstructured data and contain billions of parameters. These generative AI models, generally referred to as foundation models (FMs) or large language models (LLMs), are capable of interpreting complex requests across a wide range of contexts: retrieving and summarizing information, answering questions, and generating original content. There are different types of models, including ones focused on generating text (text-to-text), generating images (text-to-image), and multi-modal models that work across formats (text-to-image and image-to-text). Examples include the Anthropic Claude, Meta Llama, and Amazon Titan foundation models.

Out-of-the-box, foundation models perform well on general knowledge tasks but are limited in their domain-specific knowledge. Domain-specific capabilities, based on information that manufacturers often consider proprietary or trade secrets, are precisely what customers need to help address their workers’ skills gap.

Rather than develop expensive, highly specialized models from scratch, data scientists take advantage of existing, generalized FMs as a starting point and architect solutions that ground the model’s responses in domain-specific information. A common and cost-effective architectural pattern is retrieval augmented generation (RAG). With RAG, a smart search is performed on a corpus of domain-specific, and potentially proprietary, sources to find the most relevant information. These results are then passed to a foundation model, which uses its ability to interpret complex information to extract meaning from the search results, summarize them, answer a question, or generate original content based on that domain-specific knowledge. This technique improves response accuracy, which is critical for fact-based industries like manufacturing, while protecting proprietary sources. RAG also lets organizations optimize performance and cost through flexible choices of foundation model, knowledge store, and smart search service.
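To make the pattern concrete, here is a minimal, self-contained sketch of the RAG flow in Python. The toy corpus, the keyword-overlap retriever, and the prompt template are illustrative stand-ins, not a real implementation; a production system would use a semantic search service and a hosted foundation model.

```python
# Minimal sketch of the RAG flow: retrieve relevant documents,
# then hand them to a model together with the user's question.

def retrieve(query, corpus, top_k=2):
    """Score each document by keyword overlap with the query and
    return the top_k most relevant ones (the retrieval step)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, documents):
    """Augment the user's question with the retrieved context; a
    foundation model would generate the answer from this prompt."""
    context = "\n".join(f"- {d}" for d in documents)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

# A toy domain-specific corpus (placeholder content).
corpus = [
    "Conveyor belt slippage is usually caused by low tension on the drive pulley.",
    "The HVAC filter should be replaced every six months.",
    "Error E42 on the press indicates a hydraulic pressure fault.",
]
docs = retrieve("Why is the conveyor belt slipping?", corpus)
prompt = build_prompt("Why is the conveyor belt slipping?", docs)
```

The key point is the separation of concerns: the retriever narrows the corpus to a few relevant documents, and only those documents travel to the model.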

By combining foundation models with proprietary data sources like document knowledge bases, enterprise resource planning (ERP), and manufacturing execution systems (MES), enterprises can deliver solutions, such as a learning chatbot, which can deal with domain-specific, highly-personalized technical information.

How generative AI helps with closing the skills gap

To understand how a RAG architecture can unlock information to help workers, envision a chatbot that helps repair technicians lower the time spent diagnosing an equipment problem. A machine being down may impede the operation of a critical manufacturing line. Keeping the line running is a component of every manufacturer’s key performance indicators (uptime is part of overall equipment effectiveness (OEE)) and may be directly tied to company revenue. Faster time-to-diagnose leads to faster time-to-repair and greater equipment availability.

Behind the scenes, this chatbot would use RAG to combine a foundation model with additional knowledge sources like diagnostic manuals, service bulletins, prior root-cause analyses, and information from maintenance systems. Some of these knowledge sources may exist only on paper, and paper documents must be processed to extract the relevant information into a digital format. Smart search and generative AI require content to be digital, and specific formats can enhance their ability to interpret the semantics and similarities of words, concepts, and meaning; as such, documents whose original source is digital may also need to be transformed.

An automated content processing pipeline, built with AWS managed services like AWS Step Functions, Amazon Textract, and AWS Lambda, can be set up to extract and transform new knowledge sources as they are added. In multi-national organizations, or those with multi-lingual workforces, the pipeline can take the additional step of translating the knowledge from the source language into one or many target languages using Amazon Translate.
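As an illustration, such a pipeline could be orchestrated with a Step Functions state machine along these lines. The state names, placeholder account ID, and Lambda function ARNs are hypothetical; a real pipeline would also handle multi-page documents and errors.

```json
{
  "Comment": "Illustrative document pipeline: extract text, then translate if needed (names and ARNs are placeholders)",
  "StartAt": "ExtractText",
  "States": {
    "ExtractText": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ExtractWithTextract",
      "Next": "NeedsTranslation"
    },
    "NeedsTranslation": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.sourceLanguage",
          "StringEquals": "en",
          "Next": "StoreCorpus"
        }
      ],
      "Default": "TranslateText"
    },
    "TranslateText": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:TranslateWithAmazonTranslate",
      "Next": "StoreCorpus"
    },
    "StoreCorpus": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:WriteToS3Corpus",
      "End": true
    }
  }
}
```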

This corpus serves as the domain-specific data source for the foundation model to refer to for accurate answers. The foundation models help contextualize and summarize the information for easier human understanding, and allow for interactive question and answering beyond what typical search techniques provide.

Organizations may have a corpus of thousands of knowledge documents. A repair technician is unlikely to have read them all; more importantly, during a critical problem diagnosis, that technician would be highly challenged to find the right piece of information. By implementing smart search, using AWS services like Amazon Kendra or Amazon OpenSearch Service, the technician could reduce the thousands of knowledge documents to the dozen or so most closely related to the problem at hand. Although we’ve lessened the technician’s cognitive load from thousands of knowledge documents to a dozen, that’s still a burden. By passing those dozen documents to a managed foundation model running on Amazon Bedrock, we can complete the RAG process and distill the entire corpus, the thousands of knowledge documents, into a single paragraph or a set of targeted instructions.

From thousands, to a dozen, and ultimately to one succinct response that the repair technician can use to lower the time spent diagnosing and resolving a critical production problem.

The solution described above is just one example of how a generative AI-enabled architecture can help address manufacturing challenges. The architecture relies on AWS managed services. Investigate the technical architecture and the overall solution, then scale, enhance, and optimize them to meet your specific requirements.

Technical Solution Overview

Let’s review an example architecture highlighting how we can access both static content like diagnostic manuals and dynamic data like sensor telemetry and anomaly detection alerts to accelerate a repair technician’s time-to-diagnose.

Extracting knowledge from your existing static content

Knowledge sources like diagnostic manuals do not change often, so it makes sense to process and store them for continuous re-use by your repair chatbot. These documents often contain content, like forms and charts, that needs pre-processing. You can extract this information using services like Amazon Textract. If you are dealing with multiple languages, you can also use Amazon Translate to translate from and to different languages as needed. The extracted and translated information is then processed and added to a content knowledge repository capable of semantic search, like Amazon Kendra (other options are also available). Your RAG-based chatbot will use this repository to find relevant information to pass to the foundation model.

  1. Organizational knowledge sources such as drawings, diagnostic manuals, and SOPs are stored in Amazon S3.
  2. An AWS Lambda function triggers an AWS Step Functions orchestration in the event of changes in S3.
  3. The Step Functions workflow uses Amazon Textract to extract information from these data sources and Amazon Translate to translate it, based on the nature of the document.
  4. The processed documents are deposited as the final corpus in Amazon S3.
  5. Amazon Kendra points to the Amazon S3 bucket containing the multilingual document corpus, which it indexes for search.
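A hedged sketch of the extraction step follows, condensing the workflow’s Textract and Translate tasks into a single Lambda handler for brevity. The corpus bucket name is a placeholder, and multi-page PDFs would require Textract’s asynchronous APIs instead of the synchronous call shown here.

```python
# Sketch of steps 2-4 above: extract text from a newly uploaded document
# with Amazon Textract, translate it with Amazon Translate, and write the
# result to the corpus bucket.

CORPUS_BUCKET = "example-corpus-bucket"  # placeholder name

def lines_from_textract(response):
    """Join the detected LINE blocks of a Textract response into text."""
    return "\n".join(
        block["Text"]
        for block in response.get("Blocks", [])
        if block["BlockType"] == "LINE"
    )

def handler(event, context):
    import boto3  # the AWS SDK, available by default in the Lambda runtime

    textract = boto3.client("textract")
    translate = boto3.client("translate")
    s3 = boto3.client("s3")

    # The S3 event notification carries the bucket and key of the new document.
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]

    extracted = textract.detect_document_text(
        Document={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    text = lines_from_textract(extracted)

    # Translate into English; Amazon Translate can auto-detect the source.
    translated = translate.translate_text(
        Text=text, SourceLanguageCode="auto", TargetLanguageCode="en"
    )["TranslatedText"]

    s3.put_object(
        Bucket=CORPUS_BUCKET,
        Key=f"corpus/{key}.txt",
        Body=translated.encode("utf-8"),
    )
```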

Extracting knowledge from your enterprise applications and machine monitoring systems

Your industrial assets generate telemetry data from connected sensors on the line’s equipment. This data is analyzed with predictive algorithms to find anomalies. When anomalies are detected, alerts are generated and/or work orders are automatically created for technicians to resolve. In the architecture example, we depict Amazon Monitron and Amazon Lookout for Equipment as the services detecting anomalies. Amazon Kendra includes native connectors for many AWS services and industry-standard data sources, including maintenance systems, and allows other data sources to be accessed via custom connectors.

  1. Telemetry data from industrial equipment and manufacturing processes is collected through different sensors. Industrial gateways collect data from these sensors and send it to the cloud.
  2. Manufacturers can use a number of machine learning and analytics services, like Amazon SageMaker, to process and analyze this telemetry data.
  3. Based on the results of this analysis, any anomaly or event triggers an AWS Lambda function.
  4. The AWS Lambda function creates a maintenance order in the asset management system. Amazon Kendra uses the asset management system as the knowledge source.
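Step 4 could look something like the following Lambda sketch. The anomaly event shape, the severity threshold, and the idea of simply logging the payload are all illustrative assumptions; a real function would call your asset management system’s API to create the order.

```python
import json
from datetime import datetime, timezone

def build_work_order(anomaly):
    """Map an anomaly detection event to a work-order payload.
    The event fields and severity threshold are hypothetical."""
    return {
        "assetId": anomaly["asset_id"],
        "priority": "HIGH" if anomaly.get("severity", 0) >= 0.8 else "NORMAL",
        "description": (
            f"Anomaly detected on {anomaly['asset_id']}: "
            f"{anomaly.get('signal', 'unknown signal')}"
        ),
        "createdAt": datetime.now(timezone.utc).isoformat(),
    }

def handler(event, context):
    order = build_work_order(event)
    # In a real deployment this would call the asset management system's
    # API to create the order; here we just log the payload.
    print(json.dumps(order))
    return order
```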

Combining content and application repositories with generative AI

Amazon Kendra allows you to combine your static, infrequently updated knowledge stores like diagnostic manuals with transactional data from machine monitoring and enterprise applications such as an EAM system. The application searches this domain-specific information through Amazon Kendra and passes the results to a variety of foundation models on Amazon Bedrock for contextualization, summarization, and interactive question and answering. Amazon Bedrock is API-driven and can be embedded in chatbot applications or an internal system (e.g. the existing maintenance system). Amazon Bedrock also makes it possible to choose a model that delivers accurate answers at the right price point, which is key to scaling your repair technician chatbot to a broad audience across the enterprise.

  1. Amazon Kendra now points to both static content and transactional application knowledge sources.
  2. Amazon Bedrock uses Amazon Kendra for domain-specific search and runs a foundation model for contextualization, summarization, and interactive questions and answers.
  3. An end user uses a chatbot that connects to Amazon Bedrock.
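The steps above can be sketched with the AWS SDK for Python. The index ID is a placeholder, the model ID is just one example of a Bedrock-hosted model, and the prompt template is our own illustrative wording.

```python
KENDRA_INDEX_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"       # example model

def build_prompt(question, passages):
    """Combine the retrieved passages and the user's question."""
    context = "\n\n".join(passages)
    return (
        "You are a repair assistant. Using only the context below, "
        "answer the technician's question.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

def ask(question):
    import boto3  # requires AWS credentials and access to both services

    kendra = boto3.client("kendra")
    bedrock = boto3.client("bedrock-runtime")

    # Retrieve the most relevant passages from the multilingual corpus.
    results = kendra.retrieve(IndexId=KENDRA_INDEX_ID, QueryText=question)
    passages = [item["Content"] for item in results["ResultItems"][:5]]

    # Ask the foundation model to distill them into one targeted answer.
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{
            "role": "user",
            "content": [{"text": build_prompt(question, passages)}],
        }],
    )
    return response["output"]["message"]["content"][0]["text"]
```

Because Bedrock is API-driven, the same `ask` function could back a chatbot UI or be embedded in an existing maintenance system.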


From thousands of documents, to a dozen, to one succinct response that the employee can use: generative AI and the retrieval augmented generation (RAG) architecture have the potential to greatly empower manufacturers’ workforces by unlocking trapped knowledge. By combining foundation models with proprietary data sources, solutions like learning chatbots can provide employees with quick, relevant, and digestible answers to their domain-specific questions. This can significantly reduce the time workers spend searching for information, accelerating processes like equipment diagnosis and repair. Ultimately, this leads to greater productivity and efficiency on the shop floor.

While the solution presented in this blog is just one example, it demonstrates the versatility of generative AI to transform how employees access institutional knowledge. By customizing the architecture to leverage an organization’s unique data sources, this technology can be tailored to fit any manufacturer’s distinct needs. As generative AI continues advancing, manufacturers have an immense opportunity to leverage it to close the skills gap and support their workforces.

Danny Smith


Danny Smith is principal ML Strategist for Automotive and Manufacturing Industries, serving as a strategic advisor for customers. His career focus has been on helping key decision-makers leverage data, technology, and mathematics to make better decisions, from the board room to the shop floor. Lately, most of his conversations have been on driving digital transformation, becoming data-driven, and strategies for democratizing artificial intelligence and machine learning. He started his career in the industrial supply chain space and continues to have a fondness for it, especially trains. Danny has a B.S. in Industrial Management from the Georgia Institute of Technology and a M.S. in Decision Sciences from Georgia State University.

Garry Galinsky


Garry Galinsky is a Solutions Architect at Amazon Web Services. He has over 20 years of progressive high-tech experience in the telecommunications, local search, Internet, and financial industries. During his career, Garry has successfully designed, developed, delivered, and evangelized multiple Internet, web 2.0, local & social search, and mobile solutions for small, medium, and large corporate customers in North America, Europe, the Middle East, and Asia.

Ravi Soni


Ravi Soni is a Global Industrial Manufacturing Solution specialist at AWS. He brings extensive multidisciplinary experience in manufacturing, technology, and digital transformation. Ravi serves as a trusted advisor to AWS customers on their digital transformation journeys, sharing recommendations around security, cost, performance, reliability, and operational efficiency to accelerate innovation and scale industrial manufacturing solutions. He also devotes significant time to growing the internal manufacturing technical field community.