AWS for Industries

Improve tire manufacturing effectiveness with a process digital twin

Introduction to the concept of a process twin for tires | What problem does it solve?

The process digital twin for tire manufacturing aims to improve the Golden Batch mixing process and, as a result, reduce the overall number of noncompliant products (NCPs) as well as the associated offgrade percent. Through standardized, pre-modeled controls and custom algorithms, the process digital twin removes the need to manually configure process parameters and helps to correlate variable interactions and dependencies in near real-time.

Process variations occur during tire mixing and assembly, leading to higher production costs, lower quality, and reduced production line speed. These variations result in reworked or scrapped rubber compound and in problems with downstream equipment, like extruders and mixers. Manufacturers employ various modeling techniques to predict and optimize parameters to mitigate poor-quality products, but these are mostly local optimizations in a specific process area. The yield results are often not sustainable because decisions are based on small samples, and as a result, any process change is usually not a long-term solution.

Resolution of process variation using a process digital twin

The principle of a process digital twin is based on the concept of multi-objective optimization. Multi-objective optimization in engineering and industrial settings is often challenging, requiring sophisticated techniques. In typical scenarios, wet and dry process variables are derived through correlation of various parameters, either on a standard spreadsheet solver or on a statistical package. Deriving the variables this way is prohibitively labor-intensive, and while one set of parameters is optimized, other dependent relationships fall out of balance. Hence, the ideal state is an optimizer that looks at the entire relationship of key process input variables (KPIVs) and key process output variables (KPOVs) as one.
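
To make the idea concrete, here is a minimal sketch of weighted-sum multi-objective optimization over two KPIVs. The objective functions, weights, and operating windows are hypothetical placeholders, not values from a real tire process:

```python
# A minimal sketch of weighted-sum multi-objective optimization over two
# KPIVs (mixing time, drop temperature). The response surfaces and bounds
# below are hypothetical placeholders, not values from a real tire process.
import numpy as np
from scipy.optimize import minimize

def offgrade_pct(x):
    mix_time, drop_temp = x
    # Hypothetical response surface: offgrade rises away from a sweet spot.
    return 0.5 * (mix_time - 4.0) ** 2 + 0.2 * (drop_temp - 160.0) ** 2

def energy_per_batch(x):
    mix_time, drop_temp = x
    # Hypothetical: longer mixing and hotter drops cost more energy.
    return 1.5 * mix_time + 0.05 * drop_temp

def combined(x, w=(0.7, 0.3)):
    # The weighted sum collapses two KPOVs into one scalar objective, so the
    # optimizer considers both relationships as one, as described above.
    return w[0] * offgrade_pct(x) + w[1] * energy_per_batch(x)

result = minimize(
    combined,
    x0=np.array([5.0, 150.0]),            # initial guess
    bounds=[(2.0, 8.0), (140.0, 180.0)],  # KPIV operating windows
)
print("Optimal mixing time (min), drop temperature (°C):", result.x)
```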

The process digital twin uses several services from Amazon Web Services (AWS):

  • AWS IoT Greengrass, an open-source edge runtime and cloud service for building, deploying, and managing device software
  • AWS IoT Core, a service that helps you connect billions of IoT devices and route trillions of messages to AWS services without managing infrastructure
  • AWS IoT SiteWise Edge, a service that makes it easy to collect, organize, process, and monitor equipment data on-premises

These services help to process the controllers’ model and to simulate and optimize core tire building, calendering, mixing, and curing processes.
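
As one hedged illustration of the ingestion path these services support, a gateway-side script could publish mixer telemetry to AWS IoT Core through the boto3 iot-data client. The topic name and payload fields below are hypothetical examples, not the verified architecture’s exact implementation:

```python
# Sketch: publish a mixer telemetry sample to AWS IoT Core.
# The topic name and payload fields are hypothetical examples.
import json
import boto3

iot_data = boto3.client("iot-data", region_name="us-east-1")

payload = {
    "mixer_id": "MX-01",
    "drop_temperature_c": 158.2,
    "ram_pressure_bar": 5.6,
    "mixing_time_s": 245,
}

iot_data.publish(
    topic="plant1/mixing/mx-01/telemetry",  # hypothetical topic
    qos=1,
    payload=json.dumps(payload),
)
```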

In the rubber-mixing stage, the recipe of various raw material constituents like rubber, chemicals, carbon, oil, and other additives plays a vital role in the control of process standards and final product quality. In the current scheme of things, parameters like Mooney viscosity, specific gravity, and Rheo (the level of curing that can be achieved over the compound) are measured manually and offline. In addition, the correlation of these parameters is conducted either on a standard spreadsheet solver or a statistical package. Because of the delay in such correlation and interdependency analysis, the extent of control a process engineer has over deviations (such as drop temperature, mixing time, ram pressure, injection time, and so on) is limited.
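
As a hedged sketch of the correlation work the process digital twin automates, and assuming a hypothetical batch export with the column names shown, the spreadsheet-style analysis reduces to a correlation matrix between inputs and quality outcomes:

```python
# Sketch: correlate in-process KPIVs with quality KPOVs for mixed batches.
# The file and column names are hypothetical; real tags come from the
# plant historian.
import pandas as pd

batches = pd.read_csv("mixing_batches.csv")  # hypothetical export

kpivs = ["drop_temperature", "mixing_time", "ram_pressure", "injection_time"]
kpovs = ["mooney_viscosity", "specific_gravity", "rheo"]

# Pairwise Pearson correlations between inputs and quality outcomes.
corr = batches[kpivs + kpovs].corr().loc[kpivs, kpovs]
print(corr.round(2))
```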

Verified reference architecture

Figure 1. Process digital twin architecture for a tire manufacturer

It is imperative to ascertain the required data sources and the desired business objective. In our case, we are talking about integration requirements between the following components:

  • A distributed control system (DCS)
  • A manufacturing execution system (MES)
  • A Level 2 or 3 Allen-Bradley programmable logic controller (PLC)
  • A supervisory control and data acquisition (SCADA) system
  • Historians with one year of archived data across critical-to-process parameters
  • Operational data storage (“hot” data)
  • Asset modeling through AWS IoT SiteWise, a managed service that makes it easy to collect, store, organize, and monitor data from industrial equipment at scale (see the asset model sketch after this list)
  • A third-party edge gateway between Levels 1 and 2
  • Potential actuators
  • Extrinsic sensory, level switch, and temperature controller data, all fed into an AWS industrial data lake and then further processed by Amazon SageMaker (a fully managed service that brings together a broad set of tools for high-performance, low-cost machine learning (ML) for any use case) and Amazon QuickSight (which powers data-driven organizations with unified business intelligence (BI) at hyperscale)
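
As an example of the asset modeling step referenced above, AWS IoT SiteWise asset models can be defined through boto3. The model and property names here are hypothetical illustrations:

```python
# Sketch: define a minimal AWS IoT SiteWise asset model for a mixer.
# Model and property names are hypothetical illustrations.
import boto3

sitewise = boto3.client("iotsitewise")

response = sitewise.create_asset_model(
    assetModelName="BanburyMixer",
    assetModelDescription="Internal mixer with core mixing telemetry",
    assetModelProperties=[
        {
            "name": "DropTemperature",
            "dataType": "DOUBLE",
            "unit": "Celsius",
            "type": {"measurement": {}},  # live value streamed from the edge
        },
        {
            "name": "RamPressure",
            "dataType": "DOUBLE",
            "unit": "Bar",
            "type": {"measurement": {}},
        },
    ],
)
print("Asset model ID:", response["assetModelId"])
```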

There are four steps to operationalize the process digital twin. The first is data acquisition and noise removal, a process of 3–6 weeks with the built-in and external connectors. Next is model tuning and ascertaining what is fit for our purpose; since we are considering a list of defect types, this takes another four weeks for training, validating, creating test sets, and delivering a simulation environment with minimum error. The third step is delivering the set points and boundary conditions for each grade of compound.
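
For the noise-removal part of the first step, one common technique (shown here as a sketch on hypothetical historian data, with an arbitrary window size) is a rolling-median filter that suppresses sensor spikes before any model training:

```python
# Sketch: despike a historian time series with a rolling median filter.
# The file name, tag, and 5-minute window are hypothetical choices.
import pandas as pd

ts = pd.read_csv("drop_temperature.csv", parse_dates=["timestamp"],
                 index_col="timestamp")["value"]

smoothed = ts.rolling(window="5min").median()    # suppress short spikes
residual = (ts - smoothed).abs()
clean = ts.where(residual < 3 * residual.std())  # drop outlier samples
```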

For example, the process digital twin cockpit has three desirable sub-environments (a sketch of the deviation computation follows the list):

  1. Carcass level—machine ID, drum width, drum diameter, module number, average weight, actual weight, and deviation results
  2. Tread roll level—machine number, average weight, actual weight, deviation, and SKU number
  3. Curing level—curing ID, handling time, estimated curing time, curing schedule, and associated deviations in curing time
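
As referenced above, a minimal sketch of the weight deviation computation behind these cockpit views, with hypothetical column names and values, could be:

```python
# Sketch: compute weight deviation per carcass record for the cockpit view.
# Column names and values are hypothetical placeholders.
import pandas as pd

carcass = pd.DataFrame({
    "machine_id": ["TBM-01", "TBM-01", "TBM-02"],
    "actual_weight_kg": [9.82, 9.91, 10.12],
})
# Average weight per machine, then percent deviation of each record from it.
carcass["average_weight_kg"] = carcass.groupby("machine_id")[
    "actual_weight_kg"].transform("mean")
carcass["deviation_pct"] = 100 * (
    carcass["actual_weight_kg"] - carcass["average_weight_kg"]
) / carcass["average_weight_kg"]
print(carcass)
```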

The final step is ascertaining the model outcome and computing the simulation result (bias, sum of squared errors (SSE), deviation, and so on) with respect to business outcomes like defect percentage, speed of work, and overall accuracy.
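
Assuming arrays of predicted and observed values (the numbers below are hypothetical), the simulation metrics named above reduce to a few lines:

```python
# Sketch: score simulation output against observed batch results.
import numpy as np

observed = np.array([61.2, 59.8, 60.5, 62.1])   # hypothetical KPOV values
predicted = np.array([60.9, 60.2, 60.4, 61.5])

bias = np.mean(predicted - observed)        # systematic over/under-shoot
sse = np.sum((predicted - observed) ** 2)   # sum of squared errors
deviation = np.std(predicted - observed)    # spread of the residuals
print(f"bias={bias:.3f}  SSE={sse:.3f}  deviation={deviation:.3f}")
```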

Value for customers

The process digital twin will impact the final yield, the cost of poor quality (COPQ), effective utilization, and specific consumption of the line. It is a living digital representation of the physical system, dynamically updated with data to mimic the true structure, state, and behavior of the physical system, and it is used to drive business outcomes. The process digital twin is developed at the system level to assess system performance and at a sub-assembly level to identify off-nominal behavior, down to a specific failure mode of a specific process, to assess root cause. Operational personnel, such as plant operators, process engineers, and maintenance technicians, can use these applications to streamline remote operations, improve planning decisions, and anticipate operational issues.

Stakeholder FAQs

Typical Stakeholders

Manufacturing director, vice president, chief data officer, and chief operating officer: These individuals are responsible for overall process quality, product quality assurance, manufacturing costs, and operating standards. The chief data officer is also responsible for planning the digital strategy road map around manufacturing operations.

Process engineer and controller: These individuals are responsible for managing the process parameters for each standard operating procedure.

1. What features and functionalities will a process digital twin drive for my factory users?

A process digital twin provides low-touch, data-led insights into a combination of production processes managed by a set of quality controls. On the shop floor, these controls are performed through manual standard operating procedures (SOPs) and vary from one SKU to another. A process digital twin digitizes the SOPs and the associated set points so that operators can make decisions on the fly, using the simulation capability available in the cockpit of the solution. Users need to analyze the impact of incoming raw material parameters on in-process behavior. Currently, this analysis happens offline, through complex and error-prone spreadsheets or standard software packages, to infer the final product quality or outcome of the process. The process digital twin solution helps operators to focus only on the outcome while the platform beneath the solution does all the correlations and covariances in near real-time, helping run the processes with higher efficiency, stability, and final yield.
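
As a sketch of what correlations in near real-time can mean in practice, a rolling window over recent batch data (tag names and the 50-batch window are hypothetical choices) keeps the relationship between an input and an outcome continuously up to date instead of frozen in a one-off spreadsheet:

```python
# Sketch: rolling correlation between a KPIV and a KPOV over recent batches.
# The file, tag names, and 50-batch window are hypothetical choices.
import pandas as pd

df = pd.read_parquet("recent_batches.parquet")  # hypothetical hot store

# Correlation recomputed over the trailing 50 batches as new data arrives.
rolling_corr = df["drop_temperature"].rolling(window=50).corr(
    df["mooney_viscosity"])
print(rolling_corr.tail())
```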

2. What are the various attributes of my tire process digital twin?

The contour of the process digital twin is based on an attribute framework originally proposed by the General Electric company. The process digital twin has three main attributes. The first is individualized process digital twin modeling: not a general model for all tire SKUs, processes, or raw material compounds but rather a specific tire compound type or grade mix at a specific plant. The second is scalability and reconfigurability: the process digital twin is applicable to all instances of the same class by changing the inputs and outputs and calibrating the process digital twin’s parameters to specific physical conditions. This ability resolves the issue of whether users can move from an individualized process digital twin and use it to horizontally scale across SKUs, processes, grade types, and so on. The process digital twin can thus be redeployed as part of a different system (like mixers or calendering, with appropriate calibration) rather than rebuilding the entire model from scratch. The last attribute is that the process digital twin can be mapped to lagging and leading indicators for business outcomes. The process digital twin is built with the potential impact on metrics (like right first time, yield, green tire weight, overall equipment effectiveness, process capability index, and other indicators) in mind. Throughout the tire value chain, the process digital twin continuously monitors and impacts the previously mentioned metrics in near real-time for seamless final simulations. For example, the process digital twin can draw insights from near real-time interactions within the raw material mix, from parameters like ash content, annealing point, solubility, and viscosity, and from input parameters like dump temperature, chamber temperature, ram pressure, and mixer control thermocouple readings.

3. What are the potential risks in my manufacturing process due to a process digital twin?

Like any new software package or proportional–integral–derivative (PID) controls introduced into shop floor controls, the process digital twin platform has potential risks if left as a closed-loop system. In a closed-loop system, a model is used to automatically manage the near real-time tuning of a set point, and the Level 3 system accepts the set points to run the process with recommended adjusted parameter values. This carries the risk of jeopardizing the outcome of a batch if the actual behavior of the process turns out to be different from the predicted values, thus ruining the entire batch. To prevent this from happening, we have designed the process digital twin to be a simulation engine in the beginning. At this stage, the operator uses the recommended values to adjust the true values of the set point manually, compares the predicted/actual deviation at the batch or SKU level, and watches for any anomalies. Only after the system achieves an accuracy of more than 95 percent over a period of three months and across a large set of products, SKU types, and raw material composition types shall we move to a closed-loop architecture.
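
The gating logic described here can be sketched as a simple check. The 95 percent accuracy and three-month thresholds come from the policy above; the data shape and the breadth requirement are hypothetical:

```python
# Sketch: decide whether to promote the twin from advisory to closed loop.
# Accuracy thresholds follow the policy above; the rest is hypothetical.
from datetime import timedelta

import pandas as pd

def ready_for_closed_loop(history: pd.DataFrame,
                          min_accuracy: float = 0.95,
                          min_span: timedelta = timedelta(days=90)) -> bool:
    """history has columns: timestamp, sku, accuracy (0..1)."""
    span = history["timestamp"].max() - history["timestamp"].min()
    return (
        span >= min_span                             # three months of runtime
        and history["accuracy"].mean() >= min_accuracy  # >95% accuracy
        and history["sku"].nunique() >= 10           # hypothetical breadth bar
    )
```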

4. How does the near real-time monitoring and optimization* process happen in a process digital twin for my production process? (*This is a closed-loop feature not considered currently.)

The process digital twin makes it straightforward for users to combine data in a single service without creating another data store and without requiring us to reenter the schema information that already exists in our data stores. To reduce the heavy lifting needed to connect to these data stores, the process digital twin provides unified access APIs that applications can use to access data from various stores with the same APIs, regardless of where the data is stored. There are built-in data connectors for AWS IoT SiteWise, for equipment and time-series sensor data; for Amazon Kinesis Video Streams, which makes it easy to securely stream video from connected devices to AWS, for video data of tire surface quality; and for Amazon Simple Storage Service (Amazon S3), an object storage service built to retrieve any amount of data from anywhere, for data from enterprise applications.
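
As one hedged example of such a connector call, an application can pull a window of sensor history from AWS IoT SiteWise with boto3. The asset and property IDs below are placeholders:

```python
# Sketch: read a window of sensor history from AWS IoT SiteWise.
# Asset and property IDs are placeholders.
from datetime import datetime, timedelta

import boto3

sitewise = boto3.client("iotsitewise")

response = sitewise.get_asset_property_value_history(
    assetId="asset-id-placeholder",
    propertyId="property-id-placeholder",
    startDate=datetime.utcnow() - timedelta(hours=1),
    endDate=datetime.utcnow(),
)
for entry in response["assetPropertyValueHistory"]:
    print(entry["timestamp"], entry["value"])
```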

5. How does this solution improve the predictive quality analytics for the raw material mixing stage?

Raw material mixing is one of the most important stages in the overall process digital twin value orchestration. The quality of rubber products and SKUs depends on the exact mixing of various incoming raw materials and is hence prone to significant variation in critical-to-process (CTP) and critical-to-quality (CTQ) parameters. For example, a Mooney viscosity analysis is usually done at the end of a batch, so the lack of predictive models for Mooney viscosity has been a perennial issue. Predicting Mooney viscosity requires nonlinear models with high accuracy (measured, for example, by root mean square error) and nonlinear analysis of categorical variables (variables that sort objects into a limited number of categories). With temporal semantic data, the process digital twin can model a near real-time predictive controller of the process parameters for nonlinear yet dynamic systems.
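
Here is a hedged sketch of such a nonlinear predictor, using hypothetical feature names and gradient-boosted trees as one possible model family, scored by root mean square error:

```python
# Sketch: nonlinear Mooney viscosity prediction from mixing parameters.
# Feature names are hypothetical; the model family is one possible choice.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("mixing_batches.csv")  # hypothetical export
features = ["dump_temperature", "chamber_temperature", "ram_pressure",
            "mixing_time", "ash_content", "viscosity_in"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["mooney_viscosity"], test_size=0.2, random_state=42)

# Gradient-boosted trees capture nonlinear KPIV/KPOV relationships.
model = GradientBoostingRegressor().fit(X_train, y_train)
rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
print(f"RMSE: {rmse:.2f} Mooney units")
```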

6. What are our measures of success for a process digital twin post-deployment? Is there a period of stability for the tool?

Like any artificial intelligence (AI)/ML use case, a solution such as a process digital twin needs to acclimate to the dynamics of the processes and the associated dependencies where it is deployed. Yes, an out-of-the-box solution requires less time to start delivering improvements compared to traditional generic packages. However, the more the process digital twin runs across a variety of product types, grade mixes, and complexities, the better it gets. Typically, it takes 3–4 months for a solution like this to achieve the accuracy we aspire to. The initial three months are for process familiarization, working condition stabilization, and anomaly determination through boundary conditions (upper control limit and lower control limit).
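
The anomaly determination through boundary conditions can be sketched as classic 3-sigma control limits. The file and tag names are hypothetical:

```python
# Sketch: flag batches outside 3-sigma control limits during stabilization.
# The file and tag names are hypothetical.
import pandas as pd

values = pd.read_csv("mooney_by_batch.csv")["mooney_viscosity"]
mean, std = values.mean(), values.std()
ucl, lcl = mean + 3 * std, mean - 3 * std  # upper/lower control limits
anomalies = values[(values > ucl) | (values < lcl)]
print(f"UCL={ucl:.1f}, LCL={lcl:.1f}, anomalies={len(anomalies)}")
```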

Conclusion

Tire mixing, processing, curing, and quality processes have, over the years, used the core competencies of an MES, a DCS, or piecemeal AI/ML applications to further optimize process performance. Without a process digital twin, the system would continue to run in a stable manner. However, because of increasing customer requests for a complete process genealogy from the raw material source to the finished goods, and the current system’s inability to provide this across processes, a process digital twin is a worthwhile case to pursue.

Contact an AWS Representative to learn how we can help accelerate your business.

Further Reading

  • Read about the AWS Industrial Data Fabric, a well-architected framework with prescriptive guidance, to accelerate ingestion, contextualization, and the ability to act on your enterprise data across the manufacturing value chain.
  • Read about Predictive Quality solutions on AWS that provide manufacturers with leading KPIs to identify quality issues before defects occur.

Sundar Ram

Sundar Ram, Director of Business Development, leads the Business Development team at Amazon Internet Services Pvt Ltd (part of AWS globally). His team handles five types of activities: 1/ migrations and modernizations, 2/ specialist products like analytics, storage, databases, AI/ML, HPC, and so on, 3/ business value assessments, 4/ digital innovation using Amazon’s innovation methodologies, and 5/ industry business development. Over the last two decades, Sundar has developed deep exposure to the APAC market and to a wide range of solution domains and industries, especially telecom, manufacturing, and the public sector. Sundar started his career at Maruti Udyog in the IT team, where he helped build systems for sales and dispatch as well as production and materials. He holds an MBA from IIM Ahmedabad and a B.Tech from IIT Madras.

Anindya Bhattacharya

Anindya Bhattacharya, Industry Specialist for Manufacturing and Supply Chain at AWS, drives manufacturing and supply chain industry solutions for the AWS India market. He brings close to two decades of core manufacturing and supply chain expertise, focusing on delivering holistic EBITDA transformation, smart manufacturing deployments, and advisory around strategic supply chain platforms. Prior to joining AWS, he worked with Hitachi, Blue Yonder, EY, and the TATA STEEL group in various capacities globally. Anindya has hands-on experience delivering both large-scale and piecemeal smart factory deployments across key industry segments like metals, automotive, and industrials.