The Internet of Things on AWS – Official Blog
Digital Twins on AWS: Driving Value with L4 Living Digital Twins
Introduction
In working with customers, we often hear that a desired Digital Twin use case is driving actionable insights through what-if scenario analysis. These use cases typically include operations efficiency management, fleet management, failure prediction, and maintenance planning, to name a few. To help customers navigate this space, we developed a concise definition and a four-level Digital Twin leveling index consistent with our customers’ applications. In a prior blog, we described the four-level index (shown in the figure below) to help customers understand their use cases and the technologies required to achieve their desired business value.
In this blog, we will illustrate how the L4 Living Digital Twins can be used to model the behavior of a physical system whose inherent behavior evolves over time. Continuing with our example for electric vehicle (EV) batteries, we will focus on predicting battery degradation over time. We described the L1 Descriptive, L2 Informative, and L3 Predictive levels in previous blogs. In this blog, you will learn about the data, models, technologies, AWS services, and business processes needed to create and support an L4 Living Digital Twin solution.
L4 Living Digital Twin
An L4 Living Digital Twin focuses on modeling the behavior of the physical system as it changes over time by using real-world data to update the model parameters. Examples of real-world operational data include continuous data (time-series), measurements (sensors), or observations (visual inspection data or streaming video). The capability to update the model makes it “living” so that the model is synchronized with the physical system. This can be contrasted with an L3 Predictive Digital Twin, where the operational data is used as input to a static pretrained model to obtain the response output.
The workflow to create and operationalize an L4 Digital Twin is shown in the figure below. The first step is to build the model using first-principles methods (“physics-based”), historical operational data, or hybrid modeling techniques. The second step is to perform a sensitivity analysis of the model parameters to select which parameters will be updatable and confirm that the selected subset captures the variation in the real-world data. Afterward, the model’s parameters are calibrated using a probabilistic calibration algorithm, and the model can then be deployed in production.
Once in production, the deployed model is used to predict the measured values, which are compared against the actual measured values, in order to calculate the error term. If the error is less than a preset threshold, then no adjustments are made, and the model is used to predict the next measured values. If the error is larger than the threshold, then the probabilistic Bayesian calibration algorithm is used to update the model parameters reflecting the latest data observations. This updating capability is what makes the L4 Digital Twin “living.”
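The sketch below illustrates this predict-measure-recalibrate logic in Python. It is a minimal illustration only: the model object, the calibrate routine, and the error threshold value are hypothetical placeholders for the components described above.

```python
import numpy as np

ERROR_THRESHOLD = 0.05  # hypothetical relative-error threshold

def living_twin_step(model, route_inputs, measured_voltage, calibrate):
    """One predict-measure-recalibrate cycle for a single route.

    model            -- current calibrated battery model (model.predict returns a voltage trace)
    route_inputs     -- measured route quantities (average velocity, distance, load, ...)
    measured_voltage -- voltage trace actually observed on this route
    calibrate        -- probabilistic (Bayesian) parameter-update routine, e.g. a UKF step
    """
    predicted_voltage = model.predict(route_inputs)

    # Compare the prediction against the real-world measurement
    error = np.mean(np.abs(predicted_voltage - measured_voltage) / np.abs(measured_voltage))

    # Only recalibrate when the model has drifted away from the physical battery
    if error > ERROR_THRESHOLD:
        model = calibrate(model, route_inputs, measured_voltage)

    return model, error
```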
To help customers build and deploy L4 Digital Twins, AWS (Iankoulski, Balasubramaniam, and Rajagopalan) published the open-source aws-do-pm framework on AWS Samples. Technical details are provided in the GitHub readme files and in a detailed 3-part blog by the authors, which shows an example implementation for EV battery degradation that we will leverage in this blog. In summary, the aws-do-pm framework enables customers to deploy predictive models at scale across a distributed computing architecture. The framework also allows users to probabilistically update the model parameters using real-world data and calculate prediction uncertainty while maintaining a fully auditable history for version control.
In our example, we will show how to create L4 Digital Twins for a fleet of EV batteries using the aws-do-pm framework and integrate it with AWS IoT TwinMaker. These L4 Digital Twins will make predictions of the battery voltage within each route driven, taking into account battery degradation over time. Since each vehicle takes a different route and has different charging and discharging cycles over the months, the battery degradation for each vehicle will be different. The EV battery Digital Twins must therefore have two attributes: 1/ they must be individualized for each battery; 2/ they must be updated over the battery’s life to accurately reflect the degraded performance.
Initial model building and calibration
The first thing we need to build a model is an operational dataset. For this example, we will use the same EV fleet model published by Iankoulski, Balasubramaniam, and Rajagopalan in the aws-do-pm GitHub repository. Following the documentation, we created two synthetic datasets to mimic the operations of 100 vehicles, each driving 100 routes, using the example code in aws-do-pm. In practice, these datasets would be obtained from actual vehicles in operation. The first synthetic dataset mimics the routes traveled by each of the vehicles. Each route is characterized by trip distance, trip duration, average speed, average load (weight), rolling friction, and aerodynamic drag, which are preassigned by sampling from probability distributions for each characteristic. Once the routes are set, the second synthetic dataset calculates the battery discharge curves for each of the 100 vehicles as they travel their 100 routes. Each vehicle is assumed to start with a new battery. To mimic real-life battery degradation, the example applies a simple phenomenological degradation model as a multiplier to the voltage discharge curves as each vehicle drives its 100 routes. The degradation model is a function of the route duration, route distance, and average load, so each vehicle experiences a different degradation depending on its driving history. This synthetic time-series dataset of degrading battery discharge for each vehicle is our starting point, mimicking real-life operational data. The figure below shows the complete voltage versus time charge-discharge cycles for Vehicle 1 over several months as it drives its assigned 100 routes, and we can see how the battery degrades over time.
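As an illustration of how such a synthetic fleet dataset can be constructed, the sketch below samples route characteristics for 100 vehicles driving 100 routes and applies a simple cumulative degradation multiplier. The distribution choices, units, column names, and degradation coefficients are placeholder assumptions, not the values used in aws-do-pm.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)
n_vehicles, n_routes = 100, 100
n_total = n_vehicles * n_routes

# Sample route characteristics from illustrative probability distributions
routes = pd.DataFrame({
    "vehicle_id":       np.repeat(np.arange(n_vehicles), n_routes),
    "route_id":         np.tile(np.arange(n_routes), n_vehicles),
    "trip_dist_km":     rng.normal(80, 20, n_total).clip(min=10),
    "trip_duration_h":  rng.normal(1.5, 0.4, n_total).clip(min=0.25),
    "avg_load_kg":      rng.normal(400, 100, n_total).clip(min=50),
    "rolling_friction": rng.uniform(0.010, 0.015, n_total),
    "aero_drag_coeff":  rng.uniform(0.28, 0.35, n_total),
})
routes["avg_speed_kmh"] = routes["trip_dist_km"] / routes["trip_duration_h"]

# A simple phenomenological degradation multiplier that accumulates with usage,
# so each vehicle degrades differently depending on its driving history
usage = routes.groupby("vehicle_id")[["trip_dist_km", "trip_duration_h", "avg_load_kg"]].cumsum()
routes["voltage_multiplier"] = 1.0 - 1e-5 * (
    usage["trip_dist_km"] + 10 * usage["trip_duration_h"] + 0.01 * usage["avg_load_kg"]
)
```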
Now that we have representative operational data, the first step is to build the model that predicts the voltage as the vehicle drives along its route. The model can be built in several ways. It could be a physics-inspired model where the functional form of the model equation is based on underlying scientific principles, a purely empirical model where the functional form is based on a curve fit, a strictly data-driven model such as a neural network, or a hybrid model such as a physics-inspired neural network. In all cases, the model coefficients or parameters are exposed and can be used to calibrate the model. In our example, we trained a neural network using the first trip of each of the 100 vehicles to represent the behavior of a new battery. To make the example more realistic, we trained the model to predict battery voltage as a function of quantities that can be measured in real life: average velocity, distance traveled within the route, and average load. Details of this model are available in the aws-do-pm blog.
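A minimal sketch of such a data-driven model is shown below, using scikit-learn rather than the neural network implementation in aws-do-pm. The first_routes DataFrame, its file name, and its column names are assumptions standing in for the first-trip data described above.

```python
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Assumed input: one row per time step of the first route of every vehicle,
# with the measurable quantities described above and the observed voltage
first_routes = pd.read_csv("first_routes.csv")  # hypothetical extract of the synthetic dataset
feature_cols = ["avg_velocity", "dist_in_route", "avg_load"]

X = first_routes[feature_cols].to_numpy()
y = first_routes["voltage"].to_numpy()

# A small fully connected network; its weights and biases are the model
# parameters later exposed to the sensitivity analysis and calibration steps
new_battery_model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0),
)
new_battery_model.fit(X, y)
```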
The second step is to run a sensitivity analysis to determine which model parameters to calibrate. The aws-do-pm framework implements the Sobol index for sensitivity analysis because it measures sensitivity across the entire multivariate input space and can identify both the main effects and the 2-way interactions. The details are covered in the aws-do-pm documentation and the corresponding technical blog, and are briefly summarized here. The graph on the left shows the main effects plot, indicating that trip_dist_0, bias_weight_array_2, and bias_weight_array_4 are the key parameters to include in the calibration. The graph on the right shows the chord plot for 2-way interactions, indicating the additional parameters to include in the calibration.
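To illustrate the Sobol method itself, the sketch below uses the open-source SALib package rather than the aws-do-pm implementation. The candidate parameter names, their bounds, and the evaluate_battery_model function are hypothetical placeholders for the trained battery model's exposed parameters.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Candidate parameters considered for calibration; names and bounds are illustrative
problem = {
    "num_vars": 3,
    "names": ["param_0", "param_1", "param_2"],
    "bounds": [[-1.0, 1.0], [-1.0, 1.0], [-1.0, 1.0]],
}

def evaluate_battery_model(params):
    # Placeholder stand-in for running the trained battery model with these
    # parameter values; returns a scalar summary of the predicted voltage response
    p0, p1, p2 = params
    return 3.7 - 0.2 * p0 + 0.05 * p1 * p2

param_values = saltelli.sample(problem, 1024)                 # Sobol sample of the parameter space
Y = np.array([evaluate_battery_model(p) for p in param_values])

Si = sobol.analyze(problem, Y)
print(Si["S1"])   # main effects (one index per parameter)
print(Si["S2"])   # 2-way interaction indices (matrix of parameter pairs)
```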
The third step is calibrating the battery model using the parameters that had the most significant impact on the output voltage. The model calibration in aws-do-pm employs the Unscented Kalman Filter (UKF) method, which is a Bayesian technique for parameter estimation of non-linear system behavior. UKF is commonly applied for guidance, navigation, and control of vehicles, robotic motion planning, and trajectory optimization – all of which represent use cases where real-world data is used to update the control of the system. In our application, we’re using UKF in a similar manner, except this time, we’ll use real-world data to update the model parameters of the L4 Digital Twins. The details on performing the calibration within the aws-do-pm framework are covered in the aws-do-pm documentation, as well as the corresponding technical blog.
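A minimal sketch of a UKF-based parameter update is shown below, using the open-source filterpy package instead of the aws-do-pm implementation. The choice of the calibrated parameters as the filter state, the placeholder measurement function, and the noise settings are illustrative assumptions, not the aws-do-pm configuration.

```python
import numpy as np
from filterpy.kalman import MerweScaledSigmaPoints, UnscentedKalmanFilter

n_params = 3            # number of calibrated parameters selected by the sensitivity analysis
n_measurements = 50     # number of voltage samples observed along one route

def fx(params, dt):
    # The parameters have no deterministic dynamics between routes;
    # the process noise Q lets them drift as the battery degrades
    return params

def hx(params):
    # Placeholder measurement function: evaluate the battery model with these
    # parameter values and return the predicted voltage at each sample point
    t = np.linspace(0.0, 1.0, n_measurements)
    return params[0] + params[1] * t + params[2] * t**2

points = MerweScaledSigmaPoints(n=n_params, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=n_params, dim_z=n_measurements,
                            dt=1.0, fx=fx, hx=hx, points=points)
ukf.x = np.array([3.7, -0.5, -0.1])           # parameters from the initial calibration (illustrative)
ukf.P *= 0.1                                   # prior parameter uncertainty
ukf.Q = np.eye(n_params) * 1e-4                # process noise: how fast parameters may drift
ukf.R = np.eye(n_measurements) * 0.01          # measurement noise on the voltage samples

measured_voltage = np.linspace(3.6, 3.0, n_measurements)   # placeholder observed discharge trace
ukf.predict()
ukf.update(measured_voltage)                   # Bayesian update of the calibrated parameters
calibrated_params, parameter_covariance = ukf.x, ukf.P
```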
In-production deployment of L4 EV battery digital twin
Now that we have a trained and calibrated model of a new battery for each of the vehicles, we deploy the models into production. As shown in the architecture diagram below, this solution is created using AWS IoT SiteWise and AWS IoT TwinMaker, and builds on the solution developed for the L3 Predictive level.
The vehicle data, including trip distance, trip duration, average speed, average load (weight), and additional parameters, are collected and stored using AWS IoT SiteWise. Historical maintenance data and upcoming scheduled maintenance activities are generated in AWS IoT Core and stored in Amazon Timestream. AWS IoT TwinMaker can access the time series data stored in AWS IoT SiteWise through the built-in AWS IoT SiteWise connector and the maintenance data via a custom data connector for Timestream. For the predictive modeling, we export the EV data to Amazon Simple Storage Service (Amazon S3) to generate a dataset in CSV format, from where it is picked up by aws-do-pm.
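A minimal sketch of this export step is shown below, assuming the trip telemetry has already been read into a pandas DataFrame. The bucket name and key layout are hypothetical and would be adapted to your own environment.

```python
import io
import boto3
import pandas as pd

s3 = boto3.client("s3")
BUCKET = "ev-digital-twin-data"   # hypothetical bucket monitored by aws-do-pm

def export_trip_to_s3(trip_df: pd.DataFrame, vehicle_id: str, route_id: int) -> str:
    """Write one trip's telemetry to S3 as CSV so it can be picked up by aws-do-pm."""
    key = f"trips/{vehicle_id}/route_{route_id:04d}.csv"
    buffer = io.StringIO()
    trip_df.to_csv(buffer, index=False)
    s3.put_object(Bucket=BUCKET, Key=key, Body=buffer.getvalue())
    return key
```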
The aws-do-pm framework runs a service on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster that is responsible for executing tasks, such as updating individual battery models, and for persisting and synchronizing data across different data stores. We added a custom task that periodically checks for new trip data placed on Amazon S3. This data is used to perform new predictions and update individual battery models as required. The predictions are written back to an S3 bucket and then to AWS IoT SiteWise. From there, they are forwarded to AWS IoT TwinMaker and displayed in the Amazon Managed Grafana dashboard.
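Conceptually, the custom task is a simple polling loop. The sketch below illustrates the idea; the bucket, prefix, polling interval, and the handle_new_trip routine (which would trigger the prediction and model-update tasks) are hypothetical placeholders, not part of aws-do-pm.

```python
import time
import boto3

s3 = boto3.client("s3")
BUCKET = "ev-digital-twin-data"   # hypothetical bucket, matching the export sketch above
PREFIX = "trips/"
POLL_SECONDS = 300

def handle_new_trip(key: str) -> None:
    # Placeholder: run a prediction for this trip and, if the error exceeds
    # the threshold, trigger a model-update task
    print(f"processing {key}")

def poll_for_new_trips() -> None:
    """Periodically look for new trip CSVs on S3 and process each one exactly once."""
    processed_keys: set[str] = set()
    while True:
        response = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
        for obj in response.get("Contents", []):
            if obj["Key"] not in processed_keys:
                handle_new_trip(obj["Key"])
                processed_keys.add(obj["Key"])
        time.sleep(POLL_SECONDS)
```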
We simulated in-production real-world operations by having the vehicles “drive” the routes as per the synthetic datasets generated earlier. We then used the calibrated model of the new EV battery in the predict-measure-recalibrate loop described earlier. In this manner, the EV battery model for each vehicle evolves over time, with different model parameters being estimated based on the routes driven. For example, the figures below show the model error calculated between the model prediction and the measured voltage at three different points over the course of many routes. The error is calculated at the end of each route (blue dot), and if it exceeds the threshold, a model update is triggered (red dot). The error for the non-updated model prediction (blue line) drifts higher, whereas the updated model prediction stays near or below the threshold.
The complete voltage versus time history for a single route of the above figures is shown below. The left figure shows the non-updated model prediction (red line) and prediction uncertainty band (red shaded area), which is well above the actual observed data (dashed line). The right figure shows the updated model prediction (blue line) and uncertainty band (blue shaded area) overlapping the observation data (dashed line).
This example demonstrates the value of the L4 Living Digital Twin, as the behavior of the degrading EV battery is correctly modeled over time. The lower voltage output from the battery, and the resulting lower battery capacity, directly translates into shorter ranges for the EV as the battery ages. Range anxiety (i.e., the fear of being stranded due to a dead EV battery) and reduced battery capacity are key drivers of EV market value and of research in the automotive industry. In a future blog, we’ll extend the concepts in this example to show how to use an L4 Living Digital Twin to calculate EV remaining range (to address range anxiety) and battery State of Health (SoH), which determines the value of the EV battery (and therefore the EV) on the second-hand market.
Summary
In this blog, we described the L4 Living level by walking through the use case of point-by-point prediction of in-route voltage for an EV battery as it degrades over time. We leveraged the aws-do-pm framework published by Iankoulski, Balasubramaniam, and Rajagopalan and showed how to integrate their example EV fleet model with AWS IoT TwinMaker. In prior blogs, we described the L1 Descriptive, the L2 Informative, and the L3 Predictive levels. At AWS, we’re excited to work with customers as they embark on their Digital Twin journey across all four Digital Twin levels, and we encourage you to learn more about our new AWS IoT TwinMaker service on our website, as well as our open-source aws-do-pm framework.
About the authors
Dr. Adam Rasheed is the Head of Autonomous Computing at AWS, where he is developing new markets for HPC-ML workflows for autonomous systems. He has 25+ years of experience in mid-stage technology development spanning both industrial and digital domains, including 10+ years developing digital twins in the aviation, energy, oil & gas, and renewables industries. Dr. Rasheed obtained his Ph.D. from Caltech, where he studied experimental hypervelocity aerothermodynamics (orbital reentry heating). Recognized by MIT Technology Review Magazine as one of the “World’s Top 35 Innovators”, he was also awarded the AIAA Lawrence Sperry Award, an industry award for early career contributions in aeronautics. He has 32+ issued patents and 125+ technical publications relating to industrial analytics, operations optimization, artificial lift, pulse detonation, hypersonics, shock-wave induced mixing, space medicine, and innovation.
Dr. David Sauerwein is a Data Scientist at AWS Professional Services, where he enables customers on their AI/ML journey on the AWS cloud. David focuses on forecasting, digital twins and quantum computation. He has a PhD in quantum information theory.
Seibou Gounteni is a Specialist Solutions Architect for IoT at Amazon Web Services (AWS). He helps customers architect, develop, and operate scalable and highly innovative solutions using the depth and breadth of AWS platform capabilities to deliver measurable business outcomes. Seibou is an instrumentation engineer with over 10 years of experience in digital platforms, smart manufacturing, energy management, industrial automation, and IT/OT systems across a diverse range of industries.
Pablo Hermoso Moreno is a Data Scientist in the AWS Professional Services Team. He works with clients across industries, using Machine Learning to tell stories with data and reach more informed engineering decisions faster. Pablo’s background is in Aerospace Engineering, and having worked in the motorsport industry, he has an interest in bridging physics and domain expertise with ML. In his spare time, he enjoys rowing and playing guitar.