AWS HPC Blog

Building a Scalable Predictive Modeling Framework in AWS – Part 2

In the first part of this blog series, we introduced the aws-do-pm framework for building predictive models at scale in AWS. In this blog, we showcase a sample application for predicting the life of batteries in a fleet of electric vehicles, using the aws-do-pm framework.

Prerequisites

To run this example, you need an AWS account and Docker installed. For more details on the setup and how to run the example, refer to the documentation here.

Demonstrating a sample application using aws-do-pm

A common use case is managing a fleet of electric vehicles (EVs) for commercial operations. The battery is the most critical component of an EV, so building models that predict battery performance is of utmost importance. When the batteries are new, it can be assumed that the performance of the EVs in the fleet is similar. However, individual batteries degrade differently over time, leading to divergence in EV performance. Thus, a battery’s model needs to be continuously updated to track the degradation of the battery over its lifetime. We have provided a complete demo in aws-do-pm to:

  • generate synthetic data
  • build models
  • update models
  • perform sensitivity analysis

The code for the demo has been released and documented in https://github.com/aws-samples/aws-do-pm. After completing the initial setup in the documentation, you can run the entire demo using the following command from the base folder in the repository:

./ev-fleet-sequential-scale-demo

In the subsequent sections, we will discuss in detail the following aspects of the aws-do-pm framework:

  • data generation and registration
  • model building
  • model prediction

Data Generation and Registration

To provide the most flexibility, users can choose the number of vehicles they would like to model in the data generation module. The module uses a phenomenological degradation model to generate the data for each vehicle. All vehicles start with the “ideal” battery. Each vehicle travels ~100 routes, and every route is assigned a specific distance, speed, load, rolling friction, and drag. The damage accumulated by a battery depends on all of these inputs. The battery voltage, as a function of time in each trip, is calculated from the route inputs and the accumulated damage.
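
The exact degradation model is part of the demo code in the repository; purely as an illustration, a toy generator along the lines below shows how per-route inputs and accumulated damage could combine into a trip voltage curve. All functional forms, parameter ranges, and field names here are assumptions for the sketch, not the values used by aws-do-pm.

```python
import numpy as np

def generate_route_data(n_vehicles=100, n_routes=100, n_samples=50, seed=42):
    """Toy synthetic-data generator (illustrative only, not the aws-do-pm code)."""
    rng = np.random.default_rng(seed)
    records = []
    for vid in range(n_vehicles):
        damage = 0.0  # accumulated battery damage for this vehicle
        for rid in range(n_routes):
            distance = rng.uniform(10, 100)      # km
            speed = rng.uniform(30, 90)          # km/h
            load = rng.uniform(200, 800)         # kg
            rolling = rng.uniform(0.010, 0.015)  # rolling-friction coefficient
            drag = rng.uniform(0.25, 0.35)       # drag coefficient
            # hypothetical damage increment driven by all of the route inputs
            damage += 1e-5 * distance * (load / 500.0) * (1.0 + drag + rolling * speed)
            t = np.linspace(0.0, 1.0, n_samples)  # normalized time within the trip
            # nominal discharge curve, depressed by the accumulated damage
            voltage = (1.0 - 0.2 * t) * (1.0 - damage)
            records.append({"vehicle_id": vid, "route_id": rid,
                            "distance": distance, "speed": speed, "load": load,
                            "trip_voltage": voltage})
    return records
```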

The demo generates a simulated dataset that mimics the real-world behavior of 100 vehicles over 100 routes. The data is organized by vehicle ID and route ID. The “train_data” folder, created during the execution of the demo, contains the inputs and outputs required to build a model of pristine battery performance across all the vehicles. The generated data is automatically registered in the aws-do-pm framework. Any external dataset can be registered for use in aws-do-pm using the following command:

pm data register <local_path> <main_file> ['description']

For example:

pm data register /project/data ev_data.json "EV data for model training"

Model Building

In this section, we will briefly discuss the model building process. Battery models are usually developed with test data generated in a laboratory. Numerous tests are performed under different loading conditions to generate a multitude of performance profiles (i.e., voltage, current, and power curves). A model is then built with this large dataset. The model can be physics-informed or empirical in nature. Once built, the model has a specific structure (form) and a set of parameters associated with it. This model form and the associated parameters represent the physics of the system. The same model is then used for all individual batteries in the fleet. An alternative approach is to use data from real-world operations for model building. The data from the vehicles, collected while the battery is pristine, can be used to model ideal battery behavior. In our example, we will use the data generated from the first trip of the vehicles to model the ideal battery, using a dense neural network.

Building a Neural Network Model for “ideal” battery performance

A dense neural network is used to model the trip voltage of the battery, with trip load, trip velocity, and trip distance as inputs. The neural network, built using PyTorch and consisting of 5 hidden layers, was trained with data from the first trip of the 100 vehicles generated above. It is assumed that all batteries are in pristine condition at the start of operation. Therefore, all individual batteries have the same model at the time of their initial operation.

The fully-connected layers in the neural network use ReLU (Rectified Linear Unit) activation functions to enable good regression performance. The dropout layers were included so that model uncertainty calculations can be built in for downstream processes. 80% of the data was used for training and 20% was used for testing.
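
The exact layer widths and dropout rate are not spelled out above, so the PyTorch sketch below uses assumed values for both; it is a minimal illustration of a 5-hidden-layer dense network with ReLU activations and dropout, not the demo’s implementation.

```python
import torch
import torch.nn as nn

class TripVoltageNet(nn.Module):
    """Dense network: trip load, trip velocity, trip distance -> trip voltage.
    The hidden width (64) and dropout probability (0.2) are assumptions."""
    def __init__(self, n_inputs=3, n_outputs=1, hidden=64, p_drop=0.2):
        super().__init__()
        layers, width = [], n_inputs
        for _ in range(5):  # 5 hidden layers, each with ReLU and dropout
            layers += [nn.Linear(width, hidden), nn.ReLU(), nn.Dropout(p_drop)]
            width = hidden
        layers.append(nn.Linear(width, n_outputs))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```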

A schematic of the neural network is shown below:

The model is of the form shown below, mapping the trip inputs to the trip voltage:

trip_voltage = f(trip_load, trip_velocity, trip_distance)

Note that even though the data generation used drag and rolling resistance as inputs, the modeler would not have any information about the specific drag or rolling resistance for any route, so they cannot be included in the model. This is similar to most real-world situations, and we have designed this example to mimic the real-world use case as closely as possible.

The model was trained on an Amazon EC2 g4dn.2xlarge instance. For larger datasets and more complex models, the same code (aws-do-pm) can be deployed on larger, more powerful instances such as P4d. The command for building an ANN model in aws-do-pm is shown below. The neural network model is built using the registered data referenced by its data_id.

pm model build <data_id>

The model build performance plots from aws-do-pm are shown below. As seen in the plots, the model performs equally well on both training and test data. The loss-vs-epochs plot shows that the training error has been reduced to near its minimum while the validation error remains low, which is critical for ensuring that the model has not been overfit. The actual-vs-predicted plots for both training and test datasets show that, over the entire domain of 100 vehicles, the model performs adequately, with less than 5% maximum error.
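
The pm model build command encapsulates the training loop. For readers who want to see what sits behind plots like these, a simplified sketch follows: it performs the 80/20 train/test split mentioned earlier, trains with an assumed optimizer, batch size, and epoch count, and computes the maximum relative error on the held-out data. It is illustrative only and not the aws-do-pm training code.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

def train_and_evaluate(model, X, y, epochs=200, lr=1e-3):
    """Train on 80% of the data, then report max relative error on the other 20%."""
    ds = TensorDataset(X, y)
    n_train = int(0.8 * len(ds))
    train_ds, test_ds = random_split(ds, [n_train, len(ds) - n_train])
    train_dl = DataLoader(train_ds, batch_size=64, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        model.train()
        for xb, yb in train_dl:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
    # maximum relative error on the held-out split (cf. actual-vs-predicted plots)
    model.eval()
    with torch.no_grad():
        X_test = torch.stack([test_ds[i][0] for i in range(len(test_ds))])
        y_test = torch.stack([test_ds[i][1] for i in range(len(test_ds))])
        rel_err = ((model(X_test) - y_test).abs() / y_test.abs().clamp_min(1e-6)).max()
    return rel_err.item()
```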

A model built in the aws-do-pm framework is automatically registered. However, you can also register an external model in the aws-do-pm framework as shown below:

pm model register <folder_path> <rel_model_path> ['description']

The <rel_model_path> represents the executable version of the model while the <folder_path> contains all its dependencies.

Model Prediction

After a model is built and registered, it can be used to predict on a registered dataset using the following command:

pm model predict <model_id> <data_id>

A sample prediction (trip voltage vs index), in normalized units, for one vehicle traveling over one route, along with the prediction uncertainty is shown below.
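
The uncertainty band comes from the dropout layers described in the model building section. A common way to produce such a band is Monte Carlo dropout, sketched below; treat this as an assumed, illustrative approach rather than the exact mechanism implemented in aws-do-pm.

```python
import torch

def predict_with_uncertainty(model, x, n_samples=100):
    """Monte Carlo dropout: repeat forward passes with dropout active and
    summarize their spread as a mean curve plus an uncertainty band."""
    model.train()  # keep dropout layers active at inference time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)  # e.g. plot mean ± 2*std
```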

Clean-up

To run the examples in the third blog of this series, the models built here need to be retained. In such a case, the clean-up can be done after running the examples of the third blog. Otherwise, to free up the resources created by this example, follow the instructions provided here.

Summary

In this blog, we have shown how to use the aws-do-pm framework for predictive modeling at scale. We demonstrated the scalability of the framework using a battery discharge model of an electric vehicle. The electric vehicle demo included synthetic data generation, model building, and prediction with the built model. In the next blog, we will use this dataset and these models as the basis to showcase the model updating and sensitivity analysis capabilities of the aws-do-pm framework.

Alex Iankoulski

Alex Iankoulski is a full-stack software and infrastructure architect who likes to do deep, hands-on work. He is currently a Principal Solutions Architect for Self-managed Machine Learning at AWS. In his role he focuses on helping customers with containerization and orchestration of ML and AI workloads on container-powered AWS services. He is also the author of the open source [Do framework](https://bit.ly/do-framework) and a Docker captain who loves applying container technologies to accelerate the pace of innovation while solving the world's biggest challenges. During the past 10 years, Alex has worked on combating climate change, democratizing AI and ML, making travel safer, healthcare better, and energy smarter.

Mahadevan Balasubramaniam

Mahadevan Balasubramaniam has 24 years of experience in physics-infused deep learning and building digital twins at scale for physical assets such as aircraft engines, industrial gas turbines, and industrial process platforms. At AWS, he is a WWSO HPC Principal SA developing solutions for large-scale HPC+AI and ML frameworks. Prior to joining AWS, Mahadevan was first at GE, where he focused on probabilistic modeling, hardware design, anomaly detection, and remaining useful life predictions for a variety of applications across aviation, energy, and oil & gas. Mahadevan then joined a startup as a Senior Principal Data Scientist, where he focused on deep learning-based solar energy forecasting for managing battery discharge in PV-battery installations. Dr. Balasubramaniam obtained his Ph.D. from MIT in 2001, where he studied computational geometry for automated toolpath generation for 5-axis NC machining.

Venkatesh Rajagopalan

Venkatesh Rajagopalan is a Principal Solutions Architect for Autonomous Computing. He has ~13 years of industrial experience in research and product development. In his current role, he develops solutions for problems in large-scale machine learning and autonomous systems. Prior to joining AWS, Venkatesh was the Senior Director of Data Science with GE’s Oil & Gas business, where he led an Industrial AI team responsible for building hybrid analytics (physics + deep learning) products focused on production optimization in large fields, anomaly detection, and remaining useful life estimation for the oil & gas industry. Prior to this, Venkatesh was a Senior Engineer with the Prognostic Systems Lab at the GE Research Center, India. His research focused on developing models and methods for failure prediction in critical industrial assets like gas turbines, electrical motors, and aircraft engines. He was a key contributor to the development of the Digital Twin platform at GE Research. The digital twins that he developed are being used to monitor the health and performance of a fleet of 7,000+ aircraft engines and 300+ gas turbines. Venkatesh has a PhD in Electrical Engineering from the Pennsylvania State University. He has been granted 12 patents and has published 17 papers with more than 600 citations. His expertise includes signal processing, estimation theory, optimization, and machine learning.