Amazon Supply Chain and Logistics
Maximize efficiency for replenishable deliveries with AWS Last Mile Replenishment Scheduler
Last mile delivery is typically one of the most expensive distribution legs in many industries, such as e-commerce and postal networks. Numerous stops, even more numerous items to deliver (e.g., Amazon orders), and complex routing to minimize overall distances are common challenges in maximizing operational efficiency.
However, there is another subset of challenges in last mile delivery – maintaining critical levels of replenishable products at the consumption sites. For example, think about a network of gas stations. These need replenishment for different types of gasoline depending on their consumption. Topping up too often and in small quantities is an expensive proposition, while waiting until the last minute when tanks are almost empty is risky because of stock-outs and losing demand to competition.
Running out of stock is even more dangerous for products like medical consumables or vaccines. Deciding when and in what quantities to replenish stock, and overlaying this with transport capacity and route design to minimize logistics costs, is a nontrivial and highly combinatorial challenge for schedulers.
This blog post describes an AWS Last Mile Replenishment Scheduler to predict product usage at the consumption sites and decide on the optimal product volume to allocate to delivery vehicles, with the goal of preventing product stock-outs and optimizing delivery routes to maximize logistics efficiency.
Three modules of Last Mile Replenishment Scheduler (LMRS)
LMRS consists of three key modules: (1) a Demand Forecaster to predict short-term product demand at points of consumption; (2) a Simulator to generate potential customer orders and decide the delivery time window, delivery amount, and delivery priority for each order; and (3) an Agent to schedule the resources and plan the trips to minimize the risk of stock-outs and maximize logistics efficiency.
The following steps take place every day:
- The Demand Forecaster predicts the consumption rate of each customer for the next few days, based on recent consumption patterns, using machine learning (ML) algorithms and proven ML features.
- Using the customers’ forecasted consumption data and the estimation of stock-out risk, the Simulator generates orders for the next five days. For each order, the Simulator decides the delivery time window, range of delivery quantity, and delivery priority.
- The Agent takes the generated orders as input and plans the delivery routes for the next five days.
- The Agent returns to the Simulator only the next-day trips and discards the trips for the remaining four days. Because the Agent replans the next five days’ trips dynamically each day, we do not need to keep the trips for the following four days; they will be replanned the next day in any case. This is similar to playing chess: at each step, we look ahead N moves, but we make only one actual move and reevaluate after the opponent’s move.
- Finally, once the delivery plan is confirmed, the Simulator consumes the first day’s trips and deliveries and correspondingly updates the internal simulated stock levels at the customers.
For the best outcome, it makes sense to run the Simulator and Agent feedback loop several times with different parameters, e.g., with different forecast probabilities, and assess their impact on the daily delivery plan to settle on the optimal one.
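The daily cycle above can be sketched in Python. The module interfaces used here (predict, generate_orders, plan, apply_deliveries) are hypothetical names chosen only to illustrate the control flow, not the actual LMRS APIs:

```python
def run_daily_cycle(forecaster, simulator, agent, horizon_days=5):
    """One scheduling cycle: forecast, simulate orders, plan, keep day 1 only."""
    # 1. Predict per-customer consumption over the planning horizon.
    forecasts = forecaster.predict(horizon_days)
    # 2. Generate candidate orders (time window, quantity range, priority).
    orders = simulator.generate_orders(forecasts)
    # 3. Plan delivery routes over the full horizon.
    trips = agent.plan(orders, horizon_days)
    # 4. Keep only the next-day trips; later days are replanned tomorrow.
    next_day_trips = [t for t in trips if t["day"] == 1]
    # 5. Execute day 1 and update the simulated stock levels.
    simulator.apply_deliveries(next_day_trips)
    return next_day_trips
```

Running this loop with different Simulator parameters (e.g., different forecast probabilities) and comparing the resulting day-1 plans implements the feedback loop described above.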
Demand Forecaster

The Demand Forecaster uses a time-series forecasting approach. It is trained on historical consumption data to learn the consumption pattern of each customer. To make a prediction, it uses the past 4–6 weeks of consumption data as context input and predicts the next week of consumption.
We use Amazon Forecast to find the best time-series model. Amazon Forecast automatically inspects the data, identifies the key attributes, and selects the right algorithm by training different model types and selecting the one with the best forecast properties. Time-series forecast models are able to model and forecast temporal structures like seasonality, and more complex models learn and transfer insights gained from related time series like similar products or customer types.
Amazon Forecast contains algorithms that either train one model for each series or train a single model jointly over all time series. If the model is trained jointly over all time series, the model finds groupings of time series that behave similarly and uses information from each group to predict the members of the group. More advanced algorithms also allow the addition of item metadata in addition to the time series values; for example, product features or types. This allows the model to learn from this metadata and to find hidden connections between the series. We use these advanced features to adjust our forecast to individual consumption patterns as well as patterns found in groups.
Once the model is trained, we can predict future values. Depending on the model type, the prediction can be a single point or a distribution of values. Having a model that predicts a distribution allows us to represent the uncertainty in an estimate and define a confidence band for a prediction. For example, using a 95 percent prediction interval, we can be 95 percent confident that the next new observation will fall within this range.
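To illustrate the prediction-interval idea (this is not Amazon Forecast's internal method; the service returns forecast quantiles directly), a simple empirical interval can be computed from sampled forecast values:

```python
def prediction_interval(samples, confidence=0.95):
    """Empirical prediction interval from sampled forecast values."""
    xs = sorted(samples)
    tail = (1.0 - confidence) / 2.0   # probability mass in each tail
    k = int(tail * (len(xs) - 1))     # points to trim from each tail
    return xs[k], xs[len(xs) - 1 - k]
```

With a 95 percent interval, roughly 95 percent of future observations are expected to fall between the returned bounds, which is what lets the Simulator quantify stock-out risk.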
To simplify the work with Amazon Forecast even more, AWS Solutions provides a one-click deployment using an AWS CloudFormation template. The solution automates the work to generate, test, compare, and iterate on Amazon Forecast predictions and automatically generates visualization dashboards to inspect the results.
Simulator

The Simulator takes the next five days of predicted consumption data and generates potential orders for each customer. For each order, the Simulator decides on the delivery time window, delivery quantity range, and delivery priority (penalty).
Deciding on the delivery time window
The delivery time window of an order is first defined as [t_e, t_h], where t_e is the earliest time to deliver to a customer, and t_h is the latest time to deliver to a customer (i.e., the hit time). We derive t_e and t_h by looking up the corresponding product stock levels:

S(t_e) = C − v and S(t_h) = L_hit

where S(t_e) and S(t_h) are the product stock levels at t_e and t_h, respectively. L_hit is the hit level of the customer’s stock for risk control, meaning the stock should not drop below this level. C is the storage capacity of the customer, and v is a decision variable between zero and the maximum top-up volume. For example, we can try different values of v and see which one produces the best outcome.

Note: for customers with large storage capacity, we change the hit level to L_hit = C − V_FTL, where V_FTL is the volume of a full truckload. By making this modification, we force delivery to large-storage customers as early as possible, as long as we can deliver a full truckload (FTL).

Finally, the delivery time window is refined as the overlap between [t_e, t_h] and the customer’s access hours, e.g., opening times.
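A minimal sketch of the window computation, assuming a constant per-day consumption rate; the variable names (stock, rate, capacity, hit level, top-up volume) are illustrative:

```python
def delivery_window(stock, rate, capacity, hit_level, top_up):
    """Earliest and latest delivery day for one customer, assuming the
    stock drains at a constant `rate` (units per day)."""
    # Latest time: the day stock would drop to the hit (safety) level.
    t_hit = (stock - hit_level) / rate
    # Earliest time: the first day there is room in storage for `top_up`.
    t_earliest = max(0.0, (stock - (capacity - top_up)) / rate)
    return t_earliest, t_hit
```

In practice the returned interval would then be intersected with the customer's access hours, as described above.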
Deciding on delivery quantity
For an order, the quantity (e.g., weight or volume) that can be delivered to the customer is defined as a range [q_min, q_max], where q_max is the maximum amount that can be delivered to the customer on day d, and q_min is the minimum amount. In our method, q_min depends on when the customer will stock out. We want q_min to be large when the customer will stock out immediately on the next day, meaning we want to deliver as much as possible because the customer will run out of stock soon. However, we allow q_min to be low when the customer will stock out a couple of days later, meaning we do not necessarily need to deliver a lot if we project a later stock-out. Therefore, q_min can be defined as:

q_min = q_max · max(0, 1 − α · (n − 1)) if n ≤ N, and q_min = 0 if n > N

where α and N are hyperparameters that can be tuned and decided based on experiments. For example, we can try different values of α and N and check which values produce the best results on the key metric (e.g., logistics efficiency). When n > N, the dry-out period is far enough away that we can ignore it in our volume decision. Here, n is the number of days between the current time and the dry-out time. For example, if we let α = 0.5 and N = 3, we will have:

- q_min = q_max if the customer stocks out on the next day (n = 1, immediately)
- q_min = 0.5 · q_max if the customer stocks out on the second day (n = 2)
- q_min = 0 if the customer stocks out on the third day (n = 3)

During the optimization, the transport will first try to deliver at least q_min for each order, and will deliver more than q_min if the vehicle has product left over but cannot deliver another order for any reason. In all cases, the final delivered quantity stays within [q_min, q_max].
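One concrete instantiation of the minimum-quantity rule, assuming a linear decay in the days to stock-out; the functional form and the default hyperparameter values here are illustrative placeholders, to be tuned experimentally:

```python
def min_delivery_quantity(q_max, days_to_stockout, alpha=0.5, horizon=3):
    """Minimum delivery quantity q_min, decaying linearly as the projected
    stock-out moves further into the future. `alpha` and `horizon` are the
    tunable hyperparameters; both defaults are illustrative."""
    if days_to_stockout > horizon:
        return 0.0  # stock-out is far away; no minimum delivery required
    return q_max * max(0.0, 1.0 - alpha * (days_to_stockout - 1))
```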
Deciding on delivery priority
The Simulator will assign a priority to each order according to how soon the corresponding customer will stock out. The priority is represented by a penalty by which the algorithm is penalized for missing that order. As an example, we use a Gaussian kernel to model the penalty:

p(h) = A · exp(−h² / (2σ²))

where h is the number of hours left between the current time and the projected stock-out event, A is the maximum penalty, and σ controls how quickly the penalty decays. From penalty decay diagrams (varying the parameters A and σ to show the difference in penalty reduction), we can see that an order whose customer will stock out soon carries a much higher penalty than an order whose customer will stock out a couple of days later.
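A sketch of the Gaussian-kernel penalty; the amplitude and bandwidth defaults below are placeholder values to be tuned per deployment:

```python
import math

def order_penalty(hours_to_stockout, amplitude=1000.0, sigma=24.0):
    """Gaussian-kernel penalty: highest for imminent stock-outs, decaying
    smoothly as the projected stock-out moves further away.
    `amplitude` and `sigma` (hours) are illustrative tuning parameters."""
    return amplitude * math.exp(-hours_to_stockout ** 2 / (2.0 * sigma ** 2))
```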
Once we decide on the delivery time window, delivery quantity range, and delivery priority for each order, we include the order in the Simulator’s output if its delivery time window overlaps with the next five days.
Agent

The Agent takes the generated orders as input and plans delivery routes for the next five days. The essential goal of trip planning is to minimize, subject to a set of constraints, the objective function below:

minimize D + Σ_{i=1..M} p_i · x_i

where D is the total distance (e.g., miles or kilometers) across all trips, and M is the total number of orders generated by the Simulator. p_i is the penalty of order i, and x_i is a binary variable, with 0 meaning order i is delivered and 1 meaning order i is missed in the solution. An order may be missed for reasons such as delivery window constraints or limited resources and capacity on that day. For example, it’s possible that the transport doesn’t have enough vehicles on that day to deliver all the generated orders within the shift duty time.
The Agent aims to find the trips that minimize the objective function within limited search time and computational resources. In other words, it tries to minimize the penalty of missed orders while reducing the total travel distance.
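Evaluating the objective for a candidate plan is a simple sum; this sketch assumes the trip distances and per-order penalties have already been computed:

```python
def objective(trip_distances, order_penalties, missed):
    """Objective value: total distance plus penalties for missed orders.
    `missed[i]` is 1 if order i is left out of the plan, else 0."""
    total_distance = sum(trip_distances)
    missed_penalty = sum(p * x for p, x in zip(order_penalties, missed))
    return total_distance + missed_penalty
```

The solver's job is to search over candidate trip plans and keep the one with the lowest value of this function.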
We provide a set of example constraints below. Depending on the situation, these need to be adjusted to represent the existing operational logic:
- Customer access hours
- Service time, i.e., the time spent at a customer’s site to unload the product
- Unload time
- Transit time
- Setup time, e.g., loading
- Pre- and post-trip time, e.g., an initial trip from the delivery base to the first drop-off point
- Vehicle type, e.g., van, semitrailer
- Capacity, i.e., maximum weight or volume of the product
- Time and distance matrix to get from every possible origin to every possible destination
- Source and product constraints, e.g., which products exist across loading locations, which customer needs specific products, etc.
- Shift constraints, e.g., working hours, duty time of drivers, based on the available drivers
- Other customer- or situation-specific constraints
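The constraints above can be captured in simple data structures handed to the route planner. The field names below are illustrative, not the LMRS schema:

```python
from dataclasses import dataclass, field

@dataclass
class VehicleConstraints:
    """Illustrative container for vehicle-side planning constraints."""
    vehicle_type: str            # e.g. "van", "semitrailer"
    capacity_kg: float           # maximum product weight
    shift_hours: float           # driver duty time
    setup_minutes: float = 30.0  # e.g. loading at the base

@dataclass
class CustomerConstraints:
    """Illustrative container for customer-side planning constraints."""
    access_hours: tuple          # (open_hour, close_hour)
    service_minutes: float       # unload time at the site
    allowed_products: list = field(default_factory=list)

def fits(order_kg: float, vehicle: VehicleConstraints) -> bool:
    """One example feasibility check: does the order fit the vehicle?"""
    return order_kg <= vehicle.capacity_kg
```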
Proof of concept

We ran a proof of concept (PoC) using a dataset from a real industrial-products business: a chemical manufacturer that supplies replenishable industrial products in tanker trucks to other manufacturers. Its business-critical objective is to always maintain minimum product stock levels in customers’ tanks and prevent stock-outs. Accordingly, its business objective is to maximize the volume of product delivered per distance traveled and reduce operational delivery costs.
The PoC proved compliant with the real-world constraints mentioned above, such as customer access hours, vehicle capacity and labor constraints, and source and product constraints, and demonstrated potential for multimillion-dollar savings on last-mile deliveries, or up to a 10 percent improvement in the weight-over-distance metric for the pilot country. Furthermore, the benefits go beyond pure operational cost savings. LMRS enables expansion of the planning horizon to improve scheduling agility and reporting accuracy, as well as to optimize network efficiency by breaking the planning silos between regions.
LMRS also improves the quality of strategic investment decisions by simulating the opex reduction of a capex investment, for example, simulating delivery plans and total delivery cost over a period of time when opening a new truck-loading site or adding a certain number of vehicles. LMRS reduces the effort and increases the accuracy of this type of strategic decision, so strategic improvement initiatives are not bound to the regular strategy planning cycle.
Architecture and integration with customer systems
LMRS is built using a serverless and event-driven architecture on AWS to optimize for cost, reduce the required operational effort to run and maintain LMRS, and focus on delivering the expected business value. In order to enable seamless integration of LMRS with existing scheduling systems and processes, the integration is driven by loosely coupled APIs that allow for on-demand forecasting of customer demand and optimization of delivery schedule and route.
Schedulers execute the optimization workflow from their current scheduling system, which provides the required data input for the optimization logic. With the execution, the latest snapshot of relevant data is collected by the existing scheduling system and sent to a REST API endpoint using Amazon API Gateway with Lambda proxy integration. The API receives the data and stores it in Amazon Simple Storage Service (Amazon S3) for further processing.
Once the data is added to the S3 bucket, an Amazon S3 Event Notification with Amazon EventBridge is used to start a state machine execution. The state machine is based on AWS Step Functions and AWS Lambda. The implemented workflow loads the data from the S3 bucket and preprocesses it to match the required input formats of the Demand Forecaster, Simulator, and optimization Agent. The optimization logic of the solution framework is then executed in sequence to generate the optimized routes and product deliveries as output, which is formatted to be loaded back into the customer scheduling system. Through the AWS Step Functions observability features and Amazon CloudWatch, schedulers have full auditability of the data processing and the generation of optimization results.
The output data is then stored in Amazon S3. Once the data is added, an AWS Lambda function is triggered, again via the corresponding Amazon S3 change event and Amazon EventBridge rule. This function fetches the output data from the S3 bucket and pushes it via a REST API call to the customer scheduling system, which stores the data and loads it into its existing user interface.
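The final push step can be sketched as follows. To keep the example self-contained, it omits the S3 fetch (which would use boto3 inside the Lambda function) and only builds the outbound HTTP request; the endpoint URL and payload shape are hypothetical:

```python
import json
import urllib.request

def build_callback_request(bucket, key, endpoint_url, payload):
    """Build the HTTP request that pushes optimizer output back to the
    customer scheduling system. Bucket, key, and endpoint are placeholders."""
    body = json.dumps({"source": f"s3://{bucket}/{key}", "result": payload})
    return urllib.request.Request(
        endpoint_url,
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Inside the Lambda handler, the returned request would be sent with `urllib.request.urlopen` (or an HTTP client of choice) after the output object is read from the bucket.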
Conclusion

Deciding when to replenish products at their points of consumption is a complex combinatorial problem, and it is challenging for the human mind to evaluate all the possible scenarios that might occur over the course of a few days to arrive at the most cost-efficient delivery option. In this post, we proposed a three-module LMRS to predict replenishable product consumption at customer sites, simulate potential deliveries, and optimize their selection to prevent stock-outs while minimizing delivery costs.
LMRS is applicable to various products and industries, such as gas networks, chemical distribution, construction materials, and pharma and hospital networks. It allows these businesses to maintain minimum stock levels of required products, avoid costly stock-outs, and minimize logistics costs. If you wish to explore how LMRS could optimize your replenishable last-mile deliveries, please reach out to your account manager to set up a discovery workshop with the AWS Supply Chain, Transportation, and Logistics team.