Solving numerical optimization problems like scheduling, routing, and allocation with Amazon SageMaker Processing
July 2023: This post was reviewed for accuracy.
In this post, we discuss solving numerical optimization problems using the very flexible Amazon SageMaker Processing API. Optimization is the process of finding the minimum (or maximum) of a function that depends on some inputs, called design variables. This pattern is relevant to solving business-critical problems such as scheduling, routing, allocation, shape optimization, trajectory optimization, and others. Several commercial and open-source solvers are available for solving such problems. We demonstrate this solution with three popular Python libraries and solvers that are free to use, and provide a sample notebook that shows how to solve these optimization problems using SageMaker Processing.
Solution overview
SageMaker Processing lets data scientists and ML engineers easily run preprocessing, postprocessing, and model evaluation workloads on SageMaker. The SDK provides built-in containers for scikit-learn and Spark, but you can also use your own Docker images without having to conform to any Docker image specification. This gives you maximum flexibility to run any code you want, whether on SageMaker Processing, on AWS container services like Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS), or even on premises. In this post, we take the custom image route: first, we build and push a Docker image that includes several popular optimization packages and solvers, and then we use this image to solve three example problems:
- Minimize the cost of shipping goods through a distribution network
- Schedule shifts for a set of nurses in a hospital
- Find a trajectory for landing the Apollo 11 Lunar Module with the least amount of fuel
We solve each use case using a different interface that connects to a different solver. We complete the following high-level steps (as in the provided example notebook) for each problem:
- Build a Docker container that contains useful Python interfaces (such as Pyomo and PuLP) to optimization solvers (such as GLPK and CBC).
- Build and push the image to a repository in Amazon Elastic Container Registry (Amazon ECR).
- Use the SageMaker Python SDK (from a notebook or elsewhere with the right permissions) to point to the Docker image in Amazon ECR and send in a Python file with the actual optimization problem.
- Monitor the logs in a notebook or Amazon CloudWatch Logs, and obtain any outputs you need in a dedicated output folder that you specify in Amazon Simple Storage Service (Amazon S3).
Schematically, this process looks like the following diagram.
Let’s get started!
Building and pushing a Docker container
Start with the following Dockerfile:
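A minimal sketch follows; the base image and package choices here are illustrative assumptions (a Debian-based conda image so that GLPK and CBC can come from apt and Ipopt from conda-forge), not the exact Dockerfile from the sample notebook:

```dockerfile
# A sketch only: base image and package choices are illustrative
FROM continuumio/miniconda3

# System-level solvers: GLPK and CBC from apt, Ipopt from conda-forge
RUN apt-get update && \
    apt-get install -y --no-install-recommends glpk-utils coinor-cbc && \
    rm -rf /var/lib/apt/lists/*
RUN conda install -y -c conda-forge ipopt

# Python interfaces and metaheuristic libraries used in the examples
RUN pip install pulp pyomo ortools inspyred deap scipy
```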
In this code, we install Python optimization interfaces and libraries such as PuLP, Pyomo, Inspyred, OR-Tools, SciPy, and DEAP. For more information about these packages, see the References section at the end of this post.
We then use the following commands from the notebook to build and push this container to Amazon ECR:
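In outline, the notebook cells run the standard ECR build-and-push sequence; the account ID, Region, and repository name below are placeholders:

```bash
# Placeholders: substitute your own account ID, Region, and repository name
account_id=123456789012
region=us-east-1
repo=sagemaker-opt-container

# Create the repository and authenticate Docker to Amazon ECR
aws ecr create-repository --repository-name $repo
aws ecr get-login-password --region $region | \
    docker login --username AWS --password-stdin $account_id.dkr.ecr.$region.amazonaws.com

# Build, tag, and push the image
docker build -t $repo .
docker tag $repo:latest $account_id.dkr.ecr.$region.amazonaws.com/$repo:latest
docker push $account_id.dkr.ecr.$region.amazonaws.com/$repo:latest
```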
Sample output for these commands looks like the following code:
Using the SageMaker Python SDK to start a job
Typically, we first initialize a SageMaker Processing ScriptProcessor as follows:
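A typical initialization looks like the following sketch; the image URI and instance type are placeholders, with the image URI pointing at the image pushed to Amazon ECR above:

```python
from sagemaker import get_execution_role
from sagemaker.processing import ScriptProcessor

# image_uri is a placeholder: point it at the image you pushed to Amazon ECR
script_processor = ScriptProcessor(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/sagemaker-opt-container:latest",
    command=["python3"],
    role=get_execution_role(),
    instance_count=1,
    instance_type="ml.m5.xlarge",
)
```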
Then we write a file (for this post, we always use a file called preprocessing.py) and run a processing job on SageMaker as follows:
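A minimal call looks like this sketch; the S3 destination is a placeholder:

```python
from sagemaker.processing import ProcessingOutput

# preprocessing.py contains the optimization model for the current use case;
# the S3 destination below is a placeholder
script_processor.run(
    code="preprocessing.py",
    outputs=[
        ProcessingOutput(
            source="/opt/ml/processing/output",
            destination="s3://<your-bucket>/optimization-output/",
        )
    ],
)
```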
Use case 1: Minimizing the cost of shipping goods through a distribution network
In this use case, American Steel, an Ohio-based steel manufacturing company, produces steel at its two steel mills located at Youngstown and Pittsburgh. The company distributes finished steel to its retail customers through the distribution network of regional and field warehouses.
The network represents the shipment of finished steel from American Steel's two steel mills at Youngstown (node 1) and Pittsburgh (node 2) to its field warehouses at Albany, Houston, Tempe, and Gary (nodes 6, 7, 8, and 9) through three regional warehouses at Cincinnati, Kansas City, and Chicago (nodes 3, 4, and 5). Some field warehouses can also be supplied directly from the steel mills.
The following table presents the minimum and maximum amounts of steel that may be shipped between different cities, along with the cost per 1,000 tons per month of shipping the steel. For example, the shipment from Youngstown to Kansas City is contracted out to a railroad company with a minimum shipping clause of 1,000 tons per month. However, the railroad can’t ship more than 5,000 tons per month due to a shortage of rail cars.
| From node | To node | Cost (per 1,000 tons) | Minimum (tons/month) | Maximum (tons/month) |
| --- | --- | --- | --- | --- |
| Youngstown | Albany | 500 | – | 1000 |
| Youngstown | Cincinnati | 350 | – | 3000 |
| Youngstown | Kansas City | 450 | 1000 | 5000 |
| Youngstown | Chicago | 375 | – | 5000 |
| Pittsburgh | Cincinnati | 350 | – | 2000 |
| Pittsburgh | Kansas City | 450 | 2000 | 3000 |
| Pittsburgh | Chicago | 400 | – | 4000 |
| Pittsburgh | Gary | 450 | – | 2000 |
| Cincinnati | Albany | 350 | 1000 | 5000 |
| Cincinnati | Houston | 550 | – | 6000 |
| Kansas City | Houston | 375 | – | 4000 |
| Kansas City | Tempe | 650 | – | 4000 |
| Chicago | Tempe | 600 | – | 2000 |
| Chicago | Gary | 120 | – | 4000 |
The objective of transshipment problems in general, and of the American Steel problem in particular, is to minimize the cost of shipping goods through the network.
All the nodes have a supply and a demand: demand = 0 for supply nodes, supply = 0 for demand nodes, and supply = demand = 0 for transshipment nodes. The only constraints in the transshipment problem are flow conservation constraints, which simply state that the flow of goods into a node must be greater than or equal to the flow of goods out of that node.
This problem can be formulated as follows:
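In standard minimum-cost network flow notation, with $x_{ij}$ the flow on arc $(i,j)$, $c_{ij}$ its cost, $l_{ij}$ and $u_{ij}$ its bounds, and $s_i$ and $d_i$ the supply and demand at node $i$, the model is:

```latex
\min \sum_{(i,j) \in A} c_{ij} x_{ij}
\quad \text{subject to} \quad
s_i + \sum_{(j,i) \in A} x_{ji} \;\ge\; d_i + \sum_{(i,j) \in A} x_{ij}
\quad \forall i \in N,
\qquad
l_{ij} \le x_{ij} \le u_{ij} \quad \forall (i,j) \in A.
```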
We solve this problem using the PuLP interface with the GLPK solver, launched with script_processor.run.
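A condensed sketch of the model in PuLP follows. The arc data comes from the preceding table, but the node supplies and demands shown are illustrative assumptions, not the figures from the original problem:

```python
from pulp import GLPK, LpMinimize, LpProblem, LpVariable, lpSum

# Arc data from the preceding table:
# (from, to) -> (cost per 1,000 tons, minimum tons/month, maximum tons/month)
arcs = {
    ("Youngstown", "Albany"): (500, 0, 1000),
    ("Youngstown", "Cincinnati"): (350, 0, 3000),
    ("Youngstown", "Kansas City"): (450, 1000, 5000),
    ("Youngstown", "Chicago"): (375, 0, 5000),
    ("Pittsburgh", "Cincinnati"): (350, 0, 2000),
    ("Pittsburgh", "Kansas City"): (450, 2000, 3000),
    ("Pittsburgh", "Chicago"): (400, 0, 4000),
    ("Pittsburgh", "Gary"): (450, 0, 2000),
    ("Cincinnati", "Albany"): (350, 1000, 5000),
    ("Cincinnati", "Houston"): (550, 0, 6000),
    ("Kansas City", "Houston"): (375, 0, 4000),
    ("Kansas City", "Tempe"): (650, 0, 4000),
    ("Chicago", "Tempe"): (600, 0, 2000),
    ("Chicago", "Gary"): (120, 0, 4000),
}

# Net supply per node (positive = supply, negative = demand, 0 = transshipment).
# These figures are illustrative assumptions, not the original problem data.
net_supply = {
    "Youngstown": 10000, "Pittsburgh": 15000,
    "Cincinnati": 0, "Kansas City": 0, "Chicago": 0,
    "Albany": -3000, "Houston": -7000, "Tempe": -4000, "Gary": -6000,
}

prob = LpProblem("American_Steel", LpMinimize)
flow = {
    (i, j): LpVariable(f"flow_{i}_{j}".replace(" ", "_"), lowBound=lo, upBound=hi)
    for (i, j), (cost, lo, hi) in arcs.items()
}
# Objective: total shipping cost (cost figures are per 1,000 tons)
prob += lpSum((cost / 1000.0) * flow[(i, j)] for (i, j), (cost, _, _) in arcs.items())
# Flow conservation: flow in plus net supply covers flow out at every node
for n in net_supply:
    inflow = lpSum(flow[a] for a in arcs if a[1] == n)
    outflow = lpSum(flow[a] for a in arcs if a[0] == n)
    prob += inflow + net_supply[n] >= outflow

prob.solve(GLPK())
for a, var in flow.items():
    print(a, var.value())
```

Note that the conservation constraints use inequality, matching the formulation described earlier: flow into a node must be greater than or equal to flow out.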
Logs from this optimization job provide these solutions:
Use case 2: Scheduling shifts of a set of nurses in a hospital
In the next example, a hospital supervisor must create a schedule for four nurses over a 3-day period, subject to the following conditions:
- Each day is divided into three 8-hour shifts
- Every day, each shift is assigned to a single nurse, and no nurse works more than one shift
- Each nurse is assigned to at least two shifts during the 3-day period
For more information about this scheduling use case, see Employee Scheduling.
This problem can be formulated as follows:
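A minimal sketch of this model with the CP-SAT solver's Python API follows; it encodes exactly the three conditions in the preceding list:

```python
from ortools.sat.python import cp_model

num_nurses, num_days, num_shifts = 4, 3, 3
model = cp_model.CpModel()

# shifts[(n, d, s)] == 1 iff nurse n works shift s on day d
shifts = {(n, d, s): model.NewBoolVar(f"shift_n{n}_d{d}_s{s}")
          for n in range(num_nurses)
          for d in range(num_days)
          for s in range(num_shifts)}

# Each shift on each day is assigned to exactly one nurse
for d in range(num_days):
    for s in range(num_shifts):
        model.Add(sum(shifts[(n, d, s)] for n in range(num_nurses)) == 1)

# No nurse works more than one shift per day
for n in range(num_nurses):
    for d in range(num_days):
        model.Add(sum(shifts[(n, d, s)] for s in range(num_shifts)) <= 1)

# Each nurse works at least two shifts over the 3-day period
for n in range(num_nurses):
    model.Add(sum(shifts[(n, d, s)]
                  for d in range(num_days)
                  for s in range(num_shifts)) >= 2)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for d in range(num_days):
        for s in range(num_shifts):
            for n in range(num_nurses):
                if solver.Value(shifts[(n, d, s)]):
                    print(f"Day {d}, shift {s}: nurse {n}")
```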
We solve this problem using the OR-Tools interface and its CP-SAT solver with script_processor.run. Logs from this optimization job provide these solutions:
Use case 3: Finding a trajectory for landing the Apollo 11 Lunar Module with the least amount of fuel
This example uses Pyomo and a simple model of a rocket to compute a control policy for a soft landing. The parameters used correspond to the descent of the Apollo 11 Lunar Module to the moon on July 20, 1969. For a rocket with mass m in vertical flight at altitude h, a momentum balance yields the following model:
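In the notation defined next, this is the standard vertical-flight momentum balance, with thrust equal to exhaust velocity times propellant mass flow:

```latex
m \frac{d^2 h}{dt^2} = -m g + v_e \dot{m}
```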
In this model, ṁ is the mass flow of propellant and v_e is the velocity of the exhaust relative to the rocket. In this first attempt at modeling and control, we neglect the change in rocket mass due to fuel burn.
Fuel consumption can be calculated as follows:
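Integrating the propellant mass flow over a descent of duration t_f gives:

```latex
\text{fuel consumed} = \int_{0}^{t_f} \dot{m}\, dt
```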
We want to find a trajectory that minimizes fuel consumption:
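In terms of the model above, the soft-landing requirement adds terminal conditions on altitude and velocity:

```latex
\min_{\dot{m}(t) \,\ge\, 0} \int_{0}^{t_f} \dot{m}\, dt
\quad \text{subject to} \quad
m \frac{d^2 h}{dt^2} = -m g + v_e \dot{m},
\qquad h(t_f) = 0, \quad \frac{dh}{dt}(t_f) = 0.
```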
This problem can be formulated as follows:
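A condensed sketch in Pyomo with pyomo.dae follows. The numerical parameters (initial altitude and velocity, descent duration, exhaust velocity, and lander mass) are illustrative placeholders rather than the Apollo 11 values; time is scaled to [0, 1] so the descent duration T appears explicitly in the dynamics:

```python
from pyomo.environ import (ConcreteModel, Var, Constraint, Objective,
                           NonNegativeReals, TransformationFactory,
                           SolverFactory, minimize)
from pyomo.dae import ContinuousSet, DerivativeVar, Integral

m = ConcreteModel()
m.t = ContinuousSet(bounds=(0, 1))   # scaled time
T = 60.0                             # assumed descent duration (s)
g = 1.62                             # lunar gravity (m/s^2)
ve = 3000.0                          # assumed exhaust velocity (m/s)
mass = 10000.0                       # assumed lander mass (kg)

m.h = Var(m.t, within=NonNegativeReals)   # altitude
m.v = Var(m.t)                            # vertical velocity
m.u = Var(m.t, within=NonNegativeReals)   # propellant mass flow
m.dh = DerivativeVar(m.h, wrt=m.t)
m.dv = DerivativeVar(m.v, wrt=m.t)

# Dynamics (time scaled by T): dh/dt = v, mass*dv/dt = -mass*g + ve*u
m.ode_h = Constraint(m.t, rule=lambda m, t: m.dh[t] == T * m.v[t])
m.ode_v = Constraint(m.t, rule=lambda m, t:
                     mass * m.dv[t] == T * (-mass * g + ve * m.u[t]))

# Boundary conditions: assumed initial state; soft landing at final time
m.h[0].fix(1000.0)
m.v[0].fix(-50.0)
m.h[1].fix(0.0)
m.v[1].fix(0.0)

# Objective: total fuel consumption, the integral of u over the trajectory
m.fuel = Integral(m.t, wrt=m.t, rule=lambda m, t: T * m.u[t])
m.obj = Objective(expr=m.fuel, sense=minimize)

# Discretize with collocation and solve with Ipopt
TransformationFactory("dae.collocation").apply_to(m, nfe=50, ncp=3)
SolverFactory("ipopt").solve(m, tee=True)
```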
We use the Pyomo interface and the nonlinear optimization solver Ipopt to solve this continuous-time trajectory optimization problem. Logs from script_processor.run provide the following solution:
Summary
We used various examples, front ends, and solvers to solve numerical optimization problems using SageMaker Processing. Next, try using scipy.optimize, DEAP, or Inspyred to explore other examples. See the references in the next section for documentation and other examples to help solve your own business problems using SageMaker Processing. If you already use SageMaker APIs for your machine learning projects, using SageMaker Processing to run optimization workloads is a natural extension. That said, other AWS compute options, such as AWS Lambda or AWS Fargate, may be a better fit for serverless optimization with some of these open-source libraries, especially if your team has that expertise. Lastly, open-source libraries are provided as is, with minimal support, whereas commercial solvers such as CPLEX and Gurobi are constantly tuned for higher performance and usually come with premium support.
References
- Training APIs: Processing
- PuLP: https://pypi.org/project/PuLP/
- OR-Tools: Get Started Guides
- Pyomo Documentation 5.7.2
- inspyred: Recipes
- Optimization and root finding (scipy.optimize)
- DEAP documentation
- Gurobi: https://www.gurobi.com/
- CPLEX optimizer
- OR-Tools code license
- PuLP code license
- Pyomo code license
About the Author
Shreyas Subramanian is an AI/ML specialist Solutions Architect who helps customers solve their business challenges on the AWS platform using machine learning.