Posted On: Nov 17, 2022

Amazon SageMaker Autopilot now supports batch/offline inference within Amazon SageMaker Studio, so you can run batch predictions on machine learning (ML) models. SageMaker Autopilot automatically builds, trains, and tunes the best ML models based on your data, while allowing you to maintain full control and visibility.

Previously, to perform offline inference on the ML models created by Amazon SageMaker Autopilot, you had to first obtain the candidate's container definitions with the DescribeAutoMLJob API, then create a SageMaker model from those container definitions with the CreateModel API, and finally create a SageMaker transform job with the CreateTransformJob API, which you could then invoke programmatically to obtain batch inferences. Starting today, you can select any of the SageMaker Autopilot models and proceed with batch inference directly within SageMaker Studio. To perform batch predictions, you provide input and output data configurations and create a batch transform job. Upon completion, the transform job outputs the Amazon S3 location of the predictions. You can now seamlessly perform offline inferencing from Amazon SageMaker Studio without having to switch to a programmatic workflow.
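For reference, here is a minimal sketch of that previous programmatic flow using boto3; the job name, model name, S3 URIs, IAM role, and instance type are placeholders you would replace with your own:

```python
import boto3

sm = boto3.client("sagemaker")

# 1. Look up the best candidate's container definitions (DescribeAutoMLJob).
job = sm.describe_auto_ml_job(AutoMLJobName="my-autopilot-job")  # placeholder name
containers = job["BestCandidate"]["InferenceContainers"]

# 2. Create a SageMaker model from those container definitions (CreateModel).
sm.create_model(
    ModelName="my-autopilot-model",
    Containers=containers,
    ExecutionRoleArn="arn:aws:iam::123456789012:role/MySageMakerRole",  # placeholder
)

# 3. Create a batch transform job that writes predictions to S3 (CreateTransformJob).
sm.create_transform_job(
    TransformJobName="my-autopilot-batch-job",
    ModelName="my-autopilot-model",
    TransformInput={
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://my-bucket/batch-input/",  # placeholder input location
            }
        },
        "ContentType": "text/csv",
        "SplitType": "Line",
    },
    TransformOutput={
        "S3OutputPath": "s3://my-bucket/batch-output/",  # placeholder output location
        "AssembleWith": "Line",
    },
    TransformResources={"InstanceType": "ml.m5.xlarge", "InstanceCount": 1},
)
```

With this release, these three steps collapse into a point-and-click workflow in SageMaker Studio.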

To get started, update Amazon SageMaker Studio to the latest release and launch SageMaker Autopilot either from the SageMaker Studio Launcher or through the APIs. To learn how to update Studio, see the documentation.
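If you prefer to launch Autopilot through the APIs, a minimal boto3 sketch follows; the job name, S3 URIs, target column, and IAM role are placeholders:

```python
import boto3

sm = boto3.client("sagemaker")

# Launch an Autopilot experiment (CreateAutoMLJob). Autopilot infers the
# problem type from the data unless you specify one explicitly.
sm.create_auto_ml_job(
    AutoMLJobName="my-autopilot-job",  # placeholder name
    InputDataConfig=[
        {
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": "s3://my-bucket/training-data/",  # placeholder
                }
            },
            "TargetAttributeName": "label",  # placeholder target column
        }
    ],
    OutputDataConfig={"S3OutputPath": "s3://my-bucket/autopilot-output/"},  # placeholder
    RoleArn="arn:aws:iam::123456789012:role/MySageMakerRole",  # placeholder
)
```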

Batch inferencing in SageMaker Autopilot is now available in all AWS Regions where SageMaker Autopilot is available, except the China Regions. To get started, see Creating an Experiment with Autopilot and the SageMaker Autopilot API reference. To learn more, visit the SageMaker Autopilot product page.