Posted On: Oct 13, 2023

Amazon SageMaker Canvas now supports deploying machine learning (ML) models to real-time inferencing endpoints, allowing you to take your ML models to production and drive action based on ML-powered insights. SageMaker Canvas is a no-code workspace that enables analysts and citizen data scientists to generate accurate ML predictions for their business needs.

Until now, SageMaker Canvas provided the ability to evaluate an ML model, generate bulk predictions, and run what-if analyses within its interactive workspace. Starting today, you can also deploy models to SageMaker endpoints for real-time inferencing, making it easier to consume model predictions and drive actions outside the SageMaker Canvas workspace. The ability to deploy ML models directly from SageMaker Canvas eliminates the need to manually export, configure, test, and deploy them into production, thereby reducing complexity and saving time. It also makes operationalizing ML models accessible to individuals without the need to write code.
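
Once deployed, the model is served from a standard SageMaker real-time endpoint and can be invoked from application code like any other endpoint. The following is a minimal sketch using the AWS SDK for Python (boto3); the endpoint name and the CSV payload are illustrative assumptions, and the actual input schema depends on how the model was built and deployed in Canvas.

```python
import boto3

# Minimal sketch: invoke a Canvas-deployed real-time endpoint with boto3.
# The endpoint name and the single CSV row below are illustrative assumptions;
# use the endpoint name and feature columns from your own Canvas deployment.
runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="canvas-sales-forecast-endpoint",  # hypothetical endpoint name
    ContentType="text/csv",                         # tabular input as CSV
    Body="42,North,2023-10-01",                     # one row of feature values (illustrative)
)

# The prediction is returned in the response body as a byte stream.
print(response["Body"].read().decode("utf-8"))
```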

To get started, log in to Amazon SageMaker Canvas to access your existing models or build new ones. Select a model and deploy it with the endpoint configuration appropriate for your workload. SageMaker Inference charges apply to deployed models. The ability to directly deploy ML models from Amazon SageMaker Canvas is now available in all AWS Regions where SageMaker Canvas is supported. To learn more, refer to the SageMaker Canvas product documentation.
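
Because a deployed endpoint incurs inference charges while it is running, you may want to remove endpoints you no longer need. The sketch below shows one way to do this with boto3; the endpoint name is an illustrative assumption, and you can also manage the deployment from the SageMaker console.

```python
import boto3

# Minimal sketch: delete a real-time endpoint that is no longer needed so it
# stops incurring SageMaker Inference charges. The endpoint name is an
# illustrative assumption.
sagemaker = boto3.client("sagemaker")
sagemaker.delete_endpoint(EndpointName="canvas-sales-forecast-endpoint")
```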