This Guidance shows how you can bring your own machine learning (ML) models into Amazon SageMaker Canvas and remove the need to manually change your code, which is often required when building or moving ML models in new environments. In this Guidance, we showcase three patterns for how your teams can use ML models with SageMaker Canvas. One, you can register ML models in the SageMaker model registry, which is a metadata store for ML models. Two, you can directly share models built using Amazon SageMaker Autopilot. Three, you can use Amazon SageMaker JumpStart and import the ML models into SageMaker Canvas. Business analysts can then analyze and generate predictions from any model in Canvas without writing a single line of code.
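Pattern one, registering a model in the SageMaker model registry, can be sketched with the boto3 `create_model_package` API. This is a minimal sketch: the model package group, container image URI, and S3 artifact path below are hypothetical placeholders, and the actual AWS call is shown commented out.

```python
# Sketch: registering a trained model in the SageMaker Model Registry (pattern one)
# so that SageMaker Canvas users can later discover and import it.

def build_model_package_request(group_name, image_uri, model_data_url):
    """Build the boto3 create_model_package request for a new model version."""
    return {
        "ModelPackageGroupName": group_name,
        "ModelPackageDescription": "Model shared with SageMaker Canvas",
        "InferenceSpecification": {
            "Containers": [
                {"Image": image_uri, "ModelDataUrl": model_data_url}
            ],
            "SupportedContentTypes": ["text/csv"],
            "SupportedResponseMIMETypes": ["text/csv"],
        },
        # Approved versions are the ones surfaced for sharing.
        "ModelApprovalStatus": "Approved",
    }

request = build_model_package_request(
    "canvas-demo-models",                                   # hypothetical group
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo:1",  # hypothetical image
    "s3://demo-bucket/model/model.tar.gz",                  # hypothetical artifact
)
# In a real account you would then call:
# import boto3
# boto3.client("sagemaker").create_model_package(**request)
```
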

Please note: [Disclaimer]

Architecture Diagram

[Architecture diagram description]

Download the architecture diagram PDF 

Well-Architected Pillars

The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.

The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.

  • SageMaker Studio, SageMaker Canvas, SageMaker Data Wrangler, and SageMaker JumpStart are purpose-built, fully managed ML services that integrate with many AWS native services. These services help you automate your continuous integration and continuous delivery (CI/CD) and machine learning operations (MLOps) pipelines, improving the productivity of your developers.

    Specifically, SageMaker Studio is an IDE that provides a single web-based visual interface where you access purpose-built tools to perform all ML development steps. From preparing data to building, training, and deploying your ML models, this can improve your team's productivity by up to 10x. Additionally, with SageMaker Data Wrangler, you can prepare and transform the dataset using 300+ built-in data transformations without writing any code. Furthermore, SageMaker Canvas (features of which were moved from SageMaker Autopilot) can retrain and deploy the model using the updated dataset. And, SageMaker JumpStart provides pretrained, open-source models for a wide range of problem types to help you get started with ML. Finally, by using SageMaker Studio, you can share the trained models using any of the three patterns.

    Read the Operational Excellence whitepaper 
  • To deploy this Guidance, you must set up an AWS Identity and Access Management (IAM) user or role with the appropriate permissions to these services. Then, onboard to Amazon SageMaker Domain using IAM and add and remove user profiles. Amazon S3 automatically enables server-side encryption with Amazon S3 managed keys (SSE-S3) for new object uploads. However, you can choose to configure buckets to use server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) instead. For more information, see Using server-side encryption with AWS KMS keys (SSE-KMS).

    These services help you securely control and set up access permissions for users, such as ML teams, developers, and data scientists, to access AWS resources. With IAM, you can generate and download a credential report that lists all users in your account. You can also use the report to audit the effects of credential lifecycle requirements, such as password and access key rotation.

    Read the Security whitepaper 
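The SSE-KMS configuration described above can be sketched with the boto3 `put_bucket_encryption` API. The bucket name and KMS key ARN are hypothetical placeholders, and the actual AWS call is shown commented out so the sketch stays self-contained.

```python
# Sketch: switching an S3 bucket's default encryption from SSE-S3 to SSE-KMS.

def build_sse_kms_config(kms_key_id):
    """Build the put_bucket_encryption configuration for SSE-KMS."""
    return {
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": kms_key_id,
                },
                # S3 Bucket Keys reduce the volume of requests sent to AWS KMS.
                "BucketKeyEnabled": True,
            }
        ]
    }

config = build_sse_kms_config(
    "arn:aws:kms:us-east-1:123456789012:key/example"  # hypothetical key ARN
)
# In a real account:
# import boto3
# boto3.client("s3").put_bucket_encryption(
#     Bucket="my-training-data-bucket",  # hypothetical bucket
#     ServerSideEncryptionConfiguration=config,
# )
```
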
  • Amazon SageMaker helps you align to AWS best practices for reliability through automatic scaling (auto scaling) for your hosted models.

    To monitor data and model quality, you can set alarms using CloudWatch or SageMaker Model Monitor, and send notifications or take actions when those thresholds are met.

    Also, SageMaker endpoints are purpose-built for ML; they are fully managed and integrate with both CloudWatch and SageMaker Model Monitor. These services help you monitor your ML endpoints and APIs so you can configure alarms, act, and scale down resources to zero when there is no traffic.

    Finally, training data is stored in Amazon S3, an object storage service that offers 99.999999999% (11 9's) of durability.

    Read the Reliability whitepaper 
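The alarming described above can be sketched with the boto3 CloudWatch `put_metric_alarm` API against SageMaker's built-in endpoint invocation metrics. The endpoint name, variant name, SNS topic, and threshold values are hypothetical placeholders; the actual AWS call is shown commented out.

```python
# Sketch: a CloudWatch alarm on a SageMaker endpoint's invocation errors, so you
# can be notified or trigger an action when model availability degrades.

def build_endpoint_alarm(endpoint_name, variant_name, sns_topic_arn):
    """Build the put_metric_alarm request for 4XX invocation errors."""
    return {
        "AlarmName": f"{endpoint_name}-invocation-4xx",
        "Namespace": "AWS/SageMaker",
        "MetricName": "Invocation4XXErrors",
        "Dimensions": [
            {"Name": "EndpointName", "Value": endpoint_name},
            {"Name": "VariantName", "Value": variant_name},
        ],
        "Statistic": "Sum",
        "Period": 300,            # evaluate in 5-minute windows
        "EvaluationPeriods": 1,
        "Threshold": 10.0,        # alarm after 10 errors in a window
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],
    }

alarm = build_endpoint_alarm(
    "canvas-shared-endpoint",                        # hypothetical endpoint
    "AllTraffic",
    "arn:aws:sns:us-east-1:123456789012:ml-alerts",  # hypothetical topic
)
# In a real account:
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**alarm)
```
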
  • SageMaker Canvas (with features from SageMaker Autopilot) helps to ensure your workloads are performing efficiently; it is an ML service that allows you to run multiple jobs and experiments. It includes features like explainability (or interpretability) to help you understand the model better. SageMaker can manage automated machine learning (AutoML) tasks using an AutoML job with three notebook-based reports, and you can edit these notebooks as needed.

    With SageMaker JumpStart, you can choose the right instance type and configure autoscaling. SageMaker also provides purpose-built accelerators for training and inference, such as AWS Inferentia and AWS Trainium chips. For model training, scaling is possible via the distribution parameter of the TrainingInput class in the SageMaker Python SDK, which allows you to specify how data is distributed across multiple training instances for a training job. There are three options for the distribution parameter: FullyReplicated, ShardedByS3Key, and ShardedByRecord. SageMaker also supports automatic scaling for your hosted models.

    Read the Performance Efficiency whitepaper 
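The three distribution options above can be sketched as follows. To keep the sketch self-contained, it builds the keyword arguments as a plain dictionary and shows the real `sagemaker.inputs.TrainingInput` call commented out; the S3 prefix is a hypothetical placeholder.

```python
# Sketch: choosing a distribution mode for the TrainingInput class, which
# controls how the training dataset is spread across training instances.

VALID_DISTRIBUTIONS = {"FullyReplicated", "ShardedByS3Key", "ShardedByRecord"}

def build_training_input_kwargs(s3_data, distribution="FullyReplicated"):
    """Build keyword arguments for TrainingInput, validating the distribution."""
    if distribution not in VALID_DISTRIBUTIONS:
        raise ValueError(f"unknown distribution: {distribution}")
    return {"s3_data": s3_data, "distribution": distribution}

# Shard S3 objects across instances so each one reads a disjoint subset:
kwargs = build_training_input_kwargs(
    "s3://demo-bucket/train/",  # hypothetical dataset prefix
    distribution="ShardedByS3Key",
)
# In a real training job:
# from sagemaker.inputs import TrainingInput
# train_input = TrainingInput(**kwargs)
# estimator.fit({"train": train_input})
```

FullyReplicated (the default) copies the full dataset to every instance, while the two sharded modes split it by S3 object or by record, which suits larger datasets.
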
  • By using fully managed, purpose-built SageMaker ML services, you only pay for the resources needed, helping you optimize cost without under- or overprovisioning. If you're not using SageMaker Canvas, you can log out of your session and shut down resources. Also, Managed Spot Training reduces the cost of training models by up to 90% compared to On-Demand Instances. Managed Spot Training uses Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances to run training jobs instead of On-Demand Instances. For running experiments, tests, and development workloads, you can choose between On-Demand and Spot pricing. For steady-state workloads, the fully managed services used in this Guidance allow you to purchase a more flexible pricing model through Savings Plans.

    Read the Cost Optimization whitepaper 
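Managed Spot Training is enabled through a few estimator parameters in the SageMaker Python SDK (`use_spot_instances`, `max_run`, `max_wait`). This sketch builds those kwargs as a plain dictionary, with the actual `Estimator` call commented out; the role ARN and instance values are hypothetical placeholders.

```python
# Sketch: enabling Managed Spot Training on a SageMaker estimator to reduce
# training cost compared to On-Demand Instances.

def build_spot_training_kwargs(max_run_seconds, max_wait_seconds):
    """Build the estimator kwargs that turn on Managed Spot Training."""
    if max_wait_seconds < max_run_seconds:
        # SageMaker requires max_wait >= max_run for Spot jobs, since max_wait
        # covers both training time and time spent waiting for Spot capacity.
        raise ValueError("max_wait must be at least max_run")
    return {
        "use_spot_instances": True,
        "max_run": max_run_seconds,    # cap on training time
        "max_wait": max_wait_seconds,  # cap on training time + Spot waiting
    }

spot_kwargs = build_spot_training_kwargs(3600, 7200)
# In a real training job:
# from sagemaker.estimator import Estimator
# estimator = Estimator(
#     image_uri="...",                              # your training image
#     role="arn:aws:iam::123456789012:role/demo",   # hypothetical role
#     instance_count=1,
#     instance_type="ml.m5.xlarge",
#     **spot_kwargs,
# )
```
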
  • As configured, this Guidance maintains high resource utilization. It uses purpose-built, managed services to automatically scale resources and ensure efficient resource utilization. In addition, it uses SageMaker Model Monitor, which helps you to capture the input, output, and metadata for invocations of the models that you deploy. It also helps you to analyze the data and monitor its quality.

    You can automate your model drift detection using SageMaker Model Monitor and retrain only when necessary, meaning the environmental impacts, especially energy consumption and efficiency of the backend, are optimized, ensuring efficient resource utilization.

    Read the Sustainability whitepaper 
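The drift-detection workflow above starts by capturing endpoint traffic for Model Monitor to analyze. This sketch builds the kwargs for the SageMaker Python SDK's `DataCaptureConfig` as a plain dictionary, with the actual deployment call commented out; the capture S3 prefix and instance type are hypothetical placeholders.

```python
# Sketch: turning on data capture for a deployed endpoint so SageMaker Model
# Monitor can analyze inputs and outputs for drift.

def build_data_capture_kwargs(destination_s3_uri, sampling_percentage=100):
    """Build kwargs for sagemaker.model_monitor.DataCaptureConfig."""
    if not 0 < sampling_percentage <= 100:
        raise ValueError("sampling_percentage must be in (0, 100]")
    return {
        "enable_capture": True,
        "sampling_percentage": sampling_percentage,  # fraction of requests logged
        "destination_s3_uri": destination_s3_uri,    # where captured data lands
    }

capture_kwargs = build_data_capture_kwargs(
    "s3://demo-bucket/capture/",  # hypothetical capture prefix
    sampling_percentage=50,
)
# In a real deployment:
# from sagemaker.model_monitor import DataCaptureConfig
# model.deploy(
#     initial_instance_count=1,
#     instance_type="ml.m5.xlarge",
#     data_capture_config=DataCaptureConfig(**capture_kwargs),
# )
```

Sampling only a share of requests, as shown, keeps storage and analysis overhead proportional to what monitoring actually needs.
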

Implementation Resources

A detailed guide is provided to experiment with and use within your AWS account. Each stage of the Guidance, including deployment, usage, and cleanup, is examined to prepare it for use.

The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.

[Content Type]


This [blog post/e-book/Guidance/sample code] demonstrates how [insert short description].


The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.

References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.
