Amazon SageMaker AI FAQs
General
What is Amazon SageMaker AI?
Amazon SageMaker AI is a unified platform for data, analytics, and AI. Bringing together widely adopted AWS machine learning (ML) and analytics capabilities, the next generation of SageMaker delivers an integrated experience for analytics and AI with unified access to all your data. SageMaker AI allows you to collaborate and build faster from a unified studio using familiar AWS tools for model development, generative AI, data processing, and SQL analytics, accelerated by Amazon Q Developer, the most capable generative AI assistant for software development. Additionally, you can access all your data whether it’s stored in data lakes, data warehouses, third-party or federated data sources, with governance built in to address enterprise security needs.
In which AWS Regions is SageMaker AI available?
For a list of the supported SageMaker AI Regions, please visit the AWS Regional Services page. Also, for more information, see Regional endpoints in the AWS general reference guide.
What is the service availability of SageMaker AI?
SageMaker AI is designed for high availability. There are no maintenance windows or scheduled downtimes. SageMaker AI APIs run in Amazon's proven high-availability data centers, with service stack replication configured across three facilities in each Region to provide fault tolerance in the event of a server failure or Availability Zone outage.
How does SageMaker AI secure my code?
SageMaker AI stores code in ML storage volumes, secured by security groups and optionally encrypted at rest.
What security measures does SageMaker AI have?
SageMaker AI ensures that ML model artifacts and other system artifacts are encrypted in transit and at rest. Requests to the SageMaker AI API and console are made over a secure (SSL) connection. You pass AWS Identity and Access Management roles to SageMaker AI to provide permissions to access resources on your behalf for training and deployment. You can use encrypted Amazon Simple Storage Service (Amazon S3) buckets for model artifacts and data, as well as pass an AWS Key Management Service (AWS KMS) key to SageMaker AI notebooks, training jobs, and endpoints to encrypt the attached ML storage volume. SageMaker AI also supports Amazon Virtual Private Cloud (Amazon VPC) and AWS PrivateLink.
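As a rough sketch of passing a KMS key for encryption at rest, the low-level `CreateTrainingJob` API accepts a key for both the attached ML storage volume and the S3 output location. All ARNs, the bucket name, and the container image URI below are placeholders for your own resources.

```python
def launch_encrypted_training_job(job_name, role_arn, kms_key_arn, bucket):
    """Sketch: start a training job whose ML storage volume and S3 model
    artifacts are encrypted with a customer-managed AWS KMS key.
    All identifiers here are illustrative placeholders."""
    import boto3  # imported here so the sketch reads without the SDK installed

    sm = boto3.client("sagemaker")
    sm.create_training_job(
        TrainingJobName=job_name,
        RoleArn=role_arn,
        AlgorithmSpecification={
            # Placeholder image URI; use your algorithm's ECR image.
            "TrainingImage": "<account>.dkr.ecr.<region>.amazonaws.com/<image>:latest",
            "TrainingInputMode": "File",
        },
        OutputDataConfig={
            "S3OutputPath": f"s3://{bucket}/output/",
            "KmsKeyId": kms_key_arn,  # encrypts model artifacts at rest in S3
        },
        ResourceConfig={
            "InstanceType": "ml.m5.xlarge",
            "InstanceCount": 1,
            "VolumeSizeInGB": 50,
            "VolumeKmsKeyId": kms_key_arn,  # encrypts the attached ML storage volume
        },
        StoppingCondition={"MaxRuntimeInSeconds": 3600},
    )
```

Calling the function requires valid AWS credentials and an IAM role that SageMaker AI can assume.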
Does SageMaker AI use or share models, training data, or algorithms?
SageMaker AI does not use or share customer models, training data, or algorithms. We know that customers care deeply about privacy and data security. That's why AWS gives you ownership and control over your content through simplified, powerful tools that allow you to determine where your content will be stored, secure your content in transit and at rest, and manage your access to AWS services and resources for your users. We also implement technical and physical controls that are designed to prevent unauthorized access to or disclosure of your content. As a customer, you maintain ownership of your content, and you select which AWS services can process, store, and host your content. We do not access your content for any purpose without your consent.
How am I charged for SageMaker AI?
You pay for ML compute, storage, and data processing resources that you use for hosting the notebook, training the model, performing predictions, and logging the outputs. With SageMaker AI, you can select the number and type of instance used for the hosted notebook, training, and model hosting. You pay only for what you use, as you use it; there are no minimum fees and no upfront commitments. For more details, see Amazon SageMaker AI Pricing and the Amazon SageMaker Pricing Calculator.
How can I optimize my SageMaker AI costs, such as detecting and stopping idle resources to avoid unnecessary charges?
There are several best practices that you can adopt to optimize your SageMaker AI resource usage. Some approaches involve configuration optimizations; others involve programmatic solutions. A full guide on this concept, complete with visual tutorials and code samples, can be found in this blog post.
What if I have my own notebook, training, or hosting environment?
SageMaker AI provides a full and complete workflow, but you can continue using your existing tools with SageMaker AI. You can easily transfer the results of each stage in and out of SageMaker AI as your business requirements dictate.
Is R supported with SageMaker AI?
Yes. You can use R within SageMaker AI notebook instances, which include a preinstalled R kernel and the reticulate library. Reticulate offers an R interface for the Amazon SageMaker AI Python SDK, helping ML practitioners build, train, tune, and deploy R models. You can also launch RStudio, an integrated development environment (IDE) for R in Amazon SageMaker Studio.
What is Amazon SageMaker Studio?
Amazon SageMaker Studio provides a single, web-based visual interface where you can perform all ML development steps. SageMaker Studio gives you complete access, control, and visibility into each step required to prepare data and build, train, and deploy models. You can quickly upload data, create new notebooks, train and tune models, move back and forth between steps to adjust experiments, compare results, and deploy models to production all in one place, making you much more productive. All ML development activities including notebooks, experiment management, automatic model creation, debugging and profiling, and model drift detection can be performed within the unified SageMaker Studio visual interface.
How does SageMaker Studio pricing work?
There is no additional charge for using SageMaker Studio. You pay only for the underlying compute and storage charges on the services that you use within SageMaker Studio.
In which Regions is SageMaker Studio supported?
You can find the Regions where SageMaker Studio is supported in the Amazon SageMaker Developer Guide.
How can I check for imbalances in my model?
Amazon SageMaker Clarify helps improve model transparency by detecting statistical bias across the entire ML workflow. SageMaker Clarify checks for imbalances during data preparation, after training, and ongoing over time, and also includes tools to help explain ML models and their predictions. Findings can be shared through explainability reports.
What is RStudio on Amazon SageMaker AI?
RStudio on SageMaker AI is the first fully managed RStudio Workbench in the cloud. You can quickly launch the familiar RStudio integrated development environment (IDE) and dial up and down the underlying compute resources without interrupting your work, making it easier to build ML and analytics solutions in R at scale. You can seamlessly switch between the RStudio IDE and SageMaker Studio notebooks for R and Python development. All your work, including code, datasets, repositories, and other artifacts, is automatically synchronized between the two environments to reduce context switching and boost productivity.
What kind of bias does SageMaker Clarify detect?
How does SageMaker Clarify improve model explainability?
SageMaker Clarify is integrated with SageMaker Experiments to provide a feature importance graph detailing the importance of each input for your model’s overall decision-making process after the model has been trained. These details can help determine if a particular model input has more influence than it should on overall model behavior. SageMaker Clarify also makes explanations for individual predictions available through an API.
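As an illustrative sketch of requesting per-feature explanations programmatically, the SageMaker Python SDK exposes a Clarify processor that runs SHAP-based explainability jobs. The bucket, model name, and column names below are placeholders, and `SHAPConfig` parameters vary by use case; treat this as an assumption-laden outline rather than a definitive recipe.

```python
def run_shap_explainability(role_arn, bucket, model_name):
    """Sketch: compute SHAP feature attributions with SageMaker Clarify.
    Bucket, model name, and the 'target' label column are placeholders."""
    # SDK imports kept inside the function so the sketch reads
    # without the sagemaker package installed.
    from sagemaker import Session
    from sagemaker.clarify import (
        DataConfig,
        ModelConfig,
        SageMakerClarifyProcessor,
        SHAPConfig,
    )

    processor = SageMakerClarifyProcessor(
        role=role_arn,
        instance_count=1,
        instance_type="ml.m5.xlarge",
        sagemaker_session=Session(),
    )
    data_config = DataConfig(
        s3_data_input_path=f"s3://{bucket}/train.csv",
        s3_output_path=f"s3://{bucket}/clarify-output/",
        label="target",
        dataset_type="text/csv",
    )
    model_config = ModelConfig(
        model_name=model_name,
        instance_type="ml.m5.xlarge",
        instance_count=1,
    )
    shap_config = SHAPConfig(num_samples=100, agg_method="mean_abs")
    # Launches a processing job that writes an explainability report to S3.
    processor.run_explainability(data_config, model_config, shap_config)
```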
SageMaker and SageMaker AI
What is Amazon SageMaker AI? How does it differ from Amazon SageMaker?
Amazon SageMaker AI (formerly Amazon SageMaker) is a fully managed service that brings together a broad set of tools to enable high-performance, low-cost machine learning (ML) for any use case. With SageMaker AI, you can build, train, and deploy ML models at scale. SageMaker is a unified platform for data, analytics, and AI. It provides a new AI-powered unified development experience for customers to easily and quickly build applications on AWS.
What SageMaker AI features will be available in the new Amazon SageMaker Unified Studio at re:Invent 2024?
At re:Invent 2024, the supported capabilities include inference endpoints, JumpStart, Train, MLflow, Model Registry, partner AI apps, HyperPod, Pipelines, and others. We strive to incorporate all existing functionalities that support the entire model development journey, from training to deployment, in the new Unified Studio.
I currently use Amazon SageMaker AI through the APIs, AWS Management Console, Amazon SageMaker notebook instances, or Amazon SageMaker Studio. With the launch of the new unified platform for data, analytics, and AI, do I need to take any action to ensure my existing workflows continue to work?
SageMaker AI provides a unified data and AI experience to find, access, and act on your data, accelerating analytics and AI initiatives. SageMaker AI will continue to be supported, so you don't need to take any action to ensure your existing workflows continue to work. For example, you can continue using your existing Amazon SageMaker HyperPod clusters as they are. If you want to use them in the new SageMaker Unified Studio, set up a connection with this cluster. All your existing HyperPod configurations will automatically be migrated to your project in SageMaker, and the performance and cost-efficiency will be the same. However, the SageMaker Unified Studio experience can improve productivity by bringing all tools into one place.
I currently use SageMaker Studio, and I am interested in evaluating the SageMaker Unified Studio. How do I get started?
We're excited to announce a unified studio that allows you to collaborate and build faster. From SageMaker Unified Studio, you can discover data, query data, train AI models, and build generative AI applications. We are here to support you every step of the way. We will provide easy-to-use guidelines to bring your existing projects to the unified studio in Q1 2025. If you have any questions, don't hesitate to reach out to your account team.
Are there any differences between the current tools I use in SageMaker Studio and those in the new SageMaker Unified Studio?
SageMaker Studio remains a great choice for customers who need a reliable and streamlined ML development experience. Organizations looking to explore data and analytics capabilities will find the new unified platform's integrated governance, advanced analytics, and generative AI features compelling. With the new SageMaker Unified Studio integrated experience, you can prepare and integrate data, browse data using SQL, and discover and govern data with a unified catalog.
Can I access HyperPod, JumpStart, MLflow, JupyterLab, and Pipelines in the new SageMaker Unified Studio?
Yes, HyperPod, JumpStart, MLflow, JupyterLab, and Pipelines are all available in the new SageMaker Unified Studio. In addition, inference endpoints, Train, Model Registry, and other popular SageMaker AI capabilities are also supported in the new Unified Studio.
What does the typical generative AI workflow look like with the new unified experience?
Journey 1. Select, customize, and deploy foundation models (FMs):
- Browse and select a dataset
- Select an FM
- Evaluate models (automatic and human)
- Customize, fine-tune: Optimize FM price, performance, and quality
- Optimize and deploy for inference
- Automate with FMOps and model monitoring
Journey 2. Build, train, and deploy ML models at scale:
- Accelerate and scale data prep for ML
- Build ML models
- Train and tune ML models
- Deploy in production
- Manage and monitor
- Automate the ML lifecycle
Journey 3. Select a model, build, and deploy a generative AI application:
- Select a model and fine-tune it
- Import the model to Amazon Bedrock
- Build and deploy a generative AI application that integrates with your endpoint
Journey 4. Select and deploy a model to an endpoint, and connect the endpoint to generative AI apps:
- Select a model
- Deploy the model to a SageMaker AI endpoint
- Connect the endpoint to your generative AI applications
How does the pricing work for the AI-powered unified development experience?
The new unified development experience has a usage-based pricing model for the Studio and pass-through pricing for all underlying services. The Unified Studio will charge for metadata storage, API requests, and governance. For more details, visit Amazon SageMaker pricing.
ML governance
What ML governance tools does SageMaker AI provide?
SageMaker AI provides purpose-built ML governance tools across the ML lifecycle. With Amazon SageMaker Role Manager, administrators can define minimum permissions in minutes. Amazon SageMaker Model Cards makes it easier to capture, retrieve, and share essential model information from conception to deployment, and Amazon SageMaker Model Dashboard keeps you informed on production model behavior, all in one place. For more information, see ML Governance with Amazon SageMaker AI.
What does SageMaker Role Manager do?
You can define minimum permissions in minutes with SageMaker Role Manager. It provides a baseline set of permissions for ML activities and personas with a catalog of pre-built IAM policies. You can keep the baseline permissions, or customize them further based on your specific needs. With a few self-guided prompts, you can quickly input common governance constructs such as network access boundaries and encryption keys. SageMaker Role Manager will then generate the IAM policy automatically. You can discover the generated role and associated policies through the AWS IAM console. To further tailor the permissions to your use case, attach your managed IAM policies to the IAM role that you create with SageMaker Role Manager. You can also add tags to help identify the role and organize across AWS services.
What does SageMaker Model Cards do?
SageMaker Model Cards helps you centralize and standardize model documentation throughout the ML lifecycle by creating a single source of truth for model information. SageMaker Model Cards auto-populates training details to accelerate the documentation process. You can also add details such as the purpose of the model and the performance goals. You can attach model evaluation results to your model card and provide visualizations to gain key insights into model performance. SageMaker Model Cards can easily be shared with others by exporting to a PDF format.
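As a rough sketch of creating a model card programmatically, the low-level `CreateModelCard` API accepts a JSON content document. The content fields shown are an illustrative subset of the schema, and the card name and descriptions are placeholders.

```python
import json


def minimal_card_content(description, purpose):
    """Build a minimal model card content payload.
    Illustrative subset of the model card schema, not the full shape."""
    return {
        "model_overview": {"model_description": description},
        "intended_uses": {"purpose_of_model": purpose},
    }


def create_draft_model_card(card_name, description, purpose):
    """Sketch: register a draft model card via the low-level API
    (assumes AWS credentials and permissions)."""
    import boto3  # imported here so the sketch reads without the SDK

    boto3.client("sagemaker").create_model_card(
        ModelCardName=card_name,
        Content=json.dumps(minimal_card_content(description, purpose)),
        ModelCardStatus="Draft",  # promote to Approved once reviewed
    )
```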
What does SageMaker Model Dashboard do?
SageMaker Model Dashboard gives you a comprehensive overview of deployed models and endpoints, letting you track resources and model behavior violations through one pane. It allows you to monitor model behavior in four dimensions, including data and model quality, and bias and feature attribution drift through its integration with SageMaker Model Monitor and SageMaker Clarify. SageMaker Model Dashboard also provides an integrated experience to set up and receive alerts for missing and inactive model monitoring jobs, and deviations in model behavior for model quality, data quality, bias drift, and feature attribution drift. You can further inspect individual models and analyze factors impacting model performance over time. Then, you can follow up with ML practitioners to take corrective measures.
Foundation models
How do I get started with SageMaker AI quickly?
SageMaker JumpStart helps you quickly and easily get started with ML. SageMaker JumpStart provides a set of solutions for the most common use cases that can be deployed readily in just a few steps. The solutions are fully customizable and showcase the use of AWS CloudFormation templates and reference architectures so you can accelerate your ML journey. SageMaker JumpStart also provides foundation models and supports one-step deployment and fine-tuning of more than 150 popular open-source models, such as transformer, object detection, and image classification models.
Which foundation models are available in SageMaker JumpStart?
SageMaker JumpStart provides proprietary and public models. For a list of available foundation models, see Getting Started with Amazon SageMaker JumpStart.
How do I start using the foundation models in SageMaker JumpStart?
You can access foundation models through SageMaker Studio, the SageMaker SDK, and the AWS Management Console. To get started with proprietary foundation models, you must accept terms of sale in the AWS Marketplace.
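With the SageMaker SDK, a JumpStart model can be deployed in one step. The sketch below assumes a valid JumpStart model identifier for your Region; the example ID in the comment is hypothetical.

```python
def deploy_jumpstart_model(model_id):
    """Sketch: one-step deployment of a JumpStart foundation model.
    The model_id must be a valid JumpStart identifier for your Region."""
    # SDK import kept inside the function so the sketch reads
    # without the sagemaker package installed.
    from sagemaker.jumpstart.model import JumpStartModel

    model = JumpStartModel(model_id=model_id)
    predictor = model.deploy()  # provisions a SageMaker AI endpoint
    return predictor


# Example usage (hypothetical model ID; requires AWS credentials):
# predictor = deploy_jumpstart_model("huggingface-llm-mistral-7b")
# predictor.predict({"inputs": "Summarize: ..."})
```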
Will my data be used or shared to update the base model that is offered to customers using SageMaker JumpStart?
No. Your inference and training data will not be used or shared to update or train the base model that SageMaker JumpStart surfaces to customers.
Can I see the model weights and scripts of proprietary models with SageMaker JumpStart?
No. Proprietary models do not allow customers to view model weights and scripts.
In which Regions are SageMaker JumpStart foundation models available?
Models are discoverable in all Regions where SageMaker Studio is available, but the ability to deploy a model varies by model and by the Regional availability of the required instance type. You can find AWS Region availability and the required instance type on the model detail page in the AWS Marketplace.
How are SageMaker JumpStart foundation models priced?
For proprietary models, you are charged for software pricing determined by the model provider and SageMaker AI infrastructure charges based on the instance used. For publicly available models, you are charged SageMaker AI infrastructure charges based on the instance used. For more information, see Amazon SageMaker AI Pricing and the AWS Marketplace.
How does SageMaker JumpStart help protect and secure my data?
Security is the top priority at AWS, and SageMaker JumpStart is designed to be secure. That's why SageMaker AI gives you ownership and control over your content through simplified, powerful tools that help you determine where your content will be stored, secure your content in transit and at rest, and manage your access to AWS services and resources for your users.
- We do not share customer training and inference information with model sellers in the AWS Marketplace. Similarly, the seller’s model artifacts (for example, model weights) are not shared with the buyer.
- SageMaker JumpStart does not use customer models, training data, or algorithms to improve its service and does not share customer training and inference data with third parties.
- In SageMaker JumpStart, ML model artifacts are encrypted in transit and at rest.
- Under the AWS Shared Responsibility Model, AWS is responsible for protecting the global infrastructure that runs all of AWS. You are responsible for maintaining control over your content that is hosted on this infrastructure.
By using a model from AWS Marketplace or SageMaker JumpStart, users assume responsibility for the model output quality and acknowledge the capabilities and limitations described in the individual model description.
Which publicly available models are supported with SageMaker JumpStart?
SageMaker JumpStart includes over 150 pre-trained publicly available models from PyTorch Hub and TensorFlow Hub. For vision tasks such as image classification and object detection, you can use models like ResNet, MobileNet, and single-shot detector (SSD). For text tasks such as sentence classification, text classification, and question answering, you can use models like BERT, RoBERTa, and DistilBERT.
How can I share ML artifacts with others within my organization?
With SageMaker JumpStart, data scientists and ML developers can easily share ML artifacts, including notebooks and models, within their organization. Administrators can set up a repository that is accessible by a defined set of users. All users with permission to access the repository can browse, search, and use models and notebooks as well as the public content inside SageMaker JumpStart. Users can select artifacts to train models, deploy endpoints, and execute notebooks in SageMaker JumpStart.
Why should I use SageMaker JumpStart to share ML artifacts with others within my organization?
With SageMaker JumpStart, you can accelerate time-to-market when building ML applications. Models and notebooks built by one team inside your organization can be easily shared with other teams within your organization in just a few steps. Internal knowledge sharing and asset reuse can significantly increase the productivity of your organization.
How can I evaluate and select the foundation models?
Can admins control what’s available for their users?
Yes. Admins can control which Amazon SageMaker JumpStart models are visible and usable to their users across multiple AWS accounts and user principals. To learn more, see the documentation.
What is the inference optimization toolkit?
The inference optimization toolkit makes it easy for you to implement the latest inference optimization techniques to achieve state-of-the-art (SOTA) cost performance on Amazon SageMaker AI, while saving months of developer time. You can choose from a menu of popular optimization techniques provided by SageMaker AI and run optimization jobs ahead of time, benchmark the model for performance and accuracy metrics, and then deploy the optimized model to a SageMaker AI endpoint for inference. The toolkit handles all aspects of model optimization, so you can focus more on your business objectives.
Why should I use the inference optimization toolkit?
The inference optimization toolkit helps you improve cost performance and time to market for generative AI applications. The fully managed toolkit gives you access to the latest optimization techniques with easy-to-use tooling. It is also easy to upgrade to the best available solution over time, as the toolkit continually adapts to state-of-the-art innovations, new hardware, and hosting features.
The inference optimization toolkit supports optimization techniques such as speculative decoding, quantization, and compilation. You can choose the optimizations you want to add to your model in a few clicks, and Amazon SageMaker AI manages all of the undifferentiated heavy lifting: procuring the hardware, selecting the deep learning container and corresponding tuning parameters to run the optimization jobs, and saving the optimized model artifacts in the S3 location you provide.
For speculative decoding, you can get started with a SageMaker AI-provided draft model, so you don't have to build your own draft models from scratch, and benefit from request routing and system-level optimizations. With quantization, you simply choose the precision type you want to use and start a benchmarking job to measure the performance-versus-accuracy trade-off. Amazon SageMaker AI generates a comprehensive evaluation report so you can easily analyze that trade-off. With compilation, for the most popular models and their configurations, Amazon SageMaker AI automatically fetches compiled model artifacts during endpoint setup and scaling; this removes the need to run compilation jobs ahead of time, saving you hardware costs.
The Amazon SageMaker AI inference optimization toolkit helps reduce your costs and the time needed to optimize generative AI models, allowing you to focus on your business objectives.
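As a heavily hedged sketch, an ahead-of-time quantization job can be started through the low-level `CreateOptimizationJob` API. The instance type, environment override, and all names below are illustrative assumptions; consult the current API reference for the configurations your chosen model and container actually support.

```python
def start_quantization_job(job_name, role_arn, model_s3_uri, output_s3_uri):
    """Sketch: run an ahead-of-time quantization job with the inference
    optimization toolkit via the low-level API. The OverrideEnvironment
    key/value shown is illustrative, not a guaranteed option name."""
    import boto3  # imported here so the sketch reads without the SDK

    sm = boto3.client("sagemaker")
    sm.create_optimization_job(
        OptimizationJobName=job_name,
        RoleArn=role_arn,
        ModelSource={"S3": {"S3Uri": model_s3_uri}},
        DeploymentInstanceType="ml.g5.2xlarge",  # target inference hardware
        OptimizationConfigs=[
            {
                "ModelQuantizationConfig": {
                    # Illustrative placeholder for a quantization option.
                    "OverrideEnvironment": {"OPTION_QUANTIZE": "awq"},
                }
            },
        ],
        OutputConfig={"S3OutputLocation": output_s3_uri},
        StoppingCondition={"MaxRuntimeInSeconds": 36000},
    )
```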
Low-code ML
What is Amazon SageMaker Canvas?
SageMaker Canvas is a no-code service with an intuitive, point-and-click interface that lets you create highly accurate ML-based predictions from your data. SageMaker Canvas lets you access and combine data from a variety of sources using a drag-and-drop user interface, automatically cleaning and preparing data to minimize manual cleanup. SageMaker Canvas applies a variety of state-of-the-art ML algorithms to find highly accurate predictive models and provides an intuitive interface to make predictions. You can use SageMaker Canvas to make much more precise predictions in a variety of business applications and easily collaborate with data scientists and analysts in your enterprise by sharing your models, data, and reports. To learn more about SageMaker Canvas, see Amazon SageMaker Canvas FAQs.
What is Amazon SageMaker Autopilot?
SageMaker Autopilot is the industry’s first automated machine learning capability that gives you complete control and visibility into your ML models. SageMaker Autopilot automatically inspects raw data, applies feature processors, picks the best set of algorithms, trains and tunes multiple models, tracks their performance, and then ranks the models based on performance, all with just a few clicks. The result is the best-performing model that you can deploy at a fraction of the time normally required to train the model. You get full visibility into how the model was created and what’s in it, and SageMaker Autopilot integrates with SageMaker Studio. You can explore up to 50 different models generated by SageMaker Autopilot inside SageMaker Studio so it’s easy to pick the best model for your use case. SageMaker Autopilot can be used by people without ML experience to easily produce a model, or it can be used by experienced developers to quickly develop a baseline model on which teams can further iterate.
How does SageMaker Canvas pricing work?
With SageMaker Canvas, you pay based on usage. SageMaker Canvas lets you interactively ingest, explore, and prepare your data from multiple sources, train highly accurate ML models with your data, and generate predictions. There are two components that determine your bill: session charges based on the number of hours for which SageMaker Canvas is used or logged into, and charges for training the model based on the size of the dataset used to build the model. For more information, see Amazon SageMaker Canvas Pricing.
Can I stop a SageMaker Autopilot job manually?
Yes. You can stop a job at any time. When a SageMaker Autopilot job is stopped, all ongoing trials will be stopped and no new trial will be started.
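For reference, stopping a running Autopilot job can be done with a single low-level API call; the job name below is whatever you assigned when creating the job.

```python
def stop_autopilot_job(job_name):
    """Stop a running Autopilot (AutoML) job: ongoing trials are halted
    and no new trials are started. Requires AWS credentials."""
    import boto3  # imported here so the sketch reads without the SDK

    boto3.client("sagemaker").stop_auto_ml_job(AutoMLJobName=job_name)
```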
ML workflows
How can I build a repeatable ML workflow in SageMaker AI?
Amazon SageMaker Pipelines helps you create fully automated ML workflows from data preparation through model deployment so you can scale to thousands of ML models in production. You can create pipelines with the SageMaker Python SDK and view, execute, and audit them from the visual interface of SageMaker Studio. SageMaker Pipelines takes care of managing data between steps, packaging the code recipes, and orchestrating their execution, reducing months of coding to a few hours. Every time a workflow executes, a complete record of the data processed and actions taken is kept so data scientists and ML developers can quickly debug problems.
How do I view all my trained models to choose the best model to move to production?
Which components of SageMaker AI can be added to SageMaker Pipelines?
A SageMaker pipeline is composed of steps. You can choose any of the natively supported step types to compose a workflow that invokes various SageMaker AI features (e.g., training, evaluation) or other AWS services (e.g., Amazon EMR, AWS Lambda). You can also lift and shift your existing ML Python code into a SageMaker pipeline by either using the `@step` Python decorator or adding entire Python notebooks as components of the pipeline. For additional details, please refer to the SageMaker Pipelines developer guide.
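The `@step` lift-and-shift path can be sketched as below. The function bodies, pipeline name, and instance type are placeholders; the point is that passing one decorated function's output to another defines the pipeline's dependency graph.

```python
def build_two_step_pipeline(role_arn):
    """Sketch: lift plain Python functions into a SageMaker pipeline with
    the @step decorator. Names and bodies are illustrative placeholders."""
    # SDK imports kept inside the function so the sketch reads
    # without the sagemaker package installed.
    from sagemaker.workflow.function_step import step
    from sagemaker.workflow.pipeline import Pipeline

    @step(instance_type="ml.m5.xlarge")
    def preprocess(raw_uri: str) -> str:
        # ...load, clean, and write features; return their S3 URI...
        return raw_uri

    @step(instance_type="ml.m5.xlarge")
    def train(features_uri: str) -> str:
        # ...fit a model on the features; return the model artifact URI...
        return features_uri

    # Chaining the calls wires preprocess -> train in the pipeline DAG.
    model_artifact = train(preprocess("s3://<bucket>/raw/"))
    pipeline = Pipeline(name="example-pipeline", steps=[model_artifact])
    pipeline.upsert(role_arn=role_arn)  # create or update the definition
    return pipeline  # call pipeline.start() to run it
```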
How do I track my model components across the entire ML workflow?
How does the pricing for SageMaker Pipelines work?
There is no additional charge for SageMaker Pipelines. You pay only for the underlying compute or any separate AWS services you use within SageMaker Pipelines.
Can I use Kubeflow with SageMaker AI?
Yes. Amazon SageMaker AI Components for Kubeflow Pipelines are open-source plugins that allow you to use Kubeflow Pipelines to define your ML workflows and use SageMaker AI for the data labeling, training, and inference steps. Kubeflow Pipelines is an add-on to Kubeflow that lets you build and deploy portable and scalable complete ML pipelines. However, when using Kubeflow Pipelines, ML ops teams need to manage a Kubernetes cluster with CPU and GPU instances and keep its utilization high at all times to reduce operational costs. Maximizing the utilization of a cluster across data science teams is challenging and adds additional operational overhead to the ML ops teams. As an alternative to an ML-optimized Kubernetes cluster, with SageMaker Components for Kubeflow Pipelines you can take advantage of powerful SageMaker features such as data labeling, fully managed large-scale hyperparameter tuning and distributed training jobs, one-click secure and scalable model deployment, and cost-effective training through Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances without needing to configure and manage Kubernetes clusters specifically to run the ML jobs.
How does SageMaker Components for Kubeflow Pipelines pricing work?
There is no additional charge for using SageMaker Components for Kubeflow Pipelines.
Human-in-the-loop
What is human-in-the-loop, and why is it important for building AI-powered applications?
Human-in-the-loop is the process of harnessing human input across the ML lifecycle to improve the accuracy and relevancy of models. Humans can perform a variety of tasks, from data generation and annotation, to model review and customization. Human intervention is especially important for generative AI applications, where humans are typically both the requester and consumer of the content. It is therefore critical that humans train foundation models (FMs) how to respond accurately, safely, and relevantly to users' prompts. Human feedback helps with multiple tasks: first, creating high-quality labeled training datasets for generative AI applications via supervised learning (where a human simulates the style, length, and accuracy of how a model should respond to users' prompts) and reinforcement learning from human feedback (where a human ranks and classifies model responses); and second, using human-generated data to customize FMs on specific tasks or with your company- and domain-specific data to make model output relevant for you.
How can human-in-the-loop capabilities be used for generative AI applications powered by FMs?
Human-in-the-loop capabilities play an important role in creating and improving generative AI applications powered by FMs. A highly skilled human workforce that is trained on the tasks’ guidelines can provide feedback, guidance, inputs, and assessment in activities like generating demonstration data to train FMs, correcting and improving sample responses, fine-tuning a model based on company and industry data, acting as a safeguard against toxicity and bias and more. Human-in-the-loop capabilities, therefore, can improve model accuracy and performance.
What is the difference between Amazon SageMaker Ground Truth’s self-service and AWS-managed offerings?
Amazon SageMaker Ground Truth offers the most comprehensive set of human-in-the-loop capabilities. There are two ways to use Amazon SageMaker Ground Truth: a self-service offering and an AWS-managed offering. In the self-service offering, your data annotators, content creators, and prompt engineers (in-house, vendor-managed, or leveraging the public crowd) can use our low-code user interface to accelerate human-in-the-loop tasks, while having flexibility to build and manage your own custom workflows. In the AWS-managed offering (SageMaker Ground Truth Plus), we handle the heavy lifting for you, which includes selecting and managing the right workforce for your use case. SageMaker Ground Truth Plus designs and customizes an end-to-end workflow (including detailed workforce training and quality assurance steps) and provides a skilled AWS-managed team that is trained on the specific tasks and meets your data quality, security, and compliance requirements.
Prepare data
How can SageMaker AI prepare data for ML?
SageMaker Data Wrangler reduces the time it takes to aggregate and prepare data for ML. From a single interface in SageMaker Studio, you can browse and import data from Amazon S3, Amazon Athena, Amazon Redshift, AWS Lake Formation, Amazon EMR, Snowflake, and Databricks in just a few steps. You can also query and import data that is transferred from over 50 data sources and registered in AWS Glue Data Catalog by Amazon AppFlow. SageMaker Data Wrangler will automatically load, aggregate, and display the raw data. After importing your data into SageMaker Data Wrangler, you can see automatically generated column summaries and histograms. You can then dig deeper to understand your data and identify potential errors with the SageMaker Data Wrangler Data Quality and Insights report, which provides summary statistics and data quality warnings. You can also run bias analysis supported by SageMaker Clarify directly from SageMaker Data Wrangler to detect potential bias during data preparation. From there, you can use SageMaker Data Wrangler’s pre-built transformations to prepare your data. Once your data is prepared, you can build fully automated ML workflows with Amazon SageMaker Pipelines or import that data into Amazon SageMaker Feature Store.
What data types does SageMaker Data Wrangler support?
How can I create model features with SageMaker Data Wrangler?
How can I visualize my data in SageMaker Data Wrangler?
How does the pricing for SageMaker Data Wrangler work?
You pay for all ML compute, storage, and data processing resources you use for SageMaker Data Wrangler. You can review all the details of SageMaker Data Wrangler pricing here. As part of the AWS Free Tier, you can also get started with SageMaker Data Wrangler for free.
How can I train ML models with data prepared in SageMaker Data Wrangler?
How does SageMaker Data Wrangler handle new data when I have prepared my features on historical data?
You can configure and launch SageMaker AI processing jobs directly from the SageMaker Data Wrangler UI, including scheduling your data processing job and parameterizing your data sources to easily transform new batches of data at scale.
How does SageMaker Data Wrangler work with my CI/CD processes?
Once you have prepared your data, SageMaker Data Wrangler provides different options for promoting your SageMaker Data Wrangler flow to production and integrates seamlessly with MLOps and CI/CD capabilities. You can configure and launch SageMaker AI processing jobs directly from the SageMaker Data Wrangler UI, including scheduling your data processing job and parameterizing your data sources to easily transform new batches of data at scale. Alternatively, SageMaker Data Wrangler integrates with SageMaker AI processing and the SageMaker Spark container, allowing you to use the SageMaker SDKs to integrate SageMaker Data Wrangler into your production workflow.
Which model does SageMaker Data Wrangler Quick Model use?
What size data does SageMaker Data Wrangler support?
Does SageMaker Data Wrangler work with SageMaker Feature Store?
What is SageMaker Feature Store?
SageMaker Feature Store is a fully managed, purpose-built platform to store, share, and manage features for machine learning (ML) models. Features can be discovered and shared for easy reuse across models and teams with secure access and control, including across AWS accounts. SageMaker Feature Store supports both online and offline features for real-time inference, batch inference and training. It also manages batch and streaming feature engineering pipelines to reduce duplication in feature creation and improve model accuracy.
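As a hedged sketch of how a record reaches the online store described above (the feature group name, feature names, and values below are hypothetical placeholders), a record written via the `PutRecord` API of the `sagemaker-featurestore-runtime` service has this shape:

```python
# Hypothetical record for a feature group named "customer-features".
# Every feature value is passed as a string in the PutRecord API.
put_record_params = {
    "FeatureGroupName": "customer-features",  # placeholder name
    "Record": [
        {"FeatureName": "customer_id", "ValueAsString": "C-1001"},
        {"FeatureName": "avg_order_value", "ValueAsString": "54.20"},
        {"FeatureName": "event_time", "ValueAsString": "2024-01-01T00:00:00Z"},
    ],
}
# With AWS credentials configured, you would send it with boto3:
#   import boto3
#   runtime = boto3.client("sagemaker-featurestore-runtime")
#   runtime.put_record(**put_record_params)
```

The same record is automatically replicated to the offline store, which is how online/offline consistency is maintained.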
What are offline features?
What are online features?
How do I maintain consistency between online and offline features?
How can I reproduce a feature from a given moment in time?
How does pricing work for SageMaker Feature Store?
You can get started with SageMaker Feature Store for free, as part of the AWS Free Tier. With SageMaker Feature Store, you pay for writes to the feature store, and for reads from and storage in the online feature store. For pricing details, see Amazon SageMaker Pricing.
What does SageMaker AI offer for data labeling?
SageMaker AI provides two data labeling offerings, Amazon SageMaker Ground Truth Plus and Amazon SageMaker Ground Truth. Both options allow you to identify raw data, such as images, text files, and videos, and add informative labels to create high-quality training datasets for your ML models. To learn more, see Amazon SageMaker Data Labeling.
What is geospatial data?
What are SageMaker geospatial capabilities?
Why should I use geospatial ML on SageMaker?
Build models
What are Amazon SageMaker Studio notebooks?
You can use fully managed Jupyter notebooks in SageMaker AI for complete ML development. Scale compute instances up and down with a broad selection of compute-optimized and GPU-accelerated instances in the cloud.
How do SageMaker Studio notebooks work?
SageMaker Studio notebooks are one-step Jupyter notebooks that can be spun up quickly. The underlying compute resources are fully elastic, so you can easily dial up or down the available resources, and the changes take place automatically in the background without interrupting your work. SageMaker AI also enables one-step sharing of notebooks. You can easily share notebooks with others, and they’ll get the exact same notebook, saved in the same place.
With SageMaker Studio notebooks, you can sign in with your corporate credentials using IAM Identity Center. Sharing notebooks within and across teams is easy since the dependencies needed to run a notebook are automatically tracked in work images that are encapsulated with the notebook as it is shared.
How are SageMaker Studio notebooks different from the instance-based notebooks offering?
Notebooks in SageMaker Studio IDEs offer a few important features that differentiate them from the instance-based notebooks. First, you can quickly launch notebooks without needing to manually provision an instance and wait for it to be operational. The startup time of launching the UI to read and execute a notebook is faster than the instance-based notebooks. You also have the flexibility to choose from a large collection of instance types from within the UI at any time. You do not need to go to the AWS Management Console to start new instances and port over your notebooks. Each user has an isolated home directory independent of a particular instance. This directory is automatically mounted into all notebook servers and kernels as they’re started, so you can access your notebooks and other files even when you switch instances to view and run your notebooks. SageMaker Studio notebooks are integrated with AWS IAM Identity Center (successor to AWS SSO), making it easier to use your organizational credentials to access the notebooks. They are also integrated with purpose-built ML tools in SageMaker AI and other AWS services for your complete ML development, from preparing data at petabyte scale using Spark on Amazon EMR, to training and debugging models, to deploying and monitoring models and managing pipelines.
What are the shared spaces in SageMaker AI?
ML practitioners can create a shared workspace where teammates can read and edit SageMaker Studio notebooks together. By using shared spaces, teammates can coedit the same notebook file, run notebook code simultaneously, and review the results together to eliminate back and forth and streamline collaboration. In shared spaces, ML teams have built-in support for services like Bitbucket and AWS CodeCommit, so they can easily manage different versions of their notebook and compare changes over time. Any resources created from within the notebooks, such as experiments and ML models, are automatically saved and associated with the specific workspace where they were created, so teams can more easily stay organized and accelerate ML model development.
How do SageMaker Studio notebooks work with other AWS services?
How does SageMaker Studio notebooks pricing work?
You pay for both compute and storage when you use SageMaker AI notebooks in Studio IDEs. See Amazon SageMaker AI Pricing for charges by compute instance type. Your notebooks and associated artifacts such as data files and scripts are persisted on Amazon Elastic File System (Amazon EFS). See Amazon EFS Pricing for storage charges. As part of the AWS Free Tier, you can get started with notebooks in SageMaker Studio for free.
Do I get charged separately for each notebook created and run in SageMaker Studio?
No. You can create and run multiple notebooks on the same compute instance. You pay only for the compute that you use, not for individual items. You can read more about this in our metering guide.
In addition to the notebooks, you can also start and run terminals and interactive shells in SageMaker Studio, all on the same compute instance. Each application runs within a container or image. SageMaker Studio provides several built-in images purpose-built and preconfigured for data science and ML.
How do I monitor and shut down the resources used by my notebooks?
You can monitor and shut down the resources used by your SageMaker Studio notebooks through both the SageMaker Studio visual interface and the AWS Management Console. See the documentation for more details.
I’m running a SageMaker Studio notebook. Will I still be charged if I close my browser, close the notebook tab, or just leave the browser open?
Do I get charged for creating and setting up a SageMaker Studio domain?
No, you don’t get charged for creating or configuring a SageMaker Studio domain, including adding, updating, and deleting user profiles.
How do I see the itemized charges for SageMaker Studio notebooks or other SageMaker services?
As an admin, you can view the list of itemized charges for SageMaker AI, including SageMaker Studio, in the AWS Billing console. From the AWS Management Console for SageMaker AI, choose Services on the top menu, type "billing" in the search box and select Billing from the dropdown, and then select Bills on the left panel. In the Details section, you can select SageMaker to expand the list of Regions and drill down to the itemized charges.
What is Amazon SageMaker Studio Lab?
Why should I use SageMaker Studio Lab?
SageMaker Studio Lab is for students, researchers, and data scientists who need a free notebook development environment with no setup required for their ML classes and experiments. SageMaker Studio Lab is ideal for users who do not need a production environment but still want a subset of the SageMaker AI functionality to improve their ML skills. Studio Lab sessions are automatically saved, so users can pick up where they left off in each session.
How does SageMaker Studio Lab work with other AWS services?
What is SageMaker Canvas?
SageMaker Canvas is a visual drag-and-drop service that allows business analysts to build ML models and generate accurate predictions without writing any code or requiring ML expertise. SageMaker Canvas makes it easier to access and combine data from a variety of sources, automatically clean data and apply a variety of data adjustments, and build ML models to generate accurate predictions in a single step. You can also easily publish results, explain and interpret models, and share models with others within your organization to review.
What data sources does SageMaker Canvas support?
SageMaker Canvas helps you seamlessly discover AWS data sources that your account has access to, including Amazon S3 and Amazon Redshift. You can browse and import data using the SageMaker Canvas visual drag-and-drop interface. Additionally, you can drag and drop files from your local disk, and use pre-built connectors to import data from third-party sources such as Snowflake.
How do I build an ML model to generate accurate predictions in SageMaker Canvas?
Once you have connected sources, selected a dataset, and prepared your data, you can select the target column that you want to predict to initiate a model creation job. SageMaker Canvas will automatically identify the problem type, generate new relevant features, test a comprehensive set of prediction models using ML techniques such as linear regression, logistic regression, deep learning, time-series forecasting, and gradient boosting, and build the model that makes accurate predictions based on your dataset.
How long does it take to build a model in SageMaker Canvas? How can I monitor progress during model creation?
The time it takes to build a model depends on the size of your dataset. Small datasets can take less than 30 minutes, and large datasets can take a few hours. As the model creation job progresses, SageMaker Canvas provides detailed visual updates, including percent job complete and the amount of time left for job completion.
Train models
What is Amazon SageMaker HyperPod?
When should I use SageMaker HyperPod?
Does SageMaker AI support distributed training?
Yes. SageMaker AI can automatically distribute deep learning models and large training sets across AWS GPU instances in a fraction of the time it takes to build and optimize these distribution strategies manually. The two distributed training techniques that SageMaker AI applies are data parallelism and model parallelism. Data parallelism is applied to improve training speeds by dividing the data equally across multiple GPU instances, allowing each instance to train concurrently. Model parallelism is useful for models too large to be stored on a single GPU and require the model to be partitioned into smaller parts before distributing across multiple GPUs. With only a few lines of additional code in your PyTorch and TensorFlow training scripts, SageMaker AI will automatically apply data parallelism or model parallelism for you, allowing you to develop and deploy your models faster. SageMaker AI will determine the best approach to split your model by using graph partitioning algorithms to balance the computation of each GPU while minimizing the communication between GPU instances. SageMaker AI also optimizes your distributed training jobs through algorithms that fully utilize the AWS compute and network in order to achieve near-linear scaling efficiency, which allows you to complete training faster than manual open-source implementations.
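As a minimal sketch of the "few lines of additional code" mentioned above (the role ARN, script name, and S3 paths are placeholders, not real resources), enabling SageMaker's data parallelism library on a PyTorch training job comes down to one extra `distribution` argument on the estimator:

```python
# Hedged sketch: kwargs for a SageMaker PyTorch estimator with the
# SageMaker distributed data parallelism library enabled.
estimator_kwargs = {
    "entry_point": "train.py",  # placeholder training script
    "role": "arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
    "instance_type": "ml.p4d.24xlarge",
    "instance_count": 2,
    "framework_version": "1.13",
    "py_version": "py39",
    # One extra argument enables SageMaker distributed data parallelism:
    "distribution": {"smdistributed": {"dataparallel": {"enabled": True}}},
}
# With the SageMaker Python SDK installed and credentials configured:
#   from sagemaker.pytorch import PyTorch
#   PyTorch(**estimator_kwargs).fit({"train": "s3://my-bucket/train"})
```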
What is Amazon SageMaker Experiments?
What is Amazon SageMaker Training Compiler?
SageMaker Training Compiler is a deep learning (DL) compiler that accelerates DL model training by up to 50 percent through graph- and kernel-level optimizations to use GPUs more efficiently. SageMaker Training Compiler is integrated with versions of TensorFlow and PyTorch in SageMaker, so you can speed up training in these popular frameworks with minimal code changes.
What is Amazon SageMaker Debugger?
How does SageMaker Training Compiler work?
SageMaker Training Compiler accelerates training jobs by converting DL models from their high-level language representation to hardware-optimized instructions that train faster than jobs with the native frameworks. More specifically, SageMaker Training Compiler uses graph-level optimization (operator fusion, memory planning, and algebraic simplification), data flow-level optimizations (layout transformation, common sub-expression elimination), and backend optimizations (memory latency hiding, loop oriented optimizations) to produce an optimized model training job that more efficiently uses hardware resources and, as a result, trains faster.
What is Managed Spot Training?
Managed Spot Training with SageMaker AI lets you train your ML models using Amazon EC2 Spot Instances, while reducing the cost of training your models by up to 90%.
How can I use SageMaker Training Compiler?
SageMaker Training Compiler is built into the SageMaker Python SDK and SageMaker Hugging Face Deep Learning Containers. You don’t need to change your workflows to access its speedup benefits. You can run training jobs in the same way as you already do, using any of the SageMaker interfaces: SageMaker notebook instances, SageMaker Studio, the AWS SDK for Python (Boto3), and the AWS Command Line Interface (AWS CLI). You can enable SageMaker Training Compiler by adding a TrainingCompilerConfig class as a parameter when you create a framework estimator object. Practically, this means a couple of lines of code added to your existing training job script for a single GPU instance. The most up-to-date documentation, sample notebooks, and examples are available in the documentation.
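To illustrate the TrainingCompilerConfig parameter described above, here is a hedged sketch of a Hugging Face estimator with the compiler enabled. The role ARN, script name, framework versions, and S3 path are placeholders:

```python
# Hedged sketch: estimator kwargs for a Hugging Face training job.
hf_estimator_kwargs = {
    "entry_point": "train.py",  # placeholder training script
    "role": "arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
    "instance_type": "ml.p3.2xlarge",
    "instance_count": 1,
    "transformers_version": "4.21",  # version pairs may differ in your setup
    "pytorch_version": "1.11",
    "py_version": "py38",
}
# With the SageMaker Python SDK installed, enabling the compiler is one
# extra argument on the estimator:
#   from sagemaker.huggingface import HuggingFace, TrainingCompilerConfig
#   estimator = HuggingFace(**hf_estimator_kwargs,
#                           compiler_config=TrainingCompilerConfig())
#   estimator.fit({"train": "s3://my-bucket/train"})
```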
What is the pricing of SageMaker Training Compiler?
SageMaker Training Compiler is a SageMaker Training feature and is provided at no additional charge exclusively to SageMaker AI customers. In fact, SageMaker Training Compiler can reduce your costs because training times are shorter.
How do I use Managed Spot Training?
You enable the Managed Spot Training option when submitting your training jobs and you also specify how long you want to wait for Spot capacity. SageMaker AI will then use Amazon EC2 Spot Instances to run your job and manages the Spot capacity. You have full visibility into the status of your training jobs, both while they are running and while they are waiting for capacity.
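As a minimal sketch of the options above (the checkpoint S3 path is a placeholder), Managed Spot Training is enabled on a framework estimator with a few keyword arguments — `max_wait`, the total time you are willing to wait for Spot capacity, must be at least as large as `max_run`:

```python
# Hedged sketch: estimator settings that enable Managed Spot Training.
spot_kwargs = {
    "use_spot_instances": True,
    "max_run": 3600,   # maximum training time, in seconds
    "max_wait": 7200,  # total time to wait, including Spot interruptions
    "checkpoint_s3_uri": "s3://my-bucket/checkpoints/",  # placeholder
}
# The SDK requires max_wait >= max_run:
assert spot_kwargs["max_wait"] >= spot_kwargs["max_run"]

# Passed to any SageMaker framework estimator, for example:
#   from sagemaker.pytorch import PyTorch
#   PyTorch(entry_point="train.py", role=role,
#           instance_type="ml.g5.xlarge", instance_count=1,
#           framework_version="1.13", py_version="py39",
#           **spot_kwargs).fit(inputs)
```

The `checkpoint_s3_uri` is what lets SageMaker AI resume an interrupted job from the latest checkpoint instead of restarting from scratch.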
When should I use Managed Spot Training?
How does Managed Spot Training work?
Managed Spot Training uses Amazon EC2 Spot Instances for training, and these instances can be pre-empted when AWS needs capacity. As a result, Managed Spot Training jobs can run in small increments as and when capacity becomes available. The training jobs need not be restarted from scratch when there is an interruption, as SageMaker AI can resume the training jobs using the latest model checkpoint. The built-in frameworks and built-in computer vision algorithms in SageMaker AI enable periodic checkpoints, and you can also enable checkpoints for custom models.
Do I need to periodically checkpoint with Managed Spot Training?
We recommend periodic checkpoints as a general best practice for long-running training jobs. This prevents your Managed Spot Training jobs from restarting if capacity is pre-empted. When you enable checkpoints, SageMaker AI resumes your Managed Spot Training jobs from the last checkpoint.
How do you calculate the cost savings with Managed Spot Training jobs?
Which instances can I use with Managed Spot Training?
Managed Spot Training can be used with all instances supported in SageMaker AI.
Which Regions are supported with Managed Spot Training?
Managed Spot Training is supported in all Regions where SageMaker AI is currently available.
Are there limits to the size of the dataset I can use for training?
There are no fixed limits to the size of the dataset you can use for training models with SageMaker AI.
What algorithms does SageMaker AI use to generate models?
SageMaker AI includes built-in algorithms for linear regression, logistic regression, k-means clustering, principal component analysis, factorization machines, neural topic modeling, latent Dirichlet allocation, gradient boosted trees, sequence2sequence, time-series forecasting, word2vec, and image classification. SageMaker AI also provides optimized Apache MXNet, TensorFlow, Chainer, PyTorch, Gluon, Keras, Horovod, Scikit-learn, and Deep Graph Library containers. In addition, SageMaker AI supports your custom training algorithms provided through a Docker image adhering to the documented specification.
What is Automatic Model Tuning?
What models can be tuned with Automatic Model Tuning?
You can run automatic model tuning in SageMaker AI on top of any algorithm as long as it’s scientifically feasible, including built-in SageMaker AI algorithms, deep neural networks, or arbitrary algorithms you bring to SageMaker AI in the form of Docker images.
Can I use Automatic Model Tuning outside SageMaker AI?
Not at this time. The best model tuning performance and experience is within SageMaker AI.
What is the underlying tuning algorithm for Automatic Model Tuning?
Currently, the algorithm for tuning hyperparameters is a customized implementation of Bayesian Optimization. It aims to optimize a customer-specified objective metric throughout the tuning process. Specifically, it checks the objective metric of completed training jobs, and uses that knowledge to infer the hyperparameter combination for the next training job.
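As a hedged sketch of the configuration a tuning job takes (the hyperparameter names "eta" and "max_depth" and the metric name are illustrative, XGBoost-style placeholders), this mirrors the HyperParameterTuningJobConfig shape accepted by the CreateHyperParameterTuningJob API:

```python
# Hedged sketch: tuning job configuration with the Bayesian strategy,
# search ranges, an objective metric, and resource limits.
tuning_job_config = {
    "Strategy": "Bayesian",
    "ParameterRanges": {
        "ContinuousParameterRanges": [
            {"Name": "eta", "MinValue": "0.01", "MaxValue": "0.3"},
        ],
        "IntegerParameterRanges": [
            {"Name": "max_depth", "MinValue": "3", "MaxValue": "10"},
        ],
    },
    "HyperParameterTuningJobObjective": {
        "Type": "Minimize",
        "MetricName": "validation:rmse",  # placeholder metric
    },
    "ResourceLimits": {
        "MaxNumberOfTrainingJobs": 20,
        "MaxParallelTrainingJobs": 2,
    },
}
# With boto3:
#   boto3.client("sagemaker").create_hyper_parameter_tuning_job(
#       HyperParameterTuningJobName="my-tuning-job",
#       HyperParameterTuningJobConfig=tuning_job_config,
#       TrainingJobDefinition=training_job_definition)
```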
Does Automatic Model Tuning recommend specific hyperparameters for tuning?
No. How certain hyperparameters impact the model performance depends on various factors, and it is hard to definitively say one hyperparameter is more important than the others and thus needs to be tuned. For built-in algorithms within SageMaker AI, we do call out whether or not a hyperparameter is tunable.
How long does a hyperparameter tuning job take?
The length of time for a hyperparameter tuning job depends on multiple factors, including the size of the data, the underlying algorithm, and the values of the hyperparameters. Additionally, customers can choose the number of simultaneous training jobs and total number of training jobs. All these choices affect how long a hyperparameter tuning job can last.
Can I optimize multiple objectives simultaneously, such as optimizing a model to be both fast and accurate?
Not at this time. Currently, you need to specify a single objective metric to optimize or change your algorithm code to emit a new metric, which is a weighted average between two or more useful metrics, and have the tuning process optimize towards that objective metric.
How much does Automatic Model Tuning cost?
There is no charge for a hyperparameter tuning job itself. You will be charged by the training jobs that are launched by the hyperparameter tuning job, based on model training pricing.
How do I decide to use SageMaker Autopilot or Automatic Model Tuning?
SageMaker Autopilot automates everything in a typical ML workflow, including feature preprocessing, algorithm selection, and hyperparameter tuning, while specifically focusing on classification and regression use cases. Automatic Model Tuning, on the other hand, is designed to tune any model, no matter whether it is based on built-in algorithms, deep learning frameworks, or custom containers. In exchange for the flexibility, you have to manually pick the specific algorithm, hyperparameters to tune, and corresponding search ranges.
What is reinforcement learning?
Reinforcement learning is a ML technique that enables an agent to learn in an interactive environment by trial and error using feedback from its own actions and experiences.
Can I train reinforcement learning models in SageMaker AI?
Yes, you can train reinforcement learning models in SageMaker AI in addition to supervised and unsupervised learning models.
How is reinforcement learning different from supervised learning?
Though both supervised and reinforcement learning use mapping between input and output, in supervised learning the feedback provided to the agent is the correct set of actions for performing a task, whereas reinforcement learning uses delayed feedback, in which reward signals are optimized to achieve a long-term goal through a sequence of actions.
When should I use reinforcement learning?
While the goal of supervised learning techniques is to find the right answer based on the patterns in the training data, the goal of unsupervised learning techniques is to find similarities and differences between data points. In contrast, the goal of reinforcement learning (RL) techniques is to learn how to achieve a desired outcome even when it is not clear how to accomplish that outcome. As a result, RL is more suited to enabling intelligent applications where an agent can make autonomous decisions such as robotics, autonomous vehicles, HVAC, industrial control, and more.
What type of environments can I use for training RL models?
Amazon SageMaker RL supports a number of different environments for training RL models. You can use AWS services such as AWS RoboMaker, open-source environments or custom environments developed using OpenAI Gym interfaces, or commercial simulation environments such as MATLAB and Simulink.
Do I need to write my own RL agent algorithms to train RL models?
No, SageMaker RL includes RL toolkits such as Coach and Ray RLlib that offer implementations of RL agent algorithms such as DQN, PPO, A3C, and many more.
Can I bring my own RL libraries and algorithm implementation and run them in SageMaker RL?
Yes, you can bring your own RL libraries and algorithm implementations in Docker containers and run those in SageMaker RL.
Can I do distributed rollouts using SageMaker RL?
Yes. You can even select a heterogeneous cluster where the training can run on a GPU instance and the simulations can run on multiple CPU instances.
Deploy models
What deployment options does SageMaker AI provide?
After you build and train models, SageMaker AI provides three options to deploy them so you can start making predictions. Real-time inference is suitable for workloads with millisecond latency requirements, payload sizes up to 6 MB, and processing times of up to 60 seconds. Batch transform is ideal for offline predictions on large batches of data that are available up front. Asynchronous inference is designed for workloads that do not have sub-second latency requirements, payload sizes up to 1 GB, and processing times of up to 15 minutes.
What is Amazon SageMaker Asynchronous Inference?
How do I configure auto-scaling settings to scale down the instance count to zero when not actively processing requests?
You can scale down the SageMaker Asynchronous Inference endpoint instance count to zero in order to save on costs when you are not actively processing requests. You need to define a scaling policy that scales on the "ApproximateBacklogPerInstance" custom metric and set the "MinCapacity" value to zero. For step-by-step instructions, please visit the autoscale an asynchronous endpoint section of the developer guide.
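As a hedged sketch of the policy described above (the endpoint and variant names are placeholders, and the exact CloudWatch metric name — shown here as `ApproximateBacklogSizePerInstance`, as used in the developer guide — should be confirmed there), these are the request shapes for the Application Auto Scaling APIs:

```python
# Hedged sketch: scale an async endpoint's variant down to zero.
resource_id = "endpoint/my-async-endpoint/variant/AllTraffic"  # placeholder

# Register the variant as a scalable target with MinCapacity = 0.
register_target_params = {
    "ServiceNamespace": "sagemaker",
    "ResourceId": resource_id,
    "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
    "MinCapacity": 0,  # allows scale-in to zero instances
    "MaxCapacity": 5,
}

# Target-tracking policy on the backlog-per-instance custom metric.
scaling_policy_params = {
    "PolicyName": "backlog-scaling",
    "ServiceNamespace": "sagemaker",
    "ResourceId": resource_id,
    "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 5.0,  # target backlog size per instance
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateBacklogSizePerInstance",
            "Namespace": "AWS/SageMaker",
            "Dimensions": [{"Name": "EndpointName",
                            "Value": "my-async-endpoint"}],
            "Statistic": "Average",
        },
    },
}
# With boto3:
#   client = boto3.client("application-autoscaling")
#   client.register_scalable_target(**register_target_params)
#   client.put_scaling_policy(**scaling_policy_params)
```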
What is Amazon SageMaker Serverless Inference?
SageMaker Serverless Inference is a purpose-built serverless model serving option that makes it easy to deploy and scale ML models. SageMaker Serverless Inference endpoints automatically start the compute resources and scale them in and out depending on traffic, eliminating the need for you to choose instance type, run provisioned capacity, or manage scaling. You can optionally specify the memory requirements for your serverless inference endpoint. You pay only for the duration of running the inference code and the amount of data processed, not for idle periods.
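As a minimal sketch of the optional memory setting mentioned above (the values below are illustrative; memory is configured in 1 GB increments), a serverless endpoint configuration has two knobs — memory size and maximum concurrency:

```python
# Hedged sketch: a serverless endpoint configuration.
serverless_config = {
    "MemorySizeInMB": 2048,  # 1024-6144 MB, in 1 GB increments
    "MaxConcurrency": 10,    # concurrent invocations per endpoint
}
assert serverless_config["MemorySizeInMB"] % 1024 == 0

# With the SageMaker Python SDK:
#   from sagemaker.serverless import ServerlessInferenceConfig
#   config = ServerlessInferenceConfig(memory_size_in_mb=2048,
#                                      max_concurrency=10)
#   predictor = model.deploy(serverless_inference_config=config)
```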
Why should I use SageMaker Serverless Inference?
What is Provisioned Concurrency for SageMaker Serverless Inference?
Why should I use Provisioned Concurrency?
With on-demand serverless endpoints, if your endpoint does not receive traffic for a while and then your endpoint suddenly receives new requests, it can take some time for your endpoint to spin up the compute resources to process the requests. This is called a cold start. A cold start can also occur if your concurrent requests exceed the current concurrent request usage. The cold start time depends on your model size, how long it takes to download your model, and the start-up time of your container.
To reduce variability in your latency profile, you can optionally enable Provisioned Concurrency for your serverless endpoints. With Provisioned Concurrency, your serverless endpoints are always ready and can instantaneously serve bursts in traffic, without any cold starts.
How will I be charged for Provisioned Concurrency?
As with on-demand Serverless Inference, when Provisioned Concurrency is enabled, you pay for the compute capacity used to process inference requests, billed by the millisecond, and the amount of data processed. You also pay for Provisioned Concurrency usage, based on the memory configured, duration provisioned, and amount of concurrency enabled. For more information, see Amazon SageMaker AI Pricing.
What is SageMaker AI shadow testing?
SageMaker AI helps you run shadow tests to evaluate a new ML model before production release by testing its performance against the currently deployed model. SageMaker AI deploys the new model in shadow mode alongside the current production model and mirrors a user-specified portion of the production traffic to the new model. It optionally logs the model inferences for offline comparison. It also provides a live dashboard with a comparison of key performance metrics, such as latency and error rate, between the production and shadow models to help you decide whether to promote the new model to production.
Why should I use SageMaker AI for shadow testing?
SageMaker AI simplifies the process of setting up and monitoring shadow variants so you can evaluate the performance of the new ML model on live production traffic. SageMaker AI eliminates the need for you to orchestrate infrastructure for shadow testing. It lets you control testing parameters such as the percentage of traffic mirrored to the shadow variant and the duration of the test. As a result, you can start small and increase the inference requests to the new model after you gain confidence in model performance. SageMaker AI creates a live dashboard displaying performance differences across key metrics, so you can easily compare model performance to evaluate how the new model differs from the production model.
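To make the traffic-mirroring control above concrete, here is a hedged sketch of an endpoint configuration with a shadow variant receiving 20% of production traffic. The endpoint configuration, model, and variant names are placeholders:

```python
# Hedged sketch: CreateEndpointConfig request with a shadow variant.
endpoint_config_params = {
    "EndpointConfigName": "my-shadow-test-config",  # placeholder
    "ProductionVariants": [{
        "VariantName": "production",
        "ModelName": "model-v1",            # placeholder: current model
        "InstanceType": "ml.m5.xlarge",
        "InitialInstanceCount": 1,
        "InitialVariantWeight": 1.0,
    }],
    "ShadowProductionVariants": [{
        "VariantName": "shadow",
        "ModelName": "model-v2",            # placeholder: candidate model
        "InstanceType": "ml.m5.xlarge",
        "InitialInstanceCount": 1,
        "InitialVariantWeight": 0.2,        # mirror 20% of traffic
    }],
}
# With boto3:
#   sm = boto3.client("sagemaker")
#   sm.create_endpoint_config(**endpoint_config_params)
```

Responses from the shadow variant are logged for comparison but never returned to callers.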
What is Amazon SageMaker Inference Recommender?
SageMaker Inference Recommender reduces the time required to get ML models in production by automating performance benchmarking and tuning model performance across SageMaker ML instances. You can now use SageMaker Inference Recommender to deploy your model to an endpoint that delivers the best performance and minimizes cost. You can get started with SageMaker Inference Recommender in minutes while selecting an instance type and get recommendations for optimal endpoint configurations within hours, eliminating weeks of manual testing and tuning time. With SageMaker Inference Recommender, you pay only for the SageMaker ML instances used during load testing, and there are no additional charges.
Why should I use SageMaker Inference Recommender?
How does SageMaker Inference Recommender work with other AWS services?
Can SageMaker Inference Recommender support multi-model endpoints or multi-container endpoints?
No, we currently support only a single model per endpoint.
What type of endpoints does SageMaker Inference Recommender support?
Currently we support only real-time endpoints.
Can I use SageMaker Inference Recommender in one Region and benchmark in different Regions?
We support all Regions supported by Amazon SageMaker, except the AWS China Regions.
Does SageMaker Inference Recommender support Amazon EC2 Inf1 instances?
Yes, we support all types of containers. Amazon EC2 Inf1, based on the AWS Inferentia chip, requires a compiled model artifact using either the Neuron compiler or Amazon SageMaker Neo. Once you have a compiled model for an Inferentia target and the associated container image URI, you can use SageMaker Inference Recommender to benchmark different Inferentia instance types.
What is Amazon SageMaker Model Monitor?
SageMaker Model Monitor allows developers to detect and remediate concept drift. SageMaker Model Monitor automatically detects concept drift in deployed models and provides detailed alerts that help identify the source of the problem. All models trained in SageMaker AI automatically emit key metrics that can be collected and viewed in SageMaker Studio. From inside SageMaker Studio, you can configure data to be collected, how to view it, and when to receive alerts.
Can I access the infrastructure that SageMaker AI runs on?
No. SageMaker AI operates the compute infrastructure on your behalf, allowing it to perform health checks, apply security patches, and do other routine maintenance. You can also deploy the model artifacts from training with custom inference code in your own hosting environment.
How do I scale the size and performance of a SageMaker AI model once in production?
SageMaker AI hosting automatically scales to the performance needed for your application using Application Auto Scaling. In addition, you can manually change the instance number and type without incurring downtime by modifying the endpoint configuration.
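As a sketch of how endpoint autoscaling is wired up, the Application Auto Scaling service registers the endpoint variant as a scalable target and attaches a target-tracking policy on the predefined SageMakerVariantInvocationsPerInstance metric. The endpoint and variant names below are placeholders, and the boto3 calls are left commented out.

```python
def build_scaling_config(endpoint_name, variant_name, min_cap, max_cap, target_invocations):
    """Build the scalable-target and target-tracking-policy request bodies
    for the Application Auto Scaling APIs."""
    resource_id = f"endpoint/{endpoint_name}/variant/{variant_name}"
    target = {
        "ServiceNamespace": "sagemaker",
        "ResourceId": resource_id,
        "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
        "MinCapacity": min_cap,
        "MaxCapacity": max_cap,
    }
    policy = {
        "PolicyName": f"{endpoint_name}-target-tracking",
        "ServiceNamespace": "sagemaker",
        "ResourceId": resource_id,
        "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            "TargetValue": float(target_invocations),
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
            },
        },
    }
    return target, policy

target, policy = build_scaling_config("my-endpoint", "AllTraffic", 1, 4, 70)
# client = boto3.client("application-autoscaling")
# client.register_scalable_target(**target)
# client.put_scaling_policy(**policy)
```

With this configuration, the variant scales between one and four instances to hold invocations per instance near the target value.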
How do I monitor my SageMaker AI production environment?
SageMaker AI emits performance metrics to Amazon CloudWatch Metrics so you can track metrics, set alarms, and automatically react to changes in production traffic. In addition, SageMaker AI writes logs to Amazon CloudWatch Logs to let you monitor and troubleshoot your production environment.
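For example, an alarm on the endpoint's ModelLatency metric (reported in the AWS/SageMaker CloudWatch namespace, in microseconds) can flag latency regressions in production. This is a hedged sketch: the endpoint name, variant name, and threshold are placeholders, and the put_metric_alarm call is commented out.

```python
def build_latency_alarm(endpoint_name, variant_name, threshold_us):
    """Request body for cloudwatch.put_metric_alarm on SageMaker ModelLatency."""
    return {
        "AlarmName": f"{endpoint_name}-high-latency",
        "Namespace": "AWS/SageMaker",
        "MetricName": "ModelLatency",  # reported in microseconds
        "Dimensions": [
            {"Name": "EndpointName", "Value": endpoint_name},
            {"Name": "VariantName", "Value": variant_name},
        ],
        "Statistic": "Average",
        "Period": 300,
        "EvaluationPeriods": 3,
        "Threshold": float(threshold_us),
        "ComparisonOperator": "GreaterThanThreshold",
    }

alarm = build_latency_alarm("my-endpoint", "AllTraffic", 500_000)  # 500 ms
# boto3.client("cloudwatch").put_metric_alarm(**alarm)
```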
What kinds of models can be hosted with SageMaker AI?
SageMaker AI can host any model that adheres to the documented specification for inference Docker images. This includes models created from SageMaker AI model artifacts and inference code.
How many concurrent real-time API requests does SageMaker AI support?
SageMaker AI is designed to scale to a large number of transactions per second. The precise number varies based on the deployed model and the number and type of instances to which the model is deployed.
How does SageMaker AI support fully managed model hosting and management?
As a fully managed service, Amazon SageMaker AI takes care of setting up and managing instances, software version compatibility, and patching versions. It also provides built-in metrics and logs for endpoints that you can use to monitor and receive alerts. SageMaker AI tools and guided workflows simplify the entire ML model packaging and deployment process, making it easier to optimize endpoints for the desired performance and cost. You can deploy your ML models, including foundation models, with just a few clicks within SageMaker Studio or by using the SageMaker Python SDK.
What is Batch Transform?
Batch Transform enables you to run predictions on large or small batch data. There is no need to break down the dataset into multiple chunks or manage real-time endpoints. With a simple API, you can request predictions for a large number of data records and transform the data quickly and easily.
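As a concrete sketch, a batch transform job is defined by a model name, an S3 input prefix, and an S3 output path, submitted through the SageMaker create_transform_job API. The bucket paths, job name, and instance type below are placeholders, and the API call is commented out.

```python
def build_transform_job(job_name, model_name, s3_in, s3_out, instance_type="ml.m5.xlarge"):
    """Request body for sagemaker.create_transform_job."""
    return {
        "TransformJobName": job_name,
        "ModelName": model_name,
        "TransformInput": {
            "DataSource": {"S3DataSource": {"S3DataType": "S3Prefix", "S3Uri": s3_in}},
            "ContentType": "text/csv",
            "SplitType": "Line",  # split large files into per-record requests
        },
        "TransformOutput": {"S3OutputPath": s3_out},
        "TransformResources": {"InstanceType": instance_type, "InstanceCount": 1},
    }

job = build_transform_job(
    "nightly-scoring",
    "my-model",
    "s3://my-bucket/input/",
    "s3://my-bucket/output/",
)
# boto3.client("sagemaker").create_transform_job(**job)
```

SageMaker AI provisions the instances, splits the input, runs inference, writes the results to the output path, and tears the instances down when the job finishes.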
What deployment endpoint options does SageMaker AI support?
SageMaker AI supports the following endpoint options:
Single-model endpoints - One model on a container hosted on dedicated instances, or serverless, for low latency and high throughput.
Multi-model endpoints - Host multiple models on shared infrastructure for cost-effectiveness and to maximize utilization. You can control how much compute and memory each model can use so that each model has access to the resources it needs to run efficiently.
Serial inference pipelines - Multiple containers sharing dedicated instances and executing in sequence. You can use an inference pipeline to combine preprocessing, predictions, and post-processing data science tasks.
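To illustrate the multi-model option: the model definition sets Mode to "MultiModel" and points the container at an S3 prefix holding many model artifacts, and each invocation then names the artifact to serve via TargetModel. The ARNs, image URI, and names below are placeholders, and the boto3 calls are commented out.

```python
def build_multi_model(model_name, role_arn, image_uri, model_data_prefix):
    """Model definition for a multi-model endpoint: the container loads
    artifacts on demand from the given S3 prefix."""
    return {
        "ModelName": model_name,
        "ExecutionRoleArn": role_arn,
        "PrimaryContainer": {
            "Image": image_uri,
            "Mode": "MultiModel",
            "ModelDataUrl": model_data_prefix,  # an S3 prefix, not a single tar.gz
        },
    }

model = build_multi_model(
    "my-multi-model",
    "arn:aws:iam::111122223333:role/SageMakerRole",
    "111122223333.dkr.ecr.us-east-1.amazonaws.com/my-image:latest",
    "s3://my-bucket/models/",
)
# boto3.client("sagemaker").create_model(**model)
# At invocation time, pick the artifact per request:
# boto3.client("sagemaker-runtime").invoke_endpoint(
#     EndpointName="my-endpoint", TargetModel="model-a.tar.gz", Body=payload)
```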
What is autoscaling for elasticity?
What is Amazon SageMaker Edge Manager?
SageMaker Edge Manager makes it easier to optimize, secure, monitor, and maintain ML models on fleets of edge devices such as smart cameras, robots, personal computers, and mobile devices. SageMaker Edge Manager helps ML developers operate ML models on a variety of edge devices at scale.
How do I get started with SageMaker Edge Manager?
To get started with SageMaker Edge Manager, you need to compile and package your trained ML models in the cloud, register your devices, and prepare your devices with the SageMaker Edge Manager SDK. To prepare your model for deployment, SageMaker Edge Manager uses SageMaker Neo to compile your model for your target edge hardware. Once a model is compiled, SageMaker Edge Manager signs the model with an AWS generated key, then packages the model with its runtime and your necessary credentials to get it ready for deployment. On the device side, you register your device with SageMaker Edge Manager, download the SageMaker Edge Manager SDK, and then follow the instructions to install the SageMaker Edge Manager agent on your devices. The tutorial notebook provides a step-by-step example of how you can prepare the models and connect your models on edge devices with SageMaker Edge Manager.
What devices are supported by SageMaker Edge Manager?
SageMaker Edge Manager supports common CPU (ARM, x86) and GPU (ARM, Nvidia) based devices with Linux and Windows operating systems. Over time, SageMaker Edge Manager will expand to support more embedded processors and mobile platforms that are also supported by SageMaker Neo.
Do I need to use SageMaker AI to train my model in order to use SageMaker Edge Manager?
No, you do not. You can train your models elsewhere or use a pre-trained model from open source or from your model vendor.
Do I need to use SageMaker Neo to compile my model in order to use SageMaker Edge Manager?
Yes, you do. SageMaker Neo converts and compiles your models into an executable that you can then package and deploy on your edge devices. Once the model package is deployed, the SageMaker Edge Manager agent will unpack the model package and run the model on the device.
How do I deploy models to the edge devices?
SageMaker Edge Manager stores the model package in your specified Amazon S3 bucket. You can use the over-the-air (OTA) deployment feature provided by AWS IoT Greengrass or any other deployment mechanism of your choice to deploy the model package from your S3 bucket to the devices.
How is SageMaker Edge Manager SDK different from the SageMaker Neo runtime (dlr)?
Neo dlr is an open-source runtime that only runs models compiled by the SageMaker Neo service. Compared to the open-source dlr, the SageMaker Edge Manager SDK includes an enterprise-grade on-device agent with additional security, model management, and model serving features. The SageMaker Edge Manager SDK is suitable for production deployment at scale.
How is SageMaker Edge Manager related to AWS IoT Greengrass?
SageMaker Edge Manager and AWS IoT Greengrass can work together in your IoT solution. Once your ML model is packaged with SageMaker Edge Manager, you can use the AWS IoT Greengrass OTA update feature to deploy the model package to your device. AWS IoT Greengrass allows you to monitor your IoT devices remotely, while SageMaker Edge Manager helps you monitor and maintain the ML models on the devices.
How is SageMaker Edge Manager related to AWS Panorama? When should I use SageMaker Edge Manager versus AWS Panorama?
AWS offers the most breadth and depth of capabilities for running models on edge devices. We have services to support a wide range of use cases, including computer vision, voice recognition, and predictive maintenance.
For companies looking to run computer vision on edge devices such as cameras and appliances, you can use AWS Panorama. AWS Panorama offers ready-to-deploy computer vision applications for edge devices. It’s easy to get started with AWS Panorama by logging into the cloud console, specifying the model you would like to use in Amazon S3 or in SageMaker AI, and then writing business logic as a Python script. AWS Panorama compiles the model for the target device and creates an application package so it can be deployed to your devices with just a few clicks. In addition, independent software vendors who want to build their own custom applications can use the AWS Panorama SDK, and device manufacturers can use the Device SDK to certify their devices for AWS Panorama.
Customers who want to build their own models and have more granular control over model features can use SageMaker Edge Manager. SageMaker Edge Manager is a managed service to prepare, run, monitor, and update ML models across fleets of edge devices such as smart cameras, smart speakers, and robots for any use case such as natural language processing, fraud detection, and predictive maintenance. SageMaker Edge Manager is for ML edge developers who want control over their model, including engineering different model features and monitoring models for drift. Any ML edge developer can use SageMaker Edge Manager through the SageMaker AI console and the SageMaker AI APIs. SageMaker Edge Manager brings the capabilities of SageMaker AI to build, train, and deploy models in the cloud to edge devices.
In which Regions is SageMaker Edge Manager available?
SageMaker Edge Manager is available in six Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), and Asia Pacific (Tokyo). For details, see the AWS Regional Services list.
What is Amazon SageMaker Neo?
SageMaker Neo enables ML models to train once and run anywhere in the cloud and at the edge. SageMaker Neo automatically optimizes models built with popular DL frameworks for deployment on multiple hardware platforms. Optimized models run up to 25 times faster and consume less than a tenth of the resources of typical ML models.
How do I get started with SageMaker Neo?
To get started with SageMaker Neo, sign in to the SageMaker AI console, choose a trained model, follow the example to compile models, and deploy the resulting model onto your target hardware platform.
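A Neo compilation job can also be started programmatically through the SageMaker create_compilation_job API. The sketch below uses placeholder names, S3 paths, framework, input shape, and target device; check the Neo documentation for the values that match your model, and note the actual API call is commented out.

```python
def build_compilation_job(job_name, role_arn, s3_model, s3_out, target_device):
    """Request body for sagemaker.create_compilation_job (SageMaker Neo).
    The framework and data input shape must match the trained model."""
    return {
        "CompilationJobName": job_name,
        "RoleArn": role_arn,
        "InputConfig": {
            "S3Uri": s3_model,
            "DataInputConfig": '{"input": [1, 224, 224, 3]}',  # placeholder shape
            "Framework": "TENSORFLOW",
        },
        "OutputConfig": {
            "S3OutputLocation": s3_out,
            "TargetDevice": target_device,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 900},
    }

job = build_compilation_job(
    "my-neo-job",
    "arn:aws:iam::111122223333:role/SageMakerRole",
    "s3://my-bucket/model.tar.gz",
    "s3://my-bucket/compiled/",
    "jetson_nano",  # placeholder target device
)
# boto3.client("sagemaker").create_compilation_job(**job)
```

The compiled artifact is written to the output location, ready to be deployed on the target hardware.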
What are the major components of SageMaker Neo?
SageMaker Neo contains two major components: a compiler and a runtime. First, the SageMaker Neo compiler reads models exported by different frameworks. It then converts the framework-specific functions and operations into a framework-agnostic intermediate representation. Next, it performs a series of optimizations. Then, the compiler generates binary code for the optimized operations and writes them to a shared object library. The compiler also saves the model definition and parameters into separate files. During execution, the SageMaker Neo runtime loads the artifacts generated by the compiler (the model definition, parameters, and the shared object library) to run the model.
Do I need to use SageMaker AI to train my model in order to use SageMaker Neo to convert the model?
No. You can train models elsewhere and use SageMaker Neo to optimize them for SageMaker ML instances or AWS IoT Greengrass supported devices.
Which models does SageMaker Neo support?
Currently, SageMaker Neo supports the most popular DL models that power computer vision applications and the most popular decision tree models used in SageMaker AI today. SageMaker Neo optimizes the performance of AlexNet, ResNet, VGG, Inception, MobileNet, SqueezeNet, and DenseNet models trained in MXNet and TensorFlow, and classification and random cut forest models trained in XGBoost.
Which hardware platforms does SageMaker Neo support?
You can find the lists of supported cloud instances, edge devices, and framework versions in the SageMaker Neo documentation.
In which Regions is SageMaker Neo available?
To see a list of supported Regions, view the AWS Regional Services list.
Amazon SageMaker Savings Plans
What are Amazon SageMaker Savings Plans?
SageMaker Savings Plans offer a flexible usage-based pricing model for SageMaker AI in exchange for a commitment to a consistent amount of usage (measured in $/hour) for a one- or three-year term. SageMaker Savings Plans provide the most flexibility and help to reduce your costs by up to 64%. These plans automatically apply to eligible SageMaker ML instance usage, including SageMaker Studio notebooks, SageMaker On-Demand notebooks, SageMaker Processing, SageMaker Data Wrangler, SageMaker Training, SageMaker Real-Time Inference, and SageMaker Batch Transform, regardless of instance family, size, or Region. For example, you can change usage from a CPU instance ml.c5.xlarge running in US East (Ohio) to an ml.Inf1 instance in US West (Oregon) for inference workloads at any time and automatically continue to pay the Savings Plans price.
Why should I use SageMaker Savings Plans?
If you have a consistent amount of SageMaker AI instance usage (measured in $/hour) and use multiple SageMaker AI components or expect your technology configuration (such as instance family, or Region) to change over time, SageMaker Savings Plans make it simpler to maximize your savings while providing flexibility to change the underlying technology configuration based on application needs or new innovation. The Savings Plans rate applies automatically to all eligible ML instance usage with no manual modifications required.
How can I get started with SageMaker Savings Plans?
How are Savings Plans for SageMaker AI different from Compute Savings Plans for Amazon EC2?
The difference between Savings Plans for SageMaker AI and Savings Plans for Amazon EC2 is in the services they include. SageMaker Savings Plans apply only to SageMaker ML Instance usage.
How do Savings Plans work with AWS Organizations/Consolidated Billing?
Savings Plans can be purchased in any account within an AWS Organization/Consolidated Billing family. By default, the benefit provided by Savings Plans is applicable to usage across all accounts within an AWS Organization/Consolidated Billing family. However, you can also choose to restrict the benefit of Savings Plans to only the account that purchased them.