Amazon SageMaker makes it easy for IT engineers to deploy ML models into production. You can create and automate workflows that support the development of thousands of models with scalable infrastructure and continuous integration and continuous delivery (CI/CD) pipelines.
Amazon SageMaker lets you operate in a fully secure ML environment from day one. You can use a comprehensive set of security features, including infrastructure security, access control, data protection, and up-to-date compliance certifications across a broad range of industry verticals.
Managed spot training
Amazon SageMaker provides Managed Spot Training to help you reduce training costs by up to 90%. This capability uses Amazon EC2 Spot Instances, which are spare AWS compute capacity. Training jobs run automatically when compute capacity becomes available and are made resilient to interruptions caused by changes in capacity, letting you save on cost whenever you have flexibility in when training jobs run.
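As a minimal sketch, Spot training is enabled through a few fields on the low-level CreateTrainingJob API. The job name, image URI, role ARN, and S3 paths below are placeholders, not real resources:

```python
# Sketch: request parameters for a Managed Spot Training job via the
# CreateTrainingJob API (boto3). All names and URIs are placeholders.
spot_training_request = {
    "TrainingJobName": "my-spot-job",                  # placeholder
    "AlgorithmSpecification": {
        "TrainingImage": "<training-image-uri>",       # placeholder
        "TrainingInputMode": "File",
    },
    "RoleArn": "<execution-role-arn>",                 # placeholder
    "ResourceConfig": {
        "InstanceType": "ml.p3.2xlarge",
        "InstanceCount": 1,
        "VolumeSizeInGB": 50,
    },
    # Spot settings: opt in to Spot capacity and allow SageMaker to wait
    # for it; MaxWaitTimeInSeconds must be >= MaxRuntimeInSeconds.
    "EnableManagedSpotTraining": True,
    "StoppingCondition": {
        "MaxRuntimeInSeconds": 3600,   # max time the job may actually run
        "MaxWaitTimeInSeconds": 7200,  # run time plus time spent waiting for Spot
    },
    # Checkpointing lets an interrupted job resume rather than restart.
    "CheckpointConfig": {"S3Uri": "s3://<bucket>/checkpoints/"},   # placeholder
    "OutputDataConfig": {"S3OutputPath": "s3://<bucket>/output/"}, # placeholder
}
# In a real setup:
# boto3.client("sagemaker").create_training_job(**spot_training_request)
```

The checkpoint configuration is what makes interruption-tolerance practical: if Spot capacity is reclaimed, SageMaker restarts the job from the last saved checkpoint instead of from scratch.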
Integration with Kubernetes
You can use the fully managed capabilities of Amazon SageMaker for machine learning while continuing to use Kubernetes for orchestration and pipeline management. SageMaker lets you train and deploy models using Amazon SageMaker Operators for Kubernetes. In addition, Amazon SageMaker Components for Kubeflow Pipelines let you take advantage of powerful SageMaker features such as data labeling, fully managed large-scale hyperparameter tuning, distributed training jobs, and one-click secure and scalable model deployment, without having to configure and manage Kubernetes clusters specifically to run machine learning jobs.
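With the operators, a training job is expressed as a Kubernetes custom resource whose spec mirrors the CreateTrainingJob API. The sketch below writes such a manifest as a Python dict for readability; the field names approximate the operator's CRD schema, and the job name, role ARN, image, and S3 paths are placeholders:

```python
# Illustrative TrainingJob custom resource for the SageMaker Operators for
# Kubernetes, expressed as a Python dict. Values in <angle brackets> are
# placeholders; field names approximate the operator's CRD schema.
training_job_manifest = {
    "apiVersion": "sagemaker.aws.amazon.com/v1",
    "kind": "TrainingJob",
    "metadata": {"name": "xgboost-example"},
    "spec": {
        "trainingJobName": "xgboost-example",
        "roleArn": "<execution-role-arn>",            # placeholder
        "algorithmSpecification": {
            "trainingImage": "<training-image-uri>",  # placeholder
            "trainingInputMode": "File",
        },
        "resourceConfig": {
            "instanceType": "ml.m5.xlarge",
            "instanceCount": 1,
            "volumeSizeInGB": 20,
        },
        "stoppingCondition": {"maxRuntimeInSeconds": 3600},
        "outputDataConfig": {"s3OutputPath": "s3://<bucket>/output/"},
    },
}
# Applied as YAML with kubectl, the operator translates this spec into a
# CreateTrainingJob call against the SageMaker API, so the training itself
# runs on SageMaker-managed infrastructure rather than in your cluster.
```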
Additional compute for inference
Amazon Elastic Inference allows you to attach just the right amount of GPU-powered inference acceleration to any Amazon SageMaker instance type with no code changes. You can choose the instance type that is best suited to the overall CPU and memory needs of your application, and then separately configure the amount of inference acceleration that you need to use resources efficiently and to reduce the cost of running inference.
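Concretely, the accelerator is attached as a parameter on the endpoint configuration, separate from the instance type. A minimal sketch of a CreateEndpointConfig request, with the config and model names as placeholders:

```python
# Sketch: endpoint configuration attaching Elastic Inference acceleration
# to a CPU instance. Names are placeholders.
endpoint_config_request = {
    "EndpointConfigName": "my-ei-endpoint-config",     # placeholder
    "ProductionVariants": [
        {
            "VariantName": "AllTraffic",
            "ModelName": "my-model",                   # placeholder
            # Pick the instance for the app's CPU/memory needs...
            "InstanceType": "ml.m5.xlarge",
            "InitialInstanceCount": 1,
            # ...and size the inference acceleration independently.
            "AcceleratorType": "ml.eia2.medium",
        }
    ],
}
# In a real setup:
# boto3.client("sagemaker").create_endpoint_config(**endpoint_config_request)
```

Because the two choices are decoupled, you can scale GPU acceleration up or down (for example to `ml.eia2.large`) without changing the host instance type or your model code.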
Amazon SageMaker makes it easy to deploy your trained model into production with a single click so that you can start generating predictions on real-time or batch data. You can deploy your model with one click onto auto-scaling Amazon ML instances across multiple Availability Zones for high redundancy. SageMaker launches the instances, deploys your model, and sets up a secure HTTPS endpoint for your application.
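At the API level, the deployment and prediction steps look like the following sketch. The endpoint names and payload are placeholders, and the actual boto3 calls are shown as comments since they require live AWS resources:

```python
# Sketch: deploy a model to an HTTPS endpoint, then invoke it.
# Names and payload are placeholders.
create_endpoint_request = {
    "EndpointName": "my-endpoint",               # placeholder
    "EndpointConfigName": "my-endpoint-config",  # created beforehand
}
# sagemaker = boto3.client("sagemaker")
# sagemaker.create_endpoint(**create_endpoint_request)
# SageMaker now launches the instances and deploys the model.

# Once the endpoint is InService, applications call it through the
# runtime client over HTTPS:
invoke_request = {
    "EndpointName": "my-endpoint",
    "ContentType": "text/csv",
    "Body": b"5.1,3.5,1.4,0.2",  # example payload; format depends on your model
}
# runtime = boto3.client("sagemaker-runtime")
# response = runtime.invoke_endpoint(**invoke_request)
```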
Amazon SageMaker provides a scalable and cost-effective way to deploy large numbers of custom machine learning models. SageMaker multi-model endpoints enable you to deploy multiple models on a single endpoint with a single click and serve them all from a single serving container.
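As a minimal sketch, a multi-model endpoint is created from a model whose container runs in `MultiModel` mode and points at an S3 prefix holding many model artifacts; each request then names the specific artifact to invoke. Names, URIs, and the artifact filename below are placeholders:

```python
# Sketch: a multi-model model definition plus a per-request model selection.
# Names and URIs are placeholders.
create_model_request = {
    "ModelName": "my-multi-model",
    "PrimaryContainer": {
        "Image": "<serving-image-uri>",           # placeholder
        "Mode": "MultiModel",                     # serve many models from one container
        "ModelDataUrl": "s3://<bucket>/models/",  # prefix holding all model artifacts
    },
    "ExecutionRoleArn": "<execution-role-arn>",   # placeholder
}
# boto3.client("sagemaker").create_model(**create_model_request)

# At inference time, TargetModel selects which artifact (relative to the
# ModelDataUrl prefix) handles this request; SageMaker loads it on demand.
invoke_request = {
    "EndpointName": "my-multi-model-endpoint",    # placeholder
    "TargetModel": "model-42.tar.gz",             # placeholder artifact name
    "ContentType": "text/csv",
    "Body": b"5.1,3.5,1.4,0.2",                   # example payload
}
# boto3.client("sagemaker-runtime").invoke_endpoint(**invoke_request)
```

Because rarely used models are loaded on demand and share the same instances, this design amortizes endpoint cost across the whole model fleet instead of paying for one endpoint per model.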