Artificial Intelligence
Category: Amazon SageMaker
Dive deep into Amazon SageMaker Studio Classic Notebooks architecture
NOTE: Amazon SageMaker Studio and Amazon SageMaker Studio Classic are two of the machine learning environments that you can use to interact with SageMaker. If your domain was created after November 30, 2023, Studio is your default experience. If your domain was created before November 30, 2023, Amazon SageMaker Studio Classic is your default experience. […]
Use a SageMaker Pipeline Lambda step for lightweight model deployments
With Amazon SageMaker Pipelines, you can create, automate, and manage end-to-end machine learning (ML) workflows at scale. SageMaker Projects build on SageMaker Pipelines by providing several MLOps templates that automate model building and deployment pipelines using continuous integration and continuous delivery (CI/CD). To help you get started, SageMaker Pipelines provides many predefined step types, such […]
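For illustration, here is a minimal sketch of what a Lambda deployment step can look like with the SageMaker Python SDK, assuming a deployment Lambda function and a pipeline execution role already exist; the ARNs, names, and inputs below are placeholders, not values from the post:

```python
# Minimal sketch: a SageMaker Pipeline with a single Lambda step for a
# lightweight deployment. All ARNs and names are placeholders.
import sagemaker
from sagemaker.lambda_helper import Lambda
from sagemaker.workflow.lambda_step import (
    LambdaStep,
    LambdaOutput,
    LambdaOutputTypeEnum,
)
from sagemaker.workflow.pipeline import Pipeline

session = sagemaker.Session()

# Wrap an existing Lambda function that performs the deployment logic.
deploy_lambda = Lambda(
    function_arn="arn:aws:lambda:us-east-1:111122223333:function:deploy-model",  # placeholder
    session=session,
)

deploy_step = LambdaStep(
    name="LightweightDeploy",
    lambda_func=deploy_lambda,
    inputs={"model_package_arn": "arn:aws:sagemaker:us-east-1:111122223333:model-package/example/1"},  # placeholder
    outputs=[
        LambdaOutput(output_name="endpoint_name", output_type=LambdaOutputTypeEnum.String)
    ],
)

pipeline = Pipeline(
    name="lambda-deploy-pipeline",
    steps=[deploy_step],
    sagemaker_session=session,
)

# pipeline.upsert(role_arn="arn:aws:iam::111122223333:role/PipelineExecutionRole")  # placeholder role
# pipeline.start()
```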
Access an Amazon SageMaker Studio notebook from a corporate network
Amazon SageMaker Studio is the first fully integrated development environment (IDE) for machine learning. It provides a single, web-based visual interface where you can perform all ML development steps required to build, train, and deploy models. You can quickly upload data, create new notebooks, train and tune models, move back and forth between steps to […]
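As a rough sketch of the kind of setup this scenario requires, the snippet below creates a Studio domain in VPC-only mode with boto3 so that notebook traffic stays inside your VPC rather than traversing the public internet; the domain name, VPC ID, subnet, and role ARN are placeholders:

```python
# Minimal sketch: create a Studio domain with VPC-only network access.
# Names, IDs, and the execution role ARN are placeholders.
import boto3

sm = boto3.client("sagemaker")

response = sm.create_domain(
    DomainName="corp-studio-domain",             # placeholder
    AuthMode="IAM",
    AppNetworkAccessType="VpcOnly",              # route Studio traffic through your VPC
    VpcId="vpc-0123456789abcdef0",               # placeholder VPC reachable from the corporate network
    SubnetIds=["subnet-0123456789abcdef0"],      # placeholder private subnet(s)
    DefaultUserSettings={
        "ExecutionRole": "arn:aws:iam::111122223333:role/StudioExecutionRole"  # placeholder
    },
)
print(response["DomainArn"])
```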
Migrate your work to an Amazon SageMaker notebook instance with Amazon Linux 2
Amazon SageMaker notebook instances now support Amazon Linux 2, so you can create a new Amazon SageMaker notebook instance to start developing your machine learning (ML) models with the latest updates. An obvious question is: what do I need to do to migrate my work from an existing notebook instance that runs on Amazon […]
Amazon SageMaker notebook instances now support Amazon Linux 2
February 8th, 2022: Updated with AWS CloudFormation support to create an Amazon Linux 2 based SageMaker notebook instance. Today, we’re excited to announce that Amazon SageMaker notebook instances support Amazon Linux 2. You can now choose Amazon Linux 2 for your new SageMaker notebook instance to take advantage of the latest update and support provided […]
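For reference, a minimal boto3 sketch of creating an Amazon Linux 2 based notebook instance looks roughly like the following; the instance name and role ARN are placeholders, and the PlatformIdentifier value selects the Amazon Linux 2 platform:

```python
# Minimal sketch: create a notebook instance on Amazon Linux 2 by setting
# PlatformIdentifier. Instance name and role ARN are placeholders.
import boto3

sm = boto3.client("sagemaker")

sm.create_notebook_instance(
    NotebookInstanceName="al2-notebook",                                # placeholder
    InstanceType="ml.t3.medium",
    RoleArn="arn:aws:iam::111122223333:role/SageMakerNotebookRole",     # placeholder
    PlatformIdentifier="notebook-al2-v1",  # selects the Amazon Linux 2 platform
    VolumeSizeInGB=20,
)
```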
Secure multi-account model deployment with Amazon SageMaker: Part 2
In Part 1 of this series of posts, we offered step-by-step guidance for using Amazon SageMaker, SageMaker Projects, and Amazon SageMaker Pipelines, along with AWS services such as Amazon Virtual Private Cloud (Amazon VPC), AWS CloudFormation, AWS Key Management Service (AWS KMS), and AWS Identity and Access Management (IAM), to implement secure architectures for multi-account enterprise […]
Secure multi-account model deployment with Amazon SageMaker: Part 1
Amazon SageMaker Studio is a web-based, integrated development environment (IDE) for machine learning (ML) that lets you build, train, debug, deploy, and monitor your ML models. Although Studio provides all the tools you need to take your models from experimentation to production, you need a robust and secure model deployment process. This process must fulfill […]
Create Amazon SageMaker projects using third-party source control and Jenkins
Launched at AWS re:Invent 2020, Amazon SageMaker Pipelines is the first purpose-built, easy-to-use continuous integration and continuous delivery (CI/CD) service for machine learning (ML). With Pipelines, you can create, automate, and manage end-to-end ML workflows at scale. You can integrate Pipelines with existing CI/CD tooling. This includes integration with existing source control systems such as […]
Patterns for multi-account, hub-and-spoke Amazon SageMaker model registry
Data science workflows have to pass through multiple stages as they progress from experimentation to production. A common approach involves separate accounts dedicated to different phases of the AI/ML workflow (experimentation, development, and production). In addition, issues related to data access control may also mandate that workflows for different AI/ML applications be hosted on […]
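As one illustrative sketch of a hub-and-spoke arrangement, the hub account can attach a resource policy to a model package group so that a spoke account can discover and deploy its model versions; the account IDs, group name, and action list below are assumptions for illustration only:

```python
# Minimal sketch: grant a spoke account read/deploy access to a model package
# group registered in the hub account. IDs, names, and the action list are
# placeholders/assumptions.
import json
import boto3

sm = boto3.client("sagemaker")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSpokeAccountAccess",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::444455556666:root"},  # placeholder spoke account
            "Action": [
                "sagemaker:DescribeModelPackage",
                "sagemaker:DescribeModelPackageGroup",
                "sagemaker:ListModelPackages",
                "sagemaker:CreateModel",
            ],
            "Resource": "arn:aws:sagemaker:us-east-1:111122223333:model-package/my-model-group/*",  # placeholder
        }
    ],
}

sm.put_model_package_group_policy(
    ModelPackageGroupName="my-model-group",   # placeholder
    ResourcePolicy=json.dumps(policy),
)
```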
Deploy multiple serving containers on a single instance using Amazon SageMaker multi-container endpoints
Amazon SageMaker is a fully managed service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning (ML) models built on different frameworks. SageMaker real-time inference endpoints are fully managed and can serve predictions in real time with low latency. This post introduces SageMaker support for direct multi-container endpoints. […]
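A minimal boto3 sketch of this pattern is shown below, assuming placeholder image URIs, model artifacts, and an execution role; setting InferenceExecutionConfig to Direct mode allows a specific container to be targeted at invocation time:

```python
# Minimal sketch: register two serving containers behind one model in Direct
# mode, then target one container by hostname when invoking the endpoint.
# Image URIs, S3 paths, names, and the role ARN are placeholders.
import boto3

sm = boto3.client("sagemaker")
runtime = boto3.client("sagemaker-runtime")

sm.create_model(
    ModelName="multi-container-model",                                          # placeholder
    ExecutionRoleArn="arn:aws:iam::111122223333:role/SageMakerExecutionRole",   # placeholder
    Containers=[
        {
            "ContainerHostname": "tensorflow-model",
            "Image": "<tf-serving-image-uri>",                 # placeholder
            "ModelDataUrl": "s3://my-bucket/tf-model.tar.gz",  # placeholder
        },
        {
            "ContainerHostname": "pytorch-model",
            "Image": "<torchserve-image-uri>",                 # placeholder
            "ModelDataUrl": "s3://my-bucket/pt-model.tar.gz",  # placeholder
        },
    ],
    InferenceExecutionConfig={"Mode": "Direct"},  # invoke containers individually
)

# After creating an endpoint config and endpoint for this model, route a request
# to one container by its hostname:
response = runtime.invoke_endpoint(
    EndpointName="multi-container-endpoint",   # placeholder
    TargetContainerHostname="pytorch-model",
    ContentType="application/json",
    Body=b'{"inputs": [1, 2, 3]}',
)
print(response["Body"].read())
```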