Productionize Foundation Models from SageMaker Canvas
Amazon SageMaker Canvas now supports deploying Foundation Models (FMs) to SageMaker real-time inference endpoints, allowing you to bring generative AI capabilities into production and consume them outside the Canvas workspace. SageMaker Canvas is a no-code workspace that enables analysts and citizen data scientists to generate accurate ML predictions and use generative AI capabilities.
SageMaker Canvas provides access to FMs powered by Amazon Bedrock and SageMaker JumpStart, and supports Retrieval Augmented Generation (RAG)-based customization and fine-tuning of FMs. Starting today, you can deploy FMs powered by SageMaker JumpStart, such as Falcon-7B and Llama 2, to SageMaker endpoints, making it easier to integrate generative AI capabilities into your applications outside the SageMaker Canvas workspace. FMs powered by Amazon Bedrock can already be accessed through a single API outside the SageMaker workspace. By simplifying the deployment process, SageMaker Canvas accelerates time-to-value and ensures a smooth transition from experimentation to production.
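For context, here is a minimal sketch of what that single-API access to Bedrock-powered FMs can look like from application code. It assumes the boto3 SDK; the model ID and the request/response fields are illustrative only, since each Bedrock model defines its own payload schema.

```python
import json

import boto3

# Illustrative example: calling a Bedrock-powered FM through the InvokeModel API
# from outside the SageMaker Canvas workspace. Model ID and payload are assumptions.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",  # assumed model ID for illustration
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"inputText": "Summarize the benefits of no-code ML tools."}),
)

# The response body is a stream of JSON whose structure depends on the chosen model.
print(json.loads(response["body"].read()))
```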
To get started, log in to SageMaker Canvas to access the FMs powered by SageMaker JumpStart. Select the desired model and deploy it with the appropriate endpoint configuration, such as keeping the endpoint running indefinitely or only for a specific duration. SageMaker Inference charges apply to deployed models. New users can access the latest version by launching SageMaker Canvas directly from the AWS console; existing users can access it by clicking “Log Out” and logging back in.
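Once the model is deployed, your applications can call the resulting SageMaker real-time endpoint directly. The sketch below uses boto3 to invoke such an endpoint; the endpoint name and the "inputs"/"parameters" payload are assumptions for illustration, and the exact request schema depends on the JumpStart model you deployed (consult the model card for its expected format).

```python
import json

import boto3

# Hypothetical endpoint name for a model deployed from SageMaker Canvas;
# replace with the name shown for your deployment.
ENDPOINT_NAME = "canvas-falcon-7b-endpoint"

runtime = boto3.client("sagemaker-runtime")

# Assumed text-generation payload; many JumpStart LLMs accept this shape,
# but verify against your model's documentation.
payload = {
    "inputs": "Write a short product description for a no-code ML workspace.",
    "parameters": {"max_new_tokens": 128, "temperature": 0.7},
}

response = runtime.invoke_endpoint(
    EndpointName=ENDPOINT_NAME,
    ContentType="application/json",
    Body=json.dumps(payload),
)

# The response body is returned as bytes; decode and parse it as JSON.
print(json.loads(response["Body"].read()))
```

Because the endpoint is a standard SageMaker real-time endpoint, the same call works from any service or application that can reach the SageMaker Runtime API, independent of the Canvas workspace.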
This capability is now available in all AWS Regions where SageMaker Canvas is supported. To learn more, refer to the SageMaker Canvas product documentation.