Posted On: Nov 4, 2021

Amazon SageMaker Studio now enables customers to make test inference requests to endpoints with a custom URL and endpoints that require specific headers. Amazon SageMaker helps data scientists and developers prepare, build, train, and deploy high-quality machine learning (ML) models quickly by bringing together a broad set of capabilities purpose-built for ML. Amazon SageMaker Studio provides a single, web-based visual interface where you can perform all ML development steps.

Customers can use Amazon API Gateway or other services in front of SageMaker endpoints to provide additional capabilities such as custom authorization and custom domain names for the endpoint. For example, these capabilities can be used to create a publicly facing inference endpoint that's protected with a JSON Web Token (JWT) and branded with a custom domain. In these cases, customers need the flexibility to add the headers required by their custom authorizer and to provide a custom URL for the inference request. Previously, customers could only make inference requests from SageMaker Studio to the default SageMaker real-time endpoint URL and could not customize headers or change the endpoint URL. Now, customers can specify headers and a custom endpoint URL for the test inference request. In addition, once the inference request has finished, customers can generate the equivalent curl command from SageMaker Studio with the click of a button. This is useful for sharing with others who may not have access to the UI and for fine-tuning other properties of the inference request.
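As a minimal sketch of what such a customized test request looks like outside the Studio UI, the snippet below builds a POST request against a hypothetical custom domain with a bearer-token header for a custom authorizer. The URL, token, and payload shape are illustrative assumptions, not values from this announcement; the exact headers depend on your authorizer configuration.

```python
import requests

# Hypothetical values -- substitute your API Gateway custom domain and a
# real JWT issued by your identity provider.
CUSTOM_URL = "https://inference.example.com/predict"
JWT_TOKEN = "eyJhbGciOiJIUzI1NiJ9.example.token"

# Build the same request Studio's test UI can now send: a custom endpoint
# URL plus extra headers (here, an Authorization header for a custom
# authorizer fronting the SageMaker endpoint).
req = requests.Request(
    method="POST",
    url=CUSTOM_URL,
    headers={
        "Authorization": f"Bearer {JWT_TOKEN}",
        "Content-Type": "application/json",
    },
    json={"instances": [[1.0, 2.0, 3.0]]},
)
prepared = req.prepare()

# To actually send it: response = requests.Session().send(prepared)
# The equivalent curl command (which Studio can generate for you) would be:
#   curl -X POST "https://inference.example.com/predict" \
#        -H "Authorization: Bearer <token>" \
#        -H "Content-Type: application/json" \
#        -d '{"instances": [[1.0, 2.0, 3.0]]}'
print(prepared.url)
print(prepared.headers["Authorization"])
```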

This feature is generally available in all regions where SageMaker and SageMaker Studio are available. To see where SageMaker is available, review the AWS Region table. To learn more about this feature, please see our documentation. To learn more about SageMaker, visit our product page.