Amazon SageMaker runtime now supports the CustomAttributes header
Amazon SageMaker now supports a new HTTP header for the InvokeEndpoint API action called CustomAttributes, which you can use to provide additional information about an inference request or response. Amazon SageMaker strips all POST headers except those supported by the InvokeEndpoint API action, and you can use the CustomAttributes header to pass custom information, such as a trace ID, an application-specific identifier, or other metadata, along with the inference request or response. Your client applications can use this information for internal logging, auditing, or processing of the inference request or response.
Amazon SageMaker is an end-to-end platform used by data scientists and developers to build, train, tune, and deploy machine learning (ML) models at scale. Amazon SageMaker lets you begin training your model with a single click in the console or with a simple API call. When the training is complete, and you’re ready to deploy your model, you can launch it with a single click in the Amazon SageMaker console. After you deploy a model into production using the Amazon SageMaker hosting service, you have a persistent HTTPS endpoint where your machine learning model is available to provide inferences via the InvokeEndpoint API action.
Amazon SageMaker forwards the information that you provide in the new CustomAttributes header for the InvokeEndpoint API action verbatim, and all calls to InvokeEndpoint are authenticated using AWS Signature Version 4. The custom attributes can't exceed 1,024 characters and must consist of visible US-ASCII characters, as specified in Section 3.2.6 (Field Value Components) of the Hypertext Transfer Protocol (HTTP/1.1) specification, RFC 7230.
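As a quick client-side sanity check, you can validate a value against these constraints before sending it. The following helper is a sketch of my own, not part of the AWS SDK; it allows spaces between tokens, as HTTP field values commonly do.

```python
def is_valid_custom_attributes(value: str) -> bool:
    """Check a CustomAttributes value against the documented constraints:
    at most 1,024 characters, visible US-ASCII only (spaces allowed
    between tokens). A client-side sketch, not an official SDK helper."""
    if len(value) > 1024:
        return False
    # Visible US-ASCII spans 0x21 ('!') through 0x7E ('~').
    return all(c == " " or 0x21 <= ord(c) <= 0x7E for c in value)
```

Rejecting an invalid value locally saves a round trip, because Amazon SageMaker will not accept a header that violates these constraints.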
The following code snippets show how you can provide a custom attribute header to your model using the AWS SDK. In these examples, I am using a trace ID in the CustomAttributes header. My client application provides this information in the request, and it is returned in the inference response to make it easier to log calls to my model.
To help you debug your endpoints, training jobs, and notebook instance lifecycle configurations, anything that an algorithm container, a model container, or a notebook instance lifecycle configuration sends to stdout or stderr is also sent to Amazon CloudWatch Logs. In addition to debugging, you can use these logs to analyze the progress of your jobs. You can find all of the log group and log stream names in the documentation.
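For example, you can pull recent log events for an endpoint with Boto3. This sketch assumes the `/aws/sagemaker/Endpoints/<endpoint-name>` log group naming convention; check the documentation for the names used by training jobs and notebook instances.

```python
def fetch_endpoint_logs(logs_client, endpoint_name, pattern=""):
    """Return recent messages from an endpoint's CloudWatch log group.

    Assumes the /aws/sagemaker/Endpoints/<endpoint-name> group naming
    convention; pass a CloudWatch Logs client such as
    boto3.client("logs")."""
    response = logs_client.filter_log_events(
        logGroupName=f"/aws/sagemaker/Endpoints/{endpoint_name}",
        filterPattern=pattern,
    )
    return [event["message"] for event in response["events"]]
```

Filtering on a pattern such as the trace ID you passed in CustomAttributes is one way to correlate a specific inference request with the container's log output.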
About the Author
Urvashi Chowdhary is a Senior Product Manager for Amazon SageMaker. She is passionate about working with customers and making machine learning more accessible. In her spare time, she loves sailing, paddle boarding, and kayaking.