Posted On: Jul 26, 2021

Amazon SageMaker JumpStart helps you quickly and easily solve your machine learning problems with one-click access to popular model collections from TensorFlow Hub, PyTorch Hub, and Hugging Face (also known as “model zoos”), and to 16 end-to-end solutions that solve common business problems such as demand forecasting, fraud detection, and document understanding.

Starting today, SageMaker JumpStart supports 20 state-of-the-art fine-tunable object detection models from PyTorch Hub and MXNet GluonCV. The models include YOLOv3, Faster R-CNN, and SSD, pre-trained on the MS COCO and PASCAL VOC datasets. Customers can use these pre-trained models to recognize various objects in images by deploying them on SageMaker as-is with one click. Customers can also fine-tune these models on their own datasets to identify objects beyond the classes present in the pre-training datasets and make accurate predictions.
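As a rough sketch, deploying one of these pre-trained detectors programmatically with the SageMaker Python SDK might look like the following. The model ID, model version, instance type, and IAM role are illustrative placeholders; check SageMaker JumpStart for the exact identifiers and supported instance types in your account.

    import sagemaker
    from sagemaker import image_uris, model_uris, script_uris
    from sagemaker.model import Model
    from sagemaker.predictor import Predictor

    # Illustrative values -- look up the exact JumpStart model ID and use
    # an IAM role with SageMaker permissions from your own account.
    model_id, model_version = "mxnet-od-yolo3-darknet53-coco", "*"
    instance_type = "ml.p3.2xlarge"
    role = sagemaker.get_execution_role()

    # Retrieve the inference container, inference script, and pre-trained
    # model artifacts for the chosen JumpStart model.
    image_uri = image_uris.retrieve(
        region=None, framework=None, image_scope="inference",
        model_id=model_id, model_version=model_version,
        instance_type=instance_type,
    )
    source_uri = script_uris.retrieve(
        model_id=model_id, model_version=model_version, script_scope="inference"
    )
    model_uri = model_uris.retrieve(
        model_id=model_id, model_version=model_version, model_scope="inference"
    )

    # Create the model object and deploy it to a real-time endpoint.
    model = Model(
        image_uri=image_uri,
        model_data=model_uri,
        source_dir=source_uri,
        entry_point="inference.py",
        role=role,
        predictor_cls=Predictor,
    )
    predictor = model.deploy(initial_instance_count=1, instance_type=instance_type)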

SageMaker JumpStart now also supports image feature vector extraction for 52 state-of-the-art image classification models from TensorFlow Hub, including ResNet, MobileNet, and EfficientNet. Customers can use these new models to generate image feature vectors for their images. The generated feature vectors are representations of the images in a high-dimensional Euclidean space. They can be used to compare images and identify similarities for image search applications, as shown in the sketch below.
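The sketch below illustrates one way such vectors can be used: it queries an already-deployed feature-vector endpoint and compares two images by cosine similarity. The endpoint name, image files, and the "embedding" response key are assumptions, so inspect your model's actual input and output format.

    import json
    import boto3
    import numpy as np

    runtime = boto3.client("sagemaker-runtime")

    def image_embedding(endpoint_name, image_path):
        # Send raw image bytes to the endpoint; the feature-vector model is
        # assumed here to return a JSON body with an "embedding" list.
        with open(image_path, "rb") as f:
            payload = f.read()
        response = runtime.invoke_endpoint(
            EndpointName=endpoint_name,
            ContentType="application/x-image",
            Body=payload,
        )
        body = json.loads(response["Body"].read())
        return np.array(body["embedding"])

    def cosine_similarity(a, b):
        # Higher values indicate more similar images.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # "image-embedding-endpoint" and the image files are placeholders.
    vec_a = image_embedding("image-embedding-endpoint", "cat1.jpg")
    vec_b = image_embedding("image-embedding-endpoint", "cat2.jpg")
    print(cosine_similarity(vec_a, vec_b))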

In addition, SageMaker JumpStart added 5 GPT-2 models for text generation and 21 sentence-pair classification models from Hugging Face. These state-of-the-art GPT-2 models can help customers generate coherent English text from a prompt of just a few words. Customers can use the sentence-pair classification models to perform natural language inference.
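A minimal sketch of calling a deployed GPT-2 text generation endpoint might look like this. The endpoint name and the "generated_text" response key are assumptions; verify the exact request and response format against the documentation for the model you deploy.

    import json
    import boto3

    runtime = boto3.client("sagemaker-runtime")

    def generate_text(endpoint_name, prompt):
        # The GPT-2 endpoint is assumed to accept plain text and return
        # JSON; the exact response keys can vary by model and version.
        response = runtime.invoke_endpoint(
            EndpointName=endpoint_name,
            ContentType="application/x-text",
            Body=prompt.encode("utf-8"),
        )
        body = json.loads(response["Body"].read())
        return body.get("generated_text", body)

    # "gpt2-text-generation-endpoint" is a placeholder endpoint name.
    print(generate_text("gpt2-text-generation-endpoint", "Machine learning is"))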

The image below shows a sample view of the 64 text models and 196 vision models available in SageMaker JumpStart.

Amazon SageMaker JumpStart is available in all regions where Amazon SageMaker Studio is available. To get started with these new models on SageMaker JumpStart, refer to the documentation.