Posted On: May 2, 2019
Amazon SageMaker now includes enhancements to the built-in Object2Vec algorithm that make it faster to train deep learning models. You can access the new features as hyperparameters from the Amazon SageMaker console or through the Amazon SageMaker Python SDK.
The Object2Vec algorithm now automatically samples training data points that are unlikely to be observed together and labels them as negative. This eliminates the need to implement negative sampling manually during data pre-processing, which can be a significant time saving.
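Conceptually, the sampling the algorithm now performs for you resembles the sketch below. This is an illustration only; the function name and parameters are hypothetical and not part of the SageMaker API:

```python
import random

def add_negative_samples(positive_pairs, all_items, rate, seed=0):
    """Illustrative sketch of negative sampling (not the SageMaker API).

    For each observed (positive) pair, draw `rate` random pairings that do
    not appear in the positive set and label them 0 (negative); positives
    keep label 1.
    """
    rng = random.Random(seed)
    positives = set(positive_pairs)
    samples = [(a, b, 1) for a, b in positive_pairs]
    for a, _ in positive_pairs:
        added = 0
        while added < rate:
            b = rng.choice(all_items)
            # Only keep pairings that were never observed as positive.
            if (a, b) not in positives:
                samples.append((a, b, 0))
                added += 1
    return samples
```

Before this release, a step like this had to be hand-written as part of pre-processing; now the algorithm handles it internally, controlled by a hyperparameter.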
In addition, the Object2Vec algorithm now supports a sparse gradient update that speeds up single-GPU training by up to 2 times without any loss in accuracy, and by up to 20 times when training on multiple GPUs.
The Object2Vec algorithm uses two encoders to encode data from two input sources. You can now train both encoders jointly, which speeds up the training process. Customization of the comparator operator is also now supported, giving you the flexibility to choose how the two encoding vectors are assembled into a single vector for use cases such as document embedding.
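The features above are exposed as training hyperparameters. The sketch below shows how they might be set; the hyperparameter names follow the Object2Vec documentation, but verify them (and the example values, which are assumptions) against the current docs before use:

```python
# Hypothetical Object2Vec hyperparameter configuration sketch.
# Names follow the Object2Vec documentation; values are illustrative.
hyperparameters = {
    # Basic encoder setup (example values)
    "enc0_max_seq_len": "64",
    "enc1_max_seq_len": "64",
    "enc0_vocab_size": "10000",
    "enc1_vocab_size": "10000",
    # New: automatic negative sampling (negatives drawn per positive example)
    "negative_sampling_rate": "3",
    # New: sparse gradient update for faster single- and multi-GPU training
    "token_embedding_storage_type": "row_sparse",
    # New: share (tie) embedding weights so the two encoders train jointly
    "tied_token_embedding_weight": "true",
    # New: customize how the two encodings are combined into one vector
    "comparator_list": "hadamard, concat, abs_diff",
}
```

In a training script, a dictionary like this would typically be passed to a SageMaker `Estimator` configured with the Object2Vec algorithm image, e.g. via `estimator.set_hyperparameters(**hyperparameters)`.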
Refer to the documentation for details, and learn more in the blog post.