AWS Architecture Blog

Image background removal using Amazon SageMaker semantic segmentation

Many individuals are creating their own ecommerce sites and online stores to sell their products and services. How simply and quickly you can get products out to your selected markets is a critical indicator of the success of your business.

Artificial Intelligence/Machine Learning (AI/ML) and automation can offer you an improved and seamless process for image manipulation. You can take a picture of a product and then remove the background, producing high-quality, clean product images. These images can be added to your online stores for consumers to view and purchase. This automated process drastically decreases the manual effort required, though some manual quality review will still be necessary. It reduces your time-to-market (TTM) and gets your products out to customers quickly.

This blog post explains how you can automate the removal of image backgrounds by combining semantic segmentation inference with Amazon SageMaker JumpStart and image processing with AWS Lambda. We will walk you through how to set up an Amazon SageMaker JumpStart semantic segmentation inference endpoint using curated training data.

Amazon SageMaker JumpStart solution overview

Figure 1. Architecture for automatically processing new images and outputting isolated labels identified through semantic segmentation.

The example architecture in Figure 1 is serverless and uses SageMaker to perform semantic segmentation on images. Image processing takes place within a Lambda function, which separates the identified (product) content from the background content in the image.

In this event-driven architecture, Amazon Simple Storage Service (Amazon S3) invokes a Lambda function each time a new product image lands in the Uploaded Image Bucket. That Lambda function calls a semantic segmentation endpoint in Amazon SageMaker and receives back a segmentation mask marking which pixels belong to the segment of interest. The function then processes the image to isolate that segment from the rest of the image, and writes the result to the Processed Image Bucket.
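
The following is a minimal sketch of such a Lambda handler, assuming the Python runtime and boto3. The endpoint name, bucket name, content type, and the remove_background helper (sketched after the figures below) are illustrative placeholders; the exact request and response formats depend on the model behind your endpoint.

```python
import urllib.parse

import boto3

s3 = boto3.client("s3")
runtime = boto3.client("sagemaker-runtime")

# Placeholder names; substitute your own endpoint and output bucket.
ENDPOINT_NAME = "semantic-segmentation-endpoint"
PROCESSED_BUCKET = "processed-image-bucket"


def handler(event, context):
    """Invoked by Amazon S3 for each new object in the Uploaded Image Bucket."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Fetch the uploaded product image.
        image_bytes = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        # Request a segmentation mask from the SageMaker endpoint.
        response = runtime.invoke_endpoint(
            EndpointName=ENDPOINT_NAME,
            ContentType="application/x-image",  # format accepted by many JumpStart vision models
            Body=image_bytes,
        )
        mask_bytes = response["Body"].read()

        # Isolate the identified segment (see the remove_background
        # sketch later in this post) and store the result.
        result = remove_background(image_bytes, mask_bytes)
        s3.put_object(Bucket=PROCESSED_BUCKET, Key=key, Body=result)
```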

Semantic segmentation model

The semantic segmentation algorithm provides a fine-grained, pixel-level approach to developing computer vision applications. It tags every pixel in an image with a class label from a predefined set of classes. Because the algorithm classifies every pixel, it also provides information about the shapes of the objects contained in the image. The segmentation output is a grayscale image with the same shape as the input image, called a segmentation mask.

You can use the segmentation mask to keep only the pixels of the original image that correspond to the identified class, and the Python library PIL (Pillow) handles the pixel manipulation. The following images show how the image in Figure 2 results in the mask shown in Figure 3 when passed through semantic segmentation. Applying the Figure 3 mask to the pixels of Figure 2 yields the image in Figure 4. Due to minor quality issues in the final image, some manual cleanup will be needed after the automated step; a code sketch of this masking step follows the figures below.

Car image with background

Figure 2. Car image with background

Car mask image

Figure 3. Car mask image

Final image, background removed

Figure 4. Final image, background removed
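
Here is a minimal sketch of that masking step, using PIL (Pillow) and NumPy. It assumes the mask arrives as a grayscale image whose pixel values are class labels; some JumpStart models instead return predictions as JSON, in which case you would build the mask array from that response. The target class value of 7 (for example, "car" in Pascal VOC-style label maps) is an assumption you should adjust for your model.

```python
import io

import numpy as np
from PIL import Image

# Class label for the object to keep; 7 is "car" in Pascal VOC-style
# label maps, but the correct value depends on the model you deployed.
TARGET_CLASS = 7


def remove_background(image_bytes, mask_bytes):
    """Return a PNG with everything outside the target class made transparent."""
    image = Image.open(io.BytesIO(image_bytes)).convert("RGBA")
    mask = Image.open(io.BytesIO(mask_bytes)).convert("L").resize(image.size)

    # Opaque (255) where the mask pixel matches the target class,
    # fully transparent (0) everywhere else.
    alpha = np.where(np.array(mask) == TARGET_CLASS, 255, 0).astype(np.uint8)
    image.putalpha(Image.fromarray(alpha))

    # Serialize back to bytes so the result can be written to S3.
    buffer = io.BytesIO()
    image.save(buffer, format="PNG")
    return buffer.getvalue()
```

Saving as PNG preserves the alpha channel, so the background pixels stay transparent when the image is placed on a product page.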

SageMaker JumpStart streamlines the deployment of the prebuilt model on SageMaker, which supports the semantic segmentation algorithm. You can test this using the sample Jupyter notebook available at Extract Image using Semantic Segmentation, which demonstrates how to extract an individual form from the surrounding background.

Learn more about SageMaker JumpStart

SageMaker JumpStart is a quick way to learn about SageMaker features and capabilities through curated one-step solutions, example notebooks, and deployable pre-trained models. You can also fine-tune the models and then deploy them. You can access JumpStart using Amazon SageMaker Studio or programmatically through the SageMaker APIs.

SageMaker JumpStart provides many different semantic segmentation models, each pre-trained on the classes of objects it can identify and fine-tuned on a sample dataset. You can fine-tune a model further with your own dataset to get an effective mask for the class of object you want to retrieve from the image. When you fine-tune a model, you can use the default dataset or choose your own data located in an Amazon S3 bucket, and you can customize the hyperparameters of the training job used to fine-tune the model.
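
A sketch of that fine-tuning flow with the SageMaker Python SDK follows. The model ID is one example from the JumpStart catalog; the instance type, hyperparameter override, and S3 path are placeholders, and the transfer_learning.py entry point reflects the convention used by JumpStart training bundles.

```python
import sagemaker
from sagemaker import hyperparameters, image_uris, model_uris, script_uris
from sagemaker.estimator import Estimator

# Example JumpStart semantic segmentation model; check the catalog
# for the current list of model IDs.
model_id, model_version = "mxnet-semseg-fcn-resnet50-ade", "*"
role = sagemaker.get_execution_role()  # or pass an IAM role ARN

# Retrieve the training container, script bundle, and pre-trained weights.
train_image_uri = image_uris.retrieve(
    region=None, framework=None, image_scope="training",
    model_id=model_id, model_version=model_version,
    instance_type="ml.p3.2xlarge",
)
train_source_uri = script_uris.retrieve(
    model_id=model_id, model_version=model_version, script_scope="training")
train_model_uri = model_uris.retrieve(
    model_id=model_id, model_version=model_version, model_scope="training")

# Start from the default hyperparameters, then override what you need.
hp = hyperparameters.retrieve_default(
    model_id=model_id, model_version=model_version)
hp["epochs"] = "5"  # hyperparameter names vary by model

estimator = Estimator(
    role=role,
    image_uri=train_image_uri,
    source_dir=train_source_uri,
    entry_point="transfer_learning.py",
    model_uri=train_model_uri,
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    hyperparameters=hp,
)

# Fine-tune on your own dataset in Amazon S3 (placeholder path).
estimator.fit({"training": "s3://your-bucket/your-segmentation-dataset/"})
```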

When the fine-tuning process is complete, JumpStart provides information about the model: parent model, training job name, training job Amazon Resource Name (ARN), training time, and output path. We retrieve the deploy_image_uri, deploy_source_uri, and base_model_uri for the pre-trained model. You can host the pre-trained base model by creating an instance of sagemaker.model.Model and deploying it.
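
A sketch of that deployment, again with the SageMaker Python SDK; the same example model ID is assumed, and the endpoint name and instance type are placeholders.

```python
import sagemaker
from sagemaker import image_uris, model_uris, script_uris
from sagemaker.model import Model
from sagemaker.predictor import Predictor

model_id, model_version = "mxnet-semseg-fcn-resnet50-ade", "*"
role = sagemaker.get_execution_role()

# Retrieve the inference container, script bundle, and pre-trained weights.
deploy_image_uri = image_uris.retrieve(
    region=None, framework=None, image_scope="inference",
    model_id=model_id, model_version=model_version,
    instance_type="ml.m5.xlarge",
)
deploy_source_uri = script_uris.retrieve(
    model_id=model_id, model_version=model_version, script_scope="inference")
base_model_uri = model_uris.retrieve(
    model_id=model_id, model_version=model_version, model_scope="inference")

# Host the pre-trained base model behind a real-time endpoint.
model = Model(
    image_uri=deploy_image_uri,
    source_dir=deploy_source_uri,
    entry_point="inference.py",  # entry point convention of JumpStart inference bundles
    model_data=base_model_uri,
    role=role,
    predictor_cls=Predictor,
)
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
    endpoint_name="semantic-segmentation-endpoint",  # the name the Lambda function calls
)
```

Once the endpoint is in service, the Lambda function shown earlier can invoke it by name.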

Conclusion

In this blog post, we reviewed how to use Amazon SageMaker JumpStart and AWS Lambda to automate image processing with pre-trained machine learning models and inference. The solution ingests product images, identifies your products, and then removes the image background. After some review and QA, you can publish your products to your ecommerce store or other channels.

Patrick Gryczka

Patrick Gryczka is a Solutions Architect with the AWS Sports SA team. His core areas of focus are serverless and DevOps technologies. Prior to life as a Solutions Architect, Patrick worked as a consultant for ecommerce and fintech customers adopting cloud technologies. Patrick is based out of New York City and lives in Brooklyn with his wife and three cats.

Ajit Puthiyavettle

Ajit Puthiyavettle is a Solution Architect working with enterprise clients, architecting solutions to achieve business outcomes. He is passionate about solving customer challenges with innovative solutions. He has experience leading DevOps and security teams for enterprise and SaaS (Software as a Service) companies.

Kenny Guzman

Kenny Guzman is a Solutions Architect based out of New York City. He has a background in infrastructure, containers, and DevOps. Prior to being a Solutions Architect, Kenny worked in the Media & Entertainment industry, helping achieve desired business outcomes through the modernization and containerization of applications.