AWS Marketplace

Improving personalized ranking in recommender systems with Implicit BPR and Amazon SageMaker

A recommender system is an automated software mechanism that uses algorithms and data to personalize product discovery for a particular user. Its essential task is to help users discover the most relevant items within an often-unmanageable set of choices. Today, recommender systems are employed in diverse domains: promoting products on e-commerce sites such as Amazon.com, recommending movies on Netflix, and suggesting music, artists, and albums.

Terminology

Classes of recommender systems include:

  • Content-based recommender systems: These rely on a product’s features, attributes, and descriptions to recommend other products similar to past purchases.
  • Personalized ranking-based recommender systems: These recommend the top-n items for a particular user along with a ranking and a score.
  • Location- or demographic-based recommender systems: These use a user’s demographic information to learn a classifier that maps particular demographics to ratings or purchasing propensities.
  • Collaborative filtering-based recommender systems: These rely on a user’s prior item interactions or ratings to make a recommendation.
    • User-based: These operate by discovering other similar or like-minded users.
    • Item-based: These work on the similarity between items assessed using a user’s ratings of those items or interactions.
  • Hybrid: A blend of more than one of the above strategies.

Explicit vs. implicit user feedback:

  • Explicit feedback: This is a dataset of preferences that users state directly in the system. Examples include movie ratings on Netflix or product ratings on Amazon.com, which users provide explicitly.
  • Implicit feedback: Rather than relying on explicit user feedback, the system can indirectly use user behavior and interactions to learn about their interests and choices. For instance, a user purchasing or browsing an item, or even the number of times they played a particular song, would be implicit feedback.

Other terminology:

  • One-class collaborative filtering problem: The most significant difference between explicit and implicit data is that implicit data has no negative feedback. With explicit data, if a user rates a movie or an artist 1 out of 5, we recognize that the user does not like it.

In contrast, if a user does not buy a product on an e-commerce website, we cannot determine whether they dislike it or simply have not seen that particular product. Thus, in those cases, all the observations belong to a single positive class, which is known as a one-class problem.
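To make the one-class setting concrete, here is a minimal sketch (with made-up users and items) that builds a binary interaction matrix from a purchase log. Note that a 0 means "unobserved", not "disliked":

```python
# Hypothetical purchase log: implicit feedback records only positive events.
purchases = [
    ("u1", "lamp"), ("u1", "doormat"),
    ("u2", "lamp"), ("u2", "bowl"),
]

users = sorted({u for u, _ in purchases})
items = sorted({i for _, i in purchases})

# One-class interaction matrix: 1 = observed purchase; 0 = unobserved,
# which may mean "dislikes it" or "never saw it" -- we cannot tell which.
matrix = {u: {i: 0 for i in items} for u in users}
for u, i in purchases:
    matrix[u][i] = 1
```

Every entry that is not a 1 is ambiguous, which is exactly why standard rating-prediction losses struggle on implicit data.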

How Bayesian personalized ranking improves recommendation rankings

The general strategy for item recommenders is to predict a personalized score for each item, representing the user’s propensity to purchase that particular item. The items are then ranked according to that score. With traditional recommender systems, the recommended items are sometimes all similar and unranked. This can make the recommendations less personalized and the user’s decision harder.

There are strategies that can help you build recommender systems from an implicit feedback dataset, such as matrix factorization (MF) or adaptive k-nearest neighbors (kNN). However, neither is directly optimized for ranking.

An optimization technique such as Bayesian personalized ranking (BPR) adds real value to recommender systems. BPR works on an implicit feedback dataset and deals with one-class collaborative filtering problems by transforming them into a ranking task.

Using BPR increases the chances that a user receives recommendations containing a diverse selection of items and likes at least one of them. The increased personalization can also positively influence customer satisfaction and retention.
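To illustrate the idea behind BPR (this is a toy sketch of the generic criterion, not the AWS Marketplace implementation), the following code performs stochastic gradient ascent on ln σ(x̂ᵤᵢ − x̂ᵤⱼ) for a user u, a purchased item i, and an unpurchased item j. All latent factors and hyperparameters here are invented:

```python
import math

# Toy latent factors for one user and two items (learned from data in a real model).
dim = 2
users = {"u1": [0.1, 0.2]}
items = {"lamp": [0.3, 0.1], "bowl": [-0.2, 0.4]}

def score(u, i):
    # Predicted preference: dot product of user and item factors.
    return sum(a * b for a, b in zip(users[u], items[i]))

def bpr_sgd_step(u, pos, neg, lr=0.05, reg=0.01):
    # One gradient-ascent step on the BPR criterion, which maximizes
    # ln(sigmoid(score(u, pos) - score(u, neg))) with L2 regularization.
    x_uij = score(u, pos) - score(u, neg)
    g = 1.0 / (1.0 + math.exp(x_uij))  # gradient of ln(sigmoid(x)) w.r.t. x
    for f in range(dim):
        wu, hi, hj = users[u][f], items[pos][f], items[neg][f]
        users[u][f] += lr * (g * (hi - hj) - reg * wu)
        items[pos][f] += lr * (g * wu - reg * hi)
        items[neg][f] += lr * (-g * wu - reg * hj)

# "u1" purchased the lamp (positive) but not the bowl (unobserved);
# training should widen the score gap between the two.
margin_before = score("u1", "lamp") - score("u1", "bowl")
for _ in range(100):
    bpr_sgd_step("u1", "lamp", "bowl")
margin_after = score("u1", "lamp") - score("u1", "bowl")
```

The key point is that BPR never labels the bowl as "disliked"; it only asks the model to rank the observed item above the unobserved one, which sidesteps the one-class problem.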

In this post, I show you how to improve the personalized ranking of an online retail use case using Implicit BPR, available in AWS Marketplace, and Amazon SageMaker. I further show you how to use SageMaker to collect, analyze, clean, prepare, train, and deploy the model to perform both batch and real-time inferences on the trained model.

The Online Retail Data Set that I use in this post is provided by the UCI Machine Learning Repository. It holds all the transactions occurring between December 1, 2009 and December 9, 2011 for a UK-based and registered, non-store online retailer. The company mainly sells unique all-occasion giftware, and many of its customers are wholesalers.

Solution overview

The following architecture illustrates the end-to-end solution at a high level, using the Online Retail Data Set and SageMaker to improve personalized ranking in recommender systems with Implicit BPR.

Here is what it enables:

  1. Download the Online Retail Data Set from the UCI Machine Learning Repository to a SageMaker notebook instance. Use the notebook instance to collect, analyze, cleanse, and prepare the downloaded dataset using this sample Jupyter notebook. Refer to the following diagram.

Download the Online Retail Data Set from the UCI Machine Learning repository to SageMaker notebook instance

  2. Store the final training and testing datasets in Amazon S3.
  3. Use the Implicit BPR algorithm listing in AWS Marketplace to train the ML model.
  4. Present the cleansed training and testing datasets to the model training process.
  5. Optionally, tune the hyperparameters supported by the algorithm based on its specification. Refer to the following diagram.

Use the Implicit BPR algorithm listing in AWS Marketplace to train the ML model

  6. Evaluate and visualize the quality and performance metrics of the trained model.
  7. Store the final trained model artifact in the Amazon S3 bucket.
  8. Use the SageMaker hosting service to host the trained model to perform both batch transform and real-time inferences.
  9. Build request payloads for both a batch and a real-time use case and make an inference request. Both modes facilitate interactive experimentation with the trained model. Refer to the following diagram.

Use the SageMaker hosting service to host the trained model to perform both the batch transform and real-time inferences

Step A: Subscribe to Implicit BPR in AWS Marketplace

To subscribe to the algorithm in AWS Marketplace, follow these steps.

  1. Log in to your AWS account and open the Implicit BPR (V 0.9.36) listing.
  2. Read Highlights, Product Overview, Usage information, and Additional resources and review the supported instance types.
  3. Choose Continue to Subscribe.
  4. Review End user license agreement, Support Terms, and Pricing Information.
  5. To subscribe to the Implicit BPR algorithm, choose Accept Offer.
  6. Choose Continue to Configuration and then choose a Region. A product ARN will appear on the same page. Copy it, as this is the algorithm ARN that you must specify in your training job.

Step B: Set up the notebook instance

First, create an Amazon SageMaker notebook instance by following the instructions at Create a Notebook Instance in the Amazon SageMaker Developer Guide.

Next, open the notebook instance. You should have access to all SageMaker examples. To follow along with the rest of this post, scroll down to the AWS Marketplace section and choose Use next to recommender_system_with_implicit_bpr.ipynb.

Step C: Collect and preprocess the data

  1. Subscribe and set up the environment

Run Steps 1 (Pre-requisites: subscribe to Implicit BPR Algorithm from AWS Marketplace) and 2 (Set up the environment) of the sample notebook you opened in step B. These provide instructions on setting up an environment and dependent libraries required for the rest of the demo.

  2. Download the dataset

Run Step 3 (Data collection and preparation) of the sample notebook. This downloads the Online Retail Data Set to a local directory on the notebook instance. The following screenshot shows a table with the CustomerID, StockCode, Description, Price, Quantity, Invoice, InvoiceDate, and Country columns. The last row shows that the downloaded dataset has 1,067,371 instances and eight different features.

Top 5 rows of the online retail dataset

The name and the attribute information of the features from the image are as follows:

    1. CustomerID: A 5-digit integral customer number uniquely assigned to each customer.
    2. StockCode: A 5-digit integral product or item code number uniquely assigned to each specific product.
    3. Description: Product or item name.
    4. Price: Unit price. Product price per unit in sterling.
    5. Quantity: The quantities of each product or item per transaction.
    6. Invoice (Nominal): Invoice number. A six-digit integral number uniquely assigned to each transaction. Codes starting with the letter C indicate cancellations.
    7. InvoiceDate: Invoice date and time; the day and time when the transaction was generated.
    8. Country: The name of the country where a customer resides.
  3. Explore, clean, and convert the dataset

The user-item-interaction data is critical for getting started with a recommender system. Recommender systems train on this data for use cases such as video-on-demand applications, user click-stream logs, and user purchase history. No matter the use case, the algorithms all share a base of learning on user-item-interaction data, which is defined by two core attributes:

    • user_id – The user who interacted
    • item_id – The item that the user interacted with

Implicit BPR requires the training dataset to contain user_id and item_id columns. In this case, the columns are, respectively, CustomerID and StockCode, representing the items that the user purchased or interacted with. Additionally, the columns must not include any missing values, and the input file must be in CSV format.

To explore, clean, and convert the dataset into the format accepted by the algorithm, run Step 3.2: Exploring, cleansing and converting dataset into the format accepted by an algorithm from the notebook.
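As an illustration of the kind of cleansing the notebook performs, here is a minimal pandas sketch with invented rows that mimic the dataset's schema. It drops rows with missing CustomerID values and cancelled invoices, then renames the two required columns:

```python
import io

import pandas as pd

# A few invented rows mimicking the Online Retail Data Set schema.
raw = io.StringIO(
    "Invoice,StockCode,Description,Quantity,InvoiceDate,Price,CustomerID,Country\n"
    "536365,85123A,HANGING HEART,6,2009-12-01 08:26,2.55,17850,United Kingdom\n"
    "C536379,D,Discount,-1,2009-12-01 09:41,27.50,14527,United Kingdom\n"
    "536370,22728,ALARM CLOCK,24,2009-12-01 08:45,3.75,,France\n"
)
df = pd.read_csv(raw, dtype={"CustomerID": "Int64"})

# Drop rows with a missing CustomerID and cancelled invoices
# (invoice codes starting with the letter C).
df = df.dropna(subset=["CustomerID"])
df = df[~df["Invoice"].astype(str).str.startswith("C")]

# Keep only the two columns Implicit BPR expects, under the required names.
train = df.rename(columns={"CustomerID": "user_id", "StockCode": "item_id"})[
    ["user_id", "item_id"]
]
```

Writing `train` out with `train.to_csv(path, index=False)` then yields a CSV in the format the algorithm accepts.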

  4. Divide into training and testing datasets

Once the dataset is cleaned and converted into the format that satisfies the algorithm’s specification, I use the sklearn.model_selection.train_test_split helper function provided by scikit-learn to split it into a 70% training and a 30% testing dataset.

Run Step 3.3: Preparing the final training dataset and upload it to Amazon S3 from the notebook to split the dataset and upload it to the Amazon S3 bucket so it can be used to train and evaluate the performance of our model.

You can find the uploaded training and testing datasets in S3 at the following locations:

    • Training: s3://bucket-name/sagemaker/implicit-bpr/training/data
    • Testing: s3://bucket-name/sagemaker/implicit-bpr/test/data

At this point, you have performed the ingestion, exploration, and generation of a clean training dataset file that meets the Implicit BPR algorithm’s requirements. You have also uploaded the training and testing dataset to the S3 bucket and can use it to train a model.
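The 70/30 split the notebook performs can be sketched as follows; the interaction list here is a placeholder standing in for the cleaned (user_id, item_id) rows:

```python
from sklearn.model_selection import train_test_split

# Placeholder interactions standing in for the cleaned (user_id, item_id) rows.
interactions = [(user, item) for user in range(100) for item in range(2)]  # 200 rows

# 70% training / 30% testing, mirroring the notebook;
# random_state pins the shuffle so the split is reproducible.
train_set, test_set = train_test_split(interactions, test_size=0.3, random_state=42)
```

The two resulting lists are what get written to the training and test prefixes in S3.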

Step D. Train and evaluate the ML model

To train a model, you must create a training job. After you start the training job, SageMaker launches the machine learning (ML) compute instances and uses the training code you provided to train the model. It then saves the resulting model artifacts and other output in the S3 bucket. To train your model, do the following:

  1. Train the model

Run Step 4: Train the model and evaluate the performance metrics from the sample notebook. It uses the training and test datasets from the S3 bucket, trains the Implicit BPR algorithm by starting the training job, and waits until it finishes. The resulting trained model artifact is uploaded to the Amazon S3 bucket at the following location:

s3://bucket-name/sagemaker/implicit-bpr/training/jobs/implicit-bpr-online-retail-training/training-job-name
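For orientation, the training job the notebook launches boils down to a request along the lines of the following sketch. The algorithm ARN, bucket name, channel names, and instance type are all placeholders: substitute the ARN you copied in Step A, your own bucket, and an instance type the listing supports.

```python
# Placeholder values -- substitute your own ARN, bucket, and instance type.
algorithm_arn = "arn:aws:sagemaker:us-east-1:111122223333:algorithm/implicit-bpr"
bucket = "bucket-name"

training_job = {
    "TrainingJobName": "implicit-bpr-online-retail-training",
    "AlgorithmSpecification": {"AlgorithmName": algorithm_arn},
    "InputDataConfig": [
        {
            "ChannelName": "training",
            "DataSource": {"S3DataSource": {
                "S3Uri": f"s3://{bucket}/sagemaker/implicit-bpr/training/data"}},
        },
        {
            "ChannelName": "test",
            "DataSource": {"S3DataSource": {
                "S3Uri": f"s3://{bucket}/sagemaker/implicit-bpr/test/data"}},
        },
    ],
    "OutputDataConfig": {
        "S3OutputPath": f"s3://{bucket}/sagemaker/implicit-bpr/training/jobs"},
    "ResourceConfig": {"InstanceType": "ml.m5.xlarge", "InstanceCount": 1},
}
```

In practice the notebook drives this through the SageMaker Python SDK rather than a raw request, but the shape of the inputs and outputs is the same.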

  2. Visualize and analyze the metrics

For a recommendation system, you want to promote the top-N items for a user, so you must measure the precision and recall metrics for those top N items.

The following image, from the training job logs of the previous step, shows that the algorithm achieved a precision at 10 of around 82 percent on this top-10 recommendation problem. This means that about 82 percent of the recommendations the system presented are relevant to users.

Implicit BPR Training job precision at K metrics

Run the code in Step 4.2: Evaluate and visualize the performance metrics, which uses the SageMaker Python SDK APIs to visualize the p@k(10) metric produced by the training job, where k is a user-defined integer matching the objective of top-N recommendations.
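Precision at k itself takes only a few lines to compute. This sketch uses invented recommendations; for example, 8 relevant items in the top 10 gives a p@10 of 0.8, in the same ballpark as the figure above:

```python
def precision_at_k(recommended, relevant, k=10):
    """Fraction of the top-k recommended items that appear in the user's
    held-out (relevant) interactions."""
    top_k = recommended[:k]
    if not top_k:
        return 0.0
    hits = sum(1 for item in top_k if item in relevant)
    return hits / len(top_k)

# Invented example: 8 of the top 10 recommendations are relevant, so p@10 = 0.8.
recs = [f"item{i}" for i in range(10)]
relevant = {f"item{i}" for i in range(8)}
```

Averaging this quantity over all test users gives the aggregate metric reported in the training logs.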

Step E: Perform inference on the trained model

After you build and train your models, you can deploy them to get predictions in one of two ways:

  • Batch Transform: To get the inferences on an entire dataset offline, you run a batch transform job on a trained model. A batch transform automatically manages the processing of large datasets within the limits of specified parameters.

For instance, consider product or movie recommendations on a site: rather than generating new predictions each time a user logs on to the website, you may decide to create recommendations for users in batch and then cache them for easy retrieval when needed.

  • Real-time: Many customer use cases require the system to have an HTTPS endpoint to get predictions from your models. For instance, in a ride-sharing or food delivery application, the estimated time to delivery is generated in real time whenever a user requests the service. It is not helpful to have those inferences generated ahead of time as a batch and served to users later.

Many other applications can benefit from online inference, such as self-driving cars, virtual reality, and any consumer-facing mobile or web application that lets users query models in real time with sub-second latency. SageMaker provides hosting services for model deployment, including an HTTPS endpoint where your ML model can perform inferences.

  1. Analyze the model inference with an example user

Run Step 5.1: Identify a customer and understand their purchase history of the sample notebook. This identifies a sample user, CustomerID 13085, who has purchased three separate items of various kinds in the initial dataset. It also visualizes the top 10 actual purchases for this customer.

The following image shows the top ten original purchases for the customer with ID 13085.

    • The first and second rows show that this customer has purchased twelve 15CM Christmas glass ball 20 lights and twelve pink cherry lights.
    • The eighth and ninth rows show they also bought ten fancy font home sweet home doormats and twelve quantities of the cat bowl.

I concluded from this data that this customer likes to purchase different kinds of lights, doormats, and various bowls.

Top ten original purchase history for a customer with the ID 13085

  2. Run a batch transform on the trained model

To run the batch transform job on a trained model, you first construct a request with the customer ID and upload it to S3. Run Step 5.2: Upload the payload to Amazon S3 and run a batch transform job of the sample notebook, which demonstrates building an inference request and initiating a batch transform job for this customer. When the job finishes, the output response of the batch transform job is uploaded back to the S3 bucket:

s3://bucket-name/sagemaker/implicit-bpr/batch-inference/jobs/implicit-bpr-online-retail-batch-transform-timestamp.

The inference output includes the user_id and the item_ids the model recommends to the user, along with a recommendation score for each, ordered from most to least relevant to this user.

Run Step 5.3 of the notebook, which reads the inference response and generates a clearer visualization of it. The following screenshot shows the model’s ranked inference results for the next few articles that this customer might want to purchase.

    • The first and second rows recommend flamingo lights with the recommendation score of 3.23 and 10 lights night owl with the recommendation score of 3.08.
    • Along with lights, it also predicted ladle love heart red in the fifth row and mushroom blue hot water bottle in the seventh row with recommendation scores of 2.87 and 2.61, respectively.

Batch Transform inference results with the ranking for customer ID 13085
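To give a feel for this post-processing step, here is a sketch that parses JSON-lines inference output and orders it by score. The field names and values are illustrative placeholders, not the container's exact response schema:

```python
import json

# Illustrative JSON-lines output; the container's actual response schema
# may differ -- treat field names and values as placeholders.
raw_output = """\
{"user_id": "13085", "item_id": "flamingo lights", "score": 3.23}
{"user_id": "13085", "item_id": "10 lights night owl", "score": 3.08}
{"user_id": "13085", "item_id": "ladle love heart red", "score": 2.87}
"""

# Parse each line, then sort so the most relevant item comes first.
recommendations = [json.loads(line) for line in raw_output.splitlines()]
recommendations.sort(key=lambda rec: rec["score"], reverse=True)
top_item = recommendations[0]["item_id"]
```

This is essentially what the notebook's visualization step does before rendering the ranked table.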

  3. Run real-time inferences on the trained model

To launch the endpoint that hosts the model for real-time inferences, run Step 6: Deploy the model and perform a real-time inference of the sample notebook. This step also identifies another sample user, CustomerID 17519, and visualizes their top 10 actual purchases.

The following screenshot shows that this customer prefers purchasing various event-related decorative items:

    • The first and second rows indicate the customer ordered twelve pink and twelve blue felt hanging heart flowers.
    • The seventh and eighth rows show they purchased decorative wall banners, including six quantities of heart string memo holder hanging and forty-eight quantities of folkart bauble Christmas decorations.

Top ten original purchase history for a customer with the ID 17519

  4. Make inference requests to the deployed endpoint

To create a JSON payload and make inference requests to the deployed endpoint, run Step 6.2: Take the example user, create the JSON payload and make an inference request from the notebook. The following screenshot shows that the model correctly predicted articles this customer would like to purchase, including multiple garlands for home and party decor, assorted candies for different events, and confetti tubes for various celebrations.

The model predicted that this customer would like to buy the following articles, with recommendation scores based on the customer’s purchase history:

    • 3D hearts honeycomb paper garland and paper bunting vintage paisley, as seen in the first two rows of the following screenshot. These items had recommendation scores of 3.56 and 3.47, respectively.
    • The model further recommended paper bunting white lace and paper chain kit retro spot in the fifth and sixth rows of the following screenshot with recommendation scores of 3.27 and 3.23, respectively.

Real-time Transform inference results with the ranking for customer ID 17519
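A real-time request to the hosted endpoint can be sketched as below. The payload schema and endpoint name are assumptions (check the listing's usage information for the exact request format the container expects), and the boto3 call is shown commented out since it requires a live endpoint:

```python
import json

# Placeholder payload and endpoint name; verify the exact request schema
# in the Implicit BPR listing's usage information.
payload = json.dumps({"user_id": "17519", "top_n": 10})

# With boto3 the request would look like this (not executed here):
# runtime = boto3.client("sagemaker-runtime")
# response = runtime.invoke_endpoint(
#     EndpointName="implicit-bpr-endpoint",
#     ContentType="application/json",
#     Body=payload,
# )

parsed = json.loads(payload)
```

The endpoint's response can then be parsed and ranked the same way as the batch transform output.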

Cleaning up

To avoid incurring future charges, follow these instructions in the Amazon SageMaker Developer Guide.

Conclusion

In this blog post, I demonstrated how to use Implicit BPR in AWS Marketplace to improve the personalized ranking in any recommender system. I showed how to subscribe to the model, create the SageMaker notebook instance, train and deploy the model, and experiment with the sample code.

I further showed how to perform both batch and real-time inferences on the hosted model using different example users from the original dataset and visualize their inference results ordered by recommendation score to be most relevant to those users.

Next steps

Here are some additional resources I recommend checking out:

  1. Buy and sell Amazon SageMaker algorithms and models in AWS Marketplace
  2. Use algorithm and model package resources in AWS Marketplace
  3. Amazon SageMaker k-nearest neighbors (k-NN) algorithm
  4. Amazon SageMaker factorization machines algorithm
  5. Whitepaper on Bayesian personalized ranking from implicit feedback
  6. Whitepaper on adapting K-Nearest Neighbor for tag recommendation in folksonomies

About the author

Nirav Shah

Nirav Shah is a Senior Solutions Architect with AWS, based in sunny California. He specializes in AI/ML and containers and guides AWS customers to build highly secure, scalable, reliable, and cost-efficient applications in the cloud. He brings to his role over 17 years of technology experience in software development and architecture, data governance, engineering, and IT management. Outside of work, Nirav enjoys taking photographs and adventuring to different places.