AWS for M&E Blog
What’s new in recommender systems
With the ever-increasing selection of direct-to-consumer (DTC) platforms available today, most consumers cannot subscribe to all of them. Subscription and purchase decisions are driven both by content (what shows and movies a platform has) and by user experience (how easy a platform is to use). Consumers today expect real-time, curated experiences as they consider, purchase, and engage with content. Whether the goal is improving click-through rate, increasing views and view duration, or driving subscriptions and purchases of premium content, media companies are working hard to deliver a better customer experience and expand profitability.
Recommender systems are a critical tool for achieving these goals. By providing recommendations that maximize the value of deep content catalogs, DTC platforms can keep consumers engaged after they’ve watched the content that originally brought them to the platform. For example, good recommendations on Video on Demand (VOD) platforms can increase revenue from long-tail content by surfacing it in recommendations based on consumers’ behavior.
In this blog post, we first review the common kinds of recommender systems in use today. Then we dive into an examination of some of the most exciting recent developments in this domain. We compare and contrast these newer techniques with existing ones, and identify the gaps they fill.
Common systems in use today
To place the newer systems in context, let’s begin by reviewing well-established recommender systems. Many such systems can be categorized as either content-based filtering or collaborative filtering. Content-based filtering is one of the simplest approaches, but it is still useful in some situations. It is based on known user preferences, provided explicitly or implicitly, and on data about item features (such as the categories to which items belong). While these systems are easy to implement, their recommendations tend to feel static, and they have trouble dealing with new users whose preferences are unknown.
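As an illustration, here is a minimal content-based filtering sketch in Python. The items, genre features, and user history are all made up; the point is simply that a user profile built from liked items can be matched against unseen items by cosine similarity.

```python
# Minimal content-based filtering sketch (illustrative only).
import numpy as np

# Hypothetical item-feature matrix: rows are items, columns are genres.
item_features = np.array([
    [1, 0, 1],   # item 0: drama, sci-fi
    [1, 1, 0],   # item 1: drama, comedy
    [0, 1, 0],   # item 2: comedy
    [0, 0, 1],   # item 3: sci-fi
], dtype=float)

liked_items = [0, 3]                           # items the user interacted with
profile = item_features[liked_items].mean(axis=0)

# Cosine similarity between the user profile and every item.
norms = np.linalg.norm(item_features, axis=1) * np.linalg.norm(profile)
scores = item_features @ profile / np.clip(norms, 1e-9, None)
scores[liked_items] = -np.inf                  # do not re-recommend seen items

print("recommended item:", int(np.argmax(scores)))
```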
Collaborative filtering is based on (user, item, rating) tuples. So, unlike content-based filtering, it leverages other users’ experiences. Amazon.com was a pioneer of this approach and published an early paper that later won an Institute of Electrical and Electronics Engineers (IEEE) award as a paper that has best withstood the “test of time” [1, 2]. The main concept behind collaborative filtering is that users with similar tastes (based on observed user-item interactions) are more likely to have similar interactions with items they haven’t seen before.
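The sketch below (illustrative only, with made-up ratings) shows the item-to-item flavor of this idea: item similarities are computed from the rating matrix alone, with no item metadata, and an unseen item is scored by a similarity-weighted average of the user’s existing ratings.

```python
# Minimal item-to-item collaborative filtering sketch (illustrative only).
import numpy as np

# Hypothetical (user x item) rating matrix; 0 means "not rated".
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Cosine similarity between item columns.
norms = np.linalg.norm(R, axis=0)
item_sim = (R.T @ R) / np.outer(norms, norms)

user = 0
unrated = np.where(R[user] == 0)[0]
rated = np.where(R[user] > 0)[0]

# Predicted rating: similarity-weighted average of the user's known ratings.
for item in unrated:
    w = item_sim[item, rated]
    pred = (w @ R[user, rated]) / (np.abs(w).sum() + 1e-9)
    print(f"user {user}, item {item}: predicted rating {pred:.2f}")
```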
Compared to content-based filtering, collaborative filtering provides better results for diversity (how dissimilar recommended items are), serendipity (a measure of how surprising the successful or relevant recommendations are), and novelty (how unknown recommended items are to a user). However, collaborative filtering is more computationally expensive, and more complex and costly to implement and manage, although some collaborative filtering algorithms, such as factorization machines, are more lightweight than others. Collaborative filtering also has a cold start problem: it has difficulty recommending new items without a large amount of interaction data to train a model.
In addition to these two “classic” categories of recommender systems, various neural net architectures are common in recommender systems. Some implement a form of collaborative filtering. Others expand recommender systems to handle temporal data to make recommendations based on a sequence of user actions that reflect the evolution of user interests. These systems were originally based on various kinds of Recurrent Neural Nets (RNNs). They now leverage Transformer-based models with self-attention to learn dependencies among items in users’ behavior sequences [3].
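As a rough illustration (and not the architecture from [3]), the following PyTorch sketch embeds a user’s interaction sequence, applies self-attention with a standard Transformer encoder, and produces scores over the catalog for the next item. The catalog size, dimensions, and data here are placeholders.

```python
# Minimal sketch of a Transformer-based next-item recommender (illustrative only).
import torch
import torch.nn as nn

class NextItemTransformer(nn.Module):
    def __init__(self, num_items, dim=64, max_len=50):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)
        self.pos_emb = nn.Embedding(max_len, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(dim, num_items)    # scores over the catalog

    def forward(self, item_seq):                # item_seq: (batch, seq_len)
        positions = torch.arange(item_seq.size(1), device=item_seq.device)
        x = self.item_emb(item_seq) + self.pos_emb(positions)
        h = self.encoder(x)                     # self-attention over the sequence
        return self.out(h[:, -1, :])            # next-item logits

model = NextItemTransformer(num_items=1000)
batch = torch.randint(0, 1000, (8, 20))         # 8 users, 20 interactions each
next_item_scores = model(batch)                 # shape: (8, 1000)
```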
Neural nets are typically more data- and compute-intensive than non-deep-learning models such as factorization machines, though both kinds of models continue to be used. For example, Amazon SageMaker, a managed machine learning service that supports the complete project lifecycle from data labeling and processing through model deployment, includes built-in algorithms for both factorization machines and Object2Vec, a neural embedding algorithm that can be used in a recommender system.
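Training the built-in factorization machines algorithm with the SageMaker Python SDK looks roughly like the sketch below. The IAM role, S3 paths, instance type, and hyperparameter values are placeholders, and the training data is assumed to already be in the protobuf recordIO format the algorithm expects.

```python
# Rough sketch of training SageMaker's built-in factorization machines algorithm.
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"   # placeholder

image = image_uris.retrieve("factorization-machines", session.boto_region_name)

fm = Estimator(
    image_uri=image,
    role=role,
    instance_count=1,
    instance_type="ml.c5.xlarge",
    output_path="s3://my-bucket/fm-output/",      # placeholder S3 prefix
    sagemaker_session=session,
)
fm.set_hyperparameters(
    feature_dim=10000,           # size of the one-hot (user, item) feature space
    predictor_type="regressor",  # predict ratings; use binary_classifier for clicks
    num_factors=64,
)
fm.fit({"train": "s3://my-bucket/fm-train/"})     # placeholder S3 prefix
```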
New approaches
Researchers have experimented with many new approaches to recommender systems in the last few years. In fact, there are so many that we cannot cover them all here. Instead, we focus on a few interesting ones that have gained traction in the last couple of years.
It’s important to keep in mind that hybrid systems are increasingly popular. Some of these newer approaches are not mutually exclusive and can be combined with each other or with earlier techniques. An example is Amazon Personalize, a fully managed service for personalized recommendations. The preferred algorithm (“recipe”) in Amazon Personalize, user-personalization, combines a newer bandit-based approach with a hierarchical RNN based on a recent paper by AWS researchers [4].
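Once a campaign based on the user-personalization recipe has been deployed, retrieving recommendations is a single API call. The sketch below uses boto3; the campaign ARN and user ID are placeholders, and the campaign is assumed to already exist.

```python
# Sketch of requesting recommendations from an existing Amazon Personalize campaign.
import boto3

personalize_runtime = boto3.client("personalize-runtime")

response = personalize_runtime.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/my-campaign",
    userId="user-42",
    numResults=10,
)
for item in response["itemList"]:
    print(item["itemId"], item.get("score"))
```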
Bandit-based systems
An active area of research is recommender systems that incorporate bandit-based approaches. Bandit algorithms are a form of reinforcement learning (RL) that balances exploration of new possibilities with exploitation of profitable ones already discovered. They have frequently been used as an alternative to static A/B testing; a key advantage is their ability to adapt in real time, which can also help overcome the cold start problem.
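The core exploration/exploitation trade-off can be illustrated with a minimal epsilon-greedy bandit, sketched below against a simulated click model (the click probabilities are made up). Each arm could stand for a piece of content or a whole recommendation strategy.

```python
# Minimal epsilon-greedy multi-armed bandit sketch (illustrative only).
import random

class EpsilonGreedyBandit:
    def __init__(self, num_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * num_arms      # pulls per arm
        self.values = [0.0] * num_arms    # running mean reward per arm

    def select_arm(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))                          # explore
        return max(range(len(self.values)), key=self.values.__getitem__)       # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n   # incremental mean

bandit = EpsilonGreedyBandit(num_arms=3)
for _ in range(1000):
    arm = bandit.select_arm()
    clicked = 1 if random.random() < [0.02, 0.05, 0.1][arm] else 0   # simulated user
    bandit.update(arm, clicked)
print("best arm so far:", bandit.values.index(max(bandit.values)))
```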
In the context of recommender systems, bandit algorithms now have many applications and have been integrated into production-grade systems such as Amazon Personalize, which combines RNNs with bandits to provide more accurate user modeling (high relevance) and effective exploration. In fact, bandit algorithms could be used to make real-time selections among several recommender systems based on how users respond to the different recommendations provided by each system.
An increasingly important application of bandits is in systems that take into account multiple objectives and metrics related to user satisfaction, and/or multiple stakeholders (a “marketplace” of users, advertisers, platform holders, content owners, and so on). For example, in a music content recommender system, an additional objective might be to provide “fairness” for long-tail artists and content by ensuring they receive at least some recommendations. This approach has been researched by content providers such as Spotify, as discussed in an interesting, publicly available presentation by one of Spotify’s researchers [5].
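One simple way to express such a trade-off, sketched below purely for illustration (it is not Spotify’s method), is to scalarize the objectives into a single bandit reward that blends user satisfaction with a long-tail exposure bonus.

```python
# Illustrative scalarized multi-objective reward for a bandit.
def combined_reward(clicked, is_long_tail, fairness_weight=0.2):
    relevance = 1.0 if clicked else 0.0
    fairness = 1.0 if is_long_tail else 0.0
    return (1 - fairness_weight) * relevance + fairness_weight * fairness

# A click on a long-tail track earns more reward than a click on a hit.
print(combined_reward(clicked=True, is_long_tail=True))    # 1.0
print(combined_reward(clicked=True, is_long_tail=False))   # 0.8
```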
On AWS, there are multiple ways to use bandit-based systems. As mentioned in the preceding paragraph, Amazon Personalize provides a fully managed option for doing so. A less managed alternative is to use Amazon SageMaker RL, which includes prebuilt RL libraries and algorithms that make it easy to get started with reinforcement learning. The contextual bandits algorithm in Amazon SageMaker RL can be used to make recommendations by learning from user responses such as clicking a recommendation or not. There is a sample notebook available in a related article [6].
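For intuition, here is a minimal LinUCB-style contextual bandit sketch, independent of the SageMaker RL implementation: each arm maintains a ridge-regression estimate of reward as a function of the user context, plus an upper-confidence exploration bonus.

```python
# Minimal LinUCB-style contextual bandit sketch (illustrative only).
import numpy as np

class LinUCB:
    def __init__(self, num_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(num_arms)]      # X^T X + I per arm
        self.b = [np.zeros(dim) for _ in range(num_arms)]    # X^T y per arm

    def select_arm(self, context):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            bonus = self.alpha * np.sqrt(context @ A_inv @ context)
            scores.append(context @ theta + bonus)
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context

bandit = LinUCB(num_arms=5, dim=8)
context = np.random.rand(8)            # e.g. user features for this request
arm = bandit.select_arm(context)       # recommendation to show
bandit.update(arm, context, reward=1)  # 1 = user clicked, 0 = ignored
```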
Causal inference
While classical statistics deals with inference of associations, causal inference focuses on determining “how” and “why” under changing conditions, such as those brought about by external interventions or hypothetical counterfactuals. Typical recommender systems frame the recommendation task either as a distance learning problem (between pairs of products, or between users and products) or as a next-item prediction problem. However, a recommender system should not only attempt to model organic user behavior, but also influence it. This is where causal techniques help, potentially via simple modifications of standard matrix factorization methods [7].
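As a rough illustration of that idea (not the exact method of [7]), the sketch below re-weights observed interactions in a matrix factorization loss by inverse propensity scores, so items a user was very likely to be exposed to organically contribute less than items they were unlikely to see. The data and propensity estimates are placeholders.

```python
# Inverse-propensity-weighted matrix factorization sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
num_users, num_items, dim = 100, 50, 8
U = rng.normal(scale=0.1, size=(num_users, dim))   # user factors
V = rng.normal(scale=0.1, size=(num_items, dim))   # item factors

# (user, item, rating) observations and assumed per-item exposure propensities
# (in practice these would be estimated, for example from item popularity).
observations = [(0, 3, 5.0), (0, 7, 1.0), (2, 3, 4.0)]   # toy data
propensity = np.full(num_items, 0.5)                      # placeholder estimates

lr = 0.05
for _ in range(100):
    for u, i, r in observations:
        w = 1.0 / propensity[i]            # inverse-propensity weight
        err = r - U[u] @ V[i]              # prediction error on this interaction
        u_old = U[u].copy()
        U[u] += lr * w * err * V[i]        # weighted SGD updates
        V[i] += lr * w * err * u_old
```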
A related approach was taken by researchers who reframed the recommendation task as, “What would the rating be if we ‘forced’ the user to watch the movie?” [8]. Since this is a question about an intervention, it is a causal inference question. As with most causal inference questions, a central problem is unobserved confounders: variables that affect both which items users decide to interact with and how they rate them. By developing an algorithm that combines causal inference with existing recommender algorithms, the researchers were able to generate improved recommendations because they could take the confounders into account.
Causal inference can also be applied to create other kinds of hybrid systems. For example, a team of Amazon researchers applied causal inference to a bandit-based system. They found that focusing on causal effects leads to a better return on investment for personalized marketing by targeting only the persuadable customers who would not have taken the action organically [9].
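A common way to estimate such incremental (“uplift”) effects, shown below as an illustrative two-model sketch rather than the method of [9], is to fit separate response models for treated and untreated users and target only those with a positive predicted difference. The data here is synthetic.

```python
# Two-model uplift sketch on synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))                        # user features
treated = rng.integers(0, 2, size=2000)               # 1 = shown the promotion
# Synthetic outcome: some organic conversion plus a treatment effect that
# depends on the first feature (only some users are persuadable).
p = 1 / (1 + np.exp(-(0.5 * X[:, 0] * treated + 0.2 * X[:, 1] - 1)))
converted = rng.binomial(1, p)

model_t = LogisticRegression().fit(X[treated == 1], converted[treated == 1])
model_c = LogisticRegression().fit(X[treated == 0], converted[treated == 0])

# Uplift = predicted response if treated minus predicted response if not.
uplift = model_t.predict_proba(X)[:, 1] - model_c.predict_proba(X)[:, 1]
persuadables = np.where(uplift > 0.05)[0]             # users worth targeting
print(f"{len(persuadables)} of {len(X)} users look persuadable")
```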
Graph Neural Net-based approaches
In contrast to the preceding approaches, Graph Neural Nets (GNNs) are based on graphs that represent interactions between customers and items, or in which edges represent relationships between items represented as nodes. Compared to sequence-based neural nets, GNNs may sometimes have an advantage because there is not necessarily a fixed order to the items a user might be interested in.
Several different architectures have been used for GNNs in recommender systems. One such architecture is the graph convolutional matrix completion (GCMC) network [10]. GCMC formulates matrix completion as a link prediction task on a bipartite graph. This formulation allows the model to leverage structured external information sources such as social networks. When external information is combined with interaction data, the cold start problem can be alleviated.
For leveraging GNNs in recommender systems on AWS, a good starting point is the Deep Graph Library (DGL), an open source library built for easy implementation of GNNs. DGL is available as a deep learning container in Amazon ECR (a fully managed Docker container registry) for use in Amazon SageMaker. In the official Amazon SageMaker examples GitHub repository, there is an example notebook showing how to use a GCMC network from DGL with the well-known MovieLens dataset to train a movie recommendation model.
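To give a sense of what the input to such a model looks like, the sketch below builds a small bipartite user-item interaction graph with DGL; the interactions and ratings are toy placeholders rather than MovieLens data.

```python
# Sketch of a bipartite user-item interaction graph in DGL (toy data).
import dgl
import torch

# Edges: user u rated item i (one edge type per direction).
users = torch.tensor([0, 0, 1, 2])
items = torch.tensor([1, 2, 0, 2])

graph = dgl.heterograph({
    ("user", "rates", "item"): (users, items),
    ("item", "rated-by", "user"): (items, users),
})

# Ratings stored as edge features; node features could hold side information
# (for example, user demographics or item genres) to help with cold start.
graph.edges["rates"].data["rating"] = torch.tensor([5.0, 3.0, 4.0, 2.0])
print(graph)
```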
Conclusion
In the decades since Amazon published its seminal paper on collaborative filtering, the domain of recommender systems has greatly expanded. While this provides more options to suit different use cases, it also makes the choice of a system considerably more difficult. Some of the many factors to consider include:
- What business goals and metrics are used to evaluate the effectiveness of the system? Besides the usual metrics such as precision@k and coverage, others to consider include diversity, serendipity, and novelty (as discussed in preceding paragraphs).
- How to address the cold start problem for new users or new items?
- What is the desired latency for predictions (and possibly how much training time is acceptable)? This depends on model complexity.
- What scalability and hardware (instance type, in AWS terms) are required to train and serve the models effectively? Again, this depends on model complexity, and it will be a substantial factor in the cost of the solution.
- How interpretable is the model? This may be a key requirement for business stakeholders.
With respect to implementation, there are also potentially many decisions to make. To simplify this decision-making process, Amazon Personalize incorporates several approaches from recent research and lifts the burden of managing a recommender system at scale from data science and developer teams. If Amazon Personalize does not suit the use case, all of the preceding approaches can be implemented using Amazon SageMaker. As we mentioned in this post, Amazon SageMaker provides some relevant built-in algorithms as well as prebuilt, open source containers for contextual bandits, the DGL, and frameworks such as TensorFlow and PyTorch.
END NOTES:
[1] Greg Linden, Brent Smith, and Jeremy York, Amazon.com Recommendations: Item-to-Item Collaborative Filtering, IEEE Internet Computing, January-February 2003, retrieved from https://www.cs.umd.edu/~samir/498/Amazon-Recommendations.pdf
[2] Larry Hardesty, The history of Amazon’s recommendation algorithm, Amazon Science, November 2019, https://www.amazon.science/the-history-of-amazons-recommendation-algorithm
[3] Qiwei Chen, Huan Zhao, Wei Li, Pipei Huang, and Wenwu Ou, Behavior Sequence Transformer for E-commerce Recommendation in Alibaba, May 2019, https://arxiv.org/abs/1905.06874
[4] Yifei Ma, Balakrishnan (Murali) Narayanaswamy, Haibin Lin, and Hao Ding, Temporal-Contextual Recommendation in Real-Time, KDD ’20: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, July 2020, https://dl.acm.org/doi/10.1145/3394486.3403278
[5] Rishabh Mehrotra, Personalizing Explainable Recommendations with Multi-objective Contextual Bandits, April 2019, retrieved from https://www.youtube.com/watch?v=KoMKgNeUX4k
[6] Saurabh Gupta, Anna Luo, Bharathan Balaji, Siddhartha Agarwal, Vineet Khare, and Yijie Zhuang, Power contextual bandits using continual learning with Amazon SageMaker RL, August 2019, https://aws.amazon.com/blogs/machine-learning/power-contextual-bandits-using-continual-learning-with-amazon-sagemaker-rl
[7] Stephen Bonner and Flavian Vasile, Causal Embeddings for Recommendation, October 2018, Twelfth ACM Conference on Recommender Systems, https://arxiv.org/abs/1706.07639
[8] Yixin Wang, Dawen Liang, Laurent Charlin, and David Blei, Causal Inference for Recommender Systems, Fourteenth ACM Conference on Recommender Systems (RecSys ’20), September 2020, https://doi.org/10.1145/3383313.3412225
[9] Neela Sawant, Chitti Babu Namballa, Narayanan Sadagopan, and Houssam Nassif, Contextual Multi-Armed Bandits for Causal Marketing, October 2018, Proceedings of the 35th International Conference on Machine Learning, https://arxiv.org/pdf/1810.01859.pdf
[10] Rianne van den Berg, Thomas N. Kipf, and Max Welling, Graph Convolutional Matrix Completion, October 2017, https://arxiv.org/abs/1706.02263