AWS Database Blog

Using collaborative filtering on Yelp data to build a recommendation system in Amazon Neptune

“I’m hungry. Where should I go to eat?” It’s one of the most common questions we ask ourselves every day, and when you’re going out to spend money somewhere, you don’t want to simply pick a random place and try it; you want some assurance that the restaurant you choose matches what you’re looking for. Ideally, you follow the suggestion of a trusted friend who knows your tastes, but what do you do when that isn’t an option? For the last decade, people have increasingly been turning to apps like Yelp, Foursquare, or OpenTable, which gather the collective opinions of hundreds of thousands of diners and distill this information into personalized recommendations for each user in a process called collaborative filtering.

Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build applications that work with highly connected datasets. In this post, we use Neptune to ingest and analyze the Yelp Open Dataset, which contains a subset of business, review, and user data from real Yelp users and businesses. After reading this post, you should feel comfortable working with Neptune to build a recommendation engine using collaborative filtering.

Launching Neptune

To launch a Neptune and Amazon SageMaker instance to follow along with this post, see Analyze Amazon Neptune Graphs using Amazon SageMaker Jupyter Notebooks. The post provides an AWS CloudFormation template that works in several Regions.

The Yelp dataset is fairly large; an r5.12xlarge instance was used while testing the queries in this post. Be aware that you incur charges by launching this stack.

Formatting and loading data with Neptune

Amazon Neptune supports a variety of data formats when loading data. For this post, you query the data using Gremlin, a graph traversal and query language developed as part of the open-source Apache TinkerPop project, so you need to load the data using the CSV format for property graphs. For more information about data formats, see Load Data Formats. After you format the data, you use the Neptune bulk loading API to load the CSV files. For more information about bulk loading, see Using the Amazon Neptune Bulk Loader to Ingest Data.

The stack includes Jupyter notebooks with Python utility libraries for bulk loading data, querying the Neptune endpoints for loading errors, and launching a Gremlin console for running queries inside a browser terminal window. This significantly simplifies the process, which would otherwise require additional infrastructure steps, such as creating Amazon Simple Storage Service (Amazon S3) VPC endpoints for Neptune to use. This post first loads the Yelp business data file to completion before moving on to the other Yelp data files.

In the end, your Neptune graph will contain the following:

  • Business vertices.
  • User vertices.
  • Review edges, which traverse from a user to a business. Edges usually represent verbs (for this use case, “users review businesses”).
  • Friend edges, which create a relationship between users. Because edges in graph databases are directional and Yelp friendships go in both directions, you need to create two friend edges for each relationship, one in each direction (from UserA to UserB and from UserB to UserA). This happens naturally if both users are in the dataset, because UserA’s friends array contains UserB and UserB’s friends array contains UserA, so you don’t need to do anything special in the converter code. The verb for this edge is “users befriend users” (see the sample rows following this list).
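
For example, if UserA and UserB are friends, the converted friend edge file (whose CSV format is covered below) ends up with one row per direction. UserA and UserB here are placeholder IDs:

~id,~from,~to,~label
UserA-friend-UserB,UserA,UserB,friend
UserB-friend-UserA,UserB,UserA,friend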

The Yelp dataset is delivered as JSON, with one object per line and a similar structure for every business. See the following example data:

$ head -1 yelp_academic_dataset_business.json | jq
{
  "business_id": "f9NumwFMBDn751xgFiRbNA",
  "name": "The Range At Lake Norman",
  "address": "10913 Bailey Rd",
  "city": "Cornelius",
  "state": "NC",
  "postal_code": "28031",
  "latitude": 35.4627242,
  "longitude": -80.8526119,
  "stars": 3.5,
  "review_count": 36,
  "is_open": 1,
  "attributes": {
    "BusinessAcceptsCreditCards": "True",
    "BikeParking": "True",
    "GoodForKids": "False",
    "BusinessParking": "{'garage': False, 'street': False, 'validated': False, 'lot': True, 'valet': False}",
    "ByAppointmentOnly": "False",
    "RestaurantsPriceRange2": "3"
  },
  "categories": "Active Life, Gun/Rifle Ranges, Guns & Ammo, Shopping",
  "hours": {
    "Monday": "10:0-18:0",
    "Tuesday": "11:0-20:0",
    "Wednesday": "10:0-18:0",
    "Thursday": "11:0-20:0",
    "Friday": "11:0-20:0",
    "Saturday": "11:0-20:0",
    "Sunday": "13:0-18:0"
  }
}

The following Python 3 script processes this JSON file and converts it into the CSV format that Neptune expects. You don’t need every single property for your sample recommendation engine.

import json

# Convert the Yelp business JSON (one object per line) into the CSV format
# that the Neptune bulk loader expects.
wfp = open('csv/yelp_business_nodes.csv', 'w')
wfp.write('~id,~label,url,name,address,city,state,postal_code,stars:Float,categories\n')
with open("dataset/yelp_academic_dataset_business.json", "r") as rfp:
    for line in rfp:
        data = json.loads(line)
        line = '"%s",business,"%s","%s","%s","%s","%s","%s",%.1f,"%s"' % (
            data['business_id'],
            'https://www.yelp.com/biz/' + data['business_id'],
            data['name'].replace("'", "").replace('"', ""),
            data['address'],  # note: not sanitized like the name field
            data['city'],
            data['state'],
            data['postal_code'],
            data['stars'],
            '; '.join(map(str.strip, data['categories'].split(','))) if data['categories'] else '')
        wfp.write(line + '\n')
wfp.close()

The preceding script has a bug (it doesn’t sanitize the address field like it does with the business name field), which leads to problems importing the data. This was a real bug found while developing this post; we left it in to demonstrate debugging and correcting data loads. This post also includes a second version of the script that fixes the bug.

The following code shows the first two lines the script outputs:

$ head -2 csv/yelp_business_nodes.csv
~id,~label,url,name,address,city,state,postal_code,stars:Float,categories
"f9NumwFMBDn751xgFiRbNA",business,"https://www.yelp.com/biz/f9NumwFMBDn751xgFiRbNA","The Range At Lake Norman","10913 Bailey Rd","Cornelius","NC","28031",3.5,"Active Life; Gun/Rifle Ranges; Guns & Ammo; Shopping"

You should keep in mind the following:

  • Vertices always need an ID column that uniquely identifies the vertex. For this use case, the Yelp dataset already includes unique IDs for businesses, reviews, and users, which you can use.
  • Vertices always need a label column identifying the type of the vertex. For this use case, you use the strings business, review, and user.
  • The default data type for properties, unless otherwise specified, is a list of strings. These strings should be surrounded by double quotes, and if a property has more than one entry (as with business categories), the entries must be separated by semicolons.
  • Other data types can be specified in the CSV header (for example, stars:Float). This use case uses the stars rating for a business, which is rounded to the nearest 0.5 (for example, 3.0, 3.5, 4.0, 4.5).

To format and load your data, complete the following steps:

  1. Copy the CSV file you generated to an S3 bucket in your AWS account.
  2. On the Amazon SageMaker console, under Notebook, choose Notebook instances.
  3. Choose your notebook.
  4. Choose Open Jupyter.

  5. In Jupyter, from the New drop-down menu, choose conda_python3.

You now enter and run several commands in the notebook. You first load the Neptune utility library, which allows Jupyter to interact with Neptune. Then you bulk load the yelp_business_nodes.csv file from your S3 bucket.

If you attempt to use the bucket shown in this post, the load fails; the Yelp dataset license doesn’t allow us to distribute the dataset, so you need to download it and perform the transformation using your own S3 bucket.

  6. Enter the following code:
%run 'Neptune/util/neptune.py'
neptune.bulkLoad('s3://<your-s3-bucket>/path/to/yelp_business_nodes.csv')
  7. Choose Run.

The output of the bulkLoad() call includes a curl command. Inside that curl command, you see the endpoint of your Neptune cluster and a status endpoint specific to this bulk load request (for this post, https://neptunedbcluster-4sdblriicbcm.cluster-caw4ixqiolg1.us-east-2.neptune.amazonaws.com:8182/loader/a5f520e0-addd-4b45-a676-8f745f6b6846).

  8. Record this endpoint to use later.
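
If you prefer checking the load status from Python instead of curl, a minimal sketch like the following works. It assumes the requests library is available in the notebook; the status URL is the loader endpoint you just recorded, and the response’s payload.overallStatus.status field reports the state:

import time

import requests

# Poll the loader status endpoint recorded above until the load finishes.
status_url = ('https://<NEPTUNE ENDPOINT AND PORT>/loader/'
              'a5f520e0-addd-4b45-a676-8f745f6b6846')
while True:
    status = requests.get(status_url).json()
    state = status['payload']['overallStatus']['status']
    if state != 'LOAD_IN_PROGRESS':
        break
    time.sleep(15)
print(state)  # for example, LOAD_COMPLETED or LOAD_FAILED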

When the bulk load finishes, an exception is thrown from the bulk loader that tells you the status is LOAD_FAILED with one parsingError. See the following screenshot.

Jupyter notebooks allow you to run bash commands, so you can use curl to hit the loader API endpoint from earlier, appending the query parameters ?details=true&errors=true. This tells Neptune to show you the error details from the bulk load that caused the exception.

  9. Enter the following code in the Jupyter command cell:
%%bash
curl -s '<NEPTUNE ENDPOINT AND PORT>/loader/a5f520e0-addd-4b45-a676-8f745f6b6846?details=true&errors=true'

The following screenshot shows the output.

The response shows that an invalid character is in line number 108514 of the CSV file. See the following code:

"wUKqPYvtBwrNk-QHgPtaxg",business,"https://www.yelp.com/biz/wUKqPYvtBwrNk-QHgPtaxg","McDonalds","30 Brisdale Rd, Bldg "C", Building C","Brampton","ON","L7A 3G1",3.0,"Fast Food; Restaurants; Burgers; Coffee & Tea; Food"

The address column for this row contains embedded double quotes: "30 Brisdale Rd, Bldg "C", Building C", which makes the CSV format invalid. You could change the code to make sure any such entries are properly escaped, but for this use case, you can just strip out any single or double quotes from the address field during the JSON-to-CSV transformation by using Python’s replace() function, exactly as the script already does for the business name. After you make this change, the following is the correct Python code for this transformation:

import json

# Corrected converter: the address field is now sanitized the same way as
# the business name.
wfp = open('csv/yelp_business_nodes.csv', 'w')
wfp.write('~id,~label,url,name,address,city,state,postal_code,stars:Float,categories\n')
with open("dataset/yelp_academic_dataset_business.json", "r") as rfp:
    for line in rfp:
        data = json.loads(line)
        line = '"%s",business,"%s","%s","%s","%s","%s","%s",%.1f,"%s"' % (
            data['business_id'],
            'https://www.yelp.com/biz/' + data['business_id'],
            data['name'].replace("'", "").replace('"', ""),
            data['address'].replace("'", "").replace('"', ""),
            data['city'],
            data['state'],
            data['postal_code'],
            data['stars'],
            '; '.join(map(str.strip, data['categories'].split(','))) if data['categories'] else '')
        wfp.write(line + '\n')
wfp.close()

Before you upload the corrected CSV file to Amazon S3 and perform the bulk load again, you first need to tell Neptune to empty out the vertices you already loaded so you can load the CSV file again.

  10. Invoke clear(), which is a utility function included in the Neptune Python utility library mentioned earlier.

See the following screenshot for the output.

  11. Reload the file and perform the bulk load again.

You should now get a successful response with no errors shown. See the following screenshot.

  12. To verify there were no errors, query the loader API endpoint again, using the specific load ID from your previous load. See the following screenshot.

Now that you know the process, you can follow it for the other entity types using the following code, which is the corrected version without bugs (a reconstructed sketch for the user vertices follows the listing). Regarding edge data, keep in mind the following:

    • You convert the review’s date property to ISO 8601 format, which is important later when you write your queries.
    • Edges also need unique ID values, just like vertices do. For review edges, you can use the unique review_id provided by the Yelp dataset. Because there is no unique ID representing the relationship between two friends, you need to construct one. You can join the two user IDs with the string friend between them. For example, ntlvfPzc8eglqvk92iDIAw-friend-oeMvJh94PiGQnx_6GlndPQ, and the relationship in the other direction is oeMvJh94PiGQnx_6GlndPQ-friend-ntlvfPzc8eglqvk92iDIAw.
  13. Repeat the preceding steps for the remaining entity types. Use the following code:
import json
from datetime import datetime

# Business vertices (corrected version, with the address field sanitized).
wfp = open('csv/yelp_business_nodes.csv', 'w')
wfp.write('~id,~label,url,name,address,city,state,postal_code,stars:Float,categories\n')
with open("dataset/yelp_academic_dataset_business.json", "r") as rfp:
    for line in rfp:
        data = json.loads(line)
        line = '"%s",business,"%s","%s","%s","%s","%s","%s",%.1f,"%s"' % (
            data['business_id'],
            'https://www.yelp.com/biz/' + data['business_id'],
            data['name'].replace("'", "").replace('"', ""),
            data['address'].replace("'", "").replace('"', ""),
            data['city'],
            data['state'],
            data['postal_code'],
            data['stars'],
            '; '.join(map(str.strip, data['categories'].split(','))) if data['categories'] else '')
        wfp.write(line + '\n')
wfp.close()

# Review edges, from a user to a business. The review date is converted to
# ISO 8601 so it sorts lexicographically in later queries.
wfp = open('csv/yelp_review_edges.csv', 'w')
wfp.write('~id,~from,~to,~label,stars:Float,date,funny:Int,useful:Int,cool:Int,url,user_url\n')
with open("dataset/yelp_academic_dataset_review.json", "r") as rfp:
    for line in rfp:
        data = json.loads(line)
        line = '"%s","%s","%s",review,%.1f,"%s",%d,%d,%d,"%s","%s"' % (
            data['review_id'],
            data['user_id'],
            data['business_id'],
            data['stars'],
            datetime.strptime(data['date'], "%Y-%m-%d %H:%M:%S").isoformat(),
            data['funny'],
            data['useful'],
            data['cool'],
            'https://www.yelp.com/biz/' + data['business_id'] + '?hrid=' + data['review_id'],
            'https://www.yelp.com/user_details?userid=' + data['user_id'])
        wfp.write(line + '\n')
wfp.close()

# Friend edges, one in each direction; the edge ID joins the two user IDs
# with the string "friend" between them.
wfp = open('csv/yelp_friend_edges.csv', 'w')
wfp.write('~id,~from,~to,~label\n')
with open("dataset/yelp_academic_dataset_user.json", "r") as rfp:
    for line in rfp:
        data = json.loads(line)
        if 'friends' in data and data['friends']:
            for friend_id in map(str.strip, data['friends'].split(',')):
                line = '%s,%s,%s,friend' % (
                    '%s-friend-%s' % (data['user_id'], friend_id),
                    data['user_id'],
                    friend_id)
                wfp.write(line + '\n')
wfp.close()
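
The listing above doesn’t show the conversion for the user vertices, which the load commands below expect as yelp_user_nodes.csv. The following is a reconstructed sketch: the exact column set is an assumption, inferred from the user properties that appear in the valueMap() output later in this post.

import json

# Reconstructed user vertex converter (column set inferred from later output).
compliments = ['hot', 'more', 'profile', 'cute', 'list', 'note',
               'plain', 'cool', 'funny', 'writer', 'photos']
wfp = open('csv/yelp_user_nodes.csv', 'w')
wfp.write('~id,~label,name,url,elite,friends,useful:Int,funny:Int,cool:Int,' +
          ','.join('compliment_%s:Int' % c for c in compliments) + '\n')
with open("dataset/yelp_academic_dataset_user.json", "r") as rfp:
    for line in rfp:
        data = json.loads(line)
        line = '"%s",user,"%s","%s","%s","%s",%d,%d,%d,%s' % (
            data['user_id'],
            data['name'].replace("'", "").replace('"', ""),
            'https://www.yelp.com/user_details?userid=' + data['user_id'],
            '; '.join(map(str.strip, data['elite'].split(','))) if data['elite'] else '',
            '; '.join(map(str.strip, data['friends'].split(','))) if data['friends'] else '',
            data['useful'],
            data['funny'],
            data['cool'],
            ','.join(str(data['compliment_' + c]) for c in compliments))
        wfp.write(line + '\n')
wfp.close()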
  14. Upload those files to your S3 bucket and run the bulk load for all three files.

If you have the CSV files of edges and vertices in a single bucket, Neptune detects which is which and loads them in the correct order. Neptune also allows you to queue ingest jobs using the queueRequest parameter. For more information about the loader options, see Neptune Loader Command.
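
For reference, the following sketch shows what queueing a load looks like against the loader API directly (bulkLoad() wraps this same API). The role ARN is a placeholder for your own Neptune load role, and the requests library is an assumption:

import json

import requests

# Queue a bulk load request behind any load that is already in progress.
payload = {
    'source': 's3://<YOUR_S3_BUCKET>/yelp_review_edges.csv',
    'format': 'csv',
    'iamRoleArn': '<YOUR NEPTUNE LOAD ROLE ARN>',
    'region': 'us-east-2',
    'failOnError': 'FALSE',
    'queueRequest': 'TRUE',
}
resp = requests.post('https://<NEPTUNE ENDPOINT AND PORT>/loader',
                     data=json.dumps(payload),
                     headers={'Content-Type': 'application/json'})
print(resp.json())  # the response payload contains the loadId for status checks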

When debugging the ingest phase, load each file one at a time so you can more easily detect which file has a problem. When loading files individually, load the vertices before the edges: each edge references a relationship between two vertices, and if those vertices don’t already exist, the edge fails to load. neptune.bulkLoad() is a helper function from the Python utility library. See the following code:

neptune.bulkLoad('s3://<YOUR_S3_BUCKET>/yelp_user_nodes.csv')
neptune.bulkLoad('s3://<YOUR_S3_BUCKET>/yelp_review_edges.csv')
neptune.bulkLoad('s3://<YOUR_S3_BUCKET>/yelp_friend_edges.csv')

If you worked for Yelp and had access to the full dataset of all users, businesses, and reviews, you could load the full graph. However, this is a sample dataset, which means a review might exist in the dataset while its associated user or business does not. The same is true for friend relationships: UserA might have UserB in their friends array, but UserB isn’t in the dataset. Because of this, you see FROM_OR_TO_VERTEX_ARE_MISSING errors from the edge bulkLoad() calls, with an error message such as "Either from vertex, 'dxCce2EdLOAy78Y3fAXCgw', or to vertex, 'XXuS66qQ8np3D3_j_7YMAQ', is not present." See the following screenshot.

You could have your Python code check for these dangling edges and remove them from the CSV files, but the Python utility’s bulk loading functions default the failOnError flag to FALSE. The bulk loader reports what fails, so for this use case it’s easier to pass the edges through and ignore these errors, because there’s nothing you can do about them with the incomplete dataset.
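
That said, if you prefer a clean load, a pre-filtering sketch along the following lines would work for the friend edges (review edges are analogous); it assumes the file layout used by the converter scripts above:

import json

# Collect the set of user IDs that actually exist in the sample dataset.
user_ids = set()
with open("dataset/yelp_academic_dataset_user.json", "r") as rfp:
    for line in rfp:
        user_ids.add(json.loads(line)['user_id'])

# Keep only friend edges whose endpoints both exist. The friend edge rows
# are unquoted ~id,~from,~to,~label and the IDs contain no commas, so a
# simple split is safe here.
with open('csv/yelp_friend_edges.csv') as rfp, \
        open('csv/yelp_friend_edges_clean.csv', 'w') as wfp:
    wfp.write(rfp.readline())  # copy the header row
    for line in rfp:
        _, from_id, to_id, _ = line.rstrip('\n').split(',')
        if from_id in user_ids and to_id in user_ids:
            wfp.write(line)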

Running queries against Neptune

Now that you have ingested your dataset into Neptune, you can start querying that data. For this post, we use Gremlin queries to explore the data. You can run these queries directly from the Jupyter notebook. See the following code:

%run 'Neptune/util/neptune.py'

g = neptune.graphTraversal()
g.V().count().toList()

The following screenshot shows the output.

You may prefer using the Gremlin console when developing queries instead of the Jupyter notebook. Fortunately, this stack includes a web-based Linux terminal and installs the Gremlin console for you.

  1. In Jupyter, from the New drop-down menu, choose Terminal.

When you’re in the terminal, you need to change to the /home/ec2-user/neptune-tools/apache-tinkerpop-gremlin-console-3.4.1 directory and execute bin/gremlin.sh to start the console.

  2. Tell the console to connect to your remote Neptune instance with the following two commands:
:remote connect tinkerpop.server conf/neptune-remote.yaml 
:remote console

Your console is now ready to run queries. See the following screenshot.

Exploring the data

This section demonstrates how to explore your data with several different queries and determine the following:

  • How many of each entity type are in the dataset?
  • Which cities in the database have the most businesses?
  • Which cities have the most reviews?
  • Who are the top five reviewers in the city with the most reviews?

Number of entity types

To determine how many of each entity type are in the dataset, see the following code:

gremlin> g.V().hasLabel('business').count()
==>208869
gremlin> g.V().hasLabel('user').count()
==>1968703
gremlin> g.E().hasLabel('review').count()
==>8018979
gremlin> g.E().hasLabel('friend').count()
==>19205058

The output tells you that the dataset has roughly 200,000 business vertices, 2 million user vertices, 8 million review edges (which link user and business vertices) and 19 million friend edges (which link user vertices to other user vertices).

Cities with the most businesses

To determine which cities in the dataset have the most businesses, see the following code:

g.V().hasLabel('business').
      groupCount().
        by('city').
      order(local).
        by(values, desc).
      unfold().
      limit(5)  
==>Las Vegas=31556
==>Toronto=20330
==>Phoenix=20102
==>Charlotte=10400
==>Scottsdale=9331

The results show that Las Vegas is the clear winner, with 31,556 businesses. However, the fact that it has a lot of businesses in the dataset doesn’t mean it has the largest user base or set of reviews for you to analyze.

Cities with the most reviews

The query to determine which cities have the most reviews traverses the graph between vertices. When you’re sitting on an edge, it’s easy to forget which way the inV() and outV() steps take you, so pick a random review edge and try both to see what happens. See the following code:

g.E().hasLabel('review').
  inV().
  limit(1).
  valueMap()
==>{address=[2539 W Bethany Home Rd], city=[Phoenix], name=[Lunch Box], state=[AZ], stars=[3.0], categories=[Diners; Breakfast & Brunch; Mexican; American (Traditional); Restaurants], postal_code=[85017], url=[https://www.yelp.com/biz/30PzafMyWvh5fYZ8yV_qBg]}

g.E().hasLabel('review').
  outV().
  limit(1).
  valueMap()
==>{compliment_writer=[0], compliment_more=[1], compliment_funny=[0], cool=[0], compliment_plain=[0], friends=[None], compliment_note=[0], compliment_profile=[0], compliment_hot=[0], name=[Linda], compliment_photos=[0], compliment_cool=[0], compliment_list=[0], compliment_cute=[0], useful=[14], funny=[3]}

Starting with g.E() is generally frowned upon, because a graph can contain many types of edges with counts numbering into the billions. You should start your traversals with g.V(), so let’s rewrite the previous two queries (and all future queries) to start with vertices instead of edges. See the following code:

g.V().outE('review').
  limit(1).
  inV().
  valueMap()
==>{address=[2539 W Bethany Home Rd], city=[Phoenix], name=[Lunch Box], state=[AZ], stars=[3.0], categories=[Diners; Breakfast & Brunch; Mexican; American (Traditional); Restaurants], postal_code=[85017], url=[https://www.yelp.com/biz/30PzafMyWvh5fYZ8yV_qBg]}

g.V().outE('review').
  limit(1).
  outV().
  valueMap()
==>{compliment_more=[1], compliment_writer=[0], compliment_funny=[0], cool=[0], compliment_plain=[0], compliment_note=[0],friends=[None], url=[https://www.yelp.com/user_details?userid=hGefRHR-6K5rGaSVkhVJhQ], compliment_profile=[0], compliment_hot=[0], name=[Linda], compliment_photos=[0], compliment_cool=[0], compliment_list=[0], compliment_cute=[0], useful=[14], funny=[3]}

If you’re sitting on a review edge, inV() takes you to the business vertex for that review, and outV() takes you to the vertex of the user who wrote it. The edge direction follows the verb (users review businesses), so the user is always the out vertex and the business is the in vertex.

One way to approach the original question (which cities in the dataset have the most reviews) is to start with the set of all edges and see which cities have the most reviews. See the following code:

g.V().outE('review').
  inV().
  groupCount().by('city').
  order(local).by(values, desc).
  unfold().
  limit(5)
{"requestId":"eb8f7dd5-5cf1-4c21-9826-25d6475ea7db","code":"UnsupportedOperationException","detailedMessage":"Encountered a traverser that does not map to a value for child traversal: value(city)"}

This query returned an error indicating that some of the vertices are missing the city property. If you want to find the businesses that don’t have a city property and clean up your dataset, you can locate them using the hasNot() step. See the following code:

g.V().hasLabel('business').
  hasNot('city').
  valueMap()
==>{address=[262-195 Balzac Blvd, Rockview County], name=[Balzac Campground R.V. Park & Storage], state=[AB], stars=[1.0],categories=[Campgrounds; Hotels & Travel], postal_code=[T4B 2T3], url=[https://www.yelp.com/biz/Nbnp_ICeMwv5Lt0Zl-rDjw]}
==>{name=[NSFC Strikers Soccer Complex], state=[AZ], stars=[2.5], categories=[Soccer; Active Life], url=[https://www.yelp.com/biz/WpC53SqwoCY5AuYIFr_1eA]}

Because there are only two such businesses, you can filter them out of the query by adding a .has('city') step after traversing to the business vertices. See the following code:

g.V().outE('review').
  inV().has('city').
  groupCount().by('city').
  order(local).by(values, desc).
  unfold().
  limit(5)
==>Las Vegas=2444475
==>Phoenix=876351
==>Toronto=600440
==>Scottsdale=457523
==>Charlotte=384693

Las Vegas is also the clear winner on number of reviews.

Now that you know Las Vegas has the most reviews, you can focus your analysis there as you drill down.

Top five reviewers

For this use case, you need to pick a user and generate recommendations for them. It’s hard to make informed recommendations for users who don’t have reviews, because you can’t judge what they like and don’t like, so you need to find a user who has written several reviews.

To determine which users in this dataset are the top five reviewers in Las Vegas, build the query one step at a time.

First, find all the businesses in Las Vegas. See the following code:

g.V().hasLabel('business').
  has('city', 'Las Vegas').
  count()
==>31556

That’s reassuring because 31,556 is the same number you got from the groupCount() query.

Next, find all the reviews for these Las Vegas businesses with the following code:

g.V().hasLabel('business').
  has('city', 'Las Vegas').
  inE('review').
  count()
==>2444475

Again, the result matches the previous groupCount() result.

Lastly, find the top users who wrote reviews in Las Vegas by grouping and counting the users represented in those reviews. You do that by starting with all the reviews of Las Vegas businesses and stepping to the user vertices, where you can group by user ID and get the counts for the five users with the most reviews. See the following code:

g.V().hasLabel('business').
  has('city', 'Las Vegas').
  inE('review').outV().
  groupCount().by(id).
  order(local).by(values, desc).
  unfold().
  limit(5)
==>bLbSNkLggFnqwNNzzq-Ijw=2278
==>PKEzKWv_FktMm2mGPjwd0Q=1829
==>U4INQZOPSUaj8hMjLlZ3KA=1246
==>UYcmGbelzRa0Q6JqzLoguw=1111
==>n86B7IkbU20AkxlFX_5aew=1094

The first result shows a user with ID bLbSNkLggFnqwNNzzq-Ijw, who wrote 2,278 reviews in Las Vegas. This is the kind of super user you want when testing your recommendation engine because you have a lot of indicators of what they like.

Building out the recommendation engine

Now that you know the ID of a user with lots of reviews, you want to build a recommendation list for them. Start your analysis on that specific user vertex. See the following code:

g.V('bLbSNkLggFnqwNNzzq-Ijw').
  valueMap().
    with(WithOptions.tokens).
  unfold().
  filter(select(keys).
    is(without('friends')))
==>compliment_writer=[730]
==>compliment_more=[84]
==>compliment_funny=[1844]
==>cool=[35650]
==>id=bLbSNkLggFnqwNNzzq-Ijw
==>compliment_plain=[1762]
==>compliment_note=[741]
==>elite=[2012; 2013; 2014; 2015; 2016; 2017; 2018]
==>compliment_profile=[64]
==>compliment_hot=[752]
==>label=user
==>name=[Jane]
==>compliment_photos=[1103]
==>compliment_cool=[1844]
==>compliment_list=[16]
==>compliment_cute=[40]
==>useful=[51123]
==>funny=[25247]

You use these valueMap() options repeatedly throughout the rest of this post, so note the following:

  • This user has a very large friends array. The valueMap() step allows you to specify which keys to include in the results, but there’s no option to include all keys except a select few. You can accomplish this with a filter step that includes everything except the friends key (add more entries to the without() step if you need to exclude more than just friends).
  • with(WithOptions.tokens) adds the ID and label properties to the output.
  • For the remainder of this post, we refer to this user as Jane (names have been changed to protect the innocent).

You can ask a few simple questions of Jane’s data, like how many reviews they have written. See the following code:

g.V('bLbSNkLggFnqwNNzzq-Ijw').
  outE('review').
  count()
==>2555

2,555 is slightly larger than the 2,278 you saw in the previous query, because the previous query only counted reviews in Las Vegas and Jane also has some reviews outside of Las Vegas.

You can calculate a histogram of Jane’s star ratings. The following code orders the results by key (the star rating) in descending order:

g.V('bLbSNkLggFnqwNNzzq-Ijw').
  outE('review').
  groupCount().by('stars').
  order(local).by(keys, desc).
  unfold()
==>5.0=57
==>4.0=1065
==>3.0=1127
==>2.0=263
==>1.0=43

You can deduce that Jane doesn’t often give 5-star reviews, and you have a large set of information about what Jane likes given the total number of 4- and 5-star reviews. This gives you good information for the recommendation engine you’re about to build.

You can also count Jane’s 4- and 5-star reviews by using where() or has(). Both give the same answer, but for simple cases that just compare a property with a value, you should use has(). The where() step allows a traversal inside its body, which has() does not, and there’s no guarantee the query planner will optimize it down to the same pattern that has() creates. See the following code:

g.V('bLbSNkLggFnqwNNzzq-Ijw').
  outE('review').
  where(values('stars').is(gte(4))).
  count()
==>1122

g.V('bLbSNkLggFnqwNNzzq-Ijw').
  outE('review').
  has('stars', gte(4)).
  count()
==>1122

Your basic algorithm for building out the recommendation list is as follows:

  1. Find Jane’s 5-star reviews to know which businesses they liked. In later queries, we add an aggregate() step to save this list into a temporary variable, myLikes, so you can refer to it later in the query.
  2. Find all the users who also gave 4- or 5-star reviews to the same businesses that Jane gave 5 stars to.
  3. Some of those users from Jane’s like-minded group might have liked more than one of the same business that Jane did, so you also deduplicate the list.
  4. From a graph point of view, Jane is also in the like-minded group (Jane liked the same places Jane liked), so you need to remove their vertex from the list.
  5. Take the top 100 people from Jane’s like-minded group and find the top 10 places the group liked that Jane hasn’t reviewed, sorted by average star rating. You use the myLikes aggregated set (also known as a bulk set) to filter the businesses Jane has already reviewed out of the candidates. This is your master recommendation list, which you refine further later in this post.

The following code builds the recommendation list:

# Get Jane's 5 star reviews

g.V('bLbSNkLggFnqwNNzzq-Ijw').
  outE('review').
  has('stars', eq(5))

# Users who liked a place that Jane liked

g.V('bLbSNkLggFnqwNNzzq-Ijw').
  outE('review').
  has('stars', eq(5)).
  inV().
  inE('review').
  has('stars', gte(4)).
  outV().
  count()

# Dedup the user list

g.V('bLbSNkLggFnqwNNzzq-Ijw').
  outE('review').
  has('stars', eq(5)).
  inV().
  inE('review').
  has('stars', gte(4)).
  outV().
  dedup().
  count()

# Jane is in this group too (Jane liked places that Jane liked), so filter out the Jane vertex

g.V('bLbSNkLggFnqwNNzzq-Ijw').
  outE('review').
  has('stars', eq(5)).
  inV().
  inE('review').
  has('stars', gte(4)).
  outV().
  dedup().
  not(has(id, 'bLbSNkLggFnqwNNzzq-Ijw')).
  count()

# Top 100 people who liked the most places in common with Jane (the like-minded group)

g.V('bLbSNkLggFnqwNNzzq-Ijw').
  outE('review').
  has('stars', eq(5)).
  inV().
  aggregate('myLikes').
  inE('review').
  has('stars', gte(4)).
  outV().
  dedup().
  not(has(id, 'bLbSNkLggFnqwNNzzq-Ijw')).
  order().
    by(local(outE('review').has('stars', gte(4)).inV().where(within('myLikes')).count()), desc).
    limit(100)

# Top 10 Places they liked that Jane hasn't reviewed

g.V('bLbSNkLggFnqwNNzzq-Ijw').
  outE('review').
  has('stars', eq(5)).
  inV().
  aggregate('myLikes').
  inE('review').
  has('stars', gte(4)).
  outV().
  dedup().
  not(has(id, 'bLbSNkLggFnqwNNzzq-Ijw')).
  order().
    by(local(outE('review').has('stars', gte(4)).inV().where(within('myLikes')).count()), desc).
  limit(100).
  outE('review').
  has('stars', gte(4)).
  inV().
  where(without('myLikes')).
  dedup().
  limit(10)

# Places with the most stars they liked that Jane hasn't reviewed

g.V('bLbSNkLggFnqwNNzzq-Ijw').
  outE('review').
  has('stars',eq(5)).
  inV().
  aggregate('myLikes').
  inE('review').
  has('stars', gte(4)).
  outV().
  dedup().
  not(has(id, 'bLbSNkLggFnqwNNzzq-Ijw')).
  order().
    by(local(outE('review').
      has('stars', gte(4)).
      inV().
      where(within('myLikes')).
      count()), desc).
  limit(100).
  outE('review').
  has('stars', gte(4)).
  inV().
  where(without('myLikes')).
  dedup().
  order().
    by('stars', desc).
  limit(10)

The following code shows the first business on that list:

g.V('bLbSNkLggFnqwNNzzq-Ijw').
  outE('review').
  has('stars', eq(5)).
  inV().
  aggregate('myLikes').
  inE('review').
  has('stars', gte(4)).
  outV().
  dedup().
  not(has(id, 'bLbSNkLggFnqwNNzzq-Ijw')).
  order().
    by(local(outE('review').has('stars', gte(4)).inV().where(within('myLikes')).count()), desc).
    limit(100).
  outE('review').
  has('stars', gte(4)).
  inV().
  where(without('myLikes')).
  dedup().
  order().
    by('stars', desc).
    limit(1).
  valueMap('name', 'url', 'stars', 'address', 'city', 'state', 'categories').
  by(unfold()).
  unfold()
==>name=Bronze Cafe
==>url=https://www.yelp.com/biz/uOwWmfGjhC6Ddko2Z_wGIA
==>stars=5.0
==>address=2380 N Buffalo Dr, Ste 110
==>city=Las Vegas
==>state=NV
==>categories=Breakfast & Brunch; Coffee & Tea; Vegan; Salad; Food; Restaurants; Soup; Sandwiches

This breakfast and brunch spot in Las Vegas averages 5 stars, the highest rating possible. It appears we’re on the right track!

From this point you could make several further refinements to your recommendation list:

  • Only return businesses with Coffee & Tea as a category for when Jane wants a dose of caffeine.
  • Only return highly rated businesses that have over 100 reviews. This helps reduce the noise from businesses that have only one or two reviews posted by friends of the owner.
  • Only return businesses that are open for business right now (we omitted operating hours when loading the Yelp dataset to save space; you would need to add them back in).
  • Filter the list using geospatial boundaries, for example, if Jane wants a coffee shop within walking distance. This would best be achieved by dual-ingesting the data into both Neptune and Amazon OpenSearch Service, a secondary search indexing system that supports geospatial queries.
  • Only show businesses that opened recently, for example if Jane is interested in trying new places.

These refinements aren’t mutually exclusive. For example, Jane might want to find a coffee shop within walking distance that opened within the last 6 months.
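
As a starting point for experimenting, a single refinement is often just one or two extra steps in the traversal. The following sketch uses the gremlinpython traversal source g from earlier in this post (the import lines are assumptions about your notebook environment) to keep only Coffee & Tea businesses in Las Vegas with more than 100 reviews:

from gremlin_python.process.graph_traversal import __
from gremlin_python.process.traversal import P

# has() matches any value of the multi-valued categories property, and
# where() keeps only businesses with more than 100 review edges.
(g.V().hasLabel('business').
   has('city', 'Las Vegas').
   has('categories', 'Coffee & Tea').
   where(__.inE('review').count().is_(P.gt(100))).
   limit(5).
   valueMap('name', 'stars').
   toList())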

We leave the first four query refinements for you to experiment with. Finding businesses that opened recently presents a special challenge because there isn’t any data in the business vertex to tell you when the business was started. However, you can find the earliest review for that business and use it as a proxy for the opening date of the business. In other words, if the first review for that business was 6 months ago, you can infer that the business opened in the last 6 months because very few businesses would be open for years without receiving a single Yelp review. Each review has a date property in ISO 8601 format. ISO 8601 format is lexicographically sortable, so to find reviews that came in 2020 or later, look for reviews where the date string is greater than 2020-01-01.
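
A quick way to convince yourself of the lexicographic property, in plain Python with no graph required:

# ISO 8601 strings compare correctly as plain strings, so the string
# comparison inside a Gremlin has() step matches chronological order.
dates = ['2020-06-01T08:00:00', '2019-01-15T12:30:00', '2020-01-01T00:00:00']
print(sorted(dates))
# ['2019-01-15T12:30:00', '2020-01-01T00:00:00', '2020-06-01T08:00:00']
print('2019-12-31T23:59:59' < '2020-01-01T00:00:00')  # True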

However, the version of the Yelp dataset used for this post ends in 2019. To determine which of Jane’s recommendations opened in 2019, see the following code:

g.V('bLbSNkLggFnqwNNzzq-Ijw').
  outE('review').
  has('stars', eq(5)).
  inV().
  aggregate('myLikes').
  inE('review').
  has('stars', gte(4)).
  outV().
  dedup().
  not(has(id, 'bLbSNkLggFnqwNNzzq-Ijw')).
  order().
    by(local(outE('review').has('stars', gte(4)).inV().where(within('myLikes')).count()), desc).
    limit(100).
  outE('review').
  has('stars', gte(4)).
  inV().
  where(without('myLikes')).
  dedup().
  order().
    by('stars', desc).
  aggregate('recos').
  local(inE('review').
    order().by('date', asc).limit(1)).
  has('date', gte('2019-01-01')).
  limit(3).
  inV().
  valueMap('name', 'url', 'stars', 'address', 'city', 'state', 'categories').
  by(unfold()).
  unfold()
==>name=Bronze Cafe
==>url=https://www.yelp.com/biz/uOwWmfGjhC6Ddko2Z_wGIA
==>stars=5.0
==>address=2380 N Buffalo Dr, Ste 110
==>city=Las Vegas
==>state=NV
==>categories=Breakfast & Brunch; Coffee & Tea; Vegan; Salad; Food; Restaurants; Soup; Sandwiches
==>name=Water Grill - Las Vegas
==>url=https://www.yelp.com/biz/sVjL9DQ8hYW_pS-jprYLgg
==>stars=5.0
==>address=3500 Las Vegas Blvd S
==>city=Las Vegas
==>state=NV
==>categories=Restaurants; Seafood
==>name=Tatsujin X
==>url=https://www.yelp.com/biz/MPD8oyot_RIS4ECrtKAhtw
==>stars=5.0
==>address=4439 W Flamingo Rd
==>city=Las Vegas
==>state=NV
==>categories=Restaurants; Japanese; Steakhouses; Teppanyaki

The Bronze Cafe, Water Grill, and Tatsujin X are some new restaurants that Jane might want to try.

Cleaning up

If you launched the CloudFormation stack to follow along with this post, don’t forget to delete that stack to avoid incurring unnecessary costs.

Conclusion

Neptune is a powerful managed graph database service that can help you quickly build a recommendation system like the one in this post, which uses collaborative filtering on restaurant review data.

If you want to generate recommendations from your data, get started today!

 


About the Author

Chad Tindel is a DynamoDB Specialist Solutions Architect based out of New York City. He works with large enterprises to evaluate, design, and deploy DynamoDB-based solutions. Prior to joining Amazon he held similar roles at Red Hat, Cloudera, MongoDB, and Elastic.