
COMP9313:
Big Data Management
Recommender System
Source: Dr. Xin Cao

Recommendations
Examples: Search, Recommendations
Items: Products, web sites, blogs, news items, …

Recommender Systems

Recommender Systems
•Application areas
• Movie recommendation (Netflix)
• Related product recommendation (Amazon)
• Web page ranking (Google)
• Social recommendation (Facebook)
•…

Netflix Movie Recommendation

Why Use Recommender Systems?
• Value for the customer
• Find things that are interesting
• Narrow down the set of choices
• Help me explore the space of options
• Discover new things
• Entertainment
•…
• Value for the provider
• Additional and probably unique personalized service for the customer
• Increase trust and customer loyalty
• Increase sales, click-through rates, conversion, etc.
• Opportunities for promotion, persuasion
• Obtain more knowledge about customers
•…

Recommender Systems
• RS seen as a function
• Given:
• User model (e.g. ratings, preferences, demographics, situational context)
• Items (with or without description of item characteristics)
• Find:
• Relevance score. Used for ranking.
• Finally:
• Recommend items that are assumed to be relevant
• But:
• Remember that relevance might be context-dependent
• Characteristics of the list itself might be important (diversity)

Formal Model
• X = set of Customers
• S = set of Items
• Utility function u: X × S → R
• R = set of ratings
• R is a totally ordered set
• e.g., 0-5 stars, real number in [0,1]
• Utility Matrix

            Avatar   LOTR   Matrix   Pirates
  Alice       1               0.2
  Bob                 0.5               0.3
  Carol      0.2               1
  David                                 0.4
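To make the formal model concrete, here is a minimal sketch (not from the slides) of the utility matrix above as a sparse nested mapping; the helper name u is an illustrative assumption:

```python
# Sparse representation of the example utility matrix: u(x, s) is defined
# only where a rating exists (illustrative sketch, not from the slides).
utility = {
    "Alice": {"Avatar": 1.0, "Matrix": 0.2},
    "Bob":   {"LOTR": 0.5, "Pirates": 0.3},
    "Carol": {"Avatar": 0.2, "Matrix": 1.0},
    "David": {"Pirates": 0.4},
}

def u(x, s):
    """Utility function u: X × S → R; returns None where the rating is unknown."""
    return utility.get(x, {}).get(s)

print(u("Alice", "Matrix"), u("Alice", "Pirates"))  # 0.2 None
```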

Key Problems
•Gathering “known” ratings for matrix
• How to collect the data in the utility matrix
•Extrapolate unknown ratings from the known ones
• Mainly interested in high unknown ratings
• We are not interested in knowing what you don’t like but what you like
•Evaluating extrapolation methods
• How to measure success/performance of recommendation methods

Gathering Ratings
• Explicit
• Ask people to rate items
• Doesn’t work well in practice – people can’t be bothered
• Implicit
• Learn ratings from user actions
• E.g., purchase implies high rating

Paradigms of recommender systems
Recommender systems reduce information overload by estimating relevance

Paradigms of recommender systems
Personalized recommendations

Paradigms of recommender systems
Collaborative: “Tell me what’s popular among my peers”

Paradigms of recommender systems
Content-based: “Show me more of what I’ve liked”

Paradigms of recommender systems
Knowledge-based: “Tell me what fits based on my needs”

Paradigms of recommender systems
Hybrid: combinations of various inputs and/or composition of different mechanisms

Content-based Recommendation
show me more of what I’ve liked

Content-based Recommendations
• Main idea: Recommend items to customer x similar to previous items rated highly by x
• What do we need:
• Some information about the available items such as the genre (“content”)
• Some sort of user profile describing what the user likes (the preferences)
• Example:
• Movie recommendations:
• Recommend movies with same actor(s), director, genre, …
• Websites, blogs, news:
• Recommend other sites with “similar” content

Plan of Action
[Diagram: items the user likes (e.g. red circles and triangles) are used to build item profiles; these are matched against the user profile to recommend new items]

What is the “Content”?
• Most CB-recommendation techniques were applied to recommending text documents
• Like web pages or newsgroup messages for example
• Content of items can also be represented as text documents.
• With textual descriptions of their basic characteristics.
• Structured: Each item is described by the same set of attributes

  Title                 Genre              Author             Type       Price  Keywords
  The Night of the Gun  Memoir             David Carr         Paperback  29.90  Press and journalism, drug addiction, personal memoirs, New York
  The Lace Reader       Fiction, Mystery   Brunonia Barry     Hardcover  49.90  American contemporary fiction, detective, historical
  Into the Fire         Romance, Suspense  Suzanne Brockmann  Hardcover  45.90  American fiction, murder, neo-Nazism

• Unstructured: free-text description

Item Profiles
•For each item, create an item profile
•Profile is a set (vector) of features
• Movies: author, title, actor, director,…
• Text: Set of “important” words in document
•How to pick important features?
• Usual heuristic from text mining is TF-IDF
(Term frequency * Inverse Doc Frequency)
• Term … Feature
• Document … Item
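As a rough illustration of the TF-IDF heuristic, here is a minimal sketch that builds item profiles from free-text descriptions; the three toy documents and the whitespace tokenizer are assumptions, not from the slides:

```python
import math
from collections import Counter

# Toy item descriptions (illustrative, not from the slides)
docs = {
    "item1": "science fiction space adventure",
    "item2": "romantic comedy space comedy",
    "item3": "space documentary science",
}

def tf_idf_profiles(docs):
    n = len(docs)
    # Document frequency: in how many items each term appears
    df = Counter(term for text in docs.values() for term in set(text.split()))
    profiles = {}
    for item, text in docs.items():
        counts = Counter(text.split())
        max_tf = max(counts.values())
        # TF normalized by the item's most frequent term, times IDF
        profiles[item] = {t: (c / max_tf) * math.log(n / df[t])
                          for t, c in counts.items()}
    return profiles

print(tf_idf_profiles(docs)["item2"])  # "comedy" dominates; "space" scores 0 since it is in every item
```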

User Profiles and Prediction
•User profile possibilities:
• Weighted average of rated item profiles
• Variation: weight by difference from average rating for item
•…
•Prediction heuristic:
• Given user profile x and item profile i, estimate
u(x, i) = cos(x, i) = (x · i) / (‖x‖ ⋅ ‖i‖)
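A minimal sketch of this prediction heuristic, assuming profiles are sparse feature-to-weight dicts (the example profiles are invented for illustration):

```python
import math

def cosine(x, i):
    """Cosine similarity between two sparse profiles (dicts: feature -> weight)."""
    common = set(x) & set(i)
    dot = sum(x[f] * i[f] for f in common)
    nx = math.sqrt(sum(v * v for v in x.values()))
    ni = math.sqrt(sum(v * v for v in i.values()))
    return dot / (nx * ni) if nx and ni else 0.0

# Illustrative profiles; higher score = stronger recommendation
user_profile = {"space": 0.8, "science": 0.5, "comedy": 0.1}
item_profile = {"space": 0.7, "documentary": 0.6, "science": 0.4}
print(round(cosine(user_profile, item_profile), 3))
```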

Pros: Content-based Approach
• +: No need for data on other users
• +: Able to recommend to users with unique tastes
• +: Able to recommend new & unpopular items
• No first-rater problem
• +: Able to provide explanations
• Can provide explanations of recommended items by listing content-features that caused an item to be recommended

Cons: Content-based Approach
• –: Finding the appropriate features is hard
• E.g., images, movies, music
• –: Recommendations for new users
• How to build a user profile?
• –: Overspecialization
• Never recommends items outside user’s content profile
• People might have multiple interests
• Unable to exploit quality judgments of other users

Collaborative Filtering
show me more items favored by others who have similar tastes to mine

Collaborative Filtering (CF)
• The most prominent approach to generate recommendations
• used by large, commercial e-commerce sites
• well-understood, various algorithms and variations exist
• applicable in many domains (books, movies, DVDs, …)
• Approach
• use the “wisdom of the crowd” to recommend items
• Basic assumption and idea
• Users give ratings to catalog items (implicitly or explicitly)
• Customers who had similar tastes in the past will have similar tastes in the future

Collaborative Filtering
•Consider user x
•Find set N of other users whose ratings are “similar” to x’s ratings
•Estimate x’s ratings based on the ratings of users in N

User-based Nearest-Neighbor Collaborative Filtering
•The basic technique
• Given an “active user” (Alice) and an item 𝑖 not yet seen by Alice
• find a set of users (peers/nearest neighbors) who liked the same items as Alice in the past and who have rated item 𝑖
• use, e.g., the average of their ratings to predict whether Alice will like item 𝑖
• do this for all items Alice has not seen and recommend the best-rated
•Basic assumption and idea
• If users had similar tastes in the past they will have similar tastes in the future
• User preferences remain stable and consistent over time

User-based Nearest-Neighbor Collaborative Filtering
• Example
• A database of ratings of the current user, Alice, and some other users is given:
          Item1   Item2   Item3   Item4   Item5
  Alice     5       3       4       4       ?
  User1     3       1       2       3       3
  User2     4       3       4       3       5
  User3     3       3       1       5       4
  User4     1       5       5       2       1
• Determine whether Alice will like or dislike Item5, which Alice has not yet rated or seen

User-based Nearest-Neighbor Collaborative Filtering
•Some first questions
• How do we measure similarity?
• How many neighbors should we consider?
• How do we generate a prediction from the neighbors’ ratings?

Finding “Similar” Users
• Let rx be the vector of user x’s ratings
• Jaccard similarity measure
  sim(x, y) = |rx ∩ ry| / |rx ∪ ry|  (rx, ry viewed as sets of rated items)
  • Problem: ignores the value of the rating
• Cosine similarity measure
  sim(x, y) = cos(rx, ry) = (rx · ry) / (‖rx‖ ⋅ ‖ry‖)
  • Problem: treats missing ratings as “negative”
• Pearson correlation coefficient
  • Sxy = items rated by both users x and y
  sim(x, y) = Σ_{s∈Sxy} (rxs − r̄x)(rys − r̄y) / ( √Σ_{s∈Sxy} (rxs − r̄x)² · √Σ_{s∈Sxy} (rys − r̄y)² )
  where r̄x, r̄y are the average ratings of x and y
• Example: rx = [*, _, _, *, ***], ry = [*, _, **, **, _]
  rx, ry as sets: rx = {1, 4, 5}, ry = {1, 3, 4}
  rx, ry as points: rx = [1, 0, 0, 1, 3], ry = [1, 0, 2, 2, 0]
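A minimal sketch computing the three measures on the example vectors above (0 stands for a missing rating); the Pearson variant restricts to co-rated items, as in the definition:

```python
import math

rx = [1, 0, 0, 1, 3]
ry = [1, 0, 2, 2, 0]

def jaccard(x, y):
    sx = {i for i, v in enumerate(x) if v}  # indices of rated items
    sy = {i for i, v in enumerate(y) if v}
    return len(sx & sy) / len(sx | sy)

def cosine(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    return dot / (math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y)))

def pearson(x, y):
    # Restrict to co-rated items, then correlate the two sub-vectors
    co = [i for i in range(len(x)) if x[i] and y[i]]
    mx = sum(x[i] for i in co) / len(co)
    my = sum(y[i] for i in co) / len(co)
    num = sum((x[i] - mx) * (y[i] - my) for i in co)
    den = math.sqrt(sum((x[i] - mx) ** 2 for i in co)) * math.sqrt(sum((y[i] - my) ** 2 for i in co))
    return num / den if den else 0.0

print(jaccard(rx, ry), round(cosine(rx, ry), 3), pearson(rx, ry))
```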

Similarity Metric
Cosine similarity:
  sim(x, y) = Σi rxi · ryi / ( √Σi rxi² · √Σi ryi² )
• Intuitively we want: sim(A, B) > sim(A, C)
• Jaccard similarity: 1/5 < 2/4
• Cosine similarity: 0.380 > 0.322
  • Considers missing ratings as “negative”
  • Solution: subtract the (row) mean; then sim(A, B) vs. sim(A, C): 0.092 > −0.559
• Notice that cosine similarity is correlation when the data is centered at 0

Similarity Metric (Cont’)
• A popular similarity measure in user-based CF: Pearson correlation

  sim(x, y) = Σ_{s∈Sxy} (rxs − r̄x)(rys − r̄y) / ( √Σ_{s∈Sxy} (rxs − r̄x)² · √Σ_{s∈Sxy} (rys − r̄y)² )

• Possible similarity values between −1 and 1

          Item1   Item2   Item3   Item4   Item5
  Alice     5       3       4       4       ?
  User1     3       1       2       3       3      sim = 0.85
  User2     4       3       4       3       5      sim = 0.70
  User3     3       3       1       5       4      sim = 0.00
  User4     1       5       5       2       1      sim = −0.79
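A minimal sketch reproducing these Pearson similarities; means are taken over Items 1–4, the items co-rated with Alice (since Alice has not rated Item5):

```python
import math

ratings = {  # ratings on Items 1-4 from the table above
    "Alice": [5, 3, 4, 4],
    "User1": [3, 1, 2, 3],
    "User2": [4, 3, 4, 3],
    "User3": [3, 3, 1, 5],
    "User4": [1, 5, 5, 2],
}

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)) * math.sqrt(sum((b - my) ** 2 for b in y))
    return num / den

for user in ["User1", "User2", "User3", "User4"]:
    print(user, round(pearson(ratings["Alice"], ratings[user]), 2))
# Prints 0.85, 0.71, 0.0, -0.79 (the slide shows 0.70 for User2, a rounding difference)
```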

Rating Predictions
From similarity metric to recommendations:
• Let rx be the vector of user x’s ratings
•Let N be the set of k users most similar to x who have rated item i
• Prediction for item i of user x:
  • Simple average: r̂xi = (1/k) · Σ_{y∈N} ryi
  • Similarity-weighted: r̂xi = Σ_{y∈N} sxy · ryi / Σ_{y∈N} sxy   (shorthand: sxy = sim(x, y))
• Other options?
  • Many other tricks possible…
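A minimal sketch of the similarity-weighted prediction applied to the running example, using the two most similar users (k = 2) and the sim values from the previous slide:

```python
# Predict Alice's rating for Item5 from her k = 2 nearest neighbors
sims = {"User1": 0.85, "User2": 0.70}   # sim(Alice, y) from the previous slide
item5 = {"User1": 3, "User2": 5}        # neighbors' ratings for Item5

pred = sum(sims[y] * item5[y] for y in sims) / sum(sims.values())
print(round(pred, 2))  # ~3.9, so Alice is predicted to like Item5
```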

Memory-based and Model-based Approaches
• User-based CF is said to be “memory-based”
• the rating matrix is directly used to find neighbors / make predictions
• does not scale for most real-world scenarios
• large e-commerce sites have tens of millions of customers and millions of items
• Model-based approaches
• based on an offline pre-processing or “model-learning” phase
• at run-time, only the learned model is used to make predictions
• models are updated / re-trained periodically
• large variety of techniques used
• model-building and updating can be computationally expensive

Item-Item Collaborative Filtering
• So far: User-user collaborative filtering
• Another view: Item-item
• Basic idea:
• Use the similarity between items (and not users) to make predictions
• For item i, find other similar items
• Estimate rating for item i based on ratings for similar items
• Can use same similarity metrics and prediction functions as in user-user model:

  r̂xi = Σ_{j∈N(i;x)} sij · rxj / Σ_{j∈N(i;x)} sij

  sij … similarity of items i and j
  rxj … rating of user x on item j
  N(i; x) … set of items rated by x that are similar to i

Item-Item Collaborative Filtering
• Example:
• Look for items that are similar to Item5
• Take Alice’s ratings for these items to predict the rating for Item5
          Item1   Item2   Item3   Item4   Item5
  Alice     5       3       4       4       ?
  User1     3       1       2       3       3
  User2     4       3       4       3       5
  User3     3       3       1       5       4
  User4     1       5       5       2       1

Item-Item CF (|N|=2)
  movies (rows) × users (columns):

        1    2    3    4    5    6    7    8    9   10   11   12
  1     1         3              5              5         4
  2               5    4              4              2    1    3
  3     2    4         1    2         3         4    3    5
  4          2    4         5              4              2
  5     4    3    4    2              2    5
  6     1         3         3         2                   4

  blank – unknown rating; filled cells – ratings between 1 and 5
movies

Item-Item CF (|N|=2)
(utility matrix as above, with the unknown entry for movie 1, user 5 marked “?”)
– estimate the rating of movie 1 by user 5

Item-Item CF (|N|=2)
(utility matrix as above)

sim(1,m) for m = 1…6: 1.00, −0.18, 0.41, −0.10, −0.31, 0.59

Here we use adjusted cosine similarity:
1) Subtract the mean rating mi from each movie i
   m1 = (1+3+5+5+4)/5 = 3.6
   row 1 centered: [−2.6, 0, −0.6, 0, 0, 1.4, 0, 0, 1.4, 0, 0.4, 0]
2) Compute cosine similarities between rows
Neighbor selection: identify movies similar to movie 1 that were rated by user 5
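A minimal sketch of this adjusted-cosine computation in numpy; the matrix is transcribed from the slide (0 = missing), though row 5's exact column positions are not fully recoverable from the slide text, so sim(1,5) may deviate slightly:

```python
import numpy as np

R = np.array([
    [1, 0, 3, 0, 0, 5, 0, 0, 5, 0, 4, 0],   # movie 1 (user 5 unknown)
    [0, 0, 5, 4, 0, 0, 4, 0, 0, 2, 1, 3],   # movie 2
    [2, 4, 0, 1, 2, 0, 3, 0, 4, 3, 5, 0],   # movie 3
    [0, 2, 4, 0, 5, 0, 0, 4, 0, 0, 2, 0],   # movie 4
    [4, 3, 4, 2, 0, 0, 2, 5, 0, 0, 0, 0],   # movie 5 (column positions assumed)
    [1, 0, 3, 0, 3, 0, 2, 0, 0, 0, 4, 0],   # movie 6
], dtype=float)

def centered(row):
    rated = row > 0
    c = np.zeros_like(row)
    c[rated] = row[rated] - row[rated].mean()  # subtract the movie's mean rating
    return c

C = np.array([centered(r) for r in R])
for m in range(6):
    sim = C[0] @ C[m] / (np.linalg.norm(C[0]) * np.linalg.norm(C[m]))
    print(f"sim(1,{m+1}) = {sim:.2f}")  # ~1.00, -0.18, 0.41, -0.10, ..., 0.59
```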

Item-Item CF (|N|=2)
(utility matrix and sim(1,m) values as above)

Compute similarity weights for the two nearest neighbors among the movies rated by user 5:
s1,3 = 0.41, s1,6 = 0.59

Item-Item CF (|N|=2)
(utility matrix as above; the missing entry for movie 1, user 5 is filled with 2.6)

Predict by taking the weighted average:
r1,5 = (0.41·2 + 0.59·3) / (0.41 + 0.59) = 2.6

  r̂ix = Σ_{j∈N(i;x)} sij · rjx / Σ_{j∈N(i;x)} sij
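A minimal sketch of this final weighted-average step, using the two neighbor similarities and user 5's ratings on movies 3 and 6 from above:

```python
# Predict r(1,5) from neighbors N = {movie 3, movie 6}
s = {3: 0.41, 6: 0.59}       # similarities to movie 1
r_user5 = {3: 2, 6: 3}       # user 5's ratings on the neighbor movies

pred = sum(s[j] * r_user5[j] for j in s) / sum(s.values())
print(round(pred, 1))  # 2.6
```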

Item-Item vs. User-User
Alice Bob
Carol David
Avatar
1 0.9
LOTR
0.5
Matrix Pirates
0.8
1 0.8
1 0.4
0.3
■ In practice, it has been observed that item-item often works better than user-user
● Why? Items are simpler; users have multiple tastes

More on Ratings – Explicit Ratings
•Probably the most precise ratings
•Most commonly used (1 to 5, 1 to 7 Likert response scales)
•Main problems
• Users not always willing to rate many items
• number of available ratings could be too small → sparse rating matrices → poor recommendation quality
• How to stimulate users to rate more items?

More on Ratings – Implicit Ratings
• Typically collected by the web shop or application in which the recommender system is embedded
• When a customer buys an item, for instance, many recommender systems interpret this behavior as a positive rating
• Clicks, page views, time spent on some page, demo downloads …
• Implicit ratings can be collected constantly and do not require additional efforts from the side of the user
• Main problem
• One cannot be sure whether the user behavior is correctly interpreted
• For example, a user might not like all the books he or she has bought; the user also might have bought a book for someone else
• Implicit ratings can be used in addition to explicit ones; question of correctness of interpretation

Collaborative Filtering: Complexity
• Expensive step is finding k most similar customers: O(|X|)
• Too expensive to do at runtime
• Could pre-compute
• Naïve pre-computation takes time O(|X|²)
• X … set of customers
• Ways of doing this:
• Near-neighbor search in high dimensions (LSH)
• Clustering
• Dimensionality reduction
•…
• Supported by Hadoop: Apache Mahout https://mahout.apache.org/users/basics/algorithms.html

What is a Good Recommendation in Practice?
•Total sales numbers
•Promotion of certain items
•…
• Click-through rates
•Interactivity on platform
•…
•Customer return rates
•Customer satisfaction and loyalty

Evaluation
[Figure: a users × movies utility matrix with known ratings 1–5]

Evaluation
[Figure: the same users × movies utility matrix with some known entries withheld (marked “?”) as the test data set]

Evaluating Predictions
• Compare predictions with known ratings
• Root-mean-square error (RMSE)
  RMSE = √( (1/N) · Σ_{x,i} (r̂xi − r*xi)² ), where r̂xi is the predicted rating and r*xi the true rating of x on i
• Precision at top 10:
  • % of relevant items in the top 10
• Rank Correlation:
  • Spearman’s correlation between system’s and user’s complete rankings
• Another approach: 0/1 model
• Coverage:
  • Number of items/users for which the system can make predictions
• Precision:
  • Accuracy of predictions
• Receiver operating characteristic (ROC)
  • Tradeoff curve between false positives and false negatives
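A minimal sketch of the RMSE computation over a held-out test set; the predict callable and the tiny test list are illustrative assumptions:

```python
import math

def rmse(test, predict):
    """test: iterable of (user, item, true_rating); predict: any prediction function."""
    se = [(predict(x, i) - r_true) ** 2 for x, i, r_true in test]
    return math.sqrt(sum(se) / len(se))

# Illustrative usage with a constant predictor
test = [("Alice", "Item5", 4), ("User1", "Item2", 1)]
print(round(rmse(test, lambda x, i: 3.0), 3))
```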

The Netflix Prize
•Training data
• 100 million ratings, 480,000 users, 17,770 movies
• 6 years of data: 2000-2005
•Test data
• Last few ratings of each user (2.8 million)
• Evaluation criterion: Root Mean Square Error
  RMSE = √( (1/|R|) · Σ_{(i,x)∈R} (r̂xi − rxi)² )
• Netflix’s system RMSE: 0.9514
• Competition
• 2,700+ teams
• $1 million prize for 10% improvement on Netflix’s RMSE

The Netflix Utility Matrix R
[Figure: the utility matrix R, 480,000 users × 17,770 movies, sparsely filled with ratings 1–5]

Utility Matrix R: Evaluation
[Figure: matrix R split into a training data set (known entries) and a test data set (withheld entries marked “?”, e.g. r6,3)]

RMSE = √( (1/|R|) · Σ_{(i,x)∈R} (r̂xi − rxi)² ), where rxi is the true rating of user x on item i

BellKor Recommender System
•The winner of the Netflix Challenge!
•Multi-scale modeling of the data: combine top-level, “regional” modeling of the data with a refined, local view:
• Global:
• Overall deviations of users/movies
• Factorization:
• Addressing “regional” effects
• Collaborative filtering:
• Extract local patterns
[Diagram: three layers, from global effects through factorization to collaborative filtering]

Performance of Various Methods
Global average: 1.1296
User average: 1.0651
Movie average: 1.0533
Netflix: 0.9514
Basic collaborative filtering: 0.94
CF + biases + learned weights: 0.91
Grand Prize: 0.8563

Modeling Local & Global Effects
• Global:
• Mean movie rating: 3.7 stars
• The Sixth Sense is 0.5 stars above avg.
• Joe rates 0.2 stars below avg.
⇒ Baseline estimation: Joe will rate The Sixth Sense 4 stars
•Local neighborhood (CF/NN):
• Joe didn’t like related movie Signs
⇒ Final estimate: Joe will rate The Sixth Sense 3.8 stars

Modeling Local & Global Effects
•In practice we get better estimates if we model deviations:

  r̂xi = bxi + Σ_{j∈N(i;x)} sij · (rxj − bxj) / Σ_{j∈N(i;x)} sij

  baseline estimate for rxi:  bxi = μ + bx + bi
  μ = overall mean rating
  bx = rating deviation of user x = (avg. rating of user x) − μ
  bi = rating deviation of movie i = (avg. rating of movie i) − μ

Problems/Issues:
1) Similarity measures are “arbitrary”
2) Pairwise similarities neglect interdependencies among users
3) Taking a weighted average can be restricting
Solution: instead of sij, use weights wij that we estimate directly from data
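A minimal sketch of the baseline estimate bxi = μ + bx + bi, plugging in the Sixth Sense numbers from the previous slide:

```python
mu  = 3.7    # overall mean rating
b_x = -0.2   # Joe rates 0.2 stars below average
b_i = 0.5    # The Sixth Sense is 0.5 stars above average

b_xi = mu + b_x + b_i
print(b_xi)  # 4.0 stars: the baseline estimate before the CF correction
```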

Idea: Interpolation Weights wij
•Use a weighted sum rather than a weighted avg.:

  r̂xi = bxi + Σ_{j∈N(i;x)} wij · (rxj − bxj)

•A few notes:
• N(i; x) … set of movies rated by user x that are similar to movie i
• wij is the interpolation weight (some real number)
• We allow: Σ_{j∈N(i;x)} wij ≠ 1
• wij models interaction between pairs of movies (it does not depend on user x)

Idea: Interpolation Weights wij (Cont’)
• r̂xi = bxi + Σ_{j∈N(i;x)} wij · (rxj − bxj)
• How to set wij?
• Remember, the error metric is RMSE = √( (1/|R|) · Σ_{(i,x)∈R} (r̂xi − rxi)² ), or equivalently SSE: Σ_{(i,x)∈R} (r̂xi − rxi)²
• Find wij that minimize SSE on training data!
• Models relationships between item i and its neighbors j
• wij can be learned/estimated based on x and all other users that rated i

Recommendations via Optimization
•Goal: Make good recommendations
• Quantify goodness using RMSE: lower RMSE ⇒ better recommendations
• Want to make good recommendations on items that the user has not yet seen. Can’t really do this!
• Let’s build a system such that it works well on known (user, item) ratings, and hope the system will also predict the unknown ratings well

Recommendations via Optimization
• Idea: Let’s set the values w such that they work well on known (user, item) ratings
• How to find such values w?
• Idea: Define an objective function and solve the optimization problem
• Find wij that minimize SSE on training data:

  J(w) = Σ_{(i,x)∈R} ( [bxi + Σ_{j∈N(i;x)} wij · (rxj − bxj)] − rxi )²
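A minimal sketch of learning such weights by minimizing SSE with plain gradient descent; all data below (ratings, baselines, a neighbor set of size 2) is invented for illustration:

```python
import numpy as np

# Known ratings of one target item i and of its two neighbor items j1, j2
# by four users, plus the corresponding baseline estimates b_xi and b_xj.
r_i = np.array([4.0, 3.0, 5.0, 2.0])           # true ratings of item i
r_N = np.array([[4.0, 3.5],                    # each row: a user's ratings of j1, j2
                [3.0, 2.5],
                [5.0, 4.0],
                [2.0, 3.0]])
b_i = np.array([3.5, 3.0, 4.0, 2.5])           # baselines b_xi
b_N = np.array([[3.5, 3.0],
                [3.0, 2.8],
                [4.5, 3.5],
                [2.5, 3.2]])                   # baselines b_xj

w = np.zeros(2)                                # one weight per neighbor item
lr = 0.01
for _ in range(1000):
    pred = b_i + (r_N - b_N) @ w               # r̂_xi = b_xi + Σ_j w_ij (r_xj − b_xj)
    grad = 2 * (r_N - b_N).T @ (pred - r_i)    # gradient of SSE w.r.t. w
    w -= lr * grad

print(w, ((b_i + (r_N - b_N) @ w - r_i) ** 2).sum())  # learned weights and final SSE
```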