An NLP Approach to Professional Profile Learning and Evaluation


User Representation in Recommendation

A good RS should present users with relevant new items (items that users may like or find useful), but it should also provide recommendations that are both personalized and understandable. At every step of the process, a good user representation is crucial.
Relevance. Determining which items are relevant to recommend to a given user is a vast and open question. The most classical approach is to consider that relevant items are items the user would rate highly. This approach is in fact a very common learning objective for RSs: the model is trained to predict the rating of a user-item pair, and at inference outputs the items with the highest predicted ratings. Such models are said to be optimized for accuracy. There is however a growing trend to move away from this accuracy-based approach and to consider other qualities that should be modeled, either as training objectives or as evaluation metrics (namely, ranking as an objective and Mean Reciprocal Rank as a metric). Aside from the fact that user experience can greatly benefit from being recommended unexpected or surprising items, the notion of Diversity has been at the heart of many works in recent years and still remains quite hard to evaluate.

Personalization. Recommending only popular items is far from ideal behavior for a RS, as different people have different needs and tastes. Indeed, while non-personalized RSs can already yield satisfactory results in terms of suggestions and revenue (namely by using only the item bias), we think user experience can greatly benefit from personalization. This is why personalization is a major feature of a RS: taking a user's past preferences into account is a sound way to refine the recommendations made to them. We observe two main strategies to personalize recommendations. On one hand, Content-based (CB) approaches compute a similarity between a given user and potential items; the recommendation is the item that best matches the user. On the other hand, Collaborative Filtering (CF) addresses personalization by finding profiles that are similar to the current user, and then recommending items liked by those similar profiles.
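To make the ranking-oriented evaluation mentioned above concrete, Mean Reciprocal Rank can be sketched in a few lines of Python. This is a generic illustration (the function name and toy data are ours, not from any specific RS library):

```python
def mean_reciprocal_rank(ranked_lists, relevant_items):
    """MRR: average over users of 1/rank of the first relevant item (0 if none)."""
    total = 0.0
    for ranked, relevant in zip(ranked_lists, relevant_items):
        for rank, item in enumerate(ranked, start=1):
            if item in relevant:
                total += 1.0 / rank
                break
    return total / len(ranked_lists)

# Two users: the first relevant item appears at rank 2 and rank 1 respectively.
mrr = mean_reciprocal_rank(
    [["a", "b", "c"], ["x", "y"]],
    [{"b"}, {"x", "y"}],
)
print(mrr)  # (1/2 + 1/1) / 2 = 0.75
```

Unlike accuracy on predicted ratings, MRR only looks at where the first relevant item lands in the ranked list, which is closer to how users actually consume recommendations.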
Another aspect of personalization worth exploring is the evolution of the user. J. J. McAuley and Leskovec (2013) in particular proposed a framework that refined user recommendations by modeling their level of expertise. We explore this aspect of personalization in Chapter 5.

Collaborative Filtering

Collaborative Filtering models rely on the assumption that users that have had similar tastes in the past are likely to have similar tastes in the future. The tastes of the users are expressed by how they rate items that they experienced. These ratings are often called interactions. Let us use the illustration in Figure 2.2. Since the green user and the blue one rated the purple and the cyan movies similarly, CF expects the blue user to love the orange movie.
Traditionally, a RS of parameters $\theta$ is trained to optimize a cost function between the predicted rating $\hat{r}_{u,i}$ of an item by a user and the actual rating $r_{u,i}$, that is:

$$\min_{\theta} \frac{1}{R} \sum_{u \in U} \sum_{i \in I_u} \big( r_{u,i} - f(u, i \mid \theta) \big)^2 \quad (2.1)$$

$$\hat{r}_{u,i} = f(u, i \mid \theta) \quad (2.2)$$
with $R$ the total number of ratings, $U$ the set of users, and $I_u$ the set of items rated by user $u$. Note that while Mean-Squared Error (MSE) is a common learning objective, it is neither the only possible objective nor always the chosen evaluation metric. We further discuss the evaluation of recommendation, as well as alternative training objectives, in Section 2.1.5.
The prediction of a rating can account for several biases: the global bias of the dataset $\mu$, the current user's personal bias $b_u$, and the current item's bias $b_i$:

$$\hat{r}_{u,i} = f(u, i \mid \theta) = \mu + b_i + b_u + g(u, i \mid \theta) \quad (2.3)$$

The formulation of Equation 2.3 is a commonly used solution to the Cold Start problem.
The Cold Start Problem is the state in which a RS cannot make relevant predictions for lack of data. In such a case, a new user would typically be recommended the most popular items of the system.
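The bias-only fallback can be sketched with numpy on a toy rating matrix (the data is illustrative, not from any benchmark): the biases are estimated from observed ratings, and a brand-new user, for whom no personal bias exists, is served the items with the highest global score.

```python
import numpy as np

# Toy ratings: rows = users, cols = items; np.nan marks missing interactions.
R = np.array([
    [5.0, 3.0, np.nan],
    [4.0, np.nan, 1.0],
    [np.nan, 2.0, 1.0],
])

mu = np.nanmean(R)                # global bias of the dataset
b_i = np.nanmean(R, axis=0) - mu  # item biases
b_u = np.nanmean(R, axis=1) - mu  # user biases

# Cold start: a brand-new user has no b_u, so we fall back on mu + b_i
# and recommend the item with the highest biased score.
scores = mu + b_i
print(int(np.argmax(scores)))  # item 0, the best-rated item overall
```

This reproduces the behavior described above: without personal data, the recommendation degenerates to global popularity corrected by item bias.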
The nature of the $g(u, i \mid \theta)$ function depends on the type of RS that is used.
We differentiate between two approaches to CF: the Neighborhood-based approach and the Matrix Factorization approach.

Neighborhood-based Approach

An intuitive approach to CF is to compare how similar a current user is to other users of the dataset, and to recommend to the current user items that similar people have liked. There are several ways to compute a similarity $sim(u_a, u_b)$ between two users. Some are count-based, such as the Jaccard similarity, and thus particularly well-suited to situations where the user feedback is binary and/or implicit (click, page visit, etc.). These methods however fail to account for the value of the user's interaction with the item (namely, the rating).
The cosine similarity, or any derivative of the inner product, does take the value of the rating into account. Such methods are invariant to the magnitude of the vectors $u_a$ and $u_b$, meaning they yield a metric that is not biased by the popularity of items or the prolixity of certain users.
When data is highly sparse, Minkowski distances have been shown to produce relevant similarities (G. Jain et al. 2020).
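The three families of similarity mentioned above can be sketched as follows (a minimal illustration with toy vectors; the inversion used to turn the Minkowski distance into a similarity is one possible choice among others):

```python
import numpy as np

def jaccard(a, b):
    """Count-based similarity on binary/implicit feedback (sets of item ids)."""
    return len(a & b) / len(a | b)

def cosine(u, v):
    """Rating-aware similarity, invariant to vector magnitude."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def minkowski_sim(u, v, p=2):
    """Similarity derived from a Minkowski distance (here simply inverted)."""
    d = np.sum(np.abs(u - v) ** p) ** (1.0 / p)
    return 1.0 / (1.0 + d)

clicks_a, clicks_b = {1, 2, 3}, {2, 3, 4}
print(jaccard(clicks_a, clicks_b))      # 2 shared / 4 total = 0.5

ratings_a = np.array([5.0, 3.0, 0.0])
ratings_b = np.array([10.0, 6.0, 0.0])  # same taste, different rating scale
print(cosine(ratings_a, ratings_b))     # ~1.0: scale-invariant
```

Note how the cosine similarity judges the two rating vectors identical despite their different magnitudes, which is exactly the invariance property discussed above.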
However intuitive, this kind of method has gradually been abandoned in favor of model-based approaches, which yield faster results at inference time (since the k-nearest-neighbor search can be very time-consuming for large user databases).

Matrix Factorization

The most common implementation of CF is Matrix Factorization (MF), and especially Non-Negative Matrix Factorization since its success in the Netflix challenge (Bennett, Lanning, et al. 2007). The idea of MF is to extract continuous latent profiles of dimension Z for both the users and the items from the rating matrix R, as shown in Equation 2.4 and Equation 2.5.
$$P = \{p_u\}_{u = 1, \dots, N_u} \in \mathbb{R}^{N_u \times Z} \quad (2.4)$$

$$Q = \{q_i\}_{i = 1, \dots, N_i} \in \mathbb{R}^{N_i \times Z}, \quad \text{with } P Q^\top \approx R. \quad (2.5)$$
With this formalism, the prediction of the rating of item $i$ by user $u$ is simply the scalar product between $p_u$ and $q_i$. Plugging this new formulation into Equation 2.3 yields:

$$\hat{r}_{u,i} = \mu + b_i + b_u + p_u^\top q_i \quad (2.6)$$
The dimension Z of the matrices P and Q is often much smaller than the dimensions of the rating matrix R, thus representing the latent profiles in a more compressed latent space.
However, the extracted latent profiles are often qualified as "black boxes" and are hard to interpret. One major challenge for recommendation is to provide explainable recommendations. Besides, MF methods suffer from the Cold Start problem, i.e., they cannot produce relevant personalized results until they have a sufficient amount of interactions. A classic technique to still produce suggestions is to use only the overall mean $\mu$ and the biases $b_i$ and $b_u$ of Equation 2.6. On top of that, MF methods have a tendency to overfit, and thus require regularization. A traditional way to regularize MF methods is to add a regularizing term to the objective function, constraining the norm of the model parameters. Another option is to use latent profiles trained on another task to predict the rating. We present such methods in Section 2.1.4.
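A minimal numpy sketch of biased MF trained by stochastic gradient descent with L2 regularization follows, tying together Equations 2.1, 2.3 and 2.6. The toy rating matrix and hyperparameters are illustrative, not tuned:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rating matrix (0 = unobserved interaction).
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 0, 4],
    [0, 1, 5, 4],
], dtype=float)
users, items = np.nonzero(R)

Z, lr, reg = 2, 0.01, 0.02        # latent dim, learning rate, L2 strength
mu = R[users, items].mean()       # global bias
b_u, b_i = np.zeros(R.shape[0]), np.zeros(R.shape[1])
P = 0.1 * rng.standard_normal((R.shape[0], Z))  # user latent profiles
Q = 0.1 * rng.standard_normal((R.shape[1], Z))  # item latent profiles

for _ in range(200):              # SGD over observed ratings only
    for u, i in zip(users, items):
        pred = mu + b_u[u] + b_i[i] + P[u] @ Q[i]   # Equation (2.6)
        e = R[u, i] - pred
        b_u[u] += lr * (e - reg * b_u[u])
        b_i[i] += lr * (e - reg * b_i[i])
        # Tuple assignment so both updates use the pre-update values.
        P[u], Q[i] = (P[u] + lr * (e * Q[i] - reg * P[u]),
                      Q[i] + lr * (e * P[u] - reg * Q[i]))

mse = np.mean([(R[u, i] - (mu + b_u[u] + b_i[i] + P[u] @ Q[i])) ** 2
               for u, i in zip(users, items)])
print(round(mse, 3))              # small training error after fitting
```

The `reg` terms in each update are exactly the norm-constraining regularization discussed above; removing them lets the latent profiles grow to fit noise in the observed ratings.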


Content Based Recommender Systems

Content-based RSs rely on the assumption that a user may be interested in items similar to the ones they liked before. In the case of movie recommendation, one can imagine that if a user rated the Star Wars movies highly, a content-based RS would propose to this user movies that have a similar cast or the same director, for instance.
Content-based Recommender Systems work by computing representations of the system's items, computing a similarity measure between items, and recommending to a user items that are similar to the ones they liked before.
We distinguish between two types of items: the structured (or tabular) ones and the textual ones. The latter are a crucial inspiration to this work and are thus detailed in Section 2.1.4.
In the case of structured items, such as movies, the representation of an item is based on its attributes. A Content-based Recommender System's item matrix would then be of size $I \times A$, with $I$ the number of items in the database and $A$ the number of possible attributes. Of course, adding new features to the matrix via feature engineering is always possible.
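An item-attribute matrix of this kind can be sketched as follows; the movies and the binary attributes are hypothetical, chosen only to echo the Star Wars example above:

```python
import numpy as np

# Hypothetical I x A item-attribute matrix (binary genre/director flags).
attrs = ["sci-fi", "space", "drama", "dir-lucas"]
items = {
    "Star Wars IV": [1, 1, 0, 1],
    "Star Wars V":  [1, 1, 0, 0],
    "THX 1138":     [1, 0, 1, 1],
    "Casablanca":   [0, 0, 1, 0],
}
M = np.array(list(items.values()), dtype=float)
names = list(items)

def most_similar(name):
    """Return the item whose attribute vector is closest in cosine similarity."""
    v = M[names.index(name)]
    sims = M @ v / (np.linalg.norm(M, axis=1) * np.linalg.norm(v))
    sims[names.index(name)] = -1.0   # exclude the query item itself
    return names[int(np.argmax(sims))]

print(most_similar("Star Wars IV"))  # "Star Wars V"
```

A user who rated "Star Wars IV" highly would thus be recommended its nearest attribute-space neighbor, exactly the pipeline described in the paragraph above.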
When the items of the system are textual, they can be represented in the system by a variety of methods from Information Retrieval, namely TF-IDF vectorization (Sparck Jones 1988) and Topic Modeling by Latent Dirichlet Allocation (LDA) (Blei et al. 2003).
However, Content-based RSs require a heavy amount of data pre-processing and/or feature engineering in order to work, namely to fill in the attributes of each item (Lops et al. 2011).

Hybrid Models

Both the CF and the CB approaches have their pitfalls and their strengths. The gain in performance allowed by CF is counterbalanced by its vulnerability to the Cold Start Problem (defined at the beginning of Section 2.1.1) and its opacity. The understandability and intuitiveness of CB models are offset by the cost of the neighborhood similarity computations. For those reasons, recent works have considered hybrid approaches in the hope of combining the qualities of both approaches while nullifying their weaknesses (Thorat et al. 2015).
Most hybrid models are developed with a specific application in mind. This is in part due to the fact that content-based RSs are domain-dependent: one could not build a set of explicit features that fits both movies and restaurants. Netflix is a famous example of a hybrid RS (Gomez-Uribe and Hunt 2016). M. Li et al. (2020) combine CF and CB with a complementarity-based method on "Question & Answer" (Q&A) documents to help users in the process of troubleshooting.

Leveraging Textual information for Recommendation

Leveraging textual information to improve the performance of a recommender system has been a growing topic over the past decade. While the trend is toward using user-generated text, and in particular reviews, textual item descriptions can also be used to compute a representation, namely in a content-based RS context. Using textual information has proven to help regularize the latent representations and/or the model, as well as to provide explainability insights regarding the predictions.

Textual Content-Based Recommendation

Content-based RSs have long been interested in deriving a representation from the textual description of an item. This is of particular importance when the items to recommend are textual documents, such as web pages for instance. As such, the evolution of textual Content-based RSs has closely followed that of the Information Retrieval and NLP domains. Although textual Content-based RSs are mainly item-centered, the techniques they use to learn item representations can be symmetrically applied to the users of a RS, provided the users leave textual traces. For this reason, we detail the most popular document representations of textual Content-based RSs in the following paragraphs.
Bag of Words. The Bag of Words (BoW) representation is one of the earliest and most intuitive ways to vectorize a document. The idea of BoW is that any document $d$ in a set of documents $D$ with a common vocabulary of size $V$ can be represented by the counts of the words it contains.

TF-IDF. TF-IDF stands for Term Frequency – Inverse Document Frequency. As in the BoW formulation, a document $d \in D$ is represented by a vector of size $V$. However, the TF-IDF vectorization aims at representing documents through the words that differentiate $d$ from the rest of the corpus. To do so, TF-IDF represents a word by its frequency in document $d$, weighted by its frequency across the whole corpus.
This weighting emphasizes rare (and thus more discriminative) words in a document with respect to the whole corpus. Conversely, it is an elegant way to downweight overly frequent words.
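The mechanism can be sketched from scratch in a few lines of Python (this uses one common tf-idf variant; libraries such as scikit-learn apply smoothing and normalization that differ in the details):

```python
import math
from collections import Counter

def tfidf(corpus):
    """BoW counts reweighted by inverse document frequency."""
    docs = [Counter(doc.lower().split()) for doc in corpus]
    n = len(docs)
    vocab = sorted({w for d in docs for w in d})
    df = {w: sum(1 for d in docs if w in d) for w in vocab}  # document frequency
    vectors = []
    for d in docs:
        total = sum(d.values())
        vectors.append([
            (d[w] / total) * math.log(n / df[w])   # tf * idf
            for w in vocab
        ])
    return vocab, vectors

vocab, vecs = tfidf(["the cat sat", "the dog sat", "the dog barked loudly"])
# "the" appears in every document, so its idf -- log(3/3) -- is exactly zero.
print(vecs[0][vocab.index("the")])  # 0.0
```

The zero weight given to "the" is the behavior described above: a word present in every document carries no discriminative information, however frequent it is.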
Pre-processing. It is worth noting that all of the methods presented above are highly sensitive to noise. Thus, the input data should undergo a pre-processing routine involving stop-word removal, case normalization, etc.
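Such a routine can be as simple as the following sketch (the stop-word list is a tiny illustrative sample; real pipelines use curated lists and often add stemming or lemmatization):

```python
import re

STOP_WORDS = {"the", "a", "an", "is", "to", "and"}  # illustrative subset only

def preprocess(text):
    """Case normalization, punctuation stripping and stop-word removal."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return [t for t in tokens if t not in STOP_WORDS]

print(preprocess("The Cat sat on the mat, and the DOG barked!"))
# ['cat', 'sat', 'on', 'mat', 'dog', 'barked']
```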
Note: many works also make use of word embedding models, which are detailed in Section 2.2.

Table of contents:

1 introduction 
1.1 Context
1.2 Motivations
1.3 Ethics and AI
1.4 Contributions and outline
2 related work 
2.1 User Representation in Recommendation
2.2 NLP and Leveraging User-Generated text for User Representation
2.3 Generative Models
2.4 Recommendation, NLP and Generative Models for User Representation
3 refining user understanding in recommendation via nlp 
3.1 The model
3.2 Experiments and Results
3.3 Conclusion
4 a nlp approach to professional profile learning and evaluation 
4.1 Models
4.2 Experiments
4.3 Results
4.4 Conclusion
5 user dynamic modeling 
5.1 Job Expertise Rewriting
5.2 Industry Latent Space Structuring via VAE
5.3 Challenges & Obstacles
5.4 Conclusion
6 conclusion 
6.1 Summary of Contributions
6.2 Perspectives for future work

