Learning to rank in vector spaces and social networks

Chakrabarti, Soumen (2007). Learning to rank in vector spaces and social networks. Internet Mathematics, 4 (2-3), pp. 267-209. ISSN 1542-7951

Full text not available from this repository.

Official URL: http://www.tandfonline.com/doi/abs/10.1080/1542795...


We survey machine learning techniques to learn ranking functions for entities represented as feature vectors as well as nodes in a social network. In the feature-vector scenario, an entity, e.g., a document x, is mapped to a feature vector ψ(x) ∈ R^d in a d-dimensional space, and we have to search for a weight vector β ∈ R^d. The ranking is then based on the values of β⋅ψ(x). This case corresponds to information retrieval in the "vector space" model. Training data consist of a partial order of preference among entities. We study probabilistic Bayesian and maximum-margin approaches to solving this problem, including recent efficient near-linear-time approximate algorithms. In the graph node-ranking scenario, we briefly review PageRank, generalize it to arbitrary Markov conductance matrices, and consider the problem of learning conductance parameters from partial orders between nodes. In another class of formulations, the graph does not establish PageRank or prestige-flow relationships between nodes but instead encourages a certain smoothness between the scores (ranks) of neighboring nodes. Some of these techniques have been used by Web search companies with very large query logs. We review some of the issues that arise when applying the theory to practical systems. Finally, we review connections between the stability of a score/rank-learning algorithm and its power to generalize to unforeseen test data.
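The feature-vector scenario can be sketched concretely. The following is a minimal illustration, not the paper's implementation: it learns a weight vector β from preference pairs (i, j), meaning "entity i should rank above entity j", by taking perceptron-style updates whenever a hinge margin on β⋅ψ(x_i) − β⋅ψ(x_j) is violated, in the spirit of the maximum-margin approaches surveyed. The data and hyperparameters are made up for the example.

```python
import numpy as np

def learn_rank_weights(X, prefs, lr=0.1, epochs=200, margin=1.0):
    """Learn beta so that beta @ X[i] exceeds beta @ X[j] by at least
    `margin` for each preference pair (i, j) meaning 'i above j'.
    Simple subgradient descent on the pairwise hinge loss."""
    beta = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i, j in prefs:
            diff = X[i] - X[j]
            # hinge loss is active only when the margin is violated
            if beta @ diff < margin:
                beta += lr * diff
    return beta

# toy example: three entities in R^2; prefer 0 over 1, and 1 over 2
X = np.array([[2.0, 1.0], [1.0, 1.0], [0.0, 1.0]])
beta = learn_rank_weights(X, [(0, 1), (1, 2)])
scores = X @ beta  # entities are then ranked by decreasing score
```

After training, sorting entities by β⋅ψ(x) reproduces the training preferences; real systems would add regularization on β and process pairs in large batches.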
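The graph node-ranking scenario can likewise be sketched with power iteration. This is a generic, assumed illustration of PageRank written so that the transition structure is an arbitrary column-stochastic conductance matrix C (rather than the uniform out-degree matrix of classical PageRank); the graph and damping factor are invented for the example.

```python
import numpy as np

def pagerank(C, alpha=0.85, tol=1e-10, max_iter=1000):
    """Power iteration for p = alpha * C @ p + (1 - alpha) / n, where
    C is column-stochastic: C[v, u] is the conductance (transition
    probability) from node u to node v."""
    n = C.shape[0]
    p = np.full(n, 1.0 / n)
    teleport = np.full(n, (1.0 - alpha) / n)
    for _ in range(max_iter):
        p_next = alpha * (C @ p) + teleport
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
    return p

# toy 3-node graph: edges 0 -> 1, 1 -> 2, and 2 -> {0, 1} split evenly
C = np.array([
    [0.0, 0.0, 0.5],
    [1.0, 0.0, 0.5],
    [0.0, 1.0, 0.0],
])
p = pagerank(C)  # stationary prestige scores, summing to 1
```

Learning conductance parameters, as discussed in the survey, amounts to parameterizing the entries of C (e.g., by edge-type weights) and fitting those parameters so the stationary scores respect given partial orders between nodes.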

Item Type:Article
Source:Copyright of this article belongs to Taylor & Francis.
ID Code:100055
Deposited On:12 Feb 2018 12:27
Last Modified:12 Feb 2018 12:27
