Dubey, Avinava; Machchhar, Jinesh; Bhattacharyya, Chiranjib; Chakrabarti, Soumen (2009). Conditional models for non-smooth ranking loss functions. In: Ninth IEEE International Conference on Data Mining (ICDM '09), December 6-9, 2009.
Full text not available from this repository.
Official URL: http://ieeexplore.ieee.org/abstract/document/53602...
Abstract
Learning to rank is an important area at the interface of machine learning, information retrieval and Web search. The central challenge in optimizing various measures of ranking loss is that the objectives tend to be non-convex and discontinuous. To make such functions amenable to gradient-based optimization procedures one needs to design clever bounds. In recent years, boosting, neural networks, support vector machines, and many other techniques have been applied. However, there is little work on directly modeling a conditional probability Pr(y | x_q), where y is a permutation of the documents to be ranked and x_q represents their feature vectors with respect to a query q. A major reason is that the space of y is huge: n! if n documents must be ranked. We first propose an intuitive and appealing expected loss minimization objective, and give an efficient shortcut to evaluate it despite the huge space of permutations. Unfortunately, the optimization is non-convex, so we propose a convex approximation. We give a new, efficient Monte Carlo sampling method to compute the objective and gradient of this approximation, which can then be used in a quasi-Newton optimizer like L-BFGS. Extensive experiments with the widely used LETOR dataset show large ranking accuracy improvements over recent and competitive algorithms.
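To make the abstract's setup concrete, the sketch below shows one generic way to optimize an expected ranking loss E_y[loss(y)] under a conditional model Pr(y | x_q) using Monte Carlo sampling of permutations and an L-BFGS optimizer. It is not the paper's algorithm: the Plackett-Luce permutation model, the negative-DCG loss, the score-function gradient estimator, and all names (`sample_permutation`, `mc_objective_and_grad`, etc.) are illustrative assumptions, and it optimizes the non-convex Monte Carlo objective directly rather than the convex approximation the paper proposes.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)


def sample_permutation(scores):
    """Draw a permutation from a Plackett-Luce model given per-document scores (assumed model)."""
    remaining = list(range(len(scores)))
    perm = []
    while remaining:
        logits = scores[remaining]
        p = np.exp(logits - logits.max())
        p /= p.sum()
        idx = rng.choice(len(remaining), p=p)
        perm.append(remaining.pop(idx))
    return perm


def neg_dcg(perm, relevance):
    """Negative DCG of the ranked list: a simple non-smooth ranking loss used for illustration."""
    gains = (2.0 ** relevance[perm] - 1.0) / np.log2(np.arange(2, len(perm) + 2))
    return -gains.sum()


def mc_objective_and_grad(w, X, relevance, n_samples=50):
    """Monte Carlo estimate of E_y[loss(y)] and its score-function (REINFORCE-style) gradient."""
    scores = X @ w
    total = 0.0
    grad = np.zeros_like(w)
    for _ in range(n_samples):
        perm = sample_permutation(scores)
        loss = neg_dcg(perm, relevance)
        # Gradient of log Pr(y | x_q) under Plackett-Luce, accumulated stage by stage.
        g = np.zeros_like(w)
        remaining = list(range(len(scores)))
        for doc in perm:
            logits = scores[remaining]
            p = np.exp(logits - logits.max())
            p /= p.sum()
            g += X[doc] - p @ X[remaining]
            remaining.remove(doc)
        total += loss
        grad += loss * g
    return total / n_samples, grad / n_samples


# Toy query: 6 documents with 4 features each and graded relevance labels.
X = rng.normal(size=(6, 4))
relevance = rng.integers(0, 3, size=6).astype(float)
result = minimize(mc_objective_and_grad, x0=np.zeros(4),
                  args=(X, relevance), jac=True, method="L-BFGS-B",
                  options={"maxiter": 20})
print("learned weights:", result.x)
```

Because the sampled objective is noisy and non-convex, a quasi-Newton method applied to it directly can be unstable; this is exactly the difficulty that motivates the paper's convex approximation and its dedicated sampling scheme for the objective and gradient.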
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Source: | Copyright of this article belongs to the Institute of Electrical and Electronics Engineers. |
| Keywords: | Monte Carlo Sampling; Learning to Rank; Conditional Models |
| ID Code: | 100021 |
| Deposited On: | 12 Feb 2018 12:27 |
| Last Modified: | 12 Feb 2018 12:27 |