Dubey, Avinava; Machchhar, Jinesh; Bhattacharyya, Chiranjib; Chakrabarti, Soumen (2009) Conditional Models for Non-smooth Ranking Loss Functions. In: Ninth IEEE International Conference on Data Mining (ICDM 2009), 06-09 December 2009, Miami Beach, FL, USA.
Official URL: http://doi.org/10.1109/ICDM.2009.49
Abstract
Learning to rank is an important area at the interface of machine learning, information retrieval and Web search. The central challenge in optimizing various measures of ranking loss is that the objectives tend to be non-convex and discontinuous. To make such functions amenable to gradient-based optimization procedures one needs to design clever bounds. In recent years, boosting, neural networks, support vector machines, and many other techniques have been applied. However, there is little work on directly modeling a conditional probability Pr(y|x_q), where y is a permutation of the documents to be ranked and x_q represents their feature vectors with respect to a query q. A major reason is that the space of y is huge: n! if n documents must be ranked. We first propose an intuitive and appealing expected loss minimization objective, and give an efficient shortcut to evaluate it despite the huge space of y's. Unfortunately, the optimization is non-convex, so we propose a convex approximation. We give a new, efficient Monte Carlo sampling method to compute the objective and gradient of this approximation, which can then be used in a quasi-Newton optimizer like LBFGS. Extensive experiments with the widely used LETOR dataset show large ranking accuracy improvements beyond recent and competitive algorithms.
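The abstract describes three ingredients: a conditional distribution Pr(y|x_q) over permutations, a Monte Carlo estimate of the expected-loss objective and its gradient, and a quasi-Newton (LBFGS) optimizer. The sketch below is a minimal illustration of that pipeline, not the authors' implementation: it assumes a Plackett-Luce-style model over permutations, a 1 − NDCG loss, and a score-function (REINFORCE) gradient estimator; the toy data, function names, and the scipy L-BFGS call are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize


def sample_permutation(scores, rng):
    """Sample a ranking y from a Plackett-Luce model with weights exp(scores);
    also return d log Pr(y) / d scores for the score-function gradient."""
    remaining = list(range(len(scores)))
    y = []
    grad_log_s = np.zeros(len(scores))
    for _ in range(len(scores)):
        s = scores[remaining]
        p = np.exp(s - s.max())
        p /= p.sum()
        k = int(rng.choice(len(remaining), p=p))
        grad_log_s[remaining] -= p            # -softmax term of the log-prob gradient
        grad_log_s[remaining[k]] += 1.0       # +1 for the document chosen at this rank
        y.append(remaining.pop(k))
    return np.array(y), grad_log_s


def ndcg_loss(y, rel):
    """1 - NDCG of the sampled ranking y against graded relevance labels rel."""
    discounts = 1.0 / np.log2(np.arange(2, len(y) + 2))
    dcg = np.sum((2.0 ** rel[y] - 1.0) * discounts)
    idcg = np.sum((2.0 ** np.sort(rel)[::-1] - 1.0) * discounts)
    return 1.0 - dcg / max(idcg, 1e-12)


def mc_objective(w, X, rel, n_samples=200):
    """Monte Carlo estimate of E_y[loss] and its gradient with respect to w."""
    # A fixed seed per call (common random numbers) keeps the estimate a
    # deterministic function of w, which the L-BFGS line search requires.
    rng = np.random.default_rng(0)
    scores = X @ w
    total_loss, grad_w = 0.0, np.zeros_like(w)
    for _ in range(n_samples):
        y, grad_log_s = sample_permutation(scores, rng)
        loss = ndcg_loss(y, rel)
        total_loss += loss
        grad_w += loss * (X.T @ grad_log_s)   # E[L * d log Pr(y)/d w]
    return total_loss / n_samples, grad_w / n_samples


# Toy query with 5 documents, 3 features, and graded relevance labels (made up).
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
rel = np.array([2.0, 1.0, 0.0, 1.0, 0.0])
result = minimize(mc_objective, x0=np.zeros(3), args=(X, rel),
                  jac=True, method="L-BFGS-B", options={"maxiter": 25})
print("learned weights:", result.x)
```

The fixed-seed trick inside `mc_objective` is one way to make a sampled objective behave like a smooth deterministic function of w so that a quasi-Newton optimizer can be applied; the paper's own estimator and convex approximation are developed in the full text.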
| Item Type: | Conference or Workshop Item (Other) |
|---|---|
| Keywords: | Machine learning, Information retrieval, Web search, Loss measurement, Design optimization, Boosting, Neural networks, Support vector machines, Monte Carlo methods, Optimization methods |
| ID Code: | 127756 |
| Deposited On: | 13 Oct 2022 11:02 |
| Last Modified: | 13 Oct 2022 11:02 |