Mathur, Arpit; Chakrabarti, Soumen (2006). Accelerating Newton optimization for log-linear models through feature redundancy. In: ICDM '06: Proceedings of the Sixth International Conference on Data Mining.
Full text not available from this repository.
Official URL: http://ieeexplore.ieee.org/document/4053067/
Abstract
Log-linear models are widely used for labeling feature vectors and graphical models, typically to estimate robust conditional distributions in the presence of a large number of potentially redundant features. Limited-memory quasi-Newton methods like LBFGS or BLMVM are the optimization workhorses for such applications, and most of the training time is spent computing the objective and gradient for the optimizer. We propose a simple technique to speed up training by clustering features dynamically and interleaving the standard optimizer with another, coarse-grained, faster optimizer that uses far fewer variables. Experiments with logistic regression training for text classification and Conditional Random Field (CRF) training for information extraction show promising speed-ups between 2× and 9×, without any systematic or significant degradation in the quality of the estimated models.
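The abstract only sketches the interleaving scheme, and the full text is not available here, so the following is a minimal illustrative sketch rather than the paper's implementation. It assumes a binary logistic regression objective, uses SciPy's L-BFGS-B in place of LBFGS/BLMVM, and substitutes a placeholder clustering rule (bucketing features by quantiles of their current gradient components) for the paper's dynamic clustering criterion. All function names and the schedule parameters (`rounds`, `coarse_iters`, `fine_iters`) are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def nll_and_grad(w, X, y, reg=1.0):
    """Objective and gradient of L2-regularized binary logistic regression."""
    p = expit(X @ w)                                # predicted P(y = 1)
    eps = 1e-12
    nll = -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)).sum()
    nll += 0.5 * reg * (w @ w)
    grad = X.T @ (p - y) + reg * w
    return nll, grad

def interleaved_lbfgs(X, y, n_clusters=20, rounds=5,
                      coarse_iters=5, fine_iters=5, reg=1.0):
    """Alternate a coarse L-BFGS pass over tied feature clusters
    with a fine L-BFGS pass over all features (illustrative sketch)."""
    n_features = X.shape[1]
    w = np.zeros(n_features)
    for _ in range(rounds):
        # Placeholder clustering rule (an assumption, not the paper's):
        # bucket features by the quantile of their current gradient
        # component, so features with similar gradients move together.
        _, g = nll_and_grad(w, X, y, reg)
        bins = np.unique(np.quantile(g, np.linspace(0, 1, n_clusters + 1)[1:-1]))
        assign = np.digitize(g, bins)               # cluster id per feature
        k = bins.size + 1
        M = np.zeros((n_features, k))               # tying matrix:
        M[np.arange(n_features), assign] = 1.0      # M[j, c] = 1 iff j in cluster c

        # Coarse step: optimize one shared delta per cluster, w' = w + M d.
        def coarse(d, w=w, M=M):
            f, grad_w = nll_and_grad(w + M @ d, X, y, reg)
            return f, M.T @ grad_w                  # chain rule through tying matrix
        res = minimize(coarse, np.zeros(k), jac=True, method="L-BFGS-B",
                       options={"maxiter": coarse_iters})
        w = w + M @ res.x

        # Fine step: a few ordinary L-BFGS iterations over all features.
        res = minimize(nll_and_grad, w, args=(X, y, reg), jac=True,
                       method="L-BFGS-B", options={"maxiter": fine_iters})
        w = res.x
    return w
```

A quick smoke test: `X = np.random.default_rng(0).normal(size=(500, 100)); y = (X[:, 0] + X[:, 1] > 0).astype(float); w = interleaved_lbfgs(X, y)`. The point of the coarse step is that its subproblem has only `k` variables, so each of its L-BFGS iterations is cheap relative to a full pass over all features; the paper's actual clustering criterion, interleaving schedule, and the reported 2×-9× speed-ups are detailed in the full text.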
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Source: | Copyright of this article belongs to IEEE Computer Society. |
| ID Code: | 100079 |
| Deposited On: | 12 Feb 2018 12:27 |
| Last Modified: | 12 Feb 2018 12:27 |