Vishwanathan, S. V. N.; Sun, Z.; Theera-Ampornpunt, N.; Varma, M. (2010). Multiple Kernel Learning and the SMO Algorithm. In: Advances in Neural Information Processing Systems.
Abstract
Our objective is to train p-norm Multiple Kernel Learning (MKL) and, more generally, linear MKL regularised by the Bregman divergence, using the Sequential Minimal Optimization (SMO) algorithm. The SMO algorithm is simple, easy to implement and adapt, and scales efficiently to large problems. As a result, it has gained widespread acceptance, and SVMs are routinely trained using SMO in diverse real-world applications. Training using SMO has been a long-standing goal in MKL for the very same reasons. Unfortunately, the standard MKL dual is not differentiable, and therefore cannot be optimised using SMO-style co-ordinate ascent. In this paper, we demonstrate that linear MKL regularised with the p-norm squared, or with certain Bregman divergences, can indeed be trained using SMO. The resulting algorithm retains both simplicity and efficiency and is significantly faster than state-of-the-art specialised p-norm MKL solvers. We show that we can train on a hundred thousand kernels in approximately seven minutes and on fifty thousand points in less than half an hour on a single core.
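To make the abstract's central claim concrete: SMO applies because the $\ell_p$-regularised dual is differentiable for $p > 1$. The following is a sketch under assumed notation, since the abstract does not reproduce the formula and constants such as the $\tfrac{1}{8\lambda}$ factor depend on the exact primal scaling. With base kernels $K_k$, label matrix $Y = \operatorname{diag}(y)$, $H_k = Y K_k Y$, regularisation constant $\lambda$, and $q$ the conjugate exponent of $p$ (so $1/p + 1/q = 1$), the dual takes the form

$$
D(\alpha) \;=\; \mathbf{1}^{\top}\alpha \;-\; \frac{1}{8\lambda}\Bigl(\sum_{k}\bigl(\alpha^{\top} H_k \alpha\bigr)^{q}\Bigr)^{2/q},
\qquad 0 \le \alpha_i \le C,\quad y^{\top}\alpha = 0.
$$

For $p > 1$ (hence $1 < q < \infty$) this objective is smooth in $\alpha$, so the classic SMO step of optimising two dual variables at a time while maintaining $y^{\top}\alpha = 0$ goes through. By contrast, the standard $\ell_1$ MKL dual involves a non-differentiable $\max_k \alpha^{\top} H_k \alpha$ term (the $q \to \infty$ limit of the expression above), which is what blocks SMO-style co-ordinate ascent.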
| Item Type | Conference or Workshop Item (Paper) |
|---|---|
| Source | Copyright of this article belongs to Advances in Neural Information Processing Systems. |
| ID Code | 119695 |
| Deposited On | 16 Jun 2021 09:22 |
| Last Modified | 16 Jun 2021 09:22 |