An algorithm for $l_1$-norm minimization with application to nonlinear $l_1$-approximation

El-Attar, R. A.; Vidyasagar, M.; Dutta, S. R. K. (1979) An algorithm for $l_1$-norm minimization with application to nonlinear $l_1$-approximation. SIAM Journal on Numerical Analysis, 16 (1). pp. 70-86. ISSN 0036-1429

Full text not available from this repository.

Official URL: http://epubs.siam.org/sinum/resource/1/sjnaam/v16/...

Related URL: http://dx.doi.org/10.1137/0716006

Abstract

Necessary and sufficient conditions for minimizing an $l_1$-norm type of objective function are derived using the nonlinear programming (NLP) approach. Broader sufficient conditions are obtained by using directional derivatives. It is shown that an algorithm previously proposed by Osborne and Watson (1971) for nonlinear $l_1$-approximation is a special case of a prototype steepest-descent algorithm. The $l_1$-problem is converted to a sequence of problems, each of which involves the minimization of a continuously differentiable function. Based on this conversion and on the optimality conditions obtained, an algorithm that solves the $l_1$-minimization problem is proposed. An extrapolation technique due to Fiacco and McCormick (1966) and (1968, p. 188) is used to accelerate the convergence of the algorithm and to improve its numerical stability. To illustrate some of the theoretical ideas and to give numerical evidence, several examples are solved. The algorithm is then used to solve some nonlinear $l_1$-approximation problems. A comparison between the Osborne-Watson algorithm and the proposed one is also presented.
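The conversion described in the abstract, replacing the nonsmooth $l_1$ objective by a sequence of continuously differentiable problems, can be sketched as follows. This is only a minimal illustration of the general idea and not the paper's exact algorithm: the particular smooth surrogate, the parameter-update rule, the inner solver, and the residual functions below are assumptions, and the Fiacco-McCormick extrapolation step is omitted.

```python
import numpy as np
from scipy.optimize import minimize

def residuals(x):
    # Hypothetical nonlinear residuals f_i(x): fit y ~ x0 * exp(x1 * t)
    # to a few data points in the l1 sense (not an example from the paper).
    t = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
    y = np.array([1.0, 1.4, 1.9, 2.7, 3.7])
    return x[0] * np.exp(x[1] * t) - y

def smoothed_l1(x, eps):
    # One common smooth surrogate for sum_i |f_i(x)|: each |f_i| is
    # replaced by sqrt(f_i^2 + eps^2), which is continuously differentiable.
    r = residuals(x)
    return np.sum(np.sqrt(r**2 + eps**2))

def l1_minimize(x0, eps0=1.0, shrink=0.1, n_outer=6):
    # Solve a sequence of smooth problems with decreasing eps,
    # warm-starting each one at the previous solution.
    x, eps = np.asarray(x0, dtype=float), eps0
    for _ in range(n_outer):
        res = minimize(smoothed_l1, x, args=(eps,), method="BFGS")
        x, eps = res.x, eps * shrink
    return x

x_star = l1_minimize([1.0, 1.0])
print(x_star, np.sum(np.abs(residuals(x_star))))
```

As eps shrinks, each smooth subproblem approximates the $l_1$ objective more closely, which mirrors the sequence-of-smooth-problems structure the abstract describes.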

Item Type: Article
Source: Copyright of this article belongs to Society for Industrial and Applied Mathematics.
ID Code: 56163
Deposited On: 29 Nov 2011 13:15
Last Modified: 29 Nov 2011 13:15