Vidyasagar, M. (1995) Minimum-seeking properties of analog neural networks with multilinear objective functions. IEEE Transactions on Automatic Control, 40 (8). pp. 1359-1375. ISSN 0018-9286
Full text not available from this repository.
Official URL: http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arn...
Related URL: http://dx.doi.org/10.1109/9.402228
Abstract
In this paper, we study the problem of minimizing a multilinear objective function over the discrete set {0,1}^n. This extends earlier work on the problem of minimizing a quadratic function over {0,1}^n. A gradient-type neural network is proposed to perform the optimization. A novel feature of the network is the introduction of a so-called bias vector. The network is operated in the high-gain region of the sigmoidal nonlinearities. The following comprehensive theorem is proved: For all sufficiently small bias vectors except those belonging to a set of measure zero, for all sufficiently large sigmoidal gains, and for all initial conditions except those belonging to a nowhere dense set, the state of the network converges to a local minimum of the objective function. This is a considerable generalization of earlier results for quadratic objective functions. Moreover, the proofs here are completely rigorous. The neural network-based approach to optimization is briefly compared to the so-called interior-point methods of nonlinear programming, as exemplified by Karmarkar's algorithm. Some problems for future research are suggested.
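For intuition about the dynamics the abstract describes, here is a minimal numerical sketch in Python. The two-variable objective, its coefficients, the specific flow du/dt = -(grad f(sigma(gain*u)) + bias), and all parameter values (gain, step size, bias) are illustrative assumptions, not the network analyzed in the paper; the sketch only exhibits the qualitative behavior claimed: a high-gain sigmoidal gradient flow with a small bias vector settling at a vertex of {0,1}^n that is a local minimum.

```python
import numpy as np

def sigmoid(u, gain):
    """Sigmoidal nonlinearity; at high gain the output saturates toward {0, 1}."""
    z = np.clip(gain * u, -500.0, 500.0)  # guard against overflow in exp
    return 1.0 / (1.0 + np.exp(-z))

def f(x):
    """Toy multilinear objective on [0,1]^2 (coefficients chosen for illustration):
    f(x) = 2*x0 - 3*x1 + 4*x0*x1. Its minimum over {0,1}^2 is at the vertex (0, 1)."""
    return 2.0 * x[0] - 3.0 * x[1] + 4.0 * x[0] * x[1]

def grad_f(x):
    """Exact gradient; a multilinear f is affine in each coordinate separately."""
    return np.array([2.0 + 4.0 * x[1], -3.0 + 4.0 * x[0]])

def run_network(u0, bias, gain=50.0, step=0.01, iters=5000):
    """Euler integration of the assumed gradient-type flow
    du/dt = -(grad f(sigma(gain * u)) + bias).
    The small nonzero bias perturbs the dynamics away from degenerate
    equilibria, in the spirit of the bias vector in the abstract."""
    u = u0.copy()
    for _ in range(iters):
        x = sigmoid(u, gain)
        u = u - step * (grad_f(x) + bias)
    return sigmoid(u, gain)

x_star = run_network(u0=np.array([0.1, -0.1]), bias=np.array([1e-3, -1e-3]))
print("converged state:", np.round(x_star, 3))          # expected: close to [0, 1]
print("objective value:", round(float(f(x_star)), 3))   # expected: close to -3
```

Running the sketch drives the state to (0, 1), the minimizing vertex of the toy objective; since ∂f/∂x0 = 2 + 4*x1 is always positive, x0 is driven to 0, after which ∂f/∂x1 ≈ -3 drives x1 to 1.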
Item Type: Article
Source: Copyright of this article belongs to IEEE.
ID Code: 56920
Deposited On: 25 Aug 2011 09:35
Last Modified: 25 Aug 2011 09:35