Convergence of higher-order two-state neural networks with modified updating

Vidyasagar, M. (1994). Convergence of higher-order two-state neural networks with modified updating. Sadhana (Academy Proceedings in Engineering Sciences), 19(2), pp. 239-255. ISSN 0256-2499

Full text: PDF (Publisher Version, 1MB)

Official URL: http://www.ias.ac.in/j_archive/sadhana/19/2/239-25...

Related URL: http://dx.doi.org/10.1007/BF02811897

Abstract

The Hopfield network is a standard tool for maximizing a quadratic objective function over the discrete set {−1, 1}^n. It is well known that if a Hopfield network is operated in an asynchronous mode, then the state vector of the network converges to a local maximum of the objective function, whereas if the network is operated in a synchronous mode, then the state vector either converges to a local maximum or else enters a limit cycle of length two. In this paper, we examine the behaviour of higher-order neural networks, that is, networks used for maximizing objective functions that are not necessarily quadratic. It is shown that one can assume, without loss of generality, that the objective function to be maximized is multilinear (since x_i^2 = 1 for x_i in {−1, 1}, higher powers of the variables can always be eliminated). Three methods are given for updating the state vector of the neural network, called the asynchronous, the best neighbour and the gradient rules, respectively. For Hopfield networks with a quadratic objective function, the asynchronous rule proposed here reduces to standard asynchronous updating, while the gradient rule reduces to synchronous updating; the best neighbour rule does not appear to have been considered previously. It is shown that both the asynchronous updating rule and the best neighbour rule converge to a local maximum of the objective function within a finite number of time steps. Moreover, under certain conditions, each global maximum has a nonzero radius of direct attraction under the best neighbour rule; in general, this need not be true of the asynchronous rule. The behaviour of the gradient updating rule, however, is not well understood. To address this, a modified gradient updating rule is presented that incorporates both temporal and spatial correlations among the neurons. For the modified updating rule, it is shown that, after a finite number of time steps, the network state vector enters a limit cycle of length m, where m is the degree of the objective function. If m = 2, i.e., for quadratic objective functions, the modified updating rule reduces to the synchronous updating rule for Hopfield networks. Hence the results presented here are "true" generalizations of previously known results.
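
The two convergent rules described in the abstract can be illustrated concretely. The following is a minimal Python sketch, not taken from the paper: it models the asynchronous rule as single-coordinate flips that strictly increase the objective, and the best neighbour rule as moving to the single-coordinate flip with the largest gain. The function names, the tie-breaking choices, and the example objective are all assumptions for illustration. Both rules terminate in finitely many steps because the objective strictly increases at each move and takes only finitely many values on {−1, 1}^n, which mirrors the finite-convergence claim in the abstract.

```python
import numpy as np

def asynchronous_ascent(f, x, rng):
    """Asynchronous rule (sketch, not the paper's exact formulation):
    repeatedly flip one coordinate at a time whenever the flip strictly
    increases f, until no single flip helps.  Terminates because f
    strictly increases and takes finitely many values on {-1, 1}^n."""
    improved = True
    while improved:
        improved = False
        for i in rng.permutation(len(x)):
            y = x.copy()
            y[i] = -y[i]          # flip coordinate i
            if f(y) > f(x):
                x, improved = y, True
    return x

def best_neighbour_step(f, x):
    """One step of a best neighbour rule (sketch): move to the
    single-coordinate flip with the largest strict gain, if any."""
    best_gain, best_i = 0.0, None
    for i in range(len(x)):
        y = x.copy()
        y[i] = -y[i]
        gain = f(y) - f(x)
        if gain > best_gain:
            best_gain, best_i = gain, i
    if best_i is None:
        return x, False           # x is a local maximum
    x = x.copy()
    x[best_i] = -x[best_i]
    return x, True

# Example: a cubic (degree-3) multilinear objective on {-1, 1}^4.
def f(x):
    return x[0]*x[1] + 2*x[1]*x[2]*x[3] - x[0]*x[3]

rng = np.random.default_rng(0)
x = asynchronous_ascent(f, rng.choice([-1, 1], size=4), rng)

y, moved = rng.choice([-1, 1], size=4), True
while moved:
    y, moved = best_neighbour_step(f, y)

print("asynchronous:  ", x, " f =", f(x))
print("best neighbour:", y, " f =", f(y))
```

Both routines stop at a local maximum of f, in line with the abstract's convergence result; the gradient and modified gradient rules, whose limit-cycle behaviour is the paper's main contribution, are not sketched here because their precise definitions are not given in the abstract.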

Item Type: Article
Source: Copyright of this article belongs to the Indian Academy of Sciences.
Keywords: Neural Dynamics; Hopfield Networks; Higher-order Networks; Modified Updating
ID Code: 56919
Deposited On: 25 Aug 2011 09:34
Last Modified: 18 May 2016 08:31
