On the one dimensional "Learning from Neighbours" model

Bandyopadhyay, Antar ; Roy, Rahul ; Sarkar, Anish (2010) On the one dimensional "Learning from Neighbours" model Electronic Journal of Probability, 15 . pp. 1574-1593. ISSN 1083-6489

Full text not available from this repository.

Official URL: http://128.208.128.142/~ejpecp/viewarticle.php?id=...

Abstract

We consider a model of a discrete time "interacting particle system" on the integer line where infinitely many changes are allowed at each instance of time. We describe the model using chameleons of two different colours, viz., red (R) and blue (B). At each instance of time each chameleon performs an independent but identical coin toss experiment with probability α to decide whether to change its colour or not. If the coin lands heads then the creature retains its colour (this is to be interpreted as a "success"); otherwise it observes the colours and coin tosses of its two nearest neighbours and changes its colour only if, among its neighbours and including itself, the proportion of successes of the other colour is larger than the proportion of successes of its own colour. This produces a Markov chain with infinite state space {R, B}^Z. This model was studied by Chatterjee and Xu (2004) in the context of diffusion of technologies in a set-up of myopic, memoryless agents. In their work they assume different success probabilities for the coin tosses according to the colour of the chameleon. In this work we consider the symmetric case, where the success probability α is the same irrespective of the colour of the chameleon. We show that starting from any initial translation invariant distribution of colours the Markov chain converges to a limit of a single colour, i.e., even in the symmetric case there is no "coexistence" of the two colours in the limit. As a corollary we also characterize the set of all translation invariant stationary laws of this Markov chain. Moreover, we show that starting with an i.i.d. colour distribution with density p ∈ [0,1] of one colour (say red), the limiting distribution is all red with probability π(α, p), which is continuous in p and satisfies π(α, p) > p for p "small". The last result can be interpreted as saying that the model favours the "underdog".
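The update rule described above can be sketched in simulation. The following is a minimal illustration, not the authors' code: it runs the dynamics on a finite ring of n chameleons rather than the infinite integer line of the paper, with synchronous updates. The function name `step` and the finite-ring boundary condition are assumptions made for the sketch.

```python
import random

def step(colours, alpha, rng):
    """One synchronous update of a finite-ring analogue of the model.

    colours: list of 'R'/'B' states; alpha: success probability of each
    coin toss; rng: a random.Random instance.
    """
    n = len(colours)
    # Each chameleon tosses its own coin; True means "success" (heads).
    tosses = [rng.random() < alpha for _ in range(n)]
    new = colours[:]
    for i in range(n):
        if tosses[i]:
            continue  # a successful chameleon keeps its colour
        # Neighbourhood: left neighbour, self, right neighbour (ring).
        idx = [(i - 1) % n, i, (i + 1) % n]
        own = colours[i]
        other = 'B' if own == 'R' else 'R'
        cnt_own = sum(1 for j in idx if colours[j] == own)
        cnt_other = sum(1 for j in idx if colours[j] == other)
        succ_own = sum(1 for j in idx if colours[j] == own and tosses[j])
        succ_other = sum(1 for j in idx if colours[j] == other and tosses[j])
        prop_own = succ_own / cnt_own if cnt_own else 0.0
        prop_other = succ_other / cnt_other if cnt_other else 0.0
        # Switch only if the other colour's success proportion is strictly larger.
        if prop_other > prop_own:
            new[i] = other
    return new

if __name__ == "__main__":
    rng = random.Random(42)
    # i.i.d. initial colours with density p = 0.3 of red.
    colours = ['R' if rng.random() < 0.3 else 'B' for _ in range(50)]
    for _ in range(200):
        colours = step(colours, 0.7, rng)
    print(set(colours))  # on a finite ring the chain tends to fixate on one colour
```

Note that a monochromatic configuration is absorbing under this rule: when no chameleon of the other colour is present, the other colour's success proportion is zero, so no strict improvement is ever observed.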

Item Type:Article
Source:Copyright of this article belongs to Institute of Mathematical Statistics.
Keywords:Coexistence; Learning from Neighbours; Markov Chain; Random Walk; Stationary Distribution
ID Code:72294
Deposited On:29 Nov 2011 13:40
Last Modified:29 Nov 2011 13:40
