Nonlinear experiments: optimal design and inference based on likelihood

Chaudhuri, Probal; Mykland, Per A. (1993) Nonlinear experiments: optimal design and inference based on likelihood. Journal of the American Statistical Association, 88(422), pp. 538-546. ISSN 0162-1459

Full text not available from this repository.

Official URL: http://www.jstor.org/stable/10.2307/2290334

Abstract

Nonlinear experiments involve responses and regressors that are connected through a nonlinear regression-type structure. Examples of nonlinear models include standard nonlinear regression, logistic regression, probit regression, Poisson regression, gamma regression, inverse Gaussian regression, and so on. The Fisher information associated with a nonlinear experiment is typically a complex nonlinear function of the unknown parameter of interest. As a result, we face an awkward situation: designing an efficient experiment requires knowledge of the parameter, but the purpose of the experiment is to generate data that yield parameter estimates! Our principal objective here is to investigate the proper design of nonlinear experiments that will let us construct efficient estimates of the parameters. We focus our attention on a very general nonlinear setup that includes many models commonly encountered in practice. The experiments considered have two fundamental stages: a static design in the initial stage, followed by a fully adaptive sequential stage in which the design points are chosen sequentially, exploiting a D-optimality criterion and using parameter estimates based on the available data. We explore the behavior of the maximum likelihood estimate when observations are generated from such an experiment. Two major technical hurdles are (1) the dependent nature of the data obtained from an adaptive sequential experiment and (2) the randomness in the total Fisher information associated with the experiment. Our analysis exploits a martingale structure rooted in the likelihood. We derive sufficient conditions that ensure convergence of the chosen design to a D-optimal one as the number of trials grows. Besides ensuring the large-sample optimality of the design, the convergence of the average Fisher information provides an ergodicity condition related to the growth of the martingale processes intrinsically associated with the likelihood. This key observation eventually yields the first-order efficiency of the maximum likelihood estimate via a martingale central limit theorem and confirms the asymptotic validity of statistical inference based on the likelihood.
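The two-stage scheme described in the abstract (a static initial design, then sequential D-optimal point selection using the current maximum likelihood estimate) can be illustrated with a small simulation. Everything concrete below is an assumption for illustration, not the paper's setup: a two-parameter logistic model, a fixed candidate grid of design points, a ridge-stabilized Newton-Raphson fitter, and particular stage sizes.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_BETA = np.array([0.5, 1.5])  # hypothetical "true" parameters for the simulation

def prob(x, beta):
    # Logistic response probability at design point(s) x.
    return 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * np.asarray(x, dtype=float))))

def fisher(x, beta):
    # Per-trial Fisher information of one logistic trial at scalar x:
    # w(x) * [[1, x], [x, x^2]] with w = p(1 - p).
    p = float(prob(x, beta))
    w = p * (1.0 - p)
    return w * np.array([[1.0, x], [x, x * x]])

def mle(xs, ys, beta0, steps=25, lam=1e-3):
    # Newton-Raphson for a slightly ridge-penalized logistic likelihood;
    # the small penalty lam keeps the estimate finite even under separation.
    beta = beta0.astype(float).copy()
    X = np.column_stack([np.ones(len(xs)), xs])
    y = np.asarray(ys, dtype=float)
    for _ in range(steps):
        p = prob(xs, beta)
        grad = X.T @ (y - p) - lam * beta
        H = (X * (p * (1 - p))[:, None]).T @ X + lam * np.eye(2)
        beta = beta + np.linalg.solve(H, grad)
    return beta

# Stage 1: static initial design on a fixed grid.
xs = list(np.linspace(-2.0, 2.0, 8))
ys = [int(rng.binomial(1, prob(x, TRUE_BETA))) for x in xs]
beta_hat = mle(xs, ys, np.zeros(2))

# Stage 2: fully adaptive sequential stage. Each trial picks the candidate
# point that maximizes det of the updated total Fisher information,
# evaluated at the current parameter estimate (D-optimality criterion).
candidates = np.linspace(-3.0, 3.0, 61)
info = sum(fisher(x, beta_hat) for x in xs)
for _ in range(40):
    gains = [np.linalg.det(info + fisher(c, beta_hat)) for c in candidates]
    x_next = float(candidates[int(np.argmax(gains))])
    y_next = int(rng.binomial(1, prob(x_next, TRUE_BETA)))
    xs.append(x_next)
    ys.append(y_next)
    beta_hat = mle(xs, ys, beta_hat)   # refit MLE on all data so far
    info = info + fisher(x_next, beta_hat)

print(np.round(beta_hat, 2))
```

Note how the sketch exhibits both technical hurdles the abstract names: the sequential design points depend on earlier responses (so the data are dependent), and the accumulated information matrix `info` is itself random because it is driven by the estimated parameter.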

Item Type: Article
Source: Copyright of this article belongs to the American Statistical Association.
ID Code: 74634
Deposited On: 17 Dec 2011 10:08
Last Modified: 17 Dec 2011 10:08