Voronoi networks and their probability of misclassification

Krishna, K. ; Thathachar, M. A. L. ; Ramakrishnan, K. R. (2000) Voronoi networks and their probability of misclassification IEEE Transactions on Neural Networks, 11 (6). pp. 1361-1372. ISSN 1045-9227

Full text not available from this repository.

Official URL: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumb...

Related URL: http://dx.doi.org/10.1109/72.883447

Abstract

To reduce memory requirements and computation cost, many algorithms have been developed that perform nearest neighbor classification using only a small number of representative samples obtained from the training set. We call the classification model underlying all these algorithms Voronoi networks (Vnets). We analyze the generalization capabilities of these networks by bounding the generalization error. The class of problems that can be solved by Vnets is characterized by the extent to which the set of points on the decision boundaries fills the feature space. We show that Vnets asymptotically converge to the Bayes classifier with arbitrarily high probability, provided the number of representative samples grows more slowly than the square root of the number of training samples, and we also give the optimal growth rate of the number of representative samples. We redo the analysis for decision tree (DT) classifiers and compare them with Vnets. The bias/variance dilemma and the curse of dimensionality with respect to Vnets and DTs are also discussed.
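The core operation of the Vnet model described in the abstract is classification by the label of the nearest representative sample, which partitions the feature space into Voronoi cells. A minimal sketch of this decision rule, with made-up prototype data and function names that are illustrative rather than taken from the paper:

```python
import math

def nearest_prototype(x, prototypes):
    """Classify x by the label of its nearest representative sample.

    prototypes: list of (point, label) pairs, where each point is a tuple
    of coordinates. Hypothetical toy data for illustration only.
    """
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    # The prototype whose Voronoi cell contains x determines the class.
    return min(prototypes, key=lambda p: dist(x, p[0]))[1]

# Two representative samples per class in a toy 2-D problem.
prototypes = [((0.0, 0.0), 0), ((1.0, 0.0), 0),
              ((0.0, 3.0), 1), ((1.0, 3.0), 1)]

print(nearest_prototype((0.2, 0.1), prototypes))  # falls in a class-0 cell
print(nearest_prototype((0.5, 2.8), prototypes))  # falls in a class-1 cell
```

The paper's analysis concerns how many such prototypes may be used, relative to the training-set size, while still converging to the Bayes classifier.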

Item Type: Article
Source: Copyright of this article belongs to IEEE.
ID Code: 51325
Deposited On: 28 Jul 2011 15:01
Last Modified: 28 Jul 2011 15:01
