k-Nearest Neighbor (kNN) Classifier


In a typical classification problem we wish to predict an output class, y = 0 or 1, given two inputs, x1 and x2. In the prototypical problem illustrated above, the observed training data consist of n observations divided equally between the two classes, with the 0/1 values color-coded green and red. Based on this training data, our objective is to find a predictor for new data, designated the test data. One method is to take the k nearest neighbors of the new inputs and predict the new output as the most frequent outcome, 0 or 1, among those neighbors; taking k odd avoids ties. This is the kNN classifier, and the idea generalizes easily to more than two output classes and more than two inputs. The kNN classifier is one of the most robust and useful classifiers and is often used as a benchmark for more complex classifiers such as artificial neural networks and support vector machines.
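As a concrete illustration of the voting rule just described, here is a minimal NumPy sketch of a kNN classifier for two inputs and 0/1 labels. The names (knn_predict, X_train, y_train) are illustrative and are not taken from the Demonstration's code.

import numpy as np

def knn_predict(X_train, y_train, X_new, k=3):
    """Predict 0/1 labels for the rows of X_new by majority vote
    among the k nearest training points (Euclidean distance)."""
    X_train = np.asarray(X_train, dtype=float)
    y_train = np.asarray(y_train)
    preds = []
    for x in np.atleast_2d(X_new):
        d = np.linalg.norm(X_train - x, axis=1)   # distances to all training points
        nearest = np.argsort(d)[:k]               # indices of the k closest points
        votes = y_train[nearest]
        preds.append(int(votes.mean() > 0.5))     # majority vote; k odd avoids ties
    return np.array(preds)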


In this simple example, Voronoi tessellations can be used to visualize the performance of the kNN classifier. The thick solid black curve shows the Bayes optimal decision boundary, and the red and green regions show the kNN classification regions for the selected k.
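The Demonstration builds these regions from Voronoi cells. A simpler stand-in, sketched below, is to classify every point of a grid with knn_predict from the sketch above and color it by the predicted class; the color assignment and the grid resolution are arbitrary choices here.

import numpy as np
import matplotlib.pyplot as plt

def plot_knn_regions(X_train, y_train, k, resolution=100):
    """Color a grid over the input plane by the kNN prediction (grid-based
    rendering of the decision regions; a coarse grid keeps the loop fast)."""
    x_min, x_max = X_train[:, 0].min() - 1, X_train[:, 0].max() + 1
    y_min, y_max = X_train[:, 1].min() - 1, X_train[:, 1].max() + 1
    xx, yy = np.meshgrid(np.linspace(x_min, x_max, resolution),
                         np.linspace(y_min, y_max, resolution))
    grid = np.column_stack([xx.ravel(), yy.ravel()])
    zz = knn_predict(X_train, y_train, grid, k).reshape(xx.shape)
    plt.contourf(xx, yy, zz, levels=[-0.5, 0.5, 1.5],
                 colors=["red", "green"], alpha=0.3)       # class 0 / class 1 regions
    plt.scatter(X_train[:, 0], X_train[:, 1],
                c=np.where(y_train == 1, "green", "red"), s=15)
    plt.title(f"kNN decision regions, k = {k}")
    plt.show()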

Based only on this training dataset, the best possible choice of k can be determined: it is the value of k that minimizes the expected misclassification rate on future data. See the Details section for how this is estimated.

In practice, though, the choice of k must be based solely on the training dataset. The pseudo-likelihood method of Holmes and Adams [2] and leave-one-out cross-validation each yield a data-driven choice of k for this training set. Linear or logistic regression with an intercept term produces a linear decision boundary and corresponds to a kNN fit with about three effective parameters (roughly k ≈ n/3, since kNN has about n/k effective parameters).
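As a rough sketch of the leave-one-out idea, the following assumes knn_predict from the first sketch; the candidate range of k values is an arbitrary illustration, not the grid used in the Demonstration.

import numpy as np

def loo_cv_error(X_train, y_train, k):
    """Fraction of training points misclassified when each one is predicted
    from the remaining n - 1 observations."""
    X_train, y_train = np.asarray(X_train, dtype=float), np.asarray(y_train)
    n = len(y_train)
    errors = 0
    for i in range(n):
        mask = np.arange(n) != i                  # drop the i-th observation
        pred = knn_predict(X_train[mask], y_train[mask], X_train[i:i+1], k)[0]
        errors += int(pred != y_train[i])
    return errors / n

def choose_k_loo(X_train, y_train, k_values=range(1, 40, 2)):
    """Return the k with the smallest leave-one-out error, plus the full curve."""
    errs = {k: loo_cv_error(X_train, y_train, k) for k in k_values}
    return min(errs, key=errs.get), errs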


Contributed by: Ian McLeod (March 2011)
Open content licensed under CC BY-NC-SA


Details

In §2.3.2 of [1], Hastie et al. point out that Voronoi tessellations may be used to visualize the performance of the kNN classifier and give several examples. The data used here are generated independently of the mixture data in those examples, but the overall setup is the same: each class is generated from a mixture of ten normal distributions with the means and variances suggested in §2.3.4 of [1].
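A data generator along these lines might look like the following sketch. The specific constants (class centers (1, 0) and (0, 1), component covariance 0.2 times the identity, ten components, 100 observations per class) follow the mixture setup described in [1] and should be treated as assumptions rather than the Demonstration's exact values.

import numpy as np

rng = np.random.default_rng(2011)

def make_mixture_class(center, n_obs, n_components=10, spread=0.2):
    """Draw component means around `center`, then sample each observation
    from a randomly chosen component with covariance spread * I."""
    means = rng.multivariate_normal(center, np.eye(2), size=n_components)
    picks = rng.integers(0, n_components, size=n_obs)
    return means[picks] + rng.multivariate_normal([0, 0], spread * np.eye(2), size=n_obs)

def make_training_data(n_per_class=100):
    X0 = make_mixture_class([1.0, 0.0], n_per_class)   # class 0
    X1 = make_mixture_class([0.0, 1.0], n_per_class)   # class 1
    X = np.vstack([X0, X1])
    y = np.concatenate([np.zeros(n_per_class, dtype=int),
                        np.ones(n_per_class, dtype=int)])
    return X, y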

For this model, the optimal Bayes misclassification rate can be computed; this assumes perfect knowledge of the model. If k increases without bound as the training sample size n increases, while k/n tends to zero, the misclassification rate of kNN on test data tends to this Bayes rate.
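Because the class densities are known exactly for this model, the Bayes classifier simply assigns a point to the class with the larger mixture density (the priors are equal), and its error rate can be approximated by Monte Carlo, as in the sketch below. Here means0 and means1 are the known component means of each class, and spread matches the generator above; all of these are assumptions for illustration.

import numpy as np

def mixture_density(x, means, spread=0.2):
    """Equal-weight mixture of N(mean, spread * I) densities at the rows of x."""
    d2 = ((x[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)   # squared distances
    return np.exp(-d2 / (2 * spread)).mean(axis=1) / (2 * np.pi * spread)

def bayes_error_mc(means0, means1, spread=0.2, n_mc=200_000, rng=None):
    """Monte Carlo estimate of the Bayes misclassification rate (equal priors)."""
    rng = rng or np.random.default_rng(0)
    picks0 = means0[rng.integers(0, len(means0), n_mc // 2)]
    picks1 = means1[rng.integers(0, len(means1), n_mc // 2)]
    x0 = picks0 + rng.multivariate_normal([0, 0], spread * np.eye(2), n_mc // 2)
    x1 = picks1 + rng.multivariate_normal([0, 0], spread * np.eye(2), n_mc // 2)
    # A class-0 point is misclassified when the class-1 density is larger, and vice versa.
    err0 = np.mean(mixture_density(x0, means1, spread) > mixture_density(x0, means0, spread))
    err1 = np.mean(mixture_density(x1, means0, spread) > mixture_density(x1, means1, spread))
    return 0.5 * (err0 + err1)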

With a given finite set of training data, we can ask what is the best possible choice of k in the kNN algorithm for predicting future test data. This can be determined by simulation. We simulated a large test sample and estimated the misclassification rate over a range of values of k. The rate was minimized at a single value of k, and the standard error of the estimate was narrow enough to exclude the other candidate values.
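A simulation of this kind might be organized as follows. The sketch assumes knn_predict and the mixture generator from the earlier sketches; the grid of k values and the test-sample size are arbitrary choices, not the settings actually used in the Demonstration.

import numpy as np

def test_error_curve(X_train, y_train, X_test, y_test, k_values=range(1, 60, 2)):
    """Estimate the test misclassification rate and its binomial standard error
    for each candidate k; the best k minimizes the estimated rate."""
    curve = {}
    for k in k_values:
        pred = knn_predict(X_train, y_train, X_test, k)
        rate = np.mean(pred != y_test)
        se = np.sqrt(rate * (1 - rate) / len(y_test))
        curve[k] = (rate, se)
    return curve

# Example usage with a large simulated test sample:
# X_test, y_test = make_training_data(n_per_class=5000)
# curve = test_error_curve(X_train, y_train, X_test, y_test)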

In-depth treatments of the kNN method are provided in Chapter 13 of [1] and §6.2 of [3].

[1] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed., New York: Springer, 2009.

[2] C. C. Holmes and N. M. Adams, "Likelihood Inference in Nearest-Neighbour Classification Models," Biometrika, 90, 2003, pp. 99–112.

[3] B. D. Ripley, Pattern Recognition and Neural Networks, Cambridge, UK: Cambridge University Press, 1996.


