The supervised machine learning algorithm [b]knn[/b] (short for [i]k[/i] nearest neighbors) is a classification algorithm that makes mathematically precise the idea that "birds of a feather flock together." As with any supervised machine learning algorithm, a knn model is "trained" on a set of data in which the classifications (or "labels" or "answers") are known, and the model can then be used to predict the classifications of new data where they are unknown. [br][br]The knn algorithm can be trained on data with any number of numerical explanatory variables and a single categorical response variable with any number of levels. [br][br]The algorithm works by plotting the explanatory variables of the training data in the Euclidean [b]explanatory space[/b]. [br][br]An unclassified point is then classified by taking a majority vote of its [i]k[/i] nearest neighbors in the training data. The number [i]k[/i] is known as a [b]hyper-parameter[/b] of the model and is selected by the researcher; typical selections are [i]k[/i] = 1 or [i]k[/i] = 3. [br][br]Ties can be handled in any number of ways, including declining to classify the point or increasing or decreasing the hyper-parameter [i]k[/i]. There is no widespread agreement on how to handle ties, and different implementations of knn use different tie-breaking procedures.[br][br]The performance of a knn model is not usually assessed with [i]p[/i]-values, as is standard for regression models. Instead, knn model skill is judged by the proportion of predictions the model gets correct on unseen "testing" data ([i]i.e.[/i] data for which the classifications are known but on which the model was not trained). Higher proportions of correct predictions are better, and they are usually compared against what would be expected if classifications were made at random. It is also common to use confusion matrices. This page does not cover model skill or confusion matrices; check back later for a separate tutorial on those concepts.
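If you would like to see the majority vote and the skill score in code, here is a minimal Python sketch of the ideas above. The training points, test points, and the helper name knn_classify are all made up for illustration; this is not the applet's implementation.[br][br][code]
from collections import Counter
from math import dist  # Euclidean distance (Python 3.8+)

def knn_classify(point, training_data, k=3):
    # training_data is a list of ((x, y), label) pairs.
    # Sort the training points by Euclidean distance to the query point.
    neighbors = sorted(training_data, key=lambda row: dist(point, row[0]))
    # Majority vote among the k nearest neighbors.
    votes = Counter(label for _, label in neighbors[:k]).most_common()
    (top_label, top_count), *rest = votes
    if rest and rest[0][1] == top_count:
        return None  # a tie: decline to classify, one of the options above
    return top_label

# Made-up training data: two numerical explanatory variables,
# one three-level categorical response.
train = [((25, 7), "Type 1"), ((60, 4), "Type 2"), ((62, 5), "Type 2"),
         ((22, 9), "Type 3"), ((20, 8), "Type 3")]

print(knn_classify((58, 4), train, k=3))  # -> Type 2

# Model skill: proportion correct on held-out (also made-up) testing data.
test = [((59, 5), "Type 2"), ((23, 9), "Type 3")]
accuracy = sum(knn_classify(p, train, k=3) == label for p, label in test) / len(test)
print(accuracy)  # -> 1.0
[/code][br]Note that with a three-level response, even [i]k[/i] = 3 can tie (one vote per level), which is why the sketch returns None rather than guessing.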
The applet below helps you explore knn in action. The plotted dots represent a training data set with two numerical explanatory variables and a single three-level categorical response variable. [br][br]The two numerical explanatory variables range from about 20 to 80 in [i]x[/i] and from 0 to 9 in [i]y[/i]. The plane the dots are plotted in is called the explanatory space; in this example, it is a standard two-dimensional Euclidean plane. Additional explanatory variables would increase the dimension of the explanatory space. [br][br]The single categorical response variable is plotted as the color of the dots. There are three levels of the response variable: Type 1 is red, Type 2 is green, and Type 3 is blue.[br][br]The black dot located at (8.7, 3.3) with the ellipse around it is an unclassified point waiting to be classified by you using the knn algorithm. What appears to be an ellipse is actually a Euclidean circle whose radius is controlled by the second black dot on the circle. The circle appears elliptical because the scales of the [i]x[/i] and [i]y[/i] axes are different; if the axes were drawn at the same scale (1:1, [i]x[/i]:[i]y[/i]), the "ellipse" would appear as a circle.[br][br]To use knn to classify the black dot at (8.7, 3.3), increase the radius of the circle until the circle "captures" points from the training data set. The count of the training data captured inside the circle is tracked for you on the right. As you slowly expand the circle, you'll notice that a Type 2 and a Type 3 point enter the circle at the same radius, so a [i]k[/i] = 1 or [i]k[/i] = 2 classification of (8.7, 3.3) is not possible because of the tie. However, as the radius of the circle increases further, a second Type 3 point is captured, resulting in a knn ([i]k[/i] = 3) classification of the black point as Type 3. [br][br]After this task, you can move the unclassified point to a new location -- try (60, 4) -- and increase the radius of the circle to explore that point's [i]k[/i] nearest neighbors. For instance, at (60, 4), the first three points captured are all Type 2, indicating that (60, 4) would be knn classified as Type 2 with [i]k[/i] = 1, [i]k[/i] = 2, or [i]k[/i] = 3.[br][br]Play around! You can't break anything. If you get lost, you can always reset the applet by pressing the circular arrow button in the top right or simply refreshing the page.
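If you prefer to think of the expanding circle numerically, the sketch below counts the training points a circle of a given radius has "captured," which is the same count the applet tracks in its table. The three points near (8.7, 3.3) are invented for illustration and are not the applet's actual data.[br][br][code]
from math import dist

def captured(center, radius, training_data):
    # Labels of the training points lying inside (or on) the circle
    # of the given radius around the unclassified point.
    return [label for p, label in training_data if dist(center, p) <= radius]

# Invented points near the black dot at (8.7, 3.3), for illustration only.
train = [((9.0, 3.5), "Type 2"), ((8.5, 3.6), "Type 3"), ((9.2, 3.0), "Type 3")]
for radius in (0.2, 0.4, 0.6):
    print(radius, captured((8.7, 3.3), radius, train))
# 0.2 -> []                              nothing captured yet
# 0.4 -> [Type 2, Type 3]                a tie, so no k = 1 or k = 2 classification
# 0.6 -> [Type 2, Type 3, Type 3]        k = 3 classifies the point as Type 3
[/code][br]The distance here is computed in data units, which is why a set of points at equal distance from the center looks elliptical when the two axes are drawn at different scales.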
Use the applet above to classify each of the following points, first using [i]k[/i] = 1 and then [i]k[/i] = 3.[br][br]1. (50,3)[br]2. (52,8)[br]3. (20,7)[br]4. (25,7)[br]5. (30,1)[br]6. (30,4)[br]7. (30,6)
While you're classifying, also contemplate or discuss these questions: [br][br]1. Describe in plain English the main classification regions for both [i]k[/i] = 1 and [i]k[/i] = 3. (Hint: use language a human could easily understand, like "when [i]x[/i] is greater than/less than/between __ and [i]y[/i] is greater than/less than/between __, then the [i]k[/i] = __ classification is Type __.")[br][br]2. Are there any trouble areas where you get different results for [i]k[/i] = 1 and [i]k[/i] = 3?[br][br]3. Would using [i]k[/i] > 3 improve the classification?[br][br]4. Would you increase or decrease the scale of the vertical axis to make the "circle" look like a circle?[br][br]5. Any ideas what this dataset represents?
To answer the final discussion question, and in case you are curious, the example data above are the results of a survey of people's favorite [url=https://en.wikipedia.org/wiki/List_of_Star_Wars_films#Skywalker_saga]Star Wars Trilogy[/url] in the Skywalker Saga. [br][br]The horizontal axis is the respondent's age in years. The vertical axis is the number of Skywalker Saga films the respondent has seen (0 up to all 9 films). The color of the dot represents the respondent's self-reported favorite trilogy: Type 1 is the prequel trilogy (Episodes 1-3, 1999-2005), Type 2 is the original trilogy (Episodes 4-6, 1977-1983), and Type 3 is the sequel trilogy (Episodes 7-9, 2015-2019).[br][br]The applet below is updated with labeled axes and an expanded legend. Additional color coding is also added to the unclassified point and circle when a majority of nearest neighbors is captured, making the classification clear without your having to pay close attention to the counts in the table.