K Nearest Neighbor (KNN from now on) is one of those algorithms that are very simple to understand but work incredibly well in practice. It is also surprisingly versatile, and its applications range from vision to proteins to computational geometry to graphs and so on. Most people learn the algorithm and then do not use it much, which is a pity, as a clever use of KNN can make things very simple. It might also surprise many to know that KNN is one of the top 10 data mining algorithms. Let's see why this is the case!

In this post, I will talk about KNN and how to apply it in various scenarios. I will focus primarily on classification, even though it can also be used for regression. I also will not discuss much about Voronoi diagrams or tessellation.

KNN Introduction

KNN is a non-parametric, lazy learning algorithm. That is a pretty concise statement. When we say a technique is non-parametric, we mean that it does not make any assumptions about the underlying data distribution. This is pretty useful, as in the real world most practical data does not obey the typical theoretical assumptions (e.g. Gaussian mixtures, linear separability, etc.). Non-parametric algorithms like KNN come to the rescue here.

It is also a lazy algorithm. What this means is that it does not use the training data points to do any generalization. In other words, there is no explicit training phase, or it is very minimal. This means the training phase is pretty fast. Lack of generalization means that KNN keeps all the training data. More exactly, all the training data is needed during the testing phase. (Well, this is an exaggeration, but not far from the truth.) This is in contrast to other techniques like SVM, where you can discard all non support vectors without any problem. Most lazy algorithms – especially KNN – make decisions based on the entire training data set (or, at best, a subset of it).

The dichotomy is pretty obvious here – there is a non-existent or minimal training phase but a costly testing phase. The cost is in terms of both time and memory. More time might be needed because, in the worst case, all data points might take part in the decision. More memory is needed because we need to store all the training data.

Assumptions in KNN

Before using KNN, let us revisit some of the assumptions it makes.

KNN assumes that the data is in a feature space. More exactly, the data points are in a metric space. The data can be scalars or possibly even multidimensional vectors. Since the points are in a feature space, they have a notion of distance – this need not necessarily be Euclidean distance, although that is the one commonly used.

The training data consists of a set of vectors, with a class label associated with each vector. In the simplest case, the label will be either + or – (for positive or negative classes), but KNN works equally well with an arbitrary number of classes.

We are also given a single number "k". This number decides how many neighbors (where a neighbor is defined based on the distance metric) influence the classification. This is usually an odd number if the number of classes is 2. If k = 1, the algorithm is simply called the nearest neighbor algorithm.

KNN for Density Estimation

Although classification remains the primary application of KNN, we can also use it for density estimation. Since KNN is non-parametric, it can estimate arbitrary distributions. The idea is very similar to the use of a Parzen window. Instead of using a hypercube and kernel functions, here we do the estimation as follows – to estimate the density at a point x, place a hypercube centered at x and keep increasing its size until k neighbors are captured. Now estimate the density using the formula,

    p(x) = \frac{k/n}{V}

where n is the total number of data points and V is the volume of the hypercube. Notice that the numerator is essentially a constant, so the density is driven by the volume. The intuition is this: let's say the density at x is very high. Then we can find k points near x very quickly. These points are also very close to x (by definition of high density). This means the volume of the hypercube is small and the resulting density estimate is high. Now let's say the density around x is very low. Then the volume of the hypercube needed to encompass the k nearest neighbors is large and, consequently, the ratio is low.

The volume plays a role similar to the bandwidth parameter in kernel density estimation. In fact, KNN is one of the common methods to estimate the bandwidth (e.g. adaptive mean shift).
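As a rough illustration, here is a minimal Python sketch of this k-nearest-neighbor density estimate in one dimension (the function name and the toy data are my own, not from any particular library; in 1-D the "hypercube" is just an interval):

```python
import numpy as np

def knn_density(x, samples, k):
    """k-NN density estimate at point x: p(x) = (k / n) / V,
    where V is the length of the smallest interval centered at x
    that contains k sample points."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    # The k-th smallest distance from x gives the radius needed
    # to capture k neighbors.
    radius = np.sort(np.abs(samples - x))[k - 1]
    volume = 2 * radius               # length of the interval [x - r, x + r]
    return (k / n) / volume

# Toy usage: the estimate is higher where the samples are denser.
data = np.concatenate([np.random.normal(0, 1, 500),
                       np.random.normal(5, 1, 100)])
print(knn_density(0.0, data, k=10))   # relatively high
print(knn_density(5.0, data, k=10))   # lower
print(knn_density(10.0, data, k=10))  # close to zero
```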

KNN for Classification

Let's see how to use KNN for classification. In this case, we are given some data points for training and a new unlabelled data point for testing. Our aim is to find the class label for the new point. The algorithm behaves differently based on k.

Case 1: k = 1, or the Nearest Neighbor Rule

This is the simplest scenario. Let x be the point to be labeled. Find the point closest to x; let it be y. The nearest neighbor rule asks us to assign the label of y to x. This seems too simplistic and sometimes even counterintuitive. If you feel that this procedure will result in a huge error, you are right – but there is a catch. This reasoning holds only when the number of data points is not very large.

If the number of data points is very large, then there is a very high chance that the labels of x and y are the same. An example might help – let's say you have a (potentially) biased coin. You toss it a million times and get heads 900,000 times. Then your next call will most likely be heads. We can use a similar argument here.

Let me try an informal argument here – assume all points lie in a D-dimensional space and the number of points is reasonably large. This means that the density at any point is fairly high; in other words, within any subspace there is an adequate number of points. Consider a point x in such a subspace, which therefore has a lot of neighbors. Now let y be its nearest neighbor. If x and y are sufficiently close, then we can assume that the class probabilities at x and y are nearly the same – and then, by decision theory, x and y get the same class.

The book "Pattern Classification" by Duda and Hart has an excellent discussion of this nearest neighbor rule. One of their striking results is a fairly tight error bound for the nearest neighbor rule. The bound is

P^* \leq P \leq P^* ( 2 - \frac{c}{c-1} P^*)

where P^* is the Bayes error rate, c is the number of classes and P is the error rate of the nearest neighbor rule. The result is indeed very striking (at least to me) because it says that if the number of points is fairly large, then the error rate of the nearest neighbor rule is less than twice the Bayes error rate. Pretty cool for a simple algorithm like KNN. Do read the book for all the juicy details.

Case 2: k = K, or the k-Nearest Neighbor Rule

This is a straightforward extension of 1NN. Basically, we find the k nearest neighbors and do a majority vote. Typically k is odd when the number of classes is 2. Let's say k = 5 and there are 3 instances of C1 and 2 instances of C2. In this case, KNN says that the new point has to be labeled as C1, since it forms the majority. We follow a similar argument when there are multiple classes.

One straightforward extension is to not give every neighbor an equal vote. A very common approach is weighted kNN, where each point has a weight that is typically calculated from its distance. For example, under inverse distance weighting, each point has a weight equal to the inverse of its distance to the point to be classified. This means that nearby points have a higher vote than farther points.

It is quite obvious that the accuracy *might* increase when you increase k but the computation cost also increases.
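To make the voting concrete, here is a small brute-force sketch in Python (the function and variable names are mine, not from any specific library). It supports both plain majority voting and inverse distance weighting; with k = 1 it reduces to the nearest neighbor rule:

```python
import numpy as np
from collections import defaultdict

def knn_classify(query, X, y, k=5, weighted=False):
    """Classify `query` by a vote among its k nearest training points.

    X : (n, d) array of training points, y : length-n array of labels.
    With weighted=True, each neighbor votes with weight 1 / distance;
    otherwise every neighbor gets one vote.
    """
    dists = np.linalg.norm(X - query, axis=1)   # Euclidean distances to all points
    neighbors = np.argsort(dists)[:k]           # indices of the k closest points
    votes = defaultdict(float)
    for i in neighbors:
        votes[y[i]] += 1.0 / (dists[i] + 1e-12) if weighted else 1.0
    return max(votes, key=votes.get)            # label with the largest total vote

# Toy usage: two 2-D classes.
X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], dtype=float)
y = np.array(['C1', 'C1', 'C1', 'C2', 'C2', 'C2'])
print(knn_classify(np.array([0.5, 0.5]), X, y, k=3))                 # -> 'C1'
print(knn_classify(np.array([4.5, 5.0]), X, y, k=3, weighted=True))  # -> 'C2'
```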

Some Basic Observations

1. If we assume that the points are d-dimensional, then the straightforward implementation of finding the k nearest neighbors takes O(dn) time.
2. We can think of KNN in two ways – one way is that KNN tries to estimate the posterior probability of the point to be labeled (and applies Bayesian decision theory based on that posterior). An alternate way is that KNN calculates the decision surface (either implicitly or explicitly) and then uses it to decide the class of new points.
3. There are many possible ways to assign weights in KNN – one popular example is Shepard's method.
4. Even though the naive method takes O(dn) time, it is very hard to do better unless we make some other assumptions. There are efficient data structures like the KD-tree which can reduce the time complexity, but they do so at the cost of increased training time and complexity (see the sketch after this list).
5. In KNN, k is usually chosen as an odd number if the number of classes is 2.
6. The choice of k is very critical – a small value of k means that noise will have a higher influence on the result. A large value makes it computationally expensive and kind of defeats the basic philosophy behind KNN (that points that are near might have similar densities or classes). A simple approach is to set k = \sqrt{n}.
7. There are some interesting data structures and algorithms when you apply KNN on graphs – see the Euclidean minimum spanning tree and the nearest neighbor graph.

8. There are also some nice techniques like condensing, search trees and partial distance that try to reduce the time taken to find the k nearest neighbors. Duda et al. have a discussion of all these techniques.
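For the KD-tree point in observation 4, here is a quick sketch of how the neighbor search might look using SciPy's cKDTree (assuming SciPy is available; the data is made up):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 3))      # 10,000 training points in 3-D

tree = cKDTree(X)                     # "training" here is just building the tree
query = np.zeros(3)
dists, idx = tree.query(query, k=5)   # 5 nearest neighbors of the query point
print(idx, dists)
```

The trade-off mentioned above shows up directly: building the tree costs extra time up front, but each query is then typically much faster than the O(dn) scan.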

Applications of KNN

KNN is a versatile algorithm and is used in a huge number of fields. Let us take a look at a few uncommon and non-trivial applications.

1. Nearest Neighbor based Content Retrieval
This is one of the fascinating applications of KNN – we can use it in computer vision for many tasks. You can consider handwriting recognition as a rudimentary nearest neighbor problem. The problem becomes more fascinating if the content is a video – given a video, find the video closest to the query in the database. Although this sounds abstract, it has a lot of practical applications – e.g. consider ASL (American Sign Language), where communication is done using hand gestures.

So let's say we want to prepare a dictionary for ASL so that a user can query it by performing a gesture. Now the problem reduces to finding the (possibly k) closest gesture(s) stored in the database and showing them to the user. At its heart, this is nothing but a KNN problem. One of the professors from my department, Vassilis Athitsos, does research on this interesting topic – see Nearest Neighbor Retrieval and Classification for more details.

2. Gene Expression
This is another cool area where, many a time, KNN performs better than other state-of-the-art techniques. In fact, a combination of KNN and SVM is one of the most popular techniques there. This is a huge topic on its own, so I will refrain from saying much more about it.

3. Protein-Protein interaction and 3D structure prediction
Graph-based KNN is used in protein interaction prediction. Similarly, KNN is used in 3D structure prediction.

References

There are a lot of excellent references for KNN. Probably the finest is the book "Pattern Classification" by Duda and Hart. Most of the other references are application specific. Computational geometry has some elegant algorithms for KNN range searching. Bioinformatics and proteomics also have a lot of topical references.

After this post, I hope you have a better appreciation of KNN!

 


 

If you liked this post, please subscribe to the RSS feed.


In this series of articles, I intend to discuss Bayesian decision theory and its most important basic ideas. The articles are mostly based on the classic book "Pattern Classification" by Duda, Hart and Stork. If you want the ideas in all their glory, go get the book!

As I was reading the book, I realized that, at its heart, this field is a set of mostly common-sense ideas validated by rigorous mathematics. So I will try to discuss the basic ideas in plain English without much mathematical rigor. Since I am still learning how to best explain these complex ideas, any comments on how to improve are welcome!

You may ask what happened to PCA. Well, I still intend to write more on it but have not found enough time to sit and write it all. I have written a draft version but felt it was too technical and not very intuitive, so I am hoping to rewrite it.

Background

You will need to know the basics of probability to understand the following – in particular, the ideas of the prior, the posterior and the likelihood. Of course, if you know all these, you will know the big idea of Bayes' theorem. I will try to explain them lightly, but if you have any doubts check out Wikipedia or some old textbooks.

We will take the example used in Duda et al. There are two types of fish: sea bass and salmon. We catch a lot of fish of both types, mixed together, and our aim is to separate them by automation. We have a conveyor belt on which the fish come one by one, and we need to decide whether the current fish is a sea bass or a salmon. Of course, we want to be as accurate as possible, but we also don't want to spend a lot of money on this project. This is, at its heart, a classification project. We will be given a few examples of both sea bass and salmon, and based on them we need to infer the general characteristics with which we can distinguish them.

Basic Probability Ideas

Now let us be slightly more formal. We say that there are two "classes" of fish – sea bass and salmon. According to our system, there is no other type of fish. If we treat it as a state machine, then our system has two states. The book uses the notation \omega_1 \; and \; \omega_2 to represent them. We will use the names seabass and salmon, as they are more intuitive.

The first basic idea is that of prior probability. This is represented as P(seabass) \; and \; P(salmon), which give the probability that the next fish on the conveyor is a seabass or a salmon. Of course, the two have to sum to one. From the Bayesian perspective, this probability is usually obtained from prior (domain) knowledge. We will not talk about the frequentist interpretation, as we will focus on Bayesian decision theory.

Let us assume that we use the length of the fish to differentiate them. So whenever a fish comes onto the conveyor belt, we calculate its length (how, we don't really care here). We have thus transformed the fish into a simple representation using a single number, its length. So the length is a feature that we use to classify, and the step of converting the fish into a length is called feature extraction.

In a real-life scenario, we will have multiple features and the input will be converted to a vector. For example, we may use length, lightness of skin, fin length, etc. as features. In this case, the fish will be transformed into a triplet. Converting the input to a feature vector makes further processing easier and more robust. We will usually use the letter x to represent the feature. So you can consider P(x) to be the probability of the evidence. For example, let's say we got a fish (we don't know what it is yet) of length 5 inches. Then P(x) gives the probability that some fish (either seabass or salmon) has length 5 inches.

The next idea is that of likelihood. It is also called the class-conditional probability. It is represented as P(x|seabass) \; or \; P(x|salmon). The interpretation is simple: if the fish is a seabass, what is the probability that it will have length x inches (ditto for salmon)? Alternatively, what is the probability that a 5-inch seabass exists, and so on. Or even: how "likely" is a 5-inch seabass?

The posterior probability is the other side of the story. This is represented by P(seabass|x) \; or \; P(salmon|x). Intuitively: given that we have a fish of length x inches, what is the probability that it is a seabass (or a salmon)? The interesting thing is that, knowing the prior probability and the likelihood, we can calculate the posterior probability using the famous Bayes theorem. We can represent it in words as,

posterior = \frac{likelihood \times prior}{evidence}

This gives another rationale for the word "likelihood". All other things being equal, the item with the higher likelihood is more "likely" to be the final result. For example, if the likelihood of a 10-inch seabass is higher than that of a salmon, then when we observe an unknown fish of length 10 inches, it is most likely a seabass.
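To make the formula concrete, here is a tiny Python sketch. The priors and likelihoods below are purely made-up numbers for illustration, not from the book:

```python
# Made-up numbers: prior belief about the classes and the likelihood
# of observing a 10-inch fish under each class.
prior = {'seabass': 0.5, 'salmon': 0.5}
likelihood_10in = {'seabass': 0.30, 'salmon': 0.05}

# evidence P(x) = sum over classes of likelihood * prior
evidence = sum(likelihood_10in[c] * prior[c] for c in prior)

# posterior = likelihood * prior / evidence  (Bayes theorem in words above)
posterior = {c: likelihood_10in[c] * prior[c] / evidence for c in prior}
print(posterior)   # {'seabass': ~0.857, 'salmon': ~0.143}
```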

PS: There is an excellent (but long) tutorial on Bayes theorem, "An Intuitive Explanation of Bayes' Theorem". True to its title, it does try to explain the bizarre (at least initially) results of Bayes theorem using multiple examples. I highly recommend reading it.

Bayesian Decision Theory

Let us enter into decision theory at last. At a very high level, you can consider decision theory a field that studies "decisions" (to classify as seabass or not to be) – more exactly, it considers these decisions in terms of cost or loss functions (more on that later). In essence, you can think of decision theory as providing a decision rule which tells us what action to take when we make a particular observation. Decision theory can be thought of as all about evaluating decision rules. (Of course, I am grossly simplifying things, but I think I have conveyed the essence.)

Informal Discussion of Decision Theory for Two Class System with Single Feature

Let us take a look at the simplest application of decision theory to our problem. We have a two-class system (seabass, salmon) and we are using a single feature (length) to make a decision. Be aware that length is not an ideal feature, because many a time you will have both seabass and salmon of the same length (say 5 inches). So when we come across a fish with length 5 inches, we are stuck. We don't know what decision to take, as we know both seabass and salmon can be 5 inches long. Decision theory to the rescue!

Instead of presenting the theoretical ideas, I will discuss various scenarios and the best decision-theoretic action in each. In all the scenarios, let us assume that we want to be as accurate as possible.

Case I: We don't know anything and we are not allowed to see the fish

This is the worst case to be in. We have no idea about seabass or salmon (a vegetarian, perhaps? 🙂 ). We are also not allowed to see the fish. But we are asked: is the next fish on the conveyor a seabass or a salmon? All is not lost – the best thing to do is to randomize. So the decision rule is: with probability 50% say the next fish is a seabass, and with probability 50% say it is a salmon.

Convince yourself that this is the best thing to do – not only when the seabass and salmon are in a 50:50 ratio, but even when they are in a 90:10 ratio (since we know nothing, guessing at random guarantees 50% accuracy no matter what the true ratio is).

Case II: You know the prior probability but still don't see the fish

We are in a slightly better position here. We don't get to see the fish yet, but we know the prior probability that the next fish is a seabass or a salmon, i.e. we are given P(seabass) \; and \; P(salmon). Remember, we want to be as accurate as possible, and we want to be as reliable about that accuracy rate as possible.

A common mistake is to randomize again, i.e. with probability P(seabass) say that the next fish is a seabass, and salmon otherwise. For example, let us say P(seabass) = 0.7 \; and \; P(salmon) = 0.3. Let me attempt an informal argument – suppose, out of 100 fish, the first 40 are seabass, the next 30 are salmon and the last 30 are seabass, while your randomized calls happen to be salmon for the first 30 fish and seabass for the remaining 70. In this hypothetical worst-case ordering you are only 40% accurate, even though you can do better. (On average, matching the prior gets you only 0.7 \times 0.7 + 0.3 \times 0.3 = 0.58.)

What does decision theory say here? If P(seabass) > P(salmon), then ALWAYS say seabass; else ALWAYS say salmon. In this case the accuracy rate is max(P(seabass), P(salmon)) and, conversely, the error rate is the minimum of the two prior probabilities. Convince yourself that this is the best you can do. It sure is counterintuitive to always say seabass when you know you will get salmon too, but we can easily prove that this is the best you can do "reliably".

Mathematically, the decision rule is: decide seabass if P(seabass) > P(salmon), else decide salmon.

The error is min(P(seabass), P(salmon)).
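A quick simulation makes the gap visible. A minimal sketch, using the 70:30 numbers from the example above (everything else here is mine):

```python
import random

random.seed(0)
p_seabass = 0.7
fish = ['seabass' if random.random() < p_seabass else 'salmon'
        for _ in range(100_000)]

# Strategy A: randomize in proportion to the prior.
guess_random = ['seabass' if random.random() < p_seabass else 'salmon'
                for _ in fish]
# Strategy B: always call the more probable class.
guess_always = ['seabass'] * len(fish)

def accuracy(guesses):
    return sum(g == f for g, f in zip(guesses, fish)) / len(fish)

print(accuracy(guess_random))   # ~0.58  (0.7*0.7 + 0.3*0.3)
print(accuracy(guess_always))   # ~0.70  (max of the two priors)
```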

Case III: You know the likelihood function and the length, but not the prior probability

This case is somewhat hypothetical: we can see the fish and hence find its length, say x inches, and we have P(x|seabass) \; and \; P(x|salmon), but we don't know the prior probability. The decision rule here is: for each fish, find the corresponding likelihood values. If the likelihood of seabass is higher than that of salmon, say the fish is a seabass, and salmon otherwise.

Note that, in contrast to the previous cases, we are making a decision based on an "observation". Unless you are really unlucky and the prior probabilities are really skewed, you can do well with this decision rule.

Mathematically, the decision rule is: decide seabass if P(x|seabass) > P(x|salmon), else decide salmon.

Case IV: You know the length, the prior probability and the likelihood function

This is the scenario we are mostly in. We know the length of the fish (say 5 inches). We know the prior probabilities (say 60% salmon and 40% seabass). We also know the likelihoods (say P(5 inches|seabass) is 60% and P(5 inches|salmon) is 10%).

Now we can apply our favorite Bayes rule to get the posterior probability: if the length of the fish is 5 inches, what is the probability that it is a seabass? A salmon? Once you can calculate the posterior, the decision rule becomes simple: if the posterior probability of seabass is higher, then say the fish is a seabass, else say it is a salmon.

Mathematically, the decision rule is: decide seabass if P(seabass|x) > P(salmon|x), else decide salmon. This rule is very important and is called the Bayes decision rule.

For this decision rule, the error is min(P(seabass|x), P(salmon|x)).

We can expand the Bayes decision rule using Bayes theorem:

Decide seabass if p(x|seabass)P(seabass) > p(x|salmon)P(salmon), else decide salmon.

There are two special cases.
1. If the likelihoods are equal, then our decision depends on the prior probabilities.
2. If the prior probabilities are equal, then our decision depends on the likelihoods.
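Putting the Case IV numbers into code, here is a minimal sketch of the Bayes decision rule (the numbers are the ones used above; the function name is mine):

```python
def bayes_decide(likelihood, prior):
    """Return the class with the largest likelihood * prior product,
    along with the unnormalized scores."""
    scores = {c: likelihood[c] * prior[c] for c in prior}
    return max(scores, key=scores.get), scores

# Case IV numbers: 40% seabass / 60% salmon prior,
# P(5 in | seabass) = 0.6, P(5 in | salmon) = 0.1.
prior = {'seabass': 0.4, 'salmon': 0.6}
likelihood_5in = {'seabass': 0.6, 'salmon': 0.1}

decision, scores = bayes_decide(likelihood_5in, prior)
print(decision, scores)   # 'seabass', {'seabass': 0.24, 'salmon': 0.06}
# Normalizing the scores by their sum gives the posteriors: 0.8 vs 0.2.
```

Note that the comparison does not need the evidence term P(x), since it is the same for both classes; that is exactly why the expanded rule above compares likelihood-times-prior products directly.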

We have only scratched the surface of decision theory. In particular, we did not focus much on bounding the error today. We also did not discuss the cases where there are multiple classes or features. Hopefully, I will discuss them in a future post.

Reference

Pattern Classification by Duda, Hart and Stork, Chapter 2.
