
Archive for the ‘machine learning’ Category

The title of this post is very similar to my old Berkeley ML lectures post. Machine Learning is one of the coolest fields in CS and I really want to master it. But so far I have had trouble finding the basic intuitions behind the subject. When I read the books, it somehow feels that I am missing some overarching intuition behind the field, and that that is the root cause of all my troubles. I "think" I now know what it is: my math background is not as strong as I want it to be. I have been putting in a lot of effort over the last few months to rectify that, poring over books on linear algebra, probability and statistics. I am still lacking a bit in calculus, but I am sure I will make that up soon.

I took two great courses this semester and they played a role in bridging the gap. They are Computational Methods and Bioinformatics. Both were taught by great teachers and covered a lot of fun topics. Computational Methods was an exploratory course discussing much of the math needed for CS – things like root finding, linear and nonlinear equation solving, least squares, eigenvalues, SVD and Monte Carlo methods. As you can see, it is a great sampling of techniques widely used in machine learning and related fields (e.g. robotics, vision, etc.). The textbooks and other reference materials did a good job of pointing out applications in machine learning, so I think I now appreciate many of the derivations used in machine learning much better. Similarly, my Bioinformatics course showed the various ways in which machine learning techniques are adapted to fit that domain. Together, they did me a world of good.

Since I am now more confident, I started searching for machine learning lectures to restart my quest. There are lots of exceptional course websites and notes, but I preferred courses with video/audio lectures. The most popular seems to be Stanford’s CS 229. It is great and has an excellent collection of notes. I had listened to a few of the lectures before bailing out; hopefully this time I will be able to complete them. Since I already had (and had partially listened to) the Berkeley lectures, I was searching for similar courses at MIT to complete the triumvirate. I was surprised that OCW did not have video lectures for data mining / machine learning. It is a pity, as I feel these two fields are becoming very important in CS now. As I was systematically going through old courses, I hit a jackpot.

I found MIT’s 6.867 Machine Learning course taught by Professor Leslie Kaelbling in fall 2005. It had video lectures, but all the links were broken. I spent quite some time hunting for good sources and, with some luck, found them! One of the things I liked about this course is that it was taught in 2005 – slightly old but not too old – and hence the course topics look relatively approachable by my standards. Additionally, the initial lectures cover the necessary background, which is an additional plus.

The archived course website is at 6.867 Machine Learning Fall 2005. The video links are at SMA’s course website. Please note that the webpage explicitly states that the material is copyrighted 😦 Hopefully they won’t mind its use for academic purposes. Surprisingly, the videos are again in rm format, which I hate – one reason being that it occasionally crashes my Ubuntu-based players. The download steps are a bit different from those for the Berkeley course. My instructions below are for the ISO section: I downloaded a video from each section, could not find any difference between them, and the ISO files were smaller than the prog files. Also note that the URLs are listed in reverse chronological order, so the first link on the website corresponds to the last lecture.

Download steps :

1. Find the URL of the file. For example, the URL of the first lecture in the ISO recording is http://smasvr.nus.edu.sg:8080/ramgen/sma/2005-2006/sma5514-fall/sma-5514-lec-mit-54100-07sep2005-0930-iso.rm .

2. Extract the rm file name from the URL. Here it is sma-5514-lec-mit-54100-07sep2005-0930-iso.rm .

3. Construct the rtsp URL. As of now, the rtsp URL looks like this – rtsp://137.132.165.11:554/sma/2005-2006/sma5514-fall/lectFile . So for our case it is rtsp://137.132.165.11:554/sma/2005-2006/sma5514-fall/sma-5514-lec-mit-54100-07sep2005-0930-iso.rm .

4. Download the file using mplayer. The invocation that worked for me is: mplayer -noframedrop -dumpfile Lect1.rm -dumpstream rtsp://137.132.165.11:554/sma/2005-2006/sma5514-fall/sma-5514-lec-mit-54100-07sep2005-0930-iso.rm . Remember to replace the dumpfile and dumpstream arguments for each download.

5. Convert to other formats using mencoder. For some reason, all the formats I tried resulted in very large files. If anyone has found a way to convert them effectively, please let me know.

6. Enjoy the videos!
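For convenience, steps 1–4 can be sketched as a small Python helper that builds the rtsp URL and the mplayer command line. The host and path below are the ones quoted above and may well have changed since:

```python
# Build the rtsp download URL from the http ramgen URL (steps 1-3 above).
# The host/path are the ones quoted in this post and may have changed.
def rtsp_url(http_url):
    # Step 2: the .rm file name is the last path component.
    filename = http_url.rstrip("/").split("/")[-1]
    # Step 3: splice it into the rtsp template.
    return "rtsp://137.132.165.11:554/sma/2005-2006/sma5514-fall/" + filename

def mplayer_command(http_url, outfile):
    # Step 4: the mplayer invocation that worked for me.
    return ("mplayer -noframedrop -dumpfile %s -dumpstream %s"
            % (outfile, rtsp_url(http_url)))

url = ("http://smasvr.nus.edu.sg:8080/ramgen/sma/2005-2006/sma5514-fall/"
       "sma-5514-lec-mit-54100-07sep2005-0930-iso.rm")
print(mplayer_command(url, "Lect1.rm"))
```

Run the printed command in a shell for each lecture, changing the dumpfile name as you go.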

Other Notes

1. The files download slowly, because mplayer downloads them as if you were actually watching them. I tried to play with the caching by setting a large value for the bandwidth parameter, but that caused premature termination. If anyone has got it working, please comment.

2. I was also not able to convert the rm files to other formats (avi or mp4) correctly. The converted files were 2-3 times the size of the original rm file. If anyone has found the proper incantation, please let me know.

I plan to listen to them in December and January. If I learn any cool ideas, I will blog about them. Have fun with the video lectures!



It’s been quite some time since I wrote a data mining post. Today, I intend to post on Mean Shift – a really cool but not very well-known algorithm. The basic idea is quite simple, but the results are amazing. It was invented back in 1975 but was not widely used until two papers applied the algorithm to computer vision.

I learned this algorithm in my Advanced Data Mining course, for which I wrote the lecture notes on it. So here I am trying to convert my lecture notes into a post. I have tried to simplify it, but this post is more involved than my other posts.

It is quite sad that no good post exists on such a good algorithm. While writing my lecture notes, I struggled a lot to find good resources 🙂 . The 3 "classic" papers on Mean Shift are quite hard to understand. Most of the other resources come from computer vision courses, where Mean Shift is taught lightly as yet another technique for vision tasks (like segmentation) and covered only via the main intuition and the formulas.

As a disclaimer, there might be errors in my exposition – so if you find anything wrong, please let me know and I will fix it. You can always check the references for more details. I have not included any graphics, but you can check the ppt given in the references for an animation of Mean Shift.

Introduction

Mean Shift is a powerful and versatile non-parametric iterative algorithm that can be used for many purposes, such as finding modes and clustering. Mean Shift was introduced by Fukunaga and Hostetler [1] and has been extended to be applicable in other fields like computer vision. This post discusses Mean Shift, proves its convergence, and briefly discusses its important applications.

Intuitive Idea of Mean Shift

This section provides an intuitive idea of Mean Shift; the later sections will expand on it. Mean Shift treats the feature space as an empirical probability density function. If the input is a set of points, Mean Shift considers them as samples from an underlying probability density function. Dense regions (clusters) in the feature space then correspond to modes (local maxima) of that density function. We can also identify the cluster associated with a given mode using Mean Shift.

Mean Shift associates each data point with a nearby peak of the dataset’s probability density function. For each data point, it defines a window around the point and computes the mean of the data within the window. It then shifts the center of the window to that mean and repeats until convergence. After each iteration, the window moves to a denser region of the dataset.

At a high level, we can specify Mean Shift as follows:
1. Fix a window around each data point.
2. Compute the mean of data within the window.
3. Shift the window to the mean and repeat till convergence.
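As a rough sketch (not my Matlab code from the update below), the loop for a single point with a flat (rectangular) window of radius h might look like this; the helper name is my own:

```python
import numpy as np

def mean_shift_point(x, points, h, max_iter=100, tol=1e-6):
    """Shift a flat window of radius h from x until it settles on a mode."""
    y = x.astype(float)
    for _ in range(max_iter):
        # Steps 1-2: mean of all points inside the current window.
        inside = points[np.linalg.norm(points - y, axis=1) <= h]
        if len(inside) == 0:
            break
        new_y = inside.mean(axis=0)
        # Step 3: shift the window to the mean; stop when it barely moves.
        if np.linalg.norm(new_y - y) < tol:
            return new_y
        y = new_y
    return y
```

Starting the window anywhere near a dense region walks it to that region’s local mean.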

 

Preliminaries

Kernels :

A kernel is a function \phi that satisfies the following requirements:

1. \int_{R^{d}}\phi(x)\,dx=1

2. \phi(x)\geq0

Some examples of kernels include :

1. Rectangular \phi(x)=\begin{cases} \frac{1}{b-a} & a\leq x\leq b\\ 0 & else\end{cases}

2. Gaussian \phi(x)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{x^{2}}{2\sigma^{2}}}

3. Epanechnikov \phi(x)=\begin{cases} \frac{3}{4}(1-x^{2}) & if\;|x|\leq1\\ 0 & else\end{cases}

Kernel Density Estimation

Kernel density estimation is a non-parametric way to estimate the density function of a random variable. It is usually called the Parzen window technique. Given a kernel K and a bandwidth parameter h, the kernel density estimator for a set of d-dimensional points x_{1},\ldots,x_{n} is

{\displaystyle \hat{f}(x)=\frac{1}{nh^{d}}\sum_{i=1}^{n}K\left(\frac{x-x_{i}}{h}\right)}
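As a quick illustration, the estimator can be evaluated directly at any query point. The sketch below uses the Gaussian kernel, with the (2\pi)^{d/2} normalization added so the kernel integrates to 1:

```python
import numpy as np

def kde(x, points, h):
    """Parzen-window density estimate at x (Gaussian kernel, bandwidth h)."""
    n, d = points.shape
    # Scaled distances u_i = ||(x - x_i) / h||.
    u = np.linalg.norm((x - points) / h, axis=1)
    # Normalized Gaussian kernel values K((x - x_i) / h).
    K = np.exp(-0.5 * u ** 2) / (2 * np.pi) ** (d / 2)
    # f_hat(x) = (1 / (n h^d)) * sum_i K((x - x_i) / h)
    return K.sum() / (n * h ** d)
```

The estimate is high near where the sample points bunch up and decays away from them.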

 

Gradient Ascent Nature of Mean Shift

Mean shift can be considered to be based on gradient ascent on the density contour. The generic formula for gradient ascent is

x_{1}=x_{0}+\eta f'(x_{0})

Applying it to kernel density estimator,

{\displaystyle \hat{f}(x)=\frac{1}{nh^{d}}\sum_{i=1}^{n}K\left(\frac{x-x_{i}}{h}\right)}

\nabla{\displaystyle \hat{f}(x)=\frac{1}{nh^{d+1}}\sum_{i=1}^{n}K'\left(\frac{x-x_{i}}{h}\right)\left(\overrightarrow{x}-\overrightarrow{x_{i}}\right)}

Setting the gradient to 0, we get

{\displaystyle \sum_{i=1}^{n}K'\left(\frac{x-x_{i}}{h}\right)\overrightarrow{x}=\sum_{i=1}^{n}K'\left(\frac{x-x_{i}}{h}\right)\overrightarrow{x_{i}}}

Finally , we get

{\displaystyle \overrightarrow{x}=\frac{\sum_{i=1}^{n}K'\left(\frac{x-x_{i}}{h}\right)\overrightarrow{x_{i}}}{\sum_{i=1}^{n}K'\left(\frac{x-x_{i}}{h}\right)}}

 

Mean Shift

As explained above, Mean Shift treats the points in feature space as samples from a probability density function. Dense regions in feature space correspond to local maxima, or modes. So for each data point, we perform gradient ascent on the locally estimated density until convergence. The stationary points obtained via gradient ascent represent the modes of the density function, and all points associated with the same stationary point belong to the same cluster.

Defining g(x)=-K'(x), we have

{\displaystyle m(x)=\frac{\sum_{i=1}^{n}g\left(\frac{x-x_{i}}{h}\right)x_{i}}{\sum_{i=1}^{n}g\left(\frac{x-x_{i}}{h}\right)}-x}

The quantity m(x) is called the mean shift. The mean shift procedure can be summarized as follows: for each point x_{i},

1. Compute mean shift vector m(x_{i}^{t})

2. Move the density estimation window by m(x_{i}^{t})

3. Repeat till convergence

 

Using a Gaussian kernel as an example,

1. y_{i}^{0}=x_{i}
2. {\displaystyle y_{i}^{t+1}=\frac{\sum_{j=1}^{n}x_{j}e^{\frac{-|y_{i}^{t}-x_{j}|^{2}}{h^{2}}}}{\sum_{j=1}^{n}e^{\frac{-|y_{i}^{t}-x_{j}|^{2}}{h^{2}}}}}
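A direct sketch of this update (note the inner sums run over the data points x_j, while i indexes the point being shifted):

```python
import numpy as np

def gaussian_mean_shift(points, h, max_iter=50, tol=1e-6):
    """Run the Gaussian-kernel mean shift update on every point."""
    y = points.astype(float).copy()
    for _ in range(max_iter):
        moved = False
        for i in range(len(y)):
            # Weights e^{-|y_i^t - x_j|^2 / h^2} over all data points x_j.
            w = np.exp(-np.sum((y[i] - points) ** 2, axis=1) / h ** 2)
            # Weighted mean of the x_j gives y_i^{t+1}.
            new_y = (w[:, None] * points).sum(axis=0) / w.sum()
            if np.linalg.norm(new_y - y[i]) > tol:
                moved = True
            y[i] = new_y
        if not moved:
            break
    return y  # each row has been shifted toward a mode
```

Note that the windows move but the data points themselves stay fixed; the variant that also updates the data points is discussed below under improvements.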

 

Proof Of Convergence

Using the kernel profile k (where K(x)=k(||x||^{2})), the iteration can be written as

{\displaystyle y^{t+1}=\frac{\sum_{i=1}^{n}x_{i}k(||\frac{y^{t}-x_{i}}{h}||^{2})}{\sum_{i=1}^{n}k(||\frac{y^{t}-x_{i}}{h}||^{2})}}

To prove convergence, we have to show that f(y^{t+1})\geq f(y^{t}).

f(y^{t+1})-f(y^{t})={\displaystyle \sum_{i=1}^{n}}k(||\frac{y^{t+1}-x_{i}}{h}||^{2})-{\displaystyle \sum_{i=1}^{n}}k(||\frac{y^{t}-x_{i}}{h}||^{2})

But since the kernel profile k is a convex function, we have

k(y^{t+1})-k(y^{t})\geq k'(y^{t})(y^{t+1}-y^{t})

Using it ,

f(y^{t+1})-f(y^{t})\geq{\displaystyle \sum_{i=1}^{n}}k'(||\frac{y^{t}-x_{i}}{h}||^{2})\left(||\frac{y^{t+1}-x_{i}}{h}||^{2}-||\frac{y^{t}-x_{i}}{h}||^{2}\right)

Expanding the squared norms,

=\frac{1}{h^{2}}{\displaystyle \sum_{i=1}^{n}}k'(||\frac{y^{t}-x_{i}}{h}||^{2})\left(||y^{t+1}||^{2}-2(y^{t+1})^{T}x_{i}+||x_{i}||^{2}-(||y^{t}||^{2}-2(y^{t})^{T}x_{i}+||x_{i}||^{2})\right)

=\frac{1}{h^{2}}{\displaystyle \sum_{i=1}^{n}}k'(||\frac{y^{t}-x_{i}}{h}||^{2})\left(||y^{t+1}||^{2}-||y^{t}||^{2}-2(y^{t+1}-y^{t})^{T}x_{i}\right)

Since y^{t+1} is the k'-weighted mean of the x_{i} (the fixed-point update above), we can replace x_{i} by y^{t+1} in the inner product:

=\frac{1}{h^{2}}{\displaystyle \sum_{i=1}^{n}}k'(||\frac{y^{t}-x_{i}}{h}||^{2})\left(||y^{t+1}||^{2}-||y^{t}||^{2}-2(y^{t+1}-y^{t})^{T}y^{t+1}\right)

=\frac{1}{h^{2}}{\displaystyle \sum_{i=1}^{n}}k'(||\frac{y^{t}-x_{i}}{h}||^{2})\left(||y^{t+1}||^{2}-||y^{t}||^{2}-2||y^{t+1}||^{2}+2(y^{t})^{T}y^{t+1}\right)

=\frac{1}{h^{2}}{\displaystyle \sum_{i=1}^{n}}k'(||\frac{y^{t}-x_{i}}{h}||^{2})\left(-||y^{t+1}||^{2}-||y^{t}||^{2}+2(y^{t})^{T}y^{t+1}\right)

=\frac{1}{h^{2}}{\displaystyle \sum_{i=1}^{n}}(-1)\,k'(||\frac{y^{t}-x_{i}}{h}||^{2})\,||y^{t+1}-y^{t}||^{2}

\geq0

since -k'\geq0 for a non-increasing profile k.

Thus we have proven that the sequence \{f(y^{t})\}_{t=1,2,\ldots} is monotonically increasing and hence convergent. (The second part of the proof in [2], which tries to prove that the sequence \{y^{t}\}_{t=1,2,\ldots} is convergent, is wrong.)

Improvements to Classic Mean Shift Algorithm

The classic mean shift algorithm is time-intensive: its time complexity is O(Tn^{2}), where T is the number of iterations and n is the number of data points. Many improvements have been made to make mean shift converge faster.

One of them is adaptive mean shift, where the bandwidth parameter is allowed to vary for each data point; here, the h parameter is calculated using the kNN algorithm. If x_{i,k} is the k-nearest neighbor of x_{i}, then the bandwidth is calculated as

h_{i}=||x_{i}-x_{i,k}||

Here we use the L_{1} or L_{2} norm to find the bandwidth.
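A brute-force sketch of this bandwidth rule using the L2 norm (the O(n^2) distance matrix mirrors the cost of the classic algorithm itself):

```python
import numpy as np

def knn_bandwidths(points, k):
    """h_i = distance from x_i to its k-th nearest neighbor (L2 norm)."""
    # Full pairwise distance matrix, shape (n, n).
    D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    # Column 0 of each sorted row is the point itself (distance 0),
    # so the k-th nearest neighbor sits at index k.
    return np.sort(D, axis=1)[:, k]
```

Each point in a dense region gets a small h_i, while isolated points get a large one, which is exactly the adaptive behavior we want.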

 

An alternate way to speed up convergence is to alter the data points during the course of mean shift. Again using a Gaussian kernel as an example,

1. y_{i}^{0}=x_{i}
2. {\displaystyle y_{i}^{t+1}=\frac{\sum_{j=1}^{n}x_{j}e^{\frac{-|y_{i}^{t}-x_{j}|^{2}}{h^{2}}}}{\sum_{j=1}^{n}e^{\frac{-|y_{i}^{t}-x_{j}|^{2}}{h^{2}}}}}
3. x_{i}=y_{i}^{t+1}

Other Issues

1. Even though mean shift is a non-parametric algorithm, it does require the bandwidth parameter h to be tuned. We can use kNN to find the bandwidth. The choice of bandwidth influences both the convergence rate and the number of clusters.
2. The choice of the bandwidth parameter h is critical. A large h might result in incorrect clustering and merge distinct clusters. A very small h might result in too many clusters.
3. When using kNN to determine h, the choice of k influences the value of h. For good results, k has to increase as the dimension of the data increases.
4. Mean shift might not work well in higher dimensions, where the number of local maxima is large and the procedure may quickly converge to a nearby local optimum.
5. The Epanechnikov kernel has a clear cutoff and is optimal in the bias-variance tradeoff.

Applications of Mean Shift

Mean shift is a versatile algorithm that has found a lot of practical applications – especially in computer vision, where the dimensionality is usually low (e.g. the color profile of an image). Hence mean shift is used for many common vision tasks.

Clustering

The most important application is clustering. The fact that mean shift makes no assumptions about the number of clusters or their shapes makes it ideal for handling clusters of arbitrary shape and number.

Although mean shift is primarily a mode-finding algorithm, we can use it for clustering: the stationary points obtained via gradient ascent represent the modes of the density function, and all points associated with the same stationary point belong to the same cluster.

An alternate view uses the concept of a basin of attraction. Informally, the set of points that converge to the same mode forms the basin of attraction for that mode. All points in the same basin of attraction belong to the same cluster, and the number of modes gives the number of clusters.
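One simple way to realize this, given the shifted points produced by a mean shift run: merge converged points whose final positions lie within a small tolerance of each other and give each basin one label. The helper below is illustrative, not taken from the referenced papers:

```python
import numpy as np

def label_by_mode(shifted, tol=1e-1):
    """Cluster labels from converged mean-shift iterates: points whose
    iterates landed on the same mode share a basin of attraction."""
    modes, labels = [], []
    for y in shifted:
        for j, m in enumerate(modes):
            # Same basin if the converged positions (nearly) coincide.
            if np.linalg.norm(y - m) < tol:
                labels.append(j)
                break
        else:
            # A new mode: start a new cluster.
            modes.append(y)
            labels.append(len(modes) - 1)
    return np.array(labels), np.array(modes)
```

The number of distinct modes found is the number of clusters, with no k specified in advance.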

Computer Vision Applications

Mean Shift is used in multiple tasks in Computer Vision like segmentation, tracking, discontinuity preserving smoothing etc. For more details see [2],[8].

Comparison with K-Means

Note: I have discussed K-Means at K-Means Clustering Algorithm; you can use that post to brush up if you want.

K-Means is one of the most popular clustering algorithms. It is simple, fast and efficient. We can compare mean shift with K-Means on a number of criteria.

One of the most important differences is that K-Means makes two broad assumptions: the number of clusters is already known, and the clusters are shaped spherically (or elliptically). Mean shift, being a non-parametric algorithm, assumes nothing about the number of clusters – the number of modes gives the number of clusters. Also, since it is based on density estimation, it can handle arbitrarily shaped clusters.

K-Means is very sensitive to initialization: a bad initialization can delay convergence or sometimes even result in wrong clusters. Mean shift is fairly robust to initialization; typically it is run from every point, or points are sometimes sampled uniformly from the feature space [2]. Similarly, K-Means is sensitive to outliers, while mean shift is not very sensitive to them.

K-Means is fast, with time complexity O(knT), where k is the number of clusters, n the number of points and T the number of iterations. Classic mean shift is computationally expensive, with time complexity O(Tn^{2}).

Mean shift is sensitive to the selection of the bandwidth h: a small h can slow down convergence, while a large h can speed it up but might merge two modes. Still, there are many techniques for determining h reasonably well.

Update [30 Apr 2010]: I did not expect this reasonably technical post to become so popular, yet it did! Some readers asked for sample source code. I wrote one in Matlab which randomly generates points from several Gaussian distributions and clusters them using mean shift. It implements both the basic algorithm and the adaptive algorithm. You can download my Mean Shift code here. Comments are, as always, welcome!

References

1. Fukunaga and Hostetler, "The Estimation of the Gradient of a Density Function, with Applications in Pattern Recognition", IEEE Transactions on Information Theory, vol 21, pp 32-40, 1975.
2. Dorin Comaniciu and Peter Meer, "Mean Shift: A Robust Approach Toward Feature Space Analysis", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol 24, no 5, May 2002.
3. Yizong Cheng, "Mean Shift, Mode Seeking, and Clustering", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol 17, no 8, Aug 1995.
4. Mean Shift Clustering by Konstantinos G. Derpanis
5. Chris Ding Lectures CSE 6339 Spring 2010.
6. Dijun Luo’s presentation slides.
7. cs.nyu.edu/~fergus/teaching/vision/12_segmentation.ppt
8. Dorin Comaniciu, Visvanathan Ramesh and Peter Meer, "Kernel-Based Object Tracking", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol 25, no 5, May 2003.
9. Dorin Comaniciu, Visvanathan Ramesh and Peter Meer, "The Variable Bandwidth Mean Shift and Data-Driven Scale Selection", ICCV 2001.



Machine Learning is one of my areas of interest, and I am trying to learn as much as possible. It is a very demanding but also fun field.

Recently, I came across a huge list of video lectures on machine learning on the blog Onionesque Reality. More precisely, the post titled Demystifying Support Vector Machines for Beginners had a section “Webcasts/ Video Lectures on Learning Theory, Support Vector Machines and related ideas” with links to lots of lectures. I sampled the lectures given there, and I have to admit most of them are really high-quality stuff.

As a first step, I decided to finish Berkeley’s Machine Learning workshop. The website for the video lectures is here. It contains 11 lectures over two days. I find it incredible that they were able to finish this in two days – if I had been there, my brain would have exploded with all this information 🙂

The courses cover most of the important areas of machine learning. Best of all, two of the lectures – on Graphical Models (my area of research) and Non-Parametric Bayesian Models – were given by Professor Michael Jordan. He is an excellent teacher, and it is no wonder that many of his students are leaders in machine learning.

Overall, the courses were good – some excellent, some merely good.

The lectures were encoded in RealPlayer format, which surprised me. It is hard but not impossible to play rm files on Linux: even though totem, mplayer and vlc can all play them, navigation is awkward and clumsy and prone to frequent crashes. I am thinking of converting them to mp4 format but have not found the exact way to do it.

The lectures are streamed via the rtsp protocol. If you are using Linux and want to download them, there is an easy way. Click on one of the lectures and find its rtsp URL (most of the time by right-clicking and choosing Copy URL). You can then use mplayer to dump the file. The syntax is

mplayer -noframedrop -dumpfile Manifold_Learning_and_Visualization.rm -dumpstream rtsp://169.229.131.16:554/events/eecs/eecs_20070824_2.rm

The dumpfile argument names the output file into which the lecture will be downloaded; dumpstream takes the rtsp URL as input.

Have fun with the lectures !
