
Fixing Matlab errors in Ubuntu 12.10

I recently installed Ubuntu 12.10 on my system and found that the upgrade broke quite a few things. A reasonable number of the fixes were simple, but some of them were very annoying. In this post I want to discuss two specific errors I faced and how to fix them.

I use the student version of Matlab (32 bit) on a 64 bit machine. As part of the upgrade to Ubuntu 12.10, the installer removed multiple 32 bit files, which caused the problems. When I tried to run Matlab, I got the following errors:

~/matlab/bin/matlab: 1: ~/matlab/bin/util/oscheck.sh: /lib/libc.so.6: not found

~/matlab/bin/glnx86/MATLAB: error while loading shared libraries: libXpm.so.4: cannot open shared object file: No such file or directory

The first error is mostly harmless; I will describe how to fix it later in the post. Fixing the second error required installing the i386 (32 bit) versions of a few packages. Of course, fixing one exposed the next error and so on. In the interest of time, I will put all the packages in a single command.

sudo apt-get install libxpm4:i386 libxmu6:i386 libxp6:i386

Running the above command worked and allowed Matlab to run. However, I then faced another issue: when I tried to save a plot, it failed again with the following errors (fixing the first one exposed the second):

MATLAB:dispatcher:loadLibrary Can't load '~/matlab/bin/glnx86/libmwdastudio.so': libXfixes.so.3: cannot open shared object file: No such file or directory.
??? Error while evaluating uipushtool ClickedCallback

MATLAB:dispatcher:loadLibrary Can't load '~/matlab/bin/glnx86/libmwdastudio.so': libGLU.so.1: cannot open shared object file: No such file or directory.
??? Error while evaluating uipushtool ClickedCallback

 

To fix this, run the following command:

sudo apt-get install libxfixes3:i386 libglu1-mesa:i386

Finally, to fix the innocuous error:

~matlab/bin/matlab: 1: /home/neo/gLingua/matlab/bin/util/oscheck.sh: /lib/libc.so.6: not found

do the following:

sudo ln -s /lib/x86_64-linux-gnu/libc-2.15.so /lib/libc.so.6

Of course, make sure the libc-2.xx.so version is the correct one before running this command.

 

Hope this post helped!



I recently got the following error when invoking Matlab:

Unable to initialize com.mathworks.mlwidgets.html.HTMLPrefs
Fatal Error on startup: Failure loading desktop class

This was quite annoying and I did not find any useful webpage hits. I tried a lot of steps, and some subset of those steps solved the problem. I am listing the steps here in the hope that they will be useful to someone.

One of MathWorks' webpages suggested adding Matlab to the system path, which I did with "export PATH=$PATH:/opt/matlab/bin". I also added the line to my .bashrc so that it gets picked up in the future. I then got the next error:

Error: Cannot locate Java Runtime Environment (JRE).
The directory /opt/matlab/sys/java/jre/glnxa64/jre does not exist.

This was a bit surprising, as I invoke Matlab with the glnx86 parameter, which runs it as 32 bit. (For installing and invoking Matlab in 32 bit, check the instructions here.) This was easily handled by creating an additional link at /opt/matlab/sys/java/jre so that glnxa64 points to the glnx86 directory. But that still did not fix the issue. At this point my guess was that the JRE itself was somehow broken, and hence I tried to fix that. I had multiple JDK/JRE installations (Sun, OpenJDK, Harmony etc.), so I tried to test it with Sun's JDK.

I ran the command "sudo update-java-alternatives -s java-6-sun" and got the following errors:

update-alternatives: error: no alternatives for xulrunner-1.9-javaplugin.so.
update-alternatives: error: alternative /usr/lib/jvm/java-6-sun/jre/lib/amd64/libnpjp2.so for mozilla-javaplugin.so not registered, not setting.
update-alternatives: error: no alternatives for xulrunner-1.9-javaplugin.so.

Installing the following packages kinda helped: default-jdk, xulrunner-dev, sun-java6-plugin. After installing them, run update-java-alternatives again. Open a new terminal, run matlab, and this time it worked like a charm.

I am still in the dark about the root cause, but these steps solved the problem for me. Hopefully they solve it for you too!


It's been quite some time since I wrote a Data Mining post. Today, I intend to post on Mean Shift – a really cool but not very well known algorithm. The basic idea is quite simple but the results are amazing. It was invented back in 1975 but was not widely used until two papers applied the algorithm to Computer Vision.

I learned this algorithm in my Advanced Data Mining course, where I wrote the lecture notes on it. So here I am trying to convert my lecture notes into a post. I have tried to simplify it, but this post is more involved than my other posts.

It is quite sad that there exists no good post on such a good algorithm. While writing my lecture notes, I struggled a lot to find good resources 🙂 . The three "classic" papers on Mean Shift are quite hard to understand. Most of the other resources are from Computer Vision courses, where Mean Shift is taught lightly as yet another technique for vision tasks (like segmentation) and covered with only the main intuition and the formulas.

As a disclaimer, there might be errors in my exposition – so if you find anything wrong, please let me know and I will fix it. You can always check the references for more details. I have not included any graphics, but you can check the ppt given in the references for an animation of Mean Shift.

Introduction

Mean Shift is a powerful and versatile non parametric iterative algorithm that can be used for many purposes, like finding modes and clustering. Mean Shift was introduced by Fukunaga and Hostetler [1] and has been extended to be applicable in other fields like Computer Vision. This post will provide a discussion of Mean Shift, prove its convergence, and briefly discuss its important applications.

Intuitive Idea of Mean Shift

This section provides an intuitive idea of Mean Shift; the later sections will expand on it. Mean Shift treats the feature space as an empirical probability density function. If the input is a set of points, then Mean Shift considers them as sampled from the underlying probability density function. If dense regions (or clusters) are present in the feature space, then they correspond to the modes (or local maxima) of the probability density function. We can also identify the cluster associated with a given mode using Mean Shift.

Mean Shift associates each data point with a nearby peak of the dataset's probability density function. For each data point, Mean Shift defines a window around it and computes the mean of the data within the window. Then it shifts the center of the window to that mean and repeats until convergence. After each iteration, the window shifts to a denser region of the dataset.

At a high level, we can specify Mean Shift as follows (a short Matlab sketch follows the steps):
1. Fix a window around each data point.
2. Compute the mean of data within the window.
3. Shift the window to the mean and repeat till convergence.
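To make the three steps concrete, here is a minimal Matlab sketch of the procedure for a single starting point, assuming a flat window of fixed radius h; the function and variable names are hypothetical, and this is my illustration rather than a reference implementation.

% X is an n-by-d matrix of data points, x a 1-by-d starting point,
% and h a window radius; tolerance and iteration cap are arbitrary.
function y = mean_shift_point(X, x, h)
    y = x;
    for iter = 1:100                                    % repeat till convergence
        d2 = sum((X - repmat(y, size(X,1), 1)).^2, 2);  % squared distances to the window center
        inWin = d2 <= h^2;                              % 1. fix a window around the point
        m = mean(X(inWin,:), 1);                        % 2. compute the mean of data in the window
        if norm(m - y) < 1e-6, break; end               % the window stopped moving
        y = m;                                          % 3. shift the window to the mean
    end
end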

 

Preliminaries

Kernels:

A kernel is a function that satisfies the following requirements:

1. \int_{R^{d}}\phi(x)\,dx=1

2. \phi(x)\geq0

Some examples of kernels include (Matlab versions follow the list):

1. Rectangular \phi(x)=\begin{cases} 1 & a\leq x\leq b\\ 0 & else\end{cases}

2. Gaussian \phi(x)=e^{-\frac{x^{2}}{2\sigma^{2}}}

3. Epanechnikov \phi(x)=\begin{cases} \frac{3}{4}(1-x^{2}) & if\;|x|\leq1\\ 0 & else\end{cases}
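For concreteness, the three kernels can be written as Matlab anonymous functions. These are direct, unnormalized transcriptions of the formulas above; the parameter names are mine.

% One-dimensional kernels, matching the formulas as written.
phi_rect  = @(x, a, b) double((x >= a) & (x <= b));     % rectangular on [a, b]
phi_gauss = @(x, sigma) exp(-x.^2 ./ (2*sigma^2));      % Gaussian
phi_epan  = @(x) (3/4) .* (1 - x.^2) .* (abs(x) <= 1);  % Epanechnikov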

Kernel Density Estimation

Kernel density estimation is a non parametric way to estimate the density function of a random variable. This is usually called the Parzen window technique. Given a kernel K and a bandwidth parameter h, the kernel density estimator for a given set of d-dimensional points is

{\displaystyle \hat{f}(x)=\frac{1}{nh^{d}}\sum_{i=1}^{n}K\left(\frac{x-x_{i}}{h}\right)}
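As an illustration, here is a short Matlab sketch of this estimator, assuming the Gaussian kernel from the previous section with \sigma=1; the function and variable names are hypothetical.

% Kernel density estimate at each query point in xq (m-by-d),
% given data X (n-by-d) and bandwidth h.
function f = kde_estimate(X, xq, h)
    [n, d] = size(X);
    m = size(xq, 1);
    f = zeros(m, 1);
    for j = 1:m
        u = (repmat(xq(j,:), n, 1) - X) ./ h;   % (x - x_i)/h for every data point
        f(j) = sum(exp(-0.5 * sum(u.^2, 2)));   % sum of Gaussian kernel values
    end
    f = f ./ (n * h^d);                         % the 1/(n h^d) factor in front
end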

 

Gradient Ascent Nature of Mean Shift

Mean Shift can be considered to be based on gradient ascent on the density contour. The generic formula for gradient ascent is

x_{1}=x_{0}+\eta f'(x_{0})

Applying it to the kernel density estimator,

{\displaystyle \hat{f}(x)=\frac{1}{nh^{d}}\sum_{i=1}^{n}K\left(\frac{x-x_{i}}{h}\right)}

{\displaystyle \nabla\hat{f}(x)=\frac{1}{nh^{d}}\sum_{i=1}^{n}K'\left(\frac{x-x_{i}}{h}\right)}

Setting the gradient to 0, and noting that by the chain rule each K' term carries a factor proportional to (x-x_{i}), we get

{\displaystyle \sum_{i=1}^{n}K'\left(\frac{x-x_{i}}{h}\right)\overrightarrow{x}=\sum_{i=1}^{n}K'\left(\frac{x-x_{i}}{h}\right)\overrightarrow{x_{i}}}

Finally, we get

{\displaystyle \overrightarrow{x}=\frac{\sum_{i=1}^{n}K'\left(\frac{x-x_{i}}{h}\right)\overrightarrow{x_{i}}}{\sum_{i=1}^{n}K'\left(\frac{x-x_{i}}{h}\right)}}

 

Mean Shift

As explained above, Mean Shift treats the points in the feature space as samples from an underlying probability density function. Dense regions in feature space correspond to local maxima, or modes. So for each data point, we perform gradient ascent on the locally estimated density until convergence. The stationary points obtained via gradient ascent represent the modes of the density function, and all points associated with the same stationary point belong to the same cluster.

Assuming g(x)=-K'(x), we have

{\displaystyle m(x)=\frac{\sum_{i=1}^{n}g\left(\frac{x-x_{i}}{h}\right)x_{i}}{\sum_{i=1}^{n}g\left(\frac{x-x_{i}}{h}\right)}-x}

The quantity m(x) is called the mean shift. So the mean shift procedure can be summarized as follows: for each point x_{i},

1. Compute mean shift vector m(x_{i}^{t})

2. Move the density estimation window by m(x_{i}^{t})

3. Repeat till convergence

 

Using a Gaussian kernel as an example (a Matlab sketch follows the update),

1. y_{i}^{0}=x_{i}
2. {\displaystyle y_{i}^{t+1}=\frac{\sum_{j=1}^{n}x_{j}e^{-\frac{\|y_{i}^{t}-x_{j}\|^{2}}{h^{2}}}}{\sum_{j=1}^{n}e^{-\frac{\|y_{i}^{t}-x_{j}\|^{2}}{h^{2}}}}}
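Putting the two steps together, a Matlab sketch of this Gaussian-kernel iteration might look as follows. This is my illustration (with hypothetical names), not the code linked at the end of the post.

% Run the Gaussian-kernel mean shift update for every point.
% X is n-by-d data, h the bandwidth, nIter the number of iterations.
function Y = mean_shift_gaussian(X, h, nIter)
    n = size(X, 1);
    Y = X;                                     % y_i^0 = x_i
    for t = 1:nIter
        for i = 1:n
            diff = X - repmat(Y(i,:), n, 1);   % x_j - y_i^t for all j
            w = exp(-sum(diff.^2, 2) ./ h^2);  % Gaussian weights
            Y(i,:) = (w' * X) ./ sum(w);       % weighted mean of the x_j
        end
    end
end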

 

Proof Of Convergence

Using the kernel profile k, defined by K(x)=k(\Vert x\Vert^{2}), the iteration can be written as

{\displaystyle y^{t+1}=\frac{\sum_{i=1}^{n}x_{i}k(||\frac{y^{t}-x_{i}}{h}||^{2})}{\sum_{i=1}^{n}k(||\frac{y^{t}-x_{i}}{h}||^{2})}}

To prove convergence, we have to prove that f(y^{t+1})\geq f(y^{t})

f(y^{t+1})-f(y^{t})={\displaystyle \sum_{i=1}^{n}}k(||\frac{y^{t+1}-x_{i}}{h}||^{2})-{\displaystyle \sum_{i=1}^{n}}k(||\frac{y^{t}-x_{i}}{h}||^{2})

But since the kernel profile k is a convex function, we have

k(y^{t+1})-k(y^{t})\geq k'(y^{t})(y^{t+1}-y^{t})

Using this,

f(y^{t+1})-f(y^{t})\geq{\displaystyle \sum_{i=1}^{n}}k'(||\frac{y^{t}-x_{i}}{h}||^{2})(||\frac{y^{t+1}-x_{i}}{h}||^{2}-||\frac{y^{t}-x_{i}}{h}||^{2})

=\frac{1}{h^{2}}{\displaystyle \sum_{i=1}^{n}}k'(||\frac{y^{t}-x_{i}}{h}||^{2})(\Vert y^{t+1}\Vert^{2}-2(y^{t+1})^{T}x_{i}+\Vert x_{i}\Vert^{2}-(\Vert y^{t}\Vert^{2}-2(y^{t})^{T}x_{i}+\Vert x_{i}\Vert^{2}))

=\frac{1}{h^{2}}{\displaystyle \sum_{i=1}^{n}}k'(||\frac{y^{t}-x_{i}}{h}||^{2})(\Vert y^{t+1}\Vert^{2}-\Vert y^{t}\Vert^{2}-2(y^{t+1}-y^{t})^{T}x_{i})

Since y^{t+1} is the weighted mean given by the update rule, {\displaystyle \sum_{i=1}^{n}}k'(||\frac{y^{t}-x_{i}}{h}||^{2})\,x_{i}={\displaystyle \sum_{i=1}^{n}}k'(||\frac{y^{t}-x_{i}}{h}||^{2})\,y^{t+1}, so we can replace x_{i} by y^{t+1} in the last term:

=\frac{1}{h^{2}}{\displaystyle \sum_{i=1}^{n}}k'(||\frac{y^{t}-x_{i}}{h}||^{2})(\Vert y^{t+1}\Vert^{2}-\Vert y^{t}\Vert^{2}-2(y^{t+1}-y^{t})^{T}y^{t+1})

=\frac{1}{h^{2}}{\displaystyle \sum_{i=1}^{n}}k'(||\frac{y^{t}-x_{i}}{h}||^{2})(\Vert y^{t+1}\Vert^{2}-\Vert y^{t}\Vert^{2}-2\Vert y^{t+1}\Vert^{2}+2(y^{t})^{T}y^{t+1})

=\frac{1}{h^{2}}{\displaystyle \sum_{i=1}^{n}}k'(||\frac{y^{t}-x_{i}}{h}||^{2})(-\Vert y^{t+1}\Vert^{2}-\Vert y^{t}\Vert^{2}+2(y^{t})^{T}y^{t+1})

=\frac{1}{h^{2}}{\displaystyle \sum_{i=1}^{n}}\left(-k'(||\frac{y^{t}-x_{i}}{h}||^{2})\right)\Vert y^{t+1}-y^{t}\Vert^{2}

\geq0,

since k is monotonically decreasing, so -k'\geq0.

Thus we have proven that the sequence \{f(j)\}_{j=1,2,...} is convergent. The second part of the proof in [2], which tries to prove that the sequence \{y_{j}\}_{j=1,2,...} is convergent, is wrong.

Improvements to Classic Mean Shift Algorithm

The classic mean shift algorithm is time intensive. Its time complexity is O(Tn^{2}), where T is the number of iterations and n is the number of data points in the data set. Many improvements have been made to the mean shift algorithm to make it converge faster.

One of them is adaptive Mean Shift, where you let the bandwidth parameter vary for each data point. Here, the h parameter is calculated using the kNN algorithm. If x_{i,k} is the k-nearest neighbor of x_{i}, then the bandwidth is calculated as

h_{i}=||x_{i}-x_{i,k}||

Here we use the L_{1} or L_{2} norm to find the bandwidth (a small sketch follows).
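Here is a small Matlab sketch of this bandwidth rule, assuming the L_{2} norm; the function name is hypothetical.

% For each point, set the bandwidth to the distance to its k-th
% nearest neighbor (k is assumed smaller than the number of points).
function h = knn_bandwidth(X, k)
    n = size(X, 1);
    h = zeros(n, 1);
    for i = 1:n
        d = sqrt(sum((X - repmat(X(i,:), n, 1)).^2, 2));  % L2 distances
        d = sort(d);
        h(i) = d(k + 1);   % d(1) = 0 is the point itself, so skip it
    end
end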

 

An alternate way to speed up convergence is to alter the data points during the course of Mean Shift. Again using a Gaussian kernel as an example (a sketch of the change follows the steps),

1. y_{i}^{0}=x_{i}
2. {\displaystyle y_{i}^{t+1}=\frac{\sum_{j=1}^{n}x_{j}e^{-\frac{\|y_{i}^{t}-x_{j}\|^{2}}{h^{2}}}}{\sum_{j=1}^{n}e^{-\frac{\|y_{i}^{t}-x_{j}\|^{2}}{h^{2}}}}}
3. x_{i}=y_{i}^{t+1}
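Relative to the earlier mean_shift_gaussian sketch, this variant only adds step 3: after each pass, the shifted points replace the data. A sketch of the modified loop (same hypothetical names as before):

% Same update as before, but the data itself moves after each pass.
for t = 1:nIter
    for i = 1:n
        diff = X - repmat(Y(i,:), n, 1);
        w = exp(-sum(diff.^2, 2) ./ h^2);
        Y(i,:) = (w' * X) ./ sum(w);
    end
    X = Y;   % step 3: replace the data points with the shifted points
end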

Other Issues

1. Even though mean shift is a non parametric algorithm, it does require the bandwidth parameter h to be tuned. We can use kNN to find out the bandwidth. The choice of bandwidth influences the convergence rate and the number of clusters.
2. The choice of the bandwidth parameter h is critical. A large h might result in incorrect clustering and might merge distinct clusters. A very small h might result in too many clusters.

3. When using kNN to determine h, the choice of k influences the value of h. For good results, k has to increase when the dimension of the data increases.
4. Mean shift might not work well in higher dimensions. In higher dimensions, the number of local maxima is pretty high and it might converge to a local optimum too soon.
5. The Epanechnikov kernel has a clear cutoff and is optimal in the bias-variance tradeoff.

Applications of Mean Shift

Mean shift is a versatile algorithm that has found a lot of practical applications – especially in the computer vision field. In computer vision, the dimensionality is usually low (e.g. the color profile of an image), and hence mean shift is used to perform lots of common vision tasks.

Clustering

The most important application is using Mean Shift for clustering. The fact that Mean Shift does not make assumptions about the number of clusters or the shape of the clusters makes it ideal for handling clusters of arbitrary shape and number.

Although Mean Shift is primarily a mode finding algorithm, we can find clusters using it. The stationary points obtained via gradient ascent represent the modes of the density function, and all points associated with the same stationary point belong to the same cluster.

An alternate way is to use the concept of a Basin of Attraction. Informally, the set of points that converge to the same mode forms the basin of attraction for that mode. All the points in the same basin of attraction are associated with the same cluster. The number of clusters is given by the number of modes.
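As an illustration, here is a small Matlab sketch that turns the converged points into cluster labels by grouping modes that lie within a (hypothetical) tolerance tol of each other.

% Y is the n-by-d matrix of converged points (one mode per data point).
function labels = basins_to_clusters(Y, tol)
    n = size(Y, 1);
    labels = zeros(n, 1);
    centers = zeros(0, size(Y, 2));      % one row per discovered mode
    for i = 1:n
        for c = 1:size(centers, 1)
            if norm(Y(i,:) - centers(c,:)) < tol
                labels(i) = c;           % same basin of attraction
                break;
            end
        end
        if labels(i) == 0
            centers(end+1, :) = Y(i,:);  % new mode found
            labels(i) = size(centers, 1);
        end
    end
end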

Computer Vision Applications

Mean Shift is used for multiple tasks in Computer Vision, like segmentation, tracking and discontinuity preserving smoothing. For more details see [2], [8].

Comparison with K-Means

Note: I have discussed K-Means at K-Means Clustering Algorithm. You can use it to brush up if you want.

K-Means is one of the most popular clustering algorithms. It is simple, fast and efficient. We can compare Mean Shift with K-Means on a number of parameters.

One of the most important differences is that K-means makes two broad assumptions: the number of clusters is already known and the clusters are shaped spherically (or elliptically). Mean shift, being a non parametric algorithm, does not assume anything about the number of clusters; the number of modes gives the number of clusters. Also, since it is based on density estimation, it can handle arbitrarily shaped clusters.

K-means is very sensitive to initialization: a wrong initialization can delay convergence or sometimes even result in wrong clusters. Mean shift is fairly robust to initialization; typically, mean shift is run for each point, or sometimes points are selected uniformly from the feature space [2]. Similarly, K-means is sensitive to outliers, whereas Mean Shift is not very sensitive.

K-means is fast and has a time complexity of O(knT), where k is the number of clusters, n is the number of points and T is the number of iterations. Classic mean shift is computationally expensive, with a time complexity of O(Tn^{2}).

Mean shift is sensitive to the selection of the bandwidth h. A small h can slow down convergence; a large h can speed up convergence but might merge two modes. Still, there are many techniques to determine h reasonably well.

Update [30 Apr 2010]: I did not expect this reasonably technical post to become very popular, yet it did! Some of the people who read it asked for sample source code. I wrote one in Matlab which randomly generates some points according to several Gaussian distributions and then clusters them using Mean Shift. It implements both the basic algorithm and the adaptive algorithm. You can download my Mean Shift code here. Comments are as always welcome!

References

1. K. Fukunaga and L. Hostetler, "The Estimation of the Gradient of a Density Function, with Applications in Pattern Recognition", IEEE Transactions on Information Theory, vol. 21, pp. 32–40, 1975.
2. Dorin Comaniciu and Peter Meer, "Mean Shift: A Robust Approach Toward Feature Space Analysis", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, May 2002.
3. Yizong Cheng, "Mean Shift, Mode Seeking, and Clustering", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 8, Aug 1995.
4. Konstantinos G. Derpanis, "Mean Shift Clustering".
5. Chris Ding, lectures for CSE 6339, Spring 2010.
6. Dijun Luo's presentation slides.
7. cs.nyu.edu/~fergus/teaching/vision/12_segmentation.ppt
8. Dorin Comaniciu, Visvanathan Ramesh and Peter Meer, "Kernel-Based Object Tracking", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 5, May 2003.
9. Dorin Comaniciu, Visvanathan Ramesh and Peter Meer, "The Variable Bandwidth Mean Shift and Data-Driven Scale Selection", ICCV 2001.



Matlab is one of the most widely used products for prototyping in data mining and machine learning. I am not a big fan of Matlab, but it is very convenient for coding some algorithms. This post discusses my experience installing and running Matlab, and debugging and compiling Matlab extensions, on a 64 bit Ubuntu system.

One of the biggest problems I faced is that the student version of Matlab does not have a 64 bit edition that works in Linux. Even though my lab had some commercial licenses, I was insistent on using my Linux machine for development. In this process, I learned a lot of interesting stuff, and hopefully this post will be of some help to people trying out Matlab in Linux.

As a note, all these instructions worked for R2008b, although I think they should apply to future versions as well. I am also assuming that you are using Ubuntu, even though adapting the steps to other Linux distributions should be easy.

As a standard disclaimer, all these steps worked for me and I hope they do work for you too. But no guarantees 🙂

Installing 32 bit Matlab in a 64 bit Ubuntu

When I first installed it, I got the basic information from this post in the Ubuntu Forums. The typical way to install Matlab is to invoke install_unix.sh, which will fail with an error like this:

"/media/cdrom0/unix/update/bin/glnxa64/xsetup: not found" .

If you get this error, here are the steps to follow:

a. sudo apt-get install ia32-libs
If you are using AMD64 (or even otherwise), install the ia32-libs package to get the 32 bit libraries.
b. You will also have to install a 32 bit JRE, as Matlab depends on it. Since there are many JREs (Sun, gcj, OpenJDK), I will leave the exact steps unspecified.
c. sudo /media/cdrom0/install_unix.sh -glnx86
This command forces Matlab to install as if it were on a 32 bit machine.
d. Registering your Matlab
Follow the instructions in this MathWorks post for registering your Matlab.

Invoking Matlab In 32 bit mode

Since you installed Matlab in 32 bit mode, you have to invoke it in 32 bit mode. Assuming the matlab command is in your path, invoke Matlab as

matlab -glnx86 &

If it is not, give the full path (assuming, of course, that it is installed in /opt):

/opt/matlab/bin/matlab -glnx86 &

For convenience, I have set up an alias for Matlab:

alias matlab='matlab -glnx86'

You can put this line in your .bashrc or .zshrc file so that it is set every time.

Compiling C and C++ source files using MEX in Ubuntu

For my research, I was using a library which had part of its code in C and C++. Matlab allows extensions to be written in these languages. The library authors had helpfully compiled it for Windows, but I was not able to use it in Linux directly. My scenario was even harder: not only did I have to compile the code in Linux, I had to cross compile it in 32 bit mode.

Note: since my library had both C and C++ files, I have given instructions for both. Most of the time, a library is written in just one of these languages, and hence you will need only part of the instructions. If you are using a 32 bit Linux system, ignore the cross compilation steps.

I have to acknowledge this post for pointing me in the right direction. It is a little old and the steps did not work immediately for me (I had to install a few more packages), but my steps are simply an extension of that post.

Let us take it step by step.

a. Assume the file name is abc.c. You try mex abc.c and it spits out:

 
Warning: You are using gcc version "4.4.0". The earliest gcc version supported with mex is "4.0.0". The latest version tested for use with mex is "4.2.0". To download a different version of gcc, visit http://gcc.gnu.org

b. So we need to install an older version of gcc to compile the code.

sudo apt-get install gcc-4.1 g++-4.1

c. Other packages "might" be needed but most of the time the above steps should work fine.
d. You must tell mex to use gcc-4.1 instead of gcc-4.4. To do this, run the following command:

mex -setup

e. This will create the file ~/.matlab/R2008b/mexopts.sh (if your version is different, the path will change too). Open it with a text editor.
f. If you are using a 32 bit system (or installed the 32 bit Matlab edition on a 64 bit system), go to the glnx86 case and change the following lines

CC='gcc-4.4' (or some other version)
CXX='g++-4.4' (or some other version)
to
CC='gcc-4.1'
CXX='g++-4.1'

g. If you are using 32 bit Matlab on a 64 bit system, this script will always match the glnxa64 case. So to force the system to use 32 bit mode, I added this line before the switch statement on architecture (around line 34):

Arch="glnx86";

This should force the system to do a 32 bit compilation.
h. Try compiling the file again. If you get an error related to stubs (/usr/include/gnu/stubs.h:7:27: error: gnu/stubs-32.h: No such file or directory), install the multilib packages needed for cross compilation.

sudo apt-get install gcc-4.1-multilib g++-4.1-multilib

i. If you still get errors, install the following packages. (I am not sure whether they are all needed, but installing them solved the problem.)

sudo apt-get install lib32gcc1 libc6-i386 libc6-dev-i386

j. Hopefully, by this time, the code compiles. If not, do a wild goose chase and install all possible 32 bit libraries that look even remotely relevant, like I did 😉
k. So far we have solved the compilation problems. You "might" get some errors in linking. For example, I got this error when trying to link:

/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.1.3/libstdc++.so when searching for -lstdc++
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.1.3/libstdc++.a when searching for -lstdc++
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.1.3/libstdc++.so when searching for -lstdc++
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.1.3/libstdc++.a when searching for -lstdc++
/usr/bin/ld: cannot find -lstdc++

The step that solved this problem is the following. (Beware: you are changing some links in a critical system location, so use caution.) Assuming /usr/lib32/libstdc++.so and /usr/lib32/libgcc_s.so don't exist, try

sudo ln -s /usr/lib32/libstdc++.so.6 /usr/lib32/libstdc++.so
sudo ln -s /usr/lib32/libgcc_s.so.1 /usr/lib32/libgcc_s.so

Of course, you might need to change the exact shared object filename (it may be different from .so.6).

l. After all these hard steps, your code should compile and link successfully. Congratulations (evil smile 🙂 )

Some Random Thoughts

As I said, I am not a big fan of Matlab, and Matlab in Linux is not helpful in changing that impression. The keyboard shortcuts (I use them a LOT) are totally different from other tools. Heck, they don't even match the Matlab editor shortcuts in Windows. I can understand not using ctrl-c, but the others? Now I have resorted to coding in vim and using the Matlab editor only for testing.

There are even more bad things. Unfortunately, the C library I am using seems to have a memory leak, and I was trying to debug it. Actually, I am not sure if the Matlab code is bad or the C code. When I run it on large models, it crashes with an "Out Of Memory" error. The problem is that, for some reason, the most useful commands for handling memory issues, like memory and feature('memstats'), don't seem to work (they throw a "not supported" error). I am not sure if this happens because I am using 32 bit Matlab on a 64 bit system, or whether it is common to Matlab on Linux in general. I am currently doing a very crude analysis using free and top (gasp!) and other convoluted means to test my code. I am still researching more effective methods, so if anyone knows any, do chime in! I will be very grateful! If I find any good techniques, I will post them here.

If you are as fed up with Matlab as I am, there is an excellent open source alternative called Octave. It is mostly compatible with Matlab. If you are a bit more careful while coding, you can write code that will run in both of them without changes (a tiny example follows). There are some minor portability issues, but they are not deal breakers. The differences are detailed here and here. In fact, I am in the process of converting my code to run in Octave and will blog about it once it is done. There is also quite some development on Matlab-like GUIs for Octave, such as XOctave and QtOctave. I hope the migration will be reasonably painless.
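As a tiny illustration of writing to the shared subset of the two languages, the following function is my own example (not from either manual) and runs unchanged in both Matlab and Octave.

% Root mean square of a vector: plain function syntax, element-wise
% operators, and no environment-specific calls.
function s = rms_value(x)
    s = sqrt(mean(x.^2));
end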

So unless you are using some very Matlab specific features, you are better off porting the code to Octave. Then you will have the best of both worlds.

Hope you found this post useful. I will blog about my experiences with Matlab (gasp!), Octave, porting and other issues soon!
