Perceptually Homogeneous Colourmaps

A colourmap is a function that maps intensity values to RGB colours. Colourmaps can be used to create pseudocolour images, often providing more contrast and allowing the human eye to make out more detail. In fact, the study of human colour perception is an extensive scientific field. But more on that later.
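To make that concrete, here's a minimal sketch of such a mapping in Python with NumPy. The function name, the control colours and the toy data are my own illustrative choices (and the colours are certainly not a perceptually homogeneous map):

```python
import numpy as np

def apply_colourmap(intensity, control_colours=None):
    """Map intensities in [0, 1] to RGB by linear interpolation
    between a small set of control colours (a toy colourmap)."""
    if control_colours is None:
        # Dark blue -> cyan -> yellow; placeholder colours only.
        control_colours = np.array([[0.0, 0.0, 0.5],
                                    [0.0, 1.0, 1.0],
                                    [1.0, 1.0, 0.0]])
    intensity = np.clip(np.asarray(intensity, dtype=float), 0.0, 1.0)
    # Positions of the control colours along [0, 1]
    stops = np.linspace(0.0, 1.0, len(control_colours))
    # Interpolate each RGB channel independently
    return np.stack([np.interp(intensity, stops, control_colours[:, c])
                     for c in range(3)], axis=-1)

# Pseudocolour a grey-scale image (random data as a stand-in)
grey = np.random.rand(64, 64)
pseudocolour = apply_colourmap(grey)  # shape (64, 64, 3)
```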

Profiling Performance

If you've ever had to speed up some code you wrote, then hopefully you'll know about code profiling. If not, then you really need to find out about it! What profiling does is tell you where your program spends most of its computation time. Knowing this allows you to focus your code optimization where it really matters - on the slow bits!
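For example, in Python you can do this with the built-in cProfile module. This is just a minimal sketch; the two functions being profiled are stand-ins for whatever your real program does:

```python
import cProfile
import pstats

def slow_task():
    # Deliberately wasteful: builds a big list just to sum it
    return sum([i * i for i in range(1_000_000)])

def fast_task():
    return sum(range(1_000))

def pipeline():
    slow_task()
    fast_task()

# Profile the pipeline and print the functions that used the most time
profiler = cProfile.Profile()
profiler.runcall(pipeline)
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```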

But what about how well your code, or rather your algorithm, performs? If you want to improve your algorithm's performance, and it consists of a set of distinct tasks, it's worth knowing which of those tasks you should spend your time improving in order to get the biggest gain overall. No point wasting your time developing a much better object detector if it's your object classifier that sucks! But in order to know which tasks to improve, we need to perform some kind of profiling on algorithm performance, instead of computation time. How?

Well, that's where another great tip from Andrew Ng's Machine Learning course can help you out: ceiling analysis. It's a way of profiling the performance of algorithms. I'll let Andrew explain how it works.
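In the meantime, here's the gist as a toy sketch: replace each stage of your pipeline in turn with ground-truth output and measure overall accuracy; the stage whose perfect version buys you the most is where to focus. All the numbers and stage names below are hypothetical:

```python
# Ceiling analysis on a made-up detection + classification pipeline.
overall_accuracy = 0.72  # accuracy of the current end-to-end system

# Overall accuracy when this stage (and every stage before it) is
# replaced by ground-truth output.
with_perfect = {
    "object detector": 0.75,
    "object classifier": 0.90,
    "post-processing": 0.92,  # everything perfect: the ceiling
}

previous = overall_accuracy
for stage, accuracy in with_perfect.items():
    gain = accuracy - previous
    print(f"{stage}: potential gain of {gain:.2f}")
    previous = accuracy
# Here the classifier offers by far the biggest potential gain (0.15),
# so that's the component worth spending time on.
```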

ML Fitting of Complex Distributions

This post gives me the chance to show off something I've just added support for: LaTeX equations using MathJax. Beautiful!

Let's imagine that I have some training data, $\{\x_1,\dots,\x_N\}$, and a non-negative function, $f(\x;\params)$, parameterized by $\params$, that I want to use as a probability distribution to fit to the data. The probability distribution is therefore given by

$$ p(\x;\params) = \frac{f(\x;\params)}{\int f(\vect{y};\params)\d\vect{y}}. $$

A common thing to do is to maximize the likelihood (ML learning, not to be confused with machine learning) of the data, w.r.t. $\params$, i.e.

$$ \params^* = \argmax_\params \left(\int f(\vect{y};\params)\d\vect{y}\right)^{-N}\prod_{i=1}^N f(\x_i;\params). $$

Whilst $f(\x;\params)$ is computable (since we defined it), the integral $\int f(\vect{y};\params) \d\vect{y}$ may not be. For example, this is the case with complex models such as those defined by many deep networks. This creates a problem in ML fitting of the distribution.
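To see exactly where the integral bites, take the log of the objective above (this is just an expansion of the formula already given, in the same notation):

$$ \log\left[\left(\int f(\vect{y};\params)\d\vect{y}\right)^{-N}\prod_{i=1}^N f(\x_i;\params)\right] = \sum_{i=1}^N \log f(\x_i;\params) - N\log\int f(\vect{y};\params)\d\vect{y}. $$

The first term is easy to evaluate and differentiate w.r.t. $\params$, but the second term contains precisely the integral we can't compute.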

The Future of Technical Computing

I've been thinking a bit in recent months about whether I should move to using a new programming language. Traditionally I've done prototyping of algorithms in MATLAB, and the development of real-time or production code in C++, using Visual Studio. However, for various reasons, which I'll discuss, I've considered moving to Julia. Here are my current thoughts on the subject.

The Perils of Overfitting

Given my last blog post on using learning curves to diagnose over- or under-fitting, it's a fitting ;) time to share this video. In it, some talented folks from Brown University and Georgia Tech lay bare the perils of both over- and under-fitting, in a rather original way. So without further ado, take it away guys…

Analysing Learning Curves

I recently looked over Machine Learning high-flier Andrew Ng's Machine Learning course on Coursera. It's perfect for people who want to use machine learning as a tool, and is also a great primer for people who want to get into machine learning research, though it is far more vocational than theoretical.

Of course, being practical, it has all sorts of useful tips to help you get your methods working well. One such tip is some basic plotting and analysis of the learning curves of your current implementation, which can point you towards what to try next if things aren't quite working yet. This analysis can save you a lot of time! I recommend the course, but if you just want to know how to do this analysis then look no further than this blog post.
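If you want to try the plotting side of it straight away, here's a minimal sketch using scikit-learn and matplotlib. The tooling and the synthetic data set are my own illustrative choices, not something prescribed by the course:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

# Synthetic data as a stand-in for your own problem
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Training and validation scores for increasing training set sizes
sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 8), cv=5)

plt.plot(sizes, 1 - train_scores.mean(axis=1), label="training error")
plt.plot(sizes, 1 - val_scores.mean(axis=1), label="validation error")
plt.xlabel("number of training examples")
plt.ylabel("error")
plt.legend()
plt.show()

# A persistent gap between the two curves suggests over-fitting (more data
# or more regularisation may help); two high, converged curves suggest
# under-fitting (try a richer model or better features).
```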

Get Lost in the Cloud

Hopefully you find TED talks as inspiring and thought-provoking as I do. Anyway, I came across a real gem for all researchers the other day. Now, this has nothing to do with cloud storage or cloud computing. The talk posits the idea that in order to have a great new idea, you have to get lost and confused. Mind-blowingly simple, yet insightful. Enjoy…

Now get lost.

DeepMind Patents: I

I'm sure many of you will have heard about Google's £400m purchase of DeepMind, a startup founded with the aim of solving Artificial Intelligence (AI). They focussed on developing algorithms that allow machines to learn how to play computer games. It's this generality that makes their approach AI rather than Machine Learning (ML). And apparently the computer gets pretty good, eventually beating human players.

What intrigues me is how this technology works. Of course much of it is a closely guarded secret, but prior to being bought, DeepMind did file three patents, so I thought it would be interesting to read these. In this blog post, I'll be taking a look at the first of them, US20140185959.