I have code to output both color and grey-level bitmaps. Color makes the level curves much easier to see; a sample of what the color output can do lies behind this input image.
These links show the input files (which are grey-level pixmaps) under a distance transform, using a variety of distance functions.
I will make the source code available to whoever wants to play around with it, but you will have to get in touch with me for it.
"What is a distance function?", you ask. Well, let's look at an introduction to some metrics for pixel grids...
In general, a function d that takes pairs of elements from a space S and returns a non-negative real value, d: S x S -> [0, infinity), is called a distance function (or metric) if, for all x, y, z in S: d(x, y) = 0 if and only if x = y; d(x, y) = d(y, x); and d(x, z) <= d(x, y) + d(y, z) (the triangle inequality).
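As a quick sanity check of these axioms, here is a small sketch (not part of the original source code) that verifies all three properties numerically for the city-block distance, defined below, on a handful of sample grid points:

```python
def cityblock(p, q):
    """City-block distance between two pixels p = (x1, y1), q = (x2, y2)."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

points = [(0, 0), (3, 1), (-2, 4)]

for p in points:
    for q in points:
        d = cityblock(p, q)
        assert d >= 0                        # non-negative
        assert (d == 0) == (p == q)          # zero exactly when points coincide
        assert d == cityblock(q, p)          # symmetry
        for r in points:
            assert cityblock(p, r) <= d + cityblock(q, r)  # triangle inequality

print("all metric axioms hold on the sample points")
```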
In the discrete setting, two particular functions become obviously useful. The first is known as the Minkowski first-order metric, defined as d1((x1, y1), (x2, y2)) = |x1 - x2| + |y1 - y2|.
This is often called the ``city-block'' metric: the discs of points equidistant from a centre form a diamond shape. In many ways this is the simplest metric we could think of.
The second, d-infinity((x1, y1), (x2, y2)) = max(|x1 - x2|, |y1 - y2|), is commonly known as the Minkowski infinite-order (or ``chessboard'') metric. This metric creates discs of a square shape, as we can see when we map the distance from a point across a grid.
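The two disc shapes are easy to see by printing each metric's distance from a centre pixel over a small grid, as in this sketch (the function names `d1` and `dinf` are my own labels, not from the original code):

```python
def d1(p, q):
    """Minkowski first-order ("city-block") metric: diamond-shaped discs."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def dinf(p, q):
    """Minkowski infinite-order ("chessboard") metric: square discs."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

centre = (3, 3)
for metric in (d1, dinf):
    # Print the distance of every pixel in a 7x7 grid from the centre;
    # following a fixed value traces out the metric's "disc" boundary.
    for y in range(7):
        print(" ".join(str(metric((x, y), centre)) for x in range(7)))
    print()
```

In the first grid the pixels at distance 3 form a diamond around the centre; in the second they form the border square of the grid.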
The point of a distance transform is to assign a value to each pixel according to its depth within an image; this depth can simply be the shortest distance to the exterior of the enclosing shape. Azriel Rosenfeld, in his book on digital pictures, gives an algorithm that achieves this effect: by finding the radii of fusion for all the boundary points, it is possible to assign every internal point a distance according to the metric of choice. This certainly works, but any program built on this approach has a horrible run time. I know, because my first attempt at the problem was similar; after waiting 30 minutes on a screen-sized input image, I decided that O(n^4) was overly costly.
The current algorithm runs in O(n) time in the number of pixels. There is a fairly large overhead due to kernel memory allocation, but the algorithm itself processes images quickly; most of the time is in fact spent waiting on I/O while reading and writing the stored files, and as you will see, even that is only a few seconds.
The approach can be viewed as "peeling an onion" (the input image being the onion!). Each layer is peeled off completely and every pixel in that layer is assigned the appropriate value. As the points are "peeled" off, the metric of choice is used to find any of their neighbours that have not yet been added to the next layer, and those neighbours are added.
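The onion-peeling idea can be sketched as a breadth-first traversal. This is my own minimal reconstruction (not the author's source), assuming a binary image stored as a list of lists (1 = inside the shape, 0 = exterior) and 4-connected neighbours, which corresponds to the city-block metric:

```python
from collections import deque

def distance_transform(image):
    """Assign each interior pixel its city-block distance to the exterior."""
    h, w = len(image), len(image[0])
    dist = [[None] * w for _ in range(h)]
    layer = deque()
    # Layer 0: every exterior pixel starts at distance 0.
    for y in range(h):
        for x in range(w):
            if image[y][x] == 0:
                dist[y][x] = 0
                layer.append((x, y))
    # Peel one layer at a time: each peeled pixel enqueues any interior
    # neighbour that has not yet been assigned, giving it the next depth.
    while layer:
        x, y = layer.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                layer.append((nx, ny))
    return dist
```

Each pixel enters the queue exactly once and is examined a constant number of times, which is where the O(n) bound comes from. Swapping in 8-connected neighbours would give the chessboard metric instead.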
When the process terminates, the output image is scaled to use the maximal range of the grey-scale (each pixel has a 1-byte, 256-shade range).
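The final rescaling step might look like the following sketch (an assumed helper, not from the original code), which stretches the computed depths linearly so the deepest pixel maps to 255:

```python
def rescale(dist):
    """Stretch non-negative integer depths onto the full 0-255 grey range."""
    peak = max(max(row) for row in dist)
    if peak == 0:
        # A flat image: nothing to stretch, everything stays black.
        return [[0] * len(row) for row in dist]
    return [[(d * 255) // peak for d in row] for row in dist]
```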