It seems I come across cool papers faster than I can write about them (that is, there’s a backlog of 10 papers I have been meaning to write about). This time, it’s “Visibility-Driven Transfer Functions” by Correa and Ma, Pacific Vis 2009. A pretty typical problem with transfer function assignment is that you “run out of opacity”: if you assign opacities that are too high for some scalar values, they will occlude inner surfaces. This happens a lot in day-to-day volume rendering use. I know it from personal experience, and from looking at the exploration trails of students on volume rendering assignments.

The cute idea of this paper, the way I see it, is to interpret a (simple, vanilla 1D opacity) transfer function as a “request for visibility”. That is, if you assign high opacity to four different scalars, what you really meant is “I want to see these voxels more than the other ones”. The way this works is that the authors compute a “visibility histogram”, an image-based measurement of how much each scalar value is actually visible in the final rendering. If the visibility histogram is low where the TF opacity is high, the TF is not doing a good job for that particular view angle. So now you can explicitly optimize the TF so that the visibility histogram is appropriate. Pretty simple idea (in the best possible sense), and the results are very encouraging. I expect this to be widely adopted in the future.
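To make the measurement concrete, here is a minimal NumPy sketch of how I understand the visibility histogram. This is my own toy raycaster, not the authors’ code: during front-to-back compositing, each sample’s visibility is its opacity times the transmittance still remaining along the ray, and that contribution gets binned by the sample’s scalar value.

```python
import numpy as np

def visibility_histogram(rays, opacity_tf, n_bins=64):
    """Accumulate how visible each scalar value is, front to back.

    rays: (n_rays, n_samples) array of scalar samples in [0, 1] along
          each ray, ordered front to back (a stand-in for a real
          raycaster's sampling).
    opacity_tf: function mapping a scalar in [0, 1] to an opacity.
    """
    hist = np.zeros(n_bins)
    for ray in rays:
        transmittance = 1.0  # fraction of light still reaching the eye
        for s in ray:
            alpha = opacity_tf(s)
            # visibility of this sample = its opacity times what's left
            hist[min(int(s * n_bins), n_bins - 1)] += transmittance * alpha
            transmittance *= 1.0 - alpha
    total = hist.sum()
    return hist / total if total > 0 else hist
```

On a toy example where scalar 0.2 always sits in front of scalar 0.8 and both get the same high opacity, the inner value’s bin comes out roughly an order of magnitude lower, which is exactly the mismatch the paper’s optimization targets.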

One comment I have is the following: it seems that for most viewing angles, the TF after optimization is essentially a linear ramp of opacities for each interesting nested isosurface. If this were uniformly the case, then the optimization would be much simpler. Correa and Ma show a situation where this is not the case (figure 9d), and I think I have a theory that explains it. But it requires some setup, so bear with me.

First, there’s another cool paper that I’ve been meaning to write about: “Scale Invariant Volume Rendering”, Kraus, Vis 2005. The idea here is to write a volume rendering integral that is exactly the limit of infinitely many, infinitesimally opaque isosurfaces. The way it works out is that the volume rendering “physical space ray” gets transformed into “data space”. Because you’re conceptually drawing infinitely many isosurfaces, it doesn’t matter whether the gradient magnitude along the ray is large or small: the final image looks the same. So, Kraus argues, this is “scale-invariant”: the speed at which the ray traverses the data is irrelevant to the final result. This type of argument is intimately related to the work we presented at Vis 2008 and to the co-area formula: when you switch between “physical space” and “data space”, you essentially need to divide by the gradient magnitude of the scalar field.
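To spell out what I mean by that division (my own notation, sketched from memory; check the paper for Kraus’s exact formulation): the usual optical depth along a ray x(t) through a scalar field f, and its change of variables into data space, look like this.

```latex
% Standard emission-absorption optical depth along a ray x(t):
\int_0^D \tau\big(f(x(t))\big)\, dt
% Change of variables v = f(x(t)), on a segment where f is monotone
% along the ray, gives dt = dv / \lvert \nabla f \cdot x'(t) \rvert:
= \int_{f(x(0))}^{f(x(D))} \frac{\tau(v)}{\lvert \nabla f \cdot x'(t) \rvert}\, dv
% Scale-invariant version: define extinction per unit of data value,
% \tilde{\tau}(v), so the gradient-magnitude factor disappears:
\int \tilde{\tau}(v)\, dv
```

With the last form, the image depends only on which isovalues the ray crosses (and how many times), not on how quickly it crosses them.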

So, to come back to Correa and Ma’s paper. I have a sneaking suspicion that if one works with Kraus’s volume rendering integral, it might be possible to work out the optimization in closed form (or at least something related to it). The reason I say that is that the only figure in which the opacity bumps are not monotonically increasing (and almost linearly so) is the one where there’s a really large gradient in a piece of the scalar field that’s in view. It might be possible to use an approximation of this to brush transfer functions under the computational rug and simply have users say “these are the scalars I want to see”. In fact, combine this with all the work based on Gordon’s 1998 VolVis paper, and you might have a good, robust, automatic volume visualization tool.
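For what it’s worth, the “request for visibility” reading suggests that even a naive solver gets you most of the way. Here is a toy sketch using my own damped multiplicative-update scheme, not the paper’s optimizer: measure the visibility histogram under the current opacities, scale each bin’s opacity toward the requested visibility, and repeat.

```python
import numpy as np

def measure_visibility(rays, alpha, n_bins):
    """Front-to-back visibility per scalar bin (toy raycaster)."""
    vis = np.zeros(n_bins)
    for ray in rays:
        T = 1.0  # transmittance remaining along this ray
        for s in ray:
            b = min(int(s * n_bins), n_bins - 1)
            vis[b] += T * alpha[b]
            T *= 1.0 - alpha[b]
    return vis / vis.sum()

def optimize_tf(rays, target_vis, n_bins=8, iters=50):
    """Iteratively scale per-bin opacities toward a target visibility
    histogram. A sketch of the idea, not the paper's method."""
    alpha = np.full(n_bins, 0.5)
    for _ in range(iters):
        vis = measure_visibility(rays, alpha, n_bins)
        # damped multiplicative update toward the target histogram
        alpha *= (target_vis / np.maximum(vis, 1e-9)) ** 0.5
        alpha = np.clip(alpha, 0.0, 0.99)
    return alpha
```

On a two-surface example where the user asks for equal visibility of a front and a back value, the solver does the sensible thing: it eases off the front bin’s opacity so the back one shows through, which is the behavior you would hope a closed-form solution reproduces.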