Category Archives: visweek

VisWeek papers

(Given the bulk of the posting material, I might as well start calling this the ‘VisWeek’ blog or something like that)

The program for VisWeek 2010 is now out, and there are more interesting papers this year than I ever remember seeing. In no particular order, here’s a list of papers I’ll try to find and read before VisWeek:

  • Marc Khoury, Rephael Wenger. On the Fractal Dimension of Isosurfaces
  • Min Chen, Heike Jänicke. An Information-theoretic Framework for Visualization
  • Lijie Xu, Teng-Yok Lee, Han-Wei Shen. An Information-Theoretic Framework for Flow Visualization
  • Paul Bendich, Herbert Edelsbrunner, Michael Kerber. Computing Robustness and Persistence for Images
  • Samuel Gerber, Peer-Timo Bremer, Valerio Pascucci, Ross Whitaker. Visual Exploration of High Dimensional Scalar Functions
  • Dirk J. Lehmann, Holger Theisel. Discontinuities in Continuous Scatterplots
  • Usman Alim, Torsten Möller, Laurent Condat. Gradient Estimation Revitalized
  • Marco Ament, Daniel Weiskopf, Hamish Carr. Direct Interval Volume Visualization
  • Ziyi Zheng, Wei Xu, Klaus Mueller. VDVR: Verifiable Visualization of Projection-Based Data
  • Waqas Javed, Bryan McDonnel, Niklas Elmqvist. Graphical Perception of Multiple Time Series
  • Stephan Diehl, Fabian Beck, Michael Burch. Uncovering Strengths and Weaknesses of Radial Visualizations – an Empirical Approach
  • Lars Grammel, Melanie Tory, Margaret-Anne Storey. How Information Visualization Novices Construct Visualizations
  • Hoi Ying Tsang, Melanie Tory, Colin Swindells. eSeeTrack – Visualizing Sequential Fixation Patterns
  • Rita Borgo, Karl Proctor, Min Chen, Heike Jänicke, Tavi Murray, Ian M. Thornton. Evaluating the Impact of Task Demands and Block Resolution on the Effectiveness of Pixel-based Visualization
  • Hadley Wickham, Dianne Cook, Heike Hofmann, Andreas Buja. Graphical Inference for Infovis
  • David Feng, Lester Kwock, Yueh Lee, Russell M. Taylor II. Matching Visual Saliency to Confidence in Plots of Uncertain Data
  • Nicholas Kong, Jeffrey Heer, Maneesh Agrawala. Perceptual Guidelines for Creating Rectangular Treemaps
  • Zhicheng Liu, John T. Stasko. Mental Models, Visual Reasoning and Interaction in Information Visualization: A Top-down Perspective
  • Caroline Ziemkiewicz, Robert Kosara. Laws of Attraction: From Perceived Forces to Conceptual Similarity
  • Aritra Dasgupta, Robert Kosara. Pargnostics: Screen-Space Metrics for Parallel Coordinates
  • Stefan Jänicke, Christian Heine, Marc Hellmuth, Peter F. Stadler, Gerik Scheuermann. Visualization of Graph Products
  • Stephen Ingram, Tamara Munzner, Veronika Irvine, Melanie Tory, Steven Bergner, Torsten Möller. DimStiller: Workflows for Dimensional Analysis and Reduction
  • Yu-Hsuan Chan, Carlos D. Correa, Kwan-Liu Ma. Flow-based Scatterplots for Sensitivity Analysis
  • Daniela Oelke, David Spretke, Andreas Stoffel, Daniel A. Keim. Visual Readability Analysis: How to Make Your Writings Easier to Read

I don’t know if all of these papers are available online, but I’ll put up a link to each one as I find it. Let me know if you think I missed anything tremendously exciting: I collected these from colleague suggestions and previous knowledge of the work, but mostly from the titles and authors. And, obviously, this list is biased. As it goes, there might be many other lists like it, but this one is mine.


VisWeek Evaluation: Longer response to the question

(Sorry, I don’t have the name of the person who asked the question.) At the Q&A session of our paper, someone pointed out that the function we used only hits a fraction of the Marching Cubes cases, and that this could be trouble.

This is a great point. If you were using MMS (the Method of Manufactured Solutions) to debug one particular algorithm (in this case, MC), you would certainly want to consider an analytical function that hits as many cases as possible. This is part of the verification pipeline that Tiago talked about earlier in the talk: the need for the constant cycle of “new tests -> verification -> more confidence -> new observations -> new tests” as a way of verifying our implementations. In this paper, we wanted a function that was both 1) complicated enough to trigger interesting behavior across a wide set of isosurfacing implementations and 2) simple enough that we could theoretically analyze the expected convergence of the algorithm.

In the case of MC, geometric and normal convergence are pretty trivial (as Tiago briefly mentioned, it’s just Taylor series plus a dumb trick to turn algebraic distance into geometric distance). So essentially any analytical function whose isosurfaces you can get geometric distances from would work as a manufactured Marching Cubes solution.
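(For the record, this is my paraphrase of the trick, not its exact statement in the talk: near a regular point of the isosurface f(x) = c, a first-order Taylor expansion lets you estimate the geometric distance from the algebraic distance by dividing by the gradient magnitude.)

```latex
d_{\mathrm{geom}}(\mathbf{x}) \;\approx\; \frac{\lvert f(\mathbf{x}) - c \rvert}{\lVert \nabla f(\mathbf{x}) \rVert}
```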

VisWeek: give us data or source! (Also, twitter rules)

If you spot an annoying guy asking for the source or data used in talks and paper presentations, you now know who I am. If you give me your data, I can still run my code on your data. If you give me your code, I can still run your code on my data. However, if you give me neither, you’d better have an incredibly well-documented writeup. Otherwise, they’re just pictures in the paper. Please, PLEASE, everyone, let’s make an extra effort to share at least part of the materials.

Also, I love Twitter for the running live commentary on presentations. You should really be following the #visweek hashtag.

VisWeek: general impressions and things to see

I have not written much about what has been going on in the last couple of days, mostly because continuous updates like the ones I did last year seem impossible from the conference floor. However, I also feel they might work better as twitter updates: Marian, TJ and Robert have been posting a lot of these. Look for the latest about VisWeek on twitter under the #visweek hashtag, and somewhat longer updates about the conference here.

In general, I have been pretty pleased about the many different references to machine learning that I have heard and seen in the conference. I’m looking forward to Visual Human+Machine Learning at Vis, and I was impressed to see an entire one-day workshop/forum dedicated to bringing machine learning people and techniques to visualization. Props to FODAVA organizers!

One of the papers I’m looking forward to seeing is Continuous Parallel Coordinates. (It is the followup to Continuous Scatterplots). After seeing 4 or 5 VAST papers that could clearly use some variant of these, I am convinced that continuous density-based plots (histograms, scatterplots, and now parallel coordinates) are vastly useful, and much better than the discrete counterparts. I hope these techniques become popular.
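To make “continuous density-based” concrete, here is a generic sketch using a plain Gaussian kernel density estimate (my own illustration; the actual Continuous Scatterplots papers use an interpolation-based construction, not KDE): instead of plotting discrete points, you evaluate a smooth density over the whole domain and render that.

```python
import numpy as np

def kde_grid(points, xs, ys, bandwidth=0.5):
    """Evaluate a 2D Gaussian kernel density estimate of `points`
    (an (n, 2) array of samples) on the grid defined by the 1D arrays
    xs and ys. Returns an array of shape (len(ys), len(xs)) that can be
    rendered as a continuous density image instead of a scatterplot."""
    pts = np.asarray(points, dtype=float)
    gx, gy = np.meshgrid(xs, ys)          # grid coordinates
    dx = gx[..., None] - pts[:, 0]        # x offset from each grid cell to each point
    dy = gy[..., None] - pts[:, 1]        # y offset likewise
    norm = 2 * np.pi * bandwidth**2 * len(pts)
    return np.exp(-(dx**2 + dy**2) / (2 * bandwidth**2)).sum(axis=-1) / norm
```

The resulting grid integrates to one, so overplotting simply accumulates density instead of hiding points behind each other.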

VAST Session 1: Spatio-Temporal Analytics

Hello everyone. I’m sitting in the first paper session of the day, but posting will be delayed since the wireless connection in the conference rooms is essentially non-existent. I have to confess I haven’t followed VAST too closely in its first few years, so I was surprised to see a lot of interesting papers in the fast forward this morning; you can expect at least some writing about them right here. Also, I would like to publicly congratulate whoever had the idea of USB proceedings: leafing through the papers on a USB drive is much easier than carrying the huge printed proceedings or having a DVD noisily spinning all the time.

The first talk is from the paper entitled “Interactive Visual Clustering of Large Collections of Trajectories”, by Andrienko and co-authors at the University of Pisa. It is (obviously) about a clustering approach to classifying trajectory data. The abstract claims that “structurally complex objects such as trajectories of moving entities cannot adequately be described as points in multi-dimensional spaces”. In my opinion, this is incorrect: embedding structured objects like trajectories into (possibly implicit) high-dimensional vector spaces is widely used in machine learning, where it is known as the kernel trick. The talk did not give many details about what the clustering algorithm is actually doing, and seemed to be more about the system and the data itself than the underlying technique. Since I promised you before that I would be talking about fun things with inner product spaces, I will include an example of inner products for trajectory data.
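Here is the promised example, as a toy construction of my own (the resampling scheme and parameter names are mine, not anything from the paper): resample each trajectory at common times, and trajectories become ordinary vectors with an ordinary inner product, on top of which any kernel method works.

```python
import numpy as np

def resample(traj, n=50):
    """Linearly resample a trajectory (a list of (t, x, y) samples) at n
    evenly spaced times, returning a flat feature vector of length 2*n."""
    traj = np.asarray(traj, dtype=float)
    t, x, y = traj[:, 0], traj[:, 1], traj[:, 2]
    ts = np.linspace(t[0], t[-1], n)
    return np.concatenate([np.interp(ts, t, x), np.interp(ts, t, y)])

def traj_inner(a, b, n=50):
    """Inner product between two trajectories: after resampling, it is just
    a dot product in R^(2n) -- so trajectories *can* be points in a
    multi-dimensional space."""
    return float(resample(a, n) @ resample(b, n))

def rbf_kernel(a, b, gamma=0.1, n=50):
    """A Gaussian kernel on the same embedding, usable by any kernel
    method (SVMs, kernel k-means, spectral clustering, ...)."""
    d = resample(a, n) - resample(b, n)
    return float(np.exp(-gamma * (d @ d)))
```

Identical trajectories get kernel value 1, and the value decays smoothly as the curves diverge, which is exactly the kind of similarity a trajectory-clustering algorithm needs.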

The second talk is again about visualization of trajectory data: “Proximity-based Visualization of Movement Trace Data”, by Crnovrsanin and co-authors at UC Davis. The authors propose reducing the dimensionality of trajectory data by mapping a trajectory to a 2D line where the coordinates are time and distance to a point of interest. While this seems effective once you know the position of the point of interest (their case study related to a bombing incident simulation), I worry that line crossings in that diagram are perceptually important but not very informative unless the distance is small (about the only thing you can say about two objects that are at distance k from a third object is that they’re at most at distance 2k from each other). The presenter claimed to have picked this representation over a 3D display of the lines because of the occlusion that would be present in such a display, but I’m not sure that would be significantly worse than the uninformative crossings in the 2D scenario. Their second case study compared the proximity of elk and deer to roads, showing that deer tend to be closer to roads than elk. That’s a cool example, but would a simple statistic of average distance to road have been enough?
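The mapping itself is simple enough to sketch (this is my reading of the idea; names and details are mine, not the paper’s): each trajectory becomes a 1D signal of distance-to-POI over time, which is the 2D line that gets plotted.

```python
import numpy as np

def proximity_curve(traj, poi):
    """Map a trajectory (rows of (t, x, y)) to a (t, distance) curve
    relative to a point of interest -- the 2D line the talk plots."""
    traj = np.asarray(traj, dtype=float)
    dist = np.hypot(traj[:, 1] - poi[0], traj[:, 2] - poi[1])
    return np.column_stack([traj[:, 0], dist])
```

Note that two curves meeting at height k only tells you the two objects are within 2k of each other, which is the triangle-inequality concern above: the crossing looks meaningful but carries little information unless k is small.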

Writing these first two paragraphs, I missed the third talk by Steed and co-authors from the NRL and MSU, whose paper is entitled “Guided Analysis of Hurricane Trends Using Statistical Processes Integrated with Interactive Parallel Coordinates”, which from my completely ignorant 5-second glance, seems to be about enriching parallel coordinate plots with some sort of statistical information about the datasets in use.

I’m also missing the fourth talk, since I just snuck out of the room to post this. More soon!