Hello everyone. I’m sitting on the first paper session of the day, but posting will be delayed since the wireless connection in the conference rooms is essentially non-existent. I have to confess I haven’t followed VAST too closely in its first few years. I was surprised to see a lot of interesting papers on the fast forward this morning, so you can expect at least some writing about them right here. Also, I would like to publicly congratulate whoever had the idea of USB proceedings: leafing through the papers on a USB drive is much easier than carrying the huge printed proceesings or having a DVD noisily spinning all the time.
The first talk is from the paper entitled “Interactive Visual Clustering of Large Collections of Trajectories”, by Andrienko and co-authors at the University of Pisa. It is (obviously) about a clustering approach to classifying trajectory data. The abstract claims that “structurally complex objects such as trajectories of moving entities cannot adequately be described as points in multi-dimensional spaces”. In my opinion, this is incorrect: embedding complex objects into (possibly implicit) vector spaces is standard practice in machine learning, widely known as the kernel trick. The talk did not give many details about what the clustering algorithm is actually doing, and seemed to be more about the system and the data itself than the underlying technique. Since I promised you before that I would be talking about fun things with inner product spaces, I will include an example of inner products for trajectory data.
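Here is what I mean, as a minimal sketch (my own toy construction, not anything from the paper): if two trajectories are sampled at the same timestamps, a trajectory of n positions is just a point in R^(2n), and the ordinary dot product already gives you an inner product, and hence a distance, on trajectories.

```python
# Toy inner product for trajectory data (my example, not the paper's method).
# Assumption: each trajectory is a list of (x, y) samples taken at the same
# timestamps, so a trajectory with n samples is a point in R^(2n) and the
# ordinary dot product applies.

def trajectory_inner_product(a, b):
    """Dot product of two trajectories sampled at common timestamps."""
    if len(a) != len(b):
        raise ValueError("trajectories must share the same timestamps")
    return sum(xa * xb + ya * yb for (xa, ya), (xb, yb) in zip(a, b))

def trajectory_distance(a, b):
    """Distance induced by the inner product: ||a - b||."""
    diff = [(xa - xb, ya - yb) for (xa, ya), (xb, yb) in zip(a, b)]
    return trajectory_inner_product(diff, diff) ** 0.5

t1 = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
t2 = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]  # t1 shifted up by one unit
print(trajectory_distance(t1, t2))  # -> 1.732... (sqrt(3))
```

For trajectories with different sampling you would resample to a common time grid first, or use a fancier kernel, but the point stands: nothing stops you from describing trajectories as points in a (high-dimensional) vector space.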
The second talk is again about visualization of trajectory data: “Proximity-based Visualization of Movement Trace Data”, by Crnovrsanin and co-authors at UC Davis. The authors propose reducing the dimensionality of trajectory data by mapping a trajectory to a 2D line whose coordinates are time and distance to a point of interest. This is effective once you know the position of the point of interest (the case study they used was related to a bombing incident simulation), but I worry that line crossings in that diagram are perceptually important yet not very informative unless the distance is small (about the only thing you can say about two objects that are each at distance k from a third object is that they’re at most at distance 2k from each other). The presenter claimed to have picked this representation over a 3D display of the lines because of the occlusion that would be present in such a display, but I’m not sure that would be significantly worse than the uninformative crossings in the 2D scenario. Their second case study was about comparing the proximity of elk and deer to roads, and showed that deer tend to be closer to roads than elk. That’s a cool example, but would a simple statistic of average distance to road have been enough?
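To make the crossing objection concrete, here is a minimal sketch of the proximity mapping as I understand it (my own reconstruction, not the authors’ code): each trajectory becomes a curve of (time, distance to the point of interest). Two objects approaching the POI from different directions produce identical curves, which “cross” everywhere, yet the objects themselves stay far apart.

```python
# My toy reconstruction of the proximity mapping: a trajectory of
# (t, x, y) samples becomes a 2D curve of (t, distance to a POI).
from math import hypot

def proximity_curve(trajectory, poi):
    """Map [(t, x, y), ...] to [(t, distance_to_poi), ...]."""
    px, py = poi
    return [(t, hypot(x - px, y - py)) for t, x, y in trajectory]

poi = (0.0, 0.0)
a = [(0, 3.0, 0.0), (1, 2.0, 0.0)]  # approaching the POI from the east
b = [(0, 0.0, 3.0), (1, 0.0, 2.0)]  # approaching from the north
print(proximity_curve(a, poi))  # -> [(0, 3.0), (1, 2.0)]
print(proximity_curve(b, poi))  # -> [(0, 3.0), (1, 2.0)]  (identical curves)
# Yet at t=0 the objects are hypot(3, 3) ~ 4.24 apart: the curves coincide
# while the true separation is anywhere between 0 and the 2k bound (here 6).
```

This is exactly the triangle-inequality point above: equal distance to the POI constrains the pairwise distance only to the interval [0, 2k], so a crossing in the plot tells you little about actual proximity between the two objects.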
Writing these first two paragraphs, I missed the third talk by Steed and co-authors from the NRL and MSU, whose paper is entitled “Guided Analysis of Hurricane Trends Using Statistical Processes Integrated with Interactive Parallel Coordinates”, which, from my completely ignorant 5-second glance, seems to be about enriching parallel coordinate plots with some sort of statistical information about the datasets in use.
I’m also missing the fourth talk, since I just snuck out of the room to post this. More soon!