Hello again from Vis – this is the final session I will be writing about, and I’m off to a bad start: I got here late and only caught the Q&A of the first paper. Maybe I’ll hop over to the perception session to catch some of the work there – I’m not entirely sure.
David Koop is now talking about VisComplete, which is joint work with me, Steve, and our advisors. The idea behind VisComplete is to go a step beyond the analogy work we did last year. In our previous paper, we tried to let the user tell us what they wanted to do, instead of having to show us how to do it. This year, we are trying to help users figure out where they are going, by suggesting meaningful pieces of pipelines based on a dataset of previously constructed visualizations. There have been lots of good questions and suggestions. Extending this to automatically pick out pieces to abstract away is something we should certainly think about.
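To give a flavor of the completion idea, here is a minimal sketch, not VisComplete itself: it assumes pipelines are flattened into ordered lists of module names (the real system works on graph-structured pipelines), and it simply suggests the most frequent successors of the last module seen in a corpus of past pipelines. All module names below are made up for illustration.

```python
from collections import Counter, defaultdict

def build_model(corpus):
    """Count which modules follow each module across the corpus."""
    successors = defaultdict(Counter)
    for pipeline in corpus:
        for current, nxt in zip(pipeline, pipeline[1:]):
            successors[current][nxt] += 1
    return successors

def suggest_completions(model, partial, k=3):
    """Suggest up to k likely next modules for a partial pipeline."""
    if not partial:
        return []
    counts = model.get(partial[-1])
    if not counts:
        return []
    return [name for name, _ in counts.most_common(k)]

# Hypothetical corpus of previously constructed pipelines.
corpus = [
    ["FileReader", "ContourFilter", "Mapper", "Renderer"],
    ["FileReader", "ContourFilter", "Mapper", "Renderer"],
    ["FileReader", "Slice", "Mapper", "Renderer"],
]
model = build_model(corpus)
print(suggest_completions(model, ["FileReader"]))  # → ['ContourFilter', 'Slice']
```

A real completion engine would rank whole subgraphs rather than single next modules, but the underlying intuition, mining structure from what users built before, is the same.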