MLUI 2021: Machine Learning from User Interactions for Visualization and Analytics

An IEEE VIS 2021 workshop

2019 Workshop Summary

Date and Location: October 20, 2019, 2:20-5:40pm PDT, in Room 3 at IEEE VIS

The Machine Learning from User Interactions (MLUI) workshop seeks to bring together researchers to share their knowledge and build collaborations at the intersection of the Machine Learning and Visualization fields, with a focus on learning from user interaction. Rather than focusing on what visualization can do to support machine learning (as in current Explainable AI research), this workshop seeks contributions on how machine learning can support visualization. Such support incorporates human-centric sensemaking processes, user-driven analytical systems, and the pursuit of insight from data. Our intention in this workshop is to generate open discussion about how we currently learn from user interaction, how to build intelligent visualization systems, and how to proceed with future research in this area. We hope to foster discussion regarding systems, interaction models, and interaction techniques. Further, we hope to extend last year’s collaborative creation of a research agenda that explores the future of machine learning with user interaction.
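To make the theme concrete, the sketch below illustrates one common pattern of learning from user interaction: inferring per-attribute weights from a user's manipulation of a 2D layout so that the underlying distance model better reflects the user's intent. This is a minimal, hypothetical example written for illustration only; the function names and the gradient update are assumptions, not a description of any particular system discussed at the workshop.

    # Hypothetical sketch: learn per-attribute weights from a user's layout manipulation.
    # The user drags two points closer together in a 2D projection; the system nudges
    # the attribute weights so the weighted distance between those points moves toward
    # the distance implied by the interaction. Illustrative only.
    import numpy as np

    def update_weights(data, weights, i, j, target_dist, lr=0.1):
        """One gradient step on a squared-error loss between the current weighted
        distance of points i and j and the user-implied target distance."""
        diff_sq = (data[i] - data[j]) ** 2            # per-attribute squared differences
        current = np.sqrt(np.sum(weights * diff_sq))  # current weighted Euclidean distance
        error = current - target_dist                 # positive => points are still too far apart
        grad = diff_sq / (2 * current + 1e-9)         # d(current)/d(weights)
        weights = weights - lr * error * grad         # step toward the user's intent
        weights = np.clip(weights, 1e-6, None)        # keep weights positive
        return weights / weights.sum()                # renormalize

    # Example: the user drags points 0 and 3 close together in the projection.
    rng = np.random.default_rng(0)
    data = rng.normal(size=(10, 4))                   # 10 items, 4 attributes
    weights = np.full(4, 0.25)                        # start with uniform attribute weights
    weights = update_weights(data, weights, i=0, j=3, target_dist=0.1)
    print(weights)  # attributes on which the two points differ most lose relative weight

In a full system, many such interactions would be accumulated and the updated weights fed back into the projection, closing the loop between the user's actions and the model.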

Schedule

Session 1 (2:20pm - 3:50pm)

Session 2 (4:10pm - 5:40pm)

KEYNOTES

Keynote 1: Mixed-Initiative Visual Analytics: Model-Driven Views and Analytic Guidance

Abstract: The classic information visualization design pattern of “overview first, zoom and filter, details on demand” is no longer sufficient. As datasets have grown (and small displays have become more prevalent), simple overviews tend to be either too high-level to be useful, or too cluttered to reveal anything interesting. The task of exploring and sifting through the data is left only to the analyst in this traditional model. It doesn’t have to be this way: well-designed computational models of various sorts can augment the analysis process by highlighting patterns, suggesting next steps, and drawing attention to regions of potential interest. I seek to facilitate a closely coupled interaction between people and underlying computational models, mediated by visualizations, which include algorithmic transparency as to what guidance is being provided or what data is being hidden. In this talk, I will discuss the role of guidance in user interaction for visualization, framed by several recent research projects which use various types of models to improve the human-computer visual analytic complex.

Biography: Christopher Collins is the Canada Research Chair in Linguistic Information Visualization and an Associate Professor of Computer Science at Ontario Tech University. His research is interdisciplinary, combining information visualization and human-computer interaction with natural language processing, with a focus on interaction design and guidance in visual analytics. While his group is best known for text visualization, he collaborates across a wide variety of topics, including health informatics, machine learning, and computer security. His papers have been published in many venues, including IEEE Transactions on Visualization and Computer Graphics, have received awards at IEEE VIS and ACM CHI, and have been featured in popular media such as CBS News and The New York Times Magazine. Dr. Collins is a past member of the executive committee of the IEEE Visualization and Graphics Technical Committee and regularly serves on the IEEE VIS Conference Organizing Committee. He received his Ph.D. in Computer Science from the University of Toronto.

Keynote 2: Supporting Analytical Conversations

Abstract: Why isn’t interacting with our data as simple as having a conversation with another person? In this talk I will explore how human-data interactions can be framed as conversations. These conversations can occur with, through, and around data. They occur through a variety of modalities, including but not limited to natural language. How should we structure and facilitate analytical conversations, and what can we learn from human-human interaction?

Biography: Melanie Tory manages user research at Tableau Software in the area of visual analytics. Much of her recent work has focused on natural language interaction with visualizations; her foundational research in this area contributed to the design of Tableau’s Ask Data feature. Before joining Tableau, Melanie was an Associate Professor in visualization at the University of Victoria. She earned her PhD in Computer Science from Simon Fraser University and her BSc from the University of British Columbia. Melanie is Associate Editor of IEEE Computer Graphics and Applications and has served as Papers Co-chair for the IEEE Information Visualization and ACM Interactive Surfaces and Spaces conferences.

WORKSHOP TOPICS

The workshop will focus on issues and opportunities related to using machine learning to learn from user interaction in the course of data visualization and analysis. Specifically, we will address research questions including:

SUBMISSIONS

We have two submission tracks: papers and posters.

Papers

We invite research and position papers between 5 and 10 pages in length (NOT including references). All submissions must be formatted according to the VGTC conference style template (i.e., NOT the journal style template that full papers use). Papers are to be submitted online through the Precision Conference System under the MLUI track. All papers accepted for presentation at the workshop will be published on IEEE Xplore and linked from the workshop website. All papers should contain full author names and affiliations. These papers are considered archival; their content may be reused in a follow-up publication only in a journal, and any such extended version must extend the original paper by at least 30%. If applicable, a link to a short video (up to 5 minutes in length) may also be submitted. Papers will be juried by the organizers and selected external reviewers and will be chosen according to relevance, quality, and likelihood of stimulating and contributing to the discussion. At least one author of each accepted paper must register for the conference (even if only for the workshop). Registration information will be available on the IEEE VIS website.

Important Dates

Submission deadline: July 15, 2019

Author notification: August 13, 2019

Camera-ready deadline: August 22, 2019

Speaker Schedule Available: September 15, 2019

Workshop: October 20 or 21, 2019

Posters

We invite extended abstracts between 2 and 4 pages in length (NOT including references), covering both late-breaking work and contributions to this area from other research domains. All submissions must be formatted according to the VGTC conference style template (i.e., NOT the journal style template that full papers use). Extended abstracts are to be submitted via email to learningfromusersworkshop@gmail.com. All abstracts accepted for presentation at the workshop will be linked from the workshop website. All abstracts should contain full author names and affiliations. If applicable, a link to a short video (up to 5 minutes in length) may also be submitted. Abstracts will be juried by the organizers and selected external reviewers and will be chosen according to relevance, quality, and likelihood of stimulating and contributing to the discussion. At least one author of each accepted poster must register for the conference (even if only for the workshop). Registration information will be available on the IEEE VIS website.

Important Dates

Submission deadline: August 20, 2019

Author notification: September 1, 2019

Camera-ready deadline: October 1, 2019

Workshop: October 20 or 21, 2019

ORGANIZERS

John Wenskovitch, Virginia Tech (jw87@vt.edu)

Michelle Dowling, Virginia Tech (dowlingm@vt.edu)

Chris North, Virginia Tech

Remco Chang, Tufts University

Alex Endert, Georgia Tech

David Rogers, Los Alamos National Lab

Fabian C. Peña, Universidad de los Andes

Sriram Yarlagadda, DePaul University

Eli T. Brown, DePaul University