2021 Workshop Summary
Date and Location: 24 October 2021, 12:00-3:30pm CDT, held virtually with IEEE VIS (24-29 October 2021)
The Machine Learning from User Interactions (MLUI) workshop seeks to bring together researchers to share their knowledge and build collaborations at the intersection of the Machine Learning and Visualization fields, with a focus on learning from user interaction. Rather than focusing on what visualization can do to support machine learning (as in current Explainable AI research), this workshop seeks contributions on how machine learning can support visualization. Such support includes human-centric sensemaking processes, user-driven analytical systems, and helping users gain insight from data. Our intention in this workshop is to generate open discussion about how we currently learn from user interaction, how to build intelligent visualization systems, and how to proceed with future research in this area. We hope to foster discussion regarding systems, interaction models, and interaction techniques. MLUI 2021 plans to continue the momentum in this research area that the previous workshops have built.
WORKSHOP SCHEDULE
Session 1: 12:00-1:30pm CDT
- 12:00pm: Introduction and Welcome
- 12:05pm: Keynote 1: Opportunities for Understanding Semantics of User Interactions by Alex Lex
- 12:35pm: Keynote 2: User Interaction in Visual Analytics: Beyond Ephemeral Events by Alex Endert
- 1:05pm: Discussion – Future of the Field; Organizing Block 2 Topics
Session 2: 2:00-3:30pm CDT
- 2:00pm: Welcome Back
- 2:05pm: DocTable: Table-Oriented Interactive Machine Learning for Text Corpora, by Yeshwanth Devabhaktuni, Sriram Yarlagadda, David J Scroggins, Fang Cao, Franklin J Buitron, Eli T Brown (paper)
- 2:20pm: SHIM: Semantic Hierarchical Clustering with Interactive Machine Learning, by Fang Cao, Yuanwei Tu, Eli T Brown (paper)
- 2:35pm: FacetRules: Discovering and Describing Related Groups, by Lebna V Thomas, Jiahao Deng, Eli T Brown (paper)
- 2:50pm: Discussion – Breakout Discussions
- 3:20pm: Synthesis and Next Steps
KEYNOTES
Keynote 1: Opportunities for Understanding Semantics of User Interactions
Abstract: Most logging approaches record system events at a fairly low level of abstraction. In this talk, I will argue that higher levels of abstraction are possible and desirable. I will highlight opportunities that software developers have for increasing semantics by carefully recording meaningful events. I will then show that we can leverage algorithmic methods to infer user intents. Finally, I will show opportunities for eliciting key information about insights directly from users. Explicitly asking users about their intentions has benefits both for users, who can later retrace their steps more efficiently, and for system developers, who can learn more about the usage patterns of their system and the motivations of their users. There are diverse user input modalities that can provide information at different levels of abstraction and invasiveness. These modalities range from multiple-choice responses, to structured notes, to “think-aloud-like” approaches. In combination, these approaches are promising for building systems that have a better understanding of their users and hence can support users in their analytical tasks.
Biography: Alex Lex is an Associate Professor of Computer Science at the Scientific Computing and Imaging Institute and the School of Computing at the University of Utah. He directs the Visualization Design Lab, where his group develops visualization methods and systems to help solve today’s scientific problems. Before joining the University of Utah, he was a lecturer and post-doctoral visualization researcher at Harvard University. He received his PhD, master’s, and undergraduate degrees from Graz University of Technology. In 2011, he was a visiting researcher at Harvard Medical School. He is the recipient of an NSF CAREER award and multiple best paper awards or best paper honorable mentions at IEEE VIS, ACM CHI, and other conferences. He also received a best dissertation award from his alma mater. He co-founded Datavisyn (http://datavisyn.io), a startup company developing visual analytics solutions for the pharmaceutical industry, where he is currently spending his sabbatical.
Keynote 2: User Interaction in Visual Analytics: Beyond Ephemeral Events
Abstract: User interaction in visual analytic tools is often treated as an event that causes a direct response in the system based on the operation performed. While effective for fostering exploration and analysis by the people using these systems, user interactions also contain signals about the people who performed them. In this talk, I discuss opportunities for promoting user interactions to first-order objects that the system can use for a plethora of useful tasks, including detecting cognitive biases that may exist, guiding and steering machine learning models, and more. I will provide examples of past and current research that explores this direction, and discuss potential future directions.
Biography: Alex Endert is an Associate Professor in the School of Interactive Computing at the Georgia Institute of Technology. He directs the Visual Analytics Lab and conducts research to help people make sense of data and models through interactive visualizations and visual analytic systems. The lab’s research is also often tested in practice in domains such as intelligence analysis, cyber security, manufacturing safety, and others. The lab’s work is funded by NSF, DARPA, DOD, DHS, NIJ, and generous industry partners. In 2018, he was awarded an NSF CAREER Award for work on Visual Analytics by Demonstration. He received his Ph.D. in Computer Science from Virginia Tech in 2012. In 2013, his work on Semantic Interaction was awarded the IEEE VGTC VPG Pioneers Group Doctoral Dissertation Award and the Virginia Tech Computer Science Best Dissertation Award.
WORKSHOP TOPICS
The workshop will focus on issues and opportunities related to using machine learning to learn from user interaction in the course of data visualization and analysis. Specifically, we will focus on research questions including:
- How are machine learning algorithms currently learning from user interaction, and what other possibilities exist?
- What kinds of interactions can provide feedback to machine learning algorithms?
- What can machine learning algorithms learn from interactions?
- Which machine learning algorithms are most applicable in this domain?
- How can machine learning algorithms be designed to enable user interaction and feedback?
- How can visualizations and interactions be designed to exploit machine learning algorithms?
- How can visualization system architectures be designed to support machine learning?
- How should we manage conflicts between the user’s intent and the data or machine learning algorithm capabilities?
- How can we evaluate systems that incorporate both machine learning algorithms and user interaction together?
- How can machine learning and user interaction together make both computation and user cognition more efficient?
- How can we support the sensemaking process by learning from user interaction?
SUBMISSIONS
This year, we plan to accept both short and full papers jointly in the same submission block. Full papers are 5-10 pages (excluding references), while short papers are 2-4 pages (excluding references). Short papers are intended to capture either (1) limited aspects of a larger work that fit our call or (2) late-breaking work not yet mature enough for a full paper submission. The option of submitting a short paper replaces the posters track offered at previous MLUI workshops.
Submissions should be uploaded to the MLUI 2021 track on PCS, which can be found under VGTC->VIS2021. All submissions will be reviewed by a committee that we will organize and that will include the workshop committee members. The size of the committee will be determined by the number of submissions, such that each submission is reviewed by at least two committee members. Both full and short paper metadata (author information, title, affiliation, etc.) as well as the submissions themselves will be posted to the workshop website in advance of the event. Workshop papers will be archived on IEEE Xplore following the conference.
Important Dates
Submission deadline: July 30, 2021
Author notification: August 31, 2021
Camera-ready deadline: September 10, 2021
Speaker schedule available: October 1, 2021
ORGANIZERS
John Wenskovitch, Pacific Northwest National Lab and Virginia Tech (john.wenskovitch@pnnl.gov)
Michelle Dowling, Grand Valley State University (dowlinmi@gvsu.edu)
Eli T. Brown, DePaul University
Ab Mosca, Northeastern University
Conny Walchshofer, Johannes Kepler University Linz
Marc Streit, Johannes Kepler University Linz
Kai Xu, Middlesex University
Steering Committee
Chris North, Virginia Tech
Remco Chang, Tufts University
Alex Endert, Georgia Tech
David Rogers, Los Alamos National Lab
Kris Cook, Pacific Northwest National Lab