This workshop is part of the IFIP TC.13 International Conference on Human-Computer Interaction – INTERACT 2019.
0900 – Welcome and Introduction
0915 – Lightning Talks by workshop participants (5 min each)
1030 – Morning break
1100 – Full-group brainstorming of possible project areas
1300 – Lunch break
1400 – Breakout Groups
1530 – Afternoon break
1600 – Breakout Groups
1630 – Report back from breakout groups
1700 – Brainstorm next steps
1730 – Workshop concludes
Interfacing AI with Social Sciences: the Call for a New Research Focus in HCI
Hamed Alavi and Denis Lalanne, Human-IST Institute, University of Fribourg, Switzerland
We provide arguments for the necessity of broadening HCI's engagement in translating knowledge created in the social sciences into a major force that can drive AI and direct the ways in which it will impact various aspects of our world. We begin to sketch the outline of this engagement as a research agenda within HCI, with reference to the definitional manifestos on HCI's foundational role as an action science. Parts of our own research that scrutinize some of the major AI projects also inform the presented arguments.
Towards Diverse AI: Can an AI-Human Hybrid Council Prevent Future Apartheids?
Gabriel Diniz Junqueira Barbosa, Simone Diniz Junqueira Barbosa, PUC-Rio, Brazil
Artificial intelligence (AI) is becoming more prevalent in today's society. However, decisions are often made based on single AI models, which we call single-minded AI. The use of single-minded AI might bring great harm, while the use of AI-human collectives might help debias the decision-making process and thus promote better decisions. We illustrate some of these potential risks and benefits through a speculative design action. Our goal is to help frame the discussion on some of the necessary advances in managing AI to allow for better human-AI collaboration.
You Should Not Control What You Do Not Understand: The Risks of Controllability in AI
Gabriel Diniz Junqueira Barbosa, Simone Diniz Junqueira Barbosa, PUC-Rio, Brazil
In this paper, we posit that giving users control over an artificial intelligence (AI) model may be dangerous without their proper understanding of how the model works. Traditionally, AI research has been more concerned with improving accuracy rates than with putting humans in the loop, i.e., with user interactivity. However, as AI tools become more widespread, high-quality user interfaces and interaction design become essential to consumers' adoption of such tools. As developers seek to give users more influence over AI models, we argue this urge should be tempered by improving users' understanding of the models' behavior.
Using AI to Improve Product Teams’ Customer Empathy
Valentina Grigoreanu, Monty Hammontree, and Travis Lowdermilk, Microsoft, USA
During customer conversations, it is important to know both what questions to ask at any point during the development cycle and how to ask them. Asking the right questions to capture rich, accurate, and relevant customer feedback is not easy, and professionally trained researchers cannot be a part of every customer conversation. To scale out researchers' knowledge, we built an artificial intelligence system, the VIVID whisper-bot, trained on three theories: the Hypothesis Progression Framework (contextual research questions for each product development phase), the VIVID grammar framework (asking who, what, why, how, where, how much, and when type questions to recreate rich stories), and the syntactical structure of biased and leading questions. The whisper-bot listens in on a customer conversation, highlights customers' key verbalizations (e.g., pain points using the product), and suggests follow-up interview questions (e.g., removing bias or enriching a story). It thereby encourages good interview practices for everyone, which we believe will increase empathy on product development teams and lead to improvements in the products' user experience.
A View from Outside the Loop
Anders Hedman, KTH The Royal Institute of Technology, Sweden
There is a growing interest today in how to combine human intelligence with artificial intelligence in the best possible ways. One reason for this interest is that in this territory of combined intelligence it is, in many cases, unclear how the total system of human and machine will behave, and unless we know that, how could we know what the perils and opportunities might be? It is clear then that we need a body of research to investigate the nature of humans in the loop in order to design wisely. Such wise design would prima facie appear to be achievable through analysis of existing and possible human-in-the-loop systems from a vantage point outside of such systems. But is such a vantage point achievable today, and if so, will it remain available in the future? This paper considers the possibility of humans as always being in the loop and what that might mean for our understanding of humans in the loop.
Designing a Machine Learning-Based System to Augment the Work Processes of Medical Secretaries
Patrick Johansen, Rune Jacobsen, Lukas Bysted, Mikael Skov, Eleftherios Papachristos, Aalborg University, Denmark
Advances in Machine Learning (ML) provide new opportunities for augmenting work practice. In this paper, we explored how an ML-based suggestion system can augment Danish medical secretaries in their daily tasks of handling patient referrals and allocating patients to a hospital ward. Through a user-centred design process, we studied the work context and processes of two medical secretaries. This generated a model of how a medical secretary would assess a visitation suggestion, and furthermore, it provided insights into how a system could fit into the medical secretaries’ daily tasks. We present our system design and discuss how our contribution may be of value to HCI practitioners designing for work augmentation in similar contexts.
Building a Trustworthy Explainable AI in Healthcare
Retno Larasati and Anna DeLiddo, Knowledge Media Institute, The Open University, UK
The lack of clarity on how the most advanced AI algorithms do what they do creates serious concerns as to the accountability, trust, and social acceptability of AI technologies. These concerns become even bigger when people's well-being is at stake, such as in healthcare. This calls for systems that make decisions transparent, understandable, and explainable for users. This paper briefly discusses trust in AI healthcare systems, proposes a framework relating trust to the characteristics of explanations, and outlines possible future studies towards building trustworthy explainable AI.
MARVIN: Identifying Design Requirements for an AI-Powered Conversational User Interface for Extraterrestrial Space Habitats
Youssef Nahas, Christiane Heinicke and Johannes Schöning, University of Bremen, Germany and Center of Applied Space Technology & Microgravity, Germany
In this workshop paper we report on our early work to design a conversational interface for astronaut scientists in an extraterrestrial habitat (e.g., a habitat on the Moon or Mars). At the workshop we will report on our initial design and first evaluations of our conversational user interface, called MARVIN. Our goal with MARVIN is to support scientists on their missions and during their daily (scientific) routines within and outside the habitat. We are installing our interface in MaMBA, a project that aims to build a first functional extraterrestrial habitat prototype.
Nonverbal Communication in Human-AI Interaction: Opportunities & Challenges
Joshua Newn, Ronal Singh, Fraser Allison, Prashan Madumal, Eduardo Velloso, and Frank Vetere, University of Melbourne, Australia
In recent years, we have explored the use of gaze, an important nonverbal communication signal and cue in everyday human-human interaction, for use with AI systems. Specifically, our work investigated whether an artificial agent, given the ability to observe human gaze, can make inferences about intentions, and how aspects of these inferences can be communicated to a human collaborator. We leveraged a range of human-computer interaction techniques to inform the design of a gaze-enabled artificial agent that can make and communicate predictions. In this paper, we include a snapshot of how AI and HCI can be brought together to inform the design of an explainable interface for an artificial agent. To conclude, we outline the challenges, stemming from our work, that we faced when designing AI systems that incorporate nonverbal communication.
As more and more artificial intelligence systems become incorporated into our everyday lives, it is critical that we understand the ways in which people will interact with them. Although some AI systems will be fully automated, a large number will be incorporated into a larger social ecosystem in which people interact with these systems. In some cases, advances in machine learning are enabling systems to make inferences on data that are more precise than those of human experts; however, a growing body of literature shows that these systems have inherent biases and can have a negative impact on human decision making. It is imperative that researchers understand the interplay of AI systems and human experts, such that the combination of the two can leverage the inherent strengths and weaknesses of each to lead to optimal results. In this workshop, we seek to bring together researchers from both the Artificial Intelligence and Human-Computer Interaction communities to discuss concepts, systems, designs, and empirical studies focusing on the communication and cooperation between individual users and teams of users with AI systems.
We invite submissions for a one-day workshop to discuss critical questions related to Human+AI Collaboration in the development and deployment of Artificial Intelligence (AI) systems.
Papers should be 2-4 pages long in the INTERACT 2019 (Springer LNCS series) format and may address any topics related to the workshop themes. These include, but are not limited to, ongoing work; reflections on past work; bringing methods from HCI and design to AI; and emergent ethical, political, and social challenges.
Submissions are due no later than 29 April 2019 and should be uploaded to: https://easychair.org/conferences/?conf=aihciinteract2019. Participants will be selected based on the quality and clarity of their submissions as they reflect the interests of the workshop. Notifications will go out no later than 7 June 2019. At least one author of each accepted position paper must attend the workshop, and all participants must register for both the workshop and at least one day of the conference.
An expert panel of 3-4 researchers will be recruited to review the submissions and participate in the conference. Participants will be selected based on their prior experience and interest in the workshop, as well as the quality of their submissions. We will focus on recruiting a diverse group of participants, with a balance of students and faculty; industry practitioners and academic audiences; contribution areas within HCI and AI research; and representation of different cultures, genders, and ethnic backgrounds.
The goal of this workshop is to bring together researchers from diverse communities such as Human-Computer Interaction, Machine Learning, Computer-Supported Cooperative Work, Interaction Design, Group Decision Support Systems, Visualisation, Philosophy and Ethics. This workshop will build on the insights gained from a larger workshop we are organising at CHI 2019 (aka.ms/whereisthehuman), but will focus more specifically on issues related to Human+AI interaction.
The structure of this workshop will be focused on creating research partnerships and identifying collaborative projects. Although we will spend time discussing key trends, challenges, and opportunities, the overall goal is to have a focused workshop that will initiate projects extending beyond the workshop itself. The intimate nature of workshops at INTERACT makes them an ideal venue for this type of workshop.
This workshop will focus on three sub-themes related to Human + AI Collaboration:
Integrating Artificial and Human Intelligence: AI systems and humans both have unique abilities and are typically better at certain complementary tasks than others. For instance, while AI systems can summarize voluminous data to identify latent patterns, humans can extract meaningful, relatable, and theoretically grounded insights from such patterns. What kind of research designs or problems are most amenable to and would benefit the most from combining artificial and human intelligence? What challenges might surface in attempting to do so? How do issues of trust and accountability impact results [5, 7]?
Collaborative Decision Making: How can we harness the best of humans and algorithms to make better decisions than either alone? How do we ensure that when there is a human-in-the-loop—such as in complex or life-changing decision-making—they remain critical and meaningful, while creating and maintaining an enjoyable user experience? Where is the line between decision support that anticipates the needs of the user and decision support that removes the user's ability to bring in novel, qualitative, critical knowledge in service of the system's goals?
Explainable and Explorable AI: What does the human need to effectively utilize AI insights? How can users explore AI systems' results and logic to identify failure modes that might not be easy to spot? Examples might be undesirable impacts on latent groups not corresponding to categories in the dataset, difficult-to-spot changes ('concept drift'), or feedback loops in the socio-technical phenomena the AI system is modelling over time.
Tom Gross is full professor and chair of Human-Computer Interaction at the Faculty of Information Systems and Applied Computer Science of the University of Bamberg, Germany. His research interests are particularly in the fields of Computer-Supported Cooperative Work, Human-Computer Interaction, and Ubiquitous Computing. He has participated in and coordinated activities in various national and international research projects and is a member of the IFIP Technical Committee on 'Human Computer Interaction' (TC.13). He has been conference co-chair and organiser of many international conferences.
Kori Inkpen is a Principal Researcher at Microsoft and a member of the MSR AI team. Dr. Inkpen's research interests are currently focused on Human+AI Collaboration to enhance decision making, particularly in high-impact social contexts, which inevitably delves into issues of bias and fairness in AI. Kori has been a core member of the CHI community for over 20 years.
Brian Y. Lim is an assistant professor in the Department of Computer Science at the National University of Singapore. He is leading the NUS Ubicomp Lab, where he and his team design, develop, and evaluate needs-driven infocomm technologies to address new societal challenges, such as urban systems, sustainability and energy management, healthcare and well-being. He has conducted research in intelligent systems across multiple modalities (IoT sensors, mobile interfaces, web and dashboards) and multiple scales (smartphones, smart homes, and smart cities). This allows him to develop impactful technological solutions for multiple domains, and to translate these innovations from the lab to society.
Michael Veale is a doctoral researcher in responsible public sector machine learning at the Dept. of Science, Technology, Engineering & Public Policy at University College London. His work spans HCI, law and policy, looking at how societal and legal concerns around machine learning are understood and coped with on the ground.
Three key outcomes are expected from this workshop. First, community building and networking among key researchers in the area of Human+AI collaboration, with the potential to lead to future collaborations on projects or larger grant proposals. Second, an outline of important research directions for this emerging area. Third, one or more research projects that will continue beyond the workshop, the results of which will be published in premier research venues.
1. Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Shu, J., Iqbal, S., Bennett, P.N., Inkpen, K., Teevan, J., Kikin-Gil, R. and Horvitz, E. Guidelines for Human-AI Interaction. In Proceedings of the Conference on Human Factors in Computing Systems - CHI 2019 (May 4-9, Glasgow, Scotland). ACM, N.Y., (to appear).
2. Gama, J., Žliobaitė, I., Bifet, A., Pechenizkiy, M., and Bouchachia, A. A Survey on Concept Drift Adaptation. ACM Computing Surveys 46, 4 (2014). https://doi.org/10.1145/2523813
3. Green, B., and Chen, Y. Disparate Interactions: An Algorithm-in-the-Loop Analysis of Fairness in Risk Assessments. In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency - FAT* 2019 (January 29-31, Atlanta, GA). ACM N.Y.
4. Gross, T. Supporting Informed Negotiation Processes in Group Recommender Systems. i-com - Journal of Interactive Media 14, 1 (Jan. 2015). pp. 53-61.
5. Saxena, N.A., Huang, K., DeFilippis, E., Radanovic, G., Parkes, D.C., Liu, Y. How Do Fairness Definitions Fare? Examining Public Attitudes Towards Algorithmic Definitions of Fairness. In Proceedings of the Association for the Advancement of Artificial Intelligence – AAAI 2019.
6. Veale M. and Binns, R. Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Big Data & Society 4, 2. https://doi.org/10/gdcfnz
7. Yin, M., Wortman Vaughan, J., Wallach, H., Understanding the Effect of Accuracy on Trust in Machine Learning Models. In Proceedings of the Conference on Human Factors in Computing Systems - CHI 2019 (May 4-9, Glasgow, Scotland). ACM, N.Y., (to appear).
8. Baumer, E.P.S. Toward Human-Centered Algorithm Design. Big Data & Society 4, 2 (2017).
9. De Choudhury, M. and Kiciman, E. Integrating Artificial and Human Intelligence in Complex, Sensitive Problem Domains: Experiences from Mental Health. AI Magazine 39, 3 (2018).
10. Olteanu, A., Castillo, C., Diaz, F. and Kiciman, E. Social Data: Biases, Methodological Pitfalls, and Ethical Boundaries. 2016.
11. Veale, M., Van Kleek, M. and Binns, R. Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making. In Proceedings of the Conference on Human Factors in Computing Systems - CHI 2018 (April 21-26, Montreal, Canada). ACM, N.Y.