Crowd Computing & Human-Centered AI

We focus on core areas that are instrumental in developing the next generation of AI systems:
  • Human-in-the-loop AI
  • Human-AI interaction
  • User Modeling and Explainability
Our work considers the computational role of humans for AI, cast as "AI by humans", and the interactional role of humans with AI systems, cast as "AI for humans". As algorithmic decision-making becomes prevalent across many sectors, it is important to help users understand why certain decisions are proposed.

This research theme is a convergence of two research lines – "Epsilon" and "Kappa". The Human-in-the-loop AI and Human-AI interaction activities are jointly coordinated and led by Ujwal Gadiraju and Jie Yang. The User Modeling and Explainability activities are coordinated and led by Nava Tintarev.

Human-in-the-loop AI

Machine learning models have been criticized for their lack of robustness, fairness, and transparency. For models to learn comprehensive, fine-grained, and unbiased patterns, they have to be trained on large numbers of high-quality data instances whose distribution is representative of real application scenarios. Creating such data is not only a long, laborious, and expensive process; it is sometimes impossible. In this theme, we analyze the fundamental computational challenges in the quest for robust, interpretable, and trustworthy AI systems. We argue that to tackle such fundamental challenges, research should explore a novel crowd computing paradigm in which diverse and distributed crowds can contribute knowledge at the conceptual level.
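
As an illustration of how human input can be folded into model training, the sketch below pairs uncertainty sampling with simulated crowd judgments: the model repeatedly asks the crowd to label the instances it is least certain about. This is a minimal sketch under stated assumptions, not our actual pipeline; the toy data, the crowd_label function, and the batch sizes are hypothetical placeholders.

  # Minimal human-in-the-loop sketch: uncertainty sampling with crowd-provided labels.
  # Assumes a scikit-learn classifier; crowd_label() stands in for real worker input.
  import numpy as np
  from sklearn.datasets import make_classification
  from sklearn.linear_model import LogisticRegression

  # Toy data; in practice X would be unlabeled data from the target application.
  X, oracle = make_classification(n_samples=500, n_features=10, random_state=0)

  def crowd_label(idx):
      # Hypothetical stand-in: a real system would dispatch this instance as a task
      # to crowd workers and aggregate their judgments.
      return oracle[idx]

  labeled = list(range(20))                      # small seed set of labeled instances
  unlabeled = list(range(20, len(X)))            # pool still available for annotation
  labels = {i: crowd_label(i) for i in labeled}

  model = LogisticRegression(max_iter=1000)
  for _ in range(5):                             # a few annotation rounds
      model.fit(X[labeled], [labels[i] for i in labeled])
      # Ask humans about the instances the model is least sure of.
      probs = model.predict_proba(X[unlabeled])
      uncertainty = 1.0 - probs.max(axis=1)
      query = [unlabeled[i] for i in np.argsort(uncertainty)[-10:]]
      for idx in query:
          labels[idx] = crowd_label(idx)
          labeled.append(idx)
          unlabeled.remove(idx)

The crowd computing paradigm described above goes further, soliciting knowledge at the conceptual level rather than only instance labels, but the basic loop (model queries, humans respond, model updates) remains the same.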

Human-AI Interaction

In light of recent advances in AI and the growing role of AI technologies in human-centered applications, a deeper exploration of the interaction between humans and machines is urgently needed. Within this theme of Human-AI interaction, we explore and develop fundamental methods and techniques to harness the virtues of AI in a manner that is beneficial and useful to society at large. From the interaction perspective, more robust and interpretable systems can help build trust and increase system uptake. As AI systems become more commonplace, people must be able to make sense of their encounters and interpret their interactions with such systems.

User Modeling & Explainability

Explanations are needed when there is a large knowledge gap between humans and AI or information systems, or when joint understanding is only implicit. This type of joint understanding is becoming increasingly important, for example, when news providers and social media systems such as Twitter and Facebook filter and rank the information that people see. To link the mental models of systems and people, our work develops ways to supply users with a level of transparency and control that is meaningful and useful to them. We develop methods for generating and interpreting rich meta-data that helps bridge the gap between computational and human reasoning (e.g., for understanding subjective concepts such as diversity and credibility). We also develop a theoretical framework for generating better explanations (as both text and interactive explanation interfaces) that adapts to a user and their context. To better understand the conditions for explanation effectiveness, we look at when to explain (e.g., surprising content, lean in/lean out, risk, complexity) and what to adapt to (e.g., group dynamics, personal characteristics of a user).
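
To make the adaptation idea concrete, the toy sketch below varies an explanation with a "when to explain" signal (surprise, risk) and a "what to adapt to" signal (user expertise). The attributes, thresholds, and wording are illustrative assumptions rather than the framework itself.

  # Toy sketch of a context-adaptive explanation; attributes and templates are
  # illustrative assumptions, not the actual explanation framework.
  from dataclasses import dataclass

  @dataclass
  class UserContext:
      expertise: str       # e.g. "novice" or "expert"
      high_risk: bool      # does the decision carry significant consequences?
      surprising: bool     # does the item deviate from the user's usual profile?

  def explain(item: str, score: float, top_feature: str, ctx: UserContext) -> str:
      # When to explain: only surprising or high-risk recommendations get detail.
      if not (ctx.surprising or ctx.high_risk):
          return f"Recommended: {item}."
      # What to adapt to: tailor the wording to the user's expertise.
      if ctx.expertise == "expert":
          return (f"{item} is recommended (score {score:.2f}); "
                  f"the strongest contributing signal was '{top_feature}'.")
      return (f"We suggest {item} mainly because of its {top_feature}, "
              f"which matches what you usually read.")

  print(explain("an article on EU climate policy", 0.87, "topic overlap",
                UserContext(expertise="novice", high_risk=False, surprising=True)))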

Projects