Ujwal Gadiraju


Assistant Professor, TU Delft


An Introduction to Hybrid Human-Machine Information Systems
The Academic Fringe Festival

Kappa Lab: Crowd Computing


Human-Centered AI

Human-Centered AI is an emerging field that sits at the intersection of computer science, data science, and artificial intelligence. It is concerned with how large groups of people can work together, potentially with artificial intelligence algorithms, to solve complex tasks that are currently beyond the capabilities of algorithms and that cannot be solved by a single person alone. These complex tasks mainly concern the creation, enrichment, and interpretation of data, making human-centered computing a building block of data-driven AI systems. Examples of such tasks include the analysis and interpretation of Web data to identify inappropriate content (e.g., hate speech or fake news); the annotation of existing datasets to create ground truth data for training machine learning algorithms; and the explanation of machine-generated results (e.g., automatic diagnostics, product recommendations) to help users decide whether to trust them.
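A recurring building block in such annotation tasks is the aggregation of redundant crowd judgements into a single ground-truth label, for instance by majority voting. The minimal sketch below illustrates the idea in Python; the item identifiers, label names, and judgements are hypothetical.

```python
from collections import Counter

# Hypothetical crowd annotations: each item is judged by several workers.
crowd_labels = {
    "post_001": ["hate_speech", "hate_speech", "neutral"],
    "post_002": ["neutral", "neutral", "neutral"],
    "post_003": ["fake_news", "neutral", "fake_news"],
}

def majority_vote(labels):
    """Return the most frequent label among a set of worker judgements."""
    return Counter(labels).most_common(1)[0][0]

# Aggregate redundant judgements into one ground-truth label per item.
ground_truth = {item: majority_vote(labels) for item, labels in crowd_labels.items()}
print(ground_truth)
# {'post_001': 'hate_speech', 'post_002': 'neutral', 'post_003': 'fake_news'}
```

In practice, aggregation often also models worker reliability and task difficulty, but simple majority voting already conveys how redundant human judgements become training data.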

Human-Centered AI is an essential tool for any AI company: from Facebook to Microsoft, from Google to IBM, and from Spotify to Pandora, all major companies employ human-centered computing in their AI systems to fulfil their data needs, both by involving employees and by reaching out to anonymous crowds through online marketplaces such as Amazon Mechanical Turk and Appen.

The Human-Centered AI theme is led by assistant professors Dr. Ujwal Gadiraju and Dr. Jie Yang, in collaboration with Prof. Alessandro Bozzon and Prof. Geert-Jan Houben. Activities in this research line focus on the creation of computational methods and interaction techniques for human-in-the-loop AI systems, addressing both the analysis and the design of this class of systems. Our goal is to answer questions such as: How can we engage and coordinate large groups of people in creating knowledge that augments machine learning systems? How can we interpret and evaluate machine decisions in alignment with human understanding of the task? How can we mediate the interaction between humans and machines to perform complex tasks that neither can solve alone?
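One concrete instantiation of such a human-in-the-loop pipeline is uncertainty-based active learning, where the model routes the items it is least confident about to human annotators. The sketch below is illustrative only: the synthetic data, the choice of logistic regression, and the `ask_crowd` stub are assumptions made for the example, not a description of a specific system in this research line.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical pool of unlabelled items plus a small labelled seed set.
X_pool = rng.normal(size=(500, 5))
X_seed = rng.normal(size=(20, 5))
y_seed = (X_seed[:, 0] > 0).astype(int)  # stand-in for human-provided labels

def ask_crowd(items):
    """Stub for collecting labels from human annotators (simulated here)."""
    return (items[:, 0] > 0).astype(int)

model = LogisticRegression()
for _ in range(5):
    model.fit(X_seed, y_seed)
    # Uncertainty sampling: pick the pool items the model is least sure about.
    proba = model.predict_proba(X_pool)[:, 1]
    uncertain = np.argsort(np.abs(proba - 0.5))[:10]
    # Route those items to human annotators and fold the answers back in.
    new_X, new_y = X_pool[uncertain], ask_crowd(X_pool[uncertain])
    X_seed = np.vstack([X_seed, new_X])
    y_seed = np.concatenate([y_seed, new_y])
    X_pool = np.delete(X_pool, uncertain, axis=0)
```

The loop captures the division of labour the questions above ask about: the machine decides where human knowledge is most valuable, and people supply the judgements the machine cannot make on its own.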


Human-AI Interaction

Principles for human-AI interaction have been discussed in the HCI community for several years. However, in light of recent advances in AI and the growing role of AI technologies in human-centered applications, a deeper exploration is urgently needed. Within the theme of Human-AI Interaction, we will explore and develop fundamental methods and techniques to harness the virtues of AI in a manner that is beneficial and useful to society at large. AI systems offer computational powers that vastly transcend human capabilities. With the ability to autonomously detect data patterns and derive superior predictions, AI systems are projected to complement, transform, and in several cases even replace human decision-makers. This development is reshaping all relevant stages of economic, political, and societal decision-making. Despite these dynamics, the impact of AI systems on human behavior remains largely unexplored. We will address this crucial gap by carrying out interdisciplinary research to advance the current understanding of how AI systems affect human decision-making.

Complex machine learning models are nowadays deployed in several critical domains, including healthcare and autonomous vehicles, albeit as functional black boxes. Consequently, there has been a recent surge of interest in interpreting the decisions of such complex models to explain their actions to humans. Models that correspond to human interpretation of a task are more desirable in certain contexts and can help attribute liability, build trust, expose biases, and in turn build better models. It is therefore of paramount importance to understand how and which models conform to human understanding of various tasks.
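One common way to probe such a black box is to train an interpretable surrogate, such as a shallow decision tree, on the black box's own predictions and inspect the rules it recovers. The sketch below uses synthetic data and scikit-learn models as stand-ins; it is a minimal illustration of the surrogate idea, not an account of the specific methods pursued in this theme.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a critical-domain dataset (e.g., patient records).
X, y = make_classification(n_samples=1000, n_features=6, random_state=42)

# The "black box": an ensemble whose individual decisions are hard to read.
black_box = RandomForestClassifier(random_state=42).fit(X, y)

# Global surrogate: a shallow tree trained to mimic the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=42)
surrogate.fit(X, black_box.predict(X))

# How faithful is the surrogate, and what human-readable rules does it expose?
print("fidelity:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))
```

The fidelity score indicates how well the readable rules approximate the black box; whether those rules match human understanding of the task is exactly the kind of question this theme investigates.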

This research theme is led by assistant professors Dr. Ujwal Gadiraju and Dr. Jie Yang, in collaboration with Prof. Alessandro Bozzon and Prof. Geert-Jan Houben. Activities in this broad research theme will also focus on normative aspects of Human-AI interaction: the ethics surrounding Human-AI interaction, responsible AI, bias and transparency in such interactions, as well as the concomitant aspects of trust and explainability.