Our work brings together research expertise in information retrieval (IR), recommender systems, natural language processing, and machine learning. We focus on core IR topics, from algorithms to applications.
Our aim is to create more efficient, effective, and interpretable neural models for information retrieval. In our research on neural information retrieval, we explore how large language models can enhance retrieval and ranking algorithms. We focus primarily on efficiency, including runtime and training efficiency, as well as explainability. To address these challenges, we investigate methods to improve the runtime performance of large models, optimize training processes, and develop techniques for interpreting and explaining the decisions made by neural retrieval systems.
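One common pattern behind this efficiency concern is the retrieve-then-rerank pipeline: a cheap first-stage scorer shortlists candidates so that an expensive (e.g., neural) scorer only runs on a few documents. The sketch below illustrates that pattern with toy, hypothetical scoring functions (simple term overlap, not any particular model):

```python
# Minimal retrieve-then-rerank sketch. Both scorers are toy stand-ins:
# in practice, the first stage might be BM25 and the second a large
# neural reranker whose per-document cost motivates the shortlist.

def cheap_score(query: str, doc: str) -> float:
    """First-stage score: fraction of query terms appearing in the document."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / max(len(q_terms), 1)

def expensive_score(query: str, doc: str) -> float:
    """Stand-in for a costly reranker: rewards query terms appearing early."""
    q_terms = query.lower().split()
    d_terms = doc.lower().split()
    score = 0.0
    for t in q_terms:
        if t in d_terms:
            score += 1.0 / (1 + d_terms.index(t))
    return score

def retrieve_then_rerank(query, corpus, k=3):
    # Stage 1: shortlist top-k documents by the cheap score over the corpus.
    shortlist = sorted(corpus, key=lambda d: cheap_score(query, d), reverse=True)[:k]
    # Stage 2: rerank only the shortlist with the expensive scorer.
    return sorted(shortlist, key=lambda d: expensive_score(query, d), reverse=True)

corpus = [
    "neural ranking models for web search",
    "efficient training of retrieval models",
    "cooking recipes for beginners",
    "interpreting neural retrieval decisions",
]
print(retrieve_then_rerank("neural retrieval models", corpus, k=2))
```

The design trade-off is exactly the one studied in this area: the shortlist size k bounds how often the expensive model runs, trading effectiveness for runtime.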
Predictive models based on complex large language models are widely used in domains such as search engines, recommender systems, health, legal, and finance. However, they often function as black boxes, producing predictions or rankings without revealing how different factors influence their outputs. Our research focuses on developing explainable AI techniques for information retrieval tasks from multiple angles. We study interpretability approaches to assess ranking models in web search, question answering, and fact checking.
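One simple family of such interpretability techniques is occlusion-based attribution: delete one input term at a time and observe how the black-box score changes. The sketch below uses a toy, hypothetical relevance scorer (plain term counting) purely to make the idea concrete; in practice the scorer would be a neural ranking model:

```python
# Occlusion-style attribution for a black-box relevance score.
# The scorer is a hypothetical stand-in (query-term counting).

def score(query: str, doc: str) -> float:
    """Black-box relevance score: number of query terms found in the document."""
    d_terms = doc.lower().split()
    return float(sum(t in d_terms for t in query.lower().split()))

def occlusion_attribution(query: str, doc: str) -> dict:
    """Attribute the score to document terms by deleting one term at a time."""
    base = score(query, doc)
    terms = doc.split()
    attributions = {}
    for i, term in enumerate(terms):
        occluded = " ".join(terms[:i] + terms[i + 1:])
        # A large score drop means the occluded term mattered to the ranker.
        attributions[term] = base - score(query, occluded)
    return attributions

attr = occlusion_attribution("neural retrieval", "interpreting neural retrieval decisions")
print(attr)  # "neural" and "retrieval" get credit; the other terms get 0.0
```

The appeal of this approach is that it treats the model as a black box, which matches the setting described above where the model's internals are not accessible.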
Despite the significant advances in question answering brought about by large language models, these models still struggle with certain types of questions, including those requiring numerical proficiency, compound questions, and complex reasoning such as fact verification across diverse sources. Our research is dedicated to understanding these limitations of large language models in complex question answering tasks. Moreover, we are actively developing innovative approaches to address these challenges and make substantial advances in real-world question answering applications.
The IR technology embedded in our lives (e.g., Google search or Amazon recommendations) is designed for the average user. Children have particular cognitive, social, physical, and emotional needs that make the information they seek, their experiences, sense-making, and skills different from those of adults. Children are not simply short adults; they are unique users, and they use IR tools differently than adults do. With research in this area we aim to empower children so they can proficiently conduct information discovery tasks. This involves better modeling their needs and expectations when interacting with IR systems, identifying the challenges they face, and building the algorithmic bridges needed to address them: from new ranking models and novel ways to generate SERPs to new ways to model these young users.
In our quest to access information, we regularly interact with search, recommender, and question-answering systems. In theory, these systems serve a broad range of users in different domains attempting to address different tasks. But how do they fare when faced with users, contexts, and tasks they were not originally designed for? In this area of research, we focus on identifying limitations inherent in non-traditional, i.e., unanticipated, ecosystems and designing the algorithms needed to address them. To do so, we study how users interact with IR systems through multiple lenses; we also identify how to leverage human traits to model users and how that can inform IR technology design. Along the way, we question how to evaluate novel solutions in this area in the absence of benchmarks or under other existing constraints, e.g., federal regulations and fairness objectives.