The Challenges and Opportunities in Evaluating Generative Information Retrieval


Details
We're back from summer. And we couldn't be more excited to kick off the new academic year with a SEA XL devoted to The Challenges and Opportunities in Evaluating Generative Information Retrieval by Mark Sanderson.
This is a hybrid event. Where? You can join us in person at Lab42, room L3.36. Please note that this SEA meetup starts at 16:00 CET. The Zoom link is visible once you "attend" the meetup on this page.
Speaker: Mark Sanderson (RMIT, Melbourne, Australia)
Title: The Challenges and Opportunities in Evaluating Generative Information Retrieval
Abstract: Evaluation has long been an important part of information retrieval research. Over decades of research, well-established methodologies have been created and refined that for years have provided reliable, relatively low-cost benchmarks for assessing the effectiveness of retrieval systems. With the rise of generative AI and the explosion of interest in Retrieval Augmented Generation (RAG), evaluation is having to be rethought. In this talk, I will speculate on possible solutions for evaluating RAG systems, as well as highlight some of the opportunities that are opening up. As important as it is to evaluate the new generative retrieval systems, it is also important to recognize that traditional information retrieval has not gone away. However, the way these systems are being evaluated is undergoing a revolution. I will detail the transformation that is currently taking place in evaluation research. Here I will highlight some of the work we've been doing at RMIT University as part of the exciting, though controversial, new research directions that generative AI is enabling.
We continue to count: this is SEA Talk #272