What we’re about

Information drives the planet. We organize talks on implementations of information retrieval: in search engines, in recommender systems, and in conversational assistants. Our meetups are usually held on the last Friday of the month at Science Park Amsterdam. Typically we have two talks in a row, one industrial and one academic, 25+5 minutes each; no marketing, just algorithms, followed by drinks. We also host ad hoc "single shot" events whenever an interesting visitor stops by to share their work.

Search Engines Amsterdam is supported by the ELLIS unit Amsterdam.

Follow @irlab_amsterdam on Twitter for the latest updates.

Upcoming events

  • SEA: October - Multimodal Retrieval

    Lab 42, Science Park 900, 1098 XH Amsterdam, NL

    📍 Location: Room L3.36, Lab42, Science Park Amsterdam
    💻 Zoom link:
    https://uva-live.zoom.us/j/67649187004

    We’re excited to announce our upcoming SEA Talk on Multimodal Retrieval, featuring two speakers:

    Kishan Parshotam – Flume AI

    Title: The Unindexed Trillion-Dollar Industry – Why search engines overlook this information, and how Flume is tackling it
    Abstract: To be announced
    Bio: Kishan Parshotam is the CTO and Co-Founder of Flume, where he is helping build next-generation search for unindexed industries. Previously, he led AI at AltScore, helping the fintech secure $3.5M in funding, and co-founded Just A.I., which developed scalable machine-learning tools for smallholder farmers. He also worked on computer vision research at Prosus Group, contributing to image search optimization and publishing at CVPR. Kishan holds a Master’s in Artificial Intelligence from the University of Amsterdam (2020).

    Jingfen Qiao – University of Amsterdam
    Title: Reproducibility, Replicability, and Insights into Visual Document Retrieval with Late Interaction
    Abstract: Visual Document Retrieval (VDR) is an emerging research area that focuses on encoding and retrieving document images directly, bypassing the dependence on Optical Character Recognition (OCR) for document search. A recent advance in VDR was introduced by ColPali, which significantly improved retrieval effectiveness through a late interaction mechanism. ColPali’s approach demonstrated substantial performance gains over existing baselines on an established benchmark.

    In this study, we investigate the reproducibility and replicability of VDR methods with and without late interaction mechanisms by systematically evaluating their performance across multiple pre-trained vision–language models. Our findings confirm that late interaction yields considerable improvements in retrieval effectiveness; however, it also introduces computational inefficiencies during inference. We further examine the adaptability of VDR models to textual inputs and assess their robustness across text-intensive datasets when scaling the indexing mechanism. Finally, we explore how query-patch matching contributes to VDR performance, finding that query tokens tend to match visually similar or contextually related patches rather than exact counterparts.

    Bio: Jingfen is a fourth-year PhD student at the IRLab, University of Amsterdam. Her research focuses on developing language models for dense and sparse retrieval, with a particular interest in multimodal document retrieval.

    Counter: SEA Talks #291 and #292.
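
    As background for the second talk: the late interaction used by ColPali is the ColBERT-style MaxSim operator, in which every query token embedding is compared against every document patch embedding, each token keeps only its best-matching patch, and those per-token maxima are summed into the document score. Below is a minimal NumPy sketch of this idea; the function name, array shapes, and toy data are illustrative assumptions, not the speakers' code.

    ```python
    import numpy as np

    def maxsim_score(query_vecs: np.ndarray, patch_vecs: np.ndarray) -> float:
        """ColBERT-style late-interaction (MaxSim) score.

        query_vecs: (num_query_tokens, dim) L2-normalized query token embeddings.
        patch_vecs: (num_patches, dim) L2-normalized document patch embeddings.
        Each query token keeps only its best-matching patch; the per-token
        maxima are summed into a single relevance score for the document.
        """
        sims = query_vecs @ patch_vecs.T        # (num_query_tokens, num_patches)
        return float(sims.max(axis=1).sum())    # best patch per token, then sum

    # Toy usage: rank two hypothetical "documents" for one query.
    rng = np.random.default_rng(0)
    normalize = lambda m: m / np.linalg.norm(m, axis=1, keepdims=True)
    query = normalize(rng.normal(size=(8, 128)))                          # 8 query tokens
    docs = [normalize(rng.normal(size=(1024, 128))) for _ in range(2)]    # patch grids
    print([maxsim_score(query, d) for d in docs])
    ```

    Because every query token must be compared against every patch embedding, the index stores one vector per patch rather than one per document, which is the computational inefficiency at inference time that the talk examines.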

    16 attendees

Members: 2,951