Details

Searchplex invites the AI and Search community in the Netherlands to the inaugural edition of Find & Mind: The Search & AI Meetup in Amsterdam!

We’ll kick off with food, talks, and plenty of time for networking (in-person only).

This first edition features two talks tackling real challenges in building modern AI systems — from why retrieval is the foundation of every LLM pipeline, to how semantic caching makes query understanding fast and practical.

Time & Location
📅 September 17th, 2025 — from 17:30 onwards

📍 Luminis Amsterdam
Suikersilo-West 20, 1165 MP Halfweg

About The Community
Find & Mind is a new vendor-neutral, community-driven meetup in Amsterdam. We bring together people passionate about Search, Information Retrieval, RAG, LLMs, and Agentic AI.
Whether you’re building production search systems, experimenting with retrieval-augmented generation, or just curious about the future of information access — this is the place for you. All skill levels welcome.

Schedule

  • 17:30 – 18:15 · Walk-in & food
  • 18:15 – 18:45 · Every AI System is a Search System - Ravindra Harige (Searchplex)
  • 18:45 – 19:00 · Break
  • 19:00 – 19:30 · LLM Query Understanding without the LLM latency using Semantic Caching - Arian Stolwijk (Giftomatic)
  • 19:30 onwards · Networking & socializing

Talk Details
Every AI System is a Search System
We’re in the middle of a generative AI revolution. But as many of you have discovered building Retrieval-Augmented Generation systems, there’s a hard truth: the “G” is only as good as the “R.”
In this talk, we’ll unpack why. Success in AI today doesn’t come from prompt-engineering tricks; it comes from solving the same problems search engineers have been tackling for decades. Chunking strategies, hybrid retrieval, multilingual and multimodal data, ranking pipelines, evaluation frameworks — this is where proofs of concept either succeed or quietly fail. And at enterprise scale, with privacy requirements, compliance constraints, and LLMOps overhead, the challenge only grows harder.
This is where our communities converge. The future of agentic systems and contextual AI won’t be built on generation alone: it will be built on retrieval that actually works. AI may be the new interface, but retrieval is the foundation.

Speaker: Ravindra Harige, Founder at Searchplex

LLM Query Understanding without the LLM latency using Semantic Caching
Using LLMs in a classical search setup to understand the user's query or rerank the results is expensive and can be slow, especially for a search-as-you-type experience. At Giftomatic we are building a search solution that uses LLMs/machine learning for query understanding, and we make it fast with caching. In this talk I will describe the architecture: how we 'understand' user queries, how we use that understanding, and how we increase cache hits while preventing inappropriate ones.
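The talk's actual architecture isn't described here, but the general idea of semantic caching can be sketched as follows: embed each query, and on lookup return a cached LLM interpretation when a previously seen query is similar enough, calling the LLM only on a miss. Everything below is illustrative — `embed` is a toy character-trigram stand-in for a real embedding model, and the threshold is a made-up value; too low a threshold produces exactly the "inappropriate cache hits" the abstract warns about.

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy stand-in for a real embedding model: character-trigram counts."""
    t = f"  {text.lower()}  "
    return Counter(t[i:i + 3] for i in range(len(t) - 2))


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class SemanticCache:
    """Cache LLM query interpretations, keyed by embedding similarity."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold  # higher = stricter, fewer false hits
        self.entries = []           # list of (embedding, cached value)

    def lookup(self, query: str):
        q = embed(query)
        best_score, best_value = 0.0, None
        for vec, value in self.entries:
            score = cosine(q, vec)
            if score > best_score:
                best_score, best_value = score, value
        return best_value if best_score >= self.threshold else None

    def store(self, query: str, value: str) -> None:
        self.entries.append((embed(query), value))


def understand(query: str, cache: SemanticCache, llm) -> str:
    cached = cache.lookup(query)
    if cached is not None:
        return cached      # fast path: no LLM latency
    value = llm(query)     # slow path: call the LLM once
    cache.store(query, value)
    return value
```

A production version would swap the linear scan for an approximate-nearest-neighbour index and the toy trigram vectors for a real embedding model, but the hit/miss logic stays the same.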

Speaker: Arian Stolwijk, Software Engineer at Giftomatic

Meetup Organizers: Ravindra Harige (Searchplex) & Daniel Spee (Luminis)
