Amsterdam JUG Meetup at Picnic


Details
Join us for the latest Amsterdam JUG Meetup at Picnic HQ in Amsterdam.
For more details and discussion, join the Friends of OpenJDK (Foojay.io) Slack at bit.ly/join-foojay-slack and use the #jug-amsterdam channel for conversations about this meetup.
Agenda
17:00 - Doors Open (And Food!)
18:00 - 18:30 Talk 1: "Datafaker: the Most Powerful Fake Data Generator Library" by Elias Nogueira from Backbase
18:30 - 19:00 Talk 2: "GraalVM in Action: Building a Polyglot Rule Engine for Dynamic Business Logic" by Rick Ossendrijver & Enric Sala from Picnic
19:00 - 19:15 Short Break
19:15 - 19:45 Talk 3: "LLMOps: A Developer’s Roadmap from Model to Production" by Soham Dasgupta from Microsoft
19:45 - 20:15 Talk 4: "Rediscovering Apollo 11: Using Java and Vector Search to Explore the Trip to the Moon" by Raphael de Lio from Redis
20:15 - Networking drinks
Talk 1: Datafaker: the Most Powerful Fake Data Generator Library
Data generators in software testing play a critical role in creating realistic and diverse datasets for testing scenarios. However, they present challenges, such as ensuring data diversity, maintaining quality, facilitating validation, and ensuring long-term maintainability.
While many engineers are familiar with these challenges, they often resort to non-specialized tools, such as Apache Commons' RandomStringUtils or java.util.Random, concatenating the results with fixed data. This approach does not scale and may not yield valid datasets.
Thankfully, we have Datafaker (datafaker.net), a library for Java and Kotlin to generate fake data, based on generators, that can be very helpful when generating test data to fill a database, to generate data for a stress test, or to anonymize data from production services.
With practical examples, you will learn how to generate data based on different or multiple locales, random enum values, different generators (such as address, code books, currency, date and time, finance, internet, measurement, money, name, time), custom (data) providers, sequences (collections and stream), date formats, expressions, transformations, and unique values.
In the end, the talk will also highlight patterns for generating better data, such as the Test Data Factory to add more control to the data generation.
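As a small taste of the API the talk covers, here is a minimal sketch of Datafaker usage. It assumes the net.datafaker:datafaker dependency is on the classpath, and the generated values vary on every run:

```java
import java.util.Locale;
import net.datafaker.Faker;

public class DatafakerSketch {
    public static void main(String[] args) {
        Faker faker = new Faker(); // default locale

        // Built-in generators for names, addresses, internet data, ...
        System.out.println(faker.name().fullName());
        System.out.println(faker.address().city());
        System.out.println(faker.internet().emailAddress());

        // Locale-aware generation, e.g. Dutch street names
        Faker dutch = new Faker(new Locale("nl"));
        System.out.println(dutch.address().streetName());

        // Expressions combine several generators in one template
        System.out.println(faker.expression("#{Name.first_name} lives in #{Address.city}"));
    }
}
```

The talk goes well beyond this, covering custom providers, sequences, transformations, and unique values.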
Talk 2: GraalVM in Action: Building a Polyglot Rule Engine for Dynamic Business Logic
In today's fast-paced tech world, backend systems need to be flexible and self-service to support evolving business needs. For Picnic, this means building a backend that lets operators and analysts directly define and manage the logic that drives customer interactions, product personalization, and internal workflows.
Our solution is a Rule Engine platform where operators can easily attach logic and effects to events by creating, testing and managing their own rules. Powered by GraalVM's polyglot capabilities, it allows analysts and other stakeholders to write rules in JavaScript or Python. This event-driven system enables self-service without developer involvement. It handles actions across the Picnic system landscape, from updating customer data to triggering communications.
In this talk, we will discuss the architecture behind our Rule Engine and share some of the challenges we faced with GraalVM's polyglot capabilities. We will explain how we made Java-based event data accessible in guest languages. In addition, we will show how we provided extra context from our systems to the rules, and designed a simple Domain Specific Language for data retrieval and action triggering. Finally, we'll cover how we ensure fairness and maintain performance.
Come and learn how you can leverage the potential of GraalVM!
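The polyglot embedding the talk builds on can be sketched in a few lines. The toy rule below is a hypothetical illustration, not Picnic's engine; it assumes a GraalVM runtime (or the org.graalvm.polyglot dependencies) with JavaScript support available:

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

public class RuleSketch {
    public static void main(String[] args) {
        try (Context context = Context.create("js")) {
            // Expose host (Java) event data to the guest language
            context.getBindings("js").putMember("amount", 120);

            // An operator-written rule, evaluated as JavaScript
            Value result = context.eval("js",
                "amount > 100 ? 'apply-discount' : 'no-op'");

            System.out.println(result.asString()); // apply-discount
        }
    }
}
```

Rules in Python work the same way with a "python" language id, which is the polyglot flexibility the talk explores at production scale.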
Talk 3: LLMOps: A Developer’s Roadmap from Model to Production
In the rapidly evolving landscape of LLMs, architects and developers face the challenge of selecting a model and of deploying and managing applications built on large or small language models (LLMs or SLMs) such as GPT-4, Gemini, or Phi-3.
In this talk, I will discuss the intricacies of operationalizing LLMs, focusing on prompt engineering, fine-tuning, and deployment strategies. I will explore the transformative potential of LLMs and address the critical aspects of bias mitigation, ethical considerations, and risk management. Along the way, I will share insights into streamlining the LLM lifecycle: from model discovery to prompt tuning, ensuring domain-specific grounding, leveraging advanced optimization for fine-tuning, and managing models in production.
The session aims to equip you with a comprehensive understanding of the processes and best practices for deploying, managing LLMs/SLMs at scale, while prioritizing safety and responsibility.
Talk 4: Rediscovering Apollo 11: Using Java and Vector Search to Explore the Trip to the Moon
What happens when you combine the Apollo program's historical data with modern AI tools? You get a way to interact with one of humanity's greatest adventures like never before! In this session, I'll show you how I use AI to explore Apollo mission data—aligning transcripts, telemetry, and images to uncover hidden connections and insights.
We'll dive into how Semantic Search helps make sense of unstructured text, why embeddings are the key to searching for intent instead of keywords, and how AI tools can enrich even the most complex datasets. Don't know what embeddings or vector databases are? Don't worry—I'll break it all down and show you how it works.
Come for the Moon missions, stay for the AI magic, and leave ready to create your own data-driven adventures!
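The intuition behind embedding-based search can be sketched without any database: documents and queries become vectors, and "searching for intent" becomes comparing those vectors, typically by cosine similarity. The three-dimensional vectors below are toy values for illustration, not real embeddings (production models produce hundreds of dimensions):

```java
public class VectorSearchSketch {
    // Cosine similarity: the distance metric vector search typically uses for
    // nearest-neighbour queries (1.0 = same direction, 0.0 = unrelated)
    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        double[] query = {0.9, 0.1, 0.0}; // e.g. the embedded question "lunar landing"
        double[] docA  = {0.8, 0.2, 0.1}; // a transcript line about the descent
        double[] docB  = {0.0, 0.1, 0.9}; // an unrelated telemetry note

        // docA scores higher: closer in meaning, even without shared keywords
        System.out.printf("docA: %.3f%n", cosine(query, docA));
        System.out.printf("docB: %.3f%n", cosine(query, docB));
    }
}
```

A vector database such as Redis stores many such embeddings and answers "nearest vectors to this query" efficiently, which is the mechanism the talk applies to the Apollo 11 data.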
