LLMs in Action: How Booking.com Scans, Detects, & Monitors Fake & Unsafe Content


Details
Explore the power of LLMs in this hands-on workshop featuring real use cases from Booking.com. Learn how to scan and detect fake and unsafe content in reviews, and discover how AI is applied to keep reviews trustworthy and the platform secure.
The workshop will be divided into two parts. The first part will introduce the business use case and explain how LLMs come to the rescue in detecting and preventing fake content. In the second part, participants will get hands-on experience: working with script-generated reviews, getting an overview of relevant models on Hugging Face, and training a model specifically for this purpose. You'll also learn how to apply Hugging Face's powerful transformers library in practice.
Whether you're interested in LLMs, online safety, or the mechanics of content moderation, this session offers an in-depth look at the cutting-edge tools that help maintain a safe digital environment.
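To give a flavour of the hands-on part, below is a minimal sketch of the transformers pipeline API for scoring a review with a text-classification model. This is not the workshop's actual code, and the checkpoint name is a generic off-the-shelf sentiment model used only for illustration; in the workshop you will train a model targeted at fake and unsafe content.

# Minimal sketch (illustrative only, not the workshop code):
# score a single review with a Hugging Face text-classification pipeline.
from transformers import pipeline

# Generic off-the-shelf checkpoint used purely as a placeholder model.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

review = "Amazing hotel, best stay ever, 10/10, everyone must book now!!!"
result = classifier(review)
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99}]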
Compete to win prizes!
After the workshop we will have a quiz competition where you can win prizes! Details will be shared on location.
Note
This event will not be recorded for later viewing on our YouTube channel.
Agenda
18:30 - Doors open, food and drinks
19:00 - Workshop
20:30 - Networking
GitHub Repo
https://github.com/pyladiesams/llms-scan-reviews-nov2024
Any questions --> amsterdam@pyladies.com