Data & Analytics Wednesday - Reducing AI Mistakes


Details
Hallucination in the Wild: A Field Guide for LLM Users
Spotting, Understanding, and Reducing AI Mistakes
Large Language Models like ChatGPT are incredibly good at sounding smart—even when they’re completely wrong. This tendency to produce false or misleading information, often called hallucination, is one of the most persistent challenges in modern AI.
In this talk, Ashley Lewis from OSU Linguistics will explain why these models hallucinate, what makes it difficult for them to recognize uncertainty, and why existing solutions often fall short. She'll also share insights from her research: how smaller, more efficient models can be built to make fewer mistakes, how we can evaluate their trustworthiness more effectively, and which practical strategies, like better prompting, can reduce hallucinations in the tools we use today.
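For a concrete taste of the "better prompting" strategies mentioned above, here is a minimal sketch in Python. It assumes the OpenAI client library and an illustrative model name; the prompt wording is our own example, not material from the talk. The pattern shown, explicitly giving the model permission to say "I don't know," is one commonly recommended way to reduce confident fabrication.

    # A minimal sketch of one hallucination-reducing prompting pattern:
    # explicitly allow the model to express uncertainty instead of guessing.
    # Assumes the OpenAI Python client (pip install openai) and an API key
    # in the OPENAI_API_KEY environment variable; model name is illustrative.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = (
        "Answer only from information you are confident about. "
        "If you are unsure, say 'I don't know' rather than guessing, "
        "and briefly explain what is uncertain."
    )

    def ask(question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question},
            ],
            temperature=0,  # lower temperature discourages speculative answers
        )
        return response.choices[0].message.content

    # A question with no real answer; a well-prompted model should decline.
    print(ask("Who won the 1937 Columbus Data Science Prize?"))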
Whether you use LLMs every day or just wonder how they work, this talk offers a behind-the-scenes look at one of AI’s most pressing problems.
All CBUSDAW events are free thanks to our 2025 Sponsors: Clarivoy, Conductrics, What Box Consulting Group, and Piwik PRO.
Check out cbusdaw.com for more information.