
Hallucination Detection & Interpretability

Hosted By
Mario G. and Annie S.

Details

We all know that Large Language Models (LLMs) can confidently emit falsehoods, a phenomenon known as hallucination. Joshua Carpeggiani will tell us about some interpretability methods - peering into the insides of the model and making sense of what we see - that might help detect and correct hallucinations.
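If you'd like a small taste of the topic before the talk, below is a minimal, illustrative sketch (not Joshua's method) of one common interpretability-flavoured approach: training a linear "probe" on a model's hidden activations to separate true statements from false ones. The model name (gpt2), the choice of layer, and the toy statements are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: probe a model's hidden activations for "truth".
# Assumptions: gpt2 as the model, last-layer last-token activations as features,
# and a tiny hand-written dataset. Real work would use held-out evaluation data.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "gpt2"  # any small model with accessible hidden states would do
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

# Toy statements labelled 1 (true) or 0 (false).
statements = [
    ("The capital of France is Paris.", 1),
    ("The capital of France is Berlin.", 0),
    ("Water boils at 100 degrees Celsius at sea level.", 1),
    ("Water boils at 10 degrees Celsius at sea level.", 0),
    ("The Earth orbits the Sun.", 1),
    ("The Sun orbits the Earth.", 0),
]

def hidden_state(text: str, layer: int = -1) -> torch.Tensor:
    """Return the last-token hidden state at the chosen layer."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.hidden_states is a tuple of [1, seq_len, dim] tensors, one per layer
    return outputs.hidden_states[layer][0, -1]

X = torch.stack([hidden_state(s) for s, _ in statements]).numpy()
y = [label for _, label in statements]

# Fit a linear probe; its accuracy hints at how linearly "truthfulness"
# is encoded in the activations.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("probe training accuracy:", probe.score(X, y))
```

The idea behind probes like this is that if a simple linear classifier can read a property off the internal activations, the model plausibly represents that property, which is one route toward flagging hallucinated outputs.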

We welcome a variety of backgrounds, opinions and experience levels.

Getting here:

Enter the lobby at 100 University Ave (right next to St Andrew subway station), and message Annie Sorkin on the Meetup app or call her at 437-577-2379 to be let up to the sixth floor.
As a backup contact, please reach out to Mario Gibney at 416-786-5403.

Precise Schedule & Locations:

From 6:00 pm to 6:45 pm we will have pizza and refreshments in the kitchenette area on the 6th floor. This is a perfect time for networking and discussing the latest AI news.

The main presentation will begin in room 6H at 6:45 pm and run until approximately 8:00 pm.

After this (starting around 8:30 pm), some of us will congregate at Neo Coffee Bar (King/Spadina) for more casual chit-chat. Look for the table with the blue robot if you're just joining us at the cafe after the main presentation.

Toronto AI Safety
100 University Ave · Toronto, ON