
Details

Getting here: Enter the lobby at 100 University Ave (right next to St Andrew subway station), and message Giles Edkins on the Meetup app or call him at 647-823-4865 to be let up to room 6H.

Chain of Thought prompting, or giving a language model a scratchpad to jot down its thoughts before producing a final answer, is a popular way to improve LLM performance.
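For concreteness, here is a minimal sketch of what chain-of-thought prompting looks like in practice. The call_llm helper below is a hypothetical placeholder for whatever model API you happen to use; only the prompt wording differs between the two versions.

# Minimal sketch of chain-of-thought prompting.
# call_llm is a hypothetical stand-in for your LLM client of choice;
# it just needs to take a prompt string and return the model's text reply.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Plug in your own LLM client here.")

question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

# Direct prompting: ask for the answer straight away.
direct_prompt = f"{question}\nAnswer with just the final amount."

# Chain-of-thought prompting: give the model a scratchpad before the final answer.
cot_prompt = (
    f"{question}\n"
    "Think step by step, writing out your reasoning, "
    "then give the final answer on a line starting with 'Answer:'."
)

# The chain-of-thought version tends to do better on multi-step problems,
# and its written reasoning trace is exactly what the faithfulness papers examine.
# print(call_llm(direct_prompt))
# print(call_llm(cot_prompt))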

But does it also let us read the model's mind? Can we read off its verbalized thoughts to gain insight into its reasoning processes? The evidence is mixed: models can hallucinate (or lie?) in their chain-of-thought reasoning, producing explanations that don't match how they actually arrived at the answer.

We'll talk about why this is a problem and look at some of the research. As usual, it will be a presentation format with plenty of opportunities to interact and discuss.

Some papers to check out:
