Two talks: 1. In-Context Learning, 2. Encoded Moral Beliefs


Details
We have two great talks lined up!
1. In-Context Learning in Large Language Models
"In-context learning" is the ability to generate responses based on the context provided within a prompt, i.e. without training on new data.
Johannes von Oswald (ETH Zurich, Google Research) offers an explanation of this mysterious capability by showing how LLMs like ChatGPT can secretly implement well-known learning algorithms, and concludes by discussing some related safety issues.
2. Evaluating the Moral Beliefs Encoded in Large Language Models
Nino Scherrer (FAR AI) presents a case study on evaluating the moral preferences encoded in large language models. A comprehensive survey of over 1,300 moral scenarios reveals surprising patterns of agreement and uncertainty across various LLMs. The talk discusses their possible sources and their implications for the future of LLMs.
The event will be hosted by Digicomp Zurich, who kindly arranged a room for us.