Catching a Chinchilla [paper discussion]
AI governance is an enormous challenge. The paper we'll be discussing today tackles one piece of it: suppose standards are in place governing the training of large-scale ML projects. How can we verify that those standards are actually being followed? And can we do so while preserving privacy and security?
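If you'd like a concrete handle on the verification idea before the session: roughly, the paper proposes that ML chips log hashes of periodic weight snapshots, which an auditor can later check against a disclosed training run. The toy Python sketch below is our own illustration of that hash-chain flavour, not the paper's actual protocol, and all the names in it (snapshot_digest, commit_training_run, verify_training_run) are made up for the example.

import hashlib

def snapshot_digest(weights: bytes, prev_digest: bytes) -> bytes:
    # Chain this snapshot's hash onto the previous digest, committing
    # to the whole training trajectory in order.
    return hashlib.sha256(prev_digest + weights).digest()

def commit_training_run(snapshots: list[bytes]) -> list[bytes]:
    # Trainer side: produce one digest per periodic weight snapshot.
    digests, prev = [], b"\x00" * 32  # 32-byte genesis value for the chain
    for w in snapshots:
        prev = snapshot_digest(w, prev)
        digests.append(prev)
    return digests

def verify_training_run(snapshots: list[bytes], claimed: list[bytes]) -> bool:
    # Auditor side: re-derive the chain from the disclosed snapshots
    # and check it matches the digests logged during training.
    return commit_training_run(snapshots) == claimed

# Example: the trainer logs digests during the run; the auditor checks later.
run = [b"weights-at-step-1000", b"weights-at-step-2000"]
log = commit_training_run(run)
assert verify_training_run(run, log)

One thing worth noticing: the logged digests reveal nothing about the weights until the trainer chooses to disclose the snapshots, which is where the privacy angle comes in.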
After a round of introductions and catching up on any AI news, we'll give a presentation on the paper, "What does it take to catch a Chinchilla?" (https://arxiv.org/abs/2303.11341), with plenty of opportunities for questions and discussion.
Some key questions to think about:
- the framework is ambitious and requires the cooperation of many actors. What would it take to create the political appetite for this kind of approach? Do we need to wait for some kind of disaster?
- is it technically feasible?
- the framework covers only training, not deployment, of ML systems. How do you distinguish "good" from "bad" AI at the training stage?
- will very large-scale training runs still be relevant in a few years' time?
- is there a different approach we could take instead?
We'll be in the "Raging Grannies" room on the 4th floor of the Centre for Social Innovation. This is the room directly connected to the kitchenette area. You can use the code 3961 to enter the building tonight. See you there!
