Details

Humans move naturally between perception and language: we can look at something and describe it, or read a description and form a mental picture. In machine learning, though, language models and vision models are usually trained separately, so it is easy to assume that they learn entirely different internal representations.

The Platonic Representation Hypothesis argues that this assumption may not hold, and presents evidence that models trained on different data, and even on different modalities, can converge toward similar underlying representations. In this meetup, we will walk through the paper, unpack its main ideas and results, and discuss the possible implications for how we think about intelligence, multimodal models, and the structure of learned representations.

Paper URL: https://arxiv.org/pdf/2405.07987
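
For anyone who wants to poke at the idea before the meetup: the paper quantifies convergence with a mutual k-nearest-neighbor alignment score between two models' embeddings of the same inputs. Below is a minimal sketch of that metric, not the paper's reference implementation; the function names, the choice of cosine similarity, and k = 10 are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's official code) of a mutual
# k-nearest-neighbor alignment metric between two models' representations
# of the same inputs.
import numpy as np

def knn_indices(feats: np.ndarray, k: int) -> np.ndarray:
    """Indices of each row's k nearest neighbors (cosine similarity, self excluded)."""
    x = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = x @ x.T
    np.fill_diagonal(sim, -np.inf)          # exclude self-matches
    return np.argsort(-sim, axis=1)[:, :k]  # top-k most similar rows

def mutual_knn_alignment(feats_a: np.ndarray, feats_b: np.ndarray, k: int = 10) -> float:
    """Mean overlap of the two models' k-NN sets, averaged over samples."""
    nn_a, nn_b = knn_indices(feats_a, k), knn_indices(feats_b, k)
    overlap = [len(set(a) & set(b)) / k for a, b in zip(nn_a, nn_b)]
    return float(np.mean(overlap))

# Toy usage: embeddings of the same 100 inputs from two hypothetical models.
rng = np.random.default_rng(0)
emb_model_a = rng.normal(size=(100, 64))
emb_model_b = emb_model_a @ rng.normal(size=(64, 32))  # a linear "re-encoding"
print(mutual_knn_alignment(emb_model_a, emb_model_b, k=10))
```

A score near 1.0 means the two models place the same items near each other, even if their embedding spaces look nothing alike coordinate by coordinate.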

=== ENTRY DETAILS ===

- A QR code with entry information will be available soon in the "Photos" section of this event page.
- Gate closes at 18:15; no late entries.

Related topics

Events in Budapest, HU
Biotechnology
AI Algorithms
AI/ML
Artificial Intelligence
Machine Learning
