
Why Neural Network Representations Won’t Converge to Reality
Join us for an engaging discussion on a cutting-edge topic in artificial intelligence!
This is a ticketed event. Please register at this link.
In this talk, Robert Adragna will explore the Platonic Representation Hypothesis (Huh et al., 2024), which posits that neural networks are gradually converging toward a shared statistical model of objective reality. He will examine how this theory seeks to explain the growing similarities observed among models trained with different architectures and on different datasets.
Robert, however, argues that this convergence is practically unattainable. To represent reality accurately, a model must recognize the same real-world concept across its countless variations, an ideal that current theoretical and empirical results suggest cannot be achieved at scale. He proposes instead that the observed similarities arise from shared structural assumptions in the training data, rather than from the identification of an objective reality.
Event Schedule:
- 6:00 to 6:30 - Food & Introductions 🍽️
- 6:30 to 7:30 - Presentation & Q&A 🎤
- 7:30 to 9:00 - Open Discussions 💬
If you are unable to attend in person, you can join the live stream starting at 6:30 PM via this link. 🌐
Location: 30 Adelaide East, 12th Floor, M5C 2C5, Toronto




