Talk by Prof. Sebastian Goldt (INM-6/IAS-6 Seminar)

Start
6th October 2022 12:00 PM
End
6th October 2022 1:00 PM
Location
Bld. 15.22, E1, Room 3009 (Seminar room) / Online

Hosts: Prof. Moritz Helias, Claudia Merger

What do neural networks learn? On the interplay between data structure and representation learning

Neural networks are powerful feature extractors - but which features do they extract from their data? And how does the structure in the data shape the representations they learn? We investigate these questions by introducing several synthetic data models, each of which accounts for a salient feature of modern datasets: the low intrinsic dimension of images [1], symmetries and non-Gaussian statistics [2], and finally sequence memory [3]. Using tools from statistics and statistical physics, we show how the learning dynamics and the learned representations are shaped by the statistical properties of the training data.
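The first of these data models, the "hidden manifold" setup of [1], can be sketched in a few lines: inputs live (nonlinearly) on a low-dimensional manifold embedded in a high-dimensional space, and labels depend only on the latent coordinates. The dimensions, nonlinearity, and teacher below are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions): ambient dim D, latent dim d << D, n samples
D, d, n = 500, 10, 1000

# Latent Gaussian coordinates and a fixed random feature map into ambient space
Z = rng.standard_normal((n, d))               # low-dimensional latent variables
F = rng.standard_normal((d, D)) / np.sqrt(d)  # projection defining the manifold

# Inputs lie on a d-dimensional nonlinear manifold in R^D
X = np.tanh(Z @ F)

# Labels depend only on the latent coordinates, via a random linear teacher
w = rng.standard_normal(d)
y = np.sign(Z @ w)

print(X.shape, y.shape)
```

A network trained on (X, y) thus sees high-dimensional inputs whose relevant structure is confined to d directions, which is the regime the analysis in [1] addresses.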

[1] Goldt, Mézard, Krzakala, Zdeborová (2020) Physical Review X 10 (4), 041044 [arXiv:1909.11500]
[2] Ingrosso & Goldt (2022) PNAS, in press. [arXiv:2202.00565]
[3] Seif, Loos, Tucci, Roldán, Goldt, under review [arXiv:2205.14683]

Prof. Sebastian Goldt
International School of Advanced Studies (SISSA)
Trieste

Last Modified: 09.03.2024