Reizinger, Patrik; Bizeul, Alice; Juhos, Attila; Vogt, Julia; Balestriero, Randall; Brendel, Wieland; Klindt, David (October 2024) Cross-Entropy Is All You Need To Invert the Data Generating Process. arXiv. ISSN 2331-8422 (Submitted)
PDF: 10.48550.arXiv.2410.21869.pdf (Submitted Version, 872kB). Available under a Creative Commons Attribution License.
Abstract
Supervised learning has become a cornerstone of modern machine learning, yet a comprehensive theory explaining its effectiveness remains elusive. Empirical phenomena, such as neural analogy-making and the linear representation hypothesis, suggest that supervised models can learn interpretable factors of variation in a linear fashion. Recent advances in self-supervised learning, particularly nonlinear Independent Component Analysis, have shown that these methods can recover latent structures by inverting the data generating process. We extend these identifiability results to parametric instance discrimination, then show how insights transfer to the ubiquitous setting of supervised learning with cross-entropy minimization. We prove that even in standard classification tasks, models learn representations of ground-truth factors of variation up to a linear transformation. We corroborate our theoretical contribution with a series of empirical studies. First, using simulated data matching our theoretical assumptions, we demonstrate successful disentanglement of latent factors. Second, we show that on DisLib, a widely-used disentanglement benchmark, simple classification tasks recover latent structures up to linear transformations. Finally, we reveal that models trained on ImageNet encode representations that permit linear decoding of proxy factors of variation. Together, our theoretical findings and experiments offer a compelling explanation for recent observations of linear representations, such as superposition in neural networks. This work takes a significant step toward a cohesive theory that accounts for the unreasonable effectiveness of supervised deep learning.
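The identifiability claim suggests a concrete check: train a classifier with cross-entropy on observations generated from known latent factors, then test whether the learned representation recovers those latents up to a linear map. Below is a minimal PyTorch sketch of that protocol, not the authors' code; the mixing network, label rule, and encoder architecture are all illustrative assumptions.

```python
# Minimal sketch of a linear-identifiability check for cross-entropy training.
# All data-generating and architecture choices here are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n, d_latent, n_classes = 5000, 5, 10

# Ground-truth latents z and an arbitrary nonlinear mixing into observations x.
z = rng.normal(size=(n, d_latent)).astype(np.float32)
mix = nn.Sequential(nn.Linear(d_latent, 64), nn.Tanh(), nn.Linear(64, 20))
with torch.no_grad():
    x = mix(torch.from_numpy(z))

# Labels depend on the latents (here: a random linear readout, then argmax),
# so the classification task carries information about z.
W = rng.normal(size=(d_latent, n_classes)).astype(np.float32)
y = torch.from_numpy((z @ W).argmax(axis=1))

# Classifier: encoder (representation) + linear head, trained with cross-entropy.
encoder = nn.Sequential(nn.Linear(20, 128), nn.ReLU(), nn.Linear(128, d_latent))
head = nn.Linear(d_latent, n_classes)
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(2000):
    opt.zero_grad()
    loss = loss_fn(head(encoder(x)), y)
    loss.backward()
    opt.step()

# Identifiability check: regress the true latents on the learned representation;
# a high R^2 means z is recovered up to a linear transformation.
with torch.no_grad():
    h = encoder(x).numpy()
r2 = r2_score(z, LinearRegression().fit(h, z).predict(h))
print(f"final CE loss: {loss.item():.3f}, linear-readout R^2: {r2:.3f}")
```

Under the paper's assumptions, the R² of the linear readout should approach 1, indicating that the representation matches the ground-truth latents up to a linear transformation; a low R² under an arbitrary label rule would not by itself contradict the theory, since identifiability requires labels that are informative about the latents.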
Item Type: | Paper |
---|---|
Subjects: | bioinformatics; bioinformatics > computational biology; bioinformatics > computational biology > algorithms; bioinformatics > computational biology > algorithms > machine learning |
Communities: | CSHL labs > Klindt lab |
SWORD Depositor: | CSHL Elements |
Depositing User: | CSHL Elements |
Date: | 29 October 2024 |
Date Deposited: | 30 Oct 2024 19:26 |
Last Modified: | 30 Oct 2024 19:26 |
URI: | https://repository.cshl.edu/id/eprint/41721 |