• deck

  • [Private Defense] Breaking the Curse of Dimensionality in Deep Neural Networks by Learning Invariant Representations

  • Beating the curse of dimensionality in DNNs by learning invariant representations

    We aim to answer the fundamental question of how deep learning algorithms achieve remarkable success in processing high-dimensional data, such as images and text, despite the curse of dimensionality. This curse makes it hard to sample data efficiently and, in generic settings, causes the sample complexity (the number of data points needed to learn a task) to scale exponentially with the dimension of the data space. Our investigation centers on the idea that, to be learnable, real-world data must be highly structured. We explore two aspects of this idea: (i) the hierarchical nature of data, where higher-level features are compositions of lower-level features, as a face is made up of eyes, nose, mouth, etc.; (ii) the irrelevance of the exact spatial location of such features. Following this idea, we investigate the hypothesis that deep learning succeeds because it constructs useful hierarchical representations of the data that exploit its structure (i) while being insensitive to aspects irrelevant to the task at hand (ii).

  • [NeurIPS 2022] Learning sparse features

  • [TOPML 2022] Diffeo relative stability in deep nets

    TOPML Workshop

  • PCSL retreat, April 2022

  • [NeurIPS 2021] Diffeo relative stability in deep nets

    NeurIPS Conference 2021

  • Diffeo internal group meeting

  • Diffeo stability in deep nets - Talk 2

    Theory of Neural Nets, internal seminar - July 12, 2021

  • Diffeo stability in deep nets - Talk 1

  • [Journal Club @EPFL] Prevalence of neural collapse during the terminal phase of deep learning training

  • Learning Features in Neural Nets: perks and drawbacks

    PhD Candidacy Examination @ Physics Doctoral School, EPFL

  • [Les Houches 2020] Geometric compression of invariant manifolds in neural nets

    Talk for the Statistical Physics and ML Summer Workshop @ Ecole de Physique des Houches, August 2020. Video recording: https://bit.ly/3kQBAYe (from minute 12)

  • Deep Reinforcement Learning

    Presentation for PCSL Group Meeting @ EPFL

  • FL vs LL - Network Symmetries

  • Replicated Affinity Propagation