Seminar of 2019

28 January 2019
Nicolas Macris
Interdisciplinary seminar
Generalized linear models arise in high-dimensional machine learning, statistics, communications and signal processing. In this talk we review such models in a teacher-student setting of supervised learning, with a random data matrix, as relevant in benchmark models of neural networks. Predictions for the mutual information and Bayes-optimal generalization error have long existed for special cases, e.g. the perceptron or the committee machine, in the statistical-physics literature based on spin-glass methods. We will explain recently developed mathematical techniques that rigorously establish these old conjectures, and bring forward their algorithmic interpretation in terms of the performance of message-passing algorithms. For many learning problems we will illustrate regions of parameters in which message-passing algorithms achieve optimal performance, and locate the associated sharp phase transitions separating learnable from non-learnable regions. These rigorous results can serve as a challenging benchmark for multi-purpose algorithms.
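As a concrete illustration of the setting described above, the following is a minimal sketch of teacher-student data generation for a generalized linear model with a random (i.i.d. Gaussian) data matrix; the sign activation corresponds to the noiseless perceptron special case mentioned in the abstract, and all variable names and parameter values are illustrative assumptions, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 200, 50                      # number of samples, input dimension (assumed values)
w_teacher = rng.standard_normal(d)  # hidden teacher weights the student must recover

# Random i.i.d. Gaussian data matrix, as in the benchmark setting
X = rng.standard_normal((n, d))

# Labels produced by the teacher through a GLM output channel;
# sign(.) gives the noiseless perceptron special case
y = np.sign(X @ w_teacher / np.sqrt(d))

# A student is then trained on (X, y); in this high-dimensional regime
# the Bayes-optimal generalization error depends on the ratio n/d.
print(X.shape, y.shape)  # (200, 50) (200,)
```

In the theory reviewed in the talk, one studies the limit where both n and d grow large at fixed ratio n/d, which is where the sharp phase transitions between learnable and non-learnable regions appear.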