List of seminars in the series
“GEOMETRIA E DEEP LEARNING”

2020
18 February
P. Frosini
in the seminar series: GEOMETRIA E DEEP LEARNING
Algebra and Geometry Seminar
In this talk we illustrate a new mathematical model for machine learning, which follows from the assumption that data cannot be studied directly, but only through the action of agents that transform them. In our framework each agent is represented by a group equivariant non-expansive operator acting on data. After endowing the space of agents with a suitable metric, we describe the main topological and geometrical properties of this space by means of methods developed for topological data analysis.
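For context, a minimal sketch of the standard definition of such operators (the notation is ours, not taken from the abstract): given a set \(\Phi\) of admissible functions \(\varphi: X \to \mathbb{R}\), a group \(G\) of transformations of \(X\) acting on \(\Phi\) by composition, and an analogous pair \((\Psi, H)\), a group equivariant non-expansive operator is a pair \((F, T)\), with \(F: \Phi \to \Psi\) and \(T: G \to H\) a homomorphism, such that
\[
F(\varphi \circ g) = F(\varphi) \circ T(g) \quad \text{for all } \varphi \in \Phi,\ g \in G \qquad \text{(equivariance)},
\]
\[
\| F(\varphi_1) - F(\varphi_2) \|_\infty \le \| \varphi_1 - \varphi_2 \|_\infty \quad \text{for all } \varphi_1, \varphi_2 \in \Phi \qquad \text{(non-expansiveness)}.
\]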
2020
24 February
N. Quercioli
in the seminar series: GEOMETRIA E DEEP LEARNING
Algebra and Geometry Seminar
In this talk we will briefly introduce a general mathematical framework for group equivariance in the machine learning context. The framework builds on a synergy between persistent homology and the theory of group actions. Our focus will be on illustrating some methods to build Group Equivariant Non-Expansive Operators (GENEOs), which are maps between function spaces associated with groups of transformations. The development of these techniques will give us the opportunity to obtain a better approximation of the topological space of all GENEOs.
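By way of illustration only (a hypothetical sketch, not material from the talk): the snippet below builds two simple operators on discrete 1-D signals that are equivariant with respect to cyclic translations and non-expansive with respect to the sup norm, and then takes a convex combination of them, one standard way of obtaining new GENEOs from known ones.

import numpy as np

def moving_average(signal, width=3):
    # Cyclic moving average: translation-equivariant, and non-expansive
    # because the averaging weights are non-negative and sum to 1.
    kernel = np.ones(width) / width
    n = len(signal)
    return np.array([np.dot(kernel, np.take(signal, range(i, i + width), mode='wrap'))
                     for i in range(n)])

def local_max(signal, width=3):
    # Cyclic local maximum: translation-equivariant and 1-Lipschitz in the sup norm.
    n = len(signal)
    return np.array([np.max(np.take(signal, range(i, i + width), mode='wrap'))
                     for i in range(n)])

def convex_combination(ops, weights):
    # Convex combinations (weights >= 0, summing to 1) of GENEOs are again GENEOs.
    weights = np.asarray(weights, dtype=float)
    assert np.all(weights >= 0) and np.isclose(weights.sum(), 1.0)
    return lambda s: sum(w * op(s) for w, op in zip(weights, ops))

rng = np.random.default_rng(0)
phi, psi = rng.normal(size=50), rng.normal(size=50)
F = convex_combination([moving_average, local_max], [0.7, 0.3])

# Non-expansiveness: the sup-norm distance between outputs does not exceed
# the sup-norm distance between inputs.
print(np.max(np.abs(F(phi) - F(psi))) <= np.max(np.abs(phi - psi)))
# Equivariance: applying F to a cyclically shifted signal equals shifting F's output.
print(np.allclose(F(np.roll(phi, 7)), np.roll(F(phi), 7)))

Composition of GENEOs is another construction that preserves both properties.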
2020
21 May
Abstract: What are the fundamental quantities to understand the learning process of a deep neural network? Why are some datasets easier than others? What does it mean for two tasks to have a similar structure? We argue that information-theoretic quantities, and in particular the amount of information that SGD stores in the weights, can be used to characterize the training process of a deep network. In fact, we show that the information in the weights bounds the generalization error and the invariance of the learned representation. It also allows us to connect the learning dynamics with the so-called "structure function" of the dataset, and to define a notion of distance between tasks, which relates to fine-tuning. The non-trivial dynamics of information during training give rise to phenomena, such as critical periods for learning, that closely mimic those observed in humans and may suggest that forgetting information about the training data is a necessary part of the learning process.
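As an illustration of the kind of statement meant by "the information in the weights bounds the generalization error" (notation and assumptions are ours; this is the bound of Xu and Raginsky, 2017, not necessarily the formulation used in the talk): if the per-example loss is \(\sigma\)-subgaussian, then
\[
\bigl| \mathbb{E}\!\left[ L_{\mathrm{test}}(w) - L_{\mathrm{train}}(w) \right] \bigr| \;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(w;\mathcal{D})},
\]
where \(\mathcal{D}\) is the training set of \(n\) examples, \(w\) are the weights returned by the training algorithm (e.g. SGD), and \(I(w;\mathcal{D})\) is the mutual information between weights and training data.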