
“GEOMETRY AND DEEP LEARNING”

18 Feb 2020

P. Frosini

Group equivariant non-expansive operators and deep learning

Algebra and Geometry Seminar

In this talk we illustrate a new mathematical model for machine learning,
which follows from the assumption that data cannot be studied directly, but only through
the action of agents that transform them. In our framework each agent is represented
by a group equivariant non-expansive operator acting on data. After endowing the
space of agents with a suitable metric, we describe the main topological and geometrical
properties of this space by means of methods developed for topological data analysis.
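The two defining properties of such an operator, equivariance under a group action and non-expansiveness, can be made concrete with a toy example (my own illustration, not taken from the talk): on signals over a discrete circle acted on by cyclic shifts, averaging with a nonnegative kernel of total mass 1 is both shift-equivariant and non-expansive for the sup norm.

```python
import numpy as np

def shift(phi, g):
    """Action of the cyclic-shift group on a discrete signal."""
    return np.roll(phi, g)

def geneo(phi, kernel=(0.25, 0.5, 0.25)):
    """Cyclic convolution with a nonnegative kernel of total mass 1.

    Since the kernel weights are nonnegative and sum to 1, the operator
    is non-expansive in the sup norm; since it is a cyclic convolution,
    it commutes with every shift (equivariance).
    """
    return sum(w * np.roll(phi, -j) for j, w in enumerate(kernel))

rng = np.random.default_rng(0)
phi1, phi2 = rng.normal(size=8), rng.normal(size=8)

# Equivariance: F(T_g phi) = T_g F(phi)
assert np.allclose(geneo(shift(phi1, 3)), shift(geneo(phi1), 3))

# Non-expansiveness: the sup distance between outputs never exceeds
# the sup distance between inputs.
assert np.abs(geneo(phi1) - geneo(phi2)).max() <= np.abs(phi1 - phi2).max() + 1e-12
```

Any nonnegative kernel with mass at most 1 would work here; the specific weights are an arbitrary choice for the sketch.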

24 Feb 2020

N. Quercioli

On some methods to build GENEOs in Topological Data Analysis

Algebra and Geometry Seminar

In this talk we will briefly introduce a general mathematical framework for group equivariance in the machine learning context. The framework builds on a synergy between persistent homology and the theory of group actions. Our focus will be on illustrating some methods to build Group Equivariant Non-Expansive Operators (GENEOs), which are maps between function spaces associated with groups of transformations. The development of these techniques will give us the opportunity to obtain a better approximation of the topological space of all GENEOs.
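One simple building method of the kind the abstract alludes to is taking convex combinations of known GENEOs, which again yields a GENEO. A numerical sketch (the two component operators below are my own arbitrary choices, not from the talk):

```python
import numpy as np

def shift(phi, g):
    """Action of the cyclic-shift group on a discrete signal."""
    return np.roll(phi, g)

def op_max(phi):
    """Pointwise max with a shifted copy: shift-equivariant, non-expansive."""
    return np.maximum(phi, np.roll(phi, -1))

def op_avg(phi):
    """Average with a shifted copy: shift-equivariant, non-expansive."""
    return 0.5 * (phi + np.roll(phi, -1))

def convex_comb(phi, a=0.3):
    """Convex combination a*F1 + (1-a)*F2 of two GENEOs, 0 <= a <= 1."""
    return a * op_max(phi) + (1 - a) * op_avg(phi)

rng = np.random.default_rng(1)
phi1, phi2 = rng.normal(size=16), rng.normal(size=16)

# The combination inherits both defining properties from its components:
assert np.allclose(convex_comb(shift(phi1, 5)), shift(convex_comb(phi1), 5))
assert (np.abs(convex_comb(phi1) - convex_comb(phi2)).max()
        <= np.abs(phi1 - phi2).max() + 1e-12)
```

Equivariance is preserved because each component commutes with the group action, and non-expansiveness because the triangle inequality bounds the combined output by the convex weights.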


21 May 2020

Alessandro Achille

Structure of Learning Tasks and the Information in the Weights of a Deep Network

Algebra and Geometry Seminar

What are the fundamental quantities needed to understand the learning process of a deep neural network? Why are some datasets easier than others? What does it mean for two tasks to have a similar structure? We argue that information-theoretic quantities, and in particular the amount of information that SGD stores in the weights, can be used to characterize the training process of a deep network. In fact, we show that the information in the weights bounds both the generalization error and the invariance of the learned representation. It also allows us to connect the learning dynamics with the so-called "structure function" of the dataset, and to define a notion of distance between tasks, which relates to fine-tuning. The non-trivial dynamics of information during training give rise to phenomena, such as critical periods for learning, that closely mimic those observed in humans and may suggest that forgetting information about the training data is a necessary part of the learning process.
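As a rough illustration of the kind of quantity involved (my own sketch, not part of the abstract): the "information in the weights" is often measured as a KL divergence between an approximate posterior over the weights and a prior, which has a closed form when both are diagonal Gaussians.

```python
import numpy as np

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    """KL(q || p) in nats for diagonal Gaussians q and p.

    A toy stand-in for 'information in the weights': q plays the role of
    a posterior over network weights after training, p a fixed prior.
    """
    return float(np.sum(np.log(sigma_p / sigma_q)
                        + (sigma_q**2 + (mu_q - mu_p)**2) / (2 * sigma_p**2)
                        - 0.5))

mu, sigma = np.zeros(4), np.ones(4)

# Identical distributions carry zero extra information.
assert abs(gaussian_kl(mu, sigma, mu, sigma)) < 1e-12

# Shifting every mean by 1 costs 0.5 nats per dimension here: 4 * 0.5 = 2.0.
assert np.isclose(gaussian_kl(mu + 1.0, sigma, mu, sigma), 2.0)
```

In this reading, a network whose trained weights stay closer to the prior stores fewer nats about the training data, which is the quantity the abstract relates to generalization.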