Seminar, 2023

29 March
The task of inverse problems is to determine an unknown quantity from measurements obtained through a forward operator, possibly corrupted by noise. Such problems are usually unstable: small perturbations of the observed measurements may cause large deviations in the reconstructed solutions. Variational regularization is a well-established technique to tackle ill-posedness, and it requires solving a minimization problem in which a mismatch functional is endowed with a suitable regularization term. The choice of such a functional is a crucial task, and it usually relies on theoretical suggestions as well as on a priori information about the desired solution. In recent years, statistical learning has outlined a novel and successful paradigm for this purpose. Supposing that the exact solution and the measurements are distributed according to a joint probability distribution, which is partially known thanks to a suitable training sample, we can take advantage of this statistical model to design data-driven regularization operators. In this talk, I will consider linear inverse problems (associated with relevant applications, e.g., in signal processing and in medical imaging) and aim at learning the optimal regularization operator, first restricted to the family of generalized Tikhonov regularizers. I will discuss some theoretical properties of the optimal operator and show error bounds for its approximation as the size of the sample grows, both with a supervised-learning strategy and with an unsupervised-learning one. Finally, I will discuss the extension to different families of regularization functionals, with a particular interest in sparsity promotion. This is based on joint work with G. S. Alberti, E. De Vito, M. Santacesaria (University of Genoa), and M. Lassas (University of Helsinki).
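To fix ideas, the variational setup mentioned above can be illustrated with a minimal numerical sketch. This is not the method of the talk, only a toy instance of generalized Tikhonov regularization for a linear inverse problem y = Ax + noise: we minimize ||Ax - y||² + λ||Bx||², whose minimizer solves the normal equations (AᵀA + λBᵀB)x = Aᵀy. The forward operator A, the true signal, the noise level, the choice B = I, and the parameter λ are all illustrative assumptions.

```python
import numpy as np

# Toy linear inverse problem: y = A x_true + noise.
# All problem data below are illustrative assumptions, not from the talk.
rng = np.random.default_rng(0)

n = 50
A = rng.normal(size=(n, n)) / np.sqrt(n)      # assumed forward operator
x_true = np.zeros(n)
x_true[10:20] = 1.0                           # assumed unknown quantity
y = A @ x_true + 0.01 * rng.normal(size=n)    # noisy measurements

# Generalized Tikhonov: minimize ||A x - y||^2 + lam * ||B x||^2.
# With B = I this is classical Tikhonov regularization; a learned
# regularizer would replace B with a data-driven operator.
B = np.eye(n)
lam = 0.01                                    # assumed regularization weight

# Closed-form minimizer via the normal equations:
# (A^T A + lam * B^T B) x = A^T y
x_hat = np.linalg.solve(A.T @ A + lam * (B.T @ B), A.T @ y)

print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Swapping B = I for a different operator (e.g. a discrete gradient, or an operator learned from a training sample as in the talk) changes the prior encoded by the regularizer while keeping the same closed-form solve.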
