Seminar of 2021

24 November 2021
Luca Calatroni (CNRS, I3S, Sophia-Antipolis, France)
as part of the series: SEMINARI MAT/08 TEAM
Numerical analysis seminar
We consider convex optimisation problems defined in the variable exponent Lebesgue space L^p(·)(Ω), where the functional to minimise is the sum of a smooth and a non-smooth term. Compared to the standard Hilbert setting traditionally considered in the framework of continuous optimisation, the space L^p(·)(Ω) has only a Banach structure, which does not allow for an identification with its dual space, as the Riesz representation theorem does not hold in this setting. This affects the applicability of well-known proximal (a.k.a. forward-backward) algorithms, since the gradient of the smooth component here lives in a different space than the one of the iterates. To circumvent this issue, the use of duality mappings is required; they link primal and dual spaces in a nonlinear fashion, thus allowing a sensible definition of the algorithmic iterates. However, such nonlinearity introduces further difficulties in the definition of the proximal (backward) step and, overall, in the convergence analysis of the algorithm. To overcome the non-separability of the natural induced norm on L^p(·)(Ω), we consider modular functions, which allow for an appropriate definition of proximal algorithms in this setting for which convergence properties in function values can be proved. Some numerical examples showing the flexibility of our approach in comparison with standard (Hilbert, L^p with constant p) algorithms on some exemplar inverse problems (deconvolution, denoising) are shown.
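As a rough illustration of the structure described in the abstract (a sketch only; the step size τ_k, the duality maps J_X and J_{X*}, and the modular ρ are generic notation introduced here for illustration, not taken from the talk): in a Hilbert space, forward-backward splitting for minimising f + g, with f smooth, iterates

\[
x_{k+1} = \mathrm{prox}_{\tau_k g}\big( x_k - \tau_k \nabla f(x_k) \big),
\]

whereas in a Banach space $X = L^{p(\cdot)}(\Omega)$ the gradient $\nabla f(x_k) \in X^*$ cannot be subtracted from $x_k \in X$ directly, so duality maps $J_X : X \to X^*$ and $J_{X^*} : X^* \to X$ mediate the explicit step,

\[
x_{k+1} = \mathrm{prox}^{\rho}_{\tau_k g}\Big( J_{X^*}\big( J_X(x_k) - \tau_k \nabla f(x_k) \big) \Big),
\]

with the proximal operator here defined with respect to a modular $\rho$ rather than the non-separable induced norm.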
