Recurring seminars, Dipartimento di Matematica
Seminars of the MAT/08 (Numerical Analysis) Team


13
Dec

2021
Monday
Silvia Biasotti
TBA

numerical analysis seminar

at 10:30 in Seminario II
TBA

24
Nov

2021
Luca Calatroni (CNRS, I3S, Sophia-Antipolis, France)
Proximal gradient algorithms in variable exponent Lebesgue spaces

numerical analysis seminar

We consider convex optimisation problems defined in the variable exponent Lebesgue space L^p(·)(Ω), where the functional to minimise is the sum of a smooth and a non-smooth term. Compared to the standard Hilbert setting traditionally considered in the framework of continuous optimisation, the space L^p(·)(Ω) has only a Banach structure which does not allow for an identification with its dual space, as the Riesz representation theorem does not hold in this setting. This affects the applicability of well-known proximal (a.k.a. forward-backward) algorithms, since the gradient of the smooth component here lives in a different space than the one of the iterates. To circumvent this issue, the use of duality mappings is required; they link primal and dual spaces in a nonlinear fashion, thus allowing a sensible definition of the algorithmic iterates. However, such nonlinearity introduces further difficulties in the definition of the proximal (backward) step and, overall, in the convergence analysis of the algorithm. To overcome the non-separability of the natural induced norm on L^p(·)(Ω), we consider modular functions allowing for an appropriate definition of proximal algorithms in this setting for which convergence properties in function values can be proved. Some numerical examples showing the flexibility of our approach in comparison with standard (Hilbert, L^p with constant p) algorithms on some exemplar inverse problems (deconvolution, denoising) are shown.
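In the standard Hilbert (L^2) setting that the talk generalizes, the forward-backward iteration reduces to the classical proximal gradient method (ISTA). A minimal NumPy sketch of that baseline, for an l1-regularized least-squares problem; the problem data and parameters below are illustrative and not taken from the talk:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (the "backward" step).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(A, b, lam, step, n_iter=500):
    # ISTA for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1 in the
    # standard Hilbert (l2) setting; step should be <= 1 / ||A^T A||.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                         # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)  # backward (proximal) step
    return x

# Recover a sparse vector from noiseless random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
x_hat = proximal_gradient(A, b, lam=0.1, step=1.0 / np.linalg.norm(A, 2) ** 2)
```

In the variable exponent setting of the talk, the gradient step above must additionally pass through a duality mapping, since gradient and iterates no longer live in the same space.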

22
Nov

2021
Kai Bergermann (TU Chemnitz)
Matrix function-based centrality measures for multiplex networks

numerical analysis seminar

We put established Krylov subspace methods and Gauss quadrature rules to new use by generalizing the class of matrix function-based centrality measures from single-layered to multiplex networks. Our approach relies on the supra-adjacency matrix as the network representation, which has already been used to generalize eigenvector centrality to temporal and multiplex networks. We discuss the cases of unweighted and weighted as well as undirected and directed multiplex networks and present numerical studies on the convergence of the respective methods, which typically require only a few Krylov subspace iterations. The focus of the numerical experiments is put on urban public transport networks.
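As a point of reference, a common matrix function-based centrality is total communicability, the row sums of exp(S) for the supra-adjacency matrix S. A small sketch, assuming a node-aligned two-layer multiplex with uniform inter-layer coupling omega (an illustrative construction, not the authors' code); `expm_multiply` applies exp(S) to a vector without ever forming the matrix exponential, which is the role Krylov methods play at scale:

```python
import numpy as np
from scipy.sparse import block_diag, identity, kron
from scipy.sparse.linalg import expm_multiply

def supra_adjacency(layers, omega):
    # Supra-adjacency of a node-aligned multiplex: intra-layer adjacency
    # blocks on the diagonal plus uniform coupling omega between the
    # replicas of each node across layers (one common convention).
    L, n = len(layers), layers[0].shape[0]
    intra = block_diag(layers)
    inter = omega * kron(np.ones((L, L)) - np.eye(L), identity(n))
    return (intra + inter).tocsr()

# Two layers on 4 nodes: a path graph and a star centered at node 0.
path = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
star = np.array([[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]], float)
S = supra_adjacency([path, star], omega=0.5)

# Total communicability: row sums of exp(S), computed matrix-free.
tc = expm_multiply(S, np.ones(S.shape[0]))
node_tc = tc.reshape(2, 4).sum(axis=0)  # aggregate node scores over layers
```

Here node 0 (the star center and a path endpoint) receives the largest aggregated score, as its node replicas participate in the most walks through the supra-graph.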

15
Nov

2021
John Pearson
Some Developments in Preconditioning Time-Dependent PDE-Constrained Optimization Problems and Multiple Saddle Point Systems

numerical analysis seminar

Optimization problems subject to PDE constraints form a mathematical tool that can be applied to a wide range of scientific processes, including fluid flow control, medical imaging, biological and chemical processes, and many others. These problems involve minimizing some function arising from a physical objective, while obeying a system of PDEs which describe the process. It is necessary to obtain accurate solutions to such problems within a reasonable CPU time, in particular for time-dependent problems, for which the "all-at-once" solution can lead to extremely large linear systems. In this talk we consider Krylov subspace methods to solve such systems, accelerated by fast and robust preconditioning strategies. A key consideration is which time-stepping scheme to apply: much work to date has focused on the backward Euler scheme, as this method is stable and the resulting systems are amenable to existing preconditioners; however, this leads to linear systems of even larger dimension than those obtained when using other (higher-order) methods. We will summarise some recent advances in addressing this challenge, including a new preconditioner for the more difficult linear systems obtained from a Crank-Nicolson discretization, and a Newton-Krylov method for nonlinear PDE-constrained optimization. At the end of the talk we plan to discuss some recent developments in the preconditioning of multiple saddle-point systems, specifically positive definite preconditioners which may be applied within MINRES, which may find considerable utility for solving optimization problems as well as other applications. This talk is based on work with Stefan Güttel (University of Manchester), Santolo Leveque (University of Edinburgh), and Andreas Potschka (TU Clausthal).
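For illustration, the block-diagonal preconditioning idea can be sketched on a toy symmetric saddle-point system [[A, B^T], [B, 0]], solved with MINRES under the ideal preconditioner diag(A, B A^{-1} B^T); this is a generic textbook construction under assumed toy data, not the preconditioners developed in the talk:

```python
import numpy as np
from scipy.sparse import bmat, diags
from scipy.sparse.linalg import LinearOperator, minres

# Toy symmetric saddle-point system K = [[A, B^T], [B, 0]] with A SPD
# (a 1-D Laplacian-like stiffness matrix) and a random constraint block B.
n, m = 40, 20
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
rng = np.random.default_rng(1)
B = 0.1 * rng.standard_normal((m, n))
K = bmat([[A, B.T], [B, None]]).tocsr()

# Ideal block-diagonal preconditioner P = diag(A, S) with the exact
# Schur complement S = B A^{-1} B^T; with this choice, preconditioned
# MINRES converges in at most three iterations (the preconditioned
# matrix has exactly three distinct eigenvalues).
A_d = A.toarray()
S = B @ np.linalg.solve(A_d, B.T)

def apply_P_inv(v):
    # Apply P^{-1} blockwise (dense solves are fine at this toy size).
    return np.concatenate([np.linalg.solve(A_d, v[:n]),
                           np.linalg.solve(S, v[n:])])

P_inv = LinearOperator((n + m, n + m), matvec=apply_P_inv)
rhs = np.ones(n + m)
sol, info = minres(K, rhs, M=P_inv)
```

In practice the exact Schur complement is unaffordable, so the art, and the subject of the talk, lies in cheap spectrally equivalent approximations of these blocks that keep the iteration count small and robust.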

02
Nov

2021
Nick Vannieuwenhoven
Riemannian optimization for the tensor rank decomposition

numerical analysis seminar

The tensor rank decomposition or canonical polyadic decomposition (CPD) is a generalization of low-rank matrix factorization from matrices to higher-order tensors. In many applications, multi-dimensional data can be meaningfully approximated by a low-rank CPD. In this talk, I will describe a Riemannian optimization method for approximating a tensor by a low-rank CPD. This is a type of optimization method in which the domain is a smooth manifold, i.e. a curved geometric object. The presented method achieved execution-time improvements of up to two orders of magnitude on challenging small-scale dense tensors, compared with state-of-the-art nonlinear least squares solvers.
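For context, a classical baseline that Riemannian and nonlinear least squares methods are measured against is alternating least squares (ALS). A minimal NumPy sketch of ALS for a rank-R CPD of a third-order tensor (illustrative only; this is the textbook algorithm, not the Riemannian method of the talk):

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move the chosen axis first, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # Column-wise Kronecker product, shape (I*J, R).
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cpd_als(T, rank, n_iter=200, seed=0):
    # Plain alternating least squares for a rank-R CPD of a third-order
    # tensor: fix all factor matrices but one, solve a linear
    # least-squares problem for that one, and cycle over the modes.
    rng = np.random.default_rng(seed)
    U = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(n_iter):
        for mode in range(3):
            others = [U[m] for m in range(3) if m != mode]
            KR = khatri_rao(others[0], others[1])
            # Least-squares update: solve U[mode] @ KR.T ~= T_(mode).
            U[mode] = unfold(T, mode) @ np.linalg.pinv(KR.T)
    return U

# An exactly rank-2 tensor: ALS should reconstruct it to high accuracy.
rng = np.random.default_rng(42)
F = [rng.standard_normal((s, 2)) for s in (5, 6, 4)]
T = np.einsum('ir,jr,kr->ijk', *F)
U = cpd_als(T, rank=2)
T_hat = np.einsum('ir,jr,kr->ijk', *U)
```

A Riemannian method instead treats the set of rank-R tensors (via its factor parametrization) as a smooth manifold and takes curvature-aware steps, which is where the reported speedups over such alternating and nonlinear least squares baselines come from.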