Archive 2023-2024


SPOT 90 – Monday, July 1, 2024 – Amphi A001

14h – Jonathan Chirinos Rodriguez (University of Genoa)

Machine Learning Techniques for Inverse Problems

Inverse problems serve as a general playground for analyzing many real-world applications. Typical examples are MRI, X-ray CT, and image recovery. An inverse problem involves reconstructing an unknown source from limited and possibly distorted observations. The so-called data-driven techniques for solving inverse problems have become popular in recent years due to (i) their effectiveness in many practical scenarios and (ii) the reduced need for prior knowledge. Yet, few theoretical guarantees have been provided to date. In this talk, we aim to bridge this gap in several key directions.
First, we propose and study a statistical machine learning approach, based on Empirical Risk Minimization (ERM), to determine the best regularization parameter given a finite set of examples. Our main contribution is a theoretical analysis, showing that, if the number of examples is large enough, this approach is optimal and adaptive to the noise level and the smoothness of the solution. We showcase the applicability of our framework to a broad class of inverse problems, including spectral regularization methods and sparsity-promoting norms. Numerical simulations further support and illustrate the theoretical findings.
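To make the ERM idea concrete, here is a minimal sketch (not the speaker's code), assuming a Tikhonov-regularized linear inverse problem and a finite grid of candidate parameters; all names are illustrative.

```python
# Hedged sketch: pick a regularization parameter by empirical risk
# minimization over supervised pairs (y_i, x_i) with y_i = A x_i + noise.
import numpy as np

def tikhonov(A, y, lam):
    """Tikhonov reconstruction x = (A^T A + lam I)^{-1} A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

def erm_lambda(A, examples, grid):
    """Pick the lambda in `grid` minimizing the empirical risk over examples."""
    def risk(lam):
        return np.mean([np.linalg.norm(tikhonov(A, y, lam) - x) ** 2
                        for y, x in examples])
    return min(grid, key=risk)
```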
Moreover, we introduce a data-driven approach for constructing (firmly) nonexpansive operators. We demonstrate the utility of such a technique in the context of Plug-and-Play methods, where one proximal operator in classical algorithms such as Forward-Backward Splitting or the Chambolle–Pock primal-dual iteration is substituted by an operator that aims to be firmly nonexpansive. We establish a rigorous theoretical framework for learning such operators using an ERM approach. Further, we derive a solution that is guaranteed to be firmly nonexpansive and piecewise affine on the convex hull of the training data. We prove that such an operator converges to the best empirical solution as the number of points inside the hull increases. Finally, we propose a practical implementation strategy and an application in the context of image denoising.
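A hedged sketch of the Plug-and-Play mechanism described above, with Forward-Backward Splitting and a learned operator `D` standing in for the proximal step; the operator, step size, and iteration count are placeholders.

```python
# Plug-and-Play forward-backward: the prox of the regularizer is replaced
# by a learned operator D, which is meant to be (firmly) nonexpansive.
import numpy as np

def pnp_forward_backward(x0, A, y, D, step=1.0, n_iter=100):
    """min 0.5*||Ax - y||^2 + g(x), with prox_g replaced by the learned D."""
    x = x0
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)   # gradient of the data-fidelity term
        x = D(x - step * grad)     # plug-and-play "proximal" step
    return x
```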

15h – Charles Dossal (Institut de Mathématiques de Toulouse)

An overview of recent results on inertial algorithms in a deterministic setting

Inertial algorithms for optimization date back to the 1960s and the work of Polyak. After almost 30 years of dormancy, the study of these algorithms has seen renewed interest in recent years. While machine learning pushes us to ask questions about nonconvexity and stochastic algorithms, we will see that even in a setting as simple as deterministic convex optimization, not all the results were known, and many grey areas have been cleared up in recent years by various research teams. These new results deploy tools and provide keys for tackling stochastic algorithms and the optimization of nonconvex functions. This talk takes a look at them and explains why, in the end, Nesterov and Polyak had not already exhausted the question by the early 1980s.
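For reference, a Nesterov-type inertial iteration of the kind surveyed in the talk; the step size and the momentum rule below are one standard choice among many.

```python
# Accelerated gradient method with the classical t_k momentum rule:
# x_{k+1} = y_k - step * grad f(y_k),  y_{k+1} = x_{k+1} + beta_k (x_{k+1} - x_k).
import numpy as np

def nesterov(x0, grad_f, step, n_iter=500):
    x, y, t = x0, x0.copy(), 1.0
    for _ in range(n_iter):
        x_new = y - step * grad_f(y)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # inertial extrapolation
        x, t = x_new, t_new
    return x
```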


 

SPOT 89 – Monday, May 27, 2024

14h – Aurélien Bellet (INRIA Montpellier)

Differentially Private Optimization with Coordinate Descent and Fixed-Point Iterations

Machine learning models are known to leak information about individual data points used to train them. Differentially private optimization aims to address this problem by training models with strong differential privacy guarantees. This is achieved by adding controlled noise to the optimization process, for instance during the gradient computation steps in the case of the popular DP-SGD algorithm. In this talk, I will discuss how to go beyond DP-SGD by (i) introducing private coordinate descent algorithms that can better exploit the problem structure, and (ii) leveraging the framework of fixed-point iterations to design and analyze new private optimization algorithms for centralized and federated settings.
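As a reference point for the discussion, here is a schematic DP-SGD update (clip per-example gradients, then add calibrated Gaussian noise); the clipping threshold and noise multiplier are placeholders, and the privacy accounting is omitted.

```python
# One DP-SGD step following the standard recipe: per-example gradient
# clipping plus Gaussian noise on the summed gradient.
import numpy as np

def dp_sgd_step(w, per_example_grads, lr=0.1, clip=1.0, noise_mult=1.0, rng=None):
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noisy_sum = sum(clipped) + rng.normal(0.0, noise_mult * clip, size=w.shape)
    return w - lr * noisy_sum / len(per_example_grads)
```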

15h – Tung Le (Toulouse School of Economics)

Multi-layer sparse matrix factorization from an optimization point of view

Approximating a dense matrix by a product of sparse factors is a fundamental problem for many signal processing and machine learning tasks. It can be formulated as a constrained optimization problem and decomposed into two subproblems: finding the positions of the non-zero coefficients in the sparse factors, and determining their values. While the first step is usually seen as the most challenging one due to its combinatorial nature, this talk focuses on the second step, referred to as sparse matrix approximation with fixed support (FSMF). Even in the setting with only two factors, we show that (FSMF) is already NP-hard and that the existence of minimizers is not always guaranteed. On a brighter note, we present a non-trivial family of support constraints for which optimal solutions can be computed efficiently (in polynomial time). This result is further extended to the setting of multiple factors with the so-called "butterfly structure", i.e., the butterfly factorization problem. In particular, our proposed algorithm guarantees that the approximation error given by its output factors is at most a constant times the optimal approximation error. To the best of our knowledge, this is the first "quasi-optimal" guarantee for the butterfly factorization problem.
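To fix ideas, here is a hedged sketch of the two-factor (FSMF) problem solved by alternating least squares restricted to the fixed supports; this illustrates the formulation only, not the talk's polynomial-time algorithm for the tractable support families.

```python
# Alternating minimization for min ||A - X @ Y||_F^2 with supp(X), supp(Y)
# fixed by boolean masks. Each row of X solves a small least-squares problem.
import numpy as np

def update_rows(A, Y, support):
    """Row-wise least-squares update of X in ||A - X @ Y||_F^2."""
    X = np.zeros((A.shape[0], Y.shape[0]))
    for i in range(A.shape[0]):
        idx = np.flatnonzero(support[i])        # allowed nonzeros of row i
        if idx.size:
            X[i, idx], *_ = np.linalg.lstsq(Y[idx].T, A[i], rcond=None)
    return X

def fsmf(A, support_X, support_Y, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    Y = rng.standard_normal(support_Y.shape) * support_Y
    for _ in range(n_iter):
        X = update_rows(A, Y, support_X)
        Y = update_rows(A.T, X.T, support_Y.T).T   # same update applied to A^T
    return X, Y
```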


 

SPOT 88 - Monday, March 11, 2024 (ENSEEIHT)

14h – Giancarlo Bigi, Università di Pisa, Italy

Title: Equilibria and bilevel variational problems via approximation

Abstract: Bilevel structures provide natural models for the selection between the multiple equilibria of variational problems, such as multi-agent systems that can be partially controlled. The lower level describes the equilibria, while the upper level explicitly addresses the selection criterion through a suitable objective function. Hierarchical programs of this kind are simpler than more general bilevel structures, as the lower-level problems are non-parametric with respect to the upper-level variables.
This talk focuses on hierarchical programs whose non-parametric lower level is given by a monotone variational inequality. In order to tackle this kind of bilevel variational problem, suitable approximated versions of the lower level are exploited. On the one hand, the approximation does not perturb the original [exact] program too much and allows for some additional flexibility in the choice. On the other hand, it allows relying on suitable exact penalty schemes by recovering the regularity conditions that the original problem does not satisfy.
Unlike Tikhonov-type approaches, exact penalisation allows bounding the penalty parameters and therefore prevents the numerical issues caused by their blow-up. However, turning exact penalisation into concrete convergent algorithms requires convexity assumptions that the approximation does not guarantee unless the variational inequality is affine. In this latter case, the convergence properties of the resulting algorithms are analysed in detail and some preliminary numerical results are reported. Finally, a few pointers are given for the general case: an approximation of the corresponding Minty variational inequality can be exploited to recover convexity, so that in principle the penalisation approach works as well. However, the degree of approximation worsens and, above all, tough computational challenges arise, since convexity is achieved through the value function of nonconvex programs.
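For reference, the hierarchical programs under discussion can be written schematically as the selection, via an upper-level objective, among the solutions of a non-parametric monotone variational inequality VI(F, C); the notation below is generic, not the speaker's:

```latex
\min_{x} \; \varphi(x)
\quad \text{s.t.} \quad
x \in \mathrm{SOL}(F, C) :=
\{\, \bar{x} \in C \;:\; \langle F(\bar{x}),\, y - \bar{x} \rangle \ge 0
\;\; \forall\, y \in C \,\}
```
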
15h – Emmanuelle Claeys, Université Paul Sabatier, Toulouse

Title: Introduction to RL and multi-armed bandits for industrial problems

Abstract: In this talk, we will present the interest of bandit models for RL problems (managing the exploration/exploitation trade-off) as well as in an industrial example. We will give an introduction to the Policy Iteration, Value Iteration, and Q-learning algorithms, then show how bandit models make it possible to limit the cost of exploration while offering theoretical guarantees. Finally, we will present an industrial application in the case of an A/B test.
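As a concrete reference, here is a minimal UCB1 loop of the kind used to limit exploration cost in an A/B test; the arms, rewards, and horizon are illustrative placeholders.

```python
# UCB1: pull the arm with the highest mean-plus-confidence-bound score.
import numpy as np

def ucb1(arms, horizon):
    """arms: list of callables returning a stochastic reward in [0, 1]."""
    n = np.zeros(len(arms))   # number of pulls per arm
    s = np.zeros(len(arms))   # cumulated reward per arm
    for t in range(1, horizon + 1):
        if t <= len(arms):
            a = t - 1          # initialize: pull each arm once
        else:
            a = int(np.argmax(s / n + np.sqrt(2 * np.log(t) / n)))
        n[a] += 1
        s[a] += arms[a]()
    return s / n               # empirical conversion rate of each variant

# Example A/B test, two variants converting at 5% and 7%:
# ucb1([lambda: float(np.random.rand() < 0.05),
#       lambda: float(np.random.rand() < 0.07)], horizon=10_000)
```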


 

SPOT 87 - Monday, February 5, 2024

14h – Cassio Fraga Dantas (INRAE Montpellier)

Pushing the limits of safe screening techniques

Safe screening is a powerful tool to accelerate the convergence of sparse optimization solvers by identifying zero coordinates early while retaining strong theoretical guarantees. Initially proposed for the Lasso, these techniques were later extended to a broader range of problems, including non-negative and group Lasso, sorted-L1 penalized estimation (SLOPE), regularized logistic regression, and even metric learning (via regularized triplet loss minimization) and SVMs. In this talk, we explore recent approaches that expand the applicability of safe screening in two main directions: 1) tackling a larger family of data-fidelity functions by relaxing global regularity hypotheses into local ones; 2) moving away from the sparsity constraint and leveraging the same formalism on box-constrained problems.
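For illustration, a hedged sketch of a "gap safe" sphere test for the Lasso, following the general recipe of Fercoq, Gramfort, and Salmon (2015); the dual scaling below is simplified, and the helper is not from the talk.

```python
# Gap safe screening for min 0.5*||y - X w||^2 + lam*||w||_1: feature j with
# |x_j^T theta| + r*||x_j|| < 1 is provably zero at the optimum, where theta
# is dual-feasible and r is the duality-gap-based safe radius.
import numpy as np

def gap_safe_screen(X, y, w, lam):
    res = y - X @ w
    theta = res / max(lam, np.max(np.abs(X.T @ res)))   # dual-feasible point
    primal = 0.5 * (res @ res) + lam * np.abs(w).sum()
    dual = 0.5 * (y @ y) - 0.5 * lam ** 2 * np.sum((theta - y / lam) ** 2)
    radius = np.sqrt(2 * max(primal - dual, 0.0)) / lam
    scores = np.abs(X.T @ theta) + radius * np.linalg.norm(X, axis=0)
    return scores < 1.0   # True => coordinate safely fixed to zero
```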

15h – François Malgouyres (IMT Univ. Paul Sabatier Toulouse)

Two contributions to machine learning 
The presentation covers two separate topics.
In the first part, I will motivate the study of geometric properties of neural networks, describe a notion of functional dimension, and show some experiments on this quantity in a learning context. The experiments highlight a geometry-induced implicit regularization phenomenon.
In the second part, I will describe an original support exploration algorithm for sparse support recovery. The new algorithm is an instance of the straight-through estimator. Its main interest is to be more exploratory than state-of-the-art algorithms and thus to perform better in difficult cases, where the columns of the sampling matrix are highly coherent.
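A generic sketch of the straight-through mechanism that the support exploration algorithm builds on: the forward pass projects an auxiliary dense variable onto k-sparse vectors, while the gradient computed at the sparse point updates the dense variable directly. This illustrates the mechanism only, not the exact algorithm of the talk.

```python
# Straight-through-estimator step for sparse recovery of y ~ A x, ||x||_0 <= k.
import numpy as np

def hard_threshold(z, k):
    x = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-k:]   # support = k largest entries of z
    x[idx] = z[idx]
    return x

def ste_sparse_recovery(A, y, k, lr=0.1, n_iter=200):
    z = A.T @ y                         # auxiliary (dense) variable
    for _ in range(n_iter):
        x = hard_threshold(z, k)        # forward: candidate sparse iterate
        grad = A.T @ (A @ x - y)        # gradient at the sparse point
        z -= lr * grad                  # backward: straight-through update of z
    return hard_threshold(z, k)
```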

SPOT 86 - Monday, December 4, 2023

14h – Delphine Sinoquet (IFP-Energies Nouvelles, Paris)

Black-box optimization with hidden constraints

Real industrial studies often give rise to optimization problems involving time-consuming complex simulators that can produce failures or instabilities for some input sets: for instance, convergence issues of the numerical scheme for partial differential equations. The set of inputs corresponding to failures is often not known a priori and corresponds to a hidden constraint, also called a crash constraint. Since the observation of a simulation failure might be as costly as a feasible simulation, we seek to learn the feasible region to guide the optimization process toward areas without simulation failures. Therefore, we propose to couple black-box optimization methods with a Gaussian process classifier trained by active learning to learn the crash domain. A dedicated infill criterion is proposed to choose new simulations so as to, on the one hand, reduce the uncertainty on the learned feasible domain and, on the other hand, explore promising feasible regions with respect to the optimization objective.
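A hedged sketch of such a coupling, assuming scikit-learn Gaussian processes: a classifier learns the crash domain, and an expected-improvement criterion is weighted by the predicted feasibility probability. The criterion and kernels below are generic stand-ins, not the dedicated infill criterion of the talk.

```python
# Feasibility-weighted infill: rank candidates by EI(x) * P(no crash at x).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import (GaussianProcessClassifier,
                                      GaussianProcessRegressor)

def next_point(X_all, ok, y_obj, candidates):
    """X_all: evaluated inputs; ok: boolean 'run succeeded' labels;
    y_obj: objective values of the successful runs; candidates: pool to rank."""
    clf = GaussianProcessClassifier().fit(X_all, ok)       # learn crash domain
    reg = GaussianProcessRegressor(normalize_y=True).fit(X_all[ok], y_obj)
    mu, sigma = reg.predict(candidates, return_std=True)
    z = (y_obj.min() - mu) / np.maximum(sigma, 1e-9)
    ei = (y_obj.min() - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    p_ok = clf.predict_proba(candidates)[:, 1]             # predicted feasibility
    return candidates[np.argmax(ei * p_ok)]
```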

14h45 – Emilien Flayac (ISAE-Supaéro, Toulouse)

Coupled estimation and nonlinear optimal control methods under uncertainty, with applications to aeronautics and robotics

Estimation and optimal control problems for nonlinear dynamical systems in uncertain environments arise in many fields of aerospace (trajectory planning and tracking, rendezvous, flow control, wing morphing, localization, mapping, etc.). Usually, the estimation and control problems are solved separately by invoking the classical separation principle. However, this principle does not hold in many practical cases, and a coupling is then necessary. The talk will cover a theoretical study of a new formulation of a coupled problem, several estimation and control methods that achieve this coupling, and their application to the terrain-aided navigation problem.

These two SPOT talks are part of the GdR-ROD day "Programmation Mathématique Non Linéaire".


 

SPOT 85 - Monday, November 6, 2023

14h – Lucas Slot (ETH Zürich)

The Christoffel-Darboux kernel for topological data analysis

Persistent homology has been widely used to study the topology of point clouds in R^n. Standard approaches are very sensitive to outliers, and their computational complexity scales poorly with the number of data points. In this talk, we introduce a novel persistence module for a point cloud using the theory of Christoffel-Darboux kernels. This module is robust to (statistical) outliers in the data and can be computed in time linear in the number of data points. We illustrate the benefits and limitations of our new module with various numerical examples in R^n, for n=1,2,3.
Our work expands upon recent applications of Christoffel-Darboux kernels in the context of statistical data analysis and geometric inference due to Lasserre-Pauwels-Putinar (2022). There, these kernels are used to construct a polynomial whose level sets capture the geometry of a point cloud in a precise sense. We show that the persistent homology associated to the sublevel set filtration of this polynomial is stable with respect to the Wasserstein distance. Moreover, we show that the persistent homology of this filtration can be computed in singly exponential time in the ambient dimension n, using a recent algorithm of Basu-Karisani (2022).

Based on joint work with Pepijn Roos Hoefgeest.
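To give a flavor of the construction, here is a small sketch (1D for brevity) of the polynomial q(x) = v(x)^T M^{-1} v(x) built from the empirical moment matrix, whose sublevel sets capture the geometry of the cloud; the regularization eps is a practical placeholder, not part of the theory.

```python
# Christoffel-Darboux machinery: empirical moment matrix of monomials up to
# degree d, then evaluation of q(x) = v(x)^T M^{-1} v(x).
import numpy as np

def monomials(x, d):
    """Feature vectors v(x) = (1, x, ..., x^d), one row per point."""
    return np.vander(np.atleast_1d(x), d + 1, increasing=True)

def cd_polynomial(data, d, eps=1e-8):
    V = monomials(data, d)                          # N x (d+1) design matrix
    M = V.T @ V / len(data) + eps * np.eye(d + 1)   # empirical moment matrix
    M_inv = np.linalg.inv(M)
    return lambda x: np.einsum('ij,jk,ik->i',
                               monomials(x, d), M_inv, monomials(x, d))

# q = cd_polynomial(np.random.randn(500), d=6); low q(x) <=> x near the data.
```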

15h – Frédéric Messine (ENSEEIHT-LAPLACE, Toulouse)

Topology optimization in electromagnetics, and then founding a startup

In this talk, I will present topology optimization in fairly general terms: its appeal and its difficulties. I will then discuss adjoint methods in electromagnetics for computing derivatives efficiently. This so-called adjoint method was introduced in mechanics, where it has produced superb results for structural design. We will then see the benefit of transposing these methods to electromagnetics in order to design plasma thrusters for satellites and electrical machines.
In a third and final part, I will share some of my personal experience as co-founder of a startup and will try to answer the questions: how, when, and why can an academic end up involved in creating a start-up?


 

SPOT 84 - Monday, October 9, 2023

14h – Jérôme Malick (CNRS, LJK, Grenoble)

Optimization for more robust, resilient, responsible AI

This talk will be a gentle introduction to, and a passionate advocacy for, distributionally robust optimization (DRO). Beyond the classical empirical risk minimization paradigm in machine learning, DRO has the ability to effectively address data uncertainty and distribution ambiguity, thus paving the way to more robust and fair models. In this talk, I will highlight the key mathematical ideas, the main algorithmic challenges, and some versatile applications of DRO.
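For readers who want the formulation at a glance, Wasserstein DRO replaces empirical risk minimization by a worst case over a ball of distributions around the empirical measure; the notation below is schematic:

```latex
\min_{\theta} \; \sup_{Q \,:\, W(Q,\, \hat{P}_n) \le \rho}
\; \mathbb{E}_{Q}\big[\ell(\theta, \xi)\big]
```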

15h – Franck Iutzeler (Univ. Paul Sabatier, IMT, Toulouse)

Entropy-regularized Distributionally Robust Optimization for statistical learning

In statistical learning, the distribution of the data can change between training and practical use-cases due to biases or distribution shifts. A remedy for this obstacle is to train a model on the worst distribution for the objective that is close to the data (in the sense of the Wasserstein distance). Since this problem is often intractable in practice, we first show how to regularize it using entropic regularization in order to implement numerical methods, while controlling this approximation.  Finally, we will also consider the statistical guarantees of such models.