Archive 2023-2024

Local organizing committee


SPOT 88 - Monday, March 11, 2024 (ENSEEIHT)

14h – Giancarlo Bigi, Università di Pisa, Italy

Title: Equilibria and bilevel variational problems via approximation

Abstract: Bilevel structures provide natural models for the selection between the multiple equilibria of variational problems such as, for instance, multi-agent systems that can be partially controlled. The lower level describes the equilibria, while the upper level explicitly addresses the selection criterion through a suitable objective function. Hierarchical programs of this kind are simpler than more general bilevel structures, as the lower-level problems are non-parametric with respect to the upper-level variables.
This talk focuses on hierarchical programs whose non-parametric lower-level is given by a monotone variational inequality. In order to tackle this kind of bilevel variational problems, suitable approximated versions of the lower-level are exploited. On the one hand, the approximation does not perturb the original [exact] program too much and allows for some additional flexibility in the choice. On the other hand, it allows relying on suitable exact penalty schemes by recovering those regularity conditions that the original problem does not satisfy.
Unlike Tikhonov-type approaches, exact penalisation allows bounding the penalty parameters and therefore prevents the numerical issues caused by their blow-up. However, turning exact penalisation into concrete convergent algorithms requires convexity assumptions that the approximation does not guarantee unless the variational inequality is affine. In this latter case the convergence properties of the resulting algorithms are analysed in detail and some preliminary numerical results are reported. Finally, some pointers are given for the general case: an approximation of the corresponding Minty variational inequality can be exploited to recover convexity, so that in principle the penalisation approach works as well. However, the degree of approximation worsens and, above all, tough computational challenges arise, since convexity is achieved through the value function of nonconvex programs.
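For readers unfamiliar with the construction, the penalisation idea can be sketched with a gap function (the notation below is ours, not necessarily the speaker's):

```latex
% Lower level: a non-parametric monotone variational inequality VI(F, C)
\text{find } x^\star \in C \text{ such that } \langle F(x^\star),\, y - x^\star \rangle \ge 0 \quad \forall\, y \in C
% Gap-function reformulation: g is nonnegative and vanishes exactly on the VI solutions
g(x) \;=\; \sup_{y \in C}\, \langle F(x),\, x - y \rangle \;\ge\; 0, \qquad g(x) = 0 \iff x \text{ solves } VI(F, C)
% Penalty scheme: "exact" means a finite \rho already recovers the bilevel solutions
\min_{x \in C}\; f(x) + \rho\, g(x)
```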

15h – Emmanuelle Claeys, Université Paul Sabatier, Toulouse

Title: An introduction to reinforcement learning and multi-armed bandits for industrial problems

Abstract: In this talk we will present the interest of bandit models for RL problems (managing the exploration/exploitation trade-off) as well as in an industrial example. We will give an introduction to the Policy Iteration, Value Iteration and Q-learning algorithms, and then show how bandit models can limit the cost of exploration while providing theoretical guarantees. Finally, we will present an industrial application in the context of an A/B test.
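As a concrete illustration of the exploration/exploitation trade-off mentioned above, here is a minimal UCB1 sketch on two Bernoulli "page variants"; the arm means and horizon are invented for the example, not taken from the talk:

```python
import numpy as np

def ucb1(means, horizon, rng):
    """UCB1 on Bernoulli arms: after playing each arm once, pull the arm
    maximizing (empirical mean) + sqrt(2 ln t / n_pulls)."""
    k = len(means)
    counts = np.zeros(k)
    sums = np.zeros(k)
    for t in range(horizon):
        if t < k:
            arm = t                                     # initialization round
        else:
            bonus = np.sqrt(2.0 * np.log(t + 1) / counts)
            arm = int(np.argmax(sums / counts + bonus))
        reward = float(rng.random() < means[arm])       # simulated Bernoulli conversion
        counts[arm] += 1
        sums[arm] += reward
    return counts

rng = np.random.default_rng(0)
counts = ucb1([0.1, 0.6], horizon=5000, rng=rng)        # two hypothetical variants
```

The logarithmic exploration bonus is what yields the theoretical regret guarantees the abstract alludes to: the better variant ends up receiving the overwhelming majority of the traffic.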


 

SPOT 87 - Monday, February 5, 2024 (ENSEEIHT)

14h – Cassio Fraga Dantas (INRAE Montpellier)

Pushing the limits of safe screening techniques
Safe screening is a powerful tool to accelerate the convergence of sparse optimization solvers by performing early identification of zero coordinates while retaining strong theoretical guarantees. Initially proposed for the Lasso, these techniques have later been extended to a wider range of problems, including: the non-negative and group Lasso, sorted-L1 penalized estimation (SLOPE), regularized logistic regression, and even metric learning (via a regularized triplet-loss minimization) and SVM. In this talk, we explore recent approaches that expand the applicability of safe screening in two main directions: 1) tackling a larger family of data-fidelity functions by relaxing global regularity hypotheses into local ones; 2) moving away from the sparsity constraint and leveraging the same formalism on box-constrained problems.
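To make the screening mechanism concrete, here is a minimal sketch of a Gap Safe sphere test for the plain Lasso (our notation; the extensions discussed in the talk go well beyond this basic case):

```python
import numpy as np

def gap_safe_screen(X, y, lam, w):
    """Gap Safe sphere test for the Lasso  min_w 0.5*||y - Xw||^2 + lam*||w||_1.
    Returns a boolean mask; True means coordinate j is certified to be zero
    at the optimum, so it can be safely removed from the problem."""
    r = y - X @ w                                       # primal residual
    theta = r / max(lam, np.max(np.abs(X.T @ r)))       # dual-feasible point
    primal = 0.5 * (r @ r) + lam * np.abs(w).sum()
    dual = 0.5 * (y @ y) - 0.5 * lam ** 2 * np.sum((theta - y / lam) ** 2)
    radius = np.sqrt(2.0 * max(primal - dual, 0.0)) / lam   # gap-based sphere radius
    scores = np.abs(X.T @ theta) + radius * np.linalg.norm(X, axis=0)
    return scores < 1.0

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 10))
y = rng.standard_normal(30)
lam_max = np.max(np.abs(X.T @ y))
mask = gap_safe_screen(X, y, 1.1 * lam_max, np.zeros(10))
```

With w = 0 and a regularization above lam_max, the duality gap vanishes and the test certifies every coordinate as zero, matching the well-known fact that the Lasso solution is identically zero in that regime.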

15h – François Malgouyres (IMT Univ. Paul Sabatier Toulouse)

Two contributions to machine learning 
The presentation covers two separate topics.
     In the first part, I will motivate the study of geometric properties of neural networks, describe a notion of functional dimension and show some experiments on this quantity in a learning context. The experiments highlight a geometry-induced implicit regularization phenomenon.
     In the second part, I will describe an original support exploration algorithm for sparse support recovery. The new algorithm is an instance of the straight-through estimator. Its main interest is to be more exploratory than state-of-the-art algorithms and thus to perform better in difficult cases, when the columns of the sampling matrix are highly coherent.

SPOT 86 - Monday, December 4, 2023 (ENSEEIHT)

14h – Delphine Sinoquet (IFP-Energies Nouvelles, Paris)

Black-box optimization with hidden constraints

Real industrial studies often give rise to optimization problems involving time-consuming complex simulators that can produce failures or instabilities for some input sets: for instance, convergence issues of the numerical scheme for partial differential equations. The set of inputs corresponding to failures is often not known a priori and corresponds to a hidden constraint, also called a crash constraint. Since observing a simulation failure can be as costly as a feasible simulation, we seek to learn the feasible region in order to guide the optimization process towards areas without simulation failures. We therefore propose to couple black-box optimization methods with a Gaussian process classifier trained by active learning to learn the crash domain. A dedicated infill criterion is proposed to choose new simulations that, on the one hand, reduce the uncertainty on the learned feasible domain and, on the other hand, explore promising feasible regions with respect to the optimization objective.
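The coupling described above can be caricatured in a few lines. The helper names (`p_feasible`, `infill`) are ours, and a simple kernel-smoothing vote stands in for the Gaussian process classifier of the talk:

```python
import numpy as np

def p_feasible(x, X_obs, ok, h=0.3):
    """Kernel-smoothed estimate of P(no crash | x) from past runs:
    a cheap stand-in for the Gaussian process classifier of the talk."""
    w = np.exp(-((X_obs - x) ** 2).sum(axis=1) / (2.0 * h * h))
    return float(w @ ok / w.sum())

def infill(candidates, X_obs, ok, surrogate_mean, surrogate_std):
    """Pick the candidate maximizing (optimistic objective) * P(feasible):
    exploration of the objective is tempered by the learned crash domain."""
    scores = [(surrogate_mean(x) + surrogate_std(x)) * p_feasible(x, X_obs, ok)
              for x in candidates]
    return candidates[int(np.argmax(scores))]

# Toy data: runs on the left crashed (0), runs on the right succeeded (1).
X_obs = np.array([[-1.0, 0.0], [-1.1, 0.1], [1.0, 0.0], [1.1, -0.1]])
ok = np.array([0.0, 0.0, 1.0, 1.0])
cands = np.array([[-1.0, 0.0], [1.0, 0.0]])
best = infill(cands, X_obs, ok, lambda x: 0.0, lambda x: 1.0)
```

With a flat surrogate, the criterion is driven entirely by the feasibility estimate and steers the next simulation away from the crash region.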

14h45 – Emilien Flayac (ISAE-Supaéro, Toulouse)

Coupled estimation and nonlinear optimal control methods under uncertainty, with applications to aeronautics and robotics

Estimation and optimal control problems for nonlinear dynamical systems in uncertain environments arise in many fields of aerospace (trajectory planning and tracking, rendezvous, flow control, wing morphing, localization, mapping, etc.). Usually, the estimation and control problems are solved separately by invoking the classical separation principle. However, this principle does not hold in many practical cases, and a coupling then becomes necessary. The talk will thus cover a theoretical study of a new formulation of a coupled problem, several estimation and control methods that achieve this coupling, and their application to terrain-correlation navigation.

These two SPOT talks are part of the GdR-ROD day on Nonlinear Mathematical Programming (Programmation Mathématique Non Linéaire).


 

SPOT 85 - Monday, November 6, 2023 (ENSEEIHT – thesis defense room C002)

14h – Lucas Slot (ETH Zürich)

The Christoffel-Darboux kernel for topological data analysis

Persistent homology has been widely used to study the topology of point clouds in R^n. Standard approaches are very sensitive to outliers, and their computational complexity scales poorly with the number of data points. In this talk, we introduce a novel persistence module for a point cloud using the theory of Christoffel-Darboux kernels. This module is robust to (statistical) outliers in the data and can be computed in time linear in the number of data points. We illustrate the benefits and limitations of our new module with various numerical examples in R^n, for n = 1, 2, 3.
Our work expands upon recent applications of Christoffel-Darboux kernels in the context of statistical data analysis and geometric inference due to Lasserre-Pauwels-Putinar (2022). There, these kernels are used to construct a polynomial whose level sets capture the geometry of a point cloud in a precise sense. We show that the persistent homology associated to the sublevel set filtration of this polynomial is stable with respect to the Wasserstein distance. Moreover, we show that the persistent homology of this filtration can be computed in singly exponential time in the ambient dimension n, using a recent algorithm of Basu-Karisani (2022).

Based on joint work with Pepijn Roos Hoefgeest.
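As a toy illustration of the objects involved, the empirical Christoffel function of a 1-d point cloud can be computed directly from its moment matrix (a minimal sketch, not the persistence module of the talk):

```python
import numpy as np

def christoffel(points, degree=4, reg=1e-9):
    """Empirical Christoffel function of a 1-d point cloud:
    Lambda(x) = 1 / (v(x)^T M^{-1} v(x)), where v is the monomial basis
    up to `degree` and M is the empirical moment matrix of the cloud."""
    V = np.vander(points, degree + 1, increasing=True)      # rows are v(x_i)
    M = V.T @ V / len(points) + reg * np.eye(degree + 1)    # regularized moments
    Minv = np.linalg.inv(M)
    def Lam(x):
        v = np.vander(np.atleast_1d(np.asarray(x, dtype=float)),
                      degree + 1, increasing=True)
        return 1.0 / np.einsum('ij,jk,ik->i', v, Minv, v)   # quadratic form per point
    return Lam

cloud = np.random.default_rng(0).normal(size=400)           # cloud concentrated near 0
Lam = christoffel(cloud)
```

The Christoffel function is large where the cloud is dense and decays quickly away from it, which is why its level sets capture the geometry of the data in the sense described above.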

15h – Frédéric Messine (ENSEEIHT-LAPLACE, Toulouse)

Topology optimization in electromagnetism, and then the creation of a startup

In this talk, I will present topology optimization in fairly general terms: its interest and its difficulties. In a second part, we will discuss adjoint methods in electromagnetism for computing derivatives efficiently. This so-called adjoint method was introduced in mechanics, where it has produced superb results for structural design. We will then see the interest of transposing these methods to electromagnetism, in order to design plasma thrusters for satellites and electrical machines.
In a third and final part, I will share some of my personal experience as the co-founder of a startup, and I will try to answer the questions: how, when and why can an academic end up involved in the creation of a start-up?


 

SPOT 84 - Monday, October 9, 2023 (ENSEEIHT – thesis defense room C002)

14h – Jérôme Malick (CNRS, LJK, Grenoble)

Optimization for more robust, resilient, responsible AI

This talk will be a gentle introduction to — and a passionate advocacy for — distributionally robust optimization (DRO). Beyond the classical empirical risk minimization paradigm in machine learning, DRO has the ability to effectively address data uncertainty and distribution ambiguity, thus paving the way to more robust and fair models. In this talk, I will highlight the key mathematical ideas, the main algorithmic challenges, and some versatile applications of DRO.

15h – Franck Iutzeler (Univ. Paul Sabatier, IMT, Toulouse)

Entropy-regularized Distributionally Robust Optimization for statistical learning

In statistical learning, the distribution of the data can change between training and practical use cases due to biases or distribution shifts. A remedy for this obstacle is to train the model against the worst distribution for the objective among those close to the data (in the sense of the Wasserstein distance). Since this problem is often intractable in practice, we first show how to regularize it using entropic regularization in order to implement numerical methods, while controlling the approximation error. Finally, we will also consider the statistical guarantees of such models.
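As a toy illustration of the smoothing mechanism (not the actual Wasserstein-DRO formulation of the talk, which regularizes over transport plans), entropic regularization replaces a hard maximum over scenarios by a log-sum-exp:

```python
import numpy as np

def smoothed_worst_case(losses, eps):
    """Entropic (log-sum-exp) smoothing of the worst-case loss over n scenarios:
    eps * log( mean_i exp(loss_i / eps) ).
    As eps -> 0 this recovers max(losses); as eps -> infinity, the mean."""
    l = np.asarray(losses) / eps
    m = l.max()                                  # shift for numerical stability
    return eps * (m + np.log(np.exp(l - m).mean()))

losses = np.array([0.1, 0.5, 2.0])               # invented scenario losses
smooth = smoothed_worst_case(losses, eps=0.1)
```

The smoothed value always lies between the average and the worst-case loss, and the regularization parameter controls how closely the intractable hard maximum is approximated, mirroring the approximation-control trade-off described in the abstract.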