Archive 2022-2023

SPOT 83 – Monday 12 June 2023 (Amphi A002 de l’ENSEEIHT)

14h – Zheng Qu (University of Hong Kong)

Globally solving concave quadratic program via doubly nonnegative relaxation
Globally solving concave quadratic program via doubly nonnegative relaxation
Motivated by an optimization problem arising from the detection of new protein/genome sequences, we consider the problem of maximizing a convex quadratic function over a bounded polyhedral set. We design a new framework based on SDP relaxation and a cutting-plane method for solving the associated reference value problem. The major novelty is a new way to generate valid cuts through the doubly nonnegative (DNN) relaxation. We establish various theoretical properties of the DNN relaxation, including its equivalence with the Shor relaxation of the equivalent quadratically constrained problem, strong duality, and the generation of valid cuts from an approximate solution of the DNN relaxation returned by an arbitrary SDP solver. Computational results on both real and synthetic data demonstrate the efficiency of the proposed method and its ability to solve high-dimensional problems with dense data. In particular, our new algorithm successfully solved, in 3 days, the problem arising from computational biology for a dataset containing more than 300,000 instances of dimension 100. By contrast, CPLEX and Gurobi are estimated to need years of computational time for the same dataset on the same computing platform.
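For illustration only (this is not the speaker's algorithm or code), here is a minimal sketch of a doubly nonnegative relaxation on a toy instance, maximizing a convex quadratic over the box [0, 1]^n with cvxpy; the variable names and the added diagonal (RLT-type) cut are illustrative choices.

# Minimal sketch (illustrative, not the speaker's method): a doubly nonnegative (DNN)
# relaxation of the concave QP  max x^T Q x  s.t.  0 <= x <= 1,  with Q PSD.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 6
G = rng.standard_normal((n, n))
Q = G.T @ G                      # positive semidefinite, i.e. a convex objective

M = cp.Variable((n + 1, n + 1), symmetric=True)   # lifted matrix [[1, x^T], [x, X]]
x, X = M[0, 1:], M[1:, 1:]
constraints = [
    M >> 0,                # Shor PSD constraint linking x and its lift X
    M >= 0,                # entrywise nonnegativity: the "doubly nonnegative" part
    M[0, 0] == 1,
    x <= 1,                # box constraint on the original variable
    cp.diag(X) <= x,       # RLT-type cut from x_i (1 - x_i) >= 0; keeps the SDP bounded
]
prob = cp.Problem(cp.Maximize(cp.trace(Q @ X)), constraints)
prob.solve(solver=cp.SCS)
print("DNN upper bound on the concave QP:", prob.value)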

15h – Hippolyte Labarrière (INSA Toulouse)

Taming inertia in convex optimization
This talk deals with first-order inertial methods in convex optimization. Inertia is an effective way to improve the convergence speed of classical algorithms for minimizing convex functions. It can, however, have adverse effects when poorly calibrated, producing oscillatory behavior of the iterates.
After presenting the parallel that can be drawn between optimization schemes and dynamical systems, we will present two types of strategy for taming inertia. The first consists in cutting off the inertia by hand via restart schemes, the difficulty being to determine a relevant restart rule. The second relies on the analysis of a dynamical system to introduce new schemes containing additional friction terms. This approach is motivated by the oscillation-damping properties enjoyed by the dynamical system and the schemes derived from it.
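As a minimal illustration of the first strategy (restart), here is a sketch of an accelerated gradient method with a gradient-based restart rule on a toy least-squares problem; it is not the speaker's code, and the restart test used is the classical one of O'Donoghue and Candès.

# Minimal sketch: Nesterov-type inertia with a gradient restart that resets the momentum
# whenever the inertial direction stops being a descent direction.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
b = rng.standard_normal(200)
f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
grad = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient

x = y = np.zeros(50)
t = 1.0
for _ in range(500):
    g = grad(y)
    x_new = y - g / L                  # gradient step from the extrapolated point
    if g @ (x_new - x) > 0:            # restart test: the momentum is now counterproductive
        t, y = 1.0, x                  # kill the inertia and restart from x
        g = grad(y)
        x_new = y - g / L
    t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
    y = x_new + (t - 1) / t_new * (x_new - x)   # inertial extrapolation
    x, t = x_new, t_new
print("f(x) =", f(x))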

SPOT 82 – Monday 22 May 2023 (salle des thèses de l’ENSEEIHT)

14h – Leonardo Tomazeli Duarte (University of Campinas)
Multi-objective optimization for latent variable analysis: applications to source separation and dimensionality reduction
Latent variable analysis (LVA) often relies on a mono-objective formulation exploiting a given property of the desired information. For example, in classical blind source separation, most algorithms make use of a single separation criterion associated with a given property of the sources. However, in many practical situations there is more than one property to be exploited and, as a consequence, a set of separation criteria may be used to recover the original signals. In this context, we consider an approach to the separation problem based on multi-objective optimization. Such an approach provides the user with a set of non-dominated solutions from which she can select one, for instance according to prior knowledge of the problem. To assess the proposed framework, we provide numerical experiments on both synthetic data and actual data acquired by chemical sensors. In a second application, we also introduce a multi-objective formulation of principal component analysis which can be used, for instance, in the context of fairness-aware machine learning.
15h – Paul Escande (Institut de Mathématiques de Toulouse)
On the Concentration of the Minimizers of Empirical Risks

Obtaining guarantees on the convergence of the minimizers of empirical risks to those of the true risk is a fundamental matter in statistical learning. Instead of deriving guarantees on the usual estimation error, we will explore concentration inequalities on the distance between the sets of minimizers of the risks. We will argue that for a broad spectrum of estimation problems, there exists a regime where optimal concentration rates can be proven. The bounds will be showcased on a selection of estimation problems, such as barycenters on metric spaces with positive or negative curvature and subspaces of covariance matrices.

SPOT 81 – Monday 3 April 2023 (salle des thèses de l’ENSEEIHT)

14h – Samuel Vaiter (Université Côte d’Azur)

A very biased introduction to bilevel optimization in machine learning

Bilevel optimization (i.e., the problem of minimizing a value function that involves the arg-minimum of another function) appears in many areas of machine learning.
I will highlight some current ideas on how to solve such problems in the context of large-scale empirical risk minimisation, how to prove convergence rates, and how they perform in practical applications. Joint work with P. Ablin, M. Dagréou and T. Moreau.
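For illustration (not the speaker's algorithms, which target large-scale stochastic settings), here is a minimal sketch of the bilevel structure on a toy problem: selecting a ridge-regression hyperparameter by gradient descent on a validation loss, with the hypergradient obtained from the implicit function theorem. All data and step sizes are placeholders.

# Minimal sketch: bilevel hyperparameter optimization via implicit differentiation.
# Inner problem:  w*(lam) = argmin_w 0.5*||X_tr w - y_tr||^2 + 0.5*lam*||w||^2
# Outer problem:  min_lam 0.5*||X_va w*(lam) - y_va||^2
import numpy as np

rng = np.random.default_rng(2)
n, d = 100, 20
X_tr, y_tr = rng.standard_normal((n, d)), rng.standard_normal(n)
X_va, y_va = rng.standard_normal((n, d)), rng.standard_normal(n)

def hypergradient(lam):
    H = X_tr.T @ X_tr + lam * np.eye(d)            # inner Hessian
    w = np.linalg.solve(H, X_tr.T @ y_tr)          # closed-form inner solution
    outer_grad_w = X_va.T @ (X_va @ w - y_va)      # gradient of the outer loss in w
    dw_dlam = -np.linalg.solve(H, w)               # implicit function theorem
    return outer_grad_w @ dw_dlam

log_lam = 0.0                                      # optimize log(lam) to keep lam > 0
for _ in range(100):
    lam = np.exp(log_lam)
    log_lam -= 0.05 * hypergradient(lam) * lam     # chain rule through lam = exp(log_lam)
print("selected lambda:", np.exp(log_lam))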

15h – Mathieu Serrurier (Institut de Recherche en Informatique de Toulouse)

Building and training 1-Lipschitz Neural Networks by using Optimal Transport

The lack of robustness and explainability in neural networks is directly linked to the arbitrarily high Lipschitz constant of deep models. Although constraining the Lipschitz constant has been shown to improve these properties, it can make it challenging to learn with classical loss functions. In this presentation, we explain how to control this constant, and demonstrate that training such networks requires defining specific loss functions and optimization processes. To this end, we propose a loss function based on optimal transport that not only certifies robustness but also converts adversarial examples into provable counterfactual examples.
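As background for the 1-Lipschitz constraint (a generic construction, not the speakers' method or their optimal-transport loss), here is a minimal numpy sketch in which each dense layer is rescaled by its spectral norm; composed with 1-Lipschitz activations such as ReLU, the resulting network is 1-Lipschitz.

# Minimal sketch: a small dense network made 1-Lipschitz by spectral normalization.
import numpy as np

rng = np.random.default_rng(3)

def spectral_normalize(W):
    # divide by the largest singular value so that ||W u - W v|| <= ||u - v||
    return W / np.linalg.norm(W, 2)

W1 = spectral_normalize(rng.standard_normal((64, 32)))
W2 = spectral_normalize(rng.standard_normal((1, 64)))

def net(u):
    return (W2 @ np.maximum(W1 @ u, 0.0))[0]   # ReLU is 1-Lipschitz

# empirical check of the Lipschitz bound on a random pair of inputs
u, v = rng.standard_normal(32), rng.standard_normal(32)
print(abs(net(u) - net(v)) <= np.linalg.norm(u - v))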

SPOT 80 – Monday 6 March 2023 (salle des thèses de l’ENSEEIHT)

14h – Henrique Goulart (INP-ENSEEIHT)

Maximum likelihood estimation of large spiked tensor models
This talk will focus on maximum likelihood estimation (MLE) of a rank-one tensor from observations corrupted by Gaussian noise in the large-dimensional regime. In this problem, the optimal recovery performance exhibits an abrupt phase transition as the signal-to-noise ratio varies, which is related to the behavior of the associated random landscape. We will discuss these aspects and explain recent results, which are largely based on spin glass theory. Then, we will describe our own approach to analyze this problem, which relies on more elementary tools coming from tensor spectral theory and random matrix theory, and should more easily lend itself to relevant extensions. Finally, we will discuss connections with other works and (actual and potential) extensions.
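To fix ideas (a toy illustration, not the analysis presented in the talk), here is a minimal sketch of the spiked tensor model and of tensor power iteration for estimating the rank-one spike; the iteration is warm-started near the planted vector, since from a random start recovery only succeeds above a much larger signal-to-noise threshold, which is precisely the phase-transition phenomenon discussed in the talk.

# Minimal sketch: rank-one estimation in the spiked tensor model T = beta * x⊗x⊗x + noise.
# For Gaussian noise, maximum likelihood amounts to the best rank-one approximation of T.
import numpy as np

rng = np.random.default_rng(4)
n, beta = 50, 5.0
x_true = rng.standard_normal(n)
x_true /= np.linalg.norm(x_true)
noise = rng.standard_normal((n, n, n)) / np.sqrt(n)
T = beta * np.einsum('i,j,k->ijk', x_true, x_true, x_true) + noise

u = x_true + 0.3 * rng.standard_normal(n) / np.sqrt(n)   # warm start (see note above)
u /= np.linalg.norm(u)
for _ in range(100):
    u = np.einsum('ijk,j,k->i', T, u, u)                  # tensor-vector-vector contraction
    u /= np.linalg.norm(u)
print("alignment |<u, x_true>| =", abs(u @ x_true))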

15h – Mathurin Massias (Inria Lyon)

Recent optimization improvements in sparse regression through implicit bias, non-convexity and manifold identification 
In this talk we will review recent developments improving the efficiency of sparse estimators, from both the statistical and the optimization points of view. The first part of the talk deals with iterative regularization, a cheap regularization framework that sheds light on the popular early-stopping practice. In the second part, we will discuss improved alternatives to the L1 penalty, the challenges they pose, and how to address them.
References:
Molinari, Massias, Rosasco and Villa, « Iterative regularization for low complexity regularizers », https://arxiv.org/abs/2202.00420
Bertrand, Klopfenstein, Bannier, Gidel and Massias, « Beyond L1: faster and better sparse models with skglm », https://arxiv.org/abs/2204.07826
Larsson, Klopfenstein, Massias and Wallin, « Coordinate descent for SLOPE », https://arxiv.org/abs/2210.14780
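For readers unfamiliar with the workhorse behind several of the papers cited above, here is a minimal coordinate-descent sketch for the Lasso (illustrative only; the skglm and SLOPE solvers referenced above include further ingredients such as working sets and acceleration).

# Minimal sketch: cyclic coordinate descent for  min_w 0.5*||y - X w||^2 + alpha*||w||_1,
# with soft-thresholding updates and the residual kept up to date.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, alpha, n_iter=100):
    n, d = X.shape
    w = np.zeros(d)
    residual = y.copy()                      # residual = y - X @ w, maintained incrementally
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(d):
            residual += X[:, j] * w[j]       # remove coordinate j's contribution
            w[j] = soft_threshold(X[:, j] @ residual, alpha) / col_sq[j]
            residual -= X[:, j] * w[j]       # put the updated contribution back
    return w

rng = np.random.default_rng(5)
X = rng.standard_normal((100, 30))
w_true = np.zeros(30); w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.standard_normal(100)
print(np.round(lasso_cd(X, y, alpha=10.0), 2))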

SPOT 79 – Monday 6 February 2023 (salle des thèses de l’ENSEEIHT)

14h – Luca Calatroni (Laboratoire I3S, Sophia Antipolis)

Scaled, inexact and adaptive FISTA for (strongly) convex optimisation

We design an inexact, scaled and generalised Fast Iterative Soft-Thresholding Algorithm (FISTA) for minimising the sum of two (possibly strongly) convex functions. Inexactness is explicitly taken into account to describe situations where proximal operators cannot be evaluated in closed form. The use of data-dependent scaling allows Newton-type information to be incorporated along the optimisation via suitable variable-metric updates. Finally, non-monotone backtracking is used to improve convergence speed. Linear convergence of the function values is proved, with rates depending on the backtracking/scaling and strong convexity parameters. The performance of the proposed algorithm, named SAGE-FISTA, is validated on exemplar imaging problems where sparsity-promoting (l_1, TV) regularisation is combined with standard data-fidelity terms. The results show improved performance and efficiency under a limited computational budget.
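To illustrate the backtracking ingredient alone (a minimal sketch, not SAGE-FISTA: no inertia, no scaling, no inexactness, and with a monotone step-size rule rather than the non-monotone one of the talk), here is proximal gradient descent with a backtracking line search on an l_1-regularised least-squares toy problem.

# Minimal sketch: ISTA with backtracking. The step size is halved until the standard
# quadratic upper-bound (sufficient decrease) condition is satisfied.
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((80, 40)); b = rng.standard_normal(80); lam = 1.0
f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
grad = lambda x: A.T @ (A @ x - b)
prox = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - lam * t, 0.0)   # soft-thresholding

x, t = np.zeros(40), 1.0
for _ in range(300):
    g = grad(x)
    while True:
        z = prox(x - t * g, t)
        if f(z) <= f(x) + g @ (z - x) + np.linalg.norm(z - x) ** 2 / (2 * t):
            break                      # upper-bound condition holds: accept the step
        t *= 0.5                       # otherwise shrink the step size and retry
    x = z
print("objective:", f(x) + lam * np.abs(x).sum())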

15h – Igor Klep (University of Ljubljana)

Noncommutative polynomial optimization: a tutorial

Optimization problems involving polynomial data arise across many areas of modern science and engineering. Due to recent advances, such as the Lasserre sum of squares hierarchy, these can nowadays be efficiently solved using semidefinite programming (SDP).
Several areas of science, for instance control theory, quantum information theory and quantum chemistry, regularly optimize functions involving polynomial data in noncommuting variables, such as matrices or operators.
The talk will present, in a self-contained style with lots of examples, some first steps in noncommutative polynomial optimization. Topics covered will include sums of hermitian squares, Positivstellensätze, and convergent SDP hierarchies.

SPOT 78 – Monday 9 January 2023 (salle des thèses de l’ENSEEIHT)

14h – Jean-Baptiste Hiriart-Urruty (IMT, Univ. Paul Sabatier Toulouse)

More on second-order differentiability properties of the Moreau regularization-approximation of a convex function

In this work, which constitutes the matter of a paper submitted for publication in Optimization Methods and Software, we revisit and improve second-order (classical) differentiability properties of the so-called Moreau regularized version of an (extended-valued) convex function. We delineate what to expect and what can be obtained. Our results are sharp, as several examples show. See also abstract-jbhu-09-jan-23.
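For reference (standard notation from the literature, not taken from the submitted paper), the Moreau envelope of f with parameter lambda > 0 and its gradient are

\[
e_{\lambda} f(x) \;=\; \min_{y}\Big\{\, f(y) + \tfrac{1}{2\lambda}\,\|x-y\|^{2} \Big\},
\qquad
\nabla e_{\lambda} f(x) \;=\; \frac{x - \operatorname{prox}_{\lambda f}(x)}{\lambda},
\]

so second-order differentiability of e_{\lambda} f amounts to (classical) differentiability of the proximal mapping prox_{\lambda f}.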

15h – Pierre Boudier (lead of computer hardware and software architecture at Intel)

Accelerating multi-precision linear algebra for machine learning

Machine learning has developed considerably over the last 10 years, notably thanks to the availability of hardware for accelerating its many linear-algebra computations. In this presentation, we will get an overview of how a GPU works and of how matrix multiplication can be implemented at different levels of precision. Classical techniques such as cache locality, hardware multithreading, SIMD and SIMT units, systolic arrays, and prefetching will be covered.
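As a purely CPU-side illustration of two of the ideas mentioned (tiling for locality and mixed precision), here is a minimal numpy sketch of blocked matrix multiplication with float16 inputs and float32 accumulation; real GPU kernels implement the same pattern with tiles held in shared memory or registers and with systolic/tensor-core units.

# Minimal sketch: blocked (tiled) matrix multiplication with mixed precision.
import numpy as np

def blocked_matmul(A, B, tile=64):
    m, k = A.shape
    _, n = B.shape
    C = np.zeros((m, n), dtype=np.float32)            # accumulate in higher precision
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                a = A[i:i+tile, p:p+tile].astype(np.float32)
                b = B[p:p+tile, j:j+tile].astype(np.float32)
                C[i:i+tile, j:j+tile] += a @ b        # per-tile product, fp32 accumulation
    return C

rng = np.random.default_rng(7)
A = rng.standard_normal((256, 256)).astype(np.float16)   # low-precision storage format
B = rng.standard_normal((256, 256)).astype(np.float16)
ref = A.astype(np.float64) @ B.astype(np.float64)
print("max abs error vs float64 reference:", np.abs(blocked_matmul(A, B) - ref).max())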

SPOT 77 – Monday 28 November 2022 (salle des thèses de l’ENSEEIHT)

Organized by Cédric Févotte (IRIT-CNRS Toulouse) in coordination with the SPOT committee.

15h – Rémi Gribonval (INRIA & ENS Lyon)

Rapture of the deep: highs and lows of sparsity in a world of depths

Promoting sparse connections in neural networks is a natural way to control their computational complexity. Besides, given its thoroughly documented role in inverse problems and variable selection, sparsity also has the potential to give rise to learning mechanisms endowed with certain interpretability guarantees. Through an overview of recent explorations around this theme, I will compare and contrast classical sparse regularization for inverse problems with multilayer sparse regularization. During our journey, I will highlight the potential of an invariant path-embedding of the parameters of a deep network, both to learn the parameters and to analyze their identifiability from the function implemented by the network. In the process, we will be reminded that there is life beyond gradient descent, as illustrated by an algorithm that brings speedups of up to two orders of magnitude when learning certain fast transforms via multilayer sparse factorization.

16h – Matthieu Kowalski (Univ. Paris-Sud)

Use a Lasso to grab Champagne by looking at the MAP with Bernoulli and Gauss

Sparse coding is now one of the state-of-the-art approaches for solving inverse problems. We discuss two Maximum a Posteriori (MAP) interpretations for state-of-the-art methods used in sparse inverse problems: the joint-MAP and the marginal-MAP. Canonically rooted in a Bayesian framework, sparsity is modeled by a general spike-and-slab distribution. The focus is on the recovery of the solution’s support rather than the signal amplitude itself. We study the prominent Bernoulli-Gaussian model and show that a well-reparametrized joint-MAP may lead to nice local minimizers of the marginal-MAP. Additionally, we explore common convex relaxations of the support onto a continuous space and encompass them under the scope of a parametrized distribution. Upon describing the behavior of a few of these relaxations, equivalences are established between the Bernoulli-Gaussian joint-MAP, the marginal-MAP, and well-studied methods such as the Lasso and Sparse Bayesian Learning. These links allow the CHAMPAGNE procedure to be revisited as a reweighted Lasso procedure. In the second part, we will discuss one of the major drawbacks of these methods, namely the tuning of the so-called hyperparameter. We provide an Expectation-Maximization (EM) algorithm to estimate the parameters of the Bernoulli-Gaussian model for denoising a sparse signal corrupted by white Gaussian noise. Then, building on the Expectation-Maximization interpretation of the so-called Iterative Shrinkage/Thresholding Algorithm (ISTA), we provide a simple iterative algorithm to blindly estimate all the model parameters in the linear inverse problem context.
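To make the denoising setting concrete (a minimal sketch, assuming the noise variance is known; the talk's algorithms also estimate it and handle the general linear inverse problem), here is an EM iteration estimating the Bernoulli-Gaussian prior parameters from noisy observations, followed by the posterior-mean denoiser.

# Minimal sketch: EM for y = x + noise, noise ~ N(0, sigma2), x_i ~ Bernoulli(p) x N(0, s2).
# Marginally, y_i follows a two-component zero-mean Gaussian mixture, hence the EM below.
import numpy as np

rng = np.random.default_rng(6)
n, p_true, s2_true, sigma2 = 5000, 0.1, 4.0, 0.25
x = (rng.random(n) < p_true) * rng.normal(0.0, np.sqrt(s2_true), n)
y = x + rng.normal(0.0, np.sqrt(sigma2), n)

def gauss(z, var):
    return np.exp(-0.5 * z ** 2 / var) / np.sqrt(2 * np.pi * var)

p, s2 = 0.5, 1.0                       # initial guesses for the unknown prior parameters
for _ in range(200):
    # E-step: posterior probability that each coefficient is "active"
    num = p * gauss(y, sigma2 + s2)
    r = num / (num + (1 - p) * gauss(y, sigma2))
    # M-step: update the Bernoulli weight and the slab variance
    p = r.mean()
    s2 = max((r * y ** 2).sum() / r.sum() - sigma2, 1e-8)

x_hat = r * (s2 / (s2 + sigma2)) * y   # posterior mean: responsibility times Wiener shrinkage
print(f"estimated p = {p:.3f}, s2 = {s2:.2f}, denoising MSE = {np.mean((x_hat - x) ** 2):.4f}")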

SPOT 76 – Monday 14 November 2022 (salle des thèses de l’ENSEEIHT)

14h – Jérôme Bolte (TSE et Univ. Toulouse Capitole)

A Glance at Nonsmooth Automatic Differentiation

Nonsmooth automatic differentiation is one of the core learning mechanisms in modern AI. I will show how a recent theory we developed, « Conservative gradients », helps to understand this process and related fundamental phenomena, such as the convergence of learning phases in deep learning, the optimization of learning parameters, the nonsmooth cheap gradient principle, or the differentiation of algorithms. Joint work with E. Pauwels.
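As a tiny illustration of why nonsmooth automatic differentiation needs care (a generic example, not taken from the talk; it uses PyTorch, and the printed values reflect the conventions of current PyTorch releases), two algebraically identical expressions for relu can return different "derivatives" at the kink x = 0, which is the kind of discrepancy that conservative gradients account for.

# Minimal sketch: automatic differentiation of two formulas for relu(x) = max(x, 0) at x = 0.
import torch

x = torch.tensor(0.0, requires_grad=True)
torch.relu(x).backward()
print("via max(x, 0):   ", x.grad.item())    # typically 0.0

x = torch.tensor(0.0, requires_grad=True)
((x + torch.abs(x)) / 2).backward()
print("via (x + |x|)/2: ", x.grad.item())    # typically 0.5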

15h – Didier Henrion (LAAS-CNRS Toulouse and Czech Tech. Univ. Prague)

The Moment-SOS Hierarchy

The polynomial optimization problem (POP) is a very general problem which seeks to minimize a polynomial of many real variables subject to polynomial inequalities. A special case is the problem of finding real solutions of a system of polynomial equalities and inequalities. This NP-hard problem has many applications in fields such as statistics, signal processing, machine learning, computer vision, computational geometry, and control engineering. The Moment-SOS hierarchy is an approach to the POP that allows us to solve it globally, at the price of solving a family of convex (semidefinite) optimization problems of increasing size. The lecture will introduce the approach, describe its main milestones over the last two decades, as well as applications in statistics, signal processing and control. The focus will be on the computational features of the Moment-SOS hierarchy, its limitations and current efforts to overcome them. Mostly joint work with J. B. Lasserre.
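To give a concrete flavour of the sums-of-squares side of the hierarchy (a minimal sketch using cvxpy, far simpler than the constrained, multivariate setting of the lecture), here is the very first rung for an unconstrained univariate problem: the largest gamma such that p - gamma is a sum of squares yields a lower bound on min p, encoded as a small SDP on the Gram matrix.

# Minimal sketch: SOS lower bound on min_x p(x) for p(x) = (x - 1)^4 + 1.
import numpy as np
import cvxpy as cp

p = np.array([2.0, -4.0, 6.0, -4.0, 1.0])    # coefficients of p, constant term first

gamma = cp.Variable()
Q = cp.Variable((3, 3), symmetric=True)      # Gram matrix in the monomial basis [1, x, x^2]

constraints = [
    Q >> 0,                                  # p - gamma = m(x)^T Q m(x) with Q PSD
    Q[0, 0] == p[0] - gamma,                 # constant term
    2 * Q[0, 1] == p[1],                     # coefficient of x
    2 * Q[0, 2] + Q[1, 1] == p[2],           # coefficient of x^2
    2 * Q[1, 2] == p[3],                     # coefficient of x^3
    Q[2, 2] == p[4],                         # coefficient of x^4
]
cp.Problem(cp.Maximize(gamma), constraints).solve(solver=cp.SCS)
print("SOS lower bound on min p:", gamma.value)   # the true minimum is p(1) = 1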