Archive 2019-2020



SPOT 70 – Monday, March 9, 2020, in the salle des thèses

14h – Augustin Cosse (ENS Paris) – Recent Progress on Semidefinite Programming Relaxations

Over the last few years, semidefinite programming relaxations have emerged as a powerful tool either to derive approximate solutions to hard problems or to obtain stability guarantees for particular instances of such problems that are tractable yet nonconvex. In large-scale applications, however, the appeal of such relaxations remains limited, essentially because of the memory required to store the matrices involved. In this talk, I will discuss recent progress on two successful applications of these relaxations: matrix completion and super-resolution. In the first framework, we are interested in the recovery of a low-rank matrix from a subset of its entries. In the second framework, we have access to the Fourier transform of a multi-atomic measure up to a frequency cut-off Omega_c, and we want to recover this measure to arbitrary precision. For both problems, I will discuss provable recovery through semidefinite programs as well as the scalability of this recovery. Joint work with Gabriel Peyré, Irène Waldspurger and Laurent Demanet.
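
As a small illustration of the first framework, here is a sketch (not the speaker's code) of the standard nuclear-norm relaxation for matrix completion in Python, assuming cvxpy with an SDP-capable solver such as SCS is installed; the matrix size, rank and sampling rate are made up.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n, r = 30, 2
    M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # rank-2 ground truth
    mask = (rng.random((n, n)) < 0.5).astype(float)                 # observed entries (1 = observed)

    X = cp.Variable((n, n))
    prob = cp.Problem(cp.Minimize(cp.normNuc(X)),                   # nuclear norm, an SDP-representable objective
                      [cp.multiply(mask, X) == mask * M])           # match the observed entries
    prob.solve()
    print("relative recovery error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))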

15h – Claus Gwiggner (University of Hamburg) – A Priori Sequencing with Uncertain Release Dates

Scheduling with uncertain release dates has received little attention in the literature, although transportation, production and supply chain problems often exhibit such properties. In this talk, a sequence that minimizes the expected total weighted completion time under uncertain release dates is determined. The approach is the formulation of a two-stage model, where an a priori sequence is established in the first stage and a scheduling operation is performed in the second stage. The major difficulty is the exponential number of joint realizations of the release dates. This difficulty is resolved by a Markov model of the optimal solution of the second stage, which allows instances with millions of scenarios to be solved to optimality.

Moreover, in special cases, the optimal policy turns out to shift a single task to the beginning of a sequence built according to the Weighted Shortest Expected Processing Times (WSEPT) rule, depending on the variance of the release dates.

Keywords: stochastic sequencing, two-stage programming, implicit representation, Markov chain
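
As a small illustration of the WSEPT rule mentioned above, here is a sketch in Python (not from the talk); the job data are made up.

    # WSEPT: sequence jobs by non-increasing ratio of weight to expected processing time.
    jobs = [
        # (name, weight, expected processing time)
        ("A", 3.0, 2.0),
        ("B", 1.0, 4.0),
        ("C", 5.0, 5.0),
    ]
    wsept_order = sorted(jobs, key=lambda j: j[1] / j[2], reverse=True)
    print([name for name, _, _ in wsept_order])   # highest weight-to-expected-time ratio first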


 

SPOT 69 – Monday, February 3, 2020, in room A001

14h – Mateusz Skomra (ENS Lyon)

Tropical approach to semidefinite programming and mean payoff games

Semidefinite programming (SDP) is a fundamental tool in convex and polynomial optimization. It consists in minimizing linear functions over spectrahedra (sets defined by linear matrix inequalities). In particular, SDP is a generalization of linear programming. In this talk, we discuss the nonarchimedean analogue of SDP, replacing the field of real numbers by the field of Puiseux series. Our methods rely on tropical geometry and, in particular, on the study of tropicalization of spectrahedra. We show that, under genericity conditions, tropical spectrahedra encode Shapley operators associated with stochastic mean payoff games. As a result, a large class of semidefinite feasibility problems defined over Puiseux series can be solved efficiently using combinatorial algorithms designed for stochastic games. Conversely, we use tropical spectrahedra to introduce a condition number for stochastic mean payoff games. We show that this conditioning controls the number of value iterations needed to decide whether a mean payoff game is winning. In particular, we obtain a pseudopolynomial bound for the complexity of value iteration provided that the number of random positions is fixed.  The talk is based on joint works with X. Allamigeon, S. Gaubert, and R. Katz.
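
For readers unfamiliar with value iteration on mean payoff games, here is a schematic sketch for a tiny deterministic game (an illustration only; the talk deals with stochastic games and their tropical-geometric counterparts). The game graph below is invented.

    # Value iteration with a Shapley-like operator: at Max's positions take a max over
    # outgoing edges, at Min's positions a min; v_k(i)/k approaches the mean payoff value.
    edges = {            # position -> list of (successor, edge payoff)
        0: [(1, 3), (2, -1)],
        1: [(0, -2)],
        2: [(0, 1)],
    }
    max_positions = {0}  # player Max moves at position 0, player Min elsewhere

    def shapley(v):
        return {
            i: (max if i in max_positions else min)(w + v[j] for j, w in succ)
            for i, succ in edges.items()
        }

    v = {i: 0.0 for i in edges}
    K = 10000
    for _ in range(K):
        v = shapley(v)
    print({i: round(v[i] / K, 3) for i in edges})   # approximate mean payoff per position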

15h – Pierre Weiss (IMT-CNRS Toulouse)

Sampling rates for l1-synthesis

In this talk, I will try to shed some light on the geometry of l1-based signal recovery. This setting has been studied extensively in the last decade with the advent of compressed sensing and super-resolution. There is, however, clear evidence that the best existing theories are still unable to predict some of the observed numerical behaviours, especially when using redundant dictionaries. I will work in the simple framework of under-sampled noisy Gaussian measurements, and try to explain how the geometry of high-dimensional convex cones enters the game and how certain quantities, such as the Renegar condition number or the circumangle of a descent cone, can provide decent bounds on the number of measurements sufficient for efficient recovery.
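
As a small numerical companion to this abstract, here is a sketch (not the speaker's code) of l1-synthesis recovery from under-sampled Gaussian measurements, assuming cvxpy is installed; the dictionary, dimensions and noise level are made up.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(1)
    n, p, m, s = 60, 120, 40, 4                    # signal dim, dictionary size, measurements, sparsity
    D = rng.standard_normal((n, p))                # redundant dictionary
    z_true = np.zeros(p)
    z_true[rng.choice(p, s, replace=False)] = rng.standard_normal(s)
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # under-sampled Gaussian measurements
    sigma = 1e-3
    y = A @ D @ z_true + sigma * rng.standard_normal(m)

    z = cp.Variable(p)
    prob = cp.Problem(cp.Minimize(cp.norm1(z)),                      # l1-synthesis formulation
                      [cp.norm2(A @ D @ z - y) <= 2 * sigma * np.sqrt(m)])
    prob.solve()
    print("signal-space error:", np.linalg.norm(D @ z.value - D @ z_true))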

SPOT 68 – Monday, January 13, 2020 – ENSEEIHT, salle des thèses

14h - Sylvain Sorin (Sorbonne Université, Paris) - No-regret procedures in online learning, games and convex optimization 

The purpose of this talk is to underline links between no-regret algorithms used in online learning, games and convex optimization. In particular we will study continuous- and discrete-time versions and their connections. We will comment on recent advances on the following points (a toy illustration of the no-regret property is sketched after the list):

  • link with variational inequalities,
  • speed of convergence of the evaluation,
  • convergence of the trajectories.
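
As a toy illustration of the no-regret property, here is a sketch (not from the talk) of the multiplicative-weights (Hedge) algorithm for prediction with expert advice; the losses are random and purely illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    T, K = 5000, 5
    eta = np.sqrt(np.log(K) / T)            # standard step size
    losses = rng.random((T, K))             # loss of each expert at each round

    w = np.ones(K)
    alg_loss = 0.0
    for t in range(T):
        p = w / w.sum()                     # play the normalized weights
        alg_loss += p @ losses[t]
        w *= np.exp(-eta * losses[t])       # multiplicative update

    best_expert_loss = losses.sum(axis=0).min()
    print("regret:", alg_loss - best_expert_loss)   # grows only like O(sqrt(T log K))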

15h – Matthieu Serrurier (IRIT, Univ. Toulouse) – Introduction to generative adversarial networks (GANs)

The principle of a GAN is to build a generator for a distribution of which only a finite set of observations is known. In this introduction, we will describe how this problem is modelled using neural networks, as well as the various applications of GANs. We will then present the underlying optimization problem and its reformulation. We will discuss the limits of this type of approach, both from the optimization and from the statistical point of view. Finally, we will see how the problem can be generalized and the various extensions that follow from it.
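
As a minimal companion example, here is a sketch of the standard GAN minimax training loop (assuming PyTorch is available); the one-dimensional Gaussian target and the network sizes are made up and serve only to illustrate the alternating generator/discriminator updates.

    import torch
    import torch.nn as nn

    real_sampler = lambda n: torch.randn(n, 1) * 0.5 + 2.0            # the "unknown" target law
    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator: noise -> sample
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator (logit output)
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(2000):
        # discriminator step: push real samples towards label 1, generated ones towards 0
        real, noise = real_sampler(64), torch.randn(64, 8)
        fake = G(noise).detach()
        loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # generator step: fool the discriminator (non-saturating loss)
        loss_g = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())  # should drift towards 2.0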


 

SPOT 67 – exceptionally on the morning of Thursday, December 19, 2019, at ENSEEIHT, salle des thèses

9h30 – Charles Audet (Polytechnique Montréal)

Dynamic improvements of static surrogates in direct search optimization

The present work is in the context of derivative-free optimization involving direct search algorithms guided by surrogate models of the original problem. These models are classified into two categories: static surrogates and dynamic models. This work introduces the hybrid quadratic model (HQM), which dynamically corrects information from a static surrogate. Instead of applying an additive or multiplicative correction, the HQM generalizes these two types of corrections by considering the static model as an input variable of the quadratic model. Numerical tests are performed with the Mads algorithm on three multidisciplinary engineering problems and one simulation-based engineering problem. The results show that the contribution of the HQM to the Mads algorithm is to solve problems to greater precision for the same computational budget.
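
As a rough illustration of the idea of feeding a static surrogate to a quadratic model (my own sketch, not the authors' code), the snippet below fits, by least squares, a quadratic in the augmented variable (x, s(x)); simple additive and multiplicative corrections then appear as special cases. The functions, sample points and surrogate are made up.

    import numpy as np
    from itertools import combinations_with_replacement

    f = lambda x: np.sin(3 * x) + x**2          # "true" expensive function (made up)
    s = lambda x: x**2                          # cheap static surrogate missing the oscillation (made up)

    def quad_features(u):
        # all monomials of degree <= 2 in the columns of u
        cols = [np.ones(len(u))] + [u[:, j] for j in range(u.shape[1])]
        cols += [u[:, i] * u[:, j] for i, j in combinations_with_replacement(range(u.shape[1]), 2)]
        return np.column_stack(cols)

    x = np.linspace(-1, 1, 15)                  # sample points where f has been evaluated
    U = np.column_stack([x, s(x)])              # augmented inputs (x, s(x))
    coef, *_ = np.linalg.lstsq(quad_features(U), f(x), rcond=None)

    x_test = np.linspace(-1, 1, 200)
    pred = quad_features(np.column_stack([x_test, s(x_test)])) @ coef
    print("max model error on the interval:", np.abs(pred - f(x_test)).max())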

10h30 – Sandra Ngueveu (ENSEEIHT and LAAS-CNRS Toulouse)

Mixed integer optimization using piecewise linear function fitting

We present an efficient algorithm able to over-estimate, under-estimate, or approximate any univariate nonlinear differentiable function by a (not necessarily continuous) piecewise linear function that minimizes the number of linear segments for a guaranteed tolerance. The main methodological contributions are (i) a generalization of previous work to a larger class of tolerance types than the absolute and relative tolerances, and (ii) a reformulation technique allowing any approximation problem of a convex or concave function, with any tolerance type that preserves concavity, to be reduced to fitting a piecewise linear function within a bounded corridor.
The resulting method is used to solve, with a performance guarantee, certain classes of MINLP problems with linear constraints and a nonlinear objective function. Computational results on nonlinear function approximation benchmarks, network design problems with congestion, and energy optimization with nonlinear demand/cost functions show the efficiency of the method compared with the state of the art.
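
As a simple illustration of fitting within a tolerance corridor, here is a greedy sketch in Python (an illustration only; the algorithm from the talk minimizes the number of segments, allows discontinuities and handles more general tolerance types). The function, interval and tolerance below are made up.

    import numpy as np

    f = lambda x: np.exp(x) * np.sin(5 * x)     # arbitrary smooth function (made up)
    delta = 0.05                                # absolute tolerance of the corridor [f - delta, f + delta]
    x = np.linspace(0.0, 2.0, 401)
    y = f(x)

    breakpoints = [0]
    i = 0
    while i < len(x) - 1:
        j = i + 1
        while j + 1 < len(x):
            # extend the current segment as long as its chord stays inside the corridor on the grid
            chord = np.interp(x[i:j + 2], [x[i], x[j + 1]], [y[i], y[j + 1]])
            if np.max(np.abs(chord - y[i:j + 2])) > delta:
                break
            j += 1
        breakpoints.append(j)
        i = j
    print("number of linear segments:", len(breakpoints) - 1)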


 

SPOT 66 – November 18, 2019, at ENSEEIHT, salle des thèses

14h – Radu Ioan Bot (University of Vienna)

A primal-dual dynamical approach to structured convex minimization problems

In this talk we propose a primal-dual dynamical approach to the minimization of a structured convex function consisting of a smooth term, a nonsmooth term, and the composition of another nonsmooth term with a linear continuous operator. To this end we introduce a dynamical system for which we prove that its trajectories asymptotically converge to a saddle point of the Lagrangian of the underlying convex minimization problem as time tends to infinity. In addition, we provide rates for both the violation of the feasibility condition by the ergodic trajectories and the convergence of the objective function along these ergodic trajectories to its minimal value. Explicit time discretization of the dynamical system results in a numerical algorithm which is a combination of the linearized proximal method of multipliers and the proximal ADMM algorithm.
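
For intuition only, here is a much simplified sketch (not the structured system from the talk) of a classical primal-dual gradient flow for min f(x) subject to Ax = b, namely x'(t) = -(grad f(x) + A^T lambda) and lambda'(t) = Ax - b, integrated by explicit Euler; the quadratic f and the data are made up.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 10, 3
    Q = np.eye(n)                                   # f(x) = 0.5 x^T Q x, strongly convex
    A, b = rng.standard_normal((m, n)), rng.standard_normal(m)

    x, lam, dt = np.zeros(n), np.zeros(m), 1e-2
    for _ in range(20000):
        x_new = x - dt * (Q @ x + A.T @ lam)        # primal descent step on the Lagrangian
        lam = lam + dt * (A @ x - b)                # dual ascent step on the Lagrangian
        x = x_new
    print("feasibility violation ||Ax - b||:", np.linalg.norm(A @ x - b))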

15h – Tomaso Cesari (IMT University of Toulouse)

Non-convex optimization and bandit learning

(Subtitle: Making the right choices and feeling no regret)

In this presentation we will address the problem of finding a global maximizer of a (non-globally) Lipschitz multivariate function f, using as few (possibly noisy) evaluations of f as possible. Borrowing ideas and notation from the bandit learning literature, we derive new regret bounds for the classical Piyavskii–Shubert algorithm in both the deterministic setting (in which an exact value f(x) can be accessed whenever a point x is queried) and the stochastic setting (in which values are observed up to subgaussian perturbations). We show that the regret of the Piyavskii–Shubert algorithm not only matches the best bounds achieved by other known algorithms in terms of the near-optimality dimension of f, but also exhibits better empirical performance.
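
As a concrete illustration of the deterministic setting, here is a one-dimensional sketch of the Piyavskii–Shubert algorithm (an illustration, not the speaker's code): evaluate f at the maximizer of the current piecewise-conic upper bound built from the Lipschitz constant. The objective, the interval and the constant L below are made up.

    import numpy as np

    f = lambda x: np.sin(3 * x) + np.cos(5 * x)     # made-up objective to maximize
    L, a, b = 8.0, 0.0, 2.0                         # a valid Lipschitz constant and the domain [a, b]

    xs, ys = [a, b], [f(a), f(b)]
    for _ in range(50):
        order = np.argsort(xs)
        x_s, y_s = np.array(xs)[order], np.array(ys)[order]
        # peak location and peak value of the upper bound on each interval between evaluations
        peaks_x = (x_s[:-1] + x_s[1:]) / 2 + (y_s[1:] - y_s[:-1]) / (2 * L)
        peaks_u = (y_s[:-1] + y_s[1:]) / 2 + L * (x_s[1:] - x_s[:-1]) / 2
        k = int(np.argmax(peaks_u))                 # most promising interval
        xs.append(float(peaks_x[k]))
        ys.append(f(xs[-1]))

    print("best value found:", max(ys), "certified upper bound:", float(np.max(peaks_u)))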

SPOT 65 – November 4, 2019, at ENSEEIHT

14h – Pierre-Antoine Absil (Université catholique de Louvain, Belgium)

Optimization on manifolds: methods and applications

This talk concerns applications of differential geometry in numerical optimization. They arise when the optimization problem can be formulated as finding an optimum of a real-valued cost function defined on a smooth nonlinear search space. Oftentimes, the search space is a "matrix manifold", in the sense that its points admit natural representations in the form of matrices. In most cases, the matrix manifold structure is due either to the presence of certain nonlinear constraints (such as orthogonality or rank constraints), or to invariance properties in the cost function that need to be factored out in order to obtain a nondegenerate optimization problem. Manifolds that come up in practical applications include the rotation group SO(3) (generation of rigid body motions from sample points), the set of fixed-rank matrices (low-rank models, e.g., in collaborative filtering), the set of 3×3 symmetric positive-definite matrices (interpolation of diffusion tensors), and the shape manifold (morphing).

In recent years, the practical importance of optimization problems on manifolds has stimulated the development of geometric optimization algorithms that exploit the differential structure of the manifold search space. In this talk, we give an overview of geometric optimization algorithms and their applications, with an emphasis on the underlying geometric concepts and on the numerical efficiency of the algorithm implementations.
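
As a minimal sketch of the retraction-based approach (an illustration, not from the talk): Riemannian gradient descent on the unit sphere for the Rayleigh quotient f(x) = x^T A x, whose minimizer is an eigenvector associated with the smallest eigenvalue; the matrix, step size and iteration count are made up.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 20)); A = (A + A.T) / 2     # symmetric test matrix

    x = rng.standard_normal(20); x /= np.linalg.norm(x)      # starting point on the sphere
    step = 0.02
    for _ in range(3000):
        egrad = 2 * A @ x                       # Euclidean gradient of x^T A x
        rgrad = egrad - (x @ egrad) * x         # project onto the tangent space at x
        x = x - step * rgrad                    # move along the Riemannian gradient
        x /= np.linalg.norm(x)                  # retraction back onto the sphere

    print("value found:", x @ A @ x, "smallest eigenvalue:", np.linalg.eigvalsh(A).min())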

 

15h – Victor Magron (LAAS-CNRS Toulouse)

The quest of efficiency and certification in polynomial optimization

In 2001, Lasserre introduced a now-famous hierarchy of relaxations, called the moment-sums-of-squares hierarchy, allowing one to obtain a converging sequence of lower bounds for the minimum of a polynomial over a compact semialgebraic set. Each lower bound is computed by solving a semidefinite program (SDP).
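
As a minimal instance of such an SDP computation (a sketch, not from the talk, assuming cvxpy with an SDP solver such as SCS): a lower bound on the global minimum of the univariate polynomial p(x) = x^4 - 3x^2 + 1, obtained by maximizing gamma such that p - gamma is a sum of squares, i.e. p(x) - gamma = [1, x, x^2] Q [1, x, x^2]^T with Q positive semidefinite.

    import cvxpy as cp

    gamma = cp.Variable()
    Q = cp.Variable((3, 3), PSD=True)           # Gram matrix in the monomial basis [1, x, x^2]
    constraints = [                             # match the coefficients of p(x) - gamma
        Q[0, 0] == 1 - gamma,                   # constant term
        2 * Q[0, 1] == 0,                       # x
        2 * Q[0, 2] + Q[1, 1] == -3,            # x^2
        2 * Q[1, 2] == 0,                       # x^3
        Q[2, 2] == 1,                           # x^4
    ]
    cp.Problem(cp.Maximize(gamma), constraints).solve()
    print("SOS lower bound:", gamma.value)      # the true minimum is -1.25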

There are two common drawbacks related to this hierarchy. First, the size of the SDP problems arising from the hierarchy grows rapidly. Second, one relies on numerical SDP solvers, implemented in finite-precision, thus providing only approximate certificates.

In this talk, we explain how to address these two efficiency and certification issues.

In the first part, we recall how to exploit the sparsity pattern arising in polynomial optimization problems.  We present one application for an optimization problem in commuting variables, which is a framework to provide upper bounds on absolute roundoff errors of floating-point nonlinear programs. Then we present one application for optimization problems in non-commuting variables, which is a converging hierarchy of SDP relaxations for eigenvalue and trace optimization.

In the second part, we provide a hybrid symbolic-numeric algorithm computing exact rational sums of squares (SOS) decompositions for polynomials lying in the interior of the SOS cone. This algorithm computes an approximate decomposition for a perturbation of the input polynomial with an arbitrary-precision SDP solver. An exact decomposition is obtained thanks to the perturbation terms.  Then, we apply this algorithm to compute exact Polya and Putinar’s representations respectively for positive definite forms and polynomials positive over basic compact semialgebraic sets.


SPOT 64 – Monday, September 9, 2019, in room A203 at ENSEEIHT

14h – Simone Naldi (Université de Limoges) – Infeasibility certificates in conic programming

The feasible set in a conic program is the intersection of a convex cone with an affine space. In this talk, I will be interested in the feasibility problem of conic programming: How to decide whether an affine space intersects a convex cone? Can we compute certificates of infeasibility? The problem is harder than expected since, in (non-linear) conic programming, several types of infeasibility might arise. I will discuss a new contribution, joint with R. Sinn (https://arxiv.org/abs/1810.11792), in which we revisit the classical facial reduction algorithm from the point of view of projective geometry. This leads us to a homogenization strategy for the general conic program. For semidefinite programs, this yields infeasibility certificates that can be checked in polynomial time. We also propose a refined type of infeasibility, which we call "stable infeasibility", for which rational infeasibility certificates exist.
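
For illustration (a sketch, not from the talk, assuming cvxpy with an SDP solver): a Farkas-type certificate of strong infeasibility for the semidefinite feasibility problem "find X PSD with <A_i, X> = b_i". A vector y with sum_i y_i A_i negative semidefinite and b^T y = 1 proves infeasibility, since a feasible X would give 1 = b^T y = <sum_i y_i A_i, X> <= 0; weakly infeasible problems, one of the subtler cases discussed in the talk, admit no such certificate. The toy data below are made up.

    import numpy as np
    import cvxpy as cp

    A = [np.array([[1.0, 0.0], [0.0, 0.0]])]   # single constraint: X_11 = -1,
    b = np.array([-1.0])                        # incompatible with X being PSD

    y = cp.Variable(len(A))
    S = sum(y[i] * A[i] for i in range(len(A)))
    prob = cp.Problem(cp.Minimize(0),
                      [b @ y == 1, -S >> 0])    # -S PSD, i.e. sum_i y_i A_i negative semidefinite
    prob.solve()
    print("infeasibility certificate y:", y.value)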

15h – Lilian Glaudin (Université Toulouse Capitole) – Proximal activation of smooth functions in convex optimization

The common practice in convex optimization is to exploit the smoothness of a function whenever it is smooth. In this talk, we will propose some keys to better understand implicit and explicit algorithms, and show that this approach is not necessarily the most efficient numerically. We will present many smooth functions whose proximity operator we know how to compute. Some numerical examples will be provided to support our point of view.
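
As a small illustration of the explicit/implicit distinction for a smooth function (a sketch, not from the talk): for f(x) = 0.5 ||Ax - b||^2, the proximity operator has the closed form prox_{g f}(v) = (I + g A^T A)^{-1} (v + g A^T b), an implicit (backward) step, to be compared with the explicit gradient step v - g A^T (Av - b). The data below are made up.

    import numpy as np

    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((5, 3)), rng.standard_normal(5)
    g = 0.5                                              # step size gamma
    v = rng.standard_normal(3)

    grad_step = v - g * A.T @ (A @ v - b)                # explicit (forward) step
    prox_step = np.linalg.solve(np.eye(3) + g * A.T @ A, # implicit (backward/proximal) step
                                v + g * A.T @ b)
    print("gradient step:", grad_step)
    print("proximal step:", prox_step)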