Local organizing committee
- Jérôme Bolte (UT1 and TSE)
- Sonia Cafieri (ENAC)
- Serge Gratton (INP-ENSEEIHT, IRIT and CERFACS)
- Didier Henrion (LAAS-CNRS)
- Pierre Maréchal (UPS and IMT)
- Frédéric Messine (INP-ENSEEIHT and LAPLACE)
- Edouard Pauwels (UPS and IRIT)
- Aude Rondepierre (INSA and IMT)
SPOT 45. Monday, July 3, 2017. Location: salle des thèses, N7.
14h – Juan Peypouquet (Universidad Tecnica Federico Santa Maria, Chile)
Fast convergent first-order method bearing second-order information
We propose a model for a class of first-order methods as an inertial system with Hessian-driven damping. The model combines several features of the Levenberg-Marquardt algorithm and Nesterov’s acceleration scheme for first-order algorithms. We obtain a second-order system (in time and space), which can be interpreted as a first-order one by an appropriate transformation. The resulting method is easily implementable, more stable than classical accelerated methods, and just as fast.
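The inertial dynamic with Hessian-driven damping can be sketched numerically. The following is a minimal, assumed discretization (not necessarily the speaker's exact scheme): Nesterov-style extrapolation plus a damping term in which the Hessian is approximated by a difference of consecutive gradients, tested on an ill-conditioned quadratic.

```python
import numpy as np

# Hedged sketch: inertial gradient method with Hessian-driven damping,
# the Hessian term approximated by a gradient difference.
def inertial_hessian_damped(grad, x0, step=0.05, beta=0.05, iters=2000):
    x_prev = x0.copy()
    x = x0.copy()
    g_prev = grad(x0)
    for k in range(1, iters + 1):
        g = grad(x)
        # Nesterov-style momentum plus gradient-difference damping
        y = x + (k / (k + 3.0)) * (x - x_prev) - beta * (g - g_prev)
        x_prev, x = x, y - step * grad(y)
        g_prev = g
    return x

# Illustration on an ill-conditioned quadratic f(x) = 0.5 x^T A x
A = np.diag([1.0, 10.0])
grad = lambda x: A @ x
x_star = inertial_hessian_damped(grad, np.array([5.0, -3.0]))
```

The damping term suppresses the transverse oscillations that plain accelerated methods exhibit on ill-conditioned problems, which is the stability gain the abstract refers to.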
15h – Hector Ramirez (Universidad de Chile)
Bioremediation of water resources: An optimal control approach
This talk deals with the bioremediation, in minimal time, of a water resource (such as a lake or reservoir) using a single continuous bioreactor. The bioreactor is connected to the reservoir through several pumps. Typically, one pump extracts polluted water and another injects back sufficiently clean water at the same flow rate. However, we also analyze more complex pump configurations. We then state minimal-time optimal control problems in which the control variables are related to the inflow rates of the pumps. For these problems, we analyze the existence of solutions as well as their optimal synthesis (via Pontryagin's Principle). We also obtain, for some configurations, explicit bounds on their value functions via Hamilton–Jacobi–Bellman techniques.
SPOT 44. Monday, June 19, 2017. Location: salle des thèses, N7.
14h – Olivier Ruatta (XLIM, Univ. Limoges) : Free contours and applications
15h – Cédric Févotte (IRIT, Université de Toulouse) : Nonnegative matrix factorisation for spectral unmixing
Abstract: Data is often available in matrix form, in which columns are samples, and processing such data often entails finding an approximate factorisation of the matrix into two factors. The first factor (the “dictionary”) yields recurring patterns characteristic of the data. The second factor (the “activation matrix”) describes in which proportions each data sample is made of these patterns. In the last 15 years, nonnegative matrix factorisation (NMF) has become a popular technique for analysing data with nonnegative values, with applications in many areas such as text information retrieval, hyperspectral imaging or audio signal processing. The presentation will give an overview of NMF with a focus on majorisation-minimisation algorithms and will describe spectral unmixing applications in audio signal processing (source separation, denoising) and remote sensing.
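As a concrete instance of the majorisation-minimisation algorithms mentioned above, here is a sketch of the classical multiplicative updates for NMF under the Frobenius loss (Lee–Seung type); the talk covers MM algorithms for a wider family of divergences, and the data below is synthetic.

```python
import numpy as np

# Sketch: multiplicative (MM) updates for NMF, V ≈ W H with W, H >= 0.
def nmf(V, rank, iters=500, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # MM update for activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # MM update for dictionary
    return W, H

# Synthetic nonnegative data with exact rank-3 structure
rng = np.random.default_rng(1)
V = rng.random((20, 3)) @ rng.random((3, 30))
W, H = nmf(V, rank=3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Each update is derived by minimising a majoriser of the loss, so the objective decreases monotonically and nonnegativity is preserved automatically.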
SPOT 43. Monday, May 22, 2017. Location: salle des thèses, N7.
14h – Bijan Mohammadi (Univ. Montpellier) – Data Analytic UQ cascade for aircraft shape design
We present an original framework for uncertainty quantification (UQ) in optimization. It is based on a cascade of ingredients of growing computational complexity for both forward and reverse uncertainty propagation. The approach is essentially geometric. It starts with a complexity-based splitting of the independent variables and the definition of a parametric optimization problem. Geometric characterization of global sensitivity spaces, through their dimensions and their relative positions given by the principal angles between global search subspaces, brings a first set of information on the impact of uncertainties in the operating parameters on the optimal solution. Joining the multi-point descent direction and quantiles on the optimization parameters makes it possible to define the notion of Directional Extreme Scenarios (DES) without sampling large-dimensional design spaces. One goes beyond DES with Ensemble Kalman Filters (EnKF) once the multi-point optimization algorithm is cast into an ensemble simulation environment. This formulation accounts for variability in high dimension. The UQ cascade ends with the joint application of EnKF and DES, leading to the concept of Ensemble Directional Extreme Scenarios (EDES), which provides more exhaustive possible extreme scenarios given the probability density function of the optimization parameters. The presentation also addresses the issue of reduced-order modelling through convolutional neural networks and shows how these can be used for the solution of direct and inverse problems. The different ingredients are illustrated on several industrial applications, with particular emphasis on aircraft shape design in the presence of operational and geometrical uncertainties.
15h – Thibaut Verron (IRIT, ENSEEIHT) – Algebraic classification for optimal control in magnetic resonance imaging
Magnetic resonance imaging (MRI) is a medical imaging process based on measuring the reactions of biological substances to the application of a magnetic field. Work by Steffen Glaser and his team in 2012 showed that optimal control theory makes it possible to steer this reaction more efficiently than the heuristics generally used.
It turns out that the equilibrium points and other invariants of the system are given by real semi-algebraic sets, which allows computer algebra algorithms to be used to classify them: Gröbner bases, cylindrical algebraic decomposition, etc. Some of the systems considered are beyond the reach of existing algorithms, but they exhibit a structure that makes it possible to adapt the computation strategies and make the computations feasible.
SPOT 42. Monday, April 24, 2017. Location: salle des thèses, N7.
14h – Charles Audet (Polytechnique Montreal) : Parameter tuning: Runge-Kutta case study
My main research interest revolves around devising blackbox optimization methods, analyzing their behavior using tools from nonsmooth calculus, and applying them to real engineering problems. This talk focuses on parameter tuning through blackbox optimization.
The Runge-Kutta class of iterative methods is designed to approximate solutions of a system of ordinary differential equations (ODEs). The second-order class of Runge-Kutta methods is determined by a system of $3$ nonlinear equations and $4$ unknowns, and includes the modified-Euler and mid-point methods. The fourth-order class is determined by a system of $8$ nonlinear equations and $10$ unknowns. This work formulates the question of identifying good values of these $8$ parameters for a given family of ODEs as a blackbox optimization problem.
The objective is to determine the parameter values that minimize the overall error produced by a Runge-Kutta method on a training set of ODEs. Numerical experiments are conducted using the Nomad direct-search optimization solver.
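A toy version of this tuning problem can be sketched with the one-parameter family of second-order Runge-Kutta methods and a crude grid-based direct search; the actual work uses the Nomad solver and the 8-parameter fourth-order family, and the single training ODE below is a made-up example.

```python
import numpy as np

# One-parameter RK2 family: a = 1/2 is the midpoint method, a = 1 is Heun.
def rk2_step(f, t, y, h, a):
    k1 = f(t, y)
    k2 = f(t + a * h, y + a * h * k1)
    return y + h * ((1 - 1 / (2 * a)) * k1 + (1 / (2 * a)) * k2)

def blackbox_error(a, h=0.1):
    # Training ODE: y' = -y^2, y(0) = 1, exact solution y(t) = 1/(1+t)
    f = lambda t, y: -y * y
    t, y = 0.0, 1.0
    while t < 1.0 - 1e-12:
        y = rk2_step(f, t, y, h, a)
        t += h
    return abs(y - 0.5)          # exact value y(1) = 1/2

grid = np.linspace(0.1, 1.0, 91)                 # crude direct search
errors = [blackbox_error(a) for a in grid]
best_a = grid[int(np.argmin(errors))]
```

The objective is treated as a pure blackbox: only function values are used, which is exactly the setting where direct-search solvers such as Nomad apply.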
15h – No talk
SPOT 41. Monday, March 13, 2017. Location: ENSEEIHT, Salle des thèses.
14h: Thorsten Hohage, Univ. Goettingen, Germany
Higher order convergence rates for inverse problems in Banach spaces
Over the last ten to twelve years the focus of interest in regularization theory for inverse problems has shifted from spectral theoretical tools to variational methods, since they are more flexible in describing a-priori information on the solution and the data noise, and since they work not only in Hilbert spaces but also in Banach spaces. In this framework, so-called source conditions, which describe smoothness of the solution relative to the smoothing properties of the forward operator and which determine the rate of convergence as the noise level tends to zero, are no longer formulated in terms of the functional calculus, but in terms of variational inequalities. As a main drawback, such variational source conditions can only describe low-order smoothness. In 2013, Grasmair increased the order by one step, up to the saturation of Tikhonov regularization, by considering the dual of the Tikhonov minimization problem. We considerably extend Grasmair’s approach and derive variational source conditions of arbitrarily high order. Under these new conditions we prove convergence rates of Bregman iterations and show their optimality in a Hilbert space setting. In many important situations, our new conditions can be characterized as standard smoothness assumptions. Our theoretical results are illustrated by numerical experiments for entropy regularization.
15h: Anne Vanhems, Toulouse Business School and Toulouse School of Economics
A mollifier approach to the deconvolution of random variables
The objective of this work is to propose an alternative approach to standard regularization methods for deconvolution of random variables. This problem is well-known to be ill-posed. Its resolution has been addressed in many publications, among them the deconvolution kernel approach and the Tikhonov approach. The main drawback of the latter approach is that the original equation is significantly perturbed, which leads to a difficult tradeoff: a strong regularization parameter induces a strong model perturbation (Charybdis); a weak regularization parameter yields an unstable solution (Scylla). In this talk, we propose an alternative regularization scheme, in which this tradeoff becomes much less crucial. The corresponding methodology appeals to the notion of mollification.
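The Scylla/Charybdis tradeoff described above can be observed numerically with plain Fourier-domain Tikhonov deconvolution; this is a baseline sketch on synthetic data, and the mollifier scheme of the talk is a different regularization not implemented here.

```python
import numpy as np

# Synthetic deconvolution problem: data g = k * f + noise, recover f.
rng = np.random.default_rng(0)
n = 256
x = np.linspace(-1, 1, n, endpoint=False)
f = np.exp(-50 * (x - 0.2) ** 2) + 0.5 * np.exp(-80 * (x + 0.3) ** 2)
kernel = np.exp(-200 * x ** 2)
kernel /= kernel.sum()                          # convolution kernel
K = np.fft.fft(np.fft.ifftshift(kernel))        # its transfer function
g = np.real(np.fft.ifft(K * np.fft.fft(f)))     # blurred data
g += 0.01 * rng.standard_normal(n)              # measurement noise

def tikhonov(alpha):
    # Fourier-domain solution of (K*K + alpha) F = conj(K) G
    G = np.fft.fft(g)
    F = np.conj(K) * G / (np.abs(K) ** 2 + alpha)
    return np.real(np.fft.ifft(F))

errors = {a: np.linalg.norm(tikhonov(a) - f) / np.linalg.norm(f)
          for a in (1e-8, 1e-4, 1e-1)}
```

A very small alpha amplifies noise at the frequencies where the transfer function is tiny (Scylla), while a large alpha perturbs the model and over-smooths (Charybdis).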
SPOT 40. Monday, February 13, 2017. Location: ENSEEIHT, Salle des thèses.
14h: Stefan Ulbrich, Tech. Univ. Darmstadt
Preconditioners for time-dependent PDE-constrained optimization and an implementation based on Parareal time-domain decomposition
15h: Benoît Pauwels, IRT Saint-Exupéry Toulouse
Direct search for noisy functions
SPOT 39. Monday, December 12, 2016. Location: ENSEEIHT, Salle des thèses.
14h: Didier Aussel, Univ. Perpignan
Nash equilibrium in the domain of energy exchanges: models, results and challenges
Nash equilibrium models provide a perfect setting for modeling non-cooperative exchanges. In this talk we will mainly focus on various aspects of the subclass of multi-leader-follower games, a model combining Nash games and bilevel problems. Based on two examples linked to energy exchanges, we will illustrate how valuable these models can be. The first example, resulting from a collaboration with ENSIACET, deals with the design of industrial ecoparks. The second example concerns the modeling of “pay-as-bid” markets, in particular adjustment markets, and corresponds to joint work with EDF R&D. Existence results as well as numerical simulations will be provided. Bilevel problems and Nash games are known to be difficult problems, and we will also emphasize how subtle their reformulation through first-order conditions (KKT systems) can be.
15h: Philippe Mahey, Univ. Blaise Pascal Clermont-Ferrand
Generalized primal-dual monotone operator splitting methods and applications to decomposition
Operator splitting techniques have received renewed interest in recent years, motivated by applications in signal processing, classification and machine learning, even, as some claim, in “Big Data”. Starting from a primal-dual model proposed by Chambolle and Pock, we will explore various generalizations that cover several known algorithms, the oldest being Douglas-Rachford (in fact the Alternating Direction Method of Multipliers) and, more recently, those described in the survey by Shefi and Teboulle. The application of these algorithmic schemes to the decomposition of convex problems, leading to separable augmented Lagrangians, will then be presented.
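The Chambolle–Pock primal-dual scheme taken as a starting point above can be sketched on a toy problem, min_x 0.5||x − b||² + λ||x||₁, written as F(Kx) + G(x) with K = I; this is an assumed illustrative instance, not one from the talk.

```python
import numpy as np

# Chambolle-Pock iteration: y-step on the dual of F, x-step on G,
# then over-relaxation x_bar = 2 x_new - x. Requires sigma*tau*||K||^2 < 1.
def chambolle_pock(b, lam, sigma=0.5, tau=0.5, iters=2000):
    x = np.zeros_like(b)
    x_bar = x.copy()
    y = np.zeros_like(b)
    for _ in range(iters):
        # prox of sigma*F* with F = 0.5||. - b||^2, so F*(y) = 0.5||y||^2 + <b, y>
        y = (y + sigma * x_bar - sigma * b) / (1 + sigma)
        # prox of tau*G with G = lam*||.||_1 is soft-thresholding
        v = x - tau * y
        x_new = np.sign(v) * np.maximum(np.abs(v) - tau * lam, 0.0)
        x_bar = 2 * x_new - x
        x = x_new
    return x

b = np.array([3.0, -0.5, 0.2, -2.0])
lam = 1.0
x = chambolle_pock(b, lam)
x_exact = np.sign(b) * np.maximum(np.abs(b) - lam, 0)  # closed form for K = I
```

With K = I the minimizer is plain soft-thresholding of b, which makes the sketch easy to check; replacing K by a nonidentity linear operator gives the general splitting the talk generalizes.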
SPOT 38. Monday, November 7, 2016. Location: ENSEEIHT, Salle des thèses.
14h: Antoine Hochart, Univ. Toulouse Capitole
Generic uniqueness of the bias vector of finite stochastic games and application to the policy iteration algorithm
Zero-sum stochastic games with mean-payoff criteria can be studied by means of a nonlinear spectral problem. When the state space is finite, the latter consists in finding an additive eigenvalue and an additive eigenvector (called bias) of the dynamic programming operator. The eigenvalue yields the average payoff per time unit, and the bias allows one to determine optimal stationary strategies. The existence of an eigenpair is generally related to ergodicity conditions. A basic issue is to understand for which classes of games the bias is unique (up to an additive constant). We consider here a game with finite state and action spaces and perfect information, thinking of the stage payments as variable parameters, transition probabilities being fixed. We show that the bias vector, thought of as a function of the stage payments, is generically unique. As an application, we obtain a perturbation scheme allowing one to solve degenerate instances of stochastic games by policy iteration. (Joint work with Marianne Akian and Stéphane Gaubert.)
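The eigenpair structure is easiest to see in the one-player (MDP) case. Below is a sketch of average-payoff policy iteration on a hypothetical two-state example: for each policy π we solve g·1 + h = r_π + P_π h with the normalization h[0] = 0, so g is the average payoff and h the bias used in the improvement step.

```python
import numpy as np

def evaluate(P_pi, r_pi):
    # Solve g + h[s] - sum_t P[s,t] h[t] = r[s] with h[0] = 0.
    n = len(r_pi)
    M = np.zeros((n, n))
    M[:, 0] = 1.0                                  # coefficient of g
    for s in range(n):
        for t in range(1, n):
            M[s, t] = (1.0 if s == t else 0.0) - P_pi[s, t]
    sol = np.linalg.solve(M, r_pi)
    return sol[0], np.concatenate(([0.0], sol[1:]))   # gain g, bias h

def policy_iteration(P, r, iters=50):
    n = r.shape[0]
    pi = np.zeros(n, dtype=int)
    for _ in range(iters):
        g, h = evaluate(P[np.arange(n), pi], r[np.arange(n), pi])
        new_pi = np.argmax(r + P @ h, axis=1)      # improvement uses the bias
        if np.array_equal(new_pi, pi):
            break
        pi = new_pi
    return g, h, pi

# Hypothetical 2-state MDP; actions: 0 = "stay" (rewarded), 1 = "move".
P = np.array([[[0.9, 0.1], [0.0, 1.0]],           # transitions from state 0
              [[0.1, 0.9], [1.0, 0.0]]])          # transitions from state 1
r = np.array([[1.0, 0.0],
              [2.0, 0.0]])
g, h, pi = policy_iteration(P, r)
```

On this instance the optimal stationary strategy is to move from state 0 and stay in state 1, with average payoff 20/11; degenerate instances, where the evaluation system becomes singular, are exactly the ones the perturbation scheme of the talk handles.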
15h: Cédric Josz, LAAS-CNRS Univ. Toulouse
Polynomial Optimization and Semidefinite Programming in Complex Numbers
Multivariate polynomial optimization where variables and data are complex numbers is a non-deterministic polynomial-time hard problem that arises in various applications such as electric power systems, imaging science, signal processing, and quantum mechanics. We transpose to complex numbers the Lasserre hierarchy which aims to solve real polynomial optimization problems to global optimality. This brings complex semidefinite programming into the picture and calls for an interior-point algorithm in complex numbers. The Nesterov-Todd direction will be discussed and supplemented by numerical results on the European high-voltage electricity transmission network.
SPOT 37. Monday, October 10, 2016. Location: ENSEEIHT, Salle des thèses.
14h: Hélène Frankowska, CNRS and Inst. Math. Jussieu, Univ. Pierre et Marie Curie, Paris
Value function, relaxation and transversality conditions in infinite-horizon optimal control
In this talk I will discuss necessary optimality conditions (the maximum principle and sensitivity relations) for infinite-horizon control problems involving a cost with or without a discount factor. In this setting, the main difficulties are related to the choice of the transversality conditions at infinity and to the fact that the maximum principle may be abnormal. To avoid them, some researchers have proposed to bring in the value function for a restricted class of problems (with a discount factor). I will present results in a very general setting and make a link between the normality of the maximum principle and “geometric” properties of the value function. The transversality conditions will involve various sub/superdifferentials of the value function (joint work with P. Cannarsa).
15h: Krzysztof Kurdyka, Univ. Savoie, Chambéry
Convexifying positive polynomials and a proximity algorithm
We prove that if $f$ is a positive $C^2$ function on a convex compact set $X$ then it becomes strongly convex when multiplied by $(1+|x|^2)^N$ with $N$ large enough. For a polynomial $f$ we give an explicit estimate for $N$, which depends on the size of the coefficients of $f$ and on the lower bound of $f$ on $X$. As an application of our convexification method we propose an algorithm which, for a given polynomial $f$ on a convex compact semialgebraic set $X$, produces a sequence (starting from an arbitrary point in $X$) which converges to a (lower) critical point of $f$ on $X$. The convergence is based on the talweg method, which is a generalization of the Łojasiewicz gradient inequality. (Joint work with S. Spodzieja.)
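The convexification statement can be checked numerically on a hypothetical example: $f(x) = x^4 - x^2 + 1$ is positive but not convex on $[-1,1]$, and we search for the smallest $N$ such that $(1+x^2)^N f(x)$ has positive second derivative there.

```python
import numpy as np
from numpy.polynomial import polynomial as Pnum

# Coefficients in increasing degree: f = 1 - x^2 + x^4, w = 1 + x^2.
f = np.array([1.0, 0.0, -1.0, 0.0, 1.0])
w = np.array([1.0, 0.0, 1.0])
grid = np.linspace(-1.0, 1.0, 2001)

def min_second_derivative(N):
    # Minimum over [-1, 1] of the second derivative of (1 + x^2)^N * f(x)
    g = f.copy()
    for _ in range(N):
        g = Pnum.polymul(g, w)
    return Pnum.polyval(grid, Pnum.polyder(g, 2)).min()

assert min_second_derivative(0) < 0     # f itself is not convex (f'' (0) = -2)
N = next(n for n in range(1, 51) if min_second_derivative(n) > 0)
```

Here $N = 1$ already gives $(1+x^2)f = x^6 + 1$, whose second derivative vanishes at the origin, and $N = 2$ makes the second derivative bounded below by $2$, so the multiplied polynomial is strongly convex on $[-1,1]$.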
SPOT 36. Monday, September 19, 2016. Location: ENSEEIHT, Salle des thèses.
14h: David Monniaux, VERIMAG, Univ. Grenoble
Satisfiability and optimization modulo theory
Satisfiability modulo theory (SMT) consists in testing the satisfiability of first-order formulas over linear integer or real arithmetic, or other theories. The formulas accepted may include conjunctions, disjunctions and even quantifiers. This approach combines a Boolean satisfiability (SAT) solver with a decision procedure for arithmetic. It has been extended to optimization.
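A toy, stdlib-only illustration of the two questions (this brute-forces a bounded integer domain; a real SMT solver combines a SAT engine with an arithmetic decision procedure, as described above):

```python
from itertools import product

# Boolean combination of linear integer-arithmetic atoms
def formula(x, y):
    # (x + 2y <= 8) AND (x >= 3 OR y >= 3) AND NOT (x == y)
    return (x + 2 * y <= 8) and (x >= 3 or y >= 3) and x != y

domain = range(0, 10)
models = [(x, y) for x, y in product(domain, domain) if formula(x, y)]
sat = len(models) > 0
# Optimization modulo theory: among the models, maximize x + y
best = max(models, key=lambda m: m[0] + m[1])
```

The satisfiability question asks whether `models` is nonempty; the optimization extension additionally asks for a model maximizing an objective over the theory.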
15h: Pierre-Loïc Garoche, ONERA Toulouse
Convex optimization-based static analysis for control systems
This work focuses specifically on verifying implementations of control algorithms such as those used in civil aircraft. These controllers are essentially designed by combining simple (piecewise linear) controllers with fault detection and reconfiguration logic. Static analysis of such systems is difficult today because the invariant properties of these controllers are often super-linear, for example quadratic, whereas invariant computation by abstract interpretation and automatic proof with satisfiability (SMT) solvers handle nonlinear systems and properties rather poorly. The work presented, inspired by methods used for temporal analysis in the control community, relies on formulating these problems, and solving them, as convex optimization problems over the cone of positive definite matrices (SDP programming with linear matrix inequalities (LMI) or sum-of-squares polynomials). These approaches make it possible to automatically synthesize quadratic or polynomial invariants capable of capturing the behavior of these controller implementations. The proposed methods apply both at the model level and at the code level, and specifically address issues related to the use of floating-point arithmetic. The perspectives of this work mainly concern the verification of implementations of embedded numerical algorithms. The ambition is to handle more properties and more systems. More properties means, both at the model and code level, formally expressing and automatically verifying system properties more interesting than mere boundedness of the system's behavior. For control algorithms these are stability, robustness and performance properties (overshoot, time to settle, etc.).
For other kinds of algorithms, these will be bounds on convergence time, or guarantees that the computed solution satisfies certain constraints. More systems means the goal of handling more than simple linear controllers. I wish to develop analyses able to handle simple but realistic systems (LPV, linear interpolation of controllers, …) up to more advanced numerical functions such as those used in trajectory computation algorithms for collision avoidance or planetary landing. All the methods and tools developed will be integrated into open-source toolchains and demonstrators applicable to systems of significant size.
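A minimal sketch of the kind of quadratic invariant discussed above, for an assumed stable linear loop x⁺ = Ax: solving the discrete Lyapunov equation AᵀPA − P = −Q by a convergent fixed-point iteration yields V(x) = xᵀPx whose sublevel sets are invariants of the loop (the work described uses SDP/LMI solvers to handle richer systems; the matrix A here is a made-up example).

```python
import numpy as np

A = np.array([[0.5, 0.2],
              [-0.1, 0.7]])           # stable: spectral radius ≈ 0.61 < 1
Q = np.eye(2)

# Fixed-point iteration P <- Q + A' P A, convergent since rho(A) < 1;
# its limit solves the discrete Lyapunov equation A' P A - P = -Q.
P = np.zeros((2, 2))
for _ in range(200):
    P = Q + A.T @ P @ A

# V(x) = x' P x then decreases along trajectories: V(Ax) - V(x) = -x' Q x,
# so every sublevel set {x : V(x) <= c} is invariant for the loop x+ = A x.
```

An LMI solver would instead search for P directly under the constraints P ≻ 0 and AᵀPA − P ≺ 0, which also extends to piecewise and polynomial systems.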