Local organizing committee
- Jérôme Bolte (UT1 et TSE)
- Sonia Cafieri (ENAC)
- Serge Gratton (INP-ENSEEIHT, IRIT et CERFACS)
- Didier Henrion (LAAS-CNRS)
- Pierre Maréchal (UPS et IMT)
- Frédéric Messine (INP-ENSEEIHT et LAPLACE)
- Edouard Pauwels (UPS et IRIT)
- Aude Rondepierre (INSA et IMT)
SPOT 63 – Monday, July 1, 2019 – Amphi des thèses ENSEEIHT
14h – Bachir El Khadir – Princeton Univ, USA
Power and Limitations of Sum of Squares Programming for Stability Analysis of Dynamical Systems
We study the power and limitations of sum of squares optimization and semialgebraic Lyapunov functions for proving asymptotic stability of polynomial dynamical systems. We give the first example of a globally asymptotically stable polynomial vector field with rational coefficients that does not admit a polynomial (or even analytic) Lyapunov function in any neighborhood of the origin. We show, however, that if the polynomial vector field is homogeneous, then its asymptotic stability is equivalent to existence of a rational Lyapunov function whose inequalities have sum of squares proofs. This statement generalizes the classical result in control on the equivalence between asymptotic stability of linear systems and existence of a quadratic Lyapunov function satisfying a certain linear matrix inequality.
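As background (and not part of the talk's precise statements), the standard sum-of-squares relaxation of the Lyapunov conditions for a polynomial vector field can be sketched as follows; the tolerance ε and the positivity normalization are illustrative choices:

```latex
% Schematic SOS certificate of asymptotic stability for a polynomial field \dot{x} = f(x), f(0) = 0:
% search for a polynomial V and a small \epsilon > 0 such that
\[
  V(x) - \epsilon \|x\|^2 \in \Sigma[x],
  \qquad
  - \nabla V(x)^{\top} f(x) - \epsilon \|x\|^2 \in \Sigma[x],
\]
% where \Sigma[x] denotes the cone of sum-of-squares polynomials. Both membership
% constraints are semidefinite-representable, so the search for V reduces to an SDP.
```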
15h – Tillmann Weisser – Los Alamos Nat Lab, USA
Relaxations and Uncertainty Quantification for the Power Grid
SPOT 62 – Monday, June 3, 2019 – Amphi des thèses ENSEEIHT
14h – Lorick Huang (IMT) – Some contributions around Robbins-Monro algorithms
Robbins-Monro algorithms are simulation-based recursive procedures for approximating a zero of a function that can be written as an expectation. In this talk, I will present two original results on the convergence of these algorithms.
Without going too deeply into the details, I will present these algorithms and my results: a first result of a numerical nature on Richardson-Romberg extrapolation, and a second, of a more theoretical nature, on a local limit theorem.
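For readers new to these procedures, a minimal Robbins-Monro recursion can be sketched as follows; the target function and the step-size schedule are illustrative choices, not the setting of the talk:

```python
import numpy as np

# Minimal Robbins-Monro iteration: find theta with h(theta) = E[H(theta, X)] = 0,
# here h(theta) = theta - 1, observed through noisy samples H(theta, X) = theta - 1 + X.
rng = np.random.default_rng(0)

theta = 5.0
for n in range(1, 10_001):
    gamma_n = 1.0 / n                     # decreasing steps: sum gamma_n = inf, sum gamma_n^2 < inf
    noisy_h = theta - 1.0 + rng.normal()  # unbiased noisy evaluation of h(theta)
    theta = theta - gamma_n * noisy_h     # Robbins-Monro update

print(theta)  # should be close to the root theta* = 1
```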
15h – Yacine Chitour (Paris Saclay/L2S) – Dynamical aspects of gradient algorithms for learning
In this talk, we present work in progress with Z. Liao and R. Couillet. We propose to describe the empirical behavior of certain gradient algorithms for learning in the form of a so-called "overfitting conjecture", which adopts the viewpoint of deterministic dynamical systems. For deep linear learning, we show the convergence of the trajectories to critical points and resolve the conjecture in the case of a one-layer system. A series of open problems is proposed.
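As a toy illustration of deterministic gradient dynamics for deep linear learning (in no way the speakers' construction), one can run plain gradient descent on a two-layer linear model; dimensions, step size, and data below are arbitrary choices:

```python
import numpy as np

# Gradient descent on a two-layer (deep linear) model W2 @ W1 fitting targets generated
# by a linear map. Illustrative only; the talk concerns the continuous-time trajectories.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 100))
Y = rng.normal(size=(3, 5)) @ X                 # targets produced by a linear map

W1 = 0.1 * rng.normal(size=(4, 5))
W2 = 0.1 * rng.normal(size=(3, 4))
lr = 1e-3

for _ in range(5000):
    R = W2 @ W1 @ X - Y                         # residual of the end-to-end linear map
    grad_W2 = R @ (W1 @ X).T                    # gradient of 0.5 * ||R||_F^2 w.r.t. W2
    grad_W1 = W2.T @ R @ X.T                    # gradient of 0.5 * ||R||_F^2 w.r.t. W1
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print(np.linalg.norm(W2 @ W1 @ X - Y))          # loss should have decreased to (near) zero
```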
SPOT 61 – Monday, May 13, 2019 – Amphi des thèses ENSEEIHT
14h – Jérôme Fehrenbach (IMT) – Shape optimization using the Gauss-Newton method
Shape optimization aims at determining the shape of a domain where the solution of a partial differential equation is optimal with respect to some objective. In general only local optima can be found. We will give an overview of classical methods; among them, the method of choice is a descent based on first-order information. When the objective is a quadratic cost function we define a Gauss-Newton method to solve the problem, and we document this method with numerical results. The oscillations exhibited by the classical shape-gradient method are removed, and the convergence to a local minimizer is faster.
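As a reminder of what a Gauss-Newton step looks like in the simplest finite-dimensional setting (the talk concerns its shape-optimization counterpart, which is not reproduced here), a minimal sketch with a toy residual is:

```python
import numpy as np

# Textbook Gauss-Newton iteration for a least-squares cost J(p) = 0.5 * ||r(p)||^2.
# The residual r and its Jacobian are toy examples, not the shape-derivative setting.

def r(p):
    # toy residual: fit y = exp(p[0] * t) + p[1] to synthetic data
    t = np.linspace(0.0, 1.0, 20)
    data = np.exp(0.7 * t) - 0.3
    return np.exp(p[0] * t) + p[1] - data

def jac(p):
    t = np.linspace(0.0, 1.0, 20)
    return np.column_stack([t * np.exp(p[0] * t), np.ones_like(t)])

p = np.array([0.0, 0.0])
for _ in range(20):
    J = jac(p)
    # Gauss-Newton step: solve the linearized least-squares problem J dp ≈ -r(p)
    dp, *_ = np.linalg.lstsq(J, -r(p), rcond=None)
    p = p + dp

print(p)  # should approach (0.7, -0.3)
```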
15h – Jean-Baptiste Hiriart-Urruty (IMT) – Three (small) examples of problems of "optimal curves" which are still open
The so-called open problems, the conjectures … have always been driving forces in the advance of knowledge in mathematics, but also in its learning, at all levels: colleges, high schools, mathematical training in universities. Proposing such questions, in the form of challenges ("Who will do best?"), is even a suggested activity in teaching methods because it is very formative. Added to this is the need for a demonstration or proof of an advanced assertion: to propose the best known solution does not mean that there is no better one; a solution is the best only when it has been proved that it is … Number theory, like geometry, is a major problem area containing problems for which we can propose the best known solutions, but without knowing, for all that, whether they are the best possible, in other words without being able to prove the optimality of these solutions (in a sense to be defined).
In this talk we propose three problems of geometry (with variants), with a variational flavor, of the type "Find the curve of minimum length such that …"; the first one is somewhat classical, the next two are more original. In all three cases, we indicate how far mathematicians have been able to go; none of them has been fully resolved as yet. Of course, we do not expect the listener to solve these problems … But what we propose can be used as material for a research process, between students and colleagues, on how to bring and/or improve a proposed answer to a given problem, each one putting his or her imagination and mathematical ability to work.
- The isotropic viscous damping with vanishing coefficient α/t is related to the Nesterov accelerated gradient method and to the FISTA method, as recently shown by Su-Boyd-Candès. We will review recent results regarding the fast convergence properties of these methods. They depend on the value of α relative to the critical value α = 3. For α > 3, any trajectory converges weakly to a minimizer of f, and f(x(t)) − min f = o(1/t²). We will also show the natural link with the Ravine method of Gelfand-Tsetlin (1961).
- The Hessian driven damping takes into account the geometry of f. It is linked to the Newton and Levenberg-Marquardt methods. It was introduced in the context of optimization by Alvarez-Attouch-Bolte-Redont (J. Math. Pures et Appl. 2002). Since then, several variants and associated algorithms have been considered. The above formulation has been studied by Attouch-Peypouquet-Redont (JDE 2016). The introduction of the Hessian induces multiple favorable effects on the convergence properties. While maintaining the convergence rate of the values of Nesterov-type methods, it provides fast convergence to zero of the gradients ∇f(x(t)). For poorly conditioned minimization problems, it neutralizes wild transverse oscillations.
Surprisingly, the presence of the Hessian driven damping makes the system well-posed for a general proper lower-semicontinuous convex function f : H → R ∪ {+∞}. This is based on the crucial property that the inertial dynamic with Hessian driven damping can be written equivalently as a first-order system in time and space, allowing it to be extended simply by replacing the gradient with the subdifferential. The extension to the case of the sum of two potential functions "non-smooth + smooth" was exploited by Attouch-Maingé-Redont (DEA 2012) to model non-elastic shocks in mechanics and PDEs. The study of the associated algorithms for structured convex optimization is an ongoing research topic. We will present a new proximal-based inertial algorithm with fast convergence properties, combining Nesterov acceleration with Hessian damping. Based on the dynamical interpretation of the algorithms, we show the effect of temporal scaling on the convergence rates. The study of the dynamical systems above has a natural connection with control theory. Passing from open-loop control (as in Nesterov's method) to closed-loop control is an active research topic. For Newton's regularized method (without inertia), it was examined by Attouch-Redont-Svaiter (JOTA 2013). This study in the context above, as well as the restarting method, is an interesting direction of research. A schematic form of the dynamics discussed in this abstract is recalled below.
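For reference, and not as a verbatim statement of the talk's results, the two dynamics can be sketched as follows (the coefficient names α and β follow the usual convention of the papers cited above):

```latex
% Vanishing viscous damping (continuous-time view of Nesterov/FISTA, Su--Boyd--Cand\`es):
\[
  \ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \nabla f(x(t)) = 0,
  \qquad f(x(t)) - \min f = o(1/t^{2}) \ \text{ for } \alpha > 3 .
\]
% Adding the Hessian-driven damping with coefficient \beta > 0:
\[
  \ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t)
  + \beta\,\nabla^{2} f(x(t))\,\dot{x}(t) + \nabla f(x(t)) = 0 .
\]
% Since \nabla^{2} f(x(t))\,\dot{x}(t) = \tfrac{d}{dt}\,\nabla f(x(t)), the Hessian term is a
% total derivative; introducing an auxiliary variable built from \dot{x} + \beta\,\nabla f(x)
% rewrites the dynamic as a first-order system in time and space, in which \nabla f can then
% be replaced by the subdifferential \partial f for nonsmooth convex f.
```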
Convergence of stochastic gradient methods with persistent bias
Stochastic Approximation algorithms, of which the decreasing-step gradient algorithm is an example, are iterative procedures for finding a zero of a function that is not available in explicit form. They rely on a Monte Carlo approximation of this objective function. In many applications, however, this stochastic approximation is biased, with a bias that, for reasons of computational cost, does not vanish over the iterations. The persistence of this bias raises the question of the soundness of such stochastic procedures.
In this talk, we will motivate this setting with applications in statistical learning, and illustrate the theoretical results with an extension of the stochastic gradient algorithm to the optimization of composite convex functions. We will prove the convergence of these algorithms, and of accelerated versions, even in the case of a persistent bias. We will also comment on the parallel between these methods and stochastic Expectation-Maximization algorithms (Monte Carlo EM, Stochastic Approximation EM).
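To fix ideas about the composite setting mentioned above (this is the textbook unbiased scheme, not the biased setting analyzed in the talk; loss, step sizes, and data are illustrative choices), a minimal stochastic proximal-gradient sketch reads:

```python
import numpy as np

# Stochastic proximal-gradient sketch for min_theta E[l(theta, X)] + lam * ||theta||_1,
# with l a least-squares loss on random data.
rng = np.random.default_rng(0)

def soft_threshold(v, tau):
    # proximal operator of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

d, lam = 10, 0.1
theta_true = np.zeros(d)
theta_true[:3] = 1.0
theta = np.zeros(d)

for n in range(1, 20_001):
    gamma = 0.1 / np.sqrt(n)                   # decreasing step size
    x = rng.normal(size=d)                     # random feature vector
    y = x @ theta_true + 0.1 * rng.normal()    # noisy observation
    grad_estimate = (x @ theta - y) * x        # stochastic gradient of the smooth part
    theta = soft_threshold(theta - gamma * grad_estimate, gamma * lam)

print(np.round(theta, 2))  # approximately sparse, close to theta_true up to l1 shrinkage
```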
15h – Adrien Taylor (INRIA ENS Paris)
Computer-assisted analyses of first-order methods (via semidefinite programming)
We present the performance estimation approach. The methodology aims at automatically analyzing worst-case convergence properties of first-order methods for solving (mostly convex) optimization problems. In particular, it allows obtaining (tight) guarantees for fixed-step first-order methods involving a variety of oracles – including explicit, projected, proximal, conditional, inexact, or stochastic (sub)gradient steps – and a variety of convergence measures. This presentation aims at providing a high-level introduction to the current state of the framework, mostly through examples. The talk is based on joint work with great collaborators: François Glineur (UCLouvain), Julien Hendrickx (UCLouvain), Etienne de Klerk (Tilburg/TU Delft), Yoel Drori (Google), Pontus Giselsson (Lund), Carolina Bergeling (Lund), Ernest Ryu (UCLA) and Francis Bach (Inria/ENS Paris). The methodology is implemented within the package "PESTO" (for "Performance EStimation TOolbox", available at https://github.com/AdrienTaylor/Performance-Estimation-Toolbox), which allows using the framework without the SDP modelling steps.
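To give a rough idea of the framework in its simplest instance (unconstrained L-smooth convex minimization; the toolbox covers far more oracles and measures), the performance estimation problem can be sketched as:

```latex
% Schematic performance estimation problem (PEP): worst case of a fixed N-step first-order
% method over the class \mathcal{F}_L of L-smooth convex functions, with initial distance R:
\[
  \max_{f \in \mathcal{F}_L,\ x_0,\dots,x_N,\ x_\star}\ \ f(x_N) - f(x_\star)
  \quad\text{s.t.}\quad
  \nabla f(x_\star) = 0,\ \
  \| x_0 - x_\star \| \le R,\ \
  x_{k+1} \ \text{produced by the method from } x_0,\dots,x_k .
\]
% Replacing f by finitely many function values and gradients subject to interpolation
% (consistency) conditions turns this infinite-dimensional problem into an equivalent
% semidefinite program.
```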
A set A in the extended complex plane is called convex with respect to a pole u if, for any x, y in A, the arc of the unique circle through x, y and u that connects x and y but does not contain u is in A. If the pole u is taken at infinity, then this notion reduces to the usual convexity. Polar convexity is connected with the classical Gauss-Lucas and Laguerre theorems for complex polynomials. If a set is convex with respect to u and contains the zeros of a polynomial, then it contains the zeros of its polar derivative with respect to u. A set may be convex with respect to more than one pole. The goal of this talk is to present the main relationships between sets and their poles.
This is a joint work with Blagovest Sendov from the Bulgarian Academy of Sciences.
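For reference, the polar derivative appearing in the statement above is the classical one; the normalization below is the standard textbook convention:

```latex
% Polar derivative of a polynomial p of degree n with respect to a pole u:
\[
  D_u p(z) \;=\; n\,p(z) \;+\; (u - z)\,p'(z).
\]
% Laguerre's theorem: if a circular region contains all zeros of p and does not contain u,
% then it also contains all zeros of D_u p. Letting u \to \infty recovers (up to scaling)
% the ordinary derivative p' and hence the Gauss--Lucas setting.
```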
We will explain how these decay rates can be obtained by studying an ODE.
SPOT 56 – Monday, December 3, 2018 – Amphi A001 ENSEEIHT
SPOT 55 – Monday, October 1, 2018 – N7, salle des thèses
14h – Salma Kuhlmann (Univ Konstanz, Germany) – Positive polynomials and moment problems
Hilbert’s 17th problem asked whether a real polynomial p(x1,..,xn) which takes non-negative values as a function on R^n is a finite sum of squares (SOS) of real rational functions q(x1,..,xn)/r(x1,..,xn). A complete positive answer was provided by Artin and Schreier (1927), giving birth to real algebraic geometry. The question when the (SOS) representation is denominator free is however of particular interest for applications. In his pioneering 1888 paper, Hilbert gave a general answer (in terms of degree and number of variables). Subsequent general results, such as Krivine’s Positivstellensatz, pertain to a relative situation, where one considers polynomials non-negative on a basic closed semi-algebraic set K and SOSs weighted with inequalities defining K. Stronger results hold when K is compact; the Archimedean Positivstellensatz of Putinar and Jacobi-Prestel is a fundamental tool
in theory and applications. By the classical Riesz-Haviland theorem (1930s), the problem of characterizing positive polynomials on a given closed subset K of R^n is dual to the finite-dimensional moment problem (i.e. that of representing a linear functional on the polynomial algebra R[x1,..,xn] as integration with respect to a Borel measure). An algebraic approach was taken in a series of papers by Ghasemi-Kuhlmann-Marshall (2013-2016), which study the moment problem on a general, not necessarily finitely generated, commutative unital real algebra, a context adapted to infinite-dimensional moment problems. In this talk I will survey (with examples) various Positivstellensaetze and their corresponding moment problem interpretations.
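As one concrete instance of the Archimedean Positivstellensatz mentioned above, Putinar's form can be sketched as follows (standard statement, recalled here only for orientation):

```latex
% Putinar's Positivstellensatz: let K = \{ x \in \mathbb{R}^n : g_1(x) \ge 0, \dots, g_m(x) \ge 0 \}
% and assume the quadratic module generated by g_1, \dots, g_m is Archimedean.
% If p > 0 on K, then
\[
  p \;=\; \sigma_0 + \sum_{i=1}^{m} \sigma_i\, g_i ,
  \qquad \sigma_0,\dots,\sigma_m \ \text{sums of squares in } \mathbb{R}[x_1,\dots,x_n].
\]
% The dual side, in the spirit of Riesz--Haviland, characterizes the linear functionals on
% \mathbb{R}[x_1,\dots,x_n] given by integration against a Borel measure supported on K.
```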
15h – Jean-Claude Yakoubsohn (Univ Paul Sabatier Toulouse) – Certified and Fast Computation of SVD
This is work in progress in collaboration with Joris Van Der Hoeven of the LIX laboratory. We define and study a new method to locally approximate the SVD of a general matrix; this method has local quadratic convergence. This solves the problem of fast certified approximation of multiple singular values or clusters of singular values. We also give a complexity result for this approximation using such a homotopy method.
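For orientation only: the naive, linearly convergent way to refine an approximate singular triplet is alternating multiplication by A and its transpose, sketched below; the certified, quadratically convergent homotopy method of the talk is a different and much stronger construction.

```python
import numpy as np

# Naive refinement of an approximate singular triplet (u, sigma, v) of A by alternating
# applications of A and A^T (power-iteration style). Linearly convergent only, and NOT
# the certified scheme discussed in the talk; it merely illustrates local refinement.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 4))

v = rng.normal(size=4)
v /= np.linalg.norm(v)                  # rough initial right singular vector
for _ in range(50):
    u = A @ v
    u /= np.linalg.norm(u)
    v = A.T @ u
    v /= np.linalg.norm(v)
sigma = u @ A @ v                       # Rayleigh-quotient-type estimate of the singular value

print(sigma, np.linalg.svd(A, compute_uv=False)[0])  # compare with the dominant singular value
```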
June 2018: The SPOT seminar is over for the 2017-2018 academic year. From June to September 2018, the thematic semester on Optimization, funded by the Labex CIMI, takes place. More details at: http://cimi.univ-toulouse.fr/optimisation/
Events:
June 4-5, 2018 | Master Class on Hybrid Methods for Combinatorial / Mixed Optimization
June 27-29, 2018 | Workshop on Optimization: fundamental aspects and exploiting structure
September 3-7, 2018 | Spring School on Numerical Optimal Control
September 10-13, 2018 | Workshop on Optimization and Learning