SPOT 103 – Monday 2 February 2026
14h – François Pacaud (Mines Paris-PSL) – GPU-accelerated interior-point solvers
In recent years, GPU-accelerated optimization solvers based on second-order methods (e.g., interior-point methods) have gained momentum with the advent of mature and efficient GPU-accelerated direct sparse linear solvers, such as cuDSS. In this talk, we provide an overview of the state of the art in GPU-based second-order solvers, focusing on pivoting-free interior-point methods for large, sparse linear and nonlinear programs. We begin by highlighting the capabilities and limitations of currently available GPU-accelerated sparse linear solvers. Next, we discuss different formulations of the Karush-Kuhn-Tucker (KKT) systems arising in second-order methods and evaluate their suitability for pivoting-free GPU implementations. We then present numerical experiments demonstrating the scalability of GPU-based optimization solvers, with speedups often exceeding 10× over equivalent CPU implementations on large-scale instances. Finally, we discuss future plans for improving our interior-point method, in particular support for solving large-scale semidefinite programs (SDPs).
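For context, the Newton step of a primal-dual interior-point method is typically obtained from an augmented KKT system of the following saddle-point form (a textbook sketch, not necessarily the exact formulation compared in the talk):

    \begin{pmatrix} W + \Sigma & J^\top \\ J & 0 \end{pmatrix}
    \begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix}
    = - \begin{pmatrix} r_d \\ r_p \end{pmatrix},
    \qquad \Sigma = X^{-1} Z,

where W is the Hessian of the Lagrangian, J the constraint Jacobian, \Sigma a diagonal barrier term built from the primal-dual iterates, and (r_d, r_p) the dual and primal residuals. This matrix is symmetric indefinite, so sparse LDL^T factorization ordinarily requires numerical pivoting; condensed formulations eliminate blocks until the remaining system is quasi-definite or (after suitable regularization) positive definite, so that a fixed, pivoting-free factorization becomes possible — the property that maps well to GPU linear solvers.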
15h – Benjamin Charlier (INRAE Toulouse) – Automatic Differentiation and Iterative Solvers for Scalable Kernel-Based Computational Methods
Massively parallel hardware such as GPUs now provides much of the computational power in modern clusters. Developing user-friendly libraries that leverage these capabilities while remaining compatible with optimization-based methods is essential for analyzing real-world datasets, including applications in biological and medical imaging.
In this talk, we focus on kernel methods, which typically involve computations in a Reproducing Kernel Hilbert Space and align well with GPU architecture constraints. We show how they can benefit from memory- and computation-efficient numerical schemes, as implemented in the KeOps library.
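To give a concrete flavour of these schemes, here is a minimal sketch of a Gaussian kernel matrix-vector product written with pykeops, the KeOps Python bindings; the point-cloud sizes and kernel bandwidth are illustrative:

    import torch
    from pykeops.torch import LazyTensor

    N, M = 10**6, 10**6                    # illustrative point-cloud sizes
    x = torch.randn(N, 3)                  # query points (move to GPU with .cuda())
    y = torch.randn(M, 3)                  # source points
    b = torch.randn(M, 1)                  # source weights

    x_i = LazyTensor(x[:, None, :])        # (N, 1, 3) symbolic variable
    y_j = LazyTensor(y[None, :, :])        # (1, M, 3) symbolic variable

    D_ij = ((x_i - y_j) ** 2).sum(-1)      # (N, M) squared distances, never materialized
    K_ij = (-D_ij / (2 * 0.1 ** 2)).exp()  # Gaussian kernel, still a symbolic formula

    a = K_ij @ b                           # kernel matvec, computed tile by tile

The full N × M kernel matrix is never stored, so the memory footprint stays O(N + M), and since LazyTensor operations are differentiable, gradients flow through the symbolic formula via PyTorch's autograd.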
We first illustrate how symbolic tensor abstractions combined with automatic differentiation enable concise and efficient implementations of Hamiltonian formulations for non-rigid shape registration. We then discuss how iterative methods, such as conjugate gradient solvers, support scalable linear algebra computations for datasets with millions of points.
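As a sketch of the second ingredient, the following hand-rolled conjugate gradient solves the regularized kernel system (K + λI) a = b using only the matrix-free product from the snippet above (λ and the tolerance are illustrative; this is not the speaker's implementation, and pykeops also ships its own solvers):

    def conjugate_gradient(matvec, b, tol=1e-6, max_iter=200):
        # Solve A x = b for symmetric positive definite A, given only x -> A x.
        x = torch.zeros_like(b)
        r = b - matvec(x)          # initial residual
        p = r.clone()              # initial search direction
        rs = (r * r).sum()
        for _ in range(max_iter):
            Ap = matvec(p)
            alpha = rs / (p * Ap).sum()
            x = x + alpha * p
            r = r - alpha * Ap
            rs_new = (r * r).sum()
            if rs_new.sqrt() < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    lam = 0.1                      # ridge regularization, illustrative
    a = conjugate_gradient(lambda v: K_ij @ v + lam * v, b)

Each iteration costs one kernel matvec and a few vector operations, so the solve scales to millions of points without ever forming the kernel matrix.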
SPOT 102 – Monday 12 January 2026
14h – Joseph Morlier (ISAE-SUPAERO) – Embedding Sustainability in Design Optimization: Which Mathematical Recipes?
This work presents mathematical methods to embed sustainability criteria, such as CO₂ footprint and Life Cycle Assessment (LCA), directly into multidisciplinary design optimization (MDO). The main challenge lies in handling environmental data coming from discrete material databases. Two classes of problems are addressed: P1, with fixed or imposed topology, and P2, with free topology. For P1, continuous relaxation techniques allow eco-material selection to be integrated into the global MDO loop. Applications include a solar-powered HALE (High-Altitude Long-Endurance) aircraft minimizing CO₂ emissions through coupled aerodynamic, structural, and energy disciplines. Variational Autoencoders are introduced to map discrete material properties into a continuous latent space, enabling gradient-based multi-objective optimization. For P2, SIMP-based topology optimization is extended with environmental and manufacturing considerations. The approach is successfully applied to metallic and composite structures, relying on surrogate models to reduce computational cost.
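For readers unfamiliar with SIMP (Solid Isotropic Material with Penalization), the classical stiffness interpolation behind this kind of topology optimization reads (a textbook formulation, not necessarily the exact variant used in this work):

    E(\rho_e) = E_{\min} + \rho_e^{\,p} \, (E_0 - E_{\min}), \qquad \rho_e \in [0, 1], \; p > 1,

where \rho_e is the density of element e, E_0 the Young's modulus of solid material, E_{\min} a small lower bound that keeps the stiffness matrix nonsingular, and the exponent p (typically p = 3) penalizes intermediate densities so that the optimized layout converges toward a 0/1 design. Extending the scheme with environmental criteria then amounts to adding LCA-based terms to the objective or constraints.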
15h – Cheik Traoré (Toulouse School of Economics) – Stochastic proximal methods and variance reduction
Stochastic algorithms, particularly stochastic gradient descent (SGD), have become the methods of choice in data science and machine learning. While SGD is efficient for large-scale problems, the variance of its updates degrades its convergence properties. This issue has been addressed by variance reduction techniques such as SVRG and SAGA. Recently, the stochastic proximal point algorithm (SPPA) has emerged as an alternative and has been shown to be more robust than SGD with respect to step-size choices. In this talk, we will examine SPPA and demonstrate how variance reduction techniques improve the convergence rates of stochastic proximal point methods, as they already do for SGD.
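For reference, where SGD takes an explicit step x_{k+1} = x_k - \gamma_k \nabla f_{i_k}(x_k) along a sampled component of the objective f = \frac{1}{n} \sum_{i=1}^n f_i, SPPA takes an implicit step through the proximal operator of that component (standard definitions):

    x_{k+1} = \mathrm{prox}_{\gamma_k f_{i_k}}(x_k)
            = \operatorname{argmin}_x \Big\{ f_{i_k}(x) + \tfrac{1}{2\gamma_k} \|x - x_k\|^2 \Big\},

with i_k drawn uniformly from \{1, \dots, n\} and step size \gamma_k > 0. The implicit nature of this update is what makes SPPA less sensitive to the choice of \gamma_k than the explicit SGD step.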
SPOT 100-101 – Monday 13 October 2025
A special event to celebrate the 100th edition of SPOT, with four talks filling a whole afternoon of optimization!
Curiosities and counterexamples in smooth convex optimization
Conjectures in Real Algebra and Polynomial Optimization through High Precision Semidefinite Programming
Weak optimal transport with moment constraints: constraint qualification, dual attainment and entropic regularization
The Moment-SOS hierarchy for computing: I. Mixtures of Gaussians closest (in W_2 Wasserstein distance) to a given measure; II. The total variation distance between two given probability measures
We present two recent applications of the Moment-SOS hierarchy. I. We first consider an optimal transport formulation for computing mixtures of Gaussians that minimize the W_2 Wasserstein distance to a given measure. II. We next consider the problem of computing the total variation distance between two given measures. For each problem we provide an associated hierarchy of semidefinite relaxations that converges to the desired result. Importantly, in both cases the support of the input measures is not assumed to be compact. Finally, the approach for problem II can also be used to solve problem I with the TV distance rather than the W_2 Wasserstein distance.
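For reference, the two distances involved are (standard definitions):

    W_2^2(\mu, \nu) = \min_{\pi \in \Pi(\mu, \nu)} \int \|x - y\|^2 \, d\pi(x, y),
    \qquad
    \|\mu - \nu\|_{\mathrm{TV}} = \sup_{A \text{ measurable}} |\mu(A) - \nu(A)|,

where \Pi(\mu, \nu) is the set of couplings with marginals \mu and \nu. Both are infinite-dimensional linear programs over measures, which is precisely the structure the Moment-SOS hierarchy approximates by a converging sequence of semidefinite relaxations on moments.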

