Description of the project

Positioning and main objectives of the proposal.

Nowadays, data science problems involving machine learning (ML) commonly require the optimization of complex data-driven objective functions inspired by statistical principles, so as to yield supervised learning algorithms with robustness properties with respect to the observations. While standard statistical problems involve the optimization of smooth convex functions, there is currently a pressing need to develop more sophisticated optimization techniques for solving modern ML problems. A typical situation is the need to deal with the lack of smoothness and/or the non-convexity of the objective functions arising in modern ML problems such as the training of deep neural networks. Nevertheless, at the present time, most ML optimization tasks in AI rely on ad hoc developments or heuristic principles, and there is little general mathematical understanding of their good empirical results.

The goal of this project is thus to better understand the theoretical and numerical aspects of ML algorithms for AI based on optimization and sampling with iterative methods. We aim to develop novel numerical methods in deterministic and online optimization for ML, to study their mathematical properties from the statistical, optimization and computational-cost points of view, and to apply these algorithms to challenging problems in various domains such as image analysis, bioinformatics or data processing on graphs.

In such contexts, a central issue is that neither the data-driven objective function to optimize nor its gradient is tractable, but each can generally be expressed as an expectation whose underlying distribution can be approximated (online learning and MCMC sampling provide examples of such approximations). This leads to the additional difficulty of handling noisy settings. A large part of the project is therefore focused on stochastic methods for the biased estimation of an intractable expectation intertwined with optimization, possibly with the help of mini-batch sampling techniques (Bottou, Curtis and Nocedal, 2018; Gadat and Panloup, 2017; Atchadé, Fort and Moulines, 2017).
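To fix ideas, the sketch below illustrates the prototypical setting described above: mini-batch stochastic gradient descent with a decreasing (Robbins-Monro) step size on a toy objective F(θ) = E[(θ − ξ)²/2], whose exact gradient is an expectation replaced by a sampled estimate. It is a purely illustrative example, not one of the project's methods; the toy distribution and the names `sample_xi` and `grad_f` are our own assumptions.

```python
# Minimal sketch of mini-batch SGD on F(theta) = E[(theta - xi)^2 / 2],
# whose gradient grad F(theta) = theta - E[xi] is an intractable
# expectation approximated here by mini-batch sampling.
# Toy problem for illustration only; not the project's algorithms.
import numpy as np

rng = np.random.default_rng(0)

def sample_xi(batch_size):
    # Hypothetical data-generating distribution: xi ~ N(2, 1).
    return rng.normal(loc=2.0, scale=1.0, size=batch_size)

def grad_f(theta, xi_batch):
    # Unbiased mini-batch estimate of grad F(theta) = theta - E[xi].
    return np.mean(theta - xi_batch)

theta = 0.0
for k in range(1, 1001):
    step = 1.0 / k          # decreasing step size (Robbins-Monro scheme)
    batch = sample_xi(32)   # mini-batch replaces the exact expectation
    theta -= step * grad_f(theta, batch)

print(theta)  # approaches the minimizer E[xi] = 2.0
```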
Specific issues in deep learning, generative models and adversarial learning.

Deep neural networks have clearly shown their impressive practical ability to solve complex learning problems. They have attained striking popularity in recent years, but the understanding of their astonishing performance remains largely open. It is thus legitimate to study the mathematical aspects of deep learning algorithms, and this represents an important motivation of the MaSDOL project. In particular, we plan to put an emphasis on the analysis of generative adversarial networks (GANs), introduced in Goodfellow et al. (2014). GANs are a class of unsupervised ML techniques developed to estimate arbitrary distributions of high-dimensional data and to sample new elements that mimic the observations. GANs are currently receiving considerable interest in the ML community, and they have a wide spectrum of applications, such as generating artificial random pictures of 2D scenes that look authentic to human eyes. Yet various basic questions around the use of GANs are far from being solved.
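For reference, the adversarial formulation of Goodfellow et al. (2014) is the two-player minimax game below, where G is the generator, D the discriminator, p_data the data distribution and p_z the noise prior:

```latex
\min_{G} \max_{D} \; V(D, G)
  = \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_{z}}\!\left[\log\left(1 - D(G(z))\right)\right]
```

The objective is non-convex and non-smooth in the networks' parameters, which makes it a natural instance of the optimization difficulties described in the previous section.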
