Optimization Chair of the Grenoble AI Institute (MIAI)

Scope and goals

In this chair, we intend to bring together state-of-the-art optimization and game-theoretic methodologies to advance the mathematical foundations of AI. More specifically, our objective is to leverage the particular structure of the optimization problems that arise in machine learning in order to (i) provide tight theoretical guarantees for robust/adversarial learning and (ii) design new distributed/online optimization algorithms for machine learning problems.

Members

  • Jérôme Malick, CNRS, LJK
  • Franck Iutzeler, UGA, LJK
  • Roland Hildebrand, CNRS, LJK
  • Panayotis Mertikopoulos, CNRS, LIG
  • Yurii Nesterov, UCL, Louvain (Belgium)
  • + talented students: Yu-Guan Hsieh, Selim Chraibi, Yassine Laguel, Matthias Chastan, Waiss Azizian

Overview

Video of the March 2022 MIAI meeting: from minute 58, I give an overview of the Chair's activities, with an emphasis at the end on our work on federated learning. See also the poster below, prepared for the MIAI evaluation by the international committee.

Highlights

Let us describe three achievements of the Chair in 2021.

  1. We proposed and analyzed a decentralized asynchronous optimization method for open networks, in which agents can join and leave the network at any time. Moreover, the analysis and the algorithm generalize to a flexible multi-agent online learning setting; a toy illustration of the open-network setting is sketched after this list.
  2. We provided a federated learning framework that handles heterogeneous client devices. More precisely, we introduced a stochastic optimization algorithm compatible with the industrial constraints of federated learning: secure aggregation, differentially private subroutines, and federated averaging for on-device computation (see the second sketch below).
  3. We provided a local convergence rate for optimistic mirror descent methods in stochastic variational inequalities, a class of optimization problems with important applications to learning theory and machine learning. More precisely, this refined analysis subsumes non-monotone and non-Euclidean settings, and covers the variants used for training generative adversarial networks (see the last sketch below).
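To give a feel for the open-network setting of point 1, here is a toy sketch of pairwise gossip averaging in which agents may join or leave between updates. This is only an illustration of the setting, not the Chair's algorithm; the function open_gossip and the parameters p_join/p_leave are hypothetical names chosen for the example.

    import random

    def open_gossip(values, rounds=200, p_join=0.05, p_leave=0.05, seed=0):
        """Toy pairwise gossip averaging on an open network: agents average
        with a random peer while other agents may join or leave at any time."""
        rng = random.Random(seed)
        agents = dict(enumerate(values))              # agent id -> local value
        next_id = len(values)
        for _ in range(rounds):
            if len(agents) >= 2:
                i, j = rng.sample(sorted(agents), 2)  # random communicating pair
                agents[i] = agents[j] = (agents[i] + agents[j]) / 2
            if len(agents) > 2 and rng.random() < p_leave:
                agents.pop(rng.choice(sorted(agents)))       # an agent leaves
            if rng.random() < p_join:
                agents[next_id] = rng.uniform(0.0, 1.0)      # a newcomer joins
                next_id += 1
        return agents

    print(open_gossip([0.0, 0.5, 1.0]))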
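The federated averaging pattern mentioned in point 2 has clients perform local updates that a server then averages. Below is a minimal sketch of this pattern on a least-squares toy problem, assuming synchronous rounds and full client participation; it does not reproduce the Chair's algorithm or its secure-aggregation and privacy machinery, and the function names (local_sgd, fed_avg) are made up for the example.

    import numpy as np

    def local_sgd(w, X, y, lr=0.1, epochs=5):
        """Client-side update: a few gradient steps on 1/(2n) * ||Xw - y||^2."""
        for _ in range(epochs):
            w = w - lr * X.T @ (X @ w - y) / len(y)
        return w

    def fed_avg(clients, w0, rounds=50):
        """Server loop: broadcast the model, average the returned local models,
        weighting each client by its number of samples."""
        w = w0
        n = np.array([len(y) for _, y in clients], dtype=float)
        for _ in range(rounds):
            local = [local_sgd(w.copy(), X, y) for X, y in clients]
            w = sum(ni * wi for ni, wi in zip(n, local)) / n.sum()
        return w

    rng = np.random.default_rng(0)
    w_true = np.array([1.0, -2.0])
    clients = []
    for _ in range(4):                     # four clients with distinct datasets
        X = rng.normal(size=(30, 2))
        clients.append((X, X @ w_true + 0.01 * rng.normal(size=30)))
    print(fed_avg(clients, np.zeros(2)))   # recovers approximately w_true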
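Point 3 concerns optimistic mirror descent; with the Euclidean mirror map it reduces to the optimistic gradient method (also known as past extra-gradient), which extrapolates with the previously computed operator value so that only one fresh operator evaluation is needed per step. Here is a minimal deterministic sketch on a bilinear saddle point, where plain gradient descent-ascent famously cycles; it illustrates the method, not the stochastic analysis of the paper.

    import numpy as np

    def optimistic_grad(F, z0, lr=0.1, steps=1000):
        """Optimistic gradient method (Euclidean optimistic mirror descent):
        extrapolate with the stale operator value, then update with a fresh one."""
        z = np.asarray(z0, dtype=float)
        g_prev = F(z)
        for _ in range(steps):
            z_lead = z - lr * g_prev     # look-ahead using the previous gradient
            g_prev = F(z_lead)           # single new operator evaluation
            z = z - lr * g_prev
        return z

    # min_x max_y x*y  ->  operator F(x, y) = (y, -x), unique solution (0, 0).
    F = lambda z: np.array([z[1], -z[0]])
    print(optimistic_grad(F, [1.0, 1.0]))  # converges toward the origin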