Optimization Chair of the Grenoble AI Institute (MIAI)

Scope and goals

In this chair, we intend to bring together state-of-the-art optimization and game-theoretic methodologies to advance the mathematical foundations of AI. More specifically, our objective is to leverage the specific structure of optimization problems that arise in machine learning in order to (i) provide tight theoretical guarantees for robust/adversarial learning and (ii) design new distributed/online optimization algorithms for machine learning problems.

Members

  • Jérôme Malick, CNRS, LJK
  • Franck Iutzeler, UGA, LJK
  • Roland Hildebrand, CNRS, LJK
  • Panayotis Mertikopoulos, CNRS, LIG
  • Yurii Nesterov, UCL, Louvain (Belgium)
  • + talented students: Yu-Guan Hsieh, Selim Chraibi, Yassine Laguel, Matthias Chastan, Waiss Azizian

Highlights

Let us describe three achievements of the Chair in 2021.

  1. We proposed and analyzed a decentralized asynchronous optimization method for open networks, where agents can join and leave the network at any time. Both the algorithm and its analysis extend to a flexible multi-agent online learning setting.
  2. We developed a federated learning framework that handles heterogeneous client devices. More precisely, we introduced a stochastic optimization algorithm compatible with the industrial constraints of federated learning: secure aggregation, differentially private computing subroutines, and federated averaging for on-device computation.
  3. We established a local convergence rate for optimistic mirror descent methods in stochastic variational inequalities, a class of optimization problems with important applications to learning theory and machine learning. This refined analysis subsumes non-monotone and non-Euclidean settings, and covers the variants used for training generative adversarial networks (see the sketch below).
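
To give a flavor of the methods behind the third achievement, here is a minimal Python sketch of a double-stepsize stochastic extragradient update, the principle behind the NeurIPS 2020 paper listed below, run on a toy bilinear saddle point. The toy problem, the noise model, and the stepsize schedules are illustrative assumptions, not the exact setting analyzed in the publications.

```python
import numpy as np

# Toy monotone variational inequality: the bilinear saddle point
# min_x max_y x^T A y, whose operator is V(x, y) = (A y, -A^T x)
# and whose unique solution is the origin.
rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))

def noisy_operator(z, noise=0.1):
    """Stochastic first-order oracle for the bilinear game (assumed noise model)."""
    x, y = z[:d], z[d:]
    v = np.concatenate([A @ y, -A.T @ x])
    return v + noise * rng.standard_normal(2 * d)

z = rng.standard_normal(2 * d)
for t in range(1, 5001):
    gamma_t = 1.0 / np.sqrt(t)  # aggressive exploration step (illustrative schedule)
    eta_t = 1.0 / t             # conservative update step, eta_t << gamma_t
    z_lead = z - gamma_t * noisy_operator(z)  # extrapolation: "explore aggressively"
    z = z - eta_t * noisy_operator(z_lead)    # update: "update conservatively"

print(f"distance of the last iterate to the solution: {np.linalg.norm(z):.3f}")
```

The key design choice is that the exploration step gamma_t decays more slowly than the update step eta_t: the leading state probes the operator aggressively while the actual iterate moves conservatively, which is what stabilizes the last iterate under noise.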

Representative publications

(see more on MIAI's HAL page)
  • The Last-Iterate Convergence Rate of Optimistic Mirror Descent in Stochastic Variational Inequalities
    Waiss Azizian, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos
    COLT 2021 [paper][preprint]

  • Explore aggressively, update conservatively: Stochastic extragradient with variable stepsize scaling
    Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos
    NeurIPS 2020 [paper][preprint]